## python 3.x – I cannot load a CSV in Jupyter

Hi, I hope you can help me. I have worked with R for a long time and I have never had a problem like this.

I am new to Python, and this issue has come up. First, I ran a simple command

import pandas as pd

later

to load my file called experiment1. I tried everything with the path, including putting two dots in front of "/Documents" (which is what I did in R),
but I always get this error:

---------------------------------------------------------------------------
FileNotFoundError                         Traceback (most recent call last)
in <module>

~/anaconda3/lib/python3.7/site-packages/pandas/io/parsers.py in parser_f(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, skipfooter, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, doublequote, escapechar, comment, encoding, dialect, tupleize_cols, error_bad_lines, warn_bad_lines, delim_whitespace, low_memory, memory_map, float_precision)
700                     skip_blank_lines=skip_blank_lines)
701
703
704     parser_f.__name__ = name

427
428     # Create the parser.
--> 429     parser = TextFileReader(filepath_or_buffer, **kwds)
430
431     if chunksize or iterator:

~/anaconda3/lib/python3.7/site-packages/pandas/io/parsers.py in __init__(self, f, engine, **kwds)
893             self.options['has_index_names'] = kwds['has_index_names']
894
--> 895         self._make_engine(self.engine)
896
897     def close(self):

~/anaconda3/lib/python3.7/site-packages/pandas/io/parsers.py in _make_engine(self, engine)
1120     def _make_engine(self, engine='c'):
1121         if engine == 'c':
--> 1122             self._engine = CParserWrapper(self.f, **self.options)
1123         else:
1124             if engine == 'python':

~/anaconda3/lib/python3.7/site-packages/pandas/io/parsers.py in __init__(self, src, **kwds)
1851         kwds['usecols'] = self.usecols
1852
1855
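A FileNotFoundError from `read_csv` almost always means the path is resolved against the wrong working directory. A quick sanity check could look like this; the file name `experiment1.csv` and the `~/Documents` location are assumptions, so adjust them:

```python
import os
import pandas as pd

# Relative paths are resolved against this directory,
# not against ~ or the folder of the notebook file.
print(os.getcwd())

# Hypothetical absolute path; adjust to wherever the file really lives.
path = os.path.expanduser("~/Documents/experiment1.csv")
if os.path.exists(path):
    df = pd.read_csv(path)
else:
    print("Not found:", path)
```

Using an absolute path built with `os.path.expanduser` avoids guessing how many `..` segments are needed.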

## python 3.x – How do I use Selenium web drivers to test 2 players participating in the same game?

I'm trying to create a web-based board game (in Python, Flask and Angular).
Players navigate to the site and enter their name. They then enter a lobby where they can create a new game or join an existing one.
I want to test the frontend with Selenium WebDriver to check whether a second player can join a game after the first player creates it.
I've tried this with a combination of threads that create new WebDriver objects, and with threads that share the same WebDriver object, but no matter what I try, the players do not seem to connect to the same game.

What would be the best way to achieve this?

## python – Pandas – Rolling average for a group over several columns; big data frame

I have the following data frame:

+-----+-----+-------------+-------------+-------------------------+
| ID1 | ID2 | Box1_weight | Box2_weight | Average Prev Weight ID1 |
+-----+-----+-------------+-------------+-------------------------+
|  19 | 677 |      3      |      2      |            -            |
+-----+-----+-------------+-------------+-------------------------+
| 677 |  19 |      1      |      0      |            2            |
+-----+-----+-------------+-------------+-------------------------+
|  19 | 677 |      3      |      1      |      (0 + 3 )/2=1.5     |
+-----+-----+-------------+-------------+-------------------------+
|  19 | 677 |      7      |      0      |       (3+0+3)/3=2       |
+-----+-----+-------------+-------------+-------------------------+
| 677 |  19 |      1      |      3      |      (0+1+1+2)/4=1      |
+-----+-----+-------------+-------------+-------------------------+

I would like to calculate the moving average of the weight of the last 3 boxes based on the ID. I want to do this for all IDs in ID1.

I've inserted the column I'd like to calculate along with the calculations in the table above titled "Average Prev Weight ID1".

I can get a moving average for each column by using:

df_copy.groupby('ID1')['Box1_weight'].apply(lambda x: x.shift().rolling(period_length, min_periods=1).mean())

However, this does not take into account that the article may also have been packaged in the "Box2_weight" column.

How can I find a moving average per ID across the two columns?

Any guidance is appreciated.

## python – A program for fun to see how red, blue, and purple appear at the screen resolution

I do a bit of analogue and digital art. This time I made a digital color test. To improve my programming so that it is worth sharing, I am asking for a code review.

Is it easy to follow the train of thought?
Suggestions?

Greetings Johan

# A program for fun to see how red, blue and purple appear at resolution

from math import *
from graphics import *

def hole(win, centerx, centery, radius):  # draws an inverted sphere
    for circle in range(radius, 0, -1):
        c = Circle(Point(centerx, centery), circle)
        c.setWidth(0)
        coloratangle = int(sin(circle/radius*pi/2)*255)  # max resolution from black to white is 255 steps
        c.setFill(color_rgb(coloratangle * (circle % 2), 0, coloratangle * ((circle + 1) % 2)))  # Modulus instead of Boolean? Modulus in c.setFill or in another line?
        c.draw(win)

windowheight = 1350
windowwidth = 730
win = GraphWin("My Circle", windowheight, windowwidth)
win.setBackground(color_rgb(255, 100, 100))

centerx = int(windowheight / 2)
centery = int(windowwidth / 2)  # integer
radius = int(sqrt(centerx**2 + centery**2))  # Pythagoras

hole(win, centerx, centery, radius)

win.getMouse()  # Pause to view result
win.close()     # Close window when done

## python – top-down or bottom-up recursion errors

I worked on https://leetcode.com/problems/coin-change/ and wrote a top-down and a bottom-up recursive solution. I spent a lot of time trying to figure out why the top-down version does not work, but I could not find the reason. Any ideas?

# recursion: top-down
class Solution:
    def coinChange(self, coins: List[int], amount: int) -> int:
        def compute_num_coins(amount, num_coins_so_far):
            if amount == 0:
                return num_coins_so_far
            if amount not in dp:
                min_coins = float('inf')
                for coin in coins:
                    if amount - coin >= 0:
                        num_coins = compute_num_coins(amount - coin, num_coins_so_far + 1)
                        min_coins = min(min_coins, num_coins)
                dp[amount] = min_coins
            return dp[amount]

        dp = {}
        min_coins = compute_num_coins(amount, 0)
        return min_coins if min_coins != float('inf') else -1

The bottom-up version I wrote works fine. I cannot see what difference makes the earlier version fail.

# recursion: bottom-up
class Solution:
    def coinChange(self, coins: List[int], amount: int) -> int:
        def compute_num_coins(amount):
            if amount == 0:
                return 0
            if amount not in dp:
                min_coins = float('inf')
                for coin in coins:
                    if amount - coin >= 0:
                        num_coins = compute_num_coins(amount - coin) + 1
                        min_coins = min(min_coins, num_coins)
                dp[amount] = min_coins
            return dp[amount]

        dp = {}
        min_coins = compute_num_coins(amount)
        return min_coins if min_coins != float('inf') else -1
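For what it's worth, the failure mode of caching a value that depends on an extra path argument (like `num_coins_so_far` in the first version) can be reproduced in miniature. This toy function is not the poster's code, just an illustration of the pattern:

```python
cache = {}

def depth_to_zero(n, steps_so_far):
    if n == 0:
        return steps_so_far            # depends on the path taken, not just n!
    if n not in cache:
        cache[n] = depth_to_zero(n - 1, steps_so_far + 1)
    return cache[n]

first = depth_to_zero(2, 0)    # fills the cache with path-dependent totals
second = depth_to_zero(2, 5)   # stale reuse: still returns 2, though a fresh run would give 7
```

Once `cache[n]` stores a total that includes the starting `steps_so_far`, every later call that reaches `n` with a different accumulator gets a stale answer.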

## machine learning – neural network from scratch in Python

After seeing Week 5 of the machine learning course on Coursera by Andrew Ng, I decided to use Python to write a simple neural network from scratch. Here is my code:

import numpy as np
import csv

global e
global epsilon
global a
global lam
global itr
e = 2.718281828
epsilon = 0.12
a = 1
lam = 1
itr = 1000

# The sigmoid function (and its derivative)
def sigmoid(x, derivative=False):
    if derivative:
        return sigmoid(x) * (1 - sigmoid(x))
    return 1 / (1 + e**-x)

# The cost function
def J(X, theta1, theta2, y, lam, m):
    j = 0
    for i in range(m):
        # The current case
        currX = X[i].reshape(X[i].shape[0], 1)
        z2 = theta1 @ currX
        a2 = sigmoid(z2)
        a2 = np.append([1], a2).reshape(a2.shape[0] + 1, 1)
        z3 = theta2 @ a2
        a3 = sigmoid(z3)
        j += sum(-y[i] * np.log(a3) - (1 - y[i]) * np.log(1 - a3)) / m + (lam / (2 * m)) * (sum(sum(theta1[:, 1:] ** 2)) + sum(sum(theta2[:, 1:] ** 2)))
    return j

def gradient(X, theta1, theta2, y, lam, m):
    Delta1 = np.zeros(theta1.shape)
    Delta2 = np.zeros(theta2.shape)
    for i in range(m):
        # The current case
        currX = X[i].reshape(X[i].shape[0], 1)
        z2 = theta1 @ currX
        a2 = sigmoid(z2)
        a2 = np.append([1], a2).reshape(a2.shape[0] + 1, 1)
        z3 = theta2 @ a2
        a3 = sigmoid(z3)
        delta3 = a3 - y[i]
        delta2 = theta2[:, 1:].T @ delta3 * sigmoid(z2, derivative=True)
        Delta1 += delta2 @ currX.reshape(1, -1)
        Delta2 += delta3 * a2.reshape(1, -1)
    # Average the accumulated gradients and add regularization (skipping the bias column)
    theta1Grad = Delta1 / m
    theta2Grad = Delta2 / m
    theta1Grad[:, 1:] += (lam / m) * theta1[:, 1:]
    theta2Grad[:, 1:] += (lam / m) * theta2[:, 1:]
    return theta1Grad, theta2Grad

def gradientDescent(X, theta1, theta2, y, lam, m):
    for i in range(itr):
        theta1Grad, theta2Grad = gradient(X, theta1, theta2, y, lam, m)
        theta1 = theta1 - a * theta1Grad
        theta2 = theta2 - a * theta2Grad
    return (theta1, theta2)

with open('data.csv', 'r') as f:
    data = csv.reader(f)
    d = []
    c = 0
    for row in data:
        # Don't add the first line (it's our features' labels)
        if c == 0:
            c += 1
            continue
        curr_row = []
        k = 0
        for j in row:
            if j != '':
                if k == 1:
                    # Add a 1 between the y and x values (for the bias)
                    curr_row.append(1)
                curr_row.append(float(j))
                k += 1
        d.append(curr_row)
d = np.array(d)
x = d[:, 1:]
y = d[:, 0]
# Split the data into training cases (80%) and test cases (20%)
x_train = x[0:(d.shape[0]//5) * 4, :]
y_train = y[0:(d.shape[0]//5) * 4]
x_test = x[(d.shape[0]//5) * 4 : d.shape[0], :]
y_test = y[(d.shape[0]//5) * 4 : d.shape[0]]
# Initialize theta(s)
theta1 = np.random.rand(5, x[0].shape[0]) * 2 * epsilon - epsilon
theta2 = np.random.rand(1, 6) * 2 * epsilon - epsilon
print(J(x_train, theta1, theta2, y_train, lam, x_train.shape[0]))
theta1, theta2 = gradientDescent(x_train, theta1, theta2, y_train, lam, x_train.shape[0])

Please note that my data has only 2 possible outputs, so no one-vs-all classification is required.
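As a side note on the code above, the hand-typed constant `e` can be avoided with numpy's vectorized `exp`. A small sketch that keeps the same `derivative=False` interface:

```python
import numpy as np

def sigmoid(x, derivative=False):
    s = 1.0 / (1.0 + np.exp(-x))   # np.exp uses the exact value of e and works on arrays
    if derivative:
        return s * (1.0 - s)       # evaluates the sigmoid once instead of twice
    return s
```

This also avoids the double recursive evaluation in the `derivative=True` branch of the original.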

## python – apt-listchanges crashes. How can I get the e-mail?

Normally, I get an email from localhost for apt-listchanges via exim4. For the past 2 days it has not worked.

Get:1 http://archive.ubuntu.com/ubuntu bionic-updates/main i386 thermald i386 1.7.0-5ubuntu5 (203 kB)
Fetched 203 kB in 0s (519 kB/s)
apt-listchanges: Changelogs werden gelesen...
Traceback (most recent call last):
File "/usr/bin/apt-listchanges", line 281, in <module>
main(config)
File "/usr/bin/apt-listchanges", line 145, in main
_send_email(changes, lambda: _("apt-listchanges: changelogs for %s") % hostname)
File "/usr/bin/apt-listchanges", line 267, in _send_email
apt_listchanges.mail_changes(config, changes, subject_getter())
File "/usr/share/apt-listchanges/apt_listchanges.py", line 65, in mail_changes
'subject': subject})
File "/usr/share/apt-listchanges/ALCLog.py", line 36, in info
print(_("apt-listchanges: %(msg)s") % {'msg': msg}, file=sys.stdout);
UnicodeEncodeError: 'ascii' codec can't encode character '\xfc' in position 73: ordinal not in range(128)
(Reading database ... 381751 files and directories currently installed.)
Preparing to unpack .../thermald_1.7.0-5ubuntu5_i386.deb ...
Unpacking thermald (1.7.0-5ubuntu5) over (1.7.0-5ubuntu2) ...
Setting up thermald (1.7.0-5ubuntu5) ...
Processing triggers for dbus (1.12.2-1ubuntu1.1) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...

Python versions

python --version
Python 2.7.15+

and

python3 --version
Python 3.6.8

I am not sure what information is important.

hostnamectl
Static hostname: beelzemon
Icon name: computer-laptop
Chassis: laptop
Machine ID: acaecd284840b500117c175c571c9e28
Boot ID: c583ba22cd6c4abdbf6627d82e26283d
Operating System: Ubuntu 18.04.3 LTS
Kernel: Linux 5.0.21-050021-generic
Architecture: x86

I need this Ukuu kernel for hibernation.
How can I get the email from apt-listchanges again?

## python – Find connected components of a graph

I have a graph represented as a dict of sets, e.g.

graph = {
    0: {1, 2, 3},  # neighbors of node "0"
    1: {0, 2},
    2: {0, 1},
    3: {0, 4, 5},
    4: {3, 5},
    5: {3, 4, 7},
    6: {8},
    7: {5},
    8: {9, 6},
    9: {8}
}

I use the following code to find the connected components of the graph:

def get_connected_component(graph, start, visited):
    result = []
    nodes = {start}
    while nodes:
        node = nodes.pop()
        visited.add(node)
        result.append(node)
        nodes.update(n for n in graph[node] if n not in visited)
    return result

def get_connected_components(graph):
    visited = set()
    result = []
    for node in graph:
        if node not in visited:
            connected_component = get_connected_component(graph, node, visited)
            result.append(connected_component)
    return result

print(get_connected_components(graph))  # [[0, 1, 2, 3, 4, 5, 7], [6, 8, 9]] (order within a component may vary)

Is there a better way to do this (for example, a better representation of the graph or an algorithm)?
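For comparison, one common alternative is an explicit breadth-first search with `collections.deque`. This is only a sketch, with components sorted to make the result deterministic:

```python
from collections import deque

graph = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0, 4, 5},
         4: {3, 5}, 5: {3, 4, 7}, 6: {8}, 7: {5}, 8: {9, 6}, 9: {8}}

def connected_components(graph):
    visited = set()
    components = []
    for start in graph:
        if start in visited:
            continue
        # BFS from each not-yet-visited node collects one component.
        component = []
        queue = deque([start])
        visited.add(start)
        while queue:
            node = queue.popleft()
            component.append(node)
            for neighbor in graph[node]:
                if neighbor not in visited:
                    visited.add(neighbor)
                    queue.append(neighbor)
        components.append(sorted(component))
    return components
```

Marking nodes as visited when they are enqueued (rather than when they are popped) guarantees each node is processed exactly once.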

## object-oriented – interfaces in Python, multiple inheritance compared to a home-made solution

I am writing a Python framework. To make sure a class has some properties, I make basic "interface" classes like:

class BananaContainer:
    def __init__(self):
        self._bananas = []

    @property
    def bananas(self):
        return self._bananas

Then, if an object is to be a container of bananas, I simply derive it from BananaContainer.

One potential problem is that some objects are containers of several kinds at once. On my team, we wonder whether multiple inheritance is the right way to go or whether there are alternative solutions.

• Is it right to have an object that inherits from both BananaContainer and AppleContainer, for example?
• Can this come back to bite us later with issues like method resolution order or name collisions? (We can be careful not to have similar attributes, .name for example; we want to limit these interfaces to the minimum required, not more.)

A colleague suggests a "capabilities" system based on composition rather than multiple inheritance. I am doubtful about the real advantages of his method; see below:

class BaseCapabilities:
    EXISTING_CAPABILITIES = {"banana": BananaContainer, "apple": AppleContainer, ...}

    def get_capability(self, name):
        if name in self.capabilities():
            return EXISTING_CAPABILITIES[name](self)

    def capabilities(self):
        # should return a list of capabilities
        raise NotImplementedError

Each object would then only have to derive from one class, but it has to implement the capabilities mechanism:

class MyContainer(BaseCapabilities):
    def capabilities(self):
        return ("banana", "apple")

It is then possible to check whether an object has the desired capability and to obtain an instance of the container class.
This is meant to avoid multiple inheritance, which he considers harmful.

My suggestion is to stick with multiple inheritance and simply use isinstance(obj, BananaContainer), for example, to know whether obj is a banana container.
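To make that suggestion concrete, here is a minimal sketch. FruitCrate is a hypothetical name, and in real code cooperative `super().__init__()` calls would be the more idiomatic way to initialize both bases:

```python
class BananaContainer:
    def __init__(self):
        self._bananas = []

class AppleContainer:
    def __init__(self):
        self._apples = []

class FruitCrate(BananaContainer, AppleContainer):
    def __init__(self):
        # Explicitly initialize both bases; their attribute names don't collide.
        BananaContainer.__init__(self)
        AppleContainer.__init__(self)

crate = FruitCrate()
```

Client code can then test capabilities with plain `isinstance(crate, BananaContainer)` without any extra machinery.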

I would be very grateful if you could help me make a decision.

## python – Network Programming – asynchronous single-threaded chat server with nicknames only

I'm starting to learn network programming. I've implemented a very simple TCP-based chat server where users only have to provide a username.
The constraint is that the server must not use additional threads and must not block on sockets. The code:

import sys
import time
import socket
import select

class User:
    def __init__(self, name, sock):
        self.name = name
        self.sock = sock

class Server:
    def __init__(self, host, port):
        self.users = []

        self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.sock.bind((host, port))

    def accept_usr(self):
        usock, _ = self.sock.accept()
        uname = usock.recv(1024)
        self.users.append(User(uname, usock))
        _, writes, _ = select.select([], self._usocks(), [])
        for w in writes:
            w.send(bytes(f"{uname} has joined us!", encoding="ascii"))

    def handlerr(self, errs):
        for s in errs:
            s.close()

    def update(self):
        reads, writes, errs = select.select([*self._usocks(), self.sock], self._usocks(), self._usocks())
        for r in reads:
            if r is self.sock:
                self.accept_usr()
            else:
                uname = [u for u in self.users if u.sock is r][0].name
                msg = r.recv(1024)
                for w in writes:
                    w.send(bytes(f"{uname}: {msg}", encoding="ascii"))
        self.handlerr(errs)

    def run(self):
        self.sock.listen(5)
        while 1:
            self.update()
            time.sleep(0.1)

    def _usocks(self):
        return [u.sock for u in self.users]

if __name__ == "__main__":
    s = Server("localhost", int(sys.argv[1]))
    s.run()

I would be grateful for any hints and comments.

EDIT:
One obvious improvement I can think of is storing a socket -> user mapping in a dict, so that I can quickly determine the author of a received message.
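The mapping mentioned in the edit could look something like this (a sketch; `Registry`, `register`, and `author_of` are hypothetical names):

```python
class User:
    def __init__(self, name, sock):
        self.name = name
        self.sock = sock

class Registry:
    """Maps a socket object directly to its User, avoiding a linear scan."""
    def __init__(self):
        self.users_by_sock = {}

    def register(self, user):
        self.users_by_sock[user.sock] = user

    def author_of(self, sock):
        return self.users_by_sock[sock].name

registry = Registry()
alice = User("alice", object())   # a dummy stand-in for a real socket
registry.register(alice)
```

Sockets are hashable, so they work as dict keys; the lookup in `update` then becomes a single `registry.author_of(r)` instead of a list comprehension over all users.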