## How can a lower-resolution image have a larger file size than a higher-resolution image?

This 3200 x 2129 pixel image has a 5.86 MB file size, while this other image has a 6000 x 4000 pixel resolution but a file size of only 4.55 MB. What is happening in both cases?

Any help to clarify this is appreciated.
Thanks a lot.
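For intuition, compressed file size depends on how much detail the data contains, not just on how many pixels there are. A minimal stand-in sketch, using lossless zlib compression on raw byte buffers in place of real image codecs:

```python
import os
import zlib

# 2 MB of uniform data: a large but featureless "image"
flat = bytes(2_000_000)
# 1 MB of random data: a smaller but highly detailed "image"
detailed = os.urandom(1_000_000)

print(len(zlib.compress(flat)))      # tiny: uniform data compresses extremely well
print(len(zlib.compress(detailed)))  # about 1 MB: random detail barely compresses
```

By the same logic, a smooth 6000 x 4000 photo saved with stronger JPEG compression can easily come out smaller on disk than a more detailed or higher-quality 3200 x 2129 one.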


## numpy – Python code I wrote for wireless communication work is incredibly slow for larger files

I need some pointers on how to speed up my code, as it is currently incredibly slow for larger inputs. The file Loc_Circle_50U.txt contains the true locations of 50 vehicles, while the files in ns3_files contain somewhat incorrect locations. I compute the differences, store them as errors, and use the errors together with the vehicles' speeds to decide whether they are likely to collide. Time is divided into 1 millisecond slots.

``````
import math

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from joblib import Parallel, delayed

ns3_files = ('EQ_100p_1Hz_50U.txt', 'EQ_100p_5Hz_50U.txt', 'EQ_100p_10Hz_50U.txt',
             'EQ_100p_20Hz_50U.txt', 'EQ_100p_30Hz_50U.txt', 'EQ_100p_40Hz_50U.txt',
             'EQ_100p_50Hz_50U.txt', 'EQ_100p_100Hz_50U.txt')

sumo_file = 'Loc_Circle_50U.txt'
sumo_df = pd.read_csv(sumo_file, delim_whitespace=True)

# sanity-check the column headers of the input files
sumo_cols = ('Time', 'VID', 'Pos(X)', 'Pos(Y)', 'Vel(X)', 'Vel(Y)')
print("analyzing file ", sumo_file)
if all(col in sumo_df.columns for col in sumo_cols):
    print("sumo file ", sumo_file, " is OK")

ns3_cols = ('PId', 'TxId', 'RxId', 'GenTime', 'RecTime', 'Delay',
            'TxTruePos(X)', 'TxTruePos(Y)', 'Error(m)', 'RecvPos(X)', 'RecvPos(Y)')
for qqq in ns3_files:
    print("analyzing file ", qqq)
    rr = pd.read_csv(qqq, delim_whitespace=True)
    if all(col in rr.columns for col in ns3_cols):
        print("ns3 file ", qqq, " is OK")

prediction = 0  # 0 means no prediction, 1 means prediction enabled

def calc_error(c):  # pass the ns3 dataframe cleaned of all NaN values
    c = c.sort_values("RecTime").reset_index(drop=True)  # sort_values returns a copy, so reassign
    nrows = c.shape[0]
    error = []      # will store slot-wise error
    collision = 0   # stays 0 while the ttc error is < 6.43; becomes 1 if it is ever exceeded
    ttc_error = []  # will store slot-wise ttc error
    sender = c.loc[0, "TxId"]    # sender is the same throughout, so take it from any row
    receiver = c.loc[0, "RxId"]  # same as above

    if nrows == 1:  # only 1 message exchanged
        error_x_val = abs(c["TxTruePos(X)"] - c["TxHeadPos(X)"]).values[0]
        error_y_val = abs(c["TxTruePos(Y)"] - c["TxHeadPos(Y)"]).values[0]
        rel_vel_x = abs(c["Tx_HeadVelX"] - c["RecvVel(X)"]).values[0]
        rel_vel_y = abs(c["Tx_HeadVelY"] - c["RecvVel(Y)"]).values[0]
        # open question: for the relative velocity, should the sender's velocity be
        # taken at the sending instant or at the receiving instant?
        ttc_error_x = error_x_val / rel_vel_x if rel_vel_x != 0 else 0  # equal velocities mean no ttc error
        ttc_error_y = error_y_val / rel_vel_y if rel_vel_y != 0 else 0
        ttc_error.append(max(ttc_error_x, ttc_error_y))
        error.append(c["Error(m)"].values[0])

    else:  # more than 1 packet exchanged
        # NOTE: a few definitions (mask_1, x_pos_BSM, the sender velocities, df_row)
        # were lost when the question was pasted; they are reconstructed below and
        # marked accordingly.
        for k in range(nrows - 1):  # one k per BSM; BSMs are analyzed at reception instants
            current_bsm = c.loc[k]
            next_bsm = c.loc[k + 1]
            slots = int(next_bsm["RecTime"] - current_bsm["RecTime"] - 1)
            current_time = current_bsm["RecTime"]
            mask_1 = (sumo_df["VID"] == sender)  # (reconstructed) select the sender's rows
            df_actual = sumo_df[mask_1]          # df_actual is the sender's sumo information
            x_actual = current_bsm["TxTruePos(X)"]
            y_actual = current_bsm["TxTruePos(Y)"]
            # header fields are used because the error is from the rx perspective
            # and the rx only has header info
            x_pos_BSM = current_bsm["TxHeadPos(X)"]   # (reconstructed)
            y_pos_BSM = current_bsm["TxHeadPos(Y)"]   # (reconstructed)
            x_speed_BSM = current_bsm["Tx_HeadVelX"]  # (reconstructed)
            y_speed_BSM = current_bsm["Tx_HeadVelY"]  # (reconstructed)
            error_x_val = abs(x_actual - x_pos_BSM)
            error_y_val = abs(y_actual - y_pos_BSM)
            error.append(math.sqrt(error_x_val**2 + error_y_val**2))
            rel_vel_x = abs(x_speed_BSM - current_bsm["RecvVel(X)"])
            rel_vel_y = abs(y_speed_BSM - current_bsm["RecvVel(Y)"])
            ttc_error_x = error_x_val / rel_vel_x if rel_vel_x != 0 else 0
            ttc_error_y = error_y_val / rel_vel_y if rel_vel_y != 0 else 0
            ttc_error.append(max(ttc_error_x, ttc_error_y))

            for j in range(slots):  # runs for every 1 ms slot between two receptions
                # slots are in msec and speed in m/s, hence the 0.001 factor
                x_pos_predicted = x_pos_BSM + prediction * (x_speed_BSM * (j + 1)) * 0.001
                y_pos_predicted = y_pos_BSM + prediction * (y_speed_BSM * (j + 1)) * 0.001
                mask_3 = (df_actual["Time"] == (current_time + (j + 1)))
                df_row = df_actual[mask_3]  # (reconstructed) sender's sumo row at the ongoing slot
                x_pos_actual = df_row["Pos(X)"].values[0]
                y_pos_actual = df_row["Pos(Y)"].values[0]
                error_x_val = abs(x_pos_predicted - x_pos_actual)
                error_y_val = abs(y_pos_predicted - y_pos_actual)
                # sqrt added for consistency with the per-BSM error above
                error.append(math.sqrt(error_x_val**2 + error_y_val**2))
                ttc_error_x = error_x_val / rel_vel_x if rel_vel_x != 0 else 0
                ttc_error_y = error_y_val / rel_vel_y if rel_vel_y != 0 else 0
                ttc_error.append(max(ttc_error_x, ttc_error_y))

        # add the last packet's details
        error.append(c.loc[nrows - 1, "Error(m)"])
        error_x_val = abs(c.loc[nrows - 1, "TxTruePos(X)"] - c.loc[nrows - 1, "TxHeadPos(X)"])
        error_y_val = abs(c.loc[nrows - 1, "TxTruePos(Y)"] - c.loc[nrows - 1, "TxHeadPos(Y)"])
        rel_vel_x = abs(c.loc[nrows - 1, "Tx_HeadVelX"] - c.loc[nrows - 1, "RecvVel(X)"])  # (reconstructed)
        rel_vel_y = abs(c.loc[nrows - 1, "Tx_HeadVelY"] - c.loc[nrows - 1, "RecvVel(Y)"])  # (reconstructed)
        ttc_error_x = error_x_val / rel_vel_x if rel_vel_x != 0 else 0
        ttc_error_y = error_y_val / rel_vel_y if rel_vel_y != 0 else 0
        ttc_error.append(max(ttc_error_x, ttc_error_y))

    avg_error = np.mean(error)
    collision = 1 if np.mean(ttc_error) > 6.43 else 0
    return avg_error, collision

overall_errors = []      # error per file
overall_collisions = []  # collision per file
# NOTE: with joblib's default process-based backend, each worker gets its own copy
# of these lists, so appends made inside start_process are not shared across files.

def start_process(fil):
    print("File ", fil, " started")
    b = pd.read_csv(fil, delim_whitespace=True)
    b = b.sort_values('RecTime').reset_index(drop=True)
    m = b['RxId'].nunique()  # m is the number of receivers
    # (reconstructed) the receiver/sender pair loops were lost in the paste
    receivers = b['RxId'].unique()
    senders = b['TxId'].unique()

    average_errors = []      # error for every pair in the file
    average_collisions = []  # collision (0 or 1) for every pair in the file
    for receiver in receivers:
        for j in range(len(senders)):
            sender = senders[j]
            mask = (b['RxId'] == receiver) & (b['TxId'] == sender)  # extract the rx-tx pair
            c = b[mask].reset_index(drop=True)
            if c.shape[0] == 0:
                continue
            avg_error, collision = calc_error(c)  # whole error for that pair
            average_errors.append(avg_error)
            average_collisions.append(collision)

    average_error = np.average(average_errors)
    average_collision = np.average(average_collisions)
    print("File ", fil, " completed")
    overall_collisions.append(average_collision)
    overall_errors.append(average_error)

    with open("parallel_error_collision_P.txt", "a") as out:
        print("for file ", fil, file=out)
        if prediction == 0:
            print("no prediction result follows with prediction flag =", prediction, file=out)
        else:
            print("prediction assisted result follows with prediction flag =", prediction, file=out)
        print("overall_collisions = ", overall_collisions, file=out)
        print("overall_errors = ", overall_errors, "\n", file=out)

Parallel(n_jobs=len(ns3_files))(delayed(start_process)(fil) for fil in ns3_files)
``````
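Most of the runtime in code like this typically goes into the Python-level loop over 1 ms slots and the per-slot boolean-mask scans of `sumo_df`. As a sketch (with hypothetical values standing in for one tx-rx pair's gap between two receptions), the per-slot predicted positions and errors can be computed for all slots at once with NumPy broadcasting instead of a `for j in range(slots)` loop:

```python
import numpy as np

# Hypothetical stand-ins for one gap between two receptions:
slots = 5                             # number of 1 ms slots in the gap
x_pos_BSM, x_speed_BSM = 100.0, 20.0  # header position (m) and speed (m/s)
prediction = 1
# sender's sumo positions at each of the 5 slots (made-up values)
x_actual = np.array([100.025, 100.045, 100.063, 100.081, 100.100])

j = np.arange(1, slots + 1)                                # slot indices 1..slots
x_pred = x_pos_BSM + prediction * x_speed_BSM * j * 0.001  # all slots at once
error_x = np.abs(x_pred - x_actual)                        # vectorized |pred - actual|
```

The per-slot `sumo_df` lookups can similarly be replaced by one `merge` (or a `groupby` on VID done once before the loop), so each slot becomes an array index instead of a full-dataframe boolean scan.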


## Google Analytics – Why do my property hits show a larger number than pageviews and events combined?

I'm getting a warning in the Google Analytics dashboard that I am exceeding the data limit. When I checked the property hit volume (under Admin -> Property Settings), it shows a much larger number, about 87 million. However, pageviews and events combined for a month are only 12 million. So my question is where these additional hits come from.

I use a SPA and send the pageview hit manually when the route changes. Even if I send the pageview hit multiple times (because of buggy code), it should still be counted under pageviews. What am I missing here?

I checked the network tab of my app. For some pages/routes, the pageview hit fires several times. Could that account for the higher hit count? If so, why doesn't it show up under pageviews?

Any hint is appreciated. Thanks a lot.


## 35mm – How do you roll a larger roll of film onto a smaller one?

Without empty cores, you're stuck before you start.

The only real option I see is to take the bulk loader into the darkroom and hand-roll and thread as much film as will fit into the loader's chamber. Even then (I think) the Remjet backing may get scratched by whatever rides against the back of the film in the load chamber.

If you know someone who bulk-loads film, you may be able to get some empty bulk spools to roll onto.

One possible option that just came to mind is a method others have talked about, but which I have not tried myself. Since a 36-exposure roll of 35 mm film is about 150 cm long, you can simply (in the dark) pull the end off the large roll with one hand, run the film out to your full arm span with the other hand, and cut it there. Then roll the strip into a cassette by hand (and don't forget to grab and protect the new end of the large bulk roll before turning the light back on).

There is a small risk here that a strip ends up too long to fit onto a developing reel, but you can use a reference pin or notches on the edge of the darkroom counter as a length gauge to prevent this.

## Partitioning – Btrfs reports "ERROR: minimum size for each btrfs device is 131072000", but my drives are much larger

I have two 2 TB drives that I want to combine into a single RAID0 logical volume. The error I get makes no sense. Can anyone explain what I'm missing here?

I use

``````lsblk -o name,mountpoint,size,uuid,fstype,model,serial
``````

I see that both devices are present and the size is the same (953.4 G*). They previously had different fstypes from installation, so I reformatted both to ext4 and then used `dd` to make them identical in terms of partitioning.

* Yes, I know that this partition is not currently using the full size of this drive.

I'm trying to create the filesystem with

``````mkfs.btrfs -f -d raid0 -m raid0 zdata1 zdata2
``````

but I get the error

ERROR: 'zdata1' is too small to create a usable filesystem

ERROR: minimum size for each btrfs device is 131072000
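One thing worth double-checking (an assumption, since the full command context isn't shown): `zdata1` and `zdata2` look like bare names rather than block-device paths, so mkfs.btrfs may be operating on small files or the wrong targets instead of the 2 TB drives. With full device paths it would look something like this (device names are hypothetical, substitute the ones lsblk reports):

```shell
# Hypothetical device paths -- replace with the entries lsblk shows for your drives
sudo mkfs.btrfs -f -d raid0 -m raid0 /dev/sdb1 /dev/sdc1
```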


## Terminal – Rescuing a scratched DVD with DDRescue causes an output file to be 5 times larger than the DVD. How can I fix it?

Following the steps from this previous post, I used ddrescue to read the disc. I left the process unattended because I knew it could take a while, but when I came back I found that my laptop had run out of space and the output file was 50 GB. The source DVD is only 8.7 GB.

I followed the instructions under the link below carefully.
How to rescue a scratched CD / DVD on Mac OSX

I suspect ddrescue could not determine the disc size, so data kept accumulating.

Reading the man page, I'm not sure which switch to use. Should I tell ddrescue to limit the size of the input source? If not, how can I limit the size of the output file? How do I ask for a 1:1 copy?
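For reference, GNU ddrescue does have a switch for this: `-s` (`--size`) caps how much of the input is read, and `-b` (`--sector-size`) sets the sector size (2048 bytes for DVDs). A sketch, with hypothetical device and file names:

```shell
# Limit the rescue to the disc's nominal capacity so the output cannot balloon.
# -b 2048  : DVD sector size
# -s 8700M : stop after ~8.7 GB of input (the disc's stated size)
ddrescue -b 2048 -s 8700M /dev/disk2 dvd.iso dvd.map
```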

Eric


## Colocation – going directly to larger companies vs. smaller providers

Hey guys, just wanted to have some opinions here.

If you are a small company that wants to rent 1-3 racks, does it still make sense to contact the larger companies / data centers yourself? (i.e. Digital Realty, RagingWire, Atlantic Metro, Equinix, etc.)

I understand that these companies generally focus on wholesale customers who need multiple cages or an entire suite. I've actually reached out to some of them for pricing, and surprisingly they seem more than happy to accommodate my needs, even if it's just a single rack.

However, their prices are definitely at the top end. For example, if I chose a smaller WHT provider, I could get the same performance / bandwidth on a single rack in the same location for about 50% less.

So my question to experienced people is: what do I get for my extra dollars by going with the larger data-center operators?


## Multiple monitors – xrandr with two displays, problems returning to "just side by side" after using a larger screen with scaling

I use two displays, main HDMI1 right 1680×1050, additional HDMI2 left 1280×1024, on an i5 with integrated Xeon E3-1200 graphics.

I use them side by side most of the time. Occasionally I want a 2:1 desktop of 3360 x 2100 on the right monitor, do some work there, and then reset everything to its original state. It's the reset that I can't get to work.

I have set a few aliases to change the size of the main monitor.

``````alias x20="xrandr --fb 3360x2100 --output HDMI1 --scale 2x2 --mode 1680x1050 --panning 3360x2100"
alias x10="xrandr --fb 1680x1050 --output HDMI1 --scale 1x1 --mode 1680x1050 --panning 1680x1050"
alias xh="xrandr --output HDMI1 --right-of HDMI2"
``````

Panning is not strictly necessary, but I seem to have to set it to work around the "caged mouse" bug, where the mouse is confined to the top-left quadrant of the display when I don't. I then seem to have to remove it again to get the right behavior back at 1:1 scale.

After executing x10, both displays are 1:1, but both have their top-left corner at 0,0. When I run xh nothing seems to change; the displays are still at 0,0. At this point, however, xrandr reports that the desktop is 2960 x 1050, large enough to hold them side by side, and the mouse disappears into the void on the right. A screenshot taken with scrot confirms that the screen is 2960 wide.

I have tried many experiments to bring the main display back to its proper offset of 1280,0, including:

- turning off the aux display before scaling to 2x and turning it back on afterwards
- setting `--pos 1280x0`
- setting `--left-of` / `--right-of` for the corresponding monitor
- turning the main monitor off and on

but nothing seems to work.

In the last experiment, switching the main display off and on produced the following error:

``````prompt> xrandr --output HDMI1 --auto --right-of HDMI2
X Error of failed request:  BadMatch (invalid parameter attributes)
Major opcode of failed request:  140 (RANDR)
Minor opcode of failed request:  29 (RRSetPanning)
Serial number of failed request:  38
Current serial number in output stream:  38
``````

So far, the only way I have found to reset the displays to their normal side-by-side state is to restart the PC, which is not ideal.
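One suggestion, untested on this exact setup: the panning area appears to linger after the scaled mode, so explicitly clearing it while restoring the framebuffer, scale, and positions in a single call may avoid the stuck 0,0 placement:

```shell
# Reset HDMI1 to 1:1 right of HDMI2 in one call, explicitly clearing the
# leftover panning area from the scaled setup (--panning 0x0 disables panning)
xrandr --fb 2960x1050 \
       --output HDMI1 --scale 1x1 --mode 1680x1050 --panning 0x0 --pos 1280x0 \
       --output HDMI2 --mode 1280x1024 --pos 0x0
```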

When I go into the GUI screen-resolution tool, I can move the screens, but when I apply the change they snap back to where they were, so this tool seems to have lost control as well.

In some previous experiments I used --above and --left-of on HDMI2, and they behaved as expected, stacking the two monitors or placing them side by side. However, I hadn't played around with --scale, --fb, or --panning before using them.


## Linux – MySQL running a large number of threads

I have my MySQL DB instance in RDS. The CPU jumps from 50% to 100%, so I checked my DB threads.

I was surprised by my thread count.

``````SHOW STATUS WHERE Variable_name LIKE "Threads_%" OR Variable_name = "Connections";
``````

The output for the above query is as follows

``````Threads connected 21