hard drive – Disk Is Full Error When Copying Large Folders Onto Empty HDD

Bought myself a brand-new Seagate HDD (formatted as exFAT). I tried copying over a bunch of folders to back up (about 160 GB worth) and was met with the error message below. This is strange, because there is absolutely nothing on the HDD. I tried reformatting it to Mac OS Extended (Journaled), thinking it would help since that format is designed for macOS, but unfortunately I get the same error message. How can I get around this? By the way, this is on Big Sur.

Error Message

python – splitting a very large csv file into hundreds of thousands of csv files based on column values

I would like to split a very big CSV file with hundreds of millions of rows into hundreds of thousands of small files based on a column value.

I have tried many options:

  • I have tried keeping the output files open instead of closing them, using this function from this post, but there is a limit on the number of files that can be open at the same time (a bounded-writer variant is sketched after this list):
import csv
import os

def split_csv_file(f, dst_dir, keyfunc):
    # keeps one csv.writer per key; the files are never closed,
    # which is what runs into the OS limit on open files
    csv_reader = csv.reader(f)
    csv_writers = {}
    for row in csv_reader:
        k = keyfunc(row)
        if k not in csv_writers:
            csv_writers[k] = csv.writer(open(os.path.join(dst_dir, k),
                                             mode='w', newline=''))
        csv_writers[k].writerow(row)
  • I have tried a simple approach that iterates through the file and appends each row to the corresponding output file, but it is very, very slow:
import os

with open(filename, 'r') as f:
    for line in f:
        filename_w = line.split(',')[1] + '.csv'
        # 'a' appends if the file already exists, 'w' creates it otherwise;
        # reopening the destination file for every single line is what makes this slow
        if os.path.exists(filename_w):
            with open(filename_w, 'a') as fw:
                fw.write(line)
        else:
            with open(filename_w, 'w') as fw:
                fw.write(line)


  • I have tried PySpark with the partitionBy option, with the same result:
df.coalesce(1).write.partitionBy(colname).format("csv").option("header", "true").mode("overwrite").save(out_dir)
  • I have tried awk, with the same result:
awk -F, '{ print > ($2 ".csv") }' something.csv
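
One variant I have been considering (a minimal sketch only, not benchmarked on the full data; the max_open cap and the key column index are illustrative assumptions) is to keep a bounded pool of writers open and evict the least recently used one, so the open-file limit is respected without reopening a file for every single row:

import csv
import os
from collections import OrderedDict

def split_csv_lru(src_path, dst_dir, key_col=1, max_open=500):
    """Split src_path into one CSV per value of column key_col,
    keeping at most max_open output files open at once (LRU eviction);
    evicted files are reopened in append mode when their key recurs."""
    writers = OrderedDict()  # key -> (file handle, csv.writer)

    def get_writer(k):
        if k in writers:
            writers.move_to_end(k)                    # mark as recently used
            return writers[k][1]
        if len(writers) >= max_open:                  # evict least recently used
            _, (old_f, _) = writers.popitem(last=False)
            old_f.close()
        f = open(os.path.join(dst_dir, f"{k}.csv"), "a", newline="")
        writers[k] = (f, csv.writer(f))
        return writers[k][1]

    with open(src_path, newline="") as src:
        for row in csv.reader(src):
            get_writer(row[key_col]).writerow(row)

    for fh, _ in writers.values():
        fh.close()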

Thank you

sequences and series – Large $a$ asymptotics for the function $B(z,-a,0) + \pi \cot(\pi a)$

This is somewhat related to my other question linked here.

Fix some $0 < z < 1$, take $a > 0$, and define the function
$$
f_{z}(a) := B(z,-a,0) + \pi \cot(\pi a) .
$$

What is the asymptotic form for $a \to \infty$ (for fixed $0 < z < 1$)?

This seems difficult to compute since both of the functions $B(z,-a,0)$ and $\pi \cot(\pi a)$ have singularities at $a = 1, 2, 3, 4, \ldots$; however, their sum yields a smooth curve without any singularities at all.

The above function can be written equivalently as $f_{z}(a) = z^{-a} \Phi(z,1,-a) + \pi \cot(\pi a)$, where $\Phi$ is the (Hurwitz) Lerch transcendent. This form seems useful since the identity
$$
\Phi(z,s,\alpha) = z^{n} \Phi(z,s,\alpha+n) + \sum_{k=0}^{n-1} \frac{z^{k}}{(k+\alpha)^{s}}
$$

seems to imply that $\Phi$ has singularities at each integer $\alpha = -k$ in the series. But I am unable to figure out how to use this to derive the asymptotic series for $f_{z}(a)$.
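
For concreteness, specializing this identity to $s = 1$ and $\alpha = -a$ (just a direct substitution, to make the pole structure explicit) gives
$$
\Phi(z,1,-a) = z^{n} \Phi(z,1,n-a) + \sum_{k=0}^{n-1} \frac{z^{k}}{k-a} ,
$$
so the finite sum contributes simple poles at $a = 0, 1, \ldots, n-1$, consistent with the singularities noted above.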

Furthermore, after some playing around it seems that, numerically, $f_{z}(a) \simeq \frac{z^{-a}}{a(z-1)} + \mathcal{O}(a^{-2})$, but I have no clue how to derive this or how to compute higher terms in the asymptotic series.

How does one get $f_{z}(a)$ for large $a$?

c# – How to make this algorithm faster (calculates and searches through large arrays)

I've got this algorithm that's “complex”. The comments in the code give examples of how large the various data structures can be. My CPU usage is less than 10% when running this, and RAM usage is fine; no leakage or anything.

I have a list of arrays, where each array holds x coordinates; essentially we are storing a list of several x-coordinate groups (“xs” in the code). And I have the same thing for the y values (“ys”).

Each array in xs and ys can be a different size. HOWEVER, an xs array and its corresponding ys array are always the same size, so if an array in xs contains 321654 elements, then there is a corresponding array in ys with exactly 321654 elements.

The “paired” xs and ys arrays are always at the same index in their respective lists, so if the xs array with 321654 elements is at xs[4], then the matching ys array is at ys[4].

The following code aims to get the mean, the standard deviation, and the mean -1 std / mean +1 std values, as y coordinates, from the collection of coordinates. It does this by taking the smallest array (the smallest set of x and y coordinates). For each x value in that smallest array it looks at every array in xs and finds the index of the nearest x coordinate; it then goes into the corresponding ys array and gets the y value at that index. It collects all of those y values and then calculates the mean, the standard deviation, and so on. (A binary-search variant of the nearest-index lookup is sketched after the code.)

    List<double[]> xs;  // each array may be e.g. 40000 elements, and the list contains 50 - 100 of those
    List<double[]> ys;  // a list containing arrays of exactly the same sizes as the ones in xs
    double[] The_Smallest_X_in_xs;                          // set by GetSmallestArray below
    List<double> mean_values = new List<double>();
    List<double> standard_diviation_values = new List<double>();
    List<double> Std_MIN = new List<double>();
    List<double> Std_MAX = new List<double>();

public void main_algorithm()
{
    int TheSmallestArray = GetSmallestArray(xs); // length of the smallest array in xs (also stores that array)
    for (int i = 0; i < TheSmallestArray; i++)
    {
        double The_X_at_element = The_Smallest_X_in_xs[i]; // the x value at index i of the smallest array
        // go through each array in xs and find the element closest to The_X_at_element
        List<double> elements = new List<double>();
        for (int o = 0; o < xs.Count; o++) // go through each array in xs
        {
            // index of the number in array o that is closest to The_X_at_element
            int nearestIndex = Array.IndexOf(xs[o], xs[o].OrderBy(number => Math.Abs(number - The_X_at_element)).First());
            double The_Y_at_index = ys[o][nearestIndex]; // the y value at this index
            elements.Add(The_Y_at_index); // store the value in elements
        }
        mean_values.Add(elements.Mean()); // mean of all the y values collected
        standard_diviation_values.Add(elements.PopulationStandardDeviation()); // standard deviation of those y values
        Std_MIN.Add(mean_values[i] - standard_diviation_values[i]); // mean - std
        Std_MAX.Add(mean_values[i] + standard_diviation_values[i]); // mean + std
    }
}

public int GetSmallestArray(List<double[]> arrays)
{
    int TheSmallestArray = int.MaxValue;
    foreach (double[] ds in arrays)
    {
        if (ds.Length < TheSmallestArray)
        {
            TheSmallestArray = ds.Length;   // remember the smallest length
            The_Smallest_X_in_xs = ds;      // and keep the array itself for later lookups
        }
    }
    return TheSmallestArray;
}
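
What I suspect is the hot spot is the OrderBy(...).First() inside the inner loop, which effectively sorts an array for every element of the smallest array. A variant I have been considering (just a sketch, and it assumes each array in xs is sorted ascending, which may not hold for my data) replaces that lookup with Array.BinarySearch:

public int NearestIndex(double[] sortedXs, double x)
{
    // Array.BinarySearch returns the index of x if found; otherwise the bitwise
    // complement of the index of the first element larger than x.
    int idx = Array.BinarySearch(sortedXs, x);
    if (idx >= 0)
        return idx;                          // exact match
    int next = ~idx;                         // first element greater than x
    if (next == 0)
        return 0;                            // x is below the smallest element
    if (next == sortedXs.Length)
        return sortedXs.Length - 1;          // x is above the largest element
    // otherwise pick whichever neighbour is closer to x
    return (x - sortedXs[next - 1]) <= (sortedXs[next] - x) ? next - 1 : next;
}

Inside the inner loop the lookup would then be int nearestIndex = NearestIndex(xs[o], The_X_at_element);, which is O(log n) per element instead of a full sort.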

SQL Server backup very large compared to table size

I have a database with 31 tables and my largest table has about 12 columns and 750,000 records. I checked the size of all my tables using this query:

SELECT 
    t.NAME AS TableName,
    s.Name AS SchemaName,
    p.rows,
    SUM(a.total_pages) * 8 AS TotalSpaceKB, 
    CAST(ROUND(((SUM(a.total_pages) * 8) / 1024.00), 2) AS NUMERIC(36, 2)) AS TotalSpaceMB,
    SUM(a.used_pages) * 8 AS UsedSpaceKB, 
    CAST(ROUND(((SUM(a.used_pages) * 8) / 1024.00), 2) AS NUMERIC(36, 2)) AS UsedSpaceMB, 
    (SUM(a.total_pages) - SUM(a.used_pages)) * 8 AS UnusedSpaceKB,
    CAST(ROUND(((SUM(a.total_pages) - SUM(a.used_pages)) * 8) / 1024.00, 2) AS NUMERIC(36, 2)) AS UnusedSpaceMB
FROM 
    sys.tables t
INNER JOIN      
    sys.indexes i ON t.OBJECT_ID = i.object_id
INNER JOIN 
    sys.partitions p ON i.object_id = p.OBJECT_ID AND i.index_id = p.index_id
INNER JOIN 
    sys.allocation_units a ON p.partition_id = a.container_id
LEFT OUTER JOIN 
    sys.schemas s ON t.schema_id = s.schema_id
WHERE 
    t.NAME NOT LIKE 'dt%' 
    AND t.is_ms_shipped = 0
    AND i.OBJECT_ID > 255 
GROUP BY 
    t.Name, s.Name, p.Rows
ORDER BY 
    TotalSpaceMB DESC, t.Name

and it says that my largest table has 147.45 MB of data:

(screenshot of the query results)

When I total all of the space used by the tables, it comes to less than 200 MB. However, when I create a backup file it is 4.5 GB.

I am not sure what is going on.
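
For completeness, one thing I have not compared yet is the allocated size of the database files themselves (data vs. log). A minimal sketch of that check queries sys.database_files, where size is reported in 8 KB pages:

SELECT
    name,
    type_desc,                 -- ROWS (data file) or LOG (transaction log)
    size * 8 / 1024 AS AllocatedSpaceMB
FROM sys.database_files;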

malware – large dataset for malicious pdfs


factorization – Factorizing large numbers

Hello, I am trying to factorize large numbers with the code below. The code works properly for values like 1927 and 69527 (results), but gives no result for larger values like 655051 and 864109. The code goes as follows:

pollard[n_, B_] := Module[{a, g, i},
    (* Pollard-style p-1 test: raise a to successive exponents mod n
       and check whether GCD[a - 1, n] reveals a nontrivial factor *)
    a = 2;
    i = 2;
    g = 1;
    While[i < B && g == 1,
        a = PowerMod[a, i, n];
        g = GCD[a - 1, n];
        If[g > 1 && g < n, Return[g]];
        i = i + 1]
]

This works: pollard[69527, 11].
But this doesn't: pollard[864109, 9]. Any idea what might be the problem?

dnd 5e – What would happen if I were to use a Ring of Water Walking on a raging sea with very large waves?

That’s up to the DM.

As usual, there’s no single clear answer to anything that isn’t explicitly stated in the rules. A DM could certainly decide that waves represent an uncertain surface that the PCs will have to make rolls to move across; but they could easily rule the other way, since it’s magic that says you can move across water ‘as if it were solid ground’, and solid ground is not generally known for heaving up and down under your feet. The latter interpretation does have some basis in our world; the original Biblical example of walking on water, which presumably inspired the spell and ring, took place in a serious storm with large waves.

What’s your goal?

In general, I think the real answer comes from answering the deeper question, “What do you want to accomplish by calling for rolls?”

If there’s a fight or other challenge happening and you, as the DM, want the waves to count as an environmental problem that impacts the PCs but not their aquatic enemies (thus increasing the difficulty level of the encounter), then I think that makes a pretty great fantastical setting for the scenario.

By contrast, if you’re considering just having the PCs roll some checks to cross the stormy area, but those checks don’t come with any actual consequences for failure (usually taking a longer time to cross an area and looking like an idiot while doing it aren’t actually consequences), probably just skip it and move on to the next point of interest. You can describe them stumbling and sliding across the waves if you want to have a comedy beat, I suppose, but calling for checks in this scenario sounds a lot like the classic newbie-DM mistake of having the players make tons and tons of inconsequential rolls.

It’s also worth asking yourself if your plan is eliminating the benefit of the magic item in question. If the ring is allowing the player to walk on water, but you’re making them functionally perform the same rolls you’d call for from a swimming character, then you’re kind of taking away the coolness and benefit of having a magic item that’s perfectly suited to this challenge, and that’s usually a bad thing.
