sharepoint 2010 – Calculated field error when updating the value for all items in a large list

So here's the problem:

I have a list of about 1 million records

I've added a calculated column with a very simple formula:

"0" + (textfield)

and return as number

The request returned an update conflict error. However, when browsing the list, I noticed that over time the items were being updated with the newly calculated field value, from oldest to newest. But some time later it stopped, leaving more than half of the items without a calculated value; they show up as -4XXXXXXX.

I checked the logs and found that the ULS logged the following error every 20 seconds for 3 hours:

A large block of literal text was sent to SQL. This can result in blocking in SQL and excessive memory use on the front end. Verify that no binary parameters are being passed as literals, and consider breaking up batches into smaller components. If this request is for a SharePoint list or list item, you may be able to resolve this by reducing the number of fields.

And finally, a request timeout error was logged with the same correlation ID as the message above.

Is there a safe way to add the calculated field?

Do I have to increase a timeout setting to give the calculated field enough time to update all items?

How can I recalculate the remaining items without affecting their Modified date and Modified By values?

Thanks!

Matrices – Defining large matrices for iterative algorithms

I have to solve a system of equations using the Jacobi and Gauss–Seidel methods, and for that I have tried to define a large matrix in Mathematica.

I read the documentation and tried to write a command along these lines, but I cannot get it to work:

(* a on the diagonal, 1 on the two adjacent diagonals, 1/b two above the diagonal, 0 elsewhere *)
Table[Which[i == j, a, i == j + 1, 1, i + 1 == j, 1, i + 2 == j, 1/b, True, 0],
 {i, 100}, {j, 100}]

Can we work with such large matrices in Mathematica?
If so, how should I declare it?

And maybe someone can point me to where to read about iterative algorithms on matrices in Mathematica?

co.combinatorics – Classification of ample line bundles on the flag variety $G/B$

Let $G$ be a complex Lie group, with $B$ a choice of Borel subgroup. The line bundles over the flag variety $G/B$ are indexed by $P^+$, the dominant weights of $\mathfrak{g}$: each such weight gives a character of $B$, and thus an associated line bundle over $G/B$. Which of these line bundles are ample?
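For reference, the associated-bundle construction behind this correspondence (a standard sketch; sign conventions vary between authors) sends a weight $\lambda \in P^+$, viewed as a character $\lambda \colon B \to \mathbb{C}^{*}$, to

$$\mathcal{L}_\lambda \;=\; G \times_B \mathbb{C}_\lambda \;=\; (G \times \mathbb{C}) \,/\, \bigl( (g, v) \sim (g b,\ \lambda(b)^{-1} v) \bigr) \;\longrightarrow\; G/B .$$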

sharepoint online – Move large files with modern authentication

I'm trying to move files from one site collection to another. The following code works for smaller files, but not for large files because of memory exceptions:

if (item.FileSystemObjectType == FileSystemObjectType.File)
{
    var fileName = item["FileLeafRef"] as string;
    var fileSize = item["File_x0020_Size"];

    // Load the source file and open its content as a stream
    item.Context.Load(item.File);

    using (var stream = item.File.OpenBinaryStream().Value)
    {
        item.Context.ExecuteQueryWithIncrementalRetry(3, logger);

        // Create the file in the destination library from the stream
        var fi = new FileCreationInformation();
        fi.ContentStream = stream;
        fi.Url = fileName;
        fi.Overwrite = true;
        folder.Files.Add(fi);
        destLibrary.Context.ExecuteQueryWithIncrementalRetry(3, logger);
    }
}

Is there any way to do the same in chunks or batches? Note that SaveBinary etc. cannot be used with modern authentication.

How should I design an architecture for an endpoint that accepts large amounts of data without overloading the database?

Every day there is a lot of new data that I have to write to, and update in, the database. This data set can contain 8-10 million records.

This new data comes from Service A, and it is written directly into a database that is used by Service B. I do not own Service A, but I do own Service B.

The problem is that whenever Service A writes, it stresses the database so much that it consumes all the IOPS and blocks the database, which in turn affects Service B. And because there are so many records, the process takes more than 10-15 hours to complete, which means the database utilization stays high for all of those hours!

I'm thinking of providing an endpoint for Service A to send the new data to, so that I can write it into the database "more gently" and it has no effect on Service B.

However, I'm not sure how to do this without blocking the database, even with an endpoint so that Service A does not have to write to the database directly. Even if I provide such an endpoint, I would still somehow have to deal with the same load that is currently being written straight into the database, and the same issues would result from that load.

I am currently using AWS Postgres RDS as a database. All our services are hosted on AWS and we have a "microservice" architecture.
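To make the "write more gently" idea above concrete, here is a minimal sketch of batched, throttled upserts with psycopg2; the table name records, its columns, the batch size and the pause are all hypothetical and would need tuning against the real workload:

import time
import psycopg2
from psycopg2.extras import execute_values

def ingest(records, dsn, batch_size=5000, pause_seconds=0.5):
    """Write the incoming records in small batches, pausing between batches to cap IOPS."""
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            for start in range(0, len(records), batch_size):
                batch = records[start:start + batch_size]
                # Upsert so re-sent rows update in place instead of failing
                execute_values(
                    cur,
                    """
                    INSERT INTO records (id, payload)
                    VALUES %s
                    ON CONFLICT (id) DO UPDATE SET payload = EXCLUDED.payload
                    """,
                    batch,
                )
                conn.commit()              # keep each transaction short
                time.sleep(pause_seconds)  # leave IOPS headroom for Service B

Putting a queue (SQS, Kinesis, or similar) between the endpoint and a writer like this would decouple Service A's bursts from the write rate even further, since the consumer can drain the queue at whatever pace the database tolerates.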

How should I design my endpoint in such a scenario, or which AWS services should I use so that I can handle the large amount of data more appropriately?

Unity – Splitting large 3D objects or not?

I have a pretty simple question that I would like advice on:

Should large 3D objects be split into smaller ones?

By large, I mean an object that would be as wide as a game level. Below is a mountain: the first image shows it as a single mesh, the second shows it split into multiple meshes.

[Image: the mountain as a single mesh]

[Image: the mountain split into multiple meshes]

Facts that I have considered (possibly wrong):

  • A single large object keeps the scene hierarchy simple, with fewer objects. At the same time, it may be rendered more often than necessary, because the whole mesh counts as visible whenever any part of it is in view.

  • A large object split into smaller objects inflates the scene hierarchy, but performance should be better, since only the parts visible to the camera are rendered and the rest are culled.

Note that these objects are not complex by today's standards. The mountain is made up of less than 1K triangles, and a full plain is probably made up of less than 30K triangles.

Ideally, I would like the smallest number of objects in the scene hierarchy to keep it simple, but at the same time I wonder whether oversimplifying it this way could bring up additional issues I have not thought about.

c++ – Is there a way to speed up a large switch statement?

For practice, I am working on a CPU emulator (running at about 1.78 MHz) and using a switch statement to execute the correct opcode based on the value in the IR (instruction register) variable. This switch statement needs 256 cases. Even though each case may not be that big, the switch has to be executed many times in a short period. Are there better ways to write fast code for this purpose than a switch statement?

The opcodes can be divided into two parts: the addressing mode and the actual operation. For compact code, these could probably be turned into functions and called from whichever cases need them. I'm just not sure whether it would be more efficient to write each case out in full, even if the code gets bigger.

Another idea, which I am not sure how to implement, is to decode the addressing mode and the operation from the opcode before the switch statement(s): one switch over the addressing modes would compute the effective address, and a second switch would perform the actual operation (and write back to memory if it was an RMW instruction).
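As a rough structural sketch of that two-stage idea (written in Python only for brevity; the opcode layout, handler names, and encoding below are made up, not those of any real CPU), the decode step indexes into two tables instead of walking one big switch:

from dataclasses import dataclass

@dataclass
class CPU:
    mem: list
    pc: int = 0
    a: int = 0

def addr_immediate(cpu):
    # Operand is the next byte; return its address and advance the program counter
    addr = cpu.pc
    cpu.pc += 1
    return addr

def addr_absolute(cpu):
    # Operand is a little-endian 16-bit address
    lo, hi = cpu.mem[cpu.pc], cpu.mem[cpu.pc + 1]
    cpu.pc += 2
    return lo | (hi << 8)

def op_load(cpu, addr):
    cpu.a = cpu.mem[addr]

def op_add(cpu, addr):
    cpu.a = (cpu.a + cpu.mem[addr]) & 0xFF

ADDR_MODES = [addr_immediate, addr_absolute]   # indexed by the low nibble (made-up layout)
OPERATIONS = [op_load, op_add]                 # indexed by the high nibble (made-up layout)

def step(cpu):
    opcode = cpu.mem[cpu.pc]
    cpu.pc += 1
    mode = ADDR_MODES[opcode & 0x0F]           # first stage: addressing mode
    operation = OPERATIONS[opcode >> 4]        # second stage: operation
    operation(cpu, mode(cpu))

cpu = CPU(mem=[0x10, 0x03])   # 0x10 = "add, immediate" in this made-up encoding
step(cpu)                     # cpu.a is now 3

In C++ the equivalent would be two arrays of function pointers. It is also worth noting that most compilers already turn a dense 256-case switch into a jump table, so profiling before restructuring is a good idea.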

So, any thoughts? Is a switch statement the best choice here, and what other optimizations can I make to ensure the simulation runs smoothly?

python – Performing a given set of operations on a large sequence

Well, with the annoying "not looking for verification of my code" disclaimer out of the way…

Step 1: White space

Follow the PEP 8 guidelines and put a space around operators and after commas (among other things):

n, k = map(int, input().split())
arr = list(map(int, input().split()))   # read the input sequence and store it as a list
for i in range(k):                      # iterate from 0 to (k-1)
    t = i % n                           # i modulo n
    arr[t] = arr[t] ^ arr[n - (t) - 1]  # xor two list elements and store the result in the t-th element
for i in arr:
    print(i, end=" ")                   # print the final sequence

Much easier to read.

Step 2: Avoid multiple lookups

Python is an interpreted language, and the meaning of a line of code (or even part of a line) can change by the time the interpreter comes back to run that code a second time. This means the interpreter cannot really compile the code. Unless something is a well-defined short-circuit operation, every operation must be performed.

Consider:

arr[t] = arr[t] ^ arr[n - (t) - 1]

The interpreter must compute the location of arr[t] twice: once to fetch the value and a second time to store the new value, because side effects that occur while evaluating arr[n - (t) - 1] could change the meaning of arr[t]. In your case arr is a list and n and t are plain integers, but with custom types anything can happen, so the Python interpreter can never make the following optimization for you:

arr[t] ^= arr[n - (t) - 1]

It's a tiny speedup, but considering that this line can be executed $10^{12}$ times, it can add up.
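If you want to see the difference, the dis module shows that the augmented form only evaluates arr and t once before the in-place xor (a quick check; the exact bytecode varies between Python versions):

import dis

def explicit(arr, t, x):
    arr[t] = arr[t] ^ x

def augmented(arr, t, x):
    arr[t] ^= x

dis.dis(explicit)    # loads arr and t twice: once for the read, once more for the write
dis.dis(augmented)   # loads arr and t once, duplicates them, and stores back in place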

Step 3: Avoid calculations

Speaking of avoiding work: because we know that the length of the array is fixed, arr[n - 1] is the same as arr[-1], so we can speed up that line of code even further:

arr[t] ^= arr[-1 - t]

Instead of two subtractions, we now only have one. Yes, Python has to index from the back of the array, which internally involves a subtraction, BUT that will be an optimized, C-coded subtraction on ssize_t values, instead of subtractions on variable-byte-length integer objects that have to be allocated on and freed from the heap.

Step 4: Print space-separated lists

The following is slow:

for i in arr:
    print(i, end=" ")

This is faster:

print(*arr)

And for long lists, this can be the fastest:

print(" ".join(map(str, arr)))

For a detailed discussion, including timing diagrams, see my answer and this answer to another question.

Step 5: The algorithm

Look at the list [A, B, C, D, E].

After applying a single pass of the operation to it (i.e. k = n), you will get:

[A^E, B^D, C^C, D^(B^D), E^(A^E)]

which simplifies:

[A^E, B^D, 0, B, A]

If we apply a second pass (i.e. k = 2*n), you will get:

[(A^E)^A, (B^D)^B, 0^0, B^((B^D)^B), A^((A^E)^A)]

which simplifies:

[E, D, 0, B^D, A^E]

A third pass (i.e. k = 3*n) gives:

[E^(A^E), D^(B^D), 0^0, (B^D)^(D^(B^D)), (A^E)^(E^(A^E))]

or:

[A, B, 0, D, E]

Now, k does not have to be an exact multiple of n, so you have to figure out what to do in the general case, but you should be able to use the above observation to eliminate a lot of unnecessary calculation.
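As a quick sanity check of that observation (a throwaway sketch, not the requested solution), three full passes really do restore everything except the zeroed middle element:

def one_pass(arr):
    """Apply the xor operation once to every index, in place."""
    for t in range(len(arr)):
        arr[t] ^= arr[-1 - t]
    return arr

arr = [3, 7, 11, 19, 23]    # stand-ins for A, B, C, D, E
for _ in range(3):
    one_pass(arr)
print(arr)                  # [3, 7, 0, 19, 23] -- that is, A, B, 0, D, E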

Implementation left to the student.

SEO – Google cannot retrieve a large sitemap with 50,000 URLs, and it is not rendered by browsers

My sitemap contains 50,000 URLs (7.8 MB) and uses the following URL syntax:

<url>
  <loc>https://www.ninjogos.com.br/resultados?pesquisa=vestido, maquiagem,</loc>
  <lastmod>2019-10-03T17:12:01-03:00</lastmod>
  <priority>1.00</priority>
</url>


The problems are:

• Search Console reports that the sitemap could not be read.

• Sitemap loading takes 1 hour and Chrome stops working.


• In Firefox, the sitemap was downloaded in 1483 ms and fully loaded after 5 minutes.

Things that I have done without success:

• Disable GZip compression.

• Delete my .htaccess file;

• Created a test sitemap with 1K URLs and the same syntax and submitted it to Search Console; it worked, but the sitemap with 50K URLs still reports that the sitemap could not be retrieved.


• Tried to inspect the URL directly, but it returned an error asking to try again later, while the 1K-URL sitemap worked.

• Tried validating the sitemap on five different websites (Yandex, etc.), and all passed without errors or warnings.

Can anyone shed some light on this?