Parallelization – DistributeDefinitions evaluates the definitions, but only when there are many of them

I am using Mathematica 11.3, and this looks like a bug to me. If possible, I would also like an idea for a workaround.

Here is an example of trivial code that works as expected:

nI = 10;
(NM[#] := Print[#]) & /@ Range[1, nI];
LaunchKernels[];
DistributeDefinitions[NM];

That is, the above code is expected to produce no output.

Now, if the first line is changed to

nI = 20;

the same code prints 40 lines: the numbers 1 through 20, twice!

For some reason, running DistributeDefinitions forces the definitions of NM to be evaluated, and I do not want that to happen before I use ParallelSubmit and WaitAll. I have tried this on two computers with Mathematica 11.3. Any ideas what is happening?
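One workaround that may be worth trying (an untested sketch; it assumes the goal is simply to get the NM definitions onto the subkernels without evaluating them along the way) is to define NM directly on the subkernels with ParallelEvaluate instead of going through DistributeDefinitions:

```mathematica
nI = 20;
LaunchKernels[];
(* define NM on each subkernel directly; ParallelEvaluate holds its
   argument, so the Print calls should not fire during the transfer *)
With[{n = nI},
  ParallelEvaluate[(NM[#] := Print[#]) & /@ Range[1, n]]
];
```

The With wrapper injects the current value of nI into the held expression before it is sent, since nI itself is not defined on the subkernels.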

Parallelization – Using ExternalEvaluate in ParallelDo

I have had great success calling Python from Mathematica (for some calculations there are very well-optimized Python packages but no Mathematica equivalent). I have some Mathematica functions that wrap calls to Python functions, similar to the example in the Applications section of this page:

I would now like to execute these functions inside a ParallelDo, but unfortunately it does not work. Here is an MWE showing the issue:

session = StartExternalSession["Python-NumPy"];
ExternalEvaluate[session, "def double(x):
    return x*2"];
doublePython[arg_] := ExternalEvaluate[session, "double(" <> ToString[arg] <> ")"]

Do[Pause[1]; Print[doublePython[i]], {i, 4}] // AbsoluteTiming
ParallelDo[Pause[1]; Print[doublePython[i]], {i, 4}] // AbsoluteTiming
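A likely cause (a hedged guess, not a confirmed diagnosis): the ExternalSessionObject is tied to the kernel that created it, so a session started on the main kernel is not usable from the subkernels. One possible fix, sketched below with an illustrative session name, is to start a separate Python session on every subkernel via ParallelEvaluate:

```mathematica
LaunchKernels[];
(* start one Python session per subkernel and define double() there *)
ParallelEvaluate[
  session = StartExternalSession["Python"];
  ExternalEvaluate[session, "def double(x):
    return x*2"];
];
(* each subkernel now talks to its own local Python session *)
ParallelDo[
  Pause[1];
  Print[ExternalEvaluate[session, "double(" <> ToString[i] <> ")"]],
  {i, 4}] // AbsoluteTiming
```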


Parallelization – ParallelDo gives a different result for Eigensystem

I am trying to compute the eigensystem of a large matrix (e.g. 256×256). I have found that if I do this in a ParallelDo (because I actually compute many of these eigensystems), the result differs from the one computed on the main kernel.

matrix = RandomReal[NormalDistribution[0, 1], {256, 256}];
(* make the matrix Hermitian *)
matrix = matrix + ConjugateTranspose@matrix;
mainkernel = Eigensystem[matrix];
parallelkernel = Table[0, {j, 4}];
SetSharedVariable[parallelkernel]; (* so the subkernel assignments propagate back *)
ParallelDo[parallelkernel[[j]] = Eigensystem[matrix], {j, 4}];

(* check eigenvalues *)
Max@Abs@(mainkernel[[1]] - parallelkernel[[1, 1]])
(* != 0 *)
Max@Abs@(parallelkernel[[1, 1]] - parallelkernel[[2, 1]])
(* = 0 *)

(* make sure the largest entry of each eigenvector is positive, so they point in the same direction *)
signofmain = Map[Sign@Total@MinMax[#] &, mainkernel[[2]]];
signofparallel = Map[Sign@Total@MinMax[#] &, parallelkernel[[1, 2]]];

(* check eigenvectors *)
Max@Abs@(signofmain*mainkernel[[2]] - signofparallel*parallelkernel[[1, 2]])
(* != 0 *)
Max@Abs@(parallelkernel[[1, 2]] - parallelkernel[[2, 2]])
(* = 0 *)

Apparently, all calculations done inside ParallelDo give exactly the same result as one another, but a different one from the main kernel.

I realize that the differences here are extremely small. However, a subsequent division by the difference of two eigenvalues (in my case, eigenenergies) can in some cases amplify the relative error to as much as 10^-2, which is certainly not negligible.

Where does this difference come from and how can I avoid it?
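One hypothesis worth testing (a sketch, not a confirmed explanation): the main kernel's dense linear-algebra routines may run multithreaded through MKL, while the subkernels are restricted to a single thread, and a different thread count changes the floating-point summation order. If that is the cause, forcing the main kernel to a single MKL thread should reproduce the subkernel result:

```mathematica
(* restrict the main kernel's MKL to one thread, like the subkernels *)
SetSystemOptions["ParallelOptions" -> {"MKLThreadNumber" -> 1}];
mainSingleThread = Eigensystem[matrix];
(* if the hypothesis holds, this difference should now be exactly 0 *)
Max@Abs@(mainSingleThread[[1]] - parallelkernel[[1, 1]])
```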