numerics – How to use parallel computation in Mathematica for numerical integration

I have a multidimensional integral that I need to evaluate numerically and then plot over an extra variable, for zeta = 0.5.

g[kx_, ky_, k1x_, k1y_, qx_, qy_, zeta_, zz_] :=
 1/((1 + Exp[-Sqrt[k1x^2 + k1y^2] + zeta*k1x*Cos[zz] + zeta*k1y*Sin[zz]])*
    (1 + Exp[-Sqrt[kx^2 + ky^2] + zeta*kx*Cos[zz] + zeta*ky*Sin[zz]])*
    (1 + Exp[Sqrt[(kx + qx)^2 + (ky + qy)^2] + zeta*(kx + qx)*Cos[zz] +
          zeta*(ky + qy)*Sin[zz]]*
       (1 + Exp[Sqrt[(k1x - qx)^2 + (k1y - qy)^2] +
            zeta*(k1x - qx)*Cos[zz] + zeta*(k1y - qy)*Sin[zz]])))

PolarPlot[
 NIntegrate[g[kx, ky, k1x, k1y, qx, qy, 0.5, zz],
  {kx, -1, 1}, {ky, -1, 1}, {k1x, -1, 1}, {k1y, -1, 1},
  {qx, -1, 1}, {qy, -1, 1}], {zz, 0, 2 Pi}]

My problem is that it takes many hours (more than 10) without producing an answer. My workaround was to calculate the integral for a discrete set of angles and then plot those values, but I am looking for a better way.
Based on searching this site,

Parallelizing Numerical Integration in Mathematica

it may be possible to use parallel computation to plot it. My knowledge of Mathematica is limited and I couldn’t understand the role of the index “i” in that link.
Would someone please tell me how I can use all cores to decrease the calculation time?
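For scale, the discrete-angle workaround parallelizes naturally: each angle is an independent integral, so one worker per angle uses all cores. A sketch in Python (a crude Monte Carlo estimate stands in for `NIntegrate`; the integrand is transcribed from `g` above, and the sample size and angle count are arbitrary choices):

```python
import math
import random
from multiprocessing import Pool

ZETA = 0.5

def integrand(kx, ky, k1x, k1y, qx, qy, zz, zeta=ZETA):
    """Direct transcription of g from the question."""
    a = 1 + math.exp(-math.hypot(k1x, k1y) + zeta*(k1x*math.cos(zz) + k1y*math.sin(zz)))
    b = 1 + math.exp(-math.hypot(kx, ky) + zeta*(kx*math.cos(zz) + ky*math.sin(zz)))
    c = 1 + math.exp(math.hypot(kx + qx, ky + qy)
                     + zeta*((kx + qx)*math.cos(zz) + (ky + qy)*math.sin(zz))) * \
            (1 + math.exp(math.hypot(k1x - qx, k1y - qy)
                          + zeta*((k1x - qx)*math.cos(zz) + (k1y - qy)*math.sin(zz))))
    return 1.0 / (a * b * c)

def mc_integral(zz, n=5000, seed=0):
    """Crude Monte Carlo estimate of the 6-D integral over [-1, 1]^6."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        pt = [rng.uniform(-1.0, 1.0) for _ in range(6)]
        total += integrand(*pt, zz)
    return (2.0 ** 6) * total / n   # volume of [-1, 1]^6 is 2^6

if __name__ == "__main__":
    angles = [2 * math.pi * i / 32 for i in range(32)]
    with Pool() as pool:                       # one angle per worker
        values = pool.map(mc_integral, angles)
    # `values` can now be plotted against `angles` as a polar curve
```

The same one-worker-per-angle structure carries over to Mathematica itself (e.g. a parallel table over `zz` values, each entry a single `NIntegrate` call), which is what the linked question's index `i` iterates over.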

thanks

parallel computing – Repeatedly finding and deleting maximal independent sets on a graph: Number of necessary iterations in restricted cases

I am trying to design a parallel scheduling algorithm based on a constraint graph $G=(V,E)$ in which each node represents a task and each edge $e=(v_1, v_2)$ signifies that tasks $v_1$ and $v_2$ cannot be executed in parallel. Each task is executed exactly once, so the problem is finding “good” independent sets $V_i$, so that

$$
\bigcup_{i=1}^{k} V_i = V
$$

with all independent sets $V_i, V_j$ being pairwise disjoint. Since MaxIS is NP-hard, my approach would be solving MIS repeatedly (finding some maximal independent set, removing those vertices, and starting again until the graph is empty). I know that in the worst case of $G$ being a clique this approach would yield $n$ iterations; however, in my instance I would have the guarantee that the number of neighbors of each node is upper-bounded by $c \ll |V|$.

My question is: Given such a $c$ is there any upper bound on the number of necessary steps $k$?
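The repeated-peeling loop described above can be sketched as follows (a minimal greedy version in Python, not one of the parallel/distributed MIS algorithms from the literature; the graph is an adjacency-set dict):

```python
def maximal_independent_set(adj, vertices):
    """Greedily grow a maximal (not maximum) independent set in the
    subgraph induced by `vertices`."""
    mis, blocked = set(), set()
    for v in vertices:
        if v not in blocked:
            mis.add(v)
            blocked.add(v)
            blocked |= adj[v] & vertices   # neighbours can no longer join
    return mis

def schedule(adj):
    """Peel off maximal independent sets until no vertices remain;
    len(schedule(adj)) is the k from the question."""
    remaining = set(adj)
    rounds = []
    while remaining:
        mis = maximal_independent_set(adj, remaining)
        rounds.append(mis)
        remaining -= mis
    return rounds
```

Small experiments with bounded-degree test graphs (e.g. cycles, grids) make it easy to probe empirically how $k$ behaves as a function of the degree bound $c$.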

c# – Most efficient way to invoke a list of delegates in parallel in .NET 5

I am looking for the best way to invoke an array of delegates in parallel.

It is important that the delegates are invoked at exactly the same time. I understand that this makes little difference, but it’s a requirement in my situation. Please do not suggest executing the tasks as I add them to the array.

I have chosen to use arrays instead of lists for memory efficiency. I have profiled the app before and after switching from lists to arrays and have noticed a small memory difference overall. I understand that this is probably not going to make much difference.

The list called ‘urls’ is a list of API endpoints with varying latency. These are retrieved from a database, and I have only included this code to avoid posting stub code. No need to be concerned about this list or its origin.

The ‘GetData()’ method is also of no concern for this question.

The following code works, I am looking for any suggestions on how to do this better:

//get urls from database
List<string> urls = await uow.Urls.GetAllAsync();

//define array for delegates that will be invoked in parallel
Func<Task<ApiResult>>[] delegates = new Func<Task<ApiResult>>[urls.Count];

//iterate through urls, creating and populating the delegate array
for (int i = 0; i < urls.Count; i++)
{
    string url = urls[i];

    async Task<ApiResult> GetDataAsync()
    {
        ApiResult result = await GetData<ApiResult>(url);
        result.Timestamp = DateTime.UtcNow;
        return result;
    }

    delegates[i] = GetDataAsync;
}

//invoke/execute all the tasks in parallel
Task<ApiResult>[] tasks = delegates
        .AsParallel()
        .Select(d => Task.Run(d.Invoke))
        .ToArray();

//wait for all of the tasks to complete
await Task.WhenAll(tasks);

//extract the results of the tasks into an array
ApiResult[] apiResults = tasks
        .Select(t => t.Result)
        .ToArray();

//iterate through the results and write the timestamp to the console
foreach (ApiResult result in apiResults)
{
    Console.WriteLine(result.Timestamp.ToString());
}
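The overall pattern here (build all the not-yet-started work items first, then launch them together and await the whole batch) is language-independent. A hedged analogue in Python’s asyncio, with `get_data` and the URLs as stand-ins for the real endpoints:

```python
import asyncio
import time

async def get_data(url):
    # hypothetical stand-in for GetData(): simulate a network call
    await asyncio.sleep(0.01)
    return url, time.monotonic()

async def fetch_all(urls):
    # Build the unstarted "delegates" first...
    pending = [get_data(u) for u in urls]
    # ...then start them all at once and await the batch, like Task.WhenAll.
    return await asyncio.gather(*pending)

results = asyncio.run(fetch_all(["a", "b", "c"]))
```

As in the C# version, the work items are created without running, and a single gathering call starts and collects them.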

foreach – Parallel for each not giving same output as for loop in R

In the current code I can see the output when I use print(gf), but the output doesn’t get saved inside gf. If I replace all of this with a for loop it works, and the output does get saved inside gf. My end objective is to make it run in parallel.

gf <- list(list())
for (k in 1:length(ps)) {
  DF <- mg[, c(3:16, k + 16)]
  if (length(ps[[k]]) == 0) {
    next
  }
  colnames(DF) <- c(1:14, "I1")
  ps1 <- ps[[k]][1:(length(ps[[k]]) - 1)]
  DF1 <- NULL
  gf <- list(list())
  outt <- foreach(i = 1:length(ps1), .export = c("colnames", "DF", "ps1", "DF1", "gf"), .combine = 'c') %dopar% {
    print(c(k, i))
    colnum <- length(ps1[[i]][[1]])
    colname <- ps1[[i]][[1]]
    DF1 <- DF[, colname]
    copname <- ps1[[i]][[4]]
    if (copname == "Easy") {copname = "Normal"} else if (copname == "Hard") {
      copname = "ezz"
    }
    copname <- tolower(copname)
    go <- as.vector(gtfc(copname, d = colnum))
    for (j in 3:length(go)) {
      #if(j==5){next}
      if (copname == "normal") {
        result1 <- try(gof(as.matrix(DF), copula = copname, tests = go[j], M = 100, dispstr = "un"), silent = TRUE)
        if (class(result1) == "try-error") {
          next}
        gres = gof(as.matrix(DF), copula = copname, tests = go[j], M = 100, dispstr = "un") #,param=f1[[3]]),dispstr = "un"
      } else {
        result1 <- try(gof(as.matrix(DF), copula = copname, tests = go[j], M = 100), silent = TRUE)
        if (class(result1) == "try-error") {
          next}
        gres = gof(as.matrix(DF), copula = copname, tests = go[j], M = 100) #,param=f1[[3]]),dispstr = "un"
      }
      print(tail(unlist(gres), 2)[1])
      if (is.na(tail(unlist(gres), 2)[1]) == TRUE) {
        print("Not done")
        next}
      if (tail(unlist(gres), 2)[1] >= 0.05) {
        print("done")
        print(tail(unlist(gres), 2)[1])
        break} else {print("not done")}
    }
    gofsumm <- list(list(colname), colnum, copname, go[j], list(gres))
    gf[[k]][[i]] <- list(gofsumm)
    colnum = colname = DF1 = copname = go = gres = NULL
  }
}
gf
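The symptom generalizes beyond foreach: `%dopar%` workers run as separate processes, so an assignment like `gf[[k]][[i]] <- ...` happens in a worker’s private copy of `gf` and never reaches the parent; the loop body’s last value, collected via `.combine`, is the only channel back. A minimal Python sketch of the same effect, assuming process-based workers (POSIX fork, so each worker starts with a copy of the parent’s state):

```python
import multiprocessing as mp

results = [None] * 4              # analogue of gf in the parent process

def work(i):
    results[i] = i * i            # mutates only this worker's copy of the list
    return i * i                  # only the *return value* reaches the parent

def run():
    # "fork" keeps the demo deterministic on POSIX: each worker gets a
    # copy-on-write copy of `results`, so its writes stay local.
    with mp.get_context("fork").Pool(2) as pool:
        collected = pool.map(work, range(4))
    return collected, results     # parent's `results` is still all None

if __name__ == "__main__":
    print(run())                  # ([0, 1, 4, 9], [None, None, None, None])
```

The fix in the R code follows the same shape: make the `%dopar%` body return `gofsumm` (its last expression) and let `foreach`’s `.combine` assemble the results, instead of assigning into `gf` inside the loop.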

c# – Fastest / most efficient way to invoke a list of delegates in parallel in .NET Core 5

Here are 2 examples:

List<Func<Reference>> referenceDelegates = new List<Func<Reference>>();

foreach (string url in urls)
{
    // not async as I need to add this to a list without invoking it.
    // making it async would require a return type of Task<Reference> and 
    // force the task to execute when adding it to a list
    // e.g. referenceDelegates.Add(GetReference());
    Reference GetReference()
    {
        // uses the url string and gets data from a bunch of different servers
        // from around the world, a different server with each foreach iteration
        // returns Result;
    }
    referenceDelegates.Add(GetReference);
}

List<Task<Reference>> referenceTasks = referenceDelegates
    .AsParallel()
    .Select(d => Task.Run(d.Invoke))
    .ToList();

List<Reference> references = Task.WhenAll(referenceTasks)
    .Result
    .ToList();

// work with results

or

List<Func<Task<Reference>>> referenceDelegates = new List<Func<Task<Reference>>>();
foreach (string url in urls)
{        
    async Task<Reference> GetReference()
    {
        // uses the url string and gets data from a bunch of different servers
        // from around the world, a different server with each foreach iteration
        // returns Result;
    }
    referenceDelegates.Add(GetReference);
}

List<Task<Reference>> referenceTasks = referenceDelegates
    .AsParallel()
    .Select(d => Task.Run(d.Invoke))
    .ToList();

Task.WhenAll(referenceTasks).Wait();

List<Reference> result = referenceTasks
    .Select(l => l.Result)
    .ToList();

// work with results

What is the most efficient (fastest, with the least memory) way to invoke a list of delegates at the same time and wait for the results?

Perhaps it can be rewritten?

Plain-language example of how a functional style makes parallel programming easier

I read a few “functional >> imperative/OOP” articles because I heard there was a move in imperative OOP languages toward a functional style of coding, especially encouraging pure functions where possible. One recurring argument is that by not mutating state, you don’t have to worry about race conditions and locking.

I do get the logic behind that: no two processes would mutate the same data, so locking and race conditions are irrelevant. The problem is that I haven’t done any parallel programming and don’t know any functional languages, so it’s hard for me to really understand. I got as far as reading about persistent data structures as replacements for large mutable structures, but I hit a wall. I’m looking for a plain-language example of a parallel algorithm written in an imperative style and in a functional style that illustrates this recurring argument.

If it helps to provide a programming problem, let’s say I have an array of integers A. People submit commands (A[i] += 1) in real time to increment an element of that array by 1 (at a valid index i). Different elements are intended to be incremented in parallel. I can imagine the imperative solution does exactly this, but locks an index during an increment. What would a functional solution look like? I will accept answers as simple as naming a functional data structure, as long as I can understand it by looking it up.
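As a concrete (if simplified) illustration of that increment example, here is a Python sketch of both styles; the thread-per-command setup is artificial and exists only to make the shared-state hazard visible:

```python
import threading
from collections import Counter

# Imperative style: a shared mutable array guarded by a lock.
def imperative(commands, size):
    a = [0] * size
    lock = threading.Lock()

    def worker(idx):
        with lock:            # without this, `a[idx] += 1` can race
            a[idx] += 1

    threads = [threading.Thread(target=worker, args=(i,)) for i in commands]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return a

# Functional flavour: nobody mutates shared state.  Each command is an
# immutable value, and the final array is a pure reduction over all
# commands, so there is nothing to lock.
def functional(commands, size):
    counts = Counter(commands)
    return [counts[i] for i in range(size)]
```

In the functional version, parallelism would come from splitting `commands` into chunks, reducing each chunk independently, and merging the partial counts at the end; because the merge is just addition (commutative and associative), the order of arrival does not matter and no locks are needed.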

Fastest and most suitable database for parallel writes (incrementing values) that is not in RAM

I have a lot of workers that increment values in a Redis Cluster, which works fine (220k writes/s). For more sustainable persistence I want to use an HDD-based database. Currently I’m using Postgres with some preprocessing to reduce the writes, but even this is getting busy now: I see a lot of errors while updating (incrementing) the Postgres fields.

That’s why I wonder if there is a database that is strongly consistent and copes with parallel writes better than Postgres does…
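Independent of the final database choice, widening the preprocessing window often relieves exactly this kind of update contention: collapse many increments per key in memory, then apply one upsert per distinct key. A sketch (the `counters` table name and schema are hypothetical; the placeholders follow psycopg style):

```python
from collections import Counter

def batch_increments(events):
    """Collapse a stream of per-key increments so that N events become
    at most one write per distinct key."""
    return Counter(events)

# Hypothetical table: counters(key TEXT PRIMARY KEY, value BIGINT).
# One statement per key applies the whole batch; ON CONFLICT makes the
# increment atomic per row on the server side.
UPSERT_SQL = """
INSERT INTO counters (key, value) VALUES (%s, %s)
ON CONFLICT (key) DO UPDATE SET value = counters.value + EXCLUDED.value
"""
```

With batching like this, the write rate the database sees is bounded by the number of hot keys per flush interval rather than by the raw event rate.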