Probability – Convergence of the ratio of order statistics of the gaps induced by $n$ uniform points on $(0,1)$

In an MO question here, @IosifPinelis shows that the ratio of expectations $\mathbb{E}(A)/\mathbb{E}(B)$ of the largest (say $A$) and smallest (say $B$) gap resulting from $n$ uniform random variables on $(0,1)$ tends to infinity as $n \rightarrow \infty$.

In an earlier question related to the one above, he has also shown that $\mathbb{E}(A/B)$ goes to infinity as $n \rightarrow \infty$.

I have a related question:

Let $G_{(1)}$ be the smallest gap, $G_{(2)}$ the second smallest, and so on, with $G_{(n)}$ the largest gap.

What is the fastest-growing sequence $\ell(n)$ such that
$$ \lim_{n \rightarrow \infty} \frac{G_{(\ell(n))}}{G_{(1)}} < \infty \,? $$
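For intuition, these gap order statistics are easy to simulate. Here is a minimal Python sketch (my own illustration, not from the linked questions) that draws the gaps induced by $n$ uniform points and sorts them:

```python
import random

def sorted_gaps(n, rng):
    """Return the n + 1 gaps induced by n uniform points on (0, 1),
    sorted ascending, so gaps[0] is G_(1) and gaps[-1] is the largest gap."""
    pts = sorted(rng.random() for _ in range(n))
    gaps = [pts[0]]                                # gap before the first point
    gaps += [b - a for a, b in zip(pts, pts[1:])]  # interior gaps
    gaps.append(1.0 - pts[-1])                     # gap after the last point
    return sorted(gaps)

rng = random.Random(1)
gaps = sorted_gaps(10_000, rng)
ratio = gaps[1] / gaps[0]  # one realisation of G_(2) / G_(1)
```

Averaging such ratios over many runs for a growing index $\ell(n)$ gives a feel for where the limit stops being finite.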

Statistics – Yet another question about total and partial derivatives

I have a question about this thread: difference between implicit, explicit and total time dependency.

Considering Kostya's answer there, I understand the difference between $\frac{\partial \rho}{\partial t}$ and $\frac{d\rho}{dt}$. What I want to know is: what is $\frac{d\rho}{dx}$ for a function $\rho = \rho(t, x(t), p(t))$?

In my opinion, we have $$ \frac{d\rho}{dx} = \frac{\partial \rho}{\partial t} \frac{dt}{dx} + \frac{\partial \rho}{\partial x}, $$ if $x$ and $p$ are independent variables.

But if that is the case, I run into further confusion about integration by parts when computing an integral in the time evolution of the ensemble average.

I think I have a misunderstanding somewhere.
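For the well-defined total time derivative along the trajectory, the chain rule $\frac{d\rho}{dt} = \frac{\partial\rho}{\partial t} + \frac{\partial\rho}{\partial x}\dot{x} + \frac{\partial\rho}{\partial p}\dot{p}$ can at least be checked numerically. A sketch with an arbitrary example function of my own choosing (not from the linked thread), using $x(t) = \sin t$ and $p(t) = \cos t$:

```python
import math

def rho(t, x, p):
    # example function rho(t, x, p); any smooth choice works here
    return t * x**2 + p * x

def total_dt(t):
    """d(rho)/dt via the chain rule along x(t) = sin t, p(t) = cos t."""
    x, p = math.sin(t), math.cos(t)
    d_t = x**2              # partial rho / partial t
    d_x = 2 * t * x + p     # partial rho / partial x
    d_p = x                 # partial rho / partial p
    return d_t + d_x * math.cos(t) + d_p * (-math.sin(t))

# central finite difference of rho along the trajectory as a cross-check
t, h = 0.7, 1e-6
fd = (rho(t + h, math.sin(t + h), math.cos(t + h))
      - rho(t - h, math.sin(t - h), math.cos(t - h))) / (2 * h)
```

The two values agree to within the finite-difference error, which is what makes $\frac{d\rho}{dt}$ unambiguous, unlike $\frac{d\rho}{dx}$.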

Statistics – Is there a statistical meaning for $\sqrt{2n}$?

I'm writing a new algorithm that creates internal subsequences based on an input sequence. The number of subsequences correlates with the number of different elements in the input.

When I feed in a pseudo-random input of sufficiently large length $n$, the number of subsequences generated by my algorithm is approximately $\sqrt{2n}$; I have tested this over 1000 random sequences with varying $n$.

I know it's hard to define what "random" is, but this confuses me. My algorithm is deterministic and consistently generates the same subsequences for the same input, so I am sure this is not accidental; it is as if there were a mathematical relationship between random data and $\sqrt{2n}$, but I could not figure out why or how.

As I write the paper, I would like to point out this observation, but I do not know what to call this phenomenon.
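One way to substantiate such a claim in the paper, without naming the mechanism, is to fit $\log(\text{count})$ against $\log n$: a slope near $1/2$ together with an intercept near $\tfrac{1}{2}\log 2$ says "grows like $\sqrt{2n}$". A sketch (the `counts` below are placeholders standing in for the algorithm's measurements):

```python
import math

# Placeholder data: the conjectured sqrt(2n) stands in for the
# measured subsequence counts; in practice these come from the algorithm.
ns = [10**k for k in range(2, 7)]
counts = [math.sqrt(2 * n) for n in ns]

# Least-squares fit of log(count) = a * log(n) + b.
lx = [math.log(n) for n in ns]
ly = [math.log(c) for c in counts]
mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
a = sum((u - mx) * (v - my) for u, v in zip(lx, ly)) / \
    sum((u - mx) ** 2 for u in lx)
b = my - a * mx  # exp(b) estimates the constant in front of sqrt(n)
```

With real measurements the fitted slope and constant quantify how closely the growth tracks $\sqrt{2n}$, which is citable even without an explanation of why.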

SQL Server – Is it a bad practice to automate updating statistics and rebuilding query plans in a relatively small SQL Azure database?

I have a SQL Azure database of approximately 500 MB. Every 2-3 weeks, the database suddenly becomes slow and some queries (generated by LINQ) time out. There is no gradual growth in execution time, only sudden spikes.

The only way to fix this is to update the statistics. I usually delete the query plans at the same time. It then goes back to normal.

Some of the indexes were initially created with STATISTICS_NORECOMPUTE = ON. Since changing this to OFF, the problem occurs less frequently. However, looking at the indexes and the table-creation SQL, nothing is unusual for the system. New rows are added daily, but not in unusually large or small numbers. I have read that statistics are only auto-updated once about 20% + 500 rows of the table have changed, but I have run other similar databases without any problems.

So, if I schedule a statistics update at midnight every few days, am I treating the symptom while ignoring an elusive cause? Is that bad practice, or is it standard database maintenance?
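Scheduled statistics updates are common maintenance; if you go that route, a small script can build the nightly T-SQL batch. A sketch (the table names are placeholders for your own schema, and `build_stats_refresh` is a hypothetical helper of my own, not a library function):

```python
def build_stats_refresh(tables, fullscan=True):
    """Build T-SQL statements that refresh statistics on the given
    tables; FULLSCAN trades longer runtime for better statistics."""
    suffix = " WITH FULLSCAN" if fullscan else ""
    return [f"UPDATE STATISTICS {t}{suffix};" for t in tables]

batch = build_stats_refresh(["dbo.Orders", "dbo.OrderLines"])
```

The generated statements can then be run by whatever scheduler you use (an Azure job, for example), which keeps the maintenance explicit and auditable.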

Probability or Statistics – Generating random variables inside a summation

I want to generate a table of expressions in which each term contains an uncorrelated, normally distributed random variable. Here is the sample code:

Ir = Cos[2 Pi xr/P + 2 Pi k/n] + Subscript[\[Epsilon], k];
Io = ReplaceAll[Ir, {xr -> xo, k -> j}];
d\[Phi]measured = FullSimplify[
   ArcTan[-Sum[Io Sin[2 Pi j/n], {j, 1, n}]/Sum[Io Cos[2 Pi j/n], {j, 1, n}]] -
   ArcTan[-Sum[Ir Sin[2 Pi k/n], {k, 1, n}]/Sum[Ir Cos[2 Pi k/n], {k, 1, n}]]];
nval = 4;
numvars = 4;
a = Table[
  ReplaceAll[d\[Phi]measured, {xo -> 0.1, xr -> 0, P -> 10, n -> nval}],
  {x, 1, numvars}]

When this evaluates, I want each of $\epsilon_1, \epsilon_2, \epsilon_3, \ldots$ to be an independent random draw, but I'm not sure how to generate a new random value for each of them.

Thank you in advance…
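For comparison, here is how the same computation can be done numerically in Python (not Mathematica), with an independent normal draw generated for every summand; the function and parameter names are my own:

```python
import math
import random

def dphi_measured(xo, xr, P, n, sigma_eps, rng):
    """One realisation of the measured phase difference, drawing an
    independent N(0, sigma_eps) noise term for every summand."""
    def intensity(x, idx):
        return (math.cos(2 * math.pi * x / P + 2 * math.pi * idx / n)
                + rng.gauss(0, sigma_eps))  # fresh epsilon per term

    Io = [intensity(xo, j) for j in range(1, n + 1)]
    Ir = [intensity(xr, k) for k in range(1, n + 1)]

    def phase(I):
        s = sum(v * math.sin(2 * math.pi * j / n) for j, v in enumerate(I, 1))
        c = sum(v * math.cos(2 * math.pi * j / n) for j, v in enumerate(I, 1))
        return math.atan(-s / c)

    return phase(Io) - phase(Ir)

rng = random.Random(0)
samples = [dphi_measured(0.1, 0.0, 10, 4, 0.01, rng) for _ in range(4)]
```

Because the noise is drawn inside the term, every table entry sees its own independent $\epsilon_k$ values; with `sigma_eps = 0` the result reduces to the noiseless phase $2\pi(x_o - x_r)/P$.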

Statistics – Possible analysis and prediction of a problem

I apologize in advance for not using exact / correct terms when describing the following problem.

Suppose I have the following system, which I want to be able to sample accurately (non-exhaustively), or with a tolerable error, in efficient time. The system has a dependent variable $Y$ and independent variables $X, Z, M, I$. $Y$ represents an energy of a certain value, and the other variables influence the energy level, but they only depend on the behavior of

The problem is that in about 60% of the cases the correlation is inversely proportional, while in about 40% the correlation appears as shown in the figure below.

So far, I have tried several methods: searching for an improved state via evolutionary algorithms, hill climbing and random search, and a linear regression, which produced large errors at the edges when predicting new data sets. Is there a way to either construct a model for prediction or simply optimize the problem in minimal time?

Probability or Statistics – Create a Custom PDF

I have a derived distribution that I want to play with, which is of the form
$$ \frac{1}{\sigma^2} \, 2^{-2+\frac{x}{10}} \, 5^{-1+\frac{x}{10}} \, e^{-\frac{1}{\sigma^2} 2^{-1+\frac{x}{10}} 5^{\frac{x}{10}}} \ln(10) $$
This is the $x \rightarrow 10^{x/20}$ transformation of the Rayleigh distribution.

The function seems to behave well for the values that interest me; here I choose $\sigma = 0.00005$, which, when plotted, gives

Plot[(2^(-2 + x/10) 5^(-1 + x/10) E^(-((2^(-1 + x/10) 5^(x/10))/\[Sigma]^2)) Log[10])/\[Sigma]^2, {x, -180, -50}, PlotRange -> All]

I want to use this function as a PDF so I can do some analysis with it. I have tried

CustomDistribution[\[Sigma]_] := ProbabilityDistribution[
  Evaluate[10^(x/20)/\[Sigma]^2 Exp[-(10^(x/20))^2/(2 \[Sigma]^2)] D[10^(x/20), x]],
  {x, -Infinity, Infinity}]

which returns

Function[\[FormalX], (
  2^(-2 + \[FormalX]/10) 5^(-1 + \[FormalX]/10)
    E^(-((2^(-1 + \[FormalX]/10) 5^(\[FormalX]/10))/\[Sigma]^2))
    Log[10])/\[Sigma]^2]

But if I now try to plot this with the same value for $\sigma$, as

Plot[PDF[CustomDistribution[\[Sigma]]][x], {x, -160, -60}]

I only get a flat line. Am I defining my PDF incorrectly? Is it even possible to create a custom PDF this way?
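As an independent sanity check outside Mathematica, the density can be rewritten algebraically as $\frac{\ln 10}{20\sigma^2}\,10^{x/10}\,e^{-10^{x/10}/(2\sigma^2)}$ (the same expression, with the powers of 2 and 5 combined into powers of 10) and integrated numerically; it should come out very close to 1. A minimal Python sketch:

```python
import math

def pdf_db(x, sigma):
    """Density of X = 20*log10(R) for R ~ Rayleigh(sigma); this is the
    quoted expression with 2^a 5^b collapsed into powers of 10."""
    u = 10.0 ** (x / 10.0)
    return math.log(10.0) / (20.0 * sigma**2) * u * math.exp(-u / (2.0 * sigma**2))

sigma = 5e-5
# midpoint-rule integration over [-200, -40], where essentially all
# of the mass lives for this sigma
step = 0.01
total = sum(pdf_db(-200 + step * (i + 0.5), sigma)
            for i in range(int(160 / step))) * step
```

If this integrates to 1, the density itself is fine, and the flat line is more likely a problem with how the plotted expression is being evaluated than with the PDF definition.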

Statistics – How to perform a regression with different error variances

I have two series of measurements; the first series is $X$ and the second is $Y$.
I have to model $Y$ as a function of $X$, knowing that the method by which $X$ was measured is twice as good, in terms of error variance, as the method by which $Y$ was measured.

I know about regression with measurement error in the independent variable, but I'm not sure what to do when there is also measurement error in the dependent variable and the two variances differ.

Which techniques / tools / constraints should I use for this kind of case?
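What you describe is the classical errors-in-variables setting with a known ratio of error variances, which Deming regression handles directly. A sketch in plain Python (names my own), using $\delta = \operatorname{Var}(\text{error in } Y)/\operatorname{Var}(\text{error in } X) = 2$ for your case, demonstrated on synthetic data:

```python
import math
import random

def deming_fit(x, y, delta):
    """Deming regression; delta = Var(y-error) / Var(x-error)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((u - mx) ** 2 for u in x) / (n - 1)
    syy = sum((v - my) ** 2 for v in y) / (n - 1)
    sxy = sum((u - mx) * (v - my) for u, v in zip(x, y)) / (n - 1)
    slope = (syy - delta * sxx
             + math.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
    return slope, my - slope * mx

# Synthetic demo: true line y = 2x + 1, with y-errors having twice the
# variance of the x-errors (sd 0.5 * sqrt(2) vs sd 0.5).
rng = random.Random(0)
xt = [rng.uniform(0, 10) for _ in range(5000)]
xs = [u + rng.gauss(0, 0.5) for u in xt]
ys = [2 * u + 1 + rng.gauss(0, 0.5 * math.sqrt(2)) for u in xt]
slope, intercept = deming_fit(xs, ys, delta=2.0)
```

Ordinary least squares would shrink the slope toward zero here (attenuation bias from the noisy $X$); Deming regression with the known variance ratio stays consistent.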

Algorithm – Grade statistics with arrays and files – Java

I wanted to get feedback on my solution, since I cannot submit answers for review through the course I am using.

Aims:

Write a Java program that:

1: Creates a grades.txt file with some entered grades

2: Reads these grades from the file and saves them in an ArrayList

3: After saving all elements in the ArrayList, returns the maximum, minimum and average grades of the list

4: Returns an ArrayList without duplicate grades. All duplicate grades must be removed

import java.io.*;
import java.util.*;

public class Main {

    public static void main(String[] args) throws IOException {
        File file = new File("grades.txt");

        // Write the sample grades to grades.txt
        PrintWriter output = new PrintWriter(file);
        output.println("12.5");
        output.println("19.75");
        output.println("11.25");
        output.println("10");
        output.println("15");
        output.println("13.25");
        output.println("14");
        output.println("9");
        output.println("10");
        output.println("19.75");
        output.close();

        // Read the grades back from the file
        ArrayList<Double> grades = new ArrayList<>();

        Scanner input = new Scanner(file);
        while (input.hasNext()) {
            String line = input.nextLine();
            grades.add(Double.parseDouble(line));
        }
        input.close();

        // Compute the mean
        double result = 0;
        for (Double grade : grades) {
            result += grade;
        }
        double mean = result / grades.size();

        // LinkedHashSet drops duplicates while preserving insertion order
        LinkedHashSet<Double> uniqueGrades = new LinkedHashSet<>(grades);
        ArrayList<Double> gradesWithoutDuplicates = new ArrayList<>(uniqueGrades);

        System.out.println("The grades are: " + grades);
        System.out.println("The highest grade is: " + Collections.max(grades));
        System.out.println("The lowest grade is: " + Collections.min(grades));
        System.out.println("The average is: " + mean);
        System.out.println("The grades list without duplicates is: " + gradesWithoutDuplicates);
    }

}

Output:

The grades are: [12.5, 19.75, 11.25, 10.0, 15.0, 13.25, 14.0, 9.0, 10.0, 19.75]
The highest grade is: 19.75
The lowest grade is: 9.0
The average is: 13.45
The grades list without duplicates is: [12.5, 19.75, 11.25, 10.0, 15.0, 13.25, 14.0, 9.0]