php – How do I use these Google variables from their JavaScript and place them in my DB?

As the title says, I am using the Google API to let people log in to my website. I am trying to grab basic information to create an account for them. I am having trouble because I need to get their information into PHP so I can use my database. Any help would be appreciated!

<html lang="en">
    <link rel="stylesheet" type="text/css" href="login.css">
    <meta name="google-signin-scope" content="profile email">
    <meta name="google-signin-client_id" content="">
    <script src="" async defer></script>

      <form class="form" action="googleLogin.php" method="get" enctype="multipart/form-data">
        <h1 class="login-title">COVTRACK Login</h1>
          <div id="google-log" class="g-signin2" data-onsuccess="onSignIn" data-theme="dark"></div>

      <script type="text/javascript">
        function onSignIn(googleUser) {
          // Useful data for your client-side scripts:
          var profile = googleUser.getBasicProfile();
          console.log("ID: " + profile.getId()); // Don't send this directly to your server!
          console.log('Full Name: ' + profile.getName());
          console.log('Given Name: ' + profile.getGivenName());
          console.log('Family Name: ' + profile.getFamilyName());
          console.log("Image URL: " + profile.getImageUrl());
          console.log("Email: " + profile.getEmail());
            // The ID token you need to pass to your backend:
          var id_token = googleUser.getAuthResponse().id_token;

          var usernameVar = profile.getName();
          var givennameVar = profile.getGivenName();
          var familynameVar = profile.getFamilyName();
          var imgVar = profile.getImageUrl();
          var emailVar = profile.getEmail();
          console.log("ID Token: " + id_token);

          // Hypothetical next step (a sketch, not Google's official snippet):
          // POST the ID token to googleLogin.php; the PHP side must verify
          // the token server-side before creating any database records.
          var xhr = new XMLHttpRequest();
          xhr.open('POST', 'googleLogin.php');
          xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
          xhr.send('idtoken=' + encodeURIComponent(id_token));
        }
      </script>
      </form>
</html>


mathematical optimization – Optimizing on a matrix of binary variables

I think I might just have a bug here, and I’ll delete the question if so, but what is causing this optimization to not terminate in a reasonable amount of time? (20+ minutes, no answer)

NMinimize[{Total[W, 2], 
  W ∈ Matrices[{15, 446}, Integers] && 
   And @@ Flatten[Table[0 <= W[[i, j]] <= 1, {i, 1, 15}, {j, 1, 446}]]}, W]

This is meant to be
$$\min_{W} \sum_{i=1}^{15}\sum_{j=1}^{446} W_{ij} \quad \text{s.t.} \quad W \in \{0,1\}^{15\times 446}$$

which should be a simple integer linear programming problem for Mathematica: return a matrix of zeros and done.
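For comparison, the same trivial ILP can be sketched outside Mathematica (a Python sketch using SciPy's `milp`; the flattened-vector reformulation is mine), and a standard ILP solver dispatches it instantly:

```python
import numpy as np
from scipy.optimize import milp, Bounds

# Minimize the sum of the entries of a 15 x 446 binary matrix W,
# flattened to a vector. With no further constraints the optimum is
# trivially the all-zeros matrix, objective value 0.
n = 15 * 446
res = milp(c=np.ones(n),                # objective: sum of all entries
           integrality=np.ones(n),      # every entry is integer-valued
           bounds=Bounds(0, 1))         # 0 <= W_ij <= 1, i.e. binary

print(res.fun)  # 0.0
```

In Mathematica itself, reformulating without the matrix (a flat list of variables, or `LinearOptimization`) may likewise sidestep the slow path.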

Would this run faster if I reformulated without the use of matrices?

pr.probability – Lyapunov condition on a scheme of random variables for a generalization of the central limit theorem

In numerical tests I see that the scaled and centered sum of dependent variates
$$\{X_1, X_2, \dots, X_m\} \qquad \text{with} \qquad X_i \sim \mathrm{Bin}(n,\theta_i), \quad \theta_i \sim \mathrm{Beta}(\alpha_i, \beta_i)$$

is normally distributed. For the Lyapunov or Lindeberg version of the CLT it is not a problem that the random variables are not identically distributed, and I expect that the variates meet the requirements (I will check that).

I assume that the dependence can be described as a mixing process. The dependence may vary, as it is derived via a minimum spanning tree. I guess the next step would be to find a scheme (triangular array) of random variables and check whether the Lyapunov or the Lindeberg condition applies.

Could someone confirm this approach? Or are you experienced with dependence of the variates with respect to the CLT? Is there a good paper you could suggest? Do you have tips on how to construct such a scheme of random variables? And do you have suggestions as to which of the two conditions is more easily verified?

Thanks a lot

complexity theory – Multiple Variables in Asymptotic Notation

I am trying to understand the multiple-variable definition of asymptotic notation, particularly the definition on Wikipedia. It is also discussed in "Asymptotic Analysis for two variables?", but I think that answer is wrong; at least, it is only corrected in the comments, with a reference to a lengthy answer. What I am looking for is just the answer to my confusion here. Wikipedia says:

> Big $O$ (and little $o$, $\Omega$, etc.) can also be used with multiple
> variables. To define big $O$ formally for multiple variables, suppose
> $f$ and $g$ are two functions defined on some subset of $\mathbb{R}^n$.
>
> We say $f(\vec{x})$ is $O(g(\vec{x}))$ as
> $\vec{x} \rightarrow \infty$ if and only if $\exists M\, \exists C>0$ such that for all $\vec{x}$ with $x_{i} \geq M$ $\textbf{for some } i$, $|f(\vec{x})| \leq C|g(\vec{x})|$.
>
> … For example, if $f(n, m)=1$ and $g(n, m)=n$,
> then $f(n, m)=O(g(n, m))$ if we restrict $f$ and $g$ to $(1, \infty)^{2}$,
> but not if they are defined on $(0, \infty)^{2}$. This
> is not the only generalization of big $O$ to multivariate functions, and
> in practice, there is some inconsistency in the choice of definition.

What I don't understand is: if we only require some $i$, why can we not use the domain $(0, \infty)^{2}$? For example, if I only take the variable $n$ to infinity ($i$ is $0$ in this case), shouldn't that be fine, with $f(n,m) \in O(g(n,m))$? Shouldn't the definition then read not "for some $i$" but "for all $i$"? Do I understand the notion of "for some" in the wrong way?
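The catch may be that the quantifier ranges over all points $\vec{x}$ satisfying the "some $i$" condition, not over a direction we choose ourselves. A small sketch (in Python; the helper names are mine) of the counterexample on $(0, \infty)^2$:

```python
# f(n, m) = 1 and g(n, m) = n on (0, inf)^2. Whatever M and C are
# proposed, the point (n, m) = (1/(2*C), M) satisfies m >= M (so the
# "for some i" condition holds), yet |f| = 1 > C*|g| = 1/2. On (1, inf)^2
# this escape is impossible, since n >= 1 makes C = 1 work everywhere.

def f(n, m):
    return 1.0

def g(n, m):
    return n

def bound_fails_somewhere(C, M):
    """Exhibit a point admitted by the 'some i' condition where
    |f| <= C*|g| fails, so no (C, M) pair can witness f = O(g)."""
    n, m = 1.0 / (2.0 * C), M      # n in (0, inf), and m >= M
    return abs(f(n, m)) > C * abs(g(n, m))

print(all(bound_fails_somewhere(C, M)
          for C in (1.0, 10.0, 1e6) for M in (1.0, 100.0)))  # True
```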

mathematical optimization – Numerical Minimization for a function of five variables

I have the function

f[w_, x_, y_, α_, g_] = Sqrt[((w^2 + x^4) (1 + 2 w α^2 + α^4))/(
  2 x^2 w) - α^2 y]*Sqrt[((g w)^2/x^2) + (2 x^2)/w + (2 w (g α - 1)^2)/x^2]

with the restrictions

$$w geq 1, $$

$$ x>0, $$

$$ y, \alpha, g \in \mathbb{R}.$$

and I appeal to NMinimize[] to find a numerical value for the minimum of the function f[w, x, y, α, g], that is,

NMinimize[{f[w, x, y, α, g], x > 0 && w >= 1 && {α, y, g} ∈ Reals}, 
  {w, x, α, y, g}] // Quiet

Mathematica then shows the result

{2., {w -> 1.78095, x -> 1.33452, α -> -8.73751*10^-9, 
  y -> 0.731324, g -> -2.98148*10^-8}}

On the other hand, looking in the software's help I find that I can specify a non-default method, which could give a better (more accurate) solution, for example DifferentialEvolution; that is,

NMinimize[{f[w, x, y, α, g], x > 0 && w >= 1 && {α, y, g} ∈ Reals}, 
  {w, x, α, y, g}, 
  Method -> "DifferentialEvolution"] // Quiet

giving the result

{1.09831, {w -> 1.00016, x -> 0.962037, α -> 0.276323, 
  y -> 11.3393, g -> -0.0477925}}
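As a sanity check (a Python sketch; the formula and the two candidate points are transcribed from above), evaluating $f$ at both reported minimizers confirms that the DifferentialEvolution point is indeed lower:

```python
import math

def f(w, x, y, a, g):
    # transcription of the Mathematica definition of f above (a = α)
    t1 = ((w**2 + x**4) * (1 + 2*w*a**2 + a**4)) / (2 * x**2 * w) - a**2 * y
    t2 = (g*w)**2 / x**2 + 2 * x**2 / w + 2 * w * (g*a - 1)**2 / x**2
    return math.sqrt(t1) * math.sqrt(t2)

default_pt = f(1.78095, 1.33452, 0.731324, -8.73751e-9, -2.98148e-8)
de_pt = f(1.00016, 0.962037, 11.3393, 0.276323, -0.0477925)
print(default_pt, de_pt)   # ~2.000 and ~1.098
```

So the default method stopped at a non-global local minimum; comparing candidate values this way is a cheap way to judge which Method did better.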

This leads to my question:

What is the best method (in Mathematica) to obtain the most accurate value for the true minimum of the function?

I am a novice with the NMinimize command.

numerical algorithms – Numerically solving an ode with infinitely many variables of which only finitely many are significant in magnitude

Suppose I have an ode that involves infinitely many variables, with the property that at any given time, only finitely many of them are large enough to be of interest (say $>10^{-10}$). However, at different times, different variables may become large.

It is also the case that given such a set of interesting variables, only a finite number of equations contain terms that are large. This is somewhat like a generalized version of locality.

The question is, is there any research on solving such equations numerically? My idea is to keep track of the variables of interest, and also “secondary” variables, which are significantly (in magnitude) coupled with the “interesting” variables; we can also keep track of “tertiary” variables and so on. We then go on solving the ode, ignoring the uninteresting variables (assuming them to be 0), and check regularly if a new variable comes into (or goes out of) interest.
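The bookkeeping described above might be sketched as follows (a toy Python version with explicit Euler; the names, the threshold, and the neighbour-based promotion rule are my own simplifications):

```python
THRESH = 1e-10  # significance cutoff for "interesting" variables

def active_set_euler(deriv, y0, dt, steps, neighbours, rescan_every=10):
    """Integrate dy_i/dt = deriv(i, y) over a sparse state dict y,
    updating only variables that are significant or coupled to one,
    and rescanning periodically for newly interesting variables."""
    y = dict(y0)
    active = list(y)
    for step in range(steps):
        if step % rescan_every == 0:
            # promote "secondary" variables coupled to significant ones
            for i in [i for i, v in y.items() if abs(v) > THRESH]:
                for j in neighbours(i):
                    y.setdefault(j, 0.0)
            active = [i for i in y
                      if abs(y[i]) > THRESH
                      or any(abs(y.get(j, 0.0)) > THRESH for j in neighbours(i))]
        y_new = dict(y)
        for i in active:
            y_new[i] = y[i] + dt * deriv(i, y)
        y = y_new
    # forget variables that fell back below the threshold
    return {i: v for i, v in y.items() if abs(v) > THRESH}

# Toy "infinite" chain reaction: dy_i/dt = y_{i-1} - y_i; mass starting
# at index 0 leaks down the chain, activating new indices as it goes.
res = active_set_euler(
    deriv=lambda i, y: y.get(i - 1, 0.0) - y.get(i, 0.0),
    y0={0: 1.0}, dt=0.01, steps=200,
    neighbours=lambda i: (i - 1, i + 1))
print(sorted(res))   # only finitely many indices ever became significant
```

Whether this is faithful to stiff chemical kinetics is another matter; an implicit or adaptive-step scheme would replace the Euler update, but the active-set bookkeeping stays the same.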

To give the background, I'm working on an artificial chemistry simulation. All the reactions and their reaction rate formulae (following Arrhenius) are determined by my set of rules. For example, $$A + X \to B,\qquad B \to X + C$$ simulates the conversion $A \xrightarrow{X} C$ with catalyst $X$. In this case, $C$ is initially small but eventually becomes large. This system is finite, and so is solvable by conventional methods. But if the reactions are infinite (though recursively enumerable and decidable, and, for simplicity, each combination of reactants yields only a finite number of reactions), this creates an infinitely large system of odes.

How to find Windows environment variables from hard drive without booting?

How can I find what the PATH variable on Windows 10 was on a backup “image” of an old system hard drive?

I returned my computer to field support, they “backed it up”, restored it to a new system, sent me the new system and nuked the old system.

When I asked about getting missing stuff off the old system, they said the only option is to see if I can find what’s missing on the backup image which they keep for seven days.

I’m missing my PATH and environment variables.

Unfortunately, restoring the system (not sure what their process was – I know that they re-installed some apps, so it wasn’t a system mirror) didn’t restore my environment variables and PATH.

I know that simply having the old PATH (and the other environment variables as well) won’t necessarily “fix” any problems on the new machine.

But, for example, I spent a lot of time setting up my Python environment, and I have no idea what the Python environment variables even were that I used, much less what they were set to (there are several Python instances on the “restored” hard drive).

I have access to the backup that field service made of my boot drive.

Back in the “old” days, I’d just copy the AUTOEXEC.BAT and CONFIG.SYS files. Those haven’t been around (AFAIK) for a long time.

How can I find what the PATH and other environment variables WERE in the backup of the old system that I can no longer boot or use (since it and its hard drive are gone)?

I welcome pointers to other posts. I know Google is my friend, but not today. 99.9% of everything I’ve found is “how do I set my PATH variable”, and so on.

Thank you!


(P.S. I just dumped all my environment variables to a file I named CONFIG.SYS so I never have to consult the answer to this question again)

algorithms – Assign few binary variables to make all polynomials identically zero

I have 50 polynomials $f_i$ over binary variables $(x_1,\ldots,x_{100})$, with $f_i(0,0,\ldots,0)=0$
for every $i \in \{1,\ldots,50\}$. I want to assign values to a few of the variables so that every $f_i$ becomes
identically zero, using as few assigned variables as possible. Is there
any way to do this, for example using SMT? Kindly share your ideas.

deployment – Tool to Execute SQL Server Scripts and Automatically Recognize and Prompt for Scripting Variables

I have a folder of scripts containing multiple objects and jobs that I roll out every time I deploy a new SQL Server instance. The scripts use scripting variables; as an example, here is an abridged job-creation script:

DECLARE @Owner SYSNAME = (SELECT (name) FROM sys.server_principals WHERE (sid) = 0x01)

EXEC @ReturnCode =  msdb.dbo.sp_add_job @job_name=N'Myjob', 
        @category_name=N'Database Maintenance', 
        @notify_email_operator_name=N'$(AlertOperator)', @job_id = @jobId OUTPUT

Note that @notify_email_operator_name will be set to whatever value is passed to $(AlertOperator)

These scripts are usually run through a Powershell script which loops through the folder and passes values to the $(AlertOperator) variable.

This approach allows a suite of scripts to be kept which can be rolled out to a new server easily.
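The scanning step of such a tool might be sketched like this (a Python sketch; the function names and the in-memory example are mine), recognizing `$(Name)` tokens and turning supplied values into `sqlcmd -v` arguments:

```python
import re

# sqlcmd scripting variables look like $(Name); collect the distinct
# names so a tool could prompt for each one exactly once.
VAR_PATTERN = re.compile(r"\$\((\w+)\)")

def find_scripting_variables(sql_text):
    """Return distinct $(...) variable names in order of first appearance."""
    seen = []
    for name in VAR_PATTERN.findall(sql_text):
        if name not in seen:
            seen.append(name)
    return seen

def sqlcmd_var_args(values):
    """Map {'AlertOperator': 'DBA'} to the matching sqlcmd -v arguments."""
    args = []
    for name, value in values.items():
        args += ["-v", '{}="{}"'.format(name, value)]
    return args

script = "EXEC msdb.dbo.sp_add_job @notify_email_operator_name=N'$(AlertOperator)'"
print(find_scripting_variables(script))            # ['AlertOperator']
print(sqlcmd_var_args({"AlertOperator": "DBA"}))   # ['-v', 'AlertOperator="DBA"']
```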

I was wondering: is there a GUI tool where I can open one or more .sql files, and it would automatically recognize the scripting variables in those files and prompt for their values before running the files against one or more defined servers?

pr.probability – An Inequality of Expected Value of Random Variables

I encountered the following problem in my research:

Suppose there are $N$ random variables that are independent and identically distributed (IID). Their probability density function (PDF) $f(x)$ is unimodal and symmetric about $0$: $f(x)$ is non-decreasing on $(-\infty,0)$, and $f(x) = f(-x)$ for every $x$ (for example, a uniform, normal, or Cauchy distribution centered at $0$).
For a given real number $x_0$, sort these random variables as $X_1, X_2, \dots, X_N$ such that $$|X_1-x_0| \leq |X_2-x_0| \leq \dots \leq |X_N-x_0|.$$
For example, if $N = 3$, the realized values are $-0.5, 1.5, 5$, and $x_0 = 1$, then $X_1 = 1.5$, $X_2 = -0.5$, $X_3 = 5$.
Let $Y_i = \left|\frac{X_1+X_2+\dots+X_i}{i}-x_0\right|^r$ for $i=1,\dots,N$ and $r = 1$ or $2$. Then, for any $x_0$ and $f(x)$, does the inequality
$$E Y_1 \leq E Y_2 \leq \dots \leq E Y_N$$
always hold, where $E$ denotes the expected value?

The inequality above has been tested via the Monte Carlo method for the uniform, normal, and Cauchy distributions. Details are in an external link, since I cannot post figures here…
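Such a check might look like the following (a Python sketch; the sample size, $N$, and the standard-normal choice are mine):

```python
import random

def monte_carlo_EY(N=3, x0=1.0, r=2, trials=50_000, seed=0):
    """Estimate E[Y_1], ..., E[Y_N] by simulation for IID N(0,1) samples."""
    rng = random.Random(seed)
    sums = [0.0] * N
    for _ in range(trials):
        # sort the sample by distance from x0: X_1, ..., X_N
        xs = sorted((rng.gauss(0.0, 1.0) for _ in range(N)),
                    key=lambda x: abs(x - x0))
        running = 0.0
        for i, x in enumerate(xs, start=1):
            running += x
            sums[i - 1] += abs(running / i - x0) ** r   # Y_i for this sample
    return [s / trials for s in sums]

print(monte_carlo_EY())   # estimates of E[Y_1], E[Y_2], E[Y_3]
```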

Moreover, is it possible to derive the PDF of $Y_i$?

I would be grateful for answers or ideas for either $r=1$ or $r=2$!