polynomials – How do you approach completing the square?

If M = 3x² – 8xy + 9y² – 4x + 6y + 13 then M must be
a) positive
b) negative
c) 0
d) an integer

I managed to figure it out by completing the square, but it took me a lot of time and I’m not sure I could solve such problems every time.

This whole expression can be written as
2(x – 2y)² + (x – 2)² + (y + 3)²,
and since the three squares cannot all vanish at the same time (x = 2 and y = −3 force x − 2y = 8 ≠ 0), M must be positive.

My point is that sometimes I’m lucky and can group the terms into squares, but other times I can’t.
Is there any particular technique/method which always works?

Secondly, I’d also like to know what you look for when completing the square.
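
For reference, a minimal Wolfram Language sketch of one mechanical check that always settles the sign question (though it doesn't hand you the prettiest grouping): view M as a quadratic form in the vector (x, y, 1) and test the associated symmetric matrix for positive definiteness. The matrix entries are read directly off M.

```
(* M == {x, y, 1}.A.{x, y, 1} for the symmetric matrix A below; if A is positive
   definite, then M > 0 for all real x, y, because (x, y, 1) is never the zero vector. *)
A = {{ 3, -4, -2},
     {-4,  9,  3},
     {-2,  3, 13}};
Simplify[{x, y, 1} . A . {x, y, 1} == 3 x^2 - 8 x y + 9 y^2 - 4 x + 6 y + 13]  (* True *)
PositiveDefiniteMatrixQ[A]                                                     (* True *)
```

CholeskyDecomposition[A] then produces an upper-triangular U with Transpose[U].U == A, so M equals the sum of the squares of the entries of U.{x, y, 1}, i.e. an explicit (if not the tidiest) completed-square form.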

polynomials – How to prove that a coefficient is not divisible by two using the multinomial theorem?

Given the product $(a + b + 2c + 2d)(2a + 2b + c + d)(a + b + c + 2d)(a + 2b + c + d)$, prove that the coefficient of $abcd$ is not divisible by two.

The multinomial theorem gives coefficient formulas for $(a+b+c+d)^n$, but they can’t be applied directly to the product above.

How can this be solved in the general case, without expanding the polynomial?
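
For reference, a small sanity check (not the expansion-free proof being asked for): in a product of four linear forms, the coefficient of $abcd$ is the permanent of the $4\times 4$ matrix whose rows are the factors and whose columns are the coefficients of $a,b,c,d$, and modulo 2 the permanent equals the determinant. A minimal Wolfram Language sketch:

```
(* Coefficient of a*b*c*d = permanent of the coefficient matrix (rows = factors,
   columns = a, b, c, d); mod 2 the permanent coincides with the determinant. *)
m = {{1, 1, 2, 2},   (*  a +  b + 2c + 2d *)
     {2, 2, 1, 1},   (* 2a + 2b +  c +  d *)
     {1, 1, 1, 2},   (*  a +  b +  c + 2d *)
     {1, 2, 1, 1}};  (*  a + 2b +  c +  d *)
{Permanent[m], Mod[Det[m], 2]}   (* coefficient of abcd, and its parity *)
```

The second entry should come out to 1, i.e. the coefficient is odd, in line with the claim above.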

polynomials – Count the number of k-inversions using cross-correlation?

I think I have some of the logic behind this question, but I’m not sure how to apply it.

A k-inversion in a bitstring b occurs when a 1 appears k indices before some 0. Given a bitstring b of length n, we’re to come up with a way to count all the k-inversions for each k from 1 to n-1. The algorithm should take less than O(n^2) time, assuming arithmetic can be done in constant time.

I think this should use cross-correlation as a black box, on n bits at a time. I’m thinking I can return a list of tuples or a mapping from each k to the number of k-inversions. But how do I represent the problem as two bitstrings, and how exactly would I reason about that? How would you argue that the polynomial-multiplication side of things works, and prove this runs in less than $O(n^2)$ time?
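
For reference, here is one way to set up the two polynomials, as a hedged Wolfram Language sketch: put the bits of b, reversed, into one polynomial and the complemented bits into another, so that the coefficient of $x^{n-1+k}$ in the product counts exactly the pairs (1 at position $i$, 0 at position $i+k$). The symbolic Expand below is only there to check the counts; the sub-quadratic bound would come from doing that single multiplication with an FFT in $O(n \log n)$.

```
(* kInversions[b]: for each k = 1 .. n-1, the number of pairs (1 at i, 0 at i+k) in the bit list b. *)
kInversions[b_List] := Module[{n = Length[b], p, q, c},
  p = Reverse[b] . x^Range[0, n - 1];      (* bit b[[i]] contributes b[[i]] x^(n - i)        *)
  q = (1 - b) . x^Range[0, n - 1];         (* bit b[[j]] contributes (1 - b[[j]]) x^(j - 1)  *)
  c = PadRight[CoefficientList[Expand[p q], x], 2 n - 1];
  Table[c[[n + k]], {k, 1, n - 1}]]        (* exponent n - 1 + k corresponds to j - i = k    *)

kInversions[{1, 0, 1, 0}]   (* -> {2, 0, 1} *)
```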

simplifying expressions – Want to implement this operation (multiplication of divergent integrals of polynomials) in Mathematica

I am currently researching divergent integrals.

  1. Definition. An extended number is an expression of the form $\int_a^b f(x)dx$, where the function $f(x)$ is defined almost everywhere on $(a,b)$. Generally (when the Riemann or Lebesgue sum converges, or when the equivalence follows from the rules expressed below), an extended number can be equal to a real or complex number.

  2. There are four simple equivalence rules based on linearity:

$\int_a^c f(x) dx=\int_a^b f(x)dx+\int_b^c f(x)dx$

$\int_a^b (f(x)+g(x)) dx=\int_a^b f(x)dx+\int_a^b g(x)dx$

$\int_a^b c f(x) dx =c \int_a^b f(x) dx$

$\int_{-\infty}^{-a} f(x) dx=\int_a^\infty f(-x) dx$

  3. There is one complicated rule, based on the Laplace transform:

$\int_0^\infty f(x)dx=\int_0^\infty\mathcal{L}_t(t f(t))\left(x\right)dx=\int_0^\infty\frac1x\mathcal{L}^{-1}_t( f(t))\left(x\right)dx$

  4. There is a rule that allows one to represent divergent integrals of polynomials via the most basic divergent integral $\tau=\int_0^\infty dx$:

$\int_0^\infty x^n dx=\frac{\left(\tau +\frac{1}{2}\right)^{n+2}-\left(\tau -\frac{1}{2}\right)^{n+2}}{(n+1)(n+2)}$

  5. Following the Laplace-transform rule, there is a similar rule (for $n>1$):

$\int_0^\infty \frac1{x^n} dx=\frac1{(n-1)!}\int_0^\infty x^{n-2} dx=\frac{\left(\tau +\frac{1}{2}\right)^{n}-\left(\tau -\frac{1}{2}\right)^{n}}{(n-1)n!}$

  6. There is a rule for converting in the opposite direction:

$\tau^n=B_n(1/2)+n\int_0^\infty B_{n-1}(x+1/2)dx$


Using these rules, one can multiply divergent integrals of polynomials.

Example.

$\int_0^\infty \left(2x^3-3x^2+x-4\right) dx \cdot \int_0^\infty \left(2x^2-3x+1\right) dx=\left(\frac{2 \tau ^3}{3}-\frac{3 \tau ^2}{2}+\frac{7 \tau }{6}-\frac{1}{8}\right)\left(\frac{\tau ^4}{2}-\tau ^3+\frac{3 \tau ^2}{4}-\frac{17 \tau }{4}+\frac{23}{480}\right)=\frac{\tau ^7}{3}-\frac{17 \tau ^6}{12}+\frac{31 \tau ^5}{12}-\frac{83 \tau ^4}{16}+\frac{5333 \tau ^3}{720}-\frac{4919 \tau ^2}{960}+\frac{1691 \tau }{2880}-\frac{23}{3840}=\int_0^{\infty } \left(\frac{7 x^6}{3}-\frac{17 x^5}{2}+10 x^4-\frac{41 x^3}{3}+\frac{1007 x^2}{60}-\frac{63 x}{10}-\frac{113}{120}\right) dx+\frac{127}{420}$


I did the previous example by hand. Is it possible to implement this efficiently in Mathematica?

I mean: (1) enter the coefficients of the two polynomials under the integrals; (2) obtain the coefficients of the resulting polynomial under the integral, plus the free term.

I suspect this may be some kind of convolution.
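
For reference, a hedged Wolfram Language sketch of the workflow described above, written directly from rules 4 and 6: toTau turns the coefficient list {c0, c1, ...} of the integrand into the corresponding polynomial in τ, fromTau converts a τ-polynomial back into an integrand plus a free term, and multiplying two divergent integrals is convert, multiply, convert back. The function names and the coefficient-list convention are ad hoc choices, not anything standard.

```
(* Rule 4: Integrate[x^k, {x, 0, Infinity}] rewritten as a polynomial in tau, summed over
   the coefficient list c = {c0, c1, ...} of the integrand. *)
toTau[c_List] := Expand@Sum[
    c[[k + 1]] ((tau + 1/2)^(k + 2) - (tau - 1/2)^(k + 2))/((k + 1) (k + 2)),
    {k, 0, Length[c] - 1}]

(* Rule 6: tau^k = BernoulliB[k, 1/2] + k Integrate[BernoulliB[k - 1, x + 1/2], {x, 0, Infinity}].
   Returns {integrand in x, free term}. *)
fromTau[poly_] := With[{c = CoefficientList[poly, tau]},
  {Expand@Sum[c[[k + 1]] k BernoulliB[k - 1, x + 1/2], {k, 1, Length[c] - 1}],
   Sum[c[[k + 1]] BernoulliB[k, 1/2], {k, 0, Length[c] - 1}]}]

multiplyDivergent[c1_List, c2_List] := fromTau[Expand[toTau[c1] toTau[c2]]]

(* The worked example above: coefficient lists of 2x^3 - 3x^2 + x - 4 and 2x^2 - 3x + 1. *)
multiplyDivergent[{-4, 1, -3, 2}, {1, -3, 2}]
```

And yes, at the level of coefficient lists the product of the two τ-polynomials is a convolution, so ListConvolve could replace the symbolic multiplication if one wants to stay purely on coefficient lists.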

ag.algebraic geometry – Is this property of polynomials generic?

Let $n \geq 2$, and consider a polynomial $f$ in $n$ variables, say over a field $K$ of characteristic 0. Recall that $f$ is geometrically irreducible if $f$ is irreducible over the algebraic closure of $K$. We know that for $n \geq 2$, being geometrically irreducible is a generic condition, i.e., applies to a non-empty open subset (in the Zariski topology) of the space of polynomials of degree $d$, say.

It seems that some geometrically irreducible polynomials are “more” reducible than others. Here is the example I have in mind: take $f(x,y) = x^3 - y^2$, so that $f$ is geometrically irreducible. However, $f(u^2,v^3) = u^6 - v^6$ IS reducible, in fact splits completely over $\overline{\mathbb{Q}}$.

Let us define two further classes of polynomials: we say that $f$ is practically reducible if there exist polynomials $u_1, \cdots, u_n$ such that $f(u_1, \cdots, u_n)$ is geometrically reducible, and we say that $f$ is algebraically practically reducible if there exist algebraic functions $u_1, \cdots, u_n$ such that $f(u_1, \cdots, u_n)$ is a polynomial which is geometrically reducible.

My questions are: are the conditions of being "practically irreducible" and "algebraically practically irreducible" generic? That is, do there exist non-empty Zariski-open subsets of polynomials of a given degree which are not practically reducible/algebraically practically reducible?

equation solving – Nice cubic polynomials

Let a polynomial with integer coefficients be nice if

  1. this polynomial has integer roots;
  2. its derivative also has integer roots.

For instance
$$p(x)=x(x-9)(x-24),\\
p'(x)=3(x-4)(x-18)$$

is the smallest known nice cubic polynomial. Smallest here means a polynomial with the smallest absolute value of the largest coefficient ($9\times 24=216$). But how does one verify with the help of MA that there are no smaller ones?
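
As a starting point, here is a brute-force Wolfram Language sketch. It assumes that it suffices to search monic cubics with three distinct integer roots a < b < c inside a fixed window; the window size is an assumption, and a genuinely exhaustive verification would need the window tied to the coefficient bound 216.

```
(* Search monic cubics (x-a)(x-b)(x-c) with distinct integer roots whose derivative also
   has integer roots and whose largest |coefficient| is below 216. The window [-bound, bound]
   is an assumption, not derived from the bound 216. *)
bound = 60;
nice = Reap[
    Do[
     Module[{p = Expand[(x - a) (x - b) (x - c)], roots},
      roots = x /. Solve[D[p, x] == 0, x];
      If[AllTrue[roots, IntegerQ] && Max[Abs[CoefficientList[p, x]]] < 216,
       Sow[{a, b, c}]]],
     {a, -bound, bound}, {b, a + 1, bound}, {c, b + 1, bound}]][[2]]
```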

polynomials – Curve fitting intraday bets on political candidates

I’m trying to generate a couple of numbers to represent all the individual bets on political candidates (this is a lot like intraday stock sales). There are a ton of individual sales, and I want to reduce them all down to a few numbers that give a decent approximation.

Each bet tends to be just a bit higher or lower than the previous bet. Sometimes the change is bigger, but it should, almost all the time, come close to fitting a line. With X as time over the day and Y as the price, is a fourth-order polynomial using a least-squares fit the best approach for data like this? (And maybe a 5th order?)

The purpose here is to have few enough numbers that I can make use of them, while still modeling the activity as a decent rough approximation.

Is there a better fitting curve than a polynomial? And is there a better way to fit the curve than least squares, for data of this type (it tends to revert to the mean second by second)?

ps – I learned all this stuff 40 years ago, and haven’t used it since. So apologies if this question is off, I’m re-learning as I go.
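
For reference, a minimal Wolfram Language sketch of the fit being described (the synthetic data is only a stand-in for the real list of {time, price} bets): Fit performs an ordinary least-squares fit against whatever basis you hand it, so the five coefficients of the quartic are the handful of numbers to keep.

```
(* Stand-in data: replace with the actual {time, price} pairs for the day. *)
data = Table[{t, 0.50 + 0.03 Sin[t/60.] + RandomReal[{-0.005, 0.005}]}, {t, 0., 390.}];

quartic = Fit[data, {1, t, t^2, t^3, t^4}, t]   (* ordinary least-squares quartic in t *)
```

LinearModelFit[data, {1, t, t^2, t^3, t^4}, t] gives the same coefficients plus residuals and goodness-of-fit diagnostics, which helps when deciding between a 4th- and a 5th-order fit.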

limits – Asymptotic behavior of the zeros of a polynomial for large values of a parameter

Consider a polynomial in $r$ of the form
$$
r^4+p_3(\lambda)r^3+p_2(\lambda)r^2+p_1(\lambda)r+p_0(\lambda),
$$

where the $p_i$ are polynomials in the parameter $\lambda$. I use degree four to simplify the notation, but you can think of a polynomial of any degree you want. Then I believe this result to be true: as $\lambda\to\infty$, the zeros of the polynomial above approach the solutions to
$$
r^4+l_3 r^3+l_2r^2+l_1r+l_0=0,
$$

where $l_i$ is the leading term of $p_i$.

For example, I would like to be able to say that the solutions of
$$
r^4+5r^3+(\lambda+1)r^2+(6\lambda+5)r+17\lambda^2+3=0, \qquad (1)
$$

approach the solutions to
$$
r^4+5r^3+\lambda r^2+6\lambda r+17\lambda^2=0
$$

as $\lambda\to\infty$. One thing I can do is to apply the scaling $r=\sqrt{\lambda}\rho$,
substitute into (1), divide by $\lambda^2$ and take the limit $\lambda\to\infty$ to obtain
$$
\rho^4+\rho^2+17=0, \qquad (2)
$$

which seems to show that the solutions of (1) approach the quantities $\sqrt{\lambda}\rho$, where $\rho$ runs over the solutions of (2).
This does not contradict the result I am trying to prove, and one would think it is possible to apply this scaling argument every time. However, the advantage of the result I want to prove is that it is very easy to state and apply, and it does not require finding the correct scaling for each particular case.

Of course, if one takes the limit as $\lambda$ approaches a finite real value, then the solutions approach the solutions of the polynomial obtained by applying the limit to each coefficient. However, the difficulty here is that the limit is at $\infty$, and I could not find any results concerning this type of problem. Any reference would be appreciated.
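
Not a proof, but a quick numerical sanity check of the example is easy in the Wolfram Language: for a large value of $\lambda$, compare the roots of (1) with the roots of the leading-term quartic and with $\sqrt{\lambda}$ times the roots of (2). The three lists should agree to leading order, up to ordering of the roots.

```
With[{lambda = 10.^6},
 {r /. NSolve[r^4 + 5 r^3 + (lambda + 1) r^2 + (6 lambda + 5) r + 17 lambda^2 + 3 == 0, r],  (* (1) *)
  r /. NSolve[r^4 + 5 r^3 + lambda r^2 + 6 lambda r + 17 lambda^2 == 0, r],                  (* leading terms *)
  Sqrt[lambda] (rho /. NSolve[rho^4 + rho^2 + 17 == 0, rho])}]                               (* scaled (2) *)
```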

trigonometric polynomials – How do you solve an equation like this?


ag.algebraic geometry – Braid group and the fundamental group of coefficients of polynomials

I am studying Norbert A’Campo’s article “Tresses, monodromie et le groupe symplectique” and I am trying to understand why $\rho(t_i)=T_i$. I am stating this just for the sake of reference; in principle this is a stand-alone question. Here it goes:

Let $n>3$ be a natural number. For each $a=(a_1,\ldots,a_n)\in \mathbb{C}^n$, consider the polynomial \begin{equation}f_a(x)=x^{n+1}+\sum_{i=1}^n a_i x^{i-1}=\prod_{i=1}^{n+1}(x-z_i^a).\end{equation} Define $\Delta$ to be the subset of those $a$ such that $f_a$ has a double root. It is well known that the fundamental group of $\mathbb{C}^n\setminus \Delta$ is isomorphic to the Artin braid group $B(n+1)$ on $n+1$ strands. Namely, the isomorphism is induced by the homeomorphism $\mathbb{C}^n\setminus \Delta\to \text{Conf}_{n+1}(\mathbb{C})$, $a\mapsto (z_1^a,\ldots,z_{n+1}^a)$. Repeated application of a generalisation of the Hyperplane Section Theorem gives us a line $H\subset \mathbb{P}^n$ such that $(\mathbb{C}^n\setminus\Delta)\cap H\cong \mathbb{C}\setminus A$, where $A$ is a set of $n$ points in the complex plane.
This is the space of coefficients of $f_a$ where we have fixed $n-1$ of the $a_i$ and are letting the remaining one vary. The same theorem also states that the inclusion induces a surjective group homomorphism $\pi_1((\mathbb{C}^n\setminus\Delta) \cap H)\to \pi_1(\mathbb{C}^n\setminus \Delta)$.

Now, the fundamental group of $\mathbb{C}\setminus A$ is the free group on $n$ generators, and we have a composition of group homomorphisms \begin{equation}\pi_1(\mathbb{C}\setminus A)\cong \pi_1((\mathbb{C}^n\setminus \Delta)\cap H)\to\pi_1(\mathbb{C}^n\setminus \Delta)\cong B(n+1)\end{equation} where all the maps are discussed above.

Question

I am trying to show that the generator $\gamma_i\in \pi_1(\mathbb{C}\setminus A)$, where $\gamma_i$ is the loop going once around a puncture $p_i$, is mapped to the $i$-th generator $t_i\in B(n+1)$. One thing that I need is that going once around $p_i$ corresponds to permuting the $i$-th and $(i+1)$-th roots of $f_a$. Any help is greatly appreciated!
