## number theory – Radix Economy of Complex Bases

If we extend the allowed bases for a numeral system to the complex numbers, is e still the most economical base? If not, what would it be?

There’s the well-known formula for radix economy, $$E(b, N) = b \lfloor \log_b N + 1 \rfloor \approx \frac{b}{\ln b} \ln N,$$ where $$b$$ is the base and $$N$$ is a given number.

I don’t know if this formula is still valid for complex numbers. Nonetheless, the only local minimum this function seems to have, even extended over the complex numbers, is at b = e + 0i.
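As a sanity check of the real-base statement: the asymptotic form $$\frac{b}{\ln b}\ln N$$ is minimized at $$b = e$$, since $$\frac{d}{db}\frac{b}{\ln b} = \frac{\ln b - 1}{\ln^2 b}$$ vanishes exactly at $$b = e$$. A small Python sketch (the function name and the sample bases are my own choices):

```python
import math

def radix_economy(b, N):
    """Asymptotic radix economy: E(b, N) ~ b * log_b(N) = b * ln(N) / ln(b)."""
    return b * math.log(N) / math.log(b)

# For any fixed N the prefactor b/ln(b) is minimized at b = e,
# so e beats every other real base > 1.
N = 10**6
costs = {b: radix_economy(b, N) for b in (2.0, math.e, 3.0, 10.0)}
best = min(costs, key=costs.get)
print(best)  # math.e
```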

## The set of isomorphism classes of Z/nZ-equivariant line bundles over a 2-dimensional Z/nZ-CW complex

Suppose I wish to find the set of isomorphism classes of $$\mathbb{Z}/n\mathbb{Z}$$-equivariant line bundles over a 2-dimensional, compact $$\mathbb{Z}/n\mathbb{Z}$$-CW-complex $$X$$, i.e. $$\mathrm{Vect}^{1}_{\mathbb{Z}/n\mathbb{Z}}(X)$$. Using equivariant homotopy theory, I can write it as

$$\mathrm{Vect}^{1}_{\mathbb{Z}/n\mathbb{Z}}(X)\approx[X,B_{\mathbb{Z}/n\mathbb{Z}}GL_{1}(\mathbb{C})]_{\mathbb{Z}/n\mathbb{Z}}.$$

Nonequivariantly, $$B_{\mathbb{Z}/n\mathbb{Z}}GL_{1}(\mathbb{C})$$ is a $$K(\mathbb{Z},2)$$, so the above would seem to be a Bredon cohomology group $$H^2_{\mathbb{Z}/n\mathbb{Z}}(X;M)$$ with $$M$$ some Mackey functor, and I could try to compute it with the universal coefficients spectral sequence. However, that seems like an awful lot of machinery for one of the simpler cases.

If the action is relatively nice, though it has fixed points, is there a simpler way of finding $$\mathrm{Vect}^{1}_{\mathbb{Z}/n\mathbb{Z}}(X)$$?

If not, which Mackey functor $$M$$ is the right one?


## complex analysis – Find $e^{z}$ in the form $u+iv$ and the magnitude of $e^{z}$

Find $$e^{z}$$ in the form $$u + iv$$.

So I am trying to break this down into real and imaginary parts so I can put it into Euler’s formula and find the magnitude. Generally, I’m trying to follow this:

$$z = 2\pi i(1 + i)$$
$$2\pi i(1+i) = 2\pi i + 2\pi i^2 = i(2\pi + 2\pi i)$$
$$= e^{0}e^{2\pi i(1+i)}$$
so applying Euler’s formula:
$$= e^{0}(\cos{(2\pi + 2\pi i)} + i\sin{(2\pi + 2\pi i)})$$

but I’m stuck… my v still has an i in it. What can I do?
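Following that thread to the end: writing $$z = 2\pi i(1+i) = -2\pi + 2\pi i$$ first gives a real $$u = -2\pi$$ and a real $$v = 2\pi$$, and the stray $$i$$ disappears. A quick numeric check in Python (a sketch, not part of the original question):

```python
import cmath, math

z = 2j * math.pi * (1 + 1j)   # z = 2*pi*i*(1+i) = -2*pi + 2*pi*i
u, v = z.real, z.imag         # u = -2*pi, v = 2*pi -- both real
w = cmath.exp(z)              # e^z = e^u * (cos v + i*sin v) = e^{-2*pi}
print(w)                      # about 0.00186744, with negligible imaginary part
print(abs(w), math.exp(u))    # |e^z| = e^{Re z} = e^{-2*pi}
```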

## calculus and analysis – Why does this integral of a real, analytic, absolutely integrable function give a complex result?

I am using Mathematica to develop some “interesting” problems for students to solve using Fourier series.

The following computation seems as though it should yield a real result:

$$B_n = \int_0^1 \exp(-9 x^2) \cos(n \pi x)\, dx, \qquad n \in \mathbb{Z},~ n \ge 0$$

When I code this in Mathematica, I find it returns a complex result, which does not seem plausible. Here is the code for the first term $$(n=0)$$ (which is simpler than the general case):

```
B[0] = Integrate[Exp[-9 x^2], {x, 0, 1}, Assumptions -> Element[x, Reals]]
```

This returns $$\tfrac{1}{6}\sqrt{\pi}\,\mathrm{erf}(3)$$, which is the correct answer.

However, when I compute the integral for the values of $$n>0$$, I find the following:

```
B[n_] = 2 Integrate[Exp[-9 x^2] Cos[n Pi x], {x, 0, 1},
  Assumptions -> Element[n, Integers] && n > 0 && Element[x, Reals]]
```

Returns:
$$\frac{1}{6} \sqrt{\pi}\, e^{-\frac{1}{36} \pi^2 n^2} \left(\mathrm{erf}\left(3-\frac{i \pi n}{6}\right)+\mathrm{erf}\left(3+\frac{i \pi n}{6}\right)\right)$$

This is a bit baffling. I see no reason that we should have wandered into the complex plane to compute this integral. Anyone have some perspective here?
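One observation: the two erf terms are complex conjugates of each other (since $$\mathrm{erf}(\bar z) = \overline{\mathrm{erf}(z)}$$), so their sum is $$2\,\mathrm{Re}\,\mathrm{erf}(3 + i\pi n/6)$$, which is real; the computation has merely been routed through the complex plane. A Python sketch checking this numerically for $$n=1$$ (the complex-argument erf is implemented here as a power series, since the standard library only provides real erf):

```python
import math

def erf_series(z, terms=80):
    """erf(z) = (2/sqrt(pi)) * sum_k (-1)^k z^(2k+1) / (k! (2k+1)),
    valid for all complex z; |z| ~ 3 here, so 80 terms is plenty."""
    s, term = 0j, z
    for k in range(terms):
        s += term / (2 * k + 1)
        term *= -z * z / (k + 1)
    return 2 / math.sqrt(math.pi) * s

def B(n, steps=2000):
    """2 * integral_0^1 exp(-9 x^2) cos(n pi x) dx, by Simpson's rule."""
    h = 1.0 / steps
    f = lambda x: math.exp(-9 * x * x) * math.cos(n * math.pi * x)
    s = f(0.0) + f(1.0)
    s += 4 * sum(f((2 * i - 1) * h) for i in range(1, steps // 2 + 1))
    s += 2 * sum(f(2 * i * h) for i in range(1, steps // 2))
    return 2 * (h / 3) * s

n = 1
closed = (math.sqrt(math.pi) / 6) * math.exp(-(math.pi * n) ** 2 / 36) * \
         (erf_series(3 - 1j * math.pi * n / 6) + erf_series(3 + 1j * math.pi * n / 6))
print(closed.imag)               # the conjugate pair cancels: imaginary part ~ 0
print(abs(closed.real - B(n)))   # and the real part matches direct integration
```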

Thanks.


## Integral Value through Complex Integration (residue theorem)

I’d like to know how to evaluate the integral

$$I=\int_0^\infty\frac{e^{-s^2}\sin(s)}{s}\,ds=\frac{\pi}{2}\,\mathrm{erf}(1/2)$$

through the residue theorem. My first steps were to expand $$\sin$$ in exponentials, and note that the only pole is located at $$s=0$$, which is removable, so the residue is $$0$$. However, I can’t seem to design a closed path in $$\mathbb{C}$$, enclosing the origin, that produces an error function. I would appreciate any help with determining a useful path to get the above result!

As a side note, the result can also be obtained by expanding $$\sin(s)$$ in a power series and integrating term-by-term. The result is an infinite sum of Gamma functions that ends up giving the power series for the error function. However, I am more interested in how this may be computed through the residue theorem, if at all.
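This doesn’t settle the contour question, but as a sanity check of the stated value, here is a short numeric verification in Python (`math.erf` is in the standard library; the cutoff at $$s=8$$ is an arbitrary truncation choice):

```python
import math

def integrand(s):
    # e^{-s^2} * sin(s)/s, with the removable singularity at s = 0 filled in
    return math.exp(-s * s) * (math.sin(s) / s if s != 0 else 1.0)

def simpson(f, a, b, steps=4000):
    """Composite Simpson's rule on [a, b] with an even number of steps."""
    h = (b - a) / steps
    total = f(a) + f(b)
    total += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, steps // 2 + 1))
    total += 2 * sum(f(a + 2 * i * h) for i in range(1, steps // 2))
    return h / 3 * total

# e^{-s^2} < 1e-27 beyond s = 8, so truncating the infinite range there is harmless.
I = simpson(integrand, 0.0, 8.0)
print(I, math.pi / 2 * math.erf(0.5))
```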


## How is the integral of $\frac{f^\prime}{f}$ being chosen for proofs of the Complex Logarithm and Roots?

I have been working with some friends to make sense of a proof in Ullrich’s Complex Made Simple, which follows below:

Corollary 4.15. Any nonvanishing holomorphic function in a simply connected set has a holomorphic logarithm. That is, if $$V$$ is simply connected, $$f \in H(V)$$, and $$f$$ has no zero in $$V$$, then there exists $$L \in H(V)$$ with $$e^L = f$$ in $$V$$.

Proof. It follows from Theorem 4.0 (the integral of a holomorphic function over an open set is 0 everywhere if and only if the function is the derivative of some holomorphic function in the open set) and Theorem 4.14 (Cauchy’s Theorem for simply connected sets) that there exists $$F \in H(V)$$ such that
$$F^\prime = \frac{f^\prime}{f}. \quad (\text{WHAT IS THIS!?!})$$
Now the chain rule shows that $$(fe^{-F})^\prime = 0$$, so $$fe^{-F}$$ is constant. Setting $$L = F + c$$ for a suitable $$c \in \mathbb{C}$$, we obtain $$fe^{-L} = 1$$. $$\square$$

Now, everything about the proof and its result makes sense to us, including how you can show, using this idea, that any nonvanishing holomorphic function in a simply connected set has a holomorphic $$n$$th root.

The only thing we do not understand is how they chose their function. I have seen this pop up in every explanation or proof of this theorem, but I do not believe it is ever addressed where it comes from. The only time I have seen it addressed is in reference to the fact that $$f^\prime$$ and $$f$$ are both holomorphic over the set (which we get, and that makes sense to us), but never where they come up with the idea to use $$\frac{f^\prime}{f}$$ as their function for $$F^\prime$$.
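One standard way to see where $$\frac{f^\prime}{f}$$ comes from is to run the argument backwards: if a holomorphic logarithm $$L$$ with $$e^L = f$$ existed at all, differentiating that identity would force $$L^\prime = \frac{f^\prime}{f}$$, so this is the only possible candidate for $$F^\prime$$. As a sketch (a heuristic, not the book’s text):

```latex
% If e^{L} = f held, differentiating both sides would give
e^{L} L' = f' \quad\Longrightarrow\quad L' = \frac{f'}{f} \qquad (f \neq 0 \text{ on } V).
% So the proof antidifferentiates f'/f to get F, then verifies
\left( f e^{-F} \right)' = f' e^{-F} - f F' e^{-F}
  = e^{-F}\left( f' - f \cdot \frac{f'}{f} \right) = 0 .
```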

Any help clarifying this would be fantastic!


## graphics – Plotting Minkowski product of two sets in complex 2D plane

I am trying to draw the Minkowski product of two sets in the complex
plane in `Mathematica`. While I can draw the individual sets
in Mathematica using `ComplexRegionPlot`, I do not know if there is
a way to draw the corresponding Minkowski product.

For example, consider the following complex regions
\begin{align*} \mathcal{G}_{1} & =\left\{ z\in\mathbf{C}\mid\mathrm{Re}(z)\geq\vert z\vert^{2}\right\}, \\ \mathcal{G}_{2} & =\left\{ z\in\mathbf{C}\mid\frac{3}{2}\mathrm{Re}(z)\geq\vert z\vert^{2}+\frac{1}{2}\right\}, \end{align*}

where their Minkowski product is

$$\mathcal{G}_{1}\cdot\mathcal{G}_{2}=\left\{ z_{1}z_{2} \in \mathbf{C} \mid z_{1}\in\mathcal{G}_{1},\, z_{2}\in\mathcal{G}_{2}\right\},$$

and I am trying to plot the complex region associated with this Minkowski product $$\mathcal{G}_{1}\cdot\mathcal{G}_{2}$$. Any help/suggestions will be much appreciated.
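Completing the square shows both sets are closed disks: $$\mathcal{G}_1$$ is $$\vert z - \tfrac12\vert \le \tfrac12$$ and $$\mathcal{G}_2$$ is $$\vert z - \tfrac34\vert \le \tfrac14$$. I don’t know of a built-in for Minkowski products either, but a brute-force fallback is to sample each region on a grid and take all pairwise products; the resulting point cloud can then be scatter-plotted. A Python sketch (grid sizes and helper names are my own choices):

```python
def in_G1(z):  # Re(z) >= |z|^2, i.e. the closed disk |z - 1/2| <= 1/2
    return z.real >= abs(z) ** 2

def in_G2(z):  # (3/2) Re(z) >= |z|^2 + 1/2, i.e. the closed disk |z - 3/4| <= 1/4
    return 1.5 * z.real >= abs(z) ** 2 + 0.5

def grid(xmin, xmax, ymin, ymax, n=25):
    """n x n grid of complex points covering the given bounding box."""
    return [complex(xmin + (xmax - xmin) * i / (n - 1),
                    ymin + (ymax - ymin) * j / (n - 1))
            for i in range(n) for j in range(n)]

G1 = [z for z in grid(0.0, 1.0, -0.5, 0.5) if in_G1(z)]
G2 = [z for z in grid(0.5, 1.0, -0.25, 0.25) if in_G2(z)]
product = [z1 * z2 for z1 in G1 for z2 in G2]  # sample points of G1 . G2
print(len(G1), len(G2), len(product))
```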


## cv.complex variables – Real integrals with complex analysis

I don’t have a clear formal viewpoint on this problem. Solving the Euler–Lagrange equation for a string with a point-mass perturbation,
$$\frac{\partial^2 \phi}{\partial x^2} = \delta(x-a),$$
I encountered the following integral:
$$I = \int_{-\infty}^{+\infty} dk\, f(x,k) = \int_{-\infty}^{+\infty} dk\, \frac{e^{ik(x-a)}}{k^2}$$
Now, during the lessons we computed it as:
$$I = \lim_{\epsilon \to 0}\int_{\gamma_+ \cup\, \gamma_-} dk\, \frac{e^{ik(x-a)}}{(k - i\epsilon)^2}$$
with $$\gamma_+$$ being the semicircle in the upper half plane, counterclockwise, and $$\gamma_-$$ being the semicircle in the lower half plane, clockwise. We found that
$$I = 2\pi (x-a)\, \theta(x-a)$$
My first question is:
If we compute it choosing another pole shift, such as $$(k+i\epsilon)$$, we have:
$$\int_{\gamma_-} dk\, \frac{e^{ik(x-a)}}{(k+i\epsilon)^2} = -2\pi (x-a)\, \theta(a-x)$$

In general, every shift can give a different result. Is it due to the fact that
$$\lim_{\epsilon \to 0} \int \neq \int \lim_{\epsilon \to 0}$$
and I can’t commute the operations (because, for instance, the integrand $$f(x,k,\epsilon)$$ isn’t dominated by a function $$g(x,k)$$)? Moreover, how can I show that I can’t commute the two? And in which cases can I commute them?
My second question is:
Is there another method that uniquely provides an answer to the integral $$I$$ above?
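As a concrete illustration that the two $$i\epsilon$$ prescriptions really do disagree before the limit is taken, here is a numeric sketch in Python with $$u = x - a = 1$$ (the cutoff $$K$$ and step count are arbitrary truncation choices standing in for the closed contour):

```python
import cmath

def shifted_integral(u, eps, sign, K=60.0, steps=40000):
    """Trapezoid approximation of int_{-K}^{K} e^{iku} / (k - sign*i*eps)^2 dk."""
    h = 2 * K / steps
    total = 0j
    for i in range(steps + 1):
        k = -K + i * h
        w = 1.0 if 0 < i < steps else 0.5  # trapezoid endpoint weights
        total += w * cmath.exp(1j * k * u) / (k - sign * 1j * eps) ** 2
    return h * total

u, eps = 1.0, 0.1
I_plus = shifted_integral(u, eps, +1)   # pole pushed to +i*eps
I_minus = shifted_integral(u, eps, -1)  # pole pushed to -i*eps
print(I_plus, I_minus)  # for u > 0 the two prescriptions differ by about 2*pi*u
```

For $$u > 0$$, shifting the pole below the axis leaves no pole inside the upper closure, so that prescription gives (approximately) zero, while the other picks up the full double-pole residue; the difference of order $$2\pi u$$ is exactly the shift dependence the question is about.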


## plotting – Complex mapping of curve

I have a curve in the complex plane (a unit circle), and I want to plot the curve with a complex function applied to it ($$f(z)=\sqrt{z}$$). How do I go about doing that?

I know how this can be done analytically, I just want a more general method that can be applied to different curves.
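A general recipe: parametrize the curve, sample it, and apply $$f$$ pointwise to the samples; the resulting point list is what you would hand to any plotting routine. A Python sketch with `cmath` (the helper name is mine, and note that `cmath.sqrt` is the principal branch):

```python
import cmath, math

def image_of_curve(curve, f, n=400):
    """Sample a parametrized curve t -> curve(t), t in [0, 1), and apply f pointwise."""
    return [f(curve(t / n)) for t in range(n)]

unit_circle = lambda t: cmath.exp(2j * math.pi * t)
mapped = image_of_curve(unit_circle, cmath.sqrt)

# The principal branch halves the argument, so the image of the unit circle
# is the half of the unit circle with Re(w) >= 0.
print(mapped[0], mapped[100])
```

The same `image_of_curve` works for any other curve or function; only the branch-cut behavior of multivalued functions like $$\sqrt{z}$$ needs care.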

## code request – How can we find the powers of a complex number?

This is the complex number I’m dealing with:
`zb = (1/7) (Cos[Pi/3] + I Sin[Pi/3])`

I need to plot its first 10 powers in the plane. I also need to label the points with “z to the power n” (where n is the power to which I raise zb).

I need to label the axes; I need to do all this in Mathematica.

My questions:

1. How can I represent a complex number in Mathematica?
2. How can I plot its first 10 powers in the plane?
3. How can I label each point on the graph with the corresponding power?

My professor gave me a hint to use the `Callout` command in Mathematica. Thank you!
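The actual plot and `Callout` labels belong in Mathematica (the hint points at wrapping each plotted point in `Callout` inside `ListPlot`), but the underlying data is easy to sanity-check in any language. A Python sketch of the ten powers and their labels:

```python
import math

# zb = (1/7) (cos(pi/3) + i sin(pi/3)); |zb| = 1/7, arg(zb) = pi/3
zb = (1 / 7) * complex(math.cos(math.pi / 3), math.sin(math.pi / 3))

powers = {f"z^{n}": zb ** n for n in range(1, 11)}
for label, w in powers.items():
    print(f"{label}: ({w.real:.3e}, {w.imag:.3e})")
# |zb^n| = 7^(-n), so the labeled points spiral rapidly in toward the origin,
# rotating by pi/3 at each step.
```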