real analysis – The Cartesian Product of two metric spaces and sequences that converge

Prove that if $(X,d)$ is the Cartesian product of the two metric spaces $(X_1,d_1)$ and $(X_2,d_2)$, then a sequence $\{(x_n^1,x_n^2)\}$ in $X$ converges to $(x^1,x^2)$ if and only if $x_n^1 \rightarrow x^1$ and $x_n^2 \rightarrow x^2$.

my proof: $(\rightarrow)$ Suppose $(x_n^1,x_n^2)\rightarrow (x^1,x^2)$. Let $\epsilon >0$; then $\exists N\in\mathbb{N}$ such that $d((x_n^1,x_n^2),(x^1,x^2))<\epsilon\ \forall n\geq N$. Then it follows by definition that $d_1(x_n^1,x^1)+d_2(x_n^2,x^2)<\epsilon$. Then $d_1(x_n^1,x^1)<\epsilon_1$ and $d_2(x_n^2,x^2)<\epsilon_2$, where $\epsilon=\epsilon_1+\epsilon_2$ and $\epsilon_1>0,\epsilon_2>0$ are both chosen arbitrarily, for $n\geq N$. Then $x_n^1\rightarrow x^1$ and $x_n^2\rightarrow x^2$.

$(\leftarrow)$ Suppose $x_n^1\rightarrow x^1$ and $x_n^2\rightarrow x^2$. Choose $\epsilon_1>0$ and $\epsilon_2>0$; then $\exists N_1,N_2\in\mathbb{N}$ such that $d_1(x_n^1,x^1)<\epsilon_1$ and $d_2(x_n^2,x^2)<\epsilon_2$ $\forall n\geq \max\{N_1,N_2\}$. Take $\epsilon=\epsilon_1+\epsilon_2$, so that $d_1(x_n^1,x^1)+d_2(x_n^2,x^2)<\epsilon\ \forall n\geq \max\{N_1,N_2\}$. Then $d((x_n^1,x_n^2),(x^1,x^2))<\epsilon\ \forall n\geq \max\{N_1,N_2\}$. Therefore, $(x_n^1,x_n^2)\rightarrow (x^1,x^2)$.
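One step worth making explicit in the forward direction, assuming the product metric is the sum $d = d_1 + d_2$ (as the proof uses): each component distance is dominated by the product distance, so the same $N$ works for both components with the same $\epsilon$, and no splitting of $\epsilon$ into $\epsilon_1+\epsilon_2$ is needed:

```latex
d_1(x_n^1, x^1) \le d_1(x_n^1, x^1) + d_2(x_n^2, x^2)
               = d\big((x_n^1, x_n^2), (x^1, x^2)\big) < \epsilon,
\qquad \text{and likewise for } d_2(x_n^2, x^2).
```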

Let me know if my proof is wrong and where it needs correcting.

complex analysis – Prove there exists $z$ with $|z|=1$ such that $\left|\sum_{k=0}^n a_k z^k\right| \geq \sqrt{\sum_{k=0}^n |a_k|^2}$

I have to prove that there exists $z$ with $|z|=1$ for which the following holds:
$$\left|\sum_{k=0}^n a_k z^k\right| \geq \sqrt{\sum_{k=0}^n |a_k|^2}$$

Note that $a_0, a_1, \dots, a_n \in \mathbb{C}$.

I don’t really know how to start. First I tried to solve it only for real $a_k$, not complex, and then expressed $z$ as $e^{i\varphi}$; I then wrote the left side as $|a_0+a_1 e^{i\varphi}+ a_2 e^{2i\varphi} + \dots +a_n e^{ni\varphi}|$. Then I tried to use the triangle inequality, but I don’t see how this is useful.
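One reason such a bound is plausible: the average of $|p(z)|^2$ over the unit circle equals $\sum_k |a_k|^2$ (Parseval), so the maximum of $|p|$ on the circle is at least $\sqrt{\sum_k |a_k|^2}$. A quick numerical sanity check of that mean-value identity (random coefficients and sample counts are illustrative choices, not from the question):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
# Random complex coefficients a_0, ..., a_n (an illustrative choice).
a = rng.normal(size=n + 1) + 1j * rng.normal(size=n + 1)

# Sample z = e^{i*theta} uniformly on the unit circle.
theta = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
z = np.exp(1j * theta)
p = np.polyval(a[::-1], z)          # p(z) = sum_k a_k z^k

mean_sq = np.mean(np.abs(p) ** 2)   # mean of |p|^2 over the circle
parseval = np.sum(np.abs(a) ** 2)   # sum_k |a_k|^2
max_p = np.max(np.abs(p))           # so max_p >= sqrt(parseval)
```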

fa.functional analysis – discontinuous functions on the Sobolev borderline

It seems to be tricky to find concrete examples of discontinuous functions that are Sobolev borderline cases. Some searching turned up an example in $H^1({\mathbb R}^2)$, but I have been unable to find a “simple” example in what I intuitively expect to be the easiest case, namely $H^{1/2}(S^1)$, and I was surprised that none of the textbooks I could think to search in give one. My first impulse was to try classic discontinuous functions like the square wave and sawtooth, whose Fourier series are easy to compute: these just miss the mark, as they turn out to be in $H^s(S^1)$ for all $s < 1/2$ but not for $s=1/2$. The one thing I have tried that worked was writing down an explicit Fourier series like
$$f(x) := \sum_{k=2}^\infty \frac{e^{2\pi i k x}}{k \ln k},
\qquad \text{ (here $x \in S^1 := {\mathbb R} / {\mathbb Z}$)}$$

which one can easily check is in $H^{1/2}(S^1)$, and one can then use summation by parts to estimate $\sum_{k=N}^\infty \frac{e^{2\pi i kx}}{k \ln k}$ for large $N$ and small $|x|$ and thus prove $\lim_{x \to 0} f(x) = \infty$. One can do something similar with a Fourier transform and integration by parts to find a function in $H^{1/2}({\mathbb R})$ that is continuous everywhere except at $x=0$, where it blows up. But this type of construction is a lot trickier than what I was hoping for; expressing a function as a conditionally convergent series or improper integral does not give me the feeling that I can get my hands on it.
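For what it's worth, the blow-up at $x=0$ is easy to see numerically from partial sums of the series above; a rough sketch (the cutoff $K$ and sample points are arbitrary choices):

```python
import numpy as np

def f_partial(x, K=200_000):
    """Partial sum of f(x) = sum_{k=2}^{K-1} e^{2*pi*i*k*x} / (k ln k); K is an arbitrary cutoff."""
    k = np.arange(2, K)
    return np.sum(np.exp(2j * np.pi * k * x) / (k * np.log(k)))

# The H^{1/2} norm involves sum_k k|c_k|^2 = sum_k 1/(k (ln k)^2), which converges;
# but at x = 0 every term is positive and the partial sums grow like ln ln K.
vals = [abs(f_partial(x)) for x in (0.1, 0.01, 0.0)]
```

Away from $x = 0$ the oscillation keeps the partial sums bounded, while at $x = 0$ they keep growing (very slowly) as $K$ increases.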

So, first question: does anyone know a simpler example of something that is discontinuous and belongs to $H^{1/2}(S^1)$ or $H^{1/2}({\mathbb R})$? Or other interesting examples of Sobolev borderline functions that can be understood without having to search the exercises in Baby Rudin for hints?

Followup question, admittedly a little vague: if you don’t know more concrete examples, is there any deep reason why they don’t exist, i.e. why every function I can think to write down in a reasonable way turns out to fall short of the borderline case?

python – League of Legends Summoner Analysis

find_game_ids is far more complicated than it needs to be. You have essentially two “counters”, Idgame and i. One is used to build a string, and the other limits how many loops happen, but they’re the same value if you think about it, just opposites. You don’t need Idgame, since you can just check i < 20, and you don’t need to manually manage i either; range is for use cases exactly like this:

def find_game_ids(self, accId, key):
    game_id = []
    url_match_list = f"{accId}?queue=420&endIndex=20&api_key={key}"
    response2 = requests.get(url_match_list)
    for i in range(20):
        game_id.append(f"{response2.json()['matches'][i]['gameId']}?api_key={key}")

    return game_id

i here will be every number from 0 to 19. I would also recommend creating a variable elsewhere to hold the 20 and calling it N_GAMES or something. You seem to use that 20 in multiple spots; if you change it in one place and forget to change it somewhere else, you’ll potentially have a nasty bug.

Other things I changed:

  • Variable names should be lowercase, separated by underscores, according to PEP 8. You have names throughout this file that inconsistently use Upper_case. Use lower_case unless you’re naming a class.
  • Instead of adding strings together using +, I changed it to use f-strings (note the f before the quotes). That lets you put a variable directly into a string using the {variable_name} syntax.
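For example (the names and URL here are illustrative, not from the original code):

```python
name = "summoner"   # illustrative values
key = "abc123"

# String concatenation with +:
url_plus = "https://example.invalid/" + name + "?api_key=" + key

# The equivalent f-string reads much more naturally:
url_f = f"https://example.invalid/{name}?api_key={key}"
```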

This can be further improved though. If you’re iterating to create a list like you are here, list comprehensions can sometimes be cleaner:

def find_game_ids(self, accId, key):
    url_match_list = f"{accId}?queue=420&endIndex=20&api_key={key}"
    response2 = requests.get(url_match_list)
    return [f"{response2.json()['matches'][i]['gameId']}?api_key={key}"
            for i in range(20)]

The major readability problem in each case stems from how long that string is. You may want to break it over multiple lines, or generate it outside of the function using another function.

In game_data, you’re calling response.json() repeatedly. I don’t know the implementation of that method, so I don’t know if it uses caching, but it wouldn’t surprise me if json is an expensive method. Save that into a variable once and use it as needed:

def game_data(self, game_list, key, sumName):
    . . .
    for urls in game_list:

        response = requests.get(urls)
        resp_json = response.json()  # Save it to use it again later
        Loop = 0
        index = 0
        while Loop <= 10:

            if resp_json['participantIdentities'][index]['player']['summonerName'] != sumName:
                Loop = Loop + 1
                index = index + 1
            elif resp_json['participantIdentities'][index]['player']['summonerName'] == sumName:
                . . .

        . . .

Not only is that shorter, it also makes it easier to add in some preprocessing to the data later, and also has the potential to be much faster, because you aren’t doing the same processing over and over again (if it doesn’t cache).

#Finding avg of each stat

Like I said, you’re using 20 in multiple places. What if you want to change this number later? It’s not going to be fun to go around and find every relevant 20 and update it to the new value.

Have that number stored once, and use that variable:

# Top of file by imports
N_GAMES = 20

. . .

# The for-loop in the updated find_game_ids
for i in range(N_GAMES):

. . .

# At the bottom of game_data

For the classes win_calc and id_collect, there are a few noteworthy things.

First, they shouldn’t be classes. A good indicator that you shouldn’t be using a class is that you’re never using self in any of its methods. By using a class in this case, you have to construct an empty object just to call a method on it later.

Just make those classes plain functions:
import random

def is_dis_mane_good(winlist):

    winlist = sum(winlist) / 20

    if (winlist < .33):
        trash = ('DIS MANE STINKS', 'run while you can', 'I repeat, YOU ARE NOT WINNING THIS', 'I predict a fat L',
                 'Have fun trying to carry this person', 'He is a walking trash can', 'He needs to find a new game',
                 'BAD LUCK!!!')
    . . .

And then just use them as plain functions.


Second, if it were appropriate to have them as classes, the names should be in CapWords: WinCalc and IDCollect (or maybe IdCollect).

Also, I’d rename is_dis_mane_good. Using slang in the output of the program is one thing, but giving your methods obscure names isn’t doing yourself or any of your readers any favors.

As well in that function, I’d make some more changes:

  • I suggest you prefix your decimal numbers with a 0. 0.33 is much more readable than .33.

  • You can use operator chaining to simplify those checks too. winlist > 0.33 and winlist <= 0.5 can become 0.33 < winlist <= 0.5

  • There’s that 20 again ;). The more places you have it, the more likely you are to forget to update at least one of them. I’d use N_GAMES there instead.

  • You can get rid of the duplicated print(random.choice(. . .)) calls by assigning the list to a variable after each check, then having one print at the bottom.

After those changes, I’m left with this:

def competency_message(winlist):
    winlist = sum(winlist) / N_GAMES

    message_set = ()
    if winlist < 0.33:
        message_set = ('DIS MANE STINKS', 'run while you can', 'I repeat, YOU ARE NOT WINNING THIS',
                       'I predict a fat L', 'Have fun trying to carry this person',
                       'He is a walking trash can', 'He needs to find a new game', 'BAD LUCK!!!')

    elif 0.33 < winlist <= 0.5:
        message_set = ('Losing a bit', 'Not very good', 'He needs lots of help',
                       'Your back might hurt a little', 'Does not win much')

    elif 0.5 < winlist <= 0.65:
        message_set = ('He is ight', 'He can win a lil', 'You guys have a decent chance to win',
                       'Serviceable', 'Should be a dub')

    elif winlist > 0.65:
        message_set = ('DUB!', 'You getting carried', 'His back gonna hurt a bit',
                       'winner winner chicken dinner', 'Dude wins TOO MUCH',
                       'You aint even gotta try', 'GODLIKE')

    print(random.choice(message_set))

complex analysis – Analytic continuation of a power series with non-negative rational coefficients.

Let $D$ be an open disk about zero in $\mathbb{C}$, $f:D\to \mathbb{C}$ be an analytic function, and suppose the power series expansion of $f$ about $z=0$ is given by $f(z)=\sum_{n=0}^{\infty}a_nz^n$ with $a_n\in \mathbb{Q}$ non-negative for all $n$. My question is the following:

Are there any conditions on the coefficients $a_n$ which ensure that $f$ admits an analytic continuation to $z=1$, and in such a case, are there further conditions which ensure that every such analytic continuation takes the same value at $z=1$?

fa.functional analysis – Closed convex hull in infinite dimensions vs. continuous convex combinations

No. Even in one dimension. Say $K$ is the open interval $(0,1)$; we show $0 \notin K^*$. Let $\mu$ be a probability measure with support contained in $(0,1)$. Then
$$r(\mu) := \int_K x\,d\mu(x)$$

is the integral of a positive function: $x > 0$ a.e., so $\int_K x\,d\mu(x) > 0$. Similarly $1 \notin K^*$.

In a Banach space $E$, if there is any extreme point of $M = \overline{\text{conv}}\, K$ that does not already belong to $K$, then it also does not belong to $K^*$. So what if $K$ is the set $\text{ex}\, M$ of extreme points of $M$? Can we recover $M$ as $K^*$?

A very nice little book that discusses this situation is

Phelps, Robert R., Lectures on Choquet’s theorem, Lecture Notes in Mathematics. 1757. Berlin: Springer. 124 p. (2001). ZBL0997.46005.

Choquet’s theorem tells us that every point of a compact convex set $M$ is of the form $r(\mu)$ for some probability measure concentrated on the set $\text{ex}\, M$ of extreme points of $M$.

My first publication to attract any notice was this one, where there is a generalization of Choquet’s theorem to certain closed bounded noncompact sets $M$.

Edgar, G. A., A noncompact Choquet theorem, Proc. Am. Math. Soc. 49, 354-358 (1975). ZBL0273.46012.

ca.classical analysis and odes – inequality for two integral expressions

Given positive functions $f, g, h \in L^1$, I would like to compare the following two expressions:
$$\begin{aligned}
a_1 &= \int_{0}^{\infty} \! dx \, f(x) \int_{-\infty}^{\infty} \! dy \, g(y) \int_{-\infty}^{\infty} \! dz \, h(z) \, \frac{\sin(x \, (y-z))}{\pi \, (y-z)} \\
a_0 &= \int_{0}^{\infty} \! dx \, f(x) \int_{-\infty}^{\infty} \! dy \, g(y) \, h(y)
\end{aligned}$$

In other words, $a_0$ is $a_1$ with the sinc kernel replaced by a delta distribution, so an alternative formulation is to look at the behaviour of
$$a_\epsilon = \int_{0}^{\infty} \! dx \, f(x) \int_{-\infty}^{\infty} \! dy \, g(y) \int_{-\infty}^{\infty} \! dz \, h(z) \, \frac{\sin(\tfrac{x}{\epsilon} \, (y-z))}{\pi \, (y-z)}$$

as $\epsilon$ goes to zero.

I am interested in conditions for when something can be said about the relative values of $a_0$ and $a_1$. Ideally, there would be some bounds for $a_0$ in terms of $a_1$, but even just conditions for when there is a simple inequality would be helpful.
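As a sanity check on one concrete case (every choice below, the Gaussian $g=h$, the exponential $f$, and the grids, is an illustrative assumption, not from the question): with $g = h$, the sinc kernel acts like a frequency cutoff and can only remove $L^2$ mass, so a crude discretization gives $a_1 < a_0$:

```python
import numpy as np

# Illustrative choices: f(x) = e^{-x}, g = h a Gaussian, truncated domains.
x = np.linspace(0.0, 10.0, 201)
y = np.linspace(-4.0, 4.0, 201)
dx, dy = x[1] - x[0], y[1] - y[0]
f = np.exp(-x)
g = np.exp(-y ** 2)

# a_0: the inner double integral collapses to the single integral of g(y)*h(y).
a0 = f.sum() * dx * (g * g).sum() * dy

# a_1: for each x, integrate g(y) h(z) sin(x(y-z)) / (pi (y-z)) over the (y, z) grid.
d = y[:, None] - y[None, :]
inner = np.empty_like(x)
for i, xi in enumerate(x):
    # sin(xi*d)/(pi*d) written via np.sinc, since np.sinc(u) = sin(pi u)/(pi u)
    kernel = (xi / np.pi) * np.sinc(xi * d / np.pi)
    inner[i] = g @ kernel @ g * dy * dy
a1 = (f * inner).sum() * dx
```

This is only one symmetric, positive-definite case; it says nothing about general $g \neq h$, which is presumably where the interesting conditions live.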

fa.functional analysis – Particular subspace of the orthogonal complement of a function

Fix $f\in L^2(\mathbb{R})$ compactly supported. For any $g\in L^\infty(\mathbb{R})$ supported away from the support of $f$, the space $g L^2(\mathbb{R})$ is included in $f^\perp$.

Now, fix $h\in L^2(\mathbb{R})$. Does $h^\perp$ include a space $g L^2(\mathbb{R})$ with $g\in L^\infty(\mathbb{R})$? If not, are the compactly supported functions the only functions that satisfy this property?

analysis – Contribution of Time-Dependent Variable to Change in Function

Going through a paper recently, I got stuck on the simple differential analysis the authors were using. I had not come across this before, so maybe there is an elegant way to explain it.

In a 2018 paper on solar photovoltaics (P.9 Main Body, P1 in Supplementary Material), the authors have a cost function $C$ which describes the cost associated with manufacturing one unit and depends on manufacturing variables $x,y$, which change over time (e.g. price of silicon, price of chemicals, etc.)


They want to determine the contribution of a single variable $x$ to the total change of the cost function between two points in time, $\Delta C(t_0, t_1)$. The variables are known only at discrete points in time ($t_0, t_1$).

They start by writing out the differential of the cost function $C$ as

$$\mathrm{d}C(x(t), y(t)) = \frac{\partial C}{\partial x}\frac{\mathrm{d}x}{\mathrm{d}t}\,\mathrm{d}t + \frac{\partial C}{\partial y}\frac{\mathrm{d}y}{\mathrm{d}t}\,\mathrm{d}t$$

where the contribution of the change in variable $x$ over time $t_0 < t < t_1$ is then

$$\Delta C_x = \int_{t_0}^{t_1} \frac{\partial C}{\partial x}\frac{\mathrm{d}x}{\mathrm{d}t}\,\mathrm{d}t$$

Here they say

If it were possible to observe the (…) variables x in continuous time, (…) (this equation) would provide all that is needed to compute the contribution of each variable x.

Using logarithmic differentiation, they go on to rewrite the expression as

$$\Delta C_x = \int_{t_0}^{t_1} C(t)\,\frac{\partial \ln C}{\partial x}\frac{\mathrm{d}x}{\mathrm{d}t}\,\mathrm{d}t$$

and then for $C(t)$ assume a constant $C(t) \approx \tilde{C}$, which is ultimately chosen to be $\tilde{C} = \frac{\Delta \tilde{C}}{\Delta \ln \tilde{C}}$, such that $\Delta C_x + \Delta C_y = \Delta C$.

My questions:

  1. Why not assume all variables $x,y$ change linearly between $t_0$ and $t_1$, and then integrate
    $$\Delta C_x = \int_{t_0}^{t_1} \frac{\partial C}{\partial x}\frac{\mathrm{d}x}{\mathrm{d}t}\,\mathrm{d}t\;?$$

  2. Even if the time dependence of the variables were known (e.g. daily data on the price of silicon), integrating would not yield what the authors are actually looking for. They are interested in the contribution of single variables to the total change in $C$ (e.g. what percentage of total manufacturing cost reductions is due to a decrease in the silicon price). But integrating using $\Delta C_x = \int_{t_0}^{t_1} \frac{\partial C}{\partial x}\frac{\mathrm{d}x}{\mathrm{d}t}\,\mathrm{d}t$ would yield different results for different time dependencies of the variables: a variable $x(t)$ (purple) would yield a different $\Delta C_x$ than a variable $x'(t)$ (blue).
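The path dependence in question 2 is easy to demonstrate with a toy cost function (all choices here are illustrative): take $C(x,y)=xy$, fix the same endpoints $(x,y)$ going from $(1,1)$ to $(2,2)$, and compare a path where both variables move together against one where $x$ moves first and $y$ afterwards. The endpoints, and hence $\Delta C = 3$, are identical, yet $\Delta C_x$ differs:

```python
import numpy as np

def path_integral(weight, var):
    """Discretized line integral ∫ weight d(var) along a path (midpoint rule)."""
    w_mid = 0.5 * (weight[1:] + weight[:-1])
    return float(np.sum(w_mid * np.diff(var)))

t = np.linspace(0.0, 1.0, 1001)

# Path A: x and y both move linearly from 1 to 2.
xa, ya = 1 + t, 1 + t
# Path B: same endpoints, but x moves first (y fixed), then y moves (x fixed).
xb = np.concatenate([1 + t, np.full_like(t, 2.0)])
yb = np.concatenate([np.full_like(t, 1.0), 1 + t])

# For C(x, y) = x*y: dC_x = ∫ y dx and dC_y = ∫ x dy.
dcx_a, dcy_a = path_integral(ya, xa), path_integral(xa, ya)   # 1.5, 1.5
dcx_b, dcy_b = path_integral(yb, xb), path_integral(xb, yb)   # 1.0, 2.0
```

Both paths satisfy $\Delta C_x + \Delta C_y = 3 = \Delta C$, but the split between the two contributions depends on the unobserved path.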
