## Prove a lower bound

Prove: $$n^{5}-3n^{4}+\log\left(n^{10}\right) \in \Omega\left(n^{5}\right)$$.

I always get stuck on these types of questions, where there is a "$$-(xy^{z})$$" (a subtracted term) in the expression.
Whenever I see the solutions for these types of questions, I can't identify a single method that works every time, and it's frustrating. How do I approach these types of questions?
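One standard way to handle the subtracted term is to absorb it into a constant fraction of the leading term. Worked out for this example: $$\tfrac{1}{2}n^{5} \geq 3n^{4}$$ holds exactly when $$n \geq 6$$, and $$\log(n^{10}) = 10\log n \geq 0$$ for $$n \geq 1$$, so
$$n^{5}-3n^{4}+\log(n^{10}) \;\geq\; n^{5}-3n^{4} \;\geq\; \tfrac{1}{2}n^{5} \quad \text{for all } n \geq 6,$$
and the constants $$c=\tfrac{1}{2}$$, $$n_{0}=6$$ witness membership in $$\Omega(n^{5})$$.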


## fa.functional analysis – Intuition/references for understanding bound states/discrete spectrum relationship

I am trying to form intuition for the following "well-known" facts about the spectrum of unbounded operators (Schrödinger, wave, etc.) $$L$$ on $$\mathbb{R}^n$$.

Let $$\lambda\in\mathbb{R}$$ satisfy
$$Lf=\lambda f$$ for some function $$f$$.

Roughly speaking: we call $$\lambda$$ an "eigenvalue" when some solution $$f$$ lies in an $$L_2$$ space. If the eigenvalue equation is satisfied but $$f$$ is not in $$L_2$$ (roughly, it doesn't decay at infinity), then we don't call $$\lambda$$ an eigenvalue, but consider it part of the essential (or continuous?) spectrum. An example of the latter is the usual Laplacian.

Is there a resource where I can understand how generic this "equivalence" is:

Existence of “bound state” <-> Existence of eigenvalue
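A concrete instance of the dichotomy above, for the free Laplacian mentioned in the question: every plane wave satisfies the eigenvalue equation,
$$-\Delta\, e^{i\langle \xi, x\rangle} = |\xi|^{2}\, e^{i\langle \xi, x\rangle},$$
yet $$|e^{i\langle \xi, x\rangle}| \equiv 1$$, so no plane wave lies in $$L_2(\mathbb{R}^n)$$. Accordingly $$-\Delta$$ on $$\mathbb{R}^n$$ has purely continuous spectrum $$[0,\infty)$$ with no eigenvalues, i.e. no bound states.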


## parameterized complexity – Reduction rules to lower bound minimum degree of a graph

I'm trying to come up with a list of rules that return an equivalent instance of the following problem while eliminating all vertices of degree 2 or less from the graph:

Given a graph $$G=(V,E)$$, the goal is to decide whether there is a set $$S\subseteq V$$ of size at most $$k$$ such that $$G-S$$ is an almost forest.

An almost forest is a graph where every component is either a tree or a cycle.

So given any graph (possibly a multigraph) and an integer $$k$$, the instance is $$(G,k)$$.

I know I can remove any component that is either a tree or a cycle (this includes isolated vertices); the resulting graph, on vertex set $$V'$$, has a solution $$S\subseteq V'$$ of size at most $$k$$ iff the original graph has a solution of size at most $$k$$.

That eliminates all vertices of degree 0.

The problem with vertices of degree 1 (leaves) is:

Suppose $$v\in V$$ is a leaf and let $$u\in V$$ be its only neighbor. If $$u$$ is part of a cycle $$C$$, then $$C$$ is not a component, so we must remove either $$v$$ or a vertex from $$C$$ to obtain an almost forest. If we delete $$v$$, we obtain a cycle (possibly with other vertices attached to it, and chords within it).
If $$C$$ becomes a component, then this is the best we could have done and picking $$v$$ was a smart choice, so we reduce to $$(G-v,k-1)$$. But if $$C$$ still has chords within it, for example, it might have been smarter to delete some other vertex of $$C$$ that is an endpoint of a chord, so the reduction performed by picking $$v$$ is incorrect.
Also, as stated before, $$v$$ might actually be a part of the solution, so it is definitely not correct to reduce the instance to $$(G-v,k)$$. It seems as if it's impossible to remove leaves.

I'd love any ideas I can get.
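For what it's worth, the first rule (delete every component that is already a tree or a cycle, which also handles degree 0) can be sketched in code. This is a minimal, hypothetical Python sketch for simple graphs only (a true multigraph version would need edge multiplicities); the function names are invented for illustration:

```python
from collections import defaultdict

def components(adj, vertices):
    """Return the connected components of the graph as lists of vertices."""
    seen, comps = set(), []
    for s in vertices:
        if s in seen:
            continue
        stack, comp = [s], []
        seen.add(s)
        while stack:
            v = stack.pop()
            comp.append(v)
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        comps.append(comp)
    return comps

def reduce_instance(edges, vertices, k):
    """Drop every component that is already a tree or a cycle.

    A connected component with vertex set C and m internal edges is a
    tree iff m == |C| - 1 (this covers isolated vertices, m == 0), and
    a cycle iff every vertex in it has degree 2.
    """
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    keep = set()
    for comp in components(adj, vertices):
        m = sum(len(adj[v]) for v in comp) // 2
        is_tree = m == len(comp) - 1
        is_cycle = all(len(adj[v]) == 2 for v in comp)
        if not (is_tree or is_cycle):
            keep.update(comp)
    # an edge with one endpoint kept has both endpoints kept
    # (they are in the same component)
    new_edges = [(u, v) for u, v in edges if u in keep]
    return keep, new_edges, k
```

The budget $$k$$ is unchanged, matching the iff statement above: no deleted component ever needs a vertex of $$S$$.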


## nt.number theory – Lower bound on a Truncated Divisor Sum

Let $$d(n)$$ be the number of divisors function, i.e., $$d(n)=\sum_{k\mid n} 1$$, of the positive integer $$n$$.

I am interested in estimating the following sum from below,
$$A(a,x)=\sum_{n\leq x} \min( d(n), M)^a,$$
for some function $$M$$ depending on the upper limit $$x$$.

For now, let $$a=1.$$

In the answer to the following question, where the case $$a=1$$ was considered, the lower bound on $$A(a,x)$$ in the case $$M\leq (\log x)^{10}$$ was stated to be of the same order as the upper bound in that answer.

Reading over the answer much later now, it is not so clear to me what the lower bound actually is, because the upper bound is also not fully stated. I have looked at the Selberg-Delange chapter and the subsequent chapter in Tenenbaum's Introduction to Analytic and Probabilistic Number Theory, as suggested, but it is still not fully clear what's going on.

For example, Theorem 4, p. 205 in that book states
$$\pi_k(x)=\frac{x}{\log x}\frac{(\log\log x)^{k-1}}{k!}\left\{\lambda\left( \frac{k-1}{\log\log x}\right)+ O\left(\frac{k}{(\log\log x)^2}\right)\right\}$$
where $$\pi_k(x)=\left|\left\{ n\leq x: \omega(n)=k \right\}\right|,$$
and
$$\lambda(z)=\frac{1}{\Gamma(z+1)}\prod_p \left(1+\frac{z}{p-1}\right)\left(1-\frac{1}{p}\right)^z,$$
and $$k$$ is allowed to grow with $$x$$, with $$1\leq k\leq A \log\log x$$.

So presumably I need an estimate of the form
$$\sum_{1\leq k:\, 2^k\leq M} 2^k\, \pi_k(x),$$
for the lower bound. What are the explicit steps in establishing this?
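For sanity-checking candidate bounds at small $$x$$, the sum $$A(a,x)$$ can be computed directly by brute force. A minimal Python sketch (the function names are my own, and this is purely a numerical check, not part of any asymptotic argument):

```python
def divisor_counts(x):
    """Compute d(n) for all 1 <= n <= x with a divisor sieve, O(x log x)."""
    d = [0] * (x + 1)
    for k in range(1, x + 1):
        for n in range(k, x + 1, k):
            d[n] += 1
    return d

def A(a, x, M):
    """A(a, x) = sum_{n <= x} min(d(n), M)^a, with truncation level M."""
    d = divisor_counts(x)
    return sum(min(d[n], M) ** a for n in range(1, x + 1))
```

For example, with $$M$$ large enough that the truncation is inactive, `A(1, x, M)` reduces to the classical divisor sum $$\sum_{n\leq x} d(n) \sim x\log x$$, which gives a quick consistency check against the expected order of growth.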


## na.numerical analysis – Is Sun’s spectral variation bound for normal matrices optimal?

In On the variation of the spectrum of a normal matrix, Sun proves the following result (Corollary 1.2):

Let $$A$$ be an $$n$$-square normal matrix and $$B$$ an arbitrary $$n$$-square matrix. Then $$\min_{\sigma \in S_n} \max_{1\le i\le n} |\lambda_i(A) - \lambda_{\sigma(i)}(B)| \le C(n) \|A-B\|,\quad C(n) = n, \tag{$\star$}$$ where $$\|\cdot\|$$ is the spectral norm.

This result is a direct consequence of a result for the Frobenius norm (Theorem 1.1), which is shown to be optimal. However, the spectral norm result above is not shown to be optimal: the example provided only shows that the constant $$C(n)$$ in ($$\star$$) must be at least $$\sqrt{n}$$.

Further work by Li and Sun provides additional hypotheses under which $$C(n)$$ can be taken to be smaller, but I have not seen any results which improve ($$\star$$) under the sole hypothesis of normality of $$A$$.

I'm interested in the optimality of the constant $$C(n)$$ in ($$\star$$). Can it be improved? Is it possible that $$C(n) = o(n)$$ works? Is there a lower bound better than the $$C(n) = \Omega(\sqrt{n})$$ shown by Sun?
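For small $$n$$, the optimal matching distance on the left of ($$\star$$) can be computed by brute force over all $$n!$$ permutations, which makes it easy to probe the constant numerically on candidate examples. A minimal Python sketch using NumPy (the function name is invented for illustration):

```python
import itertools

import numpy as np

def matching_distance(A, B):
    """Brute-force optimal matching distance between the spectra of A and B:
    min over permutations sigma of max_i |lambda_i(A) - lambda_sigma(i)(B)|.
    Only feasible for small n, since it enumerates all n! permutations."""
    la = np.linalg.eigvals(A)
    lb = np.linalg.eigvals(B)
    n = len(la)
    return min(
        max(abs(la[i] - lb[p[i]]) for i in range(n))
        for p in itertools.permutations(range(n))
    )

# Normal A (here diagonal, hence normal), arbitrary B:
# check Sun's bound with C(n) = n on a random perturbation.
rng = np.random.default_rng(0)
n = 4
A = np.diag(rng.standard_normal(n))
B = A + 0.1 * rng.standard_normal((n, n))
assert matching_distance(A, B) <= n * np.linalg.norm(A - B, 2)
```

Random search like this can only ever confirm the bound or enlarge a lower bound on $$C(n)$$; it cannot, of course, settle the asymptotic question.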


## solid – By applying the ISP are we bound to segregating the class too?

Splitting the interface does not necessarily mean that you should split the class. You can think of interfaces as roles an object of a class plays in the context of some client code. An object can play more than one role – e.g. picture two different clients "seeing" two different aspects of the same concept. The aspects themselves could be of a more general nature. E.g., consider IComparable: it allows a sorting algorithm to be written in terms of that interface, without the developer having to worry about anything else; at the same time, a concrete class implementing IComparable may have a more complex nature, and may implement other interfaces.

To deal with SRP, you can try to split the class into two (or more) separate concepts, but you can also extract parts of it into separate classes to delegate responsibilities to (composition). I.e. the source class can stay a single overall concept, but “relinquish” some of its original responsibilities to its constituent objects; it would essentially only orchestrate them, implementing a high-level policy.

Basically, if splitting the class would still require the objects to be very “chatty” and rely on each other’s internals, and there’s no real way around that, then splitting would hurt cohesion, and it’s probably best not to do it. Such classes would still be coupled, and furthermore might couple other code that uses them.

So basically the ISP states that we should break big interfaces whose members are not cohesive with each other into smaller, more cohesive interfaces.

That’s a partial picture; what’s missing is that this is to be judged with respect to, or from the perspective of, clients (code that uses objects through these interfaces, code written against these interfaces). ISP states that clients shouldn’t depend on stuff they don’t use, even though it may make sense to bundle that stuff together into a single object.

I think it helps to contrast SRP and ISP in the following way. SRP is more focused on what the objects themselves are doing, and is about (1) splitting things that aren’t closely related (and change for different reasons and with different rates), but also about (2) bringing together things that are closely connected, but are scattered throughout the code. ISP is more about controlling coupling by limiting the surface area exposed to client code. Splitting a large interface into more focused, smaller interfaces provides flexibility for the clients. It can even be useful if two interfaces segregated from the same source class appear in the same client, because this lets you plug something else for the particular “role” embodied by each interface. This lets you reuse that client code for some other feature that has a similar “shape”, and it also allows for testing.

That said, there’s always some judgement involved. If everything was taken to the extreme, then everything would be too granular, too separated to be usable, and so thoroughly decoupled that the system wouldn’t be able to do anything. Sometimes you can’t satisfy both principles to your liking for various reasons. And it’s not always worth it; some parts of the codebase will work fine and won’t change much, so expending design effort there would bring limited benefit.
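The "one object, several roles" idea above can be made concrete. Below is a minimal, hypothetical Python sketch (the Invoice class and the role names are invented for illustration): one class implements two segregated interfaces, and the sorting client is written purely against the role it actually uses, mirroring the IComparable example.

```python
from functools import cmp_to_key
from typing import List, Protocol

class Comparable(Protocol):
    """Role seen by sorting clients (analogous to IComparable)."""
    def compare_to(self, other: "Comparable") -> int: ...

class Renderable(Protocol):
    """Role seen by display clients."""
    def render(self) -> str: ...

class Invoice:
    """One class playing two roles; each client depends only on its own interface."""
    def __init__(self, total: int) -> None:
        self.total = total

    def compare_to(self, other: "Invoice") -> int:
        # three-way comparison: -1, 0, or 1
        return (self.total > other.total) - (self.total < other.total)

    def render(self) -> str:
        return f"Invoice(total={self.total})"

def sort_items(items: List[Comparable]) -> List[Comparable]:
    # This client knows nothing about rendering, or about Invoice at all.
    return sorted(items, key=cmp_to_key(lambda a, b: a.compare_to(b)))
```

Because `sort_items` depends only on the `Comparable` role, it can be reused for any other feature with the same "shape", and each role can be substituted independently in tests.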


## dg.differential geometry – Optimal lower bound on the volume of balls under a Sobolev inequality

Let $$M$$ be a complete non-compact $$n$$-dimensional ($$n \geq 3$$)
Riemannian manifold with volume element
$$dv$$ such that, for every smooth compactly supported function $$f : M \to \mathbb{R}$$,
$$\bigg( \int_M |f|^{\frac{2n}{n-2}}\, dv\bigg)^{\frac{n-2}{n}} \,\leq\, C \int_M |\nabla f|^2\, dv$$
where $$C >0$$ is the optimal constant of this Sobolev inequality in the Euclidean case
$$M = \mathbb{R}^n$$. Is it true that
$$\mathrm{Vol}(B(x,r)) \,\geq\, \mathrm{V}(r)$$
where $$B(x,r)$$, $$x \in M$$, is a ball of radius $$r >0$$ in $$M$$ and
$$\mathrm{V}(r)$$ is the volume of a ball of radius $$r$$ in $$\mathbb{R}^n$$?


## numerical methods – Upper bound relative error in IEEE 754 standard

Prove $$\sum^{\infty}_{i=t} 2^{-i} < 2^{1-t}$$ and show that the upper bound for the relative error in rounding is
$$2^{-t}$$, where $$t$$ is the number of bits used to represent the mantissa in the IEEE 754 standard.

I think it can be proved with a geometric progression. However, I've no clue how to approach the rest of the problem.
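The geometric-series step can be made explicit. (Note that the infinite series sums to exactly $$2^{1-t}$$, so the strict inequality really concerns the finite partial sums appearing in a $$t$$-bit truncation.) For any finite $$N \geq t$$,
$$\sum_{i=t}^{N} 2^{-i} \;=\; 2^{-t}\,\frac{1-2^{-(N-t+1)}}{1-\frac{1}{2}} \;=\; 2^{1-t}\left(1-2^{-(N-t+1)}\right) \;<\; 2^{1-t}.$$
For the rounding claim, under the convention that the significand has $$t$$ bits: the representable numbers in $$[2^{e}, 2^{e+1})$$ are spaced $$2^{e-t+1}$$ apart, so rounding to nearest perturbs $$x$$ by at most $$2^{e-t}$$, giving a relative error of at most $$2^{e-t}/2^{e} = 2^{-t}$$.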


## Asymptotic upper and lower bound of $$\frac{1}{n^c}$$?

Let me elaborate more on $$\Omega(n!)$$ and $$\Omega(2^{n/1000})$$:

$$f(n)=\Omega(n!) \implies f(n) \geq c \cdot n! \implies \frac{1}{n^c} \geq c \cdot n!$$

Given some constant $$c \geq 1$$ and $$n \geq n_0$$ where $$n_0 = 1$$, we can clearly see that $$\frac{1}{n^c} \leq c \cdot n!$$ for all $$c \geq 1$$ and $$n \geq 1$$.

Hence, $$\Omega(n!)$$ cannot be a lower bound for $$f(n)$$.

We can use the same logic for $$\Omega(2^{n/1000})$$. Specifically:

Since $$\Omega(2^{n/1000}) = \Omega(2^n)$$ for large values of $$n$$: $$\frac{1}{n^c}$$ is always less than or equal to $$2^n$$ (provided that $$c\geq 1$$).

Hence, $$\Omega(2^{n/1000})$$ cannot be a lower bound for $$f(n)$$.

Is that explanation sound?