ordinary differential equations – How to find domain of definition of solution?

For the given initial-value problem:
$$\dfrac{dy}{dt}=\dfrac{1}{\left(y+2\right)^{2}}, \quad y(0)=1$$
we are asked to solve it and then state the domain of definition of the solution. First I separated the variables, applied the initial condition, and obtained:
$$y(t)=\left(3t+27\right)^{1/3}-2$$

but now I can't figure out why the solution exists only when $t>-9$. I am also having trouble understanding what this has to do with $\dfrac{dy}{dt}$ not being defined at $y=-2$. Why are we requiring
$$3t+27 >0 $$

What if $t=-10$? How does this make $y$ undefined?
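A quick numeric check (a sketch; the sample points below are arbitrary) shows what goes wrong: the solution satisfies the ODE for $t>-9$, hits the forbidden value $y=-2$ exactly at $t=-9$ (where $\dfrac{dy}{dt}=\dfrac{1}{(y+2)^2}$ is undefined), and past that point Python's principal-root convention no longer even returns a real number:

```python
# Sanity-checking y(t) = (3t + 27)^(1/3) - 2 against the ODE
# dy/dt = 1/(y + 2)^2 (the sample points below are arbitrary).
def y(t):
    return (3 * t + 27) ** (1 / 3) - 2

def dydt_numeric(t, eps=1e-6):
    # Central finite difference, just to compare against 1/(y+2)^2.
    return (y(t + eps) - y(t - eps)) / (2 * eps)

assert abs(y(0) - 1) < 1e-12          # initial condition y(0) = 1

for t in (0.0, 5.0, -5.0, -8.9):      # points inside the domain t > -9
    assert abs(dydt_numeric(t) - 1 / (y(t) + 2) ** 2) < 1e-3

print(y(-9))                          # exactly -2, where 1/(y+2)^2 blows up
print((3 * (-10) + 27) ** (1 / 3))    # t = -10: Python returns a complex root
```

The real point is that the solution curve through $(0,1)$ reaches $y=-2$ at $t=-9$, where the right-hand side of the ODE is undefined, so the maximal interval of existence containing $t=0$ is $t>-9$.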

PRECISE DPLL algorithm definition – Computer Science Stack Exchange

I am confused about the precise definition of the DPLL algorithm. Various sources tend to define DPLL differently:

  1. Pages 110-114 of the book Handbook of Satisfiability (Editors: Biere, A., Heule, M., Van Maaren, H., Walsh, T., Feb 2009, Volume 185 of Frontiers in Artificial Intelligence and Applications) define it as backtracking + unit propagation.

It can also be accessed from: http://reasoning.cs.ucla.edu/fetch.php?id=97&type=pdf (pages 106-110).

  2. Wikipedia: https://en.wikipedia.org/wiki/DPLL_algorithm#:~:text=In%20logic%20and%20computer%20science,solving%20the%20CNF%2DSAT%20problem.
    defines it as backtracking + unit propagation + pure literal elimination.

  3. The original 1962 paper: https://archive.org/details/machineprogramfo00davi/page/n5/mode/2up
    mentions 3 rules: the one-literal clause rule (unit propagation), the affirmative-negative rule (pure literal elimination), and the rule for eliminating atomic formulas (creating resolvents).

Therefore, I am looking for a clear and strict definition of the DPLL algorithm. Should it be considered a purely backtracking algorithm, with unit propagation and pure literal elimination as extensions? Or is unit propagation an essential part of the algorithm, with pure literal elimination considered an extension?
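To make the candidate ingredients concrete, here is a minimal toy sketch of my own (not the pseudocode of any of the sources above) combining backtracking with unit propagation and pure-literal elimination. A formula is a list of clauses; a clause is a list of nonzero integers, with negation encoded by sign:

```python
def unit_propagate(clauses, assign):
    # Repeatedly assign the literal of any unit clause; None on conflict.
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(assign.get(abs(l)) == (l > 0) for l in clause):
                continue                       # clause already satisfied
            free = [l for l in clause if abs(l) not in assign]
            if not free:
                return None                    # all literals false: conflict
            if len(free) == 1:                 # unit clause forces a value
                assign[abs(free[0])] = free[0] > 0
                changed = True
    return assign

def dpll(clauses, assign=None):
    assign = dict(assign or {})
    if unit_propagate(clauses, assign) is None:
        return None
    active = [c for c in clauses
              if not any(assign.get(abs(l)) == (l > 0) for l in c)]
    if not active:
        return assign                          # every clause satisfied
    # Pure-literal elimination: a variable occurring with only one
    # polarity among the active clauses can safely be set to it.
    lits = {l for c in active for l in c if abs(l) not in assign}
    for l in lits:
        if -l not in lits:
            assign[abs(l)] = l > 0
            return dpll(clauses, assign)
    # Branch on an unassigned variable (plain backtracking).
    v = abs(next(iter(lits)))
    for val in (True, False):
        result = dpll(clauses, {**assign, v: val})
        if result is not None:
            return result
    return None

print(dpll([[1, 2], [-1, 3], [-3]]) is not None)   # satisfiable
print(dpll([[1], [-1]]))                           # unsatisfiable: None
```

Dropping the pure-literal loop (or the unit-propagation call) yields the leaner variants the different sources describe, which is one way to see why the definitions can diverge.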

real analysis – Using Definition prove that following limit exists

Suppose that $x_n$ is a sequence of real numbers that converges to $1$ as $n \to \infty$. Using the definition, prove that the following limit exists.
Here is my attempt, but I am not sure about it. If it is wrong, give me a hint; is it enough to prove the claim?

(a) $\frac{x_n^2 - e}{\sqrt{n}} \to 1 - e$ as $n \to \infty$

attempt

By hypothesis, given $\epsilon>0$ there is an $N\in\mathbb{N}$ such that $n\geq N$ implies $|x_n-1|<\epsilon$.

Next, apply $\epsilon=1$ to choose $N_2$ such that $n\geq N_2$ implies $|x_n-1|<1$ (i.e. $x_n<2$).

$$\left|\frac{x_n^2 - e}{\sqrt{n}}-(1-e)\right| =\left|\frac{(x_n-1)(x_n+1)}{\sqrt{n}}+\frac{1-e}{\sqrt{n}}-1+e\right|$$

$$\left|\frac{x_n^2 - e}{\sqrt{n}}-(1-e)\right|=\left|\frac{(x_n-1)(x_n+1)}{\sqrt{n}}+\frac{(1-e)(1-\sqrt{n})}{\sqrt{n}}\right| \leq \left|\frac{(x_n-1)(x_n+1)}{\sqrt{n}}\right|+\left|\frac{(1-e)(1-\sqrt{n})}{\sqrt{n}}\right|$$

Since $|1-e|<2$, and given $\epsilon>0$ there is an $N\in\mathbb{N}$ such that $n\geq N$ implies $\left|\frac{1}{\sqrt{n}}\right|<\epsilon$, choose $N_1$ such that $n\geq N_1$ implies $\frac{1}{\sqrt{n}}<1$, so
$$\left|\frac{x_n^2 - e}{\sqrt{n}}-(1-e)\right|<\left|\frac{(x_n-1)(x_n+1)}{\sqrt{n}}\right|+2$$ Since $x_n+1<3$, set $N=\max(N_1,N_2)$:

$$\left|\frac{x_n^2 - e}{\sqrt{n}}-(1-e)\right|<3|x_n-1|+2<3\epsilon+2$$
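As a purely mechanical illustration of the $\epsilon$-$N$ definition being used here (the concrete sequence $x_n = 1 + 1/n$ is my own assumption, chosen only because it converges to $1$):

```python
# For the hypothetical sequence x_n = 1 + 1/n -> 1, find, for a given
# epsilon, an N such that n >= N implies |x_n - 1| < epsilon.
def x(n):
    return 1 + 1 / n

def find_N(eps):
    n = 1
    while abs(x(n) - 1) >= eps:
        n += 1
    return n

for eps in (1.0, 0.1, 0.01):
    N = find_N(eps)
    # Spot-check the definition on a range of n >= N.
    print(eps, N, all(abs(x(n) - 1) < eps for n in range(N, N + 1000)))
```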

Thank you!!!

real analysis – Definition and properties of the inverse of the flow of an ODE

In class, the teacher considers a flow $\Phi$ given by the solutions of the ODE system, for $t\in(0, T)$ and $x\in\mathbb{R}^d$,
$$
\begin{cases}
y'(s)=b(y(s), s), & s\leq T\\
y(t)=x
\end{cases},\label{1}\tag{*}
$$

that is, $\Phi(x, t, s)=y(s)$ solving \eqref{1}. He said that we will be mostly concerned with $\Phi(\cdot, 0, \cdot)$. The field $b$ is assumed to be Lipschitz continuous in both variables and bounded.

Then, he introduces the inverse $\Psi$ of the above flow as follows: $\Psi(x, 0, s)=y(s)$ satisfying
$$
\begin{cases}
y'(s)=-b(y(s), t-s), & s<t\leq T\\
y(0)=x
\end{cases},
$$

and he said that $\Psi$ is such that
$$
\Phi(\Psi(x, 0, s), 0, s)=x,\quad \Psi(\Phi(x, 0, s), 0, s)=x.\label{2}\tag{**}
$$

I do not understand \eqref{2}. Can someone help me? Maybe the definition of the inverse is wrong?
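For intuition, the identities $\Phi(\Psi(x,0,s),0,s)=x$ and $\Psi(\Phi(x,0,s),0,s)=x$ can at least be checked numerically in the simplest autonomous case. The sketch below assumes $b(y,s)=y$, for which reversing time amounts to flipping the sign of $b$:

```python
# Check that running the flow with b and then with -b (and vice versa)
# returns the starting point, in the autonomous case b(y, s) = y.
def rk4(f, y0, s0, s1, steps=1000):
    # Classical 4th-order Runge-Kutta for y'(s) = f(y, s).
    h = (s1 - s0) / steps
    y, s = y0, s0
    for _ in range(steps):
        k1 = f(y, s)
        k2 = f(y + h * k1 / 2, s + h / 2)
        k3 = f(y + h * k2 / 2, s + h / 2)
        k4 = f(y + h * k3, s + h)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        s += h
    return y

b = lambda y, s: y                    # assumed autonomous field
neg_b = lambda y, s: -b(y, s)
x, s1 = 2.0, 1.0

phi = rk4(b, x, 0.0, s1)              # Phi(x, 0, s1), roughly x * e
psi_of_phi = rk4(neg_b, phi, 0.0, s1)     # Psi(Phi(x, 0, s1), 0, s1)
psi = rk4(neg_b, x, 0.0, s1)              # Psi(x, 0, s1)
phi_of_psi = rk4(b, psi, 0.0, s1)         # Phi(Psi(x, 0, s1), 0, s1)

print(abs(psi_of_phi - x) < 1e-9, abs(phi_of_psi - x) < 1e-9)
```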

Thank you

data flow analysis – Reaching definition: what is “entry” and “exit”?

I am currently studying the textbook Principles of Program Analysis by Flemming Nielson, Hanne R. Nielson, and Chris Hankin. The section on reaching definitions analysis in chapter 1 presents the following:

Example 1.1 An example of a program written in this language is the following which computes the factorial of the number stored in $\mathrm{x}$ and leaves the result in $\mathrm{z}$:
$$(y := x)^1 ; (z := 1)^2 ; \text{while}\ (y > 1)^3\ \text{do}\ ((z := z * y)^4 ; (y := y - 1)^5) ; (y := 0)^6$$

Reaching Definitions Analysis. The use of distinct labels allows us to identify the primitive constructs of a program without explicitly constructing a flow graph (or flow chart). It also allows us to introduce a program analysis to be used throughout the chapter: Reaching Definitions Analysis, or as it should be called more properly, reaching assignments analysis:

An assignment (called a definition in the classical literature) of the form $(x := a)^{\mathscr{l}}$ may reach a certain program point (typically the entry or exit of an elementary block) if there is an execution of the program where $x$ was last assigned a value at $\mathscr{l}$ when the program point is reached.

Consider the factorial program of Example 1.1. Here $(y := \mathrm{x})^1$ reaches the entry to $(z := 1)^2$; to allow a more succinct presentation we shall say that $(y, 1)$ reaches the entry to $2$. Also we shall say that $(\mathrm{x}, ?)$ reaches the entry to $2$; here “$?$” is a special label not appearing in the program and it is used to record the possibility of an uninitialised variable reaching a certain program point.

Full information about reaching definitions for the factorial program is then given by the pair $\text{RD} = (\text{RD}_{entry}(\mathscr{l}), \text{RD}_{exit}(\mathscr{l}))$ of functions in Table 1.1. Careful inspection of this table reveals that the entry and exit information agree for elementary blocks of the form $(b)^{\mathscr{l}}$ whereas for elementary blocks of the form $(x := a)^{\mathscr{l}}$ they may differ on pairs $(x, \mathscr{l}^\prime)$. We shall come back to this when formulating the analysis in subsequent sections.

Returning to the discussion of safe approximation, note that if we modify Table 1.1 to include the pair $(z, 2)$ in $\text{RD}_{entry}(5)$ and $\text{RD}_{exit}(5)$ we still have safe information about reaching definitions but the information is more approximate. However, if we remove $(z, 2)$ from $\text{RD}_{entry}(6)$ and $\text{RD}_{exit}(6)$ then the information will no longer be safe: there exists a run of the factorial program where the set $\{(\mathrm{x}, ?), (y, 6), (z, 4)\}$ does not correctly describe the reaching definitions at the exit of label $6$.

This textbook's explanations are very unclear, but I seem to understand the $\text{RD}_{entry}(\mathscr{l})$ values after reading this Wikipedia article. However, I still do not understand what $\text{RD}_{exit}(\mathscr{l})$ means, nor do I understand how the authors are getting those values. What does $\text{RD}_{exit}(\mathscr{l})$ mean? What is the difference between $\text{RD}_{entry}(\mathscr{l})$ and $\text{RD}_{exit}(\mathscr{l})$? How do the authors get the values for $\text{RD}_{exit}(\mathscr{l})$ in Table 1.1?
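Since the equations behind Table 1.1 are the standard dataflow equations, a small fixed-point computation can reproduce the $\text{RD}_{entry}/\text{RD}_{exit}$ sets for this exact program. The encoding below is my own sketch (labels, kill/gen sets, and flow edges follow the usual textbook equations; "?" marks an uninitialised variable):

```python
# Reaching Definitions for the factorial program of Example 1.1.
VARS = ["x", "y", "z"]
ASSIGN = {1: "y", 2: "z", 4: "z", 5: "y", 6: "y"}   # label 3 is a test, not an assignment
FLOW = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 3), (3, 6)]
LABELS = [1, 2, 3, 4, 5, 6]

def gen(l):
    # An assignment (x := a)^l generates the pair (x, l).
    return {(ASSIGN[l], l)} if l in ASSIGN else set()

def kill(l):
    # It kills every other definition of x, including the "uninitialised" one.
    if l not in ASSIGN:
        return set()
    v = ASSIGN[l]
    return {(v, "?")} | {(v, m) for m in ASSIGN if ASSIGN[m] == v}

entry = {l: set() for l in LABELS}
exit_ = {l: set() for l in LABELS}
entry[1] = {(v, "?") for v in VARS}      # every variable may start uninitialised

changed = True
while changed:                           # chaotic iteration to a fixed point
    changed = False
    for l in LABELS:
        new_entry = (entry[1] if l == 1
                     else set().union(*[exit_[p] for p, q in FLOW if q == l]))
        new_exit = (new_entry - kill(l)) | gen(l)
        if new_entry != entry[l] or new_exit != exit_[l]:
            entry[l], exit_[l] = new_entry, new_exit
            changed = True

print(sorted(exit_[6], key=str))   # RD_exit(6)
```

Comparing `entry[l]` and `exit_[l]` shows exactly the phenomenon the book describes: they agree at the test block $3$, and differ at assignment blocks, where the exit set is the entry set minus the killed pairs plus the generated pair.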

evaluation – How to inline a variable in function definition

I need to compile a function g that calls an external function f. f and g are defined as follows:

r = {0,0,1};
f = # - r &;
g = Compile[{{a, _Real}}, f[a], CompilationOptions -> {"InlineExternalDefinitions" -> True}]

But this way r will be held, and g will assume it to be a real number instead of a vector, throwing a type error when trying to find a - r. How can I inline r so that f = # - {0, 0, 1} &? Evaluate doesn't work, because the result will be f = {#, #, # - 1} &.

disjoint sets – Definition of Disjointness for binary strings

Basically, most definitions of disjointness are such that $DISJ(A, B) = 1$ if $A \cap B = \emptyset$ and $DISJ(A, B) = 0$ otherwise. My confusion is about the influence of $0$s here. For example, is the all-zeros string considered to be $\emptyset$, so that the result is $1$ no matter what the other string is? Another case: do two disjoint strings, e.g. $01$ and $10$, become non-disjoint after the same number of $0$s is appended to each?

Currently my thought, based on most research papers in the communication complexity field, is that $DISJ(A, B) = 0$ iff there exists an entry that is $1$ in both $A$ and $B$. I would like some clarification and the related detailed definition, with a source.
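Under the characteristic-vector reading described above (bit $i$ of each string records whether element $i$ is in the set), both edge cases resolve mechanically; this small sketch is my own illustration, not a definition from any specific paper:

```python
def disj(a: str, b: str) -> int:
    # Characteristic-vector reading: DISJ is 0 iff some position
    # holds a 1 in both strings, and 1 otherwise.
    return 0 if any(x == "1" and y == "1" for x, y in zip(a, b)) else 1

print(disj("01", "10"))      # the sets {1} and {0} are disjoint
print(disj("0100", "1000"))  # appending 0s to both cannot create a common element
print(disj("0000", "1111"))  # all-zeros encodes the empty set: disjoint from anything
print(disj("11", "10"))      # a shared 1 in position 0 makes them intersect
```

So the all-zeros string does encode $\emptyset$ (result $1$ against any other string), and padding both strings with $0$s preserves disjointness.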

Dimension of Harmonic Polynomial space from Tensor definition

I would like to calculate the dimension of the Harmonic Polynomial space using the following definition:
Let $P_n(\overrightarrow{x})$ be a polynomial of degree $n$ in the variable $\overrightarrow{x}=(x_1,x_2,x_3)$; it can be written as $P_n(\overrightarrow{x})=\sum_{i_1,\dots,i_n} T_{n;i_1,\dots,i_n}x^{i_1}\cdots x^{i_n}$, where $T_{n;i_1,\dots,i_n}$ is a symmetric tensor. If $P_n(\overrightarrow{x})$ is harmonic, $\nabla^2 P_n(\overrightarrow{x})=0$ implies $\mathrm{Tr}(T_{n;i_1,\dots,i_n})=0$.
Hence $T_{n;i_1,\dots,i_n}$ is a symmetric traceless tensor.
From this property of $T$, can I deduce the dimension of the space?
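For what it is worth, here is a sketch of the standard count that this tensor description yields (assuming $d=3$ variables, as above): symmetric rank-$n$ tensors form a space of dimension $\binom{n+2}{2}$, and tracelessness imposes one linear condition for each symmetric rank-$(n-2)$ tensor, i.e. $\binom{n}{2}$ conditions, so
$$\dim=\binom{n+2}{2}-\binom{n}{2}=\frac{(n+1)(n+2)}{2}-\frac{n(n-1)}{2}=2n+1,$$
which is the familiar number of spherical harmonics of degree $n$.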
Thank you!

linear algebra – Spectrum of integral operator $A$: $(Af)(t) = \int_0^t f$, by definition of $Sp$

Exercise 7.15 from Ciprian Foias, Michael Jolly, “Differential Equations in Banach Spaces” (with some edits).

Let $X = C((0, 1), \mathbb C)$ (a set of continuous functions from $(0, 1)$ to $\mathbb C$ with the uniform norm) and $A \in B(X)$ (bounded linear operator from $X$ to $X$) defined by

$$(A f)(t) = \int_0^t f$$

Determine the spectrum of $A$ (i.e. the set of $\lambda$ s.t. $A - \lambda I$ is not invertible).


I can show that $\|A^n\|^{1/n} \to 0$, and therefore by Theorem 7.14 from the book $\max \{|\lambda| \colon \lambda \in Sp(A)\} = 0$, i.e. $Sp(A) = \{0\}$.

Question: is there a way to show this by the definition of the spectrum? Currently, it's not clear to me at all why $A$ is not invertible, and why $A - \lambda I$ is invertible for any $\lambda \ne 0$. Also, is there a way to guess the answer by just looking at the operator?

Motivation: While I can solve this particular problem, I’m interested in a general method of how a spectrum can be found.


I found this answer: https://math.stackexchange.com/a/199730/743044, but again it doesn’t show how to do this by definition.
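On the second question (why $A - \lambda I$ is invertible for $\lambda \ne 0$): since $\|A^n\|^{1/n} \to 0$, the Neumann series $(A - \lambda I)^{-1} = -\frac{1}{\lambda}\sum_{n\geq 0}(A/\lambda)^n$ converges for every $\lambda \ne 0$ and exhibits the inverse directly, which can be checked numerically. The discretisation below is a sketch; the grid size, $\lambda = 1/2$, and the test function $f(t)=\cos t$ are my own assumptions:

```python
import math

# Solve (A - lam*I) g = f via the Neumann series, on a grid.
N = 2000
h = 1.0 / N
grid = [(i + 0.5) * h for i in range(N)]

def A(f):
    # Midpoint-rule discretisation of (Af)(t) = integral of f over [0, t].
    out, acc = [], 0.0
    for v in f:
        acc += v * h
        out.append(acc)
    return out

lam = 0.5
f = [math.cos(t) for t in grid]

# Accumulate g = -(1/lam) * sum_n (A/lam)^n f  (partial sums; the terms
# decay like 2^n / n!, so 60 of them are far more than enough).
term = f[:]
g = [0.0] * N
for _ in range(60):
    g = [gi - ti / lam for gi, ti in zip(g, term)]
    term = [ti / lam for ti in A(term)]

# Residual of (A - lam*I) g = f in the sup norm on the grid.
Ag = A(g)
residual = max(abs(a - lam * gi - fi) for a, gi, fi in zip(Ag, g, f))
print(residual < 1e-8)
```

The same factorial decay $\|A^n\| \le 1/n!$ is also how one guesses the answer by looking at the operator: repeated integration crushes every function, so no nonzero $\lambda$ can survive in the spectrum.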

Definition of $BPP$

We know that $BPP$ is described as $\{L \mid \exists \text{ TM } M \text{ s.t. } \Pr(M(x)=L(x)) \geq 2/3\}$. I did see a proof that uses the Chernoff bound to show that if the probability is $\geq 1/2$, then it can be turned into any probability in $(1/2, 1)$. My question is: what about probabilities below $1/2$? Do they fall into a different class?
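The amplification the question refers to can be illustrated empirically: repeat the machine $k$ times and take a majority vote. This sketch is my own (per-run success probability $2/3$; the trial count and the values of $k$ are arbitrary) and only simulates the effect that the Chernoff bound quantifies:

```python
import random

random.seed(0)  # reproducible sketch

def amplified_success(p, k, trials=20000):
    # Empirical probability that a majority vote over k independent runs,
    # each individually correct with probability p, gives the right answer.
    wins = 0
    for _ in range(trials):
        correct = sum(random.random() < p for _ in range(k))
        wins += correct > k / 2
    return wins / trials

results = [amplified_success(2 / 3, k) for k in (1, 11, 51)]
print([round(r, 3) for r in results])   # success probability grows with k
```

With a per-run success probability strictly below $1/2$, the same majority vote drives the success probability toward $0$ instead (try $p = 1/3$ above).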