real analysis – Proving the existence of a limit if \$f\$ is twice differentiable

I am trying to prove that if $$f$$ is twice differentiable, then the limit
$$\lim_{(x,y)\to(x_0,y_0)}\frac{f(x,y)-f(x_0,y)-f(x,y_0)+f(x_0,y_0)}{(x-x_0)(y-y_0)}$$
exists.

I have been thinking that the idea is to add and subtract terms conveniently, but I don't see how. Any hint?
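As a sanity check on the target (a numerical sketch, not part of a proof): for a smooth test function, the second difference quotient above should approach the mixed partial $$f_{xy}(x_0,y_0)$$. The test function and evaluation point below are my own choices for illustration.

```python
import math

def f(x, y):
    # smooth test function with known mixed partial f_xy = -cos(x) * sin(y)
    return math.sin(x) * math.cos(y)

def mixed_quotient(f, x0, y0, h, k):
    # the second difference quotient from the question, at x = x0 + h, y = y0 + k
    return (f(x0 + h, y0 + k) - f(x0, y0 + k)
            - f(x0 + h, y0) + f(x0, y0)) / (h * k)

x0, y0 = 0.5, 0.3
exact = -math.cos(x0) * math.sin(y0)
for h in (1e-2, 1e-3, 1e-4):
    approx = mixed_quotient(f, x0, y0, h, h)
    print(h, approx, abs(approx - exact))  # error shrinks as h -> 0
```

The experiment suggests what the limit ought to be, which can guide the add-and-subtract argument (e.g. via the mean value theorem applied in each variable).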

Will a function be non-differentiable at a point not in its domain?

Which is correct to say about a point that is not in the domain of a function: that the function is non-differentiable there, or that the question cannot be decided?
Example: f(x) = x for all real x except x = 1.
So would we say it is non-differentiable at x = 1?

Continuous functions nowhere differentiable

If $$D(x) = \sum^\infty_{k=1}\frac{1}{k!}\sin((k+1)!\,x),$$ how can I prove that $$D(x)$$ is nowhere differentiable on $$\mathbb{R}$$?
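This is not a proof, but a numerical experiment (a sketch, assuming that 12 terms of the series suffice at the scales probed) can show why the difference quotients refuse to settle: the formally differentiated series $$\sum (k+1)\cos((k+1)!\,x)$$ diverges, and the quotients wander accordingly.

```python
import math

def D(x, terms=12):
    # partial sum of the series; the tail is bounded by sum_{k > terms} 1/k!, which is tiny
    return sum(math.sin(math.factorial(k + 1) * x) / math.factorial(k)
               for k in range(1, terms + 1))

x0 = 1.0
for h in (1e-2, 1e-3, 1e-4, 1e-5):
    q = (D(x0 + h) - D(x0)) / h
    print(h, q)  # typically these quotients do not approach a limit
```

Shrinking h only exposes faster oscillations from higher-frequency terms, which is the heuristic behind the usual proof strategy (compare increments at scales tuned to $$(k+1)!$$).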

differential geometry – Main differences between differentiable manifolds without boundary and differentiable manifolds with boundary

I have some questions about differentiable manifolds with boundaries.

1- First of all, do you have good references for books about manifolds with boundary? I'm not really familiar with these objects, so it would be good for me to have a benchmark.

2- I read that the notions of differentiable map, tangent space, tangent map, vector field, etc., are not really different in the case of a manifold with boundary (compared to a classical differentiable manifold). However, I have the feeling that things are not always similar. For example, consider the map $$f: (0,1) \rightarrow \mathbb{R},\ x \mapsto 2x$$.

2.a- Is $$f$$ differentiable at $$0$$ (or at $$1$$), and if yes, what is the tangent map of $$f$$ at $$0$$ (or at $$1$$)?

2.b- A useful theorem for me is the following one:

“Let $$M$$ be a differentiable manifold (without boundary), $$m \in M$$ and $$\phi: M \rightarrow \mathbb{R}$$ a differentiable map. If $$m$$ is a local extremum of $$\phi$$, then $$d\phi(m) = 0$$.” (where $$d\phi(m)$$ is the tangent map of $$\phi$$ at $$m$$).

The problem is that, with the manifold (with boundary) $$(0,1)$$, we obtain that $$\mathrm{arg\,max}_{x \in (0,1)} f(x) = \{1\}$$, but that $$df(1) \neq 0$$ (I guess that for every $$x \in (0,1)$$ and every $$y \in \mathbb{R}$$, $$df(x)(y) = f(y)$$, but maybe I am making a mistake), so this theorem does not work with manifolds with boundary…
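A quick numerical check of the point raised in 2.b (a sketch, using the closure of the interval so the boundary point is actually attained): the maximizer of $$x \mapsto 2x$$ sits at the boundary point $$1$$, and the one-sided derivative there is $$2$$, not $$0$$.

```python
def f(x):
    return 2.0 * x

def left_derivative(f, x, h=1e-6):
    # one-sided (left) difference quotient at a boundary point
    return (f(x) - f(x - h)) / h

d = left_derivative(f, 1.0)
print(d)  # close to 2, not 0: the interior critical-point theorem fails at the boundary
```

This matches the usual statement for manifolds with boundary: at a boundary extremum one only gets a sign condition on $$d\phi(m)$$ applied to inward-pointing vectors, not $$d\phi(m) = 0$$.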

2.c- In view of question 2.b: to what extent are manifolds with boundary similar to manifolds without boundary?

3- A particular case where I need to be sure that there are not too many differences between manifolds without boundary and manifolds with boundary is the following: I have a differentiable manifold $$\Sigma$$ with boundary and I need to consider a vector field $$X: \Sigma \rightarrow T\Sigma$$.
My aim is to study the ODE induced by $$X$$ and the dynamical system that this ODE may induce, and I am not really confident going into this problem, because my manifold has a boundary (and for the reasons explained before)…

Thank you for your help !

real analysis – Arzelà–Ascoli on a uniformly bounded sequence of differentiable functions

Let $$a, b \in \mathbb{R}$$ with $$a < b$$, and let {$$f_n$$} be a sequence of differentiable
functions from $$(a, b)$$ to $$\mathbb{R}$$. Suppose that both the sequences {$$f_n$$} and {$$f'_n$$}
are uniformly bounded. Prove that the sequence {$$f_n$$} is equicontinuous and has a uniformly convergent subsequence.

I am able to prove that {$$f_n$$} is equicontinuous using the Mean Value Theorem. Now for the next part we can use the Arzelà–Ascoli theorem, whose statement is:

If X is a compact metric space and F a subset of C(X), then F is compact if and only if F is closed, uniformly bounded, and equicontinuous.

Now $$(a,b)$$ is compact, and {$$f_n$$} is uniformly bounded and equicontinuous as well. Can somebody explain how I can prove that {$$f_n$$} is closed too?
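The Mean Value Theorem step can be checked numerically (a sketch with a family of my own choosing): if $$|f_n'| \le L$$ uniformly, then every quotient $$|f_n(x)-f_n(y)|/|x-y|$$ is at most $$L$$, which is exactly the uniform Lipschitz bound that gives equicontinuity.

```python
import math, random

def f(n, x):
    # family with |f_n(x)| <= 1/n and |f_n'(x)| = |cos(n x)| <= 1 uniformly in n
    return math.sin(n * x) / n

random.seed(0)
worst = 0.0
for n in range(1, 50):
    for _ in range(200):
        x, y = random.uniform(0, 1), random.uniform(0, 1)
        if x != y:
            ratio = abs(f(n, x) - f(n, y)) / abs(x - y)
            worst = max(worst, ratio)
print(worst)  # never exceeds 1, the common Lipschitz constant from the MVT
```

Note that compactness in Arzelà–Ascoli is about the domain of the functions; the sequence itself need not be closed in $$C(X)$$ to extract a uniformly convergent subsequence (one applies the theorem to its closure).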

calculus – Differentiable at \$x=a\$ implies continuous at \$x=a\$

Consider the function $$f(x)=\left\{\begin{array}{cc} x^2-4 & \text{ if }x\leq 2\\ 4x+3 & \text{ if } x\gt 2\end{array}\right.$$

This function is differentiable at $$x=2$$ since $$\lim_{h\to 0^{\pm}}\frac{f(2+h)-f(2)}{h}=4$$ (EDIT: this actually isn't true, but it is true that $$\lim_{x\to 2^-}f^\prime(x)=4=\lim_{x\to 2^+}f^\prime(x)$$); however, it's not continuous at $$x=2$$.

How is that possible? Doesn't differentiability at $$x=a$$ imply continuity at $$x=a$$?
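The EDIT can be made concrete with a small numerical sketch: the one-sided difference quotients of $$f$$ at $$x=2$$ behave very differently, because $$f$$ jumps from $$0$$ to $$11$$ across $$x=2$$.

```python
def f(x):
    return x**2 - 4 if x <= 2 else 4*x + 3

def quotient(h):
    # difference quotient of f at x = 2 with step h (h < 0 gives the left-hand quotient)
    return (f(2 + h) - f(2)) / h

for h in (1e-1, 1e-2, 1e-3):
    print(h, quotient(-h), quotient(h))
# left quotients -> 4, but right quotients grow like 11/h: f is NOT differentiable at 2
```

So matching one-sided limits of $$f'$$ is not the same as differentiability; the derivative's definition uses values of $$f$$ itself, which is where the discontinuity bites.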

This question came up when I tried to answer the question of finding $$a$$ and $$b$$ such that the function $$f(x)=\left\{\begin{array}{cc} ax^2-b & \text{ if }x\leq 2\\ bx+3 & \text{ if } x\gt 2\end{array}\right.$$ is differentiable at $$x=2$$.

The solution is achieved by finding conditions on $$a$$ and $$b$$ such that it's continuous, and also such that the left/right derivatives exist and agree. The left/right derivative condition gives $$4a=b$$. With the condition of continuity you get the additional condition $$4a-b=2b+3$$, giving a unique solution. But doesn't differentiability imply continuity? What's wrong with just solving $$4a=b$$, as in the first example above?

stochastic processes – Differentiable approximation of Brownian diffusion with bounded volatility

Yes. Let $$X_t := \int^t_0 \sigma_s \,\mathrm dW_s$$. Due to Theorem V.6 from the book Stochastic Integration and Differential Equations (second edition) by P.E. Protter, there is a continuous and adapted process $$\{\tilde X^n_t\}_{t\in(0;T)}$$ such that
$$\tilde X^n_t = \int^t_0 n \cdot \big( \tilde X^n_s - X_s \big) \,\mathrm ds.$$
Hence we define $$\tilde\sigma^n_s := n \cdot ( \tilde X^n_s - X_s )$$, which is adapted and even continuous.

To prove the limit property, we first prove the following:

Lemma 1. For all $$\beta,\delta \in (0;\infty)$$, there exists $$\nu \in (0;\infty)$$ such that
$$\mathbb P \bigg( \sup_{0 \le s \le t \le \min(s+\nu,T)} \bigg| \int^t_s \sigma_u \,\mathrm dW_u \bigg| \le \beta \bigg) \ge 1 - \delta.$$
Proof: We discretize the interval $$(0;T)$$ and consider events on which its increments stay bounded. For all integers $$0 \le k < N$$ and all $$\alpha \in (0;\infty)$$, we define
$$A^\alpha_{k,N} := \bigg\{ \max_{\frac{T}{N} k \le t \le \frac{T}{N} (k+1)} \bigg| \int_{\frac{T}{N} k}^t \sigma_u \,\mathrm dW_u \bigg| \ge \alpha \bigg\}.$$
Due to the Burkholder–Davis–Gundy inequality (since $$\sigma$$ is bounded, say $$\vert\sigma\vert \le \overline\sigma$$),
$$\mathbb E\bigg( \max_{T/N\cdot k \le t \le T/N\cdot (k+1)} \bigg| \int_{T/N\cdot k}^t \sigma_u \,\mathrm dW_u \bigg|^4 \bigg) \le C_4\, \mathbb E\bigg( \bigg\langle \int_{T/N\cdot k}^\cdot \sigma_u \,\mathrm dW_u \bigg\rangle_{T/N\cdot (k+1)}^2 \bigg) \\= C_4\, \mathbb E\bigg( \bigg( \int_{T/N\cdot k}^{T/N\cdot (k+1)} \big(\sigma_u\big)^2 \,\mathrm du \bigg)^2 \bigg) \le C_4 \bigg( \frac{T}{N} \cdot \overline\sigma^2 \bigg)^2 = N^{-2} C$$
and due to the Markov inequality, we obtain
$$\mathbb P \big( A^\alpha_{k,N} \big) \le \alpha^{-4}\, \mathbb E\bigg( \max_{T/N\cdot k \le t \le T/N\cdot (k+1)} \bigg| \int_{T/N\cdot k}^t \sigma_u \,\mathrm dW_u \bigg|^4 \bigg) \le \alpha^{-4} N^{-2} C.$$
Now we assume that $$\omega \in \Omega \backslash \bigcup_{k=0}^{N-1} A^\alpha_{k,N}$$ and that $$s, t \in (0;T)$$ with $$s \le t \le s + T/N$$. Then we can find a $$k \in \{0,\ldots,N-1\}$$ such that $$T/N\cdot k \le s \le T/N\cdot (k+1) \le t \le T/N\cdot (k+2)$$ or $$T/N\cdot k \le s \le t \le T/N\cdot (k+1)$$. In the first case, we obtain
$$\bigg| \bigg(\int^t_s \sigma_u \,\mathrm dW_u\bigg)(\omega) \bigg| \le \bigg| \bigg(\int_{T/N\cdot (k+1)}^t \sigma_u \,\mathrm dW_u\bigg)(\omega) \bigg| + \bigg| \bigg(\int_{T/N\cdot k}^{T/N\cdot (k+1)} \sigma_u \,\mathrm dW_u\bigg)(\omega) \bigg| \\ \quad + \bigg| \bigg(\int_{T/N\cdot k}^s \sigma_u \,\mathrm dW_u\bigg)(\omega) \bigg| \le 3 \alpha.$$
In the second case, we get the same result analogously.

Let $$\omega \in \Omega \backslash \bigcup_{k=0}^{N-1} A^\alpha_{k,N}$$ and $$s, t \in (0;T)$$ with $$|s - t| \le \frac{T}{N}$$. Then $$\bigg| \bigg(\int^t_s \sigma_u \,\mathrm dW_u\bigg)(\omega) \bigg| \leq 3 \alpha$$ and so
$$\Omega \backslash \bigcup_{k=0}^{N-1} A^\alpha_{k,N} \subseteq \bigg\{ \max_{s,t\in (0;T),\, |s-t| \le \frac{T}{N}} \bigg| \int^t_s \sigma_u \,\mathrm dW_u \bigg| \le 3 \alpha \bigg\}.$$
As a result, if $$N$$ is large enough,
$$\mathbb P \bigg( \max_{s,t\in (0;T),\, |s-t| \le \frac{T}{N}} \bigg| \int^t_s \sigma_u \,\mathrm dW_u \bigg| \le 3 \alpha \bigg) \\\ge 1 - \sum_{k=0}^{N-1} \mathbb P \big( A^\alpha_{k,N} \big) \ge 1 - \frac{C}{\alpha^{4} N} \ge 1 - \delta,$$
which proves the statement.

Since $$\tilde X^n$$ always moves in the direction of $$X$$, we also have the following:

Lemma 2.
$$\sup_{t \in (0;T)} \vert \tilde X^n_t \vert \le \sup_{t \in (0;T)} \vert X_t \vert.$$

Now since the increments of $$X$$ are bounded on an event of large probability due to Lemma 1, it is also straightforward to prove this:

Lemma 3. Let
$$M^{\beta,\nu}:=\bigg\{\sup_{0 \le s \le t \le \min(s+\nu,T)} \bigg| \int^t_s \sigma_u \,\mathrm dW_u \bigg| \le \beta\bigg\}.$$
Then for all $$\omega\in M^{\beta,\nu}$$, we have
$$\sup_{t\in (0;T)} \big\vert \tilde X^{\beta/\nu}_t - X_t \big\vert \le 3 \beta.$$

Now we prove the main statement. Let $$n:=\beta/\nu$$. Due to the Minkowski inequality,
$$\sqrt{ \mathbb E\bigg( \int^T_0 \big( \tilde X^n_s - X_s \big)^2 \,\mathrm ds \bigg) } \\\le \sqrt{ \mathbb E\bigg( \mathbb 1_{M^{\beta,\nu}} \int^T_0 \big( \tilde X^n_s - X_s \big)^2 \,\mathrm ds \bigg) } + \sqrt{ \mathbb E\bigg( \mathbb 1_{\Omega\backslash M^{\beta,\nu}} \int^T_0 \big( \tilde X^n_s - X_s \big)^2 \,\mathrm ds \bigg) }.$$
The first summand can be bounded directly by $$3 \beta \sqrt T$$ using Lemma 3. The second summand can be bounded, using the Hölder inequality, by
$$\mathbb E\bigg( \int^T_0 \mathbb 1_{\Omega\backslash M^{\beta,\nu}} \big( \tilde X^n_s - X_s \big)^2 \,\mathrm ds \bigg) \\\le \sqrt{ \mathbb E\bigg( \int^T_0 \mathbb 1_{\Omega\backslash M^{\beta,\nu}} \,\mathrm ds \bigg) } \sqrt{ \mathbb E\bigg( \int^T_0 \big( \tilde X^n_s - X_s \big)^4 \,\mathrm ds \bigg) } = \sqrt T \sqrt{1 - \mathbb P\big(M^{\beta,\nu}\big) } \sqrt{ \mathbb E\bigg( \int^T_0 \big( \tilde X^n_s - X_s \big)^4 \,\mathrm ds \bigg) }.$$
The first factor can be made arbitrarily small by choosing $$\nu$$ small enough depending on $$\beta$$, due to Lemma 1, and the second factor is bounded due to the boundedness of $$\sigma$$ and Lemma 2.
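The construction can be illustrated with an Euler simulation (a sketch under assumptions of my own: constant $$\sigma$$, and the ODE written in the relaxation form $$\mathrm d\tilde X^n_t = n(X_t - \tilde X^n_t)\,\mathrm dt$$, matching the remark that $$\tilde X^n$$ always moves in the direction of $$X$$). Since $$n\,\mathrm dt < 1$$, each Euler step is a convex combination of the old value and the current value of $$X$$, which is the discrete analogue of Lemma 2.

```python
import math, random

random.seed(1)
T, steps, sigma_bar, n = 1.0, 10000, 0.5, 200.0
dt = T / steps

X = [0.0]   # X_t = int_0^t sigma_s dW_s, with constant sigma_s = sigma_bar for simplicity
Y = [0.0]   # Euler approximation of the smoothing process tilde X^n
for _ in range(steps):
    dW = random.gauss(0.0, math.sqrt(dt))
    X.append(X[-1] + sigma_bar * dW)
    # relaxation step: Y_{k+1} = (1 - n dt) Y_k + n dt X_{k+1}, a convex combination
    Y.append(Y[-1] + n * (X[-1] - Y[-1]) * dt)

sup_X = max(abs(v) for v in X)
sup_Y = max(abs(v) for v in Y)
track_err = max(abs(x - y) for x, y in zip(X, Y))
print(sup_X, sup_Y, track_err)  # sup_Y never exceeds sup_X (cf. Lemma 2)
```

Increasing n tightens the tracking error at the price of a less smooth derivative, which is exactly the trade-off quantified by Lemmas 1 and 3.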

f(z) = (z^3 + z^2 - z - 1)/(z - 1): How can I show that f is entire (complex differentiable)?

I know that I should compute the limit of (f(z + h) - f(z))/h as h -> 0, but it is getting a bit complicated. I would be grateful for any hint.
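Hint, checked numerically (a sketch, not a proof): polynomial long division suggests $$z^3 + z^2 - z - 1 = (z-1)(z+1)^2$$, so away from $$z = 1$$ the quotient agrees with a polynomial, and the singularity at $$z = 1$$ is removable.

```python
def quotient(z):
    return (z**3 + z**2 - z - 1) / (z - 1)

# if the factorization z^3 + z^2 - z - 1 = (z - 1)(z + 1)^2 is right,
# the quotient should equal (z + 1)^2 wherever it is defined
for z in (2 + 1j, -0.5 + 0.25j, 3.0, 1 + 1e-6):
    print(z, abs(quotient(z) - (z + 1)**2))  # differences at rounding level
```

Once the factorization is verified by hand, differentiating the polynomial $$(z+1)^2$$ is immediate, with no messy limit needed.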

Möbius transformation – At which points is the following function complex differentiable?

Let $$a,b,c,d \in \Bbb C$$ s.t. $$cd \neq 0$$. We define
$$f(z)=\frac{az+b}{cz+d}.$$
I am looking for the points where $$f$$ is complex differentiable. I thought of using the Cauchy–Riemann equations, but I don't know how to use them here…

Many thanks for some help!
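One way to probe complex differentiability numerically (a sketch with sample coefficients of my own, $$a,b,c,d = 1,2,1,3$$, so $$cd \neq 0$$): at a point away from the pole $$z = -d/c$$, the difference quotient should approach the same value along every direction in the complex plane, which is the geometric content of the Cauchy–Riemann equations.

```python
def f(z):
    # sample Mobius map with a, b, c, d = 1, 2, 1, 3 (so c*d != 0)
    return (z + 2) / (z + 3)

z0 = 0.0 + 0.0j
h = 1e-6
# difference quotients along four directions: real, imaginary, negative real, diagonal
dirs = (1, 1j, -1, (1 + 1j) / abs(1 + 1j))
quotients = [(f(z0 + h * d) - f(z0)) / (h * d) for d in dirs]
print(quotients)  # all close to the same value, here 1/(0+3)**2 = 1/9
```

This suggests (but does not prove) that $$f$$ is complex differentiable wherever $$cz + d \neq 0$$; the Cauchy–Riemann route confirms it, since quotients of holomorphic functions are holomorphic away from zeros of the denominator.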

derivatives – f(x) is differentiable and strictly monotonically increasing on the OPEN interval (a,b)

f(x) is a function that is differentiable and strictly monotonically increasing on the OPEN interval (a,b). Prove or disprove that f'(c) > 0 for all c in (a,b). I was trying to apply Lagrange's mean value theorem, but the condition that the function must be continuous on the CLOSED interval [a,b] is not given. I'm assuming the answer should be false, but I can't prove that it's false or give a counterexample. Can we find a way to apply Lagrange's theorem? Then, for sure, the statement would be true.