elliptic pde – A classical question of uniqueness in a constrained minimization problem

Consider the following constrained minimization problem

$$
\inf_{\|u\|_p = 1} \int_{\mathbb{R}^N} |\nabla u|^2 + V(x) u^2 \, dx
$$

where $\|\cdot\|_p$ is the $L^p$ norm, $2 < p < \frac{2N}{N-2}$, and $V(x)$ is nonnegative with $\lim_{|x| \to \infty} V(x) = \infty$.

This is the classical approach to constructing a weak solution of the following semilinear PDE
$$
-\Delta u + V(x) u = |u|^{p-2} u \quad \text{in } \mathbb{R}^N.
$$

This uses the homogeneity of the nonlinear term.
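To spell out the role of homogeneity (a standard computation, sketched here for completeness): a minimizer $u$ with $\|u\|_p = 1$ satisfies the Euler–Lagrange equation with a Lagrange multiplier, which can then be scaled away:

```latex
% Euler--Lagrange equation of the constrained problem (\lambda = multiplier):
-\Delta u + V(x)\,u = \lambda\,|u|^{p-2}u, \qquad \|u\|_p = 1 .
% Pairing with u gives \lambda = \int |\nabla u|^2 + V u^2 \, dx > 0, and the
% homogeneity of the nonlinearity lets us rescale: v := \lambda^{1/(p-2)} u
% then satisfies
-\Delta v + V(x)\,v = |v|^{p-2}v .
```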

The existence of a minimizer is a classical and relatively easy result, since the growth assumption on $V$ rules out the mass of a minimizing sequence escaping to infinity, and thus restores compactness of every minimizing sequence.

I am looking for uniqueness or non-uniqueness results for this minimization problem. I have tried to google something inspiring, but without much success so far.

For the classical special case $V(x) = |x|^2$, which is radial, I wonder whether the minimizer has to be radial. If so, the corresponding Euler–Lagrange ODE might help us prove uniqueness.
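As a purely numerical experiment (not evidence of a proof), one can discretize a 1D toy version of the problem with $V(x) = x^2$ and run a constrained gradient descent from a deliberately non-symmetric initial guess; the computed minimizer comes out even, consistent with the radial-symmetry guess. Everything below (grid, step size, the choice $p = 4$) is an illustrative assumption, not taken from the question:

```python
import numpy as np

# 1D toy version:  minimize E(u) = ∫ |u'|^2 + x^2 u^2 dx  subject to ||u||_4 = 1
# (so p = 4), via gradient step + tangent projection + renormalization.
L, n = 8.0, 201
x = np.linspace(-L, L, n)
h = x[1] - x[0]
V = x**2

u = np.exp(-(x - 1.0)**2)                  # deliberately asymmetric initial guess
u /= (h * np.sum(u**4))**0.25              # enforce ||u||_4 = 1

dt = 1e-3
for _ in range(20_000):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2.0*u[1:-1] + u[:-2]) / h**2   # Dirichlet Laplacian
    g = -lap + V * u                       # gradient of E (up to a factor 2)
    nrm = u**3                             # normal direction of {||u||_4 = 1}
    g -= (np.sum(g * nrm) / np.sum(nrm * nrm)) * nrm    # project onto tangent
    u -= dt * g
    u /= (h * np.sum(u**4))**0.25          # retract back onto the constraint

asymmetry = np.max(np.abs(u - u[::-1]))    # ≈ 0 iff the computed minimizer is even
```

At a fixed point of this iteration, $-u'' + x^2 u = \mu u^3$ for some multiplier $\mu > 0$, i.e. the Euler–Lagrange equation of the toy problem.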

Thanks for any discussion or reference.

Differential geometry – local uniqueness of the metric for locally maximally symmetric spaces

I'm currently studying maximally symmetric spaces, physics style. So I am mainly interested in purely local results.

I define a (locally) maximally symmetric space as a pseudo-Riemannian manifold that admits $n(n+1)/2$ independent Killing vector fields.

I have found that this is equivalent to the curvature tensor being of the form $$ R_{\kappa\lambda\mu\nu} = K (g_{\mu\kappa} g_{\nu\lambda} - g_{\mu\lambda} g_{\nu\kappa}), $$ where $K$ is a constant.

The book Gravitation and Cosmology by Weinberg has a theorem saying that if $\bar g_{\mu'\nu'}(x')$ and $g_{\mu\nu}(x)$ are two metrics (since this is purely local, I basically work in an open set of $\mathbb R^n$) which have the same signature and are both maximally symmetric, so that (assuming the Einstein summation convention in this post) $$ \bar R_{\kappa'\lambda'\mu'\nu'} = K (\bar g_{\mu'\kappa'} \bar g_{\nu'\lambda'} - \bar g_{\mu'\lambda'} \bar g_{\nu'\kappa'}) \\ R_{\kappa\lambda\mu\nu} = K (g_{\mu\kappa} g_{\nu\lambda} - g_{\mu\lambda} g_{\nu\kappa}) $$ for the same constant $K$, then the two metrics $\bar g_{\mu'\nu'}$ and $g_{\mu\nu}$ differ by a coordinate transformation, i.e. there are functions $$ x^{\mu'} = \Phi^{\mu'}(x) $$ such that $$ g_{\mu\nu}(x) = \bar g_{\mu'\nu'}(\Phi(x)) \frac{\partial \Phi^{\mu'}}{\partial x^\mu}(x) \frac{\partial \Phi^{\nu'}}{\partial x^\nu}(x). $$

Weinberg proves this by explicitly constructing a coordinate transformation via a power series. It is long and ugly.


I thought there is probably an easier way.

Namely, if $\bar g$ and $g$ are two metrics of the same signature, and $\bar\theta^{a'}$ is a $\bar g$-orthonormal coframe while $\theta^a$ is a $g$-orthonormal coframe, then the two metrics are equal if and only if the two coframes differ by a generalized orthogonal transformation (a Lorentz transformation in general relativity), i.e. there is an $\mathrm O(n-s, s)$-valued function $\Lambda$ on the open set such that $$ \bar\theta^{a'} = \Lambda^{a'}{}_{a}\, \theta^a. $$

But even if that is not true, there must be a $\mathrm{GL}(n, \mathbb R)$-valued function $L$ such that $$ \bar\theta^{a'} = L^{a'}{}_{a}\, \theta^a. $$

So I thought I could probably prove the statement by showing that $L$ is actually a (generalized) orthogonal transformation.


The curvature form of a (locally) maximally symmetric space has the simple form $$ \mathbf R^{ab} = K\,\theta^a \wedge \theta^b \\ \mathbf R^{a'b'} = K\,\bar\theta^{a'} \wedge \bar\theta^{b'}. $$

My strategy was to take the "primed" quantities in the "primed" frame and transform them (via the possibly non-orthogonal $L$) into the "unprimed" frame.

For example, for the metric we have $\bar g_{a'b'} \equiv \eta_{a'b'}$ (where $\eta$ is the canonical symbol associated with a metric of the given signature, e.g. the Minkowski symbol for general relativity), but in the unprimed frame it is $\bar g_{ab}$, which is not necessarily "Minkowskian".

I tried to construct the curvature form directly out of the coframe and compare it with the expression I listed above, in the hope of arriving at a relation that implies one of $$ \bar g_{ab} = \eta_{ab} \\ \bar\gamma^{ab} = -\bar\gamma^{ba}, $$ which would immediately imply that $L$ is actually a generalized orthogonal transformation, but I have reached no useful conclusion.

Question: Can I prove this statement (namely, that two locally maximally symmetric spaces of the same dimension, signature, and value of $K$ are locally isometric) using this orthonormal coframe method?

If yes, how does it work? I'm pretty stuck with it.

Functional Analysis – Uniqueness of the weak solution of $-u'' = -u^2$ on $(0,1)$, $u(0) = u(1) = 0$

Show that there is more than one solution $u \in H_0^1(0,1)$ of $$ -u'' = -u^2 \quad \text{on } (0,1), \qquad u(0) = u(1) = 0. $$

The weak formulation is $\int u'v' \, dx = \int -u^2 v \, dx$ for all $v \in H_0^1$. Do I have to test it with suitably chosen functions somehow?

The question comes from the problem $$ -\Delta u = -u^2 \ \text{in } \Omega, \qquad u \geq 0 \ \text{in } \Omega, \qquad u = 0 \ \text{on } \partial\Omega, $$ which has a unique solution.
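A numerical illustration (not a proof) of why a second solution should be expected: besides $u \equiv 0$, shooting on the initial slope for the equivalent ODE $u'' = u^2$ produces a nontrivial solution that is negative in the interior. The bracket, step count, and tolerances below are ad-hoc choices:

```python
import numpy as np

# Shoot on s = -u'(0) for  u'' = u^2,  u(0) = 0,  aiming for u(1) = 0.
# For small s the orbit is still below zero at x = 1; for large s it has
# re-crossed zero and blown up, so bisection finds a nontrivial root.

def shoot(s, n=2000):
    """Integrate u'' = u^2 with u(0) = 0, u'(0) = -s by RK4; return u(1)."""
    h = 1.0 / n
    y, v = 0.0, -s
    f = lambda y, v: (v, y * y)          # (u', u'') as a first-order system
    for _ in range(n):
        k1 = f(y, v)
        k2 = f(y + 0.5*h*k1[0], v + 0.5*h*k1[1])
        k3 = f(y + 0.5*h*k2[0], v + 0.5*h*k2[1])
        k4 = f(y + h*k3[0],     v + h*k3[1])
        y += h * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0]) / 6
        v += h * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1]) / 6
        if y > 1e6:                      # already re-crossed zero and blowing up
            return 1e6
    return y

lo, hi = 1.0, 1000.0                     # shoot(lo) < 0 < shoot(hi)
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if shoot(mid) < 0:
        lo = mid
    else:
        hi = mid
s_star = 0.5 * (lo + hi)                 # slope of the nontrivial solution
```

The resulting $u$ is negative on $(0,1)$, so the sign condition $u \geq 0$ in the quoted problem on $\Omega$ is exactly what this one-dimensional example violates.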

Uniqueness of the $2$-category theory of $\infty$-categories?

Disclaimer: I'm not going to be very precise about the set-theoretical foundations of the question, as I think these are not the most important points here (I apologize in advance for any set-theoretic errors or inaccuracies in the question – my set-theoretic background is not very extensive). In particular, I assume that there are enough large cardinals for everything that follows to work.

It might be helpful to imagine that we are watching a kind of game with two players: the theorist and the skeptic. The theorist and the skeptic agree on the $2$-category theory of small and large $1$-categories, that is, on the nested pair of $2$-categories (the objects of the latter form a class, or a set of even bigger cardinality – I will not dwell on this point, because I believe it will not be very important in what follows):

$$ \mathsf{cat}_1 \subset \mathsf{CAT}_1 $$

The theorist then presents his favorite model of $\infty$-categories. Unfortunately, the skeptic is very much inclined not to accept the model proposed by the theorist as such. However, his attitude is more nuanced:

  • The skeptic does not accept any other model of $\infty$-category theory out there either.

  • The skeptic only accepts arguments carried out within the standard set-theoretical foundations of mathematics, being lenient, if necessary, towards large cardinal axioms (whatever that means). In particular, he does not accept arguments formulated in alternative foundations such as HoTT and so on.

  • The skeptic is only willing to accept $1$-categorical and $2$-categorical concepts. He will not accept model categories as more than a tool for proving things about their homotopy categories, and so on.

Abiding by the above restrictions, here is what the skeptic asks of the theorist.

  1. Size issues – The skeptic asks the theorist to consult his model and provide him with a fully faithful inclusion of homotopy $2$-categories
    $$ \mathsf{cat}_{\infty} \subset \mathsf{CAT}_{\infty} $$ whose objects are small and large $(\infty, 1)$-categories, with functors as $1$-morphisms and equivalence classes of natural transformations as $2$-morphisms.

  2. Compatibility with classical category theory – The theorist must provide fully faithful inclusions $\mathsf{cat}_1 \subset \mathsf{cat}_{\infty}$ and $\mathsf{CAT}_1 \subset \mathsf{CAT}_{\infty}$ compatible with (1). We denote their left adjoints by $ho(-)$.

  3. Cartesian closedness – The skeptic checks that all the $2$-categories in (1) are Cartesian closed (this gives us functor $\infty$-categories, in particular diagram categories).

  4. Subcategory of $\infty$-groupoids – Using the $1$-category $\Delta^1$ we get from $(2)$ and the diagram categories we get from $(3)$, he can define the full subcategory of small $\infty$-groupoids $\mathsf{grpd}_{\infty} \subset \mathsf{cat}_{\infty}$ on those $\infty$-categories all of whose arrows are invertible.

  5. The $\infty$-category of spaces – The skeptic defines a pseudofunctor $LFib_{/-}: \mathsf{CAT}_{\infty}^{op} \to \mathsf{CAT}_{1}$ by the following:
    $$ \mathcal{C} \mapsto LFib_{/\mathcal{C}} := \{ \text{functors } \mathcal{E} \to \mathcal{C} \text{ with small fibers and s.t. the fibers of} $$

    $$ \mathcal{E}^{\Delta^1} \to \mathcal{E}^{\{0\}} \times_{\mathcal{C}^{\{0\}}} \mathcal{C}^{\Delta^1} \text{ are all terminal} \} $$ He then checks that it is representable, via the $2$-Yoneda embedding, by an $\infty$-category $\mathcal{S} \in \mathsf{CAT}_{\infty}$, the $\infty$-category of spaces.

  6. Compatibility with the homotopy theory of spaces – The theorist must provide an equivalence between $ho(\mathcal{S}) \in \mathsf{CAT}_1$ and the well-known homotopy category of topological spaces: $ho(\mathrm{Top}) \cong ho(\mathcal{S}) \in \mathsf{CAT}_1$.

  7. The internal $\infty$-category of $\infty$-categories – The skeptic repeats step $(5)$, only this time with the pseudofunctor $CoCart_{/-}: \mathsf{CAT}_{\infty}^{op} \to \mathsf{CAT}_1$ of (small) cocartesian fibrations, as well as $CoCart^{\le 1}_{/-}$ of cocartesian fibrations whose fibers are $1$-categories. Then he checks that both are representable and that the natural inclusion $CoCart^{\le 1}_{/-} \hookrightarrow CoCart_{/-}$ is represented by a fully faithful functor $\mathfrak{cat}_1 \hookrightarrow \mathfrak{cat}_{\infty} \in \mathsf{CAT}_{\infty}$.

Remark: I assume in $(7)$ that the $1$-category whose objects are cocartesian fibrations over a fixed $\infty$-category, and whose morphisms are equivalence classes of functors, can be constructed using only the homotopy $2$-category truncation of the full $(\infty, 2)$-category of $\infty$-categories. I am not sure whether this is a valid assumption. I think the work of Riehl and Verity at least suggests that some version of it might be true. Question: Is this step really possible?

  8. Compatibility with the internal category of $1$-categories – The theorist must provide an equivalence $ho(\mathfrak{cat}_1) \cong \mathsf{cat}_1 \in \mathsf{CAT}_1$.

  9. Compatibility with $\mathcal{S}$ – The skeptic notes that the obvious natural transformation between the pseudofunctors of $(5)$ and $(7)$ gives rise to a fully faithful inclusion $\mathcal{S} \hookrightarrow \mathfrak{cat}_{\infty}$ admitting both a left and a right adjoint, denoted by $|-|$ and $(-)^{\cong}$ respectively.

  10. (Co)completeness – The skeptic checks that $\mathfrak{cat}_{\infty}$ admits all (small) $\infty$-limits/colimits. That is, for every small $\infty$-category $\mathfrak{I} \in \mathsf{cat}_{\infty}$, the constant-diagram functor $\Delta: \mathfrak{cat}_{\infty} \to \mathfrak{cat}_{\infty}^{\mathfrak{I}}$ admits both a left and a right adjoint (it then follows from $(9)$ that the same holds for $\mathcal{S}$).

  11. Internal Cartesian closedness – The skeptic checks that $\mathfrak{cat}_{\infty}$ is Cartesian closed (i.e. $\Delta: \mathfrak{cat}_{\infty} \to \mathfrak{cat}_{\infty}^{\times 2} \in \mathsf{CAT}_{\infty}$ from $(10)$ admits a further right adjoint).

  12. Compactness of $\bullet$ and $\Delta^1$ – By $(8)$ we can think of small $1$-categories as objects of $\mathfrak{cat}_{\infty}$; in particular, we have the objects $\bullet, \Delta^1 \in \mathfrak{cat}_{\infty}$. The skeptic checks that $(-)^{\cong}: \mathfrak{cat}_{\infty} \to \mathcal{S}$ and $((-)^{\Delta^1})^{\cong}: \mathfrak{cat}_{\infty} \to \mathcal{S}$ preserve $\infty$-colimits indexed by filtered $1$-categories.

  13. Compact generation – The skeptic checks that any full subcategory of $\mathfrak{cat}_{\infty}$ closed under all (small) $\infty$-limits/colimits and containing $\bullet, \Delta^1$ is all of $\mathfrak{cat}_{\infty}$. The same for $\mathcal{S}$ and $\bullet \in \mathcal{S}$.

  14. Free generation of $\mathcal{S}$ – For every cocomplete (large) $\infty$-category $\mathcal{C} \in \mathsf{CAT}_{\infty}$, the skeptic defines the full subcategory $Fun^{cont}(\mathcal{S}, \mathcal{C}) \subset \mathcal{C}^{\mathcal{S}}$ of the functor category (which exists by Cartesian closedness) on those functors that preserve all (small) $\infty$-colimits. He verifies that evaluation yields an equivalence $Fun^{cont}(\mathcal{S}, \mathcal{C}) \cong \mathcal{C} \in \mathsf{CAT}_{\infty}$.

  15. Self-consistency – Using $(11)$, $(7)$, $(8)$ and $(2)$, the skeptic defines a natural enrichment of $ho(\mathfrak{cat}_{\infty}) \in \mathsf{CAT}_1$ over $ho(\mathfrak{cat}_1) \cong \mathsf{cat}_1$, obtaining the homotopy $2$-category $ho_2(\mathfrak{cat}_{\infty})$ of the internal $\infty$-category of small $\infty$-categories. The theorist must provide an equivalence of $2$-categories $ho_2(\mathfrak{cat}_{\infty}) \cong \mathsf{cat}_{\infty}$.

  16. Homotopy hypothesis – The skeptic uses the equivalence of $(15)$ to identify the subcategory $\mathsf{grpd}_{\infty} \subset \mathsf{cat}_{\infty}$ with a full subcategory $\mathsf{grpd}_{\infty} \subset \mathfrak{cat}_{\infty}$. He then checks that it essentially coincides with $\mathcal{S}$ (that is, they are both full subcategories on the same equivalence classes of objects).

I am sure that some of the above steps are redundant, as they follow from a combination of the others. I realize that this could be an unreasonably big question, but I suddenly feel brave enough to ask (for better or worse).

Question (imprecise): Could these requirements be strong enough to pin down the homotopy $2$-category theory of $\infty$-categories completely?

To make the question precise, one can imagine that the theorist, instead of just small and large, has to supply a homotopy $2$-category $CAT^{\kappa}_{\infty}$ of $\infty$-categories of size $\kappa$ for every inaccessible cardinal $\kappa$, together with compatible inclusions (one probably wants to restrict to a nice family of cardinals compatible with one another; not very sure about this set-theoretical issue, to be honest…). Then all of the above steps are carried out with pairs of cardinals $\kappa_1 \in \kappa_2$, with "small" replaced by $\kappa_1$ and "large" replaced by $\kappa_2$. The question then becomes:

Question (almost precise): Are two collections of $2$-categories $\{CAT^{\kappa}_{\infty}\}_{\kappa}$ satisfying the above list of requirements necessarily equivalent (as $2$-category-valued filtered diagrams, compatibly with all the additional data in the list above)?

Differential equations – Rationale for the uniqueness of dispersive PDE solutions

For the sake of concreteness, consider the linear Schrödinger equation
$$
\partial_t u = i \Delta u, \qquad u(0, x) = u_0(x).
$$

The solution is typically obtained by taking the Fourier transform of both sides, which yields $\widehat{\partial_t u}(t, \xi) = -i |\xi|^2 \hat{u}(t, \xi)$.

The next step is where I have questions. Provided that everything is nice enough (for example, in Tao's book he assumes $u_0$ is Schwartz), a dominated convergence argument gives $\widehat{\partial_t u}(t, \xi) = \partial_t \hat{u}(t, \xi)$, and then we get an ODE whose solution is
$$
\hat{u}(t, \xi) = e^{-it|\xi|^2} \hat{u}_0(\xi) \implies u(t, x) = e^{it\Delta} u_0(x).
$$

This is then called "the solution of the Schrödinger equation with initial data $u_0$."

My question: How do we know that there are no other solutions, which may not meet the decay/smoothness criteria needed to justify pulling the time derivative through the Fourier transform of $u$? I agree that there are no other solutions $u$ that are "nice enough" to justify this. But how do we exclude the existence of solutions $u$ such that $\partial_t \hat{u} \neq \widehat{\partial_t u}$? For example, I do not see how merely assuming $u_0$ is Schwartz is enough to guarantee this.
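For whatever it's worth, the Fourier-side formula is easy to sanity-check numerically (this of course says nothing about uniqueness, which is the question). The sketch below uses a periodic grid as a stand-in for $\mathbb R$, with arbitrary grid and box parameters, and verifies two consequences of $\hat u(t,\xi) = e^{-it|\xi|^2}\hat u_0(\xi)$: conservation of the $L^2$ norm and the group law $e^{it\Delta} e^{is\Delta} = e^{i(t+s)\Delta}$:

```python
import numpy as np

# Periodic 1D surrogate: apply the free Schrödinger propagator via FFT.
n, L = 512, 40.0
x = (np.arange(n) - n // 2) * (L / n)
xi = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # Fourier frequencies
u0 = np.exp(-x**2)                            # Gaussian initial datum

def schrodinger(u, t):
    """Apply e^{i t Δ}: multiply by e^{-i t ξ²} on the Fourier side."""
    return np.fft.ifft(np.exp(-1j * t * xi**2) * np.fft.fft(u))

u1 = schrodinger(u0, 0.7)
norm0 = np.sqrt(np.sum(np.abs(u0)**2) * L / n)   # discrete L² norm of u0
norm1 = np.sqrt(np.sum(np.abs(u1)**2) * L / n)   # ... of the evolved state
group = schrodinger(schrodinger(u0, 0.3), 0.4)   # should equal u1 exactly
```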

Any help is greatly appreciated.

Probability – existence and uniqueness of a stationary measure

The same question was also asked on MSE: https://math.stackexchange.com/questions/3327007/existence-and-uniqueness-of-a-stationary-measure.

Recently I asked the following question on MO: Attractors in Random Dynamics.

Let $\Delta$ be the interval $(-1,1)$. Then we can consider the probability space $(\Delta, \mathcal{B}(\Delta), \nu)$, where $\mathcal{B}(\Delta)$ is the Borel $\sigma$-algebra and $\nu$ is half the Lebesgue measure.

Then we can equip the space $\Delta^{\mathbb{N}} := \{(\omega_n)_{n \in \mathbb{N}};\ \omega_n \in \Delta\ \forall n \in \mathbb{N}\}$ with the $\sigma$-algebra $\mathcal{B}(\Delta^{\mathbb{N}})$ (the Borel $\sigma$-algebra of $\Delta^{\mathbb{N}}$ induced by the product topology) and the probability measure $\nu^{\mathbb{N}}$ on the measurable space $(\Delta^{\mathbb{N}}, \mathcal{B}(\Delta^{\mathbb{N}}))$, such that
$$ \nu^{\mathbb{N}}\left( A_1 \times A_2 \times \ldots \times A_n \times \prod_{i = n+1}^{\infty} \Delta \right) = \nu(A_1) \cdot \ldots \cdot \nu(A_n). $$

Now let $\sigma > 2/(3\sqrt{3})$ be a real number and define
$$ x_-^*(\sigma) = \text{the unique real root of the equation } x^3 + \sigma = x, $$
$$ x_+^*(\sigma) = \text{the unique real root of the equation } x^3 - \sigma = x; $$
it is easy to see that $x_+^*(\sigma) = -x_-^*(\sigma)$.

We can then define the function
$$ h: \mathbb{N} \times \Delta^{\mathbb{N}} \times (x_-^*(\sigma), x_+^*(\sigma)) \to (x_-^*(\sigma), x_+^*(\sigma)) $$
in the following recursive way:

  • $h(0, (\omega_n)_{n}, x) = x$, $\forall (\omega_n)_n \in \Delta^{\mathbb{N}}$ and $\forall x$;
  • $h(i+1, (\omega_n)_{n}, x) = \sqrt[3]{h(i, (\omega_n)_{n}, x) + \sigma \omega_i}.$

In this way, for every $x$ and $(\omega_n)_n \in \Delta^{\mathbb{N}}$, we define the sequence
$$ \left\{ x,\ \sqrt[3]{x + \sigma \omega_1},\ \sqrt[3]{\sqrt[3]{x + \sigma \omega_1} + \sigma \omega_2},\ \sqrt[3]{\sqrt[3]{\sqrt[3]{x + \sigma \omega_1} + \sigma \omega_2} + \sigma \omega_3},\ \ldots \right\}. $$

Now define the following family of Markov kernels:
$$ P_n(x, A) = \nu^{\mathbb{N}}\left( \left\{ (\omega_n)_{n \in \mathbb{N}} \in \Delta^{\mathbb{N}};\ h(n, (\omega_n)_{n \in \mathbb{N}}, x) \in A \right\} \right). $$

A probability measure $\mu$ on $((x_-^*(\sigma), x_+^*(\sigma)), \mathcal{B}((x_-^*(\sigma), x_+^*(\sigma))))$ is called a stationary measure if

$$ \mu(A) = \int_{(x_-^*(\sigma), x_+^*(\sigma))} P_1(x, A)\, \text{d}\mu(x) \quad \forall A \in \mathcal{B}((x_-^*(\sigma), x_+^*(\sigma))), $$
where $\mathcal{B}((x_-^*(\sigma), x_+^*(\sigma)))$ is the Borel $\sigma$-algebra. Moreover, since $(x_-^*(\sigma), x_+^*(\sigma))$ is a bounded invariant interval, it is easy to prove that there is at least one stationary measure.

The answer I received on MO suggests that there is only one stationary measure.


Does anyone know if that's true? A pointer to such a result would be enough for my purposes.

postgresql – Ensuring uniqueness of values in a bigint array created by merging two bigint arrays

What is the most efficient way to ensure the uniqueness of the values in a bigint array created by merging two other bigint arrays?
For example, the operation select ARRAY[1,2] || ARRAY[2,3] should give 1,2,3 as a result. I checked the intarray extension and, as I can see, it does not work with bigint.

Probability – Uniqueness of the martingale problem for a Lévy-type operator

Consider the following Lévy-type operator:
$$
L_t \varphi(x) = \int_{R^d} \big( \varphi(x+z) - \varphi(x) - 1_{|z| \leq 1}\, z \cdot \nabla \varphi(x) \big)\, \kappa(x, z)\, \nu(dz), \quad \forall \varphi \in C_c^2(R^d),
$$

where $\nu$ is a symmetric Lévy measure, $\kappa(\cdot, z) \in C^\infty(R^d)$ (smooth in $x$, but possibly degenerate and unbounded), and for every $x$,
$$
\int_{R^d} (|z|^2 \wedge 1)\, \kappa(x, z)\, \nu(dz) < \infty.
$$

Does the martingale problem for $L$ then have a unique solution? I think the conclusion should be true, because the coefficient is smooth, but I cannot find a reference. Thanks for your help.

Algorithms – Latest hash functions to test their speed and uniqueness

I'm a tenth grader and need to do a research project. I'm doing a science-fair experiment on "The Impact of Different Cryptographic Hash Functions on Decryption Times and Uniqueness". To test this, I wanted to know what software/platform I should use to test these different hash functions, and what are the latest hash functions I should use. I only need 3-5 functions, and my control is SHA-256 because it is the most widely used one. Which other newer hash functions should I use, and what makes them special?

Here's a link to a good amount of data I found, but it's pretty outdated and I wanted to know if there are newer functions: Which hashing algorithm works best for uniqueness and speed?

It can still be a science-fair experiment, as I'm testing 200,000 words with each hash function. That's basically what I'm testing; I'm sorry if that was confusing. The output is what I will put into my data table.
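A minimal sketch of the kind of test described above, using Python's standard hashlib module: hash a list of words with several algorithms, time each pass, and count distinct digests as the collision check. The word list here is a synthetic stand-in for your 200,000-word corpus, and the algorithm list is just one reasonable choice (SHA-256 as the control, SHA3-256 and BLAKE2 as newer functions, MD5 as an old baseline):

```python
import hashlib
import time

# Synthetic corpus standing in for a real 200,000-word list.
words = [f"word{i}".encode() for i in range(200_000)]
algorithms = ["sha256", "sha3_256", "blake2b", "md5"]   # sha256 = control

for name in algorithms:
    start = time.perf_counter()
    digests = {hashlib.new(name, w).hexdigest() for w in words}
    elapsed = time.perf_counter() - start
    # len(digests) == len(words) means no collisions on this corpus.
    print(f"{name:10s} {elapsed:.3f}s  unique digests: {len(digests)}")
```

The timings go straight into a data table; the "unique digests" column is the uniqueness measurement (for any decent hash function it will equal the corpus size, so differences show up mainly in speed).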

Many thanks

SQL Server – Index Uniqueness overhead

I have had an ongoing debate with various developers in my office about the cost of an index and whether uniqueness is beneficial or costly (probably both). At the heart of the issue are our competing approaches.

background

I have previously read a discussion claiming that a unique index carries no additional maintenance cost, since an insert operation implicitly checks where the row fits into the B-tree and, for a non-unique index, appends a uniquifier to the end of the key if a duplicate is found, but otherwise inserts directly. In this sequence of events, a unique index has no additional cost.

My colleague disputes this statement, saying that uniqueness is enforced as a second operation after finding the new position in the B-tree, and is therefore more expensive to maintain than a non-unique index.

At worst, I've seen tables with an identity column (inherently unique) that is the table's clustering key but is explicitly declared non-unique. At the other extreme is my obsession with uniqueness: all my indexes are created as unique, and when it is not possible to define an explicitly unique relationship for an index, I append the table's PK to the end of the index so that uniqueness is guaranteed.

question

Does uniqueness add extra cost on the back end of an insert, compared to the cost of maintaining a non-unique index? Secondly, is there anything wrong with appending the primary key of a table to the end of an index to ensure uniqueness?

Example table definition

create table #test_index
(
    id int not null identity (1, 1),
    dt datetime not null default (current_timestamp),
    val varchar (100) not null,
    is_deleted bit not null default (0),
    primary key nonclustered (id desc),
    unique clustered (dt desc, id desc)
);

create index
    [nonunique_nonclustered_example]
on #test_index
    (is_deleted)
include
    (val);

create unique index
    [unique_nonclustered_example]
on #test_index
    (is_deleted, dt desc, id desc)
include
    (val);