## Reference Request – Can we get better results than Azuma–Hoeffding if the variance is small?

The Azuma–Hoeffding inequality says that if $$X_1, X_2, \ldots$$ is a martingale whose differences are bounded by constants, say $$|X_i - X_{i-1}| \le 1$$, then we should not expect the difference $$|X_N - X_0|$$ to grow too fast. Formally, we have

$$P\big(|X_N - X_0| > \epsilon N\big) \le \exp\Big(\frac{-\epsilon^2 N}{2}\Big)$$ for every $$\epsilon > 0$$.

Note that the inequality says nothing about how the variables $$X_i$$ are distributed; it only uses the fact that the differences are bounded. For $$X_i$$ with small variance we would expect stronger concentration results.

Suppose the $$A_1, A_2, \ldots$$ are drawn independently from $$\mathcal N(0, \sigma^2)$$ distributions truncated outside $$[-1,1]$$, and set $$X_i = A_1 + \ldots + A_i$$. The above inequality does not distinguish between the cases in which $$\sigma^2$$ is large and small. If it is small, we should expect higher concentration around the mean. In the degenerate case when $$\sigma^2 = 0$$, every $$A_i \equiv 0$$ and the left-hand side is exactly zero.
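To see the gap concretely, here is a small simulation sketch (my own, with arbitrary parameter choices): it estimates the tail probability for the truncated-Gaussian walk described above and compares it with the Azuma–Hoeffding bound, which does not depend on $$\sigma^2$$.

```python
import numpy as np

# Illustrative sketch: compare the empirical tail P(|X_N - X_0| > eps*N)
# of the truncated-Gaussian walk with the (sigma-independent) Azuma bound
# exp(-eps^2 * N / 2). Parameter values here are arbitrary choices.
rng = np.random.default_rng(0)

def truncated_normal(sigma, size, rng):
    """Draw N(0, sigma^2) samples, rejecting anything outside [-1, 1]."""
    out = np.empty(size)
    filled = 0
    while filled < size:
        draw = rng.normal(0.0, sigma, size=size)
        keep = draw[np.abs(draw) <= 1.0]
        take = min(size - filled, keep.size)
        out[filled:filled + take] = keep[:take]
        filled += take
    return out

N, eps, trials = 100, 0.1, 5000
azuma_bound = np.exp(-eps**2 * N / 2)

results = {}
for sigma in (0.05, 0.5):
    # X_N = A_1 + ... + A_N for each trial
    increments = truncated_normal(sigma, N * trials, rng).reshape(trials, N)
    X_N = increments.sum(axis=1)
    results[sigma] = np.mean(np.abs(X_N) > eps * N)
    print(f"sigma={sigma}: empirical tail {results[sigma]:.4f} vs Azuma bound {azuma_bound:.4f}")
```

For small $$\sigma$$ the empirical tail sits far below the bound, which is exactly the slack the question asks about.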

Are there modifications of Azuma–Hoeffding that take into account the variances of the conditional variables $$X_{i+1} \mid X_i, \ldots, X_1$$? So far I have only found this paper from information theory; its Theorem 2 is a version of AH that includes the variance. However, that paper is fairly new, and it is likely that probabilists have considered the problem in the past.

Can someone point me in the right direction?

## MySQL: Should I use AFTER INSERT triggers to populate reference / join tables?

Could the use of MySQL triggers cause difficulties in the later development of the code? Should I instead maintain the following relationships in application code rather than in the DB?

I have the following tables:

```sql
CREATE TABLE `models` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `user` varchar(64) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE utf8mb4_unicode_ci;
```
```sql
CREATE TABLE `texts_model` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `text` varchar(1024) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE utf8mb4_unicode_ci;
```
```sql
CREATE TABLE `model_text_link` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `model_id` int(11) NOT NULL,
  `text_id` int(11) NOT NULL,
  `last_used` DATETIME NULL,
  PRIMARY KEY (`id`),
  KEY `model_id` (`model_id`),
  KEY `text_id` (`text_id`),
  CONSTRAINT `model_text_link_ibfk_1` FOREIGN KEY (`model_id`) REFERENCES `models` (`id`) ON DELETE CASCADE,
  CONSTRAINT `model_text_link_ibfk_2` FOREIGN KEY (`text_id`) REFERENCES `texts_model` (`id`) ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE utf8mb4_unicode_ci;
```

And the following triggers are used to populate the `model_text_link` table:

```sql
CREATE TRIGGER model_AI AFTER INSERT ON models FOR EACH ROW
BEGIN
  INSERT INTO model_text_link (model_id, text_id) SELECT NEW.id, id AS text_id FROM texts_model;
END;
```
```sql
CREATE TRIGGER texts_model_AI AFTER INSERT ON texts_model FOR EACH ROW
BEGIN
  INSERT INTO model_text_link (model_id, text_id) SELECT id, NEW.id FROM models;
END;
```

Every `user` row in the `models` table must be linked to every `text` row in the `texts_model` table, so that all texts are available to all users, while keeping a separate `last_used` timestamp per `text` per user in the `model_text_link` table.

With the current code, `model_text_link` records are deleted / inserted whenever `text` or `model` rows are deleted / inserted.

But most of the frameworks I've seen (for example, Symfony/Doctrine) handle this logic in the framework without using MySQL triggers, relying on FOREIGN KEY constraints for cleanup.
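For comparison, here is a minimal sketch of the application-code alternative, using Python's built-in `sqlite3` with a simplified version of the schema above (SQLite syntax, no AUTO_INCREMENT or engine options, so the DDL details differ from the MySQL version):

```python
import sqlite3

# Hypothetical sketch: maintain the join table from application code inside a
# transaction instead of an AFTER INSERT trigger. Table names mirror the
# schema above, but this is simplified SQLite, not the MySQL DDL.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE models (id INTEGER PRIMARY KEY, user TEXT NOT NULL);
CREATE TABLE texts_model (id INTEGER PRIMARY KEY, text TEXT NOT NULL);
CREATE TABLE model_text_link (
  id INTEGER PRIMARY KEY,
  model_id INTEGER NOT NULL REFERENCES models(id) ON DELETE CASCADE,
  text_id INTEGER NOT NULL REFERENCES texts_model(id) ON DELETE CASCADE,
  last_used TEXT NULL
);
""")

def add_model(conn, user):
    """Insert a model and link it to every existing text, atomically."""
    with conn:  # commits on success, rolls back on error
        cur = conn.execute("INSERT INTO models (user) VALUES (?)", (user,))
        model_id = cur.lastrowid
        conn.execute(
            "INSERT INTO model_text_link (model_id, text_id) "
            "SELECT ?, id FROM texts_model", (model_id,))
        return model_id

conn.execute("INSERT INTO texts_model (text) VALUES ('hello'), ('world')")
add_model(conn, "alice")
links = conn.execute("SELECT COUNT(*) FROM model_text_link").fetchone()[0]
print(links)  # 2: the new model is linked to both existing texts
```

The trade-off is the usual one: the trigger guarantees the invariant no matter who writes to the table, while the application-code version keeps the logic visible and debuggable in one place.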

## fa.functional analysis – reference request: \$\alpha\$-Hölder spaces as double duals

If $$(X, d)$$ is a complete metric space, we define the $$\alpha$$-Hölder class $$\Lambda_\alpha(X)$$ as the subset of $$L^\infty(X)$$ of functions satisfying
$$\sup_{x \neq y} \frac{|f(x) - f(y)|}{|x - y|^\alpha} < \infty.$$
Similarly, we can define the little Hölder class $$\lambda_\alpha(X)$$ as the subset of functions of $$\Lambda_\alpha(X)$$ satisfying
$$\lim_{x \to y} \frac{|f(x) - f(y)|}{|x - y|^\alpha} = 0.$$
I recall a result that, at least in classical contexts such as $$X = \mathbb{T}^n$$, $$\Lambda_\alpha(X)$$ is isomorphic to the double dual of $$\lambda_\alpha(X)$$.

Question: Is there a reference for this duality in the context of more general metric spaces? A first Google search turned up nothing.

## Taxonomy Terms – Get a list of all entity reference fields that reference specific entity types

Is there a way to retrieve a list of entity reference fields by field type (similar to what is returned by `\$all_reference_fields = \$this->entityFieldManager->getFieldMapByFieldType('entity_reference');`), but filtered to list only the entity reference fields whose target entity types and bundles match a specified list?

In EntityFieldManager I cannot see anything that filters the results of `getFieldMap()` or `getFieldMapByFieldType()` down to the specific reference fields I'm looking for, because the map does not contain the field's storage information, which holds the list of target bundles and handlers.

It looks like the entity_reference module does not provide a core service that would be the logical place for this. So I am currently thinking I have to do this in two steps:

1. Call `getFieldMapByFieldType('entity_reference');`.
2. Work through each entity type in the field map and invoke `buildFieldStorageDefinitions(\$entity_type);` (or similar) to find the target `entity_type:bundle` combinations, and track the fields that point at `'taxonomy_term:tag'`. This seems terribly inefficient given the sheer number of entity_reference fields on this site. Maybe there is a better way to do this step?

In an ideal world, there would be just a couple of database calls I could make, or even a core service in the entity_reference module! (Wishful thinking. ;)

## How do you multiply a query column by a cell reference in Google Sheets?

Below is the query I'm using, and it works:

```
=QUERY(Input!A2:I33;
 "SELECT '000', A, B, C, D, E, F, G, '"&TEXT(Input!L8;"DD/MM/YYYY")&"', 0.6*H
  WHERE I = 'I'
  LABEL '000' '', '"&TEXT(Input!L8;"DD/MM/YYYY")&"' '', 0.6*H ''")
```

My question is: how can I multiply H by a cell reference instead? For example, I tried the following, but it did not work (`Input!L9` refers to a cell containing the value 0.6):

```
=QUERY(Input!A2:I33;
 "SELECT '000', A, B, C, D, E, F, G, '"&TEXT(Input!L8;"DD/MM/YYYY")&"', "&Input!L9&"*H
  WHERE I = 'I'
  LABEL '000' '', '"&TEXT(Input!L8;"DD/MM/YYYY")&"' '', "&Input!L9&"*H ''")
```

## Microservices – access to event-sourced data by reference

What is a typical pattern for passing information about related objects (aggregates) in an event-sourcing architecture?

For example, in an order processing system, should an `Order Created` event (published by the `Order Service`) contain just a `Product ID`, or the full `Product`?

Assuming there is also a `Product Service`: with the first option, the receivers of the event can call the `Product Service` to fetch the `Product`, and with the second option, all the required information is already in the event. The advantages and disadvantages of the two approaches are not clear to me. Can someone shed some light?
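To make the two options concrete, here is a small illustrative sketch (the class and field names are my own invention, not from any particular framework): option 1 carries only a reference, option 2 embeds a snapshot of the product as of the order.

```python
from dataclasses import dataclass

# Names below are invented for illustration.

# Option 1: the event carries a reference. Consumers must call the
# product service, and may see the *current* product data rather than
# the data as it was when the order was created.
@dataclass(frozen=True)
class OrderCreatedThin:
    order_id: str
    product_id: str

# Option 2: the event embeds a snapshot. Consumers are self-sufficient,
# but the event is bigger and duplicates product data that can go stale.
@dataclass(frozen=True)
class ProductSnapshot:
    product_id: str
    name: str
    price_cents: int

@dataclass(frozen=True)
class OrderCreatedFat:
    order_id: str
    product: ProductSnapshot

thin = OrderCreatedThin("o-1", "p-42")
fat = OrderCreatedFat("o-1", ProductSnapshot("p-42", "Widget", 1999))
print(thin.product_id, fat.product.price_cents)
```

The snapshot version also matters for auditability: if the product price changes later, only option 2 still records what the order was actually based on.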

## Reference Request – semi-local uniform convergence

I'm looking for a topology on a set of functions that lies somehow between uniform convergence and locally uniform convergence. For simplicity I will explain my idea for the space of functions from $$[0, \infty)$$ to $$\mathbb R$$.

On this space we can define the uniform topology: $$f_n \to f$$ uniformly if and only if $$\sup_{t \in [0,\infty)} |f_n(t) - f(t)| \to 0$$. We can also define the topology of locally uniform convergence: $$f_n \to f$$ locally uniformly if and only if for every $$T > 0$$ we have $$\sup_{t \in [0,T]} |f_n(t) - f(t)| \to 0$$, or equivalently $$\sum_{T=1}^\infty 2^{-T} \min\Big(1, \sup_{t \in [0,T]} |f_n(t) - f(t)|\Big) \to 0$$.

Now consider a sequence of numbers $$t_n \in [0, \infty)$$ converging to $$t \in [0, \infty]$$ (possibly infinite), and assume that $$f(\infty) = \lim_{x \to \infty} f(x) \in \mathbb R \cup \{\pm \infty\}$$ exists. Then
$$f_n \to f \text{ uniformly} \implies f_n(t_n) \to f(t),$$
while we only have
$$f_n \to f \text{ locally uniformly} \implies f_n(t_n) \to f(t) \text{ for all } t < \infty.$$
(I am not sure whether these are in fact equivalent statements.)

In many cases one actually sees something in between. Consider e.g. $$f_n(t) = \min\{t, n\}$$ and $$f(t) = t$$. Then $$f_n$$ converges to $$f$$ locally uniformly, but not uniformly. Nevertheless, for every sequence $$t_n$$ converging to $$\infty$$ slowly enough, i.e. $$\frac{t_n}{n} \le 1$$ for all $$n$$, we have
$$f_n(t_n) \to f(\infty) = \infty.$$
However, this fact cannot be deduced from the locally uniform convergence of $$f_n$$ to $$f$$ alone.
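The example can be checked numerically; here is my own illustrative script (grid sizes and the values of $$n$$ are arbitrary choices):

```python
import numpy as np

# Numeric illustration of the example above: f_n(t) = min(t, n), f(t) = t.
def f_n(t, n):
    return np.minimum(t, n)

def f(t):
    return t

T = 5.0
t_local = np.linspace(0.0, T, 1001)
for n in (10, 100, 1000):
    # Locally uniform: on the fixed interval [0, T] the sup is 0 once n >= T.
    sup_local = np.max(np.abs(f_n(t_local, n) - f(t_local)))
    # Not uniform: on [0, 2n] the sup of |f_n - f| is already n.
    t_wide = np.linspace(0.0, 2 * n, 1001)
    sup_global = np.max(np.abs(f_n(t_wide, n) - f(t_wide)))
    # Slow sequence t_n = n: f_n(t_n) = n, which tends to infinity = f(infinity).
    print(n, sup_local, sup_global, f_n(float(n), n))
```

The middle column stays at zero while the supremum over growing intervals diverges, which is exactly the gap between the two notions that the question describes.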

So what I am looking for is a kind of semi-locally uniform topology, e.g. of the form
$$f_n \to f \text{ if and only if } \sup_{t \in [0,n]} |f_n(t) - f(t)| \to 0.$$
Of course, one could imagine replacing the interval $$[0,n]$$ by intervals of the form $$[0,T_n]$$ for any other sequence $$T_n \to \infty$$.

Has a notion of convergence of this kind been studied? I am also looking for literature that goes in a similar direction.

## Reference Request – Name for a pair of lattices, one of which has a theta series whose coefficients form a subsequence of the theta series coefficients of the other lattice

Is there a name for a pair of lattices with the property stated in the title? The following pair exhibits the property:

$$(i)\ 1 + 80q^3 + 270q^4 + 432q^5 + 960q^6 + 2160q^7 + 3240q^8 + 5360q^9 + 8640q^{10} + \dots$$

$$(ii)\ 1 + 270q^4 + 960q^6 + 3240q^8 + 8640q^{10} + 17790q^{12} + 25920q^{14} + 62910q^{16} + \dots$$

The second theta series is obtained by taking only the even-power coefficients from the first. These series come from Magma computations in which the Gram matrices of the 10-dimensional lattices known as $$O_{10}$$ and $$(C6 \times SU(4,2)):C2$$ from Nebe and Sloane's database of lattices were used as input.
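As a quick consistency check on the displayed coefficients (my own script; it only covers the range up to $$q^{10}$$ where the two series overlap):

```python
# Verify that, up to q^10, the even-power coefficients of series (i)
# coincide with the coefficients of series (ii). Exponents absent from
# both dictionaries have coefficient 0 in both series.
series_i = {0: 1, 3: 80, 4: 270, 5: 432, 6: 960, 7: 2160, 8: 3240, 9: 5360, 10: 8640}
series_ii = {0: 1, 4: 270, 6: 960, 8: 3240, 10: 8640}

even_part_of_i = {k: v for k, v in series_i.items() if k % 2 == 0}
print(even_part_of_i == series_ii)
```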

## Reference Request – Motivation behind Analytic Number Theory

I am a mathematics student and have recently completed an introductory course in analytic number theory, with the instructor roughly following Apostol's first text on the subject. I have started reading Davenport's Multiplicative Number Theory. Without going into too much detail, I found that although the results use some common techniques, they are otherwise quite independent of one another. Being relatively inexperienced, I found this rather odd, since most of the subjects I have studied so far (real and complex analysis, abstract algebra, measure theory, functional analysis, algebraic topology) are more coherent than simply being a collection of problems solved with similar machinery.

I have wondered whether there is an overarching idea behind the study of analytic number theory (classical and sieve-theoretic), whether there are specific open problems that have motivated the research historically, and whether there is a textbook that takes this perspective rather than being just a collection of interesting problems. Although I know very little about it, my professor once told me about the Langlands program and said that the current goal of several mathematicians working in algebraic number theory and automorphic forms is to settle the conjectures of this program.

Unlike in many other areas, I could not effectively use the approach many of my professors recommend, namely reading a theorem and trying to prove it myself. I tend to believe it is my own shortcomings that prevent this, but if it is part of a broader pattern and I am reading the subject "wrongly", then, for lack of better words, I would like to know that, and I would like to know how such a subject can be learned most efficiently.

I have not been able to formulate the question as well as I had hoped. So if you have answers to the title question itself, i.e. motivation for a coherent understanding of the subject, I would greatly appreciate your input / advice.

I was not sure which tag to use and chose what I thought was most appropriate. I hope that will not be a problem.

## Reference Request – Intensity and compensator of a jump process

Setting and assumptions. Let $$(\mathscr{F}_t, t \geq 0)$$ be a right-continuous, complete filtration. Let $$(X_t, t \geq 0)$$ be a pure-jump $$\mathbb{R}$$-valued process with unit jumps, that is,
$$X_t = \sum\limits_{i=1}^\infty I\{\tau_i \le t\},$$
where $$\{\tau_i\}$$ is an a.s. increasing sequence of $$(\mathscr{F}_t)$$-stopping times with a.s. $$\lim_{i \to \infty} \tau_i = \infty$$. Assume also that $$E X_t < \infty$$ and that all the $$\tau_i$$ are totally inaccessible.

We know from the Doob–Meyer decomposition theorem that there exists a predictable process $$(A_t)$$ such that
$$X_t - A_t$$
is an $$(\mathscr{F}_t)$$-martingale.

Suppose we also know that, for a uniformly bounded, predictable, continuous process $$(\alpha_t, t \geq 0)$$, a.s.

$$P\big[X_{t + \Delta t} - X_t = 1 \mid \mathscr{F}_t\big] = \alpha_t \, \Delta t + o(\Delta t), \qquad (1)$$
$$P\big[X_{t + \Delta t} - X_t = 0 \mid \mathscr{F}_t\big] = 1 - \alpha_t \, \Delta t + o(\Delta t),$$
$$P\big[X_{t + \Delta t} - X_t > 1 \mid \mathscr{F}_t\big] = o(\Delta t). \qquad (2)$$

Question. Can we prove that $$A_t = \int\limits_0^t \alpha_s \, ds$$?

Thoughts. Intuitively, $$\alpha_t$$ should be the intensity of the jumps of $$(X_t)$$, and $$X_t - \int\limits_0^t \alpha_s \, ds$$ should be a martingale. However, I have found no proof confirming this. Several references prove that the converse implication is true, i.e. if $$A_t = \int\limits_0^t \alpha_s \, ds$$, then (1)–(2) hold. For example, in Point Processes and Queues: Martingale Dynamics, (3.5) in Chapter 2 is very similar to (1)–(2). Another example is Lemma 2.22 in Chapter 2 of Enlargement of Filtration with Finance in View. However, I have not found an answer to the posted question and I do not see how to show it myself. I would greatly appreciate suggestions or suitable references.
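As a numerical sanity check (my own sketch, not a proof, and only for a deterministic intensity, which is a special case of the setting above), one can simulate the counting process by thinning a homogeneous Poisson process and compare $$E X_t$$ with the candidate compensator $$\int_0^t \alpha_s \, ds$$:

```python
import numpy as np

# Simulate a counting process with deterministic intensity alpha(t) by
# thinning a homogeneous Poisson(lam_max) process on [0, t_end], then
# compare the Monte Carlo estimate of E[X_t] with int_0^t alpha_s ds.
# The specific intensity below is an arbitrary bounded, continuous choice.
rng = np.random.default_rng(1)

def alpha(t):
    return 1.0 + 0.5 * np.sin(t)

t_end, lam_max, trials = 10.0, 1.5, 20000
# Closed form: int_0^{10} (1 + 0.5 sin s) ds = 10 + 0.5 * (1 - cos 10)
integral = 1.0 * t_end + 0.5 * (1.0 - np.cos(t_end))

counts = np.empty(trials)
for k in range(trials):
    # Candidate jump times, then accept each with probability alpha(t)/lam_max.
    n = rng.poisson(lam_max * t_end)
    times = rng.uniform(0.0, t_end, size=n)
    keep = rng.uniform(0.0, lam_max, size=n) < alpha(times)
    counts[k] = keep.sum()

print(np.mean(counts), integral)  # the two values should be close
```

Of course this only probes the expectation $$E X_t = E A_t$$, not the full predictability and uniqueness argument the question is really about.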