co.combinatorics – Permutations in a Bruhat interval with a fixed point

Let $(e,\sigma_0)$ be the Bruhat interval of the permutations $\sigma \leq \sigma_0$ in $S_N$ for the (strong) Bruhat order. I am interested in the following set, for fixed $i,j$:
$$(e,\sigma_0)_i^j := \{\sigma \in (e,\sigma_0) \mid \sigma(i)=j\}.$$
The set $(e,\sigma_0)_i^j$ can be embedded (“flattened”) in $S_{N-1}$ by removing the entry $(i,j)$ and keeping the relative order of all other entries.

My question is: is there necessarily a $\sigma_0' \in S_{N-1}$ such that $(e,\sigma_0)_i^j = (e, \sigma_0')$ after flattening?

From computer experiments, it seems to be true when $\sigma_0$ avoids the pattern $3412$, and it can fail otherwise. For example, $(e,3412)_2^2 = \{e, 1243, 3214\} \xrightarrow{\text{flatten}} \{e, 132, 213\} \subset S_3$, which is not a Bruhat interval.

Is this result known? Does somebody have a reference for this?

lie groups – Root system of fixed point Lie sub-algebra

It is known that any non-simply-laced simple root system can be constructed from a simply-laced root system by folding the Dynkin diagram, and hence the corresponding non-simply-laced Lie algebra can be constructed by taking the fixed points of a non-trivial diagram automorphism (an outer automorphism) of the simply-laced one.
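For a concrete instance of this folding (an illustration, not something claimed in the question): the flip of the $A_{2n-1}$ diagram can be realized on $\mathfrak{sl}_{2n}(\mathbb{C})$ as $\theta(X) = -J X^{\mathsf T} J^{-1}$, where $J$ is the standard symplectic form, and its fixed-point subalgebra is
$$\mathfrak{sl}_{2n}(\mathbb{C})^\theta = \{X \mid X^{\mathsf T} J + J X = 0\} = \mathfrak{sp}_{2n}(\mathbb{C}),$$
of type $C_n$.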

Now let $\theta$ be an inner automorphism of order $2$ of a simple Lie algebra $\mathfrak{g}$ and let $\mathfrak{g} = \mathfrak{g}_0 \oplus \mathfrak{g}_1$ be the decomposition into the $+1$ and $-1$ eigenspaces. The subalgebra $\mathfrak{g}_0$ is reductive. Is there a way to obtain the root system of $\mathfrak{g}_0$ from the root system of $\mathfrak{g}$?

co.combinatorics – Coloring finite subsets of a fixed size with a single modular function

Let $k$ and $N$ be positive integers with $k \mid N$, and let $M = (k/N){N\choose k}$. Call a function $f$ from the $k$-element subsets of $\{0,\dots,N-1\}$ to $\{0,\dots,M-1\}$ a coloring function if $f(s_1) = f(s_2)$ implies that $s_1 = s_2$ or $s_1 \cap s_2 = \emptyset$. Coloring functions exist for all such $k$ and $N$ by Baranyai’s theorem.

When $k=2$ we have $M = N-1$, and it is widely known and easily shown that there is a function $f(i,j)$ such that, for every even $N$, $f(s) = f(i,j) \bmod (N-1)$ is a coloring function for $2$ and $N$, where $s = \langle i, j \rangle$ with $0 \leq i < j < N$.

My question is whether, for some $k > 2$, there can exist an $f(i_0, i_1, \ldots, i_{k-1})$ such that $f(\langle i_0, i_1, \ldots, i_{k-1}\rangle) = f(i_0, i_1, \ldots, i_{k-1}) \bmod M$ is a coloring function for all $N$ that are multiples of $k$. Probably not, but I have no idea how to prove it.
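For orientation, the $k=2$ case is essentially the classic “circle method” round-robin schedule. A minimal sketch (note that this version special-cases the last vertex $N-1$, so it is written as a coloring function rather than literally as the single modular formula described above):

```python
from itertools import combinations

def edge_color(i, j, N):
    """Color of edge {i, j} (with i < j) of K_N, N even, via the classic
    "circle method" round-robin schedule.  Edges sharing a color form a
    perfect matching, so equal colors imply equal or disjoint pairs."""
    if j == N - 1:
        return (2 * i) % (N - 1)
    return (i + j) % (N - 1)
```

Each color class is one round of the round-robin tournament on $N$ players, i.e. one perfect matching of $K_N$, which is exactly the coloring-function property for $k=2$.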

dnd 3.5e – Is the damage from Snap Kick fixed like, for example, Insightful Strike, or can you add any more bonuses to it?

Ultimately, the errata clarifies Snap Kick so we don’t have to worry about this any more. With the errata, the feat’s benefit reads:

Whenever you initiate a strike or use the attack or full attack action, you may take a −2 penalty on attacks made this round to gain an additional attack at your highest attack bonus (the −2 applies to this attack as well). This attack is an off-hand unarmed strike, and deals damage as appropriate.

Instead of defining the damage in the feat, it just says it deals off-hand unarmed strike damage. “Off-hand” here is actually a little awkward (because you only really “have” an off-hand while using two-weapon fighting, which you aren’t here), but it’s clear enough what it means—halve your Strength bonus to the damage roll. This result winds up being identical to what Snap Kick originally said explicitly, but now it’s much clearer that this figure can be changed by anything that changes one’s unarmed strike damage and that this should be treated in all ways as a regular unarmed strike attack, albeit one you wouldn’t otherwise have gotten a chance to make.

However, even without the errata, the attack made by Snap Kick was defined as an unarmed attack, and the damage calculation offered by it matched that used by an unarmed strike made as an off-hand attack while two-weapon fighting. This was interpreted by most as simply a reminder of how unarmed strike damage works, rather than a fixed figure that cannot be changed. If it were fixed, then like insightful strike, it would explicitly say so. It’s also unclear what it means to be “an unarmed attack” if your bonuses and improvements to unarmed attacks don’t apply. The errata, effectively, confirms that this reasoning was correct.

So yeah, Shadow Blade affects your extra attack from Snap Kick, adding your Dexterity bonus to the damage roll.

synchronization – How to support text editing when the text has to be synchronized with fixed audio?

In my application, we record audio of people speaking, send this to a speech-to-text (STT) service, and present the text to the user for editing.

Simplifying a bit, the STT service returns results in the form of a long list of words with timings:

      "words": (
          "value": "Today",
          "from": 0.34,
          "to": 0.75,
          "confidence": 0.865
          "value": "is",
          "from": 0.76,
          "to": 0.91,
          "confidence": 0.923
          "value": "Friday",
          "from": 0.92,
          "to": 1.36,
          "confidence": 0.783


The from and to timings are offsets in seconds from the start of the recording, so in this example, the word “Today” starts at t=0.34s and ends at t=0.75s, and so on. (Confidence is a measure of the STT engine’s confidence that it’s right about the word, which I use elsewhere in the app.)

The timings are important, because I have a UI that knows where you are in the audio and indicates this with a mark in the text. You can play the audio out loud and the app moves the marker to keep the text location in sync, or, for any location in the text, when you hit play, it knows where to start playing.
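The mapping from playback time to the word under the marker is straightforward while the word list is intact: since the words are sorted by their start times, a binary search finds the current word. A minimal sketch (the data shape mirrors the STT output above; the function name is mine):

```python
import bisect

def word_at(words, t):
    """Index of the word whose ["from", "to") interval contains time t
    (or the word whose interval most recently started).
    `words` must be sorted by their "from" field."""
    starts = [w["from"] for w in words]
    i = bisect.bisect_right(starts, t) - 1
    return max(i, 0)
```

The inverse direction (cursor position in the text to a playback time) is the same lookup keyed on character offsets instead of seconds.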

So far so good.

The challenge I’ve got is how to handle it when the user edits the text, because the text would get out of sync with the timings. If you, say, delete the space between “Today” and “is”, you now have one word, not two. What should its “time” be?

I handle this particular case by just concatenating the times, but what should happen if you select from the middle of one word to the middle of another word in another paragraph and then paste a block of text? I can maintain a list of the words, but what should happen to the timings?

Is there a better way to organize my data structures to support text editing that can stay in sync with audio?
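One workable organization, sketched under the assumption that the document is a flat list of timed spans (the names `Span` and `replace_range` are mine, not from any library): every edit trims the partially covered spans, drops the fully covered ones, and inserts the new text as an *untimed* span whose timing can later be interpolated or re-aligned from its neighbours.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Span:
    text: str
    start: Optional[float] = None  # None => no timing (user-typed text)
    end: Optional[float] = None

def replace_range(spans: List[Span], a: int, b: int, new_text: str) -> List[Span]:
    """Replace character range [a, b), a < b, of the concatenated text.

    Partially covered spans are trimmed but keep their original timings;
    the replacement text becomes an untimed span.
    """
    out: List[Span] = []
    pos = 0
    inserted = False
    for s in spans:
        lo, hi = pos, pos + len(s.text)
        pos = hi
        if hi <= a or lo >= b:              # outside the edit: keep as-is
            out.append(s)
            continue
        if lo < a:                          # keep the untouched left part
            out.append(Span(s.text[:a - lo], s.start, s.end))
        if not inserted:
            if new_text:
                out.append(Span(new_text))  # untimed replacement text
            inserted = True
        if hi > b:                          # keep the untouched right part
            out.append(Span(s.text[b - lo:], s.start, s.end))
    return out
```

Deleting the space between “Today” and “is” then leaves two adjacent spans whose times you can concatenate, exactly as you do today, while a cross-paragraph paste simply drops the covered spans and leaves a single untimed span to re-time later (for instance by re-running the STT alignment on that stretch of audio).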

How can I measure the exact range of focus of a given fixed focus webcam?

The concept of depth of field is really just an illusion, albeit a rather persistent one. Only a single distance will be at sharpest focus. What we call depth of field are the areas on either side of the sharpest focus that are blurred so insignificantly that we still see them as sharp. Please note that depth-of-field will vary based upon a change to any of the following factors: focal length, aperture, magnification/display size, viewing distance, etc.

There’s only one distance that is in sharpest focus. Everything in front of or behind that distance is blurry. The further we move away from the focus distance, the blurrier things get. The questions become: “How blurry is it? Is that within our acceptable limit? How far from the focus distance do things become unacceptably blurry?”

What we call depth of field (DoF) is the range of distances in front of and behind the point of focus that are acceptably blurry so that to our eyes things still look like they are in focus.

The amount of depth of field depends on two things: total magnification and aperture. Total magnification includes the following factors: focal length, subject/focus distance, enlargement ratio (which is determined by both sensor size and display size), and viewing distance. The visual acuity of the viewer also contributes to what is acceptably sharp enough to appear in focus instead of blurry.

The distribution of the depth of field in front of and behind the focus distance depends on several factors, primarily focal length and focus distance.

The ratio of any given lens changes as the focus distance is changed. Most lenses approach 1:1 at the minimum focus distance. As the focus distance is increased the rear depth of field increases faster than the front depth of field. There is one focus distance at which the ratio will be 1:2, or one-third in front and two-thirds behind the point of focus.

At short focus distances the ratio approaches 1:1. A true macro lens, which can project an image on the sensor or film that is the same size as the object it is imaging, achieves a 1:1 ratio. Even lenses that cannot achieve macro focus will demonstrate a ratio very near to 1:1 at their minimum focus distance.

At longer focus distances the rear of the depth of field reaches all the way to infinity and thus the ratio between front and rear DoF approaches 1:∞. The shortest focus distance at which the rear DoF reaches infinity is called the hyperfocal distance. The near depth of field will very closely approach one half the focus distance. That is, the nearest edge of the DoF will be halfway between the camera and the focus distance.
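The behaviour described above can be put into numbers with the standard thin-lens approximations. A minimal sketch (the 0.03 mm circle of confusion is a conventional full-frame value and must be scaled down for a small sensor; treat this as an estimate, not a calibration):

```python
import math

def dof_limits(f, N, s, c=0.03):
    """Near and far depth-of-field limits (thin-lens approximation).

    f: focal length (mm), N: f-number, s: focus distance (mm),
    c: circle-of-confusion diameter (mm).
    Returns (near, far); far is infinite at or beyond the
    hyperfocal distance.
    """
    H = f * f / (N * c) + f                  # hyperfocal distance
    near = H * s / (H + (s - f))
    far = math.inf if s >= H else H * s / (H - (s - f))
    return near, far
```

Running this shows exactly the limiting cases discussed above: the far limit reaches infinity once the focus distance reaches the hyperfocal distance, and the near limit then sits at roughly half the hyperfocal distance.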

For why this is the case, please see:

Why did manufacturers stop including DOF scales on lenses?
Is there a ‘rule of thumb’ that I can use to estimate depth of field while shooting?
How do you determine the acceptable Circle of Confusion for a particular photo?
Find hyperfocal distance for HD (1920×1080) resolution?
Why I am getting different values for depth of field from calculators vs in-camera DoF preview?
As well as this answer to Simple quick DoF estimate method for prime lens

physics – Simulate spring wire through fixed points

Suppose we have a wire made of an ideal spring steel. If we bend it and then release any external force it will straighten itself into a perfect line.

Suppose also that the wire is attached to some set of points in the plane. The wire can rotate at these points and slide through them, but cannot escape from them.

How to simulate this?

I wrote some code for the case of only two fixed points (the large points in the image). The small points are the ends of the wire. The length of the wire is constant at all times.

 Manipulate[
  With[{sol = FindRoot[
      b Sqrt[1 + (4 b^2)/a^2] + 1/2 a ArcSinh[(2 b)/a] -
        1/2 (4 Sqrt[17] + ArcSinh[4]) == 0, {b, 1}]},
   Plot[{x^2, 1/a x^2 + 1 - 1/a}, {x, -5, 5},
    Epilog -> {
      (* wire ends: equal arc length on the new parabola *)
      Point[{{b, 1/a b^2 + 1 - 1/a}, {-b, 1/a b^2 + 1 - 1/a}} /. sol],
      Point[{{-2, 4}, {2, 4}}], PointSize -> Large,
      Point[{{-1, 1}, {1, 1}}]},
    AspectRatio -> Automatic, PlotRange -> {-1, 5}]],
  {a, 1, 100}]


This is only a pseudo-simulation with no physical basis; I only wanted to demonstrate how it might look.

Instead of two fixed points we can have 3, 4 or more points. I have no idea how to simulate it.

How can I measure the exact range of focus of a given fixed focus webcam?

By moving objects around the camera I can see that they get very blurry at 10cm and less sharp at 3-4m away from the camera, but how can I measure the exact range of focus? If I check the sharpness of a document at different distances, when it's too close it's blurry, and when it's far it's not sharp either, probably due to the limited resolution.

(Actually this webcam is manual focus so basically I want to calibrate it).

I only know the horizontal view angle (60 degrees) and can guess the sensor size (1/2.3-1/4.5″).

I am also curious to know whether it’s possible to measure focal length or