ho.history overview – Lesser-known examples of stamina with a successful ending

The stories of Wiles, Perelman, and Zhang are well-known illustrations that good results are sometimes achieved through particularly long perseverance.

What are lesser-known examples of this kind?

These need not be epoch-making results like the ones mentioned, but they should have required, say, at least 6 or 7 years of intensive effort (not necessarily by a single person, nor necessarily in a first attempt; i.e., a paper that corrects a previous incomplete attempt would be a good answer too).

Edit: To be clear, I am not thinking of the long efforts of many people over centuries, as with Galois theory, but of one particular theorem proved within the lifetime of its authors.

PS: If moderators could help make this question a community wiki, that would be great, thanks.

ho.history overview – Problem understanding Euclid Book 10, Proposition 1

This is embarrassing, but I'm having trouble reading through Proposition 1 of Book 10 of Euclid's Elements. I struggle with Euclid's terminology and have no clear idea of what distinctions he makes in the lines in question, so it is not clear to me what the proof says. Here is the text of the proof with some comments/questions from me embedded in square brackets (I've also numbered the sentences):

Theorem: Let AB and C be two unequal magnitudes, of which AB is the greater. I say that if from AB we subtract a magnitude greater than its half, and from the remainder we subtract a magnitude greater than its half, and this process is repeated continually, there will eventually be left some magnitude that is less than the magnitude C.

  1. Some multiple DE of C is greater than AB. (Suppose, for example, that DE is 31 times C and greater than AB.)

  2. Divide DE into parts DF, FG, and GE equal to C. (Euclid cannot mean dividing DE into three equal parts each of length C, can he? In step 7 he assumes that DF is equal to C, but is GE also equal to C?)

  3. From AB subtract BH, greater than its half; from AH subtract HK, greater than its half; and repeat this process until the divisions in AB are equal in number to the divisions in DE. (Does this mean dividing AB into 31 parts, where BH > half of AB, HK > half of AH, KL > half of AK, and so on, 31 times?)

  4. Then the divisions AK, KH, and HB are equal in number to the divisions DF, FG, and GE. (Is Euclid just saying that he considers each line as divided into three parts?)

  5. Now, since DE is greater than AB, and from DE there has been subtracted EG, which is less than its half (Why is EG less than half of DE? Do we have to assume here that EG is C? I cannot tell what assumptions we are making about the division of DE in step 2), and from AB there has been subtracted BH, which is greater than its half, therefore the remainder GD is greater than the remainder HA.

  6. And since GD is greater than HA, and from GD there has been subtracted its half GF (Shouldn't this say that GF is half of GD?), and from HA there has been subtracted HK, which is greater than its half, therefore the remainder DF is greater than the remainder AK.

  7. But DF is equal to C; therefore C is also greater than AK. Therefore AK is less than C.

  8. Therefore the magnitude AB leaves the magnitude AK, which is less than the given smaller magnitude C.
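For intuition, the proposition can be checked numerically. This is only a modern illustration of the exhaustion step, not Euclid's argument; the function name and the choice of always removing a fixed fraction 0.6 (any fraction greater than 1/2 works) are my own assumptions:

```python
# Numerical illustration of Elements X.1: repeatedly subtracting
# more than half of the remainder from AB eventually leaves a
# remainder smaller than C.

def exhaust(ab: float, c: float, fraction: float = 0.6) -> int:
    """Remove `fraction` (> 1/2) of the remainder until it is < c.
    Returns the number of subtractions needed."""
    assert ab > c > 0 and fraction > 0.5
    steps = 0
    while ab >= c:
        ab -= fraction * ab   # subtract more than half of the remainder
        steps += 1
    return steps

print(exhaust(100.0, 1.0))  # 6 steps: 100 -> 40 -> 16 -> 6.4 -> 2.56 -> 1.024 -> 0.4096
```

The remainder shrinks at least geometrically, which is why the process must terminate; Euclid avoids this limit-style language by comparing against the divisions of DE.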

ho.history overview – Theorems that hampered progress

It may be that certain theorems, once proved true, actually delay
progress in specific areas. Lloyd Trefethen provides two examples:

  • Faber's theorem on polynomial interpolation
  • Squire's theorem on hydrodynamic instability

Trefethen, Lloyd N. "Inverse Yogiisms." Notices of the American Mathematical Society 63, no. 11 (2016).
Also in: The Best Writing on Mathematics 2017 (2017): 28.
Google Books link.

In my own experience, I have seen the various negative results of

Minsky, Marvin, and Seymour A. Papert.
Perceptrons: An Introduction to Computational Geometry, 1969.
MIT Press.

hamper progress in neural network research for more than a decade.[1]

Q: What are other examples of theorems whose (correct) proofs (possibly temporarily)
suppressed research in mathematical subfields?

[1] Olazaran, Mikel. "A Sociological Study of the Official History of the Perceptrons Controversy." Social Studies of Science 26, no. 3 (1996): 611-659.
Abstract: "[…] I focus in particular on the proofs and arguments of Minsky and Papert, which were interpreted as showing that further progress in neural networks was not possible, and that this approach to AI had to be abandoned. […]"

ho.history overview – Causal path entropy

In [1], the authors present the causal path entropy as follows:

For any open thermodynamic system, such as a biological organism, we can
treat the phase-space paths $x(t)$ that the system takes over a time interval
$[0,\tau]$ as microstates and partition them into macrostates
$\{X_i\}_{i \in I}$ using the equivalence relation:

\begin{equation} x(t) \sim x'(t) \iff x(0) = x'(0) \end{equation}

As a result, we can identify each macrostate $X_i$ with a present
system state $x(0)$.

We can then define the causal path entropy $S_c$ of a macrostate $X_i$,
associated with the current system state $x(0)$, as a path integral:

\begin{equation} S_c(X_i, \tau) = -k_B \int_{x(t)} P(x(t) \mid x(0)) \ln P(x(t) \mid x(0)) \, Dx(t) \end{equation}

where $k_B$ is the Boltzmann constant.
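As a toy illustration of this definition (my own discretization, not from [1]): take the system to be a symmetric random walk, let the macrostate be the set of all length-$n$ paths sharing the start $x(0)=0$, and let $P(x(t) \mid x(0))$ be the probability of each discrete path. The sum then reduces to $S_c = k_B \, n \ln 2$:

```python
import math
from itertools import product

# Toy discretization of S_c(X_i, tau): the "system" is a symmetric
# random walk of n_steps steps started at x(0) = 0; the macrostate
# X_i is the set of all such paths, and P(path | x(0)) is the product
# of the individual step probabilities.
k_B = 1.0      # units in which Boltzmann's constant is 1
n_steps = 5
p_up = 0.5     # probability of a +1 step (symmetric walk)

S_c = 0.0
for steps in product([+1, -1], repeat=n_steps):
    p_path = math.prod(p_up if s == +1 else 1.0 - p_up for s in steps)
    S_c -= k_B * p_path * math.log(p_path)

# For the symmetric walk this equals k_B * n_steps * ln(2) ≈ 3.4657.
print(S_c)
```

For a biased walk (`p_up != 0.5`) the same loop gives a smaller entropy, which matches the intuition that fewer future paths are effectively accessible.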

Now, the authors of [1] claim that this is a new kind of entropy, and this is the point I would like to clarify, because it looks as if this is simply a conditional Boltzmann entropy.

Indeed, on the second page of [2], C. Villani introduces the Boltzmann entropy using a time-dependent density $f$ of particles in phase space $(x,v) \in \Omega \times \mathbb{R}_v^3$:

\begin{equation}
S(f) = -\int_{\Omega \times \mathbb{R}_v^3} f(x,v) \ln f(x,v) \, dx \, dv
\end{equation}

and we can analyze the evolution of $S$ under particular initial conditions $p(0) = (x_0, v_0)$ by defining:

\begin{equation}
S(f \mid p(0)) = -\int_{\Omega \times \mathbb{R}_v^3} f(p(t) \mid p(0)) \ln f(p(t) \mid p(0)) \, dp
\end{equation}

where $p \in \Omega \times \mathbb{R}_v^3$.

I am relatively new to statistical mechanics, but I would be very surprised if nobody had thought of analyzing $S(f \mid p_0)$. Is the causal path entropy actually conceptually new?

Note: Although I mention the Boltzmann entropy here, I should say that the authors of [1] do not credit Boltzmann for their ideas. Meanwhile, in a relatively recent TED talk, Wissner-Gross claims to have discovered an 'E = mc^2' for intelligence.


  1. Wissner-Gross, A. D. (2013). Causal entropic forces. Physical Review Letters.
  2. Villani, C. (2007). H-Theorem and beyond: Boltzmann's entropy in today's mathematics.

ho.history overview – Why did Voevodsky give up his work on "singletons"?

In an interview (link to the Google translation), Voevodsky talks about how, in the late 2000s, he worked on the problem of "restoring the history of populations from their modern genetic composition". Some of his unpublished writings on the subject are now available online; for example, a paper titled "Singletons" is available on the IAS website. Why did Voevodsky abandon the subject of this rather substantial paper so suddenly?