ac.commutative algebra – How to compute cup product of derived limits / presheaf cohomology

I have a finite category $\mathcal{C}$, along with a functor $F \colon \mathcal{C} \to \mathsf{GradedCommRings}$. If $F_j$ is the $j$-th graded piece of $F$, then I write $H^i(\mathcal{C},F_j)$ for the $i$-th derived inverse limit of the diagram $F_j \colon \mathcal{C} \to \mathsf{Ab}$ of abelian groups. Equivalently, it is the $i$-th sheaf cohomology of the sheaf $F_j$, where I regard $\mathcal{C}$ as a site with the trivial Grothendieck topology.

I have computed the various $H^i(\mathcal{C},F_j)$. Assembling them, there should be a cup product structure $H^i(\mathcal{C},F_j) \otimes H^{i'}(\mathcal{C},F_{j'}) \to H^{i+i'}(\mathcal{C},F_{j+j'})$. I would like to compute this product structure.

The only method I’m aware of is through sheaf cohomology, involving explicit resolutions, tensor products, and total complexes (see (1)). Unfortunately, I do not have an explicit resolution of $F$ or $F \otimes F$: it seems too complicated to do by hand, especially because my $F(c)$ are typically infinitely generated. (In my computation of $H^i(\mathcal{C},F_j)$ I circumvented this by using spectral sequences, but these obscure the product structure.)
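For what it's worth, derived limits over a small category admit an explicit cochain model carrying a cochain-level product, in the same Alexander–Whitney style as simplicial or group cohomology. This is a sketch of the standard cosimplicial-replacement complex, not a claim about the most efficient computation:

```latex
% Cochain model for derived limits over a small category \mathcal{C}:
%   C^n(\mathcal{C}, F) = \prod_{c_0 \to c_1 \to \cdots \to c_n} F(c_n),
% with the usual alternating-sum differential. On a p-cochain \alpha and
% a q-cochain \beta, the cup product is
\[
(\alpha \smile \beta)(c_0 \to \cdots \to c_{p+q})
  = F(c_p \to c_{p+q})\bigl(\alpha(c_0 \to \cdots \to c_p)\bigr)
    \cdot \beta(c_p \to \cdots \to c_{p+q}),
\]
% which descends to H^p(\mathcal{C},F_j) \otimes H^q(\mathcal{C},F_{j'})
% \to H^{p+q}(\mathcal{C},F_{j+j'}) on cohomology.
```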

I’m led to the following questions:

  • Does anyone know of a more efficient method for computing cup products of presheaf cohomology / derived limits?
  • If not, is there computer software that might be capable of taking over some of the tasks outlined above?

(1): R. G. Swan. Cup products in sheaf cohomology, pure injectives, and a substitute for projective resolutions. J. Pure Appl. Algebra 144 (1999), 169–211.

matrix – FindInstance won’t compute this simple expression

A slick way is to compute the Cholesky decomposition of the starting matrix, and then impose conditions on the diagonal of the resulting upper triangular matrix:

Diagonal[CholeskyDecomposition[{{a, f, g}, {f, b, h}, {g, h, c}}]]^2 /.
 Conjugate -> Identity
   {a, b - f^2/a, c - g^2/a - (-((f g)/a) + h)^2/(b - f^2/a)}

Reduce[Thread[% > 0], {a, b, c, f, g, h}]
   a > 0 && b > 0 && c > 0 && -Sqrt[a b] < f < Sqrt[a b] &&
   -Sqrt[a c] < g < Sqrt[a c] &&
   (f g)/a - Sqrt[(a^2 b c - a c f^2 - a b g^2 + f^2 g^2)/a^2] < h <
   (f g)/a + Sqrt[(a^2 b c - a c f^2 - a b g^2 + f^2 g^2)/a^2]

(* find 10 instances *)
FindInstance[%, {a, b, c, f, g, h}, Integers, 10]
   {{a -> 89, b -> 48, c -> 49, f -> 9, g -> 21, h -> 21},
    {a -> 134, b -> 59, c -> 5, f -> -37, g -> 20, h -> -6},
    {a -> 530, b -> 8, c -> 72, f -> 16, g -> -176, h -> -7},
    {a -> 532, b -> 49, c -> 10, f -> -153, g -> 23, h -> -5},
    {a -> 638, b -> 89, c -> 11, f -> -209, g -> -44, h -> 9},
    {a -> 642, b -> 38, c -> 78, f -> -57, g -> -162, h -> 14},
    {a -> 663, b -> 89, c -> 28, f -> -220, g -> -83, h -> 15},
    {a -> 769, b -> 62, c -> 24, f -> -145, g -> -73, h -> 34},
    {a -> 816, b -> 55, c -> 12, f -> -193, g -> -15, h -> -4},
    {a -> 898, b -> 49, c -> 93, f -> -125, g -> -191, h -> -9}}

I’ll leave it to you to figure out how to impose your extra condition that the determinant equal $3$.
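If you prefer to sanity-check outside Mathematica: the positivity criterion above is exactly the condition that a Cholesky decomposition succeeds, so a brute-force search for small integer instances with a prescribed determinant is a few lines of Python with NumPy. (`find_pd_with_det` and the search bound are my own illustrative choices.)

```python
import itertools
import numpy as np

def find_pd_with_det(target, bound=3):
    """Search small integer symmetric 3x3 matrices for one that is
    positive definite (Cholesky succeeds) with the given determinant."""
    rng = range(-bound, bound + 1)
    for a, b, c, f, g, h in itertools.product(rng, repeat=6):
        m = np.array([[a, f, g], [f, b, h], [g, h, c]], dtype=float)
        try:
            np.linalg.cholesky(m)   # raises unless m is positive definite
        except np.linalg.LinAlgError:
            continue
        if round(np.linalg.det(m)) == target:
            return m
    return None

m = find_pd_with_det(3)   # e.g. impose det = 3
```

This is exhaustive rather than clever, but for a 3×3 integer matrix the search space is tiny.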

real analysis – What are ways to compute polynomials that converge from above and below to a continuous and bounded function on $[0, 1]$?

While the following does not fully answer my question, I give one procedure below.

If $f$ has two or more continuous derivatives in $(0, 1)$ (Gzyl and Palacios 1997):

Let $m$ be the maximum of $|f''(x)|$ over $x$ in $(0, 1)$, and let $k$ be any integer in $[0, n]$. Then, in general:

  • the $k$th Bernstein coefficient of the degree-$n$ upper polynomial is $f(k/n) + \frac{m}{8n}$, and
  • the $k$th Bernstein coefficient of the degree-$n$ lower polynomial is $f(k/n) - \frac{m}{8n}$.

The following Python code uses the SymPy computer algebra library to calculate $m$ for a given function $f(x)$ with two or more continuous derivatives (the upper and lower coefficients then follow from the formulas above):

from sympy import symbols, diff, Interval, Max, maximum

x = symbols('x')
d = diff(f, x, 2)   # second derivative of the given function f
i = Interval(0, 1)
m = Max(maximum(-d, x, i), maximum(d, x, i))

(Unfortunately, though, finding the maximum can fail for some functions d; a less robust alternative is d.subs(x, nsolve(diff(d), x, (0, 1), solver='bisect')) + 0.1.)

Of course, this is not the only way to build polynomials that converge to a function in the manner asked for by my question, and this answer doesn’t solve all the issues I mention in my question. Notably, the procedure above doesn’t cover functions that lack two continuous derivatives, such as $\min(\lambda, c)$. Others are encouraged to add other answers to my question.
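To make the recipe concrete, here is a small SymPy sketch that assembles the degree-$n$ upper and lower polynomials for an example function. I use $f(x) = x(1-x)$, where $|f''| = 2$ everywhere so $m = 2$; the function, names, and degree are illustrative only:

```python
from sympy import symbols, binomial, Rational, expand

x = symbols('x')
f = x * (1 - x)   # example C^2 function on [0, 1]; here |f''| = 2, so m = 2
m = 2

def bernstein_bounds(f, m, n):
    """Degree-n polynomials with Bernstein coefficients f(k/n) +- m/(8n)."""
    upper = lower = 0
    for k in range(n + 1):
        basis = binomial(n, k) * x**k * (1 - x)**(n - k)
        fk = f.subs(x, Rational(k, n))
        upper += (fk + Rational(m, 8 * n)) * basis
        lower += (fk - Rational(m, 8 * n)) * basis
    return expand(upper), expand(lower)

up, lo = bernstein_bounds(f, m, 4)   # up >= f >= lo on [0, 1]
```

Since the Bernstein basis polynomials sum to 1, shifting every coefficient by $\pm m/(8n)$ shifts the whole polynomial by $\pm m/(8n)$, which is exactly the Gzyl–Palacios error bound.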


  • If $f$ is known to be concave in $(0, 1)$ (which roughly means that its rate of growth there never goes up), the Bernstein coefficients for the lower polynomials are simply $f(k/n)$, thanks to Jensen’s inequality.

  • If $f$ is known to be convex in $(0, 1)$ (which roughly means that its rate of growth there never goes down), the Bernstein coefficients for the upper polynomials are simply $f(k/n)$, thanks to Jensen’s inequality.


  • Gzyl, Henryk, and José Luis Palacios. “The Weierstrass Approximation Theorem and Large Deviations.” The American Mathematical Monthly, vol. 104, no. 7, 1997, pp. 650–653.

Floodfilling a texture in HLSL Compute shader

I have a very large texture which I want to fill with values representing “distance in units from a river tile”.
The texture is already seeded with these origin river points (meaning distance/value = 0 in them).

From these points I want to floodfill outwards, incrementing the value with each step from these origin points.

Doing this on the CPU is no problem using a stack or similar structure, but ideally I want to do this in the middle of a compute shader execution that runs over the entire texture.
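For reference (not shader code), the CPU version meant here is an ordinary breadth-first flood fill. A minimal Python sketch, treating the texture as a 2D list where river seeds hold 0 and every other cell is unvisited:

```python
from collections import deque

def flood_distances(grid):
    """BFS distance transform: cells equal to 0 are river seeds; every
    other cell receives its step distance to the nearest seed."""
    h, w = len(grid), len(grid[0])
    dist = [[None] * w for _ in range(h)]
    q = deque()
    for y in range(h):
        for x in range(w):
            if grid[y][x] == 0:          # seeded river tile
                dist[y][x] = 0
                q.append((x, y))
    while q:
        x, y = q.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h and dist[ny][nx] is None:
                dist[ny][nx] = dist[y][x] + 1
                q.append((nx, ny))
    return dist
```

The GPU difficulty is precisely that this queue is inherently sequential; compute-shader approaches typically replace it with repeated passes (each texel takes min(neighbors) + 1 until nothing changes) or a jump-flooding-style scheme.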

I have read this, which sounds similar to what I want to do, but it mentions there might be a smarter way to do this with compute shaders – which is what I am using.

Any ideas on how to solve this on compute shaders?

approximation – What algorithm do computers use to compute the square root of a number?

If you are not a hardware designer, then you likely use Newton's method. Given an equation $f(x) = 0$ and an approximate solution $x_0$, Newton iteration calculates a (hopefully) better approximation as $x_{n+1} = x_n - f(x_n)/f'(x_n)$. Rewriting $x = a^{1/2}$ as $x^2 = a$ and then $x^2 - a = 0$, this gives the simple formula $x_{n+1} = x_n - (x_n^2 - a)/(2x_n) = (x_n + a/x_n)/2$.

This works well with a good initial approximation, but each iteration round includes a division, and divisions are sloooow. We therefore use a different formula: Instead of solving $x = a^{1/2}$ we solve $x = a^{-1/2}$ and multiply the result by a, which gives $a^{1/2}$ as requested.

We rearrange $x = a^{-1/2}$ as $x^2 = 1/a$, then $1/x^2 = a$ and $f(x) = 1/x^2 - a = 0$. Now $f'(x) = -2/x^3$, so Newton iteration gives $x_{n+1} = x_n - (1/x_n^2 - a)/(-2/x_n^3) = x_n + (x_n - a\,x_n^3)/2 = 1.5\,x_n - (a/2)\,x_n^3$. This can be calculated using multiplications only, which is much faster than division. On a modern processor that can calculate multiplications in parallel and has a fused multiply-add operation, you calculate $r = 1.5\,x_n$, $s = -(a/2)\,x_n$, $t = x_n^2$, then $x_{n+1} = r + s \cdot t$.

To support this, x86 processors, for example, have a very fast hardware instruction that calculates an approximation to $a^{-1/2}$ with a relative error of less than $1/1024$, implemented using a lookup table.
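Putting the multiplication-only iteration together in Python (a crude software initial guess stands in here for the hardware table-lookup approximation; function names are illustrative):

```python
def rsqrt(a, x0, iters=4):
    """Newton iteration for 1/sqrt(a): x <- 1.5*x - (a/2)*x**3.
    Uses only multiplications; x0 is a rough initial approximation."""
    x = x0
    half_a = 0.5 * a
    for _ in range(iters):
        x = 1.5 * x - half_a * x * x * x
    return x

def sqrt_newton(a, x0, iters=4):
    # multiply by a at the end: a * a**(-1/2) == a**(1/2)
    return a * rsqrt(a, x0, iters)
```

Convergence is quadratic, so with a table-quality initial guess (relative error below 1/1024), a couple of iterations already reach double precision; a poor guess needs a few more.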

No FREE Tier compute instances available in GCP?

“Get free hands-on experience with popular products, including Compute Engine and Cloud Storage, up to monthly limits. These free services don’t expire.” – Team, this is the first time I’m trying to create a free-tier (f1-micro) VM instance, and it’s not available for selection. The minimum I can choose in the specified North America locations is an e2-micro instance. Is there any special button I have to use to select a free-tier instance?

trading – Given a list of buy/sell orders and previous trades, compute a buy/sell price

It took quite a bit of thinking, but here is how you arrive at a “buy instantly” or “sell instantly” price.

Let’s say you have buy orders:

  1. buy 100LTC at 5BTC
  2. buy 50LTC at 4BTC
  3. buy 200LTC at 3BTC

And sell orders:

  1. sell 100LTC at 6BTC
  2. sell 500LTC at 7BTC
  3. sell 100LTC at 8BTC

and so on.

We can learn a lot from this simple set of orders. For one, this data is what makes the market depth chart useful. You could also say that the market spread is 1BTC in this case, because the best buy and sell orders are separated by 1BTC.

Now, let’s say we have 125LTC we want to sell. The way you would pick the best instant-sell price is to look at the buy orders. It looks like you could sell your first 100LTC at 5BTC and the remaining 25LTC at 4BTC. So, 5BTC should be your max rate. Depending on the API you are using, you might want to place two separate orders at different prices, or place one with the worst-case price assumed.

It works the same way with buying. If you want to buy 650LTC, you would buy the first 100LTC at 6BTC, then the next 500LTC at 7BTC, and the last 50LTC at 8BTC. So, your max price would be 8BTC, although your actual average price would be quite a bit lower. How best to place such an order spanning several prices depends on the service and API you’re using.
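The walk described above is mechanical, so here is a short Python sketch (function name and tuple format are my own) that returns both the deepest price you touch and the volume-weighted average fill price:

```python
def instant_fill(orders, amount):
    """Walk (quantity, price) orders, best price first; return the
    deepest price touched and the average price actually paid/received."""
    remaining, cost, deepest = amount, 0.0, None
    for qty, price in orders:
        take = min(qty, remaining)
        cost += take * price
        deepest = price
        remaining -= take
        if remaining == 0:
            break
    if remaining > 0:
        raise ValueError("order book too thin to fill this amount")
    return deepest, cost / amount

buy_orders = [(100, 5), (50, 4), (200, 3)]    # bids, highest price first
sell_orders = [(100, 6), (500, 7), (100, 8)]  # asks, lowest price first

instant_fill(buy_orders, 125)    # selling 125 LTC: touches 4, average 4.8
instant_fill(sell_orders, 650)   # buying 650 LTC: touches 8
```

The "deepest price touched" is the limit you would set to (probabilistically) fill the whole amount in one order; the average tells you what the fill actually costs.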

For Cryptsy you would place such a buy order as just one order of 650LTC at 8BTC. Cryptsy automatically chooses the best price for you, so you would get this effect. However, it’s not guaranteed to work this way: if, after you got the order list, someone else bought up 50LTC at 6BTC, then your expected price would end up being wrong. If you want a price guarantee, you should place orders exactly matching the order book. Following this method, though, you have a high likelihood of an order not completing, or taking longer than expected: for example, if you buy 100LTC at 6BTC but someone already bought 50LTC of it, Cryptsy by default would wait a while before completing the partial order (assuming no other sell orders were placed at that price).

It’s actually a really simple concept, but it took me a few days to wrap my head around it for some reason. Hopefully this post will clear things up for other people with the same problem.