## computational geometry – Efficient Data Structure for Closest Euclidean Distance

The question is inspired by the following UVa problem: https://onlinejudge.org/index.php?option=onlinejudge&Itemid=99999999&category=18&page=show_problem&problem=1628.

A network of autonomous, battery-powered data acquisition stations has been installed to monitor the climate in the Amazon region. An order-dispatch station can initiate the transmission of instructions to the control stations so that they change their current parameters. To avoid draining the battery, each station (including the order-dispatch station) can only transmit to two other stations. The recipients of a station are the two closest stations. In case of a tie, the first criterion is to choose the westernmost (leftmost on the map), and the second criterion is to choose the southernmost (lowest on the map).
You are commissioned by the Amazon State Government to write a program that decides whether, given the location of each station, messages can reach all stations.

The naive algorithm would, of course, build a graph with stations as vertices and compute the edges of a given vertex by searching through all other vertices for the closest two. Then we could simply run DFS/BFS. This takes $$O(V^2)$$ time to construct the graph (which does pass the test cases). My question, though, is whether we can build the graph any faster with an appropriate data structure. Specifically, given an arbitrary query point $$p$$ and a set of points $$S$$, can we organize the points in $$S$$ in such a way that we can quickly find the two closest points in $$S$$ to $$p$$ (say, in $$O(\log V)$$ time)?
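For concreteness, here is a sketch of the naive $$O(V^2)$$ construction followed by BFS (my own illustration in Python; the tie-breaking rule follows the problem statement):

```python
from collections import deque

def build_edges(stations):
    """For each station, find its two nearest stations (O(V^2) overall).
    Ties broken by smaller x (westernmost), then smaller y (southernmost)."""
    edges = []
    for i, (xi, yi) in enumerate(stations):
        # Sort all other stations by (squared distance, x, y).
        others = [(j, (xj - xi) ** 2 + (yj - yi) ** 2, xj, yj)
                  for j, (xj, yj) in enumerate(stations) if j != i]
        others.sort(key=lambda t: (t[1], t[2], t[3]))
        edges.append([j for j, _, _, _ in others[:2]])
    return edges

def all_reachable(stations, source=0):
    """BFS from the order-dispatch station; True if every station is reached."""
    adj = build_edges(stations)
    seen = {source}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen) == len(stations)
```

A k-d tree would let each nearest-neighbor query run in roughly logarithmic time on typical inputs, replacing the inner sort above.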

Posted on Categories Articles

## linear algebra – computational complexity when calculating the trace of a matrix product under a certain structure

I have two problems involving trace computation and some (possibly suboptimal) answers. My question is about a potentially more efficient algorithm for each of them. (I am more interested in an answer to question 1.)

1. Let $$U, V$$ and $$F$$ be three real matrices. All three matrices are of size $$d \times r$$ with $$r \ll d$$ (that is, $$U, V$$ and $$F$$ are "tall"). I want to compute $$\mathrm{trace}(U V^\top F F^\top)$$. Computing $$A = F^\top U$$, $$B = V^\top F$$ and the trace of $$AB$$ has complexity $$\mathcal{O}(r^2 d)$$. Is there a faster algorithm (given that $$r \ll d$$)? Can we get $$\mathcal{O}(r d)$$?

2. Let $$U, V$$ and $$M$$ be three real matrices. $$U$$ and $$V$$ have size $$d \times r$$ (with $$r \ll d$$), and $$M$$ is lower triangular (with positive elements on its diagonal) of size $$d \times d$$. I want to compute $$\mathrm{trace}(U V^\top M M^\top)$$. The simple algorithm of computing $$A = M^\top U$$, $$B = V^\top M$$, and then the trace of $$AB$$ has complexity $$\mathcal{O}(r d^2)$$. Is there a faster algorithm (given that $$r \ll d$$)?
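For question 1, here is a small NumPy sketch (my own illustration; the sizes are arbitrary) of the $$\mathcal{O}(r^2 d)$$ approach, checking the cyclic-trace identity $$\mathrm{trace}(U V^\top F F^\top) = \mathrm{trace}((F^\top U)(V^\top F))$$ against the naive evaluation:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 500, 4  # r << d; sizes chosen only for the demo
U, V, F = (rng.standard_normal((d, r)) for _ in range(3))

# Naive evaluation: forms d x d intermediates, O(d^2 r) time.
naive = np.trace(U @ V.T @ F @ F.T)

# Cyclic invariance of the trace: trace(U V^T F F^T) = trace((F^T U)(V^T F)).
A = F.T @ U             # r x r, costs O(r^2 d)
B = V.T @ F             # r x r, costs O(r^2 d)
fast = np.trace(A @ B)  # O(r^3); np.sum(A * B.T) would give O(r^2)
```

The two results agree up to floating-point rounding; the bottleneck in the fast path remains the two $$r \times d$$ by $$d \times r$$ products.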

If this question does not belong here, please let me know! (And if so, where I could post it instead.)

Thank you so much!


## List of long open, elementary problems that are computational in nature

I want to ask a question that is similar to this question.

Question: I ask for a list of long open problems that are computational in nature and that a beginning doctoral student can understand. One problem per answer, please.

Meaning of "beginning doctoral student": anyone who can solve all the problems on a math qualifying exam at one of the top 30 institutions in the United States.

Meaning of "computational nature": By this I do not mean a computing task that can be carried out by a computer, but a problem in which an object (e.g. a topological invariant, a closed formula, etc.) associated with a mathematical object has to be calculated. Example: calculating the homotopy groups of spheres.

Meaning of "not too famous" (as in this question): If there is already an entire monograph devoted to the problem (or to a narrow circle of problems), it need not be mentioned again here. I am looking for problems that a mathematician working outside the field is unlikely to have encountered.

Meaning of "long open" (as in this question): The problem should appear in the literature or have a solid history as folklore. So I am not asking for newly invented problems, nor for a laundry list of all the unloved elementary technical lemmas that block someone's private research. There should at least be a small community of mathematicians who care about the solution of any of these problems.


## Homological algebra – Čech-Alexander complexes in computing (crystalline / prismatic) cohomology

I have a naive question about Čech-Alexander complexes in prismatic cohomology (although I suspect the situation is similar for crystalline cohomology).

They appear to have been introduced as a method for computing the prismatic cohomology $$R\Gamma_{\Delta}(\mathfrak{X}/(A, I), \mathcal{O}_{\Delta})$$ of affine formal $$(A/I)$$-schemes, that is, in the situation when $$\mathfrak{X}$$ is of the form $$\mathfrak{X} = \mathrm{Spf}(R)$$.

Can they be used in more general situations? I assume it is too optimistic to hope for a global Čech-Alexander complex (right? It seems that at least a (cosimplicial) presheaf can be obtained (e.g. in the étale topology of $$\mathfrak{X}$$), by functoriality in $$R$$), but maybe they can be used somehow indirectly? If not, are there other tools for computing prismatic cohomology?

(A similar question can be asked for Čech-Alexander complexes and crystalline cohomology. In that context I have seen only indirect uses, e.g. for the comparison between crystalline and de Rham cohomology in the affine case, followed by other arguments in the global case. I wonder whether this is the "standard template" for using Čech-Alexander complexes.)


## Reference request – Computational complexity of optimization algorithms via randomized algorithm theory

A fundamental and undoubtedly much-studied problem is to determine not only whether an optimization algorithm converges to its optimum, but also how quickly it converges (see a discussion of how this can be measured here: https://mathoverflow.net/a/90920/47228). I am interested in whether techniques from randomized algorithm theory have been used to investigate this question (either in a very concrete or a very abstract setting). The type of statement I think such an approach could produce would be the following:

If one draws a starting point uniformly at random from a given set of potential starting points, the algorithm converges with probability $$1 - \epsilon$$ in fewer than $$N$$ iterations.

As you can see, the question is not particularly specific, but this is deliberate: I am interested in ideas/references at every level of generality and for every type of optimization technique/algorithm.
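As a toy illustration of this kind of statement (my own sketch; the quadratic objective, step size, and thresholds are arbitrary choices), one can estimate such a probability empirically by Monte Carlo over random starting points:

```python
import random

def gd_iters(x0, lr=0.1, tol=1e-6, max_iter=1000):
    """Gradient descent on f(x) = x^2; returns iterations until |x| < tol."""
    x = x0
    for k in range(max_iter):
        if abs(x) < tol:
            return k
        x -= lr * 2 * x  # gradient of x^2 is 2x, so x shrinks by factor 0.8
    return max_iter

def prob_converges_within(N, trials=2000, seed=42):
    """Estimate P(convergence in fewer than N iterations) over uniform
    random starting points in [-10, 10]."""
    rng = random.Random(seed)
    hits = sum(gd_iters(rng.uniform(-10, 10)) < N for _ in range(trials))
    return hits / trials
```

For this contractive toy problem the estimate jumps from near 0 to 1 around $$N \approx \log(|x_0|/\mathrm{tol}) / \log(1/0.8)$$ iterations; for general algorithms one would need concentration arguments rather than simulation.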

Thank you in advance. 🙂


## Turing machines – Computational complexity when counting symbols

Turing machines are a nice model with several advantages, especially their simplicity, but they are not the first choice when analyzing algorithms. Algorithms are typically analyzed, implicitly, in the RAM model and in some cases in the BSS model.

Here are some comments on the computational complexity of counting in different models:

Single-tape Turing machines: These are considered mainly because it is relatively easy to prove lower bounds for them. As a model of computation, they are even less realistic than multi-tape Turing machines.

Multi-tape Turing machines: A standard example in amortized analysis is that an increasing counter can be implemented with $$O(1)$$ amortized bit operations. This is because only one bit changes half the time, only two bits a quarter of the time, etc., for a total number of changed bits per increment of $$1/2 + 2/4 + 3/8 + \cdots = 2$$. On a Turing machine, counting to $$n$$ is linear overall: it can be implemented in $$O(n)$$.
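The amortized bound can be checked empirically with a small sketch (my own illustration) of a little-endian binary counter that tallies the total number of bit flips over $$n$$ increments:

```python
def increment(bits):
    """Increment a little-endian binary counter in place;
    return the number of bit flips performed."""
    flips = 0
    i = 0
    while i < len(bits) and bits[i] == 1:
        bits[i] = 0      # carry: flip a trailing 1 to 0
        flips += 1
        i += 1
    if i == len(bits):
        bits.append(1)   # counter grew by one bit
    else:
        bits[i] = 1      # final 0 -> 1 flip
    flips += 1
    return flips

n = 1 << 16
bits = [0]
total = sum(increment(bits) for _ in range(n))
# total is below 2n, i.e. amortized O(1) bit flips per increment.
```

Counting the flips per bit position gives exactly the geometric series above: bit 0 flips every increment, bit 1 every second increment, and so on.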

RAM machine: A RAM machine consists of a finite number of registers and a random-access memory. The registers are $$O(\log n)$$ bits long, where $$n$$ is the size of the input. Basic operations on registers take constant time. In particular, incrementing a counter that can count up to $$\mathit{poly}(n)$$ takes $$O(1)$$ worst-case time, so your function runs in $$O(n)$$.

BSS machine: One has to be careful when computing with large numbers. Arithmetic on a RAM machine only takes constant time if the operands have size $$\mathit{poly}(n)$$. A BSS machine provides access to special registers in which values from a specific field, for example the real numbers, are stored. You can do constant-time arithmetic and comparisons on them, but not operations like floor, and you may not use such values for indexing either. (If you do not restrict the model enough, you can quickly solve SAT in polynomial time.) You can think of a BSS machine as performing floating-point operations that in practice take constant time, while ignoring the finite precision involved.


## Computational complexity – $$P \ne NP$$ from Wikipedia

Here it says:

Thus the question "Is P a proper subset of NP" can be reformulated as "Is existential second-order logic able to describe languages (of finite linearly ordered structures with nontrivial signature) that first-order logic with least fixed point cannot?"

And here is:

However, no first-order theory has the strength to uniquely describe a structure with an infinite domain, such as the natural numbers or the real line. Axiom systems that fully describe these two structures (i.e. categorical axiom systems) can be obtained in stronger logics such as second-order logic.

That seems to tell me that $$P \ne NP$$.

What did I miss?


## Terminology – what exactly are computational effects?

I am really confused by the definition of computational effects. What I knew and understood about computational effects was just that they are impure computations, but someone pointed out that computational effects include continuations, which do not appear to be impure. Could someone please clarify this?

Sorry for the rambling; here are my current questions:

1. What is a formal (or at least precise) definition of computational effects?

2. If computational effects are impure things, why is a continuation one? Is a continuation impure?
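To make the continuation point concrete, here is a small Python sketch (my own illustrative example, not tied to any particular formal semantics): a function written in continuation-passing style can discard its pending continuation, producing a control effect such as early exit without any mutation or I/O.

```python
# Direct style: an ordinary "pure" function returns its result.
def add_direct(x, y):
    return x + y

# Continuation-passing style: instead of returning, the function
# hands its result to an explicit continuation k.
def add_cps(x, y, k):
    return k(x + y)

def product_cps(xs, k):
    """Multiply a list of numbers in CPS, aborting immediately on the
    first 0 by simply not invoking the pending continuation k.
    Discarding k is a control effect, yet nothing here is stateful."""
    if not xs:
        return k(1)
    if xs[0] == 0:
        return 0  # early exit: the continuation k is thrown away
    return product_cps(xs[1:], lambda r: k(xs[0] * r))
```

Invoked with the identity continuation, `product_cps([2, 3, 4], lambda r: r)` multiplies normally; with a 0 in the list, the surrounding continuation is never run, which is exactly the kind of behavior that makes continuations count as an effect even though no state is mutated.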

Thank you.


## Computability – Help with computational complexity problems


## Computability – How much more powerful are regular expressions in modern programming languages than regular expressions from the theory of computation?
