## computational mathematics – Computing the analytic solution to the nonhomogeneous wave equation

I’m working on solving the nonhomogeneous wave equation numerically in Julia. To check whether the algorithm is implemented successfully, I need an analytical solution to the problem. The book I am consulting writes the analytic solution to the equation as follows:

(The formula is shown in an image hosted on imgur.com.)

The problem is, I am not sure:

1. what ξ, η, and ζ are;
2. how I can write them down in a computer-friendly way;
3. why an L2-norm expression appears between these letters.

Do you know how one can write this solution in code?
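Since the formula itself is only in the image, the following is a guess at what the book means: in the standard retarded-potential solution of the 3D inhomogeneous wave equation, ξ, η, ζ are the integration variables (the coordinates of the source point), and the L2 norm ‖(x−ξ, y−η, z−ζ)‖ is the distance from the source point to the evaluation point. A minimal numerical sketch under that assumption, with a hypothetical source term `f`:

```python
import numpy as np

# Hypothetical source term f(xi, eta, zeta, t); replace with the book's
# actual forcing function. Chosen here only so the sketch runs.
def f(xi, eta, zeta, t):
    return np.exp(-(xi**2 + eta**2 + zeta**2)) if t >= 0 else 0.0

def retarded_potential(x, y, z, t, c=1.0, L=3.0, n=20):
    """Approximate the retarded potential
        u(x,t) = 1/(4*pi*c^2) * (triple integral of f(xi, eta, zeta, t - r/c) / r),
    with r = sqrt((x-xi)^2 + (y-eta)^2 + (z-zeta)^2), by Gauss-Legendre
    quadrature over the cube [-L, L]^3."""
    pts, w = np.polynomial.legendre.leggauss(n)
    pts, w = pts * L, w * L            # map nodes/weights from [-1,1] to [-L,L]
    u = 0.0
    for i, xi in enumerate(pts):
        for j, eta in enumerate(pts):
            for k, zeta in enumerate(pts):
                r = np.sqrt((x - xi)**2 + (y - eta)**2 + (z - zeta)**2)
                if r < 1e-9:
                    continue           # skip the (integrable) r = 0 singularity
                u += w[i] * w[j] * w[k] * f(xi, eta, zeta, t - r / c) / r
    return u / (4 * np.pi * c**2)
```

If the image shows a different formula (e.g. a Duhamel/spherical-means form), the same pattern applies: ξ, η, ζ become loop or quadrature variables, and the norm becomes an explicit square-root expression.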

## computational geometry – Analogue of SpherePoints in higher dimensions?

I’m looking for an analogue of `SpherePoints` that works in dimensions higher than 3. Has this been created already?

A random sample from the unit sphere will already be approximately evenly spaced, but I was looking for something more evenly spaced than a typical random sample. Such a sample gives a slightly lower-variance estimator when estimating a directional statistic.
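I'm not aware of a built-in higher-dimensional `SpherePoints`, but one common substitute is to start from a random sample on the sphere S^(d−1) and spread it out by pairwise repulsion (a crude Riesz-energy descent). A sketch in Python; the function name, step size, and iteration count are my own choices, not an existing API:

```python
import numpy as np

def repelled_sphere_points(n_points, dim, n_iter=200, step=0.1, seed=0):
    """Quasi-uniform points on the unit sphere in R^dim, obtained by
    iterated inverse-square pairwise repulsion followed by projection
    back onto the sphere. Not Mathematica's SpherePoints algorithm."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(n_points, dim))
    x /= np.linalg.norm(x, axis=1, keepdims=True)     # random start on sphere
    for _ in range(n_iter):
        diff = x[:, None, :] - x[None, :, :]          # pairwise differences
        d = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(d, np.inf)                   # ignore self-pairs
        force = (diff / d[..., None]**3).sum(axis=1)  # inverse-square repulsion
        x += step * force / n_points
        x /= np.linalg.norm(x, axis=1, keepdims=True) # project back to sphere
    return x
```

Low-discrepancy constructions and minimal-energy point sets are better studied alternatives, but the repulsion iteration is easy to write and already beats an i.i.d. sample for variance-reduction purposes.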

## computational complexity – Zero knowledge proofs for having a proof that decides the NP vs P problem

Out of curiosity I would like to ask whether there are zero-knowledge proofs for every answer to the $$P=NP$$ question.

While it is easy to prove to others that one has a polynomial-time algorithm of moderate complexity for, e.g., the graph-coloring problem without revealing the algorithm itself, I wonder whether zero-knowledge proofs are also possible for proving that one has a proof that $$NP=P$$ or $$NP \ne P$$.
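For context, the "easy" direction alluded to above is the classic zero-knowledge protocol for graph 3-coloring (Goldreich–Micali–Wigderson): the prover commits to a randomly permuted coloring and opens only the two endpoints of one edge the verifier challenges. A toy sketch of one round using a hash commitment; the function names are mine, and a real protocol needs properly binding/hiding commitments and many repeated rounds:

```python
import hashlib
import random
import secrets

def commit(value):
    """Hash commitment to a small integer: returns (commitment, opening nonce)."""
    nonce = secrets.token_bytes(16)
    return hashlib.sha256(nonce + bytes([value])).hexdigest(), nonce

def zk_coloring_round(edges, coloring):
    """One round of the GMW zero-knowledge proof of 3-colorability:
    permute the colors, commit to every vertex's color, then open only
    the two endpoints of one randomly challenged edge."""
    perm = random.sample(range(3), 3)                 # fresh color permutation
    colors = {v: perm[c] for v, c in coloring.items()}
    commitments = {v: commit(c) for v, c in colors.items()}
    u, w = random.choice(edges)                       # verifier's challenge
    for v in (u, w):                                  # verifier checks openings
        com, nonce = commitments[v]
        assert hashlib.sha256(nonce + bytes([colors[v]])).hexdigest() == com
    return colors[u] != colors[w]                     # challenged edge is proper
```

Whether an analogous protocol exists for "I possess a proof of $$NP = P$$ (or $$NP \ne P$$)" is exactly the question: since any proof checkable in a formal system is an NP statement about bounded-length derivations, zero-knowledge protocols for NP in principle apply, assuming one-way functions exist.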

## computational geometry – Megiddo’s algorithm for finding lowest point in intersection of disks

Given $$n$$ disks $$D_1, D_2, \dots, D_n$$ in $$\mathbb{R}^2$$, we want to find the lowest point in the intersection of the disks in linear time by extending Megiddo’s algorithm.

My attempt so far: I divide the disks into sets $$d^+$$ and $$d^-$$ by splitting each disk in half, where

$$d^+_i$$ is the upper half of $$D_i$$, and
$$d^-_i$$ is the lower half of $$D_i$$.

Then I use Megiddo’s algorithm, but I have a problem: two upper halves $$d_i^+, d_j^+$$ can intersect in two points. How can I extend the algorithm to handle this case?
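Not an answer to the prune-and-search question itself, but while developing such an algorithm it helps to have a slow reference solver to test against. A grid-sampling sketch (my own helper, with each disk given as a `(cx, cy, r)` triple):

```python
import numpy as np

def lowest_point_bruteforce(disks, n=400):
    """Reference solver: sample an n-by-n grid over the disks' bounding box
    and return the lowest sampled point lying inside every disk.
    Only for validating a faster algorithm, not for production use."""
    cx, cy, r = np.array(disks, dtype=float).T
    xs = np.linspace((cx - r).min(), (cx + r).max(), n)
    ys = np.linspace((cy - r).min(), (cy + r).max(), n)
    X, Y = np.meshgrid(xs, ys)
    inside = np.ones_like(X, dtype=bool)
    for a, b, rad in disks:
        inside &= (X - a)**2 + (Y - b)**2 <= rad**2
    if not inside.any():
        return None                       # empty intersection
    idx = np.argmin(np.where(inside, Y, np.inf))
    return X.flat[idx], Y.flat[idx]
```

The answer is accurate only to the grid resolution, which is fine for checking that a candidate linear-time algorithm returns the right point up to tolerance.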

## np complete – Computational complexity of dividing a set of constraints into a minimum number of satisfiable clusters

I am looking for the computational complexity of the following problem.

Divide a given set of constraints into a minimum number of satisfiable clusters such that the constraints within the same cluster are satisfiable together.

Constraints can be of any type, such as Boolean logic or CSP, and you can assume that there is an `S()` function that takes a set of constraints and decides their satisfiability.

For instance, assume that the following set of constraints in Boolean logic over the Boolean variables p1, p2, p3, and p4 is given.

```
C = {"p1", "p1 & p2", "!p1 & p2", "p3 & p4", "!p4"}
```

This set can be divided into 2 satisfiable clusters as follows:

```
// A solution for C1 can be p1=True, p2=True, p3=True, p4=True
C1 = {"p1", "p1 & p2", "p3 & p4"}

// A solution for C2 can be p1=False, p2=True, p3=True, p4=False
C2 = {"!p1 & p2", "!p4"}
```

Another example would be:

```
C = {"x>1", "x!=y", "y==1", "x+y<1", "x+1<2*y", "x==1"} // where both x and y are integers.
```

This constraint set can be divided into 3 satisfiable clusters as follows:

```
// A solution for C1 can be x=5, y=1
C1 = {"x>1", "x!=y", "y==1"}

// A solution for C2 can be x=0, y=-1
C2 = {"x+y<1"}

// A solution for C3 can be x=1, y=2
C3 = {"x+1<2*y", "x==1"}
```

Here is my informal approach to the computational complexity of the problem:

First, consider the decision version: given a set of constraints, can
it be partitioned into k satisfiable clusters? Assuming the provided
`S()` function runs in polynomial time for the given constraint type,
a proposed partition can be verified efficiently by calling `S()` on
each cluster, so the problem is in NP. As for hardness, suppose the
constraints are Boolean formulas (SAT). Deciding whether a set of
constraints can be partitioned into k=1 satisfiable cluster is exactly
SAT, which is NP-complete, so the problem is NP-hard. Therefore, the
computational complexity of my problem depends on the constraint type
used: whenever satisfiability for the constraint type is NP-complete,
my problem is NP-hard.

What do you think about my approach? And if you think it is not correct or not complete, can you suggest another approach?
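To experiment with the problem, a simple first-fit heuristic on top of a brute-force `S()` is easy to write. This is my own sketch, not part of the problem statement: it yields an upper bound on the minimum number of clusters rather than the optimum, and the constraints are written as Python boolean expressions instead of the `&`/`!` syntax above:

```python
from itertools import product

def sat(constraints, varnames):
    """Brute-force S(): is the conjunction of the constraints satisfiable?
    Constraints are Python boolean expressions over the named variables."""
    for values in product([False, True], repeat=len(varnames)):
        env = dict(zip(varnames, values))
        if all(eval(c, {}, env) for c in constraints):
            return True
    return False

def greedy_clusters(constraints, varnames):
    """First-fit heuristic: put each constraint into the first cluster it
    stays jointly satisfiable with, otherwise open a new cluster."""
    clusters = []
    for c in constraints:
        for cl in clusters:
            if sat(cl + [c], varnames):
                cl.append(c)
                break
        else:
            clusters.append([c])
    return clusters
```

On the first example above the heuristic happens to find the optimal 2 clusters, but in general first-fit can overshoot the minimum, which is consistent with the hardness argument.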

## Context

The main benchmarks that computers are measured on are FLOPS, MIPS, and related ones, which measure how many basic operations of some kind a processor can do per second. It is very clear to me how these benchmarks relate to a processor’s ability to execute real-world algorithms. For example, most scientific and graphics algorithms require a certain number of floating-point operations, and this is their primary computational cost, so the FLOPS of a GPU carries a lot of information about how fast the GPU will run those algorithms (though not complete information, since there are other bottlenecks, such as communication bandwidth limits and scheduling efficiency).

## The TEPS benchmark

Graph500 is a competition for supercomputers that uses a different benchmark, “Traversed Edges Per Second” (TEPS), which is supposed to measure some notion of the computer’s communication bandwidth ability.

I understand the intuitive justification for such a benchmark, since data-communication is a key bottleneck in many applications.
However, I don’t fully understand how this benchmark is computed exactly, since the explanation on the site is not very clear to me, and I don’t understand how exactly this specific benchmark is supposed to relate to real-world computational problems like machine learning tasks. Some subquestions:

• How is TEPS computed? Is there a clearer explanation somewhere than the one on the Graph500 website? (I couldn’t find one after searching Google Scholar.)

• TEPS somehow measures the number of traversed edges in a graph, but what is this graph supposed to be analogous to? E.g., if we compare it to a machine learning task, what would a node be? A single data sample? A single memory location?

• What insight does TEPS give us beyond directly reading the communication bandwidth of the computers off their specifications, if we want to predict how well a supercomputer will do on some ML/big-data task (or some other task)?

## proof of work – Potential fork from genesis block using enormous computational power

51% attacks are commonly discussed, where one entity could potentially mine blocks ahead of the network; the assumption is that the attacker manages to acquire 51% of the computational power. However, suppose some entity manages to acquire much more computational power than that and, starting now, over the next few years exceeds the total amount of work in the existing chain. Could they fork a completely new chain from the genesis block? It is a hypothetical case, but purely according to the rule that the chain with the greatest total work wins, isn’t it possible that in the future one could exceed the total amount of work done until now and invalidate the current chain completely?

## coding theory – Computational complexity of rate-1/2 codes

We know from Berlekamp, McEliece and Van Tilborg (On the inherent intractability of certain coding problems, IEEE Trans. Information Theory, 24 (1978)) that computing the minimum distance of a (binary) code is hard. Later, Dumer, Micciancio and Sudan showed that it is hard to approximate as well. Both results concern the hardness of the problem when all codes are considered.

My question is regarding a slightly more restricted class: is anything known for codes of rate 1/2? Without loss of generality, the parity-check matrix of such a code can be assumed to be of the form (I | M), where I is the identity matrix and M is a square matrix. I assume that none of the results above directly gives NP-hardness (for the exact or the approximate problem) when the class is restricted in this way, since the reductions don’t end up landing in rate-1/2 codes.
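For concreteness, here is the naive exponential-time baseline that the hardness results say cannot be improved to polynomial time in general (unless P = NP): search supports in order of increasing weight for a nonzero codeword. A small sketch, my own helper, with H a binary parity-check matrix as a NumPy array:

```python
import itertools
import numpy as np

def min_distance(H):
    """Brute-force minimum distance of the binary linear code with
    parity-check matrix H: the smallest weight of a nonzero x with
    H x = 0 over GF(2). Feasible only for tiny codes."""
    r, n = H.shape
    for wt in range(1, n + 1):
        for support in itertools.combinations(range(n), wt):
            x = np.zeros(n, dtype=int)
            x[list(support)] = 1
            if not (H @ x % 2).any():   # x is a nonzero codeword
                return wt
    return None                         # code is trivial ({0} only)
```

Equivalently, the minimum distance is the smallest number of linearly dependent columns of H, which is how the brute force can be read off the parity-check form (I | M) as well.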

(Note: I asked this question on TheoryCS stack and it was unanswered even after a bounty period was over. Moderator suggested me to cross put it on mathoverflow. Here is the link to the first question. For the sake of being self-contained I will also write the question here.)

## computational geometry – Algorithm to construct a parabola that hits a given target and avoids given boundaries

I’m working on a video game and I’m struggling with the math behind one of the enemies. The enemy is a grenade launcher mounted on a vertical rail, which can slide up and down and lob a grenade at any angle with any amount of force. The grenade’s path will be a parabola which must hit the player, but there are line-segment boundaries in the way, represented by their two endpoints, which the parabola must avoid.

Here is a drawing.

What I’d like to do is calculate the equation for the parabola of the grenade which hits a target and misses all of the boundaries, from which I can figure out the position, angle, and force for the launcher to use. The parabola must be subject to these three constraints:

1. The parabola must pass through the target point $$(x_T, y_T)$$
2. The parabola must pass through the line segment $$\overline{RS}$$
3. For each boundary $$\overline{EF}$$, if the parabola passes through the segment, it must not happen between the line $$\overline{RS}$$ and the target point.

Depending on where the target is, there may be no solution, in which case I’d like it to return that information. If there is any solution there will be multiple; I would only need one.

## What I’ve tried so far:

We can represent the parabola as $$y-y_T=A(x-x_T)^2+B(x-x_T)$$, which takes care of the first constraint and means we need to find values of $$A$$ and $$B$$ that satisfy the other two constraints (we know $$A$$ must be negative because of the direction of gravity). Then for each boundary $$\overline{EF}$$ on the map, with endpoints $$(x_E,y_E)$$ and $$(x_F,y_F)$$, we can represent the boundary line as $$(y_E-y_F)x+(x_F-x_E)y+(x_E y_F-x_F y_E)=0$$. From that I can find the points of intersection between the line and the parabola, and make sure that for every boundary, the x-values of the intersection points are either not between $$x_E$$ and $$x_F$$, or not between $$x_S$$ and $$x_T$$. This quickly becomes a nasty quadratic equation, which then creates a system of inequalities that I don’t know how to solve. Can anyone think of a better way to approach the problem?
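If an exact solution of the inequality system proves intractable, a pragmatic alternative for a game is randomized search over (A, B) with a numeric clearance check. The sketch below is my own; it handles constraints 1 and 3 for a fixed launch x (the rail-segment constraint 2 would be one more check on the candidate parabola), and it tests boundary crossings by dense sampling rather than exact root-finding:

```python
import numpy as np

def parabola_clears(A, B, xT, yT, segments, x_launch):
    """Check numerically that y - yT = A(x-xT)^2 + B(x-xT) does not cross
    any boundary segment between the launch x and the target x.
    Segments are pairs of endpoints ((xE, yE), (xF, yF))."""
    xs = np.linspace(min(x_launch, xT), max(x_launch, xT), 500)
    ys = yT + A * (xs - xT)**2 + B * (xs - xT)
    for (xE, yE), (xF, yF) in segments:
        # Signed side of the segment's supporting line at each sample.
        s = (yE - yF) * xs + (xF - xE) * ys + (xE * yF - xF * yE)
        for i in np.where(np.diff(np.sign(s)) != 0)[0]:
            if min(xE, xF) <= xs[i] <= max(xE, xF):
                return False            # crossing lands on the segment itself
    return True

def find_parabola(xT, yT, segments, x_launch, trials=2000, seed=0):
    """Randomized search over (A, B) with A < 0 (downward gravity);
    returns the first clearing pair, or None if no trial succeeds."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        A = -rng.uniform(0.01, 2.0)
        B = rng.uniform(-5.0, 5.0)
        if parabola_clears(A, B, xT, yT, segments, x_launch):
            return A, B
    return None
```

Returning None after a fixed budget also gives the "no solution" signal the question asks for, albeit probabilistically rather than as a proof of infeasibility.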

## computational geometry – Finding lowest point in circles

Given n disks in the plane, I want to compute the lowest point in their intersection area. I’m looking for a simple randomized incremental algorithm.

I think this problem has some similarity with 2D half-plane intersection (2D LP). In that problem we were looking for an optimal point with respect to a cost vector, and the subproblem, finding the intersection of a half-plane with a convex region, could be reduced to a simpler 1D problem: half-line intersection, which is easier to solve.

Here, however, I have trouble defining a simpler subproblem. Also, in the analysis of the expected running time, I don’t see how to use backward analysis.

I also guess that maybe we could solve this problem by finding the convex hull of those circles and then looking for its core with half-plane intersection, but I’m really uncertain about this idea.
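While it isn't the randomized incremental algorithm asked for, it may help while developing one to note that the problem is a small convex program: minimize y subject to each disk constraint. That gives a handy reference implementation to test against. A sketch using SciPy; the `(cx, cy, r)` disk format and function name are my own conventions:

```python
import numpy as np
from scipy.optimize import minimize

def lowest_point(disks):
    """Lowest point of the intersection of disks (cx, cy, r), found by
    solving: minimize y subject to (x-cx)^2 + (y-cy)^2 <= r^2 for every
    disk. A reference solver, not the randomized incremental algorithm."""
    cons = [{"type": "ineq",
             "fun": lambda p, c=c: c[2]**2 - (p[0] - c[0])**2 - (p[1] - c[1])**2}
            for c in disks]
    x0 = np.mean([c[:2] for c in np.array(disks, dtype=float)], axis=0)
    res = minimize(lambda p: p[1], x0, constraints=cons, method="SLSQP")
    return res.x if res.success else None
```

The objective (0, 1)·p plays the role of the cost vector from the 2D LP analogy in the question, which suggests the randomized incremental structure should indeed carry over, with circular arcs replacing half-plane boundaries in the subproblem.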