Computational Models – What's Wrong With My Bitwise Cyclic Tag Program?

I recently discovered self-modifying bitwise cyclic tag and started writing a program that simulates its behavior.

from console import clear

def SBCT(program):
    index = 0
    t = 0
    while program:
        clear()
        print('step %d:' % t)
        print(program)
        print(' ' * index + '^')  # marker under the current index

        h = int(program[index])
        if h == 0:
            # clears the first program bit
            program = program[1:]
            if program: index %= len(program)
        else:
            # appends the appropriate bit if the first bit is one
            index = (index + 2) % len(program)
            program += int(program[0]) * h * program[(index - 1) % len(program)]
        t += 1

word = '1011110111'
SBCT(word)

As far as I can tell, it matches the behavior of the sample string (defined here as word) given on the language's page, as well as the smaller examples I checked. The problem is that, on the given example, it seems to go wrong in the long run. The page states that the program halts after 43074 steps, but with my code it does not seem to terminate. Since no intermediate steps are given to compare against, I can't tell where I made a mistake. Could someone point out where I went wrong?
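For reference, here is a stripped-down, print-free harness that runs the same update rule as the code above and just counts steps (the sbct_steps name and the max_steps cap are only there to make the comparison with the 43074 figure easy to run; it simply mirrors the logic above, so it inherits any bug in it):

from console import clear  # not needed here, shown only to match the code above

def sbct_steps(program, max_steps=100000):
    # Same update rule as SBCT above, but without printing; returns the number
    # of steps until the program string becomes empty, or None if the cap is hit.
    index = 0
    t = 0
    while program:
        if t >= max_steps:
            return None
        h = int(program[index])
        if h == 0:
            program = program[1:]
            if program:
                index %= len(program)
        else:
            index = (index + 2) % len(program)
            program += int(program[0]) * h * program[(index - 1) % len(program)]
        t += 1
    return t

print(sbct_steps('1011110111'))  # the language page says this should be 43074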

Computational Linguistics – "POS (part-of-speech) tagging is closely related to WSD (word sense disambiguation)" – please explain this sentence

It is written here, in Word Sense Disambiguation: A Structured Learning Perspective:

2.1 Fundamentals of Word Sense Disambiguation

… POS tagging is usually performed before WSD. POS tagging is closely related to WSD, and POS tagging is a well-studied problem with more than 95% accuracy. Therefore, separating POS tagging from WSD can fully expose the hardest core of WSD.

"POS usually before the definition" does not necessarily mean that both tasks are fulfilled closely connected.

I have searched for the relationship between these two tasks, but found no good explanation.

Could someone explain the quoted sentence in more detail, and also why WSD is harder than POS tagging?

Many thanks.

Computational Complexity – expressing a torsion point of an elliptic curve as a combination of generators

I am facing the following problem:
Suppose we have a finite field $\mathbb{F}_p$ and an elliptic curve $E$ defined over it. Let $m \in \mathbb{Z}$ be a number that is not a multiple of the characteristic of the base field. Then we have an isomorphism
$$ E[m] \longleftrightarrow (\mathbb{Z}/m\mathbb{Z})^2 $$
Suppose we know that
$$ E[m] \subset E(\mathbb{F}_q) $$
where $q$ is a power of $p$. Suppose we are also given generators $P, Q \in E[m]$ and a third point $R \in E[m]$. I want to find $a, b \in [0, m-1]$ for which
$$ R = aP + bQ $$
What is the computational cost of this problem? The most efficient algorithm I can think of is to solve many ECDLP instances
$$ R - aP = bQ $$
where $a \in [0, m-1]$. Of course, this has cost $O(m \sqrt{m})$, since a single ECDLP instance costs $O(\sqrt{m})$.
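To illustrate the approach I have in mind, here is a rough Python sketch (make_group, bsgs and decompose are names made up for this sketch): it loops over $a$ and runs baby-step giant-step for $b$. To keep it self-contained, $E[m]$ is modeled abstractly as $(\mathbb{Z}/m\mathbb{Z})^2$ under addition, via the isomorphism above; on an actual curve the add/neg/mul helpers would be elliptic-curve point operations instead.

from math import isqrt

def make_group(m):
    # E[m] modeled abstractly as (Z/mZ)^2 under addition; identity is (0, 0).
    add = lambda A, B: ((A[0] + B[0]) % m, (A[1] + B[1]) % m)
    neg = lambda A: ((-A[0]) % m, (-A[1]) % m)
    def mul(k, A):
        acc, base = (0, 0), A      # double-and-add
        while k:
            if k & 1:
                acc = add(acc, base)
            base = add(base, base)
            k >>= 1
        return acc
    return add, neg, mul

def bsgs(add, neg, mul, Q, X, m):
    # Baby-step giant-step: find b in [0, m) with X = b*Q, or None.
    s = isqrt(m) + 1
    baby, pt = {}, (0, 0)
    for j in range(s):             # baby steps j*Q
        baby.setdefault(pt, j)
        pt = add(pt, Q)
    step, gamma = neg(mul(s, Q)), X
    for i in range(s):             # giant steps X - i*s*Q
        if gamma in baby:
            return (i * s + baby[gamma]) % m
        gamma = add(gamma, step)
    return None

def decompose(add, neg, mul, R, P, Q, m):
    # Brute force over a: O(m) outer iterations, O(sqrt(m)) group ops each.
    for a in range(m):
        b = bsgs(add, neg, mul, Q, add(R, neg(mul(a, P))), m)   # R - a*P = b*Q ?
        if b is not None:
            return a, b
    return None

m = 101
add, neg, mul = make_group(m)
P, Q = (1, 0), (0, 1)              # generators of (Z/mZ)^2
R = add(mul(37, P), mul(64, Q))
print(decompose(add, neg, mul, R, P, Q, m))   # -> (37, 64)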
Thanks for your time.

Computational Geometry – Approximate Matching – Two plane geometries with n points

I'm trying to approximately match a planar geometry with n points against many other planar geometries. The goal is to find the most similar shape possible, independent of rotation and scale.

My idea would be to "normalize" all shape coordinates in the database to [0, 1] × [0, 1], which removes the need for a separate scale correction, and then compare them to the given (blue) query geometry.
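A minimal Python sketch of that normalization step (the actual project is in JavaScript, but the idea carries over directly; normalize and mean_point_distance are illustrative names, and the crude score below assumes equal point counts and ordering and does not handle rotation at all):

import math

def normalize(points):
    # Translate and scale a list of (x, y) points into the unit square [0,1] x [0,1],
    # preserving the aspect ratio, as described above.
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    min_x, min_y = min(xs), min(ys)
    span = max(max(xs) - min_x, max(ys) - min_y) or 1.0   # avoid division by zero
    return [((x - min_x) / span, (y - min_y) / span) for x, y in points]

def mean_point_distance(a, b):
    # Crude similarity score for two equally long, equally ordered point lists:
    # average distance between corresponding normalized points (lower = more similar).
    # This is NOT rotation invariant; rotation still has to be handled separately.
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

query = normalize([(2, 3), (10, 3), (10, 9), (2, 9)])
candidate = normalize([(0, 0), (4, 0), (4, 3), (0, 3)])
print(mean_point_distance(query, candidate))   # 0.0: same shape up to scale/translation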

I've looked at some papers by Helmut Alt, especially Discrete Geometric Shapes – Matching, Interpolation, and Approximation (section 3.2, Approximate Matching), but I can't translate the math into a working function (I'm doing this in JavaScript).

Maybe someone has an idea how to do this, or a different approach.

Many thanks!


Algorithms – Find a sequence of computable functions whose worst-case computation time can be made arbitrarily large

Define $T: \mathbb{N} \times \mathbb{N} \rightarrow \mathbb{N}$ by:

$T(k, n) = 2^{T(k-1, n)}$

$T(0, n) = 1$

$T(1, n) = 2^n$

$T(2, n) = 2^{2^n}$
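In other words (this is just my reading of the recursion for $k \geq 1$), $T(k, n)$ is a tower of $k$ twos with $n$ on top:

$$ T(k, n) = \underbrace{2^{2^{\cdot^{\cdot^{\cdot^{2^{n}}}}}}}_{k \text{ twos}} $$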

And denote the worst-case time of computing $f(n)$ by $WC(f(n))$.

Can we find computable functions $f_i: \mathbb{N} \rightarrow \mathbb{N}$ such that computing $f_i(n)$ takes $\Omega(T(i, n))$ worst-case time, and such that for every other computable function $f \equiv f_i$ we have $\lim_{n \rightarrow \infty} \frac{WC(f_i(n))}{WC(f(n))} \neq 0$?

In a way, I'm trying to find a concrete family of functions that take at least time X to compute, where X can (asymptotically) be as large as we want.

I thought maybe $f_i(n) =$ the $T(i, n)$-th digit of $\pi$, but I'm not quite sure whether this meets the second condition.

Computational Geometry – Dividing the regular n-gon into k parts

I just came across the following problem:

$L$: Given positive integers $n \geq 3$, $k \geq 1$, decide whether it is possible to divide a regular $n$-gon into $k$ congruent pieces, where each piece must be connected.

For some $n$ it is trivial, e.g. $\forall k: (4, k) \in L$ (a square can be cut into $k$ congruent strips), and $\forall k \mid 2n: (n, k) \in L$. But for other cases it seems hard to decide.

As a promise, each input is either not in $L$, or can be divided using a finite number of straight line segments. If these segments all pass through pairs of points with rational coordinates, the solution can be verified in polynomial time. But personally I find this assumption unconvincing, and I have no other idea.

Is there an algorithmic strategy to decide $L$, or is it likely too hard to solve?

— Edit —

Most lines cannot be parameterized by rational parameters: choosing $\alpha_1, \alpha_2, \alpha_3, \alpha_4$ with $\forall i \neq j: \mathbb{Q}(\alpha_i) \not\subseteq \mathbb{Q}(\alpha_j)$ and taking the line through $(\alpha_1, \alpha_2)$ and $(\alpha_3, \alpha_4)$, it will not pass through any rational point.

Computational Geometry – Number of mesh cells for Voronoi mesh too low

I tried to answer that question out of interest, and thought I would create a Voronoi mesh, clip it to a circle, and color the mesh cells. But if I ask VoronoiMesh to create cells for too many points, MeshCellCount[mesh, 2] (or, equivalently, Length@MeshCells[mesh, 2]) returns a number smaller than the number of points originally specified.

I tried different ways of generating the points around which the cells should be built, using both exact and real numbers, and I checked the documentation for VoronoiMesh and MeshRegion, but I'm still not sure what causes this. Are my points simply too close together, so that VoronoiMesh cannot determine a unique cell for some of them?

The simplest code that reproduces this is:

MeshCellCount[
  VoronoiMesh[
    Flatten[Quiet[Thread[CirclePoints[Range[100], 360]]], 1]],
2]

This should return 36,000, since there are 100 radial values and 360 azimuthal points, but instead it returns 35,985. For this code the problem seems to start when there are roughly 32,000 points. When the number of radial values is set to 87, I get the expected result; when it is set to 88 (with the same 360 azimuthal points), I get an unexpected result. For all smaller numbers it seems to work as expected.

If I use the following code to generate the points, the discrepancy shows up, for some reason, even with a smaller number of cells.

generate[i_] :=
 Table[
  {r Sin[\[Theta]], r Cos[\[Theta]]},
  {\[Theta], 0, 359 \[Pi]/180, \[Pi]/180},
  {r, 1/2, (i - 1) + 1/2}
  ]

66*360 - MeshCellCount[VoronoiMesh[Flatten[generate[66], 1]], 2]

The result of this code is 2, whereas I would expect it to be zero for every value passed to generate.

Does anyone know what I'm doing wrong, or is there a workaround? Or am I simply asking too much of VoronoiMesh?

Cryptography – What is the current computational speed when performing elliptic curve multiplication?

Hello, I was looking for information on how long it would take to crack a Bitcoin private key with a brute-force approach, and I could not find a good answer on how long it takes to check whether a particular key (or each individual candidate key) would work.

Basically, I'm wondering how long the elliptic curve multiplication needed to check whether a single private key matches a particular public key would take (on average). Thanks 🙂
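For concreteness, here is a rough, self-contained Python sketch of what that check involves on secp256k1 (the curve Bitcoin uses): one scalar multiplication d·G followed by a comparison with the target public key. The curve constants below are the standard secp256k1 parameters; the helper names, the example scalar, and the timing harness are just illustrative, and pure Python is of course orders of magnitude slower than optimized library code.

import time

# secp256k1: y^2 = x^3 + 7 over F_p, with base point G
p = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(A, B):
    # Affine point addition/doubling; None stands for the point at infinity.
    if A is None: return B
    if B is None: return A
    if A[0] == B[0] and (A[1] + B[1]) % p == 0:
        return None
    if A == B:
        lam = 3 * A[0] * A[0] * pow(2 * A[1], -1, p) % p
    else:
        lam = (B[1] - A[1]) * pow(B[0] - A[0], -1, p) % p
    x = (lam * lam - A[0] - B[0]) % p
    return (x, (lam * (A[0] - x) - A[1]) % p)

def ec_mul(k, pt):
    # Textbook double-and-add; real libraries are much faster and constant-time.
    acc = None
    while k:
        if k & 1:
            acc = ec_add(acc, pt)
        pt = ec_add(pt, pt)
        k >>= 1
    return acc

target_pub = ec_mul(0xC0FFEE, G)   # stand-in "public key" for this demo only

def candidate_matches(d, pub):
    # The check in question: does private key d produce the public key pub?
    return ec_mul(d, G) == pub

start = time.perf_counter()
print(candidate_matches(0xC0FFEE, target_pub),
      "%.1f ms for one check in pure Python" % ((time.perf_counter() - start) * 1e3))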

Computational Models – RAM and Turing machines: time complexity of the simulation

My RAM machine is very simple:

  • it has $k$ tapes, an input tape, and a special control tape

  • it has an infinite memory (an array $A$) that can be accessed at random

  • the control tape is read when the machine enters the special state $q_{control}$

  • the control tape always contains the symbol $R$ or $W$, specifying the memory operation (read or write), together with a binary number representing the address of a cell in $A$

  • in the case of $W$, the machine must also place a symbol of the tape alphabet on the control tape; this symbol is stored in the selected cell of $A$

  • in the case of $R$, the machine writes the symbol that was read back onto the control tape

  • otherwise it is very similar to a Turing machine ($k$ work tapes, states)

I would like to show that this computation can be simulated on a Turing machine with at most quadratic overhead. (If a function $f$ is computable on the RAM in time $T(n)$, then it is computable on the Turing machine in time $O(T(n)^2)$.)


My approach is to exploit the fact that the RAM machine can use at most $T(n)$ cells of $A$. I could turn $A$ into a Turing machine tape that is scanned sequentially for each memory access of the RAM machine. This would of course lead to $T(n)^2$ time complexity.

However, I would also need a tape that maps "RAM addresses" to the corresponding positions on my simulated memory tape. How else would I remember where I stored the symbol corresponding to the address "11010101" when it needs to be read? But the catch is: an address can be up to $T(n)$ bits long, which immediately brings us to $T(n)^3$ time complexity.
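To make the bookkeeping concrete, here is a small Python model of the simulation I have in mind (purely illustrative; the class and member names are made up): memory contents live on one sequential "tape", a second "tape" maps RAM addresses to positions on it, and every access scans that map linearly, which is where the extra factor comes from.

class SequentialMemory:
    # Model of the two simulated tapes: a list of stored symbols and a
    # sequential address map; every access scans the map from the start.

    def __init__(self):
        self.memory_tape = []   # symbols, in the order they were first written
        self.address_map = []   # (address string, position on memory_tape)
        self.scanned_bits = 0   # rough cost counter: address bits compared

    def _find(self, address):
        # Sequential scan of the map tape: up to T(n) entries, each up to
        # T(n) bits long, hence the worrying cost of a single access.
        for addr, pos in self.address_map:
            self.scanned_bits += max(len(addr), len(address))
            if addr == address:
                return pos
        return None

    def write(self, address, symbol):
        pos = self._find(address)
        if pos is None:
            self.address_map.append((address, len(self.memory_tape)))
            self.memory_tape.append(symbol)
        else:
            self.memory_tape[pos] = symbol

    def read(self, address):
        pos = self._find(address)
        return None if pos is None else self.memory_tape[pos]

mem = SequentialMemory()
mem.write('11010101', 'a')
mem.write('00000001', 'b')
print(mem.read('11010101'), mem.scanned_bits)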

How can I solve this?