automata – How to prove the same states are repeating in Synchronizable DFA?

I'm trying to prove that if $M$ is a $k$-state synchronizable DFA, then it has a synchronizing sequence of length at most $k^3$. First, I have to prove that if a string synchronizes two states, then its length is at most $k^2$.

The question was answered here.

The proof of this is a standard shrinking argument: if such a word is longer than $k^2$, then during the runs from $q_1, q_2$ a pair of states repeats, and we can shrink $w$.

But how do I know that the repeated part of the string in both paths from $q_1, q_2$ to some state like $q_i$ is the same? And why doesn't shrinking the string break the synchronization?

To make it clearer, I drew the picture below.

[Figure: the two runs from $q_1$ and $q_2$ on the string $abcd$, each containing a loop.]

The string $abcd$ takes $q_1, q_2$ to $q_i$, but after eliminating the loops, the string $acd$ takes $q_1$ to $q_i$ and the string $abd$ takes $q_2$ to $q_i$; thus removing the loops breaks the synchronization.
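For what it's worth, here is how I currently picture the argument in code: a breadth-first search over ordered pairs of states, advancing both runs in lockstep. The little DFA is made up, and this is only my own sketch of the argument, so I may well be misreading it.

    from collections import deque

    # Sketch only: the transition table below is a made-up 3-state DFA.
    # The search runs over ordered pairs of states, so there are at most
    # k*k distinct configurations; a shortest merging word never revisits
    # a pair, which is where I think the k^2 bound comes from.
    delta = {                       # delta[state][letter] -> state
        0: {'a': 1, 'b': 0},
        1: {'a': 2, 'b': 0},
        2: {'a': 2, 'b': 2},
    }

    def shortest_merging_word(delta, q1, q2):
        """Shortest word w with delta*(q1, w) == delta*(q2, w), or None."""
        start = (q1, q2)
        parent = {start: None}      # pair -> (previous pair, letter read)
        queue = deque([start])
        while queue:
            p, r = queue.popleft()
            if p == r:              # the two runs have met: reconstruct w
                word, node = [], (p, r)
                while parent[node] is not None:
                    node, letter = parent[node]
                    word.append(letter)
                return ''.join(reversed(word))
            for letter in delta[p]:
                nxt = (delta[p][letter], delta[r][letter])
                if nxt not in parent:
                    parent[nxt] = ((p, r), letter)
                    queue.append(nxt)
        return None                 # the two states cannot be merged

    print(shortest_merging_word(delta, 0, 1))   # prints 'b' for this example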

Any suggestions would be greatly appreciated, thanks.

probabilistic algorithms – From coin flips to algebraic functions via pushdown automata

Given a coin with probability of heads $\lambda$, simulate a new coin whose probability of heads is $f(\lambda)$. This is the Bernoulli factory problem, and it can be solved only for certain functions $f$. (For example, by flipping the coin twice and outputting heads only if the first flip shows heads and the second shows tails, we can simulate the probability $\lambda(1-\lambda)$.)
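As a tiny numerical illustration of that example (a sketch of mine, not taken from any reference; in a real factory $\lambda$ is of course unknown and is only used here to drive the simulated coin):

    import random

    def flip(lam, rng=random):
        """One flip of the lambda-coin: True means heads."""
        return rng.random() < lam

    def new_coin(lam, rng=random):
        """Output 1 with probability lam*(1-lam): heads then tails."""
        return 1 if flip(lam, rng) and not flip(lam, rng) else 0

    lam, n = 0.3, 100_000
    estimate = sum(new_coin(lam) for _ in range(n)) / n
    print(estimate, lam * (1 - lam))    # the two numbers should be close (~0.21)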

Roughly speaking, the Bernoulli factory problem can be solved for $f$ only if $f$ is continuous on $(0, 1)$ and equals neither 0 nor 1 anywhere on that open interval (Keane and O'Brien 1994). This result holds regardless of the (classical) computational model.

This question is about solving the Bernoulli factory problem in a restricted model, namely pushdown automata (state machines with a stack) that are driven by flips of a coin and produce new probabilities. In that model, Mossel and Peres (2005) showed that a pushdown automaton can simulate a (continuous) function that maps $(0, 1)$ to $(0, 1)$ only if that function is algebraic over the rational numbers, as defined later. They posed a question that remains open: is the converse of that statement true?

Define a pushdown automaton as follows (see Mossel and Peres 2005, Definition 1.5, for a more precise statement). The automaton starts at a given control state and has a symbol stack that starts with at least one stack symbol. With each transition, the automaton (based on the current state, the current input symbol, and the symbol at the top of the stack):

  • determines the next state, and
  • replaces the top stack symbol with zero or more stack symbols.

When the stack is empty, the automaton stops and returns either 0 or 1 based on its current state. The input symbols are each either "heads" or "tails", are independent and identically distributed, and are the result of flips of a coin with unknown probability of heads.
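To make the model concrete, here is a small simulator of the kind of automaton just described. This is my own sketch of my reading of Definition 1.5, with a made-up transition table that simply simulates $f(\lambda) = \lambda^2$.

    import random

    def run_pda(delta, start_state, start_stack, accepting, lam, rng=random):
        """One run: flip the lambda-coin until the stack empties, then
        return 1 if the final control state is accepting, else 0."""
        state, stack = start_state, list(start_stack)    # top = last element
        while stack:
            flip = 'H' if rng.random() < lam else 'T'    # i.i.d. coin flips
            top = stack.pop()
            state, pushed = delta[(state, flip, top)]    # new state, replacement symbols
            stack.extend(pushed)                         # zero or more symbols
        return 1 if state in accepting else 0

    # Made-up example: start with two stack symbols, pop one per flip, and
    # accept iff both flips were heads, i.e. simulate f(lambda) = lambda^2.
    delta = {
        ('all_heads', 'H', 'Z'): ('all_heads', []),
        ('all_heads', 'T', 'Z'): ('some_tail', []),
        ('some_tail', 'H', 'Z'): ('some_tail', []),
        ('some_tail', 'T', 'Z'): ('some_tail', []),
    }
    lam, n = 0.3, 100_000
    est = sum(run_pda(delta, 'all_heads', 'ZZ', {'all_heads'}, lam) for _ in range(n)) / n
    print(est, lam ** 2)    # the estimate should be close to 0.09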

A function $f(x)$ is algebraic over the rational numbers if:

  • it can be a solution of a system of polynomial equations whose coefficients are rational numbers, or equivalently,
  • there is a nonzero polynomial $P(x, y)$ in two variables and whose coefficients are rational numbers, such that $P(x, f(x)) = 0$ for every $x$ in the domain of $f$.
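For a tiny concrete check of the second formulation (my own example, using sympy): $f(x) = \sqrt{x}$ is algebraic because $P(x, y) = y^2 - x$ vanishes at $(x, f(x))$, and so, trivially, is the $\lambda(1-\lambda)$ example above via $P(x, y) = y - x(1-x)$.

    from sympy import symbols, sqrt, simplify

    x, y = symbols('x y')
    P = y**2 - x                      # witness polynomial for f(x) = sqrt(x)
    print(simplify(P.subs(y, sqrt(x))))       # 0

    Q = y - x*(1 - x)                 # witness polynomial for f(x) = x*(1-x)
    print(simplify(Q.subs(y, x*(1 - x))))     # 0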

Thus, the questions are:

  1. For every $f(\lambda)$ that maps $(0, 1)$ to $(0, 1)$, is continuous, and is algebraic over the rational numbers, is there a pushdown automaton that can simulate that function? If so, how can it be constructed?
  2. Are there general constructions for pushdown automata that can simulate a large class of algebraic functions, akin to those found in this section of a page of mine or in section 3.2 of Mossel and Peres 2005?

To be clear, this question is not quite the same as the question of which algebraic functions can be simulated by a context-free grammar (either in general or restricted to those of a certain ambiguity and/or alphabet size), and is not quite the same as the question of which probability generating functions can be simulated by context-free grammars or pushdown automata. (See also Icard 2020.)

REFERENCES:

  • Keane, M. S., and O'Brien, G. L., "A Bernoulli factory", ACM Transactions on Modeling and Computer Simulation 4(2), 1994.
  • Mossel, Elchanan, and Yuval Peres, "New coins from old: computing with unknown bias", Combinatorica 25(6), pp. 707-724, 2005.
  • Icard, Thomas F., "Calibrating generative models: The probabilistic Chomsky–Schützenberger hierarchy", Journal of Mathematical Psychology 95 (2020): 102308.

automata – How to describe the language of an automaton in plain English?

How do I describe the following automaton in plain English?

The only things I can think of when explaining it in plain English are the states, the alphabet, the start state, and the accepting state, but I think there is more to describing an automaton than that. How do I answer this?

The states are as follows: Q = {q1, q2, q3, q4} 
The alphabet is as follows: Σ = {0,1}
The start state is q1
The accepting state is q4

[Figure: the state diagram of the automaton.]
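One way I have been trying to get a handle on it: simulate the machine on all short strings and look at which ones it accepts, hoping a pattern jumps out that I can phrase in English. The transition table below is made up, since I don't know how to type the actual diagram here; with the real table this is what I would run.

    from itertools import product

    # Made-up transitions (the real ones are in the diagram above).
    delta = {
        ('q1', '0'): 'q1', ('q1', '1'): 'q2',
        ('q2', '0'): 'q3', ('q2', '1'): 'q2',
        ('q3', '0'): 'q1', ('q3', '1'): 'q4',
        ('q4', '0'): 'q4', ('q4', '1'): 'q4',
    }

    def accepts(word, start='q1', accepting={'q4'}):
        state = start
        for symbol in word:
            state = delta[(state, symbol)]
        return state in accepting

    # List every accepted string of length up to 5 and look for a pattern.
    for n in range(6):
        for w in map(''.join, product('01', repeat=n)):
            if accepts(w):
                print(w)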

finite automata – Question about an answer related to designing an ASM for a sequence detector

The question says:

Design a sequence detector that searches for a series of binary inputs to satisfy
the pattern 01(0*)1, where (0*) is any number of consecutive zeroes. The
output (Z) should become true every time the sequence is found.

The answer to this example in the document I am reading is this:

[Figure: the ASM chart given as the answer.]

My question is: after the state box ‘first’, the decision box checks X. If X is 0, the input so far does not fit the pattern 01(0*)1, so the machine should go back to state ‘start’. In this answer it goes back to state ‘first’ instead, and so a sequence that violates the pattern could eventually get accepted. Am I correct to think so? If not, why?
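To test this, I wrote a small simulation of how I read the chart; the state names and everything except the "back to ‘first’ on a second 0" transition are my own reconstruction, so I may have transcribed it wrongly.

    # My reconstruction of the chart: 'first' = saw a 0, 'second' = saw 01,
    # 'zeros' = inside the 0* part.  On a 0 in state 'first' it stays in
    # 'first', which is the transition I am asking about.
    def detections(bits):
        """Yield the positions where Z would go true."""
        state = 'start'
        for i, x in enumerate(bits):
            if state == 'start':
                state = 'first' if x == '0' else 'start'
            elif state == 'first':
                state = 'second' if x == '1' else 'first'
            elif state == 'second':
                if x == '1':                 # 01 followed directly by 1
                    yield i
                    state = 'start'
                else:
                    state = 'zeros'
            elif state == 'zeros':
                if x == '1':                 # ...0 1 closes the pattern
                    yield i
                    state = 'second'         # the last two inputs were 0, 1
                else:
                    state = 'zeros'

    print(list(detections('0010011011')))    # positions where Z goes true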

The question is tagged with finite automata because there is no ASM tag. The two are similar enough.

automata – Context Free Grammar to Chomsky Normal Form Help

I am trying to convert the following CFG to CNF:

S -> ABS | ε
A -> BSBa | a
B -> Ba | a

The final result looks like this:

S -> AS' | AB
S' -> BS
A -> S'B' | BB' | X
B' -> BX
X -> a
Y -> b
B -> BX | XY
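As a quick sanity check I wrote this little script; it only tests that every rule has a CNF shape (a single terminal or exactly two nonterminals) and says nothing about whether the grammar still generates the same language. I added spaces between the symbols so they can be split.

    # Shape check only: A -> a (one lowercase terminal) or A -> B C (two
    # nonterminals).  It does not check language equivalence.
    rules = {
        "S":  ["A S'", "A B"],
        "S'": ["B S"],
        "A":  ["S' B'", "B B'", "X"],
        "B'": ["B X"],
        "X":  ["a"],
        "Y":  ["b"],
        "B":  ["B X", "X Y"],
    }

    def cnf_shape(body):
        symbols = body.split()
        if len(symbols) == 1:
            return symbols[0].islower()                       # A -> a
        return len(symbols) == 2 and all(s[0].isupper() for s in symbols)

    for head, bodies in rules.items():
        for body in bodies:
            if not cnf_shape(body):
                print("not in CNF shape:", head, "->", body)

(On these rules it flags the unit production A -> X, so at least that one I know I still have to deal with.)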

I am fairly new to Chomsky normal form, so can anyone verify that I did this right?
Thank you in advance!

automata – Difference between a deterministic and a nondeterministic finite automaton

I am currently studying automata theory and have just worked through the definitions of deterministic and nondeterministic finite automata. The definitions are quite similar, and I don't understand the difference between them well. Could someone go through them again and highlight the difference, ideally with an example? Many thanks!
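For concreteness, here is the kind of example I am hoping someone can walk me through: the language of binary strings ending in 01, with my attempt at both kinds of transition function written as plain Python dictionaries. As I understand it, the key point is that the DFA's function returns exactly one state while the NFA's returns a set of states, but please correct me if I have this wrong.

    # DFA: the transition function gives exactly ONE next state.
    dfa = {
        ('q0', '0'): 'q1', ('q0', '1'): 'q0',
        ('q1', '0'): 'q1', ('q1', '1'): 'q2',
        ('q2', '0'): 'q1', ('q2', '1'): 'q0',
    }

    def dfa_accepts(word, start='q0', accepting={'q2'}):
        state = start
        for c in word:
            state = dfa[(state, c)]
        return state in accepting

    # NFA: the transition function gives a SET of next states (possibly
    # empty), so the machine may "guess" or get stuck.
    nfa = {
        ('p0', '0'): {'p0', 'p1'},   # guess that this 0 starts the final 01
        ('p0', '1'): {'p0'},
        ('p1', '1'): {'p2'},
    }

    def nfa_accepts(word, start='p0', accepting={'p2'}):
        states = {start}
        for c in word:
            states = set().union(*(nfa.get((s, c), set()) for s in states))
        return bool(states & accepting)

    for w in ['01', '1101', '0110', '']:
        print(w, dfa_accepts(w), nfa_accepts(w))   # the two machines agree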

reference request – Is a register machine built out of automata of some sort?

I am looking at register machines like the Random Access Machine. Wikipedia says:

Random-access machine (RAM) – a counter machine with indirect addressing and, usually, an augmented instruction set. Instructions are in the finite state machine in the manner of the Harvard architecture.

Does this mean the instructions are somehow encoded as a finite automaton? Or how exactly is a register machine implemented, theoretically and practically, at a high level? Where do automata fit into the picture?

I am wondering because I realize automata are pretty limited in what they can do. Once you need to "parse" something but have access to an entire database of information at each step of the parse, and the result of the parse is a complicated object graph, you are no longer dealing with automata (as far as I can tell). Automata are basically recognizers that return a yes-or-no answer, with very little access to "data" when making their transition decisions. But I like the idea of a "state machine" that transitions from state to state.

So I'm wondering what a so-called "state machine" with access to a large database to figure out how to make each transition might be called, and whether that is what a register machine could or would be.

My understanding is that a complex register machine is closer to something like x86, where you have instructions and data. But it seems that you might be able to encode those instructions as a state machine somehow. Is this true? What am I missing from this picture? Is a register machine compiled down to automata of some sort, or otherwise built out of automata, and if not, how does it differ from automata?
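To pin down what I mean, here is a toy interpreter I sketched for myself: the program sits in a finite table (which I think of as the finite-state control), and the registers are the unbounded data it acts on. The instruction set and encoding are of course made up. Is this roughly the right picture?

    def run(program, registers):
        """program: finite list of instructions; registers: name -> integer."""
        pc = 0                                   # program counter = 'state' of the control
        while True:
            op, *args = program[pc]
            if op == 'INC':                      # INC r      : r += 1
                registers[args[0]] += 1
                pc += 1
            elif op == 'DECJZ':                  # DECJZ r, t : if r == 0 jump to t, else r -= 1
                if registers[args[0]] == 0:
                    pc = args[1]
                else:
                    registers[args[0]] -= 1
                    pc += 1
            elif op == 'HALT':
                return registers

    # Example program: add register a into register b (destroying a).
    program = [
        ('DECJZ', 'a', 3),   # 0: if a == 0 goto 3, else a -= 1
        ('INC', 'b'),        # 1: b += 1
        ('DECJZ', 'z', 0),   # 2: z is always 0, so this is an unconditional jump to 0
        ('HALT',),           # 3:
    ]
    print(run(program, {'a': 3, 'b': 4, 'z': 0}))   # {'a': 0, 'b': 7, 'z': 0}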

finite automata – Drawing transition diagram from transition table

Try to use an online graph editor, like this one. In the settings, set it to have directed edges and custom labels, and type a triplet $(s_1,s_2,v)$ for an edge from $s_1$ to $s_2$ with $v$ written on the edge.


However, this won't allow you to create "accepting" states; when you draw the diagram yourself, add them by hand. If you prefer a slightly worse-looking editor, but one that can also mark accepting states, consider this automata drawer.
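Alternatively, if a text-based route is acceptable, something like the Python graphviz package can draw accepting states as double circles. This is just a sketch (it needs the Graphviz binaries installed), and the automaton below is a placeholder.

    from graphviz import Digraph

    g = Digraph('fsm')
    g.attr(rankdir='LR')                     # lay the states out left to right

    g.node('q1', shape='circle')
    g.node('q2', shape='doublecircle')       # accepting state as a double circle

    # One edge per entry of the transition table, labelled with the symbol.
    g.edge('q1', 'q1', label='0')
    g.edge('q1', 'q2', label='1')
    g.edge('q2', 'q2', label='0')
    g.edge('q2', 'q1', label='1')

    g.render('fsm', format='png', cleanup=True)   # writes fsm.png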