I’m reading the book *Randomized Algorithms* by Motwani and Raghavan, specifically the part about Yao’s minimax technique:

Consider a problem where the number of distinct inputs of a fixed size is finite, as is the number of distinct (deterministic, terminating, and always correct) algorithms for solving that problem.

Let $\Pi$ be a problem with a finite set $\mathcal{I}$ of input instances (of a fixed size), and a finite set $\mathcal{A}$ of deterministic algorithms. Then:

**Proposition 2.5 (Yao’s Minimax Principle):** For all distributions $\boldsymbol{p}$ over $\mathcal{I}$ and $\boldsymbol{q}$ over $\mathcal{A}$,

$$
\min_{A \in \mathcal{A}} \mathbf{E}\left(C\left(I_{p}, A\right)\right) \leq \max_{I \in \mathcal{I}} \mathbf{E}\left(C\left(I, A_{q}\right)\right)
$$

In other words, the expected running time of the optimal deterministic algorithm for an arbitrarily chosen input distribution $\boldsymbol{p}$ is a lower bound on the expected running time of the optimal (Las Vegas) randomized algorithm.
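To see concretely what the inequality says, here is a small numerical sketch (the cost matrix and distributions are made up for illustration; the inequality holds for any choice of them, since it follows from swapping the order of averaging over inputs and algorithms):

```python
import random

random.seed(0)

# Hypothetical cost matrix C[i][a]: running time of deterministic
# algorithm a on input instance i. Both index sets are finite.
n_inputs, n_algs = 4, 3
C = [[random.randint(1, 10) for _ in range(n_algs)] for _ in range(n_inputs)]

def rand_dist(n):
    """A random probability distribution over n items."""
    w = [random.random() for _ in range(n)]
    s = sum(w)
    return [x / s for x in w]

p = rand_dist(n_inputs)  # distribution p over inputs I
q = rand_dist(n_algs)    # distribution q over algorithms A

# LHS: the best deterministic algorithm against the random input I_p.
lhs = min(sum(p[i] * C[i][a] for i in range(n_inputs)) for a in range(n_algs))

# RHS: the worst input against the randomized algorithm A_q.
rhs = max(sum(q[a] * C[i][a] for a in range(n_algs)) for i in range(n_inputs))

assert lhs <= rhs  # Yao's inequality holds for every C, p, q
print(lhs, rhs)
```

Finiteness of $\mathcal{I}$ and $\mathcal{A}$ is what lets the cost be tabulated as a finite matrix like `C` above, so that both sides are well-defined min/max over finite sets.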

It’s reasonable that the number of distinct inputs of a fixed size is finite; for example, the number of graphs on $100$ vertices is finite. But how come the number of distinct deterministic algorithms is also finite? That is to say, why is the number of deterministic algorithms that find the min-cut of a graph with $100$ vertices finite? I’m confused here:

- What exactly does "finite" mean here?
- Why do these two sets have to be finite?