We say that a function $f(n)$ is $O(n^2)$ if there exist constants $C,N>0$ such that $f(n) \leq Cn^2$ for all $n \geq N$. We usually denote "$f(n)$ is $O(n^2)$" by "$f(n) = O(n^2)$".
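As a quick sanity check of this definition, here is a minimal Python sketch (the function name `witnesses_big_o_n2` is ours, purely illustrative) that tests the inequality $f(n) \leq Cn^2$ for all $n$ from $N$ up to a finite bound. Note this is only a spot-check: a real proof must establish the inequality for *all* $n \geq N$.

```python
def witnesses_big_o_n2(f, C, N, upto=10_000):
    """Check f(n) <= C * n^2 for N <= n < upto (a finite spot-check,
    not a proof -- the definition quantifies over all n >= N)."""
    return all(f(n) <= C * n * n for n in range(N, upto))

# f(n) = 3n^2 + 10n is O(n^2): 3n^2 + 10n <= 4n^2 whenever n >= 10.
print(witnesses_big_o_n2(lambda n: 3 * n * n + 10 * n, C=4, N=10))  # True

# n^3 is not O(n^2): no C works for all n, so the check fails quickly.
print(witnesses_big_o_n2(lambda n: n ** 3, C=4, N=10))  # False
```

For $f(n) = 3n^2 + 10n$, the witnesses $C=4$, $N=10$ work because $10n \leq n^2$ once $n \geq 10$.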

An algorithm has running time $O(n^2)$ if its *worst-case* running time is $O(n^2)$. That is, if we denote by $T(n)$ the maximum running time of the algorithm on inputs of length $n$, then the algorithm has running time $O(n^2)$ if $T(n) = O(n^2)$.
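To make "maximum running time over inputs of length $n$" concrete, here is a small Python sketch (all names are ours) that measures $T(n)$ for insertion sort by counting comparisons over every permutation of length $n$. For this algorithm the worst case is the reversed input, giving $T(n) = n(n-1)/2 \leq n^2$.

```python
from itertools import permutations

def insertion_sort_comparisons(a):
    """Run insertion sort on a copy of `a`, returning the number
    of element comparisons performed (our cost measure)."""
    a = list(a)
    count = 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            count += 1  # one comparison of a[j-1] with a[j]
            if a[j - 1] > a[j]:
                a[j - 1], a[j] = a[j], a[j - 1]
                j -= 1
            else:
                break
    return count

def T(n):
    """Worst-case cost over all inputs of length n: max_{|x|=n} T(x).
    Feasible only for small n, since there are n! permutations."""
    return max(insertion_sort_comparisons(p) for p in permutations(range(n)))

print(T(4))  # 6, i.e. 4*3/2, achieved by the reversed input
print(all(T(n) <= n * n for n in range(1, 7)))  # True
```

Exhaustively maximizing over inputs is only feasible for tiny $n$; in practice one argues about the worst case analytically, exactly as the definitions above do.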

Equivalently (and this is the point of the CLRS formulation), an algorithm has running time $O(n^2)$ if there exists some function $f(n) = O(n^2)$ such that the running time of the algorithm on an input of length $n$ is always at most $f(n)$.

If we denote the running time of the algorithm on the input $x$ by $T(x)$, and the length of $x$ by $|x|$, then we can see the difference between these two equivalent definitions by writing them out formally.

The first definition is

$$
\max_{|x| = n} T(x) = O(n^2),
$$

where the left-hand side defines a function of $n$, which we denoted above by $T(n)$.

The second definition states that for some $f(n) = O(n^2)$,

$$
T(x) \leq f(|x|).
$$

If the second definition holds, then in particular

$$
\max_{|x|=n} T(x) \leq f(n),
$$

and so

$$
\max_{|x|=n} T(x) = O(n^2),
$$

which is the first definition.

If the first definition holds, then we can take $f(n) = \max_{|x|=n} T(x)$ to obtain a function satisfying the second definition:

$$
T(x) \leq \max_{|y|=|x|} T(y) = f(|x|),
$$

where the first definition guarantees that $f(n) = O(n^2)$.