Approximation theory and the optimal polynomial

I’m just getting into the field of approximation theory and don’t understand something.
It’s from the Wikipedia article on approximation theory:

This paragraph:

> For example, the graphs shown to the right show the error in approximating log(x) and exp(x) for N = 4. The red curves, for the optimal polynomial, are level, that is, they oscillate between +ε and −ε exactly. Note that, in each case, the number of extrema is N+2, that is, 6. Two of the extrema are at the end points of the interval, at the left and right edges of the graphs.

Those graphs are:


Error between optimal polynomial and log(x) (red), and Chebyshev approximation and log(x) (blue) over the interval (2, 4). Vertical divisions are 10⁻⁵. Maximum error for the optimal polynomial is 6.07 × 10⁻⁵.
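To follow along, I tried to reproduce the blue curve myself with a quick sketch (my own code, assuming NumPy’s `Chebyshev.interpolate`; the interpolant at Chebyshev nodes is only close to, not necessarily identical with, whatever series the plot uses):

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

# Degree-4 Chebyshev interpolant of log(x) on [2, 4]
# (interpolation at the 5 Chebyshev nodes, close to the truncated
# Chebyshev series shown as the blue curve).
p = Chebyshev.interpolate(np.log, 4, domain=[2, 4])

x = np.linspace(2, 4, 2001)
err = p(x) - np.log(x)

print("max |error|:", np.abs(err).max())

# The error vanishes at the 5 interpolation nodes, so it changes
# sign 5 times and has N+2 = 6 local extrema, endpoints included.
flips = np.count_nonzero(np.sign(err[:-1]) * np.sign(err[1:]) < 0)
print("sign changes:", flips)
```

The maximum error this reports is of the same order as the caption’s 6.07 × 10⁻⁵ (slightly larger, since the Chebyshev approximation is not optimal), and the 5 sign changes give the 6 extrema the article describes.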

As far as I understand it, the red curve is the error of the optimal polynomial, and the blue curve is the error of an approximation obtained with the Chebyshev approximation. But why is the red one optimal? These are error functions, but how did they arrive at that error function? It oscillates N+2 times in the interval, which means it attains its worst-case error N+2 times.
This article is confusing me a little; I hope someone can help me understand it better.