mathematical programming – What is the difference between a fraction and a float?

Computers usually deal with floating-point numbers rather than with fractions. The main difference is that floating-point numbers have limited precision, but arithmetic with them is much faster (and they are the only type of non-integer number supported natively in hardware).
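To illustrate the difference, here is a small sketch using Python's standard-library `fractions.Fraction` type, which represents rational numbers exactly, compared with the same computation done in floats:

```python
from fractions import Fraction

# Fractions are exact: 1/10 + 2/10 is exactly 3/10.
exact = Fraction(1, 10) + Fraction(2, 10)
print(exact)            # 3/10
print(exact == Fraction(3, 10))  # True

# Floats are only approximate: 0.1 and 0.2 have no exact
# binary representation, so the sum is slightly off.
approx = 0.1 + 0.2
print(approx)           # 0.30000000000000004
print(approx == 0.3)    # False
```

The trade-off is speed: `Fraction` arithmetic runs in software and its numerators and denominators can grow without bound, while float arithmetic runs in fixed-size hardware registers.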

Floating-point numbers are stored in “scientific notation” with a fixed accuracy, which depends on the datatype. Roughly speaking, they are stored in the form $\alpha \cdot 2^\beta$, where $1 \leq \alpha < 2$, $\beta$ is an integer, and both are stored in a fixed number of bits. This limits the accuracy of $\alpha$ and the range of $\beta$: if $\alpha$ is stored using $a$ bits (as $1.x_1\ldots x_a$) then it always expresses a fraction whose denominator is $2^a$, and if $\beta$ is stored using $b$ bits then it is always in the range $-2^{b-1},\ldots,2^{b-1}-1$.
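You can see this representation directly in Python: converting a float to a `Fraction` recovers the exact value stored, whose denominator is always a power of 2 (a 64-bit double stores $\alpha$ with 52 bits after the leading 1, but rounding can leave a smaller power):

```python
from fractions import Fraction
import math

# The float literal 0.1 is rounded to the nearest representable value,
# which is a dyadic fraction (denominator a power of two).
f = Fraction(0.1)
print(f)                          # 3602879701896397/36028797018963968
print(math.log2(f.denominator))  # 55.0, i.e. the denominator is 2**55
```

This also explains why `0.1` prints as `0.1` but is not exactly one tenth: the printed form is just the shortest decimal that rounds back to the same stored dyadic fraction.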

Due to the limited precision of floating-point numbers, arithmetic on them is only approximate, leading to numerical inaccuracies. When developing algorithms, you have to keep that in mind. There is in fact an entire area of computer science, numerical analysis, devoted to such issues.
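As a concrete sketch of how these rounding errors accumulate, and of one standard-library mitigation, consider summing 0.1 ten times (`math.fsum` here is Python's correctly rounded summation; numerical analysis supplies such algorithms):

```python
import math

# Each addition rounds the intermediate result, and the errors accumulate:
naive = sum([0.1] * 10)
print(naive == 1.0)   # False
print(naive)          # 0.9999999999999999

# math.fsum tracks the lost low-order bits and returns the
# correctly rounded result of the exact sum:
print(math.fsum([0.1] * 10) == 1.0)   # True
```

This is why comparing floats for exact equality is usually a bug; algorithms instead compare against a tolerance or use error-compensating techniques like the one behind `fsum`.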