On counting the areas covered by holes in a function in integration

As far as I know, holes in a function at the endpoints of an interval aren’t usually given any importance when integrating over that interval. For example, when calculating the area under the fractional part function from 0 to 1:
[Graph: $y = \{x\}$ from $x = 0$ to $x = 1$, with a hole at the right endpoint.]
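For concreteness: on $[0, 1)$ the fractional part is just $x$, so

$$\int_0^1 \{x\}\,dx = \int_0^1 x\,dx = \frac{1}{2},$$

and the single hole at $x = 1$ plays no role in the value.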

You don’t really think of a hole as being an infinitesimally small width that contributes to the calculated area. And I’m fine with that; it makes sense.

But there’s this question that defines f(x) as 0 wherever x can be expressed as $\frac{n}{n+1}$ for some natural number n, and as 1 everywhere else, and asks for the integral of f(x) from 0 to 2. So you’ve got the line y = 1 with holes punched in it at $\frac{1}{2}, \frac{2}{3}, \frac{3}{4}, \dots$, growing more numerous and crowding closer together as you approach 1.
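Written out, the function in the question is

$$f(x) = \begin{cases} 0, & x = \dfrac{n}{n+1} \text{ for some } n \in \mathbb{N}, \\ 1, & \text{otherwise}, \end{cases} \qquad 0 \le x \le 2.$$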

Now the solution to the question just integrates the function from 0 to $\frac{1}{2}$, from $\frac{1}{2}$ to $\frac{2}{3}$, from $\frac{2}{3}$ to $\frac{3}{4}$, and so on up to 1, and then does a normal integral from 1 to 2, essentially just integrating y = 1 from 0 to 2.
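Spelled out, the lengths of those pieces telescope:

$$\sum_{n=1}^{\infty} \int_{\frac{n-1}{n}}^{\frac{n}{n+1}} 1\,dx \;=\; \sum_{n=1}^{\infty} \frac{1}{n(n+1)} \;=\; \lim_{N \to \infty} \frac{N}{N+1} \;=\; 1,$$

and adding $\int_1^2 1\,dx = 1$ gives a total of 2.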

This seems odd. Isn’t ‘infinitesimally small quantities, summed in infinite numbers, forming actual numbers’ the general idea of integration? Since there are an infinite number of small widths here, shouldn’t they be considered as constituting some area and thereby affecting the calculation?
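As a sanity check (my own quick sketch, not part of the given solution), here’s a midpoint Riemann sum of $f$ over $[0, 2]$ in Python, done with exact rational arithmetic so that sample points which land exactly on a hole are actually detected:

```python
from fractions import Fraction

def f(x: Fraction) -> int:
    """f(x) = 0 when x = n/(n+1) for some natural number n, else 1."""
    if 0 < x < 1:
        n = x / (1 - x)          # invert x = n/(n+1)
        if n.denominator == 1:   # n is a positive integer, so x is a hole
            return 0
    return 1

def riemann_sum(num_intervals: int) -> Fraction:
    """Midpoint Riemann sum of f over [0, 2], in exact rationals
    so that midpoints hitting a hole are counted as 0."""
    width = Fraction(2, num_intervals)
    return sum(f(Fraction(2 * k + 1, num_intervals)) * width
               for k in range(num_intervals))

for n in (10, 100, 1000, 10000):
    print(n, float(riemann_sum(n)))
```

In my runs these print roughly 1.6, 1.94, 1.992, 1.999: a few midpoints do hit holes (e.g. $\frac{1}{2}$ and $\frac{9}{10}$ when there are 10 intervals), but their share of the total shrinks to nothing as the partition refines, so the sums still approach 2.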