Runtime analysis – pseudo-polynomial algorithms

Reading Wikipedia, I found that they give this example:

Consider the problem of testing whether a number n is prime, by naively checking whether no number in {2, 3, ..., sqrt(n)} divides n evenly. This approach can take up to sqrt(n) − 1 divisions, which is sub-linear in the value of n but exponential in the length of n (which is about log(n)). For example, a number n slightly less than 10,000,000,000 would require up to approximately 100,000 divisions, even though the length of n is only 11 digits.
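To make the quoted example concrete, here is a minimal sketch of the naive trial-division test it describes (the function name is_prime and the use of math.isqrt are my own choices, not from the quote):

    import math

    def is_prime(n):
        # Naive trial division: try every candidate divisor from 2 up to sqrt(n).
        # The loop runs at most sqrt(n) - 1 times, which is small relative to the
        # value n but roughly 2^(b/2) for a b-bit input, i.e. exponential in the
        # length of the input.
        if n < 2:
            return False
        for d in range(2, math.isqrt(n) + 1):
            if n % d == 0:
                return False
        return True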

I don't understand. First of all, why don't they teach it this way in regular computer science courses? Most of our time-complexity classifications are regularly done with respect to N, not with respect to its digit length.

Secondly, what I don't understand is this: if we were to evaluate each problem with respect to its number of digits, then almost every polynomial problem I know from my CS course wouldn't be polynomial. Take even the simplest of algorithms, counting from 1 to N. Suppose I give you a number and you need to count from 1 to n. Usually in my CS course the runtime of such a problem would be N (therefore polynomial): just run a loop from 1 to n. But according to Wikipedia, this shouldn't be polynomial but rather exponential, because the number of digits is log n while the number of iterations is n. For example, for an input of 3 binary digits I have roughly 2^3 = 8 iterations; for an input of 6 digits, roughly 2^6 = 64 iterations. This growth is certainly exponential, just for counting from 1 to n. How does that make sense? And why is this not the way we (or at least I) study computer science?
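Here is a small sketch of the counting loop I mean, printing the digit length of the input next to the number of iterations (the helper count_to and the chosen inputs are just for illustration):

    def count_to(n):
        # Runs the loop body exactly n times: linear in the *value* of n,
        # but exponential in the number of binary digits of n.
        steps = 0
        for _ in range(1, n + 1):
            steps += 1
        return steps

    for n in (7, 63, 1023):  # 3-bit, 6-bit, 10-bit inputs
        print(n.bit_length(), "bits ->", count_to(n), "iterations")

Running this prints 3 bits -> 7 iterations, 6 bits -> 63 iterations, 10 bits -> 1023 iterations, which is the doubling-of-digits-squares-the-work behaviour I am asking about.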