Computer Architecture – How does casting really work for primitive data types?

How does casting of primitive data types work?

Restricting the question to "primitive data types" does not narrow it down enough:

Converting from a 16-bit integer to a 32-bit integer works differently than converting between integer types of the same size. Switching between integer and floating-point types is even more complex.
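A small sketch of the difference (the exact result of the narrowing conversion below is implementation-defined; the values in the comments assume the usual two's-complement machine):

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        int16_t small    = -5;
        int32_t widened  = small;          /* widening: value preserved via sign extension       */
        int32_t big      = 70000;
        int16_t narrowed = (int16_t)big;   /* narrowing: the high-order bits are discarded       */
        double  d        = 3.9;
        int     from_dbl = (int)d;         /* floating point -> integer: fractional part dropped */

        printf("%d %d %d\n", (int)widened, (int)narrowed, from_dbl);  /* typically: -5 4464 3    */
        return 0;
    }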

This code works fine. What about this code?

Both code examples behave differently on different CPUs, under different operating systems, and possibly with different compilers.

This is because on some operating systems (for example, 32-bit Windows) the long and unsigned int data types have the same number of bits, while on others (for example, 64-bit Linux) long has twice as many bits as unsigned int.

And if you use the ~ operator (~n) on a signed data type, the result can differ depending on the compiler used.

This code actually outputs negative numbers.
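The snippets themselves are not included here, so the following is only a guess at the kind of code being described, with the variable name n taken from the ~n mentioned above:

    #include <stdio.h>

    int main(void) {
        int n = 5;
        unsigned long flipped = ~n;   /* ~n is evaluated as a signed int (-6) and only
                                         then converted to unsigned long                */

        printf("%lu\n", flipped);     /* matching specifier: a huge positive number     */
        printf("%d\n", flipped);      /* mismatched specifier: undefined behavior; on
                                         many systems this prints a negative number     */
        return 0;
    }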

So is there any (architectural) explanation for this phenomenon?

There are two effects:

The first effect is:

If you pass a value of the wrong data type to a function, C automatically converts the value.

And when you convert a value that is outside the range of the target data type, the result usually does not match your intuitive expectation.

When converting between two integer data types, usually only the rightmost (low-order) bits of the value are copied to the new value.

Let us take (signed char)3478 as an example:

The number 3478 is written as 110110010110 in binary. signed char is an 8-bit data type. The rightmost 8 bits of 1101 10010110 are 10010110, and the value -106 is stored as 10010110 in a signed char variable.

That's why (signed char)3478 yields -106.
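A quick way to verify this (the result of an out-of-range conversion to a signed type is implementation-defined, but on common two's-complement machines it comes out as described):

    #include <stdio.h>

    int main(void) {
        int value = 3478;                            /* 1101 1001 0110 in binary     */
        signed char truncated = (signed char)value;  /* only the low 8 bits survive:
                                                        1001 0110                     */

        printf("%d\n", truncated);                   /* typically prints -106         */
        return 0;
    }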

The value of n is stored in an unsigned long variable, which means it cannot be negative!

However, you say that you print the value of n. This means that you pass the value to a function that prints it on the screen.

Apparently, this function expects a signed data type. C interprets the stored value of n as that signed data type, and, as in the example with the number 3478, the result of this conversion can be a negative number.
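For illustration, assuming a 32-bit int and a value for n that is simply made up for this example:

    #include <stdio.h>

    int main(void) {
        unsigned long n = 4294967190UL;  /* too large for a 32-bit signed int               */
        int as_int = (int)n;             /* out-of-range conversion: implementation-defined */

        printf("%d\n", as_int);          /* typically prints -106 with a 32-bit int         */
        return 0;
    }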

When you use printf, a second effect comes into play:

If a function (such as printf) takes a variable number of parameters, the way in which the parameters are passed to the function depends on their data types. (The details may also vary from operating system to operating system.)

You can now pass the wrong data type to the function, for example by using "%d" (signed int) in the printf format string but passing an unsigned long.
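The usual fix is to make the format specifier match the argument type, or to cast the argument to the type the specifier expects; a minimal sketch:

    #include <stdio.h>

    int main(void) {
        unsigned long n = 1234567890UL;

        printf("%lu\n", n);       /* correct: %lu matches unsigned long                */
        printf("%d\n", (int)n);   /* also fine: the cast supplies a genuine signed int */
        /* printf("%d\n", n);        mismatched: undefined behavior                    */
        return 0;
    }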

In this case, the compiler writes the value of n to a location (memory or a register) that is suitable for unsigned long values before the actual call of the function. (For such functions, the compiler cannot figure out which data type the function actually expects, so it has to assume that the function expects the type of data you pass to it.)

However, printf will look for the value in a location that is suitable for signed int values, and that location could be somewhere completely different. It does not contain any useful value, because the compiler wrote the value of n somewhere else!