Okay, so I’ve read questions such as this one explaining the difference between color gamut and bit depth, which was illuminating, but there are still a few things I couldn’t quite gather from it.
What I took away from it, presuming I understand, is that a display’s color gamut essentially defines the theoretical boundary of the colors it can reproduce, while the bit depth determines the number of shades / tints available within that gamut, as described there.
Would it be correct to say that, for an arbitrary display, the color gamut’s representational possibilities are subdivided into discrete steps according to the bit depth?
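To make my assumption concrete (and this may be exactly where I’m wrong), here’s a quick sketch of my mental model: the bit depth quantizes each channel of whatever gamut the display has into 2^bits evenly spaced levels. The function name `channel_levels` is just mine for illustration.

```python
# My (possibly wrong) mental model: bit depth divides each channel's
# range within the gamut into 2**bits discrete, evenly spaced levels.
def channel_levels(bits_per_channel: int) -> list[float]:
    """Normalized intensities (0.0 to 1.0) representable at this bit depth."""
    steps = 2 ** bits_per_channel
    return [i / (steps - 1) for i in range(steps)]

print(len(channel_levels(8)))  # 256 levels per channel at 8 bits
```

So under this reading, the gamut sets the endpoints and the bit depth sets how many stops exist between them. Is that fair?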
Taking a step back, more abstractly: some screens are paletted and some are not. Meaning, I could divide digital displays according to whether they are monochrome or paletted, where a monochrome screen could be grayscale or not (determined by a bit depth of 1 to 8, forming a palette of potential colors ranging from pure black / white up to full grayscale).
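Again to pin down what I mean (names here are mine, not from any source): a monochrome grayscale screen’s palette would just be 2^bits gray levels spanning black to white.

```python
# Sketch of my reading: a bit depth of 1..8 gives a grayscale palette
# of 2**bits levels, from pure black/white (1-bit) to 256 grays (8-bit).
def grayscale_palette(bits: int) -> list[int]:
    """8-bit gray values for a monochrome display of the given bit depth."""
    steps = 2 ** bits
    return [round(i * 255 / (steps - 1)) for i in range(steps)]

print(grayscale_palette(1))        # [0, 255] -- pure black and white
print(len(grayscale_palette(8)))   # 256 -- full grayscale
```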
Similarly, we have dichrome displays as a step up from this, using permuted pairs of the RGB channels.
Beyond this, in paletted screens using RGB color models, we have 3-bit, 6-bit, 8-bit, 9-bit, 12-bit, 15-bit, and 16-bit (and theoretically 32-bit) color depths.
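If I understand correctly, an N-bit overall color depth gives 2^N addressable colors, with the N bits split (not always evenly) across the R, G, and B channels. A trivial sketch of that count, just to check I have the arithmetic right:

```python
# Assumption I'm working from: total addressable colors = 2**bit_depth,
# regardless of how the bits are apportioned among R, G, and B.
def total_colors(bit_depth: int) -> int:
    """Total distinct colors addressable at a given overall bit depth."""
    return 2 ** bit_depth

for bits in (3, 6, 8, 9, 12, 15, 16):
    print(f"{bits}-bit color: {total_colors(bits)} colors")
```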
Now, this is where I personally get fuzzy interconnecting these display possibilities with color gamuts and more abstractly color models.
In short, what is the hierarchical structure between these, if any? Is it even meaningful to say, for example, “Adobe RGB on display X with bit depth Y,” or is that just fundamentally incorrect?
If that’s possible, great, but that only covers the RGB color model with its additive / subtractive representations. What about other color models and the same bit-depth considerations?