There's been a lot of chatter in the AV media about video color bit depth, and how 10-bit is going to give us so much more than 8-bit. But what does it actually mean?
Computers talk in binary, which has only two symbols: 1 and 0. That doesn't give much choice per digit. We humans work in base ten (0-9), stringing digits together in the decimal system to represent any number we like. We can represent roughly one billion in only 10 digits - e.g. 1,073,741,824.
A computer can't work in decimal, and it isn't practical for it to count in a linear (unary) manner either - one billion would literally take one billion bits! Instead, computers use a positional map of values, where each location is worth double the one to its right. Here's what an 8-bit map looks like, containing 8 locations increasing from right to left;
The number 147 (a random pick) can be achieved as follows: 1 x 128 + 1 x 16 + 1 x 2 + 1 x 1 = 147
128 64 32 16 8 4 2 1
1 0 0 1 0 0 1 1
So the 8-bit binary representation of 147 is 10010011.
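As a quick sketch, the decomposition above can be checked in a few lines of Python (the variable names are just for illustration):

```python
# Decompose 147 into the 8-bit map, most significant location first.
value = 147
bits = [(value >> i) & 1 for i in range(7, -1, -1)]
print(bits)  # [1, 0, 0, 1, 0, 0, 1, 1]

# Python's built-in formatting confirms the 8-bit binary string,
# and int() converts it back to decimal.
print(format(value, "08b"))   # '10010011'
print(int("10010011", 2))     # 147
```

The `>> i` shifts the value right by i places and `& 1` keeps only the last bit, which is exactly "is there a 1 at this location of the map?".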
If every location is allocated a 1 then the maximum number will be achieved, which for 8-bit is 255.
128 64 32 16 8 4 2 1
1 1 1 1 1 1 1 1 = 255
Binary always starts at zero, so the range is 0-255, giving us 256 values.
There's another way to arrive at the same figure. A 1-bit signal has the range 0-1, i.e. two values. Raise 2 to the power of the bit depth in question and you end up in the same place. For 8-bit, that's 2 with exponent 8, i.e. 2^8 = 256.
As an aside, those of you who delve into network settings will recognize these numbers too. IPv4 addresses are also built from 8-bit values - four octets, each separated by a dot. The maximum value of each octet is 255, which you'll also recognize from subnet masks such as 255.255.255.0.
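A tiny sketch makes the octet point concrete (the mask string is just an example value):

```python
# Split a dotted-quad subnet mask into its four 8-bit octets.
mask = "255.255.255.0"
octets = [int(part) for part in mask.split(".")]
print(octets)  # [255, 255, 255, 0]

# Every octet fits in 8 bits, i.e. the range 0-255.
print(all(0 <= octet <= 255 for octet in octets))  # True
```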
Now let's move on to 10-bit. You may be able to figure it out. That's right, the map simply adds two more locations to the left, following the same pattern;
512 256 128 64 32 16 8 4 2 1
Again if we allocate a 1 to every location, we get 512 + 256 + 128 + 64 + 32 + 16 + 8 + 4 + 2 + 1 = 1,023. With the range starting from zero, that's 0-1,023, or 1,024 values all up. Alternatively, 2^10 = 1,024.
So for 10-bit color, that's 1,024 shades per color channel. There are three channels, so 1,024^3 = 1,073,741,824. Yes, this is the same number I used as an example of one billion in the intro above. A computer doesn't get there in 10 decimal digits as we do, but it does get there across three 10-bit color channels - 30 bits in total.
In summary, the various color bit depths result in the following;
Bit depth   Shades per channel    Total colors (3 channels)
8-bit       2^8 = 256             256^3 = 16,777,216 (16.7 million)
10-bit      2^10 = 1,024          1,024^3 = 1,073,741,824 (1.07 billion)
12-bit      2^12 = 4,096          4,096^3 = 68,719,476,736 (68.7 billion)
So at 10-bit, that's over 1 BILLION possible colors, with 1,024 gradations from darkest dark to brightest white in each channel. Enter High Dynamic Range (HDR), and the scale from dark to light finally has some serious substance, harnessing the benefits of 'deep color'. It's a combination thing. That's why we never really saw deep color back in 2006, when HDMI first added it to the specification; we simply had no way of making use of it to any meaningful degree. Now we do.
So that's the technical backend, but if you want to learn more about how this is put to good use, CEDIA's two new courses 'UHD Video and Beyond: The Science of Better Pixels', and 'Delivering the 10K Video Payload' do deliver. Check them out.
CEDIA Director of Technical Curriculum