Well, I'll put it simpler: 10-bit to 8-bit is like 4K to Full HD. When recording in 4K you'll have more information (more pixels) to edit later, and the final result, even if you export in 1080p, will be better than shooting at 1080p in the first place.
Nope, this is the wrong explanation.
No, that explanation is only good for explaining 8-bit vs 8-bit+FRC.
2025: I finally got 10bit everything!
2026: You must have 12bit.
🤣
oyze we got 16
This comment wins
That is true, but also not. Today you actually need (as in "have to have, doesn't work otherwise") a true 10-bit monitor/TV to enjoy HDR content (some gear advertised as "HDR-ready" uses so-called 8-bit+FRC, which doesn't cut it at all; look for actual VESA DisplayHDR or Dolby Vision certification). Although it is possible to widen dynamic range/gamut further to the 12-bit level, at that point it becomes more of a local contrast/luminosity issue (hence the VESA standard of grading HDR displays by peak brightness: HDR400/600/800/1000), meaning that with bit depth alone going above 10 bits you won't notice a visual difference in everyday use (unless your daily job is industrial-grade image processing for cinema or professional photography). In fact, there are 12-, 14- and 16-bit-per-channel displays on the professional market right now, albeit with hefty five-digit prices in US dollars, used only in that kind of industrial-grade image processing. But if you were to get one yourself, you'd be surprised by how underwhelming it is compared to the price and the technology that went into it. Partially for the "no content" reason, but also because the difference in bit depth at these high values reaches the point of being indistinguishable to the human eye. At and above 10-bit, what matters more than raw bit depth is the auxiliary features that come with it.
Dolby Vision is 12 bit.
I never got why 10-bit was important, because all the examples talk about banding in the sky and I was like: so? I hardly ever shoot the sky!
But while doing a side-by-side between the Z6+Ninja V and the A7III, I realized how much better the "color separation" was between objects of similar colors/shades. Even more so in the skin tones/textures: 10-bit made the skin look so much more lifelike.
I have been on the 10-bit wagon for decades, because 16.7 million colors is like a grain of salt in the ocean of Mother Nature's colors. For me, 10-bit is the minimum we should be at, and we should be striving for 16 bits per channel, as that would get us so much closer to what our eyes see when we walk out our front doors.
You'd need to ask Mother Nature to give you 16-bit eyeballs before taking that stride any further.
@barmalini Stop being one of those "it is good enough" types, because it has already been written that the number of colors in Mother Nature is over 4 billion, which is sure as hell a far cry more than our 16.7 million colors.
The Best Stooge Actually, there are more than that, but we cannot see them. Sure, 4 billion colors in terms of visible light, but there are also radio waves, gamma rays, UV rays, etc. that we cannot see or hear. What if we could modify our eyes to see those colors :O
Sorry, I'm just rambling at this point
B&H is our go-to source for gear and equipment. Been using you guys for over 30 years. Appreciate your honest and informative videos.
256 shades of Gray. Yeah!
Yeahhhhhhhh!!!!!!! 😝😝😝
ohh now it makes sense😅
...Ironic that the background perfectly illustrates 8-bit falling apart, yet it doesn't even get a mention...
😂
But in 10-bit you get 1,024 shades of gray per channel, and 1,073,741,824 colors in total. Can you believe that
Nicely explained! As a one-man video production crew, I'll stick to 8-bit: the quality is acceptable for the average consumer, and it doesn't carry the extra storage and processing requirements of 10-bit. I don't see the budget justification for using 10-bit.
Yeah I was wondering, does recording in 10 bit use more memory than 8bit? I think it does
As an indie filmmaker, I agree.
I just went from an 8-bit monitor to a 10-bit one, and I'm seeing shades of purple and blue that I had never seen on the 8-bit one. I tested this by comparing the exact same scene, and it really does make a difference.
This was really informative and really easy to understand. Thank you Doug and B&H
I just bought a Lumix S5 camera from you. I didn't understand what 10-bit was, but every video hyping the camera talked about how it has 10-bit, making it clear that anyone who didn't know what that is ain't right for this camera. But nobody ever explained what it is, until this helpful video you made here.
I have the GH5 and normally shoot 8-bit. I shot a short movie over the weekend with some night scenes, and let me tell you, 10-bit made a HUGE difference in post. I was blown away by the flexibility of the grading.
Hi bro, I'm now using a super cheap Sony a6300 and I want to move to the GH5. What do you think, based on the color grading and the sensor size difference?
Found out why, shooting in 8-bit F-Log, I got color banding everywhere when I graded. This helped so much; now, shooting in 10-bit, I can do really great color grades!
Where can I see the difference in your work, bro? I also want to jump from Sony to the GH5.
Love every video that explains the advantage of one technology but doesn't necessarily recommend it to everyone.
Nicely done. Great speaking voice for RUclips productions and you're a great "explainer." Keep producing! Oh, and the content was great, too! (smiley thing)
10-bit color has over 1 billion colors, whereas 8-bit color has about 16.7 million colors.
That was a clear and peaceful explanation
One question has bothered me for a long time: the OLED iPhone has an 8-bit display, but it supports both HDR10 and Dolby Vision. Why?
A 10-bit display most likely would drive the price higher. With dithering, an output device can be technically HDR-capable even if it lacks a native HDR screen. Even though the display on any iPhone from the iPhone 7 onward uses a wider color gamut, letting it produce vibrant colors, only the OLED technology in the iPhone X/XS/Max is true HDR; these devices are guaranteed to deliver the required peak brightness and contrast while being able to reproduce 10 bits of color per RGB channel without resorting to dithering. Summing up, devices without native HDR screens (iPhone XR, current iPad Pros, etc.) process the HDR signal but use dithering to simulate the visual enhancements to dynamic range, contrast, and wide color gamut that are only made possible by HDR. >Mark
now i know. thanks for the explanation. it's simple, and easy to understand, even without turning speaker on. thanks for "CC" feature from youtube, i can watch this video in office.
I know Wikipedia says dithering is noise, but it really is not. Check the algorithms: they compensate for the error in one pixel by nudging the next pixel back towards the original colour, accumulating errors and correcting them as they go. It looks like noise, but there is no randomness involved. Doing it linearly is not the only option, hence the different dithering algorithms. I have also built a few new dithering algorithms, but nothing better than the existing ones.
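To make that concrete, here is a minimal 1-D error-diffusion sketch of the idea described above (a toy illustration only, not Floyd-Steinberg or any production algorithm):

```python
def error_diffusion_1d(values_10bit):
    """Quantize 10-bit values (0..1023) down to 8-bit (0..255), diffusing the
    rounding error of each sample into the next one instead of discarding it."""
    scale = 255.0 / 1023.0
    out, carried_error = [], 0.0
    for v in values_10bit:
        target = v * scale + carried_error            # ideal 8-bit value plus the error carried so far
        quantized = max(0, min(255, round(target)))   # nearest representable 8-bit level
        carried_error = target - quantized            # remember what was lost; push it into the next sample
        out.append(quantized)
    return out

# A smooth 10-bit ramp: straight rounding would produce long runs of a single 8-bit
# value (visible as bands), while error diffusion alternates neighbouring levels so
# the local average stays faithful to the original gradient. No randomness involved.
ramp_10bit = [i * 1023 / 2047 for i in range(2048)]
print(error_diffusion_1d(ramp_10bit)[:16])
```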
explications not clear, ended up believing in flat earth theory
you'd believe it, cus you're from there!
explications (sic) are always confusing, however, this was an excellent video IMO
Nice
flat earth is not believing. it's seeing.
ball earth is all about belief. you believe in other people's lies
Flat Earth Reality please leave this comment section
Your video helped me realize what 10-bit is, and why it makes a big difference!
I love nerding out to this
You forgot to mention that 8-bit means 8 bits of binary encoding per Red, Green, and Blue channel. An 8-bit integer can take 256 distinct values (1 bit = 2, 2 bits = 4, 3 bits = 8, 4 bits = 16, 5 bits = 32, 6 bits = 64, 7 bits = 128, 8 bits = 256), hence why each of the Red, Green, and Blue channels ranges from 0 to 255.
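A quick way to check the arithmetic (just the counting, nothing camera-specific; the bit depths listed are simply the common ones):

```python
# Levels per channel and total RGB colors for common per-channel bit depths.
for bits in (8, 10, 12):
    levels = 2 ** bits              # e.g. 256 values (0..255) for 8-bit
    total = levels ** 3             # three channels: R, G, B
    print(f"{bits}-bit: {levels:,} levels per channel, {total:,} total colors")
```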
Very nicely explained in a video just 5:30 long!
Thank you
Thanks for the video my dude. Really helpful explanation.
I would have liked banding explained a bit more - but then again I'm a bit slow at the best of times
Glad we're here to help!
Excellent presentation, and needed. I've long wondered about the significance of 8 bit vs 10 bit. Thank you.
Glad you found it helpful
Thank you, it's all starting to make better sense now.
Glad you enjoyed.
Probably the best explanation I've seen to date.
That's so much information in this video
Hey guys (I know there are many expert viewers here and B&H), some computer related technical question that I'm curious about. Let's say a monitor uses DP 1.2 Input (21.6 Gbps bandwidth) connected to a computer that has DP 1.4 Output (32.4 Gbps bandwidth) with a DP 1.4 Cable.
Basically
Output DP 1.4
Cable DP 1.4
Input DP 1.2 (that's what's written on the monitor)
The computer is showing that the screen is running 2560 x 1440 native resolution at 165 Hz, YCbCr 4:2:2, 10-bit color depth, which I think is more than the DP 1.2 bandwidth on the receiving side. By rights, Windows would not even show it if the link weren't capable of such specs. Unless the display is secretly DP 1.4, hahaha... My curiosity is killing me.
Help...
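For a rough sense of whether that mode fits, here's a back-of-the-envelope bandwidth check (it ignores blanking intervals and protocol overhead, so treat it as an estimate rather than a definitive answer):

```python
# Rough bandwidth check for 2560x1440 @ 165 Hz, YCbCr 4:2:2, 10-bit.
# 4:2:2 at 10 bits/channel averages 20 bits per pixel
# (10 for Y on every pixel, plus 10 for Cb/Cr shared across two pixels).
width, height, refresh_hz = 2560, 1440, 165
bits_per_pixel = 20

pixel_rate = width * height * refresh_hz            # pixels per second (active area only)
required_gbps = pixel_rate * bits_per_pixel / 1e9   # ignores blanking overhead

dp12_payload_gbps = 17.28   # DP 1.2 (HBR2): 21.6 Gbps raw, ~17.28 Gbps after 8b/10b coding

print(f"~{required_gbps:.1f} Gbps needed vs ~{dp12_payload_gbps} Gbps DP 1.2 payload")
# ~12.2 Gbps vs 17.28 Gbps, so the mode can plausibly fit even over a DP 1.2 input,
# which may be why Windows offers it; real timings add blanking overhead on top.
```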
Please e-mail us Ray: askbh@bandh.com >Mark
yo ray!!!
i also have the same thing showing with my monitor, should I keep it on 10BPC?
Jesus fucking christ you again. I did not expect you would be watching this kind of video
It's a shame that RUclips's compression basically reduced all the real footage that was supposed to show banding, or how it can be fixed with dithering, to a blocky mess.
Still a great video on the topic
I can definitely see the difference between 10-bit and 8-bit. Just got the OnePlus 8 Pro, which has 10-bit and near-perfect colour accuracy, and man, when watching HDR videos it's something else. Along with having possibly the brightest display, when that HDR kicks in it looks so much better than HDR on my old OnePlus 7 Pro.
Funny, I just watched a video the other day about 'dithering' on old gaming consoles, which was the first time I had heard of the term.
Watching this on a 10-bit EIZO (bought at the thrift store for 20 euros). My EIZO is so professional that it has a counter for hours of usage in the menu. Hold tight... nearly 30,000 hours of operation. 30K. The number was not a mistype.. ❤️✌️🥳
My computer does not render fast with the video card when I record in 4:2:2 10-bit; it only recognizes 4:2:0.
I don't understand. Filming in log is just a logarithmic transform of the data coming from the sensor, so all it does is expand the darker areas and compress the brighter ones. Supposedly this is more in line with how our eyes work: we are better at detecting detail in dark areas than in bright areas, so it's fine to compress the bright areas because we won't see much difference there anyway. The result is that the darker areas get spread out over more quantization levels, meaning we capture more detail there and less detail in the bright areas. Filming in log does not "increase the dynamic range of the sensor" in the sense that it can suddenly detect weaker light, or won't clip for brighter light. Hence, it seems to me that when you film in 8-bit it is MORE important to film in log: without log, the dark areas are spread over fewer quantization levels, which is not really a problem in 10-bit because we have so many levels, but in 8-bit it's a big problem, so we would really want to expand the darker areas when we film in 8-bit.
I am so confused about this. It seems most videos just regurgitate things they heard in other videos, but so far I haven't found a single video that manages to explain it.
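For what it's worth, the quantization part of that argument can be checked with a toy model. The sketch below uses a generic log2 curve as a stand-in, not S-Log, V-Log, F-Log or any vendor's actual formula, and an assumed 8-stop scene, so it is only an illustration:

```python
import numpy as np

# Toy comparison: how many 8-bit code values land in the darkest vs. brightest
# stop of an 8-stop scene, for a straight linear encoding vs. a generic log2 curve.
STOPS = 8
scene = np.linspace(1 / 2**STOPS, 1.0, 100_000)   # linear scene luminance samples

def encode_linear(x):
    return np.round(x * 255).astype(int)

def encode_log(x):
    # map [2^-STOPS, 1] onto [0, 255] logarithmically: equal codes per stop
    return np.round((np.log2(x) + STOPS) / STOPS * 255).astype(int)

for name, enc in (("linear", encode_linear), ("log", encode_log)):
    darkest = np.unique(enc(scene[scene <= 2 / 2**STOPS])).size
    brightest = np.unique(enc(scene[scene >= 0.5])).size
    print(f"{name:6s}: ~{darkest} codes in the darkest stop, ~{brightest} in the brightest")
```

With linear 8-bit the darkest stop gets only a couple of code values while the brightest gets over a hundred; the log curve spreads codes roughly evenly across stops, which is essentially the trade-off being described above.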
I can't really see any banding in the sky shot at 3:42, even before the noise was added. Not sure if it's because my high-end laptop's monitor is very good, or because RUclips's compression algorithm actually hid the banding, or what?
Wow, you know your subject. I'll have to listen to it many times to catch all the info in the vid.
That's a great layman's explanation and set of graphics...
Large macroblocking vs Smaller macroblocking for luminance. Only thing that matters. Same thing for 4:2:0 vs 4:2:2 and 4:4:4 only for colors.
Photoshop has 8-bit, 16-bit, or 32-bit modes for still-image editing with JPG, PNG, TIFF, and PSD. It has 8-bit, but no 10-bit or 12-bit, for images headed to video formats such as H.264, H.265, or others. Still images have to be converted into video formats for video files, and they are not exactly the same after they're imported into videos. Why? In the old days it was the same standard for both: 8-bit sRGB for still-image JPEGs and 8-bit for MPEG video on DVD and HD Blu-ray were the same color space and format.
You should have mentioned the reason people think 8-bit means pixel art: the NES and similar systems ran on 8-bit processors, the SNES had 16 bits, the N64 had 64, etc. It used to be a way to tell how good the graphics on a system would be, but it doesn't matter for consumers anymore.
Awesome video. Thank you very much. I finally understand.
Excellent explanation
Great explanation. Thank you
The higher the dynamic range, the more vibrant the image colours will be.
OK, got it. Now, can you tell me how much larger a one-minute file shot in 10-bit will be versus 8-bit? Thank you
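For a rough sense of scale, here's an uncompressed back-of-the-envelope estimate. Real file sizes depend entirely on the codec and bit rate the camera uses (and many cameras pair their 10-bit modes with higher bit rates), so treat the 1.25x ratio below as the theoretical floor, not a promise:

```python
# Uncompressed 4K UHD, 25 fps, 4:2:2 sampling, one minute of footage.
# 4:2:2 averages 2 samples per pixel (Y on every pixel, Cb/Cr shared by pixel pairs).
width, height, fps, seconds, samples_per_pixel = 3840, 2160, 25, 60, 2

def gb_per_minute(bits_per_sample):
    bits = width * height * fps * seconds * samples_per_pixel * bits_per_sample
    return bits / 8 / 1e9   # gigabytes

print(f"8-bit:  ~{gb_per_minute(8):.0f} GB/min")
print(f"10-bit: ~{gb_per_minute(10):.0f} GB/min  (raw ratio = 10/8 = 1.25x)")
```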
You can only have 256 total colors in an 8-bit-per-pixel (indexed) image, while an 8-bit-per-channel RGB image can have 256 x 256 x 256 colors (24 bits per pixel), or 1024 x 1024 x 1024 at 10 bits per channel (30 bits per pixel).
It gets a bit confusing just calling it "8-bit" without specifying which kind of 8-bit you are talking about, especially in the title of the video and the introduction...
You're not wrong, but this technicality is addressed at 0:54. In most video literature, even though it is 8-bits *per channel*, the format itself is identified as 8-bit. In computer (and photographic) nomenclature, it is indeed referred to as 24-bit. If you browse any video camera that has these options, they are referred to as 8-bit and 10-bit respectively.
Yeah it's not a big thing but when explaining these things it's good to be more explicit about the different ways to describe a pixel or a color.. That's at least my experience when explaining how computer images works. I know the video primarily is about the colors but with all that explaining just a little bit more would maybe not have hurt..
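A tiny sketch of the two meanings being distinguished in this thread (the palette values below are made up purely for illustration):

```python
# Meaning 1: an "8-bit image" in the indexed-color sense - each pixel stores one
# 8-bit index into a palette, so the whole image is limited to at most 256 colors.
palette = [(0, 0, 0), (255, 0, 0), (0, 255, 0), (0, 0, 255)]   # real palettes hold up to 256 entries
indexed_pixel = 2
print(palette[indexed_pixel])          # (0, 255, 0) - only palette colors are possible

# Meaning 2 (what the video means): "8-bit video" stores 8 bits for EACH of R, G, B,
# i.e. 24 bits per pixel, so every pixel holds three independent 0-255 values.
true_color_pixel = (18, 200, 143)      # any combination is allowed; no palette involved
print(all(0 <= c <= 255 for c in true_color_pixel))   # True
```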
Thank you, Sir. This video was very helpful.
I think the example of some "Hollywood movies" being 8-bit is a bit misleading.
Shooting in 10-bit vs DELIVERING in 8-bit is very different.
agreee
10 bit is already here!! GoPro 11
Someone troll me if I'm wrong, but what I'm hearing is that the same rules that apply to a JPEG can roughly be applied to an 8-bit video file. What I mean is, in terms of gathering as much info as possible for the end result at the time of capture, versus tweaking the final product of a 10-bit recording back at HQ. I know I've probably way oversimplified this, but am I that far off?
Also, I'm sure the industry at large is able to get away with 8-bit because of the power of the equipment they're using to capture the cinematic what-have-yous, at least with most movie studios and the streaming heavy hitters. So I guess that leads to another question: do the limitations of the gear you're using affect the 8-bit video file you end up with? And at what price point do you begin to see that gain?
Hasselblad raw can do 16 bits per channel, and Canon still raw does 14 bits per channel. I think much pro video should easily be at those levels too. Dolby Vision requires 12 bits per channel for the end user. I hope soon all entry-level SLRs can do 10/12-bit HDR, the same as the end user gets, with pro gear going beyond that.
Thanks for the helpful video 👍🏻 was thinking about this for quite a while
great video. Does shooting in 10-bit help reduce noise?
It really does not relate to noise. The difference between 8-Bit and 10-Bit deals with the amount of colors available, not noise levels.
Thanks!
-Joey P
@@BandH oo i see. Thanks.
Good understanding here
Not gonna lie, this video made me realise how f-ed up my monitor is and why static dithering is so important when choosing the right model. I didn't understand why there were these ugly "stutters" and "glitches" on my monitor. Now I know. Thank you.
Thank you Doug.
Very well explained. Good work.
Doug nailed this
Thinking about buying a 5D Mark IV for video productions. It would be right to upgrade to C-Log if I understand you correctly, no matter whether I shoot in Full HD 4:2:0 or 4K 4:2:2... correct? BTW, thanks for the vid. I learned something today :)
C-Log is a color profile designed to increase your dynamic range for post-production processing. If you are looking to get more in-depth with your color grading, then adding C-Log is the right choice no matter what your bit depth is.
Thanks!
-Joey P
Thank you for the video. Just one comment: your backdrop makes it look like the video was shot at high ISO and then denoised.
Great Explanation!
very helpful resource. thanks!
A suggestion on how to show what 10-bit can do for you: on your two blue locks, if you had put a "B" in the first one and an "H" in the second in almost the same color, and then used curves to bring out the B & H, you would have shown much more dramatically how 10 bits can help in certain situations.
Doug, amazing video, mate. With all this said, would you recommend the EOS RP for simple vlogs on RUclips and possibly a small amount of client work, until one could theoretically get one's hands on a device capable of capturing ProRes, log, or whatever else at 10 bits?
Sure. The Canon EOS RP Mirrorless Digital Camera (B&H # CAERP2410547) would be an excellent choice for a vlogger and for professional work. bhpho.to/3oJTk8C >Mark
Very well explained 👍
Thanks for watching.
As you said, it is still seen on your and your viewers' monitors. Plus, videos might be affected more than stills. I believe Lightroom only uses 8-bit? What about the difference in pixel size due to monitor size, e.g. 27" vs 32"? Then there is the difference of HDR in 2K or 4K vs monitors without HDR. Is all of this "too much information" for an amateur, or even most professional, still photographers using Lightroom?
Fantastic explanation thank you
Thank you that was very well presented and easy to understand.
Thank you so much, this info is so good.
And with 10-bit you have more quantization headroom (roughly 60 dB, or about 10 stops for a linear encoding, against roughly 48 dB / 8 stops for 8-bit), so there is more room between the signal and the quantization noise. It's not very important for consuming video: no display can really show that full range from the lowest cd/m² to the highest, you would need a perfectly black, non-reflecting environment while watching it, and even then your eye wouldn't be able to process it (the highest level would be too bright in relation to the lowest).
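Those dB figures fall straight out of the bit depth; a quick check, assuming a purely linear encoding (real camera curves differ):

```python
import math

# Quantization headroom of a purely linear encoding: each extra bit doubles the
# number of levels, adding one stop of range and ~6.02 dB.
for bits in (8, 10):
    levels = 2 ** bits
    stops = math.log2(levels)          # equals the bit depth for a linear encoding
    db = 20 * math.log10(levels)       # ratio of full scale to the smallest step, in dB
    print(f"{bits}-bit: {levels} levels, {stops:.0f} stops, {db:.1f} dB")
```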
So since I have a 64 bit processor, does that mean I should be shooting in 16 bit * (3 colors + 1 for brightness), right?
/s
To better assist you with this question we will need a bit more info. Feel free to reach out to us directly at askbh@bhphotovideo.com. We will be better able to assist you from there.
Thanks!
-Joey P
Great info on 10-bit vs 8-bit in 2020; maybe by 2060 it will be legend.
When choosing new gear, what should the priorities be?
First, 8-bit vs 10-bit? Or 4:2:2 vs 4:2:0 or 4:0:0?
Or the bitrate?
Your choices will depend upon your shooting style, project needs, customer stipulations, and your budget for new gear. Please e-mail us with details: askbh@bandh.com. >Mark
I'm confused about the following: when a display truly has 10-bit color depth (so not 8-bit+FRC), then it should not be dithering, right? It should be able to fill up all those 1,024 grades just like 8-bit fills 256 grades? Does that mean 10-bit camera files will be shown more smoothly? If this makes any sense.
If a monitor is truly 10-bit and you feed it a 10-bit signal, yes, you should get a true color representation of the image on that monitor. If you need additional help with this question, feel free to email us directly at askbh@bhphotovideo.com.
Thanks!
-Joey P
I am a bit confused by this when working in Blender. Normally I keep it at 8-bit because the difference of one bit is unnoticeable. And why 10 instead of 16? Doesn't everything increase by powers of 2?
A power of 2 is not necessarily a "thing" in this case. 10-bit video is where it is at currently, depending on your application, viewing platform, and your customer. >Mark
If you show your videos side by side, you can see a difference, otherwise, I don’t think it really matters.
Thanks for the information, I understood.
Glad it helped!
How much more system-intensive is it to go from 8-bit to 10-bit when editing? Theoretically you have up to 64 times more possible colors to deal with going from 8 to 10 bits! That is much more than going from 1080p to 4K (a 4x increase in data size).
You may need more RAM, a faster processor and consider more storage. >Mark
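To put numbers on the question above (a sketch, not a benchmark): the count of representable colors grows 64x, but the raw bits per pixel grow only 25%, which is why editing 10-bit is nowhere near "64 times" heavier.

```python
# 8-bit vs 10-bit per channel, per pixel:
bits_8, bits_10 = 8 * 3, 10 * 3            # 24 vs 30 bits per pixel
print(bits_10 / bits_8)                    # 1.25 -> raw data grows only 25%

# ...but the number of representable colors grows much faster:
print((2 ** 30) / (2 ** 24))               # 64.0 -> 64x more *possible* colors, not 64x more data
```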
Thanks for the explanation
This made my head feel big with my 16 bit camera
Appreciated :)
I just finished my own adaptive dither which correctly represents 12-bit on an 8-bit display. And you know what? Only 16-bit content could really fill up this setup visually :)
Did you shoot this video in 10-bit? Your background looks noisy.
OK, so why not just always shoot in 10-bit? It seems like you are saying 10-bit is better in every way but is just not normally used because the difference is not noticeable. Is there any downside to 10-bit?
Hi Eric,
Ideally, one would always shoot in 10-bit as it ensures the footage is usable in a television or film environment (many broadcasters require 10-bit 4:2:2 cameras as the primary source). For those working in web or lower budget productions, 8-bit is mainly considered to save costs. There are now more cameras available that shoot 10-bit, but 8-bit is still widely used by a variety of cameras, not to mention 10-bit modes usually mean slightly more storage space required, as they tend to record in higher bit *rates* as well. 10-bit video also tends to require a slightly stronger computer to cut (though editing off proxies eliminates this issue).
In short, the only downsides are additional storage, computing requirements, and perhaps ease of playback on hardware devices. Most hardware players, such as TVs and integrated video decoders, and even some software players, cannot natively play back 10-bit content.
Money. Expensive cameras, expensive hard drives, expensive memory cards, expensive computers.
Quick question: is there a big difference between 10 and 12 bits (for the eyes)? That's the only thing keeping me from getting a Z-cam S6 instead of a BMPCC 6K...
A bit depth of 10 bits offers over a billion possible colors; 12 bits takes that to over 68 billion possible colors. There is a big difference in quality.
Thanks!
-Joey P
Thanks for the tutorial! What is the general range of bit depths used in film today?
Theatrical distribution standard color depth is 4:4:4 12-bit and the standard color space is DCI XYZ. >Mark
Great explanation, thanks!
I'm planning to shoot a budget iPhone feature at 8-bit SDR. It'll be exposed correctly, and I plan on applying some grading in Resolve.
I'm going to use Filmic Pro LOGv2.
I'm aware of banding; I was wondering if you would expect major issues when using a mist filter (a hazier picture).
Also, to add dithering, would it be better to use static noise or animated? I would have guessed animated? Can you kindly point me to a decent plugin/tutorial?
Many thanks!
You should be able to achieve the desired mist effect depending upon the filter chosen. >Mark
@@BandH Thanks, what I meant is whether using a mist filter might aggravate colour banding issues when filming 8bit, and doing minimal grading work on it. I truly hope not :-)
@@user-wf9wq9kq6u I do not see this happening with a quality filter. >Mark
If I don't intend to edit the footage, should I still use 10-bit for quality, or will 8-bit be good enough?
There is no harm in shooting at a higher quality if it is available to you, even if you are not going to edit the footage.
Thanks! -Joey P
Thanks! Perfectly clear.
I read so much in the past two days because I need a new monitor... and now I'm confused. Anyway: is it possible to edit 10-bit material on an 8-bit display? With software? It sounds kind of limiting to me...
I'm trying to solve a color banding issue. I have an 8-bit Samsung 43" SVA monitor. Is color banding an inconvenient trait of such a panel, or is my unit faulty?
Excellent thanks!
Really Helpful! Thanks!
Best explanation
I still don't know which is more important, bit depth or color sampling. Due to HDMI bandwidth limitations I must choose between 4K 60 Hz 10-bit 4:2:0 or 4K 60 Hz 8-bit 4:4:4. I don't know which one is supposed to give me better image quality 😐
Both are important, but the viewing platform and audience matter as well. 4:2:0 is actually the chroma subsampling level required by the 4K UHD Blu-ray standard. Most 4K UHD Blu-ray players are actually capable of displaying slightly higher-quality or more color-accurate material than you can get from a 4K UHD Blu-ray disc! All that's needed is a 4K camcorder, DSLR, or graphics program that records 10-bit video using 4:2:2 (not 4:2:0) sampling, or even a raw 4:4:4 video format. The benefits of having full color in video are debatable, especially at 4K; it would be tough to recognize the difference between a full 4:4:4 sequence and the same content in 4:2:0.
10-bit color depth and HDR are probably more important visually on many platforms.
4:2:0 is almost lossless visually, which is why it is used in Blu-ray discs and a lot of modern video cameras. There is virtually no advantage to using 4:4:4 for consumer video content; if anything, it would raise the cost of distribution by far more than its comparative visual impact. This becomes especially true moving toward 4K and beyond: the higher the resolution and pixel density of future displays, the less apparent subsampling artifacts become. >Mark
@@BandH Thank you so much! This explanation helped me a LOT, even more than the video itself, which is also really good. That's exactly what I was looking for. So, according to what you say, the higher the resolution, the less noticeable the difference between subsamplings. 😋
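For intuition on what the subsampling numbers above trade away, here's a quick sketch of samples per pixel and relative raw data for each scheme (just the arithmetic, independent of any particular codec):

```python
# Chroma samples per 4-pixel-wide, 2-row block (the "J:a:b" notation):
# 4:4:4 -> chroma on every pixel, 4:2:2 -> halved horizontally, 4:2:0 -> halved both ways.
schemes = {"4:4:4": (4, 4), "4:2:2": (2, 2), "4:2:0": (2, 0)}

for name, (a, b) in schemes.items():
    luma_samples = 8                       # 4 pixels x 2 rows, one Y sample each
    chroma_samples = 2 * (a + b)           # Cb and Cr, across the two rows of the block
    total = luma_samples + chroma_samples
    print(f"{name}: {total} samples per 8 pixels "
          f"-> {total / 24:.2f}x the data of 4:4:4")
```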
Here come my 8-bit SD CRT, 6-bit HD LCD, and 6-bit FHD LED monitors.
Do you know any way or tool to determine if a video is 8-bit or 10-bit?
So that we may better assist you with this question feel free to email us directly at askbh@bhphoto.com.
Thanks! -Joey P
@@BandH It would be great if you could answer here; maybe someone else has the same question.
Is processing 10-bit 4:2:2 files more CPU-heavy when editing?
What if I just want to shoot 10-bit 4:2:2 but not use a log profile and do not plan to color grade it?
Go for it! Still a better option. >Mark
Great explanation
Thank you very much (merci beaucoup). Very helpful
I finally understand!! Thank youu