Gradient handling is an important part of picture quality. It helps deliver better detail in shadows and minimizes banding, which matters especially if you want to game in HDR, watch HDR content, or create content. Understanding color depth can help you make a better buying decision, or even save some money. This article discusses why gradient handling matters, the differences between 8-bit and 10-bit color depth, and how we score gradient handling in our monitor reviews.
Gradient handling matters when viewing scenes with many shades of the same color, such as a clear blue sky, where the blue becomes lighter the closer it is to the light source. Poor gradient handling makes the transition from one shade to the next more visible. This happens when two shades that are supposed to be similar look very different, or when two shades that are supposed to be different end up looking the same, resulting in banding. Below, you can see clear banding on the LG 32GK650F-B, while the Dell U2718Q is almost entirely smooth.
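If you'd like to see why fewer quantization levels create visible bands, here's a quick illustrative sketch (not part of our test procedure) that builds a smooth blue ramp and rounds it to different bit depths; the ramp size, colors, and the exaggerated 5-bit depth are arbitrary choices:

```python
import numpy as np
from PIL import Image

WIDTH, HEIGHT = 1024, 256

# A smooth horizontal ramp from black toward a sky-like blue, in floating point.
ramp = np.linspace(0.0, 1.0, WIDTH)
blue = np.array([0.35, 0.55, 1.0])                       # arbitrary "sky blue" endpoint
gradient = np.repeat(ramp[None, :, None] * blue, HEIGHT, axis=0)

def quantize(img, bits):
    """Round each channel to 2**bits discrete levels, as a panel of that depth would."""
    levels = 2 ** bits - 1
    return np.round(img * levels) / levels

# 5-bit quantization exaggerates the effect so the bands are easy to see on any display;
# the 8-bit vs 10-bit difference works the same way, just far more subtly.
banded = quantize(gradient, 5)
smooth = quantize(gradient, 8)

Image.fromarray((banded * 255).round().astype(np.uint8)).save("banded_ramp.png")
Image.fromarray((smooth * 255).round().astype(np.uint8)).save("smooth_ramp.png")
```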
For our test, we use only a Nikon D750 camera and a PC connected to the monitor being tested. We make sure the monitor is in SDR mode with local dimming off (if applicable), and we set up the camera with an f/4.0 aperture, a 1/15 sec shutter speed, and ISO 200.
1 - The first step is to determine whether a monitor has an 8-bit or 10-bit panel. We do so by checking in the NVIDIA Control Panel whether the color depth can be set to anything other than 8-bit. If the control panel lets us set it to 10-bit, we consider it 10-bit, even if it's 8-bit+FRC.
2 - The second step is to take a photo of the screen displaying our gradient image in a dark room. The image is displayed through NVIDIA's 'High Dynamic Range Display SDK' program. For 8-bit monitors, we display the image at 8-bit without any further processing. For 10-bit panels, however, we first open the gradient image, then change the Tonemap Mode to 'Linear', which is essentially a passthrough mode. Before taking the photo, we adjust the brightness so the brightest part of the image (on the right side of the second row) is at 100 cd/m². After taking the photo, we upload it into Adobe Lightroom, where we apply a custom preset and crop the image to a 16:9 format.
To score the gradient, we run a batch file that divides the photo we took into smaller sections. These sections are then presented to two testers, one at a time, who determine whether there's banding or not. Three points are awarded if there's no banding, two points if they're unsure, and one point if there's banding. These points are then tallied and translated into a score out of ten. If the testers' scores differ, a third tester is brought in to perform the test again. We deduct a point for an 8-bit panel, which means the highest score for an 8-bit monitor is 9.0. That said, an 8-bit monitor can still score higher than a 10-bit one, as some 10-bit monitors don't handle gradients well, and some 8-bit ones are very good at it. We don't make any distinction between 8-bit+FRC and native 10-bit, since we score according to how smooth the gradients look.
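The exact scaling we use to turn the tally into a final score isn't spelled out above, so the linear mapping in this sketch is an assumption for illustration only; it simply shows how the per-section ratings, the conversion to a 0-10 scale, and the one-point 8-bit deduction could fit together:

```python
def gradient_score(ratings, is_10bit):
    """
    ratings: list of per-section ratings, each 1 (banding), 2 (unsure),
             or 3 (no banding), already agreed on by the testers.
    is_10bit: True for native 10-bit or 8-bit+FRC panels.
    """
    if not all(r in (1, 2, 3) for r in ratings):
        raise ValueError("each rating must be 1, 2, or 3")

    worst = 1 * len(ratings)            # every section shows banding
    best = 3 * len(ratings)             # every section is perfectly smooth
    total = sum(ratings)

    # Map the tally linearly onto a 0-10 scale (assumed, not the exact formula).
    score = 10 * (total - worst) / (best - worst)

    # 8-bit panels lose one point, capping them at 9.0.
    if not is_10bit:
        score = max(score - 1.0, 0.0)

    return round(score, 1)

# Example: 20 sections, mostly smooth, on an 8-bit panel -> 8.0.
print(gradient_score([3] * 17 + [2, 2, 1], is_10bit=False))
```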
The main difference between an 8-bit and a 10-bit panel is the number of colors they can produce: an 8-bit panel can display 16.7 million colors, while a 10-bit panel can display 1.07 billion. However, many modern 8-bit displays use a technique called 'dithering' to produce as many colors as a native 10-bit panel. 'Temporal Dithering' (also known as Frame Rate Control, or FRC) produces certain colors by rapidly cycling between two adjacent shades. This type of flickering isn't visible most of the time, but when it is, it's usually in the darker shades. The other technique, 'Spatial Dithering', places the two adjacent shades very close to each other to trick the eye into seeing an intermediate shade, but it isn't used as often as FRC. As you can see below, there's less banding on the Dell U2718Q than on the LG 48 CX OLED, even though the Dell has an 8-bit+FRC panel while the LG is true 10-bit.
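Here's a simplified sketch of both techniques, purely to illustrate how cycling or mixing two adjacent 8-bit shades approximates an in-between 10-bit level; real panels do this in hardware, and the frame count and pixel pattern below are arbitrary:

```python
import numpy as np

def temporal_dither(target_10bit, n_frames=4):
    """Approximate a 10-bit level by cycling an 8-bit pixel value over time (FRC)."""
    low = target_10bit // 4                      # nearest 8-bit level below the target
    fraction = (target_10bit % 4) / 4            # how far the target sits toward the next level
    frames = np.full(n_frames, low, dtype=int)
    frames[: round(fraction * n_frames)] += 1    # show the brighter shade on a share of frames
    return frames                                # their average over time ~= target_10bit / 4

def spatial_dither(target_10bit, width=8):
    """Approximate the same level by mixing the two shades across neighboring pixels."""
    low = target_10bit // 4
    n_high = round((target_10bit % 4) / 4 * width)   # pixels shown one shade brighter
    row = np.full(width, low, dtype=int)
    row[:n_high] += 1                                # real panels interleave the pattern
    return row

target = 513                                     # a 10-bit value with no exact 8-bit equivalent
print(temporal_dither(target))                   # [129 128 128 128] -> averages to 128.25
print(spatial_dither(target))                    # two of eight pixels bumped by one level
```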
While gradient handling is an important part of picture quality, you don't need a 10-bit monitor if you're only using it for general productivity, web browsing, or watching videos online, since most content is still 8-bit. However, if you're a content creator, are sensitive to the flickering of an 8-bit+FRC panel, or want a better HDR experience, it might be worth getting a true 10-bit monitor.
What do you think of our article? Let us know below.
Want to learn more? Check out our complete list of articles and tests on the R&D page.
It would be nice to use a more fine-grained and objective measure for gradient performance in future testing. I suggest measuring a complete gamma curve, say for gray, and fitting it with a smoothing spline from which the predicted luminance step size can be inferred for each input value step. Then compute the relative step size errors (= measured/predicted - 1) and summarize them by their standard deviation, and possibly provide a chart showing the errors over the input values. The resolution of the smoothing spline must be chosen wisely so that the spline can still follow regional wiggles in the gamma curve without capturing the local deviations we want to measure.

One might want to exclude the very ends of the full input range from analysis. The dark end is hard to measure accurately enough, and both ends might suffer from crushing, which is maybe of less interest when looking at gradient performance. When measuring 10-bit output, measuring the full curve (1024 values for one color channel) might take too long, in which case one might need to restrict the measurements to only a few regions of the whole input value range.

Regarding the channels to measure, I think there are two aspects of interest. First, there is pure luminance banding, which is expected to be worse for a single primary color channel than for gray, where inaccuracies in the primary channels can average out to some extent (exception: WOLED, which has a separate white channel). Second, there is color banding, which is most prominent in the gray curve, but capturing this would require some modification of the analysis suggested above - to be discussed. To capture both aspects, it is probably still sufficient to measure only the full gray curve, because it would indirectly also provide an indication of the single color channels' gradient performance. I hope this falls on fertile ground.
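For readers who want to experiment with this idea, below is a minimal sketch of the proposed analysis. It assumes the gray-ramp luminance has already been measured into arrays of code values and luminance readings; the smoothing factor, trimmed range, and synthetic example data are placeholder choices, not a validated methodology:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def step_size_errors(inputs, luminance, smoothing=None, trim=32):
    """Relative per-step luminance errors versus a smoothing-spline fit of the gamma curve."""
    # Drop the extreme ends, which are hard to measure and may be crushed.
    x = np.asarray(inputs, dtype=float)[trim:-trim]
    y = np.asarray(luminance, dtype=float)[trim:-trim]

    # Smoothing spline approximating the underlying gamma curve; it should follow
    # regional wiggles without reproducing the local step-to-step deviations we measure.
    spline = UnivariateSpline(x, y, s=smoothing)

    measured_steps = np.diff(y)
    predicted_steps = np.diff(spline(x))
    rel_errors = measured_steps / predicted_steps - 1.0
    return rel_errors, rel_errors.std()

# Synthetic example: a gamma-2.2 gray ramp topping out at 100 cd/m², with small
# per-reading noise standing in for real measurement error.
rng = np.random.default_rng(0)
codes = np.arange(256)
ideal = 100 * (codes / 255) ** 2.2
noisy = ideal + rng.normal(scale=0.02, size=codes.size)
errors, spread = step_size_errors(codes, noisy, smoothing=1.0)
print(f"std of relative step-size errors: {spread:.3f}")
```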
Hi qx1147,
Thanks for taking the time to reach out with your suggestion!
It's an interesting proposal for a testing methodology. I've added it to our test bench suggestions list so we can revisit it once we get to updating our gradient test in the future :)
Don't hesitate to reach out should you have any other suggestions or feedback for us.
Cheers!
I've found diagonal gradients to produce much more visually discernible results than purely horizontal gradients with identical size and pixel values. A square naturally fits 2 diagonal gradients and results in a very compact visual unit. This has helped me find 2 anomalies in my Samsung Odyssey Neo 9 LS49AG950NPXEN monitor when using HDR mode: a slight dip in the middle of the Red-Magenta gradient and smudged hues inside the Red-Yellow gradient.

I also discovered a firmware bug in the latest 1015.0 firmware's dithering algorithm with several specific high saturation settings combinations (for example: HDR Standard, Contrast 96, Red 90, Green 91, Blue 90, Saturation 96) by using an image with every sRGB color, arranged as a 64x4 grid of 256x256 patches for convenience.
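The commenter doesn't share their exact test image, but one plausible way to build a square holding two diagonal gradients is to split it along its main diagonal and blend a shared corner color toward two different colors; the construction and colors in this sketch are assumptions for illustration:

```python
import numpy as np
from PIL import Image

def two_diagonal_gradients(size, color_a, color_b, color_c):
    """Square split along its main diagonal: color_a->color_b in the upper-right triangle,
    color_a->color_c in the lower-left, both blended along the diagonal direction."""
    idx = np.arange(size)
    x, y = np.meshgrid(idx, idx)                 # x = column, y = row
    d = ((x + y) / (2 * (size - 1)))[..., None]  # 0 at top-left, 1 at bottom-right
    a, b, c = (np.array(col, dtype=float) for col in (color_a, color_b, color_c))
    upper = (1 - d) * a + d * b                  # gradient used above the diagonal
    lower = (1 - d) * a + d * c                  # gradient used below the diagonal
    img = np.where((x >= y)[..., None], upper, lower)
    return Image.fromarray(img.round().astype(np.uint8))

# Red-to-yellow in one triangle, red-to-magenta in the other (hypothetical choices).
square = two_diagonal_gradients(512, (255, 0, 0), (255, 255, 0), (255, 0, 255))
square.save("diagonal_gradient_square.png")
```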
Hi TimoKinnunen,
Thanks for taking the time to get in touch with your feedback!
Although no work is currently planned on this test, I've added your observations to our list of things to review when we get to work on our monitor gradient test. It will be interesting for us to investigate the results of diagonal gradients and high saturation settings.
Don't hesitate to reach out should you have any additional feedback or suggestions for us.
Cheers!
I was wondering, have you considered also adding a test for black crush? Like one of those test images with numbered boxes going from dark gray to black, so you can see the darkest box the monitor can display? It'd be very useful, since a lot of OLEDs, WOLEDs especially, seem to be fairly prone to black crush.
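For reference, here's a rough sketch of the kind of numbered near-black pattern the commenter describes; the box size, number of levels, and label color are arbitrary choices, and this isn't an image used in any RTINGS test:

```python
from PIL import Image, ImageDraw

BOX, ROWS, COLS = 96, 2, 13            # 26 boxes covering 8-bit levels 0-25
img = Image.new("RGB", (COLS * BOX, ROWS * BOX), "black")
draw = ImageDraw.Draw(img)

for level in range(ROWS * COLS):
    col, row = level % COLS, level // COLS
    x0, y0 = col * BOX, row * BOX
    # Each box is filled with a single near-black gray level.
    draw.rectangle([x0, y0, x0 + BOX - 1, y0 + BOX - 1], fill=(level, level, level))
    # Label it with its 8-bit value in a contrasting gray.
    draw.text((x0 + 8, y0 + 8), str(level), fill=(128, 128, 128))

img.save("black_crush_pattern.png")
```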
Hi Dabi,
Thanks for taking the time to reach out with your question!
We had a test specifically for black crush as part of our former test bench 1.9. However, we found that it was missing some evaluation criteria needed to offer a comprehensive evaluation of dark shadow gradation and black reproduction, so we've removed the test from our test bench for the time being.
That being said, we're carrying out some internal investigations to evaluate how different display technologies behave in terms of black crush and dark shadow gradation, so it's possible we'll design a new test for this going forward that fixes the issues we encountered with our previous one.
Cheers!