Misleading Use of “Calibration”: The article repeatedly uses the term “calibration,” which implies hardware adjustments that can change the monitor’s behavior. However, for many consumer-grade monitors, what is being referred to is not true calibration but “profiling” through software. True hardware calibration requires specific hardware tools and allows for consistent results, something that standard consumer monitors, without hardware calibration support, cannot offer. This distinction is crucial when discussing professional-grade monitors used for color-critical work.
Degradation and “Recalibration”: The suggestion to recalibrate every year due to “panel degradation” is flawed for monitors without hardware calibration. When a monitor lacks the ability to adjust its internal settings through a hardware device, recalibration only adjusts the output from the graphics card, which doesn’t account for the monitor’s true degradation. Over time, a monitor’s performance will deteriorate, but without hardware-level calibration, these changes will result in a less accurate image, regardless of how often the graphics card is “recalibrated.”
Local Dimming for Color-Critical Work: The mention of local dimming as a benefit overlooks its significant drawbacks in professional contexts. Local dimming can interfere with uniformity and result in artifacts like halo effects, which are detrimental when working on still images or color grading video. For professionals who require precise and consistent luminance across the screen, local dimming should not be recommended as a feature.
Hi, thanks for bringing up your concerns! We had an internal discussion and decided to change the title, as you’re right, it wasn’t the best title to represent the goal of the article. This is really a guide to explain how each of the monitor’s settings affect the picture quality.
Regarding recalibration and degradation, this is another great point, but most monitors do have settings for hardware-level calibration. If you require perfect calibration for content creation, this is something to consider to make sure it maintains those accurate colors.
As for local dimming, you’re definitely right that local dimming can actually make the picture quality worse. This is unfortunately the case with many monitors, but if a monitor has a good local dimming feature, then it’s beneficial to use to improve picture quality. As mentioned in the article, you should enable or disable local dimming as you wish.
Thanks for your answer. You wrote: “but most monitors do have settings for hardware-level calibration.” This is very wrong. I’d say that 99% of monitors don’t. You can do a simple test.
Connect your PC to a standard monitor, e.g. a Dell UltraSharp, and do your type of “calibration”. Save the figures as a reference.
Then connect another PC to the exact same Dell UltraSharp and measure the monitor. Are the figures the same? No.
Here is more to read: https://www.eizoglobal.com/library/management/hardware-vs-software-calibration/index.html https://www.benq.com/en-us/knowledge-center/knowledge/what-is-hardware-calibration.html This one explains it well: https://www.benq.com/en-us/knowledge-center/knowledge/hardware-vs-software-calibration.html
Regarding local dimming: in the professional field, e.g. color grading for streaming platforms or broadcast, the general recommendation is not to use local-dimming panels. You can test this link, but speed up the playback.
Ah ok I understand what you mean, this type of hardware-level calibration isn’t something we test for. What I meant was that most monitors have settings to adjust RGB levels or calibrate the picture as much as possible, at least without an ICC profile. Sorry for the confusion and thanks for sending along those links!
Exactly! That’s why it’s important to use the correct terminology. You’re not actually calibrating the RGB levels—you’re simply adjusting them to match a target.
True hardware calibration adjusts the monitor’s internal LUT (Look-Up Table), ensuring that color reproduction remains consistent independent of the computer or graphics card. What you’re referring to is software-level adjustments, which only work within the system they’re applied to and don’t account for long-term drift in the panel.
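To make the software side concrete, here is a minimal sketch (hypothetical, not any vendor’s actual API) of what graphics-card-level “calibration” amounts to: building a lookup table that remaps the GPU’s output toward a target gamma. The panel itself never changes, which is why a second PC driving the same monitor inherits none of it.

```python
import numpy as np

def build_gamma_lut(measured_gamma: float, target_gamma: float = 2.2,
                    size: int = 256) -> np.ndarray:
    """Return an 8-bit 1D LUT that remaps GPU output so the displayed
    response approximates the target gamma instead of the measured one."""
    x = np.linspace(0.0, 1.0, size)
    # Encoding with target/measured cancels the panel's native response:
    # (x ** (target/measured)) ** measured == x ** target
    corrected = x ** (target_gamma / measured_gamma)
    return np.round(corrected * 255).astype(np.uint8)

# Panel measured darker than intended (gamma 2.4 vs. the 2.2 target),
# so midtones get lifted slightly.
lut = build_gamma_lut(measured_gamma=2.4)
print(lut[128])  # prints 136: mid-gray is raised above 128
```

Note that this table lives in the driving system (or its ICC/vcgt data), not in the monitor, which is exactly the distinction being made above.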
To use an analogy: a professionally hardware-calibrated monitor with ICC profiles is like tuning a sports car. It’s not just about picking the right racing tires (ICC profile) for the surface, but also fine-tuning the suspension (LUT calibration) to match the track conditions. A standard office or gaming monitor? That’s like just pumping the standard tires up to a standard PSI and hoping for the best.
And why does this matter? Imagine adjusting product photos of clothing that will be sold online. If your monitor’s colors are even slightly off you will make the wrong decisions, and the end customer might see a completely different shade on their screen. That leads to product returns, wasted resources, and lost revenue.
Consistency is key, and you can’t achieve that with just software adjustments. Last but not least, it’s not bad to adjust gaming/office monitors to be closer to the right target (e.g. sRGB for browsing the web). That’s a good thing, but it is not calibration.
It would be interesting to first take measurements in the center of the screen and then in all four corners. The differences will likely be significant. This means that while the test results may look good based on center measurements, the performance in the corners could be much worse. And marketing claims like DeltaE <2 quickly fall apart when you consider these inconsistencies. A low DeltaE value in the center of the screen means little if the corners tell a completely different story.
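To put a number on that DeltaE point, here is the simplest variant, CIE76 (modern workflows typically use CIEDE2000, but the idea is the same); the Lab values below are made-up example readings for the same gray patch measured at the center and at a corner:

```python
import math

def delta_e_76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in CIELAB space."""
    return math.dist(lab1, lab2)

# Hypothetical readings of one gray patch: screen center vs. a corner.
center = (50.0, 0.2, -0.3)
corner = (47.5, 1.0, 1.5)
print(round(delta_e_76(center, corner), 2))  # prints 3.18
```

A monitor could pass a “DeltaE < 2” check measured at the center while the very same patch in a corner sits above 3, which is the inconsistency being described.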
Thanks for the feedback, we’ll take that into consideration for future test benches. You’re right that taking measurements at the corners of the screen would give a fuller picture of accuracy. Right now, we measure it in the center of the screen only, but we also have a Gray Uniformity test to see how evenly the monitor displays colors across the screen.
Thanks for the reply. On the web you write: “We measure uniformity by taking a photo of the screen and calculating the average deviation from the original color.” This is not really accurate; you need to use a calibrated measurement device.
In the test of the ASUS PA279CV you write: “The gray uniformity is good. While the edges of the screen are darker than the rest, there’s minimal dirty screen effect in the center, which is great if you’re working on content with large areas of uniform colors.”
But there’s a site where they really go deep; here is their verdict: https://www.prad.de/testberichte/test-asus-pa279cv-guenstiger-4k-monitor-fuer-bild-und-videobearbeitung/2/#Bildhomogenitaet As you can see, the monitor is -14% darker on the far right, almost -10% in the lower left corner, and suddenly almost perfect in the upper left corner: a very inconsistent monitor. From a gaming-monitor perspective this hardly matters, but from a picture/content-monitor perspective it’s very bad.
https://www.prad.de/lexikon/bildhomogenitaet/
So why am I writing all this? Well, I just want you to keep on getting better.
As always, we appreciate the feedback! We understand our testing methods aren’t perfect, so we’ll keep this in mind next time we do a test bench update. But just to clarify, our Gray Uniformity test is about how evenly it displays the same color across the screen, but you’re right, we don’t take accuracy into account. And keep in mind that every unit has different uniformity, so we should avoid comparing results from another reviewer. Thanks again for bringing this up though!
Thanks for responding, and sorry for being such a pain!
Regarding this statement: “And keep in mind that every unit has different uniformity, so we should avoid comparing results from another reviewer.”
This is where I struggle to understand your standpoint. While it’s true that panel uniformity varies between individual units, that doesn’t mean we should disregard comparative measurements altogether. There are well-established tools and methodologies to measure uniformity across a screen with high precision, minimizing subjective perception and providing quantifiable, repeatable data.
By only measuring a small central area (typically around 2x2 cm), you’re missing critical information about the real-world performance of a monitor, especially for professionals who rely on uniformity across the entire display. Many high-end displays are calibrated and designed to deliver exceptional uniformity across the whole panel—this is a key aspect of their value.
Wouldn’t a more comprehensive measurement approach, such as grid-based luminance and color uniformity analysis, provide a better representation of real-world performance? Otherwise, the review risks being misleading for users who rely on these tests for purchasing decisions.
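A grid-based analysis like the one suggested above could be as simple as this sketch; the nine luminance readings are hypothetical, loosely modeled on the corner falloff discussed earlier:

```python
# Hypothetical 3x3 grid of luminance readings (cd/m²), one per screen
# region, with the center cell used as the reference point.
grid = [
    [118.0, 120.0, 110.0],
    [115.0, 120.0, 103.0],
    [108.0, 117.0, 106.0],
]

center = grid[1][1]
# Percentage deviation of each region from the center reading.
deviations = {
    (r, c): round(100.0 * (val - center) / center, 1)
    for r, row in enumerate(grid)
    for c, val in enumerate(row)
}
worst = min(deviations.values())
print(deviations[(1, 2)], worst)  # right edge deviation, worst case
```

With these example numbers the right edge comes out about -14% below the center, so a center-only measurement would report a far better monitor than a professional would actually experience.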
But yes, thanks for all your answers
Just to clarify, I meant to say that you shouldn’t compare results with other reviewers because we each have different testing methodologies, so it’s hard to compare results using different measurements. That said, our current testing methodology does measure the uniformity across the entire screen as we take a photo and our program actually measures the standard deviation across the screen. Of course, there are other ways to measure uniformity, but we feel confident with our current testing as a representation of uniformity.
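As a rough illustration of the standard-deviation approach described here (a sketch with synthetic data, not RTINGS’ actual pipeline):

```python
import numpy as np

# Synthetic "photo" of a full-screen gray slide: a 1080x1920 luminance
# array around 120 cd/m² with small random variation standing in for
# real panel non-uniformity.
rng = np.random.default_rng(0)
photo = 120.0 + rng.normal(0.0, 3.0, size=(1080, 1920))

# Summarize uniformity as spread relative to the mean level.
mean = photo.mean()
std = photo.std()
print(f"mean {mean:.1f} cd/m², std {std:.2f}, CV {100 * std / mean:.2f}%")
```

This captures how evenly the screen displays one color, which is the stated goal; as noted above, it does not capture whether each region is also accurate to the target.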