According to Rtings’ frequency response testing, the Jabra Elite 7 Pro are the best-sounding headphones the site has ever reviewed, based on how closely they follow the target response. Yet despite that result, their Amazon reviews sit at a deeply disappointing 3.4/5 stars. Multiple customer reviews say “audio is OK”, “highs and mids lack the clarity and brilliance of some other models”, “the worst sounding headphones I’ve used yet”, and “sound quality is just ok”. The same pattern of a high Rtings frequency response score paired with a low user sound quality rating also shows up with Samsung’s Galaxy Buds+, the second-highest-scoring headphones Rtings has ever reviewed.
Discrepancies like this unfortunately call into question the testing methodology Rtings uses to score headphone sound quality. All of us here are interested in objectivity, so what exactly is causing this difference? Is it:
Rtings’ TV reviews are an example of a product category where the scores can’t really be disputed: if a TV has a good contrast ratio, all other things being equal, everyone can tell it looks better than one with a bad contrast ratio. But here, headphones that Rtings has deemed more accurate and better at reproducing sound are receiving worse user reviews, with those reviews specifically citing sound quality (e.g. “the sound quality sucks”) as the reason. What is the problem? How can we solve it so that a high Rtings headphone score translates to a high user score, just as it currently does for every other category (TVs, vacuums, blenders, etc.)?
Sincerely, A Long-Time Rtings User