
How We Test Vacuums
Turning Data Into Objective Reviews

A few of the vacuums we've tested. From left to right: Eureka Mighty Mite, Miele Complete C3 Alize, Eureka Whirlwind Bagless Canister, and SEBO Airbelt D4 Premium

We've published over 100 vacuum reviews on our website since we first started testing them in 2020. We purchase each vacuum ourselves, with no cherry-picked manufacturer samples, and run each one through a gauntlet of tests to give you some idea of how it might slot into your day-to-day life. Testing a vacuum isn't as simple as ordering a unit online and sucking up a pile of cereal and dog fur; publishing a single review takes time and requires a coordinated effort from a variety of teams, with the end result drawing on data from over 140 tests and typically running somewhere in the region of 2,500 words.

If you're interested in what actually goes into producing and publishing a full vacuum review, you've come to the right spot!

Philosophy

How we buy

You can keep tabs on the status of our ongoing reviews by checking the review pipeline page.

Before we can test any vacuum, we first have to buy it. We mainly select, purchase, and test vacuums available in the United States, though we don't buy every vacuum sold in the country.

There are different ways for us to buy a vacuum. The first and most common is simply buying popular models after they've been released, or buying other models that have gained popularity over time, based on a variety of web traffic analyses. Popularity aside, we also base our product acquisitions on the content and focus of our recommendations. For instance, we might buy a batch of car & boat vacuums for a rundown of the best handheld models on the market.

The other main way we select products is by purchasing the top-voted model on our Voting Tool. You, the user, can play a part in determining which vacuum we buy and review next. You get one vote every 60 days or, for members of our Insiders program, 10 votes every 60 days. At the end of that cycle, we'll buy the item with the most votes, so long as it's received 25 or more.
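As a purely illustrative aside, that selection rule is simple enough to sketch in a few lines of Python. The function name, model names, and vote counts below are all made up; this isn't RTINGS' actual tooling, just a hedged restatement of the rule described above.

```python
from collections import Counter

MIN_VOTES = 25  # threshold mentioned above: the winner needs at least 25 votes

def pick_next_purchase(votes: Counter) -> str | None:
    """Return the top-voted model at the end of a 60-day cycle, or None if nothing clears the threshold."""
    if not votes:
        return None
    model, count = votes.most_common(1)[0]
    return model if count >= MIN_VOTES else None

# Hypothetical end-of-cycle tally
cycle_votes = Counter({"Brand A Stick Vacuum": 41, "Brand B Canister": 27, "Brand C Robot": 12})
print(pick_next_purchase(cycle_votes))  # -> Brand A Stick Vacuum
```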

We buy our review units the same way any member of the general public would: we don't accept review samples from manufacturers that could skew results. We buy them from Amazon, Best Buy, Newegg, B&H, and other retailers, mainly in the United States. Once we've received the vacuum at our office, it's unpacked and handed over to our testing team.

Standardized Tests

Before digging into the nuts and bolts of how we actually test vacuums, we should take some time to explain our guiding philosophy for this process. Our testing methodology is meant to produce consistent and directly comparable results across a wide range of metrics. Inevitably, this practice produces data and results that fall outside the norm for most people; for instance, we test the stain-clearing ability of every new vacuum we buy, not just models with a mopping feature. We don't only adhere to our standard testing methodology, however; if a vacuum has a feature that falls outside the bounds of our process, we'll give it a look too, whether in response to feedback from readers like you or simply to verify whether said feature actually has an impact on real-world use. Standardized testing also allows you to line up multiple vacuums and compare them directly using our Compare tool.

If you're really only concerned with how a vacuum performs in a particular respect, you can use our table tool instead. This table is fully customizable and can be configured to show any piece of test data that you want, from metrics like overall weight to a vacuum's ability to clear away pet hair on furniture.

The full list of tests effectively falls into one of two main groups: Design and Performance. Some of these tests are unscored and are simply there to provide technical information or a broader rundown of a vacuum's features. Others are measured subjectively, with a score produced after testers run through an extensive rubric. Of course, we wouldn't be RTINGS.com without objective data; some review sections, such as the Suction and Airflow tests, are measured with specialized equipment.

Design: Build Quality, User Maintenance, Recurring Cost, Storing, Dirt Compartment, In The Box, Range, Portability, Battery, Quality Of Life Features, Tools And Brushes, Alternative Configuration

Performance: Hard Floor Pick-Up, High-Pile Carpet Pick-Up, Low-Pile Carpet Pick-Up, Pet-Hair Pick-Up, Suction, Airflow, Noise, Maneuverability, Pet Hair Furniture Performance, Air Quality, Cracks, Stains, Water

Design

While most Design-focused tests are fairly self-evident in terms of goal and methodology, a few require some extra explanation.

For the Build Quality test, testers aren't necessarily assessing long-term reliability. Instead, the test is intended to focus more on identifying weak points and how durable the vacuum feels while in use: Does the unit wobble or shake while being pushed around? Is there any part of the vacuum that feels as though it could break if you were to bump it against a wall or drop it from waist height?

The User Maintenance section effectively summarizes the components in a vacuum that need cleaning and how easy it is to access them. Vacuums with features that alleviate some of this maintenance, such as self-cleaning roller brushes or self-emptying base stations, tend to score higher. However, we also consider how many parts need periodic cleaning. Similarly, scoring for the Recurring Cost test is based on how many parts need to be periodically replaced on a vacuum, typically including brushrolls, filters, or dirtbags (in the case of bagged vacuums). Scoring depends not only on the cost of individual components but also on how often they need replacing, which we take from the manufacturer's own recommendations.
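To make that concrete, here's a minimal sketch of how a yearly recurring-cost figure could be estimated from part prices and recommended replacement intervals. The parts, prices, and intervals are invented, and this is only an illustration of the factors named above, not the actual scoring formula.

```python
from dataclasses import dataclass

@dataclass
class ReplaceablePart:
    name: str
    unit_cost: float            # price of one replacement, in USD (hypothetical)
    months_between_swaps: int   # manufacturer-recommended replacement interval

def yearly_recurring_cost(parts: list[ReplaceablePart]) -> float:
    """Sum each part's cost, prorated to a 12-month period."""
    return sum(p.unit_cost * (12 / p.months_between_swaps) for p in parts)

example_vacuum = [
    ReplaceablePart("HEPA filter", 19.99, 6),
    ReplaceablePart("Brushroll", 24.99, 12),
    ReplaceablePart("Dirtbag (3-pack)", 14.99, 3),
]
print(f"${yearly_recurring_cost(example_vacuum):.2f} per year")  # -> $124.93 per year
```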

Tests like In the Box, Quality Of Life Features, Tools And Brushes, and Alternative Configuration aren't assigned a score and exist simply to outline what tools are included with a vacuum or what supplementary features it has, such as a height-adjusting floorhead or automatic power adjustment system.

Performance

It's important to understand that our Hard Floor Pick-Up, High-Pile Carpet Pick-Up, Low-Pile Carpet Pick-Up, and Pet-Hair Pick-Up tests are conducted to evaluate how well a vacuum head deals with not only small, fine debris but also larger material that could get stuck at the front of a floorhead. This is why testers don't lift the vacuum head while cleaning, as doing so lets more powerful vacuums simply suck debris upwards. For this section, scores are assigned subjectively based on the debris remaining after a single back-and-forth sweep meant to cover the width of the boundary box, with the speed of the sweep governed by the tempo of a metronome.

The vacuuming pattern that testers adhere to when conducting debris pickup tests.
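For a rough sense of what that pacing implies, here's a small Python sketch. The boundary-box width, metronome tempo, beats per pass, and debris weights are all assumptions for illustration; the actual score is assigned subjectively by testers, not computed like this.

```python
def sweep_speed_cm_per_s(box_width_cm: float, tempo_bpm: float, beats_per_pass: int) -> float:
    """Speed the metronome imposes when one pass across the box spans a fixed number of beats."""
    seconds_per_pass = beats_per_pass * 60.0 / tempo_bpm
    return box_width_cm / seconds_per_pass

def remaining_fraction(debris_laid_g: float, debris_recovered_g: float) -> float:
    """Share of the test debris still on the floor after the sweep."""
    return max(0.0, (debris_laid_g - debris_recovered_g) / debris_laid_g)

print(sweep_speed_cm_per_s(box_width_cm=60.0, tempo_bpm=60.0, beats_per_pass=2))  # -> 30.0 cm/s
print(f"{remaining_fraction(debris_laid_g=20.0, debris_recovered_g=17.0):.0%} left behind")  # -> 15% left behind
```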

Vacuums can achieve the same score in different ways: some will be more effective in dealing with small, fine debris but struggle with larger material, while others will achieve the opposite result. Consider the following comparison between two vacuums that scored similarly in different ways, the Miele DuoFlex HX1 and the Dyson Ball Animal 2:

Results of the Hard Floor Pick-Up test for the Miele DuoFlex HX1 (Notice that the vacuum did well with large debris, but struggled a little with finer material - Score: 7.0)
Results of the Hard Floor Pick-Up test for the Dyson Ball Animal 2 (Meanwhile, this vacuum does better with finer debris, but leaves more large material behind - Score: 7.0)
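One way to see how that can happen is with a toy scoring function. The 50/50 weighting and pick-up percentages below are invented purely to show how two different debris profiles can land on the same number; they aren't RTINGS' actual rubric.

```python
def overall_score(fine_pct: float, large_pct: float) -> float:
    """Average fine- and large-debris pick-up rates (0-100%) onto a 10-point scale."""
    return round((0.5 * fine_pct + 0.5 * large_pct) / 10, 1)

print(overall_score(fine_pct=60, large_pct=80))  # stronger on large debris -> 7.0
print(overall_score(fine_pct=80, large_pct=60))  # stronger on fine debris  -> 7.0
```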

We adjust a vacuum's settings to align with what we assume most people will use when vacuuming their homes. An obvious example would be setting a height-adjustable vacuum head to its max height when cleaning high-pile carpeting or turning off a floorhead's roller brush on hard floors.

The Pet-Hair Pick-Up test is conducted only on low-pile carpet, as we've found that this is the most challenging surface to clean as far as pet hair is concerned. If a vacuum does well here, it's all but guaranteed it can handle pet hair on other floor types, too.

Writing

Once we complete testing, another tester peer reviews the data before handing the review off to the writer for another once-over. These additional validation steps exist for a couple of reasons: having many eyes on the data minimizes the chance of technical errors making their way into the final review, and it allows us to provide a quick fix if either party notices that something is amiss with the initial results. This process also gives the writer time to do outside research on the product, whether from user reports or other review sites, and make sure our data doesn't fall wildly out of line with those experiences.

Before diving into the writing process, here's a quick explanation of the philosophy. Writing reviews isn't about reiterating data obtained from testers wholesale. Instead, our writing focuses on contextualizing that data within a real-world framework, all to explain how these results will impact you in day-to-day use.

The writer then writes each test box in the review based on the data and results obtained by the tester. Writers also use testers' notes, which are internally written points from the tester that convey additional information or supplementary test data that won't necessarily fit in the existing fields.

Beyond the individual test boxes, writers also write the Intro, Verdict, Variants, and Compared sections of the review, not to mention the side-by-side comparisons to other vacuums. The Intro is meant to outline what a product is, its main features, and how it fits into a brand or market. That said, it doesn't refer to specific test results. The Verdict sections sum up whether a vacuum is suited for a given usage based on an aggregate of test data. The Compared section provides a quick assessment of whether or not a product is a compelling option, as well as its standout features and faults. Lastly, the Side-by-Side boxes offer a quick comparison of how a given vacuum stacks up to its competition, not only based on individual test results but also intangibles that we can't cover within the scope of our methodology.

Writing a full review typically takes 1-2 business days, with an additional day for another round of peer review, in which the initial draft is looked over by a second writer and the tester who tested the product. This step ensures that the writing is accurate and error-free and that the data makes sense.

After writing, we hand the review off to our hard-working editing team. An editor double-checks to ensure the review is free of typos and syntax errors. They're also tasked with ensuring the final product remains stylistically consistent with other product reviews and maintains the same high standard of quality you'd expect from our site.

Recommendations

Our recommendation articles seek to whittle down the enormous list of products we've tested into targeted lists to help you choose a vacuum that fits your needs. These articles are updated regularly to ensure that the recommended products are still what we deem to be the best fit for a specific need, are still in stock at most major retailers, and are still priced appropriately. We don't only go off test data and our own aggregate scoring to make decisions here; we also weigh aspects that we can't fully capture on our test bench or assign a hard score, like general ease of use, the presence of a certain feature, or simpler things like pricing and general availability.

It's important to remember that at the end of the day, our recommendations are just that: recommendations. We aren't aiming to provide a be-all-end-all tier list for people hip-deep in the vacuum space. Instead, we want these articles to be most helpful for a non-expert audience or those who may not have purchased a vacuum for a while and aren't sure exactly where to start with the current product landscape.

Post-Review Processes & Retests

We keep the products we test for as long as they're relevant and widely available. When it does come time to resell some vacuums, often to make space for new inventory, all teams involved in the review process will collaborate to produce a list of products we deem "safe" to sell. In this context, products that are "safe" to sell are units that aren't featured in recommendation articles, are no longer popular with consumers, or have no noteworthy features that would allow us to use them as references when updating our test bench.
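Read one way, that filter boils down to a short checklist. The sketch below assumes a unit has to clear all three criteria before it's considered safe to resell; the fields and model names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class TestedVacuum:
    model: str
    in_recommendation: bool   # currently featured in a recommendation article
    still_popular: bool       # still popular with consumers
    reference_value: bool     # useful as a reference for future test-bench updates

def safe_to_sell(inventory: list[TestedVacuum]) -> list[str]:
    """A unit is 'safe' to resell only if none of the keep-criteria apply."""
    return [v.model for v in inventory
            if not (v.in_recommendation or v.still_popular or v.reference_value)]

stock = [
    TestedVacuum("Model X", in_recommendation=True, still_popular=True, reference_value=False),
    TestedVacuum("Model Y", in_recommendation=False, still_popular=False, reference_value=False),
]
print(safe_to_sell(stock))  # -> ['Model Y']
```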

Keeping a relatively large inventory of vacuums is especially helpful when it comes time to retest a product. Retests can occur for a number of reasons, whether that be responding to new reports from members of the community regarding a specific product issue or testing a vacuum with a new attachment that wasn't previously available. In the rare instance that a technical error does slip through the many validation systems we have in place, we can also quickly retest a product and publish a fix.

The retest process is essentially a scaled-down review. Testers perform the retests, and writers and testers collaborate to validate the new data. This data is then fully passed over to the writing team, who update the affected test boxes and text within the review to align with the new data. From there, the updated review goes to a member of our editing team, who validates the work of our writers separately and publishes it on the website. For full transparency, we leave a public message to address what changes we made, why we made them, and which tests were affected.

Videos and Further Reading

If you're curious about watching our review pipeline in action, check out this video.

For all other pages on specific tests, test bench version changelogs, or R&D articles, you can browse all our vacuum articles.

How to Contact Us

Constant improvement is key to our continued success, and we rely on feedback to help us. Please send us your questions, criticisms, or suggestions anytime. You can reach us in the comments section of this article, anywhere on our forums, on Discord, or by emailing feedback@rtings.com.



Want to learn more? Check out our complete list of articles and tests on the R&D page.
