
2020 Census Data Quality

Ensuring Quality

Our goal for every census is to count everyone once, only once, and in the right place. Ensuring the quality of the census results is built into every step of conducting the census—from designing the census to collecting and processing the data.

We began preparing for the 2020 Census before the 2010 Census ended.

We incorporated lessons learned from the 2010 Census.

We conducted operational census tests across the country—in 2013, 2014, 2015, 2016, and 2018.

Throughout the decade, we extensively researched how people perceived the census and what would motivate them to complete it.


We mailed up to seven invitations and reminders, including up to two paper questionnaires to households, and gave the public three options to respond—online, by phone, or by mail.

For households that we couldn’t reach by mail, a census taker dropped off a paper questionnaire or visited to interview the household.

The census questionnaire was available in 13 languages—giving over 99% of U.S. households the option of responding online or by phone in their language. Help with responding was also available in 59 languages through language guides available in print and video on 2020census.gov.

We encouraged response through advertisements in English and 46 additional languages, reaching over 99% of households more than 300 times.

Over 400,000 partners encouraged people in their communities to respond.

When we didn’t receive a response, census takers visited or called to interview households.

We also had special operations to count people living in group quarters, transitory locations, and other types of living situations.


For example, the online questionnaire has prompts built in to help people respond completely and accurately.

When we send census takers to knock on people’s doors, we check the quality of the census taker’s work too.

As we add up the numbers, we check to make sure they were processed correctly and that they make sense by comparing them to other data. If something doesn’t look right, we take a closer look and fix it if there’s a problem.

We used not only internal data sources to ensure quality but external ones as well. Several efforts were central to harnessing the power of those data, including a Fusion Center, a Decennial Field Quality Monitoring (DFQM) program, and a Real Time Analysis of Data (RTAD) effort. Information about these and more can be found in the Multidimensional Quality Assessment of the United States 2020 Census by UNECE.

More information about the extensive planning and operations that go into conducting a quality census is available in the 2020 Census Operational Plan.


Evaluating Quality

We continue to evaluate the quality of the census even after the data are released. To do so, we use several techniques to assess how well we conducted the census and also compare census results with other population measures. We get a fuller picture of the quality of the census by looking at operational quality metrics, assessments and evaluations, and comparisons to benchmarks.

We’re also committed to sharing what we know, when we know it, and have published these quality results here.

Operational Quality Metrics

One way we evaluate the quality of a census is by looking at operational quality metrics. For the first time, we released a number of data quality indicators along with the first results from the 2020 Census.

Looking for an overview of the different operational metrics we use and what they tell us about the quality of census data? See the Introduction to Quality Indicators post. More information is also available in the Frequently Asked Questions.

This first set of metrics was released on April 26, 2021 and describes how data for all addresses, including housing units and group quarters, were collected. The metrics show whether people responded to the 2020 Census online, by telephone, or in person, and they describe how the Census Bureau accounted for addresses that did not respond. They also report whether nonresponding addresses were occupied, vacant, deleted, or remained unresolved.
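
The resolution breakdown described above amounts to a simple tally of outcomes. As a hedged illustration (not Census Bureau code, and with made-up data), it can be sketched as:

```python
from collections import Counter

# Hypothetical resolution outcomes for a handful of nonresponding
# addresses; the categories mirror those named in the metrics:
# occupied, vacant, delete (address removed), or unresolved.
outcomes = ["occupied", "vacant", "occupied", "delete", "unresolved", "occupied"]

counts = Counter(outcomes)
total = len(outcomes)

# Share of nonresponding addresses in each resolution category.
shares = {status: count / total for status, count in counts.items()}
print(shares)
```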


The second set of metrics was released on May 28, 2021. It focuses on how housing unit data were collected and describes the population size of occupied housing units. These metrics enable us to examine how data collection may have varied with the size of the household. They provide indicators for single- and two-person households and categorize these households by the census operation through which the household responded (such as self-response, nonresponse follow-up, other census operations, or count imputation). These metrics also show the percentage of occupied and vacant housing units by census operation.


This is part one of the third set of metrics and was released on August 18, 2021. It contains summary statistics of county-level and tract-level metrics that were previously included in the first release. The substate summaries for each state show the local variation and spread in the operational quality metrics. We calculated each of these metrics for every county and census tract in the nation. Then we calculated the mean, standard deviation, and median values for each state.
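
The per-state summaries described above reduce to computing a mean, standard deviation, and median over each state's local values. A minimal sketch, assuming hypothetical tract-level values of one metric (the real summaries cover every county and tract in the nation):

```python
import statistics

# Hypothetical tract-level values of one operational metric, grouped by
# state; the actual metrics are computed for every county and tract.
tract_metric_by_state = {
    "AL": [0.58, 0.61, 0.55, 0.63],
    "WY": [0.52, 0.66, 0.59],
}

# Mean, standard deviation, and median summarize the local variation
# and spread within each state.
summaries = {
    state: {
        "mean": statistics.mean(values),
        "stdev": statistics.stdev(values),
        "median": statistics.median(values),
    }
    for state, values in tract_metric_by_state.items()
}
print(summaries)
```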


This is part two of the third set of metrics and was released on August 25, 2021. It contains item nonresponse rates for demographic characteristics across operations. These metrics show the rate at which respondents did not answer certain census questions: age or date of birth, race, and Hispanic origin questions, as well as households that only provided the count of people living in the household but did not answer other questions.
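
An item nonresponse rate is simply the share of responses in which a given question was left blank. A minimal sketch, using hypothetical household records rather than actual census data:

```python
# Hypothetical household responses; None marks a question left blank.
households = [
    {"age": 34, "race": "White", "hispanic_origin": "Not Hispanic"},
    {"age": None, "race": "Asian", "hispanic_origin": "Not Hispanic"},
    {"age": 52, "race": None, "hispanic_origin": None},
]

def item_nonresponse_rate(records, item):
    """Share of records that did not answer the given question."""
    missing = sum(1 for r in records if r[item] is None)
    return missing / len(records)

for item in ("age", "race", "hispanic_origin"):
    print(item, round(item_nonresponse_rate(households, item), 3))
```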


This fourth set of metrics was released on October 6, 2022. It contains the same metrics as the substate summaries (released August 18, 2021) but for individual counties and a subset of those metrics for individual tracts. The metrics look at how the census obtained a response for each address. To produce these local-level metrics, the Census Bureau used the 2020 Disclosure Avoidance System, based on differential privacy, to protect the privacy of respondents and the confidentiality of their responses. This is the same system used for all 2020 Census data products.


Quality Assessments

Another way we evaluate the quality of the census is by conducting our own assessments and evaluations and by engaging external organizations for independent reviews. The results of this work serve as the basis from which the following census is designed, tested, and implemented.

With each decennial census, the Census Bureau completes a series of assessments and evaluations of census operations, new methods, and census data quality.

2020 Census Evaluations and Experiments (EAE) operation

The 2020 EAE operation was designed to document and evaluate the 2020 Census programs and operations, as well as to test new methods suggested by previous evaluation work.

Visit our EAE webpage to download 2020 Census operational assessment reports and learn more about the topics of forthcoming evaluations.

Additional reports

This report assesses the quality of the administrative record rosters used to enumerate some addresses when a self-response was not available but high-quality administrative records were.

This report presents the results of the 2020 Census Group Quarters count imputation process. This process was used to provide a population size for occupied group quarters that lacked a reported population size.


We engage respected members of the scientific and statistical community to conduct independent assessments of the 2020 Census. Their reports advise the Census Bureau on improving future censuses and help the public understand the quality of the 2020 Census data.

The experts are from three groups:

National Academy of Sciences Committee on National Statistics (CNSTAT)

CNSTAT has established panels to assess each decennial census and to suggest parameters for the research and planning of subsequent censuses. The committee is given access to internal operational and response data to inform its reports.

Learn more about the ongoing work by CNSTAT.

American Statistical Association (ASA) Quality Indicators Task Force

The ASA includes experts who know census work well. The ASA was given access to internal operational and response data from the 2020 Census to help understand the accuracy and coverage of the 2020 Census enumeration. 

See the reports issued by the ASA.

JASON group

The JASON group is an autonomous group of academics, senior scientists, engineers, and technical experts who provide U.S. government agencies with high-level technical and analytical expertise. This expertise is provided via directed studies and reports.

JASON has completed multiple studies for the Census Bureau, covering various aspects of the 2020 Census and other Census Bureau operations.

Read the JASON reports on 2020 Census data quality:


Comparisons to Other Ways of Measuring the Population

We also produce coverage measures that provide insight into the quality of the census and are used to determine how we can improve future censuses. Comparing the 2020 Census results to other sources of data enabled us to analyze differences. Differences could be the result of errors either in the census or in the other data sources. Some differences are simply the result of different ways of collecting or generating the data. The key is to determine if a difference is expected or plausible.

We generate coverage measures through the Demographic Analysis program and the Post-Enumeration Survey. These coverage measures are produced independently of the census and provide alternate estimates of the total population and the number of housing units. We then compare these measures to census results.
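
At its simplest, such a comparison reduces to a percent difference between the census count and an independent estimate. A sketch using the published 2020 resident population and a hypothetical benchmark figure (the benchmark below is invented for illustration, not an actual Demographic Analysis estimate):

```python
# Published 2020 Census resident population; the benchmark figure is
# hypothetical, standing in for an independent estimate such as one
# produced by the Demographic Analysis program.
census_count = 331_449_281
benchmark_estimate = 332_600_000

# Negative values point toward a possible net undercount relative to
# the benchmark; positive values toward a possible net overcount.
percent_difference = 100 * (census_count - benchmark_estimate) / benchmark_estimate
print(round(percent_difference, 2))
```

Whether such a difference signals an error in the census, an error in the benchmark, or simply a difference in how the data were generated is exactly the judgment the coverage-measurement programs are designed to make.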

Our Comparisons to Benchmarks as a Measure of Quality post presents further information about these estimates and how they are used to evaluate quality.

Page Last Revised - May 1, 2023