Randomized Experiments, Evaluations, Administrative Records and Focus Groups

Imagine that you want to make an apple pie. You go to the grocery store and buy a bag of shiny red apples. Once home, though, you discover that a third of the apples are bad and that your pie will not be as spectacular as you had hoped.

If you had picked each apple individually instead, you would have minimized the number of bad apples you brought home.


It may be hard to believe, but how we develop methods for conducting the census works in a similar way.

Instead of assuming that an assortment of data collection methods will give you what you want, you rigorously test separate items to check for imperfect pieces. You make sure each is working as you intended — people can respond safely and conveniently, the questions are clearly understood by respondents, and the quality of the data collected is high.

Sometimes the Census Bureau does this by comparing a respondent’s answers with administrative records, which may contain already existing information about the respondent from an alternate source.

At other times, we conduct focus groups or one-on-one pretesting to verify respondents are interpreting questions the way we intended.

The major method of improving census questionnaires, though, is through randomized experiments and carefully planned evaluations, both during the decennial census year and throughout the 10 years in between.

Why it’s Important

We live in a data-rich world. Statistical agencies like the Census Bureau pride themselves on collecting high-quality data used to inform policymakers at all levels of government, the private sector, and the public.

In September 2017, the Commission on Evidence-Based Policymaking produced a report identifying the essential role that evaluation, and better use of the data the government already collects, play in policymaking.

By January 2019, Congress had acted on the report’s recommendations by passing Public Law No. 115-435, the Foundations for Evidence-Based Policymaking Act of 2018.

The Census Bureau’s own deputy director, Ron Jarmin, has written about the importance of harnessing the wealth of data the federal government collects to better inform how we do business.

Its chief scientist, John Abowd, has described the importance of evaluating privacy protections in today’s cyber world, where we leave a trail of electronic data crumbs behind us. That work aligns with the commission’s recommendations on using data in ways that protect privacy and confidentiality.

What the Census Bureau is Doing

All this leads to a basic question: How do we as a statistical agency know we are keeping pace with the most efficient and modern methods in this 21st century world of big data? And, how do we make sure our survey methods get not just high response rates, but also accurate responses?

The Census Bureau takes its role seriously as the nation’s premier statistical agency. As such, it maintains a healthy level of self-reflection.

Evaluation tools continually test the accuracy of our assumptions about the data we collect, what these data convey, and the methods we use to distribute information to the community.

A particularly important program within the Census Bureau is the Census Program for Evaluations and Experiments, which has supported research projects on the decennial census for decades.

For example, in a 1980 experiment, the program found that sending enumerators to specific geographic areas of the country to update the Census Bureau’s address list and leave a questionnaire for residents to complete improved the quality of the address list.

This experiment led to the development of an operation, called Update Leave, with a design similar to that of the 1980 experiment. Update Leave has been included in every subsequent census for select areas where most households may not receive mail at their home’s physical location.

In 1990, experiments showed that major structural changes to the layout of the census questionnaire could lead to significant increases in self-response.

The success of the 1990 Census experiment transformed the Census Bureau’s thinking about possible questionnaire layouts, and led to the redesign of the questionnaire in 2000.

In 2000, the evaluations and experiments program produced results that continue to inform how data are collected today.

At that time, the Census Bureau was not sure whether the soon-to-be-launched annual American Community Survey (ACS) could be conducted in decennial census years. An experiment in 2000, in which short-form census respondents were invited to complete a separate ACS-style form, found that this separate but related survey could occur at the same time.

The experiment solidified plans to replace the decennial long form with the ongoing ACS in all years. The Census Bureau still sends the ACS to a sample of housing units every year — including the census year — to ensure that the wealth of information that the survey provides to the public is not interrupted every 10 years.

Testing Leads to Change for 2020

As these examples show, census testing can result in monumental changes in the way the census is conducted.

The shift from a paper questionnaire to the internet as the primary mode of self-response can be traced to experiments and evaluations in previous decades.

For example, the 2010 Census did not have an internet response option, but it evaluated how such an option might be offered to respondents in future censuses.

A 2010 experiment asked a small sample of self-respondents to complete an additional form later in the year that largely mirrored the 2010 Census form. The difference was that one group of respondents could choose to respond online or by paper, while the other group was invited to respond online only.
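
The mechanics of this kind of two-arm design are simple to sketch. The snippet below is a hypothetical illustration, not Census Bureau code: it randomly splits a sample of housing-unit identifiers into the two arms described above, using a fixed seed so the assignment can be reproduced and audited.

```python
import random

def assign_arms(unit_ids, seed=2010):
    """Randomly split sampled units into two equal treatment arms.

    A hypothetical sketch of random assignment: one arm may choose
    to respond online or by paper, the other is invited to respond
    online only. Names and the seed are illustrative assumptions.
    """
    rng = random.Random(seed)   # fixed seed makes the split reproducible
    shuffled = list(unit_ids)   # copy so the caller's list is untouched
    rng.shuffle(shuffled)       # random order removes selection bias
    half = len(shuffled) // 2
    return {
        "choice_of_mode": shuffled[:half],  # may respond online or by paper
        "internet_only": shuffled[half:],   # invited to respond online only
    }

arms = assign_arms(range(1000))
```

Because assignment is random rather than self-selected, any later difference in data quality between the two arms can be attributed to the treatment (the response options offered) rather than to who chose which mode.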

This evaluation’s results supported the hypothesis that some people can be invited to respond online and receive no paper questionnaire with little effect on the quality of the data collected.

The results of this evaluation served as the foundation for all mid-decade internet census tests and the design of the 2020 Census itself.

What’s Happening in 2020

We are at it again, leveraging the 2020 Census to improve our methods for future censuses.

We are studying how different data collection strategies can improve response rates. For example, we are examining whether providing more training to English/Spanish bilingual interviewers increases the number of complete responses in Spanish.

Evaluation through experiments is key to making sure the data we collect are of the highest quality and the most complete they can be.

These types of programs within statistical agencies are more critical today than ever. They support the agency’s mission and improve the data we produce. Programs like this also align with the vision of efforts like the Commission on Evidence-Based Policymaking.

Randomized control experiments are a valuable tool in the Census Bureau’s, and other statistical agencies’, evaluation programs. Such a rigorous method of evaluation ensures that both our bag of apples and the resulting pie are of the highest quality.


Misty Heggeness is Senior Advisor for Evaluations and Experiments in the Census Bureau’s Research and Methodology Directorate.

Julia Coombs is the chief of the Census Experiments Branch in the Decennial Statistical Studies Division.


Page Last Revised - February 25, 2022