
Working Paper Number rsm2018-15
Aleia Clark Fobia, Casey M. Eggleston, Gerson D. Morales, Jessica L. Holzberg, Amber Henderson, Mandi Martinez, David Tuttle and Jennifer Hunter Childs

Abstract

The Census Bureau is required by law to inform respondents about access to and protections of the data it collects from them. Required messages cover topics such as who has access to respondent data, what the data are used for, and how the data are kept confidential. These and other requirements are not only spelled out in various federal laws but are also consistent with the Census Bureau’s principles of openness and transparency. For example, the Paperwork Reduction Act (PRA) requires that we tell respondents the authority under which data are collected, the purpose of the survey, an estimate of burden, whether responses are voluntary or mandatory, the extent of confidentiality protection, an approval number from the Office of Management and Budget (OMB), and a statement that an agency may not conduct a collection without the approval number. A general review of the messages the Census Bureau presents to respondents to explain data access and confidentiality found that the messages were not consistent across the decennial census and ongoing surveys, and recommended research on options for this messaging.

In response, staff from the Center for Survey Measurement (CSM) conducted cognitive testing of the range of the Census Bureau’s respondent messaging concerning privacy and confidentiality. This research was designed to explore various ways of communicating the required description of access to data collected under Title 13, as well as other language required by the PRA. Testing was designed to identify messages that were clear to respondents and communicated the intended content, with the goal of standardizing the messages across Census Bureau collections.

This research project had two stages: a large online study exploring many possible options for this language, followed by a smaller-scale cognitive test of the options that appeared most viable and reliable based on findings from the larger study. Online data collection was conducted from late November to mid-December 2015. The follow-up cognitive testing was conducted from early February to late March 2016 with thirty participants who were interviewed in person. This report documents findings from both stages of the study. The final recommendations identify language that is clear and easy for respondents to understand and that avoids vague and complex wording.
