The purpose of this study is to determine, through behavior coding, how accurately interviewers ask survey questions and how well respondents answer them. The results identify problematic question wording and will guide future interviewer training. Behavior coding systematically describes interactions between interviewers and respondents by applying a set of uniform codes to the behaviors that occur during an interaction. Some codes capture the ideal situation, in which the question is read as worded and the response fits easily into the response categories; other codes capture aspects of the interaction that are less than ideal. Deviations from the ideal may indicate problematic questions and reduced data quality. Note that this is a qualitative study: neither its findings nor the percentages reported in the summary of this report should be interpreted as representative of the U.S. population.
The primary research question for this study is: How well do Coverage Followup survey questions perform in interviews? The Coverage Followup interview was used to resolve potentially problematic coverage situations identified during the decennial census. The primary goal of the Coverage Followup operation was to make sure that individuals were not counted at more than one location, counted at the wrong location, or omitted from the census. The 2010 Census Coverage Followup was a computer-assisted telephone interview and asked a series of questions related to where household members were living or staying on April 1, 2010, as well as questions designed to identify individuals who might have been staying in the household but who were omitted from the census return. The Coverage Followup instrument also included questions eliciting the same demographic information as the mailout census return, which were asked for any new persons identified, as well as for persons included on the roster but for whom this information had not been provided on the census return.
We examine this issue using data consisting of 239 audio-taped Coverage Followup interviews covering 861 household members. Of these, 122 interviews were conducted in English and 117 in Spanish, covering 355 and 506 household members, respectively. Six Census Bureau interviewers who did not work on the Coverage Followup operation and who speak both English and Spanish fluently were trained in behavior coding; each coded approximately 40 interviews. For each question, coders coded the first interaction between interviewer and respondent as well as the Final Outcome. Additionally, all coders coded ten of the same cases (five in English, five in Spanish) to test reliability; that is, when presented with the same interview, how often do the behavior coders independently apply the same codes? Using Fleiss' kappa, we find moderate agreement between behavior coders, with the exception of the coding of Spanish respondents, which is lower. Previous studies have likewise found that behavior coding is less reliable for Spanish-language versions of surveys (Goerman et al., 2008; Jurgenson and Childs, 2012).
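The report names Fleiss' kappa as its inter-coder reliability measure but does not show the computation. A minimal sketch of the standard Fleiss' kappa formula is given below; the function name, input layout, and example labels are illustrative and not taken from the study's materials:

```python
from collections import Counter

def fleiss_kappa(ratings):
    """Fleiss' kappa for inter-rater agreement.

    `ratings` is a list of items; each item is a list of category labels,
    one per rater, with the same number of raters for every item.
    """
    n_items = len(ratings)
    n_raters = len(ratings[0])
    categories = sorted({c for item in ratings for c in item})
    # n_ij table: for each item, how many raters chose each category
    counts = [[Counter(item)[c] for c in categories] for item in ratings]
    # Per-item observed agreement P_i, averaged to P-bar
    p_i = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
           for row in counts]
    p_bar = sum(p_i) / n_items
    # Marginal category proportions p_j give chance agreement P_e
    p_j = [sum(row[j] for row in counts) / (n_items * n_raters)
           for j in range(len(categories))]
    p_e = sum(p * p for p in p_j)
    return (p_bar - p_e) / (1 - p_e)
```

Perfect agreement across coders yields a kappa of 1; values near 0 indicate agreement no better than chance, which is why the "moderate" range reported above still signals usable, if imperfect, coding consistency.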
Jennifer Hunter Childs, Jennifer Leeman, and Michelle Smirnova. (2012). Behavior Coding Report of 2010 Census Coverage Followup English and Spanish Interviews. Research and Methodology Directorate, Center for Survey Measurement Study Series (Survey Methodology #2012-08). U.S. Census Bureau. Available online at <http://www.census.gov/srd/papers/pdf/ssm2012-08.pdf>.