The Effect of Display Design and Number of Follow-up Questions on the Accuracy of Survey Responses
Alda G. Rivas, U.S. Census Bureau
A branching question may lead respondents to additional follow-up questions, which increases respondent burden. This may motivate respondents to change their response to the branching question to avoid the follow-up questions, or to provide inaccurate responses to the follow-up questions in an effort to reduce their burden. The design used to display the branching question may also interact with the number of follow-up questions to produce different levels of accuracy. We implemented a 3 x 3 between-subjects design to explore the effect of display design (next-page, unfolding, grayed-out) and number of follow-up questions (3, 5, 7) on the accuracy of responses to the branching question and the follow-up questions. Our results indicated that most participants answered the branching question accurately, with most of the inaccurate responses observed in the "unfolding" display design. We also found that the "grayed-out" and "unfolding" display designs were more likely to yield accurate responses to the follow-up questions than the "next-page" design, regardless of the number of follow-up questions, and that presenting participants with either 5 or 7 follow-up questions decreased accuracy compared to presenting 3. Based on our findings, we generally recommend avoiding a "next-page" design and presenting a smaller number (e.g., 3) of follow-up questions. However, if the survey design requires 5 or 7 follow-up questions, a "grayed-out" or "unfolding" display design is preferable. The findings from our study provide a clear guide for survey practitioners to choose the combination of display design and number of follow-up questions that minimizes respondent burden and maximizes response accuracy.
Can We Predict Dropout? The Predictive Power of In-Survey Burden Evaluations
Erica C. Yu, U.S. Bureau of Labor Statistics
Robin L. Kaplan, U.S. Bureau of Labor Statistics
Douglas Williams, U.S. Bureau of Labor Statistics
The Bureau of Labor Statistics and the Census Bureau are conducting research on the modernization of the Current Population Survey (CPS). The CPS is a monthly longitudinal survey collected through a combination of telephone and in-person interviews. Ongoing research examines ways to improve response rates and the respondent experience and to manage survey costs. One way to address these issues is to add a web mode of collection: the CPS is currently interviewer-administered only, and a web mode would introduce a self-administered option. A pilot test of mixed-mode administration of the CPS was conducted in 2025. Immediately after completing a wave of the CPS, either by self-response on the web or by personal interview, all respondents had the option to provide feedback about their survey experience by answering closed-ended questions, including how burdensome the survey was, how relevant the questions felt, and how easy or difficult it was to answer them. The debriefing questions were repeated at the end of each monthly wave of interviews; respondents could participate in up to three waves for this pilot test. This design resulted in a dataset of more than 2,500 cases with data on response mode, demographics, debriefing responses, and whether respondents dropped out in later waves. Analysis focused on comparing the survey experience between the web and personal interview modes, whether labor force status is associated with ratings of difficulty and burden, and which debriefing responses are associated with nonresponse in subsequent waves. Discussion will include debriefing questionnaire design, factors related to the respondent experience, and the relationship between response mode and burden.
Web-Probing: Motivating Detailed Responses in the Absence of an Interviewer
Robin Kaplan, U.S. Bureau of Labor Statistics
Tywanquila Walker, U.S. Bureau of Labor Statistics
Web-probing, or web-based unmoderated pretesting, is an increasingly common survey pretesting methodology that complements traditional cognitive interviewing in an online self-administered format. Participants in web-probing studies are often asked a mix of closed- and open-ended probes to provide insight into their response processes, including comprehension, recall, judgment, and response formation. The lack of an interviewer in unmoderated testing may lead participants to satisfice, provide insufficient responses, or skip questions entirely, resulting in poor pretesting data or none at all. Research shows that motivational statements (prompts) asking survey respondents to answer items they skipped, or items deemed important, can be effective at reducing item nonresponse and increasing response quality in web surveys. However, the use of such prompts has not been assessed in web-probing studies. To understand how effective prompts are in the context of web-probing, an experiment was embedded into a web-based study pretesting survey questions about labor force participation and disability. The instrument included seven open-ended probes. Online participants (N=380) from a nonprobability panel were randomly assigned either to receive a prompt encouraging them to provide a detailed response to each probe or to receive the open-ended probe with no prompt. Item nonresponse and character count were assessed to determine whether prompts can increase response rates and the length of open-ended responses. We found that prompts did not affect item nonresponse but did increase the length of open-ended responses. Prompts worked best when the probes asked about topics that were personally relevant to participants. Implications for the design of web-probing studies are discussed.
Using Remote CAPI in Controlled Residential Settings: The Implementation of the Survey of Prison Inmates R&D Field and Cognitive Test
Scarlett Pucci, RTI International
Ashley Murray, RTI International
Eliza Snee, RTI International
Tim Smith, RTI International
The Survey of Prison Inmates (SPI) is a cross-sectional survey of state and federal prison populations conducted by the Bureau of Justice Statistics (BJS) since 1974, with seven collections, most recently in 2016. Traditionally, the SPI has been administered in person by field interviewers. In 2024, BJS initiated the SPI Research and Design (SPI R&D) program to investigate ways to conduct the interviews remotely, potentially reducing both the burden placed on facility staff and travel costs. We worked with BJS to design and conduct a field test and cognitive interviews. This research used both in-person and remote computer-assisted personal interviews (CAPI) to test for mode effects when interviewing incarcerated populations. Remote CAPI interviews relied on study- and facility-provided equipment (e.g., laptops, WiFi, hot spots) and video teleconferencing platforms. For the field test, sample members were randomly assigned to the control (in-person) or treatment (remote) group within each facility. Results were analyzed to compare the modes in terms of response, respondent burden, and data quality. Following the field test, protocols were updated to incorporate lessons learned, and cognitive interviews were conducted using the same modes and devices. The cognitive interviews tested for differences in response quality between modes. This research breaks new ground for the field of survey research by addressing whether data collection in controlled residential settings can rely on remote methods to produce reliable and comparable data across modes. Takeaways and lessons learned will be presented.