Applying and Evaluating Logical Coverage Edits to Health Insurance Coverage in the American Community Survey

July 2010
Victoria Lynch, Michel Boudreaux, and Michael Davern


This report describes an evaluation of logical editing applied to the health insurance data collected by the American Community Survey (ACS). Logical editing is a data processing technique in which survey responses that are likely inaccurate are changed so that they are consistent with other information obtained in the survey. This project is part of the Census Bureau’s on-going effort to identify sources of error in health insurance data and to improve methods for mitigating the problems that such errors cause (SNACC 2008, Turner et al. 2009, Pascale et al. 2009).

The ACS began gathering health insurance information in 2008. Data are gathered from a single question that asks respondents whether they have any of seven types of coverage at the time of the survey. It also permits respondents to provide a verbatim response if their coverage type is not listed. Respondents are asked to report for themselves and for each member of their household (Turner et al., 2009). Given the novelty of the ACS health insurance item, relatively little is known about the accuracy of the data it produces. However, as in other surveys that measure health insurance coverage, response errors in the ACS likely result in problematic levels of misclassification (O’Hara 2009).

There are several sources of measurement error in surveys. Item wording, the mode of survey administration, and a host of other survey design and respondent factors can induce measurement error (Groves et al. 2004). In surveys of health insurance coverage, the best-documented form of coverage misclassification is the “Medicaid Undercount,” a phenomenon in which survey estimates of Medicaid enrollment are lower than administrative counts. By matching individual survey records to state administrative records, researchers have found that survey respondents often do not report Medicaid coverage when administrative records indicate they are covered (SNACC 2008, Davern et al. 2009). This apparent response error leads to an underestimation of Medicaid coverage and, to a lesser degree, an overestimation of uninsurance and other coverage types (depending on the extent of misclassification to non-Medicaid coverage). Consistent with findings from other surveys, a record-check study of the 2006 ACS Content Test found that respondents under-report Medicaid (O’Hara 2009).

In this project, we evaluate the use of logical coverage edits in the ACS as a remedy for the under-reporting of Medicaid and other types of coverage. The Census Bureau currently employs logical edits for this purpose in the Current Population Survey’s Annual Social and Economic Supplement (CPS). Information such as participation in cash transfer programs, age, and familial relationships often implies that individuals have Medicaid, Medicare, or military coverage even when such coverage is not directly reported in the CPS. Logical edits assign coverage in the CPS, but do not remove it. Therefore, logical editing may improve the sensitivity of the final coverage estimate, but not its specificity.
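The one-directional character of such edits can be illustrated with a minimal sketch. The field names below (`ssi`, `public_assistance`, `social_security`, `medicaid`, `medicare`, `age`) are hypothetical stand-ins, not the actual ACS or CPS variable names, and the two rules shown are simplified examples of the kind of logic described above, not the Bureau's actual edit specifications.

```python
def apply_logical_edits(person):
    """Assign coverage implied by other responses; never remove reported coverage."""
    edited = dict(person)  # leave the input record unmodified

    # Example rule: receipt of SSI or cash public assistance implies
    # Medicaid enrollment, so assign Medicaid if it was not reported.
    if (edited.get("ssi") or edited.get("public_assistance")) and not edited.get("medicaid"):
        edited["medicaid"] = True

    # Example rule: persons 65 or older who receive Social Security
    # are presumed to have Medicare.
    if edited.get("age", 0) >= 65 and edited.get("social_security") and not edited.get("medicare"):
        edited["medicare"] = True

    return edited

record = {"age": 70, "social_security": True, "medicare": False, "medicaid": False}
print(apply_logical_edits(record))
```

Note that both rules only ever set a coverage flag to true; a record that already reports coverage passes through unchanged, which is why editing of this kind can raise sensitivity without affecting specificity.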

Data from the 2008 ACS were disseminated without any logical coverage editing. To design and evaluate edit rules for future ACS years, we developed edit routines and applied them to the internal 2008 data file so that Census Bureau officials and members of their health insurance Technical Advisory Group (TAG) could make informed decisions about which edits comport best with enrollment and/or eligibility policies and are feasible in the ACS survey environment. Our results also inform data users about the design and impact of the edits and about how, broadly, they differ from the editing used in the CPS. The following questions framed our analysis:

  1. What types of edits should be applied in the ACS given what we know about the eligibility and enrollment procedures for different types of coverage?
  2. What are the impacts of these edits on estimates of specific types of coverage and what is the impact on the rate of uninsurance?
  3. Do the adjustments make sense given what we know about eligibility and enrollment procedures, ACS design, and reasons why survey respondents do not report true coverage?
  4. How do edits developed for the ACS differ from ones used in the CPS, both in content and effect on measures of uninsurance?