To save time and money, and to reduce nonresponse, survey practitioners often opt to accept some data from proxies, who report for and about sample persons who are unavailable, unable, or unwilling to self-respond. The common assumption is that the benefits of allowing proxy reporting are bought at the expense of data quality. The survey methods literature, however, yields little convincing evidence that proxy data are usually more contaminated with measurement error than data obtained from self-respondents (Moore, 1988).
This paper examines and compares the measurement error properties of self- and proxy reports of government transfer program participation from the U.S. Census Bureau's Survey of Income and Program Participation (SIPP). Although the nonexperimental design of the evaluation poses analytical difficulties, the criterion administrative record data permit a direct assessment of error. The results provide only very weak support for the common wisdom. The measurement error advantage of self-respondents is generally small (and for one type of error may even be reversed), and attempts to identify response circumstances that might prove especially detrimental to proxy reporting also fail to produce consistent or dramatic effects. Rather than any self/proxy difference, the far more compelling feature of the data is the high level of underreporting error among all types of respondents in their reports of monthly program participation and month-to-month participation change.
Moore, Jeffrey C. (2010). Proxy Reports: Results from a Record Check Study. Statistical Research Division Research Report Series (Survey Methodology #2010-09). U.S. Census Bureau. Available online at <http://www.census.gov/srd/papers/pdf/rsm2010-09.pdf>.