Measuring Quality in a Census, Part 3

Wed Jul 14 2010
Robert Groves

An earlier post, “Quality in a Census Part 2,” noted that we have two basic tools to evaluate a census – process-oriented indicators and comparisons to other methods of estimating the population size.

This post is about some of those process-oriented indicators: the operations and features of the census that are relevant to answering “how good is it?”

A recent initiative of the American Association for Public Opinion Research urges survey organizations to become more transparent about their process indicators; we at the Census Bureau support such transparency as a way to permit more open evaluation of methods.

As we finish the nonresponse followup stage, we’re starting to get some indicators of how everything has gone thus far.

All the indicators are preliminary at this writing and will change somewhat as our final operations are completed. But here’s how things are looking right now:

1. We used a short form only for all of the approximately 135 million housing units. We finished the mailout/mailback phase with a 72 percent participation rate, versus 69 percent in 2000 (the combined short-form and long-form rate). The 2000 short-form rate alone was the same as this year’s: the 72 percent figure I’ve cited in earlier posts.
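For readers who want the arithmetic made explicit, here is a minimal sketch in Python. The return count is a hypothetical round number chosen only to reproduce the 72 percent figure, not an official count; the key point is that differences between rates (the 3-point gain over 2000, or the 2-point bilingual-form lift in item 2 below) are percentage points, not percents.

    # Illustrative sketch only; the counts are hypothetical, not official figures.
    def participation_rate(returned_forms, mailed_units):
        """Share of mailed housing units that returned a form, in percent."""
        return 100.0 * returned_forms / mailed_units

    rate_2010 = participation_rate(returned_forms=97_200_000,
                                   mailed_units=135_000_000)
    rate_2000 = 69.0  # combined short- and long-form rate cited for 2000

    print(f"2010 rate: {rate_2010:.0f} percent")                      # 72
    print(f"change: {rate_2010 - rate_2000:+.0f} percentage points")  # +3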

2. For about 13 million units in areas with 20 percent or more Spanish speakers, we sent out a bilingual form; our preliminary analysis suggests that it increased the participation rate in those areas by about 2 percentage points over the English-only form.

3. For about 40 million units, disproportionately in hard-to-enumerate areas, we sent out a replacement form a couple of weeks after the first mailed form. It succeeded in increasing the participation rate in these areas, and as a result participation rates vary less across areas in 2010 than they did in 2000.
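One way to make “less variation” concrete is to compare the spread of participation rates across areas. The sketch below uses invented area-level rates, and the standard deviation is my choice of dispersion measure, not necessarily the Bureau’s.

    # Illustrative sketch; the area-level rates are invented for illustration.
    from statistics import pstdev

    rates_2000 = [58, 64, 70, 74, 79]  # hypothetical area rates, percent
    rates_2010 = [66, 69, 72, 73, 76]  # a replacement mailing lifts the low end

    print(f"2000 spread: {pstdev(rates_2000):.1f} points")  # ~7.4
    print(f"2010 spread: {pstdev(rates_2010):.1f} points")  # ~3.4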

4. We used new questions to identify households with dynamic membership and then recontacted them (about 7.5 million in total) to make sure we didn’t miscount them; in 2000 we checked only large households in this manner (about 2.5 million). We don’t yet know how many problems were resolved by this effort.

5. We updated the address list multiple times using different sources. As a result, we had fewer “deadwood” listings (we deleted 4 million during our visits vs. 6 million in 2000). We also added fewer cases to the list when we did our field work. (This last point is a more ambiguous result: it could reflect either a better address list or less diligent field work.)

6. We designed a more efficient assignment process for the nonresponse followup stage, so the number of miles driven per interview is lower than in 2000; we are under budget on this stage.

7. Despite this, reaching the nonresponse followup cases and getting their cooperation was harder this time; after failing in six tries to contact and interview a unit, we had to obtain counts of residents from informed neighbors and building managers relatively more often (currently about 5 percentage points more such reporting in 2010 than in 2000).

8. The percentage of occupied units that yielded counts of persons, one way or another, may be very slightly lower this year (about 98.0 percent in 2010 vs. 99.5 percent in 2000). We think both this finding and item 7 above mirror the lower participation rates seen in surveys more broadly.

9. We implemented a reinterview process whereby a portion of essentially every enumerator’s work was redone and checked against the original results. (In 2000, only about 75 percent of enumerators’ work was subject to reinterview.)
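As a rough sketch of how such a check can operate (the sampling logic and names here are my illustration, not the Bureau’s actual procedure):

    # Illustrative sketch: redo a sample of every enumerator's cases and
    # compare household counts against the original results.
    import random

    def reinterview_check(cases_by_enumerator, redo, sample_share=0.05):
        """Return {enumerator_id: discrepancy rate} over a sampled portion.

        cases_by_enumerator: {enumerator_id: {case_id: reported person count}}
        redo: function(case_id) -> person count from the independent reinterview
        """
        rates = {}
        for enum_id, cases in cases_by_enumerator.items():
            k = max(1, int(len(cases) * sample_share))  # sample every workload
            sampled = random.sample(sorted(cases), k)
            mismatches = sum(1 for c in sampled if redo(c) != cases[c])
            rates[enum_id] = mismatches / k
        return rates

    # Toy usage: a stub reinterview that always finds 3 residents.
    work = {"E1": {"c1": 3, "c2": 4, "c3": 3}, "E2": {"c4": 3}}
    print(reinterview_check(work, redo=lambda case: 3))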

10. We found a smaller proportion of enumerators failing to meet our quality standards than we did in 2000. (This, too, has multiple interpretations; we used much more consistent, computer-assisted rules for determining violations than were used in 2000.)

11. We found many more vacant units when we went out for nonresponse followup than in 2000 (about 14.3 million vs. 9.9 million); that makes sense, given the widely publicized foreclosure rates. However, we need to determine the April 1 residency status of units that are now vacant, so they pose challenges in our nonresponse followup.

12. Finally, in my professional experience with large data collection activities, problems during the data collection phase lead to missed deadlines and budget overruns. For this census, every operation since the fall of 2009 has been on schedule, and cumulatively we’re significantly under budget.

As these indicators reveal themselves, some look better than the 2000 experience and some do not, as you can see above.

We’ll gradually refine these results as our final quality assurance operations (the Vacant/Delete check and Field Verification) take place. I’ll report them when we have them, especially those that show any changes from our initial insights.

Please submit any questions pertaining to this post to ask.census.gov.
