Testing a New Field of Degree Question for the American Community Survey

Executive Summary

Test Objective

The 2007 American Community Survey (ACS) Content Test was designed to assess whether the ACS can reliably collect data on the field of a person's bachelor's degree. The field of degree question was proposed for the ACS to provide annual field of degree data for small geographic areas and to assist in building a sampling frame for the National Survey of College Graduates (NSCG).

Methodology

(see sections 2 and 3)

  • We tested two versions of a new question on field of bachelor’s degree – categorical (n=15,000 addresses) and open-ended (n=15,000 addresses).
  • Data were collected in all three ACS modes: mail, Computer-Assisted Telephone Interviewing (CATI), and Computer-Assisted Personal Interviewing (CAPI).
  • The Content Follow-Up reinterview was conducted by CATI to test reliability: respondents were asked the same version of the field of degree question as in the original interview, followed by the other version.

Major Evaluation Measures and Decision Criteria

Comparability of 2007 ACS Content Test field of degree data to other sources (see section 6.1)

  • Relative degree distributions are comparable to the 2003 NSCG data for both questions.
  • Categorical version estimates of field of degree are nominally higher than the open-ended version and the 2003 NSCG estimates due to more reporting of multiple degree categories (multiple-category reporting rates: categorical, 13.0 percent; open-ended, 6.4 percent; NSCG, 3.7 percent).

Field of degree item missing data rates (see section 6.2)

  • The item missing data rate for the open-ended question (6.5 percent) is significantly higher than the rate for the categorical question (3.2 percent); the computation is sketched below.
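As a point of reference, here is a minimal sketch of how an item missing data rate is computed, assuming the rate is the share of respondents eligible for the question (those reporting a bachelor's degree or higher) whose field of degree response is blank. The counts are hypothetical, not figures from this test.

    eligible = 4000  # respondents reporting a bachelor's degree or higher
    blank = 128      # eligible respondents who left the field of degree item blank
    rate = blank / eligible
    print(f"item missing data rate = {rate:.1%}")  # 3.2% on these hypothetical counts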

Reliability of field of degree estimates (see section 6.3)

  • Based on Gross Difference Rates and Indexes of Inconsistency (both sketched after this list), the open-ended question yields significantly more reliable estimates than the categorical question for most degree categories, indicating more consistent reporting for the open-ended question.
  • The open-ended question produces levels of inconsistency that are mostly in the low range, while the categorical question's levels fall in the low to moderate range.
  • Reporting of multiple degree categories in the categorical question is the main reason for the difference in reliability.
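For readers unfamiliar with these measures, here is a minimal sketch of how a Gross Difference Rate (GDR) and an Index of Inconsistency (IoI) are computed from a 2x2 cross-classification of original-interview and reinterview responses for a single degree category. The formulas follow the standard Census Bureau reinterview conventions, but the counts below are hypothetical, not taken from this test.

    def gross_difference_rate(a, b, c, d):
        """GDR: share of cases whose answer changed between the original
        interview and the reinterview, from 2x2 counts a = yes/yes,
        b = yes/no, c = no/yes, d = no/no."""
        n = a + b + c + d
        return (b + c) / n

    def index_of_inconsistency(a, b, c, d):
        """IoI: observed disagreement relative to the disagreement expected if
        the two responses were independent. Scaled by 100; by the usual
        convention, under 20 is low inconsistency, 20-50 moderate, over 50 high."""
        n = a + b + c + d
        expected = ((a + b) * (b + d) + (c + d) * (a + c)) / n
        return 100 * (b + c) / expected

    # Hypothetical counts for one degree category
    a, b, c, d = 480, 20, 30, 470
    print(f"GDR = {gross_difference_rate(a, b, c, d):.3f}")   # 0.050
    print(f"IoI = {index_of_inconsistency(a, b, c, d):.1f}")  # 10.0, the low range

Because respondents who mark multiple categories on the categorical version land in the off-diagonal cells (b and c) more often, both measures rise for that version, which is the mechanism behind the reliability difference described above.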

Correspondence between field of degree question versions (see section 6.4)

  • Using data from the reinterview only where people were asked the open-ended question followed by the categorical question, the agreement rate between the two versions is low (65.1 percent), indicating that respondents have difficulty classifying their degrees into the categories.
  • This low rate is largely due to multiple degree category reporting in the categorical version, as the sketch below illustrates.
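As a minimal sketch of how such an agreement rate could be tabulated, suppose each reinterview case carries the single category coded from the open-ended answer and the set of categories checked on the categorical version, and that agreement requires the checked set to equal the coded category exactly. The matching rule and the data below are illustrative assumptions, not the report's actual coding procedure.

    # Each case: (category coded from the open-ended answer,
    #             set of categories checked on the categorical version)
    cases = [
        ("engineering", {"engineering"}),               # exact agreement
        ("biology", {"biology", "physical sciences"}),  # extra box checked
        ("business", {"education"}),                    # outright disagreement
    ]

    def agreement_rate(cases):
        """Share of cases where the categorical response is exactly the one
        category coded from the open-ended answer."""
        agree = sum(1 for coded, checked in cases if checked == {coded})
        return agree / len(cases)

    print(f"agreement rate = {agreement_rate(cases):.1%}")  # 33.3% on this toy data

Under this rule, every multiple-category response counts as disagreement even when one of the checked boxes matches, which is consistent with the finding that multiple-category reporting drives the low agreement rate.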
Conclusions

The results favor the open-ended question, with the exception of the item missing data rates. The over-reporting of multiple degree categories points to a flaw in the design or administration of the categorical question and may indicate difficulty in classifying degrees into the categories. Cognitive testing also found issues with multiple-category reporting for the categorical question.
