U.S. Department of Commerce

Information Quality


Statistical Quality Standard D3: Producing Measures and Indicators of Nonsampling Error


Purpose: The purpose of this standard is to ensure that measures and indicators of nonsampling error are computed and documented to allow users to interpret the results in information products, to provide transparency regarding the quality of the data, and to guide improvements to the program.

Scope: The Census Bureau’s statistical quality standards apply to all information products released by the Census Bureau and the activities that generate those products, including products released to the public, sponsors, joint partners, or other customers. All Census Bureau employees and Special Sworn Status individuals must comply with these standards; this includes contractors and other individuals who receive Census Bureau funding to develop and release Census Bureau information products.

In particular, this standard applies to activities associated with producing measures or indicators of nonsampling error associated with estimates for Census Bureau information products. Examples of nonsampling error sources include:

  • Nonresponse (e.g., bias from household/establishment nonresponse, person nonresponse, and item nonresponse).
  • Coverage (e.g., listing error, duplicates, undercoverage, overcoverage, and mismatches between the frame of administrative records and the universe of interest for the information product).
  • Processing (e.g., errors due to coding, data entry, editing, weighting, linking records, disclosure avoidance methods, and inaccuracies of assumptions used to develop estimates).
  • Measurement (e.g., errors due to interviewer and respondent behavior, data collection instrument design, data collection modes, definitions of reference periods, reporting unit definitions, and inconsistencies in administrative records data).


  • Exclusions:
    In addition to the global exclusions listed in the Preface, this standard does not apply to:

    • Errors strictly associated with a modeling methodology. Statistical Quality Standard D2, Producing Estimates from Models, addresses these types of error.

Key Terms: Convenience sample, coverage, coverage error, coverage ratio, equivalent quality data, item allocation rate, item nonresponse, key variables, latent class analysis, longitudinal survey, measurement error, nonresponse bias, nonresponse error, nonsampling error, probability of selection, quantity response rate, reinterview, release phase, respondent debriefing, response analysis survey, total quantity response rate, and unit nonresponse.

Requirement D3-1: Throughout all processes associated with producing measures and indicators of nonsampling error, unauthorized release of protected information or administratively restricted information must be prevented by following federal laws (e.g., Title 13, Title 15, and Title 26), Census Bureau policies (e.g., Data Stewardship Policies), and additional provisions governing the use of the data (e.g., as may be specified in a memorandum of understanding or data-use agreement). (See Statistical Quality Standard S1, Protecting Confidentiality.)

Requirement D3-2: A plan must be developed that addresses:

  1. The general measures and indicators of nonsampling error that will be produced (e.g., coverage ratios, unit nonresponse rates, item nonresponse rates, data entry error rates, coding error rates, and interviewer quality control (QC) results).
  2. Any special evaluations to be conducted (e.g., studies of interviewer variance, measurement error, and nonresponse bias). Identify the:
    1. Motivation for the study.
    2. Types of errors addressed by the study.
    3. Measures and indicators to be generated.
    4. Data needed to conduct the evaluation and their sources.
    5. Methods for collecting and analyzing the data.
  3. Verification and testing of systems for producing measures and indicators of nonsampling error.
  4. Evaluating the measures and indicators to guide improvements to the program.

  Note: Statistical Quality Standard A1, Planning a Data Program, addresses overall planning requirements, including estimates of schedule and costs.

Requirement D3-3: Except in the situations noted below, weighted response rates must be computed to measure unit and item nonresponse. The weights must account for selection probabilities, including probabilities associated with subsampling for nonresponse follow-up.

    Response rates may be computed using unweighted data when:

    1. Monitoring and managing data collection activities.
    2. Making comparisons with surveys using unweighted response rates.
    3. Using weighted response rates would disrupt a time series.
    4. A weighted response rate would be misleading because the sampling frame population in an establishment survey is highly skewed, and a stratified sample design is employed. (See Sub-Requirement D3-3.2.)
    5. The Census Bureau simply collects data for a sponsor and performs no post-collection estimation.

    Note: In general, computing response rates is not appropriate for samples that are not randomly selected (e.g., convenience samples or samples with self-selected respondents).
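A minimal sketch of the weighted-response-rate idea in this requirement, in Python: each unit's base weight is the inverse of its selection probability (including any nonresponse follow-up subsampling), and the rate is the weighted share of eligible units that responded. The record layout and field names here are hypothetical, not the Census Bureau's production formulas (those appear in Appendices D3-A and D3-B).

```python
# Illustrative sketch only; record fields ("prob", "responded") are hypothetical.

def weighted_unit_response_rate(units):
    """Weighted unit response rate: each unit is weighted by the inverse of
    its selection probability, including any subsampling for nonresponse
    follow-up folded into that probability."""
    weighted_eligible = sum(1.0 / u["prob"] for u in units)
    weighted_respondents = sum(1.0 / u["prob"] for u in units if u["responded"])
    return weighted_respondents / weighted_eligible

# Hypothetical sample: two units drawn at probability 0.10, one at 0.50.
sample = [
    {"prob": 0.10, "responded": True},   # base weight 10
    {"prob": 0.10, "responded": False},  # base weight 10
    {"prob": 0.50, "responded": True},   # base weight 2
]
rate = weighted_unit_response_rate(sample)  # (10 + 2) / 22 ≈ 0.545
```

An unweighted rate on the same sample would be 2/3; the difference illustrates why the weights matter when selection probabilities vary.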

Sub-Requirement D3-3.1: For demographic surveys and decennial censuses, when computing unit response rates, item response rates or item allocation/imputation rates (for key variables), and total item response rates:

  1. Standard formulas must be used. (See Appendix D3-A.)
  2. The final edited data or edited outcome codes must be used, when available. If the final edited data are not used to compute the response rates, it must be noted.
  3. The definition or threshold of a sufficient partial interview must be noted if partial interviews are counted as interviews.

Sub-Requirement D3-3.2: For economic surveys and censuses, when computing unit response rates, quantity response rates (for key variables), and total quantity response rates:

  1. Standard formulas must be used. (See Appendix D3-B.)
  2. The type of response rate must be noted: unweighted response rate, quantity response rate, or total quantity response rate.
  3. The variable used in computing the response rate must be noted (e.g., total retail sales of an establishment).
  4. The definition of responding units must be noted.
  5. For total quantity response rates, the sources of equivalent quality data for nonresponding tabulation units must be listed (e.g., administrative records or qualified other sources such as Securities and Exchange Commission (SEC) filings or company annual reports).
  6. The edited data at the time of each estimate’s release phase must be used, when available.
  7. The final edited data for the final release must be used, when available. If the final edited data are not used to compute the response rates, it must be noted.

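As a hedged illustration of the distinction this sub-requirement draws, the sketch below computes a quantity response rate and a total quantity response rate for a hypothetical key variable (establishment sales). The quantity response rate is the weighted quantity from reporting units as a share of the estimated weighted total; the total quantity response rate also credits equivalent quality data (e.g., administrative records) obtained for nonrespondents. Field names and status codes are invented for the example; the standard formulas are in Appendix D3-B.

```python
# Illustrative sketch only; "weight", "sales", and "status" are hypothetical fields.

def quantity_response_rates(units, key="sales"):
    """Return (quantity response rate, total quantity response rate) for a
    key variable, using weighted quantities."""
    total = sum(u["weight"] * u[key] for u in units)
    reported = sum(u["weight"] * u[key] for u in units
                   if u["status"] == "reported")
    equivalent = sum(u["weight"] * u[key] for u in units
                     if u["status"] == "equivalent")  # admin records, etc.
    return reported / total, (reported + equivalent) / total

establishments = [
    {"weight": 1.0, "sales": 500.0, "status": "reported"},
    {"weight": 1.0, "sales": 300.0, "status": "equivalent"},  # from admin records
    {"weight": 1.0, "sales": 200.0, "status": "imputed"},     # no equivalent data
]
qrr, tqrr = quantity_response_rates(establishments)  # 0.5 and 0.8
```

Note how a highly skewed frame makes these rates behave very differently from an unweighted unit response rate: here one large reporter accounts for half the estimated total on its own.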
Sub-Requirement D3-3.3: Rates for the types of nonresponse (e.g., refusal, unable to locate, no one home, temporarily absent, language problem, insufficient data, or undeliverable as addressed) must be computed to facilitate the interpretation of the unit response rate and to better manage resources.

Sub-Requirement D3-3.4: For panel or longitudinal surveys, cumulative response rates must be computed using weighted data or cumulative total quantity response rates must be computed to reflect the total attrition of eligible units over repeated waves of data collection. If a survey uses respondents from another survey or census as its sampling frame, then the response rate of the survey (or census) serving as the frame must be included in the computation of the cumulative response rate.

Sub-Requirement D3-3.5: Cumulative response rates must be computed using weighted data over successive stages of multistage data collections (e.g., a screening interview followed by a detailed interview). If estimated probabilities of selection must be used and the accuracy of the response rate might be affected, then a description of the issues affecting the response rate must also be provided.

    Note: In most situations, a simple multiplication of response rates for each stage is appropriate. In other situations, a more complex computation may be required.
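The simple case described in the note can be sketched as the product of the stage-level response rates; the stage rates below are hypothetical figures for a two-stage collection (screening interview, then detailed interview).

```python
# Illustrative arithmetic only; stage rates are hypothetical.
from math import prod

stage_rates = [0.90, 0.80]  # screening interview, then detailed interview
cumulative = prod(stage_rates)  # 0.90 * 0.80 = 0.72
```

The more complex computations mentioned in the note arise, for example, when eligibility at a later stage depends on screening outcomes and estimated selection probabilities must be used.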

Sub-Requirement D3-3.6: Nonresponse bias analyses must be conducted when unit, item, or total quantity response rates for the total sample or important subpopulations fall below the following thresholds:

  1. The threshold for unit response rates is 80 percent.
  2. The threshold for item response rates of key items is 70 percent.
  3. The threshold for total quantity response rates is 70 percent.  (Thresholds 1 and 2 do not apply for surveys that use total quantity response rates.)

  Note: If response rates fall below these thresholds in a reimbursable data collection, the sponsor is responsible for conducting the nonresponse bias analysis.

Requirement D3-4: Coverage ratios must be computed to measure coverage error, as an indicator of potential bias, using statistically sound methods (e.g., computing coverage ratios as the uncontrolled estimate of population for a demographic-by-geographic group divided by the population control total for the demographic-by-geographic cell used in post-stratification adjustments or using capture-recapture methods).

    Note: If computing coverage ratios is not appropriate, a description of the efforts undertaken to ensure high coverage must be made available.
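The first example in this requirement reduces to a simple ratio, sketched below with hypothetical figures: the uncontrolled survey estimate of population for a demographic-by-geographic cell divided by the independent population control total for that cell. A ratio below 1.0 indicates undercoverage of that cell relative to the control.

```python
# Illustrative sketch only; the figures are hypothetical.

def coverage_ratio(uncontrolled_estimate, control_total):
    """Coverage ratio for one demographic-by-geographic cell, as used in
    post-stratification adjustments."""
    return uncontrolled_estimate / control_total

# e.g., a single cell such as males 18-24 in one state
ratio = coverage_ratio(uncontrolled_estimate=92_500, control_total=100_000)  # 0.925
```

Capture-recapture methods, the other approach named above, estimate coverage from the overlap between two independent enumerations rather than from an external control total.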

Requirement D3-5: Measures or indicators of nonsampling error associated with data from administrative records must be computed to inform users of the quality of the data.

    Examples of measures and indicators include:

    • Coverage of the target population by the set of administrative records.
    • The proportion of administrative records that have missing data items or that have been imputed to address missing data.
    • The proportion of data items with edit changes because the data items were invalid.
    • The proportion of records lost from the analysis or estimate due to nonmatches between linked data sets.
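Several of the indicators listed above are simple proportions over a set of administrative records. The sketch below computes two of them for a hypothetical record layout (the field names `income`, `imputed_income`, and `matched` are invented for illustration).

```python
# Illustrative sketch only; record fields are hypothetical.

def admin_record_indicators(records, key_items):
    """Compute two quality indicators over administrative records:
    - the proportion of records with a missing or imputed key item, and
    - the proportion of records lost to nonmatches in record linkage."""
    n = len(records)
    missing_or_imputed = sum(
        1 for r in records
        if any(r.get(item) is None or r.get("imputed_" + item, False)
               for item in key_items)
    ) / n
    unmatched = sum(1 for r in records if not r.get("matched", True)) / n
    return {"missing_or_imputed": missing_or_imputed, "unmatched": unmatched}

records = [
    {"income": 50_000, "matched": True},
    {"income": None, "matched": True},                           # missing item
    {"income": 42_000, "imputed_income": True, "matched": False},  # imputed; unlinked
]
stats = admin_record_indicators(records, key_items=["income"])
# missing_or_imputed = 2/3, unmatched = 1/3
```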

Requirement D3-6: Measures or indicators of nonsampling error associated with data collection and processing activities must be computed to inform users of the quality of the data.

    Examples of indicators of nonsampling error include:

    • Error rates for data entry/data capture operations.
    • Error rates and referral rates for coding operations.
    • Imputation rates and edit change rates for editing and imputation operations.

    Examples of analyses or studies that generate measures or indicators of nonsampling error include:

    • Geocoding evaluation studies (e.g., address matching rates and analysis of rates of allocation to higher level geographic entities based on postal place-name or ZIP Code matches).
    • Analyses of geospatial accuracy (e.g., analysis of locational information in relation to geodetic control points).
    • Response error evaluation studies (e.g., reinterview and latent class analysis).
    • Interviewer variance studies.
    • Respondent debriefing studies.
    • Response analysis surveys.
    • Record check or validation studies.
    • Mode effect studies.

Requirement D3-7: Methods and systems for calculating measures and indicators of nonsampling error must be verified and tested to ensure all components function as intended.

    Examples of verification and testing activities include:

    • Verifying that calculations are correct.
    • Validating computer code against specifications.
    • Conducting peer reviews of specifications and coding.
    • Using test data to check computer programs.

Requirement D3-8: Measures and indicators of nonsampling error must be evaluated to guide improvements to the program.

    Examples of evaluation activities include:

    • Analyzing the quality control results of processing systems (e.g., error rates from clerical coding and clerical record linkage) and developing improvements to the systems (e.g., improving clerical coding tools or improving training for clerks).
    • Evaluating the results of nonsampling error studies (e.g., response analysis surveys, respondent debriefing studies, and response error reinterview studies) and implementing improvements (e.g., revising questionnaire wording for problematic questions, revising interviewer procedures, or revising interviewer training).
    • Analyzing the results of interviewer quality control systems (e.g., Quality Control (QC) reinterviews, Computer Assisted Telephone Interviewing (CATI) monitoring, and observations) and developing improvements (e.g., improving interviewer training programs or revising questionnaires to address systemic problems).

Requirement D3-9: Documentation needed to replicate and evaluate the activities associated with producing measures and indicators of nonsampling error must be produced. The documentation must be retained, consistent with applicable policies and data use agreements, and must be made available to Census Bureau employees who need it to carry out their work. (See Statistical Quality Standard S2, Managing Data and Documents.)

    Examples of documentation include:

    • Plans, requirements, specifications, and procedures for the systems.
    • Computer source code.
    • Results of quality control activities.
    • Results of nonsampling error studies and evaluations.
    • Quality measures and indicators (e.g., final coverage ratios and response rates).

    Notes:

    1. The documentation must be released on request to external users, unless the information is subject to legal protections or administrative restrictions that would preclude its release. (See Data Stewardship Policy DS007, Information Security Management Program.)
    2. Statistical Quality Standard F2, Providing Documentation to Support Transparency in Information Products, contains specific requirements about documentation that must be readily accessible to the public to ensure transparency of information products released by the Census Bureau.




Source: U.S. Census Bureau | Methodology and Standards Council |  Last Revised: July 08, 2013