Editing: Editing is the process of ensuring that survey data are accurate, complete, and consistent. Efforts are made at all phases of collection, processing, and tabulation to minimize errors.
Although some edits are built into the Internet data collection instrument and the data entry programs, most edits are performed after collection. Edits are primarily of two types: (1) consistency edits and (2) historical ratio edits comparing the current year's reported value to the prior year's value.
The consistency edits check the logical relationships among data items reported on the form. For example, if employees are reported for a function, then payroll must also be reported for that function. Likewise, if part-time employees and part-time payroll are reported, then part-time hours must also be reported, and vice versa.
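The consistency rules above can be sketched in code. This is a minimal illustration only; the field names (`ft_employees`, `pt_hours`, etc.) are hypothetical and are not the survey's actual variable names.

```python
def consistency_errors(record: dict) -> list[str]:
    """Return a list of logical inconsistencies found in one record."""
    errors = []
    # If employees are reported for a function, payroll must also be reported.
    if record.get("ft_employees", 0) > 0 and not record.get("ft_payroll"):
        errors.append("full-time employees reported without payroll")
    # Part-time employees and payroll require part-time hours, and vice versa.
    has_pt = record.get("pt_employees", 0) > 0 and record.get("pt_payroll", 0) > 0
    has_hours = record.get("pt_hours", 0) > 0
    if has_pt != has_hours:
        errors.append("part-time employees/payroll and hours are inconsistent")
    return errors

# A record with part-time employees and payroll but no part-time hours:
record = {"ft_employees": 12, "ft_payroll": 54000,
          "pt_employees": 3, "pt_payroll": 4000, "pt_hours": 0}
print(consistency_errors(record))
```

Any record that returns a nonempty list would be flagged for analyst review.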
For each function where employees are reported, the historical ratio edits compare the number of employees and the average salary between reporting years. If the data fall outside acceptable tolerance levels, the item is flagged for further review. Additional checks compare data from the Annual Finance Survey to data reported on the Census of Government Employment, verifying that employees reported for a particular function on the Census of Government Employment have a corresponding expenditure on the Finance Survey.
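A historical ratio edit of this kind can be sketched as follows. The tolerance band and the item names are illustrative assumptions, not the survey's actual tolerances.

```python
def ratio_flags(current: dict, prior: dict, low: float = 0.7, high: float = 1.3) -> list:
    """Flag items whose current-to-prior-year ratio lies outside [low, high]."""
    flags = []
    for item in ("employees", "avg_salary"):
        if prior.get(item, 0) > 0:
            ratio = current[item] / prior[item]
            if not (low <= ratio <= high):
                flags.append((item, round(ratio, 2)))
    return flags

# Employees changed little, but average salary jumped 60 percent:
current = {"employees": 95, "avg_salary": 7200}
prior = {"employees": 100, "avg_salary": 4500}
print(ratio_flags(current, prior))
```

Flagged items would then be routed to an analyst for review rather than being rejected automatically.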
For both historical ratio edits and consistency edits, the edit results are reviewed by analysts and adjusted as needed. When an analyst is unable to resolve or accept an edit failure, the respondent is contacted to verify or correct the reported data.
Imputation: Not all respondents answer every item on the questionnaire, and some questionnaires are not returned despite efforts to obtain a response. Imputation is the process of filling in missing or invalid data with reasonable values so that a complete data set is available for analytical purposes. For census years, the complete data set is also needed for sample design purposes.
For nonresponding general purpose governments and for dependent and independent school districts, the imputations were based on recently reported historical data from either a prior-year annual survey or the most recent Census of Governments, if available. These data were adjusted by a growth rate determined from the growth of responding units similar to the nonrespondent in size, geography, and type of government. If no recent historical data were available, the imputations were based on data from a randomly selected responding donor similar to the nonrespondent. The donor's data were adjusted by dividing each data item by the donor's population (or enrollment) and multiplying the result by the nonrespondent's population (or enrollment).
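The donor adjustment described above is a per-capita scaling, which can be sketched as below. The item names and values are illustrative, not actual survey data.

```python
def impute_from_donor(donor_items: dict, donor_pop: float, nonresp_pop: float) -> dict:
    """Scale each of the donor's items per capita, then multiply by the
    nonrespondent's population (or enrollment)."""
    return {k: v / donor_pop * nonresp_pop for k, v in donor_items.items()}

# A similar donor with 50,000 residents; the nonrespondent has 20,000:
donor = {"ft_employees": 200, "ft_payroll": 800000}
print(impute_from_donor(donor, donor_pop=50000, nonresp_pop=20000))
```

The same logic applies when enrollment rather than population is the size measure, as for school districts.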
The imputations for nonresponding special districts were done similarly. If prior-year reported data were available, the data were adjusted by a growth rate determined from the growth of similar reporting units. Special districts are similar if they have the same function code and similar geography, e.g., police protection in the same state or water transport in the same region. For nonresponding special districts with no recently reported data, data were taken from a randomly selected donor similar to the nonrespondent. In cases where good secondary data sources existed, data from those sources were used.
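The growth-rate adjustment used in both imputation procedures can be sketched as follows: a nonrespondent's prior-year value is grown by the aggregate growth observed among similar responding units. The values below are illustrative only.

```python
def growth_rate(responders_prior: list, responders_current: list) -> float:
    """Aggregate year-over-year growth of similar responding units."""
    return sum(responders_current) / sum(responders_prior)

def impute_by_growth(prior_value: float,
                     responders_prior: list,
                     responders_current: list) -> float:
    """Carry a nonrespondent's prior-year value forward by the group growth rate."""
    return prior_value * growth_rate(responders_prior, responders_current)

# Similar special districts (same function code, similar geography):
prior = [100, 250, 80]
current = [110, 260, 90]
print(round(impute_by_growth(200, prior, current), 1))
```

How "similar" is defined (size, geography, type of government, or function code) differs by government type, as described above.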
Note: For years 2002 through 2006, imputed data for individual governments were not released to the public. Beginning with 2007, imputed data are available on the Individual Government Data file, with data flags denoting which values were imputed.
Sampling Error: The data for the census year are not subject to sampling and therefore contain no sampling error. Users should be mindful that data for years not ending in '2' or '7' are from sample surveys and are subject to sampling error. Discussions of sampling error are available in the survey methodology descriptions for those years.