Methodology

The Census Bureau’s 2021 Management and Organizational Practices Survey (MOPS) is a supplement to the 2021 Annual Survey of Manufactures (ASM). For information on the ASM, see the ASM Methodology page.

The MOPS is conducted as a supplement to the ASM to maximize the analytic utility of the data and minimize the additional respondent burden required to achieve the measurement goals of the survey. These goals were to 1) describe the prevalence and use of structured management practices in U.S. industry and 2) permit analyses of the relationship between these practices and key economic outcomes, such as productivity and employment growth.  The MOPS enhances the information content of the base ASM.  The ASM collects detailed information on many inputs used in manufacturing production, such as labor, capital, energy, and materials, as well as the outputs from this production. The MOPS provides information about other important components in these production processes (the management and organizational practices) and thus enhances our understanding of business dynamics.

Survey Design

Target population: The target population for the 2021 MOPS was the 2021 ASM mailout universe. The 2021 ASM mailout universe targeted all multiunit manufacturing establishments and large single-unit manufacturing establishments with one or more paid employees. Establishments with no employees were out of scope, with the exception of those known to use leased employees. Establishments were classified as manufacturing if they fell within North American Industry Classification System (NAICS) sectors 31-33.

Sampling frame: The frame, created for the 2021 ASM mailout sample, consisted of approximately 102,000 manufacturing establishments and included all multi-location companies and large single-establishment companies.

Sampling unit: The sampling units for the 2021 MOPS were establishments.

Sample design: The sample for the 2021 MOPS was equivalent to the 2021 ASM mailout sample and consisted of approximately 53,500 establishments.

The mailout sample for the ASM is redesigned at 5-year intervals, beginning with the second survey year after the Economic Census. For the 2019 survey year, a new probability sample was selected from a frame of approximately 102,000 manufacturing establishments of multi-location companies and large single-establishment companies in the 2017 Economic Census, which surveys establishments with paid employees located in the United States. Using the Census Bureau’s Business Register, the mailout sample was supplemented annually by new establishments that had paid employees, were located in the United States, and entered business in 2018-2021. For more information on the ASM sample design, see ASM Methodology.

To reduce cost and response burden on small- and medium-sized single-establishment companies, which were identified in the Manufacturing component of the 2017 Economic Census, these companies were not mailed ASM questionnaires.  For the 2021 ASM, annual payroll data for these approximately 187,000 single-establishment companies were estimated based on administrative information from the Internal Revenue Service and the Social Security Administration, and other data items were estimated using industry averages.  To produce estimates from the 2021 ASM, the estimated data for these small- and medium-sized single-establishment companies were combined with the estimates from the 2021 ASM mailout sample.  Because the sample for the 2021 MOPS consisted of only the establishments in the 2021 ASM mailout sample, these small- and medium-sized single-establishment companies were not represented in the 2021 MOPS estimates.

Frequency of sample redesign: The MOPS used the ASM mailout sample, which is redesigned every 5 years and supplemented with births annually.

Sample maintenance: The 2021 MOPS sample was the 2021 ASM mailout sample.  The MOPS also included births that were identified as 2021 ASM mailout establishments after the 2021 ASM mailout sample was created.

Data Collection

Data items requested and reference period covered: The 2021 MOPS questionnaire comprised 48 questions. The form had sections asking about management practices, organization, data, decision making and artificial intelligence, expectations, and background characteristics. Respondents were asked to report their responses for 2021 on all questions. For select questions, respondents were also asked to report their responses for 2019 or projections for 2022 and 2023. The 16 questions in the management practices section were used to formulate the published management scores.

The survey questionnaire along with the corresponding instructions and letters can be found here: MOPS Information for Respondents.

Key data items:  To be a respondent in the 2021 MOPS, an establishment had to respond to questions 1, 2, 6, 13, 14, 15, and 16 in Section A of the questionnaire.  Establishments also had to be in the 2021 ASM mailout sample and be tabulated in the 2021 ASM.

Type of request: The 2021 MOPS was a mandatory survey.

Frequency and mode of contact: Establishments in the MOPS sample received a mailing with instructions to provide responses online via Centurion, the Census Bureau’s online reporting system. Due-date and follow-up mailings were also sent during the collection period.

Data collection unit: For both single-establishment and multi-establishment firms, the data collection unit was the establishment.

Special procedures:  The 2021 MOPS did not employ any special data collection procedures.

Compilation of Data

Editing: Reported data were not changed by edits. However, for questions that were skipped due to skip patterns, the skipped questions were assigned a value of zero for computing the establishment’s management score.

Nonresponse:  Nonresponse is defined as the inability to obtain requested data from an eligible survey unit. Two types of nonresponse are often distinguished. Unit nonresponse is the inability to obtain any of the substantive measurements about a unit. In most cases of unit nonresponse, the Census Bureau was unable to obtain any information from the survey unit after several attempts to elicit a response.  Item nonresponse occurs when a question is unanswered.

Nonresponse adjustment and imputation:  An adjustment factor was applied to MOPS respondents to account for unit nonresponse.  To compute the values of the unit nonresponse adjustment factor, each establishment record from the sample was grouped by industry classification into an adjustment cell based on the 2017 North American Industry Classification System (NAICS).  For a given adjustment cell, the unit nonresponse adjustment factor was computed as the ratio of two unweighted counts: 

  • the unweighted number of establishments in the 2021 ASM mailout sample that contributed to the 2021 ASM tabulations
  • the unweighted number of establishments that satisfied the response criteria for the 2021 MOPS, which was based on the key items for the survey and being tabulated in the ASM

The resulting factor was used to adjust the sampling weight for all respondents in the given adjustment cell.
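
As a minimal sketch of this computation, the following Python fragment derives the adjustment factor for each NAICS-based cell from unweighted counts. The record keys ("naics", "in_asm_tab", and "mops_respondent") are illustrative assumptions, not Census Bureau field names.

    from collections import defaultdict

    def nonresponse_adjustment_factors(establishments):
        # Unweighted counts by 2017 NAICS-based adjustment cell.
        asm_counts = defaultdict(int)    # units contributing to 2021 ASM tabulations
        mops_counts = defaultdict(int)   # units meeting the 2021 MOPS response criteria
        for est in establishments:
            if est["in_asm_tab"]:
                asm_counts[est["naics"]] += 1
            if est["mops_respondent"]:
                mops_counts[est["naics"]] += 1
        # Factor = ASM-tabulated count / MOPS respondent count, per cell.
        return {cell: asm_counts[cell] / mops_counts[cell]
                for cell in asm_counts if mops_counts[cell] > 0}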

The only imputation performed for the 2021 MOPS occurred for items where the skip pattern of the form precluded the respondent from answering certain questions. For these skipped items, the value imputed was the least structured practice, except where such an imputation was inconsistent with the interpretation of the responses. For example, if a respondent said that no key performance indicators were monitored at the establishment in question 2 (generating a skip to question 6), then the imputed response for question 3 would be that the respondent never had the key performance indicators reviewed by managers. Conversely, if a respondent said that they did not give performance bonuses to non-managers in question 9 (generating a skip to question 11), the least structured response of “Production targets not met” was not imputed for question 10, as this cannot be inferred from the response to question 9.
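
A minimal Python sketch of this imputation rule follows; the question identifiers and the set of logically inferable skips are illustrative assumptions drawn from the examples above.

    LEAST_STRUCTURED = 0.0  # normalized value of the least structured practice

    def impute_skipped(responses, skipped, inferable):
        # responses: question -> normalized value for answered items
        # skipped:   questions bypassed by the form's skip pattern
        # inferable: skipped questions whose least structured value follows
        #            logically from the triggering answer (e.g., question 3
        #            after "no key performance indicators" on question 2)
        for q in skipped:
            if q in inferable:
                responses[q] = LEAST_STRUCTURED
            # Otherwise (e.g., question 10 after question 9) the item is left
            # unanswered rather than imputed.
        return responses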

Other macro-level adjustments:  A calibration factor was computed for establishments using the same adjustment cells as those used for the unit nonresponse adjustment factor.  For a given adjustment cell, the calibration factor was computed as the ratio of two sums:

  • the sum of the sample weights for establishments in the 2021 ASM mailout sample that contributed to the 2021 ASM tabulations
  • the sum of the adjusted weights for establishments that satisfied the response criteria for the 2021 MOPS, where the adjusted weights were based on the product of the sample weight and the nonresponse adjustment factor

The calibration factor was used to calculate the final weight for respondents.
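
As with the nonresponse factor, the calibration step can be sketched in Python under the same illustrative record layout; "weight" stands in for the ASM sampling weight, and nr_factors is the cell-level output of the nonresponse adjustment above.

    from collections import defaultdict

    def calibration_factors(establishments, nr_factors):
        asm_wt = defaultdict(float)  # sum of sample weights, ASM-tabulated units
        adj_wt = defaultdict(float)  # sum of adjusted weights, MOPS respondents
        for est in establishments:
            cell = est["naics"]
            if est["in_asm_tab"]:
                asm_wt[cell] += est["weight"]
            if est["mops_respondent"]:
                adj_wt[cell] += est["weight"] * nr_factors[cell]
        return {cell: asm_wt[cell] / adj_wt[cell]
                for cell in asm_wt if adj_wt[cell] > 0}

    # Final weight for a respondent in a given cell:
    # final_weight = sample_weight * nonresponse_factor * calibration_factor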

Tabulation unit: Establishments were used for tabulation.

Estimation: The tabulated management scores for each domain were estimated in multiple steps. First, a management score was created for each responding establishment in the survey. Responses to the 16 management practices questions were normalized, with the most structured practice normalized to 1 and the least structured practice normalized to 0. If a question had three categories, the “in between” category was assigned the value 0.5. Similarly, for four categories, the “in between” categories were assigned 1/3 and 2/3, and so on. For the “Mark all that apply” questions, where establishments selected multiple responses, an average of the values assigned to the selected responses was calculated and then factored into the calculation of the management score. For questions that were skipped due to skip patterns, the skipped questions were assigned numerical values of 0. An average management score was then calculated for each establishment based on the number of responses for that establishment, with a denominator between 7 and 16 depending on how many management questions an establishment answered. After adjusting the respondent weights for nonresponse and calibrating them to the 2021 ASM sample weights, a weighted score was computed for each establishment by multiplying the final weight by the average score. Structured management practice scores were then computed for each publication level by calculating the average of the weighted scores for the establishments in the domain.
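
The scoring steps can be illustrated with a short Python sketch. The record fields are hypothetical, and the final step is read here as a weight-normalized average, which is an interpretation of the description above.

    def normalize(category_index, n_categories):
        # Map an ordered response category to [0, 1]: index 0 is the least
        # structured practice (0), the top index is the most structured (1),
        # and "in between" categories are evenly spaced (0.5 for three
        # categories; 1/3 and 2/3 for four).
        return category_index / (n_categories - 1)

    def establishment_score(item_values):
        # item_values: normalized values for the 7 to 16 management questions
        # the establishment answered or had imputed as 0; "Mark all that
        # apply" items enter as the average of the selected responses' values.
        return sum(item_values) / len(item_values)

    def domain_score(respondents):
        # Weighted average of establishment scores over a publication domain,
        # where final_weight reflects nonresponse adjustment and calibration.
        total_weight = sum(r["final_weight"] for r in respondents)
        return sum(r["final_weight"] * r["score"] for r in respondents) / total_weight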

Sampling Error:  The sampling error of an estimate based on a sample survey is the difference between the estimate and the result that would be obtained from a complete census conducted under the same survey conditions.  This error occurs because characteristics differ among sampling units in the population and only a subset of the population is measured in a sample survey.  The particular sample used in this survey is one of a large number of samples of the same size that could have been selected using the same sample design.  Because each unit in the sampling frame had a known probability of being selected into the sample, it was possible to estimate the sampling variability of the survey estimates.

Common measures of the variability among these estimates are the sampling variance, the standard error, and the coefficient of variation (CV), which is also referred to as the relative standard error (RSE).  The sampling variance is defined as the squared difference, averaged over all possible samples of the same size and design, between the estimator and its average value.  The standard error is the square root of the sampling variance.  The CV expresses the standard error as a percentage of the estimate to which it refers. For example, an estimate of 200 units that has an estimated standard error of 10 units has an estimated CV of 5 percent.  The sampling variance, standard error, and CV of an estimate can be estimated from the selected sample because the sample was selected using probability sampling.  Note that measures of sampling variability, such as the standard error and CV, are estimated from the sample and are also subject to sampling variability.  It is also important to note that the standard error and CV only measure sampling variability.  They do not measure any systematic biases in the estimates.
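
These definitions translate directly into a small computation, sketched below in Python; the worked example in the text (an estimate of 200 units with a standard error of 10 units) is reproduced in the final comment.

    import math

    def se_and_cv(estimate, sampling_variance):
        # Standard error is the square root of the sampling variance; the CV
        # expresses the standard error as a percentage of the estimate.
        se = math.sqrt(sampling_variance)
        return se, 100.0 * se / estimate

    # se_and_cv(200, 100) -> (10.0, 5.0): a standard error of 10 units and a
    # CV of 5 percent, matching the example above.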

The Census Bureau recommends that individuals using these estimates incorporate sampling error information into their analyses, as this could affect the conclusions drawn from the estimates.

The variance estimates for the MOPS were calculated using a stratified jackknife procedure.  Standard errors were published for the MOPS.  Relative standard errors and coefficients of variation did not apply to the MOPS because the estimates were averages and percentages.
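
The exact replication scheme used for the MOPS is not detailed here, but a generic delete-one stratified jackknife for a weighted mean, sketched below in Python, conveys the idea: each replicate drops one unit from a stratum, reweights the remainder of that stratum, and the squared deviations of the replicate estimates from the full-sample estimate are accumulated.

    def jackknife_variance(strata):
        # strata: stratum id -> list of (weight, value) pairs
        def wmean(groups):
            total = sum(w for g in groups for w, _ in g)
            return sum(w * y for g in groups for w, y in g) / total

        full = wmean(list(strata.values()))
        variance = 0.0
        for h, units in strata.items():
            n_h = len(units)
            if n_h < 2:
                continue
            others = [u for k, u in strata.items() if k != h]
            for i in range(n_h):
                # Drop unit i; reweight the rest of stratum h by n_h/(n_h - 1).
                rest = [(w * n_h / (n_h - 1), y)
                        for j, (w, y) in enumerate(units) if j != i]
                variance += (n_h - 1) / n_h * (wmean(others + [rest]) - full) ** 2
        return variance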

Confidence Interval:  The sample estimate and an estimate of its standard error allow us to construct interval estimates with prescribed confidence that the interval includes the average result of all possible samples with the same size and design.  To illustrate, if all possible samples were surveyed under essentially the same conditions, and an estimate and its standard error were calculated from each sample, then:

  1. Approximately 68 percent of the intervals from one standard error below the estimate to one standard error above the estimate would include the average estimate derived from all possible samples.
  2. Approximately 90 percent of the intervals from 1.645 standard errors below the estimate to 1.645 standard errors above the estimate would include the average estimate derived from all possible samples.

In the example above, the margin of error (MOE) associated with the 90 percent confidence interval is the product of 1.645 and the estimated standard error.

Thus, for a particular sample, one can say with specified confidence that the average of all possible samples is included in the constructed interval. For example, suppose that a domain had an estimated structured management score of 0.500 in 2021 and that the standard error of this estimate was 0.005. This means that one can be 68 percent confident that the average estimate from all possible samples of establishments on the ASM mail frame in 2021 was a management score between 0.495 and 0.505. To increase the probability to 90 percent that the interval contains the average value over all possible samples (this is called a 90-percent confidence interval), multiply 0.005 by 1.645, yielding limits of 0.492 and 0.508 (a 0.500 structured management score plus or minus 0.008). The average estimate of structured management scores during 2021 may or may not be contained in any one of these computed intervals; but for a particular sample, one can say that the average estimate from all possible samples is included in the constructed interval with a specified confidence of 90 percent. It is important to note that the standard error only measures sampling error. It does not measure any systematic nonsampling error in the estimates.
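
The interval arithmetic in this example reduces to a one-line computation, sketched here in Python.

    def confidence_interval(estimate, standard_error, z=1.645):
        # z = 1.645 gives the 90 percent interval; z = 1.0 gives the
        # 68 percent interval described above.
        moe = z * standard_error
        return estimate - moe, estimate + moe

    # confidence_interval(0.500, 0.005) -> approximately (0.492, 0.508),
    # i.e., a margin of error of 1.645 * 0.005, or about 0.008.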

Nonsampling Error:  Nonsampling error encompasses all factors other than sampling error that contribute to the total error associated with an estimate.  This error may also be present in censuses and other nonsurvey programs.  Nonsampling error arises from many sources: inability to obtain information on all units in the sample; response errors; differences in the interpretation of the questions; mismatches between sampling units and reporting units, requested data and data available or accessible in respondents’ records, or with regard to reference periods; and other errors of collection, response, coverage, and processing.

Although no direct measurement of nonsampling error was obtained, precautionary steps were taken in all phases of the collection, processing, and tabulation of the data in an effort to minimize its influence.  Precise estimation of the magnitude of nonsampling errors would require special experiments or access to independent data and, consequently, the magnitudes are often unavailable.

The Census Bureau recommends that individuals using these estimates factor in this information when assessing their analyses of these data, as nonsampling error could affect the conclusions drawn from the estimates.

Two measures or indicators of nonsampling error were calculated for the 2021 MOPS. The 2021 MOPS had a unit response rate (URR) of 67.6%. The URR was calculated by taking the number of respondents (R) and dividing that by the number of establishments eligible for data collection (E) plus the number of establishments for which eligibility could not be determined (U). The rate was then multiplied by 100. The formula for the URR was URR = (R / (E + U)) × 100. Cases were assumed to be active and in-scope in the absence of evidence otherwise. This included cases that were Undeliverable as Addressed.

The 2021 MOPS had a coverage rate of 69.9%. The coverage rate was calculated by dividing the total weighted ASM shipments for MOPS respondents by the total weighted ASM shipments in the MOPS sample for establishments tabulated in the 2021 ASM.
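
Both rates are simple ratios, mirrored in the Python sketch below; the component counts and shipment totals behind the published 67.6% and 69.9% are not reproduced here.

    def unit_response_rate(r, e, u):
        # URR = (R / (E + U)) * 100, where R is the number of respondents,
        # E the establishments eligible for collection, and U those whose
        # eligibility could not be determined.
        return 100.0 * r / (e + u)

    def coverage_rate(respondent_shipments, sample_shipments):
        # Weighted ASM shipments for MOPS respondents divided by weighted ASM
        # shipments for ASM-tabulated establishments in the MOPS sample.
        return 100.0 * respondent_shipments / sample_shipments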

Disclosure avoidance:  Disclosure is the release of data that reveals information or permits deduction of information about a particular survey unit through the release of either tables or microdata.  Disclosure avoidance is the process used to protect each survey unit’s identity and data from disclosure.  Using disclosure avoidance procedures, the Census Bureau modifies or removes the characteristics that put information at risk of disclosure.  Although it may appear that a table shows information about a specific survey unit, the Census Bureau has taken steps to disguise or suppress a unit’s data that may be “at risk” of disclosure while making sure the results are still useful.

The 2021 MOPS used cell suppression for disclosure avoidance.

Cell suppression is a disclosure avoidance technique that protects the confidentiality of individual survey units by withholding cell values from release and replacing the cell value with a symbol, usually a “D”.  If the suppressed cell value were known, it would allow one to estimate an individual survey unit’s response too closely.

The cells that must be protected are called primary suppressions.

To make sure the cell values of the primary suppressions cannot be closely estimated by using other published cell values, additional cells may also be suppressed.  These additional suppressed cells are called complementary suppressions.

The process of suppression does not usually change the higher-level totals.  Values for cells that are not suppressed remain unchanged.  Before the Census Bureau releases data, computer programs and analysts ensure primary and complementary suppressions have been correctly applied.
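
Mechanically, applying suppressions amounts to replacing the values of the flagged cells with the symbol “D,” as in the simplified Python sketch below. How primary cells are identified and how complementary cells are chosen are not specified in this document, so both sets are taken as given.

    def apply_suppression(table, primary, complementary):
        # table: cell id -> published value; primary and complementary are the
        # sets of cell ids flagged for suppression. Unsuppressed cells and
        # higher-level totals are left unchanged.
        suppressed = primary | complementary
        return {cell: ("D" if cell in suppressed else value)
                for cell, value in table.items()}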

In accordance with federal law governing census reports (Title 13 of the United States Code), no data are published that would disclose the operations of an individual establishment or company.  Additional information on the techniques employed to limit disclosure in the Census Bureau’s economic surveys is discussed at: https://www.census.gov/programs-surveys/economic-census/technical-documentation/methodology/disclosure.html

Disclosure analysis is performed at the field level, i.e., disclosure analysis is performed for each variable independently of other variables for that NAICS-based industry or product class.  When data for a NAICS-based industry or product class are suppressed, these data are still included in higher-level totals.

The Census Bureau has reviewed the 2021 MOPS tables for unauthorized disclosure of confidential information and has approved the disclosure avoidance practices applied (Approval ID: CBDRB-FY23-0237, approved March 27, 2023).

For more information on disclosure avoidance practices, see FCSM Statistical Policy Working Paper 22.

History of Survey Program

The first MOPS was conducted in 2010.  Two major changes were made to the methodology between the 2010 MOPS and the 2015 MOPS.  The first was a change in how a response was classified for 2015.  The 2015 MOPS only required an establishment to respond to seven specific questions about management practices and be tabulated in the 2015 ASM as part of the mailout sample.  Establishments no longer needed to match to the Longitudinal Business Database or have positive value added, positive employment, or positive imputed capital stock for 2010.  In addition, in 2015, establishments that responded to a “Mark one box” question with multiple responses no longer had responses from that question discounted.  For questions where more than one response was selected, the most structured management practice reported was assigned as the response for computing the establishment’s management score.  These edits were only used for calculating management scores and were not used in tabulating the response distribution for each question.

The second major change between the 2010 MOPS and the 2015 MOPS was the decision to use only the most structured practice in the management score for establishments that reported multiple responses to “Mark all that apply” questions.  In the 2010 MOPS, an average value was computed for the question.

The 2021 MOPS also included two significant methodology changes from prior survey cycles.  The 2021 MOPS was the first cycle to use the internet as the only mode of collection instead of both the internet and a paper form option.  Because the electronic collection instrument did not allow respondents to select multiple responses to the “Mark one box” questions, as was possible with the previously available paper form, the need to edit scenarios where respondents selected more than one option was eliminated.

The second modification was a change, between 2015 and 2021, in the method of calculating the management score for each establishment.  Specifically, in 2021, if an establishment chose multiple responses for the “Mark all that apply” questions, the management score was calculated by taking an average of the values assigned to each of the selected responses.  In contrast, in 2015, the management score was based on the most structured management practice reported by the establishment.

For more information on the history and development of the MOPS, see The Management and Organizational Practices Survey (MOPS): An Overview (census.gov).

Page Last Revised - April 25, 2023