The Census Bureau’s 2015 Management and Organizational Practices Survey (MOPS) is a supplement to the 2015 Annual Survey of Manufactures (ASM). For information on the ASM, see below:

The MOPS was developed jointly with a team of academic experts and was partially funded by the National Science Foundation. Conducting the MOPS as a supplement to the ASM maximizes the analytic utility of the data and minimizes the additional respondent burden required to achieve the measurement goals of the survey. These goals were to 1) describe the prevalence and use of structured management practices in U.S. industry and 2) permit analyses of the relationship between these practices and key economic outcomes, such as productivity and employment growth. Like previous supplements to the ASM (e.g., the 1999 Computer Network Use Supplement), the MOPS enhances the information content of the base ASM. The ASM collects detailed information on many inputs used in manufacturing production, such as labor, capital, energy, and materials, as well as the outputs from this production. The MOPS provides information about other important components in these production processes (the management and organizational practices) and thus enhances our understanding of business dynamics.

Survey Design

Target population: The target population for the 2015 MOPS was the 2015 ASM mailout universe. The 2015 ASM mailout universe targeted all multiunit manufacturing establishments and large single-unit manufacturing establishments with one or more paid employees. Establishments with no employees were out of scope, with the exception of those known to use leased employees for manufacturing. Establishments were classified as manufacturing establishments if they were classified in North American Industry Classification System (NAICS) sectors 31-33.

Sampling frame: The frame was created for the 2014 ASM mailout sample; it consisted of 101,250 manufacturing establishments and included all multi-location companies and large single-establishment companies.

Sampling unit: The sampling units for the 2015 MOPS were establishments.

Sample design: The sample for the 2015 MOPS was equivalent to the 2015 ASM mailout sample and consisted of approximately 52,000 establishments.

The mailout sample for the ASM is redesigned at 5-year intervals, beginning with the second survey year after the Economic Census. For the 2014 survey year, a new probability sample was selected from a frame of approximately 101,250 manufacturing establishments of multi-location companies and large single-establishment companies in the 2012 Economic Census, which surveys establishments with paid employees located in the United States. Using the Census Bureau’s Business Register, the mailout sample was supplemented annually with new establishments that had paid employees, were located in the United States, and entered business in 2013-2015. For more information on the ASM sample design, see below:

To reduce cost and response burden on small- and medium-sized single-establishment companies, which were identified in the Manufacturing component of the 2012 Economic Census, these companies were not mailed ASM questionnaires. For the 2015 ASM, annual payroll data for these approximately 193,350 single-establishment companies were estimated based on administrative information from the Internal Revenue Service and the Social Security Administration, and other data items were estimated using industry averages. To produce estimates from the 2015 ASM, the estimated data for these small- and medium-sized single-establishment companies were combined with the estimates from the 2015 ASM mailout sample. Because the sample for the 2015 MOPS consisted of only the establishments in the 2015 ASM mailout sample, these small- and medium-sized single-establishment companies were not represented in the 2015 MOPS estimates.

Frequency of sample redesign: The MOPS used the ASM mailout sample, which is redesigned every 5 years and supplemented annually with births.

Sample maintenance: The 2015 MOPS sample was the 2015 ASM mailout sample. The MOPS also included births that were identified as 2015 ASM mailout establishments after the 2015 ASM mailout sample was created.


Data Collection

Data items requested and reference period covered: The 2015 MOPS questionnaire comprised 46 questions. The form had sections asking about management practices, organization, data and decision-making, and background characteristics. For most questions, respondents were asked to report their response for 2015, as well as a response based on recall for 2010. The 16 questions in the management practices section were used to formulate the published management scores. In addition, the form included a section on uncertainty where respondents were asked about product shipments, expenditures, and employees in 2015 and projections for 2016 or 2017.

The survey questionnaire can be found below along with the corresponding instructions and letters.

Key data items: To be a respondent in the 2015 MOPS, an establishment had to respond to 2015 checkbox questions 1, 2, 6, 13, 14, 15, and 16 in Section A of the questionnaire. Establishments also had to be in the 2015 ASM mailout sample and be tabulated in the 2015 ASM.

The criteria for tabulation in the 2015 MOPS differed from those used in the 2010 MOPS. To be tabulated in the 2010 MOPS, a given establishment record must have had at least 11 non-missing responses to the MOPS management questions; successfully matched to the ASM database and been included in the 2010 ASM tabulations; successfully matched to the Longitudinal Business Database; and had positive value added, positive employment, and positive imputed capital stock for 2010. This last criterion was excluded when estimates for both 2005 and 2010 were calculated and compared in the press release.

Type of request: The 2015 MOPS was a mandatory survey.

Frequency and mode of contact: Establishments in the MOPS sample received an initial mailing that contained a letter, a flyer, and a form. Nonrespondents were sent a due date reminder letter and, if necessary, two follow-up letters. Births were sent an initial mailing that included a letter, a flyer, and a form, and nonrespondents among them received one follow-up letter. For initial mailings determined to be Undeliverable as Addressed (UAA), single-establishment UAAs were mailed again with another letter and flyer, using updated address information if available. Multi-establishment UAAs were bundled into a mail group for the UAA’s parent organization, with a letter asking the company to forward the information to its manufacturing establishment(s). A UAA follow-up letter was sent if the first UAA mailing did not receive a response. Establishments could respond to the 2015 MOPS either electronically or by returning the paper form.

Data collection unit: The collection units for the 2015 MOPS were establishments, which were generally single physical locations where business was conducted or where services or industrial operations were performed.

Special procedures: Unlike the ASM, the MOPS was mailed to the physical address (location of the establishment) rather than the enterprise address.


Compilation of Data

Editing: Reported data were not changed by edits. However, for questions that were skipped because of skip patterns, the skipped questions were assigned a value of zero when computing the establishment’s management score. Additionally, for questions where more than one response was selected, the most structured management practice reported was assigned as the response for computing the establishment’s management score. These edits were used only for calculating management scores and were not used in tabulating the response distribution for each question. These edits were a change from the procedures used in the 2010 MOPS. In the 2010 MOPS, if more than one response was provided for a question that instructed the respondent to mark only one box, the response was nullified. When respondents provided more than one response to a “Mark all that apply” question in the 2010 MOPS, the responses were assigned numeric values and an average score was computed for the question. Questions skipped due to skip patterns in the 2010 MOPS were not included in management score calculations.
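
As a rough illustration of these editing rules, the sketch below (in Python, with hypothetical question identifiers and response codings; it is not Census Bureau production code) assigns a value of 0 to questions skipped because of a skip pattern and keeps only the most structured practice when several responses were marked, mirroring how responses fed into the management score.

```python
def edit_response_for_score(responses, skipped_by_skip_pattern):
    """Return per-question values used only for score computation.

    responses: dict mapping a question id to the list of selected response
        values, where higher values are assumed (for illustration) to indicate
        more structured practices.
    skipped_by_skip_pattern: set of question ids skipped because of the
        form's skip pattern.
    """
    edited = {}
    for question, selected in responses.items():
        if question in skipped_by_skip_pattern:
            edited[question] = 0              # skipped questions count as 0
        elif len(selected) > 1:
            edited[question] = max(selected)  # keep the most structured practice
        elif selected:
            edited[question] = selected[0]
        # questions left blank (and not skipped) stay out of the score
    return edited
```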

Nonresponse: Nonresponse is defined as the inability to obtain requested data from an eligible survey unit. Two types of nonresponse are often distinguished. Unit nonresponse is the inability to obtain any of the substantive measurements about a unit. In most cases of unit nonresponse, the Census Bureau was unable to obtain any information from the survey unit after several attempts to elicit a response. Item nonresponse occurs when a question is either unanswered or the response is unusable.

Nonresponse adjustment and imputation: An adjustment factor was applied to MOPS respondents to account for unit nonresponse. To compute the values of the unit nonresponse adjustment factor, each establishment record from the sample was grouped by industry classification into an adjustment cell based on the 2012 North American Industry Classification System (NAICS). For a given adjustment cell, the unit nonresponse adjustment factor was computed as the ratio of two unweighted counts:

  • the unweighted number of establishments in the 2015 ASM mailout sample that contributed to the 2015 ASM tabulations
  • the unweighted number of establishments that satisfied the response criteria for the 2015 MOPS, which was based on the key items for the survey and being tabulated in the ASM

The resulting factor was used to adjust the sampling weight for all respondents in the given adjustment cell.
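
A minimal sketch of this calculation is shown below, assuming hypothetical field names and a simple list-of-records input; within each NAICS-based adjustment cell it takes the ratio of the unweighted count of ASM-tabulated establishments to the unweighted count of MOPS respondents.

```python
from collections import defaultdict

def nonresponse_adjustment_factors(establishments):
    """establishments: iterable of dicts with hypothetical keys
    'naics_cell', 'in_asm_tabulation' (bool), and 'mops_respondent' (bool)."""
    asm_counts = defaultdict(int)
    mops_counts = defaultdict(int)
    for e in establishments:
        if e['in_asm_tabulation']:
            asm_counts[e['naics_cell']] += 1
            if e['mops_respondent']:
                mops_counts[e['naics_cell']] += 1
    # The factor is at least 1 and is applied to the sampling weight of every
    # MOPS respondent in the cell.
    return {cell: asm_counts[cell] / mops_counts[cell]
            for cell in asm_counts if mops_counts[cell] > 0}
```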

The only imputation performed for the 2015 MOPS occurred for items where the skip pattern of the form precluded the respondent from answering certain questions. For these skipped items, the value imputed was the least structured practice, except where such an imputation was inconsistent with the interpretation of the responses. For example, if a respondent said that no key performance indicators were monitored at the establishment in question 2 (generating a skip to question 6), then the imputed response for question 3 would be that the respondent never had the key performance indicators reviewed by managers. In contrast, if a respondent said that they did not give performance bonuses to non-managers in question 9 (generating a skip to question 11), the least structured response of “Production targets not met” was not imputed for question 10, as this cannot be inferred from the response to question 9.

Other macro-level adjustments: A calibration factor was computed for establishments using the same adjustment cells as those used for the unit nonresponse adjustment factor. For a given adjustment cell, the calibration factor was computed as the ratio of two sums:

  • the sum of the sample weights for establishments in the 2015 ASM mailout sample that contributed to the 2015 ASM tabulations
  • the sum of the adjusted weights for establishments that satisfied the response criteria for the 2015 MOPS, where the adjusted weights were based on the product of the sample weight and the nonresponse adjustment factor

The calibration factor was used to calculate the final weight for respondents.
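
The sketch below illustrates the calibration step under the same hypothetical data layout as the nonresponse sketch above; the final weight is taken here to be the product of the sample weight, the unit nonresponse adjustment factor, and the calibration factor, which is how the two adjustments described above combine.

```python
from collections import defaultdict

def calibration_factors(establishments, nr_factors):
    """nr_factors: adjustment cell -> unit nonresponse adjustment factor."""
    asm_weight_total = defaultdict(float)
    adjusted_weight_total = defaultdict(float)
    for e in establishments:
        cell = e['naics_cell']
        if e['in_asm_tabulation']:
            asm_weight_total[cell] += e['sample_weight']
            if e['mops_respondent']:
                # adjusted weight = sample weight x nonresponse adjustment factor
                adjusted_weight_total[cell] += e['sample_weight'] * nr_factors[cell]
    return {cell: asm_weight_total[cell] / adjusted_weight_total[cell]
            for cell in asm_weight_total if adjusted_weight_total[cell] > 0}

def final_weight(e, nr_factors, cal_factors):
    """Final weight for a MOPS respondent (taken here as the product of the
    sample weight and both adjustment factors for its cell)."""
    cell = e['naics_cell']
    return e['sample_weight'] * nr_factors[cell] * cal_factors[cell]
```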

Tabulation unit: Establishments were used for tabulation.

Estimation: The tabulated management scores for each domain were estimated using multiple steps. First, a management score was created for each responding establishment in the survey. Responses to the 16 management practices questions were normalized, with the most structured practice normalized to 1 and the least structured practice normalized to 0. If a question had three categories, the “in between” category was assigned the value 0.5. Similarly, for four categories, the “in between” categories were assigned 1/3 and 2/3, and so on. As previously discussed, for questions where more than one choice was selected, the most structured practice was used as the response for calculating the management score. For questions that were skipped due to skip patterns, the skipped questions were assigned numerical values of 0. An average management score was then calculated for each establishment based on the number of responses for that establishment, with a denominator between 7 and 16 depending on how many management questions the establishment answered. After adjusting the respondent weights for nonresponse and calibrating them to the 2015 ASM sample weights, a weighted score was computed for each establishment by multiplying the final weight by the average score. Structured management practice scores were then computed for each publication level by averaging the weighted scores of the establishments in the domain.
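
The following sketch illustrates this score construction using hypothetical data structures; the domain score is computed here as a weight-normalized average (dividing by the sum of the final weights), which is one reasonable reading of the averaging step described above.

```python
def normalize(choice_index, n_categories):
    """Map an ordered response (0 = least structured) onto [0, 1], so that a
    three-category question yields 0, 0.5, 1 and a four-category question
    yields 0, 1/3, 2/3, 1."""
    return choice_index / (n_categories - 1)

def establishment_score(answers):
    """answers: list of (choice_index, n_categories) pairs for the management
    questions the establishment answered (between 7 and 16 of them)."""
    values = [normalize(c, n) for c, n in answers]
    return sum(values) / len(values)

def domain_score(weighted_scores):
    """weighted_scores: list of (final_weight, establishment_score) pairs for
    establishments in the publication domain. Computed here as a
    weight-normalized average (an assumption about the exact averaging)."""
    total_weight = sum(w for w, _ in weighted_scores)
    return sum(w * s for w, s in weighted_scores) / total_weight
```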

Sampling Error: The sampling error of an estimate based on a sample survey is the difference between the estimate and the result that would be obtained from a complete census conducted under the same survey conditions. This error occurs because characteristics differ among sampling units in the population and only a subset of the population is measured in a sample survey. The particular sample used in this survey is one of a large number of samples of the same size that could have been selected using the same sample design. Because each unit in the sampling frame had a known probability of being selected into the sample, it was possible to estimate the sampling variability of the survey estimates.

Common measures of the variability among these estimates are the sampling variance, the standard error, and the coefficient of variation (CV), which is also referred to as the relative standard error (RSE). The sampling variance is defined as the squared difference, averaged over all possible samples of the same size and design, between the estimator and its average value. The standard error is the square root of the sampling variance. The CV expresses the standard error as a percentage of the estimate to which it refers. For example, an estimate of 200 units that has an estimated standard error of 10 units has an estimated CV of 5 percent. The sampling variance, standard error, and CV of an estimate can be estimated from the selected sample because the sample was selected using probability sampling. Note that measures of sampling variability, such as the standard error and CV, are estimated from the sample and are also subject to sampling variability. It is also important to note that the standard error and CV only measure sampling variability. They do not measure any systematic biases in the estimates.
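
The numeric example above can be reproduced with a few lines of arithmetic:

```python
import math

estimate = 200.0
sampling_variance = 100.0                  # implies a standard error of 10 units
standard_error = math.sqrt(sampling_variance)
cv = 100.0 * standard_error / estimate     # CV expressed as a percentage
print(standard_error, cv)                  # 10.0 5.0
```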

The Census Bureau recommends that individuals using these estimates incorporate sampling error information into their analyses, as this could affect the conclusions drawn from the estimates.

The variance estimates for the MOPS were calculated using a stratified jackknife procedure. Standard errors were published for the MOPS. Relative standard errors and coefficients of variation did not apply to the MOPS because the estimates were averages and percentages.
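
The stratified jackknife used for the MOPS is not specified in detail here; the sketch below shows a generic delete-one stratified jackknife for a weighted-mean estimator, with hypothetical inputs, only to illustrate the general technique.

```python
def weighted_mean(data):
    """data: list of (stratum, weight, score) tuples."""
    return sum(w * s for _, w, s in data) / sum(w for _, w, _ in data)

def stratified_jackknife_variance(data):
    """Delete-one stratified jackknife variance of the weighted mean."""
    theta = weighted_mean(data)
    variance = 0.0
    for h in {stratum for stratum, _, _ in data}:
        in_h = [d for d in data if d[0] == h]
        out_h = [d for d in data if d[0] != h]
        n_h = len(in_h)
        if n_h < 2:
            continue
        for i in range(n_h):
            # Drop unit i and reweight the rest of its stratum by n_h / (n_h - 1).
            kept = [(h, w * n_h / (n_h - 1), s)
                    for j, (_, w, s) in enumerate(in_h) if j != i]
            theta_rep = weighted_mean(out_h + kept)
            variance += (n_h - 1) / n_h * (theta_rep - theta) ** 2
    return variance
```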

Confidence Interval: The sample estimate and an estimate of its standard error allow us to construct interval estimates with prescribed confidence that the interval includes the average result of all possible samples with the same size and design. To illustrate, if all possible samples were surveyed under essentially the same conditions, and an estimate and its standard error were calculated from each sample, then:

  1. Approximately 68 percent of the intervals from one standard error below the estimate to one standard error above the estimate would include the average estimate derived from all possible samples.
  2. Approximately 90 percent of the intervals from 1.645 standard errors below the estimate to 1.645 standard errors above the estimate would include the average estimate derived from all possible samples.

In the example above, the margin of error (MOE) associated with the 90 percent confidence interval is the product of 1.645 and the estimated standard error.

Thus, for a particular sample, one can say with specified confidence that the average of all possible samples is included in the constructed interval. For example, suppose that a domain had an estimated structured management score of 0.500 in 2015 and that the standard error of this estimate was 0.005. This means that we are confident, with 68% chance of being correct, that the average estimate from all possible samples of establishments on the ASM mail frame in 2015 was a management score between 0.495 and 0.505. To increase the probability to a 90% chance that the interval contains the average value over all possible samples (this is called a 90-percent confidence interval), multiply 0.005 by 1.645, yielding limits of 0.492 and 0.508 (0.500 structured management score plus or minus 0.008). The average estimate of structured management scores during 2015 may or may not be contained in any one of these computed intervals; but for a particular sample, one can say that the average estimate from all possible samples is included in the constructed interval with a specified confidence of 90 percent. It is important to note that the standard error only measures sampling error. It does not measure any systematic nonsampling error in the estimates.
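
The example interval above can be reproduced directly; the margin of error is 1.645 times the standard error:

```python
estimate = 0.500
standard_error = 0.005

moe_90 = 1.645 * standard_error                       # about 0.008
lower, upper = estimate - moe_90, estimate + moe_90   # about 0.492 and 0.508
print(round(lower, 3), round(upper, 3))               # 0.492 0.508
```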

Nonsampling Error: Nonsampling error encompasses all factors other than sampling error that contribute to the total error associated with an estimate. This error may also be present in censuses and other nonsurvey programs. Nonsampling error arises from many sources: inability to obtain information on all units in the sample; response errors; differences in the interpretation of the questions; mismatches between sampling units and reporting units, requested data and data available or accessible in respondents’ records, or with regard to reference periods; mistakes in coding or keying the data obtained; and other errors of collection, response, coverage, and processing.

Although no direct measurement of nonsampling error was obtained, precautionary steps were taken in all phases of the collection, processing, and tabulation of the data in an effort to minimize its influence. Precise estimation of the magnitude of nonsampling errors would require special experiments or access to independent data and, consequently, the magnitudes are often unavailable.

The Census Bureau recommends that individuals using these estimates factor in this information when assessing their analyses of these data, as nonsampling error could affect the conclusions drawn from the estimates.

Two measures or indicators of nonsampling error were calculated for the 2015 MOPS. The 2015 MOPS had a unit response rate (URR) of 70.9%. The URR was calculated by dividing the number of respondents (R) by the number of establishments eligible for data collection (E) plus the number of establishments for which eligibility could not be determined (U), and then multiplying by 100. The formula for the URR was URR = [R/(E+U)]*100. Cases were assumed to be active and in scope in the absence of evidence otherwise. This included cases that were Undeliverable as Addressed.

The 2015 MOPS had a coverage rate of 71.9%. The coverage rate was calculated by dividing the total weighted ASM shipments for MOPS respondents by the total weighted ASM shipments in the MOPS sample for establishments tabulated in the 2015 ASM.
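
Written out as simple functions (with hypothetical argument names), the two indicators above are:

```python
def unit_response_rate(r, e, u):
    """URR = [R / (E + U)] * 100, where R is the number of respondents, E the
    number of eligible establishments, and U the number of establishments for
    which eligibility could not be determined."""
    return 100.0 * r / (e + u)

def coverage_rate(weighted_shipments_respondents, weighted_shipments_sample):
    """Total weighted ASM shipments for MOPS respondents divided by total
    weighted ASM shipments for ASM-tabulated establishments in the MOPS
    sample, expressed as a percentage."""
    return 100.0 * weighted_shipments_respondents / weighted_shipments_sample
```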

Disclosure avoidance: Disclosure is the release of data that reveals information or permits deduction of information about a particular survey unit through the release of either tables or microdata. Disclosure avoidance is the process used to protect each survey unit’s identity and data from disclosure. Using disclosure avoidance procedures, the Census Bureau modifies or removes the characteristics that put information at risk of disclosure. Although it may appear that a table shows information about a specific survey unit, the Census Bureau has taken steps to disguise or suppress a unit’s data that may be “at risk” of disclosure while making sure the results are still useful.

The 2015 MOPS used cell suppression for disclosure avoidance.

Cell suppression is a disclosure avoidance technique that protects the confidentiality of individual survey units by withholding cell values from release and replacing the cell value with a symbol, usually a “D”. If the suppressed cell value were known, it would allow one to estimate an individual survey unit’s data too closely.

The cells that must be protected are called primary suppressions.

To make sure the cell values of the primary suppressions cannot be closely estimated by using other published cell values, additional cells may also be suppressed. These additional suppressed cells are called complementary suppressions.

The process of suppression does not usually change the higher-level totals. Values for cells that are not suppressed remain unchanged. Before the Census Bureau releases data, computer programs and analysts ensure primary and complementary suppressions have been correctly applied.
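
A toy example, with invented numbers, shows why complementary suppression is needed when a row total is published alongside its cells:

```python
row_total = 1000                                    # published row total (invented)
published_cells = {"cell_a": 400, "cell_b": 350}    # "cell_c" is the primary suppression ("D")
recovered_cell_c = row_total - sum(published_cells.values())
print(recovered_cell_c)                             # 250 -- the withheld value is recovered,
                                                    # so a complementary cell must also be suppressed
```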

For more information on disclosure avoidance practices, see FCSM Statistical Policy Working Paper 22:


History of Survey Program

The first MOPS was conducted in 2010. Two major changes to the methodology occurred between the 2010 MOPS and the 2015 MOPS. The first was a change in the criteria used to classify an establishment as a respondent in 2015. The 2015 MOPS only required an establishment to respond to seven specific questions about management practices and to be tabulated in the 2015 ASM as part of the mailout sample. Establishments no longer needed to match to the Longitudinal Business Database or to have positive value added, positive employment, and positive imputed capital stock. In addition, in 2015, establishments that responded to a “Mark one box” question with multiple responses no longer had responses from that question discounted. The second major change was the decision to use only the most structured practice in the management score for establishments that reported multiple responses to “Mark all that apply” questions. In the 2010 MOPS, an average value was computed for such questions.

For more information on the history and development of the MOPS, see below:
