
Survey of Program Dynamics


                                                                                                                                                                      1/21/99

I. Introduction

To slow the rate of attrition for the Survey of Program Dynamics (SPD), we are requesting permission to offer a prepaid incentive of $40 to all eligible sample cases in the upcoming 1999 data collection. This procedure would be part of a broader campaign to retain respondents in this important longitudinal survey. Justification for our request is given in the following section. Section III offers more details about the operational plan for implementing the incentive procedure. Section IV presents the most current results from both the Survey of Income and Program Participation (SIPP) and SPD incentive experiments, as well as information about other surveys that have used incentives. We would need OMB approval by March 1, 1999 to implement the procedure successfully.

II. Justification

The SPD is a unique survey in the federal statistical system. It is a longitudinal survey with a one-time window of opportunity for success.

The SPD is the vehicle for assessing true changes in behavior resulting from the 1996 welfare reforms, because it is the only survey that provides both baseline data and longitudinal data on individual and family outcomes. The data gathered for the 10-year period (1992-2002) will aid in assessing short- to medium-term outcomes of the welfare legislation. This is the only time these data can be collected, and they will be the only source of data of this type available.

Congress specifically directs the Bureau of the Census to continue to collect data on the 1992 and 1993 panels of the Survey of Income and Program Participation (SIPP) in Title 42, United States Code, Section 614 (Public Law 104-193, Section 414, signed August 22, 1996). The use of the SIPP panels provides a good baseline of pre-welfare reform data. However, it also presents some major obstacles to the collection of quality statistical data.

Nonresponse to the SPD is a major concern of project staff. The SIPP respondents provided 9 or 10 waves of detailed data over a three-year period. The SIPP data collection imposes a burden of 30 minutes per adult respondent per wave, so the average SIPP household (2.1 adults per household) has provided more than 10 hours of interview time. At the end of the last wave of SIPP interviews, respondents were thanked for their time and told that there would be no more interviews. Then, one to two years later, the respondents were contacted and told they were still in a panel survey. It is therefore not surprising that the SPD has nonresponse problems.

The SPD inherited a 26.6 percent sample loss rate from the SIPP sample. After two waves of SPD, the sample loss rate is 50 percent (see Table 1). Previous studies of SIPP sample loss have shown that the loss is not uniform: households in and near poverty attrit at a higher rate than other households. Since poverty households are a key target population in the study of welfare reform, there is some concern about nonresponse bias.

Table 1. Sample Loss: Average of the 1992 and 1993 SIPP Panels and SPD

Interview           Eligible HHs    Interviewed HHs    Average Sample Loss Rate (%)
1                   43,394          39,446              8.8
2                   44,225          37,936             14.4
3                   45,043          37,882             16.3
4                   45,468          37,477             18.1
5                   45,985          36,985             20.3
6                   46,437          36,676             21.9
7                   46,704          36,133             23.6
8                   47,030          35,761             25.1
9                   47,273          35,291             26.6
10*                 17,804          13,337             26.6
1997 SPD Bridge**   48,633          30,125             41.3
1998 SPD***         32,800          16,400             50.0

* A 10th interview was conducted for only 3/4 of the 1992 panel sample.
** Only those HHs interviewed in the last wave of the 1992 or 1993 panels were sent to the Field for the SPD Bridge.
*** Only those HHs interviewed in the Bridge and selected during the subsampling were eligible for the 1998 SPD.
 

The use of incentives is standard among long-term panel studies similar to the SPD. While there has been little experimental research on the effects of these incentives, many of the panel studies provide their respondents some remuneration. A summary of several prominent panel studies' purpose, sample, sample loss, and incentives can be found in Attachment A. Table 2 shows preliminary response rates for the SPD, the Panel Study of Income Dynamics (PSID), and the National Longitudinal Survey of Youth (NLSY). All the response rates in Table 2 are calculated in the same manner so that they are directly comparable.
 
 

Table 2. Response Rates for SPD, PSID, and NLSY: Period-Specific and Total


Period-Specific                                                         SPD(1)   PSID(2)                NLSY(3)
                                                                                 SRC    SEO    Total
Sample Selection to Interview 1                                         90.9     76     76     73      87.0
Interview 1 to Most Recent Interview (Deceased Removed from Base)       60.1     53.0   53.0   53.0    71.5
Sample Selection to Most Recent Interview (Deceased Removed from Base)  50.0     40.3   36.3   38.7    62.2

(1) Response rates based on the 1992 and 1993 SIPP panels through the 1998 SPD.
(2) PSID is based on a combined sample from the Survey of Economic Opportunity (SEO) (1966/1967) and a fresh sample selected by the Survey Research Center (SRC) for the 1968 survey. Response rates are based on the 26th interview, collected in 1993. More current information has been requested but not received.
(3) Response rates for NLSY are based on the 17th interview, collected in 1996.
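Because all the rates in Table 2 are calculated in the same manner, each survey's total rate is approximately the product of its two period-specific rates. A quick arithmetic sketch (not part of the survey methodology; the SPD total of 50.0 reflects additional base adjustments, so the simple product overstates it slightly):

```python
# Cumulative response rate as the product of two period-specific rates.
# The figures below are taken from Table 2; the relationship holds
# exactly for the PSID and NLSY columns.

def cumulative_rate(first_stage_pct, later_stage_pct):
    """Multiply two period-specific response rates given in percent."""
    return round(first_stage_pct * later_stage_pct / 100, 1)

print(cumulative_rate(76.0, 53.0))   # PSID SRC total: 40.3
print(cumulative_rate(73.0, 53.0))   # PSID combined total: 38.7
print(cumulative_rate(87.0, 71.5))   # NLSY total: 62.2
```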

We have also been trying to keep SPD response rates up using other enhancements. At the last interview, we provided respondents with a portfolio of Census Stat Briefs based on the results of the SIPP panels in which the respondents participated. This demonstrated the usefulness of the information they had previously provided and was used to encourage continued participation.

In February 1999, respondents are being sent an "interim mailing" containing information culled from the 1997 SPD Bridge. This mailing serves two purposes: it reminds respondents of the importance of their continued participation, and it gives our Field staff a two-month head start on tracking down people who have moved since the last interview. Reducing the number of people lost due to moving will also reduce nonresponse.

We are also studying the feasibility of bringing low-income people who left the SIPP sample back into the SPD. This is expected to be a costly method of reducing the attrition rate. The Panel Study of Income Dynamics (PSID) Attrition Study brought back approximately 35% of the noninterview cases it attempted to interview. The SPD attrition study will address two questions: 1) can we locate the low-income people we lost in the 1992/1993 SIPP panels; and 2) once we find them, can we convince them to complete a lengthy questionnaire?

The Census Bureau is trying various methods to deal with the SPD attrition problem. We now believe that incentives are the next method we need to adopt to maintain a sample without major nonresponse bias. Incentives are the standard for long-term longitudinal surveys. (Again, see Attachment A.)

We know that OMB is reluctant to set precedents regarding incentive use, but we feel SPD would not set precedents because it is a unique survey in the federal statistical system:

  • It is a longitudinal survey studying a phenomenon at a particular point in time;
  • The respondents are from a previous study (SIPP) and believed that they had completed their obligation to the government; and
  • The selection of the sample was Congressionally mandated.

III. Operational Plan

A $40 incentive per household will be given to every SPD sample household eligible for interview in the upcoming interviewing cycle scheduled for 1999. The incentive will be prepaid by enclosing, in the advance letter sent prior to the interviewer's visit, a $40 debit card along with a PIN for redeeming the amount at an ATM. Each eligible sample household will be allowed to cash the incentive regardless of the interview outcome (response or nonresponse). To ensure that every sample household gets the incentive, each interviewer will be given additional debit cards to offer to households that did not receive a debit card through the mail prior to the interview.

IV. Results of Incentive Research.

A. Results of the SIPP Incentive Tests.

1. SIPP Wave 1 Incentive Experiment Results

In Wave 1 of the 1996 Panel, households were given a $0, $10, or $20 incentive to test whether it would reduce nonresponse rates at the initial interview and reduce item nonresponse rates for those who answered the questionnaire.

The results summarized below are extracted from Mack, S., Huggins, V., Keathley, D., and Sundukchi, M. (1998), "Do Monetary Incentives Improve Response Rates in the Survey of Income and Program Participation?", to be published in the 1998 Proceedings of the Survey Research Methods Section of the American Statistical Association. (For details, see Attachment B.)

    - The $20 incentives reduced (with statistical significance) household, person, and item
    (gross wages) nonresponse rates in the initial interview (Wave 1).

    - The $20 incentives reduced (with statistical significance) household nonresponse rates in
    subsequent interviews as well (Wave 2 through Wave 6).

    - The $20 incentives were particularly effective for reducing the household nonresponse rates
    of poverty and black households.

    - The $10 incentives did not substantially reduce nonresponse rates.

2. SIPP Wave 7 Incentive Experiment Results

In Wave 7, a $20 booster incentive was given to households who received the incentive in Wave 1 and were also low income in Wave 1. This incentive has had a positive effect on reducing attrition in Wave 7. (See Attachment C for details.) The results are given below with actual nonresponse rates in Table 3.

    - The $20 incentives reduced (with statistical significance) household nonresponse rates in
    Wave 7.

    - The $20 incentives reduced (with statistical significance) household nonresponse rates in
    Wave 7 for households below 150% of their poverty threshold in Wave 1.

    - The $20 incentive did not significantly reduce the nonresponse rates for households above
    150% of their poverty threshold in Wave 1.

Table 3. Weighted Type A Nonresponse Rates for Wave 7 by Incentive Groups


Incentive Group   Overall   Below 150%   Above 150%
$0                7.89%     6.92%        8.00%
$20               5.97%     5.69%        6.78%
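As a quick arithmetic check (illustrative only, not part of the experimental analysis), the figures in Table 3 imply the following percentage-point reductions in the Type A nonresponse rate from the $20 incentive:

```python
# Percentage-point reduction in weighted Type A nonresponse rates
# from Table 3: no-incentive rate minus $20-incentive rate.

rates = {  # group: ($0 rate %, $20 rate %)
    "Overall":    (7.89, 5.97),
    "Below 150%": (6.92, 5.69),
    "Above 150%": (8.00, 6.78),
}

for group, (control, incentive) in rates.items():
    print(f"{group}: {control - incentive:.2f} point reduction")
```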

3. SIPP Wave 8 and Wave 9 Incentive Experiment Results.

In Wave 8, households that were Type A nonrespondents for the first time in Wave 7 were given a $0, $20, or $40 incentive during nonresponse conversion. A similar procedure is being carried out in Wave 9 for Wave 8 Type A's. The incentives resulted in a significant increase in the Type A conversion rate in Wave 8. Results are given below, with actual rates in Table 4. (See Attachment D for more detailed results.)

    - The $40 incentive significantly increased the overall conversion rate of Wave 7 Type A
    noninterviews.

    - The $40 incentives significantly increased the Wave 8 conversion rate for those households
    that said they refused to participate in Wave 7 (i.e., hard refusals).

    - Incentives did not significantly increase the conversion rates for households where no one
    was home after several attempts or were temporarily absent in Wave 7 (i.e., soft refusals).

    - Priority mail alone has increased the Wave 8 conversion rate for refusals compared with
    Wave 7 conversion rates (32.1% to 37.6%).

Table 4. Conversion Rates for Wave 7 Noninterviews in Wave 8 Incentive Experiment

Wave 8 Treatment           All Type As   Refusals   No One Home and Temporarily Absent
$0 Group                   47.0%         37.6%      60.8%
$20 Group                  49.9%         38.7%      65.5%
$40 Group                  56.0%         47.5%      68.2%
Wave 7 Conversion Rates*   44.0%         32.1%      62.4%
* Wave 7 conversion rates are shown for comparison.

B. Results of the SPD Bridge Incentive Test.

A $20 voucher was given to a test group of low-income cases and their neighbors in the first interview of the SPD. Based on the results of this test, providing a $20 incentive had a positive, but not significant, effect on response rates overall, as well as by demographic characteristics. However, within the experimental group, households that received and cashed vouchers had a significantly higher response rate than households that received but did not cash vouchers or did not receive them. (See Attachment E for detailed results.)

C. Results of Other Survey Incentive Usage

The literature regarding incentives overwhelmingly supports the benefits of incentive use in general. Recently, Mosher, Pratt, and Duffer (1994) proposed the use of incentives for cycle 5 of the National Survey of Family Growth (NSFG). They suggested that previous research (e.g., Groves, Cialdini, and Couper, 1992) implies that incentives are effective because they: create a reciprocation norm; create an informal contract between the interviewer and respondent, resulting in an exchange of goods for services; or are viewed as compensation for the respondent's time. Mosher et al. further argued that several federal social and health surveys used incentives because their surveys "...are long, sensitive, involve repeated interviews, and sometimes ask the respondent to leave their home or keep detailed records" (p. 61). Accordingly, they conducted a pretest using three conditions: a $20 incentive for an in-home interview; a $40 incentive for an outside-home interview; and a no-incentive control condition. Results indicated that the $20 incentive significantly increased response rates, mostly because of fewer refusals. Incentives also increased data quality (e.g., incentive groups were more likely to report accurate levels of abortion than the non-incentive group) and decreased the amount of time spent locating and converting respondents (by over two hours), resulting in cost savings nearly equal to the incentive amount. The $40 incentive significantly improved response rates compared to the no-incentive group, but there was no significant difference between the $20 and $40 incentive groups. The authors concluded that $20 is an effective incentive for an in-home interview. Similar results were reported by Duffer, Lessler, Weeks, and Mosher (1994).

Other social scientists have also found incentives to be an effective tool for increasing both survey response and retention, and for reducing costs. Berlin, Mohadjer, Waksberg, Lolstad, Kirsch, Rock, and Yamamoto (1992) paid National Adult Literacy Survey (NALS) respondents incentives of $0, $20, or $35 at the completion of a face-to-face interview. Results showed that incentives significantly increased response rates, but there was no significant difference between the $20 and $35 incentive groups. Results also showed that survey costs per interview were lower for the $20 incentive group than for the $0 and $35 incentive groups. Kerachsky and Mallar (1981) provided $5 per interview to a portion of respondents during face-to-face interviews in a longitudinal study of economically disadvantaged youths. The results showed that incentive payments were increasingly effective in each successive wave of the study. By the third wave of the study, the payment group was significantly more likely than the nonpayment group to: return update postcards (27% vs. 17%); be located (87% vs. 83%); and complete interviews (85% vs. 80%).

There is also considerable evidence that prepaid incentives achieve more substantial gains in response rates than do promised incentives, especially in mail surveys. Armstrong (1975) reviewed 18 empirical studies of monetary incentives used in mail surveys and concluded that only prepaid incentives show substantial reductions in nonresponse rates. Similarly, Church (1993) conducted a meta-analysis of 38 experimental and quasi-experimental studies to determine the effects of incentives on mail survey response rates. Significant increases in response rates were found only for surveys in which incentives were given in the initial mailing, with no evidence that incentives contingent on the return of the survey increased response rates. Compared to control conditions, response rates increased an average of 19.1 percent for monetary incentives and an average of 7.9 percent for non-monetary incentives. These findings are consistent with those of Peck and Dresch (1981) and Berry and Kanouse (1987), who found better response rates for prepaid incentives than for promised or no incentives in mail surveys.

Berk, Mathiowetz, Ward, and White (1987) investigated the effects of both pre- vs. post-paid and monetary vs. nonmonetary incentives ($5) for face-to-face and telephone interviews for the longitudinal National Medical Expenditure Survey (NMES). Results showed that prepaid incentives, but not promised incentives, increased survey response rates and lowered item nonresponse rates compared to no incentives, with only a moderate increase in cost. Berk et al. (1987) suggest that the benefit of prepaid incentives may result from decreasing the respondents' perceived burden, increasing their satisfaction with participating, and indicating the importance of the survey.

Recently, in a meta-analysis of 39 experiments, Singer, Gebler, Raghunathan, Van Hoewyk, and McGonagle (in press) found that payment of incentives significantly increases response rates for telephone and face-to-face interviews, especially for surveys with low initial response rates. Results also showed that prepaid incentives were more effective than promised incentives, although not at a statistically significant level. However, when comparing prepaid versus promised incentives within the same study, the authors found prepaid incentives to be significantly more effective than promised payments. In addition, the results suggest that incentives help to increase response rates in high-burden interview situations (e.g., when a survey is over an hour in length; contains a diary, tests, or sensitive questions; or is a panel study).

Finally, there has been some concern that incentives given early in longitudinal studies may create expectations of further incentives, which may reduce the rate or quality of response if these expectations are not met in later waves. Lengacher, Sullivan, Couper, and Groves (1995) investigated differences in cooperation rates in face-to-face interviews for the Health and Retirement Study (HRS) when a large incentive was given in the first wave followed by a smaller incentive in the second wave, compared to consistent incentives across waves. The authors concluded that "commitment to a longitudinal survey is not marginally harmed by large incentives in the first wave as a method to induce entry into a panel" (p. 1034). They suggest that large incentives in the first wave do not necessarily create respondent expectations for large incentives in the second wave, but rather that respondents may feel "a surplus of reward" (p. 1034), creating a positive feeling about the survey or a feeling of owed reciprocation to the survey organization. Similarly, Singer, Van Hoewyk, and Maher (1998) reported that respondents who received a monetary incentive in the past were more likely to participate in subsequent survey waves than those who had not received an incentive. The authors suggest that respondents may feel that their current participation was covered by the initial incentive.

V. Timing

We propose giving the incentive to all eligible SPD cases starting with the 1999 SPD which would be in the field at the end of April 1999. We would need OMB approval by March 1, 1999 to be able to implement the procedure successfully.

Attachments A - E

References

Armstrong, J. S. (1975). Monetary incentives in mail surveys. Public Opinion Quarterly, 39, 111-116.

Berk, M. L., Mathiowetz, N. A., Ward, E. P., and White, A. A. (1987). The effect of prepaid and promised incentives: Results of a controlled experiment. Journal of Official Statistics, 3, 449-457.

Berlin, M., Mohadjer, J., Waksberg, J., Lolstad, A., Kirsch, I., Rock, D., and Yamamoto, K. (1992). An experiment in monetary incentives. Proceedings of the Survey Research Methods Section of the American Statistical Association, 393-398.

Berry, S. H., and Kanouse, D. E. (1987). Physician response to a mailed survey. Public Opinion Quarterly, 51, 102-114.

Church, A. H. (1993). Estimating the effect of incentives on mail survey response rates: A meta-analysis. Public Opinion Quarterly, 57, 62-79.

Duffer, A., Lessler, J., Weeks, M., and Mosher, W. (1994). Effects of incentive payments on response rates and field costs in a pretest of a national CAPI survey. Proceedings of the Survey Research Methods Section of the American Statistical Association, 2, 1386-1391.

Groves, R. M., Cialdini, R. B., and Couper, M. P. (1992). Understanding the decision to participate in a survey. Public Opinion Quarterly, 56, 475-495.

Kerachsky, S. H., and Mallar, C. D. (1981). The effects of monetary payments on survey responses: Experimental evidence from a longitudinal study of economically disadvantaged youths. Proceedings of the Survey Research Methods Section of the American Statistical Association, 258-263.

Lengacher, J. E., Sullivan, C. M., Couper, M. P., and Groves, R. M. (1995). Once reluctant, always reluctant? Effects of differential incentives on later survey participation in a longitudinal study. Proceedings of the Survey Research Methods Section of the American Statistical Association, 1029-1034.

Mosher, W. D., Pratt, W. F., and Duffer, A. P. (1994). CAPI, event histories, and incentives in the NSFG cycle 5 pretest. Proceedings of the Survey Research Methods Section of the American Statistical Association, 1, 59-63.

Peck, J. K., and Dresch, S. P. (1981). Financial incentives, survey response, and sample representativeness: Does money matter? Review of Public Data Use, 9, 245-266.

Singer, E., Gebler, N., Raghunathan, T., Van Hoewyk, J., and McGonagle, K. (in press). The effect of incentives on response rates in face-to-face and telephone surveys. Journal of Official Statistics.

Singer, E., Van Hoewyk, J., and Maher, M. P. (1998). Does the payment of incentives create expectation effects? Public Opinion Quarterly, 62, 152-164.

                                                                                                                                                    Attachment A

 Incentive Use in Panel Surveys

 NELS:88

Amount                                    Dropouts paid $25-$75 dependent on time commitment
                                                Teachers paid dependent on number of students
                                                School coordinator paid $25, $50, or $75 ($50 average)

Time Promised                        Before interview

Time Delivered                       By check, upon completion of interview

Interview Mode/Length           Personal.

Differentials                            As above, dependent on time commitment

Incentive Effects                     No.
Tested?                                   After first follow-up, school coordinator fee was varied, but no reports released.

General Survey                      The NELS is a school-based survey. The sample is based on a
Information                             stratified, national probability sample of 1,052 public and private
                                               eighth-grade schools (yielding over 25,000 student respondents).
                                               Students were administered questionnaires in class,
                                               administrators completed surveys about the school, teachers
                                               completed surveys about the students, themselves, and the school,
                                               and a sub-sample of parents completed surveys about
                                               their children's home life and activities.

Response Rate(s)

Contact Person                      Steven Ingels                  773-256-6275

 High School and Beyond (HSB)

Amount                                 Dropouts paid $30-$50 dependent on time commitment
                                             School coordinators paid $30; 1980, 1982 waves
                                            (Note: Web page indicates incentives were given to all respondents,
                                            yet administrator says no. Ask him again)

Time Promised                      Before interview

Time Delivered                      By check, upon completion of interview

Interview Mode/Length         Personal.

Differentials                           As above, dependent on time commitment

Incentive Effects                   No.
Tested?

General Survey                  The HSB is a school-based survey. The sample is a stratified,
Information                        national sample of over 1,100 secondary schools. Respondents
                                          totaled over 58,000 students at 1,015 public and private
                                          schools. The survey yielded student questionnaires, cognitive
                                          tests, a school questionnaire (completed by school administrators), a
                                          teacher comment list, and a parent questionnaire completed by a
                                          subsample of parents.

Response Rate(s)

Contact Person                Steven Ingels          773-256-6275

 National Survey of Families and Households Wave I & II (NSFH1 and NSFH2)

Amount                           $20 per respondent. Up to two respondents per household.

Time Promised

Time Delivered                 By check, upon completion of interview

Interview Mode/Length     Personal. Wave I: 100 min. Wave II: avg 90 min.

Differentials                       No.

Incentive Effects                No.
Tested?

General Survey                 National probability sample of adults (n=13,007 in Wave I).
Information                       Minorities, single-parent and step-families, cohabiters, and
                                         newlyweds were oversampled. One adult per household was
                                         designated the respondent; spouses were also asked to complete
                                         related questionnaires.

Response Rate(s)              82%

Contact Person                 Jim Sweet          608-262-2182

 Panel Study of Income Dynamics (PSID)

Amount                           $20 per respondent (one per household)
                                       Formerly, $15 per respondent, plus $5 for completing change of
                                       address postcard

Time Promised                 Before first interview (letter)

Time Delivered                 By check, upon completion of interview

Interview Mode/Length

Differentials                       Finder's fee of $5 or $10 for helping to locate respondents. Toward the
                                           end of the field season, interviewers have discretion to offer additional
                                           money as a refusal conversion fee ($10)

Incentive Effects              In general, no.
Tested?                           In one wave, $20 versus $50 was tested on a one-time basis. $50
                                        was more effective, but created expectations for the next wave
                                        (respondents' hopes were dashed, however, as PSID went
                                        back to $20). (Am waiting to hear if there is a citation.)

General Survey              A panel study begun in 1968. Beginning with 5000 households,
Information                    the PSID is a nationally representative sample of US individuals. Over
                                      time, grown children of initial  respondents have been followed as they
                                      form their own households.

Response Rate(s)          53%-60% cumulative (depends on base; see Don Hernandez report)

Contact Person              Tom Gonzalez             734-936-0307

National Longitudinal Surveys - Youth (NLS-Y)

Amount                           $20 per household head, plus $5 per child in household

Time Promised               Before interview (letter)

Time Delivered              Upon completion of interview

Interview Mode/Length   Personal (20% via phone). Avg 75 minutes for main interview, plus
                                        additional time for child assessment and young adult interview.

Differentials                    Varied by household composition (number children). Interviewers had
                                        discretion to offer additional refusal conversion fees. Interviewers
                                        could also offer non-monetary conversion fees (e.g., pizza, movies for
                                        the kids)

Incentive Effects            In general, no. Refusal conversion fee amounts were tested; $100
Tested?                         fees statistically significantly increased responses. A sliding
                                     scale was just as effective, however. There is no report yet, as the
                                     test was conducted in September 1998.

General Survey              Begun in 1979 as a nationally representative sample of 12,686 men and
Information                    women aged 14-22 years. Minorities and poor whites were
                                      oversampled; some of these subsamples were dropped in 1991.

Response Rate(s)          84% (see Dan Weinberg or Steve McClaskie for more info.)

Contact Person              Randy Olsen                  614-442-7348

 National Longitudinal Study of Adolescent Health (AddHealth)

Amount                           $20 per respondent (cash)
                                        $1,000 per school

Time Promised                Before interview (in consent form)

Time Delivered              Upon completion of interview

Interview Mode/Length    Personal. 90 minutes.

Differentials                     No.

Incentive Effects              No.
Tested?

General Survey              The AddHealth is a school-based survey. Eligible high schools
Information                     had to have at least 30 students; students were selected from the school
                                       rosters.

Response Rate(s)          Wave I: 80% Wave II: 90% of Wave I (which would be 72%
                                       cumulative)

Contact Person              Jo Jones             919-962-8412

National Evaluation of Welfare to Work Strategies (JOBS)

Amount                    Waves I & II: $10 to $20 plus $5 gift for child, dependent on length of
                                survey and whether child also participated.

Time Promised        Before interview (letter)

Time Delivered        By check, upon completion of interview

Interview Mode/Length     Personal. Wave I: 30-90 min., dependent on additional modules.
                                        Wave II: 35-70 min., dependent on additional modules.

Differentials               Yes. Dependent on time commitment and survey wave.

Incentive Effects          No. However, researchers believe incentives increased responses.
Tested?

General Survey          The JOBS survey evaluates seven state welfare employment
Information                 programs begun under the 1988 Family Support Act and continued under
                                   TANF. The sites are: Atlanta, GA; Grand Rapids, MI; Riverside, CA;
                                   Columbus, OH; Detroit, MI; Oklahoma City, OK; and Portland, OR.

Response Rate(s)      Wave I: 83% Wave II: Approx 82% (still in field)

Contact Person          Greg Hertz (sp?)         212-340-8670

Detroit Area Study, 1996 (DAS)

Amount                                    $5 to respondents
                                                $25 refusal conversion fee

Time Promised                        Before interview

Time Delivered                        $5 - in letter before interview
                                                 $25 - in cash upon completion of interview (B: verify)

Interview Mode/Length             Personal. 60 minutes.

Differentials                               $5 sent to only 2/3 of sample households
                                                   $25 randomly assigned to half the refusal households

Incentive Effects Tested?              Yes. $5 increases responses at p<.05.
                                                    $25 increases responses of refusers, but not statistically
                                                    significantly (base is refusers)
General Survey                          The DAS is an area-probability sample of the Detroit
 Information                                metropolitan area. The purpose of the survey was two-fold: 1)
                                                   the study of social/geographic stratification; and 2) the testing
                                                   of incentive effects and study of attitudes toward incentive use.

Response Rate(s)                      68%

Contact Person                          Barbara Downs                 301-457-2465

 Health and Retirement Survey (HRS)

Amount                                   $20 per respondent. Up to two respondents per household

Time Promised                      Before interview (letter)

Time Delivered                     By check, before interview

Interview Mode/Length         Personal. Avg 65 minutes. Wave I was a few minutes longer, as
                                              baseline information was collected then.

Differentials                           $100 refusal conversion at end of Waves I and II. Respondents
                                               received FedEx letters promising the $100 in exchange for an
                                               interview ($200 in couple households).

Incentive Effects Tested?          No. A conversion test indicates those who received money and
                                                letter were more likely to complete the interview than those who
                                                only received a letter (correspondence with Dan Hill).

General Survey                     The HRS is an area-probability sample of households. The target
Information                            population is all adults in the contiguous US, born 1931-1941, who
                                              reside in households. The HRS follows adults who move into
                                              institutions after the initial interview. Blacks, Hispanics, and
                                              Floridians were oversampled.

Response Rate(s)                Wave I: 80.2%-82.1% (depends on base; I have documentation
                                            available)
                                            Wave II: 92.06% Wave III: 87.65% (Waves II and III base is all
                                            respondents from Wave I who are still alive at the respective
                                            subsequent interviews.)

Contact Person                  Dan Hill                     dhhill@umich.edu
 
 

DO MONETARY INCENTIVES IMPROVE RESPONSE RATES IN THE SURVEY OF INCOME AND PROGRAM PARTICIPATION?

Stephen Mack, Vicki Huggins, Donald Keathley, and Mahdi Sundukchi, U.S. Bureau of the Census
Stephen Mack, U.S. Bureau of the Census, Demographic Statistical Methods Division, Washington D.C. 20233

Key Words: Incentives, Nonresponse

Abstract

The Survey of Income and Program Participation (SIPP) used a monetary incentive in the initial interview of the 1996 panel to lower nonresponse rates. As in other longitudinal surveys, nonresponse rates increase in SIPP panels over time. We plan to interview sample households in the 1996 SIPP panel over a longer period than previous panels, 48 months versus 32 months. Consequently, we expect nonresponse to reach record levels, 30% or more by the end of the panel. We conducted an experiment to study the effect of $10 and $20 incentives on nonresponse and interviewing costs. James [1997] analyzed data from the first year of the panel. She found that the $20 incentive was effective in lowering nonresponse rates and that any incentive lowered the number of interviewer visits needed per case. This paper extends the analysis to cover interviews over two years, studies additional population subgroups, and looks at item completion rates.

I. Introduction

The SIPP is a longitudinal survey conducted by the U.S. Census Bureau which provides national estimates of sources, amounts, and determinants of income for households, families, and persons. The principal goal of the SIPP is to provide information to federal policy makers to assist in evaluation and reform of welfare programs, taxes, and entitlement programs. In order to achieve these goals, the SIPP provides both cross-sectional and longitudinal estimates (such as transition probabilities and spell durations).

Interviewing of SIPP panel members usually starts in February of the panel year (the 1984 and 1996 panels are exceptions). Subsequent interviews take place at four month intervals until the panel ends. One round of interviewing of the entire panel is called a wave. SIPP panels are divided into four rotation groups of approximately equal size. One rotation group is interviewed each month. This arrangement smooths out interviewing workloads and reduces bias in transition estimates.
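The rotation scheme above can be sketched in a few lines. This is a hypothetical illustration, not Census Bureau code; month numbers are relative to the panel's first interviewing month.

```python
# Rotation-group scheduling sketch: four rotation groups, one interviewed
# each month, each group re-interviewed at four-month intervals.

def interview_months(rotation, waves):
    """Return the months in which a rotation group (1-4) is interviewed."""
    start = rotation  # rotation 1 in month 1, rotation 2 in month 2, ...
    return [start + 4 * wave for wave in range(waves)]

for rotation in range(1, 5):
    print(f"rotation {rotation}: months {interview_months(rotation, 3)}")
```

Each month exactly one rotation group is in the field, which is how the design smooths interviewing workloads.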

In the initial interview, all persons living at sample addresses are listed as household members. Persons who are 15 years of age and older are interviewed and become original sample persons. Original sample persons are the units of observation for SIPP and are followed for the life of the panel. Exceptions include those who die, move abroad, or move into an institution or military barracks. Persons who move into households with original sample persons after wave 1 are also interviewed as long as they continue to reside with an original sample person.

Details of SIPP panels, such as sample size and panel length, vary among panels. More substantial changes are made after each Decennial Census when we update the sample frame and select new sample. The 1990 redesign of the SIPP took effect with the 1996 panel. We reduced cluster sizes, oversampled for poverty, introduced computer assisted interviewing, and made other changes.

In the first interview of the 1996 panel, wave 1, we obtained interviews from 92% of eligible households; about 36,700 interviews. Like other longitudinal surveys, SIPP noninterview rates increase as panels get older. The household noninterview rate of the 1996 panel stood at 26.4% as of the end of wave 6.

The SIPP conducted an incentive experiment in the initial interview of the 1996 panel to study the effect of incentives on nonresponse rates. SIPP primary sample units (psu's) were divided into three groups to receive no incentive, a $10 incentive, or a $20 incentive. Sample addresses in rotations 2,3, and 4 in the $10 and $20 groups were given vouchers (redeemable by mail) by interviewers immediately before the interview. James [1997] reported on the effectiveness of the incentive up through wave 3. She looked at nonresponse rates and interview cost data among households that were sent out for interviewing; we do not attempt further interviews with households that do not respond in wave 1 or have two consecutive noninterviews. James found that $20 incentives were effective in lowering nonresponse rates in waves 1-3 and that any incentive lowered the number of interviewer visits needed per case in wave 1.

In this paper, we will cover incentive results through wave 6. We compare household nonresponse between population subgroups defined by within-psu stratum (high poverty/low poverty), March poverty status, race, and education. Cumulative household nonresponse rates are used throughout the paper rather than wave nonresponse; i.e., households we no longer attempt to interview due to prior nonresponse are counted as nonrespondents.
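The cumulative-rate convention can be illustrated with a small sketch; the eligible count and per-wave losses below are invented, not SIPP figures.

```python
# Cumulative household nonresponse: households dropped for prior
# nonresponse remain counted as nonrespondents in every later wave.
# All counts below are hypothetical.

def cumulative_rates(eligible, newly_lost_by_wave):
    """Cumulative nonresponse rates given households newly lost per wave."""
    rates = []
    lost = 0
    for newly_lost in newly_lost_by_wave:
        lost += newly_lost
        rates.append(lost / eligible)
    return rates

rates = cumulative_rates(eligible=1000, newly_lost_by_wave=[80, 60, 45])
for wave, rate in enumerate(rates, start=1):
    print(f"wave {wave}: cumulative nonresponse = {rate:.1%}")
```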

Another issue we consider is whether incentives are effective at a person level. Some researchers have suggested that incentives can influence the quality and amount of information obtained from persons. To study this issue, we look at a few person-level rates: noninterview rates of persons within interviewed households (Type Z's), proxy interview rates, and nonresponse rates for gross wages.

II. Literature Review

There are many reports of positive results from using incentives. Ferber and Sudman [1974] reviewed a number of incentive studies. They found that the effect of incentives depends on respondent burden (i.e., the effort needed to cooperate), the amount of the incentive, and the economic level of the respondent. Berlin, et al. [1992] reported that a $20 incentive increased response rates for subgroups with low levels of literacy and lowered interviewer costs. Incentives may increase the willingness of respondents to provide information. A variable incentive was used in an education assessment study (Chromy and Horvitz [1978]). Young adults, age 26 to 35, were asked to take one or more assessment packages. Most respondents decided to take the maximum number of assessments to receive the highest incentive. The literature is mixed, but the following results were found in many studies:

    * Large incentives increase response rates more than small incentives.
    * Incentives are effective for underrepresented populations, such as low income and low
      education.
    * Incentives are effective in surveys with high respondent burden such as panel or diary
       studies.
    * Incentives can reduce interviewer time and costs.
    * Incentives may increase respondent cooperation; i.e., respondents may provide more
       information when given incentives (Chromy and Horvitz [1978]).

Gbur [1988] reported on an incentive experiment in the SIPP 1987 panel. A small gift was given to households scheduled for April 1987 interviews, about 25% of the total sample. The remainder of the panel was interviewed in February, March, and May. Interview rates were 1% higher for gift-recipient households than for nonrecipient households.

III. Design of the SIPP Incentives Experiment

SIPP sample psu's were sorted by size and divided into incentive groups using systematic sampling. Incentives were distributed to sample addresses in $10 and $20 incentive groups during rotations 2,3, and 4 of wave 1. Incentives were not distributed in rotation 1. Table 1 gives counts of eligible households by incentive group and incentive versus nonincentive rotations.
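The assignment step can be sketched as follows. The PSU names and sizes are invented, and the sketch uses a fixed starting point for simplicity, whereas an actual systematic sample would use a random start.

```python
# Systematic assignment of PSUs to three incentive groups: sort PSUs by
# size, then assign groups in rotation down the sorted list so that each
# group receives a similar mix of PSU sizes. Data are hypothetical.

psus = [("psu_a", 1200), ("psu_b", 450), ("psu_c", 800),
        ("psu_d", 300), ("psu_e", 950), ("psu_f", 600)]

labels = ["$0", "$10", "$20"]
groups = {label: [] for label in labels}

for i, (name, size) in enumerate(sorted(psus, key=lambda p: p[1])):
    groups[labels[i % len(labels)]].append(name)

print(groups)
```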
 
Table 1. Wave 1 households eligible for interviewing.

incentive    rotation 1        rotations 2-4
group        (no incentive)    (incentive)
$0                3529             10328
$10               3219              9686
$20               3388             10038

Vouchers for $10 and $20 were distributed by SIPP interviewers at the door immediately after verifying the address. Interviewers gave vouchers to noninterviewed as well as interviewed households. Recipients were instructed to fill in their name, check the address, and return the voucher to the Census Bureau in the postage paid preaddressed envelope. After receiving the voucher, the Census Bureau mailed a check to the recipient within 2 to 3 weeks.

In this paper, we compare response rates and imputation rates. All estimates are weighted, using base weights (i.e., the inverse of the probability of selection) or final weights, as noted. Differences are examined using two-tailed tests based on the normal distribution. Significance is reported at the 10% level. Two types of comparisons are made:

    * differences of rates. The nonresponse rates of households in rotations 2,3, and 4 are
      compared between incentive groups. Significantly lower nonresponse rates in the $20
      incentive group are expected if a $20 incentive is effective in lowering nonresponse.

    * differences of differences. The differences of nonresponse rates from rotation 1 to rotations
      2,3, and 4 are compared between incentive groups. If the $20 incentive is effective in
      reducing nonresponse, then the change in nonresponse rates should be greatest in the $20
      incentive group.
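The first type of comparison can be sketched as a standard two-proportion test. This is a minimal illustration assuming simple random sampling; the actual SIPP analysis would account for the complex sample design, and the sample sizes below are invented.

```python
from math import sqrt

def two_prop_z(p1, n1, p2, n2):
    """Two-tailed z statistic for the difference of two proportions,
    using the pooled estimate of the common proportion."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Compare a 9.18% and a 7.72% nonresponse rate (hypothetical sample sizes).
z = two_prop_z(0.0918, 10000, 0.0772, 10000)
# Two-tailed critical value at the 10% level is 1.645.
print(f"z = {z:.2f}, significant at 10%: {abs(z) > 1.645}")
```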

IV. Nonresponse Rates

Within PSU Stratum

We oversampled for low-income households using a stratification approach proposed by Waksberg [1973]. Two within-psu strata were formed, one with a high concentration of poverty and one with a low concentration. In wave 1, we found a poverty rate of 27% in the high poverty stratum and 11% in the low poverty stratum.

Table 2 gives nonresponse rates in rotations 2-4 by poverty stratum. Nonresponse rates are significantly lower in every wave for the $20 incentive group when compared to the $0 and $10 incentive groups: for the high poverty stratum; for the low poverty stratum; and overall.

Differences in nonresponse rates in rotation 1 and rotations 2-4 are shown in Table 3. Positive differences indicate lower nonresponse rates in rotations 2-4 than in rotation 1. Significant overall decreases in rates occur in waves 2 through 6 within the $20 incentive group. The $20 incentive was particularly effective in the high poverty stratum where relatively large differences occurred in all waves.
 
Table 2. Household nonresponse by poverty stratum. Rotations 2-4 only, weighted by base weights.

wave   incentive   High Poverty   Low Poverty   Overall
       group       Stratum        Stratum
1      $0             .30%           9.14%        9.18%
       $10           8.12%           9.51%        9.26%
       $20           5.91%*+         8.16%*+      7.72%*+
2      $0           16.06%          14.88%       15.13%
       $10          13.77%*         14.44%       14.32%
       $20          11.40%*+        13.05%*+     12.72%*+
3      $0           19.18%          18.10%       18.33%
       $10          17.65%          18.17%       18.08%
       $20          14.39%*+        16.12%*+     15.77%*+
4      $0           22.36%          21.22%       21.46%
       $10          20.74%          21.27%       21.18%
       $20          16.91%*+        19.33%*+     18.85%*+
5      $0           25.53%          24.48%       24.70%
       $10          24.26%          24.24%       24.24%
       $20          21.06%*+        22.78%*+     22.44%*+
6      $0           28.98%          27.27%       27.64%
       $10          27.10%          26.70%       26.77%
       $20          23.00%*+        25.22%*+     24.78%*+
* significantly different from $0 incentive group
+ significantly different from $10 incentive group

 
Table 3. Household nonresponse by poverty stratum. Difference of rotation 1 and rotations 2, 3, and 4 weighted by base weights.

wave   incentive   High Poverty   Low Poverty   Overall
       group       Stratum        Stratum
1      $0           -0.05%         -0.33%        -0.27%
       $10           0.34%         -0.33%        -0.21%
       $20           2.78%         -0.31%         0.31%
2      $0            0.67%          0.51%         0.55%
       $10           2.64%          1.09%         1.37%
       $20           5.18%*         1.81%         2.49%*
3      $0           -0.58%          0.45%         0.23%
       $10           1.07%         -0.12%         0.09%
       $20           6.11%*+        2.30%*+       3.07%*+
4      $0           -1.35%         -0.76%        -0.88%
       $10           0.27%         -0.53%        -0.39%
       $20           5.42%*+        1.45%*        2.25%*+
5      $0            0.49%         -0.38%        -0.17%
       $10           0.79%          0.35%         0.43%
       $20           3.94%          2.38%*        2.69%*+
6      $0           -0.73%         -1.47%        -1.30%
       $10           1.71%          0.49%         0.70%
       $20           4.64%*         1.78%*        2.34%*
* significantly different from $0 incentive group
+ significantly different from $10 incentive group
shaded differences are significantly different from 0

The change in nonresponse rates from rotation 1 to rotations 2-4 is often larger in the $20 incentive group than in other incentive groups. Overall nonresponse rate differences are largest within the $20 incentive group for waves 3,4, and 5.

The $10 incentive does not appear to significantly influence nonresponse rates overall or within poverty strata. The only significant favorable result for the $10 incentive group occurs in wave 2.

Wave 2+ Rates by Poverty Status

Analysis of wave 1+ nonresponse rates is limited to the few variables whose values are known for wave 1 nonrespondents. Geographic and sampling variables are known. Interviewers are asked to provide their best guess of the householder's race and sex, as well as household size and tenure. For other characteristics, we can study the effect of incentives on wave 2+ nonresponse rates; i.e., nonresponse of wave 1 respondents.

Incentives are thought by many researchers to be most effective in low-income areas. Wave 2+ noninterview rates are shown in Tables 4 and 5 by the March poverty status of the original household. Nonresponse is lower in rotations 2-4 for both poverty and nonpoverty households in the $20 incentive group, except for wave 4 poverty. The $20 incentive appears, at first glance, to be more effective for poverty households than for nonpoverty households; however, the differences are not statistically significant except for wave 2.
 
 
Table 4. Wave 2+ nonresponse rates for households in poverty as of March, weighted by base weights.

wave   incentive   rotation 1   rotations 2-4   difference
       group                                    r[1]-r[2-4]
2      $0            7.10%         7.87%         -0.77%
       $10          10.06%         5.97%*         4.09%*
       $20          13.53%*        7.73%+         5.80%*
3      $0           10.28%        10.81%         -0.53%
       $10          13.59%        11.12%          2.47%
       $20          16.70%*       10.69%          6.01%*
4      $0           13.24%        14.83%         -1.59%
       $10          15.66%        13.56%          2.10%
       $20          16.46%        14.70%          1.76%
5      $0           15.55%        19.03%         -3.48%
       $10          20.80%*       17.11%          3.69%*
       $20          21.91%*       17.49%          4.42%*
6      $0           16.39%        23.50%         -7.11%
       $10          24.88%*       19.64%*         5.24%*
       $20          25.27%*       20.63%*         4.64%*
* significantly different from $0 incentive group
+ significantly different from $10 incentive group
shaded differences are significantly different from 0

 
Table 5. Wave 2+ nonresponse rates for households not in poverty as of March, weighted by base weights.

wave   incentive   rotation 1   rotations 2-4   difference
       group                                    r[1]-r[2-4]
2      $0            7.61%         6.51%          1.10%
       $10           7.19%         5.72%*         1.47%
       $20           7.07%         5.22%*         1.85%
3      $0           10.88%        10.20%          0.68%
       $10           9.91%         9.81%          0.10%
       $20          11.23%         8.64%*+        2.59%*+
4      $0           12.98%        13.61%         -0.63%
       $10          12.93%        13.43%         -0.50%
       $20          14.12%        11.90%*+        2.22%*+
5      $0           17.72%        17.11%          0.61%
       $10          17.14%        16.85%          0.29%
       $20          18.41%        16.05%*         2.36%+
6      $0           19.93%        20.16%         -0.23%
       $10          20.11%        19.70%          0.41%
       $20          20.42%        18.53%*+        1.89%*
* significantly different from $0 incentive group
+ significantly different from $10 incentive group
shaded differences are significantly different from 0

Nonresponse Rates by Race

Nonresponse rates are given by race and incentive group in Tables 6 and 7. We use the race of the original wave 1 householder in all waves.

About 87% of SIPP sample households are headed by non-Blacks in wave 1, so it's little surprise that results in Table 6 are similar to results in Tables 2 and 3 for the general population. Nonresponse rates are lower in the $20 group than in the $0 and $10 groups for rotations 2 through 4 of every wave. Nonresponse rates decrease in rotations 2-4 for the $20 incentive group in every wave except wave 1.

Looking at "difference" column in Table 7, the $20 incentive is generally effective in decreasing noninterview rates of Black households. Significant decreases in nonresponse rates occur in waves 2 through 6. The $10 incentive is effective in waves 1, 2, and 6. Nonresponse rates decrease more in the $10 and $20 incentive groups than in the $0 incentive group for waves 2, 5, and 6; however, this may be due to the unusually low nonresponse rates in rotation 1 for the $0 incentive group.

Rotation 2-4 nonresponse rates do not differ significantly between incentive groups in most cases. The differences that do occur are not consistent in direction.

Table 6. Nonresponse rates of non-Black households, weighted by base weights.

wave   incentive   rotation 1   rotations 2-4   difference
       group                                    r[1]-r[2-4]
1      $0            9.04%         9.16%         -0.12%
       $10           8.51%         9.22%         -0.72%
       $20           7.80%*        7.70%*+        0.09%
2      $0           15.63%        14.84%          0.79%
       $10          14.96%        14.17%          0.79%
       $20          14.60%        12.41%*+        2.19%
3      $0           18.64%        18.19%          0.45%
       $10          17.41%        17.80%         -0.39%
       $20          18.07%        15.51%*+        2.56%*+
4      $0           20.59%        21.35%         -0.76%
       $10          20.12%        20.89%         -0.77%
       $20          20.26%        18.54%*+        1.72%*+
5      $0           24.24%        24.59%         -0.35%
       $10          23.86%        23.75%          0.11%
       $20          24.26%        22.05%*+        2.21%*
6      $0           26.14%        27.56%         -1.42%
       $10          26.30%        26.14%*         0.16%
       $20          26.08%        24.38%*+        1.70%*
* significantly different from $0 incentive group
+ significantly different from $10 incentive group
shaded differences are significantly different from 0

Table 7. Nonresponse rates of Black households, weighted by base weights.

wave   incentive   rotation 1   rotations 2-4   difference
       group                                    r[1]-r[2-4]
1      $0            8.02%         9.39%         -1.37%
       $10          13.16%*        9.54%          3.62%*
       $20           9.93%         7.88%          2.05%
2      $0           16.12%        17.36%         -1.24%
       $10          21.10%*       15.48%*         5.62%*
       $20          20.15%        15.48%          4.67%*
3      $0           17.91%        19.35%         -1.44%
       $10          23.95%*       20.15%          3.80%*
       $20          25.11%*       18.10%          7.01%*
4      $0           20.52%        22.34%         -1.82%
       $10          25.83%*       23.37%          2.46%
       $20          27.93%*       21.63%          6.30%*
5      $0           26.67%        25.53%          1.14%
       $10          30.80%        27.95%          2.85%
       $20          32.25%*       25.89%          6.36%
6      $0           27.85%        28.26%         -0.41%
       $10          36.27%*       31.57%*         4.70%
       $20          35.66%*       28.35%+         7.31%*
* significantly different from $0 incentive group
+ significantly different from $10 incentive group
shaded differences are significantly different from 0

Table 8. Wave 2+ household nonresponse by education of original wave 1 householder. Difference of rotation 1 and rotations 2, 3, and 4 weighted by base weights.

wave   incentive   < bachelor    bachelor+
       group       r[1]-r[2-4]   r[1]-r[2-4]
2      $0            0.90%         0.54%
       $10           1.87%         1.54%
       $20           2.44%*        2.26%
3      $0            0.36%         0.70%
       $10           0.35%         0.29%
       $20           3.12%*+       2.84%
4      $0           -0.70%        -1.48%
       $10          -0.32%        -0.14%
       $20           2.09%*+       2.59%*
5      $0           -0.02%        -0.35%
       $10           0.73%         0.15%
       $20           2.47%*        3.63%*
6      $0           -1.63%        -0.23%
       $10           0.74%*        1.52%
       $20           2.36%*        2.26%
* significantly different from $0 incentive group
+ significantly different from $10 incentive group
shaded differences are significantly different from 0

Comparing the "difference" column in Table 6 with the "difference" column in Table 7, the $20 incentive appears to be more effective for Black households than for non-Black households. The differences are statistically significant in waves 3, 4, and 6.

Nonresponse Rates by Education

Berlin et al. [1992] reported on an incentive experiment in the National Adult Literacy Survey. In that study, a $20 incentive significantly improved response rates of people with low educational attainment.

Wave 2+ response rate differences are given in Table 8 by educational attainment of the wave 1 householder. Response rate differences are similar across education groups. About 78% of SIPP households are headed by persons without bachelor's degrees. Significant response rate increases occur in every wave among low education households in the $20 incentive group. The $20 incentive was also effective for high education households in waves 2 through 5.

V. Imputation Rates

Incentives are known to affect some measures of respondent cooperation. The number of interviewer callbacks may be reduced. Respondents may be willing to provide more complete information when incentives are given. In this section, we look at a few measures of person and item nonresponse.

SIPP interviewers try to obtain interviews from each person 15 years of age and older who lives at the sample address. Proxy interviews are taken when self interviews (person answers for self) cannot be obtained. Noninterviews of persons, by self or proxy, within interviewed households are referred to as Type Z noninterviews. We impute data for Type Z noninterviews rather than use a weighting adjustment. Table 9 shows the difference of rotation 1 and rotation 2-4 proxy and Type Z rates by incentive group.

Table 9. Wave 1 Type Z and proxy rates. Difference of rotation 1 and rotations 2, 3, and 4 weighted by final weights.

          Incentive Group
          $0          $10         $20
Proxy    -0.254%     -1.032%     -0.052%
Type Z   -0.107%     -0.262%      0.617%*+
* significantly different from $0 incentive group
+ significantly different from $10 incentive group
shaded differences are significantly different from 0

Proxy rates are not significantly affected by incentives. The $20 incentive is effective in reducing Type Z rates. Type Z rates are generally around 2% in SIPP panels, so a change of .6% is large in relative terms.

The SIPP asks persons to tell us the amount of income they receive from jobs. This question is considered sensitive and many people refuse to answer it. Table 10 shows item imputation rates for gross wages in March 1996. The $20 incentive is effective in lowering item imputation rates for this question. Item imputation rates in rotations 2-4 are lowest in the $20 incentive group and also show significant improvement between rotation 1 and rotations 2-4.
 
Table 10. Percent of persons with jobs in March 1996 who had imputed amounts of gross pay for any job, weighted by final person weights.

incentive   rotation 1   rotations 2-4   difference
group                                    r[1]-r[2-4]
$0           11.61%        12.03%         -0.42%
$10          12.22%        12.45%         -0.23%
$20          12.11%        10.47%*+        1.64%*
* significantly different from $0 incentive group
+ significantly different from $10 incentive group
shaded differences are significantly different from 0

 

VI. Other SIPP Incentives

The 1996 panel has suffered from higher nonresponse rates than any previous SIPP panel. By the end of wave 5, the level of nonresponse had risen to 24%. The two most recent panels, 1992 and 1993, averaged 20% at the end of wave 5. Given the high level of nonresponse and the results of the wave 1 incentive, it was decided to offer an additional incentive in wave 7. We gave a $20 incentive to all low-income households (< 150% poverty in wave 1) that received an incentive in wave 1. Sundukchi [1998] discusses our wave 7 incentive plans in greater detail.

Winters [1998] proposes a wave 8-9 incentive experiment to study the effects of incentives on converting Type A nonresponse (all nonresponse except for movers that we cannot locate) to interviews in the following wave. Conversion rates of Type A's in the following wave are typically low, e.g., less than 40% for waves 2 and 3 of the 1996 panel. The proposal envisions three levels of incentives: a $0 control group, a $20 incentive, and a $40 incentive. Type A households will be randomly assigned to one of the incentive groups and receive the incentive in advance of the subsequent interviewer visit.
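The proposed random assignment step might look like the following sketch. The household IDs, seed, and group sizes are invented for illustration; this is not the actual Census Bureau procedure.

```python
import random

# Randomly assign Type A households to the three incentive arms of the
# proposed wave 8-9 experiment. A fixed seed makes the illustration
# reproducible; the real assignment would use a production random source.
random.seed(1998)

type_a_households = [f"hh{i:03d}" for i in range(9)]
arms = ["$0", "$20", "$40"]

shuffled = type_a_households[:]
random.shuffle(shuffled)

# Deal the shuffled households out evenly across the three arms.
assignment = {hh: arms[i % len(arms)] for i, hh in enumerate(shuffled)}

for arm in arms:
    members = sorted(hh for hh, a in assignment.items() if a == arm)
    print(arm, members)
```

Shuffling before dealing keeps the arms balanced in size while leaving arm membership random.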

Conclusions

Twenty dollar incentives reduced household, person, and item (gross wages) nonresponse rates in the initial interview. Household nonresponse remained lower in subsequent interviews as well. The $20 incentive was particularly effective for poverty and Black households. Ten dollar incentives did not significantly reduce nonresponse.

References

Berlin, M., Mohadjer, L., Waksberg, J., Kolstad, A., Kirsch, I., Rock, D., and Yamamoto,
    K. (1992) An Experiment in Monetary Incentives. Proceedings of the Survey Research
    Section of the American Statistical Association, 393-398.

Chromy, J., and Horvitz, D. (1978) The Use of Monetary Incentives in National
    Assessment Household Surveys. Journal of the American Statistical Association, 73, 473-478.

Ferber, R., and Sudman, S. (1974) Effects of Compensation in Consumer Expenditure
    Studies.  Annals of Economic and Social Measurement, 3. 319-331.

Gbur, P. (1988) SIPP 87: Gift Experiment Results Through Wave 3. Census Bureau
    Memorandum from Gift Experiment Workgroup to Singh, July 6, 1988.

James, T. (1997) Results of the Wave 1 Incentive Experiment in the 1996 Survey of Income and
    Program Participation. Proceedings of the Survey Research Section of the American
    Statistical Association

Sundukchi, M. (1998) SIPP 96: Wave 7 Incentives. Census Bureau memorandum from Baer to
    Kirkendall, April 1, 1998.

Waksberg, J. (1973) The Effect of Stratification with Differential Sampling Rates on Attributes
    of Subsets of the Population. Proceedings of the Social Statistics Section of the American
    Statistical Association, 429-435.

Winters, F. (1998) SIPP 96: Incentives for Reducing Attrition. Census Bureau Memorandum
    from Tupek to Kirkendall, May 29, 1998.

Draft 1/6/99
Attachment C
MEMORANDUM FOR             Documentation.

From:                                         Mahdi S. Sundukchi
                                                   Survey of Income and Program Participation Branch
                                                   Demographic Statistical Methods Division

Subject:                                      SIPP 96: Some Results from the Wave 7 Incentive Experiment.

Executive Summary

This memorandum summarizes results from the Survey of Income and Program Participation (SIPP) 96 Panel, Wave 7 incentive experiment. It presents an overall analysis of the effect of a monetary incentive on nonresponse rates, compared with earlier waves of the same panel. Twenty dollar debit cards were given to all low-income housing units that were also given an incentive in Wave 1. The Wave 7 incentive appears to significantly reduce the nonresponse rate for low-income households, while there is no significant evidence that the $20 incentive helps to reduce the nonresponse rate for the nonpoverty group. It also appears that incentives work better at reducing the nonresponse rate for low-income non-Black units; among low-income Black households, the sample size is too small to detect significance.

Highlights

Data from rotations 2, 3, and 4 of Wave 7 of the SIPP 96 Panel indicate that incentives work well with households in poverty. One can also consider the following results:

    * Generally speaking, the Wave 1 and Wave 7 incentives together significantly reduced the
      Type A nonresponse rate.

    * The Wave 7 incentive alone (i.e., averaging out the Wave 1 incentive effects) reduced the
      Type A nonresponse rate for households in poverty.

    * Among all households interviewed in Wave 6, the Wave 7 Type A nonresponse rate is
      significantly lower for the $20 incentive group than for the $0 group. Additionally, the
      Wave 7 refusal rate is significantly lower in the $20 group than in the $0 group.

    * The Wave 7 incentive reduced the nonresponse rate among low-income non-Black units,
      while statistical significance was not found among low-income Black units.

    * There is no evidence that the $20 incentive affects the nonresponse rate in the nonpoverty group.

Wave 7 Incentive Experiment Eligibility

Wave 7 twenty dollar debit cards were given to households meeting the following criteria:

a.     The household's income was less than or equal to 150 percent of the poverty threshold
        during the Wave 1 interview.

b.     The household received an incentive during the Wave 1 interview.

c.     All spawned (split) households formed since Wave 1 from the eligible households.

d.     In-sample neighbors of the households in item (a) above. This is a result of sampling
        clusters of households in the Area Frame, which is approximately 20% of the total sample.
        We may have 4 households on the same street in sample, and we want to ensure that
        neighbors are treated consistently with incentives.
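Criteria (a) and (b) amount to a simple filter over household records. The sketch below is hypothetical: the field names and sample records are invented, and criteria (c) and (d) would require household-linkage and cluster information handled separately.

```python
# Filter for the direct Wave 7 incentive criteria (a) and (b):
# income at or below 150% of the poverty threshold in Wave 1, and a
# Wave 1 incentive received. Field names and records are invented.

def meets_direct_criteria(household):
    return (household["income_pct_of_poverty"] <= 150
            and household["wave1_incentive"] > 0)

sample = [
    {"id": 1, "income_pct_of_poverty": 120, "wave1_incentive": 20},
    {"id": 2, "income_pct_of_poverty": 200, "wave1_incentive": 10},
    {"id": 3, "income_pct_of_poverty": 140, "wave1_incentive": 0},
]

eligible = [hh["id"] for hh in sample if meets_direct_criteria(hh)]
print(eligible)  # only household 1 satisfies both criteria
```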

Table 1 summarizes Wave 7 outcomes for different groups; we will use these to test some hypotheses of interest. The estimated number of households that received incentives is 5469, of which 4065 are in poverty. Groups A and C can serve as a control treatment for the poverty group, and Groups B, F, G, and I for the nonpoverty group. Alternatively, one can consider the Wave 6 counterpart of the $20 incentive Wave 7 group as a control group, since no incentives were given in Wave 6.

TABLE 1: Weighted Outcome Rate for Wave 1 and Wave 7 Incentive Groups by Rotation and Poverty Status.

                    Incentive                       Wave 7 Outcomes
Rotation  Poverty   Wave 1   Wave 7   Group   Interviewed   Type A   Type D    Total
1         Pov.      $0       $0       A          5.04        0.36     0.23      5.63
          Not       $0       $0       B         17.30        1.57     0.47     19.34
2-4       Pov.      $0       $0       C          5.24        0.41     0.28      5.93
                    $10      $20      D          4.78        0.27     0.19      5.24
                    $20      $20      E          5.30        0.36     0.24      5.90
          Not       $0       $0       F         17.22        1.62     0.42     19.26
                    $10      $0       G         15.37        1.37     0.42     17.16
                             $20      H          1.73        0.14     0.03      1.90
                    $20      $0       I         15.82        1.33     0.46     17.61
                             $20      J          1.85        0.13     0.06      2.04
Total                                           89.65        7.56     2.79    100.00

Statistical Results

Table 2 below shows the weighted Type A nonresponse rate in Waves 6 and 7 for households that are in poverty. From this table, we find a significant difference between the $20 incentive group and the $0 group in Wave 7. The '*' on the Wave 7 $20 incentive cell indicates a statistically significant difference from every other entry in the table.

TABLE 2: Weighted Type A Nonresponse Rate for Households in Poverty, Wave 6 and 7 Incentive Group, Rotation 2-4.

  Incentive Group   Wave 6   Wave 7
  $0                  7.62     6.92
  $20                 6.75     5.69*

Several other results from Table 2 are summarized below:

  • Result 1: The Type A nonresponse rate in Wave 6 is not significantly different between the $20 incentive group and the $0 group. This is expected, since no money was given in Wave 6.
  • Result 2: The Type A nonresponse rate in Wave 7 is significantly different between the $20 incentive group and the $0 group. This reflects the combined effect of incentives versus no incentives in Waves 1 and 7.
  • Result 3: The Type A nonresponse rate in the $20 incentive group is significantly different between Wave 6 and Wave 7. Since the same Wave 1 incentive applies in both waves, this reflects the effect of the Wave 7 incentive alone (after removing the Wave 1 incentive effects).
  • Result 4: The Type A nonresponse rate in the $0 incentive group is not significantly different between Wave 6 and Wave 7.
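The weighted rates compared above are ratios of summed household weights. A minimal sketch of the calculation (the record layout and weight values are our illustration, not the SPD files):

```python
# Sketch: how a weighted Type A nonresponse rate (as in Table 2) is computed.
# Each household contributes its survey weight to the numerator if it is a
# Type A noninterview, and to the denominator regardless of outcome.

def weighted_type_a_rate(households):
    """households: list of (weight, outcome) pairs, where outcome is one of
    'interviewed', 'type_a', or 'type_d'; returns the Type A rate in percent."""
    total = sum(w for w, _ in households)
    type_a = sum(w for w, out in households if out == "type_a")
    return 100.0 * type_a / total

# Illustrative microdata: four households with made-up weights.
sample = [
    (1500.0, "interviewed"),
    (1500.0, "type_a"),
    (2000.0, "interviewed"),
    (1000.0, "type_d"),
]
rate = weighted_type_a_rate(sample)  # 1500 / 6000 of the weight -> 25.0
```
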
The distribution of nonresponse rates across Type A categories for the two test groups is presented in Table 3. The largest differences between the test groups occur in the Not-at-Home, Temporarily Absent, and Household Refused categories. Surprisingly, the drop in the refusal category between the $0 and $20 groups is not statistically significant. For nonpoverty household units, the sample size is too small to detect any significant differences.

TABLE 3: Weighted Nonresponse Rates by Type A Categories for Wave 7 Rotation 2-4, Poverty Status by Incentive Groups.
  Poverty      Wave 7       Language   Not at   Temp.    Household   Other
  Status       Incentive    Problems   Home     Absent   Refused     Occupied    Total
  Poverty      $0               0.06     1.25     0.87       4.18        0.58     6.92
               $20              0.03     0.75     0.49       3.87        0.54     5.69
               Difference       0.03     0.50     0.38       0.31        0.04     1.23
  Nonpoverty   $0               0.02     1.06     0.83       5.34        0.74     8.00
               $20              0.00     0.99     0.75       4.37        0.68     6.78
               Difference       0.02     0.07     0.08       0.97        0.06     1.22

  (Difference rows are $0 minus $20.)

Since the difference between the refusal rate in the $20 group and the refusal rate in the $0 group was not significant, we also examined refusals within the set of households interviewed in Wave 6. Those rates are presented in Table 4.

TABLE 4: Weighted Type A and Refusal Rates in Wave 7 for All Households Interviewed in Wave 6.

            $0 Incentive   $20 Incentive
  Type A        5.23            3.98
  Refused       3.05            2.39

Two results from Table 4 are summarized below:

Result 1: Among all households interviewed in Wave 6, the $20 incentive in Wave 7
    significantly lowered the Type A nonresponse rate compared to no incentive.

Result 2: Among all households interviewed in Wave 6, the $20 incentive in Wave 7
    significantly lowered the refusal rate compared to no incentive.

For the nonpoverty group, the Wave 6 and Wave 7 nonresponse rates are presented in Table 5. We found no significant difference between the $20 incentive group and the $0 group in Wave 7, and no significant difference between the Wave 6 and Wave 7 nonresponse rates for the $20 group.

TABLE 5: Weighted Type A Nonresponse Rate for Households Above 150% Poverty Threshold, Wave 6 and 7 by Incentive Groups, Rotation 2-4.

  Incentive Group   Wave 6   Wave 7
  $0                  8.07     8.00
  $20                 7.57     6.78

  (No incentives were given in Wave 6; the $20 group received its incentive in Wave 7.)

Table 6 provides nonresponse rates for the $20 incentive group by wave and race. Results are mixed: the Wave 7 nonresponse rate is statistically lowest for the Non-Black group, while the decline for the Black group is not significant.

TABLE 6: Weighted Nonresponse Rate (Type A only) of $20 Incentive Group for Wave 2 Through Wave 7 by Rotation 2-4.  
  Race        Wave 2   Wave 3   Wave 4   Wave 5   Wave 6   Wave 7
  Black         4.34     7.30     6.46     8.30     7.98     7.98
  Non-Black     4.01     7.07     6.44     6.76     6.77     5.61*
  All           4.07     7.11     6.45     7.01     6.96     5.97*

Below are some results obtained from the data in Table 6:

Result 1:  The Wave 7 nonresponse rate for the $20 incentive group is significantly different
    from the corresponding rates in Waves 3, 5, and 6 at the 5% level.

Result 2:  The Wave 7 nonresponse rate for the Black $20 incentive group is not significantly
    different from the corresponding rates in Waves 2 through 6.

Result 3:  The Wave 7 nonresponse rate for the Non-Black $20 incentive group is significantly
    different from the corresponding Non-Black rates in Waves 3, 5, and 6 at the 5% level.

Finally, the number of personal visits per case and miles per case were not examined in this analysis, because about two-thirds of the sampled households are interviewed by telephone. Also, there is no evidence that incentives affect Type D rates.

Conclusion

The $20 debit card does appear to have a positive effect on the response rate for households in poverty, while there is no evidence of any effect of the Wave 7 incentive on nonresponse rates for the nonpoverty group. We also found a positive effect on the refusal rate for all households interviewed in Wave 6. Finally, we believe that offering incentives in the middle of the panel to households in poverty could reduce the nonresponse rate.
 
 
 

Attachment D
Draft 1/13/99
MEMORANDUM FOR:        Karen E. King
                                               Chief, Survey of Income & Program Participation
                                               Demographic Statistical Methods Division

From:                                     Denise A. Abreu
                                               Survey of Income & Program Participation
                                               Demographic Statistical Methods Division

Subject:                                 SIPP96: Preliminary Results of an Experiment Using Monetary
                                              Incentives for Waves 8 and 9

I. Executive Summary

In Wave 8, households that were Type A nonrespondents for the first time in Wave 7 were given a $0, $20, or $40 monetary incentive during nonresponse conversion. A similar procedure is being followed in Wave 9. Based on results from Wave 8 alone, the incentive produced a significant increase in the overall Type A conversion rate for the $40 experimental group. For households that refused to cooperate in Wave 7, giving $40 worked significantly better than giving no money at all (the control group), and also significantly better than giving $20. Comparing overall conversion rates in Waves 7 and 8, priority mail and incentives combined appear to increase conversion rates for all experimental groups. Surprisingly, priority mail alone also appears to increase conversion rates among Wave 7 refusals.

II. Background

Nonresponse is a significant problem for longitudinal surveys such as the Survey of Income and Program Participation (SIPP) because it biases the estimates. In SIPP, prior research has shown that households in poverty have higher attrition rates than non-poverty households (Waite, Huggins, and Mack, 1997).

The concern over nonresponse bias has increased because the 1996 SIPP Panel has higher nonresponse rates than previous panels. The household noninterview rate as of Wave 7 of the 1996 SIPP is approximately 28%, with no evidence that sample attrition is abating. The permanent sample loss (cases that refused or could not be contacted for two consecutive interviews and hence are not eligible for further follow-up interviews) is 22% through Wave 7. This compares unfavorably with the sample loss rates of the 1992 and 1993 SIPP panels.

The current practice is to revisit nonrespondents once more after their initial nonresponse. Currently, about a third of household nonrespondents in one wave are converted to interviews in the next wave. In an attempt to improve these results, we are conducting an incentive experiment that directly targets nonrespondents: all nonrespondents in the previous wave are included in the incentive experiment for Waves 8 and 9. (The experiment includes all Type A noninterviews, which occur when no one is home, household members are temporarily absent on vacation, or household members refuse to participate in the survey.)

The experiment investigates the effect on response rates of providing a $20 or a $40 prepaid incentive, versus no incentive, before a follow-up face-to-face interview. Households were randomly assigned to the treatments, and the data were collected through Computer Assisted Personal Interviewing (CAPI). The goal is to determine whether there are significant differences in response rates among the three treatment conditions, and to later analyze the characteristics of respondents who responded (or failed to respond) to the incentive treatments.

Data collection began as part of the 8th Wave of the 1996 SIPP Panel, which started in August 1998. The study is expected to continue through Wave 9, which ends in March 1999. Consistent with current procedures, all groups received an advance letter prior to the interviewer's visit. The letter sent to the incentive groups provided information about the incentive and included a debit card; the letter sent to the no-incentive group was the usual letter sent to nonrespondents. All letters were sent via priority mail to ensure that respondents received the incentives (priority mail is not usually used to follow up nonresponding cases). Field representatives gave the incentive at the door if a respondent said the letter with the prepayment never arrived.

When the study is finished, the sample will consist of approximately 3,200 households that refused to participate in Waves 7 and 8. Four sample selection strata were formed by cross-classifying the poverty category (high poverty stratum/low poverty stratum) by the refusal status category (hard refusals/soft refusals). Hard refusals are respondents who refuse to give an interview; other (soft) refusals are respondents who were unable to provide an interview, either because there was no contact after repeated visits or because they were temporarily unavailable. After the strata boundaries were defined, units were sorted by geographical region. Within each stratum, three randomly selected subsamples of almost equal size were assigned to one of the three treatment conditions (a $20 incentive, a $40 incentive, and no monetary incentive).
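The sort-then-allocate procedure above can be sketched as follows. This is one plausible mechanism consistent with the description (geographic sort, then a systematic split into three near-equal groups from a random starting point); the record layout and stratum labels are our illustration, not the SIPP files.

```python
import random

# Sketch of the assignment described above: within each stratum (poverty x
# refusal status), units are sorted by geographic region and then allocated
# systematically, from a random starting treatment, into three near-equal groups.

TREATMENTS = ["$0 incentive", "$20 incentive", "$40 incentive"]

def assign_treatments(units, seed=1996):
    """units: dicts with 'stratum' and 'region' keys; adds a 'treatment' key."""
    rng = random.Random(seed)
    by_stratum = {}
    for unit in units:
        by_stratum.setdefault(unit["stratum"], []).append(unit)
    for stratum_units in by_stratum.values():
        stratum_units.sort(key=lambda u: u["region"])  # geographic sort
        start = rng.randrange(3)                       # random starting treatment
        for i, unit in enumerate(stratum_units):
            unit["treatment"] = TREATMENTS[(start + i) % 3]
    return units

# Illustrative stratum of six refusal households.
example = assign_treatments(
    [{"stratum": "high poverty/hard refusal", "region": r} for r in range(6)]
)
```

The systematic step keeps the three treatment groups balanced geographically within each stratum, which a plain shuffle would not guarantee.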

III. Results

A. Overall

Results from the study are given below. Table 1 provides the "b" parameter, the base, and the response rate for each of the three experimental groups: 47.0% for the control group, 49.9% for the $20 treatment group, and 56.0% for the $40 treatment group. Table 2 shows the percentage-point difference between the groups, the standard error and t-test for each difference, and a flag indicating whether the test was significant. From Table 2, the 9-point difference between the $40 experimental group and the control group and the 6-point difference between the $40 and $20 incentive groups are both significant at the 90% confidence level.

Table 1. SIPP: Wave 8 Incentive Study - Preliminary Results

  Treatment Condition   "b" Parameter        Base   Response Rate
  Control Group                 2,622   1,221,114           47.0%
  $20 Group                     2,622   1,261,712           49.9%
  $40 Group                     2,622   1,237,749           56.0%

  Base Total = 3,720,574

Table 2. 1996 SIPP: Wave 8 Incentive Study - Comparison Analysis (Preliminary)

  Comparisons      Difference   Standard Error   t-test   Significant?
  Control vs $20       -0.029            0.032    0.899   no
  Control vs $40       -0.090            0.033    2.772   yes
  $20 vs $40           -0.061            0.032    1.887   yes
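The significance flags in the comparison tables follow a simple rule: a difference is significant at the 90% confidence level when the t-statistic, the difference divided by its standard error, exceeds the two-sided normal critical value of about 1.645 in absolute value. A minimal sketch using the Table 2 figures (the function name is ours):

```python
# Sketch of the significance test behind the comparison tables: a difference
# is flagged significant at the 90% confidence level when |difference / SE|
# exceeds the two-sided normal critical value of about 1.645.

CRITICAL_90 = 1.645

def is_significant(difference, standard_error, critical=CRITICAL_90):
    t = abs(difference / standard_error)
    return t > critical

# Table 2 comparisons (difference, standard error):
control_vs_20 = is_significant(-0.029, 0.032)    # |t| < 1.645 -> not significant
control_vs_40 = is_significant(-0.090, 0.033)    # |t| > 1.645 -> significant
both_incentives = is_significant(-0.061, 0.032)  # |t| > 1.645 -> significant
```

(The table's published t-values were computed from unrounded differences, so recomputing from the rounded figures reproduces the flags but not the t-statistics to the last digit.)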

From Table 3 we obtain the "b" parameter, the base, and the overall conversion rates for Wave 7, Wave 8, and the control group (the group receiving no monetary incentive). There is a 7-percentage-point increase in the conversion rate from Wave 7 to Wave 8, and a 3-point difference between the overall Wave 7 conversion rate and the Wave 8 control-group rate. Table 4 provides these differences, their standard errors, the t-tests, and the significance indicators. It shows that the 7-point increase in Wave 8 is significant at the 90% confidence level, but the 3-point difference between the control group's conversion rate and the overall Wave 7 rate is not. So the combination of priority mail and an incentive significantly improved overall conversion rates between Wave 7 and Wave 8, while priority mail alone had a positive but not significant effect.

Table 3. SIPP: Wave 8 Incentive Study - Overall Conversion Rates

  Treatment Condition                    "b" Parameter        Base   Conversion Rate
  Wave 8 Conversion Rate (W8CR)                  2,622   3,720,574             51.0%
  Wave 7 Conversion Rate (W7CR)                  2,622   3,702,919             44.0%
  Control Group Conversion Rate (CGCR)           2,622   1,221,114             47.0%

  Base Total = 8,644,607

Table 4. 1996 SIPP: Wave 8 Incentive Study - Comparison Analysis

  Comparisons    Difference   Standard Error   t-test   Significant?
  W8CR vs W7CR        0.070            0.019    3.744   yes
  W7CR vs CGCR       -0.030            0.027    1.134   no

 

B.  Results by Type A Noninterview

Table 5 shows the "b" parameter, the base, and the response rates for the hard refusals only. Hard refusals are households where all individuals verbally said in Wave 7 that they no longer wished to participate in the survey. Table 6 provides the differences between the groups, the standard errors of the differences, the t-tests, and the significance indicators. It shows that only the $40 group is significantly different from both the control group and the $20 experimental group.

Table 5. 1996 SIPP: Wave 8 Incentive Study - Hard Refusals

  Treatment Condition   "b" Parameter      Base   Response Rate
  Control Group                 2,622   728,106           37.6%
  $20 Group                     2,622   735,447           38.7%
  $40 Group                     2,622   730,229           47.5%

  Base Total = 2,193,782

Table 6. 1996 SIPP: Wave 8 Incentive Study - Comparison Analysis for Hard Refusals

  Comparisons      Difference   Standard Error   t-test   Significant?
  Control vs $20       -0.013            0.041    0.311   no
  Control vs $40       -0.100            0.042    2.405   yes
  $20 vs $40           -0.087            0.042    2.096   yes

We conducted the same analysis for the soft refusals, households where no one was home or all individuals were temporarily absent in Wave 7. For this group there were no significant differences among the treatment conditions. Table 7 provides the "b" parameter, the base, and the response rates for these cases.

Table 7. 1996 SIPP: Wave 8 Incentive Study - Soft Refusals

  Treatment Condition   "b" Parameter      Base   Response Rate
  Control Group                 2,622   493,008           60.8%
  $20 Group                     2,622   526,265           65.5%
  $40 Group                     2,622   507,520           68.2%

  Base Total = 1,526,793

Table 8 gives the "b" parameter, the base, and the conversion rates, i.e., the percent interviewed after the nonresponse conversion procedure was implemented. Only 32.1 percent of the hard refusals were converted in Wave 7, compared with 41.3 percent in Wave 8; within Wave 8, 37.6 percent of the hard refusals in the control group were converted. For the soft refusals, the conversion rates were very close, both in the 60-percent range and not significantly different. Table 9 provides the comparisons, the differences, the standard errors of the differences, the t-tests, and the significance indicators. It shows that the Wave 8 conversion rate for hard refusals was significantly different from the Wave 7 rate, and that priority mail alone significantly improved the conversion rates for the hard refusals.

Table 8. 1996 SIPP Panel - Wave 7 & Wave 8 Conversion Rates

  Treatment Condition                           "b" Parameter        Base   Percent Interviewed
  Soft refusals converted in Wave 8 (SR8)               2,622   1,526,793                 64.9%
  Hard refusals converted in Wave 8 (HR8)               2,622   2,193,781                 41.3%
  Soft refusals converted in Wave 7 (SR7)               2,622   1,447,578                 62.4%
  Hard refusals converted in Wave 7 (HR7)               2,622   2,255,341                 32.1%
  Wave 8 Hard Refusals Control Group (HRCG8)            2,622     728,106                 37.6%

  Base Total = 8,151,599

Table 9. 1996 SIPP Panel - Comparison Analysis for Wave 7 & Wave 8 Conversion Rates

  Comparisons     Difference   Standard Error   t-test   Significant?
  SR8 vs SR7           0.025            0.029    0.872   no
  HR8 vs HR7           0.092            0.023    3.934   yes
  HRCG8 vs HR7         0.055            0.033    1.665   yes

C.  Results by Poverty/Nonpoverty Strata

Table 10 shows the "b" parameter, the base, and the response rate for households in the poverty stratum. Although there appear to be large differences in response rates among the three treatment groups, these differences are not statistically significant at the 90% confidence level.

Table 10. 1996 SIPP: Wave 8 Incentive Study - Households in Poverty Stratum

  Treatment Condition   "b" Parameter      Base   Response Rate
  Control Group                 2,622   226,002           47.5%
  $20 Group                     2,622   245,947           59.6%
  $40 Group                     2,622   237,222           55.4%

  Base Total = 709,170

Table 11 provides the "b" parameter, the base, and the response rate for households in the non-poverty stratum. There is less than a 1-point difference between the control and the $20 treatment group, a 9.3-point difference between the control and the $40 group, and an 8.6-point difference between the two monetary groups. Table 12 gives these differences, their standard errors, the t-tests, and the significance indicators; only the $40 group is significantly different from both the control group and the $20 experimental group.

Table 11. 1996 SIPP: Wave 8 Incentive Study - Households in Non-poverty Stratum

  Treatment Condition   "b" Parameter        Base   Response Rate
  Control Group                 2,622     995,112           46.9%
  $20 Group                     2,622   1,015,765           47.6%
  $40 Group                     2,622   1,000,527           56.1%

  Base Total = 3,011,404

Table 12. 1996 SIPP: Wave 8 Incentive Study - Comparison Analysis for Households in Non-poverty Stratum

  Comparisons      Difference   Standard Error   t-test   Significant?
  Control vs $20       -0.007            0.036    0.194   no
  Control vs $40       -0.093            0.036    2.570   yes
  $20 vs $40           -0.086            0.036    2.387   yes

IV. Conclusion

We find that combining priority mail with an incentive increases the number of households brought back into the sample after a nonresponse in the previous wave, and that the $40 incentive is effective in keeping respondents in sample. The procedure has been successful enough that we will seek to use it for all nonrespondents in Waves 10 through 12 of the 1996 panel.
 

Attachment E
Draft 12/23/99
MEMORANDUM FOR         Karen King
                                              Chief, Survey of Income and Program Participation (SIPP) Branch
                                              Demographic Statistical Methods Division

From:                                     Lieu Galvin
                                               Survey of Income and Program Participation (SIPP) Branch
                                               Demographic Statistical Methods Division

Subject:                                 Preliminary Evaluation of the Survey of Program Dynamics (SPD)
                                              Bridge Incentive Experiment by Demographic Characteristics

I. Executive Summary

This memorandum documents the demographic results of a study conducted on the SPD Bridge survey to test the effect of a monetary incentive on the response rates of a follow-up longitudinal survey. In the 1997 SPD Bridge survey, we implemented a targeted incentive test on low-income households, which are vitally important to this longitudinal follow-up survey. Based on the results of this experiment, the $20 incentive had a positive, but not statistically significant, effect on the response rate for the experimental group. Within the experimental group, the response rate among households that cashed their vouchers was significantly higher than among households that received but did not cash, or never received, vouchers. The $20 incentive also did not significantly increase response rates within any of the demographic groups examined. Detailed results are given below.

II. Background and Design

The SPD Bridge sample consisted of retired sample from the 1992 and 1993 Survey of Income and Program Participation (SIPP) panels, many of whom had been told that they would not be revisited for the SIPP survey. The goal of this test was to determine the effect an incentive has on bringing retired sample respondents back into a follow-up longitudinal survey. A subset of sample clusters containing low-income households in the SPD designated sample was selected to receive a $20 voucher. Low income is defined as households that were at or below 150% of their poverty threshold based on previously collected SIPP data. The experimental group consisted of 10,683 such households and their neighbors; the control group consisted of 3,343 such households and their neighbors. All other households were not eligible.

III. Minimal Detectable Difference and Hypothesis Testing

In this experiment, the minimum detectable difference (MDD) for the response rate is 2.6% at the 90% confidence level, with a design effect of 1.8 factored in. Because subsetting into more detailed classifications reduces the effective sample sizes, the hypothesis tests for all differences in response rates have been performed to reflect the effective samples, with the design effect kept at 1.8.
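The memorandum does not spell out the MDD formula. One common form for comparing two proportions, with the design effect inflating the simple-random-sampling variance, is sketched below; the baseline rate and the use of the experiment's group counts are our illustrative assumptions, and this simple form omits the power term that is often included, so it will not exactly reproduce the 2.6% figure.

```python
import math

# Illustrative sketch: minimum detectable difference (MDD) for comparing two
# response rates, with a design effect (deff) inflating the SRS variance.
# The formula choice and inputs are our assumptions; the memo states only the
# resulting MDD (2.6%) and the design effect (1.8).

def mdd(p, n1, n2, deff=1.8, z=1.645):
    """MDD at 90% confidence for a baseline proportion p and group sizes n1, n2."""
    variance = deff * p * (1.0 - p) * (1.0 / n1 + 1.0 / n2)
    return z * math.sqrt(variance)

# Example: an assumed 80% baseline rate with the experiment's group counts.
example = mdd(0.80, 10683, 3343)
```
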

IV. Merging Data files

    A. Person Level Data

        In order to extract person-level data, such as race and ethnicity, we had to combine the
        1997 SPD Bridge data file with the 1992 and 1993 SIPP person-level data files, creating a
        combined SPD-SIPP person-level data file. We matched the household identifier on the SPD
        file, hecaseid, to the household identifier on the SIPP data files,
        psu||segment||serial||addid, and then extracted only the reference person's data, so the
        race and ethnicity results pertain solely to the reference person. Unfortunately, we
        continue to have difficulty matching back to SIPP, so as of this date the demographic
        results are based on only 48% of the available sample.

    B. Household Level Data

        In order to extract household-level data, such as poverty status, we had to combine the
        1997 SPD Bridge data file with the 1992 and 1993 SIPP household-level data files,
        creating a combined SPD-SIPP household-level data file. We matched the household
        identifier on the SPD file, hecaseid, to the household identifier on the SIPP data files,
        psu||segment||serial||addid. Since the poverty threshold is defined at the family level
        in the SPD survey and at the household level in the SIPP survey, we had to redefine low
        income for this analysis: a household interviewed in SPD Bridge was considered in
        poverty if at least one family in the household was at or below 150% of its poverty
        threshold based on the SPD data, and a household not interviewed in Bridge was
        considered in poverty if it was at or below 150% of its poverty threshold based on
        previously collected SIPP data.
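The matching step in both subsections amounts to a key-building join: the SIPP fields psu, segment, serial, and addid are concatenated and compared against SPD's hecaseid. The dictionary-based join below is our illustration (the record contents are invented; the actual files are fixed-format Census data sets):

```python
# Sketch of the SPD-to-SIPP household match described above: build the SIPP
# key as psu||segment||serial||addid and look up SPD's hecaseid against it.

def build_sipp_key(record):
    """Concatenate the four SIPP identifier fields into one match key."""
    return record["psu"] + record["segment"] + record["serial"] + record["addid"]

def merge_spd_sipp(spd_records, sipp_records):
    """Return SPD records augmented with matched SIPP data (inner join)."""
    sipp_index = {build_sipp_key(r): r for r in sipp_records}
    merged = []
    for spd in spd_records:
        sipp = sipp_index.get(spd["hecaseid"])
        if sipp is not None:  # unmatched households (e.g. movers) drop out
            merged.append({**spd, **sipp})
    return merged

# Invented records: one SPD household that matches one SIPP household.
spd = [{"hecaseid": "10120030010101", "interviewed": True}]
sipp = [{"psu": "1012", "segment": "0030", "serial": "0101", "addid": "01",
         "race": "Black"}]
result = merge_spd_sipp(spd, sipp)
```

The inner-join behavior mirrors the 48% match rate noted above: SPD households whose key cannot be found on the SIPP files simply contribute no demographic data.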

V. Results

The tables below illustrate the results of the SPD Bridge Incentive Test. There are no significant differences in response rates between groups unless indicated. The Experimental category refers to households that were at or below 150% of their poverty threshold, and their neighbors, flagged to receive the incentive. As it turned out, among the Experimental households we could identify only those that received and cashed their vouchers; we could not differentiate households that never received a voucher from households that received one but did not cash it. Consequently, the actual effectiveness of the incentive is likely higher than reported in the tables, because the Experimental group contains households that were flagged for an incentive but may never have received it. The Control category refers to households that were at or below 150% of their poverty threshold, and their neighbors, that were not flagged for and did not receive an incentive.

For the calculation of the response rates in all the tables below, household nonresponse includes both Type A and Type D noninterviews. Type A noninterviews include: no one home, temporarily absent, and refused. Type D noninterviews include: moved with address unknown, and moved within the country but beyond the SPD interviewing limits.
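Under this definition, each table's response rate is the interviewed households as a share of all eligible households (interviewed plus Type A plus Type D). A minimal sketch with invented counts:

```python
# Sketch: response rate with nonresponse defined as Type A + Type D noninterviews,
# per the definition above. The counts are invented for illustration.

def response_rate(interviewed, type_a, type_d):
    """Percent of eligible households that were interviewed."""
    eligible = interviewed + type_a + type_d
    return 100.0 * interviewed / eligible

rate = response_rate(interviewed=900, type_a=70, type_d=30)  # 900/1000 -> 90.0
```
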

A.  Response Rates by Income Status

Table A-1. Response Rates from SPD Bridge for Low Income Households by Experimental Groups

  Incentive Category   Total Eligible Households   Response Rate
  Control                                  1,404          69.37%
  Experimental                             4,354          69.32%
Table A-2. Response Rates from SPD Bridge for Middle/High Income Households by Experimental Groups

  Incentive Category   Total Eligible Households   Response Rate
  Control                                  1,635          92.60%
  Experimental                             5,388          93.91%

B.  Response Rates by Race and Ethnicity

Note that, for the tables in Section B, the SPD files could be linked back to only about 48% of the SIPP person-level files, so the sample sizes for race and ethnicity are much smaller than for the income-status variable. The people in the SPD sample that we have trouble linking back to SIPP are frequent movers, who are generally (a) low income, (b) much more likely to become nonrespondents, and (c) harder to trace for linking back to SIPP. Nonmovers, in contrast, are more likely to be respondents. This is why the response rates in Tables A-1 and A-2 of Section A are much lower than those in Section B.

Table B-1. Response Rates from SPD Bridge for Black Households by Experimental Groups

  Incentive Category   Total Eligible Households   Response Rate
  Control                                    410          89.76%
  Experimental                             1,243          92.60%
Table B-2. Response Rates from SPD Bridge for Non-Black Households by Experimental Groups

  Incentive Category   Total Eligible Households   Response Rate
  Control                                  2,323          91.48%
  Experimental                             7,531          91.58%
Table B-3. Response Rates from SPD Bridge for Hispanic Households by Experimental Groups

  Incentive Category   Total Eligible Households   Response Rate
  Control                                    259          90.56%
  Experimental                               769          91.44%
Table B-4. Response Rates from SPD Bridge for Non-Hispanic Households by Experimental Groups

  Incentive Category   Total Eligible Households   Response Rate
  Control                                  2,234          91.30%
  Experimental                             7,279          91.76%

C. Overall Response Rates

Table C-1. Response Rates from the SPD Bridge Incentive Test

  Incentive Category    Total   Response Rate
  Not Eligible         19,237          81.37%
  Control               3,039          81.87%
  Experimental          9,742          82.92%
Table C-2. Response Rates Among Experimental Households by Status of Voucher Receipt

  Voucher Status                                     Total Eligible Households   Response Rate
  Voucher Cashed                                                         6,333          94.0%*
  Voucher Received but Not Cashed, or Not Received                       2,892          59.1%

* Statistically significantly different from the rate for households that received but did not cash, or never received, a voucher.

VI. Conclusion

Based on the results of this incentive test, providing a $20 incentive to households had a positive but not significant effect on response rates, both overall and by demographic characteristics. Within the experimental group, households that received and cashed vouchers had a statistically significantly higher response rate than households that received but did not cash, or never received, vouchers.
