Over the past ten years, in an effort to improve data quality, the use of questionnaire pretesting prior to implementation has increased. Various questionnaire evaluation techniques have been evaluated, and their associated strengths and weaknesses have been identified (DeMaio et al., 1993; Esposito et al., 1992; Oksenberg et al., 1991; Presser and Blair, 1994). Some limited research has been conducted on the effectiveness of cognitive interviews in actually reducing questionnaire problems (Willis, 1996; Lessler et al., 1989). The objective of our research is to determine how well various question pretesting methods predict the types of problems that will actually be experienced in the field, and to what extent laboratory testing contributes to improved questions. In this research, multiple researchers in three research organizations conducted expert reviews, cognitive appraisals, and cognitive interviews on three survey instruments. A classification scheme was developed to code problems identified through all three methods. The questions identified as most problematic were revised. Both the original and the revised questions were tested in an omnibus CATI/RDD survey conducted by the U.S. Census Bureau. The field results are being evaluated using independent outcome quality measures. Comparing the pretesting results with the field study results will show how well the various pretesting methods identified the types of problems that surfaced during field testing. Comparing the independent outcome quality measures from the field study for the original and revised question wordings will tell us whether the revised wording actually improved data quality.