During the past few years there has been a great deal of discussion in survey research concerning the problem of non-response. While always a concern, there was a general feeling among practitioners as early as 1970 that non-response was becoming a growing problem. People were not cooperating in granting an interview. Interviewers would not go into certain areas. The population had become more mobile and people were not at their usual place of residence for extended periods. Also, because of the high participation of women in the labor force it had become difficult to find women respondents at home during daytime hours and also difficult to obtain good interviewers to carry out field work.

Because of the concern over the magnitude of the non-response group and its effect upon the validity of survey conclusions, the profession is reacting in at least three different ways. First, there is the reporting of non-response rates in the description of a survey report. It is expected that part of the technical appendix be devoted to a discussion of response rates. In their specifications to bidders for survey work, federal RFPs usually specify a minimum response rate. The Office of Federal Statistical Policy and Standards has alerted federal agencies sponsoring data collections that the quality of data obtained may not be sufficient if the response rate falls below 75 percent. In the private sector there are agencies concerned with the dissemination of syndicated survey research data used to make media purchasing decisions. These agencies, such as the Advertising Research Foundation and the Broadcast Rating Bureau, require the statistical reports that are issued to contain statements of the response rate and, in some cases, full disclosure of the procedures used to derive it. Activities of this first type are designed, in a sense, to protect the user of sample survey data and to provide measures for evaluating interviewing performance.

A second set of activities in which statisticians are currently engaged is designed to cope with the existence of non-response. These activities involve the imputation of missing values in an attempt to correct for non-response and thus minimize its effect. The Committee on National Statistics of the National Research Council has established a Panel on Incomplete Data. This panel is examining current procedures, developing theoretical bases for imputation, and evaluating various practices.

A third type of activity is concerned with an investigation of the causes of non-response and the development of techniques to reduce its impact in sample surveys. The American Statistical Association (ASA), the Council of American Survey Research Organizations (CASRO), the Marketing Science Institute (MSI), the Research Triangle Institute (RTI), the Institute for Social Research (ISR), the Committee on National Statistics, National Research Council (CONSTAT), and others have been involved. In order to make progress, it is necessary to have full details of response rate calculations and to compare non-response rates under different conditions. For example, how do response rates today compare with response rates ten years ago? What effect does the mode of interviewing have upon non-response? Does the nature of the topic being investigated have any effect? Do government-conducted surveys yield different response rates from those conducted by private industry? Are privacy and confidentiality factors that affect survey response rates?

Among theoretical statisticians and also among practical statisticians engaged in fields such as quality control, laboratory and medical experimentation, and agricultural field crop experiments, the concepts of "missing observation" and "incomplete data" have been of great use in handling inferences when not all of the individuals designated to be measured or observed yield the required information. In these applications each sampled unit is uniquely defined and the measurement process is usually straightforward.

In sample surveys, which involve not only interviewing but also a sample selection process that designates the individuals to be interviewed, there has been a tendency to adopt these concepts blindly, whether or not they are applicable. Ambiguities of definition create problems that lead to confusion in communication. It is necessary, then, to set up procedures to define the response rate for surveys based upon probability sampling.

Many Terms - Many Meanings

Part of the difficulty in establishing a formula to determine response rates arises from looseness in the usage of many terms.  The term "response rate" has a specific meaning to many authors and is generally accepted to designate the ratio of the number of completed interviews to the number of eligible units in the sample.  While the determination of this ratio may be complicated, as will be discussed later, there is no question about its unique meaning.

On the other hand, the term "completion rate" has been used with a multiplicity of meanings when applied to the sample survey process.  A completion rate measures the extent to which a task has been accomplished.  Since a sample survey comprises many tasks, each with its own criterion for accomplishment, the term has been used with the following meanings:

  1. C.R. = Number of Completed Interviews / Number of Contacts
  2. C.R. = Number of Respondents Who Answered All Questions / Number of Respondents Who Started an Interview
  3. C.R. = Number of Completed Interviews / Number of Eligible Units in Sample
  4. C.R. = Number of Completed Interviews / Total Units in Sample (Eligible Plus Ineligible)
  5. C.R. = Number of Completed Interviews Plus Ineligibles / Total Units in Sample (Eligible Plus Ineligible)
  6. C.R. = Number of Households That Completed a Census Form / Number of Households That Received a Census Form
  7. C.R. = Number of Telephone Numbers Established To Be Residential, Other Working, or Non-Working / Number of Telephone Numbers Dialed
  8. C.R. = Number of Units for Which Eligibility Status Has Been Determined / Number of Units in Sample

All of these have specific uses in planning studies.  The first six refer to the interviewing process, while the last two relate to tasks involved in the sampling operations.  The third form is identical to the definition of the response rate.
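To make the divergence among these definitions concrete, several of the ratios can be computed from a single set of disposition counts.  All figures below are hypothetical, chosen only to show how far the variants can drift apart for the same fieldwork.

```python
# Hypothetical disposition counts for one survey; none of these
# figures appear in the report itself.
completed = 720      # completed interviews
contacts = 900       # sampled units actually contacted
eligible = 1000      # eligible units in the sample
ineligible = 200     # ineligible units in the sample
resolved = 1150      # units whose eligibility status was determined
total = eligible + ineligible

cr1 = completed / contacts                 # definition 1
cr3 = completed / eligible                 # definition 3 (= response rate)
cr4 = completed / total                    # definition 4
cr5 = (completed + ineligible) / total     # definition 5
cr8 = resolved / total                     # definition 8

for name, rate in [("CR1", cr1), ("CR3", cr3), ("CR4", cr4),
                   ("CR5", cr5), ("CR8", cr8)]:
    print(f"{name}: {rate:.1%}")
```

With these counts the "completion rate" ranges from 60 percent to nearly 96 percent depending solely on which definition is chosen, which is precisely the communication problem the Task Force set out to resolve.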

The Task Force compiled a list of about a dozen terms that are in common usage.  Some of these terms are more directed than the term "completion rate" such as, for example, "cooperation rate," "interview rate," "at-home rate," etc.  Other terms were the obverse of those mentioned, such as "non-response rate," "refusal rate," "not-at-home rate," etc.  All of these are useful for diagnostic purposes.

Because of the proliferation of these terms, more than one term is often used for the same concept, and the same term is used for different concepts.  Even when different terms are used, the reader of a statistical survey may become confused.

For example, in the fall of 1976, the Panel on Privacy and Confidentiality as Factors in Survey Response of the Committee on National Statistics had two surveys conducted as part of its study.  The first was an "Attitude Survey," nationwide in scope, in which a sample of 1,456 households was selected for a personal interview to obtain information on individuals' experiences as survey respondents and their attitudes about surveys.  In addition, an "Experimental Study" was conducted.  A national sample of 2,440 households was selected and divided into five replicates, which differed from one another in the promise of confidentiality offered.  The questionnaire consisted of a census-type form and was also administered by personal interview.  In the attitude study the "interview rate" was 81.5 percent, and in the response behavior study the overall "completion rate" was 91.5 percent.

The principal reason for the difference was that different tasks were being evaluated.  In the attitude study a pre-selected person 18 years of age or older in the sampled household had to be interviewed.  In the behavior study, a measure was used to specify the extent to which a responsible household member would supply information so that a census-type form could be filled out.

In many marketing studies, through the use of the Politz-Simmons technique, the population sampled is defined as the population of persons at home when the interviewer calls, and those who were not at home when the interviewer originally called are not considered part of the non-response group.

Through the use of the coincidental method in determining in-home television viewing, a person who is not at home when the interviewer calls is a respondent whose value for watching television at home is zero.  This is considered to be a completed interview.

In a particular study comparing random digit dialing (RDD) with personal interviews it was stated that, "Cooperation rates in the two survey methods were very close.  The Census Bureau obtained interviews in 96 percent of the households contacted, while the RDD city-wide sample had a cooperation rate of 93 percent. . . ."  The cooperation rate for RDD was defined as interviews completed divided by households contacted, not by the number of eligible households that could have been in the sample had there been a concerted effort to contact them.  From the substantive viewpoint, the comparison of the two cooperation rates is meaningless.
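The arithmetic behind that objection is easy to sketch.  With hypothetical counts (none of these figures come from the study), a survey can report a high cooperation rate while the response rate, computed against all eligible households, is far lower:

```python
# Illustrative (hypothetical) figures: of 1000 eligible households,
# only 700 were ever contacted, and 650 of those were interviewed.
eligible = 1000
contacted = 700
interviewed = 650

cooperation_rate = interviewed / contacted   # interviews per contact
response_rate = interviewed / eligible       # interviews per eligible unit

print(f"cooperation rate: {cooperation_rate:.0%}")
print(f"response rate:    {response_rate:.0%}")
```

Here the cooperation rate is about 93 percent while the response rate is 65 percent; the gap is entirely the non-contacted eligible households that the cooperation rate ignores.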

It is apparent that there exists a great deal of confusion because of the various meanings of different terms used.  Nevertheless, it is possible to establish a pair of definitions that will clarify the situation.  In discussions within the Task Force the following evolved:

  • Completion Rate is to be considered as a collective term that is used to designate how well a task has been accomplished.  In general, completion rates are used to measure how well the various components involved in a sample survey are accomplished.
  • The term Response Rate is a summary measure and should be used to designate the ratio of the number of interviews to the number of eligible units in the sample.  The response rate is a measure of the result of all efforts, properly carried out, to execute a study.  In determining a response rate, completion rates are used to evaluate the component steps.  These component steps are then combined to form the response rate.
  • Basic Definition: Response Rate = Number of Completed Interviews with Reporting Units / Number of Eligible Reporting Units in Sample

The basic formula above can be made more specific to take into account the various possibilities for executing a survey.
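The following sketch shows how component completion rates can combine into the response rate.  The decomposition into a contact step and an interview step, like all the counts, is an assumption for illustration, not something the Task Force prescribes.

```python
# Hypothetical fieldwork for a probability sample; the two-step
# decomposition (contact, then interview) is an illustrative assumption.
eligible = 850        # eligible units, status determined for all
contacted = 765       # eligible units actually reached
interviews = 688      # completed interviews

contact_rate = contacted / eligible        # component completion rate
interview_rate = interviews / contacted    # component completion rate

# The component completion rates multiply out to the basic definition:
# completed interviews divided by eligible units in the sample.
response_rate = contact_rate * interview_rate
assert abs(response_rate - interviews / eligible) < 1e-12
print(f"response rate: {response_rate:.1%}")
```

The assertion makes the point of the definition explicit: however the fieldwork is decomposed into component tasks, the product of the component completion rates must reduce to completed interviews over eligible units.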

Read the whole 1982 report: CASRO on the Definition of Response Rates