Survey planning and questionnaire design

17.1 Purpose

Surveys and questionnaires are among the analyst’s most important sources of information about user problems, user requirements, user satisfaction, and similar system parameters. In this chapter, we briefly discuss sample surveys and questionnaire design.

17.2 Strengths, weaknesses, and limitations

A sample survey is taken when it is not practical or convenient to conduct a census of an entire population. A sample survey is less expensive and less time consuming than a census. Moreover, the use of an appropriate sampling design allows the analyst to make valid statistical inferences about the population. In fact, survey results based on a proper sampling plan can be more accurate than the results of a census of an entire population. This can happen because a survey can often be conducted by a small number of highly trained field workers who are far less apt to make mistakes than are the possibly large number of field workers who would be needed to conduct a census.

On the other hand, errors can occur when a sample survey is employed. Sampling error occurs because we do not examine the entire population when we conduct a sample survey; thus, a survey result is likely to be less accurate than the result of an error-free census. Other errors of non-observation can occur when certain segments of the population are not represented in the sample. Errors of observation can occur when the information obtained from a survey differs from the truth. However, it is possible to minimize the impact of such errors by intelligently designing the survey instrument or questionnaire.

17.3 Inputs and related ideas

Sample surveys and questionnaires might be used in any stage of the system development life cycle, but they are particularly valuable during the information gathering and problem definition stage (Part II). Sampling techniques are discussed in Chapter 9.

17.4 Concepts

The purpose of this chapter is to give a brief overview of how to plan a survey and how to design a questionnaire. For additional details, see Scheaffer et al.5

17.4.1 Planning a survey

In their discussion of survey sampling, Scheaffer et al.5 present a checklist containing eleven items that need to be considered when planning a survey (Table 17.1). In much of the remainder of this chapter, we will concentrate on the measurement instrument; more particularly, we will discuss how to design a useful questionnaire.

17.4.2 Errors in survey sampling

There are several common sources of error that are encountered when conducting a sample survey. These errors can be divided into two categories: errors of non-observation and errors of observation.

 

Table 17.1 A checklist of items to be considered when planning a survey.5


1. Statement of objectives
A set of simple objectives must be clearly stated and must be understood by everyone working on the survey.

2. Target population
The population to be sampled must be clearly defined using terminology that everyone understands. The population must be defined clearly enough so that a sample can be selected from the population.

3. The frame
The frame is a list of sampling units from which the sample will be selected. The frame must be defined so that it closely agrees with the target population.

4. Sample design
The sample design must be chosen so that the sample will provide enough information to fulfill the survey objectives. Several sample designs (for instance, the simple random sample) are briefly discussed in Chapter 9.

5. Method of measurement
Several different measurement methods, such as interviews, questionnaires, direct observation, etc., are available, and the method to be used for the survey must be chosen. Interviews are discussed in Chapter 8. Questionnaires are discussed in this chapter.

6. Measurement instrument
The survey instrument (for instance, the questionnaire or script of questions to be asked in an interview) must be carefully designed in order to minimize bias in the survey results.

7. Selection and training of field workers
The field workers actually collect the data. For instance, they administer questionnaires, conduct interviews, and so forth. These people must be carefully trained so that how they do their job does not have a detrimental effect on the survey results.

8. The pretest
In a pretest the measurement instrument is field tested on a small preliminary sample of respondents. Changes suggested by the pretest results are made before full-scale sampling is done.

9. Organization of fieldwork
Carefully plan and organize how the field workers will do their jobs and clearly define who has the authority in various situations.

10. Organization of data
Carefully plan how the survey data will be processed, managed, and analyzed at all stages of the survey process.

11. Data analysis
Specify in detail exactly how the data will be analyzed and carefully plan what information is to be included in the final survey report.


Errors of non-observation occur because the elements in a sample are only some (not all) of the elements in the target population. Such errors can be due to sampling, coverage, or non-response. Sampling error refers to the difference between an estimate based on a sample and the true value of the population parameter being estimated. This type of error will always exist because a sample is being taken instead of a census. Errors of coverage occur when the sampling frame is not identical to the target population. For instance, a list of local businesses obtained through the Chamber of Commerce will not be completely up to date and, therefore, a sample randomly selected from the Chamber of Commerce list is not a random sample of all businesses in the locality. Non-response occurs when a sampled element (person, business, etc.) cannot be contacted, when a respondent is not able to answer a question, or when a respondent refuses to answer.
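The distinction between a sample estimate and the true population value can be illustrated with a small simulation. The sketch below is purely illustrative: the population values, sample size, and random seed are all assumptions, not data from any actual survey.

```python
import random

# Hypothetical population: e.g., monthly help-desk calls for 10,000 users.
random.seed(42)  # fixed seed so the sketch is reproducible
population = [random.randint(0, 20) for _ in range(10_000)]
true_mean = sum(population) / len(population)

# A simple random sample of 100 units yields an estimate of the mean.
sample = random.sample(population, 100)
estimate = sum(sample) / len(sample)

# Sampling error: the difference between the estimate and the true value
# of the population parameter being estimated.
sampling_error = estimate - true_mean
```

Rerunning the sampling step with different seeds shows that the sampling error varies from sample to sample; it is unavoidable whenever a sample is taken instead of a census.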

Errors of observation occur when the survey data that has been collected is different from the truth. Such errors can be caused by the data collector (the interviewer), the survey instrument, the respondent, or the data collection process. For instance, the manner in which a question is asked can influence the response. Or, the order in which questions appear on a questionnaire can have an influence on the responses. Or, the data collection method (telephone interview, questionnaire, personal interview, or direct observation) can influence the survey results.

17.4.3 Questionnaire design

One of the best ways to reduce error when conducting a sample survey is to carefully design the questionnaire to be used. There are several important considerations that must be kept in mind when designing a questionnaire.

17.4.3.1 Question ordering

The order in which questions are asked can affect the responses to the questions. One reason for this is that respondents try to answer questions in a consistent fashion. Consider, for example, an experiment originally discussed in Schuman and Presser.6 The experiment involved asking two questions:

1. Do you think the United States should let Communist newspaper reporters from other countries come in here and send back to their papers the news as they see it?
2. Do you think a Communist country like Russia should let American newspaper reporters come in and send back to America the news as they see it?

In surveys conducted in 1980, when question 1 was asked first, 54.7 percent of respondents answered yes to question 1 and 63.7 percent answered yes to question 2. When question 2 was asked first, 74.6 percent answered yes to question 1 and 81.9 percent answered yes to question 2. Evidently, the respondents’ sense of fair play led more respondents to approve of Communist reporters being allowed to report news in the United States as they see it when they had first approved of U.S. reporters doing the same in Communist countries.

We should also point out that a respondent’s reaction to a question can be set by asking preliminary questions dealing with the same topic and that the first question asked is often thought of differently from questions that follow. For instance, the responses to a question about government spending might be very favorable to increased government spending if the question is preceded by several questions emphasizing useful services provided by the government. In contrast, the responses might be opposed to increased government spending if the question is preceded by several questions emphasizing government waste and inefficiency.

As an example of how the first question asked can be thought of differently from the questions that follow, when questions ask the respondent to supply ratings, the first question tends to be given the most extreme rating. For instance, when people are asked to rate the appeal of resort hotels based on descriptive materials, if the first hotel seems appealing it would likely be rated higher than other appealing hotels that are subsequently rated. On the other hand, if the first hotel is not appealing, it would likely be rated lower than other equally unappealing hotels that are subsequently rated.

In addition, the ordering of question responses can also influence survey results. Often, the first choice (or first several choices) in a list of choices is more likely to be selected than later choices. Moreover, if a choice is long, complicated, or difficult to understand or interpret, the choice that precedes the difficult choice is likely to be selected.

To reduce the impact of question ordering and response ordering, one strategy is to vary the order of the questions and/or responses presented to different respondents. Another approach is to carefully describe, in the analysis of the survey results, the context in which each survey question was asked.
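The first strategy can be implemented by presenting the questions in a different, but reproducible, order to each respondent. The sketch below is a minimal illustration; the question texts and the use of the respondent identifier as a random seed are assumptions made for the example.

```python
import random

questions = [
    "Should Communist reporters be admitted to the United States?",
    "Should American reporters be admitted to Communist countries?",
]

def ordered_questions(questions, respondent_id):
    """Return (original_index, question) pairs in a per-respondent order.

    Seeding with the respondent identifier makes each respondent's order
    reproducible, so the context in which each question was asked can be
    reported during analysis."""
    rng = random.Random(respondent_id)
    order = list(range(len(questions)))
    rng.shuffle(order)
    return [(i, questions[i]) for i in order]
```

Because the shuffle is seeded per respondent, the analyst can later reconstruct exactly which ordering any given respondent saw.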

17.4.3.2 Open questions and closed questions

When an open question is posed, the respondent is allowed to formulate any answer that he or she wishes. On the other hand, a closed question requires one of several predetermined choices (such as a, b, c, or d) or requires a single numerical response (such as the number of years a respondent has spent in his or her current job position). Closed questions are advantageous because it is easy to summarize and analyze the responses to such questions (especially when a computer is used). On the other hand, open questions allow the respondent to express ideas and nuances that the designer of the questionnaire may not have considered. However, it might be very difficult to summarize and interpret the responses to open questions because the responses cannot be easily quantified.

For instance, in a market research study we might ask the open question:

What do you like most about this product?

This question would elicit a wide variety of responses, while a closed question such as:

What I like most about this product is its:

(a) price (b) quality (c) design (d) styling

would produce responses that are more easily summarized but might force a respondent to choose a response that would not be his or her best response. As a compromise, a questionnaire will often contain a few open questions in addition to a number of closed questions. If only closed questions are to be employed, a good strategy is to use open questions on a preliminary survey to develop the responses for the closed questions to be asked.
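The ease of summarizing closed questions can be seen in a few lines of code. The responses below are hypothetical, invented purely to illustrate the tallying step; they are not survey data.

```python
from collections import Counter

# Hypothetical responses to the closed product question above
# (a = price, b = quality, c = design, d = styling).
responses = ["a", "b", "a", "c", "d", "a", "b", "b", "a", "c", "a", "b"]

tally = Counter(responses)  # counts per choice
proportions = {choice: count / len(responses)
               for choice, count in tally.most_common()}
```

Open-question responses, by contrast, would first have to be coded into categories by hand before any such tally is possible.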

17.4.3.3 Response options and screening questions

When a question is posed, sometimes the respondent would like to answer by stating that he or she has no opinion or does not know how to answer. Therefore, when constructing a questionnaire, one must decide whether a no opinion option will be included among the responses to the various questions. Generally, no opinion responses provide little useful information, and, therefore, such responses are often not allowed. On the other hand, it does not seem reasonable to require a response when the respondent may not have the information needed to intelligently formulate a response.

As a general rule, when a question requests an opinion about a subject that everyone (or almost everyone) is familiar with, a no opinion option is not allowed. For instance, a question about whether federal income taxes are too high might be posed without a no opinion option. On the other hand, a question whose answer requires a specialized background or very specific knowledge might be posed with a no opinion option. For example, a question about a little-known and seldom-used tax provision would probably include a no opinion option.

A common strategy is to use screening questions. Such questions are posed in order to determine whether or not a respondent has enough knowledge or information to answer the main question. If a respondent does not know enough to answer the main question, the main question is skipped. If a respondent does have the needed background, he or she is asked to answer the main question and this main question is posed without a no opinion option.
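The screening strategy amounts to simple branching logic. A minimal sketch, assuming a yes/no screening answer and a hypothetical callable that poses the main question:

```python
def administer(screening_answer, ask_main_question):
    """Pose the main question only if the screening answer shows the
    respondent has the needed background; otherwise skip it.

    Note that the main question itself offers no 'no opinion' option."""
    if screening_answer != "yes":
        return None  # main question skipped for this respondent
    return ask_main_question()
```

A respondent screened out in this way is recorded as skipped, which the analyst can treat differently from a refusal to answer.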

Besides deciding whether to include a no opinion option, one must decide how many options will be employed. Because middle ground responses often give respondents an easy out, questions are often posed without a middle ground or neutral option. For instance, the question:

In your opinion, are taxes in the United States too high or too low?

attempts to elicit a response on one side of the taxation issue or the other with no neutral response allowed. If we believe that it will be too difficult for many respondents to choose one side or the other, then more response options should be included. In general, however, it is a good idea to keep the number of response options as small as possible.

17.4.3.4 Wording of questions

The language and phrasing used in constructing questions are also important considerations. In the book Essentials of Marketing Research, Dillon et al.2 present seven basic principles of question construction. Their principles are summarized in Table 17.2.

Table 17.2 Seven basic principles of question construction.2


1. Be clear and precise.
A question must be understandable and must elicit a precise answer. For instance, the question How many cola drinks do you consume? is too vague. A better version would be: Here is a 16 ounce bottle of a cola drink. If all of the cola you drink came in 16 ounce bottles, how many would you consume in a week? State a number.

2. Response choices should not overlap and should be exhaustive.
The response choices should not overlap and should cover all relevant possibilities.

3. Use natural and familiar language.
Questions should be phrased using words and expressions that respondents will understand. For instance, the question, Do you think that every public building should be equipped with a bubbler?, will be understood in Wisconsin because in that state a water fountain is called a bubbler, but this question will not be understood elsewhere in the United States.

4. Do not use words or phrases that show bias.
Do not use wordings that suggest what the answer to a question should be (that is, do not use loaded questions). In addition, questions should be asked in a balanced way. For instance, the question, Do you favor the death penalty? should be asked in the more balanced form, Do you favor or oppose the death penalty?

5. Avoid double-barreled questions.
Double-barreled questions are questions that ask the respondent to answer two questions at the same time. For instance, the question, Do you feel that major league baseball games are too slow paced and are too expensive?, contains two questions. They should be separated.

6. State explicit alternatives.
For instance, if we wish to investigate the desirability of DSS satellite systems, the question, Would you purchase a DSS satellite system?, does not supply as much information as the question, If you currently subscribe to cable television and DSS satellite television were available to you, would you:

1. subscribe to cable only,

2. purchase a DSS satellite system only,

3. subscribe to cable and purchase a DSS satellite system.

7. Questions should meet criteria of validity and reliability.
Questions must measure what the researcher is trying to measure (validity) and responses should be able to be replicated by other researchers (reliability).


When designing questions one must keep in mind that people do not remember facts very well. Also, people do not determine frequencies by counting. Rather, they determine a rate for a shorter period and then multiply (for instance, I consume 3 cases of soft drinks per month, which when multiplied by 12 gives a yearly consumption of 36 cases). Finally, people telescope easily remembered events so that they believe that they occurred in a shorter period of time than they actually did. On the other hand, events that are difficult to remember are believed to have occurred longer ago than they actually did.

17.5 Key terms
Census —
A set of measurements (or interviews) for every element of a population.
Closed question —
A question that requires one of several predetermined choices or that requires a single numerical response.
Double-barreled question —
A question that asks the respondent to answer two questions.
Errors of coverage —
Errors owing to the sampling frame differing from the target population.
Errors of non-observation —
Errors that occur because the elements in the sample are not all of the elements in the target population.
Errors of observation —
Errors that occur when the survey data is different from the truth.
Frame —
A list of sampling units from which the sample will be selected.
Loaded question —
A question whose wording suggests what the answer should be.
Non-response —
A type of error of non-observation that occurs when a sampled element (person, business, etc.) cannot be contacted, when a respondent is not able to answer a question, or when a respondent refuses to answer.
Open question —
A question for which the respondent is allowed to formulate any answer he or she wishes.
Population —
A set of units that we wish to study.
Sample —
A subset of the units in a population.
Sampling error —
The difference between an estimate based on a sample and the true value of the population parameter being estimated.
Screening questions —
Questions posed in order to determine whether or not a respondent should answer the main question.
17.6 Software

Not applicable.

17.7 References
1.  Dillman, D. A., Mail and Telephone Surveys: The Total Design Method, John Wiley & Sons, New York, 1978.
2.  Dillon, W. R., Madden, T. J., and Firtle, N. H., Essentials of Marketing Research, Richard D. Irwin, Homewood, IL, 1993, 304.
3.  Gallup, G., The Sophisticated Poll Watcher's Guide, Princeton Opinion Press, Princeton, NJ, 1972.
4.  Groves, R. M., Survey Errors and Survey Costs, John Wiley & Sons, New York, 1989.
5.  Scheaffer, R. L., Mendenhall, W., and Ott, R. L., Elementary Survey Sampling, 5th ed., Duxbury Press, Belmont, CA, 1996, 68.
6.  Schuman, H. and Presser, S., Questions and Answers in Attitude Surveys, Academic Press, New York, 1981.
