Survey and Questionnaire Research


Chapter Learning Outcomes

After reading and studying this chapter, students should be able to:

• understand the decisions that are made regarding how the population is sampled and the various techniques to approximate a representative sample.

• compare and contrast different survey research methods and understand which research situations are best matched to which methodologies.

• appreciate different survey research designs and the various scaling methods that can be used to construct survey items.

• anticipate the types of errors that may occur within the survey research project, know how to handle data collection issues, and begin to understand the various approaches to analyzing the data collected.

• construct survey items using the appropriate scale that helps to capture the desired behaviors, perceptions, and/or attitudes of the population of interest to be surveyed.





Introduction

If you've ever enjoyed the task of trying to assemble a large jigsaw puzzle, you know that different people have different strategies. Some people like to assemble the edges first, and then work toward the middle. Others like to use the picture on the box to assemble easily recognizable parts of the puzzle. Some like to find all the corners first and work that way. Assembling a puzzle is a complicated task, and different strategic paths can lead to the same solution. When using surveys and questionnaires—the main topics of this chapter—the same principle applies: There are many topics to consider, and eventually we will get to them all, but we have to start somewhere.

Surveys and questionnaires are similar to jigsaw puzzles in that many pieces come together to form the final picture.


Voices from the Workplace

Your name: Jessica F.

Your age: 30

Your gender: Female

Your primary job title: Survey Research Specialist

Your current employer: Society for Human Resource Management, Research Department

How long have you been employed in your present position?

15 months

What year did you graduate with your bachelor’s degree in psychology?

2000

Describe your major job duties and responsibilities.

Produce and manage quantitative and qualitative research on HR topics. Design survey instruments and program online surveys for fielding. Involved in all aspects of data management, including the data collection process and performing data quality control. Design the analysis plan and conduct the analysis using SPSS statistical software. Produce written technical reports.

What elements of your undergraduate training in psychology do you use in your work?

Coursework in social psychology research methods—learned and applied the fundamentals of survey research methodology, writing technical research reports, running analyses in SPSS, and conducting background research through literature reviews. I also use the information acquired from my statistics course in my job. Coursework in organizational behavior and I/O (industrial/organizational) psychology (e.g., dealing with conflict resolution, change management, motivation, personality tests, etc.) that is relevant in the human resource profession. Volunteer work as a research assistant in the department of psychology. Spent a year coding data on an emotional experiences study.

What do you like most about your job?

Meaningfulness of the research—produce research that HR (human resources) professionals and other customers can utilize and apply in their organizations to improve workforce dynamics and make strategic business decisions. Other things that I like about my job include variety of work, managing research projects from beginning to end, the ability to work independently and autonomously.

What do you like least about your job?

It can be very tedious at times (e.g., data entry, data cleaning, writing) since a high level of accuracy is necessary. The environment is also very structured (e.g., specific procedures and protocols to follow); however, this can vary from job to job.

Beyond your bachelor’s degree, what additional education and/or specialized training have you received?

I took several classes through SPSS—survey methodology, survey analysis, statistical analysis, syntax, and intermediate topics in SPSS. To design/program web-based surveys—experience in HTML, Dreamweaver, ColdFusion, and Microsoft Access. I took classes in most of these areas; however, I picked up most of my experience on the job. I have also taken various HR workshops/seminars to stay current with HR and broaden my knowledge base.

What is the compensation package for an entry-level position in your occupation?

A research assistant position in a non-profit organization in the Washington, D.C., area: $22,000–$26,000.

What benefits (e.g., health insurance, pension, etc.) are typically available for someone in your profession?

Medical, dental and vision insurance, 401K, flexible work schedules (e.g., telecommuting, compressed workweek), tuition assistance, professional development opportunities and casual dress.

What are the key skills necessary for you to succeed in your career?

Ability to pick things up quickly (e.g., learning programming skills, learning about a new topic), strong oral and written communication skills, research skills, analytical and problem-solving skills, attention to detail, and computer skills. I have been fortunate to progress as far as I have in research in the non-profit sector with a bachelor's degree; however, I do think that at some point in time I will need to get a master's or a doctorate degree.

Thinking back to your undergraduate career, what courses would you recommend that you believe are key to success in your type of career?

Statistics, psychology research methodology class, I/O psychology, and organizational behavior.

Thinking back to your undergraduate career, can you think of outside of class activities (e.g., research assistantships, internships, Psi Chi, etc.) that were key to success in your type of career?

I believe that my research assistantship helped me to get my first professional research position. It made a difference to have real world research experience outside of the classroom.

As an undergraduate, do you wish you had done anything differently? If so, what?

I wish that I would have joined Psi Chi so that I would have been more active in psychology. I think that it would have helped me to learn more about the field and take advantage of opportunities (e.g., publishing research, presenting, serving on committees, etc.).

What advice would you give to someone who was thinking about entering the field you are in?

A bachelor's degree in psychology provides the fundamentals to be successful in just about any line of work. I think that it's important to try out different types of jobs to see what is a good fit before making a decision to go back to school. A master's or doctorate in psychology is not always necessary, and it really depends on what you want to do in the long run. I started out as a research assistant and worked hard and proved that I was capable of doing more. I was promoted twice within about three years.

Copyright © 2009 by the American Psychological Association. Reproduced with permission. The official citation that should be used in referencing this material is R. Eric Landrum, Finding Jobs With a Psychology Bachelor's Degree: Expert Advice for Launching Your Career, American Psychological Association, 2009. The use of this information does not imply endorsement by the publisher. No further reproduction or distribution is permitted without written permission from the American Psychological Association.


6.1 Sampling the Population

The ultimate goal of sampling is to study a representative portion of the population. By studying the sample carefully and methodically, generalizations can be drawn about the variables or behaviors of interest in the greater population. Two major types of sampling approaches exist—probability sampling and non-probability sampling. Why sample? If the goal is to understand how the population thinks, acts, feels, believes, and so on, then why not study the entire population? First, we often do not have comprehensive lists of members of a population. Say, for example, you wanted to survey all the citizens of Indiana. Is there a comprehensive list of all citizens available? The tax rolls might be a good start, but names and addresses are unlikely to be part of the public record. Plus, some Indiana residents may have moved away, and others may have moved to Indiana since any such list was compiled. So a complete, accurate roster of all citizens is unlikely to exist. You can make the same generalization about the students at your college or university, all the individuals in the community with Alzheimer's disease, or a list of all the skateboarders in your town. Having an accurate roster of all the members of the population of interest would be unlikely.

There are other methodological reasons to sample as well. Because of the mathematics and probability behind sampling theory, very good samples can be drawn from populations with relatively small margins of error. Dillman, Smyth, and Christian (2009) offer this example: "one can estimate within ± 3 percentage points the percentage of people who have a high school education in a small county of 25,000 adults with 1,024 completes [completed surveys] and can measure the same thing among the entire U.S. population of more than 300 million by obtaining only 43 more completes" (p. 59). Sampling is efficient. Lastly, surveying an entire population might lead to a greater number of non-respondents, and survey researchers worry about non-respondents because if bias is driving a person's choice not to complete the survey, that bias may weaken the validity of the data (Dillman et al., 2009). We are better off selecting a sampling procedure that allows us to estimate any potential sampling error, so that we obtain a representative sample while minimizing bias and high non-response rates. Probability sampling strives to achieve each of those goals.
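To see where figures like these come from, here is a minimal sketch of the standard margin-of-error arithmetic, assuming a 95% confidence level, maximum variability (p = .5), and simple random sampling with a finite population correction; the function and variable names are illustrative and are not taken from Dillman et al. (2009).

import math

def margin_of_error(n, N, p=0.5, z=1.96):
    # Approximate margin of error for a proportion estimated from a simple
    # random sample of size n drawn from a population of size N
    standard_error = math.sqrt(p * (1 - p) / n)
    finite_population_correction = math.sqrt((N - n) / (N - 1))
    return z * standard_error * finite_population_correction

print(round(margin_of_error(1024, 25000), 3))             # about 0.030, i.e., +/- 3 percentage points
print(round(margin_of_error(1024 + 43, 300000000), 3))    # still about 0.030 for the whole U.S. population

Run as written, the two calls reproduce the roughly ± 3 percentage point precision described above, which is why sampling is so efficient relative to attempting a census.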

Probability Sampling

There are a variety of approaches to probability sampling, including simple random sam- pling, systematic sampling, stratified sampling, cluster sampling, and multistage sampling.


Each is briefly described in this section. Remember, the overarching goal of probability sampling is that the sample drawn will be representative of the population if all the members of that population have a known, non-zero probability of being selected for the sample. Often, you'll hear this stated as a non-zero probability (StatPac, 2009; StatTrek, 2009), meaning that there is a chance for every person to be selected, no matter how slim that chance might be.

Simple Random Sampling

The simple random sample is perhaps the purest form of sampling, and probably one of the rarest techniques used. If you had the roster of the entire population available, you could assign a number to every member of the sampling frame and then select the sample using a random number table (Babbie, 1973). Random number tables are often found at the back of statistics textbooks just for this purpose. Think of it this way—if we could throw all the names into a large hat and draw out our target number of names for the survey, everybody in the survey population would have the same probability of being selected (Edwards & Thomas, 1993).
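In code, simple random sampling amounts to drawing names from the hat at random. Below is a minimal sketch in Python, assuming you already have the complete roster in a list; the roster itself is made up for illustration.

import random

# Hypothetical sampling frame: every member of the population, numbered
roster = ["citizen_" + str(i) for i in range(1, 20001)]

random.seed(42)                       # seed only so the example is reproducible
sample = random.sample(roster, 500)   # every member has an equal chance of selection
print(sample[:5])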

Systematic Random Sampling

Simply put, in a systematic random sample, every nth person from a list is selected (Edwards & Thomas, 1993). Let's say that at your college there are 2,000 students currently enrolled, and you determine that you would like to have 100 students complete your survey. Each student completing your survey would have an equal chance of being selected; that is, the probability of being selected is n/N (Lohr, 2008), or in our example, 100/2,000, or 1 out of every 20 students. So, every 20th student would be selected. After determining a random starting point (let's say No. 4, for example), every 20th student on the roster is selected, meaning the 4th, 24th, 44th, 64th, 84th, 104th, 124th, and so forth (Chromy, 2006).
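A minimal sketch of the same systematic procedure, again using a hypothetical enrollment list:

import random

roster = ["student_" + str(i) for i in range(1, 2001)]   # hypothetical list of 2,000 students
n_desired = 100
interval = len(roster) // n_desired       # sampling interval: every 20th student

start = random.randint(0, interval - 1)   # random starting point within the first interval
sample = roster[start::interval]          # e.g., the 4th, 24th, 44th, ... if start happens to be 3
print(len(sample), sample[:3])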

Stratified Sampling

Stratified sampling involves an approach where extra precautions are taken to ensure representativeness of the sample. Strata define groups of people who share at least one common characteristic that is relevant to the topic of the study (StatPac, 2009). The term strata is the plural of stratum; a study can have one stratum, or multiple strata. For example, if you want to ensure that your sample is representative based on gender, then you would stratify on gender. If you know that 55% of the population consists of females and 45% of the population consists of males, then you could use random sampling within each gender stratum to extract a sample that matches the gender breakdown of the population precisely. Sometimes oversampling is used to decrease sampling error from relatively small groups—that is, researchers may choose to oversample from groups less likely to respond (Edwards & Thomas, 1993). If the percentages in the population match the sample strata selected (as in the gender example above), this is proportionate stratification; if oversampling is used, this practice would be considered disproportionate stratification (Henry, 1990).
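A minimal sketch of proportionate stratification, assuming a hypothetical frame in which each person is already tagged with the stratum of interest:

import random

frame = ([{"id": i, "gender": "female"} for i in range(5500)] +
         [{"id": i, "gender": "male"} for i in range(5500, 10000)])

n_total = 200
sample = []
for stratum, proportion in [("female", 0.55), ("male", 0.45)]:
    members = [person for person in frame if person["gender"] == stratum]
    sample += random.sample(members, round(n_total * proportion))  # proportionate allocation

print(len(sample))   # 200 respondents: 110 from the female stratum, 90 from the male stratum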

Cluster Sampling

Let's say you were interested in studying the perceptions of high school seniors enrolled in Advanced Placement (AP) psychology courses throughout the state of New York. It would be difficult to obtain a comprehensive roster of all students at all schools enrolled in AP psychology courses. The concept of clustering means that rather than randomize at the level of the individual person, you would randomize at the level of the school where AP psychology is taught. That is, each "participant" is a school, not an individual person. It probably would be possible to obtain a list of all the schools in New York that offer AP psychology; once the students are assigned to a group or cluster, then the entire cluster is selected or not selected at random (Edwards & Thomas, 1993). One of the general guidelines about cluster sampling is that the researcher desires "to have a larger number of small clustering units than to have a small number of larger clustering units" (Fife-Schaw, 2000, p. 97). The cluster sample technique is particularly useful when it is impossible or impractical to compile an exhaustive list of members composing the target population (Babbie, 1973; Henry, 1990).

Multistage Sampling

Multistage sampling describes a process that follows after cluster sampling has been implemented. In our AP psychology example, a random sample of New York high schools that offer AP psychology (clusters) is selected for further study. Multistage sampling kicks in once the schools to be studied are selected. For instance, is every high school senior within the selected school/cluster surveyed, or is a systematic random sample drawn? In essence, the multistage sampling approach is two-stage sampling, involving (a) the selection of clusters as a primary selection, and (b) sampling members from the selected clusters to produce the final sample (Chromy, 2006; Henry, 1990).
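A minimal sketch of the two stages together, using made-up school names and class sizes: stage one selects whole schools (clusters) at random, and stage two samples students within each selected school.

import random

# Hypothetical clusters: New York high schools offering AP psychology, each with a class roster
schools = {"school_" + str(i): ["school_" + str(i) + "_student_" + str(j)
                                for j in range(random.randint(15, 40))]
           for i in range(300)}

# Stage 1 (cluster sampling): randomly select whole schools, not individual students
selected_schools = random.sample(list(schools), 25)

# Stage 2 (multistage sampling): draw a sample of students within each selected cluster
final_sample = []
for school in selected_schools:
    class_roster = schools[school]
    final_sample += random.sample(class_roster, min(10, len(class_roster)))

print(len(selected_schools), len(final_sample))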

Nonprobability Sampling

Nonprobability methods of sampling mean just that: it is unknown what the probability is of any given member of the population being selected for the study. Unfortunately, with nonprobability sampling, sampling error cannot be estimated (StatPac, 2009). Two key advantages to nonprobability sampling, however, are cost and convenience (StatTrek, 2009). The main approaches utilizing nonprobability sampling are convenience sampling, quota sampling, snowball sampling, and volunteer sampling.

Convenience Sampling

Convenience samples are just that—convenient. This technique is often used in exploratory research where a quick and inexpensive method is used to gather data (StatPac, 2009). Psychologists have long relied on convenience samples; for instance, the use of introductory psychology human subject pools represents a convenience sampling approach.

Convenience samples are a quick, low-cost method to gather data from an available population of people. If you wanted to have a convenience sample, where would you go?

Quota Sampling

Quota sampling as a nonprobability sampling technique is the equivalent of stratified sampling from the probability sampling world. In stratified sampling, you identify key characteristics of interest, and then you sample to ensure that those individuals selected represent the population of interest in a proportional manner. In quota sampling, the researcher also desires the strata of interest, but then recruits individuals (non-randomly) to participate in a study (StatPac, 2009). Thus, quotas are filled with respect to the key characteristics needed for survey participants from the population.

Snowball Sampling

When using the snowball sample technique, members of the target population of interest are asked to recruit other members of the same population to participate in the study. This procedure is often used when there is no roster of members in the population, and those members may be relatively inaccessible, such as illegal drug users, pedophiles, or members of a cult (Fife-Schaw, 2000). Snowball sampling relies on referrals and may be a relatively low-cost sampling procedure (StatPac, 2009), but there is a high probability that the individuals who participate may not be representative of the larger population.

Volunteer Sample

This is a commonly used method for soliciting survey participation, but often the results are quite limited due to the possible motivational differences between volunteers and non-volunteers. When a popular website posts a survey and invites volunteers to participate, the explanatory and predictive power of the data gathered may be suspect (StatTrek, 2009). It is difficult to make confident generalizations from a sample to a population when nonprobability samples are employed, and even less confidence exists if a volunteer sample is utilized. With one piece of the survey/questionnaire puzzle in place (sampling), the next section presents the major survey research approaches or strategies that are commonly used.

6.2 Survey Research Methodologies

This section provides an overview of the choices that survey researchers must make concerning how the data are collected.

Interviews

In some ways, in-person interviews remain the gold standard in survey research. Interviews have fewer limitations about the types and length of survey items to be asked, and trained interviewers can use visual aids to assist during the interview (Frey & Oishi, 1995)—for example, the interviewee can see, feel, or taste a product (Creative Research Systems, 2009). Interviews are thought to be one of the best ways to obtain detailed information from survey participants. With an in-person interview, the interviewer and the participant can build rapport through conversation and eye contact, which might allow for deeper questions to be asked about the topic of interest. The drawbacks of interviewing include high costs and the reluctance of individuals to take the time to complete an interview (Creative Research Systems, 2009; Frey & Oishi, 1995). In addition to one-on-one interviews that may be pre-arranged, there are also intercept interviews, such as those you may have seen at a mall, where an interviewer intercepts shoppers and asks them for an interview. The level of intimacy that can be achieved with an in-person interview could also be a drawback for some individuals. There are also group interviews, which some call focus groups, where a group of people are interviewed at the same time.

Telephone Research

In some ways, a growing reluctance to participate in in-person interviews led to the growth of the telephone as a modality for conducting survey research (Tuckel & O'Neill, 2002). The use of telephone methodology has increased over time, but it faces a number of challenges today. For instance, think about how difficult it can be to reach someone on the phone who is willing to participate—Figure 6.1 (from Kempf & Remington, 2007) illustrates this challenge.

Figure 6.1: Example of telephone methodology. The original figure is a flowchart that traces a potential subject through successive screening branches: no telephone versus telephone; cell phone only versus landline; not at home versus at home; screens calls versus does not screen calls; and declines versus agrees to participate. By the time you have agreement from a possible participant in a telephone study, a great deal of screening has already occurred. Source: Kempf and Remington, 2007.


Coverage has always been a concern of telephone research as well. That is, the greater the percentage of homes with a telephone, the better the survey coverage, and the better the possibility of drawing a representative sample from the population of interest. See the following for how telephone coverage in the United States has changed over time (Kempf & Remington, 2007):

• In 1920, 65% of households did not have a telephone.

• In 1970, 10% of households did not have a telephone.

• In 1986, 7–8% of households did not have a telephone.

• In 2003, less than 5% of households did not have a telephone.

As you can see, coverage is quite good regarding households with a phone, but researchers who rely on telephone surveys as their modality for data collection face many challenges today, such as working within the context of Do Not Call lists. Researchers continue to develop new strategies for improving the efficiency of telephone surveys, such as by using computer-assisted telephone interviewing (CATI) systems, random digit dialing (RDD), and interactive voice response systems (“press 1 if you are . . .”). But the challenges seem to be growing as well. The growth of cell phone usage is changing the face of telephone survey research. And that growth has been explosive—from fewer than 500,000 users in 1985 to 35 million users in 1995, and more than 200 million cell phone users in 2005 (Kempf & Remington, 2007). Answering machines, Caller ID, privacy managers, and call blocking services all add to the increasing challenges of conducting survey research by telephone.
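To make the random digit dialing idea concrete, here is a simplified, hypothetical sketch; an actual RDD design would draw its area code and exchange combinations from a commercial or census-based frame rather than from the made-up values used here.

import random

# Hypothetical working area-code/exchange combinations
prefixes = [("208", "555"), ("208", "556"), ("312", "555")]

def random_digit_dial(n_numbers):
    # Build candidate phone numbers by attaching random final four digits
    numbers = []
    for _ in range(n_numbers):
        area, exchange = random.choice(prefixes)
        numbers.append("(" + area + ") " + exchange + "-" + str(random.randint(0, 9999)).zfill(4))
    return numbers

print(random_digit_dial(5))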

Mail Surveys

Odds are you've received a survey in the mail. Did you complete it? Did you give it to someone else in your household to complete? As you can see, there are challenges to using mailed surveys as your modality of survey data collection. There are advantages and disadvantages of using a particular approach, as explained by de Leeuw and Hox (2008). The advantages to mail surveys include (a) relatively low cost per survey respondent—mailed surveys can be completed with a relatively small staff; (b) no time pressure on the part of the survey respondent; (c) the mailed survey can include visual stimuli, using different scaling techniques and visual cues for survey completions (such as skip patterns); (d) the potential effect (bias) of the interviewer is removed with a mail survey; (e) participants have greater privacy in responding to a mail survey; and (f) if a good sample frame is available with a mailing list, the benefits of random sampling techniques can be realized. The potential disadvantages to mail surveys include (a) potentially low response rates; (b) limited capabilities for complex questions, and the inability for an interviewer to clarify questions being asked; (c) when mail is delivered to a household, there is no guarantee that the person for whom the survey is intended is the person completing the survey; and (d) the turnaround time for receiving mailed survey responses can be long.

Internet Surveys

Participating in a survey facilitated by the Internet could involve invitations through listservs, discussion groups, advertisements on search engine pages, email directories, public membership directories, chat room rosters, guest lists from web pages, and of course individual email solicitations (Cho & LaRose, 1999). Compared with paper and pencil surveys, online/Internet surveys offer a number of advantages (Beidernikl & Kerschbaumer, 2007), including easy and inexpensive distribution to large numbers of individuals via email; guidance of the participant through the survey by essentially filling out a form (i.e., skip patterns are hidden from view); the ability to incorporate digital resources (e.g., video clips, sound, animation) into the survey design if necessary; and the ability to "require" an answer and verify it instantly (e.g., when asked in what year you were born, if something other than a four-digit number is entered, the participant can be instantly prompted to use the correct format and prevented from proceeding until making the correction).

A number of survey tools are available to assist in the collection of online survey data. Two of the more popular choices are SurveyMonkey (http://www.surveymonkey.com) and Qualtrics (http://www.qualtrics.com); others include QuestionPro, Zoomerang, KeySurvey, SurveyGizmo, and SurveyMethods. Many of these online survey websites allow you to create an account for free and use it on a limited basis to design a survey and then collect data with that survey (once you exceed a certain number of surveys or a certain number of responses, most of these sites will want you to purchase an annual membership). After creating your survey, the software will create a custom URL that you then can email to potential participants or post on a website. You probably have completed a number of online surveys and are familiar with the types of questions and formats. One of the advantages of online survey software is that you can usually download the outcomes/results directly into an Excel file for later analysis (or other types of files, such as SPSS files). Also, some of the sites can assist with rudimentary data analysis (and creating graphs and charts) without even exporting the data.
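Once the responses are exported, even rudimentary analysis is straightforward. Here is a minimal sketch using Python's standard csv module; the file name and the column name ("satisfaction") are hypothetical stand-ins for whatever your survey tool actually exports.

import csv
from collections import Counter

with open("survey_export.csv", newline="") as f:         # hypothetical exported file
    rows = list(csv.DictReader(f))

counts = Counter(row["satisfaction"] for row in rows)    # frequency of each response option
total = sum(counts.values())
for option, count in counts.most_common():
    print(option + ": " + str(count) + " (" + format(count / total, ".1%") + ")")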

Two key drawbacks of Internet surveys are issues of coverage and nonresponse (de Leeuw & Hox, 2008). The issue of coverage, that is, who has Internet access and who does not, is sometimes referred to as the digital divide (Suarez-Balcazar, Balcazar, & Taylor-Ritzler, 2009). Coverage is a problem for Internet surveys (de Leeuw & Hox, 2008), and Suarez-Balcazar et al. (2009) provided some specific examples of the possible drawbacks: (a) individuals from low-income and working-class communities are less likely to have access to the Internet; (b) low-income and working-class, culturally diverse individuals are more likely to have only one computer, which would limit the potential for completing Internet-based surveys; (c) limited access often translates into limited familiarity with online/Internet applications; and (d) there may be cultural barriers that make Internet research more difficult to successfully accomplish (more on this in a moment).

The Internet can facilitate many types of surveys, which are easier and less expensive than regular paper and pencil surveys.

In addition to the challenge of coverage, there is also the challenge of representativeness. An Internet survey approach may not achieve the level of representativeness desired (Beidernikl & Kerschbaumer, 2007; de Leeuw & Hox, 2008). In fact, you can think about whether those replying to an Internet survey are representative of the entire population, representative of the Internet population, or even representative of a certain targeted population (Beidernikl & Kerschbaumer, 2007). Add in the complexity of culture, and you can see that well-designed Internet surveys can take a significant amount of work. Consider this example offered by Suarez-Balcazar et al. (2009):

For instance, in the Chicago Public Schools, students speak over 100 different languages and dialects. Social scientists planning studies in these types of settings must consider how they are going to communicate with the participants' parents. Although children of first generation immigrants may be able to speak, read, and participate in Internet-based surveys in English, information such as consent forms and research protocols that are sent to the parents may need to be translated into their native language and administered using paper-and-pencil format. (p. 99)

If not conducted carefully, online/Internet surveys are capable of invading participants' privacy (Cho & LaRose, 1999), and care should be taken to minimize that threat.

6.3 Comparisons of Methodologies

With all the different modalities of survey administration, the natural question arises—which approach is best? The answer to that complex question is, it depends. However, there have been some very useful studies conducted that compare the different methodologies, and below is a sampling. de Leeuw and Hox (2008) report that, on average, web-based surveys have an 11% lower response rate than mailed and telephone surveys. In an experiment that directly compared regular mail and e-mail surveys, Schaefer and Dillman (1998) found comparable response rates—57.5% for regular mail, and 58.0% for e-mail. When Braunsberger, Wybenga, and Gates (2007) compared telephone surveys and web-based surveys, a two-wave web-based approach provided more reliable data estimates than telephone surveys, and at a lower cost: Each telephone survey cost $22.75 to complete, whereas the cost of each web-panel survey was $6.50.

What does the future hold for the preferred survey research modality? In addition to the particularly useful comparison studies, a growing trend is to utilize a mixed-mode approach (e.g., Nicolle & Lou, 2008), where multiple modalities are accessed to achieve the research goals. Thus, you may see email reminders to participate in a telephone survey. The mixed-mode approach can also involve the collection of qualitative data as well as quantitative data. Qualitative data, such as the responses to open-ended questions on a survey (e.g., "How do you feel about parking on your campus?"), can provide particularly rich and useful data, and qualitative approaches are often the most helpful when we know the least. In the Nicolle and Lou (2008) example, faculty members were asked about the process by which they adopt new technologies for use in college courses, and some faculty completed surveys, whereas others were interviewed in person—thus, a mixed-mode approach. In another example, McDevitt and Small (2002) used both Internet and mail to survey participants of an annual sporting event.

The mixed-mode approach uses several methods to gather research. What are the benefits of this approach? The drawbacks?

If the sampling plan and survey modality puzzle pieces are in place, another decision to be made is the overall design of the survey research. In some regard, these concepts do overlap with topics from Chapter 8 on quasi-experimental research designs. But a brief review of how these design decisions affect survey research is warranted here.

6.4 Designs for Survey Research

Although different researchers may use slightly different terminology, the major categories of survey research designs are presented in this section.

Cross-Sectional Survey Designs

In a cross-sectional survey design, data collection occurs at a single point in time with the population of interest (Fife-Schaw, 2000; Visser, Krosnick, & Lavrakas, 2000). One way to think about a cross-sectional survey is that it is a snapshot in time (Fink & Kosecoff, 1985). Cross-sectional surveys are relatively inexpensive (Fife-Schaw, 2000) and relatively easy to do (Fink & Kosecoff, 1985). However, if the landscape changes rapidly, and that amount of change is important to your survey research, then using a cross-sectional design will not allow you to capture this change over time (Fife-Schaw, 2000; Fink & Kosecoff, 1985).

Cross-sectional survey design gathers data from the population all at one time, as shown in this call center that collects survey information for clients.

Longitudinal Survey Designs

A longitudinal survey is conducted over time, but this label alone does not give us enough details about the type of longitudinal survey. Longitudinal studies face unique challenges, such as keeping track of respondents over time and how to motivate respondents to continue to respond in the future (Dillman et al., 2009). In general, the key advantage of longitudinal designs is that they allow for the study of age-related development. However, this can be confounded with events over time that might influence your variables (Fife-Schaw, 2000). For example, if you are interested in how individuals feel about their personal safety, and the span of your longitudinal research includes September 11, 2001, then your research might be affected by that historical event, and changes may not be due only to the passage of time. Attrition (dropping out of the study over time) is a drawback, and participants repeatedly tested over time can be susceptible to the demand characteristics of the research—having participated multiple times in the past, the participants know what is expected and probably understand the variables and general hypotheses being tested (Fife-Schaw, 2000).

Cohort and Panel Survey Designs

In a cohort study, new samples of individuals are drawn over time from the same cohort, whereas in a panel study, the same people are tracked over time, spanning at least two points in time (Fink & Kosecoff, 1985; Jackson & Antonucci, 1994; Visser et al., 2000). A panel study can be particularly useful for understanding why particular changes are occurring over time, because you are asking the same individuals to respond over time (you also have a baseline comparison measure from when they first entered the study).

There are so many more variations of possible research designs, such as trend studies, population sampling, and even an approach called the “multigenerational lineal panel” approach (Jackson & Antonucci, 1994). The key to remember for now is that there are many pieces of this puzzle to be solved, and the survey research design that psychologists select is based on a number of factors. But the types of questions that we can answer are strongly governed by how we ask the question. This is illustrated in the “Classic Studies in Psychology” story that follows, and much of the remainder of this chapter is devoted to providing helpful advice about crafting your own survey questions, selecting the scales of measurement, and choosing data analysis strategies to make the most of survey data.

Classic Studies in Psychology: Loftus and Eyewitness Testimony (Loftus & Palmer, 1974; Loftus, 1975)

As you will see, psychologist Elizabeth Loftus cleverly studied the relationship between the phrasing of a question and the impact of that phrasing on the answer. Not only is this an important consideration for survey research, but this line of research helped Loftus to develop expertise concerning eyewitness testimony (and how asking questions may lead to the creation of false memories).

In the Loftus and Palmer (1974) experiment, 45 students were shown 7 films being used by a local Seattle Police Department as part of their driver's education program. Following each film, the participants were asked to write about the film they had just seen and to answer a series of survey questions—the key research question asked about the speed at which the cars were going when the collision occurred. However, for the 45 students who viewed the accident film, groups of nine were asked different questions, as presented in the table below. After being asked the particular question (note the key verb in each version), students responded with their average speed estimate of the two cars, in miles per hour (mph). The results are presented in Table 6.1.

Table 6.1: Loftus and Palmer survey questions and estimates

Survey Question | Average Speed Estimate

About how fast were the cars going when they smashed each other? | 40.5 mph
About how fast were the cars going when they collided with each other? | 39.3 mph
About how fast were the cars going when they bumped each other? | 38.1 mph
About how fast were the cars going when they hit each other? | 34.0 mph
About how fast were the cars going when they contacted each other? | 31.8 mph

Loftus and Palmer (1974) found these speeds to be significantly different. Thus, even the verb used to ask the question made a significant difference in how memories were reported. But Loftus’ creative thinking about these issues continued.

In a study published a year later, Loftus (1975) further explored how survey answers were dependent on the questions, and furthermore, how embedding false information in the original survey questions can lead to the embedding of false memories over time. This classic study reports the outcomes of four different experiments, but we'll only describe two of those experiments here. In Experiment 1, students "were shown a film of a multiple-car accident in which one car, after failing to stop at a stop sign, makes a right-hand turn into the main stream of traffic. In an attempt to avoid a collision, the cars in the oncoming traffic stop suddenly and a five-car, bumper-to-bumper collision results. The film lasts less than 1 min., and the accident occurs within a 4-sec. period" (p. 563). The key car in the scenario (Car A) is then presented as a part of a diagram with the other cars. Half the students were asked, "How fast was Car A going when it ran the stop sign?" and the other half of students were asked, "How fast was Car A going when it turned right?" However, in this study, the key question of interest was not about miles per hour but rather was "Did you see a stop sign for Car A?" See the results in Table 6.2.

Table 6.2: Survey results

Leading Question | Answer to the Next Question, "Did you see a stop sign for Car A?"

How fast was Car A going when it ran the stop sign? | 53% answer YES
How fast was Car A going when it turned right? | 35% answer YES

Just mentioning the stop sign in the question helps participants remember that there was a stop sign. But what if leading questions contained misinformation? What impact would that have on memory?

Loftus addressed that issue in Experiment No. 4 in her 1975 study. She showed students a 3-minute film of an automobile that eventually collides with a man pushing a baby carriage. After viewing the film, the participants were asked 45 questions about the film, but Loftus was only interested in 5 of the answers. In the "Direct" condition, the participants were asked a straightforward question, such as, "Did you see a woman pushing the carriage?" (We know from the description above that the correct answer is no.) In the "False Presupposition" condition, participants were asked, "Did the woman pushing the carriage cross into the road?" A third group served as the control group and did not receive any key questions at all (just filler questions). One week later, the participants returned and were asked the direct question—in this case, did you see a woman pushing the carriage? See Table 6.3 to find out what happens one week later.

Table 6.3: Experiment No. 4 follow-up questions and results

Experimental Condition | Percentage YES Responses to "Did you see a woman pushing the carriage?"

Direct—Did you see a woman pushing the carriage? | 36% YES
False Presupposition—Did the woman who was pushing the carriage cross into the road? | 54% YES
Control (No leading question) | 26% YES

Note: Remember, it was a man who was pushing the carriage in the film. If memory were working perfectly, the percentage of YES in all three rows should be 0%.

Note that for the control group, without any leading questions at all, 26% remember a woman pushing the carriage, when in fact it was a man. But look what happens one week later—the amount of misremembering increases, and it can be manipulated by the researcher. You should know that Loftus did this with other scenarios throughout the study (1975), as well as in other studies (e.g., Loftus & Hoffman, 1989). These fascinating outcomes have continued to influence Loftus' work, and have influenced the work of others as well (e.g., Crombag, Wagenaar, & van Koppen, 1996).

If you think about it, the ability to change memories based on the way that a question is asked has important implications for issues such as eyewitness testimony and repressed memories, two topics that Loftus has explored throughout her career. Niland (2007) correctly pointed out that Loftus' research squarely puts her in the center of the controversy about repressed childhood memories, and that it is possible to implant a false memory. This capability (or an accusation to some) threatens a number of therapists and victims of abuse who have come to believe that the memories of the abuse have been repressed for years, and with the help of a psychotherapist those memories can be discovered (Niland, 2007).

Elizabeth Loftus has received many accolades for her work on the formation and manipulation of memory. When Philip Zimbardo (President of the American Psychological Association in 2002) wrote about whether psychology makes a significant difference in our lives, Loftus' research was listed as seminal work in the area of eyewitness identification (Zimbardo, 2004). In 2004, Loftus was elected to the National Academy of Sciences (a high honor); she was also named as one of the 100 most influential psychologists of the 20th century and the highest ranked woman on the list (Zagorski, 2005).

Reflection Questions

1. How does the careful selection of the verb used in the experiments by Loftus compare to the types of verbs you might select to develop survey research questions for a project at work? Is there a chance that the specific wording selected might have an impact on the results you observe? Why or why not?

2. Have you ever been in a car accident or spoken to someone who has? Think about your memory for that event (or ask the person about his or her memory for that event). Is the memory like a flashbulb memory, where every element of the scene is remembered, or have some memories faded over time while other "memories" seem to have been invented? What about the effect of an emotional reaction during a car accident, such as the rush of adrenaline in anticipation of the fight-or-flight response? How do these individual factors need to be considered and combined to better our understanding of memory for these kinds of events?

3. Eyewitness testimony has important ramifications for how our criminal justice system works. Eyewitness testimony can help clear some people of crimes, whereas eyewitness testimony sometimes provides key evidence that leads to the incarceration of an individual. Given the fallibility of memory, does the legal system have checks and balances in place to help prevent misremembering and to minimize the fallibility of eyewitness testimony?


6.5 Scaling Methods

As you can surely see by now, survey research is a complex puzzle with multiple pieces needing to be put into place before the picture is complete. Perhaps one of the most complicated parts of survey research is deciding on the scale by which to measure a person's attitudes, opinions, behavior, knowledge, etc.—in fact, there are entire books on the subject (e.g., Netemeyer, Bearden, & Sharma, 2003). As you read earlier in Loftus' work, how you ask the questions does shape the answer you receive. In fact, how you shape the possible answers can even influence the answers you receive. For example, Schwartz (1999) reported on some of his previous research where he had surveyed German respondents about the number of hours per day that they watch television. Two groups were asked the same question but given different response categories—these response categories are depicted in Table 6.4.

Table 6.4: How response scales can shape the results—daily TV consumption

Low Frequency Alternatives | Percent Reporting

Up to ½ hour | 7.4%
½ hour to 1 hour | 17.7%
1 hour to 1½ hours | 26.5%
1½ hours to 2 hours | 14.7%
2 hours to 2½ hours | 17.7%
More than 2½ hours | 16.2%

High Frequency Alternatives | Percent Reporting

Up to 2½ hours | 62.5%
2½ hours to 3 hours | 23.4%
3 hours to 3½ hours | 7.8%
3½ hours to 4 hours | 4.7%
4 hours to 4½ hours | 1.6%
More than 4½ hours | 0.0%



Look what happens, depending on the response scale. When the scale starts low (the low-frequency alternatives), only 16.2% of respondents report watching more than 2½ hours of television per day, but when the alternatives start higher on the scale (the high-frequency alternatives), 37.5% of respondents report watching more than 2½ hours of television per day. A difference of this magnitude, produced by the response scale alone, makes it difficult to draw meaningful conclusions. So what do we do about situations where we need to design surveys and items and scales? We rely on best practices and established research that guides the decision making necessary to select an appropriate scale. What follows is a brief overview of the major types of scales you are likely to use.

Dichotomous Scales

When you use a dichotomous scale, there are only two possible options. So if the possible options are agree/disagree, yes/no, true/false, male/female, and so on, then you are using a binary scale. Respondents provide nominal scale data (this is an important consideration for later data analysis options). Some examples of dichotomous scales where a yes/no type of response would be adequate are:

• I am married.

• I download music illegally.

• My parents are divorced.

Some argue (e.g., Spector, 1992) that single yes/no questions are insufficient because they are not sensitive to subtle change over time, they force individuals to place themselves into large categories, and many psychological phenomena are so complex that a singular yes/no response may fail to capture that complexity. As you design your surveys, keep in mind that the hypotheses you wish to test will help to inform you whether a dichotomous scale can yield the type of information you seek.

Likert Scales

Likert scales, or perhaps Likert-type scales, may be the most famous and popular type of scale used by psychological researchers today. The Likert scale is named after Rensis Likert (pronounced Lick-ert), a psychologist from the University of Michigan. Likert's seminal work (1932) called for a 5-point response scale measuring from one pole of disagreement to the other pole of agreement. Each of the scale points has a specific verbal description (Wuensch, 2005). A declarative statement is made, and then the respondent selects the appropriate answer. The low value is strongly disagree, and the high value is strongly agree, like this:


1 = strongly disagree

2 = disagree

3 = neutral (neither agree nor disagree)

4 = agree

5 = strongly agree
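Scoring a set of Likert items is usually simple arithmetic on the numeric codes. Here is a minimal sketch, assuming a hypothetical five-item scale whose third item is negatively worded and therefore reverse-scored:

# One respondent's answers to five items (1 = strongly disagree ... 5 = strongly agree)
responses = [4, 5, 2, 4, 3]
reverse_scored = {2}    # index of the negatively worded item (hypothetical)

# On a 5-point scale, reverse-scoring an item means taking 6 minus the response
scored = [6 - r if i in reverse_scored else r for i, r in enumerate(responses)]

total = sum(scored)
mean_score = total / len(scored)
print(scored, total, round(mean_score, 2))   # [4, 5, 4, 4, 3] 20 4.0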

There have been many variations and changes suggested that are loosely based on the above criteria, so you will often see "Likert-type" scale used rather than the very specific Likert scale as described above. For example, Fowler (1988) has made the argument that Likert-type variations (shown below) might be better suited because they would carry less emotional weight: 4 = completely agree, 3 = generally agree, 2 = generally disagree, and 1 = completely disagree; or 4 = completely true, 3 = mostly true, 2 = mostly untrue, and 1 = completely untrue. Of course, these would not conform to the true Likert scale but would be categorized as Likert-type scales. The following examples demonstrate many of the variations on this theme, as presented by Vagias (2006). Note the varying types of response anchors possible with a Likert-type scale approach, including the use of frequency, truthfulness, probability, importance, concern, support, usage, awareness, satisfaction, and influence. As you think about the type of scale you might employ in your survey research, and you examine the following examples, you should begin to appreciate just how useful and versatile a Likert-type scale can be.

Likert-Type Scale Response Anchors

Level of Acceptability

1 – Totally unacceptable 2 – Unacceptable 3 – Slightly unacceptable 4 – Neutral 5 – Slightly acceptable 6 – Acceptable 7 – Perfectly Acceptable

Level of Importance

1 – Not at all important 2 – Low importance 3 – Slightly important 4 – Neutral 5 – Moderately important 6 – Very important 7 – Extremely important

Knowledge of Action

1 – Never true 2 – Rarely true 3 – Sometimes but infrequently true 4 – Neutral 5 – Sometimes true 6 – Usually true 7 – Always true

Level of Problem

1 – Not at all a problem 2 – Minor problem 3 – Moderate problem 4 – Serious problem

Level of Awareness

1 – Not at all aware 2 – Slightly aware 3 – Somewhat aware 4 – Moderately aware 5 – Extremely aware

Likelihood

1 – Extremely unlikely 2 – Unlikely 3 – Neutral 4 – Likely 5 – Extremely likely

Level of Satisfaction – 5 point

1 – Very dissatisfied 2 – Dissatisfied 3 – Unsure 4 – Satisfied 5 – Very satisfied

Level of Appropriateness

1 – Absolutely inappropriate 2 – Inappropriate 3 – Slightly inappropriate 4 – Neutral 5 – Slightly appropriate 6 – Appropriate 7 – Absolutely appropriate


Level of Agreement

1 – Strongly disagree 2 – Disagree 3 – Somewhat disagree 4 – Neither agree nor disagree 5 – Somewhat agree 6 – Agree 7 – Strongly agree

Frequency – 5 point

1 – Never 2 – Rarely 3 – Sometimes 4 – Often 5 – Always

Level of Familiarity

1 – Not at all familiar 2 – Slightly familiar 3 – Somewhat familiar 4 – Moderately familiar 5 – Extremely familiar

Level of Difficulty

1 – Very difficult 2 – Difficult 3 – Neutral 4 – Easy 5 – Very easy

Level of Quality – 5 point

1 – Poor 2 – Fair 3 – Good 4 – Very good 5 – Excellent

Level of Satisfaction – 5 point

1 – Not at all satisfied 2 – Slightly satisfied 3 – Moderately satisfied 4 – Very satisfied 5 – Extremely satisfied

Source: Vagias (2006).


Thurstone Scale and Guttman Scale

Both the Thurstone scale and the Guttman scale describe a methodology of scale development as well as of measuring individual responses. In 1928, Thurstone proposed the technique (now called the Thurstone scale) to develop a response scale of equally appearing intervals by having participants make a series of comparative judgments (Page-Bucci, 2003; Roberts, Laughlin, & Wedell, 1999). First, a large number of attitude statements are written to represent the entire range of possible opinions, and respondents provide a global evaluation of favorability or unfavorability toward the topic presented in the survey items—for instance, a pairwise comparison could be presented, where a respondent is forced to choose which statement he or she agrees with more, and the process is repeated over and over. From a group of individuals, this yields a hierarchy of agreement scores for each item, and then in a second stage individuals re-rate the items in terms of agreement or disagreement (Page-Bucci, 2003; Roberts et al., 1999). The goal of using this multistage process is that the final items retained in the survey fit the respondents' patterns of answering well, rather than hoping that survey items capture what the respondents think about a particular topic.

A Guttman scale is difficult to construct because it is based on generating a set of items that increase in difficulty; on a 7-item scale, if the easiest item to agree to is Item No. 1, and the most difficult item to agree to is Item No. 7, and you agree with Item No. 5, that automatically means that you agree with the first four items as well. In other words, whatever item you agree with on the hierarchy, it is assumed that you agree with all the items leading up to it also. Page-Bucci (2003) indicated that although this scale may allow for more complex measures than a Likert-type scale, the scales are difficult to construct and the scoring systems are cumbersome.
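The scoring logic behind a Guttman hierarchy can be sketched simply (this is only the scoring step, not the scale-construction procedure, and the response patterns below are hypothetical): if items are ordered from easiest to hardest to agree with, a response pattern fits the scale when every agreement implies agreement with all easier items, and the scale score is just the number of agreements.

def guttman_score(pattern):
    # Responses (True = agree) to items ordered from easiest (index 0) to hardest.
    # Return the scale score if the pattern fits a perfect Guttman hierarchy, otherwise None.
    score = sum(pattern)
    expected = [True] * score + [False] * (len(pattern) - score)
    return score if list(pattern) == expected else None

print(guttman_score([True, True, True, True, True, False, False]))     # 5: agrees through Item No. 5
print(guttman_score([True, False, True, False, False, False, False]))  # None: does not fit the hierarchy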

Semantic Differential Scales

The semantic differential scale technique, developed by Osgood in the 1950s, is a scale that is designed to measure affect or emotion (Henerson, Morris, & Fitz-Gibbon, 1987), but it can measure much more than that. Using adjectives that are polar opposites, participants are asked to select how they feel about the survey topic being presented. For example, to respond to the question "Thinking about this course, how do you feel about the grading policies being used?" the surveyed person would be asked to place a checkmark on one of the seven lines spanning the polar opposites on the semantic differential scale below:

fair ___ ___ ___ ___ ___ ___ ___ unfair

unreliable ___ ___ ___ ___ ___ ___ ___ reliable

confusing ___ ___ ___ ___ ___ ___ ___ clear

helpful ___ ___ ___ ___ ___ ___ ___ not helpful

good ___ ___ ___ ___ ___ ___ ___ bad

Based on prior research, three types of findings tend to emerge from the use of semantic differential scales (Page-Bucci, 2003): an evaluative factor (good-bad), an intensity/potency factor (strong-weak), and an activity factor (slow-fast). Responses on these items can be given a score of 1 to 7, depending on where the mark on the scale occurred; most researchers analyze these data the same as they would Likert-type agreement scale data—as interval/ratio (scale) data. The semantic differential scale is good at capturing feelings and emotions, is relatively simple to construct, and is relatively easy for participants to use, but the resulting analyses can be complicated (Page-Bucci, 2003). An example of more possible pairings appears below (from Henerson et al., 1987), followed by a brief scoring sketch:


angry-calm

bad-good

biased-objective

boring-interesting

closed-open

cold-warm

confusing-clear

dirty-clean

dull-lively

dull-sharp

irrelevant-relevant

last-first


not brave-brave

old-new

passive-active

purposeless-purposeful

sad-funny

slow-fast

sour-sweet

static-dynamic

superficial-profound

tense-relaxed

ugly-pretty

unfair-fair

unfriendly-friendly

unhappy-happy

unhealthy-healthy

uninformative-informative

useless-useful

weak-strong

worthless-valuable

wrong-right

Other Types of Scales

There are many more types of scales that are used in survey research. Visual analog scales can be used to obtain a score along a continuum, where a participant places a checkmark to indicate where his or her attitude or opinion falls along the scale. Below is an example of the visual analog scale:

No pain at all ———————————— The worst pain I ever experienced

This would be an example of a subjective continuum scale, where a checkmark is made along the scale to indicate how positive or negative a respondent’s opinion is about a particular topic:

Very positive ———————————————————— Very negative
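On paper forms like these, the response is usually scored by measuring how far along the line the checkmark falls and converting that distance to a number. Here is a minimal sketch of the conversion in Python; the 120-mm line length and the measured distances are hypothetical.

# Minimal sketch: converting marks on a paper visual analog scale (VAS)
# into 0-100 scores. Distances are measured from the left anchor in mm.

def vas_score(distance_mm, line_length_mm=120.0):
    """Rescale a measured mark position to a 0-100 score."""
    return round(100.0 * distance_mm / line_length_mm, 1)

measurements_mm = [18.0, 66.5, 102.0]   # three hypothetical respondents
print([vas_score(d) for d in measurements_mm])  # [15.0, 55.4, 85.0]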

With the advent of online survey packages, the visual analog scale has become digital. In the online survey software package Qualtrics, visual analog scales are presented as "sliders," and respondents can click on the pointer and slide it to a location along the continuum that represents their belief. See Figure 6.2 for an example of a series of slider questions.


Figure 6.2: Example of a visual analog scale

The item read: "Please rate your overall level of SATISFACTION for each of the workplace categories below. Move the slider to the appropriate level: 0 = completely dissatisfied and 100 = completely satisfied." The categories rated were: my co-workers, the workplace environment, my company in general, my direct supervisor, my annual compensation, and the opportunities for advancement.

This is an example of a visual analog scale used in survey research in a survey software program called Qualtrics. Participants click on the blue arrow and drag it to the location that indicates their answer.

Source: Qualtrics, 2011

Surveys do have advantages, though: They allow for anonymity of responses and statistical analysis of large amounts of data, they can be relatively cost effective, sampling mechanisms can be carefully controlled in some cases, and by using standardized questions change can be detected over time (Seashore, 1987). Some of the limitations and risks of the survey research approach include a lack of control over variables of interest, response rates that may be problematic, ambiguous surveys that may lead to difficult interpretation, participants who in some contexts may not believe their data are truly anonymous and confidential, the possibility of bias due to non-response or socially desirable responding, and the inability to draw cause-and-effect conclusions (Fowler, 1998; Seashore, 1987).

Surveys are pervasive in psychology and throughout culture. The ability to properly design a survey and interpret its results appropriately is a skill that serves psychology majors well in the workplace, or in graduate school and then the workplace. But it is important to remember that surveys are a measure of self-report and not actual behavior. There are multiple reasons why survey data may be inaccurate: Respondents may not know the answer, may know the answer but be unable to recall it, may not understand the question (but answer anyway), or may simply choose not to answer for whatever reason (Fowler, 1998). Because most survey research does not share the same characteristics as experimental designs, it is important not to over-interpret the results of survey research. The survey approach is powerful in helping psychologists identify the relationships between variables and differences among groups of people, but the results are only as good as the quality of the design behind this complex task.

6.6 Analysis of Survey Data

In most respects, analyzing survey data is the same as analyzing any other type of data—your analysis choices are based on your hypotheses, the scales of measurement, the tools available for data analysis, and so on. Before mentioning specific approaches for data analysis, let's review at a conceptual level the types of errors that are encountered in survey research. Remember that errors in this context are not mistakes but are the possible outcomes of the study that the researcher cannot account for—that is, the changes or values of the dependent variable that are not due to the independent variables being manipulated, controlled, or arranged.

Types of Errors

In classic psychometric measurement theory, the total amount of error is assumed to be the sum of measurement error + sampling error (Dutka & Frankel, 1993). Those who study survey research design further categorize the types of threats and errors that can occur with this type of research. Although Dillman et al. (2009) were referring specifically to Internet panel research, they presented a four-cornerstone model of surveying and errors that is useful here for our greater understanding.

A coverage error in survey research stems from the methodology used and whom that methodology can reach. For example, if an Internet approach is used, only about 70% of households have Internet access, so coverage error exists (Dillman et al., 2009). The coverage error is much smaller with telephone surveys, but the proportion of individuals with landlines is decreasing whereas the number of cell phone subscribers is increasing (Kempf & Remington, 2007). Survey researchers need to be cognizant of coverage error concerns when making methodological choices.

A sampling error occurs when not all the potential participants from a population are represented in a sample (Dutka & Frankel, 1993), and this is often due to the sampling method utilized by the researcher (Futrell, 1994). In fact, this sampling procedure is so important that it was the opening puzzle piece of this chapter. Another related sampling issue is volunteerism, or self-selection. When a study relies on volunteers (for whatever reason), there is always a concern that volunteers may behave differently than non-volunteers, and if this is the case, it weakens the generalizability of the survey results. In fact, Rosenthal and Rosnow (1975) have reliably demonstrated that volunteers differ from non-volunteers in the following ways: (a) volunteers are more educated than non-volunteers; (b) volunteers are from a higher social class than non-volunteers; (c) volunteers are more intelligent than non-volunteers; (d) volunteers are more approval-motivated than non-volunteers; and (e) volunteers are more sociable than non-volunteers. However, if the only way you can conduct your research is through volunteers, then that is what you do. But it would be important to remember these caveats when drawing conclusions from your survey research (or any research) that depends exclusively on volunteer participants.

Measurement error can occur for a number of reasons, but measurement errors tend to fall into the categories of measurement variation (the lack of a reliable instrument) and measurement bias (asking the wrong questions, or using the results inappropriately) (Dutka & Frankel, 1993). As in any complex enterprise, the potential for mistakes can be high, and Futrell (1994) listed some common measurement errors that can occur in survey research:

1. Failing to assess the reliability of the survey.
2. Ignoring the subjectivity of participant responses in survey research.
3. Asking non-specific survey questions.
4. Failing to ask enough questions to capture the behavior, opinion, or attitude of interest.
5. Utilizing incorrect or incomplete data analysis methods.
6. Drawing generalizations that are not supported by either the data or the data analysis strategy selected.

Essentially, measurement errors address two issues: (a) did we measure what we thought we measured, and (b) did we interpret the results appropriately?

Non-response error is of particular concern in survey research (Dillman et al., 2009). As a general rule, if there is a response rate of 25% or less (or a non-response rate of 75% or more), then the survey researcher should be concerned with the question "Are those responding to my survey different from those not responding to my survey?" (Dillman et al., 2009). There are many different approaches for dealing with high non-response rates; some of those methods involve weighting the responses that are received (Dale, 2006), and others involve specifically following up with a subset of non-responders and asking them why they didn't respond. The goal here is to determine whether there was any systematic bias in why people responded or did not respond to the initial survey request. If there is no bias (that is, no systematic reason driving non-response), then the non-response rate is less of a concern to the survey researcher.
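To illustrate the weighting idea, responses from groups that are underrepresented among those who answered can be counted more heavily so that the weighted sample mirrors the population. The sketch below shows the basic arithmetic of such post-stratification weights in Python; the group labels and percentages are entirely hypothetical and are not drawn from the cited studies.

# Minimal sketch: post-stratification weights for non-response.
# Each group's weight is its share of the population divided by its
# share of the obtained sample (all figures hypothetical).

population_share = {"age_18_34": 0.30, "age_35_54": 0.40, "age_55_plus": 0.30}
sample_share     = {"age_18_34": 0.15, "age_35_54": 0.45, "age_55_plus": 0.40}

weights = {group: round(population_share[group] / sample_share[group], 2)
           for group in population_share}

print(weights)  # {'age_18_34': 2.0, 'age_35_54': 0.89, 'age_55_plus': 0.75}
# Underrepresented younger respondents count roughly double in a weighted
# analysis; overrepresented groups count for less than one case each.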

Data Handling Issues

The details and complexity of data handling issues within survey research are beyond the scope of this chapter, but two issues are worth mentioning, if only briefly.

When volunteers are used in sampling, there is concern that volunteers could change the survey results by behaving differently than non-volunteers. How might this be addressed?


After collecting your data, but prior to analysis, you will have to do some data "cleaning" (sometimes called data editing). Even though every survey researcher must do this, there are no commonly accepted standards for data cleaning (Leahey, Entwisle, & Einaudi, 2003). Sometimes it involves the elimination of outliers (which is relatively straightforward), but other times data decisions are more complex. For example, someone may be hand-coding data into an SPSS file, and on a written survey form completed by a college student, the student filled in "Age: ____" with 107. It would be pretty clear from this scenario that there was not a 107-year-old college student in the laboratory setting when the data were collected, so this value should be discarded from the "age" variable (thus, this participant has missing data). But this brings other issues to mind: If this respondent reports many outliers, did he or she take the survey seriously? Should just the age value be discarded, or should the entirety of the survey responses from this individual be deleted?

Data cleaning decisions can become more complex. Let's say you are asking survey items where the responses are made on a Likert-type agreement scale, where 1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, and 5 = strongly agree. One coded response to the statement "I am comfortable with the undergraduate major I have selected" is 55. What do you do? Do you assume the respondent meant a 5 (strongly agree), and change the response? Is it possible to go back and confirm what the participant meant, or were the data collected anonymously? You could guess that a 55 meant a 5, but what about a 23 entry? Did the person mean 2 (disagree) or 3 (neutral)? Here's one more: In an online survey, where respondents directly enter their age, a participant enters the value 1.9. Should that be recoded as 19 years old, or should the data be deleted?

These data cleaning issues are also related to how survey researchers handle missing data, and there are a number of complex approaches for that (Dale, 2006; Graham, Taylor, Olchowski, & Cumsille, 2006; Rudas, 2005). As a psychologist/survey researcher-in-training, you should err on the side of caution. If you cannot confirm what a participant meant by his or her response, delete it. As you become more savvy at performing data cleaning and missing data analyses, you can alter this conservative approach. Furthermore, if you collect your survey data anonymously, you have no method of contacting individuals to clarify their intended response. We'll discuss more data cleaning issues in Chapter 7.
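To see this conservative approach in practice, the sketch below uses Python's pandas library (with hypothetical variable names and values) to recode out-of-range entries like those above as missing rather than guessing what the respondent meant.

# Minimal sketch: conservative data cleaning, where impossible or out-of-range
# values are set to missing instead of being "corrected."
import numpy as np
import pandas as pd

raw = pd.DataFrame({
    "age":     [19, 22, 107, 1.9, 21],   # two implausible ages
    "major_q": [4, 5, 55, 2, 3],         # one value outside the 1-5 scale
})

cleaned = raw.copy()
# Keep only plausible ages for a college sample; everything else becomes NaN.
cleaned.loc[~cleaned["age"].between(16, 80), "age"] = np.nan
# Keep only legal codes (1-5) for the Likert-type agreement item.
cleaned.loc[~cleaned["major_q"].isin([1, 2, 3, 4, 5]), "major_q"] = np.nan

print(cleaned)
print("Cases with any missing data:", int(cleaned.isna().any(axis=1).sum()))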

Data Analysis Approaches

As alluded to earlier, the possibilities for analyzing survey data are vast, and they depend on many of the same characteristics of other data analysis situations, such as the scale of measurement, the amount of data available, and the hypotheses to be tested. It would not be possible to summarize all of the options here, as entire books are available about the subject (Fink, 1995). Data analytic strategies can become more or less complicated, however. If your goal is to communicate effectively with the public, you might not choose to present the results of a repeated measures ANOVA, but you might present a table of means or a bar graph that clearly and succinctly communicates the story you want to tell. If you are comparing two nominal scale variables, such as gender differences on how respondents answered a categorical survey item ("Are you married?"), then a chi-square analysis would be appropriate. Essentially, you will need the knowledge that you (hopefully) learn from a statistics course to be able to analyze your survey data. This is why some call the Statistics-Research Methods sequence the core of the undergraduate psychology major.
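For instance, the gender-by-marital-status comparison just mentioned could be tested with a chi-square test of independence. A minimal sketch in Python follows, using a hypothetical 2 × 2 table of counts.

# Minimal sketch: chi-square test of independence for two nominal variables
# (gender by "Are you married?"). The counts below are hypothetical.
from scipy.stats import chi2_contingency

#                  married  not married
observed = [[45, 55],   # women
            [60, 40]]   # men

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")
# A p-value below .05 would suggest the proportion married differs by gender
# in this sample.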


Data analyses can range from simple to complex. Table 6.5 is an example of "complex": Roelen, Koopmans, and Groothoff (2008) examined how overall job satisfaction relates to specific aspects of a job. These researchers used survey research as their method of data collection and a multiple regression as part of their data analysis strategy.

Table 6.5: Correlation between overall job satisfaction and specific job aspects

Mean (SD) B (SE) 𝛃

Age (years) 38.1 (10.3) 0.01 (0.00) 0.05

Gender (female = 0, male = 1) −0.11 (0.09) −0.04

Educational level 1.7 (0.7)

Primary education relative to tertiary 0.15 (0.14) 0.06

Secondary education relative to tertiary 0.18 (0.13) 0.07

Physical demands (range 1–7) 4.0 (1.9) 0.02 (0.02) 0.03

Psychological demands (range 1–7) 4.1 (1.7) −0.04 (0.03) −0.05

Job autonomy (range 1–7) 5.4 (1.4) 0.09 (0.04) 0.09*

Decision latitude (range 1–7) 4.8 (1.7) −0.01 (0.03) −0.02

Career perspectives (range 1–7) 4.2 (1.7) 0.12 (0.04) 0.16*

Overall satisfaction (range 1–7) 5.3 (1.3)

Specific satisfaction (range 1–7) with:

Colleagues 5.6 (1.2) 0.15 (0.04) 0.14 **

Work times 5.5 (1.4) −0.02 (0.04) −0.03

Task variety 5.1 (1.5) 0.28 (0.04) 0.31**

Supervisor 4.7 (1.7) 0.06 (0.04) 0.07

Working conditions 4.7 (1.5) 0.11 (0.04) 0.13**

Workload 4.7 (1.4) 0.11 (0.05) 0.12*

Work pace 4.7 (1.5) 0.02 (0.04) 0.02

Salary 4.3 (1.6) −0.05 (0.03) −0.06

Work briefings 4.3 (1.8) −0.01 (0.04) −0.02

Mean (standard deviation, SD) calculated using age, educational level, work-related factors, and job satisfaction. In addition, the table presents the unstandardized regression coefficients B (standard error, SE) and the standardized regression coefficients (𝛃), which indicate the direction (positive or negative) and relative importance of each predictor.

*p < 0.05 and **p < 0.01

Source: Roelen, Koopmans, and Groothoff (2008)
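Analyses like the one summarized in Table 6.5 are produced with statistical software such as SPSS or, as sketched below, Python's statsmodels package. The sketch fits an ordinary least squares multiple regression to simulated data; the variable names merely echo Table 6.5, and the data are randomly generated, not the Roelen et al. (2008) data.

# Minimal sketch: a multiple regression predicting overall job satisfaction
# from several job aspects, using simulated (not real) survey data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(seed=42)
n = 200
df = pd.DataFrame({
    "task_variety":        rng.integers(1, 8, n),   # 1-7 ratings
    "career_perspectives": rng.integers(1, 8, n),
    "salary_satisfaction": rng.integers(1, 8, n),
})
# Build an outcome from two of the predictors plus random noise.
df["overall_satisfaction"] = (
    2.0
    + 0.4 * df["task_variety"]
    + 0.3 * df["career_perspectives"]
    + rng.normal(0, 1, n)
).clip(1, 7)

model = smf.ols(
    "overall_satisfaction ~ task_variety + career_perspectives"
    " + salary_satisfaction",
    data=df,
).fit()
print(model.summary())  # unstandardized B coefficients, SEs, and p-values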


At first glance, Table 6.5 looks complicated, but the more courses you have in statistics, and the more survey research you do, the easier it will be to interpret this type of data. What the researchers found with their multiple regression data analysis approach was that there are six statistically significant predictors of a person's overall job satisfaction (based on the sample studied by Roelen et al., 2008). All of these predictors happen to have positive beta weights, which means the higher the value on the particular scale, the higher the overall job satisfaction. The six significant predictors (starting with the predictor with the highest beta weight) are task variety, career perspectives, colleagues, working conditions, workload, and job autonomy. Note that, contrary to popular belief, salary is not a significant predictor of overall job satisfaction, and it is this type of insight that can make a survey design coupled with an effective data analysis strategy so powerful. As you do more work in psychology, you'll gain experience and confidence in designing surveys as well as analyzing the results. But just how would you go about designing that survey, especially if it were the first "scientific" survey you had ever developed? We'll discuss that in the next section.

6.7 Quick Tips for Survey Item Construction

You determine that closed-ended items are better suited for your research needs, and you are just about ready to start generating your item pool. But before you do that, it might be beneficial to think broadly for a moment about what you are trying to measure—that broad category of human response you are trying to capture. Consider these categories offered by eSurveyPro (2009) and Rattray and Jones (2007): (a) attitudes, beliefs, intentions, goals, and aspirations; (b) knowledge or perceptions of knowledge; (c) cognitions; (d) emotions; (e) behaviors and practices; (f) skills or perceptions of skills; and (g) demographics. Making decisions about which broad category (or categories) you will inquire about has implications for your entire survey. For example, if you ask too many knowledge questions of your respondents, and the items are difficult, respondents may quit your survey early, not providing you with the data you need. Actual skills may be difficult to capture in a survey format, but you may be able to ask respondents about their perceptions of their own skills. Demographics can be tricky as well. Ask for too many demographics, and participants may feel a sense of intrusion. The more demographics asked, the more identifiable a participant is, even if the data are collected anonymously. Ask too few demographics and you may not be able to provide tentative answers to your hypotheses. As you have the opportunity to practice your survey skills over time, you should become more comfortable in being able to assess these broad areas.


Use of demographics in surveys can be problematic if not thoughtfully carried out. Sometimes, however, demographic information is vital to research. How should these surveys be handled?


General advice for constructing survey items comes from many sources. The following list is a compilation of ideas from these sources: Babbie (1973), Cardinal (2002), Converse and Presser (1986), Crawford and Christensen (1995), Edwards and Thomas (1993), eSurveyPro (2009), Fink and Kosecoff (1985), HR-Survey (2008), Jackson (1970), McGreevy (2008), and University of Texas at Austin (2007):

1. Avoid double-barreled items. That is, each question should contain just one thought. A tipoff to this occurring is sometimes the use of the word “and” in a survey item.

Example to avoid: I like cats and dogs.

2. Avoid using double negatives.

Example to avoid: Should the instructor not schedule an exam the same week a paper is due? (Answered from Strongly Disagree to Strongly Agree).

3. Try to avoid using implicit negatives—that is, using words like control, restrict, forbid, ban, outlaw, restrain, or oppose.

Examples to avoid: Handgun use should be banned. All abortions should be outlawed.


4. Consider offering a “no opinion” or “don’t know” option.

5. To measure intensity, consider omitting the middle alternative.

Example: Strongly disagree, disagree, agree, and strongly agree (the middle alternative, "neutral," is omitted).

6. Make sure that each item is meaningful to the individuals being asked to complete the survey. That is, are the respondents competent to provide meaningful responses?

Example to avoid: Xanax is the best prescription medication for clinical depression.

7. Use simple language, standard English as appropriate, and avoid unfamiliar or difficult words. Depending on the sample, aim for an eighth-grade reading level.

Example to avoid: How ingenuous are you when the professor asks if you have understood the material presented during a lecture?

8. Avoid biased questions, words, and phrases.

Example to avoid: Using clickers represents state-of-the-art learning technology. To what extent have clickers enhanced your learning?

9. Check to make sure your own biases are not represented in your survey items, such as through leading questions.

Example to avoid: Do you think gas-guzzling SUVs are healthy for the environment?

10. Do not get more personal than you need to be to adequately address your hypotheses. Focus on “need to know” items and not “nice to know” items (helps control for survey length).

11. Try to be as concrete as possible; items should be clear and free from ambiguity. Avoid using acronyms or abbreviations that are not widely understood.

Example to avoid: The DSM-IV-TR is a more accurate diagnostic tool for PTSD patients than the ICD-10.

12. Start the survey with clear instructions, and make sure the first few questions are non-threatening. Typically, present demographic questions at the end of the survey. If you ask too many demographic items, respondents may be concerned that their responses are not truly anonymous.

13. If the response scales change within a survey, include brief instructions about this so that respondents will be more likely to notice the change.

14. If your survey is long, be sure to put the most important questions first—in a long survey, respondents may become fatigued or bored by the end.


15. Be sure to frame questions to minimize acquiescence response sets. Include some questions that are reverse-scored (that is, strongly disagreeing is a positive outcome).

Example: This course is a waste of time. (A positive answer would be strongly disagree.)
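When reverse-scored items like this are included, their codes need to be flipped before item responses are summed or averaged into a scale score. A minimal sketch of that recoding in Python follows; the item names are hypothetical, and the 5-point coding matches the Likert-type scale described earlier in the chapter.

# Minimal sketch: recoding reverse-scored items on a 5-point Likert-type scale
# (1 = strongly disagree ... 5 = strongly agree). Item names are hypothetical.

responses = {"course_valuable": 4, "course_waste_of_time": 1, "would_recommend": 5}
reverse_keyed = {"course_waste_of_time"}   # strongly disagreeing is positive here

def recode(item, value):
    # On a 1-5 scale, reversing a code is 6 - value: 1 becomes 5, 5 becomes 1.
    return 6 - value if item in reverse_keyed else value

recoded = {item: recode(item, value) for item, value in responses.items()}
scale_score = round(sum(recoded.values()) / len(recoded), 2)

print(recoded)       # {'course_valuable': 4, 'course_waste_of_time': 5, 'would_recommend': 5}
print(scale_score)   # 4.67 (higher means a more positive attitude)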

Case Study: Read All About It: Sampling Matters (and Dewey Defeats Truman)

American political polling has a long history dating back to 1824 (International Directory of Company Histories [IDCH], 2001), but perhaps the most famous blunder involving the sampling of opinions from a population comes from the 1948 election, in which incumbent Harry S. Truman defeated the challenger Thomas Dewey. Although there had been some successes with mail-in polling in predicting presidential election outcomes in the 1930s, for the 1948 election a "perfect storm" of circumstances intersected to produce one of the most famous mistaken newspaper headlines of all time.

At the time, George Gallup was using quota sampling, where pollsters would ask a certain number of individuals from certain categories (e.g., working females, factory workers) their opinions about issues, and in particular, whom they intended to vote for in the upcoming election (Jamison, 2008). After the election (and the famous blunder in which the eventual presidential election winner was declared the loser), a congressional committee chastised Gallup for not using probability sampling, which by definition would give every eligible voter in the country an equal chance to be polled (IDCH, 2001). However, it was not just the misstep of selecting the wrong sampling procedure that led to this famous blunder; other events conspired to make it so. For instance, all the major pollsters (Gallup, Crossley, and Roper) stopped polling weeks before the election because major opinion changes were not expected. The Chicago Tribune, publisher of the most famous newspaper gaffe of all time, over-relied on its Washington correspondent to accurately predict the outcome. Furthermore, to get the first edition to press on time (and due to a printer's strike at the time), the Tribune had to publish that edition well before election returns were known, thus preventing any last-minute changes based on early returns. Gallup also admitted after the election that he was a close friend of Thomas Dewey and that he had been in contact with Dewey throughout the 1948 campaign. All of these events coalesced into one moment where a famous national newspaper got it wrong in the front-page headline on November 3, 1948 (Blackwell, n.d.; IDCH, 2001; Jamison, 2008; Walther, 2009).

Reflection Questions

1. Thinking about the polling process and presidential elections today, what would be the impact of declaring victory too early for the wrong candidate? To some extent, isn't this precisely what happened in 2000 when George W. Bush ran against Al Gore for U.S. president?

2. Digging a bit deeper, would there be a way in which quota sampling could be as efficient as probability sampling? What types of safeguards would need to be put into place to prevent such egregious errors from being drawn from survey results?

3. How does this famous incident in political history relate to the types of surveys and questionnaires that you might be asked to administer in the workplace? What lessons can be extracted from this type of sampling error that you can acknowledge and avoid if survey methodology is part of your job responsibilities someday?



Chapter Summary

Of all the types of research you will be learning about in this course as you prepare your applied project, survey methodology may be the most valuable, because you likely will encounter surveys in the workplace, and you may be in a management position where you are asked to develop a survey or to be a savvy consumer of survey research results for your company or organization. Thus, a basic knowledge of the key aspects of survey sampling, design, scaling, and analysis could prove useful to your future. It is important to be able to distinguish between the characteristics of probability and nonprobability sampling and to know that the difference is often meaningful depending on the types of conclusions you would like to draw from the data. There are a variety of approaches to survey methodology, and the design of a survey project may involve cross-sectional, longitudinal, cohort, or panel survey aspects of research design. Many scaling approaches are available, and although Likert-type scaling is prevalent, knowing the type of research question you want answered can help in the selection of the survey scale best suited for the task. There are numerous details to attend to regarding data analysis from surveys, and key reminders are provided in the chapter, as well as some tips for generating your own survey questions.

Concept Check

1. Probability sampling means that

A. the sample definitely represents the population.
B. the population has multiple identifiable characteristics.
C. all members of the population have an equal chance of being in the sample.
D. the sample was described in sufficient detail for individual identification of members.

2. The non-random equivalent to stratified random sampling is

A. cluster sampling.
B. volunteer sampling.
C. convenience sampling.
D. quota sampling.

3. What can be both an advantage and a drawback of in-person interviews?

A. The intimacy between the interviewer and participant.
B. The ability of the interviewer to ask follow-up questions.
C. The ability of the participant to ask clarification questions.
D. The cost associated with administration.

4. In Loftus’ (1975) experiment No. 4, people were most likely to “recall” a woman pushing the baby carriage if

A. the woman wore an unusual hat.
B. participants were given a false presupposition.
C. participants were asked a direct question.
D. the baby carriage was destroyed in the video.


5. The most famous and popular scale used by psychological researchers is most likely the

A. Likert-type scale.
B. visual analog scale.
C. Guttman scale.
D. dichotomous scale.

Answers

1. C. All members of the population have an equal chance of being in the sample. The answer can be found in Section 6.1.

2. D. Quota sampling. The answer can be found in Section 6.1.

3. A. The intimacy between the interviewer and participant. The answer can be found in Section 6.2.

4. B. Participants were given a false presupposition. The answer can be found in Section 6.4.

5. A. Likert-type scale. The answer can be found in Section 6.5.

Questions for Critical Thinking

1. Why is the survey such a prevalent and frequently used methodology? Does the prevalence of surveys have a negative effect on individuals answering surveys? Think about the number of surveys that you have received in the past two months, including telephone surveys, email surveys, mail surveys, invitations to web surveys, and so forth. How many did you answer (completely)? How might response rate temper one's enthusiasm for the survey approach?

2. Much of the variety of survey approaches relies on stable and emerging technologies. In your workplace, you may have global concerns where survey information from a specific region of the world might be valuable, but the technology infrastructure there is not as reliable as you would hope. What are your other options for gaining information about cultures and locations where technology is not so accessible? What mistakes should be avoided when looking at the application of survey methodologies as described in this chapter to other regions of the world?

3. Every methodological approach in the sciences has limitations—no approach is perfect, nor is any singular application of a methodological approach performed perfectly. What types of information are surveys good at extracting, and what types of information should be left to other types of research designs? Why?

Key Terms to Remember

cluster sampling The sampling practice of “clustering” groups of a population instead of evaluating each individual person to gain information when it is impossible or impractical to compile an exhaustive list of members composing the target population.

cohort study A study design in which new samples of individuals are followed over time.

convenience samples The sampling practice, often used in exploratory research, in which a quick and inexpensive method is used to gather data from participants who are conveniently available for the purposes of data collection.

coverage The issue of who has Internet access and who does not, which creates a barrier to obtaining information through Internet surveys.


coverage error An error arising from the methodology used, including access to the Internet, the use of landlines, and similar factors that determine who can be reached.

cross-sectional survey design A study design where data collection occurs at a single point in time with the population of interest.

data analysis The process of interpreting data through statistical analysis into meaningful and accurate conclusions.

data cleaning A method of reviewing data to ensure that it has been handled and entered accurately.

demographics Variables used to identify the traits of a study population.

dichotomous scale A scale in which there are only two possible responses, e.g., yes/no, male/female, true/false.

Guttman scale A survey response scale that generates a set of items that increase in difficulty. If a participant agrees with one scale item, it is assumed that they agree with the preceding scale items.

in-person interviews A research methodology that allows an interviewer and a participant to build rapport through conversation and eye contact, which might allow for deeper questions to be asked about the topic of interest. This presents fewer limitations about the types and length of survey items to be asked.

Likert scale A 5-point survey response scale measuring from one pole of disagreement to the other pole of agreement, with each of the scale points having a specific verbal description.

longitudinal survey A study design where data collection occurs at several points over an extended period of time.

measurement error An error that can occur due to a number of reasons, typically including measurement variation and measurement bias.

mixed-mode approach A study design where multiple research modalities are accessed to achieve the research goals.

multistage sampling The two-stage sampling practice involving the formation of clusters as a primary selection, then sampling members from the selected clusters to produce a final sample.

nonprobability sampling The sampling practice where the probability of each participant being selected for a study is unknown and sampling error cannot be estimated. See convenience sampling, quota sampling, snowball sampling, and volunteer sample.

non-response error An error of particular concern when the response rate is 25% or less, raising the question of whether those who responded differ systematically from those who did not.

panel study A study design in which the same people are studied over time, spanning at least two points in time.

probability sampling The sampling practice where the probability of each participant being selected for a study is known and sampling error can be estimated. See simple random sampling, systematic sampling, stratified sampling, cluster sampling, and multistage sampling.

quota sampling The sampling practice where a researcher identifies a target population of interest and then recruits individuals (non-randomly) of that population to participate in a study.

representative The assumption that a sample will resemble all qualities of the general population to ensure that results of a sample can be applied to the whole general population.


representativeness A challenge in Internet surveys regarding whether or not results obtained from this method are representative of the entire population.

sampling error An error occurring when all the potential participants from a population may not be represented in a sample.

scale A tool used to measure a person's attitudes, perceptions, behaviors, etc., chosen to best represent a study.

semantic differential scale A survey response scale used to measure affect and emotion using dichotomous pairs of words and phrases that a participant evaluates on a scale of 1 to 7.

simple random sample The purest form of sampling, and probably one of the rarest techniques used, in which everybody in the survey population has the same probability of being selected.

snowball sample The sampling practice where members of the target population of interest are asked to recruit other members of the same population to participate in the study.

stratified sampling The practice of dividing a sample into subcategories (strata) in a way that identifies existing subgroups (such as gender) in a general population to make a sample the same proportion as displayed in a population.

systematic random sample The sampling practice in which every nth person from a sample is selected.

Thurstone scale A survey response scale developed to measure attitude by creating a response scale of equally appearing intervals by having participants make a series of comparative judgments.

visual analog scale A survey response scale used to obtain a score along a continuum, where a participant places a checkmark to indicate where his or her attitude or opinion falls along the scale.

volunteer sample The common sampling practice where volunteers are asked to participate in a survey.

volunteerism A potential source of error occurring when a study relies on self-selected volunteers, who may differ from non-volunteers.

Web Resources

Calculators for determining confidence intervals, sample sizes, correlations, and other research tool aids. http://www.surveysystem.com/sscalc.htm

An online survey glossary that defines important research terms relevant to survey development and administration. http://knowledge-base.supersurvey.com/glossary.htm

A writing guide for survey research that assists researchers in areas such as survey development, administration, and the process of reporting results. http://writing.colostate.edu/guides/research/survey/

Resource for researchers to understand how to make proper choices in data analysis. It also contains examples of excerpts from other texts and resources. http://www.ats.ucla.edu/stat/stata/topics/Survey.htm

Examples of best ways to construct items for survey responses so that researchers get all of the information they need for the purposes of their study. http://www.hr-survey.com/ItemConstruction.htm


