Sheri Bauman, Ph.D., is an assistant professor in
the Department of Educational Psychology
at the University of Arizona, Tucson.
This article describes comparison group research designs and discusses how such designs can be used in school counseling research to demonstrate the effectiveness of school counselors and school counseling interventions. The article includes a review of internal and external validity constructs as they relate to this approach to research. Examples of relevant research using this design are presented.
The lack of a sound research base in the field of school counseling has been lamented for many years (Allen, 1992; Bauman, 2004; Cramer, Herr, Morris, & Frantz, 1970; Lee & Workman, 1992; Loesch, 1988; Whiston & Sexton, 1998; Wilson, 1985). The recent emphasis on research in No Child Left Behind legislation (2002) and the ASCA National Model® (American School Counselor Association, 2005) has moved the need for rigorous empirical research to the forefront. The ASCA National Model stresses that school counseling programs include learning objectives that are based on measurable student outcomes and that are data-driven and accountable for student outcomes. The focus on data and measurement makes clear that school counselors can no longer avoid conducting research and using empirical research to make decisions.
The nature and goals of such research are the subject of a recent debate. Brown and Trusty (2005) have contended that research should focus on demonstrating that well-designed and appropriate interventions used by school counselors are effective, and they further argued that research investigating whether comprehensive school counseling programs increase student academic achievement is not productive given the presence of numerous confounding influences. Sink (2005) disagreed, noting that school counselors are expected to contribute to the total educational effort to raise academic achievement. He advised that research to examine how school counselors influence achievement can be conducted using carefully selected methodologies, and while not definitively establishing causality, such research can provide strong evidence of the impact of comprehensive school counseling programs on student achievement.
In their review of school counseling outcome research from 1988 to 1995, Whiston and Sexton (1998) found that of the 50 published studies they located, most provided only descriptive data, used convenience samples, lacked control or comparison groups, used outcome measures of questionable reliability and validity, and did not monitor adherence to intervention protocol. Such studies do little to add to the knowledge base of the profession, and they do not meet established standards for scientific rigor.
In an era when resources for education are limited and "accountability" has become a watchword, counselors must demonstrate how they contribute to the academic success of students. Heartfelt letters of appreciation and positive comments by constituents, while sincere, will not convince stakeholders and holders of purse strings of the value of the profession. School counselors, occupied by providing services in schools, often neglect to demonstrate their importance until their positions are considered for reduction. This reactive approach is less likely to sway opinion than ongoing proactive efforts to use research effectively. Collecting, analyzing, and disseminating data that provide evidence of counselors' effectiveness are consistent with the professional goals and models that define the profession.
Under No Child Left Behind, school counselors (along with other education professionals) are called upon to demonstrate their effectiveness using quantitative data such as evidence of academic achievement, attendance and graduation rates, and measures of school safety (McGannon, Carey, & Dimmitt, 2005). No Child Left Behind and the ASCA National Model both emphasize the importance of scientific, rigorous, well-designed research as an essential component of modern school counseling programs. These guidelines indicate that conducting research is no longer a peripheral activity that a few counselors might attempt but is a central part of the role of all school counselors. The ASCA
9:5 JUNE 2006 | ASCA 357
National Model says the following about data:
Data analysis: Counselors analyze student achievement and counseling-program-related data to evaluate the counseling program, conduct research on activity outcomes and discover gaps that exist between different groups of students that need to be addressed. Data analysis also aids in the continued development and updating of the school counseling program and resources. School counselors share data and their interpretation with staff and administration to ensure each student has the opportunity to receive an optimal education. (ASCA, 2005, p. 44)
Although most school counselors have had a graduate course in research methods (and perhaps statistics), these introductory courses typically are designed to prepare students to be critical readers of research. Even among those few students who conduct research in their graduate training programs, it is the rare school counselor who continues to do research once he or she is a practicing school counselor. Responsibility for the absence of research from school counselors' job description lies not only with the counselors, but also with the administrators and district officials who do not require or value research. No Child Left Behind has raised the awareness of educators in all fields that accountability is expected; data are the foundation for educational decisions, including decisions about counselors. In this climate, there is greater support (some might say pressure) for research.
There are a number of different kinds of research, and a description of all relevant types is beyond the scope of this article. The purpose of this article is to provide a rationale for using control and comparison group designs in school counseling research. I begin by defining some basic terminology and reviewing the concept of validity, which is fundamental to all research. I then briefly discuss single-group pre-post research designs, which often are used in schools because they are relatively easy to conduct. The main focus of the article is comparison group designs, and these will be described in more detail. Finally, I provide a discussion of relevant research using comparison group designs as examples of this research strategy.
Several technical terms are used in this discussion of research, and it is important that the reader be clear about their meaning. Researchers study variables that can assume different values. The independent variable is the intervention variable, or the variable manipulated by the researcher. The dependent variable is the outcome variable, the effect. In a study of the effect of participation in extracurricular activities on graduation rates, the independent variable is the participation (which could be defined as the number of activities, the number of hours per week of involvement, or a yes/no category) and the graduation rate is the dependent variable. Researchers also may refer to moderator variables, which are variables that influence the relationship between the independent and dependent variables. Parental education might moderate the relationship between extracurricular participation and graduation rates, and it then would be a moderator variable. Statistical significance means that the obtained results are unlikely to have occurred by chance. If results are statistically significant at p < .05, results this extreme would be obtained by chance in fewer than 5 out of every 100 cases. The counselor/researcher should keep in mind that with large samples, results might be statistically significant but not practically significant.
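The logic of a p value can be made concrete with a small simulation. The sketch below (in Python, with invented attendance-gain scores for two hypothetical groups) uses a permutation test: if the intervention had no effect, the group labels are arbitrary, so shuffling the labels many times shows how often a difference as large as the observed one arises by chance alone. The data and the permutation approach are illustrative assumptions, not part of any study described here.

```python
import random
import statistics

# Invented attendance gains (days improved) for two hypothetical groups.
program = [4, 6, 3, 7, 5, 6, 4, 5]   # received the new orientation program
usual = [2, 3, 1, 4, 2, 3, 2, 3]     # did not receive it

observed = statistics.mean(program) - statistics.mean(usual)

# Permutation test: shuffle the group labels many times and count how often
# a difference at least as large as the observed one occurs by chance.
random.seed(1)
pooled = program + usual
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:len(program)]) - statistics.mean(pooled[len(program):])
    if diff >= observed:
        extreme += 1

p_value = extreme / trials
print(f"observed difference = {observed:.2f}, p = {p_value:.4f}")
```

A p value below .05 would be reported as statistically significant; with very large samples, though, even a trivially small observed difference can clear this bar, which is why practical significance also matters.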
Let us imagine that a new program for elementary math skills were implemented in several schools in a large district. At the end of a school year, the difference between the achievement scores of those who used the program and those who continued the usual math program was statistically significant. One might conclude that the new approach is better. But what if the difference in scores were only .10 (grade equivalent)? Depending on the cost of the program, one might conclude that although the difference is statistically significant (p < .05), in practice the difference or gain is not substantial enough to justify a large expenditure on the new program. There are ways to describe the practical significance of the findings through the use of effect sizes; these are discussed in Sink and Stroh's article in this issue ("Practical Significance: The Use of Effect Sizes in School Counseling Research").
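Practical significance is often summarized with an effect size such as Cohen's d, which expresses a mean difference in standard deviation units. A rough sketch with invented scores (the values are chosen only to mirror the .10 grade-equivalent example above):

```python
import statistics

# Invented grade-equivalent math scores for two hypothetical groups of students.
new_program = [3.2, 4.5, 2.9, 4.8, 3.6, 3.1, 4.2, 3.3]
usual_program = [3.1, 4.4, 2.8, 4.7, 3.5, 3.0, 4.1, 3.2]

mean_diff = statistics.mean(new_program) - statistics.mean(usual_program)

# Cohen's d: the mean difference divided by the pooled standard deviation.
# By convention, d near 0.2 is "small," 0.5 "medium," and 0.8 "large."
n1, n2 = len(new_program), len(usual_program)
pooled_var = ((n1 - 1) * statistics.variance(new_program)
              + (n2 - 1) * statistics.variance(usual_program)) / (n1 + n2 - 2)
d = mean_diff / pooled_var ** 0.5

print(f"difference = {mean_diff:.2f} grade equivalents, Cohen's d = {d:.2f}")
```

With these invented numbers the .10 grade-equivalent difference amounts to a small effect, which is the kind of evidence that supports a decision not to make a large expenditure despite statistical significance.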
Regardless of the design or method of research, school counselors must be concerned with the validity of the research they conduct or read. In general, validity refers to the degree of confidence we can have in the findings of a research study. If a study does not demonstrate adequate validity, the results are of questionable application and should not be the basis for decisions. Internal validity refers to whether the observed change in the dependent (outcome) variable is due to the independent variable and only the independent variable. For example, if we are interested in whether student attendance (our dependent variable) improved for high school freshmen when a new orientation program was conducted by the school counselors (our independent variable; the orientation program), we want to be sure that no other variables could explain the obtained results. If, in addition to the new orientation program for freshmen, the school employed additional truant officers, we could not be sure that the change in attendance was due only to the new orientation program and not to the truant officers' activities. The internal validity of the study would be compromised.

358 ASCA | PROFESSIONAL SCHOOL COUNSELING
External validity refers to the degree to which the results of one study generalize to (apply to) other people in other places or times. School counselors reading the results of research in a journal want to know whether they can reasonably expect that the reported results would apply in their own setting with their own students. Researchers hope that their results will be useful to others in other locations and times. If the new orientation program improved attendance for students in one school or district, the issue of external validity asks whether other schools are likely to achieve the same results with the same program. The threats to external validity are related to the population from which the sample was selected (was it representative, did it include members of all groups of interest?) and the context in which the study was conducted (was it in a laboratory or a school, did participants know they were involved in an experiment, did the researcher convey the hoped-for outcomes?). These two types of external validity often are referred to as population validity and ecological validity. A study conducted at a private school with European American upper-class students is of questionable validity for an inner-city school with a large percentage of minority students.
Another factor in external validity is the nature of the research itself. If the students in the experimental group were aware they were receiving a special program different from that of the control group, their efforts may have been changed by that knowledge. In addition, the researcher must incorporate a way to ensure that interventions delivered in a naturalistic school setting are faithful to the protocol of the experiment. If the intervention is a series of lessons, for example, the researcher must be sure that the lessons are delivered as described in the manual. If each teacher or counselor makes changes in the program, external validity is compromised by the absence of treatment (intervention) fidelity. Researchers can increase the external validity of their work by attending carefully to sample selection and to the conduct of the experiment.
Campbell and Stanley (1963) described important threats to internal validity. These are conditions that provide possible alternative explanations for obtained results, or ways that events or conditions other than the independent variable may explain observed changes in the dependent variable. The following is a brief review of those threats.
History
In this context, history refers to any event not planned or part of the research that occurs during the research. In the example above regarding the new orientation program, let's imagine that the principal decides to visit each freshman English class during the first week of school. Although not part of the research, this event (history) might be an alternative explanation for the difference in attendance rates. History is the greatest threat to internal validity when it affects only one group of research participants. If your research design used a comparison group (last year's freshmen) who had not experienced the historical event, your internal validity would be reduced. However, if you were studying whether the attendance of males vs. females increased when the new orientation program was implemented, and both males and females experienced the visits by the principal, internal validity would not be affected. When research is being conducted in schools, there are often events that occur outside the counselors' control, and the counselor must be alert to these competing explanations for results.
Maturation
Human beings change and develop over time. This means that some changes will occur independently of any intervention. For example, a middle school counselor might provide a series of guidance lessons on conflict resolution to seventh graders. If the counselor were to measure student attitude toward fighting, or the number of fights before and after the lessons, results might show a decrease in conflict after the lessons. However, maturation might be an alternative explanation for the results; students may be exhibiting less physical conflict because they are developing cognitively and socially, not because of the lessons. In the discussion of comparison group designs later in this article, I suggest designs that minimize the influence of this threat to internal validity.
Testing
Researchers may want to give participants a pretest to determine the base rate of whatever behavior or attitude is of interest. If a counselor were going to do a series of guidance activities to reduce racial/ethnic stereotyping in a school, he or she may want to get a measure of the degree of stereotyping that students do at the start of the project. However, the pretest may sensitize participants to the issue of stereotyping, and that may influence their scores on the posttest. This is called the testing effect.
Instrumentation
A counselor who is leading an anti-bullying program at her school wants to measure the effect of the program on student bullying behavior. She knows that much bullying occurs on the playground, and she uses a behavioral observation method to determine the frequency of playground bullying before the program begins and after the program has been in place for a semester. The behavioral observation method requires several observers, and it may be that some observers are more alert than others. Or, the observers may become more adept with practice. If the observers are not the same at both measurement points, instrumentation is a threat to internal validity. The changes may not be a result of the children's behavior, but of the observers' skill.
Regression to the Mean
When a counselor is interested in extreme groups (students high or low in a particular characteristic), a pretest-posttest design is vulnerable to this threat. We know that on subsequent testing, both high and low scores tend to move closer to the mean (average score). So observed changes may be due to this tendency rather than any real change in the characteristic being measured.
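This tendency is easy to demonstrate by simulation. In the sketch below (all data invented), scores are generated as stable ability plus random day-to-day noise, and no intervention occurs at all; the lowest-scoring group on the pretest nevertheless scores noticeably higher on the posttest:

```python
import random
import statistics

random.seed(42)

# Simulate scores as stable ability plus random day-to-day noise; nothing
# happens between the two testings, so any "change" is artifactual.
ability = [random.gauss(100, 10) for _ in range(2000)]
pretest = [a + random.gauss(0, 10) for a in ability]
posttest = [a + random.gauss(0, 10) for a in ability]

# Select the extreme group: the bottom 10% of pretest scorers.
cutoff = sorted(pretest)[len(pretest) // 10]
low = [i for i, score in enumerate(pretest) if score <= cutoff]

pre_mean = statistics.mean(pretest[i] for i in low)
post_mean = statistics.mean(posttest[i] for i in low)
print(f"low group: pretest mean = {pre_mean:.1f}, posttest mean = {post_mean:.1f}")
```

The low-scoring group "improves" with no intervention whatsoever; a counselor who selects students into a program on the basis of extreme scores should expect some of this movement toward the mean.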
Selection
In some research, counselors are studying more than one group of students (e.g., classes, genders). If you are trying a new program with one class and using another class as a comparison group, the groups might differ on some other factors (e.g., reading level, intelligence) that can affect the results.
Mortality
This threat to internal validity refers to loss of participants during the course of the study. In a comparison group design, this becomes a problem when mortality is greater in one group than in another. For example, in a study where the comparison group is another school, asbestos might be discovered in one of the schools and many students transfer out of that school. That group would have greater mortality than the other group.
Selection Interaction
It is possible that one of the other threats to internal validity combines with selection. This means that one of the comparison groups is affected by those threats (e.g., history, maturation) differently than other groups in the study. For example, an elementary school counselor implements a new program to teach empathy skills to fifth-grade classes. Lessons are given throughout the school year, and a nearby school serves as a control group. On an outcome measure, the counselor finds that at the end of the year, girls show more improvement in empathy than boys do. It might be that those findings arise because girls tend to develop these skills naturally at this age, while boys develop them later. The findings may reflect a selection (gender) by maturation (girls developing faster than boys) interaction.
School counselors doing research must be alert to potential threats to internal validity. While it is impossible to avoid all such threats, especially when research is conducted with students in schools (rather than in a laboratory), the researcher must acknowledge them if results are to be meaningful. In some cases, there are statistical methods to control for the influence of these threats.
One of the advantages of publishing the results of research is that school counselors do not have to reinvent the wheel. That is, we read about research in the hope that findings will generalize to other students, settings, and times. Generally, results should be replicated in other contexts so that it is not just a single study but a body of research that establishes the generalizability of findings. Let us assume that the original study of the orientation program was conducted in a large, urban high school. Will the same program have the same outcome in a small, rural high school with a different racial/ethnic composition?
SINGLE-GROUP PRETEST-POSTTEST DESIGN
School counselors have the advantage of conducting research in settings with real students. Some programs of interest to counselors cannot be effectively studied in a laboratory; if they could be, we would question the external validity of the findings. Conducting research in a school also has disadvantages, not the least of which is the inability to control many factors in the research process. For example, researchers may not be able to randomly assign students to classrooms, and they may have to contend with numerous historical events that occur during a research study. Nevertheless, the findings are clearly relevant and applicable to the school of interest.
One research strategy that is relatively uncomplicated to carry out is a single-group pretest-posttest design. In this design, the school counselor implements a program (a series of guidance lessons or counseling groups to address a particular topic). Prior to starting the program, the students take a pretest so their baseline levels can be determined. The program is delivered, and then the students take a posttest. The improvement in scores from pretest to posttest is used to measure the impact of the program. At first glance, that seems to be a logical approach. One advantage of the pretest-posttest design is that one does not have to include a control group, and the pretest information allows the counselor to determine differential effects (e.g., the lessons increased tolerance in boys more than in girls).
However, this design is particularly vulnerable to the threats to internal validity described above. How can the counselor demonstrate that it was the program that caused the change in scores, and not maturation, history, testing, or regression to the mean? For example, let us imagine that the lessons were developed to increase tolerance toward physically handicapped students. During the time the weekly lessons were being presented, there was a television special on the topic that many students watched. Or perhaps there were classroom disruptions during the time the lessons were presented. Were the observed changes the result of the TV program, the disruptions, or the lessons?
COMPARISON OR CONTROL GROUP DESIGN
A more rigorous design that avoids many of the threats to internal validity inherent in pretest-posttest designs is the control group or comparison group design. A control group is a group of participants who get no intervention; if a group gets a different intervention, then we call it a comparison group. History and maturation will affect both the experimental and the comparison groups, so any differences in the outcome variable cannot be attributed to those threats. Testing and regression to the mean also are going to influence both groups, so observed differences can be attributed to the intervention rather than these alternative explanations. Of the 50 school counseling outcome studies published between 1988 and 1995, only 26% used this design (Whiston & Sexton, 1998). The authors of the review concluded that more research of this kind is needed, and they recommended the wait-list control group strategy used often in other counseling research. In the school setting, this means that classes, schools, or students who do not receive the intervention (program, activity) during the research period will receive it at a later time (the following semester, year, etc.).
There are several ways in which comparison groups can be created. The first is random assignment. That means that all eligible participants are randomly assigned to one of the experimental conditions (intervention, comparison group, control group). When this is not possible, the researcher can use preexisting groups (e.g., already formed or intact classes) that are matched on key variables, such as reading level or socioeconomic status. An investigation of the impact of a new "transition to kindergarten program" might use current students in kindergarten as the experimental group and students from a previous year (now first graders) at the same school as the comparison group. The assumption in this case is that previous students resemble current students on the relevant characteristics.
A final method would be to use pretest scores to ensure that the groups are matched on key variables prior to the introduction of the intervention. After creating matched groups, the researcher then can randomly assign the groups to the intervention conditions. If the intervention might have a differential effect based on levels of test anxiety, the researcher can administer a pretest of test anxiety; create groups of high-, average-, and low-anxiety students; and create two groups with equal representation from the different levels of anxiety. Once the groups are created, a random procedure can be used to assign one group to receive the intervention (e.g., instruction in progressive relaxation) and the other group to serve as the control group.
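One way to carry out this matching-then-randomizing procedure is sketched below (student IDs and anxiety levels are invented for illustration). Within each anxiety level, students are shuffled and split evenly between two groups, and a final random draw decides which matched group receives the relaxation instruction:

```python
import random

random.seed(7)

# Invented roster: (student ID, test-anxiety level from a hypothetical pretest).
students = [
    ("S01", "high"), ("S02", "high"), ("S03", "high"), ("S04", "high"),
    ("S05", "average"), ("S06", "average"), ("S07", "average"), ("S08", "average"),
    ("S09", "low"), ("S10", "low"), ("S11", "low"), ("S12", "low"),
]

# Within each anxiety level, shuffle and split evenly between two groups,
# so both groups have equal representation from every level.
groups = {"A": [], "B": []}
for level in ("high", "average", "low"):
    stratum = [sid for sid, lvl in students if lvl == level]
    random.shuffle(stratum)
    half = len(stratum) // 2
    groups["A"].extend(stratum[:half])
    groups["B"].extend(stratum[half:])

# A final random draw decides which matched group gets the intervention.
treated = random.choice(["A", "B"])
print(f"group {treated} receives progressive relaxation instruction:", sorted(groups[treated]))
```

Because the split happens within each anxiety level, both groups end up with the same number of high-, average-, and low-anxiety students before the coin flip assigns the intervention.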
Random Assignment
The most rigorous comparison group design utilizes random assignment to condition (experimental group or control/comparison group). Random assignment means that every participant has an equal chance of being in the experimental condition. Most statistical software packages include features that allow the researcher to randomize assignment in a scientific manner. There are also sites on the Internet that a counselor might be able to locate and use if such software is not readily available. In most educational settings, it is usually not possible to randomly assign students to one or another class or program. However, random assignment can be accomplished by using classes or schools as sampling units. For example, in a study evaluating the effects of a new drug prevention curriculum for middle school students, if there is more than one school interested in participating, the schools can be randomly divided into two groups (using any of a number of randomization procedures), with one group designated as the experimental group (the schools receiving the curriculum) and the other as the comparison or control group (which will not receive the curriculum at this time). If only one school is going to participate, the same procedure can be applied to classrooms.
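Randomly dividing schools (or classrooms) into two groups can be done with any randomization tool; as one illustration, the sketch below shuffles a list of hypothetical schools (names invented) and splits it in half:

```python
import random

random.seed(2006)

# Hypothetical participating middle schools (names invented).
schools = ["Adams MS", "Brook MS", "Cedar MS", "Dover MS", "Elm MS", "Frost MS"]

# Shuffle the list, then split it in half: the first half receives the drug
# prevention curriculum now; the second half serves as a wait-list comparison.
random.shuffle(schools)
half = len(schools) // 2
experimental = schools[:half]
comparison = schools[half:]

print("experimental (curriculum now): ", experimental)
print("comparison (curriculum later): ", comparison)
```

Assigning whole schools or classes this way (cluster randomization) preserves the logic of random assignment even when individual students cannot be reassigned.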
In some cases, it may not be possible to randomly assign classrooms to the intervention or non-intervention groups. If there are two schools or two classrooms that are potential participants, and only one is interested in testing the curriculum, the other can become the comparison group. The problem with this method is that the two groups (schools, classes) may be different prior to the curriculum implementation in ways that affect the outcome (e.g., intelligence, reading level). If, however, the researcher is able to administer the pretests and posttests to both groups (or obtain data on both groups), these differences can be identified and controlled for statistically. That means that the analyses can determine whether the obtained differences would exist over and above the influence of these potentially influential variables. If reading level were a possible confounding variable, the researcher can use a statistical analysis called analysis of covariance, in which reading level is designated the covariate. The results of this analysis will reveal whether observed differences in the dependent variable (the outcome) are significant when differences in reading level have been taken into account. The researcher can include more than one covariate if several attributes are potential competing explanations for results.
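The covariate adjustment at the heart of analysis of covariance can be illustrated with a hand calculation (all numbers invented). Each group's outcome mean is adjusted to what it would be if that group stood at the overall mean reading level, using a slope pooled across the two groups; a full ANCOVA would also supply the significance test, which this sketch omits:

```python
import statistics

# Invented data: posttest scores (outcome) and reading levels (covariate)
# for an intervention class and a comparison class.
treat_read = [3.0, 3.4, 3.8, 4.2, 4.6]
treat_post = [72, 75, 78, 81, 84]
comp_read = [2.4, 2.8, 3.2, 3.6, 4.0]
comp_post = [68, 71, 73, 76, 79]

def sxy(x, y):
    """Sum of cross-products of deviations from the means."""
    mx, my = statistics.mean(x), statistics.mean(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y))

# Pooled within-group slope of outcome on covariate (ANCOVA assumes the
# two groups share a single slope).
b_within = (sxy(treat_read, treat_post) + sxy(comp_read, comp_post)) / (
    sxy(treat_read, treat_read) + sxy(comp_read, comp_read))

# Adjust each group mean to the overall (grand) mean reading level.
grand_read = statistics.mean(treat_read + comp_read)
adj_treat = statistics.mean(treat_post) - b_within * (statistics.mean(treat_read) - grand_read)
adj_comp = statistics.mean(comp_post) - b_within * (statistics.mean(comp_read) - grand_read)

print(f"raw difference:      {statistics.mean(treat_post) - statistics.mean(comp_post):.2f}")
print(f"adjusted difference: {adj_treat - adj_comp:.2f}")
```

With these invented numbers, most of the raw advantage of the intervention class disappears once reading level is taken into account, which is exactly the kind of alternative explanation the covariance analysis is designed to expose.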
Measurement Concerns
How are variables measured? This is a basic question that researchers must address when designing a study. For the results to be valid, the measures must be reliable and valid. Reliability refers to the consistency of the scores, and validity relates to whether the measure assesses what it purports to measure and for whom it does so. Researchers need to use considerable care in the selection of the instruments to be used. While researcher-designed questionnaires may be used, it is essential to establish the reliability and validity of such measures. Published measures generally will report such data so that the researcher can make informed decisions. If other methods of assessment are used (such as observation), those also must be evaluated prior to use, and they must meet the same standards of psychometric adequacy as paper-and-pencil measures. Some studies in the school counseling field have used self-reported student grades as outcome variables. A more precise measure would be to use actual recorded grades from student records. To take this a step further, grades may be influenced by the grading practices and standards of different teachers; achievement test scores might be a more valid measure to use as a dependent variable.
Data Analysis
The word statistics invokes fear and anxiety in many for whom research is not a frequent activity. Counselors need to know that in the age of computers, the task of analyzing data is far less daunting. Even without the specialized statistical programs used by most researchers, school counselors can utilize the statistical features of Microsoft Excel and EZAnalyze (Poynton, 2005), an add-in for Excel. Using these tools, the school counselor can easily obtain descriptive data about the sample (including means and standard deviations) and can disaggregate data by group (e.g., by gender or ethnicity). In addition, the school counselor can perform several analyses, including correlations (to assess the strength of relationships between two variables such as math and reading test scores), t tests (to test the significance of differences between two groups or between pretests and posttests for the same group), and analyses of variance (to test differences among more than two groups). The counselor also can obtain tables and graphs directly from the program, allowing for visual presentation of results.
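The same basic analyses can be sketched in a few lines of code. The example below (Python standard library used here as a stand-in for Excel or EZAnalyze; all scores invented) computes means and standard deviations, a correlation, and a paired t statistic for pretest-posttest scores from one group:

```python
import math
import statistics

# Invented pretest and posttest scores for one group of ten students.
pretest = [61, 55, 70, 58, 64, 67, 52, 60, 63, 59]
posttest = [66, 58, 74, 63, 67, 73, 55, 64, 70, 62]

# Descriptive statistics.
print("pretest:  mean =", statistics.mean(pretest), "sd =", round(statistics.stdev(pretest), 2))
print("posttest: mean =", statistics.mean(posttest), "sd =", round(statistics.stdev(posttest), 2))

# Pearson correlation between pretest and posttest scores.
mx, my = statistics.mean(pretest), statistics.mean(posttest)
r = sum((a - mx) * (b - my) for a, b in zip(pretest, posttest)) / math.sqrt(
    sum((a - mx) ** 2 for a in pretest) * sum((b - my) ** 2 for b in posttest))
print("r =", round(r, 3))

# Paired t test: mean pre-to-post gain divided by its standard error;
# compare |t| to a critical value from a t table with df = n - 1.
gains = [post - pre for pre, post in zip(pretest, posttest)]
t = statistics.mean(gains) / (statistics.stdev(gains) / math.sqrt(len(gains)))
print("paired t =", round(t, 2), "df =", len(gains) - 1)
```

The point is not that counselors must program; it is that every analysis named above reduces to arithmetic a spreadsheet can do, so the statistics need not be a barrier to conducting research.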
For more complex analyses, many school districts have research departments that can help. And many counselor educators at universities are eager to assist, and can do so even when located at a distance, using e-mail to receive data. Consulting with university researchers is a good idea throughout a research project when school counselors are novice researchers, but it can be especially important during the data analysis and interpretation step. A very useful review of the various analysis options for comparison group designs can be found in Gliner, Morgan, and Harmon (2003).
Reporting Results
Whether the purpose of the research is program improvement or compliance with mandatory reporting regulations, it is important to present the results clearly and accurately. Much of the audience will be unfamiliar with the terms used, so definitions are essential. It makes sense to begin by stating the research question at the outset, and then describing how you went about answering that question. For example, if the question was, "What is the effect of the new study skills group on student achievement?" you would begin by describing the study skills group and defining student achievement (e.g., overall grade point average, scores on achievement tests). Then the research design is described. For example, researchers often will write something like, "A pretest-posttest comparison group design was used, with last year's students serving as the comparison group [or another school with comparable demographic variables]." The next step is to present results clearly, using tables and graphs when they enhance the presentation. Finally, you want to provide the answer to your question and discuss the implications of your findings. Any limitations of your research should be acknowledged.
School counselors need to disseminate their research to advance the profession. Presentations to stakeholders and administration are one method of doing so. A brief summary report of findings to the administration or school board, or a more detailed presentation, can educate these important groups about the interventions that counselors are using and will demonstrate to these stakeholders that programs have been scientifically evaluated for effectiveness. Writing articles for state and national journals in which you present a report of your research is an important contribution to the field. Knowledge builds on previous knowledge, and if research is not published, others will not have the benefit of the findings.
To illustrate the control/comparison group approach to research, several studies have been selected for review. These demonstrate the advantages and pitfalls of this strategy. In the interest of space, the details of statistical analyses will not be given below; interested readers may refer to the original journal articles for that information.
Random Assignment-RIPP
A school-based violence prevention program, Responding in Peaceful and Positive Ways (RIPP), was studied in three public middle schools in a Southern city (see Farrell, Meyer, & White, 2001). All regular education sixth-grade classes were potential participants. Thirteen classes, including 305 students, were randomly assigned to the experimental group (receiving the program), and 14 classes (321 students) to the control group. The schools also implemented a school-wide peer mediation program, which was available to all students in the schools. The vast majority of participants were African American.
The RIPP program consisted of 25 sessions (50 minutes each) taught by trained prevention specialists who were African American men. The lessons were presented weekly during social studies or health education classes over one school year. The presenters followed a manual to increase treatment fidelity, and researchers observed the implementation as an additional fidelity check.
In addition to pretest data obtained in October and posttest data collected in May of that school year, follow-up data were gathered 6 and 12 months after the completion of the program. These measures were administered by research assistants who were "blind" to (did not know) the condition (intervention or control group) of the classes. Of the 626 students who began the study, complete pretest and posttest data were obtained from 474. Four hundred ten students were available at the 6-month follow-up, and 359 at the 12-month follow-up.
A variety of measures was used to assess the variables of interest: frequency scales for problem behaviors, violent behaviors, and drug use; a RIPP multiple-choice knowledge test; a problem situation inventory; and two scales assessing relevant attitudes (beliefs supporting aggression, attitudes toward conflict). Reliability and validity data were given for all measures, and all were acceptable. Demographic data also were available, including disciplinary code violations.
Analyses included a comparison of experimental and control groups on the demographic variables of gender, age, and ethnicity. No differences were detected between the groups. Because some students did not complete the entire program, the researchers examined the effect of attrition on the two groups. Their analyses determined that attrition affected the experimental and control groups in a similar way. Analyses also investigated differences on the pretests between the experimental and control groups. There were no differences between the two groups on disciplinary referrals, although differences were detected by age and gender (the older the student, the higher the rate of disciplinary referrals; boys also were more likely than girls to have violations). No differences were found on specific violent behaviors. In fact, the only difference found at pretest was a higher incidence of positive attitudes toward nonviolence in the control group.
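A baseline-equivalence check of the kind described above can be sketched minimally as follows. This is an illustration in Python with invented scores, not the analysis the RIPP researchers ran.

```python
# Sketch of a baseline-equivalence check: a pooled two-sample t statistic
# comparing pretest scores of experimental and control groups.
# All data are invented for illustration.
from statistics import mean, stdev

def two_sample_t(a, b):
    """Pooled-variance two-sample t statistic (equal variances assumed)."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    se = (sp2 * (1 / na + 1 / nb)) ** 0.5
    return (mean(a) - mean(b)) / se

experimental_pre = [12, 15, 11, 14, 13, 12, 16, 13]
control_pre      = [13, 14, 12, 15, 12, 13, 14, 12]

t = two_sample_t(experimental_pre, control_pre)
print(f"t = {t:.2f}")  # values near 0 suggest comparable groups at pretest
```

In practice the t statistic would be compared against a critical value to decide whether the groups differ; a nonsignificant result supports treating them as equivalent at baseline.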
The researchers then analyzed the differences at posttest. Although many of the results were in the predicted direction, statistically significant differences between the intervention group (which received the RIPP curriculum) and the control group were found on the number of disciplinary violations for violent behavior (the control group had 2.2 times as many) and the rates of in-school suspensions (the control group had 5 times as many). The intervention participants used peer mediation more frequently at posttest than did the control participants and also reported fewer fight-related injuries at posttest. Differences at the follow-up time periods (6 and 12 months post-intervention) were in the expected direction but were not significant, with the exception of the rate of in-school suspensions at 12-month follow-up, which was three times greater (for boys only).
Further analysis revealed an important finding: The participants who reported high levels of violence on the pretest had lower scores at 6- and 12-month follow-up than did comparable participants in the control group. Results also indicated that while students in the intervention group demonstrated increased knowledge of the material in the curriculum, they did not show any change in their attitudes or in their use of nonviolent responses to hypothetical situations. The authors speculated that because knowledge improved but attitudes and skills did not, there may not have been adequate support for the use of these skills in the school environment. This is important for counselors: If they teach skills to students, the school community must endorse the use of the skills, reinforce the skills, and encourage students to apply them in the school setting.
Although this study was one of few conducted in schools using random assignment and a large sample, the research was somewhat compromised by the nature of the design. With some classes in each school receiving the curriculum, it is difficult to ensure that the students in the control condition were unaware of the program and did not learn some of the material via modeling from peers in the intervention condition. In those cases where the results "approached significance," this limitation may have had an effect, and it may in fact have influenced other results as well.
Random Assignment-SSS
Researchers (Brigman & Campbell, 2003) tested the Student Success Skills (SSS) program on a random sample of 180 students in four grades at six schools in a Southern state. The SSS program is an intervention in which school counselors teach skills found to predict school success, using effective teaching strategies in classroom guidance and group counseling components. Students in the participating schools who scored between the 25th and 50th percentiles on a state assessment test were eligible to participate. Thirty students from each school were randomly selected for participation. The comparison group was created by matching participating schools with nonparticipating schools on geography, race, and socioeconomic status and then randomly selecting students. The outcome (dependent) variables in the study were math and reading scores on a standardized test and teacher ratings of classroom behavior. The authors reported the psychometric properties of all the measures used.
To ensure that the counselors were providing the program correctly, counselors received three days of training and three half-day follow-up training sessions. Half-day peer coaching sessions were scheduled during months when follow-up training did not occur. The group counseling component consisted of eight weekly 45-minute sessions and four booster sessions using a structured format for teaching cognitive, social, and self-management skills. The number and frequency of classroom guidance lessons were not specified for each site, but at least three classroom guidance lessons were to be delivered at each grade level. Five of the six schools met the established criteria for implementation guidelines.
Results showed statistically significant differences between the intervention and control groups on the math and reading posttest scores, and the size of the difference between the two groups was large. The behavior scale was not administered to the comparison group, but the authors noted that 70% of participants demonstrated an average improvement of 22 percentile points in behavior between the September and April administrations of the behavior rating scale.
The use of the comparison group was essential for these results to have an impact, as any improvement found for the intervention group could otherwise have been the result of history, maturation, or regression to the mean. Although different idiosyncratic historical influences may have affected results, the findings are strengthened by the use of the comparison group. The researchers noted that the absence of the behavior ratings for the comparison group was a limitation. In addition, it is not known how this intervention compares to other interventions used to increase student achievement. This study shows how counselors can use research to demonstrate that their interventions positively affect student achievement.
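Regression to the mean, one of the threats named above, can be made concrete with a small simulation: students selected for low pretest scores tend to score closer to average at posttest even with no intervention at all. Every number below is synthetic and purely illustrative.

```python
# Synthetic simulation of regression to the mean. Each observed score is
# a student's true ability plus random "luck"; selecting the lowest
# pretest scorers selects students with bad luck, which does not repeat.
import random

random.seed(1)  # deterministic illustration
true_ability = [random.gauss(50, 10) for _ in range(2000)]
pretest  = [a + random.gauss(0, 8) for a in true_ability]   # ability + luck
posttest = [a + random.gauss(0, 8) for a in true_ability]   # fresh luck at posttest

# "Select" the 500 students with the lowest pretest scores, as if
# screening for a remedial program.
cutoff = sorted(pretest)[500]
selected = [i for i, p in enumerate(pretest) if p < cutoff]

pre_mean  = sum(pretest[i]  for i in selected) / len(selected)
post_mean = sum(posttest[i] for i in selected) / len(selected)
print(f"Selected group, pretest mean:  {pre_mean:.1f}")
print(f"Selected group, posttest mean: {post_mean:.1f}")  # higher, with no treatment
```

A comparison group selected the same way absorbs this artifact: both groups drift upward, so only the difference between them can be credited to the intervention.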
To further document the impact of this intervention, the researchers have published two replications of this study (for details, see Webb, Brigman, & Campbell, 2005, and Campbell & Brigman, 2005). Replication is a way to demonstrate that a research study conducted using the same intervention, procedures, and measures yields similar results in other settings or contexts. In this case, while results were similar in the replications, all three studies' participants were predominantly European American, and it remains to be demonstrated that the intervention is successful with more diverse students.
Using Both Comparison and Control Groups
Both a comparison group and a control group were used to investigate the effects of a social cognitive group on self-esteem and difficulties in peer relationships among adolescents aged 13-16 years (Barrett, Webster, & Wallis, 1999). To be certain that the effects of the intervention could be attributed to the intervention itself and not merely to the attention received as a result of being in a group, the researchers used a comparison group. This group received the special attention of being in a group but, instead of the program's skill development processes and procedures, was given similar information in a didactic fashion using lecture and films. A wait-list control group also was used.
To place the 51 participants (students recommended by teacher nomination) in one of the groups, the researchers matched the students on gender, age, and scores on the three pretests (measures of self-esteem, self-related cognitions, and social competence with peers). Thus, the design ensured that the three groups were comparable prior to the interventions. Measures were described, and their psychometric properties were adequate.
The researchers used change scores from pretest to posttest to assess the impact of the intervention. The experimental group showed significantly greater improvement in self-esteem and self-related cognitions and perceptions than did both the comparison and control groups. Also detected was a significant increase in self-reported interpersonal difficulties with peers in the intervention group. The researchers speculated that the increased difficulties may reflect a concern with ecological validity: While the participants may be able to use the new skills within the program, they were not able to transfer those skills to their daily environment. This is a useful finding: Programs in schools need to include components that help participants apply the skills in their natural environment.
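A change-score analysis of the kind described above can be sketched minimally as follows. The self-esteem scores and group sizes are invented, not taken from the Barrett et al. study.

```python
# Minimal sketch of a change-score analysis: each student's gain is
# posttest minus pretest, and mean gains are compared across groups.
# All scores below are invented for illustration.
from statistics import mean

def mean_change(pretest, posttest):
    """Average pretest-to-posttest gain across paired student scores."""
    return mean(post - pre for pre, post in zip(pretest, posttest))

exp_pre,  exp_post  = [20, 18, 22, 19], [26, 24, 27, 25]
ctrl_pre, ctrl_post = [21, 19, 20, 22], [22, 20, 21, 22]

print(f"Experimental mean gain: {mean_change(exp_pre, exp_post):.2f}")   # 5.75
print(f"Control mean gain:      {mean_change(ctrl_pre, ctrl_post):.2f}")  # 0.75
```

Because each student serves as his or her own baseline, change scores remove stable individual differences, though a significance test would still be needed to rule out chance.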
Although this study utilized a relatively small convenience sample from one school and did not incorporate a follow-up component, the employment of both comparison and control groups is a useful feature given the nature of the intervention, and one that counselors might consider for other research questions.
This article has discussed how comparison and control group designs strengthen research. Well-designed research studies that examine the effectiveness of school counseling programs and practices contribute to a growing body of evidence that supports the positive impact of school counselors on academic achievement. Although there has always been a need for school counselors to be accountable, the current emphasis on data in education makes this need particularly salient. The urgency of the situation is evident from Whiston's (2002) observation:
In my opinion, this is a critical time for leaders in school counseling to invest in the future of the profession and support school counseling research. School counselors may believe they make a difference, but without “hard data” to support these claims, school counselors run the risks of losing their positions. (p. 153)
This article assists school counselors and researchers by describing an approach that produces the most robust findings.
To ensure both internal and external validity, school counselors must attend carefully to all the elements of their study: They must choose the best measures (e.g., achievement test scores vs. grades, objective testing vs. self-report when possible) that have adequate reliability and validity. They must ensure that data are analyzed in the most effective manner (e.g., using statistical methods to control for possible pre-intervention differences between groups). They must clearly describe the population and context from which participants were selected, explain procedures for assignment to groups (with random assignment being the ideal), and articulate how the researchers ensured that the intervention was delivered correctly and in the same way to all groups.
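One common statistical method for controlling pre-intervention differences is covariate adjustment, the idea behind analysis of covariance (ANCOVA). The sketch below is a minimal, hypothetical illustration with invented scores; a full ANCOVA would also test the adjusted difference for significance.

```python
# Hedged sketch of covariate adjustment (the idea behind ANCOVA):
# fit one regression of posttest on pretest across all students, then
# compare the groups' mean residuals. All scores are invented.
from statistics import mean

def slope_intercept(x, y):
    """Ordinary least-squares slope and intercept for one predictor."""
    mx, my = mean(x), mean(y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sxy / sxx
    return b, my - b * mx

def adjusted_group_difference(pre_a, post_a, pre_b, post_b):
    """Difference in mean residuals after removing the pretest's influence."""
    b, a0 = slope_intercept(pre_a + pre_b, post_a + post_b)

    def mean_residual(pre, post):
        return mean(p - (a0 + b * x) for x, p in zip(pre, post))

    return mean_residual(pre_a, post_a) - mean_residual(pre_b, post_b)

# Group A starts ahead on the pretest; most of its raw posttest
# advantage (5.0 points) is explained by that head start.
effect = adjusted_group_difference([12, 14, 16, 18], [16, 18, 20, 22],
                                   [8, 10, 12, 14], [11, 13, 15, 17])
print(f"Pretest-adjusted group difference: {effect:.2f}")  # about 0.56
```

The contrast between the raw gap (5.0) and the adjusted gap (about 0.56) shows why statistical control matters when groups are not perfectly equivalent at pretest.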
School counselors have an ethical obligation to seek evidence of their effectiveness. Principle A.9.g of the Ethical Standards for School Counselors (ASCA, 2004) maintains that a professional school counselor assess "the effectiveness of his/her program in having an impact on students' academic, career and personal/social development through accountability measures especially examining efforts to close achievement, opportunity and attainment gaps." It may seem that the pressure to produce scientific evidence is unreasonable, given the many duties that most school counselors perform. It is important to remember that without such evidence, school counselors may have difficulty justifying their role in a system that is increasingly data-driven.
This is an optimal time for counselor educators and researchers to collaborate with school counselors in order to enhance the credibility of the school counseling profession. Together they can produce the high-quality research that is necessary for both accountability and advancement of the field.
References
Allen, J. (1992). Action-oriented research: Promoting school counselor advocacy and accountability (ERIC Digest No. ED347477). Retrieved June 9, 2002, from http://www.ed.gov/databases/ERIC_Digest/ed347477.html
American School Counselor Association. (2004). Ethical standards for school counselors. Alexandria, VA: Author. Retrieved August 21, 2005, from http://www.schoolcounselor.org/files/ethical%20standards.pdf
American School Counselor Association. (2005). The ASCA national model: A framework for school counseling programs (2nd ed.). Alexandria, VA: Author.
Barrett, P. M., Webster, H. M., & Wallis, J. R. (1999). Adolescent self-esteem and cognitive skills training: A school-based intervention. Journal of Child and Family Studies, 8, 217-227.
Bauman, S. (2004). School counselors and research revisited. Professional School Counseling, 7, 141-151.
Brigman, G., & Campbell, C. (2003). Helping students improve academic achievement and school success behavior. Professional School Counseling, 7, 91-98.
Brown, D., & Trusty, J. (2005). School counselors, comprehensive school counseling programs, and academic achievement: Are school counselors promising more than they can deliver? Professional School Counseling, 9, 1-8.
Campbell, C., & Brigman, G. (2005). Closing the achievement gap: A structured approach to group counseling. Journal for Specialists in Group Work, 30, 67-82.
Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research. Chicago: Rand McNally.
Cramer, S. H., Herr, E. L., Morris, C. N., & Frantz, T. T. (1970). Research and the school counselor. Boston: Houghton Mifflin.
Farrell, A. D., Meyer, A. L., & White, K. S. (2001). Evaluation of Responding in Peaceful and Positive Ways (RIPP): A school-based prevention program for reducing violence among urban adolescents. Journal of Clinical Child Psychology, 30, 451-463.
Gliner, J. A., Morgan, G. A., & Harmon, R. J. (2003). Pretest-posttest comparison group designs: Analysis and interpretation. Journal of the American Academy of Child and Adolescent Psychiatry, 42, 500-503.
Lee, C. C., & Workman, D. J. (1992). School counselors and research: Current status and future direction. School Counselor, 40, 15-19.
Loesch, L. C. (1988). Is "school counseling research" an oxymoron? In G. R. Walz (Ed.), Research and counseling: Building strong school counseling programs (pp. 169-180). Alexandria, VA: American School Counselor Association.
McGannon, W., Carey, J., & Dimmitt, C. (2005). The current status of school counseling outcome research (Research Monograph No. 2). Amherst, MA: University of Massachusetts, School of Education, Center for School Counseling Outcome Research.
No Child Left Behind Act of 2001, Pub. L. No. 107-110, 115 Stat. (2002).
Poynton, T. A. (2005). EZAnalyze (Version 2.0) [Computer software]. Retrieved August 21, 2005, from http://www.ezanalyze.com
Sink, C. A. (2005). Comprehensive school counseling programs and academic achievement: A rejoinder to Brown and Trusty. Professional School Counseling, 9, 9-12.
Webb, L. D., Brigman, G. A., & Campbell, C. (2005). Linking school counselors and student success: A replication of the Student Success Skills approach targeting the academic and social competence of students. Professional School Counseling, 8, 407-413.
Whiston, S. C. (2002). Response to the past, present, and future of school counseling: Raising some issues. Professional School Counseling, 5, 148-155.
Whiston, S. C., & Sexton, T. L. (1998). A review of school counseling outcome research: Implications for practice. Journal of Counseling & Development, 76, 412-426.
Wilson, N. S. (1985). School counselors and research: Obstacles and opportunities. School Counselor, 33, 111-119.
Earn CEUs for reading this article. Visit www.schoolcounselor.org, and click on Professional School Counseling to learn how.
TITLE: Using Comparison Groups in School Counseling Research: A Primer
SOURCE: Professional School Counseling 9 no5 Je 2006 PAGE(S): 357-66
The magazine publisher is the copyright holder of this article and it is reproduced with permission. Further reproduction of this article in violation of the copyright is prohibited.
Copyright 1982-2006 The H.W. Wilson Company. All rights reserved.