Connecting Social Problems and Popular Culture

SECOND EDITION

WHY MEDIA IS NOT THE ANSWER

Karen Sternheimer
University of Southern California

A Member of the Perseus Books Group


Westview Press was founded in 1975 in Boulder, Colorado, by notable publisher and intellectual Fred Praeger. Westview Press continues to publish scholarly titles and high-quality undergraduate- and graduate-level textbooks in core social science disciplines. With books developed, written, and edited with the needs of serious nonfiction readers, professors, and students in mind, Westview Press honors its long history of publishing books that matter.

Copyright © 2013 by Karen Sternheimer

Published by Westview Press, A Member of the Perseus Books Group

All rights reserved. No part of this book may be reproduced in any manner whatsoever without written permission except in the case of brief quotations embodied in critical articles and reviews. For information, address Westview Press, 2465 Central Avenue, Boulder, CO 80301.

Find us on the World Wide Web at www.westviewpress.com.

Every effort has been made to secure required permissions for all text, images, maps, and other art reprinted in this volume.

Westview Press books are available at special discounts for bulk purchases in the United States by corporations, institutions, and other organizations. For more information, please contact the Special Markets Department at the Perseus Books Group, 2300 Chestnut Street, Suite 200, Philadelphia, PA 19103, or call (800) 810-4145, ext. 5000, or e-mail special.markets@perseusbooks.com.

Library of Congress Cataloging-in-Publication Data

Sternheimer, Karen.
Connecting social problems and popular culture : why media is not the answer / Karen Sternheimer.—2nd ed.
p. cm.
Includes bibliographical references and index.
ISBN 978-0-8133-4724-0 (e-book)
1. Mass media—Moral and ethical aspects—United States. 2. Popular culture—Moral and ethical aspects—United States. 3. Mass media and culture—United States. 4. Social problems—United States. I. Title.
HN90.M3S75 2013
302.2301—dc23

2012034416

10 9 8 7 6 5 4 3 2 1


For Frieda Fettner, whose wisdom and encouragement will be with me always


CONTENTS

Preface

1 Media Phobia: Why Blaming Pop Culture for Social Problems Is a Problem

2 Is Popular Culture Really Ruining Childhood?

3 Does Social Networking Kill? Cyberbullying, Homophobia, and Suicide

4 What’s Dumbing Down America: Media Zombies or Educational Disparities?

5 From Screen to Crime Scene: Media Violence and Real Violence

6 Pop Culture Promiscuity: Sexualized Images and Reality

7 Changing Families: As Seen on TV?

8 Media Health Hazards? Beauty Image, Obesity, and Eating Disorders

9 Does Pop Culture Promote Smoking, Toking, and Drinking?

10 Consumption and Materialism: A New Generation of Greed?

11 Beyond Popular Culture: Why Inequality Is the Problem

Selected Bibliography

Index


PREFACE

Rather than viewing popular culture as “guilty” or “innocent,” the central theme running through Connecting Social Problems and Popular Culture is that various media and the popular culture they promote and produce are reflections of deeper structural conditions—such as poverty, racism, sexism, and homophobia—and economic disparities woven into major social institutions. While discussions of sexism in various forms of media, for instance, are often lively and provocative, the representations themselves are not the core reason that gender inequality continues to exist. Media images bring it to our attention and may further normalize sexism for us, but our examination of our society should not end with media.

In order to understand social problems, we need to look beyond media as a prime causal factor. Media may be a good entry point for thinking about how social problems have a basis beyond the individual alone. But while that premise can open the discussion, this book aims to help students and other readers take the next step in understanding social problems. We must look deeper than popular culture—we need to look at the structural roots to understand issues such as bullying, violence, suicide, teen sex and pregnancy, divorce, substance use, materialism, and educational failure.

Neither media nor popular culture stands still for very long—making the study of both a never-ending endeavor. In this second edition of Connecting Social Problems and Popular Culture, I include a new chapter on fears about social networking and electronic harassment. With concerns about bullying and “sexting” leading to suicide after news accounts of high-profile cases, it is important to uncover what we know about the role that new media play in such incidents. Perhaps not surprisingly, social networking is less of a culprit than an attention getter. Additionally, each chapter has been updated to incorporate, where applicable, new research and trend data on crime, pregnancy, birth and divorce rates, substance use, and other social issues for which popular culture is so often blamed.

The “link” between video games and actual violence is always a topic of interest for readers and lay theorists of social problems. In 2011 the US Supreme Court upheld a lower court’s ruling that states cannot limit the purchase of violent video games. In handing down this major decision, the Supreme Court found that California had not proven that playing violent video games causes actual harm. I address this ruling in greater detail in Chapter 5 on media and violence.


Because popular culture is so ubiquitous—and, frankly, fun—it is a great window for students in a variety of courses to look through as they begin exploring social issues. Students in introductory sociology and media studies courses and social problems and social issues classes, as well as those studying inequality, will be able to make connections between the material and the many common beliefs about media’s effects on society that this book addresses.

By challenging the conventional wisdom about what the media “does” to its consumers—especially those considered less capable than their critics—readers can begin to think critically about the ways in which social issues are framed and how sensationalized news accounts help shape our thinking about the causes of societal problems. Beyond simply debunking common beliefs, this second edition stresses the importance of social structure and provides an introduction to structural explanations for the issues commonly blamed on popular culture. By digging beneath simple cultural arguments, readers learn how policy decisions and economic shifts are important explanatory factors for many issues blamed on media.

Each chapter begins with examples from pop culture that many readers will already be familiar with, taken from celebrity gossip and controversial television shows like Teen Mom, high-profile news stories, and other easily accessible accounts. Additionally, each chapter introduces findings from recent research, often breaking down the components of the sampling and methods for readers to better understand how research is conducted and how to think critically about the results presented in the press. Where applicable, each chapter includes supporting data—and in some cases graphs—from federal sources, such as the census, Federal Bureau of Investigation, and Centers for Disease Control and Prevention, to provide evidence of long-term trends, often challenging misperceptions about particular issues. Because these sources are easily accessed online (and URLs are included in notes at the end of chapters), readers can learn to spot-check popular claims about these issues on their own in the future.

The evolution of this book, across its editions, has truly been a team effort. Thanks to Alex Masulis, my first editor at Westview Press, to Evan Carver who, early on, championed the second edition, and to Leanne Silverman, who helped bring the book in your hands to print.

I am also very thankful for my student researchers who helped find articles for this book. William Rice, Jessica Sackman, and Mishirika Scott assisted with the first edition, and Kimberly Blears helped with the revised edition. They and many other undergraduate students at the University of Southern California have been a pleasure to work with; their input in my classes helps keep me grounded in youth culture as time takes me further away from being anywhere near pop culture’s cutting edge. Several anonymous reviewers provided useful comments and suggestions, and I thank them for helping make this book stronger. For their helpful criticisms and invaluable suggestions, I also want to thank David Briscoe, Joshua Gamson, Kelly James, Marcia Maurycy, Janet McMullen, and Markella Rutherford.

The Department of Sociology at the University of Southern California has been my professional home for many years, and I could not have written this book without years of the department’s enthusiastic support. I am grateful for the many graduate and undergraduate students with whom I have shared countless hours of thought-provoking discussions. Special thanks to Mike Messner, Barry Glassner, Sally Raskoff, Elaine Bell Kaplan, Karl Bakeman, and Eileen Connell for their continued support of me and my work. And most of all, thanks to my family, without whom none of this would be possible. A special thanks to my parents and sisters for their continued support, and for Eli and Julian, who are introducing me to a new generation’s pop culture.


CHAPTER 1


Media Phobia: Why Blaming Pop Culture for Social Problems Is a Problem

“They’re here!” Carol Anne exclaims in the 1982 film Poltergeist. “Who’s here?” her mother asks. “The TV people!” answers the wide-eyed blonde girl, mesmerized by the “snow” on the family’s television set. What follows is a family’s sci-fi nightmare: Carol Anne is taken away by the angry spirits terrorizing their home. Her only means of communication with her family is through the television set.

This film’s plot serves as a powerful example of American anxieties about media culture. The angelic child is helpless against its pull and is ultimately stolen, absorbed into its vast netherworld. She is the family’s most vulnerable victim, and as such is drawn into evil without recognizing its danger. Carol Anne’s fate highlights the fear of what television in particular and popular culture more generally may “do to” children: take them someplace dangerous and beyond their parents’ reach. Ultimately, Carol Anne is saved with the help of a medium, but the imagery in the film reflects the terror that children are somehow prey to outsiders who come into unsuspecting homes via the TV set.

Thirty years later, media culture has expanded well beyond television; unlike in Carol Anne’s day, kids today use social networking, smartphones, iPods, the Internet, video games, and other technology that their parents may not even know how to use. Cable television was in its infancy in 1982: MTV was one year old, CNN was two. Today there are hundreds of channels, with thousands more programs available on demand at any time. Unlike in 1982, television stations no longer sign off at night. Our media culture does not rest. What does this mean for young people today, and our future?

Much of the anxiety surrounding popular culture focuses on children, who are often perceived as easily influenced by media images. The fear that popular culture leads young people to engage in problematic behavior, culminating in large-scale social problems, sometimes leads the general public to blame media for a host of troubling conditions.

For many people, this explosion of media over the past decades brings worry that, for instance, kids are so distracted by new technology that they don’t study as much. Are they crueler to one another now, thanks to social networking? Does our entertainment culture mean kids expect constant entertainment? Do kids know too much about sex, thanks to the Internet? Does violent content in video games, movies, and television make kids violent? Promiscuous? Materialistic? Overweight? Anorexic? More likely to smoke, drink, or take drugs?


This book seeks to address these questions, first by examining the research that attempts to connect these issues to popular culture. Despite the commonsense view that media must be at least partly to blame for these issues, the evidence suggests that there are many more important factors that create serious problems in the United States today. Popular culture gets a lot of attention, but it is rarely a central causal factor. Throughout the book, we will also take a step back and think about exactly why it is that so many people fear the effects of popular culture.

You might have noticed that all of the questions posed above focus on young people’s relationship with media and leave most adults out of the equation. As we will see, a great deal of our concern about media and media’s potential effects on kids has more to do with uncertainty about the future and the changing experiences of childhood and adolescence. In addition to considering why we are concerned about the impact of popular culture, this book also explores why many researchers and politicians encourage us to remain afraid of media culture and of kids themselves. Of course, popular culture has an impact on everyone’s life, regardless of age. But this impact is less central in causing problems than factors like inequality, which we will explore throughout the book.

The Big Picture: Poverty, Not Pop Culture

Blaming media for changes in childhood and for causing social problems has shifted the public conversation away from addressing the biggest issues that impact children’s lives. The most pressing crisis American children face today is not media culture but poverty. In 2011—the most recent year for which data are available—more than 16 million children (just under 22 percent of Americans under eighteen) lived in poverty, a rate two to three times higher than that in other industrialized nations. Reduced funding for families in poverty has only exacerbated this problem, as we now see the effects of the 1996 welfare reform legislation that has gradually taken away the safety net from children. Additionally, our two-tiered health care system often prevents poor children from receiving basic health care, as just over 9 percent of American children had no health insurance in 2011.1 These are often children with parents who work at jobs that offer no benefits.

These same children are admonished to stay in school to break the cycle of poverty, yet many of them attend schools without enough books or basic school supplies. Schools in high-poverty areas are more likely to have uncertified teachers; for instance, 70 percent of seventh through twelfth graders in such schools are taught science by teachers without science backgrounds.2 We worry about kids being in danger at school but forget that the most perilous place, statistically speaking, is in their own homes. In 2010, for instance, 915 children were killed by their parents, compared with 17 killed at school during the 2009–2010 school year.3 By continually hyping the fear of media-made child killers, we forget that the biggest threats to childhood are adults and the policies adults create.

As we will see throughout this book, many of the problems that we tend to lay at the feet of popular culture have more mundane causes. Poverty plays a starring role in the most serious challenges American children face, including lack of a quality education, violent victimization, early pregnancies, single parenthood, and obesity; popular culture is a bit player at best. And other issues that this book addresses, such as materialism, substance use, racism, sexism, and homophobia, might be highly visible in popular culture, but it is the adults around young people, as well as the way in which American society is structured, that contribute the most to these issues. Their causes are more complex than their visibility in popular culture suggests, and we will examine those causes in the chapters that follow.

The media have come to symbolize society and provide glimpses of both social changes and social problems. Changes in media culture and media technologies are easier to see than the complex host of economic, political, and social changes Americans have experienced in the past few decades. Graphic video games are easier to see than changes in public policies, which we hear little about, even though they better explain why violence happens and where it happens. We may criticize celebrity single mothers because it is difficult to explore the real and complex situations that impact people’s choices and behavior. What lies behind our fear of media culture is anxiety about an uncertain future. This fear has been deflected onto children, symbolic of the future, and onto media, symbolic of contemporary society.

In addition to geopolitical changes, we have experienced economic shifts over the past few decades, such as the increased necessity for two incomes to sustain middle-class status, which has reshaped family life. Increased opportunities for women have created greater independence, making marriage less of a necessity for economic survival. Deindustrialization and the rise of an information-based economy have left the poorest and least-skilled workers behind and eroded job security for many members of the middle class. Ultimately, these economic changes have made supervision of children more of a challenge for adults, who are now working longer hours.

Since the Industrial Revolution, our economy has become more complex, and adults and children have increasingly spent their days separated from one another. From a time when adults and children worked together on family farms to the development of institutions specifically for children, like age-segregated schools, day care, and organized after-school activities, daily interaction in American society has become more separated by age. Popular culture is another experience that kids may enjoy beyond adult supervision. An increase in youth autonomy has created fear within adults, who worry that violence, promiscuity, and other forms of “adult” behavior will emerge from these shifts and that parents will have a declining level of influence on their children. Kids spend more time with friends than with their parents as they get older, and more time with popular culture, too. These changes explain in large part why children’s experiences are different now than in the past, but they are not just the result of changes in popular culture.

A Brief History of Media Fears

Fear that popular culture has a negative impact on youth is nothing new: it is a recurring theme in history. Whereas in the past, fears about youth were largely confined to children of the working class, immigrants, or racial minorities, fear of young people now appears to be a more generalized fear of the future, which explains why we have brought middle-class and affluent youth into the spectrum of worry. Like our predecessors, we are afraid of change, of popular culture we don’t like or understand, and of a shifting world that at times feels out of control.

Fears about media and children date back at least to Plato, who was concerned about the effects that the classic Greek tragedies had on children.4 Historian John Springhall describes how penny theaters and cheap novels in early-nineteenth-century England were thought to create moral decay among working-class boys.5 Attending the theater or reading a book would hardly raise an eyebrow today, but Springhall explains that the concern emerged following an increase in working-class youths’ leisure time.

As in contemporary times, commentators blamed youth for a rise in crime and considered any gathering place of working-class youth threatening. Young people could afford admission only to penny theaters, which featured entertainment geared toward a working-class audience, rather than the “respectable” theaters catering to middle- or upper-class patrons. Complaints about the performances were very similar to those today: youngsters would learn the wrong values and possibly become criminals. Penny and later dime novels garnered a similar reaction, accused of being tawdry in content and filled with slang that kids might imitate. Springhall concludes that the concern had less to do with actual content and more to do with the growing literacy of the working class, which shifted the balance of power from elites to the masses and threatened the status quo.

Examining the social context enables us to understand what creates underlying anxieties about media. Fear of comic books in the 1940s and 1950s, for instance, took place in the McCarthy era, when control over culture was high on the national agenda. Like the dime novels before them, comic books were cheap, were based on adventurous tales, and appealed to the masses. Colorful and graphic depictions of violence riled critics, who lobbied Congress unsuccessfully to place restrictions on comics’ sale and production.6 Psychiatrist and author Frederic Wertham wrote in 1953 that “chronic stimulation … by comic books [is a] contributing [factor] to many children’s maladjustment.”7 He and others believed that comics were a major cause of violent behavior, ignoring the possibility that violence in postwar suburban America could be caused by anything but the reading material of choice for many young boys. Others considered pinball machines a bad influence; the city of New York even banned pinball from 1942 to 1976 as a game of chance that allegedly encouraged youth gambling.

During the middle of the twentieth century, music routinely appeared on the public-enemy list. Historian Grace Palladino recounts concerns about swing music in the early 1940s. Adults feared that kids wasted so much time listening to it that they could never become decent soldiers in World War II (sixty years later Tom Brokaw dubbed these same would-be delinquents “the greatest generation”).8 Palladino contends that adult anxieties stemmed from the growing separation between “teenagers,” a term market researchers coined in 1941, and the older generation in both leisure time and cultural tastes. Just a few years later, similar concerns arose when Elvis Presley brought traditionally African American music to white middle America. His hips weren’t really the problem; it was the threat of bringing traditionally black music to white middle-class teens during a time of enforced and de facto segregation.

Later, concerns about satanic messages allegedly heard when listeners played vinyl albums backward and panic over Prince’s “1999” lyrics about masturbation in the 1980s led to the formation of Tipper Gore’s Parents Music Resource Center, Senate hearings, and parental warning labels. Both panics stemmed from parents’ discomfort with their children’s cultural preferences and their desire for more control over what their children know. Today, fears of media culture stem from a decreased ability to control content and consumption. While attending the theater or reading newspapers or novels elicits little public concern today, fears have shifted to newer forms of cultural expression like smartphones, social media, video games, and the Internet. Throughout the twentieth century, popular culture became something increasingly consumed privately. Before the invention of radio and television, popular culture was more public, and controlling the information young people were exposed to was somewhat easier. Fears surrounding newer media have largely been based on the reduced ability of adults to control children’s access. Smartphones and near-constant Internet access make it practically impossible for adults to seal off the walls of childhood from the rest of society.

These recurring concerns about popular culture are examples of what sociologist Stanley Cohen refers to as “moral panics,” fears that are very real but also out of proportion to their actual threat.9 Underneath the fear lies the belief that our way of life is at stake, threatened by evildoers—often cast as popular culture or its young consumers—who must be controlled. The rhetoric typically takes on a shrill and angry tone, joined by people nominated as experts to attest to the danger of what might happen unless we rein in the troublemakers. Cohen calls those blamed for the crisis “folk devils,” the people or things that seem to embody everything that is wrong with society today. Typically, moral panics attempt to redefine the public’s understanding of deviance, recasting the folk devils as threats in need of restraint.

Moral panics typically have a triggering event that gathers significant media attention, much like the Columbine High School shootings in Littleton, Colorado, did in 1999. The tragic murder of twelve students and a teacher shocked the nation, which could view nonstop live coverage of the event on a variety of news networks. Drawing on previous concerns about youth violence and popular culture, a panic began surrounding video games, music, and the use of the Internet to post threats and gather information about carrying out similar attacks. In the aftermath, commentators linked the perpetrators’ pop culture preferences to their actions, suggesting that it was highly predictable that violent music and video games would lead to actual violence. This panic cast both teens and violent media as folk devils, claiming that both were a threat to public safety.

Panics about popular culture often mask attempts to condemn the tastes and cultural preferences of less powerful social groups. Popular culture has always been viewed as less valuable than “high culture,” the stuff that is supposed to make you more refined, like going to the ballet, the opera, or the symphony. Throughout history people have been ready to believe the worst about the “low culture” of the common folk, just as bowling, wrestling, and monster truck rallies often bear the brunt of put-downs today. It’s more socially acceptable to make fun of something working-class people might enjoy than to appear snobby and insensitive by criticizing people for their economic status.

The same is true of criticizing rap music rather than African Americans directly. Sociologist Bethany Bryson analyzed data from the General Social Survey, a nationally representative random household survey, and found strong associations between musical intolerance and racial intolerance. She notes that “people use cultural taste to reinforce symbolic boundaries between themselves and categories of people they dislike. Thus, music is used as a symbolic dividing line that aligns people with some and apart from others.” Bryson also observed a correlation between dislike of certain groups and the music associated with those groups.10 So for many people, rap becomes a polite proxy for criticizing African Americans without appearing overtly racist.

Africana studies professor Tricia Rose writes that the discourse surrounding rap is a way to further construct African Americans “as a dangerous internal element in urban America—an element that if allowed to roam about freely will threaten the social order.”11 She goes on to describe how rap concerts have been portrayed as bastions of violence in order to justify greater restrictions keeping black youth out of public spaces. Likewise, sociologist Amy Binder studied more than one hundred news stories about gangsta rap and found that heavy metal is feared for being potentially dangerous to individual listeners, whereas rap’s critics have focused on its alleged danger to society as a whole.12

Popular culture often creates power struggles. Every new medium creates new freedom for some, more desire to control for others. For instance, although the printing press was regarded as one of the greatest inventions of the second millennium, it also destabilized the power of the church when literacy became more widespread and people could read the Bible themselves. Later, the availability of cheap newspapers and novels reduced the ability of the upper class to control popular culture created specifically for the working class. Fears of media today reflect a similar power struggle, although now the elites are adults who fear losing control of what their children know, what their children like, and who their children are.

Constructing Media Phobia

Ironically, we are encouraged to fear media by the news media itself, because doomsday warnings sell papers, attract viewers, and keep us so scared we stay glued to the news for updates. “TV is leading children down a moral sewer!” the late entertainer Steve Allen claimed in several full-page ads in the Los Angeles Times. Other headlines seem to concur: “Teens’ Web Is a Wild West,” warned the Orange County Register. The New York Times wrote of the dangers of “video games and the depressed teenager.” “Health Groups Link Hollywood Fare to Youth Violence,” announced the front page of the Los Angeles Times.13 These and hundreds of other stories nationwide imply that the media are a threat to children and, more ominously, that children are subsequently a threat to the rest of us.

The news media are central within American public thought, maybe not telling us what to think, but, to borrow a popular phrase, focusing our attention on what to think about. Known as agenda-setting theory, this idea suggests that the repetition of issues in the news shapes what the public believes is most important.14 The abundance of news stories similar to the ones listed above directs us to think about entertainment as public enemy number one for kids in particular. Whether the stories are about popular culture causing young people to commit acts of violence or to become sexually active, depressed, or addicted, stories about the alleged danger of popular culture help us make seemingly easy connections between media and social problems. Although not everyone who hears about these stories agrees that there is a cause-effect relationship, the repeated focus on media effects keeps the debate alive and the attention away from other potential causes of troubling conditions.

Problems do not emerge fully formed; they need to be created in order to claim status as important and worthy of our attention and concern. In their 1977 book, Constructing Social Problems, sociologists John Kitsuse and Malcolm Spector argue that social problems are the result of the work of claims makers, people who actively work to raise awareness and define an issue as a significant problem. This is not to suggest that problems don’t really exist, only that to rise to the level of a social problem, an issue needs people who lobby for greater attention to it.

The constructionist approach to social problems requires us to look closely not just at the issue of concern, but also at how we have come to think of it as a problem and—equally important—who wants us to view it as such. The popular culture problem is one example, created by a variety of people, including academics who do research testing only for negative effects and provide commentary attesting to its alleged harm; activist groups that seek to raise public awareness about pop culture’s supposed threat; and, as noted earlier, the news organizations that report on these claims. Politicians also campaign against popular culture, hold hearings, and propose legislation to appear to be doing something about the pop culture problem. Author Cynthia Cooper analyzed nearly thirty congressional hearings held on this issue, finding them to be little more than an exercise in public relations for the elected officials, yet hearings add to the appearance of a weighty problem in need of federal intervention. These claims makers do not simply raise awareness in response to a problem; their actions help create our sense that problems exist in the first place. Claims makers also shape the way we think about an issue and frequently “distort the nature of a problem,” as sociologist Joel Best details in his analysis of crime news.15 He acknowledges that claims makers might not do this on purpose and often have good intentions. After all, if people see what they believe to be a serious problem, raising awareness makes sense.

For example, consider the surgeon general’s report on youth violence, released in January 2001. This report indicated that poverty and family violence are the best predictors of youth violence. Nonetheless, the report concludes, “Exposure to violent media plays an important causal role,” based on research that is highly criticized by many media studies scholars.16 Newspapers capitalized on this single statement, running stories with the headlines “Surgeon General Links TV, Real Violence” and “Media Dodges Violence Bullet.”17 Even when studies point to other central causal factors, media violence often dominates the story—even in Hollywood.


You might be wondering what the harm could be in conducting research, holding hearings, and reporting on this issue. After all, media culture is very pervasive, and if it could be even a minor issue, shouldn’t we pay attention to it?

There is danger, however, in taking our attention away from other potentially more serious issues. The pop culture answer diverts us from delving into the other questions. Focusing on the media only in a cause-and-effect manner fails to help us understand media culture as a form of commerce, created in a particular economic context. The quest for the biggest box-office opening or Nielsen ratings leads to lowest-common-denominator storytelling, which explains the overuse of sex and violence as plot devices. Profit, not critical acclaim, equals success in Hollywood (and on Wall Street). Sex and violence create fascination and are sold in popular culture like commodities to attract our attention, if only for a little while.

Most ominously, the effects question crowds out other vital issues affecting the well-being and future of young people. These issues play out more quietly on a daily basis and lie hidden underneath the more dramatic fear-factor-type headlines. Sociologist Barry Glassner, author of The Culture of Fear, refers to this as social sleight of hand: like a magician’s trick that keeps us focused on one hand while the other actually does the work, it encourages us to mistake a trick for real magic. He warns that these diversions encourage us to fear the wrong things, while the real roots of problems go unexamined and rarely rise into public awareness.

It’s not surprising that we have a difficult time looking beyond popular culture as an explanation for social problems. As a nation rooted in the ethos of individualism, Americans tend to understand troubling conditions as the result of poor personal choices. Certainly, these choices play a role, but we often fail to understand the contexts in which people make such choices.

Social structure is the sociological concept that gives us information about these contexts. Attending to social structure encourages us to look in depth at the big picture to understand what factors may shape people’s choices. Looking carefully at patterns of arrangements within our economic system, and at inequality in terms of race, gender, sexual orientation, and socioeconomic status, will help us understand why, for instance, some people might be more prone to bully, to commit violence, to become pregnant as a teen, or to drop out of school.

For example, many critics of rap music have argued that some of the lyrics are extremely misogynistic, encouraging young listeners to devalue women. While disturbing lyrics get our attention, sociologists Terri M. Adams and Douglas B. Fuller argue that rap is just a continuation of a long history of demonizing women, particularly black women. The “Jezebel” myth (the modern-day “ho”) of the hypersexual woman who uses her wiles to manipulate men dates back to slavery and served as an excuse for white men to violate African American women. Similarly, the “Mammy” myth (today’s “bitch”) also has roots in slavery as the bossy woman who orders black men around while serving her white masters.

In more contemporary times, politicians have used these characterizations to blame women for urban poverty: Ronald Reagan’s 1980s-era “welfare queen” who allegedly can’t stop having babies and Senator Daniel Patrick Moynihan’s emasculating matriarch of the 1960s, supposedly destroying the African American family with her strength.18 Whereas politicians may use more genteel language, the outcome of reduced funding for children in poverty carries far more potential destructiveness than the prolific use of profanity in rap. In fact, part of the insidiousness of sexism lies in the use of language to cover and obfuscate its continued importance in American life. The realities of discrimination and violence against women are less sensational than rap’s in-your-face lyrics, but they are still with us.

For example, the National Crime Victimization Survey (NCVS), a nationally representative survey conducted by the Department of Justice each year, found that in 2010, 169,370 American women and girls over twelve reported being raped or sexually assaulted, a rate of 0.7 per 1,000. Intimate-partner violence accounted for 22 percent of nonfatal violence against women.19 This is partly because females are generally less likely to be victims of violence than males are, but it also highlights the dangers women often face from those closest to them.

Structural factors are often difficult to see for those not trained to think sociologically. It is not obvious how policies enacted decades ago might shape patterns of violence or school failure today, but they do. Thinking about social structure involves connecting the dots between past and present, between large-scale social institutions and individual choices. One of the central goals of this book is to help readers see the structural factors behind the many problems that popular culture is often blamed for causing.

Not only is this an issue that politicians can use to connect with middle-class voters, but researchers can also get funding from a host of sources to continue to seek negative media effects. David L. Altheide, sociologist and author of Creating Fear: News and the Construction of Crisis, suggests that fear-based news helps support the status quo, justifies further social control, and encourages us to look for punitive solutions to perceived problems. Meanwhile, more significant causes of American social problems fall by the wayside.

Deconstructing Media Phobia

This book uses the constructionist approach to understand how claims makers blame popular culture for causing social problems. This does not mean that all problems are just invented crises, nor does it mean that popular culture is all benign entertainment that should not be critically analyzed. Within each chapter, we will examine the structural roots of the various issues that tend not to attract the massive attention or news coverage that popular culture does. Issues such as the persistence of poverty, unequal access to quality education, reduced information about birth control, overall disparities in opportunity, and the continued presence of racial and gender inequality explain many of the problems we hear blamed on popular culture.

Understanding moral panics about popular culture involves addressing both how the fear is constructed and why it is out of proportion, which requires us to examine objective evidence. Throughout this book, we will examine data and trends within each chapter to see that many of the problems attributed to popular culture are not necessarily getting worse. Sometimes the problems are very serious (such as violence and educational disparities), and an emphasis on media serves to trivialize them. Studies purporting to find evidence of media culpability are often profoundly flawed or overstate their findings. Since research methodology can be complex and dry, the public almost never hears how researchers actually conducted the studies that are discussed in the news. We will do that here, and in the process you will see that some of the research we hear so much about has serious shortcomings.

In the following chapters, we will consider claims that popular culture promotes educational failure, online bullying, violence, promiscuity, single parenthood, materialism, obesity and eating disorders, drinking, drug use, and smoking, as well as racism, sexism, and homophobia. These are important and often misunderstood issues that merit further exploration.

Media culture may not be the root cause of American social problems, but it is more than simply benign entertainment. The purpose of this book is not simply to exonerate media culture as inconsequential: I contend that media culture is a prime starting point for social criticism, but our look at social problems should never end with the media. Pointing out the real issues we should be concerned about does not absolve the entertainment industry of its excesses and mediocrity, particularly the news media, which often heighten our fears while providing little context or analysis. Fear is a powerful force, especially when children seem to be potential victims, so it is understandable that the public would be concerned about our ubiquitous media culture. However compelling news reports may be, with their attention-grabbing visuals and constant competition for our interest, the fear that media are a central threat to children and the future of America is a tempting explanation, but at best it is misguided.

This fear of media was not invented out of thin air, nor is it fanned only by news stories suggesting media culture is dangerous. There is a parallel groundswell of public concern about the larger role of media culture in contemporary American society. Let’s face it: a lot of media culture is highly sexualized, is filled with violence, and seems to appeal to our basest interests, and some people do use social networking to be incredibly rude and abusive.

The media act as a refracted social mirror, providing us with insights about major social issues such as race, gender, class, and the power and patterns of inequality. The media are an intricate element of our culture, woven into the fabric of social life. For example, many people rightly criticize the highly sexualized images of women in popular culture, the limited representations of people of color on television, and the brutality of fantasy violence in movies and video games. These images exist in the context of a society still mired in various forms of inequality, and although in many respects inequality has been reduced, it still exists. Limited or absent representations of the elderly, the plus-sized, the disabled, and other marginalized groups reflect the tendency of mass entertainment to focus on a narrow portrait of American life. Popular culture can be a great starting point to discuss issues of power, privilege, and inequality.

Media Matter

I want to be clear that by arguing that popular culture isn’t the central cause of our biggest problems, I am not saying that media have no impact on American society or that popular culture doesn’t matter. Far from it. Our various forms of media shape our communication with each other and how we spend our time, and we use many forms in constructing our identities. Popular culture shapes what we talk about, how we think of each other, and how we think about ourselves. Media matter, but our relationship to their many forms is more complex and multifaceted than simple cause-effect arguments suggest.

For example, people might use music as a means of forming connections with others at festivals like Burning Man and for navigating emotional challenges of relationships and self-image. A Facebook account is a way to construct a public self and has become a central means of communication for many people. Debates about use of the N word in music lyrics can lead to broader discussions about the word’s history and meaning and the state of racism today.

I also understand why people are concerned about the content of popular culture. Many of us find it distasteful at times and wonder what its impact may be. Others don’t like hearing foul language blasting from the stereo of the car next to theirs and cringe when young girls seem to emulate sexy pop stars. Media culture has become very pervasive in the past few decades, and at times it feels like it bombards us—twenty-four-hour news streams, constant texting, and social networking have reshaped our daily lives and interactions. The news media are often guilty of peddling fascination rather than information. This book serves as a critique of the press coverage of social problems and explains why the “media made them do it” theme continually resurfaces. I understand why critics sometimes argue that graphic media depictions of sex and violence and the prolific use of profanity debase our culture. Hollywood’s dependence on these tools often represents a failure to tell complex stories and a lack of courage to take artistic (and financial) risks. Rather than just ask Hollywood for self-censorship, we should have more choices, more opportunities for our media culture to engage the complexities of life that the summer blockbusters seldom do. But business as usual often makes this impossible, when a handful of big conglomerates produce the lion’s share of entertainment media and smaller producers have a difficult time getting attention. The 1996 Telecommunications Act, which eased media-ownership restrictions, made it even harder for smaller media outlets to compete with big conglomerates like Disney, Time-Warner, and Viacom.

That said, I know that sometimes at the end of a long day, I prefer to be distracted and amused rather than informed or inspired. With the threat of terrorism and the lingering fallout from the Great Recession, superficial entertainment serves a purpose. But deflected anxiety doesn’t go away; it just resurfaces elsewhere. And in uncertain times such as our own, it is understandable that our concerns would eventually focus on popular culture that both reminds us of our insecurities and distracts us from them. But understanding the most important issues and their causes can help alleviate anxieties about both popular culture and young people, and help us focus on the roots of troubling issues in order to find solutions. This book aims to do just that.

Notes

1. US Bureau of the Census, Income, Poverty, and Health Insurance Coverage in the United States: 2011, Report P60-243, Table B-2, 16, 22, http://www.census.gov/prod/2012pubs/p60-243.pdf.

2. Children’s Defense Fund, The State of America’s Children Yearbook, 2002 (Washington, DC: CDF, 2002).

3. US Department of Health and Human Services, Administration on Children, Youth, and Family, Child Maltreatment, 2010 (Washington, DC: Government Printing Office, 2011), http://www.acf.hhs.gov/programs/cb/pubs/cm10/cm10.pdf#page=70; US Department of Education, Indicators of School Crime and Safety: 2011 (Washington, DC: Government Printing Office, 2012), http://nces.ed.gov/programs/crimeindicators/crimeindicators2011/figures/figure_01_1.asp.

4. For further discussion of Plato’s concerns, see David Buckingham, After the Death of Childhood: Growing Up in the Age of Electronic Media.

5. John Springhall, Youth, Popular Culture, and Moral Panics: Penny Gaffs to Gangsta-Rap, 1830–1996.

6. For further discussion, see ibid., chap. 5.

7. Frederic Wertham, “Such Trivia as Comic Books.”

8. Grace Palladino, Teenagers: An American History; Tom Brokaw, The Greatest Generation (New York: Random House, 1998).

9. Stanley Cohen, Folk Devils and Moral Panics.


10. Bethany Bryson, “‘Anything but Heavy Metal’: Symbolic Exclusion and Musical Dislikes.”

11. Tricia Rose, “‘Fear of a Black Planet’: Rap Music and Black Cultural Politics in the 1990s,” 279.

12. Amy Binder, “Constructing Racial Rhetoric: Media Depictions of Harm in Heavy Metal and Rap Music,” 754.

13. David Whiting, “Teens’ Web Is a Wild West,” Orange County Register, December 14, 2011, http://www.ocregister.com/articles/-149862-ocprint-.html; Roni Caryn Rabin, “Video Games and the Depressed Teenager,” New York Times, January 18, 2011, http://well.blogs.nytimes.com/2011/01/18/video-games-and-the-depressed-teenager/; Marlene Cimons, “Health Groups Link Hollywood Fare to Youth Violence,” Los Angeles Times, December 13, 2000, A34.

14. Maxwell E. McCombs and Donald L. Shaw, “The Agenda-Setting Function of the Mass Media.”

15. Cynthia Cooper, Violence on Television: Congressional Inquiry, Public Criticism, and Industry Response—a Policy Analysis; Joel Best, Random Violence: How We Talk About New Crimes and New Victims, xiii.

16. US Department of Health and Human Services, Youth Violence: A Report of the Surgeon General (Washington, DC: Government Printing Office, 2001). For more discussion of the research on which the statement was based, see Chapter 2.

17. Jeff Leeds, “Surgeon General Links TV, Real Violence,” Los Angeles Times, January 17, 2001, A1; Jesse Hiestand, “Media Dodges Violence Bullet; Poverty, Peers More to Blame,” New York Daily News, January 18, 2001, B1.

18. Terri M. Adams and Douglas B. Fuller, “The Words Have Changed but the Ideology Remains the Same: Misogynistic Lyrics in Rap Music.”

19. Jennifer L. Truman, “Criminal Victimization, 2010,” National Crime Victimization Survey (Washington, DC: US Department of Justice, 2011), http://www.bjs.gov/content/pub/pdf/cv10.pdf.


CHAPTER 2


Is Popular Culture Really Ruining Childhood?

“There is reason to believe that childhood is now in crisis,” writes law professor Joel Bakan in a 2011 New York Times op-ed. He lists a number of reasons for his concern, beginning with a description of his teenage children, “a million miles away, absorbed by the titillating roil of online social life, the addictive pull of video games and virtual worlds, as they stare endlessly at video clips and digital pictures of themselves and their friends.” He is not alone in the belief that popular culture is at least partly to blame for negatively impacting childhood. “Pop culture is destroying our daughters,” a 2005 Boston Globe story declared, affirming what many parents and critics believe. The article, tellingly titled “Childhood Lost to Pop Culture,” described young girls “walking around with too much of their bodies exposed,” their posteriors visible while sitting in low-rise jeans.1

The concerns are not limited to the United States, either. A British newspaper warned readers of children’s “junk culture,” asking whether we have “poisoned childhood” with video games and other kinds of popular culture. A Canadian newspaper asked, “Can the kids be deprogrammed?” noting that “concern is mounting that pop culture may be accountable for a wide range of social and physical problems that begin in childhood and carry through to adulthood.”2

Stories like these reinforce what many people think is obvious: childhood is under siege, and popular culture is the main culprit. From celebrities making questionable life choices to violent video games and explicit websites, there is certainly a deep well of pop culture to draw from in order to find examples of bad behavior that many fear will send the wrong message to kids. But despite the plethora of potential bad influences, pop culture is not changing children and childhood as much as we might fear.

First, we need to examine the meaning of childhood itself. If childhood looks different from what many people presume it should, we need to critically consider what it is “supposed” to be like and how we collectively create the meaning of childhood. Are children’s lives really far from the ideal that pop culture is allegedly destroying?

Second is the presumption that the experience of childhood has changed for the worse. Some people are deeply concerned that children know things that we think they shouldn’t—about sex, violence, alcohol, and drugs. But who decides what children should and shouldn’t know (or when they should know it) and whether knowledge itself is dangerous? Before we convict popular culture, we need to consider whether children and childhood itself have really been damaged.

Finally, if children’s experiences of childhood have changed, we often presume that popular culture is the main cause. But is it really?

In this chapter we will examine these three basic questions about children and popular culture. As we will see, childhood has not been ruined, nor is it ending earlier than in generations past. Yes, children’s experiences are different now than they were when I was growing up and likely from when you were growing up, too. When I was ten, cable television was just coming out (with only a few dozen channels), VHS and Betamax were starting their battle for household domination, and portable music mostly meant a transistor radio. But there were many other factors—more important factors—shaping the experiences of kids my age than our media consumption, just as there are for kids today.

Americans fear media in part because we are constantly told we should and, more important, because media are the most visible representation of the many changes that have altered the experiences of childhood. Changes in popular culture are much easier to spot than shifts in social structure. In this chapter I address why media are so often considered detrimental to childhood and the primary spoilers of innocence. Instead of media being the true culprit, broader social, political, and economic changes over the past century have made adults uneasy about their ability to control children and the experience of childhood itself. Most centrally, fears about the demise of childhood make us nostalgic for our own lost childhoods. In a way we are longing for our lost selves when we think that childhood and children have been damaged by popular culture. The many moral panics surrounding young people and popular culture stem from misunderstandings about children’s well-being today, and the shifting meanings of childhood itself.

The Meaning(s) of Childhood

What is childhood? This may seem like an obvious question, but its definition is trickier than we might think. For one, Americans don’t even agree on when a child’s life begins—at conception? the second trimester of pregnancy? at birth? Once children are born, the confusion doesn’t end. Many might agree that people under ten can be classified as children, but we will probably not all agree on the sorts of experiences they should have. A religious education? Chores? Responsibility for younger siblings? A job? Underlying these decisions are a variety of basic ideas about what childhood should mean, and these decisions change over both time and place.

If we have trouble defining when childhood begins, we really have difficulty agreeing on when it ends. Is adolescence the cutoff? Age eighteen? Twenty-one? Neither age is really the clear threshold to adulthood; after all, in some states children as young as ten can be tried as adults in criminal court.3 On the other hand, some adults regard college students—many well over eighteen and even over twenty-one—as kids, not yet in the real world.

As a society we have mixed feelings about children and childhood. We all have different experiences of childhood ourselves. For some of us, this experience might have been fun and seem carefree (at least through the benefit of hindsight). For others, childhood might have been a painful experience, one best left behind. While people’s experiences of childhood are quite varied, when I ask my students to define the term child, they seem to have no trouble finding common adjectives. Words ranging from innocent, good, cute, pure, helpless, and vulnerable to mischievous, impulsive, ignorant, and selfish come up year after year. A close analysis of these terms reveals that they certainly do not apply to all children, and they actually fit the behavior of some adults. Note that these words connote either sentimental or pejorative views of young people, a caricature of a vast and diverse group. Advertisers and politicians frequently use these symbols in order to sell products or their political platforms.

But these words are not as benign as they might seem. Similar descriptors have historically been used to define women, people of color, and other minority groups to justify their inferior social status.4 Although most people now realize that one’s race, ethnicity, gender, or religion cannot be used to identify personality traits, we still often view children as sharing a set of stable characteristics. Children are a group easily stereotyped, sentimentalized, and misrepresented.

At the same time, there is a danger in viewing children as a singular group. Experiences of childhood are diverse and changing, yet our ideal of childhood in America (and adulthood, for that matter) is often based on white, middle-class, and usually suburban standards. If I’m not careful I can fall into this trap too, since this was my experience of childhood, growing up in a Midwestern suburb not too far from where the mythical Cleavers of Leave It to Beaver supposedly lived. Childhood is rooted in social, economic, and political realities; it is not a universal experience shared by all people of a certain age from the beginning of time. These realities, like the air we breathe, are often invisible, and thus this experience of childhood might seem normal to those who once lived it.

Certainly, each one of us can think of how children’s experiences are different now than in the past. But they are also different based on the circumstances of the present. For instance, a girl growing up in my old neighborhood today will likely have a very different experience if her family’s economic situation, ethnicity, and immigration status are different from mine. Across town, another girl of the same age who lost a parent and lives in public housing will have yet other experiences, as will the girl from another religious background who lives in a rural area miles away. Like snowflakes, no two experiences of childhood are exactly alike.

But we tend to define children as a unitary group and focus on how they are unlike adults. I know what you might be thinking—children aren’t adults. This is true, but some of the differences are not as clear-cut as we might think. Some children have significant family responsibilities and can always be counted on to be there for the ones they love. Some adults cannot. Some children are very serious and stressed out, while some adults are not. And we all probably know some adults who are financially dependent on others and anything but emotionally mature. Just as some grown-ups don’t meet the ideal definition of what it means to be an adult, many children don’t necessarily fit the stereotype of the child.

This is why we must strive to understand the varied experiences of childhood and to understand how children define their own reality, rather than simply how different they are from the dominant group. Just as the historical definition of women as less competent than men served to perpetuate male dominance, the social construction of childhood serves adult needs and reinforces adult power rather than best meeting the needs of young people. While young children are dependent upon adults in many ways, we tend to define them only by the qualities they lack rather than the competencies they possess.

David Buckingham, professor of education at the University of London, explains the danger of thinking about children as fragile and focusing only on adult protection. Instead, he argues, we need to work toward preparing children to face the realities of the world around them.5 Protection is a difficult idea to let go of—it sounds so noble and above reproach. But preparing rather than protecting empowers children to make their own decisions, armed with the necessary information. As much as some people might hope otherwise, shielding children from information in media is practically impossible; Buckingham urges adults to focus instead on preparing children to become empowered media consumers.

Children who know things adults believe they shouldn’t know challenge the notion of innocence and can seem threatening. Knowledge is the antithesis of innocence and is often seen as the antithesis of childhood itself. The “knowing” child, author Joe Kincheloe points out, is routinely portrayed as a threat in horror movies. For example, he describes the 1960s British film Village of the Damned, in which children can read adults’ minds; based on this perceived threat, the parents ultimately decide they must kill their own kids. Jenny Kitzinger notes in her study of abuse that a child who has knowledge about sex is often considered ruined and less of a victim than a naive counterpart.6 Withholding knowledge is central to maintaining both the myth of innocence and power over children, and this is at the heart of media fears. Media destabilize the myth of innocence and challenge adults’ ability to withhold knowledge from children. This is the real threat popular culture poses: rather than threatening kids themselves, popular culture often challenges adult control.

Our conception of childhood reveals a major contradiction between the value of knowledge and the luxury of innocence. It is often through media that adults confront the reality that children do not necessarily embody innocence as much as adults might hope. We struggle to maintain the sense that childhood means carefree innocence and blame popular culture for getting in the way. The more closely we examine both media and the way we conceptualize childhood, the better we will understand the fear surrounding this relationship. We also see how unclear the boundary between adulthood and childhood really is. Sometimes media help blur the line of demarcation; other times they expose the ambiguity.

We often perceive childhood innocence as a natural, presocial, and ahistorical state that all children pass through.7 Idealizing childhood as a time of innocence causes us to panic when children know more than some think they should. We place a great deal of blame for this loss of innocence on media, as if innocence were something that would stick around longer without popular culture. As we will see in the next section, “innocence” before the age of electronic media was likely to involve higher child mortality rates and an early introduction to hard work in factories, fields, and mills.

Childhood is constantly shifting and changing, defined according to the needs of society. The idea that childhood in the past was composed of carefree days without worry is a conveniently reconstructed version of history. This fantasy allows adults to feel nostalgia for a lost idealized past that never was. Experiences of children have changed, but popular culture is at best a minor player in the story.

What Really Changed Childhood?

There should be no doubt that children’s experiences of childhood change over time. In my own family history (and likely yours too), the differences become clear when we compare generations. I have a grandfather whose education ended in the eighth grade so he could work full-time in the family business, something not unusual for his peers during the 1920s. Of course, if my parents had taken me out of eighth grade to work in the 1980s, they would have been in big trouble. This isn’t because people in the 1920s didn’t care about children, but because the needs of many families were different at that time and child labor wasn’t as restricted. My grandfather was the seventh of eight children and lost his father in World War I, as did many children of his generation. Many like him were needed to contribute to their families to ensure basic survival.

By the time I came around, much had changed, both in my family and within American society as a whole. The country had gone through a period of tremendous economic growth, making children’s labor unnecessary. The passage of child labor and compulsory education laws made school attendance mandatory. And most important, the postindustrial, information-based economy created the need for a highly educated workforce. A lack of high school (and increasingly college) education would put economic survival in jeopardy for people of my generation. By contrast, my grandfather learned his family trade and eventually had his own business in the garment industry, something that would be more difficult today with the predominance of large retail chains and Internet commerce.

These generational differences had much more to do with economics than culture. Yes, the array of media available was vastly different in my grandfather’s day (and he took pleasure in buying me the stereo he never had), but popular culture did not alter the structural realities of either of our childhood experiences.

Not only have childhood experiences changed significantly over time, but the notion of the ideal childhood has, too. In fact, even the idea that there is a distinct period of the life course called “childhood” is a relatively recent development, according to historian Philippe Ariès, whose groundbreaking 1962 book, Centuries of Childhood: A Social History of Family Life, claims that childhood did not exist as a separate social category in Western culture before the seventeenth century. Based on his analysis of paintings, Ariès observes that children were painted as miniature adults, mostly wearing the same type of clothing and drawn in adult proportions. Little seemed to separate the social roles of adults and children at that time. Although historians have challenged Ariès on several points, his work clearly demonstrates that childhood was conceptualized very differently in the past than it is today.

Whereas Ariès’s focus was on the children of French aristocrats, historian Karin Calvert describes how colonial American childhood was not regarded as an ideal time of life, as it so often is today.8 She describes how high rates of infant mortality and childhood illness made childhood particularly risky, something to hurry up and survive rather than slow down and savor (or worry that it is over too fast). Childhood itself became associated with illness. A colonist entering the New World often met with danger, and growing old was a form of conquest.

Unlike today, when popular culture reveres all things youthful, maturity was highly regarded and looked forward to as a time of prestige. Think of the nation’s founding fathers and their white powdered wigs and white stockings, which added years to their appearance. Calvert goes on to say that by the early nineteenth century, American independence had changed the conception of childhood from a period of intense protection to one of greater freedom. She contends that coddling fell out of favor: just as overinvolvement of the mother country was seen as restrictive, parents were discouraged from being overprotective of their children. The belief was that children were made strong by a tough upbringing, while coddling only weakened them.

Calvert explains that during the Victorian era, when infant mortality rates began to fall, childhood evolved into a celebration of innocence and virtue. Families of wealth attempted to keep children pure by separating them from adult society, even from their own parents. Governesses and boarding schools attempted to prevent contamination from adults as long as possible. Childhood became an idealized time of life, reflected in advertisements, which used images of children to connote purity in products like food and soap.9

But the Victorian attempt to keep children away from the adult world was clearly available only to the affluent. For many children, carefree play and ignorant bliss mark neither past nor present experiences of childhood. Death was much more likely to be part of childhood in previous centuries, given high rates of infant mortality and childhood illness and shorter life expectancies. Historian Miriam Formanek-Brunell notes that nineteenth-century children’s doll play often involved mock funerals, reflecting anything but happy-go-lucky childhood experiences.10 It is only our recent conception that insists childhood should mean freedom from knowledge of the darker side of life.

For other families, childhood meant work at far younger ages than we see now in the United States—although children in developing countries frequently work for wages today. In nineteenth-century America, children in rural areas were needed on family farms, and even if they attended school, their labor remained a necessary part of the family economy. Learning a craft might have meant becoming an apprentice at age eight or nine. Children held in slavery were considered chattel and expected to work as well. By twenty-first-century standards, children working for wages may seem inhumane, but for many families it was economically necessary. Households required full-time labor for tasks like cooking, cleaning, and sewing, particularly in the decades before World War I, when poor and rural families were unlikely to have electricity. Because an adult was needed at home to do the work of maintaining the household, nearly 2 million children worked for wages in 1910.11

Working children often experienced a great deal of autonomy, especially those living in cities. As historian David Nasaw describes, city kids sold newspapers and shined shoes late into the night, as newspapers published evening editions.12 They kept a portion of their earnings for themselves but gave most to their parents, who were often dependent on the extra money their kids brought in. When reformers—mostly affluent white women who favored the idea that children should be protected from city life—attempted to get them into schools, many of these young peddlers resisted. Giving up their freedom and their incomes did not sit well with the kids, or with their parents who relied on their contributions.

Children’s wages were vital sources of income around the turn of the century, particularly for immigrant families, and constructions of the ideal childhood reflected this need. The useful child was regarded as a moral child, mirroring the adage “Idle hands are the devil’s workshop.” Work and responsibility were considered fundamental values for children, values that sociologist Viviana A. Zelizer notes date back to the Puritan ethic of hard work and moral righteousness in early colonial America. Work was viewed as good preparation for a productive adult life, while higher education remained the domain of elites. The industrial-based economy did not require a great deal of academic training from its labor force. Thus, receiving only an eighth-grade education, as my grandfather did, was not nearly as problematic in the first decades of the twentieth century as it is now.

Zelizer concludes that child labor “lost its good reputation” because children’s labor became less necessary due to rising adult incomes and the growing need for a more educated labor force.13 Compulsory education became more widespread in the early twentieth century, not just because it was more humane for children to be in school rather than factories, but because it became more economically necessary. The growth of automation reduced the need for children in the labor force. Increasing enrollment in public schools also stemmed from a desire to create a separate institution to keep children busy during the day in the interest of public safety, as the large number of immigrant children led to concerns about juvenile delinquency. Fearing that poor immigrants constituted a criminal class, reformers instituted compulsory education as a way to legally enforce social control of this group.14 Schools provided a way to Americanize children, keep them out of the labor force until needed, and remove them from the streets.

This was a defining moment in the history of American childhood: from this point on, adults’ and children’s lives became increasingly divided. Children and adults went from sharing tasks on family farms or the shop floor before the 1930s to spending more and more time isolated from one another, creating distinct cultures.

The Creation of Childhood as We Know It

In a way, childhood as we think of it today is rooted in the fallout of the Great Depression years of the 1930s. Historian Grace Palladino contends that the separation between adults and children intensified during the Depression, when adolescents were far more likely to attend high school than in years past due to the shrinking labor market. Children were all but expelled from the workforce. Whereas only about 17 percent of all seventeen-year-olds graduated from high school in 1920, by 1935 the percentage had risen to 42 percent.15 It was during this time that some of the earliest concerns about young people and popular culture emerged, too.

The shared space of high school led to the creation and growth of youth culture. Young people’s tastes in music, for example, grew to bear more resemblance to their peers’ than to their parents’. Palladino cites swing music as a major cultural wedge between parents and youth in the late 1930s. Parents complained that young people wasted their time listening to the music and were not as industrious as prior generations, a reflection of children’s exclusion from the labor force and their increased leisure time. This was particularly true following World War II, when economic prosperity coupled with mass marketing created even more distinction between what it meant to be a child, a teenager, and an adult.

The postwar economic boom fueled a consumption-based economy. Following strict rationing during World War II, the availability of consumer goods for both adults and children expanded dramatically, and it became patriotic to spend instead of conserve. Families could also carry more debt with the introduction of credit cards, and home mortgages required much smaller down payments than in prewar days. Increases in wages and the automation of household labor provided children with even more leisure time; this prosperity helped to create the new category called “teenager.”

Free from contributing to the family income, this young person had both more time and more money than his or her parents had a generation earlier. Producers created movies, television, and music with this large group in mind, particularly as baby-boom children reached spending age in the late 1950s. But perhaps most centrally, market researchers recognized children as a distinct demographic group. Palladino details how market-research firms that focused specifically on understanding youth culture emerged during the late 1940s to better sell products to this increasingly important consumer group. Thus began the perception of youth as a time for the leisurely consumption of popular culture.

Marketers sold the idea that postwar childhood and adolescence should be fun. Following the struggles of the Depression and World War II, children born during the baby-boom years were seen as symbols of a bright, new future. Childhood illnesses like polio were gradually conquered, and basic survival was no longer most parents’ major concern. Instead, happiness and psychological well-being, luxuries of prosperity, became central.

Rather than simply being a time of physical vulnerability, as in the colonial period, or moral vulnerability, as in the Victorian era, postwar childhood came to be defined as a psychologically vulnerable time. Following the popularity of Freud in the United States, parents not only were expected to produce healthy and productive children but were also charged with the responsibility of ensuring their psychological well-being. From a Freudian perspective, the adult personality is formed through childhood conflicts; if these conflicts go unresolved, neurosis or psychosis is likely to follow in adulthood. This places the burden of lifelong psychological health mainly on the mother, who, according to Freud, was central in these conflicts. The emphasis on children’s psychological health also supported a rigid gender ideology. Middle-class mothers, herded out of the paid labor force following World War II, held the lion’s share of responsibility for raising happy children, a relatively new mandate that would eventually suggest that parents—especially mothers—worry about their children’s media use.


The midcentury growth of suburbs also influenced the meaning and experience of childhood. Shifts from an agrarian to an industrial-based economy led to the growth of cities in the late nineteenth and early twentieth centuries, and following World War II the expansion of American suburbs altered both the experiences and the conceptions of childhood. With suburban life came the growing dependence on automobiles, often creating less mobility for young children dependent on parents for transportation and more mobility for teens who had access to cars. The car culture symbolized American independence: advertisements boasted of the adventures a car could offer on newly constructed superhighways.

Teenagers could also congregate away from parental supervision, listen to music, and visit drive-in movies on their own; in many ways the widespread availability of the automobile altered teen sexuality. Teens, now often free from the need to work to help their families, experienced less adult control, creating parental anxiety about their children’s access to the world around them.

Cultural scholar Henry Jenkins notes that political discourse increasingly described families as individual “forts,” or separate units striving to shield their children from the perceived harms of the larger community.16 In this approach to understanding childhood, children are considered to be under siege, while individual family homes and white picket fences serve as bunkers of suburban safety. The perceived outside dangers include not only unknown neighbors, but also popular culture. This view of childhood as being in danger from the outside world and in need of parental protection continues more than fifty years later, in spite of important social changes that have altered the realities of parenting and family life since that time.

Recently, the postwar era has been held up as an ideal, a benchmark against which childhood today is measured. This has more to do with adults thinking back on their own twentieth-century childhood experiences and on idyllic midcentury television shows than with reality. Although far fewer children lived in single-parent families and divorce was less common than today, this era was itself the product of specific economic, political, and social realities of the time.17 The prosperity after World War II, coupled with the strength of labor unions, meant that many more families could achieve and maintain middle-class status on one wage earner’s income. New homes in brand-new suburbs could be purchased with little money down, thanks largely to the GI Bill, which also made it possible for many returning vets to attend college for the first time in their family’s history. In many ways, the postwar years were golden.

But not for all. We forget about inequality when we romanticize the happy days of the 1950s.

Nostalgia for an allegedly carefree childhood of the past does not take into account the pervasive history of inequality in the United States. Economic prosperity was not shared by everyone: in 1955 African American families earned only fifty-five cents for every dollar white families earned.18 Those who mourn the loss of childhood innocence in the twenty-first century tend to ignore the struggles faced by many children of color. In previous centuries children born into slavery, for instance, were regarded as individual units of labor and sometimes sold away from their families. And in 1959, 55 percent of African American families lived below the poverty line; not only were most suburbs economically out of reach, but unfair housing practices kept suburbs white.19 Our collective nostalgia for this mythical version of childhood calls upon memories of Cleaver-like families, when divorce and family discord were unheard of. In reality it was during the 1950s that divorce rates started to climb, and the families of old that we revere existed mostly on television.

As we will see in Chapter 6, the 1950s was not the age of sexual innocence we often believe it to have been. Pregnancy precipitated many marriages in the 1950s, when the median age of marriage for women dipped to its lowest point in the twentieth century, down to twenty in 1950.20 We often think of teenage pregnancy as a relatively new social problem, one believed to be exacerbated by sexual content in media, but the reality is that it has been steadily decreasing. In 1950 the pregnancy rate for fifteen- to nineteen-year-olds was 80.6 per 1,000; by 2009 the rate had dropped to an all-time low of 39.1 per 1,000.21 The difference is that pregnant teenagers now are less likely to be married or to be forced into secret adoptions or abortions. Teens also have more choices, including using birth control, having abortions, or keeping their babies without getting married.

What has changed is our perception of teens and sex. Also changed is our idea of what it means to be a teenager: before the mid-twentieth century, people in their teen years often held adult roles and responsibilities, including full-time jobs and parenting. We have redefined the teenage years as more akin to childhood than adulthood, making previously normative behavior unacceptable.

So childhood in the past was not as innocent as our collective memory suggests. Nor was chewing gum or talking out of turn the biggest complaint adults had about children during that time, as a highly publicized but fabricated list claimed in purporting to show how benign children’s problems were in the good old days.22 People feared changes in youth then just as we do today: juvenile delinquency and promiscuity were major concerns even during this hallowed time, something we conveniently forget.

Perceptions of childhood now reflect adult anxieties about information technology, a shifting economy, a multiethnic population, and an unknown future. Not unlike the Victorian era, childhood innocence today is prized, and we often attempt in vain to remove children from the adult world. Parents are viewed as the guardians of both their children and the meaning of childhood itself. Those who permit children to cross over into adulthood are demonized, particularly if they are poor or members of a racial minority group. Many believe that childhood today ends too soon, with popular culture frequently cited as a cause of this “crisis.” Innocence is seen as a birthright destroyed by popular culture or ineffective parents. Yet we often overlook the realities of children’s experiences in both the past and the present that defy the assumption that childhood without electronic media was idyllic.

The Best Time to Be a Child?

Throughout the past three centuries, childhood has gradually expanded, as our economy has enabled most young people to delay entry into the paid labor force.23 We have also prolonged the time between sexual maturity and marriage, particularly as the onset of puberty happens sooner now for girls than in the past.24 It is only within the past century that such a large group of physically mature people has had so few rights and responsibilities and been considered emotionally immature, a luxury of prosperity.

So while we mourn the early demise of childhood, the reality is that for many Americans, childhood and adolescence have never lasted longer. At the beginning of the twentieth century, a large number of young people entered the labor force and took on many adult responsibilities at fourteen and earlier, compared with eighteen, twenty-one, or even later today. Childhood has been extended chronologically and emotionally, filled with meaning it cannot sustain. Contemporary childhood is charged with providing adults with hope for the future and remembrance of an idealized past. It is a complex and contested concept that adults struggle to maintain to offset anxiety about a changing world.

Although the news provides a steady diet of doom-and-gloom reports about young people, on the whole the news is good. High school and college graduation rates are at an all-time high.25 Youth violence has dropped considerably since the 1990s: the number of juveniles arrested for homicide fell 68 percent between 1994 and 2009, and juvenile arrests for any violent offense fell nearly 58 percent over the same period.26 The teen birthrate fell 37 percent between 1991 and 2009.27 According to the Centers for Disease Control and Prevention (CDC), fewer teens reported being sexually active in 2011 than in 1991, and those who were sexually active used condoms more often. Fewer were involved in fistfights or reported carrying guns in 2011 compared with the early 1990s, and young people were much more likely to wear seat belts and to avoid riding in a car driven by a drunk driver. The percentage attempting or contemplating suicide decreased steadily as well.28

As we will see in Chapter 9, the percentage of high school seniors who report drinking alcohol has been declining annually, as has drinking to intoxication.29 Rates of both consumption and intoxication are substantially lower than in the 1970s and 1980s, when today’s parents were likely teens themselves. Likewise, illegal drug use has declined since the 1970s and 1980s.

So in spite of public perception and fears that new media technologies are breeding a violent, sex-obsessed, hedonistic, and self-indulgent young generation, young people are mostly more sober, chaste, and well behaved than their parents were. Additionally, nearly 55 percent of teens volunteer, averaging twenty-nine hours of service each year.30

Certainly, some changes in the experiences of childhood can be attributed to media and technological changes, which young people often spend a lot of time using. For example, cell phones allow kids both greater freedom from and greater contact with parents. Kids can be physically tracked through Global Positioning System software embedded in their phones and called to return home. On the other hand, children can use online social networking to forge relationships with less parental intervention, and their regular mode of communication with friends might be very different from their parents’. Although many adults fear that playing video games or using the Internet will harm children, we forget that they also serve to prepare them to participate in a high-tech economy. Visual literacy has become more important in the past fifteen years, as video games and computers became staples in many homes that could afford them. The children we should be worried about are the ones who don’t have access to these new technologies.

Changes in childhood may be most apparent when we see kids constantly texting, but technology itself cannot single-handedly create change. The often hidden social conditions that alter experiences of childhood were also behind the creation of these new products; changes in the economy produce both the widespread use of new devices and the specific experiences of childhood. Media technologies are the icons of contemporary society; they represent and reflect what scares us most about the unknown future. We tend to see the most tangible differences and credit them with creating powerful social changes without considering other structural shifts. To understand changes in childhood, we must look beyond media.

Childhood has not disappeared. Instead, it is constantly shifting and mutating with the fluctuations in society. The perceived crisis in childhood is derived from the gap between the fantasy of childhood and the reality. We have filled the idea of childhood with our hopes and expectations as well as our fears and anxieties. We want childhood to be everything adulthood is not, but in reality adults and children live in the same social setting and have more experiences in common than adults are often comfortable admitting. Our economic realities are theirs; they suffer when parents lose their jobs, and they feel the effects of political conflicts, too. Although we would like to keep the realities of terrorism and violence away from them, unfortunately we cannot. For many young people, these are firsthand experiences, not mediated by television, movies, or popular culture at all.

If childhood has changed, it is because the world has changed. Rapid change can be very frightening, even if the changes have many positive outcomes. Social life has been shifting so rapidly in the past few years that yesterday’s technological breakthrough is tomorrow’s dinosaur, obsolete and useless. Changes in family structure and economic realities reduce adults’ ability to control youth. Automated households rarely require young people to perform lengthy chores to ensure the family’s survival, so they are not needed at home as much as they were a few generations ago. And many young people have access to more information now than they did in the past. Yes, this is partially due to media, but it also reflects changing attitudes: open discussion of topics like sexuality is much more prevalent than in generations past.

This does not mean that adults should ignore the challenges of childhood—in fact, many of the problems children face are overshadowed by the fear of media. For instance, an up-close look at the roots of problems often blamed on media, like youth violence and teen pregnancy, reveals that poverty, not media, is the common denominator.31 When communications scholar Ellen Seiter studied adult perceptions of media effects on children, she found that the middle class and affluent were the most likely to blame media for harming children and causing social problems.32 Lower-income people have enough experience with the reality of problems like violence to know that media are not a big part of the equation in their struggles to keep their children safe in troubled communities. Yet our continued response is to focus on the supposed shortcomings of parents and to see popular culture as enemy number one of childhood. Politicians often encourage this focus, making it seem as though popular culture matters more to children than food stamps and health care.

Ultimately, it is easier to blame media than ourselves for policies that fail to adequately support children. School levies are routinely rejected because we don’t want to pay more taxes or don’t trust the adults who control school budgets. Affordable, quality child care is so difficult to find because as a society we do not monetarily value people who care for children: those who do this work frequently earn less than minimum wage. It is not media that have changed childhood over the past century; it is our changing economy and the public’s reluctance to create programs that deal with the very real challenges children face.

Why We Blame Media Anyway

In spite of the fact that kids today are actually doing quite well by many measures, we worry anyway. Worries about the next generation are anything but new; as I discuss in the next chapter, fearing that young people are going downhill is a perennial concern. What is different now is that we have visual manifestations of these fears in the form of all kinds of new media.

In worriers’ defense, many people simply aren’t aware that kids are in less trouble than catchy news reports often suggest. It’s no wonder, then, that we focus on the most visible changes: one of the biggest transformations of the past century has been the growth of electronic media, which by their very nature command our attention. We have seen the development of movies, television, popular music, video games, the Internet, and social networking, each of which has received its share of public criticism.

New technologies elicit fears of the unknown, particularly because they have enabled children’s consumption of popular culture to move beyond adult control. Parents may now feel helpless to control what music their kids listen to, what movies they see, or what websites they visit. Over the past hundred years, media culture has moved from the public sphere (movies) to the private (television) to the individual (the Internet and social networking), each shift creating less opportunity for adult monitoring.

This is not to say that media content is unimportant, nor am I suggesting that parents ignore their children’s media use. These are important family decisions, but on a societal level media culture is not the root cause of social problems. Media do matter, but not in the way many of us think they do. Communications scholar John Fiske describes media as providing “a visible and material presence to deep and persistent currents of meaning by which American society and American consciousness shape themselves.”33 Media are not the central cause of social change, but they are ever present and reflect these changes and also bring many social issues to our attention.

Media have become an important American social institution intertwined with government, commerce, family, education, and religion. Communications scholar John Hartley asserts that media culture has replaced the traditional town square or marketplace as the center of social life. He and others argue that it is one of our few links in a large and increasingly segmented society, serving to connect us in times of celebration and crisis in a way nothing else quite can.34 In a sense media have become representative of society itself. The media receive the brunt of the blame for social problems because they have become symbolic of contemporary American society.

Media culture also enables young people to develop interests and identities separate from their parents’. The biggest complaint I have heard from parents is that their children like toys, music, movies, or television programs that the parents consider junk and therefore assume must have harmful consequences. This generational—and perennial—concern reflects adults’ attempts to exercise their power by condemning tastes that differ from their own sensibilities and to displace their fears of the future onto popular culture.

When we relentlessly pursue the idea that media damage children, we are saying that children are damaged. Adults have always believed that kids were worse than the generation before, dating back to Socrates in ancient Greece, who complained about children’s materialism, manners, and general disrespect for elders. Blaming the media is much like attempting to swim full force against a powerful riptide: you end up exhausted and frustrated and get nowhere. Understanding what is really happening will allow the swimmer to survive. Likewise, projecting our collective concern about both childhood and society onto media will not take us very far unless we use it as a starting point to better understand structural factors that have a much larger impact on young people’s well-being.

Notes

1. Joel Bakan, “The Kids Are Not All Right,” New York Times, August 21, 2011, http://www.nytimes.com/2011/08/22/opinion/corporate-interests-threaten-childrens-welfare.html; Beverly Beckham, “Childhood Lost to Pop Culture,” Boston Globe, November 7, 2005.
2. Jenifer Johnston, “Have We Poisoned Childhood?,” Sunday Herald (Glasgow), September 17, 2006; Hal Niedzviecki, “Can We Save These Kids?,” Globe and Mail (Toronto), June 5, 2004.
3. Both Kansas and Vermont have statutes allowing children as young as ten to be transferred to adult criminal court.
4. For a comparison between children’s and women’s disempowerment, see Barrie Thorne, “Re-visioning Women and Social Change: Where Are the Children?”
5. David Buckingham, After the Death of Childhood: Growing Up in the Age of Electronic Media.
6. Joe Kincheloe, “The New Childhood: Home Alone as a Way of Life”; Jenny Kitzinger, “Who Are You Kidding? Children, Power, and the Struggle Against Sexual Abuse,” 168.
7. Henry Jenkins, “Introduction: Childhood Innocence and Other Myths,” in The Children’s Culture Reader, edited by Jenkins.
8. Karin Calvert, Children in the House: Material Culture of Early Childhood, 1600–1900.
9. Stephen Kline, “The Making of Children’s Culture,” in The Children’s Culture Reader, edited by Jenkins.
10. Miriam Formanek-Brunell, Made to Play House: Dolls and the Commercialization of American Girlhood, 1830–1930 (New Haven, CT: Yale University Press, 1993).
11. Viviana A. Zelizer, “From Useful to Useless: Moral Conflict over Child Labor,” in The Children’s Culture Reader, edited by Jenkins, 81.
12. David Nasaw, Children of the City: At Work and at Play.
13. Zelizer, “From Useful to Useless,” 84.
14. Anthony Platt, “The Child-Saving Movement and the Origins of the Juvenile Justice System,” in Juvenile Delinquency: Historical, Theoretical, and Societal Reactions to Youth, edited by Paul M. Sharp and Barry W. Hancock, 2nd ed. (Upper Saddle River, NJ: Prentice-Hall, 1998), 3–17.
15. Grace Palladino, Teenagers: An American History; US National Center for Education Statistics, 120 Years of Education: A Statistical Portrait, 1900–1985 (Washington, DC: Digest of Education Statistics, annual).
16. Jenkins, “Introduction,” 4.
17. Judith Stacey, Brave New Families: Stories of Domestic Upheaval in Late-Twentieth-Century America.
18. US Census Bureau, Statistical Abstract of the United States, Tables P60-200 and P60-203, in Current Population Reports (Washington, DC: Government Printing Office, 1999).
19. James Heintz, Nancy Folbre, and the Center for Popular Economics, The Ultimate Field Guide to the U.S. Economy (New York: New Press, 2000).
20. US Bureau of the Census, Statistical Abstract of the United States, Current Population Reports, Series P20-537 (Washington, DC: Government Printing Office, annual).
21. National Center for Health Statistics, Natality, Vital Statistics of the United States (1937–), Birth Statistics (1905–1936) (Washington, DC: US Bureau of the Census); Joyce A. Martin et al., “Births: Final Data for 2009,” National Vital Statistics Reports (Hyattsville, MD: National Center for Health Statistics) 60, no. 1 (2011), http://www.cdc.gov/nchs/data/nvsr/nvsr60/nvsr60_01.pdf.
22. A fake list of the top-ten biggest problems in schools of the 1990s (robbery, drug abuse, pregnancy) compared with the supposed top-ten problems in 1940 (gum chewing, running in the halls, improper clothing) was widely distributed and treated as real in spite of evidence otherwise. For a discussion, see Mike Males, Framing Youth: Ten Myths About the Next Generation.
23. James E. Côté and Anton L. Allahar, Generation on Hold: Coming of Age in the Late Twentieth Century.
24. Marcia E. Herman-Giddens et al., “Secondary Sexual Characteristics and Menses in Young Girls Seen in Office Practice: A Study from the Pediatric Research in Office Settings Network,” Pediatrics 99 (April 4, 1997): 505–512.
25. Camille L. Ryan and Julie Siebens, Educational Attainment in the United States: 2009, Current Population Reports, 2012 (Washington, DC: US Bureau of the Census), http://www.census.gov/prod/2012pubs/p20-566.pdf.
26. Howard N. Snyder and Melissa Sickmund, Juvenile Offenders and Victims: 2006 National Report (Washington, DC: US Department of Justice, Office of Justice Programs, Office of Juvenile Justice and Delinquency Prevention, 2006), 64, http://ojjdp.ncjrs.org/ojstatbb/nr2006/downloads/chapter3.pdf; C. Puzzanchera, B. Adams, and W. Kang, “Easy Access to FBI Arrest Statistics, 1994–2009,” 2012, http://www.ojjdp.gov/ojstatbb/ezaucr/ (juvenile homicide arrests: 3,660 in 1994, 1,170 in 2009; all violent arrests: 148,430 in 1994, 85,890 in 2009).
27. Martin et al., “Births: Final Data for 2009.”
28. Department of Health and Human Services, “Trends in the Prevalence of Sexual Behaviors,” in National Youth Risk Behavior Survey: 1991–2011 (Washington, DC: Centers for Disease Control and Prevention, 2012), http://www.cdc.gov/healthyyouth/sexualbehaviors/index.htm; Department of Health and Human Services, Youth Risk Behavior Surveillance—United States, 2011 (Washington, DC: Centers for Disease Control and Prevention, 2012), http://www.cdc.gov/mmwr/pdf/ss/ss6104.pdf.
29. Monitoring the Future Study, “Long-Term Trends in Lifetime Prevalence of Use of Various Drugs for Twelfth Graders” (Ann Arbor: Survey Research Center, University of Michigan, 2012), http://monitoringthefuture.org/data/11data/pr11t15.pdf.
30. “Youth Helping America: The Role of Social Institutions in Teen Volunteering” (Washington, DC: Corporation for National and Community Service, 2005), http://www.polk-fl.net/community/volunteers/documents/servicelearning/FactSheet_ROSITV.pdf.
31. For a discussion, see Mike Males, The Scapegoat Generation: America’s War on Adolescents.
32. Ellen Seiter, Television and New Media Audiences, 58–90.
33. John Fiske, Media Matters: Everyday Culture and Political Change, xv.
34. John Hartley, The Politics of Pictures: The Creation of the Public in the Age of Popular Media. For further discussion, see Daniel Dayan and Elihu Katz, Media Events: The Live Broadcasting of History.


CHAPTER 3


Does Social Networking Kill? Cyberbullying, Homophobia, and Suicide

Is the new digital world fraught with danger? It is easy to understand why many people would be concerned about the uncharted waters we seem to be traversing online. Will Facebook change the nature of friendships? Might texting alter the ability of its users to construct complete sentences? Has the distinction between public and private eroded, thanks to social networking? And will young people post too much online and not consider the consequences of their actions?

These are just a few of the many questions that our digital environment has created. As I discussed in the first chapter, with the advent of any new medium comes anxiety about what kinds of changes it will create and the potential harms we might not yet anticipate. Moral panics are especially likely when a new form of media appears and its users are primarily those who seem particularly vulnerable or threatening (or both). New media do create cultural changes, too, in this case shifting the way that people communicate and navigate relationships.

Having grown up before the use of social media took off—and in many cases before the widespread use of the Internet—many adults are especially concerned about young people’s use of these new forms of communication. Texts and tweets are harder to monitor than the old-fashioned landline telephone and mail, making it easier for kids to circumvent parental control at times. And perhaps most alarming to parents, these new media may make it more challenging to shield their children from others. The idea that parents can put a wall up between their families and the outside world was never quite a reality, but the new media environment makes this inability abundantly clear.

Perhaps parents’ and critics’ biggest fear is that new media will be harmful to young people, a fear heightened by national news coverage of several tragedies involving young people who committed suicide over the past few years. A common thread in the coverage places at least some of the blame on “cyberbullies” who allegedly harassed the victims on social networking sites, taking old-fashioned teasing to a new and very public level and leading critics to ponder, and parents to fear, the threat of new technology.

With headlines like “Mean Girls: Cyberbullying Blamed for Teen Suicides” (ABC News), “As Bullies Go Digital, Parents Play Catch-Up” (New York Times), and “Death by Cyber-Bully” (Boston Globe), it is easy to understand why concerns about cyberbullies would rise. CBS News ran a story titled “Phoebe Prince: ‘Suicide by Bullying’; Teen’s Death Angers Town Asking Why Bullies Roam the Halls.” A USA Today column titled “Bullying: Are We Defenseless?” implores readers to “find a way to save the children.” Not only do parents want to protect their kids from harm at school, but new technology allows meanness to pervade new spaces. The Washington Post reported in 2010 that “the Internet’s alarming potential as a means of tormenting others … raises questions whether young people in the age of Twitter and Facebook can even distinguish public from private.” “It’s just a matter of when the next suicide’s going to hit, when the next attack’s going to hit,” says attorney Parry Aftab in the article, echoing concerns that arose about terrorism after September 11, 2001.1

Of course, children aren’t alone in using the Internet to defame others. The very nature of the Internet allows for uncensored and seemingly anonymous speech, giving angry, often hateful websites free rein. Visit nearly any website that allows comments and you will see a range of sometimes abusive language, perhaps nowhere more so than on sites that are political in nature. Let’s face it: people of all ages can be really rude online.

But cyberbullying involving young people strikes a nerve; most of us have had the experience of being teased, at least mildly, as children, but taunts typically ended after school let out. New media like the Internet and smartphones are extremely difficult to monitor, so it is easy to understand why social networking sites, texting, and other online communication would create concerns. New media reflect a brave new world of sorts, where something as common as a schoolyard taunt takes on new meaning when it happens electronically. Spoken words may fade into the past eventually, but electronic messages never really die.

Not only is there fear that kids will communicate inappropriately with one another, but the Internet also seems to make it easier for strangers to interact with children, creating new concerns about cyber predators. As I wrote in Kids These Days: Facts and Fictions About Today’s Youth, fears of “stranger danger” and kidnapping coincide with children’s use of social networking tools. Stories about kidnappings or sexual assaults highlight the potential dangers adults could pose to young people online. This fear was doubtless heightened by NBC’s Dateline series “To Catch a Predator.” The hidden-camera segments, which aired from 2004 to 2007, featured producers posing as young teens online in order to catch adult men who came to a house, presumably to have sex with a minor. Seemingly ordinary men appeared, suggesting that Internet predators could be anyone, anywhere, just a click away.

Besides concerns about abuse from peers and predatory adults, the shift into the electronic age has also sparked concerns that the Internet itself is dangerous. Stories of marriages ruined by too much online gaming, shopping, or Facebook friending suggest that the very existence of the Internet can be detrimental to our health and relationships. Talk of “Internet addiction” as a new form of mental illness also dominates self-help talk shows, despite the fact that it is not currently classified as an illness by the American Psychiatric Association.


Is the Internet putting people at greater risk for suicide, depression, kidnapping, and sexual abuse? While questions like these might be great fodder for cable-news pundits and talk-show hosts, the concern reflects anxieties about new media, not actual increases in the feared behaviors. Stories like the one about a Chinese teen who sold his kidney to buy an iPhone and iPad may make us shake our heads about the impact new media have on young people, but the relationship most teens have with new technology is far more mundane than such extreme examples suggest.2

In this chapter, I explore two central fears surrounding new media: first, that cyberbullying can push people to commit suicide, and second, that online predators routinely use the Internet to lure victims of kidnapping, sexual abuse, or both. By comparing the headlines to data on these problems, we will see that although these new communication technologies have become much bigger parts of many people’s lives, the problems they are often associated with are in fact not getting worse. The stories we hear may be shocking and memorable, but however powerful as examples, they are not necessarily representative of a larger trend of increased danger to young people.

“Cyberbullicide”: Familiar Tragedies

You probably have heard many of their names: Tyler, Megan, Amanda, Phoebe, and Jamey, to list a few. These are the names of young people who committed suicide, apparently after enduring online harassment. Their stories became regular features on national news programs and talk shows, coming to symbolize the scary new Internet world we inhabit.

When news of Tyler Clementi’s tragic jump from the George Washington Bridge made headlines in the fall of 2010, it really hit home among students in my classes. Like Clementi, many of my students were eighteen-year-old college freshmen adjusting to being away from home for the first time, and some were dealing with a new roommate they didn’t particularly like.

Clementi was a student at Rutgers University who had apparently requested a roommate change after discovering that his roommate, Dharun Ravi, had set up a webcam to watch him become intimate with another man in their room. After Ravi streamed a second encounter live online, Clementi committed suicide. Ravi was charged with invasion of privacy, bias intimidation, and other counts relating to a cover-up. In early 2012 he was found guilty of intimidation, witness tampering, and tampering with evidence. He could have faced up to ten years in prison and deportation to India but was sentenced to thirty days in jail (of which he served twenty) and three years of probation, and he was ordered to pay eleven thousand dollars in restitution.3

Ravi appeared to embody the role of cyberbully. His defense attorney attempted to frame the webcam spying as a juvenile prank, stating that “he hasn’t lived long enough to have any experience with homosexuality or gays” and claiming the incident was not a hate crime as charged. News coverage portrayed Ravi as immature but also as cruel and dismissive of the seriousness of the charges; he even appeared to fall asleep during the closing arguments of his trial.4

Text messages and Twitter entries became evidence introduced at trial, highlighting the break from traditional forms of evidence. News stories translated text-speak for their presumably older readers (idc means “I don’t care,” rents means “parents,” for instance).5 According to an Associated Press report, the roommates checked out each other’s Internet postings before school began. Both wrote negative comments about the other online.6

But perhaps the most central issue in this case, beyond the new forms of media it involved, was homophobia. Were Ravi’s actions meant to embarrass Clementi because he was gay? According to reports, friends denied that Ravi was homophobic, as did Ravi himself in a text to Clementi after the spying incident.7

The case raises questions about the meaning of homophobia and whether cyber-spying constitutes a hate crime. Broken down, the term homophobia translates to fear of homosexuality. This fear can manifest in many forms, including violence, harassment, exclusion, or discomfort. Homophobia exists on a continuum: people may feel homophobic without being openly hostile toward gay and lesbian individuals. Homophobia is a central part of the concept of hegemonic masculinity, a narrowly constructed idea of what it means to be a “real man.” Rigid definitions of manhood demand heterosexuality, and thus antigay slurs are a prime way that men degrade one another. In fact, homophobia affects men regardless of their sexual orientation, since it is used as both a put-down and a way to enforce strict adherence to hegemonic masculinity.

It’s hard to imagine the Rutgers case getting so much global attention if Clementi had been with a woman in his dorm room. Even before a jury agreed that Ravi’s actions constituted bias, the issue of sexuality was a large part of the case’s coverage. For instance, openly gay talk show host Ellen DeGeneres spoke out publicly about Clementi’s suicide, calling bullying an “epidemic” and stating that “the death rate is climbing.” Even blogger Perez Hilton, known for often inflammatory online posts about celebrities, reconsidered his approach after Clementi’s death.8

This incident happened at a time when several other young people who had been teased by classmates about their sexual orientation—or perceived sexual orientation—made national news after committing suicide. Jamey Rodemeyer, a fourteen-year-old boy from Buffalo, New York, was bullied about his perceived sexual orientation and later committed suicide, garnering coverage from NBC’s Today, CNN, the New York Times, the Huffington Post, and other national news outlets.9

In response to the many highly publicized stories of bullied young people, in 2010 Dan Savage and Terry Miller founded the It Gets Better Project, a website where adults assure young gays and lesbians that they will find acceptance and encourage them not to despair over the teasing or discrimination they may currently face. President Barack Obama and Secretary of State Hillary Rodham Clinton, as well as other prominent political leaders in the United States and abroad, have participated in the project. Not only can the Internet be used to harass others, but it clearly can also help people who may feel isolated and alone find a sense of community and acceptance.

Social networking and the Internet are relatively new ways of expressing homophobia. A Rutgers instructor, quoted on nj.com, claimed, “Intolerance is growing at the same time cyberspace has given every one of us an almost magical ability to invade other people’s lives.”10 Yet it is important to recognize that young people are certainly not alone in perpetuating homophobia; political leaders often reinforce the idea that it is okay to discriminate based on sexual orientation. A Michigan antibullying law faced opposition from conservative groups that argued that laws preventing antigay comments violate free speech and the right to express religious beliefs. A compromise included a “moral and religious clause” that allows students, for example, to tell others they will go to hell because of their sexual orientation. The bill passed in 2011.11

Is intolerance increasing, and is cyberbullying against gay and lesbian young people an epidemic with growing death rates, as reaction to Clementi’s suicide suggested?

Realities of Suicide and Cyberbullying

It appears that lesbian, gay, bisexual, and transgender teens are more likely to experience cyberbullying than their peers, according to a few recent studies. A 2009 study of just under twenty-five hundred students in a Colorado county found that LGBT youth were more than twice as likely as those who identified as heterosexual to report “electronic harassment” (nearly 30 percent versus 13 percent).12 In a 2010 study of eleven- to eighteen-year-olds, nonheterosexual respondents reported a greater likelihood of being bullied both online and offline—but also a greater likelihood of admitting to bullying others online and offline.13

However, there is no evidence that LGBT youth are bullied more now than in the past. If anything, it is likely that growing awareness and acceptance of gays and lesbians over the past few decades has stemmed some of the harassment common in earlier eras, when teachers and administrators might have been less likely to intervene. Legal changes following a 1999 US Supreme Court decision also mean that schools can be liable if they do not make reasonable efforts to protect students from sexual harassment.14

Although there is evidence that LGBT youth do experience more harassment than their peers, there is no solid evidence of a new epidemic, or that LGBT youth suicides are significantly higher nationwide. Instead, we had an “epidemic” of tragic cases that became national news stories.

Because death certificates do not include sexual orientation, we just don’t know for sure if suicide rates for LGBT youth are higher on a national scale. Despite this limitation, many people have seen a statistic claiming that 30 percent of all youth suicides involve LGBT individuals. As a 2008 Suicide Prevention Resource Center report explains, this number emerged from a ballpark estimate contained in a 1989 Health and Human Services report rather than an observed trend.15

This 30 percent statistic has become what sociologist Joel Best calls a “mythic statistic,” a statistic that takes on a life of its own, spreading through news reports to become taken for granted as common sense.16 For gay-rights activists, this statistic seems to provide proof of the seriousness of homophobia in American society and creates a sense of urgency to prevent harassment.

It may be that LGBT youth are more likely to commit suicide than their peers; we just don’t have the data to know for sure. We do have data from several small studies on suicide attempts and suicidal ideation (thoughts about suicide) suggesting that LGBT individuals are more likely than their peers to attempt and to think about suicide. Exactly how much more varies from study to study. Because acceptance of LGBT individuals varies significantly across regions of the country, the social context of any given community likely influences each study’s outcome, making it difficult to generalize nationally from these isolated studies.17

Although we don’t know the sexual orientation of suicide victims nationwide, we do know their ages. One major misconception is that teens are the group most prone to suicide. In fact, they are among the least likely to commit suicide. According to data from the Centers for Disease Control and Prevention (CDC), forty-five- to fifty-four-year-olds were the group most likely to commit suicide in 2009 (the most recent year for which data are available), with 19.3 suicides per 100,000. The age groups with the lowest suicide rates? Five- to fourteen-year-olds (0.7 per 100,000), followed by fifteen- to twenty-four-year-olds (10.1 per 100,000). Rates for young people have been essentially flat for the past decade. But suicide rates have crept up slightly for thirty-five- to sixty-four-year-olds, while declining slightly for those sixty-five and older.18

Ironically, children, teens, and young adults are the least likely to take their own lives but are presumed to be the most at risk. This might be because we routinely hear that suicide is one of the leading causes of death for teens, behind car accidents and homicide. Though that statistic is true, the good news is that teens are unlikely to die at all compared to their older counterparts, who are not only more likely to commit suicide but also more likely to succumb to heart disease, cancer, and other ailments.19

If anything, we might wonder about a “suicide epidemic” among forty-five- to fifty-four-year-olds, whose rates rose from 13.9 per 100,000 in 1999 to 19.3 per 100,000 in 2009, an increase of nearly 39 percent. But concerns about middle-aged Americans’ mental health are rarely expressed in dramatic news stories like the ones about young people who have been cyberbullied.

Has Bullying Gotten Worse?

News reports about bullying have become widespread in recent years, with cable news devoting hours of coverage to the issue. In 2011 CNN aired programs like Stop Bullying: Speak Up and Bullying: It Stops Here, the heightened coverage implying that there is a new crisis.20 But is there?

The Bureau of Justice Statistics, together with the National Center for Education Statistics, publishes an annual report titled Indicators of School Crime and Safety that includes bullying as a measure. With bullying described as being called names, insulted, or made fun of; being pushed, tripped, or spit on; being excluded from activities; or being threatened with physical harm, about 28 percent of twelve- to eighteen-year-old students reported at least one of these experiences at school in 2009 (the most recent year of data available), a decline from 2007 and the same percentage as in 2005.21

Bullying clearly exists on a continuum; being called a name by one classmate one time is a very different experience from being harassed every day by many students, so it is difficult to measure the intensity of bullying from this study. However, only 6 percent reported that they were threatened with bodily harm in 2009.22

Cyberbullying seems like a new, more menacing form of bullying, like a mutating virus that is more dangerous than the one from which it originates. Just as bullying can take many forms of varied intensity, so can cyberbullying. A 2007 Pew Research Center publication describes cyberbullying as “a range of annoying and potentially menacing online activities—such as receiving threatening messages; having their private emails or text messages forwarded without consent; having an embarrassing picture posted without permission; or having rumors about them spread online.”23

According to the Indicators of School Crime and Safety report, only 6 percent of students twelve to eighteen reported being cyberbullied. Other studies have come up with higher estimates; a 2011 nationally representative survey conducted by the Pew Internet and American Life Project found that 8 percent of all twelve- to seventeen-year-olds reported having been bullied online, and 12 percent reported being bullied in person. A 2010 Pew Internet and American Life study also found that young people were far more likely to be bullied at school than online (31 percent versus 13 percent).24 Both studies suggest that only a small minority of young people have had this experience. According to the 2011 Pew study, most respondents thought that others were mostly kind online, although twelve- to seventeen-year-olds were less likely to respond this way than adults eighteen and over (69 percent compared with 85 percent).25

Other studies, like a 2007 National Crime Prevention Council study, found that 43 percent of thirteen- to seventeen-year-olds reported having been cyberbullied; another study claimed that 72 percent of all students had been cyberbullied. Justin W. Patchin and Sameer Hinduja, authors of Cyberbullying Prevention and Response: Expert Perspectives, reviewed several surveys and found an average of 24 percent overall, the variation largely a result of narrower or wider definitions of cyberbullying.26 The more minor the behavior included in the definition, the larger the number of people likely to have had the experience. There is a big difference between having an e-mail or text forwarded without one’s knowledge once or twice and having hateful taunts or doctored pictures about oneself repeatedly posted on Facebook.

Although the creation of a new word seems to indicate a different concept, people who experience cyberbullying often experience bullying offline, and both experiences have a lot in common. A 2010 study of middle school–age youth found that both on- and offline bullying victims and offenders were more likely to have attempted suicide than those not involved in bullying of any kind. The authors of the study note that “it is unlikely that experience with cyberbullying by itself leads to youth suicide. Rather, it tends to exacerbate instability and hopelessness in the minds of adolescents already struggling with stressful life circumstances.”27

That same year, the National Institutes of Health (NIH) reported on a study that found that cyberbullying victims had higher rates of depression than victims of traditional bullying and than those who cyberbully; this contrasts with traditional face-to-face bullying, where both victims and offenders tend to show elevated rates of depression.28 Perhaps those who experience cyberbullying feel even less of a sense of control over their environment, one that now extends into cyberspace.

Although it is problematic to presume that the Internet, social networking, or even cyberbullying alone is a primary cause of suicide, the Internet and new electronic communications create additional complexities in our lives and relationships. Yet it is important to note that suicide rates among young people have not been increasing.


So why is bullying so prevalent in the news today, even described as a crisis, when there is no evidence it is actually getting worse? As I discussed in Chapter 2, what has shifted in recent years is the construction of childhood and adolescence as periods of heightened vulnerability. As parents have fewer children, increasingly later in life, there is more focus on protecting children emotionally than in previous generations. Beyond concerns about bullying, so-called helicopter parenting extends well into early adulthood, as many parents seek to care for their kids’ emotional needs even while their children are in college and beyond.29 Colleagues tell me of parents calling to try to get their kids added to closed college courses or to complain about a grade their young adult student received on a paper. It is this heightened level of caretaking, rather than any actual increase in bullying, that has shifted most over time.

Suicide is a far more complex behavior than the cyberbullying stories might have us believe at first glance. For instance, girls are more likely to report being cyberbullied, according to a variety of studies, yet males are much more likely to commit suicide.30 And middle-aged adults have the greatest likelihood of committing suicide. Despite the dramatic rise of social networking, texting, and Internet use in general, there have been no notable changes in suicide rates for young people.

Adult Cyber Predators

Stories of cyberbullying tend to focus on young people as the primary predators, too immature to exercise good judgment about how to treat others. Headlines like “Cyber Bullies Harass Teen Even After Suicide” (Huffington Post) and “The Untouchable Mean Girls” (Boston Globe) paint a picture suggesting that amoral youth are the core threat to their peers.31

Adults aren’t always so nice to each other, either. According to a 2010 survey, 35 percent of workers reported experiencing some kind of bullying at work, defined as “sabotage by others that prevented work from getting done, verbal abuse, threatening conduct, intimidation, and humiliation.” Nearly two-thirds of bullies are men (62 percent), while more than half of the victims are women (58 percent), suggesting an important gender dynamic in the workplace. The Occupational Safety and Health Administration (OSHA) notes that 2 million Americans report being the victims of workplace violence each year as well.32

Of course, it’s not just young people who use the Internet to harass others. Whereas news reports often portray parents as hapless observers, struggling to understand the twenty-first-century world that their children inhabit, adults can be cruel online as well. For instance, a fifty-one-year-old commodities trader was sentenced to twenty-eight months in jail in 2012 for posting an “execution list” of dozens of Securities and Exchange Commission officials on his Facebook page. In a 2011 National Science Foundation report, a forty-year-old described being harassed online by a former high school classmate, who sent pornographic messages to his employer. A seventy-seven-year-old singer-songwriter allegedly received thousands of harassing e-mails from his fifty-five-year-old former manager, in violation of a restraining order barring her from contacting him further. And the Arizona legislature has proposed a law that would define “annoying” or “offensive” online posts as criminal acts, similar to prank phone calls.33

A 2006 incident was particularly shocking because it involved an adult bullying a child online, and it became national news. Megan Meier was thirteen years old when she met a boy online—or so she thought. Through her MySpace page, she corresponded with a boy she believed was named Josh for a couple of weeks before he turned on her and allegedly told her, “The world would be a better place without you.” Soon after, Megan committed suicide.

There never was a boy named Josh, though. He was fabricated by Lori Drew, the forty-seven-year-old mother of one of Megan’s former friends, who lived down the street. Megan had recently changed schools and made new friends, and Drew allegedly wanted to retaliate against Megan for not continuing the friendship with her daughter and to see if Megan gossiped about her daughter online.

Megan had struggled with depression, occasionally spoke of suicide, and took antidepressants—something Drew knew about before creating the fake boyfriend.34 Drew was charged with and found guilty of three misdemeanor computer crimes in federal court, but the conviction was later thrown out on appeal.35

Although cases like this one appear to be rare, we are more likely to hear of adults who create fake profiles to lure young people into sexual contact. As in Dateline’s now defunct “To Catch a Predator” series, stories of young people being led into danger online still echo across the airwaves. In April 2009 Oprah aired “Alicia’s Story: A Cautionary Tale,” about Alicia Kozakiewicz, who in 2002, at age thirteen, met a thirty-eight-year-old man online who abducted, beat, tortured, and raped her. Later that year, the show featured similar stories of young girls lured by online predators.36 Kozakiewicz has used her horrific ordeal to speak out about online predators and is currently active in helping to create new laws to crack down on abusers.

Although news reports occasionally highlight other stories of young people meeting strangers online and becoming victims of crime, these events are fortunately rare and are not limited to teens. In 2008 a twenty-four-year-old woman was killed when she answered a Craigslist ad for a nanny position. And in 2009 Julissa Brisman, twenty-six, working as a masseuse, was murdered in Boston by the man who became known as the “Craigslist Killer.”37 Countless stories of online dating gone awry and the numerous scams perpetrated online serve as reminders that we all should be wary of those we encounter online.

But statistically, those we know offline pose a much greater threat.

Cyberreality: Safer than Ever?

Most of the time, violence has nothing to do with new media or social networking. Since Internet use became widespread in the mid-1990s, violent crime has dropped dramatically in the United States. Between 1991 and 2010, violent crime fell by 47 percent; from 2001 to 2010, the rate declined 13 percent. Over the past two decades, homicides in the United States declined 50 percent.38 Although new media certainly cannot be credited for much, if any, of these declines, the numbers remind us that this is a much safer country than it was in the recent past.

When people are victims of violence, the perpetrator is often someone they know reasonably well. According to the 2010 FBI Uniform Crime Reports, about 44 percent of homicide victims were killed by family members or acquaintances; just 12 percent were killed by strangers (in the remaining 44 percent of cases, the relationship between victim and offender was unknown). For child victims of maltreatment, 79 percent of perpetrators are their parents.39

Victims of other violent crime likely know the perpetrators as well. The National Crime Victimization Survey, a nationally representative survey of Americans twelve and older, found that in 2010 strangers were the offenders in just 39 percent of incidents (a decline from 44 percent in 2001). Female victims were much more likely than males to know their assailants (64 percent versus 40 percent). In cases of rape or sexual assault, 73 percent of females knew their attackers.40

The percentage is similar for juvenile victims. According to a 2008 Office of Juvenile Justice and Delinquency Prevention report, 74 percent of perpetrators were family members or acquaintances; the report also estimates that sexual assaults of children have declined since the 1990s. (NCVS data show that incidents of rape have declined by 24 percent nationwide since 2001.)41

Although data are not collected as regularly on young people who run away or are kidnapped, previous studies suggest that about one in five minors who runs away from home has been physically or sexually abused, and nearly as many have substance abuse problems. More than three-quarters of abductions are committed by family members—typically a noncustodial parent—but of those kidnapped by a nonfamily member, more than half are taken by an acquaintance (a neighbor, family friend, or babysitter, for instance).42

Not only are we safer offline today than before the rise of social networking, but people are also gradually learning to protect their privacy online. According to a 2007 Pew Internet and American Life study of teens, the vast majority—91 percent—report using social networking only to talk with people they already know. Two-thirds try to make their profile visible only to people they know; nearly a third have been contacted by a stranger online, and most of those contacted (65 percent) reported that they ignored the stranger. Just 7 percent of all teens who are online reported being scared by an online encounter with a stranger.43

Navigating the Cyber Age

Yes, there are plenty of pitfalls online, and people of all ages are still learning to navigate them. Whether it is writing nasty comments about schoolmates or coworkers on Facebook, sending texts or e-mails we later regret, or posting photos that we wouldn’t want the world to see, many people are still figuring out that although we might feel like we have private space electronically, that is mostly an illusion.

One of the best pieces of advice I received as the electronic age dawned was to send only e-mails, texts, voice mails, or posts that I wouldn’t mind having introduced as evidence in court. That sounds severe, but once sent, electronic communication has a way of taking on a life of its own, beyond our control.

Part of the challenge of navigating an online identity is that as users of social networking, we are commodities rather than customers. Companies like Facebook, LinkedIn, and Google use our information to serve advertisers and have been criticized by privacy advocates for not always being transparent about how that information is used.44 Facebook’s frequent changes often switch users’ privacy options, making it difficult to maintain desired settings without manually resetting them.

Love it or hate it, social networking is here to stay. Online platforms have become central in many people’s lives, not replacing offline contact by any means, but becoming integral to communication for work and socializing. As laws and etiquette struggle to keep up with ever-evolving technology, it is understandable that young people’s use of social networking tools would be a source of concern. But the danger is not quite as severe as some dramatic news accounts might have us believe.

Concerns about bullying and suicide can be channeled to address the limited access to mental health care that many people experience. Whether they are bullied online, at school, or at work, many people lack the resources or access to receive needed mental health care. According to the Substance Abuse and Mental Health Services Administration (SAMHSA), private health insurance is the most common way people pay for mental health care; those without health insurance have more limited access to mental health services. SAMHSA estimates that the percentage of the population whose need for treatment goes unmet is nearly as high as the percentage who receive mental health care. Perhaps not surprisingly, the groups with the highest unmet need tend to be young adults eighteen to twenty-five, the unemployed, and those without health insurance.45

There’s no doubt that some people have chosen to use new forms of electronic communication to express hostility and hatred, a reality we are still learning to navigate individually and legally. Rude comments written on a public bathroom wall can be cleaned or painted over; electronic communication isn’t easy to erase completely.

Yet it’s important to keep in mind that despite these new challenges, young people appear to be managing much better than we might think. In fact, we might be more concerned about people who lack access to these new modes of communication and the social and economic implications for them. Tragic examples of young people who were bullied and later committed suicide might frighten us into thinking that a new trend of youth suicide coincides with the rise of social networking. As devastating as these incidents may be, they fortunately remain rare. As we struggle to figure out how to navigate this new and ever-changing media environment, parents often feel anxious about technology that their children use and understand better than they do.

Notes

1. Yunji De Nies, Susan Donaldson James, and Sarah Netter, “Mean Girls: Cyberbullying Blamed for Teen Suicides,” ABC News, January 28, 2010, http://abcnews.go.com/GMA/Parenting/girls-teen-suicide-calls-attention-cyberbullying/story?id=9685026; Jan Hoffman, “As Bullies Go Digital, Parents Play Catch-Up,” New York Times, December 4, 2010, http://www.nytimes.com/2010/12/05/us/05bully.html?pagewanted=all; John Halligan, “Death by Cyber-Bully,” Boston Globe, August 17, 2005, http://www.boston.com/news/globe/editorial_opinion/oped/articles/2005/08/17/death_by_cyber_bully/; Kealan Oliver, “Phoebe Prince ‘Suicide by Bullying’; Teen’s Death Angers Town Asking Why Bullies Roam the Halls,” CBS News, February 10, 2010, http://www.cbsnews.com/8301-504083_162-6173960-504083.html; Bruce Kluger, “Bullying: Are We Defenseless?,” USA Today, January 25, 2012, A11; Geoff Mulvihill and Samantha Henry, “NJ Student’s Suicide Illustrates Internet Dangers,” Washington Post, October 1, 2010, http://www.washingtonpost.com/wp-dyn/content/article/2010/09/30/AR2010093000534.html.

2. “Chinese Teen Sells Kidney to Buy iPhone, iPad,” USA Today, April 7, 2012, http://www.usatoday.com/news/world/story/2012-04-07/china-iphone-ipad-kidney/54090470/1.

3. David Ariosto, “Guilty Verdict in Rutgers Webcam Spying Case,” CNN, March 17, 2012, http://www.cnn.com/2012/03/16/justice/new-jersey-rutgers-trial/index.html?hpt=hp_t1; Ashley Hays, “Prosecutors to Appeal Ex-Rutgers’ Student’s 30-Day Sentencing for Bullying Gay Roommate,” CNN, May 21, 2012, http://www.cnn.com/2012/05/21/justice/new-jersey-rutgers-sentencing/index.html?hpt=hp_t3.

4. “Dharun Ravi Seen Snoozing in Court as Jury Prepares to Begin Deliberations,” CBS2 New York, March 14, 2012, http://newyork.cbslocal.com/2012/03/14/dharun-ravi-seen-snoozing-in-court-as-jury-prepares-to-begin-deliberations/.

5. Richard Perez-Pena, “More Complex Picture Emerges in Rutgers Student’s Suicide,” New York Times, August 12, 2011, http://www.nytimes.com/2011/08/13/nyregion/with-tyler-clementi-suicide-more-complex-picture-emerges.htm.

6. Geoff Mulvihill, “In Tyler Clementi’s NJ Dorm, Tensions Were High,” Atlanta Journal Constitution, September 8, 2011, http://www.ajc.com/new/nation-world/in-tyler-clementis-nj-1163838.html.

7. Mulvihill and Henry, “NJ Student’s Suicide Illustrates Internet Dangers.”

8. “Ellen Speaks out on Rutgers Suicide,” ABC News, October 1, 2010, http://abcnews.go.com/Entertainment/video/ellen-degeneres-speaks-out-on-rutgers-suicide-11773812; Andrew M. Brown, “If Perez Hilton Stops Bullying Celebrities, His Readers Will Desert Him,” Telegraph (London), October 15, 2010, http://blogs.telegraph.co.uk/news/andrewmcfbrown/100059086/if-perez-hilton-stops-bullying-celebrities-his-readers-will-desert-him/.

9. Hemanshu Nigam, “Cyberbullying: What It Is and What to Do About It,” ABC News, October 7, 2011, http://abcnews.go.com/Technology/We_Find_Them/cyberbullying-/story?id=14675883#.T6AaUtmh2So; Elizabeth Held, “27 Percent of College Students Say They Have Been Cyber Bullied,” USA Today, December 9, 2011, http://www.usatodayeducate.com/staging/index.php/ccp/27-percent-of-college-students-say-they-have-been-cyber-bullied; “Jamey Rodemeyer Still Being Bullied After His Death Say Tim and Tracy Rodemeyer,” Huffington Post, September 27, 2011, http://www.huffingtonpost.com/2011/09/27/jamey-rodemeyer-bullied-after-death_n_983926.html; Danah Boyd and Alice Marwick, “Bullying as True Drama,” New York Times, September 22, 2011, http://www.nytimes.com/2011/09/23/opinion/why-cyberbullying-rhetoric-misses-the-mark.html.

10. Judy Peet, “Rutgers Student Tyler Clementi’s Suicide Spurs Actions Across U.S.,” New Jersey Real Time News, October 3, 2010, http://www.nj.com/news/index.ssf/2010/10/rutgers_student_tyler_clementi_4.html.

11. Marilisa Kinney Sachteleben, “Michigan Senate Passes Anti-bullying Law, Despite Objections,” Yahoo News, November 3, 2011, http://news.yahoo.com/michigan-senate-passes-school-anti-bullying-law-despite-162200561.html.

12. Bob Roehr, “Harassment/Suicide Rates Doubled for Gay/Lesbian Students,” Medscape Today News, November 15, 2010, http://www.medscape.com/viewarticle/732511.

13. Sameer Hinduja and Justin W. Patchin, “Cyberbullying Research Summary: Bullying, Cyberbullying, and Sexual Orientation,” Cyberbullying Research Center, 2011, http://www.cyberbullying.us/cyberbullying_sexual_orientation_fact_sheet.pdf, 2.

14. See Davis v. Monroe County Board of Education, 526 US 629 (1999).

15. Suicide Prevention Resource Center, Suicide Risk and Prevention for Lesbian, Gay, Bisexual, and Transgender Youth (Newton, MA: Education Development Center, 2008), http://www.sprc.org/sites/sprc.org/files/library/SPRC_LGBT_Youth.pdf.

16. Joel Best, Damned Lies and Statistics: Untangling Numbers from Media, Politicians, and Activists (Berkeley: University of California Press, 2001), 89–93. See also Benjamin Radford, “Is There a Gay Teen Suicide Epidemic?,” Live Science, October 8, 2010, http://www.livescience.com/8734-gay-teen-suicide-epidemic.html.

17. Suicide Prevention Resource Center, Suicide Risk and Prevention, 16–17.

18. Centers for Disease Control and Prevention, “Death Rates by Age and Age-Adjusted Death Rates for the 15 Leading Causes of Death in 2009: United States, 1999–2009,” Deaths: Final Data for 2009 (National Vital Statistics Report) 60, no. 3 (2012), http://www.cdc.gov/nchs/data/dvs/deaths_2009_release.pdf, table 9, p. 21.

19. Arialdi M. Miniño, “Mortality Among Teenagers 12–19 Years: United States, 1999–2006,” National Center for Health Statistics, May 2010, no. 37, http://www.cdc.gov/nchs/data/databriefs/db37.htm#leading.

20. “Stop Bullying: Speak Up,” CNN, 2011, http://www.cnn.com/SPECIALS/2011/bullying/; “CNN, Facebook, Cartoon Network, and Time Inc. Team Up for Anti-Bullying Efforts,” CNN, October 4, 2011, http://cnnpressroom.blogs.cnn.com/2011/10/04/anderson-cooper-360%C2%B0-town-hall-%E2%80%9Cbullying-it-stops-here%E2%80%9D-to-air-october-9/.

21. Simone Robers et al., “Bullying at School and Cyber-Bullying Anywhere,” Indicators of School Crime and Safety: 2011, National Center for Education Statistics, US Department of Education, 2012, http://nces.ed.gov/pubs2012/2012002.pdf, 44–50.

22. Ibid.

23. Amanda Lenhart, “Mean Teens Online: Forget Sticks and Stones, They’ve Got Mail,” Pew Internet and American Life Project, June 27, 2007, http://pewresearch.org/pubs/527/cyber-bullying.

24. Amanda Lenhart, “Cyberbullying 2010: What the Research Tells Us,” Pew Internet and American Life Project, May 6, 2010, http://www.pewinternet.org/Presentations/2010/May/Cyberbullying-2010.aspx. See slide 16; also, note that in slide 22 just 4 percent said they sent a sexually suggestive picture of themselves.

25. Amanda Lenhart et al., “Teens, Cruelty, and Kindness on Social Networking Sites,” Pew Internet and American Life Project, November 9, 2011, http://www.pewinternet.org/Reports/2011/Teens-and-social-media/Summary/Majority-of-teens.aspx.

26. “Teens and Cyberbullying,” National Crime Prevention Council, February 28, 2007, http://www.ncpc.org/resources/files/pdf/bullying/Teens%20and%20Cyberbullying%20Research%20Study.pdf, 2; Justin W. Patchin, “How Many Teens Are Actually Involved in Cyberbullying?,” Cyberbullying Research Center, April 4, 2012, http://cyberbullying.us/blog/how-many-teens-are-actually-involved-in-cyberbullying.html.

27. Sameer Hinduja and Justin W. Patchin, “Cyberbullying Research Summary: Cyberbullying and Suicide,” Cyberbullying Research Center, 2010, http://www.cyberbullying.us/cyberbullying_and_suicide_research_fact_sheet.pdf, 2 (emphasis in the original).

28. “Depression High Among Youth Victims of School Cyber Bullying, NIH Researchers Report,” National Institutes of Health, September 21, 2010, http://www.nih.gov/news/health/sep2010/nichd-21.htm.

29. Larry Gordon, “Keeping Parents’ ‘Helicopters’ Grounded During College,” Los Angeles Times, August 29, 2010, http://articles.latimes.com/2010/aug/29/local/la-me-parents-20100829.

30. For suicide rates by gender, see Centers for Disease Control and Prevention, “Trends in Suicide Rates Among Persons Ages 10 Years and Older, by Sex, United States, 1991–2006,” September 30, 2009, http://www.cdc.gov/ViolencePrevention/suicide/statistics/trends01.html. See also Lenhart, “Mean Teens Online.”

31. “Alexis Pilkington Facebook Horror: Cyber Bullies Harass Teen Even After Suicide,” Huffington Post, May 24, 2010, http://www.huffingtonpost.com/2010/03/24/alexis-pilkington-faceboo_n_512482.html; Kevin Cullen, “The Untouchable Mean Girls,” Boston Globe, December 28, 2011, http://www.boston.com/news/local/massachusetts/articles/2010/01/24/the_untouchable_mean_girls/.

32. “Results of the 2010 and 2007 WBI Workplace Bullying Survey,” Workplace Bullying Institute, 2010, http://www.workplacebullying.org/wbiresearch/2010-wbi-national-survey/; “Workplace Violence,” OSHA Factsheet, Occupational Safety and Health Administration, 2002, http://www.osha.gov/OshDoc/data_General_Facts/factsheet-workplace-violence.pdf.

33. “Ex-US Trader Gets 28 Months in Jail for Death Threats,” Thomson Reuters News and Insight, April 9, 2012, http://newsandinsight.thomsonreuters.com/Securities/News/2012/04_-_April/Ex-US_trader_gets_28_months_in_jail_for_death_threats/; “Defining a Cyberbully,” National Science Foundation, November 8, 2011, http://www.nsf.gov/discoveries/disc_summ.jsp?cntn_id=121847; Hailey Branson-Potts, “Singer-Songwriter Leonard Cohen Testifies About Harassing Voice-mails,” Los Angeles Times, April 9, 2012, http://latimesblogs.latimes.com/lanow/2012/04/singer-songwriter-leonard-cohen-testifies-about-harassing-voicemails.html; “Arizona Bill Broadens Online Bullying Laws,” ABC News, April 3, 2012, http://abcnews.go.com/US/video/arizona-bill-broadens-online-bullying-laws-16064936.

34. Christopher Maag, “A Hoax Turned Fatal Draws Anger but No Charges,” New York Times, November 28, 2007, http://www.nytimes.com/2007/11/28/us/28hoax.html.

35. De Nies, James, and Netter, “Mean Girls.”

36. “Alicia’s Cautionary Tale,” on The Oprah Winfrey Show, April 15, 2009, http://www.oprah.com/relationships/Alicias-Story-Kidnapped-and-Held-Captive; “Child Predators on the Internet,” on The Oprah Winfrey Show, June 13, 2009, http://www.oprah.com/relationships/Protect-Your-Children-from-Internet-Predators/1.

37. “Craigslist Killing: Rare, but Not Unique,” CBS News, July 16, 2010, http://www.cbsnews.com/2100-18559_162-4969012.html; Sarah Armaghan, Kerry Burke, and Dave Goldiner, “Craigslist Date with Murder for N.Y. Beauty Julissa Brisman, Model and Internet Masseuse Shot in Hotel,” New York Daily News, April 17, 2009, http://articles.nydailynews.com/2009-04-17/news/17919537_1_hotel-room-masseuse-craigslist.

38. Federal Bureau of Investigation, Crime in the United States, by Volume and Rate per 100,000 Inhabitants, 1991–2010, Uniform Crime Reports for the United States, 2011 (Washington, DC: US Department of Justice, 2011), http://www.fbi.gov/about-us/cjis/ucr/crime-in-the-u.s/2010/crime-in-the-u.s.-2010/tables/10tbl01.xls.

39. Federal Bureau of Investigation, “Crime in the United States, Expanded Homicide Data,” in Uniform Crime Reports for the United States, 2011 (Washington, DC: US Department of Justice, 2011), http://www.fbi.gov/about-us/cjis/ucr/crime-in-the-u.s/2010/crime-in-the-u.s.-2010/offenses-known-to-law-enforcement/expanded/expandhomicidemain; US Department of Health and Human Services, Administration for Children and Families, Administration on Children, Youth, and Families, Children’s Bureau, Child Maltreatment, 2010, 2011, http://www.acf.hhs.gov/programs/cb/pubs/cm10/cm10.pdf#page=31.

40. Jennifer L. Truman, Criminal Victimization, 2010: National Crime Victimization Survey (Washington, DC: US Department of Justice, 2011), http://bjs.ojp.usdoj.gov/content/pub/pdf/cv10.pdf, 9.

41. David Finkelhor, Heather Hammer, and Andrea J. Sedlak, “Sexually Assaulted Children: National Estimates and Characteristics,” in National Incidence Studies of Missing, Abducted, Runaway, and Thrownaway Children, Office of Juvenile Justice and Delinquency Prevention, August 2008, https://www.ncjrs.gov/pdffiles1/ojjdp/214383.pdf; Truman, Criminal Victimization, 2010, 2.

42. Heather Hammer, David Finkelhor, and Andrea J. Sedlak, “Children Abducted by Family Members: National Estimates and Characteristics,” in National Incidence Studies of Missing, Abducted, Runaway, and Thrownaway Children, Office of Juvenile Justice and Delinquency Prevention, October 2002, http://www.missingkids.com/en_US/documents/nismart2_familyabduction.pdf; https://www.ncjrs.gov/html/ojjdp/nismart/04/ns4.html; David Finkelhor, Heather Hammer, and Andrea J. Sedlak, “Nonfamily Abducted Children: National Estimates and Characteristics,” in National Incidence Studies of Missing, Abducted, Runaway, and Thrownaway Children, Office of Juvenile Justice and Delinquency Prevention, October 2002, http://www.missingkids.com/en_US/documents/nismart2_nonfamily.pdf.

43. Amanda Lenhart and Mary Madden, “Teens, Privacy, and Online Social Networks,” Pew Internet and American Life Project, April 18, 2007, http://www.pewinternet.org/Reports/2007/Teens-Privacy-and-Online-Social-Networks/1-Summary-of-Findings.aspx.

44. Cecilia Kang, “Google Announces Privacy Changes Across Products; Users Can’t Opt Out,” Washington Post, January 24, 2012, http://www.washingtonpost.com/business/economy/google-tracks-consumers-across-products-users-cant-opt-out/2012/01/24/gIQArgJHOQ_story.html.

45. National Survey on Drug Use and Health, “Source of Payment for Outpatient Mental Health Treatment/Counseling Among Persons Aged 18 or Older Who Received Outpatient Mental Health Treatment in the Past Year, by Age Group: Numbers in Thousands, 2009 and 2010,” in The NSDUH Report (Rockville, MD: Substance Abuse and Mental Health Services Administration, 2011), http://www.samhsa.gov/data/NSDUH/2k10MH_Findings/2k10MH_DTables/Sect1peMHtabs.htm#Tab1.36A.

CHAPTER 4


What’s Dumbing Down America: Media Zombies or Educational Disparities?

Can you name all of Brad and Angelina’s kids? President John F. Kennedy’s siblings? The sisters in Louisa May Alcott’s Little Women? Jacob’s sons from the Old Testament? My guess is the first question is easiest for most readers coming of age in the twenty-first century, whether we are actually interested in knowing the Jolie-Pitt children’s names or not. After all, you don’t have to try very hard to hear them mentioned in celebrity gossip or to see fan magazines that feature their pictures. Television, magazines, and the Internet help us much more with the first question than with the others. The other questions require us to draw on knowledge of history, literature, and the Bible, information that does not circulate as freely and rapidly as information about contemporary popular culture. I admit that my ability to name any of Jacob’s sons is based solely on memories of the play Joseph and the Amazing Technicolor Dreamcoat. Is popular culture turning us into a nation of shallow idiots?

Many critics of popular culture are certain that the answer is yes. Although there are numerous examples of ways popular culture can help us waste time with content that is not exactly intellectually stimulating, the cultural explanation helps us overlook very important structural factors that shape educational disparities. Focusing on popular culture does not help us understand the educational experiences of young people who live in communities with overcrowded, dilapidated schools and whose families may have attained little education themselves.

But focusing on popular culture may get more attention than addressing these complicated structural factors. Consider these recent news stories suggesting technology and culture are to blame: “Is Google Making Us Stupid?” (Atlantic), “Does the Internet Make You Dumber?” (Wall Street Journal), “Are Smartphones Making Us Stupid?” (Huffington Post), “Generation Hopeless: Are Computers Making Kids Dumb?” (Associated Press), and finally “Is It Just Us, or Are Kids Getting Really Stupid?” (Philadelphia), which argues that the Internet is “rewiring” young people’s minds, and not for the best.1

A Washington Times story called “The Pull of Pop Culture” argues that young people must choose between “the pull of the popular or the push of schooling,” and that kids consistently choose the former, or 50 Cent over Shakespeare. A Chicago Sun-Times story, “Successful Kids Reject Pop Culture’s Message,” notes that being able to graduate from high school is based on kids’ “ability to reject the nonsense they are exposed to in our pop culture.”2 A 2008 book by Emory University English professor Mark Bauerlein, The Dumbest Generation: How the Digital Age Stupefies Young Americans and Jeopardizes Our Future, reflects this same concern.

Within these stories, popular culture is cast as antithetical to education and knowledge, something that prevents learning. None address the massive budget cuts that many public schools have had to endure, or the dramatic racial and ethnic disparities in high school and college graduation rates. That one’s ZIP code is a central predictor of the quality of education one has access to also gets left out of these attention-grabbing headlines.

Concerns that popular culture makes us dumber predate the Internet age. Communications scholar Neil Postman argues in his 1985 book, Amusing Ourselves to Death, that as the United States shifted from “the magic of writing to the magic of electronics,” public discourse changed from “coherent, serious and rational” to “shriveled and absurd,” thanks largely to television.3 Drawing from Aldous Huxley’s Brave New World, Postman decries what he sees as the rejection of books in favor of a show-business mentality that has pervaded every aspect of public life, from politics and religion to education. He believes that these amusements undermine our capacity to think, encouraging us to move away from the written word—rationality, in his view—toward television and visual media.

Postman got it partly right. This new media world does act as a never-ending shiny object that grabs our attention. It distracts us from knowing too much about the way American society is structured and from being too aware of social problems that might seem boring in the face of so much other interesting content competing for our attention. This keeps us focused on cultural explanations for social issues, rather than the less immediate—and arguably less interesting—structural conditions that shape our education system.

But instead of impeding knowledge and discourse across the board, new media like the Internet have increased public discourse, along with the number of amusements available to distract us. Television news programs now use interactive media to further engage citizens, through live blogs and sites like YouTube in presidential debates, rather than just enabling people to be passively entertained. In fairness to Postman, who wrote before the Internet age, these developments are still unfolding. But rather than replacing traditional means of informing the public and furthering the flow of knowledge, new media and even popular culture are sometimes used to create new ways to educate.

This chapter considers the complaints that popular culture interferes with education and has created an intellectually lazy population. As we will see, changes in visual media and the increased ability to communicate electronically have altered how people interact and exchange information. Television, texting, and a culture awash in seemingly frivolous gossip may appear to be the causes of educational failure, but the reality is far less entertaining. Problems within education stem from structural factors bigger than popular culture: lack of resources, inconsistent family and community support, and inequality.

While some school districts have significant dropout and failure problems, Americans are not as dumb as we are often told … at least no more so than we have been in the past. The vast divides in educational attainment and intellectual achievement can be explained not by popular culture, but by the continuing reality of inequality in American society.

A Nation of Television Zombies?

Does television put viewers into a hypnotic trance, injecting ideas into otherwise disengaged minds? During the 1970s, several books suggested that this was in fact the case. Marie Winn’s 1977 book, The Plug-In Drug, described television as a dangerous addiction. Jerry Mander’s provocatively titled 1978 book Four Arguments for the Elimination of Television concurred. According to Mander, television viewers are spaced out, “little more than … vessel[s] of reception” implanted with “images in the unconscious realms of the mind.” Put simply, Mander argues that television viewing produces “no cognition.”4

Television viewing increases with age (viewing is highest among adults seventy-five and over), yet nearly all of the concerns about television dulling the intellect have focused on children and teens.5 According to Nielsen Media Research, children and teens watch much less television than their elders: adults sixty-five and over watched an average of more than forty-seven hours per week in 2009, almost double the viewing of children two to eleven, who averaged just over twenty-five hours. Teens twelve to seventeen watched the least television of any age group, averaging just over twenty-three hours.6 Television viewing has been declining in recent years, particularly among children and teens, who more often use newer forms of media during their leisure time.7

Both Winn’s and Mander’s books rely upon anecdotal observations yet make important charges about the negative effects television supposedly has on thinking. Some of these claims seem like common sense: television shortens one’s attention span, reduces interest in reading, promotes hyperactivity, impedes language development, and reduces overall school performance. Yet research into these claims reveals that television is not exactly the idiot box its critics suggest.

It might surprise you to learn that one of the programs most heavily criticized in the 1970s was Sesame Street, the educational program many of us grew up watching. Cognitive psychologist Daniel R. Anderson studied claims that preschoolers become transfixed in zombielike fashion while viewing Sesame Street, as well as the contradictory complaint that it contributes to hyperactivity. Studies in which researchers observed three- to five-year-olds watching television found that their attention is anything but fixed: they look away 40 to 60 percent of the time, draw letters with their fingers in the air along with characters, and pay more attention to segments compatible with their current cognitive aptitude level. There was no evidence of hyperactivity after watching, and Sesame Street viewers had larger vocabularies and showed greater readiness for school than other children.8

Anderson and several colleagues conducted a long-term study, following 570 children from preschool into adolescence, to see if a relationship between preschool television viewing and academic performance exists. Their findings cast serious doubt on the speculation that television impedes learning later in life. In contrast to the claims that the nature of television itself dulls intellectual ability, their data repeatedly reveal that content matters: children—especially boys—who watched what they call “informative” programming as preschoolers had higher grade point averages and were likely to read more as teens. These findings counter a well-worn idea that television primes children to expect to be entertained at all times, leading to intellectual laziness and the idea that learning is boring.9

Their study also challenges the idea that television has a “displacement” effect: that people spend more time watching television and thus less time engaged in more rigorous intellectual activities like reading. Anderson and colleagues found that this effect was small, complicated, and observed only in middle- and high-income kids. Children who watched fewer than ten hours a week actually had poorer academic achievement than those who averaged about ten hours of viewing per week, and those who watched much more than ten hours had slightly lower academic achievement than those in the middle. The authors conclude that there is no evidence that television viewing displaces educational activities; instead, it is likely that television viewing replaces other leisure activities, like listening to music, playing video games, and so forth. The authors also found that more television viewing did not necessarily translate into doing less homework.10

The authors cite other studies that support their claims, finding that television does not ruin reading skills, lower intelligence quotient (IQ), or otherwise interfere with education. This does not mean that parents should let kids watch as much television as they want and do their homework whenever they feel like it. We should certainly not presume from this study that television is children’s best teacher, but it does not necessarily have the damaging effects critics have suggested.

In fact, the best predictor of student achievement is parents’ level of education. It is likely that this effect is so strong—for better and for worse in some cases—that television cannot compete with the academic environment created by parents. Parents who encourage reading, read themselves, and emphasize the importance of education are a far more powerful influence than television. Not surprisingly, reading more is a good predictor of school success, but watching television does not interfere with literacy skills, as many critics charged.11 This connection means that educational achievement—a good predictor of one’s economic success—is inherited more than we might care to acknowledge.

The critiques of educational television have had political underpinnings in some cases. Anderson describes how much of the concern about Sesame Street was driven by those who sought to cut funding for the Children’s Television Workshop, and public television more generally, during the early 1990s.12 If opponents could find that educational programming had no impact, or even deleterious effects, they could justify eliminating public funding as yet another form of budgetary pork. But such was not the case.

Television has never really left the hot seat. More recently, TV has been blamed for causing attention deficit/hyperactivity disorder (ADHD) and even autism. Although it may seem like television’s electronic images can wreak havoc on the young brain’s wiring process, research does not support this conclusion. It is likely that people who have grown up with electronic media think differently from those who did not, but different is not always pathological.

Let’s look more closely at some of the research on ADHD and television. It is mostly based on correlations, and therefore causality cannot be assessed. But if you Google “television and ADHD” you will be told otherwise. One online article concludes in its headline, “It’s Official: TV Linked to Attention Deficit.”13 But the authors of the study cited by this article would not go that far.

The study in question, published in a 2004 issue of the journal Pediatrics, assessed the “overstimulating” effect television may have on children who watch TV as toddlers. To do so, the researchers asked parents about their children’s television viewing at ages one and three and asked them questions regarding their children’s attentional behavior at age seven. Although they did find a relationship between lower attentional behavior and more television viewing, the authors themselves acknowledge that “we have not in fact studied or found an association between television viewing and clinically diagnosed ADHD,” because none of the children in the study had been diagnosed.14 They also conclude that it is equally likely that a more lax or stressful environment might make television viewing more prevalent in early childhood and that television viewing is associated with, but not the cause of, children’s inattention.

Likewise, a 2006 study published in the Archives of Pediatrics and Adolescent Medicine found significant differences between children diagnosed with ADHD and their peers. The authors found “no effect of subsequent story comprehension in either group,” and that for the non-ADHD children, “children who have difficulty paying attention may favor television and other electronic media to a greater extent than the media environment of children without attention problems.”15

Most interestingly, their study found that any effect television watching had on attention appeared among the non-ADHD kids only; those diagnosed with ADHD showed no declines in attention after watching television. This study challenges the conventional wisdom that television has particularly adverse effects for children with ADHD; instead, the authors conclude that “the cognitive processing deficits associated with ADHD are so strongly rooted in biological predisposition that, among children with this diagnosis, environmental characteristics such as television viewing have a negligible effect on these cognitive processing areas.” A similar study published in 2007 claimed an association between television viewing and “attention problems” but did not assess ADHD. Another study did use the protocol for diagnosing ADHD, but again it was unclear whether any participants had actually been diagnosed with the disorder.16

In 2011 a study of four-year-olds watching a fast-paced clip of SpongeBob SquarePants made national news, claiming that children who watched the clip did not perform as well on cognitive tests as the children in control groups who did not see the cartoon segment. To read the news coverage, it seemed as though the undersea cartoon character was uniformly making kids dumb. ABC News headlined its story “Watching SpongeBob SquarePants Makes Kids Slower Thinkers, Study Finds.”17 YouTube videos and blogs boldly stated that “SpongeBob makes kids stupid.”

The study itself, published in the journal Pediatrics, did not go that far. Based on a nonrandom sample of sixty children, all four years old and from mostly white, affluent families, the experiment involved showing a subsample a fast-paced clip from the cartoon, followed by cognitive tests and a test measuring the ability to delay gratification. The SpongeBob viewers performed worse on all of these tests, but the authors could not—and did not—claim that this result enabled them to draw any conclusions about the children’s long-term intellectual prospects.

The authors’ conclusion included an interesting hypothesis: that the fantasy nature of the program actually required more of the children cognitively, making it harder for them to perform well on the tests immediately afterward. They state, “Encoding new events is likely to be particularly depleting of cognitive resources, as orienting responses are repeatedly engaged in response to novel events.”18 So we could just as easily conclude that a fast-paced cartoon demands more mentally and is more of a cognitive workout than slower-paced tasks.

Some critics have even asserted that television is linked with autism, a claim that garnered coverage in a 2006 issue of Time and in the online magazine Slate.19 A study by economists found a correlation between autism rates and cable-television subscription rates in California and Pennsylvania. The researchers did not measure what children watched (or whether children were watching at all). Studies like this, although profoundly flawed, help maintain the doomsday specter of television. Easy answers for complex neurological processes are digestible to the public and thus make for interesting speculation, but they will probably yield little in the way of getting to the root cause of autism, just as study after study on television and video games will likely do little for those attending struggling schools.

The cumulative effect of questionable studies helps create an environment in which television seems to be the answer for educational failure. The American Academy of Pediatrics (AAP) insists that parents should not allow children under two to watch any television, for fear that it interferes with development, a claim that has yet to be scientifically supported. The AAP statement does not reference any research on infants, but instead focuses on research on older children and teens. Still, the AAP concludes that “babies and toddlers have a critical need for direct interactions with parents and other significant care givers (e.g., child care providers) for healthy brain growth and the development of appropriate social, emotional, and cognitive skills.”20

Although television does not provide the direct one-on-one interaction babies need and can never replace human interaction, there is no evidence of direct harm from television. A 2003 Kaiser Family Foundation (KFF) report found that the majority of children under two—74 percent—have watched television (or at least their parents admit that they have), and 43 percent watch every day.21

I am not suggesting that propping infants up in front of the TV set is a good idea, especially if children are left unattended (in the KFF report, 88 percent of parents said they were with their children all or most of the time). But there is no evidence that television has a negative impact on infants either, only that it does not necessarily contribute to their development. If parents decide they would like to keep their children away from television, they have the right to make that choice. But many parents are made to feel guilty for choosing to allow some television viewing when there is no concrete evidence of harm. The TV blackout is especially difficult for parents with older children who might watch or those who enjoy watching TV themselves.

In contrast to the widespread belief that television interferes with intelligence, writer Steven Johnson suggests that the opposite might be true. In his book Everything Bad Is Good for You: How Today’s Popular Culture Is Actually Making Us Smarter, Johnson argues that television has actually become more complex and cross-referential and that the best dramas and comedies of today require significantly more of viewers than in the past. He cites programs like 24, which expect viewers to think along with the show and draw on plot twists and information from previous episodes, in contrast to older television, which provided more exposition, if any was needed at all. He says that these kinds of shows are “cognitive workouts” and that even reality shows sometimes encourage us to develop greater social intelligence.22

Although I’m not sure that television makes most people smarter—I would hypothesize that those who are already intelligent can use television to sharpen a strong intellect—the research does not support blaming educational failure on television. It is another attempt to use a cultural explanation while once again ignoring social structure.

Certainly, being able to concentrate and focus is important to educational success. But focusing on popular culture helps us ignore issues such as hunger and family and neighborhood violence that may interfere with learning. These issues are also more likely to be major concerns in low-income areas with high dropout rates.

Minding Newer Media

Although concerns about television will probably never completely fade away, they are sometimes overshadowed now by newer forms of media, particularly time spent online. Adults are more likely to spend time online than children or teens: adults aged thirty-five to forty-four spent an average of nearly thirty-nine hours online in 2008, compared with just over twelve hours for teens twelve to seventeen, according to Nielsen Media Research.23 And video games also cut into television time, especially for boys.

A 2007 study, published in the Archives of Pediatrics and Adolescent Medicine, found that 36 percent of their respondents in a nationally representative sample played video games, averaging an hour a day (and an hour and a half on weekends). Gamers reported spending less time reading and doing homework than nongamers.24 While this may indicate that video gamers’ schoolwork will suffer, other studies—including two that I discussed above—have found no evidence that video games were associated with lower academic performance.

In one of these studies, published in Pediatrics in 2006, the authors seem to contradict themselves. In their analysis they state that “video game use [was] not associated with school performance,” yet conclude that “television, movies and video game use during middle school years is uniformly associated with a detrimental impact on school performance.” They also neglect to note that, according to their own findings, television use itself had no negative impact; only heavy viewing during the school week did.25

Another researcher responded to this contradiction by writing to the journal that the “conclusions are not warranted,” yet the authors refused to accept their own study’s findings, responding that “from this ‘displacement’ perspective, we have little reason to believe that four hours of video game time would be any different from four hours of television time.”26

The reality is that very few people actually play video games for four hours a day, as the 2007 study found; in the Pediatrics study, 95 percent of kids played fewer than four hours a day. The unfounded conclusion that video-game playing must negatively affect academic achievement reflects the persistent belief that video games are problematic; it is equally likely that children who must spend the same amount of time on other activities, such as caring for siblings or doing extensive household chores, would also see lower academic achievement. Focusing on video games does not address the broader structural factors that shape school success or failure.

For people who have played video games, the question about gaming and academic achievement might seem backward. Wouldn’t games that require you to learn often complex rules at increasingly difficult levels actually provide intellectual benefits? Steven Johnson, author of Everything Bad Is Good for You, makes this argument, using The Sims as an example, a simulation game in which players must master a host of rules to manage their characters’ lives. Yes, common sense dictates that people (of all ages) should not neglect their other responsibilities in favor of playing, but the games themselves tend to offer a kind of mental workout, especially improving spatial skills.27

I suspect the disdain for video games and other new media comes from a lack of familiarity. The games are so much more complex now than when they first came out in the 1970s that they compel users to play far more than Pong, Merlin, or Atari did when I was growing up. Back then the games were much like other children’s toys, which kids played occasionally and mostly grew tired of. By contrast, games today are likely to be serious endeavors that kids don’t give up after a few weeks but instead continue to play into adulthood.

Video games bear little resemblance to their predecessors from decades ago, and thus seem like a strange new development to many older adults. But at least some people over forty have a frame of reference for video games, unlike texting, a relatively new development. Recently, texting has come under fire for allegedly ruining young people’s ability to spell and write coherently.

Many complaints come from people I can relate to: college professors who read students’ papers and e-mails. A Howard University professor told the Washington Times that electronic communication has “destroyed literacy and how students communicate.” A University of Illinois professor wrote to the New York Times that she is concerned about the informality in written communication, with no regard for spelling and grammar. A tutor wrote in a Los Angeles Times op-ed of the “linguistic horrors” she frequently reads in students’ essays. “The sentence is dead and buried,” the author concludes.28

I can relate to these concerns, especially when I get rambling e-mails in all lowercase letters from students. But in truth, I have not seen a decline in students’ ability to write since e-mail and texting became so widespread. And according to a Pew Internet and American Life study, teens don’t confuse texting with actual writing. A surprising 93 percent of those surveyed indicated that they did some form of writing for pleasure (journaling, blogging, writing music lyrics, and so on). Most teens—82 percent—also thought that they would benefit from more writing instruction at school. Others are also optimistic. Michael Gerson of the Washington Post writes, “A command of texting seems to indicate a broader facility for language. And these students seem to switch easily between text messaging and standard English.”29

Texting reminds me of another form of language use that is all but obsolete: shorthand. Shorthand used to be considered a skill, often taught in school to prepare students for secretarial work. Court reporters also master a language within a language in their daily work. But because texting is associated with young people, critics presume it is a detriment rather than a new skill. And like television, video games, and the Internet, texting is not just a young person’s activity (although the younger people are, the more texts they are likely to send per day).30 According to industry research, the median age of a texter is thirty-eight.31

Perhaps at the heart of these concerns are uncertainties about these new media. Will they distract people from being productive citizens? Enable too many shortcuts? Much has been written recently about teens and multitasking, mostly with an undercurrent of anxiety. “Some fear that the penchant for flitting from task to task could have serious consequences on young people’s ability to focus and develop analytical skills,” warns a 2007 Washington Post article. Time published an article in 2006 called “The Multitasking Generation,” stating that “the mental habit of dividing one’s attention into many small slices has significant implications for the way young people learn, reason, socialize, do creative work and understand the world. Although such habits may prepare kids for today’s frenzied workplace, many cognitive scientists are positively alarmed by the trend.” The article goes on to quote a neuroscientist who fears that multitaskers “aren’t going to do well in the long run.”32

It is interesting that rather than celebrating the possible positive outcomes of multitasking—which most mothers will tell you they have no choice but to learn—the prognosis where young people are concerned is grim. As Time observes, multitasking is a valuable professional skill, as any brief observation of the frenzied Wall Street trader or busy executive reveals.

The Kaiser Family Foundation released a report on youth multitasking in 2006 and found that while doing homework, the most likely other activity teens engage in is listening to music. Most of the multitasking comes during other leisure activities, like instant messaging and Web surfing at once. The KFF study seems to imply that using a computer to do homework invites distraction. “When doing homework on the computer is their primary activity, they’re usually doing something else at the same time (65% of the time),” the report concludes.33 It’s also the case that people think they are better at multitasking than they actually are. As many other professors have likely also observed, students who spend time online during class lectures and discussions can miss crucial information, though they might think they can do both at once.

Yet computer use is a vital part of being educated in the twenty-first century. In creating access to a tremendous amount of information, the Internet also changes the nature of education. Items that once had to be researched in a physical library can be retrieved by computer or smartphone, basically eliminating the need to memorize many facts. These shifts remind me of Albert Einstein’s alleged ignorance of his own phone number, which he supposedly said he could look up if he needed it. How many phone numbers do you know now that phones remember them for us?

Yes, the Internet and other technologies can be major distractions and have created new ways to take intellectual shortcuts and to cheat. Education needs to evolve along with the technology, shifting the nature of learning away from memorization and toward teaching how to think. The Internet can be, and has been, used to thwart cheating, too; rather than treating new media as the enemy, educators need to make peace with them and embrace them as much as possible.

Just as the written word moved societies away from oral culture, visual media require a new intelligence that needs to be fully integrated into education today. Our continued reliance on standardized testing impedes this shift in many ways. But a new way of sharing information has arrived and will likely continue to evolve in the coming years.

How Dumb Are We Really?

For those who glorify the past, the present or future can never compare. What’s interesting is that complaining about how little the next generation knows never abates. People have found young people’s knowledge lacking for centuries, and commentators have grimly assessed Americans’ intellectual abilities, whether it be math, reading skills, or geography, for more than a century.34 The complaint that we are superficial and interested only in amusements has been around for a long time. But are we really less knowledgeable than our predecessors?

One source of support critics look to is SAT (formerly known as the Scholastic Aptitude Test) scores. Between 1967 and 1980, average verbal scores fell 41 points, from 543 to 502, a fall of about 8 percent, and math scores fell 24 points, from 516 to 492. As you can see in Figure 4.1, this appears to suggest that high school aptitude nose-dived during the 1970s. Since that time, average math scores rose to an all-time high in 2005 before falling back to previous levels in the years after. Verbal scores continue to fluctuate but have yet to match levels of the late 1960s and early 1970s.
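For readers who want to verify the percentages, the arithmetic follows directly from the scores just cited (a quick check, not an additional finding):

\[
\frac{543-502}{543} = \frac{41}{543} \approx 0.076 \approx 8\% \ \text{(verbal)}, \qquad
\frac{516-492}{516} = \frac{24}{516} \approx 0.047 \approx 5\% \ \text{(math)}.
\]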

Figure 4.1: Average Critical Reading and Math SAT Scores, 1967–2011 Source: College Board

Critic Marie Winn, author of The Plug-in Drug, argues that television is the “primary cause” of this decline, claiming that as kids grew up watching more television in the late 1960s, their ability to read declined. But as the above-noted studies detail, television had little to do with high school grade point average, which is highly correlated with SAT scores.35

Ironically, the decline in SAT scores from four decades ago reflects a positive trend: more high school students are taking the test and planning on attending college than in the past. According to the US Department of Education, in 1972, 59 percent of high school seniors planned on attending college, compared with 79 percent in 2004.36 The number of students enrolled in college more than doubled between 1970 and 2009 as well.37

Not only are more people attending college, but many more African American and Latino students are attending than in 1970; these groups have been historically underrepresented and tend to have slightly lower scores on average than whites or Asian Americans.38 In 2011 more students took the SAT than ever before; 44 percent of the test takers were minority students, the largest proportion in history.39 Minority students are also more likely to attend underfunded and overcrowded urban schools with less qualified teachers, and in some cases English is their second language.40

Donald P. Hayes, Loreen T. Wolfer, and Michael F. Wolfe of Cornell University suggest that a decline in the quality of textbooks also helps explain declining achievement. They examined eight hundred textbooks published between 1919 and 1991 and found that the newer texts are less comprehensive and, in their estimation, less likely to prepare students to master reading comprehension.41


Still others wonder if verbal abilities are really declining at all. Psychologists have studied scores on intelligence quotient tests from the beginning of the twentieth century, when they were first administered, to 2001 and found that IQ scores are continually rising—so much so that the tests have had to be periodically recalibrated so that the population’s average score remains 100. In what is called the “Flynn Effect,” after psychologist James R. Flynn, total unadjusted IQ scores rose about 18 points between 1947 and 2002. This means the average IQ of someone in 2002—always scaled to 100—would have been about 118 by 1947 norms (conversely, a person of average intelligence in 1947 would have an IQ of about 82 by 2002 norms). Four points of the gain come from vocabulary.42
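Because the tests are renormed so that the contemporary average always equals 100, the rescaling works out as simple addition and subtraction (a back-of-the-envelope restatement of the figures above, assuming the roughly 18-point cumulative gain):

\[
100 + 18 = 118 \ \text{(2002 average scored on 1947 norms)}, \qquad
100 - 18 = 82 \ \text{(1947 average scored on 2002 norms)}.
\]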

So are we smarter or dumber? Flynn says that “today’s children are far better at solving problems on the spot without a previously learned method for doing so.” He also suggests that if we look at achievement tests of children’s reading from 1971 to 2002, fourth- and eighth-grade students’ reading skills improved, but by twelfth grade there were no differences over time.43

Looking at the data to which he refers, what is most interesting is that nine-year-old boys in particular gained a great deal on reading scores—15 points between 1971 and 2008, compared with girls’ 10-point gain.44 In all age groups, significant racial and ethnic disparities persist, despite some reduction since 1971. This may partially explain why verbal SAT scores haven’t risen (but not why they fell). In any case, these observations refute the notion that young children can’t read because of television.

The case of IQ and SAT disparities reminds us that these tests are only approximations of intelligence and aptitude, rife with problems of cultural bias, and reflect the narrow ways that aptitude and intelligence are defined. The long-term changes in both measures tell us that people are better prepared for one test, but not for the other … yet they purport to measure some of the same skills.

The National Center for Education Statistics (NCES) conducted assessments of adult literacy in 1992 and 2003 and found that overall results were virtually the same, but there were significant differences in terms of race, education, and age. Whites had higher scores than those in other racial categories, although their scores were virtually unchanged between the two assessments. Blacks and Asian Americans made gains in 2003, while Latino literacy scores declined.

Not surprisingly, having more education correlated with higher scores. Nineteen- to forty-nine-year-olds had the highest scores, with adults over sixty-five having the lowest.45 Overall, people of all ages are reading less than in past decades, according to a 2007 National Endowment for the Arts report. But despite declines in leisure reading, the NEA study found that nearly 60 percent of adults twenty-five to forty-four still read for pleasure. In contrast, a Harris Interactive Poll found that between 1995 and 2004, the percentage of adults who reported reading as their favorite leisure activity increased, from 28 to 35 percent (although in 2007 it fell to 29 percent); in every year, reading was ranked respondents’ favorite leisure activity. A 2009 NEA study found increases in adults who read literature—with the highest increases among young adults eighteen to twenty-four.46 With the increasing popularity of iPads, Kindles, and Nooks, e-books may eventually reverse the downward trend.

Declines in reading have many causes and implications. We often think that this is a direct result of other media luring people away from books, but long-term studies have also found that in the past several decades Americans have less leisure time, period. A 2008 Harris Interactive Poll found that respondents had the least amount of leisure time since they began asking the question in 1973.47 Since reading is a more intellectually taxing activity, it may be the first to go after a busy day. I am personally an avid reader, but after a long day at work my eyes and brain don’t want to work that hard. I suspect that for other adults, who are working increasingly longer hours to make ends meet, this rings true. But we need to avoid viewing the past through rose-colored glasses, where entire families would have sat around reading books together. With high school graduation rates hovering below 25 percent until 1940, it is very likely that the number of people reading books was not as high as we might think.

Whereas pleasure reading might not be increasing, educational attainment has risen dramatically since 1960. According to the US Census, high school graduation rates more than doubled between 1960 and 2010, from just 41 percent of the population to 87 percent. Less than 8 percent of Americans had a college degree in 1960, compared with 30 percent in 2010. Rates for African Americans and Latinos still lag behind whites, but these groups have made tremendous gains during this time as well. African American high school graduation quadrupled, and college graduation increased sixfold. Latino high school graduation rates have nearly doubled since 1970 (the first year data were collected), while college graduation has tripled in that time period.48

Overall, we are a more educated society, one that places a great deal of emphasis on higher learning as a vital skill in our information-based economy. But as continuing disparities in graduation rates, literacy, and SAT scores detail, race and socioeconomic status remain significant factors. This is due not to different innate abilities, as controversial theories suggest, nor primarily to media use, but to different educational opportunities built into our social structure.

Social Structure and Unequal Education

Nearly sixty years have passed since the landmark Supreme Court ruling Brown v. Board of Education, which voided the “separate but equal” doctrine that had dominated American education. Yet children today still largely inhabit two very separate public school systems: one that is mostly effective in fulfilling its mission of providing students with a quality education and one that fails miserably. The latter tends to be the only option for the nation’s poorest children living in cities, helping to perpetuate the cycle of poverty. Focusing on television and other media as a primary source of educational failure enables us to overlook the pervasive nature of inequality, the most important predictor of educational attainment.

This cycle predates television and has nothing to do with popular culture. Its roots are firmly planted in the days of slavery, when many states outlawed teaching slaves how to read. Education was viewed as a major threat to white supremacy, both during and after slavery. After slavery ended, schools for African American children lacked many basic resources, and most colleges and universities excluded black students entirely.

While many children, like Brown v. Board of Education’s plaintiff, Linda Brown, lived close to “white” schools, residential segregation ensured that many did not. Segregation actually increased after World War II, with the growth of suburbs that were off-limits to blacks and government policies that refused to underwrite loans for whites who lived in neighborhoods with African Americans. This practice, called “redlining,” designated the level of risk assigned to home loans by neighborhood, limiting who could get funding to live in a particular area or borrow money for home improvements. Until the passage of the Fair Housing Act in 1968, housing discrimination was rampant and legal, which helped to sort Americans into predominantly white or minority neighborhoods and severely limited property values in nonwhite neighborhoods.

Since schools in the United States are typically funded by property tax revenues, areas with a lower tax base have less funding for local schools. Less funding means less money to pay teachers well, so those with more experience and training go to districts with a higher tax base. Teachers of low-income kids are more likely to have emergency credentials and to lack training in the specific subjects they teach. Their students are more likely to have older and fewer textbooks, which means they cannot take books home to study. The school itself is more likely to be overcrowded and in disrepair.49

As if these obstacles were not enough, as I discuss in upcoming chapters, children living in low-income communities are more likely to experience family disruption and neighborhood violence, making it harder to focus on studying. One of the most important factors predicting educational success is having parents who actively support and are involved in their child’s education. Low-income parents who need to work several jobs, have little education themselves, or in some cases speak minimal English may not be able to help their children as much as they would like, despite their best intentions.

Among the best predictors of high educational attainment is having a parent with a high level of educational attainment—and thus the cycle unfortunately continues. Children who grow up with educated parents, who leverage their educations to obtain good-paying jobs, can afford to live in neighborhoods with higher property values and a better tax base for their schools, which in turn provide better preparation for college success. Public schools in affluent areas with insufficient public funding have the ability to raise private funds, so budget cuts and economic downturns affect them less.

These disparities reveal how socioeconomic status and race are deeply intertwined. Although African Americans and Latinos have closed some of the achievement gaps in recent decades, the gaps still persist. Think about the area where you live: Is it mostly segregated? Are there black or Latino neighborhoods that are mostly poor? If you live near just about any American city, the answer is yes. These communities initially developed, and persist, because of public policies that ensured the continuation of racial inequality, even after the demise of slavery and Jim Crow laws and despite the civil rights movement of the twentieth century.

In the past decade, the federal government attempted to address these disparities through its No Child Left Behind (NCLB) policy. In theory, this program was supposed to assess how well schools worked and provide options for those attending less effective schools, including tutoring, after-school programs, or even transferring to another school.50 Critics have argued that NCLB overemphasizes standardized testing and has not provided sufficient funding to bolster failing schools. The policy also includes sanctions and penalties for schools that do not meet certain goals, which further challenges schools in already difficult circumstances. Improving school achievement requires more than fixing failing schools—to significantly reduce the disparities in graduation rates and test scores, we also need to begin repairing the communities that those schools serve, to help break the cycle at all points.

As you can imagine, making changes like this takes time, investment, and commitment, things that we have been mostly unwilling to provide to America’s poorest citizens, particularly during times of budget cuts. Throw in the contentious subject of race and inequality, and suddenly it seems much easier to talk about the problem of television, video games, and computers. But kids who do not have access to computers are unlikely to develop the same sort of computer skills as their peers. The digitally disempowered are most likely to be from low-income families and may live in communities whose libraries have no computers or Internet access, or that have no public library at all.

According to a 2002 Annie E. Casey Foundation study, having access to a computer at home increases educational performance, even when factors like income are taken into account. Not surprisingly, income is a major factor in determining who is likely to have a computer at home. In 2009, 84 percent of Asian American households had Internet access, as did 79 percent of white households. By contrast, just 60 percent of black households and 57 percent of Latino families did. Households headed by adults with less education were dramatically less likely to have computers: 39 percent of those who did not finish high school had a computer at home, compared with 63 percent of high school graduates, 79 percent of those with some college or an associate’s degree, and 90 percent of college graduates. These differences both reflect—and likely reproduce—economic disparities. A 2010 Kaiser Family Foundation report found that among eight- to eighteen-year-olds, whites are still more likely to have computers and Internet access at home than African American or Latino kids. White young people are also more likely to go online at school than African Americans or Latinos. And children with college-educated parents are more likely to have computers and Internet access at home than children with less educated parents.51

Clearly, low-income families have more pressing needs, like food and rent, to meet before buying a computer or subscribing to an Internet service provider. Even when schools in low-income communities do have computers, the machines may not be up to date, and the time students can individually spend using them is limited. Over time, this disparity in computer usage translates into less time to do homework assignments on a computer, less ease with computer software, fewer Internet research opportunities, and an overall educational disadvantage. A Duke University study found small but significant differences in students’ math and reading scores related to home computer access.52 Those without computer skills today already face serious employment setbacks, which are bound to multiply.

Common sense tells us that if someone is watching television, playing games, or otherwise avoiding school, work, or family responsibilities, that is not good. Planting oneself in front of the TV or computer screen for long stretches does have consequences, and this chapter does not suggest otherwise.

But those who argue that television and media are behind some of this country’s serious educational problems are off the mark. For some, the only solution is to never watch television or, as Jerry Mander suggested in 1977, eliminate it altogether. While traditional television is shifting away from live viewing on a dedicated television set, video viewing has expanded to other media platforms, with the explosion of YouTube and video capabilities on smartphones and other devices.

As our communications media shift, intellectual skills shift along with them. Rather than taking the glass-half-empty approach, we might instead look to see what we gain from these changes and how they can enhance education in the future. Beyond popular culture, we must also deal with the stubborn issue of inequality, which is the most important factor in understanding educational disparities—not simply whether someone watched Sesame Street as a toddler. Focusing on popular culture places the entire burden of educational disparities onto individuals or parents, while completely disregarding the persistent racial and economic inequality that is so often reflected and reproduced in our educational system.

Notes

1. Nicholas Carr, “Is Google Making Us Stupid?,” Atlantic, July/August 2008, http://www.theatlantic.com/magazine/archive/2008/07/is-google-making-us-stupid/306868/; Nicholas Carr, “Does the Internet Make You Dumber?,” Wall Street Journal, June 5, 2010, http://online.wsj.com/article/SB10001424052748704025304575284981644790098.html; David Wygant, “Are Smartphones Making Us Stupid?,” Huffington Post, November 16, 2010, http://www.huffingtonpost.com/david-wygant/are-smartphones-making-us_b_783750.html; Associated Press, “Generation Hopeless: Are Computers Making Kids Dumb?,” September 30, 2010, http://losangeles.cbslocal.com/2010/09/30/generation-helpless-are-computers-making-kids-dumb/; Sandy Hingston, “Is It Just Us, or Are Kids Getting Really Stupid?,” Philadelphia, December 2010, http://www.phillymag.com/articles/feature-is-it-just-us-or-are-kids-getting-really-stupid/.
2. Deborah Simmons, “The Pull of Pop Culture,” Washington Times, January 18, 2008, A17; Mary A. Mitchell, “Successful Kids Reject Pop Culture’s Message,” Chicago Sun-Times, June 7, 2001, 14.
3. Neil Postman, Amusing Ourselves to Death: Public Discourse in the Age of Show Business, 13, 16.
4. Jerry Mander, Four Arguments for the Elimination of Television, 204.
5. Bureau of Labor Statistics, “Table 11: Time Spent in Leisure and Sports Activities for the Civilian Population by Selected Characteristics, 2011 Annual Averages,” Economic News Release, June 22, 2012, http://www.bls.gov/news.release/atus.t11.htm.
6. “Americans Using TV and Internet Together 35% More Than a Year Ago,” Nielsen Wire, March 22, 2010, http://blog.nielsen.com/nielsenwire/online_mobile/three-screen-report-q409/. See also PR Newswire, “Under 35’s Watch Video on Internet and Mobile Phones More Than Over 35’s; Traditional TV Viewing Continues to Grow,” Nielsen Reports TV, Internet, and Mobile Usage Among Americans Press Release, July 8, 2008, http://www.prnewswire.com/cgi-bin/stories.pl?ACCT=109&STORY=/www/story/07–08–2008/0004844888&EDATE=.
7. Brian Stelter, “Young People Are Watching, but Less Often on TV,” New York Times, February 8, 2012, http://www.nytimes.com/2012/02/09/business/media/young-people-are-watching-but-less-often-on-tv.html?pagewanted=all.
8. Daniel R. Anderson, “Educational Television Is Not an Oxymoron.”
9. Daniel R. Anderson et al., “Early Childhood Television Viewing and Adolescent Behavior: The Recontact Study.”
10. Ibid., 41.
11. Gary D. Gaddy, “Television’s Impact on High School Achievement.”
12. Anderson, “Educational Television Is Not an Oxymoron.”
13. Jean Lotus, “It’s Official: TV Linked to Attention Deficit,” Post on White Dot, the International Campaign Against Television Blog, July 21, 2008, http://www.whitedot.org/issue/iss_story.asp?slug=ADHD%20Toddlers.
14. Dimitri A. Christakis et al., “Early Television Exposure and Subsequent Attentional Problems in Children,” Pediatrics 113 (2004): 708–713 (quote on 711).
15. Ignacio David Acevedo-Polakovich et al., “Disentangling the Relation Between Television Viewing and Cognitive Processes in Children with Attention-Deficit/Hyperactivity Disorder and Comparison Children,” Archives of Pediatrics and Adolescent Medicine 160 (2006): 358, 359.
16. Ibid., 359; Carl Erik Landhuis et al., “Does Childhood Television Lead to Attention Problems in Adolescence?,” Pediatrics 120 (2007): 532–537; Edward L. Swing et al., “Television and Video Game Exposure and the Development of Attention Problems,” Pediatrics 126 (2011): 214–221.
17. Courtney Hutchison, “Watching SpongeBob SquarePants Makes Kids Slower Thinkers, Study Finds,” ABC News, September 12, 2011, http://abcnews.go.com/Health/Wellness/watching-spongebob-makes-preschoolers-slower-thinkers-study-finds/story?id=14482447#.T7UyCVKh2Sp.
18. Angeline S. Lillard and Jennifer Peterson, “The Immediate Impact of Different Types of Television on Young Children’s Executive Function,” Pediatrics 124 (2011): e1–e36.
19. Claudia Wallis, “Does Watching TV Cause Autism?,” Time, October 26, 2006, http://www.time.com/time/health/article/0,8599,1548682,00.html; Greg Easterbrook, “TV Really Might Cause Autism,” Slate, October 16, 2006, http://www.slate.com/id/2151538.
20. American Academy of Pediatrics, “Policy Statement,” Pediatrics 104 (1999): 341–343, http://aappolicy.aappublications.org/cgi/content/full/pediatrics;104/2/341.
21. Victoria J. Rideout, Elizabeth A. Vandewater, and Ellen A. Wartella, “Zero to Six: Electronic Media in the Lives of Infants, Toddlers, and Preschoolers,” Henry J. Kaiser Family Foundation, 2003, http://www.kff.org/entmedia/entmedia102803pkg.cfm.
22. Steven Johnson, Everything Bad Is Good for You: How Today’s Popular Culture Is Actually Making Us Smarter, 14, 96.
23. Katy Bachman, “Study: Teens Would Rather Hit Web, TV Than Read,” Adweek, June 19, 2008, http://www.adweek.com/news/advertising-branding/study-teens-would-rather-hit-web-tv-read-108836.
24. Hope M. Cummings and Elizabeth A. Vandewater, “Relation of Adolescent Video Game Play to Time Spent in Other Activities,” Archives of Pediatrics and Adolescent Medicine 161 (2007): 684–689.
25. Iman Sharif and James D. Sargent, “Lack of Association Between Video Game Exposure and School Performance: In Reply,” Pediatrics (2007): 1061, 1065.
26. Ibid., 413–414.
27. Johnson, Everything Bad Is Good, 14.
28. Shelley Widhalm, “OMG; How 2 Know Wen 2 Writ N Lingo?,” Washington Times, January 24, 2008, B1; Letter to the editor, “Email and the Decline of Writing,” New York Times, December 11, 2004, A18; Mary Kolesnikova, “Language That Makes You Say OMG; Teens Are Letting Emoticons and Other Forms of Chat-Speak Slip into Their Essays and Homework,” Los Angeles Times, May 13, 2008, http://www.latimes.com/news/opinion/la-oe-kolesnikova13-2008may13,0,4111689.story.
29. Amanda Lenhart et al., “Writing, Technology, and Teens,” Pew Internet and American Life Project, April 24, 2008, http://www.pewinternet.org/pdfs/PIP_Writing_Report_FINAL3.pdf, iv; Michael Gerson, “Don’t Let Texting Get U :-(,” Washington Post, January 24, 2008, A19.
30. Aaron Smith, “Americans and Texting,” Pew Internet and American Life Project, September 19, 2011, http://pewinternet.org/Reports/2011/Cell-Phone-Texting-2011.aspx.
31. CellSigns, industry text-messaging statistics, November 2008, http://www.cellsigns.com/industry.shtml.
32. Lori Aratani, “Teens Can Multitask, but at What Costs?,” Washington Post, February 26, 2007, A1, http://www.washingtonpost.com/wp-dyn/content/article/2007/02/25/AR2007022501600.html; Claudia Wallis, “The Multitasking Generation,” Time, March 19, 2006, http://www.time.com/time/magazine/article/0,9171,1174696,00.html.
33. “Media Multitasking Among American Youth: Prevalence, Predictors, and Pairings,” Henry J. Kaiser Family Foundation, December 12, 2006, http://www.kff.org/entmedia/upload/7593.pdf.
34. Karen Sternheimer, Kids These Days: Facts and Fictions About Today’s Youth, 8–9. See also Joel Best, The Stupidity Epidemic: Worrying About Students, Schools, and America’s Future, 4–8.
35. Marie Winn, The Plug-in Drug: Television, Computers, and Family Life, 286. See the College Board, “Mean SAT Scores by High School GPA: 1997 and 2007,” http://www.collegeboard.com/prod_downloads/about/news_info/cbsenior/yr2007/tables/17.pdf.
36. US Department of Education, National Center for Education Statistics, National Longitudinal Study of the High School Class of 1972; High School and Beyond National Longitudinal Study of 1980 Seniors; National Longitudinal Study of 1988, Second Follow-Up; Student Survey, 1992; Education Longitudinal Study, 2002, First Follow-Up 2004, http://www.icpsr.umich.edu/cocoon/ICPSR/SERIES/00107.xml. Note that the study is ongoing, the most recent cohort being the class of 2009, which is being followed through 2012.
37. US Department of Education, National Center for Education Statistics, Digest of Education Statistics, 2010, http://nces.ed.gov/fastfacts/display.asp?id=98, chap. 3.
38. US Census Bureau, Educational Attainment by Race and Hispanic Origin: 1960 to 2006, US Census of Population, 1960, 1970, and 1980, vol. 1; Current Population Reports P20-550 and earlier reports, http://www.census.gov/compendia/statab/tables/08s0217.pdf; US Department of Education, National Center for Education Statistics, Digest of Education Statistics, 2006, http://nces.ed.gov/programs/digest/d06/ch_2.asp, chap. 2; US Department of Education, National Center for Education Statistics, Status and Trends in the Education of Racial and Ethnic Minorities, 2006, http://nces.ed.gov/pubs2007/minoritytrends/figures/figure_14.asp and http://nces.ed.gov/fastfacts/display.asp?id=171.
39. “Forty-Three Percent of 2011 College-Bound Seniors Met SAT College and Career Readiness Benchmark,” College Board, September 14, 2011, http://press.collegeboard.org/releases/2011/43-percent-2011-college-bound-seniors-met-sat-college-and-career-readiness-benchmark.
40. Sternheimer, Kids These Days, 69–71.
41. Donald P. Hayes, Loreen T. Wolfer, and Michael F. Wolfe, “Schoolbook Simplification and Its Relation to the Decline in SAT-Verbal Scores,” American Educational Research Journal 33 (1996): 489–508.
42. James R. Flynn, What Is Intelligence?, 8–9.
43. Ibid., 19, 20.
44. US Department of Education, National Center for Education Statistics, Digest of Education Statistics, 2010 (NCES 2011-015), Table 124, http://nces.ed.gov/programs/digest/d10/tables/dt10_124.asp.
45. US Department of Education, National Center for Education Statistics, The Condition of Education, 2007 (NCES 2007-064), Table 18-1, http://nces.ed.gov/programs/coe/2007/section2/table.asp?tableID=692 and http://nces.ed.gov/fastfacts/display.asp?id=69.
46. “To Read or Not to Read: A Question of National Consequence,” National Endowment for the Arts, Research Report no. 47, November 2007, p. 7, http://www.nea.gov/research/ToRead.pdf; Harris Poll, “Reading and TV Watching Still Favorite Activities, but Both Have Seen Drops,” telephone poll of 1,052 American adults aged eighteen and over, conducted October 16–23, 2007, http://www.harrisinteractive.com/harris_poll/index.asp?PID=835; “Reading on the Rise: A New Chapter in American Literacy,” National Endowment for the Arts, January 2009, http://www.arts.gov/research/ReadingonRise.pdf.
47. Anne H. Gauthier and Timothy Smeeding, “Historical Trends in the Patterns of Time Use of Older Adults,” paper presented at the Conference on Population Ageing in Industrialized Countries: Challenges and Issues, Tokyo, Japan, March 19–21, 2001, http://www.oecd.org/dataoecd/21/5/2430978.pdf; Harris Poll, “Leisure Time Plummets 20% in 2008—Hits New Low,” telephone poll of 1,010 Americans aged eighteen and over, conducted October 16 and 19, 2008, http://www.harrisinteractive.com/vault/Harris-Interactive-Poll-Research-Time-and-Leisure-2008-12.pdf.
48. US Census Bureau, Educational Attainment by Race and Hispanic Origin: 1960 to 2010, US Census of Population, 1960, 1970, and 1980, vol. 1; Current Population Reports and earlier reports, http://www.census.gov/compendia/statab/2012/tables/12s0229.xls.
49. Sternheimer, Kids These Days, 70–71.
50. US Department of Education, Office of the Secretary, Office of Public Affairs, No Child Left Behind: A Parents [sic] Guide (Washington, DC: Government Printing Office, 2003).
51. Tony Wilhelm, Delia Carmen, and Megan Reynolds, “Connecting Kids to Technology: Challenges and Opportunities,” Annie E. Casey Foundation, June 2002, http://www.aecf.org/upload/publicationfiles/connecting%20kids%20technology.pdf; US Census Bureau, Reported Internet Usage for Individuals 3 Years and Older, by Selected Characteristics: 2009, http://www.census.gov/hhes/computer/publications/2009.html; Victoria J. Rideout, Ulla G. Foehr, and Donald F. Roberts, “Generation M2: Media in the Lives of 8- to 18-Year-olds” (Menlo Park, CA: Kaiser Family Foundation, 2010), http://www.kff.org/entmedia/upload/8010.pdf, 23.
52. Charles T. Clotfelter, Helen F. Ladd, and Jacob L. Vigdor, “Scaling the Digital Divide: Home Computer Technology and Student Achievement,” Harvard University Colloquia, July 29, 2008, http://www.hks.harvard.edu/pepg/PDF/events/colloquia/Vigdor_ScalingtheDigitalDivide.pdf.

CHAPTER 5


From Screen to Crime Scene Media Violence and Real Violence

In 2011 the US Supreme Court upheld a federal court’s ruling overturning a 2005 California law that would have banned the sale of violent video games to minors. The statute—ironically signed into law by violent-movie veteran and then-governor Arnold Schwarzenegger—equated violent video games with pornography and argued that video game violence incited actual youth violence.

Writing for the 7–2 majority, Justice Antonin Scalia noted that video games are protected by the First Amendment. He also described how popular culture has been blamed for inciting violence throughout American history; only the “villains” change (from dime novels to movies to comic books to television to music and now to video games).1

Critics from the Left and Right panned the ruling. An op-ed in the conservative Washington Times argued that “the Court took a wrong position in this case because the framers of the Constitution could not envision a world where children as young as 6 or 7 would be able to walk into shops without their parents’ consent and buy virtual weapons they could use to simulate murder.” The liberal magazine the Nation featured an article that argued in favor of protecting minors’ rights to free speech but criticized the ruling as “simply bizarre in dismissing the claimed harmful effects of violent depictions while still insisting on the strictest puritanical view of the dangers of sexual imagery.” The Washington Post editorialized that the decision was “misguided,” insisting that “the diminished threat of government intervention should in no way impede efforts to keep the most violent games out of the hands of children.”2

It should come as no surprise that many people were upset by the Court’s decision. For more than a century, it has been taken for granted as “common sense” that media violence causes actual violence: thousands of news reports and hundreds of studies on the connection have helped the public believe this is a no-brainer. But the reality of violence is far more complex.

In recent years, video games have seemed to connect the dots between high-profile school shootings. Immediately after the 2007 shooting at Virginia Tech, critics on cable-news networks blamed video games for the rampage. Although it turned out that the VT shooter rarely played video games, the 1999 Columbine High School shooters were allegedly aficionados of Doom, a game in which a heavily armed protagonist stops demons from taking over Earth, and had reportedly used their classmates’ images for target practice in their play.

These incidents, combined with dramatic news headlines, have repeatedly told us that media are to blame. For example, “Study Links Violent Video Games to Violent Thought, Action” (Washington Post), “Violent Video Games and Changes in the Brain” (Los Angeles Times), “A Poisonous Pleasure” (St. Louis Post-Dispatch), and “Survey Connects Graphic TV Fare, Child Behavior” (Boston Globe) are a few of the thousands of stories that tell us media are the root cause of our violence problem.3

I confess: I once believed the popular culture explanation myself. I’ve never been a fan of graphic violence in movies or television, and like many others I thought that violence in popular culture must spread like some widespread virus. Before beginning graduate work in sociology, I studied psychology and read many of the media-violence studies. Students of psychology are taught that the individual is the primary unit of analysis and that something that may be bad for the individual can be multiplied many times over and thus become a social problem. This perspective complements the American focus on individualism, where we tend to view an individual’s behavior as stemming only from personal choices rather than social forces.

But as I began to review the research, I saw that the results were not as compelling as I had hoped or had heard on the news. I eventually realized that my feelings about violent movies were driven more by my personal distaste for media violence than by solid social science. Other scholars, like psychologist Jonathan L. Freedman, challenge the conclusions of this research too. Freedman evaluated every study published in English that explored the media-violence connection and concluded that “the evidence … is weak and inconsistent, with more nonsupportive results than supportive results.”4 Later, when I began graduate work in sociology, I developed a clearer understanding of large-scale patterns and learned about the structural roots of violence.

Choosing to avoid violent popular culture for ourselves and our families is certainly the right decision for many people, based on personal tastes, values, and beliefs. But those who enjoy action movies, music that references violence, or first-person shooter video games are not necessarily a threat to the rest of us. Their engagement with violent media is more complex than a simple cause-effect relationship.

Because Americans spend so much time, energy, and money focusing on violent popular culture, we ironically often fail to examine violence itself. If violence is really the issue of importance here, we should start by studying violence before studying media.

This chapter critically examines the moral panic surrounding popular culture and violence by showing how the fear of media violence distracts from the more complex structural causes of violence. The many taken-for-granted assumptions about the relationship between media and violence are profoundly flawed, which I address in the following pages. First, despite the increasingly graphic capabilities of video games, violence in the United States has plummeted over the past two decades. Second, when young people do become violent, they are not merely imitating media violence, and other factors can better explain their behavior. Third, the research on media violence is not nearly as conclusive as many of its authors and sensationalized news reports would have us believe. And last, it is important to consider the context of violence to understand how people of all ages make sense of violence in media, their communities, our nation, and the world.

Violence Has Declined as Media Culture Has Expanded

Media culture has expanded exponentially over the past few decades. It’s hard to keep up with the newest gadgets that make popular culture more portable and allow us to be entertained virtually anywhere. Traditional media like television have grown from a handful of channels to hundreds, now accessible through a variety of online platforms. Video game graphics are far more realistic, and far more graphic, than they were in the early days of Pac-Man and Space Invaders.

Yet as media culture has expanded, we have seen dramatic declines in rates of crime and violence in the United States. Homicide rates are at their lowest levels in nearly five decades; between 1992 and 2010, the homicide rate fell by almost half, from 9.3 homicides per 100,000 Americans annually to 4.8 per 100,000. The rate of victimization for all violent crimes fell by 70 percent between 1993 and 2010.5
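As a quick check on the “almost half” characterization, the decline follows directly from the two rates just cited:

\[
\frac{9.3 - 4.8}{9.3} = \frac{4.5}{9.3} \approx 0.48,
\]

or a drop of roughly 48 percent between 1992 and 2010.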

Figure 5.1: Homicide Victimization Rates, 1950–2010, per 100,000 Source: FBI Uniform Crime Reports, 1950–2010

Juveniles were no exception. The homicide offending rate for teens fourteen to seventeen fell by 71 percent between 1993 and 2000 and has been flat ever since. During the ten-year period between 2000 and 2010, arrests of juveniles for violent crimes (like murder, rape, and aggravated assault) declined 22 percent; for adults eighteen and older, the violent arrest rate also declined, but only by 8 percent.6 These numbers just don’t support the panicked claim that popular culture will create a generation of people who take pleasure in hurting others.

It’s also important to keep in mind that adults are far more likely to commit violent crimes than juveniles are, although most of the media-violence arguments focus on young people as potential predators. True, we did see a rise in homicides committed by teens in the late 1980s, but we also saw a rise in homicides committed by adults during that period.7 There is no youth crime wave now; the one that did occur in the late 1980s and early 1990s was matched by an adult crime wave. Rates for both violent crime and property crime have fallen significantly in the past twenty years for both juveniles and adults. But most of our attention is placed on youth, especially when violent media are considered a motivating factor. We seldom hear public outcry about what motivates adults to commit crimes, although they are the most likely perpetrators. Adults aged eighteen to twenty-four have long been, and remain, the age group most likely to commit homicide.

Figure 5.2: Homicide Offending Rates, by Age, 1980–2008 Source: Bureau of Justice Statistics

So in the big picture, juvenile violence rates have declined. But are kids becoming killers at earlier ages, lured by gory media they don’t understand but imitate with lethal results? The Federal Bureau of Investigation (FBI) began collecting data on homicide arrests for very young children in 1964, so we can test this quite easily, especially because very young perpetrators have a good chance of getting caught.

Homicide arrest rates for children ages six to twelve are minuscule: in 2010 there were 7 arrests out of a population of more than 36 million children. By contrast, 1,430 adults aged twenty-five to twenty-nine were arrested for homicide in 2010 (as were 90 people sixty-five or older). Still, 7 kids are 7 too many, until we consider that this was the fewest arrests since the FBI began keeping separate numbers for young children in 1964. Overall, the period between 1968 and 1976 featured the highest arrest rates, and the numbers have generally plummeted since.8 Young kids are actually less likely to be killers now than in the past.

So why do we seem to think that kids are now more violent than ever? A Berkeley Media Studies Group report found that half of news stories about youth were about violence and that more than two-thirds of violence stories focused on youth.9 We think kids are committing the lion’s share of violence because they constitute a large proportion of crime news. Chances are good that some, if not all, of those seven incidents made the news and stuck in viewers’ memories. The reality is that adults commit most crime, but a much smaller percentage of those stories make the news. Emotional stories draw our attention far more than statistics, which are often dry and left out completely in news stories that focus on young offenders.

But how do we explain the young people who do commit violence? Can violent media help us here? Broad patterns of violence do not match media use as much as they mirror poverty rates. While most people who are poor do not commit crimes and are not violent, there are large-scale patterns worth noting. Take the city of Los Angeles, where I live, as an example. Here, as in many other cities, violent crime rates in lower-income areas are high relative to those areas’ share of the population. Homicide patterns provide the most dramatic example.

For example, the Seventy-Seventh Street division (near the flash point of the 1992 civil unrest) reported 13 percent of the city’s homicides in 2010, yet contained just 5 percent of the city’s total population. Conversely, the West Los Angeles area (which includes affluent neighborhoods such as Brentwood and Bel Air) reported less than 1 percent of the city’s homicides but accounted for 6 percent of the total population.10 If media culture really were a major cause of violence, wouldn’t the children of the wealthy, who have greater access to the Internet, video games, and other visual media, be at greater risk of becoming violent? The numbers don’t bear this out, because violence patterns do not match media use.

Violence can be linked with a variety of issues, the most important one being poverty. Criminologist E. Britt Patterson examined dozens of studies of crime and poverty and found that communities with extreme poverty, a sense of bleakness, and neighborhood disorganization and disintegration were most likely to have higher levels of violence.11 Violence may be an act committed by an individual, but it is also a sociological phenomenon, not just an individual one, related to patterns of persistently high unemployment, limited educational opportunities, and geographic isolation from more stable communities.12

To attribute actual violence to media violence, we would have to believe that violence has its origins mostly in individual psychological functioning and thus that any kid could snap from playing too many video games or watching violent cartoons. Ongoing sociological research has identified other risk factors that are based on environment: substance use, overly authoritarian or lax parenting, delinquent peers, neighborhood violence, and weak ties to one’s family or community. If we are really interested in confronting youth violence, these are the issues that must be addressed first. Media violence is something worth looking at to better understand our cultural fascination with violence, but not as the primary cause of actual violence.

What about the kids who aren’t from poor neighborhoods and who come from supportive environments? When middle-class white youths commit acts of violence, we seem to be at a loss for explanations beyond media violence. These young people often live in safe communities, enjoy many material privileges, and attend well-funded schools. Opportunities are plentiful. What else could it be, if not media?

For starters, incidents in these communities are rare but extremely well publicized. These stories are dramatic and emotional and thus great ratings boosters. Central-city violence doesn't attract nearly the same attention or provoke the same public outcry to ban violent media. We seem to come up empty when looking for explanations of why affluent young white boys, for example, would plot to blow up their school.

We rarely look beyond the media for our explanations, but the social contexts are important here, too. Even well-funded suburban schools can become overgrown, impersonal institutions where young people easily fall through the cracks and feel alienated. Sociologists Wayne Wooden and Randy Blazak suggest that the banality and boredom of suburban life can create overarching feelings of meaninglessness in young people; perhaps they find their parents' struggles to obtain material wealth empty, and the desire for money is not enough to motivate them to conform. White juvenile homicide arrest rates rose (along with black juvenile arrest rates) in the late 1980s and peaked in 1994. The number of African American juveniles arrested for homicide has tumbled even more sharply since its peak in the early 1990s, and homicide arrest rates are now at their lowest point in a generation.13

There's been a lot of good news about crime and violence in the United States over the past two decades that gets lost in fears that media violence is creating violent young people. In reality, young people today are far less likely to engage in violence than their parents' generation was.

Violent Youth Are Not Mindless Imitators

When young people do commit crimes or act violently, news reports often compare incidents to popular culture. Didn't the killer act like he was playing a video game? After the shootings at Columbine and other schools during the 1990s, video games bore the brunt of the blame. In 1999 retired army lieutenant colonel David Grossman published a book, Stop Teaching Our Kids to Kill, claiming video games serve as military-like training that inspires young people to murder. Grossman's boot camp–instructor authority brought the book a lot of attention and fed the video game fear. "There's a generation growing up that the media has cocked and primed for draconian action and a degree of bloodlust that we haven't seen since the Roman children sat in the Colosseum and cheered as the Christians were killed," he warned.14 But as we saw in the previous section, crime data show us that kids are not displaying bloodlust, at least not the real, unpixelated kind.

Critics like Grossman argue that video games are even more influential than movies, television, or music because the player is actively participating in the game. This, of course, is what makes video games fun and exciting and sets them apart from other media, where consumers take on more of a spectator role. Critics fear that players of violent games are rewarded for acts of virtual violence, which they believe may translate into learning that violence is acceptable. Straight out of B. F. Skinner, the fear stems from the idea that we learn from rewards, even vicarious rewards. The prevalence of violent video game playing among young boys troubles many for this reason.

Parents will tell you that their kids often play fight in the same style as characters from cartoons and other popular culture. But as author Gerard Jones points out in Killing Monsters: Why Children Need Fantasy, Super Heroes, and Make-Believe Violence, imitative behavior in play is a way young people may work out pent-up hostility and aggression and feel powerful. Cops and robbers and cowboys and Indians are modes of play in which children, often boys, have long acted out violent scenarios without widespread public condemnation. Play is different from acting violently, where the intention is to inflict pain.

The idea that children will imitate media violence draws on Albert Bandura's classic 1963 "Bobo doll" experiment. Bandura and colleagues studied ninety-six children approximately three to six years old (the study doesn't mention details about the children's community or economic backgrounds). The children were divided into groups and watched various acts of aggression against a five-foot inflated Bobo doll. Surprise: when they had their chance, the kids who watched adults hit the doll pummeled it too, especially those who watched the cartoon version of the doll beating. Although taken as proof that children will imitate aggressive models from film and television, this study is riddled with leaps in logic.

The main problem with the Bobo-doll study is fairly obvious: hitting an inanimate object is not necessarily an act of violence, nor is real life something that can be adequately re-created in a laboratory. In fairness, contemporary experiments have been a bit more complex than this one, using physiological measures like blinking and heart rate to measure effects. But the only way to assess a cause-effect relationship with certainty is to conduct an experiment, yet violence is too complex an issue to isolate into independent and dependent variables in a lab.

Imagine designing a study where one group is randomly assigned to live in a neighborhood where dodging drug dealers and gang members is normal. Or where one group is randomly assigned to be verbally and physically abused by an alcoholic parent. What happens in a laboratory is by nature out of context, and real-world application is highly questionable. We do learn about children's play from this study, but by focusing only on how they might become violent, we lose a valuable part of the data.

So while the Bobo-doll study is limited because it took place in a controlled laboratory and did not involve actual violence, let's consider a highly publicized case that on the surface seems to be proof that some kids are copycat killers. In the summer of 1999, a twelve-year-old boy named Lionel Tate beat and killed six-year-old Tiffany Eunick, the daughter of a family friend in Pembroke Pines, Florida. Claiming Lionel was imitating wrestling moves he had seen on television, his defense attorney attempted to prove that Lionel did not know what he was doing when he hurt Tiffany; he subpoenaed famous wrestlers like Hulk Hogan and Dwayne "the Rock" Johnson in hopes that they would perform for the jury to show how their moves are choreographed. Ultimately, the wrestlers did not testify, but the attorney argued that Lionel should not be held criminally responsible for what he called a tragic accident.

The jury didn’t buy this defense, finding that the severity of the girl’s injuries was inconsistent with the wrestling claim. Nonetheless, the news media ran with the wrestling alibi. Headlines shouted “Wrestle-Slay Boy Faces Life,” “Boy, 14, Gets Life in TV Wrestling Death,” and “Young Killer Wrestles Again in Broward Jail.”15 This case served to reawaken fears that media violence, particularly as seen in wrestling, is dangerous because kids allegedly don’t understand that real violence can cause real injuries. Cases like this one are used to justify claims that kids may imitate media violence without recognizing the real consequences.

Lionel's defense attorney capitalized on this fear by stating that "Lionel had fallen into the trap so many youngsters fall into." But many youngsters don't fall into this trap, and neither did Lionel. Lionel Tate was not an average twelve-year-old boy; the warning signs were certainly present before that fateful summer evening. Most news reports focused on the alleged wrestling connection without exploring Lionel's troubled background. He was described by a former teacher as "almost out of control," prone to acting out, disruptive, and seeking attention. A forensic psychologist who evaluated Lionel in 1999 described him as having "a high potential for violence" and "uncontrolled feelings of anger, resentment and poor impulse control."16 Neighbors also described his neighborhood as dangerous, with a significant drug trade.

Evidence from the case also belies the claim that Lionel and Tiffany were just playing, particularly the more than thirty-five serious injuries that Tiffany sustained, including a fractured skull and massive internal damage. These injuries were not found to be consistent with play wrestling, as the defense claimed. The prosecutor pointed out that Lionel did not tell investigators he was imitating wrestling moves initially; instead, he said they were playing tag but changed his story to wrestling weeks later. Although his defense attorney claimed Lionel didn’t realize someone could really get hurt while wrestling, Lionel admitted that he knew television wrestling was fake.17

In spite of the fact that Lionel was deemed too naive to know the difference between media violence and real violence, he was tried as an adult and received a sentence of life in prison without parole. Ultimately, Lionel's new defense team succeeded in having his conviction overturned in 2003; on appeal, a judge ruled that Lionel should have been granted a pretrial hearing to determine whether he understood the severity of the charges against him. His defense this time was that he had accidentally jumped on Tiffany while running down a staircase. He was released in January 2004 on the condition that he would remain under court supervision for eleven years. His case provides an example of the ultimate contradiction: if children really don't know any better than to imitate wrestling, why would we apply adult punishment? Completely lost in the discussion surrounding this case is our repeated failure as a society to treat children like Lionel before violent behavior escalates, to recognize the warning signs before it is too late.

Unfortunately, this was not the end of Lionel Tate's troubles. Eleven months after his release, Lionel violated his probation when he was found outside his home at two thirty in the morning with a knife, and a judge extended his probation period to fifteen years. In May 2005, Lionel was arrested for robbing a pizza delivery person at gunpoint, and in 2006 he was sentenced to thirty years in prison for violating his probation.18

The imitation hypothesis suggests that violence in media puts kids like Lionel over the edge, the proverbial straw that breaks the camel's back, but this enables us to divert our attention from the seriousness of the other risk factors in Lionel's life. Chances are we would never have heard about Lionel or Tiffany if there were no wrestling angle to the story.

The biggest problem with the imitation hypothesis is that it suggests that we focus on media instead of the other 99 percent of the pieces of the violence puzzle. When news accounts neglect to provide the full context, it appears as though media violence is the most compelling explanatory factor.

It is certainly likely that young people who are prone to become violent are also drawn toward violent entertainment. For instance, the Columbine shooters probably used video games to rehearse acting out their rage on others, but where the will to carry out such extreme levels of violence came from is much more complex. Rather than implanting violent images, video games and other violent forms of popular culture enable people to indulge in dark virtual fantasies, to act out electronically in ways that the vast majority of them would never do in reality.

Here's what the media-imitation explanation often leaves out: children whose actions parallel media violence come with a host of other, more important risk factors. We blame media violence to deflect blame away from adult failings: not simply the failure of parents but our society's failure to help troubled young people, whom we often overlook until it is too late.

The Flaws of Media-Effects Research

But what about all the research done on media and violence that tells us there is a connection? Although this is probably one of the most researched issues in social science, the research is not nearly as conclusive as dramatic news accounts suggest. Headlines like "Survey Connects Graphic TV Fare, Child Behavior" (Boston Globe), "Adolescents' TV Watching Linked to Violent Behavior" (Los Angeles Times), "Study Links Violent Video Games to Violent Thought, Action" (Washington Post), "Cutting Back on Kids' TV Use May Reduce Aggressive Acts" (Denver Post), "Doctors Link Kids' Violence to Media" (Arizona Republic), and "Study Ties Aggression to Violence in Games" (USA Today) are commonplace and help create the idea that the research is conclusive and clear. In fairness, the social science research isn't readily available (or particularly interesting) for the public to read themselves, nor, I suspect, do most reporters read the studies on which they report. If they did, they would find, at best, only a weak connection between violent programming and aggressive behavior.19

Many researchers have built their careers on investigating a variety of potentially harmful effects that television, movies, music, video games, and other forms of popular culture might have. Two things are interesting about this body of research: first, it concentrates heavily on children, presuming that effects are strong on children and perhaps unimportant for adults; and second, it almost always tests for negative effects of popular culture, with limited interest in other implications, such as how users make meanings from such forms of media. Even when crime rates drop, as they have in the United States over the past two decades, these studies don't investigate whether media could explain positive trends. We might want to ask why many researchers are so committed to finding reasons to blame media for social problems, making popular culture, rather than violence itself, the central variable of analysis.

In one study, researchers considered responses to a “hostility questionnaire” or children’s aggressive play as evidence that media violence can lead to real-life violence. But aggression is not the same as violence, although in some cases it may be a precursor to violence. There is a big difference between rough play at recess, being involved in an occasional schoolyard brawl, and becoming a serious violent criminal. Most media-effects studies actually measure aggression, not violence.

Nor is it clear that these effects are anything but immediate. And aggression is not necessarily a pathological condition; we all have aggression that we need to learn to deal with and channel appropriately. A second problem is that several of the studies use correlation statistics as proof of causation. Correlation indicates the existence of a relationship but cannot measure cause and effect. Reporters may not recognize this, and some researchers may forget it, misleading readers into believing research is more conclusive than it actually is.

One such study claiming media violence turned children into violent adults ironically made news the week that American troops entered Iraq in the spring of 2003. This study is unique in that it tracked 329 respondents for fifteen years, but it contains several serious shortcomings that prevent us from concluding that television creates violence later in life.20

First, the study measures aggression, not violence. The researchers defined aggression rather broadly, constructing an “aggression composite” that includes such antisocial behavior as having angry thoughts, talking rudely to or about others, and having moving violations on one’s driving record. Violence is a big jump from getting a lot of speeding tickets.

But beyond this composite, the connection between television viewing and physical aggression for males, perhaps the most interesting measure, is relatively weak. Television viewing explains only 3 percent of the variation in physical aggression among the men studied.21 Although some subjects did report getting into physical altercations, fewer than 10 of the 329 participants had ever been convicted of a crime, too small a sample to make any predictions about serious violent offenders.
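A quick calculation, assuming the standard variance-explained reading of a correlation, shows how the 3 percent figure follows from the correlation reported in note 21: the square of a correlation coefficient gives the share of variation in one measure that the other accounts for.

\[ r = .17 \quad\Longrightarrow\quad r^{2} = (.17)^{2} \approx .029 \approx 3\ \text{percent} \]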

Other long-term studies used correlation analysis to isolate television from other factors in an attempt to connect watching television with violence later in life. A 2002 study published in Science considered important issues like childhood neglect, family income, neighborhood violence, parental education, and psychiatric disorders. Its authors found that these issues are positively correlated with both more television viewing and aggressive behavior.22

The authors concede that no causal connection can be made—it is very likely that the factors that lead people to watch more television are the same factors that contribute to aggression and violence. For instance, someone who watches a lot of television may have less parental involvement and less participation in other recreational activities like sports or extracurricular programs at school, or, for older teens, a job. They may live in communities plagued by violence and spend more of their leisure time indoors. And of course we have no idea what subjects are watching on television in studies like these, despite the authors' blanket statement that "violent acts are depicted frequently on television."

As with television, media-violence researchers began studying video games largely with the expectation that playing violent video games causes aggression in children. Articles like "Video Games and Real-Life Aggression" (2001), "Video Games: Benign or Malignant?" (1992), and "Is Mr. Pac-Man Eating Our Children?" (1997) are just a few examples of a flurry of studies that have appeared in professional journals since the 1980s, all assessing that one outcome.23

We might wonder why researchers conduct so many studies on the same issue if the findings really are as conclusive as the authors sometimes suggest. A 2007 review in the journal Aggression and Violent Behavior found a clear case of publication bias: studies testing for negative effects of video games are far more likely to be published than studies reporting other findings.24 As much as social scientists claim they can be completely objective, even scholars have preconceived beliefs and agendas that color the research questions they ask, the way their studies are designed, and the interpretations that follow.

In fairness, nearly all professional researchers are up front about the shortcomings of their findings and point out that their results are preliminary or that they cannot truly state that popular culture like video games causes violence. But when a journal article hits the news wires and blogs, cautious science tends to fly out the window. Serious problems in conception or method rarely make it into press reports because they complicate the story.

Just as with other media-violence studies, the main problem with many of these video game studies is how they define and measure aggression. For instance, a 1987 study had subjects impose fake money fines on opponents as an indicator of aggression.25 A pretty big stretch, but equally questionable measures are often used to suggest that video game users will become aggressive, and even violent.

A 2000 study by psychologists Craig Anderson and Karen Dill is a case in point. "Video Games and Aggressive Thoughts, Feelings, and Behavior in the Laboratory and Life" was published in the Journal of Personality and Social Psychology and quickly made international news. Newspapers, magazines, and other professional journals reported on the study as definitive evidence that video games can increase aggressive behavior. In May 2000, Time concluded that "playing violent video games can contribute to aggressive and violent behavior in real life."26

There’s just one problem: upon close inspection, the studies the article based its conclusions on are riddled with both conceptual and methodological problems. Let’s take a closer look to better understand why.

The Anderson and Dill results are based on two studies done with their introductory psychology students, so the sample is not representative. Part of their study looks at whether past video game use is associated with delinquency, but the most seriously delinquent youth rarely make it to college, let alone show up for an appointment to participate in a study for their psychology class. Further, their first study used nearly twice as many female students as males. But males are more likely to play video games and are much more likely to commit serious acts of violence.

In the first study, the students completed a questionnaire that asked about their favorite video games as teens, how violent they thought the games were, how much time they spent playing, and then their history of aggression and delinquency. Students were asked to think back and recall information from four to ten years prior, depending on their age. From this survey, researchers claimed they found a correlation between time spent playing video games and their aggressive or delinquent behavior.

But this study was not designed to assess causality, just the existence of a relationship between time spent playing games and rating higher on irritability and aggression questionnaires.27 Nonetheless, the authors claim that video games “contribute to [the] creation of aggressive personality,” a conclusion that is a clear leap in logic.28 Because correlation measures association, not cause and effect, it is equally possible that those with aggressive personalities are more likely to enjoy aggressive video game playing.

Anderson and Dill conducted a second study in a laboratory; in this experiment, students played a video game for fifteen minutes. Some played a violent game, and others played a nonviolent educational game. When they finished, the students were asked to read "aggressive words" (like murder) on a computer screen and were timed to see how fast they said the words aloud. Because the violent-game players repeated the words faster, they were deemed to have "aggressive thoughts" and perhaps be more prone to violence. This was another leap in logic and a questionable interpretation: the words they read on the screen were not their own thoughts, nor are aggressive thoughts necessarily dangerous. It is what we do with our hostility that is important.

The researchers did stumble onto something interesting: even a short time spent playing computer games appears to quicken visual reflexes. Other studies have supported this finding: a 2005 review published by the National Swedish Public Health Institute found no reliable link with violence but did find that players' spatial abilities improved.29 While video games strengthen hand-eye coordination and improve reflexes, the claim that video games create the desire to actually kill a live human is not supported by evidence. If it were, far more of the millions of video game players would become violent, not just an extreme minority.

The Anderson and Dill study also included a follow-up one week later. Students returned to the lab and played another game for fifteen minutes. If they won, they were allowed to blast their opponent with noise (unbeknownst to the subjects, they played against a computer and their opponent wasn’t real). The violent-game players blasted their perceived opponents slightly louder and longer, and this was taken as the indicator of increased aggression caused by video games.

Is making noise really a good proxy for aggression, and is this form of aggression in any way linked with violence? The authors admit in their report that “the existence of a violent video game effect cannot be unequivocally established” from their research. Nonetheless, an Alberta, Canada, newspaper reported that this study is proof that “even small doses of violent video games are harmful to children,” even though children were not the subjects of the study. The story proclaimed that this study “discover[ed] what some parents have always suspected.”30

Time concurred: “None of this should be surprising,” the author stated, listing the violent nature of games like Doom and Mortal Kombat. Even the British medical journal the Lancet reported on this story without critical scrutiny.31 It doesn’t matter how weak a study may be; it can still gather international attention as long as it tells us what we think we already know.

The results of studies that challenge the media-violence connection or seek to find out more than a cause-effect relationship seldom make headlines, but there are plenty of them. Psychologist Guy Cumberbatch found that children may become frustrated by their failure to win at video games, as most games are designed to be increasingly difficult, but this anger does not necessarily translate to the outside world. Cumberbatch concluded, “We may be appalled by something and think it’s disgusting, but they know its conventions and see humor in things that others wouldn’t.” In 1995 psychologist Derek Scott concluded that “one should not overgeneralize the negative side of computer games playing” after his study found no evidence that violent video games led to more aggression.32

Beyond individual studies, reviews of research appear regularly in scholarly journals, and their findings are often contradictory. Although a 1998 review in the journal Aggression and Violent Behavior declared that a "preponderance of evidence" suggests video games lead to aggression, a review the next year in the same journal argued that methodological problems and a lack of conclusive evidence do not enable us to conclude that video games lead to aggression. In 2004 the same journal published another review, which noted that "there is little evidence in favor of focusing on media violence as a means of remedying our violent crime problem." A 2001 review in Psychological Science concluded that video games "will increase aggressive behavior," while another 2001 analysis in the Journal of Adolescent Health declared that it is "not possible to determine whether video game violence affects aggressive behavior."33

Other studies look for more than just negative effects, seeking to understand how consumers make meanings from media texts. For example, a British study found that children's definitions of violent television differed by gender, telling us that boys stake claims to masculinity by being "tough enough" not to be scared by media violence. The genre and context of the story contribute to whether kids consider a program violent. The researchers also found that, like adults, children tend to think media violence is harmful, just not for them—kids younger than they are may be affected, they tell researchers.34

A study of children's emotional responses to horror films found that children did sometimes have nightmares (parents' biggest concern), but that they chose to watch scary films so they could conquer their fears and toughen up.35 The study's author concluded that watching media violence might be a way for children to prepare themselves to face their fears more directly. While parents may hope to prevent their children from ever being scared or having a bad dream, nightmares are normal ways for children (and adults) to deal with fear and anxiety.

British researchers Garry Crawford and Victoria Gosling interviewed video gamers and found that gaming is a central source of male bonding for players. Computer games let people temporarily adopt different identities and also enjoy a sense of mastery upon improving their performance in the games. Participants playing sports-related games also gain specific knowledge about the sport, which for males in particular can enhance social standing among peers.36

Studies like the ones described above are absent from news reports about media and violence, so we are encouraged to keep thinking about children as potential victims of popular culture. Even though so much research on media violence focuses on children, it is telling that children's own ideas are missing from it. We also overlook the reality that older people watch more television than children or teens do, and the average age of a video game player is now thirty-seven.37

We might conclude that people who express higher levels of aggression and hostility are also more likely to enjoy violent forms of media. But this has not translated into higher levels of violence outside of the laboratory. While interesting, studies claiming to find strong, negative effects of media lack external validity: their findings cannot be applied to explain the crime and violence in American society.

The Many Meanings of Violence

Although many young people who have committed violence have also consumed violent media, the majority of people who play video games, watch violent movies, or listen to music with violent lyrics never become violent. As tempting as it may be to infer how other people will interpret violent media content, we can't predict someone's behavior simply from the popular culture they consume.

We might agree that some content is shocking and disturbing, as each new, more realistic-looking version of Grand Theft Auto tends to be. But even though a scene from a film or a song lyric might be offensive to some, there is no way of knowing for certain how all viewers, listeners, or players will actually make sense of the content.

The fear of media violence is based on the belief that young people cannot discern fantasy from reality (a concern rarely extended to adults) and that this failure will condition kids to regard violence as a rewarding experience. It's worth noting that the inability to distinguish fantasy from reality is a key indicator of psychosis in adults, yet many seem to accept it as a natural condition of childhood and even adolescence.

An unpublished study of eight children that claimed to have evidence of this fantasy-reality confusion was splashed across headlines throughout the United States and Canada. "Kids may say they know the difference between real violence and the kind they see on television and video, but new research shows their brains don't," announced Montreal's Gazette.38 This research, conducted by John Murray, a developmental psychologist at Kansas State University, involved MRIs of eight children, ages eight to thirteen. As the kids watched an eighteen-minute fight scene from Rocky IV, their brains showed activity in areas that are commonly activated in response to threats and emotional arousal. This should come as no surprise, since entertainment often elicits emotional responses; if film and television had no emotional payoff, why would people watch?

But the press took this small study as proof of what we already think we know: kids can't tell the difference between fantasy and reality. A Kansas City Star reporter described this as "a frightening new insight," and the study's author stated that the children "were treating Rocky IV violence as real violence." And while Yale psychologist Dorothy Singer warned that the study was too small to draw any solid conclusions from, she also said that it is "very important."39

The results of most studies this small might get a researcher some grant money for further investigation but almost never make the news. This study, however, was treated as another piece of the puzzle, and it clearly made headlines because of its dramatic elements: a popular movie, medical technology, and children viewing violence. In any case, there are big problems with the interpretation offered by the study's author. First, the study actually discredits the idea of desensitization. The children's brains clearly showed some sort of emotional reaction to the violence they saw. They were not emotionally deadened, as we are often told to fear. But kids can't win either way within the media-violence fear, since feeling too little and feeling too much are both interpreted as proof that media violence is harmful to children.

Second, by focusing on children, the study and subsequent reports make it appear as though children's thoughts are completely different from adults'. Somehow, by virtue of children being children, their brains can know things that they don't. But in all likelihood adult brains would react in much the same way. Do an MRI on adults while they watch pornography, and their brains will probably show arousal. Does that mean the person would think that he or she just had actual sex? The neurological reaction would probably be extremely similar, if not identical, but we can't read brain activity and infer meaning. That's what makes humans human: the ability to create meaning from our experiences. And adults are not the only ones capable of making sense of their lives.

It is a mistake to presume that media representations of violence and real violence have the same meaning for all audiences, or that MRIs can measure how we interpret stories. An anvil might fall on a cartoon character, or the CSI sleuths might investigate a new murder, but the meanings of the two are quite different. A great deal of what counts as television violence today comes from the success of franchises such as CSI, Law and Order, and other police investigation shows that promote the power of law enforcement, not crime.

Even if we have become emotionally immune to violence in popular culture, that by no means indicates that when violence really happens, it has no effect. Ironically, studies that assess violence on television do not consider real violence reported on the news. When we hear about real violence, we may feel a little more concerned but still experience minimal emotional reaction; after all, violence is a daily feature of news broadcasts, and it would be overwhelming to get upset every time we turn on the news. But when the event is close to home, when the violence appears random, or when we see the victims as people like us, the event becomes all the more meaningful. Of course, witnessing violence in person has a different meaning than mediated violence does.

Yet critics of media violence often seem to have problems distinguishing between in-person violence and media violence themselves. This is probably because many of them have had little exposure to violence other than through media representations. Thankfully, I include myself in this category. Aside from popular culture and witnessing a fistfight or two at school, violence has mainly been a vicarious experience for me.

While working as a researcher studying juvenile homicides, I discovered some of the differences between media violence and actual violence firsthand. The study required our research team to comb through police investigation files looking for details about the incidents. Just looking at the files could be difficult, so we tried to skip past the crime-scene and coroner's photographs to keep from becoming emotionally overwhelmed.

One morning while I was looking through a case file, the book accidentally fell open to the page with the crime-scene photos. I saw a young man, probably about my age at the time, slumped over the steering wheel of his car. He had a gunshot wound to his forehead, a small red circle. His eyes were open. I felt a wrenching feeling in my stomach, a feeling I had never felt before and have fortunately never felt since. At that point I realized that regardless of the hundreds, if not thousands, of violent acts I had seen in movies and television, none could come close to this. I had never seen the horrific simplicity of a wound like that one, never seen the true absence of expression in a person's face. No actor I had ever seen was able to truly "do death" right, I realized. It became clear that, for the most part, I knew nothing about violence. Yes, I had read the research, but that knowledge was just academic; this was real.

This is not to say that violent media do not create real emotional responses. Good storytelling can create sadness and fear, and depending on the context, violence can even be humorous (as in the Three Stooges and other slapstick comedy). And when media violence elicits no emotional response, that does not necessarily mean viewers are desensitized or would be uncaring if real violence happened in their lives. It may mean that a script was mediocre and that the audience doesn't care about its characters.

But it could be because media violence is not real and most of us, even children, know it. Sociologist Todd Gitlin calls media violence a way of getting “safe thrills.”40 Viewing media violence is a way of dealing with the most frightening aspect of life in a safe setting, like riding a roller coaster while knowing that you will get off and walk away in a few minutes.

Violence in Context: Poverty and Racial Inequality

If we want to learn about what causes kids to commit real acts of violence, depictions of media violence won't help us much—talking with people who have experienced both will. For several years in the mid-1990s, I worked with criminologists on a broad study of the causes and correlates of juvenile violence in Los Angeles.41 We wanted to understand the full context of violence in order to help develop conflict-management programs with community members and reduce levels of violence in these communities.

When we talk about violence and media, it is common to defer to people who have studied media effects—but most of these researchers haven't studied violence itself much, if at all.42 Truly understanding the meanings of both violence and media comes from experiencing both firsthand. Unfortunately, many young people in Los Angeles have; to find them, we went to the areas with the highest arrest rates for violent crime (not to college students or video gamers). These communities consistently had high poverty rates and gang activity and were composed predominantly of African Americans and Latinos living in low-income neighborhoods.

Initially, we conducted a survey to ascertain the level of violence in each neighborhood. We then did follow-up in-depth interviews with fifty-six teen boys, aged twelve to eighteen, who had experienced violence as victims or offenders (or both) to understand how they made sense of both real and media violence.43 Our interviewees clearly described the differences between media violence and actually experiencing violence firsthand.

Above all, their stories tell us that the meaning of violence is made within particular social contexts. For most of those interviewed, poverty and neighborhood violence were overwhelming influences in their lives, shaping their interactions and their understanding of their futures. More than three-quarters of respondents (77 percent) noted that gang activity was prominent in their neighborhoods. Slightly less than half (48 percent) reported feeling tremendous pressure to join gangs, but less than one in ten (9 percent) claimed gang membership. Eighty-eight percent heard guns being fired on a regular basis, and nearly one-third (30 percent) had seen someone get shot. More than one-quarter (27 percent) had seen a dead body in person, and 14 percent had been threatened with a gun themselves. Almost one-quarter (23 percent) had been attacked with some sort of weapon.

Through interviewing these young people, we found that the line between victim and offender is hard to draw and that violent incidents occur within murky contexts. The people we call violent offenders are not necessarily predators, looking to swoop down on the weak and innocent. Instead, we see that violent incidents often happen within a larger context of fear, intimidation, despair, and hopelessness. These kids were trying to survive in destroyed communities as best they could. Unfortunately, violence was often a part of their survival.

Critics often blame popular culture like gangsta rap, for instance, for glamorizing violence within central cities. Understanding the broader social context can help us understand both violence and the popular culture it sometimes spawns. The concept of hegemonic masculinity, whereby men are encouraged to strive to be dominant and powerful over women and other men, can help us understand why violence might emerge more in economically disadvantaged areas where there are few other ways for young men to feel powerful.44 Not all men seek this ideal, and few accomplish it; instead, hegemonic masculinity is held out as what makes a man a "real man." In addition to subordinating women, hegemonic masculinity demands that men show physical strength and aggressiveness, hyperheterosexuality, and emotional detachment.

As sociologist Elijah Anderson found in his ethnographic research, many young people learn to adopt a posture of violence in order to avoid being victims. And as Richard Majors and Janet Mancini Billson, authors of Cool Pose: The Dilemmas of Black Manhood in America, point out, “Presenting to the world an emotionless, fearless, and aloof front counters the low sense of inner control, lack of inner strength, absence of stability, damaged pride, shattered confidence and fragile social competence that come from living on the edge of society.”45 To Majors and Billson, the “cool pose” is a response to African American disempowerment, a defense mechanism for managing emotions in communities with high levels of violence.

As a marginalized group, African American men have historically faced serious economic constraints, reducing their ability to achieve the economic dominance associated with hegemonic masculinity. In our capitalist, consumer-oriented society, this creates a major sense of emasculation. According to a 2012 report from the Bureau of Labor Statistics, black men working full-time earned 77 percent of white men's weekly earnings (by contrast, black women earned 84 percent of white women's wages).46

Public discussions about violence often ignore these contexts. The young people we interviewed clarified several key differences between their actual experiences with violence and media violence. For one, many described media violence as gorier, with over-the-top special effects. Over and over the boys described how fear in their lives comes not from seeing blood on- or off-screen but from the uncertainty about when violence will next occur. One seventeen-year-old stated that because violence in his neighborhood was so pervasive, media violence was strangely comforting: he said at least when it occurred on television, he knew he was safe.

Another key difference in meaning is the clear distinction between good and evil in media depictions of violence. “It’s more pumped-up like, [a] heroic thing,” an eighteen-year-old informant told us. “Like most of violence on TV is like a heroic thing. Like a cop does something amazing. Like somebody like a bad guy, the violence is usually like pin-pointed toward a bad person.” Other boys described the lack of punishment in their experiences compared with media violence; law enforcement to them was not as effective as it may appear on police dramas.

A seventeen-year-old compared his experiences with the Jerry Springer show, saying, "They have security that break it up if something happens. [Nobody] is really going to get hurt that much because there probably will be two or three blows and security will hop on stage and grab the people." He went on to describe how, in his experience, the police were not concerned with who the good guy was, there was no discussion, and often there was no real resolution. Ironically, one of the central complaints about media violence is that it often shows no consequences, but our informants told us that in reality things are even worse.

These contexts help us understand why some young people of color mistrust police. For those who have had more positive interactions with police, the simmering rage sometimes reflected in rap lyrics might be hard to comprehend. Sociologist Elijah Anderson’s ethnography of African Americans’ experiences with police in a northeastern city highlights the disparity. “Scrutiny and harassment by local police makes black youths see them as a problem to get beyond,” Anderson notes, and he describes the actions of the “downtown police” as “looking for ‘trouble.’ They are known to swoop down arbitrarily on gatherings of black youths standing on a street corner. They might punch them around, call them names, and administer other kinds of abuse, apparently for sport.”47

A major concern about media violence is that it creates unfounded fear that the world is a dangerous place. Communications scholar George Gerbner describes this as the "mean-world" syndrome: by watching so much television violence, people mistakenly believe that the world is a violent place. But what about people who do live in dangerous communities? Poverty and hopelessness gnawed at the boys we interviewed on a daily basis. "It's just poverty," an eighteen-year-old told us. "I wouldn't recommend nobody comin' here.… I just wouldn't recommend it." Not surprisingly, the majority of boys we interviewed did not find media violence to be a big source of fear. In fact, some boys said they enjoyed watching violence to point out how producers got it wrong. As experts, they can detect the artificiality of media violence.

The boys also expressed resentment when their neighborhoods were used in stereotypical portrayals. "The people that make the movies, I'm pretty sure they never lived where we live at, you know, went to the schools we went to," explained a seventeen-year-old we interviewed. "They were, most of 'em were born in you know, the upper-class whatever, you know? I don't think they really have experienced how we live so that's why I don't think they really know how it is out here." Others explained how movies, violent or otherwise, were a luxury they could rarely afford. Besides, impoverished communities often have no movie theaters. One boy told us he never went to movies because it wasn't safe to be out at night or to go to other neighborhoods and possibly be mistaken for a rival gang member.

Some of the boys did say that media violence made them more afraid, based on the violent realities of their communities. "If you watch a gangster movie and you live in a neighborhood with gangsters, you think you'll be killed," an informant said. Another respondent, who said he had to carry a knife for protection, told us, "It makes you fear going outside. It makes you think twice about going outside. I mean, how can you go outside after watching someone get shot on TV? You know, [my friend] was just walking outside of his house and got shot. And you think to yourself, damn, what if I walked out of my house and got shot?" In both cases the fear that stemmed from media violence was rooted in their real-life experiences.

Violence exists within specific social contexts; people make meaning of both real violence and media violence in the context of their lives. It is clear from these examples that neighborhood violence and poverty are essential to understanding the meanings these young people give to media violence. Other contexts would certainly produce different meanings, but when researchers or critics focus on media violence, real-life circumstances are often overlooked.

We also need to acknowledge the meaning of violence in American media and American culture. It's too easy to say that violent media merely reflect society or that producers are just giving the public what it wants, but violence does sell. Violence is dramatic, a simple cinematic tool, and easy to market at home and abroad, since action-adventure movies present few translation problems for overseas distributors.

But in truth, violence and aggression are central facets of American society. We reward violence in many contexts outside of popular culture. Aggressive personalities tend to thrive in capitalism: risk takers are highly prized within business culture. We celebrate sports heroes for being aggressive, not passive. The best hits of the day make the football highlights on ESPN, and winning means "decimating" and "destroying" in broadcast lingo.

We also value violence, or its softer-sounding equivalent, the use of force, to resolve conflict. On local, national, and international levels, violence is largely considered acceptable. Whether this is right or wrong is the subject for a different book, but the truth is that in the United States the social order has traditionally been created and maintained through violence. We can’t honestly address media violence until we recognize that, in part, our media culture is violent because we, as a society, are.

Challenging Media and Real Violence

Politicians, researchers, and the news media may be fascinated by media violence, but the everyday causes of actual violence often receive little attention from policy makers. Yes, media violence may be a small link in a long chain, but it is certainly not the central link. There's nothing wrong with media criticism—we could probably use more of it—but when media criticism takes the place of understanding the roots of violence, we have a problem. To hear that "Washington [is] again taking on Hollywood" may feel good to the public and make it appear as though lawmakers are on to something, but real violence remains off the agenda.48 This tactic appeals to many middle-class constituents, whose experience with violence is often limited.

While some fear that the content of video games and other violent entertainment may be harmful, we also need to consider the harm done when the media-violence moral panic diverts politicians and policy makers from the issues they could be exploring instead. We might ask why so many parents are afraid for their kids to play outside in their communities and why many neighborhoods have few spaces for teens to safely congregate. For many parents, violent media exposure is far less of a concern than exposure to actual violence. To understand why people become violent, we need to start by looking at garden-variety violence rather than the headline-grabbing exception.

Violence elicits fear in part because it may seem to defy prediction. After the high-profile shootings of the 1990s, the FBI conducted a study to produce a profile of school shooters. In the end, it couldn't: school shootings are so rare, and the shooters shared so many characteristics with nonviolent kids—like playing video games—that no useful profile emerged.

This is not to say that we cannot predict what leads to violence. The majority of young people who turn to violence have a number of other risk factors that we need to focus on more: violence in the home or neighborhood (or both), a personal or family history of substance abuse (or both), and a sense of hopelessness born of extreme poverty. Specific contexts also must not be ignored; for instance, in the study of youth violence in Los Angeles I noted earlier, we found that the vast majority of homicides involving young offenders were gang related, rooted in the problems described above, not in video games.

Economically disadvantaged people living in racially isolated communities are the most likely to experience real violence but the least likely to appear on politicians' radar. A national focus on media rather than real violence draws on existing fears and reinforces the view that popular culture, not the decades-long neglect of whole communities, leads to violence. It provides a cultural explanation that seems to address violence but completely overlooks social structure. It may be more interesting, and ironically more entertaining, to think about media violence as a cause of real violence, but without examining structural conditions like poverty, unemployment, and other factors that contribute to family disruption, we won't get very far.

Notes

1. Brown v. Entertainment Merchants Association, no. 08–1448 US (2011), http://www.supremecourt.gov/opinions/10pdf/08-1448.pdf.

2. Jeneba Ghatt, "Supreme Court Overreaches on Video Game Ruling," Washington Times, June 30, 2011, http://communities.washingtontimes.com/neighborhood/politics-raising-children/2011/jun/30/supreme-court-overreaches-video-game-ruling/; Robert Scheer, "The Supreme Court's Video Game Ruling: Yes to Violence, No to Sex," Nation, June 29, 2011, http://www.thenation.com/article/161741/supreme-courts-video-game-ruling-yes-violence-no-sex; editorial, "The High Court's Misguided Decision on Video Games," Washington Post, June 27, 2011, http://www.washingtonpost.com/opinions/the-high-courts-misguided-decision-on-violent-video-games/2011/06/27/AGilYDoH_story.html.

3. Jennifer LaRue Huget, "Study Links Violent Video Games to Violent Thought, Action," Washington Post, March 1, 2010, http://voices.washingtonpost.com/checkup/2010/03/study_shows_violent_video_game.html; Eryn Brown, "Violent Video Games and Changes in the Brain," Los Angeles Times, November 30, 2011, http://articles.latimes.com/2011/nov/30/news/la-heb-violent-videogame-brain-20111130; editorial, "A Poisonous Pleasure," St. Louis Post-Dispatch, July 30, 2000, B2; Richard Saltus, "Survey Connects Graphic TV Fare, Child Behavior," Boston Globe, March 21, 2001, A1.

4. Jonathan L. Freedman, Media Violence and Its Effect on Aggression, 200.

5. Alexia Cooper and Erica L. Smith, Homicide Trends in the United States, 1980–2008 (Washington, DC: US Department of Justice, 2011), http://bjs.ojp.usdoj.gov/content/pub/pdf/htus8008.pdf; Jennifer L. Truman, Criminal Victimization, 2010 (Washington, DC: US Department of Justice, 2011), http://bjs.ojp.usdoj.gov/content/pub/pdf/cv10.pdf.

6. Cooper and Smith, Homicide Trends; Federal Bureau of Investigation, Ten-Year Arrest Trends: Uniform Crime Reports for the United States, 2010 (Washington, DC: US Department of Justice, 2011), http://www.fbi.gov/about-us/cjis/ucr/crime-in-the-u.s/2010/crime-in-the-u.s.-2010/tables/10tbl32.xls.

7. James Alan Fox and Marianne W. Zawitz, Homicide Trends in the United States (Washington, DC: US Department of Justice, 2000).

8. Federal Bureau of Investigation, Arrests by Age: Uniform Crime Reports for the United States, 2010 (Washington, DC: US Department of Justice, 2011), http://www.fbi.gov/about-us/cjis/ucr/crime-in-the-u.s/2010/crime-in-the-u.s.-2010/tables/10tbl38.xls; population estimate from US Census Bureau, Population Division: Annual Estimates of the Population by Selected Age Groups and Sex for the United States, 1980 to 2010 (Washington, DC: US Bureau of the Census, 2012), http://www.census.gov/compendia/statab/2012/tables/12s0007.pdf; Federal Bureau of Investigation, Uniform Crime Reports for the United States, 1964–1999 (Washington, DC: US Department of Justice, 2000).

9. Lori Dorfman et al., “Youth and Violence on Local Television News in California,” American Journal of Public Health 87 (1997): 1311–1316.

10. Los Angeles Police Department, Statistical Digest 2010, Information Technology Division, http://www.lapdonline.org/assets/pdf/2010%20Summary.pdf.

11. E. Britt Patterson, "Poverty, Income Inequality, and Community Crime Rates," in Juvenile Delinquency: Historical, Theoretical, and Societal Reactions to Youth, edited by Paul M. Sharp and Barry W. Hancock (Upper Saddle River, NJ: Prentice-Hall, 1998), 135–150.

12. For more discussion, see William Julius Wilson, More Than Just Race: Being Black and Poor in the Inner City.

13. Wayne Wooden and Randy Blazak, Renegade Kids, Suburban Outlaws: From Youth Culture to Delinquency; Howard N. Snyder and Melissa Sickmund, Juvenile Offenders and Victims: 2006 National Report (Washington, DC: US Department of Justice, Office of Justice Programs, Office of Juvenile Justice and Delinquency Prevention, 2006), 67, http://ojjdp.ncjrs.gov/ojstatbb/nr2006/downloads/chapter3.pdf.

14. Cited in Glenn Gaslin, “Lessons Born of Virtual Violence,” Los Angeles Times, October 3, 2001, E1.

15. “Wrestle-Slay Boy Faces Life,” Daily News, January 26, 2001, 34; Michael Browning et al., “Boy, 14, Gets Life in TV Wrestling Death,” Chicago Sun-Times, March 10, 2001, A1; Caroline J. Keough, “Young Killer Wrestles Again in Broward Jail,” Miami Herald, February 17, 2001, A1.

16. “13 Year-Old Convicted of First-Degree Murder,” Atlanta Journal and Constitution, January 26, 2001, B1; Caroline Keough, “Teen Killer Described as Lonely, Pouty, Disruptive,” Miami Herald, February 5, 2001, A1; Tamara Lush, “Once Again, Trouble Finds Lionel Tate,” St. Petersburg Times, May 25, 2005, B1.

17. “Murder Defendant, 13, Claims He Was Imitating Pro Wrestlers on TV,” Los Angeles Times, January 14, 2001, A24. Later in media interviews, Lionel said that Tiffany was lying down on the stairs and he accidentally crushed her when he came bounding down the steps.

18. Lush, “Once Again, Trouble Finds Lionel Tate”; Abby Goodnough, “Ruling on Young Killer Is Postponed for Psychiatric Exam,” New York Times, December 6, 2005, 25.

19. See Freedman, Media Violence and Its Effect on Aggression, 43.

20. L. Rowell Huesmann et al., “Longitudinal Relations Between Children’s Exposure to TV Violence and Their Aggressive and Violent Behavior in Young Adulthood: 1977–1992,” Developmental Psychology 39, no. 2 (2003): 201–221. Kids who regularly watched shows like Starsky and Hutch, The Six Million Dollar Man, and Road Runner cartoons in 1977 were regarded as high-violence viewers.

21. Based on r=.17.

22. Jeffrey G. Johnson et al., “Television Viewing and Aggressive Behavior During Adolescence and Adulthood,” Science 295 (March 29, 2002): 2468–2471.

23. Lillian Bensley and Juliet Van Eenwyk, “Video Games and Real-Life Aggression: Review of the Literature,” Journal of Adolescent Health 29 (2001): 244–257; Jeanne B. Funk, “Video Games: Benign or Malignant?,” Journal of Developmental and Behavioral Pediatrics 13 (1992): 53–54; C. E. Emes, “Is Mr. Pac Man Eating Our Children? A Review of the Effect of Video Games on Children,” Canadian Journal of Psychiatry (1997): 409–414.

24. C. J. Ferguson, “Evidence for Publication Bias in Video Game Violence Effects Literature: A Meta-analytic Review,” Aggression and Violent Behavior (2007): 470–482.

25. M. Winkel, D. M. Novak, and H. Hopson, “Personality Factors, Subject Gender, and the Effects of Aggressive Video Games on Aggression in Adolescents,” Journal of Research in Personality 21 (1987): 211–223.

26. Craig Anderson and Karen Dill, “Video Games and Aggressive Thoughts, Feelings, and Behavior in the Laboratory and Life,” Journal of Personality and Social Psychology 78 (2000): 772–790; Amy Dickinson, “Video Playground: New Studies Link Violent Video Games to Violent Behavior,” Time, May 8, 2000, 100.

27. For further problems with this study, see Guy Cumberbatch, “Only a Game?,” New Scientist, June 10, 2000, 44.

28. Anderson and Dill, “Video Games,” 22.

29. A. Lager and S. Bremberg, “Health Effects of Video and Computer Game Playing: A Systematic Review of Scientific Studies,” National Swedish Public Health Institute, 2005.

30. Anderson and Dill, “Video Games,” 33; Marnie Ko, “Mortal Konsequences,” Alberta Report, May 22, 2000.

31. Dickinson, “Video Playground,” 100; Marilynn Larkin, “Violent Video Games Increase Aggression,” Lancet, April 29, 2000, 1525.

32. Cumberbatch quoted in Charles Arthur, “How Kids Cope with Video Games,” New Scientist, December 4, 1993, 5; Derek Scott, “The Effect of Video Games on Feelings of Aggression,” Journal of Psychology 129 (1995): 121–133.

33. Karen E. Dill and Jody C. Dill, “Video Game Violence: A Review of the Empirical Literature,” Aggression and Violent Behavior 3 (1998): 407–428; Mark Griffiths, “Violent Video Games and Aggression: A Review of the Literature,” Aggression and Violent Behavior 4 (1999): 203–212; Joanne Savage, “Does Viewing Violent Media Really Cause Criminal Violence? A Methodological Review,” Aggression and Violent Behavior 10 (2004): 99–128; Craig A. Anderson and Brad J. Bushman, “Effects of Violent Video Games on Aggressive Behavior, Aggressive Cognition, Aggressive Affect, Physiological Arousal, and Prosocial Behavior: A Meta-analytic Review of the Scientific Literature,” Psychological Science 12 (2001): 353–359; Lillian Bensley and Juliet Van Eenwyk, “Video Games and Real-Life Aggression: Review of the Literature,” Journal of Adolescent Health 29 (2001): 244–257.

34. David Buckingham and Julian Wood, “Repeatable Pleasures: Notes on Young People’s Use of Video,” in Reading Audiences: Young People and the Media, edited by David Buckingham, 132.

35. Ibid., 137.

36. Garry Crawford and Victoria Gosling, “Toys for Boys? Marginalization and Participation as Digital Gamers”; Garry Crawford, “The Cult of the Champ Man: The Cultural Pleasures of Championship Manager/Football Manager Games,” 523–540.

37. Statistics from industry group Entertainment Software Association, http://www.theesa.com/facts/index.asp (accessed on May 19, 2012).

38. Chris Zdeb, “Violent TV Affects Kids’ Brains Just as Real Trauma Does,” Gazette (Montreal), June 5, 2001, C5.

39. Jim Sullinger, “Forum Examines Media Violence,” Kansas City Star, August 29, 2001, B5; Marilyn Elias, “Beaten Unconsciously: Violent Images May Alter Kids’ Brain Activity, Spark Hostility,” USA Today, April 19, 2001, D8.

40. Todd Gitlin, Media Unlimited: How the Torrent of Images and Sounds Overwhelms Our Lives, 92.

41. I would like to thank Cheryl Maxson and Malcolm Klein for including measures in their study, “Juvenile Violence in Los Angeles,” sponsored by the Office of Juvenile Justice and Delinquency Prevention, grants #95-JN-CX-0015, 96-JN-FX-0004, and 97-JD-FX-0002, Office of Justice Programs, US Department of Justice. The points of view or opinions in this book are my own and do not necessarily represent the official position or policies of the US Department of Justice. All interviews were conducted in 1998. The interviews centered on the youths’ descriptions of a selection of the violent incidents they had experienced, the major focus of the study. At the end of each interview, youths were asked whether they thought television and movies contained a lot of violence; this question was posed to ascertain their perceptions of the levels of violence in media. Respondents were then asked whether they thought that viewing violence in media made them more afraid in their neighborhoods and why or why not, a topic that helped them begin to compare the two types of violence and consider the role of media violence in their everyday lives. Finally, respondents were asked to name a film or television program that they felt contained violence and to compare the violence in it to the violence they had experienced and described earlier in the interview. This question solicited a direct comparison between the two modes of experience (lived and media violence). The subjects were able to define media violence themselves, as they first chose the medium and then the television program or film that they wished to discuss; definitions of media violence were not imposed on them. The interviews were tape-recorded and transcribed, and the data were later coded using qualitative data analysis software to sort and categorize the respondents’ answers. The sample was randomly selected by obtaining addresses from a marketing organization; households were then enumerated to determine whether a male between the ages of twelve and seventeen had lived in the residence for at least six months. (Interviewees were sometimes eighteen at the time of follow-up.) Youths who had lived in the neighborhood for less than six months were excluded from the original sampling process, since their experiences might not accurately reflect activity within that particular area.

42. Researchers who study media violence often have backgrounds in communications, psychology, or medicine.

43. No females were included because the primary investigators concluded from previous research that males were more likely to have been involved in violent incidents.

44. R. W. Connell, Masculinities.

45. Elijah Anderson, The Code of the Street: Decency, Violence, and the Moral Life of the Inner City (New York: W. W. Norton, 2000); Richard Majors and Janet Mancini Billson, Cool Pose: The Dilemmas of Black Manhood in America, 8.

46. Bureau of Labor Statistics, “Median Weekly Earnings by Race, Ethnicity, and Occupation, First Quarter 2012,” April 19, 2012 (Washington, DC: US Department of Labor, 2012), http://www.bls.gov/opub/ted/2012/ted_20120419.htm.

47. Elijah Anderson, Streetwise: Race, Class, and Change in an Urban Community, 197.

48. Megan Garvey, “Washington Again Taking on Hollywood,” Los Angeles Times, June 2, 2001, A1.

CHAPTER 6

Pop Culture Promiscuity: Sexualized Images and Reality

When Disney sensation Miley Cyrus, star of Hannah Montana, appeared in Vanity Fair in 2008, a firestorm of criticism erupted. The fifteen-year-old posed provocatively, wearing only a bed-sheet in some of the photos. A year later, she performed at the Teen Choice Awards while dancing on a pole. Was she a bad role model for young girls, a young woman seeking to shed her Disney image, or both? Do representations of sexuality encourage teens to become sexually active?

We live in a time when virtually nothing is off-limits in pop culture, and even the most private details of celebrities’ love lives become tabloid fodder. Many adults fear that sex is no longer a big deal to kids, and that young teens are casually “hooking up” and “growing up faster than ever.” Books like Teaching True Love to a Sex-at-13-Generation suggest that today’s entire generation of kids is sexually active before high school. Their evidence? They look to the media: pop culture is full of sex, and therefore, they reason, kids must be too.

As we will see in this chapter, while popular culture may be awash in sex, young people are not nearly as sexually active as many fear. It may seem like a given that teens think sex is just another way of saying hello, thanks to news about “sexting”—sending racy pictures via text or posting them on Facebook or Twitter. The Boston Globe and other news sources describe young teens’ sexy Halloween costumes—including some dressed as prostitutes—and pretty soon it seems like a trend.1 After all, who hasn’t seen a young teen or tween wearing an outfit that is anything but age appropriate?

Daytime talk shows have long featured promiscuous teens as problems their parents can’t handle. Topics like “My teen is going on a date” might be more applicable to regular kids, but they probably wouldn’t be big ratings grabbers. Horror stories of teen promiscuity make the news, appearing to support the popular hypothesis that today’s kids are morally depraved and implying that the media are at fault.

Note that stories about promiscuous adults aren’t common, but headlines like “Don’t Let TV Be Your Teenager’s Main Source of Sex Education,” “Grappling with Teen Sex on Television,” “MTV Show Promotes Teen Sex, Drug Use, Experts Say,” and “Racy Content Rising on TV” help create an atmosphere of anxiety among adults who fear that the rules have changed and that young people are becoming more promiscuous than ever.2

“Kids pick up on—and all too often act on—the messages they see and hear around them,” wrote sex educator Deborah Roffman in a Washington Post article titled “Dangerous Games: A Sex Video Broke the Rules, but for Kids the Rules Have Changed.”3 Interestingly, we don’t level the same charges against adults, who are more likely than teenagers to be sexually active, and more likely to be rapists and sex offenders.

Roffman’s article featured the story of a teenage boy from the Baltimore area who videotaped himself having sex with a classmate and then showed the video to his friends. Certainly, this story is troubling, but also troubling is the supposition that this incident is representative of all young people, whose rules of proper conduct have allegedly changed. We wouldn’t dare make the same sweeping generalizations about equally appalling adult behavior. But Roffman is not surprised: “What else do we expect in a culture where by the age of nineteen a child will have spent nearly 19,000 hours in front of the television … where nearly two-thirds of all television programming has sexual content?”4

There are several things wrong with the assumption that sexual content on television led to this sex video. First, if our television culture is so sex-laden and causes such inappropriate behavior, we would expect many more incidents like this one; instead, the case was enough of an anomaly that it made headlines. Clearly, the story received media attention because of its shock value and its rarity. Second, the “19,000 hours” figure is an average, and perhaps a dubious one at that. How many hours of television did you watch last week? Last night? Personally, I have no idea, and neither do many of the people who respond to the surveys from which statistics like these are derived. In any case, the amount of viewing tells us nothing about the content itself. Besides, we have no idea whether this boy even watched much television—typically, television viewing declines in adolescence, and adults tend to watch more television than young people do.5

Finally, “sexual content” in such studies is often defined broadly to include flirting, handholding, kissing, and talk about sex, so the “two-thirds of all television programming” estimate is questionable at best. Roffman compared the incident to the 1999 film American Pie, in which the lead character broadcast a sexual encounter over the Internet. However, there is no proof that the Maryland boy ever saw this movie.

It is far too simplistic to blame raunchy movie scenes for changes in sexual behavior. This chapter challenges this fear by examining how sexual attitudes have changed over the past century; the way we think about sex has changed much more than actual behavior has. We will see that today’s youth are not nearly as promiscuous as some might fear. Rather than being blindly influenced by sex in media, teens are actively trying to figure out who they are in a culture that offers a lot of sexual imagery but little actual information about intimacy, sex, and sexuality. In a 2012 survey, 38 percent of teens reported that their parents were the most influential when it came to making decisions about sex. Only 9

