MEDIA RESEARCH METHODS

Quantitative Methods
Experiments, Content Analysis & Surveys

You'll remember that we began our lesson on qualitative methods with a discussion of the differences between the qualitative and quantitative approaches. While quantitative researchers seek "causal determination, prediction, and generalization of findings," qualitative researchers seek "illumination, understanding, and extrapolation to similar situations" (Marie C. Hoepfl, "Choosing Qualitative Research: A Primer for Technology Education Researchers" Journal of Technology Education 9:1 (Fall 1997)). While qualitative researchers rely on case studies, field observation and participant observation, interviews, focus groups, etc., quantitative researchers use experimental methods and *numbers* to test hypotheses.

Gunter frames these differences as those between the "positivist" or hypothetico-deductive school of thought and the critical and interpretive perspectives. All of these terms should be familiar to you. We discussed positivism in our discussion of research paradigms; see the "paradigms" chart, part-way down the page. "Hypothetico" obviously refers to the focus on "hypotheses" in quantitative research, and "deductive" refers to the "direction" of data analysis in this kind of research, which we addressed briefly in our lesson on qualitative data analysis (see the "data analysis" section near the end of this lesson). The "critical" and "interpretive" perspectives obviously refer to the critical approaches and qualitative methods we've already discussed.

We're about to discuss three primary quantitative methods: experiments, quantitative content analysis (there is a qualitative kind), and surveys.

EXPERIMENTS

"Experimentation, more than other styles of research, promises clear causal inferences. Its strategy is to manipulate exposure to an hypothesized cause, while controlling for the contaminating influence of other possible causes by the use of control groups and by the random assignment of subjects to control and experimental situations. These arrangements are intended to maximize 'internal validity,' which means confidence that the independent variable did in fact 'make some significant difference in this specific instance'...and that the observed effects are not really the result of some uncontrolled and perhaps unknown variable that is the true cause. However, to gain causal clarity, experiments must frequently sacrifice realism and generalizability to nonexperimental populations and situations (or 'external validity.'). Furthermore, although it is possible to design experimental controls for many types of testing and measurement effects, the risk of such effects is high."

-John Brewer and Albert Hunter, Multimethod Research: A Synthesis of Styles (1989),
pp. 46-7.

An experiment is a study of cause and effect. It involves the deliberate manipulation of one variable (the independent variable, or IV), while attempting to keep all other variables constant. We assume that the IV causes some effect on another variable, the dependent variable, or DV. The DV is called such because it supposedly depends on the presence/absence, degree, condition, etc., of the independent variable. We develop a hypothesis regarding the relationship (causal links or degrees of association) between these variables.

In order to test this relationship, researchers choose a sample -- that is, a group of subjects that is representative of the entire population about which the researchers want to make claims -- and assign those research subjects to groups: the experimental group, the subjects who receive the experimental treatment; and the control group, the subjects who do not receive the experimental treatment and thus serve as a basis for comparison to the experimental group. The researchers thus "apply" the manipulated independent variable to the experimental group, but not to the control group.

But how can we be sure that it is, in fact, the independent variable -- and not some other variable -- that caused the effects observed in the experimental group? Researchers' confidence in claiming cause-effect relationships is based in part on their ability to control the experimental conditions. For instance, if researchers want to prove that flash-cut editing in music videos causes rapid eye movement, they're likely to control the content of those videos, the conditions in which the subjects watch those videos, the demographics of the viewing population, etc., so that none of these other factors could be used to explain the result, or be thought to cause the effect on the dependent variable.

In addition, researchers need to be able to rule out alternative explanations for the results. Let's say we want to test the impact of music in television commercials on viewers' recall of those commercial messages, and we choose a classroom full of undergraduates as our test subjects. We play commercials with a jazz soundtrack to the first two rows of students, and commercials with a heavy metal soundtrack to the last two rows, and we find that the first two rows have better recall of the commercial message. Are we safe to conclude that jazz is more effective in ensuring message recall?

Uh, not really. Perhaps the two front rows are a self-selecting group of attentive students; they choose to sit in the front row because they prefer to pay attention. Those students who choose to sit in the back row may be predisposed to daydreaming. Could not the students' degree of attention, rather than the type of music, be used to explain their recall? This "external" variable that confounds our experiment is called, fittingly, a confounding variable. We want to be able to rule out these external variables so that we can confidently make claims that it's the IV -- and the IV only -- that causes the effect on the DV.

How do we rule out confounding variables? One method is randomly assigning our test subjects to various treatment groups, and thereby distributing all variables -- age, race, sex, education, etc., -- in approximately the same way in all groups. Researchers can also use a technique called matching, or they can include the confounding variable in the design as an independent variable. Christina Suthammanont explains these techniques in Mass Media and Politics: A Social Science Perspective (Appendix B):

Randomization is one process by which experimenters may achieve comparable groups. From the total number of participants, the experimenter randomly assigns them either to the control or to the experimental group. Randomization may be accomplished by numbering all of the participants serially and selecting numbers through a random number table, or the experimenter might assign the odd-numbered subjects to the experimental group and the even-numbered participants to the control group. However, successful randomization is largely contingent on the total number of participants. It hardly makes sense to try to randomize placement of a total of six participants.

An alternative to the randomization process is that of matching. This process closely resembles the process of quota sampling. Let's say you have 12 white males show up for your experiment. Quota sampling means that you would place six in the experimental group and six in the control group. Matching, on the other hand, is achieved through the creation of a quota matrix that is constructed based on all of the most relevant characteristics of your sample. Ideally, the matrix would be constructed in such a way that an even number of participants end up in each cell; this even number is distributed equally between the control and experimental groups (see Babbie 1998, 239 for illustration). Whether through randomization, matching, or quota sampling, it is crucial to the internal validity of an experiment that the groups are comparable.
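
To make the mechanics of randomization concrete, here is a minimal sketch in Python; the participant IDs and group sizes are hypothetical, invented purely for illustration:

    import random

    # Hypothetical participant IDs -- in practice these come from your recruitment list.
    participants = [f"subject_{i:02d}" for i in range(1, 25)]

    random.seed(42)               # fix the seed so the assignment can be reproduced
    random.shuffle(participants)  # a random order washes out self-selection (e.g., arrival order)

    # Split the shuffled list evenly into the two conditions.
    midpoint = len(participants) // 2
    experimental_group = participants[:midpoint]
    control_group = participants[midpoint:]

    print("Experimental:", experimental_group)
    print("Control:     ", control_group)

A shuffle-and-split like this distributes unmeasured characteristics roughly evenly across the two groups -- which is precisely what randomization is meant to accomplish.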

See Robert Magnani's "Sampling Guide" for more information about sampling. And for an overview of some of the highlighted concepts, go here.

So, once we've established our groups, we typically conduct a pretest -- we observe our groups and measure the DV before we've manipulated the IV. Then, we apply the experimental treatment -- we introduce the IV -- and again measure the DV. If our hypothesis was correct, there should be a change in the DV before and after the manipulation of the IV.
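
To see what that before/after comparison looks like in practice, here is a minimal sketch of a pretest-posttest analysis using SciPy's independent-samples t-test on the change scores; the recall scores are hypothetical, and this is only one of several analyses a researcher might reasonably choose:

    from scipy import stats

    # Hypothetical pretest/posttest DV scores (say, message-recall scores out of 20).
    exp_pre,  exp_post  = [11, 9, 13, 10, 12, 11], [15, 13, 16, 14, 15, 14]
    ctrl_pre, ctrl_post = [10, 12, 11, 9, 13, 10], [11, 12, 10, 10, 13, 11]

    # Change scores: how much each subject's DV moved between pretest and posttest.
    exp_change  = [post - pre for pre, post in zip(exp_pre, exp_post)]
    ctrl_change = [post - pre for pre, post in zip(ctrl_pre, ctrl_post)]

    # Did the experimental group change more than the control group, beyond chance?
    t_stat, p_value = stats.ttest_ind(exp_change, ctrl_change)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")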


There are several advantages of experiments:

1) They establish cause-effect relationships -- such as the relationship that supposedly exists between violent television and violent behavior. Although many wonder if we can ever prove causation, the experiment is the best method to support such claims -- in large part because....

2) The experiment allows for control -- over the environment, the variables, and the subjects. Researchers can control the conditions of the experiment -- from the climate in the room to the content of a television program shown to the test subjects -- the numbers and types of independent and dependent variables, and the selection and assignment of the test subjects. This control allows the researchers to isolate the particular variable whose influence they hope to determine. Research that manages to establish that its results are based on expected conditions, and not on external variables, is said to have internal validity -- that is, the research design is free from theoretical and methodological error.

3) Because the conditions of the experiment are clearly spelled out, this kind of research is relatively easily replicable -- which means that others can repeat the experiment, under the same or slightly different conditions, to test the results.

Much of "media effects" theory is based on experimental research. The Media Effects Research Laboratory at Penn State conducts research in its new facility, measuring such dependent variables as "skin conductance," which supposedly indicates "arousal," and eye movements. Go to the "Research" page on the Lab's website, and scroll down to "Recent Studies" and "New Media Studies." Review some of the posted abstracts, and try to identify their independent and dependent variables -- and what larger claims they make based on those dependent variables.


<-- Yikes. [Image Credit: Penn State Media Effects Research Laboratory]

Those researching media violence effects also often make use of experiments. That research, however, has led to various conclusions -- from claims that exposure to violent media causes children to behave more aggressively and has the potential to affect them as adults, to claims that there is no conclusive evidence linking violent media to violent behavior.

Experimental results are often inconclusive. And even those results that do seem to be well supported are often called into question because of perceived flaws in the methodology.

1) For one thing, life outside the lab, in the real world, is never as controllable as life inside the lab. By controlling so many variables in order to isolate the variable under study, researchers create artificial conditions that produce results unlikely to be found in the "real world." Sometimes, researchers attempt to dodge this charge by issuing disclaimers, indicating that their experiment's results are applicable to only particular populations in particular conditions at particular times. Such a disclaimer serves only to call into question the value of the research. In order to serve its purpose -- to advance knowledge in the field -- research has to have external validity -- that is, its results must be generalizable to other situations.

David Gauntlett, of Bournemouth University, has written an article titled "Ten Things Wrong With the 'Effects Model'", in which he cites the theory's dependence on "artificial studies" for its support. He writes:

Such studies typically take place in a laboratory, or in a 'natural' setting such as a classroom but where a researcher has conspicuously shown up and instigated activities, neither of which are typical environments. Instead of a full and naturally-viewed television diet, research subjects are likely to be shown selected or specially-recorded clips which lack the narrative meaning inherent in everyday TV productions. They may then be observed in simulations of real life presented to them as a game, in relation to inanimate objects such as Bandura's famous 'bobo' doll, or as they respond to questionnaires, all of which are unlike interpersonal interaction, cannot be equated with it, and are likely to be associated with the previous viewing experience in the mind of the subject, rendering the study invalid.


Gauntlett identifies alternative kinds of experiments, or "quasi-experiments." One alternative to the lab is a "field experiment," in which "the researcher goes to the subject's turf" (Westley, B.H., 1989, qtd. in Wimmer & Dominick). Field experiments take place in more "natural" settings, which means that subjects are more likely to act as they would normally. In order to atone for their lack of physical control over the field, though, researchers make use of extra statistical controls so that they can still make claims of causation. [Image Credit: Robert Mankoff, Ed., The Complete Cartoons of the New Yorker (New York: Black Dog & Leventhal, 2004)]

2) Furthermore, experiments -- like any research methodology -- can be influenced by the humans conducting them. If researchers know what results to expect -- or if subjects know the desired results -- the results are more likely to fall in line with the research hypothesis. That is, they're more likely to get results that confirm what they expected to find. To combat the problem of the self-fulfilling prophecy, some researchers use a double-blind technique, which means that neither subjects nor researchers know who's in the control group or the experimental group.

3) Finally, not many research questions lend themselves to experimentation. How, for instance, could we design an experiment to study cultivation effects, which address the long-term impact of media on society? How could we conduct an experiment to establish a causal relationship between exposure to fashion magazines and eating disorders? We can't ethically induce eating disorders in the name of science! Similarly, if we are testing the effectiveness of a new health communication campaign, we cannot ethically withhold this potentially life-saving information from some of our subjects simply because they've been assigned to the control group!

4) You've gotta know statistics. Blech.

Refer again to these sources: Christopher Heffner's Statistics Primer and McGraw Hill Higher Education's Statistics Primer for Sociology. At the very least, you should be familiar with the following concepts: mean, median, mode, range, variance, standard deviation, normal distribution, null hypothesis, probability levels, regression analysis, and degrees of freedom. Here's the University of the Pacific's glossary of statistical terms.
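
As a quick refresher on the first few of those concepts, here is a minimal sketch using Python's built-in statistics module; the eye-fixation times are hypothetical:

    import statistics

    # Hypothetical data: seconds of eye fixation recorded for ten subjects.
    scores = [4.2, 5.1, 3.8, 4.9, 5.1, 6.0, 4.4, 5.1, 3.9, 4.7]

    print("mean:             ", statistics.mean(scores))
    print("median:           ", statistics.median(scores))
    print("mode:             ", statistics.mode(scores))
    print("range:            ", max(scores) - min(scores))
    print("variance (sample):", statistics.variance(scores))
    print("std dev (sample): ", statistics.stdev(scores))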


And speaking of statistics: They're not only used to study the media; they're used in the media, too -- and this is another topic of media research: the presentation (or misrepresentation) of statistical information in the media. At the Annenberg Public Policy Center of the University of Pennsylvania, the Political Fact Check project analyzes the (often manipulative) use of statistics in the media. Visit their site and skim a few of their recent reports.

Sociologist Joel Best, author of Damned Lies and Statistics: Untangling Numbers from the Media, Politicians, and Activists, was the subject of a 6/8/01 "Talk of the Nation" interview on how statistics are derived, why media and politicians are so dependent on them, and how media makers and users, politicians, activists, and the general public can think more critically about statistics. Download "Arguing with Statistics" [.ram] and listen to the interview in Real Player sometime this week.



CONTENT ANALYSIS

Content analysis is a research technique for the systematic classification and description of communication content according to certain predetermined categories. It may involve quantitative or qualitative analysis, or both. Technical objectivity requires that the categories of classification and analysis be clearly and operationally defined so that other researchers can follow them reliably.... It is important to remember, however, that content analysis itself provides no direct data about the nature of the communicator, audience, or effects. Therefore, great caution must be exercised whenever this technique is used for any purpose other than the classification, description and analysis of the manifest content of the communication (Charles R. Wright, Mass Communication: A Sociological Perspective, 1986, pp. 125-6).

Content analysis may be defined as a methodology by which the researcher seeks to determine the manifest content of written, spoken, or published communication by systematic, objective, and quantitative analysis. It is, then, a quantitative method applicable to what has traditionally been called qualitative material -- written language.... (George V. Zito, Methodology and Meanings: Varieties of Sociological Inquiry, 1975, p. 27)

It is systematic because it makes use of principled sampling and coding; objective because the researcher's biases are not to affect the analysis; and quantifiable because it is primarily concerned with counting occurrences.
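
At its simplest, that counting looks something like the sketch below; the coded story topics are hypothetical, standing in for whatever units and categories your own analysis defines:

    from collections import Counter

    # Hypothetical coded data: the topic assigned to each front-page story in a week's sample.
    coded_stories = [
        "politics", "crime", "politics", "economy", "sports",
        "crime", "politics", "weather", "economy", "politics",
    ]

    counts = Counter(coded_stories)
    total = sum(counts.values())

    for category, n in counts.most_common():
        print(f"{category:10s} {n:3d}  ({n / total:.0%} of sample)")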

In the early 20th century, researchers measured the column inches newspapers devoted to particular subjects in an attempt to determine which subjects were covered in greatest length -- and, by extension, which topics were made most salient -- and to link those findings to surveys of public opinion. This research, obviously, was inspired by a theoretical precursor to "agenda setting." The Tyndall Report still does this kind of work -- but with tv networks' weekday nightly news coverage [Image Credit: Tyndall Report, June 5, 2005].

Other early studies used content analysis to "monitor topics in press content in continuous surveys so as 'to establish a system of bookkeeping of the 'social weather' comparable in principle to the statistics of the U.S. Weather Bureau'" (Krippendorff, quoted in Jensen 103). Also, in wartime, researchers monitored domestic enemy broadcasts to "understand and predict events within Nazi Germany and its allies and to estimate the effects of military actions on war mood" (Krippendorff, quoted in Jensen 103).

Early definitions of quantitative content analysis distinguished between "knowledge" and "interpretation" of media content. According to Berelson, qualitative analysts often developed their interpretations as part of the analytic procedures (remember the "emergent" properties of qualitative methods?), while the quantitative analyst formed his interpretations after the analytic procedure; he based his interpretation "upon the total completed analysis in order to see particular pieces of content in perspective" (Berelson, quoted in Jensen 103). The only way to avoid "interpretation" during analysis is to concentrate solely on "manifest," or denotative, meanings to the exclusion of latent or connotative meanings, since manifest content, these early methodologists thought, produced meanings that everybody could agree on. Focusing exclusively on denotation -- and, furthermore, assuming that denotation is universally unambiguous -- is a gross oversimplification of the processes by which meaning is made; a research methodology that focuses on such a limited and flawed understanding of meaning is, therefore, of little use. We can see here how various semiotic, poststructuralist, and reader response theories are linked to the development of new methodologies for dealing with meaning.

Klaus Krippendorff (great name, eh?) argued that interpretation takes place at all stages in the research process: in establishing analytical categories, in assigning units to these categories, and in correlating findings with "theoretical conceptions of society and culture" (Schroder in Jensen 104). He further refined the definition -- and through it, reframed the focus -- of content analysis; he defined it as "a research technique for making replicable and valid inferences from data to their context" (Content Analysis: An Introduction to Its Methodology, Sage, 1980: 21). The "analytical construct" allows you to make claims of correspondence between data and context, which is the environment in which the data was created and in which it exists. It's not enough to focus on the individual units of analysis you're working with in a content analysis; you need to look at the media contexts in which they're presented -- as part of a television show, a newspaper, or a conversation, for example -- and the social contexts in which they've been created and are consumed. This larger context includes "the sender's intentions, a receiver's cognitive and behavioral effects, the institution within which it is exchanged, or the culture within which it plays a role" (Lawrence Mullen, "An Overview of Political Content Analyses of Magazines," EJC/REC 4:2-4 (1994)).



Practical Applications of Content Analysis

  1. To determine authorship.
  2. To examine trends or patterns in documents -- particularly well suited for uncovering patterns "that are too subtle to be visible on casual inspection"; the systematicity and objectivity of the method also ensure that a review of your source material yields a comprehensive catalogue of its contents -- not just a list of those elements that confirm your hypothesis (Lutz and Collins, quoted in Rose 55).
  3. To assess the representation of social groups and events.
  4. To compare media content with real-world indicators.
  5. To test hypotheses about the policies or aims of media producers.
  6. To monitor shifts in public opinion.
  7. To test for media effects.

There are several examples of academic content analysis research on Daniel Chandler's fantastic Media and Communications Site. The Annenberg Public Policy Center of the University of Pennsylvania also frequently makes use of content analysis in its research, which is intended to influence the making of public policy. In its report on Legislative Issue Advertising in the 108th Congress, the researchers analyzed the content of "ads that were about issues of public policy, specifically national policy; in other words, persuasive ads about issues before the Congress, president, or regulatory agencies." Download and skim the report, but pay particular attention to the methodology section, which begins on page 47 [Image Credit: Annenberg Public Policy Center].

Because it's such a rigorous and rule-bound approach, you might think that content analysis is used primarily in the academy. That's certainly not the case. Audience Dialogue, an Australia-based consultancy, provides a few examples of content analysis -- including studies of TV violence, newspaper coverage of asylum seekers, and technology diffusion. The Center for Media and Public Affairs uses content analysis -- they explain their methodology on their webpage -- to examine various issues in news and entertainment media, including sex, violence, and race in popular culture and political humor.

Delahaye Medialink, a marketing/PR company, uses "media analysis" to analyze their clients' media coverage, and Biz360 uses content analysis to provide "market intelligence," a graphic representation of which you can see in the image to the left [Image Credit: Biz360]. Content analysis is also useful in search engine optimization, as evidenced by this "keyword analysis tool."

Media Tenor monitors international media content. In the following statement, they explain their research focus, their reasons for using content analysis, and their specific methodology:

Media Tenor is dedicated to objective, continuous monitoring of international media content. Every day Media Tenor offices receive hundreds of periodicals, record the leading television news programs in the United States, Europe, South Africa, Israel, the Arabian countries and in Asia, and download the content of dozens of internet sites and news groups in order to amass the most accurate, comprehensive, and useful database of media content information in the world.

More than 225 media analysts pore over each medium, first identifying and categorizing each report into a comprehensive index of topics: every report containing information on a particular company or industry, on a political event or protagonist, or on a broad range of special topics is marked, grouped, and archived.

Furthermore, each report's content is then encoded: the protagonists, topics, locations, explicit ratings, implicit valuations, sources' identities, genders, and nationalities as well as other pertinent information are coded into an internationally networked database.

Some topical areas are coded in even greater detail, with each of these aspects fully documented on a per-statement basis, providing an extremely accurate depiction of the exact frequency and rating of topics, protagonists and locations mentioned.

Media Tenor researchers then analyze the data for trends in the coverage of individual businesses, industry sectors, political events, politicians' and countries' images, and special research areas, such as HIV/AIDS, Holocaust reparations, and a variety of other topics. Media Tenor places this research into the context of overall media content trends as well: individual media are analyzed for general content balance and are compared to competing publications. Overall types of media such as internet newsgroups or television news are similarly analyzed and compared to other media types.

According to Krippendorff (1980), six questions must be addressed in every content analysis:

  1. Which data are analyzed?
  2. How are they defined?
  3. What is the population from which they are drawn?
  4. What is the context relative to which the data are analyzed?
  5. What are the boundaries of the analysis?
  6. What is the target of the inferences?

Try to map these abstract questions onto the more pragmatic questions that Berger poses below. It's a good idea to plan your content analysis project on both levels -- the more abstract, "big picture" level that Krippendorff's questions address, and the more practical, "operational" details that Berger's questions deal with.


Steps of Content Analysis

  1. Decide what you want to find out and offer a hypothesis -- that is, an educated guess -- about what you expect to find....
  2. Explain what you'll be investigating and tell why this research is worth doing. It's good to keep these high-level questions in the back of your mind as you begin planning your research so that you don't get so caught up in the technique that you forget the project's larger purpose.
  3. Offer an operational definition of the topic you'll be studying. If you're studying violence, for instance, tell how you define it -- and know it when you see it. This question is equivalent to Krippendorff's question 2.
  4. Explain your basis for selecting the sample you'll be analyzing. Krippendorff's questions 1 and 3 are addressed here. How did you determine which examples to investigate? What period of time does your sample span? See this for more info on choosing your sample. Recall that we also discussed several sampling techniques in our lesson on qualitative methods.
  5. Explain what your unit of analysis is. Is your unit (a) the comic strip, (b) number of words, (c) column inches, (d) seconds of airtime? See this for more info on identifying units.
  6. Describe your classification system or system of categories for coding your material. Remember that the categories must be mutually exclusive and exhaustive -- that is, they must cover every example of what you're analyzing. You should recall that we discussed coding in our lesson on qualitative data analysis. The most fruitful content analyses involve classification systems that are informed by theory; purely descriptive categories yield little insight into the ideological or semantic work that lies behind the content, or that takes place among the audience.
  7. Prepare your content for coding and, in the process, develop your coding system. See this for more information. Rose explains on pp. 60-63 how Lutz and Collins developed the coding system for their study of National Geographic images. This is often an iterative process; as you prepare your content, you'll usually realize that your coding system needs revision.
  8. Test for intercoder reliability and make any necessary adjustments, such as increased training and practice for the coders or an adjustment of the operational definition and code guides. (A minimal agreement check in code appears just after this list.)
  9. Using your coding system, analyze the sample you have selected. Remember to measure only manifest content -- that which is explicitly stated rather than that which you "read between the lines" or underneath the images. You may have to return to your categories and refine your classification scheme as you analyze your sample. See this for more information. You can use software to assist in your coding and data analysis; some resources on content analysis programs are available here and here [choose the "software" link in the menu on the left].
  10. Present your findings using quantified data....
  11. Interpret your results using your numerical data and other material that may be relevant to your research. Krippendorff's questions 4, 5, and 6 are all folded into this step. Audience Analysis offers advice on how to make sense of your data. Kristin Jackson and William Trochim of Cornell University propose that concept mapping, which we discussed in an earlier lesson, can be used in content analysis; click on the link and skim their article.

(Modified from Arthur Asa Berger, Media and Communication Research Methods, 2000: 184).
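
As promised in step 8, here is a minimal sketch of an intercoder agreement check, assuming two coders and a small hypothetical set of coded ads; percent agreement is the crudest measure, and Cohen's kappa corrects it for the agreement two coders would reach by chance:

    from collections import Counter

    # Hypothetical: two coders independently assigned each of 12 ads to a theme category.
    coder_a = ["humor", "sex", "humor", "friendship", "humor", "sex",
               "friendship", "humor", "sex", "humor", "friendship", "humor"]
    coder_b = ["humor", "sex", "humor", "humor", "humor", "sex",
               "friendship", "humor", "sex", "sex", "friendship", "humor"]

    n = len(coder_a)

    # Simple percent agreement.
    p_observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n

    # Cohen's kappa: discount the agreement expected by chance, given each coder's category frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    categories = set(coder_a) | set(coder_b)
    p_expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    kappa = (p_observed - p_expected) / (1 - p_expected)

    print(f"percent agreement: {p_observed:.2f}")
    print(f"Cohen's kappa:     {kappa:.2f}")

Conventions vary, but a kappa much below the high .60s or .70s is usually read as a sign that the coding scheme or the coder training needs more work.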



Here's a flowchart illustrating the stages of content analysis, drawn from Kimberly Neuendorf's The Content Analysis Guidebook. Click on the image to access a .pdf version.




Advantages of Content Analysis (Berger 181)

1) It can be used to confirm or disconfirm your intuitive impressions about media content by performing a systematic description of a large body of media "discourses" from which you can derive numbers to "express the frequency and prominence of particular textual properties" (Schroder in Jensen 102).

2) It is unobtrusive; the researcher does not intrude on his or her subject of study, and therefore does not affect the outcome.

3) It is relatively inexpensive.

4) It uses material that is relatively easy to obtain and work with.




Challenges Presented by Content Analysis (Berger 182)

  1. By reducing media content to numerically expressed trends and patterns, and by focusing on manifest content, you inevitably reduce the complexity of meaning -- and how it's encoded and decoded. As Rose explains, content analysis, as applied to visual texts, "focuses almost exclusively on the compositional modality of the site of the image itself. It therefore has very little to say about the production or the audiencing of images" (Rose 56). Furthermore, "numbers do not translate easily into significance"; just because something occurs often does not mean that it's important (Rose 66).

    A recent essay [link] in the New York Times addresses the history of the concordance -- an index of all the words in a text -- and its modern-day usage on Amazon. The author, Deborah Friedell, reminds us that "a statistical tool has no ear for allusions, for echoes, for metrical and musical effects, for any of the attributes that make words worth reading."

  2. Content analysis "cannot discriminate between an aspect of an image that exemplifies a code perfectly, and one that is a weak example of it" (Rose 66).
  3. Content analysis is not well suited to address the relationships between its units of analysis -- that is, what goes on between the column inches or seconds of airtime.
  4. Content analysis does not handle the site of a media text's production or the site of its reception, which is why much content analysis research is supplemented with surveys or other methods that allow the researcher to explore connections between the text and its creation and consumption.
  5. It is often difficult to obtain reliability in coding.
  6. As with many other methodologies, it is often difficult to define your key terms operationally. For instance, does frequency necessarily mean salience or significance?







SURVEYS

Surveys are common in marketing research, politics, and audience studies. There are two major types: descriptive and analytical surveys. A descriptive survey attempts to describe or document current conditions or attitudes -- to explain what exists now; examples include Department of Labor surveys of unemployment and pollsters' surveys of the electorate during campaigns. An analytical survey attempts to describe and explain why situations exist; it investigates research questions or tests hypotheses. The researcher examines two or more variables to determine their relationships and to develop explanations. For instance, television networks survey the market to make claims about how lifestyles affect viewing habits, or how lifestyle can be used to predict the success of a new program.




The Pew Research Center for the People and the Press makes use of surveys to study people's attitudes toward the press and policy issues [Image Credit: PRCPP homepage].

They identify five primary areas of investigation:

  • The People & The Press - explores public attitudes about the credibility, social value and salience of the news media.
  • The People, The Press & Politics - features a typology which divides the American electorate into distinct voting groups and identifies the basic values and attitudes that animate political behavior.
  • The News Interest Index - measures on a regular basis how closely the public follows the major news stories and links this to views about politics and policy issues.
  • America's Place in the World - a series of in-depth surveys and analyses of the public and opinion leaders on international policy in the post-Cold War era.
  • Media Use - major surveys that measure the public's use of, and attitudes toward, the Internet and traditional news outlets.

Look over their list of survey reports to see what they've most recently published, and use the "view reports by topic" tool to find a few reports that match your interests.



Advantages of Surveys (from Wimmer & Dominick 167-8)

  1. They can be used to investigate problems in realistic settings.
  2. The cost is reasonable -- and, depending on your budget, you can tailor the kind of survey you use: mail, telephone, personal interview, or group administration.
  3. You can collect a relatively large amount of data from a variety of people with relative ease. Because you're better able to draw larger samples that are representative, your results can be more readily generalized.
  4. Surveys aren't constrained by geographic boundaries.
  5. There is plenty of existing survey research -- data archives, government documents, census data, rating books, etc., -- that can support and provide a foundation for your own survey.


Disadvantages of Surveys:

  1. Independent variables can't be manipulated as they can be in experiments. Therefore, you cannot prove causal relationships.
  2. Poor survey design can introduce bias.
  3. Inappropriate respondents can skew results.
  4. Because they depend on "post hoc, self-reports of phenomena," their results may suffer from inaccuracies.
  5. People's distaste for telemarketers and sidewalk surveys has made it more difficult to find willing participants.

Sampling:

We discussed sampling in our lesson on qualitative research -- but in quantitative research, we often choose probability sampling so that we can calculate our sampling error -- that is, the degree to which our sample differs from the population as a whole in terms of particular characteristics, if we already know the distribution of those characteristics in the population. You can use one of the sampling methods we discussed in the qualitative research methods lesson -- a purposive sample, a quota sample, or even a haphazard sample. But we would prefer to use one of four kinds of probability sampling (the first three are sketched in code after this list):

  1. Random Sampling: every individual in a unit has an equal chance of being included
  2. Systematic Random Sampling: a criterion is fixed to select every nth person or unit from the population
  3. Stratified Random Sampling: involves dividing your population into homogeneous subgroups and then taking a simple random sample in each subgroup.
  4. Cluster Sampling: we progressively and hierarchically divide the population into clusters (e.g., first geographic region, then district within region, then zip codes within districts, then individuals within each zip code), then measure all units within those samples
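
Here is a minimal sketch of the first three of these in Python (cluster sampling is omitted for brevity); the sampling frame of 1,000 region-tagged subscriber IDs is hypothetical, and real frames are of course messier:

    import random

    random.seed(7)

    # Hypothetical sampling frame: 1,000 subscriber IDs, each tagged with a region.
    population = [{"id": i, "region": random.choice(["north", "south", "east", "west"])}
                  for i in range(1000)]

    # 1. Simple random sample: every unit has an equal chance of selection.
    simple_random = random.sample(population, 50)

    # 2. Systematic random sample: a random start, then every nth unit.
    interval = len(population) // 50
    start = random.randrange(interval)
    systematic = population[start::interval]

    # 3. Stratified random sample: a simple random sample drawn within each subgroup.
    stratified = []
    for region in ("north", "south", "east", "west"):
        stratum = [unit for unit in population if unit["region"] == region]
        stratified.extend(random.sample(stratum, 12))

    print(len(simple_random), len(systematic), len(stratified))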

The Web Center for Social Research Methods provides more information on probability sampling. And Audience Dialogue offers advice on sampling for surveys.

The Gallup Organization explains how it secures a random sample for its polls:

The first one thousand people streaming out of a Yankees game in the Bronx clearly aren't representative of all Americans. Now consider a group compiled by selecting 1,000 people coming out of a Major League Baseball game in every state in the continental United States -- 48,000 people! We now have a much larger group -- but we are still no closer to representing the views of all Americans than we were in the Bronx. We have a lot of baseball fans, but, depending on the circumstances, these 48,000 people may not even be a good representative sample of all baseball fans in the country -- much less all Americans, baseball fans or not.

When setting out to conduct a national opinion poll, the first thing Gallup does is select a place where all or most Americans are equally likely to be found. That wouldn't be a shopping mall, or a grocery store, an office building, a hotel, or a baseball game. The place nearly all adult Americans are most likely to be found is in their home. So, reaching people at home is the starting place for almost all national surveys.

By necessity, the earliest polls were conducted in-person, with Gallup interviewers fanning out across the country, knocking on Americans' doors. This was the standard method of interviewing for nearly fifty years, from about 1935 to the mid-1980s, and it was a demonstrably reliable method. Gallup polls across the twelve presidential elections held between 1936 and 1984 were highly accurate, with the average error in Gallup's final estimate of the election being less than 3 percentage points.

By 1986, a sufficient proportion of American households had at least one telephone to make telephone interviewing a viable and substantially less expensive alternative to the in-person method. And by the end of the 1980s, the vast majority of Gallup's national surveys were being conducted by telephone. Today, approximately 95% of all households have a telephone and every survey reported in this book is based on interviews conducted by telephone.

Gallup proceeds with several steps in putting together its poll with the objective of letting every American household, and every American adult have an equal chance of falling into the sample.

First, we clearly identify and describe the population that a given poll is attempting to represent. If we were doing a poll about baseball fans on behalf of the sports page of a major newspaper, the target population might simply be all Americans aged 18 and older who say they are fans of the sport of baseball. If the poll were being conducted on behalf of Major League Baseball, however, the target audience required by the client might be more specific, such as people aged twelve and older who watch at least five hours' worth of Major League Baseball games on television, or in-person, each week.

In the case of Gallup polls that track the election and the major political, social and economic questions of the day, the target audience is generally referred to as "national adults." Strictly speaking the target audience is all adults, aged 18 and over, living in telephone households within the continental United States. In effect, it is the civilian, non-institutionalized population. College students living on campus, armed forces personnel living on military bases, prisoners, hospital patients and others living in group institutions are not represented in Gallup's "sampling frame." Clearly these exclusions represent some diminishment in the coverage of the population, but because of the practical difficulties involved in attempting to reach the institutionalized population, it is a compromise Gallup usually needs to make.

Next, we choose or design a method that will enable us to sample our target population randomly. In the case of The Gallup Poll, we start with a list of all household telephone numbers in the continental United States. This complicated process really starts with a computerized list of all telephone exchanges in America, along with estimates of the number of residential households those exchanges have attached to them. The computer, using a procedure called random digit dialing (RDD), actually creates phone numbers from those exchanges, then generates telephone samples from those. In essence, this procedure creates a list of all possible household phone numbers in America and then selects a subset of numbers from that list for Gallup to call.

It's important to go through this complicated procedure because estimates are that about 30% of American residential phones are unlisted. Although it would be a lot simpler if we used phone books to obtain all listed phone numbers in America and sampled from them (much as you would if you simply took every 38th number from your local phone book), we would miss out on unlisted phone numbers, and introduce a possible bias into the sample.

The Number of Interviews or Sample Size Required

One key question faced by Gallup statisticians: how many interviews does it take to provide an adequate cross-section of Americans? The answer is, not many -- that is, if the respondents to be interviewed are selected entirely at random, giving every adult American an equal probability of falling into the sample. The current U.S. adult population in the continental United States is 187 million. The typical sample size for a Gallup poll, which is designed to represent this general population, is 1,000 national adults.

The actual number of people that need to be interviewed for a given sample is to some degree less important than the soundness of the fundamental equal probability of selection principle. In other words -- although this is something many people find hard to believe -- if respondents are not selected randomly, we could have a poll with a million people and still be significantly less likely to represent the views of all Americans than a much smaller sample of just 1,000 people -- if that sample is selected randomly.

To be sure, there is some gain in sampling accuracy that comes from increasing sample sizes. Common sense -- and sampling theory -- tell us that a sample of 1,000 people probably is going to be more accurate than a sample of 20. Surprisingly, however, once the survey sample gets to a size of 500, 600, 700 or more, there are fewer and fewer accuracy gains that come from increasing the sample size. Gallup and other major organizations use sample sizes of between 1,000 and 1,500 because they provide a solid balance of accuracy against the increased economic cost of larger and larger samples. If Gallup were to -- quite expensively -- use a sample of 4,000 randomly selected adults each time it did its poll, the increase in accuracy over and beyond a well-done sample of 1,000 would be minimal, and generally speaking, would not justify the increase in cost.

Statisticians over the years have developed quite specific ways of measuring the accuracy of samples -- so long as the fundamental principle of equal probability of selection is adhered to when the sample is drawn.

For example, with a sample size of 1,000 national adults, (derived using careful random selection procedures), the results are highly likely to be accurate within a margin of error of plus or minus three percentage points. Thus, if we find in a given poll that President Clinton's approval rating is 50%, the margin of error indicates that the true rating is very likely to be between 53% and 47%. It is very unlikely to be higher or lower than that.

To be more specific, the laws of probability say that if we were to conduct the same survey 100 times, asking people in each survey to rate the job Bill Clinton is doing as president, in 95 out of those 100 polls, we would find his rating to be between 47% and 53%. In only five of those surveys would we expect his rating to be higher or lower than that due to chance error.

As discussed above, if we increase the sample size to 2,000 rather than 1,000 for a Gallup poll, we would find that the results would be accurate within plus or minus 2% of the underlying population value, a gain of 1% in terms of accuracy, but with a 100% increase in the cost of conducting the survey. These are the cost value decisions that Gallup and other survey organizations make when they decide on sample sizes for their surveys.
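
The margin-of-error figures Gallup cites follow from the standard formula for a sample proportion, MOE = z * sqrt(p(1 - p) / n). Here is a minimal sketch, assuming simple random sampling, the worst-case proportion of 0.5, and the conventional 95% confidence level:

    import math

    def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
        """95% margin of error for a sample proportion under simple random sampling."""
        return z * math.sqrt(p * (1 - p) / n)

    # Worst case (p = 0.5) for the sample sizes discussed above.
    for n in (1000, 2000):
        print(f"n = {n}: +/- {margin_of_error(0.5, n) * 100:.1f} points")

Doubling the sample from 1,000 to 2,000 only trims the margin from roughly 3.1 to roughly 2.2 points -- the diminishing returns the excerpt describes.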

Image Credit: Robert Mankoff, Ed., The Complete Cartoons of the New Yorker (New York: Black Dog & Leventhal, 2004)


Survey Time Span:

Surveys can be used to obtain information at a specific point in time, or over a span of time. Longitudinal research (diachronic studies) might take the form of trend studies; cohort studies, which focus on the same subset of a population each time data are collected; and panel studies, which tap the same sample of respondents each time.


Creating a Survey/Questionnaire:

Since a poorly designed survey can introduce bias, it's important to make sure your questionnaire is as clear as possible. The American Statistical Association offers several helpful brochures about survey research. One in particular that may be of use is their brochure on "designing a questionnaire." See also Robert Frary, "Hints for Designing Effective Questionnaires," Practical Assessment, Research & Evaluation, 5:3 (1996) and Audience Dialogue's "principles of questionnaires."


VISUALIZING DATA

It's not all number-crunching. There's plenty of room for creativity in quantitative research. Your skills as media designers and producers might come in particularly handy in some of the final stages of the research process -- particularly in interpreting and presenting data. Through a visual representation of data -- either existing data or data you've collected yourself -- you can sometimes illuminate meaningful patterns or come to useful conclusions that a simple "spreadsheet presentation" wouldn't allow.

In the mid 1800s, amidst a cholera epidemic in the city of London, Dr. John Snow, a forefather of epidemiology, mapped both the locations of public water pumps and the sites of cholera deaths onto a city map [click the image to the left for a larger pdf]. The map brought to the fore a most significant correlation: a disproportionate number of deaths occurred around the Broad Street pump.

Hmmm. Very interesting.

After 500 lives had already been lost to the disease, Snow had the Broad Street pump's handle removed, thus drastically slowing the spread of cholera.

In this case, mapping turned out to be more than just a data presentation technique. It aided in analysis; it generated new understanding of Snow's research data. Sure, had there been Excel spreadsheets back in the 1850s, Snow may have come to the same conclusion even without the map. But that doesn't mean that the map was merely superfluous, or was simply an afterthought -- or a means of "packaging" his research insights. Data visualization needn't be relegated to the last step of a research project: to visualizing conclusions. Sometimes, as in Snow's case, visualization can help us reach those conclusions.
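
In the same spirit, here is a minimal sketch of a Snow-style dot map using matplotlib; the pump and death coordinates are invented purely to illustrate the technique, not drawn from Snow's actual data:

    import matplotlib.pyplot as plt

    # Hypothetical coordinates standing in for pump locations and cholera deaths.
    pump_x, pump_y = [2.0, 5.0, 8.0], [3.0, 5.0, 2.0]
    death_x = [4.6, 5.2, 5.5, 4.9, 5.1, 7.8, 2.3, 5.3, 4.8, 5.0]
    death_y = [4.8, 5.3, 4.7, 5.1, 4.5, 2.2, 3.1, 5.2, 4.9, 5.4]

    plt.scatter(death_x, death_y, s=20, c="gray", label="cholera deaths")
    plt.scatter(pump_x, pump_y, s=120, c="black", marker="^", label="water pumps")
    plt.legend()
    plt.title("Deaths cluster around the pump near (5, 5)")
    plt.savefig("snow_style_map.png")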

Edward Tufte's got lots to say about data visualization -- both its useful and deleterious applications. He offers some warnings we'd do well to heed. For instance, just because you can map it, doesn't mean you should. Does the visualization offer any new insight? Or is it just gratuitous -- an attempt for you to show off your design skills?

And just because you can make a cool flowchart, and draw lots of color-coded arrows between the variables you've plotted on your data map, doesn't mean that you've proven any causal relationship between those variables. A dazzling data map can't atone for poor methods or weak data.

For more examples, check out York University's (CA) "best and worst of statistical graphics."

For examples of research using quantitative methods, visit the New School's electronic resources, choose one of the databases featuring media and communication studies publications, and search for "experiment," "content analysis," "statistical," or some term that might appear in an abstract for a quantitative study. Here are just a few examples:

Kenneth Yang, "The Influence of Humanlike Navigation Interface on Users' Responses to Internet Advertising" Telematics & Information 23:1 (Feb. 2006): 38-55. Abstract: The present study integrates literature from the interface design and Internet advertising effectiveness literature to examine whether a humanlike navigation interface will increase the effectiveness of Internet advertising. The study employs a post-test only with a control group experiment design to examine whether and how a humanlike navigation interface will have effects on users’ responses to Internet advertising. The study found that a humanlike navigation interface leads to more favorable attitudes toward the Internet advertisement, the advertised product, higher interface evaluation and immersive feeling, and better product information recall. Implications and limitations of the study were discussed. (You might wander what "humanlike" means; let's hope he defines his terms in the body of the paper.)

Erica Scharrer, "Hypermasculinity, Aggression, and Television Violence: An Experiment" Media Psychology 7:4 (2005): 353-376. Abstract: This experiment tests the role of hypermasculinity (HM) and trait aggression in predicting aggressive responses following exposure to violent television stimuli. Male college students (N = 93) were exposed to a violent and hypermasculine television program, a violence-only program, or a control program. Results find that some dimensions of HM and preexisting aggression interacted with exposure to the treatment stimuli to predict change in aggression, and that HM can also be treated as a dependent variable that is affected by television exposure.

Kenneth Lachlan, "Models for Aggressive Behavior: The Attributes of Violent Characters in Popular Video Games" Communication Studies 56:4 (Dec. 2005): 313-329. Abstract: Americans are increasingly concerned about video games, presumably due to the amount and graphicness of violence they contain. Social Cognitive Theory suggests that people are more likely to imitate characters they see as attractive or similar to self. To date, however, little research has examined attributes of violent characters in video games related to this issue. This content analysis examined 10 minutes of play from 60 of the most popular video games. Adapting the coding scheme from the National Television Violence Study (Smith et al., 1998; Wilson et al., 1997, 1998), various demographic and contextual features of violent characters and violent interactions were assessed. The results show that violent game characters have attributes that are likely to increase the extent to which some players perceive them as attractive and similar. Moreover, these violent characters engage in aggression that is presented as justified or graphic. [ABSTRACT FROM AUTHOR]

Lara Zwarun, "Doing What They Say, Saying What THey Mean: Self-Regulatory Compliance and Depictions of Drinking in Alcohol Commercials in Televised Sports" Mass Communication & Society 8:4 (Nov. 2005): 347-371. Abstract: A content analysis examined how well alcohol ads from televised sporting events adhered to self-regulatory advertising guidelines. Although nearly every ad followed the guidelines literally, there were numerous instances of strategically ambiguous content that could be interpreted as violating the guidelines’ spirit. For example, a joke about designated drivers hinted at drunkenness without explicitly showing it, and many ads showed drinking as well as risky activities like driving or swimming, although not necessarily occurring simultaneously. The study also confirmed that alcohol ads are common in televised sporting events, and that the most frequent themes are humor, friendship, sex, and romance, a potentially troubling finding given that beliefs about the social benefits of alcohol predict drinking in young people. These findings suggest that alcohol advertising might contribute to the formation of expectancies in young people, and that current self-regulation may not be an effective way to prevent alcohol advertising from appealing to people under the age of 21.

Nicky Mee, "A Communications Strategy for Kerbside (sic?) Recycling" Journal of Marketing Communications 11:4 (Dec. 2005): 297-308. Abstract: This paper is a case study that evaluated the effects of a communications campaign for increasing recycling rates and positively influencing recycling attitudes and the behaviour of residents in Rushcliffe. Household waste is growing by approximately 3% each year and there is a need for a planned and sustained strategy that is regularly evaluated and adapted in order to achieve positive environmental outcomes and a mutual understanding between a council and its residents. The research reported is longitudinal and involved large-scale surveys that explored attitudes to the current refuse collection methods, the new recycling service and the effectiveness of the communications tools used.

Chang ChingChing, "Personal Values, Advertising, and Smoking Motivation in Taiwanese Adolescents" Journal of Health Communication 10:7 (Oct. 2005): 621-634. Abstract: This article explores the role that personal values plays in motivating Taiwanese adolescents to smoke. In a nationwide survey of high school students, smokers attached greater importance to hedonic gratification values and less importance to idealism values than did nonsmokers. Hedonic gratification values were associated with favorable attitudes toward smoking, while idealism values were associated with unfavorable attitudes toward smoking. Attitudes toward smoking predicted adolescent smoking behavior. Evidence suggested that advertising plays an important role in motivating adolescents with hedonic gratification values to smoke. First, in the survey, hedonic gratification values were associated with paying attention to and expressing favorable attitudes toward cigarette advertising. Second, a content analysis of cigarette ads in magazines found hedonic gratification values to be the most commonly portrayed values, occurring in 62.7% of ads.

For more on these quantitative methods, consult these sources:

  • Martin W. Bauer, “Classical Content Analysis: A Review” In George Gaskell & Martin Bauer, Eds., Qualitative Researching With Text, Image and Sound: A Practical Handbook for Social Research (Thousand Oaks, CA: Sage, 2000): 131-151.
  • Colin Robson, “Surveys and Questionnaires” In Real World Research: A Resource for Social Scientists and Practitioner Researchers, 2nd ed. (Malden, MA: Blackwell, 1993): 227-268.
  • Colin Robson, “Fixed Designs” In Real World Research: A Resource for Social Scientists and Practitioner Researchers, 2nd ed. (Malden, MA: Blackwell, 1993): 95-162.
  • Colin Robson, “Tests and Scales” In Real World Research: A Resource for Social Scientists and Practitioner Researchers, 2nd ed. (Malden, MA: Blackwell, 1993): 292-307.
  • Colin Robson, “The Analysis of Quantitative Data” In Real World Research: A Resource for Social Scientists and Practitioner Researchers, 2nd ed. (Malden, MA: Blackwell, 1993): 391-454.


RP: What epistemological or ontological biases does quantitative research present? What subjects, aside from media effects, might lend themselves to quantitative research? Might you find a way to approach your research subject quantitatively? Why or why not?