Journal of Computer-Mediated Communication 14 (2009) 368–392
Computer-Mediated Word-of-Mouth
Communication on RateMyProfessors.com:
Expectancy Effects on Student Cognitive and
Behavioral Learning
Autumn Edwards
Ohio University
Chad Edwards
University of Kansas
Carrie Shaver
Indiana University
Mark Oaks
Michigan University
The purpose of this study was to experimentally test the influence of expectancies
formed through computer-mediated word-of-mouth communication (WOM) on student
learning. Increasingly, students rely on computer-mediated WOM through sites such as
RateMyProfessors.com to aid in the process of information-gathering and course selection.
It was hypothesized that students who received positive computer-mediated WOM about
a course would demonstrate greater levels of cognitive and behavioral learning than would
students who received no information or negative computer-mediated WOM. Results
demonstrated the predicted effects for cognitive and behavioral learning. It was further
hypothesized that observed expectancy effects would be mediated by affect toward learning.
Results supported a partial mediational role for affect in the context of positive expectancies,
but not negative expectancies. Results were discussed in terms of the role of computermediated WOM in generating expectations, the expectations-affect-behavior hypothesis,
and the influence of student expectations on learning outcomes.
Key words: computer-mediated communication, word-of-mouth, expectancy effects,
cognitive learning, behavioral learning, affective learning, RateMyProfessors.com.
doi:10.1111/j.1083-6101.2009.01445.x
Research has demonstrated that word-of-mouth communication (WOM) influences
short-term and long-term perceptions (Herr, Kardes, & Kim, 1991), attitudes, and
behaviors (Harrison-Walker, 2001). As ‘‘a dominant force in the marketplace,’’
WOM has been studied as it relates to judgments regarding products, services, and
organizations (Mangold, Miller, & Brockway, 1999, p. 73). Increasingly, students
rely on computer-mediated WOM through sites such as RateMyProfessors.com
and PickAProf.com to aid in information-gathering and course selection. However,
there has been little examination of this influence in the teaching/learning context
(Edwards, C., Edwards, A., Qing, & Wahl, 2007; Herr et al., 1991). This study
investigates the role of computer-mediated WOM as a source of student expectations that influence learning. An experimental design is employed to test
the effects of positive and negative computer-mediated WOM (operationalized as
RateMyProfessors.com evaluations) on student levels of cognitive and behavioral
learning, and to explore affect as a mediator of these effects.
Literature Review
Word-of-Mouth Communication (WOM)
Defined by Harrison-Walker (2001) as ‘‘informal, person-to-person communication
between a perceived noncommercial communicator and a receiver regarding a brand,
a product, an organization, or a service’’ (p. 63), WOM is transferred from one person
to another through a communication medium (Brown, Barry, Dacin, & Gunst, 2005).
Research on WOM is extensive, demonstrating links between WOM and consumer
purchasing behavior (e.g., Arndt, 1967; Howard & Gengler, 2001; Liu, 2006), product
success (Day, 1971; Katz & Lazarsfeld, 1955), cross-cultural marketing (Cheung,
Anitsal, & Anitsal, 2007), satisfaction with experiences (Burzynski & Bayer, 1977;
Harrison-Walker, 2001; Wangenheim & Bayón, 2007), response to negative messages
(DeCarlo, Laczniak, Motley, & Ramaswami, 2007), diffusion of innovations (Arndt,
1967; Singhal, Rogers, & Mahajan, 1999; Sultan, Farley, & Lehmann, 1990; Sun,
Youn, Wu, & Kuntaraporn, 2006), perception of risk (Shrum & Bischak, 2001), and
persuasion (Bytwerk, 2005; Carl, 2006; Compton & Pfau, 2004).
According to Bickart and Schindler (2001), conventional WOM refers to spoken
words exchanged face-to-face between friends or relatives. However, technology-facilitated written personal opinions and experiences shared among acquaintances or
strangers have come to typify computer-mediated WOM (Sun et al., 2006). Because of
the Internet’s bidirectional communication capabilities, large-scale WOM networks
have developed (Dellarocas, 2003) and broadened both the availability and the
importance of WOM in the marketplace (Zinkhan, Kwak, Morrison, & Peters, 2003).
Phelps, Lewis, Mobilio, Perry, and Raman (2004) argued that computer-mediated
WOM has eclipsed conventional WOM's influence on information and decision-making processes because of its speed, convenience, reach, and lack of face-to-face
social pressure.
Within an educational context, computer-mediated WOM and its influences have
gone mostly unexplored. Borgida and Nisbett (1977) found that vivid face-to-face
WOM about college courses had greater influence on course selection than did an
extensive collection of written course evaluations. However, today’s students have
increased opportunities for accessing and participating in WOM about college courses
and instructors because of the popularity of online instructor rating systems (Wilhelm
& Comegys, 2004). Edwards et al. (2007) demonstrated through experimental design
that positive computer-mediated WOM influenced student perceptions of instructors
(credibility and attractiveness) and attitudes toward learning course content (state
motivation and affective learning).
Online Instructor Rating Systems
Sites such as RateMyProfessors.com, PickAProf.com, ProfessorPerformance.com, and MySpace's professor rating system are widely used to evaluate college instructors and their courses. Founded in 1999, the largest and best-known
website of this kind is RateMyProfessors.com (RMP) (Kindred & Mohammed,
2005). As of October 2008, over 6.8 million student-generated ratings had been
posted, reviewing over 1 million instructors from more than 6,000 universities
and colleges in the U.S., Canada, Scotland, and Wales (About Us, RateMyProfessors.com). RMP reaches approximately 10 million college students each year
(Acquisition, PRNewswire.com). During the Fall quarter of 2007, MTV Network’s
mtvU acquired RMP, solidifying the former as the largest multiplatform college network and second most trafficked set of general college-focused websites (Acquisition,
PRNewswire.com). Because mtvU is the largest, most comprehensive television network dedicated to college students (broadcasting around the clock to 750 colleges in
the U.S., with a combined enrollment exceeding 7.2 million students), RMP will likely
demonstrate continued growth and prominence (Acquisition, PRNewswire.com).
On websites such as RMP, quantitative and open-ended evaluations of teaching
effectiveness are anonymously posted in order to aid students in the process of course
selection (Kindred & Mohammed, 2005). On RMP, students use 1 to 5 scales to rate
instructors in terms of their helpfulness, clarity, and easiness. An overall quality rating
for each instructor is derived by averaging their helpfulness and clarity ratings. The
numerical average is also paired with an icon of a face with one of three expressions:
smiling (good quality), neutral expression (average quality), and frowning (poor
quality). In addition, users may indicate the physical attractiveness (‘‘hotness’’) of an
instructor by putting a ‘‘chili pepper’’ next to the name. Open-ended evaluations of
the instructor and course can also be posted and become immediately available for
viewing by other users. Additionally, search tools allow users to browse by course
code, instructor name, university, department, ‘‘hotness’’ of instructor, and overall
quality of instructor.
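As a concrete illustration of this scoring scheme, the short sketch below (Python; the input values are those used on the study's stimulus handouts, described later in the Method section) shows how an overall quality rating is derived by averaging the helpfulness and clarity ratings:

```python
def overall_quality(helpfulness: float, clarity: float) -> float:
    """RMP-style overall quality: the mean of the helpfulness and clarity ratings.
    Easiness is displayed separately and is not folded into this score."""
    return round((helpfulness + clarity) / 2, 1)

# Ratings used on the study's positive and negative stimulus handouts:
print(overall_quality(5.0, 4.8))  # 4.9 (positive condition)
print(overall_quality(2.0, 1.4))  # 1.7 (negative condition)
```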
Several recent studies have focused on RMP. Kindred and Mohammed’s
(2005) investigation demonstrated that students’ motives for using RMP included
convenience, information-seeking, and interpersonal utility (curiosity about peer
experience) and that instructor competence and features of classroom experience
were the primary foci of comments posted on RMP (see also Silva et al., 2008). Felton,
Mitchell and Stinson (2004; 2005) found that instructor quality scores on RMP were
strongly positively correlated with perceived course easiness and professor sexiness
(as determined by the ‘‘chili pepper’’). Furthermore, Coladarci and Kornfield (2007)
found a strong positive association between instructors’ RMP quality ratings and
their scores on university-sanctioned student evaluations of teaching. In a second
investigation surrounding the validity and usefulness of RMP evaluations, Otto,
Sanford, and Ross (2008) found that RMP data is consistent with a valid measure of
student learning and does not demonstrate a halo effect.
Finally, Edwards et al. (2007) experimentally tested the effects of exposure to
positive and negative RMP evaluations on students’ perceptions of a target instructor
and course. Results showed that the content of RMP evaluations influenced student
perceptions of instructor credibility and attractiveness as well as students’ reported
levels of affective learning and state motivation to learn. Given the effects on
students’ attitudes toward learning observed in Edwards et al.’s (2007) experiment,
it is reasonable to expect that RMP evaluations may also shape expectations that, in turn, influence students' levels of cognitive and behavioral learning.
Further investigation of the impact of RMP ratings on the educational experience
is warranted given the increasing popularity and usage of instructor evaluation
websites. In commenting upon mtvU’s acquisition of RMP, General Manager Stephen
Friedman explained that ‘‘choosing the best courses and professors is a rite of passage
for every college student, and connecting with peers on RateMyProfessors.com has
become a key way millions of students now navigate this process’’ (Acquisition,
PRNewswire.com). As students continue to view higher education from a consumer-based perspective, seeking to maximize the value of their educational dollars, the
demand for information about instructors and courses prior to enrollment will
continue to grow (Gilroy, 2003). And, as demonstrated by Edwards et al. (2007),
students may rely heavily on computer-mediated communication prior to contact
with an instructor and course to form expectations of later experience.
Expectancy Effects
Braun (1976) noted that expectations influence experience by constructing what
becomes reality for an individual. The expectations we hold of ourselves and
others heavily impact our perceptions and evaluations, with profound implications
for cognitions and behavior (Brewer & Crano, 1994). The various effects of our
beliefs, perceptions, and presumptions on our own and others’ behavior are termed
expectancy effects (Rosenthal, 1978). Expectancy effects were first observed as placebo
phenomena in pharmacology research (e.g., Beecher, 1966) and later as experimenter
and confirmation bias effects in laboratory research (e.g., Rosenthal and Fode, 1961).
The wider social relevance of expectancy effects was recognized with Rosenthal and
Jacobson’s (1968) experiment which demonstrated that raising elementary teachers’
expectations of their students resulted in both immediate and persistent intellectual
gains on the part of the students.
In the educational context, the vast majority of research has focused on the
ways in which instructors’ expectations of student achievement impact students’
beliefs in their own abilities and their corresponding levels of educational success
(e.g., Braun, 1976; Brophy, 1983; Cooper & Good, 1983; Raffini, 1993; Rosenthal &
Rubin, 1978). However, a number of studies have focused on the effects of students’
positive expectations of academic experience on their performance. For example,
research has demonstrated that manipulating students’ expectations of teacher
competence, instructor reputation, and program quality affects student perceptions
and performance outcomes (Feldman & Prohaska, 1979; Feldman, Saletsky, Sullivan,
& Theiss, 1983; Fries, Horz, & Haimerl, 2006; Jamieson, Lydon, Stewart, & Zanna,
1987; Leventhal, Perry, & Abrami, 1977; Perry, Abrami, Leventhal, & Check, 1979).
Expectations may derive from past experiences with targets or from third-party
accounts (Snyder & Stukas, 1999), as in the case of WOM. Previous research on WOM
has demonstrated that it is an important source of consumer expectations (e.g., Clow,
Kurtz, Ozment, & Ong, 1997) and evaluations (Herr et al., 1991). In fact, WOM is the
primary means by which consumers gather information about services (Grönroos,
1990; Zeithaml, Berry, & Parasuraman, 1993). Furthermore, WOM received prior
to purchasing a good or service can create postpurchase effects (Burzynski & Bayer,
1977; Sheth, 1971). For example, Wangenheim and Bayón (2004) demonstrated that
receiving positive prepurchase WOM can lead to higher postpurchase satisfaction.
Thus, WOM figures importantly in the formation of expectations and subsequent
behavior and experience.
In the context of educational goods and services, students rely on computer-mediated WOM (on sites like RMP) to form expectations of prospective instructors
and courses (Kindred & Mohammed, 2005). Students’ anticipations in terms of
the amount and quality of their learning are likely outcomes of exposure to such
messages. Research in the psychology of education demonstrates the significance
of expectations of learning on its actualization. Learning expectations come from
a variety of sources, including institutions, instructors, and peers (Brophy, 1986;
Smith, Jussim, & Eccles, 1999). Significantly, students often internalize expectations
derived from these others (cf. Dusek, 1985), which then influence their subsequent
educational experiences by acting as a positive or negative motivation for behavior
(Kuh, 1999; Zimmerman, 2000).
Thus, it is plausible that computer-mediated WOM (through sites such as RMP)
will serve as a source of student expectancies that influence student learning outcomes.
Simply stated, students who expect to perform well on academic tasks perform better
than students who do not expect to perform well (Zanna, Sheraf, & Cooper, 1975).
Student Learning
The following sections briefly review two broad types of student learning: cognitive
and behavioral.
Cognitive learning
According to Bloom (1956), cognitive learning refers to the comprehension and
retention of knowledge. Cognitive learning is the ‘‘recall or recognition of knowledge
and the development of intellectual abilities and skills’’ (Bloom, 1956, p. 7). As
Titsworth (2001) explained, ‘‘cognitive learning refers to the extent to which students
achieve factual, conceptual, and critical understanding of course material’’ (p. 283).
Previous research has shown a relationship between cognitive learning and a
number of communication variables in the college classroom. Ellis (2000; 2004)
demonstrated that teacher confirmation was highly related to students’ perceived
levels of cognitive learning. Additionally, learner empowerment, affective learning,
and state motivation to learn were positively related to students’ course grades
(Frymier & Houser, 1999). Several studies have demonstrated positive associations
between instructor immediacy and cognitive learning (e.g., Christensen & Menzel,
1998; McCroskey, Sallinen, Fayer, Richmond, & Barraclough, 1996; Titsworth, 2001).
In an experimental investigation of student recall and retention of lecture material,
moderate levels of nonverbal immediacy on the part of the lecturer were found to have
a positive impact on student cognitive recall but not retention (Comstock, Rowell, &
Bowers, 1995). Furthermore, cognitive learning has been positively associated with
‘‘high-anxiety’’ learning environments (those characterized by constant monitoring
by instructors), amount of presented information students understand and retain
(Wallace & Truelove, 2006), instructor use of Behavior Alteration Techniques
(Richmond, McCroskey, Kearney, & Plax, 1987), and organizational cues and
note-taking during lectures (Titsworth, 2001).
By shaping students’ expectations of course experience, computer-mediated
communication on websites like RMP should influence the extent to which students
exhibit cognitive learning of course material. Therefore, we pose the following
hypothesis:
H1: Students who receive positive computer-mediated WOM (RMP ratings) about a course
will demonstrate greater levels of cognitive learning than will students who receive no
computer-mediated WOM or negative computer-mediated WOM.
Behavioral learning
Bloom (1956) argued that behavioral learning is evidenced by changes in a person’s
behavior which result from being provided with alternative information. According to
Bandura (1969), students are more likely to perform and enact new behaviors if they
believe the new behaviors are pertinent and beneficial to their lives. Numerous studies
have demonstrated positive correlations among instructor nonverbal immediacy and
student attitudes toward and intent to engage in behaviors proposed in the classroom
(e.g., Christensen & Menzel, 1998; Christophel, 1990; Comstock et al., 1995; Plax,
Kearney, McCroskey & Richmond, 1986; Richmond, Gorham, & McCroskey, 1987;
Richmond et al., 1987). However, beyond the well-established link between behavioral
learning and instructor immediacy, little is known about its causes and correlates.
By creating expectations of educational experience, computer-mediated WOM on
websites like RMP likely influences students’ perceived levels of behavioral learning.
Therefore, we pose the following hypothesis:
H2: Students who receive positive computer-mediated WOM (RMP ratings) about a course
will report greater perceived levels of behavioral learning than will students who receive no
computer-mediated WOM or negative computer-mediated WOM.
Expectations-Affect-Behavior Hypothesis
Although a plethora of research has established the existence of expectancy effects,
relatively less has been carried out to determine when (and how) expectancy effects
occur and when (and why) they do not. Specifically, there is little understanding of the
mechanisms by which expectations lead to perceptual or behavioral outcomes. The
studies that have been conducted have focused on the ways in which expectations,
once formed, are transmitted to their targets (i.e., the ways in which teachers may
directly and indirectly communicate raised expectations to their students; Harris &
Rosenthal, 1985) as a means of explaining how expectations are fulfilled. What is
lacking, however, is an explanation of how an expectation for one’s self (e.g., an
expectation for learning) leads to the realization of the expected outcome (e.g., actual
learning). Although researchers rarely explicate a causal mechanism to explain the
process, their accounts often imply that expectations are accompanied by a change
in affect, which, in turn, leads to a change in behavior such that an expectancy
effect materializes. Brewer and Crano (1994) term this proposed chain of events the
expectations-affect-behavior hypothesis.
Previous research examining positive affect has demonstrated that positive feeling
states improve task performance ability in a number of ways (Erez & Isen, 2002).
For instance, positive affect (both natural and induced) is associated with effective
and flexible thinking, decision making, and problem solving (e.g., Estrada, Isen, &
Young, 1997; Isen, 1999; Taylor & Aspinwall, 1996; Weiss, Nicholas, & Daus, 1999).
Scholars of instructional communication have posited an important role for affect in
the educational environment. Specifically, the term affective learning has been used
to refer to ‘‘an increasing internalization of positive attitudes toward the content
or subject matter’’ (Kearney, 1994, p. 81). Rodriquez, Plax and Kearney (1996)
demonstrated that affective learning serves as a precursor to cognitive learning and
is positively related with student state motivation to learn (see also, Christensen &
Menzel, 1998; Christophel, 1990; Frymier & Houser, 2000; McCroskey, Richmond,
& Bennett, 2006).
Additionally, affective learning is positively related to instructor communication
behaviors, including immediacy (Mottet, Parker-Raley, Beebe, & Cunningham, 2007;
Pogue & AhYun, 2006; Titsworth, 2001; Witt & Schrodt, 2006), clarity (Chesebro
& McCroskey, 2001), use of instructional technology (Turman & Schrodt, 2005;
Witt & Schrodt, 2006), and humor (Gorham & Christophel, 1990). Myers (2002)
demonstrated an inverse relationship between affective learning and instructor verbal
aggressiveness. Moreover, affective learning is associated with student behavior.
Students who report greater levels of affective learning tend to give higher instructor
evaluations (Teven & McCroskey, 1997), demonstrate greater willingness to comply
with instructors (Burroughs, 2007), and are more likely to enroll in another class
with the same instructor (Gorham & Christophel, 1990; McCroskey et al., 1996).
Edwards et al. (2007) experimentally demonstrated that students who received
positive computer-mediated WOM about a fictitious instructor reported higher levels
of affective learning than did students who received negative WOM or no WOM.
This research supports the notion that manipulated expectations are accompanied
by a change in affect. H1 and H2 predict that positive expectations will lead to an
increase in cognitive and behavioral learning. In order for the expectations-affect-behavior hypothesis to receive support, affect toward learning should mediate the
relationship between manipulated expectations and the cognitive and behavioral
learning outcomes.
H3a: Student affect toward learning will mediate the relationship between expectations
generated through computer-mediated WOM (positive versus negative or no RMP ratings)
and cognitive learning.
H3b: Student affect toward learning will mediate the relationship between expectations
generated through computer-mediated WOM (positive versus negative or no RMP ratings)
and behavioral learning.
Method
Participants
The convenience sample was composed of 135 undergraduate students enrolled at a
large university in the Midwestern U.S. Participants included 90 females (66.70%)
and 45 males (33.3%).1 The majority self-identified as White/Caucasian (85.9%,
n = 116). Participants’ ages ranged from 18 to 40 years, with a mean of 20.67 (SD
= 2.83). The largest percentage of participants were classified as sophomores (37.0%,
n = 50), followed by juniors (32.60%, n = 44), seniors (17.0%, n = 23), first-years
(11.90%, n = 16), and ‘‘others’’ (1.50%, n = 2). Participants received extra credit
points in return for taking part in the study.
Procedures
Upon securing institutional review board approval, an experimental design consisting
of two treatment groups (positive and negative RMP ratings) and a control group
(no RMP ratings) was utilized (Kerlinger & Lee, 2000). According to Snyder
and Stukas (1999), experimental methods are well-suited to precise control of
perceiver expectations, which is essential for attributing observed effects to those expectations. Participants enrolled in six introductory communication
courses were randomly assigned to one of two treatments: positive RMP or negative
RMP. Participants enrolled in three additional introductory communication courses
comprised the control group. Data collection occurred during regularly scheduled
class sessions near the midpoint of the academic semester. After participants provided
informed consent, all were informed that they would be watching a 10-minute video
clip of an instructor delivering a lecture on the topic of nutrition and trans fats.
Participants were further informed that at the conclusion of the presentation, they
would be asked to answer questions based on the content of the video. Next, the
students receiving the positive or negative RMP treatments were given an RMP
handout that corresponded with the valence of their specific group treatment. The
handout was described as a printout from RMP about the instructor and course
appearing in the video. The students were encouraged to read the handouts prior to
viewing the video and were given several minutes to do so. The control group did not
receive a RMP handout. At the completion of the video, all participants received a
questionnaire comprised of measures of cognitive, behavioral, and affective learning,
as well as a brief demographic survey. Following collection of the surveys, participants
were debriefed and thanked.
Independent Variable
To create the two treatments, two handouts (each one page in length) were produced
to simulate printouts of RMP ratings. The handouts were created by using HTML
code to manipulate the text and other content of an actual RMP rating page. A
fictitious instructor name and a fictitious academic affiliation (intended to be a ‘‘peer
institution’’ to the one in which participants were enrolled) were created and listed
at the top of the RMP handout. The middle of the handout included an icon of a face
displaying an expression (either a smile or a frown) and corresponding fabricated
quantitative summaries of fictitious student-raters’ evaluations on the dimensions
of easiness, helpfulness, clarity, and overall quality (the average of helpfulness and
clarity scores). For all dimensions, scores were based on ratings ranging from 1.0
(minimum) to 5.0 (maximum). The bottom portion of the handout included five
fabricated, open-ended comments attributed to five fictitious students regarding
the course and instructor. These comments were modeled from student comments
appearing on actual RMP results pages. The dates attached to each of the fabricated
comments spanned the two academic semesters prior to the one in which the study
was conducted.
In terms of the positive RMP handout, the average easiness rating was listed
at 3.2 out of 5.0, with 1.0 representing difficult and 5.0 representing easy.2 The
average helpfulness rating was listed as 5.0 and the average clarity rating was listed
as 4.8. These numbers were averaged to provide an overall quality rating of 4.9 out
of 5.0, with 5.0 representing the highest possible overall quality rating. Moreover,
the positive RMP stimulus handout included a ‘‘smiley face’’ to visually indicate
the high overall quality of the instructor. Five simulated student-generated positive
comments about the instructor/class were provided. Each was designed to produce a
high expectation for learning in the course. For example:
• You’ll learn a lot about healthy eating from this class. He gives great tips. I’ve
actually been following a lot of his suggestions about how to eat.
• I learned so much from this class. I still remember everything we covered. You
can imagine how easy it was to pass the tests!
For the negative RMP handout, the average easiness rating was again listed at 3.2
out of 5.0. The average helpfulness rating was listed as 2.0 and the average clarity
rating was listed as 1.4, providing an overall quality rating of 1.7.3 The negative RMP
handout included a ‘‘frowny face’’ to visually indicate the low overall quality of the
instructor. For this condition, the five simulated student-generated comments about
the instructor/class were negative. Correspondingly, each was designed to produce a
low expectation for learning in the course. For example:
• You’ll learn nothing about healthy eating from this class. His eating tips are
worthless, which is why I don’t follow any of his suggestions.
• It was impossible to learn anything in this class. I can’t remember a single thing
we covered. You can imagine how hard it was to pass the tests!
In order to ensure that the difference between the two sets of comments was
limited to valence, the negative RMP comments were produced by reversing the
sentiments expressed in each of the five comments used on the positive RMP
evaluation. In a small group setting, 10 undergraduate students familiar with RMP
were asked to evaluate all comments for their realism. These students suggested
minor changes in the wording of some comments, which were incorporated in the
RMP handouts prior to their use in the experiment.
Video Stimulus
A member of the research team who was unknown to the student participants
was videotaped performing a lecture on the topic of nutrition and trans fats. He
was instructed to deliver a teaching performance of ‘‘average quality.’’4 In order to
ensure that participants believed that the instructor and course were affiliated with
a university other than their own, videotaping of the 10-minute lecture took place
in a classroom at an area college. The upper body of the presenter standing behind
a podium, a basic PowerPoint outline of the lecture, and a portion of a chalkboard
were visible in the video frame.
Dependent Variables
Cognitive learning
Cognitive learning was assessed with a 20-item questionnaire assessing student
factual recall of videotaped lecture content. Frymier and Houser (1999) noted that
‘‘[i]n experimental research, cognitive learning can be adequately measured using
an objective examination over content presented in the experiment’’ (p. 2). Recall
of lecture material was tested by use of a true/false-formatted quiz designed to
simulate those frequently employed in introductory level classes to gauge student
understanding of recently presented lecture or text material. Quiz questions were
developed by the research team and pertained to information that was presented
in the 10-minute nutrition and trans fat lecture (e.g., ‘‘Trans fats are created by
hydrogenising animal fats,’’ ‘‘Tub margarine is a recommended alternative for stick
margarine,’’ and ‘‘Trans-fats can reduce the amount of refrigeration needed to keep
foods safe’’). Incorrect quiz answers were assigned a value of 0 and correct answers
were assigned a value of 1. An overall cognitive recall score for each participant
was computed by summing the resultant values, such that a score of 0 indicated
no correct answers and a score of 20 indicated all correct answers. The mean for
this measure across all conditions was 12.07 (SD = 2.72), representing an average
‘‘quiz grade’’ of approximately 60%. As a pilot test of the measure, 17 upper-level
undergraduate students were shown the videotaped lecture and asked to complete
the quiz. They were further asked to provide comments on the clarity, difficulty,
and appropriateness of the quiz questions and to offer suggestions for improvement.
Their insights were incorporated into the quiz prior to its use in the experiment.
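A minimal sketch of the scoring rule just described (Python; illustrative only, since the actual 20-item key is not reproduced in the article):

```python
def score_quiz(responses, answer_key):
    """Cognitive recall score: one point per correct true/false answer (range 0-20)."""
    return sum(int(given == correct) for given, correct in zip(responses, answer_key))

# Example with a hypothetical 5-item excerpt of a key:
key = [True, True, False, True, False]
print(score_quiz([True, False, False, True, True], key))  # 3 of 5 correct

# The reported sample mean of 12.07 out of 20 items is roughly a 60% quiz grade:
print(12.07 / 20)  # 0.6035
```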
Behavioral learning
Behavioral learning was assessed with an 8-item measure (Andersen, 1979). The
first four items measure students’ attitudes toward the recommended behaviors
along 7-point semantic differential scales (e.g., ‘‘The behaviors recommended in this
presentation are: worthless/valuable’’). The other four items use the same test format
to measure students’ intentions to engage in the recommended behaviors (e.g.,
‘‘My likelihood of actually attempting to engage in the behaviors recommended in
this presentation is: likely/unlikely''). Past studies have reported reliability coefficients exceeding .90 (Sanders & Wiseman, 1990). In this study, a reliability coefficient of .89 (M = 39.79; SD = 9.43) was obtained.
Affect
Affect was assessed with the 4-item ‘‘affect toward course content’’ subscale of
McCroskey’s (1994) measure of affective learning. Each item requests students to
indicate their affect for course subject matter along a 7-point semantic differential
scale (e.g., ‘‘I feel that the class content is: good/bad’’). Past studies have reported
reliability coefficients exceeding .90 (McCroskey, 1994). In the present study, the internal reliability for affect toward course content was .82 (M = 19.23, SD = 5.02).
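The reliability coefficients reported for these multi-item scales are internal-consistency estimates; the article does not name the estimator, but Cronbach's alpha is the convention in this literature, and a minimal sketch of that computation (assuming an n-respondents × k-items matrix of item scores; synthetic data, not the study's) is:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) array of item scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Example: four intercorrelated 7-point items answered by 200 synthetic respondents.
rng = np.random.default_rng(0)
true_score = rng.integers(1, 8, size=200)
items = np.clip(true_score[:, None] + rng.normal(0, 1, size=(200, 4)), 1, 7)
print(round(cronbach_alpha(items), 2))  # high alpha, roughly .9
```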
Results
In order to address H1 and H2, a one-way K-group multivariate analysis of variance
(MANOVA) was conducted to determine the effects of learning expectations induced
through computer-mediated WOM (positive, negative, or no RMP ratings) on
the dependent variables of cognitive learning (recall) and behavioral learning.
A MANOVA was chosen because previous research has demonstrated that the
dependent variables are associated. In the present study, cognitive and behavioral
learning were moderately positively related, r(128) = .263, p < .01. Results of Box’s
M indicated that the assumption of equality of covariance matrices was tenable,
M = 10.76, F(6, 379456.004) = 1.75, p = .105.
Significant differences were found among the positive, negative, and control
computer-mediated WOM conditions on the dependent measures, Wilks's Λ = .775,
F(4, 252) = 8.544, p < .001. The multivariate η2 based on Wilks’s lambda was
moderate, .12. Table 1 reports the means and standard deviations on the dependent
variables for the three groups.
Analyses of variance (ANOVAs) on each dependent variable were conducted as
follow-up tests to the MANOVA. To test the ANOVA assumption of equality of
error variances, Levine’s test was performed on both dependent variables. Results
indicated no violation of the assumption for cognitive learning [F(2, 127) = .93,
p > .05] or behavioral learning [F(2, 127) = 3.22, p > .05]. To control for Type I
error, the Bonferroni method (.05/2) was utilized and the ANOVAs were tested at
the .025 level. The ANOVAs were significant for both cognitive learning [F(2, 127)
= 7.75, p = .001, η2 = .10] and behavioral learning [F(2, 127) = 13.02, p < .001,
η2 = .17].
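For readers who wish to retrace this analysis sequence, the following is a minimal sketch in Python (statsmodels and SciPy) run on synthetic data whose group means and standard deviations mirror Table 1; it is not the study's raw data, and the authors' original software is not specified in the article:

```python
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.multivariate.manova import MANOVA

# Synthetic stand-in data: one row per participant, three conditions,
# two correlated learning outcomes (group means/SDs mirror Table 1).
rng = np.random.default_rng(42)
n = 45
df = pd.DataFrame({
    'condition': np.repeat(['positive', 'negative', 'control'], n),
    'cognitive': np.concatenate([rng.normal(13.37, 2.31, n),
                                 rng.normal(11.37, 2.44, n),
                                 rng.normal(11.52, 2.93, n)]),
    'behavioral': np.concatenate([rng.normal(45.09, 7.47, n),
                                  rng.normal(35.56, 10.63, n),
                                  rng.normal(39.02, 7.52, n)]),
})

# Omnibus one-way MANOVA on the two dependent variables (reports Wilks's lambda):
print(MANOVA.from_formula('cognitive + behavioral ~ condition', data=df).mv_test())

# Univariate follow-up ANOVAs, each evaluated at the Bonferroni-adjusted
# level of .05 / 2 = .025:
for dv in ('cognitive', 'behavioral'):
    groups = [g[dv].to_numpy() for _, g in df.groupby('condition')]
    f, p = stats.f_oneway(*groups)
    print(f'{dv}: F = {f:.2f}, p = {p:.4f}, significant at .025: {p < .025}')
```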
Posthoc analyses to the ANOVAs consisted of pairwise comparisons using Tukey’s
Honestly Significant Difference (HSD). Results demonstrated that the positive
treatment group scored significantly higher in cognitive learning and behavioral
learning when compared to the negative treatment group and the control group.
As a multivariate follow-up to the MANOVA, a discriminant analysis was
conducted to determine whether the two learning outcomes could be used to
predict the condition to which student participants had been assigned (positive,
negative, or control). Wilks’s lambda was significant, = .78, χ 2 (4, N = 130) =
32.175, p < .001, indicating that overall, the predictors differentiated among the
three conditions. The residual Wilks’s lambda was not significant, = .99 χ 2 (1,
N = 130) = 1.072, p = .30. This test indicated that the predictors did not differentiate
significantly between the three conditions after partialling out the effects of the first
discriminant function. Therefore, we chose only to interpret the first discriminant
function.
The within-group correlations between the predictors and the discriminant
function, as well as the standardized weights are presented in Table 2. Based on these
coefficients, both cognitive learning and behavioral learning demonstrated strong
relationships with the discriminant function. On the basis of the results, we chose to
label the discriminant function ‘‘student learning.’’
Table 1 Means and Standard Deviations for the Three Conditions on the Dependent Variables

Variable               Positive M (SD)    Negative M (SD)    Control M (SD)
Cognitive Learning     13.37a (2.31)      11.37b (2.44)      11.52b (2.93)
Behavioral Learning    45.09a (7.47)      35.56b (10.63)     39.02b (7.52)
Note. Means in a row with differing subscripts are significantly different at p < .05 in the
Tukey HSD comparison.
The means on the discriminant function are consistent with this interpretation.
The positive RMP rating condition (M = .74) had the highest mean on student
learning, whereas the negative RMP rating condition (M = −.52) and control
condition (M = −.18) had lower means. When we tried to predict condition, we
were able to correctly classify 45.4% of the individuals in this sample. To take into
account chance agreement, a kappa coefficient, which may range from -1 to +1, was
computed. The obtained value of .18 represents a moderate value. Finally, to assess
how well the classification procedure would predict in a new sample, we estimated
the percentage of people accurately classified using the leave-one-out technique (in
which all cases are left out once and classified based on classification functions
for the N-1 cases; Green & Salkind, 2005) and correctly classified 42.3% of the
cases.
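A sketch of this classification check (Python, scikit-learn), continuing with the synthetic df from the MANOVA sketch above; again, the authors' actual software and procedure details are not specified, so this is an approximation:

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import LeaveOneOut, cross_val_predict

X = df[['cognitive', 'behavioral']].to_numpy()
y = df['condition'].to_numpy()

# Fit the discriminant functions, classify the same sample (in-sample hit rate),
# and correct that hit rate for chance agreement with Cohen's kappa.
lda = LinearDiscriminantAnalysis().fit(X, y)
predicted = lda.predict(X)
print('in-sample hit rate:', round((predicted == y).mean(), 3))
print('kappa (chance-corrected):', round(cohen_kappa_score(y, predicted), 3))

# Leave-one-out classification: each case is classified from discriminant
# functions estimated on the remaining N - 1 cases.
loo_predicted = cross_val_predict(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())
print('leave-one-out hit rate:', round((loo_predicted == y).mean(), 3))
```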
H3a and H3b predicted that affect would mediate the relationship between
learning expectations induced through computer-mediated WOM (positive versus
negative or no RMP ratings) and cognitive and behavioral learning. Following
procedures detailed by Baron and Kenny (1986) and Holmbeck (2002), mediation
was tested through regression analyses by examining the following for significance:
(1) the association between the predictor and outcome, (2) the association between
the predictor and mediator, and (3) the association between the mediator and
outcome, after controlling for the effect of the predictor. Upon meeting the above
conditions, the predictor → outcome effect was examined to determine whether it
significantly decreased after controlling for the mediator. An effect reduced to zero
indicates full mediation, whereas a remaining effect that is significantly reduced
indicates partial mediation.
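A compact sketch of these steps, together with the Sobel test used in the analyses reported below (Python, statsmodels; placeholder data, with x, m, and y standing in for the coded expectancy condition, affect, and a learning outcome):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder data (not the study's): x = coded condition, m = mediator, y = outcome.
rng = np.random.default_rng(7)
x = rng.choice([2.0, -1.0], size=130)
m = 0.5 * x + rng.normal(size=130)
y = 0.3 * x + 0.4 * m + rng.normal(size=130)
data = pd.DataFrame({'x': x, 'm': m, 'y': y})

step1 = smf.ols('y ~ x', data=data).fit()      # (1) predictor -> outcome
step2 = smf.ols('m ~ x', data=data).fit()      # (2) predictor -> mediator
step3 = smf.ols('y ~ x + m', data=data).fit()  # (3) mediator -> outcome, controlling for x

# If all three paths are significant, compare the x coefficient in steps 1 and 3;
# the Sobel z tests whether the drop (the indirect a*b path) is reliable.
a, se_a = step2.params['x'], step2.bse['x']
b, se_b = step3.params['m'], step3.bse['m']
sobel_z = (a * b) / np.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
print(round(step1.params['x'], 2), round(step3.params['x'], 2), round(sobel_z, 2))
```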
To make the categorical condition variable (positive, negative, control) amenable
to regression analysis, we employed orthogonal coding, which allowed for planned
contrasts specified in H3a and H3b. Contrast codes in multiple regression are
appropriate for directly testing a set of a priori hypotheses (Cohen & Cohen, 1983;
Wendorf, 2004). Two (K − 1) vectors were created (Pedhazur, 1997; Serlin & Levin,
1985). Vector 1 compared the mean of the positive treatment group to the means of
both the negative and control groups. Vector 2 compared the means of the control
group to the negative treatment group, while ignoring the positive treatment group.
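The two planned-contrast vectors just described can be coded as in the brief sketch below (continuing with the synthetic df from the MANOVA sketch; the article does not report the exact numeric codes, so these values are a standard orthogonal choice, not necessarily the authors'):

```python
contrast_codes = {
    #            vector 1  vector 2
    'positive': ( 2,        0),   # vector 1: positive vs. mean of (negative, control)
    'negative': (-1,       -1),   # vector 2: control vs. negative, positive ignored
    'control':  (-1,        1),
}
df['v1'] = df['condition'].map(lambda c: contrast_codes[c][0])
df['v2'] = df['condition'].map(lambda c: contrast_codes[c][1])

# With equal group sizes the two vectors are uncorrelated (orthogonal), so each
# regression coefficient tests its planned comparison independently of the other.
print(df[['v1', 'v2']].corr())
```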
The first model tested the relation between expectation conditions, affect, and
cognitive learning (see Fig. 1). A multiple regression treating the two vectors as
predictor variables and cognitive learning as the outcome variable was significant,
Table 2 Standardized Coefficients and Correlations of Predictor Variables With the Student Learning Discriminant Function

Predictors             Correlation with discriminant function    Standardized coefficient
Cognitive Learning     .65                                        .53
Behavioral Learning    .85                                        .77
[Figure 1. Model illustrating the mediating relation of affect between expectations and cognitive learning. Paths: Expectations → Affect, −.35**; Affect → Cognitive Learning, .19*; Expectations → Cognitive Learning, −.35**. * = p < .05; ** = p < .001]
R = .349, R2 = .12, F(2, 132) = 9.138, p < .001. Consistent with the results of
the MANOVA, the planned contrast between the control group and negative
treatment group (vector 2) was not significant for cognitive learning, β = −.030,
t(132) = −.366, p = .715. However, the contrast between the control group and
negative treatment group was significant for affect toward learning, β = −.20,
t(128) = −2.497, p = .014. Because of the failure to meet the first condition of
mediation testing, subsequent analysis was confined to the contrast between the
means of the positive treatment group and the means of both the negative and
control groups. The model suggested a significant relation between expectations and
cognitive learning, β = −.35, t(132) = −4.259, p < .001, and between expectations
and affect, β = −.35, t(128) = −4.280, p < .001. The model also suggested a
significant relation between affect and cognitive learning, after controlling for
expectations, β = .19, t(127) = 8.385, p < .001. Results from the Sobel test indicated
that the impact of expectations significantly decreases when affect is considered a
mediator variable (Sobel test statistic = 2.627, p < .01). Hence, affect was a significant
partial mediator of the relationship between learning expectations induced through
computer-mediated WOM (positive versus negative or no RMP ratings) and cognitive
learning.
To test H3b, the above procedure was replicated using behavioral learning as
the outcome variable (see Fig. 2). The multiple regression was significant, R = .412,
R2 = .17, F(2, 127) = 13.017, p < .001. As expected, the planned contrast between
the control group and negative treatment group (vector 2) was not significant
for behavioral learning, β = −.15, t(127) = −1.886, p = .062, thus analysis again
focused on vector 1 (contrasting the positive treatment with both the negative and
control groups). The model suggested a significant relation between expectations and
behavioral learning, β = −.39, t(127) = −4.776, p < .001, and between expectations
and affect, β = −.35, t(128) = −4.280, p < .001. The model also suggested a
significant relation between affect and behavioral learning, after controlling for
[Figure 2. Model illustrating the mediating relation of affect between expectations and behavioral learning. Paths: Expectations → Affect, −.35**; Affect → Behavioral Learning, .60**; Expectations → Behavioral Learning, −.39**. ** = p < .001]
expectations, β = .60, t(125) = 2.094, p < .05. Results from the Sobel test indicated
that the impact of expectations significantly decreases when affect is considered a
mediator variable (Sobel test statistic = 3.937, p < .001). Thus, H3b also received
support. Affect was a significant partial mediator of the relationship between learning
expectations induced through computer-mediated WOM (positive versus negative
or no RMP ratings) and behavioral learning.
Discussion
The current study sought to experimentally test the effects of expectations induced
through computer-mediated WOM (RMP) on students’ cognitive and behavioral
learning. Results supported hypotheses 1 and 2, demonstrating that students who
received positive computer-mediated WOM performed better on a cognitive recall
task and reported higher levels of behavioral learning than did students who received
negative or no computer-mediated WOM. Furthermore, hypotheses 3a and 3b
received support, with affect acting as a partial mediator of the relationships between
expectations generated through computer-mediated WOM (positive versus negative
or no RMP ratings) and cognitive and behavioral learning outcomes. Thus, the
findings establish the existence of an expectancy effect based on computer-mediated
WOM and support the expectations-affect-behavior hypothesis of expectancy effects.
These findings are consistent with Harrison-Walker’s (2001) claim that
communication about products or services affects customers’ evaluation of their
experiences. The results also further evidence the ‘‘postpurchase’’ effects observed
in WOM research, as they show that WOM received prior to an experience can shape expectations that alter perceptions of subsequent experience quality (Burzynski & Bayer, 1977; Sheth, 1971; Wangenheim & Bayón, 2004).
Current findings are also consistent with previous research demonstrating that
computer-mediated WOM influences students’ levels of state motivation to learn
and affective learning (Edwards et al., 2007). Finally, these results extend the literature
on determinants and outcomes of student expectations of learning by demonstrating
that online comments attributable to peers are consequential for students’ learning
achievements and perceptions.
Significantly, type of computer-mediated WOM (RMP ratings) accounted for
10% of the variance in cognitive learning. On average, students exposed to positive
RMP ratings prior to viewing the presentation scored approximately 10 percentage
points higher on the cognitive recall task than did those exposed to negative or no
RMP ratings, a difference which translates to a letter grade advantage. Seventeen
percent of the variance in behavioral learning was due to type of computer-mediated
WOM (RMP ratings). Importantly, a sizeable portion of the variance in the overall set
of student learning variables (12%) was due to factors wholly outside an instructor’s
realm of control (i.e., expectations generated through computer-mediated student
interaction).
Research on WOM has yielded contradictory results in terms of the relative
strength of positive versus negative appraisals (see, e.g., Ahluwalia, 2002; Fiske, 1980;
Holmes & Lett, 1977; Mizerski, 1982). The expectancy literature has focused almost
exclusively on the effects of positive expectations, owing mainly to the questionable
ethics associated with manipulating expectations of self or others in a negative
direction. Taken in conjunction with the results of Edwards et al.’s (2007) study, the
current findings suggest that in the context of education, positive computer-mediated
WOM is more influential than negative computer-mediated WOM on student
learning outcomes. Negative RMP comments produced no effect on cognitive or
behavioral learning. These results should be heartening to educators who may worry
about the damaging effects of the online circulation of negative student appraisals
of their courses. Simultaneously, the findings point to the advantages of positive
computer-mediated WOM for both instructors and students.
The current study also sheds light on how expectations lead to their effects.
The relationship between expectations (positive versus negative or none) and each
learning outcome was partially mediated by affect toward learning, thereby lending
some support to the frequently implied assumption that expectations work by
producing a change in affect, which then leads to a change in behavior. But, two
points need to be made. The first is that the mediational function of affect was
partial. Thus, there may also be a direct causal relationship between expectations and
learning outcomes, and/or additional mediating variables that were not accounted
for in this experiment. The second point is that affect did not serve as a mediator
variable in the contrast between the negative treatment and control groups, which
did not differ in terms of cognitive and behavioral learning. The negative group
reported significantly less affect than the control group, but the lowered affect did
not correspond to lower learning scores. Therefore, the expectations-affect-behavior
hypothesis received support solely in the context of positive expectations. Additional
research is needed to account for the reasons that altered affect did not impact
learning in the context of negative expectations.
In general, the results of this study demonstrate that positive computer-mediated
WOM can generate expectancy effects. These findings may be broadly applicable to
the accumulating mass of online rating systems, including those devoted to reviews
of physicians, lawyers, hotels, restaurants, and a whole range of consumer goods and
services. Furthermore, the expectations-affect-behavior hypothesis may help partially
explain the ways in which all such sites exert their influence on recipients’ subsequent
behavior.
However, the findings should be interpreted in light of several limitations.
First, the measurement of cognitive learning is a formidable task. The 20-question
instrument used in this experiment tested only short-term recall of information.
Long-term knowledge retention and deeper analytical skills (e.g., application) were
not assessed. Similarly, the measure of behavioral learning was limited to attitudinal
dimensions (self-reported affect toward recommended behaviors and intent to engage
in recommended behaviors). Thus, changes in actual behavior were not addressed.
Future research could remedy these shortcomings by employing a delayed posttest of
cognitive learning and observing students’ responses to a behavioral choice offered
within the experiment (cf., Comstock et al., 1995).
Second, experimental investigations often involve sacrificing a degree of realism in
order to isolate effects to an independent variable. In the present study, students were
exposed only to a brief videotaped lecture. In their actual college careers, students
have an entire semester to form and modify impressions of an instructor and to adapt
their own behavior accordingly to achieve learning objectives. Moreover, the stakes
for performing well on academic tasks are considerably higher in students’ actual
college courses than they were in the present study. Future research could examine
associations among computer-mediated WOM and its effects in more naturalistic
conditions.
Finally, continued research on the topic of computer-mediated WOM is necessary
to provide a more complete understanding of its range of effects and the mechanisms
by which they occur. Explorations of students’ processes for making sense of
information posted on sites such as RMP represent one avenue. For instance, in
navigating online professor rating systems to choose courses or form expectations,
students must regularly encounter seemingly contradictory opinions about an
instructor or course (the presence of mixed reviews). The process by which students
utilize some comments to form expectations and dismiss others may shed light on
computer-mediated message features important to perceived credibility and degree
of influence. Such a study is currently underway.
Notes
1 Since there were considerably more female than male participants, analyses were
conducted to determine whether there were gender effects in the data. Results from
a series of 2 × 3 ANOVAs demonstrated no significant gender-by-condition effect
for any dependent variable.
2 In the U.S., the average easiness rating of social science professors is 3.2 (Felton et al.,
2005). On the RMP website, easiness is not factored into the overall quality rating
assigned to professors; thus, we held constant across conditions the rating of 3.2.
3 The overall quality ratings of 4.9 and 1.7 used for the positive and negative RMP rating pages, respectively, are approximately equidistant from the average overall quality
rating of social science professors in the U.S., which is several tenths higher than 3.0
(Felton et al., 2005).
4 The authors scripted a lecture on the topic of nutrition and trans fats to be
presented by the fourth author. The first and second authors coached the fourth
author to incorporate influential instructor behaviors at a moderate level.
Additionally, the video stimulus was subjected to a manipulation check employing
26 advanced undergraduate students asked to rate the instructor and the lecture as
‘‘above average,’’ ‘‘average,’’ or ‘‘below average.’’ Almost all rated the instructor and
lecture as average.
References
About Us. (n.d.). RateMyProfessors.com homepage. Retrieved October 27, 2008, from
http://www.ratemyprofessors.com.
Acquisition Establishes mtvU as Number Two General Interest Online Destination for
College Students. (2007). PRNewswire. Retrieved November 26, 2007, from
http://www.prnewswire.com/cgi-bin/stories.pl?ACCT=104&STORY=/www/story/01-172007/0004507644&EDATE.
Ahluwalia, R. (2002). How prevalent is the negativity effect in consumer environments?
Journal of Consumer Research, 29, 270–279.
Andersen, J. F. (1979). Teacher immediacy as a predictor of teaching effectiveness. In D. Nimmo (Ed.), Communication yearbook 3 (pp. 543–559). New Brunswick, NJ:
Transaction Books.
Arndt, J. (1967). Role of product-related conversations in the diffusion of a new product.
Journal of Marketing Research, 4, 291–295.
Bandura, A. (1969). Principles of behavior modification. New York: Holt, Rinehart & Winston.
Baron, R. M., & Kenny, D. A. (1986). The moderator-mediator variable distinction in social
psychological research: Conceptual, strategic, and statistical considerations. Journal of
Personality and Social Psychology, 51, 1173–1182.
Beecher, H. K. (1966). Pain: One mystery solved. Science, 151, 840–841.
Bickart, B., & Schindler, R. M. (2001). Internet forums as influential sources of consumer
information. Journal of Interactive Marketing, 15, 31–40.
Bloom, B. S. (Ed.). (1956). A taxonomy of educational objectives: Handbook 1: The cognitive
domain. New York: Longmans Green.
Borgida, E., & Nisbett, R. E. (1977). The differential impact of abstract vs. concrete
information on decisions. Journal of Applied Social Psychology, 7, 258–271.
Braun, C. (1976). Teacher expectation: Sociopsychological dynamics. Review of Educational
Research, 46, 185–213.
Brewer, M. B., & Crano, W. D. (1994). Social psychology. Minneapolis, MN: West Publishing
Company.
Brophy, J. E. (1983). Research on the self-fulfilling prophecy and teacher expectations.
Journal of Educational Psychology, 75, 631–661.
Brophy, J. E. (1986). On motivating students. In D. Berliner & B. Rosenshine (Eds.), Talks to
teachers (pp. 201–245). New York: Random House.
Brown, T. J., Barry, T. E., Dacin, P. A., & Gunst, R. F. (2005). Spreading the word:
Investigating antecedents of consumers’ positive word-of-mouth intentions and
behaviors in a retailing context. Journal of the Academy of Marketing Science, 33, 123–138.
Burroughs, N. F. (2007). A reinvestigation of the relationship of teacher nonverbal
immediacy and student compliance-resistance with learning. Communication Education,
56, 453–475.
Burzynski, M. H., & Bayer, D. J. (1977). The effect of positive and negative prior information
on motion picture appreciation. Journal of Social Psychology, 101, 215–218.
Bytwerk, R. L. (2005). The argument for genocide in Nazi propaganda. Quarterly Journal of
Speech, 91, 37–62.
Carl, W. J. (2006). What’s all the buzz about? Everyday communication and the relational
basis of word-of-mouth and buzz marketing practices. Management Communication
Quarterly, 19, 601–634.
Chesebro, J. L., & McCroskey, J. C. (2001). The relationship of teacher clarity and immediacy
with student state receiver apprehension, affect, and cognitive learning. Communication
Education, 50, 59–68.
Cheung, M., Anitsal, M. M., & Anitsal, I. (2007). Revisiting word-of-mouth communications:
A cross-national exploration. Journal of Marketing Theory and Practice, 14, 235–279.
Christensen, L. J., & Menzel, K. E. (1998). The linear relationship between student reports of
teacher immediacy behaviors and perceptions of state motivation, and of cognitive,
affective, and behavioral learning. Communication Education, 47, 82–90.
Christophel, D. (1990). The relationships among teacher immediacy behaviors, student
motivation and learning. Communication Education, 39, 323–340.
Clow, K. E., Kurtz, D. L., Ozment, J., & Ong, B. S. (1997). The antecedents of consumer
expectations of services: An empirical study across four industries. The Journal of Services
Marketing, 11, 230–248.
Cohen, J., & Cohen, P. (1983). Applied multiple regression for the behavioral sciences (2nd ed.).
Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.
Coladarci, T., & Kornfield, I. (2007). RateMyProfessors.com versus formal in-class student
evaluations of teaching. Practical Assessment, Research & Evaluation, 12, 1–15.
Compton, J. A., & Pfau, M. (2004). Use of inoculation to foster resistance to credit card
marketing targeting college students. Journal of Applied Communication Research, 32,
343–364.
Comstock, J., Rowell, E., & Bowers, J. W. (1995). Food for thought: Teacher nonverbal
immediacy, student learning and curvilinearity. Communication Education, 44, 251–266.
Cooper, H. M., & Good, T. L. (1983). Pygmalion grows up: Studies in the expectation
communication process. New York: Longman.
Day, G. S. (1971). Attitude change, media, and word of mouth. Journal of Advertising
Research, 11, 31–40.
DeCarlo, T. W., Laczniak, R. N., Motley, C. M., & Ramaswami, S. (2007). Influence of image
and familiarity on consumer response to negative word-of-mouth communication about
retail entities. Journal of Marketing Theory and Practice, 15, 41–51.
Dellarocas, C. (2003). The digitization of word of mouth: Promise and challenges of online
feedback mechanisms. Management Science, 49, 1407–1424.
Dusek, J. B. (Ed.). (1985). Teacher expectancies. Hillsdale, NJ: Erlbaum.
Edwards, C., Edwards, A., Qing, Q., & Wahl, S. (2007). The influence of computer-mediated
word-of-mouth communication on student perceptions of instructor credibility and
attractiveness. Communication Education, 56, 255–277.
Ellis, K. (2000). Perceived teacher confirmation: The development and validation of an
instrument and two studies of the relationship to cognitive and affective learning. Human
Communication Research, 26, 264–291.
Ellis, K. (2004). The impact of perceived teacher confirmation on receiver apprehension,
motivation, and learning. Communication Education, 53, 1–20.
Estrada, C. A., Isen, A. M., & Young, M. J. (1997). Positive affect improves creative problem
solving and influences reported source of practice satisfaction in physicians. Motivation
and Emotion, 18, 285–299.
Erez, A., & Isen, A. M. (2002). The influence of positive affect on the components of
expectancy motivation. Journal of Applied Psychology, 87, 1055–1067.
Feldman, R. S., & Prohaska, T. (1979). The student as Pygmalion: Effect of student
expectation on the teacher. Journal of Educational Psychology, 71, 485–493.
Feldman, R. S., Saletsky, R. D., Sullivan, J., & Theiss, A. J. (1983). Student locus of control
and response to expectations about self and teacher. Journal of Educational Psychology, 75,
27–32.
Felton, J., Mitchell, J., & Stinson, M. (2004). Web-based student evaluations of professors:
The relations between perceived quality, easiness and sexiness. Assessment & Evaluation
in Higher Education, 29, 91–108.
Felton, J., Mitchell, J., & Stinson, M. (2005). Cultural differences in student evaluations of
professors. Academy of Business Education Conference Proceedings. Retrieved December 2,
2007, from http://www.abe.villanova.edu/proc2004/felton2.pdf.
Fiske, S. T. (1980). Attention and weight in person perception: The impact of negative and
extreme behavior. Journal of Personality and Social Psychology, 38, 889–906.
Fries, S., Horz, H., & Haimerl, C. (2006). Pygmalion in media-based learning: Effects of
quality expectancies on learning outcomes. Learning and Instruction, 16, 339–349.
Frymier, A. B., & Houser, M. L. (1999). The revised learning indicators scale. Communication
Studies, 50, 1–12.
Frymier, A. B., & Houser, M. L. (2000). The teacher-student relationship as an interpersonal
relationship. Communication Education, 49, 207–219.
Gilroy, M. (2003). Rate-a-prof systems: Here to stay; venting, or valuable consumer
feedback? The Hispanic Outlook in Higher Education, 13, 10–12.
Gorham, J., & Christophel, D. (1990). The relationship of teachers’ use of humor in the
classroom to immediacy and student learning. Communication Education, 39, 46–62.
Green, S. B., & Salkind, N. J. (2005). Using SPSS for Windows and Macintosh: Analyzing and
understanding data (4th ed.). Upper Saddle River, NJ: Prentice Hall.
Grönroos, C. (1990). Relationship approach to marketing in service contexts: The marketing
and organizational behavior interface. Journal of Business Research, 20, 3–11.
Harris, M. J., & Rosenthal, R. (1985). Mediation of interpersonal expectancy effects: 31
meta-analyses. Psychological Bulletin, 97, 363–386.
Harrison-Walker, J. L. (2001). The measurement of word-of-mouth communication and an
investigation of service quality and customer commitment as potential antecedents.
Journal of Service Research, 4, 60–75.
Herr, P. M., Kardes, F. R., & Kim, J. (1991). Effects of word-of-mouth and product-attribute
information on persuasion: An accessibility-diagnosticity perspective. Journal of
Consumer Research, 17, 454–462.
Holmbeck, G. N. (2002). Post-hoc probing of significant moderational and mediational
effects in studies of pediatric populations. Journal of Pediatric Psychology, 27, 87–96.
Holmes, J. H., & Lett, J. D., Jr. (1977). Product sampling and word of mouth. Journal of
Advertising Research, 17, 35–40.
Howard, D. J., & Gengler, C. (2001). Emotional contagion effects on product attitudes.
Journal of Consumer Research, 28, 189–201.
Isen, A. M. (1999). Positive affect. In T. Dalgleish & M. Power (Eds.), Handbook of cognition
and emotion (pp. 521–540). Sussex, England: Wiley.
Jamieson, D. W., Lydon, J. E., Stewart, G., & Zanna, M. P. (1987). Pygmalion revisited: New
evidence for student expectancy effect in the classroom. Journal of Educational Psychology,
79, 461–466.
Katz, E., & Lazarsfeld, P. F. (1955). Personal influence: The part played by people in the flow of
mass communications. Glencoe, IL: Free Press.
Kearney, P. (1994). Affective learning. In R. B. Rubin, P. Palmgreen, & H. E. Sypher (Eds.),
Communication research measures: A sourcebook (pp. 81–85). New York: Guilford
Publications.
Kerlinger, F., & Lee, H. B. (2000). Foundations of behavioral research. Fort Worth, TX:
Harcourt College.
Kindred, J., & Mohammed, S. N. (2005). He will crush you like an academic ninja: Exploring
teacher ratings on RateMyProfessors.com. Journal of Computer-Mediated Communication, 10.
Retrieved December 2, 2007, from http://jcmc.indiana.edu/vol10/issue3/kindred.html.
Kuh, G. D. (1999). Setting the bar high to promote student learning. In G. S. Blimling, E. J.
Whitt, & Associates (Eds.), Good practice in student affairs: Principles to foster student
learning (pp. 67–89). San Francisco: Jossey-Bass.
Leventhal, L., Perry, R. P., & Abrami, P. C. (1977). Effects of lecturer quality and student
perception of lecturer’s experience on teacher ratings and student achievement. Journal of
Educational Psychology, 69, 360–374.
Liu, Y. (2006). Word of mouth for movies: Its dynamics and impact on box office revenue.
Journal of Marketing, 70, 74–89.
Mangold, W. G., Miller, F., & Brockway, G. R. (1999). Word-of-mouth communication in
the service marketplace. Journal of Services Marketing, 13, 73–89.
McCroskey, J. C. (1994). Assessment of affect toward communication and affect toward
instruction in communication. In S. Morreale & M. Brooks (Eds.), 1994 SCA summer
conference proceedings and prepared remarks: Assessing college student competence in speech
communication. Annandale, VA: Speech Communication Association.
McCroskey, J. C., Richmond, V. P., & Bennett, V. E. (2006). The relationship of student
end-of-class motivation with teacher communication behaviors and instructional
outcomes. Communication Education, 55, 403–414.
McCroskey, J. C., Sallinen, A., Fayer, J. M., Richmond, V. P., & Barraclough, R. A. (1996).
Nonverbal immediacy and cognitive learning: A cross-cultural investigation.
Communication Education, 45, 200–211.
Mizerski, R. W. (1982). An attributional explanation of the disproportionate influence of
unfavorable information. Journal of Consumer Research, 9, 301–310.
Mottet, T. P., Parker-Raley, J., Beebe, S. A., & Cunningham, C. (2007). Instructors who resist
‘‘College Lite’’: The neutralizing effect of instructor immediacy on students’
course-workload violations and perceptions of instructor credibility and affective
learning. Communication Education, 56, 145–167.
Myers, S. A. (2002). Perceived aggressive instructor communication and student state
motivation, learning, and satisfaction. Communication Reports, 15, 113–121.
Otto, J., Sanford, D. A., & Ross, D. N. (2008). Does ratemyprofessor.com really rate my
professor? Assessment & Evaluation in Higher Education, 33, 355–368.
Pedhazur, E. J. (1997). Multiple regression in behavioral research: Explanation and prediction
(3rd ed.). Fort Worth, TX: Harcourt Brace.
Perry, R. P., Abrami, P. C., Leventhal, L., & Check, J. (1979). Instructor reputation: An
expectancy relationship involving student ratings and achievement. Journal of
Educational Psychology, 71, 776–787.
Phelps, J. E., Lewis, R., Mobilio, L., Perry, D., & Raman, N. (2004). Viral marketing or
electronic word-of-mouth advertising: Examining consumer responses and motivations
to pass along email. Journal of Advertising Research, 45, 333–348.
Plax, T. G., Kearney, P., McCroskey, J. C., & Richmond, V. P. (1986). Power in the classroom
VI: Verbal control strategies, nonverbal immediacy and affective learning.
Communication Education, 35, 43–55.
Pogue, L. L., & AhYun, K. (2006). The effect of teacher nonverbal immediacy and credibility
on student motivation and affective learning. Communication Education, 55, 331–344.
Raffini, J. (1993). Winners without losers: Structures and strategies for increasing student
motivation to learn. Needham Heights, MA: Allyn and Bacon. (ERIC Document
Reproduction Service No. 362952).
Richmond, V. P., Gorham, J., & McCroskey, J. C. (1987). The relationship between selected
immediacy behaviors and cognitive learning. In M. L. McLaughlin (Ed.), Communication
yearbook 10 (pp. 574–590). Newbury Park, CA: Sage.
Richmond, V. P., McCroskey, J. C., Kearney, P., & Plax, T. G. (1987). Power in the
classroom VII: Linking behavior alteration techniques to cognitive learning.
Communication Education, 36, 1–12.
Rodriguez, J., Plax, T. G., & Kearney, P. (1996). Clarifying the relationship between teacher
nonverbal immediacy and student cognitive learning: Affective learning as the central
causal mediator. Communication Education, 45, 293–305.
Rosenthal, R. (1978). Combining results of independent studies. Psychological Bulletin, 85,
185–193.
Rosenthal, R., & Fode, K. L. (1961). The problem of experimenter outcome-bias. In D. P. Ray
(Ed.), Series research in social psychology, symposia studies series, no. 8 (pp. 9–14).
Washington: National Institute of Social and Behavioral Science.
Rosenthal, R., & Jacobson, L. (1968). Pygmalion in the classroom. New York: Holt, Rinehart &
Winston.
Rosenthal, R., & Rubin, D. B. (1978). Interpersonal expectancy effects: The first 345 studies.
Behavioral and Brain Sciences, 1, 377–386.
Sanders, J., & Wiseman, R. (1990). The effects of verbal and nonverbal teacher immediacy on
perceived cognitive, affective, and behavioral learning in the multicultural classroom.
Communication Education, 39, 341–353.
Serlin, R. C., & Levin, J. R. (1985). Teaching how to derive directly interpretable coding
schemes for multiple regression analysis. Journal of Educational Statistics, 10, 223–238.
Sheth, J. N. (1971). Word-of-mouth in low-risk innovations. Journal of Advertising Research,
11, 15–18.
Shrum, L. J., & Bischak, V. D. (2001). Mainstreaming, resonance, and impersonal impact:
Testing moderators of the cultivation effect for estimates of crime risk. Human
Communication Research, 27, 187–215.
Silva, K. M., Silva, F. J., Quinn, M. A., Draper, J. N., Cover, K. R., & Munoff, A. A. (2008).
Rate my professor: Online evaluations of psychology instructors. Teaching of Psychology,
35, 71–80.
Singhal, A., Rogers, E., & Mahajan, M. (1999). The Gods are drinking milk! Asian Journal of
Communication, 9, 86–107.
Smith, A. E., Jussim, L., & Eccles, J. (1999). Do self-fulfilling prophecies accumulate, dissipate,
or remain stable over time? Journal of Personality and Social Psychology, 77, 548–565.
Snyder, M., & Stukas, A. A. (1999). Interpersonal processes: The interplay of
cognitive, motivational, and behavioral activities in social interaction. Annual Review of
Psychology, 50, 273–303.
Sultan, F., Farley, J. U., & Lehmann, D. R. (1990). A meta-analysis of applications of diffusion
models. Journal of Marketing Research, 27, 70–77.
Sun, T., Youn, S., Wu, G., & Kuntaraporn, M. (2006). Online word-of-mouth (or mouse):
An exploration of its antecedents and consequences. Journal of Computer-Mediated
Communication, 11. Retrieved December 2, 2007, from http://jcmc.indiana.edu/vol11/issue4/sun.html.
Taylor, S. E., & Aspinwall, L. G. (1996). Mediating and moderating processes in psychological
stress: Appraisal, coping, resistance, and vulnerability. In H. B. Kaplan (Ed.), Psychosocial
stress: Perspectives on structure, theory, life-course, and methods (pp. 71–110). San Diego,
CA: Academic Press.
Teven, J. J., & McCroskey, J. C. (1997). The relationship of perceived teacher caring with
student learning and teacher evaluation. Communication Education, 46, 1–9.
Titsworth, B. S. (2001). An experiment testing the effects of teacher immediacy, use of
organizational lecture cues, and students’ note taking on cognitive learning.
Communication Education, 50, 283–297.
Turman, P. D., & Schrodt, P. (2005). The influence of instructional technology use on
students’ affect: Do course designs and biological sex make a difference? Communication
Studies, 56, 109–129.
Wallace, B., & Truelove, J. (2006). Monitoring student cognitive-affective processing through
reflection to promote learning in high-anxiety contexts. Journal of Cognitive Affective
Learning, 3, 22–27.
Wangenheim, F. V., & Bayón, T. (2004). Satisfaction, loyalty and word of mouth within the
customer base of a utility provider: Differences between stayers, switchers and referral
switchers. Journal of Consumer Behaviour, 3, 211–220.
Wangenheim, F. V., & Bayón, T. (2007). The chain from customer satisfaction via
word-of-mouth referrals to new customer acquisition. Journal of the Academy of
Marketing Science, 35, 233–249.
Weiss, H. M., Nicholas, J. P., & Daus, C. S. (1999). An examination of the joint effects of
affective experiences and job beliefs on job satisfaction and variations in affective
experiences over time. Organizational Behavior and Human Decision Processes, 78, 1–24.
Wendorf, C. A. (2004). Primer on multiple regression coding: Common forms and the
additional case of repeated contrasts. Understanding Statistics, 3, 47–57.
Wilhelm, W. B., & Comegys, C. (2004). Course selection decisions by students on campuses
with and without published teaching evaluations. Practical Assessment, Research &
Evaluation, 9. Retrieved December 2, 2007, from http://PAREonline.net/getvn.asp?v=9&n=16.
Witt, P. L., & Schrodt, P. (2006). The influence of instructional technology use and teacher
immediacy on student affect for teacher and course. Communication Reports, 19, 1–15.
Zanna, M. P., Sheras, P. L., & Cooper, J. (1975). Pygmalion and Galatea: The interactive effect
of teacher and student expectancies. Journal of Experimental Social Psychology, 11,
279–287.
Zeithaml, V. A., Berry, L. L., & Parasuraman, A. (1993). The nature and determinants of
customer expectations of service. Journal of the Academy of Marketing Science, 21, 1–12.
Zimmerman, B. J. (2000). Self-efficacy: An essential motive to learn. Contemporary
Educational Psychology, 25, 82–91.
Zinkhan, G. M., Kwak, H., Morrison, M., & Peters, C. O. (2003). Web-based chatting:
Consumer communication in cyberspace. Journal of Consumer Psychology, 13, 17–27.
About the Authors
Autumn Edwards (Ph.D. Ohio University) is an assistant professor in the School
of Communication at Western Michigan University. Her research in instructional
communication explores the influences of peer interaction and student-teacher
relationships on educational experience.
Address: 1903 W. Michigan Ave., 300 Sprau Tower, Kalamazoo, MI 49008-5318, USA.
Chad Edwards (Ph.D. University of Kansas) is an assistant professor in the
School of Communication at Western Michigan University. His research focuses
on student/student and teacher/student relationships in the classroom.
Address: 1903 W. Michigan Ave., 300 Sprau Tower, Kalamazoo, MI 49008-5318, USA.
Carrie Shaver is a recent M.A. graduate of Western Michigan University’s School of
Communication. Her research focuses primarily on American popular culture of the
1950s and 1960s and its relationship to the queer community.
Address: Indiana University, 21st Century Scholars Program, Eigenmann Hall 612,
1900 East Tenth Street, Bloomington, IN 47406, USA.
Mark Oaks is an M.A. student in the School of Communication at Western Michigan
University. His research focuses on the study of leadership.
Address: 330 New Hampshire Drive, Portage, MI 49024, USA.