Grading classroom participation through peer assessment:
perception and experience of marketing students
Fidella Tiew
Curtin University, Miri, Malaysia
[email protected]
Goi Chai Lee
Curtin University, Miri, Malaysia
[email protected]
Abstract: This paper reports the marketing students’ perception and experience of grading
classroom participation through peer assessment at Curtin Sarawak. This assessment
strategy was introduced with a desire to improve class participation and increase student
involvement in assessments. At the end of three semesters, a questionnaire was used to
gather responses from a sample of 117 students about their opinions on the peer assessment
practice. Students agreed that the practice promotes a sense of ownership, engagement and
personal responsibility for the learning experience. At the same time, however, some students
experienced stress in the assessment process and found it difficult to evaluate their peers.
Overall, the study found that students do not reject the peer assessment strategy.
Keywords: classroom participation, peer assessment, marketing students’ perception and
experience
Introduction
Teaching and learning in higher education institutions have shifted from instructor-centred to
student-centred approaches (Barr & Tagg, 1995; Knowles, 1984). This change is believed to be
driven by external forces: the growth in consumption of higher education and the perceived need
to produce employable graduates. Students need to acquire a whole range of transferable soft
skills during their course of study that will help them in the world of employment (Cox &
King, 2006; Fallows & Steven, 2000; Schlee & Harich, 2010). Practice and literature suggest
that the traditional lecture format is being accompanied, and in some cases replaced, by
contemporary pedagogies that are more interactive and discussion-based (Spiller & Scovotti,
2008) and that move educational participants towards shared responsibility for learning outcomes
(Hawes, 2004). Instructors utilize a broad range of strategies to actively engage students in
the teaching-learning process. Of the recommended strategies, classroom discussion or
participation is perhaps the most frequently used and most often embraced “active
learning” strategy (Dallimore et al., 2006).
Classroom participation, which refers to students’ active engagement in discussions of the
course concepts, represents an important teaching method valued by many university courses,
including marketing courses. Nunn (1996) suggests there is a positive relationship between
participative and active learning. Students actively involved through class participation can
better elaborate on and engage with the course content (Ackerman, Gross, & Perner, 2003;
Brookfield & Preskill, 1999). Furthermore, in-class discussion enables idea sharing,
encourages different points of view, develops critical understanding and oral communication
skills, and provides social interaction through student-based learning (Brookfield & Preskill,
1999; Farranda & Clarke, 2004; Ponzurick, Russo France, & Logar, 2000). Many marketing
educators therefore include active participation as a key learning criterion and assessment
component, on the basis that it yields superior teaching outcomes and improves students’
learning experience. Yet these educators lack a structured approach to stimulating and
measuring class participation (Sautter, 2007).
Assessment of class participation can be problematic and complicated because of its
subjective nature (Sautter, 2007). Several evaluation tools have been published to assist
instructors in assessing class participation (Bean & Peterson, 1998; Chapnick, 2009; Craven
& Hogan, 2001; Maznevski 1996; Melvin, 1988). The use of published scales may assist in
the process, but assigning a class participation grade remains difficult to objectify. The
equivocal nature of evaluating class participation makes it an ideal area in which to share
evaluation with students. Multiple evaluators may increase the accuracy of class
participation grading. In addition, there is increasing interest in the educational benefits of
students assessing their own work (self-assessment) and that of other students (peer
assessment) (Ballantyne, Hughes & Mylonas, 2002; Brindley & Scoffield, 1998; Falchikov
& Goldfinch, 2000; Hanrahan & Isaacs, 2001; Topping, 1998).
Against this background, we conducted action research with third-year undergraduate
marketing students at Curtin University, Sarawak Campus to test the effects of peer
assessment on class participation in a marketing course over time. We investigated whether
peer assessment affects students’ class participation behaviour and learning experience. This
paper aims to report on the students’ perceptions and experiences of using peer assessment in
grading their class participation.
Literature Review
Peer assessment is not a new assessment strategy in higher education institutions, and its use
is increasing. A number of studies have examined peer assessment (e.g., Stefani, 1994; Boud,
1995; Topping, 1998, 2009; Falchikov and Goldfinch, 2000; Sivan, 2000). Peer assessment has
been tried out at different levels, across disciplines and with different types of assignments
(e.g., Bean and Peterson, 1998; Gopinath, 1999; Melvin, 1988; Ryan et al., 2007; Topping, 2009).
It is an arrangement in which learners consider and specify the level, value or quality of a
product or performance of other equal-status learners (Topping, 2009), with students grading
the work or performance of their peers against relevant criteria (Falchikov, 2001).
In a review of the peer assessment literature, Topping (1998) concludes that peer
assessment has been used in a wide variety of contexts and that it can result in gains
in the cognitive, social, affective, transferable skill and systemic domains. The
majority of the studies reviewed showed an acceptably high level of validity and
reliability. A subsequent review of peer assessment by Dochy, Segers and Sluijsmans
(1999) showed that peer assessment can be valuable as a formative assessment
method, and that students find the process sufficiently fair and accurate.
Ballantyne et al. (2002) report significant benefits of peer assessment, but at the
cost of significant administrative overheads.
There is substantial evidence that peer assessment can result in improvements in the
effectiveness and quality of learning, with gains for assessors, assessees, or both (Topping,
2009). A study by Al-Barakat and Al-Hassan (2009) showed that students and teachers hold
positive beliefs about peer assessment and that it can be beneficial if some changes are made
in the way it is employed in teacher education programs.
Vickerman’s study (2009) found that generally formative peer assessment was a positive
experience in enhancing students’ learning and development. However, when constructing peer
assessment strategies, tutors should be cognisant at the planning stage of the variety of
learning styles present, in order to maximise the developmental opportunities peer assessment
can bring to students.
The implementation of peer evaluation in class is, however, not without challenges. Social processes can
influence and contaminate the reliability and validity of peer assessments. Peer assessments
can be partly determined by friendship bonds, enmity or other power processes, the
popularity of individuals, perception of criticism as socially uncomfortable, or even collusion
to submit average scores, leading to lack of differentiation (Topping, 2009).
Method
In view of the value of peer assessment, we used peer assessment to grade class participation
in a marketing course at Curtin University, Sarawak Campus over three semesters in 2009 and
2010. Each week, pre-assigned readings, case studies or open-ended assignments were given to
students. Throughout the semester, student participation was evaluated during whole-class
discussions, small group presentations, question and answer sessions, and other in-class
activities. Class participation contributed ten percent of the final course grade. Students
took turns evaluating their peers’ participation in class. In each class, two students were
assigned to the role and used a scoring rubric to assess their peers’ contribution. The
assessment criteria in the scoring rubric were adapted from Chapnick (2009) and included:
C1: Preparedness – attends class punctually; comes to class prepared; makes class materials
readily available; contributes readily to the conversation.
C2: Sharing sources and resources – brings sources of information to the class to share with
lecturer or peers; brings resources that can be used to extend the learning activities of the
class.
C3: Class presence and communication – participates actively and frequently; contributes
consistently to discussions and activities; raises relevant questions and shares ideas with
peers; offers clear and concise oral & written presentation of ideas; demonstrates
attentiveness and good command of unit materials.
C4: Accepts and provides constructive feedback to others – positively accepts constructive
feedback; offers viable suggestions for improvement to peers and lecturer.
C5: Respect – shows interest in and respect for others’ views; listens to others; does not
dominate discussion; helps others to succeed in class.
In order to maintain confidentiality, the name of the assessor was not included in the
assessment form. The individual mark for class participation (10%) was determined by taking
the average individual score obtained from peer assessment, adding it to the lecturer’s score,
and then dividing by two to derive the final mark.
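For illustration, the short Python sketch below reproduces this calculation. It is a minimal
sketch only, not the authors’ actual marking tool: the function name and example scores are
hypothetical, and scores are assumed to be recorded out of 10 to match the 10% weighting.

```python
# Illustrative sketch only, not the authors' actual marking tool.
# Scores are assumed to be recorded out of 10, matching the 10% weighting.

def participation_mark(peer_scores, lecturer_score):
    """Average the peer assessors' scores, add the lecturer's score,
    and divide by two to obtain the final class participation mark."""
    peer_average = sum(peer_scores) / len(peer_scores)
    return (peer_average + lecturer_score) / 2

# Hypothetical example: two peer assessors awarded 7 and 8; the lecturer awarded 7.5.
print(participation_mark([7, 8], 7.5))  # -> 7.5 out of 10
```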
A Likert-scale questionnaire was administered at the end of each semester. Students were asked
to gauge the extent of their agreement with a number of statements. For analysis purposes, a
“strongly agree” response was given a value of 5 and a “strongly disagree” response a value of
1. The statements were derived from other published studies (Brindley & Scoffield, 1998; Ryan
et al., 2007) and from the authors’ experience with issues of class participation assessment.
The questionnaire also included two open-ended questions: “What have you learned from this
experience?” and “Overall, what do you think of peer assessment on class participation?”
The data from the closed-ended questions were collated and presented in table format following
analysis with the Statistical Package for the Social Sciences (SPSS) Version 17.0. An attempt
was also made to compare the results across the three semesters and to analyse the qualitative
findings from the two open-ended questions.
A total of 117 students (100%) replied to the questionnaire: 42 students (36%) were from
Semester 1, 2009, 35 students (30%) from Semester 2, 2009 and 40 students (34%) from
Semester 1, 2010.
Results
In terms of reliability, Hair et al. (2006) highlighted that 0.7 is an acceptable level. The
analysis of the sample found that Cronbach's alpha is 0.764, which exceeds the recommended
value.
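For readers who wish to reproduce the reliability check outside SPSS, the sketch below computes
Cronbach's alpha from an item-response matrix. It is illustrative only; the response data shown
are hypothetical, not the study's data.

```python
# Illustrative only: Cronbach's alpha for a set of Likert items.
# The study itself used SPSS 17.0; the data below are hypothetical.
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = questionnaire items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses from five students on four 5-point Likert items.
responses = [[4, 5, 3, 4], [3, 4, 4, 3], [5, 5, 4, 5], [2, 3, 3, 2], [4, 4, 5, 4]]
print(round(cronbach_alpha(responses), 3))
```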
Overall, the analysis shows that the results of this study are encouraging. Table 1 below shows
that, out of the 13 statements, 10 have mean scores significantly greater than 3 at the 0.05
level, based on the total sample of 117 students.
Table 1: Students’ view on peer assessment (n=117); one-sample test, test value = 3

Statement | Mean | Std. Deviation | Sig. (2-tailed)
I fully understand what was expected of me in doing the peer assessments. | 3.86 | .694 | .000
The assessment form given is helpful in doing the peer assessments. | 3.68 | .808 | .000
Assessment should be the sole responsibility of tutors. | 2.94 | .922 | .484
I feel that peer evaluation was fair in helping to determine my class participation grade. | 3.42 | .940 | .000
I feel intimidated by the whole process. | 3.07 | .704 | .295
I participated more because I knew my peers were evaluating me. | 3.34 | .984 | .000
I found it easy to evaluate my peers on their class participation. | 2.89 | .972 | .219
I do not feel sufficiently capable to mark other students’ participation level. | 3.16 | .880 | .048
Peer assessment marks should be taken into consideration to compute the overall participation score. | 3.62 | .879 | .000
I would recommend using peer evaluation grades in the future. | 3.38 | .899 | .000
Involvement in the assessment process increases my personal motivation in the class. | 3.80 | .863 | .000
Peer assessment promotes self-evaluation and develops my critical thinking and other professional skills. | 3.66 | .822 | .000
Peer assessment promotes a sense of ownership, engagement, and personal responsibility in my learning experience. | 3.76 | .727 | .000
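The one-sample test in Table 1 compares each statement's mean against the scale midpoint of 3
(the "Test Value = 3"). An illustrative way to reproduce such a comparison outside SPSS is
sketched below; the response vector shown is hypothetical.

```python
# Illustrative one-sample t-test against the scale midpoint of 3, mirroring the
# "test value = 3" analysis in Table 1 (the study used SPSS; data are hypothetical).
from scipy import stats

responses = [4, 4, 3, 5, 4, 3, 4, 5, 3, 4]  # hypothetical 5-point Likert responses
t_stat, p_value = stats.ttest_1samp(responses, popmean=3)
print(f"mean = {sum(responses) / len(responses):.2f}, t = {t_stat:.3f}, p = {p_value:.3f}")
```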
Comparisons were made between the three semesters. The results show some differing mean
scores over the three semesters. Students in Semester 2, 2009 and Semester 1, 2010 gave the
highest mean scores to “I fully understand what was expected of me in doing the peer
assessments” and “The assessment form given is helpful in doing the peer assessments”.
Students in Semester 1, 2009 rated “Peer assessment promotes a sense of ownership,
engagement, and personal responsibility in my learning experience” and “Involvement in the
assessment process increases my personal motivation in the class” with the highest mean scores.
One of the statements with a low mean score was “I found it easy to evaluate my peers on
their class participation.” For Semester 2, 2009, in addition to this statement, two other
statements had the same mean score of 3.17: “I feel intimidated by the whole process” and
“I do not feel sufficiently capable to mark other students’ participation level.” The
statement “Assessment should be the sole responsibility of tutors” had the lowest mean score
for Semester 1, 2009 and Semester 1, 2010. The comparison of average responses for each
statement over the three semesters is shown in Table 2 below.
Table 2: Comparison of students’ view on peer assessment over three semesters
(values are mean, with standard deviation in parentheses)

Statement | Semester 1, 2009 (n=42) | Semester 2, 2009 (n=35) | Semester 1, 2010 (n=40) | Total (n=117)
I fully understand what was expected of me in doing the peer assessments. | 3.71 (.805) | 3.89 (.583) | 4.00 (.641) | 3.86 (.694)
The assessment form given is helpful in doing the peer assessments. | 3.62 (.764) | 3.74 (.852) | 3.68 (.829) | 3.68 (.808)
Assessment should be the sole responsibility of tutors. | 2.90 (.958) | 3.29 (.710) | 2.68 (.971) | 2.94 (.922)
I feel that peer evaluation was fair in helping to determine my class participation grade. | 3.52 (.943) | 3.37 (.877) | 3.35 (1.001) | 3.42 (.940)
I feel intimidated by the whole process. | 3.12 (.705) | 3.17 (.618) | 2.93 (.764) | 3.07 (.704)
I participated more because I knew my peers were evaluating me. | 3.48 (1.153) | 3.43 (.655) | 3.13 (1.017) | 3.34 (.984)
I found it easy to evaluate my peers on their class participation. | 2.74 (.939) | 3.17 (.985) | 2.80 (.966) | 2.89 (.972)
I do not feel sufficiently capable to mark other students’ participation level. | 3.29 (.835) | 3.17 (.747) | 3.03 (1.025) | 3.16 (.880)
Peer assessment marks should be taken into consideration to compute the overall participation score. | 3.74 (.885) | 3.46 (.611) | 3.63 (1.055) | 3.62 (.879)
I would recommend using peer evaluation grades in the future. | 3.60 (.912) | 3.29 (.825) | 3.25 (.927) | 3.38 (.899)
Involvement in the assessment process increases my personal motivation in the class. | 3.93 (.894) | 3.66 (.802) | 3.80 (.883) | 3.80 (.863)
Peer assessment promotes self-evaluation and develops my critical thinking and other professional skills. | 3.83 (.696) | 3.51 (.781) | 3.60 (.955) | 3.66 (.822)
Peer assessment promotes a sense of ownership, engagement, and personal responsibility in my learning experience. | 3.90 (.656) | 3.71 (.710) | 3.65 (.802) | 3.76 (.727)
This study found a statistically significant difference across semesters for the statement
“Assessment should be the sole responsibility of tutors” (F = 4.386, p = 0.015). There was no
significant difference for the other statements; refer to the ANOVA results in Table 3 below.
Table 3: ANOVA

Statement | F | Sig. | Decision
I fully understand what was expected of me in doing the peer assessments. | 1.788 | .172 | No Difference
The assessment form given is helpful in doing the peer assessments. | .221 | .802 | No Difference
Assessment should be the sole responsibility of tutors. | 4.386 | .015 | Difference
I feel that peer evaluation was fair in helping to determine my class participation grade. | .410 | .665 | No Difference
I feel intimidated by the whole process. | 1.321 | .271 | No Difference
I participated more because I knew my peers were evaluating me. | 1.512 | .225 | No Difference
I found it easy to evaluate my peers on their class participation. | 2.196 | .116 | No Difference
I do not feel sufficiently capable to mark other students’ participation level. | .899 | .410 | No Difference
Peer assessment marks should be taken into consideration to compute the overall participation score. | .978 | .379 | No Difference
I would recommend using peer evaluation grades in the future. | 1.840 | .163 | No Difference
Involvement in the assessment process increases my personal motivation in the class. | .943 | .393 | No Difference
Peer assessment promotes self-evaluation and develops my critical thinking and other professional skills. | 1.608 | .205 | No Difference
Peer assessment promotes a sense of ownership, engagement, and personal responsibility in my learning experience. | 1.369 | .259 | No Difference
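The ANOVA in Table 3 compares each statement's ratings across the three semester groups. An
illustrative reproduction of such a one-way ANOVA outside SPSS is sketched below; the group
responses shown are hypothetical, not the study's data.

```python
# Illustrative one-way ANOVA comparing a statement's ratings across the three
# semester groups (the study used SPSS; these group responses are hypothetical).
from scipy import stats

sem1_2009 = [3, 2, 3, 4, 2, 3]   # hypothetical ratings, Semester 1, 2009
sem2_2009 = [4, 3, 3, 4, 3, 4]   # hypothetical ratings, Semester 2, 2009
sem1_2010 = [2, 3, 2, 3, 3, 2]   # hypothetical ratings, Semester 1, 2010

f_stat, p_value = stats.f_oneway(sem1_2009, sem2_2009, sem1_2010)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
```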
Table 4 summarises the results of the two open-ended questions asked of students: “What have
you learned from this experience?” and “Overall, what do you think of peer assessment on class
participation?”
Table 4: Summary of responses to open questions

No. | Comment | Frequency of response
1 | Motivate students to participate more | 33
2 | Is a good practice for student to improve assessment skill | 25
3 | Is a good experience and learn to be responsible and self-confident | 16
4 | Difficult to give participation marks | 13
5 | Not so comfortable with the task, is stressful and tedious | 11
6 | More mindful of their classmates’ input | 11
7 | Peer assessments are biased and unfair | 8
8 | Should not be evaluated by students as they don’t have enough capabilities to do so | 5
9 | Not effective or useful, as students do not take it seriously or do not know the students in class | 4
10 | Dislike class participation assessment | 4
 | TOTAL | 130
Discussion & Conclusion
Overall, this study shows that the mean scores for all variables exceed 2.5 out of 5. Students
have a positive perception of peer assessment. They fully understood what was expected of
them: in terms of mean score, the statement “I fully understand what was expected of me in
doing the peer assessments” has the highest score (3.86). A scoring rubric with clear marking
criteria, as used in this study, helps students fully understand the assessment scheme and
facilitates the assessment process (“The assessment form given is helpful in doing the peer
assessments” – 3.68). Without a criterion-referenced marking sheet, students would have no
rational basis on which to evaluate their peers’ participation and might struggle to evaluate
peers’ performance promptly, objectively and consistently during class.
This study agrees with a number of previous studies, for example Pond, Ul-Haq and Wade (1995),
Topping (2009), Vickerman (2009) and Al-Barakat and Al-Hassan (2009), on the value and benefit
of peer assessment. This is reflected in the analysis of three statements: “Involvement in the
assessment process increases my personal motivation in the class”; “Peer assessment promotes
self-evaluation and develops my critical thinking and other professional skills”; and “Peer
assessment promotes a sense of ownership, engagement, and personal responsibility in my
learning experience”. These three statements are among those with the highest mean scores,
all statistically significantly above 3. In the open-ended questions, 25 students mentioned
that it is a good practice for students to improve their assessment skills, and 16 noted that
it is a good experience that taught them to be responsible and self-confident.
Another statement which is statistically significant is “I participated more because I knew my
peers were evaluating me”. In this study, therefore, shared responsibility for assessing class
participation behaviour stimulated class participation, which resulted in an improved
perception of the educational experience. The results indicated that students’ motivation
increased as a result of their active involvement in the assessment process, and that they
gained a greater understanding of the assessment process. From the open-ended questions, 33
students noted that they were motivated to participate more and 11 said they were more mindful
of their classmates’ input. The mechanism contributed to enhancing the student learning
experience and improving engagement during class discussions.
The findings reveal that students perceived peer evaluation as fair and would recommend its
use. Three statements gauge the acceptance of this practice, all with statistically significant
mean scores of more than 3 (“I feel that peer evaluation was fair in helping to determine my
class participation grade”; “Peer assessment marks should be taken into consideration to
compute the overall participation score”; “I would recommend using peer evaluation grades in
the future”). However, in the open-ended questions, 8 students mentioned that peer assessments
are biased and unfair, and 4 students pointed out that the practice is not effective or useful.
On the other hand, some responses were relatively less positive. As noted by Topping (2009),
peer assessments can be partly determined by friendship bonds, enmity or other power processes,
the popularity of individuals, perception of criticism as socially uncomfortable, or even
collusion to submit average scores, leading to a lack of differentiation. In this research, the
results suggested that students were, to a certain degree, intimidated by the assessment
process. The majority of students enrolled in the course were local students from
high-power-distance cultures, and were therefore often shy and easily intimidated by the
discussion-participation and peer assessment requirements (“I feel intimidated by the whole
process” – 3.07). From the open-ended question responses, 4 students stated that they dislike
class participation assessment and 11 revealed that they were not comfortable with the peer
assessment task, finding it stressful and tedious.
The pattern of results also suggested that students are uncertain about their ability to evaluate
their peers and found it difficult at times to assess their peers’ participation (“I do not feel
sufficiently capable to mark other students’ participation level” – 3.16; “I found it easy to
evaluate my peers on their class participation” – 2.89). From the open-ended questions, 13
students indicated it was difficult to give participation marks. One student commented that “it
is a great way to make the students understand what the lecturer is going through in grading
class participation.” Thus, the peer assessment process can help students empathise more with
their instructors.
The statement “Assessment should be the sole responsibility of tutors” has one of the lowest
mean scores (2.94). Opinions differed between the three semesters on whether assessment should
be the sole responsibility of tutors, as shown by the ANOVA result (F = 4.386, p = .015). From
the open-ended questions, 5 students suggested not involving students in evaluation, as they do
not have the capabilities to do so.
The use of peer assessment as an alternative evaluation method, as demonstrated in this study,
has an obvious positive effect on students’ learning experience. However, we do not know the
details or the extent of these effects, which could be investigated in future studies. The
major benefits observed in the current study include a better understanding of the assessment
task, an appreciation of the difficulties of assessing others’ performance, increased
confidence and motivation, and a greater sense of ownership, engagement and personal
responsibility among students.
Because marketing education is in part job preparation, developing desirable managerial traits
in students should be a priority for marketing educators; such qualities can be initiated and
fostered by integrating a shared responsibility model in our teaching strategy.
Students in this study were generally positive towards the notion of peer assessment, but the
experience was not consistently positive. The method can be a useful strategy for engaging
students with the assessment process and encouraging them to participate more, but it may also
discourage a few. Some students found no benefit in the peer assessment experience and did not
learn much because of the stress involved. Consequently, in developing any peer assessment
strategy, Elwood (2006), Liu and Carless (2006) and Vickerman (2009) suggest that due
consideration be given to the diversity of student learning styles that need to be
accommodated, since some students will prefer more directed rather than self-directed support
in relation to assessment issues.
The research issues above are fairly general. Nonetheless, we hope this article offers some
new approaches and ideas to teachers who are thinking of adopting peer assessment as one of
their teaching strategies. To those who may be struggling with the grading of class
participation, peer assessment may be an alternative method to consider within the diversity
of assessment strategies.
References
Ackerman, D. S., Gross, B. L., & Perner, L. (2003). Instructor, student, and employer perceptions on preparing
marketing students for changing business landscapes. Journal of Marketing Education, 29(2), 97-110.
Al-Barakat, A. & Al-Hassan, O. (2009). Peer assessment as a learning tool for enhancing student teachers'
preparation. Asia-Pacific Journal of Teacher Education, 37(4), 399-413.
Ballantyne, R., Hughes, K., & Mylonas, A. (2002). Developing procedures for implementing peer assessment in
large classes using an action research process. Assessment and Evaluation in Higher Education, 27(5),
427–441.
Barr, R. B., & Tagg, J. (1995). From teaching to learning: A new paradigm for undergraduate education.
Change, 27(6), 13-25.
Bean, J.C. & Peterson, D. (1998). Grading classroom participation. New Directions For Teaching And Learning,
74(1), 33-40.
Boud, D. (1995). Enhancing learning through self-assessment. (1st edn). London: Kogan Page.
Brindley, C. & Scoffield, S. (1998). Peer assessment in undergraduate programmes. Teaching in Higher
Education. 3(1), 79-90.
Brookfield, S. D., & Preskill, S. (1999). Discussion as a way of teaching: Tools and techniques for democratic
classrooms. San Francisco: Jossey-Bass.
Chapnick, A. (2009). A participation rubric. The Teaching Professor. Retrieved from
http://www46.homepage.villanova.edu/john.immerwahr/TP101/lects/participation%20matrix0001.pdf
Cox, S. & King, D. (2006). Skill sets: an approach to embed employability in course design. Education +
Training, 48(4), 262-274.
Craven III, J.A. & Hogan, T. (2001). Assessing student participation in the classroom. Science Scope, 25(1), 36-40.
Dallimore, E. J., Hertenstein, J. H. & Platt, M. B. (2006). Nonvoluntary class participation in graduate
discussion courses: Effects of grading and cold calling. Journal of Management Education, 30(2), 354-377.
Dochy, F., Segers, M., & Sluijsmans, D. (1999). The use of self-, peer and co-assessment in higher education: A
review. Studies in Higher Education, 24(3), 331–350.
Elwood, J. (2006). Formative assessment: Possibilities, boundaries and limitations. Assessment in Education:
Principles, Policy and Practice, 13(2), 215–232.
Falchikov, N. (2001). Learning together: Peer tutoring in higher education. London: Routledge Falmer.
Falchikov, N. & Goldfinch, J. (2000). Student peer assessment in higher education: A meta-analysis comparing
peer and teacher marks. Review of Educational Research, 70(3), 287-322.
Fallows, S. & Steven, C. (2000). Building employability skills into the higher education curriculum: a
university-wide initiative. Education + Training, 42(2), 75-82.
Farranda, W. T., & Clarke, I., III. (2004). Student observations of outstanding teaching: Implications for
marketing educators. Journal of Marketing Education, 26(3), 271-281.
Gopinath, C. (1999). Alternatives to instructor assessment of class participation. Journal of Education for
Business. 75(1), 10-14.
Hair, J. F., Black, W. C., Babin, B. J., Anderson, R. E. & Tatham, R. L. (2006). Multivariate Data Analysis, 6th
Ed. Upper Saddle River: Pearson Education.
Hanrahan, S. J. & Isaacs, G. (2001). Assessing self- and peer-assessment: the students' views. Higher Education
Research & Development, 20(1), 53-70.
Hawes, J. M. (2004). Teaching is not telling: The case method as a form of interactive learning. Journal for
Advancement of Marketing Education, 5(Winter), 47-54.
Knowles, M. (1984). The adult learner: A neglected species. (3rd ed.). Houston: Gulf.
Liu, N., & Carless, D. (2006). Peer feedback: The learning element of peer assessment. Teaching in Higher
Education, 11(3), 279-90.
Maznevski, M. L. (1996). Grading class participation. Teaching Concerns. University of Virginia. Retrieved
from
http://trc.virginia.edu/Publications/Teaching_Concerns/Spring_1996/TC_Spring_1996_Maznevski.htm
Melvin, K. (1988). Rating class participation: The prof/peer method. Teaching of Psychology. 15(3), 137-139.
Nunn, C. E. (1996). Discussion in the college classroom, triangulating observational and survey results. The
Journal of Higher Education, 67(3), 243-266.
Pond, K., Ul-Haq, R. & Wade, W. (1995). Peer review: A precursor to peer assessment, Innovations in
Education and Teaching International, 32(4), 314-323.
Ponzurick, T., Russo France, K., & Logar, C. M. (2000). Delivering graduate marketing education: An analysis
of face-to-face versus distance education. Journal of Marketing Education, 22(3), 180-187.
Ryan, G. J., Marshall, L. L., Porter, K. & Jia, H. (2007). Peer, professor and self-evaluation of class
participation. Active Learning in Higher Education. 8(1), 49-61.
Sautter, P. (2007). Designing discussion activities to achieve desired learning outcomes: Choices using mode of
delivery and structure. Journal of Marketing Education, 29(2), 122-131.
Schlee, R. P. & Harich, K. R. (2010). Knowledge and skill requirements for marketing jobs in the 21st century.
Journal of Marketing Education, 32(3), 341-352.
Sivan, A. (2000). The implementation of peer assessment: An action research approach. Assessment in
Education, 7(2), 193-213.
Spiller, L. D., & Scovotti, C. (2008). Curriculum currency: Integrating direct and interactive marketing content
in introductory marketing courses. Journal of Marketing Education, 30(1), 66-81.
Stefani, L. A. J. (1994). Peer, self and tutor assessment: Relative reliabilities. Assessment and Evaluation in
Higher Education, 19(1), 69-75.
Topping, K. (2009). Peer Assessment. Theory Into Practice, 48(1), 20-27.
Topping, K. (1998). Peer assessment between students in colleges and universities. Review of Educational
Research, 68(3), 249-276.
Vickerman, P. (2009). Student perspectives on formative peer assessment: An attempt to deepen learning?
Assessment & Evaluation in Higher Education, 34(2), 221-230.
Copyright © 2011 Fidella Tiew and Goi Chai Lee: The author/s assign to Enhancing Learning: Teaching and Learning
Conference 2011 a non-exclusive licence to use this document for personal use and in courses of instruction provided that the
article is used in full and this copyright statement is reproduced. The author also grants a non-exclusive license to the
organisers of the Enhancing Learning: Teaching and Learning Conference 2011 Conference to publish this document as part of
the conference proceedings. Any other usage is prohibited without the express permission of the authors.