Peer-feedback and Turnitin
This case study reflects on the use of peer-feedback in the first-year Research Skills module of the BA (Hons) Media and Communications, and considers the introduction of the Turnitin PeerMark tool to facilitate this activity in the classroom.

This case study was led by Dr Ruth Sanz Sabido, Reader in Media and Social Inequality, in the School of Creative Arts and Industries.

Context: The Module

Research Skills is a Level 4 compulsory module for Single and Combined Honours students taking the BA (Hons) Media and Communications. The module aims to equip students with the basic skills needed to carry out independent research in media and journalism, focusing particularly on quantitative research.

When the case study was conducted, the summative assessment for this module consisted of a Research Report (100% of the module mark) due in May, alongside a formative point of assessment completed about five weeks into the module. The assignment brief for the Research Report sets out a detailed structure of sections that students need to complete, such as a Literature Review and a Discussion of Findings. The final report resembles a journal article, combining both primary and secondary research.

For the formative assessment, called the Progress Report, students were asked to submit a draft of the literature review on their research topic. A mixed approach to formative assessment was taken: tutors provided feedback on drafts, and a peer-feedback activity also formed part of one of the module sessions.

Process and Rationale

In order to facilitate the exercise, students were given a form to guide them through the task of providing peer-feedback. This form was an adaptation of an activity included in Cottrell’s (2011) book on critical thinking skills. It offered students a checklist for providing their peers with constructive feedback on their Progress Reports. It also advised them that these are some of the questions that tutors consider when marking their assignments, so they could also use the checklist to self-assess their own work. The form listed a combination of 14 questions and statements (e.g. ‘The research question is clear and specific’; ‘The writer provides accurate references to academic literature’). Students needed to answer Yes or No, and use the space provided to add further comments.

The form also included a section titled ‘How to assess your peers?’, which we covered at the beginning of the session. We explained that the exercise was designed to help them develop their critical skills by reflecting on their peers’ research: what their peers were doing well and what they could improve. In turn, they could apply these principles to their own work and improve the quality of their own submission. We also reminded students that being critical is not the same as criticising, and that they could point out positive aspects of their peers’ work as well as areas for improvement in a constructive way, for example by giving examples of what they thought might work better and explaining why. Examples of the types of comments they might make were also provided.

For several years, this activity was conducted in class by distributing physical copies of the work that students had completed. This required a constant redistribution of papers across the classroom, trying to remember who had already given feedback to whom, and returning peer-feedback forms to the right author. Overall, these rudimentary sessions worked to a certain extent, although, from a logistical point of view, they were quite stressful for tutors, who also had to monitor who had finished the task in order to assign a new copy so that students could continue working. More importantly, from a peer-review perspective, the main downside was that there was no guarantee that either the author of the work or the reviewer would remain anonymous, which meant that feedback might not always be given honestly for fear of upsetting friends and peers.

The activity also raised other questions, particularly in relation to students’ perceptions of fairness, reliability and validity. Indeed, research has found that this type of feedback is often perceived as questionable in terms of its accuracy, depth or reliability, and that the quantity and quality of feedback provided by students is often uneven (Nilson, 2003). Amongst other emotional responses often elicited by feedback, students are sometimes disappointed because they do not receive as much help as they have given. There are, however, mixed reports on these practices and their results – see, for example, Topping (1998) and Orsmond, Merry and Reiling (1996) for different takes on the subject.

The activity remained an intrinsic part of the module because students reported that it helped them think about what was required of them. In addition, research has found that peer learning and assessment are “effective methods for developing critical thinking, communication, lifelong learning, and collaborative skills” (Nilson, 2003: 34).

Against this backdrop, the Turnitin PeerMark Assignment tool was tested in Research Skills in order to address some of the logistical and anonymity challenges encountered during previous peer-review sessions. The use of this tool effectively meant that we moved from a paper-based activity to an online one. The tool, which works by distributing a pool of papers amongst users, automatically resolved the problem of having to circulate multiple drafts and peer-review forms throughout the session. In addition, authors and reviewers remained anonymous, as long as authors had not typed their names within the script (which, admittedly, was not always the case).

Simon Starr, our Faculty Learning Technologist, set up the submission point and provided useful guidance and support before and during the session. The peer-review form that we had been using on paper was incorporated into the software, so students were able to read and answer the questions online alongside the allocated scripts. Tutors (the Module Leader and two sessional members of staff) were in the classroom offering support throughout the session.

Evaluation

The evaluation is based on (1) the tutors’ observations during the session, (2) conversations with students when we sat with them and asked how the activity was going, and (3) a questionnaire for students and members of staff to assess their experiences of the PeerMark tool.

Overall, the experience was positive because it offered the benefits of previous peer-review activities with some additional advantages, which are summed up by the following quotes from the survey:

“It was a very easy tool to use” (Student).

“I think that the session went more smoothly and was better organised if compared to other years when we did this task on paper. Anonymity was also a nice plus” (Member of Staff A).

“Students were able to feedback in a systematic and clear way to their peers” (Member of Staff B).

However, the introduction of PeerMark also presented some technical challenges that need to be considered. We observed that students who got distracted or needed more time ‘to process’ (to use their own words) took longer to go through the script and write up the feedback, which caused Turnitin to behave in one of two ways: 1) by the time students submitted the review and clicked on the next allocated script, it was no longer available, so they had to refresh the page to have a new script allocated; or 2) the feedback they had written timed out before they submitted it, so the work was lost.

In addition, familiar responses to the activity, such as the disappointment felt when students who had given plenty of feedback did not receive a similar amount of support, were voiced again during this session. The other challenge that remains concerns the ways in which students perceive the usefulness of the task. Those who understood why we were doing it engaged with it and responded positively; those who did not see the benefit of the task exhibited more lukewarm responses.

Possible ways forward

These are some of the points to consider when running a peer-review session using the Turnitin PeerMark tool:

– To ensure anonymity, students should be reminded not to include their names in the file that they submit for peer-review.

– To avoid the technical issues described above, it helps to warn students that the form will time out after a period of inactivity. Students can also be told at the beginning of the session to refresh the page if any of their pre-allocated scripts are no longer available by the time they click on them.

– More broadly, in terms of peer-feedback, one aspect that can be revised in future is the peer-review form itself, considering at least two factors:

  • It is worth remembering that, despite the guidance and examples offered before the peer-review activity, students are not professionally trained to provide feedback; if they are unable to ‘assess’ the work of others, they may not be able to assess their own work either. One way to address this, as Sadler (2002) argues, may involve using exemplars to help students understand what quality means.
  • With regards to the emotional element surrounding the acts of giving and receiving feedback, Nilson (2003: 36) suggests that questions should try to avoid judgments or opinions by directing students to specific details of the work in question.

– It would also be beneficial to foster a broader culture of peer-review within the module (and possibly the programme), for example by adopting approaches such as those suggested by Sadler (2002) and Kean (2012), who argue in favour of using exemplars in the classroom. For this to work effectively, the activity would need to be embedded more consistently within the module.

This would help students gain further experience and confidence, and develop a more positive engagement with, and understanding of, the long-term benefits of this exercise (Huisman et al., 2018; McConlogue, 2015). It would be important, however, to begin by challenging the notion of learning as an individual achievement and instead nurture more collaborative approaches, so that students value activities that they do with and for others (Liu and Carless, 2006).

Contact

For further information or to discuss similar or related case studies, please contact Ruth at ruth.sanz-sabido@canterbury.ac.uk

Reference List

Cottrell, S. (2011) Critical Thinking Skills: Developing Effective Analysis and Argument. Basingstoke: Palgrave Macmillan.

Huisman, B., Saab, N., van Driel, J. and van den Broek, P. (2018) “Peer feedback on academic writing: undergraduate students’ peer feedback role, peer feedback perceptions and essay performance”. Assessment & Evaluation in Higher Education, 43(6): 955-968.

Kean, J. (2012) “Show AND Tell: Using Peer Assessment and Exemplars to Help Students Understand Quality in Assessment”. Practitioner Research in Higher Education, 6(2): 83-94.

Liu, N. and Carless, D. (2006) “Peer feedback: the learning element of peer assessment”. Teaching in Higher Education, 11(3): 279-290.

McConlogue, T. (2015) “Making Judgements: Investigating the Process of Composing and Receiving Peer Feedback”. Studies in Higher Education, 40(9): 1495-1506.

Nilson, L.B. (2003) “Improving Student Peer Feedback”. College Teaching, 51(1): 34-38.

Orsmond, P., Merry, S. and Reiling, K. (1996) “The importance of marking criteria in the use of peer assessment”. Assessment & Evaluation in Higher Education, 21(3): 239-249.

Sadler, R. (2002) “Ah!… So That’s Quality”. In: Schwartz, P. and Webb, G. (eds.) Assessment: Case Studies, Experience and Practice from Higher Education, 130-136. London: Kogan Page.

Topping, K. (1998) “Peer assessment between students in colleges and universities”. Review of Educational Research, 68(3): 249-276.
