Abstract

Peer review can be used as a teaching methodology to improve students’ learning and critical thinking. However, teachers have many concerns about the reliability and validity of grading performed by students.


This paper describes the application of peer review as a teaching strategy in the large Biomedical Informatics course of the School of Medicine at the University of Florence. The aim of the study was twofold: (I) to assess the validity of students’ reviews by calculating the correlation between the scores assigned by peer reviewers and those assigned by the instructor; (II) to assess the validity of students’ self-evaluation by calculating the correlation between the score each student assigned to their own work and the score assigned by the instructor. To this end, a statistical analysis was performed.


The results showed moderate concordance between the marks assigned by peers and those assigned by the instructor. Nevertheless, the comparison between the instructor’s median and the peer-review median shows a minimal difference that has almost no effect on the final grade. In contrast, there was poor concordance between the marks assigned by the instructor and those resulting from the students’ self-evaluation. Even if further studies are needed, these promising results can begin to dispel the concerns about students’ grading skills that prevent teachers from adopting peer review. On this basis, the use of peer review systems can streamline the application of peer review in classes with a high number of students, reducing the workload on the teacher.
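As a minimal sketch of how the agreement between peer-assigned and instructor-assigned marks could be quantified, the Python snippet below computes a Pearson correlation, Lin’s concordance correlation coefficient, and the difference of medians for two sets of scores. The function name, the score arrays, and the grading scale are illustrative assumptions; the study’s actual data and the exact statistical procedure used in the paper are not reproduced here.

    import numpy as np
    from scipy import stats

    def lin_ccc(x, y):
        """Lin's concordance correlation coefficient between two raters' scores."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        mx, my = x.mean(), y.mean()
        vx, vy = x.var(), y.var()            # population variances
        cov = ((x - mx) * (y - my)).mean()   # population covariance
        return 2 * cov / (vx + vy + (mx - my) ** 2)

    # Hypothetical scores for eight assignments (invented for illustration,
    # not the study's data).
    peer = np.array([24, 27, 22, 30, 26, 21, 28, 25])
    instructor = np.array([25, 26, 23, 29, 27, 20, 28, 24])

    r, p = stats.pearsonr(peer, instructor)
    print(f"Pearson r = {r:.2f} (p = {p:.3f})")
    print(f"Lin's CCC = {lin_ccc(peer, instructor):.2f}")
    print(f"Median difference = {np.median(instructor) - np.median(peer):.1f}")

A small median difference with moderate concordance, as reported in the abstract, would correspond to peer-assigned grades that track the instructor’s closely enough that the final grade is rarely changed.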

Keywords

Peer assessment, Peer review, Self assessment, Higher education, Peer assessment validity

Article Details

Author Biography

Marco Masoni, Department of Experimental and Clinical Medicine - University of Firenze

Marco Masoni was born in Reggio Emilia on May 7th 1961.
In 1991 he obtained his degree in Medicine at the University of Modena. In 1997 he obtained the Postgraduate Diploma in Nuclear Medicine at the University of Florence. He is the scientific coordinator of the Research Unit of Didactical Innovation in Continuing Medical Education at the Department of Experimental and Clinical Medicine of the University of Florence.
He carries out research in the fields of e-learning in medical education and Consumer Health Informatics.

How to Cite

Guelfi, M. R., Formiconi, A. R., Vannucci, M., Tofani, L., Shtylla, J., & Masoni, M. (2021). Application of peer review in a university course: are students good reviewers?. Journal of E-Learning and Knowledge Society, 17(2), 1-8. https://doi.org/10.20368/1971-8829/1135380
