Week 11 – Annotation – Using peer feedback to enhance the quality of student online postings: An exploratory study.

Ertmer, P. A., Richardson, J. C., Belland, B., Camin, D., Connolly, P., Coulthard, G., Lei, K., & Mong, C. (2007). Using peer feedback to enhance the quality of student online postings: An exploratory study. Journal of Computer-Mediated Communication, 12(2), 412–433. https://doi.org/10.1111/j.1083-6101.2007.00331.x

Ertmer et al. (2007) conducted an exploratory mixed-methods study that examined students' perceptions of feedback in an online classroom and the impact peer feedback had on discussion quality. The study focused on graduate students (n = 15). Ertmer et al. pose three research questions. First, what impact does peer feedback have on the quality of postings in an online course, and does posting quality increase over time? Second, what are students' perceptions of the value of receiving peer feedback, and how do they perceive peer feedback in relation to instructor feedback? Third, what perceptions do students have of the value of providing feedback to peers? These three questions are drawn from a literature review that situates feedback in the context of online instruction. First, Ertmer et al. examine the role of feedback in instruction. Next, they narrow their focus to the role of feedback in online instruction. Then, they discuss the advantages of using peer feedback. Finally, they discuss its challenges.

A team of researchers worked together on this mixed-methods study. Students in a graduate-level course were taught to use a two-factor rubric based on Bloom's Taxonomy to rate their peers' work over the course of the semester. All feedback was filtered through the instructor before being returned to the students, with a lag of up to two weeks in some cases. In addition to providing discussion responses, peer responses, and peer feedback, students completed pre- and post-surveys on their perceptions of peer and instructor feedback, as well as individual interviews held in person or by phone. The researchers worked to ensure validity and reliability by triangulating data sources, grounding the rubric in Bloom's Taxonomy, dividing the students among multiple interviewers and evaluators on the research team, using standardized interview protocols, and quoting participants directly.

The results showed that while students did value giving and receiving peer feedback, they still valued instructor feedback more; peer feedback was not viewed as being as robust or valid as instructor feedback, even with Bloom's Taxonomy as its basis. The study also did not show that peer feedback significantly improved discussion content. Ertmer et al. noted that using peer feedback can reduce instructor workload and make students equal partners in their learning, and students reported that they learned by constructing feedback for peers. Finally, the limitations of the study included a small sample size, the limited scale of the rubric, and the absence of interrater reliability protocols for students' use of the rubric when providing peer feedback.

Ertmer et al.'s article has several strengths. The literature review provides a theoretical and conceptual framework that starts broad and narrows in scope; the authors move from feedback generally, to online feedback, to the advantages and disadvantages of peer feedback. The concept of feedback is anchored in works that were current at the time of publication. The research questions flow naturally from the literature presented. The purpose of the study is also clearly stated (to examine student perceptions of feedback) as a means of closing a specific gap in the literature: peer feedback's effect on shaping the quality of discourse.

The methodology, however, raises concerns. Ertmer et al. explain their methods, but some of the design seems overly complicated in the name of supporting validity and triangulation of qualitative data. First, although a team of researchers worked on the project, too many different people were involved in assessing the quality of the discussion posts, even with attempts at interrater reliability. There is too much opportunity for subjectivity, even with discussion among raters. It would have been better for all of the discussion posts to be scored first by one researcher and then by another, following a clear interrater reliability calibration exercise to ensure the rubric was being applied the same way (a brief sketch of one common agreement statistic appears at the end of this critique). Second, they did not disclose the survey instrument's questions; they described only the Likert scale and the fact that the survey included open-ended items. I know it is common in qualitative studies to choose exemplary comments to illustrate points, but when the researchers state “interview comments suggested that students (n=8) used information obtained, …” and then provide only one illustrative example, that is not enough information for me to see the full scope of the student perceptions the researchers saw. Even the quantitative data were brief (though easy to understand in context). I would have liked more depth of analysis and more examples, especially from the interviews, in a study about student perception. On the positive side, the discussion tied the results back to relevant literature, some of which was also cited in the literature review, helping the reader draw specific and clear connections.

Finally, the limitations section did not really address that the participants were professional graduate students who likely already had a strong sense of evaluation, and fifteen students is not a large class, even when it comes to giving feedback. The survey results showed that students preferred instructor feedback, but the study did not address how filtering the peer feedback through the instructors to weed out poor-quality responses may have affected students' perceptions of that feedback. The study also concluded that peer feedback helps instructors save time; however, the researchers read all of the student feedback themselves, which caused delays of up to two weeks in returning it in some instances and rendered it untimely for student development. The study did not show that peer feedback increased the quality of discussion, and the factors I have just mentioned are not fully considered as possible explanations. In this design, the instructors still had to engage with all of the feedback; at that point, they could simply have given the students feedback directly.
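As a point of reference for the calibration exercise I suggested above (my own illustration, not a statistic Ertmer et al. report), agreement between two raters is commonly quantified with Cohen's kappa:

\[
\kappa = \frac{p_o - p_e}{1 - p_e}
\]

Here \(p_o\) is the observed proportion of posts the two raters score identically, and \(p_e\) is the agreement expected by chance given each rater's score distribution. For example, if two raters assign the same rubric score to 85% of posts (\(p_o = 0.85\)) and chance agreement is 60% (\(p_e = 0.60\)), then \(\kappa = (0.85 - 0.60)/(1 - 0.60) \approx 0.63\), a level conventionally read as substantial agreement. A calibration round would aim to reach a benchmark like this before the raters scored posts independently.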

As a doctoral student, I appreciate this article as a well-structured piece. Even if not all of the methodology makes complete sense to me, I can still see where the researchers were going and why they set their study up in this manner. As a doctoral student in a program where peer feedback is the main source of feedback received, it was nice to see that the research supports the idea that peer feedback is valuable in an online environment and that we learn as much from giving feedback as from receiving it. I also agree that instructor feedback is more valuable because it is the more expert opinion on the subject matter while students are still learning. That doesn't mean students don't have valuable contributions; rather, in a learning situation where courses are expensive, it's important for the instructor's voice to weigh in on the conversation, even if not on every post or point.