Week 11 – Annotation – Using peer feedback to enhance the quality of student online postings: An exploratory study.

Ertmer, P. A., Richardson, J. C., Belland, B., Camin, D., Connolly, P., Coulthard, G., Lei, K., & Mong, C. (2007). Using peer feedback to enhance the quality of student online postings: An exploratory study. Journal of Computer-Mediated Communication, 12(2), 412-433. https://doi.org/10.1111/j.1083-6101.2007.00331.x

Ertmer et al. (2007) conducted a mixed-methods exploratory study that examined students' perceptions of feedback in an online classroom and the impact peer feedback had on discussion quality. The study focused on graduate students (n = 15). Ertmer et al. pose three research questions. First, what impact does peer feedback have on the quality of postings in an online class, and does that quality increase over time? Second, what are students' perceptions of the value of receiving peer feedback, and how do they perceive peer feedback in relation to instructor feedback? Third, what perceptions do students have of the value of providing feedback to peers? These three questions are drawn from a literature review that situates feedback in the context of online instruction. First, Ertmer et al. examine the role of feedback in instruction. Next, they narrow their focus to the role of feedback in online instruction. Then, they discuss the advantages of using peer feedback. Finally, they discuss its challenges.

A team of researchers worked together on this mixed-methods study. Students in a graduate-level course were taught to use a two-factor rubric, based on Bloom's Taxonomy, to rate their peers' work over the course of the semester. All feedback was filtered through the instructor before being returned to students, with a lag of up to two weeks in some cases. In addition to providing discussion responses, peer responses, and peer feedback, students completed pre- and post-surveys on their perceptions of peer and instructor feedback, as well as individual interviews held in person or by phone. The researchers worked to ensure validity and reliability by triangulating data sources, grounding the rubric in Bloom's Taxonomy, distributing students among multiple interviewers and evaluators on the research team, standardizing the interviews, and quoting directly from participants.

The results showed that while students did value giving and receiving peer feedback, they still valued instructor feedback more. Peer feedback was valued, but it was not viewed as being as robust or valid as instructor feedback, even when grounded in Bloom's Taxonomy. The study also did not show that peer feedback significantly improved the quality of the discussions. Ertmer et al. noted that using peer feedback can reduce instructor workload and make students equal partners in their learning, and students reported that they learned by constructing feedback for peers. Finally, the limitations of the study included a small sample size, the limited scale of the rubric, and the absence of interrater reliability protocols for students applying the rubric to provide peer feedback.

Ertmer et al.'s article has several strengths. The literature review provides a theoretical and conceptual framework that starts broad and narrows in scope: they move from feedback generally, to online feedback, to the advantages and disadvantages of peer feedback. The concept of feedback is anchored in works that were current at the time of publication, and the research questions flow naturally from the literature presented. The purpose of the study is also clearly stated, to examine student perceptions of feedback, as a means of closing a specific gap in the literature: peer feedback's effect on shaping the quality of discourse. Ertmer et al. also explain their methodology clearly, though some of it seems overly complicated in the name of supporting validity and triangulation of qualitative data.

The study also has weaknesses. First, although a team of researchers worked on the project, they involved too many different people, even with attempts at interrater reliability in assessing the quality of the discussion posts. There is too much opportunity for subjectivity, even with discussion. It would have been better for all of the discussion posts to be scored first by one researcher and then by another, after a clear interrater reliability calibration exercise to ensure the rubric was being applied consistently. Second, they did not disclose the survey instrument questions; they only described the Likert scale and noted that the survey included open-ended questions. I know it is common in qualitative studies to choose exemplary comments to illustrate points, but when the researchers state that "interview comments suggested that students (n=8) used information obtained, …" and then provide only one illustrative example, that is not enough information to show the full scope of the student perceptions they observed. Even the quantitative data were brief (though easy to understand in context). I would have liked more depth of analysis and more examples, especially from interviews, in a study about student perception. The discussion did tie the results back to relevant literature, some of which was cited in the literature review as well, helping the reader draw specific and clear connections.

Finally, the limitations section did not address the fact that the participants were professional graduate students who likely already had a strong sense of how to evaluate work, and fifteen students is not a large class, even for giving feedback. The survey results showed that students preferred instructor feedback, but the study did not address how filtering peer feedback through instructors to weed out poor-quality responses may have affected students' perceptions of that feedback. The study also concluded that peer feedback helps instructors save time; however, the researchers read all student feedback themselves, which caused a two-week delay in returning it in some instances, rendering it untimely for student development. The study did not show that peer feedback increased the quality of discussion, and these factors are not fully discussed as possible explanations. In the end, the instructors still had to engage with all of the feedback; at that point, they might as well have given the students feedback directly.

As a doctoral student, I appreciate this well-structured article. Even if not all of the methodology makes complete sense to me, I can still see where the authors were going and why they set the study up this way. As a doctoral student in a program where peer feedback is the main source of feedback received, it was encouraging to see that the research supports the idea that peer feedback is valuable in an online environment and that we learn as much from giving feedback as from receiving it. I also agree that instructor feedback is more valuable because it represents the more expert opinion on the subject matter, while students are still learning. That doesn't mean students don't have valuable contributions to make, just that in a learning situation where courses are expensive, it's important for the instructor's voice to weigh in on the conversation, even if not on every post or point.

Week 8 – Extending the Discussion

Online learning environments are difficult because they're asynchronous. In face-to-face classrooms, I could always tell where my students were in terms of understanding and could easily course correct. I built rapport through bad jokes and by showing my humanity to my students regularly. In online classes, I'm a block of text, and maybe sometimes a quick video or a voice file. It's a very different type of interaction with students. But a key piece of success is students feeling that their teacher is present. Richardson and Swan (2003) pointed out that students who felt their instructor was present in the class felt they learned more. Hrastinski (2009) pointed out that having someone around with a higher level of knowledge than the learner increases learning. It's difficult to be present as a block of text, and it's easy to stifle a discussion as the instructor if you encroach on it too soon.

Online classrooms still need a level of instructor interaction, however. They cannot simply be left for students to engage with each other, or with content and/or technology, as the only sources of feedback. Hrastinski (2009) identified three types of interaction in a classroom: learner to learner, learner to content, and learner to instructor. When I observe and evaluate any classroom environment, I look for all of these interactions, and when I build my own classroom environments, I strategically and intentionally build in all of these pieces. I would also add that there needs to be clear interaction between the instructor and the content to model disciplinary thinking for students. All three interactions need to be present for an online classroom to function well (Abrami et al., 2011). Learners cannot be left to engage only with other learners and the content, reaching out to the instructor piecemeal for clarification, and still expect to leave with a well-rounded learning experience. Instructors need to set up learner-to-learner engagements that have a specific end goal and participate in those interactions at key points to provide timely feedback. As Jensen et al. (2023) argued, feedback delivered at a useful point, while students are still in the process of an assignment, can lead to substantive learning. Without instructor feedback, learners can be uncertain whether their interactions with peers or content are leading to correct understanding. The instructor also needs to challenge student ideas while they are still developing, because that is when feedback is most likely, in my opinion, to help shape approaches and ideas.

References

Abrami, P. C., Bernard, R. M., Bures, E. M., Borokhovski, E., & Tamim, R. M. (2011). Interaction in distance education and online learning: Using evidence and theory to improve practice. Journal of Computing in Higher Education, 23, 82-103.

Hrastinski, S. (2009). A theory of online learning as online participation. Computers & Education, 52(1), 78-82.

Jensen, L. X., Bearman, M., & Boud, D. (2023). Characteristics of productive feedback encounters in online learning. Teaching in Higher Education, 1-15. https://doi.org/10.1080/13562517.2023.2213168

Richardson, J. C., & Swan, K. (2003). Examining social presence in online courses in relation to students’ perceived learning and satisfaction. Journal of Asynchronous Learning Networks, 7(1), 71-88.

Week 8 – Annotation – Characteristics of productive feedback encounters in online learning

Jensen, L. X., Bearman, M., & Boud, D. (2023). Characteristics of productive feedback encounters in online learning. Teaching in Higher Education, 1-15. https://doi.org/10.1080/13562517.2023.2213168

Jensen et al. (2023) take a digital ethnographic approach to studying the perceived usefulness of feedback for students taking online classes at an Australian and a Danish university, in courses that were not emergency remote courses created in response to COVID-19. Jensen et al. situate their argument in the context of productive feedback, that is, feedback a student finds meaningful, and of feedback in higher education more broadly, noting that feedback can come from interactions with humans, technology, or resources. The dataset for the study was derived from 18 students whose online text-based work was observed and from 27 semi-structured interviews conducted alongside longitudinal audio diaries. The data were thematically coded.

Three major themes for feedback emerged. The first was elicited feedback encounters, where students directly ask for feedback; an example would be requesting peer review or emailing a question. The second was formal feedback encounters, where feedback is structured as part of the course design; an example is any instructor feedback on a submitted assignment. The third was incidental feedback encounters, where students get information that causes them to reflect on their understanding; an example is a discussion with peers about the directions for an assignment. Feedback has two types of impact: instrumental and substantive. Instrumental feedback clearly delineates for the student what action to take next; this type of feedback often leads to superficial changes based on direct instruction about what to do. Substantive feedback asks a student to critically reflect on their own assumptions; this type of feedback is often ignored because it is too challenging, but if the student engages with it, it reshapes their understanding of the task, the work, or their own approach. Instrumental and substantive feedback are both valuable to student learning, and each serves a purpose. However, all feedback is most valuable when students are open to it and when it arrives in time for them to apply it to their current work.

Jensen et al. do a good job of situating their problem in the context of feedback. There was discussion of the framework and approach for feedback, but it was not related back to the discussion or the findings in a clear way. It was also not clear whether the authors collected the dataset themselves or used a dataset collected by someone else and made available for other researchers. It was, however, very easy to follow the conceptual framework through to the research questions and the methodology used to explore the problem.

Feedback is something I have grappled with for a very long time as an educator. When I teach writing classes, I am always swamped with feedback to give. When I teach online courses, I log in every single day to read what students post and provide formative feedback that I hope will shape their work. I'm not always sure that students read and engage with the feedback I give them, except when I require it as part of a reflection assignment (a pattern Jensen et al. noted in their literature review). But I do agree that students will happily take surface-feature, lower-order-concern feedback and apply it easily because it is direct and tells them what to do. For example, if I tell them to restate their thesis and give them a template, they almost always do it. But if I ask them to reframe their thinking on a topic, which would mean a major overhaul of their paper, they often do nothing with that feedback. Jensen et al. pointed out that this type of feedback is hard to act on. It's a big ask to have a first-year composition student reframe their entire way of thinking about a topic while they're simultaneously learning how to research, evaluate sources, and do everything else that comes with academic research. It defeats the purpose of that kind of feedback, however, for me to tell them how to think about the topic differently. This kind of feedback succeeds only when they're ready and open to the conversation about changing how they think.

In online learning, it's even harder to give the kind of feedback that leads to substantive learning, because you can't see the student to know how well it's received or even understood. I also don't always know the right time to give feedback so that it's useful. In 8-week classes, I'm usually one week behind in grading, so students are getting feedback while they're already working two assignments beyond the one it addresses. It's not really helpful anymore. I need to think about ways to structure feedback so students get it when it's useful.