Week 6 – Extending the Discussion

Extending the Conversation

While reading Kay's (2012) literature review, one thing that stood out to me was the underlying idea that using video in the classroom, especially streaming video that an instructor or students could create themselves, really challenges the role of the instructor. Kay found multiple reasons students use videos: improving learning, preparing for class, checking their own understanding, obtaining a global overview of chapters, taking better notes, and improving the quality of face-to-face classes. Kay also pointed out that there was a concern (maybe a fear) among instructors that recording video lectures or posting PowerPoint lectures would mean students won't come to class. The literature review uncovered that students were about as likely as not to come to class when a video lecture was posted, but when a PowerPoint lecture was posted, students were less willing to attend.

Prior to the emergence of the science of learning in the 1980s, the common model of education was one where knowledge was transferred from instructor to student, creating a dynamic where the instructor had all the power, and students had to be physically present to get what they needed (Nathan & Sawyer, 2022). Educational technology allows a shift in where, when, and how students access information. This shift also displaces the established power dynamic, especially in direct-instruction learning environments. Videos give students more ownership and control over their learning experiences. Students are not quite ready to give up on face-to-face interaction, though, as evidenced by the fact that brick-and-mortar education still exists in 2023 and that students chose to return to that space after the COVID-19 pandemic's long pause of face-to-face learning.

While instructors may record, create, or curate video content for their students to consume, doing so still places them in a different role in the learning context. I see an underlying fear in the ways video can shift that dynamic: once a video lecture exists, it can be reused indefinitely, in perpetuity. For example, Concordia University assigned a deceased professor's recorded lecture materials to an online course led by a living professor and two teaching assistants (Tangermann, n.d.). Some ethical concerns come up here. McClellan et al. (2023) also point out that with video lectures, students can overinflate their sense of their own learning because the instructor is not there to immediately guide understanding. The role of the professor shifts away even from "guide on the side," and I'm not sure what the new role looks like. But I am interested in the question of how video lectures, whether active or passive in the student experience, can reshape the power dynamic between instructor and student in a learning context. What happens to learning when the instructor is perceived as even more passive in the learning experience than in student-centered learning?

References:

Kay, R. H. (2012). Exploring the use of video podcasts in education: A comprehensive review of the literature. Computers in Human Behavior, 28(3), 820-831.

McClellan, D., Chastain, R. J., & DeCaro, M. S. (2023). Enhancing learning from online video lectures: The impact of embedded learning prompts in an undergraduate physics lesson. Journal of Computing in Higher Education. https://doi.org/10.1007/s12528-023-09379-w

Nathan, M. J., & Sawyer, R. K. (2022). Foundations of the learning sciences. Cambridge University Press. https://doi.org/10.1017/9781108888295.004

Tangermann, V. (n.d.). A university is using a dead professor to teach an online class: “I just found out the prof for this online course I’m taking died in 2019.” The Byte. https://futurism.com/the-byte/university-dead-professor-teach-online-class

Week 6 Annotation 2 – Enhancing learning from online video lectures: The impact of embedded learning prompts in an undergraduate physics lesson.

McClellan, D., Chastain, R. J., & DeCaro, M. S. (2023). Enhancing learning from online video lectures: The impact of embedded learning prompts in an undergraduate physics lesson. Journal of Computing in Higher Education. https://doi.org/10.1007/s12528-023-09379-w

McClellan et al. (2023) conducted a study exploring whether adding cognitive or metacognitive embedded prompts to an asynchronous online video would improve learning in an undergraduate physics course. They situated their research in a conceptual framework covering online learning, cognitive versus metacognitive prompts, individual learner differences and prompt effectiveness, deep versus surface learning, disorganized studying, and metacognitive skills, and they utilized cognitive load theory to support their investigation. The study was carried out over three semesters with undergraduate physics students (n = 253) who regularly used online video in their physics course; all three sections were taught by the same instructor. Students were randomly assigned to three sub-groups: a no-prompt control group (n = 86), a cognitive embedded-prompt group (n = 86), and a metacognitive embedded-prompt group (n = 81). All students watched the same video, which was segmented into four parts, and took the same quiz. In both prompt groups, a set of questions appeared at the end of each segment, cognitive or metacognitive depending on the condition to which the students were assigned. In the third semester only, students also completed validated questionnaire instruments asking about their individual differences in deep versus surface learning, organization of study habits, metacognitive awareness, and cognitive load. The results showed that the embedded prompts did not have a statistically significant effect on cognitive load. The effect of the cognitive embedded prompts was in line with prior research; students in this group scored about 10% higher on the quiz than the control group. The metacognitive embedded prompts, while trending more positively than the control condition, did not significantly improve quiz scores, which did not line up with prior research in the area. Overall, McClellan et al. recommend pairing video lectures with cognitive embedded prompts that ask students to extend and organize the information they are learning.

This study was very well organized and easy to follow. The literature review led directly to the research questions, and there were no surprises in the findings as they related to the conceptual framework. However, there were no sources to back up the individual learner differences section of the conceptual framework. While McClellan et al. (2023) did acknowledge the limitations of their study, including that only one of the three semester cohorts completed the surveys, adding a new measurement instrument to the study at the last minute does make me question the results. The study was also conducted in a naturalistic setting, so there was a lack of control over what students were doing while they watched the videos; there was also a missed opportunity to have students report on what they were doing at the time. This limitation was not acknowledged in the study.

As a doctoral student, I take from this study the significance of the literature review in building a conceptual framework that puts readers in the same headspace and point of view as the researchers at the time of publication. While I'll be the first to admit my statistics are rusty, there was a lot of written description of the statistical analysis, and the argument would have been better served with more visual representations of the data. If I use quantitative methods in my research, I will be mindful to include visual representations that can clarify meaning; not every reader will come to an article with the same level of understanding of quantitative methods, and I think it's important that all educational researchers can access the research to meaningfully question it, use it, and apply it to their own projects.

Week 6 Annotation 1 – It is not television anymore: Designing digital video for learning and assessment.

Schwartz, D. L., & Hartman, K. (2007). It is not television anymore: Designing digital video for learning and assessment. In R. Goldman, R. Pea, B. Barron, & S. J. Derry (Eds.), Video research in the learning sciences (pp. 349-366). Lawrence Erlbaum Associates.

Schwartz and Hartman (2007) establish a framework, aimed specifically at those new to the learning sciences, for using video to observe and identify learning outcomes and to strategically implement videos in the classroom learning space. The framework is situated in the then-new world of YouTube and streaming video, where students had access to more information but were limited by broadband availability (streaming video was spotty in 2005). The authors also contextualize the framework in the research of the day, giving an overview of the minimal research then available on the topic. Schwartz and Hartman give an overview of four common learning outcomes: seeing, engaging, doing, and saying. Within each of these four learning outcomes is a variety of criteria that are observable when learners engage with video and that might direct which video is selected, and when, in a learning situation. Seeing videos help learners visualize and experience things they have not experienced or cannot experience; they can be categorized as tour videos (e.g., travel videos, historical re-enactments, nature videos), point-of-view videos (e.g., from a character's point of view), and simulated experiences (e.g., first-person video of a skydive). The associated assessable criteria are recognition, noticing, discernment, and familiarity. Engaging videos are designed to keep people engaged in a topic; they develop interest and contextualize information. The associated assessments are gauging preferences for learning and measuring future learning. Doing videos present human behaviors or processes, with a distinction between attitudes and skills; in order to do an action, the viewer needs to see the action. Videos that shape attitudes ask viewers to identify the behavior and demonstrate it, either globally or in step-by-step fashion. To assess the effectiveness of a doing video, a viewer would be asked to perform the behavior they learned from watching it; if an action cannot be replicated, the viewer should be able to explain the action in detail. Saying videos lead to the acquisition and retention of facts; news broadcasts, for example, fall into this category, and features such as analogy, commentary, and exposition can be used. To assess the success of saying videos, viewers should be asked to recall facts they acquired from watching the video. Overall, Schwartz and Hartman emphasize that video works within a larger learning context, and they provide an extended example of pre-service teachers applying the framework in a course.

Schwartz and Hartman (2007) did an excellent job of establishing the framework. The framework was clearly and explicitly explained, with a clear visual representation. The tenets of the framework were explained, supported with evidence from the literature, and then clear, specific examples were given that a reader could apply to their own situation or research. Additionally, the authors provided an extended example of how the process could be applied in a learning context, along with appropriate critique and contextualization of the framework. This framework is deceptively simple: it is easy to apply to a given condition, but it has a lot of room for growth and assessment in application.

As a doctoral student, I find that this framework provides a way to view the application of video in a classroom. It was interesting to see the development of a framework for studying something that was so new; the framework emerged alongside the technology. The way the framework was explained and presented in the article was also of great value. Thinking forward to explaining my own conceptual or theoretical framework in my dissertation, I want to be just as clear in my writing. I also appreciate how explicit the framework is; I feel as though I could pick it up and apply it to a scenario. As an administrator who works with faculty, I could direct faculty to this framework to help them assess their use of video in their classes, and this could become part of the evaluation process. Since the framework is easily accessible, I feel it could be seen as value-added right away, especially since it looks a lot like the Bloom's Taxonomy wheels that many faculty already know and use. Faculty know Bloom's is easy to apply and would likely assume this framework is just as easy, since it can be visually represented in the same way.