Week 6 Annotation 1 – It is not television anymore: Designing digital video for learning and assessment.

Schwartz, D. L., & Hartman, K. (2007). It is not television anymore: Designing digital video for learning and assessment. In R. Goldman, R. Pea, B. Barron, & S. J. Derry (Eds.), Video research in the learning sciences (pp. 349-366). Mahwah, NJ: Lawrence Erlbaum Associates.

Schwartz and Hartman (2007) establish a framework, aimed specifically at those new to the learning sciences, for using video to observe and identify learning outcomes and to strategically implement video in the classroom learning space. The framework is situated in the then-new world of YouTube and streaming video, where students had access to more information but were limited by broadband access (streaming video was still spotty in 2005). The authors also contextualize their framework in the research of the day, giving an overview of the minimal research available on the topic in 2007. Schwartz and Hartman describe four common learning outcomes: seeing, engaging, doing, and saying. Within each of these four outcomes is a variety of criteria that are observable when learners engage with video, and these criteria might direct which video is selected, and when, in a learning situation. Seeing videos help learners visualize and experience things they have not or cannot experienced firsthand; they can be categorized as tour videos (e.g., travel videos, historical re-enactments, nature videos), point-of-view videos (e.g., from a character's point of view), and simulated experiences (e.g., first-person video of a skydive). The associated assessable criteria are recognition, noticing, discernment, and familiarity. Engagement videos are designed to keep people engaged in a topic; they develop interest and contextualize information. The associated assessable actions are assessing preferences for learning and measuring future learning. Doing videos present human behavior or processes, with distinctions between attitudes and skills. In order to do an action, the viewer needs to see the action. Videos that shape attitudes ask viewers to identify the behavior and demonstrate it, either globally or in step-by-step fashion.
To assess the effectiveness of a doing video, a viewer would be asked to do the behavior they learned from watching the video. If an action cannot be replicated, the viewer should instead be able to explain the action in detail. Saying videos lead to the acquisition and retention of factual knowledge; news broadcasts, for example, fall into this category. Features such as analogy, commentary, and exposition can be used. To assess the success of saying videos, viewers should be asked to recall facts they acquired from watching. Overall, the authors stress that video works within a larger instructional context, and they provide an extended example of pre-service teachers applying the framework in a course.

Schwartz and Hartman (2007) did an excellent job of establishing the framework. It was clearly and explicitly explained and accompanied by a clear visual representation. The tenets of the framework were explained, supported with evidence from the literature, and illustrated with clear, specific examples that a reader could apply to their own situation or research. Additionally, the authors provided an extended example of how the process could be applied in a learning context, along with appropriate critique and contextualization. The framework is deceptively simple: it is easy to apply to a given condition, yet it leaves a lot of room for growth and assessment in application.

As a doctoral student, I find that this framework provides a lens for examining how video is used in a classroom. It was interesting to see the development of a framework for studying something so new; the framework emerged alongside the technology itself. The way the framework was explained and presented in the chapter was also of great value. Thinking forward to explaining my own conceptual or theoretical framework in my dissertation, I want to be as clear in my writing. I also appreciate that the framework is so explicit; I feel as though I could pick it up and apply it to a scenario. As an administrator who works with faculty, I could direct faculty to this framework to help them assess their use of video in their classes, as this could be part of the evaluation process. Since the framework is so accessible, it could be seen as value-added right away, especially since it looks a lot like the Bloom's Taxonomy wheels that many faculty already know and use. They know Bloom's is easy to apply and would likely assume this framework is just as easy, since it can be visually represented in the same way.

Annotated Bibliography – “Enhancing the learning effectiveness of ill-structured problem solving with online co-creation”

In this early empirical study on co-creation in learning, Pee (2019) attempts to support the hypothesis that the open-ended nature of ill-structured problem solving (ISPS) can be used to a learner's advantage in increasing cognitive and epistemic knowledge. Three concepts were derived from business disciplines, where co-creation is commonly used, to develop a framework of online co-creation for testing whether it increases student learning in ISPS: solution co-creation, decision co-creation, and solution sharing. Pee created an asynchronous, voluntary, and optionally anonymous activity on Blackboard in which students participated in decision co-creation around evaluative criteria and then discussed their solutions to the assignment problem, engaging in solution sharing and solution co-creation. Pee interprets the student survey results as indicating that engaging in online co-creation increases learning. Ultimately, Pee suggests that while this early study cannot yet be generalized, it should be replicated in other areas, and current course instructors can implement this method to increase learning in the context of working with ISPS.

While the article excels in presenting its data visually, and the limitations of the study are adequately acknowledged, there are areas of concern in the arguments Pee presents. Pee (2019) offers a cogent statistical analysis of the survey deployed to students (n = 225), and the survey had an excellent return rate of 70.3%, yet the findings are presented as proof that learning increased when the survey actually measured students' perception of learning. A brief follow-up interview with 13 students was mentioned in the article but not discussed in depth; the interviews did not support the hypothesis that learning had increased, though a single student was quoted whose perception of learning had increased. Finally, the examples illustrating the data-collection method were limited to the graduate students, who made up only 32.4% of the sample; the undergraduate student experience shaped most of the survey results, but it was not described in the methodology or discussion. Pee concludes that the survey results show the online co-creation model worked in a classroom to “leverage the multiplicity of ISPs” to enhance student learning, without noting that the survey can only measure subjective perception, since student work was not evaluated and no comparison was made between groups who used co-creation and groups who did not.

As a writing teacher, I find the ideas of ill-structured problems and co-creation interesting. Writing is often difficult to teach because it is amorphous and doesn't have a “right” answer. The idea of online co-creation, where students work together to contribute to discussions of how a project will be evaluated, is exciting because it shifts the burden of teaching in an ISP context from the instructor alone to the instructor and students together. I like this idea in terms of establishing rubrics that are more individualized for learners, helping them grow their writing in ways they find relevant while also meeting course standards and outcomes. As a doctoral student, I am interested in the ways students perceive their own learning versus how instructors perceive student learning based on knowledge acquisition. I find the methodology and framework Pee used to study perception of learning interesting.

References

Pee, L. G. (2019). Enhancing the learning effectiveness of ill-structured problem solving with online co-creation. Studies in Higher Education, 45(11), 2341-2355. https://doi.org/10.1080/03075079.2019.1609924