Week 15 and 16 Annotation – It’s not like a calculator, so what is the relationship between learners and generative artificial intelligence?

Lodge, J. M., Yang, S., Furze, L., & Dawson, P. (2023). It’s not like a calculator, so what is the relationship between learners and generative artificial intelligence? Learning: Research and Practice, 9(2), 117-124. https://doi.org/10.1080/23735082.2023.2261106

            Lodge et al. (2023) set out to establish a frame for discussions about generative AI in education by offering typologies for human-AI interaction. They begin by dispelling the common analogy between generative AI and the calculator. Lodge et al. hold that comparing generative AI to a calculator assumes it completes discrete tasks and arrives at a correct answer, when its functions are far more complex than that oversimplified analogy allows. They view generative AI as an infrastructure rather than a singular tool. Next, Lodge et al. provide an overview of human-generative AI interaction, contextualizing it within the more firmly established field of human-computer interaction, which takes into account the social and cognitive processes involved in learning. Computers have traditionally been used to offload tasks – complex addition problems to a calculator, for example – but generative AI is not simply about offloading a task, taking the output, and moving forward. Lodge et al. introduce a four-quadrant typology of human-machine interaction for education. The vertical axis represents whether AI frees humans from routine tasks so they can engage in higher-level thinking or extends human thought capabilities; the horizontal axis illustrates whether the human-machine relationship functions individually or in collaboration. For example, using a calculator is an individual use and an example of cognitive offloading. Cognitive offloading occurs when people shift part of their cognitive work elsewhere – a Google calendar keeps track of appointments, cell phones hold phone numbers, journals hold notes – but technology does not necessarily have to be involved. Cognitive offloading can damage learning if too much information or cognitive work is left to other devices, but it can also free up thinking space that allows people to engage in higher-order thinking processes. Extended mind theory holds that technology is used to expand human capabilities in complex tasks and thought processes, and generative AI could be an extension of the mind. Next, AI can be used as a collaborative tool that assists in the co-regulation of learning. While generative AI cannot regulate human learning, its outputs as it monitors human learning can help humans reflect on and monitor their learning in relation to their goals; AI can “coach” humans here. Finally, there is hybrid learning: AI tools can help humans learn by providing real-time feedback that is adaptive and personalized, guiding learners to grow and develop through opportunities for reflection.

            Lodge et al. provide a very clear description of their typologies for generative AI use and human interaction in education. Their writing is clear and concise and cites relevant sources. The article provides a framework for discussing generative AI-human interaction without reducing it to the oversimplified claim that “it’s just like a calculator.” Not only does that phrase oversimplify, it also discredits those who have legitimate concerns about the integration and implementation of generative AI in the classroom. The four typologies are discussed in a way that connects each one to the next: Lodge et al. start with generative AI-human interaction, then discuss cognitive offloading, extended mind theory, co-regulated learning, and hybrid learning. This progression allows the framework to develop from simpler to more complex, and as the paper moves forward the seeds for discussion about generative AI grow, leaving the reader to seek out richer connections.

            I am interested in this as a doctoral student because my research interests center on the ways generative AI, cognitive offloading, knowledge acquisition, and transactive memory partnerships work together. When I was earning my Ed.S. degree, I focused my work on the ways Google was shifting knowledge acquisition and learning as a possible transactive memory partner in the context of classroom discussions. But as my work on that degree was ending, ChatGPT emerged, and my interest shifted to generative AI. As Lodge et al. show, there are many ways generative AI will reshape learning and knowledge work – and we do not know all of those ways yet. This is very much in line with my research interests.

Also, reading this made me cringe because I have been using the calculator analogy in almost every discussion I have had about generative AI. And when I read how the analogy was broken down to illustrate the ways generative AI is more complex than a calculator, I realized I had inadvertently been dismissing some very legitimate concerns about the inclusion of generative AI in the classroom. This was a good reminder to slow down and think through something before just embracing it.

APA Citations for Additional Sources

Chauncey, S. A., & McKenna, H. P. (2023). A framework and exemplars for ethical and responsible use of AI chatbot technology to support teaching and learning. Computers and Education: Artificial Intelligence, 5, 100182. https://doi.org/10.1016/j.caeai.2023.100182

Chen, B., Zhu, X., & Díaz del Castillo H, F. (2023). Integrating generative AI in knowledge building. Computers and Education: Artificial Intelligence, 5, 100184. https://doi.org/10.1016/j.caeai.2023.100184

Markauskaite, L., Marrone, R., Poquet, O., Knight, S., Martinez-Maldonado, R., Howard, S., Tondeur, J., De Laat, M., Buckingham Shum, S., Gašević, D., & Siemens, G. (2022). Rethinking the entwinement between artificial intelligence and human learning: What capabilities do learners need for a world with AI? Computers and Education: Artificial Intelligence, 3, 100056. https://doi.org/10.1016/j.caeai.2022.100056

Vinchon, F., Lubart, T., Bartolotta, S., Gironnay, V., Botella, M., Bourgeois, S., Burkhardt, J.-M., Bonnardel, N., Corazza, G. E., Glaveanu, V., Hanson, M. H., Ivcevic, Z., Karwowski, M., Kaufman, J. C., Okada, T., Reiter-Palmon, R., & Gaggioli, A. (2023). Artificial intelligence & creativity: A manifesto for collaboration. Preprint. https://doi.org/10.31234/osf.io/ukqc9

Week 14 – Annotation 2 – Creating technology-enhanced, learner-centered classrooms: K-12 teachers’ beliefs, perceptions, barriers, and support needs.

An, Y. J., & Reigeluth, C. (2011). Creating technology-enhanced, learner-centered classrooms: K–12 teachers’ beliefs, perceptions, barriers, and support needs. Journal of Digital Learning in Teacher Education, 28(2), 54-62.

            An and Reigeluth (2011) conducted a survey-based exploratory study to learn about teacher perceptions, barriers, and support needed to create technology-enhanced classrooms. They provide a literature review that defines the learner-centered classroom space, personalized and customized learning, self-regulated learning, collaborative and authentic learning experiences, and technology integration, drawing from then-current research to develop a theoretical underpinning for technology use in the classroom. The study focuses on five elements related to K-12 teachers: beliefs and attitudes toward using technology in teaching and learning, perceptions of learner-centered instruction, perceptions of barriers to implementing technology and learner-centered classrooms, perceptions of effective professional development and how to improve it, and teacher support needs. An and Reigeluth developed a survey from their literature review and added Likert-scale questions; the survey had a response rate of 32%. The results showed that teachers believed technology was important to teaching, supported the use of classroom technology, learned new technologies, and believed it was part of their job as teachers to learn new technology to implement in the classroom. Teachers viewed learner-centered instruction positively but found it both challenging and rewarding, and most perceived that they provided personalized learning to their students. Most teachers did not perceive their own attitudes as a barrier to implementing learner-centered instruction or technology. Teachers identified two weaknesses in professional development: it was not specific enough and contained too much information in too short a time frame. They wanted improved professional development sessions that gave them hands-on support, learner-centered environments, and specificity. An and Reigeluth also acknowledge that the supports teachers want cannot exist without backing from the systems teachers work in. They end by suggesting further research to test the generalizability of the study’s findings.

            An and Reigeluth organize their study very well. The problem is clearly articulated up front, the conceptual framework leads directly to the five facets of research presented, and the methodology is clearly described. They provide relevant citations, current at the time, to support their work. The research is presented in a clear and organized fashion that is easily accessible to the reader, and the concepts are all clearly operationalized and defined so that the work is accessible even to a scholar unfamiliar with them. The results tie back to the literature review, appropriate studies are cited to support the findings, and the discussion points to further opportunities for research.

            As a doctoral student, this article serves as a good model for organizing a paper. I enjoyed reading a study that was well organized and where all the parts were easily identifiable. I did not have to do a lot of work to understand the concepts presented because the authors clearly defined the terms that needed defining and provided adequate supporting research. This is something I think about as I write my own papers – being clear about the patterns and ideas I see and clearly articulating them for the audience who may read my work. The development of the instrument was also discussed, which matters because it lets readers judge whether what is being measured is accurately represented. The results and discussion also made clear connections back to the literature review.

            As an administrator and a teacher, I found the discussion about teacher perceptions of professional development especially relevant. I share the sentiment that most professional development is too long, too broad, and a reflection of what someone else thinks I should know as a professional rather than what I need to know. It’s a good reminder to ask the people who need the professional development what they need instead of assuming I know, even though I teach, too.

Week 14 Annotation 1 – OMMI: Offline multimodal instruction.

Dirkin, K., Hain, A., Tolin, M., & McBride, A. (2020). OMMI: Offline MultiModal Instruction. In E. Langran (Ed.), Proceedings of SITE Interactive 2020 Online Conference (pp. 24-28). Association for the Advancement of Computing in Education (AACE).

            In this conference paper, Dirkin et al. (2020) present a model of offline multimodal instruction (OMMI) to address the digital divide made evident during the acute phase of the COVID-19 pandemic. Dirkin et al. situate their instructional model in the context of datacasting and provide an overview of the process: data is securely transmitted one way by antenna to students so they can complete their learning. OMMI requires three separate elements to be successfully implemented: an anchor document, creative repurposing of software, and performance-based assessment. Anchor documents are students’ “home base,” giving them a roadmap to success and including links to all the elements students will need to study. Creative repurposing of software means using tools like Word or PowerPoint to “mimic the interactivity of a website” by embedding videos and hyperlinking slides together, which provides students with an organized learning experience. Balanced assessment should be used because students may not have access to instant feedback from the instructor, so formative and self-assessments need to be used to help facilitate learning. It is also important, given that students in the acute phase of the pandemic were under the care of parents who could not provide the necessary levels of assistance to complete learning tasks, that students be given choices and scaffolded projects broken down into small, workable phases. While Dirkin et al. recognize it was best for learners to have access to the Internet, OMMI was the next best thing to keep students from falling behind, replicating as best as possible the interactivity and interconnectivity of the Internet.

            Dirkin et al. organize their paper very well. There is an immediate statement of the problem, followed by contextualization in the literature and in the emergence of the COVID-19 remote learning environment. Dirkin et al. ground their three-pronged approach in research and clearly define their operationalized terms to outline their argument. While conference proposals and proceedings are limited by nature and cannot be fully developed due to space, the argument and its support in the literature are very clearly articulated.

            As an administrator who had to navigate and survive the shift from a mostly on-campus learning experience to an all-online one, I find this work interesting. While I do not work on a campus that largely lacked internet access during remote learning, there were many challenges to the instantaneous pivot to remote learning that occurred in March 2020. For over a year, we had to find creative solutions to help learners where they were – particularly in hands-on disciplines like art and music, which I oversaw at the time. While the OMMI model is not directly applicable to my situation, the idea of the anchor document would have been extremely helpful in navigating situations where a lot of learning was asynchronous. If I put on my writing instructor hat, I can see the benefit, even as we have returned to “normal” – whatever that is – of preparing lessons and teaching opportunities through an anchor document and creating interactive, non-Internet-based resources to support learning for students who may not always be able to work online in the LMS.

Week 12 – Annotation “Do less teaching, more coaching: Toward critical thinking for ethical applications of artificial intelligence”

Park, C. S.-Y., Kim, H., & Lee, S. (2021). Do less teaching, more coaching: Toward critical thinking for ethical applications of artificial intelligence. Journal of Learning and Teaching in Digital Age, 6(2), 97-100.

Park et al. (2021) present a distilled version of a series of academic discussions held over the course of the pandemic as these educators attempted to meet their students’ learning needs under emergent conditions. The authors provide an overview of four areas where educators should work to coach students as their education intersects with AI use in the classroom. The first is that virtual learning spaces could make it harder for students to think about what they want or what their peers want in favor of what AI is presenting. The authors then move, somewhat oddly, to a discussion of AI in healthcare that is not really related to their students. Park et al. argued that healthcare professionals should not be reliant on AI but rather critical of it, because there are many variables AI is unable to take into account at this time. They argued AI should not replace humans but should work alongside them, especially in healthcare. Finally, Park et al. argued humans should prioritize their own intellectual curiosity to create their own knowledge.

While the article is slightly dated, the notion that we should be thinking critically about how AI is used in educational spaces is very much at the forefront of thinking about AI. The piece is an interesting way to display some different facets of thinking about AI in education. However, the title is misleading, and there is not much discussion of coaching students in how to use AI. Many ethical concerns are raised, but none are grounded in research or specific examples. The article is also extremely repetitive in its underlying assumption that AI would usurp human thinking. While previous research is occasionally used to underpin the thinking, this is really a thought exercise meant to contribute lines of thinking to the discussion rather than to answer questions.

As a researcher and doctoral student, I think it’s good to be aware of these types of conversations, especially to think about the ethical considerations of novel technology. Still, I find it interesting that each new technology seems to be heralded with the same amount of trepidation as the previous one. As a Millennial, one thing that has been constant in my life is technological change, so I tend to embrace all the new things that come out and think about the ethical implications later. That is not the best approach to take. So I really liked that this article enshrines a years-long conversation about the ethical considerations of a new technology. It’s a good reminder to slow down and think past the shiny new thing to what happens next.

Week 11 – Annotation – Using peer feedback to enhance the quality of student online postings: An exploratory study.

Ertmer, P. A., Richardson, J. C., Belland, B., Camin, D., Connolly, P., Coulthard, G., Lei, K., & Mong, C. (2007). Using peer feedback to enhance the quality of student online postings: An exploratory study. Journal of Computer-Mediated Communication, 12(2), 412-433. https://doi.org/10.1111/j.1083-6101.2007.00331.x

Ertmer et al. (2007) conducted a mixed-methods exploratory study that examined students’ perceptions of feedback in an online classroom and the impact peer feedback had on discussion quality. The study focused on graduate students (n = 15). Ertmer et al. pose three research questions. The first asks what impact peer feedback has on posting quality in an online class and whether that quality increases over time. The second asks how students perceive the value of receiving peer feedback and how they view peer feedback in relation to instructor feedback. The third asks how students perceive the value of providing feedback to peers. These questions are drawn from a literature review that situates feedback in the context of online instruction: Ertmer et al. first examine the role of feedback in instruction, then narrow their focus to feedback in online instruction, then discuss the advantages of using peer feedback, and finally discuss its challenges. A team of researchers worked together on this mixed-methods study. Students in a graduate-level course were taught to use a two-factor rubric based on Bloom’s Taxonomy to rate their peers’ work over the course of the semester. All feedback was filtered through the instructor and returned to the students, with a lag of up to two weeks in some cases. In addition to providing discussion responses, peer responses, and peer feedback, students completed pre- and post-surveys on their perceptions of peer and instructor feedback as well as individual interviews held in person or by phone. The researchers worked to ensure validity and reliability by triangulating data sources, grounding the rubric in Bloom’s Taxonomy, dividing the students among multiple interviewers and evaluators on the research team, using standardized interviews, and providing quotes directly from participants. The results showed that while students did value giving and receiving peer feedback, they still valued instructor feedback more. Peer feedback was valued, but it was not viewed as being as robust or valid as instructor feedback, even when grounded in Bloom’s Taxonomy. The study also did not show that peer feedback significantly improved discussion content. Ertmer et al. noted that using peer feedback can reduce instructor workload and make students equal partners in their learning, and students reported that they learned by constructing feedback for peers. Finally, the limitations of the study included a small sample size, a limited rubric scale, and no interrater reliability protocols for students applying the rubric to provide peer feedback.

Ertmer et al.’s article has several strengths. First, the literature review provides a theoretical and conceptual framework that starts broad and narrows in scope; it moves from feedback generally, to online feedback, to the advantages and disadvantages of peer feedback. The concept of feedback is anchored in works that were current at the time of publication. The research questions flow naturally from the literature presented, and the purpose of the study – to examine student perceptions of feedback as a means of closing a specific gap in the literature, namely peer feedback’s effect on the quality of discourse – is clearly stated. Ertmer et al. also explain their methodology, though some of it seems overly complicated in the name of supporting validity and triangulation of the qualitative data. Even with attempts at interrater reliability in assessing the quality of discussion posts, the team involved too many different people, leaving too much opportunity for subjectivity. It would have been better for all discussion posts to be scored first by one researcher and then by another, after a clear interrater reliability calibration exercise, to ensure the rubric was applied consistently. Second, they did not disclose the survey instrument questions; they only described the Likert scale and noted that it included open-ended items. I know it is common in qualitative studies to choose exemplary comments to illustrate points, but when the researchers state “interview comments suggested that students (n=8) used information obtained, …” and then provide only one illustrative example, that is not enough information for me to see the full scope of the student perceptions the researchers saw. Even the quantitative data given were brief (though easy to understand in context). I would have liked more depth of analysis and more examples, especially from interviews, in a study about student perception. The discussion tied the results back to relevant literature, some of which was also cited in the literature review, helping the reader draw specific and clear connections. Finally, the limitations did not really address that the participants were professional graduate students who likely already have a strong sense of evaluation, and 15 students is not a large class, even for giving feedback. The survey results showed that students preferred instructor feedback, but the study did not address how the peer feedback returned to students was filtered through the instructors to weed out poor-quality feedback, or what effect that filtering had on student perceptions of the feedback. The study also concluded that peer feedback helps instructors save time; however, the researchers read all student feedback, which caused a delay of up to two weeks in some instances, rendering the feedback untimely for student development. The study did not show that peer feedback increased discussion quality, and the factors I just mentioned are not fully discussed as possible reasons. In this design the instructors still had to engage with all of the feedback, and arguably they should have just given the students feedback themselves.

As a doctoral student, I appreciate this as a well-structured article. Even if I do not think all of the methodology is set up in a way that makes complete sense to me, I can still see where the researchers were going and why they designed their study this way. As a doctoral student in a program where peer feedback is the main source of feedback received, it was nice to see that the research supports the idea that peer feedback is valuable in an online environment and that we learn as much from giving feedback as from receiving it. I also agree that instructor feedback is more valuable because it’s the more expert opinion on the subject matter and students are still learning. That doesn’t mean students don’t have valuable contributions, just that in a learning situation where courses are expensive, it’s important for the instructor’s voice to weigh in on the conversation – even if not on every post or point.

Week 10 Annotation – Establishing the concept of AI literacy: Focusing on competence and purpose.

Yi, Y. (2021). Establishing the concept of AI literacy: Focusing on competence and purpose. JAHR, 12(2), 353-368. https://doi.org/10.21860/j.12.2.8

Yi (2021) establishes AI literacy using traditional literacy as a foundation, situating the concept within the ever-expanding realm of literacies that emerge alongside new technologies. Within his framework, Yi calls basic reading, writing, and arithmetic skills functional literacy. Social literacy is new literacy, which takes into account social practice and critical thinking. Technological literacy encompasses technological intimacy and future literacy (the ways technology could be used in the future). He argues that we have moved beyond simply understanding signifiers and signifieds in printed texts; reading and writing are not sufficient to participate in today’s world. Communication media extend functional literacy to include technology as a means of communication. However, to communicate effectively using a technology, the user has to understand the changing nature of technology and the ways technology is used to communicate in a specific time and place. Yi rejects the idea that definitions of AI literacy belong as an extension of digital literacy discussions because those definitions all “set goals for artificial intelligence education” (p. 359). Yi’s own definition centers on the competence of adaptability: AI-literate individuals will use AI, adapt it to help shape their lives, and recognize the cultural change that comes as a result of AI usage. AI literacy also requires that a person be able to maintain their privacy and leverage AI tools to help realize their goals. Using AI helps humans grow through non-human technology. AI literacy is inclusive of functional literacy, technological literacy, and new literacy, and competence in it is demonstrated through metacognitive use and anticipation of future needs. To be successful, people need to consider the ways AI could alter their future prospects and educate themselves accordingly. This also means learners can use AI to create personalized learning, while teachers remain alongside to mentor and guide.

Yi does a good job of grounding his theory in the traditions of literacy studies, new literacy, and technological literacy. He establishes clearly how AI literacy is the next evolution of new literacy and emphasizes that adaptability will be at the core of human and non-human interaction. The sources he cites to articulate his point are grounded in literacy studies, motivational research, and work in artificial intelligence, and the concept emerges naturally from that literature.

As a doctoral student, educator, and higher education administrator, I find that this view of AI literacy opens up conversation about what it means to partner with non-human technology in a learning setting. New literacies up to this point have focused on technologies that served as repositories of knowledge or allowed users to create and interact with knowledge – all at the human level. The shift to AI is different from the shift from traditionally published material to digital material that anyone could produce. AI not only has the capacity to give humans shortcuts in consuming information; it can take human information and create new information and new knowledge. As a doctoral student, I think this is a fascinating thing to study. As a writing teacher, it’s important to understand so I can prepare my students. As an administrator, this is going to make writing AI policy very difficult because policies are slow to form and the future has to be taken into account. AI is so new that it’s also almost impossible for anyone to claim to be AI literate.

Week 8 Annotation – Characteristics of productive feedback encounters in online learning

Jensen, L. X., Bearman, M., & Boud, D. (2023). Characteristics of productive feedback encounters in online learning. Teaching in Higher Education, 1-15. https://doi.org/10.1080/13562517.2023.2213168

Jensen et al. (2023) take a digital ethnographic approach to studying the perceived usefulness of feedback for students taking online classes at an Australian and a Danish university, in courses that were not emergency remote courses created in response to COVID-19. Jensen et al. situate their argument in the context of productive feedback – feedback a student finds meaningful – and feedback in higher education, noting that feedback can come from interactions with humans, technology, or resources. The dataset was derived from 18 students whose online text-based work was observed and from 27 semi-structured interviews conducted using longitudinal audio diaries. The data were thematically coded, and three major types of feedback encounter emerged. The first was elicited feedback encounters, where students directly ask for feedback, for example by requesting peer review or emailing a question. The second was formal feedback encounters, where feedback is structured as part of the course design, such as instructor feedback on a submitted assignment. The third was incidental feedback encounters, where students get information that causes them to reflect on their understanding, such as a discussion with peers about the directions for an assignment. Feedback has two types of impact: instrumental or substantive. Instrumental feedback clearly delineates for the student what action to take next; it often leads to superficial changes based on direct instruction about what to do. Substantive feedback asks a student to critically reflect on their own assumptions; it is often ignored because it is too challenging, but when the student engages with it, it reshapes their understanding of the task, the work, or their own approach. Instrumental and substantive feedback are equally valuable to student learning and each serves a purpose. However, all feedback is most valuable when students are open to it and it arrives in time for them to apply it to their current work.

Jensen et al. do a good job of situating their problem in the context of feedback. There is discussion of the framework and approach to feedback, but it is not related back to the discussion or the findings in a clear way. It is also not clear whether the authors collected the dataset themselves or used a dataset collected by someone else and made available for other researchers. It is, however, very easy to follow the conceptual framework to the research questions and the methodology used to explore this problem.

Feedback is something I have grappled with for a very long time as an educator. When I teach writing classes, I am always swamped by feedback. When I teach online courses, I log in every single day to read what students post and provide them formative feedback that I hope will shape their work. I’m not always sure that students read and engage with the feedback I give them, except when I require it as part of a reflection assignment (as Jensen et al. pointed out in their literature review). But I do agree that students will happily take surface feature/lower-order concern feedback and apply it easily because it is direct and tells them what to do. For example, if I tell them to restate their thesis and give them a template, they almost always do it. But if I ask them to reframe their thinking on a topic – which would lead to major overhaul of their paper – they often don’t do anything with that feedback. Jensen et al. pointed out that this type of feedback is hard to do. I mean, it’s a big ask to ask a first-year composition student to reframe their entire way of thinking about a topic while at the same time they’re learning how to research, evaluate sources, and all the things that come with learning to do academic research. It defeats the purpose of giving that kind of feedback, however, for me to tell them how to think about the topic differently. But this kind of feedback is successful if they’re ready and open to the conversation of changing how they think.

In online learning, it’s even harder to give the kind of feedback that leads to substantive learning because you can’t see the student to know how well it’s received or even understood. I also don’t always know the right time to give that feedback so it’s useful. In 8-week classes, I’m usually a week behind in grading, so students are getting feedback while they’re already working two assignments past the one the feedback addresses. It’s not really helpful anymore. I need to think about ways to structure feedback so they get it when it’s useful.

Week 7 Annotation – TPACK in the age of ChatGPT and generative AI.

Mishra et al. (2023) apply the TPACK framework to ChatGPT to illustrate that the framework is relevant to generative AI. Mishra et al. situate their argument in the significance of the TPACK framework in educational technology and the daily work of teachers. They also point out that TPACK has fallen into the canon of educational technology research and is no longer engaged with intellectually; rather, education students learn it as just another theory to memorize. They seek to make TPACK relevant again and to encourage educators to approach questions of generative AI through the lens of this framework. Mishra et al. provide an overview of the state of generative AI in education, pointing out the pitfalls and benefits of using AI in the classroom, and ultimately conclude that the educational space is forever changed because it is no longer a human-only space. Generative AI will require new pedagogies to support learning that is inclusive of it. Through the lens of TPACK, educators will have to think beyond the current moment to the long-term ramifications of generative AI in the classroom in order to assess student learning and prepare students for jobs in a world we cannot yet fully envision. Mishra et al. also point out that assessment will have to change to accommodate the ways learning will change as a result of human-machine partnerships in learning.

While Mishra et al. provide a robust overview of the current state of the TPACK framework in the educational literature, they fall into the pitfall of separating the elements of TPACK in order to explain the framework rather than analyzing ChatGPT holistically (Saubern et al., 2020). Mishra et al. make the case for the continued relevance of the TPACK framework and try to provide some examples of how teachers can use generative AI in their classrooms to stimulate learning in new ways. These examples are cursory and mainly show that academic dishonesty is the wrong place to situate the conversation in education around generative AI. Ultimately, the paper is very well organized. The literature review pulls from relevant TPACK literature, always choosing to cite the seminal work over discussions of the seminal work. The framework does not appear to be mischaracterized, but separating the parts keeps the creative dynamic between knowledge, pedagogy, and technology that Mishra et al. point out in their literature review from being fully explored in their own assessment of how TPACK can apply to generative AI in the classroom – which is also interesting because Mishra is one of the architects of TPACK.

All three facets of my academic identity – doctoral student, writing instructor, and administrator – are very interested in how generative AI affects the classroom experience. This article opened my eyes to the reality that learning spaces are no longer human-only spaces. While technology has always been at the center of my teaching practice, the technology was always mediating the learning. Now the technology is participating in the learning (and in some ways it’s a co-learning experience, where the AI can learn from the learner, too). I’m very interested in this area as a doctoral student. As a writing teacher, I want to teach my students to leverage generative AI so they can be proficient in using it as a tool that amplifies their critical thinking and helps them get better jobs. As an administrator, I want to understand the application of generative AI in the classroom to help faculty create learning spaces that don’t penalize and police students while those students navigate how to use generative AI as a learning tool in their own educational journey.

References

Mishra, P., Warr, M., & Islam, R. (2023). TPACK in the age of ChatGPT and generative AI. Journal of Digital Learning in Teacher Education, 39(4), 235-251. https://doi.org/10.1080/21532974.2023.2247480

Saubern, R., Henderson, M., Heinrich, E., & Redmond, P. (2020). TPACK – time to reboot? Australasian Journal of Educational Technology, 36(3), 1-9.

Week 6 Annotation 2 – Enhancing learning from online video lectures: The impact of embedded learning prompts in an undergraduate physics lesson.

McClellan, D., Chastain, R. J., & DeCaro, M. S. (2023). Enhancing learning from online video lectures: The impact of embedded learning prompts in an undergraduate physics lesson. Journal of Computing in Higher Education. https://doi.org/10.1007/s12528-023-09379-w

McClellan et al. (2023) conducted a study to explore how adding cognitive or metacognitive embedded prompts to an asynchronous online video would improve learning in an undergraduate physics course. McClellan et al. situated their research in a conceptual framework covering online learning, cognitive versus metacognitive prompts, individual learner differences and prompt effectiveness, deep versus surface learning, disorganized studying, and metacognitive skills, and they utilized cognitive load theory to support their investigation. The study was carried out over three semesters with undergraduate physics students (n=253) who regularly used online video in their physics course; all three sections were taught by the same instructor. Students were randomly assigned to three sub-groups: a no-prompt control group (n=86), a cognitive embedded prompt group (n=86), and a metacognitive prompt group (n=81). All students watched the same video, which was segmented into four parts, and took the same quiz. In both prompt groups, a set of questions – cognitive or metacognitive, depending on the assigned condition – appeared at the end of each segment. In the third semester only, students also completed validated questionnaire instruments asking about individual differences in deep versus surface learning, organization of study habits, metacognitive awareness, and cognitive load. The results showed that the embedded prompts did not have a statistically significant effect on cognitive load. The effect of the cognitive embedded prompts was in line with prior research: students in this group achieved quiz scores 10% higher than the control group. The metacognitive embedded questions, while trending more positively than the control group, did not line up with research in the area, and quiz scores did not significantly improve. Overall, McClellan et al. recommend that cognitive embedded prompts asking students to extend and organize the information they are learning be used with video lectures.

This study was very well organized and easy to follow. The literature review led directly to the research questions, and there were no surprises in the findings as they related to the conceptual framework. However, there were no sources to back up the individual learner differences section of the conceptual framework. While McClellan et al. (2023) did acknowledge the limitations of their study, including that the surveys were administered in only one of the three semesters, adding a new measurement instrument at the last minute does make me question the results. This study was also conducted in a naturalistic setting, so there was a lack of control over what students were doing while they watched the videos; there was also a missed opportunity to have students report on what they were doing at the time. This was not acknowledged in the study.

As a doctoral student, this study shows the significance of the literature review in building up a conceptual framework to put others in the same headspace and point of view as the researchers at the time of publication. While I’ll be the first to admit my statistics are rusty, there was a lot of written description of the statistical analysis, and the argument would have been better served with more visual representations of the data. If I use quantitative methods in my research, I will be mindful to include visual representations which can clarify meaning – not every reader is going to come to the article with the same level of understanding of quantitative methods and I think it’s important that all educational researchers can access the research to meaningfully question it, use it, and apply it to their own projects.

Week 6 Annotation 1 – It is not television anymore: Designing digital video for learning and assessment.

Schwartz, D. L., & Hartman, K. (2007). It is not television anymore: Designing digital video for learning and assessment. In R. Goldman, R. Pea, B. Barron, & S. J. Derry (Eds.), Video research in the learning sciences (pp. 349-366). Lawrence Erlbaum Associates.

Schwartz and Hartman (2007) establish a framework, aimed specifically at those new to the learning sciences, for using video to observe and identify learning outcomes and to strategically implement video in the classroom learning space. The framework is situated in the then-new world of YouTube and streaming video, where students had access to more information but were limited by broadband access (because streaming video was spotty in 2005). The authors also contextualize their framework in the research of the day, giving an overview of the minimal research available on the topic in 2007. Schwartz and Hartman give an overview of four common learning outcomes: seeing, engaging, doing, and saying. Within each of these outcomes is a variety of criteria that are observable when learners engage with video and that might direct which video is selected, and when, for a learning situation. Seeing videos help learners visualize and experience things they have not or cannot experience; they can be categorized as tour videos (e.g., travel videos, historical re-enactments, nature videos), point-of-view videos (e.g., from a character’s point of view), and simulated experiences (e.g., first-person video of a skydive). The associated assessable criteria are recognition, noticing, discernment, and familiarity. Engaging videos are designed to keep people engaged in a topic; they develop interest and contextualize information, and the associated assessable actions are assessing preferences for learning and measuring future learning. Doing videos present human behavior or processes, with a distinction between attitudes and skills. In order to do an action, the viewer needs to see the action. Videos that shape attitudes ask viewers to identify the behavior and demonstrate it, either globally or in step-by-step fashion. To assess the effectiveness of the video, a viewer would be asked to do the behavior they learned from watching; if an action cannot be replicated, the viewer should be able to explain it in detail. Saying videos lead to the acquisition and retention of facts – news broadcasts, for example, fall into this category – and features of analogy, commentary, and exposition can be used. To assess the success of saying videos, viewers should be asked to recall facts they acquired from watching. Overall, video works within a larger instructional context. Schwartz and Hartman also provide an extended example of pre-service teachers applying the framework in a course.

Schwartz and Hartman (2007) did an excellent job of establishing the framework. It was clearly and explicitly explained, with a clear visual representation. The tenets of the framework were explained and supported with evidence from the literature, and then clear, specific examples were given that a reader could apply to their own situation or research. Additionally, the authors provided an extended example of how the process could be applied in a learning context, as well as appropriate critique and contextualization for the framework. The framework is deceptively simple: it is easy to apply to a given condition, but it leaves a lot of room for growth and assessment in application.

As a doctoral student, this framework provides a way to view the application of video usage in a classroom. It was interesting to see the development of a framework for studying something that was so new. This framework emerged alongside the technology. The way the framework was explained and presented in the article was also of great value. Thinking forward to explaining my own conceptual or theoretical framework in my dissertation, I also want to be as clear in my writing. I also appreciate that the framework was so explicit. I feel as though I could pick this framework up and apply it to a scenario. As an administrator who works with faculty, I could direct faculty to this framework to help them assess their use of video in their classes, as this could be part of the evaluation process. Since this is easily accessible, I feel like it’s something that could be seen as value-added right away, especially since it looks a lot like the Bloom’s Taxonomy wheels that many faculty are already familiar with and use. They know it’s easy to apply Bloom’s and would likely assume this framework is just as easy to apply since it can be visually represented in the same way.