Week 10 Annotation – Establishing the concept of AI literacy: Focusing on competence and purpose.

Yi, Y. (2021). Establishing the concept of AI literacy: Focusing on competence and purpose. JAHR, 12(2), 353-368. https://doi.org/10.21860/j.12.2.8

Yi (2021) establishes AI literacy using traditional literacy as a foundation. Yi situates the concept within the ever-expanding realm of literacies, which emerge alongside new technologies. Within his framework, Yi calls basic reading, writing, and arithmetic skills functional literacy. Social literacy is new literacy, which takes into account social practice and critical thinking. Technological literacy encompasses technological intimacy and future literacy (the ways technology could be used in the future). He argues that we have moved beyond the realm of simply understanding signifiers and signifieds in printed texts; reading and writing are not sufficient to participate in today’s world. Communication media extend functional literacy to include technology as a means of communication. However, to communicate effectively using a technology, the user has to understand the changing nature of technology and the ways technology is used to communicate in a specific time and place. Yi rejects the idea that AI literacy definitions belong as an extension of digital literacy discussions because they all “set goals for artificial intelligence education” (p. 359). Yi’s definition centers on competency in being adaptable. AI-literate individuals will use AI, adapt AI to help them create in their lives, and recognize the cultural change that comes as a result of AI usage. AI literacy also requires that a person be able to maintain their privacy and leverage the AI tool to help them realize their goals. Using AI helps humans grow using non-human technology. AI literacy is inclusive of functional literacy, technological literacy, and new literacy. Competence in AI literacy is demonstrated through metacognitive use and the anticipation of future needs. In order to be successful, people need to consider the ways AI could alter future prospects and educate themselves accordingly. This also means learners can use AI to create personalized learning, while teachers remain alongside to mentor and guide.

Yi does a good job of grounding his theory in the traditions of literacy studies, new literacy, and technological literacy. He establishes clearly how AI literacy is the next evolution of new literacy, and emphasizes that adaptability will be at the core of human/non-human interaction. The sources he cites to articulate his point are grounded in literacy studies, motivational research, and work in artificial intelligence. The concept emerges directly from the literature.

As a doctoral student, educator, and higher education administrator, this new view on AI literacy opens up conversation about what it means to partner with non-human technology in a learning setting. New literacies up to this point were focused on technologies that served as repositories of knowledge or allowed users to create and interact with knowledge – all at the human level. The shift to AI is different from the shift from traditionally published material to digital material that anyone could produce. AI not only has the capacity to allow humans shortcuts in consuming information; it can take human information and create new information and new knowledge. As a doctoral student, I think this is a fascinating thing to study. As a writing teacher, it’s important to understand so I can prepare my students. As an administrator, this is going to make writing AI policy very difficult because policies are slow to form and the future has to be taken into account. AI is so new that it’s also almost impossible for anyone to claim to be AI literate.

Week 9 Annotation – The promises and pitfalls of using ChatGPT for self-determined learning in higher education: an argumentative review.

Baskara, F. R. (2023). The promises and pitfalls of using ChatGPT for self-determined learning in higher education: An argumentative review. Prosiding Seminar Nasional Fakultas Tarbiyah Dan Ilmu Keguruan IAIM Sinjai, 2, 95-101. https://doi.org/10.47435/sentikjar.v2i0.1825

Baskara (2023) provides an argumentative review of ChatGPT in relation to self-determined learning and self-regulated learning. The paper reviews the current, emerging literature on ChatGPT to understand the ways self-determined learning and self-regulated learning can be supported by generative AI. The goal of the paper is to review the literature to support productive uses and caution against pitfalls for educators in higher education. ChatGPT’s capacity to offer students personalized learning can foster self-regulated learning; however, there are ethical concerns. The review of the literature shows that learners can get tailored feedback, customized learning opportunities, and amplified self-regulation and self-determination skills. Ethical concerns about privacy and equitable access to the tool were also raised.

This article was brief and repetitive. Baskara did disclose the search methodology for this literature review, but did not specify how many articles were reviewed or what the criteria for excluding articles were. Twenty-two papers were cited on the References page. The article made reference to self-determination and self-regulated learning, but no sources were cited to support those elements. The review was laid out very clearly and the writing was easy to follow, but the review seems cursory rather than in-depth. While the literature review does pull out key concepts to arrive at the conclusion, there are not many supporting citations to back up what Baskara argues.

Generative AI conversations seem to be dominating many spheres of the educational space right now. There seems to be a push and pull between policing student (and even staff) usage of these tools and allowing students the freedom to learn alongside the AI technology. Personally, as a higher education administrator, I see generative AI becoming fully implemented into the educational landscape, much like calculators or the internet. The idea that generative AI can support the development of intrinsic motivation through competency, relevance, and autonomy (i.e., self-determination theory) is exciting. It allows students access to another resource to support them as they write. I think the ability to engage with generative AI to get personalized learning experiences is exciting! I can see that there is uncertainty about what that means for students, especially in writing classes. In a section of composition I’m teaching this semester, I had students help me write our ChatGPT policy, and most students were against the usage of ChatGPT in a writing course because they didn’t feel it was appropriate to have AI do their work for them. However, once we discussed the ways ChatGPT could be used to support their writing, they started discussing things like pre-writing and editing tasks ChatGPT could help with (relevance) and the choice to use it or not (autonomy). We’ll see what happens, but there is opportunity for ChatGPT to create motivation to write and to overcome writer’s block, for example.

Week 8 – Extending the Discussion

Online learning environments are difficult because they’re asynchronous. In face-to-face classrooms, I could always tell where my students were in terms of understanding and could very easily course correct. I built rapport very easily through bad jokes and being able to show my humanity to my students regularly. In online classes, I’m a block of text, and maybe sometimes a quick video or a voice file. It’s a very different type of interaction with students. But a key piece of success is students feeling like their teacher is present. Richardson and Swan (2003) pointed out that students who felt their instructor was present in the class felt like they learned more. Hrastinski (2009) pointed out that having someone around with a higher level of knowledge than the learner increases learning. It’s difficult to be present as a block of text, and it’s so easy to stifle a discussion as the instructor if you encroach on it too soon.

Online classrooms need a level of instructor interaction, however. They cannot just be left for students to engage with each other or with content and/or technology as a means of feedback. Hrastinski (2009) identified three types of interaction in a classroom: learner to learner, learner to content, and learner to instructor. When I observe and evaluate any classroom environment, I look for all of these interactions. When I build my own classroom environments, I strategically and intentionally build in all of these pieces. I would also add that there needs to be clear interaction between the instructor and the content to model disciplinary thinking for students. All three interactions need to be present for an online classroom to function well (Abrami et al., 2011). Learners cannot be left to engage only with other learners and the content, reaching out to the instructor piecemeal for clarification, and expect to leave with a well-rounded learning experience. Instructors need to set up learner-to-learner engagements that have a specific end goal and participate in that interaction at key points to provide timely feedback. As Jensen et al. (2023) argued, feedback delivered while students are in the process of an assignment, at a point when it is still useful, can lead to substantive learning. Without feedback from the instructor, the learner can be uncertain whether their interactions with peers or content are leading to correct understanding. The instructor also needs to challenge student ideas while they’re developing, as that is when feedback is, in my opinion, most likely to shape approaches and ideas.

References

Abrami, P. C., Bernard, R. M., Bures, E. M., Borokhovski, E., & Tamim, R. M. (2011). Interaction in distance education and online learning: Using evidence and theory to improve practice. Journal of Computing in Higher Education, 23, 82-103.

Hrastinski, S. (2009). A theory of online learning as online participation. Computers & Education, 52(1), 78–82.

Jensen, L. X., Bearman, M., & Boud, D. (2023). Characteristics of productive feedback encounters in online learning. Teaching in Higher Education, 1-15. https://doi.org/10.1080/13562517.2023.2213168

Richardson, J. C., & Swan, K. (2003). Examining social presence in online courses in relation to students’ perceived learning and satisfaction. Journal of Asynchronous Learning Networks, 7(1), 71-88.

Week 8 Annotation – Characteristics of productive feedback encounters in online learning

Jensen, L. X., Bearman, M., & Boud, D. (2023). Characteristics of productive feedback encounters in online learning. Teaching in Higher Education, 1-15. https://doi.org/10.1080/13562517.2023.2213168

Jensen et al. (2023) take a digital ethnographic approach to studying the perceived usefulness of feedback for students taking online classes at an Australian and a Danish university, in courses that were not emergency remote courses due to COVID-19. Jensen et al. situate their argument in the context of productive feedback – feedback a student finds meaningful – and feedback in higher education, noting feedback can come from interactions with humans, technology, or resources. The dataset for this study was derived from 18 students whose online text-based work was observed and from 27 semi-structured interviews conducted alongside longitudinal audio diaries. The data were thematically coded, and three major themes for feedback emerged. The first was elicited feedback encounters, where students directly asked for feedback; an example would be asking for peer review or emailing a question. The second was formal feedback encounters, where feedback is structured as part of the course design; an example is any instructor feedback on a submitted assignment. The third was incidental feedback encounters, where students get information that causes them to reflect on their understanding; an example is a discussion with peers about the directions for an assignment. Feedback has two types of impact: instrumental or substantive. Instrumental feedback clearly delineates for the student what action they need to take next; this type of feedback often leads to superficial changes based on direct instruction about what to do. Substantive feedback asks a student to critically reflect on their own assumptions; this type of feedback is often ignored because it is too challenging, but if the student engages with it, it reshapes their understanding of the task, the work, or their own approach. Instrumental and substantive feedback are equally valuable to student learning, and each serves a purpose. However, all feedback is most valuable when students are open to it and it arrives in time for them to apply it to their current work.

Jensen et al. do a good job of situating their problem in the context of feedback. There was discussion of the framework and approach for feedback, but it was not related back to the discussion or the findings in a clear way. It was also not clear whether the authors collected the dataset on their own or used a dataset that was collected by someone else and made available to other researchers. It was, however, very easy to follow the conceptual framework to the research questions and the methodology used to explore this problem.

Feedback is something I have grappled with for a very long time as an educator. When I teach writing classes, I am always swamped by feedback. When I teach online courses, I log in every single day to read what students post and provide them formative feedback that I hope will shape their work. I’m not always sure that students read and engage with the feedback I give them, except when I require it as part of a reflection assignment (as Jensen et al. pointed out in their literature review). But I do agree that students will happily take surface-feature/lower-order-concern feedback and apply it easily because it is direct and tells them what to do. For example, if I tell them to restate their thesis and give them a template, they almost always do it. But if I ask them to reframe their thinking on a topic – which would lead to a major overhaul of their paper – they often don’t do anything with that feedback. Jensen et al. pointed out that this type of feedback is hard to act on. I mean, it’s a big ask for a first-year composition student to reframe their entire way of thinking about a topic while, at the same time, they’re learning how to research, evaluate sources, and do all the things that come with learning academic research. It defeats the purpose of giving that kind of feedback, however, for me to tell them how to think about the topic differently. This kind of feedback succeeds only if they’re ready and open to the conversation about changing how they think.

In online learning, it’s even harder to give the kind of feedback that leads to substantive learning because you can’t see the student to know how well it’s received or even understood. I also don’t always know the right time to give feedback so that it’s useful. In eight-week classes, I’m usually one week behind in grading, so students receive feedback when they’re already two assignments past the work it addresses. It’s not really helpful anymore. I need to think about ways to structure feedback so they get it when it’s useful.

Extending the Discussion, Week 7 – TPACK/TPCK

Mishra and Koehler (2006) presented a framework that makes sense to me. When I started working as a technology trainer in 2008/2009, I didn’t know that this was a theory, but the idea that we would train teachers on specific technologies, like Blackboard or YouTube or whatever the software-du-jour was, made no sense to me. I always pushed back on the idea that if you know how to use the software, then you can teach your students to use it. None of the training I received to train others ever focused on a pedagogical underpinning for why the technology should be implemented. And this, of course, led to a lot of resistance from instructors to adopting and adapting technology in their classrooms. The biggest pedagogical reason offered was that it would help students get jobs, and that idea was at odds with the purpose of a college education, which was to learn to think and become well-rounded so you could work, not be trained for a single job.

TPACK centers the instructor in the conversation, and in turn, instructors are supposed to center their students so they can deliver the best content using the most reasonable technology to get the job done. In a lot of the work and discourse around technology, students are centered – educators want to know about their perceptions, their learning, their motivation, etc. as influenced by technology. This discourse has also dominated my professional academic career. We are always measuring how X affects student learning – but rarely do we stop to ask how X affects teaching. So, TPACK re-centers this idea in powerful ways.

For my additional article on the topic this week, I read about TPACK and generative AI (Mishra et al., 2023). The article attempts to re-center the discourse around generative AI from “How do we stop students from cheating?” to “How do we create learning spaces that leverage a technology that is not going away?” and “How do we adapt teaching to the changing educational landscape?” And again, the instructor’s work is re-centered.

Education is cognitive work. Educators strive to help their students build their own content knowledge so they can adapt it to their future needs. TPACK provides a way for instructors to be intentional about the use of technology in their classrooms to maximize benefits for their students. One thing I found compelling in Mishra et al. (2023) is the idea that if educators are intentional about the deployment of generative AI into learning spaces, they don’t need to police student use – which is a futile effort anyway, since AI detection software is unreliable. TPACK provides a lens for intentionality that I find valuable as a student, instructor, and administrator when it comes to technology implementation.

References

Mishra, P., & Koehler, M. J. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017-1054.

Mishra, P., Warr, M., & Islam, R. (2023). TPACK in the age of ChatGPT and generative AI. Journal of Digital Learning in Teacher Education, 39(4), 235-251. https://doi.org/10.1080/21532974.2023.2247480

Week 7 Annotation – TPACK in the age of ChatGPT and generative AI.

Mishra et al. (2023) apply the TPACK framework to ChatGPT to illustrate that the framework is relevant to generative AI. Mishra et al. situate their argument in the significance of the TPACK framework in educational technology and the daily work of teachers. Mishra et al. also point out that TPACK has fallen into the canon of educational technology research and is no longer engaged with intellectually; rather, education students learn it as another theory to memorize. They seek to make TPACK relevant again and encourage educators to approach questions of generative AI through the lens of this framework. Mishra et al. provide an overview of the state of generative AI in education, pointing out the pitfalls and benefits of using AI in the classroom, while ultimately coming to the conclusion that the educational space is forever changed because it is no longer a human-only space. Generative AI will require new pedagogies that support learning inclusive of generative AI. Through the lens of TPACK, educators will have to think beyond the current moment to the long-term ramifications of generative AI in the classroom in order to assess student learning and prepare students for jobs in a world we cannot yet fully envision. Mishra et al. also point out that assessment will have to change to accommodate the ways learning will change as a result of human/machine partnerships in learning.

While Mishra et al. provide a robust overview of the current state of the TPACK framework in the educational literature, they do fall into the pitfall of separating the elements of TPACK in order to explain the framework rather than analyzing ChatGPT holistically (Saubern et al., 2020). Mishra et al. establish the relevance of applying the TPACK framework and try to provide some examples of how teachers can use generative AI in their classrooms to stimulate learning in new ways. These examples are cursory and mainly show that the focus on academic dishonesty is the wrong place to situate the conversation around generative AI in education. Ultimately, the paper is very well organized. The literature review pulls from relevant TPACK literature, always choosing to cite the seminal work over discussions of the seminal work. The framework does not appear to be mischaracterized, but separating the parts does not allow the creative dynamic between knowledge, pedagogy, and technology that Mishra et al. pointed out in their literature review to be fully explored in their own assessment of how TPACK can apply to generative AI in the classroom – which is also interesting because Mishra is one of the architects of TPACK.

All three facets of my academic identity – doctoral student, writing instructor, and administrator – are very interested in how generative AI affects the classroom experience. This article opened my eyes to the reality that learning spaces are no longer human-only spaces. While technology has always been at the center of my teaching practice, the technology was always mediating the learning. Now, the technology is participating in the learning (and in some ways, it’s a co-learning experience where the AI can learn from the learner, too). I’m very interested in this area as a doctoral student. As a writing teacher, I want to teach my students to leverage generative AI so they can be proficient in using it as a tool that extends their critical thinking and helps them get better jobs. As an administrator, I want to understand the application of generative AI in the classroom to help faculty create learning spaces that don’t penalize and police students while faculty themselves navigate how to use generative AI as a learning tool in their own educational journeys.

References

Mishra, P., Warr, M., & Islam, R. (2023). TPACK in the age of ChatGPT and generative AI. Journal of Digital Learning in Teacher Education, 39(4), 235-251. https://doi.org/10.1080/21532974.2023.2247480

Saubern, R., Henderson, M., Heinrich, E., & Redmond, P. (2020). TPACK – time to reboot? Australasian Journal of Educational Technology, 36(3), 1-9.

Week 6 – Extending the Discussion

While reading Kay’s (2012) literature review, one thing that stood out to me was the underlying idea that using video in the classroom – especially streaming video that an instructor or students could create on their own – really challenges the role of the instructor. Kay’s literature review found multiple reasons students use videos: improving learning, preparing for class, checking their own understanding, obtaining a global overview of chapters, taking better notes, and improving the quality of face-to-face classes. Kay also pointed out that there was a concern (maybe a fear) among instructors that recording lectures or posting PowerPoint lectures would mean students won’t come to class. Kay’s literature review found that students were as likely to come to class as not when a video lecture was posted, but when it was a PowerPoint lecture, students were less willing to come to class.

Prior to the emergence of the learning sciences in the 1980s, the common model of education was one where knowledge was transferred from instructor to student, creating a dynamic where the instructor had all the power, and where students, to get what they needed, had to be physically present (Nathan & Sawyer, 2022). Educational technology allows a shift in where, when, and how students access information. This also displaces the power dynamic that has been put in place, especially in the context of direct instruction environments. Videos allow students more ownership and control over their learning experiences. Students are not quite ready to give up on face-to-face interactions, however, as evidenced by the fact that brick-and-mortar education still exists in 2023 and that students chose to return to that space after the COVID-19 pandemic’s long pause of face-to-face learning.

While instructors may record, create, or curate video content for their students to consume, that still places them in a different role in the learning context. I see an underlying fear in the ways video can shift the dynamic: if there is a video lecture, it can be reused indefinitely, in perpetuity. For example, Concordia University assigned a course to a deceased professor, using his recorded lecture materials, with the course run by a living professor and two TAs (Tangermann, n.d.). Some ethical concerns come up here. McClellan et al. (2023) also point out that with video lectures, students can overinflate their sense of their own learning because the instructor is not there to immediately guide understanding. The role of the professor even shifts away from “guide on the side,” and I’m not sure yet what it looks like. But I am interested in the question of how video lectures – active or passive in the student experience – can reshape the power dynamic between instructor and student in a learning context. What happens to learning when the instructor is potentially perceived as more passive in the learning experience than in student-centered learning?

References

Kay, R. H. (2012). Exploring the use of video podcasts in education: A comprehensive review of the literature. Computers in Human Behavior, 28(3), 820-831.

McClellan, D., Chastain, R. J., & DeCaro, M. S. (2023). Enhancing learning from online video lectures: The impact of embedded learning prompts in an undergraduate physics lesson. Journal of Computing in Higher Education. https://doi.org/10.1007/s12528-023-09379-w

Nathan, M. J., & Sawyer, R. K. (2022). Foundations of the learning sciences. Cambridge University Press. https://doi.org/10.1017/9781108888295.004

Tangermann, V. (n.d.). A university is using a dead professor to teach an online class: “I just found out the prof for this online course I’m taking died in 2019.” The Byte. https://futurism.com/the-byte/university-dead-professor-teach-online-class

Week 6 Annotation 2 – Enhancing learning from online video lectures: The impact of embedded learning prompts in an undergraduate physics lesson.

McClellan, D., Chastain, R. J., & DeCaro, M. S. (2023). Enhancing learning from online video lectures: The impact of embedded learning prompts in an undergraduate physics lesson. Journal of Computing in Higher Education. https://doi.org/10.1007/s12528-023-09379-w

McClellan et al. (2023) conducted a study to explore whether adding cognitive or metacognitive embedded prompts to an asynchronous online video would improve learning in an undergraduate physics course. McClellan et al. situated their research in a conceptual framework of online learning, cognitive vs. metacognitive prompts, individual learner differences and prompt effectiveness, deep vs. surface learning, disorganized studying, and metacognitive skills. Further, McClellan et al. utilized cognitive load theory to support their investigation. The study was carried out over three semesters with undergraduate physics students (n=253) who regularly used online video in their physics course; all three sections were taught by the same instructor. Students were randomly assigned to three sub-groups: a no-prompt control group (n=86), a cognitive embedded prompt group (n=86), and a metacognitive embedded prompt group (n=81). All students watched the same video, which was segmented into four parts, and took the same quiz. In both prompt groups, a set of questions – cognitive or metacognitive, depending on the assigned condition – appeared at the end of each segment. In the third semester only, students also completed validated questionnaire instruments asking about their individual differences in deep vs. surface learning, organization of study habits, metacognitive awareness, and cognitive load. The results showed that the embedded prompts did not have a statistically significant effect on cognitive load. The effect of the cognitive embedded prompts was in line with prior research; students in this group achieved quiz scores 10% higher than the control group. The metacognitive embedded prompts, while trending more positively than the control group, did not line up with research in the area, and student quiz scores did not significantly improve. Overall, McClellan et al. recommend that video lectures be paired with cognitive embedded prompts that ask students to extend and organize the information they are learning.

This study was very well organized and easy to follow. The literature review led directly to the research questions, and there were no surprises in the findings as they related to the conceptual framework. However, there were no sources to back up the individual learner differences section of the conceptual framework. While McClellan et al. (2023) did acknowledge the limitations of their study, including that only one of the three semesters completed the surveys, adding a new measurement instrument to the study at the last minute does make me question the results. The study was also conducted in a naturalistic setting, so there was a lack of control over what students were doing when they watched the videos; there was a missed opportunity to have students report on what they were doing at the time as well. This was not acknowledged in the study.

As a doctoral student, this study shows me the significance of the literature review in building up a conceptual framework that puts readers in the same headspace and point of view as the researchers at the time of publication. While I’ll be the first to admit my statistics are rusty, there was a lot of written description of the statistical analysis, and the argument would have been better served with more visual representations of the data. If I use quantitative methods in my research, I will be mindful to include visual representations, which can clarify meaning. Not every reader is going to come to an article with the same level of understanding of quantitative methods, and I think it’s important that all educational researchers can access the research to meaningfully question it, use it, and apply it to their own projects.

Week 6 Annotation 1 – It is not television anymore: Designing digital video for learning and assessment.

Schwartz, D. L., & Hartman, K. (2007). It is not television anymore: Designing digital video for learning and assessment. In R. Goldman, R. Pea, B. Barron, & S. J. Derry (Eds.), Video research in the learning sciences (pp. 349-366). Mahwah, NJ: Lawrence Erlbaum Associates.

Schwartz and Hartman (2007) establish a framework, aimed specifically at those new to the learning sciences, for how to use video to observe and identify learning outcomes and to strategically implement videos into the classroom learning space. The framework is situated in the then-new world of YouTube and streaming video, where students had access to more information but were limited by broadband access (because streaming video was spotty in 2005). The authors also contextualize their framework in the research of the day, giving an overview of the minimal research available on the topic in 2007. Schwartz and Hartman give an overview of four common learning outcomes: seeing, engaging, doing, and saying. Within each of these four common learning outcomes is a variety of criteria that are observable when learners engage with video and that might direct which videos are selected, and when, for a learning situation. Seeing videos help learners visualize and experience things they have not or cannot experience. Seeing videos can be categorized as tour videos (e.g., travel videos, historical re-enactments, nature videos), point-of-view videos (e.g., from a character’s point of view), and simulated experiences (e.g., first-person video of a skydive). The associated assessable criteria are recognition, noticing, discernment, and familiarity. Engaging videos are designed to keep people engaged in a topic; these videos develop interest and contextualize information. The associated assessable actions are assessing preferences for learning and measuring future learning. Doing videos present human behavior or processes, with a distinction between attitudes and skills. In order to do an action, the viewer needs to see the action. Videos that shape attitudes ask viewers to identify the behavior and demonstrate the behavior, either globally or in step-by-step fashion. To assess the effectiveness of the video, a viewer would be asked to do the behavior they learned from watching the video; if an action cannot be replicated, then the viewer should be able to explain the action in detail. Saying videos lead to the acquisition and retention of facts; things like news broadcasts fall into this category. Features of analogy, commentary, and exposition can be used. To assess the success of saying videos, viewers should be asked to recall facts they acquired from watching the video. Overall, video works within a larger context. Schwartz and Hartman also provide an extended example of pre-service teachers applying the framework in a course.

Schwartz and Hartman (2007) did an excellent job of establishing the framework. The framework was clearly and explicitly explained, and there was a clear visual representation of it. The tenets of the framework were explained and supported with evidence from the literature, and then clear and specific examples were given that a reader could apply to their own situation or research. Additionally, the authors provided an extended example of how this process could be applied in a learning context, along with appropriate critique and contextualization for the framework. The framework is deceptively simple: it is easy to apply to a condition but has a lot of room for growth and assessment in application.

As a doctoral student, this framework provides a way to view the application of video usage in a classroom. It was interesting to see the development of a framework for studying something that was so new; this framework emerged alongside the technology. The way the framework was explained and presented in the article was also of great value. Thinking forward to explaining my own conceptual or theoretical framework in my dissertation, I want to be as clear in my writing. I also appreciate that the framework was so explicit; I feel as though I could pick it up and apply it to a scenario. As an administrator who works with faculty, I could direct them to this framework to help them assess their use of video in their classes, as this could be part of the evaluation process. Since the framework is easily accessible, I feel like it could be seen as value-added right away, especially since it looks a lot like the Bloom’s Taxonomy wheels that many faculty already know and use. They know it’s easy to apply Bloom’s and would likely assume this framework is just as easy to apply since it can be visually represented in the same way.

Week 5 Extension Discussion – Overview of Educational Hypermedia Research

In our guided reading, we were asked to think about researchable ideas from the Kuiper et al. (2005) article. In short, the article explores the new-in-2005 concerns about K-12 students being able to use the Internet in their learning and whether the Internet requires specific skills of students. In 2023, these questions are still relevant.

In the article, Kuiper et al. (2005) reference a research study where students were explicitly taught that when they click on a hyperlink, they also need to interact with it deeply. In 2023, in the college setting where I teach, the assumption is that kids are going to come into the classroom with a fully formed understanding of how to interact with the Internet. The myth of the digital native, coined by Prensky (2001), persists in higher education to the detriment of learners and teachers. Prensky’s theory was that those who grew up with technology – digital natives – would have an innate sense of how to use technology, unlike digital immigrants, who came to technology later. The assumption carries over to educational spaces, where it can be easy to assume that just because students grew up with technology, they will automatically know how to apply that technology to a variety of learning contexts. An innate skill to apply technology to learning does not exist.

Eynon (2020) explored the harm of the persistent nature of the digital native myth. The myth itself presents a generational divide (which, Eynon notes, the literature does not support) and leads to a very hands-off approach among adults in teaching children to use technology. Now, this may be different as so-called elder millennials, the original digital natives per Prensky’s theory, take their places as educators in classrooms. Millennials were assumed to be native to technology because it was ubiquitous as they grew up. As an elder millennial, I know that I had to learn technology and how to apply it on my own. There was no one to teach me because the divide Eynon pointed out was ever-present in my educational experiences. I had no guidance when I encountered hypertext for the first time, for example. The closest I ever got to “online training” was in grad school, when a research librarian taught us Boolean searches in the time before Google was ubiquitous and natural language searches were a thing.

The research opportunities in this area come from looking at how learners’ relationships to technology are established, nurtured, and supported. The skills an Internet user needed in 2005 are also vastly different from the skills an Internet user needs in 2023.

The Web has become a different place. In the early days of the Internet, people were generally leery of it. I remember being explicitly told by high school English teachers and college professors that I could not trust everything I found online. But in 2023, “the Internet” has become an all-encompassing resource. “I read it online” becomes the only citation needed – or, in 2023, “I saw it on TikTok.” It seems that the old tradition of authorial authority (from the days of publishing, when an author’s work had to be vetted for credibility, among other things, before it was published) has been transplanted online. If it’s published online, it must be credible, right? I see this a lot with Internet users who don’t understand that the vetting process for publishing online is hitting “submit” on a website. There are no more checks and balances. The Internet democratizes access to information, and it also allows anyone with Internet access to become a content creator. Search engine algorithms have also become very siloed. People get results based on what they like to see, which means they confront fewer and fewer ideas that challenge their worldviews (Pariser, 2011). Not to mention the dawn of ChatGPT, which manufactures source information to appear credible and returns results based on user inputs.

Students today need to be trained to be critical of the information and resources they encounter online. The Internet is a great repository of information, but not all information is created equal or should be held as having the same value or veracity. The notion that students need specific skills still holds true and remains a valid area of research. It is an area I am personally very interested in.

References

Eynon, R. (2020). The myth of the digital native: Why it persists and the harm it inflicts. In T. Burns & F. Gottschalk (Eds.), Education in the digital age: Healthy and happy children. OECD Publishing. https://doi.org/10.1787/2dac420b-en

Kuiper, E., Volman, M., & Terwel, J. (2005). The web as an information resource in K–12 education: Strategies for supporting students in searching and processing information. Review of Educational Research, 75(3), 285-328. https://doi.org/10.3102/00346543075003285

Pariser, E. (2011). The filter bubble: What the Internet is hiding from you. Penguin UK.

Prensky, M. (2001). Digital natives, digital immigrants, part 1. On the Horizon, 9(5), 1-6. https://doi.org/10.1108/10748120110424816