Tag: edu800
Week 15 and 16 Annotation – It’s not like a calculator, so what is the relationship between learners and generative artificial intelligence?
Lodge, J. M., Yang, S., Furze, L., & Dawson, P. (2023). It’s not like a calculator, so what is the relationship between learners and generative artificial intelligence? Learning: Research and Practice, 9(2), 117-124. https://doi.org/10.1080/23735082.2023.2261106
Lodge et al. (2023) set out to establish a frame for discussions about generative AI technologies in education by offering typologies for AI. They begin by dispelling the common analogy between generative AI technology and the calculator. Lodge et al. hold that comparing generative AI to a calculator assumes that generative AI technologies will simply complete tasks to arrive at a correct answer, but generative AI functions are more complex than that oversimplified analogy suggests. Lodge et al. view generative AI as an infrastructure rather than a singular tool. Next, Lodge et al. provide an overview of human–generative AI interaction. They contextualize this interaction within the more firmly established field of human-computer interaction, which takes into account the social and cognitive processes involved in learning. Computers were used to offload tasks – complex addition problems to a calculator, for example. Generative AI, however, is not about offloading a task, taking the output, and moving forward. Lodge et al. introduce a four-quadrant typology for human and machine interactions in education. The vertical axis represents how AI can free humans from tedious tasks so they can engage in higher-level thinking or extend human thought capabilities. The horizontal axis illustrates how the human-machine relationship functions, individually or in collaboration. For example, using a calculator is an individual use and an example of cognitive offloading. Cognitive offloading occurs when people shift part of their cognitive work elsewhere – for example, a Google calendar keeps track of appointments, cell phones hold phone numbers, journals hold notes, etc. – but technology does not necessarily have to be involved. Cognitive offloading can damage learning if too much information or cognitive work is left to other devices, but it can also free up thinking space that allows people to engage in higher-order thinking processes. 
The extended mind theory holds that technology is used to expand human capabilities in complex tasks and thought processes; generative AI could be such an extension of the mind. AI can also be used as a collaborative tool that assists in the co-regulation of learning. While generative AI cannot regulate human learning, its outputs as it monitors human learning can help humans reflect on and monitor their learning in relationship to their goals; AI can “coach” humans here. Finally, there is hybrid learning. AI tools can help humans learn because they provide real-time feedback that is adaptive and personalized, and AI can guide learners to grow and develop through opportunities for reflection.
Lodge et al. provide a very clear description of their typologies for generative AI use and human interaction in education. Their writing is clear and concise and cites relevant resources. This article provides a framework for discussing human–generative AI interaction without reducing it to the overly simplified statement, “It’s just like a calculator.” Not only does that phrase oversimplify, but it also discredits those who have legitimate concerns about the integration and implementation of generative AI in the classroom. The four typologies are discussed in a way that connects each one to the next. Lodge et al. start with human–generative AI interaction, then discuss cognitive offloading, then extended mind theory, then co-regulated learning and hybrid learning. This progression allows the framework to develop from simpler to more complex, and as the paper moves forward the seeds for discussion about generative AI grow, leaving the reader to seek out more complex connections.
I am interested in this as a doctoral student because I am extremely interested in the ways generative AI, cognitive offloading, knowledge acquisition, and transactive memory partnerships intersect. When I was earning my Ed.S. degree, I focused my work on the ways Google was shifting knowledge acquisition and learning as a possible transactive memory partner in the context of classroom discussions. But as my work on that degree was ending, ChatGPT emerged, and my interest shifted to generative AI. As Lodge et al. showed, there are many ways generative AI will reshape learning and knowledge work – and we don’t know all of those ways yet. So this is very much in line with my research interests.
Also, reading this made me cringe because I have been using the calculator analogy in almost every discussion I have had about generative AI. And when I read how the analogy was broken down to illustrate the ways generative AI is more complex than a calculator, I realized I had inadvertently been dismissing some very legitimate concerns about the inclusion of generative AI in the classroom. This was a good reminder to slow down and think through something before just embracing it.
APA Citations for Additional Sources
Chauncey, S. A., & McKenna, H. P. (2023). A framework and exemplars for ethical and responsible use of AI chatbot technology to support teaching and learning. Computers and Education: Artificial Intelligence, 5, 100182. https://doi.org/10.1016/j.caeai.2023.100182
Chen, B., Zhu, X., & Díaz del Castillo H, F. (2023). Integrating generative AI in knowledge building. Computers and Education: Artificial Intelligence, 5, 100184. https://doi.org/10.1016/j.caeai.2023.100184
Markauskaite, L., Marrone, R., Poquet, O., Knight, S., Martinez-Maldonado, R., Howard, S., Tondeur, J., De Laat, M., Buckingham Shum, S., Gašević, D., & Siemens, G. (2022). Rethinking the entwinement between artificial intelligence and human learning: What capabilities do learners need for a world with ai? Computers and Education: Artificial Intelligence, 3, 100056. https://doi.org/10.1016/j.caeai.2022.100056
Vinchon, F., Lubart, T., Bartolotta, S., Gironnay, V., Botella, M., Bourgeois, S., Burkhardt, J.-M., Bonnardel, N., Corazza, G. E., Glaveanu, V., Hanson, M. H., Ivcevic, Z., Karwowski, M., Kaufman, J. C., Okada, T., Reiter-Palmon, R., & Gaggioli, A. (2023). Artificial intelligence & Creativity: A manifesto for collaboration. Pre-Print. https://doi.org/10.31234/osf.io/ukqc9
Week 14 – Annotation 2 – Creating technology-enhanced, learner-centered classrooms: K–12 teachers’ beliefs, perceptions, barriers, and support needs.
An, Y. J., & Reigeluth, C. (2011). Creating technology-enhanced, learner-centered classrooms: K–12 teachers’ beliefs, perceptions, barriers, and support needs. Journal of Digital Learning in Teacher Education, 28(2), 54-62.
An and Reigeluth (2011) conducted a survey-based exploratory study to learn about teacher beliefs, perceptions, barriers, and support needed to create technology-enhanced classrooms. They provide a literature review that defines the learner-centered classroom space, personalized and customized learning, self-regulated learning, collaborative and authentic learning experiences, and technology integration, drawing from the current-at-the-time research to develop a theoretical underpinning of technology use in the classroom. The study focuses on five elements related to K–12 teachers: beliefs and attitudes toward using technology in teaching and learning; perceptions of learner-centered instruction; perceptions of barriers to implementing technology and learner-centered classrooms; perceptions of effective professional development and how to improve it; and teacher support needs. An and Reigeluth administered a survey developed from their literature review, with additional Likert-scale questions, and achieved a response rate of 32%. The results showed that teachers believed technology was important to teaching, supported the use of classroom technology, learned new technologies, and believed it was part of their job as teachers to learn new technology to implement in the classroom. Teachers viewed learner-centered instruction positively but found it both challenging and rewarding. Most teachers perceived that they provided personalized learning to their students, and most did not perceive their own attitudes as a barrier to implementing learner-centered instruction or technology. Teachers found two weaknesses in professional development: sessions were not specific enough and contained too much information in too short a time frame. Teachers wanted improved professional development sessions that give them hands-on support, learner-centered environments, and specificity. An and Reigeluth also acknowledge that the systems teachers want in place cannot exist without support from the systems teachers work in. They end with a suggestion for further research to test the generalizability of the findings of this study.
An and Reigeluth organize their study very well. The problem was clearly articulated up front. The conceptual framework led directly to the five facets of research presented. The methodology was clearly described. They provided relevant, current citations to support their work. The research was presented in a very clear and organized fashion that was easily accessible to the reader. The concepts were all clearly operationalized and defined, so the paper remains accessible even to a scholar unfamiliar with the concepts. The results tied back to the literature review, and appropriate studies were cited to support the findings. The discussion led to other opportunities for research.
As a doctoral student, I see this article as a good model for organizing a paper. I enjoyed reading a study that was well organized and whose parts were easily identifiable. I did not have to do a lot of work to understand the concepts presented because the authors clearly defined the terms that needed defining and provided adequate supporting research. This is something I think about as I write my own papers – being clear about the patterns and ideas I see and clearly articulating those ideas for an audience who may read my work. The development of the instrument was also discussed, which matters because it lets us trust that what is being measured is accurately represented. The results and discussion also made clear connections back to the literature review.
As an administrator and a teacher, I found the discussion about teacher perceptions of professional development compelling. I share the sentiment that most professional development is too long, too broad, and a reflection of what someone else thinks I should know as a professional rather than what I need to know. It’s a good reminder to ask the people who need the professional development what they need instead of assuming I know, even though I teach, too.
Week 14 Annotation 1 – OMMI: Offline multimodal instruction.
Dirkin, K., Hain, A., Tolin, M. & McBride, A. (2020). OMMI: Offline MultiModal Instruction. In E. Langran (Ed.), Proceedings of SITE Interactive 2020 Online Conference (pp. 24-28). Online: Association for the Advancement of Computing in Education (AACE).
In this conference proposal, Dirkin et al. (2020) present a model of offline multimodal instruction (OMMI) to address the digital divide made evident during the acute phase of the COVID-19 pandemic. Dirkin et al. situate their instructional model in the context of datacasting and provide an overview of the process: data is securely transmitted one way, by antenna, to students so they can complete their learning. OMMI requires three separate elements to be successfully implemented: an anchor document, creative repurposing of software, and performance-based assessment. Anchor documents are students’ “home base,” giving them the roadmap to success and including links to all elements students will need to study. Creative repurposing of software means using tools like Word or PowerPoint to “mimic the interactivity of a website” by embedding videos and hyperlinking slides together, providing students with an organized learning experience. Balanced assessment should be used because students may not have access to instant feedback from the instructor, so formative and self-assessments need to be used to help facilitate learning. It is also important, given that students in the acute phase of the pandemic were under the care of parents who could not always provide the necessary levels of assistance to complete learning tasks, that students be given choices and scaffolded projects broken down into small, workable phases. While Dirkin et al. recognize it was best for learners to have access to the Internet, OMMI was the next best thing to help students not fall behind; the model replicates as best as possible the interactivity and interconnectivity of the Internet.
Dirkin et al. organize their proposal very well. There is an immediate statement of the problem, followed by contextualization in the literature and in the emergence of the COVID-19 remote learning environment. Dirkin et al. ground their three-pronged approach in research and clearly define their operationalized terms to outline their argument. While conference proceedings are limited by nature and cannot be fully developed due to space, the argument and its support in the literature were very clearly articulated.
As an administrator who had to navigate and survive the shift from a mostly on-campus learning experience to an all-online experience, this work is interesting to me. While I do not work on a campus where learners largely lacked access to the internet, there were many challenges in the instantaneous pivot to remote learning that occurred in March 2020. For over a year, we had to find creative solutions to help learners where they were – particularly in hands-on disciplines like art and music, which I oversaw at the time. While the specific OMMI model is not directly applicable to my situation, the idea of the anchor document would have been extremely helpful in navigating situations where a lot of learning was asynchronous. If I put on my writing-instructor hat, I can see the benefit of preparing lessons and teaching opportunities, even as we have returned to “normal” – whatever that is – through an anchor document and creating interactive, non-Internet-based resources to support learning for students who may not always be able to work online in the LMS.
Week 13 Annotation – “Understanding creativity in primary English, science, and history”
McLean, N., Georgiou, H., Matruglio, E., Turney, A., Gardiner, P., Jones, P., & Groves, C. E. (2021). Understanding creativity in primary English, science, and history. The Australian Educational Researcher, 50(2), 581-600. https://doi.org/10.1007/s13384-021-00501-4
McLean et al. (2021) conducted exploratory qualitative research with nine Australian primary school teachers to expand understanding of how teaching creativity manifests in the primary classroom and to see how creative thinking is operationalized within the classroom – a gap the authors perceived in the literature. McLean et al. provided a robust literature review on creativity, emphasizing that the research gap lay in how creativity manifests in the classroom. The study investigated how “teacher practice can increase students’ creative capacity and creative confidence” (para. 6). Three research questions were investigated: What are primary teachers’ conceptualizations of creativity? What does creativity look like in the classroom, according to teachers? And what, according to teachers, are the creative thinking skills associated with each discipline? The nine participants represented their disciplines: three English teachers, three history teachers, and three science teachers. The data was coded to three themes to answer the research questions: definitions, manifestations, and teaching for creativity. The first finding was that while teachers reported creativity was important to student learning, it was a difficult concept to define; teachers could articulate creativity for their discipline, but not clearly or in a complete definition, though all agreed creativity was essential for student learning. Manifestations of creativity were specific classroom practices and examples, such as dramatic performances of concepts or text analysis. Teaching for creativity was also a key theme and included skills such as analysis, communication, curiosity, inquiry, open-mindedness, and problem-solving; however, to use these skills, students needed to be able to do so in discipline-specific ways. Additionally, all teachers spoke of creativity in relationship to foundational skills and knowledge in their area, holding that students could not be creative unless and until they understood the basics of the content area. McLean et al. hold that there is opportunity for more research into how creativity is conceptualized in the classroom to reduce the ambiguity of the concept as it is operationalized in education.
Overall, this article is very well structured. The literature review leads directly to the research questions and to a clear gap that the research can address. McLean et al. did a great job of clearly articulating the research questions and the methodology to address them, and of describing their coding process. The results provided clear and specific examples to help the reader fully see the phenomenon at play. The only things I wish had been included were the questions used in the semi-structured interview process and examples of the coding. While the process was very clearly described, and there was member checking and validation of the coding process, only the coding reference from NVivo was provided, and it would have been nice to see more details from NVivo, for example.
As a writing teacher, I think I take for granted that my students are going to be creative. It seems a given that if you’re going to write a paper, for example, there will be an expression of creativity in the process. But from this week’s readings, it’s clear that creativity, or creative thinking, can and should be more granular and specific. In classroom spaces there are educators who can state that creativity is important – because it’s at the top of Bloom’s Taxonomy and we’ve been taught we want students operating in higher-order thinking activities such as creation – but who are unable to fully articulate what it is to be creative, and that makes it challenging to help students be creative. I think it helps to have a clear definition of creativity for a field that can be operationalized for a task and for assessment. This article affirmed that the ambiguity of creativity can be further refined through more research.
Week 12 – AI Blog Discussion
Nemorin, S., Vlachidis, A., Ayerakwa, H. M., & Andriotis, P. (2023). AI hyped? A horizon scan of discourse on artificial intelligence in education (AIED) and development. Learning, Media and Technology, 48(1), 38-51.
- AI Tech policies could potentially be another form of western imperialism since the West is making up the rules governing AI usage in industry.
- AI Tech is not inert or neutral – its use takes people and turns them into data that can (and will be) exploited.
Nemorin et al. (2023) changed my view of AI use: those who make the rules shape the narrative, and AI will inherently erase people as it collects all the data it can from them; that data will then be used to change the world.
Sofia, M., Fraboni, F., De Angelis, M., Puzzo, G., Giusino, D., & Pietrantoni, L. (2023). The impact of artificial intelligence on workers’ skills: Upskilling and reskilling in organisations. Informing Science: The International Journal of an Emerging Transdiscipline, 26, 39-68.
- AI use requires workers to be skilled and trained critical thinkers, collaborators, and problem solvers in order to effectively leverage AI to maximize potential in the workspace.
- Humans have unique skills that are not replicable by AI (yet – I mean, I have seen Battlestar Galactica and AI could become sentient at some point) and they have to use AI to free up time to work within skillsets AI cannot replicate.
Sofia et al. (2023) offer a theory built on the idea that skilled workers need to develop new skills, which relies on workers being educated in critical thinking, collaboration, and problem solving – but in a society that increasingly diminishes the value of those skills, AI may not be as helpful as imagined in the future.
Touretzky, D., Gardner-McCune, C., Martin, F., & Seehorn, D. (2019, July). Envisioning AI for K-12: What should every child know about AI?. In Proceedings of the AAAI conference on artificial intelligence (Vol. 33, No. 01, pp. 9795-9799).
- AI researchers need to make their work available and accessible to K-12 teachers and students in intellectually appropriate ways.
- Students across the K-12 spectrum are capable of and should be using and evaluating AI tools as part of their education.
Touretzky et al. (2019) reaffirm what I have been hearing as a higher ed administrator for years: kids today are preparing for jobs that don’t even exist yet and will need skills we don’t have yet – by providing them the time and space to experiment with known technology, they’ll be more flexible in learning new skills in the future.
Park, C. S.-Y., Kim, H., & Lee, S. (2021). Do less teaching, more coaching: Toward critical thinking for ethical applications of artificial intelligence. Journal of Learning and Teaching in Digital Age, 6(2), 97-100.
- AI has the huge potential to change classroom teaching, but educators have to teach students to think critically when using AI
- There are many ethical implications that, if not considered, could undermine the educational goal of critical thinking
Park et al. (2021) emphasize that educators need to teach the implications of AI in relationship to critical thinking alongside the content.
Adding in ideas from Peers
Susan Lindsay pointed out that “AI in education” is driven by market interests. As an administrator in higher education, overseeing a writing department, I have been heavily involved in discussions related to AI use in the classroom and its implications. Community partners are now coming back to advisory boards and noting that they want to see graduates who are comfortable working alongside AI technology. This opens up a lot of questions for beginning writing classes, where critical thinking for academic work is often first introduced. The biggest question is: How can we teach students to think critically if we immediately embrace AI technology? It’s an interesting question to grapple with, and it leads to larger questions about the intersection of education and job training. Until recently, a college education wasn’t about being trained for a specific job – it was about gaining the “soft skills” – critical thinking, broad-based knowledge, and professional discourse foundations – needed to be successful on the job.
Martha Ducharme also commented in the same vein about adding AI instruction into K–12 educational spaces. I had not viewed the recommendation Touretzky et al. (2019) made – to teach AI technology alongside other content, as a skill – as a “rip and replace” approach. But I can see how it can be viewed that way. AI has come into play the same way the Internet did. Heck, I remember when I was in the 6th grade in the late 90s and we started learning basic computer programming – it was seen as revolutionary for us to make a turtle avatar draw geometric shapes across a screen. We had computer class twice a week instead of once a week, and it was a disruption, but we still learned all the other basic subjects. So I think it’s really important to prepare students from a young age to be comfortable with the foundational aspects of AI, because by the time they enter the workforce, AI won’t be novel – it will be integrated into life. And if we don’t make the AI visible, they won’t have the necessary skepticism to think critically about it, as Park et al. (2021) warned in their work.
Week 11 – Annotation – Using peer feedback to enhance the quality of student online postings: An exploratory study.
Ertmer, P. A., Richardson, J. C., Belland, B., Camin, D., Connolly, P., Coulthard, G., Lei, K., & Mong, C. (2007). Using peer feedback to enhance the quality of student online postings: An exploratory study. Journal of Computer-Mediated Communication, 12(2), 412-433. https://doi.org/10.1111/j.1083-6101.2007.00331.x
Ertmer et al. (2007) conducted a mixed-methods exploratory study examining students’ perceptions of feedback in an online classroom and the impact peer feedback had on discussion quality. The study focused on graduate students (n = 15). Ertmer et al. pose three research questions. The first is what impact peer feedback has on posting quality in an online class and whether that quality increases over time. The second is what students’ perceptions are of the value of receiving peer feedback, and how peer feedback is perceived in relationship to instructor feedback. The third is what perceptions students have of the value of providing feedback to peers. These three questions are drawn from a literature review that situates feedback in the context of online instruction: first, Ertmer et al. examine the role of feedback in instruction; next, they narrow their focus to the role of feedback in online instruction; then they discuss the advantages of using peer feedback; finally, they discuss its challenges. Ertmer et al. explain that a team of researchers worked together on this mixed-methods study. Students in a graduate-level course were taught how to use a two-factor rubric, based on Bloom’s Taxonomy, to rate their peers’ work over the course of the semester. All feedback was filtered through the instructor and returned to the students, with up to a two-week lag in some cases. In addition to providing discussion responses, peer responses, and peer feedback, students completed pre- and post-surveys on their perceptions of peer and instructor feedback, as well as individual interviews held in person or by phone. 
The researchers worked to ensure validity and reliability by triangulating data sources, using Bloom’s Taxonomy, using multiple interviewers and evaluators (divvying up the students among the research team), using standardized interviews, and providing quotes directly from the participants. The results showed that while students did value giving and receiving peer feedback, they still valued instructor feedback more. Peer feedback was valued, but it was not viewed as being as robust or valid as instructor feedback, even when Bloom’s Taxonomy was used as a basis. The study also did not show that peer feedback significantly improved the discussion content. Ertmer et al. noted that using peer feedback can reduce instructor workload and make students equal partners in their learning, and students reported they learned by constructing feedback for peers. Finally, the limitations of the study included a small sample size, a limited scale on the rubric, and no interrater reliability protocols for students using the rubric to provide peer feedback.
Ertmer et al.’s article has several strengths. First, the literature review provides a theoretical and conceptual framework that starts broad and narrows in scope; they move from feedback generally, to online feedback, to the advantages and disadvantages of peer feedback. The concept of feedback is anchored in works that were current at the time of publication. The research questions flow naturally from the research presented in the literature. The purpose of the study is also clearly stated – to examine student perceptions of feedback – as a means of closing a specific gap in the literature: peer feedback’s effect on shaping the quality of discourse. Ertmer et al. also explain their methodology, but some of it seems overly complicated in the name of supporting validity and triangulation of qualitative data. While a team of researchers worked on the project, they involved too many different people – even with attempts at interrater reliability in assessing the quality of the discussion posts, there is too much opportunity for subjectivity. It would have been better for the discussion posts to all be scored first by one researcher and then by another, after a clear interrater reliability calibration exercise to ensure the rubric was being applied in the same way. Second, they did not disclose the survey instrument questions – they only described the Likert scale and noted that the survey had open-ended questions. I know it is common in qualitative studies to choose exemplary comments to illustrate points, but when the research states “interview comments suggested that students (n=8) used information obtained, …” and then provides only one illustrative example, that is not enough information for me to see the full scope of the student perception the researchers saw. Even the quantitative data given was brief (though easy to understand in context). 
I would have liked to see more depth of analysis and more examples, especially from the interviews, in a study about student perception. The discussion tied the results back to relevant literature, some of which was also cited in the literature review, to help the reader draw specific and clear connections. Finally, the limitations did not really address that the student population was professional graduate students who likely already have a strong sense of evaluation, and that 15 students is not a large class, even when giving feedback. The survey results showed that students preferred instructor feedback, but the study did not really address how the peer feedback returned to students was filtered through instructors to weed out poor-quality feedback, or the effect that had on student perceptions of the feedback. The study also concluded that peer feedback helps instructors save time; however, the researchers in this study read all student feedback, which caused a two-week delay in its return in some instances, rendering it untimely for student development. The study did not show that peer feedback increased the quality of discussion, and the factors I just mentioned are not fully discussed as possible causes. And here, the instructors still had to engage with all the feedback – really, they could have just given the students feedback themselves.
As a doctoral student, I appreciate this as a well-structured article. Even if I do not think all the methodology is set up in a way that makes complete sense to me, I can still see where they were going and why they set their study up in this manner. As a doctoral student in a program where peer feedback is the main source of feedback received, it was nice to see that the research supports the idea that peer feedback is valuable in an online environment and that we learn as much from giving feedback as from receiving it. I also agree that instructor feedback is more valuable because it is the more expert opinion on the subject matter and students are still learning. That doesn’t mean students don’t have valuable contributions, just that in a learning situation where courses are expensive, it’s important for the instructor’s voice to weigh in on the conversation – even if not on every post or point.
Week 10 – Extending the Discussion: New Literacies
Technology has shaped and reshaped the way people interact with and create information. Until the late 1990s, information was gate-kept by publishers. Authors had to be bona fide experts who had paid their dues in the formal education process. Scholars published works on politics, health, science, and so on, while conspiracy theories were relegated to cheap tabloids at checkout counters. Information was limited, as the experts could only supply so much at one time. But the internet changed that (Leu & Forzani, 2012). The Web 2.0 technologies of the early 2000s and 2010s in particular – blogs, wikis, Facebook, and Twitter – changed the ways users created and interacted with content. Anyone could post anything. In 2008, I was a newly minted college instructor and would warn my students not to use Wikipedia. I'd set up assignments where I had them look up topics on Wikipedia and then edit the pages to feature outlandish nonsense, to prove it couldn't be trusted – because what would twinklestar099 know about literature of the Cold War that Richard Slotkin didn't know better? We were wary of the authority of sources in the early days of Web 2.0 because those of us teaching and working had grown up with card catalogues and library collections that could only be used on campus, even after the catalogue was digitized. There were processes in place to ensure that disseminated information was as accurate and well composed as possible (most of the time).
Teaching literacy in writing classes used to mean teaching the difference between an encyclopedia, a trade journal, and a scholarly peer-reviewed source, or between .com and .edu sources. But once Web 2.0 emerged, more was demanded. Yi (2021) gets at this in his definition of AI competency. As our technologies become more complex, so do the means of critically thinking about and reflecting on the tool. Added to that, people have to be cognizant of how the information they find, and the tools they use to find it, shape them and their possible futures. We have to simultaneously evaluate the material we get, the source of that material, and its future effect on us, our culture, and our opportunities (Leander & Burriss, 2020; Yi, 2021). Generative AI has the capacity to shape our world more than any Wikipedia article ever did. At some point, AI output is going to become indistinguishable from reality, and people are going to have to be critical observers of and critical participants in their world. Leu and Forzani (2012) articulated that youth will drive change in the way language happens. But in the past, language change and social change were driven by youth in a social context where fact checking was always going to be possible; I'm not sure that with AI that will remain true. Not to mention that algorithms shape what we see online, and there is not a single Internet or ChatGPT or TikTok we all encounter (Leander & Burriss, 2020). Every single thing we do online is shaped by our specific interactions with the Internet. Knobel and Lankshear (2014) discuss the ways people collaborate to create knowledge and de-centralize knowledge making, but they don't talk about the pitfalls. We're living them in 2023. Anyone can post anything online. People give credibility to the person with the camera or the blog post, I think, because we're still stuck in the old gate-keeping mentality: we can trust published things because they're published.
Being published used to mean an entire vetting process of credentials, veracity of claims, and research validation; now anyone with a smartphone and an opinion can post anything. We have witnessed what happens when armchair experts dominate the discourse on so many important topics, with a sharp intensity in the last five years especially. And if people do not learn to approach digital texts, digital searches, and the technologies that facilitate our access to that information with a critical eye – especially with all generative AI can do – there are problems on the horizon we cannot even articulate today.
References:
Knobel, M., & Lankshear, C. (2014). Studying new literacies. Journal of Adolescent & Adult Literacy, 57(9), 1-5.
Leu, D. J., & Forzani, E. (2012). New literacies in a Web 2.0, 3.0, 4.0, …∞ world. Research in the Schools, 19(1), 75-81.
Leander, K. M., & Burriss, S. K. (2020). Critical literacy for a posthuman world: When people read, and become, with machines. British Journal of Educational Technology, 51(4), 1262-1276.
Yi, Y. (2021). Establishing the concept of AI literacy: Focusing on competence and purpose. JAHR, 12(2), 353-368. https://doi.org/10.21860/j.12.2.8
Week 10 Annotation – Establishing the concept of AI literacy: Focusing on competence and purpose.
Yi, Y. (2021). Establishing the concept of AI literacy: Focusing on competence and purpose. JAHR, 12(2), 353-368. https://doi.org/10.21860/j.12.2.8
Yi (2021) establishes AI literacy using traditional literacy as a foundation. He situates the concept within the ever-expanding realm of literacies, which emerge as new technologies emerge. Within his framework, Yi calls basic reading, writing, and arithmetic skills functional literacy. Social literacy is new literacy, which takes into account social practice and critical thinking. Technological literacy encompasses technological intimacy and future literacy (the ways technology could be used in the future). He argues that we have moved beyond simply understanding signifiers and signifieds in printed texts; reading and writing are not sufficient to participate in today's world. Communication media extend functional literacy to include technology as a means of communication. However, to communicate effectively using a technology, the user has to understand the changing nature of technology and the ways technology is used to communicate in a specific time and place. Yi rejects the idea that definitions of AI literacy belong as an extension of digital literacy discussions because those definitions all "set goals for artificial intelligence education" (p. 359). Yi's definition centers on competency in adaptability: AI-literate individuals will use AI, adapt AI to help them build their lives, and recognize the changes to culture that come as a result of AI usage. AI literacy also requires that a person be able to maintain their privacy and leverage AI tools to help realize their goals. Using AI helps humans grow through non-human technology. AI literacy is inclusive of functional literacy, technological literacy, and new literacy. Competence in AI literacy is demonstrated through metacognitive use and anticipation of future needs. To be successful, people need to consider the ways AI could alter future prospects and educate themselves accordingly.
This also means learners can use AI to create personalized learning, while teachers remain alongside to mentor and guide.
Yi does a good job of grounding his theory in the traditions of literacy studies, new literacy, and technological literacy. He establishes clearly how AI literacy is the next evolution of new literacy and emphasizes that adaptability will be at the core of human and non-human interaction. The sources he cites to articulate his point are grounded in literacy studies, motivational research, and work in artificial intelligence. The concept emerges from the literature.
As a doctoral student, educator, and higher education administrator, I find that this new view of AI literacy opens up a conversation about what it means to partner with non-human technology in a learning setting. New literacies up to this point were focused on technologies that served as repositories of knowledge or allowed users to create and interact with knowledge, all at the human level. The shift to AI is different from the shift from traditionally published material to digital material anyone could produce. AI not only allows humans shortcuts in consuming information; it can take human information and create new information and new knowledge. As a doctoral student, I think this is a fascinating thing to study. As a writing teacher, it's important to understand so I can prepare my students. As an administrator, this is going to make writing AI policy very difficult, because policies are slow to form and the future has to be taken into account. AI is so new that it's also almost impossible for anyone to claim to be AI literate.
Week 9 Annotation – The promises and pitfalls of using ChatGPT for self-determined learning in higher education: an argumentative review.
Baskara, F. R. (2023). The promises and pitfalls of using ChatGPT for self-determined learning in higher education: An argumentative review. Prosiding Seminar Nasional Fakultas Tarbiyah Dan Ilmu Keguruan IAIM Sinjai, 2, 95-101. https://doi.org/10.47435/sentikjar.v2i0.1825
Baskara (2023) provides an argumentative review of ChatGPT in relation to self-determined learning and self-regulated learning. The paper reviews the current, emerging literature on ChatGPT to understand the ways self-determined and self-regulated learning can be supported by generative AI, with the goal of pointing higher education practitioners toward productive uses and cautioning them against pitfalls. ChatGPT's ability to provide students with personalized learning can foster self-regulated learning; however, there are ethical concerns. The review of literature shows that learners can get tailored feedback, customized learning opportunities, and amplified self-regulation and self-determination skills. The ethical concerns raised center on privacy and equitable access to the tool.
This article was brief and repetitive. Baskara did disclose the search methodology for this literature review but did not specify how many articles were read or any criteria for excluding articles; 22 papers were cited on the references page. The article made reference to self-determination and self-regulated learning, but no sources were cited to support those elements. The review was laid out very clearly and the writing was easy to follow, but it seems cursory rather than in-depth. While the literature review does pull out key concepts to arrive at its conclusion, there are not many supporting citations to back up what Baskara argues.
Generative AI conversations seem to be dominating many spheres of the educational space right now. There seems to be a push and pull between policing student (and even staff) usage of these tools and allowing students the freedom to learn alongside AI technology. Personally, as a higher education administrator, I expect generative AI to become fully implemented in the educational landscape, much as calculators and the internet have. The idea that generative AI can support the development of intrinsic motivation through competency, relevance, and autonomy (i.e., self-determination theory) is exciting. It gives students access to another resource to support them as they write, and the ability to engage with generative AI for personalized learning experiences is exciting as well. I can see that there is uncertainty about what that means for students, especially in writing classes. In a section of composition I'm teaching this semester, I had students help me write our ChatGPT policy, and most students were initially against the use of ChatGPT in a writing course because they didn't feel it was relevant to have AI do their work for them. However, once we discussed the ways ChatGPT could support their writing, they started identifying things like the choice of whether to use it (autonomy) and the pre-writing and editing tasks it could help with (relevancy). We'll see what happens, but there is opportunity for ChatGPT to create motivation to write and to help overcome writer's block, for example.