Brief Review – GPTZero

GPTZero (2024), described as “bring[ing] transparency to humans navigating a world filled with AI content” on its homepage, provides educators an opportunity to “restore confidence and understanding” with students by detecting AI-generated content. According to The Tech Edvocate writer Matthew Lynch (n.d.), GPTZero is a significant resource due to its accuracy in detecting AI-generated content, which helps maintain integrity in human-created works. Even though GPTZero is not 100% reliable in detecting AI-generated content, it still boasts a high degree of accuracy (Lynch, n.d.). As I log in and initially explore GPTZero, I notice the AI scan is in the center of the dashboard; it is there, waiting to be populated with text. The text box also privileges the idea of finding human writing over sussing out AI writing by asking: “Was this text written by a human or AI?” The prompt also speaks to users seeking to detect “AI involvement” – again contributing to a narrative that privileges human creation rather than simply identifying AI-generated content. Turnitin’s (2024a) AI writing detection tool, by comparison, promises to “help educators identify when AI writing tools such as ChatGPT have been used in students’ submissions.” GPTZero’s goal seems to be not to penalize students but to first identify what the student created on their own and to open up a conversation about when AI text has likely been used. GPTZero provides a “probability breakdown” across a spectrum of human and AI writing, whereas Turnitin (2024b) provides an absolute percentage stating “How much of the submission has been generated by AI.” GPTZero emphasizes that it is not 100% accurate, leaving open the implication that a human needs to do more work to ascertain whether there are any integrity violations. In fact, the generated report states specifically that the “report should not be used to punish students” (GPTZero, AI scan, January 28, 2024). 
One question I have about the tool is: Where does the extensive database of human-written content that writing is compared against come from? 

References 

GPTZero. (2024). The best AI checker for teachers. GPTZero. https://gptzero.me/educators

Lynch, M. (n.d.). What is GPTZero? How to use it to detect AI-generated text. The Tech Edvocate. https://www.thetechedvocate.org/what-is-gptzero-how-to-use-it-to-detect-ai-generated-text/

Tian, E., & Cui, A. (2023). GPTZero: Towards detection of AI-generated text using zero-shot and supervised methods [Computer software]. https://gptzero.me/

Turnitin. (2024a). Turnitin’s AI detector capabilities. Turnitin. https://www.turnitin.com/solutions/topics/ai-writing/ai-detector/

Turnitin. (2024b). Turnitin’s AI writing detection available now. Turnitin. https://www.turnitin.com/solutions/topics/ai-writing/

Week 15 and 16 Annotation – It’s not like a calculator, so what is the relationship between learners and generative artificial intelligence?

Lodge, J. M., Yang, S., Furze, L., & Dawson, P. (2023). It’s not like a calculator, so what is the relationship between learners and generative artificial intelligence? Learning: Research and Practice, 9(2), 117-124. https://doi.org/10.1080/23735082.2023.2261106

            Lodge et al. (2023) set out to establish a frame for discussions about generative AI technologies in education by offering typologies for AI. They begin by dispelling the common analogy of generative AI technology to the calculator. Lodge et al. hold that comparing generative AI to a calculator assumes that generative AI technologies will be able to do tasks to arrive at a correct answer, but generative AI functions are more complex than that oversimplified analogy suggests. Lodge et al. view generative AI as an infrastructure, rather than a singular tool. Next, Lodge et al. provide an overview of human-generative AI interaction. They contextualize this interaction in the more firmly established study of human-computer interaction, which takes into account the social and cognitive processes involved in learning. Computers were used to offload tasks – like complex addition problems to a calculator, for example. However, generative AI is not about offloading a task to take the data and move forward. Lodge et al. introduce a four-quadrant typology for human and machine interactions in education. The vertical axis represents how AI can free up humans from boring tasks so they can engage in higher-level thinking or extend human thought capabilities. The horizontal axis then illustrates the way the human-machine relationship functions, individually or in collaboration. For example, using a calculator is an individual use and is an example of cognitive offloading. Cognitive offloading occurs when people shift part of their cognitive tasks elsewhere – for example, a Google calendar keeps track of appointments, cell phones hold phone numbers, journals hold notes, etc. – but technology does not necessarily have to be involved. Cognitive offloading can damage learning if too much information or cognitive work is left to other devices. But cognitive offloading can also free up thinking space to allow people to engage in higher-order thinking processes. 
The extended mind theory holds that technology is used to expand human capabilities in complex tasks and thought processes. Generative AI could be an extension of the mind. Next, AI can also be used as a collaborative tool that assists in the co-regulation of learning. While generative AI cannot regulate human learning, the outputs of generative AI as it monitors human learning can help humans reflect on and monitor their learning in relationship to their goals. AI can “coach” humans here. Finally, there is hybrid learning. AI tools can help humans learn because they provide real-time feedback that is adaptive and personalized. AI can guide learners to grow and develop through opportunities for reflection.

            Lodge et al. provide a very clear description of their typologies for generative AI use and human interaction in education. Their writing is clear and concise and cites relevant resources. This article does provide a framework for discussing generative AI-human interaction without reducing it to an overly simplified statement of: “It’s just like a calculator.” Not only does that phrase oversimplify, but it also discredits those who have legitimate concerns about the integration and implementation of generative AI in the classroom. The four typologies are discussed in a way that connects each one to the next. Lodge et al. start out with generative AI-human interaction, then discuss cognitive offloading, then extended mind theory, then co-regulated learning and hybrid learning. This process allows the framework to develop from simpler to more complex, and as the paper moves forward the seeds for discussion about generative AI grow more complex, leaving the reader to seek out more complex connections.

            I am interested in this as a doctoral student because I am extremely interested in the ways generative AI, cognitive offloading, knowledge acquisition, and transactive memory partnerships work. When I was earning my Ed.S. degree, I focused my work on the ways Google was shifting knowledge acquisition and learning as a possible transactive memory partner in the context of classroom discussions. But as my work on that degree was ending, ChatGPT emerged, and my interest shifted to generative AI. As Lodge et al. showed, there are many ways that generative AI will reshape learning and knowledge work – and we don’t know all of those ways yet. So this is something that is very in line with my research interests.

Also, reading this made me cringe because I have been using the calculator analogy in almost every discussion I have had about generative AI. And when I read how the analogy was broken down to illustrate the ways generative AI is more complex than a calculator, I realized I had inadvertently been dismissing some very legitimate concerns about the inclusion of generative AI in the classroom. This was a good reminder to slow down and think through something before just embracing it.

APA Citations for Additional Sources

References

Chauncey, S. A., & McKenna, H. P. (2023). A framework and exemplars for ethical and responsible use of AI chatbot technology to support teaching and learning. Computers and Education: Artificial Intelligence, 5, 100182. https://doi.org/10.1016/j.caeai.2023.100182

Chen, B., Zhu, X., & Díaz del Castillo H, F. (2023). Integrating generative AI in knowledge building. Computers and Education: Artificial Intelligence, 5, 100184. https://doi.org/10.1016/j.caeai.2023.100184

Markauskaite, L., Marrone, R., Poquet, O., Knight, S., Martinez-Maldonado, R., Howard, S., Tondeur, J., De Laat, M., Buckingham Shum, S., Gašević, D., & Siemens, G. (2022). Rethinking the entwinement between artificial intelligence and human learning: What capabilities do learners need for a world with ai? Computers and Education: Artificial Intelligence, 3, 100056. https://doi.org/10.1016/j.caeai.2022.100056

Vinchon, F., Lubart, T., Bartolotta, S., Gironnay, V., Botella, M., Bourgeois, S., Burkhardt, J.-M., Bonnardel, N., Corazza, G. E., Glaveanu, V., Hanson, M. H., Ivcevic, Z., Karwowski, M., Kaufman, J. C., Okada, T., Reiter-Palmon, R., & Gaggioli, A. (2023). Artificial intelligence & Creativity: A manifesto for collaboration. Pre-Print. https://doi.org/10.31234/osf.io/ukqc9

Week 10 – Extending the Discussion: New Literacies

Technology has shaped and reshaped the way people interact with and create information. Up until the late 1990s, information was gate-kept by publishers. People had to be bona fide experts who paid their dues in the formal education process. Scholars published works on politics, health, science, etc. Conspiracy theories were relegated to cheap tabloids found at checkout counters. There was limited information available, as the experts could only supply so much information at one time. But the internet changed that (Leu & Forzani, 2012). The Web 2.0 technologies of the early 2000s and 2010s – blogs, wikis, Facebook, and Twitter – especially changed the ways users created and interacted with content. Anyone could post anything. In 2008, I was a newly minted college instructor and would warn my students not to use Wikipedia. I’d set up assignments where I would have them look up things on Wikipedia and then edit the pages to feature outlandish nonsense to prove it couldn’t be trusted – because what would twinklestar099 know about literature of the Cold War that Richard Slotkin didn’t know better? We were wary of the authority of sources in the early days of Web 2.0 because those of us teaching and working had grown up with card catalogues and library collections that could only be used on campus, even if the catalogue was now digitized on the computer. There were processes in place to make sure disseminated information was as accurate and well composed as was possible (most of the time).

Teaching literacy in writing classes used to mean teaching the difference between an encyclopedia, a trade journal, and a scholarly peer-reviewed source, and the difference between .com and .edu sources. But once Web 2.0 emerged, more was demanded. Yi (2021) gets at this in his definition of AI competency. As the technologies we use become more complex, so do the demands of critical thinking and reflection about the tool. And then, to add to that, people have to be cognizant of how the information they find and the tools they use to find it shape them and their possible futures. We have to simultaneously evaluate the material we get, plus the source of the material, and its future effect on us, our culture, and our opportunities (Leander & Burriss, 2020; Yi, 2021). Generative AI has the capacity to shape our world more than any Wikipedia article ever did. At some point, AI is going to become indistinguishable from reality, and people are going to have to be critical observers and critical participants in their world. Leu and Forzani (2012) articulated that youth will drive the change and the way language happens. But in the past, language change and social change were driven by youth in a social context where fact-checking was always going to be possible; I’m not sure that with AI that will be the same. Not to mention that algorithms shape what we see online, and there is not a single Internet or ChatGPT or TikTok we encounter (Leander & Burriss, 2020). Every single thing we do online is shaped by our specific interactions with the Internet. Knobel and Lankshear (2014) discuss the ways people work to collaborate to create knowledge and de-centralize knowledge making, but they don’t talk about the pitfalls. We’re living them in 2023. Anyone can post anything online. People give credibility to the person with the camera or the blog post, I think, because we’re still stuck with the old gate-keeping mentality of trusting published things because they’re published. 
Being published used to mean an entire vetting process of credentials, veracity of claims, and research validation – now anyone with a smartphone and an opinion can post anything. We have witnessed what happens when armchair experts dominate the discourse on so many important topics, with a sharp intensity in these last five years especially. And if people do not learn to approach digital texts, digital searches, and the technologies that facilitate our access to that information with a critical eye – especially with all generative AI can do – there are problems on the horizon we cannot even articulate today.

References:

Knobel, M., & Lankshear, C. (2014). Studying new literacies. Journal of Adolescent & Adult Literacy, 57(9), 1-5.

Leu, D. J., & Forzani, E. (2012). New literacies in a Web 2.0, 3.0, 4.0, …∞ world. Research in the Schools, 19(1), 75-81.

Leander, K. M., & Burriss, S. K. (2020). Critical literacy for a posthuman world: When people read, and become, with machines. British Journal of Educational Technology, 51(4), 1262-1276.

Yi, Y. (2021). Establishing the concept of AI literacy: Focusing on competence and purpose. JAHR, 12(2), 353-368. https://doi.org/10.21860/j.12.2.8

Week 10 Annotation – Establishing the concept of AI literacy: Focusing on competence and purpose.

Yi, Y. (2021). Establishing the concept of AI literacy: Focusing on competence and purpose. JAHR, 12(2), 353-368. https://doi.org/10.21860/j.12.2.8

Yi (2021) establishes AI literacy using traditional literacy as a foundation. Yi situates the concept within the ever-expanding realm of literacies, which emerge as new technologies emerge. Within his framework, Yi calls basic reading, writing, and arithmetic skills functional literacy. Social literacy is new literacy, which takes into account social practice and critical thinking. Technological literacy encompasses technological intimacy and future literacy (the ways technology could be used in the future). He argues that we have moved beyond the realm of simply understanding signifiers and signifieds in printed texts; reading and writing are not sufficient to participate in today’s world. Communication media extend functional literacy to include technology as a means of communication. However, to communicate effectively using a technology, the user has to understand the changing nature of technology and the ways technology is used to communicate in a specific time and place. Yi rejects the idea that AI literacy definitions belong as an extension of digital literacy discussions because those definitions all “set goals for artificial intelligence education” (p. 359). Yi’s definition centers on competency in being adaptable. AI-literate individuals will use AI, adapt AI to help them shape their lives, and recognize the change to culture that comes as a result of AI usage. AI literacy also requires that a person be able to maintain their privacy and leverage the AI tool to help them realize their goals. Using AI helps humans grow through non-human technology. AI literacy is inclusive of functional literacy, technological literacy, and new literacy. AI literacy competence is demonstrated through metacognition and anticipation of future needs. In order to be successful, people need to consider the ways AI could alter future prospects and educate themselves accordingly. 
This also means learners can use AI to create personalized learning, while teachers remain alongside to mentor and guide.

Yi does a good job of grounding his theory in the traditions of literacy studies, new literacy, and technological literacy. He establishes clearly how AI literacy is the next evolution of new literacy and emphasizes that adaptability will be at the core of human/non-human interaction. The sources he cites to articulate his point are grounded in literacy studies, motivational research, and work in artificial intelligence. The concept emerges from the literature.

As a doctoral student, educator, and higher education administrator, this new view on AI literacy opens up conversation about what it means to partner with non-human technology in a learning setting. New literacies up to this point were focused on technologies that served as repositories of knowledge or allowed users to create and interact with knowledge – all at the human level. The shift to AI is different from the shift from traditionally published material to digital material that anyone could produce. AI not only has the capacity to allow humans shortcuts in consuming information, it can take human information and create new information and make new knowledge. As a doctoral student, I think this is a fascinating thing to study. As a writing teacher, it’s important to understand so I can prepare my students. As an administrator – this is going to make writing AI policy very difficult because policies are slow to form, yet the future has to be taken into account. AI is so new that it’s also almost impossible for anyone to claim to be AI literate.

Extending the Discussion, Week 7 – TPACK/TPCK

Mishra and Koehler (2006) presented a framework that makes sense to me. When I started working as a technology trainer in 2008/2009, I didn’t know that this was a theory – but the idea that we would train teachers on specific technologies, like Blackboard or YouTube or whatever the software-du-jour was, made no sense to me. I always pushed back on the idea that if you know how to use the software, then you can teach your students to use it. None of the training I received to train others ever focused on a pedagogical underpinning of why the technology should be implemented. And this, of course, led to a lot of resistance from instructors to adopting and adapting technology in their classrooms. The biggest pedagogical reason that was offered is that it would help students get jobs; and that idea was at odds with the purpose of a college education, which was to learn to think and become well-rounded, not to be trained for a job.

            TPACK centers the instructor in the conversation, and in turn, the instructors are supposed to center the students so they can deliver the best content using the most reasonable technology to get the job done. In a lot of the work and discourse around technology, students are centered – educators want to know about their perceptions, their learning, their motivation, etc. as influenced by technology. This discourse has also dominated my professional academic career. We are always measuring how X affects student learning – but rarely do we stop to ask how X affects teaching. So, TPACK re-centers this idea in powerful ways.

            For my additional article on the topic this week, I read about TPACK and generative AI (Mishra et al., 2023). This article attempts to re-center the discourse around generative AI from “How do we stop students from cheating?” to “How do we create learning spaces that leverage this technology that is not going away?” and “How do we adapt teaching to the changing educational landscape?” And again, the instructor’s work is re-centered.

            Education is cognitive work. Educators strive to help their students build their own content knowledge in areas so they can adapt it to their future needs. TPACK provides a way for instructors to be intentional about the use of technology in their classroom to maximize benefits for their students. One thing that I found compelling in Mishra et al. (2023) is the idea that if educators are intentional about the deployment of Generative AI into learning spaces, they don’t need to police student use – which is a futile effort anyway, since AI detection software is unreliable. TPACK provides a lens for intentionality that I really find valuable as a student, instructor, and administrator when it comes to technology implementation.  

References

Mishra, P., & Koehler, M.J. (2006). Technological pedagogical content knowledge: A framework for integrating technology in teacher knowledge. Teachers College Record, 108(6), 1017-1054.

Week 7 Annotation – TPACK in the age of ChatGPT and generative AI.

Mishra et al. (2023) apply the TPACK framework to ChatGPT to illustrate that the framework is relevant to generative AI. Mishra et al. situate their argument in the significance of the TPACK framework in educational technology and the daily work of teachers. Mishra et al. also point out that TPACK has fallen into the canon of educational technology research and that it is not engaged with intellectually; rather, education students learn it as another theory to memorize. They seek to make TPACK relevant again and encourage educators to approach questions of generative AI through the lens of this framework. Mishra et al. provide an overview of the state of generative AI in education, pointing out the pitfalls and benefits of using AI in the classroom, while ultimately coming to the conclusion that the educational space is forever changed because it is no longer a human-only space. Generative AI will require that new pedagogies be created to support learning that is inclusive of generative AI. Through the lens of TPACK, educators will have to think beyond the current moment to the long-term ramifications of generative AI in the classroom to assess student learning and prepare students for jobs in a world we cannot yet fully envision. Mishra et al. also point out that assessment will have to change to accommodate the ways learning will change as a result of human/machine partnerships in learning.

While Mishra et al. provide a robust overview of the current state of the TPACK framework in educational literature, they do fall into the pitfall of separating the elements of TPACK in order to explain the framework rather than analyzing ChatGPT holistically (Saubern et al., 2020). Mishra et al. provide relevancy for application of the TPACK framework and try to provide some examples of how teachers can use generative AI in their classrooms to stimulate learning in new ways. These examples are cursory and only show that the discussion over academic dishonesty is the wrong place to situate the conversation in education around generative AI. Ultimately, the paper is very well organized. The literature review pulls from relevant TPACK literature, always choosing to cite the seminal work over discussions of the seminal work. The framework does not appear to be mischaracterized, but the separation of the parts does not allow the creative dynamic between knowledge, pedagogy, and technology that Mishra et al. pointed out in their literature review to be fully explored in their own assessment of how TPACK can apply to generative AI in the classroom – which is also interesting because Mishra is one of the architects of TPACK.

All three facets of my academic identity – doctoral student, writing instructor, and administrator – are very interested in how generative AI affects the classroom experience. This article opened my eyes to the reality that learning spaces are not human-only learning spaces. While technology has always been at the center of my teaching practice, the technology was always mediating the learning. And now, the technology is participating in the learning (and in some ways, it’s a co-learning experience where the AI can learn from the learner, too). I’m very interested in this area as a doctoral student. As a writing teacher, I want to teach my students to leverage generative AI so they can be proficient in using it as a tool to extend their critical thinking and get better jobs. As an administrator, I want to understand the application of generative AI in the classroom to help faculty create learning spaces that don’t penalize and police students while they also navigate how to use generative AI as a learning tool in their own educational journey.

References

Mishra, P., Warr, M., & Islam, R. (2023). TPACK in the age of ChatGPT and generative AI. Journal of Digital Learning in Teacher Education, 39(4), 235-251. https://doi.org/10.1080/21532974.2023.2247480

Saubern, R., Henderson, M., Heinrich, E., & Redmond, P. (2020). TPACK – time to reboot? Australasian Journal of Educational Technology, 36(3), 1-9.