Brief Review – GPTZero

GPTZero (2024), described on its homepage as “bring[ing] transparency to humans navigating a world filled with AI content,” offers educators an opportunity to “restore confidence and understanding” with students by detecting AI-generated content. According to The Tech Edvocate writer Matthew Lynch (n.d.), GPTZero is a significant resource because its accuracy in detecting AI-generated content helps maintain the integrity of human-created works. Even though GPTZero is not 100% reliable in detecting AI-generated content, it still boasts a high degree of accuracy (Lynch, n.d.).

As I log in and explore GPTZero for the first time, I notice the AI scan in the center of the dashboard, waiting to be populated with text. The text box privileges the idea of finding human writing over sussing out AI writing by asking, “Was this text written by a human or AI?” The prompt also addresses users who may be seeking to detect “AI involvement” – again contributing to a narrative that privileges human creation rather than simply identifying AI-generated content. Compared with Turnitin’s (2024a) AI writing detection tool, which promises to “help educators identify when AI writing tools such as ChatGPT have been used in students’ submissions,” GPTZero’s goal seems not to be penalizing students; it seems focused on first identifying what the student created on their own and then opening up a conversation about where AI text has likely been used.

GPTZero provides a “probability breakdown” across a spectrum of human and AI writing, whereas Turnitin (2024b) provides an absolute percentage stating “how much of the submission has been generated by AI.” GPTZero emphasizes that it is not 100% accurate and leaves open the implication that a human needs to do more work to determine whether any integrity violations have occurred. In fact, the generated report states specifically that the “report should not be used to punish students” (GPTZero, AI scan, January 28, 2024). One question I have about the tool: Where does the extensive database of human-written content that submissions are compared against come from?
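To make that contrast concrete, here is a minimal Python sketch of the two reporting styles; the function names, labels, and numbers are hypothetical illustrations of my own and do not reflect GPTZero’s or Turnitin’s actual interfaces or figures.

    # Hypothetical sketch: a probability "breakdown" versus a single absolute figure.
    def probabilistic_report(class_probs):
        """Spectrum-style breakdown that invites further human judgment."""
        parts = ", ".join(f"{label}: {p:.0%}" for label, p in class_probs.items())
        return f"Probability breakdown ({parts}); review with the student before acting."

    def absolute_report(ai_percentage):
        """A single absolute figure presented as settled fact."""
        return f"{ai_percentage:.0f}% of the submission flagged as AI-generated."

    # Invented example values, for illustration only.
    print(probabilistic_report({"human": 0.62, "mixed": 0.23, "AI": 0.15}))
    print(absolute_report(38.0))

Framed this way, the first output reads as an invitation to investigate further, while the second reads as a verdict.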

References 

GPTZero. (2024). The best AI checker for teachers. GPTZero. https://gptzero.me/educators

Lynch, M. (n.d.). What is GPTZero? How to use it to detect AI-generated text. The Tech Edvocate. https://www.thetechedvocate.org/what-is-gptzero-how-to-use-it-to-detect-ai-generated-text/

Tian, E., & Cui, A. (2023). GPTZero: Towards detection of AI-generated text using zero-shot and supervised methods [Computer software]. https://gptzero.me/

Turnitin. (2024a). Turnitin’s AI detector capabilities. Turnitin. https://www.turnitin.com/solutions/topics/ai-writing/ai-detector/

Turnitin. (2024b). Turnitin’s AI writing detection available now. Turnitin. https://www.turnitin.com/solutions/topics/ai-writing/

Week 10 – Extending the Discussion: New Literacies

Technology has shaped and reshaped the way people interact with and create information. Until the late 1990s, information was gate-kept by publishers. People had to be bona fide experts who had paid their dues in the formal education process. Scholars published works on politics, health, science, etc. Conspiracy theories were relegated to cheap tabloids found at checkout counters. There was limited information available, as the experts could only supply so much information at one time. But the internet changed that (Leu & Forzani, 2012). The Web 2.0 technologies of the early 2000s and 2010s especially – blogs, wikis, Facebook, and Twitter – changed the ways users created and interacted with content. Anyone could post anything. In 2008, I was a newly minted college instructor and would warn my students not to use Wikipedia. I’d set up assignments where I would have them look things up on Wikipedia and then edit the pages to feature outlandish nonsense to prove it couldn’t be trusted – because what would twinklestar099 know about the literature of the Cold War that Richard Slotkin didn’t already know better? Those of us teaching and working were wary of the authority of sources in the early days of Web 2.0 because we had grown up with card catalogues and library collections that could only be used on campus, even if the catalogue was now digitized on the computer. There were processes in place to make sure disseminated information was as accurate and well composed as possible (most of the time).

Teaching literacy in writing classes used to mean teaching the difference between an encyclopedia, a trade journal, and a scholarly peer-reviewed source, and the difference between .com and .edu sources. But once Web 2.0 emerged, more was demanded. Yi (2021) gets at this in his definition of AI competency. As the technologies we use become more complex, so do the critical thinking and reflection those tools demand. And then, to add to that, people have to be cognizant of how the information they find, and the tools they use to find it, shape them and their possible futures. We have to simultaneously evaluate the material we get, the source of the material, and its future effect on us, our culture, and our opportunities (Leander & Burriss, 2020; Yi, 2021). Generative AI has the capacity to shape our world more than any Wikipedia article ever did. At some point, AI output is going to become indistinguishable from reality, and people are going to have to be critical observers and critical participants in their world. Leu and Forzani (2012) articulated that youth will drive change and the way language happens. But in the past, language change and social change were driven by youth in a social context where fact-checking was always going to be possible; I’m not sure that will be the same with AI. Not to mention that algorithms shape what we see online, and there is not a single Internet or ChatGPT or TikTok that we all encounter (Leander & Burriss, 2020). Every single thing we do online is shaped by our specific interactions with the Internet. Knobel and Lankshear (2014) discuss the ways people collaborate to create knowledge and de-centralize knowledge making, but they don’t talk about the pitfalls. We’re living them in 2023. Anyone can post anything online. People give credibility to the person with the camera or the blog post, I think, because we’re still stuck in the old gate-keeping mentality of trusting published things because they’re published. Being published used to mean an entire vetting process of credentials, veracity of claims, and research validation; now anyone with a smartphone and an opinion can post anything. We have witnessed, with sharp intensity over the last five years especially, what happens when armchair experts dominate the discourse on so many important topics. And if people do not learn to approach digital texts, digital searches, and the technologies that facilitate our access to that information with a critical eye – especially with all generative AI can do – there are problems on the horizon we cannot even articulate today.

References

Knobel, M., & Lankshear, C. (2014). Studying new literacies. Journal of Adolescent & Adult Literacy, 57(9), 1-5.

Leander, K. M., & Burriss, S. K. (2020). Critical literacy for a posthuman world: When people read, and become, with machines. British Journal of Educational Technology, 51(4), 1262-1276.

Leu, D. J., & Forzani, E. (2012). New literacies in a Web 2.0, 3.0, 4.0, …∞ world. Research in the Schools, 19(1), 75-81.

Yi, Y. (2021). Establishing the concept of AI literacy: Focusing on competence and purpose. JAHR, 12(2), 353-368. https://doi.org/10.21860/j.12.2.8

Week 9 Annotation – The promises and pitfalls of using ChatGPT for self-determined learning in higher education: an argumentative review.

Baskara, F. R. (2023). The promises and pitfalls of using ChatGPT for self-determined learning in higher education: An argumentative review. Prosiding Seminar Nasional Fakultas Tarbiyah Dan Ilmu Keguruan IAIM Sinjai, 2, 95-101. https://doi.org/10.47435/sentikjar.v2i0.1825

Baskara (2023) provides an argumentative review of ChatGPT in relation to self-determined learning and self-regulated learning. The paper reviews the current, emerging literature on ChatGPT to understand the ways self-determined learning and self-regulated learning can be supported by generative AI. The goal of the paper is to review the literature in order to support productive uses and caution against pitfalls for educators in higher education. ChatGPT’s capacity to provide students with personalized learning can foster self-regulated learning; however, there are ethical concerns. The review of the literature shows that learners can get tailored feedback, customized learning opportunities, and amplified self-regulation and self-determination skills. Ethical concerns about privacy and equitable access to the tool are also raised.

This article was brief and repetitive. Baskara did disclose the search methodology for this literature review but did not specify how many articles were reviewed or any criteria for excluding articles; twenty-two papers were cited on the references page. The article made reference to self-determination and self-regulated learning, but no sources were cited to support those elements. The review was laid out very clearly and the writing was easy to follow, but the review seems cursory rather than in depth. While the literature review does pull out key concepts to arrive at its conclusion, there are not many supporting citations that back up what Baskara argues.

Generative AI conversations seem to be dominating many spheres of the educational space right now. There seems to be a push and pull between policing student (and even staff) usage of these tools and allowing students the freedom to learn alongside AI technology. Personally, as a higher education administrator, I see generative AI becoming fully implemented in the educational landscape, much like calculators or the internet. The idea that generative AI can support the development of intrinsic motivation through competency, relevance, and autonomy (e.g., self-determination theory) is exciting. It gives students access to another resource to support them as they write. I think the ability to engage with generative AI to get personalized learning experiences is exciting! I can see that there is uncertainty about what that means for students, especially in writing classes. In a section of composition I’m teaching this semester, I had students help me write our ChatGPT policy, and most students were against the use of ChatGPT in a writing course because they didn’t feel it was relevant to have AI do their work for them. However, once we discussed the ways ChatGPT could be used to support their writing, they started discussing things like pre-writing and editing tasks ChatGPT could help with (relevance) and the choice to use it or not (autonomy). We’ll see what happens, but there is an opportunity for ChatGPT to create motivation to write and to help overcome writer’s block, for example.

Extending the Discussion, Week 7 – TPACK

Mishra and Koehler (2006) presented a framework that, to me, makes sense. When I started working as a technology trainer in 2008/2009, I didn’t know that this was a theory, but the idea that we would train teachers on specific technologies, like Blackboard or YouTube or whatever the software-du-jour was, made no sense to me. I always pushed back on the idea that if you know how to use the software, then you can teach your students to use it. None of the training I received to train others ever focused on a pedagogical underpinning for why the technology should be implemented. And this, of course, led to a lot of resistance from instructors to adopting and adapting technology in their classrooms. The biggest pedagogical reason offered was that it would help students get jobs, and that idea was at odds with the purpose of a college education, which was to learn to think and become well-rounded, not to be trained for a job.

TPACK centers the instructor in the conversation, and in turn, instructors are supposed to center the students so they can deliver the best content using the most reasonable technology to get the job done. In a lot of the work and discourse around technology, students are centered: educators want to know about their perceptions, their learning, their motivation, etc., as influenced by technology. This discourse has also dominated my professional academic career. We are always measuring how X affects student learning – but rarely do we stop to ask how X affects teaching. So, TPACK re-centers this idea in powerful ways.

For my additional article on the topic this week, I read about TPACK and generative AI (Mishra et al., 2023). This article attempts to re-center the discourse around generative AI from “How do we stop students from cheating?” to “How do we create learning spaces that leverage this technology that is not going away?” and “How do we adapt teaching to the changing educational landscape?” And again, the instructor’s work is re-centered.

Education is cognitive work. Educators strive to help their students build their own content knowledge so they can adapt it to their future needs. TPACK provides a way for instructors to be intentional about the use of technology in their classrooms to maximize benefits for their students. One thing I found compelling in Mishra et al. (2023) is the idea that if educators are intentional about the deployment of generative AI into learning spaces, they don’t need to police student use – which is a futile effort anyway, since AI detection software is unreliable. TPACK provides a lens for intentionality that I really find valuable as a student, instructor, and administrator when it comes to technology implementation.

References

Mishra, P., & Koehler, M. J. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017-1054.

Mishra, P., Warr, M., & Islam, R. (2023). TPACK in the age of ChatGPT and generative AI. Journal of Digital Learning in Teacher Education, 39(4), 235-251. https://doi.org/10.1080/21532974.2023.2247480

Week 7 Annotation – TPACK in the age of ChatGPT and generative AI.

Mishra et al. (2023) apply the TPACK framework to ChatGPT to illustrate that the framework is relevant to generative AI. They situate their argument in the significance of the TPACK framework for educational technology and for the daily work of teachers. Mishra et al. also point out that TPACK has fallen into the canon of educational technology research and is no longer engaged with intellectually; rather, education students learn it as just another theory to memorize. The authors seek to make TPACK relevant again and encourage educators to approach questions of generative AI through the lens of this framework. Mishra et al. provide an overview of the state of generative AI in education, pointing out the pitfalls and benefits of using AI in the classroom, and ultimately conclude that the educational space is forever changed because it is no longer a human-only space. Generative AI will require that new pedagogies be created to support learning that is inclusive of generative AI. Through the lens of TPACK, educators will have to think beyond the current moment to the long-term ramifications of generative AI in the classroom in order to assess student learning and prepare students for jobs in a world we cannot yet fully envision. Mishra et al. also point out that assessment will have to change to accommodate the ways learning will change as a result of human/machine partnerships in learning.

While Mishra et al. provide a robust overview of the current state of the TPACK framework in the educational literature, they do fall into the pitfall of separating the elements of TPACK in order to explain the framework rather than analyzing ChatGPT holistically (Saubern et al., 2020). Mishra et al. establish the relevance of applying the TPACK framework and try to provide some examples of how teachers can use generative AI in their classrooms to stimulate learning in new ways. These examples are cursory and mainly show that the discussion over academic dishonesty is the wrong place to situate the conversation in education around generative AI. Ultimately, the paper is very well organized. The literature review pulls from relevant TPACK literature, always choosing to cite the seminal work over discussions of the seminal work. The framework does not appear to be mischaracterized, but the separation of the parts does not allow the creative dynamic between knowledge, pedagogy, and technology that Mishra et al. point out in their literature review to be fully explored in their own assessment of how TPACK can apply to generative AI in the classroom – which is especially interesting because Mishra is one of the architects of TPACK.

All three facets of my academic identity – doctoral student, writing instructor, and administrator – are very interested in how generative AI affects the classroom experience. This article opened my eyes to the reality that learning spaces are no longer human-only learning spaces. While technology has always been at the center of my teaching practice, the technology was always mediating the learning. Now, the technology is participating in the learning (and in some ways it is a co-learning experience, where the AI can learn from the learner, too). I’m very interested in this area as a doctoral student. As a writing teacher, I want to teach my students to leverage generative AI so they can use it proficiently as a tool that supports their critical thinking and helps them get better jobs. As an administrator, I want to understand the application of generative AI in the classroom so I can help faculty create learning spaces that don’t penalize and police students while students navigate how to use generative AI as a learning tool in their own educational journeys.

References

Mishra, P., Warr, M., & Islam, R. (2023). TPACK in the age of ChatGPT and generative AI. Journal of Digital Learning in Teacher Education, 39(4), 235-251. https://doi.org/10.1080/21532974.2023.2247480

Saubern, R., Henderson, M., Heinrich, E., & Redmond, P. (2020). TPACK – time to reboot? Australasian Journal of Educational Technology, 36(3), 1-9.