Brief Review – GPTZero


GPTZero (2024), described as “bring[ing] transparency to humans navigating a world filled with AI content” on its homepage, provides educators an opportunity to “restore confidence and understanding” with students by detecting AI-generated content. According to The Tech Edvocate writer Matthew Lynch (2023), GPTZero is a significant resource because its accuracy in detecting AI-generated content helps maintain integrity in human-created works. Even though GPTZero is not 100% reliable in detecting AI-generated content, it still boasts a high degree of accuracy (Lynch, 2023).

As I log in and initially explore GPTZero, I notice the AI scan in the center of the dashboard, waiting to be populated with text. The text box privileges the idea of finding human writing over sussing out AI writing by asking: “Was this text written by a human or AI?” The prompt also cautions users who are seeking to detect “AI involvement” – again contributing to a narrative that privileges human creation rather than simply identifying AI-generated content.

Compare this to Turnitin’s (2024a) AI writing detection tool, which promises to “help educators identify when AI writing tools such as ChatGPT have been used in students’ submissions.” GPTZero’s goal seems not to be to penalize students; rather, it focuses first on identifying what the student created on their own and then on opening a conversation about where AI text has likely been used. GPTZero provides a “probability breakdown” along a spectrum of human and AI writing, whereas Turnitin (2024b) provides an absolute percentage stating “How much of the submission has been generated by AI.” GPTZero emphasizes that it is not 100% accurate, leaving open the implication that a human needs to do more work to ascertain whether there are any integrity violations. In fact, the generated report states specifically that the “report should not be used to punish students” (GPTZero, AI scan, January 28, 2024).
One question I am left wondering about the tool: Where does the extensive database of human-written content that writing is compared against come from?

References 

GPTZero. (2024). The best AI checker for teachers. GPTZero. https://gptzero.me/educators

Lynch, M. (2023). What is GPTZero? How to use it to detect AI-generated text. The Tech Edvocate. https://www.thetechedvocate.org/what-is-gptzero-how-to-use-it-to-detect-ai-generated-text/

Tian, E., & Cui, A. (2023). GPTZero: Towards detection of AI-generated text using zero-shot and supervised methods [Computer software]. https://gptzero.me/

Turnitin. (2024a). Turnitin’s AI detector capabilities. Turnitin. https://www.turnitin.com/solutions/topics/ai-writing/ai-detector/

Turnitin. (2024b). Turnitin’s AI writing detection available now. Turnitin. https://www.turnitin.com/solutions/topics/ai-writing/

Week 12 – Annotation “Do less teaching, more coaching: Toward critical thinking for ethical applications of artificial intelligence”

Park, C. S.-Y., Kim, H., & Lee, S. (2021). Do less teaching, more coaching: Toward critical thinking for ethical applications of artificial intelligence. Journal of Learning and Teaching in Digital Age, 6(2), 97-100.

Park et al. (2021) present a cleaned-up version of a series of academic discussions held over the course of the pandemic, as these educators attempted to meet students’ learning needs where they were under emergent conditions. The authors provide an overview of four areas where educators should work to coach students as their education intersects with AI use in the classroom. The first is that virtual learning spaces could make it harder for students to think about what they want, or what their peers want, in favor of what AI is presenting. The authors then move, somewhat oddly, to a discussion of AI in healthcare that is not really related to their students. Park et al. (2021) argued that health care professionals should not be reliant on AI but rather critical of it, because there are many variables AI is unable to take into account at this time. Park et al. argued that AI should not replace humans but should work alongside them, especially in healthcare. Finally, Park et al. (2021) argued that humans should prioritize their own intellectual curiosity to create their own knowledge.

While slightly outdated, the notion that we should be thinking critically about how AI is used in educational spaces remains at the forefront of thinking about AI. The article was an interesting way to display different facets of thinking about AI in education. However, the title was misleading: there was little discussion of coaching students in how to use AI. Many ethical concerns were raised, but none were grounded in research or specific examples. The article was also extremely repetitive about its underlying assumption that AI would usurp human thinking. While some previous research underpins the thinking, this is really a thought exercise meant to contribute lines of thinking to the discussion rather than to answer questions.

As a researcher and doctoral student, I think it’s good to be aware of these types of conversations, especially to think about the ethical considerations of novel technology. I find it interesting, though, that each new technology seems to be heralded in with the same trepidation as the previous one. As a Millennial, one thing that has been constant in my life is technological change, so I tend to embrace all the new things that come out and think about the ethical implications later. That is not the best approach to take. So I really liked that this article enshrines a years-long conversation about the ethical considerations of a new technology. It’s a good reminder to slow down and think past the shiny new thing to what happens next.

Week 12 – AI Blog Discussion

Nemorin, S., Vlachidis, A., Ayerakwa, H. M., & Andriotis, P. (2023). AI hyped? A horizon scan of discourse on artificial intelligence in education (AIED) and development. Learning, Media and Technology, 48(1), 38-51.

  • AI Tech policies could potentially be another form of western imperialism since the West is making up the rules governing AI usage in industry.
  • AI Tech is not inherently inert – its use takes people and turns them into data that can (and will be) exploited.

Nemorin et al. (2023) changed my view of AI use: there are many implications to who makes the rules shaping the narrative, and AI will inherently erase people as it collects all the data it can from them – data that will then be used to change the world.

Sofia, M., Fraboni, F., De Angelis, M., Puzzo, G., Giusino, D., & Pietrantoni, L. (2023). The impact of artificial intelligence on workers’ skills: Upskilling and reskilling in organisations. Informing Science: The International Journal of an Emerging Transdiscipline, 26, 39-68.

  • AI use requires workers to be skilled and trained critical thinkers, collaborators, and problem solvers in order to effectively leverage AI to maximize potential in the workspace.
  • Humans have unique skills that are not replicable by AI (yet – I mean, I have seen Battlestar Galactica and AI could become sentient at some point) and they have to use AI to free up time to work within skillsets AI cannot replicate.

Sofia et al. (2023) advance a theory built on the idea that skilled workers need to develop new skills, which relies on being educated in critical thinking, collaboration, and problem solving – but in a society that increasingly diminishes the value of those skills, AI may not be as helpful as imagined in the future.

Touretzky, D., Gardner-McCune, C., Martin, F., & Seehorn, D. (2019, July). Envisioning AI for K-12: What should every child know about AI? In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 33, No. 01, pp. 9795-9799).

  • AI researchers need to make their work available and accessible to K-12 teachers and students in intellectually appropriate ways.
  • Students across the K-12 spectrum are capable of and should be using and evaluating AI tools as part of their education.

Touretzky et al. (2019) reaffirm what I have been hearing as a higher ed administrator for years: kids today are preparing for jobs that don’t even exist yet and will need skills we don’t have yet. By providing them the time and space to experiment with known technology, they’ll be more flexible in learning new skills in the future.

Park, C. S.-Y., Kim, H., & Lee, S. (2021). Do less teaching, more coaching: Toward critical thinking for ethical applications of artificial intelligence. Journal of Learning and Teaching in Digital Age, 6(2), 97-100.

  • AI has huge potential to change classroom teaching, but educators have to teach students to think critically when using AI.
  • There are many ethical implications that, if not considered, could undermine the educational goal of critical thinking.

Park et al. (2021) emphasize that educators need to teach the implications of AI in relation to critical thinking alongside the content.

Adding in ideas from Peers

Susan Lindsay pointed out that “AI in education” is driven by market interests. As an administrator in higher education overseeing a writing department, I have been heavily involved in discussions related to AI use in the classroom and its implications. Community partners are now coming back to advisory boards and noting that they want to see graduates who are comfortable working alongside AI technology. This opens up a lot of questions for beginning writing classes, where critical thinking for academic work is often first introduced. The biggest question is: How can we teach students to think critically if we immediately embrace AI technology? It’s an interesting question to grapple with, and it leads to larger questions about the intersection of education and job training. Until recently, college education wasn’t about being trained for a specific job – it was about gaining the “soft skills” – critical thinking, broad-based knowledge, and professional discourse foundations – to be successful on the job.

Martha Ducharme also commented along the same vein about adding AI instruction into K-12 educational spaces. I had not viewed the recommendation Touretzky et al. (2019) made – to teach AI technology alongside other content, as a skill – as a “rip and replace” approach, but I can see how it can be viewed that way. AI has come into play the same way the Internet did. Heck, I remember when I was in the 6th grade in the late 90s and we started learning basic computer programming – it was seen as revolutionary for us to make a turtle avatar draw geometric shapes across a screen. We had computer class twice a week instead of once a week, and it was a disruption, but we still learned all the other basic subjects. So I think it’s really important to prepare students from a young age to be comfortable with the foundational aspects of AI, because by the time they enter the workforce, AI won’t be novel; it will be integrated into life. And if we don’t make the AI visible, they won’t have the necessary skepticism to think critically about it, as Park et al. (2021) warned in their work.