Week 12 – AI Blog Discussion

Nemorin, S., Vlachidis, A., Ayerakwa, H. M., & Andriotis, P. (2023). AI hyped? A horizon scan of discourse on artificial intelligence in education (AIED) and development. Learning, Media and Technology, 48(1), 38-51.

  • AI technology policies could become another form of Western imperialism, since the West is writing the rules that govern AI use in industry.
  • AI technology is not inherently inert – its use takes people and turns them into data that can (and will) be exploited.

Nemorin et al. (2023) changed my view of AI use: whoever makes the rules shapes the narrative, and a system that reduces people to the data it can collect from them effectively erases them, even as that data is used to change the world.

Sofia, M., Fraboni, F., De Angelis, M., Puzzo, G., Giusino, D., & Pietrantoni, L. (2023). The impact of artificial intelligence on workers’ skills: Upskilling and reskilling in organisations. Informing Science: The International Journal of an Emerging Transdiscipline, 26, 39-68.

  • AI use requires workers to be skilled and trained as critical thinkers, collaborators, and problem solvers in order to leverage AI effectively and maximize its potential in the workplace.
  • Humans have unique skills that AI cannot replicate (yet – I have seen Battlestar Galactica, so I won’t rule out sentience), and they have to use AI to free up time to work within the skill sets AI cannot replicate.

Sofia et al. (2023) build their argument on the idea that skilled workers need to develop new skills, which rely on education in critical thinking, collaboration, and problem solving – but in a society that increasingly diminishes the value of those skills, AI may not be as helpful in the future as imagined.

Touretzky, D., Gardner-McCune, C., Martin, F., & Seehorn, D. (2019). Envisioning AI for K-12: What should every child know about AI? Proceedings of the AAAI Conference on Artificial Intelligence, 33(1), 9795-9799.

  • AI researchers need to make their work available and accessible to K-12 teachers and students in intellectually appropriate ways.
  • Students across the K-12 spectrum are capable of and should be using and evaluating AI tools as part of their education.

Touretzky et al. (2019) reaffirm what I have been hearing as a higher ed administrator for years: kids today are preparing for jobs that don’t even exist yet and will need skills we don’t have yet. If we provide them the time and space to experiment with known technology, they’ll be more flexible in learning new skills in the future.

Park, C. S.-Y., Kim, H., & Lee, S. (2021). Do less teaching, more coaching: Toward critical thinking for ethical applications of artificial intelligence. Journal of Learning and Teaching in Digital Age, 6(2), 97-100.

  • AI has huge potential to change classroom teaching, but educators have to teach students to think critically when using it.
  • There are many ethical implications that, if not considered, could undermine the educational goal of critical thinking.

Park et al. (2021) emphasize that educators need to teach the implications of AI for critical thinking alongside the content itself.

Adding in Ideas from Peers

Susan Lindsay pointed out that “AI in education” is driven by market interests. As an administrator in higher education overseeing a writing department, I have been heavily involved in discussions about AI use in the classroom and its implications. Community partners are now coming back to advisory boards and noting that they want to see graduates who are comfortable working alongside AI technology. This opens up a lot of questions for beginning writing classes, where critical thinking for academic writing is often first introduced. The biggest question is: How can we teach students to think critically if we immediately embrace AI technology? It’s an interesting question to grapple with, and it leads to larger questions about the intersection of education and job training. Until recently, a college education wasn’t about being trained for a specific job – it was about gaining the “soft skills” – critical thinking, broad-based knowledge, and foundations in professional discourse – to be successful on the job.

Martha Ducharme also commented along the same vein about adding AI instruction into K-12 educational spaces. I had not viewed Touretzky et al.’s (2019) recommendation to teach AI technology alongside other content, as a skill, as a “rip and replace” approach. But I can see how it can be viewed that way. AI has come into play the same way the Internet did. Heck, I remember when I was in the 6th grade in the late ’90s and we started learning basic computer programming – it was seen as revolutionary for us to make a turtle avatar draw geometric shapes across a screen. We had computer class twice a week instead of once a week, and it was a disruption, but we still learned all the other basic subjects. So I think it’s really important to prepare students from a young age to be comfortable with the foundational aspects of AI, because by the time they enter the workforce, AI won’t be novel; it will be integrated into life. And if we don’t make the AI visible, they won’t have the necessary skepticism to think critically about it, as Park et al. (2021) warned in their work.
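
(For anyone who never met that turtle: below is a minimal sketch of the kind of exercise I’m describing, written with Python’s built-in turtle module rather than the Logo-style environment we actually used; the shapes and step sizes are just illustrative.)

    import turtle  # built-in module that mimics classic Logo turtle graphics

    pen = turtle.Turtle()

    # Draw a square: move forward, turn 90 degrees, repeat four times
    for _ in range(4):
        pen.forward(100)
        pen.right(90)

    # Hop over and draw an equilateral triangle
    pen.penup()
    pen.goto(150, 0)
    pen.pendown()
    for _ in range(3):
        pen.forward(100)
        pen.left(120)

    turtle.done()  # keep the drawing window open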

Week 5 Extension Discussion – Overview of Educational Hypermedia Research

In our guided reading, we were asked to think about researchable ideas from the Kuiper et al. (2005) article. In short, the article explores the then-new concerns in 2005 about whether K-12 students are able to use the Internet in their learning and whether the Internet requires specific skills of students. In 2023, these questions are still relevant.

In the article, Kuiper et al. (2005) reference a research study in which students were explicitly taught that when they click on a hyperlink, they also need to engage with it deeply. In 2023, in the college setting where I teach, the assumption is that kids are going to come into the classroom with a fully formed understanding of how to interact with the Internet. The myth of the digital native, coined by Prensky (2001), persists in higher education to the detriment of learners and teachers. Prensky’s theory was that those who grew up with technology – digital natives – would have an innate sense of how to use technology, unlike digital immigrants, who came to technology later. The assumption carries over to educational spaces, where it can be easy to assume that just because students grew up with technology, they will automatically know how to apply it to a variety of learning contexts. An innate skill to apply technology to learning does not exist.

Eynon (2020) explored the harm of the persistent nature of the digital native myth. The myth itself presents a generational divide (which Eynon notes the literature does not support) and leads to a very hands-off approach by adults in teaching children to use technology. This may be different now, as so-called elder millennials, the original digital natives per Prensky’s theory, are taking their places as educators in classrooms. Millennials were assumed to be native to technology because it was ubiquitous as they grew up. As an elder millennial, I know that I had to learn technology, and how to apply it, on my own. There was no one to teach me because the divide Eynon pointed out was ever-present in my educational experiences. I had no guidance when it came to encountering hypertext for the first time, for example. The closest I ever got to “online training” was in grad school, when a research librarian taught us Boolean searches in the time before Google was ubiquitous and natural language searches were a thing.

The research opportunities in this area come from looking at how learners’ relationships to technology are established, nurtured, and supported. The skills an Internet user needed in 2005 are also vastly different from the skills an Internet user needs in 2023.

The Web has become a different place. In the early days of the Internet, people were generally leery of it. I remember being explicitly told by high school English teachers and college professors that I could not trust everything I found online. But in 2023, “the Internet” has become an all-encompassing resource. “I read it online” becomes the only citation needed – or, in 2023, “I saw it on TikTok.” It seems that the old tradition of authorial authority (from the days of publishing, when an author’s work had to be vetted for credibility, among other things, before it was published) has been transplanted online. If it’s published online, it must be credible, right? I see this a lot with Internet users who don’t understand that the vetting process for publishing online is to hit “submit” on a website. There are no more checks and balances. The Internet democratizes access to information, and it also allows anyone with Internet access to become a content creator. Search engine algorithms have also become very siloed. People get results based on what they like to see, which means they confront ideas that challenge their worldviews less and less (Pariser, 2011). Not to mention the dawn of ChatGPT, which manufactures source information to appear credible and returns results based on user inputs.

Students today need to be trained to be critical of the information and resources they encounter online. The Internet is a great repository of information, but not all information is created equal, nor should it all be held as having the same value or veracity. The notion that students need specific skills still holds true and remains a valid area of research – one I am personally very interested in.

References

Kuiper, E., Volman, M., & Terwel, J. (2005). The web as an information resource in K–12 education: Strategies for supporting students in searching and processing information. Review of Educational Research, 75(3), 285-328. https://doi.org/10.3102/00346543075003285

Eynon, R. (2020). The myth of the digital native: Why it persists and the harm it inflicts. In T. Burns & F. Gottschalk (Eds.), Education in the digital age: Healthy and happy children. OECD Publishing. https://doi.org/10.1787/2dac420b-en

Pariser, E. (2011). The filter bubble: What the Internet is hiding from you. Penguin UK.

Prensky, M. (2001). Digital natives, digital immigrants, part 1. On the Horizon, 9(5), 1-6. https://doi.org/10.1108/10748120110424816