Week 10 – Extending the Discussion: New Literacies

Technology has shaped and reshaped the way people interact with and create information. Up until the late 1990s, information was gate-kept by publishers. People had to be bona fide experts who had paid their dues through the formal education process. Scholars published works on politics, health, science, etc. Conspiracy theories were relegated to cheap tabloids found at checkout counters. There was limited information available, as the experts could only supply so much information at one time. But the internet changed that (Leu & Forzani, 2012). The Web 2.0 technologies of the early 2000s and 2010s – blogs, wikis, Facebook, and Twitter – especially changed the ways users created and interacted with content. Anyone could post anything. In 2008, I was a newly minted college instructor and would warn my students not to use Wikipedia. I’d set up assignments where I would have them look up things on Wikipedia and then edit the pages to feature outlandish nonsense to prove it couldn’t be trusted – because what would twinklestar099 know about literature of the Cold War that Richard Slotkin didn’t know better? We were wary of the authority of sources in the early days of Web 2.0 because those of us teaching and working had grown up with card catalogues and library collections that could only be used on campus, even if the catalogue was now digitized on the computer. There were processes in place to make sure disseminated information was as accurate and well composed as possible (most of the time).

Teaching literacy in writing classes used to mean teaching the difference between an encyclopedia, a trade journal, and a scholarly peer-reviewed source, and the difference between .com and .edu sources. But once Web 2.0 emerged, more was demanded. Yi (2021) gets at this in his definition of AI competency. As the technologies we use become more complex, so must our critical thinking about and reflection on the tools. Added to that, people have to be cognizant of how the information they find, and the tools they use to find it, shape them and their possible futures. We have to simultaneously evaluate the material we get, the source of the material, and its future effect on us, our culture, and our opportunities (Leander & Burriss, 2020; Yi, 2021).

Generative AI has the capacity to shape our world more than any Wikipedia article ever did. At some point, AI-generated content is going to become indistinguishable from reality, and people are going to have to be critical observers and critical participants in their world. Leu and Forzani (2012) articulated that youth will drive the change and the way language happens. But in the past, language change and social change were driven by youth in a social context where fact checking was always going to be possible; I’m not sure that with AI that will be the same. Not to mention that algorithms shape what we see online, and there is not a single Internet or ChatGPT or TikTok we all encounter (Leander & Burriss, 2020). Every single thing we do online is shaped by our specific interactions with the Internet. Knobel and Lankshear (2014) discuss the ways people collaborate to create knowledge and de-centralize knowledge making, but they don’t talk about the pitfalls. We’re living them in 2023. Anyone can post anything online. People give credibility to the person with the camera or the blog post, I think, because we’re still stuck with the old gate-keeping mentality that we can trust published things because they’re published. Being published used to mean an entire vetting process of credentials, veracity of claims, and research validation – now anyone with a smartphone and an opinion can post anything. We have witnessed, with sharp intensity in these last five years especially, what happens when armchair experts dominate the discourse on so many important topics. And if people do not learn to approach digital texts, digital searches, and the technologies that facilitate our access to that information with a critical eye – especially with all generative AI can do – there are problems on the horizon we cannot even articulate today.

References:

Knobel, M., & Lankshear, C. (2014). Studying new literacies. Journal of Adolescent & Adult Literacy, 57(9), 1-5.

Leu, D. J., & Forzani, E. (2012). New literacies in a Web 2.0, 3.0, 4.0, …∞ world. Research in the Schools, 19(1), 75-81.

Leander, K. M., & Burriss, S. K. (2020). Critical literacy for a posthuman world: When people read, and become, with machines. British Journal of Educational Technology, 51(4), 1262-1276.

Yi, Y. (2021). Establishing the concept of AI literacy: Focusing on competence and purpose. JAHR, 12(2), 353-368. https://doi.org/10.21860/j.12.2.8

Week 8 Annotation – Characteristics of productive feedback encounters in online learning

Jensen, L. X., Bearman, M., & Boud, D. (2023). Characteristics of productive feedback encounters in online learning. Teaching in Higher Education, 1-15. https://doi.org/10.1080/13562517.2023.2213168

 

Jensen et al. (2023) take a digital ethnographic approach to studying the perceived usefulness of feedback for students taking online classes at an Australian and a Danish university, in courses that were not emergency remote courses due to COVID-19. Jensen et al. situate their argument in the context of productive feedback – feedback a student finds meaningful – and feedback in higher education, noting feedback can come from interactions with humans, technology, or resources. The dataset for this study was derived from 18 students whose online text-based work was observed, along with 27 semi-structured interviews and longitudinal audio diaries. The data were thematically coded, and three major themes for feedback emerged. The first was elicited feedback encounters, where students directly asked for feedback; an example would be asking for peer review or emailing a question. The second was formal feedback encounters, where feedback is structured as part of the course design; an example is any instructor feedback on a submitted assignment. The final theme was incidental feedback encounters, where students get information that causes them to reflect on their understanding; an example is a discussion with peers about the directions for an assignment. Feedback has two types of impact: instrumental and substantive. Instrumental feedback clearly delineates for students what action they need to take next; this type of feedback often leads to superficial changes based on direct instruction about what to do. Substantive feedback asks students to critically reflect on their own assumptions; this type of feedback is often ignored because it is too challenging, but when students engage with it, it reshapes their understanding of the task, the work, or their own approach. Instrumental and substantive feedback are equally valuable to student learning and each serves a purpose. However, all feedback is most valuable when students are open to it and it arrives in time for them to apply it to their current work.

Jensen et al. do a good job of situating their problem in the context of feedback. There was discussion of the framework and approach for feedback, but it was not related back to the discussion or the findings in a clear way. It was also not clear whether the authors collected the dataset themselves or used a dataset collected by someone else and made available for other researchers to use. It was, however, very easy to follow the conceptual framework to the research questions and methodology used to explore this problem.

Feedback is something I have grappled with for a very long time as an educator. When I teach writing classes, I am always swamped by feedback. When I teach online courses, I log in every single day to read what students post and provide them formative feedback that I hope will shape their work. I’m not always sure that students read and engage with the feedback I give them, except when I require it as part of a reflection assignment (as Jensen et al. pointed out in their literature review). But I do agree that students will happily take surface-feature/lower-order-concern feedback and apply it easily because it is direct and tells them what to do. For example, if I tell them to restate their thesis and give them a template, they almost always do it. But if I ask them to reframe their thinking on a topic – which would lead to a major overhaul of their paper – they often don’t do anything with that feedback. Jensen et al. pointed out that this type of feedback is hard to act on. It’s a big ask to expect a first-year composition student to reframe their entire way of thinking about a topic while they’re also learning how to research, evaluate sources, and do all the things that come with academic research. It defeats the purpose of giving that kind of feedback, however, for me to tell them how to think about the topic differently. But this kind of feedback is successful if they’re ready and open to the conversation about changing how they think.

In online learning, it’s even harder to give the kind of feedback that leads to substantive learning because you can’t see the student to know how well it’s received or even understood. I also don’t always know the right time to give feedback so that it’s useful. In eight-week classes, I’m usually one week behind in grading, so students are getting feedback while they’re already working two assignments past the one it addresses. It’s not really helpful anymore. I need to think about ways to structure feedback so they get it when it’s useful.

Week 5 Annotation 1 – Learning from hypertext: Research issues and findings

Shapiro, A., & Niederhauser, D. (2004). Learning from hypertext: Research issues and findings. In D. H. Jonassen (Ed.), Handbook of research for educational communications and technology (pp. 605-620). New York: Macmillan.

Shapiro and Niederhauser (2004) provide an overview of research issues in hypertext-assisted learning (HAL). The overview covers the theoretical underpinnings of HAL and the practical matters of reading and learning from hypertext, including metacognitive processes and the role of the conceptual structure of hypertexts in human memory construction. Considerable space is devoted to the effect of system structures on learning. Learning structures are discussed in terms of hierarchical information structures – which let the reader go back and forth between the original text and more information – versus unstructured hypertexts that rely on user choice to help in creating meaning. Well-defined structures are best for learners who have little or no prior knowledge of a subject. Ill-defined structures are better for learners who have more prior knowledge of a subject; however, just because a student is advanced does not mean they will automatically apply themselves to learning in an unstructured hypertext learning task. Learner variables are also discussed in relation to the effectiveness of HAL; students who have more prior knowledge can engage with HAL at a higher level. The reading patterns of the learner also affect success with HAL; the purpose of reading influences how students interact with the text. For example, students who have a very specific goal for reading will make better connections with and between the material than those reading without a defined scope. HAL research is also problematic because the field has no unifying theoretical underpinning, no coherent methodological approach, and no precise language for discussing HAL. The lack of published research on the topic makes it hard to see HAL as a powerful learning tool, and more research needs to be done.

Shapiro and Niederhauser present a very cohesive and well-catalogued literature review of HAL research. The headers and subheaders make it clear and easy to follow the connections from the research to the authors’ own assertions about the state of HAL research. Additionally, each heading has its own conclusion, which neatly and succinctly ties the literature reviewed together. This makes it very easy to see how the conclusions were drawn from the literature. The critiques of the field’s lack of cohesiveness are demonstrated to readers, and all ideas expressed are concrete and connected to specific studies that had been conducted up to that point. I would also argue that this piece brings some of the cohesiveness to the study of HAL that Shapiro and Niederhauser say the field is lacking. By drawing these specific pieces of literature under the umbrella of this literature review, they are pulling together the early studies that are the seeds of the field of research into HAL.

As a doctoral student, I find this article compelling on two fronts. First, I see it as a model of how to construct a literature review that supports making claims and assertions on a topic. It is also a great example of how to pull from the literature to locate and discuss gaps, pinpointing where a valuable research question may be lying in wait for a researcher to expand on. Second, I find this notion of trying to pull a field together interesting. Shapiro and Niederhauser see an emerging field of HAL research based on hypertexts and the uses in practice that emerge from the literature – but pulling it all together so the field has value is a significant task. Someone has to be the one to ask these questions. I’ve read a lot of educational scholarship, and this is the first time I have come across a call like this from the field. Having done research on this topic in the last five years, I see there is more scholarship on the topic, but I’m not sure it is any more cohesive than what was described in this article; it hadn’t occurred to me to pay attention to that kind of organization across a field before reading this article.

Annotation – “The Triple-S framework: ensuring scalable, sustainable, and serviceable practices in educational technology”

Moro et al. (2023) present a new research-based framework, the Triple-S Framework, for educators and institutions to consider before electing to adopt and adapt educational technology in learning spaces. The framework was built in the context of ever-evolving technology, the push of institutions and educators to adopt the latest technology to remain relevant, the financial and practical costs of technology implementation, and students’ desire to see more consistent technology implementation. The Triple-S Framework guides institutions and educators to evaluate the scalability (continued growth of use), sustainability (long-term implementation viability), and serviceability (access to the skills, tools, and resources needed to maintain the technology) of educational technologies implemented in schools. Moro et al. provide an overview of common and trendy educational technologies, from most scalable, sustainable, and serviceable (digital texts and images) to least scalable, sustainable, and serviceable (VR technology), to illustrate application of the model.

Moro et al. (2023) present a clear case for a framework that takes into account not just learning outcomes but the long-term viability of educational technology interventions in classrooms and institutions. The examinations of common, widely used educational technology such as digital texts and images, audio, slideshow presentations, and video allow newcomers to the framework to bring their practical experience to bear on the benefits and pitfalls of technology implementation. Progressing to apps, which are accessible to use but not necessarily to create, then extends the framework to less common technologies and shows how the Triple-S Framework is practical and accessible to researchers, educators, and decision makers. Moro et al. also use easy-to-grasp, common language when explaining their framework. A college professor with no formal educational training could pick this up and implement the steps without much additional work. It’s very practical.

As a college administrator, I really love the practical examples and explanations that are provided and grounded in research. I can see that there are clear steps, questions, and processes to follow. This would be very easy to use as a jumping-off point for discussions with faculty about technologies they would like me to purchase from my budget for use in their classrooms. As a doctoral student, I can only hope to strive for this level of clarity of explanation, clear connection to the literature, and clear and concise application, so that the research I do is practical for practitioners and the administrators who support them.

Moro, C., Mills, K. A., Phelps, C., & Birt, J. (2023). The Triple-S framework: Ensuring scalable, sustainable, and serviceable practices in educational technology. International Journal of Educational Technology in Higher Education, 20(1). https://doi.org/10.1186/s41239-022-00378-y

Annotated Bibliography – “Enhancing the learning effectiveness of ill-structured problem solving with online co-creation”

In this early empirical study on co-creation in learning, Pee (2019) attempts to support the hypothesis that the open-ended nature of ill-structured problem solving (ISPS) can be used to a learner’s advantage in increasing cognitive and epistemic knowledge. Three concepts were derived from business disciplines, where co-creation is commonly used, to develop a framework for online co-creation and to test for increased student learning in ISPS: solution co-creation, decision co-creation, and solution sharing. Pee created an asynchronous, voluntary, and optionally anonymous activity on Blackboard in which students participated in decision co-creation of the evaluative criteria and then discussed their solutions to the assignment problem, engaging in solution sharing and solution co-creation. Pee interprets the student survey results to indicate that engaging in online co-creation increases learning. Ultimately, Pee suggests that while this early study cannot yet be generalized, it should be replicated in other areas, and current course instructors can implement this method to increase learning in the context of working with ISPS.

While the article excels in presenting its data visually and the limitations of the study are adequately acknowledged, there are areas of concern in the arguments Pee presents. Pee (2019) presents a cogent statistical analysis of the survey deployed to students (n = 225), and the survey had an excellent return rate of 70.3%, but the findings were presented as proving that learning increased when the survey actually measured students’ perception of learning. Brief follow-up interviews with 13 students were mentioned in the article, but these were not discussed in depth and did not support the hypothesis that learning had increased; a single student was quoted as indicating their perception of learning increased. Finally, the examples illustrating the data collection method were limited to the graduate student sample, who made up only 32.4% of the sample; the undergraduate student experience shaped most of the survey results but was not described in the methodology or discussion. Pee concludes that the survey results show the model for online co-creation worked in a classroom to “leverage the multiplicity of ISPs” to enhance student learning, without noting that the survey can only measure perception, since student work was not evaluated and no comparison was made between groups who used co-creation and groups who did not.

As a writing teacher, I find the idea of ill-structured problems and co-creation interesting. Writing is often difficult to teach because it’s amorphous and doesn’t have a “right” answer. The idea of online co-creation, where students work together to contribute to discussions of how a project will be evaluated, is exciting because it shifts the burden of teaching in an ISP context from the instructor alone to the instructor and students. I like this idea in terms of establishing rubrics that are more individualized for learners, helping them grow their writing in ways they find relevant while also meeting course standards and outcomes. As a doctoral student, I am interested in the ways students perceive their own learning versus how instructors perceive student learning based on knowledge acquisition, and I find the methodology and framework Pee uses to study perception of learning interesting.

References

Pee, L. G. (2019). Enhancing the learning effectiveness of ill-structured problem solving with online co-creation. Studies in Higher Education, 45(11), 2341-2355. https://doi.org/10.1080/03075079.2019.1609924