Week 8 – Extending the Discussion

Online learning environments are difficult because they're asynchronous. In face-to-face classrooms, I could always tell where my students were in terms of understanding and could easily course correct. I built rapport through bad jokes and by showing my humanity to my students regularly. In online classes, I'm a block of text, and maybe sometimes a quick video or a voice file. It's a very different type of interaction with students. But a key piece of success is students feeling like their teacher is present. Richardson and Swan (2003) pointed out that students who felt their instructor was present in the class felt like they learned more. Hrastinski (2009) pointed out that having someone around with a higher level of knowledge than the learner increases learning. It's difficult to be present as a block of text, and it's easy to stifle a discussion as the instructor if you encroach on it too soon.

Online classrooms still need a level of instructor interaction, however. They cannot just be left for students to engage with each other or with content and/or technology as the only means of feedback. Hrastinski (2009) cited three types of interaction in a classroom: learner to learner, learner to content, and learner to instructor. When I observe and evaluate any classroom environment, I look for all of these interactions. When I build my own classroom environments, I strategically and intentionally build in all of these pieces. I would also add that there needs to be clear interaction between the instructor and the content to model disciplinary thinking for students. All three interactions need to be present for an online classroom to function well (Abrami et al., 2011). Learners cannot be left to engage only with other learners and the content, reaching out to the instructor piecemeal for clarification, and expect to leave with a well-rounded learning experience. Instructors need to set up learner-to-learner engagements that have a specific end goal and participate in that interaction at key points to provide timely feedback. As Jensen et al. (2023) argued, feedback delivered at a useful point while students are in the process of an assignment can lead to substantive learning. Without instructor feedback, learners cannot be certain that their interactions with peers or content are leading to correct understanding. The instructor also needs to challenge student ideas while those ideas are developing, as that is when feedback is more likely, in my opinion, to help shape approaches and ideas.

References

Abrami, P. C., Bernard, R. M., Bures, E. M., Borokhovski, E., & Tamim, R. M. (2011). Interaction in distance education and online learning: Using evidence and theory to improve practice. Journal of Computing in Higher Education, 23, 82-103.

Hrastinski, S. (2009). A theory of online learning as online participation. Computers & Education, 52(1), 78–82.

Jensen, L. X., Bearman, M., & Boud, D. (2023). Characteristics of productive feedback encounters in online learning. Teaching in Higher Education, 1-15. https://doi.org/10.1080/13562517.2023.2213168

Richardson, J. C., & Swan, K. (2003). Examining social presence in online courses in relation to students’ perceived learning and satisfaction. Journal of Asynchronous Learning Networks, 7(1), 71-88.

Week 8 Annotation – Characteristics of productive feedback encounters in online learning

Jensen, L. X., Bearman, M., & Boud, D. (2023). Characteristics of productive feedback encounters in online learning. Teaching in Higher Education, 1-15. https://doi.org/10.1080/13562517.2023.2213168

Jensen et al. (2023) take a digital ethnographic approach to studying the perceived usefulness of feedback for students taking online classes at an Australian university and a Danish university, in courses that were not emergency remote courses due to COVID-19. Jensen et al. situate their argument in the context of productive feedback – feedback a student finds meaningful – and feedback in higher education, noting feedback can come from interactions with humans, technology, or resources. The dataset for this study was derived from 18 students whose online text-based work was observed and from 27 semi-structured interviews conducted using longitudinal audio diaries. The data were thematically coded, and three major themes for feedback emerged. The first was elicited feedback encounters, where students directly asked for feedback; an example would be asking for peer review or emailing a question. The second was formal feedback encounters, where feedback is structured as part of the course design; an example is instructor feedback on a submitted assignment. The final theme was incidental feedback encounters, where students get information that causes them to reflect on their understanding; an example is a discussion with peers about the directions for an assignment. Feedback has two types of impact: instrumental or substantive. Instrumental feedback clearly delineates for the student what action they need to take next; this type of feedback often leads to superficial changes based on direct instruction for what to do. Substantive feedback asks a student to critically reflect on their own assumptions; this type of feedback is often ignored because it is too challenging, but if the student engages with it, it reshapes their understanding of the task, the work, or their own approach. Instrumental and substantive feedback are equally valuable to student learning and each serves a purpose. However, all feedback is most valuable when students are open to it and it arrives in time for them to apply it to their current work.

Jensen et al. do a good job of situating their problem in the context of feedback. There was discussion of the framework and approach for feedback, but it was not related back to the discussion or the findings in a clear way. It was also not clear whether the authors collected the dataset themselves or used a dataset that was collected by someone else and made available for other researchers to use. It was, however, very easy to follow the conceptual framework to the research questions and the methodology used to explore this problem.

Feedback is something I have grappled with for a very long time as an educator. When I teach writing classes, I am always swamped by giving feedback. When I teach online courses, I log in every single day to read what students post and provide formative feedback that I hope will shape their work. I'm not always sure that students read and engage with the feedback I give them, except when I require it as part of a reflection assignment (a pattern Jensen et al. pointed out in their literature review). But I do agree that students will happily take surface-feature/lower-order-concern feedback and apply it easily because it is direct and tells them what to do. For example, if I tell them to restate their thesis and give them a template, they almost always do it. But if I ask them to reframe their thinking on a topic – which would lead to a major overhaul of their paper – they often don't do anything with that feedback. Jensen et al. pointed out that this type of feedback is hard to act on. It is a big ask to expect a first-year composition student to reframe their entire way of thinking about a topic while they are also learning how to research, evaluate sources, and everything else that comes with learning to do academic research. It defeats the purpose of giving that kind of feedback, however, for me to tell them how to think about the topic differently. This kind of feedback is only successful if they are ready and open to the conversation about changing how they think.

In online learning, it's even harder to give the kind of feedback that leads to substantive learning because you can't see the student to know how well it's received or even understood. I also don't always know the right time to give that feedback so that it's useful. In eight-week classes, I'm usually a week behind in grading, so students are getting feedback when they are already two assignments past the work it addresses. It's not really helpful anymore. I need to think about ways to structure feedback so they get it when it's useful.

Week 7 Annotation – TPACK in the age of ChatGPT and generative AI.

Mishra et al. (2023) apply the TPACK framework to ChatGPT to illustrate that the framework remains relevant to generative AI. Mishra et al. situate their argument in the significance of the TPACK framework in educational technology and in the daily work of teachers. They also point out that TPACK has fallen into the canon of educational technology research and is no longer engaged with intellectually; rather, education students learn it as another theory to memorize. They seek to make TPACK relevant again and encourage educators to approach questions of generative AI through the lens of this framework. Mishra et al. provide an overview of the state of generative AI in education, pointing out the pitfalls and benefits of using AI in the classroom, while ultimately coming to the conclusion that the educational space is forever changed because it is no longer a human-only space. Generative AI will require that new pedagogies be created to support learning that is inclusive of generative AI. Through the lens of TPACK, educators will have to think beyond the current moment to the long-term ramifications of generative AI in the classroom in order to assess student learning and prepare students for jobs in a world we cannot yet fully envision. Mishra et al. also point out that assessment will have to change to accommodate the ways learning will change as a result of human/machine partnerships in learning.

While Mishra et al. provide a robust overview of the current state of the TPACK framework in educational literature, they do fall into the pitfall of separating the elements of TPACK in order to explain the framework rather than analyzing ChatGPT holistically (Saubern et al., 2020). Mishra et al. establish the relevance of applying the TPACK framework and try to provide some examples of how teachers can use generative AI in their classrooms to stimulate learning in new ways. These examples are cursory and mainly show that the focus on academic dishonesty is the wrong place to situate the conversation in education around generative AI. Ultimately, the paper is very well organized. The literature review pulls from relevant TPACK literature, always choosing to cite the seminal work over discussions of the seminal work. The framework does not appear to be mischaracterized, but the separation of the parts does not allow the creative dynamic between knowledge, pedagogy, and technology that Mishra et al. pointed out in their literature review to be fully explored in their own assessment of how TPACK can apply to generative AI in the classroom – which is also interesting because Mishra is one of the architects of TPACK.

All three facets of my academic identity – doctoral student, writing instructor, and administrator – are very interested in how generative AI affects the classroom experience. This article opened my eyes to the reality that learning spaces are no longer human-only spaces. While technology has always been at the center of my teaching practice, the technology was always mediating the learning. Now, the technology is participating in the learning (and in some ways it is a co-learning experience, where the AI can learn from the learner, too). I'm very interested in this area as a doctoral student. As a writing teacher, I want to teach my students to leverage generative AI so they can use it proficiently as a tool that supports their critical thinking and helps them get better jobs. As an administrator, I want to understand the application of generative AI in the classroom so I can help faculty create learning spaces that don't penalize and police students as those students navigate how to use generative AI as a learning tool in their own educational journey.

References

Mishra, P., Warr, M., & Islam, R. (2023). TPACK in the age of ChatGPT and generative AI. Journal of Digital Learning in Teacher Education, 39(4), 235-251. https://doi.org/10.1080/21532974.2023.2247480

Saubern, R., Henderson, M., Heinrich, E., & Redmond, P. (2020). TPACK – time to reboot? Australasian Journal of Educational Technology, 36(3), 1-9.

Week 6 – Extending the Discussion

While reading Kay's (2012) literature review, one thing that stood out to me was the underlying idea that using video in the classroom – especially streaming video that an instructor or students could create on their own – really challenges the role of the instructor. Kay's literature review found multiple reasons students use videos: improving learning, preparing for class, self-checking understanding, obtaining a global overview of chapters, taking better notes, and improving the quality of face-to-face classes. Kay also pointed out that there was a concern (maybe a fear) among instructors that recording video lectures or posting PowerPoint lectures would mean students won't come to class. Kay's literature review found that students were about as likely as not to come to class when a video lecture was posted, but when it was a PowerPoint lecture, students were less willing to come to class.

Prior to the emergence of the science of learning in the 1980s, the common model of education was one where knowledge was transferred from instructor to student, creating a dynamic in which the instructor had all the power and students had to be physically present to get what they needed (Nathan & Sawyer, 2022). Educational technology allows a shift in where, when, and how students access information. It also displaces the power dynamic that has long been in place, especially in the context of direct learning environments. Videos allow students to have more ownership and control over their learning experiences. Students are not quite ready to give up on face-to-face interactions, though, as evidenced by the fact that brick-and-mortar education still exists in 2023 and that students chose to return to that space after the COVID-19 pandemic's long pause of face-to-face learning.

While instructors may record, create, or curate video content for their students to consume, doing so still places them in a different role in the learning context. I see an underlying fear in the way video can shift the dynamic: if there is a video lecture, it can be reused indefinitely, in perpetuity. For example, Concordia University assigned a deceased professor to a course, using his recorded lecture materials, with the course led by a living professor and two TAs (Tangermann, n.d.). Some ethical concerns come up here. McClellan et al. (2023) also point out that video lectures can lead students to overestimate their learning because the instructor is not there to immediately guide understanding. The role of the professor even shifts away from "guide on the side." I'm not really sure what it looks like yet. But I am interested in the question of how video lectures – whether active or passive in the student experience – can reshape the power dynamic between instructor and student in a learning context. What happens to learning when the instructor is potentially perceived as more passive in the learning experience than in student-centered learning?

References

McClellan, D., Chastain, R. J., & DeCaro, M. S. (2023). Enhancing learning from online video lectures: The impact of embedded learning prompts in an undergraduate physics lesson. Journal of Computing in Higher Education. https://doi.org/10.1007/s12528-023-09379-w

Nathan, M. J., & Sawyer, R. K. (2022). Foundations of the learning sciences. Cambridge University Press. https://doi.org/10.1017/9781108888295.004

Tangermann, V. (n.d.). A university is using a dead professor to teach an online class: “I just found out the prof for this online course I’m taking died in 2019.” The Byte. https://futurism.com/the-byte/university-dead-professor-teach-online-class

Week 6 Annotation 2 – Enhancing learning from online video lectures: The impact of embedded learning prompts in an undergraduate physics lesson.

McClellan, D., Chastain, R. J., & DeCaro, M. S. (2023). Enhancing learning from online video lectures: The impact of embedded learning prompts in an undergraduate physics lesson. Journal of Computing in Higher Education. https://doi.org/10.1007/s12528-023-09379-w

McClellan et al. (2023) conducted a study to explore whether adding cognitive or metacognitive embedded prompts to an asynchronous online video would improve learning in an undergraduate physics course. McClellan et al. situated their research in a conceptual framework covering online learning, cognitive vs. metacognitive prompts, individual learner differences and prompt effectiveness, deep vs. surface learning, disorganized studying, and metacognitive skills. Further, McClellan et al. utilized cognitive load theory to support their investigation. The study was carried out over three semesters with undergraduate physics students (n = 253) who regularly used online video in their physics course; all three sections were taught by the same instructor. Students were randomly assigned to three subgroups: a no-prompt control group (n = 86), a cognitive embedded prompt group (n = 86), and a metacognitive prompt group (n = 81). All students watched the same video, which was segmented into four parts, and took the same quiz. In both prompt groups, each segment ended with a set of questions – cognitive or metacognitive, depending on the condition to which students were randomly assigned. In the third semester only, students also completed validated questionnaires about individual differences in deep vs. surface learning, organization of study habits, metacognitive awareness, and cognitive load. The results showed that the embedded prompts did not have a statistically significant effect on cognitive load. The effect of the cognitive embedded prompts was in line with prior research; students in this group achieved quiz scores that improved by 10% compared to the control group. The metacognitive embedded prompts, while trending more positively than the control group, did not line up with prior research in the area, and student quiz scores did not significantly improve. Overall, McClellan et al. recommend that cognitive embedded prompts asking students to extend and organize the information they are learning be used with video lectures.

This study was very well organized and easy to follow. The literature review led directly to the research questions, and there were no surprises in the findings as they related to the conceptual framework. However, there were no sources to back up the individual learner differences section of the conceptual framework. While McClellan et al. (2023) did acknowledge the limitations of their study, including that only one of the three semesters of students completed the surveys, adding a new measurement instrument late in the study does make me question the results. The study was also conducted in a naturalistic setting, so there was a lack of control over what students were doing while they watched the videos; there was also a missed opportunity to have students report on what they were doing at the time. This was not acknowledged in the study.

As a doctoral student, I see this study as showing the significance of the literature review in building a conceptual framework that puts readers in the same headspace and point of view as the researchers at the time of publication. While I'll be the first to admit my statistics are rusty, there was a lot of written description of the statistical analysis, and the argument would have been better served with more visual representations of the data. If I use quantitative methods in my research, I will be mindful to include visual representations that can clarify meaning; not every reader is going to come to an article with the same level of understanding of quantitative methods, and I think it's important that all educational researchers can access the research to meaningfully question it, use it, and apply it to their own projects.

Week 6 Annotation 1 – It is not television anymore: Designing digital video for learning and assessment.

Schwartz, D. L., & Hartman, K. (2007). It is not television anymore: Designing digital video for learning and assessment. In R. Goldman, R. Pea, B. Barron, & S. J. Derry (Eds.), Video research in the learning sciences (pp. 349-366). Lawrence Erlbaum Associates.

Schwartz and Hartman (2007) establish a framework, aimed specifically at those new to the learning sciences, for how to use video to observe and identify learning outcomes and to strategically implement video in the classroom learning space. The framework is situated in the then-new world of YouTube and streaming video, where students had access to more information but were limited by broadband access (streaming video was spotty in 2005). The authors also contextualize their framework in the research of the day, giving an overview of the minimal research available on the topic in 2007. Schwartz and Hartman give an overview of four common learning outcomes: seeing, engaging, doing, and saying. Within each of these four outcomes is a variety of criteria that are observable when learners are engaging with video and that might direct which video is selected, and when, for a learning situation. Seeing videos help learners visualize and experience things they have not or cannot experience; they can be categorized as tour videos (e.g., travel videos, historical re-enactments, nature videos), point-of-view videos (e.g., from a character's point of view), and simulated experiences (e.g., first-person video of a skydive). The associated assessable criteria are recognition, noticing, discernment, and familiarity. Engaging videos are designed to keep people engaged in a topic; they develop interest and contextualize information. The associated assessable actions are assessing preferences for learning and measuring future learning. Doing videos present human behavior or processes, with a distinction between attitudes and skills. In order to do an action, the viewer needs to see the action. Videos that shape attitudes ask viewers to identify the behavior and demonstrate the behavior, either globally or in step-by-step fashion. To assess the effectiveness of the video, a viewer would be asked to do the behavior they learned from watching it; if an action cannot be replicated, the viewer should be able to explain the action in detail. Saying videos lead to the acquisition and retention of facts; things like news broadcasts fall into this category, and features such as analogy, commentary, and exposition can be used. To assess the success of saying videos, viewers should be asked to recall facts they acquired from watching the video. Overall, video works within a larger learning context. Schwartz and Hartman also provide an extended example of pre-service teachers applying the framework in a course.

Schwartz and Hartman (2007) did an excellent job of establishing the framework. The framework was clearly and explicitly explained, and there was a clear visual representation of it. The tenets of the framework were explained and supported with evidence from the literature, and then clear, specific examples were given that a reader could apply to their own situation or research. Additionally, the authors provided an extended example of how this process could be applied in a learning context. Schwartz and Hartman also provided appropriate critique and contextualization for the framework. The framework is deceptively simple: it is easy to apply to a condition, but it has a lot of room for growth and assessment in application.

As a doctoral student, this framework provides a way to view the application of video usage in a classroom. It was interesting to see the development of a framework for studying something that was so new. This framework emerged alongside the technology. The way the framework was explained and presented in the article was also of great value. Thinking forward to explaining my own conceptual or theoretical framework in my dissertation, I also want to be as clear in my writing. I also appreciate that the framework was so explicit. I feel as though I could pick this framework up and apply it to a scenario. As an administrator who works with faculty, I could direct faculty to this framework to help them assess their use of video in their classes, as this could be part of the evaluation process. Since this is easily accessible, I feel like it’s something that could be seen as value-added right away, especially since it looks a lot like the Bloom’s Taxonomy wheels that many faculty are already familiar with and use. They know it’s easy to apply Bloom’s and would likely assume this framework is just as easy to apply since it can be visually represented in the same way.

Week 5 Extension Discussion – Overview of Educational Hypermedia Research

In our guided reading, we were asked to think about researchable ideas from the Kuiper et al. (2005) article. In short, the article explores the then-new concerns, circa 2005, about K-12 students being able to use the Internet in their learning and whether the Internet requires specific skills of students. In 2023, these questions are still relevant.

In the article, Kuiper et al. (2005) reference a research study where students were explicitly taught that when they click on a hyperlink, they also need to interact with it deeply. In 2023, in the college setting where I teach, the assumption is that students are going to come into the classroom with a fully formed understanding of how to interact with the Internet. The myth of the digital native, coined by Prensky (2001), persists in higher education to the detriment of learners and teachers. Prensky's theory was that those who grew up with technology – digital natives – would have an innate sense of how to use technology, unlike digital immigrants, who came to technology later. The assumption carries over to educational spaces, where it can be easy to assume that just because students grew up with technology, they will automatically know how to apply that technology to a variety of learning contexts. An innate skill for applying technology to learning does not exist.

Eynon (2020) explored the harm of the persistent nature of the digital native myth. The myth itself presents a generational divide (which, Eynon notes, the literature does not support) and leads to a very hands-off approach in adults teaching children to use technology. This may now be different, as so-called elder millennials, the original digital natives per Prensky's theory, take their places as educators in classrooms. Millennials were assumed to be native to technology because it was ubiquitous as they grew up. As an elder millennial, I know that I had to learn technology, and how to apply it, on my own. There was no one to teach me, because the divide Eynon pointed out was ever-present in my educational experiences. I had no guidance when I encountered hypertext for the first time, for example. The closest I ever got to "online training" was in grad school, when a research librarian taught us Boolean searches in the time before Google was ubiquitous and natural language searches were a thing.

The research opportunities in this area come from looking at how learners' relationships to technology are established, nurtured, and supported. The skills an Internet user needed in 2005 are also vastly different from the skills an Internet user needs in 2023.

The Web has become a different place. In the early days of the Internet, people were generally leery of it. I remember being explicitly told by high school English teachers and college professors that I could not trust everything I found online. But in 2023, "the Internet" has become an all-encompassing resource. "I read it online" becomes the only justification needed – or, phrased differently in 2023, "I saw it on TikTok." It seems that the old tradition of authorial authority (from the days of publishing, when an author's work had to be vetted for credibility, among other things, before it was published) has been transplanted online. If it's published online, it must be credible, right? I see this a lot with Internet users who don't understand that the vetting process for publishing online is to hit "submit" on a website. There are no more checks and balances. The Internet democratizes access to information, and it also allows anyone with Internet access to become a content creator. Search engine algorithms have also become very siloed. People get results based on what they like to see, which means they confront ideas that challenge their worldviews less and less (Pariser, 2011). Not to mention the dawn of ChatGPT, which manufactures source information to appear credible and returns results based on user inputs.

Students today need to be trained to be critical of information and resources they encounter online. The Internet is a great repository of information, but not all information is created equal, or should be held as having the same value or veracity. The notion that students need specific skills still holds true and is still an area of valid research. This is an area of research I am personally very interested in.  

References

Eynon, R. (2020). The myth of the digital native: Why it persists and the harm it inflicts. In T. Burns & F. Gottschalk (Eds.), Education in the digital age: Healthy and happy children. OECD Publishing. https://doi.org/10.1787/2dac420b-en

Kuiper, E., Volman, M., & Terwel, J. (2005). The Web as an information resource in K-12 education: Strategies for supporting students in searching and processing information. Review of Educational Research, 75(3), 285-328. https://doi.org/10.3102/00346543075003285

Pariser, E. (2011). The filter bubble: What the Internet is hiding from you. Penguin UK.

Prensky, M. (2001). Digital natives, digital immigrants, part 1. On the Horizon, 9(5), 1-6. https://doi.org/10.1108/10748120110424816

Week 5 Annotation 2 – Mind wandering during hypertext reading: The impact of hyperlink structure on reading comprehension and attention

Schurer, T., Opitz, B., & Schubert, T. (2023). Mind wandering during hypertext reading: The impact of hyperlink structure on reading comprehension and attention. Acta Psychologica, 233, 103836. https://doi.org/10.1016/j.actpsy.2023.103836

Schurer et al. conducted a study to examine how the organization of a text into hyperlinked text – both structured and unstructured – affects mind wandering in readers. The goal of the study was to find out which hyperlink structure caused mind wandering and what effect mind wandering had on comprehension of the text. The study asked participants (n = 90) to read a hyperlinked text, of which there were multiple versions. Participants were tested for prior knowledge of the topic. During the reading process, they were asked questions to determine their current state of mind as a way to measure mind wandering. After the reading was completed, participants took a paper-and-pencil single-choice reading comprehension test. Two groups of 45 were created from the participant pool; 27 read the high-cohesion (easy) version and 27 read the low-cohesion (difficult) version of the document. The texts were created using a previous model from the literature (Storrer, 1999). The findings were assessed using statistical tests, specifically ANCOVA, to try to account for additional variables. The findings supported the hypothesis presented: readers in the hierarchical structure had better reading comprehension scores than readers in the networked hyperlink condition. Participants in the networked condition also experienced more thoughts unrelated to the task they were completing than those in the hierarchical structure. Mind wandering occurred when the text was too difficult to comprehend or when it was too easy. Schurer et al. pointed out that their study did not include a control group, so there may be variables at play in how the documents were structured, and the findings should not be taken as conclusive.

Schurer et al. presented a very clear synopsis of how the study would be conducted, but some of the methodology was not clearly spelled out. For example, the original 90 participants were broken down into subgroups, but it was not clear without multiple re-reads where all the participants ended up. The lack of a control group – having students simply read the text – was kind of astounding when the hypothesis was centered on which conditions cause mind wandering and off-task thoughts and affect reading comprehension. The researchers didn't really know for sure whether the reading was easily understood by the participant pool. Participants were also asked about their prior knowledge on the topic, yet there was no significant discussion of how prior knowledge affected understanding. They indicated that ANCOVA testing was used and that the confidence interval used was r = .5, but there was no discussion of what variables they were trying to account for, especially given they did not have a control group. I found it ironic that a study that wanted to illustrate mind wandering, and that found texts that were too difficult to read caused mind wandering, didn't take the time to refine its language and narrative to make it easier to follow. Aside from the statistical analysis – which I am admittedly rusty with – I had to re-read paragraphs a few times for clarity of understanding, and I don't think that makes for good research.

I have been really attentive to the structure of the articles we have been reading for class and the articles I have been selecting for extension readings. I am starting to see a shift in the literature toward clearer, more natural language rather than academic jargon, which I think is great; the research should be accessible. This article was well organized, but the narrative was not as clear as I would like. The statistical results were represented in charts as well as explained, but I would have liked a clearer explanation of how the statistical analysis supported the findings. I am also still perplexed as to why a control group was not used for this type of research.

Week 5 Annotation 1 – Learning from hypertext: Research issues and findings

Shapiro, A., & Niederhauser, D. (2004). Learning from hypertext: Research issues and findings. In D. H. Jonassen (Ed.), Handbook of research for educational communications and technology (pp. 605-620). Macmillan.

Shapiro and Niederhauser (2004) provide an overview of research issues in hypertext-assisted learning (HAL). The overview covers the theoretical underpinnings and the practical matters of reading and learning from hypertext, including metacognitive processes and the role of the conceptual structure of hypertexts in relation to human memory construction. A lot of space is devoted to the effect of system structures on learning. Learning structures are discussed in terms of information structures that function in a hierarchy – letting the reader go back and forth between the original text and more information – versus unstructured hypertext that relies on user choice to help create meaning. Well-defined structures are best for learners who have little or no prior knowledge of a subject. Ill-defined structures are better for learners with more prior knowledge; however, just because a student is advanced does not mean they will automatically apply themselves to learning in an unstructured hypertext learning task. Learner variables are also discussed in relation to the effectiveness of HAL: students who have more prior knowledge can engage at a higher level with HAL. The reading patterns of the learner also impact success with HAL, since the purpose of reading influences how students interact with the text. For example, if students have a very specific goal for reading, they will make better connections with and between the material than those reading without a clear scope. HAL research is also problematic because there is no unifying theoretical underpinning for this field of study, no coherence in methodological approach, and no precise language for discussing HAL. The lack of published research on the topic makes it hard to see HAL as a powerful learning tool, and more research needs to be done.

Shapiro and Niederhauser present a very cohesive and well-catalogued literature review of HAL research. The headings and subheadings make it very easy to follow the connections from the research to their own assertions about the state of the field of HAL research. Additionally, each heading has its own conclusion, which neatly and succinctly ties the literature reviewed together. This makes it very easy to see how the conclusions were drawn from the literature. The critiques of the field's lack of cohesiveness are shown directly to readers, and all ideas expressed are very concrete and connected to specific studies that had been conducted up to that point. I would also argue that this piece brings some of the cohesiveness to the field of HAL research that Shapiro and Niederhauser say the field is lacking. By drawing these specific pieces of literature under the umbrella of this literature review, they are pulling together the early studies that are the seeds of the field of research into HAL.

As a doctoral student, I find this article compelling on two fronts. First, I see it as a model/exemplar of how to construct a literature review that supports making claims and assertions on a topic. It is also a great example of how to pull from the literature to locate and discuss gaps and pinpoint where a valuable research question may be lying in wait for a researcher to expand on the topic. I also find this notion of trying to pull a field together interesting. Shapiro and Niederhauser see that there is an emerging field of HAL research based on hypertexts and the uses in practice that emerge from the literature, but pulling it all together so the field has value is a significant task. Someone has to be the one to ask these questions. I've read a lot of educational scholarship, and this is the first time I have come across a call like this from the field. Having done research on this topic in the last five years, I see that there is more scholarship on the topic, but I'm not sure whether it's any more cohesive than what was described in this article; it hadn't occurred to me to pay attention to that kind of organization across a field before reading this piece.

Extending the Discussion – Week 4: Educational Research Methods

Early discourse in educational technology research was focused on the difference between quantitative, experimental research and qualitative, descriptive research. Quantitative research designs were privileged in that discussion, as though they illuminate generalizable truths, while qualitative methods were viewed as illuminating specific, local truths. The discourse has since shifted toward adopting mixed-methods approaches so the right tool can be employed for the research task at hand (Cobb et al., 2003; Foster, 2023; Jacobsen & McKenney, 2023). Design-based research seems to be emerging in the discourse as a top contender for "gold standard" status in educational technology research.

Design-based research does not privilege either qualitative or quantitative approaches. Rather, the process of research, the question posed, and the desired outcome of the research shape and determine what processes are applied to gain understanding (Jacobsen & McKenney, 2023; Sandoval, 2014). Research is an iterative process: when a researcher starts out looking at a topic, the questions asked are not fully formed and shaped, because information is gathered during the research process (Jacobsen & McKenney, 2023). Since the question evolves based on the phase of the study and the researcher's knowledge, the methodologies employed may also need to evolve as the study progresses (Jacobsen & McKenney, 2023). Cobb et al. (2003) pointed out that a "primary goal for a design experiment is to improve the initial design by testing and revising conjectures as informed by ongoing analysis …" (p. 11). Even though Cobb et al. are speaking specifically to student learning, this goal underscores the iterative nature of educational research in particular, which may be overlooked in strictly quantitative or qualitative research designs, where the questions do not evolve much during the process.

Jacobsen and McKenney (2023) analyzed two student dissertations to illustrate the iterative process of design-based approaches in educational research. The specific methods used to achieve understanding are not as important as having an open mind for this iterative process. The goal of methodological alignment should be to make sure that the questions asked by researchers can be "operationalized at each phase" of the process and are "precise" enough to be answered proficiently by the research (p. 5). Qualitative methods should be applied when the question calls for them, just as quantitative methods should. Results from all aspects of the investigation should be analyzed, compared and contrasted, and synthesized to make meaning.

The most compelling aspect of design research for me so far is that it breaks down the silos of scientific vs. non-scientific, qualitative vs. quantitative, and hard vs. soft science. It opens up the discourse to focus not on how educational researchers approach questions, but on what questions we are asking and what value the answers will have for the field of educational technology.

References

Cobb, P., Confrey, J., Lehrer, R., & Schauble, L. (2003). Design experiments in educational research. Educational Researcher, 32(1), 9-13.

Foster, C. (2023). Methodological pragmatism in educational research: From qualitative-quantitative to exploratory-confirmatory distinctions. International Journal of Research & Method in Education, 1-16. https://doi.org/10.1080/1743727x.2023.2210063

Jacobsen, M., & McKenney, S. (2023). Educational design research: Grappling with methodological fit. Educational Technology Research and Development. https://doi.org/10.1007/s11423-023-10282-5

Sandoval, W. (2014). Conjecture mapping: An approach to systematic educational design research. Journal of the Learning Sciences, 23(1), 18-36. https://doi.org/10.1080/10508406.2013.778204