Week 5 Annotation 2 – Mind wandering during hypertext reading: The impact of hyperlink structure on reading comprehension and attention

Schurer, T., Opitz, B., & Schubert, T. (2023). Mind wandering during hypertext reading: The impact of hyperlink structure on reading comprehension and attention. Acta Psychologica, 233, 103836. https://doi.org/10.1016/j.actpsy.2023.103836

Schurer et al. conducted a study to examine how the organization of a text into hyperlinked form – both structured and unstructured – affects mind wandering in readers. The goal of the study was to find out which hyperlink structure caused mind wandering and what effect mind wandering had on comprehension of the text. The study asked participants (n = 90) to read a hyperlinked text; there were multiple versions of the text. Students were first tested for prior knowledge of the topic. During the reading process, they were asked questions to determine their current mental state as a way to measure mind wandering. After the reading was completed, participants took a paper-and-pencil single-choice reading comprehension test. Two groups of 45 were created from the participant pool; 27 read the high cohesion (easy) version and 27 read the low cohesion (difficult) version of the document. The texts were created using a previous model from the literature (Storrer, 1999). The findings were assessed using statistical tests, specifically an ANCOVA, to try to account for other variables. The findings supported the hypothesis that readers in the hierarchical structure had better reading comprehension scores than readers in the networked hyperlink condition. Participants in the networked condition also experienced more thoughts unrelated to the task they were completing than those in the hierarchical structure. Mind wandering occurred when the text was either too difficult or too easy to comprehend. Schurer et al. pointed out that the study did not include a control group, so other variables related to how the documents were structured may have been at play, and the findings should not be taken as conclusive.

Schurer et al. presented a very clear synopsis of how the study would be conducted, but some of the methodology was not clearly spelled out. For example, the original 90 participants were broken down into subgroups, but it was not clear without multiple re-reads where all the participants ended up. The lack of a control group – having students simply read the text – was surprising when the hypothesis centered on which conditions cause mind wandering and off-task thoughts and how they affect reading comprehension. Without one, the researchers could not know for sure whether the reading was easily understood by the participant pool. Participants were also asked about their prior knowledge of the topic, and there was no significant discussion of how prior knowledge affected understanding. The authors indicated that ANCOVA testing was used and reported a value of r = .5, but there was no discussion of what variables they were trying to account for – especially given that they did not have a control group. I found it ironic that a study that set out to illustrate mind wandering, and that found texts that are too difficult to read cause mind wandering, did not take the time to refine its own language and narrative to make it easier to follow. Aside from the statistical analysis – which I am admittedly rusty with – I had to re-read paragraphs a few times for clarity of understanding, and I don’t think that makes for good research.

I have been really attentive to the structure of the articles we have been reading for class and the articles I have been selecting for extension readings. I am starting to see a shift in the literature toward clearer, more natural language rather than academic jargon, which I think is great; research should be accessible. This article was well organized, but the narrative was not as clear as I would like. The statistical results were presented in charts as well as explained in the text, but I would have liked a clearer explanation of how the statistical analysis supported the findings. I am also still perplexed as to why a control group was not used for this type of research.

Week 5 Annotation 1 – Learning from hypertext: Research issues and findings

Shapiro, A., & Niederhauser, D. (2004). Learning from hypertext: Research issues and findings. In D. H. Jonassen (Ed.), Handbook of research for educational communications and technology (pp. 605-620). New York: Macmillan.

Shapiro and Niederhauser (2004) provide an overview of research issues in hypertext-assisted learning (HAL). The overview covers the theoretical underpinnings of the field and the practical matters of reading and learning from hypertext, including metacognitive processes and the role of a hypertext's conceptual structure in relation to how human memory is constructed. Considerable space is devoted to the effect of system structures on learning. Information structures that function as a hierarchy – letting the reader go back and forth between the original text and more information – are contrasted with unstructured hypertexts that rely on user choice to help create meaning. Well-defined structures are best for learners who have little or no prior knowledge of a subject. Ill-defined structures are better for learners who have more prior knowledge; however, just because a student is advanced does not mean they will automatically apply themselves to learning in an unstructured hypertext learning task. Learner variables are also discussed in relation to the effectiveness of HAL; students who have more prior knowledge can engage at a higher level with HAL. The reading patterns of the learner also affect success with HAL; the purpose of reading influences how students interact with the text. For example, if students have a very specific goal for reading, they will make better connections with and between the material than those reading without a defined scope. HAL research is also problematic because the field lacks a unifying theoretical underpinning, coherence in methodological approach, and a precise language for discussing HAL. The scarcity of published research on the topic makes it hard to establish HAL as a powerful learning tool, and more research needs to be done.

Shapiro and Niederhauser present a very cohesive and well-catalogued literature review of HAL research. The headings and subheadings make it very clear and easy to follow the connections from the research to their own assertions about the state of the field. Additionally, each section has its own conclusion, which neatly and succinctly ties the literature reviewed together. This makes it very easy to see how the conclusions were drawn from the literature. The critique of the field's lack of cohesiveness is demonstrated for readers of this article, and all ideas expressed are concrete and connected to specific studies that had been conducted up to that point. I would also argue that this piece brings some of the cohesiveness to the study of HAL that Shapiro and Niederhauser say the field is lacking. By drawing these specific pieces of literature under the umbrella of this literature review, they are pulling together the early studies that are the seeds of the field of HAL research.

As a doctoral student, I find this article compelling on two fronts. First, I see it as a model/exemplar of how to construct a literature review that supports making claims and assertions about a topic. It is also an excellent example of how to pull from the literature to locate and discuss gaps and to pinpoint where a valuable research question may be lying in wait for a researcher to expand on the topic. I also find this notion of trying to pull a field together interesting. Shapiro and Niederhauser see an emerging field of HAL research based on hypertexts and the uses in practice that emerge from the literature – but pulling it all together so the field has value is a significant task, and someone has to be the one to ask these questions. I have read a lot of educational scholarship, and this is the first time I have come across a call like this from the field. Having looked at research on this topic from the last five years, I see there is more scholarship on the topic, but I am not sure it is any more cohesive than what was described in this article; it had not occurred to me to pay attention to that kind of organization across a field before reading this piece.

Extending the Discussion – Week 4: Educational Research Methods

Early discourse in educational technology research was focused on the difference between quantitative, experimental research and qualitative, descriptive research. Quantitative research designs were privileged in that discussion as though they illuminate generalizable truths, while qualitative methods were viewed as illuminating specific, local truths. The discourse has since shifted toward mixed-methods approaches so the right tool can be employed for the research task at hand (Cobb et al., 2003; Foster, 2023; Jacobsen & McKenney, 2023). Design-based research seems to be emerging in the discourse as a top contender for “gold standard” status in educational technology research.

Design-based research does not privilege either qualitative or quantitative approaches. Rather, the process of research, the question posed, and the desired outcome of the research should shape and determine what processes are applied to gain understanding (Jacobsen & McKenney, 2023; Sandoval, 2014). Research is an iterative process: when a researcher starts out looking at a topic, the questions asked are not fully formed, because information is gathered throughout the research process (Jacobsen & McKenney, 2023). Since the question evolves with each phase and with the researcher’s growing knowledge, the methodologies employed may also need to evolve as the study progresses (Jacobsen & McKenney, 2023). Cobb et al. (2003) pointed out that a “primary goal for a design experiment is to improve the initial design by testing and revising conjectures as informed by ongoing analysis …” (p. 11). Even though Cobb et al. are speaking specifically about student learning, this goal underscores the iterative nature of educational research, which may be overlooked in strictly quantitative or qualitative research designs, where the questions do not evolve much during the process.

Jacobsen and McKenney (2023) analyzed two student dissertations to illustrate the iterative process of design-based approaches in educational research. The specific methods used to achieve understanding are not as important as keeping an open mind about this iterative process. The goal of methodological alignment should be to make sure that the questions researchers ask can be “operationalized at each phase” of the process and are “precise” enough to be answered proficiently by the research (p. 5). Qualitative methods should be applied when the question calls for them, just as quantitative methods should. Results from all aspects of the investigation should be analyzed, compared and contrasted, and synthesized to make meaning.

The most compelling aspect of design research for me so far is that it breaks down the silos of scientific vs. non-scientific, qualitative vs. quantitative, and hard vs. soft science. It opens up the discourse to focus not on how educational researchers approach questions, but on what questions we are asking and what value the answers will have for the field of educational technology.

References

Cobb, P., Confrey, J., Lehrer, R., & Schauble, L. (2003). Design experiments in educational research. Educational Researcher, 32(1), 9-13.

Foster, C. (2023). Methodological pragmatism in educational research: From qualitative-quantitative to exploratory-confirmatory distinctions. International Journal of Research & Method in Education, 1-16. https://doi.org/10.1080/1743727x.2023.2210063

Jacobsen, M., & McKenney, S. (2023). Educational design research: Grappling with methodological fit. Educational Technology Research and Development. https://doi.org/10.1007/s11423-023-10282-5

Sandoval, W. (2014). Conjecture mapping: An approach to systematic educational design research. Journal of the Learning Sciences, 23(1), 18-36. https://doi.org/10.1080/10508406.2013.778204

Annotation – Educational design research: Grappling with methodological fit

Jacobsen and McKenney (2023) present a conceptual framework for assessing methodological fit in educational design research (EDR), a term that includes all research approaches that enhance practice and advance scientific understanding. Jacobsen and McKenney situate the framework in the current discussion and debate about educational research methodologies. Researchers pursuing theoretical or practical research must identify problems worth studying that are legitimate, researchable, and research-worthy in theoretical and/or practical terms.

Jacobsen and McKenney further explore three orientations of EDR trajectories, which they define as research for interventions, research on interventions, and research through interventions. Research for interventions adds to theoretical knowledge and design work. Research on interventions aims to provide information about an intervention’s characteristics. Research through interventions focuses on the implementation processes of an intervention. These trajectories are usually combined and used for comparative analysis in EDR work. Jacobsen and McKenney use two recent dissertations to examine and illustrate the EDR trajectories they describe.

The presentation of the conceptual framework closes with a discussion of why methodological fit is so challenging for researchers. Jacobsen and McKenney point out that methodological fit depends on a variety of factors, such as the researcher’s expertise in the subject area, expertise in methodologies, the researcher’s own concerns, and other practical considerations. Four specific challenges to achieving methodological fit are identified: asking beginner-level questions; focusing on the state of the art rather than the state of practice; insufficient measures for causal inferences; and absence of synthesis. Jacobsen and McKenney conclude that the field of educational research needs more EDR examples to show how valuable this type of research can be.

The main strength of Jacobsen and McKenney’s conceptual framework comes first from the way the discussion is situated in the current discourse on methodological fit. The analysis of two dissertations to illustrate the orientations of EDR trajectories was also very strong. Elements of research design were shown at various stages of the dissertation process to illustrate and highlight the iterative nature of crafting questions and shifting the focus of the EDR trajectory orientation. Jacobsen and McKenney also point out the pitfalls of this type of research for the novice researcher – which is what doctoral students are – and underscore the significance of support and mentorship from faculty if students pursue this avenue of research.

As a doctoral student who knows a dissertation is on the horizon, I find this conceptual framework helpful for thinking about potential topics and approaches. The most interesting sections of the article for me were the determinants of research-worthy problems and the orientations of EDR trajectories. This also connects back to Salomon and Perkins (2005) and their discussion of learning with, of, and through technology, though this time the focus is on a specific intervention, which may or may not involve technology. EDR is a complex but rich way to analyze topics, using mixed methodologies that are brought to bear on a research topic as the research grows.

References

Jacobsen, M., & McKenney, S. (2023). Educational design research: Grappling with methodological fit. Educational Technology Research and Development. https://doi.org/10.1007/s11423-023-10282-5

Salomon, G., & Perkins, D. (2005). Do technologies make us smarter? Intellectual amplification with, of and through technology. In R. J. Sternberg & D. D. Preiss (Eds.), Intelligence and technology: The impact of tools on the nature and development of human abilities (pp. 71-86). Mahwah, NJ: Lawrence Erlbaum Associates.

Annotation – “The Triple-S framework: ensuring scalable, sustainable, and serviceable practices in educational technology”

Moro et al. (2023) present a new research-based framework, the Triple-S Framework, for educators and institutions to consider before electing to adopt and adapt educational technology in learning spaces. The framework was built in the context of ever-evolving technology: the push for institutions and educators to adopt the latest technology to remain relevant, the financial and practical costs of technology implementation, and students’ desire to see more consistent technology implementation. The Triple-S Framework guides institutions and educators to evaluate the scalability (continued growth of use), sustainability (long-term implementation viability), and serviceability (access to the skills, tools, and resources needed to maintain use of the technology) of educational technologies implemented in schools. Moro et al. provide an overview of common and trendy educational technologies, from the most scalable, sustainable, and serviceable (digital texts and images) to the least (VR technology), to illustrate application of the model.

Moro et al. (2023) clearly present the need for a framework that takes into account not just learning outcomes but also the long-term viability of educational technology interventions in classrooms and institutions. The examination of common, widely used educational technologies such as digital texts and images, audio, slideshow presentations, and video allows newcomers to the framework to bring their practical experience to bear on the benefits and pitfalls of technology implementation. Progressing to apps, which are accessible to use but not necessarily to create, then extends the application of the framework to less common technologies and shows how the Triple-S Framework is practical and accessible to researchers, educators, and decision makers. Moro et al. also use easy-to-grasp, common language when explaining their framework. A college professor with no formal training in education could pick this up and implement the steps without having to do much extra work. It is very practical.

As a college administrator, I really love the practical examples and explanations that are provided and grounded in research. I can see that there are clear steps, questions, and processes to follow. This would be very easy to use as a jumping-off point for discussions with faculty about technologies they would like me to purchase from my budget for use in their classrooms. As a doctoral student, I can only strive for this level of clarity of explanation, clear connection to the literature, and clear, concise application, so that the research I do is practical for practitioners and the administrators who support them.

Moro, C., Mills, K. A., Phelps, C., & Birt, J. (2023). The triple-s framework: Ensuring scalable, sustainable, and serviceable practices in educational technology. International Journal of Educational Technology in Higher Education, 20(1). https://doi.org/10.1186/s41239-022-00378-y

Annotated Bibliography – Methodological pragmatism in educational research: From qualitative-quantitative to exploratory-confirmatory distinctions.

References

Foster, C. (2023). Methodological pragmatism in educational research: From qualitative-quantitative to exploratory-confirmatory distinctions. International Journal of Research & Method in Education, 1-16. https://doi.org/10.1080/1743727x.2023.2210063

Foster (2023) argues that the unnecessary divide between qualitative and quantitative research methodologies hinders educational research because the two are more similar than not. Foster further holds that adopting methodological pragmatism and discussing research in terms of exploratory and confirmatory work would increase rigor and collaboration. Foster establishes that the qualitative/quantitative split creates division within educational research, which may cause researchers to miss out on critical discussions and information relevant to their own research. He argues that although qualitative and quantitative data call for different analysis techniques, both have analogous issues of defining constructs, context, and measurement; both reduce data to a basic form, stripping it of content, context, detail, and nuance; and both require analytic interpretation of the data. Foster also points out that researchers’ overreliance on one method over the other might call educational research into question, because researchers may not read all of the literature on their topic or may miss opportunities to further their knowledge. Foster presents methodological pragmatism as the solution to this problem: an approach that helps researchers solve the logical problem of which methodological tools to use, where the end point could be any combination of qualitative, quantitative, or mixed-method approaches. Because methodological pragmatism provides more tools, requires researchers to understand their problem fully, and positions researchers to better examine their own biases and assumptions, it leads to a wider range of methodologies in educational research and to higher-quality research. Foster finally shows that the distinction between exploratory research (a source of ideas and discussion) and confirmatory research (which tests hypotheses and conjectures) is a better way to situate discussions of educational research, because it emphasizes why a researcher is exploring a problem and allows access to the full array of available research methods. Foster also addresses ethical questions about methodological pragmatism, including the perception that the practice is about getting results at any cost, countering that no avenue which presents a risk of harm should be used. Ultimately, Foster concludes that methodological pragmatism frees researchers to make progress in their research.

Foster’s argument is very well organized. The entire focus of the article remains on expanding researchers’ horizons and making methodological choices more accessible to them. He uses very clear and illustrative examples to establish the similarities between qualitative and quantitative research. Foster is also up front about the perception that pragmatism has less intellectual rigor than other approaches. He counters that argument by underlining the rigor that comes with greater access to all methodologies, which may force researchers to consider approaches they otherwise would not because they are inhibited by their own ingrained methodological, theoretical, or epistemological biases.

As a doctoral student preparing for the dissertation process, I found this discussion helpful to read. I recognize that I come with biases from my previous learning experiences regarding the nature of qualitative versus quantitative research. Foster’s work illuminated the many similarities – positive and negative – between the two methodologies. I don’t know where I stand in relation to methodological pragmatism, but I do like the ideas of exploratory versus confirmatory research and the way a researcher really needs to be aware of their own biases and theoretical, ontological, and epistemological assumptions. The idea that a method of interpretation is available regardless of the problem at hand is exciting. I am interested in further exploring how qualitative and quantitative work can inform each other to provide a better description of a problem in educational research. This article helped me rethink and resituate the ideas about qualitative and quantitative research that came up during our discussion of the assigned readings. I realized I privilege qualitative research in my own mind, first because I am not a fan of statistics, and second because I am biased against quantitative research; I had not appreciated that even statistics happen in a context and can tell a story.

Annotated Bibliography – “Enhancing the learning effectiveness of ill-structured problem solving with online co-creation”

In this early empirical study of co-creation in learning, Pee (2019) attempts to support the hypothesis that the open-ended nature of ill-structured problem solving (ISPS) can be used to a learner’s advantage to increase cognitive and epistemic knowledge. Three concepts were derived from business disciplines, where co-creation is commonly used, to develop a framework for testing whether online co-creation increases student learning in ISPS: solution co-creation, decision co-creation, and solution sharing. Pee created an asynchronous, voluntary, and optionally anonymous activity on Blackboard in which students participated in decision co-creation of the evaluative criteria and then discussed their solutions to the assignment problem, engaging in solution sharing and solution co-creation. Pee interprets the student survey results as indicating that engaging in online co-creation increases learning. Ultimately, Pee suggests that while this is an early study and cannot yet be generalized, it should be replicated in other areas, and course instructors can implement this method now to increase learning in the context of working with ISPS.

While the article excels in presenting its data visually, and the limitations of the study are adequately acknowledged, there are areas of concern in the arguments Pee presents. Pee (2019) presents a cogent statistical analysis of the survey deployed to students (n = 225), and the survey had an excellent return rate of 70.3%, but the findings were presented as proof that learning increased when the survey actually measured students’ perception of learning. Brief follow-up interviews with 13 students were mentioned in the article, but these were not discussed in depth and did not support the hypothesis that learning had increased; a single student was quoted as indicating that their perception of learning had increased. Finally, the examples illustrating the data collection method were limited to the graduate student sample, who made up only 32.4% of the participants – the undergraduate student experience shaped most of the survey results, but it was not described in the methodology or discussion. Pee concludes that the survey results show the online co-creation model worked in a classroom to “leverage the multiplicity of ISPs” and enhance student learning, without noting that the survey can only measure subjective perception, since student work was not evaluated and there was no comparison between groups who used co-creation and groups who did not.

As a writing teacher, I find the idea of ill-structured problems and co-creation interesting. Writing is often difficult to teach because it is amorphous and does not have a “right” answer. The idea of online co-creation, where students work together to contribute to discussions of how a project will be evaluated, is exciting because it shifts the burden of teaching in an ISP context from the instructor alone to the instructor and students together. I like this idea for establishing rubrics that are more individualized for learners, helping them grow their writing in ways they find relevant while also meeting course standards and outcomes. As a doctoral student, I am interested in the ways students perceive their own learning versus how instructors perceive student learning based on knowledge acquisition, and I find the methodology and framework Pee used to study perception of learning interesting.

References

Pee, L. G. (2019). Enhancing the learning effectiveness of ill-structured problem solving with online co-creation. Studies in Higher Education, 45(11), 2341-2355. https://doi.org/10.1080/03075079.2019.1609924