To capture the distortion of exploratory activity typical of patients with spatial neglect, traditional diagnostic methods and new virtual reality applications use confined workspaces that limit patients’ exploration behavior to a predefined area. Our aim was to overcome these limitations and enable the recording of patients’ biased activity in real, unconfined space.
Methods:
We developed the Free Exploration Test (FET) based on augmented reality technology. Using a live stream via the back camera on a tablet, patients search for a (non-existent) virtual target in their environment, while their exploration movements are recorded for 30 s. We tested 20 neglect patients and 20 healthy participants and compared the performance of the FET with traditional neglect tests.
Results:
In contrast to controls, neglect patients exhibited a significant rightward bias in exploratory movements. The FET had high discriminative power (area under the curve = 0.89) and correlated positively with traditional tests of spatial neglect (Letter Cancellation, Bells Test, Copying Task, Line Bisection). The optimal cut-off point for the averaged bias of exploratory activity was 9.0° to the right; it distinguished neglect patients from controls with 85% sensitivity.
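As a hedged illustration of how such a cut-off can be derived, the sketch below runs a standard ROC analysis with Youden's J statistic on invented bias values; the study's actual data and analysis code are not reproduced here, and scikit-learn is an assumed dependency.

```python
# Illustrative sketch (not the study's code): deriving an optimal cut-off for a
# continuous bias measure from ROC analysis. All values below are made up;
# the study itself reports AUC = 0.89 and a 9.0-degree cut-off.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
# Hypothetical averaged exploration bias in degrees (positive = rightward).
bias_controls = rng.normal(loc=0.0, scale=6.0, size=20)
bias_patients = rng.normal(loc=15.0, scale=10.0, size=20)

scores = np.concatenate([bias_controls, bias_patients])
labels = np.concatenate([np.zeros(20), np.ones(20)])  # 1 = neglect patient

auc = roc_auc_score(labels, scores)
fpr, tpr, thresholds = roc_curve(labels, scores)
youden_j = tpr - fpr                   # Youden's J per candidate threshold
best = np.argmax(youden_j)
print(f"AUC = {auc:.2f}")
print(f"Optimal cut-off = {thresholds[best]:.1f} deg, "
      f"sensitivity = {tpr[best]:.0%}, specificity = {1 - fpr[best]:.0%}")
```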
Discussion:
The FET offers a time-efficient (execution time: ∼3 min), easy-to-apply, and gamified assessment of free exploratory activity. It supplements traditional neglect tests by providing unrestricted recording of exploration in the real, unconfined space surrounding the patient.
This chapter will explore the use of digital technologies to develop psychomotor procedures when learning with our bodies. This includes the use of video, images and annotations to practise technique or strategy in physical education, such as improving a cricket bowling technique, or to review and analyse team performance and gameplay following a match. It could involve using video or audio to develop musical instrument technique, or to improve public speaking or other acting or speaking skills in drama. It could be used to develop choreography or dance technique, or to practise speaking a new language. Psychomotor procedures are also involved in learning to form letters when writing and in acquiring the manual skill of typing.
This chapter begins with a theory-based explanation of psychomotor procedures and how they are incorporated in some of the key models of knowledge such as Bloom’s Taxonomy and Marzano and Kendall’s New Taxonomy. It then considers how you can use digital tools to develop psychomotor procedures in curriculum subjects.
Medical resuscitations in rugged prehospital settings require emergency personnel to perform high-risk procedures in low-resource conditions. Just-in-Time Guidance (JITG) delivered through augmented reality (AR) may be a solution. There is little literature on the utility of AR-mediated JITG tools for facilitating the performance of emergent field care.
Study Objective:
The objective of this study was to investigate the feasibility and efficacy of a novel AR-mediated JITG tool for emergency field procedures.
Methods:
Emergency medical technician-basic (EMT-B) and paramedic cohorts were randomized to either video training (control) or JITG-AR guidance (intervention) groups for performing bag-valve-mask (BVM) ventilation, intraosseous (IO) line placement, and needle decompression (Needle-d) in a medium-fidelity simulation environment. In the intervention condition, subjects used an AR technology platform to perform the tasks. The primary outcome was participant task performance; the secondary outcome was participant-reported acceptability. Participant task score, task time, and acceptability ratings were reported descriptively and compared between the control and intervention groups using chi-square analysis for binary variables and unpaired t-testing for continuous variables.
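The following minimal sketch illustrates the two statistical comparisons named above using scipy.stats; the contingency table and timing values are invented for illustration and are not study data.

```python
# Minimal sketch of the comparisons described above (invented numbers only).
import numpy as np
from scipy import stats

# Binary variable (e.g., task passed yes/no) -> chi-square test on a 2x2 table:
#                 passed  failed
table = np.array([[12, 3],    # control group
                  [ 9, 6]])   # JITG-AR group
chi2, p_binary, dof, expected = stats.chi2_contingency(table)

# Continuous variable (e.g., task time in seconds) -> unpaired t-test:
control_times = np.array([92, 105, 88, 110, 97, 101])
jitg_times = np.array([121, 134, 118, 140, 126, 131])
t_stat, p_cont = stats.ttest_ind(control_times, jitg_times)

print(f"chi-square p = {p_binary:.3f}, unpaired t-test p = {p_cont:.3f}")
```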
Results:
Sixty participants were enrolled (mean age 34.8 years; 72% male). In the EMT-B cohort, there was no difference in average task performance score between the control and JITG groups for the BVM and IO tasks; however, the control group had higher performance scores for the Needle-d task (mean score difference 22%; P = .01). In the paramedic cohort, there was no difference in performance scores between the control and JITG groups for the BVM and Needle-d tasks, but the control group had higher task scores for the IO task (mean score difference 23%; P = .01). For all task and participant types, the control group performed tasks more quickly than the JITG group. There was no difference in participant usability or usefulness ratings between the JITG and control conditions for any of the tasks, although paramedics reported they were less likely to use the JITG equipment again (mean difference 1.96 rating points; P = .02).
Conclusions:
This study provided preliminary evidence that AR-mediated guidance for emergency medical procedures is feasible and acceptable. These observations, coupled with AR’s promise for real-time interaction and ongoing technological advancements, suggest a potential role for this modality in training and practice that justifies future investigation.
Immersive learning technologies offer K–12 English learners simulated contexts for language acquisition through virtual interactions, influencing learner attitudes and enhancing cross-curricular skills. While past literature reviews have explored learners’ English skills and emotions, few have delved into the learning effectiveness of immersive technologies for K–12 students. This systematic review analyzed 33 studies from 2012 to 2021, focusing on research designs, the role of immersive technologies in English learning, and the theoretical underpinnings of these studies. Results highlight the methods used to gauge learning effectiveness, the ways immersive technologies bolster learners’ attitudes and skills, and a noticeable gap in theoretical grounding. Recommendations for future research are provided.
This study clarified differences in understanding and satisfaction between face-to-face and online delivery of radiation emergency medical preparedness (REMP) training.
Methods:
The training was held at Hirosaki University between 2018 and 2022, with 46 face-to-face participants and 25 online participants.
Results:
Face-to-face training was significantly more understandable than online training for the use of the Geiger counter (P < 0.05), but the educational effect of virtual reality (VR) was not significantly different from that of actual practice. For the team exercise of taking care of the victims, online training resulted in significantly higher understanding (P < 0.05).
Conclusions:
Interactive exercises can be conducted online with equipment sent to learners, and VR is similarly effective. Videos were more effective in helping first-time participants learn the practical process from a bird’s-eye view, especially for team-based medical procedures.
As part of the digital transformation towards Industry 4.0, the tasks of staff on the shop floor are changing. Despite increasing automation, complex assembly steps still have to be carried out by humans, especially for complex, variant-rich products whose assembly cannot be fully automated for various reasons. Due to increasing individualization and the steadily growing complexity of products, providing the right information at the right time and in the right place is becoming more important. In this context, the visualization of information via novel technologies such as augmented reality plays a crucial role in achieving an efficient and error-free production process. This paper compiles existing challenges in using augmented reality as a visualization form for an assistance system. On the one hand, the challenges originate from a systematic literature review and are organized according to predefined categories. On the other hand, these challenges are complemented and compared with findings from expert interviews conducted with production employees of two European commercial vehicle manufacturers. The analysis of the two methods highlights the need for further research.
Recognition skills refer to the ability of a practitioner to rapidly size up a situation and know what actions to take. We describe approaches to training recognition skills through the lens of naturalistic decision-making. Specifically, we link the design of training to key theories and constructs, including the recognition-primed decision model, which describes expert decision-making; the data-frame model of sensemaking, which describes how people make sense of a situation and act; and macrocognition, which encompasses complex cognitive activities such as problem solving, coordination, and anticipation. This chapter also describes the components of recognition skills to be trained and defines scenario-based training.
Augmented reality technology enables the creation of training that more closely resembles real-world environments without the cost and complexity of organizing large-scale training exercises in high-stakes domains that require recognition skills (e.g., military operations, emergency medicine). Augmented reality can be used to project virtual objects such as patients, medical equipment, colleagues, and terrain features onto any surface, transforming any space into a simulation center. Augmented reality can also be integrated into an existing simulation center. For example, a virtual patient can be mapped onto a physical manikin so learners can practice assessment skills on the highly tailorable virtual patient, and practice interventions on the physical manikin using the tools they use in their everyday work. This chapter sets the stage by describing how the author drew from their own experiences, reviewed scientific literature, and consulted with skilled instructors to articulate eleven design principles for creating augmented reality training.
The Handbook of Augmented Reality Training Design Principles is for anyone interested in using augmented reality and other forms of simulation to design better training. It includes eleven design principles aimed at training recognition skills for combat medics, emergency department physicians, military helicopter pilots, and others who must rapidly assess a situation to determine actions. Chapters on engagement, creating scenario-based training, fidelity and realism, building mental models, and scaffolding and reflection use real-world examples and theoretical links to present approaches for incorporating augmented reality training in effective ways. The Learn, Experience, Reflect framework is offered as a guide to applying these principles to training design. This handbook is a useful resource for innovative training design that leverages the strengths of augmented reality to create an engaging and productive learning experience.
The COVID-19 pandemic has accelerated the growing global interest in the role of augmented and virtual reality in surgical training. While this technology is growing at a rapid rate, its efficacy remains unclear. To that end, we offer a systematic review of the literature summarizing the role of virtual and augmented reality in spine surgery training.
Methods:
A systematic review of the literature was conducted on May 13th, 2022. PubMed, Web of Science, Medline, and Embase were reviewed for relevant studies. Studies from both orthopedic and neurosurgical spine programs were considered. No restrictions were placed on the type of study, virtual/augmented reality modality, or type of procedure. Qualitative data analysis was performed, and all studies were assigned a Medical Education Research Study Quality Instrument (MERSQI) score.
Results:
The initial review identified 6752 studies, of which 16 were deemed relevant and included in the final review, examining a total of nine unique augmented/virtual reality systems. These studies were of moderate methodological quality, with a MERSQI score of 12.1 ± 1.8; most were conducted at single-center institutions and had unclear response rates. Statistical pooling of the data was limited by the heterogeneity of the study designs.
Conclusion:
This review examined the applications of augmented and virtual reality systems for training residents in various spine procedures. As this technology continues to advance, higher-quality, multi-center, and long-term studies are required to support the adoption of VR/AR technologies in spine surgery training programs.
Specialty on-call clinicians cover large areas and complex workloads. This study aimed to assess clinical communication using the mixed-reality HoloLens 2 device within a simulated on-call scenario.
Method
This was a randomised, within-participant, controlled study. Thirty ENT trainees used either the HoloLens 2 or a traditional telephone to communicate a clinical case to a consultant. The quality of the clinical communication was scored objectively and subjectively.
Results
Clinical communication using the HoloLens 2 scored significantly higher than telephone communication (n = 30) (11.9 of 15 vs 10.2 of 15; p = 0.001). Subjectively, consultants judged more communication episodes to be inadequate when using the telephone (7 of 30) than the HoloLens 2 (0 of 30) (p = 0.01). Qualitative feedback indicated that the HoloLens 2 was easy to use and would add value during an on-call scenario with remote consultant supervision.
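For readers curious how a within-participant score comparison of this kind can be run, here is a small sketch with invented paired scores; the authors' exact statistical procedure is not specified here, so a paired t-test and a Wilcoxon signed-rank test are shown merely as two common options.

```python
# Illustrative sketch only (not the authors' analysis): each trainee
# communicates one case per modality, yielding paired scores out of 15.
# All scores below are invented.
import numpy as np
from scipy import stats

hololens_scores = np.array([12, 13, 11, 12, 14, 10, 12, 13, 11, 12])
telephone_scores = np.array([10, 11,  9, 10, 12,  9, 11, 10,  9, 11])

t_stat, p_paired = stats.ttest_rel(hololens_scores, telephone_scores)
w_stat, p_wilcoxon = stats.wilcoxon(hololens_scores, telephone_scores)
print(f"paired t-test p = {p_paired:.4f}, Wilcoxon p = {p_wilcoxon:.4f}")
```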
Conclusion
This study demonstrated the benefit that mixed-reality devices such as the HoloLens 2 can bring to clinical communication by increasing the accuracy of communication and the confidence of users.
This chapter reviews the vast world of biotechnology applications to human health and defines the terminology used in the rest of the book. It presents an overview of the industry, its value chains, and the specific types of human health products covered in this text. A time-tested way to analyze an industry’s attractiveness for new entrants is presented using Porter’s five forces model. Technology trends such as mobile health, artificial intelligence, 3D printing, cell and gene therapy, and robotics are presented in the context of the mission of improving human health. The overall process of developing new products across the drug, device, and diagnostic sectors is reviewed. The reader will leave this chapter with a 30,000-foot view of the industry dynamics and an understanding of the context within which product commercialization is to be done.
Three-dimensional rotational angiography has become a mainstay of congenital cardiac catheterisation. Augmented reality is an exciting and emerging technology that allows for interactive visualisation of 3D holographic images in the user’s environment. This case series reports on the feasibility of intraprocedural augmented reality visualisation of 3D rotational angiography in five patients with CHD.
Stroke education is a key factor in minimising secondary stroke risk, yet worldwide stroke education rates are low. Technology has the potential to increase stroke education accessibility. One technology that could be beneficial is augmented reality (AR). We developed and trialled a stroke education lesson using an AR application with stroke patients and significant others.
Methods:
A feasibility study design was used. Following development of the AR stroke education lesson, 19 people with stroke and three significant others trialled the lesson and then completed a customised mixed-methods questionnaire. The lesson involved narrated audio while participants interacted with a model brain via a tablet. Information about participant recruitment and retention, usage, and perceptions was collected.
Results:
Fifty-eight percent (n = 22) of eligible individuals consented to participate. Once recruited, 100% of participants (n = 22) were retained. Ninety percent of participants used the lesson once. Most participants used the application independently (81.82%, n = 18), had positive views about the lesson (over 80% across items including enjoyment, usefulness and perception of the application as a good learning tool) and reported improved confidence in stroke knowledge (72.73%, n = 16). Confidence in stroke knowledge post-lesson was associated with comfort using the application (p = 0.046, Fisher’s exact test) and perception of the application as a good learning tool (p = 0.009, Fisher’s exact test).
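As a brief illustration of the reported association tests, the sketch below applies scipy's Fisher's exact test to a hypothetical 2 × 2 table; the counts are invented and do not reproduce the study data.

```python
# Toy sketch of a Fisher's exact test on a 2x2 contingency table
# (invented counts, for illustration only).
from scipy import stats

#                confident  not confident
table = [[14, 2],   # comfortable with the application
         [ 2, 4]]   # not comfortable
odds_ratio, p_value = stats.fisher_exact(table)
print(f"Fisher's exact test: OR = {odds_ratio:.2f}, p = {p_value:.3f}")
```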
Conclusions:
Technology-enhanced instruction in the form of AR is feasible for educating patients and significant others about stroke. Further research following refinement of the lesson is required.
Augmented reality (AR) combines digitally generated 3D content with the real-world objects that users are looking at. The “virtual” computer-generated 3D content is overlaid on a view of the real world through a specialized display. All augmented reality technologies involve some form of display technology that combines real and virtual content – including headset devices, camera-enabled smartphones and tablets, computer-based webcams, and projectors displaying interactive images on a physical surface. These technologies support real-time tracking of hands, 3D objects, and bodies as they push on or touch virtual objects. This enables more natural interaction between the learner and the virtual content. AR technologies support learning by allowing learners to interact with 3D representations; they enable embedded assessments; they support groups of learners engaging with shared virtual objects; and they tap into a child’s natural inclination to play and experiment by moving around and touching and manipulating objects.
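As a concrete, hedged example of the camera-based variant described above, the sketch below uses OpenCV's ArUco module to detect a printed fiducial marker in a webcam stream and draw a virtual overlay anchored to it. It assumes the opencv-contrib-python package and the OpenCV 4.7+ detector API; none of this is prescribed by the chapter itself.

```python
# Minimal marker-based AR sketch: anchor virtual content to a printed marker.
import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

cap = cv2.VideoCapture(0)                      # default camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _rejected = detector.detectMarkers(gray)
    if ids is not None:
        for quad in corners:                   # 4 corner points per marker
            pts = quad.reshape(-1, 2).astype("int32")
            cv2.polylines(frame, [pts], True, (0, 255, 0), 2)  # anchor outline
            cv2.putText(frame, "virtual object anchored here",
                        (int(pts[0][0]), int(pts[0][1]) - 10),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    cv2.imshow("AR overlay sketch", frame)
    if cv2.waitKey(1) == 27:                   # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
```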
This chapter reviews media comparison research on the effects of various immersive technologies, including virtual reality and two types of mixed reality, augmented reality and augmented virtuality, on learning outcomes, as well as some boundary conditions for these effects. In sum, previous meta-analyses report that low-immersion virtual reality (d = .22–.41) and low-immersion augmented reality (d = .46–.68) improve learning outcomes compared to other instructional media, with small-to-medium-sized effects. However, high-immersion virtual reality (median d = .10) and high-immersion augmented reality (median d = .16) are less promising. Research on augmented virtuality is sparse, but shows positive effects on learning (median d = .45) based on a few studies. Theoretical implications of these immersive technologies regarding cognitive frameworks, as well as their practical implications for the future of technology in the classroom, are discussed.
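For readers unfamiliar with the effect-size metric cited throughout, Cohen's d is the difference between two group means divided by their pooled standard deviation; the sketch below computes it on invented scores purely for illustration.

```python
# Minimal sketch of Cohen's d, the standardized mean difference used above.
# d = (mean1 - mean2) / pooled standard deviation; scores are invented.
import numpy as np

def cohens_d(group1, group2):
    n1, n2 = len(group1), len(group2)
    pooled_var = ((n1 - 1) * np.var(group1, ddof=1) +
                  (n2 - 1) * np.var(group2, ddof=1)) / (n1 + n2 - 2)
    return (np.mean(group1) - np.mean(group2)) / np.sqrt(pooled_var)

ar_group = np.array([78, 82, 75, 88, 90, 73, 85, 80])   # e.g., AR lesson
control = np.array([70, 76, 68, 81, 84, 69, 77, 74])    # e.g., conventional media
print(f"d = {cohens_d(ar_group, control):.2f}")
```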
Enhancing the appearance of physical prototypes with digital elements, also known as mixed prototyping, has proved to be a valuable approach in the product development process. However, its adoption is limited, partly by the time and expertise required to author the digital content. This paper presents a content authoring tool that aims to improve user acceptance by reducing the specific competence needed for segmentation and UV mapping of the 3D model used to implement a mixed prototype. Some of the tasks normally carried out in 3D modelling software have been transferred to simpler manual tasks applied directly to the physical prototype. The proposed tool recognises these manual inputs through a computer-vision algorithm and automatically manages the segmentation and UV mapping tasks, freeing the user from a task that would otherwise require complete engagement. To provide a preliminary evaluation of the tool’s effectiveness and potential, it was used in a case study to build the mixed prototype of a coffee machine. The result demonstrated that the tool can correctly segment the 3D model of a physical prototype into its relevant parts and generate their corresponding UV maps.
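The paper's algorithm is not reproduced here, but the flavour of the computer-vision step (recognising manual marks on the physical prototype to delimit regions) can be illustrated with a toy OpenCV sketch; the colour range, file name, and area threshold are all assumptions made for illustration.

```python
# Toy illustration only (not the authors' algorithm): detect hand-drawn coloured
# marks on a photo of a prototype via HSV thresholding, then extract each marked
# region as a contour that could seed segmentation of the corresponding 3D model.
import cv2
import numpy as np

image = cv2.imread("prototype_photo.jpg")        # hypothetical input photo
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

# Threshold for, e.g., blue marker strokes (range chosen arbitrarily).
mask = cv2.inRange(hsv, np.array([100, 120, 70]), np.array([130, 255, 255]))
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((7, 7), np.uint8))

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for i, contour in enumerate(contours):
    if cv2.contourArea(contour) > 500:           # ignore small specks
        x, y, w, h = cv2.boundingRect(contour)
        print(f"marked region {i}: bounding box = ({x}, {y}, {w}, {h})")
```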
This study investigates the use of augmented reality (AR) technology in the field of maritime navigation and how researchers and designers have addressed AR data visualisation. The paper presents a systematic review analysing the publication type, the AR device, which information elements are visualised and how, the validation method, and the technological readiness. Eleven AR maritime solutions identified from scientific papers are studied and discussed in relation to previous navigation tools. It is found that basic information such as course, compass degrees, boat speed and geographic coordinates continues to be fundamental information to represent even in AR maritime solutions.