TY - JOUR AU - Spano, Lucio Davide AB - Abstract Training operators to efficiently operate critical systems is a cumbersome and costly activity. A training program aims at modifying operators’ knowledge and skills about the system they will operate. The design, implementation and evaluation of a ‘good’ training program is a complex activity that requires multi-disciplinary work from multiple stakeholders. This paper proposes the combined use of task descriptions and augmented reality (AR) technologies to support training activities both for trainees and instructors. AR interactions offer the unique benefit of bringing together the cyber and the physical aspects of an aircraft cockpit, thus providing support to training in this context that cannot be achieved by software tutoring systems. On the instructor side, the LeaFT-MixeR system supports the systematic coverage of planned tasks as well as the constant monitoring of trainee performance. On the trainee side, LeaFT-MixeR provides real-time AR information supporting the identification of objects with which to interact, in order to perform the planned task. The paper presents the engineering principles and their implementation to bring together AR technologies and tool-supported task models. We show how these principles are embedded in the LeaFT-MixeR system as well as its application to the training of flight procedures in aircraft cockpits. RESEARCH HIGHLIGHTS Use of augmented reality (AR) technologies to support training of complex systems. We focus the contribution on the engineering aspects of the training-support system. An architecture and its associated environment, named LeaFT-MixeR, for the combined use of task models and AR technologies to support the training activities of both trainees and instructors in aviation. On the instructor side, LeaFT-MixeR supports the systematic coverage of planned tasks as well as the constant, real-time monitoring of trainee performance. On the trainee side, LeaFT-MixeR provides real-time AR information identifying objects with which to interact for completing the planned tasks. Discussion of the motivation for such an architecture as well as its functioning when applied to the training of flight procedures in aircraft cockpits. The engineering concepts can be reused with other technologies and in other domains. 1. Introduction Training is a complex activity aiming at transforming trainees by improving their knowledge and their skills. In the context of critical systems, the level of knowledge and performance is of prime importance as operators are at the same time a source of errors with potentially catastrophic consequences and a source of safety, recovering situations that would otherwise be impossible to save. In aviation, these two aspects are deeply acknowledged as, according to (Krey, 2007), about 79% of US fatal accidents in 2006 were attributed to pilot error. On the other hand, (Reason, 2008) demonstrated that the ‘human as a hero’ is sometimes the last barrier preventing accidents from occurring. In this domain, training programs are task-based as trainee pilots are qualified according to their knowledge and their capability of executing predefined procedures (Joint Aviation Authorities, 2006). Instructors teach trainee pilots the various predefined procedures to operate an aircraft and they assess to what extent trainee pilots are able to apply these procedures. The development and execution of such training programs are systematic, task-based and scenario-based (i.e. 
predefined scenarios are identified with various stakeholders to integrate training sessions in meaningful contexts of operations) as they positively impact learning and performance (Fowlkes et al., 2010). Several tools may be selected and used for the training programs (aeroplane, flight simulator, computer-based applications, etc.). These tools and setups provide support to the trainee to learn and rehearse operations with the system, but they do not provide explicit support for the instructors’ activities related to guiding and monitoring trainees. In the application domain of industrial maintenance, augmented reality (AR) has been studied for many years, starting in the military (Sims, 1994), to train manufacturing operators (Haritos & Macchiarella, 2005) for repair and assembly tasks (Boeing, 1994). It has been shown to provide learning benefits and to facilitate instructors’ activities (Tang et al., 2003), in particular reducing the error rate and the cognitive effort required for completing the tasks. In the area of medicine and healthcare, teaching systems have been developed in multiple areas such as gynaecology or surgery, but they were dedicated to learning the 3D organization of the human body (Kancherla et al., 1995) and were not connected to specific tasks to be performed. In this paper, we show how to integrate AR technologies within a flight simulator coupled with a task model simulator, resulting in a training environment, called LeaFT-MixeR, that supports instructors’ and trainees’ activities. Section 2 positions the proposed engineering framework with respect to existing frameworks dedicated to the systematic training of procedural work. Section 3 presents the overall architecture and the main components of the LeaFT-MixeR environment. Section 4 presents the main training session activities enabled by LeaFT-MixeR. Section 5 presents the main steps to prepare and conduct a training session, while Section 6 demonstrates the application of the stepwise approach to the training procedure ‘Take Off’. Section 7 presents the preliminary qualitative results from running a training session with an instructor pilot, a professional pilot and a student pilot in an A320 cockpit and its associated flight simulator. Section 8 discusses benefits and limitations of the current approach. Section 9 concludes the paper and introduces possible developments of the work. 2. Related Work This section presents previous research work that has been carried out in the domains related to the proposed tool-supported approach. This related work is decomposed into five subsections and is concluded by the identification of benefits from the joint use of AR and Task Models for training. 2.1. AR for operational learning AR technologies have been investigated for many years in the context of operational training, i.e. for learning procedures or how to accomplish tasks that include physical object manipulations. This type of training has targeted multiple application domains (e.g. manufacturing industry, aviation, nuclear power plants, etc.) (Palmarini et al., 2018). AR offers different advantages in this setting. First, it facilitates the observation of events that cannot easily be observed with the naked eye (Wu et al., 2013). Second, it supports visualizations and representations that are otherwise inaccessible, thus enabling experimentation with low cost and risk (Kaufmann & Dünser, 2007). 
AR demonstrated a positive effect on retrieving information from memory, supporting a virtual operation execution in a real-world setting (Navab, 2004). In addition, AR is the only computerized means to provide guidance to learn to use cyber-physical systems at the same time as interacting with the cyber-physical system itself (Kerpen et al., 2016). AR promotes enhanced learning achievement (Akçayir & Akçayir, 2017). Furthermore, AR provides support to deictic actions (e.g. ‘do this, watch this’) that are recurrent actions performed by the instructor during training sessions to provide guidance to the trainees (Gurevich et al., 2012). Lastly, augmented feedback effectively enhances motor learning (Sigrist et al., 2013). In the operational field, traditional training methods (e.g. manuals) are less effective than AR (Haritos & Macchiarella, 2005), and they can lead maintainers to frustration, decreasing the quality of their performance (Hincapié et al., 2011). There are many systems and prototypes applying AR for maintenance tasks. A comprehensive review of the state-of-the-art is available in (Palmarini et al., 2018). In this section, we focus on its application to the aviation domain using head-mounted displays (HMDs). Most of these systems focus on maintenance procedures. The definition of the term augmented reality is attributed to a Boeing researcher in 1990, Tom Caudell, who created a prototype for connecting wires in an aircraft (Lee, 2012). However, interest in AR applications increased after research demonstrated the effectiveness of AR-based HMDs (Tang et al., 2003). In particular, the information spatialization reduced the error rate and lowered the mental effort for completing the task. Such efforts resulted in systems that overlaid images for guiding technicians in executing operational tasks (Haritos & Macchiarella, 2005) or in the positive consideration of AR-based learning material and its consequent adoption into standard content management systems (Christian et al., 2007). Light aircraft maintenance was one of the first case studies in the field. An informal task model drove the implementation of an AR application and related HMD prototype for the Cessna C.172P (Crescenzio et al., 2011). Through computer vision techniques and the pre-loaded procedure sequence, the application supported the maintainer in learning and executing a procedure. One of the most important limitations of AR guidance in the operational field is the restricted mobility (Henderson & Feiner, 2011): the need for computational power usually required wired connections to the hardware, which limited the technician’s movements. The introduction of the Microsoft HoloLens represents an attempt to provide general-purpose hardware for HMD-based AR. The US Department of Defence experimented with this device for guiding technicians in aircraft maintenance tasks (Hebert, 2019). The AR-based support increased the quality of the task results, decreasing the discrepancies within the participant group. Such an increase in quality was registered in both sequential and non-sequential tasks. The former is mostly used when safety is crucial, so the study concludes that AR may find application from less crucial inspections to important checks where accuracy is the priority. However, the AR technology does not solve the issues related to the design of an exhaustive training programme, its execution and the assessment of the progression of the trainees. 2.2. 
Training for safety-critical systems In the field of critical systems, such as air transportation, users (e.g. pilots, air traffic controllers) must be qualified and certified by an authority (e.g. Joint Aviation Authorities, EUROCONTROL) or by their employer (depending on the application domain) before performing operational work. Training practices for critical systems are generally highly regulated and systematic, as presented in (Joint Aviation Authorities, 2006), and structured around recommendations. These recommendations usually contain the following: A precise description of the need to identify users’ tasks; A description of the different training phases and their objectives; The types of teaching materials to be used for each phase (courses, computer-assisted training, simulator training, etc.); The importance of objectives and performance measures. Figure 1 presents an excerpt from the pilot skills evaluation form (Joint Aviation Authorities, 2006). This form describes the manoeuvres and procedures that pilots must be able to perform, the means by which they have been trained (A for aeroplane, FS for flight simulator, FTD for flight training device and OTD for other training device) and whether the skills are considered acquired (the letter M in the corresponding column indicates that the test of this procedure is mandatory) and by which instructor (last column). While training on the real aircraft presents clear benefits, training on simulators allows practising dangerous and/or hard-to-trigger scenarios. This is why training in safety-critical systems involves multiple technologies, and this paper proposes to extend the use of AR technologies. FIGURE 1. Excerpt from an aircraft pilot skills evaluation form (Joint Aviation Authorities, 2006). 2.3. Task-based approaches to training Systematic approaches to training (SAT), detailed in (Reiser, 2001) and (Army Field, 1984), are called systematic because they provide a process applicable each time a new training program must be built. The most widely used framework for training in critical contexts is commonly named Instructional System Development (ISD), the name given to it by the U.S. Department of Defense (1975). ISD develops guidelines concerning the design, setup, evaluation and maintenance of an educational or training program. This process is composed of several phases that must be followed systematically: analysis, design, development, implementation and evaluation, which form the acronym ADDIE (Army Field, 1984) used to refer to it. It also highlights the importance of the following: Methodology for ensuring that a team remains qualified over time; Job and task analysis of the operators; Performance goals and measures expected for the team; Continual evaluation of the training program (external and internal) for each phase. The first component of ADDIE (the analysis, see Fig. 2) is a key aspect of instructional design practices (Gagné, 1985) and is a mandatory stage of the training design process. This phase describes in a very simple way how to decompose a job into tasks and sub-tasks. The design and construction of the training program (covered by the design, development and implementation phases of the ISD process) are followed by a performance evaluation that must be passed by each trainee. 
Tasks are a central aspect of this approach as they are the starting point of the process, they frame the requirements for the users’ capabilities, the training structure and content, and they are the reference for the trainee and for the training evaluation. FIGURE 2. Steps of the ADDIE approach to training development (Army Field, 1984). Task modelling is a technique to structure and to record the information about user tasks in a systematic way, fitting the ADDIE framework perfectly. It provides support to identify information and knowledge required to perform a task in an exhaustive manner, while informal task representations may be incomplete, inconsistent and ambiguous. The use of task models within systematic approaches to training activity provides support to several steps of the training program development (Martinie et al., 2011a). It provides support to list and select operators’ tasks during the analysis phase of the ADDIE process. It also allows deriving all the possible scenarios for the job, which are then used to prepare the training (design and development phases), for the definition of objectives and tests, and to execute training sessions (implementation phase). 2.4. Engineering model-based training When supported by software tools, task models may be co-executed synergistically with interactive applications (Barboni et al., 2010, Martinie et al., 2018, 2015), provided that interactive tasks in task models have been connected to input and output functions offered by the interactive application. This synergistic use of task models with interactive applications provides support for generating scenarios and executing them on the interactive application (Campos et al., 2017), but also for detecting user activity with a system at runtime (Parvin et al., 2018). It also provides support for training implementation and training execution as demonstrated in Martinie et al. (2011a). For training implementation, it provides support to generate scenarios and to associate them with the planned training session. During the execution of training sessions, it provides support for driving interaction, for providing help to the trainees on how to reach their goals, for guiding the trainee to apply and to complete the correct procedures to be learned and for monitoring the progression of the trainees throughout the training program. However, none of the approaches above exploited AR technologies, which bring additional complexity and have to be considered as specific dedicated input and output devices. We summarize hereafter the multiple possibilities offered by the co-execution of task models with an interactive system (whether it is model-based or textual code) at runtime: Task-driven: the commands in the system are triggered by the execution of a task using the task model simulator. This capability can be used to enable the instructor to trigger a function in the system (related to a trainee task) to be executed. It can also be used to enable the trainee to learn the tasks to be performed by executing the task model step by step and by observing at the same time the evolution of the state of the system (in the case of an autonomous learning session). System-driven: the commands in the system are triggered with the UI of the system by the trainee. 
For each interaction, the corresponding task is tracked in the task model, enabling either the instructor to check the trainee’s progress or the trainee to self-evaluate their own progress in the case of an autonomous learning session. Scenario-driven: the trainee executes a procedure according to a scenario or observes the execution of a procedure according to a scenario (a scenario that has been produced beforehand with the task models). In the first case, the co-execution environment monitors the progression of the trainee with regard to the tasks planned in the scenario and records the gaps between the trainee’s tasks and the scenario tasks. In the second case, the trainee learns the execution of the procedure according to the scenario. 2.5. Software tools for assisting the instructors Instructors prepare and lead training sessions. For that purpose, they have to manage a large quantity of tasks as well as to select the relevant set of tasks to be executed and the sequence of their execution (learning events, progression) (Gagné, 1985). To support these instructors’ activities, software tools have been proposed. Corbalan et al. (2006) proposed a learning environment enabling the instructor to personalize tasks to be performed and showed that the personalization of learning tasks yields more efficient and more effective learning than a fixed sequence of learning tasks that is identical for all learners. Martinie et al. (2011a) proposed a task model-based environment that also provides support for the personalization of learning tasks and, in addition, provides support for individual trainee evaluation. With this environment, the instructor can produce scenarios from task models and select a set of scenarios to be learned by selected individual trainees during a training session. It also provides support for recording the actions performed by each trainee during the session. The instructor can then evaluate if the trainees reached the objectives or if further training sessions are required. We did not find further references to software tools that support instructors in their tasks of preparing and conducting task-based training. 2.6. Expected benefits from the synergistic use of AR, simulator and task model In this paper, we go beyond the state-of-the-art approaches applying AR guidance for flying procedures and routines, proposing an architecture for an advanced learning environment. We use task models for guiding the procedure definition, which supports inspection and validation when coupled with running applications (Barboni et al., 2010, Martinie et al., 2018, 2015). In addition, we define a set of general-purpose AR guidance elements for the Microsoft HoloLens. Considering the evidence in the literature, we engineered our learning environment in order to achieve different benefits, which will be discussed in the following sections. First, the AR application will support the trainee pilot during the execution of the different procedure steps, using a wireless HMD that supports full mobility. The goal is to exploit the benefits of AR in operational learning following recommendations documented in the literature. Second, using the task model formalization, we expect to provide a process leading to systematic learning. Indeed, beyond task identification and their temporal relationships, we propose a task modelling approach that supports the analysis of information needs, scenario formalization and training implementation. 
Finally, the architecture we propose in this paper combines the AR interface, the flight simulator and the task models. Such a combination offers the following advantages that derive from their synergistic use: It is able to detect what step of the task the trainee is accomplishing, supporting both automatic and instructor-driven feedback; It is able to provide cues to the trainee for executing the steps in the procedure according to the task modelling, providing contextual guidance and/or feedback to the trainee; It is able to show the instructor the currently accomplished step on a graphical representation of trainee tasks. The representation supports his/her intervention for simulating particular scenarios or situations, fostering the reinforcement of the procedure parts that the trainee needs most. It is able to monitor a change in the flight simulator state and provide AR feedback to the trainee. This allows supporting guidance in tasks that require a given configuration of the plane state (e.g. height, speed, etc.) for continuing the procedure. 3. The LeaFT-MixeR Environment We designed the LeaFT-MixeR (Learn Flight through Task-based simulations in Mixed Reality) environment for fostering active collaboration between an instructor and a trainee pilot flying (PF). It supports the co-execution of flight routines described with task models. The control of the routine can be shared between the two roles: the trainee advances its state by interacting with the cockpit, while the instructor is provided with an interactive graphical representation of the task model, which supports triggering the actions corresponding to the leaves. The environment handles the execution of the actions coming from both sources and the synchronization of the views. In this section, we show the overall environment architecture, the concerns assigned to the different components and how they work. 3.1. Overall architecture Figure 3 depicts the overall architecture of the learning environment, which consists of three main components. The first is the Flight Simulator, which simulates the behaviour of a real plane (in our work, we consider the Airbus A320). The second component in the learning environment is HoloPit, an AR application running on an HMD, which supports the trainee PF. The third component is the Task Simulator that supports synchronization between the task model and the HoloPit application. FIGURE 3. LeaFT-MixeR learning environment architecture. 3.2. Flight Simulator The flight simulator reacts to the actions of the PF on the cockpit controls. It is an important part of the proposed learning environment, but it was not developed specifically for this use: any flight simulator may be instrumented for sending notifications to the other software through a public API. We used two different flight simulators in our work. In development sandboxes, we used FlightGear, an open-source simulator that allowed us to test the whole environment on a single machine for debugging purposes. In such a configuration, we used the virtual cockpit models included in the open-source program for simulating the interaction with the cockpit controls. After the development phase, we deployed the solution on a physical reproduction of an Airbus A320 cockpit running Prepar3D as the flight simulation software. 
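To make this instrumentation concrete, the sketch below shows one possible way of polling FlightGear's property-tree interface over a TCP socket and notifying registered components when a plane parameter changes. It is only an illustrative sketch under stated assumptions: the host, port, command syntax and reply parsing are assumptions for the example and do not reproduce the actual LeaFT-MixeR adaptor code.

```python
import socket
import time

class FlightGearAdaptor:
    """Illustrative adaptor (not LeaFT-MixeR code): polls FlightGear's telnet/props
    interface and notifies listeners when a plane parameter changes. Host, port and
    reply parsing are assumptions for this sketch."""

    def __init__(self, host="localhost", port=5401):
        # Assumes FlightGear was started with a telnet/props server on this port.
        self.sock = socket.create_connection((host, port))
        self.listeners = {}    # property path -> list of callbacks
        self.last_values = {}  # property path -> last observed value

    def register(self, path, callback):
        # Other components (e.g. the Flight Event Manager) register for a plane parameter.
        self.listeners.setdefault(path, []).append(callback)

    def _send(self, line):
        self.sock.sendall((line + "\r\n").encode("ascii"))

    def get(self, path):
        # The props server typically answers: /some/path = 'value' (type)
        self._send("get " + path)
        reply = self.sock.recv(4096).decode("ascii", errors="ignore")
        return reply.split("'")[1] if "'" in reply else reply.strip()

    def set(self, path, value):
        # Push a change back to the plane state (no reply expected here).
        self._send(f"set {path} {value}")

    def poll(self, period=0.2):
        # Periodically re-read the registered properties and fire callbacks on change.
        while True:
            for path, callbacks in self.listeners.items():
                value = self.get(path)
                if self.last_values.get(path) != value:
                    self.last_values[path] = value
                    for callback in callbacks:
                        callback(path, value)
            time.sleep(period)
```

A production adaptor would rather subscribe to simulator events than poll, but the overall structure (register, read, write, notify) matches the role of the adaptor described next.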
For integrating the flight simulator, the LeaFT-MixeR environment needs an adaptor tailored specifically to the considered software in order to allow the other components in the learning environment to register for receiving notifications about the changes in the plane state. The changes in the plane state are related to user actions, which are described in a task model. 3.3. HoloPit The second component in the learning environment is HoloPit, an AR application running on an HMD, which supports the trainee PF (detailed in Section 4). The HoloPit application provides guidance to a trainee PF for completing a procedure in the cockpit, such as the procedures for take-off, landing, parking and securing, etc. The HoloPit application maintains internally a representation of the procedure steps derived from a task model (detailed in Section 5.1). At each procedure step, the trainee PF has to inspect or operate one or more controls in the cockpit. In addition, the HoloPit application provides feedback on the performed action, showing whether it was correct or not according to the considered procedure. The application shows guidance and feedback thanks to holograms overlaid on top of the cockpit controls. It updates the visualized holograms according to the progress made by the trainee PF in the procedure. Most of the time, such updates are a reaction to a change in the plane status reported by the flight simulator. They may occur either when the trainee PF operates a cockpit control or simply according to the evolution of the flight. In this case, HoloPit receives an event from the flight simulator component through the internal Flight Event Manager component and, according to the current routine status, the HoloPit application evaluates whether to proceed or not with the next task. HoloPit exploits the Microsoft Spatial Mapping between the virtual scene and the physical configuration of the cockpit (Airbus A320) for overlaying guidance and feedback information over the real aeroplane controls. The technique is user-independent, and the mesh caching requires running the cockpit scanning process only on the first run. It works on the Microsoft HoloLens (first generation), an HMD supporting AR experiences through the visualization of virtual elements blended into the real world. The HoloLens is a pair of smart glasses equipped with semi-transparent lenses that support the projection of holograms and allow the user to see through them, obtaining the illusion of a 3D object positioned in the real world. In addition, the HMD is provided with a depth camera that performs an on-the-fly scan of the surrounding environment and supports the absolute positioning of virtual objects in the real world. An RGB camera, as well as loudspeakers, are positioned above the user’s ears, and a microphone completes its hardware configuration. The device supports a basic gestural interaction, including hand pointing, selection confirmation (the click gesture, pinching the forefinger and the thumb) and the bloom gesture for opening the operating system menu (spreading the fingers of the dominant hand keeping the palm up). In addition, it supports vocal interaction through the Microsoft Speech API. 3.4. Task simulator The task simulator component supports synchronization between the execution of the task model and the flight simulation. The instructor can check the ongoing tasks through a graphical representation of the simulation of the task model in the task simulator. 
There are two ways of managing the co-execution: driven by the actions of the trainee PF on the cockpit controls or driven by the tasks triggered by the instructor with the task simulator. In the first case, each relevant change in the plane state or each action on the cockpit controls updates the runtime state of the task model. In the second case, the instructor may trigger one of the enabled tasks in the model and take control of it at any point in time during the procedure execution. In this way, the instructor may help the trainee to move forward, for instance, if the trainee is not able to complete a certain operation, or may explicitly trigger a task related to an event that requires special management by the PF, such as the loss of power on one engine. In both ways of managing the co-execution, the task simulator sends events to and receives events from the task event manager component inside the HoloPit application. 4. Flight Procedure Guidance in AR This section presents the high-level flows of interaction with the HMD in the cockpit. We suggest watching the demo video available at the following URL for a better grasp of the interaction: https://www.youtube.com/watch?v=uTRVeXcVOTo 4.1. Constraints on the field of view for designing procedure guidance The Microsoft HoloLens HMD is a powerful device but has, by construction, two main limitations that are worth discussing for understanding some design choices in our work. The first is the narrow field of view (FOV) for the virtual objects, estimated as about 30 degrees wide and 17 degrees high. This means that the user sees the virtual objects only when they are close to the centre of the FOV. A HoloLens application cannot rely on peripheral vision. We visually summarize the problem in Fig. 4: the red-dashed frustum represents the user’s FOV, while the green one represents the portion available for displaying virtual objects when wearing the HoloLens. The virtual objects that are outside of the green frustum are not visible, even if they are attached to physical ones (i.e. cockpit controls) contained inside the user’s FOV. Those partially contained in the green frustum are cropped. From now on, we will refer to the green frustum as the hologram field of view (HFOV), while the red one is the user’s real field of view (RFOV). FIGURE 4. Representation of the user and HoloLens fields of view in the cockpit: the user sees the controls contained in the red frustum, while in the green one he also sees the virtual objects (the proportions between the two are not accurate). The second limitation derives from the hologram projection on the lenses, which makes it difficult to support the occlusion between virtual and real objects. The lenses are closer to the user’s eyes than any other object in the real world, so virtual objects always occlude real ones. This has a consequence on the dynamic evolution of an AR scene. For instance, consider a world configuration where we have a real and a virtual object. When the real object moves between the user and the virtual one, the user will continue seeing the virtual object as if she could see through the real object. Such a problem may be solved if the scene is static (i.e. 
real objects do not move during the interaction), or by analysing in real time the data coming from the depth sensor. Both solutions have drawbacks: the first does not capture dynamic changes to the real object configuration, while the second is costly from a computational point of view. In our application, the problem is mainly related to the position of the hands, which may be occluded by the holograms even when the hands are closer to the user. We decided that a realistic occlusion effect was not worth the computational cost. 4.2. Guiding the trainee PF’s attention Due to the HFOV limitation, HoloPit can visualize information only in a restricted volume. Therefore, the AR interface includes a guidance system for attracting the trainee PF’s attention towards the position of a hologram that highlights a cockpit control. If the object of interest is outside the HFOV, HoloPit displays an indicator near the side of the HFOV base rectangle nearest to the object. The indicator has the shape of an arrow that points at the object. Figure 5 shows two sample indications: on the left part (A), the arrow is pointing towards the flap disarming controls that are below the navigational system in the pedestal. The user is looking up with respect to the control position, so the indicator is near the bottom side and points down. In Fig. 5B, the user has to operate on the overhead panel, so the arrow points up-right, and it is near the top-right corner. FIGURE 5. The arrow indicator guides the user’s attention towards a cockpit control for continuing the procedure. FIGURE 6. Highlighting a single cockpit control in HoloPit. 4.3. Guiding the trainee PF to interact with controls Once the trainee PF is looking in the correct direction, HoloPit highlights the control or the controls he needs to operate for continuing the procedure. The simplest way of highlighting a control is a ring positioned over the control that the PF has to use. Figure 6 shows a sample interaction sequence. The trainee PF has to turn on the external power, and the application highlights the corresponding switch. In Fig. 6A, the application shows a ring on top of the switch position on the overhead panel. In addition, it shows a text box explaining the operation he needs to complete in the current procedure step. The application also reads the same text aloud once. After the PF pushes the switch, the application receives the state change notification from the Flight Event Manager (see Section 3.1), it shows a green check confirming he completed the step correctly, and the box changes its text to ‘Good!’ (Fig. 6B). The feedback is completed by a text-to-speech call that reads this message. We require a more complex visualization when the trainee PF has to operate more than one control for completing the step. In such a case, the trainee PF has to complete a set of interactions with the different controls, but the order is not mandatory in the procedure. HoloPit shows a ring on top of each control involved in the step and a yellow rectangle that encloses the group. A text box explains the operation and lists the names of the involved controls. Figure 7 shows a sample step that involves multiple controls: turning on the engine generators. 
The plane we considered has two of them and, for taking off, they must both be turned on. For completing this step, the order does not matter. FIGURE 7. Highlighting multiple controls in HoloPit. Figure 7A shows the initial state: both generators are off, so HoloPit shows two red rings on top of the two switches and the text box explains to the trainee PF that he has to turn on both generators 1 and 2. When the trainee PF pushes the first button (e.g. generator 1), the application shows feedback for that switch, but maintains the ring over the second button, and the containing rectangle remains yellow. The text box updates its message, reporting only the remaining control (Fig. 7B). Finally, the trainee PF pushes the second switch, and the step is completed. The application shows a green check on top of each button, the rectangle turns green and the text panel shows a ‘Good!’ message (Fig. 7C). Besides the explicit operations on the cockpit controls, a procedure may include checks or time-based operations. In the first case, the procedure requires the trainee PF to look at a cockpit control in order to read the displayed value or to double-check the state of the aeroplane through an instrument. HoloPit supports such operations by displaying a yellow box around the instrument that has to be monitored. We consider the step completed when the trainee PF’s looking direction points at the box for more than 3 seconds. The HoloPit application provides similar feedback for time-based operations, such as waiting for the plane to gain a certain speed. Such events do not depend on a trainee PF action, but they require him to monitor some instrument. The application shows the yellow box around the instrument and a progress ring on top of it. When the aeroplane reaches the required status, the box turns green and acknowledges the step completion. Figure 8 shows a sample time-based procedure step. During take-off, the trainee PF needs to monitor the engine status and to wait until the engines reach 30% of their power before turning on the engine generators. The value is available in the monitors highlighted in Fig. 8A by the yellow box. The text box briefly describes what the trainee PF has to do in this step. Figure 8B shows the waiting ring that indicates to the trainee PF that he must wait for an external event before going on with the next step. When the engines reach the required value, the interface changes its state and displays the green box visible in Fig. 8C, which provides feedback on the step completion. FIGURE 8. An example of a time-based procedure step in HoloPit. 4.3.1. HoloPit application modes The instructor may configure the HoloPit application in different modes, namely guidance mode and test mode. In the guidance mode, HoloPit shows both the guidance and action feedback elements, supporting the learner in recalling the correct actions in the procedure. Such a mode is useful when the trainee PF is learning the procedure and still has to memorize the different steps. In the test mode, HoloPit hides the guidance elements in order to let the trainee PF recall the procedure steps autonomously. 
After each action on the cockpit controls, the application shows the green check feedback when the operation was correct or a red cross feedback when the action was wrong. The trainee and the instructor may use this mode for rehearsing and evaluating the learning outcome. 5. Steps for the Preparation and Execution of Training Procedures in LeaFT-MixeR In this section, we describe how LeaFT-MixeR exploits task models for defining flight procedures. In addition, we show how the different components in the environment communicate for reacting to the trainee PF’s operations on the cockpit controls, how the instructor manages the task simulation and how the environment is affected by changes in the plane state. 5.1. Training session setup preparation The main steps for preparing the training session are: to describe the procedures that the trainee PF has to learn in task models, to prepare the annotation file that contains the correspondences between interactive tasks, visual cues and the main simulated cockpit commands and, finally, to build the HoloPit application. 5.1.1. Task modelling Task models are suitable for formalizing, analysing, inspecting and verifying the procedures (Martinie et al., 2015). They focus on the actions that both the user and the system must perform for achieving a given goal, while they abstract from other design decisions that belong to more concrete levels, such as the visual layout of the output or the graphic controls used for collecting the input. If we consider the HoloPit application, the task model defines the set of actions that the trainee PF needs to perform, their temporal relationships, the tasks that are performed with the HoloLens, the tasks that are performed with the cockpit controls, the actions and the cognitive activities of the trainee PF and the tasks (functions) that are executed by the plane only (the system tasks). In order to support the correct execution of the operations in the cockpit, LeaFT-MixeR requires their formalization into a task model. For that purpose, we use the HAMSTERS-XL—Cockpit notation (Martinie et al., 2019), which is a customized version of the HAMSTERS notation for the description of user tasks in the cockpit (e.g. ‘grip motoric task’ for turning a knob). Pilots may interact with several systems of several types when accomplishing their goals: input and output devices, systems embedding input and output devices that are manipulated as a whole, hardware components and software applications. Task analysis requires identifying the systems manipulated by the users to accomplish their tasks. The HAMSTERS notation supports assigning a task subset to a specific input or output device. Thanks to this association in the task models, we are then able to explicitly assign the feedback and the guidance tasks to the HoloLens device. In addition, HAMSTERS allows specifying the information sent and received between the system and the user, which allows modelling, e.g. the guidance information received by the trainee PF or the commands sent to the system. The HAMSTERS notation also provides support for modelling large sets of tasks and for structuring them in several task models and subroutines (a subroutine is a special type of task model that can be used as a reference several times in one or more task models) (Martinie et al., 2011b). Figure 9 shows a sample subroutine defining how to release the brakes during the take-off procedure. 
FIGURE 9. The HAMSTERS subroutine for releasing the brake during take-off. The subroutine consists of two different activities, which the trainee PF may execute concurrently. The first, called ‘Head management’, is a subroutine that guides the trainee PF’s attention towards the brake pedal through the arrow hologram described in Section 4.2. The second activity (the sequence subtree, sibling of ‘Head Management’) shows the sample modelling of both HoloLens guidance and feedback. First, the HoloPit application shows the suggestion for completing the procedure step through an output interaction task (‘Display 3D Info’ in Fig. 9). The modelling language allows specifying both the suggestion text (information data ‘Inf: Suggestion Release the brake’ in Fig. 9) and how to highlight the cockpit control (information data ‘Inf: Brake circle’ in Fig. 9, which will then be used to display, through the HoloLens, a ring around the brake pedal for guiding the user’s attention, as explained in Section 4.3). The information intended to be displayed with the HoloLens is presented in the task models with the notation elements named ‘information data’ (depicted with the ‘Inf:’ box), such as the information data connected to the ‘Display 3D Info’ interactive output task and consumed by the two visual perception tasks ‘See the suggestion’ and ‘See the information’, which represent the delivery of the application message to the trainee PF. The purple box with a label starting with ‘i/o D:’ and connected to these tasks represents the input/output device with which the trainee PF has to interact (the HoloLens input/output device for this task). FIGURE 10. Positioning a guidance element in the cockpit through gaze pointing. After receiving the suggestion, the trainee PF decides to follow it and needs to elaborate an action plan. The cognitive tasks ‘Decide to follow the suggestion’ and ‘Analyse the suggestion’ represent such cognitive activities in the model. Then, the trainee PF executes a sequence of physical actions on the brake cockpit control: he stretches the right leg for releasing the brake, resulting in an input operation for the plane. This part of the sequence results in the motoric tasks ‘Leg Right Stretch’, ‘Leg Right Release the Brake’ and ‘Release Brake’ in the model. All tasks are performed with the ‘Brake’ input device (represented as a purple box whose label starts with ‘i/o D:’, connected to these tasks), one of the plane controls in the cockpit. Finally, such operation results in a change of the plane status and the operation is completed. The HoloPit application shows a confirmation for such change (the ‘Display 3D confirmation’ interactive output task) and produces visual feedback that the trainee PF should see (visual perceptive task ‘See the confirmation’) as a green check. Similarly to the guidance information, the task model specifies the information provided (the ‘Inf:’ green boxes connected to the last two tasks). 
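Section 5.1.3 explains how HoloPit derives its internal procedure representation from such a model. As a purely illustrative sketch under stated assumptions (all class and field names are hypothetical and not taken from the paper), the leaf tasks of the ‘Release brake’ subroutine could be captured as follows:

```python
from dataclasses import dataclass
from enum import Enum

class StepKind(Enum):
    GUIDANCE = "guidance"        # interactive output task on the HoloLens (e.g. 'Display 3D Info')
    PLANE_INPUT = "plane_input"  # input task on a cockpit control (e.g. 'Release Brake')
    PLANE_STATE = "plane_state"  # system task completed by the plane itself (e.g. reaching a speed)

@dataclass
class ProcedureStep:
    """Hypothetical runtime view of one leaf task of the HAMSTERS model (names are assumptions)."""
    task_id: str              # identifier of the task in the model
    kind: StepKind
    device: str               # 'HoloLens' or the cockpit control ('Brake', ...)
    suggestion_text: str = "" # 'Inf:' information data shown and read to the trainee PF
    hologram: str = ""        # e.g. 'ring', 'box', 'arrow' (completed later by the annotation file)
    completed: bool = False

# A fragment of the 'Release brake' subroutine of Fig. 9, expressed with this structure.
release_brake_steps = [
    ProcedureStep("display_3d_info", StepKind.GUIDANCE, "HoloLens",
                  suggestion_text="Release the brake", hologram="ring"),
    ProcedureStep("release_brake", StepKind.PLANE_INPUT, "Brake"),
    ProcedureStep("display_3d_confirmation", StepKind.GUIDANCE, "HoloLens",
                  suggestion_text="Good!", hologram="check"),
]
```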
5.1.2. Synergistic mapping between tasks, visual elements and flight simulator events For completing the data required for displaying the feedback and guidance elements in the HoloPit application, we chose the ‘code instrumentation’ approach for co-execution (as explained in Section 2.4) and we used an annotation file that contains the following elements: The visual element type associated with each HoloLens output task, such as rings or boxes for highlighting the cockpit controls; voice, text or both for the feedback. For each HoloLens output task, the 3D position, orientation and scaling of the virtual elements in the real world, exploiting the spatial mapping feature of the HoloLens. For each input or output task involving plane controls, the correspondence between the cockpit controls identified in the task models and the property or properties in the flight simulator that allow reading or updating the plane status. The first two annotation types are related to the visual appearance of both guidance and feedback in the HoloPit application. In order to display such elements correctly, the annotations specify their type (discussed in more detail in Section 4.3), their position and their orientation in the 3D world. Placing such elements precisely using only coordinates and angles is a difficult task, so we created a guided procedure that allows the instructor to position the elements precisely while wearing the HoloLens. The interaction is straightforward: after selecting the 3D element from a menu, she positions it in the cockpit using gaze pointing (Fig. 10). A tap gesture confirms the position selection, showing a set of buttons for increasing and/or decreasing the element size. Finally, it is possible to change the orientation through a trackball interface guided by tap-and-hold gestures. By pressing the save button, the instructor saves the position, the orientation and the scaling in a file, which can be downloaded from the HoloLens device. The last annotation type supports the communication between the HoloPit application and the Flight Simulator component. We need this for identifying the flight simulator API call that returns or writes the required value, also specifying the variable type. In FlightGear, for instance, such properties are represented in a tree structure. It is sufficient to send a textual message through a TCP socket specifying the path of the desired property in the tree, using the slash as a path separator character, as in filesystems. In the case of a property update, the message includes the new value to assign, and such a value distinguishes an update message from a reading one. In the annotation file, we specify the path associated with the plane property managed in a given task. Considering the brake pedal control operated in the ‘Release Brake’ task in Fig. 9, the annotation file maps the ‘Display 3D info’ output task to a guidance element, as shown in Fig. 11. The first object in the components list defines a ring for highlighting the brake pedal, referencing the task in the model. It contains the transformations for positioning the guidance element in the cockpit. The second item in the actions list defines the mapping between the ‘Release Brake’ input task and the flight simulator parameter: the I/O information ‘Brake’ corresponds to the path /controls/gear/brake-parking in the FlightGear plane property tree. The JSON object properties specify that it is a boolean value and the task completion sets it to true. 
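Figure 11 shows the actual annotation file; the fragment below is only a plausible reconstruction of the two annotations discussed above, written as a Python dictionary serialized to JSON. The key names and the numeric transform values are assumptions made for the example; only the property path /controls/gear/brake-parking, its boolean type and the value set on completion come from the text.

```python
import json

# Plausible reconstruction of the 'Release Brake' annotations (key names and numeric
# values are illustrative assumptions; the real schema is the one shown in Fig. 11).
release_brake_annotations = {
    "components": [
        {
            "task": "Display 3D Info",          # output task in the HAMSTERS model
            "type": "ring",                      # hologram highlighting the brake pedal
            "position": [0.42, -0.35, 1.10],     # transform obtained with the gaze-pointing procedure
            "rotation": [0.0, 0.0, 0.0],
            "scale": 0.08,
            "feedback": "text+voice",
        }
    ],
    "actions": [
        {
            "task": "Release Brake",             # input task in the HAMSTERS model
            "control": "Brake",                  # I/O information in the task model
            "property": "/controls/gear/brake-parking",  # FlightGear property path
            "type": "bool",
            "value_on_completion": True,
        }
    ],
}

print(json.dumps(release_brake_annotations, indent=2))
```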
FIGURE 11. Sample annotations for the Release Brake routine. 5.1.3. Building of the HoloPit procedure definition The task model and the annotation file constitute the input for generating the HoloPit internal representation of a flight procedure. A simple generator program takes as input the task model (a HAMSTERS file) and the annotation definition (a JSON file). The output, once uploaded into the application folder, allows HoloPit to load the sequence. The generation procedure works in two steps. The first one builds the list of the holograms that the application may show during the procedure. It starts from the task model and identifies all the leaf tasks marked as interactive output associated with the HoloLens device (i.e. those connected to the HoloLens purple box in Fig. 9). Then, using the task identifiers, it enhances the elements in the list with the hologram type and its 3D properties. It takes the text/speech messages from the information elements connected to the selected tasks (i.e. the green boxes in Fig. 9). The second step is required for receiving updates from the flight simulator. It starts building the list of relevant updates in the procedure, selecting all the tasks connected to a cockpit control device in the task model (for instance, the Brake purple box in Fig. 9). Then, it maps the cockpit control to a flight simulator API call using a software-specific table (e.g. one for FlightGear, another one for Prepar3D). The flight event manager component in the HoloPit application takes this list as input and registers with the flight simulator for receiving the events or sending the commands required for executing the procedure. 5.2. Training procedure execution within the runtime environment At runtime, the components in Fig. 3 communicate for synchronizing their internal state. In this section, we discuss the four possible communication sequences that support the state synchronization. 5.2.1. Triggering a visual cue to support the pilot’s task The first sequence is the simplest and represents a task activation. Figure 12 shows the message flow for such a communication sequence. When the task simulator establishes that a task is ready for execution (i.e. enabled) and the instructor triggers the task (label 1 in Fig. 12), the task simulator sends a message to the task event manager for activating it (label 2 in Fig. 12). Such a message may trigger the display of a hologram representing an output task or a request for monitoring a plane parameter for an input or system task (label 3 in Fig. 12), which is then perceived by the trainee PF (label 4 in Fig. 12). FIGURE 12. Communication sequence for activating a task in the environment. The instructor activates an enabled task (1), then the Task Simulator computes the temporal sequence and sends an update message to the Task Event Manager (2). According to the task type, the latter component notifies the Mixed Reality interface, the Flight Event Manager or both of them (3). The notification finally arrives at the HoloPit application. 
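As a rough illustration of this dispatch (the component names follow Fig. 12, while the method names, field names and stub classes are assumptions made for the example), the task event manager could handle an activation message along these lines:

```python
# Illustrative dispatch of a task-activation message inside the Task Event Manager.
# The stub classes and method names are assumptions, not the actual LeaFT-MixeR API.
class MixedRealityUI:
    def show_hologram(self, task_id):
        print(f"show hologram(s) for task {task_id}")

class FlightEventManager:
    def watch(self, task_id):
        print(f"watch the plane parameter linked to task {task_id}")

def on_task_activated(task, mixed_reality_ui, flight_event_manager):
    # Output tasks assigned to the HoloLens become visible guidance; input and
    # system tasks become a request for monitoring the corresponding plane parameter.
    if task["device"] == "HoloLens":
        mixed_reality_ui.show_hologram(task["id"])
    else:
        flight_event_manager.watch(task["id"])

# Example: activation of the 'Display 3D Info' guidance task of Fig. 9.
on_task_activated({"id": "display_3d_info", "device": "HoloLens"},
                  MixedRealityUI(), FlightEventManager())
```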
5.2.2. Monitoring a pilot’s task The second sequence summarizes the communication flow occurring when the trainee PF operates a control in the cockpit, as depicted in Fig. 13 (label 1). In this case, the cockpit control directly changes the internal state of the flight simulator component (label 2 in Fig. 13), and the flight event manager receives a notification (label 3 in Fig. 13) through the API of the current flight simulator software. The flight event manager maps such an update into the more abstract plane state parameter, as defined in the task model. Then, it sends a message to the task event manager (label 4 in Fig. 13). If the parameter is associated with the current step in the procedure (i.e. a currently enabled task in the model), it triggers an update on the mixed reality interface, which shows the feedback for the completed task (label 6 in Fig. 13, detailed description in Section 4). In this case, it also sends a notification to the task simulator component, which updates the runtime status of the task model and the instructor user interface (label 6 in Fig. 13). FIGURE 13. Communication sequence triggered by the PF operating a cockpit control. The cockpit physically receives the PF input, which updates the Flight Simulator state. It raises an API-dependent event received by the Flight Event Manager, which translates it into the task model vocabulary and forwards the message to the Task Event Manager. If needed, the latter component forwards the message to the Mixed Reality UI, the Task Simulator or both. 5.2.3. Monitoring a change of cockpit’s state The third communication sequence occurs when a plane parameter changes autonomously or as an effect of an action with a long time span. For instance, this may happen when the plane increases its altitude or speed as a consequence of a manoeuvre. The information propagation is similar to the previous case, but there is no trainee PF intervention on the cockpit controls (see Fig. 14). The main difference lies in the occurrence of the notifications. Usually, the plane parameters involved in such sequences change continuously during the procedure (e.g. the speed and the height during the take-off), so the flight event manager receives a continuous stream of updates (label 1 in Fig. 14), but the task associated with such parameters is completed if the value is above/below a given threshold or if it is inside a range. Therefore, the task event manager maintains the threshold or the range (label 2 in Fig. 14), and it propagates the notifications that trigger the completion of the task in the model (label 3 in Fig. 14) as well as the updates to the user interface of the instructor and to the HoloLens interface (label 4 in Fig. 14). FIGURE 14. Communication sequence triggered by an asynchronous update of the plane state. In this case, the Flight Simulator autonomously raises the API-dependent notification received by the Flight Event Manager. The rest of the sequence is similar to the previous case. 
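A minimal sketch of this threshold handling is given below; the class, method and property names are assumptions for the example, and the 100-knot speed threshold anticipates the ‘Check 100 knots’ subroutine of Section 6.

```python
# Illustrative threshold handling for system tasks driven by a continuously
# changing plane parameter (names and the property path are assumptions).
class ParameterWatch:
    def __init__(self, task_id, parameter_path, predicate):
        self.task_id = task_id
        self.parameter_path = parameter_path
        self.predicate = predicate   # threshold or range test on the parameter value
        self.done = False

    def on_update(self, value, task_event_manager):
        # Called for every update streamed by the Flight Event Manager; only the
        # update that satisfies the predicate completes the task in the model.
        if not self.done and self.predicate(float(value)):
            self.done = True
            # Propagate completion towards the task model and both user interfaces.
            task_event_manager.complete_task(self.task_id)

# Example: the 'Plane gets 100kt' system task of the take-off procedure.
plane_gets_100kt = ParameterWatch(
    task_id="plane_gets_100kt",
    parameter_path="/velocities/airspeed-kt",   # FlightGear-style path, assumed for the example
    predicate=lambda speed_kt: speed_kt >= 100.0,
)
```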
5.2.4. The instructor triggers a task with the task simulator The fourth communication sequence, depicted in Fig. 15, shows how the instructor can affect the procedure simulation and how the completion of a task in the model propagates through the environment. In the first case, the sequence starts with the instructor triggering the execution of a task in the model through its graphical representation, i.e. by double-clicking on an enabled task (label 0 in Fig. 15). Since the model contains both interaction and system tasks, the instructor can easily simulate the execution of a trainee PF operation or a change in the plane state. We numbered such communication as ‘zero’ in Fig. 15, since the task simulator component may raise a task completion event autonomously when handling a previous notification from the task event manager (see Figs 13 and 14). In both cases, the task simulator notifies the task event manager that the task has been completed (label 1 in Fig. 15). FIGURE 15. Communication triggered by a task completion event. The event may be raised either as the consequence of an instructor command or as the result of a previous communication. The Task Simulator sends a notification to the Task Event Manager that, if the task was associated with a plane parameter, forwards the message to the Flight Event Manager, the Flight Simulator and finally to the cockpit. Then, the task event manager looks up the plane parameter(s) manipulated by the current task (if any) and asks the Flight Event Manager to update them (label 2 in Fig. 15). The information about which plane parameter needs setting comes from the task model itself and/or from the annotation file. It is worth pointing out that either an interaction or a system task may result in an update of the plane status: a system task may require a change in the plane configuration in order to complete, while an interaction task may model the operation on a cockpit control, and this eventually triggers a change in the plane state. FIGURE 16. Task model for the take-off procedure. FIGURE 17. The Check 100 knots subroutine. In the last step, the flight event manager translates the message for the actual simulator software (label 3 in Fig. 15), selecting the correct API call and message format. In the case of software controls (e.g. 
FIGURE 16 Task Model for the take-off procedure.

FIGURE 17 The Check 100 knots subroutine.

In the last step, the flight event manager translates the message for the actual simulator software (label 3 in Fig. 15), selecting the appropriate API call and message format. In the case of software controls (e.g. monitors), such a change may be propagated to the cockpit (label 4 in Fig. 15). This communication sequence does not include any update of the mixed reality UI. The change in the flight simulator component will, in turn, raise an event, and the UI update will eventually be triggered by a communication sequence of the second type, started by the change of a plane parameter (see Fig. 12).

6. Sample Procedure: Take-Off
With the LeaFT-MixeR environment, we implemented a training session for the take-off procedure. We modelled the entire procedure using the checklist contained in the Flight Crew Operating Manual (FCOM) (Airbus, 2005) for both the PF and the Pilot Not Flying (PNF). Figure 16 shows the main task model. It is composed of different subroutines, each one corresponding to a specific phase described in the FCOM. Since the procedure is quite long, we focus here on the subroutine highlighted in Fig. 16, called 'Check 100 knots', which allows us to show the different communication patterns. The subroutine contains the operations the pilot performs immediately before the plane lifts off the ground, during the acceleration on the take-off runway. The plane has already reached 80 knots and, when it reaches 100 knots, the PF has to release the sidestick to start climbing. Figure 17 shows the subroutine modelling in detail. It consists of two branches executed concurrently: the first calls another subroutine (Head management), modelling the arrow indicator that points towards the required cockpit controls when they are not in the HoloLens FOV. The second branch contains the PF operations we are going to analyse. The instructor triggers the execution of an interactive output task associated with the HoloLens device, showing both a guidance text and a waiting ring ('Display 3D Info'). This task is enabled when the previous subroutine ends ('Start Chrono', see Fig. 16). Such activation triggers a message from the task simulator towards the HoloPit application (through the task event manager component), including the task identifier. Once the mixed reality interface receives the notification, it activates the holograms associated with the task guidance, obtaining the visualization depicted in Fig. 18B. This concludes the execution of the current output task.

FIGURE 18 Guidance for waiting for the 100 knots speed. We show the communication sequence between the application components, the AR interface for the trainee pilot and the instructor interface showing the task model. An SVG version of the task model is available at https://bit.ly/holopit-svg.

Then, the task simulator continues the model execution. For the sequence of user tasks, namely the perceptive and cognitive tasks, the LeaFT-MixeR environment cannot rely on any input for deciding when they are completed. The task simulator therefore considers them completed immediately after they are enabled (i.e. when they are ready to be executed). In summary, the model completes the following user tasks: 'See the suggestion', 'See the hourglass', 'Analyse the suggestion', 'Decide to follow the suggestion' and 'Wait'.
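The way the simulator walks past these unobservable user tasks can be summarized by a short sketch (Python, with hypothetical names and task types; it mirrors the behaviour described above rather than reproducing the LeaFT-MixeR task simulator):

# Minimal sketch, under stated assumptions, of advancing through user tasks that
# produce no observable event: perceptive and cognitive tasks are marked completed
# as soon as they are enabled, output tasks trigger a hologram-activation message,
# and system/input tasks suspend the run until a notification arrives.
from dataclasses import dataclass


@dataclass
class Task:
    task_id: str
    kind: str   # "perceptive", "cognitive", "output", "input" or "system"


def advance(tasks: list[Task], send_to_holopit, awaiting: set[str]) -> None:
    """Run the enabled-task sequence until a task requires external input."""
    for task in tasks:
        if task.kind in ("perceptive", "cognitive"):
            print(f"auto-completed user task: {task.task_id}")
        elif task.kind == "output":
            send_to_holopit(task.task_id)     # activate the associated holograms
        else:                                  # "input" or "system"
            awaiting.add(task.task_id)         # wait for the flight/task event managers
            break


if __name__ == "__main__":
    subroutine = [
        Task("see_the_suggestion", "perceptive"),
        Task("analyse_the_suggestion", "cognitive"),
        Task("wait", "cognitive"),
        Task("plane_gets_100kt", "system"),
    ]
    pending: set[str] = set()
    advance(subroutine, lambda tid: print(f"HoloPit: show holograms for {tid}"), pending)
    print("awaiting:", pending)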
Next, we have the system task 'Plane gets 100kt', which is associated with the plane speed (a flight simulator parameter) and with a threshold of 100 knots. The task simulator notifies the task event manager that the system task is enabled (Fig. 19A). When the flight event manager reports a plane speed greater than or equal to 100 knots, the task event manager sends a completion message, and the task simulator continues the task model execution. We summarize the message sequence in Fig. 19B.

FIGURE 19 Communication for enabling (A) and executing (B) the 'Plane reaches 100kt' system task. Below, the instructor interface showing the task model (C). An SVG version of the task model is available at https://bit.ly/holopit-svg.

Next, we have another guidance task (again called 'Display 3D info', but with a different identifier), which asks the trainee PF to release the sidestick to start climbing. As in the previous step, the task simulator asks the HoloPit application to activate the holograms associated with the current task and, after that, it advances the task model execution. The mixed reality interface appears as in Fig. 20A.

FIGURE 20 Releasing the sidestick phase in the Reach 100kt subroutine. Communication sequence (A), guidance for releasing the sidestick (B), the instructor interface showing the task model for the guidance (C) and when the PF releases the stick (D). An SVG version of the task model is available at https://bit.ly/holopit-svg.

After that, we again have a sequence of user tasks ending with a motor operation on the sidestick ('Hand releases the sidestick'). As previously explained, the task simulator considers them completed immediately after they are enabled. Then, the simulation awaits the trainee PF input on the sidestick, represented by the 'Release Sidestick' input task. Similarly to the previous system task, the task simulator sends HoloPit a message notifying that the input task is enabled, and it starts listening to the corresponding plane parameter. When the input is detected, the completion of this task is triggered first by a notification coming from the flight event manager, passing through the task event manager and finally reaching the task simulator. The sequence is exactly the one reported in Fig. 15. The last step in the subroutine notifies that the operations were successful through the 'Display 3D confirmation' task, which results in the mixed reality interface shown in Fig. 21. The communication among the components is similar to that of the guidance tasks (Fig. 21A): the task simulator sends the task identifier to HoloPit and completes the subroutine.

FIGURE 21 Completion feedback for the Check 100kt subroutine: communication sequence (A), PF feedback (B) and instructor interface showing the task model (C). An SVG version of the task model is available at https://bit.ly/holopit-svg.
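On the headset side, the reaction to these messages can be pictured with a small sketch (a Python pseudo-client with invented task identifiers and hologram names; the real HoloPit application is a Unity/HoloLens program and is not reproduced here):

# Minimal, hypothetical sketch of a mixed-reality client reacting to the two kinds
# of messages used in this subroutine: "enabled" activates the guidance holograms
# attached to a task identifier, "completed" swaps them for confirmation feedback.
GUIDANCE_HOLOGRAMS = {
    "display_3d_info_release_sidestick": ["text_release_sidestick", "arrow_sidestick"],
    "display_3d_confirmation":           ["checkmark_100kt"],
}


class HoloPitClient:
    def __init__(self) -> None:
        self.active: set[str] = set()

    def on_message(self, kind: str, task_id: str) -> None:
        holograms = GUIDANCE_HOLOGRAMS.get(task_id, [])
        if kind == "enabled":
            self.active.update(holograms)             # show guidance for the enabled task
        elif kind == "completed":
            self.active.difference_update(holograms)  # clear guidance, show feedback
            self.active.add(f"feedback_{task_id}")
        print(f"{kind:9s} {task_id:38s} active holograms: {sorted(self.active)}")


if __name__ == "__main__":
    client = HoloPitClient()
    client.on_message("enabled", "display_3d_info_release_sidestick")
    client.on_message("completed", "display_3d_info_release_sidestick")
    client.on_message("enabled", "display_3d_confirmation")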
7. Feedback Received from the Instructors
In order to receive feedback on the learning environment, we deployed LeaFT-MixeR in a training flight simulator hosted at the University of Cagliari. The simulator contains a physical reproduction of an Airbus A320 cockpit, connected to a computer running the Prepar3D software, whose output is visible on two large screens that replace the plane windscreens. We invited an instructor pilot, a professional pilot and a student pilot to run a simulation session of the take-off procedure discussed in Section 6. The student pilot wore the HoloLens device running the HoloPit application and took the PF role. The professional pilot was the PNF, while the instructor pilot coordinated the procedure from a nearby desk running the task interface. In order to make the professional and the instructor pilot aware of the feedback displayed in the HoloLens device, we mirrored the output on a laptop screen visible to both of them, using the mixed reality capture feature of the HoloLens SDK, which shows the holograms overlaid on the RGB camera stream. We supervised the procedure test in order to let them visualize the HoloPit suggestions properly and to explain how the task model works, since none of them had experience with task models. Finally, we let both the instructor and the professional pilot try HoloPit, since neither had ever used a HoloLens application. After running the procedure, we conducted an unstructured interview to collect their feedback on the application and to identify possible problems or shortcomings. They all judged the usefulness and the usability of the environment positively. They were used to virtual reality applications running on HMDs and to mobile-based AR, which they consider poorly suited to the cockpit environment, and they particularly appreciated the idea of a see-through HMD that leaves the hands free and does not replace the real cockpit with a virtual one. They noticed the limitation of the hologram FOV and explicitly asked why the arrow guidance appeared even when they were actually able to see the cockpit control. We explained the technical limitations, and they agreed that the technique we used to limit the effect of this problem was usable. They highlighted two main drawbacks of using the HoloLens inside the cockpit. The first is the physical effort required to wear the HMD for a long time: the device's weight tires the user during long interaction sessions, so they would use it only for short training sessions. The second was related to spatial mapping (the association between hologram positions in virtual and real space). They explained that there might be slight differences in the position of the cockpit controls in different simulators, even for the same plane model. For instance, the mount of the overhead panel might change its position, and this would be a problem for the hologram positioning. We explained that we have a guided procedure for repositioning the holograms, but they agreed that it would be tedious.
They appreciated the learning support of the environment as a whole, and they thought that the application mode including all the guidance and feedback elements was particularly suited to novice PF trainees. For more experienced trainees, they considered the test mode more appropriate, where the PF does not receive guidance but only feedback on correct or wrong actions (see Section 4.3.1). They also asked whether it was possible to exploit a similar interface in a virtual reality application in order to support the learners' individual practice. We explained that we actually built something similar to create a standalone development environment, and they suggested including such a mode among the possible configurations of LeaFT-MixeR. Finally, they considered the instructor's control over the task model particularly suited to the way they teach in a real-world setting. They highlighted that procedures usually have many variants depending on e.g. the weather conditions or the aircraft, and that they simulate different scenarios; the task model interface allows them to activate such conditions. They had no previous experience with task models, so we needed to explain their meaning and the graphical notation. However, they were able to grasp the overall meaning of the notation after a while. They explicitly asked whether it was possible to include both roles in the modelling language. We confirmed it was possible, but that we focus on the PF at the moment. While they did not object to using the task model notation for representing the procedure, they would not like to create the models themselves to support other procedures. Instead, they would like a repository of task models for each procedure, falling back to modelling only if the existing ones are not suitable; in any case, they would prefer to start from an existing model. The evaluation conducted with instructors showed that the proposed approach of integrating task models at runtime during training was useful and usable, and that it brought the expected benefits to the trainee and the instructors. However, this evaluation did not study the usability of the proposed tools and interaction techniques. Such a study is difficult to perform in safety-critical domains such as aviation, due to the limited availability of experts and the cost and scarce availability of equipment (which is in use nearly all the time).

8. Discussion
8.1. Co-execution approaches to support simulator-based training sessions
The LeaFT-MixeR environment provides support for the synergistic execution, or co-execution, of task models with a HoloLens MR application and a flight simulator. This co-execution is made possible by programmatically parsing files that specify information about the interactors and events of the HoloLens MR application and of the flight simulator. Such an approach to co-execution is called resource introspection. Its main limitation is that the possible instances of widgets and events handled during the co-execution have to be parsed before execution. This means that for some types of user interfaces, such as a radar display where new object instances appear at runtime, this approach cannot establish the correspondence between a user task and the incoming object. Other engineering approaches are available to support the co-execution of task models with interactive applications; they are presented in Table 1.
The resource introspection approach (line 2 in Table 1) has been chosen for the LeaFT-MixeR environment for the following reasons. The training sessions are prepared and planned before the execution of the environment, so their content is known in advance. The resource files can be prepared during the development of the training program and are aligned with the learning specification events and the planned measures for training validation. As the development effort is the lowest for the resource introspection approach, it appears to be the best compromise.

Table 1. Co-execution approaches and their impact on the environment engineering (extended from Martinie et al., 2015).
Approach | Widgets and/or interactors | Development effort | Learning environment architecture | Impact on the learning environment
Dedicated API for developers to implement activation and rendering functions | Any, plus new instances at runtime | Medium to high | Modified | None
Resource introspection (e.g. XML layout files) | Any, but not new instances at runtime | Low to medium | Not modified | Parser, notifications
Runtime environment introspection | Predefined list | High | Not modified | Widget tree exploration, graphical identification of widgets, notifications
Modification of runtime environment | Predefined list | High | Not modified | Widget list retrieval, graphical identification of widgets, notifications
Code instrumentation | Any, plus new instances at runtime | Medium to high | Not modified | Parser, notifications
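As a purely illustrative sketch of the resource-introspection idea (the XML format and identifiers below are invented for the example and are not the files actually used by LeaFT-MixeR), the bindings between task identifiers and interactors or simulator parameters can be parsed once, before the co-execution starts:

# Minimal sketch of resource introspection as summarized in Table 1: the
# correspondences between task identifiers, interactors and simulator events are
# parsed from resource files before co-execution, so the learning environment
# architecture itself does not need to be modified. Hypothetical format.
import xml.etree.ElementTree as ET

ANNOTATION_XML = """
<annotations>
  <task id="release_sidestick" interactor="sidestick" event="SIDESTICK_RELEASED"/>
  <task id="plane_gets_100kt"  parameter="speed_kt"  threshold="100"/>
</annotations>
"""


def parse_annotations(xml_text: str) -> dict[str, dict[str, str]]:
    """Build the task-id -> binding table used at runtime by the event managers."""
    root = ET.fromstring(xml_text)
    return {task.attrib["id"]: {k: v for k, v in task.attrib.items() if k != "id"}
            for task in root.findall("task")}


if __name__ == "__main__":
    bindings = parse_annotations(ANNOTATION_XML)
    # New interactor instances appearing only at runtime would not be in this
    # table, which is the main limitation noted for this approach.
    for task_id, binding in bindings.items():
        print(task_id, binding)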
8.2. Learning
The main goal of the proposed approach is to support training (the activity of the instructors) but also learning (the increase of the trainees' knowledge and skills). As current systems in aviation become more complex, including more functions as well as more complex functions, classical approaches to training would require more time, resources and effort, which airlines cannot afford. Airbus6 has recently adopted a new strategy for training, moving away from function-based and systems-based training (i.e. training pilots to know, and to know how to operate, each aircraft system and each function) towards so-called evidence-based training, where the emphasis is more on real cases (e.g. incidents or failures) that occurred in the recent past. The approach proposed in this paper fits well with this kind of evolution, as the scenarios extracted from the task models could be derived from near misses, incidents or accidents.

8.3. Taking into account human error
Similarly, deviations may occur due to operator errors. The task modelling philosophy structures models in terms of goals and sub-goals, down to the representation of elementary tasks (see for instance (Paterno et al., 1997)). Following this philosophy, errors and deviations from goals should not be represented in task models, just as they are not represented in Flight Crew Operating Manuals (Airbus, 2005). However, human errors are pervasive and unavoidable, so knowing how to recover from a human error should also be part of the training program. HAMSTERS-XL provides error-modelling extensions (Fahssi et al., 2015) that support the explicit representation of human errors (covering both genotypes and phenotypes of errors (Reason, 1990)) and of the tasks that have to be performed to recover from their occurrence. Task models enhanced with errors would support describing which errors may occur and then checking that trainees have acquired the knowledge required to recover from them. Using HAMSTERS-XL, scenarios including deviations could be produced, and LeaFT-MixeR would be able to highlight the relevant information so that learning covers both standard procedures and deviations.

9. Conclusion and Future Work
In this paper, we introduced the architecture and the functioning of LeaFT-MixeR, a learning environment that brings together the cyber and the physical aspects of training, considering the aircraft cockpit as a case study. The environment exploits the synergistic execution of a task model, a simulator and an AR application for guiding the trainee and involving the instructor in the procedure control. The environment retains the documented benefits of AR employed in operational learning, together with the systematic learning approach supported by the task model formalization. The synergistic combination of the two elements adds several advantages with respect to other environments in the literature.
The environment is able to detect which step of the task the trainee is accomplishing, supporting both automatic and instructor-driven feedback. It provides contextual cues to the trainee according to the state of the task model execution. It keeps the instructor up to date on the procedure state and allows her to simulate specific situations and scenarios at any moment. Finally, it supports a dialogue between the simulator and the AR application to provide guidance for tasks that require checking or reaching a given plane-state configuration. A preliminary evaluation demonstrates that the approach is meaningful, that it is judged pertinent by experts and non-experts, and that the performance resulting from the current engineering approach is adequate. In future work, we will add more collaborative features to the environment in order to support both the PF and the PNF using AR, evolving the approach towards collaborative routines. We will also manage different levels of experience in using the application: the feedback messages are indeed useful for novice trainees, while for more experienced ones they can be annoying. In addition, we will enhance the AR interface by building it on HoloLens v2, which has a bigger FOV and lighter hardware. Finally, we would like to integrate our architecture into learning content management systems, in order to support progress tracking and formative and summative evaluations.

Acknowledgments
We would like to thank the CentraLab laboratory of the University of Cagliari, which provided access to the physical reproduction of the Airbus 320 cockpit used in this work. In particular, we thank Prof. Paolo Fadda, Prof. Gianfranco Fancello, Beniamino Fanni, Antonio Depau and Stefano Lande for their support in testing the flight procedures.

Footnotes
1 When discussing HoloLens user interfaces, we use mixed reality as a synonym of the AR supported by the holographic headset, following the terms suggested in the Microsoft documentation and SDK.
2 We tossed a coin for assigning genders to users in this paper. Chance decreed that the learner PF is male and the instructor is female.
3 https://www.flightgear.org/
4 https://docs.microsoft.com/en-us/windows/mixed-reality/spatial-mapping
5 https://www.microsoft.com/en-us/HoloLens
6 https://safetyfirst.airbus.com/learning-from-the-evidence/

References
Airbus (2005) Flight Crew Operating Manual A318/A319/A320/A321, FMGS Pilots Guide, Vol. 4.
Akçayir, M. and Akçayir, G. (2017) Advantages and challenges associated with augmented reality for education: a systematic review of the literature. Educ. Res. Rev., 20, 1–11.
Barboni, E., Ladry, J.-F., Navarre, D., Palanque, P. and Winckler, M. (2010) Beyond modelling: an integrated environment supporting co-execution of tasks and systems models. In Proceedings of the 2nd ACM SIGCHI Symposium on Engineering Interactive Computing Systems, pp. 165–174. ACM, New York.
Boeing (1994) TRP WWW page. http://esto.sysplan.com/ESTO/Displays/HMD-TDS/Factsheets/Boeing.html (accessed July 1994).
Campos, J. C., Fayollas, C., Gonçalves, M., Martinie, C., Navarre, D., Palanque, P. and Pinto, M. (2017) A more intelligent test case generation approach through task models manipulation. Proceedings of the ACM on Human-Computer Interaction, 1 (EICS), 9. ACM, New York.
Christian, J., Krieger, H., Holzinger, A. and Behringer, R. (2007) Virtual and mixed reality interfaces for e-training: examples of applications in light aircraft maintenance. In Universal Access in Human-Computer Interaction. Applications and Services, pp. 520–529. Springer, Berlin, Heidelberg.
Corbalan, G., Kester, L. and Van Merriënboer, J. J. G. (2006) Towards a personalized task selection model with shared instructional control. Instr. Sci., 34, 399–422.
Crescenzio, F. D., Fantini, M., Persiani, F., Stefano, L. D., Azzari, P. and Salti, S. (2011) Augmented reality for aircraft maintenance training and operations support. IEEE Comput. Graph. Appl., 31, 96–101.
Fahssi, R., Martinie, C. and Palanque, P. (2015) Enhanced task modelling for systematic identification and explicit representation of human errors. In Human-Computer Interaction, INTERACT 2015, pp. 192–212. Springer, Cham.
Fowlkes, J., Schatz, S. and Stagl, K. C. (2010) Instructional strategies for scenario-based training: insights from applied research. In Proc. of the 2010 Spring Simulation Multiconference, SpringSim '10, San Diego, CA, USA. Society for Computer Simulation International.
Gagné, R. (1985) The Conditions of Learning and the Theory of Instruction. Holt, Rinehart and Winston, New York.
Gurevich, P., Lanir, J., Cohen, B. and Stone, R. (2012) TeleAdvisor: a versatile augmented reality tool for remote assistance. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '12, pp. 619–622. ACM, New York.
Haritos, T. and Macchiarella, N. D. (2005) A mobile application of augmented reality for aerospace maintenance training. In 24th Digital Avionics Systems Conference, Vol. 1, pp. 5.B.3–5.1.
Hebert, T. J. R. (2019) The impacts of using augmented reality to support aircraft maintenance. Technical Report AFIT-ENY-MS-19-M-121, Air Force Institute of Technology, Wright-Patterson AFB, OH, United States.
Henderson, S. and Feiner, S. (2011) Exploring the benefits of augmented reality documentation for maintenance and repair. IEEE Trans. Vis. Comput. Graph., 17, 1355–1368.
Hincapié, M., Caponio, A., Rios, H. and Mendívil, E. G. (2011) An introduction to augmented reality with applications in aeronautical maintenance. In 2011 13th International Conference on Transparent Optical Networks, pp. 1–4.
Joint Aviation Authorities (2006) JAR-FCL 1 - Flight Crew Licensing (Aeroplane).
Kancherla, A. R., Rolland, J. P., Wright, D. L. and Burdea, G. (1995) A novel virtual reality tool for teaching dynamic 3D anatomy. In Ayache, N. (ed.), Computer Vision, Virtual Reality and Robotics in Medicine, Lecture Notes in Computer Science, pp. 163–169. Springer, Berlin, Heidelberg.
Kaufmann, H. and Dünser, A. (2007) Summary of usability evaluations of an educational augmented reality application. In Shumaker, R. (ed.), Virtual Reality, Lecture Notes in Computer Science, pp. 660–669. Springer, Berlin, Heidelberg.
Kerpen, D., Löhrer, M., Saggiomo, M., Kemper, M., Lemm, J. and Gloy, Y. (2016) Effects of cyber-physical production systems on human factors in a weaving mill: implementation of digital working environments based on augmented reality. In 2016 IEEE International Conference on Industrial Technology (ICIT), pp. 2094–2098. IEEE.
Krey, N. (2007) The Nall Report 2007: Accident Trends and Factors for 2006. AOPA Air Safety Foundation.
Lee, K. (2012) Augmented reality in education and training. TechTrends, 56, 13–21.
Martinie, C., Navarre, D., Palanque, P., Barboni, E. and Canny, A. (2018) TOUCAN: an IDE supporting the development of effective interactive Java applications. In Proc. of the ACM SIGCHI Symposium on Engineering Interactive Computing Systems, p. 4. ACM, New York.
Martinie, C., Navarre, D., Palanque, P. and Fayollas, C. (2015) A generic tool-supported framework for coupling task models and interactive applications. In Proceedings of the 7th ACM SIGCHI Symposium on Engineering Interactive Computing Systems, pp. 244–253. ACM, New York.
Martinie, C., Palanque, P., Bouzekri, E., Cockburn, A., Canny, A. and Barboni, E. (2019) Analysing and demonstrating tool-supported customizable task notations. Proceedings of the ACM on Human-Computer Interaction, 3 (EICS), 12. ACM, New York.
Martinie, C., Palanque, P., Navarre, D., Winckler, M. and Poupart, E. (2011a) Model-based training: an approach supporting operability of critical interactive systems. In Proceedings of the 3rd ACM SIGCHI Symposium on Engineering Interactive Computing Systems, pp. 53–62. ACM, New York.
Martinie, C., Palanque, P. and Winckler, M. (2011b) Structuring and composition mechanisms to address scalability issues in task models. In Campos, P., Graham, N., Jorge, J., Nunes, N., Palanque, P. and Winckler, M. (eds), Human-Computer Interaction, INTERACT 2011, Lecture Notes in Computer Science, pp. 589–609. Springer, Berlin, Heidelberg.
Navab, N. (2004) Developing killer apps for industrial augmented reality. IEEE Comput. Graph. Appl., 24, 16–20.
Palmarini, R., Erkoyuncu, J. A., Roy, R. and Torabmostaedi, H. (2018) A systematic review of augmented reality applications in maintenance. Robot. Comput. Integr. Manuf., 49, 215–228.
Parvin, P., Chessa, S., Manca, M. and Paterno, F. (2018) Real-time anomaly detection in elderly behavior with the support of task models. Proceedings of the ACM on Human-Computer Interaction, 2 (EICS), 15.
Paterno, F., Mancini, C. and Meniconi, S. (1997) ConcurTaskTrees: a diagrammatic notation for specifying task models. In Howard, S., Hammond, J. and Lindgaard, G. (eds), Human-Computer Interaction, INTERACT '97: IFIP TC13 International Conference on Human-Computer Interaction, 14th–18th July 1997, Sydney, Australia, pp. 362–369. Springer, Boston, MA.
Reason, J. (1990) Human Error. Cambridge University Press.
Reason, J. (2008) The Human Contribution: Unsafe Acts, Accidents and Heroic Recoveries. Ashgate, Farnham.
Reiser, R. A. (2001) A history of instructional design and technology: Part II: a history of instructional design. Educ. Technol. Res. Dev., 49(2), 57–67.
Sigrist, R., Rauter, G., Riener, R. and Wolf, P. (2013) Augmented visual, auditory, haptic, and multimodal feedback in motor learning: a review. Psychon. Bull. Rev., 20, 21–53.
Sims, D. (1994) New realities in aircraft design and manufacture. IEEE Comput. Graph. Appl., 14, 91.
Tang, A., Owen, C., Biocca, F. and Mou, W. (2003) Comparative effectiveness of augmented reality in object assembly. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 73–80. ACM, New York.
U.S. Army Field Artillery School (1984) A System Approach to Training (Course Student Textbook).
U.S. Department of Defense (1975) Training Document, Pamphlet 350-330.
Wu, H.-K., Lee, S. W.-Y., Chang, H.-Y. and Liang, J.-C. (2013) Current status, opportunities and challenges of augmented reality in education. Comput. Educ., 62, 41–49.
TI - Engineering Task-based Augmented Reality Guidance: Application to the Training of Aircraft Flight Procedures JF - Interacting with Computers DO - 10.1093/iwcomp/iwab007 DA - 2021-03-26 UR - https://www.deepdyve.com/lp/oxford-university-press/engineering-task-based-augmented-reality-guidance-application-to-the-CW0vIsC3iT SP - 1 EP - 1 VL - Advance Article IS - DP - DeepDyve ER -