Abstract

Speech generating devices (SGDs) are a type of communication aid that turns a message composed by the user into speech. The goal of these communication aids is usually to provide technological solutions to communication problems rather than tools for scaffolding the learning of language skills. We have implemented an interface designed for developing new communication skills rather than merely improving the act of communication. This tool is focused on augmentative and alternative communication methods. A number of interface design principles are proposed, based on some of the best-known speech disorder interventions, such as verbal behavior analysis and language acquisition modeling.

RESEARCH HIGHLIGHTS

We propose a voice output communication aid (VOCA) specifically designed for children with severe language delays related to autism and other disabilities. The main focus of this VOCA is the development of verbal functions. It includes explicit support for, and the integration of, the latest research: cloud-based language acquisition monitoring, aided language modeling, language acquisition through motor planning and verbal behavior analysis. Conversational analysis techniques and data recorded by means of language acquisition monitoring are applied in order to evaluate our findings.

1. INTRODUCTION

Augmentative and alternative communication (AAC) refers to an area of clinical research and educational practice. AAC attempts to study and compensate for temporary or permanent impairments, activity limitations and participation restrictions of individuals with severe disorders of speech-language production and/or comprehension, including spoken and written modes of communication (Mirenda and Iacono, 2009). People who use AAC may have limitations present from birth (congenital) or acquired later in life (acquired). Cerebral palsy is an example of a congenital condition (Table 1).
Amyotrophic lateral sclerosis (ALS), or motor neuron disease (MND), is an example of an acquired condition. For some, AAC may even be a temporary method for expressive communication. People who use AAC span the full range of ages and physical and cognitive abilities.

Table 1. Examples of conditions that usually require AAC.

                       Congenital condition   Acquired condition
Motor impairment       Cerebral palsy         Motor neuron disease
Cognitive impairment   Autism                 Aphasia (brain injury)

The ultimate and broad goal of AAC is to engage individuals in a variety of interactions and in activities of their choice, promoting effective initiative in the communicative act, rather than being merely a technological solution to certain communication problems. Examples of AAC systems are communication books or electronic devices that allow the user to combine pictures, symbols, letters and/or words and phrases to create messages. Accordingly, the American Psychiatric Association (APA, 2004) recommends that AAC be thought of as a system comprising four components: symbols, aids, strategies and techniques:

Symbols are examined according to their 'guessability', transparency to conversational partners and ease of acquisition, ranging from actual objects to traditional orthography (e.g., printed words).

Aids refer to devices used to transmit or receive messages. These vary from relatively simple (mostly paper-based) to complex technological solutions (digital-based).

Strategies refer to ways in which symbols can be conveyed most effectively and efficiently, including those designed to accelerate the communication rate.

Techniques refer to the various ways in which messages can be transmitted. In scanning, items are browsed visually, auditorily or tactually until the desired one appears and is selected.
Conversely, in direct selection the user goes directly to the desired symbol, usually via a pointing gesture; direct selection has a one-to-one relationship between the motor act and the resultant selection. Moreover, it is clear that technology alone does not make individuals competent and proficient communicators. AAC users begin as novices and evolve into AAC experts with appropriate support, instruction, practice and encouragement (Mirenda and Iacono, 2009). Light et al. (1998, 2003) described in detail the components of communicative competence for those who rely on AAC. They identified four components: linguistic, operational, social and strategic competence:

Linguistic competence refers to the receptive and expressive skills of one's native language(s). It also involves knowledge of the linguistic code unique to one's AAC system, such as icons, words, signs and pictograms.

Operational competence refers to the technical skills needed to operate the AAC system accurately and efficiently.

Social competence refers to skills of social interaction such as initiating, maintaining, developing and terminating communicative interactions.

Strategic competence involves the compensatory strategies used by people who rely on AAC to deal with the functional limitations associated with AAC.

AAC devices are intended as tools to be used once the user has reached a given level of competence. From this point of view, given a user and a specific AAC device, the user must learn, on the one hand, the operational competences specific to that device and, on the other, the linguistic code the device uses. Usually, the acquisition of linguistic competences, understood as language skills, is largely outside the focus of device design. For example, one of the most popular AAC models is Unity (Baker, 1982).
Unity provides the so-called Icon Tutor, a useful strategy that allows the user to learn the location of any word within Unity. But it says nothing about the pragmatic or even semantic use of the language, i.e. when an icon should be used or what it means. It is a strategy for improving operational competence rather than linguistic or social competence. An example of a commercial system based on the Unity model is MinSpeak, described in Section 2.2. For this reason, we propose a learning platform rather than a communication aid, since every component (symbols, aids, strategies and techniques) is focused on the process of developing language and verbal functions when these are missing as a consequence of a cognitive impairment. This is the case of beginning communicators. Although a beginning communicator is, strictly, any child with a spoken and/or symbolic vocabulary of fewer than 50 words (Romski et al., 2003), the term is usually used to describe any individual (regardless of age) who has one or more of the following characteristics (Mirenda and Iacono, 2009):

He or she relies primarily on non-symbolic modes of communication1 such as gestures, vocalizations, facial expressions and body language.

He or she is learning to use aided or unaided symbols to represent basic messages related to communicative functions such as requesting, rejecting, sharing information and engaging in social interaction.

He or she uses non-electronic communication displays and/or simple switches or speech generating devices (SGDs) for both participation and early communication.

To address the needs of beginning communicators, there are several possible interventions, which must integrate communication experiences with receptive and expressive language activities. Such interventions aim both to teach language skills and to supplement existing speech or replace speech that is not functional.
In this paper, we propose a number of interface design principles based on some of the best-known speech disorder interventions, such as verbal behavior analysis and language acquisition modeling. Table 2 summarizes the key differences between typical Unity and Pictogram Tablet users, as examples of potential AAC users with motor or intellectual impairments.

Table 2. Some key differences between Unity-like and Pictogram Tablet typical users.

                             | Unity                                                                            | Pictogram Tablet
Diagnosis                    | Cerebral palsy, motor speech impairment, high-functioning autism, Down syndrome  | Low-functioning autism, pervasive developmental disorder, Rett syndrome
Cognitive development        | Typical, mild to moderate mental retardation                                     | Severe mental retardation
Average vocabulary size      | Hundreds or thousands of words                                                   | A few dozen words
Use of AAC devices           | Frequent; high communication needs                                               | Have AAC systems but are not using them
Strategies for communication | Able to communicate about any topic in any context                               | Socially or contextually inappropriate

The rest of this paper is organized as follows. Section 2 sketches a brief introduction to existing solutions for AAC interventions focused on linguistic competences. In Section 3, interventions and linguistic competences are presented in more detail.
Section 4 highlights the foundations of Pictogram, based on two of the best-known instructional approaches: verbal behavior analysis and modeling. Section 5 describes Pictogram Tablet, the communication device of the Pictogram platform that has been developed. Section 6 introduces the method followed to evaluate the new platform. Section 7 presents the findings from the analysis of the data collected through different real-user interactions. Section 8 concludes the paper with some reflections and proposals for future enhancements.

2. RELATED WORK

The number of different AAC aids is quite impressive for both low- and high-tech solutions. By low-tech aids we refer to non-digital, non-electronic solutions, such as those with images on paper or plastic materials; high-tech refers to digital and electronic solutions. Instead of conducting a survey of all of them, we study two paradigmatic examples of low and high technological solutions: the Picture Exchange Communication System (PECS) (Bondy and Frost, 1994; Frost and Bondy, 2002) and MinSpeak (Baker, 1982). The first shares with Pictogram an interest in speech intervention rather than the definition of a novel communication device. Almost the opposite of PECS, MinSpeak is an AAC device suited to people with complex communication needs whose communicative skills require them to manage large vocabularies of several hundred or even thousands of entries. The meta-analysis conducted by Ganz et al. (2012) compares three different AAC systems: PECS, SGDs and other picture-based AAC systems. Their work provides a wider study than the one we perform in the following sections. They summarize the intervention effects of 10 studies on PECS, 8 on SGDs and 6 on other picture-based systems. The results clearly recognize the benefits of the first two for communication skills, with SGD systems being even more effective than PECS.
One relevant conclusion of this interesting study is that several aspects of AAC intervention types, particularly PECS and SGDs, were found to be particularly effective, such as the use of pictures rather than written words or the implementation of a standardized treatment protocol.

2.1. Picture exchange communication system

PECS is designed to teach functional communication skills with an initial focus on spontaneous communication. The PECS program is heavily influenced by Skinner's (1957) book 'Verbal Behavior'. Verbal behavior can be defined as 'behavior reinforced through the mediation of other persons, who must be responding in ways which have been conditioned precisely in order to reinforce the behavior of the speaker'. PECS has been and continues to be implemented in a variety of settings and contexts (home, school, community) so users have the skills to communicate their wants and needs. PECS does not require complex or expensive materials, since it uses picture symbols as the modality (Fig. 1). It is a method of teaching young children, or any individual with a communication impairment, a way to communicate within a social context. In the most advanced phases, individuals are taught to respond to questions and to comment.

Figure 1. PECS book.

Additionally, descriptive language concepts such as size, shape, color and number are also taught, so the student can make the message more specific by combining picture symbols. For example, 'I want big yellow ball' could be expressed as shown in Fig. 2. There are several communicators based on PECS communication books, such as ARaSuite,2 SC@UT,3 CPA, e-Mintza and Mind-Express. They are rather similar to the one depicted in Fig. 3. There are usually several categories for families of words. The way such families have been defined is somewhat vague, and it varies between applications.
For example, the pictograms of e-Mintza are sometimes categorized according to the most usual part of speech of the word, such as 'verb' or 'adjective', while other words are categorized by semantic criteria, such as 'food' or 'friends'. Nevertheless, these applications are intended as a kind of electronic version of a PECS communicator; they do not provide any explicit support for PECS or for the verbal behavior methodology.

Figure 2. 'I want big yellow ball' using ARASAAC pictograms.

Figure 3. An example of an AAC aid inspired by the PECS communication book.

2.2. MinSpeak and semantic compaction

While in PECS-based AAC every picture communicates the corresponding word, in MinSpeak (Baker, 1982) the same picture has different meanings depending on the context. Thus, every word is produced by a short sequence of icons determined by rule-driven patterns. This is the so-called semantic compaction. Semantic compaction uses short symbol sequences and provides a single overlay, diminishing the need to switch screens in order to find additional vocabulary items. The vocabulary icons provide multiple meanings, which reduces the need for a large symbol set. Since the number of icons is small, every icon can have a fixed position on the screen, and this allows language acquisition through motor planning (LAMP). Motor planning is an important aspect of becoming a fluent semantic compaction user. Icon sequencing should be the initial focus when learning how to use semantic compaction devices. A beginner needs to be fluent with a given icon set before progressing to a more advanced level. The LAMP strategy helps individuals develop motor plans for sequencing icons into messages.
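The core idea of semantic compaction, a small fixed icon set whose short sequences map to words, can be sketched as follows. This is a toy illustration: the icon names and sequence rules are our own invention, not the actual MinSpeak vocabulary or its rule system.

```python
# Toy sketch of semantic compaction: each icon is polysemous, and a short,
# fixed-order sequence of icons selects one word. Icon names and sequences
# here are illustrative only.
LEXICON = {
    ("apple",): "apple",          # single icon: the concrete referent
    ("apple", "verb"): "eat",     # apple + verb marker -> an action meaning
    ("apple", "color"): "red",    # apple + color marker -> an attribute meaning
    ("sun", "verb"): "shine",
    ("sun", "color"): "yellow",
}

def decode(sequence):
    """Return the word for an icon sequence, or None if no rule matches."""
    return LEXICON.get(tuple(sequence))

def compose(message_sequences):
    """Decode a list of icon sequences into a spoken message."""
    words = [decode(seq) for seq in message_sequences]
    return " ".join(w for w in words if w is not None)
```

For instance, `compose([("apple", "verb"), ("apple",)])` yields "eat apple": a handful of icons in fixed screen positions covers a vocabulary far larger than the icon count, which is what makes motor planning over the icon set worthwhile.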
MinSpeak relies on the motor-based learning principle that when motor patterns are repeated, these processes become automatic and simplified. LAMP strategies are difficult to implement in low-tech AAC systems such as PECS and other communication boards. An AAC device with consistent motor patterns for saying words allows for the development of automaticity in communication. By contrast, repeatedly searching for the location of desired symbols and placing those individual symbols on a strip requires more complex motor planning and cognitive attention during communication.

3. AUGMENTATIVE AND ALTERNATIVE COMMUNICATION INTERVENTIONS AND LINGUISTIC COMPETENCES

How are linguistic competences acquired? According to the language acquisition device (LAD) hypothesis, linguistic development in children is an innate process, the evidence for which comes from 'linguistic universals'. Noam Chomsky, the main proponent of the LAD, believed that this inbuilt device activates children's linguistic development when they are exposed to language in their environment (e.g. when they hear and interact with their adult carers). As a reply to Chomsky's suggestion that a LAD is what makes language development possible, Bruner (1985) maintained that a LAD needs a language acquisition support system (LASS). The LASS is the idea that caregivers support their children's linguistic development in social situations, by interacting and encouraging the child to respond (by pointing or asking questions). By experiencing good-quality interaction with caregivers, children learn to take a more active role in social situations. From the point of view of the LASS, an AAC device is not at the heart of becoming a proficient communicator, since having an AAC device does not make you a good communicator any more than having a piano makes you a good pianist. Specific strategies are required to support communicative competences.
In contrast with other AAC devices that are intended as mere communication tools, Pictogram embraces the LASS idea that a team, the 'support system', is necessary to acquire language. An example of this change of paradigm is Pictogram's 'one team, one language' motto: every facilitator using her own Pictogram Tablet shares exactly the same vocabulary with the rest of the communication partners and the student. The vocabulary is synchronized in real time; for example, if a member of the team adds a new pictogram, it is integrated into the rest of the devices. In this way, Pictogram is the result of an effort to create an environment for teaching, learning and communicating rather than a simple communication tool. Pictogram Tablet is focused on beginning communicators, so we have studied several of the best-known issues related to language development interventions, which constitute the design foundations of our proposed solution. The rest of the section provides a brief description of such intervention programs and tools.

3.1. Verbal behavior

This is defined as the science in which tactics derived from the principles of behavior are applied systematically to improve socially significant behavior, and experimentation is used to identify the variables responsible for change (Skinner, 1957). One of the most popular therapies based on verbal behavior is applied behavior analysis (ABA) (Baer et al., 1968). Certified therapists deliver or oversee the regimen, organized around the child's individual needs: developing social skills, for instance, and learning to write a name or use the bathroom. The approach breaks desirable behaviors down into steps and rewards the child for completing each step along the way. Since ABA involves as much as 40 h a week of one-on-one therapy, it is also known as structured teaching. 3.2.
Language modeling

Language modeling is a valuable teaching and learning strategy that is important for learning a language at any age or stage, but is critical for beginning communicators. The central point is that AAC users learn language in the same way as typically developing children do: through natural interaction in a language immersion environment. A number of language modeling techniques have been developed for AAC instruction in recent years, including aided language stimulation (ALgS) (Goossens et al., 1992), the system for augmenting language (SAL) (Romski et al., 2009) and aided language modeling (ALM) (Drager et al., 2006). All three techniques are based on the premise that, by observing symbols as they are used by the facilitators during motivating activities, a person can begin to establish a mental template of how symbols can be combined and recombined generatively to mediate communication during the activity. There are several ways to accomplish language modeling. For instance, the communication partner points out symbols on the communication display as he/she interacts verbally with the user. In other cases, the child holds the device while the facilitator uses a flashlight to highlight the symbol to be modeled on the AAC device.

3.3. Feedback

The two primary purposes of feedback from communication technology are to let the individual using AAC know that an item has been selected from the selection display (activation feedback) and to provide the individual with information about the message that has been formulated or selected (message feedback). Activation feedback is what the AAC user (not the partner) hears or sees while composing a message. There are four categories of feedback for the user:

No activation feedback. Many devices have no special feedback for the user during message composition.
The majority of these devices have voice output for the communication partner, activated upon touching the keys, but there is nothing the user can use privately during message construction: the AAC user gets feedback only when he delivers the message to the partner.

Auditory activation feedback. Some devices permit the AAC user to listen to feedback (ideally at a low volume) during message construction. When the message is complete, the AAC user chooses to 'speak' the entire message out loud to the partner.

Visual activation feedback. Some devices can be set up to show the selections on the screen for the AAC user during message construction. When the message is complete, the AAC user can deliver the entire message to the partner, typically via voice output.

Tactile activation feedback. Many keyboards are designed to give subtle tactile feedback, 'feeling' every click. As anyone who has pushed an elevator button that does not light up knows, even that subtle feeling of 'clicked' is reassuring.

3.4. Symbols

Much of the power of AAC lies in the vast array of symbols that people can employ to send a message. Especially for individuals who cannot read or write, the ability to represent messages and concepts in alternative ways is essential to communication. Symbols can be described in terms of many characteristics, including realism, iconicity, ambiguity, complexity, figure-ground differential, perceptual distinctness, acceptability, efficiency, color and size (Lloyd et al., 1997; Schlosser, 2003). Among these, iconicity has received the most attention from both AAC researchers and clinicians. The term iconicity refers to 'any association that an individual forms between a symbol and its referent'. This association depends on several factors, such as differing cultural and experiential backgrounds, the concreteness of the referent (the more abstract, the more difficult it is to grasp the meaning of the symbol) or the developmental age (Mineo Mollica, 2003).
In this regard, Light et al. (2008) studied children's drawings intended to represent abstract concepts such as 'who' or 'big'. Such drawings are often embedded within the contexts in which they occur, rather than depicting the concepts generically, as is usual in commercially available AAC symbol sets. Along this line, Worah (2008) found that children using these kinds of 'contextualized symbols' significantly outperformed children who were trained with more general or schematic symbols. In brief, it is clear that the choice of representation is a key point in learning new concepts, and this representation is highly user-dependent.

3.5. Literacy

For the purpose of this study, the interest in literacy skills acquisition is threefold:

There is a substantial body of evidence showing that children with a preschool diagnosis of language impairment are at high risk of developing reading disorders (Bishop and Adams, 1990; McArthur et al., 2000).

AAC users frequently miss or insufficiently develop some of the skills that good readers require (Browder et al., 2008).

Reading serves as language therapy: integrating literacy-promoting interventions into language interventions enhances both language and reading skills development (Catts et al., 2002; Mendelsohn et al., 2001).

According to the report of the National Reading Panel (NRP, 2000), reading and writing require the integration of knowledge and skills across a variety of domains. Typically, literacy instruction to teach basic reading skills targets the following skills (Mirenda and Iacono, 2009): first, phonological awareness skills (e.g. sound blending, phoneme segmentation) and letter-sound correspondences; then, decoding skills and sight word recognition skills, as well as the application of these skills to shared reading activities; and finally, independent reading of simple texts and reading comprehension skills. 4.
AN INTERFACE INSPIRED BY SPEECH INTERVENTION NEEDS

Table 3 shows some of the most frequent parameters found in the literature for characterizing an SGD oriented to language disorders rather than to motor impairments. These parameters are mainly related to techniques, types of allowed symbols and operational competences, such as the availability of synthetic speech, the type and source of symbols (predefined graphics, photos, drawings, etc.) or word prediction. But there is no explicit information about which speech intervention is supported. From our point of view, this is the key aspect that should be made explicit when an AAC device is designed. For example, PECS includes a low-tech device, the communication board, as part of a complete and well-known methodology based on verbal behavior analysis, as noted in Section 2.1. As we agree with this approach, the SGD has to be used as part of an intervention plan. Therefore, the design should be heavily inspired by a specific methodology for speech intervention. Pictogram gives explicit support to both verbal behavior and language modeling methods, whose main principles were presented in Sections 3.1 and 3.2. Table 3. Some frequent parameters to characterize SGDs and their values among some SGDs (Pictogram included).
Columns: Category-based layout | Core vocabulary provided | Photos can be added | Graphics library available | Text-to-speech voice | Voice for pictos on/off | User-recorded voice | Pre-stored scenes (a) | User-stored scenes | Scene links to other scenes | Text | Word completion | Phrase storage | Editing performed on computer | Editing accessed in app

Alexicom: ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓
GoTalk Now: ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓
Grid Player: ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓
I click → I talk: ✓ ✓ ✓ ✓ ✓
Pictogram: ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓
Proloquo2Go: ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓
Sono Flex: ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓
Speak 4 Yourself: ✓ ✓ ✓ ✓ ✓ ✓ ✓
Tap To Talk: ✓ ✓ ✓ ✓ ✓ ✓
urTalker Pro: ✓ ✓ ✓ ✓ ✓ ✓
Voice4u: ✓ ✓ ✓ ✓ ✓ ✓

(a) A scene is a vocabulary associated with a specific scenario, e.g. 'trip to the beach' or 'doctor visit'.

4.1. Verbal behavior analysis

Verbal behavior analysis is a heavily instructional approach based on therapy in controlled environments. Every instruction is made up of a number of learn units (Greer and Ross, 2008). A learn unit interlocks operants between a teacher (or teaching device) and the learner: it involves an instructional presentation, a student response and a teaching consequence of reinforcement or correction. In addition, every learn unit must be evaluated in order to support an evidence-based intervention.
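The learn-unit structure just described, an instructional presentation, a student response and a teaching consequence, each of which must be evaluable, can be sketched as a simple record. The field names and evaluation labels below are our own illustration, not Pictogram's actual data schema.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

# Possible therapist evaluations; the labels mirror the outcomes shown in
# Fig. 4 (correct, guided, failed) but are otherwise illustrative.
EVALUATIONS = ("correct", "guided", "failed")

@dataclass
class LearnUnit:
    presentation: str                  # instructional antecedent, e.g. "point to 'red'"
    response: str                      # what the student actually selected
    consequence: str                   # reinforcement or correction delivered
    timestamp: datetime = field(default_factory=datetime.now)
    evaluation: Optional[str] = None   # assigned immediately or later

    def evaluate(self, outcome: str) -> None:
        if outcome not in EVALUATIONS:
            raise ValueError(f"unknown evaluation: {outcome}")
        self.evaluation = outcome

def success_rate(units) -> float:
    """Fraction of evaluated learn units marked correct."""
    evaluated = [u for u in units if u.evaluation is not None]
    if not evaluated:
        return 0.0
    return sum(u.evaluation == "correct" for u in evaluated) / len(evaluated)
```

Recording units in this form is what makes the later reporting possible: success rates, vocabulary usage and timing can all be aggregated over stored learn units.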
When a student uses Pictogram Tablet in the course of an instruction, every learn unit is recorded, and the therapist can evaluate the current learn unit immediately or later, using the Pictogram web site (Fig. 4) or her own Pictogram Tablet. Thereafter, it is possible to query past instructions, modify evaluations and generate reports on success rate, vocabulary usage, timing and so on.

Figure 4. When the student uses Pictogram Tablet in an instructional session, every interaction is recorded and eventually evaluated. The figure shows a recording of an instructional session and three learn units with their evaluations (from top to bottom: correct, guided and failed).

4.2. Language modeling

The Pictogram interface encourages users to apply language modeling following two main design principles: since the AAC device is the voice of the child, children should not be deprived of their device under any condition; moreover, the communication partner should use her own device to model the symbol, and no additional effort should be required to include every new symbol in the device of every partner. Taking these two premises into account, two strategies for fostering language modeling are followed:

One team, one language. Every facilitator can run her own Pictogram Tablet app on any device with Android 5.0 or higher as the AAC system. In addition, the vocabulary is shared between every communication partner and the student.
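This shared-vocabulary mechanism can be sketched as a subscription over a single team vocabulary. The sketch below is a simplified in-memory model with names of our own choosing; the actual platform performs the synchronization through a cloud service.

```python
class Device:
    """A team member's tablet; holds a local copy of the team vocabulary."""
    def __init__(self, owner: str):
        self.owner = owner
        self.local_copy = {}

class SharedVocabulary:
    """Toy model of 'one team, one language': any change made by any team
    member is propagated to every subscribed device."""
    def __init__(self):
        self.pictograms = {}   # pictogram name -> image reference
        self.devices = []

    def subscribe(self, device: Device) -> None:
        self.devices.append(device)
        device.local_copy = dict(self.pictograms)  # initial sync on login

    def add(self, name: str, image: str) -> None:
        self.pictograms[name] = image
        for device in self.devices:                # real-time propagation
            device.local_copy[name] = image

# A facilitator adds a pictogram; every device in the team receives it.
team = SharedVocabulary()
student, facilitator = Device("student"), Device("facilitator")
team.subscribe(student)
team.subscribe(facilitator)
team.add("beach", "beach.png")
```

After the call to `add`, both `student.local_copy` and `facilitator.local_copy` contain the new pictogram, which is the invariant behind "every facilitator shares exactly the same vocabulary with the student".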
When a member of the intervention group logs into Pictogram Tablet, she first enters her user name and password; then the list of students available for that user is shown, and one of them must be selected in order to synchronize the facilitator and child devices. Thus, the student, facilitator, caregiver, etc., all share the same vocabulary and all the devices are synchronized: every time a member includes, modifies or deletes a pictogram, the change is instantly reflected on every device in the team, whenever the device is connected to the Internet (Fig. 5).

Mirror mode. Instead of using a flashlight or a laser pointer to highlight the symbols to be modeled, when mirror mode is activated every interaction is highlighted on the display of the other devices. For example, if a caregiver clicks on the 'red' pictogram, the same pictogram is highlighted on the screen of the student's device. More precisely, for a few seconds the symbol executes an animation with the aim of catching the student's attention: the size of the symbol increases and decreases while the background blinks smoothly over gray tones.

Figure 5. Pictogram Tablet and Pictogram Tablet Supervisor: both student and communication partner share exactly the same customized vocabulary.

5. DESCRIPTION OF PICTOGRAM TABLET

Pictogram Tablet is a high-tech AAC aid. The preferred hardware is an Android-based tablet (7″, 171 ppi, 1024×600 resolution). We tested other sizes with a number of users and found this size to be small enough to carry and large enough to hold a vocabulary of 32, 50, 900 or 2005 pictograms, depending on the user configuration (see Table 4 and Fig. 6).
The tablet and its technological features (sensors, touch screen, speakers, etc.) constitute the aids element of the APA list (APA, 2004). The remaining elements (symbols, strategies and techniques) are detailed in this section.

Figure 6. Pictogram Tablet interface for large and small grid variations.

Table 4. Highest vocabulary size (pictograms) depending on user configuration.

                  No categories   Categories
Large pictograms  32              900
Small pictograms  50              2005

Every concept is represented by a unique pictogram, and pictograms are selected by clicking on them, i.e. the symbols component in the APA scheme. Given that Pictogram Tablet is designed with beginning communicators in mind, the vocabulary is frequently as small as a few dozen concepts. In this scenario the whole vocabulary is accessible from one grid with just one click. As the vocabulary grows, or if the therapist wants to introduce an abstraction level, pictograms can be arranged into categories. In this case, all the pictograms in the main grid (except the first column) are entry points to categories. Examples of categories are family, colors, tastes, foods and drinks (see Section 5.2 for more details). The first column keeps direct access to the most frequent pictograms such as 'yes', 'no' or 'I want'. The rest of the pictograms therefore need two clicks to be selected: one to select the category and another for the desired pictogram. In any case, every pictogram has a fixed location, so LAMP is supported. Although more details about the functionalities provided by Pictogram Tablet are given in the following sections, we can briefly summarize how they enhance four main competences:

Linguistic competence.
The system offers the combination of symbols with words, sounds, speech-to-text features, categorization, grammar correction and other aspects. Operational competence. Exchangeable contextual vocabulary sets, motor planning, highly configurable interaction and phrase delivery, and prevention of repetitive interactions, among other functionalities. Social competence. In Pictogram Tablet, all the people in close interaction with the student are able to configure the AAC environment. Contextual vocabularies can be defined for different social contexts. Personalized pictograms, with fast addition of pictures, enable a quick adaptation to new contexts. Mirror mode provides a non-invasive way for teachers to show how to generate messages. Strategic competence. The whole system is highly configurable and the set of pictograms available is vast. This allows the tool to be adapted in many different ways, so the user can be trained in different manners. The user experience allows fast interaction, so she can go back and forth through the vocabulary easily, allowing her to change the composed message quickly. One main aim of most AAC systems is to implement a number of strategies and techniques (APA, 2004) so that users with high operational competence improve their ‘timing’, i.e. the placement of speech at particular moments in the interaction stream (Higginbotham et al., 2016). Another concept, ‘scaffolding’, describes the action of supporting and guiding someone as he/she learns a new task, more specifically language and communicative skills (Tomasello, 2009). Scaffolding, rather than timing issues, is the main aim behind the design of Pictogram, which is deeply inspired by findings on the mastering of communication skills described in Section 2. Following this objective, we have implemented a number of strategies intended to reinforce both structured teaching and modeling: 5.1.
Feedback

Both activation and message feedback have been implemented in a number of different ways which are customizable for each user. It is widely reported that many people with autism spectrum disorder (ASD) are visual thinkers. Thus, not only sounds but also visual feedback can be configured:

Activation feedback. When a pictogram is selected, it is possible to configure Pictogram Tablet so that: A beep sounds. The text transcription of the pictogram is spoken out loud, or the locution recorded with the pictogram is played. This feature is useful in acquiring both operational competences and linguistic knowledge, since Koul and Schlosser (2004) found that voice output is a valuable instructional factor in symbol learning. In addition, instruction in sight word recognition skills typically relies on some form of paired-associate learning whereby the individual learns to match the target written word to a picture, photograph or other AAC symbol representing the word (Light and McNaughton, 2009b). As a consequence, this feedback is useful for teaching literacy skills. A tactile cue (a vibration buzz) occurs (when the hardware supports it). The pictogram is animated in a similar way as in mirror mode: when it is activated, the size of the pictogram grows and shrinks a few times. To simulate the behavior of a learning notebook, when a pictogram is selected it is removed from the panel and only shown in the strip. When delivery is completed or the pictogram is removed from the strip, it appears again.

Message feedback, when the user clicks on the play button: The message is always read aloud using a TTS artificial voice. This voice is customizable for each user. By default, there are different voices according to the gender of the user.
As each pictogram in the strip is played, it is highlighted. The behavior of the communicator device when delivery is completed is also customizable: The message is deleted, so the strip is emptied and the communication aid is ready for a new message. The message strip is moved to the middle of the screen. The message is not automatically removed once it is played, so the user is expected to remove every pictogram. Only when the last pictogram has been deleted does the message strip return to the top of the screen, and new input actions are enabled. This procedure is based on low-tech communication boards where the user is required to compose the message on a strip and hand the strip to the communication partner. The aim is to reinforce the ‘message delivery’ concept. Thus, the user receives visual feedback twice: first while the message is playing, and second when the message is delivered. In addition, it is possible to configure the device so that the play button is disabled until the last pictogram is deleted (Fig. 7). In other words, it is not possible to ‘reuse’ the same message. In order to replay the same message, it is necessary to (i) delete the current message and (ii) compose the message again. The aim of the ‘one message, one delivery’ strategy is twofold: To highlight the relationship between cause and effect as a way to reinforce teaching of the meaning of the pictogram. To avoid echolalia-related behaviors. Echolalia is the repetition of phrases, words or parts of words. Echolalia may be a sign of autism, another neurological condition, a visual impairment or a developmental disability. When a person repeats back something that he or she has just heard, that is immediate echolalia. For example, if a parent says, ‘It’s time for a bath’, the child may repeat, ‘Time for a bath’.
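The ‘one message, one delivery’ gating can be sketched as a small state machine. The class below is a hypothetical illustration, not the actual implementation: after delivery, both composing and the play button stay locked until every pictogram of the delivered message has been deleted.

```python
class MessageStrip:
    """Sketch of the 'one message, one delivery' gating (hypothetical class)."""

    def __init__(self):
        self.strip = []      # pictograms composing the current message
        self.locked = False  # True after delivery, until the strip is empty

    def select(self, pictogram: str) -> bool:
        """Append a pictogram; refused while a delivered message remains."""
        if self.locked:
            return False
        self.strip.append(pictogram)
        return True

    def play(self) -> bool:
        """Deliver the message exactly once; no 'reuse' of the same message."""
        if self.locked or not self.strip:
            return False
        self.locked = True
        return True

    def delete_last(self) -> None:
        """Remove the last pictogram; an empty strip unlocks the device."""
        if self.strip:
            self.strip.pop()
        if not self.strip:
            self.locked = False

s = MessageStrip()
s.select('yes')
s.play()
print(s.play())         # False: replay requires delete + recompose
s.delete_last()
print(s.select('yes'))  # True: device ready for a new message
```

The lock models the disabled play button of Fig. 7: replaying a message forces the cause-and-effect cycle of deleting and recomposing it.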
This behavior can be replicated with the communication aid simply by pressing the same symbol again and again, without any communicative intent. As it is required to delete the whole message before a new message is composed and eventually played, it is difficult to reproduce the same pictogram over and over without any meaning.

Figure 7. As a consequence of a message delivery, the message is scrolled down to the middle of the screen and both the communication board and the delivery button are disabled until the message is deleted by the student.

5.2. Symbols

Our set of symbols is arranged according to image (how the symbols look: the pictograms) and content (the vocabulary).

5.2.1. The vocabulary

A core vocabulary was prepared based on studies about the usual vocabulary of preschool children (Marvin et al., 1994; Raban, 1987) and children with complex communication needs (Banajee et al., 2003; Dada and Alant, 2009) (PECS151 Vocabulary list). As a result, a collection of 621 core concepts was obtained, including the following: A list of concrete entities covering concepts such as food, toys, buildings and persons. Information about the properties of such concepts, for example color, shape, size, temperature, feelings and emotions. Events: descriptions of a type of event, relation or entity and the participants in it. Examples of events are evoked by verbs such as ‘to want’, ‘to see’, ‘to teach’ and ‘to play’. Some prepositions, such as ‘of’, ‘in’, ‘on’ and ‘along’. Some interjections, such as ‘please’, ‘yes’, ‘no’ or ‘help’. Pronouns. The core vocabulary was expanded by applying morphosyntactic rules for gender and number, where applicable.
More complex derivational rules (to describe verb past tenses, for example) were excluded from this version, but could be considered for future ones. Apart from this preset core vocabulary, it is possible, and recommended, to add user-specific words in order to replace part of the core vocabulary or to expand it (fringe vocabulary).

5.2.2. The pictograms

Regarding images, we provide full access to SymbolStix™,5 a consistent symbol set of more than 16 000 symbols covering many materials and programs. Since it is known that iconicity varies greatly from user to user, we encourage communication partners to include their own symbols. Thus, it is possible to include symbols other than SymbolStix, such as ARASAAC,6 developmentally appropriate symbols (DAS) or even photos (see Fig. 8). In addition, the size of pictograms can be set to ‘normal’ or ‘large’, considering motor or visual impairments and the size of the vocabulary, which would have a maximum of 16 000 pictograms at normal size and 4000 at large size. Given that Pictogram Tablet is addressed to early communicators, the ‘large’ size is the default one, since it is easier to manage and there is enough room for incipient vocabularies.

Figure 8. Pictogram Tablet with different types of symbols.

5.2.3. Sight word recognition

One well-known technique for improving sight word recognition is the combination of pictograms and text as an effective way to create the association between the word and the concept. Once this association is acquired, it is possible to remove the image and show only the text, i.e. its transcription. For a single pictogram, it is possible to show the image alone, the image with text, or the text alone. Pictograms with different configurations can therefore be combined at the same time (see Fig. 9).
This property enables literacy to be introduced as an evolutionary process where letters and sounds are taught in a given sequence such as a, m, t, p, o… (Light and McNaughton, 2009a). Later, a word set made up of these letters is introduced (sight word recognition). Shared reading activities are introduced as soon as the student learns to read some isolated words. The next iteration begins by adding new letters and sounds, and so on.

Figure 9. Combining symbols and words in different ways.

6. EVALUATION METHOD

The Pictogram platform allows a deep analysis of all interactions recorded in supervised sessions. Therefore, evidence-based intervention is possible, as the system provides rich reports regarding vocabulary size, evaluations of every communication intent, number of sessions through a planned set of instructions, and more. With the consent of caregivers and parents, the collected interactions can be analyzed.

6.1. Participants

Eighteen children with autism, 16 boys and 2 girls, were recruited from three psychology offices. Their mean age was 10.54 years (range 2–16; SD 4.34). A two-stage consent procedure was used for the participants with ASD. First, the terms of service of the Pictogram software include the agreement to exploit the information for research purposes, provided the data of children, relatives and caregivers are anonymously managed and published. Then, parents were asked to give written consent so that we would be allowed to attend therapy sessions. The procedure was repeated for every session that we attended. All participants were free to leave the study at any time. Nineteen speech therapists and nine parents were also recruited from the partner organizations to take part. Each child was paired with one or more communication partners for the study sessions. 6.2.
Materials

Pictogram Tablet was installed on a number of Amazon Fire 7 tablets (FireOS, based on Android 5.0; 7″ IPS 1024×600 display) and these were distributed among the students and some communication partners. Some parents and therapists used their own Android smartphones, such as the Sony Xperia XA or LG G6.

6.3. Tasks

Three different tasks were accomplished: One-to-one 45-min sessions using Pictogram Tablet as the basis for an interaction between the child and a speech therapist. Semi-structured interviews with therapists and parents. Pictogram Tablet usage registry by means of language activity monitoring (LAM) logfiles (Katya, 2004). A LAM logfile represents the use of, or events associated with, a given AAC system: the selections made by the user. Each logfile entry starts with a time stamp (24-h clock) followed by the event, e.g. the word, message or control feature selected. In this way, we access the full usage of the device as a communicator.

6.4. Procedure

Before beginning therapy sessions, each therapist was trained for more than 3 h in handling the complete Pictogram platform. At the start of each one-to-one 45-min session, the study was explained to the therapists and they were asked if they had any questions regarding their possible participation. All sessions were conducted in a designated unoccupied room within each of the participating care facilities. A member of the research team sat in and observed all sessions. The observer sat behind each pair of participants, out of their view, and kept a tally of items on the checklist. Over the course of each session, the observer noted down the interaction between the child and the therapist under the methods and principles of conversation analysis (CA), an approach that has the potential to inform clinical practice as well as AAC system design theory (Todman et al., 2008). CA is the systematic, data-driven study of naturally occurring talk-in-interaction.
One important issue that CA raises is how, during their turns in conversation, participants are able to show how they understand, or have problems understanding, the message from the counterpart’s last turn. LAM logfiles are used in order to obtain both global and individual measures of usage of the platform. On the one hand, global measures are useful for grasping which aspects of the interface have a greater impact. Examples of this type of question are: is there any correlation between vocabulary size and vocabulary categories? What is the effect of input feedback and timing? On the other hand, individual measures enable the therapist to design an intervention based on evidence: for example, the way that a given student is increasing her vocabulary over the months, or the specific effect of an individual configuration for such a student.

7. RESULTS

7.1. Qualitative analysis by means of conversation analysis

In the following analysis, a series of three extracts is presented (the original names of the participants are not used). In each extract, different features of Pictogram are examined: the first is a representative therapist–student interaction; the second is an example of the way that the interface design and operation stresses the language learning process rather than the communication act; finally, the third shows how the interface configuration enables the therapist to avoid misuse of the communicative device. Key features of the overall practice will then be discussed.

7.1.1. Extract 1. Mand Function Procedures

In this session, the therapist prompts her student to ask for a favorite item by using her own AAC device. A second, non-preferred item is also selected; both items are conspicuously placed in view and their names are said to the student in order to obtain her attention. Rose is a 4-year-old Spanish child, diagnosed with ASD. She has no verbal language.
She communicates by means of gestures, pointing and/or gazing at the item. As preferred items, any object whose shape is similar to a triangle is usually selected; in this case, a large triangular rubber is picked up. Table 5 shows a learn unit involving both the therapist and Rose; it is an example which illustrates that it is possible to follow verbal behavior analysis techniques (Greer and Ross, 2008; Skinner, 1957) using a high-tech device.

Table 5. Rose extract: asking for a triangle.
01 Th oh!, what is it?
02 Th ⌈ It’s a ↑triangle ⌉
03    ⌊ fingers the rubber whose shape is a triangle ⌋
04 Th ⌈ It’s a ↓circle ⌉
05    ∣ fingers a tennis ball ∣
06    ⌊ (.) ⌋
07 Ro ((gazes at the rubber))
08 Th I
09 Ro ((tries to get the rubber))
→ 07 Th ⌈ ((takes Rose’s hand and places it over the tablet)) ⌉
10    ∣ ↑Ah,ah ∣
11    ∣ ((turns head left and right)) ∣
12    ⌊ (3.0) ⌋
13 Th ⌈ It’s a ↑triangle ⌉
14    ⌊ fingers the rubber whose shape is a triangle ⌋
15 Th ⌈ It’s a ↓circle ⌉
16    ∣ fingers a tennis ball ∣
17    ⌊ (.) ⌋
→ 10 Ro presses the triangle pictogram on the tablet
18 Th ⌈ ↑Great job! ⌉
19    ⌊ ((moves head up and down)) ⌋
20    ⌊ ((gives the rubber to the therapist)) ⌋

7.1.2. Extract 2. Preventing echolalia

Salva is a 10-year-old boy with no oral language.
Over the years, several AAC devices and techniques, such as PECS (Frost and Bondy, 2002) and Sc@ut,7 had been previously used. Even so, his communicative skills are very incipient, so Isabel, his language therapist, used a Pictogram Tablet as an AAC device. In the beginning, only a very small number of preferred items are available, and Salva needs to be guided in order to learn to ask for the items available in the vocabulary defined in his AAC. Eventually, Salva learns to ask for such items, but his interest in them is quite low, so he prefers to press the play button again and again in order to listen to the sound. This is a kind of ‘electronic echolalia’, since there is no communicative intention when Salva presses the play button repeatedly. To prevent this behavior, we defined the option ‘one message delivered, one message heard’, so that it is not possible to listen to the same message more than once; it is depicted in Section 3.3 and Fig. 7. Table 6 shows this scenario. Making it mandatory to delete the last message in order to deliver the next one does not guarantee a communicative intention, but it makes it more probable that the intention is something other than self-stimulation when the delete button is pressed.

Table 6. Salva extract.
01 Th Do you like corn snacks
→ 02 Sa ((presses the ‘yes’ pictogram))
      ⌊ (.) ⌋
03 Sa ((presses the ‘yes’ pictogram))
04 Sa ((presses the ‘play’ button))
05 Sa ((presses the ‘yes’ pictogram))
06 Sa ((presses the ‘play’ button))
07 Th ((modifies the tablet feedback by using his own tablet))
08 Th ↑Do you like corn snacks
09 Sa ((presses the ‘yes’ pictogram))
10 Sa ((presses the ‘play’ button))
→ 11 Sa uh, oh
12 Th ⌈ ((takes Salva’s hand and puts it on the ‘delete’ button)) ⌉
      ∣ ↑You have to ↑delete before ∣
      ⌊ (3.0) ⌋
13 Th Do you like corn snacks
→ 14 Sa ((presses the ‘delete’ button))
15 Sa ((presses the ‘yes’ button))

7.1.3. Extract 3. Modeling

Roberto is able to ask for preferred items spontaneously. Up to now, every item has been taught by guiding Roberto’s hand over the tablet in order to press the pictogram; then the message is delivered by pressing the play button. Carlos, his language therapist, is trying to teach him a new item following a different approach: modeling. Thus, Carlos manages his own device in order to show a new concept, a small toy. Since Carlos is logged in as Roberto’s supervisor, the vocabulary is the same for both of them. In addition, Carlos occasionally uses the mirror mode as a way to reinforce the modeling process (Table 7).

Table 7. Roberto extract: modeling.
01 Th oh!, Do you want the ↑Doll?
02 Th ⌈ It’s a beautiful ↑Doll ⌉
      ⌊ ((fingers the doll)) ⌋
04 Ro ((gazes at the doll))
→ 05 Th ((presses the doll pictogram on his own tablet))
06 Ro ((watches his own tablet))
07 Th ((presses the doll pictogram on his own tablet))
→ 08 Ro ((presses the doll pictogram on his own tablet))
18 Th ⌈ ↑Here you are! ⌉
20    ⌊ ((gives the doll)) ⌋
7.2. Language acquisition monitoring

In this section, some examples of the way that LAM can be used are provided. A first example is related to a global issue: is there any relationship between vocabulary categories and vocabulary size? The second example is related to the case depicted in Section 7.1.2 about echolalia. Finally, the current distribution of messages delivered using Pictogram Tablet is shown according to the type of user and whether delivery happens in a controlled environment (learn units in therapy sessions) or not.

7.2.1. Vocabulary category and vocabulary size

As Section 5 depicts, it is possible to configure the whole vocabulary to be available from one unique grid with just one click, or pictograms can be arranged into categories. In the second case, two clicks are required in order to select a given pictogram. This is the price to be paid in order to obtain a higher number of pictograms. We want to validate that therapists are actually using categories when the size of the vocabulary gets larger. For this reason, a Mann–Whitney–Wilcoxon (MWW) test8 was performed to determine whether there is a significant difference in vocabulary size when categories are on or off. There are twelve children using categories and six children not using them. Figure 10 shows the results. As a consequence, we cannot reject the null hypothesis (U = 30.0, p = 0.430). In the same way, we wonder whether children with categories are using a higher number of pictograms. Note that this question is not about the size of the available vocabulary but about the size of the vocabulary that the students are actually using. We obtain the result depicted in Fig. 11. Again, it is not possible to reject the null hypothesis (U = 24.5, p = 0.280). We have to conclude that categories are not being used properly; there is no point in pictogram categorization if the vocabulary size is roughly the same with or without categories.
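For reference, the U statistic used in these comparisons can be computed in a few lines of pure Python (a sketch; the per-child data of the study are not reproduced here, and obtaining a p-value additionally requires the null distribution or a statistics library):

```python
def mann_whitney_u(xs, ys):
    """U statistic of the Mann-Whitney-Wilcoxon test for sample xs.

    Values are pooled and ranked (ties get the average of the ranks they
    span); U = R1 - n1(n1 + 1)/2, where R1 is the rank sum of xs.
    """
    pooled = sorted(xs + ys)
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2  # average of ranks i+1 .. j
        i = j
    r1 = sum(ranks[v] for v in xs)
    n1 = len(xs)
    return r1 - n1 * (n1 + 1) / 2

# U ranges from 0 (every x ranks below every y) to n1*n2 (the reverse):
print(mann_whitney_u([1, 2, 3], [4, 5]))  # 0.0
print(mann_whitney_u([4, 5], [1, 2, 3]))  # 6.0
```

With the study's group sizes (n1 = 12, n2 = 6), U can range from 0 to 72, so the observed U = 30.0 sits near the middle, consistent with the non-significant result.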
Note that this is a non-random sample, since therapist criteria prevail over other considerations, so we cannot generalize this finding, although we tend to think that we would obtain similar results in other populations. In addition, we are analyzing not the impact of categories on the development of language but how categories are used by the therapists: they should be used according to the size of the vocabulary, and we find that this is not the case in the observed population. As a consequence, some kind of corrective strategy has to be designed, for example giving more support to the therapists so that they use categories properly.

Figure 10. Categories and vocabulary usage size.

Figure 11. Categories and vocabulary definition size.

7.2.2. An example

In this example, we describe how LAM is a valuable tool for guiding the therapist to define an intervention based on evidence. This is the case described in Section 7.1.2, where Salva uses the play button in order to listen to the sound rather than to ask for the referred item. A brief description of the way Salva is using his device is shown in Table 8. The first issue is that Salva is using just 32 of 87 available pictograms, that is 36.78% coverage, and this is a clue about the poor fit between vocabulary definition and usage. A second interpretation of the data comes from the following facts: Salva delivered a quite impressive number of messages in the last 4 weeks: 12 531 messages from 12 541 times the play button was pressed. But the average message size is roughly only a single pictogram. There is a high frequency of ‘repeated messages’. We have defined a ‘repeated message’ as the procedure that consists of pressing the play button, and only the play button, more than once in a 2-s time window.
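This definition translates directly into a check over a LAM-style event log. The following sketch assumes a simplified representation of log entries as ('HH:MM:SS', event-name) pairs with a PLAY marker for the play button; the actual LAM layout may differ:

```python
from datetime import datetime, timedelta

def count_repeated_messages(events, window_s=2.0):
    """Count 'repeated messages': a PLAY event that follows another PLAY,
    with no other event in between, within the given time window.
    """
    repeated = 0
    last = None  # (time, name) of the previous event
    for stamp, name in events:
        t = datetime.strptime(stamp, '%H:%M:%S')
        if (name == 'PLAY' and last is not None and last[1] == 'PLAY'
                and t - last[0] <= timedelta(seconds=window_s)):
            repeated += 1
        last = (t, name)
    return repeated

log = [('10:00:01', 'yes'), ('10:00:02', 'PLAY'),
       ('10:00:03', 'PLAY'), ('10:00:04', 'PLAY'),
       ('10:00:09', 'PLAY')]
print(count_repeated_messages(log))  # 2: the last PLAY is 5 s apart
```

Running such a check over the recorded logfiles is how figures like the repeated-message rate in Table 8 can be derived.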
Table 8. Some data from Salva’s communicator.
Vocabulary size: 87
Vocabulary usage: 32
Avg. message size: 1.2
Delivered messages during 4 weeks: 12 531
Repeated messages: 83.2%

As a consequence of the analysis of the data, it is possible to conclude that Salva is not using his device properly, since (i) the vocabulary definition does not fit the vocabulary usage and (ii) Salva delivers a quite impressive number of messages, but a more detailed view of the properties of these messages reveals that there is no real communicative intention behind such actions and they should be completely discarded as communicative acts. This is in accordance with the behavior depicted in Table 6 by means of the application of conversation analysis techniques.

7.2.3. Who is using Pictogram Tablet, and when?

The ideal usage for any AAC device is likely the following: The student manages her own device in a spontaneous way, out of therapy time and integrated as part of her daily routines. The rest of the intervention team uses their own devices in order to communicate with and apply language modeling to the student. Since Pictogram records every student, therapist or caregiver interaction, it is possible to get an overall view of both points. More precisely, we have obtained the percentage of delivered messages that (i) happen out of therapy and (ii) are produced by someone other than the student. Given the small population size under study, these are quite preliminary results, but they give us information about how far we are from the ideal situation depicted above. According to the data reported in Table 9, it is possible to affirm the following: Pictogram is frequently used spontaneously, since 71.5% of student interactions happen out of therapy. Therapists are using their own devices for modeling.
Roughly one in five interactions is done by a therapist, and the major part of such interactions happen out of therapy (88.6%). Relatives and caregivers are barely using Pictogram: only 2.9% of interactions are recorded from this population group.

Table 9. Interactions with Pictogram according to user type and instant. In order to distinguish interactions that happen during or outside therapy sessions, we consider whether the interaction is attached to a learn unit or not (see Section 4.1).
Type of user               During speech therapy   Out of speech therapy   Total
Student                    5632                    14 123                  19 755
Therapist                  532                     4121                    4653
Relatives and caregivers   0                       732                     732
Total                      6164                    18 976                  25 140

In the light of these data, (i) it is necessary to encourage parents and caregivers to use their own devices to model language, and (ii) the percentage of both student and therapist interactions out of therapy is quite high. In principle, this is a good result, since it suggests spontaneous communication and language modeling, but only if sessions are being properly recorded. We therefore have to confirm this point by interviewing the therapists.

8. CONCLUSIONS AND FUTURE WORK

In this paper, we describe the interface of Pictogram Tablet, an AAC device designed through the prism of well-known evidence-based speech and language interventions such as verbal behavior analysis and language modeling. In this way, every component of the interface (the vocabulary, pictogram iconicity, color, size and position, activation and message feedback, etc.) is the consequence of a careful review and, where needed, an adaptation of current state-of-the-art studies. As future work, we propose an evolution along three main branches: Going further in modeling.
One way to move modeling ahead is a kind of bidirectional ‘device-to-device’ communication where the child includes the name of a communication partner in her message and the message is delivered to the device of that person. For example, if Jimmy composes and delivers the message ‘Mum I want water’, the same message will be transferred to Mum’s device. The mother could then give an answer to Jimmy, and it would be delivered in the same reciprocal way as Jimmy’s message. Literacy. There is much to do regarding literacy, and we know that literacy is not only about reading and writing but also about communicative skills in general. For this reason, we want to implement specific strategies and techniques to improve phonological awareness skills and letter–sound correspondences. A first step is to provide the option of spelling out the transcription of the pictogram: when the child clicks a pictogram, the SGD will read aloud every letter that makes up the corresponding word at the same time that the letter is shown on the screen. It is our aim to maximize spontaneous language and speech. It is known that the inclusion of speech-generating devices as part of speech therapy delivers improvements in spontaneous communicative utterances, novel words and comments (Kasari et al., 2014). The integration of recorded voices of communication partners and students, when possible, could be a step in this direction. It has been found that speech tonality according to emotional state can be well recognized by children with ASD (Kuriki et al., 2016; Lin et al., 2017; Tobe et al., 2016). Therefore, the inclusion of pictograms related to emotions as a signal to change speech tonality is worth exploring. The development of more specific training resources for parents and therapists. We observed that parents, caregivers and even therapists tend to use the student’s communicator to compose their own messages.
Furthermore, they do not usually log in but use the currently logged-in student account. This is a misuse of the platform, because when the Pictogram Tablet is managed by a person other than the logged-in user, (i) the data recorded by means of LAM are erroneously assigned to a different user than the real one and (ii) the student is not getting the right feedback regarding how the Pictogram Tablet should be used as her own, private communication device. Finally, the LAM module of Pictogram Tablet enables us to continue studying the impact of every strategy and technique, as plenty of analyses can be performed on the data collected. This will allow us to identify which interventions and methods are more promising and which should be revised in the light of new evidence.

ACKNOWLEDGEMENTS

This research work is partially supported by the project REDES (TIN2015-65136-C2-1-R) and a grant from the Fondo Europeo de Desarrollo Regional (FEDER).

REFERENCES

American Psychiatric Association (2004) Diagnostic and Statistical Manual of Mental Disorders. American Psychiatric Association.
Baer, D.M., Wolf, M.M. and Risley, T.R. (1968) Some current dimensions of applied behavior analysis. J. Appl. Behav. Anal., 1, 91–97.
Baker, B. (1982) Minspeak. Byte, 7, 186–203.
Banajee, M., DiCarlo, C. and Stricklin, S.B. (2003) Core vocabulary determination for toddlers. Augment. Altern. Commun., 19, 67–73.
Bishop, D.V. and Adams, C. (1990) A prospective study of the relationship between specific language impairment, phonological disorders and reading retardation. J. Child Psychol. Psychiatry, 31, 1027–1050.
Bondy, A.S. and Frost, L.A. (1994) The picture exchange communication system. Focus Autism Other Dev. Disabil., 8, 1–19.
Browder, D.M., Ahlgrim-Delzell, L., Gibbs, G.C.S.L. and Flowers, C.
(2008) Evaluation of the effectiveness of an early literacy program for students with significant developmental disabilities. Except. Child., 75, 33–52.
Bruner, J. (1985) Child’s talk: learning to use language. Child. Lang. Teach. Ther., 1, 111–114.
Catts, H.W., Fey, M.E., Tomblin, J.B. and Zhang, X. (2002) A longitudinal investigation of reading outcomes in children with language impairments. J. Speech Lang. Hear. Res., 45, 1142–1157.
Dada, S. and Alant, E. (2009) The effect of aided language stimulation on vocabulary acquisition in children with little or no functional speech. Am. J. Speech Lang. Pathol., 18, 50–69.
Drager, K.D., Postal, V.J., Carrolus, L., Castellano, M., Gagliano, C. and Glynn, J. (2006) The effect of aided language modeling on symbol comprehension and production in 2 preschoolers with autism. Am. J. Speech Lang. Pathol., 15, 112–125.
Frost, L. and Bondy, A. (2002) The Picture Exchange Communication System Training Manual (2nd edn). Pyramid Educational Consultants Inc.
Ganz, J.B., Earles-Vollrath, T.L., Heath, A.K., Parker, R.I., Rispoli, M.J. and Duran, J.B. (2012) A meta-analysis of single case research studies on aided augmentative and alternative communication systems with individuals with autism spectrum disorders. J. Autism Dev. Disord., 42, 60–74.
Goossens, C., Crain, S.S. and Elder, P. (1992) Engineering the Preschool Environment for Symbolic Interactive Communication. Southeast Augmentative Communication.
Greer, R.D. and Ross, D.E. (2008) Verbal Behavior Analysis. Pearson Education.
Higginbotham, J., Fulcher, K. and Seale, J. (2016) Time and Timing Interactions. J & R Press Ltd.
Kasari, C., Kaiser, A., Goods, K., Nietfeld, J., Mathy, P., Landa, R., Murphy, S. and Almirall, D.
(2014) Communication interventions for minimally verbal children with autism: a sequential multiple assignment randomized trial. J. Am. Acad. Child Adolesc. Psychiatry, 53, 635–646.
Katya, H. (2004) Augmentative and alternative communication and language: evidence-based practice and language activity monitoring. Top. Lang. Disord., 24, 18–30.
Koul, R. and Schlosser, R. (2004) Effects of synthetic speech output in the learning of graphic symbols of varied iconicity. Disabil. Rehabil., 26, 1278–1285.
Kuriki, S., Tamura, Y., Igarashi, M., Kato, N. and Nakano, T. (2016) Similar impressions of humanness for human and artificial singing voices in autism spectrum disorders. Cognition, 153, 1–5.
Light, J.C., Arnold, K.B. and Clark, E.A. (2003) Finding a place in the social circle of life. In Light, J.C., Beukelman, D.R. and Reichle, J. (eds), Communicative Competence for Individuals Who Use AAC: From Research to Effective Practice. pp. 361–397. Paul H. Brookes Publishing Co.
Light, J.C. and McNaughton, D. (eds) (2009a) ALL (Accessible Literacy Learning): Evidence-Based Reading Instruction for Learners with Autism, Cerebral Palsy, Down Syndrome and Other Disabilities. Mayer-Johnson.
Light, J. and McNaughton, D. (2009b) Addressing the literacy demands of the curriculum for conventional and more advanced readers and writers who require AAC. In Soto, G. and Zangari, C. (eds), Practically Speaking: Language, Literacy, and Academic Development for Students with AAC Needs. pp. 217–246. Paul H. Brookes, Baltimore, MD.
Light, J., Roberts, B., Dimarco, R. and Greiner, N. (1998) Augmentative and alternative communication to support receptive and expressive communication for people with autism. J. Commun. Disord., 31, 153–180.
Light, J., Worah, S., Bowker, A., Burki, B., Drager, K., Da Silva, K. et al. (2008) Children’s representations of early language concepts: implications for AAC symbols. Presentation at the American Speech-Language-Hearing Association Convention, Chicago, IL.
Lin, I.-F., Shirama, A., Kato, N. and Kashino, M. (2017) The singular nature of auditory and visual scene analysis in autism. Phil. Trans. R. Soc. B, 372, 20160115.
Lloyd, L.L., Fuller, D.R. and Arvidson, H.H. (1997) Augmentative and Alternative Communication: A Handbook of Principles and Practices. Pearson.
Marvin, C.A., Beukelman, D.R. and Bilyeu, D. (1994) Vocabulary use patterns in preschool children: effects of context and time sampling. Augment. Altern. Commun., 10, 224–236.
McArthur, G.M., Hogben, J.H., Edwards, V.T., Heath, S.M. and Mengler, E.D. (2000) On the specifics of specific reading disability and specific language impairment. J. Child Psychol. Psychiatry, 41, 869–874.
Mendelsohn, A.L., Mogilner, L.N., Dreyer, B.P., Forman, J.A., Weinstein, S.C., Broderick, M., Cheng, K.J., Magloire, T., Moore, T. and Napier, C. (2001) The impact of a clinic-based literacy intervention on language development in inner-city preschool children. Pediatrics, 107, 130–134.
Mineo Mollica, B. (2003) Representational competence. In Light, J.C., Beukelman, D.R. and Reichle, J. (eds), Communicative Competence for Individuals Who Use AAC: From Research to Effective Practice. pp. 107–146. Paul H. Brookes Publishing Co., Baltimore.
Mirenda, P. and Iacono, T. (eds) (2009) Autism Spectrum Disorders and AAC. Paul H. Brookes Pub.
NRP (2000) Report of the National Reading Panel: Teaching Children to Read: An Evidence-Based Assessment of the Scientific Research Literature on Reading and its Implications for Reading Instruction: Reports of the Subgroups. National Reading Panel (US). National Institute of Child Health and Human Development, National Institutes of Health.
Raban, B. (1987) The Spoken Vocabulary of Five-Year-Old Children. The Reading and Language Information Centre, Reading, England.
Romski, M.A., Sevcik, R.A. and Fonseca, A.H. (2003) Augmentative and Alternative Communication for Persons with Mental Retardation. Academic Press.
Romski, M.A., Sevcik, R., Smith, A., Barker, R.M., Folan, S. and Barton-Hulsey, A. (2009) The System for Augmenting Language: Implications for Young Children with Autism Spectrum Disorders. Paul Brookes Publishing.
Schlosser, R. (2003) Roles of speech output in augmentative and alternative communication: narrative review. Augment. Altern. Commun., 19, 5–27.
Skinner, B.F. (1957) Verbal Behavior. Prentice Hall, Inc.
Tobe, R.H., Corcoran, C.M., Breland, M., MacKay-Brandt, A., Klim, C., Colcombe, S.J., Leventhal, B.L. and Javitt, D.C. (2016) Differential profiles in auditory social cognition deficits between adults with autism and schizophrenia spectrum disorders: a preliminary analysis. J. Psychiatric Res., 79, 21–27.
Todman, J., Alm, N., Higginbotham, J. and File, P. (2008) Whole utterance approaches in AAC. Augment. Altern. Commun., 24, 235–254.
Tomasello, M. (2009) Constructing a Language. Harvard University Press.
Worah, S.
(2008) The effects of redesigning the representations of early emerging concepts on identification and preference: a comparison of two approaches for representing vocabulary in Augmentative and Alternative Communication (AAC) systems for young children. ProQuest.

Footnotes

1 Non-symbolic communication, or pre-linguistic communication, refers to both intentional and unintentional behaviors that may be either conventional or unconventional and that do not involve the use of symbolic modes such as pictures, manual signs or printed words (Mirenda and Iacono, 2009, p. 226).
2 Available at https://sourceforge.net/projects/arasuite/
3 Available at http://asistic.ugr.es/scaut/
4 Source: The South Carolina Assistive Technology Program (SCATP). http://scatp.med.sc.edu/
5 SymbolStix™ is an n2y product, and it is commercially available at https://www.n2y.com
6 Available at http://www.arasaac.org/
7 Available at http://asistic.ugr.es/scaut
8 When the dataset is small, Student’s t-test is the usual choice, but it requires the data to be normally distributed. For this reason, we applied the Shapiro–Wilk test, which is suited to small datasets. We found that group B (no categories) can be assumed to follow a normal distribution, but this is not the case for group A (with categories). As a consequence, we applied a non-parametric test, the Mann–Whitney–Wilcoxon (MWW) U test.

Author notes
Editorial Board Member: Dr. Effie Law
© The Author(s) 2018. Published by Oxford University Press on behalf of The British Computer Society. All rights reserved. For Permissions, please email: email@example.com
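The two-step procedure described in footnote 8 can be illustrated with a minimal sketch. The data below are invented for illustration only (the paper’s group A and group B measurements are not reproduced here), and in practice a statistics library such as scipy.stats (shapiro, mannwhitneyu) would also supply the p-values; the rank-sum computation behind the MWW U statistic itself is simple enough to show directly:

```python
# Minimal sketch of the non-parametric comparison from footnote 8.
# The sample values are hypothetical; only the U-statistic computation
# is shown (p-values would come from a statistics library in practice).

def mann_whitney_u(a, b):
    """Return the Mann-Whitney-Wilcoxon U statistic for two samples."""
    # Pool both samples, remembering which group each value came from.
    combined = sorted((v, src) for src, vals in (("a", a), ("b", b)) for v in vals)
    # Assign ranks 1..N, giving tied values their average rank.
    rank_of = []  # list of (group, rank) pairs
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j][0] == combined[i][0]:
            j += 1
        avg = (i + 1 + j) / 2  # average of ranks i+1 .. j
        for k in range(i, j):
            rank_of.append((combined[k][1], avg))
        i = j
    r_a = sum(r for src, r in rank_of if src == "a")  # rank sum of group a
    n_a, n_b = len(a), len(b)
    u_a = r_a - n_a * (n_a + 1) / 2
    u_b = n_a * n_b - u_a
    return min(u_a, u_b)

group_a = [3, 5, 7, 9, 11]   # hypothetical scores (with categories)
group_b = [2, 4, 6, 8, 10]   # hypothetical scores (no categories)
print(mann_whitney_u(group_a, group_b))  # → 10.0
```

The resulting U is compared against the critical value for the chosen significance level; the normality screen (Shapiro–Wilk) only decides whether this non-parametric route is needed instead of Student’s t-test.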
Interacting with Computers – Oxford University Press
Published: Mar 1, 2018