TY - JOUR AU - Novais, P. AB - Abstract Current research on computational intelligence is being conducted in order to emulate and/or detect emotional states using specific devices such as wristbands or similar wearables. In this sense, this paper proposes the use of intelligent wristbands for the automatic detection of emotional states in order to develop an application which allows us to extract, analyse, represent and manage the social emotion of a group of entities. Nowadays, most of the existing approaches are centred on the emotion detection and management of a single entity. The designed system has been developed as a multi-agent system where each agent controls a wearable device and is in charge of detecting individual emotions based on bio-signals. 1 Introduction The emulation of emotional states allows machines to represent some human emotions. Current research on computational intelligence is being conducted in order to emulate and/or detect those emotional states [6]. This artificial representation of emotions is being used by machines to improve the interaction process with humans. In order to create fluid emotional communication between humans and machines, the machines first need to detect the emotion of the human, with the final purpose of enhancing decision-making processes while improving human–computer interactions [10]. To accomplish these tasks, it is necessary to use different techniques such as speech recognition, artificial vision, written text analysis, body gestures and bio-signals. Human beings perceive and analyse a wide range of stimuli in different environments. These stimuli affect our comfort levels, modifying our emotional states. In response to each of these stimuli, humans generate several types of responses, such as changes in facial gestures, body movement or bio-electrical impulses. These variations in our emotional states could be very useful information for machines.
To do this, machines require the capability of correctly interpreting such variations. This is the reason for the design of emotional models that interpret and represent the different emotions in a computational way. Emotional models such as the Ortony, Clore & Collins model [5] and the PAD (Pleasure-Arousal-Dominance) model [12] are the ones most used to detect or simulate emotional states. With the emergence of new smart devices, in areas such as ubiquitous computing and ambient intelligence, emotional states have become very valuable information, which allows the development of new applications that help to improve people's quality of life in a more accurate and reliable way. Nevertheless, the detection of the joint emotion of a heterogeneous group of people is still an open issue: most of the existing approaches are centred on the emotion detection and management of a single entity. In this work we propose to detect the social emotion of groups of people in an Ambient Intelligence (AmI) application with the help of smart wearables. Specifically, we propose a system that automatically detects the emotions of people through individual wristbands. The main goal of the proposed system is to obtain the social emotion of a group of people in order to improve their well-being (for example, by playing some kind of music to increase their happiness). Each individual will have a possibly different emotional response. This response will be detected and transmitted by the wristbands in order to calculate a social emotion for the set of individuals. This social emotion will be used to predict the most appropriate action according to the domain where the application is used.
In this way, the designed system has been developed as a multi-agent system [19] where each agent controls a wearable device and is in charge of detecting individual emotions based on bio-signals. The rest of the paper is structured as follows: Section 2 gives an overview of the related state of the art; Section 3 describes the system, focusing on the description of the Wristband Agent; finally, Section 4 presents some conclusions. 2 State of the art 2.1 Ambient assisted living AmI systems are becoming very popular due to people's interest in the solutions they can provide. Thus, the domain addressed in this paper is also the aim of other projects whose goal is to solve these societal issues. AmI systems provide assistance at home by monitoring the environment and acting according to each situation to which the users are exposed. When a home is equipped with sensor systems, actuators and interfaces, it becomes aware of and able to change the environmental conditions (through actuators or smart objects) and to give information to the users proactively or when required (through sound, screens, etc.). This increases the quality of life of the home residents and is cost-effective (requiring less intervention from external agents). While adapting these homes would initially have a significant cost, this cost would amortize over time, as the maintenance cost is low. Alemdar and Ersoy [1] affirm that the Ambient Intelligence (AmI) (and thus Ambient Assisted Living (AAL)) domain can be clustered into five sections:
- Daily living activities
- Fall and movement detection
- Location tracking
- Medication control
- Medical status monitoring
These clusters help to identify the projects by their aim or action area. Although some projects are spread across clusters, they usually have one distinguishable main goal and can thus be identified with a specific cluster. Next we present state-of-the-art projects from each cluster that are also comparable to our work.
Daily living activities. Revita [17] is a platform for virtual residencies, designed for the elderly and their caregivers, giving information about medication and daily activities and establishing a social network. The aim is to create a sphere of help around the users, giving personalized information to each user and allowing the caregivers to do real-time monitoring. The users (elderly and caregivers) use mobile devices or regular computers to access the platform and receive information. Apart from the direct monitoring, the caregivers are able to communicate with the elderly through video chat, so doctors can hold a consultation with the elderly without requiring them to leave home. The elderly also have a social network designed for them, to share their daily experiences and medical advice. Fall and movement detection. Castillo et al. [4] built a platform that is able to detect whether a person is on the ground and whether the fall was intentional or not. To do this, the platform uses cameras placed at strategic points of the home, continuously recording the environment. One issue with this project is the privacy concern that arises from the constant visual monitoring. There are other systems, like floor sensors [14], but they are quite inaccurate and suffer from false positives that confuse the system (e.g. when someone leans down to pick up an object), so they are not currently a viable option. Location tracking. Ramos et al. [16] introduce a location tracking platform that is able to use a regular smartphone to track one's location and send it to an online monitoring platform. The aim is to allow caregivers to proactively monitor the care-receiver's location. This is useful for people who suffer from cognitive disabilities and get confused frequently.
The platform also offers proactive actions: the caregivers are able to define secure and insecure areas for each person, and if an insecure area is entered the caregiver receives a notification and can act accordingly. One drawback of this system is its battery consumption, which limits the usage time, as carrying extra batteries is not always feasible. Medication control. Like [17], the projects in [22, 24] help elderly people to remember to take their medication at the correct moment. They work through mobile device interfaces or automated messages to mobile phones. This cluster is typically a by-product of projects from other clusters, like Daily living activities. Medical status monitoring. SmartBEAT [21] is a platform that tracks, analyses and compiles information about the users' medical condition. The platform connects to several sensor systems, e.g. connected medical devices (like smart scales and blood pressure monitors), body sensors (like sensor shirts), etc., processes that data according to medical guidelines and makes the information available to the users and the caregivers. The aim is to record information that will be used by doctors to easily understand the current health state of the users, and to show immediate information to the platform users so they can preventively correct the bad behaviours that are affecting the sensor readouts. 2.2 Affective computing These five clusters are often affected by other domains that pervade the projects with features outside their main scope. One of them is affective computing. With the rise of human–machine interaction came the need for human-like responses from machines. Currently there are systems that are able to communicate and negotiate similarly to humans [20, 25], and while they are initially convincing, over time they lack human features, such as empathy, that would sustain the initial awe.
To overcome this issue, the affective computing domain offers solutions that enable computer systems to effectively emulate emotions, closing the gap between humans and computers [23]. Nowadays, there are some relevant projects situated in the same domain as our work that provide relevant methods and approaches to affective computing in AmI. Menendez-Ferreira et al. [13] present a project (SAVE IT) that uses affective computing to reduce and prevent violence and to tackle racism, discrimination and intolerance. It is applied to the sports area, and its aim is to teach children positive values by attending to their emotions and proactively responding to them. It works by interpreting the children's emotional response to certain themes (pre-instated values) and modulating an emotional response that counteracts it, soliciting positive reinforcement of core values such as respect and empathy. Carneiro et al. [3] present a project that uses text-modelled advice to regain attention in computer tasks. The aim is to optimize teaching and working tasks by increasing the attention span of computer users. This is done by monitoring the users (through cameras and sensors in the keyboard and mouse), attaining their emotional status (focusing on stress/happiness) and modelling an emotional response by the computer to nudge the users to maintain their attention. The use of personalized text increases emotional engagement, and thus increases the positive response to the advice displayed. Janssen et al. [8] present a music player that selects music for mood enhancement. The player collects data from body sensors and translates them into emotions. These emotions are then associated with the music that is playing. An associative map is created, relating the various meta-data (genre, year, etc.) to the emotional response. From a broad set of songs the music player is able to create a playlist that pleases the listener within a short number of songs.
The aim is to elicit a positive emotional response from the listener, thus improving their quality of life. Although a broad range of projects is currently being developed, most are still incipient. Few already present results, and these are limited in terms of the scope of the subjects they use. Thus, there is still a long path to walk and areas to discover. Our goal is to build a tool that helps elderly people in their daily life, through the use of non-invasive devices to attain the users' emotional status. 3 System proposal This section explains the different elements that compose the multi-agent system, which describes a way to detect emotions based on bio-signals. One of the main problems in detecting human emotions is information capture, and there are different ways to obtain this information. Normally, this information is acquired using image processing, but we can find other approaches such as capturing and processing speech, body gestures and bio-signals. Emotion detection using bio-signals focuses on body signals such as skin resistance (GSR), the photoplethysmogram, heart rate and the electroencephalogram. This paper only takes three of these signals into account, since acquiring electroencephalography signals requires a large number of sensors connected to the user, which would bother the user far more than wearing a wristband. In recent years, the use of wearable devices has been growing; devices such as Samsung1 with the Gear Fit and Gear S2, or Apple2 with the Apple Watch, are only some examples. These devices can measure heart rate or hand movement using an Inertial Measurement Unit (IMU). Based on these devices and using current embedded-systems technology, it is possible to create new smart bracelets. These bracelets include different sensors that allow the acquisition of bio-signals that can help in the detection of human emotions.
Using signals of this kind, along with complex algorithms based on machine learning techniques, it is possible to recognize how humans change their emotional states. The system presented here works not only with individual emotions, but with the emotion of a group, reacting not only to the emotion of one individual but to the whole group in which those individuals are situated. The idea is twofold: to try to bring the group closer to a goal emotion, and to try to find the individual furthest from the group in order to act on him or her. For this reason, we use the Social Emotion model [18] to obtain the emotion of a group at a specific instant. This model is composed of a triplet that defines the Social Emotion (SE) for a group of n agents $Ag = \{ag_1, ag_2, \ldots, ag_n\}$ at instant $t$ as \begin{equation*} SE_{t}(Ag) = \big(\overrightarrow{CE}_{t}(Ag), \overrightarrow{m}_{t}(Ag), \overrightarrow{\sigma}_{t}(Ag)\big), \end{equation*} where $\overrightarrow{CE}_{t}(Ag)$ is a vector in the Pleasure-Arousal-Dominance (PAD) space, each of whose components is calculated by averaging the $P$, $A$ and $D$ values, respectively, of the n agents forming the set $Ag$. The vector $\overrightarrow{m}_{t}(Ag)$ indicates whether there exist agents whose emotional state is far away from the central emotion; the Euclidean distance is used to calculate the maximum distance between the emotion of each agent and $\overrightarrow{CE}_{t}(Ag)$. The standard deviation $\overrightarrow{\sigma}_{t}(Ag)$ gives the level of emotional dispersion of the group of agents around the central emotion $\overrightarrow{CE}_{t}(Ag)$, for each component of the PAD space. The proposed multi-agent system is formed by three types of agents: the Wristband Agent, the Social Emotion Agent (SEtA) and the Manager Agent.
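The Social Emotion triplet can be computed directly from its definition. The following minimal Python sketch assumes each agent's emotion is a (P, A, D) tuple; the use of the population (rather than sample) standard deviation is an assumption, since the text does not fix that convention:

```python
from math import sqrt, dist  # math.dist requires Python 3.8+

def social_emotion(pad_vectors):
    """Compute the Social Emotion triplet (CE, m, sigma) for a group.

    pad_vectors: list of (P, A, D) tuples, one per agent.
    Returns (central_emotion, max_distance, per_component_std).
    """
    n = len(pad_vectors)
    # Central emotion: component-wise mean of the agents' PAD values.
    ce = tuple(sum(v[i] for v in pad_vectors) / n for i in range(3))
    # m: maximum Euclidean distance from any agent's emotion to CE.
    m = max(dist(v, ce) for v in pad_vectors)
    # sigma: per-component (population) standard deviation around CE.
    sigma = tuple(sqrt(sum((v[i] - ce[i]) ** 2 for v in pad_vectors) / n)
                  for i in range(3))
    return ce, m, sigma
```

For three agents at (0.8, 0.5, 0.3), (0.4, 0.1, 0.7) and (0.6, 0.3, 0.5), the central emotion is (0.6, 0.3, 0.5), and the first two agents tie for the maximum distance from it.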
The Wristband Agent is mainly in charge of the following: capturing emotional information from the environment, and specifically from a particular individual, by interacting with the real world through the employed wristband (the agent captures the different bio-signals that will be used to detect the emotion of a human being); and predicting the emotional state of the individual from the processed bio-signals. In order to analyse these changes and predict emotional states, the Wristband Agent employs a classifier algorithm that will be explained later. Once the emotion has been obtained, it is sent to the agent in charge of calculating the social emotion of the agent group. This agent is called the SEtA. The main goal of this agent is to receive the calculated emotions from all the Wristband Agents and, using this information, generate a social emotional state for the agent group (details of how this social emotion is calculated can be seen in [18]). Once this social emotion is obtained, the SEtA can calculate the distance between the social emotion and a possible target emotion (for instance, the target emotion can be happiness). This tells us how far the agent group is from the target emotion, and the system can try to reduce that distance by modifying the environment. This modification of the environment is made by the Manager Agent, which has the know-how of the specific domain. This agent uses the social emotional value to calculate the next action to be taken. After different executions, the Manager Agent can evaluate the effect that the actions have had on the audience. This helps the manager decide whether or not to continue with the same actions in order to improve the emotional state of the group of people. Figure 1. Graphical view of the proposed process.
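The SEtA's distance check reduces to a Euclidean distance in PAD space. In this small sketch, the PAD coordinates chosen for "happiness" are an illustrative assumption, not values given in the paper:

```python
from math import dist

# Assumed illustrative PAD placement of the target emotion "happiness".
TARGET_HAPPINESS = (0.8, 0.5, 0.4)

def distance_to_target(central_emotion, target=TARGET_HAPPINESS):
    """Euclidean distance from the group's central emotion to the target;
    the Manager Agent acts on the environment to try to reduce it."""
    return dist(central_emotion, target)
```

A group whose central emotion already coincides with the target has distance zero, so the Manager Agent would leave the environment unchanged.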
Due to the limits of the paper, we only describe in detail the processes made by the Wristband Agent which are the model design, the data acquisition process and the emotion recognition. Moreover, the physical components of the wristband prototype are also described. 3.1 Model design For the recognition process of human emotions, the first step is to train the appropriated models. Nevertheless, to train these models, it is necessary to have a data set that normally is impossible to find. This is mainly due to the fact that humans respond differently to the environment stimuli. For this reason, we propose a tool that uses images as a visual stimulus. These images were obtained from the International Affective Picture System [9]3. Using them, we design a graphical user interface (see Figure 2) that connects with the wristband and at the same time shows the different images. Figure 1 shows the different steps that follow the application used to build the machine learning module. At step 1 the user watches images in 10 seconds intervals. During these 10 seconds the step 2 is activated and takes three images at the beginning, one in the middle and another at the end of each interval. At step 3 the captured images are sent to Microsoft cognitive service to detect the emotions. At the same time with steps 1 and 2, the step 4 acquires the bio-signals through the wristband and sends them to the system to be processed. Once the 10 seconds have finished, the user performs the SELF-ASSESSMENT MANIKIN (SAM) test [2] (step 5) in order to qualify qualitatively his emotional experience in front of the image. The different responses obtained by the user are stored in a database (step 6) that is composed by the image name, the emotion delivered by the cognitive service, the bio-signals acquired through the wristband and the SAM test result. This data set is used to train the machine learning model that will be integrated into the wristband. 
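As a concrete picture of the step-6 storage, each record (image name, cognitive-service emotion, raw bio-signals, SAM result) could be kept in a small relational table. The schema and field names below are assumptions for illustration, not the authors' actual database:

```python
import sqlite3

# Hypothetical schema for the step-6 database described in the text.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE samples (
        image_name      TEXT,   -- IAPS image shown to the subject
        service_emotion TEXT,   -- label returned by the cognitive service
        gsr             BLOB,   -- raw GSR window from the wristband
        ppg             BLOB,   -- raw photoplethysmogram window
        sam_p REAL, sam_a REAL, sam_d REAL  -- SAM ratings in PAD terms
    )""")
conn.execute(
    "INSERT INTO samples VALUES (?, ?, ?, ?, ?, ?, ?)",
    ("iaps_sample.jpg", "happiness", b"\x01\x02", b"\x03\x04", 0.7, 0.4, 0.5))
conn.commit()
```

One row per 10-second stimulus interval would yield the thousand-sample data set used later for training.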
The Graphical User Interface (GUI) (Figure 2) was divided into three sections: one to show the image with which we want to stimulate the emotional change, a second for the image captured using the web cam, and a third for the SAM test. Figure 3 shows the waveform of a GSR signal (red line) and a photoplethysmogram signal (blue line), acquired through the wristband. 3.2 Data acquisition process This process, performed by the Wristband Agent, is responsible for capturing the different life signals required. For this purpose, the Wristband Agent uses the following sensors: GSR and photoplethysmography (Figure 4). Figure 2. Graphical interface used to get the data. Figure 3. Galvanic Skin Response (GSR) and Photoplethysmogram (PPG) raw data. Figure 4. View of the employed sensors. Figure 5. Neural network architecture. Figure 6. MSE of the neural network. The GSR sensor is capable of measuring skin conductivity, also known as the electro-dermal response (and in older terminology as the 'Galvanic Skin Response'). This is a phenomenon in which the skin momentarily becomes a better conductor of electricity when external or internal stimuli occur that are physiologically arousing. Arousal is a broad term referring to overall activation, and is widely considered to be one of the three main dimensions of an emotional response. Therefore, measuring arousal is not the same as measuring emotion, but it is an important component of it.
Some studies [15] have shown that increases in heart rate (bpm) can be associated with emotional changes. There are different methods that can help calculate bpm, such as electrocardiography (ECG), phono-cardiography and photo-plethysmography. The first two methods need sensors such as physiological electrodes or expensive microphones to calculate bpm, and ECG cannot easily be measured in daily life, because the patient must remain still while the measurements are taken. For this reason, we decided to use a wearable heart rate sensor. This sensor is composed of a red light source (LED) that emits a beam into the skin to illuminate the subcutaneous vessels, which reflect part of the beam depending on the number of red blood cells they contain. The reflected light hits a photo-sensor; using this measurement, it is possible to obtain the cardiac cycle by measuring the time interval between successive voltage peaks. A Butterworth band-pass filter was used to keep only the frequencies between 0.1 and 200 Hz. At the same time, it was necessary to use a Butterworth band-stop filter to eliminate the 50 Hz frequency produced by the power grid. The latter process facilitates the creation of characteristic vectors, eliminating noise and reducing classification errors. 3.3 Emotion classification In this section we describe the method used to perform emotion recognition using the bio-signals acquired with the wristband. As mentioned in the previous section, we built a data set to train an Artificial Neural Network (ANN) to classify the user's emotion from the bio-signals captured by the wristband. This data set was created using five test subjects, where the input corresponds to the three captured biological signals: GSR, photoplethysmogram and temperature.
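The peak-interval heart-rate computation described in the data acquisition section can be sketched in a few lines. The naive local-maximum peak detector, the threshold value and the synthetic test signal below are illustrative assumptions; in practice the Butterworth-filtered PPG signal would be used:

```python
import math

def estimate_bpm(ppg, fs, threshold=0.5):
    """Estimate heart rate (bpm) from a PPG trace by measuring the time
    interval between successive voltage peaks.

    ppg: list of samples; fs: sampling rate in Hz.
    Returns None if fewer than two peaks are found.
    """
    # Naive peak detector: samples above threshold that are local maxima.
    peaks = [i for i in range(1, len(ppg) - 1)
             if ppg[i] > threshold and ppg[i] > ppg[i - 1] and ppg[i] >= ppg[i + 1]]
    if len(peaks) < 2:
        return None
    # Average peak-to-peak interval in seconds -> beats per minute.
    intervals = [(b - a) / fs for a, b in zip(peaks, peaks[1:])]
    return 60.0 / (sum(intervals) / len(intervals))

# Synthetic PPG: a 1.2 Hz sine (i.e. 72 bpm) sampled at 100 Hz for 10 s.
fs = 100
signal = [math.sin(2 * math.pi * 1.2 * t / fs) for t in range(10 * fs)]
bpm = estimate_bpm(signal, fs)
```

On the synthetic trace the estimate lands close to the 72 bpm generating frequency; real PPG peaks are noisier, which is why the band-pass and notch filtering described above matter.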
It was decided to use a double track to determine the emotions expressed by the user immersed in the experiment: one corresponding to the emotion obtained through Microsoft's cognitive service and the other using the SAM test. The SAM test gives values in PAD terms, and the Microsoft service returns the probabilities of seven emotions. A thousand samples were taken to create the data set (corresponding to the thousand images that the subjects saw during the experiment). This data set contains the signal inputs acquired by the sensors. Each input channel of the bracelet is formed by 256 samples, and each one of them is an input to our neural network. The neural network was implemented using the PyBrain tool. PyBrain is short for Python-Based Reinforcement Learning, Artificial Intelligence and Neural Network Library. Different experiments were carried out to find the best classification by changing the learning rate, the activation function for each layer and the number of epochs. The architecture of our neural network is shown in Figure 5: it has 256 input neurons, 5 hidden neurons and 7 output neurons, corresponding to the 7 emotions to be classified. Figure 7. Cross validation. The ANN has a backpropagation architecture and was trained using a supervised methodology, since the objective of the network is to classify human emotion. As mentioned before, this information was extracted from our data set. For this training, we used 80% of the data to train the network and 20% as a test set. Figure 6 shows the mean square error (MSE) of the neural network. Figure 8. Components of the smart wristband prototype.
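Since PyBrain is no longer maintained, the same 256-5-7 backpropagation architecture can be sketched with numpy as a hedged illustration. The synthetic data, tanh/softmax choice, learning rate and epoch count below are assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Shapes follow the paper: 256 inputs, 5 hidden neurons, 7 emotion outputs.
n_in, n_hidden, n_out = 256, 5, 7
W1 = rng.normal(0, 0.1, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, n_out)); b2 = np.zeros(n_out)

def forward(X):
    h = np.tanh(X @ W1 + b1)                       # hidden activation
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return h, e / e.sum(axis=1, keepdims=True)     # softmax probabilities

# Toy data standing in for the 256-sample bio-signal windows.
X = rng.normal(size=(70, n_in))
y = np.repeat(np.arange(n_out), 10)                # 10 windows per emotion
Y = np.eye(n_out)[y]                               # one-hot targets

def cross_entropy():
    p = forward(X)[1]
    return -np.log(p[np.arange(len(X)), y] + 1e-12).mean()

loss_before = cross_entropy()
lr = 0.05
for epoch in range(300):                           # full-batch backprop
    h, p = forward(X)
    grad_logits = (p - Y) / len(X)                 # softmax + CE gradient
    grad_h = grad_logits @ W2.T * (1 - h ** 2)     # backprop through tanh
    W2 -= lr * h.T @ grad_logits; b2 -= lr * grad_logits.sum(axis=0)
    W1 -= lr * X.T @ grad_h;      b1 -= lr * grad_h.sum(axis=0)
loss_after = cross_entropy()
```

With a real data set, the 80/20 split and cross validation described in the text would be applied on top of this training loop.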
As a result, the wristband obtains the current emotional state of the individual. This value is sent to the SEtA agent in order to calculate the group's Social Emotion. To ensure the independence of the training and test data partitions, cross validation was performed; the result of this test can be seen in Figure 7. 3.4 Wristband prototype This section describes the design of the physical wristband. The bracelet was designed using an Arduino mini4 as the processing system, with four different sensors connected to it: a GSR sensor, which perceives skin variation; a photoplethysmography sensor to measure blood volume; an Inertial Measurement Unit (IMU) to detect hand movement; and a temperature sensor. In fact, the IMU consists of two sensors, an accelerometer and a gyroscope. Figure 8 shows the different components of our wristband prototype, which are the following: (1) power supply and battery charger (3.7 V battery), (2) sensors for the acquisition of the GSR and photoplethysmogram signals and (3) the Arduino mini processor. The bio-signals captured by the wristband are passed through an Analog-to-Digital Converter (ADC), allowing the discretization of the analogue signals. 4 Conclusions and future work This paper presents how to integrate non-invasive bio-signals for the detection of human emotional states through an agent-based application. The identification and detection of human emotional states allows the enhancement of the decision-making process of intelligent agents. The proposed application allows the social emotion of a group of people to be extracted (in a non-invasive way) by means of wearables, facilitating decision-making aimed at changing the emotional state of the individuals. As mentioned before, the application incorporates automatic emotion recognition using bio-signals and machine learning techniques, which are easily included in the proposed system.
Most of the analysed systems do not cover the requirements set out in our proposal, which obtains the emotional state of a person through the use of non-invasive devices. One of our main requirements is that the proposed device does not limit the daily life of the people who use it. In contrast, most existing works, such as [7, 11, 26], allow a person's bio-signals to be obtained, but by means of devices that are uncomfortable or difficult to transport. Our proposal goes one step further, increasing the comfort level of the users while performing body monitoring to attain the users' emotional status. Moreover, the flexibility and dynamism of the proposed application allow the integration of new sensors or signals in future stages of the project. As future work, we want to apply this system to other application domains; specifically, the proposed framework fits the industrial one, for instance representing production lines that include the individuals and their emotional states as additional elements to be considered in the production line. Footnotes 1 http://www.samsung.com 2 http://www.apple.com 3 http://www.csea.phhp.ufl.edu/Media.html 4 https://store.arduino.cc/arduino-mini-05 References [1] H. Alemdar and C. Ersoy. Wireless sensor networks for healthcare: a survey. Computer Networks, 54, 2688–2710, 2010. [2] M. Bradley and P. J. Lang. Measuring emotion: the self-assessment manikin and the semantic differential. Journal of Behavior Therapy and Experimental Psychiatry, 25, 49–59, 1994. [3] D. Carneiro, D. Durães, J. Bajo and P. Novais. Quantifying attention in computer-based tasks. In Affective Computing and Context Awareness in Ambient Intelligence, vol. 1794. CEUR, 2016. [4] J. C. Castillo, J. Serrano-Cuerda, A. Fernández-Caballero and A. Martínez-Rodrigo.
Hierarchical architecture for robust people detection by fusion of infrared and visible video. In Intelligent Distributed Computing IX, pp. 343–351. Springer, Berlin, Heidelberg, 2015. [5] B. N. Colby, A. Ortony, G. L. Clore and A. Collins. The Cognitive Structure of Emotions, vol. 18. Cambridge University Press, 1989. [6] J. Gratch and S. Marsella. Tears and fears: modeling emotions and emotional behaviors in synthetic agents. In Proceedings of the Fifth International Conference on Autonomous Agents, pp. 278–285. ACM, 2001. [7] A. Hristoskova, V. Sakkalis, G. Zacharioudakis, M. Tsiknakis and F. D. Turck. Ontology-driven monitoring of patient's vital signs enabling personalized medical detection and alert. Sensors, 14, 1598–1628, 2014. [8] J. H. Janssen, E. L. van den Broek and J. H. D. M. Westerink. Tune in to your emotions: a robust personalized affective music player. User Modeling and User-Adapted Interaction, 22, 255–279, 2011. http://dx.doi.org/10.1007/s11257-011-9107-7. [9] P. J. Lang, M. M. Bradley and B. N. Cuthbert. International affective picture system (IAPS): affective ratings of pictures and instruction manual. Technical Report A-8. The Center for Research in Psychophysiology, University of Florida, Gainesville, FL, 2008. [10] C. Maaoui and A. Pruski. Emotion recognition through physiological signals for human-machine communication. Cutting Edge Robotics, 317–333, 2010. [11] E. Maier and G. Kempter. ALADIN - a magic lamp for the elderly? In Handbook of Ambient Intelligence and Smart Environments, pp. 1201–1227. Springer, Berlin, Heidelberg, 2010. [12] A. Mehrabian. Analysis of affiliation-related traits in terms of the PAD Temperament Model. The Journal of Psychology, 131, 101–117, 1997. [13] R. Menendez-Ferreira, M. Gomez and D. Camacho.
Save it: saving the dream of a grassroots sport based on values. In Affective Computing and Context Awareness in Ambient Intelligence, vol. 1794, 2016. [14] L. Minvielle, M. Atiq, R. Serra, M. Mougeot and N. Vayatis. Fall detection using smart floor sensor and supervised learning. In 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE, 2017. http://dx.doi.org/10.1109/embc.2017.8037597. [15] H. Nakahara, S. Furuya, S. Obata, T. Masuko and H. Kinoshita. Emotion-related changes in heart rate and its variability during performance and perception of music. Annals of the New York Academy of Sciences, 1169, 359–362, 2009. [16] J. Ramos, T. Oliveira, K. Satoh, J. Neves and P. Novais. Orientation system based on speculative computation and trajectory mining. In Highlights of Practical Applications of Scalable Multi-Agent Systems, pp. 250–261. The PAAMS Collection, Springer Nature, 2016. [17] Revita, 2018. http://revita.hi-iberia.es/. [18] J. Rincon, V. Julian and C. Carrascosa. Social emotional model. In 13th International Conference on Practical Applications of Agents and Multi-Agent Systems, pp. 199–210, vol. 9086 of LNAI. Springer, 2015. [19] V. Sanchez-Anguix, A. Espinosa, L. Hernandez and A. Garcia-Fornes. Mamsy: a management tool for multi-agent systems. In 7th International Conference on Practical Applications of Agents and Multi-agent Systems (PAAMS 2009), pp. 130–139. Springer, 2009. [20] V. Sanchez-Anguix, T. Dai, Z. Semnani-Azad, K. Sycara and V. Botti. Modeling power distance and individualism/collectivism in negotiation team dynamics. In 45th Hawaii International Conference on System Science (HICSS), pp. 628–637. IEEE, 2012. [21] SmartBEAT, 2018. https://www.smartbeatproject.org. [22] K. Stawarz, A. L. Cox and A. Blandford. Don't forget your pill!
In Proceedings of the 32nd Annual ACM Conference on Human Factors in Computing Systems - CHI'14. Association for Computing Machinery (ACM), 2014. [23] J. Tao, T. Tan and R. W. Picard, eds. Affective Computing and Intelligent Interaction. Springer, Berlin, Heidelberg, 2005. http://dx.doi.org/10.1007/11573548. [24] N. Tran, J. M. Coffman, K. Sumino and M. D. Cabana. Patient reminder systems and asthma medication adherence: a systematic review. Journal of Asthma, 51, 536–543, 2014. [25] O. Tunalı, R. Aydoğan and V. Sanchez-Anguix. Rethinking frequency opponent modeling in automated negotiation. In International Conference on Principles and Practice of Multi-Agent Systems, pp. 263–279. Springer, 2017. [26] M. Walter, B. Eilebrecht, T. Wartzek and S. Leonhardt. The smart car seat: personalized monitoring of vital signs in automotive applications. Personal and Ubiquitous Computing, 15, 707–715, 2011. © The Author(s) 2018. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permission@oup.com. This article is published and distributed under the terms of the Oxford University Press, Standard Journals Publication Model (https://academic.oup.com/journals/pages/open_access/funder_policies/chorus/standard_publication_model) TI - Detecting emotions through non-invasive wearables JF - Logic Journal of the IGPL DO - 10.1093/jigpal/jzy025 DA - 2018-11-27 UR - https://www.deepdyve.com/lp/oxford-university-press/detecting-emotions-through-non-invasive-wearables-rMrDPhvz0j SP - 605 VL - 26 IS - 6 DP - DeepDyve ER -