
Automatic Sleep Monitoring Using Ear-EEG

POINT-OF-CARE TECHNOLOGIES

Received 31 December 2016; revised 3 March 2017 and 3 April 2017; accepted 24 April 2017. Date of publication 26 June 2017; date of current version 13 July 2017.

Digital Object Identifier 10.1109/JTEHM.2017.2702558

TAKASHI NAKAMURA (1), VALENTIN GOVERDOVSKY (1), (Member, IEEE), MARY J. MORRELL (2,3,4), AND DANILO P. MANDIC (1), (Fellow, IEEE)

(1) Department of Electrical and Electronic Engineering, Imperial College London, London SW7 2AZ, U.K.
(2) Sleep and Ventilation Unit, National Heart and Lung Institute, Imperial College London, London SW3 6NP, U.K.
(3) NIHR Respiratory Disease Biomedical Research Unit, Royal Brompton and Harefield NHS Foundation Trust, Imperial College London, London SW3 6NP, U.K.
(4) Imperial College London, London SW3 6NP, U.K.

CORRESPONDING AUTHOR: D. P. MANDIC ([email protected])

This work was supported in part by the EPSRC under Grant EP/K025643/1, in part by the Rosetrees Trust, in part by the EPSRC Pathways to Impact under Grant PS8038, and in part by the MURI/EPSRC Grant EP/P008461.

ABSTRACT The monitoring of sleep patterns without the patient's inconvenience or the involvement of a medical specialist is a clinical question of significant importance. To this end, we propose an automatic sleep stage monitoring system based on an affordable, unobtrusive, discreet, and long-term wearable in-ear sensor for recording the electroencephalogram (ear-EEG). The selected features for sleep pattern classification from a single ear-EEG channel include the spectral edge frequency and multi-scale fuzzy entropy, a structural complexity feature. In this preliminary study, the manually scored hypnograms from simultaneous scalp-EEG and ear-EEG recordings of four subjects are used as labels for two analysis scenarios: 1) classification of ear-EEG hypnogram labels from ear-EEG recordings; and 2) prediction of scalp-EEG hypnogram labels from ear-EEG recordings. We consider both 2-class and 4-class sleep scoring, with the achieved accuracies ranging from 78.5% to 95.2% for ear-EEG labels predicted from ear-EEG, and 76.8% to 91.8% for scalp-EEG labels predicted from ear-EEG. The corresponding Kappa coefficients range from 0.64 to 0.83 for Scenario 1 and indicate substantial to almost perfect agreement, while for Scenario 2 the range of 0.65 to 0.80 indicates substantial agreement, thus further supporting the feasibility of in-ear sensing for sleep monitoring in the community.

INDEX TERMS Wearable EEG, in-ear sensing, ear-EEG, automatic sleep classification, structural complexity analysis.

This work is licensed under a Creative Commons Attribution 3.0 License. For more information, see http://creativecommons.org/licenses/by/3.0/

I. INTRODUCTION

Sleep is an essential process in the internal control of the state of body and mind, and its quality is strongly linked with a number of cognitive and health issues, such as stress, depression and memory [1]. For clinical diagnostic purposes, polysomnography (PSG) has been extensively utilised, which is based on a multitude of physiological responses, including the electroencephalogram (EEG), electrooculogram (EOG), and electromyogram (EMG). While the PSG is able to faithfully reflect human sleep patterns, both the recording and scoring process are expensive, as this involves an overnight stay in a specialised clinic and time-consuming manual scoring by a medically trained person. In addition, hospitals are unfamiliar environments for patients, which compromises the reliability of the observed sleep patterns. In other words, the conventional recording process is not user-centred and not ideal for long-term sleep monitoring.

With the advance in wearable physiological monitoring devices, it has become possible to monitor some of the sleep-related physiological responses out of the clinic. The next step towards sleep care in the community is therefore to monitor sleep-related physiological signals in an affordable way, at home, and over long periods of time, together with automatic detection of sleep patterns (sleep scoring) without the need for a trained medical expert. Indeed, consumer technologies are becoming increasingly popular for the self-monitoring of sleep [2], and include both mobile apps and wearable devices. While such technologies aim to assess 'sleep quality' and are affordable, these are typically not direct measures of neural activity, and instead measure indirect surrogates of sleep such as limb movement [3].
Another fast developing aspect of sleep research is automatic sleep scoring, with the aim to replace the time-consuming manual scoring of sleep patterns from full PSG with computer software. The manual sleep scoring is performed through a visual interpretation of 30-second PSG recordings, based on well-established protocols such as the manual of the American Academy of Sleep Medicine (AASM) [4]. The diagnostically relevant sleep stages include: wake (W), non-rapid eye movement (NREM) Sleep Stage 1 (N1), NREM Stage 2 (N2), NREM Stage 3 (N3), and REM [5]. Automatic sleep stage scoring employs machine learning and pattern recognition algorithms, and it is now possible to achieve up to 90% accuracy of classification between the W, N1, N2, N3 and REM sleep stages from a single channel EEG [6], [7]. Publicly available resources to evaluate automatic sleep stage classification algorithms include the Sleep EDF database [8]. A single channel EEG montage is therefore a prerequisite for a medical-grade wearable system and for benchmarking new developments against existing solutions.

More recent approaches for sleep monitoring aim to move beyond actigraphy and develop advanced multimodal sensors and wearable devices. In this direction, Le et al. introduced a wireless wearable sensor to monitor vectorcardiography (VCG), ECG, and respiration for detecting obstructive sleep apnea in real time [9]. Using a wearable in-ear EEG sensor (ear-EEG) [10], Looney et al. monitored fatigue, while our recent work evaluated sleep stages during nap episodes from a viscoelastic in-ear EEG sensor [11], see Figure 1.

FIGURE 1. The in-ear sensor used in our study. Left: Wearable in-ear sensor with two flexible electrodes. Right: Placement of the generic earpiece.

The in-ear sensing technology has been proven to provide a sufficiently good EEG signal for brain-computer interface applications with steady-state responses [10], [12], [13], and has more recently been used for monitoring other physiological responses, such as cardiac activity [14], [15]. Such a wearable system is designed to be comfortable over long periods of time, with the electrodes firmly placed inside the ear canal, which ensures good quality of recordings. Even though the amplitude of ear-EEG is smaller than that of scalp-EEG, the signal-to-noise ratio (SNR) was found to be similar [10], [12], [16]. In a sleep monitoring scenario, in-ear wearable sensors have the following advantages:

- Affordability and unobtrusiveness: Our latest sensor (generic earpiece) is made from viscoelastic material [16], such as that used in standard earplugs, see Figure 1.
- User-centred nature: Users are able to insert the sensor by themselves, as when wearing earplugs. The device is comfortable to wear and does not disturb sleep.
- Robustness: The sensor expands after the insertion and maintains a stable interface with the ear canal, and is thus not likely to dislodge during sleep.

In order to examine the feasibility of sleep monitoring with the ear-EEG sensor, we set out to establish a comprehensive cross-validation between standard clinical scalp-EEG recordings and our own ear-EEG recordings. Previously, automatic sleep stage classification using a custom-made hard-shell ear-EEG sensor and from a single subject was undertaken based on manually labelled sleep stages from conventional PSG [17]. Classification performance was evaluated for both scalp-EEG and ear-EEG patterns, and showed that ear-EEG is similarly informative to scalp-EEG to predict sleep stages, which were labelled from a manually scored hypnogram from a conventional PSG recording.
With a different perspective, our recent study [11] performed simultaneous sleep monitoring from four subjects, using both scalp- and ear-EEG data channels, and reported Substantial Agreement between the corresponding hypnograms, manually and blindly scored by a trained clinician, as shown in Figure 2A. The in-ear EEG data were recorded from our novel 'one-fits-all' generic viscoelastic earpieces [16]. In this manuscript, we make a further step towards fully automatic wearable sleep monitoring in the community, by analysing the agreement between the automatically predicted sleep stages by ear-EEG and scalp-EEG patterns. To this end, the sleep-related EEG patterns were obtained from both the scalp and inside the ear simultaneously, using a stationary data acquisition unit. For rigour, the ear-EEG automatic scoring procedures were validated for the following scenarios:

1) Agreement between automatically predicted sleep stages based on ear-EEG patterns and the manually scored hypnogram from ear-EEG (Scenario 1).
2) Agreement between automatically predicted sleep stages based on ear-EEG patterns and the manually scored hypnogram from scalp-EEG (Scenario 2).

FIGURE 2. Comparison with previous studies. A: Evaluation of the agreement between the manually scored hypnograms based on scalp-EEG channels and ear-EEG channels [11]. B: Our analysis framework for establishing the feasibility of ear-EEG in sleep research.

Figure 2B illustrates the proposed analysis framework. The results are benchmarked against the results in [11], where both the scalp- and ear-EEG hypnograms were scored manually. In this way, we establish a proof-of-concept for the feasibility of ear-EEG in automatic scoring of sleep patterns out-of-clinic and in the community.

II. METHODS

A. DATA ACQUISITION
The EEG recordings were conducted at Imperial College London between May 2014 and March 2015 under the ethics approval ICREC 12_1_1, Joint Research Office at Imperial College London. Four healthy male subjects (age: 25-36 years) without a history of sleep disorders participated in the recordings. All participants were instructed to reduce their sleep to less than 5 hours the night before, and agreed to refrain from consuming caffeine and napping on the recording day. The four scalp-EEG channels C3, C4, A1 and A2 (according to the international 10-20 system) were recorded using standard gold-cup electrodes. The forehead was used for the ground, and the standard configurations for sleep scoring were utilised (i.e. C3-A2 and C4-A1). The ear-EEG was recorded from both the left and right ear, and the ear-EEG sensor was made based on a viscoelastic earplug with two cloth electrodes [16], as shown in Figure 1. Earwax was removed from the ear canals, and the sensor expanded after the insertion, to conform to the shape of the ear canal. The reference gold-cup standard electrodes were attached behind the ipsilateral mastoid and the ground electrodes were placed on the ipsilateral helix, as illustrated in Figure 3. Both scalp-EEG and ear-EEG were recorded simultaneously using the g.tec g.USBamp amplifier with 24-bit resolution, at a sampling frequency fs = 1200 Hz.

FIGURE 3. Recording setup in our study. Left: The electrodes were placed on the scalp and ear. Right: The subject reclined in a comfortable chair.

The participants were seated in a comfortable chair in a dark and quiet room. The duration of the recording was 45 minutes, while to increase the number of transitions between the wake and sleep stages, a loudspeaker played a 10 s abrupt noise at random intervals.

B. SLEEP STAGE SCORING

Both the recorded scalp- and ear-EEG were analysed based on the framework illustrated in Figure 4. For scalp-EEG, a 4th-order Butterworth bandpass filter with passband 1-20 Hz was applied to the two bipolar EEG configurations (i.e. C3-A2 and C4-A1). Due to low-frequency interference in the ear-EEG channels, the low cutoff frequency was set to 1 Hz for Subjects 1 and 3, and 2 Hz for Subjects 2 and 4. Next, the ear-EEG amplitudes were normalised to the same range as those of scalp-EEG, and both scalp-EEG and ear-EEG were manually scored by a clinical expert, who had six years of experience in EEG-based sleep stage scoring. The processed EEG data were blinded, and the epoch-based manual sleep scoring was performed according to the American Academy of Sleep Medicine (AASM) criteria [4]. The epoch size was set to 30 s, therefore 90 epochs were scored in each recording.

FIGURE 4. Flowchart for the sleep stage prediction framework adopted in this study (Scenario 2).
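This band-pass filtering step can be sketched with standard signal-processing tools. The snippet below is a minimal illustration using SciPy, not the authors' original code; the zero-phase (forward-backward) filter application and the function names are our own assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1200  # sampling frequency of the g.USBamp recordings, in Hz

def bandpass(eeg, low_hz, high_hz=20.0, fs=FS, order=4):
    """4th-order Butterworth band-pass filter, applied forward-backward
    (zero-phase) to a 1-D EEG signal."""
    nyq = 0.5 * fs
    b, a = butter(order, [low_hz / nyq, high_hz / nyq], btype="band")
    return filtfilt(b, a, eeg)

# Scalp bipolar montages: 1-20 Hz passband.
# c3_a2_filt = bandpass(c3 - a2, low_hz=1.0)
# Ear-EEG: subject-dependent low cutoff (1 Hz for Subjects 1 and 3,
# 2 Hz for Subjects 2 and 4) to suppress low-frequency interference.
# el1_filt = bandpass(el1, low_hz=2.0)
```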
C. PRE-PROCESSING FOR AUTOMATIC STAGE CLASSIFICATION

For automatic sleep stage classification, we considered the recorded EEG from the left ear channel 1 (EL1), for a fair comparison with automatic scoring algorithms for a single EEG channel montage in the literature. First, the data were downsampled to 200 Hz, and the epochs with amplitudes of more than ±400 μV were removed from subsequent analyses. The data were then bandpass filtered with the passband of [0.5, 30] Hz. The pre-processing resulted in a loss of approximately 20% of the data, and eventually 293 epochs (hypnogram based on scalp-EEG: W: 67, N1: 46, N2: 140, N3: 40; hypnogram based on ear-EEG: W: 52, N1: 49, N2: 162, N3: 30) were used for the classification.
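A sketch of this pre-processing chain is given below, assuming the data arrive as 30-s epochs in microvolts. The use of scipy.signal.decimate for the 1200 Hz to 200 Hz downsampling and the 4th-order filter are our assumptions; the paper specifies only the rates, the ±400 μV rejection rule, and the [0.5, 30] Hz passband.

```python
import numpy as np
from scipy.signal import butter, decimate, filtfilt

def preprocess_epochs(epochs_uv, fs_in=1200, fs_out=200,
                      amp_limit_uv=400.0, band=(0.5, 30.0)):
    """Downsample, reject high-amplitude epochs, then band-pass filter.

    epochs_uv : array of shape (n_epochs, n_samples), in microvolts.
    Returns the surviving epochs at fs_out and a boolean keep-mask.
    """
    # 1) Downsample 1200 Hz -> 200 Hz (factor 6, with anti-alias filter).
    down = decimate(epochs_uv, fs_in // fs_out, axis=1, zero_phase=True)

    # 2) Reject epochs whose amplitude exceeds +/- 400 microvolts.
    keep = np.max(np.abs(down), axis=1) <= amp_limit_uv

    # 3) Band-pass filter the remaining epochs to 0.5-30 Hz.
    nyq = 0.5 * fs_out
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    clean = filtfilt(b, a, down[keep], axis=1)
    return clean, keep
```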
D. FEATURE EXTRACTION

After the pre-processing, two types of features were extracted from each epoch of the EEG. These were the same as those in the latest automatic sleep stage classification results based on the Sleep EDF database [18], and included: 1) a frequency domain feature, the spectral edge frequency (SEF), and 2) a structural complexity feature, multi-scale entropy (MSE) [19].

1) FREQUENCY DOMAIN FEATURES

The r% spectral edge frequency (SEFr) is calculated as the rth percentile of the total power obtained from the power spectral density, as illustrated in Figure 5.

FIGURE 5. Spectral edge frequency (SEF) features for the 8-15 Hz band. The symbol SEF50 denotes the lowest frequency below which 50% of the total power in a considered frequency band is contained (cf. SEF95 for 95% of the total power).

Figure 6 illustrates the power spectral density for the scalp C3-A2 (top) and in-ear EL1 (bottom) channels for different sleep stages, labelled manually based on scalp-EEG patterns. Observe that the spectral patterns [20] in scalp-EEG and ear-EEG are similar: the alpha (8-13 Hz) band power in the Wake condition, a slightly smaller alpha power in N1 sleep, and the stronger power of the delta (< 2 Hz) band towards deep sleep.

FIGURE 6. Power spectral density for the scalp C3-A2 montage (top) and for the in-ear EEG channel EL1 (bottom).

We next obtained the SEF50 and SEF95 features for the following five frequency bands: 0.5-30 Hz, 0.5-16 Hz, 8-11 Hz, 8-15 Hz, and 16-30 Hz. In addition, the SEFd feature was calculated as the difference between SEF95 and SEF50, that is, SEFd = SEF95 - SEF50, so that 15 SEF features were obtained from the in-ear EL1 channel. Figure 7 shows the boxplots of the SEF features in the different frequency bands for the EL1 channel and for each sleep stage, averaged over all epochs and subjects. Observe the consistent spread of the SEF features.

FIGURE 7. The frequency domain SEF50, SEF95, and SEFd features for each frequency band from the in-ear EEG channel EL1. The features were averaged over all epochs and subjects.
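The SEF features follow directly from the cumulative power within a band. The sketch below estimates the power spectral density of an epoch with Welch's method and reads off SEF50, SEF95 and SEFd per band; the function names and the Welch segment length are our own choices rather than the paper's.

```python
import numpy as np
from scipy.signal import welch

def sef(epoch, fs=200, band=(8.0, 15.0), r=0.50):
    """Spectral edge frequency: the lowest frequency below which a
    fraction r of the total power in `band` is contained."""
    freqs, psd = welch(epoch, fs=fs, nperseg=2 * fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    f, p = freqs[in_band], psd[in_band]
    cum = np.cumsum(p) / np.sum(p)        # normalised cumulative power
    return f[np.searchsorted(cum, r)]     # first frequency with cum >= r

def sef_features(epoch, fs=200):
    """The 15 SEF features: SEF50, SEF95 and SEFd = SEF95 - SEF50 for
    each of the five frequency bands used in the paper."""
    bands = [(0.5, 30), (0.5, 16), (8, 11), (8, 15), (16, 30)]
    feats = []
    for band in bands:
        sef50 = sef(epoch, fs, band, 0.50)
        sef95 = sef(epoch, fs, band, 0.95)
        feats += [sef50, sef95, sef95 - sef50]
    return np.array(feats)
```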
2) STRUCTURAL COMPLEXITY FEATURES

The multi-scale entropy (MSE) method calculates the structural complexity of time series over multiple temporal scales [19], [21], and can be measured with, e.g., sample entropy, approximate entropy, and permutation entropy. We used multi-scale fuzzy entropy (MSFE) [22] with a small embedding dimension, owing to its robustness in the presence of noise. The following parameters for MSFE were chosen: maximum scale factor 15, m = 2, n = 2, and r = 0.15 x (standard deviation of each epoch). Overall, 15 features were extracted from the EL1 channel and were normalised, as illustrated in Figure 8. Observe the good separation of the entropy values between sleep stages at each scale; in particular, the structural complexity for the Wake condition decreased with the scale factor. For the N3 sleep stage, a large proportion of power is contained in the delta band (relative to the total power), and this more deterministic behaviour caused the FE values to be smaller than in the other sleep stages.

FIGURE 8. Structural complexity features for different sleep stages. Normalised multi-scale fuzzy entropy (MSFE) from the in-ear EEG channel EL1 is evaluated over the scales 1 (standard FE) to 15, and shows excellent separation between sleep stages. The error bars indicate the standard error.
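A compact illustration of coarse-graining followed by fuzzy entropy is given below, using the parameters stated above (scales 1 to 15, m = 2, n = 2, r = 0.15 times the epoch's standard deviation). The exponential membership function follows a common fuzzy-entropy formulation; this is a didactic O(N^2) implementation, not the authors' code, and would need chunking for long series at small scale factors.

```python
import numpy as np

def fuzzy_entropy(x, m=2, n=2, r=0.15):
    """Fuzzy entropy of a 1-D series; r is a fraction of the std of x."""
    x = np.asarray(x, dtype=float)
    rr = r * np.std(x)

    def phi(dim):
        # Overlapping templates of length `dim`, local baseline removed.
        templ = np.array([x[i:i + dim] - np.mean(x[i:i + dim])
                          for i in range(len(x) - dim)])
        # Chebyshev distance between every pair of templates.
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
        sim = np.exp(-(d / rr) ** n)   # fuzzy similarity in [0, 1]
        np.fill_diagonal(sim, 0.0)     # exclude self-matches
        return sim.sum() / (len(templ) * (len(templ) - 1))

    return np.log(phi(m)) - np.log(phi(m + 1))

def msfe(epoch, max_scale=15, m=2, n=2, r=0.15):
    """Multi-scale fuzzy entropy: fuzzy entropy of coarse-grained series."""
    feats = []
    for tau in range(1, max_scale + 1):
        L = (len(epoch) // tau) * tau
        coarse = epoch[:L].reshape(-1, tau).mean(axis=1)  # coarse-graining
        feats.append(fuzzy_entropy(coarse, m, n, r))
    return np.array(feats)
```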
E. CLASSIFICATION

Classification was performed based on the 30 SEF and MSFE features, which were normalised to the range [0, 1]. The one-against-one multi-class support vector machine (SVM) with a radial basis function (RBF) kernel was employed as a classifier [23].

F. PERFORMANCE EVALUATION

Feature extraction was performed using Matlab 2016b, and the classification was conducted in Python 2.7.12, Anaconda 4.2.0 (x86_64), operated on an iMac with a 2.8 GHz Intel Core i5 and 16 GB of RAM. A 5-fold cross-validation (CV) was performed to evaluate the automatic sleep stage classification. The performance metrics used were the class-specific sensitivity (SE) and precision (PR), as well as the overall accuracy (AC) and Kappa coefficient (κ), defined as follows:

$$\mathrm{SE} = \frac{TP}{TP + FN}, \qquad \mathrm{PR} = \frac{TP}{TP + FP}, \qquad \mathrm{AC} = \frac{\sum_{i=1}^{C} TP_i}{N},$$

$$p_e = \frac{\sum_{i=1}^{C} \{(TP_i + FP_i)(TP_i + FN_i)\}}{N^2}, \qquad \kappa = \frac{\mathrm{AC} - p_e}{1 - p_e}.$$

The parameter TP (true positive) represents the number of positive (target) epochs correctly predicted, TN (true negative) is the number of negative (non-target) epochs correctly predicted, FP (false positive) is the number of negative epochs incorrectly predicted as the positive class, FN (false negative) is the number of positive epochs incorrectly predicted as the negative class, C is the number of classes, and N the total number of epochs.
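The classification and evaluation stages can be reproduced in outline with scikit-learn, whose SVC is built on LIBSVM [23] and uses the one-against-one scheme for multi-class problems. The sketch below scales the 30 features to [0, 1], runs a 5-fold cross-validation, and computes SE, PR, AC and κ per the definitions above; leaving the hyper-parameters at the library defaults is our assumption.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

def evaluate(features, labels, n_folds=5):
    """features: (n_epochs, 30) SEF+MSFE matrix; labels: stage per epoch."""
    # Scale each feature to [0, 1]; RBF-kernel SVM (one-against-one).
    model = make_pipeline(MinMaxScaler(), SVC(kernel="rbf"))
    pred = cross_val_predict(model, features, labels, cv=n_folds)

    cm = confusion_matrix(labels, pred)   # rows: true, columns: predicted
    tp = np.diag(cm)
    se = tp / cm.sum(axis=1)   # per-class sensitivity, TP / (TP + FN)
    pr = tp / cm.sum(axis=0)   # per-class precision,   TP / (TP + FP)
    ac = tp.sum() / cm.sum()   # overall accuracy
    kappa = cohen_kappa_score(labels, pred)
    return se, pr, ac, kappa
```

Note that fitting the scaler inside the pipeline rescales the features per training fold, which avoids leaking test-fold statistics into training; whether the original study normalised globally or per fold is not stated.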
III. RESULTS

A. SCENARIO 1: SLEEP STAGE CLASSIFICATION FROM EAR-EEG AGAINST THE MANUALLY SCORED HYPNOGRAM BASED ON EAR-EEG

We first evaluated the agreement between the hypnogram scored based on the ear-EEG channels and the predicted label based on the extracted features from the in-ear EEG channel EL1. Tables 1, 2, and 3 show the confusion matrices obtained from the classification results based on the SEF and MSFE features for the 2-class scenarios Wake vs Sleep and W-N1 vs N2-N3, and the 4-class (W, N1, N2, N3) scenario. For the 2-class classification scenarios, the overall classification accuracies were respectively 95.2% and 86.0%, with an Almost Perfect (κ = 0.83) to Substantial (κ = 0.68) Agreement of Cohen's Kappa coefficients [24], as shown in Tables 1 and 2.

TABLE 1. Confusion matrix for the 2-class Wake vs Sleep classification.

TABLE 2. Confusion matrix for the 2-class Wake-N1 vs N2-N3 classification.

The accuracy for the more difficult 4-class sleep stage classification was 78.5%, with the Kappa coefficient κ = 0.64, which indicates a Substantial Agreement, as shown in Table 3.

TABLE 3. Confusion matrix for the 4-class sleep stage classification.

B. SCENARIO 2: SLEEP STAGE CLASSIFICATION FROM EAR-EEG AGAINST THE MANUALLY SCORED HYPNOGRAM BASED ON SCALP-EEG

We next evaluated the agreement between the hypnogram scored based on the scalp-EEG channels and the predicted label based on the extracted features from the in-ear EEG channel EL1. Tables 4, 5, and 6 show the corresponding confusion matrices, obtained from the classification based on the SEF and MSFE features for the 2-class Wake vs Sleep and W-N1 vs N2-N3 scenarios, and the 4-class (W, N1, N2, N3) scenario. For the 2-class classification problems, the achieved classification accuracies were more than 90%, with Substantial Agreements (κ = 0.75 and κ = 0.80) [24]. The achieved accuracy for the 4-class sleep stage classification was 76.8%, with the Kappa coefficient κ = 0.65, which indicates a Substantial Agreement.

TABLE 4. Confusion matrix for the 2-class Wake vs Sleep classification.

TABLE 5. Confusion matrix for the 2-class Wake-N1 vs N2-N3 classification.

TABLE 6. Confusion matrix for the 4-class sleep stage classification.

Figure 9 depicts the hypnograms scored manually based on the scalp-EEG channels (blue) and the automatically predicted label based on the in-ear EL1 channel (red) for the 2-class Wake vs Sleep (top) and W-N1 vs N2-N3 (middle) scenarios, and the 4-class (bottom) scenario, for Subject 2. Only the first epoch was removed because of the AC onset noise, therefore the hypnogram was scored based on 89 epochs of 30 s, which corresponds to approximately 44 minutes of recording. For the 4-class problem, even though some epochs were predicted incorrectly, for example epoch 62 (hypnogram: N3, prediction: N2), the majority of epochs were correctly classified. This confirms that the features extracted from the ear-EEG data were effectively used for the automatic sleep stage classification, and provided a substantial match to the scalp-EEG patterns scored manually by an expert. We can therefore conclude that the recorded ear-EEG carried a sufficient amount of information to evaluate human sleep robustly.

FIGURE 9. Hypnogram for Subject 2 scored based on scalp-EEG channels (blue) and the automatically predicted label based on the in-ear EEG channel EL1 (red) for the 2-class Wake vs Sleep (top) and W-N1 vs N2-N3 (middle) scenarios, and the 4-class (bottom) classification scenario.

C. AGREEMENT BETWEEN THE PREDICTED AND MANUAL SLEEP SCORES

Upon establishing the feasibility of predicting scalp-EEG sleep stages from ear-EEG features, we next benchmarked these findings against our recent results based on the manual scoring of both scalp- and ear-EEG [11]. To this end, Table 7 compares the manual and automatic labels for the following scenarios:

- Scenario 1: The manually scored hypnogram based on ear-EEG channels vs the predicted label based on the in-ear EL1 channel (Tables 1, 2, and 3).
- Scenario 2: The manually scored hypnogram based on scalp-EEG channels vs the predicted label based on the in-ear EL1 channel (Tables 4, 5, and 6).
- The hypnogram manually scored based on scalp-EEG channels vs that scored based on ear-EEG channels.

TABLE 7. Comparison between the manual scores and the automatically predicted scores (Accuracy [%] / Kappa).

In all cases, the proposed automatically scored labels were a significant match to the corresponding labels scored manually in [11].
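The agreement labels quoted above (Substantial, Almost Perfect) follow the benchmarks of Landis and Koch [24]; a small helper, written here purely for illustration, makes the mapping explicit.

```python
def landis_koch(kappa):
    """Map a Cohen's Kappa value to the Landis-Koch agreement label [24]."""
    if kappa < 0:
        return "Poor"
    for upper, label in [(0.20, "Slight"), (0.40, "Fair"),
                         (0.60, "Moderate"), (0.80, "Substantial"),
                         (1.00, "Almost Perfect")]:
        if kappa <= upper:
            return label
    return "Almost Perfect"

# e.g. landis_koch(0.83) -> 'Almost Perfect', landis_koch(0.65) -> 'Substantial'
```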
IV. DISCUSSION AND CONCLUSIONS

We have proposed an automatic sleep stage monitoring system using ear-EEG, which is capable of minimising both the patient's inconvenience and the involvement of a medical specialist. For rigour, the experiments were conducted in two scenarios: Scenario 1 examined automatic scores for ear-EEG against manual scores for ear-EEG, while Scenario 2 examined automatic scores for ear-EEG against manual scores for scalp-EEG. This has both confirmed the feasibility of ear-EEG for sleep monitoring and provided a proof-of-concept for the feasibility of ear-EEG in the automatic scoring of sleep patterns out-of-clinic and in the community.

In the 4-class sleep stage classification for Scenario 1 and Scenario 2, the accuracies were respectively 78.5% and 76.8%, with Substantial Agreement Kappa coefficients, as shown in Tables 3 and 6. These results confirmed that the recorded ear-EEG carried a sufficient amount of information to evaluate human sleep robustly; however, discriminating the N1 stage remains challenging, as also reported in scalp-EEG-based automatic sleep stage classification [7], [18]. This was reflected in the sensitivities for the N1 stage classification, which were respectively 34.7% and 50.0% for Scenario 1 and Scenario 2, and were much smaller than the sensitivities for the other sleep conditions. In the manual scoring guidelines, N1 sleep is defined as more than 50% of the epoch consisting of relatively low-voltage mixed activity (2-7 Hz) and less than 50% of the epoch containing alpha activity, while the wake-sleep boundary is observed as a loss of the alpha rhythm [20]. N2 sleep is defined by the appearance of sleep spindles and/or K-complexes, while less than 20% of the epoch may contain high-voltage (> 75 μV, < 2 Hz) activity. We could observe the absence of the alpha rhythm in N1 (blue) from both scalp- and ear-EEG, as illustrated in Figure 6. Nevertheless, the high-voltage activities in the < 2 Hz band for the EL1 (ear-EEG) channel were not notable compared with those of the C3-A2 (scalp-EEG) channel; the spectrum of N2 (black) for the EL1 channel between 1-5 Hz significantly overlapped with that of N1. This can lead to ineffective discrimination of the N1 stage in the proposed automatic sleep stage scoring algorithm, and is a persistent problem in any automatic sleep stage classification.

Overall, the sleep stage prediction from ear-EEG for the 2-class sleep stage classification (Wake vs Sleep and W-N1 vs N2-N3) for Scenario 1 gave high respective overall accuracies of 95.2% and 86.0%, with the corresponding Kappa coefficients of 0.83 and 0.68, which indicate Almost Perfect and Substantial Agreements. For the 4-stage classification, the accuracy was 78.5% with κ = 0.64, indicating a Substantial Agreement. For Scenario 2, the corresponding accuracies for the 2-stage classification were 91.8% and 90.4%, with the Kappa coefficients κ = 0.75 and κ = 0.80 (Substantial Agreements), while for the 4-stage classification the accuracy was 76.8% with κ = 0.65, a Substantial Agreement. We have therefore confirmed both empirically and over comprehensive statistical testing that the in-ear EEG carries a sufficient amount of information to faithfully represent human sleep patterns, thus opening up a new avenue in fully wearable sleep research in the community. For this pilot study the number of subjects was four, and our future studies will consider a larger cohort of subjects, overnight sleep, and other aspects of fully wearable scenarios.

REFERENCES

[1] P. Maquet, "The role of sleep in learning and memory," Science, vol. 294, no. 5544, pp. 1048-1052, Nov. 2001.
[2] P. R. T. Ko, J. A. Kientz, E. K. Choe, M. Kay, C. A. Landis, and N. F. Watson, "Consumer sleep technologies: A review of the landscape," J. Clin. Sleep Med., vol. 11, no. 12, pp. 1455-1461, 2015.
[3] I. Rosenzweig, S. C. R. Williams, and M. J. Morrell, "The impact of sleep and hypoxia on the brain: Potential mechanisms for the effects of obstructive sleep apnea," Current Opinion Pulmonary Med., vol. 20, no. 6, pp. 565-571, 2014.
[4] C. Iber, The AASM Manual for the Scoring of Sleep and Associated Events: Rules, Terminology and Technical Specifications. Darien, IL, USA: American Academy of Sleep Medicine, 2007.
[5] M. Kryger, T. Roth, and W. Dement, Principles and Practice of Sleep Medicine, 6th ed. Amsterdam, The Netherlands: Elsevier, 2017.
[6] B. Koley and D. Dey, "An ensemble system for automatic sleep stage classification using single channel EEG signal," Comput. Biol. Med., vol. 42, no. 12, pp. 1186-1195, 2012.
[7] T. L. T. da Silveira, A. J. Kozakevicius, and C. R. Rodrigues, "Single-channel EEG sleep stage classification based on a streamlined set of statistical features in wavelet domain," Med. Biol. Eng. Comput., vol. 55, no. 2, pp. 1-10, Feb. 2016.
[8] PhysioNet. The Sleep-EDF Database [Expanded], accessed on Dec. 30, 2016. [Online]. Available: https://physionet.org/pn4/sleep-edfx/
[9] T. Q. Le, C. Cheng, A. Sangasoongsong, W. Wongdhamma, and S. T. S. Bukkapatnam, "Wireless wearable multisensory suite and real-time prediction of obstructive sleep apnea episodes," IEEE J. Transl. Eng. Health Med., vol. 1, Jul. 2013, Art. no. 2700109.
[10] D. Looney et al., "The in-the-ear recording concept: User-centered and wearable brain monitoring," IEEE Pulse, vol. 3, no. 6, pp. 32-42, Nov. 2012.
[11] D. Looney, V. Goverdovsky, I. Rosenzweig, M. J. Morrell, and D. P. Mandic, "A wearable in-ear encephalography sensor for monitoring sleep: Preliminary observations from nap studies," Ann. Amer. Thoracic Soc., vol. 13, no. 12, pp. 2229-2233, 2016.
[12] P. Kidmose, D. Looney, M. Ungstrup, M. L. Rank, and D. P. Mandic, "A study of evoked potentials from ear-EEG," IEEE Trans. Biomed. Eng., vol. 60, no. 10, pp. 2824-2830, Sep. 2013.
[13] Y.-T. Wang et al., "Developing an online steady-state visual evoked potential-based brain-computer interface system using EarEEG," in Proc. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. (EMBS), Aug. 2015, pp. 2271-2274.
[14] V. Goverdovsky, D. Looney, P. Kidmose, C. Papavassiliou, and D. P. Mandic, "Co-located multimodal sensing: A next generation solution for wearable health," IEEE Sensors J., vol. 15, no. 1, pp. 138-145, Jan. 2015.
[15] V. Goverdovsky et al., "Hearables: Multimodal physiological in-ear sensing," Nature Sci. Rep., to be published.
[16] V. Goverdovsky, D. Looney, P. Kidmose, and D. P. Mandic, "In-ear EEG from viscoelastic generic earpieces: Robust and unobtrusive 24/7 monitoring," IEEE Sensors J., vol. 16, no. 1, pp. 271-277, Jan. 2016.
[17] A. Stochholm, K. Mikkelsen, and P. Kidmose, "Automatic sleep stage classification using ear-EEG," in Proc. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. (EMBS), Sep. 2016, pp. 4751-4754.
[18] T. Nakamura, T. Adjei, Y. Alqurashi, D. Looney, M. J. Morrell, and D. P. Mandic, "Complexity science for sleep stage classification from EEG," in Proc. IEEE Int. Joint Conf. Neural Netw. (IJCNN), May 2017, pp. 4387-4394.
[19] M. Costa, A. L. Goldberger, and C.-K. Peng, "Multiscale entropy analysis of complex physiologic time series," Phys. Rev. Lett., vol. 89, p. 068102, Jul. 2002.
[20] M. H. Silber et al., "The visual scoring of sleep in adults," J. Clin. Sleep Med., vol. 3, no. 2, pp. 121-131, 2007.
[21] M. U. Ahmed and D. P. Mandic, "Multivariate multiscale entropy: A tool for complexity analysis of multichannel data," Phys. Rev. E, vol. 84, no. 6, pp. 1-10, Jun. 2011.
[22] M. U. Ahmed, T. Chanwimalueang, S. Thayyil, and D. P. Mandic, "A multivariate multiscale fuzzy entropy algorithm with application to uterine EMG complexity analysis," Entropy, vol. 19, no. 2, pp. 1-18, 2017.
[23] C.-C. Chang and C.-J. Lin, "LIBSVM: A library for support vector machines," ACM Trans. Intell. Syst. Technol., vol. 2, no. 3, pp. 27:1-27:27, 2011.
[24] J. R. Landis and G. G. Koch, "The measurement of observer agreement for categorical data," Biometrics, vol. 33, no. 1, pp. 159-174, 1977.

TAKASHI NAKAMURA received the B.Eng. degree from the Department of Electrical Engineering and Bioscience, Waseda University, Japan, in 2014, and the M.Sc. degree in communications and signal processing from Imperial College London, U.K., in 2015, where he is currently pursuing the Ph.D. degree in signal processing. His post-graduate studies are funded by the Nakajima Foundation. His research interests are in the areas of brain-computer interface and signal processing for cognitive neuroscience.

VALENTIN GOVERDOVSKY received the M.Eng. degree in electronic engineering and the Ph.D. degree in communications from Imperial College London, U.K. He is currently a Rosetrees Fellow with the Department of Electrical and Electronic Engineering, Imperial College London. His research interests are primarily in the areas of biomedical instrumentation, systems design, and wireless communications. He received the Eric Laithwaithe Award for excellence in research. He is currently involved in the development of wearable biosensing platforms, such as the novel in-the-ear sensing concept for 24/7 monitoring of brain and body functions.

MARY J. MORRELL received the Ph.D. degree from London University, followed by the Wellcome Trust Fellowship with the University of Wisconsin-Madison. Her research is focused on the interaction between respiratory control and sleep mechanisms that lead to sleep related breathing disorders, specifically sleep apnoea. This approach has led to the translation of physiological studies into large randomised treatment trials, for example on the impact of sleep apnoea in older people. She founded a U.K. respiratory-sleep network facilitating multi-centre trials, and her recent studies have focused on the neurological impact of sleep apnoea, particularly accelerated neural decline in older people, and the impact of sleepiness on daytime function. Her research is supported by funding from the Wellcome Trust, British Heart Foundation and National Institute of Health Research. She has served on the American Thoracic Society Board of Directors and the Physiological Society Executive Board. She is President of the British Sleep Society.

DANILO P. MANDIC is currently a Professor of Signal Processing with Imperial College London, London, U.K., where he has been involved in the area of nonlinear adaptive and biomedical signal processing. He has been a Guest Professor with the Katholieke Universiteit Leuven, Leuven, Belgium, and a Frontier Researcher with RIKEN, Tokyo. His publication record includes two research monographs, Recurrent Neural Networks for Prediction (West Sussex, U.K.: Wiley, 2001) and Complex-Valued Nonlinear Adaptive Filters: Noncircularity, Widely Linear, and Neural Models (West Sussex, U.K.: Wiley, 2009), an edited book Signal Processing for Information Fusion (New York: Springer, 2008), and over 400 publications in signal and image processing. He is a member of the London Mathematical Society. He has produced award winning papers and products from his collaboration with industry, and has received the President's Award for excellence in postgraduate supervision at Imperial College.

POINT-OF-CARE TECHNOLOGIES Received 31 December 2016; revised 3 March 2017 and 3 April 2017; accepted 24 April 2017, Date of publication 26 June 2017; date of current version 13 July 2017. Digital Object Identifier 10.1109/JTEHM.2017.2702558 1 1 2,3,4 TAKASHI NAKAMURA , VALENTIN GOVERDOVSKY , (Member, IEEE), MARY J. MORRELL , AND DANILO P. MANDIC ,(Fellow, IEEE) Department of Electrical and Electronic Engineering, Imperial College London, London SW7 2AZ, U.K. Sleep and Ventilation Unit, National Heart and Lung Institute, Imperial College London, London SW3 6NP, U.K. NIHR Respiratory Disease Biomedical Research Unit, Royal Brompton and Hareeld NHS Foundation Trust, Imperial College London, London SW3 6NP, U.K. Imperial College London, London SW3 6NP, U.K. CORRESPONDING AUTHOR: D. P. MANDIC ([email protected]) This work was supported by the EPSRC grant Engineering under Grant EP/K025643/1, in part by the Rosetrees Trust, in part by the EPSRC Pathways to Impact under Grant PS8038, and MURI/EPSRC grant EP/P008461. ABSTRACT The monitoring of sleep patterns without patient's inconvenience or involvement of a medical specialist is a clinical question of signicant importance. To this end, we propose an automatic sleep stage monitoring system based on an affordable, unobtrusive, discreet, and long-term wearable in-ear sensor for recording the electroencephalogram (ear-EEG). The selected features for sleep pattern classication from a single ear-EEG channel include the spectral edge frequency and multi-scale fuzzy entropy, a structural complexity feature. In this preliminary study, the manually scored hypnograms from simultaneous scalp- EEG and ear-EEG recordings of four subjects are used as labels for two analysis scenarios: 1) classication of ear-EEG hypnogram labels from ear-EEG recordings; and 2) prediction of scalp-EEG hypnogram labels from ear-EEG recordings. We consider both 2-class and 4-class sleep scoring, with the achieved accuracies ranging from 78.5% to 95.2% for ear-EEG labels predicted from ear-EEG, and 76.8% to 91.8% for scalp-EEG labels predicted from ear-EEG. The corresponding Kappa coefcients range from 0.64 to 0.83 for Scenario 1, and indicate substantial to almost perfect agreement, while for Scenario 2 the range of 0.650.80 indicates substantial agreement, thus further supporting the feasibility of in-ear sensing for sleep monitoring in the community. INDEX TERMS Wearable EEG, in-ear sensing, ear-EEG, automatic sleep classication, structural complexity analysis. I. INTRODUCTION With the advance in wearable physiological monitoring Sleep is an essential process in the internal control of the devices, it has become possible to monitor some of sleep- state of body and mind and its quality is strongly linked related physiological responses out of the clinic. The next step with a number of cognitive and health issues, such as stress, towards sleep care in the community is therefore to monitor depression and memory [1]. For clinical diagnostic purposes, sleep-related physiological signals in an affordable way, at polysomnography (PSG) has been extensively utilised which home, and over long periods of time, together with automatic is based on a multitude of physiological responses, including detection of sleep patterns (sleep scoring) without the need the electroencephalogram (EEG), electrooculogram (EOG), for a trained medical expert. Indeed, consumer technologies and electromyogram (EMG). 
While the PSG is able to faith- are becoming increasingly popular for the self-monitoring of fully reect human sleep patterns, both the recording and sleep [2], and include both mobile apps and wearable devices. scoring process are expensive as this involves an overnight While such technologies aim to assess `sleep quality' and are stay in a specialised clinic and time-consuming manual scor- affordable, these are typically not direct measures of neural ing by a medically trained person. In addition, hospitals are activity, and instead measure indirect surrogates of sleep such unfamiliar environments for patients, which compromises the as limb movement [3]. reliability of the observed sleep patterns. In other words, the Another fast developing aspect of sleep research is auto- conventional recording process is not user-centred and not matic sleep scoring, with the aim to replace the time- ideal for long-term sleep monitoring. consuming manual scoring of sleep patterns from full PSG This work is licensed under a Creative Commons Attribution 3.0 License. For more information, see http://creativecommons.org/licenses/by/3.0/ VOLUME 5, 2017 2800108 Nakamura et al.: Automatic Sleep Monitoring Using Ear-EEG FIGURE 2. Comparison with previous studies. A: Evaluation of the agreement between the manually scored hypnograms based on FIGURE 1. The in-ear sensor used in our study. Left: Wearable in-ear scalp-EEG channels and ear-EEG channels [11]. B: Our analysis framework sensor with two flexible electrodes. Right: Placement of the generic for establishing the feasibility of ear-EEG in sleep research. earpiece. User-centred nature: Users are able to insert the sensor with computer software. The manual sleep scoring is per- by themselves as when wearing earplugs. The device is formed through a visual interpretation of 30-second PSG comfortable to wear and does not disturb sleep. recordings, and based on well-established protocols such Robustness: The sensor expands after the insertion and as the manual of the American Academy of Sleep maintains a stable interface with the ear canal, and is thus Medicine (AASM) [4]. The diagnostically relevant sleep not likely to dislodge during sleep. stages include: wake (W), non-rapid eye movement (NREM) In order to examine the feasibility of sleep monitoring Sleep Stage 1 (N1), NREM Stage 2 (N2), NREM with the ear-EEG sensor, we set out to establish a compre- Stage 3 (N3), and REM [5]. Automatic sleep stage scoring hensive cross-validation between standard clinical scalp-EEG employs machine learning and pattern recognition algo- recording and our own ear-EEG recordings. Previously, auto- rithms, and it is now possible to achieve up to 90 % accuracy matic sleep stage classication using custom-made hard-shell of classication between the W, N1, N2, N3 and REM ear-EEG sensor and from a single subject was undertaken sleep stages from a single channel EEG [6], [7]. Publicly based on manually labeled sleep stages from conventional available resources to evaluate automatic sleep stage clas- PSG [17]. Classication performance was evaluated for both sication algorithms include the Sleep EDF database [8]. scalp-EEG and ear-EEG patterns, and showed that ear-EEG A single channel EEG montage is therefore a prerequisite is similarly informative to scalp-EEG to predict sleep stages, for a medical-grade wearable system and for benchmarking which were labelled from a manually scored hypnogram from new developments against existing solutions. conventional PSG recording. 
With a different perspective, More recent approaches for sleep monitoring aim to our recent study [11] performed simultaneous sleep mon- move beyond actigraphy and develop advanced multimodal itoring from four subjects, using both scalp- and ear-EEG sensors and wearable devices. In this direction, Le et al. intro- data channels, and reported Substantial Agreement between duced a wireless wearable sensor to monitor vectorcardiog- the corresponding hypnograms, manually and blindly scored raphy (VCG), ECG, and respiration for detecting obstructive by a trained clinician, as shown in Figure 2A. The in-ear sleep apnea in real time [9]. Using a wearable in-ear EEG EEG data were recorded from our novel `one-ts-all' generic sensor (ear-EEG) [10], Looney et al. monitored fatigue, while viscoelastic earpieces [16]. In this manuscript, we make a fur- our recent work evaluated sleep stages during nap episodes ther step towards fully automatic wearable sleep monitoring from a viscoelastic in-ear EEG sensor [11], see Figure 1. in the community, by analysing the agreement between the The in-ear sensing technology has been proven to provide automatically predicted sleep stages by ear-EEG and scalp- sufciently good EEG signal for brain-computer interface EEG patterns. To this end, the sleep-related EEG-patterns applications with steady-state responses [10], [12], [13], and were obtained from both scalp and inside the ear simultane- has more recently been used for monitoring other physio- ously, using a stationary data acquisition unit. For rigour, the logical responses, such as cardiac activity [14], [15]. Such a ear-EEG automatic scoring procedures were validated for the wearable system is designed to be comfortable over long following scenarios: periods of time and with the electrodes are rmly placed inside the ear canal, which ensures good quality of record- 1) Agreement between automatically predicted sleep ings. Even though amplitude of ear-EEG is smaller than stages based on ear-EEG patterns and the manually that of scalp-EEG, the signal-to-noise ratio (SNR) was scored hypnogram from ear-EEG (Scenario 1). found to be similar [10], [12], [16]. In a sleep monitor- 2) Agreement between automatically predicted sleep ing scenario, in-ear wearable sensors have the following stages based on ear-EEG patterns and the manually advantages: scored hypnogram from scalp-EEG (Scenario 2). Affordability and unobtrusiveness: Our latest sensor Figure 2B illustrates the proposed analysis framework. The (generic earpiece) is made from viscoelastic mate- results are benchmarked against the results in [11] where both rial [16], such as those used in standard earplugs, see the scalp- and ear-EEG hypnograms were scored manually. Figure 1. In this way, we establish a proof-of-concept for the feasibility 2800108 VOLUME 5, 2017 Nakamura et al.: Automatic Sleep Monitoring Using Ear-EEG FIGURE 3. Recording setup in our study. Left: The electrodes were placed on scalp and ear. Right: The subject reclined in a comfortable chair. of ear-EEG in automatic scoring of sleep patterns out-of- clinic and in the community. FIGURE 4. Flowchart for the sleep stage prediction framework adopted in II. METHODS this study (Scenario 2). A. 
DATA ACQUISITION to the same range as those of scalp-EEG, and both scalp-EEG The EEG recordings were conducted at Imperial College and ear-EEG were manually scored by a clinical expert, London between May 2014 and March 2015 under the who had six years of experience in EEG-based sleep stage ethics approval, ICREC 12_1_1, Joint Research Ofce scoring. The processed EEG data was blinded and the epoch- at Imperial College London. Four healthy male subjects based manual sleep scoring was performed according to the (age: 25 - 36 years) without history of sleep disorders par- American Academy of Sleep Medicine (AASM) criteria [4]. ticipated in the recordings. All participants were instructed The epoch size was set to 30 s, therefore 90 epochs were to reduce their sleep to less than 5 hours the night before, scored in each recording. and agreed to refrain from consuming caffeine and napping on the recording day. The four scalp-EEG channels C3, C4, C. PRE-PROCESSING FOR AUTOMATIC A1 and A2 (according to international 10-20 system), were STAGE CLASSIFICATION recorded using standard gold-cup electrodes. The forehead For automatic sleep stage classication, we considered the was used for the ground, and the standard congurations for recorded EEG from the left ear channel 1 (EL1), for a fair sleep scoring were utilised (i.e. C3-A2 and C4-A1). The ear- comparison with automatic scoring algorithms for a single EEG was recorded from both the left and right ear, and the EEG channel montage in the literature. First, the data was ear-EEG sensor was made based on a viscoelastic earplug downsampled to 200 Hz, and the epochs with the ampli- with two cloth electrodes [16], as shown in Figure 1. Earwax tudes of more than400V were removed from subsequent was removed from the ear canals, and the sensor expanded analyses. The data were then bandpass ltered with the pass- after the insertion, to conform to the shape of the ear canal. band of [0:5 30] Hz. The pre-processing resulted in a The reference gold-cup standard electrodes were attached loss of approximately 20 % of the data, and eventually 293 behind the ipsilateral mastoid and the ground electrodes were (hypnogram based on scalp-EEG, W:67, N1:46, N2:140, placed on the ipsilateral helix, as illustrated in Figure 3. Both N3:40, and hypnogram based on ear-EEG, W:52, N1:49, scalp-EEG and ear-EEG were recorded simultaneously using N2:162, N3:30) epochs were used for the classication. the g.tec g.USBamp amplier with 24-bit resolution, at a sampling frequency fs D 1200 Hz. D. FEATURE EXTRACTION The participants seated in a comfortable chair in a dark and After the pre-processing, two types of features were extracted quiet room. The duration of recording was 45 minutes, while from each epoch of the EEG. These were the same as to increase the number of transitions between the wake and those in the latest automatic sleep stage classication results sleep stage, a loudspeaker played 10 s abrupt noise at random based on the Sleep EDF database [18], and included: intervals. 1) a frequency domain feature - spectral edge fre- quency (SEF), and 2) a structural complexity feature - multi- B. SLEEP STAGE SCORING scale entropy (MSE) [19]. Both the recorded scalp- and ear-EEG were analysed based 1) FREQUENCY DOMAIN FEATURES on the framework illustrated in Figure 4. 
For scalp-EEG, a 4th-order Butterworth bandpass lter with passband 1 The r % of spectral edge frequency (SEFr) is calculated as - 20 Hz was applied to two bipolar EEG congurations the rth percentile of the total power obtained from power (i.e. C3-A2 and C4-A1). Due to low-frequency interfer- spectral density, as illustrated in Figure 5. Figure 6 illustrates ence in ear-EEG channels, the low cutoff frequency was power spectral density for the scalp C3-A2 (top) and in-ear set to 1 Hz for the Subject 1 and 3, and 2 Hz for the Sub- EL1 (bottom) channels for different sleep stages, labeled ject 2 and 4. Next, the ear-EEG amplitudes were normalised manually based on scalp-EEG patterns. Observe that the VOLUME 5, 2017 2800108 Nakamura et al.: Automatic Sleep Monitoring Using Ear-EEG FIGURE 5. Spectral edge frequency (SEF) features for the 8 - 15 Hz band. The symbol SEF50 denotes the lowest frequency below which 50 % of the total power in a considered frequency band is contained (cf. SEF95 for 95 % of total power). FIGURE 6. Power spectral density for the scalp C3-A2 montage (top) and for the in-ear EEG channel EL1 (bottom). spectral patterns [20] in scalp-EEG and ear-EEG are similar: the alpha (8 - 13 Hz) band power in the Wake condition, a slightly smaller alpha power in N1 sleep, and the stronger power of the delta (< 2 Hz) band towards deep sleep. We next FIGURE 7. The frequency domain SEF50, SEF95, and SEFd features of the obtained the SEF50 and SEF95 features for the follow- ,  , , , and band power from the in-ear EEG channel EL1. The ing frequency bands:  D 0:5 - 30 Hz,  D features were averaged over all epochs and subjects. 0:5 - 16 Hz, D 8 - 11 Hz, D 8 - 15 Hz, and D 16 - 30 Hz. In addition, the SEFd feature was calculated as of noise. The following parameters for MSFE were chosen: the difference between SEF95 and SEF50, that is, SEFd D maximum scale  D 15, m D 2, n D 2, r D 0:15(standard SEF95 - SEF50, so that 15 SEF features were obtained from deviation of each epoch). Overall, 15 features were extracted the in-ear EL1 channel. Figure 7 shows the boxplots of SEF from the EL1 channel and were normalised, as illustrated features in different frequency bands for the EL1 channel and in Figure 8. Observe the good separation of entropy values for each sleep stage, averaged over all epochs and subjects. between sleep stages in each scale; in particular, structural Observe the consistent spread of SEF features. complexity for the Wake condition decreased with the scale factor. For the N3 sleep stage, a large proportion of power 2) STRUCTURAL COMPLEXITY FEATURES is contained in the delta band (relative to total power), and The multi-scale entropy (MSE) method calculates struc- this more deterministic behaviour caused the FE values to be tural complexity of time-series over multiple temporal smaller than in other sleep stages. scales [19], [21], and can be measured with e.g. sam- E. CLASSIFICATION ple entropy, approximate entropy, and permutation entropy. We used multi-scale fuzzy entropy (MSFE) [22] with a small Classication was performed based on 30 SEF and MSFE embedding dimension, owing to its robustness in the presence features, which were normalised to the range [0 1]. The 2800108 VOLUME 5, 2017 Nakamura et al.: Automatic Sleep Monitoring Using Ear-EEG TABLE 1. Confusion matrix for the 2-class Wake vs Sleep classification. TABLE 2. Confusion matrix for the 2-class Wake-N1 vs N2-N3 classification. FIGURE 8. Structural complexity features for different sleep stages. 
Normalised multi-scale fuzzy entropy (MSFE) from the in-ear EEG channel EL1 is evaluated the over scales 1 (standard FE) to 15, and shows excellent separation between sleep stages. The error bars indicate the standard error. TABLE 3. Confusion matrix for 4-class sleep stage classification. one-against-one multi-class support vector machine (SVM) with a radial basis function (RBF) kernel was employed as a classier [23]. F. PERFORMANCE EVALUATION Feature extraction was performed using Matlab 2016b, and the classication was conducted in Python 2.7.12 Anaconda 4.2.0 (x86_64) operated on an iMac with 2.8GHz Intel Core i5, 16GB of RAM. A 5-fold cross validation (CV) was TABLE 4. Confusion matrix for the 2-class Wake vs Sleep classification. performed to evaluate the automatic sleep stage classication. The performance metrics used were class-specic sensitiv- ity (SE) and precision (PR), as well as overall accuracy (AC) and Kappa coefcient (), dened as follows: TP TP TP iD1 SE D ; PR D ; AC D ; TPC FN TPC FP N P Agreement of Cohen's Kappa coefcients [24], as shown in f g (TP C FP )(TP C FN ) AC i i i i e iD1 Table 1 and 2. D ;  D : N 1 The accuracy for the more difcult 4-class sleep stage The parameter TP (true positive) represents the number of classication was 78.5 % with the Kappa coefcient positive (target) epochs correctly predicted, TN (true nega-  D 0:64, which indicates a Substantial Agreement, as shown tive) is the number of negative (non-target) epochs correctly in Table 3. predicted, FP (false positive) is the number of negative epochs incorrectly predicted as positive class, FN (false negative) B. SCENARIO2: SLEEP STAGE CLASSIFICATION FROM is the number of positive epochs incorrectly predicted as EAR-EEG AGAINST THE MANUALLY SCORED negative class, C is the number of classes, and N the total HYPNOGRAM BASED ON SCALP-EEG number of epochs. We next evaluated the agreement between the hypnogram scored based on scalp-EEG channels and the predicted label III. RESULTS based on extracted features from the in-ear EEG channel EL1. A. SCENARIO1: SLEEP STAGE CLASSIFICATION FROM Tables 4, 5, and 6 show the corresponding confusion matri- EAR-EEG AGAINST THE MANUALLY SCORED ces, obtained from the classication based on the SEF and HYPNOGRAM BASED ON EAR-EEG MSFE features for the 2-class Wake vs Sleep and W-N1 vs We rst evaluated the agreement between the hypnogram N2-N3 scenarios, and the 4-class (W, N1, N2, N3) scenario. scored based on ear-EEG channels and the predicted label For the 2-class classication problems, the achieved classi- based on extracted features from the in-ear EEG channel cation accuracies were more than 90 %, with the Substantial EL1. Tables 1, 2, and 3 show the confusion matrices obtained Agreements ( D 0:75 and  D 0:80) [24]. from the classication results based on the SEF and MSFE The achieved accuracy for the 4-class sleep stage classi- features for the 2-class scenarios Wake vs Sleep and W-N1 vs cation was 76.8 %, with the Kappa coefcient  D 0:65, N2-N3, and the 4-class (W, N1, N2, N3) scenario. For the which indicates a Substantial Agreement. 2-class classication scenarios, the overall classication Figure 9 depicts the hypnograms scored manually based on accuracies were respectively 95.2 % and 86.0 %, with an scalp-EEG channels (blue) and the automatically predicted Almost Perfect ( D 0:83) to Substantial ( D 0:68) label based on the in-ear EL1 channel (red) for the 2-class VOLUME 5, 2017 2800108 Nakamura et al.: Automatic Sleep Monitoring Using Ear-EEG TABLE 5. 
Confusion matrix for the 2-class Wake-N1 vs N2-N3 TABLE 7. Comparison between the manual scores and automatic classification. predicted scores (Accuracy [%] / Kappa). TABLE 6. Confusion matrix for the 4-class sleep stage classification. C. AGREEMENT BETWEEN THE PREDICTED AND MANUAL SLEEP SCORES Upon establishing the feasibility of predicting scalp-EEG sleep stages from ear-EEG features, we next benchmarked these ndings against our recent results based on manual scoring of both scalp- and ear-EEG [11]. To this end, Table 7 compares the manual and automatic labels for the following scenarios: Scenario 1: The manually scored hypnogram based on ear-EEG channels vs the predicted label based on the in- ear EL1 channel (Table 1, 2, and 3). Scenario 2: The manually scored hypnogram based on scalp-EEG channels vs the predicted label based on the in-ear EL1 channel (Table 4, 5, and 6). The hypnogram manually scored based on scalp-EEG channels vs that scored based on ear-EEG channels. In all cases, the proposed automatically scored labels were a signicant match to the corresponding labels scored manu- ally in [11]. IV. DISCUSSION AND CONCLUSIONS We have proposed an automatic sleep stage monitoring sys- tem using ear-EEG, which is capable of minimising both patient's inconvenience and the involvement of a medi- cal specialist. For rigour, the experiments have been con- ducted in two scenarios: Scenario 1 examined automatic scores for ear-EEG against manual scores for ear-EEG, while Scenario 2 examined automatic scores for ear-EEG against FIGURE 9. Hypnogram for Subject 2 scored based on scalp-EEG channels manual scores for scalp-EEG. This has both conrmed the (blue) and the automatically predicted label based on in-ear EEG channel EL1 (red) for the 2-class Wake vs Sleep (top) and W-N1 vs N2-N3 (middle) feasibility of ear-EEG for sleep monitoring, and has provided scenarios, and the 4-class (bottom) classification scenario. a proof-of-concept for the feasibility of ear-EEG in automatic scoring of sleep patterns out-of-clinic and in the community. Wake vs Sleep (top) and W-N1 vs N2-N3 (middle) scenarios, In 4-class sleep stage classication for Scenario 1 and and the 4-class (bottom) scenario, for the Subject 2. Only Scenario 2, the accuracies were respectively 78.5 % and the rst epoch was removed because of the AC onset noise, 76.8 % with Substantial Agreements of Kappa coefcients, therefore the hypnogram was scored based on 89 epochs, as shown in Table 3 and 6. These results conrmed that the which corresponds to 44 minutes of 30 s recording. For the recorded ear-EEG carried a sufcient amount of information 4-class problems, even though some epochs were predicted to evaluate human sleep robustly; however, discriminating the incorrectly, for example epoch 62 (hypnogram:N3, predic- N1 stage remains challenging, as also reported in scalp-EEG tion:N2), the majority of epochs were correctly classied. based automatic sleep stage classication [7], [18]. This was This conrms that the features extracted from the ear-EEG reected in the sensitivities for the N1 stage classication, data were effectively used for the automatic sleep stage clas- which were respectively 34.7 % and 50.0 % for Scenario 1 sication, and provided a substantial match to the scalp-EEG and Scenario 2, and were much smaller than the sensitivities patterns scored manually by an expert. We can therefore for the other sleep conditions. 
C. AGREEMENT BETWEEN THE PREDICTED AND MANUAL SLEEP SCORES
Upon establishing the feasibility of predicting scalp-EEG sleep stages from ear-EEG features, we next benchmarked these findings against our recent results based on manual scoring of both scalp- and ear-EEG [11]. To this end, Table 7 compares the manual and automatic labels for the following scenarios:
- Scenario 1: the manually scored hypnogram based on the ear-EEG channels vs the labels predicted from the in-ear EL1 channel (Tables 1, 2, and 3);
- Scenario 2: the manually scored hypnogram based on the scalp-EEG channels vs the labels predicted from the in-ear EL1 channel (Tables 4, 5, and 6);
- the hypnogram manually scored based on the scalp-EEG channels vs that scored based on the ear-EEG channels.
In all cases, the proposed automatically scored labels were a significant match to the corresponding labels scored manually in [11].

TABLE 7. Comparison between the manual scores and the automatically predicted scores (Accuracy [%] / Kappa).

IV. DISCUSSION AND CONCLUSIONS
We have proposed an automatic sleep stage monitoring system based on ear-EEG, which is capable of minimising both the patient's inconvenience and the involvement of a medical specialist. For rigour, the experiments were conducted in two scenarios: Scenario 1 examined automatic scores for ear-EEG against manual scores for ear-EEG, while Scenario 2 examined automatic scores for ear-EEG against manual scores for scalp-EEG. This has both confirmed the feasibility of ear-EEG for sleep monitoring and provided a proof-of-concept for ear-EEG in the automatic scoring of sleep patterns out-of-clinic and in the community.

In the 4-class sleep stage classification, the accuracies for Scenario 1 and Scenario 2 were respectively 78.5 % and 76.8 %, with Substantial Agreements of the Kappa coefficients, as shown in Tables 3 and 6. These results confirm that the recorded ear-EEG carried a sufficient amount of information to evaluate human sleep robustly; however, discriminating the N1 stage remains challenging, as is also reported for scalp-EEG based automatic sleep stage classification [7], [18]. This was reflected in the sensitivities for the N1 stage, which were respectively 34.7 % and 50.0 % for Scenario 1 and Scenario 2, and were much lower than the sensitivities for the other sleep stages.

In the manual scoring guidelines, N1 sleep is defined as more than 50 % of the epoch consisting of relatively low-voltage mixed-frequency activity (2-7 Hz) and less than 50 % of the epoch containing alpha activity, while the wake-sleep boundary is observed as a loss of the alpha rhythm [20]. N2 sleep is defined by the appearance of sleep spindles and/or K-complexes, while less than 20 % of the epoch may contain high-voltage (> 75 µV, < 2 Hz) activity. We could observe the absence of the alpha rhythm in N1 (blue) from both the scalp- and ear-EEG, as illustrated in Figure 6. Nevertheless, the high-voltage activity in the < 2 Hz band for the EL1 (ear-EEG) channel was not as notable as that of the C3-A2 (scalp-EEG) channel; the spectrum of N2 (black) for the EL1 channel between 1-5 Hz significantly overlapped with that of N1. This can lead to an ineffective discrimination of the N1 stage by the proposed automatic sleep stage scoring algorithm, and is a persistent problem in any automatic sleep stage classification.
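Since these scoring criteria are essentially spectral, the overlap described above can be inspected by computing the relative power of an epoch in the quoted bands. The sketch below is our illustration and not part of the original study; the Welch parameters, the 0.5-32 Hz normalisation range, the 8-13 Hz alpha band, and the 200 Hz sampling rate are assumptions.

```python
import numpy as np
from scipy.signal import welch

def relative_band_powers(x, fs):
    """Relative power of one epoch in the bands quoted in the scoring rules:
    high-voltage slow (< 2 Hz), mixed (2-7 Hz), and alpha (8-13 Hz)."""
    f, pxx = welch(x, fs=fs, nperseg=int(4 * fs))    # 4 s windows -> 0.25 Hz bins
    ref = (f >= 0.5) & (f <= 32.0)                   # broadband reference range
    total = np.trapz(pxx[ref], f[ref])
    bands = {'<2 Hz': (0.5, 2.0), '2-7 Hz': (2.0, 7.0), 'alpha': (8.0, 13.0)}
    return {name: np.trapz(pxx[(f >= lo) & (f < hi)],
                           f[(f >= lo) & (f < hi)]) / total
            for name, (lo, hi) in bands.items()}

fs = 200.0                                  # assumed sampling rate
epoch = np.random.randn(int(30 * fs))       # stand-in for one 30 s epoch
# A drop in the alpha share across consecutive epochs is the spectral
# footprint of the wake-to-N1 transition described above; the N1/N2
# overlap in the 1-5 Hz range appears as similar '<2 Hz' and '2-7 Hz'
# shares for the two stages.
print(relative_band_powers(epoch, fs))
```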
Overall, the sleep stage prediction from ear-EEG for the 2-class classification (Wake vs Sleep and W-N1 vs N2-N3) in Scenario 1 gave high respective overall accuracies of 95.2 % and 86.0 %, with the corresponding Kappa coefficients of 0.83 and 0.68, indicating Almost Perfect and Substantial Agreements. For the 4-stage classification, the accuracy was 78.5 % with κ = 0.64, indicating a Substantial Agreement. For Scenario 2, the corresponding accuracies for the 2-stage classification were 91.8 % and 90.4 %, with the Kappa coefficients κ = 0.75 and κ = 0.80 (Substantial Agreements), while for the 4-stage classification the accuracy was 76.8 % with κ = 0.65, a Substantial Agreement. We have therefore confirmed, both empirically and through comprehensive statistical testing, that the in-ear EEG carries a sufficient amount of information to faithfully represent human sleep patterns, thus opening up a new avenue in fully wearable sleep research in the community. For this pilot study the number of subjects was four; our future studies will consider a larger cohort of subjects, overnight sleep, and other aspects of fully wearable scenarios.

REFERENCES
[1] P. Maquet, "The role of sleep in learning and memory," Science, vol. 294, no. 5544, pp. 1048-1052, Nov. 2001.
[2] P. R. T. Ko, J. A. Kientz, E. K. Choe, M. Kay, C. A. Landis, and N. F. Watson, "Consumer sleep technologies: A review of the landscape," J. Clin. Sleep Med., vol. 11, no. 12, pp. 1455-1461, 2015.
[3] I. Rosenzweig, S. C. R. Williams, and M. J. Morrell, "The impact of sleep and hypoxia on the brain: Potential mechanisms for the effects of obstructive sleep apnea," Current Opinion Pulmonary Med., vol. 20, no. 6, pp. 565-571, 2014.
[4] C. Iber, The AASM Manual for the Scoring of Sleep and Associated Events: Rules, Terminology and Technical Specifications. Darien, IL, USA: American Academy of Sleep Medicine, 2007.
[5] M. Kryger, T. Roth, and W. Dement, Principles and Practice of Sleep Medicine, 6th ed. Amsterdam, The Netherlands: Elsevier, 2017.
[6] B. Koley and D. Dey, "An ensemble system for automatic sleep stage classification using single channel EEG signal," Comput. Biol. Med., vol. 42, no. 12, pp. 1186-1195, 2012.
[7] T. L. T. da Silveira, A. J. Kozakevicius, and C. R. Rodrigues, "Single-channel EEG sleep stage classification based on a streamlined set of statistical features in wavelet domain," Med. Biol. Eng. Comput., vol. 55, no. 2, pp. 1-10, Feb. 2016.
[8] PhysioNet. The Sleep-EDF Database [Expanded], accessed on Dec. 30, 2016. [Online]. Available: https://physionet.org/pn4/sleep-edfx/
[9] T. Q. Le, C. Cheng, A. Sangasoongsong, W. Wongdhamma, and S. T. S. Bukkapatnam, "Wireless wearable multisensory suite and real-time prediction of obstructive sleep apnea episodes," IEEE J. Transl. Eng. Health Med., vol. 1, Jul. 2013, Art. no. 2700109. [Online]. Available: http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=6563132
[10] D. Looney et al., "The in-the-ear recording concept: User-centered and wearable brain monitoring," IEEE Pulse, vol. 3, no. 6, pp. 32-42, Nov. 2012.
[11] D. Looney, V. Goverdovsky, I. Rosenzweig, M. J. Morrell, and D. P. Mandic, "A wearable in-ear encephalography sensor for monitoring sleep: Preliminary observations from nap studies," Ann. Amer. Thoracic Soc., vol. 13, no. 12, pp. 2229-2233, 2016.
[12] P. Kidmose, D. Looney, M. Ungstrup, M. L. Rank, and D. P. Mandic, "A study of evoked potentials from ear-EEG," IEEE Trans. Biomed. Eng., vol. 60, no. 10, pp. 2824-2830, Sep. 2013.
[13] Y.-T. Wang et al., "Developing an online steady-state visual evoked potential-based brain-computer interface system using EarEEG," in Proc. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. (EMBS), Aug. 2015, pp. 2271-2274.
[14] V. Goverdovsky, D. Looney, P. Kidmose, C. Papavassiliou, and D. P. Mandic, "Co-located multimodal sensing: A next generation solution for wearable health," IEEE Sensors J., vol. 15, no. 1, pp. 138-145, Jan. 2015.
[15] V. Goverdovsky et al., "Hearables: Multimodal physiological in-ear sensing," Nature Sci. Rep., to be published.
[16] V. Goverdovsky, D. Looney, P. Kidmose, and D. P. Mandic, "In-ear EEG from viscoelastic generic earpieces: Robust and unobtrusive 24/7 monitoring," IEEE Sensors J., vol. 16, no. 1, pp. 271-277, Jan. 2016.
[17] A. Stochholm, K. Mikkelsen, and P. Kidmose, "Automatic sleep stage classification using ear-EEG," in Proc. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. (EMBS), Sep. 2016, pp. 4751-4754.
[18] T. Nakamura, T. Adjei, Y. Alqurashi, D. Looney, M. J. Morrell, and D. P. Mandic, "Complexity science for sleep stage classification from EEG," in Proc. IEEE Int. Joint Conf. Neural Netw. (IJCNN), May 2017, pp. 4387-4394. [Online]. Available: http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7966411
[19] M. Costa, A. L. Goldberger, and C.-K. Peng, "Multiscale entropy analysis of complex physiologic time series," Phys. Rev. Lett., vol. 89, p. 068102, Jul. 2002.
[20] M. H. Silber et al., "The visual scoring of sleep in adults," J. Clin. Sleep Med., vol. 3, no. 2, pp. 121-131, 2007.
[21] M. U. Ahmed and D. P. Mandic, "Multivariate multiscale entropy: A tool for complexity analysis of multichannel data," Phys. Rev. E, vol. 84, no. 6, pp. 1-10, Jun. 2011.
[22] M. U. Ahmed, T. Chanwimalueang, S. Thayyil, and D. P. Mandic, "A multivariate multiscale fuzzy entropy algorithm with application to uterine EMG complexity analysis," Entropy, vol. 19, no. 2, pp. 1-18, 2017.
[23] C.-C. Chang and C.-J. Lin, "LIBSVM: A library for support vector machines," ACM Trans. Intell. Syst. Technol., vol. 2, no. 3, pp. 27:1-27:27, 2011.
[24] J. R. Landis and G. G. Koch, "The measurement of observer agreement for categorical data," Biometrics, vol. 33, no. 1, pp. 159-174, 1977.
TAKASHI NAKAMURA received the B.Eng. degree from the Department of Electrical Engineering and Bioscience, Waseda University, Japan, in 2014, and the M.Sc. degree in communications and signal processing from Imperial College London, U.K., in 2015, where he is currently pursuing the Ph.D. degree in signal processing. His post-graduate studies are funded by the Nakajima Foundation. His research interests are in the areas of brain-computer interfaces and signal processing for cognitive neuroscience.

VALENTIN GOVERDOVSKY received the M.Eng. degree in electronic engineering and the Ph.D. degree in communications from Imperial College London, U.K. He is currently a Rosetrees Fellow with the Department of Electrical and Electronic Engineering, Imperial College London. His research interests are primarily in the areas of biomedical instrumentation, systems design, and wireless communications. He received the Eric Laithwaite Award for excellence in research. He is currently involved in the development of wearable biosensing platforms, such as the novel in-the-ear sensing concept for 24/7 monitoring of brain and body functions.

MARY J. MORRELL received the Ph.D. degree from London University, followed by a Wellcome Trust Fellowship with the University of Wisconsin-Madison. Her research is focused on the interaction between respiratory control and the sleep mechanisms that lead to sleep-related breathing disorders, specifically sleep apnoea. This approach has led to the translation of physiological studies into large randomised treatment trials, for example on the impact of sleep apnoea in older people. She founded a U.K. respiratory-sleep network facilitating multi-centre trials, and her recent studies have focused on the neurological impact of sleep apnoea, particularly accelerated neural decline in older people, and the impact of sleepiness on daytime function. Her research is supported by funding from the Wellcome Trust, the British Heart Foundation, and the National Institute of Health Research. She has served on the American Thoracic Society Board of Directors and the Physiological Society Executive Board. She is President of the British Sleep Society.

DANILO P. MANDIC is currently a Professor of Signal Processing with Imperial College London, London, U.K., where he has been involved in the area of nonlinear adaptive and biomedical signal processing. He has been a Guest Professor with the Katholieke Universiteit Leuven, Leuven, Belgium, and a Frontier Researcher with RIKEN, Tokyo. His publication record includes two research monographs, Recurrent Neural Networks for Prediction (West Sussex, U.K.: Wiley, 2001) and Complex-Valued Nonlinear Adaptive Filters: Noncircularity, Widely Linear, and Neural Models (West Sussex, U.K.: Wiley, 2009), an edited book, Signal Processing for Information Fusion (New York: Springer, 2008), and over 400 publications in signal and image processing. He is a member of the London Mathematical Society. He has produced award-winning papers and products from his collaboration with industry, and has received the President's Award for excellence in postgraduate supervision at Imperial College.
