- Review
- Open access
Mental state and emotion detection from musically stimulated EEG
Brain Informatics, volume 5, Article number: 14 (2018)
Abstract
This literature survey attempts to clarify the different approaches used to study the impact of musical stimuli on the human brain using the EEG modality. It examines the field through the various aspects of such studies: the experimental protocol, the EEG machine, the number of channels investigated, the features extracted, the categories of emotions, the brain areas, the brainwaves, the statistical tests, and the machine learning algorithms used for classification and validation of the developed model. This article comments on the particular weaknesses and strengths of these different approaches. Ultimately, the review proposes a suitable method for studying the impact of musical stimuli on the brain and discusses the implications of such studies.
1 Introduction
The human brain is a spectacularly complex organ, and we have very little understanding of how it processes emotion. Discovering how the brain processes emotion will impact not only artificial emotional intelligence and human–computer interfaces but also has many clinical implications for diagnosing affective diseases and neurological disorders. Several multidisciplinary and collaborative research efforts across the globe use different modalities of brain research to investigate how the brain processes emotion. There are many ways to evoke emotion; music is an excellent thrill-inducer and elicitor of emotion [1]. While listening to distinctive music, subjects show physiological responses such as shivering, a racing heart, goosebumps, laughter, a lump in the throat, sensual arousal and sweating [2]. Listening to music involves various mental processes, for example, perception, multimodal integration, attention, memory recall, syntactic processing and the processing of meaningful information, action, emotion and social cognition [3]. Thus, music is a potent stimulus for evoking emotions and investigating the processing functions of the human brain. Modalities of brain research are categorised by how they measure the neuronal activity of the brain: direct imaging and indirect imaging. Direct imaging measures the electrical or magnetic signals generated by neuronal activity directly, e.g. EEG (electroencephalography) and MEG (magnetoencephalography), whereas indirect imaging, e.g. fMRI (functional magnetic resonance imaging) and PET (positron emission tomography), measures neuronal activity through the oxygen consumption of neurons. Indirect imaging has excellent spatial resolution, around 4 mm for PET and 2 mm for fMRI, but low temporal resolution, 1–2 min for PET and 4–5 s for fMRI [4], as well as other disadvantages:
-
The subject has to take a radionuclide tracer (PET)
-
Claustrophobic
-
Noisy
-
Mostly used for clinical research purposes
-
Highly expensive: machine cost ($800,000–$2,000,000) and scanning cost ($800–$1500) [4]
Direct imaging offers reasonably good spatial resolution (around 10 mm for EEG) and excellent temporal resolution (around 1 ms), and it has several advantages for stimulus-based experiments [4]:
-
Non-ionising
-
Simple to operate, portable
-
Silent
-
No claustrophobia
-
Comparatively inexpensive: machine cost ($1000–$10,000) and scan cost ($100) [4]
-
Simple to design stimulation experiments
-
Easy to design/build HCI (human–computer interface) research and applications
This article reviews the literature of the clinical and engineering domains to quantify the impact of musical stimuli. The aspects evaluated across the literature are:
-
Type of population and sample
-
EEG recording environment and recording machine
-
Stimulus type, duration of the stimulus, emotion model
-
Feature extraction transform, features extracted
-
Brainwave investigated
-
Statistical test and machine learning algorithm used
-
Assessment of model
The paper is written as a summary of reviews, an analysis of the surveyed aspects and a synthesis of the reviewed aspects, and is organised as follows: Sect. 2 covers the structural information of the brain, Sect. 3 describes literature selection and analysis, Sect. 4 presents a summary of the review, and Sects. 5, 6 and 7 present the discussion, suggested approach and conclusion, respectively.
2 Functional structure of the brain
Before understanding EEG signals, we need to understand the structure of the brain. The human brain is divided into three major parts: the cerebrum, cerebellum and brain stem. The cerebrum is subdivided into the frontal lobe, parietal lobe, temporal lobe, occipital lobe, insular and limbic lobe (refer Fig. 1). Each part is associated with specific mental functions; for example, the parietal lobe perceives pain and taste sensations and is involved in problem-solving activities. The temporal lobe is concerned with hearing and memory. The occipital lobe mainly contains the regions used for vision-related tasks. The frontal lobe is principally associated with emotions, problem solving, speech and movement [6, 7]. An adult human brain contains, on average, 100 billion neurons [8]. Neurons process and transmit information through electrical and chemical signals, generating neuronal oscillations called brainwaves or EEG signals. Table 1 shows the electrical and functional characteristics of these waves. The frequency range of EEG signals is 0.5–100 Hz, whereas the amplitude range is 10–100 μV [9]. The delta wave has the highest amplitude and lowest frequency, whereas gamma waves have the highest frequency and lowest amplitude. Across the reviews, the band boundaries vary by ± 0.5–1 Hz.
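For later analysis code, the conventional band boundaries described above can be written down directly. This is only a sketch; as noted, the exact limits shift by ± 0.5–1 Hz between studies.

```python
# Conventional EEG frequency bands in Hz, following the description above.
# Boundaries are illustrative; individual studies shift them by +/- 0.5-1 Hz.
EEG_BANDS = {
    "delta": (0.5, 4.0),    # highest amplitude, lowest frequency
    "theta": (4.0, 8.0),
    "alpha": (8.0, 13.0),
    "beta": (13.0, 30.0),
    "gamma": (30.0, 100.0), # lowest amplitude, highest frequency
}
```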
3 Literature selection and analysis
The keywords used to select the articles were EEG, music and emotions, searched on repositories such as PubMed, IEEE Xplore and ScienceDirect via the Mendeley research tool. A library of twenty-two quality papers from the years 2001 to 2018 was created using Mendeley [10]. The most commonly followed research methodology is shown in Fig. 2. The articles were analysed with respect to the general steps observed in an experiment, such as participants, stimulus, EEG machine, channels, montages, preprocessing, feature extraction, statistical testing and machine learning.
4 Summary of reviews
This section summarises the findings and outcomes of all the selected articles.
For musical stimuli known to vary in affective valence (positive versus negative) and intensity (intense versus calm), the author found a pattern of asymmetrical frontal EEG activity: greater relative left frontal EEG activity for joyful and happy musical excerpts and greater relative right frontal EEG activity for fearful and sad musical excerpts. The author additionally found that EEG asymmetry distinguished the intensity of emotion [11]. For distinctive stimulus excerpts of jazz, rock-pop, traditional music and environmental sound, the author found that positive emotional attributions were accompanied by an increase in left temporal activation, and negative ones by a more bilateral pattern with predominance of the right fronto-temporal cortex; the author also found that female participants showed greater valence-related differences than males [12]. In another study, pleasant and unpleasant feelings were evoked by consonant and dissonant musical excerpts, and the author found that pleasant music was associated with an increase in frontal mid-line \(\theta\) power [13]. An EEG-based emotion classification algorithm was explored using four types of musical excerpts, with hemispheric asymmetry \(\alpha\) power indices of brain activation extracted as features [14]. The author examined the connection between EEG signals and music-induced emotional responses using four emotional music excerpts (Oscar film tracks) and found that the low-frequency bands \(\delta\), \(\theta\) and \(\alpha\) are correlates of evoked emotions [15]. Another study investigated the spatial and spectral patterns of emotions evoked by musical excerpts and found that these patterns are most significant to emotion and reproducible across subjects [16].
In one investigation, the author identified 30 subject-independent features that were most associated with emotion processing across subjects and explored the feasibility of using fewer electrodes to characterise EEG dynamics during music listening [17]. For rock-pop, electronic, jazz and broadband-noise stimuli, the author examined the relation between subjects' EEG responses and self-rated liked or disliked music; activity in the \(\beta\) and \(\gamma\) bands may indicate a relationship between music preference and emotional arousal phenomena [18]. In another article, the author found that the beta and theta frequency bands perform better than the other frequency bands [19]. The author investigated liked and disliked music under three familiarity conditions: regardless of familiarity, familiar music and unfamiliar music, and found that familiar music gives the highest classification accuracy compared to the other two conditions [20]. Among the musician and non-musician subjects who participated, musicians showed significantly lower frontal \(\gamma\) activity during music listening and music imagining than in the resting state [21]. The author classified joyful versus neutral, happy versus melancholic, and familiar versus unfamiliar musical excerpts, investigated the brain networks related to happy, melancholic and neutral music, and correlated inter/intra-regional connectivity patterns with the self-reported assessment of the musical excerpts [22]. Among thirty participants in three different age groups (15–25 years, 26–35 years and 36–50 years), the brain signals of the 26–35 years group gave the best emotion recognition accuracy with respect to the self-reported emotions [23]. The author proposed a novel user identification framework using EEG signals recorded while listening to music [24].
The authors quantified emotional arousal corresponding to different musical clips [25]. The author suggests that unfamiliar songs are most appropriate for the construction of an emotion recognition system [26]. The author explores the impact of the Indian instrumental music Raag Bhairavi using frontal theta asymmetry [27]. The author proposes a frontal theta asymmetry model for estimating the valence of evoked emotions and also suggests electrode reduction for neuromarketing applications [28, 29]. The author proposes frontal theta as a biomarker of depression [30].
4.1 Participants and their handedness
4.1.1 Handedness
The human brain has two nearly identical anatomical hemispheres, but each hemisphere has functional specialisations. Handedness, by a simplistic definition, is the dominant hand used in day-to-day activity [31]. Each hemisphere has specific prominent functions, such as language abilities in the left hemisphere of a right-handed person [32]. The brain is cross-wired: in the majority of people, the left hemisphere controls the right side of the body and vice versa. In research involving the brain and stimuli, we first need to know about handedness, as it is an indicator of the dominant hemisphere. Because the dominant hemisphere has specialised functions, observations, findings and interpretations differ according to dominance, and many functions change hemisphere according to dominance in a particular person: left-handed people process language in the right hemisphere and right-handed people in the left [33]. The brain patterns of right- and left-handed persons are different [34]. This section analyses the nature of the subjects considered in the reviews.
4.1.2 Participants
The number of subjects ranged from 5 to 79, with a median of 20; most researchers considered unbalanced numbers of males and females (see Table 2). When few subjects participate in a study, the outcome of the hypothesis is always questionable. In 78% of the research, authors reported right-handed subjects without any handedness inventory; only 22% used the Edinburgh handedness inventory [37, 38]. In most of the investigations (95%), researchers recruited normal participants, but few verified normalcy. Most researchers selected participants who were students or working staff from the same background. Author [23] investigated the impact of musical stimuli on different age groups. Author [21] studied the effect of musical stimuli by recruiting musician and non-musician subjects. Authors [30, 35] investigated the impact of musical stimuli on mentally depressed subjects.
4.2 Musical stimulus type, duration and emotions
Excerpts of pleasant and unpleasant music from different genres were selected to evoke different types of emotions; the stimuli chosen were classical, rock, hip-hop, jazz, metal, African drums, Oscar tracks and environmental sounds (refer Table 3). Authors [13, 18] used noise along with pleasant stimuli to elicit negative emotions. Authors [20] used familiar, unfamiliar and regardless-of-familiarity music. Stimulus durations ranged from 2 s to 10 min, with a median of 30 s, and different excerpts were interleaved with a time gap. Self-reports of the evoked emotions were collected from the subjects who participated in the studies. The emotions investigated were positive and negative emotions such as fear, happiness, sadness, anger, tiredness, like, dislike, anxiety and depression. Some authors used a feel tracer to measure the arousal effect of the stimulus.
4.3 EEG machine and channel investigated
Twelve different EEG machines were used in the reviewed articles (refer Tables 4 and 5). All the EEG machines were surveyed on the features compliance certification, PC interface, filter, number of channels, sampling frequency, compatible toolbox and electrode type. Almost all machines were FDA (Food and Drug Administration) or CE (Conformité Européenne) certified and provided the required sampling frequency, and most of the equipment was compatible with MS-Excel/MATLAB/LabVIEW toolboxes. The 10–20 system of electrode placement was mostly used in the reviews (refer Fig. 3). The number of electrodes used in the reviewed articles ranged from 1 to 63, with a median of 21.5. A total of 75% of the articles reported referential montages taking A1 and A2 as reference electrodes. Authors [11, 35] used the vertex electrode Cz as reference. Author [20] used the frontal mid-line electrode Fz as well as the A1 and A2 reference electrodes. Author [18] used a Laplacian montage.
4.4 Preprocessing for artefact and feature extraction
Most of the articles reported manual, offline artefact removal; a few articles used filters and the Laplacian montage method [19], and a notch filter was also used to remove mains interference. Regarding feature extraction transforms, most of the articles used the FFT, either as DFT or STFT (56.25%); 12% of researchers used the wavelet transform, and 6.25% applied DFA and time-domain analysis. Author [18] applied time–frequency transforms (Zhao–Atlas–Marks, STFT, Hilbert–Huang spectrum) (refer Table 6).
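To make the FFT-based feature extraction concrete, here is a minimal sketch of computing the power in one frequency band from a single channel using Welch's PSD estimate (SciPy assumed; the synthetic signal and the band choice are illustrative, not taken from any reviewed study):

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, band):
    """Power of `signal` within the frequency `band` (Hz) via Welch's PSD."""
    freqs, psd = welch(signal, fs=fs, nperseg=min(len(signal), 2 * fs))
    lo, hi = band
    mask = (freqs >= lo) & (freqs <= hi)
    return np.trapz(psd[mask], freqs[mask])  # integrate PSD over the band

# Synthetic 10 s recording at 256 Hz dominated by a 10 Hz "alpha" oscillation
fs = 256
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 10 * t)

alpha_power = band_power(x, fs, (8, 13))
beta_power = band_power(x, fs, (13, 30))  # should be far smaller here
```

In a real pipeline, one such value per band per channel forms the feature vector fed to the statistical tests or classifiers discussed later.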
4.5 Brainwave and location investigated and statistical test
31.25% of researchers investigated all brainwaves (\(\delta\), \(\theta\), \(\alpha\), \(\beta\) and \(\gamma\)) together; the remainder selected a few bands or studied a single band independently. Across the reviews, \(\alpha\) was investigated in 75% of studies, \(\gamma\) in 37.5%, \(\beta\) in 56.25%, \(\delta\) in 37.5% and \(\theta\) in 87.5%. Almost all researchers investigated the frontal region only. Author [20] investigated all regions of the brain and correlated \(\gamma\) waves with memory processing. Twenty-five per cent of the reviews conducted statistical tests, namely ANOVA, t test and Z test; most of the authors used a significance level of 0.05. Seventy-five per cent of the reviews directly applied machine learning algorithms (refer Table 7).
4.6 Machine learning algorithms
In all, 72% of the reviews employed supervised learning algorithms, namely k-NN, SVM, MLP, LDA, QDA and HMM, with the subjects' self-reports used as ground-truth labels. Twenty-eight per cent of the reviews used statistical tests, namely the t test, ANOVA and Z test. Forty per cent of the reviews used SVM along with other classifiers for classifying emotions. Classification accuracy was the most used metric. No study reported unsupervised machine learning algorithms (see Table 8).
5 Discussion and recommendations
5.1 Participants
The vast majority of the engineering-domain studies considered very few subjects, approximately 11 on average, especially articles on IEEE Xplore. To prove a hypothesis, a minimum of 30 subjects is required in a study [39]. When scholars use subjects of both sexes, the numbers of each should be equal. Most of the authors recruited normal subjects without confirming their normalcy, and homogeneous populations were considered. This is a multidisciplinary field in which human factors and experimental psychology are involved [40], yet most of the studies conducted by engineering fraternities are without clinical guidance. Handedness was often not considered, and where it was, the articles are evasive about the handedness evaluation method.
5.2 Musical stimulus and dimension of emotion
The reviews used various genres of musical stimuli, and different emotional excerpts were employed to evoke different emotions among the subjects. Most of the reviews employed familiar musical stimuli, although author [26] empirically showed that unfamiliar excerpts are most suitable for constructing an emotion identification system. Various emotions were considered for classification; a higher number of emotions makes emotion recognition difficult, and some emotions may overlap [41]. In most surveys, a 1-dimensional emotion model was used. A feel tracer was used to investigate arousal, but the feel tracer instrument is not reliable [42]. No reviews report automatic prediction of both valence and arousal (the 2-dimensional model) for the same excerpt of musical stimuli. High-frequency brainwaves such as beta and gamma were used to correlate with the arousal of emotion [43], while low-frequency waves such as alpha or theta were used for the valence of emotion [11, 13]. Arousal and valence for the same excerpt of stimulus were plotted on the same graph, as shown in Fig. 4.
5.2.1 Emotional processing in depression
Emotions are broadly classified as positive and negative for the sake of understanding their processing in the brain. Broadly, positive emotions are processed in the left anterior hemisphere (the prefrontal cortex) and negative emotions in the right [44]. In depression, the hypothesis is that hypo-arousal of the left anterior hemisphere or hyper-arousal of the right anterior hemisphere leads to depressive symptoms [45]. EEG patterns support this: findings show that in depression the left anterior hemisphere is relatively inactive compared to the right [27], indicating that patients with depression process stimuli differently from people without depression.
5.3 EEG machine and montages
While selecting an EEG machine, the following features should be considered:
-
Minimum 256 Hz sampling frequency
-
CE, FDA approvals
-
Compatible with MS-EXCEL, LabVIEW
-
Quick technical support
-
DC operated
Montages are logical and efficient arrangements of electrode pairs, called channels, that display EEG activity over the whole scalp, permit assessment of activity on the two sides of the brain (lateralisation) and aid in localising recorded activity to a specific brain region [46]:
-
Bipolar Montage
In a bipolar montage, each waveform represents the difference between two adjacent electrodes. This class of montage is designated as longitudinal bipolar (LB) or transverse bipolar (TB). Longitudinal bipolar montages measure the activity between two electrodes placed longitudinally on the scalp, e.g. Fp1-F7, Fp1-F3, F3-C3, C3-P3, Fp2-F4 and Fp2-F8. Transverse bipolar montages measure the activity between two electrodes placed crosswise, e.g. Fp1-Fp2, F7-F3, F3-Fz and Fp2-F8.
-
Referential Montage
In this montage, the difference between the signal from an individual electrode and that of a designated reference electrode is measured. The reference electrode has no standard position; however, its position differs from those of the recording electrodes. Mid-line positions are often used to avoid amplifying signals in one hemisphere relative to the other. Another commonly used reference is the ear (left ear for the left hemisphere, right ear for the right hemisphere), e.g. with the left and right ears as reference electrodes: Fp1-A1, Fp2-A2, F7-A1, F8-A2; or with the vertex: Fp1-Cz, Fp2-Cz, F7-Cz, F8-Cz, and so forth.
-
Laplacian Montage
In this montage, the difference between an electrode and a weighted average of the surrounding electrodes is used to represent a channel.
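The three montage types above reduce to simple channel arithmetic. A minimal sketch with synthetic data (the channel names and the neighbour set chosen for the Laplacian are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy recording: one row per electrode, one column per sample.
names = ["Fp1", "F7", "F3", "A1", "Cz"]
idx = {name: i for i, name in enumerate(names)}
eeg = rng.standard_normal((len(names), 1000))

# Referential montage: every channel minus a designated reference (A1 here).
referential = eeg - eeg[idx["A1"]]

# Bipolar montage: difference between an adjacent electrode pair, e.g. Fp1-F7.
fp1_f7 = eeg[idx["Fp1"]] - eeg[idx["F7"]]

# Laplacian montage: an electrode minus the (here unweighted) average of its
# surrounding electrodes.
laplacian_f3 = eeg[idx["F3"]] - eeg[[idx["Fp1"], idx["F7"], idx["Cz"]]].mean(axis=0)
```

Note that re-referencing changes what "activity at an electrode" means, which is why the reviewed studies must report their montage for results to be comparable.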
5.4 Preprocessing for artefacts
EEG recording is highly susceptible to various forms and sources of noise. The morphology and electrical characteristics of artefacts can cause significant difficulties in the analysis and interpretation of EEG data. Table 9 shows various types of artefacts; the morphology of external artefacts is easily distinguishable from actual EEG [47]. Recording for a long duration with many electrodes under an artefact-free recording protocol is the best strategy for preventing and minimising all types of artefacts [27]:
-
Educate the participants about eye and physical movement
-
Do not permit electronic gadgets in the EEG recording lab
-
Record in an acoustically isolated room, with dimmed light, at ambient temperature
-
Reject all EEG segments containing muscle, ocular or movement artefacts
-
Have participants wash their hair to remove oil from the scalp
-
Use proper montage
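Beyond protocol discipline, residual mains interference can be attenuated in software. A minimal sketch of a 50 Hz notch filter applied to a synthetic contaminated signal (SciPy assumed; use 60 Hz where applicable):

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 256                  # sampling rate (Hz)
mains = 50.0              # power-line frequency (50 or 60 Hz by region)
b, a = iirnotch(w0=mains, Q=30, fs=fs)

t = np.arange(0, 2, 1 / fs)
clean = np.sin(2 * np.pi * 10 * t)                   # 10 Hz "alpha" component
noisy = clean + 0.5 * np.sin(2 * np.pi * mains * t)  # add mains interference
filtered = filtfilt(b, a, noisy)                     # zero-phase filtering
```

`filtfilt` runs the filter forwards and backwards, so the narrow notch removes the mains component without phase-shifting the underlying rhythms.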
5.5 Feature extraction
There are three methods of analysing EEG signals: time domain, frequency domain and time–frequency domain [9]:
-
Time domain
All real-world signals are presented in the time domain. This method is suitable for visualising real-world signals and for voltage and energy estimation of a signal; it is mostly used for epilepsy analysis.
-
Frequency domain
Here, EEG signals are analysed in terms of frequency rather than time. It gives the PSDs (power spectral densities) of the various rhythms of EEG signals and is suitable for studying the various brainwaves over a stipulated time period.
-
Time–frequency
Time–frequency analysis comprises techniques that study a signal in both time and frequency simultaneously; it is appropriate for event-related emotion recognition.
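A short sketch of the time–frequency idea: an STFT resolves when each rhythm occurs, which a plain frequency-domain analysis cannot. The synthetic signal switching from 10 Hz to 20 Hz halfway through is illustrative only:

```python
import numpy as np
from scipy.signal import stft

fs = 256
t = np.arange(0, 4, 1 / fs)
# Rhythm changes halfway through: 10 Hz for the first 2 s, then 20 Hz.
x = np.where(t < 2, np.sin(2 * np.pi * 10 * t), np.sin(2 * np.pi * 20 * t))

freqs, times, Z = stft(x, fs=fs, nperseg=256)  # ~1 Hz, ~0.5 s resolution
power = np.abs(Z) ** 2

# Dominant frequency in the early vs late part of the recording
early = freqs[np.argmax(power[:, times < 2.0].mean(axis=1))]
late = freqs[np.argmax(power[:, times >= 2.0].mean(axis=1))]
```

The choice of window length trades frequency resolution against time resolution, which is the central design decision in event-related analyses.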
5.6 Brainwave and location
In the existing literature, the frontal region is the most explored, as it is associated with emotion processing, and a few researchers investigated a single wave correlating with evoked emotion. However, as mentioned in Sect. 1, musical stimuli create many psychological changes in subjects; examining only the frontal region and a few waves is not enough to create a model of evoked emotion. Various lobes and many waves, and their interrelationships, need to be explored.
5.7 Machine learning algorithm
SVM is a supervised machine learning algorithm that can be used for classification or regression problems, and it is a suitable algorithm for classifying evoked emotions. SVM uses the kernel trick to transform the data and then finds an optimal boundary between the possible outputs. Nonlinear kernels can capture much more complex relationships between data points without requiring difficult manual transformations [48]. Its features are:
-
High prediction speed
-
Fast training speed
-
High accuracy
-
Results are interpretable
-
Performs well with small numbers of observations
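A minimal sketch of the SVM workflow described above, using scikit-learn with synthetic features standing in for band-power vectors (the data, class separation and labels are illustrative, not drawn from any reviewed study):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Hypothetical feature matrix: one row per EEG trial, columns standing in
# for band-power features; labels 0/1 for two emotion classes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 5)),
               rng.normal(2.0, 1.0, (50, 5))])
y = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel="rbf", C=1.0)             # nonlinear RBF kernel trick
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy
```

Cross-validation, rather than a single train/test split, is what makes the reported accuracy meaningful for the small sample sizes common in these studies.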
5.8 Model performance metrics
Healthcare and engineering models have different obligations, so the assessment metrics should differ, and a model should not be judged using a single metric; classification accuracy is the metric mostly considered in the reviews for assessing models. Model performance is represented in the form of the confusion matrix shown in Eq. (1).
Assume the inadequate model shown by Eq. (3) has true-positive and false-positive counts of zero; the model's classification accuracy by Eq. (2) is still 83.33%. Accuracy alone is therefore not a reliable metric for model assessment. Apart from classification accuracy, there are many metrics for model assessment, such as sensitivity, specificity, precision, NPV (negative predictive value), FDR (false discovery rate), F1 score, FPR (false-positive rate), FNR (false-negative rate), MCC (Matthews correlation coefficient), informedness (Youden index), markedness and ROC (receiver operating characteristic). Metrics such as recall, specificity, precision and accuracy are biased [49]. ROC graphs depict the trade-off between the hit rates and false-alarm rates of classifiers and have been used for a long time [50, 51]. Because ROC decouples model performance from class skew and error costs, it is the best measure of classification performance, and ROC graphs are useful for building models and evaluating their performance [52]. When the positive class is small, F1 and ROC give a precise assessment of models [53, 54].
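The accuracy pitfall above can be checked in a few lines. With illustrative counts for a degenerate model that never predicts the positive class (TP = FP = 0), accuracy still comes out at 83.33% while sensitivity, precision and F1 are all zero:

```python
# Illustrative confusion-matrix counts for a degenerate model that never
# predicts the positive class: TP = FP = 0, yet accuracy looks high.
TP, FP, TN, FN = 0, 0, 10, 2

accuracy = (TP + TN) / (TP + TN + FP + FN)           # 10/12 = 83.33%
sensitivity = TP / (TP + FN) if (TP + FN) else 0.0   # recall: 0.0
precision = TP / (TP + FP) if (TP + FP) else 0.0     # 0.0
f1 = (2 * precision * sensitivity / (precision + sensitivity)
      if (precision + sensitivity) else 0.0)         # 0.0
```

This is why class-skew-robust metrics such as F1, MCC and ROC are preferable when the positive class is rare.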
6 Suggested approach
As this research is interdisciplinary, collaboration involving the medical fraternity (psychiatry or neurology background) and a music expert will satisfy Brouwer's [40] recommendations I, II and VI. Recording EEG in three continuous sessions, pre-stimulus, during stimulus and post-stimulus, allows comparison against baseline changes, and post hoc selection of data satisfies Brouwer's recommendation III. The remaining Brouwer recommendations, IV and V, are met by recording EEG using the good artefact-removal protocol mentioned in Sect. 4.4 and Table 9 and by analysing the data using proper statistical tests and machine learning algorithms (refer Fig. 6 for the suggested approach). Comparison of left and right hemispheric activity gives vivid results, and the model so formed is called the asymmetry model (refer Fig. 5). Most of the reviews compared left-brain activity with right-brain activity; a mathematical relationship for the stimulus would be more significant.
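The asymmetry model compares homologous left and right electrodes. A common formulation (a sketch of one standard convention, not necessarily the exact relationship used in every reviewed study) is the log-power difference, e.g. ln(band power at F4) minus ln(band power at F3):

```python
import numpy as np

def asymmetry_index(power_left, power_right):
    """Hemispheric asymmetry index: ln(right) - ln(left) band power.

    With alpha power, which is inversely related to cortical activation,
    a positive index suggests relatively greater left-hemisphere activation.
    """
    return np.log(power_right) - np.log(power_left)

# Hypothetical alpha-band powers (uV^2) at F3 (left) and F4 (right)
idx = asymmetry_index(4.0, 6.0)  # positive: relatively greater left activation
```

The logarithm makes the index symmetric around zero and less sensitive to overall power differences between subjects.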
7 Conclusion
We have summarised, analysed and discussed research articles found using the keywords music, EEG and emotion from the years 2001–2018. We have outlined the different approaches considered in mental state and emotion detection with musical stimuli, and we have drawn attention to various aspects of current research, such as the emotion model, statistical tests and machine learning algorithms, and model performance metrics. We have recommended best practices to put before the scholar; this will provide input for new researchers in this area.
References
Goldstein A (1980) Thrills in response to music and other stimuli. Physiol Psychol 8(1):126–129. https://doi.org/10.3758/BF03326460
Sloboda John (1991) Music structure and emotional response: some empirical findings. Psychol Music 19:110–120. https://doi.org/10.1177/0305735691192002
Koelsch S (2012) Brain and music. Wiley, Hoboken
Lystad RP, Pollard H (2009) Functional neuroimaging: a brief overview and feasibility for use in chiropractic research. J Can Chiropr Assoc 53(1):59–72
Alotaiby T, El-Samie FEA, Alshebeili SA, Ahmad I (2015) A review of channel selection algorithms for EEG signal processing. J Adv Signal Process. https://doi.org/10.1186/s13634-015-0251-9
Gray H (1988) Grays, anatomy: the classic collectors. Random House, New York
Chen P (2011) Principles of biological science
Patel ND (2011) An EEG-based dual-channel imaginary motion classification for brain computer interface. Thesis, Lamar University, Master of Engineering Science
Nidal K, Aamir SM (2014) EEG/ERP analysis methods and applications. CRC Press, Boca Raton
Mendeley library (2016) Retrieved from https://www.mendeley.com/library/
Schmidt LA, Trainor LJ (2001) Frontal brain electrical activity (EEG) distinguishes valence and intensity of musical emotions. Cogn Emot 15(4):487–500. https://doi.org/10.1080/02699930126048
Altenmüller E, Schürmann K, Lim VK, Parlitz D (2002) Hits to the left, flops to the right: different emotions during listening to music are reflected in cortical lateralisation patterns. Neuropsychologia 40:2242–2256
Sammler D, Grigutsch M, Fritz T, Koelsch S (2007) Music and emotion: electrophysiological correlates of the processing of pleasant and unpleasant music. Psychophysiology 44:293–304. https://doi.org/10.1111/j.1469-8986.2007.00497
Lin Y, Wang C, Wu T, Jeng S, Chen J (2007) Multilayer perceptron for EEG signal classification during listening to emotional music. In: TENCON 2007—IEEE region 10 conference. https://doi.org/10.1109/TENCON.2007.4428831
Lin YP, Wang CH, Wu TL, Jeng SK, Chen JH (2008) Support vector machine for EEG signal classification during listening to emotional music. In: Proceedings of the 2008 IEEE 10th workshop on multimedia signal processing, vol 15(4), pp 127–130. https://doi.org/10.1109/MMSP.2008.4665061
Lin Y, Jung T, Chen J (2009) EEG dynamics during music appreciation. In: 31st annual international conference of the IEEE EMBS, vol 15(4), pp 5316–5319. https://doi.org/10.1109/IEMBS.2009.5333524
Lin Y, Jung T, Chen J (2010) EEG-based emotion recognition in music listening, IEEE Trans Bio-Med Eng. https://doi.org/10.1109/TBME.2010.2048568
Hadjidimitriou SK, Hadjileontiadis LJ (2012) Toward an EEG-based recognition of music liking using time-frequency analysis. IEEE Trans Bio-Med Eng 59(12):3498–3510. https://doi.org/10.1109/TBME.2012.2217498
Duan R, Wang X, Lu B (2012) EEG-based emotion recognition in listening music by using support vector machine and linear dynamic system. Springer, Berlin, pp 468–475
Hadjidimitriou SK, Hadjileontiadis LJ (2013) EEG-based classification of music appraisal responses using time-frequency analysis and familiarity ratings. IEEE Trans Affect Comput 4(2):161–172. https://doi.org/10.1109/T-AFFC.2013.6
Urakami Y, Kawamura K, Washizawa Y, Cichocki A (2013) Electroencephalographic gamma-band activity and music perception in musicians and non-musicians. Activitas Nervosa Superior Rediviva 55(4):149–157
Shahabi H, Moghimi S (2016) Toward automatic detection of brain responses to emotional music through analysis of EEG effective connectivity. Comput Hum Behav 58:231–239. https://doi.org/10.1016/j.chb.2016.01.005
Mehmood A, Majid M, Muhammad S, Khan B (2016) Computers in human behavior human emotion recognition and analysis in response to audio music using brain signals. Comput Hum Behav 65:267–275. https://doi.org/10.1016/j.chb.2016.08.029
Kaur B, Singh D, Roy P (2016) A Novel framework of EEG-based user identification by analyzing music-listening behavior. Multimed Tools Appl. https://doi.org/10.1007/s11042
Sengupta S, Biswas S, Sanyal S, Banerjee A, Sengupta R, Ghosh D (2016) Quantification and categorization of emotion using cross cultural music: an EEG based fractal study. In: 2nd international conference on next generation computing technologies (NGCT), Dehradun, pp 759–764. https://doi.org/10.1109/NGCT.2016.7877512
Thammasan N, Moriyama K (2017) Familiarity effects in EEG-based emotion recognition. Brain Inf 4(1):39–50. https://doi.org/10.1007/s40708-016-0051-5
Tandle A, Jog N, Dharmadhikari A, Jaiswal S, Sawant V (2016) Study of valence of musical emotions and its laterality evoked by instrumental Indian classical music: an EEG study. In: International conference on communication and signal processing (ICCSP), pp 276–280. https://doi.org/10.1109/ICCSP.2016.7754149
Tandle A, Jog N, Dharmadhikari A, Jaiswal S (2016) Estimation of valence of emotion from musically stimulated eeg using frontal theta asymmetry. In: 12th international conference on natural computation, fuzzy systems and knowledge discovery (ICNCFSKD). https://doi.org/10.1109/FSKD.2016.7603152
Tandle A, Dikshant S, Seema S (2018) Methods of neuromarketing and implication of the frontal theta asymmetry induced due to musical stimulus as choice modeling. Procedia Comput Sci 132:55–67. https://doi.org/10.1016/j.procs.2018.05.059
Dharmadhikari A, Tandle A, Jaiswal S, Sawant V, Vahia V, Jog N (2018) Frontal theta asymmetry as a biomarker of depression. East Asian Arch Psychiatry 28:17–22. https://doi.org/10.12809/eaap181705
Bishop DVM (2001) Individual differences in handedness and specific speech and language impairment: evidence against a genetic link. Behav Genet 31(4)
Broca P (1865a) Du siège de la faculté du langage articulé. Bulletins de la Société d'Anthropologie 6:377–393
Ross ED (1984) Right hemisphere's role in language, affective behavior and emotion. Trends Neurosci 7:342–346
Springer SP, Deutsch G (1998) Left brain, right brain: perspectives from cognitive neuroscience, 5th edn. A series of books in psychology. W H Freeman/Times Books/Henry Holt & Co, New York
Fachner J, Gold C, Ala-Ruona E, Punkanen M, Erkkilä J (2010) Depression and music therapy treatment: clinical validity and reliability of EEG alpha asymmetry and frontal midline theta: three case studies EEG assessment. Music Therapy ICMPC 11:11–18
Leslie G, Ojeda A, Makeig S (2013) Towards an affective brain–computer interface for monitoring musical engagement. https://doi.org/10.1109/ACII.2013.163
Caplan B, Mendoza JE (2011) Edinburgh handedness inventory. In: Encyclopedia of Clinical Neuropsychology. Springer, New York, p 928
Oldfield RC (1971) The assessment and analysis of handedness: the Edinburgh inventory. Neuropsychologia 9(1):97–113
Alarcao SM, Fonseca MJ (2016) Emotions recognition using EEG signals: a survey. IEEE Trans Affect Comput. https://doi.org/10.1109/TAFFC.2017.2714671
Brouwer A-M, Zander TO, van Erp JBF, Korteling JE, Bronkhorst AW (2015) Using neurophysiological signals that reflect cognitive or affective state: six recommendations to avoid common pitfalls. Front Neurosci 9:136. https://doi.org/10.3389/fnins.2015.00136
Jatupaiboon N, Pan-ngum S, Israsena P (2013) Real-time EEG-based happiness detection system. Sci World J. https://doi.org/10.1155/2013/618649
Cowie R, Douglas-Cowie E, Savvidou S, McMahon E, Sawey M, Schröder M (2000) 'FEELTRACE': an instrument for recording perceived emotion in real time. In: Proceedings of the ISCA workshop on speech and emotion
Kropotov JD (2009) Quantitative EEG, event-related potentials and neurotherapy. Elsevier, Amsterdam
Demaree HA, Everhart DE, Youngstrom EA, Harrison DW (2005) Brain lateralization of emotional processing: historical roots and a future incorporating dominance. Behav Cogn Neurosci Rev 4(1):3–20
Coan JA, Allen JJB (2003) The state and trait nature of frontal EEG asymmetry in emotion. In: Hugdahl K, Davidson RJ (eds) The asymmetrical brain. MIT Press, Cambridge, pp 565–615
Acharya JN, Hani AJ, Thirumala PD, Tsuchida TN (2016) American Clinical Neurophysiology Society Guideline. J Clin Neurophysiol 33(4):312–316. https://doi.org/10.1080/21646821.2016.1245558
Tandle A, Jog N, Dcunha P, Chheta M (2016) Classification of artefacts in EEG signal recordings and EOG artefact removal using EOG subtraction. Commun Appl Electron 4:12–19
https://community.alteryx.com/t5/Data-Science-Blog/Why-use-SVM/ba-p/138440
Powers DMW (2011) Evaluation: from precision, recall and F-measure to ROC, informedness, markedness and correlation. J Mach Learn Technol 2(1):37–63
Egan JP (1975) Signal detection theory and ROC analysis, Series in cognition and perception. Academic Press, New York
Swets JA, Dawes RM, Monahan J (2000) Better decisions through science. Sci Am 283:82–87
Fawcett Tom (2006) An introduction to ROC analysis. Pattern Recognit Lett 27(8):861–874. https://doi.org/10.1016/j.patrec.2005.10.010
Authors' contributions
As the research is interdisciplinary, the medical fraternity, Dr. ASD and Dr. SVJ, contributed acumen in participant selection, handedness and normalcy assessment of participants, and psycho-neurological interpretation, while the engineering fraternity, ALT and MSJ, contributed acumen in signal processing, machine learning and other technical aspects. All authors read and approved the final manuscript.
Authors’ information
Avinash L. Tandle is a PhD student and Assistant Professor at NMIMS MPSTME Mumbai Campus. His areas of interest are computational neuroscience and machine learning.
Acknowledgements
The authors would like to acknowledge Dr Rajesh Garje (Sanjeevani Hospital) and Dr Shailendra Gaikwad (Manas Hospital) for their help in framing the experiment. The authors also express their gratitude to Dr Archana Bhise and Dr Ravi Terkar of NMIMS University and Dr Ravikiran Garje of Mumbai University for their constructive remarks.
Competing interests
No author mentioned in the manuscript has any competing interests.
Availability of data and materials
The EEG database was recorded at Dr. R.N. Cooper Hospital; articles were obtained from various repositories.
Ethics approval
This study proceeded after ethics committee approval from HBT medical college and Dr R.N. Cooper Hospital Mumbai.
Funding
This research received no funding.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Tandle, A.L., Joshi, M.S., Dharmadhikari, A.S. et al. Mental state and emotion detection from musically stimulated EEG. Brain Inf. 5, 14 (2018). https://doi.org/10.1186/s40708-018-0092-z