Stress Detection Framework for Mission-Critical Applications: Addressing Cybersecurity Analyst Stress Using Facial Expression Recognition
Tiffany Davis-Stewart
The Graduate College, MS Information Technology, North Carolina Agricultural and Technical State University, USA
Corresponding author
Tiffany Davis-Stewart, The Graduate College, MS Information Technology, North Carolina Agricultural and Technical State University, USA.
ABSTRACT
This literature review aims to synthesize and define three broad topics: cybersecurity, stress detection, and the effects of unmanaged stress on job performance, by addressing key questions and reviewing existing research and experimentation in these areas. The primary focus is on how stress can be mitigated or managed during working hours for cybersecurity specialists. To address this, the review begins by examining technological advancements and their impact. Technology is now ubiquitously accessible through laptops, smartphones, and smartwatches. It extends to smart TVs, vehicles, and public Wi-Fi hotspots, providing continuous Internet of Things (IoT) access. As technology evolves, so does the interest in understanding how human emotions interact with technology. The rise of Big Data, driven by social media, sentiment analysis, text mining, and e-commerce, has further highlighted the need to explore these interactions. Key questions addressed include how human emotions and mental health issues, such as stress and anxiety, influence individuals working in high-pressure environments. Specifically, the review explores the impact of high stress levels on the performance of cybersecurity analysts and seeks to develop early recognition and detection systems that alert them before their job performance is significantly compromised.
Keywords: Stress, Anxiety, Stress Detection, IoT, Cybersecurity, Technology, Cybersecurity Analysts, Smart Devices, Human-Computer Interaction, Affective Computing, Facial Expression Recognition
Introduction
Exploring the Impact of Technology
The accessibility of technology and the Internet of Things (IoT) at our fingertips twenty-four hours per day can quickly become a technological overload. For many reasons, this can be problematic for people working in front of computers, whether at home or in an office setting. Our “first responders” in cybersecurity, the analysts who work consistently in front of multiple computer screens and mobile devices to protect data from various attacks, are at a higher risk. Unmanaged stress can lead to poor job performance, such as multiple health-related absences, missed warning signals on possible cyberattacks, and an overall lack of enthusiasm for the work needed [1].
Conversations on cybersecurity, stress, and anxiety do not usually take place in the same setting. My objective is to show a correlation between these topics, to detect and reduce stress, and finally to offer reasonable resolutions moving forward. This generates many questions, the first being how to detect and monitor stress. Furthermore, how do we capture this data for analysis?
A further question is how to create an early recognition and detection system that alerts the Cybersecurity Analyst before his or her job performance is severely affected.
This study proposes using multimodal sensors and Facial Expression Recognition software to detect stress as it is triggered, to better understand and answer as many of these questions as possible. The intended sensors are smartwatches, GSR sensors, and chairs with various biosensors. Additionally, for in-house testing and experimentation, computers with webcams, video recording, and emotion recognition software are installed to capture data for analysis.
Understanding Stress, Anxiety, and Technology
It is widely known that high levels of stress and anxiety left unmanaged can be dangerous [2]. Untreated or poorly managed stress can lead to many health complications. These complications not only harm individuals but can also degrade job performance, creating problems that involve many others. Mental stress can seriously affect the body and mind, leaving someone unable to concentrate, with difficulty thinking clearly, poor job performance, and neglect of personal responsibilities and health. All these symptoms can lead to a mental breakdown [3]. Unfortunately, many people do not take the time to care for their mental health properly. Due to fast-paced lifestyles and the unavoidable use of technology, human stress levels are unmanaged and extremely high. Balancing home life, personal self-care, and work creates constant pressure to keep it all under control [4].
Cybersecurity can be defined as the protection of all computers and electronic devices connected to a network from malicious cyberattacks [5]. Furthermore, stress and anxiety are two different terms with different definitions. In discussing the long-term effects of both, their symptoms are similar, but they are two very different disorders. Stress is the feeling of emotional or physical tension. It comes from any event that makes someone feel frustrated, angry, or nervous. Stress is the body’s reaction to a challenge or demand [5].
Anxiety, by contrast, is the feeling of fear, dread, and uneasiness. Anxiety causes someone to sweat, feel restless and tense, and have a rapid heartbeat. Examples include the anxious feeling when faced with a complex problem at work, before taking a test, or before making an oral presentation [6]. In addition to finding a correlation between cybersecurity analyst job performance, stress, and anxiety, this study aims to determine when someone feels stress or anxiety. The method of detecting these emotions uses facial cues from video monitoring with facial expression recognition software and captures the data using biosensors.
Review of Literature
This literature review will examine and analyze other studies, journals, articles, and tests to gain an overview of earlier attempts to detect stress. This proposal aims to detect and study participants’ stress and anxiety by video recording them in experimental environments, analyzing their facial cues under various stimuli, and applying the findings to statistical and data-analytic methods. The video recordings will be analyzed, and data will be extracted from the different facial cues in various states, such as neutral, relaxed, stressed, and surprised. The face comprises the forehead, eyes, eyebrows, nose, lips, and skin [6]. When stress is triggered, each of these facial features is affected and moves through a range of motion, and the skin typically turns paler or takes on a more reddish hue depending on the emotion [7].
This review further explains that studies on humans and animal experiments have shown that these effects can, in the long term, weaken the immune system and raise the risk of certain types of cancer. Stress and anxiety are also linked to other common symptoms, such as headaches, hypertension, and lower back and neck pain [6]. Stress can also be detected through biosignals. Biosignals are time-varying measures of the human body’s processes that can be divided into two main categories [6]. Relating the reviewed literature to this study, it is projected that prolonged stress in a particularly stressful working environment will adversely affect the sensitive nature of the cybersecurity job.
The two main categories are physical signals and physiological signals. Physical biosignals measure how the body moves due to muscle activity, including pupil size, eye blinks, and the position or movement of the head, body, arms, and legs [8]. Respiration, facial expressions, and voice are also physical biosignals. Physiological signals relate more to the body’s vital functions, such as cardiac activity, and are typically detected with sensors attached to the body, such as the electrocardiogram (ECG) and the Blood Volume Pulse (BVP) [6]. An EEG sensor can detect disruptions in brain activity, and related sensors can capture disturbed breathing patterns and other respiratory functions that correlate with emotional stress and anxiety. One effect of stress is sweating in the palms of the hands, the forehead, and the armpits. The Galvanic Skin Response (GSR) sensor can detect stress when the sweat glands have been activated by stress triggers. Stress detection using GSR is proving reliable, as it relies on the conductivity of the skin in response to stress triggers and stimuli [9].
This review also explains how stress and anxiety affect the human face. Several categories of facial features are connected with stress and anxiety: the head (head movements, skin color), the eyes (blink rate, eyelid response, eyebrow movements), the mouth (mouth shape and lips, whether frowning or smiling), the gaze (gaze direction, saccadic eye movement), and the pupil (pupil size as it dilates) [10].
Methodology
Framework
My methodology involves a comprehensive review of existing literature on stress detection, the symptoms and long-term effects of stress, and the impact of unmanaged stress on job performance and health, specifically targeting Cybersecurity Operation Center analysts and students. I will evaluate various techniques used for stress detection to determine which methods yield the highest accuracy. This process includes recreating, comparing, and contrasting successful and unsuccessful studies. The analysis will focus on understanding the reasons behind the outcomes of unsuccessful experiments, identifying potential improvements, and suggesting modifications that could enhance stress detection accuracy.
The experimental setup will be significantly enhanced using a desktop computer or laptop with an embedded camera and the IMOTIONS software platform installed. IMOTIONS enables users to collect and analyze human behavior data through a variety of modalities, including eye tracking, galvanic skin response (GSR), facial expression analysis (FEA), electroencephalography (EEG), electromyography (EMG), and electrocardiography (ECG), all within one comprehensive platform. This research will use FEA, GSR, and ECG sensors within the IMOTIONS platform.
IMOTIONS integrates an electrocardiography research module within its software platform and utilizes the Polar Belt for electrocardiography (ECG) measurements. The ECG collects the heart’s electrical signals, providing valuable data on participants’ physiological arousal and psychological states. The Facial Expression Analysis (FEA) component also detects and analyzes outward emotional expressions through automated computer algorithms that record facial expressions via webcam, delivering real-time insights into participants’ emotional states.
To enhance the accuracy of stress analysis, I plan to incorporate additional sensors within the IMOTIONS platform. One such sensor is the Galvanic Skin Response (GSR) sensor, also known as the Electrodermal Activity (EDA) sensor, which has proven an effective tool for stress detection in previous studies. IMOTIONS supports the Shimmer GSR kit, which detects moisture on the skin and measures the activity of the autonomic nervous system. Changes in emotional arousal, triggered by environmental stimuli such as fear, threat, or joy, result in increased activity of the eccrine sweat glands. Thus, EDA/GSR is a valuable indicator of emotional arousal, providing insights into participants’ underlying physiological and psychological processes (IMOTIONS, 2024).
Tracking facial expressions, particularly in controlled contexts and with other biosensors, can be a powerful indicator of emotional experiences. While no single sensor can completely understand the mind, combining multiple data streams with robust empirical methods can significantly advance our insights. The primary emotions are happiness, anger, fear, surprise, disgust, sadness, and neutrality.
The feelings or emotions that are primarily associated with stress and anxiety are:
- Anxiety: A feeling of unease, worry, or fear caused by stress.
- Frustration: A feeling of annoyance or disappointment, which can be caused by being unable to complete a task or achieve a goal due to stress.
- Irritability: A tendency to become easily annoyed or angered, which can be caused by stress.
- Anger: (primary emotion) A strong feeling of displeasure or hostility, which can be caused by stress.
- Sadness: (primary emotion) A feeling of sorrow or melancholy, which can be caused by stress.
- Fear: (primary emotion) A feeling of apprehension or terror, which can be caused by stress.
- Helplessness: A feeling of being unable to control a situation or outcome, which can be caused by stress.
- Nervousness: A feeling of restlessness or jitters, which can be caused by stress.
- Tension: A feeling of tightness or strain caused by stress.
- Fatigue: A feeling of exhaustion or weariness caused by stress.
Several facial expressions are commonly associated with stress. Some of these expressions include:
- Furrowed brow: When stressed, people may furrow their brows, creating wrinkles in the forehead.
- Tensed jaw: Stress can cause a person to clench or grind their teeth, resulting in a tense and tight facial expression.
- Narrowed eyes: When stressed, a person may squint or narrow their eyes, creating a strained or intense expression.
- Lips pressed together: Stress can cause a person to press their lips tightly, indicating tension or discomfort.
- Frowning: A downturned mouth or frown is a common expression of stress, indicating sadness, worry, or frustration.
- Rapid blinking: Stress can cause a person to blink rapidly or flutter their eyelids, indicating nervousness or tension.
- Tensed neck and shoulders: Stress can cause a person to hold tension in their neck and shoulders, resulting in a stiff and rigid posture.
The Setup
To effectively detect stress in an experimental setup, a computer workstation will be equipped with a high-definition webcam capable of recording a range of facial expressions. The computer will utilize IMOTIONS Facial Expression Recognition software to analyze captured emotions. Participants will be shown videos of various scenarios and still images to evoke specific emotional responses, allowing us to verify whether the anticipated emotions are accurately reflected after each viewing session. Additionally, incorporating the GSR and ECG sensor devices mentioned above has proven convenient and effective in capturing large amounts of supplementary data for further analysis. Administering questionnaires or surveys to participants beforehand also helps gather data on their mental states. Various wearable devices with embedded biosensors can be employed, the most common being smartwatches, which are widely used for fitness and health monitoring. Other forms include headbands, glasses, rings, armbands, and intelligent garments such as t-shirts, vests, or gloves. Proper placement of these wearable sensors is crucial to accurately capture the relevant biometric data and ensure the validity of the information collected.
The sensors will be placed as follows:
- Chest
- Wrist and fingers
- Facial expressions captured using a webcam
- Eyes: Possibly using an eye-tracking device (Smart Eye AI)
Stress Detection
Various methods for stress detection include monitoring heart activity, brain activity, skin conductance, blood flow, and muscle activity. These methods fall under physiological stress-based affective computing [9]. The signals are captured using techniques such as electroencephalography (EEG) for brain activity, electromyography (EMG) for muscle activity, and galvanic skin response (GSR) for skin conductance. One notable multimodal database, DREAMER, utilizes ECG signals to determine emotions triggered by audio-visual stimulation. That study also created other databases of neurophysiological signals for detecting human emotions, including the AMIGOS database, which additionally captures personality traits. The research further proposes using ECG data augmentation for emotion recognition, employing a seven-layer convolutional neural network (CNN) model.
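As a concrete illustration of what such a model can look like, the following is a minimal sketch of a seven-layer 1D CNN for ECG-based emotion recognition, written in Python with PyTorch. The layer widths, kernel sizes, and the four-class output are illustrative assumptions; the cited study specifies only that a seven-layer CNN was used.

```python
import torch
import torch.nn as nn

class ECGEmotionCNN(nn.Module):
    """Five convolutional layers plus two fully connected layers (seven total, assumed split)."""
    def __init__(self, n_classes: int = 4):  # n_classes is an assumption
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, 7, padding=3), nn.ReLU(), nn.MaxPool1d(2),   # conv layer 1
            nn.Conv1d(16, 32, 5, padding=2), nn.ReLU(), nn.MaxPool1d(2),  # conv layer 2
            nn.Conv1d(32, 64, 5, padding=2), nn.ReLU(), nn.MaxPool1d(2),  # conv layer 3
            nn.Conv1d(64, 64, 3, padding=1), nn.ReLU(), nn.MaxPool1d(2),  # conv layer 4
            nn.Conv1d(64, 64, 3, padding=1), nn.ReLU(),                   # conv layer 5
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64, 32), nn.ReLU(),  # fully connected layer 6
            nn.Linear(32, n_classes),      # output layer 7
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, n_samples), a single-lead ECG segment
        return self.classifier(self.features(x))

# Example: classify a batch of 5-second ECG windows sampled at 256 Hz.
logits = ECGEmotionCNN()(torch.randn(8, 1, 1280))
```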
Basic concepts and motivation are established by studying the following steps:
- Human emotion
- ECG and emotion recognition
- Emotion detection methods
The technologies used to detect these physiological signals are the following [9].
- Brain activity: Electroencephalography (EEG)
- Heart activity: electrocardiography (ECG)
- Skin response: galvanic skin response (GSR) and Electrodermal activity (EDA)
- Blood activity: photoplethysmography (PPG)
- Muscle activity: electromyography (EMG)
- Respiratory Response: piezoelectricity/electromagnetic generation
Detecting human emotions is extremely difficult. Human emotions are not only a psychological activity but also a complex behavioral phenomenon involving various levels of neural and chemical interaction. To properly recognize human emotions, three main qualities must be described. The qualities are:
- Valence: whether the emotion is positive or negative, such as happiness or fear
- Arousal: the intensity of the emotion, from calm to extreme, such as anger or sadness
- Dominance: the degree of control the person feels, from in control to out of control.
Using Affective Computing to Detect Stress
The software application must be capable of “affective computing,” which can be defined as computing that can recognize, interpret, and simulate human affect, including the effects of stress [11]. This works by designing human-computer interaction (HCI) with a more natural feel or effect, giving the computer the ability to recognize and interpret facial expressions as a human does [12]. The first sign that someone is stressed or experiencing anxiety is often found in their facial expressions. These facial expressions show while at work, driving, or conversing with someone on the phone or in person. Facial expression recognition software has proven in simulation to be especially useful for detecting when a person’s anxiety has been triggered or their stress levels have increased [12].
The study attempted to gain a basic understanding of the six emotions a human expresses, which aids in determining when stress sets in. The six basic emotions are Happy, Sad, Surprise, Fear, Anger, and Disgust. Testing has shown that these basic human emotions can be detected well in simulated environments using video cameras and facial expression recognition software [12]. A challenge discovered when trying to track and analyze facial expressions on video is dealing with the changes in the shape of the mouth during each emotion [13]. It was proposed during the study that focusing only on the upper portion of the face during the silent phase of testing eliminates the distraction of the mouth changes. Further testing and analysis show that full-face analysis yields the best results for capturing the varying emotions of the human face [12]. Specific testing was done to determine at what point elevated stress levels affect someone’s work performance and how much stress affects an employee’s alertness and safety. After the testing, the proposed framework showed promising results. The framework consists of various cross-modality data correlations [12].
Measuring Stress During Busy Workdays with Biosensors
Earlier studies paid little attention to how prolonged computer use or stressful workday situations affect mental and physical health [14]. This study aimed to determine stress levels during a protracted stressful event, such as an overly busy workday. The biosensors used in this study were the ECG, EMG, EOG, and EEG. Each biosensor provides valuable information about a different critical area of the body. The electrocardiogram (ECG) is one of the simplest and fastest tests to evaluate the heart. The electromyography (EMG) sensor measures muscle response, or electrical activity, in response to a nerve’s stimulation of the muscle; the test is also used to help detect neuromuscular abnormalities. The electrooculogram (EOG) is an electrophysiologic test that measures the resting electrical potential between the cornea and Bruch’s membrane. The electroencephalogram (EEG) detects abnormalities in brain waves, the electrical activity of the brain [14].
In the end, the goal of an affective computing system is to correctly detect elevated or dangerous levels of stress during the workday, specifically the workday of a cybersecurity operations center employee. An effective system will allow the computer to be intelligent enough to alert users when prominent levels of stress have been detected, issuing messages and alerts to take a break and step away from the computer [14]. Medical studies have shown that stress and anxiety in the workplace are detrimental not only to the employee but to the company as well. Multiple tests and analyses have shown that prolonged stress can reduce human life expectancy by three years. So, improving, motivating, and uplifting employees and allowing flexibility is healthy for the employee and critical to occupational safety, well-being, and productivity [15].
Related Methodologies
Human-Computer Interaction
Human emotions are affected mainly by stressful environments such as work, the commute to work, home life, and unhealthy relationships. These everyday life incidents are something everyone experiences and are called psychophysiological occurrences. Stress can be described as a complex emotional state detected by biomedical methods, self-report surveys, and biomarkers; however, these methods are not helpful for real-time data capture [16]. Stressful triggers and daily routines are only two of the experiences that affect human daily life. Many others include feelings, bodily changes, cognitive reactions, behavior, and thoughts. Monitoring and analyzing these emotions with technology is known as the Human-Computer Interaction (HCI) technique [8].
This technique is very challenging because everyone is different, everyone has different stress triggers, and people live in various environments that may be stressful to some and not at all to others [17]. The difficulty in designing research for emotion detection lies in finding reliable and meaningful data. It is problematic to design just one experiment to detect many different emotions and cumbersome to design many different experiments to detect one emotion. Again, people are different and will respond differently when exposed to the same stimuli; moods vary, and people are often unable to accurately self-report an emotional experience [17].
Affective Computing and HCI
Related research has been found on stress detection using Affective Computing. Affective Computing has increasingly become a topic of interest as more people become aware of how their physical health relates to their emotional health and how technology correlates with them [9]. Steady growth has been seen in studies not only in software but also in hardware technology. This growth has pointed more toward detecting someone’s emotional or mental state and analyzing the data [9]. The research focuses on Human-Computer Interfaces (HCI), a modern form of computer science because it ties human emotional detection to technology [9]. Affective computing recognizes a psychophysiological state that influences behavior, which affects human emotion shown on the face in various emotional states. The six primary emotional states are joy, anger, surprise, disgust, sadness, and fear [9]. “Psychophysiology is the study of the relationship between physiological signals recorded from the body and brain to mental processes and disorders. These biological signals may be generated by the activity of organs in the body or by muscle activity” [18]. There are four basic negative emotions when defining stress.
The negative emotions are anger, disgust, sadness, and fear. Only two stress-related emotions are positive: surprise and happiness. Using a trained model set, the participant is shown an image during the testing state. If the participant shows disgust or anger, this is treated as instantaneous psychological stress detection [16]. The technologies used to detect these physiological signals are the following [9]:
- Brain activity: Electroencephalography (EEG)
- Heart activity: electrocardiography (ECG)
- Skin response: galvanic skin response (GSR) and Electrodermal activity (EDA)
- Blood activity: photoplethysmography (PPG)
- Muscle activity: electromyography (EMG)
- Respiratory response: piezoelectricity/electromagnetic generation
The technologies used to detect these physical signals are the following [9]:
- Facial expression: automated facial expression analysis (AFEA)
- Eye activity: infrared (IR) eye tracking
- Body gesture: automated gesture analysis (leveraging AFEA)
Understanding Acute and Chronic Stress
Primarily, stress is caused by everyday tasks or routines; this is called acute stress. Then there are the chronic stressors of life. Triggers that cause chronic stress are things that happen out of the ordinary, such as unexpected expenses, health emergencies, additional tasks or errands added to an already packed schedule, or perhaps learning a new language or having to take a test. All these stressful triggers can cause chronic health conditions if left unmanaged or untreated. Additionally, stress from indoor work, such as a typical eight-hour shift spent sitting in an office or cubicle, looking at a computer screen, and answering telephone calls or emails, is called “Office Syndrome.” The study of this stressful environment is called Office Syndrome detection and uses EEG and HRV, as well as measurement of hand movement [18].
A specially made watch implements the methods used in detecting Office Syndrome. The watch uses the ATmega328/Cortex-M0 microcontroller as the central processor. The ADXL345 accelerometer measures the wrist’s roll and pitch orientation, and the captured roll and pitch data are stored separately for further detail and analysis [18]. For additional stress detection, a heart rate sensor called the Neurosky Mindwave, a commercial electrocardiography (ECG) device used inside a wearable such as a watch, is highly reliable in capturing data for stress detection. The Neurosky Mindwave device detects stress during increased heart activity and captures data during resting states [18]. Resting states can also be used as a benchmark for individuals; since everyone is different, this information is beneficial. The algorithms this study used for hand movement and stress detection are similar: if there is hand movement, a score is given [18].
Simultaneously, if stress is detected through increased heart rate activity, another score is given, ultimately increasing the total score in the final analysis [18]. Methodologies are needed to detect this type of stress with enough urgency to heighten awareness. Many stress and emotional-state detection methods are available, but many of them are invasive and time-consuming in collecting the data needed for individual health assessments. More research is needed to automatically detect stress, collect data, analyze the data, and send treatment recommendations in real time. Studies have shown that video cameras and audio recorders are impressive and reliable in capturing and identifying stress. Additionally, wearable devices have proven convenient and successful in capturing large amounts of data for further analysis [19].
Using Wearable Devices
Questionnaires are an option for gathering data on test participants’ mental states, although people are not always entirely truthful when completing surveys. Wearable devices, moreover, not only offer convenience but also capture multiple signals. Wearable devices capture physiological signals using microelectromechanical systems (MEMS) sensors: electrodermal activity (EDA), photoplethysmography (PPG), and acceleration sensors, all of which measure the physiological signs of stress. Studies have shown that using wearable devices to detect stress by capturing electrodermal activity (EDA) and heart rate (HR) activity successfully reveals the physiological signals when stress is triggered [28]. The ability to detect stress with a wearable photoplethysmography (PPG) sensor using Heart Rate Variability data is a more recent line of study, and it too has produced reliable stress detection results.
This method uses a five-step process. As described by medical doctors, the heart beats using two natural pacemakers connected to the body’s nervous system, so any change in the nervous system ultimately affects the heart. The five-step process for obtaining suitable HRV parameters is as follows (a code sketch of the pipeline appears after the list):
- Extract B2B (beat-to-beat) data from the heart rate data.
- Define frequency zones.
- Compute a periodogram over those frequency zones.
- Find the area under the curve within each frequency zone.
- Calculate the LF/HF values [9].
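The following is a minimal sketch of that five-step pipeline, assuming NumPy and SciPy. The LF (0.04-0.15 Hz) and HF (0.15-0.40 Hz) band edges are the conventional HRV bands; the 4 Hz resampling rate is an assumption.

```python
import numpy as np
from scipy.signal import periodogram

def lf_hf_ratio(rr_intervals_s: np.ndarray, fs: float = 4.0) -> float:
    # Step 1: beat-to-beat (B2B) data -> beat times from the R-R intervals.
    beat_times = np.cumsum(rr_intervals_s)
    # Resample the irregular R-R series onto an even 4 Hz grid.
    grid = np.arange(beat_times[0], beat_times[-1], 1.0 / fs)
    rr_even = np.interp(grid, beat_times, rr_intervals_s)
    # Step 2: define the frequency zones (bands).
    lf_band, hf_band = (0.04, 0.15), (0.15, 0.40)
    # Step 3: periodogram of the evenly sampled, de-meaned R-R series.
    freqs, psd = periodogram(rr_even - rr_even.mean(), fs=fs)
    # Step 4: area under the curve within each band.
    def band_power(lo: float, hi: float) -> float:
        mask = (freqs >= lo) & (freqs < hi)
        return float(np.trapz(psd[mask], freqs[mask]))
    # Step 5: the LF/HF ratio, a common sympathovagal stress index.
    return band_power(*lf_band) / band_power(*hf_band)
```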
Among the available stress detection methods are monitoring heart activity, brain activity, skin conductance, blood activity, and muscle activity. All these methods fall under physiological stress-based Affective Computing [9]. An additional physiological stress indicator is the biomarker salivary cortisol. Cortisol, the “stress hormone,” has been studied for many years; researchers have measured the increase in cortisol levels in individuals who show high stress levels or after stress has been triggered.
The increase in cortisol levels has been a successful indicator of stress in study after study [21]. Studies have also shown that physiological signals can identify human emotions; this was done by combining electromyography (EMG), ECG, and galvanic skin response (GSR) to detect stress while driving cars. The ECG has proven a reliable source of stress detection data in human emotion recognition systems.
Machine Learning Technology
Several machine learning techniques are used to analyze the data extracted from the ECG. Data from the ECG sensor, together with features extracted from the Heart Rate Variability (HRV), successfully establish an emotion recognition system. Different physiological signals are analyzed to identify and label the various emotions correctly. The signals are captured from electroencephalography (EEG) for brain activity, electromyography (EMG) for muscle activity, respiration, and skin conductivity. A multimodal database named DREAMER was established; this database uses ECG signals to help determine emotions triggered by audio-visual stimulation. The study created other databases of neurophysiological signals for detecting human emotions, including the AMIGOS database, which is used to capture personality traits [8].
This study also proposed an augmentation of electrocardiography (ECG) data to recognize human emotions using a seven-layer convolutional neural network (CNN) model. To recognize human emotion from the imbalanced samples common in machine learning approaches, the steps are as follows (a sketch of the augmentation follows the list):
- Describing the augmentation strategy
- Detecting the R-waves
- Calculating the periods of the R-R intervals; the periods between successive R-waves (R-R intervals) differ
- Random selection of new R-R intervals
- Concatenation of the R-R intervals [22].
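A minimal sketch of these augmentation steps, assuming NumPy and SciPy; the peak-detection threshold and refractory distance are illustrative assumptions rather than the cited study's parameters.

```python
import numpy as np
from scipy.signal import find_peaks

def augment_rr(ecg: np.ndarray, fs: float, n_beats: int, rng=None) -> np.ndarray:
    """Return synthetic beat times built by resampling observed R-R intervals."""
    rng = rng or np.random.default_rng()
    # Detect the R-waves (the tall peaks of the QRS complex).
    r_locs, _ = find_peaks(ecg, height=np.percentile(ecg, 95),
                           distance=int(0.4 * fs))
    # Calculate the periods of the R-R intervals, in seconds.
    rr = np.diff(r_locs) / fs
    # Randomly select new R-R intervals (resampling with replacement)...
    new_rr = rng.choice(rr, size=n_beats, replace=True)
    # ...and concatenate them into a new synthetic R-R sequence.
    return np.concatenate([[0.0], np.cumsum(new_rr)])
```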
Basic concepts and motivation are established by studying the following steps:
- Human emotion
- Electrocardiography (ECG) and emotion recognition
- Emotion detection methods
- Electrocardiography (ECG)-based emotion detection methods
- Electroencephalograph (EEG)-based emotion detection methods [5].
Detecting human emotions is extremely difficult; human emotions are both a psychological activity and a complex behavioral phenomenon involving various levels of neural and chemical interaction. To properly recognize human emotions, three main qualities must be described [23]. The qualities are:
- Valence: whether the emotion is positive or negative, such as happiness or fear
- Arousal: the intensity of the emotion, from calm to extreme, such as anger or sadness
- Dominance: the degree of control the person feels, from in control to out of control [22].
In electrocardiography (ECG) and emotion recognition, the heart rate is defined as the number of beats per minute, that is, systolic contractions. The ECG records the heart’s electrical activity, which drives myocardial contraction, the heart’s natural ability to contract. The heart rate can also be measured by counting the number of R waves registered per minute; the time between two successive R waves is labeled the R-R interval [22].
Previous studies have determined that analyzing data captured from electrocardiography (ECG) sensors is one of the most important ways to recognize human emotions. Specifically, a study was done to determine the human emotional response when listening to music. Not only was the ECG sensor used, but electroencephalography (EEG), electromyography (EMG), respiration, and skin conductivity sensors were also used. Calculations were made from HRV/breathing rate variability (BRV), geometric analysis, entropy, multiscale entropy, time/frequency analysis, and sub-band spectra, all included to best detect human emotion while listening to music. Moreover, studies have shown that the EEG provides essential insight into the complex information about an individual’s emotional state; however, existing methods are unable to determine the exact human emotion accurately [22].
Affective Computing
Furthermore, physical stress-based Affective Computing includes the following methods: facial features, eye tracking, body movements, and gesturing. To accurately detect heart activity, data from the Heart Rate (HR) and Heart Rate Variability (HRV) will be collected using electrocardiography (ECG). Electrocardiography (ECG) captures the heart’s activity by measuring the heartbeat. The heartbeat consists of four components: the baseline, P wave, QRS complex, and T wave.
The HRV alone provides more information than the HR. “The Heart Rate Variability is the measure of the standard deviation in interbeat intervals of successive R waves in a single Heartbeat” [9]. Studies have also shown that the skin’s temperature changes considerably during increased stress: the sympathetic nervous system triggers short-term temperature changes in the skin when someone is under stress or in a prolonged stressful environment [21]. Usually, when someone is in an active, stressful state, the body signals the heart rate (HR) to speed up. This increases the blood supply throughout the body and primes the person for “fight-or-flight” mode, which is prevalent in urgent situations where someone may have to make quick decisions in possibly life-or-death situations. The most widely used stress detection method to date is electrocardiography (ECG), which uses electrodes placed on specific areas of the body.
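A minimal sketch of the quoted definition, assuming NumPy: HRV summarized as the standard deviation of the interbeat (R-R) intervals, a metric commonly called SDNN.

```python
import numpy as np

def sdnn_ms(rr_intervals_ms: np.ndarray) -> float:
    """Standard deviation of interbeat (R-R) intervals, the SDNN measure of HRV."""
    return float(np.std(rr_intervals_ms, ddof=1))

# Example on a short run of interbeat intervals, in milliseconds.
# Lower SDNN typically accompanies sympathetic "fight-or-flight" arousal.
print(sdnn_ms(np.array([812.0, 790.0, 845.0, 801.0, 830.0])))
```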
Even though this is a somewhat intrusive and cumbersome process, given the placement of each electrode on the body, the captured data accurately detect stress through analysis of the heart rate and heart rate variability. One of the few companies that manufactures ECG recording systems is Biopac Systems, Inc., whose AcqKnowledge software captures data for offline and online real-time analysis. Its competitor, Shimmer Sensing, offers a wireless, wearable ECG device that allows wireless real-time synchronization and analysis, which is preferable when trying to detect stress in individuals during their regular daily routines [9].
Research has shown that a healthcare system focusing on emotional aspects can successfully support people in stressful work environments and daily life. That study showed the healthcare system proved effective with the ECG signal because stress is one of the key mental symptoms involved [23]. Brain activity, the center of all activity in the body, is usually captured using electroencephalography (EEG); whereas the ECG detects stress by capturing the heart’s electrical impulses, the EEG measures the brain’s electrical activity as an indicator [9]. There has been an increase in research on skin conductance as an indicator of stress. The signal is captured using a Galvanic Skin Response (GSR) sensor, which analyzes the conductivity of the skin when triggered in stressful environments. These sensors are becoming more popular because they capture data in a less intrusive way. The GSR can be worn on the finger or wrist or embedded in a computer mouse. Stress detection using GSR is proving reliable, as it relies on the conductivity of the skin in response to stress triggers and stimuli; the baseline skin response is called the tonic skin response.
This stress detection technique is becoming more popular because of its convenience and ease of use; its setup requirement is significantly lower than that of EEG or ECG testing. Monitoring blood activity is another way to detect stress, through changes in Heart Rate and Heart Rate Variability as well as changes in blood pressure and Blood Volume Pulse (BVP) [23].
Noninvasive Blood Activity Monitoring
The process of monitoring blood activity uses photoplethysmography (PPG). This technique is very low-cost and noninvasive.
The PPG sensor uses an optical pulse generated by a red or near-infrared (NIR) light source [9]. The most popular and validated technology using PPG is Empatica’s E3 and E4 wristbands, which also incorporate GSR sensors and can be worn conveniently on the wrist. Another up-and-coming company, Seraphim Sense, is developing the Angel Sensor health bracelet, which incorporates many sensors to detect and capture data on the skin, blood activity, and blood pressure. Additional studies have shown that galvanic skin response (GSR) sensor data help detect stress patterns but can also be problematic [23].
Collecting GSR data is not as simple as other studies have described. To detect stress patterns, specific symptoms need to be present, including elevation of the voice and an increase in heart rate, in addition to the galvanic skin response (GSR). This study attempts to detect stress in a two-step process. Step one is identifying the type of stress, which can be broken down into three separate categories:
- Acute stress. Acute stress is a short-term trigger or stress factor.
- Episodic acute. Episodic acute stress is the trigger that happens more frequently.
- Chronic. Chronic stress is described as a long-term stressful environment or a continuous stressful experience.

Step two is the analysis of the results captured from the GSR sensor while determining what stage of stress a person is in [14].
Muscle activity is another physiological measure of stress. Previous studies have shown that muscle activity can be used to identify a stress response. Electromyography (EMG) is the technology that monitors muscle activity. The EMG is like the EEG and ECG, using electrodes placed strategically on the body to detect spikes as the muscles respond to stress [9]. On its own, this detection technique is not a reliable source, as results depend on the participant’s muscle tone. The respiratory response is another method of detecting stress; it uses piezoelectric/electromagnetic generation technology. Ventilation and hyperventilation are linked to stress through mental and physical triggers.
To capture data from the respiratory response, a piezoelectric transducer is placed around the chest to analyze motion as the chest expands and contracts, providing an electrical response to those physical changes under stress triggers [9]. Physical stress-based Affective Computing includes facial-related features, eye activity, and body gestures. Facial expression recognition has proven effective in detecting various emotional states. The automated facial expression analysis (AFEA) algorithm detects these facial expressions; this technology has been used in previous studies to detect sadness, anger, happiness, and deceit [9]. Automated facial expression analysis can use the Facial Action Coding System (FACS), first published by Paul Ekman and Wallace Friesen in 1978 and later updated in 1992 and 2002. The Facial Action Coding System measures the frequency of each facial expression, reducing each expression to action units (AU), the smallest possible movements of each facial expression. The AU works effectively with facial expression recognition software, as it successfully detects emotional states based on facial expressions.
Advanced Stress Detection
Automated facial expression analysis (AFEA) and the Facial Action Coding System (FACS) have achieved stress and emotional-state detection accuracy of 93% [9]. Eye-tracking metrics such as pupil dilation and blink rate have been shown to be reliable stress indicators. The technology used is an infrared (IR) eye-tracking system, which has shown that blink rate increases as stress is triggered; pupil dilation has likewise proven a reliable indicator of stress. Two eye-tracking systems are prevalent in stress detection studies: the Tobii X120 series and the Eye Tribe ET1000 eye trackers. Studies using these Affective Computing systems have shown that the Tobii X120 extracts valuable data from eye activity [23], and the vendor also offers wearable eye trackers that have proven convenient and reliable [9]. One study used a Wireless Body Sensor Network to detect stress in real time. The device comprises a self-configuring network of small biosensor nodes, all communicating via radio signals. Real-time stress detection using multiple vital signs is proposed in that study; the framework combines the Wireless Body Sensor Network (WBSN) with a fuzzy inference system (FIS) and captures and analyzes data from skin conductance (SC) first [14].
Next, if there are signs of stress in the skin conductance data, the vital signs are captured for further analysis. The vital signs used are the Heart Rate (HR), Respiration Rate (RR), and Systolic Blood Pressure (ABPSys); studies have shown that these vital signs are the most significant indicators of stress and the most affected [24]. A minimal sketch of this cascading logic follows.
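The sketch below screens the skin conductance first and only then weighs HR, RR, and ABPSys. All thresholds and weights are illustrative assumptions, not the fuzzy membership functions of the cited WBSN/FIS study.

```python
def cascade_stress_score(sc_uS: float, hr_bpm: float,
                         rr_bpm: float, abp_sys: float) -> float:
    """Return a 0..1 stress score; 0 means the SC screen found nothing."""
    if sc_uS < 8.0:                         # stage 1: SC shows no sign of stress
        return 0.0
    score = 0.0                             # stage 2: weigh the vital signs
    score += 0.4 if hr_bpm > 100 else 0.0   # elevated heart rate
    score += 0.3 if rr_bpm > 20 else 0.0    # elevated respiration rate
    score += 0.3 if abp_sys > 140 else 0.0  # elevated systolic pressure
    return score

# Example: a participant with high skin conductance and elevated vitals.
print(cascade_stress_score(sc_uS=10.5, hr_bpm=112, rr_bpm=22, abp_sys=145))
```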
Behavior and body gesturing are other emerging stress detection techniques, primarily used in studying individuals with Autism. The technology used to detect stress from body gesturing is automated gesture analysis (leveraging AFEA). This technique monitors body movements such as making a fist, jaw clenching, body stiffness, crossing arms, pacing, jittering, and other nervous gestures, in conjunction with visual tools that detect facial features, which can be a reliable source for determining stress levels and emotional states [9]. The predictive analysis technique is an additional method used in stress detection. This technique is used in research on drowsiness detection, Pulse Rate Detection, and Pulse Monitoring Systems. Drowsiness is an overlooked symptom of unmanaged stress; when designing new stress detection technology, it is important to include eye drowsiness detection. A Pulse Rate Detection system has also been shown in research to be an effective technique, used with heart rate and heart rate variability monitoring. Lastly, a Pulse Monitoring System uses the ThingSpeak software to transfer the pulse rate values detected in real time.
This gives a positive representation of an individual’s stress levels so that stress can be assessed appropriately [25]. The workflow, or best practice, in stress detection and resolution is first to detect the emotion and recognize what is happening. If a participant is experiencing a series of negative emotions or stress, the next step is to offer a relaxing resolution, such as a deep-breathing or breathing-control technique. The participant’s stress level is then reanalyzed, and the breathing technique is repeated. Studies show that integrating stress detection with ECG technologies detects stress and positively improves the efficiency of emotional support [25]. There has not been much research on multimodal stress detection. Recent studies show that multimodal emotion detection systems, such as those combining audio and visual signals, suffer from high dimensionality and data sparseness.
Not much work has been done on combining EEG, emotion tracking, and speech emotion recognition; this may prove essential for Human-Computer Interaction [26].
Multimodal Methodologies
A few studies have shown that combining different emotion and facial recognition detection methods yields a higher accuracy rate, meaning that a multimodal approach can detect emotions more accurately. Pinpointing a single emotion in real time can mean the difference between someone making a catastrophic mistake on the job and being calmed down as soon as a stress trigger occurs.
Approaches that combine detection methods, such as using a multimodal database with an EEG sensor and speech signals, are called fusion strategies. In this study, fusion methods combining speech and EEG signals were expected to improve accuracy ratings by twenty-five percent. Human-machine interaction (HMI) needs an automatic emotion recognition system [27].
External behaviors that can be detected include body movements, gesturing, and the tone of someone’s voice. This study built a multimodal emotion database using four different methods, recording EEG, photoplethysmography, speech, and facial expression signals simultaneously from thirty-two experiment participants. The results showed that the EEG signals achieved an 88.9% accuracy rating, higher than speech recognition alone. The study also showed that combining external emotional behaviors with internal physiological signals improves the recognition of human emotions [27]. Additionally, the study notes that humans can hide emotions intentionally or involuntarily, known as social masking; research shows that signals from the Autonomic Nervous System (ANS) and Central Nervous System (CNS), such as the EEG, are not easily concealed [27].
A high-level functionality and purpose must be created to detect, determine, and analyze these emotions. Python is used in this study to classify each emotion, via an emotion classification tool that trains, evaluates, and deploys machine learning models for emotion detection [28].
To retrieve the best analysis, the functionality is split between two programs in Python:
- Run.py: a single-argument program that executes an experiment configuration (a hypothetical sketch follows the list).
- Serve.py: This is a webserver that exposes release-ready research models [28].
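The following is a hypothetical sketch of what such a run.py entry point could look like; the actual Emotion-Core configuration schema is not documented in the source, so the config keys below are assumptions.

```python
import argparse
import json

def main() -> None:
    # run.py takes a single argument: the experiment configuration to execute.
    parser = argparse.ArgumentParser(description="Run one experiment configuration.")
    parser.add_argument("config", help="Path to a JSON experiment configuration")
    args = parser.parse_args()

    with open(args.config) as f:
        cfg = json.load(f)  # "model" and "dataset" keys are assumed, not documented

    # In Emotion-Core's data format, only two columns are needed: the text
    # and the emotion index (see the data-format list below).
    print(f"Training {cfg.get('model', '?')} on {cfg.get('dataset', '?')} ...")

if __name__ == "__main__":
    main()
```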
This study showed that an open-source toolkit named Emotion-Core was successfully used to train, evaluate, deploy, and showcase the various ways emotion can be detected using different modeling techniques, datasets, and evaluation approaches [28].
The impact of this research using Emotion-Core allowed researchers to experiment with various detection methods, datasets, document representations, and modeling approaches at a breakneck pace. The experiment methods included:
- Data formats: In Emotion-Core, only two columns are needed, the text and the emotion index; the researcher experimented with and preprocessed the columns accordingly.
- Document Representation and Models: This allows for centralized definitions of how to train and represent a model [28].
Recent studies have shown that during various emotions, activity occurs in different areas of the lobes within the brain, firing up or down in different brain regions. The literature cannot overlook that these phenomena in the brain are asymmetric [29]. However, very little research has been published on EEG-based emotion recognition using asymmetric brain activity.
Plenty of published work exists on emotion detection using other methods, but not on multimodal detection or asymmetric studies. Most studies of emotion used only a fixed set of electrodes for EEG-based detection. This study shows that, based on scientific findings in neuroscience, there are appropriate electrodes for every subject and every emotional state that match the relevant frequency [29]. The article introduced different methods of determining and recognizing emotions through EEG brain analysis. Three classifications were identified, with accuracy ratings between 88.17% and 100%. A three-phase classification strategy was created and used to determine nine different emotional states.
The phases are as follows:
- Phase 1: Two classifiers were created to determine two different emotional states using the QDC (quadratic discriminant classifier).
- Phase 2: Ensemble classifiers were defined as having the same emotional target.
- Phase 3: Decisions were made based on the analysis of phases one and two. The accuracy level was between 77.21% and 99.48% [29]. (A minimal sketch of the phased ensemble follows.)
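The sketch below illustrates the phased idea, assuming scikit-learn: pairwise quadratic discriminant classifiers (QDC) combined one-vs-one into an ensemble whose pooled votes produce the final decision. The feature matrix and labels are random placeholders, not the cited study's EEG data.

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.multiclass import OneVsOneClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(90, 8))      # placeholder EEG feature vectors
y = rng.integers(0, 3, size=90)   # placeholder emotional-state labels

# Phases 1-2: pairwise QDC classifiers assembled into a one-vs-one ensemble.
ensemble = OneVsOneClassifier(QuadraticDiscriminantAnalysis()).fit(X, y)
# Phase 3: the final decision is made from the combined pairwise votes.
print(ensemble.predict(X[:5]))
```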
Even though research methods are improving, the results have been less promising. To detect stress properly, facial recognition and emotional analysis must first be established. Currently, there are three research methods for detecting and analyzing facial expressions [30].
These methods are:
- FEMG (facial electromyography): the measurement of facial electromyographic activity.
- Human coders analyzing facial expressions in real-time observations.
- Video cameras recording and capturing expression changes for classification algorithms [30].
Facial electromyography (FEMG) involves placing electrodes on the face to collect impulses and directly measure the muscle activity of the face. These signals are processed, filtered, and converted into digital format for further analysis (a minimal sketch of such a processing chain follows). The electrodes are placed on two major muscles on the left and right sides of the face. The advantage of this measurement method is that it is exact, sensitive, continuous, and consistent; the disadvantage is that the electrodes, cables, amplifying device, and other equipment must all be placed correctly for testing [30].
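The sketch assumes SciPy. The 20-450 Hz band-pass, 1 kHz sampling rate, and 5 Hz envelope cutoff are typical surface-EMG assumptions, not values given in the source.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def femg_envelope(raw: np.ndarray, fs: float = 1000.0) -> np.ndarray:
    """Filter a raw FEMG electrode signal and return a smooth activity envelope."""
    # Band-pass the raw signal to the assumed surface-EMG band (20-450 Hz).
    b, a = butter(4, [20.0, 450.0], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, raw)
    # Full-wave rectify, then low-pass to extract the muscle-activity envelope.
    b_env, a_env = butter(4, 5.0, btype="lowpass", fs=fs)
    return filtfilt(b_env, a_env, np.abs(filtered))
```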
The Facial Action Coding System (FACS) observes and analyzes facial expressions in real time. The coding system recognizes facial movements based on an anatomically accurate structure of the face. The encoding is measured in Action Units (AU), which are described by group numbers, names, and the muscles activated [30]. Lastly, Automated Facial Expression Analysis (AFEA) automates the process of facial expression recognition. It is not a simple process, however, because of the variance of human faces: gender, ethnicity, age, facial hair, glasses, and even lighting can all influence the recognition of facial expressions.
There are three distinct stages of automated analysis of facial expressions. The three stages are:
- Face detection
- Detecting and registering facial landmarks
- Classification of facial expressions and emotions [30]. (A minimal sketch of these stages follows.)
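The sketch below assumes OpenCV (cv2). Stage 1 uses OpenCV's bundled Haar cascade; stages 2 and 3 are indicated with comments, since landmarking and expression classification require additional trained models not specified in the source.

```python
import cv2

def analyze_frame(frame_bgr):
    """Run the three AFEA stages on one video frame; returns detected face boxes."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Stage 1: face detection with OpenCV's bundled frontal-face Haar cascade.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        face = gray[y:y + h, x:x + w]  # crop the detected face region
        # Stage 2: detect and register facial landmarks (e.g., with a landmark
        # model such as dlib or MediaPipe; assumed, not shown here).
        # Stage 3: classify the registered face into an expression or emotion
        # with a trained classifier (assumed, not shown here).
    return faces
```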
Multimodal facial expression detection studies are becoming more popular but have proven unreliable, and more research is needed to make these detection methods more trustworthy. Multimodal methods include combining facial expression analysis, voice recognition, and gesture recognition. To get more accurate results, an EEG sensor must be added; the reasoning is that EEG readings remain unaffected by external appearances and behaviors [31]. Generally, EEG signals are generated from different sections of the brain. To get accurate results, the testing analyzes the EEG using spatio-temporal brain data (STBD), recording the activity of the neurons the brain evokes. The approach to collecting STBD data is subject to contingent and noncontingent testing methods [31]. In this study, two methodologies were proposed.
The methodologies were:
- Develop a training model and a feature extractor; this method improves the classification performance of emotion detection.
- Examine the proposed model using the theta, alpha, beta, and gamma bands [31].
The datasets used with the above methodologies were:
- DEAP: a database created using EEG recordings from 32 participants (16 females and 16 males).
- SEED: a database created at the Brain-Like Computing and Machine Intelligence Laboratory (BCMI) [31].
Feature extraction uses the EEG signal. Studies have shown that EEG signals associated with emotions play a critical role in designing human brain-computer interfaces. Six features were used for the evaluation of emotional performance.
The six features were:
- Hjorth parameters: statistical properties used for time-domain analysis in signal processing.
- Power spectral density (PSD): the average energy distribution per unit of time over different frequency bands.
- Differential entropy (DfE): the differential entropy h(X) of a continuous random variable X.
- Rational asymmetry (RASM): the ratio of differential entropies over symmetric electrode pairs, RASM = h(X_left) / h(X_right).
- Differential asymmetry (DASM): the difference of differential entropies over symmetric electrode pairs, DASM = h(X_left) - h(X_right).
- Linear formulation of differential entropy (LF-DfE): based on the fourth-order spectral moment [31]. (A short sketch of the entropy-based features follows.)
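The sketch below assumes NumPy and the Gaussian approximation common in the EEG-emotion literature, under which the differential entropy of a band-filtered channel is h(X) = 0.5 ln(2 pi e sigma^2).

```python
import numpy as np

def differential_entropy(x: np.ndarray) -> float:
    """Differential entropy of a band-filtered channel, Gaussian approximation."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x))

def dasm(left: np.ndarray, right: np.ndarray) -> float:
    # Differential asymmetry: h(X_left) - h(X_right) over a symmetric pair.
    return differential_entropy(left) - differential_entropy(right)

def rasm(left: np.ndarray, right: np.ndarray) -> float:
    # Rational asymmetry: h(X_left) / h(X_right) over a symmetric pair.
    return differential_entropy(left) / differential_entropy(right)
```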
Challenges or Gaps
Many studies have shown that common challenges recur when trying to detect stress with Facial Expression Recognition alone. Even though there are familiar facial landmarks on the human face, there are also many variations of the face. Age, race, gender, obstructions, and culture must all be factored into how facial expressions indicate stress. People age differently, and the differences range significantly with age and gender. Culture, exposure to disease, drug abuse, and overall health also play a role. Human faces differ drastically across sex and ethnicity. All these differences must be factored in when determining the exact expression and whether the person is stressed. Using a database with an abundance of different facial expressions can be beneficial. Some data biases contribute to the challenges because of the many ways data can be collected under different conditions [32]. Additionally, previous studies have used a set of standard algorithmic pipelines for facial expression recognition, all of which focus on traditional methods; deep learning methods are hardly used at all [32].
Noticeable limitations have also been discovered when using specific classifiers to analyze Facial Expression Recognition software results. The Convolutional Neural Network and AlexNet classifiers consumed extensive memory when analyzing a large dataset, which posed a problem. Additionally, several external factors of social, emotional, and mental history created more difficulty in pinpointing the exact emotion or stress trigger captured from facial expression recognition [32]. Other challenges discovered include the many techniques that have been neglected, such as edge vision-inspired deep learning and Artificial Intelligence-based FER technologies [33].
Other challenges in using FER are explicitly defined in the datasets, such as illumination, face pose, occlusion, aging, and low resolution. Defining the exact expression or emotion on a human face can be challenging even between humans; when artificial intelligence is combined with deep learning, detection can become even more difficult [33]. The challenge with datasets is the limited availability and scarcity of large datasets containing all the necessary captured expressions. Illumination is a problem because variations in lighting at different angles and locations can be numerous and complicated at best. The position of the face, or pose, is an additional challenge because it changes with head movement, angle, lighting, and camera location. Occlusion refers to portions of the face being hidden by hats, glasses, facial hair, or face masks; these obstructions commonly cause the recognition process to fail [33].
Aging refers to changes in facial features as a person gets older; compensating for it in facial expression recognition requires extensive training data. Finally, low resolution in captured images or video makes the face itself difficult to see [33]. A common mitigation for several of these nuisance factors is sketched below.
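One standard mitigation (a general technique, not one proposed in [33]) is to simulate these nuisance factors during training. This torchvision pipeline jitters illumination, randomly erases patches to mimic occlusion, and downsamples then upsamples to mimic low resolution; the parameter values are illustrative, not tuned.

```python
# Training-time augmentation simulating illumination, pose, low-resolution,
# and occlusion variation for facial expression images.
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.ColorJitter(brightness=0.4, contrast=0.4),  # lighting variation
    transforms.RandomRotation(15),                         # mild pose change
    transforms.Resize(24),                                 # simulate low resolution
    transforms.Resize(48),                                 # back to model input size
    transforms.ToTensor(),
    transforms.RandomErasing(p=0.5, scale=(0.02, 0.2)),    # simulate occlusion
])
```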
Recent studies and experiments show that older, conventional facial emotion recognition models are less accurate. Conventional models also misinterpret emotions, for example labeling disgust as happiness or anger as surprise; in present-day real-time detection this produces less desirable results [34]. How such confusions are typically quantified is sketched below. These studies show that correctly characterizing facial emotions is essential for application features and new technologies.
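A brief sketch of the usual way to quantify such misinterpretations is a confusion matrix over predicted versus true emotion labels. The label set and predictions below are fabricated for illustration only.

```python
# Confusion matrix exposing systematic emotion confusions.
from sklearn.metrics import confusion_matrix

labels = ["anger", "disgust", "happiness", "surprise"]
y_true = ["disgust", "anger", "happiness", "anger", "disgust"]
y_pred = ["happiness", "surprise", "happiness", "anger", "happiness"]

# Rows are true labels, columns are predictions; off-diagonal cells such as
# (disgust -> happiness) correspond to the misinterpretations described above.
print(confusion_matrix(y_true, y_pred, labels=labels))
```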
Overcoming Challenges
Conventional facial emotion recognition applications cannot capture the correct emotion, particularly when occlusion occurs; as described earlier, occlusion is the obstruction of the face by glasses, masks, mustaches, beards, or other coverings [34]. Recent studies have shown that the surge of wireless mobile and electronic devices has produced a large amount of video data online and in the cloud, which allows facial expression recognition to be applied in many settings: classroom technology, security surveillance, employment, and identity verification. As technology has advanced, the reasons for using facial expression recognition have also evolved. Traditional methods, however, have not kept pace, and there is a need for real-time facial expression detection [35]. Studies show that traditional FER methods focus only on the texture information received or on key extraction points for recognizing an expression within an image. One experiment tried to alleviate these problems by proposing a Multi-region Attention Transformation Framework (MATF) that blends the captured texture information with multi-dimensional feature extraction [35]. The results show that fusing the multi-region framework with an image's texture yielded high accuracy in recognizing the actual expression [35].
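The MATF architecture of [35] is not reproduced here; the following loose PyTorch sketch only illustrates the general multi-region attention idea it builds on: crops of facial sub-regions each pass through a shared encoder, and learned attention scores weight each region's contribution before classification. All layer sizes and the three-region split are hypothetical.

```python
# Loose sketch of multi-region attention fusion for expression recognition.
import torch
import torch.nn as nn

class MultiRegionAttention(nn.Module):
    def __init__(self, feat_dim: int = 128, num_classes: int = 7):
        super().__init__()
        self.encoder = nn.Sequential(             # shared per-region encoder
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 16, feat_dim), nn.ReLU(),
        )
        self.attention = nn.Linear(feat_dim, 1)   # one score per region
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, regions: torch.Tensor) -> torch.Tensor:
        # regions: (batch, num_regions, 1, H, W) grayscale crops
        b, r = regions.shape[:2]
        feats = self.encoder(regions.flatten(0, 1)).view(b, r, -1)
        weights = torch.softmax(self.attention(feats), dim=1)   # (b, r, 1)
        fused = (weights * feats).sum(dim=1)      # attention-weighted fusion
        return self.classifier(fused)

# Example: batch of 2 faces, 3 hypothetical regions (eyes, nose, mouth) at 32x32.
logits = MultiRegionAttention()(torch.randn(2, 3, 1, 32, 32))
print(logits.shape)  # torch.Size([2, 7])
```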
In other studies using Automated Facial Expression Analysis (AFEA), which combines facial expression recognition and emotion identification, results were not as accurate as previously thought. The process is not straightforward because of the varied characteristics of each human face: differences in gender, ethnicity, age, and occlusion, the facial obstructions (hair, hats, masks, glasses) explained earlier [36]. Machine learning, deep learning, and computer vision, along with new technologies, are trying to solve the problem of identifying human emotions accurately and in real time [35].
A different study proposed a more efficient method for identifying the many different human emotions. Computer applications may seem to offer a faster way of communicating, but in doing so they can alter responses in various encounters, usually depending on the user's emotional state [37]. The challenge with this method is that accurate results require a large amount of test data and keywords as well as a graphics processing unit (GPU); with a higher-performing computer, more accurate results may be achieved, as sketched below.
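A small, hedged sketch of that hardware consideration, assuming PyTorch and a stand-in linear model rather than any specific FER network: fall back to CPU when no GPU is present, and batch the inputs so a large test set can still be processed (more slowly) on modest hardware.

```python
# Device selection and batched inference for a large test set.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torch.nn.Linear(48 * 48, 7).to(device)  # stand-in for a FER model

inputs = torch.randn(10_000, 48 * 48)           # hypothetical test set
with torch.no_grad():
    for batch in inputs.split(256):             # smaller batches fit small GPUs
        logits = model(batch.to(device))
```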
Deep learning techniques are powerful enough to learn the numerous facial features currently used in facial expression recognition applications. However, varied facial characteristics still create problems when deep learning is applied to current datasets: models trained on the available datasets are not yet accurate enough to match human performance [36]. One widely used response is sketched below.
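One widely used response to limited FER datasets, offered here as a general technique rather than a method from [36], is transfer learning: start from a network pretrained on a large generic image corpus and fine-tune only the final layers on the smaller expression dataset.

```python
# Transfer learning: freeze a pretrained backbone, retrain only the head.
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained feature extractor; the small FER dataset then only
# has to fit the new classification head for the 7 basic expressions.
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 7)  # trainable replacement head
```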
Conclusion
In conclusion, stress and anxiety are two forms of physical and psychological tension. These uncomfortable, dangerous forms of tension can affect someone's daily work performance [38]. In cybersecurity, the safety of critical information lies in these professionals' hands.
This study aims to detect stress levels and learn what triggers stress in various situations. The final goal is to offer comfort and resolutions that keep stress at a minimum during the workday, protecting both the employee's health and well-being and the digital information he or she guards from cyber-attacks.
As previously mentioned, the various stress-detection methods use different biosensors that capture data from multiple parts of the body. These stressful situations can also be captured and monitored on video cameras set up during simulations. Described in detail above are the biosensors used to capture blood pressure, skin resistance, heart rate, brain waves, and muscle responses. These physiological responses are also reflected on the face through different expressions, so alongside video recording, facial expression recognition software will be used to analyze each expression. Each of these features will help employers better detect stress in their employees during the workday, especially in high-level cybersecurity professions.
Many methods and technologies are used to detect emotional stress in individuals. Studies have shown that the most reliable, high-accuracy results come from combining multiple modalities and sensors to capture data [9]. Studies have also shown that emotions are critical in motivation, perception, and decision-making.
If emotions run "high" because of a prolonged stressful situation, the employee may be prone to making a grave mistake or to dismissing the severity of a problem [8]. Affective Computing is a growing field of technology that can effectively detect stress and various emotional states using multi-modal sensors to capture data [9]. This literature review has sought to survey exhaustively the studies on stress, anxiety, and emotional health and how each, left untreated, can negatively affect the cybersecurity professionals of our society.
References
- Cho J, Yoo J, Lim J. Analysis of job stress’s impact on job performance and turnover of cybersecurity professionals. 2020.
- Piwowarski. 2022.
- Chickerur S, Hunashimore AM. A study on detecting stress using facial expressions, emotions and body parameters.
- Parab AN, Savla DV, Gala JP, Kekre KY. Stress and emotion analysis using IoT and deep learning. IEEE. 2020.
- Craigen D, Diakun-Thibault N, Purse R. Defining cybersecurity. Technology Innovation Management Review. 2014. 4: 13-21.
- Giannakakis G, Pediaditis M, Manousos D, Kazantzaki E, Chiarugi F, et al. Stress and anxiety detection using facial cues from videos. Biomedical Signal Processing and Control. 2017. 31: 89-101.
- Hadar U, Steiner TJ, Grant EC, Rose FC. Head movement correlates of juncture and stress at sentence level. Language and Speech. 1983. 26: 117-129.
- Hosseini SA, Khalilzadeh MA. Emotional stress recognition system using EEG and psychophysiological signals: Using new labelling process of EEG signals in emotional stress state. 2010. 1-6.
- Greene S, Thapliyal H, Caban-Holt A. A survey of affective computing for stress detection: Evaluating technologies in stress detection for better health. IEEE Consumer Electronics Magazine. 2016. 5: 44-56.
- Koussaifi M, Habib C, Makhoul A. Real-time stress evaluation using wireless body sensor networks. 2018. 37-39.
- Xu C, Yan C, Jiang M, Alenezi F, Alhudhaif A, et al. A novel facial emotion recognition method for stress inference of facial nerve paralysis patients. Expert Systems with Applications. 2022. 197: 116705.
- Abu Baker Siddique Akhonda M, Foorkanul Islam S, Shehab Khan A, Ahmed F, Mostafizur Rahman M. Stress detection of computer user in office like working environment using neural network. 2014. 174-179.
- Sajjad M, Ullah FUM, Ullah M, Christodoulou G, Alaya Cheikh F, et al. A comprehensive survey on deep facial expression recognition: Challenges, applications, and future guidelines. Alexandria Engineering Journal. 2023. 68: 817-840.
- Abu Baker Siddique Akhonda M, Foorkanul Islam S, Shehab Khan A, Ahmed F, Mostafizur Rahman M. Stress detection of computer user in office like working environment using neural network. 2014. 174-179.
- Zenonos A, Khan A, Kalogridis G, Vatsikas S, Lewis T, et al. HealthyOffice: Mood recognition at work using smartphones and wearable sensors. 2016. 1-6.
- Agrafioti F, Hatzinakos D, Anderson AK. ECG pattern analysis for emotion detection. IEEE Transactions on Affective Computing. 2012. 3: 102-115.
- Reanaree P, Tananchana P, Narongwongwathana W, Pintavirooj C. Stress and office-syndrome detection using EEG, HRV and hand movement. 2016. 1-4.
- Liao C, Chen R, Tai S. Emotion stress detection using EEG signal and deep learning technologies. 2018. 90-93.
- Nita S, Bitam S, Heidet M, Mellouk A. A new data augmentation convolutional neural network for human emotion recognition based on ECG signals. Biomedical Signal Processing and Control. 2022. 75: 103580.
- Tivatansakul S, Ohkura M. Improvement of emotional healthcare system with stress detection from ECG signal. 2015.
- Mohan P, Nagarajan V. Stress measurement from wearable photoplethysmographic sensor using heart rate variability data. 2016.
- Yu J, Kwon S, Park S, Jun J, Pyo C. Design and implementation of real-time bio signals management system based on HL7 FHIR for healthcare services. 2021. 1-6.
- Facial action coding system (FACS).
- Fukazawa Y, Ito T, Okimura T, Yamashita Y, Maeda T, et al. Predicting anxiety state using smartphone-based passive sensing. Journal of Biomedical Informatics. 2019. 93: 103151.
- Moschona DS. An affective service based on multi-modal emotion recognition, using EEG enabled emotion tracking and speech emotion recognition. 2020.
- Alvarez-Gonzalez N, Kaltenbrunner A, Gómez V. Emotion-core: An open source framework for emotion detection research. Elsevier BV. 2021.
- Gannouni S, Aledaily A, Belwafi K, Aboalsamh H. Electroencephalography-based emotion detection using ensemble classification and asymmetric brain activity. Journal of Affective Disorders. 2022. 319: 416-427.
- Piwowarski M, Wlekły P. Factors disrupting the effectiveness of facial expression analysis in automated emotion detection. Elsevier BV. 2022.
- Li S, Deng W. Deep facial expression recognition: A survey. IEEE Transactions on Affective Computing. 2022. 13: 1195-1215.
- Durga BK, Rajesh V. A ResNet deep learning based facial recognition design for future multimedia applications. Computers & Electrical Engineering. 2022. 104: 108384.
- Guo Y, Huang J, Xiong M, Wang Z, Hu X, et al. Facial expressions recognition with multi-region divided attention networks for smart education cloud applications. Neurocomputing (Amsterdam). 2022. 493: 119-128.
- Sarvakar K, Senkamalavalli R, Raghavendra S, Santosh Kumar J, Manjunath R, et al. Facial emotion recognition using convolutional neural networks. Materials Today: Proceedings. 2021.
- O'Donnell BF, Hetrick WP. Psychophysiology of mental health. In Friedman HS (Ed.), Encyclopedia of Mental Health. 2016. 372-376.
- Gupta A, Arunachalam S, Balakrishnan R. Deep self-attention network for facial emotion recognition. Procedia Computer Science. 2022. 171: 1527-1534.
- Nan Y, Ju J, Hua Q, Zhang H, Wang B. A-MobileNet: An approach of facial expression recognition. Alexandria Engineering Journal. 2022. 61: 4435-4444.
- Piwowarski. 2022.
- Hull. 2017.
- Tiwari P, Veenadhari S. An efficient classification technique for automatic identification of emotions leading to stress. 2022.
- Koussaifi M, Habib C, Makhoul A. Real-time stress evaluation using wireless body sensor networks. 2018. 37-39.
- Tawari A, Trivedi MM. Face expression recognition by cross modal data association. IEEE Transactions on Multimedia. 2013. 15: 1543-1552.
- Widanti N, Sumanto B, Rosa P, Fathur Miftahudin M. Stress level detection using heart rate, blood pressure, and GSR and stress therapy by utilizing infrared. 2015. 275-279.
- Zhang J, Mei X, Liu H, Yuan S, Qian T. Detecting negative emotional stress based on facial expression in real time. 2019. 430-434.
- Tawari A, Trivedi MM. Face expression recognition by cross modal data association. IEEE Transactions on Multimedia. 2013. 15: 1543-1552.
- Patil VK, Pawar VR, Randive S, Bankar RR, Yende D, et al. From face detection to emotion recognition on the framework of raspberry pi and galvanic skin response sensor for visual and physiological biosignals. Journal of Electrical Systems and Information Technology. 2023. 10: 24.
- Liao C, Chen R, Tai S. Emotion stress detection using EEG signal and deep learning technologies. 2018. 90-93.
- Sengupta K. Stress detection: A predictive analysis. 2021. 1-6.
- Wang Q, Wang M, Yang Y, Zhang X. Multi-modal emotion recognition using EEG and speech signals. Computers in Biology and Medicine. 2022. 149: 105907.