A Study on the Accuracy of Micro Expression Based Deception Detection with Hybrid Deep Neural Network Models
This article details a study on enhancing deception detection accuracy by using Hybrid Deep Neural Network (HDNN) models. The research, focusing on fear-related micro-expressions, utilizes a diverse dataset of responses to high-stakes questions. It analyzes facial action units (AUs) and pupil size variations through data preprocessing and feature extraction. The HDNN model outperforms the traditional Convolutional Neural Network (CNN) with a 91% accuracy rate. The findings’ implications for security, law enforcement, psychology, and behavioral treatments are discussed. Ethical considerations of deception detection technology deployment and future research directions, including cross-cultural studies, real-world assessments, ethical guidelines, studies on emotional expression dynamics, “explainable AI” development, and multimodal data integration, are also explored. The study contributes to deception detection knowledge and highlights the potential of machine learning techniques, especially HDNN, in improving decision-making and security in high-stakes situations.
Introduction
Deception detection has long been a subject of great interest and significance, with applications ranging from security and law enforcement to psychology and therapy. Accurate identification of deceptive behavior is crucial in high-stakes scenarios, where the consequences of misjudgment can be substantial. While traditional deception detection methods have proven valuable, recent advances in machine learning and deep neural networks have opened new avenues for improving accuracy.
This article presents the findings of a comprehensive study that leveraged Hybrid Deep Neural Network (HDNN) models and multimodal micro-expression analysis [1] to enhance the accuracy of deception detection. Micro-expressions, fleeting and involuntary facial expressions that occur in response to concealed emotions, provide a rich source of information for discerning truth from deception. By integrating information from multiple modalities, such as facial action units (AUs), left and right pupil size variation, and other cues, this research aimed to achieve a higher level of accuracy in deception detection.
The central hypothesis of this study was that HDNN models, with their ability to capture complex and hierarchical features, would outperform traditional Convolutional Neural Networks (CNNs) in accurately identifying deceptive behavior based on micro-expressions. The inclusion of features like AUs and pupil size variation, known to be associated with deceptive behavior, further enhanced the potential of HDNN models.
To rigorously test this hypothesis, a series of experiments were conducted, employing a diverse dataset of individuals responding to questions in high-stakes situations, both truth-telling and lying. The results, as detailed in subsequent sections, demonstrate a significant improvement in accuracy rates achieved by HDNN models when compared to CNN models. Beyond their implications for deception detection, these research results can have a practical impact on broader applications of HDNN models in various fields, including security, law enforcement, psychology, and behavioral treatments [2]. The study also addresses the ethical considerations surrounding the use of advanced technologies in assessing human behavior.
Related Works
Prior Studies
The field of deception detection, particularly through the analysis of micro-expressions, has witnessed significant research endeavors aimed at enhancing accuracy and reliability. Prior studies have delved into various aspects of this domain, including the identification of specific facial action units (AUs) associated with deceptive behavior [3]. Additionally, research has explored the application of deep learning techniques, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs) [4], for deception detection, showcasing promising results [5]. Quantitative modeling of the leakage theory has shed light on how high-stake lies can manifest as ‘leakage’ in physiological changes, including micro-expressions lasting for fractions of seconds [6]. These insights have contributed to the theoretical foundations of the present study, which seeks to further advance deception detection accuracy through the utilization of Hybrid Deep Neural Network (HDNN) models and the integration of multi-modal data, including facial expressions and pupil size, as essential cues in the analysis [7].
Quantitative Modeling of Leakage Theory
The quantitative modeling of leakage theory has played a pivotal role in understanding the subtle manifestations of deception, particularly in high-stakes situations where the consequences of dishonesty are significant. This theory posits that high-stake lies, where the rewards come with serious consequences or the potential for severe punishments, can result in “leakage” of deception into physiological changes or observable behaviors [8]. One such manifestation is micro-expressions, fleeting facial expressions that last for a fraction of a second. These micro-expressions, often undetectable to the naked eye, can serve as valuable indicators of concealed emotions or untruthfulness [9]. By drawing parallels between the quantitative modeling of leakage theory and the dynamics of micro-expressions, our research explores the intricate relationship between deception, risk, and physiological responses. This connection underscores the relevance of quantitative modeling principles in uncovering subtle cues that can significantly enhance deception detection accuracy.
Quantitative modeling provides a rigorous and systematic approach to uncovering concealed emotions and behaviors in high-stakes scenarios, with micro expressions serving as a valuable component of this analysis [10]. This alignment enhances the accuracy, objectivity, and ethical considerations of deception detection, making it a crucial area of study in fields like security, law enforcement, psychology, and beyond. Micro expressions are ultra-brief facial expressions that typically last for a fraction of a second (1/25 to 1/5 s). Importantly, they are involuntary, meaning that individuals often cannot control them consciously. When someone is trying to deceive by concealing their true emotions or intentions, these micro-expressions can “leak” out despite their best efforts to maintain a neutral or false facial expression. Micro-expressions are incredibly subtle and can reveal genuine emotions that contradict the deceptive behavior or statements of an individual. These brief facial cues may include flashes of fear, anger, sadness, or other emotions that are inconsistent with the intended deception. Detecting these subtle indicators through careful analysis can provide crucial insights into concealed deception.
Problem Statement, Hypothesis Statement, and Research Question
Problem Statement
Deception detection in high-stakes scenarios, such as security screenings, legal proceedings, and interpersonal interactions, presents a significant challenge. Traditional methods for identifying deception often rely on verbal cues and observable behaviors, which can be intentionally controlled by deceptive individuals.
To address this challenge and enhance the accuracy of deception detection, there is a growing need to explore advanced technologies and methodologies, particularly the integration of HDNN models, for the analysis of micro-expressions and physiological cues as potential indicators of concealed deception.
Hypothesis Statement
There is a difference in deception detection accuracy between traditional CNN models and HDNN models when applied to the analysis of micro-expressions, including facial expressions, pupil size, and other physiological cues, in high-stakes scenarios.
Research Question
Do HDNN models, integrating multi-modal data including micro-expressions, pupil size, and other physiological cues, significantly enhance deception detection accuracy compared to traditional CNN models in high-stakes scenarios?
Methodology
Method
The selection of a quantitative research method for demonstrating the improvement in the accuracy rate of micro-expression-based deception detection using HDNN models [11] is driven by several key reasons. Quantitative research allows for the precise and objective measurement of variables; in the context of deception detection, it enables the quantification of accuracy rates, performance metrics, and physiological responses associated with micro-expressions [12]. The research aims to test a specific hypothesis: that HDNN models can improve the accuracy rate of deception detection compared to other methods. Quantitative research provides a structured approach for hypothesis testing through statistical analysis.
Quantitative research aims to generate findings that are generalizable to a broader population or context. Demonstrating the effectiveness of HDNN models in improving deception detection accuracy can have broader implications beyond the specific dataset or scenario studied. Deception detection is a high-stakes area where objectivity is crucial. Quantitative methods enable an objective assessment of the performance of HDNN models, reducing potential biases associated with subjective judgments. When comparing the performance of HDNN models with other models, quantitative methods provide a clear way to quantify the extent of improvement in accuracy rates, making it easier to convey the benefits of HDNN models.
Accuracy, precision, recall, F1-score, and other quantitative metrics can be used to assess how well the model detects deception. Specifically, the study analyzes and assesses the role of facial action units (AUs) associated with fear (e.g., AU20) in improving the accuracy of deception detection. This is crucial for assessing the model's effectiveness and determining whether the HDNN outperforms other models, such as CNN models. Developing an HDNN model with a quantitative approach also allows researchers to demonstrate its potential applicability in high-stakes scenarios beyond the specific dataset used in the study.
Population and Sample
This study offers insights into the population and samples used for improving deception detection accuracy through HDNNs in the analysis of micro-expressions. The population consists of participants from the TV game show ‘The Moment of Truth’ (IMDb, 2008), while the sample comprises 32 video clips, evenly split between instances of truth-telling and deception. These clips yield a dataset of 53,787 records, with the primary data points being facial action units (AUs). The Facial Action Coding System (FACS) serves as the standard for coding facial expressions [13]–[15]. Subsequent statistical analyses are carried out to identify notable differences in micro-expressions between different groups [16].
Previous research has pinpointed specific Action Units (AUs) like AU1, AU2, AU4, AU12, AU15, and AU45 as potential markers for distinguishing between individuals who are lying and those telling the truth in situations with significant consequences. For example, dishonest individuals tend to show increased activation of AUs associated with negative emotions, such as fear or sadness, including AU1, AU2, AU4, and AU15, while truth-tellers might exhibit activation of AUs linked to positive emotions like happiness or enjoyment, such as AU12 [17]. Moreover, there is an indication that in high-stakes situations, deception could be associated with reduced blinking frequency or shorter blink durations.
Definitions and Formula
Definition 1: Confusion Matrix Evaluation–After training the HDNN model on micro-expression data, it undergoes assessment using a validation or test dataset. For each instance in this dataset, the HDNN model makes predictions (e.g., “Truth” or “Lie”) and compares them to actual labels (ground truth). This comparison produces a confusion matrix [18] with four entries:
–True Positive (TP): Instances correctly predicted as “Lie” by the HDNN when they are indeed “Lie.”
–True Negative (TN): Instances correctly predicted as “Truth” by the HDNN when they are actual “Truth,” as shown in Code 8.
–False Positive (FP): Instances incorrectly predicted as “Lie” by the HDNN when they are actually “Truth.”
–False Negative (FN): Instances incorrectly predicted as “Truth” by the HDNN when they are actually “Lie.”
Definition 2: The following metrics provide valuable insights into HDNN’s performance [19]:
–High Accuracy: This signifies effective model performance, although class imbalance should be considered [20].
–Precision: High precision indicates accurate identification of “Lie” instances, while low precision suggests more false positives [21].
–Recall: High recall reflects the model’s effectiveness in capturing “Lie” instances, whereas low recall implies missed “Lie” instances [22].
–F1-Score: Balances precision and recall, with a higher F1 score indicating a better balance.
–Specificity: Indicates the model’s ability to correctly identify “Truth” instances.
Definition 3: As previously mentioned, the confusion matrix entries TP, TN, FP, and FN play a vital role in calculating various performance metrics:
–Accuracy = (TP + TN) / (TP + TN + FP + FN)
–Precision = TP / (TP + FP)
–Recall = TP / (TP + FN)
–F1-Score = 2 × (Precision × Recall) / (Precision + Recall)
–Specificity = TN / (TN + FP)
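The metrics in Definition 3 can be computed directly from the four confusion-matrix entries. The following is a minimal Python sketch (the study itself used R); the counts passed in at the bottom are hypothetical and for illustration only.

```python
# Compute the Definition 3 metrics from confusion-matrix entries
# for a binary "Lie" (positive) vs. "Truth" (negative) task.
def confusion_metrics(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)            # of predicted "Lie", how many were lies
    recall = tp / (tp + fn)               # of actual lies, how many were caught
    f1 = 2 * precision * recall / (precision + recall)
    specificity = tn / (tn + fp)          # of actual truths, how many were caught
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1, "specificity": specificity}

# Hypothetical counts, not taken from the study's data.
m = confusion_metrics(tp=45, tn=46, fp=4, fn=5)
print(m["accuracy"])   # 0.91
```

Note that accuracy alone can mislead under class imbalance, which is why Definition 2 pairs it with precision, recall, F1, and specificity.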
Experiment and Results
Results
Three classifiers, specifically Random Forest, k-Nearest Neighbors (k-NN), and Bagging, are utilized for training and evaluation purposes. Each machine learning model, whether k-Nearest Neighbors or Bagging, undergoes training via a 10-fold cross-validation [23] procedure within the R environment. In this procedure, the data collected from 12 participants is divided into 10 subsets, and the model experiences 10 rounds of training and testing cycles [24]. The performance analysis is based on the conventional approach. Furthermore, a paired t-test is executed to examine the statistical distinctions among Action Units (AUs) linked with fear when comparing truth-telling and lying video clips. This approach provides a robust means of assessing whether significant differences exist in fear expressions under these conditions [25]. To address the issue of multiple testing, a Bonferroni correction is applied, with a p-value threshold set at 0.007. The results of the paired t-test for all AUs associated with fear consistently indicate a significant differentiation between truth-telling and lying AUs of fear, with a p-value of 0.0070 even after Bonferroni correction, as verified in RStudio.
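The paired-test-plus-Bonferroni procedure can be sketched as follows. This is an illustrative Python equivalent of the R analysis, run on synthetic data (the per-clip AU intensities and the effect size below are invented for demonstration, not the study's measurements); the Bonferroni threshold of 0.05/7 ≈ 0.007 matches the one used in the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-clip mean intensities of one fear-related AU (e.g., AU20)
# for matched truth-telling and lying clips from the same participants.
truth = rng.normal(loc=0.40, scale=0.10, size=16)
lying = truth + rng.normal(loc=0.15, scale=0.05, size=16)  # injected effect

# Paired t-test: each truth clip is matched with a lying clip.
t_stat, p_value = stats.ttest_rel(truth, lying)

# Bonferroni correction for testing 7 fear-related AUs:
alpha, n_tests = 0.05, 7
threshold = alpha / n_tests          # ~0.007, as in the study
print(p_value < threshold)
```

Dividing alpha by the number of AU comparisons keeps the family-wise false-positive rate at 5% across all seven tests.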
The outcome reveals a Confidence Level of 95% along with a BCa (Bias-Corrected and Accelerated) interval of (0.1468, 0.4154). This 95% confidence interval (BCa = (0.1468, 0.4154)) furnishes vital insights into the statistical analysis [26]. The 95% confidence level signifies that the research holds a 95% degree of confidence in the true value of the statistic residing within the provided interval. The BCa interval, adjusted for potential bias and skewness in the bootstrapped distribution, falls within the range of (0.1468, 0.4154), making it particularly reliable, especially for smaller or skewed datasets. Given that both 0.1468 and 0.4154 are entirely above zero (assuming this measures a difference or effect size), it implies a statistically significant effect or difference under scrutiny.
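A BCa interval like the one above can be obtained with a standard bootstrap routine. The sketch below uses SciPy's `bootstrap` with `method='BCa'` on synthetic paired differences (the data are invented for illustration; only the procedure mirrors the study's analysis). The key check is the same as in the text: whether the whole interval lies above zero.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical paired differences (lying minus truth) in AU intensity.
diffs = rng.normal(loc=0.28, scale=0.15, size=32)

# Bias-corrected and accelerated (BCa) bootstrap CI for the mean difference.
res = stats.bootstrap((diffs,), np.mean, confidence_level=0.95,
                      method='BCa', random_state=rng)
low, high = res.confidence_interval

# If the entire 95% interval is above zero, the effect is significant.
print(low > 0)
```

The BCa adjustment corrects the plain percentile interval for bias and skewness in the bootstrap distribution, which is why the text singles it out as reliable for smaller or skewed datasets.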
To establish the HDNN model, two libraries must be installed: 'keras' and 'tidyverse'. Additionally, the Anaconda distribution is required to provide the Python backend for this multi-layered HDNN model. The significance of the HDNN model formula lies in its role as a foundational tool for building, customizing, and understanding the neural network used for deception detection based on micro-expressions. It promotes transparency, reproducibility, and flexibility, all of which are essential for robust and ethical machine-learning research. The model is defined with the following code:
model <- keras_model_sequential() %>%
  layer_dense(units = 64, activation = 'relu', input_shape = c(input_shape)) %>%
  layer_dropout(rate = 0.5) %>%
  layer_dense(units = 32, activation = 'relu') %>%
  # Study-specific custom layers (not part of stock keras) that inject the
  # left- and right-pupil features into the network:
  layer_left_pupil(units = 16, activation = 'relu') %>%
  layer_right_pupil(units = 16, activation = 'relu') %>%
  layer_dense(units = 1, activation = 'sigmoid')
The HDNN model is created by first splitting the dataset into training and testing sets and then creating bootstrapped samples of the training data. The fear-related AUs together with the left and right pupil features define the architecture of the HDNN, and a separate CNN model is trained on each bootstrapped dataset. The results show a similar confidence level of 95% with BCa = (0.1544, 0.4732). The same interpretation applies: both 0.1544 and 0.4732 lie entirely above zero, implying a statistically significant effect or difference being measured. Fig. 1 shows the histogram of this analysis for AU20. This study repeated the same process for the other features (such as AU26, Lpupil, and Rpupil), and the results show the same confidence level and reliability.
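The "hybrid" aspect of the architecture is that AU features and pupil features flow through separate branches before being merged for a single sigmoid decision. The following NumPy sketch of one forward pass illustrates that structure only; all layer sizes, weights, and inputs are invented for demonstration and are not the study's trained model.

```python
import numpy as np

rng = np.random.default_rng(42)

def dense(x, w, b, activation):
    """One fully connected layer with relu or sigmoid activation."""
    z = x @ w + b
    if activation == "relu":
        return np.maximum(z, 0.0)
    return 1.0 / (1.0 + np.exp(-z))   # sigmoid

# Illustrative single sample: 7 AU intensities plus left/right pupil sizes.
au = rng.random((1, 7))
lpupil = rng.random((1, 1))
rpupil = rng.random((1, 1))

# Separate branch per modality, then a merged sigmoid head.
h_au = dense(au, rng.standard_normal((7, 16)), np.zeros(16), "relu")
h_l = dense(lpupil, rng.standard_normal((1, 4)), np.zeros(4), "relu")
h_r = dense(rpupil, rng.standard_normal((1, 4)), np.zeros(4), "relu")
merged = np.concatenate([h_au, h_l, h_r], axis=1)   # shape (1, 24)
p_lie = dense(merged, rng.standard_normal((24, 1)), np.zeros(1), "sigmoid")
print(0.0 < float(p_lie) < 1.0)   # sigmoid output is a lie probability
```

Keeping the modalities in separate branches lets each learn its own low-level representation before the merged layer combines them, which is the design rationale the text attributes to the HDNN.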
Fig. 1. Mean of AU20 analysis: (a) for lying-telling group of participants, and (b) for truth-telling group of participants.
The correlation analysis between the 'Fear' feature, represented by AU20, and the 'lying' group, as reported in Liu et al. [27], yielded the following results: a correlation coefficient of −0.0167 and a statistically significant p-value of 0.0001. When applying the Random Forest classifier, the model achieved an average accuracy rate of approximately 87%, with a mean squared residual value of 0.05310285. The correlation table presented in Table I illustrates the relationships between lying and Action Units (AUs) associated with fear. Among these AUs, AU20 and AU26 exhibit the most substantial correlations with lying behavior. These correlations serve the purpose of identifying which AUs could be valuable in distinguishing between truth-telling and lying when training HDNN machine learning models.
|       | Lying     | AU01     | AU02     | AU04     | AU05     | AU07     | AU20    | AU26 |
|-------|-----------|----------|----------|----------|----------|----------|---------|------|
| Lying | 1         |          |          |          |          |          |         |      |
| AU01  | −0.05308  | 1        |          |          |          |          |         |      |
| AU02  | −0.10608  | 0.21279  | 1        |          |          |          |         |      |
| AU04  | 0.08339   | −0.13717 | 0.14546  | 1        |          |          |         |      |
| AU05  | −0.41469  | 0.26899  | 0.150661 | −0.14314 | 1        |          |         |      |
| AU07  | −0.16209  | −0.25534 | 0.238795 | 0.18291  | −0.05978 | 1        |         |      |
| AU20  | −0.46427  | 0.4043   | 0.255451 | −0.06419 | 0.271268 | 0.125538 | 1       |      |
| AU26  | −0.202931 | 0.26711  | 0.154966 | −0.33625 | 0.266501 | −0.19271 | 0.63471 | 1    |
In this context, a positive correlation value, closer to 1, signifies a positive relationship, implying that an increase in the intensity of the AU is linked to a higher likelihood of lying. For instance, AU26, with a correlation coefficient of 0.64, indicates a moderate positive correlation. This suggests that a moderate increase in the intensity of AU26 is moderately associated with a higher probability of lying. Conversely, AUs with correlation coefficients close to 0, such as 0.08, signify very weak or negligible correlations. In other words, changes in these AUs are not significantly related to lying behavior. Additionally, some AUs, like AU05, exhibit a weak negative correlation with a correlation coefficient of −0.38. This implies that a slight increase in the intensity of AU05 is weakly associated with a reduced likelihood of lying.
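A correlation coefficient of this kind can be computed per AU against the lying label. The Python sketch below uses Pearson's r on synthetic frame-level data (the label/intensity values and the injected effect are invented for illustration); in practice each AU column of the dataset would be tested the same way.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical frame-level data: binary lying label (0 = truth, 1 = lie)
# and the intensity of one AU, with a small injected association.
lying = rng.integers(0, 2, size=500)
au_intensity = 0.3 * lying + rng.normal(0.0, 0.5, size=500)

# Pearson correlation between the AU intensity and the lying label.
r, p = stats.pearsonr(lying, au_intensity)

# |r| near 1 -> strong (positive or negative) relationship with lying;
# |r| near 0 -> the AU carries little information about lying.
print(r > 0)
```

Feature selection then keeps the AUs whose |r| is largest (here AU20 and AU26 in Table I) as inputs to the HDNN.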
Summary
Using HDNN models, we achieved a remarkable accuracy rate of 91% in detecting deception based on a micro-expression dataset involving 16 individuals providing both truthful and deceptive responses. The dataset was divided into 80% for training and 20% for testing, highlighting the effectiveness of the HDNN model in detecting deception. This achievement underscores the model's ability to capture crucial data features necessary for accurate classification, as detailed in Beh and Goh [7]. Fig. 2 plays a pivotal role in illustrating how accuracy rates were assessed within the HDNN model. It is important to acknowledge that this study utilizes numerous other libraries and code segments, although they are not exhaustively presented here.
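The 80/20 split mentioned above can be sketched as follows. This Python snippet is illustrative (the study used R); the feature matrix is random placeholder data, with the row count matching the 53,787 records described in the Population and Sample section.

```python
import numpy as np

rng = np.random.default_rng(3)

# Placeholder feature matrix (e.g., AU intensities plus pupil sizes)
# and binary truth/lie labels; sized to the study's 53,787 records.
X = rng.random((53787, 9))
y = rng.integers(0, 2, size=53787)

# Shuffle indices, then take the first 80% for training, last 20% for test.
idx = rng.permutation(len(X))
cut = int(0.8 * len(X))
train_idx, test_idx = idx[:cut], idx[cut:]
X_train, y_train = X[train_idx], y[train_idx]
X_test, y_test = X[test_idx], y[test_idx]
print(len(X_train), len(X_test))
```

Shuffling before splitting matters here because consecutive records come from the same video clip; a contiguous split would leak clip-level structure between the two sets.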
Fig. 2. Results of evaluating the accuracy rate of AUs of fear by using the HDNN model: (a) loss diagram, and (b) accuracy.
Furthermore, Fig. 2 demonstrates the evaluation of accuracy rates for the fear-related Action Units (AUs) within the HDNN model, with the left and right pupil sizes added on top of the AU layers. The evaluation ran for 10 epochs with the following configuration: Model Type (Hybrid Deep CNN), Optimizer ("adam"), Evaluation Metric (Accuracy), and Number of Epochs (10). The training progression and performance of the HDNN model on the AU20 dataset can be summarized as follows.
Fig. 2 provides a visual representation of the HDNN model’s performance across training epochs. At the outset, on the first epoch, the model displayed an accuracy of approximately 0.8226. As the training process advanced, there was a consistent improvement in accuracy with each subsequent epoch. By the time the tenth epoch was reached, the model had achieved an accuracy of approximately 0.9473. This ascending trend in accuracy across epochs indicates that the model effectively learns and enhances its capability to correctly classify data. The final accuracy score of 0.9473 on the tenth epoch signifies the model’s strong performance in the given task. It is standard practice in deep learning to monitor accuracy during training to ensure convergence to a desirable solution. In this instance, the model demonstrated consistent accuracy growth over the ten epochs, indicating successful training. With an average accuracy rate of 91%, the HDNN model excels in classifying data related to deception based on AU20 features. This high average accuracy rate attests to the model’s effectiveness in distinguishing between truth-telling and lying using the selected features. It also suggests that the model has effectively learned significant patterns and representations from the data, which hold value for microexpression-based deception detection.
In the initial training epoch, the model exhibited a relatively high loss of 0.5070, signifying a notable number of errors in its predictions on the training data. Subsequently, in the second epoch, the loss decreased to 0.3270, signaling an improvement in the model’s performance as it made fewer errors compared to the first epoch. This trend continued, with the loss decreasing to 0.2809 in the third epoch, demonstrating the model’s gradual learning and error reduction. By the fourth epoch, the loss further declined to 0.2474, indicating the model’s enhanced ability to fit the training data.
The fifth epoch saw a drop in the loss to 0.2228, reflecting sustained progress in the model’s performance. In the sixth epoch, the loss reduced to 0.2024, signifying the model’s increasing accuracy in predictions. As training advanced to the seventh epoch, the loss reached 0.1857, indicating that the model was converging toward an optimal state. In the eighth epoch, the loss continued to decrease, reaching 0.1714, affirming the model’s effective learning from the training data. In the ninth epoch, the loss further decreased to 0.1550, suggesting that the model was approaching a crucial level of accuracy on the training data. Ultimately, in the tenth epoch, the loss reached 0.1434, confirming that the model had acquired substantial knowledge and was performing exceptionally well on the training data.
Conclusion
The quantitative results obtained through this methodology reveal significant progress in improving deception detection accuracy using micro-expressions. The HDNNs achieved an accuracy rate of 91%, representing a notable advancement compared to previous research findings. This enhancement represents a positive step toward developing more dependable and ethically responsible deception detection solutions. The rigorous statistical validation and ethical considerations highlight the practical relevance and ethical obligations associated with this technology [27]. Notably, the data presented in this study indicates that the HDNN models can outperform the CNN models. This superiority can be attributed to the HDNN model’s ability to capture intricate hierarchical features, resulting in faster convergence and higher accuracy in both training and validation sets.
Furthermore, the study conducted training and assessment for three classifiers (Random Forest, K-nearest neighbours, Bagging) utilizing a 10-fold cross-validation approach. These models integrated features linked to fear expressions, specifically AU1, AU2, AU4, AU5, AU20, and AU26. A distinct test dataset was employed, consisting of data from four participants, to evaluate the model’s accuracy in making novel predictions. Evaluation metrics like the F1-score may further underscore the HDNN model’s superior deception detection accuracy.
Comparative analysis with traditional Convolutional Neural Networks (CNN) revealed the superior performance of the HDNN model, particularly when dealing with specific facial action units (AUs) and data related to pupil size. HDNN model’s ability to capture complex hierarchical features and exhibit faster convergence contributed to its higher accuracy. The study’s incorporation of left and right pupil data as additional features highlighted the potential for further enhancing deception detection accuracy. This research underscores the significance of advanced machine learning techniques, ethical considerations, and the selection of relevant facial action units in the field of deception detection.
In conclusion, the findings of this study offer important insights into the potential of HDNN models to improve the accuracy of micro-expression-based deception detection, with direct implications for applications in security, law enforcement, and psychology. Future research should continue to explore and refine the capabilities of HDNN models and their integration with multi-modal data for even more robust deception detection solutions.
References

[1] Marechal C, Mikolajewski D, Tyburek K, Prokopowicz P, Bougueroua L, Ancourt C, et al. Survey on AI-based multimodal methods for emotion detection. In: High-Performance Modelling and Simulation for Big Data Applications. Vol. 11400. Cham: Springer; 2019. pp. 307–24.
[2] Oh G, Ryu J, Jeong E, Yang JH, Hwang S, Lee S, et al. DRER: deep learning-based driver's real emotion recognizer. Sensors. 2021;21(6):2166.
[3] Adegun IP, Vadapalli HB. Facial micro-expression recognition: a machine learning approach. Scientific African. 2020;8:e00465. doi: 10.1016/j.sciaf.2020.e00465.
[4] Rosenberg EL, Ekman P, editors. What the Face Reveals: Basic and Applied Studies of Spontaneous Expression Using the Facial Action Coding System (FACS). Oxford University Press; 2020.
[5] Agarap AF. Deep learning using rectified linear units (ReLU). arXiv preprint arXiv:1803.08375. 2018.
[6] Baruah SP. Exploring the capability of MS EXCEL for constructing bias-corrected and accelerated (BCa) bootstrap confidence intervals to aid in decision-making during an emergency. Spreadsheets Educ. 2023 Mar 10:1–16.
[7] Beh KX, Goh KM. Micro-expression spotting using facial landmarks. In: 2019 IEEE 15th International Colloquium on Signal Processing & Its Applications (CSPA). IEEE; 2019. pp. 192–7.
[8] Ngugi LC, Abelwahab M, Abo-Zahhad M. Recent advances in image processing techniques for automated leaf pest and disease recognition: a review. Inf Process Agric. 2021;8(1):27–51.
[9] Şen MU, Perez-Rosas V, Yanikoglu B, Abouelenien M, Burzo M, Mihalcea R. Multimodal deception detection using real-life trial data. IEEE Trans Affect Comput. 2020;13(1):306–19.
[10] Shen X, Chen W, Zhao G, Hu P. Recognizing microexpression: an interdisciplinary perspective. Front Psychol. 2019;10:1318.
[11] Yuan Z, Jiang Y, Li J, Huang H. Hybrid-DNNs: hybrid deep neural networks for mixed inputs. arXiv preprint. 2020.
[12] Shuster A, Inzelberg L, Ossmy O, Izakson L, Hanein Y, Levy DJ. Lie to my face: an electromyography approach to the study of deceptive behavior. Brain Behav. 2021;11(12):e2386.
[13] Borza D, Itu R, Danescu R. Micro expression detection and recognition from high speed cameras using convolutional neural networks. In: VISIGRAPP (5: VISAPP); 2018. pp. 201–8.
[14] Cha HS, Choi SJ, Im CH. Real-time recognition of facial expressions using facial electromyograms recorded around the eyes for social virtual reality applications. IEEE Access. 2020;8:62065–75.
[15] Yang J, Liu G, Huang SC. Emotion transformation feature: novel feature for deception detection in videos. In: 2020 IEEE International Conference on Image Processing (ICIP); 2020. pp. 1726–30. doi: 10.1109/ICIP40778.2020.9190846.
[16] Clark EA, Kessinger JN, Duncan SE, Bell MA, Lahne J, Gallagher DL, et al. The facial action coding system for characterization of human affective response to consumer product-based stimuli: a systematic review. Front Psychol. 2020;11:920.
[17] Constâncio AS, Tsunoda DF, Silva HDFN, Silveira JMD, Carvalho DR. Deception detection with machine learning: a systematic review and statistical analysis. PLoS One. 2023;18(2):e0281323.
[18] Denault V, Dunbar NE. Credibility assessment and deception detection in courtrooms: hazards and challenges for scholars and legal practitioners. In: Docan-Morgan T, editor. The Palgrave Handbook of Deceptive Communication. Basingstoke: Palgrave Macmillan; 2019. pp. 915–35.
[19] He W, Jiang Z. A survey on uncertainty quantification methods for deep neural networks: an uncertainty source perspective. arXiv preprint arXiv:2302.13425. 2023.
[20] Khan F. Facial expression recognition using facial landmark detection and feature extraction via neural networks. arXiv preprint arXiv:1812.04510. 2018.
[21] Kollias D, Zafeiriou S. Exploiting multi-CNN features in CNN-RNN based dimensional emotion recognition on the OMG in-the-wild dataset. IEEE Trans Affect Comput. 2021;12(3):595–606. doi: 10.1109/taffc.2020.3014171.
[22] Kosemen C, Birant D. Multi-label classification of line chart images using convolutional neural networks. SN Appl Sci. 2020;2(7):1250.
[23] Zhang J, Yan B, Du X, Guo Q, Hao R, Liu J, et al. Motion magnification multi-feature relation network for facial microexpression recognition. Complex Intell Syst. 2022;8(4):3363–76.
[24] Kouriati A, Moulogianni C, Kountios G, Bournaris T, Dimitriadou E, Papadavid G. Evaluation of critical success factors for enterprise resource planning implementation using quantitative methods in agricultural processing companies. Sustainability. 2022;14(11):6606.
[25] Kraus S, Breier M, Dasí-Rodríguez S. The art of crafting a systematic literature review in entrepreneurship research. Int Entrep Manag J. 2020;16:1023–42.
[26] Krstinić D, Braović M, Šerić L, Božić-Štulić D. Multi-label classifier performance evaluation with confusion matrix. Comput Sci Inf Technol. 2020.
[27] Liu H, Cai H, Lin Q, Zhang X, Li X, Xiao H. FEDA: fine-grained emotion difference analysis for facial expression recognition. Biomed Signal Process Control. 2023;79:104209.