Publication | Closed Access
Emotions Don't Lie
269 Citations | 40 References | 2020 | Unknown Venue
Engineering · Machine Learning · Siamese Network Architecture · Affective Neuroscience · Deep Learning Network · Video Retrieval · Psychology · Social Sciences · Video Interpretation · Speech Recognition · Emotional Response · Image Analysis · Data Science · Emotion Regulation · Pattern Recognition · Deepfakes · Affective Computing · Video Transformer · Video Understanding · Deep Learning · Computer Vision · Deepfake Detection · Emotion · Emotion Recognition
We present a learning-based method for distinguishing real from fake (deepfake) multimedia content. The method extracts audio-visual similarity and affective cues, then classifies videos with a Siamese-style deep network trained with a triplet loss, validated on the DeepFake-TIMIT and DFDC datasets. It achieves per-video AUC of 84.4% on DFDC and 96.6% on DeepFake-TIMIT, outperforming state-of-the-art approaches, and is the first method to combine audio-visual similarity and perceived-emotion cues for deepfake detection.
We present a learning-based method for detecting fake (deepfake) versus real multimedia content. To maximize the information available for learning, we extract and analyze the similarity between the audio and visual modalities within the same video. Additionally, we extract and compare affective cues corresponding to perceived emotion from the two modalities to infer whether the input video is "real" or "fake". We propose a deep learning network inspired by the Siamese network architecture and the triplet loss. To validate our model, we report the AUC metric on two large-scale deepfake detection datasets, DeepFake-TIMIT (DF-TIMIT) and DFDC. We compare our approach with several state-of-the-art (SOTA) deepfake detection methods and report per-video AUC of 84.4% on DFDC and 96.6% on DF-TIMIT. To the best of our knowledge, ours is the first approach that simultaneously exploits the audio and visual modalities, as well as the perceived emotions extracted from both, for deepfake detection.
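Since the paper itself is closed access, the following is only a minimal sketch of the kind of Siamese-style, triplet-loss setup the abstract describes: two modality encoders map audio and visual features into a shared embedding space, and the triplet loss pulls matched (real) audio-visual pairs together while pushing mismatched (fake) pairs apart. The encoder layer sizes, the feature dimensions (`in_dim`, `embed_dim`), the margin, and the MFCC/CNN feature assumptions are hypothetical stand-ins, not the authors' actual architecture.

```python
# Hypothetical sketch of a Siamese-style audio-visual triplet setup;
# dimensions and feature extractors are illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityEncoder(nn.Module):
    """Maps a pre-extracted feature vector (audio or visual) into a
    shared embedding space so the two modalities can be compared."""
    def __init__(self, in_dim: int, embed_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256),
            nn.ReLU(),
            nn.Linear(256, embed_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # L2-normalize so distances are comparable across modalities.
        return F.normalize(self.net(x), dim=-1)

# Triplet loss: anchor and positive are drawn closer, the negative is
# pushed at least `margin` farther away than the positive.
triplet_loss = nn.TripletMarginLoss(margin=0.5)

audio_enc = ModalityEncoder(in_dim=64)    # e.g. MFCC-style audio features
visual_enc = ModalityEncoder(in_dim=512)  # e.g. CNN face features

# Dummy batch: anchor = visual track, positive = its genuine audio,
# negative = audio from a manipulated (fake) version of the video.
visual = torch.randn(8, 512)
real_audio = torch.randn(8, 64)
fake_audio = torch.randn(8, 64)

anchor = visual_enc(visual)
positive = audio_enc(real_audio)
negative = audio_enc(fake_audio)

loss = triplet_loss(anchor, positive, negative)
loss.backward()
```

At test time, the embedding distance between a video's audio and visual tracks can serve as a fakeness score, and the per-video AUC reported above can then be computed from those scores (e.g. with `sklearn.metrics.roc_auc_score`).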