A James Cook University (JCU) study of people’s ability to detect ‘deepfakes’ has shown humans perform fairly poorly, even when given hints on how to identify video-based deceit.
Dr Klaire Somoray and Dr Dan J Miller from JCU led the study using high-quality deepfake videos, in which a person in an existing image or video is digitally manipulated to take on another person’s likeness.
These deepfakes can be produced quickly and effectively and could be used to mislead populations and create political misinformation.
One such example is a manipulated video of Ukrainian President Volodymyr Zelensky that was circulated, asking for Ukrainian soldiers to surrender.
Dr Somoray and Dr Miller recruited more than 450 people and showed them 20 videos, 10 of which were real and 10 of which were deepfakes.
Participants were then graded on their ability to judge which videos were real, and which were not. Half of the volunteers were given training on how to spot a deepfake video.
“This includes paying attention to things such as lighting, whether the cheeks and forehead looked too smooth or wrinkly, whether the agedness of the skin was similar to the agedness of the hair and eyes, and whether facial hair looked real,” Dr Somoray said.
On average, participants correctly identified approximately 12 out of 20 videos.
Dr Somoray went on to say that the poorest performers correctly categorised five out of 20 videos and the best performers correctly categorised 19 out of 20.
“Teaching people detection strategies did not impact detection accuracy or detection confidence, nor did time spent per video, or the average number of page clicks on each video,” she said.
“The findings cast doubt on whether simply providing the public with strategies for detecting deepfakes can meaningfully improve detection.
“Also, worryingly, it appears that individuals may be overly optimistic regarding their abilities to ascertain the authenticity of individual videos,” said Dr Miller.