Scivillage.com Casual Discussion Science Forum
New way to detect unethical deepfakes & protect against them - Printable Version


New way to detect unethical deepfakes & protect against them - C C - Jul 1, 2019

https://theconversation.com/detecting-deepfakes-by-looking-closely-reveals-a-way-to-protect-against-them-119218

INTRO: Deepfake videos are hard for untrained eyes to detect because they can be quite realistic. Whether used as personal weapons of revenge, to manipulate financial markets or to destabilize international relations, videos depicting people doing and saying things they never did or said are a fundamental threat to the longstanding idea that “seeing is believing.” Not anymore.

Most deepfakes are made by showing a computer algorithm many images of a person, and then having it use what it saw to generate new face images. At the same time, their voice is synthesized, so it both looks and sounds like the person has said something new.
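
Since the article only describes that recipe in words, here is a minimal sketch of the classic face-swap idea it refers to: a shared encoder with one decoder per person, trained so each decoder reconstructs its own person's face, then "swapped" at generation time. The network sizes, the random stand-in data and the use of PyTorch are illustrative assumptions, not the pipeline any particular deepfake tool actually uses.

Code:
# Sketch only: shared encoder, one decoder per identity. Encoding person A's
# face and decoding it with person B's decoder yields B's face with A's pose
# and expression. Sizes and data below are placeholders.
import torch
import torch.nn as nn

IMG = 64  # assumed size of aligned face crops (pixels)

def encoder():
    return nn.Sequential(
        nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
        nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
        nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
        nn.Flatten(),
        nn.Linear(128 * 8 * 8, 256),
    )

def decoder():
    return nn.Sequential(
        nn.Linear(256, 128 * 8 * 8), nn.ReLU(),
        nn.Unflatten(1, (128, 8, 8)),
        nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
        nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
        nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
    )

enc, dec_a, dec_b = encoder(), decoder(), decoder()
params = list(enc.parameters()) + list(dec_a.parameters()) + list(dec_b.parameters())
opt = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.L1Loss()

# Stand-in for the "many images of a person": random tensors shaped like
# batches of aligned face crops for person A and person B.
faces_a = torch.rand(8, 3, IMG, IMG)
faces_b = torch.rand(8, 3, IMG, IMG)

for step in range(10):  # real training runs for many thousands of steps
    opt.zero_grad()
    loss = loss_fn(dec_a(enc(faces_a)), faces_a) + loss_fn(dec_b(enc(faces_b)), faces_b)
    loss.backward()
    opt.step()

# The "swap": encode person A's face, decode it with person B's decoder.
fake_b = dec_b(enc(faces_a))
print(fake_b.shape)  # (8, 3, 64, 64): newly generated face images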

Some of my research group’s earlier work allowed us to detect deepfake videos that did not include a person’s normal amount of eye blinking – but the latest generation of deepfakes has adapted, so our research has continued to advance.
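
The blink cue works because adults blink roughly 15 to 20 times a minute, while early deepfakes rarely blinked at all (training photos seldom show closed eyes). Below is a much-simplified sketch of that idea, assuming per-frame eye landmarks are already available from a standard 68-point face landmark model; the eye-aspect-ratio heuristic and the thresholds are illustrative, not the research group's published detector.

Code:
# Sketch: count blinks per minute from an eye-aspect-ratio (EAR) time series
# and flag clips whose blink rate is implausibly low for a real person.
import numpy as np

def eye_aspect_ratio(eye):
    """eye: (6, 2) array of landmark (x, y) points around one eye.
    The ratio of vertical to horizontal eye openings drops sharply
    when the eyelid closes; apply this per frame to build ear_series."""
    v1 = np.linalg.norm(eye[1] - eye[5])
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])
    return (v1 + v2) / (2.0 * h)

def blinks_per_minute(ear_series, fps, closed_thresh=0.2):
    """Count open-to-closed transitions in the EAR series."""
    closed = ear_series < closed_thresh
    blink_starts = np.sum(closed[1:] & ~closed[:-1])
    minutes = len(ear_series) / (fps * 60.0)
    return blink_starts / minutes

def looks_like_deepfake(ear_series, fps, min_normal_rate=6.0):
    """Flag a clip blinking far less often than a real person would."""
    return blinks_per_minute(ear_series, fps) < min_normal_rate

# Toy usage: 30 s of synthetic EAR values at 30 fps containing one blink.
fps = 30
ears = np.full(fps * 30, 0.3)
ears[100:105] = 0.1  # a single brief blink
print(blinks_per_minute(ears, fps))    # ~2 blinks/min
print(looks_like_deepfake(ears, fps))  # True: well below a normal blink rate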

Now, our research can identify the manipulation of a video by looking closely at the pixels of specific frames. Going one step further, we also developed an active measure to protect individuals from becoming victims of deepfakes. In two recent research papers, we described ways to detect deepfakes with flaws that can’t be fixed easily by the fakers... (MORE)
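
The article doesn't spell out what "looking closely at the pixels" involves, and the detectors described in the papers are trained models. One of the cues such detectors can exploit, though, can be illustrated with a crude heuristic: a synthesized face is typically generated at limited resolution and warped into the frame, so the face region often ends up blurrier than the pixels immediately around it. The file name, thresholds and sharpness measure in this sketch are assumptions, not the published method.

Code:
# Sketch: flag frames whose detected face region is markedly blurrier than
# its immediate surroundings, a rough proxy for warping artifacts.
import cv2

def sharpness(gray_patch):
    """Variance of the Laplacian: a standard blur/sharpness measure."""
    return cv2.Laplacian(gray_patch, cv2.CV_64F).var()

def suspicious_frames(video_path, blur_ratio_thresh=0.5, max_frames=300):
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    flagged, index = [], 0
    while index < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
            face = gray[y:y + h, x:x + w]
            # Slightly larger box that also includes pixels around the face.
            pad = w // 4
            y0, y1 = max(0, y - pad), min(gray.shape[0], y + h + pad)
            x0, x1 = max(0, x - pad), min(gray.shape[1], x + w + pad)
            context = gray[y0:y1, x0:x1]
            if sharpness(context) > 0 and \
               sharpness(face) / sharpness(context) < blur_ratio_thresh:
                flagged.append(index)  # face much blurrier than surroundings
        index += 1
    cap.release()
    return flagged

# Usage with a hypothetical file name:
print(suspicious_frames("suspect_clip.mp4"))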