
Deepfake detectors can be defeated, computer scientists show for the first time

#1
C C Offline
https://www.newswise.com/articles/deepfa...first-time

INTRO: Systems designed to detect deepfakes (videos that manipulate real-life footage via artificial intelligence) can be deceived, computer scientists showed for the first time at the WACV 2021 conference, which took place online Jan. 5 to 9, 2021.

Researchers showed detectors can be defeated by inserting inputs called adversarial examples into every video frame. The adversarial examples are slightly manipulated inputs which cause artificial intelligence systems such as machine learning models to make a mistake. In addition, the team showed that the attack still works after videos are compressed.
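To illustrate the idea, here is a minimal sketch of an adversarial perturbation against a toy stand-in detector. Everything here is hypothetical: the "detector" is a simple linear scorer rather than the deep networks the researchers attacked, and the perturbation is a basic gradient-sign step (FGSM-style) rather than the paper's actual method.

```python
import numpy as np

# Toy stand-in for a deepfake detector: a linear classifier that scores a
# flattened 64-pixel "frame"; score > 0 means the frame is labeled "fake".
# (Hypothetical weights -- a real detector would be a deep neural network.)
w = np.linspace(-1.0, 1.0, 64)

def detector_score(frame):
    return float(frame @ w)

# A "fake" frame the detector currently flags (its score is positive).
frame = w / np.linalg.norm(w)

# FGSM-style adversarial perturbation: nudge each pixel against the sign
# of the score's gradient (for a linear scorer, the gradient is just w).
epsilon = 0.2  # per-pixel perturbation budget
adversarial = frame - epsilon * np.sign(w)

print(detector_score(frame) > 0)        # True: original frame flagged as fake
print(detector_score(adversarial) > 0)  # False: small per-pixel change evades it
```

In the attack the article describes, a perturbation like this would be inserted into every frame of the video, small enough that the footage still looks unchanged to a human viewer.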

“Our work shows that attacks on deepfake detectors could be a real-world threat,” said Shehzeen Hussain, a UC San Diego computer engineering Ph.D. student and first co-author on the WACV paper. “More alarmingly, we demonstrate that it’s possible to craft robust adversarial deepfakes even when an adversary may not be aware of the inner workings of the machine learning model used by the detector.”

In deepfakes, a subject’s face is modified to create convincingly realistic footage of events that never actually happened. As a result, typical deepfake detectors focus on the face in videos: first tracking it, then passing the cropped face data to a neural network that determines whether it is real or fake. For example, eye blinking is not reproduced well in deepfakes, so detectors focus on eye movements as one way to make that determination. State-of-the-art deepfake detectors rely on machine learning models to identify fake videos.
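The per-frame pipeline described above (track the face, crop it, classify the crop) can be sketched roughly as follows. All of the components here are placeholders: the fixed bounding box, the mean-intensity "fakeness score", and the majority-vote aggregation are illustrative stand-ins for the real face tracker and neural network.

```python
import numpy as np

def track_face(frame):
    # Placeholder face tracker: returns a fixed center bounding box.
    # A real detector would run a face-detection model on each frame.
    h, w = frame.shape[:2]
    return (h // 4, w // 4, h // 2, w // 2)  # (top, left, height, width)

def crop_face(frame, box):
    top, left, height, width = box
    return frame[top:top + height, left:left + width]

def classify_face(face, threshold=0.5):
    # Stub for the neural network that scores the cropped face.
    # Here the "fakeness score" is just the mean pixel intensity.
    return "fake" if float(face.mean()) > threshold else "real"

def detect_deepfake(video_frames):
    # Per-frame pipeline: track the face, crop it, classify the crop.
    labels = [classify_face(crop_face(f, track_face(f))) for f in video_frames]
    # Toy aggregation: flag the video if most frames are scored fake.
    return "fake" if labels.count("fake") > len(labels) / 2 else "real"

frames = [np.full((64, 64), 0.9) for _ in range(10)]  # bright dummy frames
print(detect_deepfake(frames))  # "fake" under this toy scoring rule
```

Because the classification is made per cropped frame, an attacker who perturbs every frame, as described above, can flip the detector's decision for the whole video.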

The extensive spread of fake videos through social media platforms has raised significant concerns worldwide, particularly hampering the credibility of digital media, the researchers point out. “If the attackers have some knowledge of the detection system, they can design inputs to target the blind spots of the detector and bypass it,” said Paarth Neekhara, the paper’s other first coauthor and a UC San Diego computer science student... (MORE)

