America’s Defense Department “is looking to build tools that can quickly detect deepfakes and other manipulated media amid the growing threat of ‘large-scale, automated disinformation attacks,’” reports Nextgov:
The Defense Advanced Research Projects Agency on Tuesday announced it would host a proposers day for an upcoming initiative focused on curbing the spread of malicious deepfakes: shockingly realistic but forged images, audio, and videos generated by artificial intelligence. Under the Semantic Forensics program, or SemaFor, researchers aim to help computers use common sense and logical reasoning to detect manipulated media.
As global adversaries enhance their technological capabilities, deepfakes and other advanced disinformation tactics are becoming a top concern for the national security community… Industry has started developing tech that uses statistical methods to determine whether a video or image has been manipulated, but existing tools “are quickly becoming insufficient” as manipulation techniques continue to advance, according to DARPA. “Detection techniques that rely on statistical fingerprints can often be fooled with limited additional resources,” officials said in a post on FedBizOpps…
Beyond simply detecting errors, officials also want the tools to attribute the media to different groups and determine whether the content was manipulated for nefarious purposes. Using that information, the tech would flag posts for human review. “A comprehensive suite of semantic inconsistency detectors would dramatically increase the burden on media falsifiers, requiring the creators of falsified media to get every semantic detail correct, while defenders only need to find one, or a very few, inconsistencies,” DARPA officials said.
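DARPA’s asymmetry argument can be illustrated with a minimal sketch: a suite of independent consistency checks where a single failure is enough to flag media for human review, while a falsifier must pass every check to evade all of them. The detectors below (shadow direction, metadata-versus-scene) are purely hypothetical toy stand-ins, not actual SemaFor components:

```python
# Hypothetical sketch of the "semantic inconsistency detector suite"
# idea DARPA describes: the defender needs only one failed check,
# the falsifier must satisfy them all. Detector names and logic are
# illustrative inventions, not real SemaFor tools.

from typing import Callable, Dict, List

# Each detector returns True if the media item looks internally consistent.
Detector = Callable[[dict], bool]

def shadow_direction_consistent(media: dict) -> bool:
    # Toy check: claimed sun angle should roughly match shadow angle.
    return abs(media.get("sun_angle", 0) - media.get("shadow_angle", 0)) < 5

def metadata_matches_scene(media: dict) -> bool:
    # Toy check: a winter timestamp should not pair with summer foliage.
    return not (media.get("month") in (12, 1, 2) and media.get("foliage") == "summer")

DETECTORS: Dict[str, Detector] = {
    "shadow_direction": shadow_direction_consistent,
    "metadata_scene": metadata_matches_scene,
}

def review_flags(media: dict) -> List[str]:
    """Return the names of detectors that found an inconsistency.

    Any non-empty result routes the item to human review; a forger
    evades review only by passing every detector in the suite.
    """
    return [name for name, check in DETECTORS.items() if not check(media)]

fake = {"sun_angle": 40, "shadow_angle": 170, "month": 1, "foliage": "summer"}
print(review_flags(fake))  # both toy checks fail, so the item is flagged
```

Growing the suite raises the forger’s burden multiplicatively, since every added detector is one more semantic detail the falsified media must get right.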
But that’s easier said than done. Today, even the most advanced machine intelligence platforms have a tough time understanding the world beyond their training data.