Facebook is banning "deepfake" videos ahead of the 2020 election, but critics say the company isn't going far enough in dealing with doctored content. In a blog post Monday, Facebook said it will remove "misleading manipulated media" that has been altered "in ways that aren't apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say" and that is "a product of artificial intelligence or machine learning" making the video appear authentic. Facebook exec Monika Bickert said such videos are still rare, but "they present a significant challenge for our industry and society as their use increases," the Wall Street Journal reports.
Facebook says the policy will not apply to parody or satire videos, nor to video "edited solely to omit or change the order of words." It also apparently doesn't apply to so-called "cheapfake" videos, like the widely circulated "drunk Nancy Pelosi" clip from last year, in which her voice was slowed down to make her sound intoxicated. Hany Farid, a digital forensics expert at the University of California, Berkeley, tells the Washington Post that the new policy is too narrow and fails to address fake videos created with lower-tech methods. "Why focus only on deep-fakes and not the broader issue of intentionally misleading videos?" he asks.