Why Won’t Facebook Take Deepfakes Seriously?
Last year I wrote about the dangers of deepfakes and how most people are oblivious to how much havoc they will soon wreak on society. One of my primary concerns was how, when combined with products like Facebook, humanity can be brought to its knees through an assault on our ability to discern legitimate content and communication—whether we’re talking about news or messages from our family members—from manufactured nonsense.
Basically, compared with deepfakes, the “fake news” of today is going to look like a Model T next to a Shelby.
And suddenly it seemed like we had some good news: On Monday, Facebook announced that it will be banning deepfakes from its platform. Yay!
At first blush, this seems like a great idea because, at the very least, it shows that someone at Facebook is aware of the deepfake threat and worrying about it.
But when you read the fine print, this ban isn’t as great as it sounds. Now, granted, this is just a press release and may not actually reflect the level of consideration within the company, but it seems like there might be some holes here:
Going forward, we will remove misleading manipulated media if it meets the following criteria:
- It has been edited or synthesized – beyond adjustments for clarity or quality—in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say. And:
- It is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.
Let’s start with the first flag:
Facebook knows (algorithmically?) what constitutes “average” and at what point such a person might be “misled.” So, if I understand correctly, they’re still 100 percent for free speech. Unless that free speech is misleading to people deemed average by the above-average-person averageness-assessing algorithms written by well-above-average AI scientists.
But without getting into Lake Wobegon territory, this means that the IQ level of the bottom half of the curve is going to determine what everyone in the top half of the curve gets to see. That is . . . interesting.
The post continues:
Videos that don’t meet these standards for removal are still eligible for review by one of our independent third-party fact-checkers, which include over 50 partners worldwide fact-checking in over 40 languages. If a photo or video is rated false or partly false by a fact-checker, we significantly reduce its distribution in News Feed and reject it if it’s being run as an ad. And critically, people who see it, try to share it, or have already shared it, will see warnings alerting them that it’s false.
This approach is critical to our strategy and one we heard specifically from our conversations with experts. If we simply removed all manipulated videos flagged by fact-checkers as false, the videos would still be available elsewhere on the internet or social media ecosystem. By leaving them up and labeling them as false, we’re providing people with important information and context.
But at least these staffers and fact-checkers just have to figure out what is fake and what is real, right? Umm, no, actually:
This policy does not extend to content that is parody or satire, or video that has been edited solely to omit or change the order of words.
Ah, so once Facebook has separated out the parody from the disinformation, it will then check to see if the disinformation is only a misleading edit, or an actual deepfake. And whatever remains will be scrutinized under a rigorous policy where anything that is “misleading” to the “average person” will be removed.
Provided that it was created by Artificial Intelligence or Machine Learning and is not the work of an actual human hand.
I’m sure that sounds pretty cut and dried, right? Finally, Facebook is taking action against deliberately manipulated disinformation campaigns designed to attack American democracy. For instance, remember the video that circulated on Facebook last May that made Nancy Pelosi look like she was drunk? Well, Facebook has some things to say about that! The company helpfully told Reuters:
“The doctored video of Speaker Pelosi does not meet the standards of this policy and would not be removed. Only videos generated by artificial intelligence to depict people saying fictional things will be taken down,” Facebook said in a statement.
Oh. Maybe this is all just Facebook’s way of testing the public and seeing if we can tell the difference between a real policy and one written by one of their policy writing AIs taking a stab at satire?
It is interesting that Facebook is concerned enough about deepfakes that, absent a huge scandal or congressional inquiry, they’re taking measures to preempt the coming deepfake apocalypse. But do they actually understand the destructive power that results from combining sophisticated disinformation campaigns with their massive platform? Or are they just making a gesture because someone made Mark look like an idiot?
Let’s give Facebook the benefit of the doubt, because their intent really doesn’t matter. What does matter is that this new regime cannot possibly work.
Deepfake technology will evolve faster than Facebook’s human inspectors can detect it, and Jordan Peele is already running rings around their ability to know what counts as “satire.” And that’s if we’re only looking at a bunch of script kiddies in it for the lulz, not state-backed intelligence agencies putting coordinated resources behind their efforts.
Deepfakes are a real problem that warrants real solutions. Facebook (and Google and Apple and Twitter, et al.) can’t tackle it with siloed, half-baked nincompoopery. They need to cooperate, pool their talents, and start working aggressively to find ways to detect both live and recorded videos that they can certify as being “real.”
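To make the “certify as real” idea concrete: one approach platforms could take is cryptographic provenance, where a video is tagged with a signature of its contents at capture time and checked again at upload. The sketch below is purely illustrative and not anything Facebook has announced; it assumes a hypothetical shared device key and uses an HMAC for brevity, where a real system would use public-key signatures.

```python
import hashlib
import hmac

# Assumption: a secret key provisioned to the capture device. Purely
# hypothetical -- real provenance schemes use public-key cryptography.
SECRET_KEY = b"camera-device-secret"

def certify(video_bytes: bytes) -> str:
    """Produce a provenance tag for a video at capture time."""
    digest = hashlib.sha256(video_bytes).digest()
    return hmac.new(SECRET_KEY, digest, hashlib.sha256).hexdigest()

def verify(video_bytes: bytes, tag: str) -> bool:
    """Check that the video is byte-identical to what was certified."""
    return hmac.compare_digest(certify(video_bytes), tag)

original = b"...raw video frames..."
tag = certify(original)

print(verify(original, tag))            # unmodified upload: True
print(verify(original + b"fake", tag))  # altered/deepfaked copy: False
```

Note the limitation, which is why detection research still matters: this proves a file is unmodified since certification, but says nothing about footage that was staged or synthesized before the signature was applied.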
And until they do, they may want to consider just removing video from their platforms entirely. We’ll be okay now that we have Disney+; I promise.