People share millions of photos and videos on Facebook every day, creating some of the most compelling and creative visuals on our platform. Some of that content is manipulated, often for benign reasons, like making a video sharper or audio clearer. But there are people who engage in media manipulation in order to mislead.
Manipulations can be made through simple technology like Photoshop or through sophisticated tools that use artificial intelligence or “deep learning” techniques to create videos that distort reality – usually called “deepfakes.” While these videos are still rare on the internet, they present a significant challenge for our industry and society as their use increases.
Today we want to describe how we are addressing both deepfakes and all types of manipulated media. Our approach has several components, from investigating AI-generated content and deceptive behaviors like fake accounts, to partnering with academia, government and industry to expose the people behind these efforts.
Collaboration is key. Across the world, we’ve been driving conversations with more than 50 global experts with technical, policy, media, legal, civic and academic backgrounds to inform our policy development and improve the science of detecting manipulated media.
As a result of these partnerships and discussions, we are strengthening our policy toward misleading manipulated videos that have been identified as deepfakes. Going forward, we will remove misleading manipulated media if it meets the following criteria:
- It has been edited or synthesized – beyond adjustments for clarity or quality – in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say. And:
- It is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.
This policy does not extend to content that is parody or satire, or video that has been edited solely to omit or change the order of words.
Consistent with our existing policies, audio, photos or videos, whether deepfakes or not, will be removed from Facebook if they violate any of our other Community Standards, including those governing nudity, graphic violence, voter suppression and hate speech.
Videos that don’t meet these standards for removal are still eligible for review by one of our independent third-party fact-checkers, which include over 50 partners worldwide fact-checking in over 40 languages. If a photo or video is rated false or partly false by a fact-checker, we significantly reduce its distribution in News Feed and reject it if it’s being run as an ad. And critically, people who see it, try to share it, or have already shared it, will see warnings alerting them that it’s false.
This approach is critical to our strategy and one we heard specifically from our conversations with experts. If we simply removed all manipulated videos flagged by fact-checkers as false, the videos would still be available elsewhere on the internet or social media ecosystem. By leaving them up and labeling them as false, we’re providing people with important information and context.
Our enforcement strategy against misleading manipulated media also benefits from our work to root out the people behind these efforts. Just last month, we identified and removed a network using AI-generated photos to conceal their fake accounts. Our teams continue to proactively hunt for fake accounts and other coordinated inauthentic behavior.
We are also working to identify manipulated content, of which deepfakes are the most challenging to detect. That’s why last September we launched the Deepfake Detection Challenge, which has spurred people from all over the world to produce more research and open source tools to detect deepfakes. This project, supported by $10 million in grants, includes a cross-sector coalition of organizations including the Partnership on AI, Cornell Tech, the University of California, Berkeley, MIT, WITNESS, Microsoft, the BBC and AWS, among several others in civil society and the technology, media and academic communities.
In a separate effort, we’ve partnered with Reuters, the world’s largest multimedia news provider, to help newsrooms worldwide identify deepfakes and manipulated media through a free online training course. News organizations increasingly rely on third parties for large volumes of images and video, and identifying manipulated visuals is a significant challenge. This program aims to support newsrooms trying to do this work.
As these partnerships and our own insights evolve, so too will our policies toward manipulated media. In the meantime, we’re committed to investing within Facebook and working with other stakeholders in this area to find solutions with real impact.
Facebook oversight board rulings so far
- The content: A post used a slur to refer to Azerbaijanis in the caption of photos it said showed churches in the country’s capital.
- Why Facebook removed it: The company said the post violated its policy against hate speech.
- Why the board agreed: The way the term was used “makes clear it was meant to dehumanize its target.”
- The content: A user in Myanmar posted photos of a Syrian child who had drowned trying to reach Europe, and suggested that Muslims were disproportionately upset by killings in France over cartoon depictions of the Prophet Muhammad compared with China’s treatment of Uyghur Muslims.
- Why Facebook removed it: The company said the content violated its policy against hate speech.
- Why the board overruled Facebook: While the comments could be seen as offensive, they did not rise to the level of what Facebook considers hate speech.
- The content: An Instagram post about breast cancer awareness from a user in Brazil showed women’s nipples.
- Why Facebook removed it: The company’s automated content moderation system removed the post for violating a policy against sharing nude photos. Facebook restored the post after the oversight board decided to hear the case, but before it ruled.
- Why the board overruled Facebook: Facebook’s policy on nudity contains an exception for “breast cancer awareness.” The board added that the automated removal showed a “lack of proper human oversight which raises human rights concerns.”
- The content: A user posted a quote that the person misattributed to Nazi propagandist Joseph Goebbels.
- Why Facebook removed it: The company said it violated its policy against “dangerous individuals and organizations.”
- Why the board overruled Facebook: The board said the post did not promote Nazi propaganda but criticized Nazi rule.
- The content: A user in France falsely claimed that a certain drug cocktail could cure Covid-19 and berated the French government for refusing to make the treatment available.
- Why Facebook removed it: The company said the post violated its policy against misinformation that could cause real-world harm, arguing that it could lead people to ignore health guidance or attempt to self-medicate.
- Why the board overruled Facebook: The post did not represent an imminent harm to people’s lives because its aim was to change a government policy and it did not advocate taking the drugs without a doctor’s prescription.
- The content: A post in a Facebook group for Indian Muslims included a meme that appeared to threaten violence against non-Muslims. It also called French President Emmanuel Macron the devil and urged a boycott of French goods.
- Why Facebook removed it: The company said the post contained a “veiled threat” and violated its policy against inciting violence.
- Why the board overruled Facebook: The post, while incendiary, did not pose an imminent risk of violence and its removal overly restricted the user’s freedom of expression.
Trump faces a narrow path to victory against Facebook suspension
The key factors, experts tracking the board’s deliberations said, will include whether the board thinks Facebook set clear enough rules and gave Trump a fair shake. Another will be what kind of case the board thinks it’s weighing — a narrow, “legalistic” debate about one person’s freedom of expression or a broader one about the public’s right to safety.
The board, often likened to Facebook’s Supreme Court, has the power to overrule decisions even by top executives like CEO Mark Zuckerberg. Its ruling on Trump will be the group’s highest-profile yet, with momentous implications for U.S. politics and potentially the company’s treatment of other world leaders.
Here are the make-or-break factors that could determine Trump’s fate on Facebook:
A point for Trump: The board’s early rulings bode well for his case
The oversight board’s decisions so far would seem to offer favorable omens for Trump: It has ruled against Facebook and ordered content restored in almost every case it has reviewed since its launch before the 2020 U.S. elections.
Two aspects of those decisions could work especially well for the former president: the board’s commitment to freedom of expression, and a big emphasis on whether Facebook made its policies clear enough for users.
The early rulings showed that the board values free expression “very highly,” said Evelyn Douek, a lecturer at Harvard Law School who has closely followed the oversight board’s work.
“They put a lot of weight on the importance of voice and the importance of free expression and free speech and they really put the onus on Facebook to heavily justify any restrictions that they wanted,” she said.
The board could decide that Facebook’s policy against incitement to violence isn’t clear enough. That policy was the company’s main justification for booting Trump after the assault on the Capitol, during which he had repeated his false claims of a stolen election and attacked Vice President Mike Pence for certifying Joe Biden’s victory.
“One thing that really struck me in their initial decisions was kind of how much of their analysis focused on lack of clarity in Facebook’s policies, and really pointing to that as a rationale for saying content has to be restored on the platform,” said Emma Llansó of the nonprofit Center for Democracy & Technology, which receives funding from Facebook and other tech companies.
When Facebook announced Trump’s suspension on Jan. 7, Zuckerberg said the risk of further violence if the platform allowed him to remain active was “simply too great.” The company’s rules say Facebook can “remove language that incites or facilitates serious violence” or “when we believe there is a genuine risk of physical harm or direct threats to public safety.” The policy also says Facebook may consider additional context in such cases, such as whether a user’s prominence adds to the danger.
But the board’s decision may turn on whether those policies gave Trump sufficient notice of what behavior would violate the rules — in other words, whether he received due process.
Under “the most narrow kind of legalistic interpretation,” Llansó said, “they might well conclude that Trump’s account should go back up.”
A point for Facebook: Trump got a lot of warnings
On the other hand, due process concerns may matter a lot less when dealing with Trump, a public figure who had repeated run-ins with the site’s rules.
“When it comes to [Facebook’s] decision making, it’s not really been clear to users, generally, about where the lines are drawn,” said David Kaye, a professor at the University of California at Irvine and a former United Nations special rapporteur. “But I don’t think any of that really applies to Trump. I mean, for months, all the platforms had been basically signaling to Trump pretty clearly that you are coming up to the line, if not crossing over it with respect to our rules.”
Trump spent years butting heads with Facebook over its standards, including posts before and after the election that the company either adorned with warning labels or took down entirely for making unfounded claims about the election or the coronavirus pandemic.
That should have made it clear to him and his accounts’ handlers that he was at risk of more forceful action, Douek said.
“There have been years of battle between Facebook and years of contestation around Trump’s presence on the platform, and it absolutely can’t be said that he didn’t have an idea that he was breaching Facebook’s policies,” she said.
Facebook took down more Trump posts immediately after the Capitol riots on Jan. 6, declaring it an “emergency situation” and warning that his online rhetoric “contributes to rather than diminishes the risk of ongoing violence.” It suspended him the following day.
A point for Trump: Critics say Facebook’s enforcement has been uneven
Facebook’s much-scrutinized track record in policing Trump’s posts could play in his favor, though.
Daniel Kreiss, a media professor at the University of North Carolina, argued that the social media giant spent years essentially ignoring Trump’s violations of its rules because the company stuck to an “overly narrow interpretation” of them.
That could hurt the company’s case, he said, if the board believes that the company suddenly adopted a broader interpretation of its policies in handling Trump’s posts on and after Jan. 6.
“A lot of this comes back to Facebook’s own failures over the last year,” Kreiss said.
In his Jan. 7 post, Zuckerberg said Facebook had let Trump use the platform “consistent with our own rules,” but that the storming of the Capitol dramatically changed the dynamics. “The current context is now fundamentally different, involving use of our platform to incite violent insurrection against a democratically elected government,” the CEO said.
But critics have skewered the company for not taking a more aggressive stance against Trump’s repeated, unsubstantiated claims of widespread voter fraud in the 2020 elections, as well as earlier posts such as his warning to racial justice protesters last May that “when the looting starts, the shooting starts.” Zuckerberg rejected such criticisms nearly a year ago, saying that “our position is that we should enable as much expression as possible unless it will cause imminent risk of specific harms or dangers spelled out in clear policies.”
The perceived inconsistency, coupled with the oversight board’s initial decisions, could mean Trump is bound for a comeback, Kreiss argued.
“If I was a betting man, I would say that the early rulings would lead me to expect that the oversight board will overturn Facebook’s decisions,” he said.
A point for Facebook: Trump’s case defies precedent
Perhaps the biggest factor in Facebook’s favor is the fact that Trump’s case breaks any semblance of precedent the board could have established in its early rulings, the people tracking its deliberations said.
None of the previous cases directly involved a government leader — let alone the leader of the free world, or one accused of inciting a deadly attack in the seat of his own democracy. Plus, all the past disputes were about Facebook’s decisions to take down specific pieces of content, not the suspension of someone’s entire account.
“The thing about the Trump case is it’s so sui generis and exceptional,” Douek said.
“This just does seem a case that in some ways, is set apart … because of the magnitude of it in terms of how important this person is,” said University of North Carolina media professor Shannon McGregor, who co-wrote a piece with Kreiss calling for the oversight board to uphold Trump’s suspension.
Facebook in fact leaned on the unparalleled nature of the case when it referred Trump’s suspension to the oversight board on Jan. 21, kicking off a review period of at most 90 days.
“Our decision to suspend then-President Trump’s access was taken in extraordinary circumstances: a US president actively fomenting a violent insurrection designed to thwart the peaceful transition of power; five people killed; legislators fleeing the seat of democracy,” said Facebook global affairs chief Nick Clegg, a former British deputy prime minister.
He added, “This has never happened before — and we hope it will never happen again. It was an unprecedented set of events which called for unprecedented action.”
That could mean that even if the board takes issue with how Facebook arrived at its decision, it could still agree with its conclusion.
“I would probably fall on the side of: They will not order his account restored, but with an opinion that explains a lot of things Facebook needs to change about their policies to make that outcome clearer and more predictable in the future,” Llansó said.
A point for Facebook: The board is big on human rights
Trump and his conservative allies have long accused Facebook and other social media sites of trampling on free speech by unevenly restricting their content, a charge the companies deny. The criticism borrows from the American tradition of largely unfettered self-expression, a tradition that Zuckerberg himself has proclaimed as a core value for Facebook.
But researchers said they expect the oversight board to look at Trump’s suspension through a wider human rights lens, which would put a greater emphasis on how Trump’s speech could harm others.
“What human rights law does, when it comes to freedom of expression, is it looks at not just the freedom to impart information, but also the freedom to seek and receive it, and it provides a kind of framework for thinking about the impact that speech can have on others,” Kaye said.
That doesn’t bode well for Trump, Kaye said, because it would mean Trump’s right to express himself freely on Facebook wouldn’t necessarily be an overriding factor in the board’s decision.
Still, some aren’t convinced the board will take that broad an approach to the case.
Paul Barrett, deputy director at the NYU Stern Center for Business and Human Rights and a former Bloomberg columnist, argued in an article that the board’s earlier decisions “tended to frame the factual context of the disputed posts in a narrow way, an approach that can minimize the potential harm the speech in question could cause.”
He added, “If carried over to the Trump decision, these inclinations would help him.”
But onlookers should be careful not to read too much into the board’s initial rulings, Douek said.
“Predicting the future is always a bad idea, and it’s kind of stupid to do it on such a small sample,” she said.
Chris Cox earned cash and stock worth $69 million after rejoining Facebook last year
Facebook Chief Product Officer Chris Cox received compensation worth approximately $69 million to rejoin the social media company in 2020.
Cox rejoined Facebook in June 2020 after a hiatus of more than a year away from the company. Cox is a key moral leader at Facebook and one of its top strategic executives, coordinating how the company’s various services work with one another.
According to the company’s 2021 proxy statement, Cox earned more than $421,000 in salary, more than $691,000 in bonus, and stock awards with a fair value of nearly $68 million, which will vest over the next four years.
Cox’s salary and bonus were down from his earnings in 2018, his last full year with the company. However, his stock compensation was up drastically, with Facebook noting that the massive award was granted in connection with Cox rejoining the company.
The company also notes that Cox will earn an additional cash award of $4 million this year, to be paid within 30 days of the one-year anniversary of his return to Facebook.
Additionally, the proxy states that Facebook paid Cox $90,000 for serving as a strategic advisor while he was away from the company.