Facebook owner Meta is ending its third-party fact-checking program and will instead rely on its users to report misinformation, as the social media giant prepares for Donald Trump’s return to the presidency.
The $1.6 trillion company said Tuesday it would “enable more expression by lifting restrictions on certain topics that are part of mainstream discourse and focusing our enforcement on illegal and high-severity violations” and would “take a more personalized approach to political content.”
“It’s time to return to our roots around free speech on Facebook and Instagram,” Mark Zuckerberg, Meta’s co-founder and chief executive, said in a video message.
Trump sharply criticized Zuckerberg during the campaign for last year’s US presidential election, suggesting that if Meta interfered in the 2024 vote he would “spend the rest of his life in prison”.
But the Facebook founder sought to rebuild his relationship with the president-elect after his victory in November, including visiting him at his Mar-a-Lago residence in Florida.
On Monday, Meta made further overtures to the incoming US administration by appointing UFC founder and prominent Trump supporter Dana White to its board of directors.
White will serve on Meta’s board alongside another Trump ally, tech investor Marc Andreessen, who has long pushed for the company to relax its oversight of online content.
Zuckerberg said the complexity of his content moderation system, which was expanded in December 2016 after Trump’s first election victory, had introduced “too many errors and too much censorship.”
In the United States, Meta will move to a so-called “community notes” model, similar to that used by Elon Musk’s X, which allows users to add context to controversial or misleading posts. Meta itself will not write community notes.
Meta said there were “no immediate plans” to end third-party fact-checking and introduce community notes outside the United States. It is unclear how such a system could comply with regimes such as the EU’s Digital Services Act and the UK’s Online Safety Act, which require online platforms to put in place measures to combat illegal content and protect users.
Zuckerberg added that Meta would also change its systems to “significantly reduce” the amount of content its automated filters remove from its platforms.
This includes lifting restrictions on topics such as immigration and gender, to focus its systems on “illegal and high-severity violations”, such as terrorism, child exploitation and fraud, as well as content related to suicide, self-harm and eating disorders.
He acknowledged that the changes would mean Meta would “catch fewer bad things”, but argued that the trade-off was worth it to reduce the number of posts from “innocent people” that were deleted.
The changes bring Zuckerberg closer to Musk, who reduced content moderation after buying the social media platform, then called Twitter, in 2022.
“Just like on X, community notes will require agreement among people with varying viewpoints to avoid biased ratings,” Meta said in a blog post.
“It’s cool,” Musk said in an X post referring to the changes to Meta.
Joel Kaplan, a prominent Republican who Meta announced last week would succeed Sir Nick Clegg as its head of global affairs, told Fox News on Tuesday that the company’s third-party fact-checkers had been “too biased.”
Referring to Trump’s return to the White House on January 20, Kaplan added: “We have a real opportunity now, we have a new administration and a new president coming in who are great defenders of free speech and that makes a difference.”
As part of the changes announced Tuesday, Meta also said it would move its U.S.-based content moderation team from California to Texas. “I think it will help us build trust to do this work in places where there is less concern about bias on our teams,” Zuckerberg said.
Meta’s changes have been criticized by online safety campaigners. Ian Russell, whose 14-year-old daughter Molly took her own life after viewing harmful content on sites such as Instagram, said he was “appalled” by the plans.
“These measures could have disastrous consequences for many children and young adults,” he said.
Zuckerberg first introduced third-party fact-checking as part of a series of measures in late 2016 intended to address criticism of widespread misinformation on Facebook.
He said at the time that the company needed “stronger detection” of misinformation and would work with the news industry to learn from journalists’ fact-checking systems.
Meta said it now spends billions of dollars a year on its safety and security systems, employing or contracting tens of thousands of people around the world.
But on Tuesday, Zuckerberg accused governments and “traditional media” of pushing his company to “censor more and more.”
He said Meta would work with the Trump administration to “push back against governments around the world that are going after American businesses and pushing for increased censorship.”
He pointed to restrictive regimes in China and Latin America, as well as what he called an “ever-increasing number” of European laws that “institutionalize censorship and make it difficult to build anything innovative there”.
Meta shares fell 2 percent Tuesday morning to $616.11.