“Meta has always been a hotbed of Russian, Chinese and Iranian disinformation,” says Gordon Crovitz, co-CEO of NewsGuard, a company that provides a tool for assessing the reliability of online information. “Now Meta has apparently decided to open the floodgates completely.”
Again, fact-checking is not perfect; Crovitz says NewsGuard has already spotted several “false narratives” on Meta’s platforms. And the community notes model with which Meta will replace its fact-checking battalions may still be somewhat effective. But research by Mahavedan and others has shown that crowdsourced solutions miss vast swathes of misinformation. And unless Meta commits to full transparency about how its version is implemented and used, it will be impossible to know whether the system is working.
The move to community ratings is also unlikely to solve the “bias” problem that Meta executives are so openly concerned about, because there is little evidence that such bias existed in the first place.
“The motivation for this whole Meta policy shift and Musk’s takeover of Twitter is this accusation that social media companies are biased against conservatives,” said David Rand, a behavioral scientist at MIT. “There’s just no good evidence of that.”
In a paper recently published in Nature, Rand and co-authors found that while Twitter users who used pro-Trump hashtags in 2020 were more than four times more likely to ultimately be suspended than those who used pro-Biden hashtags, they were also significantly more likely to have shared “low-quality” or misleading information.
“Just because there’s a difference in who gets targeted by enforcement doesn’t mean there is bias,” Rand explains. “Even if crowd ratings do a very good job of replicating fact-checkers’ ratings… you will always see more conservatives getting sanctioned than liberals.”
And even if Meta does build its own community-notes-style system, there is no guarantee it will work. “There’s a reason there’s only one Wikipedia in the world,” Matzarlis says. “It’s very difficult to get something crowdsourced to take off at scale.”
As for relaxing Meta’s policy on hateful conduct, that is itself an inherently political choice. Such a policy is always a matter of permitting some things and prohibiting others; moving those boundaries to accommodate bigotry does not mean the boundaries no longer exist. It just means Meta is more comfortable with bigotry than it was the day before.
Much depends on how exactly Meta’s system will work in practice. But between the moderation changes and the community-guidelines overhaul, Facebook, Instagram, and Threads are moving toward a world where anyone can say that gay and trans people suffer from “mental illness,” where AI-generated garbage proliferates even more aggressively, where outrageous claims spread unchecked, where truth itself is malleable.
You know: just like X.