Social media giants Meta and X approved ads targeting users in Germany with violent anti-Muslim and anti-Jewish hate speech in the run-up to the country's federal elections, according to new research from Eko, a corporate responsibility nonprofit campaign group.
The group's researchers tested whether the two platforms' ad review systems would approve or reject submissions for ads containing hateful and violent messaging targeting minorities ahead of an election in which immigration has taken center stage in mainstream political discourse — including ads containing anti-Muslim slurs; calls for immigrants to be imprisoned in concentration camps or gassed; and AI-generated imagery of mosques and synagogues being burned.
Most of the test ads were approved within hours of being submitted for review in mid-February. Germany's federal elections are set to take place on Sunday, February 23.
Hate speech ads scheduled
Eko said X approved all 10 of the hate speech ads its researchers submitted just days before the federal election is due to take place, while Meta approved half (five ads) to run on Facebook (and potentially also Instagram), though it rejected the other five.
The reason Meta gave for the five rejections indicated that the platform believed there could be risks of political or social sensitivity that might influence voting.
However, the five ads Meta approved included violent hate speech likening Muslim refugees to a "virus," "vermin," or "rodents," branding Muslim immigrants as "rapists," and calling for them to be sterilized, burned, or gassed. Meta also approved an ad calling for synagogues to be torched to "stop the globalist Jewish rat agenda."
As a side note, Eko says none of the AI-generated images it used to illustrate the hate speech ads were labeled as artificially generated — yet half of the 10 ads were still approved by Meta, despite the company having a policy that requires disclosure of AI imagery used in ads about social issues, elections, or politics.
X, for its part, approved all five of these hateful ads — along with five more containing similarly hateful speech targeting Muslims and Jews.
These additional approved ads included messaging attacking "rodent" immigrants that, per the ad copy, are "flooding" the country "to steal our democracy," and an antisemitic slur suggesting that Jews lied about climate change in order to destroy European industry and accrue economic power.
The latter ad was paired with AI-generated imagery depicting a group of shadowy men sitting around a table surrounded by stacks of gold bars, with a Star of David on the wall above them — visuals that also leaned heavily on antisemitic tropes.
Another ad X approved contained a direct attack on the SPD, the center-left party that currently leads Germany's coalition government, with a false claim that the party wants to take in 60 million Muslim refugees from the Middle East, before going on to try to provoke a violent response. X also duly scheduled an ad suggesting that "leftists" want "open borders" and calling for the extermination of Muslim "rapists."
Elon Musk, X's owner, has used the social media platform — where he has close to 220 million followers — to intervene personally in the German election. In a December tweet, he called on German voters to back the far-right AfD party to "save Germany." He has also hosted a livestream with AfD leader Alice Weidel on X.
Eko's researchers disabled all the test ads before any that had been approved were scheduled to run, ensuring no platform users were exposed to the violent hate speech.
The group says its tests highlight glaring flaws in the ad platforms' approach to content moderation. Indeed, in the case of X, it's not clear whether the platform is doing any ad moderation at all, given that all 10 violent hate speech ads were quickly approved for display.
The findings also suggest that the ad platforms could be earning revenue as a result of distributing violent hate speech.
The EU's Digital Services Act in the frame
Eko's tests suggest that neither platform is properly enforcing the bans on hate speech that both claim to apply to ad content in their own policies. Moreover, in the case of Meta, Eko reached the same conclusion in 2023, after running a similar test before the EU's new online governance rules came into force — suggesting the regime has had no effect on how it operates.
"Our findings suggest that Meta's AI-driven ad moderation systems remain fundamentally broken, despite the Digital Services Act (DSA) now being in full effect," an Eko spokesperson told TechCrunch.
"Rather than strengthening its ad review process or its hate speech policies, Meta appears to be backsliding across the board," they added, pointing to the company's recent moderation and fact-checking rollbacks as a sign of "active regression" that they suggested puts it on a direct collision course with DSA rules on systemic risks.
Eko has submitted its latest findings to the European Commission, which oversees enforcement of key aspects of the DSA on the pair of social media giants. The group also said it shared the results with both companies, but neither responded.
The EU has open DSA investigations into Meta and X, which include concerns about election security and illegal content, but the Commission has yet to conclude these proceedings. Back in April, though, it said it suspected Meta of inadequate moderation of political ads.
A preliminary decision on a portion of its DSA investigation into X, announced in July, included suspicions that the platform is failing to live up to the regulation's ad transparency rules. However, the full investigation, which kicked off in December 2023, also concerns illegal content risks, and the EU has yet to reach any conclusions on the bulk of the probe well over a year later.
Confirmed DSA violations can attract penalties of up to 6% of global annual turnover, while systemic noncompliance could even lead to regional access to violating platforms being temporarily blocked.
But, for now, the EU is still taking its time to decide on the Meta and X probes — so, pending final decisions, any DSA sanctions remain up in the air.
Meanwhile, it's now only a matter of hours before German voters head to the polls — and a growing body of civil society research suggests that the EU's flagship online governance regulation has failed to shield the bloc's democratic process from a range of tech-fueled threats.
Earlier this week, Global Witness released the results of tests of X's and TikTok's algorithmic feeds in Germany, which suggest the platforms are biased in favor of promoting AfD content over content from other political parties. Civil society researchers have also accused X of blocking their access to data in the run-up to the German poll, preventing them from studying election security risks — access the DSA is supposed to enable.
"The European Commission has taken important steps by opening DSA investigations into both Meta and X; now we need to see the Commission take strong action to address the concerns raised as part of these investigations," the Eko spokesperson also said.
"Our findings, alongside mounting evidence from other civil society groups, show that Big Tech will not clean up its platforms voluntarily. Meta and X continue to allow illegal hate speech, incitement to violence, and election disinformation to spread at scale, despite their legal obligations under the DSA," the spokesperson added. (We have withheld the spokesperson's name to prevent harassment.)
"Regulators must take strong action, both in enforcing the DSA and, for example, by implementing pre-election mitigation measures. This could include turning off profiling-based recommender systems immediately before elections, and implementing other appropriate 'break-glass' measures to prevent algorithmic amplification of borderline content, such as hateful content, in the run-up to elections."
The campaign group also warns that the EU now faces pressure from the Trump administration to soften its approach to regulating Big Tech. "In the current political climate, there's a real danger that the Commission doesn't fully enforce these new laws as a concession to the U.S.," they suggest.