Facebook Still Allows Hate and Violence Ads
It is perfectly possible to advertise hate messages and calls for extreme violence on Facebook. The company claims to be focusing more on moderation, but that remains insufficient.
A new study shows it is child's play to place ads on Facebook that violate the platform's own rules. The non-profit organizations Global Witness and Foxglove submitted 20 different advertisements containing hate speech in Kenya, in both Swahili and English, and found that 19 of them were approved without a hitch.
According to Global Witness, the ads called for ethnically motivated violence, beheadings, and rape, or compared certain tribes to animals. The organization says it chose ads rather than regular posts because ads are supposedly moderated more strictly, and because an ad can still be withdrawn before it goes live.
Yet the ads were approved for publication, even though their content is not allowed under Facebook's rules. The organizations therefore call on Meta, Facebook's parent company, to exercise more control over what appears on its platform.
The report by Global Witness and Foxglove has also sparked reactions in Kenya itself. The National Cohesion and Integration Commission (NCIC) is demanding that Facebook stop such ads immediately. The commission was established in 2008, in the wake of post-election violence that killed some 1,300 people.
The NCIC has even threatened to ban Facebook in the country, although the chief of staff of Kenya's ICT minister has since tempered that statement, saying no active steps are being taken toward a ban.
To be clear, Facebook is not the only platform falling short. The Global Witness research focuses on Facebook, but TechCrunch notes that Twitter and TikTok also do too little moderation to curb hate speech. It points to a Mozilla Foundation study on Kenya, which found that viral videos containing hate and political disinformation continued to circulate even though they violate TikTok's rules.
Facebook had already told Global Witness in advance that it is taking steps to detect hate speech and that it employs teams with local language expertise, but it acknowledged that there will always be cases where its moderation falls short.
With that, Facebook is largely repeating the response it has given for years whenever inappropriate material on its platform is pointed out. When Russian interference in the 2016 American elections came to light, for example, the company first denied it and then acknowledged it, promising to do better.
In the meantime, however, other examples have surfaced. According to a study commissioned by Facebook itself, the company did too little in Myanmar in 2018, where the platform was used to incite violence. A similar scenario played out in Ethiopia earlier this year.
In short, the company has claimed for years that it is doing a great deal against hate and violence, touting figures on the number of deleted posts. Yet in the meantime, it remains child's play to place paid ads that neither AI nor human moderators catch.