For a long time, Facebook has been responsible for propagating hate speech on its platform, especially in the least developed and most vulnerable countries. Even as it promises to restrict such sensitive content, the reality is quite different. Facebook has been accused of allowing posts containing hate speech to go viral, a pattern that has repeated itself in nations like Myanmar and Bangladesh. The same has happened in Ethiopia, where Facebook’s failure to moderate content has deepened social tensions and fueled real-world violence.
Hate from digital space to real space
Ethiopia has been unstable for many years now, and its politics are steeped in ethnic hatred. There, Facebook has failed to draw the line between freedom of expression and hate speech. In the past, people used the platform to voice their ideas and be heard, and it became essential to the country’s efforts to achieve and maintain democracy. More recently, however, it has been used to spread religious and political propaganda, which has surged to a peak in recent years. As a result, hate speech has shifted from the virtual to the physical realm: calls to violence posted on the platform during the ongoing Ethiopian civil war have been traced to real-world attacks.
“The content is some of the most terrifying I’ve ever seen anywhere,” Timnit Gebru, a former Google AI researcher and leading expert on bias in AI, who is fluent in Amharic, told Rest of World. “It was literally a clear and urgent call to genocide. This is reminiscent of what was seen on Radio Mille Collines in Rwanda.” Radio Télévision Libre des Mille Collines, a station set up by Hutu extremists in Rwanda, broadcast calls to violence that helped spark the country’s 1994 genocide.
Facebook says it lacks the capability to moderate content
None of this has gone unnoticed by Facebook, yet it has barely adjusted its content moderation strategies in these smaller countries struggling with conflict and ethnic divisions. Facebook has defended itself by arguing that limited language coverage and a shortage of trained moderators in these nations leave it unable to handle the exorbitant volume of content posted daily, and that it has therefore failed to perform comprehensive human moderation.
Instead, it has been applying network- and AI-based moderation in these nations, but internal communications leaked in the Facebook Files show that this network-based moderation is still experimental. Because of its opacity, moderation is not performed adequately.
Facebook’s word, however, cannot be taken at face value. The Facebook Papers leaks and the Facebook whistleblower, Frances Haugen, revealed how the platform is being used to spread misinformation, hate, and violence. Haugen claimed that Facebook chooses profits over people’s lives.
“There were conflicts of interest between what was good for the public and what was good for Facebook,” she said during the interview. “And Facebook over and over again chose to optimize for its own interests like making more money.”
Such accusations are not new: Facebook’s failure to moderate content previously played a role in the ethnic persecution of the Rohingya in Myanmar. As these countries battle civil wars, Facebook can no longer fall back on the excuse that it is under-resourced and therefore unable to moderate content. It must recognize that human lives, not profits, should come first.