Two complaints – in the United Kingdom and the United States – and damages estimated by the plaintiffs at more than 130 billion euros: in early December, NGOs representing Rohingya exiles filed complaints against Meta, Facebook's parent company, accusing the social network's mishandling of hate messages of causing thousands of deaths in Burma.
Since 2017, violent repression, mainly by the military, has targeted the Rohingya, a Muslim minority in this predominantly Buddhist country; about 750,000 people fled Burma and at least 10,000 were killed, according to a United Nations report published in September 2018, which found that "these crimes (…) were genocidal in nature".
But the Burmese army was not the only organization blamed by the United Nations: the same report accused Facebook, very popular in Burma, of having played a "determining" role in the atrocities by allowing calls to hatred against Muslims to proliferate. Weeks earlier, a damning Reuters investigation had shown the platform's moderation in the country to be deeply flawed. Whether because of technical problems or too few Burmese-speaking moderators, much content calling for the killing of Rohingya remained easily accessible and widely shared.
Insufficient automated tools
Since then, Meta says it has greatly strengthened its resources in the country. "Our approach in Burma is fundamentally different today from what it was in 2017," a company spokesperson told Le Monde:
"The allegations accusing us of not having invested in security in the country are false. We have assembled a team of Burmese-speaking employees, banned the Burmese military, shut down groups that sought to manipulate public debate, and taken action against disinformation."
These measures did not solve the problems, as shown by the "Facebook Files", internal documents copied by former employee Frances Haugen and passed on to several news organizations, including Le Monde, by a United States congressional staffer. Several documents, dating from mid-2020, show in particular that the automated tools Facebook put in place to identify illegal messages appear insufficient.
A table summarizing the tools deployed in several countries shows that at the time, three years after the start of the massacres, Burma still had no classifier for disinformation. Classifiers are machine-learning tools that help Facebook detect problematic messages. Widely used by the social network around the world, they must be trained separately for each language. A detection system for disinformation messages in Burmese has since been put in place, the social network says, alongside other classifiers that already existed, including one for calls to hatred.
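Meta has not published the design of these classifiers; as a purely illustrative sketch, the idea of a per-language text classifier can be shown with a toy Naive Bayes model trained on a handful of hypothetical examples (the labels and example phrases below are invented for illustration, not drawn from Facebook's systems):

```python
# Toy sketch of a per-language text classifier. Real systems like
# Facebook's are large neural models; this is only to show why a
# separate model must be trained on data in each language.
from collections import Counter
import math


class NaiveBayesTextClassifier:
    def __init__(self):
        self.word_counts = {}        # label -> Counter of word frequencies
        self.doc_counts = Counter()  # label -> number of training documents
        self.vocab = set()

    def train(self, documents):
        # documents: list of (text, label) pairs, all in ONE language.
        for text, label in documents:
            words = text.lower().split()
            self.word_counts.setdefault(label, Counter()).update(words)
            self.doc_counts[label] += 1
            self.vocab.update(words)

    def predict(self, text):
        words = text.lower().split()
        total_docs = sum(self.doc_counts.values())
        best_label, best_score = None, float("-inf")
        for label, counts in self.word_counts.items():
            # log prior + log likelihoods with add-one smoothing
            score = math.log(self.doc_counts[label] / total_docs)
            total_words = sum(counts.values())
            for w in words:
                score += math.log(
                    (counts[w] + 1) / (total_words + len(self.vocab))
                )
            if score > best_score:
                best_label, best_score = label, score
        return best_label


# Invented toy training data for one language:
clf = NaiveBayesTextClassifier()
clf.train([
    ("kill them all drive them out", "violating"),
    ("they must be destroyed", "violating"),
    ("the weather is nice today", "benign"),
    ("prices at the market rose", "benign"),
])
print(clf.predict("drive them out now"))  # -> violating
```

A model trained this way on English text knows nothing about Burmese vocabulary, which is why, as the documents note, each language requires its own training effort and data.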