Christchurch: why is social network moderation so ineffective?

The Christchurch attack, which claimed 50 lives, has once again exposed the weaknesses of the Web giants' moderation systems. The terrorist broadcast his massacre live on Facebook, in a 17-minute video. The footage then spread across many sites, notably YouTube (owned by Google), Instagram (owned by Facebook) and Twitter, either in full or in excerpts. Many people were disturbed by this wide distribution and rightly questioned the effectiveness of social networks' moderation tools.

This weekend, Facebook gave a few more details about its own efforts to stem the spread of the video. "New Zealand police alerted us to the existence of this livestream. We quickly deleted the shooter's Facebook and Instagram accounts," said Mia Garlick, head of Facebook New Zealand, via the social network's official Twitter account. "In the first 24 hours, we removed 1.5 million videos of the attack worldwide, including 1.2 million that were blocked before they were even published." The social network also said that edited versions of the video, even those showing no violent content, had been removed out of respect for the victims. On YouTube, copies of the video of the attack were being uploaded every second at the height of the crisis, forcing the platform to temporarily disable certain search terms related to the event in order to limit their visibility.

These details confirm just how complex content moderation on the Internet is, and show that social network moderation still suffers from many flaws, some of them very hard to overcome.

● Artificial intelligence, an imperfect solution

In some cases, the Web giants use artificial intelligence tools to prevent the publication of objectionable material, or to detect it and automatically flag it to human moderators. This is particularly the case for pornographic or paedophile images and for terrorist propaganda. For these last two categories, the biggest players in the sector rely on shared databases, which let them pool photos and videos that have already been identified in order to prevent their future publication on their own platforms. This is a so-called "digital fingerprint" system. But this type of tool only works for content that is already known: in the case of the Christchurch killing, it could not have prevented the first broadcast of the video. Moreover, users have also learned to fool the machines by slightly modifying copied content, thereby escaping automatic recognition.
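To make the limitation concrete, here is a minimal, hypothetical sketch of fingerprint-based blocking. The fingerprint list and function names are invented for illustration, and a plain cryptographic hash stands in for the perceptual hashes real platforms share; the principle, and the weakness, are the same.

```python
import hashlib

# Hypothetical shared blocklist of fingerprints for already-identified videos.
# Real systems use perceptual hashes that tolerate small edits; the plain
# SHA-256 used here for simplicity is defeated by changing a single byte,
# which mirrors the weakness described above.
KNOWN_FINGERPRINTS = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def fingerprint(video_bytes: bytes) -> str:
    """Compute a fingerprint for an uploaded file."""
    return hashlib.sha256(video_bytes).hexdigest()

def should_block(video_bytes: bytes) -> bool:
    """Block the upload if its fingerprint matches a known banned item."""
    return fingerprint(video_bytes) in KNOWN_FINGERPRINTS
```

An upload identical to a flagged video is blocked, but the system cannot catch the first upload of new footage, and a slightly re-encoded copy produces a different fingerprint and slips through.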

While artificial intelligence is heavily used against child sexual abuse material and terrorist propaganda, it is far less used for other categories. For example, half of the hate speech content removed from Facebook is still reported by users rather than identified by algorithms. For harassment, that proportion reaches 85%. A machine is not yet capable of grasping the subtlety of human language and its intentions when it comes to insults or threats.

● Human moderators, too few in number

Google, Facebook and Twitter all apply so-called a posteriori moderation: most of the time, a user must report a problem before a human moderator reviews it. Little is known about the people employed by the major platforms to moderate their content. In the case of Facebook, we know there are about 15,000 of them worldwide, the large majority employed by subcontracting companies. Articles, reports and investigations regularly testify to the difficult working conditions of these moderators, who work under pressure and are exposed to violent content all day long.

In the case of the Christchurch attack, the first user report arrived 29 minutes after the video was published, and 12 minutes after the end of the livestream. The content was viewed a total of 4,000 times before being removed by Facebook.

● Complex and changing rules

Human moderators make their decisions according to the laws in force in the country of origin of the content (for example, in France it is forbidden to publish content denying the existence of the Holocaust, which is not the case in the United States) and according to the internal rules of the social network in question. The latter are complex and change regularly. For example, Facebook does not allow content that "glorifies violence, or promotes the suffering or humiliation of other people." It does, however, allow some explicit content for informational purposes or as denunciation, "to help people raise awareness in their community with various problems". These rules can be overridden on a case-by-case basis: after the Christchurch attack, Facebook chose to remove all videos containing images of the attack, regardless of the users' intentions.

● The urgency of the "live"

In any case, neither artificial intelligence nor human moderators are fully effective at managing an event that unfolds in real time, especially a live video whose full content is difficult to review while it is being broadcast. In emergency situations, the major platforms generally opt for a mixture of the two. For example, YouTube chose to automatically block the re-uploading of the Christchurch video in its entirety, but to send uploads containing only excerpts to human moderators, in order to gauge whether the content was published for informational purposes.
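As a rough illustration of that kind of triage, here is a hypothetical sketch; the threshold and function names are invented, since the article only describes the general principle of blocking full copies automatically and escalating excerpts to human review.

```python
# Hypothetical triage of re-uploads during a crisis: full copies are blocked
# automatically, partial matches are routed to human reviewers who judge
# whether the upload is newsworthy.

def triage_upload(match_ratio: float) -> str:
    """match_ratio: fraction of the upload matching the flagged footage,
    assumed to come from an upstream fingerprint-matching step."""
    if match_ratio >= 0.9:   # essentially the whole video: block automatically
        return "block"
    if match_ratio > 0.0:    # only excerpts: a human decides (news report vs. glorification)
        return "send_to_human_review"
    return "allow"           # no match with the flagged footage

# Example: a news clip containing a short excerpt is escalated, not auto-blocked.
print(triage_upload(0.15))  # -> "send_to_human_review"
```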

● Huge platforms, designed for virality

There remains a final element complicating the work of moderation, without doubt the most difficult to resolve: the very nature of the Web giants. Facebook, Instagram and YouTube are huge platforms used by billions of people around the world, who post a phenomenal amount of content every day. According to Facebook's statements, the social network managed to block 80% of the videos showing the Christchurch killing before they were even published, a rather encouraging ratio. But because Facebook is such a large platform, that still means 300,000 videos related to the attack, the remaining 20%, were published. That figure is considerable. All this content benefited from strong virality, driven by recommendation algorithms trained to favour posts that generate many reactions. More than human moderators, more than new technological tools, it is an internal revolution that social networks would need in order to better fight violence online.

Date Of Update: 21 March 2019, 00:00