Over the past decade, the 3,000 or so pictographs that make up the emoji set have become a vital part of online communication; today it is hard to imagine a conversation without them. We probably don’t think about it every time we use a monkey face or a lion, but with each new update emoji become more politically charged, and their presence or absence contributes to visibility or to cultural erasure. Their inclusion inevitably touches on geopolitical issues such as nationality, ethnicity and religion.
Now emojis have become a headache for these companies. Why? Because most hateful or racist messages on their platforms contain at least one.
Pictorial racism. After England lost to Italy in July in the final of the UEFA European Championship, the Black players on the English team faced an avalanche of bananas. But instead of physical fruit, as in that 1988 match at Goodison Park in Liverpool, these were emojis posted on their social media profiles, alongside monkeys and other images. “The impact was as profound and significant as when it was real bananas,” explained the UK’s Professional Footballers’ Association.
The role of social networks. Facebook and Twitter faced heavy criticism for taking too long to remove that wave of racist abuse from their platforms during the summer’s championship. The episode highlighted a problem that had been overlooked: despite spending years developing algorithms to analyze offensive language, social networks still have no effective strategy to stop the spread of hate speech, disinformation and other harmful content on their platforms. Emojis have become that obstacle.
When Apple introduced emojis with different skin tones in 2015, the company came under fire for allowing racist comments. A year later, the Indonesian government sparked outrage after demanding that networks remove emojis related to the LGBTQ movement. Some emojis, including the one showing a bag of money, have been linked to anti-Semitism. Black soccer players have been targeted through similar mechanisms: the Professional Footballers’ Association and analysts at Signify conducted a study of racist tweets targeting players and found that 29% included some type of emoji.
The great obstacle. Companies have spent years developing algorithms to detect and remove racism. But some believe far less effort and expertise have gone into analyzing the language of emoji, leaving a gap that has now become plain to see. The industry says dealing with pictographs is a technical challenge; critics say it makes the problem sound harder than it actually is.
Why? Racist emojis are less likely to be detected online than words, according to a study from the University of Oxford. The researchers carried out what they called a “Hatemoji check” and concluded that “hateful content is complex and diverse, making it difficult for detection systems.”
The response of the companies. Twitter and Facebook maintain that they have been deleting posts and disabling accounts. Facebook acknowledged that it wrongly said the use of certain emojis during this summer’s UEFA championship did not violate its policies when in fact it did. It has since begun automatically blocking certain strings of emojis associated with racism, and it now lets users specify which emojis they don’t want to see. Twitter explained that its rules against offensive posts cover hateful images and emojis.
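Blocking "strings of emojis" and honoring per-user mute lists can be pictured as a simple two-tier filter. The sketch below is purely illustrative: the blocked sequences, the user's hidden set, and the `moderate` helper are hypothetical examples, not Facebook's or Twitter's actual implementation.

```python
# Illustrative sketch of blocklist-style emoji moderation.
# BLOCKED_SEQUENCES and USER_HIDDEN are hypothetical examples.

BLOCKED_SEQUENCES = [
    # An emoji string of the kind platforms associated with racist abuse
    # during the championship (banana followed by monkey face).
    "\N{BANANA}\N{MONKEY FACE}",
]

# Emojis this particular user has asked not to see.
USER_HIDDEN = {"\N{MONKEY FACE}"}

def moderate(post: str) -> str:
    """Return 'blocked', 'hidden', or 'shown' for a post."""
    # Platform-wide rule: block posts containing a flagged emoji sequence.
    if any(seq in post for seq in BLOCKED_SEQUENCES):
        return "blocked"
    # Per-user preference: hide posts containing a muted emoji.
    if any(emoji in post for emoji in USER_HIDDEN):
        return "hidden"
    return "shown"
```

Note that the sequence check runs before the per-user check, so a platform-wide block always wins over a mere mute.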
Still, UK leaders condemned the hate speech seen after the competition, and Boris Johnson himself warned executives at Facebook, Twitter, TikTok, Snapchat and Instagram to take action against online abuse. That means, yes, reviewing the algorithms.
Review the algorithms. Basically because even the courts end up debating questions such as whether sending someone a gun emoji counts as a threat. The issue is confusing for lawyers, and it is even more confusing for computational language models. “Some of these algorithms are trained on datasets that contain few emojis,” explained researchers at the Oxford Internet Institute. Those models treat emojis as novel tokens, which means the algorithms must learn from scratch to infer their meaning from context.
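The "novel token" problem can be shown with a toy example. In the sketch below, a tiny hypothetical vocabulary stands in for a word-level model trained on emoji-poor data: any emoji in a post falls outside the vocabulary, so the model has no learned signal for it.

```python
# Toy illustration of why emojis look "novel" to a word-level model.
# VOCAB is a hypothetical stand-in for a vocabulary learned from
# training data that contained few or no emojis.

VOCAB = {"you", "played", "terribly"}

def tokenize(post: str) -> list[str]:
    # Naive whitespace tokenization, enough for the illustration.
    return post.split()

def unknown_tokens(post: str) -> list[str]:
    """Tokens the model has never seen; emojis typically land here."""
    return [t for t in tokenize(post) if t not in VOCAB]
```

A post like `"you played terribly 🐵"` leaves the monkey-face emoji as the only unknown token, carrying none of the learned toxicity signal that the surrounding words might.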
The University of Oxford built its own model, using humans to help teach the algorithms to understand emojis rather than letting the software learn on its own. The result, they say, was far more accurate than the algorithms developed by Google’s Jigsaw and the other conventional models the team tested.
It was also “bad” not to include them. Amid all this commotion, it is worth remembering that Apple was constantly criticized (and rightly so) for the lack of skin-tone options in its emoji, something finally addressed in the iOS 8.3 update. But while physical diversity has been improved, other missing emoji have received far less attention: places of worship, popular foods, clothing styles and modes of transport still have a very Westernized look. Nobody imagined that the funny faces we send on WhatsApp would end up sparking a debate of this magnitude.