Abuse in Online Sports Communities: How Do We Handle It?

Watchers
Jul 7, 2022

We take a deep look at how moderation works in online sports rooms and at the latest studies of sports fans' toxic behavior online.

This is the first article of a special series devoted to the creation of a safe space for online communication.

What the Reports Tell Us

Since 2020, international sports associations, trade unions and independent analysts have been tracking how many hate attacks athletes face during competitions. This awareness has led to a lot of research on abusive behavior in online sports communities. Proprietary studies have been presented by FIFA and FIFPRO, The Conversation, World Athletics, Australian sporting organizations, eSafety, the NBPA, and others.

These studies cover different sports in different countries, and their details vary, but the essence is the same: online abuse centers on homophobia, racism, sexism, ableism, Islamophobia, transphobia, and threats, listed here roughly from most to least frequent overall. In particular leagues, however, different phobias and forms of discrimination come to the fore.

“In recent times we have seen female athletes, those from Aboriginal and Torres Strait Islander and diverse cultural and multilingual backgrounds increasingly becoming the targets of unimaginable online abuse, hatred, misogyny and racism.” — eSafety Commissioner Julie Inman Grant, Australia

The EURO 2020 Final, which England lost to Italy, triggered an unacceptable level of abuse among European and English fans. The 2021 Africa Cup of Nations Final highlighted fans' homophobia. Misogyny and sexism traditionally accompany women's competitions. The Tokyo Summer Olympics, the first gender-balanced Games (women made up 49% of participants), nevertheless showed the highest level of sexism: according to studies, 87% of all abusive online messages were addressed to women.

Is it Because of Social Media?

Toxicity breeds toxicity. Bullying starts with athletes, then spreads to other fans and training staff, then to athletes' relatives and friends. Toxic comments and messages have no expiration date: because they were posted on public accounts on Twitter, Instagram, and other social media, they keep gathering impressions and being shared.

If you have ever looked at the pages of sports clubs or individual athletes, especially during a match or after a loss, you have probably noticed abusive comments. You may also have seen abusive Twitter threads become popular and rise to the top of the feed.

If you are not a social media user, you may know from traditional media that athletes and sports organizations have joined social media boycotts more than once. In April 2021, the Premier League, the English Football League and the Women's Super League boycotted social media for four days over the platforms' persistent disregard for abuse problems. Yet just two months later, it was the wave of abuse directed at the English team after the EURO 2020 Final that finally convinced FIFA to take action.

Real Steps

Many people think the reason for abusive behavior on social media is anonymity, but it's not that simple. Social media is not a fully anonymous space, and the authors of hate speech often come into view: after the waves of bullying that followed the EURO 2020 Final, many online abusers were summoned by the police.

Gareth Southgate consoles Bukayo Saka, who missed England’s final penalty in the shootout ©AFP

Twitter and Instagram remain spaces where bullying breeds bullying, even though the identities of the authors of abusive threads are often recorded or easily revealed. A trigger for bullying can be the score, a player's mistake during the game, or even how an athlete looks in a photo on their personal account. When anything can be a reason, there are no reasons at all.

As a result of these studies, FIFA plans to launch moderation services across social media, including an automated search for toxic posts followed by a manual check. According to Threat Matrix, the company behind the system, this two-level check lets them resolve difficult cases. FIFA gives the example of an Italian player nicknamed "Gorilla": addressed to him, the gorilla emoji is perfectly acceptable, while in other contexts the same emoji is a marker of racism.
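The "Gorilla" example can be illustrated with a minimal, purely hypothetical sketch: the account names, the lookup table, and the function below are all invented for illustration and do not reflect Threat Matrix's actual system.

```python
# Hypothetical sketch: the same emoji is benign or abusive depending on
# who it targets. All names and data here are invented.

GORILLA = "\U0001F98D"  # the gorilla emoji

# Players who have publicly adopted an emoji as part of their own
# nickname or brand (assumption for this sketch).
SELF_ADOPTED_EMOJI = {
    "player_gorilla_official": {GORILLA},
}

def is_emoji_abusive(emoji: str, target_account: str) -> bool:
    """Flag the gorilla emoji as a toxicity marker unless the targeted
    player has adopted it as part of their own public identity."""
    allowed = SELF_ADOPTED_EMOJI.get(target_account, set())
    return emoji == GORILLA and emoji not in allowed
```

A rule-based first pass like this only decides which messages need the second, human check; it cannot settle the hard cases on its own.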

Pitfalls to Avoid

Of course, this is not the first attempt to fight toxicity in online spaces. In 2021, for example, Xinning Gui, PhD, of the Pennsylvania State University, gave a presentation at the Conference on Human Factors in Computing Systems. She described the instruments used to fight abuse in esports spaces and how players repurpose them for their own ends: for example, flagging their in-game opponents rather than actual abusers.

Sifan Hassan of Team Netherlands celebrates after winning the gold medal in the Women’s 10,000m Final on day fifteen of the Tokyo 2020 Olympic Games (Photo by Cameron Spencer / Getty Images via CFP)

The dishonesty of some users is not the only pitfall in fighting online abuse. One tough case was the recent Beijing Olympics, where alongside hate comments toward athletes, outright censorship was applied to Internet users. Many European athletes said they would not comment publicly on negative issues related to the Olympics until they had left China. In general, China's system for checking online messages targets posts and comments expressing anti-government views, not abuse.

How can we find a balance? How can we create a trusted online space for comfortable communication that avoids abuse while still respecting freedom of speech?

The Watchers Experience

We provide the ability to communicate and to create online communities around content, and FIFA's approach is similar to our own. But Threat Matrix, which partners with sports trade unions, has to tilt at windmills: global social networks have long-established policy terms, and their automated moderation systems cannot adapt to each event or situation.

It's easier for us, because we create the environment for users' communication ourselves, and we tune it specifically for each event. That means we know what to pay attention to at each level of moderation.

We create online rooms, or chats, where users can discuss sports events: whole tournaments or individual games. A chat appears before the match starts and disappears after it ends. This approach avoids one of Twitter's problems: its threads never disappear, whereas our chats do.

So, before starting support, we analyze similar events and the communities around them to find out what users want to discuss and how much hate they are likely to share. Then we tune auto-moderation with the appropriate black and white lists of words and phrases.
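List-based pre-moderation can be sketched in a few lines. This is a deliberately simplified, hypothetical version: the list contents are invented for one imaginary event, and a real system would of course handle phrases, misspellings, and context, not just single tokens.

```python
# Minimal sketch of black/white list pre-moderation (all entries invented).
# Words on the blacklist are blocked unless the whitelist overrides them,
# e.g. "gorilla" is allowed at an event featuring a player with that nickname.

BLACKLIST = {"awfulword", "gorilla"}
WHITELIST = {"gorilla"}

def pre_moderate(message: str) -> bool:
    """Return True if the message passes the automated first pass."""
    tokens = message.lower().split()
    return not any(t in BLACKLIST and t not in WHITELIST for t in tokens)
```

Messages that fail this first pass are not necessarily deleted; they are the ones routed to the next, human level of moderation.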

This is needed to stop the things nobody wants to see in chats: aggression, hate speech, provocation and flooding. A chat should be a place where any user feels safe and comfortable. We are sure people communicate and share opinions more willingly and freely when they are not attacked because of their identity.

When the preparations are finished, it's time for the event. While it is going on, online chats are moderated at three levels: automated pre-moderation, main moderation by a real person, and moderation by users themselves.

Chats and online rooms are also places for communication and sharing emotions. Sometimes those emotions are negative, and if they are not directed at athletes or other users, why shouldn't users be allowed to share their reaction to a loss?

Automated moderation relies on more than the pre-set black and white lists. During the event, moderators feed new information to the AI, which helps it learn and adjust to a particular event when something happens that we didn't predict during setup. Moderators can add variants of toxic phrases, or do the opposite and withdraw entries that users send in regular conversation.
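The idea of adjusting the lists while an event is live can be sketched as a small stateful filter. This is an illustrative toy, not our production API: the class name, methods, and example words are all invented.

```python
# Hypothetical sketch: moderators update the black/white lists at runtime,
# so the automated filter adapts to what actually happens during the match.

class AutoModerator:
    def __init__(self, blacklist: set, whitelist: set):
        self.blacklist = set(blacklist)
        self.whitelist = set(whitelist)

    def add_toxic_variant(self, phrase: str) -> None:
        """Moderator reports a new toxic spelling seen during the event."""
        self.blacklist.add(phrase.lower())

    def withdraw(self, phrase: str) -> None:
        """Moderator removes a false positive that users send in normal talk."""
        self.blacklist.discard(phrase.lower())
        self.whitelist.add(phrase.lower())

    def allows(self, message: str) -> bool:
        tokens = message.lower().split()
        return not any(t in self.blacklist and t not in self.whitelist
                       for t in tokens)
```

The point of the design is that the human stays in the loop: every list change is a moderator's decision, and the automation only applies it at scale.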

The second level is live moderation: moderators read each message and check it for compliance with the chat rules. This helps prevent bullying while preserving messages that don't break the rules, e.g. users' comments about losing bets.

Users themselves can hide particular messages, or all messages from a particular toxic participant, and no longer see them in the chat. This doesn't ban the user who wrote them, but it makes the space more comfortable.
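This third, user-side level is essentially a per-viewer filter: hidden messages disappear only from that viewer's chat, and the sender is not banned. A minimal sketch, with invented names and a simple dict-based message shape:

```python
# Hypothetical sketch of user-level moderation: each viewer keeps their own
# mute and hide sets, which affect only what that viewer sees.

class ChatView:
    def __init__(self):
        self.muted_users = set()        # hide everything from these authors
        self.hidden_message_ids = set() # hide these individual messages

    def mute_user(self, user_id: str) -> None:
        self.muted_users.add(user_id)

    def hide_message(self, message_id: str) -> None:
        self.hidden_message_ids.add(message_id)

    def visible(self, messages: list) -> list:
        """Filter out messages this viewer chose to hide; no one is banned."""
        return [m for m in messages
                if m["author"] not in self.muted_users
                and m["id"] not in self.hidden_message_ids]
```

Because the filter lives on the viewer's side, two users in the same room can see different subsets of the same chat.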

So there are three levels, and at each of them a live person is crucial: with AI support, they evaluate the abusiveness of every message and can influence the situation in real time.

In the following articles, we will tell you what methods of lexical analysis are used in AI moderation, how to correctly evaluate the semantics of messages, which toxic topics we highlight in online chats and how they differ, and how to detect an abuser quickly. Subscribe to follow!

Text by Alina Kuzio

Watchers is a white-label solution that lets you create online rooms on your video platform, where users can watch content together while chatting by text or voice. Watchers started as a B2C app and is now a SaaS used by companies to showcase their content and boost their business through social mechanics.

If you want to learn more or to integrate our solution into your platform, contact us: contact@watchers.io
