Abuse in Sports Online Communities: How Do We Handle It?

Watchers
7 min read · Jul 7, 2022


We take a deep look at the latest studies about sports fans’ toxic behaviour online.

What the reports tell us

Since 2020, international sports associations, trade unions, and independent analysts have been tracking how many hateful attacks athletes face during competitions. This awareness has led to extensive research on abusive behaviour in online sports communities. FIFA, FIFPRO, The Conversation, World Athletics, Australian sporting organizations, eSafety, the NBPA, and others have presented their own studies.

Studies have been conducted on different sports in different countries. Their nuances differ, but the essence is the same: the core of online abuse is homophobia, racism, sexism, ableism, islamophobia, transphobia, and threats, listed here from most to least frequent when the picture is taken as a whole. In individual leagues, however, different phobias and forms of discrimination come to the fore.

“In recent times, we have seen female athletes, those from Aboriginal and Torres Strait Islander and diverse cultural and multilingual backgrounds increasingly becoming the targets of unimaginable online abuse, hatred, misogyny and racism.” — eSafety Commissioner Julie Inman Grant, Australia.

The EURO 2020 Final, which England lost and Italy won, revealed an unacceptable level of abuse among European and English fans. The Africa Cup of Nations 2021 Final highlighted fans' homophobia. Misogyny and sexism traditionally accompany women's competitions. The Summer Olympic Games in Tokyo, the first gender-balanced Games (women made up 49% of participants), nevertheless showed the highest level of sexism: according to studies, 87% of all abusive online messages were addressed to women.

Is it Because of Social Media?

Toxicity breeds toxicity. Bullying starts against athletes and then spreads to other fans, training staff, and athletes' relatives and friends. Toxic comments and messages have no expiry date: posted to public accounts on Twitter, Instagram, and other social media, they keep getting impressions and being shared.

You have probably noticed abusive comments if you have ever looked at the pages and accounts of sports clubs and individual athletes, especially during a match or after a loss. You may also have seen Twitter threads of this kind become popular and make it to the top.

If you are not a social media user, you may know from traditional media that athletes and sports organizations have joined social media boycotts more than once. In April 2021, the Premier League, the English Football League, and the Women's Super League boycotted social media for four days over the platforms' persistent disregard for abuse. Yet only months later, the EURO 2020 Final triggered a wave of abuse against the English team that convinced FIFA to take action.

Real steps

Many people think the reason for abusive behaviour on social media is anonymity, but it is not that simple. Social media is not a space where you can be completely anonymous, and the author of a hateful message can usually be identified. After the waves of bullying around the EURO 2020 Final, many online abusers were reported to the police.

Gareth Southgate consoles Bukayo Saka, who missed England’s final penalty in the shootout ©AFP

Twitter and Instagram remain spaces where bullying breeds bullying, even though the authors of abusive threads are often identifiable or easily revealed. The trigger can be a game score, a player's mistake during the game, or even an athlete's appearance in a photo on their personal account. When anything can be a reason, there are no reasons at all.

Following these studies, FIFA plans to launch moderation services across social media that will automatically search for toxic posts. According to Threat Matrix, a two-level check allows difficult cases to be resolved. FIFA gives the example of an Italian player nicknamed "Gorilla": in his case the gorilla emoji is perfectly acceptable, yet the same emoji is a toxic marker in racist contexts.
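To make the idea concrete, here is a minimal sketch of how such a two-level, context-dependent check could work. The emoji set, the "Gorilla" allowance, and the function names are our own illustration, not Threat Matrix's or FIFA's actual implementation.

```python
# Illustrative sketch only: a context-dependent check in the spirit of the
# two-level approach described above. The emoji set, the "gorilla" allowance,
# and the function names are hypothetical, not the actual implementation.

POTENTIALLY_ABUSIVE_EMOJI = {"🦍", "🐒", "🍌"}

# Contexts in which a normally suspicious symbol is acceptable,
# e.g. a player whose well-known nickname is "Gorilla".
ALLOWED_CONTEXTS = {"🦍": ["gorilla"]}


def first_pass(message: str) -> bool:
    """Level 1: cheap emoji/keyword screening. True means 'suspicious'."""
    return any(emoji in message for emoji in POTENTIALLY_ABUSIVE_EMOJI)


def second_pass(message: str) -> bool:
    """Level 2: context check. True means 'escalate to a human reviewer'."""
    text = message.lower()
    for emoji, allowed_terms in ALLOWED_CONTEXTS.items():
        if emoji in message and any(term in text for term in allowed_terms):
            return False  # acceptable in this context (e.g. the nickname)
    return True


def needs_review(message: str) -> bool:
    return first_pass(message) and second_pass(message)


print(needs_review("Great goal by the Gorilla 🦍!"))  # False: nickname context
print(needs_review("🦍🦍🦍 aimed at a Black player"))   # True: escalate
```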

Pitfalls to Avoid

Of course, this is not the first attempt to fight toxicity in online spaces. In 2021, for example, Xinning Gui, PhD, of Pennsylvania State University, gave a presentation at the Conference on Human Factors in Computing Systems. She described the tools used to fight abuse in esports and how players misuse them for their own benefit, for example by flagging opponents rather than actual abusers.

Sifan Hassan of Team Netherlands celebrates after winning the gold medal in the Women’s 10,000m Final on day fifteen of the Tokyo 2020 Olympic Games (Photo by Cameron Spencer / Getty Images via CFP)

The dishonesty of some users is not the only pitfall in fighting online abuse. One tough case was the recent Beijing Olympics, where, alongside hateful comments towards athletes, real censorship was applied to Internet users. Many European athletes said they were not ready to say anything in public about negative issues related to the Olympics until they had left China. In general, China's system for screening online messages targets posts and comments that express an anti-government point of view, not abuse.

How do we find a balance? How do you create a trusted online space for comfortable communication that prevents abuse while still respecting freedom of speech?

Watchers’ experience

We provide ways to communicate and build online communities around content, and FIFA's approach is similar to our own. But Threat Matrix, which partners with sports trade unions, has to tilt at windmills: on global social media platforms, whose policy terms are already fixed, moderation is automated and cannot adapt to each event or situation.

It is easier for us because we create the environment for users' communication ourselves, and we do it specifically for each event. That means we know what to pay attention to at each level of moderation.

We create online rooms, or chats, where users can discuss sports events, whether whole tournaments or individual games. A chat appears before the match starts and disappears after it ends. This approach avoids one of Twitter's problems: its threads never disappear, whereas our chats do.
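As a rough illustration of this lifecycle, the sketch below models chats that open shortly before kick-off and are swept away after the final whistle. The class names, time windows, and in-memory registry are assumptions for the example, not our production code.

```python
# A rough sketch of event-scoped chats, assuming an in-memory registry and
# fixed time windows; the class names and fields are illustrative.

from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class EventChat:
    event_id: str
    opens_at: datetime   # shortly before kick-off
    closes_at: datetime  # shortly after the final whistle

    def is_open(self, now: datetime) -> bool:
        return self.opens_at <= now <= self.closes_at


class ChatRegistry:
    def __init__(self) -> None:
        self._chats: dict[str, EventChat] = {}

    def create_for_match(self, event_id: str, kickoff: datetime) -> EventChat:
        chat = EventChat(
            event_id=event_id,
            opens_at=kickoff - timedelta(minutes=30),
            closes_at=kickoff + timedelta(hours=3),
        )
        self._chats[event_id] = chat
        return chat

    def sweep(self, now: datetime) -> None:
        """Remove chats whose events are over, so threads do not live on."""
        self._chats = {k: c for k, c in self._chats.items() if now <= c.closes_at}
```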

So, before we start supporting an event, we analyse similar events and communities to find out what users want to discuss and how much hate they are likely to share. Then we tune the auto-moderation, adjusting the lists of blocked words and phrases.

This is needed to stop the things nobody wants to see in chats: aggressive and hateful speech, provocation, and flooding. Chats are places where every user should feel safe and comfortable. We are sure that people communicate and share their opinions more willingly and freely when they are not attacked because of their identity.
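For readers who want a concrete picture, here is a simplified sketch of event-specific pre-moderation: a blocked-phrase list tuned before the event plus a basic flood rule. The patterns, thresholds, and helper names are placeholders chosen for illustration, not our real filters.

```python
# A simplified sketch of event-specific pre-moderation: a blocked-phrase list
# tuned before the event plus a basic flood rule. Patterns, thresholds, and
# helper names are placeholders.

import re
import time
from collections import defaultdict, deque

# Tuned before each event based on similar past events (see above).
EVENT_BLOCKLIST = [r"\bexample_slur\b", r"\bexample_threat\b"]  # placeholders
BLOCK_PATTERNS = [re.compile(p, re.IGNORECASE) for p in EVENT_BLOCKLIST]

FLOOD_WINDOW_SECONDS = 10
FLOOD_MAX_MESSAGES = 5
_recent: dict[str, deque] = defaultdict(deque)


def violates_blocklist(message: str) -> bool:
    return any(p.search(message) for p in BLOCK_PATTERNS)


def is_flooding(user_id: str) -> bool:
    now = time.time()
    q = _recent[user_id]
    q.append(now)
    while q and now - q[0] > FLOOD_WINDOW_SECONDS:
        q.popleft()
    return len(q) > FLOOD_MAX_MESSAGES


def pre_moderate(user_id: str, message: str) -> str:
    if violates_blocklist(message):
        return "blocked"
    if is_flooding(user_id):
        return "rate_limited"
    return "passed_to_live_moderation"
```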

Once the preparations are finished, it is time for the event. While it is going on, online chats are moderated at three levels: automated pre-moderation, the main moderation by a real person, and users' own moderation.

Chats and online rooms are also places to communicate and share emotions. Sometimes those emotions are negative, and if they are not directed against athletes or other users, why shouldn't users be allowed to share their reaction to a loss?

Automated moderation relies on more than the pre-set block lists. During the event, moderators feed new information to the AI, helping it learn and adjust to the specific event when something happens that we did not predict during preparation. Moderators can add variants of toxic phrases or, conversely, remove entries that users send in ordinary conversation.
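The sketch below shows one way such mid-event adjustments could look: moderators add newly spotted toxic variants or whitelist phrases that turned out to be ordinary conversation. The class and method names are hypothetical and only illustrate the idea.

```python
# A hedged sketch of how live moderators could adjust the automated filter
# during an event; the class and method names are hypothetical.

class AdaptiveFilter:
    def __init__(self, blocked: set[str] | None = None) -> None:
        self.blocked: set[str] = set(blocked or [])
        self.allowed: set[str] = set()  # phrases whitelisted for this event

    def add_toxic_variant(self, phrase: str) -> None:
        """A moderator spotted a new spelling or variant of a toxic phrase."""
        self.blocked.add(phrase.lower())

    def allow_phrase(self, phrase: str) -> None:
        """A moderator decided this phrase is normal conversation here."""
        self.allowed.add(phrase.lower())
        self.blocked.discard(phrase.lower())

    def is_blocked(self, message: str) -> bool:
        text = message.lower()
        if any(phrase in text for phrase in self.allowed):
            return False
        return any(phrase in text for phrase in self.blocked)


# Example (values are made up): a new spelling of an insult appears mid-event,
# and a harmless phrase was being blocked by mistake.
f = AdaptiveFilter({"example_insult"})
f.add_toxic_variant("exampl3 insult")      # new spelling seen mid-event
f.allow_phrase("example of a great save")  # ordinary phrase, let it through
```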

The second level is live moderation, where moderators read each message and check it for compliance with the chat rules. This helps prevent bullying while keeping messages that do not break the rules, for example users' comments about losing bets.

The third level is user moderation. Users can hide individual messages, or all messages from a particular toxic participant, so they no longer see them in the chat. This does not require a ban, but it makes the space more comfortable.
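Here is a minimal sketch of this user-side level, assuming each viewer keeps a personal list of hidden messages and muted users; the field names are illustrative, not our actual data model.

```python
# A minimal sketch of user-side moderation, assuming each viewer keeps a
# personal list of hidden messages and muted users; field names are illustrative.

from dataclasses import dataclass, field


@dataclass
class ChatMessage:
    message_id: str
    author_id: str
    text: str


@dataclass
class UserView:
    hidden_messages: set[str] = field(default_factory=set)
    muted_users: set[str] = field(default_factory=set)

    def hide_message(self, message_id: str) -> None:
        self.hidden_messages.add(message_id)

    def mute_user(self, author_id: str) -> None:
        self.muted_users.add(author_id)

    def visible(self, messages: list[ChatMessage]) -> list[ChatMessage]:
        return [
            m for m in messages
            if m.message_id not in self.hidden_messages
            and m.author_id not in self.muted_users
        ]
```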

So there are three levels, each of them crucial, with a live person at the centre who, supported by AI, evaluates how abusive each message is and can therefore influence the situation in real time.
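Putting the three levels together, a simplified pipeline could look like the sketch below. The stubs stand in for the mechanisms described above; none of this is our actual internal API.

```python
# A simplified sketch of how the three levels fit together. The stubs stand
# in for the mechanisms described above; this is not the actual internal API.

from enum import Enum


class Verdict(Enum):
    BLOCKED = "blocked"    # stopped by automated pre-moderation
    REMOVED = "removed"    # removed by a live moderator
    VISIBLE = "visible"    # shown, unless a viewer has muted the author


def automated_pre_moderation(message: str) -> bool:
    """Level 1: block lists and flood control (stub for illustration)."""
    return "example_slur" not in message.lower()


def live_moderator_approves(message: str) -> bool:
    """Level 2: a human, assisted by AI, checks the chat rules (stub)."""
    return True


def moderate(message: str) -> Verdict:
    if not automated_pre_moderation(message):
        return Verdict.BLOCKED
    if not live_moderator_approves(message):
        return Verdict.REMOVED
    # Level 3 happens on the viewer's side: each user can still hide this
    # message or mute its author for themselves.
    return Verdict.VISIBLE
```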

In the following posts, we will describe the methods of lexical analysis used in AI moderation, how to evaluate the semantics of messages correctly, what toxic topics we single out in online chats, how they differ, and how to detect an abuser right away. Subscribe to follow!

Text by Alina Kuzio

Watchers is a white-label solution that lets you create online rooms on your video platform. In these rooms, users can watch content together while chatting by text or voice. Watchers started as a B2C app and is now a SaaS product that companies use to showcase their content and grow their business through social mechanics.

If you want to learn more or to integrate our solution into your platform, contact us: contact@watchers.io
