News

PFA - 4th August 2021 - Online Abuse AI Research Study

September 8th 2021

Today, a PFA-funded research project carried out by ethical data science company Signify found a 48% increase in unmoderated racist online abuse in the second half of the 2020/21 football season, with 50% of abusive accounts coming from the UK.

Despite social media platforms pledging better moderation, Signify's machine-learning AI systems found more than three-quarters of the 359 accounts sending explicitly racist abuse to players were still on the platform. As of July 2021, the vast majority of these accounts remain unsanctioned.

Additionally, only 56% of racially abusive posts identified throughout the season had been removed; some posts remained live for months, and in some cases for the full duration of the season.

Former Manchester United and England defender Rio Ferdinand stated: “Now is the time for change. If we have this kind of technology at our disposal, why aren’t social media companies using it to eliminate racist and discriminatory abuse?”

Signify monitored more than six million social media posts on Twitter, looking at player accounts from the Premier League, Women’s Super League (WSL) and English Football League (EFL). The data in this report suggests platforms are concentrating on removing individual offensive posts rather than holding the people who write them accountable. Signify reported 1,674 accounts to Twitter during the 2020/21 season, a third of which were identified as being affiliated with a UK club. The report also found players across the leagues faced homophobic, ableist and sexist abuse. Homophobic abuse was included in 33% of abusive posts.

Watford captain and PFA Players’ Board representative Troy Deeney said: “Social media companies are huge businesses with the best tech people. If they wanted to find solutions to online abuse, they could. This report shows they are choosing not to. When is enough, enough? Now we know that abusive accounts and their affiliation to a club can be identified, more must be done to hold these people accountable.”

In May, the PFA joined a football-wide social media boycott to draw further attention to online abuse. However, despite an initial drop in offending posts, Signify’s data shows that racist abuse of players peaked during that same month. In July, following the barrage of racist abuse players faced after the EURO 2020 final, we again called on social media networks to make good on their promises to tackle racist abuse on their platforms.

PFA Chief Executive Maheta Molango said: “The time has come to move from analysis to action. The PFA’s work with Signify clearly shows that the technology exists to identify abuse at scale and the people behind offensive accounts. Having access to this data means that real-world consequences can be pursued for online abuse. If the players' union can do this, so can the tech giants.”

Additional points:

  • 1,781 offensive tweets from 1,674 accounts – all of which have been reported to Twitter for sanctioning.
  • The PFA has been in direct contact with clubs where abusive accounts have been identified as having a clear affiliation (fan, member, season ticket holder) in order to support any club-based action.
  • Where tweets have passed a clear criminal threshold, the PFA has passed information to law enforcement.
  • More than 100 posts containing direct, discriminatory abuse and serious threats were identified in each month of the 2020/21 season.
  • 19% of abusive, racist posts are likely to have been removed directly by the user.
  • 50% of abusive accounts whose location could be identified came from the UK.
  • Homophobic abuse peaked in December 2020, often corresponding with campaigns against homophobia, such as Rainbow Laces.
  • Players at 11 of the 12 clubs in the WSL were affected by discriminatory abuse online.

Read the full report.