New study shows that racism and bigotry are part of the everyday norm on social media

Social media is no longer in its infancy, and reports of friendships reconnected and loves won and lost have been replaced by far more alarming news: social media has become a platform for hate speech, racism, misogyny and bigotry. Growth in online hate speech has been so sharp over the last 10 years that international organisations, including the UN Human Rights Council and the European Commission against Racism and Intolerance, as well as national governments across Europe, namely the UK, Germany, France and Italy, have demanded that the giant corporations behind the social media platforms do more to address the issue. However, recent research has demonstrated that, while more needs to be done at the corporate level, national governments and other organisations could also take action to make it harder to disseminate bigotry, hate speech and racism online.

To this end, sociologist Dr Luiz Valerio P. Trindade has undertaken a study into the effect of racism and bigotry on social media and developed key recommendations that could stem the practice, help to prevent online racism and bigotry from taking place, and lay the foundations for eradicating it from social media in the years to come. The research reveals that individual Facebook posts containing racist and hateful content can continue to engage users for up to three years after the initial post. Far from being a single moment of instant communication, these hateful posts have a life that extends over years. Perhaps more worryingly, the research shows that social media is treated as a ‘No Man’s Land’ by those who disseminate hate and bigotry: these individuals feel at liberty to say whatever they like and believe that they are safe from the authorities.

Luiz Valerio P. Trindade has developed clear recommendations that, if adopted, could make a difference to the types of behaviour that take place on social media. The recommendations include:

  • Raising the issue with secondary school pupils and helping them to vocalise and develop their understanding of the social consequences of online racism and hate speech. This is critical because social media is used so widely by 13 to 17 year olds, with over 1.8 million users of this age in the UK alone.
  • Educational campaigns at national level to highlight that online life is not separate from real life; the two are intertwined.
  • Working to ensure that the corporations behind the social media platforms have smarter and faster processes to remove inappropriate content that has been flagged by their users or spotted by the powerful algorithms the corporations already have in place.
  • Calling for these corporations to make clear that their platforms are not a safe space for users to convey racist views or promote bigotry. Instead, these corporations need to set out clearly that identification data could be disclosed to the authorities, who could then hold users to account for their online conduct.

Luiz Valerio P. Trindade, PhD in Sociology, University of Southampton, said: “The increasing trend of construction and dissemination of hate speech, bigotry, misogyny and racism is apparently becoming the ‘new normal’ in the digital landscape across several European countries and also in the UK. Moreover, rather than fading away soon after publication, derogatory posts oftentimes become powerful magnets, attracting new users to the same conversation for up to three years, which can potentially increase the initial harm caused to the victim of the verbal abuse.”

 

E: lindsey@whowhyhow.co.uk

T: 07946 545 083