Why Trolling Is So Prevalent
According to Whitney Phillips, an academic specialising in digital culture who has written two books on the subject of trolling, our troll problem is a culture problem. Phillips argues that trolling actually fits comfortably within our current media landscape full of click-bait content, scaremongering and sensationalism – that it may be obscene, but it isn’t all that deviant.
She describes, for example, the relationship between trolling and sensationalist corporate media – pointing out that, for trolls, exploitation is a leisure activity; for media, it's a business strategy. She also believes trolls are “engaged in a grotesque pantomime of dominant cultural tropes”, including gendered notions of dominance and success and an ideology of entitlement.
Psychologists have largely put trolling down to something called the ‘online disinhibition effect’, a term coined in 2004, which describes the loosening of social restrictions and inhibitions that are normally present in face-to-face interactions. There are two types: ‘benign disinhibition’ and ‘toxic disinhibition’ – while benign disinhibition explains why people might self-disclose more on the internet than they would in real life, toxic disinhibition describes behaviour that includes rude language, threats, crime, and violence on the internet; things the person would likely never do in real life.
One of the principal factors behind online disinhibition is anonymity: when you feel anonymous, you feel protected. By remaining anonymous, you can distance yourself from your behaviour; you can instead compartmentalise it in an online identity that’s not integrated with your offline self. It explains why the majority of trolling takes place on forums, comment sections and social media sites, where anonymous profiles are allowed.
Who The Trolls Really Are
Bailey Poland, author of Haters: Harassment, Abuse, and Violence Online, says trolls aren’t just “teenagers in their parents’ basements” but, more commonly, “very well adjusted normal adult men with wives and children and full-time jobs.” Poland believes many fall into the camp of “very aggrieved middle-aged white men who feel like they weren’t given the fantasy life they were owed” and, as such, they take it out on people who are “happy, successful or present in a way that makes them feel uncomfortable”.
Phillips also sees trolling as a gendered, privileged activity. "When you occupy a privileged position, you have the choice whether or not you can take your own word seriously, or the words of others,” she says. “You can essentially fetishise just punchlines of situations without actually being impacted by racism [or sexism]. You don't have to think about the lives being impacted."
Her latest book, The Ambivalent Internet, addresses the rise of a particular kind of online bigotry, where ambivalence acts as a veil for hate speech – which Phillips links to the prevalence of nihilism online. "Irony and cynicism is baked into the DNA of so much internet culture," she explains, tracing this strand of trolling back to 2003, when President Bush advised the public to combat post-9/11 terrorism by going to Disneyland. During this time, Phillips says, certain online forums developed a signature tone in which nothing but "lulz" mattered.
Once restricted to these niche communities, this type of trolling has now been brought into the mainstream by social media. And in the wake of Trump’s successful presidential campaign, which normalised identity-based antagonism, people with racist, sexist and homophobic views now feel more comfortable sharing them publicly. “The tenor of discourse that Trump and his associates have been engaging in has emboldened some people to say ‘we are in charge now and we can say whatever we want’,” theorised Kate Miltner, a PhD student at the USC Annenberg School for Communication and Journalism.
But, interestingly, trolling isn’t only conducted by a certain group of toxic people. Previous research shows that it’s widely distributed across the whole online population – one study of Wikipedia talk pages found that only about 20% of abusive comments were written by established trolls with the remaining 80% being one-offs; another discovered that trolling comments were more likely to be posted late at night and early in the week, when people are most likely to be in a bad mood. It seems, therefore, any one of us is capable of becoming a troll.
The Rise Of Cyber Bullying
Trolling has become the media’s go-to descriptor for all problematic online behaviours, most notably cyberbullying. But there is a difference: whereas trolls post generally inflammatory statements to be a nuisance to online communities and attract attention to themselves, cyberbullies target individuals, posting vicious things about a single person in order to demean and hurt them. Trolls are indifferent to the harm their comments may cause; cyberbullies actively want to cause their victims distress.
But the prevalence and normalisation of trolling has been cited as one aspect behind the shocking rise in cyberbullying – according to the NSPCC, the number of children seeking help for cyberbullying increased by 88% in the five years to 2016. Around one-third of young people claim to have been victims, and new research suggests it can have damaging and deadly consequences.
Psychologists at the Universities of Oxford, Swansea and Birmingham reviewed previous studies on cyberbullying which involved more than 150,000 under-25s, across 30 countries over a 21-year period, and found that cyberbullying raised the risk of self-harm or suicidal behaviour 2.3 times. They also found a strong link between being a cyber victim and being a cyberbully – which put males in particular at a 20% higher risk of depression and suicidal behaviours. The research also found that students who were cyber-victimised were less likely to report and seek help than those victimised by more traditional means.
A New Trend Of Digital Self-Harm
Recent reports have also revealed teens are trolling themselves online in a process known as ‘digital self-harming’ or ‘auto-trolling’. Digital self-harmers anonymously post hurtful comments or content about themselves on social media, which range from the relatively harmless: "no one likes your photos"; to the far more serious: "kill yourself" and "you don't deserve to be alive".
Little research has been carried out into the trend as yet, but a study of teens last year found that around 6% of US students aged 12 to 17 had digitally self-harmed. Boys were more likely to engage in auto-trolling than girls, and LGBT students were nearly three times more likely, as were those who had experienced school bullying or cyberbullying, used drugs, or shown depressive symptoms.
Another piece of research, conducted in 2012, attempted to discover digital self-harmers' motivations. The psychologists speculated many did it as a “cry for help”; others wanted to appear “cool”; and some were aiming to “trigger compliments” and gain attention from peers.
What’s Being Done To Stop It
To combat cyberbullying and digital self-harm, experts have called for prevention tactics to be included in school anti-bullying policies, alongside broader concepts such as digital citizenship, online peer support for victims, advice on how an electronic bystander might appropriately intervene, and more specific interventions such as how to contact mobile phone companies and internet service providers to block, educate, or identify users.
The concept of blocking specific individuals from platforms once seemed the only solution to trolling, but methods are evolving rapidly. When it comes to the media, most news sites now prominently display community guidelines, and some publications – including the Globe, the Mail and the New York Times – have comment moderators, or ask readers to review other comments and assess them for civility. Some outlets have decided to ban comments altogether.
On social media, the latest trolling and cyberbullying prevention tools rely on artificial intelligence or machine-learning techniques, which can detect certain words or behaviours, such as bullying language: YouTube recently reported that its automated flagging algorithms are responsible for removing over 80% of the 8.2 million videos that have violated the site’s community guidelines.
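In practice, platforms use trained machine-learning classifiers for this, but the underlying idea of matching abusive language can be illustrated with a deliberately simple rule-based sketch (the blocklist and function below are illustrative assumptions, not any platform's actual system):

```python
# Toy sketch of keyword-based comment flagging. Real platforms rely on
# trained ML models; this rule-based version only illustrates the basic
# approach of detecting certain words or phrases, as described above.
BLOCKLIST = {"idiot", "loser", "kill yourself"}

def flag_comment(text: str) -> bool:
    """Return True if the comment contains a blocklisted phrase."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in BLOCKLIST)

print(flag_comment("You're such a loser"))    # True
print(flag_comment("Great article, thanks"))  # False
```

A flagged comment would then typically be queued for human review or automatic removal, rather than deleted outright on a single keyword match.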
And just last month, Twitter revealed a new strategy for promoting “healthy conversation”. The company plans to bury tweets from trolls, making them less visible in conversations and search results, even if these individual tweets don’t violate Twitter’s policies. “Less than 1% of accounts make up the majority of accounts reported for abuse, but a lot of what’s reported does not violate our rules,” the company wrote in a blog post.
“While still a small overall number, these accounts have a disproportionately large – and negative – impact on people’s experience on Twitter.”
To catch trolls in the act, Twitter is now using algorithms that detect suspicious behaviour regardless of the content of the actual tweets. This includes users who open up accounts without verifying their email address, users who open up multiple accounts at the same time, and users who repeatedly tweet at someone who doesn’t follow them. The social media site says it’s been testing this kind of muting in select markets over the past couple of months and has seen a notable drop in abuse reports.
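The behavioural signals the article lists can be imagined as a simple scoring rule. The sketch below is a hypothetical illustration only: the field names, weights and threshold are assumptions for the example, not Twitter's actual implementation.

```python
# Hypothetical sketch of behaviour-based troll detection, using the three
# signals described above: unverified email, multiple accounts opened
# together, and repeated tweets at users who don't follow back.
from dataclasses import dataclass

@dataclass
class Account:
    email_verified: bool
    accounts_from_same_source: int  # accounts opened at the same time (assumed field)
    replies_to_non_followers: int   # repeated @-mentions of strangers (assumed field)

def suspicion_score(acct: Account) -> int:
    """Count how many suspicious behavioural signals an account shows."""
    score = 0
    if not acct.email_verified:
        score += 1
    if acct.accounts_from_same_source > 1:
        score += 1
    if acct.replies_to_non_followers > 10:  # illustrative threshold
        score += 1
    return score

# An account scoring highly might have its tweets de-ranked, not removed.
acct = Account(email_verified=False, accounts_from_same_source=3,
               replies_to_non_followers=25)
print(suspicion_score(acct))  # 3
```

Note that this approach deliberately ignores tweet content: an account can be down-ranked for how it behaves even when no individual tweet breaks the rules.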
Research into online abuse is also being used to develop computer programs that can coach commenters as they write, with the aim of improving online conversations. The creators of one such program, Perspective, hope to help people feel comfortable expressing differing opinions without worrying about being mistaken for a troll or being met with backlash.
“Imagine if you could run your comment through an algorithm that will tell you whether it is constructive or not,” the Perspective website states. “Our algorithm will tell you whether you are contributing to the conversation. Then we may have more of the meaningful and less of the nasty.”
And less of the nasty, we most definitely need.