“We suck at dealing with trolls and abuse on the platform and we’ve sucked at it for years,” wrote Dick Costolo, former CEO of Twitter, in a memo leaked to the press last year. “It’s no secret and the rest of the world talks about it every day.”

Embarrassing as they were, to many the remarks sounded like a long-overdue admission. Online, abuse is depressingly common, and Twitter is far from the only space where trolls and bullies make hay. With surveys suggesting that over half of young people have been bullied online, and stories of individuals being relentlessly trolled still frequently in the news, you would be forgiven for thinking that nothing is being done to tackle the problem. But there are some signs that the trolls could be stopped.

It’s possible to predict quite early on whether a troll will end up being banned from a website

A string of recent research papers has explored how technology could help identify cyberbullies early, potentially preventing prolonged abuse. Take, for example, a study of abusive online comments that examined 40 million comments posted over 18 months on sites such as CNN.com and Breitbart.com. Co-author Cristian Danescu-Niculescu-Mizil of Cornell’s Department of Information Science explains that by looking at the behaviour of users, not necessarily the words they choose, it’s possible to predict quite early on whether they will end up being banned from a website or forum.

“Anti-social users tend to focus on a few threads. They write a lot, but only on a small number of threads instead of spreading out like other users would,” says Danescu-Niculescu-Mizil. By identifying this behaviour, the team was able to create an algorithm that quickly predicts whether users are being anti-social. Instead of looking for swear words or slang put-downs, the algorithm searches specifically for patterns of activity – and it seems to work across multiple sites.

“It turns out that there’s enough information in the first five or 10 posts to actually predict whether they’re going to be banned in the future with an accuracy of about 80%,” explains Danescu-Niculescu-Mizil.

The researchers stress that their algorithm is not intended as a computerised replacement for human moderators, one that would automatically ban users on the strength of their first few posts. Instead, the team hopes that flagging certain individuals early on would help human moderators warn anti-social users to cut out the bad behaviour before they come to assume they can get away with it. As Danescu-Niculescu-Mizil says, “You need humans in the loop.”
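To make the idea concrete, here is a minimal sketch in Python of how such a behavioural flag might work. It is not the researchers’ actual model: the single feature used here (how concentrated a user’s earliest posts are in their busiest thread), the cutoff values and the function names are all illustrative assumptions.

```python
# A minimal sketch of the behavioural-signal idea described above, not the
# researchers' actual model. Feature, cutoffs and names are illustrative.
from collections import Counter

def thread_concentration(thread_ids):
    """Fraction of a user's posts that fall in their single busiest thread.

    Anti-social users, per the study, write a lot but concentrate their
    posting in a few threads rather than spreading out.
    """
    counts = Counter(thread_ids)
    return max(counts.values()) / len(thread_ids)

def flag_for_review(first_posts, cutoff=0.6, min_posts=5):
    """Flag a user for *human* moderator review (never an automatic ban)
    once their earliest posts show a concentrated posting pattern."""
    if len(first_posts) < min_posts:
        return False  # too little signal in fewer than ~5 posts
    threads = [post["thread_id"] for post in first_posts]
    return thread_concentration(threads) >= cutoff

# A hypothetical user whose first six posts hit only two threads:
posts = [{"thread_id": t} for t in ["a", "a", "a", "a", "b", "a"]]
print(flag_for_review(posts))  # True: surface to a moderator for a warning
```

Note that the function only flags the account for a human to look at, echoing the “humans in the loop” point above; a production system would combine many such signals rather than a single threshold.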

Some initiatives aim to turn trolling against the trolls

Wired has also reported on an effort by SRI International – the research institute that spun out Siri before Apple acquired it for the iPhone – to identify cyberbullying using intelligent algorithms. The aim is to replicate, automatically, the kind of work currently done by human moderators. Further research has explored the characteristics of abuse on networks such as Instagram.

There have, of course, been more traditional efforts to tackle cyberbullying by better publicising what it is and how it happens. Often these efforts take place online, as with the recently launched UK government website Stop Online Abuse, which aims to curb the bullying of women and LGBT users.

Interestingly, there have also been initiatives to turn trolling against the trolls. One example was the Zero Trollerance campaign by activist group The Peng Collective. Here, Twitter bots were used to automatically target people whose tweets appeared to be abusive. These users were given a somewhat tongue-in-cheek offer to take part in a “self-help programme” to end their trolling.

“I wanted to do something that I suppose would kick up more dust and engage with the trolls,” comments Ada Stolz, who helped create the campaign, which sent outreach messages to thousands of Twitter accounts over a one-week period. “That’s what we did. We trolled people too – we created an army of bots.”

The idea that people who engage in bullying should be confronted with the consequences of their actions is a very powerful one. Indeed, it’s being used by anti-cyberbullying organisations working directly with young people.

Childnet is a British charity that frequently visits schools and organises activity days for pupils. Students are sometimes asked, for example, to work together on a short drama about sexting, playing the part of a pupil who has had a nude image of themselves shared around the school. “Sexting” as a form of bullying is disturbingly common among today’s teenagers, but the drama forces children to put themselves in the victim’s shoes.

Often teenage cyberbullying is hard to see, hidden from adults, and not always taken seriously when brought up

“It helps young people explore how they would feel in terms of the different characters in it,” explains Hannah Broadbent, a spokesperson for the charity. “You can see their thought patterns changing where before they might have said, ‘well, she shouldn’t have shared that picture in the first place, it’s her fault,’ to a position where they are basically empathising with the person.”

“That sort of thing is very, very good,” says Emma Short, a psychologist who specialises in the study of cyber-stalking and harassment. Short acknowledges how difficult it has been in recent years for parents and teachers, not just other children, to realise the impact of a cyberbully’s behaviour. Often the activity is hard to see, hidden from adults, and not always taken seriously when brought up.

Why do people bully each other at all? There are both sociological and psychological explanations. Sociologically, bullies tend to be engaged in a power play, seeking to establish dominance and some sort of hierarchy. The psychological reasons can be manifold – from responding to early experiences of inferiority to re-enacting bullying they have themselves experienced.

The key, says Short, is the cultivation of empathy. “The posting that you tend to do, especially bullying, seems to be done in isolation,” she notes. “The more we pool online communications into an interpersonal context for individuals, the more we can improve the degree of care that we take when we’re communicating.”

Indeed, there have even been efforts to ask users to reconsider a message before they hit send, based on the words they type into a text box. Such an approach is being used by teen discussion website A Thin Line.
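As a rough illustration of how such a prompt could work, here is a toy sketch in Python that checks a draft message against a word list before it is sent. The word list, function name and prompt wording are illustrative placeholders; whatever A Thin Line actually runs is likely far more sophisticated than simple keyword matching.

```python
# A toy sketch of the "pause before you send" idea, assuming a simple
# keyword trigger. The word list and wording are illustrative placeholders.
HURTFUL_WORDS = {"loser", "ugly", "stupid", "pathetic"}

def needs_second_thought(message: str) -> bool:
    """Return True if a draft message contains words that might hurt."""
    words = {word.strip(".,!?\"'").lower() for word in message.split()}
    return bool(words & HURTFUL_WORDS)

draft = "You're such a loser!"
if needs_second_thought(draft):
    # In a real interface this would be a gentle prompt, not a hard block.
    print("This message might hurt someone. Send it anyway?")
```

The design point is that nothing is censored: the sender is simply made to pause and see the person on the other end, which is exactly the empathy-building Short describes.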

This idea might appeal to Monica Lewinsky, who made a much-discussed speech last year about what she described as our problematic “culture of humiliation”. “The more we click on this kind of gossip, the more numb we get to the human lives behind it,” she said.

One of the difficulties in reinforcing this sense of responsibility digitally is the prevalence of the “bystander effect”. Often, people observe abuse going on but don’t do anything about it. For Short, this is one of the biggest barriers for any attempt to curb cyberbullying.

However, some interesting findings from the online battle arena game League of Legends could lead the way. Jeremy Blackburn is a researcher for Spanish technology company Telefónica I+D. In a recent study, Blackburn and his co-authors discovered that when players were prompted by their team-mates to report another player for abusive behaviour, they were more than 16 times more likely to do so. “If you had an explicit plea for help, like ‘report this guy, he’s a jerk,’” comments Blackburn, “then more people were reported. It kind of breaks the bystander effect.”

People seem to be slightly more reserved when there is voice involved – Jeremy Blackburn, researcher

Blackburn frequently plays League of Legends and other online games himself. Players can chat to each other in-game via text, though increasingly they also use headsets to communicate verbally. Blackburn himself has experienced a range of behaviour – including people throwing insults and threats.

However, Blackburn believes the nature of the medium can affect the severity of these insults. “I’ve noticed people seem to be slightly more reserved when there is voice involved,” he says. “It reduces anonymity a bit.”

It seems that it is possible to encourage people to rethink aggressive behaviour and not to ignore cyberbullying when they see it. Anything that foregrounds the human being behind the avatar might help. And rather than simply writing algorithms to try to manage the situation for us, it looks like there really is no substitute for engaging directly with others – even the bullies.
