Every day, more than 350 million photographs are uploaded to Facebook.

Add to that millions more videos, GIFs and text posts. With each one, there is a chance that malicious or deviant content will be a part of the mix. Consider all of the sites on the web, beyond Facebook, and the number of opportunities for inappropriate or damaging content is staggering.

The internet: more than just cats. (Credit: Alamy)

Enter social media risk defence teams. What was once watched over by volunteer online moderators has now been taken over by professionals monitoring the web 24/7, 365 days a year. In a merging of public relations crisis management, internet security and social listening, an entire industry has been born.


With the explosive growth of the internet and, specifically, user-generated content and companies’ social media presence, social defence is growing by the day. There are an estimated 250,000 to 350,000 people working as social media monitors globally and close to one million people working in online security and privacy, according to Hemanshu Nigam, former chief security officer of MySpace and founder of Los Angeles-based online safety consultancy SSP Blue. Those numbers, he said, are conservative estimates and are changing all the time.

So, who are these defenders?

“[It] is really the natural evolution of the online moderator [who] traditionally removed the ‘bad stuff’ and acted as part editor, part host in a community,” said Emma Monks, head of moderation, trust and security at Leeds, UK-based Crisp Thinking, a leading social risk defence firm. “Quite often it was a hobby job. They were volunteer members of the community and had a lot of autonomy in the decision making on what sort of content remained on display or was removed.”


The job is no longer a volunteer one, and the people now filling these roles respond to the findings of complex computer algorithms designed to find and filter out bad content. At Crisp, these so-called social risk analysts spend their day combing through well-known clients’ websites and social media, looking for inappropriate or damaging content. At other firms, these watchers are called social defenders or content moderators.

Adam Hildreth, founder of social risk defence firm, Crisp Thinking. (Credit: Crisp Thinking)

Their role

“One of the beliefs at Crisp is that brands and consumers should have this worry-free social media experience, not have to worry about all the bad things,” said Adam Hildreth, Crisp’s founder and chief executive officer. “Just like in real life, [where] we have the police force to ensure that we all feel safe, we should have that online.”


At only 31 years old, Hildreth is considered a veteran in the business. At age 14, he started a social network for teens called Dubit Limited, which became the most visited teen website in the UK. But, along with Dubit’s success, there was the concern over “online groomers”, paedophiles who troll the internet looking for victims, and other threats to site visitors. In 2006, Hildreth founded Crisp, “prompted by the rising threats to children, consumers, and brands online, through social media and messaging apps”, he said.

From there, Crisp has grown substantially and monitors companies’ social media presences for everything from a posted bomb threat on an airline’s Facebook page to animal rights activists bombarding a designer’s site over the use of fur. Hildreth counts some 200 global brands among the company’s clients and said that the staff of analyst defenders – around 200 people spread out globally – go through “billions” of pieces of content every month. (Disclosure: BBC.com is among Crisp’s global clients.)

Social risk firms monitor social media for everything from bomb threats to attacks on companies' websites. (Credit: Crisp Thinking)

Virtual watchmen

“We’re there to be that round-the-clock eyes and ears,” said Hildreth, watching and listening for anyone talking about their clients’ brands on their own channels or on the rest of the web. “If it’s a risk, then we either deal with it ourselves, removing content from [the brand’s] page, for example, or we’re getting the right person out of bed.”

Sometimes, that means picking up the phone to say, “Hey, look, this celebrity just got off a plane and said they had a terrible experience with your brand.” Or, they’ve taken a photo of your product, “put it out to eight million people, and said they can’t believe how bad the product is.”

Where crisis PR, social media and security meet: social risk analysis. (Credit: iStock)

At Crisp, the analysts are first taught definitions outlining types of risk online, both reputational risks for brands (lots of negative chatter about a recent ad campaign on the company website, for example) and wider risks that have safety implications for the public (someone threatening physical harm through a Facebook post, for instance). They then apply those definitions to content given to them for review from a wide range of social media platforms and classify the content into different types of risk.
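In practice, the first algorithmic pass that routes content to human analysts can be as simple as rule matching against category keyword lists. The toy sketch below is purely illustrative: the category names and keywords are invented for this example and do not reflect Crisp's actual definitions or systems.

```python
# Illustrative only: a toy first-pass filter that flags posts for human
# review by matching them against invented risk-category keyword lists.
RISK_RULES = {
    "safety_threat": ["bomb", "kill", "attack"],
    "brand_reputation": ["terrible", "worst", "scam"],
}

def classify(post: str) -> list[str]:
    """Return every risk category whose keywords appear in the post."""
    text = post.lower()
    return [category
            for category, keywords in RISK_RULES.items()
            if any(word in text for word in keywords)]

# A flagged post would then be queued for an analyst to review.
print(classify("There is a bomb on flight 123"))  # ['safety_threat']
print(classify("Lovely weather today"))           # []
```

Real systems are far more sophisticated, but the division of labour is the same: automation surfaces candidate risks at scale, and human analysts make the final, consistent classification.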


The analysts are also responsible for alerting clients about any risks and for taking further action to remove the content if required, such as deleting or reporting the content to the social media platform, law enforcement or other agencies, according to Monks.

No experience necessary

Surprisingly, analysts don’t need to have previous experience in the field, said Monks. For one, the industry is so niche that it would be hard to find people who have any; but, she added, experience simply isn’t necessary.


“The main quality I look for is the ability to assimilate the definitions for various risk types and apply them to social media content in a consistent way,” said Monks. “Consistency is absolutely key. Not just in order to supply the client with a quality service, but also to aid that client in maintaining a community space that is healthy and engaging.”

These are the people who see almost everything online. (Credit: Alamy)

The ability to be dispassionate and objective is important. “There are times when a social risk analyst needs to review content that they have a strong opinion on, but that cannot sway their decision making,” said Monks. And, since some content can be quite unsettling – for example, beheadings or child pornography – resilience is essential, she said. Great communication, team spirit and a positive attitude also help.

For most companies, having a social risk defence isn’t optional any more, one reason the field is growing, said SSP Blue’s Nigam. There is simply too much risk in not having one. “You absolutely cannot exist in the online space without having this layer of protection. It really is a requirement now where it wasn’t before,” he said. “People expect it, and your experiences are impacted by it.”
