Facebook and Twitter grilled over abuse faced by MPs
Facebook and Twitter executives have been grilled by MPs over how the social networks handle online abuse levelled at parliamentarians.
MPs argued that such hostility undermined democratic principles.
Twitter representative Katy Minshall admitted it was "unacceptable" that the site had relied wholly on users to flag abuse in the past.
She said there was more to be done, but insisted Twitter's response to abuse had improved.
Harriet Harman, chair of the Human Rights Committee, said: "There is a strong view amongst MPs generally that what is happening with social media is a threat to democracy."
SNP MP Joanna Cherry cited specific tweets containing abusive content that were not removed swiftly by Twitter.
One example was only taken down on the evening before the committee hearing, after Ms Cherry and other high-profile figures, including the journalist and activist Caroline Criado Perez, drew attention to the post.
"I think that's absolutely an undesirable situation," said Ms Minshall, Twitter's head of UK government, public policy and philanthropy.
Ms Cherry argued it was in fact part of a pattern in which Twitter only reviewed its decisions when pressed by people in public life.
MPs also questioned how useful automated algorithms were for identifying abusive content.
Facebook's UK head of public policy, Rebecca Stimson, said their application was limited.
For example, of two million pieces of bullying content, Facebook's algorithms could correctly identify only 15% as being in breach of the site's rules.
"For the rest you need a human being to have a look at it at the moment to make that judgement," she explained.
Labour MP Karen Buck said algorithms might not identify messages such as, "you're going to get what Jo Cox got" as hostile. She was referring to the MP Jo Cox who was murdered in June 2016.
"The machines can't understand what that means at the moment," said Ms Stimson.
However, both Ms Stimson and Ms Minshall said their social networks were gradually improving their systems and implementing tools to flag and block abusive content proactively, before it was even posted.