Facebook has launched a new feature allowing Instagram users to flag posts they think contain fake news to its fact-checking partners for verification. But questions remain as to whether it goes far enough to counter the amount of disinformation on the image-sharing platform.
The move is part of a wider raft of measures the social media giant has taken to tackle the problem of fake news on social media.
Facebook announced in May that it would start reducing the reach of false content on Instagram and gradually extend its fact-checking partnership to include the image-sharing platform. It also said it would start blocking hashtags and posts that spread anti-vaccine misinformation. Most recently, it tightened its political advertising rules ahead of next year's US presidential election.
Launched in December 2016 following the controversy surrounding the impact of Russian meddling and online fake news in the US presidential election, Facebook's partnership now involves more than 50 independent fact-checkers in over 30 countries.
"Misinformation is an issue I've personally spent a lot of time on. I'm proud that, starting today, people can let us know if they see posts on Instagram they believe may be false. There's still more to do to stop the spread of misinformation, more to come: https://t.co/SRYwvgqPaz" — Adam Mosseri (@mosseri), August 15, 2019
The new flagging feature for Instagram users was first introduced in the US in mid-August and has now been rolled out globally.
Users can report potentially false posts by clicking or tapping on the three dots that appear in the top right-hand corner, selecting "report", "it's inappropriate" and then "false information".
Facebook provides fact-checkers with a dashboard of flagged posts. Fact-checkers can also check content of their own choosing. They use a rating system to determine whether it contains false or misleading information.
The usual procedure on Facebook is that stories rated false, containing a mixture of accurate and inaccurate claims, or carrying false headlines appear less prominently in users' news feeds. Accounts, pages and groups that repeatedly share misleading stories will be notified and face restrictions on distribution and on their ability to make money from advertising.
Instagram's new flagging tool uses a similar dashboard and rating system. Posts rated as false by fact-checkers will be downgraded in its hashtag search and explore pages, two of the main ways people discover new posts on the platform.
Facebook says that if a fact-checker rates a story as false on Facebook and it then appears on Instagram, the fact-checker can click an extra button to apply the rating there as well.
One notable difference is that an Instagram user whose content is reported and rated will not be notified.
Stephanie Otway, a spokeswoman for Instagram, said: "This is an initial step as we work toward a more comprehensive approach to tackling misinformation."
'Quite a lot of harm'
BBC News spoke with two fact-checking partners, who welcomed the new feature and praised Facebook's track record of responding to feedback.
But they have concerns about the extra workload it creates and uncertainty over how much it will actually reduce the spread of misinformation on the platform.
"Parts of the [Instagram] tool are less well-defined than the Facebook one. The amount of content that is sent to us is much smaller," says Aaron Sharockman, the executive director of PolitiFact, a US non-profit that has been using the tool.
PolitiFact recently debunked an Instagram hoax claiming 12 restaurant chains including McDonald's, KFC and Pizza Hut had come out in support of President Donald Trump's 2020 campaign.
Mr Sharockman says he had only been given access to 31 Instagram posts on the day of his phone interview with the BBC.
"They seem to have trouble finding out what constitutes misinformation. I am getting quite a lot of posts that I would not rate as false.
"I want to see more posts and be able to search within posts. Searching based on keywords is difficult on Instagram, especially if those words are in a photo or meme."
He says he wants to focus on "pure misinformation" relating to politics and the 2020 US election, and especially regarding posts seen by large numbers of people.
Will Moy, chief executive of Full Fact - a UK charity that joined the programme in January - says Facebook must be more transparent about the impact of fact-checkers' work and share more data with them.
"It is very important that platforms help to educate people about misinformation and the risks that exist on the internet and social media," he says.
Full Fact published an assessment of the first six months of its partnership in June, calling on Facebook to expand its fact-checking programme to Instagram.
Mr Moy says misinformation about vaccination, 5G phone technology and toxins in food and drink is prevalent on Instagram.
"We have seen enough to know that health misinformation on Instagram is a real danger to people and can potentially cause quite a lot of harm."
'Most effective platform'
Instagram boasts one billion monthly active users. Two reports for the US Senate Intelligence Committee in December about the extent of Russia's disinformation campaign in the 2016 election highlighted the platform for generating a high volume of engagement for misleading memes and images.
One report by the research firm New Knowledge said Instagram was "perhaps the most effective platform" for Russian troll farm the Internet Research Agency (IRA), adding: "Instagram engagement outperformed Facebook, which may indicate its strength as a tool in image-centric memetic (meme) warfare."
The other, by University of Oxford's Computational Propaganda Project and the social network analysis firm Graphika, said the IRA's activity on Instagram rose by 238% after the election. Many of the misleading memes specifically targeted African-American voters or centred on issues like gun rights.
Samantha Bradshaw, a researcher for the Computational Propaganda Project, says that viral images and videos are "the future of disinformation and fake news", making Instagram an ideal platform for such operations.
"No-one has time to read a long piece containing false information anymore. People want a brief, digestible and sometimes humorous image or video carrying a specific political message," she says.
Flagging tools for users are "positive steps in the right direction", she says, but do not address the root causes of online disinformation.
"Platforms should think deeply about the broader issues that cause false content to go viral, rather than content that is true. They need to address concerns regarding their algorithms and business models."
Instagram hopes that, in future, it can train machine-learning algorithms to detect false content automatically, reducing its reliance on fact-checking services and user reports.
But experts believe some level of human intervention will always be required to ensure the validity of the process.
"Ultimately, I expect that some aspects of identifying disinformation will be reliably covered by AI," says Ben Nimmo, head of investigations at Graphika. "But that means training enough analysts in the first instance to be able to create a large and reliable enough dataset, and then training the AI on the datasets and making sure it is up to the task, including by using human checks as a backstop."
Mr Moy says although AI will certainly help track the dissemination of fake news, it cannot "replace the human judgement in a way that respects the free speech of everyone involved."
"We could probably get to a place where algorithms and computers can spot fake news with a high degree of certainty," says Mr Sharockman. "But for people to believe it, we need humans to remain involved in the process."