Mark Bridger's conviction for the murder of five-year-old April Jones has once more brought the issue of online child abuse to the fore. Many are in agreement that more needs to be done by web companies to block and remove such content. But what exactly is being done now, and how effective is it?
It was on 10 February 2012 that the net finally closed in on Darren Leggett.
He was arrested at his parents' home in Kent. Police found a rucksack containing cable ties, a knife and 13 pairs of boys' pants.
His mobile phone showed he had earlier sent a text. It read: "I'm working on a plan to take one soon to rape and kill and eat."
He had been using the internet both to find such content and to share it with like-minded criminals.
But it was that same internet that directly led to his imprisonment a year ago next month.
An anonymous internet user had discovered the images and tipped off the Internet Watch Foundation (IWF), a UK-based organisation that works closely with the internet industry, police and government.
Beyond simply ensuring the images were removed, that single report set in motion a sting operation led by Kent Police in which officers posed as men wanting to pay to sexually abuse a child.
Leggett was jailed indefinitely last June on 31 counts of child sexual abuse, ending a horrific campaign that had lasted more than seven years.
The case has similarities to that of Mark Bridger, who tragically evaded detection for his online activities before carrying out his crime.
Then, as now, debates over unlawful content online surfaced. How was it that people like Leggett and Bridger were able to access and share such shocking content online, for so long, without being noticed?
Why were there not measures in place to prevent material so obviously of child abuse ever getting online?
The message today from internet firms is that they are doing what they can - but that more help, investment and co-operation is needed.
The IWF estimates that, in the year to March, about 1.5 million people in the UK accidentally came into contact with child abuse images online.
Yet the number of reports it receives is far smaller - a sign many are either unaware of how to report such material or unwilling to do so.
"Last year we received just under 40,000 reports. There's a major difference there," said Emma Lowther, IWF's spokeswoman.
"The fact is, we all have a responsibility to tackle this. We don't claim our methods are foolproof but ultimately we need really, really good reports from the public to the IWF so that we can take action."
The IWF has a number of processes it says are effective in tackling the problem.
Reports made about content hosted in the UK are typically dealt with within 60 minutes. Internationally it is a more complex picture - but a global network of hotlines, known as INHOPE, acts as a communication channel between the IWF and similar organisations and governments in other countries.
Hosting companies, whose servers may have been used for storing illegal content without their knowledge, are also contacted.
Behind the scenes, efforts are made, in co-operation with the banks, to disrupt the payment channels criminals use to profit from such materials.
But many argue that such images should never even be viewable online.
They say companies that allow us to connect, browse and search the internet should step up their efforts to block out illegal content.
Specifically, Google has come under renewed pressure to improve its systems.
"Google's moral leadership is essential here," government technology adviser John Carr said on BBC Radio 4's Today programme.
"They are the biggest player in this space in the world."
In recent years, the search giant has implemented a system which allows search results directing users to child abuse to be removed quickly.
The search engine is a joint-funder of the IWF, and says it has a "zero tolerance" policy over such images.
Richard Cox, an experienced cybercrime investigator, said Google had greatly improved its handling of the issue.
"They've improved their response time, so that when you get a report to them it's down pretty quickly," he told the BBC.
"In the old days it was a day or so - if it takes a day or so every time, it's pretty ineffectual."
However, he stressed that getting Google to deal with the issue was only a short-term, and arguably fruitless, fix.
"If Google block it, there's Yahoo, there's Bing, there are Russian and Chinese services."
'Real physical abuse'
Mr Cox also praised ISPs such as BT - the UK's largest - which has rolled out a system known as Cleanfeed to its customers.
It draws on databases and research run by the likes of the IWF and Interpol to block known troublesome websites from even reaching the user's home.
The system mostly relies on knowing the URL - web address - of the illegal content. While effective in many cases, it falls short of complete protection.
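As a rough illustration, URL-level filtering of this kind can be thought of as a lookup against a blocklist of known addresses. The sketch below is hypothetical - Cleanfeed's actual design is not public, and the blocklist entries here are invented - but it shows why the approach struggles: content moved to a new address no longer matches.

```python
# Minimal sketch of URL-based filtering of the kind the article
# describes. Illustrative only: the real system's design is not
# public, and these blocklist entries are invented placeholders.
from urllib.parse import urlsplit

# Hypothetical blocklist of known-bad addresses, as might be
# compiled by a body such as the IWF.
BLOCKLIST = {
    ("bad.example.com", "/images/page1"),
    ("bad.example.com", "/images/page2"),
}

def is_blocked(url: str) -> bool:
    """Return True if the URL's host and path match a blocklist entry.

    Exact matching is the weakness the article notes: the same
    content rehosted at a new address slips straight through.
    """
    parts = urlsplit(url)
    return (parts.hostname, parts.path) in BLOCKLIST

# A listed address is caught; an unlisted one is not.
assert is_blocked("http://bad.example.com/images/page1")
assert not is_blocked("http://bad.example.com/images/page3")
```

The brittleness of exact-match lookups is one reason critics argue blocking alone cannot stop determined offenders.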
"It's not a solution - if you want to get hold of this material, this won't stop you," argues Christian Berg, chief executive of Netclean, a provider of technical tools designed to prevent the spread of child abuse imagery.
"Blocking is often more a preventative tool for people who are looking for adult pornography but accidentally find underage content."
His firm's technology is able to scan an image and cross-reference it with a database of known abuse images. If there is a match, action can be taken.
"We don't block the image for the user. They don't know they're being caught, because then they may try and destroy the hard drive or computer.
"It's important to block the content - but it's also important to find this person before they turn to real physical abuse."
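The cross-referencing approach Mr Berg describes can be sketched as a hash lookup against a database of known images. The sketch below is a simplified assumption of how such matching works, using a plain cryptographic hash; production systems rely on more robust perceptual fingerprints, since a cryptographic hash changes completely if even one pixel is altered.

```python
# Sketch of cross-referencing a file against a database of hashes
# of known images, broadly as the article describes. Simplified:
# real tools use perceptual fingerprints rather than SHA-256,
# which a single-pixel edit defeats. The data here is invented.
import hashlib

# Hypothetical database of hashes of known illegal images.
KNOWN_HASHES = {
    hashlib.sha256(b"known-image-bytes").hexdigest(),
}

def matches_known_image(data: bytes) -> bool:
    """Hash the file and look it up in the known-image database."""
    return hashlib.sha256(data).hexdigest() in KNOWN_HASHES

# A byte-identical copy matches; any other file does not.
assert matches_known_image(b"known-image-bytes")
assert not matches_known_image(b"other-bytes")
```

Because matching happens silently, as Mr Berg notes, the user is unaware a match has been logged - which is what allows investigators to act before evidence is destroyed.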
Not all efforts to combat child abuse online have been universally welcomed.
In the US, the FBI came under heavy criticism this month for continuing to run a website sharing child abuse images as a method of catching possible criminals.
An agent told a court in Nebraska that a site referred to as "Website A" was seized by authorities but was then kept online in order to identify more than 5,000 users.
The FBI has declined to discuss the issue further while the investigation continues.
Additionally, many internet users fear that systems which block web content at source could be used to censor material that is not in fact illegal.
One recent example revealed how a system used by mobile operators to block adult content in fact blocked out sites featuring political commentaries, personal blogs and community news.
"We must understand this will never be perfect," says Mr Cox.
"Criminals will always be one step ahead of us."
Follow Dave Lee on Twitter @DaveLeeBBC