The most recent figures from the Internet Watch Foundation charity show 252,000 URLs containing graphic sexual images or videos of children during 2021, up from 153,000 the previous year. This includes the growing problem of self-generated material, where children are manipulated into sharing inappropriate pictures of themselves.
Researchers at ARU have been helping to develop an entirely new way to tackle online child abuse: Artificial Intelligence (AI) technology that blocks specific video content from being filmed on a device in the first place.
Called SafeToWatch, it analyses the feed from a phone's own camera to identify and block inappropriate images. Crucially, because it works on the device itself, it is not defeated by the end-to-end encryption used by social media companies, which can hide whether abusive material is being sent online.
Prof Sam Lundrigan, Director of the Policing Institute for the Eastern Region (now the International Policing and Public Protection Research Institute) at ARU, says preventing the images from being uploaded in real time is key to stopping the spread of child pornography.
'The figures speak for themselves; there's no sugar-coating it at all. It's one of the biggest growing threats online.
'It's also very complex - not as simple as a victim and a perpetrator. Even though an individual can take the material themselves, it can get into the hands of someone who shares it without consent, and that's where the harm comes in.'
ARU has partnered with the safety technology company SafeToNet to develop the prototype. It was one of five projects across the UK and Europe to win government funding for the research, as part of the £555,000 Safety Tech Challenge Fund, administered by the Department for Digital, Culture, Media & Sport and the Home Office.
Tom Farrell is SafeToNet’s Chief Impact Officer. His previous career in law enforcement made him more than familiar with pursuing the perpetrators of these particular crimes and seeking justice for their victims, and he has helped develop specialist technologies within the Home Office to tackle the problem worldwide.
Farrell has witnessed how law enforcement is overwhelmed by the sheer quantity of cases and believes prevention has to be the key.
'What I always explain to my kids is that at the very point content leaves your device, it's gone. Despite the best efforts of organisations such as the Internet Watch Foundation, it is potentially gone forever.
'It can get into a vicious circle of child abuse material and will never ever be fully removed. So, you've got to stop it at the point of creation.'
It is precisely this sort of self-generated material - where children are manipulated into taking the content themselves before it is shared online - that is the biggest growth area. The latest figures show that the largest increase is among seven- to 10-year-olds.
Farrell admits it can be a disturbing field to work in, but says he is driven by the motivation to keep children safe online.
'I never lose the feeling of being upset by something and I actually feel it's important to continue to be affected by certain things... otherwise you've become so accustomed to it that you've forgotten why you're doing it.'
The partnership between technology and academia has been hugely beneficial to both sides. Prof Lundrigan says more joint work is now planned to expand the AI technology and increase its usefulness in this area.
'There's so much we can be doing, and there's a lot we don't know about online harm. Remember, technology is moving all the time; offenders are moving all the time and adapting their behaviour; new crimes are emerging. We've got the metaverse rapidly coming up - who knows how long it will be before people are interacting in those environments - and the potential for exploitation is massive.
'So, we have to keep on innovating and researching and trying to get ahead of the curve.'
The prototype has been developed as an app for this project, although the company hopes that, for the best results, it could eventually be built into devices as standard or integrated directly into platforms.
SafeToNet is keen to discuss the possibilities with a variety of platforms, and Farrell feels that once the concept is proven reliable and effective, it will be widely adopted.