AI-generated images of child abuse are making it harder to protect victims, with law enforcement struggling to identify whether real children are at risk, the National Crime Agency warns.
Computer-generated pornography can also normalise abuse, the NCA warns
Images of child abuse created by artificial intelligence are making it harder to protect victims, the National Crime Agency warned yesterday.
The growing volume of realistic AI-generated child sexual abuse imagery has left law enforcement agencies struggling to identify whether real children are at risk.
Graeme Biggar, who became the NCA’s permanent director-general last summer, warned that computer-generated images flooding the internet could normalise the abuse of real victims. Up to 1.6 per cent of the UK’s adult population is estimated to pose some degree of sexual risk to children.
‘We’re starting to see hyper-realistic images and videos created entirely by artificial intelligence,’ Mr Biggar said. ‘The use of AI for child sexual abuse will make it harder for us to identify real children who need protection, and make abuse more normal. And this is important, because we assess that viewing these images – real or AI-generated – materially increases the risk of perpetrators moving on to sexually abusing children themselves.’
Mr Biggar said the NCA was seeking assurances from companies that make AI software that safeguards would be built into their products. One option under discussion, he added, is a digital tag indicating that an image has been AI-generated.
Mr Biggar’s warning came as the Internet Watch Foundation (IWF) said AI had the potential to enable criminals to create ‘an unprecedented amount of life-like child sexual abuse images’.
The IWF said it had discovered artificially generated images falling into categories A and B – the two most serious classifications of sexual abuse material – involving children under the age of three.
Its analysts also discovered an online ‘manual’, written by offenders, explaining how to train AI models to produce more realistic images.
The IWF said that while the images do not depict real children, their creation is not a victimless crime: it can normalise the abuse of real victims, as well as make it harder to identify real children who are in danger.
The first global summit on AI safety will be held in the UK in the autumn, with Rishi Sunak calling for international cooperation to mitigate the risks posed by the technology.
Susie Hargreaves, chief executive of the IWF, said appropriate legislation needed to be introduced to address the threat posed by AI. She added: ‘We are sounding the alarm, and the prime minister must treat the serious threat it poses as a top priority.’