1. Social media accounts present a particular risk for grooming.
Whilst access to smartphones has allowed parents to keep in contact with their children, it has also led to a rise in online danger for kids. Children with social media accounts are particularly at risk. Snapchat and Instagram are the platforms most likely to be used by predators; however, gaming sites such as Roblox are also used to communicate with kids. 82% of grooming cases target girls aged 12-15. Even platforms without direct chat functions have been exploited by groomers; just this year an 11-year-old was targeted through a Spotify playlist. Parents need to be alert to the dangers and monitor their kids’ social media and gaming platforms.
2. Artificial Intelligence can be used to create sexual images of children.
Artificial Intelligence (AI) is an incredible new tool, but it is also a new threat to child safety. AI allows people to create images from text alone: a person simply types a description of what they want to see into an app, and an AI model turns that text into a still or moving image. The Internet Watch Foundation found 11,108 AI-generated sexual images of children on the dark web. Given how new this technology is, that figure is simply the tip of the iceberg. While it is illegal to make these images, the technology used to generate them is freely available, legal to download and not restricted in any way. The law has not kept up with technology.
3. Deepfakes are being used to create misinformation online.
Whilst AI has been able to re-form The Beatles for one last hit, the same technology can be used to create fake pornographic content. Online tools exist that can utilise existing images of anyone to create pornographic content featuring them. Deepfakes, as they are known, are not just an issue for fake pornography: they can be used to manipulate the news agenda and could be a threat to democracy itself. In a social media world that has already generated fake news, deepfake video could be produced to persuade people that an event has happened when it has not. This raises major concerns about misinformation during conflicts and protests, and about convincing people that a person has said or done something they have not. This will be a major issue for news organisations and social media.
4. The Online Safety Act has finally been introduced.
On 26 October 2023, Parliament finally passed the long-awaited Online Safety Act. The Government have described the Act as ‘a new era in online safety.’ While the Act will undoubtedly help reduce exposure to harm online, it will not remove it altogether. Parents, in particular, need to remain vigilant in protecting children: despite the legislation being passed, grooming will continue.
5. Age verification is now required before accessing adult content online.
In 2017 Parliament passed legislation (the Digital Economy Act 2017) that would have required a person’s age to be verified before they could access online pornography. That law was restricted to dedicated pornography websites, did not cover social media, and was never implemented. The Online Safety Act 2023 covers all online content and requires age verification before a person can access any adult material online, including through social media.
6. Concerns around the implementation of age verification are unfounded.
Opponents of age verification cite privacy concerns and issues of data protection. They worry that people could be personally compromised if they share data with adult websites. These concerns are unfounded. Current age-verification technology does not store any data when a person’s age is verified, and most websites will utilise technology that estimates age from an image taken by a smartphone or computer camera. No one will be required to use a passport or driver’s licence.
7. The Online Safety Act protects children from more than just pornography.
Restrictions have been enacted to prevent children from accessing a variety of harmful and age-inappropriate content. The Act lists content that is harmful to children, including content that promotes, encourages or provides instructions for suicide, self-harm or eating disorders; content depicting or encouraging serious violence; and bullying content.
8. It is unclear how effective the Online Safety Act will be in protecting adults.
Adults are given protection, but is it enough? The controversial ‘legal but harmful’ provisions, which received a great deal of media attention, were removed from the Online Safety Act. The Act places a duty on social media companies to enforce their own terms and conditions, and they must give adults the ability to filter out content they do not wish to see. It remains to be seen whether these protections will be robust. Social media providers will still interpret their own terms and conditions, although this will be subject to oversight by Ofcom.
9. Ofcom has responsibility to ensure that online platforms comply with the Online Safety Act.
The communications regulator, Ofcom, has been given the power to ensure that online platforms comply with the Act. In respect of children, Ofcom needs to be robust in how it applies the law. Online platforms, particularly those that host pornography and content harmful to children, must be held to account.
10. Child sexual abuse content is still being hosted across major adult sites.
Whilst the Online Safety Act will prevent children from accessing pornography, there is still more to do. All of the major, best-known adult sites illegally host child sexual abuse content, and no major website verifies the age of the people depicted in the adult content it hosts. This means that child sexual abuse material is not confined to the dark web; it is readily available on major websites. Adult websites must ensure that no child sexual abuse material is allowed to be hosted on their platforms.