
The development and deployment of AI systems across digital platforms are accelerating rapidly. While AI offers opportunities for innovation and safety enhancements, it also presents emerging risks, particularly for children and young people online.
The Online Safety Act 2023 lays out a foundational framework for protecting children online, but there is growing concern that it does not fully account for the scale and nuance of AI-related risks.
This inquiry seeks to examine the harms posed by AI to children in online spaces and to assess how effectively current regulations, including the Online Safety Act, address these risks. The inquiry will also examine how AI can help deliver more effective safeguarding for children online and will make recommendations for Government to address any shortcomings in existing legislation.
The Group welcomes written submissions by 11:59pm on 7 January 2025.
The inquiry will invite a select group of stakeholders to contribute to oral evidence sessions taking place from January 2026.
