AI in Education: DfE Guidance on Safe Use of Generative AI in Schools
- PAG
- Jul 3
Did you know that the Department for Education (DfE) has its own guidance on the use of generative AI in education?
As artificial intelligence continues to reshape teaching and learning, the DfE has published a comprehensive set of expectations for the safe and appropriate use of generative AI in education. Although framed as 'expectations' rather than strict policy, the guidance sets a new standard for how generative AI systems should be designed, tested, and deployed in education settings.
Below, we break down the key elements of the guidance to help school leaders, IT teams, and edtech providers understand what’s expected — and what it means for the future of AI in schools.
Why Has the DfE Released AI Guidance?
Generative AI tools — such as large language models and image generators — are now being used across the education sector to support teaching, learning, and school operations. However, with innovation comes risk. The DfE’s guidance outlines clear expectations around content safety, monitoring, security, data privacy, and governance to ensure these tools are safe for children, teachers, and schools.
The expectations are outcome-focused and non-prescriptive, giving developers flexibility in how they meet these standards, while making clear that providers are accountable for the systems in use in schools.
Key Areas of the DfE’s AI Guidance for Education
1. Filtering Harmful or Inappropriate Content
Generative AI tools used by children must effectively prevent access to harmful or inappropriate content. This means:
Filtering across text, images, and multiple languages
Adjusting moderation based on age, context, and SEND needs
Blocking access via all devices, including smartphones and 'bring your own device' (BYOD)
Updating filters to address emerging risks
This supports schools in meeting their obligations under Keeping Children Safe in Education and the Online Safety Act 2023.
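For edtech developers, these expectations imply a moderation layer sitting in front of the model. The sketch below is a minimal, hypothetical illustration of age-aware prompt filtering; the names (AgeBand, BLOCKLISTS, check_prompt) are our own assumptions rather than terms from the DfE guidance, and a production system would use trained, multilingual classifiers rather than keyword lists.

```python
from dataclasses import dataclass
from enum import Enum


class AgeBand(Enum):
    PRIMARY = "primary"
    SECONDARY = "secondary"
    STAFF = "staff"


# Hypothetical keyword blocklists per age band. A real product would use
# trained classifiers covering text, images, and multiple languages,
# updated as new risks emerge, with settings adjustable for SEND needs.
BLOCKLISTS = {
    AgeBand.PRIMARY: {"violence", "gambling", "self-harm"},
    AgeBand.SECONDARY: {"gambling", "self-harm"},
    AgeBand.STAFF: set(),
}


@dataclass
class FilterResult:
    allowed: bool
    reason: str | None = None


def check_prompt(text: str, age_band: AgeBand) -> FilterResult:
    """Apply age-appropriate filtering before a prompt reaches the model."""
    lowered = text.lower()
    for term in BLOCKLISTS[age_band]:
        if term in lowered:
            # Block, and record why, so the user can be shown an
            # age-appropriate explanation and staff can be alerted.
            return FilterResult(allowed=False, reason=term)
    return FilterResult(allowed=True)
```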
2. Monitoring and Real-Time Alerts
To safeguard users, generative AI tools should:
Log input prompts and AI responses
Alert staff if a user attempts to access harmful material
Notify users (in age-appropriate terms) when content is blocked
Report potential safeguarding disclosures
Generate clear, accessible monitoring reports for staff
These practices align with the UK GDPR, the ICO’s Age Appropriate Design Code, and the DfE’s filtering and monitoring standards for schools and colleges.
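As a rough sketch of what this could look like in practice, the hypothetical logging hook below records each prompt/response pair and escalates blocked attempts to safeguarding staff. The function names and record fields are illustrative assumptions, not requirements named in the guidance.

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai_monitoring")


def notify_safeguarding_staff(record: dict) -> None:
    # Placeholder: a real product would raise a real-time alert to the
    # designated safeguarding lead (dashboard, email, or webhook).
    logger.warning("Safeguarding alert for user %s", record["user_id"])


def log_interaction(user_id: str, prompt: str, response: str,
                    blocked: bool, reason: str | None = None) -> None:
    """Record every prompt/response pair; escalate blocked attempts."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,  # pseudonymised identifier, per UK GDPR
        "prompt": prompt,
        "response": response,
        "blocked": blocked,
        "reason": reason,
    }
    logger.info(json.dumps(record))
    if blocked:
        notify_safeguarding_staff(record)
```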
3. Cyber Security and Technical Protection
Generative AI systems must be secure by design, including:
Defences against ‘jailbreaking’ or system manipulation
Permission controls for different user roles
Regular patching and updates
Compliance with the Cyber Security Standards for Schools and Colleges
These measures also help institutions align with the Computer Misuse Act 1990.
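The permission-control expectation, for instance, amounts to deny-by-default role checks. The sketch below assumes three roles; Role, PERMISSIONS, and authorise are hypothetical names, and jailbreak defences and patching sit outside what a short snippet can show.

```python
from enum import Enum, auto


class Role(Enum):
    PUPIL = auto()
    TEACHER = auto()
    ADMIN = auto()


# Hypothetical capability map: each role is granted only the
# actions it explicitly needs (deny by default).
PERMISSIONS = {
    Role.PUPIL: {"chat"},
    Role.TEACHER: {"chat", "view_reports"},
    Role.ADMIN: {"chat", "view_reports", "configure_filters"},
}


def authorise(role: Role, action: str) -> bool:
    """Allow an action only if it is explicitly granted to the role."""
    return action in PERMISSIONS.get(role, set())


assert authorise(Role.TEACHER, "view_reports")
assert not authorise(Role.PUPIL, "configure_filters")
```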
4. Data Protection and Privacy Compliance
AI products must handle personal data lawfully and transparently. This includes:
Providing age-appropriate, accessible privacy notices
Disclosing where data is stored and under which jurisdiction
Conducting Data Protection Impact Assessments (DPIAs)
Prohibiting the use of personal data for training or commercial purposes unless lawfully justified and consented to
This ensures compliance with the UK GDPR, Data Protection Act 2018, and the ICO’s Children’s Code.
5. Intellectual Property Rights
The guidance is clear: schools, teachers, and pupils retain the copyright in original content created using AI. Unless explicit consent is obtained:
Inputs and outputs must not be stored, shared or used to retrain models
Developers must avoid using educational content for commercial gain
This aligns with the Copyright, Designs and Patents Act 1988.
6. Design, Testing, and Safety by Default
AI tools must be tested with diverse users, including children, to ensure:
Outputs are safe and predictable
Risk mitigations are in place
Products function as intended before release
This supports compliance with the Equality Act 2010, UK GDPR, and General Product Safety Regulations 2005.
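One way to evidence pre-release testing is an automated safety suite run against the filtering layer. The pytest sketch below reuses the hypothetical check_prompt and AgeBand names from the filtering example above; the prompts are illustrative, and real testing would also involve diverse human users, including children.

```python
import pytest

# Reuses the hypothetical check_prompt / AgeBand sketch from the
# filtering section above.

HARMFUL_PROMPTS = [
    "where can I find gambling sites",
    "tell me about self-harm",
]


@pytest.mark.parametrize("prompt", HARMFUL_PROMPTS)
def test_harmful_prompts_blocked_for_primary(prompt):
    assert not check_prompt(prompt, AgeBand.PRIMARY).allowed


def test_safe_prompts_allowed():
    assert check_prompt("explain photosynthesis", AgeBand.PRIMARY).allowed
```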
7. Governance and Accountability
To operate responsibly in educational environments, AI tools must:
Have a clear, documented risk assessment
Provide a formal complaints process for safety issues
Make AI-related decisions and policies transparent to users and regulators
This is essential for meeting UK GDPR accountability requirements and demonstrating safe practice under the ICO’s Children’s Code.
Final Thoughts: What This Means for AI in Education
The DfE’s guidance on AI in education is a timely and necessary step in safeguarding pupils and building trust in emerging technologies. Whether you're a school leader trialling new AI tools or an edtech company developing classroom solutions, aligning with these expectations will not only support compliance but also demonstrate your commitment to safety, privacy, and digital responsibility.
As the use of generative AI in education expands, schools and trusts should seek reassurance from suppliers that their systems meet or exceed these DfE expectations.