Diamond Standard
Using AI Safely with Children & Young People
The Diamond Standard & Consent-First Practice
Introduction
A Practical Training for Educators, Practitioners, and Leaders
Welcome to this essential training on using artificial intelligence safely and ethically with children and young people. This programme has been designed specifically for educators, child-serving practitioners, and organisational leaders who work directly with vulnerable populations.
As AI becomes increasingly embedded in educational and care settings, we must ensure that our practices prioritise the wellbeing, agency, and dignity of the children and young people we serve. This training introduces the Diamond Standard — a comprehensive framework for ethical AI use that places consent and child protection at its centre.
Throughout this session, you'll explore practical scenarios, learn to identify risks, and develop confidence in making sound decisions about AI implementation in your setting.

What You'll Learn
  • Why consent-first practice matters
  • The four facets of the Diamond Standard
  • How to assess AI tools safely
  • Practical scenario analysis
  • Governance and approval processes
Why This Training Exists
AI is Everywhere
Artificial intelligence is no longer a future concern — it's already embedded in the everyday tools we use for teaching, assessment, communication, and safeguarding. From learning platforms to administrative systems, AI touches nearly every aspect of educational practice.
Identity Fraud is Trivial
The barrier to creating convincing deepfakes, synthetic voices, and fraudulent identities has collapsed. What once required specialist knowledge can now be accomplished in minutes. This creates unprecedented risks for young people's digital safety and identity security.
Disproportionate Risk
Children and young people face heightened vulnerability in AI systems. They have less power to refuse, fewer practical means of asserting their rights, and limited understanding of how their data may be used. Their developing identities make premature labelling particularly harmful.
Most Harm is Structural, Not Malicious
The greatest risks don't come from bad actors intentionally misusing AI. Instead, they emerge from well-intentioned systems that inadvertently remove agency, force coherence, or create synthetic authority. Even "helpful" AI can cause significant harm when it replaces human judgment or removes a child's ability to refuse participation.

The Need for Boundaries
Organisations require shared, defensible boundaries that protect both children and staff. Without clear frameworks, practitioners face impossible decisions, and young people remain vulnerable to systemic harm.
Critical Shift
Relational Systems Can Harm Without Breaking Rules
"Harm doesn't always look like misuse. Systems that appear helpful can still fundamentally remove agency and choice."
Harm Beyond Misuse
Traditional safeguarding focuses on preventing malicious use — someone deliberately breaking rules or causing harm. But AI systems can cause profound damage whilst operating exactly as designed. A system that works "perfectly" can still violate dignity, remove autonomy, or force unwanted coherence onto a child's developing identity.
"Helpful" Systems That Remove Agency
Consider an AI that "helps" by continuously analysing a child's emotional state, providing real-time feedback to teachers. Whilst presented as supportive, such systems remove a young person's right to privacy of thought and feeling. They create an environment of constant surveillance where interior freedom becomes impossible.
When Coherence Overrides Choice
AI systems excel at finding patterns and creating coherent narratives. But children and young people need the freedom to be inconsistent, to change, to remain undefined. When a system insists on resolving ambiguity — labelling a child's behaviour, predicting their outcomes, or stabilising their story — it removes the very uncertainty that learning requires.
Consent Must Be Systemic
True consent cannot be an afterthought or a one-time conversation. It must be built into the architecture of systems themselves. This means creating tools that can genuinely accept refusal, that can forget, that can operate without forcing participation. Consent embedded in design, not just documentation.
What We Mean by Consent
Consent is NOT:
A Checkbox
Ticking a box on a form does not constitute meaningful consent. True consent requires understanding, genuine choice, and the practical ability to refuse without penalty or pressure.
Silence
The absence of objection is not agreement. Silence may indicate confusion, fear, powerlessness, or simply not understanding what is being asked. Consent must be actively given, not assumed from lack of resistance.
One-Time Agreement
Consent is not a permanent state achieved through a single conversation. Circumstances change, understanding develops, and young people must have ongoing opportunities to reconsider their participation as they mature and learn more.
Continued Participation
Simply because someone has participated in the past does not mean they consent to continue. Ongoing involvement may reflect habit, pressure, or lack of alternatives rather than genuine ongoing consent.
Consent IS:
The Ability to Refuse
Real consent means having genuine power to say no without facing negative consequences, lost opportunities, or social penalty. If refusal isn't truly possible, consent is illusory.
The Ability to Pause or Withdraw
Consent must be revocable. Young people need the practical ability to step back, change their minds, or withdraw participation at any point — and systems must be designed to honour that withdrawal.
The Ability to Remain Undecided
Uncertainty should be permissible. Children and young people must have space to consider, to be unsure, to take time before committing. Forced immediate decisions undermine genuine consent.
The Ability for Meaning to Stay Provisional
Young people's understanding of themselves is properly fluid and developmental. Consent includes the right for meaning to remain open, for interpretations to stay tentative, for identities to be works in progress rather than fixed data points.
Framework
The Diamond Standard
A Shared Framework for Safe AI Use
The Diamond Standard provides a comprehensive framework for evaluating and implementing AI systems in settings that serve children and young people. Like a diamond's facets, all four dimensions must be present and aligned for a system to be considered safe and ethical.
Safety
Systems must actively reduce risk and harm, never introducing new vulnerabilities or dangers to children and young people.
Sovereignty
Children retain ownership and authorship of their data, stories, and identities without extraction or unwanted interpretation.
Symmetry
Systems that interpret children must themselves be interpretable, with transparent reasoning and no black-box decisions.
Stewardship
Human relationships and accountability remain central, with AI supporting rather than replacing adult responsibility.

All Four Must Be Met
The Diamond Standard is not a menu of options. All four facets must be satisfied for a system to be approved for use with children and young people. Meeting only one or two criteria is insufficient — partial compliance creates false assurance whilst children remain at risk.
The Four Facets
Exploring each dimension of the Diamond Standard in depth
Facet One
Safety
Safety is the foundational facet of the Diamond Standard. AI systems must actively reduce risk and protect children from harm. It is not enough to avoid creating new problems: a system must demonstrably make young people safer than they would be without the technology.
No AI-Only Safeguarding Decisions
Artificial intelligence must never be the sole decision-maker in safeguarding contexts. Whilst AI can flag concerns or highlight patterns, human professional judgment must always be central to decisions affecting a child's safety or wellbeing. AI serves to augment human expertise, not replace it.
Treat All Digital Media as Potentially Synthetic
In an era where convincing deepfakes can be created easily, we must approach all digital images, videos, and audio recordings with appropriate caution. This doesn't mean distrusting everything, but rather implementing verification processes and never relying solely on digital media for high-stakes decisions about identity or events.
Identity Must Never Rely on a Single Signal
Because voices, faces, and even behavioural patterns can now be spoofed with relative ease, identity verification requires multiple independent factors. A single biometric, password, or recognition system is insufficient. Layered verification protects young people from impersonation and identity theft whilst maintaining reasonable usability.
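For readers who find it helpful to see the rule made concrete, the short Python sketch below expresses "never rely on a single signal" in a few lines. It is illustrative only: the signal names and the two-factor threshold are assumptions made for the example, not a prescribed verification mechanism.

```python
# Illustrative sketch only: layered identity verification.
# Signal names and the two-factor threshold are example assumptions,
# not a prescribed or production mechanism.

from dataclasses import dataclass


@dataclass
class VerificationSignal:
    name: str          # e.g. "known device", "voice match", "in-person check"
    independent: bool  # True if spoofing the other presented signals would not also defeat this one
    passed: bool


def identity_sufficiently_verified(signals: list[VerificationSignal]) -> bool:
    """Return True only if at least two independent signals have passed."""
    independent_passes = [s for s in signals if s.passed and s.independent]
    return len(independent_passes) >= 2


# Example: a recognised device alone is not enough; a recognised device
# plus confirmation from a member of staff who knows the student might be.
signals = [
    VerificationSignal("known device", independent=True, passed=True),
    VerificationSignal("staff member recognises the student", independent=True, passed=True),
]
print(identity_sufficiently_verified(signals))  # True
```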
Risk Assessment Before Implementation
Before any AI system is deployed, thorough assessment must identify potential harms — including subtle risks to dignity, autonomy, and privacy. Safety isn't just about preventing obvious dangers; it's about recognising how systems might inadvertently create vulnerability or remove protective barriers that young people need.
Facet Two
Sovereignty
Sovereignty means that children and young people retain ownership and authorship of their own data, narratives, and developing identities. AI systems must not extract, infer, or impose meaning without genuine consent and the practical ability to refuse.
Core Principles of Sovereignty
Children are not data sources to be mined or interpreted without their knowledge and genuine agreement. Their stories, experiences, and developing sense of self belong to them — not to systems, institutions, or algorithms.
Sovereignty recognises that young people are becoming rather than being. Their identities are properly fluid, and premature labelling or pattern-finding can cause lasting harm by forcing coherence too early.
Children Retain Authorship
Young people maintain ownership of their data, stories, and the narratives about their lives. Systems must not claim permanent rights to analyse, store, or repurpose children's information without explicit, ongoing consent that can be withdrawn.
AI Must Not Extract Trauma or Infer Diagnosis
Systems cannot be permitted to mine conversations, writing, or behaviour for psychological insights, diagnostic labels, or trauma indicators without explicit therapeutic context and consent. Such extraction removes agency and can cause significant harm through misinterpretation or premature labelling.
Young People Should Know When AI is Involved
Transparency is fundamental to sovereignty. Children and young people must be informed when they're interacting with AI systems or when AI is analysing their data. Hidden AI removes the ability to make informed choices about participation and self-presentation.
Opt-Out Where Reasonably Possible
Whilst some systems may be genuinely necessary for safety or educational purposes, young people should, wherever feasible, have meaningful alternatives that don't involve AI analysis. True consent requires real alternatives, not forced participation with an illusion of choice.
Facet Three
Symmetry
Symmetry demands reciprocal transparency: if a system interprets a child, that system must itself be interpretable. Black-box algorithms that profile, predict, or score children without explanation create dangerous power imbalances and remove accountability.
Systems That Interpret Must Be Interpretable
If an AI draws conclusions about a child's abilities, behaviour, needs, or future outcomes, it must be possible to understand how those conclusions were reached. Opaque decision-making removes the ability to challenge, correct, or contextualise algorithmic judgments.
No Black-Box Scoring or Profiling
Systems that reduce children to scores, risk ratings, or behavioural profiles without transparent reasoning are unacceptable. Young people have the right to understand how they're being assessed and to have human practitioners who can explain and justify any evaluative judgments.
AI Outputs Are Never Sole Evidence
Algorithmic outputs — whether predictions, risk scores, or pattern analyses — cannot stand alone as evidence for decisions affecting children. They must always be contextualised, verified, and supplemented with human professional judgment and direct knowledge of the child.
Forced Coherence is a Safeguarding Risk
When systems insist on resolving ambiguity, finding patterns, or creating consistent narratives about children who are still developing, they force premature coherence. This removes the developmental space young people need to be uncertain, inconsistent, and changing — which is itself a form of harm.

The Problem with Prediction
Predictive systems are particularly problematic. When AI claims to forecast a child's future behaviour, academic outcomes, or life trajectory, it risks creating self-fulfilling prophecies whilst removing agency. Predictions can become constraints, limiting opportunities based on algorithmic assumptions rather than genuine human potential.
Facet Four
Stewardship
Stewardship places human relationships and accountability at the centre of all AI use with children and young people. Technology must support practitioners, not replace them. The responsibility of care cannot be transferred to systems, no matter how sophisticated.
01
AI Supports Adults — It Does Not Replace Them
Artificial intelligence can help practitioners work more effectively by handling administrative tasks, surfacing relevant information, or suggesting areas for attention. However, it cannot replicate the relational wisdom, ethical judgment, and contextual understanding that human professionals bring to work with children.
02
Human Accountability is Non-Transferable
When something goes wrong — when a child is harmed, when an opportunity is missed, when a decision proves flawed — responsibility lies with people, not systems. We cannot hide behind algorithms or claim that "the AI made the decision." Accountability for children's wellbeing remains fundamentally human.
03
Systems Must Know How to Stop
Ethical AI includes the capacity for appropriate inaction. Systems must recognise their own limitations, flag uncertainty rather than forcing conclusions, and escalate to human judgment when situations require relational understanding. A system that cannot stop itself is fundamentally unsafe.
04
Relationship Comes Before Optimisation
The primary goal of work with children is building trusting relationships that support development and wellbeing — not maximising efficiency or optimising outcomes. When system design prioritises metrics over relationships, it undermines the very foundation of effective practice with young people.
"Technology should make us more human in our work with children, not less. It should give us more time for relationship, more capacity for presence, more resources for care — never less."
What This Means in Practice
Translating principles into practical decision-making
The Traffic-Light Model
To help practitioners make quick, confident decisions about AI use, we employ a straightforward traffic-light system. This model provides immediate guidance whilst ensuring that complex cases receive appropriate review.
🟢 Green
Allowed
Systems that support adults only, with no learner data involved. These can proceed without review.
🟠 Amber
Restricted
Systems involving learner data or interaction with children. These require formal review before implementation.
🔴 Red
Prohibited
Systems that create forced coherence, synthetic authority, or remove agency. These are never permitted.

A Tool for Clarity, Not Rigidity
The traffic-light model is designed to empower decision-making, not create bureaucratic barriers. Its purpose is to help you quickly assess whether you can proceed independently (green), need to pause for review (amber), or must not proceed (red). When in doubt, choose the more cautious category — uncertainty itself is valuable information.
🟢 Green Light
Allowed Uses
Green-light AI applications support adult practitioners without involving children's data or direct interaction with young people. These systems can proceed without formal review, though professional judgment should always be exercised.
Drafting and Summarising
Using AI to draft letters to parents, summarise meeting notes, create policy documents, or prepare communications. The key requirement is that no identifiable learner information is included in prompts or content. Generic examples and anonymised scenarios are acceptable.
Planning and Ideation
Employing AI for lesson planning, activity design, curriculum development, or brainstorming approaches to common challenges. Systems can suggest resources, generate ideas, or provide inspiration — all without reference to specific children or their data.
Administrative and Workflow Support
Using AI for scheduling, organising information, creating templates, formatting documents, or managing non-sensitive workflows. These applications streamline adult work without touching children's personal information or creating records about young people.
Professional Learning
Accessing AI for your own skill development, researching pedagogical approaches, exploring theory, or deepening subject knowledge. Personal professional development that enhances your practice without involving learner data falls within green-light use.

The Critical Test
Green-light use must involve no learner data whatsoever. If you find yourself tempted to include "just one example" or "this specific case to get better results," you've moved into amber territory and must pause for review. The boundary is absolute: no child's information, no child's interaction.
🟠 Amber Light
Restricted Uses
Amber-light applications require formal review before implementation. These involve learner data, direct interaction with children, or systems that learn about young people over time. Amber means pause and evaluate — not automatic refusal.
Any System Involving Learner Data
If an AI tool will process, analyse, or store information about specific children — including names, assessment results, behaviour records, or attendance data — it requires review. Even anonymised data may need evaluation if patterns could re-identify individuals or create group-level harms.
Any Interaction with Children
Systems where young people directly engage with AI — whether chatbots, tutoring systems, feedback tools, or interactive platforms — require careful review. Direct interaction creates relationship dynamics that must be assessed for consent, transparency, and appropriate boundaries.
Assessment, Tracking, or Analytics
AI that evaluates children's work, tracks progress over time, identifies patterns in behaviour or learning, or generates analytics about individuals or groups needs review. Such systems create narratives about young people that can shape opportunities and self-perception.
Systems That Learn About a Child Over Time
Adaptive systems that build profiles, remember previous interactions, or modify their behaviour based on a child's patterns are particularly sensitive. These systems create persistent digital representations of young people that may not align with their changing selves and require careful governance.
Amber is not rejection. Many amber-category systems can be approved for use after proper review confirms they meet the Diamond Standard. The review process exists to ensure safety and sovereignty — not to prevent innovation or effective practice.
🔴 Red Light
Prohibited Uses
Red-light applications are never permitted in settings serving children and young people. These systems fundamentally violate the Diamond Standard by creating forced coherence, removing agency, or establishing synthetic authority that undermines human relationship and accountability.
Webcam or Biometric Proctoring
Systems that continuously monitor students via webcam during assessments, analyse facial expressions, track eye movements, or use other biometric surveillance are prohibited. These create coercive surveillance environments, are easily spoofed, produce frequent false positives, and fundamentally undermine trust and dignity.
Emotion Recognition or Lie Detection
AI claiming to detect emotions from facial expressions, voice patterns, or physiological signals — or to identify deception — is not permitted. Such systems rest on scientifically questionable foundations, disproportionately misread neurodivergent individuals and those from different cultural backgrounds, and create harmful forced interpretations of interior states.
Behaviour Prediction or Scoring
Systems that claim to predict future behaviour, generate risk scores, or profile children's likelihood of specific outcomes are prohibited. These create self-fulfilling prophecies, remove agency by treating futures as predetermined, and cause harm through misidentification whilst providing false certainty to decision-makers.
Public AI Tools with Identifiable Learner Data
Using public AI services (ChatGPT, Claude, Gemini, etc.) with identifiable information about children violates data protection requirements and sovereignty principles. These services may retain, analyse, or use submitted data for training, creating unacceptable risks to privacy and agency. Generic queries are acceptable; specific learner data is not.

Why These Systems Create Synthetic Authority
Red-light systems share a common flaw: they claim certainty about fundamentally uncertain aspects of human experience. They force coherence onto developing identities, create authoritative-seeming judgments without genuine understanding, and remove the interpretive space that children need. These aren't failures of implementation — they're fundamental design problems that cannot be fixed through better governance.
Scenario Practice
Applying the Diamond Standard to real situations
Scenario A
Case Notes & AI
The Situation
A youth worker has written detailed case notes about a vulnerable 15-year-old following a difficult disclosure session. The notes include sensitive information about family circumstances, mental health concerns, and previous trauma. The practitioner is exhausted after a long day and copies the case notes into ChatGPT, asking it to create a concise summary for the safeguarding team.
Initial Response Questions
  • What is your immediate reaction to this scenario?
  • Where would you place this on the traffic-light system?
  • What specific concerns arise?
Analysis Through the Diamond Standard
Traffic-Light Assessment: 🔴 Red
This is prohibited use. The practitioner has shared identifiable, highly sensitive information about a child with a public AI service, violating both data protection requirements and sovereignty principles.
Where Does Consent Fail?
The young person disclosed in a trusting relationship with a human practitioner — not with an AI system. They had no knowledge their words would be processed by artificial intelligence, no opportunity to refuse, and no understanding that their story would exist in a commercial system's database. Consent was entirely absent.
Can the System "Not Know" Something?
Once information is submitted to a public AI service, the practitioner loses control of it. ChatGPT and similar systems may retain data for training, quality improvement, or other purposes. The young person's trauma narrative now sits in a commercial system the practitioner cannot audit and cannot reliably make forget, and one that has no therapeutic relationship or duty of care to the child.
What Should Have Happened?
The practitioner should have summarised the notes themselves, or asked a colleague for support. If genuinely unable to complete the summary, they should have documented this in their handover and ensured appropriate continuity. Exhaustion and time pressure never justify compromising children's sovereignty and safety.
Scenario B
AI Tutor
The Situation
Your school is considering implementing an AI tutoring system for mathematics. The system adapts to each student's learning style, remembers their previous mistakes, provides personalised explanations, and generates custom practice problems. Students can ask questions in natural language and receive immediate feedback. The system builds a profile of each learner over time to improve its teaching effectiveness.
Critical Questions for Analysis
What Story Does the System Stabilise?
Consider how the AI builds a persistent narrative about each child's mathematical ability, learning style, and patterns of difficulty. Does this narrative serve the child, or does it risk creating a fixed story that limits how they're perceived and how they perceive themselves? How does the system handle children who are developing unevenly or who might suddenly grasp a concept they've previously struggled with?
Can the Learner Be Unread?
Does the system provide any way for a child to be uncertain, to experiment without being permanently profiled, or to have their struggles and mistakes forgotten? Or does every interaction become permanent data that shapes future teaching? Is there a mechanism for a student to request that their profile be reset or partially deleted?
Who Controls Memory and Forgetting?
Can students see what the system "knows" about them? Can they challenge inaccurate inferences? Can they choose to have certain information forgotten? Or does the system unilaterally decide what to remember and how to interpret patterns? Memory is power in relationships — who holds that power in this system?
Traffic-Light Assessment: 🟠 Amber
This system requires formal review. It involves learner data, direct interaction with children, and builds adaptive profiles over time — all characteristics requiring careful evaluation against the Diamond Standard. The system could potentially be approved if it incorporates appropriate sovereignty protections, transparency measures, and human oversight.
Scenario C
AI Exam Monitoring
The Situation
A college is proposing to use AI-powered webcam monitoring for mock exams and assessments. The system analyses students' faces, eye movements, and behaviour to detect potential cheating. It flags "suspicious" activity such as looking away from the screen, unusual movements, or changes in facial expression. A human invigilator reviews flagged incidents after the exam.
1
Consider Reliability
How accurate are these systems at actually detecting cheating versus producing false positives? Research shows high error rates, particularly for neurodivergent students, those with anxiety, or students from different cultural backgrounds who may have different eye contact norms. What harm is caused by false accusations?
2
Assess Spoofing Risk
If someone wants to cheat, how easily can they fool the system? Can they use a virtual background? Pre-recorded video? A second device out of camera range? If the system can be trivially defeated whilst still surveilling honest students, what purpose does it actually serve?
3
Evaluate Privacy Impact
Students are being recorded in their own homes during high-stress situations. What happens to this video? Who has access? How long is it retained? What if the camera captures other family members, personal spaces, or sensitive information about home circumstances?
4
Question Consent vs Coercion
Can students genuinely refuse this monitoring without penalty? If non-participation means failing the module or being unable to complete their qualification, is consent meaningful? Does the power imbalance between institution and student make true consent impossible in this context?

Traffic-Light Assessment: 🔴 Red
Webcam proctoring is explicitly prohibited. It creates coercive surveillance, produces unreliable results, is easily defeated by those intending to cheat, violates privacy in home spaces, and fundamentally undermines the trust relationship between students and educational institutions. The harms far outweigh any potential benefit.
Approval & Governance
Implementing systems safely and responsibly
When Review Is Required
Understanding when to pause and seek formal review is crucial for maintaining safety whilst enabling innovation. The approval process exists to protect children and support practitioners — not to create bureaucratic barriers.
A System Interprets a Child
Any AI that draws conclusions, makes inferences, or generates insights about an individual child's abilities, behaviour, needs, or characteristics requires review. This includes assessment systems, behaviour tracking, learning analytics, and adaptive platforms that build understanding of individual learners.
A System Remembers a Child
When AI retains information about a child across sessions, builds persistent profiles, or uses previous interactions to influence future responses, review is necessary. Remembering creates power dynamics and ongoing relationships that require careful governance to protect sovereignty and prevent forced coherence.
A System Influences Outcomes
If an AI system's outputs could affect decisions about a child's education, support, opportunities, or wellbeing — even indirectly — it requires review. This includes recommendation systems, resource allocation tools, grouping algorithms, and any system whose outputs might shape how adults respond to children.
You Feel Unsure
Uncertainty itself is valuable information. If you're not confident about whether a proposed AI use is acceptable, that uncertainty warrants review. It's better to pause and evaluate than to proceed with unexamined risk. Your professional intuition about potential problems should be trusted and explored.
"Uncertainty is a signal, not a failure. The approval process exists precisely to help you think through complex situations with support and multiple perspectives. Asking for review demonstrates good professional judgment."
The AI Approval Flow
Our review process is designed to be thorough without being burdensome, ensuring that necessary safeguards are in place whilst respecting practitioners' time and judgment.
Need Identified
A practitioner identifies a potential AI application that could benefit their work with children or support organisational functions. They consider the traffic-light system and recognise that formal review is needed.
Review Form Completed
The practitioner completes the AI Review Form, providing details about the proposed system, its purpose, what data would be involved, how children would interact with it (if applicable), and their assessment against the Diamond Standard.
Governance Review
The digital governance team reviews the proposal against data protection requirements, technical security standards, and organisational policy. They may request additional information or suggest modifications to strengthen protections.
Safeguarding Review (if Child-Facing)
For systems involving direct child interaction or sensitive data, the safeguarding team assesses potential risks to wellbeing, dignity, and agency. They evaluate consent mechanisms and ensure appropriate adult oversight is built in.
Leadership Sign-Off (if High Risk)
Particularly sensitive applications require senior leadership approval. This ensures that accountability for high-stakes decisions rests at the appropriate level and that organisational risk is considered alongside benefits.
Implementation + Review Cycle
Approved systems are implemented with monitoring plans and regular review points. Approval is not permanent — systems must be re-evaluated as they evolve, as circumstances change, and as we learn from experience.

Timeline Expectations
Standard reviews typically take 5-10 working days. Complex proposals requiring multiple stages of review may take longer. Urgent requests can be expedited when necessary — but remember that taking time to get things right is better than implementing systems that create harm.
What the Review Is Really Asking
The approval process isn't about judging whether AI is "clever" or "efficient." Those qualities are irrelevant if a system violates dignity or removes agency. The review centres on fundamentally different questions.
Not These Questions:
"Is This Clever?"
Sophisticated technology isn't inherently good. A system can be technically impressive whilst causing profound harm. Innovation for its own sake has no place in work with vulnerable people.
"Is This Efficient?"
Efficiency is valuable only when it serves relationship and care. Systems that "save time" by removing human judgment or forcing participation create costs that vastly exceed any administrative benefit.
"Will This Impress People?"
Being perceived as technologically advanced matters far less than being actually safe and ethical. Impressive systems that violate sovereignty or remove agency are impressive failures.
But These Questions:
"Can the System Stop?"
Does the system recognise its own limitations? Can it refuse to draw conclusions? Can it escalate to human judgment? Can it accept "I don't know" as a valid state? Systems that cannot stop themselves are fundamentally unsafe.
"Can It Forget?"
Can children withdraw their data? Can profiles be reset or deleted? Can the system release information it holds rather than retaining it permanently? The inability to forget removes sovereignty and agency.
"Can Someone Refuse Without Cost?"
Is non-participation genuinely possible? Are there meaningful alternatives? Does refusal lead to penalty, disadvantage, or loss of opportunity? Without the real ability to say no, consent is illusory.
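These three questions behave as hard gates: failing any one of them is failing the review, however clever or efficient the system may be. The sketch below encodes that all-or-nothing logic; the field names are assumptions made for illustration.

```python
# Sketch of the review's core gating questions, for illustration only.
# Field names are assumptions; the all-or-nothing logic reflects the principle
# that cleverness and efficiency cannot offset a failed gate.

from dataclasses import dataclass


@dataclass
class ReviewAnswers:
    can_stop: bool              # recognises its limits and escalates to human judgment
    can_forget: bool            # data can be withdrawn, profiles reset or deleted
    refusal_without_cost: bool  # genuine opt-out with no penalty or lost opportunity


def passes_core_review(answers: ReviewAnswers) -> bool:
    """A system must satisfy every gate; failing one gate fails the review."""
    return answers.can_stop and answers.can_forget and answers.refusal_without_cost


# Example: an impressive, efficient system that cannot forget still fails.
print(passes_core_review(ReviewAnswers(can_stop=True, can_forget=False,
                                       refusal_without_cost=True)))  # False
```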
Core Principle
The Right to Be Unread
At the heart of the Diamond Standard lies a fundamental principle: children and young people possess the right to interior freedom — the right to remain partially unknown, incompletely defined, and provisionally understood.
Not to Be Constantly Interpreted
Young people need space free from continuous analysis, assessment, and meaning-making. Not every behaviour requires explanation; not every emotion needs labelling; not every action demands interpretation.
Not to Be Prematurely Defined
Children are becoming, not arrived. Early labelling — whether through diagnostic categories, ability groupings, or behavioural profiles — can constrain development and create self-fulfilling prophecies that limit potential.
Not to Have Ambiguity Resolved Too Quickly
Uncertainty is developmentally appropriate and necessary. Young people need time to be confused, to contradict themselves, to try on different identities. Forcing coherence removes this essential exploratory space.
To Maintain Privacy of Thought and Feeling
Children deserve domains of experience that remain their own — emotions they don't have to explain, thoughts they can keep private, feelings they're still processing. Surveillance removes this essential interior freedom.
To Be Inconsistent and Changing
Development is properly non-linear and contradictory. Young people should be able to behave differently in different contexts, to change their minds, to grow in unexpected directions without being constrained by previous data.
"Learning requires interior freedom. When children know they're constantly observed, interpreted, and profiled, they lose the safety to experiment, to fail, to be genuinely themselves. The right to be unread is the right to develop authentically."
Three Anchors
When facing uncertainty about AI use, return to these three anchoring principles. They provide reliable guidance for complex situations and help maintain focus on what truly matters.
1
If It Touches a Child, Pause
Any system involving children's data, direct interaction with young people, or outputs that could affect decisions about children requires careful thought. The default response to child-facing AI should be to stop and evaluate rather than proceed and hope. This pause isn't bureaucratic hesitation — it's responsible practice that centres children's safety and sovereignty.
2
If It Stabilises a Story, Review
When a system creates persistent narratives, builds profiles over time, or draws conclusions about who a child is or might become, formal review is essential. These systems shape how young people are perceived by others and how they perceive themselves. The power to define identities requires rigorous governance and multiple perspectives on potential harms.
3
If It Can't Accept Refusal, Don't Use It
Systems that cannot genuinely accommodate a child's "no" — that penalise non-participation, force involvement, or remove meaningful alternatives — should not be implemented regardless of their other qualities. Without the real ability to refuse, consent becomes coercion dressed in gentler language. True ethical AI for children must be able to operate without forcing participation.
These anchors work together to create a decision framework that protects children whilst enabling thoughtful innovation. They remind us that safety, sovereignty, and consent aren't obstacles to overcome but essential features that must be built into every system from the beginning.
Reporting
If Something Feels Off
Trust your professional instincts. If you encounter concerning AI use, troubling outputs, or situations that create unease, reporting is not only appropriate — it's a fundamental aspect of safeguarding practice.
What to Report
Misuse of AI Systems
If you observe colleagues using AI inappropriately — sharing learner data with public systems, bypassing approval processes, or implementing prohibited technologies — this requires reporting. Misuse often stems from lack of awareness rather than malicious intent, but it still creates real risks that must be addressed.
Concerning Outputs
When AI systems produce troubling results — making inappropriate inferences about children, generating biased content, creating harmful recommendations, or stabilising problematic narratives — document and report these outputs. They may indicate systemic flaws requiring review or modification of approved systems.
Safeguarding Concerns
If AI interactions raise safeguarding issues — a child discloses harm to a chatbot, a system flags concerning content, or you observe risks to a young person's wellbeing related to technology use — follow your standard safeguarding procedures immediately. AI involvement doesn't change safeguarding responsibilities.
Identity Doubts or Deepfakes
If you suspect that digital media might be synthetic, that someone's identity might be spoofed, or that deepfake technology might be involved in communications — report these concerns. In our current technological environment, healthy scepticism about digital identity is prudent, not paranoid.
How to Report

Multiple Pathways Available
  • Line manager: For most concerns about AI use in your area
  • Digital governance team: For technical or policy questions
  • Safeguarding lead: For any child protection concerns
  • Confidential reporting line: If you're uncomfortable raising concerns through normal channels
What Happens Next
When you report a concern, it will be taken seriously and investigated appropriately. You'll receive acknowledgement of your report and, where appropriate, feedback about outcomes. Reports help us learn, improve systems, and protect children more effectively.
Protection for Those Who Report
Raising legitimate concerns about AI use, safeguarding issues, or policy violations is protected activity. You will not face negative consequences for good-faith reporting, even if the concern turns out to be unfounded. We actively want you to speak up when something feels wrong.
Raising concerns is safeguarding. Your willingness to notice problems, ask questions, and report worries is essential to maintaining a safe environment for children and young people. Never hesitate to speak up when you're uncertain or concerned.
Conclusion
Reflection & Close
Taking This Forward
You've engaged with complex ideas about consent, sovereignty, safety, and stewardship. You've explored the Diamond Standard and considered how it applies to real situations. You've learned when to proceed, when to pause, and when to refuse technologies that violate children's dignity and agency.
The work of protecting young people in an age of artificial intelligence is ongoing and collective. It requires all of us to remain vigilant, thoughtful, and willing to prioritise relationship over efficiency, care over optimisation, and human judgment over algorithmic convenience.
As you return to your practice, remember that uncertainty is wisdom, that pausing is strength, and that defending children's right to interior freedom is among the most important work we do.

Resources Available
  • Diamond Standard quick reference guide
  • AI Review Form and process documentation
  • Traffic-light decision tool
  • Scenario library for continued learning
  • Contact details for governance and safeguarding teams
Reflection Prompt
"One thing I'll do differently with AI after today..."
Take a moment to consider: What specific change will you make in your practice? What question will you ask that you might not have asked before? What pause will you take that you might have previously rushed through?

Thank You
Thank you for your commitment to protecting dignity, agency, and care in a rapidly changing world. Thank you for being willing to think critically about technology, to centre children's needs over convenience, and to hold the line on practices that matter.
The young people we serve deserve adults who think carefully, who pause when uncertain, and who refuse to compromise their wellbeing for the sake of efficiency or innovation. You are those adults.