May 6th 2026
AI is divisive. It promises speed, efficiency and easier communication. At the same time, it risks deepening inequality, concentrating power and reinforcing bias. Both realities exist. Pretending otherwise is unhelpful.
The louder narrative suggests a generational divide. Older people are seen as hesitant or lacking confidence. Younger people are framed as over-reliant, even careless. But this does not stand up to scrutiny.
Older generations are not “behind”. They are asking critical questions about ethics, impact and who benefits. Younger people are not blindly embracing AI either. Many are concerned about job displacement, loss of privacy and the erosion of social interaction. Some are actively stepping back, choosing phone-free time and reasserting human connection.
So, the real divide is not generational. It is structural. It sits between those who design and profit from AI and those who live with its consequences. AI is not neutral. Algorithmic bias already reflects and amplifies inequalities, particularly around race, gender and socio-economic status. If left unchecked, it will entrench these further. We are already seeing systems that make decisions about people’s lives without sufficient transparency or accountability.
The recent Public Voices in AI research reinforces this. People are not rejecting AI. They are rejecting the idea that AI alone will solve social problems. For them, public good is about fairness, belonging, community and the ability to live meaningful lives. AI should support that, not undermine it. That matters for early years.
Children are not passive recipients of technology. They are growing up within systems shaped by it. If we do not intervene, they will experience the downsides first: overexposure to screens, reduced attention, weaker communication skills and widening inequality for those already disadvantaged. As leaders in early years, we have a responsibility to act as advocates for children. That means asking hard questions about AI. Not just what it can do, but what it should do.
Do we resist AI altogether? No. It is already embedded in daily life. Do we accept it uncritically? Also no. That is how inequality becomes hardwired. The risk is not AI itself. The risk is our complacency.
If we become passive, we allow technology to be shaped solely by commercial interests and political expediency. We allow systems to develop that do not reflect the needs of children, families or communities. That is what being complicit looks like.
Instead, we need to build bridges.
1. Build critical AI literacy
First, we need critical AI literacy across the workforce. Not technical expertise, but informed judgement. Knowing when to use AI, when not to use it, and how to question its outputs.
2. Protect human relationships
Second, we must protect the relational core of early years practice. AI can support planning, reduce administrative burden and improve access to knowledge. But it cannot replace human relationships. A child’s development depends on connection, interaction and trust. That is non-negotiable.
3. Use AI to reduce inequality
Third, we need to challenge inequality directly. AI should be used to identify gaps in access, improve inclusion and support those most disadvantaged. If it is not doing that, we should question its purpose.
4. Ensure ethical AI governance
Fourth, we must insist on ethical governance. Data use, privacy and decision-making processes must be transparent and accountable. Communities need a voice in how AI is designed and deployed, particularly where it affects public services.
5. Focus on children’s futures
Finally, we need to keep the focus on children’s futures. The Public Voices research is clear. People expect AI to be pro-social, equitable, and future-focused. That includes recognising the long-term impact on children and the environment.
AI is not going away. But its direction is not fixed. We can choose to let it reinforce existing inequalities, or we can shape it to support a fairer society. That requires leadership, not compliance.
In early years, we have always been advocates for children. That role does not change because the context has. If anything, it becomes more important. AI for the public good will not happen by default. It will happen because we insist on it.
Frequently asked questions

What does AI for the public good mean?
AI for the public good refers to the use of artificial intelligence to improve societal outcomes, including fairness, inclusion, community wellbeing and equal access to opportunities.

Can AI increase inequality?
AI can increase inequality if left unchecked, as algorithmic bias may reinforce existing disparities in race, gender and socio-economic status. However, it can also be used to reduce inequality when applied responsibly.

Why does AI matter for early years?
AI is shaping the environments children grow up in. It affects learning, communication and access to opportunities, making it essential for early years leaders to engage with its impact.

Will AI replace early years practitioners?
No. AI can support administrative tasks and planning, but it cannot replace the human relationships that are essential for children's development.

How can early years leaders shape AI for the public good?
By building AI literacy, protecting human connection, challenging inequality, ensuring ethical governance and focusing on long-term outcomes for children.