AI for public good

AI shifts the goalposts of digital inclusion

Dr Emma Stone, Director of Evidence and Engagement at Good Things Foundation argues that for an inclusive, equitable future, we urgently need to fix the existing digital divide in the UK before artificial intelligence (AI) increases it further.

Written by:
Dr Emma Stone
Reading time:
8 minutes

Artificial intelligence (AI) is changing what inclusion means in today’s digital society and economy. The evidence is clear: digital exclusion is most likely to affect people who are older, and people of all ages who live in poverty (Ofcom, 2022). Lacking a device, connectivity, or digital skills and confidence makes life harder - especially for disadvantaged groups and in areas of multiple deprivation. As a recent House of Lords committee inquiry concluded, digital exclusion has ‘profound consequences for individual wellbeing and multibillion pound implications for UK productivity, economic growth, public health, levelling up, education and net-zero objectives’ (House of Lords, 2023).

Millions are on the wrong side of the UK’s digital divide. As data from Ofcom, the Lloyds Consumer Digital Index, and Nominet reveal: 10 million adults lack the most basic level of digital skills; 2.5 million households struggle to afford home broadband; and 1 in 14 households have no home internet access at all (Digital Nation, 2023). While old age is the strongest single predictor of being offline, being born in a digital age is no guarantee of inclusion. Around 500,000 children and young people lack a device suitable for learning (Nominet, 2023). Too many people feel fearful, frustrated, forced to go online, or left behind, and unfettered AI development risks exacerbating existing divides. So, what does this mean for AI?

We still have millions of people who are excluded from basic digital inclusion, let alone an AI version of society.

Helen Milner, Chief Executive, Good Things Foundation, 2023

AI is a seismic shift in the goalposts of what it means to be digitally included. Public narratives linking AI developments with digital inclusion vary. Optimists spotlight opportunities to save time and money for people learning how to use the internet, and for those supporting them. Pessimists emphasise how AI is advancing without regard to people left behind, widening inequalities along familiar lines of income and opportunity. So, the question is: how can we shape a future in which we meet Rachel Coldicutt’s clarion call:

Let’s make AI work for 8 billion people, not 8 billionaires

Rachel Coldicutt, Executive Director at Careful Trouble, 2023

Red flags: digital exclusion and AI


Digitally excluded people have a smaller digital footprint. Their invisibility in the datasets used to train large language models and other AI tools raises red flags about reliability and robustness. Where AI is used to inform policies and decisions, these too may be flawed. The dangers to social justice are highest where policies and decisions affect the people who need public and health services most - people who are also more likely to be digitally excluded because of old age, low income, or fear and distrust of digital technology.

When we create tools that are using unrepresentative, biased data, we are replicating harms that are happening at a structural level.

Rachel Coldicutt, 2023


Whether for or against AI, the loudest voices in AI debates are those who are digitally engaged. The voices of digitally excluded people are seldom sought in mainstream debates about new technologies – and AI is no different. Yet failure to engage the public - including digitally excluded people - in talking about AI will result in a failure to build trust, shape a shared sense of what ethical, responsible use of AI means, and ensure any benefits of AI are equitably distributed.


Across public, private and voluntary sectors, leaders are beginning to think about what AI means for their bottom line, their services and their workforce. Fewer think about the digital needs, hopes and fears of their customers, service users and communities. Everyone deserves equal opportunities to learn about AI, its implications, and how to use it if they choose to do so. Deny these opportunities and risk worsening inequalities - digital, educational, economic, and social. Learning about AI isn’t only for Bloomberg Mayors; AI needs to be on the nation’s curriculum, from classrooms to community centres.

Even at a local delivery level, and on the same day, we are working with businesses using this technology to advance and grow, versus people who just can’t engage. The digital divide therefore is growing wider.

Julie Hawker, Joint Chief Executive, Cosmic, 2023

Some are already working to combat these threats. The NHS is working with the Ada Lovelace Institute on an AI Impact Assessment tool to mitigate bias in AI-driven health technologies. The People’s Panel on AI and the AI Fringe extended public involvement in the AI debate. Digital inclusion experts are advocating to get the basics right - a Minimum Digital Living Standard - and to recognise internet access as an essential, not a luxury. Alongside these efforts, a cohort of community-based practitioners is navigating this new landscape with inclusion centre-stage.

Green shoots: AI and digital inclusion in communities

Several community organisations in the National Digital Inclusion Network (run by Good Things Foundation) are finding ways to support people to learn about AI and tackle digital exclusion.

In South London, ClearCommunityWeb is a social enterprise supporting older people, vulnerable adults and carers to feel more comfortable with technology. They create a safe, supported space to talk about AI and support people to have a go themselves, such as by using Bard or ChatGPT to write a letter of complaint to a landlord. This simple approach was hailed as a “real game changer” by those taking part. It added a whole new dimension to people’s sense of the risks and rewards of using AI, bringing these tools out of the realm of robots and into everyday life. People started talking about how AI could help them, and how using AI was something they could do too.

Everything we try to do is to help people feel safer, feel more involved, feel they can participate. That for us is the beginning of a building block for people to be part of their own journey to improve their lives. If we don’t bring people along, that divide is always going to be there.

Caspar Kennerdale, Managing Director at ClearCommunityWeb CIC, 2023

Swansea MAD is another digital inclusion hub in the National Digital Inclusion Network. AI courses for beginners are now part of their work with young people and unemployed adults. They want to create an inclusive, diverse talent pool of adults who can work in tech and creative industries. Swansea MAD also create safe, supported opportunities for people to get hands-on experience of using AI tools.

We kind of teach it the way sport is taught…You wouldn’t give somebody a book on football. You’d say: ‘Go and have a good kickabout - see what you can do’.

Stuart Sumner-Smith, Senior Employability Officer, Swansea MAD, 2023

For community organisations like Cosmic, Swansea MAD and ClearCommunityWeb, and national digital inclusion charities like Good Things Foundation, supporting people to discuss, discover, and learn basic AI skills is becoming part of their work to ensure everyone can benefit from digital. The question that remains is how we can scale the lessons from such interventions, so that as many people as possible can be supported.

Amber warnings: redress and responsibility

A strong and active civil society is essential to supporting individuals to build AI skills and promoting critical engagement with AI products and services. Finding ways to support community organisations (as well as education and skills providers) to learn about AI themselves, and to be able to share this learning with those they work with, will become more important. But it could quickly become overwhelming too, as the impacts of AI disruption begin to take hold.

We also need an empowered civil society to shape AI developments at a structural level - countering harmful AI narratives, and holding governments, public institutions and big businesses to account for their use of AI in our lives. The actors’ and writers’ strike in America highlighted the power of collective organising to shape how AI is used. Closer to home, the Post Office scandal is a stark reminder of how hard it can be to get access to justice from established institutions and big business.

If AI is going to level the playing field of digital inclusion, public trust must be won. While the onus lands on those in positions of power to ensure transparency, promote fairness and accountability, and establish routes to redress, civil society also has a role in upskilling individuals, monitoring progress, and engaging in advocacy. While there could, therefore, be tension between organisations who seek to build AI skills on an individual level, and those who focus on shaping and challenging AI developments at a systemic level - in reality, we need both.

Final thoughts

AI is already shifting the goalposts of digital inclusion. As such, people need equal opportunities to learn about these novel technologies - how they are developed and used, alongside the risks and rewards - and to build the practical and critical skills to use them too, if they choose.

As AI advancements accelerate, we must not overlook those who are already rendered invisible in datasets, design, and debates around AI. On a systemic level, building public trust in AI will require government, industry and civil society to prioritise transparency, ethics and accountability - and to bring everyone on the journey.

Finally, if our vision for AI is an inclusive, equitable future, then we urgently need to get the basics in place and fix the UK’s digital divide - fast.

We want to have a fair, equal society, and we want to make sure that AI is happening with people involved, not just to them.

Helen Milner, CEO at Good Things Foundation, 2023

This reflection is part of the AI for public good topic.
