
Let’s give people power over AI

Jeni Tennison, Founder and Executive Director, and Tim Davies, Research and Practice Director at Connected by Data, explain why it is vital that decisions about how we use artificial intelligence (AI) in our workplaces and communities include all of us, not just the privileged few. The People’s Panel on AI is helping to make that a reality.

Written by:
Jeni Tennison and Tim Davies
Reading time:
8 minutes

2023 was the year that AI ‘arrived’. Artificial intelligence was never far from the news: from companies scrambling to add ChatGPT-like features to search engines and services, to central government announcing a team to look at public sector use of AI and hosting a global AI Safety Summit. With mainstream narratives focussed on the risks and complexity of AI, it can feel hard to keep up or gain confidence. The implication is that everyone, bar the leading technology firms, is ‘behind the curve’ of innovation and doesn’t understand enough about AI to have a say about its future.

Yet, AI has arrived before. In fact, it’s been arriving for many years. It’s here already. When we convened the People’s Panel on AI – a group of 11 ordinary people tasked with engaging with and deliberating on the AI Safety Summit and AI Fringe events – we started by talking about the AI already around us. Mobile phones, smart speakers, social media algorithms, job application portals: all of these have been deploying AI in various forms for years. When we stop and think about our experiences with these more common, everyday tools, we can identify and discuss good and bad aspects of how they affect our lives.

The People’s Panel on AI – democratisation in action

The People’s Panel on AI was a group of 11 people we brought together in parallel with the November 2023 AI Safety Summit. Participants were randomly selected by the Sortition Foundation to reflect the country’s diversity in terms of age, gender, ethnicity, and familiarity with AI; and the region, rural/urban location and level of deprivation of where they lived. Over the course of 4 days, they attended sessions of the AI Fringe, spoke with experts, got hands-on experience using AI tools, and watched the livestream of the AI Safety Summit itself.

Set the tasks of summarising what they learned, assessing it against their hopes and expectations, envisioning how people might be involved in decision-making about AI in the future, and making recommendations, the Panel reflected and deliberated together each day. The final day included a full morning working on recommendations, and the presentation of their findings to industry, government, academic and civil society stakeholders.

From fear and worry, to understanding and hope

The predominant emotion associated with AI by Panel members at the start of their deliberation was fear. Narratives that emphasise the brand new, frontier or complex nature of technologies intentionally disrupt our ability to understand and interrogate how we want technologies to fit into our lives and societies. When AI’s creators are hailed as the only people who understand its impacts, they get to set the terms for its deployment and regulation. When AI is characterised as the solution to the world’s problems, those who question or resist it are cast as Luddites, holding back progress. Technology becomes something that is done to us, rather than something in service to us. No wonder it makes people worried.

Simply learning about AI can help to counter these narratives. Discovering hands-on that, for example, ChatGPT has a knowledge cut-off date and can’t answer questions about today’s weather helped our Panel members develop a clearer understanding of what a generative AI system is (a model trained on selected data inputs chosen by its creator and constrained by the costs of re-training the model) and is not (the live equivalent of a human agent or brain). And perhaps counterintuitively, we found that with this greater understanding of the limits of current systems, people felt more hopeful about the potential uses of AI. It became less magical, and more of a tool they could wield.

Recommendations from the People’s Panel

Crucially, the People’s Panel were not just attending a training course. They were there to shape recommendations together that would influence the future of AI. They took this responsibility seriously, comparing their work to citizens doing jury service, in their own words “trusted to make life-impacting and significant decisions”. Over the course of 4 days hearing more about issues in AI, and deliberating on what they heard, they were able to produce a collective position drawing on their diverse backgrounds and perspectives as retired engineers, carers, civil servants, and tearoom owners. Based on their experiences on the Panel, many of their recommendations, listed below, speak to the importance of ongoing public involvement and engagement in AI’s development, adoption and governance.

  1. A global governing body for AI to bring together citizens, impartial experts and governments from across the world, and ensure regulatory collaboration that includes the Global South.
  2. A system of governance for AI in the UK that places citizens at the heart of decision-making, drawing on input from scientists, researchers, ethicists, civil society, academia and industry to inform and provide evidence, so that government and citizens can then work together on decisions.
  3. Awareness-raising about AI across society – from the classroom to the home, and from the workplace to the community – highlighting risks such as addiction to social media, as well as the opportunities that AI offers.
  4. A safe transition, with training, to support people into a world of work alongside AI, ensuring no-one is left behind.
  5. A continued national conversation on AI, including retaining the People’s Panel to keep public voice live in a fast-changing AI landscape.
  6. A focus on inclusive collaboration, to set out a vision of life where AI is used to enhance and balance human needs.
  7. Stakeholders acting with transparency at all times. An example of this might be a ‘black box flight recorder’ approach to AI models: one that protects intellectual property but is shared when things go wrong.

Empowering groups and communities to determine the role of AI in their lives is a vital part of ensuring that technology moves us towards greater equity, justice, and sustainability. It is the people affected by technology, and not its vendors or integrators, who should have the moral authority and legitimacy to make values-based and norm-reflecting ethical decisions about its development and deployment. Giving people agency has the added benefit of building literacy and trust, and in turn guiding the innovation and adoption of technology in directions that reflect social licence and public interest.

Scaling up or out?

Scaling this democratisation is the challenge. AI is often seen as a global and cross-cutting phenomenon and therefore in need of global and cross-cutting rules and regulation. This can lead us to forms of decision-making that either ignore people altogether or aim for shallow but broad, often technology-mediated, participation only possible with a level of digital literacy most people lack.

But the decisions we need to make about incorporating data and AI into our workplaces, schools, hospitals, and local communities are highly context specific. The details of the technology matter: what data is collected? How is it used? Who is it shared with? How is automation embedded into bureaucratic processes? What are the business models? How is it monitored and governed? The details of each community matter too, with different demographics, sensitivities, and norms. For example, predictive policing may be acceptable when targeting patrols to streets suffering a rash of burglaries, but unacceptable when used in ways that increase the use of police powers to ‘Stop and Search’ people in neighbourhoods with large ethnic minority communities. AI chatbots may be welcome when they give us faster access to help with our broadband connections, but undesirable as a replacement for proper support for mental health conditions.

These context-specific decisions about the deployment of data and AI systems require context-specific dialogues with the communities affected by them. As well as rightly giving those affected by AI a say in its deployment, smaller, distributed and local engagements can meet people where they are, and provide an opportunity for self-directed, goal-oriented learning. The challenge then shifts from the unachievable goal of ensuring everyone is equipped to make decisions individually and independently, to one of enabling groups and communities to learn, deliberate and decide together. It can then progress to getting people who currently control data and AI to take notice.

The role of grassroots civil society

An active civil society is vital to enable distributed dialogues to happen. Existing formal and informal organisations such as worker unions, school governing boards, patient associations, and local community groups are catalysts with established and understood engagement mechanisms that should be brought to questions of data and AI. As importantly, grassroots civil society organisations are uniquely positioned to take collective action and demand a powerful voice in how AI affects the communities they represent, when the organisations rolling out these technologies won’t listen. This might take the form of campaigning initiatives that seek to influence the rollout of AI systems, or the re-analysis and interpretation of data and AI by the communities they concern. It could even be the collection and ownership of data that challenge official sources by better representing those communities’ lived experiences – what the authors of Data Feminism, Catherine D’Ignazio and Lauren Klein, term ‘counterdata’.

To make this work, current mainstream narratives about AI – that it is new, complex, and the preserve of technical experts – need to be challenged. We need to strongly assert the primacy of the public in decisions about technology, over the companies and governments seeking to exploit it. We should highlight the stories of people with lived experience of AI and automated decision-making as the real experts on AI’s impact, rather than focussing on the developers in AI labs. We should reject the notion that automated decisions or generative AI systems are neutral or objective, when there is plenty of evidence that they can be biased or buggy. Whilst the Horizon computer system developed by Fujitsu for the Post Office didn’t use AI, that scandal has much to teach us about the way in which ordinary people need to be heard and trusted when they highlight that software has gone wrong, as well as the power of collective action.

Our narratives must also be hopeful rather than fearful. This does not mean blind techno-optimism: far from it. As our People’s Panel participants showed us, hope comes from being realistic about AI’s faults and limitations and from having agency over its implementation. We need hope for better lives and societies in order to direct the development of AI towards those goals. Finally, our hope must not be passive, but instead a hope we work towards through collective action and solidarity to bring about the world we want to see.


This reflection is part of the AI for public good topic.

Find out more about our work in this area.
