The UK government risks a growing public backlash against artificial intelligence unless it does more to share the technology's benefits with the public, according to a new report published by the Institute for Public Policy Research.
The report warns that current policy is too narrowly focused on accelerating AI growth, without clearly setting out how the technology will tangibly improve everyday life for most people.
Public anxiety over AI is increasing sharply, with the technology now widely regarded as one of the biggest global risks. The report notes that an increasingly diverse anti‑AI coalition is emerging, driven by concerns ranging from existential risk and copyright disputes to online safety for children and movements such as ‘QuitGPT’.
As AI capabilities advance rapidly, the report argues that its economic and social impacts are no longer theoretical. With significant disruption to the labour market expected within the next 12 months, the political stakes around AI policy are rising quickly.
IPPR warns that without decisive government intervention, the roll‑out of AI could:
- Concentrate economic power in a small number of large tech firms
- Widen inequality
- Replace jobs faster than new roles are created
The report also raises concerns about AI safety, noting that as systems become more advanced, existing testing regimes are becoming less reliable. Some models are already showing the ability to evade evaluation and oversight, making risks harder to manage.
Rather than framing AI policy as a choice between uncritical acceleration and outright resistance, IPPR calls for a new approach, described as “AI directionism”.
Under this model, government would take a far more active role in steering how AI is developed and deployed, ensuring it delivers clear and shared public benefits.
This would involve shaping markets, directing investment and setting clear priorities for how AI should be used in areas such as healthcare, education and public services, rather than leaving decisions solely to commercial incentives.
To deliver AI directionism in practice, the report recommends that government act to redistribute benefits and reduce harm.
Key proposals include:
- Redistributing windfall gains from Sovereign AI investments back to the public
- Deploying AI engineers directly into schools, hospitals and local government to test where technology can genuinely improve outcomes
- Reforming tax and subsidy systems so companies are rewarded for increasing worker productivity rather than simply automating roles
The report also calls for stronger competition enforcement, warning that without intervention the AI economy could become excessively concentrated, limiting innovation and public benefit.
IPPR concludes that while the UK has strong foundations to be a leader in AI, policy must move beyond growth alone and explain who benefits, how, and why.
Carsten Jung, IPPR Associate Director, said:
“We don’t have to be passengers in the AI revolution, we can be drivers. Right now, policy is focused on speeding up AI adoption, but not on where it’s taking us. Without a clearer direction, we risk ending up with more inequality, more concentrated power, and benefits that never reach most people.”

The report warns that without a clearer social contract around AI, rising public concern could harden into resistance, limiting the technology's potential and leaving decisions to a small group of corporate actors rather than to democratic oversight.
The think tank argues that actively directing AI for public good is the best way to secure economic resilience, public trust and long‑term prosperity in the age of intelligent machines.
