A New Social Contract for the Age of Artificial Intelligence

The age of artificial intelligence (AI) is not just approaching; it is already upon us. It will transform our lives and challenge our values, laws, and social norms. We need a new social contract for the age of AI.

Chris Kremidas-Courtney, Senior Advisor, Defend Democracy

Joe Litobarski, PhD Researcher in the History of Technology & Democracy, Maastricht University

The age of artificial intelligence (AI) is not just approaching; it is already upon us. Research on artificial neural networks began in the 1940s, but the combination of huge datasets and advances in computing power has led to breakthroughs in deep learning that are transforming entire industries. AI is now being built into all kinds of consumer products that we use for work and leisure.

There is clearly hype around AI. Some suggest the energy-intensive economics of AI mean we are in a speculative bubble. Even if this proves true, we still need to publicly debate the medium-term societal impacts of these technologies. After all, the dot-com bubble of the 1990s didn’t stop the Internet from fundamentally transforming modern society.

In fact, 2024 is projected to be the year when we shift from mostly talking about AI to AI becoming ubiquitous in our lives. The social and economic benefits could be enormous, but the risks may be just as significant.

Experts have warned for years of the potential dangers of AI’s impact on elections, and worries about economic disruption and drastic changes to labour markets are adding anxiety to a social order that is already showing signs of extreme stress.

Almost half of people in working-age households in OECD countries are financially fragile, and the spectre of losing jobs to AI is already registering as a concern for workers around the world.

A 2023 Gallup poll in the United States found that 22% of workers fear being made obsolete by new technologies such as AI, up seven points from 2021. This phenomenon has been dubbed FOBO: the fear of becoming obsolete. In the EU, FOBO may be even stronger; a 2017 Eurobarometer survey found that 72% of European workers fear new technologies will “steal people’s jobs.”

The Great Depression and the rise of fascism in the 1930s demonstrate the risks to society when significant parts of our communities lose their livelihoods. Widespread anxiety creates fertile ground for populists and authoritarian demagogues to gain power, endangering democracy, equality, and the rule of law.

Most studies on AI and labour markets indicate minimal overall impact on employment rates, but also that a significant number of workers may require new training to remain employable. Who will pay for this? What happens to the people who cannot be retrained?

A 2023 study by Goldman Sachs indicated that roughly two-thirds of jobs in the EU and US would be affected by AI and that it could replace one-fourth of current jobs. The same study also pointed to a 7% increase in global GDP and significant new job creation, perhaps offsetting most of the positions lost during the transition. Comparable studies by the International Labour Organization, the European Central Bank, and the International Monetary Fund report similar findings.

Yet these findings are seldom translated into clear headlines for the public. Even when they are, people mistrust them, since they come from many of the same voices that insisted, in the run-up to the 2008 crisis, that the financial sector did not need strong regulation. Furthermore, without concrete policies to ensure positive and inclusive outcomes for the AI transition, people can hardly be blamed for their scepticism.

The public remembers being told that technology would make their lives easier, reducing their workloads and freeing up more time for personal pursuits. Instead, the opposite happened: new technologies pushed them to work even longer hours, enabled their bosses to monitor them more closely, and even saddled them with work during their off time.

FOBO is causing many citizens to look at the future with trepidation instead of confidence, distrusting the experts and pundits whose previous predictions so often turned out to be wrong.

They are looking for answers, and if states and institutions don’t provide them, populists and authoritarian actors will.

Will I still be able to work? Will I earn enough to feed and house my family? For how long? Will my life be better or worse in ten years? What about the lives of my children? Will the rich get richer while the poor stay poor?

The age of AI will transform aspects of our lives in ways that will challenge our values, laws, and social norms, but it is not yet clear whether this age will bend in the direction of greater equality and prosperity. Given the recent history of tech development and society, the prognosis is not good unless we make a conscious and collective effort. This is why we need a new social contract for the age of AI.

“The future cannot be predicted, but futures can be invented.”

~ Dennis Gabor, Nobel Laureate and author of Inventing the Future (1963)

While a social contract is inherently a collective effort to discuss, construct, and agree upon social norms and values, a few key questions can already be identified.

Humans in the loop or humans in control? Autopilots may be able to fly planes, but humans must always be in the cockpit. The human-in-control approach ensures human accountability for decisions taken by and with AI that can affect life, health, and property. EU legislation already offers protection against automated monitoring and decision-making, but as AI-powered tools become more pervasive we may need to clarify and even expand these protections, particularly where decisions affect human dignity in employment, healthcare, education, finance, and beyond. For example, algorithmic management of platform workers is already problematic, and we should be asking ourselves now whether we are happy to accept AI management of workers in the future.

What will be the relationship between people and AI? How can we keep people safe and ensure new technological developments are aligned with our societies’ values? In the simplest terms, this means democratically agreeing, codifying, and enforcing a set of rules outlining what AI can and cannot do, and how humans may or may not use it. These rules should include a requirement for human consent over how and when AI may mediate a person’s interactions with the digital world, lest human agency be endangered.

How can we redesign education systems to encourage human-AI collaboration instead of AI replacement of humans? Not everything needs to be AI-powered, and education systems should continue to teach fundamental skills. Nevertheless, education and lifelong learning clearly stand to be transformed by AI technologies that allow greater personalisation to fit the learning approaches of individual students. Students of all ages should be taught to train and work with AI systems, including how to engage critically and ethically with deep learning algorithms and other AI technologies. With so many demands on education systems already, this will require strategic public investment and a focus on teacher training.

How can we guarantee a just transition for workers? In the labour market, job roles will need to evolve as they always have with new technologies. This might mean creating new positions that focus on human-AI collaboration, including ensuring that AI systems function as intended and are used responsibly. Adopting good practices learned from the ongoing climate transition, governments and the private sector should work together to help workers adapt in a just transition that prioritises the most at-risk sectors and develops the widespread skills needed to ensure responsible use of AI.

What human abilities and values do we want to protect? Humans do not live to work, but we do derive meaning from having a sense of purpose and achieving goals. If AI drastically changes an industry and most of its workers cannot move into other work, how will they derive meaning from their lives without work or similar achievements? AI is already making its own art and music while most humans toil away at much less creative and fulfilling activities. Are there certain skillsets that humankind will want to maintain, lest centuries of professional and cultural practice be lost? In many areas of human life it may be perfectly legitimate to want to protect AI-free “organic” or “bio” approaches, just as some EU countries have rules around languages and cultural production. Even if we want to give consumers the freedom to choose, we should consider whether we need rules around the labelling of AI-generated content for consumer choice to be meaningful.

How can we make the AI transition inclusive? There is by now greater awareness of the risks of algorithmic bias and algorithmic discrimination, but these challenges will be compounded by the widespread adoption of machine learning technologies. Increasing diversity throughout the AI ecosystem, from research departments to executive boards, will help support a more inclusive AI transition, as will a focus on greater diversity within training datasets.

How will we prevent people being left behind? This tech transition may be different from previous waves because it could also affect highly skilled professional occupations that were previously considered “safe” from automation. As technology advances, everybody is potentially at risk of feeling obsolete and “left behind.” We should always remember, therefore, that humans are an end in themselves and that any progress that leaves people behind is not progress. Rather than accepting the premise that humans must compete with automation, welfare systems and labour protections should adapt to support the creative time and space people need to generate the rich and varied cultural and social intelligence that deep learning models remix into artificial intelligence output. Is there a mix of Universal Basic Services (UBS) and Universal Basic Income (UBI) that preserves human welfare and dignity? Perhaps this could be partly funded by some form of AI tax?

How can we guarantee ‘data as labour’ is fairly compensated? As tech pioneers E. Glen Weyl and Jaron Lanier have been arguing since 2018, technologies such as today’s deep learning algorithms and neural networks rely on ‘data as labour’ to do the heavy lifting. This is worth reiterating: deep learning models could not do what they do without human-produced training data. Even synthetic data – training data produced by AI models – is ultimately a product of initial human data as labour. Decision-making within AI systems is notoriously a “black box”, but policymakers can help build transparency into models by designing regulations that facilitate collective bargaining between organisations representing workers creating training data and the owners of AI models commercially exploiting that data.

How can we embed sustainable AI systems within a circular economy? Deep learning algorithms require significant computational resources that are energy-intensive, and they run on hardware built from raw materials mined from the earth. This is one of the many reasons it does not make economic or environmental sense to have AI powering everything when cheaper approaches (including cheaper algorithmic approaches) will do the job just as well. Where there is a responsible case for AI, however, how can we ensure it is not driving carbon emissions and environmentally destructive resource extraction? Indeed, if applied responsibly, AI can potentially help societies become more energy efficient and sustainable.

Who will own AI-created innovations? In 2021, South Africa became the first country to grant a patent naming an AI model as the inventor, though other patent offices have been much more cautious. In December 2023, Google DeepMind’s FunSearch system discovered novel solutions to the cap set problem, a long-standing question in mathematics. As AI systems start producing innovations such as new drugs, how will intellectual property systems ensure the means of innovation benefit everyone rather than being permanently captured by the wealthiest?

Do we need public AI? As AI models become more sophisticated, they will require greater computational resources, which is already becoming an economic barrier to access. There is a long history of public institutions designed to provide free or affordable access to information services, from public libraries to public broadcasting services. Rather than a two-tier system, with state-of-the-art AI hidden behind paywalls and free (or throttled) access to basic AI, shouldn’t all citizens have access to cutting-edge public AI?

The EU should engage European citizens in a broad-based and dynamic public debate on the above questions and any others that arise. Alongside traditional representative politics, the Conference on the Future of Europe and the European Citizens’ Panels show how deliberative political processes might help identify answers in the democratic spirit of Jean-Jacques Rousseau’s The Social Contract.

There is an old saying from the early days of computing: garbage in, garbage out (GIGO), meaning that a computer’s output is only as good as the data put into it. Artificial intelligence could represent a vicious circle of GIGO: if we force humans to compete with automation, we will degrade society, which will degrade not only future AI training data but also the social context from which the companies and scientists who build AI systems emerge. This future ends in catastrophe.

But there is an alternative future, where AI extends and supports human social intelligence in a virtuous circle that feeds back into itself, encouraging a flowering of humanity. We still have time to choose that future.

“In truth, laws are always useful to those with possessions and harmful to those who have nothing; from which it follows that the social state is advantageous to men only when all possess something and none has too much.”

~ Jean-Jacques Rousseau, The Social Contract

Defend Democracy, 2 February 2024