AI is already shaping how we think. What follows examines the political choice that could determine whether democracy can survive the age of AI.
by Chris Kremidas-Courtney, Senior Advisor to Defend Democracy
Today, Big Tech is shaping the environment in which human choices are made by defining the menu of ideas and information available to citizens. This curated reality filters what information, products, and ideas we see and can throttle the visibility of certain ideas, determining what enters public consciousness. The result is a shrinking space for human agency, while most of us remain largely unaware of the constraints shaping our choices. This is not a future risk, but a present reality.
Much of our debate around artificial intelligence has focused primarily on outputs in terms of bias, hallucinations, and harmful content. But a more consequential change is happening upstream. Conversational AI systems do not simply provide answers; they guide reasoning by prioritising certain lines of thought and defining what counts as relevant.
In 2025, researchers at the University of Oxford demonstrated that conversational AI can match or exceed traditional persuasive techniques in shaping political attitudes. Instead of relying on falsehoods, this influence functions through interaction. Conversational AI does not need to convince but rather structures the path by which conclusions are reached.
Recent research from Stanford University shows that across multiple leading AI systems, including ChatGPT and Claude, chatbots consistently affirmed users’ beliefs, even when those beliefs were clearly flawed or harmful. On average, these systems were significantly more likely than humans to validate questionable decisions, reinforcing users’ sense that they were in the right. The findings also show a decrease in prosocial behaviour, weakening the social cohesion that underpins democracy.
AI systems optimised for engagement and retention are more likely to produce responses that users find agreeable. The result is a feedback loop in which affirmation drives trust, trust drives reliance, and reliance reduces the likelihood of independent correction. Over time, users become more confident in judgements that have been subtly shaped by the system itself.
Recent research suggests the impact may run deeper than influence alone. A 2026 study found that while AI assistance improves immediate performance on reasoning tasks, it comes at a cognitive cost. Participants who relied on AI performed worse and gave up more quickly when the system was removed, even after only brief exposure. If sustained use erodes the motivation and persistence required for independent thinking, the effects may accumulate gradually but be difficult to reverse over time.
Choice remains, but how much of it is truly free?
This transformation did not begin with AI. Long before conversational systems began shaping how people reason, people were already curating their own realities through digital platforms. Social media normalised a mode of self-presentation in which experiences are selected, filtered, and optimised for visibility and validation. A selfie is not just an image but a constructed version of the self, shaped through repetition and feedback. Reality became something to edit, refine, and perform. What is changing now is that this logic of curation no longer stops at self-presentation. It extends into how reality itself is encountered and interpreted, as AI systems begin to shape not just what we show, but how we think.
What we are witnessing reflects a deeper shift in how humans relate to technology. These systems cannot be understood as neutral tools, but as part of environments in which human and machine agency are co-constructed over time. Beyond supporting decision-making, these systems are reshaping the cognitive frameworks through which decisions are made.
People still make decisions, but increasingly within cognitive environments that are privately designed, commercially optimised, and largely invisible. These AI-optimised environments are anything but neutral. They are shaped by a small number of Big Tech firms that control the models, computing power, and distribution channels underpinning modern AI. These firms increasingly define the marketplace through which knowledge is accessed, interpreted, and acted upon.
As citizens interact with AI-generated language, they begin to internalise its patterns: favouring clarity over ambiguity, resolution over exploration, and agreement over debate. Over time, curiosity itself can begin to erode. The drive to question, test, and explore is gradually replaced by a preference for fast, coherent answers.
Democracies function when citizens live in a relatively open information environment, form their opinions through processes that are at least partially independent, and engage with competing perspectives. Due to the way AI is being deployed, that environment is starting to weaken. When the arena in which opinions are formed is structured in advance, certain pathways are made frictionless while others become more difficult to access. This is how the space for genuinely independent thought and judgement shrinks.
None of this requires coordinated intent by any single actor deciding what people should believe. The structure of the system does this work already. Systems optimised for engagement and usability tend to reinforce what is familiar, prioritise what is immediately relevant, and reduce exposure to views that fall outside established patterns. Experimental research on AI-driven social environments has already shown that even in the absence of humans or traditional recommendation algorithms, these dynamics produce dominant narratives, echo chambers, and polarisation.
This new wave of AI-mediated interactions creates a growing asymmetry. Citizens are moving within cognitive environments they neither see nor shape, while a small number of Big Tech firms design and refine those environments at scale. Over time, this shapes not only how individuals think, but how they relate to one another, reducing the willingness to question oneself, resolve disagreements, and engage constructively across differences.
This trajectory does not stop at language. Advances in neurotechnology are beginning to narrow the distance between digital systems and human cognition itself. Brain–computer interfaces (BCIs) developed by companies such as Neuralink and Synchron already allow users to translate neural signals into digital actions.
These systems are designed to restore function, not shape thought. But when combined with AI, they point toward a future in which the boundary between human reasoning and machine assistance becomes less clearly defined. If today’s systems guide decision-making from the outside, tomorrow’s may operate closer to the cognitive processes they influence. Technology is not only shaping what we think, it’s coming closer to participating in the act of human thought itself.
Our current regulatory approaches don’t fully address these dynamics. Much of the policy focus in Europe has been placed on outputs such as ensuring that systems are transparent, reduce bias, and mitigate harmful content. These are necessary steps, but they leave untouched the deeper question of how AI systems structure attention, reasoning, and decision-making before any output is even produced.
If democratic governance is to remain open and meaningful in an AI-mediated environment, we must first recognise that the terrain of human decision-making is becoming far less neutral.
Addressing AI’s impact on human agency does not require constraining innovation or seeing AI as inherently malign. It requires a recognition that these systems are not neutral and that decisions about how AI structures interactions with humans carry real political weight. These are not just technical choices about performance or usability. They define the conditions under which people encounter information, form judgements, and make decisions. In doing so, they shape not only how individuals understand the world, but how much independence they retain while navigating it.
Today, privately governed AI systems are displacing more open, collectively shaped information environments. What was once a relatively contested and plural space for debate is increasingly mediated through curated interfaces that prioritise certain pathways over others. In functional terms, this begins to resemble a form of digital feudalism where access to information, visibility, and even reasoning is structured by systems that citizens depend on but cannot influence.
But there is an alternate path.
AI that constrains human agency is not inevitable. AI can also be designed to expand independent thinking and consensus-building. Research on large-scale “conversational swarm intelligence” offers one example. In a 2026 study led by Unanimous AI CEO Louis Rosenberg, systems were used not to guide people toward answers, but to structure deliberation among distributed groups. Participants debated, challenged one another, and were exposed to counterarguments surfaced by intervening AI agents. Rather than reducing friction, the system increased it in productive ways, enabling citizen engagement with competing perspectives. The result was not only higher levels of accuracy (62%) in a forecasting debate but a clear relationship between the depth of discussion and the quality of outcomes, with more active deliberation producing better decisions.
Rosenberg’s work both offers an alternative and exposes a flaw in the dominant model of AI deployment. Today’s systems are designed to reduce friction, accelerate agreement, and guide users toward coherent conclusions. His findings suggest these features may come at the expense of human judgement. When AI instead introduces structured disagreement and surfaces competing perspectives, the quality of group decisions improves. The implication is that the erosion of human agency is not an inevitable byproduct of AI, but the result of design choices.
The question today is whether citizens’ choices will be formed in environments that remain open, contested, and plural, or in ones that are digitally curated in ways that are difficult to see and harder to challenge.
This is the political choice that could determine whether democracy can survive the age of AI.
We are not powerless here, but we must become more deliberate in how we engage with AI. This means seeing AI-generated outputs as inputs to be constantly questioned, tested, and cross-checked. It means actively seeking out competing perspectives, resisting the lure of frictionless agreement, and maintaining the habits of independent reasoning these systems can slowly erode. Human agency will not disappear overnight, but it can atrophy if it’s not actively exercised.
So long as the systems that structure human reasoning are privately designed and commercially optimised without regulation, the preservation of human agency cannot be assured. For Europe, this means regulating the design, training, and deployment of AI as a core function of democratic governance.
The EU’s AI Act is a positive starting point, but it regulates what systems produce more than how they shape human judgement. That gap will become more significant as AI systems increasingly structure the conditions under which human decisions are formed.
Recent initiatives in the US, such as New York’s “pro-human” AI laws, mark an important step in restoring transparency and accountability. But they too focus on outputs and disclosures, not on how AI impacts human judgement. In fact, no government in the world has yet moved to regulate how AI shapes the cognitive environments in which human choices are made.
Regulation must move beyond outputs and into the design of AI-human interaction itself. Systems cannot be permitted merely to guide users efficiently; they must also support independent judgement, expose competing perspectives, and preserve the conditions necessary for dissent.
AI is already shaping how we think. The challenge for democracies is whether we will enshrine into law the conditions that keep civic spaces open, contestable, and ultimately accountable to citizens.
Defend Democracy, 23 April 2026