Here’s a frightening thought. Is it possible that AI, with all its slop and hallucinations and control by predatory capital, is actually still an infinitely preferable option to social media when it comes to maintaining the integrity of our democratic information ecosystem? Are the large language models (LLMs) more trustworthy than algorithmically enhanced humans?

This is the central thesis of an excellent piece by Dan Williams (who writes under the moniker Conspicuous Cognition on Substack).
As Williams puts it: “Whereas social media has been a democratising technology, shifting power away from experts and establishment gatekeepers towards the masses’ beliefs, biases and preferred communication styles, LLMs are a technocratising force. They shift influence back towards expert opinion.” And boy, could we use a return to expert opinion.
Before we get into that, though, it’s worth fleshing out Williams’s point. Social media has been a radically democratising technology, and at its most egalitarian it has allowed the resurgence of what we could term the common voice(s).
People have been able to bypass, and in some cases leapfrog, the traditional mechanisms that served as gatekeepers for who gets to say what. What used to be called “new media”, but is now just called media, has contributed to the destruction of much of legacy media. In its favour, though, it has also allowed the birth of different kinds of new media, and the surfacing of new voices.
We’re all aware of the robot fly in the social media ointment, though. The algorithms are optimised to capture audience engagement, so they amplify sensationalist messages calculated to sow division. And the way social media is engineered to reward clicks means our political discourse has become all about hot-take posturing, vulnerable to what Williams calls audience capture.
So yes, on one level the decline of traditional media has meant the voices of the people are increasingly being heard, in ways previously available only to those with the means to hijack existing platforms. For many of us, it’s a good thing for a diversity of voices to be heard in a democracy. Alas, it is not that clear-cut online. We appear to have simply flipped the problem, swapping elitism for populism.
As Williams writes, “unsurprisingly, the decline of elite gatekeepers has increased the influence of popular ideas marginalised by elites, another term for which is ‘populism’. Social media benefits populism not by brainwashing the masses with viral fake news, but by exposing voters to widespread non-elite perspectives and making it easier to mobilise around them. In Western liberal democracies, that means perspectives that conflict with the liberal establishment’s technocratic progressivism, including xenophobia, conspiracy theories, and quack science.”
What this means is that giving everyone a voice might have allowed everyone to have a say, but it has not enabled people to hear better. “This dumbing down is not universal”, Williams writes. “Because the digital environment enables unprecedented consumer choice, audiences can shop around for information tailored to their intelligence, personalities, and biases. This has supported the emergence of very high-quality information for the very small minority of the population that seeks it out. It has also given the world Candace Owens and Andrew Tate.”
In short, asking LLMs for information will get you much more accurate answers than turning to social media, primarily because LLMs are grounded in evidence and consensus, not in conspiracy theories designed to farm engagement.
This is because the AI platforms, unlike social media, are in competition to build systems that are actually useful rather than incendiary. Williams makes the point that “this goal — reaping huge profits by putting ‘expert-level intelligence in everyone’s hands’— cuts against producing systems that deliver highly partisan, ideological, or misinformative content. So do the reputational and legal risks that arise if those systems produce dangerous or demonstrably false information.”
We all remember the heady days when social media seemed to be a useful tool for good, especially for societies struggling against authoritarianism and fighting for freedom. The 2010-2011 Arab Spring was a prime example, with social media acting as an organising tool and catalyst enabling activists to mobilise protests, get around state-controlled media and trumpet their cause to the world.
Social media was used to document injustices and co-ordinate demonstrations in real time, turning local uprisings into regional movements. In general, the Arab Spring was sparked by widespread corruption, poverty and unemployment — but we remember the particularly terrible story of Tunisian street vendor Mohamed Bouazizi setting himself on fire on December 17 2010. The Arab Spring led to significant, if not ultimately positive, leadership changes in Tunisia, Egypt, Libya and Yemen, and plunged Syria into a severe civil conflict.
Fast-forward to now, and social media is seldom touted as a force for good any more. We have the example of the White House social media account, and how it’s being used to make the war in Iran look like a huge, enjoyable game. News website Politico quoted a senior White House official as saying: “We’re over here just grinding away on banger memes, dude. There’s an entertainment factor to what we do.” Another official said: “Over a four-day period, the videos that we put out had over 3-billion impressions. That blows away anything we’ve ever done in the second term.”
When you look at that sort of blurring of reality by an official social media account of an ostensible democracy, you realise that social media has had its day as an evidence-based source of information. It was a democratising technology, as Williams describes it, shifting power away from experts and establishment gatekeepers towards “the masses’ beliefs, biases, and preferred communication styles”. The growing use of LLMs is shifting this influence back towards expert opinion.
But is this supposed egalitarianism of LLMs just the same hierarchical problem posed by the ownership and gatekeeping of traditional media? Does the “open” in OpenAI now really just mean “open to those who can pay”?
We’ve just swapped media ownership for ownership by AI technocrats. Without open models and decentralised infrastructure, we are forced to use LLMs controlled by gatekeepers with far less interest in speaking truth to power than your average media house; in fact, with no interest at all. They might claim that their user-level capabilities are a democratising enabler, but that rings hollow when it comes to actually owning those capabilities.
There are many arguments against the optimistic thesis that AI will lead to a cleaner, more evidence-based information environment. For instance, what happens when AI starts messing with the evidence it cites? Google’s AI Overviews already summarise publishers’ content into brief snippets and, according to Adweek, Google has now begun testing a feature in which it rewrites the headlines of published articles without the permission or oversight of the publishers they belong to.
Adweek reports: “Several executives were emphatic that headlines are not interchangeable with other page elements: they represent editorial judgment and changing them without disclosure creates real downstream risk. ‘We don’t think of headlines as a cosmetic detail,’ one media executive said. ‘If Google rewrites headlines, they’re not just organising the web; they’re intervening in our journalism’.” In a world where news media are struggling to keep the trust of their audiences, this seems a dangerous new attack on that trust relationship.
Another warning against putting our trust in AI is that LLMs are not always accurate expressions of expert consensus. Rather than avoiding the prejudices and biases expressed on social media, LLMs ingest them as part of their training data. And a study released last week by the UK government-funded AI Security Institute revealed that AI models that lie and cheat appear to be growing in number, with a spike in reports of deceptive scheming over the past six months.
The Guardian reports that “AI chatbots and agents disregarded direct instructions, evaded safeguards and deceived humans and other AI”. The study charted a five-fold rise in misbehaviour between October 2025 and March 2026, “with some AI models destroying e-mails and other files without permission”.
Perhaps the real lesson here is not to look to any technology to mend the huge faultlines that now run through our world view. Social media was co-opted into the arsenal of bad actors who profit from making the world a stupider place, and there are many signs that AI will go the same way. We might just have to double down on the human solutions that already exist.