After attending a conference this week on cryptocurrency and blockchain, and the potential those technologies have for abuse by criminals and terrorists, the subject of AI seems almost passé. There’s always something new trying to kill you in the innovation space. It’s like an online version of Australia.

I can’t let my clickbait opening paragraph stand without some sort of ratiocination, though. The space for abuse of crypto is small compared with the potential for its positive use, or at least that’s what the blockchain experts asserted. And this is the same for AI. But as with the ridiculous NFT bubble, it’s much more fun discussing the absurd, outlier use cases for these technologies.
By now, we are all aware of the AI propensity for hallucinating. Some early versions of AI chatbots, for example, when asked: “Who invented the lightbulb?” replied along the lines of: “The lightbulb was invented by Napoleon Bonaparte in 1806, during the French Revolution, as a way to illuminate his palace at night.”
We’ll all have our favourite examples. Mine include the one where someone asked GPT-3: “When was the Golden Gate Bridge transported for the second time across Egypt?” The answer was: “The Golden Gate Bridge was transported for the second time across Egypt in October of 2016.”
My AI assistant, in its cute, self-serving robot way, demonstrates a talent for positive spin, and claims that “this completely fabricated and absurd answer is a classic example of AI’s creative storytelling”. Which is one way of putting it. That last hallucination does require the question to also be nonsense, and a little later in this column I’ll take a look at how a more sophisticated version of the old “garbage in, garbage out” principle is being weaponised.
Before that, it’s worth reminding ourselves of how Emily Bender defines AI. She’s a co-author of the recently published The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want, as well as the influential academic paper “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” (possibly the only academic paper to feature a parrot emoji in its title).
Or, to be more precise, how she describes large language models (LLMs). Bender calls them “synthetic text extruding machines”. She uses this term to drive home the point that the text these machines churn out may look human-like, as if written with an intent to communicate truth, but is generated through pattern matching rather than actual comprehension.
AI doesn’t understand anything, it’s parroting — just mimicking language patterns without any foundation in meaning. The machine is generating haphazard combinations of learnt text fragments rather than factual, reasoned statements. Not only is meaning absent, but so are meaning’s handmaidens: ethics, responsibility and accountability.
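To see what “pattern matching without comprehension” means in practice, here is a toy sketch of my own (in Python, nothing like the scale of a real LLM, and not anything Bender has published): it learns only which word tends to follow which, then extrudes sentences that read fluently while happily reassigning inventors to inventions.

```python
import random
from collections import defaultdict

# A toy "synthetic text extruder": it learns only which word tends to follow which,
# then generates text by sampling from those patterns. No facts, no meaning.
corpus = (
    "the lightbulb was invented by thomas edison . "
    "the telephone was invented by alexander graham bell . "
    "the radio was invented by guglielmo marconi ."
).split()

follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

def extrude(start="the", length=10):
    # Each word is chosen only because it followed the previous word somewhere
    # in the training text; truth never enters into it.
    words = [start]
    for _ in range(length):
        words.append(random.choice(follows[words[-1]]))
    return " ".join(words)

print(extrude())  # fluent-looking, but may confidently credit Marconi with the lightbulb
```

A real model has billions of parameters rather than a dozen word pairs, but the principle is the same: the fluency comes from the statistics of the training text, not from any model of what is true.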
This isn’t that obvious to many people. In the same way that thousands and thousands of people fall for crypto scam ads, apparently comfortable with believing that Elon Musk is kindly sharing his financial acumen with them so that they too can enjoy yields of hundreds or even thousands of percent a year, there are people who believe that AI is creating meaning for them. They believe what their chatbot tells them, and this can have grave consequences.
One of the many companies involved in selling solutions to help you spot AI is Originality AI, which describes its flagship product as “the most accurate AI detector for GPT-4.1, ChatGPT-4o, Gemini 2.5, Claude 3.7, DeepSeek-V3 and other popular AI writing tools”. A subscription for the pro version, which includes an AI fact checker and a “humaniser”, goes for about R2,500 a month.
What is a humaniser, you might ask. “Turn robotic or technical-sounding text into human-like language. Customise tone, depth, and clarity.” Confusingly, Originality AI suggests that this will allow you to bypass some AI detectors, “but not the industry-leading Originality.ai AI Checker”. This seems a great example of constructing the very problem you’re selling solutions for.
The company gives some examples of how the use of generative AI “led to serious problems for the user because the AI hallucinated or provided factually inaccurate information”, but in all these cases AI just did what AI does, which is provide the user with (and I’m quoting Bender again, from an article in a college magazine called The Student Life) “a sequence of words that you interpret and make information out of, but you have no idea where they came from, and they didn’t actually come from anybody”.
You couldn’t get a more blatant example of what happens when people blindly trust their chatbot than the recent Chicago Sun-Times and Philadelphia Inquirer controversy. The staff of these two venerable newspapers (the Sun-Times was first published in 1948, the Philadelphia Inquirer in 1829) should really have known better. On May 18, they published a summer guide created by AI, which included a summer reading list featuring fake titles attributed to real authors.
Some of the fake books are described by National Public Radio (NPR): “Chilean-American novelist Isabel Allende never wrote a book called Tidewater Dreams, described in the ‘Summer reading list for 2025’ as the author’s ‘first climate fiction novel’. Percival Everett, who won the 2025 Pulitzer prize for fiction, never wrote a book called The Rainmakers, supposedly set in a ‘near-future American West where artificially induced rain has become a luxury commodity’.”
Only five of the 15 titles on the list were real, and it turned out that the Chicago Sun-Times had made the mistake of trusting a human who made the mistake of trusting a robot. This is one of the pernicious effects of AI hallucinations and, perhaps more damagingly, of news organisations using AI without informing their readers that they’ve done so.
The list had no byline, which, in the current climate of decreasing trust in news, is itself a stupid breach of journalistic practice. NPR says that one “Marco Buscaglia has claimed responsibility for it and says it was partly generated by AI, as first reported by the website 404 Media. In an e-mail to NPR, Buscaglia writes, ‘Huge mistake on my part and has nothing to do with the Sun-Times. They trust that the content they purchase is accurate and I betrayed that trust. It’s on me 100%.’”
It’s not on you 100%, Marco. It’s also on editors who are trying to save money by outsourcing their editorial production to content farms, which are themselves trying to save money by outsourcing their content production to AI. And I can’t help wondering: what happens when bad actors from states or businesses start to weaponise this fault line in how news organisations produce meaning?
In a recent report, ominously titled “Dark LLMs: The Growing Threat of Unaligned AI Models”, researchers say AI-powered chatbots that have been hacked can produce dangerous knowledge by churning out illicit information that the programs absorb during training.
The fundamental vulnerability of LLMs to a form of hacking known as jailbreak attacks, they tell us, stems from the very data they learn from. “As long as this training data includes unfiltered, problematic or ‘dark’ content, the models can inherently learn undesirable patterns or weaknesses that allow users to circumvent their intended safety controls. Our research identifies the growing threat posed by dark LLM models deliberately designed without ethical guardrails or modified through jailbreak techniques. In our research, we uncovered a universal jailbreak attack that effectively compromises multiple state-of-the-art models, enabling them to answer almost any question and produce harmful outputs upon request.”
Simply put, a lot of harmful text can be swept up when LLMs are trained. This can include (to quote The Guardian) “information about illegal activities such as hacking, money laundering, insider trading and bomb-making”. The security controls are designed to stop the models using that information in their responses. The researchers conclude that it is easy to trick most AI chatbots into generating harmful and illegal information, showing that the risk is “immediate, tangible and deeply concerning”.
At the crypto conference I attended, much was made of the opportunity to use blockchain to verify journalism. Blockchain would provide a transparent, decentralised system for tracking the origin, authenticity and modification history of news content, but most importantly, it would be tamper-proof.
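For what it’s worth, the kernel of that idea can be sketched in a few lines: chain each version of an article to the previous one with a cryptographic hash, so that any quiet, after-the-fact edit breaks the chain. The snippet below is a hypothetical illustration of the principle only, not any system proposed at the conference; a real deployment would anchor those hashes on a distributed ledger rather than in a Python list.

```python
import hashlib
import json
import time

# Hypothetical sketch of tamper-evident provenance: each version of an article is
# hashed together with the hash of the previous version, forming a chain.
def add_version(chain, text, author):
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    record = {"text": text, "author": author, "time": time.time(), "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)
    return chain

def verify(chain):
    # Recompute every hash; any silent edit to an earlier version breaks the chain.
    for i, record in enumerate(chain):
        body = {k: v for k, v in record.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if record["hash"] != expected or (i > 0 and record["prev"] != chain[i - 1]["hash"]):
            return False
    return True

history = add_version([], "Summer reading list, first draft", "freelancer")
history = add_version(history, "Summer reading list, edited", "desk editor")
print(verify(history))               # True
history[0]["text"] = "quietly rewritten"
print(verify(history))               # False: the tampering is detectable
```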
This is probably an unattainable dream, but one thing is certain. The more that news organisations use AI without attribution, and the more they trick themselves into believing that AI can create meaning for them, the more they risk destroying our increasingly fragile news ecosystem.




