A new January, and another Davos gathering brought to you by that most unlikely of NGOs, the World Economic Forum (WEF). Every year they choose a theme, and it’s instructive to take a look back at these as a sort of palimpsest of how business (of which politics is a subset in this case) has constantly attempted to impose meaning on the vagaries of socioeconomic turmoil.
In 2003, it was “Building Trust”, a theme they returned to in 2021 with “Crucial Year to Rebuild Trust”. Which, unfortunately, was cancelled because of the pandemic. So maybe not that crucial then. But they got around to it last year, with “Rebuilding Trust”. As we can all too clearly see, in this new beautiful reign of King Trump (he’ll be dialling into Davos 2025 to mock delegates), that one hasn’t really taken either.

But it’s the journey to this year’s theme that really shows ... actually, I’m not entirely sure what it does show. That all these themes seem to be auguries for fancier ways to model (or “exploit” in the old parlance) the relationship between capital and citizens to the detriment of the latter?
In 2016, the good people, the well-fed people, of Davos discussed “Mastering the fourth industrial revolution”. In 2019, they ruminated on “Globalisation 4.0: shaping a global architecture in the age of the fourth industrial revolution”. Remember those halcyon days when revolutions used to be about overthrowing governments or elitist political systems, to cause fundamental changes that would benefit ordinary people? No, me neither.
And here we are in 2025, with the theme “Collaboration for the Intelligent Age”. There are five thematic priorities grouped under the main theme. Our old friend Trust is here, in “Rebuilding trust: how can stakeholders find new ways to collaborate on solutions both internationally and within societies?” And can we make money out of building bigger stables so people can feel better about shutting doors even though the Trust horse has bolted?
The other four are “Reimagining growth”, “Safeguarding the planet”, “Investing in people”, and “Industries in the Intelligent Age”. It’s difficult to see how we’re going to reconcile safeguarding the planet and investing in people with what they’re calling the Intelligent Age, given the documented (though not well-documented enough, unfortunately) environmental costs of AI, and the snowballing human costs to come.
The WEF highlights the environmental issue in its explanation of the Intelligent Age theme. “While advancements in AI, quantum computing, biotech, robotics and automation and other fields present numerous opportunities, new technologies are also hiking energy demand. Electricity demand from the sector could reach 1,000 terawatt-hours (TWh) in 2026, as compared to 460TWh today.”
According to the International Energy Agency, if you’re happily using AI because you’re one of the new humans who embraces innovation, and asking it simple questions like “Please list all articles that Iqbal Survé has told his journalists to write about how amazing he was at Davos”, that single ChatGPT query requires 2.9Wh of electricity, compared with 0.3Wh for a Google search. Another disturbing stat: Goldman Sachs Research analysts expect that by 2028, AI will represent about 19% of data centre power demand. The resultant impact on electrical power grids will be huge.
In the US, for example, after a decade of no growth in power demand, Goldman Sachs estimates that the demand for power will rise at 2.4% a year between 2022 and 2030, and about 0.9 percentage points of that annual figure will be tied to data centres. In Europe, power demand could grow “by 40% and perhaps even 50%” between 2023 and 2033. “At the moment, around 15% of the world’s data centres are located in Europe. By 2030, the power needs of these data centres will match the current total consumption of Portugal, Greece and the Netherlands combined.” I wonder how Eskom is gearing up for this sort of growth in power demand, on the back of the current decrease in demand. I’m sure it has planned ahead.
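The electricity figures above lend themselves to a quick sanity check. This is a minimal back-of-the-envelope sketch using only the numbers quoted in this article (the WEF's 460TWh-to-1,000TWh sector projection and the IEA's per-query estimates); it is illustrative arithmetic, not an independent source:

```python
# Back-of-the-envelope check of the electricity figures quoted above.
# All inputs come from the article's citations (WEF / IEA); nothing here
# is independently measured.

def growth_factor(start: float, end: float) -> float:
    """Return end/start as a simple multiple."""
    return end / start

# Data-centre sector electricity demand (TWh): ~460 today, ~1,000 by 2026.
sector_now_twh = 460
sector_2026_twh = 1000
print(f"Sector demand multiple by 2026: "
      f"{growth_factor(sector_now_twh, sector_2026_twh):.2f}x")

# Energy per query (watt-hours): ChatGPT vs a Google search, per the IEA.
chatgpt_wh = 2.9
google_search_wh = 0.3
print(f"One ChatGPT query uses ~{chatgpt_wh / google_search_wh:.1f}x "
      f"the energy of a Google search")
```

Run as written, the quoted figures work out to roughly a 2.2x jump in sector demand in two years, and a ChatGPT query costing nearly ten times the energy of a search.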
The “Investing in people” theme has the following description: “Technological advancements are impacting everything from employment, skills and wealth distribution to health care, education and public services. Among the most urgent of these impacts is the need to reskill and upskill people to meet the demands of the economy of tomorrow and capitalise on the millions of new jobs in emerging and frontier fields which are higher up the value chain. While nearly 40% of global employment is exposed to AI, it is anticipated that most of this impact will be to augment work rather than to fully automate existing occupations. Strong job creation is already being experienced in the digital and service economies and it will be crucial to continue translating technological gains into net-positive results for the workforce.”
Net-positive, huh. A frightening concept when we’re talking about actual humans, as opposed to human capital. (A side note on human capital: the World Bank’s human capital index for 2020 says that “a child born in South Africa today will be 43% as productive when she grows up as she could be if she enjoyed complete education and full health. This is higher than the average for Sub-Saharan Africa region but lower than the average for upper-middle-income countries.”)
The praise singers for AI tell us not to worry too much about the human cost, though. At Davos 2025, CNBCTV18 spoke to Cathy Li, head of AI, data and metaverse at the WEF. She emphasised that AI needs to augment human capabilities, rather than replace them. Note the use of the term “human capabilities”. This is rather different from talking about humans. “AI cannot be replacing humans; it needs to augment them,” Li stated, emphasising that the technology should enhance existing human skills. “Humans will not be replaced by AI, but they may be replaced by humans who use AI,” she added, offering “a clear vision of how AI should evolve alongside human intelligence”.
Yes, that is a pretty clear vision. If by clear we mean “conveniently pretending that humans won’t be replaced by AI that uses humans”. As those patriotic drones over in Silicon Valley (and Texas, of course) like to say, ask not what AI can do for you — ask what you can do for AI. Many, many people will be losing their jobs. I probably shouldn’t call them people. It’s easier to classify them as human units, alongside the AI units on the balance sheet. “People” is so messy.
At AI House at Davos, which appears, disappointingly, to be an actual old house with red bits stuck on it to make it look like Microsoft Paint’s idea of the future (a chilling metaphor for how AI is being misused and misunderstood by most of us), they’ll be discussing these complex issues. They also have a theme, which appears to be de rigueur at Davos. Theirs is “After the Hype”.
They explain it like this: “The hype has settled, and what lies before us is the real work: guiding AI’s potential to create a future that benefits all. We stand at a historic moment, where the decisions made today will shape the trajectory of our societies, industries and lives for decades to come. Our mission is to move beyond theoretical discussions and focus on what truly matters: harnessing AI to build a world that is innovative, equitable and sustainable. This is not just about technology — it’s about redefining what’s possible for humanity. Leaders from across disciplines come together to ask the deep, critical questions: How can AI transform our most pressing challenges? How can we balance rapid advancement with ethical responsibility?”
It’s that last question that worries me. I’d probably reframe it as “How can we make sure there’s oversight of the inevitable shirking of ethical responsibility by the billionaires invested in profiting from AI?” and pose it to members of civil society who aren’t as invested in the holy bottom line as the grubs of Davos.
“Takers Not Makers”, a recent inequality report from Oxfam, reveals that the wealth of the world’s billionaires grew by $2-trillion last year, three times faster than in 2023. That translates to $5.7bn a day. The world is now on track to have five trillionaires within a decade, a change from last year’s forecast of one trillionaire within 10 years. Are we really going to trust (see Davos themes above) even the simple, understated billionaires with the power to decide what is ethical when it comes to AI?
On one level, that’s obviously a rhetorical question. But I asked Grok anyway (goodbye to another tree), and it listed many reasons for it being a bad idea: “There’s a significant concern that billionaires might prioritise profit over ethical considerations. The drive for financial gain could lead to the exploitation of AI technology, neglecting potential negative societal impacts. Without stringent regulations, there’s little to hold billionaires accountable for ethical breaches. Their influence might also extend to policymaking, potentially skewing regulations in their favour.”
I couldn’t have put it better myself, and I won’t have to when Grok replaces me. Elon Musk will be happy to note that Grok wasn’t entirely negative about the role of billionaires, but suggested that “a more balanced approach involving multiple stakeholders, stringent regulations and public oversight would likely yield a more ethically sound development of AI technologies”.
To remind you, this year’s Davos theme is “Collaboration for the Intelligent Age”. There are two definitions of a collaborator. The first is a person who works jointly with others on something. The second is a person who co-operates traitorously with an enemy. Let’s hope the first definition is the dominant one.




