If 2024 was the year of experimentation with generative AI, then last year was one of implementation. Hundreds of thousands of businesses, as well as many hundreds of millions of individual users, applied the technology in all kinds of weird and wonderful ways. In some cases, users found highly productive applications for AI; in many others, the technology’s limitations became increasingly apparent, resulting in embarrassing business blunders.
This year will therefore be dominated by hard-headed evaluation as AI comes under intense scrutiny over its practical reliability and commercial viability. In particular, there are three questions the industry must address to justify the extraordinary investment surge that may see AI capital expenditure top $500bn in 2026.
First, is generative AI now hitting the limits of scaling? Back in 2019, the AI researcher Rich Sutton wrote an essay entitled “The Bitter Lesson”, observing that the most effective way to build stronger AI was simply to throw more data and computing power at deep learning models. That scaling thesis has since been spectacularly validated by OpenAI and others, which have been building ever more powerful and computation-intensive models.
But Sutton is one of many researchers who now think that game is running out of energy, both literally and figuratively. This does not mean that progress in AI will grind to a halt. Far from it. But it does mean that AI companies will have to convince investors they can write smarter algorithms and exploit other more efficient research pathways. Expect to hear a lot more this year about neurosymbolic AI, which attempts to merge existing data-driven neural networks with rules-based symbolic AI.
Next, can the industry leaders develop viable business models as AI becomes more commoditised? Whereas the valuations of almost every business connected with the technology inflated in 2025, there will be far more differentiation in future. Some tech giants, including Alphabet, Amazon and Microsoft, will continue to deploy AI effectively to cut costs and improve existing services that already reach billions of people. But insurgent AI start-ups such as OpenAI and Anthropic, which are aiming for blockbuster flotations this year, still need to convince investors they can build competitive moats around their own businesses.
Third, how will the US tech giants respond to the increasing popularity of Chinese open-weights AI models? A year ago, China’s DeepSeek shocked the AI industry by releasing a high-performing reasoning model at a fraction of the training cost of most US counterparts. Since then, China’s so-called open-weights AI models, which are narrower, cheaper and more adaptable than most US models, have devoured market share. A recent study by the Massachusetts Institute of Technology and Hugging Face found that Chinese-made open models had leapfrogged comparable US models, accounting for 17 per cent of all downloads.
Even Sam Altman, OpenAI’s chief executive, has admitted that his company might have been on “the wrong side of history” by mostly developing expensive, proprietary closed-weights AI models. But US companies are now releasing more open models to get back into that game. How will they fare?
Much of the excitement about AI’s potential is justified. When judiciously applied, the technology can streamline business processes, boost productivity and accelerate scientific discovery. But this year both users and investors will discriminate between those services and businesses that offer real value and those that have just been opportunistically surfing the AI hype wave.