The nosebleed valuations in the US tech sector partly reflect the belief that artificial general intelligence (AGI) is within sight. Even though few agree on what AGI means exactly, investors seem convinced that a stronger form of generalisable AI will transform economic productivity and make mountainous fortunes for its creators.
To sustain that story, US tech firms have been pouring hundreds of billions of dollars into AI infrastructure to scale up their computing power. The trouble is that scaling is now producing diminishing returns, and some researchers doubt whether the AI industry’s route map will ever lead to fully generalisable intelligence. Arch-sceptic Gary Marcus wrote recently that generative AI models were still best viewed as “souped-up regurgitation machines” that struggled with truthfulness and reasoning, were prone to hallucination, and would never bring us to the “holy grail of AGI”.
The debate about the limits of scaling has raged for years and, until now, the doubters have been proved wrong. In 2019 the computer scientist Rich Sutton wrote “The Bitter Lesson”, arguing that the best way to solve AI problems was to keep throwing more data and computing power at them. The bitter lesson was that human ingenuity was overrated and constantly outstripped by the power of scaling.