The Concentrated Economics of AI: Why Cloud Hyperscalers May Be Undervalued
The scale of addressable labour
Global labour compensation amounts to roughly $60 trillion per year, representing approximately 52-53% of world GDP. This figure includes wages, salaries, benefits, and imputed income of the self-employed across all economies. The question for investors is straightforward: what fraction of this $60 trillion is addressable by large language models and their successors?
The answer depends on how you define “cognitive work” - tasks involving language, analysis, code, and information processing rather than physical manipulation. In developed economies, knowledge work accounts for 40-50% of employment; globally the figure is lower at 25-35% due to the prevalence of agriculture and manufacturing in developing economies. A conservative estimate suggests $15-25 trillion in annual labour compensation sits in domains where current AI systems are at least partially competitive.
If AI achieves half the cost of human labour at equivalent quality in these domains - a threshold we are approaching in many categories - the economic pressure to substitute becomes overwhelming. Even modest displacement of 3% of global wages represents roughly $2 trillion in annual value shifting from labour to capital. This is not a hypothetical scenario requiring breakthroughs; it describes automation of customer support, content generation, code production, routine analysis, and translation - tasks where AI is already deployed at scale.
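The chain above (total compensation, addressable pool, modest displacement) fits in a few lines. All inputs are the estimates stated above; nothing here is measured data:

```python
# Back-of-envelope version of the labour-displacement arithmetic.
# All inputs are the article's own estimates.

global_labour_comp = 60e12          # ~$60T in annual labour compensation
addressable = (15e12, 25e12)        # estimated AI-addressable compensation

# A "modest" 3% displacement of all global wages
displacement = global_labour_comp * 0.03
print(f"3% of global wages: ${displacement / 1e12:.1f} trillion per year")

# As a share of the addressable pool, that displacement is small
share = tuple(displacement / a for a in addressable)
print(f"Share of addressable labour required: {share[1]:.0%}-{share[0]:.0%}")
```

The point the numbers make: the $2 trillion scenario requires capturing only around a tenth of the addressable pool, not wholesale replacement of knowledge work.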
Where the value flows
The critical insight is that this value transfer will be extraordinarily concentrated. Unlike previous technological transitions where gains dispersed across many firms and workers, AI economics funnel through a remarkably narrow stack: enterprises pay cloud providers, who take margins on both distribution and compute, with residual licensing fees flowing to frontier model developers.
Consider the structure. Global cloud infrastructure revenue currently runs at approximately $400 billion annually, dominated by three providers: AWS at $115-120 billion, Microsoft Azure at an estimated $75-107 billion (Microsoft does not report Azure revenue separately, so estimates vary widely), and Google Cloud at $48-50 billion. Together they control over 60% of the market. Frontier AI labs - primarily OpenAI, Anthropic, and Google DeepMind - generate perhaps $10-30 billion in combined revenue, though Google’s contribution is bundled into its cloud figures.
When an enterprise deploys AI, the transaction typically routes through a hyperscaler platform - Amazon Bedrock, Google Vertex AI, or Azure AI. The hyperscaler captures distribution margin and underlying compute revenue; the frontier lab receives licensing fees net of distribution costs. In Google’s case, vertical integration means it retains the full economics. The arithmetic is stark: of every dollar enterprises spend on AI, hyperscalers likely capture 60-80 cents.
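A minimal sketch of that per-dollar routing. The 70% hyperscaler take is simply the midpoint of the 60-80 cent range above, not a figure any provider discloses:

```python
# Illustrative split of one enterprise AI dollar across the stack.
# The default take rate is an assumed midpoint, not a disclosed figure.

def split_ai_spend(spend: float, hyperscaler_take: float = 0.70) -> dict:
    """Hyperscaler keeps distribution margin plus compute revenue;
    the frontier lab receives the residual licensing fee."""
    hyperscaler = round(spend * hyperscaler_take, 2)
    return {"hyperscaler": hyperscaler,
            "frontier_lab": round(spend - hyperscaler, 2)}

print(split_ai_spend(1.00))   # {'hyperscaler': 0.7, 'frontier_lab': 0.3}
```

For a vertically integrated provider like Google, both buckets land on the same income statement; for Azure or AWS, the second bucket flows out to OpenAI or Anthropic.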
The enterprise moat
Why will enterprises route through hyperscalers rather than directly to frontier labs? The answer lies in 15 years of accumulated enterprise infrastructure that frontier labs cannot replicate quickly.
Cloud providers have built comprehensive compliance frameworks - FedRAMP, HIPAA, SOC2, PCI-DSS, ISO 27001 - that satisfy regulated industries. They offer data residency guarantees across sovereign regions, fine-grained identity and access management integrated with enterprise directories, audit logging that feeds existing security operations centres, and private network connectivity that keeps data off the public internet. A bank or pharmaceutical company evaluating AI deployment faces a choice: undergo new vendor approval, security review, and compliance assessment with a frontier lab, or add AI services to an existing cloud contract already blessed by procurement, legal, and security teams.
This is not a temporary advantage. Enterprise trust relationships compound over time, and the frontier labs are building on hyperscaler infrastructure themselves - Anthropic trains on AWS and GCP, OpenAI runs on Azure. The distribution channel is already determined.
Valuation implications
Current market capitalisations are: Nvidia at $4.5 trillion, Google at $3.9 trillion, Microsoft at $3.6 trillion, and Amazon at $2.5 trillion. These valuations reflect the market’s belief that AI is significant, but it is worth examining whether they adequately price the scenario where AI captures meaningful share of cognitive labour.
Assume $2 trillion shifts from labour to AI annually, with half retained by enterprises as cost savings and half flowing to AI providers. Of the $1 trillion in AI revenue, approximately $600-700 billion routes through hyperscalers - roughly $200-235 billion per firm if split evenly across the big three. Applying revenue multiples of 8-12x - consistent with high-growth, high-margin cloud businesses - suggests added market capitalisation of roughly $1.5-3 trillion per major hyperscaler.
Adding this to current market capitalisations implies Google reaching roughly $5.5-6.5 trillion, Microsoft $5-6.5 trillion, and Amazon $4-5.5 trillion. In percentage terms the upside is most pronounced for Amazon (roughly 65-110% from current levels). An even split understates Google’s leverage, since vertical integration lets it capture frontier-model economics on top of the cloud share, while Microsoft offers lower upside but also lower risk given its existing positioning.
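The scenario can be reproduced in a few lines. Every input is one of the assumptions stated above, not a forecast, and the even three-way split across the big three is a deliberate simplification:

```python
# Reproduces the valuation arithmetic. All inputs are the scenario's
# stated assumptions; the even split is an illustrative simplification.

labour_shift = 2.0e12          # $2T/yr shifting from labour to AI
provider_share = 0.5           # half flows to AI providers
capture = (0.60, 0.70)         # hyperscalers route $600-700B of that $1T
multiple = (8, 12)             # revenue multiple for high-margin cloud

ai_revenue = labour_shift * provider_share         # $1T to AI providers
per_firm = [ai_revenue * c / 3 for c in capture]   # even split, big three
added_cap = (per_firm[0] * multiple[0], per_firm[1] * multiple[1])

current = {"Google": 3.9e12, "Microsoft": 3.6e12, "Amazon": 2.5e12}
for firm, cap in current.items():
    lo, hi = cap + added_cap[0], cap + added_cap[1]
    print(f"{firm}: ${lo / 1e12:.1f}-{hi / 1e12:.1f}T "
          f"({added_cap[0] / cap:.0%}-{added_cap[1] / cap:.0%} upside)")
```

Because the even split pins every firm to the same absolute gain, the smallest base (Amazon) shows the largest percentage upside; weighting Google’s capture higher to reflect its vertical integration would push its figure toward the top of the range.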
Google represents the most leveraged bet on this thesis. Cloud currently constitutes only about 13% of Alphabet’s revenue, meaning incremental cloud growth moves the needle substantially. Google Cloud only recently achieved profitability - operating income grew 142% year on year in Q4 2024 - so operating leverage remains high as the business scales. Unlike Microsoft and Amazon, Google owns its frontier models outright, capturing full economics rather than sharing with partners. The risk, of course, is that Google is also most exposed if AI disrupts search: advertising still supplies roughly three-quarters of Alphabet’s revenue.
Speed of transition
The conventional assumption is that enterprise technology transitions require 5-10 years. This framing may be wrong for AI.
Previous transitions required infrastructure buildout - internet connectivity, mobile networks, SaaS platforms. AI deployment requires none of this. The infrastructure exists, the models are accessible via API, and enterprises are already running workloads on the relevant cloud platforms. The adoption bottleneck is organisational, not technical.
Moreover, the competitive pressure is acute. A firm whose competitor achieves a 30% cost reduction through AI automation faces existential pressure within 18-24 months. CFOs understand this arithmetic, and procurement cycles compress when survival is at stake. Enterprise AI adoption is also unusual in following rather than leading consumer adoption - ChatGPT reached 100 million users faster than any consumer product before it, creating executive awareness and demand that typically takes years to build.
The $2 trillion displacement scenario could plausibly unfold in 2-4 years rather than 5-10. The capability exists today for automating tier-one support, content production, code generation, routine analysis, and translation. These categories alone represent a meaningful fraction of knowledge work. Larger figures - $5-10 trillion in displacement - require penetrating harder domains: physical work, high-stakes professional judgement, and heavily regulated industries. These will take longer, but the initial wave may be shockingly fast.
Conclusion
The thesis is simple. AI that matches human capability at half the cost will capture substantial share of the $60 trillion global labour market. The concentrated structure of AI infrastructure - three hyperscalers, two or three frontier labs - means this value accrues to a remarkably small number of firms. Current valuations reflect AI enthusiasm but may not adequately price the magnitude of labour addressable or the speed of enterprise adoption. Google offers the most leveraged exposure to this thesis: smallest cloud base, newly profitable cloud operations, and full vertical integration including frontier models. The market may be underpricing both the size and the velocity of what is coming.

