Does AI datacenter capex make financial sense?
Below I estimate the current annual run-rate of AI datacenter capex. Since the main monetization model today is selling $20 monthly subscriptions, I try to answer how many such subscriptions would need to be sold annually to justify the capex taking place.
Data center capex breakdown
A modern 100MW datacenter costs $3.35bn and houses 45,248 B100 GPUs.
Assuming a B100 costs $30k each, the GPU capex alone comes to $1.35bn, or ~41% of the total capex.
Source: Morgan Stanley research[1]
The main take-away from the above is that GPUs account for ~41% of the datacenter capex.
Current run-rate industry capex
In their latest quarter (Q1-25), Nvidia earned $22.5bn in datacenter revenue. Annualized, this equates to a $90bn run-rate. We also know that Nvidia has ~80% market share, which means total industry GPU capex is $113bn ($90bn / 80%).
We know GPUs account for 41% of datacenter capex, therefore total annual run-rate capex on AI datacenters is $274bn ($113bn of GPUs + $162bn on the rest: land, buildings, cooling, generators, etc.).
Below is the calculation summarized.
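The arithmetic can be reproduced in a few lines (a sketch using the round figures from the text; variable names are mine):

```python
# Back-of-envelope: from Nvidia's quarterly datacenter revenue to total
# industry AI datacenter capex (all figures in $bn, from the text above).
nvda_dc_quarterly = 22.5                 # Nvidia Q1-25 datacenter revenue
nvda_annualized = nvda_dc_quarterly * 4  # $90bn run-rate
market_share = 0.80                      # Nvidia's assumed GPU market share

industry_gpu_capex = nvda_annualized / market_share    # ~$113bn
gpu_share_of_capex = 0.41                # from the 100MW facility breakdown
total_capex = industry_gpu_capex / gpu_share_of_capex  # ~$274bn
non_gpu_capex = total_capex - industry_gpu_capex       # ~$162bn
```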
Depreciation
Hyperscalers typically depreciate GPUs over 3 years due to the rapid improvement in performance and hence the need to upgrade to the latest technology. Nvidia, however, warranties their GPUs for 5 years, so I assume a 5-year lifespan/depreciation instead. Depreciating the $113bn GPU capex over 5 years means annual GPU depreciation is $22.6bn.
I assume the rest of the facility is depreciated over 20 years. This includes things like:
- Building/real estate
- Networking gear
- UPS systems
- HVAC/cooling systems
- Generators
Since we calculated total capex excluding GPUs at $162bn, annual depreciation on this portion will be $8.1bn ($162bn / 20 years).
Summing the above means annual depreciation for the current run-rate capex will be $30.7bn ($22.6bn for GPUs + $8.1bn for the rest). Calculation summarized below:
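The depreciation schedule as a sketch (the 5- and 20-year lives are the assumptions stated above):

```python
# Annual depreciation on the current run-rate capex ($bn).
gpu_capex = 113.0        # annual GPU spend
other_capex = 162.0      # buildings, networking, UPS, cooling, generators
gpu_life = 5             # years (assumed, in line with Nvidia's warranty)
other_life = 20          # years (assumed for the rest of the facility)

gpu_dep = gpu_capex / gpu_life        # $22.6bn per year
other_dep = other_capex / other_life  # $8.1bn per year
total_dep = gpu_dep + other_dep       # $30.7bn per year
```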
Required revenue to justify the expense
To make things simpler, let’s imagine all the GPUs are going into one giant facility – a facility that costs $274bn.
The cost of goods sold will mainly be electricity (around $4bn per year); taxes then come off further down the income statement.
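A rough cross-check on the electricity figure (a sketch; the ~$0.05/kWh all-in power price and the assumption that every facility runs at full load year-round are mine, not from the source):

```python
# Sanity check on the ~$4bn/yr electricity estimate.
total_capex_bn = 274.0
cost_per_site_bn = 3.35                    # per 100MW facility (Morgan Stanley)
sites = total_capex_bn / cost_per_site_bn  # ~82 facilities
total_gw = sites * 0.1                     # ~8.2 GW of load
twh = total_gw * 8760 / 1000               # GW x hours/year -> ~72 TWh
price_per_kwh = 0.05                       # $/kWh, assumed
electricity_bn = twh * price_per_kwh       # 1 TWh x $/kWh = $bn; ~$3.6bn
```

At ~$3.6bn this lands in the same ballpark as the $4bn used above.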
The annual depreciation calculated above, $30.7bn, can be treated as the annual operating expense.
The net profit will look something like:
Revenue: $61.4bn
less: COGS (electricity): $4.0bn
less: Opex (depreciation): $30.7bn
less: Taxes: $5.6bn
Net profit: $21bn
Return on equity is thus 7.7% ($21bn / $274bn) – barely above the cost of capital. So you need to earn, at an absolute minimum, $61.4bn of annual revenue for this to make financial sense.
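The income statement solved through in a few lines (the 21% tax rate is my inference from the $5.6bn tax figure, not stated in the text):

```python
# The giant-facility P&L, all figures in $bn.
revenue = 61.4
cogs = 4.0                      # electricity
opex = 30.7                     # depreciation
pretax = revenue - cogs - opex  # $26.7bn
tax_rate = 0.21                 # assumed; reproduces the $5.6bn tax line
taxes = pretax * tax_rate       # ~$5.6bn
net_profit = pretax - taxes     # ~$21bn
roe = net_profit / 274.0        # ~7.7% return on the $274bn of capex
```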
To be fair, capex intensity will at some stage decline when all the physical datacenter buildings have been built. At that stage the capex will be only to upgrade the GPUs in the building every ~4 years.
How to generate $61.4bn in revenue
Finally, how plausible is it to generate $61.4bn in revenue? Revenue is currently generated via a subscription model or via a usage model.
The subscription model is mainly for individual consumers (like the $20 per month to use Perplexity or Gemini). Since I don’t know how much businesses are spending per month, I look at the revenue scenario mainly through the lens of how many users paying $20 p/m would be required. This is probably flawed, since businesses are likely a higher-margin opportunity for AI providers than individual consumers.
You would need 255mn subscribers at $20 p/m to generate $61.4bn in annual revenue. This is completely plausible – Netflix, for example, has 270mn subscribers (although their average subscription is only ~$12).
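The subscriber math, as a sketch:

```python
# Subscribers needed to generate $61.4bn at $20/month.
annual_revenue = 61.4e9            # $, the break-even revenue from above
price_per_month = 20.0
subscribers = annual_revenue / (price_per_month * 12)  # ~256mn people
```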
While spending $274bn on AI datacenters makes sense as you likely can get a few hundred million subscribers, one needs to remember that this capex is growing annually.
Let’s say AI datacenter spend increases by 20% per year for the next 3 years (conservative, as Nvidia’s datacenter revenues are projected to grow much faster than this). Cumulative AI datacenter capex out to 2027 will be about $1tn:
Year 1: $274bn
Year 2: $329bn
Year 3: $395bn
Total capex = $998bn
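The compounding above, sketched out:

```python
# Cumulative capex if spend grows 20% per year for three years ($bn).
capex = 274.0
total = 0.0
for year in (1, 2, 3):
    total += capex
    capex *= 1.20
# total is ~$997bn on unrounded figures
```

Compounding the unrounded numbers gives ~$997bn; the $998bn above comes from summing the rounded yearly amounts.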
You would need over 1bn subscribers to earn an acceptable return on this investment.
Consider the current capabilities of LLMs and the likely size of the market in terms of number of people. Desk workers – people writing, making presentations, and sending emails – are likely the best served by the current state of LLMs. A good proxy for how many people this is, is the number of Microsoft Office users, reportedly over 1.2bn people.[1] The TAM is large enough.
Current industry revenue
The addressable market seems large enough to support the current capex taking place. However, minimal AI-related revenue is reportedly being generated across the industry. ChatGPT reportedly has 180mn monthly active users, although only a fraction of those are paid users. OpenAI has a run-rate annual revenue of $3.4bn, and its latest funding round values it at $80bn. OpenAI is just one player among many, but it is the largest in terms of revenue.
Just a guess, but industry-wide revenue is currently probably less than $10bn annually. To justify the (growing) capex, industry revenues will need to increase to a couple hundred billion dollars per year.
Conclusion
The market seems large enough in terms of potential customers, but the current pace of capex growth is far outstripping revenues, and it will take many years (>4yrs) before we see a positive return on equity. It effectively means all the current GPU capex is being incinerated – keeping in mind that today’s GPUs will be redundant in ~4 years and will need to be replaced.
A different way of looking at it: all the capex spent until now has produced the current state-of-the-art models (GPT-4o, Claude, Perplexity). These models are clearly not good enough to incentivize large-scale paid usage, as evidenced by the minuscule revenues.
So, what needs to happen is more paid users which requires better models, which require better GPUs, which necessitates more rounds of GPU capex and training. We’ll probably get there, but how many of these ‘improvement’ or ‘capex cycles’ will be required and how much capital is available to fund this?
For big tech, the risk of falling behind in AI is too great not to spend as much as they can. I am warier of how long venture-capital-funded startups can keep up this spending. At some stage their investors will turn off funding if there is no clear path to profitability on the horizon.
Here probably lies the risk to Nvidia specifically. Half of their revenues come from the large hyperscalers, which are all developing their own AI chips. The hyperscalers currently still buy as many Nvidia GPUs as they can because their cloud customers (startups) want to train models on Nvidia hardware and CUDA. If the venture funding taps close, that demand will decrease. At the same time, hyperscalers will move more and more of their own AI training onto internally designed chips (e.g. Google and their TPUs). The financial incentive to do this is clear, since Nvidia has a ~4x markup on its GPUs. At that stage Nvidia will need to lower selling prices drastically, and its margins will contract.
As long as venture capital ignores profitability, Nvidia’s outlook remains rosy. However, at some stage models will either need to improve to incentivize paying customers or investment will decline and Nvidia’s margins along with it.
Nvidia at this stage is in the too-hard pile for me. It seems priced for perfection, ignoring the risks laid out above. It seems quite plausible that the industry will start questioning whether its short-term expectations are too high.
GenAI penetration rate
Software development is the industry with the highest GenAI adoption, and by quite a wide margin. This is due to the reported 50% increase in developer productivity from using GenAI code assistants like GitHub Copilot. GitHub Copilot is currently the most popular coding assistant, but Amazon’s CodeWhisperer and Google’s own offering compete in the same space.
Microsoft reported that GitHub (the whole business, including Copilot) grew revenue from $1bn in 2022 to $1.45bn in 2023, and that 45% of this increase was driven by Copilot – implying ~$202mn ($450mn x 45%) of Copilot revenue in 2023. Copilot had 1.8mn paid subscribers at the end of 2023. That squares: at $10 p/m per subscriber, total revenue would be $216mn ($120 p/y x 1.8mn subs), close to the implied $202mn.
There are around 27mn people employed as software developers in the world, which puts penetration at 6.7% (1.8mn / 27mn). 90% of Fortune 500 companies use GitHub, so it is not a case of companies skipping Copilot because they don’t use GitHub.
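The Copilot cross-check and penetration numbers, as a sketch:

```python
# Implied Copilot revenue from GitHub's disclosed growth ($bn),
# cross-checked against the subscriber count.
github_rev_2022 = 1.00
github_rev_2023 = 1.45
increase = github_rev_2023 - github_rev_2022     # $450mn
copilot_share = 0.45                             # of the increase
implied_copilot_rev = increase * copilot_share   # ~$0.202bn

paid_subs = 1.8e6
implied_from_subs = paid_subs * 120 / 1e9        # $0.216bn at $10/month

developers = 27e6                                # worldwide
penetration = paid_subs / developers             # ~6.7%
```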
It is quite surprising that the most useful use case of LLMs only has a ~7% penetration rate.
In conclusion, for current GenAI valuations to make sense, the industry is going to need to grow revenues 20x-30x in the next 3-4 years. Of course, we are only 18 months into the modern GenAI journey and it very well could happen (or not – remember when blockchain was going to replace the whole banking system?). My sense is that the market has front-run this just a little too much. This could rhyme with the late-90s internet cycle. If I had to guess, we are likely somewhere in the green circle below.
[1] Microsoft Office user count: https://wifitalents.com/statistic/microsoft-office/#:~:text=%22The%20number%20of%20Word%20users,to%201.2%20billion%20in%202020.%22
[1] https://longportapp.com/en/news/200696249