Power Surge: The biggest bottleneck in data center construction
Warning - the stats on the deficit may shock you...
As the digital economy continues to grow, it will come as no surprise that the capacity and scale of data centers is growing at an exponential rate. Jensen Huang (NVIDIA) has indicated that around $1 trillion will be invested in data centers over the next four years just to keep pace with AI demand. To put that rate of change in context: it took 15 years to accumulate $1 trillion of investment in data centers via traditional cloud and compute applications. And Huang's statistic accounts for AI demand alone; there is no sign of growth slowing in traditional cloud and compute demand either.
So we absolutely have demand for more data centers, but the main obstacle to meeting it isn't what you might suspect.
It's not the supply of land. There's plenty of land to build on, or existing industrial sites that can be converted.
It's not the supply of capital. There's plenty of money to fund them, generally from real estate funds and venture capital firms.
It's not the supply of equipment and hardware. The technology continues to keep pace with construction and with the challenges the latest hyperscale installations bring - cooling, equipment density, hardware and facilities management.
The problem is exclusively the supply of power.
The first (minor) problem - Generating enough of it.
Between conventional methods and renewables, we have no shortage of ways to generate power. There is certainly enough potential to generate sufficient power to meet the demand, but it will require a lot of investment, very quickly.
In terms of power requirements, a conventional data center is an incredibly hungry beast; but next to a hyperscale AI data center, it barely compares. To give some illustration of this:
The whole of New York City draws around 5 gigawatts of power on a continuous basis.
A conventional data center running cloud or traditional compute workloads draws around 60 megawatts continuously.
A hyperscale AI data center draws closer to 300 megawatts.
Up to now, power providers and utility companies have typically managed to add around 5-6 gigawatts of new capacity per year in most regions.
To keep pace with this scaling, that rate effectively needs to quadruple. Current research from data center investors suggests that the traditional approach to building out generation is now completely insufficient for modern requirements, and that every solution will need to be rolled out to meet demand - renewables and non-renewables alike: solar, wind, hydroelectric, geothermal, gas, coal and nuclear.
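To make those numbers concrete, here is a minimal back-of-envelope sketch in Python using the figures quoted above (the 5,500 MW/year value is simply the midpoint of the 5-6 GW range); it is an illustration of scale, not a forecast.

```python
# Back-of-envelope comparison using the figures quoted above.
# All loads are continuous draw in megawatts (MW); 1 GW = 1,000 MW.

NYC_LOAD_MW = 5_000               # New York City: ~5 GW continuous
CONVENTIONAL_DC_MW = 60           # conventional cloud/compute data center
HYPERSCALE_AI_DC_MW = 300         # hyperscale AI data center
ANNUAL_GRID_ADDITIONS_MW = 5_500  # midpoint of the 5-6 GW/year quoted

# How many facilities would match New York City's continuous draw?
print(f"Conventional DCs per NYC:  {NYC_LOAD_MW / CONVENTIONAL_DC_MW:.0f}")   # ~83
print(f"AI hyperscale DCs per NYC: {NYC_LOAD_MW / HYPERSCALE_AI_DC_MW:.0f}")  # ~17

# A single AI campus absorbs this share of a typical year's new capacity:
share = HYPERSCALE_AI_DC_MW / ANNUAL_GRID_ADDITIONS_MW
print(f"One AI campus vs yearly grid additions: {share:.1%}")  # ~5.5%

# If the build-out rate "effectively needs to quadruple":
print(f"Target additions: {4 * ANNUAL_GRID_ADDITIONS_MW / 1_000:.0f} GW/year")  # 22
```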
There are compounding issues here too: the heavy-duty turbines used to convert heat into electricity are made by only around three major manufacturers serving the US market (Siemens Energy, GE Vernova and Mitsubishi Power), and the current waiting list for one is around four years.
The second (major) problem - Getting it distributed.
Much of the distribution grid infrastructure in certain parts of the world dates back to the Second World War, and it was never designed with the capacity to carry these levels of demand or generation.
This makes site selection extremely challenging, and many sites are unsuitable on this basis alone. Grid upgrades are slow, expensive, and typically require government cooperation - the rate of development required simply could not practicably be met.
It is expected that grid-independent infrastructure will eventually be built on or adjacent to data center sites, likely encompassing dedicated power generation to meet the facility's own demand and to reduce reliance on the grid. Data centers have a stable, continuous demand profile - they draw full power load 24 hours a day - so the eventual solution is unlikely to be entirely renewable, since intermittent sources such as solar and wind cannot guarantee that round-the-clock baseload on their own.
The most promising solution appears to be Small Modular Reactors (SMRs), sometimes referred to as nuclear microreactors; related compact reactor designs have been tested and proven in applications such as submarines for decades. Versions scaled for the data center application are still anticipated to be around 6-8 years away from becoming commonplace, however.
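As a rough illustration of what such on-site generation might involve, the sketch below sizes a hypothetical SMR installation for a single AI campus. Both the 80 MW-per-module rating and the N+1 spare-module rule are assumptions chosen for illustration (announced SMR designs span tens to a few hundred megawatts), not figures from this article.

```python
import math

# Hypothetical sizing of on-site SMR generation for one AI campus.
# ASSUMPTIONS (illustrative only, not from the article): a 300 MW
# facility load, 80 MW per SMR module, and one spare module (N+1)
# so a single reactor can be offline without interrupting the
# continuous 24-hour load the facility requires.

FACILITY_LOAD_MW = 300
MODULE_RATING_MW = 80

base_modules = math.ceil(FACILITY_LOAD_MW / MODULE_RATING_MW)  # 4 modules
total_modules = base_modules + 1                               # N+1 -> 5

print(f"Modules to carry the load: {base_modules}")
print(f"With one spare (N+1):      {total_modules}")
print(f"Installed capacity:        {total_modules * MODULE_RATING_MW} MW")
```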
Solutions?
In short, not many. Currently, efforts are being directed into enabling as many existing sites as practicable. Industry is unlikely to be willing to wait any longer than necessary to develop sites - approximate figures suggest that downtime costs the average data center around $7,900 per minute. Delaying a facility's commissioning could therefore easily become a costly exercise.
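Taking that $7,900-per-minute figure at face value, a quick sketch of what it compounds to:

```python
# What the quoted $7,900-per-minute downtime figure compounds to,
# taken at face value.
COST_PER_MINUTE = 7_900  # USD

per_hour = COST_PER_MINUTE * 60   # $474,000
per_day = per_hour * 24           # ~$11.4 million
per_month = per_day * 30          # ~$341 million

print(f"Per hour:  ${per_hour:,}")
print(f"Per day:   ${per_day:,}")
print(f"Per month: ${per_month:,}")
```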
While grid and generation upgrades will be necessary in the long term (and will therefore continue to be worked on proactively), attention will turn to quicker wins in the meantime - efficiency in the chipsets and compute architecture itself, and in cooling, insulation and building design.
My two takes: consider alternative designs for data centers; and consider decentralising the manufacturing of other industries, freeing that space and capacity for data center operators. Both articles on those solutions are below.
TH
Tunnel Vision: A novel argument for a better subterranean data center design
Microsoft unveiled an ambitious project in 2015 - Project Natick. Simply put, the idea was to build an unmanned submarine full of servers, drop it into the sea just off the coast of California, and connect it back to land via a large underwater cable.
Divide & Conquer: Decentralized manufacturing could be the saviour for both employment and commercial real estate
Have we reached peak manufacturing efficiency? If we're brutally honest, there have only ever been a handful of true leaps in progress: the industrial revolution and the production line; industrialised, automated manufacturing methods; and more recently, robotics and AI integration.