Tunnel Vision: A novel argument for a better subterranean data center design
Why a refined subterranean solution could be better than the submerged attempts - plus the fastest and cheapest way of getting them rolled (or bored) out.
Microsoft unveiled an ambitious project in 2015 - Project Natick. Simply put, the idea was to make an unmanned submarine full of servers, drop it into the sea just off the coast of California, and connect back to land via a big underwater cable.
It was a resounding success, and a second phase of the project was rolled out in 2018. It was pulled in 2020, but other companies (Subsea Cloud and Highlander) have continued in a similar vein since, and are still attempting to push the approach into the wider mainstream data center market.
But what’s the point of submerging a load of servers?
Well, it’s quite simple - heat. Or more accurately, cooling.
Servers get hot. Walk into the server room at your office, and you’ll notice it feels much warmer than the rest of the office. A decent-sized office might have up to 10 servers. In comparison, commercial data centers can have thousands, which means the ambient air temperature can get very warm, very fast. The hot air is then sucked back in by the servers, causing them to overheat and fail, so they need to be actively cooled.
Ambient air cooling with large fans and air conditioning is possible, but inefficient. It’s a solution that uses significant amounts of energy, costs a fortune, and is still not a great way of getting heat out of the ambient air - it largely relies on the hot exhaust air mixing with the cooler air being pumped in, levelling out to a lower, stable temperature that doesn’t throttle the servers’ performance. Because hot and cool air differ in density, that mixing is simple only in theory; in practice it’s a very inefficient method.
The better solution is water cooling. Water has about 25 times the thermal conductivity of air, and a much greater specific heat capacity too, meaning it can absorb much more heat energy before its temperature actually rises; as such, it’s far more stable and uniform in its heat distribution. It’s been a common solution in data center cooling for some time, but it often uses vast amounts of water, and electricity to pump it around the server racks. It’s also an expensive system to install, requiring much more hardware and infrastructure than air cooling. And there’s the risk of water in close proximity to expensive, sensitive electronics, so leak prevention and mitigation are critical.
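To put rough numbers on that specific heat advantage, here’s a minimal back-of-the-envelope sketch - the 100 kW heat load and 10 K temperature rise are assumed, illustrative figures, not from any particular facility:

```python
# Coolant mass flow needed to carry away a heat load, from Q = m_dot * c_p * dT.
# The heat load and allowed temperature rise are illustrative assumptions.

HEAT_LOAD_W = 100_000   # assumed 100 kW row of racks
DELTA_T_K = 10.0        # allowed coolant temperature rise

C_P_AIR = 1_005.0       # specific heat of air, J/(kg*K)
C_P_WATER = 4_186.0     # specific heat of water, J/(kg*K)

def mass_flow_kg_s(load_w: float, c_p: float, delta_t: float) -> float:
    """Mass flow rate (kg/s) needed to absorb load_w with a delta_t rise."""
    return load_w / (c_p * delta_t)

air = mass_flow_kg_s(HEAT_LOAD_W, C_P_AIR, DELTA_T_K)
water = mass_flow_kg_s(HEAT_LOAD_W, C_P_WATER, DELTA_T_K)

print(f"Air:   {air:.1f} kg/s (~{air / 1.2:.0f} m^3/s of airflow)")
print(f"Water: {water:.1f} kg/s (~{water:.1f} L/s)")
# Air:   10.0 kg/s (~8 m^3/s of airflow)
# Water: 2.4 kg/s (~2.4 L/s) - a garden-hose-scale flow doing the same job.
```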
The genius behind submersible data centers is to leverage those amazing cooling properties of water with minimal pumping. Surrounding the sealed pod with water stabilises operating temperatures even further. And as the pod is unmanned and sealed, the internals can be filled with pure nitrogen gas, mitigating humidity and water-vapour issues and keeping the circulating atmosphere dry.
In terms of construction, it’s also potentially a much cheaper option than a conventional land-based data center, owing to the lesser need for vast and complex HVAC systems.
Although this seems like a perfect solution, there are a few fundamental flaws in its design - simply by virtue of it being underwater:
It’s not at all accessible for the most part. The chamber is completely sealed, with no airlock system - so carrying out repairs or maintenance requires the pod to be brought back up to surface level; the nitrogen exchanged for air; operatives to carry out the maintenance; and then finally, resealing, reintroducing the low-pressure nitrogen, and re-submerging. Every time this is carried out, the risk of seal failure is amplified.
It’s not a universal solution. Trials of this technology have taken place in Finland, Scotland, and other colder climates, where the sea temperature is typically cooler year round. It’s not viable for warmer, tropical seas - trying to cool hot servers with warm water is never going to be effective. This could be countered by lowering the pod to a greater depth, but that brings additional fabrication challenges in withstanding the colossal pressures the deep sea puts on the pod (a rough sense of those pressures is sketched just after this list).
It’s only internally secure - the undersea cable risks being damaged, and the pod is susceptible to attack or theft to a far higher degree than a land-based solution, owing to the expanse of the sea. Once the cable is cut, the pod could easily be towed away long before anyone could react to its theft.
It has no real margin for failure - if any part of the process or structure fails, the entire investment is wasted. Once excess water and humidity find their way into the pod, there is little that can be done to save the equipment. Redundancy and backup must remain part of the overall storage architecture to mitigate against this.
There’s the phenomenon of ‘biofouling’ - a build-up of barnacles and other marine growth on the exterior of the pod that degrades the structure’s ability to shed its heat. Left too long, it could easily have the opposite effect to cooling, basically creating an oven in which the servers slowly cook themselves.
And although the sea holds a largely stable temperature, some areas are susceptible to marine currents and heat waves.
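On the depth point above, here is a minimal sketch of hydrostatic gauge pressure, P = ρgh, at a few depths - the depths are my own illustrative examples, not figures from any specific deployment:

```python
# Hydrostatic gauge pressure on a submerged hull: P = rho * g * h.
# Depths are illustrative, not taken from any real deployment.

RHO_SEAWATER = 1_025.0   # kg/m^3, typical seawater density
G = 9.81                 # m/s^2

def gauge_pressure_bar(depth_m: float) -> float:
    """Extra pressure (bar) on the hull at depth_m, over atmospheric."""
    return RHO_SEAWATER * G * depth_m / 100_000.0  # 100 kPa per bar

for depth_m in (10, 100, 500, 1_000):
    print(f"{depth_m:>5} m: ~{gauge_pressure_bar(depth_m):.0f} bar")
# Every ~10 m adds roughly another atmosphere of load, so chasing colder
# water at depth rapidly drives up the hull engineering required.
```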
My novel take is that a solution needs to capitalise on all of the benefits of an underwater data center, but with few of the downsides.
So: what if we consider a subterranean solution instead of a submersible one? There are existing facilities that have started to experiment with this design, but some refinement could improve it.
It should sit only just below ground level - to reap the benefits of an extremely stable temperature (owing to the insulating properties of the rock and its thermal inertia), but never deep enough to be impacted by the geothermal heat left over from planetary accretion and radioactive decay in the mantle. Some existing structures, such as Iron Mountain’s underground data centers, are more than 200 feet below the surface; Bluebird Fiber has an underground facility around 85 feet down in a limestone cavern. Although existing mines and bunkers have been utilised for these purposes, they are arguably deeper than required. As an approximate estimate, every additional kilometre below the surface generally adds around 25-30ºC - which is just additional heat that would have to be mitigated. A shallower structure is more sensible, as it won’t carry the same ongoing power requirements for cooling.
Close to the surface, conditions are perfectly stable, generally mimicking the local annual mean temperature above ground. It’s also perfectly safe for humans to access, with increased security from attack or physical breach.
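As a quick sanity check on the depth arithmetic, here’s a minimal sketch - the surface temperature and gradient figures are assumed, typical values rather than site measurements:

```python
# Approximate undisturbed rock temperature at depth, from the geothermal
# gradient: T(depth) = mean surface temperature + gradient * depth.

SURFACE_TEMP_C = 12.0      # assumed local annual mean, degrees C
GRADIENT_C_PER_KM = 27.5   # typical continental gradient, ~25-30 C/km

def rock_temp_c(depth_m: float) -> float:
    """Estimated undisturbed rock temperature (C) at the given depth."""
    return SURFACE_TEMP_C + GRADIENT_C_PER_KM * (depth_m / 1_000.0)

for depth_m in (25, 60, 200, 1_000):   # ~85 ft, ~200 ft, deeper mines
    print(f"{depth_m:>5} m: ~{rock_temp_c(depth_m):.1f} C")
# At the shallow depths proposed here (tens of metres), the gradient adds
# only a degree or two - the stability comes almost entirely from the
# rock's thermal inertia, not from going deeper.
```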
The water cooling can be utilised by effectively creating short cooling loops via a network of underground twin-wall tunnels. Servers are housed in these tunnel runs, with the gap between the two walls filled with water (continuously pumped through) to act as a cooling jacket around the entire internal tunnel structure. This still mimics the effect of being submerged, thanks to the large surface area in contact with water. If greater thermal stability or efficiency is required, simply increase the gap between the inner and outer walls.
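To illustrate how that jacket scales, here’s a minimal geometry sketch - the bore diameter and gap width are hypothetical placeholders, purely to show the relationship:

```python
# Twin-wall tunnel jacket: wetted wall area and water volume per metre
# of tunnel. Dimensions below are hypothetical placeholders.

import math

INNER_DIAMETER_M = 4.0   # assumed inner (server) tunnel bore
GAP_M = 0.5              # assumed annular water gap

def wetted_area_per_m(inner_d: float) -> float:
    """Inner-wall surface area (m^2) in contact with water, per metre."""
    return math.pi * inner_d

def annulus_volume_per_m(inner_d: float, gap: float) -> float:
    """Water volume (m^3) held in the jacket, per metre of tunnel."""
    outer_d = inner_d + 2 * gap
    return math.pi / 4 * (outer_d**2 - inner_d**2)

area = wetted_area_per_m(INNER_DIAMETER_M)              # ~12.6 m^2 / m
volume = annulus_volume_per_m(INNER_DIAMETER_M, GAP_M)  # ~7.1 m^3 / m

print(f"Wetted area:  {area:.1f} m^2 per metre of tunnel")
print(f"Jacket water: {volume:.1f} m^3 per metre of tunnel")
# Widening GAP_M adds thermal mass (more water, slower temperature swings)
# without touching the server tunnel itself - the refinement noted above.
```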

Power supplies and other large heat contributors can be kept outside the twin-wall tunnel completely and serviced by their own cooling solutions - these could be above ground, or even directly connected to existing power stations, substations or transformers.
The approach lends itself to coastal installations, or installations in close proximity to the sea, but could work just as well for inland structures near bodies of water - moated facilities could provide the water required, as well as an additional layer of security and protection. This solution does not require excessive depths of water, just a sufficient volume to support the cooling requirements of the installation.
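How much volume is ‘sufficient’? A crude worst-case sketch, assuming the body of water absorbs the full load with no heat rejection to air or ground at all - the load and volumes are illustrative assumptions:

```python
# Worst-case temperature rise per day of a body of water absorbing a
# facility's full heat load with zero heat rejection. Figures assumed.

RHO_WATER = 1_000.0      # kg/m^3
C_P_WATER = 4_186.0      # J/(kg*K)
SECONDS_PER_DAY = 86_400

def daily_rise_k(load_w: float, volume_m3: float) -> float:
    """Temperature rise (K/day) of a sealed water volume under load_w."""
    mass_kg = RHO_WATER * volume_m3
    return load_w * SECONDS_PER_DAY / (mass_kg * C_P_WATER)

LOAD_W = 1_000_000  # assumed 1 MW facility
for volume_m3 in (1_000, 10_000, 100_000):
    print(f"{volume_m3:>7} m^3: ~{daily_rise_k(LOAD_W, volume_m3):.1f} K/day")
# In practice the moat continuously sheds heat to the air and ground, so
# the real rise is far lower - but it's volume, not depth, that sets the
# size of the thermal buffer.
```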
Construction is also relatively sustainable, effectively borrowing from tunnel construction techniques - tunnel boring is a well-practised technology, and installations can last hundreds of years without significant overhaul. Deeper structures would require excavation and blasting, which makes expanding an existing structure much more difficult; boring shallow tunnels is a significantly cheaper and safer alternative. Existing underground structures could still be utilised, but it’s not anticipated that their full depth would be required.
It’s accepted that this solution may not be the most suitable in areas of frequent tectonic activity - but other solutions such as those currently used above ground and submerged could easily serve those areas.
It also preserves the open space above ground for agriculture or other developments that cannot be housed underground, owing to their need for sunlight or other natural resources.
TH