Forget Square Footage—It's all about the juice, ice, and fiber
Data center leases bend conventional real estate rules: a quick overview for real estate investors looking to dip a toe into one of the fastest-growing sectors in commercial property
It's no secret that real estate markets that have been relatively stable for years are starting to show signs of weakness and turmoil, as the modern economy adapts to fewer people working in offices, streamlined manufacturing facilities, increasing pressure on sustainability and legacy industries, and the myriad knock-on effects that follow from these shifts.
There's one sector of property showing increasing promise despite its challenges: data centers. These facilities were once invested in solely by tech companies and cloud providers; now venture capital firms and REITs are seeking an increasing share of this square footage as a key part of their portfolios.
I'll state the obvious, though: a data centre is not a factory. It is not an office, and it's quite unlike any other form of commercial real estate, carrying more complexity, demand, and competition than most other asset classes. It is specialist in many regards, though not as specialist as other infrastructure installations such as power distribution, renewables, or oil and gas facilities.
So, for traditional investors looking to dip a toe into this new world, the market appears to have stabilised sufficiently to give a broad overview of the types of deals and negotiations that typically take place for new and existing sites. These are a different beast to conventional commercial and industrial property, however, and there are certain considerations and differences that can hugely affect a property's viability and value.
Broadly speaking, three main threads run through every aspect of these property deals: power distribution, cooling capacity, and connectivity.
The money, the makers and the mechanics
There is currently unprecedented demand for good data centre sites. These facilities were traditionally funded by the major tech companies (the group once referred to as FAANG: Facebook, Amazon, Apple, Netflix, Google). They would secure the sites, construct the data centers, and operate them as the freeholder thereafter.
After the first few operational years in the early 2000s, there was a marked shift towards venture capital firms funding construction of these properties, which are then leased to the tech companies above, to other tech companies (Oracle, Salesforce, etc.), or increasingly to specialist data centre operators (Equinix, Iron Mountain, etc.). Many sites would also require investment to increase infrastructure capacity, and even legacy sites may now need further capacity upgrades, as AI applications demand far more power and cooling.
This enables the larger tech companies to deploy more of their capital in the increasingly costly fit-outs of these properties: the server fit-outs, cooling solutions, and networking infrastructure will be at the bleeding edge of current technology, in order to provide a little future-proofing to their investment.
Leases appear to be typically offered for 15-20 years. Rent review clauses were quite variable on older leases, but owing to the current demand for these sites, more frequent review clauses are expected to become commonplace.
Additional conditional review clauses can also be implemented in the event of a phased infrastructure investment plan that may not be fully operational at the time the lease commences.
Reutilization at renewal
A recent report on the churn rate of legacy data centres under lease for the past 15 to 20 years has shown that, on average, around 86% of sites have been re-utilised at the end of an existing term for a similar use.
This is predominantly due to a site's interconnectivity with other infrastructure and other data centres - a factor that is of more critical importance to overall operations than the building and hardware themselves.
Again, on average, rents appear to have increased by up to 22% on renewal leases. However, it should be recognised that sites with greater power capacity (even if that capacity is not yet realised or operational) are now starting to attract much more significant premiums, owing to the low number of sites suitable for AI hyperscale data centers rather than traditional compute workloads.
The general consensus within the industry throughout the late 2000s was that rents would decrease by around 15% upon renewal, as it was expected that these properties would have more limited use and might not be reutilised at the end of their existing terms. Legacy sites were once thought destined for obsolescence in favour of more modern infrastructure, but they do appear to have an intrinsic benefit where lower latency is required for certain applications - those targets are more achievable when served by local facilities than by forcing more traffic through a centralised hyperscale solution.
These sites could be utilised for modern applications, but this will likely involve end users placing more reliance on on-device inference - using data stored locally on devices to inform responses to queries, with less need to route through data centers and cloud facilities.
It should also be expected that the premiums such sites currently attract will inevitably soften as the overall network of data center infrastructure spreads into a more even and uniform distribution, especially if inference applications increase in use.
A concept you'll be used to... Location is everything, but not necessarily for the same reasons.
Traditional real estate is dependent on location; most commonly for desirable areas with strong demand for rental, good transport connections, local prosperity, centralised locations, distribution networks, or otherwise.
Whilst arguably these considerations are factors in the context of data center sites, there are really four parts of the location in particular that have significant sway on the value and the variance of demand a certain site may receive.
1. Ashburn, Virginia
Likely not the one you thought would be key, but Ashburn, Virginia, is currently the hot spot for US data center development. This was one of the first areas to see significant infrastructure investment in the early 2000s. Currently, it's estimated that around 70% of all internet traffic in the US passes through data centers located here.
Significant investment continues to be made in this area, and as a result, sites here trade at a premium far in excess of any other sites in the country.
2. Power - Grid Infrastructure, Distribution Networks and Generation
There is some variance in quoted figures for data centre power requirements, but loosely speaking:
It is thought that a typical compute data centre uses an average of 36MW of power, easily stretching up to 60MW.
By comparison, an AI hyperscale data centre can use anywhere between 100 and 300MW for an equivalently sized facility.
To put this into normalised terms, 100MW is enough to power around 100,000 homes in any one region.
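The gap between these figures can be made concrete with a quick back-of-envelope calculation. The sketch below uses the MW figures quoted above; the assumed ~1kW average continuous household load is simply the figure implied by the "100MW powers around 100,000 homes" rule of thumb, not an exact utility statistic.

```python
# Back-of-envelope comparison of data centre power draw against household use.
# Assumption: ~1 kW average continuous load per home, implied by the
# "100 MW powers ~100,000 homes" comparison quoted above.
AVG_HOME_LOAD_KW = 1.0

def homes_equivalent(facility_mw: float) -> int:
    """How many average homes a facility's continuous draw could supply."""
    return int(facility_mw * 1_000 / AVG_HOME_LOAD_KW)

for label, mw in [("typical compute", 36), ("compute, high end", 60),
                  ("AI hyperscale, low end", 100), ("AI hyperscale, high end", 300)]:
    print(f"{label}: {mw} MW = roughly {homes_equivalent(mw):,} homes")
```

Even at the low end, an AI hyperscale facility draws the equivalent of a sizeable city's housing stock, which is why grid capacity dominates site selection.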
Therefore, the infrastructure required to run these facilities is significant, and as a result the number of areas that currently have the ready capacity, distribution, and grid infrastructure to serve them is relatively small. Those areas that do possess such capacity and distribution will carry a significant premium over the rest.
The power requirement can be partially mitigated by the cooling methodology adopted within the facility. However, whether liquid or air-cooled solutions are chosen, either will contribute significantly to the facility's draw. For context, most facilities typically employ a hybrid system combining liquid and air-cooled solutions.
3. Cooling - Water, Geology, Altitude and Climate
A widely written-about challenge is implementing cooling within modern data center developments, as these facilities have significant requirements to maintain a stable temperature. The hardware and the infrastructure required to run it are renowned for producing significant amounts of heat, all of which needs to be captured, cooled, and re-circulated in liquid or gaseous form as a continuous loop while these facilities operate: 24 hours a day, 7 days a week, 365 days a year.
The cooling system can simply be an extension of the power requirement, especially with the most basic air-cooled solutions. More commonly, however, capacity tends to be met by leveraging the data center site itself. Access and proximity to bodies of water - open water in rivers or lakes, sea/ocean water, or subterranean aquifers - can form part of the site selection criteria. Likewise, sites in colder climates will obviously have less need for ancillary cooling than those in warmer temperate climates. If you were selecting for cooling ability alone, you would favour Alaska over Arizona.
A lesser consideration - but one nonetheless - is a site's altitude. At higher altitudes above sea level, the ambient air may be considerably thinner than elsewhere in the country. Climate and altitude also determine the natural humidity of the air, which can have a huge impact on its suitability for air-cooled solutions within a data centre. In subtropical and humid environments in particular, intake air usually must have a large amount of moisture extracted before it can be used for air cooling, to avoid condensation build-up around the hardware within the facility.
A process such as that inevitably places additional strain on... you guessed it... power requirements.
This is just one example of the co-dependence of many of these factors, which are currently creating significant challenges for suitable site acquisition for major companies.
4. Connectivity - Latency, Inference, Transfer Rates and Compatibility
The argument for connectivity remains a continuous battle. The critical goal is to reduce latency as far as practicable to win the business of specific clients who demand a premium service: for example, financial institutions executing stock market trades, defence contractors, and emergency service communication channels, among other core operators within modern domestic society.
As mentioned previously, latency pressure can be partially relieved by increasing inference at the user-device level, easing reliance on the data centre infrastructure.
Thereafter, within the data center facility, transfer rates and compatibility are the two key factors, though it should be recognised that these are less an issue for the landlord or investor and more an issue for the tenant or data center operator. Most modern installations are fibre-optic based, and switch architecture is regularly updated as part of operators' ongoing maintenance and refinement of data center systems.
On a more macro level, insofar as the investor is concerned, connectivity pertains more to a facility's proximity to other facilities run by the same operator, which can make a certain site a more valuable addition for one operator than for another. As operators roll out networks with a more uniform distribution across a country or continent, this will become a lesser factor. In time, the sector may develop practices more akin to those of mobile and cell operators, sharing physical facility and infrastructure space for their own equipment.
Connectivity is arguably one of the lesser considerations for the time being, and is relatively easily upgraded in comparison to some of the other factors. However, it can remain a roadblock on some sites that would otherwise be regarded as particularly suitable. As wireless and satellite communication develops further, sites deemed unacceptable on connectivity grounds alone may soon become a more economic proposition for investors, so keeping a finger on the pulse of developments in that space would be a wise move.
Location sorted? Onto the Lease Structure
It is difficult to give a realistic broad-brush approach when sites operating across so many jurisdictions will ultimately have huge variance in contextual factors. However, some first-order principles can form the basis for leases. These are not designed to be exhaustive, but they serve as a good starting point for illustrating what parallels or differences may be drawn against the deals conventional real estate investors are more accustomed to.
Lease terms will typically be long, 15 to 20 years, to enable a realistic return on investment to be realised. As mentioned above, rent reviews throughout that term will likely be structured at relatively regular intervals (every 3-5 years, to market rate or conditional on an operational metric) in the current market. This may adjust as more sites become operational.
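To show how stepped reviews compound over such a long term, here's a minimal sketch. The 22% per-review uplift is purely an illustrative assumption borrowed from the renewal figures mentioned earlier; real reviews would be negotiated to market rate or tied to operational metrics.

```python
# Annual rent over a 15-year lease with upward reviews every 5 years.
# The 22% uplift per review is an illustrative assumption, not a market quote.
def rent_schedule(initial_rent: float, term_years: int = 15,
                  review_every: int = 5, uplift: float = 0.22) -> list[float]:
    """Rent payable in each year of the term, stepped at each review date."""
    schedule, rent = [], initial_rent
    for year in range(term_years):
        if year > 0 and year % review_every == 0:
            rent *= 1 + uplift  # reviews land at years 5 and 10 on defaults
        schedule.append(round(rent, 2))
    return schedule

print(rent_schedule(1_000_000))
```

On these assumptions, a facility let at 1,000,000 a year would be paying roughly 1.49x the initial rent by the final third of the term, which is why review frequency is such a heavily negotiated point.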
Of all the above factors, power capacity and grid distribution infrastructure will likely remain among the most important going forward. Even if capacity is not currently required, it almost certainly will be in the near future. Sufficient funds should be planned to continuously upgrade infrastructure in the locality as required, and such upgrades should be written into the lease terms to enable increased rates to be charged upon completion or phased rollout.
The investment in infrastructure will probably represent the largest cost for the investor, and the maintenance of this infrastructure is therefore critical. Provisions should be expressly written into leases setting out the roles and responsibilities for such maintenance.
An unconventional clause, but one worth considering in light of expansion and connectivity: trigger clauses drafted to enact increased rates should the operator open a number of sites in close proximity to a given facility. Long term, such clustering could soften the value of, or demand for, the investor's facility, as competitors will not want to operate a sole facility in the middle of another operator's dominant area. The resulting reduction in market demand would push lease renewal prices down, and contractual provisions may be drafted to recognise this.
This is an exciting sector that will soon reach the level of maturity the rest of industrial property has already undergone. Unlike conventional infrastructure arrangements, which typically serve a more specialised market, the growing prevalence of data centres as community infrastructure assets is likely to bring a more standardised deal structure, more akin to traditional office, retail, and manufacturing space.
Until then, investors should be prepared for opportunities containing a range of bespoke agreements specific to site and context, and be willing to take such decisions on the merits of each individual proposal.
As with many opportunities in business, fortune favours the brave. Those who took the leap in the early 2000s have become some of the most powerful and influential companies in existence. Equivalent opportunities now exist for those who fund and facilitate these businesses.
TH