Artificial Intelligence & Data Centers
By: Gautam Khosla
If you had walked into a typical data center ten years ago, you would have recognized it immediately by its sound. It wasn’t the hum of computation; it was the roar of fans. Thousands of them. We spent decades perfecting the art of pushing cold air over hot silicon, achieving a fragile equilibrium between compute density and thermodynamics. That era is now ending, and the transition carries the force of a phase change.
Demand for digital infrastructure long predates the current moment: cloud computing, streaming, e-commerce, and enterprise digitalization all drove it. Artificial Intelligence (AI) did not invent the demand for data centers. What AI changed is the intensity of that demand, the technical configuration of the buildings, and the degree to which development now collides with energy systems and public policy. AI demand is also increasingly a real estate and infrastructure story, measured in megawatts, interconnection queues, chilled-water loops, and land assembled near transmission. Because data centers scale faster than the energy system, the winners over the next decade won’t be defined by who wants to build, but by who can secure power, cool densely, and execute reliably.
AI is a Load Problem
At the global level, data centers are already a meaningful electricity consumer. The International Energy Agency estimates data centers used about 415 TWh in 2024 (1.5% of global electricity), and projects consumption could roughly double by 2030 to 945 TWh in its base case (IEA, 2025). In the United States, the story is even sharper. The Department of Energy (via Lawrence Berkeley National Laboratory) estimates data centers consumed about 4.4% of total U.S. electricity in 2023, a share projected to rise to between 6.7% and 12% by 2028 as AI workloads expand (DOE/LBNL, 2024). Electricity isn’t just an operating cost line; it is often the gating factor for development. AI is pushing campuses from “tens of megawatts” to “hundreds of megawatts,” sometimes approaching utility-scale loads at a single site. When a tenant’s requirement is 100–500 MW, the limiting reagent is not concrete, it’s capacity on the grid and the timeline to deliver it.
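As a rough sanity check on these projections, the IEA endpoints imply a steep compound growth rate (a sketch using only the figures cited above; the smooth year-over-year path is an assumption, not an IEA claim):

```python
# Implied compound annual growth rate (CAGR) if global data center
# electricity use grows from 415 TWh (2024) to 945 TWh (2030 base case).
# Assumes smooth growth between the two endpoints.

start_twh, end_twh = 415, 945
years = 2030 - 2024

cagr = (end_twh / start_twh) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")          # roughly 15% per year

growth_factor = (1 + cagr) ** years
print(f"Growth 2024->2030: {growth_factor:.2f}x")  # ~2.28x (more than doubling)
```

Roughly 15% per year, sustained for six years, is the kind of load growth utilities rarely plan for, which is why interconnection timelines have become the binding constraint.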
Power Density
AI compute is physically different. Modern accelerators are power-hungry, and cluster architecture multiplies the effect. For example, NVIDIA’s H100 SXM lists a TDP (thermal design power) of up to 700 W per GPU (NVIDIA, n.d.). Put eight GPUs in a server, multiply across rows, then add networking (high-speed fabrics) and storage, and you start designing around heat flux, not just square footage. This is why power density has become the headline metric. According to Uptime Institute, a global data center advisory organization best known for developing the Tier Standard that classifies data center reliability, densified IT for AI is placing new demands on data center design and operations, as historical assumptions about cooling and power distribution are increasingly challenged (Uptime Institute, 2024). In the era of AI, the data center is less like an office building filled with servers and more like an industrial facility with extreme electrical and thermal loads, where mechanical, electrical, and plumbing (MEP) choices determine whether the asset can even host the next generation of compute.
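The density arithmetic is easy to make concrete (a sketch; only the 700 W TDP comes from NVIDIA’s published spec, while the servers-per-rack and overhead figures are illustrative assumptions, not vendor specifications):

```python
# Back-of-envelope power for one AI training rack.
# Assumptions (illustrative): 8 GPUs per server, 4 servers per rack,
# ~35% overhead for CPUs, networking fabric, and storage.

GPU_TDP_W = 700          # NVIDIA H100 SXM, per published spec
GPUS_PER_SERVER = 8      # assumption: typical 8-GPU server
SERVERS_PER_RACK = 4     # assumption
OVERHEAD = 0.35          # assumption: CPU/fabric/storage share

gpu_power_kw = GPU_TDP_W * GPUS_PER_SERVER * SERVERS_PER_RACK / 1000
rack_power_kw = gpu_power_kw * (1 + OVERHEAD)

print(f"GPU power per rack:  {gpu_power_kw:.1f} kW")   # 22.4 kW
print(f"Total rack power:   ~{rack_power_kw:.0f} kW")  # ~30 kW

# A legacy enterprise rack often budgeted 5-10 kW. Nearly every watt
# drawn here leaves the rack as heat that the building must remove.
```

Even under these conservative assumptions, a single AI rack draws several times what a traditional design anticipated, which is the heat-flux problem the next section turns to.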
Data Center Cooling
Traditional enterprise data centers were built around air cooling and raised floors. That model worked when rack densities were modest. However, AI changes the thermal math: higher rack power means more heat, and moving that heat with air alone becomes inefficient (and eventually impractical). This is why liquid cooling is moving from niche HPC deployments into the mainstream. ASHRAE’s thermal guidelines explicitly address high-density environments and liquid cooling classes, an acknowledgement that standard operating envelopes are being stretched by modern equipment (ASHRAE, 2021). Academic and engineering literature also points to direct liquid cooling, which enables higher cooling temperatures, reduces cooling energy, and improves waste-heat recovery potential (Stahlhut et al., 2024). There’s a second-order effect here that real estate professionals should care about: cooling strategy changes the building. Liquid cooling can mean different piping layouts, different redundancy, different commissioning risk, and new vendor ecosystems. It also raises the bar for operations, because a failure mode in a high-density AI hall is not “hot aisle discomfort”; it can be sudden throttling and downtime, with direct consequences for owners and tenants alike.
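The “thermal math” above can be sketched with the basic heat-transfer relation P = ṁ·cp·ΔT (a sketch; the rack power and air-side temperature rise are illustrative assumptions, while the air properties are standard values):

```python
# Airflow needed to remove rack heat with air alone:
#   P = m_dot * c_p * dT   =>   m_dot = P / (c_p * dT)
# Assumptions (illustrative): 30 kW rack, 15 K supply-to-return rise.

P_W = 30_000          # rack heat load, watts (assumption)
CP_AIR = 1005         # specific heat of air, J/(kg*K)
DT_K = 15             # air temperature rise across the rack (assumption)
RHO_AIR = 1.2         # air density, kg/m^3

m_dot = P_W / (CP_AIR * DT_K)       # mass flow, kg/s
vol_flow = m_dot / RHO_AIR          # volumetric flow, m^3/s
cfm = vol_flow * 2118.88            # convert to cubic feet per minute

print(f"Mass flow:  {m_dot:.2f} kg/s")
print(f"Airflow:   ~{cfm:.0f} CFM")   # roughly 3,500 CFM for one rack

# Water's volumetric heat capacity is ~3,500x that of air, which is
# why direct liquid cooling becomes attractive at these densities.
```

Thousands of CFM per rack, multiplied across a hall, is where air delivery stops scaling gracefully and liquid loops start earning their complexity.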
How Efficiency Framing is Changing
For years, the industry talked about Power Usage Effectiveness (PUE) as the efficiency yardstick. ENERGY STAR describes PUE as a measure of infrastructure efficiency: how much total facility energy is required per unit of energy delivered to IT equipment (ENERGY STAR, n.d.).
AI complicates the obsession with PUE in two ways:
IT load is exploding. Even if infrastructure gets more efficient, total site demand can still rise sharply because the compute intensity per workload is higher.
“Useful work” matters. A slightly worse PUE might be acceptable if the facility delivers substantially more model training/inference throughput per megawatt, or if it can run higher-density racks that a competitor cannot.
DOE’s best-practices guidance still emphasizes driving efficiency through design and operations, including strategies that reduce cooling and power losses (DOE/NREL/FEMP, 2024). However, the investment decision is increasingly about delivering compute per constrained megawatt, which turns the conversation from “green building branding” into “how do we win interconnection capacity and monetize it?”
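The “compute per constrained megawatt” framing can be sketched numerically (a sketch; the PUE values and throughput-per-megawatt figures below are hypothetical, chosen only to illustrate the tradeoff, not drawn from any cited source):

```python
# Under a fixed grid interconnection, IT power = grid capacity / PUE.
# A facility with a worse PUE can still deliver more useful work if
# it supports denser, more productive racks. All numbers hypothetical.

GRID_MW = 100  # interconnection capacity, fixed for both sites

def useful_work(pue: float, perf_per_it_mw: float) -> float:
    """Throughput delivered under the shared grid constraint."""
    it_mw = GRID_MW / pue
    return it_mw * perf_per_it_mw

# Site A: excellent PUE, air-cooled, lower-density racks.
a = useful_work(pue=1.2, perf_per_it_mw=1.0)
# Site B: worse PUE, liquid-cooled, denser and more productive racks.
b = useful_work(pue=1.4, perf_per_it_mw=1.3)

print(f"Site A useful work: {a:.1f}")   # 83.3
print(f"Site B useful work: {b:.1f}")   # 92.9
# Despite the worse PUE, Site B monetizes the interconnection better.
```

This is why PUE alone no longer decides the underwriting: the denominator that matters is the constrained megawatt, not the facility overhead ratio.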
Portfolio of Specialized Assets
One of the easiest mistakes real estate professionals make is assuming “data center” is a single product type. AI is pushing segmentation. Training-heavy campuses feature huge power blocks and dense GPU clusters, often concentrated in fewer locations because of scale economics and power availability. Inference and latency-sensitive sites sit closer to population centers or network aggregation points; they sometimes have smaller footprints but still run at high density. Hybrid facilities mix cloud, AI inference, and some training, and are built for flexibility as workloads shift.
The IEA’s modeling shows this diversity, noting growth across hyperscale, colocation, and enterprise data centers, with accelerated servers accounting for a large share of the increase in electricity consumption (IEA, 2025). From a commercial real estate perspective, this segmentation matters because it changes tenant credit profiles, lease structures, capex responsibilities, and risk. A “generic shell with power” may not be enough in an AI-dense world, especially if tenants demand liquid cooling readiness or very high power availability that requires specialized infrastructure.
Impact on CRE Underwriting
If you’re underwriting AI-era data center real estate, the questions that used to be “ops details” are now value drivers. For example, power is the new rent: the ability to secure, deliver, and expand MW capacity can define market dominance, and in some markets (e.g., Dallas or Columbus), power availability will be scarcer than suitable land. Capex intensity also rises, and shifts: higher densities and more complex cooling push more capital into MEP, commissioning, and redundancy, which can mean higher replacement reserves, more specialized O&M, and different risk around build-to-suit execution. Finally, sustainability becomes operational. It’s not just emissions reporting: water strategy, heat rejection, and efficiency can affect whether a site is permitted, financeable, and scalable (DOE/LBNL, 2024).
How Data Centers Are Being Remade
The AI wave is turning data centers into the most important new infrastructure asset class of this decade. Not because they’re trendy, but because they sit at the intersection of compute demand and physical constraint. The sector’s next chapter is about power density, cooling technology, and grid integration, which are disciplines that look a lot like energy and industrial development.
If the last era rewarded whoever could lease space and keep PUE respectable, the next era rewards whoever can deliver megawatts on time, cool high-density compute reliably, and manage community and infrastructure tradeoffs without breaking execution. AI isn’t just changing what happens inside data centers. It’s changing where they get built, how they’re financed, and what it means for a project to be “prime” real estate.
Sources:
ASHRAE. (2021). Thermal Guidelines for Data Processing Environments (5th ed.).
DOE (U.S. Department of Energy). (2024, December 20). DOE release summarizing 2024 Report on U.S. Data Center Energy Use (LBNL).
DOE/NREL/FEMP. (2024). Best Practices Guide for Energy-Efficient Data Center Design.
ENERGY STAR. (n.d.). Portfolio Manager Help: Data Center PUE definition.
IEA (International Energy Agency). (2025). Energy and AI: Energy demand from data centres.
NVIDIA. (n.d.). NVIDIA H100 product specifications.
Stahlhut, M., et al. (2024). “Data Centers With Direct Liquid‐Cooled Servers …” Energy Science & Engineering.
Uptime Institute. (2024). Global Data Center Survey 2024.

