What is the actual scale of the thermal problem?
For someone who does not live in the data center world day to day: what is the actual scale of the thermal problem you are describing?
A 100-megawatt hyperscale data center converts essentially all of its electrical input into heat. That is the continuous thermal output of a mid-size industrial plant, running 24 hours a day, 365 days a year. Across the global fleet, we are talking about gigawatts of continuous, stable, predictable thermal energy being discarded, while industries a few kilometers away burn natural gas to produce heat in exactly the same temperature ranges. That is not a niche sustainability problem. That is a systemic design gap, and it grows every single time a new campus goes live.
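To make the scale concrete, here is a back-of-envelope calculation using the 100 MW figure from the text (the simplifying assumption, stated above, is that essentially all electrical input becomes heat):

```python
# Back-of-envelope scale check for the thermal output described above.
# Assumption: essentially all electrical input leaves the campus as heat.
power_mw = 100             # continuous load of one hyperscale campus
hours_per_year = 24 * 365  # continuous, year-round operation

annual_heat_gwh = power_mw * hours_per_year / 1000  # MWh -> GWh
print(f"Annual heat from one 100 MW campus: {annual_heat_gwh:.0f} GWh")

# Ten such campuses together discard a full gigawatt, continuously.
fleet_gw = 10 * power_mw / 1000
print(f"Ten campuses: {fleet_gw:.1f} GW continuous thermal output")
```

One campus alone discards roughly 876 GWh of heat per year; the "gigawatts" framing refers to the fleet, not a single site.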
Isn't liquid cooling solving this at the source?
The industry is moving fast on liquid cooling, direct chip cooling, and immersion systems. Isn't this solving the thermal problem at the source?
Liquid cooling solves a heat removal problem inside the equipment. It brings component temperatures down, improves compute density, and reduces the volume of air that needs to move through the hall. All of that is real and valuable progress. But the thermal energy itself does not disappear — it is now in a water loop instead of an airstream. What happens to that water loop is a completely separate architectural decision. In most deployments today, the answer is still: reject it to the atmosphere. The cooling system remains an internal cost center. The heat is still discarded. Liquid cooling improves the efficiency of the disposal. It does not change the fact that disposal is still the strategy. That is the structural gap — and it sits one level above the cooling technology conversation entirely.
What does the solution actually look like?
So if better cooling technology does not resolve this, what does the solution actually look like?
The question shifts from "how do we cool better?" to "what do we do with the heat once we have collected it?" My answer: we need a standardized thermal export interface built into the campus from day one. I call it the Thermal Plug. Just as a data center builds a standardized electrical substation interface and a standardized fiber interconnect point before the first server is installed, it should build a standardized thermal export interface with the same logic — all specified before you know who the downstream heat user will be.
· Defined output temperature band
· Thermal capacity blocks & hydraulic connection standards
· Metering and billing boundaries
· Redundancy class & control interface
As liquid cooling architectures mature — particularly high-temperature loop designs capable of 60–70°C supply temperatures — the export quality improves further. The campus becomes thermally export-ready by default. Downstream partners connect to that interface later, on their own timeline, with their own capital. You are not designing a custom heat reuse project every time. You are building infrastructure optionality into the asset from the first concrete pour.
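The four interface elements above could be captured in a machine-readable specification published before any off-taker is known. This sketch is purely illustrative; the field names, the flange standard, and all numeric values are my assumptions, not a published Thermal Plug standard:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ThermalPlugSpec:
    """Illustrative supply-side thermal export interface (hypothetical fields)."""
    supply_temp_min_c: float   # defined output temperature band (lower bound)
    supply_temp_max_c: float   # defined output temperature band (upper bound)
    capacity_block_mw: float   # thermal capacity offered in standard blocks
    n_blocks: int              # number of blocks available at the interface
    hydraulic_standard: str    # hydraulic connection standard (assumed example)
    metering_point: str        # metering and billing boundary
    redundancy_class: str      # e.g. "N", "N+1"

    @property
    def export_capacity_mw(self) -> float:
        return self.capacity_block_mw * self.n_blocks

# A campus publishes one spec at design time, before any heat user exists:
spec = ThermalPlugSpec(40.0, 60.0, 5.0, 4, "EN 1092-1 PN16", "campus fence", "N+1")
print(spec.export_capacity_mw)  # total exportable thermal capacity in MW
```

The point of the sketch is the sequencing: downstream partners evaluate a fixed, specified interface, not a bespoke engineering study.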
Why district heating is the wrong primary architecture
When most people hear "data center waste heat," the first thing they think of is district heating. Is that where you are pointing?
It is where most people point — and it is exactly the wrong primary architecture. The moment you examine it seriously, the constraints stack up fast. The most fundamental is temporal: a data center produces heat continuously, year-round, while district heating demand peaks in January and drops to near zero in July. Thermal storage at the scale needed to bridge a six-month seasonal gap is neither technically trivial nor economically viable.
The second constraint is temperature. Traditional district heating networks require 70 to 90°C. Most data center loops operate at 40 to 60°C — bridging the gap requires a heat pump, which adds capital cost and additional energy input. Third, the geography rarely aligns: land, grid, and water access optima are frequently not adjacent to dense urban populations. And fourth, district heating networks take years to permit and build, while hyperscalers deploy on 18 to 24-month cycles. Under current deployment logic, these timelines are structurally incompatible as a primary architecture.
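The temperature-gap penalty can be estimated with an idealized Carnot calculation. The 50% Carnot efficiency factor below is a rough rule-of-thumb assumption, not a figure from the text:

```python
def uplift_cop(source_c: float, sink_c: float, carnot_fraction: float = 0.5) -> float:
    """Estimated heating COP for lifting heat from source_c to sink_c.

    Ideal (Carnot) heating COP = T_sink / (T_sink - T_source), in kelvin,
    scaled by a real-world efficiency fraction (0.5 is a rough assumption).
    """
    t_sink = sink_c + 273.15
    t_source = source_c + 273.15
    return carnot_fraction * t_sink / (t_sink - t_source)

# Lifting a 50 degC data center loop to an 80 degC district heating supply:
cop = uplift_cop(50, 80)
print(f"Estimated COP: {cop:.1f}")
print(f"Electrical input per MW of delivered heat: {1 / cop:.2f} MW")
```

Even at a healthy COP, every megawatt of delivered district heat requires additional electricity, which is exactly the capital and energy overhead the answer describes.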
"Do not design heat reuse as a custom project. Design heat export as infrastructure."
Dimitri Wolf
So district heating is off the table entirely?
Not off the table. Correctly framed. In specific geographies where conditions already align — existing dense pipe networks, cold climates with long heating seasons, campus locations with genuine urban proximity — district heating is a valid and proven downstream application. Microsoft's collaboration with Fortum in Finland demonstrates this concretely: covering 40% of district heating demand for 250,000 customers in the Espoo metropolitan area, using an existing 900-kilometer pipe network built over decades. It cannot be replicated generically, and it cannot scale at the speed hyperscalers are deploying. The Thermal Plug standardizes the supply-side interface so that district heating, where conditions are right, can connect to it — exactly like any other downstream application. District heating is an application. It is not the standard.
Is regulation the driver, or does the business case stand alone?
European regulation is now moving in this direction. Is regulation the primary driver, or does the business case stand independently?
Both are real, operating on different timescales. Regulation is a forcing function — it sets a floor, converts optional thinking into contractual design obligation, and gives procurement and legal teams a mandate to act. Germany's EnEfG requires data centers above 300 kilowatts to achieve an Energy Reuse Factor of 10% from July 2026, rising to 20% by July 2028. The EU Energy Efficiency Directive adds a parallel layer across member states. The structural logic applies globally — from water-stressed US markets, to the Middle East near petrochemical complexes, to the Nordics where existing industrial infrastructure accelerates deployment timelines.
But if compliance is the only driver, operators will do the minimum and stop. The business case I find genuinely durable is asset optionality. A campus built to a thermal export standard carries a future revenue stream — bankable under long-term heat purchase agreements — that a non-export-ready campus simply does not have. Industry estimates for retrofit typically run to several million euros per 10 MW of thermal capacity. Designed-in from day one, the same capability costs a fraction of that. The capital cost delta is not linear. It is an order of magnitude difference.
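As a rough illustration of what those ERF thresholds imply for a 100 MW campus (simplified: ERF is treated here as the reused share of total continuous energy input; the exact EnEfG accounting rules differ in detail):

```python
def required_export_mw(campus_mw: float, erf: float) -> float:
    """Continuous thermal export needed to meet a given Energy Reuse Factor,
    under the simplifying assumption that ERF is the reused fraction of the
    campus's total continuous energy input."""
    return campus_mw * erf

# Thresholds from the EnEfG figures cited above, applied to a 100 MW campus:
for year, erf in [(2026, 0.10), (2028, 0.20)]:
    mw = required_export_mw(100, erf)
    print(f"{year}: ERF {erf:.0%} -> {mw:.0f} MW continuous thermal export")
```

Ten to twenty megawatts of continuous export is an industrial-scale heat supply in its own right, which is why the off-taker question matters so much.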
Why did every previous attempt fail?
Every major infrastructure conference for the last decade has had a slide about data center waste heat reuse. Almost none were executed at scale. Why?
The reason those slides were never executed is specific and worth naming directly: they were designed as projects, not as infrastructure. Someone identified a local heat user, negotiated a bilateral supply agreement, and engineered a bespoke system designed entirely around that single counterparty. Then it collapsed — on permitting, on seasonal mismatch, on the partner's business change, on regulatory shifts, on the impossibility of maintaining a one-off system with no standard components and no operational precedent.
The architectural error was designing thermal output around a known downstream user instead of standardizing the interface first — the same error that would have been made if early electrical infrastructure had been engineered to power one specific factory rather than standardized for any load. The Thermal Plug changes the structure: define and build the supply-side standard once. Capital can then evaluate a known, specified interface rather than a bespoke engineering risk. No single dependency. No custom engineering per site. No fragile bilateral exposure.
Who actually absorbs this heat at scale?
Given that district heating is not the scaling architecture, who are the realistic large-scale absorbers of this thermal energy?
The right filter is not "who wants heat" — that list is long and mostly useless at the scale we are discussing. The right filter is: who can absorb large, continuous thermal loads in a modular and interruptible way, at the temperature ranges a data center actually exports, without being critically dependent on that heat for operational survival? Controlled-environment agriculture requires stable growing environments at 15 to 28°C year-round, directly achievable from a liquid cooling loop with no heat pump uplift. Biogas and biomass processing hubs are a strong structural match, addressed in the next question. Food processing utilities, industrial drying clusters, pharmaceutical process utilities, and selected chemical preheating operations all carry meaningful continuous heat demands in the 60 to 90°C range, achievable with modest heat pump uplift.
What does not work as a primary sink is high-temperature core industrial process heat. Steel arc furnaces operate above 1,500°C, cement kilns above 1,400°C, glass melting furnaces above 1,300°C. Data center thermal export is suited for utility loads and preheating stages — not for replacing the hottest core of a heavy industrial process. That distinction is not a footnote. It is the boundary between a credible proposal and a greenwashing slide.
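The filter described above, matching sinks by temperature band rather than by appetite, can be expressed directly. The candidate list and temperature figures come from the text; the function and the 30-degree uplift cutoff are illustrative assumptions:

```python
# Required input temperature ranges in degC, from the discussion above.
CANDIDATE_SINKS = {
    "controlled-environment agriculture": (15, 28),
    "anaerobic digestion (biogas)": (35, 55),
    "food processing utilities": (60, 90),
    "industrial drying": (60, 90),
    "steel arc furnace": (1500, 1700),
    "cement kiln": (1400, 1600),
}

def viable_sinks(export_min_c: float, export_max_c: float,
                 max_uplift_c: float = 30.0) -> list:
    """Sinks reachable from the export band directly or with a modest
    heat-pump uplift (max_uplift_c is an illustrative cutoff)."""
    viable = []
    for name, (required_min_c, _required_max_c) in CANDIDATE_SINKS.items():
        if required_min_c <= export_max_c + max_uplift_c:
            viable.append(name)
    return viable

print(viable_sinks(40, 60))  # high-temperature core process heat drops out
```

Run against a 40 to 60°C export band, the filter keeps agriculture, digestion, food processing, and drying, and excludes furnaces and kilns, which is precisely the boundary the answer draws.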
Biogas sounds too clean. Stress-test it.
You have highlighted biogas and biomass hubs as one of the strongest modular candidates. That sounds almost too clean. Stress-test it.
The skepticism is warranted. Anaerobic digestion requires continuous heat at 35 to 55°C — directly achievable from a data center cooling loop without heat pump uplift. The demand is constant year-round — biology has no summer demand curve. A facility processing industrial organic waste at 10 to 20 megawatts of thermal input is fully within the range of proven industrial biogas engineering. The outputs are bankable: biomethane for grid injection, certified digestate as a fertilizer substitute, and a recoverable CO₂ stream.
The stress test: a facility at this scale requires tens of thousands of tonnes per year of food processing residues, agricultural waste, or agro-industrial organics — a site selection requirement, not a fundamental barrier. Road or rail access for feedstock and digestate is non-negotiable but solvable at planning stage. Biomethane grid injection requires a gas network connection, which is straightforward in some markets and rate-limiting in others. And the biogas facility must maintain biological process stability during any data center maintenance window using auxiliary heating — it benefits from data center heat, it does not depend on it existentially. None of these are physical or economic barriers. They are engineering and logistics problems with known, deployable solutions.
The value to the operator is not a payment received for accepting heat — it is a fuel cost eliminated. Gas that was burned internally to maintain the fermenter can now go to the grid instead. That saving is independent of any heat market negotiation, which means the economics are stable by design, not contingent on a price that can collapse.
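The "fuel cost eliminated" logic can be sketched with illustrative numbers. The 10 MW thermal figure comes from the text; the gas price and boiler efficiency are hypothetical assumptions, not quoted economics:

```python
# Value of replacing internally burned gas with data center heat,
# for a digester drawing 10 MW of continuous thermal input.
thermal_mw = 10
hours = 8760                 # biology has no summer demand curve
gas_price_eur_per_mwh = 40   # hypothetical wholesale gas price (assumption)
boiler_efficiency = 0.90     # gas previously burned in an on-site boiler (assumption)

gas_mwh_displaced = thermal_mw * hours / boiler_efficiency
saving_eur = gas_mwh_displaced * gas_price_eur_per_mwh
print(f"Gas displaced: {gas_mwh_displaced:,.0f} MWh/yr")
print(f"Fuel cost eliminated: EUR {saving_eur:,.0f}/yr")
```

Whatever the actual gas price, the structure holds: the displaced fuel can be sold to the grid instead of burned, so the benefit does not depend on negotiating a heat tariff.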
Biogas is not the complete answer to data center heat reuse at hyperscale. It is the best first anchor off-taker within a broader thermal ecosystem — fast to deploy, modular, non-seasonal, and independently viable. The cluster builds from there.
Is this the moment to push this to Meta, Google, and AWS?
Meta, Google, and Amazon Web Services are building at unprecedented scale right now. Is this the moment to push this architecture to them directly?
Yes — and the window is not open indefinitely. These operators are designing campuses today that will be operational for 20 to 30 years. The thermal infrastructure decisions embedded in those designs define the optionality of those assets for their entire operational lifespan. Retrofitting a thermal export interface into a completed campus costs an order of magnitude more than designing it in at the drawing stage.
The conversation with these operators is not about sustainability. That framing is finished and they know it. The conversation is: here is a standardized interface specification; it adds a defined capital cost per megawatt at the design stage; it creates a bankable future revenue stream under long-term heat purchase agreements; it satisfies regulatory requirements across the EU and increasingly in other jurisdictions; and it positions the campus as the anchor of an industrial ecosystem that generates measurable local economic value — including employment, which is increasingly a condition of planning permission. The technical readiness exists. The regulatory trajectory is established. What is missing is the interface standard itself — the agreed specification that makes Thermal Plug a default design requirement rather than a project-by-project negotiation. The campuses being designed today will still be standing in 2050.