Feature Interview  ·  Data Center Energy Systems  ·  2026

The Thermal Plug

Why the next data center race is not about power — and what comes after it.

Dimitri Wolf · M.Sc. Mechanical Engineering · Strategic Advisor, Aquatherm GmbH · Spring 2026

Dimitri Wolf's background spans Siemens AG and Shell, covering energy value chains from conventional systems to renewable infrastructure transformation. He is an active member of the DIN committee for plastic welding in district heating networks and participates in European standardization bodies shaping technical and regulatory standards for thermal infrastructure. At Aquatherm GmbH, he leads international markets with a focus on system solutions for heating and cooling networks.

Every time you search something online, stream a video, or send an email, a physical building somewhere consumes electricity to make that happen. That building is a data center — and as a side effect of running millions of computers simultaneously, it produces enormous quantities of heat. Today, almost all of that heat is simply released into the atmosphere. This interview makes the case for a different outcome: a standardized export interface — the Thermal Plug — that pipes that heat directly to biogas plants, food factories, and pharmaceutical facilities next door, turning wasted energy into bankable industrial infrastructure in months, not years.

This interview is specifically about thermal energy reuse. It is not about electricity supply, water infrastructure, site selection, or broader data center sustainability. Those are real and important topics, but they are not this conversation.

What follows is a structured conversation with Dimitri Wolf — the strategist behind the concept — on why this standard does not exist yet, and what it will take to build it.

Q 01  ·  The Problem at Scale

What is the actual scale of the thermal problem?

For someone who does not live in the data center world day to day: what is the actual scale of the thermal problem you are describing?

A 100-megawatt hyperscale data center converts essentially all of its electrical input into heat. That is the continuous thermal output of a mid-size industrial plant — running 24 hours a day, 365 days a year. We are talking about gigawatts of continuous, stable, predictable thermal energy being discarded at scale — while industries a few kilometers away are burning natural gas to produce the same thing, at exactly the same temperature ranges. That is not a niche sustainability problem. That is a systemic design gap, and it grows every single time a new campus goes live.
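The scale claim above can be checked with back-of-envelope arithmetic. A minimal sketch in Python, taking the interview's 100 MW continuous load at face value; the boiler efficiency used to translate heat into displaced gas is an assumption for illustration:

```python
# Back-of-envelope scale check for the 100 MW campus described above.
# Per the interview: essentially all electrical input becomes heat,
# and the campus runs continuously year-round.

CAMPUS_POWER_MW = 100
HOURS_PER_YEAR = 8760
BOILER_EFFICIENCY = 0.90  # assumed efficiency of the gas boilers this heat could displace

annual_heat_gwh = CAMPUS_POWER_MW * HOURS_PER_YEAR / 1000
displaced_gas_gwh = annual_heat_gwh / BOILER_EFFICIENCY

print(f"Discarded heat: {annual_heat_gwh:.0f} GWh/yr "
      f"(~{displaced_gas_gwh:.0f} GWh/yr of gas if raised in boilers instead)")
```

That is 876 GWh of heat per year from a single campus — the "gigawatts at scale" figure follows once several such campuses are on the grid.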

Q 02  ·  The Cooling Technology Question

Isn't liquid cooling solving this at the source?

The industry is moving fast on liquid cooling, direct chip cooling, and immersion systems. Isn't this solving the thermal problem at the source?

Liquid cooling solves a heat removal problem inside the equipment. It brings component temperatures down, improves compute density, and reduces the volume of air that needs to move through the hall. All of that is real and valuable progress. But the thermal energy itself does not disappear — it is now in a water loop instead of an airstream. What happens to that water loop is a completely separate architectural decision. In most deployments today, the answer is still: reject it to the atmosphere. The cooling system remains an internal cost center. The heat is still discarded. Liquid cooling improves the efficiency of the disposal. It does not change the fact that disposal is still the strategy. That is the structural gap — and it sits one level above the cooling technology conversation entirely.

Hyperscale data center campus at dusk
The interface that changes everything — a standardized connection point is all it takes to stop discarding and start delivering.
Q 03  ·  The Core Concept

What does the solution actually look like?

So if better cooling technology does not resolve this, what does the solution actually look like?

The question shifts from "how do we cool better?" to "what do we do with the heat once we have collected it?" My answer: we need a standardized thermal export interface built into the campus from day one. I call it the Thermal Plug. Just as a data center builds a standardized electrical substation interface and a standardized fiber interconnect point before the first server is installed, it should build a standardized thermal export interface with the same logic — all specified before you know who the downstream heat user will be.

Thermal Plug — Minimum Specification

· Defined output temperature band

· Thermal capacity blocks & hydraulic connection standards

· Metering and billing boundaries

· Redundancy class & control interface

As liquid cooling architectures mature — particularly high-temperature loop designs capable of 60–70°C supply temperatures — the export quality improves further. The campus becomes thermally export-ready by default. Downstream partners connect to that interface later, on their own timeline, with their own capital. You are not designing a custom heat reuse project every time. You are building infrastructure optionality into the asset from the first concrete pour.
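The four spec elements listed above can be read as a fixed data contract. A hypothetical sketch — the field names, values, and the `ThermalPlugSpec` type are illustrative assumptions, not a published standard:

```python
from dataclasses import dataclass

# Illustrative only: field names and example values are assumptions.
# The interview specifies four elements: a temperature band, capacity
# blocks with hydraulic standards, metering and billing boundaries,
# and a redundancy class plus control interface.

@dataclass(frozen=True)
class ThermalPlugSpec:
    supply_temp_c: tuple       # defined output temperature band (min, max)
    capacity_block_mw: float   # thermal capacity sold in standard blocks
    hydraulic_standard: str    # pipe/pressure class at the boundary
    metering_boundary: str     # where custody transfer is measured
    redundancy_class: str      # e.g. "N", "N+1" on the export loop
    control_interface: str     # protocol for load signals

# Example instance for a high-temperature liquid-cooled campus
plug = ThermalPlugSpec(
    supply_temp_c=(60, 70),
    capacity_block_mw=10.0,
    hydraulic_standard="DN300 / PN16",
    metering_boundary="campus fence, operator side",
    redundancy_class="N+1",
    control_interface="OPC UA setpoints",
)
print(plug)
```

The point of the frozen dataclass is the point of the standard itself: the interface is fixed before any counterparty exists, and every downstream user designs against the same fields.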

Q 04  ·  The Common Misconception

Why district heating is the wrong primary architecture

When most people hear "data center waste heat," the first thing they think of is district heating. Is that where you are pointing?

It is where most people point — and it is exactly the wrong primary architecture. The moment you examine it seriously, the constraints stack up fast. The most fundamental is temporal: a data center produces heat continuously, year-round, while district heating demand peaks in January and drops to near zero in July. Thermal storage at the scale needed to bridge a six-month seasonal gap is neither technically trivial nor economically viable.

The second constraint is temperature. Traditional district heating networks require 70 to 90°C. Most data center loops operate at 40 to 60°C — bridging the gap requires a heat pump, which adds capital cost and additional energy input. Third, the geography rarely aligns: land, grid, and water access optima are frequently not adjacent to dense urban populations. And fourth, district heating networks take years to permit and build, while hyperscalers deploy on 18 to 24-month cycles. Under current deployment logic, these timelines are structurally incompatible as a primary architecture.
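The heat pump penalty in the temperature argument above can be quantified roughly. A sketch under a common rule-of-thumb assumption (real machines reach about half the Carnot limit — an estimate, not a figure from the interview):

```python
# Rough estimate of the heat pump needed to bridge the gap described
# above: lifting a 50 degC data center loop to an 80 degC district
# heating supply. Assumption: real COP is ~50% of the Carnot limit.

def heating_cop(source_c: float, sink_c: float, carnot_fraction: float = 0.5) -> float:
    """Estimated coefficient of performance for a heating heat pump."""
    t_sink = sink_c + 273.15    # convert to Kelvin
    t_source = source_c + 273.15
    carnot_cop = t_sink / (t_sink - t_source)
    return carnot_fraction * carnot_cop

cop = heating_cop(50, 80)
print(f"Estimated COP: {cop:.1f}")  # each kWh of electricity delivers ~COP kWh of heat
```

A COP near 6 is workable, but it means every megawatt-hour of exported heat now carries an electricity bill — capital cost plus energy input, exactly as the answer states.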

"Do not design heat reuse as a custom project. Design heat export as infrastructure."

Dimitri Wolf
Q 05  ·  Precision, Not Dismissal

So district heating is off the table entirely?

So district heating is off the table entirely?

Not off the table. Correctly framed. In specific geographies where conditions already align — existing dense pipe networks, cold climates with long heating seasons, campus locations with genuine urban proximity — district heating is a valid and proven downstream application. Microsoft's collaboration with Fortum in Finland demonstrates this concretely: covering 40% of district heating demand for 250,000 customers in the Espoo metropolitan area, using an existing 900-kilometer pipe network built over decades. That model cannot be replicated generically, and it cannot scale at the speed hyperscalers are deploying. The Thermal Plug standardizes the supply-side interface so that district heating, where conditions are right, can connect to it — exactly like any other downstream application. District heating is an application. It is not the standard.

Q 06  ·  Regulation & Investment Logic

Is regulation the driver, or does the business case stand alone?

European regulation is now moving in this direction. Is regulation the primary driver, or does the business case stand independently?

Both are real, operating on different timescales. Regulation is a forcing function — it sets a floor, converts optional thinking into contractual design obligation, and gives procurement and legal teams a mandate to act. Germany's EnEfG requires data centers above 300 kilowatts to achieve an Energy Reuse Factor of 10% from July 2026, rising to 20% by July 2028. The EU Energy Efficiency Directive adds a parallel layer across member states. The structural logic applies globally — from water-stressed US markets, to the Middle East near petrochemical complexes, to the Nordics where existing industrial infrastructure accelerates deployment timelines.
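The EnEfG arithmetic referenced above reduces to a simple ratio. A minimal sketch using only the thresholds named in the interview (10% from July 2026, 20% from July 2028; intermediate steps in the law are omitted here, and the example energy figures are assumptions):

```python
# Compliance sketch for the Energy Reuse Factor (ERF) requirement
# described above: reused energy divided by total energy intake,
# checked against the two thresholds the interview cites.

def energy_reuse_factor(reused_mwh: float, total_intake_mwh: float) -> float:
    return reused_mwh / total_intake_mwh

def meets_enefg(erf: float, year: int) -> bool:
    if year >= 2028:
        return erf >= 0.20
    if year >= 2026:
        return erf >= 0.10
    return True  # no ERF requirement assumed before the 2026 threshold

# Hypothetical campus: 876 GWh annual intake, 105 GWh exported as heat
erf = energy_reuse_factor(reused_mwh=105_000, total_intake_mwh=876_000)
print(f"ERF = {erf:.1%}, 2026 compliant: {meets_enefg(erf, 2026)}, "
      f"2028 compliant: {meets_enefg(erf, 2028)}")
```

An operator clearing the 2026 bar can still fail the 2028 one — which is the compliance-only trap the next paragraph describes.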

But if compliance is the only driver, operators will do the minimum and stop. The business case I find genuinely durable is asset optionality. A campus built to a thermal export standard carries a future revenue stream — bankable under long-term heat purchase agreements — that a non-export-ready campus simply does not have. Industry estimates for retrofit typically run to several million euros per 10 MW of thermal capacity. Designed-in from day one, the same capability costs a fraction of that. The capital cost delta is not linear. It is an order of magnitude difference.

Industrial thermal pipeline infrastructure
Infrastructure as a connector — large-scale networks function because the interface is standardized, not because every connection is bespoke.
Q 07  ·  The Critical Distinction

Why did every previous attempt fail?

Every major infrastructure conference for the last decade has had a slide about data center waste heat reuse. Almost none were executed at scale. Why?

The reason those slides were never executed is specific and worth naming directly: they were designed as projects, not as infrastructure. Someone identified a local heat user, negotiated a bilateral supply agreement, and engineered a bespoke system designed entirely around that single counterparty. Then it collapsed — on permitting, on seasonal mismatch, on a change in the partner's business, on regulatory shifts, on the impossibility of maintaining a one-off system with no standard components and no operational precedent.

The architectural error was designing thermal output around a known downstream user instead of standardizing the interface first — the same error that would have been made if early electrical infrastructure had been engineered to power one specific factory rather than standardized for any load. The Thermal Plug changes the structure: define and build the supply-side standard once. Capital can then evaluate a known, specified interface rather than a bespoke engineering risk. No single dependency. No custom engineering per site. No fragile bilateral exposure.

That structural shift is what separates a sustainability initiative from an infrastructure investment thesis.
Q 08  ·  Industrial Matching

Who actually absorbs this heat at scale?

Given that district heating is not the scaling architecture, who are the realistic large-scale absorbers of this thermal energy?

The right filter is not "who wants heat" — that list is long and mostly useless at the scale we are discussing. The right filter is: who can absorb large, continuous thermal loads in a modular and interruptible way, at the temperature ranges a data center actually exports, without being critically dependent on that heat for operational survival? Controlled-environment agriculture requires stable growing environments at 15 to 28°C year-round, directly achievable from a liquid cooling loop with no heat pump uplift. Biogas and biomass processing hubs are a strong structural match, addressed in the next question. Food processing utilities, industrial drying clusters, pharmaceutical process utilities, and selected chemical preheating operations all carry meaningful continuous heat demands in the 60 to 90°C range, achievable with modest heat pump uplift.

What does not work as a primary sink is high-temperature core industrial process heat. Steel arc furnaces operate above 1,500°C, cement kilns above 1,400°C, glass melting furnaces above 1,300°C. Data center thermal export is suited for utility loads and preheating stages — not for replacing the hottest core of a heavy industrial process. That distinction is not a footnote. It is the boundary between a credible proposal and a greenwashing slide.
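The filter described across the last two answers is essentially a temperature-band match. A sketch over the candidates named above — the bands follow the interview; the three-way classification and the 40–60 °C export band are illustrative simplifications:

```python
# Screening the downstream candidates named above against a typical
# 40-60 degC data center export loop (per the interview): which sinks
# connect directly, which need heat pump uplift, and which are not
# credible sinks at all.

EXPORT_BAND_C = (40, 60)

CANDIDATE_SINKS = {
    "controlled-environment agriculture": (15, 28),
    "anaerobic digestion (biogas)": (35, 55),
    "food processing utilities": (60, 90),
    "industrial drying": (60, 90),
    "steel arc furnace (core process)": (1500, 1800),
}

def match(demand_band_c, export_band_c=EXPORT_BAND_C):
    lo, hi = demand_band_c
    if lo > export_band_c[1]:        # demand starts above what the loop can supply
        return "not a credible sink"
    return "direct" if hi <= export_band_c[1] else "heat pump uplift"

for name, band in CANDIDATE_SINKS.items():
    print(f"{name}: {match(band)}")
```

The classification reproduces the boundary drawn in the answer: agriculture and biogas connect directly, the 60–90 °C utility loads need modest uplift, and high-temperature core process heat falls outside the credible range entirely.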

Q 09  ·  The Stress Test

Biogas sounds too clean. Stress-test it.

You have highlighted biogas and biomass hubs as one of the strongest modular candidates. That sounds almost too clean. Stress-test it.

The skepticism is warranted. Anaerobic digestion requires continuous heat at 35 to 55°C — directly achievable from a data center cooling loop without heat pump uplift. The demand is constant year-round — biology has no summer demand curve. A facility processing industrial organic waste at 10 to 20 megawatts of thermal input is fully within the range of proven industrial biogas engineering. The outputs are bankable: biomethane for grid injection, certified digestate as a fertilizer substitute, and a recoverable CO₂ stream.

The stress test: a facility at this scale requires tens of thousands of tonnes per year of food processing residues, agricultural waste, or agro-industrial organics — a site selection requirement, not a fundamental barrier. Road or rail access for feedstock and digestate is non-negotiable but solvable at planning stage. Biomethane grid injection requires a gas network connection, which is straightforward in some markets and rate-limiting in others. And the biogas facility must maintain biological process stability during any data center maintenance window using auxiliary heating — it benefits from data center heat, it does not depend on it existentially. None of these are physical or economic barriers. They are engineering and logistics problems with known, deployable solutions.

The value to the operator is not a payment received for accepting heat — it is a fuel cost eliminated. Gas that was burned internally to maintain the fermenter can now go to the grid instead. That saving is independent of any heat market negotiation, which means the economics are stable by design, not contingent on a price that can collapse.
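The "fuel cost eliminated" logic above is easy to put in numbers. A sketch in which every input is an assumption for illustration — the interview gives the 10–20 MW range but not a delivery figure, boiler efficiency, or gas price:

```python
# Illustrative arithmetic for the fuel-cost-eliminated point above.
# All inputs are assumptions, not interview figures: 15 MW continuous
# thermal delivery, a 90%-efficient gas boiler displaced, gas at
# 40 EUR/MWh.

THERMAL_MW = 15
HOURS_PER_YEAR = 8760
BOILER_EFFICIENCY = 0.90
GAS_PRICE_EUR_PER_MWH = 40

heat_mwh = THERMAL_MW * HOURS_PER_YEAR          # heat no longer raised from gas
gas_mwh = heat_mwh / BOILER_EFFICIENCY          # gas that heat would have consumed
saving_eur = gas_mwh * GAS_PRICE_EUR_PER_MWH    # fuel bill avoided, or gas freed for grid injection

print(f"Avoided gas: {gas_mwh:,.0f} MWh/yr -> ~{saving_eur / 1e6:.1f} M EUR/yr")
```

Note that the saving depends only on the operator's own avoided fuel purchase, not on any negotiated heat price — which is exactly why the answer calls these economics stable by design.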

Biogas is not the complete answer to data center heat reuse at hyperscale. It is the best first anchor off-taker within a broader thermal ecosystem — fast to deploy, modular, non-seasonal, and independently viable. The cluster builds from there.

Q 10  ·  The Hyperscaler Imperative

Is this the moment to push this to Meta, Google, and AWS?

Meta, Google, and Amazon Web Services are building at unprecedented scale right now. Is this the moment to push this architecture to them directly?

Yes — and the window is not open indefinitely. These operators are designing campuses today that will be operational for 20 to 30 years. The thermal infrastructure decisions embedded in those designs define the optionality of those assets for their entire operational lifespan. Retrofitting a thermal export interface into a completed campus costs an order of magnitude more than designing it in at the drawing stage.

The conversation with these operators is not about sustainability. That framing is finished and they know it. The conversation is: here is a standardized interface specification; it adds a defined capital cost per megawatt at the design stage; it creates a bankable future revenue stream under long-term heat purchase agreements; it satisfies regulatory requirements across the EU and increasingly in other jurisdictions; and it positions the campus as the anchor of an industrial ecosystem that generates measurable local economic value — including employment, which is increasingly a condition of planning permission. The technical readiness exists. The regulatory trajectory is established. What is missing is the interface standard itself — the agreed specification that makes Thermal Plug a default design requirement rather than a project-by-project negotiation. The campuses being designed today will still be standing in 2050.

Industrial anaerobic digestion biogas facility
Stored energy — biomethane produced from waste heat is not a byproduct. It is a bankable product, injectable into the grid on demand.

"The right question is not who likes heat. It is who can absorb large utility-scale thermal loads in a modular way."

Dimitri Wolf
Q 11  ·  The Long View

The 50-Year Question

A politician approving a new 200-megawatt campus faces a direct question from constituents: what does this actually create for us? A standalone campus creates perhaps 40 to 100 direct jobs — a number that is very hard to defend at a town hall. The answer changes fundamentally when the campus anchors an industrial cluster: biogas processing, food production utilities, controlled-environment agriculture, logistics and packaging — where the thermal infrastructure is shared, the supply chains interlock, and the employment multiplier reaches into the hundreds of real, local, industrial jobs. That is a regional economy argument. Politicians understand the difference.

The hardware inside that campus will not look the same in 2035, let alone 2075. What you must not build is a single-purpose facility whose surrounding ecosystem lives or dies by what specific chips are running inside. Detroit did not collapse because cars stopped being made. It collapsed because the entire regional economy was organized around a single industry with no independent metabolism.

Build the industrial cluster so that its infrastructure — the piping networks, the land use, the logistics access, the energy distribution — outlasts any specific technology generation inside the data center building. The cluster continues. The jobs continue. The tax base continues.

The very technology that created this thermal challenge — AI, in its current inefficient form — may be the thing that solves it. If AI accelerates materials science, process engineering, and system optimization, we may arrive at compute architectures that are an order of magnitude more efficient. But we will arrive there faster, and with far less waste, if we build the industrial integration infrastructure now — while the campuses are still being designed, while the clusters can still be planned around them, and while the heat is still worth capturing. We are not building for this decade. We are building the skeleton of an industrial ecosystem that needs to function in 2075 regardless of what the chips inside look like.

The first generation built for compute. The next generation builds for systems.