Feature Interview II   ·   Data Center Thermal Engineering   ·   2026

The Reverse Thermal Plug

The engineering underneath the concept. Biogas, bidirectionality, permits, distances, and why the data center becomes a grid node — not a donor.

Dimitri Wolf · M.Sc. Mechanical Engineering · Strategic Advisor, Aquatherm GmbH · Spring 2026

Dimitri Wolf's background spans Siemens AG and Shell, covering energy value chains from conventional systems to renewable infrastructure transformation. He is an active member of the DIN committee for plastic welding in district heating networks and participates in European standardization bodies shaping technical and regulatory standards for thermal infrastructure. At Aquatherm GmbH, he leads international markets with a focus on system solutions for heating and cooling networks. This is the second interview in a series — the engineering sequel to The Thermal Plug.

The first interview established what the Thermal Plug is and why it is needed. This one goes into how it actually works — and answers every hard question an engineer, investor, regulator, or operator is entitled to ask. The answers are technical. They are also final.

This interview is specifically about the engineering, the economics, the permits, and the boundaries of the system. It is not about chip architecture, power supply, water infrastructure as a standalone topic, or broader AI sustainability debates. Those are real topics. They are not this conversation.

What this system actually delivers — before the engineering questions begin
Continuous Heat Recovery
Data center waste heat at 35–60 °C delivered to industrial processes that currently burn natural gas to produce heat at the same temperatures. Every megawatt-hour transferred is a megawatt-hour of gas not burned — measurable at the interface, owned by the receiver.
Dispatchable Grid Flexibility
Biogas operators using EPEX Intraday flex dispatch can run their engines only during high-price hours. DC heat keeps fermenters warm during engine-off windows — converting wasted cooling capacity directly into grid balancing value without a single extra component.
Cold Chain & Process Cooling
When compute loads drop, the same interface exports chilled water at 6–12 °C to pharmaceutical storage, fresh produce logistics, and food processing — industries that need cold 365 days a year regardless of season or ambient temperature.
Water Consumption Reduction
Every megawatt of heat exported through the plug is a megawatt the cooling towers do not need to reject by evaporation. At hyperscale, that is millions of litres of water per year returned to the local water cycle — a positive effect that costs nothing extra to achieve. A rough magnitude check follows this list.
Industrial Cluster Anchor
A campus with a standardized thermal export interface becomes the energy backbone of a surrounding industrial ecosystem — biogas processing, food production, pharmaceutical utilities, controlled-environment agriculture — generating hundreds of local jobs the campus itself never could.
Clean Ownership Boundary
The data center ends at the plug. A viable external connection point — approximately 20 metres from the building — defines where the DC's responsibility stops and the connecting operator's begins. No grey zone. No shared liability. Each party owns and operates what they build.
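The water figure above can be sanity-checked from first principles. A minimal sketch follows, assuming every exported megawatt-hour displaces evaporative heat rejection at the latent heat of vaporization of water (roughly 2.26 MJ/kg); drift and blowdown losses are ignored, so real towers would save somewhat more.

```python
# Rough magnitude check: evaporative water avoided per exported megawatt.
# Assumption: 1 MWh rejected in a wet cooling tower evaporates water at
# the latent heat of vaporization (~2.26 MJ/kg); drift and blowdown are
# ignored, so the real saving is somewhat higher.

LATENT_HEAT_MJ_PER_KG = 2.26
MJ_PER_MWH = 3600.0

def water_avoided_litres_per_year(exported_mw: float, hours: float = 8760.0) -> float:
    """Litres of evaporation avoided per year by exporting heat instead."""
    kg_per_mwh = MJ_PER_MWH / LATENT_HEAT_MJ_PER_KG   # ~1,590 kg per MWh
    return exported_mw * hours * kg_per_mwh           # 1 kg of water ~ 1 litre

print(f"{water_avoided_litres_per_year(1.0):,.0f} L/yr per exported MW")
# -> roughly 14 million litres per year per continuously exported megawatt
```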
Q 01   ·   The Physical Connection

Where does the pipe actually connect?

The first interview established the Thermal Plug as a standardized interface concept. This one goes into the engineering. If I am standing in front of a hyperscale data center campus, where exactly does the Reverse Thermal Plug physically tie into the cooling architecture?

The plug connects at the hydraulic boundary between the data center's internal cooling plant and the external world. That boundary is defined by a plate heat exchanger skid — a standardized thermal substation — and nothing external ever crosses it. The data center's internal loops stay isolated: their water chemistry, their pressure regimes, their redundancy certifications remain entirely unaffected by whatever is connected on the other side.

What that skid physically taps depends on the cooling architecture of the campus. On an air-cooled campus, the natural connection point is the condenser water return — warm water heading toward the cooling towers before it is rejected to atmosphere. On a liquid-cooled campus with warm-water rear-door or in-row exchangers, you tap the warm-water return loop before it reaches the chillers. On an immersion-cooled campus operating at 40 to 60 degrees Celsius on the primary loop, the export temperature quality improves further and the heat pump uplift requirement on the external side shrinks accordingly.

In all cases, the bypass is valve-controlled, fully instrumented, and sized as a modular unit — not a bespoke engineering exercise. Plate heat exchangers, secondary pump sets, isolation valves, metering stations at the boundary — all industrially standard components, used in district heating substations and industrial process interfaces for decades. The Thermal Plug does not require new technology. It requires the architectural decision to install that standardized substation at design stage.

The data center's physical responsibility ends at that skid plus a viable external connection point — approximately 20 metres from the building envelope — where civil works are actually possible without disrupting the operating facility. Everything beyond that point is owned, financed, and operated by whoever connects. The boundary is not ambiguous. It is a metered handover point, the same way a gas grid connection or an electrical substation interconnect defines a clear legal and operational boundary between two parties.
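The physics at the metered handover point is deliberately simple. A minimal sketch of the boundary energy measurement, assuming standard water-side heat metering from flow rate and supply/return temperatures; the function and values are illustrative, not taken from a specific installation.

```python
# Boundary heat metering at the handover point: Q = m_dot * c_p * dT.
# Illustrative only; real installations use certified heat meters
# (e.g. per EN 1434), but the underlying measurement is this.

CP_WATER_KJ_PER_KG_K = 4.186   # specific heat capacity of water

def transferred_kw(flow_kg_s: float, t_supply_c: float, t_return_c: float) -> float:
    """Thermal power crossing the boundary in kW (positive = export)."""
    return flow_kg_s * CP_WATER_KJ_PER_KG_K * (t_supply_c - t_return_c)

# Example: 40 kg/s across a 45 degC supply / 36 degC return secondary circuit
print(f"{transferred_kw(40.0, 45.0, 36.0):,.0f} kWth")   # ~1,507 kWth
```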

Q 02   ·   The Bidirectionality Argument

Why insist on bidirectional from day one?

That is the heat side. The concept is called the Reverse Thermal Plug precisely because it implies a second direction. Why insist on bidirectionality from day one rather than starting with heat export and adding cold later?

Because the reason to go bidirectional is not ideological — it is thermodynamic. A data center chiller plant is a large, active machine — compressors, variable-speed pumps, cooling towers, heat exchangers — running continuously. Like any large industrial machine of this type, it has an optimal efficiency envelope — a range of operating conditions where it delivers the most thermal output for the least electrical input. Modern hyperscale cooling plants use modular chiller staging and AI-assisted load controls specifically to stay within that envelope as IT load varies.

The problem with heat-only export is that external heat demand does not follow the data center's load profile. Heat demand peaks in winter. The data center's cooling requirement follows compute demand, which does not stop in summer and does not stop overnight. During low-compute periods, the plant has available cooling capacity — headroom it cannot efficiently suppress without cycling equipment or accepting efficiency losses. The cold side solves that by directing that headroom externally when internal demand does not require it. The plant stays in its optimal efficiency envelope because it always has somewhere useful to dispatch its output.
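A minimal sketch of that dispatch idea, under stated assumptions: the utilization thresholds, names, and signals below are hypothetical, not from any production control system. The point is only that the interface always gives the plant a useful direction.

```python
# Illustrative bidirectional dispatch: keep the chiller plant inside its
# efficiency envelope by always giving its output somewhere useful to go.
# Thresholds and field names are hypothetical, not from a production system.

from dataclasses import dataclass

@dataclass
class PlantState:
    it_load_mw: float        # current compute load
    design_load_mw: float    # cooling plant design point
    heat_demand_mw: float    # external heat off-take requested
    cold_demand_mw: float    # external cooling off-take requested

def dispatch(s: PlantState) -> str:
    utilization = s.it_load_mw / s.design_load_mw
    if utilization > 0.7 and s.heat_demand_mw > 0:
        return "EXPORT_HEAT"   # high load: warm return water is abundant
    if utilization < 0.5 and s.cold_demand_mw > 0:
        return "EXPORT_COLD"   # low load: chiller headroom goes external
    return "ISLAND"            # internal-only operation

print(dispatch(PlantState(8.0, 10.0, heat_demand_mw=1.5, cold_demand_mw=0.0)))
# -> EXPORT_HEAT
```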

There is a third dimension almost never discussed: resilience. If the data center's cooling plant experiences a partial failure, an industrial neighbour connected through the same interface can reverse the flow and supply cooling capacity back to the campus for a contractually defined window. Every node keeps its own backup plant — but together they form a mutual resilience layer where capacity moves in either direction depending on who has the surplus. That changes the risk profile of the campus for insurers and infrastructure investors. Not marginally. Structurally.

One standardized interface. Two directions. The campus becomes an active thermal trading node — not a waste-heat donor.
Industrial heat exchanger infrastructure
The plate heat exchanger skid sits at the campus boundary — the standardized interface between the data center's internal loops and the external thermal economy. Everything left of this point belongs to the DC. Everything right belongs to whoever connects.
Q 03   ·   The First Anchor Off-Taker

Why biogas first? Walk through the engineering.

You have consistently named biogas as the first anchor off-taker for the Thermal Plug. Not food processing, not district heating, not cold chain — biogas. The engineering case needs to be airtight. Make it.

The case is airtight because the temperature match is exact, the demand profile is continuous, and the economic incentive is structural — not aspirational. Every agricultural biogas plant running mesophilic fermentation needs to keep its fermenter at 35 to 42 degrees Celsius, around the clock, every day of the year. Biology has no weekend. Thermophilic plants run at 50 to 55 degrees Celsius — still within or one small heat-pump step above a standard data center warm-water loop. The demand does not fluctuate seasonally. It does not drop in summer. It is baseload thermal demand in the most literal sense.

The conventional solution is to bleed waste heat from the combined heat and power unit — the BHKW — back into the fermenter. That works, but it is wasteful by design. A gas-fired Otto engine operating at 34 to 42 percent electrical efficiency turns most of the rest of its fuel energy, roughly 55 to 60 percent of the total, into thermal output. Roughly a third of that thermal output is fed back into the fermenter as process heat. You are burning your own product to maintain the conditions that produce your product. The moment you replace that self-heating loop with an external low-grade heat source at zero marginal fuel cost — which is exactly what a data center cooling loop provides — the BHKW's thermal output is freed entirely for external sale or grid-dispatch arbitrage. The operator does not change the fermenter. They do not change the BHKW. They connect one heat exchanger skid to an interface they already have, and their economics shift permanently.

The numbers are concrete. A representative agricultural biogas plant in the 700 to 750 kilowatt-electrical class — the documented average for Germany and comparable European markets — carries a continuous fermenter heating demand of approximately 200 to 250 kilowatts thermal. Annualized, that is 1,700 to 2,200 megawatt-hours of thermal energy per plant per year. A single data center hall of 5 megawatt IT load, operating at a power usage effectiveness of 1.3, rejects approximately 1.5 megawatts of recoverable heat continuously. That is enough to serve six to eight biogas plants simultaneously on a single secondary circuit. At 10 megawatts of exportable thermal capacity you are looking at 40 to 50 plants within a viable connection radius. The capacity match is not marginal — the data center structurally overproduces heat relative to what any single off-taker can absorb, which is precisely why biogas works as a modular distributed network rather than a bilateral contract with one facility.
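The capacity match reduces to straightforward arithmetic. A worked check of the figures above, using only numbers stated in this interview:

```python
# Worked check of the capacity figures quoted above.

HOURS = 8760                       # hours per year; biology has no weekend
recoverable_kwth = 1500            # per 5 MW IT hall at PUE 1.3, as stated
demand_lo, demand_hi = 200, 250    # continuous kWth per fermenter

print(f"plants per hall: {recoverable_kwth/demand_hi:.0f} to "
      f"{recoverable_kwth/demand_lo:.1f}")                    # 6 to 7.5
print(f"per-plant volume: {demand_lo*HOURS/1000:,.0f} to "
      f"{demand_hi*HOURS/1000:,.0f} MWh/yr")                  # 1,752 to 2,190
print(f"plants per 10 MWth export: {10000/demand_hi:.0f} to "
      f"{10000/demand_lo:.0f}")                               # 40 to 50
```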

One more data point worth stating directly: surveys of biogas plant operators across major producing regions show that more than two thirds are actively planning changes or upgrades to their heat utilization concept in the near term. The demand-side door is already open. The Thermal Plug walks through it.

The value proposition to the biogas operator needs to be stated precisely, because it is frequently misunderstood. The DC does not pay the biogas operator for accepting heat. The biogas operator saves the fuel cost they are currently spending to self-heat the fermenter — gas that can now be sold to the grid or dispatched at peak EPEX prices instead of burned internally. At current biomethane prices, that represents a direct operational saving of approximately 35,000 to 55,000 euros per year per 740 kilowatt-electrical plant. The heat is not a revenue stream for the DC. It is a cost elimination for the operator. That distinction is what makes the economics stable regardless of heat market pricing — the savings accrue from avoided gas consumption, not from a bilateral price negotiation that can collapse.

One further engineering point on operational risk: the biogas operator does not depend on the DC for biological process survival. Every connected facility maintains its own auxiliary heating capacity — a gas-fired backup sized to hold fermenter temperature through any DC maintenance window or curtailment event. The Thermal Plug reduces the number of hours that backup runs. It does not replace it. The biology is protected by the operator's own redundancy. The DC heat is an economic improvement on top of a system that already functions independently without it.

Biogas Plant — Thermal Interface Parameters

Fermenter target — mesophilic: 35–42 °C · thermophilic: 50–55 °C

Current DC liquid-cooling loop output: 40–55 °C → mesophilic: direct match, no heat pump · thermophilic: +8–15 °C uplift, COP >6 (a bounding estimate follows this block)

Continuous heat demand: ~200–250 kWth per 740 kWel plant

Annual thermal volume per plant: ~1,700–2,200 MWh/yr

DC 5 MW IT hall (≈1.5 MWth recoverable): 6–8 plants · 10 MWth export capacity: 40–50 plants

BHKW Otto motor efficiency: 34–42% electrical · ~55–60% thermal output

Operator annual saving from avoided self-heat fuel: ~€35,000–55,000 per plant

Operator auxiliary backup: gas-fired standby retained · biology protected independently

* €35–55k saving based on documented heat demand data for 700–750 kWel agricultural biogas plants · Sources: Bioenergyland Niedersachsen / Dewess-Kilian 2022
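The COP figure in the block above can be bounded with a Carnot estimate. A sketch only: the assumption that an industrial heat pump reaches roughly 45 to 55 percent of the Carnot limit is a general engineering rule of thumb, not a figure from this project.

```python
# Carnot-bounded COP estimate for the thermophilic uplift case.
# The 45-55%-of-Carnot assumption is a general rule of thumb for
# industrial heat pumps, not a figure from this interview.

def carnot_cop(t_sink_c: float, t_source_c: float) -> float:
    t_sink_k = t_sink_c + 273.15
    return t_sink_k / (t_sink_k - (t_source_c + 273.15))

# Worst documented case: lifting a 40 degC loop by 15 K to 55 degC
ideal = carnot_cop(55.0, 40.0)
print(f"Carnot COP {ideal:.1f}, realistic {0.45*ideal:.1f}-{0.55*ideal:.1f}")
# -> Carnot ~21.9, realistic ~10-12: comfortably above the COP > 6 quoted
```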

Biogas is not the complete answer to data center heat reuse at hyperscale. It is the best first anchor off-taker — fast to deploy, modular, non-seasonal, independently viable. The cluster builds from there.

Dimitri Wolf
Q 04   ·   Flex Dispatch Compatibility

Does your system conflict with BHKW flex dispatch?

Flexible BHKW operation — running gas engines only during high EPEX Intraday price windows — is increasingly standard. Does the Thermal Plug interfere with that model, or complement it?

It complements it directly — and this is one of the strongest technical arguments for the Thermal Plug that has not been made publicly yet. Flexible BHKW operation requires gas storage, typically a double-membrane balloon store of 4,000 to 5,000 cubic metres, and it requires that the fermenter stays at operating temperature even when the engine is completely offline. In a conventional setup, that means either running the BHKW at minimum load during off-peak hours just to maintain process heat — which forfeits the spot premium entirely — or installing a dedicated peak-load boiler at 30,000 to 80,000 euros in additional capital expenditure with its own fuel connection and permitting requirement.

The Thermal Plug removes both constraints simultaneously. When the BHKW is offline accumulating gas for the next high-price dispatch window, the data center's heat keeps the fermenter warm at zero fuel cost to the operator. The BHKW can go fully offline for 8 to 16 hours. The fermenter temperature holds. The operator dispatches at peak rates without compromise. In effect, the Thermal Plug converts data center waste heat into grid flexibility — the DC's discarded cooling capacity becomes the enabler of a smarter, more profitable biogas dispatch strategy that also stabilises the power grid.
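Once fermenter heat is decoupled from engine runtime, the dispatch decision collapses to a price comparison. A minimal sketch, with a hypothetical price threshold and signal names:

```python
# Illustrative flex-dispatch rule once fermenter heat is decoupled from
# BHKW runtime. Threshold and names are hypothetical.

def run_bhkw(epex_price_eur_mwh: float,
             gas_store_fill: float,          # 0.0-1.0 of balloon store
             dc_heat_available: bool,
             price_threshold: float = 120.0) -> bool:
    if not dc_heat_available:
        return True    # fall back to self-heating: engine must run
    if gas_store_fill > 0.95:
        return True    # store nearly full: dispatch regardless of price
    return epex_price_eur_mwh >= price_threshold   # run only in peak windows

# Engine stays off through a low-price night while DC heat holds 38 degC
print(run_bhkw(epex_price_eur_mwh=45.0, gas_store_fill=0.6,
               dc_heat_available=True))   # False -> accumulate gas, wait
```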

Q 05   ·   Control Architecture

The hard part is controls. What does that mean?

You have consistently said the engineering challenge is controls, not pipes. What does the control architecture actually look like, and what happens when the system encounters conditions it was not designed for?

The control architecture operates in three layers, and the hierarchy between them is not negotiable. The first layer is the data center's internal cooling SLA — server inlet temperatures within specification, chilled water supply within the design band, uptime unconditionally maintained. This layer is always primary. The plug is always subordinate to it. If IT load spikes, if a chiller stage trips, if an internal alarm condition is raised — the bypass valves close and the external circuit stops receiving flow immediately. The external operator's connection agreement defines this explicitly, and they maintain their own backup plant in standby for exactly this reason.

The second layer is thermal dispatch logic. Thermal infrastructure differs fundamentally from electrical infrastructure in one critical dimension: electrical dispatch operates in milliseconds, thermal dispatch operates on a timescale of hours. Pipe thermal mass, buffer tank inertia, the thermal capacity of the external network, and contractual notification windows all mean the system cannot respond to sudden changes in real time. The correct response is model-predictive control — the system forecasts the next two to four hours of IT demand based on workload scheduling signals and historical patterns, and pre-positions valve and pump states accordingly. Hyperscale operators already forecast power demand at hourly resolution for electricity procurement. The same forecasting capability drives thermal dispatch. The data translation layer bridging thermal dispatch signals, AAS-Ready submodel profiles, and ML-based load forecasting across this interface is being developed under SCY:MO — a dedicated infrastructure intelligence framework purpose-built for exactly this boundary.
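A toy illustration of that pre-positioning pass, not SCY:MO and not a real model-predictive formulation: forecast the next hours of IT load, derive exportable heat under an assumed recoverable fraction, and schedule valve states ahead of time.

```python
# Toy pre-positioning pass: forecast-driven, hours-ahead valve scheduling.
# RECOVERABLE_FRACTION and all names are assumptions for illustration.

FORECAST_H = 4
RECOVERABLE_FRACTION = 0.3    # exportable MWth per MW of IT load (assumed)

def preposition(it_load_forecast_mw: list[float],
                committed_export_mwth: float) -> list[str]:
    """Return a valve-state schedule for the next FORECAST_H hours."""
    schedule = []
    for load in it_load_forecast_mw[:FORECAST_H]:
        exportable = load * RECOVERABLE_FRACTION
        if exportable >= committed_export_mwth:
            schedule.append("BYPASS_OPEN")   # enough heat: honour the contract
        else:
            schedule.append("BYPASS_TRIM")   # shortfall: pre-notify off-taker
    return schedule

print(preposition([5.0, 5.2, 3.8, 5.1], committed_export_mwth=1.5))
# -> ['BYPASS_OPEN', 'BYPASS_OPEN', 'BYPASS_TRIM', 'BYPASS_OPEN']
```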

One point worth stating directly for those who will read this critically: the engineering of the thermal interface is solved. Heat exchangers, hydraulic separation, metering boundaries — these are standard industrial components with decades of operational precedent. The non-linear risk in this system is not the hardware. It is system integration — dynamic load mismatch across independent operators, contractual failure modes, and liability boundaries that cross organisational lines. That complexity is real, it is manageable, and it is exactly what the control architecture and ownership boundary described here are designed to contain. Engineering is not the hard part. Coordination is.

The third layer is the external interface: metered handover, demand signals from off-takers, bilateral communication of available thermal capacity in both directions — standard industrial SCADA protocol. Failure mode: if the control system loses telemetry, bypass valves fail closed. The data center returns to isolated internal cooling. No cascade failure. No dependency in the critical path.
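The hierarchy across all three layers can be written down almost literally. A sketch with hypothetical signal names, showing that the internal SLA always wins and that telemetry loss fails closed:

```python
# The non-negotiable hierarchy, written as code. Signal names hypothetical.

def bypass_command(internal_alarm: bool,
                   telemetry_ok: bool,
                   dispatch_wants_open: bool) -> str:
    # Layer 1: internal cooling SLA is always primary.
    if internal_alarm:
        return "CLOSE"
    # Layer 3 failure mode: lost telemetry fails closed, DC islands itself.
    if not telemetry_ok:
        return "CLOSE"
    # Layer 2: thermal dispatch decides only when layers 1 and 3 permit.
    return "OPEN" if dispatch_wants_open else "CLOSE"

assert bypass_command(True,  True,  True)  == "CLOSE"   # SLA wins
assert bypass_command(False, False, True)  == "CLOSE"   # fail closed
assert bypass_command(False, True,  True)  == "OPEN"    # normal export
```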

Cold chain logistics warehouse
Pharmaceutical cold stores, fresh produce logistics, frozen food warehouses — industries that need cold at 2–8 °C continuously, 365 days a year, with no seasonal variation. The same interface that exports heat exports cold when compute loads drop.
Q 06   ·   Distance & Losses

What about transmission losses and distance?

Transmission losses and distance are the first objection every engineer raises when heat reuse projects are proposed. Pipes lose heat. The further you go, the worse it gets. How do you answer that?

By removing the premise. Transmission losses and distance are not this project's constraint at this stage — and I want to be precise about why, because this is not a rhetorical deflection. The Thermal Plug is a campus-boundary interface. It is not a district heating network. It does not lay 40 kilometres of insulated pipeline into a city centre. The model connects the secondary cooling loop of a data center to an off-taker located within a 1 to 5 kilometre radius — a radius that, in the geographic clusters where hyperscale development is concentrated, contains numerous agricultural and industrial operators.

At that distance, a pre-insulated HDPE pipe circuit operating at 40 degrees Celsius loses less than 0.2 degrees Celsius per kilometre under standard soil conditions. The thermal loss budget is negligible. It does not enter the economic calculation in any meaningful way. The critics who raise losses are thinking about district heating — pushing 80 to 90 degree water across ageing urban networks to residential radiators. We are pushing 40 degree water, short distances, to a process that consumes heat continuously regardless of outdoor temperature. The physics are entirely different. The infrastructure is entirely different. The risk profile is entirely different.
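That loss figure can be illustrated with a simple lumped estimate. The linear heat-loss coefficient below is an assumed typical value for pre-insulated low-temperature pipe, not measured project data:

```python
# Lumped estimate of temperature drop per kilometre of pre-insulated pipe.
# The linear loss coefficient is an assumed typical value, not project data.

CP = 4186.0       # J/(kg K), water
U_LINEAR = 0.4    # W/(m K), assumed for pre-insulated low-temperature pipe

def temp_drop_per_km(flow_kg_s: float, t_water_c: float, t_soil_c: float) -> float:
    loss_w_per_km = U_LINEAR * (t_water_c - t_soil_c) * 1000.0
    return loss_w_per_km / (flow_kg_s * CP)   # kelvin lost per kilometre

print(f"{temp_drop_per_km(40.0, 40.0, 10.0):.2f} K/km")   # ~0.07 K/km
# -> consistent with the sub-0.2 degC per kilometre figure quoted above
```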

Site selection is a real and honest constraint, and it should be named as one. Hyperscale campuses are not selected for proximity to biogas plants — they are selected for grid capacity, fiber density, water access, and land cost. Biogas plants are not selected for proximity to data centers — they are selected for feedstock supply and agricultural infrastructure. The 1 to 5 kilometre connection window will not be achievable on every site. What the data shows — in major hyperscale deployment corridors across Germany, the Netherlands, northern France, and the US Midwest — is that agricultural biogas density within that radius is high enough that viable connections exist at a meaningful proportion of candidate sites. This is a site-selection filter, not a universal blocker. It belongs in the feasibility study for each campus, not in the rejection criteria for the concept.

This is not a district heating project that happens to use data center heat. It is a purpose-built industrial process heat supply operating at low temperature, short distance, and continuous baseload. Those three conditions together eliminate the traditional failure modes of heat network projects almost entirely. The distance and loss question is a valid question for the wrong system. It is not a valid question for this one.

Q 07   ·   Advanced Mode

Could a data center just produce its own biogas?

If biogas is such a strong match, could a data center operator simply own the entire chain — feedstock, fermenter, gas storage, power generation — and close the loop completely? Why connect to anyone else at all?

Technically valid. Strategically wrong at this stage. A data center operator who owns feedstock procurement, anaerobic digestion, gas storage, and biomethane-fired generation has vertically integrated into a fundamentally different business — agricultural logistics, biological process management, waste handling regulatory compliance, and grid injection certification. These are not adjacent competencies. They are entirely separate industries with their own permitting regimes, operational expertise, liability frameworks, and supply chain dependencies. The risk profile of owning all of that is not incremental to running a hyperscale campus. It is multiplicative.

More practically: the biogas production model works at scale because agricultural operators and industrial waste processors already have the feedstock relationships, the planning permissions, the biological expertise, and the existing infrastructure. The data center brings one thing they do not have — continuous low-grade heat at exactly the right temperature. That is the contribution. That is the interface. The moment you try to own the biology as well, you are competing with operators who have spent decades building what you would be building from zero, and you are adding five to ten years of permitting and construction to a deployment timeline that hyperscalers measure in months.

The smarter architecture — and the faster one — is to let agricultural and industrial operators run the biology, connect to the grid, and absorb the heat through a standardized interface. The data center provides thermal supply. The biogas operator provides biomethane to the grid. Both parties do what they are good at. The interface is the value. Not the ownership of everything behind it.

The DC provides the heat. The operator provides the biology. The interface is the value — not the ownership of everything behind it.

Retrofit is technically feasible and commercially risk-weighted. Design-stage integration is the only version of this business case that is straightforwardly bankable.

Dimitri Wolf
Q 08   ·   Permits & Global Regulation

What does the permit landscape actually look like — globally?

Every country has its own regulatory framework. Germany has EnEfG and the EU Energy Efficiency Directive. The US has different rules. Asia has different rules again. Is this model actually globally deployable, or is it a European story?

It is globally deployable — and the regulatory picture, while fragmented in its specifics, is moving in one direction everywhere. The mechanisms differ by jurisdiction. The trajectory does not. Every major data center market is independently arriving at the same conclusion: the thermal output of hyperscale compute infrastructure is a recoverable resource, and failing to recover it will increasingly carry a regulatory and reputational cost. The convergence is not accidental. It is driven by the same underlying pressures — carbon commitments, energy security, industrial decarbonisation targets, and the political reality that a 500-megawatt campus consuming municipal grid capacity needs to demonstrate tangible local value beyond the 80 or so people it directly employs.

In Europe, the framework is the most advanced. Germany's Energieeffizienzgesetz requires data centers above 300 kilowatts to achieve an Energy Reuse Factor of 10 percent from July 2026, rising to 20 percent by July 2028 — with mandatory waste heat reporting from 2024 already in force. The EU Energy Efficiency Directive Article 24 extends equivalent obligations across all member states, with national transposition timelines running through 2025 and 2026. These are not voluntary targets. They are legal obligations with compliance costs already calculable.

In the United States, the picture is more fragmented at federal level but accelerating at state level. California's Title 24 energy code and the Climate Corporate Data Accountability Act create de facto thermal reporting obligations for large operators. The Inflation Reduction Act's industrial decarbonisation provisions create direct financial incentives for waste heat recovery investment. Several states are developing specific data center energy reuse standards modelled explicitly on the German framework. The IEA's Data Centres and Data Transmission Networks tracking report names waste heat reuse as a primary efficiency lever globally — providing the international benchmark that national regulators cite when building their own frameworks.

In the Asia-Pacific region, Singapore's Green Data Centre Roadmap sets a Power Usage Effectiveness target of 1.3 or below for new facilities and includes waste heat reuse as a qualifying efficiency measure. Japan's GX — Green Transformation — industrial policy framework includes data center thermal output in its circular economy planning. The UAE's Net Zero 2050 strategic initiative has created specific incentive structures for waste heat recovery at industrial facilities, including data centers, with petrochemical and desalination co-location as target applications. The permitting complexity is real in every market. But the direction of travel is identical, and the acceleration is visible. The question for any operator planning a campus today is not whether this regulatory environment will apply to them. It is whether they will be ahead of it or behind it when it does.

Regulatory Reference Points by Market

🇩🇪 Germany: EnEfG — ERF 10% from Jul 2026 · 20% by Jul 2028 · waste heat reporting from 2024

🇪🇺 EU: Energy Efficiency Directive Art. 24 — member state transposition 2025–2026

🇺🇸 USA: IRA industrial decarbonisation incentives · California Title 24 · state-level ERF frameworks developing

🇸🇬 Singapore: Green DC Roadmap — PUE ≤1.3 · waste heat reuse as qualifying measure

🇯🇵 Japan: GX Green Transformation — thermal circular economy planning

🇦🇪 UAE: Net Zero 2050 — waste heat recovery incentives for industrial & DC facilities

📊 IEA: Data Centre tracking report — waste heat reuse named primary efficiency lever globally

Q 09   ·   Retrofit vs. Design Stage

Is retrofit actually as hard as people say?

Industry pushback on heat reuse frequently starts with retrofit complexity. Some argue it is nearly impossible on operating campuses at scale. Where is the line between honest engineering assessment and defensive overstatement?

Retrofit is technically feasible and commercially risk-weighted in ways that design-stage integration avoids entirely. Those are two different statements and they need to be held together rather than traded against each other. On the technical feasibility side — plate heat exchanger skids, headers, isolation valves, secondary pump sets — none of these require new technology. A well-sequenced retrofit on an operating campus can be executed in phases, each tied to a scheduled maintenance window, without compromising the primary cooling SLA.

On the risk-weighting side, the difficulty is real. Existing campuses were not designed with a thermal export interface, meaning the hydraulic architecture may not naturally accommodate a bypass header at the required location. Documentation quality on older facilities varies. Redundancy certification for modified cooling circuits on N+1 or 2N infrastructure requires formal re-qualification. The cost consequence is substantial — retrofitting thermal export capability onto an existing campus typically runs to several million euros per 10 megawatts of thermal capacity. Designing the same interface in at the drawing stage costs a fraction of that. That is not anecdotal. That is the entire investment thesis.

Q 10   ·   The Economics

An investor wants numbers. Give them.

An investor will not respond to a narrative. What does the capital cost look like, what drives the business case, and where are the limits of what the economics can support?

At design stage, for a campus in the 20 to 100 megawatt range: a plate heat exchanger skid sized for 5 to 20 megawatts thermal output, with isolation valves, secondary pump set, instrumentation, and controls integration, comes in at approximately 80 to 200 thousand euros per megawatt thermal. At the 10-megawatt scale, that is roughly 800 thousand to 2 million euros for the skid and controls at the campus boundary. If the external application requires temperature uplift — lifting from a 45-degree condenser return to 70 degrees for a district heating connection — an industrial heat pump stage adds approximately 200 to 500 thousand euros per megawatt thermal.

Against that capex, a 10-megawatt thermal supply contract generates revenues in the range of 300 thousand to over 1 million euros per year under current European gas pricing and carbon trajectories, depending on utilization hours and temperature specification. Payback periods at design-stage capex are in the mid-single-digit years under conservative assumptions. The retrofit comparison makes this starker — retrofitting the same 10-megawatt interface costs 3 to 10 million euros once civil, documentation, and certification costs are included. Against the same revenue projection, that payback extends to 15 to 30 years. Design-stage integration is not just cheaper. It is the only version of this business case that is straightforwardly bankable.
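The payback comparison reduces to one division. A worked version of the ranges quoted above, evaluated at the conservative end of the revenue band:

```python
# Worked payback comparison at the 10 MWth scale, using the quoted ranges.

design_capex = (0.8e6, 2.0e6)      # EUR, design-stage skid + controls
retrofit_capex = (3.0e6, 10.0e6)   # EUR, incl. civil, docs, re-certification
revenue_conservative = 0.3e6       # EUR/yr, low end of the quoted band

def payback(capex: tuple[float, float], revenue: float) -> tuple[float, float]:
    return capex[0] / revenue, capex[1] / revenue

print("design-stage: %.1f to %.1f years" % payback(design_capex, revenue_conservative))
print("retrofit:     %.0f to %.0f years" % payback(retrofit_capex, revenue_conservative))
# -> design-stage ~2.7 to 6.7 years (mid-single-digit, as stated);
#    retrofit ~10 to 33 years, bracketing the 15-30 year figure quoted above
```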

Design-stage: €0.8–2M per 10 MWth. Retrofit: €3–10M per 10 MWth. That is not a marginal difference. That is the entire investment thesis.
Q 11   ·   Why This Time Is Different

The argument that lasts decades.

Infrastructure conferences have had heat reuse presentations for fifteen years. Almost none produced anything at scale. The reason is specific and worth naming precisely — they were designed as projects around known downstream users, not as infrastructure with a standardized interface. Someone identified a district heating network nearby, negotiated a bilateral heat supply agreement, and built a bespoke system engineered entirely around that single counterparty. Then the counterparty's business changed, or the permitting ran two years over, or the hyperscaler deployed on an 18-month cycle that the heating network could not match. The project collapsed. And the conclusion drawn — incorrectly — was that data center heat reuse does not work.

The correct conclusion should have been: bilateral bespoke projects without a standardized interface do not scale, and they do not survive counterparty change. That is a structural error, not a technology failure. The Thermal Plug is not a new idea applied to a new problem. It is the correct application of an architectural principle that has already proven itself in electrical substations, fiber interconnect points, and gas distribution networks — applied now, at the right moment, to thermal infrastructure.

What is different today is the convergence of three things that were not simultaneously true five years ago. Regulation sets a hard floor — Germany's EnEfG is law, the EU Energy Efficiency Directive is law, Singapore's Green DC Roadmap is policy, and the compliance cost of inaction is now calculable in every major market. Liquid cooling is maturing rapidly, with warm-water loops already operating at 40 to 60 degrees Celsius — which means export temperature quality is improving precisely as the regulatory window is opening. And capital appetite for data center infrastructure has never been larger — the campuses being designed today will represent hundreds of billions of euros of asset value over their operational lifetimes, and the thermal infrastructure decisions embedded in those designs will determine the bankable optionality of those assets for thirty years.

The window is not open indefinitely. The campuses being designed in 2026 will be locked in their thermal architecture within the next 18 months. After that, the retrofit cost calculation applies, and the economics look entirely different. The design stage is the only stage where this can be done at a cost that makes the business case unambiguous. The campuses being designed today will still be standing in 2050.

The data center exports heat when loads are high and exports cold when loads are low — one standardized interface, two directions, continuous value. That is the Reverse Thermal Plug. — Dimitri Wolf