AI Compute · Scaling Now

Beyond estimation. The AI compute measurement crisis.

Mālama Labs is bringing rack-level hardware-signed power sensing to AI data centers. The same verification pipeline that produced 2,786+ on-chain SaveCards from Dallas now extends to the largest unverified emissions source in the world: AI compute.

Watch · 60 seconds

The hardware answer to AI's biggest blind spot.

A short walkthrough of why estimation breaks at AI scale and how rack-level sensors with cryptographic signing close the gap. Same proven pipeline as Dallas, applied to the largest unmeasured emissions source on the planet.

Press play →
01 · The Crisis

The 19,000× reporting gap.

The AI industry has a measurement problem. There is no standardized methodology for measuring AI's environmental footprint. Companies report whatever they choose, if they report at all.

The Federation of American Scientists found that Meta's actual carbon emissions may be up to 19,000× higher than market-based reports suggest. This is not a rounding error. It is the difference between treating climate disclosure as a marketing exercise and treating it as physical reality.

As the AI industry accelerates, every model upgrade, every video generation, every reasoning step widens the gap. A single 5-second video generation consumes 944 Wh, enough to power a laptop for a full day. GPT-o3 uses 39.2 Wh per prompt, nearly 2,500× more than a lightweight text classifier.

Voluntary disclosures will not close this gap. Methodology committees will not close this gap. Hardware-signed measurement at the rack will.

02 · The Numbers

What aipower.fyi reveals.

Mālama's AI Energy Impact dashboard tracks 30 AI models with full methodology transparency. Every assumption is published with confidence levels. The contribution form is open. This is the estimation tier. Hardware sensors are next.

VIDEO GENERATION
944 Wh

Per 5-second clip. Equivalent to powering a laptop for a full day. Up to 1 liter of water consumed for cooling alone.

GPT-o3 REASONING
39.2 Wh

Per prompt. Nearly 2,500× more than a lightweight text classifier (0.016 Wh). Frontier reasoning is energy-intensive by design.

EFFICIENCY GAP
1,888,880×

Energy difference between most and least efficient AI tasks. Video generation versus text classification. Model choice matters enormously.

AGENTIC COMPOUNDING
3 to 10×

Multi-step agent workflows compound energy costs per task. The next generation of AI is more agentic by default.

Explore Dashboard ↗ | View Methodology →
03 · The Hardware Answer

Real-time AI power intelligence.

We are integrating specialized AI Data Center Power Sensors directly into rack-level infrastructure. This creates a high-fidelity, hardware-verified data stream that bypasses corporate guesswork.

DIRECT POWER SENSING

Per-Inference Wattage

Rack-level sensors measure exact electrical load per inference and training cycle. Not market-based estimates. Not vendor self-reports. Direct measurement at the source, signed at silicon.
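As an illustration of what "signed at silicon" implies, here is a minimal sketch of a signed rack telemetry packet. It uses HMAC-SHA256 from Python's standard library as a stand-in for a device-held key; a real sensor would hold an asymmetric key in hardware, and the field names and key here are hypothetical, not Mālama's wire format.

```python
import hashlib, hmac, json, time

# Illustrative stand-in for hardware signing: a per-device secret key and
# HMAC-SHA256. Field names and the key are hypothetical examples only.
DEVICE_KEY = b"per-device-secret-provisioned-at-manufacture"

def sign_reading(rack_id: str, watts: float, ts: float) -> dict:
    """Package a power reading and attach a signature over its contents."""
    packet = {"rack": rack_id, "watts": watts, "ts": ts}
    payload = json.dumps(packet, sort_keys=True).encode()
    packet["sig"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return packet

def verify_reading(packet: dict) -> bool:
    """Recompute the signature; any tampering with the reading breaks it."""
    body = {k: v for k, v in packet.items() if k != "sig"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, packet["sig"])

reading = sign_reading("rack-07", 11850.0, time.time())
assert verify_reading(reading)       # untampered packet verifies
reading["watts"] = 5000.0            # a downstream edit to the reading...
assert not verify_reading(reading)   # ...fails verification
```

The point of the sketch: once the reading is signed at the device, no later party in the pipeline can restate the number without invalidating the signature.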

WATER ATTRIBUTION

Cooling and Evaporation

Hardware-linked tracking of cooling energy and evaporation rates. A single video generation prompt can consume up to 1 liter of water. Hardware sensors close the attribution loop.

CARBON INTENSITY SYNC

Real Carbon, Not Average

Cross-references real-time grid carbon intensity with sensor-verified energy use for an absolute CO₂ figure. The same kilowatt-hour at different times and locations carries radically different emissions weight.
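The calculation the paragraph above describes is simple: sensor-verified energy times the grid's carbon intensity at the moment the energy was drawn. A minimal sketch, using the dashboard's 944 Wh video-generation figure and two illustrative grid intensities (the intensity values are assumptions, not Mālama data):

```python
def co2_grams(energy_wh: float, grid_intensity_g_per_kwh: float) -> float:
    """Absolute CO2 for a measured load at a given grid carbon intensity."""
    return (energy_wh / 1000.0) * grid_intensity_g_per_kwh

# The same kilowatt-hour carries very different emissions weight
# depending on when and where it is drawn:
clip_wh = 944.0  # one 5-second video generation (figure from the dashboard)
print(co2_grams(clip_wh, 650.0))  # coal-heavy grid hour: 613.6 g CO2
print(co2_grams(clip_wh, 50.0))   # renewables-heavy hour:  47.2 g CO2
```

A 13× swing in real emissions for the identical workload, which is why the sync must use real-time intensity rather than an annual grid average.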

04 · How It Connects

Same pipeline. Two upstream streams.

The AI compute product line is not a parallel stack. It is the same six-layer Reality Engine architecture that already runs in Dallas, with rack-mount hardware as a second class of signing device feeding the same Hex Node validators.

UPSTREAM:
Carbon SaveCards
Genesis 300 outdoor nodes. Soil, atmospheric, ERW telemetry from biochar and weathering sites.
VALIDATED BY:
Hex Node Network
Same validators. Same Cardano anchor. Same Proof-of-Truth consensus.
UPSTREAM:
AI Compute Packets
Rack-mount AI Power Sensors. PDU-level wattage, cooling, water, and grid carbon intensity.
VALIDATED BY:
Hex Node Network
Same validators. Same Cardano anchor. Same Proof-of-Truth consensus.
The Dallas Pilot Node #1 (op5pro-field-a) is the technology demonstration that proved Mālama's hardware-signing pipeline works. Its 2,786+ on-chain SaveCards establish credibility for the signing architecture itself, which the AI compute product line then extends to rack-level deployment. AI compute pilot deployments target Q2 2026.
05 · Who It's For

Three buyers. One verified data stream.

01 / DATA CENTER OPERATORS

AI Compute & Infrastructure Teams

Hardware-verified scope 2 emissions data (direct rack-level measurement of purchased electricity) ready for SEC climate disclosure, EU CSRD, and SBTi reporting. Replace estimation with measurement. Replace vendor self-reports with on-chain proof.

Scope 2: Verified power draw per inference at the rack level.

02 / PROCUREMENT & ESG TEAMS

AI Procurement Teams at Enterprises

You procure AI compute from hyperscalers and inference platforms and need verified scope 3 attribution per workload for corporate emissions reporting. Mālama provides the hardware-signed audit trail your framework requires.

Scope 3 Support: Use Mālama's verified power data to calculate upstream emissions from your AI service consumption.

03 / RESEARCHERS

Academic & Policy Researchers

Studying AI sustainability, energy use, and disclosure quality. Mālama's open dashboard, methodology transparency, and contribution form provide a public-good data layer for the field.

06 · Roadmap

From dashboard to deployed sensors.

LIVE NOW
aipower.fyi dashboard: 30 AI models tracked. Open methodology. Contribution form active.
● LIVE
Q2 2026
Rack sensor pilot: First AI Power Sensor deployment in a partner data center facility.
PARTNER LOI
Q3 2026
Hex Node validator integration: AI compute packets validated by the same Hex Node network that handles carbon SaveCards.
PLANNED
Q4 2026
Hyperscaler partnership program: Multi-site deployment, EU CSRD reporting integration, SBTi alignment.
PLANNED
Get updates → Talk to the Team →