COMING SOON

Two-Settlement
Compute Marketplace

Day-ahead scheduling. Real-time balancing. Network-level resource pooling. Market infrastructure for AI compute.

$6.7T
Data center capex by 2030
~30%
Average GPU utilization
0
Globally optimal compute markets

Market Inefficiencies

Current compute allocation relies on long-term bilateral contracts and static provisioning. This structure creates persistent inefficiencies in utilization, price discovery, and capacity planning.

Chronic Underutilization

Inference workloads spike. Training jobs batch. Operators overprovision for peak demand. Result: billions in stranded capacity sitting idle.

Tough Provider Economics

Independent datacenters are forced to accept wholesale offtakes to secure stable cashflows, compressing margins across newer players.

No Price Discovery

Long-term bilateral contracts dominate. Prices are negotiated privately, not discovered in open markets. No mechanism exists to determine real-time compute value.

Volatility Without Hedging

Demand swings wildly. Prices don't adjust. No forward market, no real-time balancing, no way to hedge exposure.

Market Architecture

Two-settlement design. Primary and secondary clearing. Network-level resource pooling. The structure that made electricity markets efficient—applied to AI compute.

T-1

Day-Ahead Market

Schedule predictable workloads 24 hours in advance. Lock in capacity commitments. Clear at auction-determined prices that reflect expected scarcity.

  • Hedge real-time exposure
  • Hardware & latency constraints
  • Batch training optimization
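As a sketch of how day-ahead clearing works (not the production clearing engine), a uniform-price auction matches buy bids against sell offers and lets the marginal accepted offer set one price for everyone. Function names and bid data below are illustrative:

```python
def clear_day_ahead(bids, offers):
    """Clear a uniform-price auction for one delivery window.

    bids:   list of (price, quantity) buy bids
    offers: list of (price, quantity) sell offers
    Returns (clearing_price, cleared_quantity).
    """
    # Sort demand high-to-low and supply low-to-high (merit order).
    bids = sorted(bids, key=lambda b: -b[0])
    offers = sorted(offers, key=lambda o: o[0])

    cleared, price = 0.0, None
    bi = oi = 0
    bid_left = off_left = 0.0
    while True:
        if bid_left == 0:
            if bi >= len(bids):
                break
            bid_price, bid_left = bids[bi]; bi += 1
        if off_left == 0:
            if oi >= len(offers):
                break
            off_price, off_left = offers[oi]; oi += 1
        if bid_price < off_price:
            break  # remaining trades would lose money; stop clearing
        q = min(bid_left, off_left)
        cleared += q
        bid_left -= q; off_left -= q
        price = off_price  # marginal offer sets the uniform price
    return price, cleared

# Example: two buyers, two sellers; the cheap offer clears, the
# expensive one does not.
price, qty = clear_day_ahead([(10.0, 5), (8.0, 5)], [(4.0, 5), (9.0, 5)])
```

Because every cleared megawatt-hour equivalent of capacity trades at the same price, participants have no incentive to shade bids toward their guess of the clearing price; that is the property that made this design standard in electricity markets.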
LIVE

Real-Time Balancing

Handle deviations as they occur. Dispatch flexible capacity to meet realized demand. Prices form continuously based on marginal cost of serving load.

  • Sub-second dispatch
  • Inference spike handling
  • Locational marginal pricing
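A minimal sketch of real-time balancing, assuming flexible units are dispatched cheapest-first (merit order) until realized load is met; the last unit dispatched sets the real-time price. Unit names and costs are hypothetical:

```python
def dispatch(units, load):
    """Merit-order dispatch of flexible capacity.

    units: list of (name, marginal_cost, capacity)
    load:  realized demand to serve
    Returns (real_time_price, {name: dispatched_quantity}).
    """
    schedule, price, remaining = {}, None, load
    for name, cost, cap in sorted(units, key=lambda u: u[1]):
        if remaining <= 0:
            break
        take = min(cap, remaining)   # cheapest capacity absorbs load first
        schedule[name] = take
        remaining -= take
        price = cost                 # marginal cost of the last unit served
    if remaining > 0:
        raise ValueError("insufficient flexible capacity for realized load")
    return price, schedule

# Example: 150 units of realized demand, three providers.
price, schedule = dispatch(
    [("provider-a", 2.0, 100), ("provider-b", 5.0, 50), ("provider-c", 3.0, 80)],
    load=150,
)
```

Running the same dispatch per region, with transport constraints between regions, is what produces locational marginal prices: the marginal unit, and hence the price, can differ by location.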
1 NGH

Normalized GPU Hour

Go beyond raw FLOPS. A standardized unit of compute that enables comparison across heterogeneous hardware and grounds price discovery in fundamentals. Performance factors derived from MLCommons benchmarks translate raw capacity into comparable, tradeable units.
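The conversion itself is simple once a reference device and benchmark-derived factors exist. This sketch assumes a hypothetical factor table; the model names and factor values are made up for illustration, not actual MLCommons results:

```python
# Hypothetical benchmark-derived throughput relative to a reference GPU.
PERF_FACTOR = {
    "reference-gpu": 1.00,
    "older-gpu":     0.45,
    "newer-gpu":     2.20,
}

def to_ngh(gpu_model: str, raw_hours: float) -> float:
    """Convert raw GPU-hours on a given model into Normalized GPU Hours."""
    return raw_hours * PERF_FACTOR[gpu_model]

# Ten hours on the slower card and one hour on the faster card both
# become comparable, tradeable quantities of the same unit.
slow = to_ngh("older-gpu", 10.0)
fast = to_ngh("newer-gpu", 1.0)
```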

Market Participants

Every participant benefits from transparent pricing and efficient allocation.

Compute Buyers

AI Labs • Enterprises • Researchers

  • Access capacity on-demand without long-term commitments
  • Hedge training costs with day-ahead scheduling
  • Scale inference elastically with real-time pricing

Compute Providers

Hyperscalers • Neo-Clouds • Colos

  • Monetize idle capacity through transparent markets
  • Reduce overprovisioning with network-level pooling
  • Increase margins by charging direct-to-user market prices

Liquidity Providers

Trading Firms • Market Makers

  • Earn availability payments for standby capacity
  • Capture spreads between day-ahead and real-time
  • Arbitrage regional price differentials

Network-Level Pooling

When workloads are pooled across the network, individual spikes become statistical noise. The law of large numbers absorbs volatility at scale.

  • Decrease: required buffer capacity
  • Increase: resource utilization
Network-Level Pooling Effect
Datacenter A ±45% variance
Datacenter B ±38% variance
Datacenter C ±52% variance
Pooled Network ±12% variance
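The variance reduction above can be demonstrated directly. In this sketch, independent per-datacenter loads are simulated and pooled; relative variability (coefficient of variation) of the pooled load falls roughly as 1/sqrt(N). All numbers here are illustrative, not measurements from real datacenters:

```python
import random
import statistics

random.seed(0)
N_DATACENTERS, SAMPLES = 16, 10_000
MEAN, SIGMA = 100.0, 40.0  # per-datacenter mean load and std dev (arbitrary units)

# One datacenter's load vs. the pooled load of N independent datacenters.
single = [random.gauss(MEAN, SIGMA) for _ in range(SAMPLES)]
pooled = [
    sum(random.gauss(MEAN, SIGMA) for _ in range(N_DATACENTERS))
    for _ in range(SAMPLES)
]

# Coefficient of variation: relative volatility of the load.
cv_single = statistics.stdev(single) / statistics.mean(single)
cv_pooled = statistics.stdev(pooled) / statistics.mean(pooled)
# With 16 independent datacenters, cv_pooled is roughly cv_single / 4.
```

Independent spikes partially cancel, so the pooled network needs proportionally far less buffer capacity than the sum of each datacenter's individual buffers.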

Interested?

We're looking for partners across the compute ecosystem—data centers, cloud providers, AI labs, and infrastructure investors. Reach out to explore how we can work together.

Get in Touch