Cloud GPU Pricing

NVIDIA H200 Price Comparison (April 2026)

Last update: April 1, 2026

Is the premium for the NVIDIA H200 worth it? With 141 GB of HBM3e memory, the H200 is designed for high-throughput inference and memory-intensive training, but that performance comes with a significant price tag.

In this guide, we break down the NVIDIA H200 pricing landscape for April 2026. We compare on-demand rates across hyperscalers like AWS and Azure alongside specialized GPU clouds like Lambda and RunPod to help you find the most cost-effective path for your workloads.

Key Takeaways

<ul><li><strong>H200 premiums remain steep.</strong> Even after AWS&#39;s June price cut, the cheapest hyperscaler H200 hour ($4.98) costs roughly 3.6 times more than Thunder Compute&#39;s H100 ($1.38).</li><li><strong>Specialist clouds narrow the gap.</strong> Lambda, RunPod, Jarvislabs, and Vast.ai all sit in the $2–4 range, but Thunder&#39;s H100 is still cheaper.</li><li><strong>Choose H200 only when you must:</strong> running massive models that overflow 80 GB of VRAM, or long-context inference. For prototyping, fine-tuning, and most training, the H100 80 GB wins on ROI.</li><li><strong>Thunder Compute roadmap.</strong> We don&#39;t offer H200 nodes yet; today you can launch an H100 80 GB at $1.38/hr (one-click VS Code, per-second billing, persistent volumes, live hardware swaps).</li></ul>

One-minute snapshot

[THUNDERTABLE:eyJoZWFkZXJzIjpbIlByb3ZpZGVyIiwiU0tVIC8gSW5zdGFuY2UiLCJPbi1EZW1hbmQgJC9HUFUtaHIqIiwiTm90ZXMiXSwicm93cyI6W1siQVdTIChwNWUuNDh4bGFyZ2UpIiwiOMOXIEgyMDAgMTQxR0IiLCIkNC45OCAoMS1kYXkgbWluaW11bSkiLCJDYXBhY2l0eS1CbG9ja3MgcHJpY2luZyBpcyBtaW5pbXVtIG9mIDEgZGF5OyBkaXZpZGUgJDM5Ljg0IGJ5IDggKEFtYXpvbiBXZWIgU2VydmljZXMsIEluYy4pIl0sWyJBenVyZSAoU3RhbmRhcmQgTkQ5NmlzciBIMjAwIHY1KSIsIjjDlyBIMjAwIiwiJDEwLjYwIiwiQ2FsY3VsYXRvciBwcmljZSAkODQuOCAvaHIgdG90YWwgKFB1YmxpYyBDbG91ZCBSZWZlcmVuY2UpIl0sWyJHb29nbGUgQ2xvdWQiLCJBMyBIMjAwIChvbi1kZW1hbmQpIiwiVEJBIiwiR29vZ2xlIGxpc3RzIEgyMDAgb25seSBhcyBTcG90IGZvciBub3c7IG9uLWRlbWFuZCBub3QgeWV0IHB1Ymxpc2hlZCAoSmFydmlzbGFicy5haSBEb2NzKSJdLFsiT3JhY2xlIENsb3VkIChCTS5HUFUuSDIwMC44KSIsIjjDlyBIMjAwIiwiJDEwLjAwIiwiQmFyZS1tZXRhbCBub2RlLCAkODAgL2hyIHRvdGFsIChPcmFjbGUpIl0sWyJMYW1iZGEgQ2xvdWQgKEhHWCBIMjAwKSIsIjHDlyBIMjAwIiwiJDMuNzkiLCJNaW51dGUtYmlsbGVkLCBubyBjb21taXRtZW50IChMYW1iZGEpIl0sWyJDb3JlV2VhdmUgKDggw5cgSDIwMCkiLCI4w5cgSDIwMCIsIiQ2LjMxIiwiJDUwLjQ0IC9ociBub2RlIC8gOCBHUFVzIChpb25zdHJlYW0uYWkpIl0sWyJSdW5Qb2QgKDggw5cgSDIwMCkiLCI4w5cgSDIwMCIsIiQzLjk5IiwiJDMxLjkyIC9ociBub2RlIC8gOCBHUFVzIChpb25zdHJlYW0uYWkpIl0sWyJKYXJ2aXNsYWJzIiwiMcOXIEgyMDAiLCIkMy44MCIsIlNpbmdsZS1HUFUgVk0sIHBheS1hcy15b3UtZ28gKEphcnZpc2xhYnMuYWkgRG9jcykiXSxbIlZhc3QuYWkiLCJNYXJrZXRwbGFjZSIsIuKJiCAkMi4yOSIsIkxvd2VzdCBjdXJyZW50IGhvc3QgbGlzdGluZyJdXX0=]

*Prices are normalized per single H200 even when a provider sells only 8-GPU nodes. US-region pricing, on-demand only (no spot, reserved, or contract rates).
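The normalization is simple division: take the published whole-node hourly price and split it evenly across the GPUs in the node. A minimal sketch, using the node prices quoted in the table above (the dictionary keys and helper name are illustrative):

```python
# Normalize published 8-GPU node prices to a per-GPU hourly rate.
# Figures are the on-demand node prices quoted in the table above.
NODE_PRICES = {
    "AWS p5e.48xlarge":      (39.84, 8),  # ($/hr for whole node, GPUs per node)
    "Azure ND96isr H200 v5": (84.80, 8),
    "Oracle BM.GPU.H200.8":  (80.00, 8),
    "CoreWeave 8x H200":     (50.44, 8),
    "RunPod 8x H200":        (31.92, 8),
}

def per_gpu_rate(node_price: float, gpus: int) -> float:
    """Divide the node price evenly across its GPUs."""
    return node_price / gpus

for name, (price, gpus) in NODE_PRICES.items():
    print(f"{name}: ${per_gpu_rate(price, gpus):.2f}/GPU-hr")
```

This is how, for example, AWS's $39.84/hr Capacity Block node becomes the $4.98/GPU-hr figure in the snapshot.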

Methodology – why you can trust these numbers

<ul><li><strong>On-demand only.</strong> We excluded capacity reservations longer than 14 days, reserved instances, and spot/pre-emptible offers.</li><li><strong>Same silicon.</strong> Every row is a 141 GB NVIDIA H200 (SXM or PCIe).</li><li><strong>Public price lists only.</strong> Figures come straight from each provider&#39;s pricing page in April 2026.</li><li><strong>US regions, USD.</strong> Regional variation can add 5-20 percent; those are ignored for apples-to-apples comparison.</li></ul>

H100 vs H200 cost benchmark across generations

[THUNDERTABLE:eyJoZWFkZXJzIjpbIlByb3ZpZGVyIiwiMTAgaHJzIHJ1bnRpbWUiLCJFZmZlY3RpdmUgY29zdCJdLCJyb3dzIjpbWyJUaHVuZGVyIENvbXB1dGUg4oCTIEgxMDAgODAgR0IiLCIxMCDDlyAkMS4zOCIsIiQxLjM4Il0sWyJWYXN0LmFpIOKAkyBIMjAwIiwiMTAgw5cgJDIuMjkiLCIkMjIuOSJdLFsiUnVuUG9kIOKAkyBIMjAwIiwiMTAgw5cgJDMuOTkiLCIkMzkuOSJdLFsiTGFtYmRhIOKAkyBIMjAwIiwiMTAgw5cgJDMuNzkiLCIkMzcuOSJdLFsiQVdTIOKAkyBIMjAwIiwiMTAgw5cgJDQuOTgiLCIkNDkuOCJdLFsiQ29yZXdlYXZlIC0gSDIwMCIsIjEwIHggJDYuMzAiLCIkNjMuMDAiXSxbIkF6dXJlIOKAkyBIMjAwIiwiMTAgw5cgJDEwLjYwIiwiJDEwNi4wIl1dfQ==]

Bottom line: two hours on Thunder Compute's H100 costs less than 30 minutes of an H200 on CoreWeave, and buys roughly 3.6–7.7× more runtime per dollar than hyperscaler H200s.
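The bottom-line math can be reproduced directly from the on-demand rates in the tables above; a minimal sketch (provider labels and the `runtime_ratio` helper are illustrative):

```python
# Runtime-per-dollar comparison against the H100 baseline,
# using the on-demand $/GPU-hr rates from the tables above.
RATES = {
    "Thunder Compute H100": 1.38,
    "Vast.ai H200": 2.29,
    "Lambda H200": 3.79,
    "RunPod H200": 3.99,
    "AWS H200": 4.98,
    "CoreWeave H200": 6.31,
    "Azure H200": 10.60,
}

BASELINE = "Thunder Compute H100"

def runtime_ratio(provider: str) -> float:
    """How many times more runtime per dollar the baseline buys vs. `provider`."""
    return RATES[provider] / RATES[BASELINE]

for name, rate in RATES.items():
    print(f"{name}: ${10 * rate:.2f} for 10 hrs; "
          f"baseline buys {runtime_ratio(name):.1f}x the runtime")
```

For instance, Azure's $10.60/hr rate works out to about 7.7× the baseline, while AWS's $4.98/hr is about 3.6×, which is where the 3.6–7.7× range comes from.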

Get the world's cheapest GPUs

Low prices, developer-first features, simple UX. Start building today.

Get started