A100 GPU Pricing Showdown 2025: Who's the Cheapest for Deep Learning Workloads?
What are the cheapest GPU clouds to rent a single A100 for deep learning?
Published: Apr 16, 2025 | Last updated: Apr 18, 2025

Below is a price snapshot (as of April 2025) you can use for your next project. All numbers are on‑demand, pay‑by‑the‑hour in a U.S. region unless noted.
Provider | SKU / Instance | On‑Demand $/GPU‑hr* | Notes |
---|---|---|---|
Thunder Compute | A100 40 GB | $0.57 | On-demand, price varies for multi-GPU nodes |
RunPod Community Cloud | A100 80 GB (40 GB not offered) | $1.19 | Marketplace pricing; Secure Cloud is ~50% more expensive |
Vast.ai | A100 SXM4 | $1.27 | Crowdsourced, frequent complaints about reliability |
Lambda GPU Cloud | 1 × A100 40 GB | $1.29 | Flat rate per GPU |
Paperspace | 1 × A100 40 GB | $3.09 | Lower prices available with 1-3 year commitments |
Azure | ND96asr A100 v4 (8 × A100) | $3.40 | $27.20/hr for 8 GPU box → ÷ 8 GPUs |
AWS EC2 | p4d.24xlarge (8 × A100) | $4.10 | $32.77/hr for 8 GPU VM |
Google Cloud | a2‑highgpu‑1g (1 × A100 40 GB) | $4.27 | Listed single‑GPU VM price |
*Per‑GPU costs for multi‑GPU nodes are averages for fair comparison; network, storage, and egress are not included.
Methodology (why you can trust these numbers)
On‑demand only: No reserved‑instance, commitment, or prepaid discounts.
Same class of silicon: Prices are for NVIDIA A100 40 GB cards wherever that SKU is sold; 80 GB cards (noted in the table) cost more.
Public price lists: Each figure comes from the vendor’s current pricing page on the date above; where a provider sells only 8‑GPU nodes we divided by eight to get a single‑GPU equivalent (worked example below).
USD in U.S. regions: Rates in other regions can differ by 5‑20%.
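
To make that normalization concrete, here is a minimal sketch of the divide‑by‑eight step, using the node list prices quoted in the table above (the variable names are just illustrative):

```python
# Convert 8-GPU node prices into single-GPU-hour equivalents.
# Node rates are the on-demand list prices quoted in the table above.
node_prices_per_hr = {
    "Azure ND96asr A100 v4": 27.20,  # 8 x A100 box
    "AWS p4d.24xlarge": 32.77,       # 8 x A100 VM
}

GPUS_PER_NODE = 8

for sku, node_rate in node_prices_per_hr.items():
    per_gpu_rate = node_rate / GPUS_PER_NODE
    print(f"{sku}: ${per_gpu_rate:.2f} per GPU-hour")
    # Azure ND96asr A100 v4: $3.40 per GPU-hour
    # AWS p4d.24xlarge: $4.10 per GPU-hour
```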
Why this matters for developers
Provider | 2 hrs runtime | Effective cost |
---|---|---|
Thunder Compute | 2 × $0.57 | $1.14 |
Lambda | 2 × $1.29 | $2.58 |
Paperspace | 2 × $3.09 | $6.18 |
Azure | 2 × $3.40 | $6.80 |
AWS | 2 × $4.10 | $8.20 |
Result: You get roughly six to seven hours on a budget cloud GPU provider like Thunder Compute for the price of one hour on the big clouds.
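
The same arithmetic scales linearly with runtime, so you can plug in your own job length. Below is a minimal sketch; the rate table and the estimate_cost helper are illustrative, using the April 2025 on‑demand snapshot above.

```python
# Rough on-demand cost estimate for a single-A100 job.
# Rates are the April 2025 per-GPU-hour snapshot from the table above;
# storage, network, and egress charges are not included.
RATES_PER_GPU_HR = {
    "Thunder Compute": 0.57,
    "Lambda": 1.29,
    "Paperspace": 3.09,
    "Azure": 3.40,
    "AWS": 4.10,
    "Google Cloud": 4.27,
}

def estimate_cost(hours: float, rate_per_gpu_hr: float, num_gpus: int = 1) -> float:
    """Linear model: GPU-hours times the hourly rate."""
    return hours * num_gpus * rate_per_gpu_hr

if __name__ == "__main__":
    runtime_hours = 2  # the two-hour run used in the table above
    for provider, rate in RATES_PER_GPU_HR.items():
        print(f"{provider}: ${estimate_cost(runtime_hours, rate):.2f}")
```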
Takeaways
Thunder Compute is ~7× cheaper than AWS or GCP for a single A100, ~5‑6× cheaper than Paperspace or Azure, and roughly 2× cheaper than Lambda, RunPod, or Vast.
Bookmark this page; we’ll refresh the numbers every quarter so you don’t have to.
Check out Thunder Compute for the cheapest A100s anywhere.

Carl Peterson