# Create new instance
Source: https://www.thundercompute.com/docs/api-reference/instances/create-new-instance

`POST /instances/create/{cpu_cores}/{template}/{gpu_type}` (OpenAPI spec: https://api.thundercompute.com:8443/openapi.json)
Create a new compute instance with specified configuration.
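
A minimal sketch of calling this endpoint with `curl`. The base URL is taken from the spec host above, and bearer-token authentication is an assumption; consult the OpenAPI spec for the authoritative auth scheme, request body, and response shape:

```bash
# Hypothetical values: 4 vCPUs, the ollama template, a T4 GPU.
# $TNR_API_TOKEN is an API token generated in the console.
curl -X POST \
  -H "Authorization: Bearer $TNR_API_TOKEN" \
  "https://api.thundercompute.com:8443/instances/create/4/ollama/t4"
```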



# Create new instance
Source: https://www.thundercompute.com/docs/api-reference/instances/create-new-instance-1

`POST /instances/create/{cpu_cores}` (OpenAPI spec: https://api.thundercompute.com:8443/openapi.json)
Create a new compute instance with specified configuration.



# Create new instance
Source: https://www.thundercompute.com/docs/api-reference/instances/create-new-instance-2

`POST /instances/create` (OpenAPI spec: https://api.thundercompute.com:8443/openapi.json)
Create a new compute instance with specified configuration.



# Delete instance
Source: https://www.thundercompute.com/docs/api-reference/instances/delete-instance

`POST /instances/{instance_id}/delete` (OpenAPI spec: https://api.thundercompute.com:8443/openapi.json)
Delete a compute instance.



# List user instances
Source: https://www.thundercompute.com/docs/api-reference/instances/list-user-instances

`GET /instances/list` (OpenAPI spec: https://api.thundercompute.com:8443/openapi.json)
Get a list of all instances for the authenticated user.
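
Assuming the same base URL and bearer-token auth as in the create example above, listing instances might look like:

```bash
curl -H "Authorization: Bearer $TNR_API_TOKEN" \
  "https://api.thundercompute.com:8443/instances/list"
```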



# Modify instance configuration
Source: https://www.thundercompute.com/docs/api-reference/instances/modify-instance-configuration

`POST /instances/{instance_id}/modify` (OpenAPI spec: https://api.thundercompute.com:8443/openapi.json)
Modify the configuration of a compute instance.



# Start instance
Source: https://www.thundercompute.com/docs/api-reference/instances/start-instance

`POST /instances/{instance_id}/up` (OpenAPI spec: https://api.thundercompute.com:8443/openapi.json)
Start a stopped compute instance.



# Stop instance
Source: https://www.thundercompute.com/docs/api-reference/instances/stop-instance

`POST /instances/{instance_id}/down` (OpenAPI spec: https://api.thundercompute.com:8443/openapi.json)
Stop a running compute instance.



# Create instance snapshot
Source: https://www.thundercompute.com/docs/api-reference/snapshots/create-instance-snapshot

`POST /instances/snapshot` (OpenAPI spec: https://api.thundercompute.com:8443/openapi.json)
Create a snapshot of a compute instance.



# Delete snapshot
Source: https://www.thundercompute.com/docs/api-reference/snapshots/delete-snapshot

`DELETE /snapshots/{snapshot_id}` (OpenAPI spec: https://api.thundercompute.com:8443/openapi.json)
Delete a user snapshot.



# Get user snapshots
Source: https://www.thundercompute.com/docs/api-reference/snapshots/get-user-snapshots

`GET /snapshots` (OpenAPI spec: https://api.thundercompute.com:8443/openapi.json)
Get a list of all snapshots for the authenticated user.



# Get available templates
Source: https://www.thundercompute.com/docs/api-reference/utilities/get-available-templates

`GET /thunder-templates` (OpenAPI spec: https://api.thundercompute.com:8443/openapi.json)
Get a list of all available templates, including default templates and user snapshots.



# Get current pricing
Source: https://www.thundercompute.com/docs/api-reference/utilities/get-current-pricing

`GET /pricing` (OpenAPI spec: https://api.thundercompute.com:8443/openapi.json)
Retrieve current hourly pricing information for compute resources.



# Get system status
Source: https://www.thundercompute.com/docs/api-reference/utilities/get-system-status

`GET /status` (OpenAPI spec: https://api.thundercompute.com:8443/openapi.json)
Show availability for each type of GPU.



# Billing
Source: https://www.thundercompute.com/docs/billing

Understand Thunder Compute's usage-based billing, payment methods, billing alerts, current rates, and tips for saving on GPU cloud costs.

## Payment Options

There are **two ways to pay** for Thunder Compute:

### Option 1: Save a Payment Method

Save a credit card through our Stripe portal to get automatically billed for usage. You can manage your payment method anytime by going to [console.thundercompute.com/settings/billing](https://console.thundercompute.com/settings/billing) and clicking "manage billing".

### Option 2: Preload Credit

Add credit directly to your account as an alternative to saving a payment method. This credit never expires and will be used before any saved payment method.

**Order of payment**

1. Any preloaded credit you've added
2. Charges to your saved payment method

You can switch between options or use both: even if you started with preloaded credit, you can save a payment method later.

## Billing Alerts

* **Instance reminders:** We'll email you about any running instances so you're never caught off guard.
* **Threshold charges:** As your usage grows, we'll bill your card at preset checkpoints (which rise over time) to prevent runaway bills.

## Our rates

All compute resources are billed per minute only while your instances run. Storage incurs charges even when instances are stopped. Rates and promotions are subject to change without notice. For current rates, see our [pricing page](https://www.thundercompute.com/pricing).

## Credit terms

* **Preloaded credit:** Credit you add to your account does not expire and will be used before charging your saved card.
* **Revocation:** Promotional credit can be revoked at our discretion.
* **Account policy:** We have a strict one-account-per-person policy.

## Money-Saving Tips

While Thunder Compute is already the cheapest GPU cloud platform, there are a few strategies we recommend to reduce your bill:

* Turn off instances when you're done.
* Right‑size with `tnr modify` to match your workload.
* Delete instances you no longer need.
* Use snapshots to compress long‑term data.

We think this approach balances a smooth experience with strong safeguards. If you have feedback or questions, please hop into our [Discord](https://discord.com/invite/nwuETS9jJK); we're always happy to improve!


# CLI Reference
Source: https://www.thundercompute.com/docs/cli-reference

Comprehensive reference for the Thunder Compute CLI. Manage instances (create, start, stop, delete), configure GPUs/CPUs, handle files, and use snapshots.

## Account Management

### Login

Authenticate the CLI, which provides a link to the [console](https://console.thundercompute.com/settings?tab=tokens) where you can generate an API token.

```
tnr login
```

Under the hood, this generates and saves an API token to `~/.thunder/token`. To authenticate programmatically, store a token file at that path or set the `TNR_API_TOKEN` environment variable in your shell.
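
For example, to authenticate non-interactively (such as in CI), export the variable before running commands; the token value is one you generate in the console:

```
export TNR_API_TOKEN="<your-token>"
tnr status
```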

### Logout

Log out of the CLI with:

```
tnr logout
```

This deletes the stored API token.

### API Token Management

* Generate/manage tokens in the [console](https://console.thundercompute.com/settings?tab=tokens)
* Tokens never expire but can be revoked
* Use unique tokens per device

## Managing Instances

### Create an Instance

Create a new Thunder Compute instance:

```
tnr create
```

This creates a new instance with default configuration and automatically assigns an instance ID.

#### CPU Configuration

Configure custom vCPU count:

```
tnr create --vcpus <vcpu_count>
```

Each vCPU comes with 8GB of RAM. For example, a 4-vCPU instance has 32GB of RAM, and an 8-vCPU instance has 64GB of RAM.

<Note>
  By default, 4 vCPUs and 32GB of memory are included with your instance. Additional vCPUs are
  billed hourly at the rates shown [here](https://www.thundercompute.com/pricing).
</Note>

#### GPU Configuration

Specify a GPU type:

```
tnr create --gpu <gpu_type>
```

Available GPU types:

* `t4`: NVIDIA T4 (16GB VRAM) - Best for most ML workloads
* `a100` (default): NVIDIA A100 (40GB VRAM) - For large models and high-performance computing
* `a100xl`: NVIDIA A100 (80GB VRAM) - For the largest models

Use the `--num-gpus` flag to attach multiple GPUs of the chosen type:

```
tnr create --gpu <gpu_type> --num-gpus <n>
```

#### Template Configuration

Templates make it easy to launch common AI tools quickly. Your instance comes pre-configured with everything you need to generate images, run an LLM, and more.

To use a template, add the `--template` flag when creating an instance:

```
tnr create --template <template_name>
```

Available templates:

* `ollama`: Ollama server environment
* `comfy-ui`: ComfyUI for AI image generation
* `webui-forge`: WebUI Forge for Stable Diffusion

After instance creation, start the server using `start-<template_name>` when connected. For example:

```
start-ollama
```

#### Mode Configuration

Choose between prototyping and production modes:

```
tnr create --mode <mode>
```

Available modes:

* `prototyping` (default): Development mode optimized for intermittent workloads
* `production`: Premium instance with maximum compatibility, stability, and reliability for production workloads

### Stop an Instance

Stops a running instance.

```
tnr stop <instance_ID>
```

<Note>
  Stopped instances continue to accrue storage cost.
</Note>

### Start an Instance

Starts a stopped instance.

```
tnr start <instance_ID>
```

### Delete an Instance

```
tnr delete <instance_ID>
```

<Warning>
  This action permanently removes an instance and all associated data.
</Warning>

## Using instances

### Connect to an Instance

Use the `connect` command to access your instance. This wraps SSH, managing keys while automatically setting up everything you need to get started.

```
tnr connect <instance_ID>
```

The instance must be running before you can connect to it. Check whether it's running and get the instance ID (default `0`) with `tnr status`.

### Port Forwarding

Connect with port forwarding with the `-t` or `--tunnel` flag:

```
tnr connect <instance_ID> -t PORT1 -t PORT2
```

Features:

* Forward multiple ports using repeated `-t/--tunnel` flags
* Example: `tnr connect 0 -t 8000 -t 8080` forwards both ports 8000 and 8080
* Enables local access to remote web servers, APIs, and services

### Copy Files

Transfer files between local and remote instance with the `scp` command:

```
tnr scp <source_path> <destination_path>
```

You can transfer files in either direction, from your local machine to an instance, or from the instance to your local machine. You indicate the direction of transfer with the path format, shown below.

Path format:

* Remote: `instance_id:path` (e.g., `0:/home/user/data`)
* Local: Standard paths (e.g., `./data` or `/home/user/file.txt`)
* Must specify exactly one remote and one local path
* Paths can be either absolute or relative.

Examples:

```
# Upload to instance
tnr scp ./local_file.txt 0:/remote/path/

# Download from instance
tnr scp 0:/remote/file.txt ./local_path/
```

<Note>
  File transfers have a 60-second connection timeout. SSH key setup,
  compression, and `~/` expansion are handled automatically.
</Note>

## Managing Snapshots

Snapshots capture the state of a stopped instance's disk, allowing you to create new instances from that point in time.

### Create a Snapshot

Create a snapshot from a stopped instance:

```
tnr snapshot <instance_ID> <snapshot_name>
```

* `<instance_ID>`: The ID of the instance to snapshot. The instance must be stopped.
* `<snapshot_name>`: A unique name for your snapshot.
  * Must contain only lowercase letters (a-z), numbers (0-9), and hyphens (-).
  * Must be between 1 and 62 characters long.

Snapshots are stored compressed to save space. You can view the compressed size with the `--list` flag after creation.

<Note>
  You can use a snapshot as a template to launch new instances. The snapshot
  defines the initial disk content and size. While you can modify other
  configuration options (like vCPU count or GPU type) during the `tnr create`
  command, the new instance's disk size must be equal to or greater than the
  original instance's disk size. Decreasing the disk size is not supported.
</Note>

For example:

```
tnr create --template <snapshot_name> [--gpu <new_gpu_type>] [--vcpus <new_vcpu_count>]
```

### List Snapshots

List all available snapshots and their details, including compressed size:

```
tnr snapshot --list
```

### Delete a Snapshot

Delete a specific snapshot by name:

```
tnr snapshot --delete <snapshot_name>
```

<Warning>
  This action permanently deletes the snapshot. It does not affect instances
  created from this snapshot.
</Warning>

## System Management

### Modify Instance

Modify the instance's vCPU count (and RAM), GPU type, or disk size:

```
tnr modify <instance_ID> \
  --disk-size-gb <new_size_GB> \
  --gpu <new_gpu_type> \
  --vcpus <new_vcpu_count> \
  --mode <mode>
```

All flags are optional, but at least one change must be provided.

These changes will affect the billing price of the instance.

Instances must be stopped to modify the vCPU count/RAM, GPU type, or mode. You can resize the disk at any time.

Each additional vCPU adds 8GB of RAM to your instance.

Available modes:

* `prototyping` (default): Optimized for cost-effective development
* `production`: Premium instances with maximum compatibility, stability, and reliability

<Warning>
  Storage can only be increased, not decreased. For smaller storage needs,
  create a new instance and transfer your files.
</Warning>

### View Instance Status

List all instances and details including `instance_ID`, `IP Address`, `Disk Size`, `GPU Type`, `GPU Count`, `vCPU Count`, `RAM`, and `Template`:

```
tnr status
```

Use the `--no-wait` flag to disable automatic monitoring for status updates.


# Compatibility
Source: https://www.thundercompute.com/docs/compatibility

Learn about Thunder Compute's technical specs, supported AI/ML libraries (PyTorch, Hugging Face), limitations, and strengths

## Use cases

Thunder Compute is optimized for AI/ML development workflows. That said, Thunder Compute has the full functionality of an EC2-style on-demand GPU cloud instance.

## CUDA versioning

* CUDA version 12.0 or greater (version 12.9 installed)
* CUDNN version 9.0 or greater

<Warning>
  Do not attempt to reinstall CUDA. If it seems like you need an older CUDA driver, you are almost always better off upgrading your other dependencies (e.g., PyTorch) instead.
</Warning>

## Officially supported libraries

The following libraries and tools are thoroughly tested:

* PyTorch (version 2.7.1 installed)
* Notebooks
* AI model serving tools like ComfyUI, Ollama, vLLM, Unsloth, and more

Note: make sure you install the CUDA-compatible version of these libraries. The CUDA-compatible PyTorch binary and the latest CUDA drivers come pre-installed on every Thunder Compute instance.
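
To sanity-check the pre-installed stack on a fresh instance (this assumes the default system Python with the bundled PyTorch):

```bash
# Prints the PyTorch version, the CUDA version it was built against,
# and whether a GPU is visible
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
```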

## Pre-installed libraries

* CUDA toolkit
* Docker (see [Docker on Thunder Compute](/guides/using-docker-on-thundercompute))
* PyTorch (and derivatives), Numpy, Pandas
* JupyterLab

## Technical specs

* Egress/Ingress: 7Gbps
* IP: dynamic
* Location: U.S. (region varies)
* E series CPU instances in Azure

## Experimental (less stable)

The following workloads are less tested, experimental, or unstable:

* TensorFlow [experimental]
* PyTorch Lightning [experimental]
* JAX [experimental]
* Custom CUDA kernels [unpredictable behavior, particularly with errors and profiling; message us for details]

<Note>
  If you encounter compatibility issues with these experimental workloads, consider switching to [Production mode](/docs/production-mode) for maximum compatibility and predictable performance.
</Note>

## Unsupported

Currently, Thunder Compute lacks official support for graphics workloads such as OpenGL, Vulkan, and FFmpeg. If you'd like to run these, contact us.

<Tip>
  For workloads requiring graphics support, custom kernels, or components incompatible with the prototyping tier, use [Production mode](/docs/production-mode) which provides maximum stability and reliability with all optimizations disabled.
</Tip>

## Cryptocurrency mining

Mining, staking, or otherwise interacting with cryptocurrency is strictly prohibited on Thunder Compute. If cryptocurrency-related activity is detected, the associated account is immediately banned from Thunder Compute and any billing credit is revoked. The account is then billed for the full amount of usage.

## Geographic availability

Thunder Compute is available only to B2B customers (i.e., a VAT ID or similar is required) in the following countries:

* United Arab Emirates
* Angola
* Bahrain
* Brazil
* Switzerland
* Côte d’Ivoire (Ivory Coast)
* Colombia
* Algeria
* Georgia
* Iraq
* Jordan
* Kazakhstan
* South Korea (Republic of Korea)
* Kuwait
* Morocco
* North Macedonia
* Oman
* Paraguay
* Qatar
* Saudi Arabia
* Tunisia
* Turkey (Türkiye)
* Tanzania
* Ukraine
* Uganda
* Uzbekistan
* Yemen
* India
* Moldova (Republic of Moldova)

Thunder Compute is not currently available in the following countries:

* Belarus
* China
* Cuba
* Indonesia
* Iran
* Kenya
* North Korea
* Malaysia
* Mexico
* Nigeria
* Russia
* Sudan
* Syria
* Uruguay

If you're located in one of these countries and need access to Thunder Compute, please contact us to discuss potential alternatives.

## Miscellaneous tips

We use a new kind of virtualization to maximize GPU utilization, reducing your cost. To learn more about how this works, check out this [blog post](https://www.thundercompute.com/blog/how-thunder-compute-works-gpu-over-tcp).

If you encounter any strange issues or errors, please check our [troubleshooting guide](/docs/troubleshooting) or contact us.

## Recommended Guides

To help you get started with Thunder Compute, we recommend checking out these guides:

* [Running Jupyter Notebooks](/guides/running-jupyter-notebooks-on-thunder-compute) - Use Jupyter for interactive development
* [Using Instance Templates](/guides/using-instance-templates) - Get started quickly with pre-configured environments


# Run DeepSeek R1 Affordably
Source: https://www.thundercompute.com/docs/guides/deepseek-r1-running-locally-on-thunder-compute

Run DeepSeek R1 affordably on Thunder Compute. This guide shows how to set up an A100 GPU instance and use Ollama for cost-effective model deployment.

# Easily Run DeepSeek R1 on Thunder Compute

Looking for the **cheapest way to run DeepSeek R1** or just want to **try DeepSeek R1** without buying hardware? Thunder Compute lets you spin up pay‑per‑minute A100 GPUs so you only pay for the time you use. Follow the steps below to get the model running in minutes.

> **Quick reminder:** Make sure your Thunder Compute account is set up. If not, start with our [Quickstart Guide](/quickstart).

If you prefer video instructions, watch this overview:

<iframe width="640" height="360" src="https://www.youtube.com/embed/EukG6P4s5QI?si=Sx3iWsISL8Ve58Uz" title="YouTube video player" frameborder="0" allowfullscreen />

## Step 1: Create a Cost‑Effective GPU Instance

Open your CLI and launch an 80 GB A100 GPU (perfect for the 70B variant):

```bash
tnr create --gpu "a100xl" --template "ollama"
```

For details on instance templates, see our [templates guide](/guides/using-instance-templates).

## Step 2: Check Status and Connect

Verify the instance is running:

```bash
tnr status
```

![Instance creation in CLI](https://mintlify.s3.us-west-1.amazonaws.com/thundercompute/images/instance_creation_cli.png)

Connect with its ID:

```bash
tnr connect <instance-id>
```

## Step 3: Start the Ollama Server

Inside the instance, start Ollama:

```bash
start-ollama
```

If you hit any hiccups, check our [troubleshooting guide](/docs/troubleshooting).

Wait about 30 seconds for the web UI to load.

![Ollama server startup](https://mintlify.s3.us-west-1.amazonaws.com/thundercompute/images/start_ollama.png)

## Step 4: Access the Web UI and Load DeepSeek R1

1. Visit `http://localhost:8080` in your browser.
2. Choose **DeepSeek R1** from the dropdown. On an 80 GB A100, pick the **70B** variant for peak performance.

![Web UI with model selection](https://mintlify.s3.us-west-1.amazonaws.com/thundercompute/images/web_ui_model_selection.png)

## Step 5: Run DeepSeek R1

Type a prompt in the web interface. For example:

> *"If the concepts of rCUDA were applied at scale, overcoming latency, what would it mean for the cost of GPUs on cloud providers?"*

The model will think through the answer and respond. A full reply can take up to 200 seconds.

![Model response in progress](https://mintlify.s3.us-west-1.amazonaws.com/thundercompute/images/model_response_in_progress.png)
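
Prefer the terminal over the web UI? Assuming the Ollama server is running on the instance, you can chat with the model directly from its shell (the model tag below is illustrative; run `ollama list` to see what is installed):

```bash
ollama run deepseek-r1:70b "Summarize the idea behind GPU-over-TCP in two sentences."
```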

## Conclusion

That's the **cheapest way to run DeepSeek R1** and a quick way to **try DeepSeek R1** on Thunder Compute. Explore more guides:

* [Using Docker on Thunder Compute](/guides/using-docker-on-thundercompute)
* [Using Instance Templates](/guides/using-instance-templates)
* [Running Jupyter notebooks](/guides/running-jupyter-notebooks-on-thunder-compute)

Happy building!


# Install Conda (Miniforge)
Source: https://www.thundercompute.com/docs/guides/installing-conda

Learn how to install Conda on Thunder Compute using the recommended Miniforge installer. Follow step-by-step instructions for setup and activation.

Miniforge is the recommended installer for Conda from the conda-forge project. It includes conda, mamba, and their dependencies. On Thunder Compute instances, you **must use Miniforge** as other Conda distributions (like Anaconda or Miniconda) may have compatibility issues with system libraries. For more details about system compatibility, see our [compatibility guide](/docs/compatibility).

## Installation Steps

### Create a new instance

1. Create a new instance by following the steps in the [Quickstart Guide](/quickstart) and connect to it.

### Install Miniforge

1. Download the Miniforge installer:

```bash
curl -L -O "https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-$(uname)-$(uname -m).sh"
```

2. Install Miniforge:

```bash
bash Miniforge3-$(uname)-$(uname -m).sh
# Accept the license agreement (Enter, Q, Enter)
# Confirm the installation location (yes)
# Allow the installer to initialize Miniforge3 (yes)
```

3. Activate the installation:

```bash
source ~/.bashrc
```
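
With Conda active, you can create an isolated environment for your work; the environment name and Python version below are just examples:

```bash
conda create -n my-project python=3.11 -y
conda activate my-project
```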

Once installed, you might want to check out our guide on [running Jupyter notebooks](/guides/running-jupyter-notebooks-on-thunder-compute) to start using your Conda environment for data science and machine learning tasks.

## Need Help?

If you encounter any issues with Conda installation or package management, please check our [troubleshooting guide](/docs/troubleshooting) or contact our support team.


# Install MCP Server
Source: https://www.thundercompute.com/docs/guides/mcp-server-for-managing-gpus

Install the Mintlify MCP server to host Thunder Compute docs locally. Enables AI tools like Cursor to provide instant answers based on documentation.

## TL;DR

```
# 1 – install the docs bundle
npx @mintlify/mcp@latest add thundercompute

# 2 – start the server
node ~/.mcp/thundercompute/src/index.js
```

Your **Thunder Compute MCP** server is now live at [**http://localhost:5001**](http://localhost:5001) and ready for any AI client.

## Connect in Cursor

1. Open **Cursor → Settings → Docs**.
2. **Add Source** → `http://localhost:5001`.
3. Ask something like *"How do I submit a batch to Thunder Compute?"*.

## Update docs

Run the install command again whenever you need the latest release:

```
npx @mintlify/mcp@latest add thundercompute
```


# Thunder Compute Referral Program
Source: https://www.thundercompute.com/docs/guides/referral-program

Earn credits by referring friends to Thunder Compute. Get 3% of every dollar your referrals spend on GPU instances with our lifetime rewards program.

**Refer a friend, earn credit.** Share your unique referral link and receive credits every time someone you refer spends on Thunder Compute GPUs.

<Note>
  This program is currently in beta. Terms may evolve as we improve the program based on user feedback.
</Note>

## How It Works

Our referral program rewards you with **3% of every dollar** your referrals spend on GPU instances. Here's what you need to know:

* **Reward Rate:** 3% of all spending by referred users
* **Duration:** Lifetime rewards for each referred customer
* **Credits:** Paid out in Thunder Compute credits (non-transferable)
* **Tracking:** Credits apply to paid, consumed compute resources and typically post within minutes of a finalized invoice.

We created this program as a way to give back to our community. Rather than paying advertisers, we want to reward you for your contribution to Thunder Compute.

By referring even a medium-size startup, you can often receive thousands of dollars of free compute.

## Getting Started

### 1. Find Your Referral Link

1. Sign in to the [Thunder Compute Console](https://console.thundercompute.com/)
2. Navigate to **Settings › General**
3. Copy your unique referral link
4. Share it anywhere—social media, tutorials, blog posts, or direct messages

### 2. Share and Earn

Once someone creates a new account using your link and starts using GPU instances, you'll automatically earn 3% of their payments as credits.

## Eligibility Requirements

### For Referrers

* Active Thunder Compute account in good standing
* No restrictions on sharing methods or platforms

### For Referrals

* Must create a **new account** via your referral link
* Existing accounts that sign up through referral links are not eligible
* Self-referrals and duplicate accounts are prohibited

<Warning>
  Credits are non-transferable and cannot be converted to cash. They can only be used for Thunder Compute services.
</Warning>

## Program Rules

### Fair Use Policy

We maintain strict anti-fraud measures to ensure program integrity:

* Creating fake accounts is prohibited
* Self-referrals will result in credit removal
* Violating Thunder Compute's Terms & Conditions may lead to account suspension
* All referral activity is monitored for suspicious patterns

### Program Changes

As a beta program, Thunder Compute reserves the right to:

* Modify reward rates or eligibility requirements
* Update program terms with advance notice
* Discontinue the program if necessary

We'll announce any changes through email notifications and documentation updates.

## Frequently Asked Questions

**Q: When do I receive my referral credits?**
A: Credits are typically added to your account within minutes of your referral's successful invoice.

**Q: Is there a limit to how much I can earn?**
A: No, there's no cap on referral earnings. The more successful referrals you make, the more you earn.

**Q: Can I refer existing Thunder Compute users?**
A: No, only new users who create accounts through your referral link are eligible.

**Q: What counts as a qualifying payment?**
A: Only direct card payments for GPU instances qualify for referral rewards. Usage covered by free or referral credits does not qualify.

## Need Help?

Have questions about referral eligibility, credit posting, or the program in general? Contact our support team:

* **Email:** [support@thundercompute.com](mailto:support@thundercompute.com)
* **Discord:** Join our [community server](https://discord.gg/nwuETS9jJK)

Thank you for giving back to the Thunder Compute community!


# Run Jupyter Notebooks
Source: https://www.thundercompute.com/docs/guides/running-jupyter-notebooks-on-thunder-compute

Set up and run Jupyter Notebooks on Thunder Compute's affordable cloud GPUs. Connect via VSCode, install extensions, and verify GPU access for ML/data science.

## Prerequisites for a Jupyter Notebook with Cloud GPU

* VSCode installed
* Thunder Compute extension installed in VSCode, Cursor, or Windsurf
* Jupyter Notebook extension installed in VSCode, Cursor, or Windsurf

## Steps to Launch Your Notebook

### 1. Connect to a Thunder Compute cloud GPU in VSCode

Follow the instructions in our [quickstart](/quickstart) guide to set up and connect to a remote instance in VSCode.

### 2. Install the Jupyter extension in your cloud workspace

Open the Extensions panel and install the Jupyter extension inside your Thunder Compute instance.

### 3. Verify GPU availability inside the notebook

Create a Jupyter Notebook, which is now connected to a Thunder Compute instance with GPU capabilities. To confirm that the GPU is accessible, run the following in a notebook cell:

```python
import torch
print(torch.cuda.is_available())
```

If everything is set up correctly, the output should be:

```
True
```

You now have a Jupyter Notebook running on a Thunder Compute cloud GPU, a fast and low-cost alternative to Colab for indie developers, researchers, and data scientists.


# SSH on Thunder Compute
Source: https://www.thundercompute.com/docs/guides/ssh-on-thunder-compute

Learn how to manually SSH into Thunder Compute instances and troubleshoot common SSH connection errors.

Thunder Compute gives indie developers, researchers, and data-scientists low-cost cloud GPUs in a few clicks. Our **CLI** (`tnr`) and **VS Code extension** wrap SSH setup, key management, and port-forwarding for you—see the [Quick-Start guide](/quickstart) for a full walkthrough.

## 1. Manually SSH into Thunder Compute

To manually SSH into your Thunder Compute instance:

1. **Connect once with the CLI** to set up your SSH configuration:
   ```bash
   tnr connect
   ```
   This automatically adds your instance as `tnr-0` in your SSH config and sets up the necessary keys.

2. **SSH directly** using the configured alias:
   ```bash
   ssh tnr-0
   ```

3. **Use with other IDEs**: You can also use `tnr-0` with other remote SSH tools like VS Code Remote-SSH, JetBrains Gateway, or any SSH-compatible IDE.

That's it! The CLI handles all the key management and configuration for you.

## 2. Troubleshooting SSH Errors

When you run `tnr connect`, the tool SSHs into your instance automatically. If something goes wrong, you might see errors such as:

* **Bad permissions** – *"Try removing permissions for user: OWNER RIGHTS (S-1-3-4) on file C:\Users\<your_username>\.ssh\config."*
* **Error reading SSH protocol banner** (often means the instance is out of memory and the SSH handshake cannot complete).
* **Key authentication failed** (your SSH key is outdated or misconfigured).

Follow the steps below to fix the problem.

### A. Restart the Instance

A quick restart clears many transient issues:

```bash
tnr stop
tnr start
```

Wait about a minute, then try `tnr connect` again.

### B. Test a Manual SSH Connection

Get a more detailed error message by bypassing `tnr connect`:

```bash
ssh tnr-0
```
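
If the manual connection also fails, run SSH in verbose mode to see where the handshake breaks:

```bash
ssh -v tnr-0   # -v prints connection and auth steps; use -vvv for maximum detail
```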

### C. Fix Common Issues

#### Out-of-Memory

If you see **Error reading SSH protocol banner**, the instance may have run out of RAM. Wait a few seconds and retry. For a permanent fix, launch an instance with more resources:

```bash
tnr create --vcpus 8
```

*Tip: 16–32 vCPUs generally provide enough memory for most ML workloads.*

#### Permissions Problems

<CodeGroup>
  ```powershell Windows
  # Run PowerShell as Administrator
  icacls "$env:USERPROFILE\.ssh\config" /reset
  icacls "$env:USERPROFILE\.ssh\config" /inheritance:r
  Rename-Item -Path "$env:USERPROFILE\.ssh\config" -NewName 'config.old'
  ```

  ```bash MacOS
  chmod 600 ~/.ssh/config
  chown $(whoami) ~/.ssh/config
  mv ~/.ssh/config ~/.ssh/config.old
  ```

  ```bash Linux/WSL
  chmod 600 ~/.ssh/config
  chown $(whoami) ~/.ssh/config
  mv ~/.ssh/config ~/.ssh/config.old
  ```
</CodeGroup>

#### Corrupted Known-Hosts or Thunder Compute Locks

**Known-Hosts**

<CodeGroup>
  ```powershell Windows
  Rename-Item -Path "$env:USERPROFILE\.ssh\known_hosts" -NewName 'known_hosts.old'
  ```

  ```bash MacOS
  mv ~/.ssh/known_hosts ~/.ssh/known_hosts.old
  ```

  ```bash Linux/WSL
  mv ~/.ssh/known_hosts ~/.ssh/known_hosts.old
  ```
</CodeGroup>

**Thunder Compute Locks & Keys**

<CodeGroup>
  ```powershell Windows
  Remove-Item -Recurse -Force "$env:USERPROFILE\.thunder\locks"
  Remove-Item -Recurse -Force "$env:USERPROFILE\.thunder\keys"
  ```

  ```bash MacOS
  rm -rf ~/.thunder/locks ~/.thunder/keys
  ```

  ```bash Linux/WSL
  rm -rf ~/.thunder/locks ~/.thunder/keys
  ```
</CodeGroup>

### D. Reinstall the Thunder Compute CLI or VS Code Extension

If the steps above do not resolve the error, reinstalling the tooling often does:

1. Remove the existing CLI or extension.
2. Download the latest installer from [Thunder Compute download](https://console.thundercompute.com/?download).
3. Re-run `tnr login` followed by `tnr connect`.

### E. Still Having Issues?

Open a ticket in our [Discord support channel](https://discord.gg/thundercompute) with the exact error output, and we will get you unblocked fast.

Happy troubleshooting!


# Using Docker
Source: https://www.thundercompute.com/docs/guides/using-docker-on-thundercompute

Learn how to use Docker with automatic GPU support on Thunder Compute instances. Run containers, manage images, and troubleshoot common Docker issues.

Docker containers on Thunder Compute instances now come with GPU support enabled through the "thunder" runtime. This means you can run Docker containers with GPU access without any additional configuration. For more information about GPU compatibility, see our [compatibility guide](/docs/compatibility).

<Warning>
  Thunder Compute is incompatible with the base `nvidia-container-toolkit`.
  Removing the existing container toolkit and installing your own will lead to issues running Docker containers.
</Warning>

## Getting Started

1. Connect to a Thunder Compute instance using the [quickstart guide](/quickstart)

2. Run your Docker containers as you would on a normal GPU instance: with `--runtime=nvidia` or `--gpus=all` if you need GPU support, otherwise without.

<Info>
  If you don't need GPU capabilities in the container, it makes more sense to use Docker's default `runc` runtime.
  No `--runtime` flag is needed in that case; `runc` is set as the default.
</Info>

## Example

```bash
# Run a container with GPU support
docker run --runtime=nvidia ubuntu:22.04 nvidia-smi
# Run Ollama server (see our DeepSeek R1 guide for an example use case)
docker run --runtime=nvidia -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
# Run a normal container with the `runc` runtime
docker run ubuntu:22.04
```

In a docker-compose file, that would look like this:

```yaml
services:
  ollama:
    image: ollama/ollama
    runtime: nvidia
    ports:
      - "11434:11434"
    volumes:
      - ollama:/root/.ollama
    restart: unless-stopped

  gpu-test:
    image: ubuntu:22.04
    runtime: nvidia
    command: nvidia-smi

volumes:
  ollama:
```

<Warning>
  If you get an error that looks like `docker: unexpected EOF`, try running the
  command again. For more troubleshooting tips, see our [troubleshooting
  guide](/docs/troubleshooting).
</Warning>

## Additional Info

### Supported Base Images

Most modern Docker images are supported:

* Ubuntu 22.04 and newer base images are fully supported
* Ubuntu 20.04 base images are supported in beta
* Other distributions like Alpine and Debian are supported

### Thunder Runtime

Thunder Compute instances replace the `nvidia` runtime with the `thunder` runtime for all Docker containers.
The `thunder` runtime behaves identically to the `nvidia` runtime while injecting the requirements needed for Thunder Compute GPU support.

## Need Help?

If you encounter any issues or have questions about Docker support, please contact our support team.


# Use Instance Templates for AI
Source: https://www.thundercompute.com/docs/guides/using-instance-templates

Quickly deploy LLMs (Ollama) and AI image generators (ComfyUI, WebUI Forge) on Thunder Compute using pre-configured instance templates. Get started fast.

Thunder Compute gives indie developers, researchers and data scientists instant access to **affordable cloud GPUs**. Our pre-configured **instance templates** set up popular AI stacks automatically, so you can **run LLMs** or **generate AI images** in minutes.

## AI Templates on Cheap Cloud GPUs

We currently offer:

* **Ollama** – launches an Ollama server for open-source large language models
* **ComfyUI** – installs ComfyUI for fast AI-image generation workflows
* **WebUI Forge** – deploys Stable Diffusion WebUI Forge with Flux-fp8 and Flux-fp4

## Deploy a Template

1. **Create an instance**

```bash
# Launch an Ollama instance
tnr create --template ollama

# Launch ComfyUI
tnr create --template comfy-ui

# Launch WebUI Forge (recommended GPU: A100)
tnr create --template webui-forge --gpu a100
```

<Warning>
  WebUI Forge ships with Flux-fp8 and Flux-fp4. For peak performance choose an A100 GPU.
</Warning>

2. **Connect to the instance**

```bash
tnr connect 0   # replace 0 with your instance ID
```

<Note>
  Port forwarding is handled automatically when you connect. The `-t` flag is unnecessary.
</Note>

3. **Start the service**

```bash
# Ollama
start-ollama

# ComfyUI
start-comfyui

# WebUI Forge
start-webui-forge
```

Required ports forward to your local machine automatically.

## Template Details

### Ollama Template

* Forwards port **11434**
* Access the API at `http://localhost:11434`
* Ready for popular Ollama models
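
Once `start-ollama` is running and ports are forwarded, you can hit the API from your local machine. A quick sketch (the model tag is illustrative and must already be pulled, e.g. with `ollama pull llama3`):

```bash
curl http://localhost:11434/api/generate -d '{"model": "llama3", "prompt": "Hello"}'
```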

### ComfyUI Template

* Forwards port **8188**
* Mounts the `ComfyUI` directory to your Mac or Linux host
* UI at `http://localhost:8188`
* Includes common nodes and extensions

### WebUI Forge Template

* Forwards port **7860**
* Mounts `stable-diffusion-webui-forge` to your host
* UI at `http://localhost:7860`
* Includes Flux-fp8 and Flux-fp4 models

## Need Help?

Encounter problems or have questions? Reach out to our support team any time.


# Production Mode
Source: https://www.thundercompute.com/docs/production-mode

Premium VM instances with maximum compatibility and predictable performance for production workloads

Production mode provisions a standard virtual machine (VM) on Thunder Compute with all low-level execution optimizations disabled. While this results in a higher hourly cost than the Prototyping tier, it guarantees maximum compatibility, predictable performance, and uninterrupted runtime.

## When to choose Production mode

* Long-running training jobs
* High-availability inference services
* Multi-GPU workloads
* Workloads that rely on graphics, custom kernels, or other components that are incompatible with the Prototyping tier

## Resources

Each Production instance is allocated **24 vCPUs** and **220 GiB RAM** per attached GPU. Currently, we offer A100 80GB GPUs in production mode.

## Switching between tiers

1. Stop the instance.
2. Select the desired tier in the CLI (`tnr modify <instance_id> --mode production`), the web console, or the VS Code extension.
3. Start the instance.

The operation takes a few seconds and does not affect data stored on attached volumes. You can switch back at any time.
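
As a concrete sketch of the full sequence from the CLI, assuming instance ID `0`:

```bash
tnr stop 0
tnr modify 0 --mode production
tnr start 0
```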


# Quickstart
Source: https://www.thundercompute.com/docs/quickstart

Welcome to Thunder Compute's open-access beta! Thunder Compute is a cloud GPU platform for AI/ML prototyping. It is built on a proprietary orchestration stack to give you the cheapest prices anywhere. The beta may be unstable; if you encounter issues, please reach out and we'll quickly put out a fix.

<Tabs>
  <Tab title="VSCode / Cursor / Windsurf (Recommended)">
    ### Installation

    Click the following links to access the Thunder Compute [VSCode extension](vscode:extension/ThunderCompute.thunder-compute), [Cursor extension](cursor:extension/ThunderCompute.thunder-compute), or [Windsurf extension](windsurf:extension/ThunderCompute.thunder-compute). You must have the editor installed for these links to work.

    ### Authentication

    You may be automatically prompted to login.

    If not, open the command palette with `ctrl + shift + p` and run `Thunder Compute:Login`

    Following the prompts, navigate to the [console](https://console.thundercompute.com/signup) and generate a login token.

    ### Add a Payment Method

    In the console, [add a payment method](https://console.thundercompute.com/settings/billing) to your account.

    ### Using The Extension

    You can create instances through the [console](https://console.thundercompute.com) or directly through the extension like so:

    ![Create Instance](https://mintlify.s3.us-west-1.amazonaws.com/thundercompute/images/Create_Instance.png)

    Click on the `Connect` button next to your instance, shaped like two arrows pointing towards each other.

    ![Connect to Instance](https://mintlify.s3.us-west-1.amazonaws.com/thundercompute/images/Connect_to_Instance.png)

    A new window will open connected to your instance. You can drag files you need into the file explorer, run notebooks, scripts, and more as if they were on your local machine.
  </Tab>

  <Tab title="CLI">
    <Tabs>
      <Tab title="Windows">
        ### Installation

        Download the installer for x64 from [here (most common)](https://storage.cloud.google.com/thunder-cli-executable/signed-releases-v2/windows/x64/tnr-installer.exe) or for ARM64 from [here](https://storage.cloud.google.com/thunder-cli-executable/signed-releases-v2/windows/arm64/tnr-installer.exe) and run the .exe file.

        ### Authentication

        Run the following command to log in to the CLI (from powershell)

        ```powershell
        tnr login
        ```

        After running this command, navigate to the console to [generate a token](https://console.thundercompute.com/settings/tokens).

        ### Add a Payment Method

        Visit the console to [add a payment method](https://console.thundercompute.com/settings/billing) to your account.

        ### Using Thunder Compute

        ```powershell
        # Create an instance
        tnr create

        # Check on your instance
        tnr status

        # Connect to a running instance
        tnr connect 0

        # (Optional) Start a template within a running instance
        start-<template-name> # Example: start-ollama
        ```
      </Tab>

      <Tab title="MacOS">
        ### Installation

        Download the installer for macOS from [here](https://storage.cloud.google.com/thunder-cli-executable/signed-releases-v2/darwin/thunder-cli-installer.pkg) and run the .pkg file.

        ### Authentication

        Run the following command to log in to the CLI (from your terminal):

        ```bash
        tnr login
        ```

        To retrieve the token, navigate to the console to [generate a token](https://console.thundercompute.com/settings/tokens).

        ### Add a Payment Method

        Visit the console to [add a payment method](https://console.thundercompute.com/settings/billing) to your account.

        ### Using Thunder Compute

        ```bash
        # Create an instance
        tnr create

        # Check on your instance
        tnr status

        # Connect to a running instance
        tnr connect 0

        # (Optional) Start a template within a running instance
        start-<template-name> # Example: start-ollama
        ```
      </Tab>

      <Tab title="Linux">
        ### Installation

        **Option 1: Install Script**

        ```bash
        curl -fsSL https://console.thundercompute.com/install.sh | sh
        ```

        **Option 2: Python Package (requires Python 3.7+)**

        ```bash
        pip install tnr
        ```

        ### Authentication

        ```bash
        tnr login
        ```

        After running this command, navigate to the console to [generate a token](https://console.thundercompute.com/settings/tokens).

        ### Add a Payment Method

        Visit the console to [add a payment method](https://console.thundercompute.com/settings/billing) to your account.

        ### Using Thunder Compute

        ```bash
        # Create an instance
        tnr create

        # Check on your instance
        tnr status

        # Connect to a running instance
        tnr connect 0

        # (Optional) Start a template within a running instance
        start-<template-name> # Example: start-ollama
        ```
      </Tab>
    </Tabs>
  </Tab>
</Tabs>

That's it! You're now ready to use Thunder Compute.

## Next Steps

* Visit our [Compatibility Guide](/docs/compatibility) to make sure your workload is compatible with Thunder Compute
* Learn how to [Run a Jupyter Notebook](/guides/running-jupyter-notebooks-on-thunder-compute)


# Storage and Networking
Source: https://www.thundercompute.com/docs/storage-and-networking

Technical specs, data retention policies, and other details about storage and networking of Thunder Compute instances

## Networking

* Egress/Ingress: 7-10Gbps
* IP: dynamic
* Region: U.S. Central (Iowa)
* E or N series CPU instances in GCP

### Accessing Ports

If you're using the CLI, you can tunnel ports with the `connect` command and the `-t` flag. For example, to tunnel port 6006, run:

```
tnr connect <instance_id> -t 6006
```

You can then access the service at `http://localhost:6006` in your browser. Alternatively, to expose the port through a public URL, run a tool like Cloudflared on the instance:

```
cloudflared tunnel --url http://localhost:6006
```

If you're using the VS Code extension, you can forward ports after connecting to an instance. To do this, use the “Ports” tab at the bottom of the VS Code window, as shown [here](https://code.visualstudio.com/docs/debugtest/port-forwarding).

## Storage

### Persistent Disk

By default, all instances use persistent disks. This enables a "stopped" state where data persists but compute usage is paused.

To stop all billing, delete instances that are no longer in use.

While we don't provide explicit guarantees, you can generally expect ~100k IOPS and 1,200 Mbps read/write.

### Snapshots

Snapshots provide a cheaper option for long-term storage, compressed to eliminate unused space.

You can also use snapshots to create new instances with the same data as existing instances.

To take a snapshot, stop your instance and use the snapshot feature in the console or CLI. This process typically takes ~2-3 minutes.
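
For example, from the CLI (the instance ID and snapshot name here are illustrative; see the [CLI reference](/docs/cli-reference)):

```
tnr stop 0
tnr snapshot 0 my-backup
```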

## Inactive-Instance Data Retention

Thunder Compute retains the persistent storage associated with an account for 60 days after the last time any instance in that account was running.

* If an account has been inactive for 60 consecutive days (defined by no running instances), all attached volumes, snapshots, and other instance-specific data are irreversibly deleted.
* This retention window applies per account (not per individual instance). Starting or stopping any instance resets the 60-day timer.
* Deletion is permanent and cannot be undone. Back up any critical data before the retention period ends.
* Aggregated billing, usage, and audit logs are retained according to our standard privacy policy and are not affected by this rule.


# Troubleshooting
Source: https://www.thundercompute.com/docs/troubleshooting

Troubleshoot common Thunder Compute errors. Find solutions for connection issues, function errors, SSH problems, and access logs. Get support via Discord.

## Common solutions

1. Disconnect from the instance with `ctrl + d`, then reconnect with `tnr connect <instance_id>`
2. Upgrade tnr. Depending on your install method, you may need to run `pip install tnr --upgrade` or re-download the binary from the website
3. Restart the instance by running `tnr stop <instance_id>` and `tnr start <instance_id>` (full documentation in [CLI operations](/docs/cli-reference))

## Logs

To assist troubleshooting, the GCP/AWS instance logs for each instance are saved to the file `/var/log/syslog`. On your local machine, you can view the CLI logs with `cat ~/.thunder/logs`. Sharing these with our team can help us quickly find a solution for your problem.

## Common errors

### Function not implemented

A common error you may encounter is some variant of "This function is not implemented." What this means is that your program touches a portion of the CUDA API that we do not currently support. Check our [compatibility guide](/docs/compatibility) for supported features, and if you encounter this, please contact us.

### SSH errors

If you encounter SSH-related errors (like `Error reading SSH protocol banner` or permission issues), first retry the command.

If that fails, see our detailed [SSH Troubleshooting Guide](/guides/ssh-on-thunder-compute) for step-by-step solutions.

For quick fixes, try restarting your instance with `tnr stop <instance_id>` and `tnr start <instance_id>`.

## Recommended Guides

To help prevent common issues and get the most out of Thunder Compute, we recommend these guides:

* [Using Docker](/guides/using-docker-on-thundercompute) - Learn about GPU-enabled containers and troubleshooting Docker issues
* [Installing Conda](/guides/installing-conda) - Proper setup of Conda environments to avoid dependency conflicts
* [Using Instance Templates](/guides/using-instance-templates) - Use pre-configured environments to minimize setup issues

## Production mode as a last resort

If you continue to experience compatibility issues or errors that cannot be resolved through the above methods, consider switching to [Production mode](/docs/production-mode). Production mode provides maximum stability and reliability with all low-level optimizations disabled, ensuring complete compatibility for workloads that encounter persistent issues in the prototyping tier.

To switch to production mode:

```
tnr modify <instance_id> --mode production
```

Note that production mode has a higher hourly cost but guarantees predictable performance and compatibility.

## Support

The fastest way to get support is to join [our discord](https://discord.gg/nwuETS9jJK). Our founding team will personally respond to help you as quickly as possible.