Compute Resources

Podstack offers multiple ways to deploy compute workloads, from containerized applications to full virtual machines and dedicated GPU instances.

Compute Options

Pods (Containers)

Pods are containerized workloads running on Kubernetes. They offer:

  • Fast deployment from Docker images
  • Fractional or whole GPU allocation
  • Web terminal and SSH access
  • Jupyter notebook integration
  • Auto-scaling with replicas

Best for: ML training, inference, Jupyter notebooks, containerized applications

Learn about Pods
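Fractional allocation means a pod can request a slice of a GPU rather than a whole device, which changes how many pods fit on a node. A toy calculation of packing density (the 0.25-GPU fraction and 8-GPU node size are made-up illustrative values, not Podstack defaults):

```python
def pods_per_node(gpus_per_node: int, fraction_per_pod: float) -> int:
    """How many pods fit on a node when each pod requests a GPU fraction.

    A fraction cannot span physical devices, so we pack pods per GPU,
    then multiply by the number of GPUs in the node.
    """
    per_gpu = int(1 / fraction_per_pod)  # pods sharing one physical GPU
    return gpus_per_node * per_gpu

print(pods_per_node(8, 1.0))    # whole GPUs: 8 pods
print(pods_per_node(8, 0.25))   # quarter-GPU pods: 32 pods
```

The same node serves 4x as many quarter-GPU pods, which is why fractional allocation suits small inference or notebook workloads.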

Virtual Machines

VMs provide full operating system control with:

  • Choice of Linux distributions (Ubuntu, CentOS, Debian, Rocky)
  • Configurable CPU, memory, and storage
  • GPU passthrough support
  • Persistent disk storage

Best for: Custom software stacks, legacy applications, full OS requirements

Learn about VMs

GPU Marketplace (Baremetal)

Reserve dedicated GPU instances from the marketplace:

  • Browse available inventory across multiple GPU types
  • Dedicated hardware with no virtualization overhead
  • Ideal for large-scale training jobs

Best for: Maximum GPU performance, dedicated resources

Explore GPU Marketplace

Comparing Options

Feature               Pods                  VMs                Baremetal
Deployment Speed      Fast (seconds)        Medium (minutes)   Varies
GPU Sharing           Fractional supported  Whole GPUs         Dedicated
OS Customization      Container image       Full OS            Full OS
Billing Granularity   Per-second            Per-hour           Per-hour
Best For              Dev/ML                Custom stacks      Production training

GPU Types Available

Podstack supports various NVIDIA GPUs:

GPU     Memory         Best For
A100    40GB / 80GB    Large model training
H100    80GB           Latest-generation training
H200    141GB          Memory-intensive workloads
V100    16GB / 32GB    Cost-effective training
L40S    48GB           Inference and training
T4      16GB           Budget inference

Availability varies by region and demand.
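A first-pass filter when choosing among the GPUs above is whether the model weights, plus some headroom for activations, fit in device memory. A small sketch using the memory figures from the table (the fp16 sizing heuristic and 1.2x headroom factor are illustrative assumptions, not Podstack guidance):

```python
# Memory per GPU in GB, from the table above (larger variant where two exist).
GPU_MEMORY_GB = {"A100": 80, "H100": 80, "H200": 141, "V100": 32, "L40S": 48, "T4": 16}

def gpus_that_fit(model_params_b: float, bytes_per_param: int = 2, headroom: float = 1.2):
    """Return GPUs whose memory holds the model weights plus headroom.

    model_params_b: parameter count in billions; bytes_per_param: 2 for fp16/bf16.
    The headroom factor very roughly covers activations and KV cache --
    a real estimate depends on batch size and sequence length.
    """
    need_gb = model_params_b * bytes_per_param * headroom
    return sorted(gpu for gpu, mem in GPU_MEMORY_GB.items() if mem >= need_gb)

print(gpus_that_fit(13))   # 13B fp16 -> ~31.2 GB needed
print(gpus_that_fit(70))   # 70B fp16 -> ~168 GB: no single GPU fits; shard across GPUs
```

Models too large for any single device need multi-GPU sharding, which is where dedicated marketplace instances are typically reserved.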

Next Steps