Pods (Containers)
Pods are containerized workloads that run on Podstack’s Kubernetes infrastructure. They provide a fast, flexible way to deploy GPU-accelerated applications.
What is a Pod?
A pod is a group of one or more containers that run together and share networking and storage. On Podstack, pods typically run a single container with:
- GPU access (optional)
- CPU and memory allocation
- Storage mounts
- Network connectivity
- SSH and web terminal access
Key Features
GPU Support
- Whole GPU allocation (1, 2, 4, or more GPUs)
- Multiple GPU types (A100, H100, V100, L40S, T4)
- CUDA and cuDNN pre-installed in most images
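Once connected to a pod, you can confirm that the GPUs are visible to the container. A minimal sketch (the `check_gpu` helper name is ours; `nvidia-smi` is standard in CUDA images):

```shell
# Inside a running pod, confirm the GPUs are visible to the container.
# Falls back to a message when no NVIDIA runtime is present (e.g. a CPU-only pod).
check_gpu() {
  if command -v nvidia-smi >/dev/null 2>&1; then
    # One line per GPU: model name and total memory
    nvidia-smi --query-gpu=name,memory.total --format=csv,noheader
  else
    echo "no NVIDIA GPU runtime detected"
  fi
}
check_gpu
```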
Container Images
Use any Docker image:
- Public images from Docker Hub, NGC, etc.
- Private registry images with authentication
- Custom images built for your workload
Access Methods
- SSH: Direct terminal access via assigned subdomain
- Web Terminal: Browser-based terminal
- Jupyter Notebook: Built-in notebook server (if enabled)
- Custom Ports: Expose any ports for web services
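The SSH-based access methods above follow standard OpenSSH usage. A sketch, assuming a placeholder subdomain (substitute the hostname shown for your pod); the commands are echoed rather than executed so the sketch runs anywhere:

```shell
# Hostname below is a placeholder for illustration, not a real Podstack subdomain.
pod_host="my-pod.podstack.example"

ssh_cmd="ssh root@${pod_host}"                              # direct terminal session
tunnel_cmd="ssh -N -L 8888:localhost:8888 root@${pod_host}" # forward a pod port (e.g. Jupyter) to localhost

echo "$ssh_cmd"
echo "$tunnel_cmd"
```

The `-L` tunnel is useful when a service listens inside the pod but its port is not publicly exposed: traffic to `localhost:8888` on your machine is forwarded through SSH to port 8888 in the pod.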
Volume Mounts
- Mount NFS volumes for persistent shared storage
- ConfigMaps for configuration files
- SSH keys automatically mounted for access
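To see which NFS volumes are attached from inside a pod, you can filter the mount table by filesystem type. A small sketch (the `list_nfs_mounts` helper name is ours):

```shell
# List NFS mounts visible inside the pod; prints a note when none are attached.
list_nfs_mounts() {
  mount -t nfs,nfs4 2>/dev/null | grep . || echo "no NFS volumes mounted"
}
list_nfs_mounts
```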
Pod Lifecycle
Creating → Pending → Running → (Stopped) → Terminated
| State | Description | Billing |
|---|---|---|
| Creating | Pod being provisioned | No |
| Pending | Waiting for resources | No |
| Running | Pod is active | Yes |
| Stopped | Paused by user | No |
| Terminated | Pod deleted | No |
Tip: Stop pods when not in use to pause billing while preserving configuration.
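The billing column of the table above can be mirrored as a tiny helper, e.g. in automation scripts that decide whether to stop idle pods (the `billing_for` function name is ours; the state names come from the table):

```shell
# Map a pod state (from the lifecycle table) to whether it accrues charges.
# Only Running pods are billed.
billing_for() {
  case "$1" in
    Running) echo "Yes" ;;
    Creating|Pending|Stopped|Terminated) echo "No" ;;
    *) echo "unknown state: $1" >&2; return 1 ;;
  esac
}

billing_for Running   # → Yes
billing_for Stopped   # → No
```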
In This Section
- Creating Pods - Deploy a new container
- Managing Pods - Start, stop, monitor, and delete
- Connecting to Pods - SSH, terminal, and notebook access