Lambda Docs
Home
Public Cloud
    Introduction
    Cloud Console
    Data management
        Filesystems
        Filesystem S3 Adapter
        Importing and exporting data
    Logging and monitoring
        Guest Agent
    Access and security
        Access and security overview
        Teams
        Firewalls
    Billing
        Billing overview
        Managing billing
    Cloud API
    On-Demand
        Overview
        Getting started
        Connecting to an instance
        Creating and managing instances
        Managing your system environment
        Troubleshooting
    1-Click Clusters
        Introduction
        Using Lambda's Managed Slurm
        Using Lambda's Managed Kubernetes
        How to serve the Llama 3.1 405B model using a Lambda 1-Click Cluster
        Security posture
        Support
    Additional resources
        Forum
        Blog
        YouTube
        Main site
        Tags index
Private Cloud
    Introduction
    Accessing your Lambda Private Cloud cluster
    Security posture
    Managed Kubernetes
        Overview
        Getting started
Hardware
    Introduction
    Servers
        Getting started
        Set lower power limits (TDPs) for NVIDIA GPUs
    Workstations
        Getting started
        Troubleshooting Workstations and Desktops
Education
    Introduction
    Tutorial: Getting started with training a machine learning model
    Using Multi-Instance GPU (MIG)
    Generative AI (GAI)
        How to serve the FLUX.1 prompt-to-image models using Lambda Cloud on-demand instances
        Fine-tuning the Mochi video generation model on GH200
    Large language models (LLMs)
        Deploying a Llama 3 inference endpoint
        Deploying Llama 3.2 3B in a Kubernetes (K8s) cluster
        Using KubeAI to deploy Nous Research's Hermes 3 and other LLMs
        Serving Llama 3.1 405B on a Lambda 1-Click Cluster
        Serving the Llama 3.1 8B and 70B models using Lambda Cloud on-demand instances
        Running DeepSeek-R1 70B using Ollama
        Deploying NVIDIA Nemotron 3 Nano using vLLM
    Linux usage and system administration
        Basic Linux commands and system administration
        Configuring Software RAID
        Lambda Stack and recovery images
        Troubleshooting and debugging
        Using the Lambda bug report to troubleshoot your system
        Using the nvidia-bug-report.log file to troubleshoot your system
    Programming
        Virtual environments and Docker containers
        Running Hugging Face Transformers and Diffusers on an NVIDIA GH200 instance
    Scheduling and orchestration
        Orchestrating AI workloads with dstack
        Using SkyPilot to deploy a Kubernetes cluster
    Benchmarking
        Running a PyTorch®-based benchmark on an NVIDIA GH200 instance