Balanced Memory
24GB HBM2 memory capacity
933 GB/s memory bandwidth
165 TFLOPS FP16 Tensor performance
Ampere Architecture
3rd Gen Tensor Cores
Multi-Instance GPU (MIG) with up to 4 instances
Mainstream AI
Inference optimized
8-GPU configurations
PCIe Gen4 support
Mainstream AI for Production
Perfect for inference deployments, edge AI, and cost-effective training. 24GB memory strikes the ideal balance between capacity and affordability.
Pricing for NVIDIA A30
Affordable mainstream AI performance. Get 24GB HBM2 and Ampere architecture for just ₹90/hr—perfect for production inference and training.
On-demand — ₹90/hr per GPU
Access up to 8 NVIDIA A30 GPUs instantly. Balanced 24GB memory and Ampere Tensor Cores deliver excellent price/performance for mainstream AI workloads. No commitments required.
Detailed Pricing Options
View all pricing tiers and configurations for A30
| Configuration | Hourly/On-Demand | Monthly | Annually |
|---|---|---|---|
| 1x NVIDIA A30 (Most Popular) | ₹90/hr | ₹40,000 | ₹4,32,000 |
| 2x NVIDIA A30 | ₹180/hr | ₹80,000 | ₹8,64,000 |
| 4x NVIDIA A30 | ₹360/hr | ₹1,60,000 | ₹17,28,000 |
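To compare the plans above, a quick back-of-the-envelope calculation shows when a reservation beats on-demand billing. This is a minimal sketch using only the 1x A30 rates from the table; the break-even point and effective hourly rates are derived, not quoted prices.

```python
# Compare on-demand vs. reserved pricing for 1x NVIDIA A30 (rates in INR,
# taken from the pricing table above).
ON_DEMAND_PER_HR = 90
MONTHLY = 40_000
ANNUAL = 432_000

# Hours of usage per month at which the monthly reservation becomes
# cheaper than paying on demand.
break_even_hours = MONTHLY / ON_DEMAND_PER_HR
print(f"Monthly plan breaks even at ~{break_even_hours:.0f} hrs/month")

# Effective hourly rate on each plan if the GPU runs 24/7
# (assuming a 30-day month for illustration).
hours_per_month = 24 * 30
print(f"Monthly plan, 24/7: Rs {MONTHLY / hours_per_month:.2f}/hr")
print(f"Annual plan, 24/7:  Rs {ANNUAL / (12 * hours_per_month):.2f}/hr")
```

In short: above roughly 444 GPU-hours a month, the monthly reservation is the cheaper option, and the annual plan brings the effective 24/7 rate down further still.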
Scale Your AI Infrastructure
Deploy A30 GPUs now. Balanced performance, affordable pricing.