The World's First AI System Built on NVIDIA A100
Built around the NVIDIA A100 Tensor Core GPU, the world's most advanced accelerator, this platform enables enterprises to consolidate training, inference, and analytics into a unified, easy-to-deploy AI infrastructure that includes direct access to NVIDIA AI experts.
A100 accelerates workloads big and small. Whether using MIG to partition an A100 GPU into smaller instances or NVLink to connect multiple GPUs to accelerate large-scale workloads, A100 can readily handle different-sized acceleration needs, from the smallest job to the biggest multi-node workload. A100's versatility means IT managers can maximize the utility of every GPU in their data center around the clock.
A100 delivers 312 teraFLOPS (TFLOPS) of deep learning performance. That's up to 20X the Tensor FLOPS for deep learning training and up to 20X the Tensor TOPS for deep learning inference compared to NVIDIA Volta™ GPUs.
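To tap that Tensor Core throughput in practice, frameworks enable mixed precision with only a few lines of code. Below is a minimal, hypothetical PyTorch sketch of an automatic mixed precision (AMP) training step; the tiny model, batch, and hyperparameters are placeholders for illustration only, and it assumes a CUDA GPU is available.

```python
# Minimal PyTorch automatic mixed precision (AMP) training step; on an
# A100, matmuls inside autocast run on Tensor Cores in FP16/TF32.
# The model, data, and sizes here are toy placeholders.
import torch

assert torch.cuda.is_available(), "requires a CUDA GPU such as the A100"
torch.backends.cuda.matmul.allow_tf32 = True   # let FP32 matmuls use TF32 Tensor Cores

model = torch.nn.Linear(1024, 1024).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()           # rescales the loss to avoid FP16 underflow

x = torch.randn(64, 1024, device="cuda")
target = torch.randn(64, 1024, device="cuda")

optimizer.zero_grad()
with torch.cuda.amp.autocast():                # ops run in FP16 where numerically safe
    loss = torch.nn.functional.mse_loss(model(x), target)
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
print(f"loss: {loss.item():.4f}")
```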
NVIDIA NVLink in A100 delivers 2X higher throughput compared to the previous generation. When combined with NVIDIA NVSwitch™, up to 16 A100 GPUs can be interconnected at up to 600 gigabytes per second (GB/sec) to unleash the highest application performance possible on a single server. NVLink is available in A100 SXM GPUs via HGX A100 server boards and in PCIe GPUs via an NVLink Bridge for up to 2 GPUs.
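As a rough way to see that interconnect at work, the hypothetical PyTorch snippet below times a single device-to-device copy between two GPUs; on NVLink-connected A100s, such as the multi-GPU plans below, this transfer runs over NVLink rather than PCIe. It assumes at least two CUDA devices are visible, and a one-shot timing like this is only a ballpark figure.

```python
# Rough timing of a GPU-to-GPU tensor copy; on NVLink-linked A100s this
# path avoids PCIe. Assumes at least two CUDA devices are visible.
import time
import torch

assert torch.cuda.device_count() >= 2, "needs two or more GPUs"
print("P2P 0 -> 1 supported:", torch.cuda.can_device_access_peer(0, 1))

src = torch.randn(256 * 1024 * 1024, device="cuda:0")   # ~1 GiB of FP32
torch.cuda.synchronize("cuda:0")
start = time.time()
dst = src.to("cuda:1")                                  # device-to-device copy
torch.cuda.synchronize("cuda:1")
elapsed = time.time() - start

gb = src.numel() * src.element_size() / 1e9
print(f"copied {gb:.1f} GB in {elapsed * 1e3:.1f} ms (~{gb / elapsed:.0f} GB/s)")
```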
An A100 GPU can be partitioned into as many as seven GPU instances, fully isolated at the hardware level with their own high-bandwidth memory, cache, and compute cores. MIG gives developers access to breakthrough acceleration for all their applications, and IT administrators can offer right-sized GPU acceleration for every job, optimizing utilization and expanding access to every user and application.
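For illustration, here is a hedged sketch of how MIG partitioning is typically driven with NVIDIA's standard nvidia-smi tool, wrapped in Python for consistency with the other examples. It assumes root access on an A100 host; profile ID 19 (the 1g.5gb slice on a 40 GB A100) is just one example profile, and enabling MIG mode may require a GPU reset.

```python
# Sketch: carving an A100 into MIG instances by calling nvidia-smi.
# Requires root on the GPU host; run "nvidia-smi mig -lgip" first to
# confirm which profile IDs your driver exposes.
import subprocess

def run(cmd: str) -> None:
    print("$", cmd)
    subprocess.run(cmd.split(), check=True)

run("nvidia-smi -i 0 -mig 1")             # enable MIG mode on GPU 0 (may need a GPU reset)
run("nvidia-smi mig -lgip")               # list available GPU instance profiles
run("nvidia-smi mig -i 0 -cgi 19,19 -C")  # create two 1g.5gb instances with compute instances
run("nvidia-smi -L")                      # list devices; MIG UUIDs work in CUDA_VISIBLE_DEVICES
```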
With 40 gigabytes (GB) of high-bandwidth memory (HBM2), A100 delivers improved raw bandwidth of 1.6 TB/sec, as well as higher dynamic random-access memory (DRAM) utilization efficiency at 95 percent. A100 delivers 1.7X higher memory bandwidth over the previous generation.
AI networks are big, with millions to billions of parameters. Not all of these parameters are needed for accurate predictions; some can be converted to zeros to make the models "sparse" without compromising accuracy. Tensor Cores in A100 can provide up to 2X higher performance for sparse models. While the sparsity feature more readily benefits AI inference, it can also improve the performance of model training.
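Concretely, A100's sparse Tensor Cores target a 2:4 structured pattern: in every group of four weights, at most two are non-zero. The toy PyTorch sketch below (the weight matrix and sizes are placeholders) prunes a dense matrix into that pattern by keeping the two largest-magnitude weights per group; in real training, this pruning is usually handled by library tooling rather than by hand.

```python
# Toy demo of 2:4 structured sparsity: zero the two smallest-magnitude
# weights in every group of four, keeping 50% of the weights.
import torch

w = torch.randn(8, 16)                        # placeholder dense weight matrix
groups = w.reshape(-1, 4)                     # consecutive groups of four weights
keep = groups.abs().topk(2, dim=1).indices    # indices of the 2 largest per group
mask = torch.zeros_like(groups, dtype=torch.bool)
mask.scatter_(1, keep, True)
w_sparse = (groups * mask).reshape(w.shape)

assert ((w_sparse.reshape(-1, 4) != 0).sum(dim=1) <= 2).all()
print(f"density after pruning: {(w_sparse != 0).float().mean():.0%}")  # ~50%
```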
Choose Tenure — Linux Plans
All prices in ₹ (INR), exclusive of GST; the tenure columns show the effective monthly price for each billing option.

Plan | OS | Graphics Processor | GPU Memory | vCPUs | Dedicated RAM | Disk Space | Hourly Rate | Billed Hourly | Billed Monthly | Billed Quarterly | Billed Half-Yearly | Billed Annually | Minimum Billing
---|---|---|---|---|---|---|---|---|---|---|---|---|---
GDC.A100-16.115GB | Ubuntu 16 / Ubuntu 18 / CentOS 7 | 1 x NVIDIA A100 | 40 GB | 16 vCPUs | 115 GB | 1500 GB SSD | 170/hr | 1,24,100 | 75,000 | 73,125 | 71,250 | 67,500 | NA
GDC.2xA100-32.230GB | Ubuntu 16 / Ubuntu 18 / CentOS 7 | 2 x NVIDIA A100 | 80 GB | 32 vCPUs | 230 GB | 3000 GB SSD | 340/hr | 2,48,200 | 1,50,000 | 1,46,250 | 1,42,500 | 1,35,000 | NA
GDC.4xA100-64.460GB | Ubuntu 16 / Ubuntu 18 / CentOS 7 | 4 x NVIDIA A100 | 160 GB | 64 vCPUs | 460 GB | 6000 GB SSD | 680/hr | 4,96,400 | 3,00,000 | 2,92,500 | 2,85,000 | 2,70,000 | NA
GDC.8xA100-128.920GB | Ubuntu 16 / Ubuntu 18 / CentOS 7 | 8 x NVIDIA A100 | 320 GB | 128 vCPUs | 920 GB | 6000 GB SSD | 1,360/hr | 9,92,800 | 7,00,000 | 6,82,500 | 6,65,000 | 6,30,000 | NA
Choose Tenure — Windows Plans
All prices in ₹ (INR), exclusive of GST; the tenure columns show the effective monthly price for each billing option.

Plan | OS | Graphics Processor | GPU Memory | vCPUs | Dedicated RAM | Disk Space | Hourly Rate | Billed Hourly | Billed Monthly | Billed Quarterly | Billed Half-Yearly | Billed Annually | Minimum Billing
---|---|---|---|---|---|---|---|---|---|---|---|---|---
GDCW.A100-16.115GB | Windows 2016 / Windows 2019 | 1 x NVIDIA A100 | 40 GB | 16 vCPUs | 115 GB | 1500 GB SSD | 175.54/hr | 1,28,147 | 79,047 | 77,172 | 75,297 | 71,547 | 5000
GDCW.2xA100-32.230GB | Windows 2016 / Windows 2019 | 2 x NVIDIA A100 | 80 GB | 32 vCPUs | 230 GB | 3000 GB SSD | 349.51/hr | 2,55,143 | 1,56,943 | 1,53,193 | 1,49,443 | 1,41,943 | 7500
GDCW.4xA100-64.460GB | Windows 2016 / Windows 2019 | 4 x NVIDIA A100 | 160 GB | 64 vCPUs | 460 GB | 6000 GB SSD | 697.45/hr | 5,09,135 | 3,12,735 | 3,05,235 | 2,97,735 | 2,82,735 | 12000
GDCW.8xA100-128.920GB | Windows 2016 / Windows 2019 | 8 x NVIDIA A100 | 320 GB | 128 vCPUs | 920 GB | 6000 GB SSD | 1,393.31/hr | 10,17,119 | 7,24,319 | 7,06,819 | 6,89,319 | 6,54,319 | 20000
Note: The NVIDIA Quadro vDWS (QvDWS) license is per user. For additional RDS licenses, please contact our sales team for details.
All E2E Networks GPU servers run in Indian data centers, reducing latency for users in India.
The NVIDIA A100 is suitable for a wide range of uses:
Train complex models at high speed to improve the predictions and decisions of your algorithms. Use any framework or library: TensorFlow, PyTorch, Caffe, MXNet, Auto-Keras, and many more.
Accelerate convolutional neural network (CNN)-based deep-learning workloads such as video analysis, facial recognition, medical imaging, and others.
Analyze and calculate large and complex financial data; perform high volumes of transactions in real time. Do accurate financial forecasting, faster.
Design and implement data-parallel algorithms that scale to hundreds of tightly coupled processing units: molecular modelling, fluid dynamics, and others.
Deal with large and continuously growing data sets, splitting the work across processors to crunch through voluminous data at a quicker rate (see the data-parallel sketch after this list).
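As a flavor of the data-parallel pattern these use cases share, the hypothetical PyTorch sketch below splits a single batch across every visible GPU with DataParallel; the model and batch are toy placeholders, and for serious multi-GPU or multi-node training DistributedDataParallel is the more common choice.

```python
# Split one batch across all visible GPUs (e.g. the 2x/4x/8x A100
# plans above) with PyTorch DataParallel. Toy model and data only.
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(512, 512),
    torch.nn.ReLU(),
    torch.nn.Linear(512, 10),
)
x = torch.randn(256, 512)

if torch.cuda.is_available():
    model = torch.nn.DataParallel(model.cuda())  # replicate model, scatter the batch
    x = x.cuda()

out = model(x)                                   # each GPU handles a slice of the 256 rows
print(out.shape, "computed on", max(torch.cuda.device_count(), 1), "device(s)")
```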
We at CamCom have been using E2E GPU servers for a while now, and the price-performance is the best in the Indian market. We have also always enjoyed a fast turnaround from the support and sales teams. I highly recommend the E2E GPU servers for machine learning, deep learning, and image processing purposes.
The Multi-Instance GPU (MIG) feature allows each GPU to be partitioned into as many as seven GPU instances, fully isolated from a performance and fault-isolation perspective.
Monthly prices shown are calculated using an assumed usage of 730 hours per month; actual monthly costs may vary based on the number of days in a month.
All prices are exclusive of taxes; plan pricing is subject to an 18% GST rate.
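As a worked example of these two notes, using the 1x A100 Linux plan's rate from the table above:

```python
# Reproduce the hourly-billed monthly price for GDC.A100-16.115GB and
# add 18% GST on top (prices in ₹; figures taken from the table above).
HOURLY_RATE = 170        # ₹/hr
HOURS_PER_MONTH = 730    # assumed usage, per the note above
GST = 0.18

base = HOURLY_RATE * HOURS_PER_MONTH
print(f"base monthly: ₹{base:,}")                 # 124,100, i.e. ₹1,24,100
print(f"with 18% GST: ₹{base * (1 + GST):,.0f}")  # 146,438
```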
E2E Networks Ltd is an India focused Cloud Computing Company and the first to bring contract-less cloud computing to the Indian startups and SMEs.
Co-founded by veterans Tarun Dua and Mohammed Imran, E2E Networks Cloud has been used by many successfully scaled-up startups, such as Zomato, Cardekho, Healthkart, Junglee Games, 1mg, and many more, during a significant part of their journey from startup stage to multi-million DAUs (Daily Active Users).
In 2018, E2E Networks Ltd issued its IPO through NSE Emerge, where it was oversubscribed 70 times. Today, E2E Networks is the largest NSE-listed cloud provider, having served more than 10,000 customers, with thousands of active customers.
E2E Networks has been the platform of choice for cloud infrastructure among Indian entrepreneurs since 2009.