NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration at every scale

A100 - the universal accelerator for AI infrastructure, enabling enterprises to consolidate training, inference, and analytics on the world's most advanced accelerator.

A100 provides up to 20X higher performance than the prior generation.

The NVIDIA A100 Tensor Core GPU enables enterprises to consolidate training, inference, and analytics into a unified, easy-to-deploy AI infrastructure that includes direct access to NVIDIA AI experts.

Learn more about NVIDIA A100
NVIDIA A100 Data Sheet
NVIDIA DGX A100

Specs

A100

CUDA Cores (parallel processing): 6912
Tensor Cores (machine & deep learning): 432
GPU Memory: 80 GB HBM2
GPU Memory Bandwidth: 1555 GB/s
Form Factor: PCIe

Benefits of E2E GPU Cloud

No Hidden Fees

No hidden or additional charges. What you see on the pricing charts is what you pay.

NVIDIA Certified CSP Partner

We are an NVIDIA-certified Cloud Service Provider partner.

NVIDIA Certified Hardware

We use NVIDIA-certified hardware for GPU-accelerated workloads.

Flexible Pricing

We offer everything from a pay-as-you-go model to long-tenure plans.

GPU-accelerated 1-click NGC Containers

E2E Cloud GPUs offer simple one-click support for NGC containers, so you can deploy NVIDIA-certified solutions for AI, ML, NLP, computer vision, and data science workloads.

Linux A100 GPU Dedicated Compute

| Plan | OS | GPU (cards × memory) | vCPUs (≥ 2.9 GHz) | Dedicated RAM | NVMe Disk Space | Price (Monthly) | Price (Hourly) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| GDC.A100-16.115GB | Ubuntu 16 / Ubuntu 18 / CentOS 7 | 1 × 40 GB | 16 | 115 GB | 1500 GB SSD | ₹75,000 | ₹102.75/hr |
| GDC.2xA100-32.230GB | Ubuntu 16 / Ubuntu 18 / CentOS 7 | 2 × 40 GB | 32 | 230 GB | 3000 GB SSD | ₹150,000 | ₹205.4/hr |
| GDC.4xA100-64.460GB | Ubuntu 16 / Ubuntu 18 / CentOS 7 | 4 × 40 GB | 64 | 460 GB | 6000 GB SSD | ₹300,000 | ₹410.9/hr |
| GDC.8xA100-128.920GB | Ubuntu 16 / Ubuntu 18 / CentOS 7 | 8 × 40 GB | 128 | 920 GB | 6000 GB SSD | ₹700,000 | ₹958.9/hr |
| GDC.A10080-16.115GB | Ubuntu 16 / Ubuntu 18 / CentOS 7 | 1 × 80 GB | 16 | 115 GB | 1500 GB SSD | ₹100,000 | ₹136.9/hr |
| GDC.2xA10080-32.230GB | Ubuntu 16 / Ubuntu 18 / CentOS 7 | 2 × 80 GB | 32 | 230 GB | 3000 GB SSD | ₹200,000 | ₹273.9/hr |
| GDC.4xA10080-64.460GB | Ubuntu 16 / Ubuntu 18 / CentOS 7 | 4 × 80 GB | 64 | 460 GB | 6000 GB SSD | ₹400,000 | ₹547.9/hr |
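A quick way to decide between hourly and monthly billing is to find the usage level at which the two cost the same. The sketch below uses the Linux plan prices from the table above; treating a month as roughly 730 hours is an assumption for illustration, not an E2E billing rule.

```python
# Break-even usage between hourly and monthly billing for the Linux A100 plans.
# Prices come from the table above; everything else is illustrative.

PLANS = {
    "GDC.A100-16.115GB":    (75_000, 102.75),
    "GDC.2xA100-32.230GB":  (150_000, 205.4),
    "GDC.4xA100-64.460GB":  (300_000, 410.9),
    "GDC.8xA100-128.920GB": (700_000, 958.9),
}

def break_even_hours(monthly: float, hourly: float) -> float:
    """Hours of usage at which hourly billing costs the same as the monthly plan."""
    return monthly / hourly

for plan, (monthly, hourly) in PLANS.items():
    hours = break_even_hours(monthly, hourly)
    print(f"{plan}: hourly billing is cheaper below ~{hours:.0f} hours/month")
```

For these plans the break-even sits near a full ~730-hour month, so hourly billing suits intermittent workloads while monthly billing pays off under near-continuous use.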

Windows A100 GPU Dedicated Compute

| Plan | GPU (cards × memory) | vCPUs (≥ 2.9 GHz) | Dedicated RAM | NVMe Disk Space | Licenses Bundle | Price (Monthly) | Price (Hourly) | Minimum Billing |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GDC.A100-16.115GB | 1 × 40 GB | 16 | 115 GB | 1500 GB SSD | 1× QvDWS, 1× RDS, Windows Standard Licenses | ₹79,047 | ₹108.2/hr | ₹5000 |
| GDC.2xA100-32.230GB | 2 × 40 GB | 32 | 230 GB | 3000 GB SSD | 1× QvDWS, 1× RDS, Windows Standard Licenses | ₹156,943 | ₹214.9/hr | ₹7500 |
| GDC.4xA100-64.460GB | 4 × 40 GB | 64 | 460 GB | 6000 GB SSD | 1× QvDWS, 1× RDS, Windows Standard Licenses | ₹312,735 | ₹428.4/hr | ₹12000 |
| GDC.8xA100-128.920GB | 8 × 40 GB | 128 | 920 GB | 6000 GB SSD | 1× QvDWS, 1× RDS, Windows Standard Licenses | ₹724,319 | ₹992.2/hr | ₹20000 |
| GDC.A10080-16.115GB | 1 × 80 GB | 16 | 115 GB | 1500 GB SSD | 1× QvDWS, 1× RDS, Windows Standard Licenses | ₹104,654 | ₹143.3/hr | ₹5000 |
| GDC.2xA10080-32.230GB | 2 × 80 GB | 32 | 230 GB | 3000 GB SSD | 1× QvDWS, 1× RDS, Windows Standard Licenses | ₹207,984 | ₹284.9/hr | ₹7500 |
| GDC.4xA10080-64.460GB | 4 × 80 GB | 64 | 460 GB | 6000 GB SSD | 1× QvDWS, 1× RDS, Windows Standard Licenses | ₹414,645 | ₹568/hr | ₹12000 |
Note:

Hypervisor Backend Connectivity - 40Gbps over Fiber
NVIDIA QvDWS is a per-user license. Additional RDS and other licenses are available on demand; contact our sales team for details (Sales@e2enetworks.com).

Multiple Use-cases, One Solution!

E2E’s GPU Cloud is suitable for a wide range of use cases.

AI Model Training and Inference:

Earlier GPUs were confined to domain-specific tasks: either training or inference. With the NVIDIA A100, you get the best of both worlds, a single accelerator for training as well as inference. Compared to earlier cards, training and inference can be sped up by 3X to 7X.

Image/Video Decoding:

One of the significant challenges in achieving high end-to-end throughput on a deep learning platform is keeping input video decoding performance matched to training and inference performance. The A100 addresses this with five NVDEC (NVIDIA Decode) units, compared to one unit in the earlier generation of GPU cards.
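Assuming decode throughput scales roughly linearly with the number of NVDEC units, the five-unit A100 gives about 5X the aggregate decode rate of a one-unit card. The per-unit frame rate below is a hypothetical placeholder, not a measured NVDEC figure.

```python
# Rough decode-throughput scaling with NVDEC unit count.
# Linear scaling and the per-unit rate are illustrative assumptions.

PER_UNIT_FPS = 600  # hypothetical frames/s one NVDEC unit can decode

def aggregate_decode_fps(nvdec_units: int, per_unit_fps: float = PER_UNIT_FPS) -> float:
    """Estimated total decode throughput across all NVDEC units."""
    return nvdec_units * per_unit_fps

prev_gen_fps = aggregate_decode_fps(1)  # earlier cards: 1 NVDEC unit
a100_fps = aggregate_decode_fps(5)      # A100: 5 NVDEC units
print(f"Decode headroom vs. prior generation: {a100_fps / prev_gen_fps:.0f}x")
print(f"Concurrent 30-fps streams the A100 could feed: {a100_fps / 30:.0f}")
```

The practical consequence is that decode is far less likely to become the bottleneck that starves training or inference of input frames.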

High-Performance Computing:

The A100 introduces double-precision Tensor Cores, enabling researchers to reduce a 10-hour double-precision simulation running on NVIDIA V100 Tensor Core GPUs to just four hours on the A100. HPC applications can also leverage TF32 precision in the A100’s Tensor Cores to achieve up to 10X higher throughput for single-precision dense matrix-multiply operations.

Language Model Training:

Natural Language Processing (NLP) has seen rapid progress in recent years, and it is no longer possible to fit the parameters of the largest models in the main memory of even the biggest GPU. The NVIDIA A100 is NVIDIA’s flagship product, and A100-based systems, scaled out with the new NVIDIA NVSwitch and Mellanox’s state-of-the-art InfiniBand and Ethernet interconnects, are the only solution that can train a 1-trillion-parameter model in a reasonable time.

Deep Video Analytics:

From media publishers to surveillance systems, deep video analytics is the new vogue for extracting actionable insights from streaming video. The NVIDIA A100’s memory bandwidth of 1.5 terabytes per second makes it a perfect choice for image recognition, contactless attendance, and other deep learning applications.
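To put 1.5 TB/s in perspective, the sketch below estimates how long it takes to sweep a batch of uncompressed video frames through GPU memory at the A100's quoted bandwidth. The 1080p frame size and batch size are illustrative assumptions.

```python
# What ~1.5 TB/s of memory bandwidth means for a video-analytics batch.
# Frame and batch sizes are illustrative, not a benchmark.

BANDWIDTH_GBPS = 1555            # A100 memory bandwidth in GB/s (from the spec above)
FRAME_BYTES = 1920 * 1080 * 3    # one uncompressed 1080p RGB frame
BATCH = 1024                     # hypothetical inference batch of frames

batch_gb = BATCH * FRAME_BYTES / 1e9
read_ms = batch_gb / BANDWIDTH_GBPS * 1000
print(f"Reading a {batch_gb:.2f} GB batch takes ~{read_ms:.2f} ms at {BANDWIDTH_GBPS} GB/s")
```

Even a thousand-frame batch moves through memory in a few milliseconds, which is why bandwidth-bound workloads like image recognition benefit so directly from HBM2.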

Accelerate machine learning and deep learning workloads with up to 70% cost savings.

How E2E GPU Cloud is helping Cloud Quest in their gaming journey

Latency is a critical part of Cloud Gaming. E2E GPU Cloud provided ultra-low network latency to Cloud Quest users and enhanced their gaming experience.