The NVIDIA® Tesla® V100 Tensor Core GPU is the most advanced data center GPU ever built to accelerate AI, high performance computing (HPC), data science, and graphics. It is powered by the NVIDIA Volta architecture, comes in 16GB and 32GB configurations, and offers the performance of up to 100 CPUs in a single GPU. Data scientists, researchers, and engineers can now spend less time optimizing memory usage and more time designing the next AI breakthrough.
By pairing CUDA cores and Tensor Cores within a unified architecture, a single server with V100 GPUs can replace hundreds of commodity CPU servers for traditional HPC and deep learning workloads.
Equipped with 640 Tensor Cores, V100 delivers 130 teraFLOPS (TFLOPS) of deep learning performance. That’s 12X Tensor FLOPS for deep learning training, and 6X Tensor FLOPS for deep learning inference when compared to NVIDIA Pascal™ GPUs.
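The training speed-up quoted above assumes the math runs in mixed precision, which is what routes matrix multiplies through the Tensor Cores. Below is a minimal sketch of what that looks like in PyTorch; the model, sizes, and optimizer are hypothetical placeholders, not a benchmark configuration.

```python
# Minimal mixed-precision sketch (illustrative only): FP16 matmuls inside
# autocast use the V100's Tensor Cores; GradScaler keeps gradients stable.
import torch
import torch.nn as nn
import torch.nn.functional as F

device = torch.device("cuda")  # assumes a V100 is visible to CUDA
model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()

data = torch.randn(256, 1024, device=device)        # dummy batch
target = torch.randint(0, 10, (256,), device=device)

for _ in range(10):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():                  # FP16 path -> Tensor Cores on Volta
        loss = F.cross_entropy(model(data), target)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```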
NVIDIA NVLink in V100 delivers 2X higher throughput compared to the previous generation. Up to eight V100 accelerators can be interconnected at up to 300 gigabytes per second (GB/sec) to unleash the highest application performance possible on a single server.
The new maximum efficiency mode allows data centers to achieve up to 40% higher compute capacity per rack within the existing power budget. In this mode, V100 runs at peak processing efficiency, providing up to 80% of the performance at half the power consumption.
With a combination of improved raw bandwidth of 900GB/s and higher DRAM utilization efficiency at 95%, V100 delivers 1.5X higher memory bandwidth over Pascal GPUs as measured on STREAM. V100 is now available in a 32GB configuration that doubles the memory of the standard 16GB offering.
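As a rough, unofficial way to probe that bandwidth from inside an instance, the sketch below times a large device-to-device copy with PyTorch; the tensor size and iteration count are arbitrary, and the measured figure will sit below the 900GB/s theoretical peak.

```python
# Rough effective-bandwidth probe (not the STREAM benchmark): time repeated
# device-to-device copies of a 1 GiB tensor and report GiB/s moved.
import time
import torch

x = torch.empty(1024 * 1024 * 256, dtype=torch.float32, device="cuda")  # 1 GiB
torch.cuda.synchronize()
t0 = time.time()
for _ in range(20):
    y = x.clone()            # each clone reads 1 GiB and writes 1 GiB
torch.cuda.synchronize()
elapsed = time.time() - t0

gib_moved = 20 * 2 * x.numel() * 4 / 1024**3
print(f"~{gib_moved / elapsed:.0f} GiB/s effective copy bandwidth")
```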
V100 is architected from the ground up to simplify programmability. Its new independent thread scheduling enables finer-grain synchronization and improves GPU utilization by sharing resources among small jobs.
Choose Tenure (Linux plans)
| Plan | OS | Graphics Processor | GPU Memory | vCPUs | Dedicated RAM | Disk Space | Hourly Rate | Monthly Price (Hourly / Monthly / Quarterly / Half-Yearly / Annual billing) | Minimum Billing |
|---|---|---|---|---|---|---|---|---|---|
| GDC.V100-8.120GB | Ubuntu 16 / Ubuntu 18 / CentOS 7 | 1x NVIDIA V100 | 32 GB | 8 | 120 GB | 900 GB SSD | 100/hr | 73,000 / 50,000 / 49,000 / 48,500 / 47,000 | NA |
| GDC.V100-16.180GB | Ubuntu 16 / Ubuntu 18 / CentOS 7 | 1x NVIDIA V100 | 32 GB | 16 | 180 GB | 1800 GB SSD | 120/hr | 87,600 / 60,000 / 58,800 / 58,200 / 56,400 | NA |
| GDC.2xV100-16.240GB | Ubuntu 16 / Ubuntu 18 / CentOS 7 | 2x NVIDIA V100 | 64 GB | 16 | 240 GB | 1800 GB SSD | 160/hr | 1,16,800 / 75,000 / 73,500 / 72,750 / 70,500 | NA |
| GDC.2xV100-32.360GB | Ubuntu 16 / Ubuntu 18 / CentOS 7 | 2x NVIDIA V100 | 64 GB | 32 | 360 GB | 3600 GB SSD | 200/hr | 1,46,000 / 90,000 / 88,200 / 87,300 / 84,600 | NA |
| GDC.4xV100-32.480GB | Ubuntu 16 / Ubuntu 18 / CentOS 7 | 4x NVIDIA V100 | 128 GB | 32 | 480 GB | 3600 GB SSD | 400/hr | 2,92,000 / 1,80,000 / 1,76,400 / 1,74,600 / 1,69,200 | NA |
| GDC.4xV100-64.720GB | Ubuntu 16 / Ubuntu 18 / CentOS 7 | 4x NVIDIA V100 | 128 GB | 64 | 720 GB | 6000 GB SSD | 500/hr | 3,65,000 / 2,40,000 / 2,35,200 / 2,32,800 / 2,25,600 | NA |
Choose Tenure (Windows plans)
| Plan | OS | Graphics Processor | GPU Memory | vCPUs | Dedicated RAM | Disk Space | Hourly Rate | Monthly Price (Hourly / Monthly / Quarterly / Half-Yearly / Annual billing) | Minimum Billing |
|---|---|---|---|---|---|---|---|---|---|
| GDCW.V100-8.120GB | Windows 2016 / Windows 2019 | 1x NVIDIA V100 | 32 GB | 8 | 120 GB | 900 GB SSD | 104/hr | 75,599 / 52,599 / 51,599 / 51,099 / 49,599 | 3,000 |
| GDCW.V100-16.180GB | Windows 2016 / Windows 2019 | 1x NVIDIA V100 | 32 GB | 16 | 180 GB | 1800 GB SSD | 126/hr | 91,647 / 64,047 / 62,847 / 62,247 / 60,447 | 3,500 |
| GDCW.2xV100-16.240GB | Windows 2016 / Windows 2019 | 2x NVIDIA V100 | 64 GB | 16 | 240 GB | 1800 GB SSD | 166/hr | 1,20,847 / 79,047 / 77,547 / 76,797 / 74,547 | 3,500 |
| GDCW.2xV100-32.360GB | Windows 2016 / Windows 2019 | 2x NVIDIA V100 | 64 GB | 32 | 360 GB | 3600 GB SSD | 210/hr | 1,52,943 / 96,943 / 95,143 / 94,243 / 91,543 | 7,500 |
| GDCW.4xV100-32.480GB | Windows 2016 / Windows 2019 | 4x NVIDIA V100 | 128 GB | 32 | 480 GB | 3600 GB SSD | 410/hr | 2,98,943 / 1,86,943 / 1,83,343 / 1,81,543 / 1,76,143 | 7,500 |
| GDCW.4xV100-64.720GB | Windows 2016 / Windows 2019 | 4x NVIDIA V100 | 128 GB | 64 | 720 GB | 6000 GB SSD | 517/hr | 3,77,735 / 2,52,735 / 2,47,935 / 2,45,535 / 2,38,335 | 13,500 |
Note: The NVIDIA Quadro vDWS (QvDWS) license is per user. For additional RDS licenses, please contact our sales team for more details.
All E2E Networks GPU servers run in Indian data centers, reducing latency.
The V100 is suitable for a wide range of uses:
Train complex models at high speed to improve the predictions and decisions of your algorithms. Use any framework or library: TensorFlow, PyTorch, Caffe, MXNet, Auto-Keras, and many more (see the sketch after this list).
Accelerate convolutional neural network (CNN) based deep learning workloads such as video analysis, facial recognition, medical imaging, and others.
Analyze and calculate large and complex financial data, and process high volumes of transactions in real time for faster, more accurate financial forecasting.
Design and implement data-parallel algorithms that scale to hundreds of tightly coupled processing units: molecular modelling, fluid dynamics and others
Deal with large and continuously growing data sets, splitting the work across processors to crunch through voluminous data at a faster rate.
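For the multi-GPU plans (2x and 4x V100), a minimal PyTorch sketch like the one below confirms that every GPU is visible and spreads a toy model across them with data parallelism; the model and batch are illustrative only, and DistributedDataParallel is usually preferred for real training runs.

```python
# Minimal multi-GPU sanity check (illustrative), assuming a 2x or 4x V100 plan
# with CUDA-enabled PyTorch installed on the instance.
import torch
import torch.nn as nn

print(torch.cuda.is_available(), torch.cuda.device_count())  # expect True, 2 or 4
for i in range(torch.cuda.device_count()):
    print(i, torch.cuda.get_device_name(i))                  # name should report a V100

model = nn.Linear(512, 512).cuda()
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)   # replicates the model; splits the batch across GPUs
out = model(torch.randn(64, 512, device="cuda"))
print(out.shape)                      # torch.Size([64, 512])
```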
We at CamCom have been using E2E GPU servers for a while now, and the price-performance is the best in the Indian market. We have also always enjoyed a fast turnaround from the support and sales teams. I highly recommend the E2E GPU servers for machine learning, deep learning, and image processing.
The Multi-Instance GPU (MIG) feature allows each GPU to be partitioned into as many as seven GPU instances, fully isolated from a performance and fault-isolation perspective.
Monthly prices shown are calculated using an assumed usage of 730 hours per month; actual monthly costs may vary based on the number of days in a month.
Prices are exclusive of taxes; all plan pricing is subject to an 18% GST rate.
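As a quick sanity check of how the two notes above combine, the sketch below reproduces the monthly figure for the smallest Linux plan from its hourly rate and then adds GST; the rate is taken from the GDC.V100-8.120GB row, and the GST treatment follows the note above.

```python
# How the listed figures combine (rates from the GDC.V100-8.120GB row).
hourly_rate = 100            # per-hour rate of the smallest Linux plan
hours_per_month = 730        # assumed usage behind the monthly figures above
gst = 0.18                   # GST applied on top of plan pricing

monthly_before_tax = hourly_rate * hours_per_month   # 73,000 - matches the table
monthly_with_gst = monthly_before_tax * (1 + gst)    # 86,140 payable including GST
print(monthly_before_tax, monthly_with_gst)
```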
E2E Networks Ltd is an India-focused cloud computing company and the first to bring contract-less cloud computing to Indian startups and SMEs.
Co-founded by veterans Tarun Dua and Mohammed Imran, E2E Networks Cloud was used by many successfully scaled-up startups such as Zomato, Cardekho, Healthkart, Junglee Games, 1mg, and many more to scale during a significant part of their journey from the startup stage to multi-million DAUs (Daily Active Users).
In 2018, E2E Networks Ltd launched its IPO on NSE Emerge and was oversubscribed 70 times. Today, E2E Networks is the largest NSE-listed cloud provider, having served more than 10,000 customers, with thousands of active customers.
E2E Networks has been the platform of choice for cloud infrastructure used by Indian entrepreneurs since 2009.