A Comparative Study of L40S vs H100 vs A100: A Deep Dive into Next-Gen GPUs and Cloud Solutions

September 13, 2023

Introduction

In the rapidly evolving world of technology, Graphics Processing Units (GPUs) have emerged as the backbone of advancements in Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning. These specialized electronic circuits were originally designed to accelerate the processing of images and video, but their application has extended far beyond graphics, and they now power complex computations across scientific and commercial domains.

As the demand for more computational power continues to grow, the role of GPUs has become increasingly critical. Enterprises are now faced with the challenging task of selecting the most suitable GPU to meet their specific needs, a decision that could significantly impact the efficiency and effectiveness of their operations.

This blog aims to shed light on the yet-to-be-released NVIDIA L40S, a GPU that promises groundbreaking features and performance capabilities. To provide a comprehensive understanding, we will compare the theoretical specifications and potential of the L40S with two other high-performing, extensively tested GPUs: the NVIDIA H100 and A100.

The Rise of Cloud GPUs

In today's data-driven world, the demand for computational power is at an all-time high. Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning are no longer futuristic concepts but essential technologies that drive everything from automated customer service to advanced data analytics. At the heart of these technologies lie Graphics Processing Units (GPUs), specialized hardware designed to handle the complex calculations that these advanced applications require.

The Cost Barrier

While the capabilities of modern GPUs are nothing short of revolutionary, their high costs often serve as a significant barrier to entry, particularly for small and medium-sized enterprises (SMEs). The financial burden of acquiring, maintaining, and upgrading a state-of-the-art GPU can be substantial. This cost factor often limits smaller organizations from fully leveraging the capabilities of advanced AI and ML technologies, putting them at a competitive disadvantage.

The Cloud Solution

This is where Cloud GPUs come into play, leveling the playing field by offering high-end computational power as a service. Cloud-based GPU solutions eliminate the need for a large upfront investment, offering instead a pay-as-you-go model that gives businesses the flexibility to scale their operations according to their needs. This approach democratizes access to essential technologies, making it feasible for organizations of all sizes to undertake complex computational tasks.

The benefits of Cloud GPUs extend beyond cost savings. The cloud model inherently offers unparalleled scalability and flexibility, allowing organizations to adapt to project requirements dynamically. Whether you're a startup looking to run initial ML models or an established enterprise aiming to scale your AI-driven analytics, Cloud GPUs provide the computational muscle you need, precisely when you need it.

Future-Proofing with Cloud GPUs

E2E Cloud stands as a prime example of how cloud-based solutions can make high-end computational power accessible. The platform not only offers the tried-and-tested NVIDIA A100 and H100 GPUs but also plans to include the yet-to-be-released and promising NVIDIA L40S. This range of offerings allows businesses to choose the GPU that best fits their specific needs, all while benefiting from a cost-effective, scalable model.

As we look towards a future where AI and ML technologies are set to become even more integral to business operations, the role of GPUs will only grow in importance. Cloud GPUs offer a sustainable, scalable way for organizations to stay ahead of the curve, providing the tools needed to innovate and excel in an increasingly competitive landscape.

Spotlight on NVIDIA L40S

Introduction to L40S

The NVIDIA L40S is a highly anticipated GPU, expected to be released by the end of 2023. While its predecessor, the L40, has already made a significant impact in the market, the L40S aims to take performance and versatility to the next level. Built on the Ada Lovelace architecture, this GPU is being touted as the most powerful universal GPU for data centers, offering unparalleled capabilities for AI training, Large Language Models (LLMs), and multi-workload environments.

Specifications

The L40S comes with an impressive set of specifications, per NVIDIA's datasheet:

  • Architecture: Ada Lovelace
  • GPU Memory: 48GB GDDR6 with ECC
  • Memory Bandwidth: 864GB/s
  • FP32 Performance: 91.6 teraFLOPS
  • Tensor Cores: 568 (fourth generation)
  • RT Cores: 142 (third generation)
  • Max Power Consumption: 350W

These specs make it a formidable competitor, even when compared theoretically to the tested A100 and H100 GPUs.

Version-Specific Features

As of now, the L40S is expected to be released in a single version. This is in contrast to the A100, which comes in 40GB and 80GB versions, and the H100, which has three different versions: H100 SXM, H100 PCIe, and H100 NVL. Each of these versions offers different performance metrics and is designed for specific use-cases, making the choice of GPU a critical decision for enterprises.

Theoretical vs Practical Performance

It's important to note that while the L40S offers promising theoretical capabilities, it has yet to be tested in real-world scenarios. Both the A100 and H100 have undergone extensive testing and have proven their reliability and performance. Therefore, while the L40S promises groundbreaking features, its practical performance remains to be seen.

Use-Cases and Industries

The L40S is designed to be a versatile GPU, capable of handling a variety of workloads. Its high computational power makes it ideal for AI and ML training, data analytics, and even advanced graphics rendering. Industries like healthcare, automotive, and financial services stand to benefit significantly from the capabilities of this GPU.

One of the standout features of the L40S is its ease of implementation. Unlike other GPUs that may require specialized knowledge or extensive setup, the L40S is designed to be user-friendly, allowing for quick and straightforward integration into existing systems.
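To make the "straightforward integration" point concrete, here is a minimal PyTorch sketch (assuming PyTorch with CUDA support is installed): because the L40S, A100, and H100 all expose the standard CUDA interface, the same script runs unchanged on any of them.

```python
import torch

# Standard device selection: this code is identical whether the machine
# has an L40S, A100, or H100 attached.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
if device.type == "cuda":
    print(f"Running on: {torch.cuda.get_device_name(device)}")

model = torch.nn.Linear(1024, 1024).to(device)  # toy model for illustration
x = torch.randn(64, 1024, device=device)       # a batch of dummy inputs
y = model(x)                                   # forward pass executes on the GPU
print(y.shape)                                 # torch.Size([64, 1024])
```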

NVIDIA H100: A Quick Overview

Introduction to H100

The NVIDIA H100 Tensor Core GPU is a powerhouse designed for accelerated computing, offering unprecedented performance, scalability, and security for data centers. Built on the NVIDIA Hopper architecture, the H100 is engineered to tackle exascale workloads and is particularly adept at handling Large Language Models (LLMs) and High-Performance Computing (HPC).

Versions and Specifications

The H100 comes in three distinct versions: H100 SXM, H100 PCIe, and H100 NVL. Each version is tailored for specific use cases and offers different performance metrics. For instance, the H100 SXM is designed for maximum performance, while the H100 NVL is optimized for power-constrained data center environments.

The H100 has been extensively tested and has proven its capabilities in real-world applications. NVIDIA reports up to 30X faster inference on large language models compared to its predecessor, the A100, and up to 4X faster GPT-3 175B training.

Use Cases and Industries

The H100 is a versatile GPU that can be employed across a range of industries, from healthcare and automotive to financial services. Its high computational power and scalability make it ideal for data analytics, AI and ML training, and advanced graphics rendering.

One of the standout features of the H100 is its Multi-Instance GPU (MIG) technology, which allows for secure partitioning of the GPU into as many as seven separate instances. This feature maximizes the utilization of each GPU and provides greater flexibility in provisioning resources, making it ideal for cloud service providers.
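As an illustration of how MIG instances look to software, the sketch below enumerates them with the nvidia-ml-py (pynvml) bindings. It assumes the library is installed and that MIG mode has already been enabled by an administrator; it is a read-only inspection, not the partitioning step itself.

```python
import pynvml  # pip install nvidia-ml-py

# Inspect MIG instances on the first physical GPU (read-only sketch).
pynvml.nvmlInit()
try:
    gpu = pynvml.nvmlDeviceGetHandleByIndex(0)
    current, _pending = pynvml.nvmlDeviceGetMigMode(gpu)
    if current == pynvml.NVML_DEVICE_MIG_ENABLE:
        # Walk the possible MIG slots; empty slots raise NVMLError.
        for slot in range(pynvml.nvmlDeviceGetMaxMigDeviceCount(gpu)):
            try:
                mig = pynvml.nvmlDeviceGetMigDeviceHandleByIndex(gpu, slot)
            except pynvml.NVMLError:
                continue
            mem = pynvml.nvmlDeviceGetMemoryInfo(mig)
            print(f"MIG slot {slot}: {mem.total / 1024**3:.1f} GiB of isolated memory")
    else:
        # Enabling MIG itself is an admin action, e.g.: nvidia-smi -i 0 -mig 1
        print("MIG is not enabled on this GPU.")
finally:
    pynvml.nvmlShutdown()
```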

NVIDIA A100

Introduction to A100

The NVIDIA A100 Tensor Core GPU has been the industry standard for data center computing, offering a balanced mix of computational power, versatility, and efficiency. Built on the NVIDIA Ampere architecture, the A100 has been the go-to choice for enterprises looking to accelerate a wide range of workloads, from AI and machine learning to data analytics.

Versions and Specifications

The A100 comes in two versions, with either 40GB or 80GB of memory. The 80GB version is better suited to workloads that involve larger data sets, while the 40GB version is more cost-effective for smaller-scale applications.

The A100 has been extensively tested in real-world scenarios and has proven to be a reliable workhorse for data center operations. It has been particularly effective at accelerating machine learning models and has set several performance benchmarks in the industry.
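For a first-order sense of which version a workload needs, a common back-of-envelope heuristic is roughly 16 bytes of GPU memory per model parameter for mixed-precision Adam training. The sketch below applies that heuristic; the model sizes are hypothetical, and the estimate ignores activation memory, so treat the results as lower bounds.

```python
def training_memory_gb(num_params: float, bytes_per_param: int = 16) -> float:
    """Rough memory footprint for mixed-precision Adam training.

    16 bytes/param = fp16 weights (2) + fp16 gradients (2)
                   + fp32 master weights (4) + Adam moments (8).
    Activation memory is ignored, so this is a lower bound.
    """
    return num_params * bytes_per_param / 1024**3

for name, params in [("1.3B params", 1.3e9), ("3B params", 3e9), ("7B params", 7e9)]:
    need = training_memory_gb(params)
    print(f"{name}: ~{need:.0f} GB -> fits 40GB: {need <= 40}, fits 80GB: {need <= 80}")
# ~19 GB fits the 40GB A100; ~45 GB needs the 80GB version;
# ~104 GB exceeds a single A100 entirely.
```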

Use Cases and Industries

The A100 is versatile and finds applications in various sectors, including healthcare, automotive, and financial services. Its computational power and efficiency make it a preferred choice for running complex simulations, data analytics, and machine learning algorithms.

As on the H100, a standout feature of the A100 is its Multi-Instance GPU (MIG) capability, which allows the GPU to be partitioned into as many as seven separate instances, maximizing resource utilization and offering greater flexibility for cloud service providers.

NVIDIA L40S and Its Comparison with Other GPUs

The NVIDIA L40S is a powerhouse GPU built on the Ada Lovelace architecture. It delivers an impressive 91.6 teraFLOPS of FP32 performance, making it a formidable competitor in the high-performance computing arena. With 48GB of GDDR6 memory, 864GB/s of memory bandwidth, and a 64GB/s bidirectional PCIe Gen4 interconnect, it is designed to handle data-intensive tasks with ease. The L40S also features 568 Tensor Cores and 142 RT Cores, providing robust capabilities for AI and ray tracing applications.
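Those two headline numbers (91.6 teraFLOPS of FP32 compute and 864GB/s of memory bandwidth) can be combined into a quick roofline-style calculation showing when the L40S is compute-bound versus bandwidth-bound. The figures come from the datasheet quoted above; the interpretation is a standard back-of-envelope exercise, not a benchmark.

```python
# Roofline "ridge point" for the L40S from the datasheet figures above.
peak_flops = 91.6e12   # FP32 operations per second
mem_bw = 864e9         # bytes per second of GDDR6 bandwidth

ridge = peak_flops / mem_bw
print(f"Ridge point: ~{ridge:.0f} FLOPs per byte")  # ~106

# A kernel must perform roughly 106 floating-point operations per byte
# of memory traffic to saturate the compute units; below that it is
# bandwidth-bound. This is why the H100's HBM3 bandwidth advantage
# matters so much for data-intensive workloads.
```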

Comparing the Specifications

  • Computational Power: In single-precision (FP32) throughput, the L40S (91.6 teraFLOPS) clearly outperforms the A100 (19.5 teraFLOPS), making it the more powerful choice for FP32-heavy computing tasks. The H100 series, especially the H100 NVL, shows a further leap in computational power, and it remains the clear leader for FP64 double-precision HPC workloads, which the L40S does not target.
  • Memory and Bandwidth: While the A100 offers HBM2e memory, the L40S opts for GDDR6. The H100 series goes a step further with HBM3 memory, offering the highest memory bandwidth among the three. This makes the H100 series particularly well-suited for data-intensive tasks.
  • Tensor and RT Cores: The L40S is the only GPU among the three to offer RT Cores, making it a better option for real-time ray tracing. However, all three GPUs offer Tensor Cores, crucial for AI and machine learning tasks.
  • Form Factor and Thermal Design: The L40S and A100 are relatively similar in form factor and thermal design, but the H100 series offers more flexibility, especially in its NVL version, which is designed for more demanding, power-constrained data center environments.
  • Additional Features: All three GPUs offer virtual GPU software support and secure boot features. However, only the L40S and H100 series offer NEBS Level 3 readiness, making them more suitable for enterprise data center operations.
  • Versatility: The L40S stands out for its versatility, offering a balanced set of features that make it suitable for a wide range of applications, from AI and machine learning to high-performance computing and data analytics.

In short, the L40S is aimed at everything from AI and machine learning to high-performance computing and data analytics, and its robust feature set makes it a practical choice for both small and large-scale operations.

Practical Implications: Making the Right Choice

When it comes to selecting a GPU for your organization, the choice is far from trivial. The right GPU can significantly impact the efficiency and effectiveness of your computational tasks, whether they involve AI, machine learning, or high-performance computing.

  • Workload Requirements: Different GPUs excel in different areas. For instance, the NVIDIA A100 is a versatile choice for a range of applications, but it offers neither the real-time ray tracing capabilities of the L40S (the only GPU of the three with RT Cores) nor the raw LLM training and inference performance of the H100 series.
  • Cost vs Performance: While the L40S and H100 series offer superior performance, they also come at a higher cost. For example, accessing the H100 on E2E Cloud costs 412 rupees per hour, while the A100 costs 170 rupees per hour for the 40GB version and 220 rupees for the 80GB version. Organizations must weigh these benefits against the financial implications; for smaller applications, the A100 can be the more cost-effective option (see the cost sketch after this list).
  • Energy Efficiency: Newer GPUs often offer better performance per watt, which can lead to long-term energy savings. For instance, the NVIDIA A100 has a max power consumption ranging from 250W to 400W depending on the version, the L40S consumes up to 350W, and the H100's thermal design power (TDP) can go up to 700W in its most powerful configuration. While the L40S and H100 series offer higher peak performance, they also draw more power, making the A100, particularly the 40GB version, a more energy-efficient option for certain tasks.
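To illustrate the cost-versus-performance trade-off with the rates quoted above, the sketch below compares a hypothetical 100-hour A100 training job against the same job on an H100, assuming NVIDIA's headline figure of up to 4X faster training. Both the job length and the speedup are assumptions; real speedups vary by workload.

```python
a100_rate, h100_rate = 170, 412   # rupees per GPU-hour (E2E Cloud rates above)
baseline_hours = 100              # hypothetical A100 training job
speedup = 4.0                     # assumed H100-vs-A100 training speedup

a100_cost = a100_rate * baseline_hours
h100_cost = h100_rate * (baseline_hours / speedup)
print(f"A100: Rs. {a100_cost:,} over {baseline_hours} hours")                    # Rs. 17,000
print(f"H100: Rs. {h100_cost:,.0f} over {baseline_hours / speedup:.0f} hours")  # Rs. 10,300

# Break-even speedup is the price ratio: 412 / 170 = ~2.4x. Above that,
# the pricier H100 is cheaper per run; below it, the A100 wins.
```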

E2E Cloud: Bridging the Gap

As the demand for high-performance computing, AI, and machine learning capabilities continues to grow, the cost and complexity of implementing and maintaining such technologies also rise. This is particularly challenging for smaller organizations that may not have the resources for a large upfront investment in hardware.

E2E Cloud offers a viable solution to this challenge by providing access to top-of-the-line GPUs like the A100 and H100 on its cloud platform. This eliminates the need for a hefty initial investment, making these powerful computing resources accessible to a broader range of organizations. Given the long waitlist for purchasing the latest GPUs, accessing them from E2E Cloud offers a distinct advantage. It allows organizations to get their hands on cutting-edge technology without the wait, enabling them to stay competitive and agile in a fast-paced market.

E2E Cloud's on-demand GPU access provides a practical and cost-effective way to meet varying computational needs. With the L40S expected to be available on the platform by the end of 2023, organizations will have the opportunity to test the new GPU in a real-world environment before making a long-term commitment. By offering a flexible and affordable solution, E2E Cloud is bridging the gap between the computational needs of organizations and the resources required to meet them.

Conclusion: The Future of GPU Computing

The landscape of GPU computing is evolving at an unprecedented pace, with each new model promising groundbreaking advancements in performance, scalability, and efficiency. The NVIDIA L40S, A100, and H100 each offer unique advantages and limitations, making the choice of GPU a critical decision for organizations looking to invest in AI, machine learning, or high-performance computing.

While the A100 has been a reliable workhorse, tested and proven in various applications, and the H100 offers cutting-edge performance at a premium price, the L40S stands as a promising newcomer. Its theoretical capabilities are impressive, but it remains to be seen how it will perform in real-world applications once it becomes available on E2E Cloud by the end of 2023.

References

  1. NVIDIA L40S product page: https://www.nvidia.com/en-in/data-center/l40s/
  2. NVIDIA L40S datasheet: https://resources.nvidia.com/en-us-l40s/l40s-datasheet-28413
  3. PNY, NVIDIA L40S: https://www.pny.com/nvidia-l40s
  4. NVIDIA H100 product page: https://www.nvidia.com/en-in/data-center/h100/
  5. NVIDIA A100 product page: https://www.nvidia.com/en-in/data-center/a100/