Top 8 Open-Source LLMs for Coding

May 8, 2024

The emergence of Large Language Models (LLMs) has sparked a new era of AI-assisted programming, helping developers streamline their coding processes and tackle complex problems more efficiently. Among the various LLMs available, open-source coding LLMs have gained significant attention due to their accessibility, transparency, and community-driven nature.

Open-source coding LLMs are powerful AI models that have been trained on vast amounts of programming-related data, including source code, documentation, and developer discussions. These models can understand and generate code in multiple programming languages, provide intelligent code suggestions, and even assist in debugging and optimization tasks. By leveraging the collective knowledge and expertise of the open-source community, these LLMs offer developers a valuable tool to enhance their productivity and overcome programming challenges.

Moreover, LLMs for coding provide significant benefits to software organizations. One of the key advantages is cost reduction compared to proprietary coding assistant subscriptions. By hosting open-source LLMs locally, organizations can avoid the recurring expenses associated with subscription-based services. 

In addition to cost savings, these LLMs for coding offer organizations greater control, customization, and privacy. By hosting these models within their own infrastructure, companies can ensure data security and compliance with privacy requirements. The open-source nature of these LLMs also allows organizations to customize and fine-tune the models to align with their specific coding practices, coding standards, and domain-specific requirements.

In this article, we will explore the top open-source coding LLMs that are making waves in the developer community.

1. Mistral 7B & Mixtral 8x7B

Mistral 7B and Mixtral 8x7B are two open-source language models developed by Mistral AI, both released under the Apache 2.0 license.

Mistral 7B is a 7.3B parameter model that outperforms Llama 2 13B on all benchmarks and even surpasses Llama 1 34B on many tasks. It approaches the performance of CodeLlama 7B on coding tasks while maintaining strong performance in English-language tasks. Mistral 7B uses techniques like Grouped Query Attention (GQA) for faster inference and Sliding Window Attention (SWA) to efficiently handle longer sequences.
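To make the Sliding Window Attention idea concrete, here is a minimal, framework-agnostic sketch (plain PyTorch, not taken from Mistral's codebase) of the attention mask SWA implies: each token attends only to itself and the previous few tokens instead of the full history.

import torch

def sliding_window_mask(seq_len: int, window: int) -> torch.Tensor:
    # Position i may attend to positions j where i - window < j <= i,
    # i.e. itself plus the previous (window - 1) tokens.
    i = torch.arange(seq_len).unsqueeze(1)
    j = torch.arange(seq_len).unsqueeze(0)
    return (j <= i) & (j > i - window)

# With a window of 3, token 5 attends to tokens 3, 4 and 5 only.
print(sliding_window_mask(seq_len=6, window=3).int())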

Mixtral 8x7B is a larger, 46.7B parameter Sparse Mixture-of-Experts (SMoE) model. Despite its high total parameter count, it uses only 12.9B parameters per token, allowing it to process input and generate output at the same speed and for the same cost as a 12.9B model. Mixtral 8x7B matches or outperforms Llama 2 70B on most benchmarks.
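As a rough illustration (a toy layer, not Mixtral's actual implementation), the following PyTorch sketch shows the core idea behind sparse Mixture-of-Experts: a router picks the top-2 experts for each token, so only a fraction of the total parameters is active per token.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Top2SparseMoE(nn.Module):
    # Toy sparse MoE layer: each token is routed to its top-2 experts only.
    def __init__(self, dim=64, hidden=128, num_experts=8):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
             for _ in range(num_experts)]
        )

    def forward(self, x):  # x: (tokens, dim)
        logits = self.router(x)                          # (tokens, num_experts)
        weights, idx = torch.topk(logits, k=2, dim=-1)   # top-2 experts per token
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(2):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

moe = Top2SparseMoE()
tokens = torch.randn(4, 64)
print(moe(tokens).shape)  # torch.Size([4, 64])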

Both models demonstrate strong performance on coding-related tasks:

1. Mistral 7B approaches the performance of CodeLlama 7B on code generation tasks while maintaining its proficiency in English-language tasks.

2. Mixtral 8x7B shows strong performance in code generation.

The models can be easily fine-tuned for various tasks. For example, Mistral 7B was fine-tuned on publicly available instruction datasets to create Mistral 7B Instruct, which outperforms all 7B models on the MT-Bench benchmark.

Models Available:

- mistralai/Mistral-7B-Instruct-v0.2

- mistralai/Mixtral-8x7B-Instruct-v0.1

- mistralai/Mistral-7B-Instruct-v0.1

- mistralai/Mixtral-8x7B-v0.1

- mistralai/Mistral-7B-v0.1

2. CodeLlama

CodeLlama by Meta is a state-of-the-art large language model (LLM) designed for code generation and natural language tasks related to code. It is built on top of Llama 2 and is available in three versions:

1. CodeLlama: The foundational code model.

2. CodeLlama - Python: Specialized for Python programming.

3. CodeLlama - Instruct: Fine-tuned for understanding natural language instructions.

Four sizes of CodeLlama have been released: 7B, 13B, 34B, and 70B parameters. The models are trained on a massive dataset of code and code-related data:

- 7B, 13B, and 34B models are trained on 500B tokens of code and code-related data.

- 70B model is trained on 1T tokens.

The 7B and 13B base and instruct models have also been trained with fill-in-the-middle (FIM) capability, allowing them to insert code into existing code for tasks like code completion.
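As an illustration of fill-in-the-middle in practice, here is a minimal sketch using the Hugging Face transformers library; it assumes the CodeLlama-7b-hf checkpoint and its tokenizer's <FILL_ME> infilling placeholder (check the model card for the exact prompt format before relying on it).

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "codellama/CodeLlama-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# The <FILL_ME> placeholder marks the span the model should infill
# between the surrounding prefix and suffix.
prompt = 'def remove_non_ascii(s: str) -> str:\n    """<FILL_ME>"""\n    return "".join(c for c in s if ord(c) < 128)\n'

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
# Decode only the newly generated tokens (the infilled docstring).
print(tokenizer.decode(output[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True))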

CodeLlama - Python is further fine-tuned on 100B tokens of Python code, while CodeLlama - Instruct is instruction fine-tuned and aligned to better understand human prompts.

In benchmark tests using HumanEval and Mostly Basic Python Programming (MBPP), CodeLlama outperformed state-of-the-art publicly available LLMs on code tasks. CodeLlama 34B scored 53.7% on HumanEval and 56.2% on MBPP, the highest among open-source solutions.

The models are released under the same community license as Llama 2, and the training recipes and model weights are available on GitHub.

Models Available:

- CodeLlama-34b-Instruct-hf

- CodeLlama-13b-Instruct-hf

- CodeLlama-7b-Instruct-hf

- CodeLlama-70b-Instruct-hf

- CodeLlama-70b-Python-hf

- CodeLlama-70b-hf

- CodeLlama-7b-hf

- CodeLlama-13b-hf

- CodeLlama-34b-hf

- CodeLlama-7b-Python-hf

- CodeLlama-13b-Python-hf

- CodeLlama-34b-Python-hf

3. Phind-CodeLlama

Phind, an AI company, has fine-tuned two models, CodeLlama-34B and CodeLlama-34B-Python, using their internal dataset. The resulting models, named Phind-CodeLlama-34B-v1 and Phind-CodeLlama-34B-Python-v1, have achieved impressive results on the HumanEval benchmark, scoring 67.6% and 69.5% pass@1, respectively. 

Phind's dataset consists of approximately 80,000 high-quality programming problems and solutions, structured as instruction-answer pairs rather than code completion examples. The models were trained over two epochs, totaling around 160,000 examples, using native fine-tuning without LoRA. The training process was optimized using DeepSpeed ZeRO 3 and Flash Attention 2, allowing the models to be trained in just three hours using 32 A100-80GB GPUs with a sequence length of 4096 tokens.

To ensure the validity of their results, Phind applied a decontamination methodology to their dataset, which involves sampling substrings from each evaluation example and checking for matches in the processed training examples. No contaminated examples were found in Phind's dataset.
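A simplified sketch of what such a substring-based decontamination check might look like (our illustration, not Phind's exact implementation):

import random

def sample_substrings(text: str, n: int = 3, length: int = 50):
    # Draw a few random character substrings from an evaluation example.
    if len(text) <= length:
        return [text]
    starts = [random.randrange(0, len(text) - length) for _ in range(n)]
    return [text[s:s + length] for s in starts]

def is_contaminated(eval_example: str, training_examples: list) -> bool:
    # Flag the evaluation example if any sampled substring appears
    # verbatim in any processed training example.
    for sub in sample_substrings(eval_example):
        if any(sub in train for train in training_examples):
            return True
    return False

train_set = ["def add(a, b):\n    return a + b"]
print(is_contaminated("def multiply(a, b):\n    return a * b", train_set))  # False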

Phind-CodeLlama-34B-v2 is a newer version, which was initialized from Phind-CodeLlama-34B-v1 and trained on an additional 1.5 billion tokens. This new model achieved an even higher score of 73.8% pass@1 on the HumanEval benchmark, further demonstrating the effectiveness of Phind's fine-tuning approach.

Models Available:

- Phind-CodeLlama-34B-v2

- Phind-CodeLlama-34B-v1

- Phind-CodeLlama-34B-Python-v1

4. StarCoder & StarCoder2

StarCoder and StarCoder2 are two large language models developed by the BigCode project, an open scientific collaboration focused on the responsible development of Large Language Models for Code (Code LLMs).

StarCoder:

- StarCoder is a 15.5B parameter model with an 8K context length, infilling capabilities, and fast large-batch inference enabled by multi-query attention.

- It is built upon StarCoderBase, which was trained on 1 trillion tokens sourced from The Stack, a large collection of permissively licensed GitHub repositories with inspection tools and an opt-out process.

- StarCoder is a fine-tuned version of StarCoderBase, trained on an additional 35B Python tokens.

StarCoder2:

- StarCoder2 is built upon The Stack v2, a dataset created in partnership with Software Heritage (SWH) that is 4× larger than the first StarCoder dataset.

- The Stack v2 contains over 3B files in 600+ programming and markup languages, derived from the Software Heritage archive.

- StarCoder2 models come in three sizes: 3B, 7B, and 15B parameters, trained on 3.3 to 4.3 trillion tokens.

- StarCoder2-3B outperforms other Code LLMs of similar size on most benchmarks and also outperforms StarCoderBase-15B (a short loading sketch follows this list).
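Below is a minimal sketch of loading the smallest StarCoder2 checkpoint with the Hugging Face transformers library for a quick completion; the model size and dtype are illustrative choices, not a recommended configuration.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bigcode/starcoder2-3b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# StarCoder2 is a base (non-instruct) model, so we prompt it with code to complete.
prompt = "def fibonacci(n: int) -> int:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))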

Models Available:

- StarCoder2-15b

- StarCoder2-7b

- StarCoder2-3b

- StarCoder

- StarCoderBase

5. WizardCoder

WizardCoder is a code large language model (LLM) that enhances the open-source StarCoder model through complex instruction fine-tuning using the Evol-Instruct method adapted for code.

The Evol-Instruct method, introduced by WizardLM, is a technique for generating more complex and diverse instruction data to improve the fine-tuning of language models. The key idea is to "evolve" an existing dataset of instructions by iteratively applying various transformations to make the instructions more challenging and varied.
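To give a feel for how Evol-Instruct works for code, here is a simplified sketch of a single "evolution" step; the prompt wording is illustrative rather than the exact template used by WizardCoder, and call_teacher_llm is a hypothetical stand-in for whatever model generates the evolved instruction.

# Simplified Evol-Instruct step: rewrite an existing coding instruction into a
# harder variant, then collect the (evolved instruction, solution) pair for fine-tuning.

EVOLVE_PROMPT = """Please increase the difficulty of the following programming task.
You may add constraints, require handling of edge cases, or demand a more
efficient algorithm, but keep the task self-contained.

Task:
{instruction}

Rewritten, more difficult task:"""

def call_teacher_llm(prompt: str) -> str:
    # Hypothetical helper: send the prompt to a strong teacher model
    # (e.g. an API or a locally hosted LLM) and return its response.
    raise NotImplementedError

def evolve_instruction(instruction: str) -> str:
    return call_teacher_llm(EVOLVE_PROMPT.format(instruction=instruction))

seed = "Write a Python function that reverses a string."
# evolved = evolve_instruction(seed)  # e.g. "... without slicing, in O(n) time ..."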

Models Available:

- WizardCoder-Python-34B-V1.0

- WizardCoder-15B-V1.0

- WizardCoder-Python-13B-V1.0

- WizardCoder-Python-7B-V1.0

- WizardCoder-3B-V1.0

- WizardCoder-1B-V1.0

- WizardCoder-33B-V1.1

6. Solar-10.7B

SOLAR 10.7B is a large language model with 10.7 billion parameters that demonstrates strong performance across various natural language processing tasks. The model was built by depth up-scaling a Llama 2 style architecture and initializing it with the pretrained weights of Mistral 7B.

For fine-tuning, SOLAR 10.7B underwent a two-stage process: instruction tuning and alignment tuning. The instruction tuning stage utilized mostly open-source datasets such as Alpaca-GPT4, OpenOrca, and a synthetically generated math question-answering dataset called “Synth. Math-Instruct”. In the alignment tuning stage, the model was further fine-tuned using human preference data from datasets like Orca DPO Pairs, Ultrafeedback Cleaned, and a synthesized math alignment dataset called “Synth. Math-Alignment”. 
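As a rough illustration of what the alignment-tuning data looks like, here is a sketch of a single preference pair of the kind found in datasets like Orca DPO Pairs; the example content is invented for illustration.

# A direct-preference-optimization (DPO) style training example pairs one prompt
# with a preferred ("chosen") response and a less preferred ("rejected") one.
preference_example = {
    "prompt": "Write a Python function that checks whether a number is prime.",
    "chosen": (
        "def is_prime(n: int) -> bool:\n"
        "    if n < 2:\n"
        "        return False\n"
        "    for i in range(2, int(n ** 0.5) + 1):\n"
        "        if n % i == 0:\n"
        "            return False\n"
        "    return True"
    ),
    "rejected": "A prime number is a number divisible only by 1 and itself.",
}

# During alignment tuning, the model is optimized to assign higher likelihood to
# the chosen response than to the rejected one for the same prompt.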

The resulting instruction-tuned and alignment-tuned model, SOLAR 10.7B-Instruct, outperforms larger models like Mixtral 8x7B-Instruct on benchmark tasks, demonstrating the effectiveness of the training approach.

The Economics of Hosting an Open-Source Coding LLM on E2E’s Cloud Server

E2E Networks provides a wide range of cloud GPUs for hosting and running inference on these memory-hungry coding LLMs.

To measure the GPU memory requirements, let's spin up a GPU node on E2E and then load these models.

We’ll be using a V100 32 GB GPU node for loading the models.

You can install Ollama to run the models. Ollama is a convenient tool for serving and running inference on AI models locally, and it is fast.


curl -fsSL https://ollama.com/install.sh | sh

Now let's run wizardcoder:33b using the following command:


ollama run wizardcoder:33b

To check the GPU usage, open another terminal and run the following:


nvidia-smi

This is the output we received:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.105.17   Driver Version: 525.105.17   CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla V100-PCIE...  Off  | 00000000:01:01.0 Off |                  Off |
| N/A   27C    P0    36W / 250W |  19082MiB / 32768MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                             
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      2643      C   /usr/local/bin/ollama           19078MiB |
+-----------------------------------------------------------------------------+

This shows that wizardcoder:33b occupies about 20 GB of GPU memory when deployed.
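Once the model is running, you can also send requests to it programmatically through Ollama's local REST API (it listens on port 11434 by default); the snippet below is a minimal example using Python's requests library.

import requests

# Ask the locally served model to generate code via Ollama's /api/generate endpoint.
response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "wizardcoder:33b",
        "prompt": "Write a Python function that parses a CSV file into a list of dicts.",
        "stream": False,  # return a single JSON object instead of a token stream
    },
    timeout=300,
)
print(response.json()["response"])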

Using the same approach, we measured the GPU memory requirements of various models:

- Mixtral 8X7B: 25 GB

- CodeLlama-70b-Instruct-hf: 30.8 GB

- Phind-CodeLlama-34B-v2: 20 GB

- StarCoder2-15b: 9.51 GB

Now let's assume that an organization has 1000 developers and that 1% of them send requests to the LLM concurrently. This means we need at least 10 instances of the deployed LLM to keep latency low and avoid request queuing. For a team of 2000 developers we would need 20 instances, and so on.

Based on the GPU requirements calculated above, we take the median value, roughly 20 GB, as our planning figure.

Each instance consumes around 20 GB, and for our team of 1000 developers we need 10 instances, so the total GPU memory requirement is about 200 GB.

We would therefore need 8 V100 32 GB GPUs, giving a total GPU memory of 256 GB and leaving headroom for resource overheads.
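The same back-of-the-envelope sizing can be expressed as a short calculation (illustrative figures matching the assumptions above):

import math

developers = 1000
concurrency = 0.01           # fraction of developers querying the LLM at once
memory_per_instance_gb = 20  # median GPU memory footprint from the list above
gpu_memory_gb = 32           # V100 32 GB

instances = math.ceil(developers * concurrency)           # 10 instances
total_memory_gb = instances * memory_per_instance_gb      # 200 GB
gpus_needed = math.ceil(total_memory_gb / gpu_memory_gb)  # 7 GPUs; we round up to 8 for overhead

print(instances, total_memory_gb, gpus_needed)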

E2E Networks offers a 4xV100 GPU node for 1,80,000 INR per month. Since we would need two of those, the cost would be roughly 3,60,000 INR per month if using V100s.

However, we recommend using H100s instead, due to their lower latency and superior GPU capabilities. The HGX-powered 8xH100 Cloud GPU node has a total GPU memory of 640 GB, so about 30 instances of our model could be launched on it, catering to roughly 3000 developers.

This series of Cloud GPUs costs 20,00,000 INR per month. It comes with 200 CPU cores, 1,800 GB of RAM, and 21,000 GB of SSD storage, supports a combined memory bandwidth of 24 TB/s, and delivers about 32 petaFLOPS of compute, making it a powerful scale-up platform for demanding AI and high-performance computing workloads.

On the other hand, if you want to reduce cost (and can tolerate higher latency and slower response times), you could host a model with lower GPU requirements, such as StarCoder2-15B, on a cloud GPU node like the 4xL4 on E2E Networks, which costs about 1,27,000 INR per month. With 96 GB of GPU memory, it can easily host 10 instances of StarCoder2-15B.

References

Refer to this table for a comprehensive comparison of all the available open-source coding LLMs.
