NVIDIA RTX Server Sizing Guide: Running Autodesk Maya with the Arnold Renderer

September 28, 2020

This specification provides insights on how to deploy NVIDIA® Quadro® Virtual Data Center Workstation (Quadro vDWS) software for modern-day production pipelines within the Media and Entertainment industry. Recommendations are based on actual customer deployments and proof-of-concept (POC) artistic 3D production pipeline workflows, and they cover three common questions:


Which NVIDIA GPU should I use for a 3D production pipeline?

How do I select the right profile(s) for the types of users I will have?

Using sample 3D production pipeline workflows, how many users can be supported (user density) for this server configuration and workflow?

NVIDIA RTX™ Server offers a highly flexible reference design which combines NVIDIA Quadro RTX™ 8000 graphics processing units (GPUs) with NVIDIA virtual GPU software running on OEM server hardware. NVIDIA RTX Server can be configured to accelerate multiple workloads within the data center, and IT administrators can provision multiple, easy-to-manage virtual workstations to tackle various artistic workloads.

Since user behavior varies and is a critical factor in determining the best GPU and profile size, the recommendations in this reference architecture are meant to be a guide. The most successful customer deployments start with a proof of concept (POC) and are tuned throughout the lifecycle of the deployment. Beginning with a POC enables customers to understand the expectations and behavior of their users and to optimize their deployment for the best user density while maintaining required performance levels. A POC also allows administrators to understand infrastructure conditions, such as the network, which is a key component in ensuring performance within their specific environment.

Continued maintenance is important because user behavior can change over the course of a project and as the role of an individual changes in the organization, along with potential display improvements during refresh cycles. A 3D production artist who was once a light graphics user might become a heavy graphics user when they change teams, are assigned to a different project, or receive an upgrade to a higher-resolution monitor. NVIDIA virtual GPU management and monitoring tools enable administrators and IT staff to ensure their deployment is optimized for each user.

About Autodesk Maya 2020 and Arnold

Autodesk Maya 2020 is one of the most recognizable applications for 3D computer animation, modeling, simulation, and rendering, used to create expansive worlds, complex characters, and dazzling effects. Creative professionals bring believable characters to life with engaging animation tools, shape 3D objects and scenes with intuitive modeling tools, and create realistic effects, from explosions to cloth simulation, all within the Maya software.

Autodesk Arnold is the built-in interactive renderer for Maya and is an advanced Monte Carlo ray tracing renderer. It is designed for artists and for the demands of modern animation and visual effects (VFX) production.

It is available as a standalone renderer on Linux, Windows, and macOS, with supported plug-ins for Maya, 3ds Max, Houdini, Cinema 4D, and Katana.

Autodesk works closely with NVIDIA so that creative innovation never stops. Studio Drivers are released throughout the year to supercharge your favorite, most demanding applications. Using the same NVIDIA Studio Drivers that are deployed on non-virtualized systems, NVIDIA Quadro vDWS software provides virtual machines (VMs) with the same breakthrough performance and versatility that the NVIDIA RTX platform offers in a physical environment. VDI eliminates the need to install Autodesk Arnold and Maya on a local client, which can help reduce IT support and maintenance costs and enables greater mobility and collaboration. This virtual workstation deployment option enhances flexibility and further expands the wide variety of platform choices available to Autodesk customers.

About NVIDIA RTX Servers

NVIDIA RTX Server is a reference design comprising the following components:
Qualified server
NVIDIA Quadro RTX 8000 graphics cards
NVIDIA Quadro vDWS GPU virtualization software
Autodesk Maya 2020 design software - to be installed by the client
Autodesk Arnold 6 rendering software - to be installed by the client
Teradici Cloud Access software - to be installed by the client
When combined, this validated NVIDIA RTX Server solution provides unprecedented rendering and compute performance at a fraction of the cost, space, and power consumption of traditional CPU-based render nodes, as well as high-performance virtual workstations enabling designers and artists to arrive at their best work, faster.

NVIDIA RTX Server

NVIDIA RTX Server is a validated reference design for multiple workloads that are accelerated by Quadro RTX 8000 GPUs. When deployed for high-performance virtual workstations, the NVIDIA RTX Server solution delivers a native physical workstation experience from the data center, enabling creative professionals to do their best work from anywhere,
using any device. NVIDIA RTX Server can also bring GPU-acceleration and performance to deliver the most efficient end-to-end rendering solution, from interactive sessions in the desktop to final batch rendering in the data center. Content production is undergoing massive growth as render complexity and quality demands increase. Designers and artists across
industries continually strive to produce more visually rich content faster than ever before, yet find their creativity and productivity bound by inefficient CPU-based render solutions. NVIDIA RTX Server delivers the performance that all artists need, by allowing them to take advantage
of key GPU enhancements to increase interactivity and visual quality, while centralizing GPU resources.

NVIDIA Quadro RTX GPUs

The NVIDIA Quadro RTX 8000, powered by the NVIDIA Turing™ architecture and the NVIDIA RTX platform, brings the most significant advancement in computer graphics in over a decade to professional workflows. Designers and artists can now wield the power of hardware-accelerated ray tracing, deep learning, and advanced shading to dramatically boost productivity and create amazing content faster than ever before. The Quadro RTX 8000 has 48 GB of GPU memory to handle larger animations and visualizations. The artistic workflows covered within our testing for this reference architecture used Quadro RTX 6000 GPUs.

NVIDIA Quadro Virtual Data Center Workstation Software

NVIDIA virtual GPU (vGPU) software enables the delivery of graphics-rich virtual desktops and workstations accelerated by NVIDIA GPUs. There are three versions of NVIDIA vGPU software available, one being NVIDIA Quadro Virtual Data Center Workstation (Quadro vDWS). NVIDIA
Quadro vDWS software includes the Quadro graphics driver required to run professional 3D applications. The Quadro vDWS license enables sharing an NVIDIA GPU across multiple virtual machines, or multiple GPUs can be allocated to a single virtual machine to power the most demanding workflows.

NVIDIA Quadro is the world’s preeminent visual computing platform, trusted by millions of creative and technical professionals to accelerate their workflows. With Quadro vDWS software, you can deliver the most powerful virtual workstation from the data center. Designers and artists can work more efficiently, leveraging high performance virtual
workstations that perform just like physical workstations. IT has the flexibility to provision render nodes and virtual workstations, scaling resources up or down as needed. An NVIDIA RTX Server solution can be configured to deliver multiple virtual workstations customized for
specific tasks. This means that utilization of compute resources can be optimized, and virtual machines can be adjusted to handle workflows that may demand more or less memory.

To deploy an NVIDIA vGPU solution for Autodesk Maya 2020 with Arnold, you will need an NVIDIA GPU that is supported with Quadro vDWS software, licensed for each concurrent user.

Teradici Cloud Access Software

Teradici is the creator of the industry-leading PCoIP remoting protocol technology and Cloud Access software. Teradici Cloud Access software enables enterprises to securely deliver high performance graphics-intensive applications and workstations from private data centers, public clouds or hybrid environments with crisp text clarity, true color accuracy and lossless
image quality to any endpoint, anywhere. Teradici PCoIP Ultra with NVIDIA RTX Server can provide multiple artists with virtual machines that are indistinguishable from physical workstations. Artists can enjoy workspaces set up on the latest hardware and work with confidence in high fidelity at steady frame rates.

Autodesk Maya and Arnold PoC Testing

To determine the optimal configuration of Quadro vDWS for Autodesk Maya and Arnold, both user performance and scalability were considered. For comparative purposes, we considered the requirements for a configuration optimized for performance only; this configuration is based solely on performance using sample artistic workflows. The scenes used within our POC testing focused on a VFX pipeline in which a single shot is the result of several artist specialists working on different pieces. The following illustration shows the entire 3D production pipeline and highlights the areas where our POC testing focused.

Our testing focused on a few of the phases illustrated in the above figure. We executed three GPU-accelerated artistic workflows within four VMs:
VM1 and VM2 - Modeling, Texturing and Shading
VM3 - Animation
VM4 - Lighting and Rendering
The goal of this testing was to show how four artists from three unique parts of the pipeline can all work at the same time, productively, using shared virtualized server resources. The following paragraphs go into further detail on each of these workflows.

VM1 and VM2 - Modeling, Texturing and Shading

For artists to model effectively, they need fast interaction with their models to see different views, quick material changes, and realistic rendering. This workflow takes advantage of the RT Cores in the NVIDIA RTX Server to accelerate the rendering process, and artists can view their noise-free assets by leveraging NVIDIA OptiX™ AI denoising. The GPU memory needed to support this artist would be considered small to medium, therefore each VM was assigned half of a Quadro RTX 6000 GPU, which equates to a 12Q vGPU profile. Two VMs can share the same GPU on a server. The following screenshot illustrates the artist's work.
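As a rough sketch of how profile size translates into per-server user density, the short Python example below works through the arithmetic implied above: a Quadro RTX 6000 has 24 GB of frame buffer, so a 12Q profile allows two VMs per GPU. The three-GPU server layout comes from this guide; the helper function and profile table are illustrative only, not an NVIDIA tool.

```python
# Hypothetical sizing sketch: estimate how many VMs of a given vGPU profile
# fit on one RTX Server host. Numbers mirror the configuration in this guide
# (3x Quadro RTX 6000, 24 GB frame buffer each); adjust them for your own POC.

GPU_FRAME_BUFFER_GB = 24          # Quadro RTX 6000
GPUS_PER_SERVER = 3               # configuration tested in this guide

# Q-series profile sizes in GB (e.g. "12Q" = 12 GB of frame buffer per VM).
PROFILE_GB = {"6Q": 6, "8Q": 8, "12Q": 12, "24Q": 24}

def vms_per_server(profile: str) -> int:
    """VMs of one profile per server; a physical GPU hosts one profile size at a time."""
    per_gpu = GPU_FRAME_BUFFER_GB // PROFILE_GB[profile]
    return per_gpu * GPUS_PER_SERVER

if __name__ == "__main__":
    for p in ("12Q", "24Q"):
        print(f"{p}: {vms_per_server(p)} VMs per 3-GPU server")
    # 12Q -> 6 VMs (2 per GPU), 24Q -> 3 VMs (1 per GPU)
```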

In order to bring characters to life in film, they need to go through a “Look Development” process. In the example illustrated in Figure 4-2, Autodesk’s Arnold GPU renderer utilizes NVIDIA RTX-compatible features for performant ray tracing. Look Development involves the following:
Refining textures and materials that often result in a time-consuming, back and forth process
Real-time updates with NVIDIA RTX Server allow for artistic interaction to accurately dial in the look of the character, in context with the scene.
NVIDIA RTX AI, employing NVIDIA OptiX Denoiser, provides high-fidelity changes in real time.
Artists can define and deliver higher quality content in a more intuitive workflow providing an overall increase in production value.
Having a full color range without compression is important for making accurate changes with confidence. Teradici PCoIP Ultra, which takes advantage of NVIDIA RTX GPU encoding, ensures that the virtual machine's output looks indistinguishable from a local display.
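For completeness, Arnold's GPU renderer can also be enabled from script rather than the Render Settings UI. The minimal Maya Python sketch below assumes the MtoA plug-in that ships with Arnold 6; the attribute name and values are taken from that release and should be verified against your own installation.

```python
# Minimal sketch: switch Arnold to GPU rendering from within a Maya session.
# Assumes the MtoA plug-in is available; verify attribute names against your
# Arnold/MtoA version before relying on this in production.
import maya.cmds as cmds

# Make sure the Arnold plug-in is loaded in this session.
if not cmds.pluginInfo("mtoa", query=True, loaded=True):
    cmds.loadPlugin("mtoa")

# Create the defaultArnoldRenderOptions node if it does not exist yet.
import mtoa.core as core
core.createOptions()

# 0 = CPU, 1 = GPU on defaultArnoldRenderOptions.renderDevice (Arnold 6 / MtoA 4).
cmds.setAttr("defaultArnoldRenderOptions.renderDevice", 1)
print("Arnold render device:",
      cmds.getAttr("defaultArnoldRenderOptions.renderDevice"))
```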

VM3 - Animation

For artists to animate effectively, they need smooth playback with no pauses or stutters as they make pose changes. Since this artist uses the Maya 2020 GPU animation cache, the GPU memory needed to support this artist would be considered large. Therefore, a single VM was
assigned an entire Quadro RTX 6000 GPU, which equates to a 24Q vGPU profile. The following screenshot illustrates the artist’s work.

Animation production can place extreme demands on compute hardware. Traditional workflows involve artists outputting time-consuming preview videos. Since Autodesk Maya 2019, real-time animation playback and preview are possible, and with Viewport 2.0 enhancements, real-time rendering features are also available. In this scene, we are using the GPU to cache animation and to preview ambient occlusion, shadows, lights, and reflections, all in real time in the viewport. Maya Viewport 2.0 leverages GPU memory to deliver high-quality materials, lights, screen-space ambient occlusion, and more, at interactive speed. Starting in Maya 2019, you can use your GPU to cache animation calculations to memory in a fraction of the time of a CPU cache. With this feature, you can play back your animations in real time and continue to tweak and update your shots without having to playblast the timeline. By leveraging NVIDIA RTX GPU encoding with PCoIP Ultra, this VM is able to deliver interactive, real-time animation playback without dropping any frames, which is critical for animators who are constantly reviewing their changes. Every frame counts.
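A small, hedged sketch of how the cached playback and real-time playback behavior described above can be driven from Maya's Python API follows; the command flags reflect Maya 2019/2020 and should be confirmed against your version.

```python
# Sketch: enable cached playback and real-time playback in Maya.
# Flags reflect Maya 2019/2020 documentation; confirm against your install.
import maya.cmds as cmds

# Turn on the cached playback evaluator so animation is cached instead of
# recomputed every frame (the cache destination, CPU RAM or VP2/GPU, is set
# in the Cached Playback preferences).
cmds.evaluator(name="cache", enable=True)

# Play back at the scene frame rate (1.0 = real time) rather than every frame.
cmds.playbackOptions(playbackSpeed=1.0)

# Quick check of the current state.
print("cache evaluator enabled:",
      cmds.evaluator(name="cache", query=True, enable=True))
```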

VM4 - Lighting and Rendering

Artists who work with lighting and rendering need fast resolution of the full image so they can see the impact of their lighting and camera changes. Since this artist most intensely uses the RT Cores in the NVIDIA RTX Server (for accelerating the rendering process), the GPU memory needed to support this artist is the largest of all and may even require acceleration from multiple GPUs. NVIDIA vGPU technology provides administrators the ability to assign up to four shared GPUs to a single VM. The following screenshot illustrates the artist's work.

Lighting and rendering are resource intensive processes that are responsible for the final output of a scene. NVIDIA RTX Server enables artists to work and adjust scenes while utilizing leftover GPU resources to render. This provides for an incredibly efficient use of GPU resources, furthering the production pipeline workflow.

Evaluating vGPU Frame Buffer

The GPU Profiler is a tool which can be installed within each of the VMs and used for evaluating GPU and CPU utilization rates while executing the aforementioned artistic workflows. The vGPU frame buffer is allocated out of the physical GPU frame buffer at the time the vGPU is assigned to the VM, and the NVIDIA vGPU retains exclusive use of that frame buffer. All vGPUs resident on a physical GPU share access to the GPU's engines, including the 3D graphics, video decode, and video encode engines. Since user behavior varies and is a critical factor in determining the best GPU and profile size, it is highly recommended to profile your own data and workflows during your POC to properly size your environment for optimal performance.
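If you prefer scripting to the GPU Profiler UI, the same per-VM data can be sampled with NVIDIA's NVML bindings. The sketch below assumes the nvidia-ml-py (pynvml) package and an NVIDIA guest driver inside the artist VM; it simply logs 3D/compute, frame buffer, and encode/decode utilization at intervals while the artist works.

```python
# Sketch: sample GPU utilization inside an artist VM during a POC run.
# Requires the NVIDIA guest driver and the nvidia-ml-py (pynvml) package.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)   # the vGPU presented to this VM

try:
    for _ in range(60):                          # ~5 minutes at 5 s intervals
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        enc, _period = pynvml.nvmlDeviceGetEncoderUtilization(handle)
        dec, _period = pynvml.nvmlDeviceGetDecoderUtilization(handle)
        print(f"3D/compute {util.gpu:3d}%  "
              f"frame buffer {mem.used / mem.total:6.1%}  "
              f"encode {enc:3d}%  decode {dec:3d}%")
        time.sleep(5)
finally:
    pynvml.nvmlShutdown()
```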

Findings

Our testing showed that four artists from three unique parts of the pipeline can all effectively do their 3D production work using VMs. To determine the optimal configuration of Quadro vDWS to support these four artists, both user performance and scalability were considered. To further support this conclusion, NVIDIA collected insights from Media and Entertainment
customers as well, to understand how animation studio customers are deploying Quadro vDWS. A dual socket, 2U rack server configured with three Quadro RTX 6000 GPUs provided the necessary resources so that 3D production artists could work more efficiently, leveraging
high-performance virtual workstations which perform just like physical workstations. When sizing a Quadro vDWS deployment for Autodesk Maya and Arnold, NVIDIA recommends conducting your own POC to fully analyze resource utilization using objective measurements and subjective feedback. It is highly recommended that you install the GPU Profiler within your artist VMs to size them properly.

Deployment Best Practices

Run a Proof of Concept

The most successful deployments are those that balance user density (scalability) with performance. This is achieved when Quadro vDWS-powered virtual machines are used in production while objective measurements and subjective feedback from end users are gathered. We highly recommend running a POC prior to a full deployment to gain a better understanding of how your users work and how many GPU resources they really need, analyzing the utilization of all resources, both physical and virtual. Consistently analyzing resource utilization and gathering subjective feedback allows the configuration to be tuned to meet the performance requirements of end users while achieving the best scale.

Leverage Management and Monitoring Tools

Quadro vDWS software provides extensive monitoring features, enabling IT to better understand usage of the various engines of an NVIDIA GPU. The utilization of the compute engine, the frame buffer, the encoder, and the decoder can all be monitored and logged through a command-line interface called the NVIDIA System Management Interface (nvidia-smi), accessed on the hypervisor or within the virtual machine. In addition, NVIDIA vGPU metrics are integrated with Windows Performance Monitor (PerfMon) and with management packs like VMware vRealize Operations. To identify bottlenecks for individual end users or for the physical GPU serving multiple end users, run nvidia-smi on the hypervisor, as in the sketch below.
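As an illustration, the queries below are typical nvidia-smi commands from NVIDIA's vGPU documentation, wrapped in a small Python helper so their output can be captured during a POC. Exact sub-commands and flags vary by vGPU software release and hypervisor, so treat this as a sketch to adapt on your own host.

```python
# Sketch: capture host-level GPU and vGPU statistics with nvidia-smi.
# Run on the hypervisor (where the NVIDIA vGPU manager is installed).
import subprocess

COMMANDS = [
    ["nvidia-smi"],                              # overall GPU summary
    ["nvidia-smi", "vgpu"],                      # vGPUs currently running per GPU
    ["nvidia-smi", "vgpu", "-q"],                # detailed per-vGPU query
    ["nvidia-smi", "-q", "-d", "UTILIZATION"],   # engine utilization samples
]

for cmd in COMMANDS:
    print("$", " ".join(cmd))
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout or result.stderr)
```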

Understand Your Users

Another benefit of performing a PoC prior to deployment is that it enables more accurate categorization of user behavior and GPU requirements for each virtual workstation. Customers often segment their end users into user types for each application and bundle similar user types on a host. Light users can be supported on a smaller GPU and smaller profile size while heavy users require more GPU resources, a large profile size, and may be
best supported on a larger GPU such as the Quadro RTX 8000.

Understanding the GPU Scheduler

NVIDIA Quadro vDWS provides three GPU scheduling options to accommodate the varying quality of service (QoS) requirements of customers.

Fixed share scheduling: Always guarantees the same dedicated quality of service. The fixed share scheduling policy guarantees equal GPU performance across all vGPUs sharing the same physical GPU. Dedicated quality of service simplifies a POC since it allows the use of common benchmarks used to measure physical workstation performance, such as SPECviewperf, to compare performance with current physical or virtual workstations.

Best effort scheduling: Provides consistent performance at a higher scale and therefore reduces the TCO per user. This is the default scheduler. The best effort scheduler leverages a round-robin scheduling algorithm that shares GPU resources based on actual demand, resulting in optimal utilization of resources and consistent performance with optimized user density. The best effort scheduling policy best utilizes the GPU during idle and partially utilized times, allowing for optimized density and a good QoS.

Equal share scheduling: Provides equal GPU resources to each running VM. As vGPUs are added or removed, the share of GPU processing cycles allocated changes accordingly, so performance increases when utilization is low and decreases when utilization is high.

Organizations typically leverage the best effort GPU scheduler policy for their deployment to achieve better utilization of the GPU, which usually results in supporting more users per server with a lower quality of service (QoS) and better TCO per user.
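As one example of how the policy is applied in practice, on a VMware ESXi host the vGPU scheduler is selected by setting the RmPVMRL registry key on the NVIDIA vGPU manager module and rebooting the host. The values and command in the sketch below follow NVIDIA's vGPU documentation for ESXi, but they differ by hypervisor and release, so verify them for your environment before applying.

```python
# Sketch: select the vGPU scheduling policy on an ESXi host (reboot required).
# RmPVMRL values per NVIDIA vGPU docs: 0x00 best effort (default),
# 0x01 equal share, 0x11 fixed share. Verify for your vGPU release/hypervisor.
import subprocess

SCHEDULERS = {"best_effort": "0x00", "equal_share": "0x01", "fixed_share": "0x11"}

def set_vgpu_scheduler(policy: str) -> None:
    value = SCHEDULERS[policy]
    subprocess.run(
        ["esxcli", "system", "module", "parameters", "set",
         "-m", "nvidia", "-p", f"NVreg_RegistryDwords=RmPVMRL={value}"],
        check=True,
    )
    print(f"Scheduler set to {policy}; reboot the host for it to take effect.")

# Example: favor density with the default best effort policy.
# set_vgpu_scheduler("best_effort")
```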

Summary

A qualified OEM server configured with three Quadro RTX 6000 GPUs provided the necessary resources for 3D production artists to work more efficiently, leveraging high performance virtual workstations which perform just like physical workstations. When sizing a Quadro
vDWS deployment for Autodesk Maya and Arnold, NVIDIA recommends conducting your own PoC to fully analyze resource utilization using objective measurements and subjective feedback. NVIDIA RTX Server offers flexibility to IT administrators to size VMs based on
workload or workflow needs.

Server Recommendation: Dual Socket, 2U Rack Server
A 2U, dual-socket server configured with two Intel Xeon Gold 6154 processors is recommended. With a high 3.0 GHz base frequency combined with 18 cores per socket, this CPU is well suited to delivering optimal performance for each end user while supporting the highest user scale, making it a cost-effective solution for Autodesk Maya.

Flash Based Storage for Best Performance
The use of flash-based storage, such as solid-state drives (SSDs), is recommended for optimal performance. Flash-based storage is the common choice for users on physical workstations, and similar performance can be achieved in similarly configured virtual environments. A typical configuration for non-persistent virtual machines is to use the direct-attached storage (DAS) on the server in a RAID 5 or RAID 10 configuration. For persistent virtual machines, a high-performing all-flash storage solution is the preferred option.

Typical Networking Configuration for Quadro vDWS
There is no typical network configuration for a Quadro vDWS-powered virtual environment, since this varies based on multiple factors, including choice of hypervisor, persistent versus non-persistent virtual machines, and choice of storage solution. Most customers use 10 GbE networking for optimal performance.

Optimizing for Dedicated Quality of Service
For comparative purposes, we considered the requirements for a configuration optimized for performance only. This configuration option does not take into account the need to further optimize for scale, or user density. Additionally, this configuration option is based solely on
performance using the aforementioned sample 3D production artistic workflows.

To run Maya with Arnold renderer workloads on E2E RTX 8000 GPU servers, sign up here.
