Technologies like Artificial Intelligence point toward real advances for humankind. Coupled with innovations in robotics, AI lets researchers move beyond the fenced-off, fixed routines of last-generation industrial robots and build new collaborations and workflow models across industries.
AI is not an industry-specific innovation; its applications are broad, and robotics is one of its most important niches. Mr. Chris Anchuturuthen, CIO – AI, Robotics, IoT, Machine Learning & Big Data Analytics, covered the significance of GPUs in AI and robotics during a workshop with E2E Networks. The key takeaways from his talk are summarized below.
The speaker began his session by introducing the basics of Artificial Intelligence: a branch of computer science in which a computer does not need explicit instructions for every behaviour. Instead, machines are programmed to learn from patterns in data and work out for themselves how to carry out a task.
A GPU (Graphics Processing Unit) is a specialized processor with dedicated memory that traditionally performs the floating-point operations needed for rendering graphics. In other words, it is a single-chip processor built for heavy graphical and mathematical computation. GPUs are well suited to training artificial intelligence and deep learning models because they can run many calculations in parallel.
Mr. Anchuturuthen went on to explain how these processors work and why they are necessary for efficient, seamless operation.
Why do we need a GPU unit?
- Serial processing – Mr. Anchuturuthen explained this facet with an analogy. Consider water spurting from a faucet. If we attach a filter and keep the water flowing, it is strained steadily, one stream at a time; this is sequential processing. Everything fed through the pipe is eventually processed, just as all the water is eventually filtered.
- Optimized for many parallel tasks – To speed up the processing, the function stays the same, but you add more pipes and filters so many streams are strained at once. This is what a GPU does: it applies the same operation across many streams of data simultaneously.
- Like a CPU, a GPU has multiple cores, but it has many more of them, which gives it its throughput.
- Scaling traditional CPUs up to this workload is not economically feasible; GPUs are far more cost-effective.
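The faucet analogy above can be sketched in ordinary Python: the same "filter" applied to a stream of items one at a time, versus across many workers at once. This is only an illustration of the serial-versus-parallel idea, not real GPU execution (a GPU runs the operation across thousands of cores, not threads), and the delay and item counts are invented for the sketch.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def filter_item(x):
    """One 'filter' in the pipeline: simulate a fixed processing delay."""
    time.sleep(0.01)
    return x * 2

data = list(range(20))

# Serial: one pipe, one filter -- items are strained one after another.
start = time.perf_counter()
serial_out = [filter_item(x) for x in data]
serial_time = time.perf_counter() - start

# Parallel: many pipes and filters -- items are strained simultaneously,
# the way a GPU applies the same operation across many cores at once.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=20) as pool:
    parallel_out = list(pool.map(filter_item, data))
parallel_time = time.perf_counter() - start

print(f"serial:   {serial_time:.3f}s")
print(f"parallel: {parallel_time:.3f}s")
```

Both versions produce the same output; only the elapsed time differs, which is the whole point of the analogy.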
The seminar then highlighted several AI applications that are accelerated when paired with a Graphics Processing Unit. Mr. Anchuturuthen gave the example of driverless cars under development, which must avoid obstacles, drive themselves, and interpret commands through a voice-control mechanism. He backed this up with a video demonstrating GPU-accelerated AI in such a car: given a voice command to read a book, the vehicle used purpose-built AI to do so, with the GPU speeding the process considerably.
Another example of applied AI is 'Vision for the Blind': AI-powered, GPU-backed spectacles for blind users. They execute commands and operations faster than Google Glass, with the GPU letting the application respond within 5 seconds of scanning. Mr. Anchuturuthen also presented a video showing the spectacles at work in a pharmaceutical use case: recognizing a drug prescription.
The process involved the following steps:
- Get user input for scanning prescriptions.
- Capture prescriptions.
- Get user input for scanning drug labels.
- Capture the drug label.
- Validate drug authenticity and use-case.
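The steps above can be sketched as a small pipeline. Every function name and the drug table below are hypothetical stand-ins invented for this sketch; the real spectacles would use GPU-backed vision and OCR models for the capture and reading steps.

```python
# Hypothetical drug database standing in for a real formulary lookup.
KNOWN_DRUGS = {"paracetamol": "pain relief", "amoxicillin": "antibiotic"}

def scan_prescription(image):
    # Placeholder for a vision/OCR model reading the prescription.
    return image["prescribed_drug"]

def scan_drug_label(image):
    # Placeholder for a model reading the label on the physical package.
    return image["label_text"]

def validate(prescribed, label):
    """Check that the drug is recognized and matches the prescription."""
    if label not in KNOWN_DRUGS:
        return "unrecognised drug"
    if label != prescribed:
        return "label does not match prescription"
    return f"ok: {label} ({KNOWN_DRUGS[label]})"

# Simulated captures standing in for the spectacles' camera input.
prescription_img = {"prescribed_drug": "paracetamol"}
label_img = {"label_text": "paracetamol"}

result = validate(scan_prescription(prescription_img),
                  scan_drug_label(label_img))
print(result)  # ok: paracetamol (pain relief)
```

The value of structuring it this way is that each step maps directly onto one bullet in the list above, so a failure can be reported at the step where it occurred.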
Moving on to robotics, the session covered some of the most notable recent robot breakthroughs, including work from India. A Boston Dynamics video introduced Spot, a robot built for automated inspection and sensing that can explore hard-to-reach areas and record data along the way. Its headline specifications:
- Top speed of 3 mph (about 1.6 m/s)
- Average run time of 90 minutes
- Can navigate challenging terrain
- Cameras that enable 360-degree obstacle avoidance
- Two payload ports
- Can carry up to 14 kg
- Operates in a temperature range of -20°C to 45°C
- IP54 rain and dust protection
Mr. Anchuturuthen also showed a video of a quadruped robot designed and developed entirely in India, and explained how a GPU powers the robot to execute its functions. Its performance highlights include:
- A large range of motion that enables fall recovery
- Distinct leg configurations that enable varied motion
- Impedance control that enables softer landings
- A lightweight yet robust build that can handle impacts
- Foot impedance control
- The ability to jump onto obstacles of different heights (kino-dynamic planner plus the proposed controller)
Can we program AI to replicate human emotions?
Emotional AI is a subset of artificial intelligence (the broad term for machines replicating how people think and act) concerned with measuring, understanding, simulating, and responding to human emotions. It is also known as affective computing or artificial emotional intelligence.
While people have a natural advantage in understanding feelings, machines are still finding their way. Nonetheless, as Mr. Anchuturuthen described, machines have become adept at analysing large amounts of data. They can now listen to vocal inflections and begin to recognize when those modulations correlate with stress or anger. More advanced systems can also analyse images and pick up subtle changes in human expression. The progress so far is substantial, but a complete replication of human emotion is still something scientists and researchers are actively working to perfect.
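As a toy illustration of the kind of voice-tone analysis described above (not the speaker's actual system), one could flag possible stress from simple pitch statistics. The thresholds and sample pitch tracks below are entirely invented; a real affective-computing model would learn such cues from labelled audio.

```python
from statistics import mean, stdev

def classify_tone(pitch_hz):
    """Label a pitch track as 'calm' or 'stressed' from crude statistics.

    Thresholds are invented for this sketch; a raised average pitch with
    high variability is used as a stand-in stress cue.
    """
    avg, spread = mean(pitch_hz), stdev(pitch_hz)
    if avg > 220 and spread > 40:
        return "stressed"
    return "calm"

# Fabricated pitch tracks (Hz) standing in for real voice recordings.
calm_track = [110, 115, 112, 118, 111, 114]
tense_track = [240, 190, 290, 210, 300, 230]

print(classify_tone(calm_track))   # calm
print(classify_tone(tense_track))  # stressed
```

The point of the sketch is only that emotional cues can be reduced to measurable signal features; real systems replace the hand-set thresholds with trained models.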