The February edition of the #AITalk webinar series featured an engaging discussion on various challenges of deep learning in computer vision.
The webinar opened with the evolution of biological vision, which began roughly 580 million years ago, and then moved on to early applications of image processing: noise removal, media compression, medical imaging, and manufacturing.
Tesla’s Autopilot system is a marvel: what was science fiction a few decades ago has become reality today. Many other computer vision applications have come to the fore with the help of deep learning, and interestingly, some of them outperform humans, such as breast cancer detection and robotic arms on assembly lines.
The computer vision industry is growing at a 30% CAGR and is projected to reach USD 49 billion by 2023. Early adopters of AI stand to benefit over the laggards in this space. According to data from Stanford University, computer vision saw significant research activity over the last decade, a close second to classical machine learning.
Applying deep learning to computer vision problems comes with its own challenges. It requires three necessary components: a large dataset, training and inference algorithms built on a scalable neural architecture, and efficient deployment.
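These three components show up even in the smallest learning setup. As an illustration only (not taken from the webinar), here is a minimal pure-Python sketch of the pipeline at toy scale: a synthetic dataset, a training loop for a single sigmoid neuron, and an inference function standing in for deployment. All names, data, and hyperparameters here are made up for the example.

```python
import math
import random

# 1) Dataset: a toy binary-classification set; label is 1 when x > 0.5.
random.seed(0)
data = [(x, 1.0 if x > 0.5 else 0.0) for x in [random.random() for _ in range(200)]]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# 2) Training: fit a single sigmoid neuron y = sigmoid(w*x + b)
#    with per-sample gradient descent on the cross-entropy loss.
w, b, lr = 0.0, 0.0, 0.5
for epoch in range(200):
    for x, t in data:
        y = sigmoid(w * x + b)
        grad = y - t          # dLoss/dz for sigmoid + cross-entropy
        w -= lr * grad * x
        b -= lr * grad

# 3) Deployment/inference: the trained parameters are all that is
#    needed to serve predictions on new inputs.
def predict(x):
    return 1 if sigmoid(w * x + b) > 0.5 else 0
```

In a real computer vision system the same three pieces reappear at scale: labeled image corpora instead of a synthetic list, a deep network trained on accelerators instead of one neuron, and an optimized serving stack instead of a single function.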
The webinar then explored MonkAI, a library that provides a unified wrapper across PyTorch, MXNet, and Keras with an easy-to-use syntax.
Watch more in the on-demand webinar recording here: https://youtu.be/6mFzmwi1NdY