Please use the E2E special code 'NVUAR' when you register for an additional 20% discount on early-bird pricing of $49 before September 25. Registration is free for university and government attendees.
Full-day DLI workshops are being offered for India audiences starting at 12:30 PM from October 6.
Visit the Session Catalog to choose from hundreds of live and on-demand talks, instructor-led Deep Learning Institute (DLI) training, demos, podcasts, and Connect with the Experts sessions featuring live Q&A.
The NVIDIA Deep Learning Institute (DLI) offers in-depth workshops taught by subject matter experts to help you solve your most challenging problems in AI, accelerated computing, and data science.
GTC features DLI sessions at just $99 each, starting October 6. India session details below: sessions start at 12 PM IST every day.
DLI workshops at NVIDIA GTC start October 5, 2020.
GTC Online, this Oct 5-9, will be packed with more than 200 sessions covering the latest developments and future plans for autonomous vehicles, as well as topics including robotics, machine learning, 5G, and data science. Highlights include:
Opening keynote from NVIDIA CEO Jensen Huang unveiling the newest AI technology.
Informative sessions from brilliant minds at Facebook, Volvo, Sony, Siemens, and many more.
GTC India Curtain Raiser and featured live programming from India speakers at HDFC, AMEX, IIT Hyderabad, 1MG, INNOPLEXUS, IISC, HCL, TIFR, and many more.
NVIDIA Deep Learning Institute instructor-led training sessions at India-convenient timings.
We'll introduce GTC to India's larger developer community with this kick-off session hosted by Vishal Dhupar, India Managing Director at NVIDIA, who'll be joined by Indian industry and government leaders to discuss the unique role NVIDIA has played in pushing the boundaries of AI research and development, harnessing accelerated computing to create breakthroughs. As these capabilities now converge to create life-changing AI applications, a platform powered by accelerated computing is needed more than ever. India is on the threshold of building such a platform's digital infrastructure. We'll talk about the ongoing AI, 5G, and IoT revolutions, the relevance of GTC in this context, and opportunities and potential impacts for India.
Vishal Dhupar, Managing Director, NVIDIA Graphics India, NVIDIA
In India, 135 Cr people speak 13 languages with several different dialects. At Airtel, we're building voicebots that understand, infer, and speak these languages. Typically, state-of-the-art voicebots need 10,000 hours of annotated speech-to-text audio data to train, but Indic languages don't have such high quantities of annotated data available, posing the challenges of achieving state-of-the-art models with low resources and accelerating the training cycle. Using NVIDIA GPUs and a distributed high-performance compute architecture, we're solving both challenges and building bots that understand and speak multiple languages.
Arjun Variar, Lead Engineer, Machine Learning, Airtel.com
Shardul Lavekar, Engineering Manager, Airtel.com
Deep learning technologies are revolutionizing many industries and have proven successful in solving problems in unstructured data domains like image and speech recognition and natural language processing. Still, limited gains have been made with these technologies in traditional structured data domains like banking, financial services, and insurance. We'll cover American Express's journey exploring deep learning techniques such as recurrent neural networks to generate the next set of data innovations by deriving intelligence from the data within its global, integrated network. Learn how using credit card data has helped us improve fraud decisions and elevate the payment experience for millions of card members across the globe. In this solution, we leverage deep learning capabilities for real-time fraud prevention, deploying a real-time fraud prevention model with GPU acceleration using the NVIDIA TensorRT framework in partnership with NVIDIA.
Manish Gupta, Vice President, Machine Learning & Data Science Research, American Express
Abhishek Khanna, Vice President, Fraud Risk Decision Science, American Express
Tuberculosis (TB) causes more deaths worldwide than any other infectious disease, with early and accurate diagnosis as the key to fighting it. Chest X-rays have become a frontline tool for TB screening, but there's a shortage of radiologists in many regions of the world. AI-infused radiology solutions can reduce radiologist work, improve report quality, lower reporting times, and ultimately reduce the cost of population screening projects. Although there has been a plethora of research in using deep learning for diagnosing medical images, most projects have been evaluated retrospectively and in laboratory settings. We'll present our experience of developing deep learning models for TB and deploying them in the field for a live, large-scale population screening project. The work was done in collaboration with the Clinton Health Access Initiative and the Greater Chennai Corporation.
Aniruddha Pant, Chief Data Scientist, DeepTek, Inc.
Digitalization in EHS 4.0 enables organizations to go beyond traditional techniques to achieve the zero-incident objective. We're developing a visual understanding and perception-based solution using state-of-the-art AI technologies and NVIDIA DeepStream to ensure safety at workplaces using existing CCTV camera infrastructure. The solution enables companies and factories to adhere to compliance regulations, which positively impacts profitability while decreasing incident rates. It also helps businesses adapt to the new normal and continue operations despite unprecedented challenges with COVID-19.
Varghese Kollerathu, Lead Research Engineer, Siemens
We'll highlight the importance of high-speed GPU interconnect for efficient use of accelerated FFTs in the direct numerical simulation (DNS) of the Navier-Stokes equations to investigate fluid turbulence. On a cubic grid, the numerical integration requires high computational resources that increase the communication complexity. We'll show that this problem is mitigated by the use of high-speed GPU interconnect, especially on systems with a larger number of GPUs per node. Using this approach, we're able to perform DNS at moderate grid resolutions to investigate the challenging problem of fluid turbulence.
Prasad Perlekar, Associate Professor, Tata Institute of Fundamental Research-Hyderabad
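To make the FFT-heavy core of pseudo-spectral DNS concrete, here is a minimal, hedged sketch (not the speakers' code): differentiating a 1D periodic velocity field in Fourier space with NumPy. The field and grid size are illustrative; production DNS does this in 3D across many GPUs, which is where interconnect bandwidth becomes the bottleneck.

```python
import numpy as np

# Toy periodic velocity field on a 1D grid (illustrative only)
n = 64
x = 2 * np.pi * np.arange(n) / n
u = np.sin(3 * x)

# Spectral differentiation: multiply by i*k in Fourier space
k = np.fft.fftfreq(n, d=1.0 / n)           # integer wavenumbers
du = np.fft.ifft(1j * k * np.fft.fft(u)).real

# Matches the analytic derivative 3*cos(3x) to machine precision
assert np.allclose(du, 3 * np.cos(3 * x), atol=1e-8)
```

In a 3D pseudo-spectral solver the same transform runs along each axis, and the transposes between axes are the all-to-all communication that fast GPU interconnect accelerates.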
We'll present a brief introduction to multigrid methods and their classifications, geometric and algebraic methods. We'll cover how to develop hybrid parallel algorithms for algebraic multigrid methods and efficiently utilize both CPU and GPUs, diving into variants of algorithms to handle memory and compute-intensive computations in CPU-GPU implementations. Basic knowledge of numerical linear algebra and parallel programming is expected.
Sashikumaar Ganesan, Associate Professor, Indian Institute of Science
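As background for the workshop above, here is a hedged two-grid sketch for the 1D Poisson problem -u'' = f, illustrating the geometric-multigrid cycle (smooth, restrict the residual, solve coarse, prolong the correction, smooth again). All choices (weighted Jacobi, injection restriction, exact coarse solve) are simplifications for illustration, not the course material itself.

```python
import numpy as np

def poisson_matrix(n):
    """1D Poisson operator -u'' on n interior points of (0, 1)."""
    h = 1.0 / (n + 1)
    return (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def jacobi(A, b, x, iters=3, w=2/3):
    """Weighted-Jacobi smoother: damps high-frequency error."""
    d = np.diag(A)
    for _ in range(iters):
        x = x + w * (b - A @ x) / d
    return x

def two_grid(A, b, x):
    n = len(b)
    nc = (n - 1) // 2                              # coarse interior points
    x = jacobi(A, b, x)                            # pre-smooth
    r = b - A @ x                                  # fine-grid residual
    rc = r[1::2]                                   # restriction by injection
    ec = np.linalg.solve(poisson_matrix(nc), rc)   # exact coarse solve
    e = np.zeros(n)                                # linear prolongation
    e[1::2] = ec
    pad = np.concatenate([[0.0], ec, [0.0]])
    e[0::2] = 0.5 * (pad[:-1] + pad[1:])
    x = x + e                                      # coarse-grid correction
    return jacobi(A, b, x)                         # post-smooth

n = 31
A = poisson_matrix(n)
b = np.ones(n)
x1 = two_grid(A, b, np.zeros(n))
# One cycle already shrinks the residual substantially
assert np.linalg.norm(b - A @ x1) < np.linalg.norm(b)
```

Algebraic multigrid replaces the geometric coarsening here with coarsening derived from the matrix itself, which is what makes hybrid CPU-GPU implementations interesting.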
We'll dive into how one can encode information that can shed light on the implicit user perception of a product by observing temporal user-item interactions. This overcomes the limitation of collaborative-filtering and content-based algorithms that take into account the user interactions and meta information but ignore the temporal nature of a user's interest. We'll cover new embedding techniques: Prod2Vec, as well as a hybrid of Prod2Vec and content-based embeddings called MetaProd2Vec+. We'll also detail how, for building a next-generation recommendation system, we've trained multi-layer Transformer models (BERT, GPT-2 architecture) on the sequence of item interactions done by the user to be able to generate and serve contextually-aware recommendations capable of guiding a user journey across the whole platform. The approach enables the creation of a recommendation system that can act as an assistant for the user at every step on the platform.
Utkarsh Gupta, Lead Data Scientist, 1mg.com
Bhaskar Arun, Data Scientist, 1mg.com
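To illustrate the Prod2Vec idea mentioned above: it treats a user's time-ordered item interactions like words in a sentence and trains skip-gram embeddings on them. The sketch below (item IDs and window size are made-up examples, not 1mg data) shows only the data-preparation step of turning one interaction sequence into (target, context) training pairs.

```python
# Hypothetical sketch: generating skip-gram (target, context) pairs from a
# time-ordered user-item interaction sequence, as a Prod2Vec-style model
# would consume them.

def skipgram_pairs(sequence, window=2):
    """Yield (target, context) pairs within a sliding window."""
    pairs = []
    for i, target in enumerate(sequence):
        lo = max(0, i - window)
        hi = min(len(sequence), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((target, sequence[j]))
    return pairs

# One illustrative user session (most recent interaction last)
session = ["vitamin_c", "zinc_tablets", "thermometer", "face_mask"]
pairs = skipgram_pairs(session, window=1)
# Adjacent items become positive training pairs for the embedding model
```

A word2vec-style trainer over many such sessions then places co-interacted products near each other in the embedding space.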
The auto-focusing system, which involves moving a microscope stage along a vertical axis to find the optimal focus position, is the chief component of an automated digital microscope. Current automated focusing algorithms, especially those deployed in cost-effective microscopy systems, often don't match the efficiency of a skilled human operator keeping a sample in focus. We'll present an auto-focusing system which uses recent advances in machine learning, namely deep convolutional neural networks, to quicken the auto-focusing system even on low-cost hardware and edge devices. We'll demonstrate the results of the focusing algorithm on an open dataset and describe the practical implementation of this method on a low-cost digital microscope to create a whole slide imaging system. Results of a clinical study using this system will be presented, demonstrating its efficacy in a practical scenario.
Tathagato Rai Dastidar, Co-Founder & CEO, SigTuple
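For context on the auto-focusing problem above, here is a hedged sketch of a classical focus score, the variance of the Laplacian, which learned CNN focus models are typically compared against. This is an illustrative baseline, not SigTuple's method; the images are synthetic.

```python
import numpy as np

def laplacian_variance(img):
    """Focus score: variance of a 4-neighbour Laplacian. Higher = sharper."""
    lap = (-4 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))          # high-frequency detail
blurred = np.full((64, 64), 0.5)      # featureless, fully out of focus
assert laplacian_variance(sharp) > laplacian_variance(blurred)
```

A stage controller would sweep the z-axis, score each frame like this, and stop at the maximum; the appeal of a CNN-based approach is predicting the focus offset from far fewer frames.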
We'll discuss the Anomaly and Outlier Detection for Fraud Prevention solution that HDFC built and uses. The solution is based on customer profiles and uses GPU acceleration to meet the computational demand. We'll cover the details of building feature engineering workflows to detect outliers in features like age, income, and credit history using the binning and bucketing technique, which was implemented using NVIDIA RAPIDS and the NVIDIA DGX platform. cuDF was used to efficiently define the data frames while correlating up to 10 million features of variable combinations and detecting the outliers within these features. Post-implementation, running the feature engineering pipeline for region-wide execution took around 6 minutes on NVIDIA DGX systems, compared to 6 hours per region on the traditional CPU-based platform. This helped the bank's machine learning team drastically reduce model-building and execution timelines, as they compute for 20+ regions.
Bharath Shasthri, Head of Data Science, HDFC Bank
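A minimal sketch of the binning-and-bucketing idea, shown here on CPU with pandas; cuDF exposes a largely matching API (e.g. `cudf.cut`), so the same pattern ports to GPU. The thresholds and customer records are illustrative only, not HDFC's features.

```python
import pandas as pd

# Illustrative customer profiles; 97 is an implausible age outlier
customers = pd.DataFrame({
    "age":    [25, 34, 41, 29, 97],
    "income": [40_000, 55_000, 62_000, 48_000, 51_000],
})

# Bucket age into expected ranges; values outside all bins become NaN
bins = [18, 30, 45, 60, 75]
customers["age_bucket"] = pd.cut(customers["age"], bins=bins)

# Rows whose bucket is NaN fall outside every expected range -> flag them
outliers = customers[customers["age_bucket"].isna()]
```

At scale, the win comes from running this same dataframe logic on GPU memory with cuDF across millions of feature combinations instead of iterating on CPU.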
We'll introduce the need for explaining the decisions of neural network models as they get absorbed into real-world applications, summarize existing efforts, and present our own efforts in this direction. While existing methods for neural network attributions are largely statistical, we'll propose a new attribution method for neural networks developed using first principles of causality. We'll cover how the neural network architecture is viewed as a structural causal model, as well as the methodology to compute the causal effect of each feature on the output. Using reasonable assumptions on the causal structure of input data, we'll propose algorithms to efficiently compute the causal effects, as well as scale the approach to data with large dimensionality. This work was previously presented at ICML 2019.
Vineeth N Balasubramanian, Associate Professor, Indian Institute of Technology, Hyderabad
AI model explainability, security, and trust are crucial as solutions are moved from labs to production. We'll describe various explainability techniques used to bring transparency to models, like LIME, FairML, and ELI5. We'll also cover the vulnerability of AI models, different types of adversarial attacks, and how we can defend against them. Adversarial attacks done using techniques such as Projected Gradient Descent, Carlini-Wagner-L2, DeepFool, and FGSM will be discussed. Defense techniques like spatial smoothing, JPEG compression, defensive distillation, and adversarial training will be explored to handle different attacks and ensure the security of AI models. Explainability and security bring in additional compute overheads, which can potentially slow inference. NVIDIA Tesla V100 GPUs were used to help create trusted AI solutions for edge and data centers, along with experimentation with optimization using TensorRT.
Ramachandra Kaladhara Sarma, Senior Lead Data Scientist, HCL
Jayachandran K R, Associate Vice President & Head of Artificial Intelligence CoE, HCL
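To make the FGSM attack mentioned above concrete, here is a hedged sketch on a tiny linear classifier with made-up weights and input. Real attacks target deep networks, but the perturbation rule is the same: x_adv = x + eps * sign(grad_x loss).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([2.0, -1.0, 0.5])   # "trained" weights (illustrative)
x = np.array([1.0, 0.5, 2.0])    # input correctly classified as positive
y = 1.0                          # true label in {0, 1}

# Gradient of binary cross-entropy wrt the INPUT for a linear model
p = sigmoid(w @ x)
grad_x = (p - y) * w

eps = 0.5
x_adv = x + eps * np.sign(grad_x)   # FGSM perturbation

p_adv = sigmoid(w @ x_adv)
# The adversarial point moves the prediction toward the wrong class
assert p_adv < p
```

Defenses like spatial smoothing or adversarial training aim to flatten exactly this input-gradient signal that the attacker exploits.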
Performance of speech recognition systems is highly dependent on the annotation of the speech data. Manual annotation of a large unlabeled corpus is highly time-consuming and could require linguistic knowledge. The task of manual annotation can be partially solved by incorporating semi-supervised learning (SSL), which uses a small manually labeled corpus to build an initial seed speech recognition model. The seed model can then be used to transcribe a large unlabeled corpus. In the SSL framework, the utterances from the unlabeled corpus that are decoded with higher confidence are further used to refine or re-train the baseline seed model. The SSL framework is shown to work on an Indian language using an end-to-end speech recognition framework. With SSL, a reduction in error rates is achieved, and the refined system performs better than the baseline seed model. The model and SSL training are carried out using NVIDIA V100 GPUs.
Tanvina Patel, Data Scientist, Speech Systems, Cogknit Semantics, Bangalore
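The SSL loop described above can be sketched on toy data: train a seed model on a small labeled set, pseudo-label unlabeled data, keep only confident decodes, and retrain. In this hedged sketch a nearest-centroid classifier stands in for the seed ASR model, and the "confidence" is a crude distance margin; all data is synthetic.

```python
import numpy as np

rng = np.random.default_rng(42)
labeled_x = np.array([[0.0], [0.2], [1.0], [1.2]])
labeled_y = np.array([0, 0, 1, 1])
unlabeled_x = rng.normal(0.1, 0.05, (50, 1))   # truly class 0, but unlabeled

def fit_centroids(x, y):
    """Seed-model stand-in: one centroid per class."""
    return np.array([x[y == c].mean(axis=0) for c in (0, 1)])

def predict_with_confidence(centroids, x):
    d = np.linalg.norm(x[:, None, :] - centroids[None], axis=2)
    pred = d.argmin(axis=1)
    conf = np.abs(d[:, 0] - d[:, 1])           # margin as crude confidence
    return pred, conf

centroids = fit_centroids(labeled_x, labeled_y)         # seed model
pred, conf = predict_with_confidence(centroids, unlabeled_x)

keep = conf > 0.3                                       # confidence threshold
x_aug = np.vstack([labeled_x, unlabeled_x[keep]])       # add confident decodes
y_aug = np.concatenate([labeled_y, pred[keep]])
refined = fit_centroids(x_aug, y_aug)                   # retrained model
```

In the real pipeline the "confidence" comes from the decoder's scores, and the retraining step is another full end-to-end ASR training run on GPUs.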
Medical imaging can help detect abnormal lung disease conditions, including COVID-19. We'll cover a solution to improve COVID-19 detection based on a modified DenseNet-121. Various spatial transform augmentation techniques were used to mitigate the scarcity of COVID-19 datasets. X-ray images were preprocessed with contrast-limited adaptive histogram equalization (CLAHE) to improve their quality. Local interpretable model-agnostic explanations (LIME) is used to understand the model predictions. We'll deep dive into the various spatial transform augmentation techniques used, as well as how CLAHE and LIME are used to build the solution. The solution also supports model metadata management like versioning and hyperparameters. Data augmentation and model training are being performed using NVIDIA V100 GPUs for accelerated processing.
Kameshwar Raovenkatajammalamadaka, Head, Artificial Intelligence, HCL Technologies
Trupti Chavan, Data Scientist, HCL Technologies
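As a simplified stand-in for the CLAHE preprocessing step above, here is global histogram equalization on a synthetic 8-bit "X-ray" in NumPy. CLAHE additionally works on local tiles and clips the histogram to limit contrast amplification; this hedged sketch only illustrates the underlying gray-level remapping idea.

```python
import numpy as np

def equalize(img):
    """Remap gray levels so the cumulative histogram becomes ~linear."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_norm = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # scale to 0..1
    lut = np.round(cdf_norm * 255).astype(np.uint8)
    return lut[img]

rng = np.random.default_rng(1)
# Low-contrast image: all intensities crammed into 100..139
low_contrast = rng.integers(100, 140, (32, 32)).astype(np.uint8)
enhanced = equalize(low_contrast)
# Equalization stretches the narrow range toward the full 0..255 span
assert int(enhanced.max()) - int(enhanced.min()) > \
       int(low_contrast.max()) - int(low_contrast.min())
```

Libraries such as OpenCV (`cv2.createCLAHE`) provide the tiled, clip-limited version actually used for chest X-ray pipelines.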
COVID-19, caused by the novel coronavirus, has over 22.5M registered cases. Since polymerase chain reaction-based testing kits haven't been able to cater to the huge number of patients, tools to detect COVID-19 based on radiological images are needed. We've used multi-channel transfer learning techniques with novel loss functions and training paradigms to build a CNN-based classifier of COVID-19 for CXR images. To build the feature extractor, we've trained a ResNet-18 model, pretrained on ImageNet, over multiple datasets compiled from various sources. We'll cover how to implement Gaussian methods to handle uncertainties in data and how transfer learning is useful for working with less annotated data.
Deepshikha Kumari, Senior Data Scientist, NVIDIA
Sumit Jahgirdar, Research Intern, VIT Vellore
We'll demonstrate the efficacy of transfer learning for Indic automatic speech recognition (ASR) tasks, starting with understanding the complexity of working with low-resource Indic data, followed by looking at the best practices to train an Indic ASR. With a good pre-trained English ASR model, we'll showcase that transfer learning can be effectively and easily performed on different Indic languages (Gujarati, Tamil) as well as crowdsourced data from multiple user groups. The complete process of training these models is achieved via the NVIDIA NeMo library, which eases experimentation with state-of-the-art methods. The experiments demonstrate that, in both cases, transfer learning from a good base model has higher accuracy than a model trained from scratch.
Ashish Sardana, Data Scientist, NVIDIA
The cost of developing a new drug roughly doubles every nine years, adjusted for inflation, according to Eroom's Law. Developing a new drug is a complex process, requiring around 12-15 years and costing over $1 billion. We'll examine an AI solution that allows generating new hypotheses in terms of viable and non-obvious molecular structures, drastically accelerating molecule discovery. We'll discuss a deep learning generative model and pipeline that leverages existing molecular and biological data for training and generation, cross-validated against scientific literature.
Stratos Davlos, CTO, Innoplexus
Join NVIDIA Inception at GTC to connect with leaders from AI startups that we believe will shape the future for generations to come, through the NVIDIA Inception Premier Showcase and the virtual pavilion.
AI startup-focused sessions at GTC
Monday, 10/5 | Tuesday, 10/6 | Wednesday, 10/7 | Thursday, 10/8 | Friday, 10/9
Keynote by NVIDIA CEO
Hear the latest announcements in graphics, HPC, AI, and more in the GTC 2020 Keynote.
Join NVIDIA Vice President of Business Development and Head of Inception GPU Ventures Jeff Herbst as he sits down with a few high-profile special guests.
Get a deep-dive into NVIDIA platforms, led by an NVIDIA engineer, and hear from the startups who have used NVIDIA SDKs to drive their business forward.
GTC General Sessions
NVIDIA Inception Premier Showcase
Watch a hand-picked lineup of NVIDIA Inception Premier members demo their solutions.