The Clara AGX PyTorch container is a fully optimised container that provides the PyTorch machine learning framework and its dependencies. It is built with the Clara SDK and includes everything you need to start running machine learning inference on NVIDIA EGX, NVIDIA's edge-to-cloud computing platform.
The Clara AGX PyTorch container is an NGC container that provides the tools needed to train, deploy, and validate deep learning models. It includes everything you need to get started with TensorRT, including optimised implementations of popular networks such as ResNet, SSD, and FCN.
This container is available on NVIDIA GPU Cloud (NGC) as a Docker container that can be deployed anywhere Docker runs, including your local workstation, a cloud GPU instance, or an edge device.
With the Clara AGX PyTorch container, you can create customised containers that can be easily deployed on NVIDIA A100 GPUs (including DGX systems) or used in the cloud (AWS, GCP, or Azure). It is a key component of the Clara application development platform, which was built specifically for medical imaging developers who want to deploy deep learning applications at scale.
NVIDIA Clara AGX SDK is a suite of tools and libraries to accelerate AI model training and deployment on medical imaging data. The tools include:
- Annotations Toolkit: Aligns polygonal annotations to image volumes in NIfTI format.
- Medical Imaging Library: Performs advanced image processing functions on medical images.
- NVIDIA Transfer Learning Toolkit (TLT): Speeds up the training process by providing pre-trained models and libraries for performing common tasks, such as object detection or semantic segmentation.
Key benefits of the SDK include:
- It eliminates hardware dependency by enabling the use of a common Docker image across multiple systems, allowing developers to focus on building new algorithms instead of resolving system dependencies.
- It is compliant with industry standards and has been validated for production deployment.
- It accelerates algorithm development and testing by simplifying tasks such as creating training data or tuning hyperparameters for different datasets and models.
- It simplifies deployment by providing standard workflows for incorporating third-party libraries into your pipeline or moving a model from one environment to another with minimal effort from you or your team.
Clara AGX is a growing set of reference applications, AI frameworks, and models created specifically for the Clara AGX Developer Kit and for real-time medical device development.
The toolkit is designed to make it easier for businesses to create AI-powered medical equipment. The containers included in the set cover Ultrasound, Metagenomics, Skincare, and Streaming Video.
With the Clara AGX Developer Kit, developers can streamline the process of building a medical imaging device that uses NVIDIA AI technologies. It gives them a "plug and play" experience: instant access to a comprehensive set of artificial intelligence (AI) tools delivered in pre-built containers.
Clara AGX PyTorch Container comes in two variants:
- Clara AGX PyTorch Container for AI Training: Designed to accelerate AI training workloads on NVIDIA® DGX™ systems and other hardware platforms with NVIDIA GPUs, this variant provides a streamlined software stack for reproducible deep learning training experiments.
- Clara AGX PyTorch Container for AI Inference: Designed to accelerate AI inference workloads on NVIDIA® Clara™ Deployment Platforms or other hardware platforms with NVIDIA GPUs, this variant provides a streamlined software stack for deploying your trained models as part of an end-to-end solution.
Your Docker environment must support NVIDIA GPUs before you can run an NGC deep learning framework container. How you enable that support depends on the AGX OS version you have installed, the NGC cloud image provided by your cloud service provider, or the software you have installed in preparation for running NGC containers.
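One quick way to confirm that your Docker environment can reach the GPU is to run `nvidia-smi` inside a throwaway container. The sketch below assumes Docker and the NVIDIA Container Toolkit are installed; the CUDA base image tag is only an example.

```shell
# Sanity-check GPU access from Docker before pulling any NGC container.
# Assumes the NVIDIA Container Toolkit is installed; the image tag is
# an example and any recent nvidia/cuda base image would do.
if command -v docker >/dev/null 2>&1; then
    docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
else
    echo "Docker not found; install Docker and the NVIDIA Container Toolkit first."
fi
```

If `nvidia-smi` prints your GPU table, the container runtime is GPU-ready; if the command fails, revisit your NVIDIA Container Toolkit installation before proceeding.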
To run a container, use the relevant command described in the Running A Container chapter of the NVIDIA Containers And Frameworks User Guide, specifying the registry, repository, and tag. The NGC Container User Guide has more information on how to use NGC.
Running the container image requires the following steps:
- Select the Tags tab and locate the release of the container image you want to run.
- In the Pull Tag column, click the icon to copy the command.
- Open a command prompt and paste the pull command.
- The pull begins as soon as the command runs; wait for it to complete before continuing.
- Run the container image in either interactive or non-interactive mode.
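The steps above can be sketched as a short shell workflow. The image path and tag below are assumptions for illustration only; copy the exact pull command from the container's Tags tab on NGC.

```shell
# Hypothetical image reference (nvcr.io/<registry>/<repository>:<tag>);
# replace it with the pull command copied from the NGC Tags tab.
IMAGE="nvcr.io/nvidia/clara-agx/agx-pytorch:21.05-1.8-py3"

if command -v docker >/dev/null 2>&1; then
    # Step 1: pull the image and let the download finish.
    docker pull "$IMAGE"

    # Interactive mode: drop into a shell inside the container.
    docker run --rm --gpus all -it "$IMAGE" /bin/bash

    # Non-interactive mode: run a single command and exit.
    docker run --rm --gpus all "$IMAGE" python -c "import torch; print(torch.__version__)"
else
    echo "Docker not found; install Docker and the NVIDIA Container Toolkit first."
fi
```

Interactive mode is convenient for exploration and debugging, while non-interactive mode suits scripted training or inference jobs.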