In this article, we will show the detailed process of setting up a deep learning environment with a CUDA-capable GPU, Anaconda, Jupyter, Keras, and TensorFlow on Windows.
E2E GPU machines provide better performance and cost efficiency than standalone service providers.
For deep learning, NVIDIA GPUs with CUDA cores are preferred over CPUs because those cores are designed for massively parallel workloads: real-time image upscaling, high-definition video rendering, encoding, decoding, and teraflops of floating-point throughput.
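As a rough illustration of why data-parallel hardware matters, the sketch below compares NumPy's vectorized arithmetic against a plain Python loop on the CPU; a GPU takes the same data-parallel idea much further, with thousands of cores working simultaneously:

```python
import time
import numpy as np

def multiply_loop(a, b):
    # One element at a time: no parallelism at all.
    return [a[i] * b[i] for i in range(len(a))]

def multiply_vectorized(a, b):
    # NumPy dispatches to optimized, data-parallel native routines.
    return a * b

if __name__ == "__main__":
    n = 1_000_000
    a = np.random.rand(n)
    b = np.random.rand(n)

    t0 = time.perf_counter()
    multiply_loop(a, b)
    t_loop = time.perf_counter() - t0

    t0 = time.perf_counter()
    multiply_vectorized(a, b)
    t_vec = time.perf_counter() - t0

    print(f"loop: {t_loop:.4f}s  vectorized: {t_vec:.4f}s")
```

On a typical machine the vectorized version is one to two orders of magnitude faster; a GPU applies the same principle at a much larger scale.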
Nevertheless, you should still have a CPU with at least four cores and eight threads (hyperthreading/simultaneous multithreading enabled), because data loading and preprocessing also require substantial parallel processing resources.
NOTE: Your hardware has to support GPU-accelerated deep learning. TensorFlow requires a GPU with a minimum CUDA compute capability of 3.5. You can check your GPU's compute capability and compatibility on the NVIDIA developer website.
- Minimum hardware requirements:
- CPU: Intel Core (Skylake generation) or newer
- GPU: NVIDIA RTX 8000
- RAM: 8 GB dual-channel memory or higher (16 GB recommended)
- Software installation:
The software installation process is a bit finicky, since it is both hardware- and software-specific. You cannot go any further if your dedicated GPU does not have CUDA cores. Be very careful during the following installation steps:
Anaconda will be our main coding terminal for this setup of the deep learning environment. It comes with the Jupyter Notebook and Spyder console for Python. Download your Anaconda installation setup from here.
Screenshot of Anaconda Navigator after installation:
Click here for the detailed installation documents.
1. Install Microsoft Visual Studio
Installing Microsoft Visual Studio is a necessary step before installing the NVIDIA CUDA software. During the CUDA installation, the installer checks for a supported version of Visual Studio on your machine. Download Visual Studio from here.
2. Install NVIDIA CUDA
This is the most significant piece of software: it lets your GPU interact with the deep learning programs you will write from the Anaconda prompt and prepares your GPU for deep learning computations. Install the latest version of the NVIDIA CUDA Toolkit from here. Also read the CUDA Toolkit installation guide here.
NVIDIA CUDA Toolkit is installed.
3. Install cuDNN (required)
cuDNN is the NVIDIA CUDA Deep Neural Network library, a GPU-accelerated library of primitives for deep neural networks. It contains crucial library files without which the TensorFlow environment cannot be created and your GPU will not be used. Download it from here, and learn more about it here.
4. Install the necessary Python libraries in Anaconda:
Launch Anaconda prompt from the Anaconda Navigator.
Run the following commands, pressing Enter after each line:
conda install numpy
conda install pandas
conda install scipy
conda install matplotlib
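A quick way to confirm those installs succeeded is to import each package from Python and print its version. This is a small helper sketch, not part of the official setup:

```python
import importlib

def check_libraries(names=("numpy", "pandas", "scipy", "matplotlib")):
    """Map each library name to its version string, 'unknown' if it has
    no __version__ attribute, or None if it is not installed."""
    versions = {}
    for name in names:
        try:
            mod = importlib.import_module(name)
            versions[name] = getattr(mod, "__version__", "unknown")
        except ImportError:
            versions[name] = None
    return versions

if __name__ == "__main__":
    for name, ver in check_libraries().items():
        print(f"{name}: {ver if ver is not None else 'NOT INSTALLED'}")
```

If any line prints NOT INSTALLED, re-run the corresponding `conda install` command before moving on.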
Now, it’s time to install TensorFlow 2.0 through the Anaconda prompt (on conda, the GPU-enabled build is published as a separate package):
conda install tensorflow-gpu
Tensorflow is now installed.
Now we have to create a TensorFlow environment in Anaconda:
conda create -n py3-tf2.0 python=3.7
# "py3-tf2.0" is the name of the environment in which we will work with TensorFlow; you can give the environment any name. Remember that packages must also be installed inside this environment, so repeat the conda install commands after activating it.
Activate the environment using:
conda activate py3-tf2.0
Now run the following command in a terminal (Command Prompt or PowerShell) window:
python -c "import tensorflow as tf; print(tf.reduce_sum(tf.random.normal([1000, 1000])))"
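A slightly gentler check than the one-liner above is the sketch below, which reports whether TensorFlow is installed and which GPUs it can see. Note that `tf.config.list_physical_devices` is the TensorFlow 2.1+ spelling; TensorFlow 2.0 only has the `tf.config.experimental` variant, so the sketch falls back to it:

```python
def gpu_status():
    """Return (tensorflow_version, gpu_device_names), or (None, []) if
    TensorFlow is not installed in the active environment."""
    try:
        import tensorflow as tf
    except ImportError:
        return None, []
    # TF 2.0 only exposes the experimental spelling of this API.
    list_devices = getattr(tf.config, "list_physical_devices",
                           tf.config.experimental.list_physical_devices)
    return tf.__version__, [d.name for d in list_devices("GPU")]

if __name__ == "__main__":
    version, gpus = gpu_status()
    if version is None:
        print("TensorFlow is not installed in this environment.")
    elif not gpus:
        print(f"TensorFlow {version} found, but no GPU is visible.")
    else:
        print(f"TensorFlow {version} sees GPU(s): {gpus}")
```

If no GPU is visible even though the CUDA Toolkit and cuDNN are installed, revisit the version compatibility notes in the linked installation guides.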
You should see an output screen like this:
From the line highlighted above, you can see that all the library files were opened successfully and the TensorFlow GPU device was created.
Ultimately, the GPU is ready for deep learning computations and neural networks, and we can train object recognition or speech recognition models on it with ease. The installation process is quite cumbersome, and mismatched software versions can easily halt the whole process. It is therefore advisable to go through all of these steps carefully and to follow the instructions in the respective hyperlinks for any debugging purpose.
- Build your model:
Numerous open-source deep learning models are available on the internet. Some of the most noteworthy are the YOLOv4 object recognition model and the many models built with Keras and PyTorch. TensorFlow underlies many of these models, and they rely on the CUDA cores (and Tensor Cores) of the GPU to run complex computational algorithms in a fraction of the time a CPU would take. NVIDIA has made huge progress in AI and deep learning through its CUDA GPUs; the majority of supercomputers around the world use NVIDIA GPUs as their main computational workhorses, paving the way for the future of AI.
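As a minimal sketch of what building such a model looks like in Keras (assuming TensorFlow 2.x is installed as above; the layer sizes here are arbitrary illustration, not a recommended architecture):

```python
def build_classifier(input_dim=784, num_classes=10):
    """A tiny fully connected Keras classifier; sizes are illustrative only."""
    from tensorflow import keras

    model = keras.Sequential([
        keras.layers.Dense(128, activation="relu", input_shape=(input_dim,)),
        keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # With CUDA and cuDNN set up as above, model.fit() will automatically
    # place these computations on the GPU.
    return model
```

Calling `model.fit(x_train, y_train)` on such a model runs its matrix operations on the GPU whenever TensorFlow detects one, with no code changes required.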
In this article, we have gone through the extensive process of installing NVIDIA CUDA, a parallel computing platform for the GPU (Graphics Processing Unit), and setting up a GPU-accelerated deep learning environment. We hope this gives the reader a solid understanding of the process.
Purchasing GPUs for on-premises use would be an overhead cost, whereas on the cloud it is simple with a pay-as-you-go model. If you're looking for a cloud GPU, I would recommend E2E Cloud, which offers the latest GPUs at a very economical cost. Click this link for a free trial – https://bit.ly/E2ECloudSignup