Every now and then, a library or framework emerges that changes the way we think about deep learning and advances the field by making experiments faster and less costly to run. Here we will be discussing one such library: PyTorch.
Overview-
PyTorch is a Python library and framework that makes deep learning projects easier to build. PyTorch's approachability and ease of use drew a large number of early adopters from the academic, research, and development communities, and in the years since its first release it has grown into one of the most popular deep learning tools across a wide range of applications.
PyTorch has two primary features at its core: an n-dimensional Tensor, similar to a NumPy array but able to run on GPUs, and automatic differentiation for building and training neural networks. Beyond these, PyTorch offers a number of other features, which are detailed below in this blog.
PyTorch Tensor-
NumPy is a fantastic framework, but it cannot use GPUs to accelerate its numerical computations. For modern deep neural networks and today's parallel computing workloads, GPUs frequently deliver speedups of 50x or more, so this is a significant limitation.
PyTorch also offers distributed training, allowing researchers, academic practitioners, and developers to parallelize their work: multiple GPUs can process larger batches of input data at once, which makes training bigger models feasible and reduces computation time.
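As a minimal sketch of the idea (assuming a machine with more than one CUDA GPU; the model and batch sizes here are illustrative), a model can be wrapped in torch.nn.DataParallel so that each input batch is split across the visible GPUs; the fully distributed torch.nn.parallel.DistributedDataParallel API follows a similar wrap-the-model pattern across processes or machines:

```python
import torch
import torch.nn as nn

# A toy model; any nn.Module would work here.
model = nn.Linear(128, 10)

if torch.cuda.is_available() and torch.cuda.device_count() > 1:
    # DataParallel splits each input batch across the visible GPUs
    # and gathers the outputs back onto the default device.
    model = nn.DataParallel(model)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

# A batch of 64 samples is sliced across the GPUs automatically.
inputs = torch.randn(64, 128, device=device)
outputs = model(inputs)
print(outputs.shape)  # torch.Size([64, 10])
```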
The Tensor, the most fundamental PyTorch concept, addresses the GPU limitation. A PyTorch Tensor is essentially the same as a NumPy array: an n-dimensional array, with many PyTorch functions for operating on it. Behind the scenes, Tensors can keep track of a computational graph and gradients, but they are also useful as a general tool for scientific computing. Unlike NumPy arrays, PyTorch Tensors can use GPUs to accelerate their numeric computations; to run a PyTorch Tensor on a GPU, you simply need to specify the appropriate device.
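A short sketch of this (assuming the standard "cuda" device string, with a CPU fallback when no GPU is present): creating a Tensor on a GPU only requires passing the device, and conversion to and from NumPy is straightforward.

```python
import numpy as np
import torch

# Pick the GPU if one is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A PyTorch Tensor looks and behaves much like a NumPy array...
x = torch.randn(3, 4, device=device)
y = torch.randn(4, 5, device=device)
z = x @ y          # matrix multiplication runs on the selected device
print(z.shape)     # torch.Size([3, 5])

# ...and converts to and from NumPy easily (NumPy arrays live on the CPU).
a = np.ones((3, 4))
t = torch.from_numpy(a)   # shares memory with the NumPy array
back = t.numpy()          # back to NumPy
```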
Automatic Differentiation-
Automatic differentiation is the technique PyTorch uses to record all of the operations performed on Tensors and then compute gradients by replaying them backward. Without it, developers training neural networks would have to implement both the forward and the backward pass by hand; writing the forward pass manually is straightforward, but deriving and coding the backward pass quickly becomes tricky and error-prone. Taking care of the backward pass automatically is exactly what the autograd package in PyTorch does.
When you use autograd, your network's forward pass builds a computational graph whose nodes are Tensors and whose edges are the functions that produce output Tensors from input Tensors. Because this graph is recorded during the forward pass, gradients can then be computed simply by backpropagating through it, with no hand-written backward pass needed.
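As a minimal sketch of autograd: Tensors created with requires_grad=True record the operations applied to them during the forward pass, and calling .backward() on a scalar result backpropagates through the recorded graph to fill in each leaf Tensor's .grad.

```python
import torch

# requires_grad=True tells autograd to record operations on these Tensors.
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
w = torch.tensor([0.5, 0.5, 0.5], requires_grad=True)

# Forward pass: autograd builds the graph as these operations run.
y = (w * x).sum()

# Backward pass: gradients of y with respect to the leaves are computed.
y.backward()

print(x.grad)  # dy/dx = w -> tensor([0.5000, 0.5000, 0.5000])
print(w.grad)  # dy/dw = x -> tensor([1., 2., 3.])
```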
Flow control and weight sharing-
As an example of dynamic graphs and weight sharing, PyTorch's tutorials implement a rather unusual model: a third-to-fifth order polynomial that, on each forward pass, chooses a random integer between 3 and 5 and uses that many orders, reusing the same weight multiple times to compute the optional fourth and fifth order terms. The loop in this model is written with ordinary Python flow control, and weight sharing is achieved simply by referring to the same parameter several times, as shown below.
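A sketch along the lines of the official PyTorch example (the class name and shapes here are illustrative): the forward pass uses an ordinary Python loop with a random bound, and the same parameter e is reused for whichever higher-order terms are included on that pass.

```python
import random
import torch

class DynamicPolynomial(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # Coefficients of the polynomial; e is shared across the
        # optional fourth and fifth order terms.
        self.a = torch.nn.Parameter(torch.randn(()))
        self.b = torch.nn.Parameter(torch.randn(()))
        self.c = torch.nn.Parameter(torch.randn(()))
        self.d = torch.nn.Parameter(torch.randn(()))
        self.e = torch.nn.Parameter(torch.randn(()))

    def forward(self, x):
        # The third-order part is always computed.
        y = self.a + self.b * x + self.c * x ** 2 + self.d * x ** 3
        # Plain Python flow control: on each forward pass, randomly include
        # the 4th and/or 5th order terms, reusing the same weight e each time.
        for exp in range(4, random.randint(4, 6)):
            y = y + self.e * x ** exp
        return y

model = DynamicPolynomial()
x = torch.linspace(-1, 1, 100)
y = model(x)  # the graph built here depends on the random choice above
```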
TorchScript-
TorchScript allows you to turn PyTorch code into serializable and optimizable models. Any TorchScript program can be saved from a Python process and loaded into another process that has no Python environment at all.
PyTorch provides tools for converting a model from a pure Python program into a TorchScript program that can run in a standalone application, such as one written in C++. This lets users train models in PyTorch with familiar Python tooling and then export them to a production environment where Python programs may be a poor fit because of performance and multi-threading constraints.
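A brief sketch of the conversion step (the model and file name are arbitrary): torch.jit.script turns a module into a TorchScript program that can be saved and later loaded, for example from a C++ application via LibTorch, without a Python interpreter.

```python
import torch
import torch.nn as nn

# An ordinary PyTorch model.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

# Convert to TorchScript. torch.jit.trace(model, example_input) is an
# alternative for models without data-dependent control flow.
scripted = torch.jit.script(model)

# Serialize to disk; this file can be loaded from C++ (LibTorch) or from
# another Python process with torch.jit.load("model_scripted.pt").
scripted.save("model_scripted.pt")
```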
Dynamic Computation Graphs-
In many deep learning frameworks, the computational graph is defined up front, separately from the host language, and executed by its own runtime. This unusual design is largely motivated by efficiency and optimization: the framework keeps a computational graph that specifies the order in which the computations in a model must be performed. Researchers have found it difficult to try out more creative ideas under this inconvenient arrangement.
Computational graphs come in two types: static and dynamic. With a static graph, all variables and their sizes must be defined and connected at the start, and the graph is then run inside a fixed (non-changing) session. This can be inconvenient for some applications, such as NLP, where inputs arrive in expressions of varying length and a dynamic computational graph is therefore critical.
PyTorch, on the other hand, employs a dynamic graph: the computational graph is built on the fly as operations on variables are executed, and it is rebuilt from scratch on every training iteration. Dynamic graphs are flexible, allowing us to modify and inspect the graph's internals at any moment.
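A small sketch of this: the graph exists only as a by-product of running ordinary Python code, and each Tensor's grad_fn attribute exposes the operation that produced it, so the graph can be inspected as soon as it is built.

```python
import torch

x = torch.randn(3, requires_grad=True)

# The graph is built on the fly as these lines execute.
y = x * 2
z = y.sum() if y.mean() > 0 else (-y).sum()  # ordinary Python branching

# Each intermediate Tensor records the operation that created it.
print(y.grad_fn)  # e.g. <MulBackward0 ...>
print(z.grad_fn)  # SumBackward0 in either branch

z.backward()      # a fresh graph is traced again on the next iteration
print(x.grad)
```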
Introducing dynamic computational graphs is like introducing the idea of a procedure when all you had before were "goto" statements: the notion of a procedure lets us write our programs in a composable way. One may of course argue that deep learning architectures have no need for a stack, but recent research on architectures such as Stretcher networks and Hyper networks suggests otherwise: studies indicate that stack-like context switching is beneficial in some networks.
nn Module-
Autograd and computational graphs are a powerful paradigm for automatically defining sophisticated operators and computing derivatives; nevertheless, raw autograd can be too low-level for large neural networks. When building neural networks, we usually think of arranging the computation into layers, some of which contain learnable parameters that will be adjusted during training.
In such cases, we can make use of PyTorch's nn module. The nn package defines a set of Modules, which are roughly equivalent to neural network layers. A Module receives input Tensors and computes output Tensors, but it can also hold internal state such as Tensors containing learnable parameters.
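A minimal sketch of the nn package in use (layer sizes and data here are illustrative): modules are stacked into a model, a loss is computed on its output, and the learnable parameters registered inside the modules are updated by an optimizer.

```python
import torch
import torch.nn as nn

# Stack layers (modules) into a model; each Linear holds learnable
# weight and bias Tensors as parameters.
model = nn.Sequential(
    nn.Linear(10, 32),
    nn.ReLU(),
    nn.Linear(32, 1),
)

loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)

# One toy training step on random data.
x = torch.randn(16, 10)
target = torch.randn(16, 1)

pred = model(x)                 # forward pass through the stacked modules
loss = loss_fn(pred, target)

optimizer.zero_grad()
loss.backward()                 # autograd computes gradients for all parameters
optimizer.step()                # parameters are updated in place
```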
Conclusion-
In this blog, we looked at how PyTorch differs from libraries like NumPy and at the special features it offers, including Tensor computation with substantial GPU acceleration and a tape-based autograd system for building deep neural networks.
We also covered other features such as flow control and weight sharing, TorchScript, dynamic computation graphs, and the nn module.
This overview should be enough to give a general idea of what PyTorch is and how academics, researchers, and developers can use it to build better projects.