The human learning process can be described as a gradual curve. Babies progress steadily over the course of months, learning how to sit, crawl, stand, walk, run, and so on. We learn any new subject or concept in a similar way: first we start as beginners in the field, and then we continue practicing until we have mastered it.
Even learning a language begins in a similar manner. First, we learn the alphabet, then we understand words and terms, and finally we practice until we can form sentences. This is how we, as humans, learn the elementary concepts of almost everything.
The Progressive Growing Generative Adversarial Network, or ProGAN, follows a similar learning method when it learns patterns. A ProGAN begins at the lowest level of detail and then proceeds until it has reached a higher level of understanding. Before discussing ProGANs further, however, we should understand what GANs are.
Definition of GANs
Generative adversarial network (GAN) modeling is an unsupervised learning task in machine learning in which a system discovers the regularities and patterns in the supplied input data. The trained model can then automatically generate new output data that resembles the original dataset.
More and more researchers are gaining interest in GANs and are increasingly motivated to use these generative models to produce convincing synthetic examples that can tackle problems in many fields.
Noteworthy examples of GAN applications are image-to-image translation tasks such as producing different scenes, generating naturalistic images of objects, transforming a summer scene into a winter one or a day scene into night, and even creating images of human faces that other people cannot distinguish from photographs.
Different types of GANs
Some of the best-known types of GANs are listed below:
- Deep Convolutional Generative Adversarial Network (DCGAN)
- Auxiliary Classifier Generative Adversarial Network (AC-GAN)
- Generative Adversarial Network (GAN)
- Big Generative Adversarial Network (BigGAN)
- Stacked Generative Adversarial Network (StackGAN)
- Cycle-Consistent Generative Adversarial Network (CycleGAN)
- Context Encoders
- Conditional Generative Adversarial Network (cGAN)
- Progressive Growing Generative Adversarial Network (Progressive GAN)
- Information Maximizing Generative Adversarial Network (InfoGAN)
- Style-Based Generative Adversarial Network (StyleGAN)
- Wasserstein Generative Adversarial Network (WGAN)
The Progressive Growing Generative Adversarial Network, or ProGAN, is an extension of the GAN training process introduced by Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. With the help of progressive growing, generator models can train with a stability that in turn allows them to generate large, high-quality images.
The training process in a ProGAN starts with small images; blocks of layers are then added step by step to increase the output size of the generator model. At the same time, the discriminator model is enlarged correspondingly, until the desired output resolution is reached.
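The resulting training schedule can be sketched as a simple resolution ladder. This is a minimal illustration, assuming the commonly used starting size of 4×4 and a target of 1024×1024; the function name is our own, not part of any library.

```python
# Sketch of ProGAN's resolution schedule: training begins with tiny images
# and doubles the output size each time a new block of layers is added.
# Start/target sizes here are illustrative assumptions.

def resolution_schedule(start=4, target=1024):
    """Return the sequence of image resolutions visited during training."""
    resolutions = []
    res = start
    while res <= target:
        resolutions.append(res)
        res *= 2  # each new block doubles the height and width
    return resolutions

print(resolution_schedule())  # [4, 8, 16, 32, 64, 128, 256, 512, 1024]
```

Each entry in the list corresponds to one growth stage; the generator and discriminator are both extended when moving to the next resolution.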
This approach has been extremely successful at producing high-quality synthetic images that are remarkably realistic. Progressive Growing GAN training relies on four major techniques:
- Progressive growing of both the generator and discriminator models
- Minibatch standard deviation on the discriminator
- Pixelwise feature normalization (PixelNorm)
- Equalized learning rate
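Two of these techniques are compact enough to sketch directly. The following is a simplified NumPy illustration (not the authors' implementation) of PixelNorm, which normalizes each pixel's feature vector across channels, and of the minibatch standard deviation layer, which appends a batch-diversity statistic as an extra feature map for the discriminator:

```python
import numpy as np

def pixel_norm(x, eps=1e-8):
    """PixelNorm: normalize each pixel's feature vector to unit average
    magnitude across the channel axis. x has shape (batch, channels, h, w)."""
    return x / np.sqrt(np.mean(x ** 2, axis=1, keepdims=True) + eps)

def minibatch_stddev(x):
    """Append one feature map holding the average per-feature standard
    deviation across the minibatch, giving the discriminator a simple
    cue about sample diversity."""
    std = np.std(x, axis=0)             # std dev over the batch dimension
    mean_std = np.mean(std)             # collapse to a single scalar
    n, _, h, w = x.shape
    extra = np.full((n, 1, h, w), mean_std, dtype=x.dtype)
    return np.concatenate([x, extra], axis=1)

x = np.random.randn(8, 16, 4, 4)        # toy activations: batch of 8
y = pixel_norm(x)                       # same shape, normalized per pixel
z = minibatch_stddev(x)                 # one extra channel: (8, 17, 4, 4)
```

In the original design, PixelNorm is applied after the generator's convolutions to keep feature magnitudes from escalating, while the minibatch standard deviation layer is inserted near the end of the discriminator.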
How does ProGAN work?
Like a conventional GAN, the Progressive Growing Generative Adversarial Network uses a generator model and a discriminator model. Unlike a conventional GAN, however, it begins training on extremely small images, such as 4×4 pixels.
As training progresses, the ProGAN methodically attaches new blocks of layers to both the generator model and the discriminator model. This gradual addition of layers lets the models first learn coarse-level attributes efficiently, and then, as the resolution grows, learn increasingly fine details of the images.
When a new block of layers is introduced, a skip connection is used to blend it in smoothly. The skip connection links the new block to the output of the generator (or the input of the discriminator) and combines it with the existing output or input layer through a weight that is gradually increased, controlling the impact of the new block.
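This blending step is often called a "fade-in." A minimal NumPy sketch, assuming nearest-neighbour upsampling and a blending weight `alpha` that ramps from 0 to 1 over the transition (function names are our own):

```python
import numpy as np

def upscale2x(x):
    """Nearest-neighbour 2x upsampling for (batch, channels, h, w) arrays."""
    return x.repeat(2, axis=2).repeat(2, axis=3)

def fade_in(old_rgb, new_rgb, alpha):
    """Blend the upscaled output of the previous stage with the output of
    the newly added block; alpha ramps from 0 to 1 during the transition,
    so the new block's influence grows gradually."""
    return alpha * new_rgb + (1.0 - alpha) * old_rgb

# Example: transitioning the generator from 4x4 to 8x8 output.
old = np.random.randn(1, 3, 4, 4)   # previous stage's 4x4 RGB output
new = np.random.randn(1, 3, 8, 8)   # newly added block's 8x8 RGB output
blended = fade_in(upscale2x(old), new, alpha=0.3)
```

At `alpha = 0` the network behaves exactly like the previous, smaller stage; at `alpha = 1` the new block has fully taken over, so training is never destabilized by an abrupt architectural change.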
Every living thing learns new skills step by step, either adapting from previous mistakes or starting again from scratch, and building an understanding of each individual concept gradually.
Likewise, ProGAN training follows a similar approach: the model begins at the lowest pixel resolution and works its way up to higher resolutions, learning increasingly detailed patterns in order to produce high-quality results.