Machine learning and deep learning techniques are largely designed so that we must train and test a model from scratch for every new problem we want to solve, since each problem has its own dataset and class distribution. Transfer learning, in contrast, is the application of skills and knowledge acquired on one task to another, related learning task or situation.
The concept of transfer learning can be understood through the way humans learn from subject matter experts. The analogy goes something like this: a teacher has deep expertise in the subject he or she teaches, and it is beneficial for students to receive that information through lectures or classes. In this case, knowledge is ‘transferred’ from teacher to student.
Other examples of transfer learning could be:
- Knowing how to play classic piano helps in learning jazz piano
- Knowing math and statistics helps in learning machine learning techniques
In this blog, we are going to cover transfer learning in detail: what it is, its types, and its real-world applications.
Table of Contents:
- Understanding Transfer learning
- Transfer learning for deep learning
- Types of Transfer learning
- Applications of transfer learning
- Advantages of Transfer learning
- Real-world use-cases of Transfer learning
Understanding Transfer learning:
To understand transfer learning, we first need to see how it differs from the traditional approach of building and training machine learning models.
Traditional machine learning is isolated: the knowledge gained from one task on one dataset stays with that model. No knowledge or skills are transferred or reused for related tasks on other datasets.
In transfer learning, you can leverage knowledge, in the form of parameters or weights, from previously trained models to train newer models, and even solve problems with very little available data, which is often the case in industry and business environments.
To apply transfer learning methods in business or industry use-cases one should answer three questions:
- What to transfer: which part of the existing knowledge should be transferred to the new task
- When to transfer: for which tasks transferring knowledge actually improves accuracy
- How to transfer: which exact method to use to transfer knowledge from one model to another.
Transfer learning for Deep Learning
Deep learning models are examples of inductive learning. The goal of an inductive learning method is to infer a mapping from a set of training instances; in classification scenarios, for instance, the model learns how to map input features to class labels. To generalize successfully to new data, such a learner relies on a set of assumptions about the distribution of the training data. This set of assumptions is called the inductive bias. The inductive bias can be characterized by several properties, such as the hypothesis space the model is confined to and the search procedure through that hypothesis space. These biases therefore affect what, and how, the model learns about the given task and domain.
Transfer learning allows us to reuse a trained model instead of training from scratch:
- Take a network trained on a different domain for a different source task
- Adapt it for your new target domain or the problem at hand
Now, let’s look at some of the strategies used in transfer learning for deep learning.
- Feature extraction:
Deep learning models use layered architectures that learn different features at different layers (hierarchical representations of features). These layers are finally connected to a last layer (often a fully connected layer, in the case of supervised learning) to obtain the final output.
“The key idea here is to just leverage the pre-trained model’s weighted layers to extract features but not to update the weights of the model’s layers during training with new data for the new task.”
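The idea above can be sketched in a few lines of numpy. This is a toy illustration, not a real pre-trained network: `W_pretrained` stands in for the frozen pre-trained layers, and only the newly added head `w_head` is trained on the new task.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pre-trained feature extractor: its weights are fixed
# and never updated while training on the new task.
W_pretrained = rng.normal(size=(4, 3))

def extract_features(X):
    # Frozen layer: forward pass only, no gradient updates.
    return np.tanh(X @ W_pretrained)

# New task data (toy regression problem).
X = rng.normal(size=(100, 4))
y = rng.normal(size=100)

# Only the new head is trainable.
w_head = np.zeros(3)
lr = 0.1
for _ in range(200):
    feats = extract_features(X)           # features from frozen layers
    preds = feats @ w_head
    grad = feats.T @ (preds - y) / len(y)
    w_head -= lr * grad                   # update only the new head
```

After training, `W_pretrained` is bit-for-bit unchanged; all the learning for the new task happened in `w_head`.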
- Fine tuning:
Fine-tuning consists of unfreezing a few of the top layers of the frozen model base used for feature extraction and jointly training the newly added part of the model together with these top layers. It is called fine-tuning because it slightly adjusts the more abstract representations of the reused model in order to make them more relevant for the problem at hand.
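A minimal numpy sketch of this, under the same toy setup as before: `W_bottom` plays the role of the still-frozen base, `W_top` is the unfrozen top layer being fine-tuned with a small learning rate, and `w_head` is the newly added part.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "pre-trained" model with a bottom layer and a top layer.
W_bottom = rng.normal(size=(4, 8))    # stays frozen
W_top = rng.normal(size=(8, 3))       # unfrozen for fine-tuning
W_top_before = W_top.copy()

w_head = np.zeros(3)                  # newly added part of the model

X = rng.normal(size=(64, 4))
y = rng.normal(size=64)

lr_head, lr_top = 0.1, 0.001          # small learning rate for fine-tuning
for _ in range(100):
    h1 = np.tanh(X @ W_bottom)        # frozen layer: forward pass only
    h2 = np.tanh(h1 @ W_top)          # fine-tuned top layer
    preds = h2 @ w_head
    err = (preds - y) / len(y)
    # Backpropagate through the head and the unfrozen top layer only.
    grad_head = h2.T @ err
    grad_h2 = np.outer(err, w_head) * (1 - h2 ** 2)
    grad_top = h1.T @ grad_h2
    w_head -= lr_head * grad_head
    W_top -= lr_top * grad_top        # slight adjustment: fine-tuning
```

The tiny `lr_top` is the point: the top layers move only slightly away from their pre-trained values, while the new head learns freely.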
Now, let’s look at the pre-trained models commonly used in transfer learning for deep learning model development:
Pre-trained models are available to everyone through a variety of channels. The popular deep learning Python library ‘keras’ provides an interface for downloading various well-known models, and most pre-trained models are also available online since the majority of them are open-sourced.
For computer vision:
- Inception V3
- VGG16
For Natural language processing:
- Word2vec and FastText embeddings
- BERT
- GPT-3
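As an example, a pre-trained Inception V3 can be loaded through Keras roughly like this (a sketch assuming TensorFlow/Keras is installed; `weights=None` is used only to keep the snippet offline, whereas `weights="imagenet"` would download the actual pre-trained ImageNet weights):

```python
from tensorflow.keras.applications import InceptionV3

# include_top=False drops the original ImageNet classification head,
# so the network can be reused as a feature extractor for a new task.
# In practice you would pass weights="imagenet" to load the
# pre-trained weights; weights=None keeps this snippet offline.
base_model = InceptionV3(weights=None, include_top=False,
                         input_shape=(224, 224, 3))
base_model.trainable = False  # freeze all pre-trained layers
```

A new classification head can then be stacked on top of `base_model` and trained on the target dataset.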
Types of Transfer Learning
The terms related to transfer learning have been used broadly, and often interchangeably, throughout the literature. As a result, differentiating between transfer learning, domain adaptation, and multi-task learning can be difficult at times.
- Domain Adaptation:
Domain adaptation usually refers to scenarios where the marginal probability distributions of the source and target domains differ, i.e., P(Xₛ) ≠ P(Xₜ).
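A toy illustration of differing marginals, using a hypothetical one-dimensional feature that is Gaussian in both domains but with shifted means:

```python
import numpy as np

rng = np.random.default_rng(0)

# Source and target domains share the task, but their input
# distributions P(X_s) and P(X_t) differ (here, shifted means).
X_source = rng.normal(loc=0.0, scale=1.0, size=5000)
X_target = rng.normal(loc=2.0, scale=1.0, size=5000)

# A model fit on X_source sees inputs centred near 0, but at
# deployment time receives inputs centred near 2.
```

A classifier trained only on `X_source` would face inputs at test time that look nothing like its training distribution, which is exactly the situation domain adaptation techniques address.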
- Domain Confusion:
We talked about the potential applications of feature-representation transfer. It is worth emphasising once more that different layers in a deep learning network capture different feature sets. This fact allows us to learn domain-invariant features and enhance their portability across domains. Rather than letting the model learn just any representation, we push both domains' representations to be as similar as possible. This can be done by applying specific pre-processing steps directly to the representations. The idea behind this technique is to add another objective to the source model that encourages similarity by confusing the domain itself, hence the name domain confusion.
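A highly simplified sketch of this idea follows. It is not a real domain-adversarial network: here the extra objective simply penalizes the distance between the mean feature representations of the two domains, and we minimize that penalty by finite-difference gradient descent. In a real model this term would be added to the source-task loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Inputs from two domains with shifted distributions.
X_s = rng.normal(0.0, 1.0, size=(200, 4))
X_t = rng.normal(1.5, 1.0, size=(200, 4))

W = rng.normal(size=(4, 3))  # shared feature extractor

def confusion_penalty(W):
    # Distance between the mean features of the two domains; driving
    # this toward zero makes the representations hard to tell apart.
    f_s = np.tanh(X_s @ W).mean(axis=0)
    f_t = np.tanh(X_t @ W).mean(axis=0)
    return np.sum((f_s - f_t) ** 2)

lr, eps = 0.1, 1e-5
before = confusion_penalty(W)
for _ in range(150):
    grad = np.zeros_like(W)
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            Wp = W.copy()
            Wp[i, j] += eps
            grad[i, j] = (confusion_penalty(Wp) - confusion_penalty(W)) / eps
    W -= lr * grad
after = confusion_penalty(W)
# `after` is smaller than `before`: the two domains' representations
# have been pushed closer together.
```

Real domain-confusion methods use a learned domain classifier and an adversarial objective instead of this mean-matching penalty, but the shape of the idea, an extra loss term that rewards indistinguishable representations, is the same.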
- Multitask learning:
In this technique, several tasks are learned simultaneously without distinguishing between source and target. Unlike transfer learning, where the learner initially has no knowledge of the target task, here the learner acquires information about multiple tasks at once.
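A minimal numpy sketch of multitask learning, assuming two toy regression tasks: a shared layer `W_shared` is updated by the errors of both tasks at once, while each task keeps its own head.

```python
import numpy as np

rng = np.random.default_rng(0)

# One shared input, two tasks learned simultaneously.
X = rng.normal(size=(200, 4))
y_task1 = X @ np.array([1.0, -1.0, 0.5, 0.0]) + 0.01 * rng.normal(size=200)
y_task2 = X @ np.array([0.5, 0.5, -1.0, 1.0]) + 0.01 * rng.normal(size=200)

W_shared = rng.normal(size=(4, 6)) * 0.1  # shared hidden layer
w1 = np.zeros(6)                          # task-1 head
w2 = np.zeros(6)                          # task-2 head

lr = 0.05
for _ in range(500):
    H = X @ W_shared                      # shared representation
    e1 = (H @ w1 - y_task1) / len(X)
    e2 = (H @ w2 - y_task2) / len(X)
    w1 -= lr * (H.T @ e1)
    w2 -= lr * (H.T @ e2)
    # Joint training: both tasks' errors update the shared layer.
    W_shared -= lr * (X.T @ (np.outer(e1, w1) + np.outer(e2, w2)))

mse1 = np.mean((X @ W_shared @ w1 - y_task1) ** 2)
mse2 = np.mean((X @ W_shared @ w2 - y_task2) ** 2)
```

The key line is the shared-layer update: gradients from both tasks flow into `W_shared`, so each task benefits from representation learning driven by the other.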
Applications of Transfer learning
- Transfer learning for NLP:
Textual data poses a variety of challenges for ML and deep learning, and is typically vectorized or converted using various techniques. Embeddings such as Word2vec and FastText have been created from different training datasets and can be transferred to new tasks.
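A toy sketch of how pre-trained word embeddings are reused. The tiny `embeddings` dict here is a hypothetical stand-in for a real Word2vec or FastText vocabulary, which would contain hundreds of dimensions and millions of words.

```python
import numpy as np

# Hypothetical pre-trained word vectors (stand-in for Word2vec/FastText).
embeddings = {
    "good": np.array([0.8, 0.1, 0.0]),
    "great": np.array([0.7, 0.2, 0.1]),
    "bad": np.array([-0.6, 0.0, 0.3]),
    "movie": np.array([0.0, 0.9, -0.2]),
}

def vectorize(sentence):
    # Average the pre-trained vectors of known words: the downstream
    # model starts from transferred knowledge instead of raw text.
    vecs = [embeddings[w] for w in sentence.split() if w in embeddings]
    return np.mean(vecs, axis=0)

v = vectorize("good movie")
```

A downstream classifier trained on these averaged vectors inherits the semantic structure of the pre-trained embedding space, even if its own labelled dataset is small.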
- Transfer learning for Audio/Speech:
Automatic Speech Recognition (ASR) models created for English have been utilized successfully to boost German speech recognition performance. Another area where transfer learning has been quite helpful is automatic speaker identification.
- Transfer learning for Computer Vision:
Using various CNN architectures, deep learning has been applied quite successfully to a variety of computer vision tasks, including object detection and recognition.
Advantages of transfer learning
We have seen the types of transfer learning and its applications; now let’s look at some of its advantages:
- It helps in solving complex real-world problems with several constraints.
- Helps solve problems where little or no labelled data is available
- Makes it easy to transfer knowledge, in the form of parameters, from one model to another for a specific problem or task.
Real-world use-cases of Transfer learning
- Image classification and object recognition tasks, such as dog-vs-cat classification or classifying natural images (forest, mountain, glacier, etc.)
- Personalized display of products in shopping malls, offline stores, etc.
- Multiclass image classification with little available data
- Spam detection and filtering using pre-trained BERT and GPT-3 models
- Image captioning using VGG16 as an encoder and BERT as a decoder
In this article, we developed a detailed understanding of transfer learning techniques and the types of transfer learning available in deep learning. We also saw some real-world applications and use-cases of transfer learning.