Introduction
In machine learning there are two closely related but opposing ideas: the non-generalization principle and transfer learning. These are two different things: the non-generalization principle says that what a model learns in one context, or from one dataset, does not automatically carry over to an unrelated context. Transfer learning, by contrast, is a technique for reusing the knowledge a model has already acquired as the starting point for a new, related task. Let’s explore these terms, their specific uses, and the techniques available to you when working with them.
Transfer learning is the process of improving a model’s performance on a new task by reusing what it learned on a related one. It is not a universal shortcut: it only works when the source and target tasks have something in common. In this article, we will discuss the non-generalization principle, when to adopt transfer learning, and how transfer learning works, with some examples.
Table of contents:
- The principle of Non-generalization and Transfer learning technique
- What is transfer learning and how does it work?
- How to use transfer learning
- Examples of transfer learning
- Conclusion
- The principle of Non-generalization and Transfer learning technique
For transfer learning to be successful, generalizable properties must be learned by the initial model and applied to the second task. This means that the datasets used for the two tasks must also be comparable.
Data scientists might assume that models generalize across domains on their own; they do not, and that assumption should be avoided when training machine learning models. It is not possible to produce a model for one domain using data from another: a model cannot be used to predict sales, for example, if it was trained on a weather dataset. Both theoretically and practically, this is well established. Ideally, then, every model would be trained on data from its own domain; yet the shortage of domain-specific datasets is one of the major challenges in data science, and it has made data science programs difficult to execute in domains where data is scarce. This is the gap transfer learning tries to fill.
The non-generalization approach is a bit like independent study: you don’t assume that what worked in one course will work in the next. You recognize that each new context is different and adapt to it as best you can. You don’t try to cover every course in a fixed order or apply every concept in a fixed way; you start with a basic knowledge of the subject and work through its concepts on their own terms. The basic idea is that you don’t generalize blindly from one situation to the next: you adapt, and you try new things until you’ve developed a genuine understanding of the new situation.
- What is transfer learning and how does it work?
Transfer learning is a machine learning method where a model developed for a task is reused as the starting point for a model on a second task [1].
Applying the knowledge of how to recognize cars to the task of recognizing trucks is one example. Another is using the knowledge gained from identifying phones to recognize tablets, or from identifying books to recognize textbooks. Instead of starting from scratch, we keep using the patterns we have already mastered on a similar task. In effect, transfer learning tries to avoid re-solving problems that have already been solved, allowing for a few small variations.
- How to use transfer learning
Transfer learning can be applied in one of two ways:
- Develop model approach
- Pre-trained model approach
In the first approach, you start with a predictive modeling problem for which you have an abundance of data, where there is some relationship between the input data, the output data, and the context of what is being learned. You then develop a source model from the available data that maps inputs to outputs. Save this model and reuse it for a similar task at hand: you can load it and, with some fine-tuning, apply it to a related problem in the same domain, as sketched below.
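Here is a minimal sketch of the develop-model approach, assuming TensorFlow/Keras; the synthetic data, layer sizes, file name, and class counts are all illustrative stand-ins for a real source and target dataset:

```python
import numpy as np
from tensorflow import keras

# Synthetic stand-ins for an abundant source dataset (10 classes)
# and a small, related target dataset (3 classes).
x_src = np.random.rand(1000, 64, 64, 3).astype("float32")
y_src = np.random.randint(0, 10, size=(1000,))
x_tgt = np.random.rand(100, 64, 64, 3).astype("float32")
y_tgt = np.random.randint(0, 3, size=(100,))

# Develop the source model: map inputs to outputs on the source task.
inputs = keras.Input(shape=(64, 64, 3))
h = keras.layers.Conv2D(32, 3, activation="relu")(inputs)
h = keras.layers.MaxPooling2D()(h)
h = keras.layers.Conv2D(64, 3, activation="relu")(h)
h = keras.layers.GlobalAveragePooling2D()(h)
outputs = keras.layers.Dense(10, activation="softmax")(h)
source_model = keras.Model(inputs, outputs)
source_model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
source_model.fit(x_src, y_src, epochs=1, verbose=0)

# Save the source model so it can be reused later.
source_model.save("source_model.keras")

# Reuse: load the saved model, keep its feature-extracting layers,
# and attach a new output head for the related 3-class target task.
base = keras.models.load_model("source_model.keras")
features = keras.Model(base.input, base.layers[-2].output)
features.trainable = False  # freeze while the new head trains

head_in = keras.Input(shape=(64, 64, 3))
z = features(head_in, training=False)
target_out = keras.layers.Dense(3, activation="softmax")(z)
target_model = keras.Model(head_in, target_out)
target_model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Fine-tune on the smaller target dataset.
target_model.fit(x_tgt, y_tgt, epochs=1, verbose=0)
```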
In the second approach, you leverage models that have already been pre-trained by a third party, such as researchers, open-source projects, or organizations. You can reuse those models and fine-tune them for the task you have at hand. Make sure the resulting model generalizes well to your problem and task.
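A sketch of the pre-trained model approach, again assuming Keras; ResNet50 with ImageNet weights is one common choice, and the 5-class head and the commented training call are placeholders for your own task and data:

```python
from tensorflow import keras

# Load a pre-trained model published by a third party: ResNet50 with
# ImageNet weights, dropping its ImageNet-specific classification head.
base = keras.applications.ResNet50(
    weights="imagenet", include_top=False,
    input_shape=(224, 224, 3), pooling="avg",
)
base.trainable = False  # freeze the pre-trained weights at first

# Attach a new head for the task at hand (a hypothetical 5-class problem).
inputs = keras.Input(shape=(224, 224, 3))
x = keras.applications.resnet50.preprocess_input(inputs)
x = base(x, training=False)
outputs = keras.layers.Dense(5, activation="softmax")(x)
model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Train the new head on your own data; afterwards you can unfreeze some
# of the base layers and continue at a low learning rate for deeper
# fine-tuning:
# model.fit(x_train, y_train, epochs=5)
```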
- Examples of transfer learning
It is most common to perform transfer learning on predictive modeling problems that use image or video data. There are many pre-trained models for image data problems, and transferring weights from such models can save a lot of time.
Among the popular examples, the ResNet50 model trained on the ImageNet dataset is one of the most widely used in transfer learning tasks. Other examples of such models are VGG16, Inception, Xception, etc. Transfer learning is also widely used in natural language processing, where text data is the input and output. Some examples of transfer learning for textual data are Word2Vec, GloVe, FastText, etc.
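For text, the transfer often happens at the embedding level. A small sketch assuming the gensim library and its downloadable GloVe vectors (the model key and example words are illustrative):

```python
import gensim.downloader as api

# Download pre-trained GloVe word vectors (Wikipedia + Gigaword);
# "glove-wiki-gigaword-50" is one of gensim's published downloader keys.
vectors = api.load("glove-wiki-gigaword-50")

# The transferred representations already encode word similarity,
# before we train anything of our own.
print(vectors.most_similar("car", topn=3))

# To reuse them downstream, look up each vocabulary word's vector and
# use the resulting matrix to initialize an embedding layer, instead
# of starting from randomly initialized vectors.
print(vectors.vector_size)   # 50-dimensional vectors
print(vectors["car"][:5])    # first few components of one word vector
```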
- Conclusion
In this article, we briefly covered the non-generalization principle, which states that representations learned in one situation or context cannot be generalized to other contexts, while transfer learning takes the opposite position: a model trained on a large sample of data can be leveraged for other contexts or data as well. We also saw how transfer learning works, along with some examples for image and text data.
References:
[1] https://machinelearningmastery.com/transfer-learning-for-deep-learning/