[Figure 9.19 schematic: (a) traditional learning trains a separate learning system for each task; (b) transfer learning passes knowledge from source tasks to the learning system for the target task.]
Figure 9.19 Transfer learning versus traditional learning. (a) Traditional learning methods build a new
classifier from scratch for each classification task. (b) Transfer learning applies knowledge
from a source classifier to simplify the construction of a classifier for a new, target task.
Source: From Pan and Yang [PY10]; used with permission.
Transfer learning aims to extract knowledge from one or more source tasks and
apply it to a target task. In our example, the source task is the classification
of camera reviews, and the target task is the classification of TV reviews. Figure 9.19
compares traditional learning methods with transfer learning.
Traditional learning methods build a new classifier for each new classification task, based
on available class-labeled training and test data. Transfer learning algorithms instead apply
knowledge about the source tasks when building a classifier for the new (target) task, so
construction of the resulting classifier requires less training data and less training time.
Traditional learning algorithms assume that the training data and test data are drawn
from the same distribution and represented in the same feature space. Thus, if the
distribution changes, such methods must rebuild their models from scratch.
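The contrast above can be sketched in code. The following toy example (our own construction, not from the text) hand-rolls a logistic regression classifier and reuses source-task knowledge in the simplest possible way: by warm-starting the target-task training from the source-task weights. The data, function names, and the warm-start strategy are all illustrative assumptions; real transfer learning methods are considerably more sophisticated.

```python
import math
import random

def train_logreg(X, y, w=None, epochs=100, lr=0.1):
    """Plain logistic regression trained by stochastic gradient descent.
    Passing an existing weight vector `w` warm-starts training, which is
    one simple (illustrative) way to reuse source-task knowledge."""
    n_feat = len(X[0])
    if w is None:
        w = [0.0] * n_feat          # traditional learning: start from scratch
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))
            for j in range(n_feat):
                w[j] += lr * (yi - p) * xi[j]
    return w

def predict(w, x):
    """Classify x as 1 if the linear score is positive, else 0."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) > 0 else 0

random.seed(0)

# "Source" task: plenty of labeled data (synthetic 2-D points standing in
# for, say, camera reviews).
src_X = [[random.gauss(1, 1), random.gauss(1, 1)] for _ in range(200)]
src_y = [1 if x[0] + x[1] > 2 else 0 for x in src_X]

# "Target" task: only a handful of labels, from a shifted distribution
# (standing in for TV reviews).
tgt_X = [[random.gauss(1.5, 1), random.gauss(1.5, 1)] for _ in range(10)]
tgt_y = [1 if x[0] + x[1] > 3 else 0 for x in tgt_X]

w_src = train_logreg(src_X, src_y)          # learn the source task fully
w_transfer = train_logreg(tgt_X, tgt_y,     # fine-tune on the target task,
                          w=list(w_src),    # starting from source weights
                          epochs=20)
```

The point of the sketch is the last call: rather than fitting the target classifier from zero weights on only ten labeled examples, training starts from the source-task solution, so far less target data and training effort are needed.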
Transfer learning allows the distributions, tasks, and even the data domains used in
training and testing to be different. Transfer learning is analogous to the way humans
may apply their knowledge of a task to facilitate the learning of another task. For exam-
ple, if we know how to play the recorder, we may apply our knowledge of note reading
and music to simplify the task of learning to play the piano. Similarly, knowing Spanish
may make it easier to learn Italian.
Transfer learning is useful in common applications where the data become outdated
or the distribution changes. Here we give two more examples. Consider web-document
classification, where we may have trained a classifier to label, say, articles from
various newsgroups according to predefined categories. The web data used to
train the classifier can easily become outdated because topics on the Web change
frequently. Another application area for transfer learning is email spam filtering. We
could train a classifier to label email as either “spam” or “not spam,” using email from a
group of users. If new users come along, the distribution of their email can differ
from that of the original group, hence the need to adapt the learned model to incorporate the
new data.
 