This seems like a terrible idea, and goes against all practices in semi-supervised learning. Paradoxically, such an incompatibility function could entail a small $\mathcal{F}(\epsilon)$, so the required number of labeled instances $l$ may be small, suggesting good semi-supervised learning. What is wrong? Such bad incompatibility functions (which encode inappropriate semi-supervised learning assumptions) will make it difficult to achieve $\hat{e}_U(f) = 0$ and $\hat{e}(f) = 0$ at the same time. They are bad in the sense that they make the above theorem inapplicable.
Finally, we point out that there are several generalizations of Theorem 8.2, as well as theoretical frameworks other than the PAC bounds for semi-supervised learning. These more advanced approaches make weaker assumptions than those presented here. We give some references in the next section.
8.3 FUTURE DIRECTIONS OF SEMI-SUPERVISED LEARNING
We conclude the book with a brief discussion of what is not covered, and an educated guess on where this field might go.
This book is an introduction, not a survey of the field. It does not discuss many recent topics in semi-supervised learning, including:
• constrained clustering, which is unsupervised learning with some supervision. Interested readers should refer to the book [16] for recent developments in that area. Some techniques there have in turn been used in semi-supervised learning [113];
• semi-supervised regression [25, 47, 159, 205];
• learning in structured output spaces, where the labels y are more complex than scalar values (e.g., sequences, graphs, etc.) [2, 5, 26, 104, 170, 173, 215];
• expectation regularization [124], which may have deep connections with the class proportion constraints in [36, 33, 89, 210];
• learning from positive and unlabeled data, when there is no negative labeled data [61, 114, 109];
• self-taught learning [140] and the universum [186], where the unlabeled data may not come from the positive or negative classes;
• model selection with unlabeled data [94, 120, 150], and feature selection [112];
• inferring label sampling mechanisms [146], multi-instance learning [207], multi-task learning [116], and deep learning [141, 185];
• advances in learning theory for semi-supervised learning [4, 9, 46, 63, 143, 161, 162, 164].
For further reading on these and other semi-supervised learning topics, there is a book collection from a machine learning perspective [37], a survey article with up-to-date papers [208], a book written for computational linguists [1], and a technical report [151].