to work on it if you really believe in it. For example, to me, the motivation
for deep learning is a complete no-brainer. It was completely obvious to me
25 years ago and it's surprising to me that it's taken so long for other people
to realize it. But it's still surprising to me how fast they converted to it once
they became convinced.
It's the fact that when you build your system, most of your time is spent building the data analysis system or data mining system, machine learning system, whatever it is. Most of that time goes into data cleaning first of all, and then feature design. Then you turn the crank on your favorite SVM or logistic regression or boosted trees or whatever you're using for classification and prediction. From this point of view, feature design is where all the time is spent, and how good a job you do on feature design limits the ultimate performance of the system.
So, clearly, if you could use learning for that, and with enough data you can, you could use learning to design the feature extraction system. It would be a big win because all of the manual labor would disappear, and perhaps your system would work better because the feature extractor would be tuned to the data you had at hand. That was the motivation behind deep learning. Of course, the danger is that the system now has too many parameters and overfits, so we have all of those related concerns. You need a lot of data for that to work, and that's why people didn't pick up on this until recently.
That logic was obvious to me 25 years ago, and it's still obvious. It's surprising
to me that it's taken so long for people to realize this. So that's one example of
where having this sort of long-term vision helps you convince yourself, in the
face of all your papers being rejected and nobody picking up on your work,
that you're actually on the right track. There's also some limit to this type of
belief in the absence of success. I mean, it's not like I haven't been successful
at all with this. Of course, the check-reading system was a big success. It's one
of the few things that people remember from the early successes of neural nets.
It's not like my work was completely ignored. However, there was certainly a
winter period between the mid-1990s and the mid-to-late 2000s.
Now, for more practical or short-term things, there are obvious measures of
success. There are metrics. All of the web companies have metrics for how
well they're doing, like how many clicks you get and what the lift is on whatever
you're measuring, and things like that. Those are pretty obvious, and those
things are being tested all the time.
Gutierrez: Whose work is currently inspiring you?
LeCun: The people whose work inspires me are the people whom I really
learn something from when I talk to them. They are old friends. So people like
Geoff Hinton, Léon Bottou, and Yoshua Bengio. In the more industrial context,
my bosses at Bell Labs, Larry Jackel and Larry Rabiner, were an incredibly good