and some of it from the 1970s, but mostly from the 1960s. It was the old work
on the sort of neural net version 1.0, from the 1950s. Things like the percep-
tron and other techniques like this and then the statistical pattern recognition
literature that followed in the early 1970s. But by the time I started to take an
interest in this research area, the field had pretty much been abandoned
by the research community. This time period is sometimes referred to as the
“neural net winter.”
I graduated—though my specialty was not actually machine learning, as there
was no such thing as machine learning back then. In fact, in France at that time,
there wasn't even such a thing as computer science. The specialties I graduated
with were VLSI integrated circuit design and automatic control. After under-
grad, I went to grad school. Unfortunately, I had a hard time finding people
who were interested in what I wanted to do, as I already knew exactly what I
wanted to work on. I had already realized by the time I was an undergrad that
the thing that people were after back in the 1960s and could never solve was
basically the idea of multi-layer neural nets and deep learning.
Maybe two years before I started grad school, I started experimenting with
various algorithms. I came up with something that eventually became what
we now call the back-propagation algorithm—which we use every day at
Facebook on a very, very large scale—independently from David Rumelhart,
Paul Werbos, David Parker, Geoff Hinton, and others. I had a very hard time
getting senior people in grad school to help me because the field had been
abandoned. Luckily, I had a very nice advisor, Maurice Milgram, and I had my
own funding, which was mostly independent from my advisor. This advisor, who
wasn't really working on anything I was doing, basically told me that he would
sign the paper, as I seemed like a smart guy, but he couldn't help me.
Gutierrez: What made you so sure that this was the right direction?
LeCun: I don't know. I was just so convinced. It was just so obvious to me
that learning in multi-layer neural nets was the key. Then in the early 1980s,
I discovered people like Geoff Hinton, as well as Terry Sejnowski, and real-
ized they were interested in exactly the same question. So Geoff Hinton was
the guy I wanted to meet. He was the only guy that I could meet who could
understand what I was saying basically. It just so happened that he was invited
to one of the first conferences on neural nets back in 1985 in France. I met
him at the conference and we immediately clicked. He invited me to a sum-
mer school the next year, and then basically invited me to do a postdoc with
him. Geoff Hinton has been and continues to be a big mentor to me. So I was
driven. I knew what I was doing. It's not like I went to see an advisor who
told me, “Why don't you work on this algorithm?” It was completely self-
determined.
Gutierrez: What initially sparked your interest in AI?