in particular. Those are two of my slightly different interests. In both cases,
I believe things like deep learning will probably have a big impact on the practice
of data science in the near future.
If you're interested in AI or machine learning, the main conferences are NIPS
and ICML, and also conferences like AI Stats, UAI, and KDD. KDD is actually
more data science-oriented, but there are quite a lot of methods papers there now.
And then there are lots of other good smaller conferences. In fact, there's
one that's really important to me, not just because I run it, called ICLR, which
stands for the International Conference on Learning Representations. This is
a deep learning conference. It's relatively small, but it's very focused and very
interesting. So those are conferences for machine learning-type things.
There are other areas of machine learning that I'm not very active in, things
like reinforcement learning, which is actually very important for industry and is
used in things like ad placement; for those there's AAAI and similar conferences. I also
have a foot in computer vision. In computer vision, the main conferences are
CVPR, ICCV, and ECCV. CVPR is more into images, videos, and similar things.
Gutierrez: Is there an area today that you feel is somewhat analogous to
deep learning when you started, in that you think it's going to be giant in the
future but people just aren't looking at it right now?
LeCun: I think it goes in cycles. We have a new set of techniques that comes
up, and for a while the technique is under the radar and then it kind of blows
up, and everybody explores how you can milk this technique for a while until
you hit a wall. Progress slows and becomes more boring. Then some new set
of techniques comes up and the whole process starts over again.
In my area, neural nets had been under the radar until they sort of blew up
in 1986 and 1987. So a lot of interesting stuff happened, a lot of crazy
things happened—and a lot of hype happened, as well—until the early 1990s,
when, in my own lab at Bell Labs, the next wave came up. The next wave was
support vector machines or kernel methods, which are very popular and work
very well. That replaced, to some extent, a lot of the work on neural nets. So
neural nets migrated at that time to other conferences, like NIPS.
The ICML conference was actually more focused on a completely different
branch of machine learning that was more symbolic. That branch of machine
learning has essentially disappeared now. The transition to "statistical" machine
learning at ICML was completed around 2009, when numerical machine learning
techniques basically wiped out symbolic machine learning, so there are barely
any papers on these topics at ICML anymore.
Simultaneously with kernel methods, graphical models came up in the mid-1990s.
Graphical models are really orthogonal to things like neural nets, and can be
viewed as more like a framework around other techniques. In conferences