prop and other techniques, some of which I never published properly. I am
also proud of the DjVu system, which is not machine learning; it's a digital
document compression system.
Gutierrez: You're a professor and one of the founders of the NYU Center
for Data Science. You're also leading a research lab at Facebook. What does
a typical workday look like?
LeCun: I spend a little bit of time at NYU. If you had asked me this question
a year ago, the answer would have been very different because I was basically
running the Center for Data Science and putting together some of those
programs. This entailed a good deal of administrative work. But I don't do this
anymore because other people at NYU do this now. Whenever I'm at NYU,
which is a relatively small amount of time, I talk to my students and postdocs.
So it's mainly getting an update on their projects, giving them ideas on what to
do next, and things of that nature.
When I'm at Facebook, there are also a good number of technical meetings. Several
meetings are more organizational: meeting with people in the company who
have AI or ML problems that perhaps my group could help with, and hiring-
related things, like talking to candidates and interviewing them. A good deal
of my energy is actually devoted to hiring. And then there is the daily work
that all of us are trapped in, like doing email
and procedural meetings. Then, of course, there is some work that is of ser-
vice to the community, like running the ICLR conference, doing peer paper
reviews, writing recommendation letters, and related activities. Some of these
are more science-related.
Gutierrez: How do you view and measure success?
LeCun: For scientists in academia and industry, the criteria are slightly dif-
ferent but not that different. In both settings, the real criterion is impact. In
industry, there are three particular kinds of impact. One is the intellectual
impact on the world at large, or on science and technology. This is what you
achieve by publishing papers, giving talks, and helping to change the way people
do things.
You would think that a company like Facebook would have no reason to
publish the research it does internally, because that would tell competitors
what it is doing. But, in fact, there is a big incentive to publish. For projects
that are very upstream of applications, but even for projects that are very
close to applications, the best way to measure the quality of what you're
doing is to test your methods on some standard data set or to measure
yourself through the kind of formal crowd-sourcing that peer review is. And
for projects that are far from products, peer review, despite all its short-
comings, is a much more accurate process than internal evaluation.