actually created a lot of animosity with their users. We don't have to worry
about that because we're a private company that's not owned by someone
else. Instead what we focus on—and this is going to sound goofy, especially
for a data scientist—is the happiness of our users. So we stay in very close
communication with our users, not just by being very engaged online through
social media, but also flying all around the globe to talk with our users. We
use these conversations to gauge what our customers are saying, what they
like, and what they don't like. And we focus on their happiness exclusively in
order to provide the best products possible.
Metrics aside, if you're doing what you can to solve all of those things that
you're learning about via conversation, and focusing on the things that you're
doing well, then that's really the best you can do. And that doesn't create any
weird, perverse measures, where instead of looking at your users and looking
at your product, you avoid your users and products to instead look at a KPI
number. In some cases, you're even competing internally with other teams
over this number and asking questions like, “How much of this number can
I attribute to the data science team?” That's a terrible way to look at things.
How much revenue can be attributed to data science versus marketing versus
user experience? The moment you do that, then all of a sudden, you've put
teams at odds, you've put leaders at odds, and now there's less of a desire to
collaborate.
So by not focusing on those measures, we've actually done a great service
to the organization, I think, by just encouraging everyone to focus on making
our users happy. It's much easier to say here's the problem that needs to be
solved and ask who can bring what to bear on it. Maybe there's not a number
behind the problem, so the way I measure success is the feedback I get from
customers—whether they are internal or external customers. Focusing on
getting very positive feedback ensures that my goal is to see folks using my
features in an appropriate way and getting good stuff out of them.
Now, of course, we also measure certain really down-to-earth metrics that
help to ensure our customers stay happy. These low-level metrics are more
around server uptime, whether send-time optimization is working, if people
are using it, what segments look like, if it is helping, whether bad users are
getting through, what the spam reporting looks like, whether emails are bouncing,
and similar local metrics. So there are operational metrics like that which we
look at—just none of those crazy global stupid metrics.
Gutierrez: How do you know you're solving the right problems?
Foreman: That is a tough one because the interesting thing about figuring
out what problems you should solve is that it's going to be a combination of
what your users think they want, what your users actually want, and where
the space is going. Talking to users is crucial because they point you in the
right direction. Often though, the way a problem is characterized by an internal