Edge cases —There will always be edge cases, or outliers, where a metric may not mean what you think it means. These situations are worth understanding, but you shouldn't allow the perfect to be the enemy of the good. As a leader, you need to weigh the benefits of choosing metrics that work for 90 percent, 95 percent, or 99 percent of cases against the costs of those incremental gains.
Accountability test —Could you hold someone accountable for this metric without them offering a dozen reasons why it doesn't make sense? If not, you may need to reconsider the validity and value of the metric. This simple thought exercise is a decent test of a metric's worth.
Self-serving —Be careful that you don't select metrics simply because you know they'll make you look good. These short-term victories have a way of incrementally turning into a losing long-term strategy for organizational competitiveness and success.
Letting go —Putting a metric out to pasture, especially within the context of a large, multilayered, complex organization, is a hard thing to do. There are a few reasons why:
a) The metric was developed at great effort and high expense.
b) After the process of collecting the data for the metric stops, it can be restarted only at great effort and high expense.
c) People higher in the organization (who last paid attention to the metric when it was useful) might come looking for it if something goes awry.
SHARED UNDERSTANDINGS
A culture of data fluency needs to be built on a shared understanding of the data sources, data analysis, key metrics, and data products. It requires employees to be on the same page about how data is used and why it is important.
In general, the concept of shared understanding is a fundamental building block for organizational success. It allows everyone to work toward the same goals, share a common set of principles, and be in alignment about