My
dear friend and co-author Jeff Pfeffer and I have started a series of
interesting conversations about what we might study next. We’ve been doing a little brainstorming and constructive
argument. As part of this adventure, we’ve been talking about the impact of our
last book on evidence-based management and what evidence-based management
means.
One
of the themes that we keep returning to is our concern that managers and the
business press seem to automatically assume that quantitative evidence is
always the best evidence. This point
especially came home in a recent Wall
Street Journal article by Scott Thurm called "Now
It's Business By Data, But Numbers Can't Tell The Future."
Scott
talks about how quantitative data have helped companies including Yahoo!,
Google, and Harrah’s gain competitive advantage, and mentions our book Hard Facts and Tom Davenport’s Competing
on Analytics, with the implication
(based on stories from P&G and Google) being that evidence-based
analysis is useful for making short-term tweaks, but not for seeing the future
or making big breakthroughs. I think
that this perspective is partly right, although quantitative evidence can also
lead to huge changes in organizational strategy (e.g., consider one hard fact:
The huge numbers of baby boomers retiring in the next decade, now that is
something that is shaking a lot of organizational strategies).
But
there is an implication in this article and others that I find especially
disturbing: The message seems to be that evidence-based management means management
by quantitative data. I reject that
thought, and have always believed that there are times when qualitative data
are more powerful, valid, and useful for guiding action than quantitative data. I will likely touch on this point more in future
posts, but to get things started, there are three times when I believe that
qualitative data are essential.
1. When you don’t know what
to count. Unstructured observation of people at work,
open-ended conversation, and other so-called ethnographic methods are
especially useful when you don’t know, for example, what matters most to
customers, employees, or a company. Just
hanging around and watching can have a huge effect. I am reminded of something that happened
years ago at HP. Senior management was concerned
that people weren’t buying their PCs, so instead of just reading marketing
reports, they each went out and tried to buy an HP at a local computer
store. I remember then-CFO Bob Wayman
telling us that it was one thing to hear that consumers weren’t impressed with
HP PCs, and quite another to have a salesperson suggest that they ought to buy
something other than an HP because they were a poor value. HP is now the leader in the PC business, and
although I am sure this one little experience wasn’t the main cause, it did
help senior executives get a more complete understanding of what elements of customer experience they might start counting.
2. When you can count it,
but it doesn’t stick. As Chip and Dan Heath conclude in Made to
Stick, statistics show that people are swayed by stories, not
statistics. So this means that even if
you have good quantitative data to back your decisions, your decision will be
harder to sell if you don’t have some compelling stories and images to go with
it. So, to take the case of Procter
& Gamble, they have had quantitative evidence for many years that the “in-store”
experience of encountering a P&G product has a huge effect (beyond
advertising, prior brand loyalty, and so on), but the message really sank in
when folks from the Institute for the Future simply took CEO A.G. Lafley and his
team shopping a few years back. This
experience, in combination with work done with IDEO and P&G’s fantastic
head of design, Claudia Kotchka, has helped P&G develop a deeper understanding of their
customers’ experiences, and tell better stories about them, than could have happened
through quantitative evidence alone. It has also led them to focus greater effort on designing the experience of encountering the product on the shelves: not just the packaging, but also where and how products are displayed, and the importance of educating store employees about their products.
As
another example, our d.school students did a project about a year ago on ways
that large financial institutions alienate young college grads who want to
start saving money. Look at the desk to
the left, which came with a banker in a three-piece suit. The students who went to talk to that banker
were all under 25 years old and were dressed in shorts and t-shirts, but most had lucrative
job offers, which meant that they would be making more than that banker in a
few months. The setting and the banker
were so stiff that the idea of putting their money in his bank seemed like a
bad idea to the students – they found it intimidating and they felt as if they
couldn’t trust the banker or the institution. The picture of that desk (and the guy in the
three-piece suit, not shown here) is much "stickier" than any survey finding
that young people hesitate to put money in a bank or investment fund.
3. When what you can count doesn’t
count. Researchers are always looking for things
that are easy to count, so they can get numbers that are amenable to
statistical analysis. There are times
when these numbers do matter. Sales, numbers of defects, and so on can be
valuable. But in the hunt for and obsession
with what can be counted, the most important evidence is sometimes overlooked. As
Einstein said, “Not everything that counts can be counted, and not everything
that can be counted counts.”
The
best example I’ve ever seen of the limits of quantitative data, and the virtues of
storytelling and qualitative experience, is found on page 3 of John
Steinbeck’s 1941 classic “The Log from the Sea of Cortez,” a
book about a marine collecting expedition that he went on with his dear friend Ed
Ricketts. I first heard about this from
Karl Weick, and have repeated it in many contexts – it is one of those
paragraphs that every manager and researcher in every field can benefit from:
The Mexican Sierra has “XVII-15-IX”
spines on the dorsal fin. These can be easily counted. But if the sierra
strikes hard so that our hands are burned, if the fish sounds and nearly
escapes and finally comes over the rail, his colors pulsing and his tail
beating the air, a whole new relational reality comes into being – an entity
which is more than the sum of the fish plus the fisherman. The only way to
count the spines of the Sierra unaffected by this second relational reality is
to sit in a laboratory, open an evil smelling jar, remove a stiff colorless
fish from the formalin solution, count the spines, and write the truth
"D.XVII-15-IX." There you have
recorded a reality that cannot be assailed — probably the least important
reality concerning either the fish or yourself. It is good to know what you are
doing. The man with the pickled fish has
set down one truth and has recorded in his experience many lies. The fish is not that color, that texture,
that dead, nor does he smell that way.
Again,
I am not rejecting quantitative evidence; it is essential in many settings. But
qualitative evidence has great virtues as well, for spurring hypotheses, stirring
emotions, and enabling us to “see” truths that aren’t easily counted. I
love that line “The man with the pickled fish has set
down one truth and has recorded in his experience many lies.”
This post is meant to get conversation started. When is quantitative evidence especially
valuable? And when does it lead people
to record apparent, even unassailable, truths that mask many lies and
dangerous half-truths?