The rap on research for the arts, museums, and informal sciences
“It only takes a second to fill out,” the x-ray technician told me cheerfully after an MRI I had yesterday. He was explaining that I would soon receive a survey in the mail asking about the service he provided, and he mimed checking off the boxes: “You just go down the list, five, five, five, five…”
Five, as you may have guessed, is the top satisfaction score.
Now, this was a community hospital affiliated with the University of Chicago Medical Center (which is a client of ours). But it’s an example of how all kinds of educational and cultural nonprofits could be thinking about the relationship between customer feedback, staff performance, and the bottom line.
At first it rubbed me the wrong way. My colleagues and I pride ourselves on being rigorous researchers, and we’ve criticized (here and here) survey processes that are less than scientific and objective. The whole point of social and market research is to get a true picture of how people think, feel, and act. You’re not allowed to coach them to give you high marks; you’re not supposed to influence them in any way.
But there was something else going on here, and it made me look more deeply at the role this kind of satisfaction research plays.
My tech’s name was Leo, which he wrote on the card he gave me so I would be sure to put it on the survey. Unprompted by any questions from me, he explained that the survey is a big part of the culture at the hospital. “We strive for five” is a staff mantra. At weekly meetings in each department, workers who receive good survey ratings or comments are recognized. This presumably factors into their promotion and salary trajectories.
He even told me that the insurance companies link their reimbursement amounts to those patient satisfaction scores. I don’t know whether this is true or how much of the hospital’s revenue might be at stake in the formula. But what’s important is that Leo and his colleagues see the financial performance of the institution as dependent on the quality of the experiences it provides to individuals like me.
This is a big shift. Instead of helpless patient (“Take a seat”) or even customer or guest (“Can I help you?”), we become the institution’s clients (“What do you need? Are you happy?”). (Those of you in the museum biz may recognize these categories as a paraphrase of Zahava Doering and Andrew Pekarik’s influential 1999 article for the Smithsonian, “Strangers, Guests or Clients? Visitor Experiences in Museums.”)
That’s accountability. And come to think of it, I’ve seen that hospital become much friendlier and more professional in the last few years. (My 11-year-old daughters were born there, so I have some history with the place.)
And isn’t accountability the whole point of doing these post-experience evaluations? At a theater or symphony performance, they might take the form of surveys in the program book or e-mailed to subscribers. At a museum, they might be exit surveys of visitors as they leave an exhibition or education program. At schools and universities, it’s those ubiquitous course evaluation forms. Underlying much of the research work my firm does is the idea that we can raise the quality of the “customer’s” experience by first asking her about it and then responding to her feedback with various kinds of improvements, implemented over time.
But what if that’s the long way around? My “five”-happy, friendly hospital suggests that the evaluation process itself can have a direct, immediate impact on the experience provided, even before any consumer feedback is recorded, because it makes front-line staff conscious of the importance of pleasing customers — I mean, clients — and accountable both individually and collectively to the fulfillment of that goal.
Sure, you lose some of the scientific validity of the evaluation itself, since the system rewards the kind of “coaching” my tech tried with me. But who cares, as long as that system puts the consumer (think museum visitor, concertgoer, grad student, etc.) in a more central, respected position in the institution?
Imagine how differently your visits to large art and natural history museums, or your dealings with the ushers and coat check clerks and bartenders at concert halls, might feel if all those staff members and volunteers had to jot their first name on a little card for you and tell you to expect a survey about them.
I understand that a concert hall isn’t a hospital, and that this would be impractical in a hundred ways. And I wouldn’t want to put programming decisions to the same kind of test (although, unlike some of you, I don’t want the artistic and educational side totally protected from audience research).
But as cultural and educational nonprofits continue to struggle for audience loyalty and diversity in a rapidly changing society, it’s worth thinking about the relationship between evaluation, accountability, and money.
It’s the money, of course, that makes the fundamental difference. Imagine if, like my hospital, arts organizations and museums saw their revenues vary with their patrons’ satisfaction ratings. But who would play a role equivalent to those insurers? The obvious parallel is the foundations. Most already insist that an evaluation be built into the projects they fund, although those studies don’t typically emphasize visitor or audience satisfaction with the service they receive. It would be relatively easy to add such a focus and make the final grant amount contingent on demonstrating that the institution is, among other things, making its audiences happy. As easy as five, five, five...