July 12, 2010

Survey “coaching,” accountability, and dollars: a lesson from healthcare

“It only takes a second to fill out,” the x-ray technician told me cheerfully after an MRI I had yesterday. He was explaining that I would soon receive a survey in the mail asking about the service he provided, and he mimed checking off the boxes: “You just go down the list, five, five, five, five…”

Five, as you may have guessed, is the top satisfaction score.

Now, this was a community hospital affiliated with the University of Chicago Medical Center (which is a client of ours). But it’s an example of how all kinds of educational and cultural nonprofits could be thinking about the relationship between customer feedback, staff performance, and the bottom line.

At first it rubbed me the wrong way. My colleagues and I pride ourselves on being rigorous researchers, and we’ve criticized (here and here) survey processes that are less than scientific and objective. The whole point of social and market research is to get a true picture of how people think, feel, and act. You’re not allowed to coach them to give you high marks; you’re not supposed to influence them in any way.

But there was something else going on here, and it made me look more deeply at the role this kind of satisfaction research plays.

My tech’s name was Leo, which he wrote on the card he gave me so I would be sure to put it on the survey. Unprompted by any questions from me, he explained that the survey was a big part of the culture at the hospital. “We strive for five” is a staff mantra. At weekly meetings in each department, workers who have received good survey ratings or comments are recognized. This presumably factors into their promotion and salary trajectories.

He even told me that the insurance companies link their reimbursement amounts to those patient satisfaction scores. I don’t know whether this is true or how much of the hospital’s revenue might be at stake in the formula. But what’s important is that Leo and his colleagues see the financial performance of the institution as dependent on the quality of the experiences it provides to individuals like me. ...

This is a big shift. Instead of helpless patient (“Take a seat”) or even customer or guest (“Can I help you?”), we become the institution’s clients (“What do you need? Are you happy?”). (Those of you in the museum biz may recognize these categories as a paraphrase of Zahava Doering and Andrew Pekarik’s influential 1999 article for the Smithsonian, “Strangers, Guests or Clients? Visitor Experiences in Museums.”)

That’s accountability. And come to think of it, I’ve seen that hospital become much friendlier and more professional in the last few years. (My 11-year-old daughters were born there, so I have some history with the place.)

And isn’t accountability the whole point of doing these post-experience evaluations? At a theater or symphony performance, they might take the form of surveys in the program book or e-mailed to subscribers. At a museum, they might be exit surveys of visitors as they leave an exhibition or education program. At schools and universities, it’s those ubiquitous course evaluation forms. Underlying much of the research work my firm does is the idea that we can raise the quality of the “customer’s” experience by first asking her about it and then responding to her feedback with various kinds of improvements, implemented over time.

But what if that’s the long way around? My “five”-happy, friendly hospital suggests that the evaluation process itself can have a direct, immediate impact on the experience provided, even before any consumer feedback is recorded, because it makes front-line staff conscious of the importance of pleasing customers — I mean, clients — and accountable both individually and collectively to the fulfillment of that goal.

Sure, you lose some of the scientific validity of the evaluation itself, since the system rewards the kind of “coaching” my tech tried with me. But who cares, as long as that system puts the consumer (think museum visitor, concertgoer, grad student, etc.) in a more central, respected position in the institution.

Imagine how differently your visits to large art and natural history museums, or your dealings with the ushers and coat check clerks and bartenders at concert halls, might feel if all those staff members and volunteers had to jot their first name on a little card for you and tell you to expect a survey about them.

I understand that a concert hall isn’t a hospital, and that this would be impractical in a hundred ways. And I wouldn’t want to put programming decisions to the same kind of test (although, unlike some of you, I don’t want the artistic and educational side totally protected from audience research).

But as cultural and educational nonprofits continue to struggle for audience loyalty and diversity in a rapidly changing society, it’s worth thinking about the relationship between evaluation, accountability, and money.

It’s the money, of course, that makes the fundamental difference. Imagine if, like my hospital, arts organizations and museums saw their revenues vary with their patrons’ satisfaction ratings. But who would play a role equivalent to those insurers? The obvious parallel is the foundations. Most already insist that an evaluation be built into the projects they fund, although those studies don’t typically emphasize visitor or audience satisfaction with the service they receive. It would be relatively easy to add such a focus and make the final grant amount contingent on demonstrating that the institution is, among other things, making its audiences happy. As easy as five, five, five...



5 Comments »
Nina Simon — July 13, 2010

I worked at one museum where we rationalized away non-"5" responses to surveys about educational programs however we could. We were just shy of perjury--to ourselves. At least in the hospital situation, they're open and honest about what they want to see on the scoresheet.

Eileen Bevis — July 19, 2010

Peter, I'm glad to hear that you feel the hospital's atmosphere has changed for the better as a result of its research into, and push for, patient satisfaction and increased accountability. Even if less than fully rigorous, research can definitely produce such indirect effects. I'm curious to hear more about your final reaction to the hospital's research, looking at it through a marketing lens... did the fact that the hospital was doing this sort of research raise your opinion of the hospital and its commitment to patient care? Or, on the other end of the spectrum, did the 5-5-5 interaction ultimately just make you feel like a cog in a vast PR machine?

Peter Linett — July 19, 2010

More the latter, Eileen, and worse than a cog -- more a PR flack. You and Nina are both hitting on the real problem: the perjury that these systems seem to require of either customers (to make the staff feel better and the funders happy) or staff (to make themselves feel better when customers are critical).

I'm ambivalent, I guess. Yes, I do admire nonprofits more when they care enough to ask me about my experiences and needs, but the ways they do it are so often hamfisted or boring or, as in this case, over-eager to the point of needy.

Plus, who has time to fill out surveys? (:-)

Chloe P — July 20, 2010

I definitely see the parallels between this application of surveys in the arts/education and your experience, Peter, and I do understand why you'd feel like a flack for being used so blatantly by the system for self-serving purposes. But...

More than that, I just keep coming back to the thought that this is a GOOD thing for a hospital, and Leo, to do! Even with the inherent bias and coaching and all of those elements that make us shudder at the thought of that survey being treated as "real research," it doesn't strike me that they are really using the satisfaction survey as real, objective research. There are many ways to use, and to misuse, research, and for once this case seems like it's ultimately doing something mostly positive for the hospital and for the patients. (I admittedly know next to nothing about how hospitals and insurance companies negotiate reimbursement amounts... as with almost everything related to health insurance, I'm sure I'd take issue with it. So, I'm ignoring that part of the equation.)

If the arts were to learn from this, I'd certainly say this type of interaction should never be called a "survey," and never pretend that it's "research" (especially to funders!)... But as long as the lesson applied is that the organization -- including all of its on-the-ground staff and volunteers -- can/should make it clear to patrons that it WANTS them to have a stellar experience, and gives each patron a way to share the good and bad about their experience, I see that as only having positive consequences. In other words, organizations SHOULD be needy about soliciting patrons' input! Just don't call it research. :)

Nic Covey — July 21, 2010

Interesting post, Peter - and a good discussion through the comments. I was drawn back to the post after getting a message from my mechanic just this morning, saying as blatantly, “You’re going to get a call asking about your experience and we hope you’ll select 10s on all aspects.” At least Leo’s card masked the coaching with a service-oriented “I hope I provided you with…”!

Used as a customer relationship management (CRM) touchpoint - to confirm a mutual understanding of service quality - I actually applaud this type of “survey.” (My assumption being that if you called Leo, or if I called the dealership, and said “actually you performed at a 3,” they would find a way to make amends. Too hopeful?)

On the broader subject of survey coaching, your post and my subsequent mechanic call have me thinking about the real effects of this on the research’s utility. A quick internet "lit review" reveals that this coaching situation is something auto manufacturers and dealerships have confronted for years. Some manufacturers even penalize dealerships caught doing this.

But I don’t think it’s necessary to discount the research so much, nor fault the self-promoters. Leo (and my mechanic, and perhaps eventually the concert hall ushers) helped to prep us for the arrival of our surveys – this must have a positive and fairly consistent effect on survey response, no matter the respondent’s opinion. Then, the coaching we receive, though direct, is fairly toothless. Is either of us incentivized to be anything less than honest in our responses on account of their counsel? Probably not.

It might even be said that our coaches are coaching at their own risk. When the coaching is as systematic as it was for Leo or the dealership, the stakeholders know of it. So if the responses come in negative, aren’t they even more damning? What’s more, might the sour taste you now have negatively impact your responses when that card comes? And fairly so, as that survey interaction is now just one more part of the customer experience - an aspect you didn't care for.

Coaching infuses bias, to be sure, but perhaps there is balance in the impropriety. That doesn’t make it good or responsible research, but it might protect its utility.
