January 15, 2010

Say it ain’t so, statistician

I’m just getting to a recent book about the buying and selling of scientific “truth,” and it’s enough to make a grown researcher cry. Any lessons for us in the culture and higher ed crowd?

Unfortunately, yes. Doubt is Their Product: How Industry's Assault on Science Threatens Your Health by David Michaels, an epidemiologist who last month became Obama’s OSHA chief, is an infuriating look at big industry’s manipulation of scientific evidence to derail or delay safety regulations. Think cigarettes, lead, asbestos, or remember Silkwood and Erin Brockovich.

The book’s title refers to an infamous 1969 memo from a Brown & Williamson tobacco executive who wrote that, "Doubt is our product, since it is the best means of competing with the ‘body of fact’ that exists in the mind of the general public. It is also the means of establishing a controversy."

The companies and their mercenary scientific henchmen didn’t need to work too hard to find uncertainties to exploit, since doubt and uncertainty are built into the scientific method. (The physicist Richard Feynman called doubt the essence of science.) Real science is about disproving hypotheses, and there are always outlier data, competing explanations, and marginal numbers requiring interpretation. Research is supposed to be empirical and objective, but deciding what counts as knowledge – the process of scientific consensus-building by which we decide what it is we know – is messy and human.

Why does this hit home for us researchers in the arts and education? Well, the science we do is social science, but the statistical and interpretive questions are similar. The advocacy impulse in our world may be socially positive, but it’s still an advocacy impulse and has to be kept from influencing our empirical findings about how audiences think, feel, and act.

We see the advocacy impulse at work in two distinct areas: first, in what questions get asked and how they’re worded, and second, in the interpretation of the response data. I’ll write about the first one here and save the second for a follow-up post.

We often come across survey or interview language that is leading, meaning that it’s easy for respondents to see what the “right” answer is. Human nature being what it is, they’re likely to give it, or at least shift their response somewhat in that direction – in part to please the researcher (who is a kind of authority figure) and in part to confirm their own view of themselves as people who support social “goods” like education, museums, or the arts. (Researchers call this “social desirability bias.”)

Consider this question from a national arts advocacy survey I received in the mail at home: “Do you think that there should be more public funding of the arts and arts education at the federal, state, and local levels? Yes/No.” When you put it like that – linking arts to education as if they were one thing, isolating these priorities from any real-world tradeoffs – it sounds like motherhood and apple pie. But what if they had put the arts in context with other things that could receive public funding, such as job creation, health care, or the environment? Then how would arts funding fall?

Or this question, from the same survey: “How important are the arts to you and your family? Very important, important, or not important.” By placing “important” at the midpoint of the scale, the researchers are subtly communicating that this is the “average” or typical way that Americans feel about the arts. So unless you actively don’t like the arts, you’d select “important” at one level or the other.

Those may be obvious examples, and luckily they’re rare. But the tendency to advocate for one’s own organization or sector, even unconsciously, is strong. It’s hard not to embed your own assumptions and hopes in the questions you ask your audiences. That’s where the science comes in. Real scientists (say, chemists or physicists) usually have a hypothesis going into a study, but the formal test is framed around the null hypothesis: the assumption that there is no effect, no correlation, that their idea is wrong. Only if the data are strong enough to reject that assumption do they get to claim support for the hypothesis.
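For readers who like to see the mechanics, here is a minimal sketch of that null-hypothesis logic, using a permutation test in Python. The data are invented for illustration: imagine we fielded a leading version and a neutral version of the same survey question and want to know whether the difference in average ratings could plausibly be chance.

```python
import random
import statistics

# Hypothetical (made-up) 1-to-5 ratings: did a leading question wording
# produce higher scores than a neutral rewording of the same question?
leading = [5, 5, 4, 5, 4, 5, 5, 4, 5, 5]
neutral = [4, 3, 4, 5, 3, 4, 4, 3, 4, 4]

observed = statistics.mean(leading) - statistics.mean(neutral)

# Permutation test: under the null hypothesis, the wording doesn't matter,
# so the group labels are arbitrary. Shuffle the labels many times and
# count how often a difference at least this large arises by chance.
random.seed(42)  # fixed seed so the sketch is reproducible
pooled = leading + neutral
n = len(leading)
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:n]) - statistics.mean(pooled[n:])
    if abs(diff) >= abs(observed):
        extreme += 1

p_value = extreme / trials
print(f"observed difference: {observed:.2f}, p-value: {p_value:.4f}")
```

A small p-value means the observed gap would rarely appear if wording truly didn’t matter, so we reject the null; a large one means we can’t rule out chance. Either way, the burden of proof sits on the evidence, not on our hopes for the result.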

In the social science version of this, where we’re studying people rather than molecules or galaxies, we need to avoid communicating our hypothesis to the respondents (your audiences). We don’t want to signal to them what the “right” answer is. This is especially important in arts and education research, where social desirability bias already tilts respondents’ answers in a certain direction.

I empathize with our clients and their funders. Everyone wants to see positive impacts from these institutions, including us researchers. And we do need to remind ourselves that, unlike the cases Michaels writes about in Doubt is Their Product, the stakes are not life-and-death. But even in our corner of the world, we all need to take care not to claim victory without enough evidence.

More on the effects of the advocacy impulse on interpretation next week. Meanwhile, please jump in with your own opinions. I'm not looking for any particular answer...

