March 21, 2011

Nastygram from the NY Times on visitor research

Maybe the Times arts critics have it in for the Brooklyn Museum. Or maybe they just don’t believe museum curators should get to know the audiences they’re creating exhibitions for. Then again, some museums don’t believe that either, which is why “front end” evaluation is often a botched job.

So I tried not to get defensive when I read this paragraph in art critic Ken Johnson’s review of Brooklyn’s new show on Plains Indian tipis.

Beyond some basic historical context, the exhibition offers no revelatory perspective on its contents. That might be partly because, as the organizers, Nancy B. Rosoff and Susan Kennedy Zeller (both Brooklyn Museum curators) point out in their catalog preface, part of the planning process involved focus groups and visitor surveys “to determine the level of visitor interest in and knowledge of the tepee and Plains culture.” They also invited a team of American Indian scholars, artists and tribal members to vet their plans. The result is an exhibition that speaks down to its audience, assuming a low level of sophistication, and that does as little as possible to offend or stir controversy.

On one level, this is the familiar highbrow take on visitor studies: If you ask the public what they want from an arts or culture experience, you’re doomed from the get-go. Focus groups yield lowest-common-denominator thinking, which should have no place in planning encounters with the great or challenging or profound. The museum should exercise its cultural authority and decide what visitors need to see and learn, without getting sidetracked by what they want.

But when you gather museum-goers in a focus group or ask them questions on a survey, do they really tell you, “I want this exhibition to talk down to me. I want the interpretation of objects to be bland and inoffensive”?

Of course not. The real issue here is what kinds of questions the museum asks and how it understands — and makes use of — the answers. I hasten to add that I haven’t seen the exhibition yet, and I may not agree with Johnson that it is condescending or bland. (From what I’ve been able to see online, it looks promising.) ...

Typically, front-end evaluation for a museum exhibit investigates how much visitors already know about the topic of the planned exhibition, and sometimes what they’re interested in learning. (Often, the same study solicits feedback on possible titles for the show or tries to gauge respondents’ likelihood to attend.) The idea is that, if the curators and other members of the exhibition team have a sense of how much knowledge the public brings into the exhibition, they can pitch their interpretive texts at the right level: not over the visitors’ heads, which might make them feel stupid, but not below them, either, which might leave them impatient or bored.

The trouble is, when it comes to most of the topics that museums exhibit, the public knows very little about the artifacts or narratives in question, at least by the standards of the museum. So the apparent mandate from the evaluation is to keep it simple, establish the basic facts, avoid complexities and confusions. The resulting exhibition often feels like a 3D, beautifully illustrated version of a junior-high textbook: you can sense the oversimplifications even if you don’t know enough to say exactly what they are, and you can feel the flat, pedantic tone.

But that’s because we’re starting with a narrowly cognitive, educative purpose in mind. We’re interested in what visitors know about tipis rather than (for example) what they feel, what they wish, what they fear, what they find beautiful, what they find sad. We’re looking at a single, isolated aspect of human connection to the material. It’s not necessarily the most interesting aspect, but it’s the one that museums, as Enlightenment institutions, have traditionally cared about most.

What kinds of questions would we ask if we cared just as much about emotional, spiritual, social, ethical, imaginative, and physical connections to that material? How would we start a conversation with our audiences about those kinds of engagement — and start it early enough in the planning process that the museum’s own “intended learning outcomes” for visitors aren’t yet written in stone (that is, on grant applications)?

What would our exhibitions look like if we did? Probably not the low-risk, unambitious curation that Johnson sees (rightly or wrongly) in the Brooklyn exhibition. Probably something with higher aspirations and less predictable effects on visitors: something that can fail for some and grab others by the heart.

What’s your take on front-end research? Do you conduct such studies at your cultural institution? If so, how are they used?



3 Comments
Autumn — March 23, 2011

Interesting topic. It surprises me that they would conduct curatorial focus groups in the first place, but it's especially interesting to see the extent to which their findings may have affected the exhibition's interpretation. I tend to believe that focus groups in museums are often more useful for indicating action - what people will do, buy, or attend in terms of programming or membership.

I think using this method to determine exhibition content could be less successful because, simply put, you don't know what you don't know. If participants in a focus group have a low level of knowledge about a subject, cutting labels and signage down to just the non-controversial basics doesn't allow for the element of surprise, the multi-sensory exploration, and the chance for dialogue that, in my opinion, define a great exhibition. But if the museum uses the focus-group data to modify technical language, for example, or to incorporate more opportunities for visitors of all knowledge levels to connect to the material, front-end curatorial focus groups could contribute greatly to an exhibition's success.

Beverly Serrell — March 24, 2011

I have conducted a lot of front-end evaluation studies for exhibitions, and I find them valuable, useful, and surprising. The “trick” is to ask the questions about prior knowledge while remaining open to visitors’ emotions and feelings about their knowledge, assumptions, and questions about the topic.

For instance, in a front-end study for a cross-continental railroad history exhibition, visitors were shown photographs with short captions that might be used in the exhibits. A photograph of railroad workers with a caption about jobs provoked reactions and questions about ethnic groups, exploitation, danger, recognition and labor riots. Clearly, visitors were interested in these hot, not bland, topics.

One of my favorite front-end stories is a case study for an exhibition on the development of penicillin in Brooklyn. To find out what visitors knew about the topic, a sample of visitors was told that the museum was making a new exhibit called “Manufacturing a Miracle: Brooklyn and the Story of Penicillin,” and then asked one open-ended question: “What would you expect to see, do, find out about, and how would you feel in an exhibit about that?” The question was a mouthful, so visitors were allowed to choose which aspect to answer first. Then the evaluator followed up by probing for responses in the other categories.

Most interesting were their responses to how the exhibit might make them feel. Many people said they would feel pride in Brooklyn for being important in the story of penicillin. Many said they expected to feel more knowledgeable, interested, and “more confident about what I know about penicillin.” Stronger emotions were also reported: “I’d feel very emotional because it saved my dad’s life.” “I’d feel sad, for the people who have suffered illness.” “I’d feel anger--I wish they knew then what we know now about preventive measures.” Appreciation for the drug’s success; fascination with the scientific investigation process; hopefulness for future cures (such as for AIDS); and surprise that such a topic was being dealt with at a historical museum were among the other feelings visitors expressed.

This was a very different question from the vaguer “What would you like to see and do in a history museum?” or “What would you like to know about penicillin?” And the answers were a rich source of ideas for exhibit developers.

Gillian Savage — April 05, 2011

Generally, people don't arrive in focus groups with all their answers assembled and ready. I like to give participants time to engage with some of the ideas, images and objects, and then to have a general chat. It is not a question-and-answer situation.

My most successful projects have been those where participants had the chance to stand and look at objects or displays on walls -- i.e. to act a bit as they would in an exhibition. This can be just photos sourced online along with bits of text printed on paper. Blu-Tack is your friend!

As well as conversation, creative responses to the materials can reveal the full gamut of perspectives, feelings, misunderstandings, etc. That statue of a saint with a globe in his hand may be just a strange paperweight to a non-Catholic child.

I agree that a 'rational-efficiency' mindset seems to prevail too often in both exhibition development and front end evaluation.

So, "what kind of questions would we ask?..." I wouldn't be asking questions, I'd be giving participants an experience and then listening to the kinds of questions THEY ask, or the things they want to talk about.
