The rap on research for the arts, museums, and informal sciences
Maybe the Times arts critics have it in for the Brooklyn Museum. Or maybe they just don’t believe museum curators should get to know the audiences they’re creating exhibitions for. Then again, some museums don’t believe that either, which is why “front end” evaluation is often a botched job.
Beyond some basic historical context, the exhibition offers no revelatory perspective on its contents. That might be partly because, as the organizers, Nancy B. Rosoff and Susan Kennedy Zeller (both Brooklyn Museum curators) point out in their catalog preface, part of the planning process involved focus groups and visitor surveys “to determine the level of visitor interest in and knowledge of the tepee and Plains culture.” They also invited a team of American Indian scholars, artists and tribal members to vet their plans. The result is an exhibition that speaks down to its audience, assuming a low level of sophistication, and that does as little as possible to offend or stir controversy.
On one level, this is the familiar highbrow take on visitor studies: If you ask the public what they want from an arts or culture experience, you’re doomed from the get-go. Focus groups yield lowest-common-denominator thinking, which should have no place in planning encounters with the great or challenging or profound. The museum should exercise its cultural authority and decide what visitors need to see and learn, without getting sidetracked by what they want.
But when you gather museum-goers in a focus group or ask them questions on a survey, do they really tell you, “I want this exhibition to talk down to me. I want the interpretation of objects to be bland and inoffensive”?
Of course not. The real issue here is what kinds of questions the museum asks and how it understands — and makes use of — the answers. I hasten to add that I haven’t seen the exhibition yet, and I may not agree with Johnson that it is condescending or bland. (From what I’ve been able to see online, it looks promising.) ...
Typically, front-end evaluation for a museum exhibit investigates how much visitors already know about the topic of the planned exhibition, and sometimes what they’re interested in learning. (Often, the same study solicits feedback on possible titles for the show or tries to gauge respondents’ likelihood to attend.) The idea is that, if the curators and other members of the exhibition team have a sense of how much knowledge the public brings into the exhibition, they can pitch their interpretive texts at the right level: not over the visitors’ heads, which might make them feel stupid, but not below them, either, which might leave them impatient or bored.
The trouble is, when it comes to most of the topics that museums exhibit, the public knows very little about the artifacts or narratives in question, at least by the standards of the museum. So the apparent mandate from the evaluation is to keep it simple, establish the basic facts, avoid complexities and confusions. The resulting exhibition often feels like a 3D, beautifully illustrated version of a junior-high textbook: you can sense the oversimplifications even if you don’t know enough to say exactly what they are, and you can feel the flat, pedantic tone.
But that’s because we’re starting with a narrowly cognitive, educative purpose in mind. We’re interested in what visitors know about tipis rather than (for example) what they feel, what they wish, what they fear, what they find beautiful, what they find sad. We’re looking at a single, isolated aspect of human connection to the material. It’s not necessarily the most interesting aspect, but it’s the one that museums, as Enlightenment institutions, have traditionally cared about most.
What kinds of questions would we ask if we cared just as much about emotional, spiritual, social, ethical, imaginative, and physical connections to that material? How would we start a conversation with our audiences about those kinds of engagement — and start it early enough in the planning process that the museum’s own “intended learning outcomes” for visitors aren’t yet written in stone (that is, on grant applications)?
What would our exhibitions look like if we did? Probably not the low-risk, unambitious curation that Johnson sees (rightly or wrongly) in the Brooklyn exhibition. Probably something with higher aspirations and less predictable effects on visitors: something that can fail for some and grab others by the heart.
What’s your take on front-end research? Do you conduct such studies at your cultural institution? If so, how are they used?