Friday, 21 December 2012

One project finishes | another one starts better


How can we share what we learn?


There’s no doubt in my mind that there is some excellent audience research going on in the heritage industry, but I’m equally convinced that we could and should share it better.  The question is how?

Audience research at the Science Museum has focused for over 15 years on improving our offer for visitors. The main people the research has influenced have therefore been internal – the exhibition/programme/web teams who make our offer and the Trustees and Executive Board who drive our audience-focused strategy. We have always wanted to publish the results of our evaluation more broadly, and have sometimes done so[i], but I have to admit this has been opportunistic. Too often, as one project finishes, we are caught up in the rush of the next before sharing what’s happened outside the Museum itself. This is now changing. We have a new Director keen to raise the profile of the work we’re doing, and we have started going down several paths to do just that. Here are some of the issues that I’ve come across in doing this, and some suggestions for making sharing easier.

Which evaluation to publish?


If most evaluation publications are about the summative conclusions at the end of a project, we miss out on sharing front-end and formative research. I can understand why this is – developmental research is harder to publish and is often ‘quick and dirty’, with the practical aim of getting an exhibit to work rather than exploring the intellectual theory behind it. Yet if we fail to share this kind of research we underplay the crucial research and development function of audience research, the one which perhaps has the greatest direct impact on visitors.

Front-end research can completely change the direction of an exhibition. Here are some examples of how it has worked for us. ‘Difficult and dull’ was the audience reaction to the idea of a gallery on brain science and genetics. That reaction challenged us to be more creative, and we reframed the exhibition around the issue of identity. The result is the enduringly popular ‘Who Am I?’ gallery. For a redevelopment in 2010, we conducted formative research with disabled groups, who asked for a more multisensory approach to content. The result was a series of case augmentations bringing objects into the open gallery [fig 1] and increasing engagement opportunities for all our visitors. Increasing access is a key interest of the Audience Research Group, and this work was discussed at several conferences.[ii]

Fig 1 | Case augmentation on human emotions, Who Am I? Copyright© Science Museum

When we designed our new Launch Pad gallery [the Museum’s signature interactive children’s gallery, which presents science phenomena through hands-on exhibits], the existing Launch Pad provided an ideal test-bed for new ideas. Almost immediately we discovered that the current space was pitched at the wrong audience, falling into the chasm between younger children [for whom the scientific ideas were too complex] and older children [for whom the design was too childish]. Moreover, although current learning literature stresses the importance of social learning, it turned out that many of the existing exhibits gave a passive, demonstration-type experience rather than offering opportunities for interaction [fig 2].

Fig 2 | Exhibit in the old Launch Pad gallery. Copyright© Science Museum

The new Launch Pad [fig 3] was redesigned for children aged 8–14 and accompanying adults. The exhibition team absorbed the current learning literature [including, for example, George Hein[iii], Howard Gardner[iv] and Kevin Crowley[v]], looked at international best practice [especially at the Exploratorium in San Francisco] and conducted endless testing to develop exhibits that invite open-ended, multi-user, exploratory behaviour. New Launch Pad welcomed its millionth visitor within 10 months.

Fig 3 | The new Launch Pad gallery. Copyright© Science Museum

As another example of the importance of formative testing, take our recently opened ‘atmosphere’ gallery, which tackles climate science [fig 4].

Fig 4 | atmosphere gallery. Copyright© Science Museum

When we came to create displays about climate change, the research included a small-scale test of visitors’ understanding of terminology, and the researcher started to map out the mental models that visitors were bringing to the subject. We found that visitors’ understanding was at an altogether different level from the one the exhibition team [led by an eminent climate scientist] had supposed. The mental models showed that visitors had a high level of prior knowledge about the impacts of climate change, for example rising sea levels and melting ice caps. However, there were gaps in their knowledge and understanding of some of the key terms and processes.

For example, visitors believed their own prior knowledge of the causes of climate change to be sound. However, when probed it was apparent that they held misconceptions, such as believing that a depleted ozone layer has a direct causal link to global warming. This knowledge of audience understanding helped the exhibition team to create content that extended knowledge and challenged misconceptions[vi].

When it comes to developing new ideas, prototype evaluation is crucial, and we do it with interactive exhibits, interpretation, new technologies [such as object engagement apps] and online content. Prototyping allows us to take risks in innovation and creativity – we can test ideas in the relative privacy of development rather than on the public floor, and succeed [or fail] interestingly [and cheaply]. I can’t help feeling that these experiences are among the most valuable evaluations to share.

An example is the research we conducted to develop instruction labels for interactive exhibits in Launch Pad. We realised that visitors were having trouble understanding how to use some exhibits correctly. For example, they were using the ‘turntable’ exhibit as a roundabout rather than experimenting with their body position to explore conservation of angular momentum. We needed an interpretation solution that removed this barrier to engaging with the exhibit properly. The video label project used a combination of academic literature review and original research to create one. Inspired by Gelman et al.[vii] and Stevens and Hall[viii], who wrote about the value of presenting hints and concepts through moving-image media, we worked with a resident PhD student and a group of placement students to develop and test four different prototypes of a visual label. The final video labels show a short clip of a member of staff using the exhibit correctly, footage of the science phenomenon in the real world, and a couple of lines of simple text, and they are played on large screens so that people can absorb the instructions while waiting for their turn. Video labels are a simple and original answer to a recognised problem [fig 5].

Fig 5 | video label, turntable exhibit, new Launch Pad. Copyright© Science Museum 

But what about summative evaluations? Should we share these? 


Summative evaluations [which explore how well an exhibition – or other kind of project – meets its objectives for its target audiences] might look more promising for publication: they are usually more extensive, use complex triangulated methodologies and often produce very long and detailed reports. But I’m uncomfortable publishing these in their raw form. Not because I mind what they say – some of our best ideas come from things that don’t work – but because I think they are the hardest for other people to make use of, as they relate so specifically to the gallery in question. Tracking studies, cued and uncued observations at individual exhibits, dwell times and satisfaction ratings are very site specific. And for me the impact isn’t in the report or the gallery being evaluated; it’s in the cumulative learning that feeds back into the institution. I always see summative work as front-end work for the next project, and I think we genuinely achieve that at the Science Museum. The problem is that this feedback usually happens through the knowledge accumulation, training and advocacy of an internal audience research department, and that’s hard to translate into documentary form. This is where we have to do better.

What format is best for sharing, and with whom?


The format for sharing should meet the audience needs. But which audiences, and what type of information is most useful to them?

To try and answer these questions, here are some of the ways the Science Museum is planning to share our work across the museum sector. I think this is an area for much further discussion, and I’ve commented on each method to contribute to the debate:

  • Training | Often it’s not evaluation findings but expertise that professional colleagues are after. Here, training is the better format, and we’ve had a good response to evaluation and prototyping workshops at Ecsite and Visitor Studies Association conferences. But there are no cross-sector systems for encouraging and developing practitioners’ skills in research and evaluation. Should that be a priority?
  • Peer networks | The web is a great tool for sharing with peers, and we have developed the Sharing Expertise section of our website to do just that: http://www.sciencemuseum.org.uk/about_us/sharing_expertise.apx This focuses on our practice rather than evaluation, including what we’ve learned from 15 years of running children’s sleepovers, tips on developing science dialogue events, and lessons from Talk Science, a five-year project delivering teachers’ Continuing Professional Development [CPD] around contemporary science debate [fig 6].
Fig 6 | Teachers participating in the Talk Science Professional Development course. Copyright© Science Museum
  • Conferences | We often present our newest work at museum or science centre conferences, in the UK and the States, and keep in touch with international best practice that way. But I wonder whether presentations, which are so momentary and ephemeral, are easily embedded in the field. And work presented to specific sectors, such as science-related institutions, may not find its way to the broader cultural sector: a missed opportunity.
  • Self-publishing | This is another option – we are planning a detailed audience research website – and I do like the idea of posting on the growing number of knowledge portals, for example the VSA [Visitor Studies Association], the ISE Evidence Wiki and ASTC [Association of Science-Technology Centers] ExhibitFiles [on which we published the Launch Pad video labels work]. But informal feedback suggests that take-up of these is patchy. How are readers and contributors to choose which to use? As a reader I’d also like a quality filter [such as peer review] so I can be sure case studies are sound and generalisable.
  • Academic papers | Academic papers are a powerful way of sharing learning. Writing them forces us to contextualise on-the-ground evaluation within a general research question, and reading them helps us embed original research in reliable academic literature. And peer review must provide some assurance of quality. But how many of us practitioners regularly read academic literature? Recent Wellcome Trust-funded research found that, out of 29 senior science communication professionals surveyed, none had read any of the ten most-cited publications related to their field[ix]. Nevertheless, academic rigour and reputation are important to us, and I will always look for opportunities to publish in this way.
  • Academic collaborations | Building relationships with university departments can also lead to fruitful outcomes. Academics provide access to current thinking and the newest methodologies, as well as routes into joint research projects [and funding streams] and potential co-publications. An informal reading group organised by staff at King’s College London provides a forum for us to discuss our work with colleagues from museums such as the V&A, the British Museum and Tate, and it is always useful to have bracing discussions about our current thinking.
  • Practitioners’ guides and manuals | These provide an alternative dissemination format. One of the best practitioners’ guides I’ve seen is the San Francisco Exploratorium’s Active Prolonged Engagement [APE] manual, which uses rigorous research to tease out what makes visitors stay longer and engage more deeply at interactive science exhibits, in a ‘how to...’ format. It was a big influence on the success of our Launch Pad gallery.
An example of where sharing was planned for and funded may be useful here. In 2003 the Wellcome Trust funded ‘Naked Science’, an 18-month pilot project to inform the programme of adult controversial-science dialogue events planned for our new Dana Centre. As usual, the project included evaluation – we tested experimental events from puppet shows to science theatre, punk science comedy to genetics board games, as well as exploring adults’ attitudes to controversial science and working out how we would define and measure dialogue. Unusually, though, the project also included funding for sharing our findings externally. To do this we developed a ‘do-it-yourself’ guide for anyone wanting to set up their own science dialogue events, and produced a detailed report with all of the research findings. You can find these on the Dana Centre website: http://www.danacentre.org.uk/aboutus So, might including sharing activity in funding bids be the most effective way forward?

Is one solution a secondment to disseminate findings?


In conclusion, my feeling is that we should be working towards a more systematic way of gathering and sharing evaluation and best practice across the cultural field, but while that develops we are best off sharing in all kinds of different ways for different kinds of reader. This does, however, increase the investment of time required to reflect and write, so that we translate our research into formats that are findable and usable by the widest audience. This time is over and above that of doing the original research, and institutions have to commit to it. The Science Museum has delivered some of the investment I’ve suggested in this blog by giving me a secondment to concentrate just on writing and publishing research findings. It’s now my job to identify the most important and interesting findings from the past ten years of audience research and translate them into the most effective sharing formats. My question to myself is how much I can get out there in the next six months!

Author | Kate Steiner, Head of Audience Research, The Science Museum | kate.steiner@sciencemuseum.org.uk


Source material



[i] An example of a recent Science Museum Group evaluation publication: Birchall, D., Henson, M., Burch, A., Evans, D. and Haley Goldman, K., ‘Leveling Up: Towards Best Practice in Evaluating Museum Games’, in Proctor, N. and Cherry, R. (eds), Museums and the Web 2012: Selected Papers and Proceedings, Museums and the Web, 2012
[ii] Steiner, K., ‘Designing accessible exhibitions’, presentation, Ecsite (European Network of Science Centres and Museums) conference, 2012
[iii] Hein, G., Learning in the Museum (Museum Meanings), Routledge, 1998
[iv] Gardner, H., Frames of Mind: The Theory of Multiple Intelligences, Basic Books, 2011
[v] Crowley, K. and Knutson, K., ‘Museum as learning laboratory: Bringing research and practice together’, Hand to Hand, 19 (2005), 1, pp 3–6
[vi] Dillon, J. and Hobson, M., ‘Communicating global climate change: issues and dilemmas’, in Gilbert, J., Lewenstein, B. and Stocklmayer, S. (eds), Communication and Engagement in Science and Technology, Routledge, New York, 2012 (in press)
[vii] Gelman, R., Massey, C. M. and McManus, M., ‘Characterising supporting environments for cognitive development: lessons from children in a museum’, in Resnick, L., Levine, J. and Teasley, S. (eds), Perspectives on Socially Shared Cognition, 1991, pp 226–56
[viii] Stevens, R. and Hall, R., ‘Seeing Tornado: how video traces mediate visitor understanding of phenomena in a science museum’, Science Education, 81 (1997), pp 735–48
[ix] Falk, J. et al., Analysing the UK Science Education Community: The Contribution of Informal Providers, November 2012

Thursday, 20 December 2012

Sharing evaluation | don't reinvent the wheel...


Barriers


For over 12 years now I’ve worked within the museum and heritage sector in the field of evaluation and audience research, and I’m reasonably confident the sector can now see the benefits of evaluation. However, I often feel frustrated at the barriers that we, as museum professionals, collectively seem to have raised, which often prevent us from sharing what we find out from the evaluation process. We are still reticent about telling others what hasn’t worked, we still ask the same old questions, and we don’t look internally or externally as much as we could for useful data. We just don’t seem to learn as well as we potentially could from the experiences of our colleagues.

Recently a colleague suggested that perhaps it’s just part of the human condition that we need to keep asking the same questions of museum audiences. Does this suggest we need reassurance, or is it fundamentally that we believe our organisations are so unique that we can’t apply what others have found out about their audiences to our own situation? Does it feel easier to keep asking the same questions rather than spending time applying someone else’s research to our own organisation? I’ve come to the conclusion that breaking down the barriers to sharing is about much more than whether we publish our findings. Many museums already do this really well, e.g. the Victoria and Albert Museum, which commissions a lot of research and puts reports online, and the Natural History Museum, which shares evaluations of exhibitions and learning programmes. I think we need to do more.

Understanding


We know that carrying out evaluation provides evidence about how projects and programmes are meeting aims and objectives, and encourages ongoing project improvements. We know it can act as an excellent tool for telling others about the great work being done in the sector, enabling us to share lessons learned with colleagues. Evaluation can help the project team and participants feel like they ‘own’ the project, it can develop relationships with visitors, and it can play a really important role in developing ideas and planning future projects [see image]. We also know that if we don’t carry out any evaluation we could waste time and money, produce something that we can’t change, lose the interest of our target audience, waste an opportunity to learn something useful, and lose funding. There are a number of excellent online toolkits readily available which steer willing participants through the whole evaluation process, e.g. the East of England Museum Hub’s Evaluation Toolkit for Museum Practitioners, and the Smithsonian Center for Education and Museum Studies has just set up an online community of practice around evaluation, which looks really exciting.

Facilitating a focus group at the Grant Museum of Zoology as part of the London Museums Hub Project ‘Say it again say it differently’ 2004-2006.

Fear


So we understand the benefits of evaluation and how to do it, but why are we not sharing what we find out through the evaluation process more effectively? [see image] One of the important factors may be that many of us are still reticent about telling others what hasn’t worked so well. I really understand the fear behind this. For many reasons we want to portray projects, programmes and events in the best light, to secure future funding or to retain professional standing. But in its purest sense evaluation should be seen as an opportunity for us to learn and improve our professional practice. It should not be done to point fingers or assign blame. Rather, it should help create a culture where we are able to take risks and share what hasn’t gone so well. Only then can we learn and move forward as a sector.

Re-packaging


I think a possible way forward is to re-package the data already held within organisations and to think more strategically about how we plan evaluation. The organisations I regularly work with are often surprised by how much data they actually have once they start looking! Recently, working with the Education and Interpretation teams in a heritage organisation, a colleague and I tried this as a practical exercise. After some initial hesitation, followed by discussion and deliberation, the team realised they could pull useful audience information out of their existing evaluation data and put it to efficient use in planning future projects and programmes. Maybe the next step would be to work with two or three similar organisations to share audience consultation and evaluation data around specific audiences. Maybe a group of museums could agree some overarching research questions, so that all the evaluation and audience consultation work they do individually could collectively feed into answering those questions.

Facilitating a focus group at the Brunel Museum as part of the London Museums Hub Project ‘Say it again say it differently’ 2004-2006.

Let’s be confident about what we know


As a profession we need to be confident about what we already know. I am often asked to find out what barriers secondary school teachers face in taking their students out to museums. I can safely say that I’ve asked a lot of teachers this question and I know what the answers usually are, so I encourage the organisations I work with to ask something which might be fundamentally more useful [see image]. A recent project with the National Maritime Museum [NMM] in Greenwich took this a step further when they asked secondary teachers what the NMM could do to help them advocate internally for a museum trip with their students. Should the NMM guarantee that the school could bring a whole year group at once, or guarantee specialist input from a member of museum staff for the visit? Should it provide evidence of impact on attainment targets, or information that highlights the benefits of learning outside the classroom? The Museum plans to share the results of this research with the sector. Being confident that it already knows the barriers teachers face has enabled the NMM to find out new information which will ultimately improve its school offer.

Evaluating with children at Bromley Museum as part of the London Museums Hub Project ‘Say it again say it differently’ 2004-2006.

Sharing documents is not enough

As a committee member of the London Museums Group, I can see there are certainly more conversations we can have about how Share London, the London Museums Group online forum for sharing practice, can help us share evaluation and audience consultation findings more widely. We need to remember that posting our analysed and interpreted evaluation and audience research data might not be enough; more work and more conversations are needed about how we stop reinventing the wheel. Ultimately, time is always precious, and if we can stop asking the same questions, use the data we already have and share our findings more efficiently, we [and our audiences] can only benefit in the longer term.

Author | Nicky Boyd, Museum Consultant [Audience Research & Evaluation] and LMG | nickyboyd@btinternet.com

Images | All photographs were taken by J. Neligan during focus groups as part of the London Museums Hub project ‘Say it again say it differently’ 2004-2006.

For more information on the project please go to: http://www.museumoflondon.org.uk/Corporate/About-us/Regional-Programmes/Publications+and+Resources.htm


Nicky Boyd’s blog has been reprinted on the Guardian’s Culture Professionals Network.