Golden Rules | Evaluating Evaluation
I’m quite new to the world of evaluation. When I went part-time at the Museums Association a few years ago, as well as becoming a partner in the Museum Consultancy, one of the people I hooked up with was Christian Heath at King’s College London. We were both surprised that, in spite of the millions spent on summative evaluation, it seems to have had a rather limited impact on practice. We wanted to find out why – and we wanted to find out what evaluations could tell museums generally about making engaging visitor experiences.
We devised a project called Evaluating Evaluation and raised some money to do it from two of the main funders of museum evaluation: the Wellcome Trust and the Heritage Lottery Fund. You can read a little more here.
We’ve spent much of 2012 immersed in evaluations, mainly of permanent displays in museums and galleries, with a few from science centres and historic buildings. I soon came across Eric Jensen from Warwick University, a man on a mission running a series of seminars to encourage evaluators to use more rigorous methodologies.
And, at first, I thought that evaluations weren’t better shared and compared because of methodological problems. But, as the Evaluating Evaluation investigation progressed, it became clear that even if all evaluations used perfect social science methods they still wouldn’t have a huge impact.
As Nicky Boyd says, there are institutional and organisational issues that prevent evaluation being shared and used to its full extent. Evaluators often come from outside the museum, or have a relatively marginal role inside it, and so do not have a big influence; project teams disperse after completing their work, and their knowledge often disperses too; few museums have proper systems for recording the findings of evaluations and feeding their lessons into future work [although some, such as the British Museum, do]; and in the case of summative evaluations of permanent displays, which we focused on, it’s often simply impractical to make remedial changes to a complex gallery.
As our project got started, some evaluators got in touch and told us how dissatisfied they were with the way some museums approach evaluation. They feel museums often prepare lousy briefs, expect far too much for the available fee, have little idea what they are going to do with the evaluation once complete, and confuse evaluation with advocacy.
This last point is crucial. I think summative evaluation can be expected to play four conflicting roles. First is the role we’d all say is the most important: to help the project team learn from their work and improve their future practice.
Second is the similar but subtly different role of supporting learning beyond the project team, or in the wider sector. Here, the fear mentioned by Nicky Boyd comes into play. Project teams can be reluctant to expose their failures – sometimes even within the institution they work for.
Third, most summative evaluation comes at the end of an externally funded project and is a funder’s requirement. There’s a real risk that this confuses evaluation with the funder’s systems for monitoring and accountability. Unless the funder and the museum have a very trusting relationship, it’s likely that the evaluation will be under pressure to ‘prove’ to the funder that the museum met all its objectives.
Finally, in a similar vein, some museums don’t want evaluation at all – they just want good news that they can use for advocacy.
I fear that, because of the organisational issues, it will be hard for the sector to move to a position where it can realise all the potential benefits from summative evaluation. [I should stress that our focus has been on summative evaluation; the problems seem fewer, and the benefits easier to realise, in the case of front-end and formative evaluation.]
After reading dozens of summative evaluation reports, I have to say that my main feeling is one of disappointment that they tell us rather little about what visitors actually experience. There’s lots and lots of information in summative evaluations, much of it very interesting, but little of it seems that useful beyond the museum that commissioned it – and in some cases it might not be that useful even there.
Sorry to say, if we’re looking for ‘golden rules’ about how to create great museum displays, I don’t think we’ll find them in summative evaluations. [It may be, of course, that the golden rules in fact don’t exist.]
Some evaluators have suggested that museums might move slightly away from summative evaluation towards purer audience research. Perhaps, for example, a group of museums could jointly align all the evaluations they commission to address a consistent set of research questions. Then, over time, comparable data would build up and we might gain some solid empirical evidence about how to better engage our audiences.
Author | Maurice Davies is a visiting senior research fellow in the Department of Management at King’s College London, a partner in the Museum Consultancy and Head of Policy and Communication at the Museums Association | maurice.davies@ntlworld.com