Evaluating the Impacts of Engagement

How can we evaluate the impact on students taking part in I’m a Scientist? Can we measure whether they’re more likely to take a STEM subject at A Level? Whether they’re more likely to study science at university? How should we use the large amounts of data generated by online projects? How can we share our evaluation in a more useful way?

These are just some of the questions we’re trying to answer about evaluating I’m a Scientist and other Gallomanor-run projects. Judging from the first in a series of seminars on Evaluating Impacts of Public Engagement and Non-Formal Learning, held last Friday, 4th November, others are thinking along the same lines.

The Core Issues & Debates seminar kicked off the series at the Dana Centre in London, and brought together a range of researchers, evaluators, and learning and communication practitioners. Future seminars will focus on areas such as reaching new audiences, evaluating online engagement and using qualitative evaluation methods.

The 7 speakers approached evaluating impacts from different perspectives – funding, strategy, science festivals, academia, and museums/science centres. Some key themes emerged across the 20-minute talks and the Q&A sessions that followed. (It would have been useful to have more time for Q&A discussion after each speaker, as the allocated 10 minutes were quickly eaten into.)

  1. Evaluation needs to be shared with others so all projects are ‘learning projects’. The British Science Association’s Collective Memory is a good place to start. It’s also worth thinking throughout a project about how to improve its evaluation, such as rewording evaluation questions so they return more useful responses.
  2. Evaluation is very important right from the grant application stage at the start of a project, but shouldn’t be done for the sake of it, or just because funders ask for it.
  3. There are still many unanswered questions about how to evaluate and measure the impacts of an engagement project. Is it really possible to measure whether students are more engaged with or interested in science as a direct result of one activity? Is it enough to accept that your activity is one of many factors that may have influenced an observed change? These will hopefully be explored further, and maybe even answered, in future seminars in the series.
  4. Negative feedback can be hard to capture. Evaluation studies should therefore be designed to elicit it, for example by framing questions that encourage participants to reflect on more than just the positives of an event.
  5. Bad evaluation that draws inaccurate or invalid conclusions from data can be more damaging than no evaluation.

Overall it was a useful introduction to, and summary of, how impacts are being evaluated. Armed with the 7 pages of dense notes I scribbled during the seminar, we’re now working out how to put some of these ideas into practice with I’m a Scientist. That will likely spark another post in due course.


Posted on November 9, 2011 in Evaluation, IAS Event, News, Science Engagement.