As part of ScienceOnline (or SpotOnLondon2012) Karen Bultitude and I have arranged a session called: Can we work together to better evaluate online engagement? It’s taking place in the Steel Room on Monday 12th November at 2.30pm.
We wanted to provide some context to the session. Here goes:
Since you’re interested in ScienceOnline it’s fairly likely that you think online science engagement is important, and going to get more important. Me too. I run a few online science engagement projects; https://imascientist.org.uk is the most prominent. It’s pretty important to me to know how well the project is performing. We need to know in order to improve it; we need to be able to tell our funders; and we feel obliged to demonstrate to all stakeholders that the project is effective.
So we evaluate our projects using a wide range of methods: surveys, interviews, personal meaning mapping and we also try to analyse the vast quantities of data that we capture as a matter of course from the website and Google Analytics. We use that information to work out how we’re doing.
All well and good, except that without anything to compare against it is difficult to know how we are doing. A starting point for comparing against sector standards is knowing what to measure, and it is surprisingly difficult to find much information about how online engagement projects are performing. We try to publish all our evaluation. Google: evaluation I’m a Scientist. The first three results are our main evaluation reports and more informal evaluations of two spin-offs. Try it for another online engagement project. Google: evaluation [INSERT PROJECT NAME] – very little seems to be published.
The cultural sector (including the Science Museum) recognised this problem in 2009. They came together to produce the “Let’s Get Real: How to evaluate Online Success” report in order to better understand their digital activities and to help them plan future activities more effectively in more financially constrained times.
The report brought together 17 museums, galleries and venues. They shared statistics and methodologies. They agreed a common configuration for Google Analytics for the purpose of reporting to funders. They shared good practice.
There is plenty of interest in how we measure our online activity. This session is the third on the subject at this conference alone. Our hope is that we can work together to better measure our online success.
There will be benefits to better measurement. We’ll be able to improve what we do. We can be more efficient, more effective. We’ll spend less time wondering what, out of the mountains of data, is important. We’ll also be able to demonstrate to funders in a coherent and consistent manner that online engagement is effective. And that is something that will benefit all of us who believe in online engagement.