Science Learning+ is a significant funding scheme provided jointly by the Wellcome Trust and the National Science Foundation.
Learning can happen anywhere and at any time. Science Learning+ is an international initiative that aims to understand the power of informal learning experiences inside and outside of school.
The second aim of the scheme is to
“bridge the practice and research gap”
At a seminar in July, aimed at providing an update on the Phase I projects, an interesting conversation developed about that gap between science communication practitioners and researchers.
I heard one speaker talk about practitioners wanting to know if a hypothetical red headline would give a 3% uplift in visitors. I responded on Twitter:
Disagree that practitioners want efficacy. I use eval. for that. I want research to tell me if the activity provides good outcomes #slplus
— Shane McCracken (@ShaneMcC) July 27, 2015
Not all practitioners agreed with me. Some felt each project would be unique enough to warrant a rewriting of expectations:
— Helen Featherstone (@HFeatherstone) July 28, 2015
Others simply disagreed and placed efficacy as something for researchers:
— Andy Lloyd (@arlloyd) July 27, 2015
In the end, 140 characters felt underpowered.
For me, research and evaluation are different but closely related.
I expect research to tell me whether an approach to science communication works, and how it works. I expect evaluation to tell me how well a particular project is working and how it can be improved. I would like evaluation to draw upon the research to predict that particular activities will lead to particular outcomes.
For example, using I’m a Scientist:
The feedback we get from our participants is that connecting online with scientists improves their attitudes to science, and to jobs in science. The changes in attitude among girls appear to be greater than those among boys.
I want research to tell me why those conversations are improving attitudes, and whether those changes persist. I want research to tell me how online activity compares with offline activity, and why.
I want research to tell me what characteristics of engagement deliver the best and most persistent improvements in attitude and achievement.
Then I want my evaluation to examine our work against those characteristics and to suggest ways to improve them.
Research = why something works
Evaluation = how well something works
What do you think?