STREAM 5: Data at work: developing the evidence base to guide future action

Stream Moderator
Robert Thomas
Project Manager for Enabling Technologies, DIICCSRTE
Brains Trust
Professor Léonie Rennie
Science & Mathematics Education Centre, Curtin University
Brains Trust
Oona Nielssen
General Manager, Communication, CSIRO

STREAM HOSTED ON 6 JUNE
Workshop 1: 12.00pm – 1.30pm
Workshop 2: 2.30pm – 4.00pm

Robust, effective and meaningful science engagement in Australia depends on a solid evidence base. Over recent months a number of reports arising from IA-supported initiatives have been released, and together they have begun to build that science engagement evidence base.

So where to now?

Now that we have an evidence base in development, how do we use the data? How should this information be shared, and with whom? How will it be kept up-to-date? Are any vital areas still unexplored? Have we even gathered the right data at all?

What are the ramifications for future funding and policy decisions? Where do we want to be in five years’ time? What should be our goals? A consistent measure for science engagement activities and a national picture of Australians’ attitudes towards science? Something else?

Participate in the Summit and leave a comment

17 comments
Craig Cormick

The top three impediments and solutions from yesterday's workshop. What do you think?

Stream 5: Better Using the Data

Issue 1: Know your objectives

Solution:

Determine your program objectives in advance and build those objectives into the event.

Instead of an annoying survey after the event, engage the audience during the event to measure outcomes.

Provide models for uniform evaluation methodologies and best practice examples.

Issue 2: Know how to get the data you need

Solution:

Determine standards of evaluation appropriate to your objectives.

Build a shared centralised community of agreed benchmarks to develop a toolkit of methods based on shared qualitative principles of evaluation.

Provide models for uniform evaluation methodologies and best practice examples.

Issue 3: Having the resources to get what you want

Solution:

Have the resources to be able to measure your outcomes.

Build evaluation into grant provisions.

Collaborate with organisations with shared interests and share the financial burden of evaluation.

Science communication grants to include good and consistent evaluation components.

Melanie McKenzie

@Craig Cormick Just wanted to follow up that the Issues that are listed here have misrepresented the conversations that took place (at least the ones I attended). I'm not sure where *uniform* methodologies came from. The main issues here refer to establishing standards for evaluation, with well-considered, tailored objectives for different audiences. Not making it all the same.

Craig Cormick

Melanie - thanks for this. I've updated the line based on conversations on the day - but will use your text here too. CC

Rob Thomas1

@Craig Cormick I agree with this; the notes also mention that the community develops a toolkit of suitable methods based on the agreed standards of evaluation. So a range of quality methods for practitioners to use based on their needs.

Rob Thomas1

Hi all,

We've got a summary based on what's being said - here it is:

Science communication programs should be designed first and foremost with a clear, measurable objective in mind, and with sufficient resources allocated to measure the achievement of that objective.


The resources available should determine what objective is achievable in a science communication project and how reliably it can be measured. The size and scope of the project, including its audience, should be determined by the capacity of the project's resources to achieve and measure outcomes.

A culture of accountable evaluation should help determine methods of information collection and what information is worth collecting. For example, audience size is not a worthwhile outcome to report on by itself without additional qualitative information.

In doing so we create science communication programs that have measurement built in to justify their existence, and that can provide important, useful data to the community's ongoing research into the effectiveness of science communication.


Or, a summary of the summary:

A more accountable culture that provides useable data.


This is the statement we'll be working to achieve at the conference - what do you think of it? Let us know.

Craig Cormick

Okay, a very BIG question for all panel streams - are we too distracted by the word 'Science'? Is Science our obsession - and should we rather be talking less about the Process and more about the Endgame, such as:

- Environmental sustainability

- Productive industries

- Better health services

- More informed citizens

- Being healthier, wealthier and wiser etc etc etc?

Craig Cormick

Challenging statement for discussion, from Dan Kahan at Yale: What do you think of it?

"Not only do too many science communicators ignore evidence about what does and doesn't work.  Way way too many also shoot from the hip in a completely fact-free, imagination-run-wild way in formulating communication strategies.

If they don't rely entirely on their own personal experience mixed with introspection, they simply reach into the grab bag of decision science mechanisms (it's vast), picking and choosing, mixing and matching, and in the end presenting what is really just an elaborate just-so story on what the "problem" is and how to "solve" it.  

That's not science. It's pseudo-science."

Claire Harris

Perhaps the first part is true but needs to be taken in context. As with any discipline or job, there are those that do their jobs well to the level expected of them and some that don't. Also, the time and resourcing to go and fill knowledge gaps around audience and stakeholder needs is rarely supported by organisations. I know I often highlight gaps in market or academic research but I am rarely given a budget to rectify them myself.... Re the next bit of your post, I'm not sure what you're referring to in those blog posts as I couldn't find the specific text - i.e. what's not science?

Melanie McKenzie

Yes - but good, ol' research is subject to a qualified peer review process. Evaluation "research" is usually not peer reviewed (or published). Program evaluation standards have been established elsewhere (e.g., see the American Evaluation Standards: http://www.eval.org/evaluationdocuments/progeval.html). I'm suggesting that we should have a conversation about what our standards are - or at least put it on the agenda for further discussion.

Leonie Rennie

I think this is a key aspect of what we are about, and it would be great to make some shared progress on what our standards should be. What do you think are some of the issues that will help or hinder us in achieving high standards in sci comm evaluation?

Vicki Martin

@Leonie Rennie At this stage, I don't think we have a good enough idea of who our audience really is.  This makes it difficult - if not impossible - to measure the real impact our communication is having.  I think we can get a lot more help from psychologists - social psychologists in particular, to help us understand the audience better, and understand the barriers to communication - similar to the way they've been helping us understand issues around climate change and human behaviour.  

heatherbray6

@Melanie McKenzie Ok, I get where you are coming from now, and agree that 'we' need to set some norms around good evaluation practice, and embed that in good sci comms/engagement practice. I look forward to discussions about the differences between evaluation and research :)

heatherbray6

Isn't what we're talking about here just good, ol' research? All disciplines do it: science, health, education etc. They share data by publishing (usually with peer review) with the aim of improving knowledge/practice within their discipline. I don't know if there is one 'right way' to measure science engagement, but I like the idea of a dynamic group of researchers collectively engaged in examining what we do and sharing findings.

Melanie McKenzie

In order to establish an evidence base, we need some way of telling the difference between what is useful (e.g., reliable and meaningful) evidence - and what is not. Rather than assuming all data are good, I think we need to discuss some standards for collecting, analysing, reporting, and compiling these data. In other words: we need to set some standards for evaluating.

Claire Harris

Would be great to discuss the results from the national audit of science engagement activities http://www.asc.asn.au/more-about-the-national-audit-of-science-engagement-activities/ and also some of the deficiencies/needs for more information. See The Conversation article and ensuing discussion: http://theconversation.com/science-engagement-in-australia-is-a-20th-century-toy-12456

jazz_vibes

The best data the public has is the weather report. Imagine if we had the same kind of report for emissions or energy intensity, internet usage, cars on the road, etc. A section at the end of the nightly news could be devoted to a new one each night.