
Evaluation: How it shaped the public dialogue on location data ethics

Image: A large and diverse group of people, seen from above, gathered together in the shape of a pie chart. Credit: Arthimedes, Shutterstock

In support of Mission 1 of the UK Geospatial Strategy, to promote and safeguard the use of location data, the Geospatial Commission intends to publish guidance later this year on how to unlock value from location data in a manner that mitigates concerns and retains public confidence. The public dialogue, one of the UK’s first on location data, gathered evidence on public perceptions of location data use to inform this guidance. The final report, published in December, offered valuable insights into what citizens believe are the key benefits and concerns.

At the Geospatial Commission, evaluation is a key part of every project. It takes many forms, but it always helps us to learn and adapt during a project and provides evidence of the impact we are having. In this blog, independent evaluator Sophie Reid tells us about the evaluation process she used during the public dialogue on location data ethics.

Why is evaluation so important in the public dialogue?

Public dialogues are naturally organic and iterative processes. Objectives are set at the outset, of course, but much of the true insight comes from an open process that allows members of the public to talk, in their own words, about the aspects of the topic that matter to them and how it fits into their lives.

Evaluating the quality of the process, therefore, calls for an approach that can accommodate this flexible style. Things might change along the way. Unexpected outcomes might arise.

In any evaluation, you are mostly looking at two things: the impact the project has had (what difference it has made) and the process that provides the context for that impact. A public dialogue is no different – you can assess the impact on public participants, on other stakeholders and on policymaking. You can assess the process too, and in a final assessment judge what difference those aspects of the process made to the overall impact of the dialogue. This helps lead to improvements for future public dialogues – a better process and greater impact.

However, evaluation is not just about making a final assessment. Along the way, it also helps to improve the dialogue as it is developed and delivered. I have produced internal reports and taken part in project team meetings to feed back what has been learnt through the evaluation. This formative aspect of evaluation is important in creating a learning and development approach for the project, allowing the project team to react and adapt to the latest information.

Collaborations and methodologies

Sciencewise, the co-funder of the dialogue, is a programme led by UK Research and Innovation (UKRI) that supports policymakers in developing socially informed policy through public dialogue. Since its inception in 2004, and across the more than 55 dialogues it has supported since then, Sciencewise has developed a framework for assessing quality in public dialogue. This framework forms the backbone of any independent evaluation of a Sciencewise co-funded public dialogue, including guidance on assessing context, scope and design, delivery, and impact.

For this particular evaluation, I wanted to apply a methodology called Realist Evaluation, first outlined by Ray Pawson and Nicholas Tilley. This is a theory-based evaluation methodology that asks what works, for whom, in which circumstances and why. In particular, it focuses on identifying the mechanisms by which outcomes are achieved – mechanisms that sit at the intersection of the resources the dialogue offers and how everyone involved responds to them.

Articulated in this language, the overall theory I have been testing is that, by providing participants with new resources (stimulus material, interaction with experts, and a structured and welcoming space for discussion with others), public dialogue enables them to make meaningful contributions to policy development, and that the dialogue process and its outputs are seen as credible and are used by policymakers and other stakeholders.

Emerging from the evaluation so far are three key mechanisms to test in assessing whether the intended outcomes have been achieved:

  • credibility (of the dialogue project to participants, and of the outputs and the process to its stakeholders)
  • participant ‘readiness’ to deliberate
  • usability of the findings

These are what I will be paying particular attention to over the next six months as the longer-term impact of the dialogue emerges.

Seeking feedback and next steps

In practice, evaluating with this methodology has meant observing the workshops, facilitator briefings and focus groups with specifically impacted groups. I have asked public participants, experts and observers to complete a survey after some of the workshops. At multiple points during the dialogue, I have also interviewed a sample of participants, experts and observers, members of the project’s independent expert Oversight Group, and members of the project team, including Traverse, the Ada Lovelace Institute and the Geospatial Commission.

It has been fascinating to watch the public participants grapple with the topic of location data ethics – a conversation that is at once technical and about society and ethics. The participants have risen to this challenge when provided with the space, materials and expert facilitation to learn about the topic and to develop their own views through discussion with a diverse group of others.

A final evaluation report of the dialogue will be published by Sciencewise in summer 2022.

Sign up for this blog to get an email notification every time we publish a new blog post. For more information about this and other news see our website, or follow us on Twitter and LinkedIn.
