
Friday 1 July 2016

92. Design experiments for local democracy


The notwestminster work we have been doing is seriously great and the folks involved have achieved a lot.  There have been two brilliant events, many ideas generated and lots of new connections made.  From the stuff done so far we have settled on a set of local democracy design challenges to work on - a list of things we want to change.

While this is all good, the next step is to do something practical; to make some stuff.  We had a brilliant Maker Day in February but we need to take it up a notch.

The method I'm suggesting is design experiments.  Here are some notes and first thoughts.

Design Experiments


Recently I have been reading ‘Nudge, Nudge, Think, Think’ by Peter John and others.  A fascinating book - you should read it!  It’s primarily about techniques to encourage behaviour change (e.g. nudge, think) in areas of civic concern such as recycling or volunteering.  It is also about how you test and develop innovations in public service settings.  Specifically it looks at how randomised control trials and design experiments can be used to explore new ways of engaging with citizens.

Stoker and John describe a design experiment like this:
‘…researchers work with practitioners over a period of time to design, implement, evaluate, and redesign an innovative intervention. The group receiving treatment is compared to a comparison group. The aim is to perfect the intervention over several iterative cycles until it is the best it can be.’ [1]
One example in the book is of a video being used to get the voices of the ‘seldom heard’ into the debates at a council's area committee meetings.  The research team asked local practitioners to produce a video that was then shown to different area committees.  Observations at each meeting led to changes to the video and, at the end of the experiment, conclusions were reached about how well the approach worked and how it could be improved in future (it turns out that the context of the meeting was as important as the video itself).

Along similar lines are some of the experiments being done by Nick Taylor.  See this simple voting machine to encourage community engagement, for example.  Again, a small intervention designed to test the idea that lowering the bar to access improves community consultations (it does).

The design experiment method comes from education where classrooms - a relatively controllable environment - serve as laboratories for experiments.

In the same way, the formal process of local democracy in local councils can provide a great laboratory for democracy experiments.  It is a relatively stable and controllable environment and, where you have multiple meetings of the same type, it can support comparisons between interventions and non-interventions.

Design experiments of this type can only tell us what works (or doesn't work) in a particular context and it is important to recognise this limitation. What they can provide, however, is the starting point for experiments on a wider scale in different settings that might point to more general conclusions.

Advantages


I think there are some real advantages to using design experiments to take forward the #notwestminster design challenges.

1.  It is a manageable approach  
Of course councils don’t have extra resources to dedicate to this type of work but, given that the experiments would be small and hopefully fit with local work that might have been done anyway, I think design experiments are a reasonable proposition.

2.  It will test our assumptions
In our design challenges we have a number of assumptions about how people will behave if different aspects of local democracy are redesigned (people would be better informed if we did this…, people would get more involved if we did that…).  Wouldn't it be great to have some evidence to back this up?  Design experiments, if done robustly, could provide evidence to support our assumptions (or force us to think again).

3.  We can make the most of our network
One of the things I love about Notwestminster is the way it brings together councillors, practitioners, academics, techies and citizens.  Design experiments are a great way to bring people together in small teams to make something new.  We can also share experiments in progress across the network, getting valuable input as we go along.

4.  It will give us something to share
By reporting experiments and their outcomes we can contribute to a growing body of knowledge about civic innovations that will be of use to practitioners and researchers alike.  What’s not to like about that?

5.  We can make something worthwhile
Last, but not least, wouldn't it be great to actually make something that makes a difference to a local council and its citizens?  Wouldn't it?

Challenges


In developing the method and approach there are also some challenges to be overcome.  Sarah Cotterill and Liz Richardson [3] have highlighted some of these as they relate to working with local government and several are particularly relevant here:

1.  Measuring the difference
If we are serious about research we need to be clear about what ‘outcome measures’ will tell us what we need to know.  Ideally we will want ‘objective’ measures (e.g. voter turnout) but sometimes we will need to rely on people’s perceptions.  The classic concerns about validity and reliability apply.
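To make the treatment-versus-comparison idea concrete, here is a minimal sketch in Python. The turnout figures are entirely hypothetical and the `mean_difference` helper is my own illustration, not a method from the book; a real analysis would also need a proper significance test before drawing any conclusions.

```python
# Illustrative sketch only: hypothetical attendance figures for meetings
# that received an intervention versus comparable meetings that did not.
from statistics import mean

treatment_turnout = [34, 41, 38, 45]   # meetings with the intervention
comparison_turnout = [30, 29, 35, 31]  # comparable meetings without it

def mean_difference(treated, comparison):
    """Difference in average turnout between the two groups of meetings."""
    return mean(treated) - mean(comparison)

print(mean_difference(treatment_turnout, comparison_turnout))  # → 8.25
```

Even a toy comparison like this makes the point in challenge 1 above: the number only means something if turnout is actually the outcome measure we care about, and if the comparison meetings really are comparable.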

2.  Mixed methods
If we want a rich picture about what is happening around a particular experiment we will need to invest in a range of sources of evidence.  What research techniques should be used? Do we need a 'tool kit'?

3.  Organisational commitment
Cotterill and Richardson point to organisational difficulties as a reason why many experiments in local government fail.  How can we ensure that experiments won’t get neglected when other pressures and priorities come into play?

Design experiments also require a different way of working and participants need to be clear about that at the start.  As John et al suggest:
'Design experiments favour small-scale innovation, in a relatively controlled environment, where the dialogue can take place with a small range of policy-makers and workers, all of whom have signed up to a new way of doing business and to intense researcher-practitioner interactions.' [2] 
But what should this commitment look like and how can we make sure that it sticks?

4.  Good governance
Design experiments in councils would be managed by ‘design teams’ involving councillors, practitioners, researchers, practice advisors and others.  Being clear about roles and how decisions are made will be important – but how should this look exactly? How should the different experiments be linked together and managed as a whole (if at all)?

So, some initial thoughts.  Let’s see what can be done with them.


[1] Stoker, G. and John, P. (2008) 'Design Experiments: Engaging Policy Makers in the Search for Evidence about What Works', Political Studies, Vol. 57 (2).
[2] John, P. et al (2013) Nudge, Nudge, Think, Think: Experimenting with Ways to Change Civic Behaviour, Bloomsbury, London.
[3] Cotterill, S. and Richardson, L. (2010) 'Expanding the Use of Experiments on Civic Behavior: Experiments with Local Government as a Research Partner', The Annals of the American Academy of Political and Social Science, Vol. 628.

4 comments:

Anonymous said...

I like the approach you outline, Dave. As it happens, an invitation to an IFS event on some tests they ran in Lambeth (not too dissimilar to what you propose) found its way into my inbox the other day: http://www.ifs.org.uk/events/1319

Dave Mckenna said...

Cheers Tim, one idea is to plan the notwestminster event as a show and tell along similar lines.

Perry said...

The other place I've found useful ideas about experimentation is in the world of lean and agile. Here are a few of the notions I've picked up:

• Start by deciding what you want to learn: a clear question you want to answer, or, even better, a hypothesis you want to test.
• Then find the quickest, easiest and cheapest way to answer that question or test that hypothesis.
• As well as experiments, you can also learn from analogs and antilogs. (Sorry about the spelling.) Like experiments, these too are based on real behaviour. Analogs are successful predecessors that are worth learning from. Antilogs are also predecessors, but you do things differently from them, probably because they were unsuccessful.
• The motto of lean startups is ‘fail faster’. If your hypothesis is disproved – well done, you’ve learned something. Replace it with another and repeat the process.
• In order to learn as much and as quickly as possible, there needs to be clear focus on the user (or customer etc.). ‘Pull’ is the notion that nothing is added to the service or product until the user expresses a need or demand for it.
• A Minimum Viable Product (MVP) is any version of a service or product that can begin the process of learning, using a Build-Measure-Learn feedback loop.

Dave Mckenna said...

Thanks - this is very useful.
I think John et al make the point that this approach comes from engineering rather than 'scientific research', so there's a massive overlap with agile, as you suggest.
We will need to have a conversation about method so these points will be valuable to feed in.

I particularly like the point about analogs - one I hadn't thought about before - sounds like an experiments equivalent of the literature review.
