Blog | May 4, 2016 | Rosemary McGee

Last week’s TICTeC Conference in Barcelona, organised by mySociety, was a reminder of the promise and perils of youth. 

As a young and quickly evolving field, ‘civic tech’ (the field’s own term, not mine) is bursting with ideas, innovation, freshness and enthusiasm. That’s great, especially when these are infused with the will to address injustices and inequalities that favour some human beings to the detriment of others.

But, as some of its members have already noted (here, here and here, to name a few), there is a process of maturation that the civic tech field needs to go through.

One of the key ways in which ‘civic tech’ has some growing up to do is in relation to research and evidence, and their connections to practice.

From my position as an applied researcher based at a UK academic institution, coordinating the Research, Evidence and Learning component of the Making All Voices Count programme, here are three pieces of recent research, three arguments, and three takeaways, to fuel the movement’s efforts to deal with its growing pains without losing its unique freshness and enthusiasm.


The recent IDS Bulletin Opening Governance included articles on three research studies, each of which surveyed and critically analysed a range of ICT-based initiatives for promoting citizen voice to improve service delivery.

Peixoto and Fox’s article in the Bulletin reviewed 23 citizen voice platforms. They found that the 12 initiatives which achieved only ‘low government responsiveness’ were the ones that relied on the implicit market-based assumption that individual demand for good-quality public services produces its own supply. The 11 which achieved medium or high government responsiveness were cases that were pushing on an open door – in these, existing political will to be responsive to citizens and users, or the service deliverer’s acknowledgement of its accountability to users for a certain quality of service, was factored into the service design.

Welle et al. analysed 8 tech-based ‘solutions’ to rural water supply sustainability problems. Their analysis considered three variables: successful ICT reporting, successful ICT report processing, and successful service improvements through water scheme repairs. Only 3 of the 8 succeeded on all three counts and so were rated as successful overall.

The study also highlighted the limits of crowdsourcing. While possibly useful for bringing in (dys)functionality reports, it is often premised on unrealistic notions of how big or how interested ‘the crowd’ is – or whether it exists at all – and it may bear no relation to any accountability effects the overall initiative has.

Wilson and Lanerolle explored how people choose the tech tools they use in transparency and accountability (T&A) initiatives (also see the Engine Room’s Tool Selection Assistant which emerged from this research).

Of the 38 tech-for-T&A initiatives they researched, in fewer than a quarter did respondents consider that their choice of tool had been a good one that lent effectiveness to the initiative.

Some basic but essential conclusions cry out to us here:

  • Not many of the ICT platforms purporting to ‘close feedback loops’ actually do so.
  • Failure is often down to fatal flaws in their theories of change – a point made as long ago as 2011 in a thorough-going review of the Impact and Effectiveness of Transparency and Accountability Initiatives, commissioned by T/AI, as well as in a parallel review by Global Integrity looking specifically at what were then the ‘new technologies’ for transparency and accountability.

There are frequent complaints that we do not yet have enough evidence in this field to demonstrate impact or inform improvements in practice. Undoubtedly these complaints have a point. But it is also true that there is evidence that is relevant, openly available – and not getting used. Why not?

Here are three reasons to think about.

Perhaps it’s the researchers’ fault? Are unintelligible mad-professor types in ivory towers, surrounded by dusty tomes, failing to do the research that’s needed? Not true.

For just a few debunking examples, see the Open Governance Research Repository, T/AI’s publications site, the Citizenship DRC archives, the Governance and Social Development Resource Centre (GSDRC) and our very own Making All Voices Count. What’s more, loads of this research is intended to inform practice rather than to end up in dusty tomes in ivory towers, and loads of it is open-access. We don’t all hold that ‘evidence’ only means things published in peer-reviewed journals high on the citation indices: some of us even hold that the best evidence comes from careful, critical empirical observation of practice. And we aren’t all unintelligible mad-professor types: some of us even tweet and blog! So let’s not place all the onus on researchers to close the evidence-practice gap.

Perhaps it’s down to conflicting world views? Consider contrasting approaches to constructing evidence: while a basic principle of marketing a tech innovation is to set out to prove your concept right, a basic principle of social science research is to set out to prove your hypothesis wrong. Or consider contrasting approaches to success and failure.

Tech innovators set out to fail, note why they failed, and repeat the cycle with modifications until they stop failing; social science researchers do good socio-political appraisal and context analysis in advance to minimise an initiative’s chances of failure; and aid donors are supposed to do all they can to maximise the value for money and impact of every aid pound spent. If the problem is conflicting world views, the implication is that we all need to get to know each other better, surface these ‘inter-cultural’ differences in good time, and work together to make a virtue of our different strengths, rather than assuming our approaches are well aligned and being nonplussed when they turn out not to be.

Or perhaps there are simply not enough incentives for practitioners and innovators to search for, take up and apply relevant evidence to guide their work? Framing ‘innovation’ as always about the brand-new, rather than about adapting, adjusting or re-purposing the tried and tested, tends to downplay the relevance of past experience and to reward what is shiny and new over what can plausibly be expected to work.

Moreover, many of those crowding into this sector of ‘civic tech’ come not from academic research backgrounds but from the business and innovation sectors. They might be expected to do good market research and market testing (their equivalent of drawing on the available evidence) if they were using their own money as venture capital. But why would they bother when there’s free aid money available to fail with? If the problem is a lack of conducive incentives for good evidence-based practice by tech innovators, then awareness-raising is needed about the distortions of the dominant aid-funded innovation paradigm – including by marshalling the evidence of its limited effectiveness, low impact and poor value for money.

And so we come to the three takeaways:

  1. If the idea of evidence-based policy and practice is rather a myth at the best of times, it is particularly mythical in this emerging field of technologies for citizen engagement and government accountability.
  2. Where is each and every one of us – researchers, innovators, developers, civil servants in official aid donor agencies, philanthropic foundation staff, development NGO workers in the global south, civic techies in the global north – in all this? What should each of us be doing differently?
  3. Scholarly disciplines and fields of practice that have been around for some time tend to come of age by developing ethics, to regulate and guide their practice, assure their quality and ensure their accountability. A future project for everyone at the TICTeC conference – and all those who are part of this field – is the co-construction of ethics for civic technology and digital democracy, with the development of a closer relationship between evidence and action at the heart of it.

About the author

Rosemary McGee is Research, Evidence and Learning Coordinator for Making All Voices Count, based at the Institute of Development Studies.