
Are you doing what’s needed to get the state to respond to its citizens? Or are you part of the problem?

Blog | October 23, 2017 | Vanessa Herringshaw

Making All Voices Count has been a multi-million pound, multi-country, multi-donor, multi-stranded innovation, scaling and research programme, all geared towards improving government responsiveness to citizens using ICTs.

It has invested heavily in pulling together all that has been learnt, and as it heads into its closing stages, now is the time for everyone interested in such matters to reflect on what it all means.

I’ve been reading over the incredibly rich and practically-orientated research and practice papers already on the Making All Voices Count website, and some of those coming out soon. There’s a huge amount of useful and challenging learning, and I’ll be putting out a couple of papers summarising some important threads later this year.

But as different civic tech and accountability actors prepare to come together in Brighton for Making All Voices Count’s final learning event later this week, I’m going to focus here on three things that really stood out and would benefit from the kind of ‘group grappling’ that such a gathering can best support. And I aim to be provocative!

  1. Improving state responsiveness to citizens is a complex business - even more than we realised - and a lot more complex than most interventions are designed to address. If we don’t address this complexity, our interventions won’t work. And that risks making things worse, not better.
  2. It’s clear we need to make more of a shift from narrow, ‘tactical’ approaches to more complex, systems-based ‘strategic’ approaches. Thinking is developing on how to do this in theory. But it’s not clear that our current institutional approaches will support, or even allow, a major shift in practice.
  3. So when we each look at our individual roles and those of our organisations, are we part of the solution, or part of the problem?

Let’s look at each of these in turn.

1. Facing up to the complexity of improving accountability

Most practitioners who have worked on ‘accountability’ for more than a few years know that it is an inherently complex business. If anyone ever thought that greater ‘transparency’ plus ‘participation’ would readily result in ‘accountability’ in any linear way, those days are long gone. And with them, should go the idea that the use of ICTs can positively transform the entire state-citizen relationship without the messier businesses of power, politics and human relationships getting in the way.

The learning from Making All Voices Count is showing that ‘accountability’ is even more complex than we thought, on at least two levels – let’s call them level A and B.

Level A learning on the complexity of accountability

We need to ‘home in’ on and unpack what seem like basic assumptions about specific components of ‘citizen voice’ and ‘government responsiveness’, and how they do or do not relate causally in specific contexts and over time.

The point that ‘citizens’ are not a homogeneous group whose behaviour can be predicted externally is now established, and the focus has shifted to developing methods to really understand their diverse perspectives and needs relating to voice and accountability, and having that drive intervention design and adaptation.

Strangely, there has been less focus to date on a similar unpacking of the complexities of the ‘state’, its varied members, and the diverse enablers of and barriers to their responsiveness to citizens. Making All Voices Count has been supporting new work to begin to do this: surveying and interviewing individual government decision-makers, and starting to frame the factors – both structural and individual – that appear to make them more or less likely to be responsive (see Forthcoming Research for studies by Lieberman, Martin and McMurry, and Joshi and McCluskey).

With regard to the role of ICTs, Making All Voices Count has itself been ‘unpacking’ the different ‘affordances’ of various technologies (i.e. their particular qualities that facilitate certain actions) and the extent to which these can be used to improve ‘seven streams’ of tech-enabled change in pursuit of accountable governance – information, feedback, naming-and-shaming, innovation, connection between citizens, infomediation and intermediation.

Finally, there has been important focussed work to begin to explore assumptions about the links between ‘citizen voice’ and ‘government responsiveness’, and whether the former actually leads to the latter. These have mostly ‘homed in’ to unpack whether ICT-enabled citizen feedback platforms actually generate practical service delivery improvements from government. Three important comparative studies show some very challenging findings indeed – in general, where government willingness already exists, the platforms and citizen inputs do stimulate improvements; but where that willingness does not pre-exist, they do not. Citizen voice, when collected and channelled through technology alone, has not generated government willingness or responsiveness (see Welle et al 2016, Peixoto and Fox 2016, Hrynick and Waldman 2017). So what else is needed?

Here it is instructive to return to Jonathan Fox’s 2014 inductive review of impact evaluations of social accountability initiatives. This concluded that the initiatives could be divided into “two quite different sets of initiatives: tactical and strategic”. The low-impact ‘tactical’ approaches were those framed in narrow, linear ways: tool-led; limited to citizen voice only; focussed only on information provision; and limited in scope to local arenas.

In contrast, the higher-impact ‘strategic’ interventions were framed in much broader, more complex ways: based on multiple, coordinated tactics; focussed on creating enabling environments for collective citizen action (and thus reducing perceived risk); coordinating work on both citizen voice and government responsiveness together; working at multiple levels (linking very local, sub-national and national actors); and framed as long campaigns rather than finite interventions (i.e. iterative, contested, uneven and on-going).

This takes us on to Level B learning about dealing with complexity for accountability

We need to ‘zoom out’, not only to see the big picture, but to work with that system as a whole.

Facing up to mounting empirical evidence that “so many efforts, in such diverse contexts, fall short of achieving tangible accountability gains”, in 2016 Fox challenges those pursuing “incremental, localised or small-scale change” to explain how these will plausibly connect to generate more systemic transformation.

Instead, he calls for a ‘conceptual reboot,’ with a core message: “Systemic problems call for systemic responses”.

“The point of departure,” he says, “is the challenge of breaking out of self-reinforcing low accountability traps in which pro-accountability forces in both state and society are weak.” He proposes “Vertical Integration”, i.e. “the coordinated, independent oversight of public sector actors at local, sub-national, national and transnational levels … monitoring each stage and level of public sector decision-making, and non-decision-making and performance”.

His thinking focuses on “large-scale, nationwide, cumulative power shifts”. But it also links to temporal dynamics. Many of the positive examples of improved government responsiveness to citizens to date have occurred during particular political windows of opportunity. But these windows are rare and hard to predict, and even then they often do not last long enough for engagement to translate into government response through all the policy-making and implementation needed to bring impact (Loureiro et al 2016; McGee and Edwards 2016). Relying on these alone will not suffice.

This draws on the ‘systems’ and ‘ecosystem’ thinking of others (Halloran, 2016) which suggests that to address deeply ingrained forces supporting inertia, corruption and/or impunity, the change process may need to be more of a ‘big bang’ of mutually reinforcing policy and system changes than a gradual, incremental process (Johnstone 2014; Marquette and Peiffer 2015).

So any ‘homed-in’ work to unpack and address specific complexities also needs to be nested within a ‘zoomed-out’ understanding and approach that addresses the system as a whole. This is, of course, highly challenging. But the alternative is most likely failure.

The negative impacts of such failures do not stop at the waste of the time and money that practitioners, researchers and funders currently put into them, nor even at the consequences for citizens of no improvement in government responsiveness. Citizens, especially those in marginal and subsistence situations, may suffer material losses if they waste their very limited time and resources on interventions that do not deliver for them. And the final danger is that such failed efforts actually make things worse: by raising levels of citizen and government cynicism and mistrust, they make it harder to engage either group in future, and/or act as a smokescreen for inaction.

2. Do our institutional structures allow us to address the complexity of accountability?

Conceptually and empirically, such findings reflect the limitations of many small, piecemeal approaches currently aiming to increase citizen voice and government responsiveness. They point to the need to shift to more complex, realistic and system-wide approaches. This would require implementation, research and funding approaches that are large-scale, mixed and multi-level, long-term, coordinated, flexible and responsive.

But is it at all realistic to think that such an overall approach is feasible? And for whom? What would this mean for practitioners, researchers and funders?

Practitioners – those trying to do the on-the-ground work of bringing about improved government responsiveness – are themselves a complex group, including reformers in governments, activists outside them, and intermediaries trying to link the two. The ability of each to work in more system-wide ways partly depends on their mission and ethos and their institutional room for manoeuvre (e.g. scale and source of funding, size and skill-set of staff, mandate, laws affecting what activities they can undertake, etc.). Shifts here may be partly a question of challenging institutional inertia, partly of addressing broader dis/incentives controlled by others (such as funding models that force them to compete), and partly of investing in the kinds of processes and long-term timeframes for partnership and coalition building needed for success.

For researchers, it may be instructive to interrogate whether current institutional norms and structures incentivise research and evaluation approaches that match large-scale, mixed and multi-level, long-term, coordinated, flexible and responsive interventions. For example, would such kinds of research carry kudos? Would they affect publication rates? Would they demand different skill sets (e.g. more focus on real-time, on-going ‘action research’ compared to periodic or retrospective evaluation, and more focus on mixed methods)? Is there some level of mismatch between the kinds of ‘clean’, narrowly focussed, quantitative, case-control evaluations favoured by some funders, and the messier realities of system-wide, adaptive programming approaches?

Different funders face varying kinds of accountability pressures of their own. But all potentially have the ability themselves to at least think in system-wide ways. Then, depending on their own political context and mandates, they can aim to fund the necessary clusters of reinforcing interventions likely to be needed in any single context, or to coordinate and combine their relative strengths with other funders, especially combining foundations with different bi- and multi-lateral funders, so that different funders support vital parts of systems change where others cannot.

3. So when we each look at our individual roles and those of our organisations, are we part of the solution, or part of the problem?

Making the shift to working in more complex, realistic, systemic ways may require many of us to re-assess and tackle, not just our individual roles and ways of working, but also the ways that we work together and the ways that we interpret and respond to the structures in which we operate.

The serious mismatches between what is needed and what is being done are increasingly being articulated and demonstrated. They may be well-rehearsed, but they are not going away. Rather, they are being thrown into ever sharper relief as experience and evidence mount, and as ‘open government’ on paper increasingly co-exists with closure and restriction in reality.

So, practitioners, is it time for more of your own ‘citizen voice’ on whether this is really working?

And if you are a funder or a researcher, is it time to ask yourself – do your funding and research frameworks continue to act like this complexity does not exist, forcing practitioners into overly simplistic and short-term action, potentially wasting funds and fuelling cynicism as interventions fail? Or do you support complex, multi-level, long-term, responsive programming … really?

I said I’d try to be provocative! And I look forward to some great energetic but practical discussions in Brighton this week and at the GPSA partners forum in Washington next week….


About the author

Vanessa Herringshaw is a former Director of the Transparency and Accountability Initiative, Director of Advocacy for the Natural Resources Governance Initiative, and Head of Economic Policy for Save the Children UK. Now freelance, she has been commissioned by Making All Voices Count to review and communicate findings, especially those relating to government responsiveness. She can be contacted at ness.herringshaw@gmail.com.