Blog | December 12, 2017 | Jed Miller

When you’re an NGO technologist, you discover that every exciting technology plan comes with ‘small print’: caveats and contingencies that stand between your best-case scenario and reality. A promising civic tech tool runs up against scepticism that hobbles its impact, for example, or a new trove of government data is released only to attract a trickle of usage.

But learning why tech succeeds or falls short in the governance context requires time and resources that NGOs don’t usually have. It takes research and reflection to understand that small print and delve into whether digital tools can foster better governance and greater participation—and, if so, under what conditions.

What place for learning in T4T&A initiatives?

A focus on learning has been central to the work of Making All Voices Count. Over four and a half years – and through 178 grants for innovation, for scaling tech-enabled approaches to accountable governance, and for research – the programme has explored the role of technology in this field, and worked to build an evidence base for what works and what doesn’t.

For a practitioner who has seen too many NGO tech efforts fail to learn from prior work, Making All Voices Count’s research on initiatives like these offers the chance to review lessons from an impressive range of projects.

Several reports single out adaptiveness or adaptive learning as an explicit subject – see in particular the research report by Pedro Prieto Martin, Becky Faith, Kevin Hernandez and Ben Ramalingam and the newly released findings by Global Integrity on citizen engagement around the Open Government Partnership in five countries. Several others also offer noteworthy reflections on the need for adaptability in tech for transparency programmes, and on the conditions conducive to turning lessons into adaptations.

In many cases, the practice papers created by the Making All Voices Count research, evidence and learning team and some of its implementing partners provide the most relevant findings on adaptive learning. These briefings not only summarise the longer research reports, but also include dialogues in which researchers and practitioners reflect on project challenges. Indeed, they point to critical reflection as a crucial element of adaptive learning in a field where emergent demands and funding deadlines make it too rare a commodity.

Prieto Martin and colleagues write that to be adaptive means “to be flexible, reflective and really able to learn ... to recognise, as quickly as possible, whether or not your strategies are working, and use this awareness to continuously adjust or replace them with better ones.”

Combing through the library of findings, you find a thread of insights on the place of learning and adaptation in technology and governance projects – in the pivotal role of midstream user research for Yowzit in South Africa, for example, or in the wish for donors with a “long-term mind-set” in Accountability Lab’s reflections from their Liberia work on innovative strategies for youth engagement in governance.

In government, academia and philanthropy, institutions move slowly and learn even more slowly, but the tech community has popularised the idea of agile software development, a speedy cycle of creation, testing, feedback and adjustment that offers a way to get from Version 1 to Version 2.0 sooner than you could with an academic paper or a seven-figure foundation grant.

A review of Making All Voices Count practice papers containing insights on observable learning and adaptation suggests lessons about adaptive learning that are comparable to lessons in the effective use of tech: beyond the need for strong fundamentals in your project plan, it is the presence or absence of certain conducive conditions that determines whether learning and adaptation are possible within the bounds of your project.

While we may not get to a single, agreed framework for how to design projects for adaptive learning, the encouraging examples from Making All Voices Count projects – and, indeed, the shortfalls documented where learning was more elusive and less adaptive – suggest some common conducive conditions for adaptive learning to occur.

Common conditions for adaptive learning?

One common theme is that learning in the midst of experimentation, multistakeholder collaboration and the political environment of governance is a challenge.

Equally compelling are the stories that show how learning and adaptation can play out within the course of a project — for instance through the addition of unplanned research to help redirect an initiative, or in the form of significant adjustments to project strategy or tactics.

In one case from South Africa, the Yelp-like platform Yowzit had a small innovation grant from Making All Voices Count “to test the hypothesis that the same mechanism it uses to rate businesses could be an effective accountability mechanism for the public sector.” The challenges encountered during the initial period of the grant were characteristic of the citizen participation and civic tech field. Results were slow to come due to general citizen distrust in government, to gaps between government’s willingness to experiment and its capacity to adapt, and to the typical challenges of building a user base for an innovative use of tech.

However, after acknowledging that their “initial progress on gaining wide usership … will likely be modest,” Yowzit and Making All Voices Count inserted a new research phase into the project, investing resources to better understand the dynamics of civic tech, with a particular focus on the expectations of potential users, not only for a public service ratings platform, but for government responsiveness itself. Drawing on the combined lessons “from the innovation process and this applied research,” Yowzit began its next stage of work with plans including stronger partnerships with the government agencies positioned to follow through on citizen comments, and with the civil society groups who already sit as intermediaries between officials and citizens.

In Indonesia, FAMM is a grassroots organisation with international ties that seeks to increase women’s participation in government decisions and civic life. With a grant from Making All Voices Count, FAMM sought to better understand the dynamics that exclude women from local decision-making in rural areas – where the disparities of power, education and opportunity between women and men are especially high.

FAMM’s research and interviews highlighted the severe limitations on women’s participation in local governance, including the hidden dynamic of backlash against women entering majority-male spaces, and the constraints of comfort and self-image that inhibit women’s decisions to participate. In consideration of these political and cultural dynamics, FAMM’s assumptions for future work shifted: “[O]ur position is now different,” said FAMM’s Niken Lestari. “Our strategy is to strengthen informal relations and build informal spaces where decisions are actually taken.”

While these cycles of complicated outcomes, renewed enquiry and pivoted approaches appear intuitive, even inevitable, when retold as narrative, the learning loop they describe is not an easy loop to close for many – if not most – projects in the tech for transparency and accountability sector that has been the focus for Making All Voices Count.

Despite the valiant efforts of some practitioners, a few organisation leaders and many idealistic consultants, our sector still discusses ‘lessons learned’ mostly in the past tense that the phrase employs. Several Making All Voices Count papers discuss the development sector’s longstanding awareness that its appetite for agile approaches has not translated into an overhaul of business as usual.

For Yowzit, accumulated experience with civil servants and a readiness to invest in interstitial user research enabled the project to deepen and expand beyond its earlier weaknesses. For FAMM, researchers used evidence and deep experience of cultural dynamics to broaden their focus from empowering women in the least conducive spaces to building the spaces and activities in which women could more readily build movements.

In South Africa, the Foundation for Professional Development (FPD) demonstrated both the power of candid dialogue with donors – in this case, Making All Voices Count – to achieve adaptiveness and the potential of adaptive innovation projects informed by practitioner research. Funded to create a tool “to help track survivors of sexual assault through health, justice and psycho-social support services,” FPD realised early on that they lacked the money or the time to build the full system envisioned. They approached their Making All Voices Count project manager and agreed to revise the project’s scope. “We focused on building the client experience app, and we shared the findings of our scoping exercise on a case management system with [a CSO and an agency], who took over development,” one project lead explained.

Developing this app to report on rape crisis centres, FPD had the advantage of two funding streams, for client research and for app development respectively. On the one hand, many ICT4D projects do not get separate research support to inform an innovation grant. Robust support for user-centred design is still more common in Silicon Valley than in the development sector. As FPD reported, “We went about developing the app in a very participatory way. We had formal and informal meetings with key stakeholders throughout the research and innovation process.” On the other hand, the funding timeline reportedly limited the strength of the final product. “The sequencing of the practitioner research and the innovation was a big challenge,” the grantee reported. “[W]e’re sorry that we couldn’t take the app through three or four iterations.”

Time, embeddedness and autonomy

Making All Voices Count researchers have surveyed a number of frameworks and definitions for adaptive learning – see in particular Jonathan Fox and Prieto Martin and colleagues. Naturally and appropriately, these do not all agree and none is heralded as definitive. At the same time, based on these and the indications from several recent Making All Voices Count publications, three common themes can be observed about the conditions that can make learning and adaptation more possible:

  1. Time: flexibility vs. limitation

Is there sufficient time to confront new information or challenges, and then not only to learn from that information, but also to adapt during the course of the current project?

If time is flexible, practitioners can add new activities to their implementation, such as the “informal conversations” used by Accountability Lab in Liberia when response to traditional survey instruments was low, or the research on civil servants and citizen expectations undertaken by Yowzit in South Africa when their initial work on a “civic Yelp” app was slow to take hold.

It’s also worth noting the massive influence of grant-making time cycles – and donor demands – on cycles of learning in foundation-funded projects. Suzanne Johnson, a lead in the FPD grant in South Africa, said, “In Making All Voices Count, there’s a focus on learning and reflection. … Many donors don’t foster the space to reflect on challenges with implementation.”

The conflicts between time, expectations, reflection, learning and adaptation emerge again and again across the Making All Voices Count portfolio. Too often in our sector, learning is told, “Maybe next time.”

 

  2. Embeddedness: deeper history and stronger context

Can practitioners draw from a depth of experience with local stakeholders, institutions, political and cultural dynamics, and technology habits?

When Indonesian grantee FAMM saw indications that the political and sociological barriers to integrating young women into traditionally male decision-making forums were too great, they were able to adjust their strategy for fostering empowerment by drawing on their experience with power dynamics, gender dynamics and official and informal spaces in Indonesia. This same contextual understanding likely also made it easier to observe and interpret the early indications that their approach required adjustment. Embeddedness not only made it easier for FAMM to be adaptive, it made them readier to learn, especially in the face of resistance.

Similar positive dynamics can be seen in the FPD and Accountability Lab projects. It was FPD’s prior familiarity with the South African healthcare system that enabled them to partner with service providers more easily and, more importantly, to preserve agility and trust as they shifted their technology plans. In Liberia, only a partner with strong ties to networks beyond governance and civil society groups could develop a civic engagement strategy based not simply on fixing the accountability ecosystem, but on expanding it.

Pedro Prieto Martin identifies embeddedness as one of four core adaptive practices, characterising it as “building with, not for”:

Continued engagement with the problem-owners (customer, partners, users and community) and with the general context of work provides the evidence-based feedback loops required to improve. To maximise embeddedness and minimise distance between makers and users, aim for work done locally, by locals.

  3. Autonomy: the power, and knowledge, to make new choices

A third theme emerges in several Making All Voices Count projects: that of the independence and authority of the grantee. In other words, groups with deeper subject matter fluency, and the financial or operational freedom to shift tactics when faced with project challenges, are better positioned to act on new lessons, even if it means deviating from the precise terms of a grant or strategy.

The work of FPD and of another South African practitioner research and learning grantee – the Open Democracy Advice Centre (ODAC) – offers examples of how stronger standing with local leaders and international donors provided room to learn and pivot during the course of a project, not simply after the fact. As ODAC’s lead researcher Gabriella Razzano explains in her practice paper, ODAC came to their project with an established focus on “engaged and effective advocacy with supportive champions” in South Africa’s Open Government Partnership work. “We are trying to co-create … assist with the implementation of commitments where appropriate, and play our watchdog role in ensuring that the government is fulfilling its commitments to greater transparency and accountability as part of the OGP,” she said. “Partnership is what the OGP is all about, which means civil society must be firmly embedded in the process.”

As practitioners trying to build on each other’s work and not reinvent the proverbial wheel, we can do more to build toward these conditions from the earliest stages of project design – and, as importantly, grant design. Organisations may not always have the time or resources to create better conditions for learning, but those limitations may turn out to be habits – of NGOs or donors – not permanent barriers.

To borrow another Silicon Valley analogy, it may be useful to think of these conducive conditions as a ‘runway’ that allows a project to take off, just as venture capital can for a tech startup. Without these resources, learning and the adaptation it invites are often unrealised, but when we plan with these conditions in mind, we increase the chances that we can learn as we go.


About the author

Jed Miller is a writer and digital strategist focused on citizen engagement, transparency and organisational adaptation. He has consulted with groups including the Open Government Partnership, the World Bank, and the Open Society Foundations, and previously served as internet director for the Revenue Watch Institute (NRGI) and the American Civil Liberties Union.