Abstract:

Qualitative research has much to offer to the practical work of humanitarian and development organizations. Growing recognition of the potential for qualitative research to enhance programme impact is putting pressure on development practitioners to adopt a ‘research approach’ in their monitoring, evaluation, accountability, and learning work. This introductory chapter starts off by outlining some of the ways in which qualitative research can be used to improve the impact, quality, and accountability of development projects and programmes. It will then introduce some basic principles of qualitative research and illustrate some of the ways in which qualitative research can be incorporated into various stages of the programme cycle.

After reading this chapter, you will be able to:

  • outline the ways in which qualitative research can improve development programmes and their impact;

  • describe the link between qualitative research and accountability;

  • explain what qualitative research is, including its strengths and weaknesses; and

  • identify ways of integrating qualitative research into a programme cycle.

Key terms

  • Accountability: The means by which people and organizations are held responsible for their actions by having to account for them to other people.

  • Evidence: The available body of facts or information indicating whether a belief or proposition is true or valid.

  • Findings: Summaries, impressions, or conclusions reached after an examination or investigation of data.

  • Formative evaluation: An early examination of an active programme with the aim of identifying areas for improvement in its design and performance.

  • Generalizability: The ability to make statements and draw conclusions that can have a general application.

  • Programme cycle: The process and sequence in which a programme develops from start to finish.

  • Qualitative research: A method of inquiry that takes as a starting point the belief that there are benefits to exploring, unpacking, and describing social meanings and perceptions of an issue or a programme.

  • Research: To study something systematically, gathering and reporting on detailed and accurate information.

With an ever-growing emphasis on evidence-informed programming, there is a push for development practitioners to strengthen the quality of their monitoring, evaluation, accountability, and learning (MEAL) activities. For many development practitioners, evidence continues to be associated with quantitative evaluations of development initiatives. In fact, until recently, many people working in MEAL have been suspicious of qualitative methods and have had little incentive to develop a qualitative evidence capacity (Bamberger et al., 2010). While quantitative evidence is crucial for decision making and rightfully continues to play a key role in the development of evidence, there is growing recognition of the need for qualitative evidence.

This recognition is born out of the fact that development programmes have often been designed and implemented without sufficient qualitative evidence to understand the needs, wishes, and context of the target population. Too often, local perspectives have been neglected in the design, implementation, and evaluation of programmes, despite local voices containing crucial information that can help development practitioners understand pathways to programme success and failure (Chambers, 1983, 1997).

Qualitative research can systematize and formalize the process of generating qualitative evidence. Qualitative research can be used to understand the context of a programme better; it can provide us with insights into new issues and help us understand the complexity of connections and relationships between people, programmes, and organizations. It can provide beneficiaries with an opportunity to share their perspectives of an issue or a programme, which in turn can help us understand the nuances of how different people experience a programme. Importantly, qualitative research can be used to ensure that development programmes resonate with local realities and expectations.

However, given the dominance of quantitative MEAL efforts, many development practitioners lack the skills and confidence to authoritatively produce qualitative evidence. In particular, there continues to be confusion and lack of clarity within development organizations about what qualitative evidence looks like and how best to conduct rigorous qualitative studies.

Although we welcome a drive for more rigorous qualitative research, we also recognize that in a ‘development organization’ context there is a tension between what is rigorous, what is feasible, and what is considered useful.

We accept that some development practitioners are likely to face significant constraints in adapting some of the practices we describe in this book. We are therefore not looking to turn you into an ‘academic researcher’; rather, we aim to introduce you to the ‘rules of the game’ for conducting rigorous qualitative research at all stages of a development project cycle. We want to encourage and equip you with the knowledge and skills required to adopt a ‘research approach’ (see Box 1.1) in your MEAL and development activities (Laws et al., 2013).

We believe that it is important for development practitioners to engage with qualitative research and adopt a ‘research approach’ in the generation of qualitative evidence for four main reasons.

  • Development practitioners are at the frontline, responding to humanitarian and development needs, which makes them particularly well suited to identifying issues on which research is required and to taking an active role in facilitating research.

  • Development practitioners can accelerate the use of research findings and translate them into programming and advocacy.

  • With the turn to evidence-informed policy and practice, interventions need to be based on systematic qualitative research from the ground as well as on evidence from evaluations in other locations. Assessing value for money and taking programmes to scale cannot be based on anecdotes and impressions.

  • Systematic qualitative research helps development practitioners improve the quality, accountability, and impact of their programmes.

Box 1.1 Key aspects of a ‘research approach’

These include:

  • being curious and having an interest in learning about the causes of things;

  • being willing to learn from data, and change your mind about prior beliefs;

  • having a concern to really understand what people say and the meanings behind their statements;

  • having an awareness of how you, the researcher, may shape what is being said and the direction of the research;

  • striving for analytical sophistication, identifying patterns that may not be immediately obvious;

  • being systematic and keeping records of all the data;

  • being interested in discussing findings in a broader context, for example in relation to previous experiences or the experiences of others.

Source: Laws et al. (2013: 14).

This book is designed to guide development practitioners through the process of planning, conducting, and reporting on qualitative research, while simultaneously showing how qualitative methods can support the work of development practitioners. In other words, we focus on the particular uses of qualitative research in the programme cycle and highlight the role of qualitative evidence in improving the impact, quality, and accountability of development programmes.

Our practical aim is to demystify the qualitative research process and provide development practitioners with the procedural clarity, skills, and confidence to use qualitative methods authoritatively and advocate for the need to embed qualitative research in the programme cycle, either on its own or together with quantitative studies.

What is qualitative research? And how is it different from quantitative research?

Research involves collecting information, also referred to as data, in a systematic way in order to answer a question. However, your research question, and the methods you use to generate data that can answer that question, are likely to reflect one of two research approaches, or a mix.

One such approach is quantitative research. Quantitative research typically explores questions that examine the relationship between different events, or occurrences. In an evaluation context, this might include looking at how change can be linked or attributed to a particular intervention. Such a question might be: ‘What impact did child-friendly spaces have on refugee children’s psycho-social well-being?’ To test the causal link between the intervention (‘child-friendly spaces’) and children’s ‘psycho-social well-being’, researchers have to try to maintain a level of control over the different factors, also called variables, that may influence the relationship between the events. They will also need to recruit research participants randomly. Quantitative data is often gathered through surveys and questionnaires that are carefully developed, structured, and administered to provide you with numerical data that can be explored statistically and yield a result that can be generalized to some larger population (Bauer et al., 2000).

Another approach, and the focus of this book, is qualitative research. Qualitative research seeks to explore personal and social experiences, meanings, and practices as well as the role of context in shaping these. Qualitative research thus takes as a starting point the belief that there are benefits to exploring, unpacking, and describing social meanings and perceptions of a phenomenon, or a programme (Flick, 2002). Not only can qualitative research give voice to people who are ordinarily silent or whose perceptions are rarely considered, it can also help explain ‘how’, ‘why’, and ‘under what circumstances’ a particular phenomenon, or programme, operates as it does.

As such, you can use qualitative research to obtain information about:

  • local knowledge and understanding of a given issue or programme;

  • people’s perceptions and experiences of an issue, their needs, or a programme;

  • how people act and engage with a programme, each other, and organizations;

  • local responses and the acceptability and feasibility of a programme;

  • meanings people attach to certain experiences, relationships, or life events;

  • social processes and contextual factors (for example, social norms, values, behaviours, and cultural practices) that marginalize a group of people or have an impact on a programme;

  • local agency and responses in mitigating poverty and the marginalization of vulnerable populations.

As these examples of research areas suggest, you can use qualitative research to gain a better understanding of either an issue or a particular programme or intervention. Issue-focused research can help you develop a better understanding of an issue, or phenomenon, and how it affects a group of people. This may, for example, include the health risks facing children in a particular location, or the barriers that expectant mothers face in accessing maternal healthcare. Qualitative research is particularly good at investigating sensitive topics, whether it be sexual abuse or intimate partner violence. It could also include examining the care or living arrangements of hard-to-reach groups, such as children living or working on the street. Issue-focused research can provide information that better prepares you to advocate for a cause or develop and plan a programme that addresses some of the problems that the research identifies.

Intervention studies and programme-focused research look at stakeholders’ interaction with a programme. This might include looking at some of the different ways in which a programme has an impact, community-level acceptability of a programme, or the factors enabling or hindering programme success. Programme-focused research could also involve examining how beneficiaries experience a programme. For example, a research question might read: ‘What are children’s experiences of spending time in child-friendly spaces?’ To explore children’s views of ‘child-friendly spaces’, researchers can use creative, flexible, semi- or unstructured methods that enable and capture children’s views. Such methods may include individual or group interviews (see Chapter 3), participant observations (see Chapter 4), participatory methods (see Chapter 5), or Photovoice (see Chapter 6). The information generated through these methods can be used to map out and contextualize children’s social experiences or to identify a range of minority, majority, or contradictory experiences or perceptions of child-friendly spaces.

Table 1.1 summarizes key differences between qualitative and quantitative research. Although the two approaches ask different questions and have different strengths, presenting them as distinct and opposite is not overly helpful. In practice, they are often combined or draw on elements from each other (Bauer et al., 2000). For example, quantitative surveys often include open-ended questions. Similarly, qualitative responses can be quantified.

Table 1.1 Summary of the key differences between qualitative and quantitative research
Examples of research questions
  • Qualitative: How do cash transfers support the education of children? Quantitative: What impact did cash transfers have on children’s school performance?
  • Qualitative: In what ways can a literacy boost programme affect children’s education? Quantitative: Does a literacy boost programme improve children’s reading skills?
  • Qualitative: What social factors influence women’s access to healthcare? Quantitative: Is socio-economic status correlated to women’s health?

Type of knowledge
  • Qualitative: Subjective. Quantitative: Objective.

Aim
  • Qualitative: Exploratory and observational. Quantitative: Generalizable and hypothesis-testing.

Characteristics
  • Qualitative: Flexible; contextual portrayal; dynamic, continuous view of change. Quantitative: Fixed and controlled; independent and dependent variables; pre- and post-programme measurement of change.

Sampling
  • Qualitative: Purposeful. Quantitative: Random.

Data collection
  • Qualitative: Semi-structured or unstructured. Quantitative: Structured.

Nature of data
  • Qualitative: Narratives, quotations, descriptions; values uniqueness and particularity. Quantitative: Numbers, statistics; values replication.

Analysis
  • Qualitative: Thematic and interpretative. Quantitative: Statistical.

Qualitative and quantitative methods can also support each other, both through triangulation of findings and by building on each other. Triangulation is when you use different data sources and methods to shed light on an issue or programme. You can triangulate either by gathering data from different research participants or by examining an issue using different data collection methods. For example, you could compare the perspectives of teachers, students, and parents on the quality of schooling, or gain an understanding of student perspectives through a questionnaire, interviews, and participant observations. Why is it important to gather the perspectives of different stakeholders and/or use different methods? Triangulation can either create confidence in the trustworthiness of your findings or highlight further complexity (Denzin, 1989; Gaskell and Bauer, 2000). If, for example, different stakeholders all share a similar concern, or if your data collection methods all lead to similar observations, you are a step closer to overcoming bias (an inclination to hold a particular view) induced either by a particular method or by considering only the views of one group of research participants. However, through data and method triangulation, you may also uncover inconsistencies or contradictions, which will require you to investigate the origin of these complexities further (Gaskell and Bauer, 2000). Either way, triangulation can strengthen your conclusions and identify areas for further work.
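For readers who keep their coded data in spreadsheets or scripts, the sketch below illustrates one simple way of supporting triangulation in practice. It is a minimal, hypothetical example (the stakeholder groups, methods, and themes are invented for illustration, not drawn from this chapter): it tabulates which themes were raised by which groups and through which methods, so that convergent and divergent findings are easy to spot.

```python
from collections import defaultdict

# Hypothetical coded findings: (stakeholder group, data collection method, theme).
# In practice these would come from your own coding of interviews, questionnaires,
# and observations; everything here is invented for illustration.
coded_findings = [
    ("teachers", "interview", "overcrowded classrooms"),
    ("students", "interview", "overcrowded classrooms"),
    ("parents", "questionnaire", "overcrowded classrooms"),
    ("students", "observation", "long walk to school"),
    ("parents", "questionnaire", "cost of uniforms"),
    ("teachers", "interview", "cost of uniforms"),
]

# For each theme, record which (group, method) combinations raised it.
sources_by_theme = defaultdict(set)
for group, method, theme in coded_findings:
    sources_by_theme[theme].add((group, method))

all_groups = {group for group, _, _ in coded_findings}

for theme, sources in sorted(sources_by_theme.items()):
    groups_raising = {group for group, _ in sources}
    methods_used = {method for _, method in sources}
    status = "convergent across groups" if groups_raising == all_groups else "raised by some groups only"
    print(f"{theme}: groups={sorted(groups_raising)}, methods={sorted(methods_used)} -> {status}")
```

The same tabulation can of course be done by hand or on flipchart paper; the point is simply to record, for every theme, who raised it and through which method before deciding whether your findings converge.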

Qualitative and quantitative methods can also be used to build on each other in an iterative manner. MEAL activities typically draw on a mix of qualitative and quantitative methods. This is because one research approach (qualitative or quantitative) can rarely fully address the research questions that are posed or provide the information required for a log frame. The approach of drawing on both qualitative and quantitative methods has been referred to as mixed methods.

The weight given to qualitative or quantitative methods may differ, as can the sequence in which qualitative and quantitative data is collected (Creswell, 2002). For example, qualitative research can be used to develop and guide the questions in a survey, ensuring that the survey both includes relevant indicators and asks its questions appropriately. Equally, a statistical analysis of a survey may identify variances, trends, and patterns, which can then be explained and explored further through qualitative research (see Figure 1.1).

The iterative process illustrated in Figure 1.1 is typical, but other sequences are possible: you might collect quantitative and qualitative data at the same time, or start with one type of data and follow up with the other. Depending on your research question, one method may carry more weight than another. For example, you may conduct a qualitative study but also gather a few descriptive statistics from your context. In this case, the weight lies with the qualitative research methodology. There is no right or wrong sequence or weight. The most important thing is that you choose a strategy that can best answer your research question.

What are some of the limitations of qualitative research?

There are some limitations to qualitative research. While qualitative research is ideally suited to understanding local knowledge and perspectives, the knowledge produced from such studies is not easily generalizable to other people or other settings. One therefore has to be careful about drawing sweeping generalizations from the findings of qualitative research. Qualitative research embraces different views and perspectives, and is likely to unpack a variety of different experiences and perceptions; it is therefore rarely appropriate to test hypotheses using qualitative methods. Qualitative research can instead be used to generate hypotheses that can then be tested using quantitative methods.

Figure 1.1 Iterative process of combining qualitative and quantitative methods in research

Source: Adapted from Bamberger et al. (2010).

All research is vulnerable to bias – and this includes quantitative research. However, qualitative research explicitly embraces subjectivity, which means that personal experiences, perceptions, and judgements are valued, whether they come from research participants or from the way in which researchers purposefully recruit participants to the study. Qualitative researchers also make observations and interpret data based on preconceived ideas about the topic. The background, experiences, and values of those researchers will therefore inevitably influence the generation of qualitative evidence. According to Madden (2010), this makes the researcher a key instrument and tool for the generation of qualitative evidence. It also means that qualitative findings are never objective truths; rather, they are carefully formed and shaped by the researcher. For sceptics of qualitative research, this raises questions about its rigour and scientific value. However, precisely because of this subjectivity, qualitative research needs to be judged against a set of quality criteria that differ from the reliability, validity, and generalizability used to judge quantitative research (Gaskell and Bauer, 2000). In Chapter 2, we will describe the different quality criteria of qualitative research, which help enhance its rigour and scientific value.

As a result of these limitations, people in positions of power often associate qualitative research with limited use and credibility. However, this is a grave misunderstanding of what systematic qualitative research has to offer. And it is a misunderstanding with real implications for the funding and support of the development of qualitative research capacity. As a consequence, there remains little procedural clarity or guidance on how to conduct good qualitative research in the development sector. While this is slowly changing, it reminds us that we all have a responsibility to maintain and further strengthen the quality and integrity of qualitative research.

By now, you probably have a good understanding of what qualitative research is and what it is not. To further explain the use and potential of such research to the work of development practitioners, we will now discuss some of the different ways in which qualitative research can improve and strengthen development processes.

Development agencies are continually aiming to develop programmes that are optimal in relation to relevance, impact, cost, reach, and social change. Qualitative research can help development practitioners achieve each of these goals more fully. In this section, we will introduce six components of development and discuss the contribution of qualitative research to each one. As illustrated in Figure 1.2, the components we will be discussing are: 1) beneficiary engagement, relevance, and empowerment; 2) accountability; 3) impact, innovation, and evidence; 4) value for money; 5) scalability and replicability; 6) advocacy, campaigning, and social change.

Beneficiary engagement, relevance, and empowerment

Many development and humanitarian organizations have it within their mandate to empower the people they work to assist, and they often see participation as an essential strategy to achieve this. Qualitative research can facilitate participation. As a research approach, it actively encourages the use and development of creative and flexible methods that enable different voices to be heard (O’Kane, 2008). In fact, some qualitative research methods have been developed with the specific purpose of enabling the people whom development agencies are looking to assist to participate in the planning of development programmes (Chambers, 1983; Rifkin and Pridmore, 2001). See Chapters 5 and 6 for more detail and examples of such qualitative and participatory research methods. Using qualitative research to facilitate participation is important for a number of reasons:

  • Qualitative research can be used to consult a wide variety of local stakeholders. Often the least powerful and visible people of a community, such as children, struggle to have a voice in community and programme sensitization forums. Qualitative research can thus ensure that different groups of people are given an opportunity to voice their perspectives about an issue or a programme.

  • In turn, and to ensure relevance, development practitioners can use these perspectives to tailor the intervention so that it is better aligned with the spectrum of views, expectations, and needs that exist in a programme context.

  • If community members feel that the opinions and experiences they articulated through qualitative research have been taken into account, and have influenced decisions, they are more likely to stay positively engaged with the programme and have a sense of ownership.

  • Some qualitative research methods (such as Photovoice; see Chapter 6) can actively facilitate deliberation, awareness raising, and critical thinking (Freire, 1973). Such analytical skills are essential for good community-level programme management and for developing relationships with external change agents.

  • Related to this, the type of participation that qualitative research facilitates can be empowering. Participation and empowerment are deeply intertwined, reinforcing each other both as means and as ends. On the one hand, participation can lead to the development of new skills and to feelings of control and power over participants’ own lives. On the other hand, participation in activities and under conditions that do not enable change can contribute to a sense of powerlessness and further discourage participation (Campbell and Jovchelovitch, 2000).

Figure 1.2 Qualitative research for development

As these five examples suggest, qualitative research has the potential to facilitate beneficiary engagement, which not only ensures that programmes are relevant but can, as argued by Kilby (2006), help development organizations become effective agents of empowerment.

Accountability

Accountability broadly refers to the mechanisms that are in place within a development and humanitarian organization to ensure that it uses its position of power responsibly. It typically involves ‘giving an account’ to someone who has a stake in a development programme (Cornwall et al., 2000). More often than not, this involves demonstrating to a funding agency that a programme has been worth funding. Development practitioners are all too familiar with the process of generating data and information to demonstrate to their donors that their programmes are worthwhile. While being accountable to donors continues to be key in the delivery of aid and development programmes, the past few decades have witnessed a powerful movement to ensure that accountability is not limited to the funding agencies and donors, but also considers the responsibility of development organizations to be accountable to the people they seek to assist.

The Humanitarian Accountability Partnership (HAP) has been instrumental in promoting accountability to beneficiaries of humanitarian and development organizations. HAP has developed some standards, or benchmarks, of accountability (Darcy et al., 2013). These include the following:

  • Establishing and delivering on commitments: the organization develops a plan that sets out its commitment to accountability.

  • Staff competency: the organization ensures that its staff have the necessary competencies to deliver a plan of action for accountability.

  • Sharing information: the organization ensures that all stakeholders, including its beneficiaries, have access to timely and relevant information about the organization and its activities.

  • Participation: the organization gives voice to the people it aims to assist and incorporates their views into programming.

  • Handling complaints: the organization puts in place mechanisms that enable all stakeholders, including beneficiaries, to safely deliver complaints and receive a reply that gives details about how the organization is responding to the complaint.

  • Learning and continual improvement: the organization learns from its experience and applies learning to improve its performance.

These six benchmarks of accountability encourage us to think more holistically about accountability, shifting the focus away from auditing, which benefits donors, to implementing agencies’ responsibility to be held accountable to their beneficiaries.

A quick glance at the six benchmarks suggests the relevance of qualitative research to accountability. As already discussed, qualitative research is fundamental to the benchmarks of participation and learning. However, the ‘handling of complaints’ benchmark can also be actioned through qualitative research to some extent. Qualitative research is not a complaints-handling procedure and should not substitute for more established complaints mechanisms that are geared towards handling and responding to a wide variety of issues. However, the feedback generated through qualitative research can expose grievances and criticisms about a programme, enabling development practitioners to make necessary changes.

While qualitative research can generate learning about programme outcomes, feeding into donor reports, it also offers a great opportunity for real-time feedback that development practitioners can act upon to improve programme performance (Featherstone, 2013). But qualitative research is not a magic bullet for accountability. Qualitative research per se does not ensure accountability. It merely seeks to generate learning from a variety of programme stakeholders. Accountability happens when development practitioners use this learning, ideally in collaboration with the beneficiaries, to improve their ‘ways of working’ with local communities and to enhance the performance of their programmes. In addition, there are many other more established ways for you to promote accountability, which qualitative research cannot and should not replace.

Save the Children has developed a Programme Accountability Guidance Pack (Munyas Ghadially, 2013) that offers guidance and tools in areas such as information sharing, participation, complaints handling, capacity building of staff, and monitoring of accountability measures. You can download the pack and watch videos developed to improve understanding and facilitate discussions on programme accountability at <www.savethechildren.org.uk/resources/online-library/programme-accountability-guidance-pack>.

Impact, innovation, and evidence

Development practitioners have an interest in implementing impactful programmes that: 1) can be measured by monitoring and evaluation frameworks; 2) are highly valued by the people they seek to assist; 3) represent good value for the time and resources invested. The decision to implement one type of intervention over another often rests on the experience of development practitioners, the scant availability of evidence, and what can be measured to demonstrate impact. While tacit or common knowledge, however limited it may be, can contribute to the development of fantastic programmes, the question of whether or not another intervention could produce better outcomes is always present.

It is this curiosity about whether or not better and more impactful programmes could be implemented for the same amount of money and effort that leads to innovation and evidence building. It is also this type of curiosity that encourages development practitioners to go beyond demonstrating impact to donors, and to innovate and develop evidence that helps them establish programmes that are optimal in relation to relevance, impact, cost, reach, and social change.

Quantitative inquiries are key to the development of such evidence, both to determine programme outcomes and to compare different development approaches. However, qualitative research is equally important and can be used to generate knowledge and facilitate learning in a number of different ways that can help practitioners develop innovative and evidence-informed programmes. We will now describe three ways in which qualitative research can be used to further impact, innovation, and evidence.

First, qualitative research can help localize development programmes. Development programmes are most successful when they are embedded in a local context, reflect locally perceived needs, and draw on local assets (Moser, 1998). This is widely recognized, and it is not uncommon for donor agencies to ask funding proposals to account for how community members were involved in developing the proposal and how they are expected to participate in the planning and implementation of the programme. Participatory and qualitative research plays a key role in generating information and evidence to inform future programmes so that they are tailored to local realities.

The process of localizing development programmes can involve two steps. First is a needs assessment, where qualitative research can be used to map the local perceptions of needs, examine their nature and causes, and set priorities for future action. A second step can involve using qualitative research to chart the cultural context, local assets, and community resources. These contextual factors may well form part of local strategies for coping with hardship and are important to consider, both to align programmatic and local responses and to optimize the use of local and external resources. Qualitative research can help generate a better understanding of the issues that affect local community members and can identify realistic solutions that reflect local knowledge and assets.

Second, qualitative research can be used to explore local experiences of a programme – not only as a formative evaluation tool but also as part of the end-of-programme evaluation. Only by giving local people and service providers an opportunity to communicate what they perceive to be the strengths and weaknesses of a programme, and the way it was implemented, will we be in a position to make programmatic changes that can either strengthen current and active programmes or inform future programmes. Qualitative research is thus a major part of formative evaluation, allowing beneficiaries to express their reactions to an active programme so that development practitioners can make the necessary changes for the programme to progress in a more valued direction (in line with accountability, as described above). From an end-of-programme evaluation perspective, qualitative research can be used to unpack local understandings of impact. While log frames are typically developed to measure hypothesized programme impacts, primarily to show donors that programmes have achieved what they set out to do, these impacts are often limited and deliberately reduced to what we and our donors find relevant. Qualitative research – for example through an investigation of the ‘most significant changes’ (described more fully below) – can provide details on what the programme beneficiaries perceive the impact of the programme to be.

Third, qualitative research can help contextualize ‘what happened’. Development programmes are not implemented in a vacuum, but interact with a host of social and contextual factors. These could include other development programmes, socio-cultural norms, and changes to the physical environment, as well as the personal skills, sensitivities, and characteristics of the people implementing the programme. Qualitative research can be used to unpack the contextual factors and processes that have contributed to either the success or failure of a programme. Such knowledge can help development practitioners mitigate potential risks to programme success and increase the chances of success and impact.

In summary, qualitative research can generate evidence that can be used to develop programmes that are tailored to local contexts. Qualitative research can also be used to determine improvements and changes to a programme. When acted upon, such evidence can optimize programme impact and satisfaction among the people the development programmes seek to assist.

Value for money

Development agencies are increasingly looking to deliver programmes that represent value for money. This is not about developing and implementing low-cost programmes, but about maximizing the impact of funds spent to improve poor people’s lives. In other words, ‘value for money’ is about ensuring that development programmes have the greatest impact at the lowest cost. Qualitative research is not typically associated with the ‘value for money’ agenda. But, as alluded to above, qualitative research is vital to any process looking to make development programmes more efficient, effective, and equitable, which in turn makes programmes more economical. Qualitative research can explore ways to enhance programme impact and overcome unintended consequences, for example by drawing on local resources and strengths or by involving local stakeholders to address possible barriers to the programme’s impact (such as a cultural belief or detrimental gender constructions), all of which is likely to increase value for money.

Moreover, it is notoriously difficult to document value for money. While solid and rigorous quantitative research designs are central to a ‘value for money’ analysis, it is increasingly recognized that a good analysis incorporates different sources of information, including qualitative evidence, to build a comprehensive picture of programme impact and value. This could, for example, include an outline of local perceptions of impact, above and beyond that stipulated by the logical framework guiding programme monitoring and evaluation.

Scalability and replicability

We have said it before: the ultimate aim of development agencies is to have a positive impact in the areas where they work. So far, impact has primarily been discussed in relation to developing programmes that are successful and create a positive change for the people they seek to assist. Impact, however, also refers to reach. A programme can be very successful yet reach only a small number of people. Better still is a programme that is equally successful but reaches a much larger number of people. Development agencies therefore have an interest in taking impactful programmes ‘to scale’. This can involve taking a stand-alone programme to scale, or it can mean working through national stakeholders, such as local government departments, which can extend their activities to a greater number of people. Often it is a mix of the two. Going to scale inevitably involves replicating activities in other locations and mainstreaming certain elements so that they can be implemented realistically by facilitators with varied skills and experience.

It cannot be assumed that, just because a programme has been successful in one context, it can be repeated in another context with equal success. Programmes are implemented by people with varied knowledge, skills, attitudes, and behaviours in contexts that are socially determined. Qualitative research plays an instrumental role in making sure that development agencies fully understand all the contributing factors to programme impact. Qualitative research, for example, can be used to unpack the many different contextual barriers and facilitators to programme impact and determine what elements of the programme need to be fostered further or where changes should be made in order to ensure that the programme has the flexibility to be tailored to different socio-economic or cultural contexts.

Advocacy, campaigning, and social change

Achieving social change requires action at many different levels. While development programmes can provide poor people with opportunities to escape poverty and live healthier lives, their scope to change the policy, legislation, and geopolitical processes that either leave people poor and vulnerable in the first place or fail to protect those who are most vulnerable is often limited. For that reason, many larger development agencies have staff, and sometimes an entire department, dedicated to advocacy. Save the Children define advocacy as ‘a set of organised activities designed to influence the policies and actions of others to achieve positive changes for children’s lives based on the experience and knowledge of working directly with children, their families and communities’ (Gosling and Cohen, 2007: 12).

Qualitative research, by giving a voice to marginalized people, can help development practitioners develop knowledge about the experiences of the most vulnerable. These voices, and the knowledge they represent, can be used by development practitioners to reframe an issue and develop new ways of seeing (Laws et al., 2013). The perspectives gathered through qualitative research can also be used in campaign materials, extending the voices of local people to a global audience. Some qualitative research methods, such as Photovoice (see Chapter 6), were developed with the explicit purpose of gathering voices to advocate for structural change.

We have now offered six reasons why qualitative research is vital to the field of development. Qualitative research is not only key to the development and implementation of projects, but also to understanding the impact and reach of development programmes. We now proceed to discuss some of the different phases in a development programme cycle where qualitative methods can be employed to enhance programme impact, quality, and accountability.

Qualitative research can be embedded in a development programme at many different points of its implementation cycle, serving different learning purposes. To demonstrate this, in this section we will describe five specific qualitative analyses in the programme cycle, as well as highlighting some of the more general research, advocacy, and accountability opportunities that may arise at different points of the cycle (see Figure 1.3). These analyses are by no means exhaustive, but they offer concrete examples of how qualitative research can be embedded in a programme cycle with the aim of strengthening programme impact, quality, and accountability. We will discuss each of them in turn.

Situational analysis and needs assessment

Before a development programme is conceived, and a funding proposal written, there is a need to carry out a situational analysis and a needs assessment. This is a process of identifying and understanding the specificities of a problem and the broader context in which a programme operates, and using this information to plan actions to address the problem.

Figure 1.3 Opportunities for qualitative research within the programme cycle

A situational analysis provides development practitioners with an understanding of the internal and external environment in which a programme will operate. Internally, this could include an analysis of organizational capabilities, while externally, if the organization works with and for children, it could include a country-level child rights situational analysis (CRSA). For organizations working in fragile states, the situational analysis could also include a security assessment. While situational analyses often depend heavily on literature reviews, they also frequently draw on interviews with key stakeholders. Once a situational analysis has mapped out macro-environmental factors that may affect or guide organizational operations, the process of identifying and understanding the specificities of the problem and planning actions to address that problem can commence. This is also called a needs assessment.

Identifying a problem and assessing a need often involve an iterative process that considers the capabilities, principles, and values of a development organization, the national strategies of a country, and the perspectives of the people the programmes are intended to assist. Once a general problem area has been identified – in the area of education, health, or hunger and livelihoods, for instance – a systematic process that places the intended beneficiaries centre stage can begin to determine people’s specific needs.

While surveys can be useful to determine the scale of a problem, the process of generating qualitative evidence pertaining to the views and perspectives of beneficiaries at a community level is key to determining what interventions will be most appropriate and successful in alleviating risks and hardships (Rossi and Lipsey, 2004). Individual and group interviews (see Chapter 3) as well as participatory learning and action (PLA) methods (see Chapters 5 and 6) are particularly well suited for needs assessments. Needs assessments that are developed in partnership between development practitioners and local people (Rifkin and Pridmore, 2001) can do the following:

  • They can offer critical reflection and raise the consciousness of community members about the conditions that compromise their well-being.

  • They can enable diverse groups of people to participate. This includes children and other marginalized groups who are ordinarily absent from community forums (see Box 1.2 for an example).

  • They can identify key barriers to change, risks, and hazards facing local communities.

  • They can identify assets, capacities, and local resources that can be used to address their needs.

  • They can help community members prioritize and draw up action plans for development activities.

  • They can support the selection of indicators that can be used to identify and measure the areas of change that a development programme expects to bring about.

A number of toolkits and guidance notes are available online to support development practitioners in applying qualitative research methods in needs assessments. Examples include:

Local context analysis

Once a needs assessment has been carried out, and it is clear what problems or ‘gaps’ between current and desired conditions a development programme is looking to tackle, a more in-depth local context analysis can be undertaken. Local context analyses play an important role in the programme-planning process and seek to map the socio-economic, cultural, environmental, political, and legislative conditions that may affect a programme (see Box 1.3 for an example). For example, a local context analysis may provide information regarding the factors listed below:

  • When is it a good time to start implementing the programme? Religious holidays, local elections, or seasons when drought is likely or animals are prone to disease may delay a programme or, in a worst-case scenario, stop it from being implemented.

  • What are the local experiences and perceptions of the phenomenon that leaves some people vulnerable and at risk? This will help you gain a clear picture of circumstances that compromise people’s well-being as well as an understanding of the people who will be affected by the programme.

  • Which local norms and practices play a role in responding to, or exacerbating, the social conditions that compromise people’s well-being?

  • Do local representations and understandings conflict with the values and principles of the development organization? If so, a sensitive approach is required. The work of Save the Children, for example, is guided by the Convention on the Rights of the Child, yet local communities, often driven by poverty, may place greater emphasis on children’s responsibilities in sustaining household livelihoods.

  • Can local assets and capacities be drawn upon to implement the programme in line with local responses and resources? This may include the experience and knowledge of some local people, infrastructures that can house training sessions and other events, communal land to host a borehole, and so forth.

  • What existing services are there, and what are their roles and responsibilities in addressing issues relevant to the programme aim?

Box 1.2 Children using participatory methods to assess local needs

Save the Children used participatory learning and action tools, including Photovoice (see Chapter 6), to involve children in a needs assessment for a programme tackling chronic malnutrition in south-west Bangladesh.

The aim was to give children the opportunity to voice their concerns and challenges with regard to food and nutrition and to use this information in the planning of a programme.

Figure 1.4 Children practising taking pictures

Source: Julie Newton/Save the Children.

Using qualitative research methods, a local context analysis supplements the needs assessment and situational analysis by gathering more in-depth and contextual information about the specific problem that a development programme is looking to address. It also explores what opportunities might be available for local participation in the planning and implementation of the programme, both to overcome potential conflicts and to recognize and build on existing capacities. On occasion, organizations may conduct very thorough needs assessments and situational analyses, which encompass many of the components of a context analysis.

Box 1.3 Children participating in a context analysis in West Africa

In 2013–14, the Child Protection Initiative of Save the Children conducted local context analyses into kinship care in communities across six countries in West and East Africa. The research was primarily qualitative, participatory, and exploratory, and was designed to enhance Save the Children’s understanding of the factors that influence children’s experience of kinship care, such as their kinship care arrangements and positive and negative experiences of kinship. Norms, practices, and understandings were gathered from different stakeholders, including children, caregivers, and local leaders. These local context analyses offered a foundation for strengthening programmes in the region that promote the prevention of family separation and family strengthening within a comprehensive care and protection system (Chukwudozie et al., 2015).

Given that development programmes are most likely to achieve buy-in and resonate with local needs and resources if they have been developed in partnership with local community members (Skovdal et al., 2013), it is increasingly seen as good practice to use qualitative research methods to engage prospective beneficiaries in needs assessments and local context analyses. This is demonstrated by the fact that many donor agencies ask in their proposals for an account of how community members were involved in the planning and development of a programme.

Barriers to and facilitators of programme progress analysis

Development programmes interact with a range of factors that can either facilitate or hinder progress and impact.

Therefore, once a development programme is up and running, it is important to monitor progress and carry out formative evaluations. Monitoring involves a continuous process of appraising programme progress and identifying strengths and weaknesses, with the aim of modifying and improving the programme (Gosling and Edwards, 2003). Gosling and Edwards (2003) identify six types of monitoring:

  • Project inputs: monitoring whether what is needed to implement the programme is readily available, and following budgetary and work plan schedules.

  • Project outputs: monitoring what has been done, problems encountered, and changes to the environment or circumstances in which a programme is active.

  • Meeting objectives: monitoring the applicability of programme objectives and whether the programme is working towards them.

  • Impact: scoping intended and unintended consequences of the programme, highlighting positive and negative impacts.

  • Management: monitoring the way in which a programme is being implemented, such as the management style of the implementing agency as well as the participation of local people.

  • Context: monitoring the local context, being aware of socio-economic, political, and environmental developments that may affect the programme.

These are just a few areas where programme monitoring can take place. Some of them focus on process, while others look at impact or context. It is important to consider process, impact, and context monitoring as these are linked and can help us understand the pathways that lead to change. Qualitative research methods, such as individual interviews and focus group discussions (see Chapter 3), are ideal for conducting a formative evaluation, examining barriers and facilitators to programme progress. Local stakeholders – including a selection of beneficiaries, community members, and programme staff – can be interviewed at any stage during programme implementation. Interviews can follow a topic guide that examines barriers and facilitators to the six areas of monitoring mentioned above. Such interviews will reveal what has been achieved to date, as well as some of the operational processes and contextual factors that have either facilitated or hindered programme impact. Development practitioners can then use this feedback to modify the programme and capitalize on its strengths.
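To make this concrete for readers who like to prepare their interview guides digitally, the sketch below shows one way a semi-structured topic guide could be organized around the six monitoring areas listed above. It is a hypothetical illustration only: the wording of the questions and probes is ours, not drawn from Gosling and Edwards or from this chapter, and would need tailoring to your own programme.

```python
# A hypothetical semi-structured topic guide for a formative evaluation interview,
# organized around the six monitoring areas described above. The questions are
# illustrative placeholders, not a prescribed instrument.
topic_guide = {
    "Project inputs": [
        "Have the staff, materials, and funds needed for the programme been available on time?",
        "Probe: what has helped or hindered this?",
    ],
    "Project outputs": [
        "What activities have taken place so far, and what problems have you encountered?",
    ],
    "Meeting objectives": [
        "Do the programme objectives still make sense in this community? Why or why not?",
    ],
    "Impact": [
        "What changes, positive or negative, have you noticed since the programme began?",
    ],
    "Management": [
        "How are decisions about the programme made, and who is involved?",
    ],
    "Context": [
        "Has anything changed locally (socially, politically, environmentally) that affects the programme?",
    ],
}

# Print the guide as a simple checklist an interviewer can take into the field.
for area, questions in topic_guide.items():
    print(f"\n[{area}]")
    for question in questions:
        print(f"  - {question}")
```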

You can gather information from a variety of sources for the purpose of monitoring and formative evaluations (field visits, community meetings, field reports, records of activities, and so on). You may already do so as part of your job. Why should you then formalize the process and use qualitative research methods? Adopting a research approach, and gathering feedback systematically, can serve as a quality control and make sure that valuable learning is properly captured, stored, assimilated, and applied to development programmes in other areas or sectors.

There are a number of different ways in which you can use qualitative methods to evaluate the impact of a programme. A ‘stories of change’ analysis allows you to investigate the most significant changes that the programme has brought about (Dart and Davies, 2003). It is important that these ‘stories of change’ are gathered in a participatory and inductive (‘bottom-up’) way and not guided by indicators of what you, as a practitioner, believe is important and constitutes significant change. A qualitative ‘stories of change’ analysis should therefore be carried out independently of any quantitative research. If the qualitative ‘stories of change’ nevertheless resonate with the quantitative indicators, the two will strengthen and complement each other.

The ‘stories of change’ will hopefully elaborate on and give detail to the social processes and contextual factors that contributed to the most significant changes. If these are limited, and if time and resources permit, you can try to arrange short follow-up interviews with individual participants, asking them about the background to these perceived significant changes. A ‘stories of change’ analysis is likely to highlight both expected and unexpected outcomes. This makes the approach attractive both as a way of supplementing and expanding on a quantitative summative evaluation and for mapping out the breadth of programme impact (which is useful from a ‘value for money’ perspective). ‘Stories of change’ analyses can be implemented with any stakeholder, allowing for comparisons. Adults and children who have benefited from the programme can speak from personal experience, while non-benefiting community members can speak about the changes they have observed. Also, programme staff and key stakeholders may have a perspective on the changes the programme has brought about.

The ‘stories of change’ can be gathered in a number of different ways, ranging from interviews (see Chapter 3), to participatory learning and action tools (see Chapter 5), and to Photovoice (see Chapter 6). While it is useful to map out the spectrum of positive and negative changes a programme has initiated, it is also helpful to ask community members to reflect on the changes they have observed and to come to a consensus about their significance, for example through a ranking. This way, entire communities can tell you what they consider the ‘most significant changes’ of a programme to be. However, be aware that different segments of a community may have different perceptions of what the ‘most significant change’ is, and so it is advisable for you to gather ‘most significant change’ stories from each of these groupings (for example, according to age group, gender, ethnic or language group, level of poverty, or health status).
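Where rankings are gathered from several community groupings, a simple tally can make both the overall picture and the disagreements visible. The sketch below is a hypothetical illustration only: the groupings, the candidate change stories, and the scoring rule are all invented for the example and are not part of the ‘most significant change’ technique itself.

```python
from collections import defaultdict

# Hypothetical rankings of candidate 'most significant change' stories,
# with 1 = most significant. Groupings and stories are invented for illustration.
rankings = {
    "women's group": {"children back in school": 1, "new village savings group": 2, "borehole repaired": 3},
    "men's group": {"borehole repaired": 1, "children back in school": 2, "new village savings group": 3},
    "adolescents": {"children back in school": 1, "borehole repaired": 2, "new village savings group": 3},
}

# Sum the ranks: a lower total means the story was considered more significant overall.
totals = defaultdict(int)
for group_ranking in rankings.values():
    for story, rank in group_ranking.items():
        totals[story] += rank

print("Overall ordering (lower score = more significant):")
for story, score in sorted(totals.items(), key=lambda item: item[1]):
    print(f"  {story}: {score}")

# Flag stories on which groupings clearly disagree (rank spread of 2 or more),
# since these are the ones worth discussing further with the community.
print("\nStories with notable disagreement between groupings:")
for story in totals:
    ranks = [group_ranking[story] for group_ranking in rankings.values()]
    if max(ranks) - min(ranks) >= 2:
        print(f"  {story}: ranks ranged from {min(ranks)} to {max(ranks)}")
```

Whether done in a spreadsheet, a script, or on flipchart paper, the point is the same: record each grouping’s ranking separately before drawing overall conclusions, so that minority views are not lost.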

Guidance on how to facilitate a ‘stories of change’ analysis in the context of development programme evaluation has been developed by Davies and Dart (2005).1

In addition to exploring local perceptions of change and impact, much can be learned from local perceptions of the strengths or limitations of a programme. This is particularly relevant to development practitioners who need to draw on past experiences to develop new, better, and scalable programmes. This type of inquiry builds on what is often referred to as a strengths, weaknesses, opportunities, and threats (SWOT) analysis. SWOT analyses are commonly used in performance management, but they can provide communities targeted by a development programme with a useful platform to discuss the programme’s strengths and limitations in detail. Programme strengths and limitations can also be explored through interview methods and with a mix of stakeholders. This type of analysis should be conducted with the aim of summarizing key strengths and limitations as well as drawing out lessons learned and recommendations for future programming.

These five different forms of analysis make use of qualitative methods within a development programme cycle. The list is by no means exhaustive. Many other general research, advocacy, and accountability activities can be facilitated. Research, whether operational or issue focused, can be conducted at any stage of the programme cycle, irrespective of the monitoring and evaluation framework that has been designed. The analysis can draw on data gathered at one specific point in time (also referred to as a cross-sectional study), or on information collected by following a small group of people throughout the programme cycle and interviewing them at different stages (also referred to as longitudinal case studies). It is also important to note that not all five opportunities are relevant to all programmes and that it may not be realistic to conduct all five types of study, considering costs, timing, and staff capacity.

If information is gathered from a number of different communities, a review can be used to generate evidence and key lessons for future programming and advocacy. This review can contrast and combine results from the different studies conducted during the programme (or between sister programmes in other contexts).

Qualitative research offers development practitioners an opportunity to understand local perspectives, needs, and context. By adopting a ‘research approach’ and by systematizing and formalizing their use of qualitative research methods, development practitioners can make a significant contribution to the creation of an evidence base. Qualitative evidence that is generated systematically is integral to the objectives of development practitioners. It can be used to: 1) engage programme beneficiaries; 2) promote accountability; 3) contribute to impact, innovation, and evidence; 4) support the ‘value for money’ agenda; 5) facilitate the scalability and replicability of programmes; and 6) provide material and opportunities for advocacy and campaigning.

Qualitative research can be integrated into the programme cycle in a number of different ways, from the development of situational analyses and needs assessments through to the monitoring and evaluation of programmes. Information gathered at the different steps of the programme cycle can be used to explore the feasibility and acceptability of a programme as well as to determine areas for improvement. If lessons and recommendations that emerge through systematic qualitative research are considered and contribute to programme changes, this can have immediate benefits to programme beneficiaries. In the next seven chapters, we will provide guidance on how you can generate and report on qualitative evidence, equipping you with the knowledge and skills required to adopt a ‘research approach’.

1. The full text is available at <www.mande.co.uk/docs/MSCGuide.pdf> [accessed 27 July 2015].

References
Bamberger, M., Rao, V. and Woolcock, M. (2010) Using Mixed Methods in Monitoring and Evaluation, pp. 1–30, Research Working Papers, Washington DC: World Bank.
Bauer, M. W., Gaskell, G. and Allum, N. C. (2000) ‘Quality, quantity and knowledge interests: avoiding confusions’, in M. W. Bauer and G. Gaskell (eds), Qualitative Researching with Text, Image and Sound, pp. 3–17, London: SAGE Publications.
Campbell, C. and Jovchelovitch, S. (2000) ‘Health, community and development: towards a social psychology of participation’, Journal of Community and Applied Social Psychology 10 (4): 255–70.
Chambers, R. (1983) Rural Development: Putting the Last First, London: Longman.
Chambers, R. (1997) Whose Reality Counts?: Putting the First Last, London: Intermediate Technology.
Chukwudozie, O., Feinstein, C., Jensen, C., O’Kane, C., Pina, S., Skovdal, M. and Smith, R. (2015) ‘Applying community-based participatory research to better understand and improve kinship care practices: insights from DRC, Nigeria and Sierra Leone’, Family and Community Health 38 (1): 108–19.
Cornwall, A., Lucas, H. and Pasteur, K. (2000) ‘Introduction: accountability through participation: developing workable partnership models in the health sector’, IDS Bulletin 31 (1): 1–13, doi: 10.1111/j.1759-5436.2000.mp31001001.x.
Creswell, J. W. (2002) Educational Research: Planning, Conducting and Evaluating Quantitative and Qualitative Research, Boston MA: Pearson Education.
Darcy, J., Alexander, J. and Kiani, M. (2013) 2013 Humanitarian Accountability Report, Geneva: Humanitarian Accountability Partnership.
Dart, J. and Davies, R. (2003) ‘A dialogical, story-based evaluation tool: the most significant change technique’, American Journal of Evaluation 24 (2): 137–55, doi: 10.1177/109821400302400202.
Davies, R. and Dart, J. (2005) The Most Significant Change (MSC) Technique: A Guide to its Use, self-published, <www.mande.co.uk/docs/MSCGuide.pdf> [accessed 27 July 2015].
Denzin, N. K. (1989) The Research Act, 3rd edn, Englewood Cliffs NJ: Prentice Hall.
Featherstone, A. (2013) Improving Impact: Do Accountability Mechanisms Deliver Results?, London: Christian Aid, HAP International, and Save the Children.
Flick, U. (2002) An Introduction to Qualitative Research, 2nd edn, London: SAGE Publications.
Freire, P. (1973) Education for Critical Consciousness, New York NY: Seabury Press.
Gaskell, G. and Bauer, M. W. (2000) ‘Towards public accountability: beyond sampling, reliability and validity’, in M. W. Bauer and G. Gaskell (eds), Qualitative Researching with Text, Image and Sound, London: SAGE Publications.
Gosling, L. and Cohen, D. (2007) Advocacy Matters: Helping Children Change Their World, London: Save the Children, <www.savethechildren.org.uk/sites/default/files/docs/Advocacy-Matters-Participants-Manual.pdf> [accessed 27 July 2015].
Gosling, L. and Edwards, M. (2003) Toolkits: A Practical Guide to Planning, Monitoring, Evaluation and Impact Assessment, Volume 5, London: Save the Children.
IFRC (2007) VCA Toolbox: With Reference Sheets, Geneva: International Federation of Red Cross and Red Crescent Societies, <www.ifrc.org/Global/Publications/disasters/vca/vca-toolbox-en.pdf> [accessed 27 July 2015].
Kilby, P. (2006) ‘Accountability for empowerment: dilemmas facing non-governmental organizations’, World Development 34 (6): 951–63, doi: http://dx.doi.org/10.1016/j.worlddev.2005.11.009.
Laws, S., Harper, C., Jones, N. and Marcus, R. (2013) Research for Development: A Practical Guide, 2nd edn, London: SAGE Publications.
Madden, R. (2010) Being Ethnographic: A Guide to the Theory and Practice of Ethnography, London: SAGE Publications.
Moser, C. (1998) ‘The asset vulnerability framework: reassessing urban poverty reduction strategies’, World Development 26 (1): 1–19.
Munyas Ghadially, B. (2013) Programme Accountability Guidance Pack: A Save the Children Resource, London: Save the Children, <www.savethechildren.org.uk/resources/online-library/programme-accountability-guidance-pack> [accessed 27 July 2015].
O’Kane, C. (2008) ‘The development of participatory techniques: facilitating children’s views about decisions which affect them’, in P. Christensen and A. James (eds), Research With Children: Perspectives and Practice, pp. 125–55, London: Routledge.
Rifkin, S. and Pridmore, P. (2001) Partners in Planning: Information, Participation and Empowerment, London: TALC and Macmillan Education.
Rossi, P. H. and Lipsey, M. W. (2004) Evaluation: A Systematic Approach, London: SAGE Publications.
Skovdal, M., Robertson, L., Mushati, P., Dumba, L., Sherr, L., Nyamukapa, C. and Gregson, S. (2013) ‘Acceptability of conditions in a community-led cash transfer programme for orphaned and vulnerable children in Zimbabwe’, Health Policy and Planning 29 (7): 809–17, doi: 10.1093/heapol/czt060.