An Open Access Article

Type: Case Studies
Volume: 2024
DOI:
Keywords:
Relevant IGOs:

Article History at IRPJ

Date Received:
Date Revised:
Date Accepted:
Date Published:
Assigned ID: 20240805

Comparing Standard Processes for Designing M&E Frameworks with My Over 18 Years of International Development Experience in West Africa: Key Differences, Underlying Causes, and Impact on Program Success

Author: Christopher Konde Tifuntoh

PhD candidate in Measurement, Monitoring and Evaluation, EUCLID (Euclid University), Bangui, Central African Republic, and Greater Banjul, Gambia

Name and address of the corresponding author:

Christopher Konde Tifuntoh

Email: christopher.tifuntoh.eucliddmm@gmail.com

 


Keywords: Monitoring and Evaluation (M&E), West Africa, Development Projects, Stakeholder Engagement, Program Theory, M&E Frameworks, Resource Constraints, Contextual Adaptability, Capacity Building.

ABSTRACT

This paper compares standard processes for designing Monitoring and Evaluation (M&E) frameworks with the unique practical experiences gained from over 18 years of international development work in West Africa. The study aims to identify critical differences between theoretical approaches and on-the-ground practices, explore the underlying causes of these discrepancies, and assess their impact on program success.

The research reveals significant disparities between theory and practice, particularly in small and medium-sized projects. These differences are evident across various aspects of M&E, including stakeholder engagement, program theory and logic development, evaluation question formulation, and data management. While larger projects adhere more closely to theoretical standards, smaller initiatives often lack comprehensive M&E frameworks due to resource constraints and limited expertise.

The underlying causes of these disparities include contextual adaptability issues, resource limitations, stakeholder engagement challenges, and technical expertise gaps. The dynamic socio-economic and cultural environments in West Africa necessitate modifications to standardized M&E approaches. Additionally, the evolving nature of the M&E field itself contributes to the gap between theory and practice, as new methodologies and techniques may not be quickly adopted in field settings.

These differences significantly impact program success. The lack of comprehensive stakeholder engagement and robust data management systems, particularly in smaller projects, can result in missed opportunities for improvement and reduced program impact. However, the study also notes positive trends, such as the widespread adoption of logic models and increasing recognition of the importance of M&E in development projects. These positive trends should encourage us to continue improving M&E practices.

The paper concludes by emphasizing the urgent need for tailored approaches to M&E in West Africa that balance theoretical best practices with practical realities. It suggests developing simplified yet effective M&E frameworks for smaller projects, increasing capacity-building efforts, and promoting more context-specific M&E methodologies. Future research and practice should focus on bridging the gap between theory and practice, particularly in resource-constrained environments, to enhance the overall effectiveness and impact of development initiatives in the region.

 

  • Introduction

In today’s rapidly evolving world of project management, international development, and public policy, the importance of Monitoring and Evaluation (M&E) cannot be overstated. The M&E framework is a critical tool for assessing the progress, efficiency, and effectiveness of projects, programs, and policies, and it is central to program success. By tracking performance and enabling data-driven decisions, M&E helps organizations learn from their experiences and demonstrate accountability to stakeholders. Monitoring and Evaluation have piqued the interest of multiple stakeholders, and strengthening the capacities of countries and organizations to perform M&E functions is gaining momentum in the Global South.[1]

Monitoring and Evaluation (M&E) frameworks are essential tools in the field of international development, designed to systematically track and assess the progress and impact of programs and initiatives. These frameworks provide a structured approach for collecting, analyzing, and using data to ensure that development projects achieve their intended outcomes and deliver value to stakeholders.[2] The standard processes for designing M&E frameworks typically involve defining program objectives, identifying key performance indicators, establishing data collection methods, and outlining data analysis and reporting procedures.[3]

The significance of M&E in international development cannot be overstated. Effective M&E frameworks ensure accountability, enhance transparency, and foster learning within development organizations.[4] They enable stakeholders to make informed decisions based on empirical evidence, improving program design and implementation. Moreover, M&E frameworks support aligning project activities with broader development goals and efficiently allocating resources.[5] By systematically evaluating program performance, M&E helps identify best practices and lessons learned, which can be applied to future projects to increase their effectiveness and sustainability.

In the context of my over 18 years of international development experience in West Africa, the design and implementation of M&E frameworks have been pivotal in driving the success of numerous projects. My extensive experience spans various roles and responsibilities, including program development, strategic planning, grants management, and impact reporting across countries such as Cameroon, Senegal, Ghana, Benin, Côte d’Ivoire, Nigeria, and Sierra Leone. This hands-on experience has given me a unique perspective on the practical challenges and nuances of designing and operationalizing M&E frameworks in diverse cultural and socio-economic settings.

This paper compares the standard processes for designing Monitoring and Evaluation (M&E) frameworks[6] with my extensive experience in international development across West Africa. It highlights critical differences between theoretical approaches and practical applications, exploring the underlying causes and their impact on program success. Drawing on real-world examples from my career, the comparison provides insights into tailoring M&E frameworks to address specific challenges and leverage opportunities within the West African context. Key differences often stem from local capacity, resource availability, donor requirements, cultural dynamics, and political environments, significantly influencing the design, implementation, and outcomes of M&E frameworks. Understanding these discrepancies is crucial for enhancing the effectiveness of development interventions, and this analysis aims to contribute valuable knowledge and practical recommendations for practitioners in similar settings.

This study compares the standard processes for designing M&E frameworks with the practical experiences gained from over 18 years of international development work in West Africa. The study aims to identify critical differences between theoretical approaches and on-the-ground practices by analyzing these frameworks and exploring the underlying causes of these discrepancies. Additionally, the study seeks to assess the impact of these differences on program success. By drawing on real-world examples, the paper provides insights into how M&E frameworks can be tailored to address specific challenges and leverage opportunities within the West African context, ultimately enhancing the effectiveness and sustainability of development initiatives in the region.

This paper adopts a comparative analysis approach, leveraging qualitative and quantitative data. The study utilizes case studies and personal experiences to highlight the practical applications and deviations from standard M&E processes. Data collection involved reviewing standard M&E framework documents and guidelines. Additionally, personal project reports, evaluations, and anecdotal evidence from various assignments in West Africa were collected to provide contextual depth. Data analysis focused on identifying critical differences between the theoretical and practical aspects of M&E frameworks, exploring underlying causes, and assessing their impact on program success. This involved coding and thematic analysis to draw out significant patterns and insights that could inform future M&E practices in similar contexts.[7]

  • Definition of Key Terms

To mitigate the equivocation fallacy, which arises when a word or phrase is used ambiguously within an argument, several strategies can ensure clarity and precision. First, define key terms clearly at the outset and use them consistently without changing their meanings. Provide enough context for each term to ensure its meaning is clear, avoiding jargon and ambiguous language. Use examples and analogies to clarify terms and anticipate potential misinterpretations by proactively addressing ambiguities. Ask for clarification during discussions if a vague term is used, review your argument for terms with multiple meanings, and use synonyms or rephrase parts of your argument to avoid ambiguity.[8] Therefore, this paper defines its key terms up front and specifies how each will be used.

 

  1. Monitoring

Monitoring is an essential function in project management that involves collecting necessary information with minimal effort to make timely steering decisions. It provides critical data for analysis, discussion, self-evaluation, and reporting. Unlike evaluation, monitoring is an ongoing process integrated into the project cycle to ensure that programs do the right things, and do them correctly, to improve quality. Monitoring provides early indications of progress or lack thereof, helping to improve project design and implementation continuously. According to Bamberger and Hewitt (1986), monitoring is an internal project activity designed to provide constant feedback on a project’s progress, problems, and implementation efficiency. The primary prerequisite for effective monitoring is having an Annual Work Plan and budget. Monitoring enables managers to identify potential problems and successes, providing a basis for corrective actions to improve project design, implementation, and results.[9]

Measuring results has enormous power. Monitoring has several powerful implications, such as distinguishing success from failure, rewarding success, and correcting failure. It allows learning from both success and failure, and demonstrating results can garner public support. Through monitoring, managers can also assess the continued relevance of a project, ensuring it supports development priorities, targets appropriate groups, and remains valid in changing environments. The requirements for effective monitoring are baseline data, performance and results indicators, and mechanisms for data collection such as field visits, stakeholder meetings, and systematic reporting.[10] Monitoring is the continual and systematic collection of data to provide information about project progress.[11]

Monitoring involves operational and administrative activities that track resource allocation, utilization, and the delivery of goods and services, as well as intermediate outcomes. It justifies resource allocation, improves service delivery, and demonstrates results for accountability. Monitoring addresses whether planned actions are taken and if progress toward desired results is achieved. It can focus narrowly on project and program implementation or broadly on tracking various stakeholders’ policies, strategies, and actions to ensure progress toward critical results. Monitoring supports management decisions by providing data to compare actual performance with original plans. According to the OECD’s Development Assistance Committee, monitoring is a continuous function that systematically collects data on specified indicators to inform management and stakeholders about the progress and achievement of objectives and the use of allocated funds.[12]

In this paper, I will use Patrick Gudda’s definition in his Guide to Project Monitoring and Evaluation, which states that monitoring is the art of collecting the necessary information with minimum effort to make a steering decision at the right time. This information also constitutes an essential database for analysis, discussion, (self-) evaluation, and reporting. Monitoring differs from evaluation in that it is a regular and systematic process integrated into the project/program cycle.[13]

 

  2. Evaluation

Program evaluation determines the value of a collection of projects. It looks across projects, examining the utility of the activities and strategies employed. Frequently, a full-blown program evaluation may be deferred until the program is well underway, but selected data on interim progress are collected annually. Project evaluation, in contrast, focuses on an individual project funded under the umbrella program. Project evaluation might also include an examination of specific critical components. The evaluation of a component frequently examines the extent to which its goals have been met (these goals are a subset of the overall project goals) and clarifies the extent to which the component contributes to the success or failure of the overall project.[14]

Evaluation is a periodic, in-depth analysis of program performance. It relies on data generated through monitoring activities and information from other sources (e.g., studies, research, in-depth interviews, focus group discussions, surveys, etc.). Evaluations are often (but not always) conducted with the assistance of external evaluators. Evaluation is undertaken selectively to answer specific questions to guide decision-makers and program managers and provide information on whether underlying theories and assumptions used in program development were valid, what worked and did not, and why.[15]

The OECD/DAC definition of evaluation is “an assessment, as systematic and objective as possible, of an ongoing or completed project, program or policy, its design, implementation, and results. The aim is to determine the relevance and fulfillment of objectives, developmental efficiency, effectiveness, impact, and sustainability. An evaluation should provide credible and useful information, enabling the incorporation of lessons learned into the decision-making process of both recipients and donors.” Evaluations involve identifying and reflecting upon the effects of what has been done and judging their worth. Their findings allow project/program managers, beneficiaries, partners, donors, and other project/program stakeholders to learn from the experience and improve future interventions.[16]

Evaluation is the periodic, retrospective assessment of an organization, project, or program, which might be conducted internally or by external independent evaluators.[17] Evaluation is the user-focused, systematic assessment of a project’s design, implementation, and results, whether ongoing or completed.[18]

In this paper, I will use Patrick Gudda’s definition in his Guide to Project Monitoring and Evaluation: Evaluation is a periodic, in-depth analysis of program performance. It relies on data generated through monitoring activities and information from other sources (e.g., studies, research, in-depth interviews, focus group discussions, surveys, etc.). Evaluations are often (but not always) conducted with the assistance of external evaluators.[19]

 

  3. Accountability

Accountability in M&E is a commitment to balance and respond to the needs of all stakeholders (including project participants, donors, partners, and the organization itself) in the project’s activities. Accountable projects are more relevant, likely to be supported by stakeholders, and ultimately will have a more significant impact. A commitment to accountability requires that project teams take proactive and reactive steps to address the needs of the project’s key stakeholders while delivering project results. [20]

Accountability ensures that the needs of all key stakeholders (e.g., your community, your members/supporters, the broader movement, funders, and supporters) are considered and respected during project implementation.[21] Accountability is an obligation to demonstrate that work has been conducted in compliance with agreed rules and standards or to report fairly and accurately on performance results vis-a-vis mandated roles and plans.[22] Weak accountability systems fuel corruption among officials, thus entrenching a culture of public sector corruption.[23]

In this paper, I will use the definition indicated in the Monitoring, Evaluation, and Learning for Development Professional Guide, which states that accountability is a commitment to balance and respond to the needs of all stakeholders (including project participants, donors, partners, and the organization itself) in the project’s activities.[24]

 

  4. M&E Framework

Many organizations and authors have defined the Monitoring and Evaluation Framework. I will provide three different definitions of the term and indicate the one to be used in this paper. A Monitoring and Evaluation Framework is a structured approach that guides the systematic collection, analysis, and use of data to track the progress and assess the impact of a program or project. It involves defining program objectives, developing performance indicators, establishing data collection methods, and outlining data analysis and reporting procedures.[25] A Monitoring and Evaluation Framework represents an overarching plan for undertaking monitoring and evaluation activities throughout a program. It includes a step-by-step guide to operationalizing these activities, defining the parameters for routine monitoring and periodic evaluation, and ensuring data collection, aggregation, and analysis are performed regularly to support formative and summative evaluation processes.[26]

Anne Markiewicz and Ian Patrick defined the M&E Framework as a comprehensive planning tool and document that guides the implementation of M&E activities throughout a program’s lifecycle.[27] It serves multiple purposes: tracking program progress, informing decision-making, ensuring accountability, and facilitating organizational learning. Ideally developed alongside program design, the M&E Framework outlines a systematic data collection, analysis, and reporting approach. It typically focuses on crucial evaluation domains such as appropriateness, effectiveness, efficiency, impact, and sustainability while considering cross-cutting issues like gender. The M&E Framework enables organizations to assess their initiatives’ performance, make informed adjustments, and demonstrate results to stakeholders by providing a structured plan for routine monitoring and periodic evaluation. Ultimately, it enhances programs’ overall value and impact by ensuring that learning and improvement are integral to the implementation process.[28] This paper will adopt the definition Anne Markiewicz and Ian Patrick set forth.

 

  5. Results-Based Management

The Organization for Economic Co-operation and Development (OECD) defines Results-Based Management (RBM) as a management strategy by which all actors, contributing directly or indirectly to achieving a set of results, ensure that their processes, products, and services contribute to the achievement of desired results (outputs, outcomes, and higher-level goals or impact). The strategy focuses on achieving results at all levels, clearly articulating the cause-effect relationship between inputs, activities, outputs, outcomes, and impacts.[29] UNDP sees it as a management strategy that uses feedback loops to achieve strategic goals. This approach ensures that the processes, outputs, and services are geared towards achieving desired outcomes and impacts. RBM includes the use of performance information to improve decision-making and program performance.[30] The World Bank views RBM as a strategy for improving management effectiveness and accountability by focusing on achieving results. It involves using evidence to inform decision-making processes, emphasizing planning, monitoring, and evaluating all aspects of a program or initiative to ensure that goals and objectives are met.[31]

Results-Based Management (RBM) is a widely adopted public sector management approach that originated in the 1980s, drawing from private and non-profit sector practices. It focuses on improving performance and achieving results, emphasizing accountability and evidence-based decision-making. It requires all actors to ensure their processes, products, and services contribute to desired results at various levels (outputs, outcomes, and impact). The approach integrates monitoring and evaluation as critical components for generating reliable evidence and breaking down traditional divides between planners, managers, and performance assessors. This interconnected, iterative approach is also characteristic of performance management, emphasizing the integration of planning, implementation, and assessment to drive continuous improvement and results achievement.[32] In this paper, I will use the definition presented by UNDP.

 

  6. Stakeholder and Stakeholder Management

The Project Management Institute (PMI) defines stakeholders as individuals, groups, or organizations that may affect, be affected by, or perceive themselves to be affected by a project, program, or portfolio’s decision, activity, or outcome.[33] The United Nations Environment Programme (UNEP) adds that stakeholders are those with an interest in a particular decision, either as individuals or as representatives of a group. This includes people who influence or can influence a decision and those affected by it.[34] I will use the PMI definition in this paper.

The Project Management Institute (PMI) defines stakeholder management as the systematic identification, analysis, planning, and implementation of actions to engage stakeholders.[35] The International Organization for Standardization (ISO) adds that stakeholder management involves managing stakeholder expectations and ensuring their engagement throughout the project’s life cycle. It includes identifying, analyzing, and regularly interacting with stakeholders to ensure their views are understood and considered in decision-making.[36] Stakeholder participation is essential in developing and determining the contents of a Monitoring and Evaluation Framework.[37] For this paper, I will use the definition of PMI.

  7. Ethical Standards in M&E

According to the American Evaluation Association (AEA), ethical standards in M&E emphasize the importance of evaluator integrity, respect for people, and the obligation to consider the public interest. Evaluators should ensure honesty, transparency, and fairness in their work, protect the rights of participants, and provide accurate and unbiased reporting.[38] Ethical standards in Monitoring, Evaluation, Accountability, and Learning (MEAL) are crucial for responsible project management and data collection. Properly designed MEAL systems can enhance project impact and decision-making, while poorly implemented ones can waste resources, compromise participant safety, and reduce project effectiveness. To address these risks, organizations have established ethical principles focusing on themes such as representation of all populations, informed consent, privacy and confidentiality, participant safety, data minimization, and responsible data usage. These principles ensure that MEAL activities respect participants’ rights, protect their welfare, and maintain professional standards. By adhering to these ethical guidelines, projects can collect relevant data, make informed decisions, and maximize their positive impact while minimizing potential harm to participants and communities.[39] This paper considers the definition of AEA.

  8. Participation and Critical Thinking in M&E

According to UNICEF, Participation in M&E refers to the active involvement of various stakeholders, especially those directly affected by the programs, in all phases of the evaluation process. This includes planning, data collection, analysis, and dissemination of findings to make the evaluation more democratic and empowering for those involved.[40] Critical thinking involves clear, rational, and evidence-informed thinking that is open to different perspectives.[41] Critical thinking requires project teams to identify and test their assumptions, ask thoughtful questions for deeper understanding, remain open to multiple viewpoints, and commit to reflection and analysis to inform actions in MEAL activities. By applying critical thinking, teams can reduce data collection and analysis bias, leading to more accurate and reliable results. This approach helps uncover hidden assumptions that may influence MEAL activities and ensures a more comprehensive and objective evaluation of project outcomes and impacts.[42]

 

  • Standard Processes for Designing M&E Frameworks and Importance of Context in M&E

Anne Markiewicz and Ian Patrick highlight 12 critical steps in developing an M&E framework: 1. The framework’s purpose (key stakeholders, purpose and focus, requirements and expectations, and stakeholder capacity needs). 2. Background and context of the program (program context, goal and objectives, program design). 3. Program theory and program logic (considerations, participatory approach, program theory, program logic). 4. Evaluation questions (considerations, participatory approach, and finalized questions). 5. The monitoring plan (approach to monitoring and the monitoring plan). 6. The evaluation plan (approach to evaluation and the evaluation plan). 7. Data collection plan (including managing potential ethical issues). 8. Data management plan.[43] 9. Data synthesis, judgments, and conclusions (approach to data synthesis, forming judgments, and reaching conclusions). 10. Learning strategy (organizational and program learning strategy and identifying recommendations and lessons). 11. Reporting and dissemination plan. 12. Implementation work plan (program management arrangements, work planning, and a monitoring and review framework).[44]

A robust framework must build on critical foundational concepts: 1. Multiple purposes for M&E (M&E serves as a progress tracker for program implementation, identifies results, provides a basis for accountability, and informs decision-making). 2. Informed by Results-Based Management (a dynamic and interlinked relationship between planning, monitoring, and evaluation). 3. An evaluation-led focus for monitoring and evaluation (evaluation questions guide both monitoring and evaluation). 4. A theory-based approach (establish the anticipated causal relationships with program results and use theory to organize and guide evaluation). 5. A participatory orientation (inputs and influence of stakeholders throughout the entire process).[45]

The value of an M&E framework lies in its multifaceted ability to track implementation progress, identify results, ensure accountability for funding, improve program performance, enhance service delivery, support learning and development, and inform policy and decision-making. Practitioners must consider generic functions and specific contextual needs, addressing political influences, stakeholder expectations, and practical constraints such as timing, feasibility, and resources. A well-designed M&E framework balances accountability and inclusiveness, providing a structured yet adaptable approach to meet diverse requirements and ensure comprehensive assessment.[46]

Results-Based Management (RBM) and theory-based approaches are integral to effective M&E frameworks. RBM emphasizes improving performance and achieving results by aligning organizational processes with desired outcomes, significantly influencing global public sector reforms, and promoting accountability through reliable evidence. Theory-based evaluation involves creating and testing program theories and logic to map causal pathways from actions to results, guiding the formulation and answering of evaluation questions using rigorous methods. Integrating evaluation theory into M&E processes elevates evaluative thinking, addresses the tension between accountability and learning, and promotes a comprehensive framework. This approach incorporates participatory methods, mixed methodologies, and a focus on organizational learning and capacity building, ensuring balanced and integrated assessments. Stakeholder participation is crucial for enhancing democratic representation and social justice, increasing the utility of evaluation findings, and providing diverse perspectives, thus building capacity and addressing power imbalances.[47]

The initial step in designing a functional M&E framework is scoping it. This involves understanding the context and purpose of the anticipated activities, addressing specific issues, and assessing the available and required resources. These elements provide the foundation for developing the framework. Effective planning is necessary to identify and engage stakeholders, ensuring their involvement in the framework’s development and any required capacity building. Stakeholders typically include policymakers, program funders, senior managers, program managers, deliverers, partners, and program beneficiaries or their representatives.[48] This phase involves: 1. identifying requirements, 2. determining participation arrangements, 3. identifying possible and preferred approaches, 4. reviewing resource parameters, and 5. confirming the purpose and parameters of the framework.[49]

The next step is to design the Program Theory and Program Logic. Program theory explains the reasoning and assumptions behind how and why a program’s strategies will achieve the intended results, often depicted in a causal model. Program logic visually maps the sequence from inputs to impacts, illustrating the steps to attain the program’s goals through connected diagrams. This approach makes the causal relationships between program actions and results explicit, reinforcing the identification of expected outcomes based on thorough analysis and clear communication. These elements should be articulated before commencing M&E activities to form hypotheses and identify critical variables, ensuring that evaluation methods are tailored to the specific concerns and challenges of the program.[50] This step consists of: 1. Plan stakeholder engagement strategy, 2. Develop or review program theory, 3. Develop or review program logic, and 4. Confirm program theory and logic with critical stakeholders.[51]
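To make the idea of program logic concrete, the following minimal Python sketch (with purely hypothetical content, not drawn from any specific project) represents the causal chain from inputs through activities, outputs, and outcomes to impact, together with the assumptions the M&E framework would later test.

# Minimal, hypothetical sketch of a program logic chain (inputs -> impact).
# The example content is illustrative only and not drawn from any real project.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProgramLogic:
    inputs: List[str]
    activities: List[str]
    outputs: List[str]
    outcomes: List[str]
    impact: str
    assumptions: List[str] = field(default_factory=list)  # causal assumptions to test

    def causal_chain(self) -> str:
        """Render the logic as a simple left-to-right causal chain."""
        return " -> ".join([
            "; ".join(self.inputs),
            "; ".join(self.activities),
            "; ".join(self.outputs),
            "; ".join(self.outcomes),
            self.impact,
        ])

# Hypothetical example for a community health program
logic = ProgramLogic(
    inputs=["funding", "trained facilitators"],
    activities=["community health education sessions"],
    outputs=["500 caregivers trained"],
    outcomes=["improved household health practices"],
    impact="reduced child morbidity",
    assumptions=["caregivers apply the practices at home"],
)
print(logic.causal_chain())

Rendering the chain in this explicit form mirrors the purpose of a program logic diagram: each link is a hypothesis that monitoring and evaluation data can later confirm or challenge.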

The third step is determining what we want to know (evaluation questions). The M&E framework should prioritize what we want to learn over what is easy to measure, with a significant focus on developing evaluation questions. These questions are essential for structuring and guiding the complementary monitoring and evaluation processes, ensuring relevance and usefulness. Well-crafted evaluation questions are critical for formulating both the Monitoring Plan and the Evaluation Plan, key components of the M&E Framework. Crafting meaningful evaluation questions requires considerable skill and insight.[52] This step involves: 1. developing draft evaluation questions (using the OECD’s widely adopted criteria of relevance (appropriateness[53]), effectiveness, efficiency, impact, and sustainability);[54] 2. facilitating stakeholder participation; 3. scoping the number and range of questions against the data and resources available; 4. presenting the questions to stakeholders for final endorsement; and 5. finalizing the evaluation questions.[55]
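As a simple illustration of how draft evaluation questions can be organized against the OECD criteria and scoped against available resources, the sketch below uses hypothetical questions and an assumed cap of ten questions (following the Better Evaluation suggestion cited later in this paper); both the questions and the cap are illustrative assumptions.

# Hypothetical draft evaluation questions grouped by the OECD criteria named above.
# The questions and the cap of ten are illustrative assumptions, not program content.
draft_questions = {
    "relevance/appropriateness": ["To what extent does the program address priority community needs?"],
    "effectiveness": ["To what extent were the intended outcomes achieved?"],
    "efficiency": ["Were resources used economically to deliver outputs on time?"],
    "impact": ["What longer-term changes, positive or negative, can be linked to the program?"],
    "sustainability": ["Are the benefits likely to continue after external funding ends?"],
}

total = sum(len(questions) for questions in draft_questions.values())
print(f"Draft evaluation questions: {total}")
if total > 10:  # scope the number of questions against data and resources available
    print("Consider reducing or merging questions before stakeholder endorsement.")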

Though the M&E framework presents the Monitoring Plan and the Evaluation Plan as two separate sections, I will summarize them together. The Monitoring Plan guides the systematic collection of performance information to answer the evaluation questions, which serve as a common reference point for both the Monitoring and Evaluation Plans.[56] The Monitoring Plan tracks implementation progress and early results, aligned with the evaluation criteria reflected in the evaluation questions.[57] The Evaluation Plan builds on monitoring data to assess whether the program achieves its intended results, identifying what works well and why, and evaluating the program’s quality and stakeholder satisfaction. It summarizes monitoring data and adds evaluative processes to answer the evaluation questions.[58]

The monitoring plan involves: 1. identifying the focus (to provide answers to the evaluation questions), 2. developing performance indicators and targets, 3. identifying data collection methods and tools, and 4. determining responsibilities and time frames.[59] The common focus areas for monitoring are: a. the context (appropriateness/relevance); b. implementation (effectiveness); c. management and governance (efficiency), including budget and stakeholders; d. initial program results (impact); and e. initial program benefits (sustainability).[60] The monitoring plan is presented as a matrix with the following columns from left to right: evaluation questions, the focus of monitoring, indicators, targets, monitoring data sources, and who is responsible and when.[61]

The evaluation plan consists of: 1. determining the overall evaluation approach; 2. identifying the evaluation questions requiring criteria and standards; 3. identifying the focus of evaluation and the method for each question; 4. determining responsibilities and time frames; and 5. reviewing the M&E plan.[62] The evaluation plan is presented as a matrix with the following columns from left to right: evaluation questions, a summary of monitoring, the focus of evaluation, evaluation method, method implementation, and who is responsible and when.[63] The integrated M&E plan is presented as a matrix with the following columns from left to right: evaluation questions, the focus of monitoring, indicators and targets, monitoring data sources, who and when, the focus of evaluation, evaluation methods, method implementation, and who and when.[64]
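To show how these matrices translate into a working record, the sketch below models one row of the integrated M&E plan as a Python data class, with fields ordered left to right as in the matrix described above; the indicator, target, data sources, and responsibilities are hypothetical placeholders rather than content from any actual program.

# One row of the integrated M&E plan matrix, with fields ordered left to right
# as described in the text. All contents are hypothetical placeholders.
from dataclasses import dataclass
from typing import List

@dataclass
class MEPlanRow:
    evaluation_question: str
    monitoring_focus: str
    indicators: List[str]
    targets: List[str]
    monitoring_data_sources: List[str]
    monitoring_who_when: str
    evaluation_focus: str
    evaluation_methods: List[str]
    method_implementation: str
    evaluation_who_when: str

row = MEPlanRow(
    evaluation_question="To what extent were the intended outcomes achieved?",
    monitoring_focus="implementation (effectiveness)",
    indicators=["% of trained caregivers adopting improved practices"],
    targets=["70% by end of year 2"],
    monitoring_data_sources=["quarterly household visit records"],
    monitoring_who_when="M&E officer, quarterly",
    evaluation_focus="achievement of intended outcomes",
    evaluation_methods=["pre-post survey", "semi-structured interviews"],
    method_implementation="external evaluator at mid-term and end of project",
    evaluation_who_when="evaluation team, months 18 and 36",
)
print(row.evaluation_question, "->", row.indicators[0])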

The sixth step of designing the M&E framework is collecting, managing, analyzing, and synthesizing data to reach evaluation conclusions. This step emphasizes the importance of sound evaluative judgments and conclusions in the monitoring and evaluation (M&E) process, which depends on solid program theory, logic, and evaluation questions, as well as effective monitoring and evaluation plans. Essential preconditions for achieving these judgments include planning for high-quality data collection, effective data management and storage, and robust data analysis and synthesis. The step details the development of a Data Collection Plan to guide the systematic collection of monitoring and evaluation data, including appropriate sampling techniques. It also outlines creating a Data Management Plan for entering, storing, managing, and analyzing data, particularly in program databases. Guidance is provided on integrating and synthesizing monitoring and evaluation data to form sound evaluative judgments and conclusions. This ensures that data are of high quality, properly managed, and effectively used to answer the evaluation questions. The broader organizational context influencing data handling and the importance of maintaining high data quality are also considered, with further steps on learning, reporting, and dissemination covered in the subsequent chapter.[65]

In data collection and management, the following steps are followed: 1. develop a data collection plan; 2. develop a data management plan; 3. consider an approach to data synthesis; and 4. consider the methods for making evaluative judgments and reaching evaluative conclusions.[66] The data collection plan examines methods such as pre-post surveys, case studies, and semi-structured stakeholder interviews, alongside each method’s purpose, focus, sampling, implementation, potential ethical issues, and the requirements for developing each plan.[67] This section also examines the sampling methods and their applications. The probability sampling methods discussed are: 1. simple random sampling and 2. stratified random sampling. The nonprobability sampling methods discussed are: 1. purposive sampling, 2. convenience sampling, 3. snowball sampling, and 4. self-selection sampling.[68]
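The sketch below illustrates the two probability sampling methods mentioned above, simple random sampling and stratified random sampling, using Python’s standard random module and a hypothetical list of 200 beneficiaries spread across two districts; the population, strata, and sample size are assumptions for the example.

# Illustrative sketch of two probability sampling methods: simple random sampling
# and stratified random sampling over a hypothetical beneficiary list.
import random

random.seed(42)
beneficiaries = [{"id": i, "district": "North" if i < 120 else "South"} for i in range(200)]

# 1. Simple random sampling: every beneficiary has an equal chance of selection.
simple_sample = random.sample(beneficiaries, k=20)

# 2. Stratified random sampling: sample proportionally within each district (stratum).
def stratified_sample(population, stratum_key, total_size):
    strata = {}
    for person in population:
        strata.setdefault(person[stratum_key], []).append(person)
    sample = []
    for members in strata.values():
        share = round(total_size * len(members) / len(population))
        sample.extend(random.sample(members, k=share))
    return sample

stratified = stratified_sample(beneficiaries, "district", total_size=20)
print(len(simple_sample), len(stratified))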

The data management plan includes the following: 1. database requirements; 2. data collection; 3. data entry; 4. data analysis; 5. database reports; and 6. staff training/orientation.[69] Data synthesis against the evaluation questions is presented in a matrix consisting of, from left to right: evaluation questions, performance indicators, targets, monitoring data, evaluation data, and data synthesis.[70] Data synthesis is then assessed against the evaluation questions using a rubric (standards or categories); this matrix consists of, from left to right: quality criteria, evaluation synthesis, standards (excellent, good, adequate, and poor), and evaluation judgments. Finally, evaluative judgments and conclusions against the evaluation questions are presented in a matrix consisting of, from left to right: evaluation questions, data synthesis, evaluation judgments, and evaluative findings.[71]
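The following minimal sketch shows how a rubric of the kind described above can turn synthesized monitoring and evaluation data into an evaluative judgment; the numeric thresholds attached to the excellent/good/adequate/poor standards, and the scores themselves, are illustrative assumptions rather than a prescribed scale.

# Hypothetical rubric for forming an evaluative judgment from synthesized data.
# The categories follow the standards named above (excellent, good, adequate, poor);
# the numeric thresholds are illustrative assumptions, not a prescribed scale.
def evaluative_judgment(percent_of_target_achieved: float) -> str:
    if percent_of_target_achieved >= 90:
        return "excellent"
    if percent_of_target_achieved >= 75:
        return "good"
    if percent_of_target_achieved >= 50:
        return "adequate"
    return "poor"

synthesis = {  # evaluation question -> % of target achieved (monitoring + evaluation data)
    "Were intended outcomes achieved?": 82.0,
    "Was the program delivered efficiently?": 47.5,
}
for question, score in synthesis.items():
    print(f"{question} -> {evaluative_judgment(score)}")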

The seventh critical step in developing an M&E framework is learning, reporting, and dissemination strategies. This step emphasizes the importance of learning within the M&E Framework, highlighting how it can be promoted through reporting and dissemination. It discusses generating and structuring lessons and recommendations from M&E activities, building on data synthesis and evaluative judgments. The step also explores increasing the usefulness of M&E products for program improvement and broader application to related programs and policies. Learning is identified as a critical component of RBM, facilitating ongoing feedback and improvement. This iterative process ensures that evaluative conclusions and lessons inform and refine program planning and implementation, enhancing overall effectiveness. Additionally, the step covers strategies for reporting and disseminating findings, recommendations, and lessons to maximize their impact and transferability within organizations and beyond.[72]

The learning, reporting, and dissemination step involves: 1. considering the development or refinement of a learning strategy for the program that maximizes the use of conclusions, recommendations, and lessons; and 2. considering processes for identifying recommendations and lessons and guiding the development of a reporting and dissemination strategy.[73] The indicative evaluation report structure is: 1. program overview; 2. foundations (program theory, logic, and evaluation questions); 3. methodology; 4. main results (OECD criteria, gender, cross-cutting issues, and overall evaluation conclusions); and 5. recommendations, learnings, and appendices.[74] The reporting and dissemination strategy is presented as a matrix consisting of reporting type, due date, audience and their interests, overall focus, contents, and dissemination.[75]
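A reporting and dissemination strategy of the kind described above can be captured in a simple structure such as the hypothetical sketch below, which uses the same columns (reporting type, due date, audience and interest, overall focus, contents, and dissemination); the report, date, and channels are placeholders.

# Hypothetical reporting and dissemination strategy row, using the columns listed above.
reporting_plan = [
    {
        "reporting_type": "mid-term evaluation report",
        "due_date": "2025-06-30",
        "audience": "donor and program managers (accountability, course correction)",
        "overall_focus": "progress against outcomes and early lessons",
        "contents": "OECD criteria findings, recommendations, lessons",
        "dissemination": "full report plus a two-page brief shared with community representatives",
    },
]
for item in reporting_plan:
    print(f"{item['reporting_type']} due {item['due_date']} -> {item['dissemination']}")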

The eighth and final step of the main processes of developing an M&E framework is planning for implementation of the M&E framework. This step focuses on planning the implementation of the M&E Framework, emphasizing the importance of a staged approach and stakeholder involvement. Practical project management principles guide this process, clearly identifying tasks, timelines, and responsibilities. The implementation requires ongoing monitoring, periodic adjustments, and reviews to assess its effectiveness. The context assumes integration within organizational structures for decision-making and resource allocation, promoting synergy with RBM principles. Successful implementation relies on leadership support, adequate resources, staff training, and data quality. The step outlines a six-stage process, culminating in detailed planning for implementation, including steps for stakeholder engagement, developing evaluation constructs, and data management. This comprehensive approach ensures the M&E Framework is effectively embedded and continuously improved within the organizational context.[76]

The main steps in implementing the M&E framework are: 1. confirming program management arrangements, 2. developing a work plan for implementation, and 3. planning for monitoring and review of the framework.[77]

 

  • Key Differences Between Standard M&E Frameworks and West African Practices

In about 18 years of my development journey in West Africa, I have interacted with over 100 development projects in various roles (proposal design, program implementation, evaluator, and commissioner of evaluations). These projects fall into three categories: 1. Small projects (these typically involve lower budgets, fewer resources, and shorter durations; they are often simple in scope and can be managed with minimal oversight and documentation, for example small IT upgrades or minor office renovations). 2. Medium projects (these have moderate budgets and resources; they require more detailed planning and management than small projects and may involve multiple teams or departments). 3. Large projects (these involve substantial budgets, extensive resources, and longer durations; they require comprehensive planning, significant stakeholder involvement, and detailed documentation).[78] In my experience, small projects are below one million USD, while large projects are above ten million USD.

Monitoring and Evaluation practices in West Africa are deeply influenced by local socio-economic, political, and cultural dynamics, necessitating significant adaptation of international frameworks to address local realities. Stakeholder involvement is often more complex due to diverse interests and power dynamics among community members, local authorities, and donors. Challenges such as limited resources, insufficient training, and infrastructural issues hinder the effective implementation of standard M&E processes. The prevalence of informal systems and flexibility in rapidly changing environments require more adaptive and context-specific approaches. These differences highlight the importance of customizing M&E frameworks to local contexts and integrating traditional knowledge with standard methodologies to enhance relevance and effectiveness. Recognizing and addressing these key differences can improve the design and implementation of M&E systems, leading to more successful and sustainable development outcomes.[79]

In my experience, small and medium-sized projects in West Africa often do not take the time to articulate the foundations of the M&E framework, especially building the framework on RBM, an evaluation-led focus for monitoring and evaluation, and a participatory orientation. Only about 50% of the large projects I have been involved in engage stakeholders effectively and sustain a participatory orientation. All projects I have engaged with in West Africa attempt to design a logical framework that articulates a theory-based approach, with varying levels of detail and quality (quality also improves with project size). Some components of the framework are pulled directly from the project proposal. Most staff involved in program design are core program implementation staff with limited M&E experience (especially for small and medium-sized projects). An assessment of 30 international non-governmental organizations (INGOs) in Ghana indicated that only 31% of M&E staff engaged in proposal design.[80] This is the case in most West African countries.

Most projects struggle with effective stakeholder involvement when scoping the M&E framework. Medium and large projects often engage stakeholders in designing the proposal, but such external stakeholder participation usually does not continue when designing the M&E framework. Of the more than 100 projects I have engaged with, fewer than 5% involved even limited external stakeholder engagement in developing their M&E frameworks. Among small and medium-sized projects, less than 1% have held participatory design sessions in which M&E issues were integrated into the design and carried into the M&E framework. The weak engagement of stakeholders is mainly due to limited resources, weak capacity, complexity, the perceived low cost-benefit of the participatory process, and the perceived delays that participation brings.[81] Viewing stakeholder engagement as a donor requirement does not help much either, since in most cases the donor does not require such engagement at the M&E framework stage, further reducing participation. The dominance of supply-side accountability means the focus is primarily on reporting to donors and funding agencies rather than on engaging with and being accountable to beneficiaries and local communities. This upward accountability can overshadow the importance of integrating feedback and perspectives from demand-side stakeholders, thus limiting their involvement in the M&E framework.[82]

The standard for designing an M&E framework is to engage three categories of stakeholder representation: Category A (funders, policymakers, program designers, senior managers), Category B (program managers, program implementers, service delivery partners), and Category C (program beneficiaries’ representatives).[83] However, in my experience, only some Category A and B stakeholders are occasionally engaged, and usually only when it is a donor requirement. Only about 2% of the projects I have worked on engaged stakeholders from all categories before finalizing their M&E frameworks.

Over 95% of the projects I have engaged with in West Africa have at least one of the three widely used logic models: 1. theory of change (ToC), 2. results framework, or 3. logical (log) framework.[84] The most widely used are the ToC and the log frame, which set the basis for program theory and program logic as one step in building an M&E framework. An assessment of 30 INGOs in Ghana in 2022 indicated that 82% had a logic model at the design stage of all their projects, and 75% had most of the program theory and logic elements.[85] I found that the adoption of program theory and program logic in West African projects has gone beyond donor requirements, as projects of all sizes systematically adopt them in most organizations engaged in social development interventions. The challenge continues to be weak stakeholder engagement across the board, which has continually undermined the quality of the assumptions built into program theory and logic. Most assumptions are not well interrogated, and different assumptions surface during implementation. This has been the case with currency fluctuations in Ghana and Nigeria, which have caught most program managers off guard.

All projects I have worked on in West Africa use evaluation questions only at the evaluation level; they do not consider evaluation questions at the monitoring level. About 90% of projects do not design their evaluation questions at the planning stage. This is a departure from the standard, which requires evaluation questions to be designed at the early stages of the project to guide both monitoring and evaluation. Fewer than 5% of the projects I have engaged with in West Africa involve all required stakeholders in the design of evaluation questions at any stage. The evaluation questions are often proposed by M&E staff when designing requests for proposals (RFPs) for evaluations (mostly mid-term and end-of-project evaluations). Most often, inputs to the evaluation questions come from donors and program implementation colleagues during the review of the proposed RFP. Each evaluation usually comes with its own questions, making consistency difficult.[86]

The core attributes of evaluation questions are: 1. agreed (the evaluation questions for monitoring and evaluation should be a consensus among the main stakeholders); 2. practical (it is possible to gather reliable and affordable data, and the number and range of questions are within the scope of the program’s resources); and 3. helpful (the questions provide data relevant to assessing progress and program value).[87] In practice, monitoring questions are seldom agreed upon, even in large projects, while evaluation questions do not engage a wide range of stakeholders. Even though all organizations use the OECD criteria for evaluation questions, programs often adopt ‘relevance’ in a limited scope rather than the more robust ‘appropriateness’.[88] In some cases, programs add more criteria and sometimes over-focus on these additional criteria rather than the standard ones used to assess the program’s value and progress. Though there is no fixed rule on how many evaluation questions should be included, I have occasionally seen projects with 20 to more than 50. It is best to keep the number below 15; Better Evaluation has proposed keeping it below 10.[89]

Over 80% of the development projects I have engaged with have at least a monitoring plan, and about 50% have both a monitoring and an evaluation plan. Because donors often require them, all medium and large projects have both plans. The challenge I have usually seen in projects implemented in the region is at the level of indicator design and target setting. Many projects use project-specific indicators that are not as well designed and tested as standard indicators. All small projects I have engaged with fall into this trap; medium and large projects use some standard indicators, but I still find the proportion smaller than it should be (less than 75%). In addition, about 75% of projects I have engaged with have a baseline, but the baseline does not provide values for all project indicators, making the design of the M&E framework challenging.

One of the main challenges is developing a comprehensive performance management plan (PMP) or a quality Performance Indicator Reference Sheet (PIRS). The PIRS defines each performance indicator and is critical to ensuring indicator data quality and consistency.[90] Small projects and about 50% of medium-sized projects do not develop a PMP or PIRS for their indicators at project set-up; without a solid PMP or PIRS, the M&E framework is fragile. The other 50% of medium-sized projects and the large projects often develop a PMP or PIRS because it is a donor requirement. If we are to improve the quality of overall M&E, the PMP or PIRS must be solid. Another major challenge for projects in the region is setting sound indicator targets. About 50% of projects do not follow appropriate processes or use adequate methods to set targets for all indicators. In large projects, targets are well set for the life of the project, but project teams seldom break them down into monthly, quarterly, semi-annual, and annual targets, making monitoring challenging. About 30% of projects do not maintain a functional indicator performance tracking table (IPTT), making it difficult to track and report progress on the indicators. Using a functional IPTT[91] to track project indicators throughout implementation improves the project success rate by at least 50%.
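To illustrate the value of breaking life-of-project targets into periodic targets and tracking them in an IPTT, the sketch below uses a hypothetical indicator with an assumed quarterly breakdown; the indicator, targets, and actuals are illustrative numbers only, not data from any project.

# Minimal sketch of an indicator performance tracking table (IPTT) row, assuming a
# hypothetical indicator with a life-of-project target broken into quarterly targets.
quarterly_targets = [200, 300, 600, 900]   # planned breakdown across four quarters
quarterly_actuals = [180, 310, 550, 0]     # 0 = quarter not yet reported

cumulative_target = cumulative_actual = 0
for quarter, (planned, actual) in enumerate(zip(quarterly_targets, quarterly_actuals), start=1):
    cumulative_target += planned
    cumulative_actual += actual
    achievement = 100 * cumulative_actual / cumulative_target
    print(f"Q{quarter}: planned {planned}, actual {actual}, "
          f"cumulative achievement {achievement:.1f}% of target to date")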

In my experience, only 50% of medium-sized projects and 100% of large projects develop an evaluation plan before project commencement or at the set-up stage, partly driven by donor requirements. At evaluation, all projects that conduct a mid-term evaluation (regardless of size) use at least four of the five OECD evaluation criteria to articulate evaluation questions. All evaluations add at least one new criterion, such as gender-related issues, partnership management, or scale-up. The process of developing the evaluation typically involves only the implementing partners and the donors. Over 80% of the projects I have engaged with in West Africa use non-experimental designs and qualitative evaluation. This aligns with the methods used in most African evaluations, where about 56% use qualitative methods.[92]

The M&E standard framework requires that evaluations be based on specific theories; these include participatory, empowerment, theory-based, developmental, and utilization-focused evaluation.[93] Most projects that applied an evaluation theory used utilization-focused evaluation for mid-term evaluations and theory-based evaluation for end-of-project evaluations, each often combined with participatory approaches. Outcome harvesting and the most significant change technique have gained traction among smaller projects. Most projects (about 95%) focus more on contribution than attribution. All small projects and about 50% of medium-sized projects use internal evaluators, while all large projects and the other 50% of medium-sized projects use external evaluators.

At the data collection and management planning stage, a clear data collection plan is rarely developed at the start of the project. Less than 10% of the projects I have engaged with in West Africa have a data quality assessment or control protocol. Though data quality assessment is not systematically planned and carried out, some data control does happen. Data quality is essential for effective management decision-making and is characterized by five key attributes. Validity ensures data accurately represent the intended results. Integrity involves implementing safeguards to minimize bias, transcription errors, and data manipulation. Precision requires data to be sufficiently detailed to support informed decision-making. Reliability is achieved through consistent data collection processes and analysis methods over time. Timeliness ensures that data are available frequently enough, are current, and are provided promptly to influence management decisions. These five attributes collectively contribute to high-quality data that can be trusted and effectively used in organizational planning, monitoring, and evaluation processes.[94]
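As an illustration of how some of these attributes can be checked routinely, the sketch below runs minimal validity and timeliness checks over hypothetical monitoring records; the field names, deadline, and rules are assumptions made for the example and cover only two of the five attributes.

# Illustrative, minimal checks for two of the five data quality attributes named above
# (validity and timeliness), run over hypothetical monitoring records.
from datetime import date

records = [
    {"indicator": "caregivers_trained", "value": 45, "report_date": date(2024, 4, 3)},
    {"indicator": "caregivers_trained", "value": -3, "report_date": date(2024, 1, 20)},  # invalid value
]

reporting_deadline = date(2024, 3, 31)

for r in records:
    valid = r["value"] >= 0                          # validity: value plausibly represents the result
    timely = r["report_date"] <= reporting_deadline  # timeliness: reported by the agreed deadline
    print(f"{r['indicator']}: value={r['value']} valid={valid} timely={timely}")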

None of the small projects have a clear data management plan, a database system, a clear approach to data synthesis, or a data storage protocol. Large projects do not have these challenges, as donors require such systems at the start of the project. Small and medium-sized projects also do not often use systematic sampling protocols as large ones do, and they do not usually align their data collection tools with their indicators. The tools frequently contain many questions that are not needed for the project. This does not align with one of the M&E standards, data minimization (collecting only the data directly relevant to project needs).[95]

At the learning, reporting, and dissemination phase of the M&E framework, I have noticed a strong focus on activity and monitoring reporting and a limited focus on the quality of results and evaluation reporting. Small and medium-sized projects often do not develop a learning agenda or learning questions; they only gather learnings as they happen and do not plan the learning process. About 50% of the large projects I have engaged with in the region have a reasonably robust learning agenda, straightforward learning questions, and a learning reporting plan. An assessment of 30 INGOs operating in Ghana indicated that only 45% of the organizations systematically design learning agendas for their projects.[96]

Among 30 INGOs operating in Ghana, participants indicated that the most popular means of disseminating findings is producing and sharing the entire report (36.4%), while the least popular methods are newsletters, policy briefs, and fact sheets (4.5%).[97] Small and medium-sized projects generally do not structure their reporting and do not often produce other dissemination materials. Only about 5% of the projects I have engaged with consider project beneficiaries in their report dissemination processes.

  • Underlying Causes of the Differences

The differences between theoretical processes for designing M&E frameworks and the practices adopted in the field can be attributed to several underlying causes. One significant factor is contextual adaptability. Theoretical frameworks often provide a standardized approach, which may not always align with field operations’ diverse and dynamic contexts. Local socio-economic, cultural, and political environments necessitate modifications to the theoretical models to address specific challenges and opportunities unique to the region.[98] Another critical factor is the availability of resources. Implementing theoretical M&E frameworks often assumes the availability of substantial financial, human, and technical resources. However, these resources are limited in many field settings, particularly in developing regions. This disparity necessitates adjustments in the M&E processes to fit the available resources, which can lead to deviations from the theoretical models.[99]

Stakeholder engagement also plays a critical role in the differences observed. Theoretical M&E frameworks emphasize comprehensive stakeholder involvement throughout the evaluation process. In practice, however, engaging all relevant stakeholders can be challenging because of logistical constraints, communication barriers, and power dynamics. These challenges often make M&E processes less inclusive and participatory than theoretically prescribed.[100] Technical expertise is another significant factor. Theoretical frameworks assume a certain level of technical proficiency among the practitioners implementing the M&E processes, but in practice, especially in remote or under-resourced areas, there is often a skills gap. This gap can lead to improper implementation of M&E activities and reliance on simpler, less rigorous methods.

In addition, institutional support and leadership commitment are crucial for effectively implementing theoretical M&E frameworks. In many field settings, there may be a lack of institutional backing or inconsistent leadership support, which undermines the proper execution of M&E activities as designed. This lack of support can lead to fragmented or incomplete M&E processes, diverging significantly from the theoretical ideals.[101]

Lastly, the gap between theory and practice can be attributed to the evolving nature of the M&E field itself. As new approaches and methodologies emerge, their adoption and integration into field practices often lag. Patton notes that evaluation is a young field still developing its philosophical foundations, methodological frameworks, and practice standards.[102] This ongoing evolution means that theoretical ideals may sometimes outpace practical implementation, leading to disparities between what is advocated in theory and what is achievable in practice.

  • Impact on Program Success

An adequately designed M&E framework can significantly impact the overall success of a development program by providing a structured approach to assessing performance and making informed decisions. One significant impact is the improvement of program effectiveness. By setting clear objectives, indicators, and methodologies for data collection and analysis, an M&E framework ensures that program activities are aligned with desired outcomes. This alignment helps identify what works well and what needs adjustment, leading to more effective interventions.[103]
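As a simple illustration of how such alignment can be tracked in practice, the hypothetical sketch below compares each indicator's actual value against its baseline and target to flag what needs adjustment; the indicator names, figures, and the 80% "on track" threshold are assumptions made for the example, not values from any real program.

```python
# Hypothetical indicator records linking activities to the desired outcome.
indicators = [
    {"name": "farmers_trained", "baseline": 0, "target": 1200, "actual": 950},
    {"name": "yield_increase_pct", "baseline": 0.0, "target": 15.0, "actual": 6.5},
]

def progress_report(items):
    """Compare actuals against targets to show what is working and what needs adjustment."""
    for ind in items:
        achieved = (ind["actual"] - ind["baseline"]) / (ind["target"] - ind["baseline"])
        status = "on track" if achieved >= 0.8 else "needs adjustment"
        print(f"{ind['name']}: {achieved:.0%} of target ({status})")

progress_report(indicators)
# farmers_trained: 79% of target (needs adjustment)
# yield_increase_pct: 43% of target (needs adjustment)
```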

Moreover, a well-designed M&E framework enhances accountability and transparency. It allows stakeholders, including donors, program managers, and beneficiaries, to see how resources are used and the results achieved. This transparency fosters trust and supports the program’s credibility, which is crucial for securing ongoing and future funding. Regular reporting and dissemination of findings based on the M&E framework provide a clear account of progress and challenges, essential for maintaining stakeholder confidence.[104] Learning and improvement are also vital benefits of a robust M&E framework. Continuous monitoring and periodic evaluations generate valuable insights and lessons learned, which can be applied to enhance program design and implementation. This iterative learning process helps programs to adapt to changing conditions and improve over time, making them more resilient and sustainable. By institutionalizing learning mechanisms, organizations can build a culture of continuous improvement.[105]

Another critical impact is on resource allocation and efficiency. A well-functioning M&E framework provides detailed information on program performance, which helps managers allocate resources more efficiently. By identifying successful strategies and areas of waste, programs can optimize their budgets and resources, ensuring that they are directed towards activities that generate the most significant impact. This efficient resource management is vital for maximizing the value of investments in development programs.[106] Finally, a comprehensive M&E framework supports evidence-based decision-making. Reliable and timely data collected through the framework enable program managers and policymakers to make informed decisions. This evidence-based approach helps fine-tune strategies, scale successful interventions, and make necessary course corrections. As a result, programs are better equipped to achieve their goals and deliver sustainable outcomes for their target populations.

 

  • Conclusion

The comparison between standard processes for designing M&E frameworks and actual practices in West Africa reveals significant disparities, particularly in the context of small and medium-sized projects. These differences are evident across various aspects of M&E, including stakeholder engagement, program theory and logic development, evaluation question formulation, and data management. The study highlights that larger projects adhere more closely to theoretical standards, while smaller initiatives often lack comprehensive M&E frameworks due to resource constraints and limited expertise. The underlying causes of these disparities include contextual adaptability issues, resource limitations, stakeholder engagement challenges, and technical expertise gaps. The dynamic socio-economic and cultural environments in West Africa necessitate modifications to standardized M&E approaches. Additionally, the evolving nature of the M&E field itself contributes to the gap between theory and practice, as new methodologies and techniques may not be quickly adopted in field settings.

These differences significantly impact program success. While well-designed M&E frameworks can enhance program effectiveness, accountability, and learning, the observed disparities often lead to suboptimal outcomes. The lack of comprehensive stakeholder engagement and robust data management systems, particularly in smaller projects, can result in missed opportunities for improvement and reduced program impact. However, the study also notes positive trends, such as the widespread adoption of logic models and increasing recognition of the importance of M&E in development projects. Moving forward, there is a clear need for tailored approaches to M&E in West Africa that balance theoretical best practices with practical realities. This may involve developing simplified yet effective M&E frameworks for smaller projects, increasing capacity-building efforts, and promoting more context-specific M&E methodologies. Future research and practice should focus on bridging the gap between theory and practice, particularly in resource-constrained environments, to enhance regional development initiatives’ overall effectiveness and impact.

REFERENCES AND BIBLIOGRAPHY

African Union Commission. MONITORING AND EVALUATION FRAMEWORK 2020 – 2030 FOR THE AU/ILO/IOM/UNECA JOINT PROGRAMME ON LABOUR MIGRATION GOVERNANCE FOR DEVELOPMENT AND INTEGRATION IN AFRICA (JLMP). Addis Ababa, Ethiopia: African Union Commission Press, 2020.

American Evaluation Association (AEA). Guiding Principles for Evaluators. Washington, DC: American Evaluation Association, 2004. https://www.eval.org/p/cm/ld/fid=51.

Asogwa, Ikenna Elias, Maria Estela Varua, Rina Datt, and Peter Humphreys. “Accounting for Stakeholder Engagement in Developing Countries: Proposing an Engagement System to Respond to Sustainability Demands.” Meditari Accountancy Research 32, no. 3 (April 25, 2024): 888–922.

Bamberger, Michael, Jim Rugh, and Linda Mabry. RealWorld Evaluation: Working under Budget, Time, Data, and Political Constraints. 2nd ed. Thousand Oaks, Calif: SAGE, 2012.

BetterEvaluation. “Specify the Key Evaluation Questions – Rainbow Framework.” BetterEvaluation, 2023. https://www.betterevaluation.org/frameworks-guides/rainbow-framework/frame/specify-key-evaluation-questions.

Chanase, Gervin. “How CSOs Can Set Up and Sustain an M&E System: An Introduction for Development Practitioners.” WACSeries Op-Ed (blog), March 2021.

Culligan, Mike, and Leslie Sherriff. Monitoring, Evaluation, Accountability, and Learning for Development Professionals Guide. First Edition. Washington, DC: Humentum, 2019.

Designing a Results Framework for Achieving Results: A How-to Guide. Washington, DC: World Bank, 2012.

EvalCommunity. “Understanding Stakeholders in Monitoring and Evaluation (M&E).” EvalCommunity: Jobs and Experts (blog), 2023. https://www.evalcommunity.com/career-center/stakeholder-engagement/.

Flick, Uwe, ed. The SAGE Handbook of Qualitative Research Design, 2 Volume Set. London: Sage Publications, 2022.

Global Fund. Key Performance Indicators (KPIs) Handbook for the 2023-2028 Strategy. Global Fund, 2023.

Goergens, Marelize, and Jody Zall Kusek. Making Monitoring and Evaluation Systems Work: A Capacity Development Tool Kit. The World Bank, 2010. http://elibrary.worldbank.org/doi/book/10.1596/978-0-8213-8186-1.

Gudda, Patrick. A Guide to Project Monitoring & Evaluation. Bloomington, IN: AuthorHouse, 2011.

Hobson, Kersty, Ruth Mayne, and Jo Hamilton. A Step-by-Step Guide to Monitoring and Evaluation. Oxford: Oxford University Press, 2013.

International Federation of Red Cross and Red Crescent Societies. Project/Programme Monitoring and Evaluation (M&E) Guide. Geneva, Switzerland: IFRC, 2011.

International Organization for Standardization. Guidance on Project Management. Switzerland: ISO 21500, 2012. https://www.isopm.ru/download/iso_21500.pdf.

Keney, Gabriel. “Different Strokes for Different People: Letting Evidence Talk in Different Ways.” Ghana, July 14, 2024.

Kettner, Peter M., Robert Moroney, and Lawrence L. Martin. Designing and Managing Programs: An Effectiveness-Based Approach. Fifth edition. Los Angeles: SAGE, 2017.

Kusek, Jody Zall, and Ray C. Rist. Ten Steps to a Results-Based Monitoring and Evaluation System: A Handbook for Development Practitioners. Washington, DC: World Bank, 2004.

Markiewicz, Anne, and Ian Patrick. Developing Monitoring and Evaluation Frameworks. Los Angeles: Sage, 2016.

Masvaure, Steven, and Tebogo E. Fish. “Strengthening and Measuring Monitoring and Evaluation Capacity in Selected African Programmes.” African Evaluation Journal 10, no. 1 (December 15, 2022).

Mutie, Rogers. CRACKING THE MONITORING AND EVALUATION CAREER Ten Other Competencies That Will Drive Excellence in Your M&E Practice. Kansas, USA: Ascend Books, 2021.

OECD. Monitoring and Evaluation Framework OECD DUE DILIGENCE GUIDANCE FOR RESPONSIBLE SUPPLY CHAINS OF MINERALS FROM CONFLICT-AFFECTED AND HIGH-RISK AREAS. Brussels: OECD, 2021.

Organisation for Economic Co-operation and Development (OECD). Glossary of Key Terms in Evaluation and Results-Based Management. Paris: OECD Publications, 2002. https://www.oecd.org/dac/evaluation/2754804.pdf.

Patton, Michael Quinn. “Evaluation Science.” American Journal of Evaluation 39, no. 2 (June 2018): 183–200. https://doi.org/10.1177/1098214018763121.

Project Management Institute, ed. A Guide to the Project Management Body of Knowledge: PMBOK® Guide. 5. ed. PMI Global Standard. Newtown Square, Pa: PMI, 2013.

ProThoughts. “Types of Projects – What Are the Classifications in Project Management?,” November 9, 2023. https://prothoughts.co.in/blog/types-of-projects/.

The Government of Ghana. NATIONAL MONITORING AND EVALUATION MANUAL. Accra – Ghana: The Government of Ghana, 2014.

Tifuntoh, Christopher Konde. “ASSESSING THE UPTAKE OF EVALUATION FINDINGS AND LEARNED LESSONS FOR PROGRAM IMPROVEMENT. THE CASE OF INTERNATIONAL NGOS IN GHANA.” GHANA INSTITUTE OF MANAGEMENT AND PUBLIC ADMINISTRATION (GIMPA), 2023.

UNAIDS. Basic Terminology and Frameworks for Monitoring and Evaluation. Washington, D.C.: UNAIDS, 2023. https://www.unaids.org/sites/default/files/sub_landing/files/7_1-Basic-Terminology-and-Frameworks-MEF.pdf.

United Nations Children’s Fund (UNICEF). Evaluation for Equitable Development Results. New York: United Nations Children’s Fund (UNICEF), 2016. https://www.wcasa.org/wp-content/uploads/2020/03/Evaluation_Evaluation-for-Equitable-Developmental-Results.pdf.

United Nations Development Programme (UNDP). HANDBOOK ON PLANNING, MONITORING AND EVALUATING FOR DEVELOPMENT RESULTS. New York, NY 10017, USA: United Nations Development Programme (UNDP), 2009. http://www.undp.org/eo/handbook.

United Nations Environment Programme (UNEP). Life Cycle Approaches. The Road from Analysis to Practice. Paris: UNEP/ SETAC Life Cycle Initiative, 2005. http://www.uneptie.org.

USAID. How-To Note: Conduct a Data Quality Assessment. 3 vols. Washington, D.C.: USAID, 2021. https://usaidlearninglab.org/system/files/resource/files/how-to_note_-_conduct_a_dqa-final2021.pdf.

———. Performance Indicator Reference Sheet (PIRS). Guidance & Template. Washington, DC: USAID, 2022.

Vaidya, Anand Jayprakash, and Andrew Erickson. Logic & Critical Reasoning: Conceptual Foundations and Techniques of Evaluation. Dubuque, IA: Kendall Hunt, 2011.

Weltbank. A Guide to the World Bank. 3. ed. Washington, DC: The World Bank, 2011.

 

[1] Steven Masvaure and Tebogo E. Fish, Strengthening and Measuring Monitoring and Evaluation Capacity in Selected African Programmes, African Evaluation Journal 10, no. 1 (December 15, 2022): 1.

[2] Jody Zall Kusek and Ray C. Rist, Ten Steps to a Results-Based Monitoring and Evaluation System: A Handbook for Development Practitioners (Washington, DC: World Bank, 2004), 94.

[3] United Nations Development Programme (UNDP), HANDBOOK ON PLANNING, MONITORING AND EVALUATING FOR DEVELOPMENT RESULTS (New York, NY 10017, USA: United Nations Development Programme (UNDP), 2009), 83–84, http://www.undp.org/eo/handbook.

[4] Michael Bamberger, Jim Rugh, and Linda Mabry, RealWorld Evaluation: Working under Budget, Time, Data, and Political Constraints, 2nd ed (Thousand Oaks, Calif: SAGE, 2012), 17.

[5] Weltbank, A Guide to the World Bank, 3. ed (Washington, DC: The World Bank, 2011), 84.

[6] Anne Markiewicz and Ian Patrick, Developing Monitoring and Evaluation Frameworks (Los Angeles: Sage, 2016), 8–10.

[7] Uwe Flick, ed., The SAGE Handbook of Qualitative Research Design, 2 Volume Set (London: Sage Publications, 2022), 12.

[8] Anand Jayprakash Vaidya and Andrew Erickson, Logic & Critical Reasoning: Conceptual Foundations and Evaluation Techniques (Dubuque, IA: Kendall Hunt, 2011), 38–39.

[9] Patrick Gudda, A Guide to Project Monitoring & Evaluation (Bloomington, IN: AuthorHouse, 2011), 6.

[10] Gudda, 7–14.

[11] Mike Culligan and Leslie Sherriff, Monitoring, Evaluation, Accountability, and Learning for Development Professionals Guide, First Edition (Washington, DC: Humentum, 2019), 3.

[12] The Government of Ghana, NATIONAL MONITORING AND EVALUATION MANUAL (Accra – Ghana: The Government of Ghana, 2014), 19.

[13] Gudda, A Guide to Project Monitoring & Evaluation, 6.

[14] Gudda, 56.

[15] Gudda, 57.

[16] International Federation of Red Cross and Red Crescent Societies, Project/Programme Monitoring and Evaluation (M&E) Guide (Geneva, Switzerland: IFRC, 2011), 15.

[17] Kersty Hobson, Ruth Mayne, and Jo Hamilton, A Step-by-Step Guide to Monitoring and Evaluation (Oxford: Oxford University Press, 2013), 5.

[18] Culligan and Sherriff, Monitoring, Evaluation, Accountability, and Learning for Development Professionals Guide, 3.

[19] Gudda, A Guide to Project Monitoring & Evaluation, 58–59.

[20] Culligan and Sherriff, Monitoring, Evaluation, Accountability, and Learning for Development Professionals Guide, 4–5.

[21] Hobson, Mayne, and Hamilton, A Step-by-Step Guide to Monitoring and Evaluation, 6.

[22] Gudda, A Guide to Project Monitoring & Evaluation, 163.

[23] Rogers Mutie, CRACKING THE MONITORING AND EVALUATION CAREER Ten Other Competencies That Will Drive Excellence in Your M&E Practice (Kansas, USA: Ascend Books, 2021), 74.

[24] Culligan and Sherriff, Monitoring, Evaluation, Accountability, and Learning for Development Professionals Guide, 4.

[25] Kusek and Rist, Ten Steps to a Results-Based Monitoring and Evaluation System, 143.

[26] Marelize Goergens and Jody Zall Kusek, Making Monitoring and Evaluation Systems Work: A Capacity Development Tool Kit (The World Bank, 2010), 145–46, http://elibrary.worldbank.org/doi/book/10.1596/978-0-8213-8186-1.

[27] Markiewicz and Patrick, Developing Monitoring and Evaluation Frameworks, 21.

[28] Markiewicz and Patrick, 21.

[29] Organisation for Economic Co-operation and Development (OECD), Glossary of Key Terms in Evaluation and Results Based Management (Paris: OECD Publications, 2002), 33, https://www.oecd.org/dac/evaluation/2754804.pdf.

[30] United Nations Development Programme (UNDP), HANDBOOK ON PLANNING, MONITORING AND EVALUATING FOR DEVELOPMENT RESULTS, 10.

[31] Designing a Results Framework for Achieving Results: A How-to Guide (Washington, DC: World Bank, 2012), 7.

[32] Markiewicz and Patrick, Developing Monitoring and Evaluation Frameworks, 55.

[33] Project Management Institute, ed., A Guide to the Project Management Body of Knowledge: PMBOK® Guide, 5. ed, PMI Global Standard (Newtown Square, Pa: PMI, 2013), 30.

[34] United Nations Environment Programme (UNEP), Life Cycle Approaches. The Road from Analysis to Practice. (Paris: UNEP/ SETAC Life Cycle Initiative, 2005), 23, http://www.uneptie.org.

[35] Project Management Institute, A Guide to the Project Management Body of Knowledge, 391.

[36] International Organization for Standardization, Guidance on Project Management (Switzerland: ISO 21500, 2012), 18, https://www.isopm.ru/download/iso_21500.pdf.

[37] Markiewicz and Patrick, Developing Monitoring and Evaluation Frameworks, 63.

[38] American Evaluation Association (AEA), Guiding Principles for Evaluators (Washington, DC: American Evaluation Association, 2004), 3, https://www.eval.org/p/cm/ld/fid=51.

[39] Culligan and Sherriff, Monitoring, Evaluation, Accountability, and Learning for Development Professionals Guide, 7–8.

[40] United Nations Children’s Fund (UNICEF), Evaluation for Equitable Development Results (New York: United Nations Children’s Fund (UNICEF), 2016), 74, https://www.wcasa.org/wp-content/uploads/2020/03/Evaluation_Evaluation-for-Equitable-Developmental-Results.pdf.

[41] Culligan and Sherriff, Monitoring, Evaluation, Accountability, and Learning for Development Professionals Guide, 8.

[42] Culligan and Sherriff, 9.

[43] Markiewicz and Patrick, Developing Monitoring and Evaluation Frameworks, 44.

[44] Markiewicz and Patrick, 117.

[45] Markiewicz and Patrick, 52.

[46] Markiewicz and Patrick, 53–54.

[47] Markiewicz and Patrick, 55–65.

[48] Markiewicz and Patrick, 66.

[49] Markiewicz and Patrick, 69.

[50] Markiewicz and Patrick, 1–2.

[51] Markiewicz and Patrick, 12.

[52] Markiewicz and Patrick, 29–30.

[53] Markiewicz and Patrick, 37.

[54] African Union Commission, MONITORING AND EVALUATION FRAMEWORK 2020 – 2030 FOR THE AU/ILO/IOM/UNECA JOINT PROGRAMME ON LABOUR MIGRATION GOVERNANCE FOR DEVELOPMENT AND INTEGRATION IN AFRICA (JLMP) (Addis Ababa, Ethiopia: African Union Commission Press, 2020), 42.

[55] Markiewicz and Patrick, Developing Monitoring and Evaluation Frameworks, 47.

[56] Markiewicz and Patrick, 59.

[57] Markiewicz and Patrick, 60.

[58] Markiewicz and Patrick, 1–2.

[59] Markiewicz and Patrick, 67.

[60] Markiewicz and Patrick, 69.

[61] Markiewicz and Patrick, 66.

[62] Markiewicz and Patrick, 15.

[63] Markiewicz and Patrick, 12.

[64] Markiewicz and Patrick, 32.

[65] Markiewicz and Patrick, 37–38.

[66] Markiewicz and Patrick, 47.

[67] Markiewicz and Patrick, 50.

[68] Markiewicz and Patrick, 53.

[69] Markiewicz and Patrick, 56.

[70] Markiewicz and Patrick, 61.

[71] Markiewicz and Patrick, 67.

[72] Markiewicz and Patrick, 74–75.

[73] Markiewicz and Patrick, 76.

[74] Markiewicz and Patrick, 87–88.

[75] Markiewicz and Patrick, 91.

[76] Markiewicz and Patrick, 99–102.

[77] Markiewicz and Patrick, 102.

[78] ProThoughts, “Types of Projects – What Are the Classifications in Project Management?,” November 9, 2023, 1–5, https://prothoughts.co.in/blog/types-of-projects/.

[79] Gervin Chanase, “How CSOs Can Set Up and Sustain an M&E System: An Introduction for Development Practitioners,” WACSeries Op-Ed (blog), March 2021, 2–7.

[80] Christopher Konde Tifuntoh, “ASSESSING THE UPTAKE OF EVALUATION FINDINGS AND LEARNED LESSONS FOR PROGRAM IMPROVEMENT. THE CASE OF INTERNATIONAL NGOS IN GHANA” (Ghana, GHANA INSTITUTE OF MANAGEMENT AND PUBLIC ADMINISTRATION (GIMPA), 2023), 67.

[81] EvalCommunity, “Understanding Stakeholders in Monitoring and Evaluation (M&E),” EvalCommunity: Jobs and Experts (blog), 2023, 5, https://www.evalcommunity.com/career-center/stakeholder-engagement/.

[82] Ikenna Elias Asogwa et al., “Accounting for Stakeholder Engagement in Developing Countries: Proposing an Engagement System to Respond to Sustainability Demands,” Meditari Accountancy Research 32, no. 3 (April 25, 2024): 888–922.

[83] Markiewicz and Patrick, Developing Monitoring and Evaluation Frameworks, 75.

[84] Culligan and Sherriff, Monitoring, Evaluation, Accountability, and Learning for Development Professionals Guide, 12.

[85] Tifuntoh, “ASSESSING THE UPTAKE OF EVALUATION FINDINGS AND LEARNED LESSONS FOR PROGRAM IMPROVEMENT. THE CASE OF INTERNATIONAL NGOS IN GHANA,” 68–69.

[86] OECD, Monitoring and Evaluation Framework OECD DUE DILIGENCE GUIDANCE FOR RESPONSIBLE SUPPLY CHAINS OF MINERALS FROM CONFLICT-AFFECTED AND HIGH-RISK AREAS (Brussels: OECD, 2021), 61.

[87] Markiewicz and Patrick, Developing Monitoring and Evaluation Frameworks, 33.

[88] Markiewicz and Patrick, 37.

[89] BetterEvaluation, “Specify the Key Evaluation Questions – Rainbow Framework,” BetterEvaluation, 2023, 3, https://www.betterevaluation.org/frameworks-guides/rainbow-framework/frame/specify-key-evaluation-questions.

[90] USAID, Performance Indicator Reference Sheet (PIRS). Guidance & Template (Washington, DC: USAID, 2022), 3.

[91] Culligan and Sherriff, Monitoring, Evaluation, Accountability, and Learning for Development Professionals Guide, 52.

[92] Gabriel Keney, “Different Strokes for Different People: Letting Evidence Talk in Different Ways” (Ghana, July 14, 2024), 12.

[93] Markiewicz and Patrick, Developing Monitoring and Evaluation Frameworks, 18.

[94] USAID, How-To Note: Conduct a Data Quality Assessment (Washington, D.C.: USAID, 2021), 1, https://usaidlearninglab.org/system/files/resource/files/how-to_note_-_conduct_a_dqa-final2021.pdf.

[95] Culligan and Sherriff, Monitoring, Evaluation, Accountability, and Learning for Development Professionals Guide, 7.

[96] Tifuntoh, “ASSESSING THE UPTAKE OF EVALUATION FINDINGS AND LEARNED LESSONS FOR PROGRAM IMPROVEMENT. THE CASE OF INTERNATIONAL NGOS IN GHANA,” 76.

[97] Tifuntoh, 80.

[98] UNAIDS, Basic Terminology and Frameworks for Monitoring and Evaluation (Washington, D.C.: UNAIDS, 2023), 15, https://www.unaids.org/sites/default/files/sub_landing/files/7_1-Basic-Terminology-and-Frameworks-MEF.pdf.

[99] UNAIDS, 18.

[100] EvalCommunity, “Understanding Stakeholders in Monitoring and Evaluation (M&E),” 5.

[101] Peter M. Kettner, Robert Moroney, and Lawrence L. Martin, Designing and Managing Programs: An Effectiveness-Based Approach, Fifth edition (Los Angeles: SAGE, 2017), 306.

[102] Michael Quinn Patton, “Evaluation Science,” American Journal of Evaluation 39, no. 2 (June 2018): 4, https://doi.org/10.1177/1098214018763121.

[103] USAID, How-To Note: Conduct a Data Quality Assessment, 5.

[104] Goergens and Kusek, Making Monitoring and Evaluation Systems Work, 14.

[105] United Nations Development Programme (UNDP), HANDBOOK ON PLANNING, MONITORING AND EVALUATING FOR DEVELOPMENT RESULTS, 23.

[106] Global Fund, Key Performance Indicators (KPIs) Handbook for the 2023-2028 Strategy (Global Fund, 2023), 118.
