CRP Review - Use
What is the main purpose of this review?
These reviews are not intended to be full programmatic evaluations, which cover more criteria and require a larger investment. Instead, they provide value for money for CGIAR by putting pertinent questions to independent experts, thereby fulfilling a key responsibility of CRPs: demonstrating accountability to system funders.
Don’t internal monitoring and annual reporting fulfill the same function as these reviews?
Beyond standard internal monitoring and reporting, which in most circumstances would be a key element of the accountability system, there is utility, as well as a donor expectation, in conducting external and independent reviews. Data from monitoring and reporting systems will be an important, but not sole, source of information. External expert opinion will allow qualitative assessment of reports and conclusions that have so far been vetted only by internal stakeholders.
What are the expectations for this review?
This review aims to satisfy donor due diligence and accountability mechanisms. While each donor may have a nuanced expectation, the overall expectation communicated to CAS by SIMEC over the past year is an independent review delivered with a ‘light touch’. We therefore expect this review to fulfill that expectation for System Council donors. Interactions with SIMEC, on behalf of the System Council, will continue to be invited throughout the year to ensure transparency. With CRP management, numerous touchpoints have been added to the review timeframes, and clear, open lines of communication with system-management stakeholders have been established. After Action Reviews in June (i.e., after the first three reviews are complete) will test the fitness for purpose of the review approach and allow its refinement, while maintaining a consistent methodology for all CRP reviews.
How will the review feed into the ongoing changes with One CGIAR reform process?
The overarching scope of the 2020 review is a concerted accountability focus related to the current portfolio. However, future-facing recommendations will aim to offer evidence-based, system-wide learning during the current change process. The reviews will provide evidence useful to the design and implementation of future research modalities, from the point of view of the Quality of Science (QoS) and effectiveness criteria under examination. There is, therefore, a strong sense of urgency in this project: cross-program and system-wide observations related to QoS and effectiveness must reach the larger change process underway in CGIAR in a timely fashion. The main entry points we envisage for evidence-based, cross-program learning are the detailed strategic planning and operationalization of the portfolio from early 2021. During 2020, however, the CRP Review team is working in close coordination with the CGIAR SMO Programs Unit and will be liaising through the Independent Science for Development Council (ISDC) Chair, in his capacity as a member of the Research Transition Advisory Group, to align with any opportunities to bring information into the One CGIAR research modality conversations. Given the pace of the change process, we will continuously assess relevant entry points.
How will the assessment of Quality of Research for Development be incorporated in this review cycle, and how does it relate to the Quality of Science evaluation criteria?
CRPs will not be evaluated against the QoR4D per se; rather, mapping and cross-referencing will contribute to system-level learning and to the eventual revision of the Evaluation Policy. This CRP review will map how the building blocks of QoR4D and two of CGIAR’s six evaluation criteria, with their indicators, weave together in the context of research on agriculture and food security. Although the review employs only two of the six evaluation criteria, it nevertheless provides a rich opportunity to map review questions and indicators against the QoR4D elements. This mapping will strengthen the scope and methodology used to assess QoR4D across CGIAR.
Note: Alignment of the CGIAR Evaluation Policy (2012) and Evaluation Guidelines (2014-2015) with the QoR4D frame of reference (2017) is an ongoing task within the CAS Secretariat that will fully crystallize with revision of the CGIAR Evaluation policy. Revision to the policy will be proposed in the CAS 2021 workplan, once all of the governance, management and research modalities of One CGIAR are confirmed.
CRP Review - Scope
Why are Platforms not being reviewed in 2020? What are the plans for assessing their performance?
The TORs for the Platforms will need to be designed to reflect the Platforms’ nature of work and purpose, and will differ substantially from the current TORs in the review questions and indicators assessed. For this reason, the CAS Secretariat has proposed to SIMEC that the Platforms be subject to assessment in a later workplan.
Will the scope include achievements and results from the earlier phase?
The reviews’ emphasis is on progress and achievements in the current funding cycle. Nevertheless, the reviews will also consider the previous period for those programs where there is continuity between the first- and second-generation CRPs. Analyses of selected OICRs that entail legacy research will bridge to the first phase. Previous evaluations (where they exist) will be a launching point, allowing a degree of reflection on what was learned in Phase I and adopted in Phase II, where applicable.
How does this scope and approach differ from earlier System-led independent evaluations?
These are targeted reviews rather than full independent evaluations. One of the main differences of this review, compared with past evaluations, is that 12 CRP reviews are being conducted almost simultaneously under one umbrella Terms of Reference. In addition, as these are desk reviews within a tight timeframe, they will rely more heavily on program information and reporting documentation already available. CAS staff will also conduct pre-analysis and prepare systematized information packages for review teams, with the aim of focusing the independent experts’ time on qualitative assessment of the information provided. Finally, the focus is on two of the six criteria enumerated in the evaluation guidelines currently in effect: Quality of Science and Effectiveness.
To what extent are partnerships subject to the reviews?
Some elements of partnership will be included in the examination against the evaluation criteria of Effectiveness and Quality of Science, and we expect partnership to be more central to some CRPs than others. Some indicators for study pertain specifically to partnerships. The review design intends to include research, scaling and funding partners among the key informants in the online survey and semi-structured interviews.
Will the reviews look at funding modalities and the relative benefits and costs of the various modalities in place during this phase?
To the extent that the design looks at high-quality inputs, we are able to examine inputs related to teams, facilities, equipment and research tools. However, detailed examination of the financing mechanisms (i.e., the blending of Windows 1, 2 and 3 with bilateral grants, and the relative benefits and costs of the current system) is outside the scope. Furthermore, the evaluation criterion of efficiency is not part of the review design. For more information on the topic generally, the IEA evaluations at the end of CRP Phase I generated system-level findings and recommendations related to modalities, available on the CAS Secretariat / Evaluation website.
Will the review look at efficiency at the program level?
The evaluation criterion of efficiency is not part of the review design. For this rapid review, Quality of Science and Effectiveness evaluation criteria have been prioritized.
CRP Review - Methodological approach
What opportunities are there for consultation and feedback with CRP management during the review?
In addition to seeking feedback on initial design (through consultation on the abbreviated TORs), the approximately 10-week review period will schedule the following stages:
- Feedback on preliminary findings prior to initiating drafting of review report
- Validation of OICRs selected for in-depth analysis
- Sharing draft report with CRP management for fact check and, as required, to flag additional inputs
Will the same review questions and indicators be applied to all 12 programs?
A core principle of the 2020 CRP Review is a harmonized approach. One of the main differences of this review, compared with past independent evaluations in CGIAR, is that 12 CRP reviews are being conducted almost simultaneously under one umbrella TOR; this will facilitate elaborating findings, conclusions and recommendations across CRPs.
What approach and methodology will be used for the review?
Reviews will draw on mixed quantitative and qualitative data. Under the effectiveness criterion, deep dives into a limited sample of outcomes per program will be pursued. A theory-based approach will be used, which we consider the best solution in this case; hence the need to look at the Theory of Change (TOC) for each CRP, or whatever alternative type of program logic has driven program design and implementation. The desk review will thus assess, across the board, the use and utility of TOCs as a design, strategic and organizational approach to AR4D.
Why is the review being conducted in 2020 when programs end in 2021?
Based on CGIAR donors’ need for accountability for the CRPs, the review is scheduled for 2020. Timing in 2020 will also allow CRPs and the CGIAR System to glean lessons to take forward into the change process, and will provide assurances to date on program delivery under the current portfolio. The reviews will be geared to estimating full life-of-program achievement, based on achievement to date. It is accepted that, given their timing, there may be limited capacity to implement many findings and recommendations within the remaining life of the current programming cycle.
How will experts be identified and prepared for the program review on the rapid time schedule?
A core team within the CAS Secretariat/evaluation function (the CAS evaluation team), which has a mandate to conduct independent evaluation in CGIAR, is managing and coordinating the review process. This team will identify experts who will dedicate a total of approximately 80-90 working days to each CRP review, with the CAS evaluation team providing support through data collection and cleaning, pre-analysis, report writing, quality assurance, liaison and overall coordination. Ongoing management and guidance by the CAS evaluation team will ensure a harmonized and consistent approach. To identify experts, the CAS evaluation team is releasing a public notice in early March to build a roster of expert evaluators on whom it will call throughout 2020. For every program review, we seek consultants offering subject-matter expertise in domains relevant to the program under review, and a track record of high-quality, recognized evaluative, assessment and review activity in R4D contexts. All experts must demonstrate and maintain compliance with the CAS Conflict of Interest policy (revised in 2019) prior to contracting. Once engaged, experts will receive a briefing package to familiarize them with each program. The inception period of the review is reduced, but inductions will occur for every reviewer.
How will cross-program and system-level findings be developed?
During the year, the CAS evaluation team will track, and communicate to stakeholders, emerging evidence of a cross-program and system-wide nature as review reports are released. CAS/Evaluation anticipates proposing to SIMEC, in its 2021 workplan, a synthesis report that compiles and further analyzes cross-program and system-level findings and recommendations into one report, for future use within the system.
While review teams (and the CAS evaluation team) will be attuned to cross-program and system-level findings, the review methodology does not entail a comparative review across programs, and fully recognizes the differing operating contexts, budget envelopes and other differential factors across the portfolio.