The International Lawyer


The new law and the corresponding OMB and key foreign aid agencies' guidelines require providers to follow best practices in the monitoring and evaluation (M&E) of U.S. government (USG) foreign aid.2 A recent study conducted by the U.S. Government Accountability Office across the key USG foreign aid agencies identified a number of areas needing improvement in the design, implementation, conclusions, and dissemination of foreign assistance evaluations.3 FATAA and the relevant guidelines will require providers to address those areas and focus their reporting requirements on tangible outcomes and the impact of their programming.

In recent years, federal agencies have placed increasing emphasis on demonstrating effectiveness through rigorous evaluations.6 But there are concerns that funding levels have increased while the efficiency and effectiveness of aid remain opaque and uncertain.7

B. Focus of FATAA

In response to these concerns, Congress passed the new FATAA legislation, which will impact all areas of U.S. foreign assistance.8 The main focus of the legislation is a shift toward the outcomes and impact of foreign assistance funding.9 Agencies are required not only to measure outputs (e.g., the number of kilometers built, malaria nets provided, or anti-corruption trainings held for judges), but also to assess outcomes and impacts (e.g., cost savings for vehicle owners, decreases in the prevalence of malaria, and decreases in corruption). Here are several examples:

* The U.S. Department of State (DOS) issued an integrated Program Design and Performance Management Toolkit in October 2016;12 revised and updated its program and project design, monitoring, and evaluation policy in November 2017;13 and updated its guidance in 2018.14 Subsequently, DOS issued an updated M&E policy in January 2018 in compliance with the January 2018 OMB Guidelines.15

* The U.S. Agency for International Development (USAID) revised its Automated Directives System (ADS) Chapter 201, addressing evaluation guidance, planning, and implementation, in September 2016 and again in October 2018.16 USAID also developed toolkits to cover its work under FATAA: one for Monitoring, one for Evaluation, and one for Collaborating, Learning, and Adapting.17

* The Millennium Challenge Corporation (MCC) issued its March 2017 Policy for Monitoring and Evaluation,18 which requires that compact M&E plans identify and describe their evaluation methodologies, key evaluation questions, and data collection strategies.

* The U.S. Department of Defense (DOD) issued agency-wide evaluation guidance for security cooperation in January 2017 in its DOD Instruction 5132.14, Assessment, Monitoring, and Evaluation Policy for the Security Cooperation Enterprise.19

III. Current Condition of Foreign Assistance Evaluations' Quality, Cost, and Dissemination

A recent study conducted by the U.S. Government Accountability Office (GAO)20 sheds some light on the current condition of M&E across the USG.21 GAO's study provides a baseline assessment of the quality, cost, and dissemination of foreign assistance evaluations when the legislation took effect.22 It focuses on the six agencies providing "the largest amount of U.S. foreign assistance": USAID, DOS, MCC, USDA, HHS, and DOD.23 The study found that about three quarters of the 170 evaluations completed in fiscal year 2015 by these agencies and reviewed by GAO generally or partially addressed all of the quality criteria GAO identified for evaluation design, implementation, and conclusions.24 Agencies met some elements of the GAO quality criteria more often than others.

B. Evaluation Implementation

A key element in assessing evaluation implementation is the extent to which the target population and sampling, data collection, and data analysis were appropriate for the study questions.
Because the target population is the group about which the researcher would like to make statements in the evaluation, it is important that the group is clearly defined and includes all potential beneficiaries of the program.