TY - JOUR
AU - Major, Solomon, PhD
AB - Matching is part of our everyday life, whether it is matching debits to credits, finding the right match for a residency program or, in some instances, using match.com. The bottom line is that finding the right match is important. Yet when applying assessment techniques to Global Health Engagements (GHEs), the Department of Defense (DoD) sometimes engenders a mismatch by using measures of effectiveness (MOEs) that are ill-suited to the objectives sought by program and project stakeholders.

To remedy this mismatch, it must first be recognized that a policy- and operationally relevant analysis of mission impact(s) cannot be effectively undertaken without first considering whether the "level of analysis" at which the assessment is undertaken is suitably matched to the goals of the missions at hand.1 In certain cases, collecting MOE data on localized effects (e.g., for a particular clinic's catchment area or in a specific village or town) is sufficient for determining mission "success," and we consider some of those special cases below. For the moment, however, it is important to note that it is often necessary to collect data at the country level in order to measure engagements' strategic efficacy when considering their contribution to the Geographic Combatant Command's (GCC's) Theater Campaign Plans and country plans.

For example, consider a notional immunization engagement. Ideally, the GHE immunizes several hundred patients (a measure of performance [MOP]) and thereby substantially lowers disease prevalence within the treated area (a MOE). But was this GHE successful? If the question is whether it had a locally favorable impact, the assessment should focus on these local measures of success. But if the question is whether it supported the GCC's country plan, the MOEs used for the assessment must reflect this broader focus. If the country plan's goals were to, say, improve citizens' perception of the host nation government or reduce political instability in the country, health-specific and localized effects may be insufficient to demonstrate success. Analyzing the contribution of these engagements to broader GCC goals will thus require that data be collected and analyzed at the national, rather than the village, level.

Measuring the national impact of a single project, such as in our immunization example, may be problematic, however, because it is unlikely that any single local mission will have a sufficiently large impact to affect national-level MOEs, no matter how well it was executed. Aggregating local missions at the national level, in an effort to demonstrate the missions' combined strategic effectiveness, may help to resolve this problem. Of course, even aggregated GHE activities may still be insufficient to measurably alter outcomes countrywide, because changing any national outcome (e.g., security, health, economy) is difficult and often requires great effort. But the larger and more intense the effort, the greater its chance of making a positive impact, underscoring the points made by Lt Gen Robb at the 2014 AMSUS Meeting, where he argued that GHEs have become too scattered and haphazard and that the military health community must reduce its one-off projects and focus resources and attention on critical partners and key programs. None of this is to say that individual or local activities are unimportant or that their impact is impossible to measure.
As noted above, data collection and analysis of local GHEs are critical to the overall assessment system in two distinct ways: individual missions are essential sources of data, and measuring local effects can be important when DoD is a supporting agency.

Assessments of aggregated activities would not be possible without high-quality data, which allow assessors to measure the intensity of effort. Without measuring the degree to which GHE immunization activities increased or decreased in a host nation, it would be impossible to determine whether better national health outcomes were associated with higher levels of DoD activity. It is therefore essential that "downrange" personnel rigorously capture MOPs so that they can then be aggregated at the GCC level. Useful data would include, but not be limited to, personnel, budgets, and the number and type of equipment deployed. It would also be useful to have information about local conditions, such as the host nation's capacity to provide health services and the quality of its infrastructure. These data should be recorded in data repositories, such as the Overseas Humanitarian Assistance Shared Information System, the Global Theater Security Cooperation Information Management System, or the Joint Lessons Learned Information System. Wherever these data are stored, however, it is critical that they be recorded according to a systematic, rigorous, routinized, and regularized information-gathering and storage methodology.2 The current after-action report system, the Joint Lessons Learned Information System, is often left unexplored by analysts because the narrative structure of each individual after-action report lacks standardization, making it difficult to discern trends across time and space. Reforming this process through standardized data collection practices has the potential to transform the current way of doing business from stagnant reports into an active knowledge integration system.

Besides serving as input for higher-level analysis, local MOE analysis can also be an important end in itself when, for example, DoD is a supporting agency. For instance, the Defense HIV/AIDS Prevention Program (DHAPP) supports the President's Emergency Plan for AIDS Relief (PEPFAR). In this case, the supported program, PEPFAR, should be responsible for determining the strategic success of the effort (e.g., a reduction in HIV incidence in host nations), whereas DHAPP should be responsible for determining its own efficacy at the local level. When the relevant measure of success is found at the local level (the village or town), so too should be the level of analysis. In such cases, it would be inappropriate for DoD to measure success against strategic MOEs (which are properly the responsibility of the supported agency), and local MOEs should therefore be developed to match the local nature of the assessment required.

Measuring impact at the local level is not as simple as collecting data on local MOPs and MOEs, however. To assess impact effectively, a control group must be used to determine the efficacy of local GHEs. Without a control group, it is not possible to determine whether a favorable change in the population is the result of the GHE or of other factors. At the strategic/national level, cross-country comparisons provide a natural control community; to establish the local or tactical impact of DoD GHEs, constructing a similar comparison group is essential.
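To make the comparison-group logic concrete, the following minimal sketch (not from the original article; the catchment areas, column names, and figures are entirely hypothetical) shows a simple difference-in-differences calculation in Python with pandas, in which the change in a local MOE observed in treated areas is netted against the change observed in comparable control areas.

# Difference-in-differences sketch using hypothetical catchment-area data.
# "prevalence" stands in for a local MOE; all values are invented for illustration.
import pandas as pd

records = pd.DataFrame({
    "catchment":  ["A", "A", "B", "B", "C", "C", "D", "D"],
    "treated":    [1, 1, 1, 1, 0, 0, 0, 0],          # 1 = GHE conducted in this area
    "period":     ["pre", "post"] * 4,               # before/after the engagement
    "prevalence": [0.22, 0.14, 0.25, 0.18, 0.21, 0.20, 0.24, 0.22],
})

# Mean outcome in each (treated, period) cell.
cells = records.groupby(["treated", "period"])["prevalence"].mean()

# Change over time in treated areas, net of the change in control areas.
did = (cells.loc[(1, "post")] - cells.loc[(1, "pre")]) \
    - (cells.loc[(0, "post")] - cells.loc[(0, "pre")])
print(f"Difference-in-differences estimate: {did:+.3f}")   # -0.060 with these data

Under the usual parallel-trends assumption, only the portion of the improvement that exceeds the background trend in the control areas is attributed to the engagement; the same logic carries over to the comparison-group selection techniques discussed next.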
The selection of control villages or catchment areas can be done on the basis of observable selection characteristics (propensity score matching), of time-invariant but unobservable characteristics (difference-in-differences), or of time-variant and unobservable characteristics (randomized controlled trials).3 Although the last is generally considered the most effective assessment technique, political considerations may limit its use in DoD engagements and GHEs.

The principal responsibility for monitoring and evaluation must lie with strategic-level actors (e.g., GCC or Headquarters staff). While this is especially true for strategic assessments, higher echelons should also be willing and able to provide resources even for localized and tactical-level assessments, since personnel executing missions are unlikely to have the expertise, time, or capacity to render effective assessments. That said, under the direction given in the "Policy Guidance for DoD Global Health Engagement" (also known as the "Global Health Policy Cable"), personnel at the tactical level remain responsible for collecting and recording the highest-quality MOPs possible so that strategic-level assessments can be facilitated.

Finally, ensuring that there is a good match between the level of assessment and desired stakeholder outcomes is crucial to applying the most appropriate measurement methodology, whether to a specific engagement or to aggregated engagements. As professor Patricia Rogers argues, "what we understand to be the 'gold standard' for impact evaluations [should be] appropriateness, not any one particular method [. … this] involves matching the design to the needs of the particular situation. The two key questions that need to be answered before developing an impact evaluation design are therefore 'What is the nature of the intervention?' and 'Why is an impact evaluation being done?'"4

REFERENCES

1. U.S. Department of Defense: Joint Publication 5-0, Joint Operation Planning. Washington, DC, U.S. Department of Defense, p D-6. Available at http://www.dtic.mil/doctrine/new_pubs/jp5_0.pdf; accessed September 14, 2015.

2. White H: Challenges in evaluating development effectiveness. In: Evaluating Development Effectiveness, pp 33–54. Edited by Pitman GK, Feinstein ON, Ingram GK. Washington, DC, The World Bank, 2004.

3. Khandker SR, Koolwal GB, Samad HA: Handbook on Impact Evaluation: Quantitative Methods and Practices. Washington, DC, World Bank Publications, 2009.

4. Chambers R, Karlan D, Ravallion M, Rogers P: Designing impact evaluations: different perspectives. Working Paper 4, International Initiative for Impact Evaluation, pp 24–25. New Delhi, 2009. Available at http://betterevaluation.org/resource/overview/Designing_impact_evaluations_different_perspectives; accessed September 14, 2015.

Reprint & Copyright © Association of Military Surgeons of the U.S. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
TI - Finding a Healthy Match: A Discussion on Applying the Appropriate Level of Analysis for Global Health Engagement Activities
JF - Military Medicine
DO - 10.7205/MILMED-D-15-00403
DA - 2016-03-01
UR - https://www.deepdyve.com/lp/oxford-university-press/finding-a-healthy-match-a-discussion-on-applying-the-appropriate-level-NbIUgqbdWN
SP - 189
EP - 190
VL - 181
IS - 3
DP - DeepDyve
ER -