Conceptualizing and measuring performance orientation of research funding systems

Abstract

In this article, we propose a synthetic indicator for the performance orientation of public R&D funding at the country level that allows for quantitative comparisons across countries and over time, and we illustrate the methodology for its computation and validation. The indicator characterizes R&D funding from the state in terms of the extent to which the beneficiaries' performance is taken into account in the allocation of resources. Building on the literature on research funding, the indicator combines information on how funding is allocated with quantitative measures of funding volumes. It is operationalized in terms of a fine-grained decomposition of public R&D funding into instruments, each of them characterized by its mode and criteria of allocation. Building on a large-scale European project, we test the operationalization of the indicator for a sample of European countries, and we perform a set of sensitivity and robustness analyses to investigate the impact of definitions and of data issues, particularly as regards cross-country comparisons. We conclude with a discussion of the advantages and limitations of the indicator and by proposing a research agenda for its further development.

1. Introduction

Public R&D funding systems in European countries have experienced deep changes since the 1980s, a transition labelled as the move from a historically based allocation to a 'new funding regime' oriented towards competition and performance (Geuna 2001; Slaughter and Rhoades 2004). The literature describes two main patterns of change, i.e. the increase in the share of project funds assigned to research teams (Lepori et al. 2007) and the introduction of so-called performance-based research funding systems (PRFSs), i.e. the allocation of institutional funding based on the assessment of research performance at the organizational level (Hicks 2012; Geuna and Piolatto 2016).
The introduction of a competitive and performance-based allocation was driven by a set of beliefs that national research performance could be improved by selectively supporting the best research organizations and groups and by setting incentives for public-sector research (Bonaccorsi 2007; Aghion et al. 2010). Yet, evidence that this goal has been achieved is, at best, inconclusive (Nieminen and Auranen 2010). On the contrary, adverse impacts of performance orientation have been suggested, including a focus on short-term research, the increasing costs of evaluation, and gaming by researchers (Laudel 2006). A further trend has been the push towards income diversification, by increasing the amount of resources provided by private contracts and by charities, even if these sources represent a limited share of R&D funding in most European countries (with the partial exception of the UK) and for most universities (Estermann and Pruvot 2012; Teixeira et al. 2014). The literature has identified wide differences between countries in how performance-oriented funding systems have been introduced, in terms of the balance between instruments, of the criteria adopted to measure performance, and of the amount of funding involved (Hicks 2012; de Boer et al. 2015; Jonkers and Zacharewicz 2016). Yet, we currently lack a robust methodological framework to systematically compare the performance orientation of public R&D funding across countries and over time. This article aims at filling this gap by developing and empirically testing a quantitative measure of the extent to which the (past or expected) performance of beneficiaries is taken into account when distributing public R&D funding, broadly identified through the perimeter of Government Budget Allocations for R&D (GBARD) in the Frascati Manual (FM) (OECD 2015).
By combining these measures with information on the amount of funding by instrument, it becomes possible to compute a synthetic indicator of the country-level performance orientation of public R&D funding. We operationalize the indicator in terms of a fine-grained decomposition of public R&D funding into instruments, each characterized by its mode and criteria of allocation, and by integrating information from studies and policy documents with expert judgments on the less formal dimensions of allocation. The proposed indicator can be broadly understood as a characterization of the underlying rationales and criteria through which public R&D funding is distributed; as such, it describes an important component of the governance of the (public) research system, whose implications for the conduct of research and its outputs have still to be fully understood. In the article, we test the indicator for a sample of European countries by performing a set of sensitivity and robustness analyses to examine the impact of definitions and data issues, particularly as regards cross-country comparisons. We specifically take into account some well-known comparability issues of R&D statistics (Luwel 2004), and we devise an analytical procedure to assess their impact on the measure we are proposing. We conclude with a discussion of the advantages and limitations of the indicator, and by proposing a research agenda for its further development and usage for comparative analyses.

2. Background and basic concepts

2.1 Defining the perimeter of public R&D funding

To identify a comparable perimeter, we adopt the FM definition of GBARD, which includes all funds provided by the state that are used for R&D activities (OECD 2015).1 This involves identifying all budgetary lines in the state budget that support research and then evaluating their R&D content.
In practice, budgetary lines like funds transferred to research councils or public research organizations (PROs) are fully included, while for institutional funds to Higher Education Institutions (HEIs), the share of R&D is computed from staff time-use data (OECD 2015). Innovation support with no R&D content is excluded. Funding to international agencies and performers is included, even if our approach would allow excluding it from the indicator (Lepori et al. 2013). This choice presents several advantages. First, it builds on extensive standardization work by the Organisation for Economic Co-operation and Development (OECD) and EUROSTAT and allows comparing our data with international R&D statistics, which, despite some well-known shortcomings (Aksnes et al. 2017), remain the standard reference for comparing R&D investments across countries (Makkonen 2013). Secondly, its coverage of institutional funding is broader than instruments explicitly targeted to R&D support (Hicks 2012); specifically, it also includes mixed HEI funding, in which support for R&D is attributed together with funding for education and which remains a central component of public research support in many countries (Jongbloed and Vossensteyn 2016). However, the GBARD perimeter is not without problems. We highlight two major issues that lead to specific sensitivity tests for the indicator. First, the measurement of the R&D content of the general university budget (so-called General University Funds in the FM) is difficult, since many European HEIs do not have an accounting system that distinguishes R&D from educational expenditures. The main method advised by the FM is to rely on a survey of the use of time by the academic staff, but data show a high diversity of methods, including reliance on old surveys, national shares by discipline, and even a global national estimate in a few countries (OECD 2000; Pouris 2007).
Secondly, the proper identification of project funding instruments is not without problems. Contracts from ministries cannot always be identified, except in the countries that also perform a survey of funding organizations, while the exact delimitation between R&D support and innovation support might raise issues, as in some programmes managed by innovation agencies the two forms of support come together.

2.2 Decomposing public R&D funding

We represent public R&D funding as a set of funding lines within the public budgets with different characteristics in terms of how they are managed, who the beneficiaries are, and how they are allocated. A well-established distinction is between institutional funds, attributed to whole organizations for their current functioning, and project funds, which are allocated directly to research groups or individuals by a Research Funding Organization (RFO; Lepori et al. 2007). Some lines are earmarked for specific types of organizations, like HEIs or PROs; others, like most project funds, are accessible to different R&D performers and may also include firms. Frequently, these lines comprise a set of distinct funding schemes: project funding is divided into instruments with different goals and criteria (e.g. funding of projects vs. career instruments), while institutional funding of HEIs might be allocated through different (sub-)instruments, for example an educational component based on the number of students and a research component based on performance assessment. While this categorization can be made at different levels of granularity, we focus our analysis on instruments, which are reasonably coherent internally in terms of the two characteristics used to construct a measure of performance orientation, i.e. the allocation mode and the allocation criteria. We therefore define funding instruments as aggregations of funding schemes with similar characteristics in terms of how funding is allocated.
2.3 Allocation mode and allocation criteria

We focus on two dimensions of how funding is allocated, which we label allocation mode and allocation criteria. Comparative research on public R&D funding suggests some broad categories (see Hicks 2012; Jonkers and Zacharewicz 2016), which have also been introduced in the European Union-funded study on public research funding (PREF; see Lepori 2017; Reale et al. 2017). First, the allocation mode describes the process through which funding is allocated to beneficiaries, by distinguishing between:

- Competitive bid, i.e. allocation based on the submission of proposals and their comparative evaluation. Competitive bid is the usual mode for project funding (Lepori et al. 2007) but might occur also for some institutional funding instruments.
- Historical (incremental) allocation, where the current level of funding functions as a baseline, possibly with some fixed level of increase when the total amount of funding grows. Incremental allocation traditionally characterized public budgets (Wildavsky and Caiden 2004) and represented the dominant model for the allocation of institutional funding to universities and PROs until the 1980s (Jongbloed and Lepori 2015).
- Negotiated allocation, based on negotiations between the state and the performing organization. Negotiated allocation may be associated with performance contracts (de Boer et al. 2015), but also with historical elements.
- Formula-based allocation, where the funding volume is calculated through a mathematical formula based on a set of quantitative measures, like scores for research assessment, publication numbers, and the number of students. The comparative literature documents the diversity and complexity of formulas used for allocating resources (Hicks 2012).

Secondly, allocation criteria describe the explicit or implicit criteria used to calculate the amount of funding.
The evaluation of competitive bids is usually based on some assessment of the merit of the proposal, against different criteria depending on the instrument's goals, as well as on the reputation and standing of the applicants (van den Besselaar and Leydesdorff 2009; Grimpe 2012). For institutional funding, the most common allocation criteria include input measures (like the number of staff), the number of students or, less frequently, of graduates, peer-review outcomes (like in the UK Research Assessment Exercise; Geuna and Piolatto 2016), bibliometric measures, and the acquisition of third-party funds (see Jonkers and Zacharewicz 2016 for a recent overview). In many countries, institutional funding, particularly to HEIs, is divided into components allocated through different criteria, for example an educational component based on the number of students and a research component based on research indicators. By distinguishing and characterizing such instruments separately, our approach allows handling composite allocation in a sensible way.

2.4 Defining performance orientation

We define performance orientation as the extent to which performance is taken into account in funders' decisions concerning the allocation of funding. Performance orientation is therefore an attribute of the policy design of the process and of the criteria for allocating resources, not of the level of competition or of the outcome in terms of the distribution of funds between performers, which is also expected to depend on market characteristics, like the presence of competitors and the demand for specific research services (van den Besselaar and Leydesdorff 2009). We follow Nieminen and Auranen (2010) in distinguishing between two dimensions of performance orientation, i.e. evaluating proposals to assess expected future performance and linking allocation to past performance. Therefore, the indicator is constructed as the sum of two components: Ex ante performance orientation, i.e.
whether allocation of funding is based on expectations of future knowledge production or performance. This is typically the case when funds are allocated through competitive bids. Ex post performance orientation, i.e. whether funding allocation is based on measures of a research organization's past performance. While past performance is an important component in both cases, the policy intention and the signalling effect towards performers are different: in the case of ex ante performance orientation, the funder distributes resources based on its assessment of the expected future output, whereas for ex post performance orientation the funder rewards past performance, expecting an indirect effect on the performers' behaviour. To operationalize ex post performance orientation, we build on Hicks' definition of PRFSs, i.e. those systems where the allocation of funding at the national level is based on ex post evaluation of research output and has direct implications for funding allocation. However, the measure we are proposing is continuous, i.e. funding instruments receive a score between 0 (no performance allocation) and 1 (full performance allocation, matching Hicks' definition). We also foresee some level of performance orientation for cases where performance is indirectly taken into account in the allocation process, particularly within negotiations. While assessing such an indirect connection is methodologically more difficult, this extended definition is flexible enough to cover cases where funding levels are negotiated, yet performance criteria are mobilized in the process. By combining the categories for mode and criteria of funding with our definitions, we define scores for ex ante and ex post performance orientation to be assigned to funding instruments as in Table 1.
The values derive from a multiplication of fixed scores assigned to the mode of allocation (0 for historical, 0.5 for negotiated, and 1 for formula) by the scores attributed to the allocation criteria through a triangulation of the experts' assessments and descriptive information, i.e. literature, reports, administrative documents, etc.

Table 1. Measures of performance orientation

| Allocation mode | Score allocation mode (ex post) | Allocation criteria | Score allocation criteria | Ex ante performance orientation | Ex post performance orientation (score mode × score criteria) |
| --- | --- | --- | --- | --- | --- |
| Project funding | n/a | Indifferent | n/a | 1 | 0 |
| Institutional funding: competitive bid | n/a | Indifferent | n/a | 1 | 0 |
| Institutional funding: historical allocation | 0 | Not applicable | 0 | 0 | 0 |
| Institutional funding: negotiated allocation | 0.5 | Output or educational criteria | 0 | 0 | 0 |
| Institutional funding: negotiated allocation | 0.5 | Research criteria | 0 ≤ x < 1 | 0 | 0 ≤ x < 1, depending on the instrument's characteristics |
| Institutional funding: funding formula | 1 | Input or educational criteria | 0 | 0 | 0 |
| Institutional funding: funding formula | 1 | Research performance | 0 < x ≤ 1 | 0 | 1 |

Consistently with these definitions, a fixed score of 1 for ex ante performance orientation is assigned to project funding and to institutional funding allocated through competitive bids, while all remaining institutional funding has a score of 0 for ex ante performance orientation. By definition, historical allocation has no performance orientation.
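As an illustration only, the scoring rules of Table 1 can be sketched in a few lines of code. The function name, the string encoding of funding types and modes, and the example instruments are our own assumptions for this sketch, not part of the PREF data model.

```python
# Illustrative sketch of the Table 1 scoring rules; the encoding of
# funding types and allocation modes is an assumption made for this example.

# Fixed scores for the allocation mode (0 historical, 0.5 negotiated, 1 formula)
MODE_SCORE = {"historical": 0.0, "negotiated": 0.5, "formula": 1.0}

def score_instrument(funding_type, mode=None, criteria_score=0.0):
    """Return (ex_ante, ex_post) performance orientation for one instrument.

    funding_type: "project", "competitive_bid", or "institutional"
    mode: allocation mode for non-competitive institutional funding
    criteria_score: score in [0, 1] attributed to the allocation criteria
                    (0 for input/educational criteria, up to 1 for research
                    performance criteria, based on expert triangulation)
    """
    # Project funding and institutional competitive bids: ex ante = 1, ex post = 0
    if funding_type in ("project", "competitive_bid"):
        return (1.0, 0.0)
    # Remaining institutional funding: ex ante = 0,
    # ex post = mode score x criteria score
    return (0.0, MODE_SCORE[mode] * criteria_score)

print(score_instrument("project"))                           # (1.0, 0.0)
print(score_instrument("institutional", "historical", 0.0))  # (0.0, 0.0)
print(score_instrument("institutional", "negotiated", 0.5))  # (0.0, 0.25)
print(score_instrument("institutional", "formula", 1.0))     # (0.0, 1.0)
```

The multiplication makes the two dimensions separable: a formula with purely educational criteria and a historical allocation both score 0, while only a formula driven entirely by research performance reaches 1.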
Formula-based institutional allocation using input or educational criteria receives a score of 0 for ex post performance orientation, while formula-based allocation using research performance criteria receives a score of 1. In the rare case of formula-based allocation using both educational and research criteria, an intermediate score is attributed based on their respective weights. Finally, negotiated allocation where some research performance criteria are considered receives an intermediate score between 0 and 1, depending on the extent of negotiation and the respective importance of different criteria (past budgets, performance, political power, etc.). To standardize this evaluation, the following criteria are suggested:

- The presence of some indication of performance (e.g. in the design of the instrument).
- The existence of a performance contract including measures of research performance.
- The existence of a formal monitoring and/or evaluation process of research performance (Whitley and Glaser 2007).
- The specific indicators used for the evaluation.

Testing the sensitivity of the overall indicator to the scores selected by experts for negotiated allocation represents a major focus of the robustness testing. This discussion highlights some advantages of the proposed indicator, as well as issues that need to be tested empirically. First, the decomposition by instruments makes the construction of the measure more tractable, since characterizing individual instruments is easier than providing a global assessment. At the same time, it requires a careful description of national funding systems and good availability of data on the amount of funding associated with each instrument. Secondly, the cases for which scores require expert judgment are limited to a smaller set of instruments, specifically those without clear formal allocation criteria.
Specifically, the delineation between historical and negotiated allocation, and the performance orientation of the latter, are likely to be contestable measures, on which experts might disagree. Also, the allocation criteria are not always described in detail and might require an expert appreciation, which can produce controversial results. Thirdly, the construction of the measure is transparent and therefore allows for robustness testing and sensitivity analysis, particularly to test whether the scores attributed to some instruments have a large impact on the system-level indicator.

2.5 Operationalizing the indicator

The previous discussion leads to a replicable procedure to compute an indicator of performance orientation.

(a) First, public R&D funding, broadly corresponding to the GBARD definition in the FM, is decomposed into funding instruments based on their homogeneity in terms of allocation mode and criteria. Funding amounts are also associated with instruments for each year. Three issues need to be checked at this stage: whether there are issues with the perimeter of public R&D funding (e.g. instruments that are not included), whether the calculation of the R&D content is reliable, and whether the identified instruments are sufficiently homogeneous in terms of how funds are allocated.

(b) Each instrument is characterized in terms of allocation mode and allocation criteria according to the scheme in Table 1, with a higher level of detail for the more difficult cases. For project funding instruments, no details are needed, as all are assigned ex ante = 1 and ex post = 0, while more detail is required for cases of negotiated and formula-based allocation. This also allows the identification of contested cases for robustness testing, as well as of influential instruments whose score has a strong impact on system-level indicators.
When the scores are based on expert assessment and/or incomplete information, lower and upper bounds can be devised to perform sensitivity analyses.

(c) To compute synthetic indicators, we weight the scores by the funding volume assigned to each instrument:

share_project = total project funds / total funds
share_institutional = total institutional funds / total funds

ex ante performance orientation (institutional) = Σ_j [(funding_instrument_amount)_j × (ex ante)_j] / Σ_j (funding_instrument_amount)_j

ex post performance orientation (institutional) = Σ_j [(funding_instrument_amount)_j × (ex post)_j] / Σ_j (funding_instrument_amount)_j

performance orientation = share_project + share_institutional × (ex ante_institutional + ex post_institutional)

where the sums run only over the institutional funding instruments, since, by definition, ex ante performance orientation is 1 and ex post performance orientation is 0 for project funds. With these definitions, it is therefore sufficient to compute the breakdown of public funding between project and institutional funding and to perform a more in-depth analysis of the institutional funding instruments. These indicators can be computed for the whole national system, for domestic funding only (excluding funding transferred to international agencies and performers), or separately for performing sectors (considering only those instruments for which a performing sector is eligible, thereby characterizing the specific funding environment for a group of performers). For the sake of simplicity, we focus our analysis on the country-level indicator.

3. Data

Data have been collected in the framework of a large study funded by the European Commission on the characteristics of research funding systems in European countries (PREF, contract no. 154321).
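To make the aggregation formulas of Section 2.5 concrete, the following sketch computes the country-level indicator from a toy list of instruments. The dictionary layout and all figures are invented for illustration and are not drawn from the PREF data set.

```python
# Sketch of the Section 2.5 aggregation; the dictionary layout and the
# sample figures are invented for illustration only.

def performance_orientation(instruments):
    """Country-level indicator from instrument-level amounts and scores."""
    total = sum(i["amount"] for i in instruments)
    institutional = [i for i in instruments if i["type"] == "institutional"]
    total_inst = sum(i["amount"] for i in institutional)
    share_project = (total - total_inst) / total
    share_institutional = total_inst / total
    # Funding-weighted averages over institutional instruments only;
    # project funds have ex ante = 1 and ex post = 0 by definition.
    ex_ante_inst = sum(i["amount"] * i["ex_ante"] for i in institutional) / total_inst
    ex_post_inst = sum(i["amount"] * i["ex_post"] for i in institutional) / total_inst
    return share_project + share_institutional * (ex_ante_inst + ex_post_inst)

# Toy system: 40% project funds, 60% institutional funds of which half is a
# formula-based research-performance component (ex post = 1).
toy = [
    {"type": "project",       "amount": 40, "ex_ante": 1.0, "ex_post": 0.0},
    {"type": "institutional", "amount": 30, "ex_ante": 0.0, "ex_post": 1.0},
    {"type": "institutional", "amount": 30, "ex_ante": 0.0, "ex_post": 0.0},
]
print(performance_orientation(toy))  # 0.4 + 0.6 * (0 + 0.5) = 0.7
```

Note that the indicator only requires the project/institutional breakdown of total funding plus instrument-level detail on the institutional side, mirroring the simplification stated above.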
PREF provided data to enable the systematic analysis of national funding systems: the national research budget in each country has been decomposed into instruments, based on expert knowledge of the national systems. Then, a set of descriptors has been collected, including the distinction between project and institutional funding, the identification of the managing agency, and the allocation mode and criteria; this information was mostly collected from documentary reports and descriptions of national systems and cross-checked with national experts2 and the National Statistical Authorities. Further, financial data have been collected covering the years from 2000 (or the first available year) to 2014.3 To the greatest possible extent, these data were provided by the National Statistical Authorities and comply with EUROSTAT statistics on public R&D funding. More detailed information, for example concerning project funding, was sourced from ministry reports and funding agencies. For each country, PREF produced a country report, which also provides qualitative information on the characteristics of funding instruments; this information was used to assess the reliability of the PREF estimates.4 National experts in the PREF project were also asked to characterize funding instruments in terms of their allocation procedures and criteria based on pre-defined categories. The indicator of performance orientation has been computed for a subset of 12 countries for which reasonably complete data were available, i.e. Austria, Denmark, Finland, France, Germany, Italy, Norway, Poland, Portugal, Sweden, Switzerland, and the UK. Because of data limitations in the earlier years, we restrict the analysis to the years 2005–14. In this process, the PREF data have been rechecked, leading to limited, targeted modifications with respect to the original data set.
Any detected issues, as well as the scores attributed by experts within PREF, were rechecked in case of doubt using additional sources, including scholarly papers and available reports.

3.1 Methods and analysis

The goal of the analysis performed in this article is methodological, i.e. to ascertain the extent to which the aggregated indicators are influenced by different choices concerning the construction of the indicator. As a first step, we present a cross-sectional comparison of the indicators, which provides some first indications of their consistency with the information from the literature. A time series analysis also allows the detection of changes over time that might hint at methodological issues, like sudden changes that cannot be explained by political decisions. As a second step, we provide an in-depth analysis of the perimeter, by looking at cases where funding instruments were not included (e.g. due to a lack of data) and at issues with the calculation of funding amounts. We also consider the possibility that instruments are not sufficiently homogeneous to allow the calculation of reliable scores. We draw extensively on the country reports produced by the PREF project, as well as on the RIO/ERAWATCH reports and on EUROSTAT public funding data and metadata. For the detected problems, we define high/low thresholds for sensitivity tests. As a third step, we single out two types of influential instruments: those that account for a large share of total funding in a country and those whose score would strongly impact the aggregated indicator. We perform robustness and sensitivity analyses by computing alternative scores for these instruments and testing whether the indicators are significantly affected, particularly concerning the relative position of countries. Finally, we consider changes over time of the indicators and their reliability. 4.
Results

4.1 Descriptive analysis

Figure 1 provides data on the shares of performance-oriented funding, divided into project funding, ex ante institutional, and ex post institutional funding. We derive the following observations.

(a) First, there are wide variations in the indicators between countries, which are largely driven by differences in the share of project funding. For the purposes of this analysis, the main question is the extent to which these differences (and the country ranking) are robust against different definitions of the indicator and the coverage of funding instruments.

(b) Secondly, as expected, ex ante institutional funding plays a limited role in the considered countries. We identify two main groups of such instruments: first, institutional overheads allocated to performers based on the amount of project funding received (France, Switzerland) and, secondly, capital and researchers' grants allocated to whole HEIs by the UK research councils on the basis of an institutional proposal.

(c) Thirdly, we observe significant changes in the share of performance orientation in half of the considered countries, i.e. Finland, Norway, Poland, Portugal, and Sweden. In these countries, the increase is due to the creation of a specific performance-oriented funding instrument for HEIs.

Figure 1. Share of performance-oriented funding as % of GBARD.

In five countries, i.e. Austria, Germany, Poland, Sweden, and Switzerland, we observe a substantial increase in the share of project funding, while this share decreased in Denmark. Poland, the country showing the largest increase, created two new funding agencies, i.e. the National Centre for Research and Development (NCBR) in 2009 and the National Science Centre (NCN), supporting innovation-oriented research and basic science, respectively.
While changes in project funding are gradual, the emergence of ex post performance orientation is sudden, pointing to reforms in the design of the funding instruments.

(d) Fourthly, in all countries, there is a substantial share of non-competitive institutional funding. Given the size of the amounts involved, an assessment is needed of whether these instruments can indeed be characterized as having no performance orientation, since even a small share of performance orientation would significantly affect the aggregate indicators.

4.2 Perimeter and coverage issues

A first question is whether the data adequately cover public national funding, since differences in the perimeter are also likely to affect the comparability of the performance orientation indicator (Aksnes et al. 2017). Since PREF data are disaggregated by funding instruments, an in-depth assessment of the coverage becomes possible. More specifically, we compare PREF totals with EUROSTAT data; then we check for missing instruments. Finally, we check for potentially uncertain cases in the division between project and institutional funds. These checks are performed for the year 2014 (or the last available year; see Table 2). Table 2.
Coverage issues and sensitivity tests at the country level

| Country | Year | PREF/EUROSTAT | Perimeter issues, missing instruments, and issues with project funds | Sensitivity tests |
|---|---|---|---|---|
| AT | 2013 | 1.00 | None | — |
| CH | 2014 | 1.00 | None | — |
| DE | 2014 | 1.16 | Probably double counting of some instruments | Subtracting these amounts fully from institutional and, respectively, project funding |
| DK | 2014 | 1.12 | Probably double counting of some instruments | Subtracting these amounts fully from institutional and, respectively, project funding |
| FI | 2014 | 1.02 | None | None |
| FR | 2014 | 1.19 | Total volume of public funding higher than EUROSTAT due to the inclusion of the investissements d'avenir programme | None, PREF data better reflect national investment |
| IT | 2014 | 1.00 | Exchange funds are excluded | Adding 10% GBARD as project funds |
| NO | 2014 | 1.01 | None | None |
| PL | 2014 | 0.66 | Missing data on intra-mural expenditures of government and ministries funding programmes. Exchange funds are excluded | Low: all missing amounts as institutional. High: all missing amounts as project |
| PT | 2014 | 0.84 | Missing data on exchange funds and intra-muros R&D expenditure | Low: excluding COMPETE. High: all missing amounts as project |
| SE | 2014 | 1.01 | None | None |
| UK | 2013 | 0.98 | None | None |
| All countries | | | | Share of R&D in HEI institutional funding ±20% with respect to EUROSTAT figures |

4.2.1 Differences between totals

The PREF total for overall funding is identical to the EUROSTAT GBARD data for seven countries, owing to the use of the same primary data sources. The PREF total is 16% higher in Germany, 12% higher in Denmark, and 19% higher in France. For the first two countries, this difference is most likely due to double counting between funding instruments, as the data have been disaggregated in PREF using additional sources, such as the German Research Report.
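The coverage check above can be sketched computationally: sum the instrument-level PREF amounts, divide by the EUROSTAT GBARD total, and flag deviations. This is an illustrative reconstruction, not the authors' code; the instrument names, amounts, and the 5% tolerance are hypothetical assumptions.

```python
# Hypothetical sketch of the Section 4.2.1 coverage check: compare the sum
# of PREF instrument-level amounts to the EUROSTAT GBARD total and flag
# countries whose ratio deviates from 1.0. All figures are illustrative.

def coverage_ratio(pref_instruments, eurostat_gbard):
    """Ratio of the PREF total (summed over funding instruments) to GBARD."""
    return sum(pref_instruments.values()) / eurostat_gbard

def flag_coverage(ratio, tolerance=0.05):
    """Classify coverage: 'ok' within tolerance, else over-/under-coverage."""
    if abs(ratio - 1.0) <= tolerance:
        return "ok"
    return "over-coverage" if ratio > 1.0 else "under-coverage"

# Illustrative example loosely modelled on the Polish case (ratio 0.66)
pref_pl = {"HEI core funding": 900.0, "NCN grants": 300.0, "NCBR grants": 250.0}
ratio = coverage_ratio(pref_pl, eurostat_gbard=2200.0)
print(round(ratio, 2), flag_coverage(ratio))  # 0.66 under-coverage
```

Over-coverage (as in Germany or Denmark) points to likely double counting between instruments, while under-coverage (Poland, Portugal) points to missing instruments.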
For France, the difference is due to the inclusion in PREF of the R&D portion of the national future investment programme, which is excluded from EUROSTAT data because it is considered a capital expenditure. The PREF figure is therefore considered more accurate (only the effective expenditures have been included). For two countries, the PREF total is below the EUROSTAT figure. For Portugal (−16%), the analysis of the funding instruments covered by PREF suggests that the missing amounts are ministries' contracts. For Poland, the difference is very large (−34%) in 2014, suggesting that the data are not fully reliable.5

4.2.2 Institutional funding for HEIs

To test for robustness, we apply a lower and a higher bound of ±20% of the corresponding amounts. Since the indicative share of R&D in the general university budget is around 40%, this amounts to increasing this share to 48% or decreasing it to 32%. It seems reasonable that the true value lies within these bounds.

4.2.3 Missing funding instruments

The main issue concerns so-called exchange funds (OECD 2015), i.e. contracts for services awarded by the state, usually by different ministries. This category of funds is entirely missing in three countries, i.e. Italy, Poland, and Portugal, thereby lowering the share of project funds. Since the highest share of exchange funds in the remaining countries is 10% of total GBARD (in Norway), in the sensitivity analysis the project funds in these three countries are increased by a corresponding amount.

4.2.4 Identification of project funds

The distinction between institutional funds and project funds was further checked to single out ambiguous cases. This analysis shows that in most cases the distinction is straightforward: in all countries, the main instruments in terms of funding volume include core funds to HEIs (institutional), transfers to RFOs (project), and transfers to PROs. In the year 2014, there were only 12 mixed funding instruments, i.e.
comprising both institutional and project funds, corresponding, however, to only 1% of the total funding volume. We could identify two types of cases: first, RFOs that also distribute institutional funds, either as overheads to projects (Switzerland, France) or through direct funding of research centres (Norway, the UK); second, funding to PROs comprising some project component. In both cases, it is relatively straightforward to divide the corresponding amounts by using secondary sources. An issue was found in Poland, where the amounts transferred by the two funding agencies (NCBR and NCN) included a small share of institutional funding, with no indication of its nature; the corresponding amount (a few per cent of the total) was consistently reclassified as project funding. Most differences in the share of project funding with respect to EUROSTAT are the outcome of a more detailed analysis of the national funding structure; for example, NCBR in Poland should be considered an RFO, and the corresponding instrument was therefore reclassified as project funding. Cross-country harmonization has also been improved for international funding instruments, as funding to international RFOs is now consistently treated as project funds in all countries. Specific inclusion and exclusion problems relate to innovation programmes, where the R&D content is difficult to estimate; this particularly concerns Italy (where these programmes are excluded) and Portugal, where a share of the large COMPETE programme is included (corresponding to more than one-third of total project funding in the country). In one of the robustness tests, the COMPETE programme was dropped. In France, the funding lines within the Investissements d'avenir programme that solely support economic innovation were excluded.
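The low/high sensitivity scenarios described in Table 2 amount to a simple calculation: the low bound attributes all missing amounts to institutional funding, the high bound attributes them all to project funding, and the project-funds share is recomputed in each case. The sketch below is a hypothetical illustration of this logic; all figures are invented.

```python
# Illustrative sketch of the Table 2 sensitivity scenarios: for a country
# with a missing funding amount, the low bound treats all missing funds as
# institutional and the high bound treats them all as project funds.

def project_share(project, institutional):
    """Share of project funds in total public R&D funding."""
    return project / (project + institutional)

def sensitivity_bounds(project, institutional, missing):
    """Low/high bounds for the project-funds share given a missing amount."""
    low = project_share(project, institutional + missing)   # missing -> institutional
    high = project_share(project + missing, institutional)  # missing -> project
    return low, high

# Hypothetical example: 300 project vs 700 institutional, 250 missing
low, high = sensitivity_bounds(project=300.0, institutional=700.0, missing=250.0)
print(round(low, 3), round(high, 3))  # 0.24 0.44
```

As the article notes, these bounds are deliberately extreme: the true allocation of the missing funds almost certainly lies somewhere between the two scenarios.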
The preliminary conclusion is that the PREF data provide, with a few exceptions, reasonably good coverage of the main funding instruments, while the more disaggregated approach to instruments allows a better categorization and breakdown between project and institutional funding. The few diverging cases are taken into account in the sensitivity analysis described in Table 2.

4.3 Influential instruments

As a second step, we extracted the influential instruments from the PREF database, i.e. those institutional funding instruments that exceeded 20% of total funding, of ex post performance-oriented funding, or of non-performance-oriented funding. The scores for these instruments therefore have a strong influence on the aggregate indicator. Table 3 shows that we identified only 28 such influential instruments. Of these, 18 are institutional funding to HEIs, while the remainder are mostly core allocations to PROs. With the notable exception of France, the main issue for measuring ex post performance allocation is therefore a correct characterization of higher education funding systems, for which the assessment can rely on a large body of literature (see CHEPS 2010; de Boer et al. 2015).

Table 3. Influential instruments (2014, or last available year)

Note. White cells < 20%, light cells between 20 and 50%, and dark cells above 50%. Due to confidentiality restrictions, the numerical scores can be provided on request for research purposes.

The simplest situation is found in those countries where the HEI core government allocation is formally split into two subinstruments, one of which is labelled as formula-based and performance-oriented. This includes six countries, i.e.
Denmark (Aagaard and Schneider 2015), Italy (Geuna and Piolatto 2016), Norway (Aagaard et al. 2015), and the UK (Barker 2007). This list of countries also matches the one provided by Hicks, but our data additionally provide a quantitative measure of the amounts of money involved. For these countries, the uncertainty of the measure can be considered 'low'. Two further countries, Poland and Portugal, have performance-based systems grounded in a research assessment, but the funding amounts involved cannot be singled out; experts have therefore provided an intermediate score (Jonkers and Zacharewicz 2016; Kulczycki et al. 2017). For France, an assessment of the performance orientation was made from descriptions of the funding formula,6 but how this formula was used in the allocation process is not fully clear. These cases have therefore been characterized as having a 'medium' level of uncertainty. In the three German-speaking countries (Austria, Germany, and Switzerland), HEI allocation is not (or only partially, for example by region) based on formulas; Switzerland and Germany are also characterized by high internal heterogeneity, as both are federal countries. In these cases, a large historical or education-based component remains present, justifying low performance-orientation scores; however, we consider the measure to have a 'high' level of uncertainty. The remaining instruments can be divided into two subgroups. Intra-mural R&D instruments (three instruments) can generally be considered as having a low performance orientation, since they are directly managed within the public administration ('medium' uncertainty). Funding to PROs represents the most difficult cases to qualify, as the available information is scarce and, in most cases, allocation is based on direct negotiation between the state and the performer. While accepting the assessments of national experts, we therefore qualify these cases as having 'high' uncertainty.
With the exception of France (and partially Germany), they account for a limited share of total public funds. To test robustness, we set higher and lower bounds by increasing/decreasing the score by 0.1 for medium uncertainty and by 0.2 for high uncertainty. We therefore conclude, first, that the number of influential instruments is limited, so an in-depth assessment of their performance orientation is feasible. Second, robustness tests can be implemented to assess the impact of uncertainty in the instrument-level scores on the country indicators.

4.4 Sensitivity analysis

Figure 2 shows the sensitivity test for the share of project funds over total funds, based on the high/low thresholds proposed in Table 2. The major uncertainty concerns the two countries for which PREF figures are substantially below GBARD, i.e. Portugal and, even more so, Poland. The test, however, is somewhat extreme, since it is not expected that all missing funds are either project or institutional. The uncertainty about the R&D content of the general university budget has an impact limited to a few percentage points; hence, this issue is not very relevant for the kind of indicators we are developing, while it substantially affects the total GBARD volume at the country level.

Figure 2. Sensitivity analysis for the share of project funds.

We therefore conclude that the breakdown between project and institutional funds is reasonably robust and significantly improved with respect to EUROSTAT, except in cases where there is large uncertainty in the perimeter. Figure 3 presents the robustness test for the ex post performance orientation of institutional funding.
It shows that, even with the rather extreme tests conducted, the differences between the countries with a high performance orientation in institutional funding (namely, the UK, Portugal, and Poland) and the remaining countries are maintained, while the relative position of the countries with a low performance orientation is uncertain.

Figure 3. Sensitivity test for ex post performance orientation.

The figure also shows that, for those countries that have introduced formula-based models, the measure is subject to limited uncertainty, while in countries where some performance elements have been introduced within a negotiated allocation (Austria and Germany), and in countries where PROs are more important (France), the indicator is more uncertain. As a matter of fact, the main uncertainty in countries like Germany and Switzerland concerns regional higher education funding, which has been classified as non-competitive; given the size of the corresponding instrument, even attributing a low performance-orientation score would significantly alter the country scores. Figure 4 computes the total performance orientation by adding the low/high scores from Figure 3 to the share of ex ante performance orientation, taking into account the high/low scores from Figure 2 for project funds.

Figure 4. Robustness test for total performance orientation.

Since the uncertainty in the measure of project funding is smaller, the aggregated indicator is rather stable, particularly concerning the ranking of countries. It should be emphasized that this is an extreme test, since it is unlikely that all corrections work in the same direction.
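One way the aggregation behind Figure 4 could be assembled is to combine the ex ante component (the share of project funding) with the ex post component (the institutional share weighted by its ex post performance-orientation score), and then propagate the low/high bounds jointly. This weighting scheme is an illustrative assumption, not the authors' exact formula, and all numbers are invented.

```python
# Hedged sketch of an aggregate total-performance-orientation indicator:
# project funding counts as ex ante performance-oriented, institutional
# funding contributes in proportion to its ex post score. Illustrative only.

def total_performance_orientation(project_share, ex_post_score):
    """Combine the ex ante (project) and ex post (institutional) components."""
    institutional_share = 1.0 - project_share
    return project_share + institutional_share * ex_post_score

def with_bounds(project_low, project_high, score_low, score_high):
    """Extreme low/high bounds when both components vary in the same direction."""
    low = total_performance_orientation(project_low, score_low)
    high = total_performance_orientation(project_high, score_high)
    return low, high

# Hypothetical country: project share 30-40%, ex post score 0.1-0.3
low, high = with_bounds(0.30, 0.40, 0.1, 0.3)
print(round(low, 2), round(high, 2))  # 0.37 0.58
```

Because the ex ante component enters with full weight, the smaller uncertainty in the project-funds share dampens the effect of the ex post score uncertainty, which is consistent with the article's observation that the aggregated indicator is rather stable.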
4.5 Changes over time

We distinguish between three sources of change in the performance orientation of institutional funding: changes in the relative funding level of existing instruments, the creation of new instruments, and, finally, changes in the allocation criteria of existing instruments. Their measurement displays different levels of reliability: changes in funding levels can be measured rather simply, provided a time series is available; the reliability of changes due to the introduction of new instruments is contingent on the precision of their characterization; finally, we expect that changes in the criteria of existing instruments are more difficult to measure. Table 4 shows that, for the 12 countries considered here, the number of new instruments and of changes in the criteria of existing instruments in the period 2005–14 was very low: no changes are listed in the PREF database for seven countries, while in two countries a performance-based component was introduced for HEI funding. Accordingly, the reliability of the change measure is high. Finally, in three other countries the criteria for HEI allocation were modified; the most important change was in Poland, with the move from historical allocation to allocation based on the national assessment exercise. Here the level of reliability is lower, since it is difficult to ascertain the extent of change (Figure 5). Table 4.
Changes in performance scores of institutional instruments, 2005–14

| Country | Instruments involved | Type of change | Year | Reliability | Impact on the score |
|---|---|---|---|---|---|
| AT | Basic funding to PROs | Slight increase in the criteria (0.2–0.3) | 2008 | Low (expert based) | Low |
| FI | Introduction of a performance-based component in higher education funding | New subinstrument | 2010 | High | High |
| PL | Higher education funding based on national evaluation | Increase in the score for performance criteria | 2010 | Medium | High |
| PT | Higher education funding | Expert-assessed slight increase of performance orientation | 2005 | Medium | Medium |
| SE | Introduction of a performance-based component in higher education funding | New subinstrument | 2010 | High | High |

To analyse the impact of these changes, Figure 5 shows the extent of the increase in the share of ex post performance orientation due to each factor. For eight countries, we observe only a repartition effect, and for all of them the shift towards performance orientation is very limited (the change of criteria in Portugal took place in the first year of the time series); the same applies to Austria. The three remaining countries display much stronger change, starting from no performance orientation before the policy change.

Figure 5.
Decomposing the increase in performance orientation by mechanisms, 2005–14.

An important uncertainty in temporal changes relates to a possible gradual change in non-competitive instruments, like HE funding in Austria, Germany, and Switzerland or PRO funding in France, which constitute the bulk of public funding. Given the large amounts involved, even a slight increase in the role of performance considerations for these instruments would significantly affect the temporal trends displayed in Figure 1. In substantive terms, we conclude that the increase in performance orientation due to repartition effects is very small in all countries, thereby confirming previous results that the structure of public R&D budgets is very stable over time frames of around one decade (Lepori et al. 2007). Unlike for project funding, changes in institutional performance orientation occur suddenly through policy reforms and are therefore rather simple to observe and measure. We consider, however, that the indicator might be biased towards formal policy changes, while less visible changes in how performance is taken into account within negotiations might be overlooked, hence somewhat exaggerating the distance between countries in the indicator.

5. Discussion and conclusions

This article tested the robustness of a synthetic indicator for measuring the performance orientation of public R&D funding in a comparative setting. The indicator focuses on the characterization of national funding instruments by their mode of allocation (formula, competitive bid, negotiation, and historical) and their allocation criteria (input, educational, or output–outcome measures). The goal is to assess the importance, and the change over time, of policy designs that allocate funds to performers based on the expectation that they will produce high-level research or, respectively, on the measurement of their past performance.
The relevance of such an indicator is related to the lasting policy debate about changes in the allocation of public R&D funding and the belief, promoted in particular by New Public Management, that 'competitive' or 'incentive-based' funding would improve the efficiency and performance of public-sector research (see Nieminen and Auranen 2010; Jonkers and Zacharewicz 2016). Establishing the relationship between the competitiveness of funding and performance has proved conceptually and methodologically problematic. Contradictory results emerge from the literature, which shows that the effects of policy measures introducing competitive elements into funding allocation are more complex than expected, and some desired results were not achieved (Butler 2010; van den Besselaar et al. 2017). On the one hand, competitive funding involves different forms of interaction between policy instruments and performers. It can refer to instruments where performers compete for a limited amount of resources, as in project funding, or, alternatively, to allocation linked to the achievement of certain levels of performance. In fact, evidence confirms that competitive funding does not always select the best applicants, owing to the functioning of the peer-review system (Reale and Zinilli 2017), to compensation effects between different organizations (Aagaard and Schneider 2015), and to monopolistic positions held by some performers (Masso and Ukrainski 2009). On the other hand, the connection between public funding and performance is subtle, as research performers behave as strategic actors that combine different funding sources (including non-public ones) to achieve their goals, while the coupling between system-level and organizational or individual incentives is frequently loose (Aagaard 2015). The introduction of performance criteria may lead to gaming, to playing with numbers to maximize acquired resources, and to unintended outcomes (de Rijcke et al. 2016).
Understanding the linkages between public R&D funding and research performance would therefore require an integrated and multi-method analysis across the different steps of the process (Butler 2010). Faced with these methodological difficulties, the more modest goal of this article was to improve the (semi-quantitative) characterization of a specific element of the funding environment, i.e. the beliefs and criteria used by public bodies in allocating public R&D funds and the extent to which they rely on observable measures of performance. Thus, the indicator does not represent how instruments allocate funding to performers, but rather the policy intentions of decision makers and how they balance allocations driven by ex ante and ex post assessments. While it does not allow direct study of the causal link between competition and performance, such an indicator improves the ability to characterize the policy environments of research performers across countries and over time. This can help to investigate whether policy designs are appropriate with respect to actual system-level performance, and the extent to which similar policy designs characterize countries with different levels of research performance (see Nieminen and Auranen 2010). Beyond the synthetic indicator we have tested, the methodological approach developed in this article also provides a systematic framework for a more fine-grained characterization and comparison of the policy design of public R&D funding. Robustness was tested by: (a) checking the coverage of public national funding instruments, (b) testing the level of uncertainty of the funding criteria scores for the most influential instruments, and (c) using sensitivity analysis for the breakdown between project and institutional funding, the ex post performance orientation, and the total performance orientation.
The results show that uncertainties exist, affecting countries where data are problematic or suffer from missing values, or where the research system is highly complex and the decomposition of performance orientation needs to rely on expert judgement, as it is embedded in soft negotiated instruments rather than in formalized allocation formulas. To limit the unpredictability of expert judgement, we relied on three approaches. First, we limited recourse to experts to those measures that cannot be assessed from descriptive information alone; second, we pre-defined a set of categories from which experts choose and formalized the process as much as possible, for example by separately characterizing allocation mode and allocation criteria. Third, we triangulated the experts' assessments with information from other sources, like reports and scholarly analyses, to improve quality and reduce uncertainty. However, it is interesting to note that the high/low scores generated through the sensitivity analysis do not produce a radically different positioning of countries and, in particular, do not alter the distinction between countries with high and low performance orientation. In this respect, the admittedly imperfect operationalization of the performance orientation indicator provides reasonable results. It is worth pointing out the importance of setting reproducible procedures to construct a composite indicator of public funding, even when it represents a soft concept like performance orientation. In response to previous claims that current indicators of public funding are not very reliable (Aksnes et al. 2017), we devised a procedure to open the black box by decomposing the measure into its components and by developing sensitivity analyses on each of them. As we have demonstrated in this article, such a procedure presents distinctive advantages.
First, it allows for the identification of the most contestable dimensions of the indicator, showing that methodological issues do not necessarily affect all components of an indicator at the same time. Secondly, the decomposition allows for quantitative observation of the impact of comparability problems on the aggregate indicator, demonstrating that only some lead to severe distortions (and the extent of uncertainty differs by country as well). Thirdly, it promotes structured discussions concerning the limitations of the indicator and the progressive accumulation of methodological knowledge to move towards a consensus on validity (Barré 2001).

Acknowledgements

The authors thank the European Commission, Joint Research Centre, for funding this study through the PREF contract (contract no. 154321), as well as their colleagues participating in the study, Thomas Scherngell and Georg Zahradnik (AIT, Vienna), Espen Solberg (NIFU, Oslo), and Emilia Primeri (CNR, Rome). The authors also thank the National Statistical Authorities for their support in data collection, as well as two anonymous referees for helpful comments and remarks.

Footnotes

1 In the Frascati Manual, 'Research and experimental development (R&D) comprise creative and systematic work undertaken in order to increase the stock of knowledge—including knowledge of humankind, culture and society—and to devise new applications of available knowledge' (OECD 2015: 44–45).

2 PREF national experts were selected on the grounds of their knowledge of the country and of R&D statistics.

3 In the case of France, the last year of the time series is 2015.

4 PREF Public Funding Country Profiles of the countries analysed are published as Annexes n.
1, 8, 10, 11, 12, 18, 28, 29, 30, 36, 37, and 39 of the PREF Final Report available at https://rio.jrc.ec.europa.eu

5 A comparison of the time series for Poland shows that PREF totals are nearly identical to EUROSTAT GBARD for the years 2005–9, with the difference growing larger over time (−15% in 2012, −20% in 2013, and −34% in 2014). This is due to a rapid increase of the total GBARD from 2009 to 2014 (+63%), while PREF totals increased only slightly (+11%). It is difficult to ascertain which data point is more precise.

6 http://www.ifrap.org/education-et-culture/financement-des-universites-dispositif-sympa, last visited 29 August 2017.

References

Aagaard K. (2015) 'How Incentives Trickle Down: Local Use of a National Bibliometric Indicator System', Science and Public Policy, 42/5: 725–37.
Aagaard K., Bloch C., Schneider J. W. (2015) 'Impacts of Performance-based Research Funding Systems: The Case of the Norwegian Publication Indicator', Research Evaluation, 24: 106–17.
Aagaard K., Schneider J. W. (2015) 'Research Funding and National Academic Performance: Examination of a Danish Success Story', Science and Public Policy, 43: 518–31.
Aghion P. et al. (2010) 'The Governance and Performance of Universities: Evidence from Europe and the US', Economic Policy, 25: 7–59.
Aksnes D. et al. (2017) 'Measuring the Productivity of National R&D Systems: Challenges in Cross-national Comparisons of R&D Input and Publication Output Indicators', Science and Public Policy, 44: 246–58.
Barker K. (2007) 'The UK Research Assessment Exercise: The Evolution of a National Research Evaluation System', Research Evaluation, 16: 3–12.
Barré R. (2001) 'Sense and Nonsense of S&T Productivity Indicators', Science and Public Policy, 28: 259–66.
Bonaccorsi A.
( 2007 ) ‘ Better Policies Vs Better Institutions in European Science ’, Science and Public Policy , 34 : 303 – 16 . Google Scholar CrossRef Search ADS Butler L. 2010 . ‘Impacts of Performance-based Research Funding Systems: A Review of the Concerns and the Evidence’, in OECD, Performance-based Funding for Public Research in Tertiary Education Institutions, Paris: OECD, pp. 127–65. CHEPS . 2010 . ‘Progress in Higher Education Reform in Europe’, in Funding Reform . Brussels : European Commission . de Boer H. F. et al. 2015 . Performance-based Funding and Performance Agreements in Fourteen Higher Education Systems . European Journal of Higher Education , 5 : 280 – 96 . Google Scholar CrossRef Search ADS de Rijcke S. et al. ( 2016 ) ‘ Evaluation Practices and Effects of Indicator Use—A Literature Review ’, Research Evaluation , 25 : 161 – 9 . Google Scholar CrossRef Search ADS Estermann T. , Pruvot E. B. 2012 . ‘European Universities Diversifying Income Streams’, in Curaj A. et al. (eds) European Higher Education at the Crossroads , pp. 709 – 26 . Dordrecht : Springer . Google Scholar CrossRef Search ADS Geuna A. ( 2001 ) ‘ The Changing Rationale for European University Research Funding: Are There Negative Unintended Consequences? ’, Journal of Economic Issues , 35 : 607 – 32 . Google Scholar CrossRef Search ADS Geuna A. , Piolatto M. ( 2016 ) ‘ Research Assessment in the UK and Italy: Costly and Difficult, but Probably Worth It (at Least for a While) ’, Research Policy , 45 : 260 – 71 . Google Scholar CrossRef Search ADS Grimpe C. ( 2012 ) ‘ Extramural Research Grants and Scientists’ Funding Strategies: Beggars Cannot Be Choosers? ’, Research Policy , 41 : 1448 – 60 . Google Scholar CrossRef Search ADS Hicks D. ( 2012 ) ‘ Performance-Based University Research Funding Systems ’, Research Policy , 41 : 251 – 61 . Google Scholar CrossRef Search ADS Jongbloed B. , Lepori B. 2015 . 
‘The Funding of Research in Higher Education: Mixed Models and Mixed Results’, in Huisman J. , et al. (eds) The Palgrave International Handbook of Higher Education Policy and Governance , pp. 439 – 61 . London : Palgrave Macmillan . Google Scholar CrossRef Search ADS Jongbloed B. , Vossensteyn H. ( 2016 ) ‘ University Funding and Student Funding: International Comparisons ’, Oxford Review of Economic Policy , 32 : 576 – 95 . Google Scholar CrossRef Search ADS Jonkers K. , Zacharewicz T. 2016 . Research Performance Based Funding Systems: A Comparative Assessment . Institute for Prospective Technological Studies, Brussels: Joint Research Centre . Kulczycki E. , Korzeń M. , Korytkowski P. ( 2017 ) ‘ Toward an Excellence-Based Research Funding System: Evidence from Poland ’, Journal of Informetrics , 11 : 282 – 98 . Google Scholar CrossRef Search ADS Laudel G. ( 2006 ) ‘ The Art of Getting Funded: How Scientists Adapt to Their Funding Conditions ’, Science and Public Policy , 33 : 489 – 504 . Google Scholar CrossRef Search ADS Lepori B. 2017 . Analysis of National Public Research Funding (PREF). Handbook for Data Collection and Indicators Production . Seville : Joint Research Centre, Technical report . Lepori B. et al. ( 2007 ) ‘ Comparing the Evolution of National Research Policies: What Patterns of Change? ’, Science and Public Policy , 34 : 372 – 88 . Google Scholar CrossRef Search ADS Lepori B. , Reale E. , Larédo P. 2013 . ‘ Logics of Integration and Actors’ Strategies in European Joint Programs ’, Research Policy , 43 : 391 – 402 . Google Scholar CrossRef Search ADS Luwel M. ( 2004 ) ‘The Use of Input Data in the Performance Analysis of R&D Systems’, in Moed H.F. , Glänzel W. , Schmoch U. (eds) Handbook of Quantitative Science and Technology Research , Dordrecht : Springer , pp. 315 – 38 . Makkonen T. ( 2013 ) ‘ Government Science and Technology Budgets in Times of Crisis ’, Research Policy , 42 : 817 – 22 . Google Scholar CrossRef Search ADS Masso J. 
, Ukrainski K. ( 2009 ) ‘ Competition for Public Project Funding in a Small Research System: The Case of Estonia ’, Science and Public Policy , 36 : 683 – 95 . Google Scholar CrossRef Search ADS Nieminen M. , Auranen O. ( 2010 ) ‘ University Research Funding and Publication Performance—An International Comparison ’, Research Policy , 39 : 822 – 34 . Google Scholar CrossRef Search ADS OECD . 2000 . Measuring R&D in the Higher Education Sector. Methods in Use in the OECD/EU Member Countries . Paris : OECD . OECD . 2015 . Frascati Manual 2015. Guidelines for Collecting and Reporting Data on Research and Experimental Development . Paris : OECD . Pouris A. ( 2007 ) ‘ Estimating R&D Expenditures in the Higher Education Sector ’, Minerva , 45 : 3 – 16 . Google Scholar CrossRef Search ADS Pruvot E.B. , Estermann T. ( 2012 ). ‘European Universities Diversifying Income Streams’, in Curaj A. , Scott P. , Vlasceanu L. , Wilson L. , et al. (eds) European Higher Education at the Crossroads , pp. 439 – 61 . Dordrecht : Springer , 709 – 26 . Reale E. , Lepori B. , Scherngell T. 2017 . Analysis of National Public Research Funding (PREF) . Brussels : European Commission, Joint Reserch Center Technical Report . Reale E. , Zinilli A. 2017 . ‘ Evaluation for the Allocation of University Research Project Funding: Can Rules Improve the Peer Review? ’, Research Evaluation , 26 : 190 – 98 . Google Scholar CrossRef Search ADS Slaughter S. , Rhoades G. 2004 . Academic Capitalism and The New Economy: Markets, State, and Higher Education . Baltimore : JHU Press . Teixeira P. N. et al. ( 2014 ) ‘ Revenue Diversification in Public Higher Education: Comparing the University and Polytechnic Sectors ’, Public Administration Review , 74 : 398 – 412 . Google Scholar CrossRef Search ADS van den Besselaar P. , Heyman U. , Sandström U. 2017 . ‘ Perverse Effects of Output-based Research Funding? Butler’s Australian Case Revisited ’, Journal of Informetrics , 11 : 905 – 18 . 
Google Scholar CrossRef Search ADS van den Besselaar P. , Leydesdorff L. ( 2009 ) ‘ Past Performance, Peer Review and Project Selection: A Case Study in the Social and Behaviouraval Sciences ’, Research Evaluation , 18 : 273 – 88 . Google Scholar CrossRef Search ADS Whitley R. , Glaser J. 2007 . ‘The Changing Governance of the Sciences’, in The Advent of Research Evaluation Systems . Dordrecht : Springer . Wildavsky A. , Caiden N. 2004 . The New Politics of the Budgetary Process . New York, NY : Pearson/Longman . © The Author(s) 2018. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com This article is published and distributed under the terms of the Oxford University Press, Standard Journals Publication Model (https://academic.oup.com/journals/pages/about_us/legal/notices) http://www.deepdyve.com/assets/images/DeepDyve-Logo-lg.png Research Evaluation Oxford University Press

Research Evaluation, Advance Article, 12 March 2018
Publisher: Oxford University Press
ISSN: 0958-2029; eISSN: 1471-5449
DOI: 10.1093/reseval/rvy007

The introduction of a competitive and performance-based allocation was driven by a set of beliefs about improving national research performance by selectively supporting the best research organizations and groups and by setting incentives for public-sector research (Bonaccorsi 2007; Aghion et al. 2010). Yet, evidence that this goal has been achieved is, at best, inconclusive (Nieminen and Auranen 2010). On the contrary, adverse impacts of performance orientation have been suggested, including a focus on short-term research, the increasing costs of evaluation, and gaming by researchers (Laudel 2006). A further trend has been the push towards income diversification, by increasing the amount of resources provided by private contracts and by charities, even if these sources represent a limited share of R&D funding in most European countries (with the partial exception of the UK) and for most universities (Estermann and Pruvot 2012; Teixeira et al. 2014). The literature has identified wide differences between countries in how performance-oriented funding systems have been introduced, in terms of the balance between instruments, of the criteria adopted to measure performance, and of the amount of funding involved (Hicks 2012; de Boer et al. 2015; Jonkers and Zacharewicz 2016). Yet, we currently lack a robust methodological framework to systematically compare the performance orientation of public R&D funding across countries and over time. This article aims at filling this gap by developing and empirically testing a quantitative measure of the extent to which the (past or expected) performance of beneficiaries is taken into account when distributing public R&D funding, broadly identified through the perimeter of Government Budget Allocations for R&D (GBARD) in the Frascati Manual (FM) (OECD 2015).1
By combining these measures with information on the amount of funding by instrument, it becomes possible to compute a synthetic indicator of the country-level performance orientation of public R&D funding. We operationalize the indicator in terms of a fine-grained decomposition of public R&D funding into instruments, each characterized by its mode and criteria of allocation, and by integrating information from studies and policy documents with expert judgments on the less formal dimensions of allocation. The proposed indicator can be broadly defined as a characterization of the underlying rationales and criteria through which public R&D funding is distributed; as such, it describes an important component of the governance of the (public) research system, whose implications for the conduct of research and its outputs have still to be fully understood. In the article, we test the indicator for a sample of European countries by performing a set of sensitivity and robustness analyses to examine the impact of definitions and data issues, particularly as concerns cross-country comparisons. We specifically take into account some well-known comparability issues of R&D statistics (Luwel 2004), and we devise an analytical procedure to assess their impact on the measure we are proposing. We conclude with a discussion of the advantages and limitations of the indicator, and by proposing a research agenda for its further development and use in comparative analyses.

2. Background and basic concepts

2.1 Defining the perimeter of public R&D funding

To identify a comparable perimeter, we adopt the FM definition of GBARD, which includes all funds provided by the state that are used for R&D activities (OECD 2015).1 This involves identifying all budgetary lines in the state budget that support research and then evaluating their R&D content.
In practice, budgetary lines like funds transferred to research councils or public research organizations (PROs) are fully included, while for institutional funds to Higher Education Institutions (HEIs), the share of R&D is computed from staff time-use data (OECD 2015). Innovation support with no R&D content is excluded. Funding to international agencies and performers is included, even if our approach would allow excluding them from the indicator (Lepori et al. 2013). This choice presents several advantages. First, it builds on extensive standardization work by the Organisation for Economic Co-operation and Development (OECD) and EUROSTAT and allows comparing our data with international R&D statistics, which, despite some well-known shortcomings (Aksnes et al. 2017), remain the unique reference for comparing R&D investments across countries (Makkonen 2013). Secondly, its coverage of institutional funding is broader than that of instruments explicitly targeted at R&D support (Hicks 2012); specifically, it also includes mixed HEI funding attributed for R&D as a supplement to student funding, which remains a central component of public research support in many countries (Jongbloed and Vossensteyn 2016). However, the GBARD perimeter is not without problems. We highlight two major issues that lead to specific sensitivity tests for the indicator. First, the measurement of the R&D content of the general university budget (so-called General University Funds in the FM) is difficult, since many European HEIs do not have an accounting system that allows distinguishing R&D from educational expenditures. The main method advised by the FM is to rely on a survey of the use of time by academic staff, but data show a high diversity of methods, including reliance on old surveys, national shares by discipline, and even a global national estimate in a few countries (OECD 2000; Pouris 2007).
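The FM time-use method described above can be illustrated with a toy calculation: an R&D coefficient derived from a staff time-use survey is applied to the university block grant, by field. All coefficients, field categories, and amounts below are entirely hypothetical and serve only to show the mechanics.

```python
# Illustrative FM-style estimate of the R&D content of General University
# Funds (GUF). All figures are invented for illustration.
time_use_share_rd = {          # average share of working time spent on R&D,
    "natural_sciences": 0.45,  # by field, from a hypothetical staff survey
    "social_sciences": 0.35,
    "humanities": 0.30,
}
guf_by_field = {               # university block grant by field, millions
    "natural_sciences": 400.0,
    "social_sciences": 250.0,
    "humanities": 150.0,
}

# R&D content of GUF: weight each field's grant by its R&D time share
guf_rd = sum(guf_by_field[f] * time_use_share_rd[f] for f in guf_by_field)
print(guf_rd)  # this amount would be counted towards GBARD
```

The same structure accommodates the cruder methods mentioned in the text (a single national coefficient is just a one-entry dictionary).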
Secondly, the proper identification of project funding instruments is not without problems. Contracts from ministries cannot always be identified, except in countries that also perform a survey of funding organizations, while the exact delimitation between R&D support and innovation support might raise issues, as in some programmes managed by innovation agencies the two forms of support come together.

2.2 Decomposing public R&D funding

We represent public R&D funding as a set of funding lines within the public budgets with different characteristics in terms of how they are managed, their beneficiaries, and how they are allocated. A well-established distinction is between institutional funds, attributed to whole organizations for their current functioning, and project funds, which are allocated directly to research groups or individuals by a Research Funding Organization (RFO; Lepori et al. 2007). Some lines are earmarked for specific types of organizations, like HEIs or PROs; others, like most project funds, are accessible to different R&D performers and may also include firms. Frequently, these lines comprise a set of distinct funding schemes: project funding is divided into instruments with different goals and criteria (e.g. funding of projects vs. career instruments), while institutional funding of HEIs might be allocated through different (sub-)instruments, for example an educational component based on the number of students and a research component based on performance assessment. While this categorization can be made at different levels of granularity, we focus our analysis on instruments, which are reasonably coherent internally in terms of the two characteristics used to construct a measure of performance orientation, i.e. the allocation mode and the allocation criteria. We therefore define funding instruments as aggregations of funding schemes with similar characteristics in terms of how funding is allocated.
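The decomposition into instruments can be represented as a simple data structure. The sketch below is illustrative only: the field names and figures are invented and do not come from the PREF dataset.

```python
from dataclasses import dataclass

@dataclass
class FundingInstrument:
    """One funding instrument: funding schemes aggregated by how funds are
    allocated. Field names are illustrative, not the PREF descriptors."""
    name: str                 # e.g. "HEI block grant, research component"
    stream: str               # "project" or "institutional"
    allocation_mode: str      # "competitive bid", "historical", "negotiated", "formula"
    allocation_criteria: str  # e.g. "research performance", "students", "input"
    amount: float             # yearly funding volume (millions, hypothetical)

# A toy national budget decomposed into three instruments
budget = [
    FundingInstrument("Research council grants", "project",
                      "competitive bid", "proposal merit", 300.0),
    FundingInstrument("HEI block grant", "institutional",
                      "formula", "students", 500.0),
    FundingInstrument("PRO core funding", "institutional",
                      "historical", "n/a", 200.0),
]

total = sum(i.amount for i in budget)
share_project = sum(i.amount for i in budget if i.stream == "project") / total
print(round(share_project, 2))  # share of project funding in total public R&D funding
```

Once the budget is in this form, the project/institutional breakdown used later in the article falls out directly.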
2.3 Allocation mode and allocation criteria

We focus on two dimensions of how funding is allocated, which we label allocation mode and allocation criteria. Comparative research on public R&D funding suggests some broad categories (see Hicks 2012; Jonkers and Zacharewicz 2016), which have also been introduced in the European Union-funded study on public research funding (PREF; see Lepori 2017; Reale et al. 2017). First, the allocation mode describes the process through which funding is allocated to beneficiaries, distinguishing between:

Competitive bid, i.e. allocation based on the submission of proposals and their comparative evaluation. Competitive bidding is the usual mode for project funding (Lepori et al. 2007) but might also occur for some institutional funding instruments.

Historical (incremental) allocation, where the current level of funding functions as a baseline, possibly with some fixed level of increase when the total amount of funding grows. Incremental allocation traditionally characterized public budgets (Wildavsky and Caiden 2004) and represented the dominant model for the allocation of institutional funding to universities and PROs until the 1980s (Jongbloed and Lepori 2015).

Negotiated allocation, based on negotiations between the state and the performing organization. Negotiated allocation may be associated with performance contracts (de Boer et al. 2015), but also with historical elements.

Formula-based allocation, where the funding volume is calculated through a mathematical formula based on a set of quantitative measures, like scores from research assessments, publication numbers, or the number of students. The comparative literature documents the diversity and complexity of formulas used for allocating resources (Hicks 2012).

Secondly, allocation criteria describe the explicit or implicit criteria that are used to calculate the amount of funding.
The evaluation of competitive bids is usually based on some assessment of the merit of the proposal against different criteria depending on the instrument's goals, as well as on the reputation and standing of the applicants (van den Besselaar and Leydesdorff 2009; Grimpe 2012). For institutional funding, the most common allocation criteria include input measures (like the number of staff), the number of students or, less frequently, of graduates, peer-review outcomes (as in the UK Research Assessment Exercise; Geuna and Piolatto 2016), bibliometric measures, and the acquisition of third-party funds (see Jonkers and Zacharewicz 2016 for a recent overview). In many countries, institutional funding, particularly to HEIs, is divided into components allocated through different criteria, for example an educational component based on the number of students and a research component based on research indicators. By distinguishing and characterizing such instruments separately, our approach allows handling composite allocation in a sensible way.

2.4 Defining performance orientation

We define performance orientation as the extent to which performance is taken into account in funders' decisions concerning the allocation of funding. Performance orientation is therefore an attribute of the policy design of the process and of the criteria for allocating resources, not of the level of competition or of the outcome in terms of the distribution of funds between performers, which is also expected to depend on market characteristics, like the presence of competitors and the demand for specific research services (van den Besselaar and Leydesdorff 2009). We follow Nieminen and Auranen (2010) in distinguishing between two dimensions of performance orientation, i.e. evaluating proposals to assess expected future performance and linking allocation to past performance. Therefore, the indicator is constructed as the sum of two components: Ex ante performance orientation, i.e.
whether the allocation of funding is based on expectations of future knowledge production or performance. This is typically the case when funds are allocated through competitive bids. Ex post performance orientation, i.e. whether funding allocation is based on measures of a research organization's past performance. While past performance is an important component in both cases, the policy intention and the signalling effect towards performers are different: in the case of ex ante performance orientation, the funder distributes resources based on its assessment of expected future output, whereas for ex post performance orientation the funder rewards past performance, expecting an indirect effect on the performers' behaviour. To operationalize ex post performance orientation, we build on Hicks' definition of PRFSs, i.e. those systems where the allocation of funding at the national level is based on ex post evaluation of research output and has direct implications for funding allocation. However, the measure we are proposing is continuous, i.e. funding instruments receive a score between 0 (no performance allocation) and 1 (full performance allocation, matching Hicks' definition). We also foresee some level of performance orientation for cases where performance is indirectly taken into account in the allocation process, particularly within negotiations. While assessing such an indirect connection is methodologically more difficult, this extensive definition is flexible enough to cover cases where funding levels are negotiated yet performance criteria are mobilized in the process. By combining the categories for mode and criteria of funding with our definitions, we assign scores for ex ante and ex post performance orientation to funding instruments as in Table 1.
The values derive from multiplying fixed scores assigned to the allocation mode (0 for historical, 0.5 for negotiated, and 1 for formula-based) by the scores attributed to the allocation criteria through a triangulation of experts' assessments and descriptive information (literature, reports, administrative documents, etc.).

Table 1. Measures of performance orientation

Allocation mode | Mode score (ex post) | Allocation criteria | Criteria score | Ex ante performance orientation | Ex post performance orientation (mode score × criteria score)
Project funding | n/a | Indifferent | n/a | 1 | 0
Institutional funding: competitive bid | n/a | Indifferent | n/a | 1 | 0
Institutional funding: historical allocation | 0 | Not applicable | 0 | 0 | 0
Institutional funding: negotiated allocation | 0.5 | Output or educational criteria | 0 | 0 | 0
Institutional funding: negotiated allocation | 0.5 | Research criteria | 0 ≤ x < 1 | 0 | 0 ≤ x < 1, depending on the instrument's characteristics
Institutional funding: funding formula | 1 | Input or educational criteria | 0 | 0 | 0
Institutional funding: funding formula | 1 | Research performance | 0 < x ≤ 1 | 0 | 1

Consistent with these definitions, a fixed score of 1 for ex ante performance orientation is assigned to project funding and to institutional funding allocated through competitive bids, while all remaining institutional funding receives a score of 0 for ex ante performance orientation. By definition, historical allocation has no performance orientation.
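The scoring logic of Table 1 can be sketched as a small function. This is our own illustrative reading of the table, not the procedure used in PREF; the function name and the encoding of modes as strings are assumptions.

```python
def performance_scores(stream, mode, criteria_score=0.0):
    """Return (ex_ante, ex_post) scores for one instrument, following Table 1.

    `criteria_score` in [0, 1] reflects the weight of research-performance
    criteria: expert-assessed for negotiated allocation, the weight of the
    research component for formula allocation. Illustrative sketch only.
    """
    # Project funding and institutional competitive bids: ex ante = 1
    if stream == "project" or mode == "competitive bid":
        return 1.0, 0.0
    # Other institutional funding: ex post = mode score x criteria score
    mode_score = {"historical": 0.0, "negotiated": 0.5, "formula": 1.0}[mode]
    return 0.0, mode_score * criteria_score

print(performance_scores("institutional", "historical"))       # (0.0, 0.0)
print(performance_scores("institutional", "negotiated", 0.6))  # (0.0, 0.3)
print(performance_scores("institutional", "formula", 1.0))     # (0.0, 1.0)
```

Note how the multiplicative form caps negotiated allocation at 0.5 even when research criteria dominate, reflecting the fixed mode scores given in the text.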
Formula-based institutional allocation using input or educational criteria receives a score of 0 for ex post performance orientation, while formula allocation using research performance criteria receives a score of 1. In the rare case of formula-based allocation based on both educational and research criteria, an intermediate score is attributed based on their respective weighting. Finally, negotiated allocation in which some research performance criteria are considered receives an intermediate score between 0 and 1, depending on the extent of negotiation and the respective importance of the different criteria (past budgets, performance, political power, etc.). To standardize this evaluation, the following criteria are suggested:

The presence of some indication of performance (e.g. in the design of the instrument).
The existence of a performance contract including measures of research performance.
The existence of a formal monitoring and/or evaluation process of research performance (Whitley and Glaser 2007).
The specific indicators used for the evaluation.

Testing the sensitivity of the overall indicator to the scores selected by experts for negotiated allocation represents a major focus of the robustness testing. This discussion highlights some advantages of the proposed indicator, as well as issues that need to be tested empirically. First, the decomposition by instruments makes the construction of the measure more tractable, since characterizing individual instruments is easier than providing a global assessment. At the same time, it requires a careful description of national funding systems and good availability of data on the amount of funding associated with each instrument. Secondly, the cases for which scores require expert judgment are limited to a smaller set of instruments, specifically those without clear formal rules on the allocation criteria.
Specifically, the delineation between historical and negotiated allocation, and the performance orientation of the latter, are likely to be contestable measures on which experts might disagree. Also, the criteria for allocation are not always described in detail and might require an expert appreciation, which can produce controversial results. Thirdly, the construction of the measure is transparent and therefore allows for robustness testing and sensitivity analysis, particularly to test whether the scores attributed to some instruments have a large impact on the system-level indicator.

2.5 Operationalizing the indicator

The previous discussion leads to a replicable procedure to compute an indicator of performance orientation.

(a) First, public R&D funding, broadly corresponding to the GBARD definition in the FM, is decomposed into funding instruments based on their homogeneity in terms of allocation mode and criteria. Funding amounts are also associated with instruments for each year. Three issues need to be checked at this stage: whether there are issues with the perimeter of public R&D funding (e.g. instruments that are not included), whether the calculation of the R&D content is reliable, and whether the identified instruments are sufficiently homogeneous in terms of how funds are allocated.

(b) Each instrument is characterized in terms of allocation mode and allocation criteria according to the criteria highlighted in Table 1, providing a higher level of detail for the more difficult cases. For project funding instruments, no details are needed, as all are assigned ex ante = 1 and ex post = 0, while more detail is required for cases of negotiated and formula allocation. This also allows the identification of contested cases for robustness testing, as well as of influential instruments whose score has a strong impact on system-level indicators.
When the scores are based on expert assessment and/or incomplete information, lower and upper bounds can be devised to perform sensitivity analyses.

(c) To compute synthetic indicators, we weight the scores by the funding volume assigned to each instrument:

share_project = total project funds / total funds
share_institutional = total institutional funds / total funds
ex ante performance orientation (institutional) = Σ_j (funding_instrument_amount_j × ex_ante_j) / Σ_j funding_instrument_amount_j
ex post performance orientation (institutional) = Σ_j (funding_instrument_amount_j × ex_post_j) / Σ_j funding_instrument_amount_j
performance orientation = share_project + share_institutional × (ex ante institutional + ex post institutional)

where the sums run only over the institutional funding instruments since, by definition, ex ante performance orientation is 1 and ex post performance orientation is 0 for project funds. With these definitions, it is therefore sufficient to compute the breakdown of public funding between project and institutional funding and to perform a more in-depth analysis of the institutional funding instruments. These indicators can be computed for the whole national system, for domestic funding only (excluding funding transferred to international agencies and performers), or separately for performing sectors (considering only those instruments for which a performing sector is eligible, thereby characterizing the specific funding environment for a group of performers). For the sake of simplicity, we will focus our analysis on the country-level indicator.

3. Data

Data have been collected in the framework of a large study funded by the European Commission on the characteristics of research funding systems in European countries (PREF, contract no. 154321).
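The weighting scheme of Section 2.5 can be sketched in code. The function below is an illustration under invented figures, not the PREF implementation; the instrument tuples and their values are hypothetical.

```python
def performance_orientation(instruments):
    """Country-level indicator from (stream, amount, ex_ante, ex_post) tuples.

    Implements: share_project + share_institutional *
    (ex_ante_institutional + ex_post_institutional), where the two
    institutional components are funding-weighted averages.
    """
    total = sum(a for _, a, _, _ in instruments)
    project = sum(a for s, a, _, _ in instruments if s == "project")
    inst = [(a, ea, ep) for s, a, ea, ep in instruments if s == "institutional"]
    inst_total = sum(a for a, _, _ in inst)
    ex_ante = sum(a * ea for a, ea, _ in inst) / inst_total
    ex_post = sum(a * ep for a, _, ep in inst) / inst_total
    return project / total + (inst_total / total) * (ex_ante + ex_post)

example = [  # hypothetical national budget: (stream, amount, ex_ante, ex_post)
    ("project", 300.0, 1.0, 0.0),         # competitive project funding
    ("institutional", 500.0, 0.0, 1.0),   # formula, research performance
    ("institutional", 200.0, 0.0, 0.0),   # historical allocation
]
print(round(performance_orientation(example), 2))
```

With these figures, 30% of funding is project-based and five-sevenths of institutional funding is fully ex post performance-oriented, giving an overall indicator of 0.3 + 0.7 × (5/7) = 0.8.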
PREF provided data enabling the systematic analysis of national funding systems: the national research budget in each country has been decomposed into instruments, based on expert knowledge of national systems. Then, a set of descriptors has been collected, including the distinction between project and institutional funding, the identification of the managing agency, and the allocation mode and criteria; this information was mostly collected from documentary reports and descriptions of national systems and cross-checked with national experts2 and the National Statistical Authorities. Further, financial data have been collected covering the year 2000 (or the first year available) through 2014.3 To the greatest possible extent, these data were provided by the National Statistical Authorities and comply with EUROSTAT statistics on public R&D funding. More detailed information, for example concerning project funding, was sourced from ministry reports and funding agencies. For each country, PREF produced a country report, also providing qualitative information on the characteristics of funding instruments, which was used to assess the reliability of the PREF estimates.4 National experts in the PREF project were also asked to characterize funding instruments in terms of their allocation procedures and criteria based on pre-defined categories. The indicator of performance orientation has been computed for a subset of 12 countries for which reasonably complete data were available, i.e. Austria, Denmark, Finland, France, Germany, Italy, Norway, Poland, Portugal, Sweden, Switzerland, and the UK. Because of data limitations in the earlier years, we restrict the analysis to the years 2005–14. In this process, the PREF data have been rechecked, leading to limited and targeted modifications with respect to the original data set.
Any detected issues, as well as the scores attributed by experts within PREF, were rechecked in case of doubt using additional sources, including scholarly papers and available reports.

3.1 Methods and analysis

The goal of the analysis performed in this article is methodological, i.e. to ascertain the extent to which the aggregated indicators are influenced by different choices concerning the construction of the indicator. As a first step, we present a cross-sectional comparison of the indicators, which provides some first indications of their consistency with the information from the literature. A time series analysis also allows the detection of changes over time that might hint at methodological issues, such as sudden changes that cannot be explained by political decisions. As a second step, we provide an in-depth analysis of the perimeter, by looking at cases where funding instruments were not included (e.g. due to a lack of data) and at issues with the calculation of funding amounts. We also consider the possibility that instruments are not sufficiently homogeneous to allow the calculation of reliable scores. We draw extensively on the country reports produced by the PREF project, as well as on the RIO/ERAWATCH reports and on EUROSTAT public funding data and metadata. For the detected problems, we define high/low thresholds for the sensitivity analysis. As a third step, we single out two types of influential instruments: those that account for a large share of total funding in a country and those whose score would strongly impact the aggregated indicator. We perform robustness and sensitivity analyses by computing alternative scores for these instruments and testing whether the indicators are significantly affected, particularly concerning the relative position of countries. Finally, we consider changes over time of the indicators and their reliability. 4.
Results

4.1 Descriptive analysis

Figure 1 provides data on the shares of performance-oriented funding, divided into project funding, ex ante institutional, and ex post institutional funding. We derive the following observations. (a) First, there are wide variations in the indicators between countries, which are largely driven by differences in the share of project funding. For the purposes of this analysis, the main question is the extent to which these differences (and the country ranking) are robust against different definitions of the indicator and the coverage of funding instruments. (b) Secondly, as expected, ex ante institutional funding plays a limited role in the considered countries. We identify two main groups of such instruments: first, institutional overheads allocated to performers based on the amount of project funding received (France, Switzerland) and, secondly, capital and researchers' grants allocated to whole HEIs by the UK research councils on the basis of an institutional proposal. (c) Thirdly, we observe significant changes in the share of performance orientation in half of the considered countries, i.e. Finland, Norway, Poland, Portugal, and Sweden. In these countries, the increase is due to the creation of a specific performance-oriented funding instrument for HEIs.

Figure 1. Share of performance-oriented funding as % of GBARD.

In five countries, i.e. Austria, Germany, Poland, Sweden, and Switzerland, we observe a substantial increase in the share of project funding, while this share decreased in Denmark. Poland, the country showing the largest increase, created two new funding agencies, i.e. the National Centre for Research and Development (NCBR) in 2009 and the National Science Centre (NCN), supporting innovation-oriented research and basic science, respectively.
While changes in project funding are gradual, the emergence of ex post performance orientation is sudden, pointing to reforms in the design of the funding instruments. (d) Fourthly, in all countries, there is a substantial share of non-competitive institutional funding. Given the size of the amounts involved, an assessment of whether these instruments can indeed be characterized as having 'no' performance orientation is needed, since even a small share of performance orientation would significantly affect the aggregate indicators.

4.2 Perimeter and coverage issues

A first question is whether the data adequately cover public national funding, since differences in the perimeter are also likely to affect the comparability of the performance orientation indicator (Aksnes et al. 2017). Since PREF data are disaggregated by funding instruments, an in-depth assessment of the coverage becomes possible. More specifically, we compare PREF totals with EUROSTAT data; then we check for missing instruments. Finally, we check for potentially uncertain cases in the division between project and institutional funds. These checks are performed for the year 2014 (or the last available year; see Table 2).

Table 2. Coverage issues and sensitivity tests at the country level

| Country | Year | PREF/EUROSTAT | Perimeter issues, missing instruments, and issues with project funds | Sensitivity tests |
| AT | 2013 | 1.00 | None | |
| CH | 2014 | 1.00 | None | |
| DE | 2014 | 1.16 | Probable double counting of some instruments | Subtracting these amounts fully from institutional, respectively project, funding |
| DK | 2014 | 1.12 | Probable double counting of some instruments | Subtracting these amounts fully from institutional, respectively project, funding |
| FI | 2014 | 1.02 | None | None |
| FR | 2014 | 1.19 | Total volume of public funding higher than EUROSTAT due to the inclusion of the investissements d'avenir programme | None; PREF data better reflect national investment |
| IT | 2014 | 1.00 | Exchange funds are excluded | Adding 10% of GBARD as project funds |
| NO | 2014 | 1.01 | None | None |
| PL | 2014 | 0.66 | Missing data on intra-mural expenditures of government and ministries' funding programmes; exchange funds are excluded | Low: all missing amounts as institutional. High: all missing amounts as project |
| PT | 2014 | 0.84 | Missing data on exchange funds and intra-mural R&D expenditure | Low: excluding COMPETE. High: all missing amounts as project |
| SE | 2014 | 1.01 | None | None |
| UK | 2013 | 0.98 | None | None |
| All countries | | | Share of R&D in HEI institutional funding | ±20% with respect to EUROSTAT figures |

4.2.1 Differences between totals

The PREF total for public funding is identical to the EUROSTAT GBARD data for seven countries, which is due to the use of the same primary data sources. The PREF total is 16% higher in Germany, 12% in Denmark, and 19% in France. For the first two countries, this difference is most likely due to double counting between funding instruments, as data have been disaggregated in PREF using additional sources, like the German Research Report.
For France, the difference is due to the inclusion of the R&D portion of the national future investment programme in PREF, which is excluded in the EUROSTAT data as it is considered a capital expenditure. The PREF figure is therefore considered more accurate (only the effective expenditures have been included). For two countries, the PREF total is below the EUROSTAT figures. For Portugal (−16%), the analysis of the funding instruments covered by PREF suggests that the missing amounts are ministries' contracts. For Poland, the difference is very large (−34%) in 2014, suggesting that the data are not fully reliable.5

4.2.2 Institutional funding for HEIs

To test for robustness, we apply lower and higher bounds of ±20% of the corresponding amounts. Since the indicative share of R&D in the general university budget is around 40%, this amounts to increasing this share to 48% or decreasing it to 32%, respectively. It seems reasonable that the true value lies within these bounds.

4.2.3 Missing funding instruments

The main issue concerns so-called exchange funds (OECD 2015), i.e. contracts for services awarded by the state, usually by different ministries. This category of funds is fully missing in three countries, i.e. Italy, Poland, and Portugal, thereby lowering the share of project funds. Since the highest share of exchange funds in the remaining countries is 10% of total GBARD (in Norway), in the sensitivity analysis the project funds in these three countries are increased by a corresponding amount.

4.2.4 Identification of project funds

The distinction between institutional funds and project funds was further checked to single out ambiguous cases. This analysis shows that in most cases this distinction is straightforward: in all countries, the main instruments in terms of funding volume include core funds to HEIs (institutional), transfers to RFOs (project), and transfers to PROs. In the year 2014, there were only 12 mixed funding instruments, i.e.
comprising both institutional and project funds, corresponding, however, to only 1% of the total funding volume. We could identify two types of cases: first, RFOs that also distribute institutional funds, either as overheads to projects (Switzerland, France) or through direct funding of research centres (Norway, the UK); secondly, funding to PROs comprising a project component. In both cases, it is relatively straightforward to divide the corresponding amounts by using secondary sources. An issue was found in Poland, where the amounts transferred by the two funding agencies (NCBR and NCN) included a small share of institutional funding with no indication of its nature; the corresponding amount (a few percent of the total) was consistently reclassified as project funding. Most differences in the share of project funding with respect to EUROSTAT are the outcome of a more detailed analysis of the national funding structure; for example, NCBR in Poland should be considered an RFO, and the corresponding instrument was therefore reclassified as project funding. Cross-country harmonization has also been improved for international funding instruments, as funding to international RFOs is now consistently considered project funds in all countries. Specific inclusion and exclusion problems relate to innovation programmes, where the R&D content is difficult to estimate; this particularly concerns Italy (where these programmes are excluded) and Portugal, where a share of the large COMPETE programme is included (corresponding to more than one-third of total project funding in the country). In one of the robustness tests, the COMPETE programme has been dropped. In France, the funding lines within the Investissements d'avenir programme that solely support economic innovation were excluded.
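The exchange-fund correction described in Section 4.2.3 can be sketched as a small helper. The function name and the figures below are hypothetical; the 10% of GBARD added as project funds mirrors the highest share observed in the remaining countries (Norway):

```python
def adjusted_project_share(project: float, institutional: float,
                           gbard: float, exchange_missing: bool) -> float:
    """Share of project funds, after adding 10% of GBARD as project funds
    for countries where exchange funds are missing from the data."""
    if exchange_missing:
        project += 0.10 * gbard  # upper-bound correction for missing exchange funds
    return project / (project + institutional)

# Invented figures: 200 project funds, 600 institutional funds, GBARD of 800
base = adjusted_project_share(200.0, 600.0, 800.0, exchange_missing=False)
high = adjusted_project_share(200.0, 600.0, 800.0, exchange_missing=True)
```

In this toy case, the correction raises the project-fund share from 0.25 to roughly 0.32, illustrating how the missing category can shift the indicator for Italy, Poland, and Portugal.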
The preliminary conclusion is that the PREF data provide, with a few exceptions, reasonably good coverage of the main funding instruments, while the more disaggregated approach to instruments allows a better categorization and breakdown between project and institutional funding. The few diverging cases are taken into account in the sensitivity analysis described in Table 2.

4.3 Influential instruments

As a second step, we extracted the influential instruments from the PREF database, i.e. those institutional funding instruments that exceeded 20% of total funding, of ex post performance-oriented funding, or of non-performance-oriented funding. The scores for these instruments therefore have a strong influence on the aggregate indicator. Table 3 shows that we identified only 28 such influential instruments. In total, 18 instruments are institutional funding to HEIs, while the remainder are mostly core allocations to PROs. With the notable exception of France, the main issue for measuring ex post performance allocation is therefore a correct characterization of higher education funding systems, for which the assessment can rely on a large body of literature (see CHEPS 2010; de Boer et al. 2015).

Table 3. Influential instruments (2014, or last available year). Note: white cells < 20%, light cells between 20 and 50%, and dark cells above 50%. Due to confidentiality restrictions, the numerical scores can be provided on request for research purposes.

The simplest situation is found in those countries where the HEI core government allocation is formally split into two subinstruments, one of which is labelled as formula-based and performance-oriented. This includes six countries, i.e.
Denmark (Aagaard and Schneider 2015), Italy (Geuna and Piolatto 2016), Norway (Aagaard et al. 2015), and the UK (Barker 2007). This list of countries also matches the one provided by Hicks, but our data additionally provide a quantitative measure of the amount of money involved. For these countries, the uncertainty of the measure can be considered 'low'. Two further countries, Poland and Portugal, have performance-based systems based on a research assessment, but the funding amounts involved cannot be singled out and, therefore, experts have provided an intermediate score (Jonkers and Zacharewicz 2016; Kulczycki et al. 2017). For France, an assessment of the performance orientation was made from descriptions of the funding formula,6 but how this formula was used in the allocation process is not fully clear. These cases have therefore been characterized as having a 'medium' level of uncertainty. In the three German-speaking countries (Austria, Germany, and Switzerland), HEI allocation is not (or only partially, for example by region) based on formulas; Switzerland and Germany are also characterized by high internal heterogeneity, as both are federal countries. In these cases, a large historical or education-based component remains present, justifying low performance orientation scores. However, we consider the measure to have a 'high' level of uncertainty. The remaining instruments can be divided into two subgroups. Intra-mural R&D instruments (three instruments) can generally be considered as having a low performance orientation, since they are directly managed within the public administration ('medium' uncertainty). Funding to PROs is the most difficult case to qualify, as the available information is scarce and, in most cases, allocation is based on direct negotiation between the state and the performer. While accepting the assessment of national experts, we therefore qualify these instruments as 'high' uncertainty.
With the exception of France (and partially Germany), they account for a limited share of total public funds. To test robustness, we set higher and lower bounds by increasing/decreasing the score by 0.1 for medium and 0.2 for high uncertainty. We therefore conclude that, first, the number of influential instruments is limited and, therefore, an in-depth assessment of their performance orientation is feasible. Secondly, robustness tests can be implemented to assess the impact of uncertainty in the instrument-level scores on the country indicators.

4.4 Sensitivity analysis

Figure 2 shows the sensitivity test for the share of project funds over total funds, based on the high/low thresholds proposed in Table 2. The major uncertainty concerns the two countries for which the PREF figures are substantially below GBARD, i.e. Portugal and, even more so, Poland. The test is, however, somewhat extreme, since it is not expected that all missing funds are exclusively project or exclusively institutional. The uncertainty about the R&D content of the general university budget has an impact limited to a few percentage points; hence, this issue is not very relevant for the kind of indicators we are developing, while it substantially affects the total GBARD volume at the country level.

Figure 2. Sensitivity analysis for the share of project funds.

We therefore conclude that the breakdown between project and institutional funds is reasonably robust and significantly improved with respect to EUROSTAT, except in cases where there is large uncertainty in the perimeter. Figure 3 presents the robustness test for the ex post performance orientation of institutional funding.
It shows that, even with the rather extreme tests conducted, the differences between the countries with a high performance orientation in institutional funding (namely the UK, Portugal, and Poland) and the remaining countries are maintained, while the relative position of the countries with a low performance orientation is uncertain.

Figure 3. Sensitivity test for ex post performance orientation.

The figure also shows that, for those countries that have introduced formula-based models, the measure is subject to limited uncertainty, while the indicator is more uncertain in countries where some performance elements have been introduced within a negotiated allocation (Austria and Germany) and in countries where PROs are more important (France). As a matter of fact, the main uncertainty in countries like Germany and Switzerland is represented by regional higher education funding, which has been classified as non-competitive; given the size of the corresponding instrument, even attributing a low performance orientation score would significantly alter the country scores. Figure 4 computes the total performance orientation by adding the low/high scores from Figure 3 to the share of ex ante performance orientation, taking into account the high/low scores from Figure 2 for project funds.

Figure 4. Robustness test for total performance orientation.

Since the uncertainty in the measure of project funding is smaller, the aggregated indicator is rather stable, particularly concerning the ranking of countries. It should be emphasized that this is an extreme test, since it is unlikely that all corrections work in the same direction.
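The score bounds used in these tests (±0.1 for 'medium' and ±0.2 for 'high' uncertainty, clipped to the [0, 1] interval) and their propagation to the volume-weighted ex post indicator can be sketched as follows; all instrument data are invented for illustration:

```python
# Shift each instrument score by the uncertainty-dependent delta, clip to [0, 1],
# and recompute the volume-weighted ex post performance orientation at both bounds.
DELTA = {"low": 0.0, "medium": 0.1, "high": 0.2}

def ex_post_bounds(instruments):
    """instruments: list of (amount, ex post score, uncertainty level) tuples."""
    total = sum(amount for amount, _, _ in instruments)
    low = sum(a * max(0.0, s - DELTA[u]) for a, s, u in instruments) / total
    high = sum(a * min(1.0, s + DELTA[u]) for a, s, u in instruments) / total
    return low, high

toy_instruments = [
    (500.0, 0.2, "low"),     # e.g. a formula-based HEI allocation
    (300.0, 0.3, "medium"),  # e.g. an assessment-based component
    (200.0, 0.1, "high"),    # e.g. negotiated PRO funding
]
low, high = ex_post_bounds(toy_instruments)
```

With these toy figures, the ex post indicator ranges from 0.16 to 0.28, showing how instruments with 'high' uncertainty dominate the width of the interval.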
4.5 Changes over time

We distinguish between three sources of change in the performance orientation of institutional funding: changes in the relative funding level of existing instruments, the creation of new instruments, and, finally, changes in the allocation criteria of existing instruments. We suggest that these sources can be measured with different levels of reliability: changes in funding levels can be measured in a rather simple way, provided a time series is available; the reliability of changes due to the introduction of new instruments is contingent on the precision of their characterization; finally, we expect changes in the criteria of existing instruments to be more difficult to measure. Table 4 shows that for the 12 countries considered here, the number of new instruments and of changes in the criteria of existing instruments in the period 2005–14 was very low: no changes are listed in the PREF database for seven countries, while in two countries a performance-based component was introduced for HEI funding. Accordingly, the reliability of the change measure is high. Finally, in three other countries the criteria for HEI allocation were modified; the most important change was in Poland, with the move from historical allocation to allocation based on the national assessment exercise. The level of reliability is lower here, since it is difficult to ascertain the extent of change (Figure 5).

Table 4. Changes in performance scores of institutional instruments, 2005–14

| Country | Instruments involved | Type of change | Year | Reliability | Impact on the score |
| AT | Basic funding to PROs | Slight increase in the criteria (0.2–0.3) | 2008 | Low (expert based) | Low |
| FI | Introduction of a performance-based component in higher education funding | New subinstrument | 2010 | High | High |
| PL | Higher education funding based on national evaluation | Increase in the score for performance criteria | 2010 | Medium | High |
| PT | Higher education funding | Expert-assessed slight increase of performance orientation | 2005 | Medium | Medium |
| SE | Introduction of a performance-based component in higher education funding | New subinstrument | 2010 | High | High |

To analyse the impact of these changes, Figure 5 shows the extent of the increase in the share of ex post performance orientation due to each factor. For eight countries, we observe only a repartition effect, and for all of them the shift towards performance orientation is very limited (the change of criteria in Portugal took place in the first year of the time series); the same applies to Austria. The three remaining countries display much stronger change, starting from no performance orientation before the policy change.

Figure 5. Decomposing the increase in performance orientation by mechanisms, 2005–14.
An important uncertainty in temporal changes is related to a possible gradual change in non-competitive instruments, like HE funding in Austria, Germany, and Switzerland or PROs in France, which constitute the bulk of public funding. Given the large amounts involved, even a slight increase in the role of performance considerations for these instruments would significantly affect the temporal trends displayed in Figure 1. In substantive terms, we conclude that the increase in performance orientation due to repartition effects is very small in all countries, thereby confirming previous results that the structure of public R&D budgets is very stable over time frames of around one decade (Lepori et al. 2007). Unlike project funding, institutional performance orientation changes suddenly through policy reforms and is therefore rather simple to observe and to measure. We consider, however, that the indicator might be biased towards formal policy changes, while less visible changes in how performance orientation is taken into account within negotiations might be overlooked, hence somewhat exaggerating the distance between countries in the indicator.

5. Discussion and conclusions

This article tested the robustness of a synthetic indicator to measure the performance orientation of public R&D funding in a comparative setting. The indicator focuses on the characterization of national funding instruments by their mode of allocation (formula, competitive bid, negotiation, and historical) and allocation criteria (input, educational, or output–outcome measures). The goal is to assess the importance, and the change over time, of policy designs that allocate funds to performers based either on the expectation that they will produce high-level research or on the measurement of their past performance.
The relevance of such an indicator is related to the lasting policy debate about changes in the allocation of public R&D funding and the belief, promoted in particular by New Public Management, that 'competitive' or 'incentive-based' funding would improve the efficiency and performance of public-sector research (see Nieminen and Auranen 2010; Jonkers and Zacharewicz 2016). Establishing the relationship between the competitiveness of funding and performance has proved conceptually and methodologically problematic. Contradictory results emerge from the literature, which shows that the effects of policy measures introducing competitive elements in funding allocation are more complex than expected, and some desirable results were not achieved (Butler 2010; van den Besselaar et al. 2017). On the one hand, competitive funding involves different forms of interaction between policy instruments and performers. It can refer to instruments where performers struggle for a limited amount of resources, as in project funding, or, alternatively, to allocation linked to the achievement of certain levels of performance. In fact, evidence confirms that competitive funding does not always select the best applicants, owing to the functioning of the peer-review system (Reale and Zinilli 2017), to compensation effects between different organizations (Aagaard and Schneider 2015), and to monopolistic positions held by some performers (Masso and Ukrainski 2009). On the other hand, the connection between public funding and performance is subtle, as research performers behave as strategic actors that combine different funding sources (including non-public ones) to achieve their goals, while the coupling between system-level and organizational or individual incentives is frequently loose (Aagaard 2015). The introduction of performance criteria may lead to gaming and 'playing with numbers' to maximize the acquired resources, and to unintended outcomes (de Rijcke et al. 2016).
Understanding the linkages between public R&D funding and research performance would therefore require an integrated and multi-method analysis across the different steps of the process (Butler 2010). Faced with these methodological difficulties, the more modest goal of this article was to improve the (semi-quantitative) characterization of a specific element of the funding environment, i.e. the beliefs and criteria used by public bodies in allocating public R&D funds and the extent to which they rely on observable measures of performance. Thus, the indicator does not represent how instruments allocate funding to performers, but the policy intentions of the decision makers and how they balance allocations driven by ex ante and ex post assessments. While it does not allow a direct study of the causal link between competition and performance, such an indicator improves the ability to characterize the policy environments of research performers across countries and over time. This can help to investigate whether policy designs are appropriate with respect to actual system-level performance, and the extent to which similar policy designs characterize countries with different levels of research performance (see Nieminen and Auranen 2010). Beyond the synthetic indicator we have tested, the methodological approach developed in this article also provides a systematic framework for a more fine-grained characterization and comparison of the policy design of public R&D funding. The robustness was tested by: (a) checking the coverage of public national funding instruments, (b) testing the level of uncertainty of the funding criteria scores for the most influential instruments, and (c) using sensitivity analysis for the breakdown between project funding and institutional funding, the ex post performance orientation, and the total performance orientation.
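The combination of these bounds into an interval for the total performance orientation can be sketched as follows. All numbers are invented; the sketch assumes, as in our data, that the institutional score stays below 1, so that the total indicator increases with the project share and the extreme bounds are obtained by pushing all corrections in the same direction:

```python
def total_performance_orientation(share_project: float, inst_score: float) -> float:
    """Total indicator: project funds count fully (score 1), institutional funds
    are weighted by their combined ex ante + ex post score."""
    return share_project + (1.0 - share_project) * inst_score

# Hypothetical bounds from the coverage and score sensitivity tests
p_low, p_high = 0.30, 0.35  # project-fund share
s_low, s_high = 0.10, 0.20  # institutional ex ante + ex post score

po_low = total_performance_orientation(p_low, s_low)    # pessimistic bound
po_high = total_performance_orientation(p_high, s_high) # optimistic bound
```

As noted in Section 4.4, this is an extreme test: the true value is unlikely to sit at either end of the interval, since not all corrections work in the same direction.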
The results show that uncertainties exist, affecting countries where data are problematic or suffer from missing values, or where the research system is highly complex and the decomposition of performance orientation needs to rely on experts’ appreciation, because it is embedded in soft negotiated instruments rather than in formalized allocation formulas. To limit the unpredictability of experts’ appreciation, we relied on three approaches. First, we restricted recourse to experts to those measures that cannot be assessed from descriptive information alone; secondly, we pre-defined a set of categories from which experts choose and formalized the process as much as possible, for example by separately characterizing allocation mode and allocation criteria; thirdly, we triangulated the experts’ assessments with information from other sources, such as reports and scholarly analyses, to improve quality and reduce uncertainty. However, it is interesting to note that the high/low scores generated through the sensitivity analysis do not produce a radically different positioning of countries and, in particular, do not alter the distinction between countries with high and low performance orientation. In this respect, the admittedly imperfect operationalization of the performance orientation indicator provides reasonable results. It is worth pointing out the importance of setting reproducible procedures to construct a composite indicator of public funding, even when it represents a soft concept like performance orientation. In response to previous claims that current indicators on public funding are not very reliable (Aksnes et al. 2017), we devised a procedure to open the black box by decomposing the measure into its components and by developing sensitivity analyses on each of them. As we have demonstrated in this article, such a procedure presents distinctive advantages.
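The stability of country positions under the high/low variants can be checked with a rank correlation between the baseline and perturbed indicator values. The sketch below, using invented country values, computes a Spearman coefficient without external libraries; a coefficient of 1 means the country ordering is unchanged by the perturbation.

```python
# Sketch: checking that a sensitivity variant preserves the ordering
# of countries. Country codes and indicator values are invented.

def ranks(values):
    """Rank positions (0 = lowest) for a list of untied values."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

def spearman(x, y):
    """Spearman rank correlation for equal-length lists without ties."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

baseline = {"AT": 0.35, "CZ": 0.55, "FR": 0.30, "IT": 0.45, "UK": 0.70}
low_variant = {"AT": 0.30, "CZ": 0.50, "FR": 0.27, "IT": 0.38, "UK": 0.62}

countries = sorted(baseline)
rho = spearman([baseline[c] for c in countries],
               [low_variant[c] for c in countries])
```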
First, it allows for the identification of the most contestable dimensions of the indicator, showing that methodological issues do not necessarily affect all components of an indicator at the same time. Secondly, the decomposition allows for the quantitative observation of the impact of comparability problems on the aggregate indicator, demonstrating that only some lead to severe distortions (and the extent of uncertainty differs by country as well). Thirdly, it promotes structured discussions concerning the limitations of the indicator and the progressive accumulation of methodological knowledge to move towards a consensus on validity (Barré 2001).

Acknowledgements

The authors thank the European Commission, Joint Research Centre, for funding this study through the PREF contract (contract no. 154321), as well as their colleagues participating in the study, Thomas Scherngell and Georg Zahradnik (AIT, Vienna), Espen Solberg (NIFU, Oslo), and Emilia Primeri (CNR, Rome). The authors also thank the National Statistical Authorities for their support in data collection, as well as two anonymous referees for helpful comments and remarks.

Footnotes

1 In the Frascati Manual, ‘Research and experimental development (R&D) comprise creative and systematic work undertaken in order to increase the stock of knowledge—including knowledge of humankind, culture and society—and to devise new applications of available knowledge’ (OECD 2015: 44–45). 2 PREF national experts were selected on the grounds of their knowledge of the country and of R&D statistics. 3 In the case of France, the last year of the time series is 2015. 4 PREF Public Funding Country Profiles of the countries analysed are published as Annexes n.
1, 8, 10, 11, 12, 18, 28, 29, 30, 36, 37, and 39 of the PREF Final Report, available at https://rio.jrc.ec.europa.eu. 5 A comparison of the time series for Poland shows that PREF totals are nearly identical to EUROSTAT GBARD for the years 2005–9, with the difference growing larger over time (−15% in 2012, −20% in 2013, and −34% in 2014). This is due to a rapid increase of the total GBARD from 2009 to 2014 (+63%), while PREF totals increased only slightly (+11%). It is difficult to ascertain which data point is more precise. 6 http://www.ifrap.org/education-et-culture/financement-des-universites-dispositif-sympa, last visited 29 August 2017.

References

Aagaard, K. (2015) ‘How Incentives Trickle Down: Local Use of a National Bibliometric Indicator System’, Science and Public Policy, 42/5: 725–37.
Aagaard, K., Bloch, C., Schneider, J. W. (2015) ‘Impacts of Performance-based Research Funding Systems: The Case of the Norwegian Publication Indicator’, Research Evaluation, 24: 106–17.
Aagaard, K., Schneider, J. W. (2015) ‘Research Funding and National Academic Performance: Examination of a Danish Success Story’, Science and Public Policy, 43: 518–31.
Aghion, P. et al. (2010) ‘The Governance and Performance of Universities: Evidence from Europe and the US’, Economic Policy, 25: 7–59.
Aksnes, D. et al. (2017) ‘Measuring the Productivity of National R&D Systems: Challenges in Cross-national Comparisons of R&D Input and Publication Output Indicators’, Science and Public Policy, 44: 246–58.
Barker, K. (2007) ‘The UK Research Assessment Exercise: The Evolution of a National Research Evaluation System’, Research Evaluation, 16: 3–12.
Barré, R. (2001) ‘Sense and Nonsense of S&T Productivity Indicators’, Science and Public Policy, 28: 259–66.
Bonaccorsi, A. (2007) ‘Better Policies Vs Better Institutions in European Science’, Science and Public Policy, 34: 303–16.
Butler, L. (2010) ‘Impacts of Performance-based Research Funding Systems: A Review of the Concerns and the Evidence’, in OECD, Performance-based Funding for Public Research in Tertiary Education Institutions, pp. 127–65. Paris: OECD.
CHEPS (2010) ‘Progress in Higher Education Reform in Europe’, in Funding Reform. Brussels: European Commission.
de Boer, H. F. et al. (2015) ‘Performance-based Funding and Performance Agreements in Fourteen Higher Education Systems’, European Journal of Higher Education, 5: 280–96.
de Rijcke, S. et al. (2016) ‘Evaluation Practices and Effects of Indicator Use—A Literature Review’, Research Evaluation, 25: 161–9.
Estermann, T., Pruvot, E. B. (2012) ‘European Universities Diversifying Income Streams’, in Curaj, A. et al. (eds) European Higher Education at the Crossroads, pp. 709–26. Dordrecht: Springer.
Geuna, A. (2001) ‘The Changing Rationale for European University Research Funding: Are There Negative Unintended Consequences?’, Journal of Economic Issues, 35: 607–32.
Geuna, A., Piolatto, M. (2016) ‘Research Assessment in the UK and Italy: Costly and Difficult, but Probably Worth It (at Least for a While)’, Research Policy, 45: 260–71.
Grimpe, C. (2012) ‘Extramural Research Grants and Scientists’ Funding Strategies: Beggars Cannot Be Choosers?’, Research Policy, 41: 1448–60.
Hicks, D. (2012) ‘Performance-Based University Research Funding Systems’, Research Policy, 41: 251–61.
Jongbloed, B., Lepori, B. (2015) ‘The Funding of Research in Higher Education: Mixed Models and Mixed Results’, in Huisman, J. et al. (eds) The Palgrave International Handbook of Higher Education Policy and Governance, pp. 439–61. London: Palgrave Macmillan.
Jongbloed, B., Vossensteyn, H. (2016) ‘University Funding and Student Funding: International Comparisons’, Oxford Review of Economic Policy, 32: 576–95.
Jonkers, K., Zacharewicz, T. (2016) Research Performance Based Funding Systems: A Comparative Assessment. Brussels: Joint Research Centre, Institute for Prospective Technological Studies.
Kulczycki, E., Korzeń, M., Korytkowski, P. (2017) ‘Toward an Excellence-Based Research Funding System: Evidence from Poland’, Journal of Informetrics, 11: 282–98.
Laudel, G. (2006) ‘The Art of Getting Funded: How Scientists Adapt to Their Funding Conditions’, Science and Public Policy, 33: 489–504.
Lepori, B. (2017) Analysis of National Public Research Funding (PREF). Handbook for Data Collection and Indicators Production. Seville: Joint Research Centre, Technical Report.
Lepori, B. et al. (2007) ‘Comparing the Evolution of National Research Policies: What Patterns of Change?’, Science and Public Policy, 34: 372–88.
Lepori, B., Reale, E., Larédo, P. (2013) ‘Logics of Integration and Actors’ Strategies in European Joint Programs’, Research Policy, 43: 391–402.
Luwel, M. (2004) ‘The Use of Input Data in the Performance Analysis of R&D Systems’, in Moed, H. F., Glänzel, W., Schmoch, U. (eds) Handbook of Quantitative Science and Technology Research, pp. 315–38. Dordrecht: Springer.
Makkonen, T. (2013) ‘Government Science and Technology Budgets in Times of Crisis’, Research Policy, 42: 817–22.
Masso, J., Ukrainski, K. (2009) ‘Competition for Public Project Funding in a Small Research System: The Case of Estonia’, Science and Public Policy, 36: 683–95.
Nieminen, M., Auranen, O. (2010) ‘University Research Funding and Publication Performance—An International Comparison’, Research Policy, 39: 822–34.
OECD (2000) Measuring R&D in the Higher Education Sector. Methods in Use in the OECD/EU Member Countries. Paris: OECD.
OECD (2015) Frascati Manual 2015. Guidelines for Collecting and Reporting Data on Research and Experimental Development. Paris: OECD.
Pouris, A. (2007) ‘Estimating R&D Expenditures in the Higher Education Sector’, Minerva, 45: 3–16.
Reale, E., Lepori, B., Scherngell, T. (2017) Analysis of National Public Research Funding (PREF). Brussels: European Commission, Joint Research Centre, Technical Report.
Reale, E., Zinilli, A. (2017) ‘Evaluation for the Allocation of University Research Project Funding: Can Rules Improve the Peer Review?’, Research Evaluation, 26: 190–98.
Slaughter, S., Rhoades, G. (2004) Academic Capitalism and the New Economy: Markets, State, and Higher Education. Baltimore: JHU Press.
Teixeira, P. N. et al. (2014) ‘Revenue Diversification in Public Higher Education: Comparing the University and Polytechnic Sectors’, Public Administration Review, 74: 398–412.
van den Besselaar, P., Heyman, U., Sandström, U. (2017) ‘Perverse Effects of Output-based Research Funding? Butler’s Australian Case Revisited’, Journal of Informetrics, 11: 905–18.
van den Besselaar, P., Leydesdorff, L. (2009) ‘Past Performance, Peer Review and Project Selection: A Case Study in the Social and Behavioural Sciences’, Research Evaluation, 18: 273–88.
Whitley, R., Gläser, J. (eds) (2007) The Changing Governance of the Sciences: The Advent of Research Evaluation Systems. Dordrecht: Springer.
Wildavsky, A., Caiden, N. (2004) The New Politics of the Budgetary Process. New York, NY: Pearson/Longman.

© The Author(s) 2018. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com. This article is published and distributed under the terms of the Oxford University Press, Standard Journals Publication Model (https://academic.oup.com/journals/pages/about_us/legal/notices).

Journal: Research Evaluation (Oxford University Press). Published: 12 March 2018.
