European Paradox or Delusion—Are European Science and Economy Outdated?

Abstract

The European Union (EU) seems to presume that the mass production of European research papers indicates that Europe is a leading scientific power, and that the so-called European paradox of strong science but weak technology is due to inefficiencies in the utilization of this top level European science by European industry. We fundamentally disagree, and will show that Europe lags far behind the USA in the production of important, highly cited research. We will show that there is a consistent weakening of European science as one ascends the citation scale, with the EU almost twice as effective in the production of minimal impact papers, while the USA is at least twice as effective in the production of very highly cited scientific papers, and in garnering Nobel prizes. Only in the highly multinational, collaborative fields of Physics and Clinical Medicine does the EU seem to approach the USA in top scale impact.

1. Introduction

In this paper, we will present persuasive evidence that the so-called European paradox (e.g. Dosi et al. 2006)—outstanding European scientific performance despite second-rate European economic performance—is quite misleading, because it is based on simple counts of research papers, instead of on a much more relevant analysis of European involvement in promoting the advancement of important science. In numerical terms, important science can be expressed as highly cited papers, as very highly cited, Nobel level papers, and as Nobel prizes. In fact we will show, using a variety of counting techniques and in numerous fields and journal sets, that the farther one goes up the citation distribution the weaker the European scientific performance becomes, and that by various measures of research papers that are likely to be of significance—the very highly cited papers with important discoveries—the US position is at least twice as strong as the European position. Furthermore, this US lead over European science has, at best, decreased only a small amount over the last 20 years. In fact the only real change in position we see from 1990 to 2011 is a remarkable recent increase in the number of highly cited Chinese papers in almost every set of papers studied, which has also been observed by others (e.g. Leydesdorff et al. 2014). There is also some indication of a modest increase in European performance in Clinical Medicine and Physics, largely due, not to European Union (EU) scientists per se, but rather to the increasing co-authorship of European scientists with colleagues from the USA and other countries at the top of the citation distribution. Furthermore, European performance is dangerously low in some hot science areas of high technological relevance, such as graphene.

It goes without saying that most of the European economies have waddled along like ducks out of water over the last decade or so, at least partially because of bureaucratic meddling, and because of the EU’s chronic failure to encourage ambitious entrepreneurs (Economist 2012, http://www.economist.com/node/21559618, accessed 21 December 2015). We raise the question here whether some similar factors have been driving much of European science to also waddle along, away from significant, highly cited discoveries, and toward the large-scale production of relatively superfluous papers, in order to satisfy inappropriate, bureaucratic, publish or perish criteria.
We acknowledge that the European scientific system is still capable of brilliant accomplishments, as demonstrated by the outstanding performance of the European Organization for Nuclear Research (CERN), and by the remarkable achievement of landing on a comet and transmitting pictures and data back to earth. We also wonder why there has been a Cambrian explosion of h-, g-, and w-indices and other rather elegant measures of individual and occasionally institutional performance (e.g. Kosmulski 2013) within the EU, without much EU research asking the much larger question of what European science is really accomplishing for the large investment in science by the European Community and European countries. To use a sports analogy, this profusion of new indicators is somewhat like a system that counts the kicks in European football rather than counting the goals—no matter how sophisticated the measures of angles, velocities, and attempts are, that is not at all the same as measuring how often goals are scored.

2. The importance of demonstrating the importance of science

Both of the authors of this paper believe deeply in the value of research, and the many benefits it has brought to society. Our concern is that, unless that value is measured and delineated, in a manner that is clear and demonstrable to macro-scale decision makers, research will eventually be treated as just a dispensable social program, subject to the whims of budgetary fluctuations. As any research manager knows, it takes years to develop a research team, but only days to destroy it. In times of rapid economic expansion the scientific community flourishes, as happened in the USA in the decades following World War II (WWII), when the USA ascended to the pinnacle of world science. Europe has also recovered remarkably, especially as the EU has grown. However, given the economic stagnation that now seems to be occurring, and the limited resources available for public use, the competition for funds will intensify, both within science, and between science and other public activities. It is our contention that research policy makers have been far too preoccupied with micro- and meso-scale analysis of research activities and far too neglectful of demonstrating to society the benefits of maintaining a high quality, productive, and relevant scientific community.

In numerous reports, the OECD (e.g. 2000) has emphasized that scientific advances and technological change are important drivers of economic performance, and the importance of an active role of governments in boosting research. There is a large literature highlighting the benefits of publicly funded basic research; an excellent review by Salter and Martin (2001) discusses this matter. However, there are fewer studies addressing the question of whether all research, highly or never cited, in hot or quiescent areas, is of equal value for society; in the absence of clear answers to these questions, the decisions of policy makers can be wrong. In a brilliant seminal paper in Minerva in 1962, Alvin M. Weinberg, then Director of the Oak Ridge National Laboratory, discussed the need for, and criteria for, scientific choice. Regarding the latter, Weinberg (1962; reprinted 2000: 253) states: ‘As science grows, its demands on our society’s resources grow. It seems inevitable that science’s demands will eventually be limited by what society can allocate to it. We shall then have to make choices. These choices are of two kinds.
We shall have to choose among different, often incommensurable, fields of science—between, for example, high-energy physics and oceanography or between molecular biology and science of metals. We shall also have to choose among the different institutions that receive support for science from the government—among universities, governmental laboratories and industry. The first choice I call scientific choice; the second, institutional choice.’

A few years later the US government began its quantitative study of macro-scale scientific performance. The Science and Engineering series of biennial reports started with Science Indicators 1972, the goal of which was to develop ‘a set of indices which will reveal the strengths and weaknesses of U.S. science and technology, in terms of the capacity and performance of the enterprise in contributing to national objectives’ (National Science Board 1972: 1). Note the inclusion of ‘contributing to national objectives’: from the beginnings of large-scale support of science in the USA there was concern for the relevance of the outcome to the country’s well-being. Subsequent studies in the USA have contributed importantly to this goal, and have had significant impact in providing simple, understandable measures allowing proponents to defend the public support of science in the USA. One of the most important of these was Edwin Mansfield’s paper ‘Academic research and industrial innovation,’ which found ‘A very tentative estimate of the social rate of return from academic research during 1975–78 is 28 percent’ (Mansfield 1991: 11). Another US study providing evidence for the support of public science is Narin et al. (1997: 317), which found that ‘seventy-three percent of the papers cited by U.S. industry patents are public science, authored at academic, governmental, and other public institutions.’ In this study, the cited public science is further analyzed, concluding that the papers cited in patents are (i) highly cited by other papers, (ii) published in journals that are clearly prestigious and influential, (iii) authored at prestigious universities and laboratories, and (iv) supported by prestigious US governmental and other research support agencies. These findings lead to the conclusion that not all papers are equally useful for the economy of a country. The importance of research is not in question, but clearly not all published papers are similarly relevant to society. Underlying this paper is our concern that the European research evaluation process places far too much emphasis on counting papers, and far too little on counting important discoveries, discoveries that are far more likely to be represented in the highly cited research tail than in the vast bulk of supporting, but less cited, papers.

3. Research paradoxes

Wrong assessments of scientific research give rise to false research paradoxes. Research paradoxes arise when the widely assumed strong link between scientific research and innovation and technology development is not observed. Although these paradoxes could possibly exist, demonstration of their actual existence depends on the correct assessment of both scientific and technological performance. The first mention of a research paradox in the EU is in The European Report on Science and Technology Indicators 1994 (European Commission 1994). This report states (p.
17): ‘Thus, we observe the emergence of a “research paradox”: for a country to have a relatively high R&D intensity in a sector appears not necessarily to be a good indicator of successful industrial performance.’ One year later the Green Paper on Innovation (European Commission 1995) defined the ‘European Paradox’ (p. 5), comparing research, which is considered excellent, with innovation, which is considered insufficient: ‘Compared with the scientific performance of its principal competitors, that of the EU is excellent’ and ‘One of Europe’s major weaknesses lies in its inferiority in terms of transforming the results of technological research and skills into innovations and competitive advantages.’ Although this document does not explain how research excellence can be measured, it defines ‘scientific performance’ in terms of ‘number of publications per million ecus’ (p. 6).

Since then, almost every EU document on science and technology refers to the ‘European Paradox’ and the scientific excellence of EU research. For example, Le Deuxième Rapport Européen sur les Indicateurs Scientifiques et Technologiques (European Commission 1997) states that ‘some evidence supports the existence of a European paradox by which the EU’s excellence in scientific research does not result in technological and commercial results’ (translated by the authors); the Third European Report on Science & Technology Indicators 2003 (European Commission 2003) states that ‘several authors have been emphasising the strength of Europe’s educational and science base on the one hand, but particularly its inability, on the other hand, to convert this advantage into strong technological and economic performance.’ In the last EU Framework Programme for Research and Innovation, Horizon 2020, important documents negotiated for years and finally passed by the European Parliament repeat the same concepts. For example: ‘A genuine change in our innovation systems and paradigms is therefore necessary. Still too often, excellence in higher education, research and innovation, while clearly existing across the EU, remains fragmented’ and ‘It will address the European paradox, since it will capitalize EU’s strong research base and find new innovative approaches to ensure a more competitive, sustainable and resource-efficient manufacturing sector.’ (European Commission 2011: 6, 30).

The inconsistencies of the European paradox have been clearly established (Dosi et al. 2006); Albarrán et al. (2010) make a thorough review of the issue and describe how the European Commission is more interested in publication shares than in citation impact indicators. Our study further supports the view that the EU’s presumption of research excellence originates from a misleading research assessment, and that the EU’s current research policy may lead the EU to ever decreasing levels of competitiveness in a knowledge-based economy. A rather recent EC report, ‘Europe’s twin deficits: Excellence and innovation in new sectors’ (Sachwald 2015), strongly supports our thesis of European weakness in high impact research, but does not seem to have yet significantly impacted EU policies.

4. Key previous findings

In this and the following sections, we will discuss various measures of US and EU performance, typically at the top of the citation distributions. Some of these measures are based on fractional counts, where national authorship is apportioned over the various corporate addresses attached to the paper.
Some of these are based on whole counts, where national authorship is credited to each author from an individual country. Some are based on various categories of exclusivity, for example papers with only US or only EU authors, or papers jointly authored by US and EU authors as well as authors from other countries, etc. A most important point is that the findings are almost independent of the various ways in which authorship is apportioned—the EU/US success ratio at the upper end of the citation distribution is often down in the range of 0.5 or less, independent of the details of how national authorship is apportioned. That is, the USA is twice as effective as the EU in the production of high impact research.

We start with the data in the US Science & Engineering Indicators (S&EI; National Science Board 2014), which show that EU performance, compared with the USA, decreases monotonically as we ascend the citation scale, and that the EU/US ratio in the top 1% is roughly 0.5—that is, the USA has twice as many very highly cited papers per paper published. Specifically, Fig. 1A and B are based on data in appendix table 5–58 in S&EI, 2014. The author address counts are fractional, and the citations are from a three-year publication window with a two-year lag—that is, for example, references made in the 2002 Science Citation Index tapes to papers published in 1998–2000.

Figure 1. EU/USA ratio of the proportion of papers in all research fields distributed by citation percentiles. (A) 2002, 1.8 million papers. (B) 2012, 2.4 million papers. Source: National Science Board (2014).

In the 2002 data, 57 per cent of the top centile papers are from the USA, whereas 30.8 per cent of all the papers are from the USA, giving the USA a top centile performance ratio of 1.85. Similarly, 28.2 per cent of the top centile papers are from the EU, while 35.6 per cent of all papers are from the EU, giving the EU a top centile performance ratio of 0.79. The EU/US percentile performance ratio shown in Fig. 1A is then 0.79/1.85, or 0.43. Note that this EU/US performance ratio rises somewhat over the next decade, to 0.94/1.74, or 0.54, for 2012, but the EU performance is still only about half that of the USA.

The weak EU/US performance ratio at the top 1% level shown in Fig. 1 for all fields, together with the modest improvement over the decade from 2002 to 2012, is shown to hold for individual fields in Table 1. Note that in this S&EI report, the fields are defined in terms of journal sets created specifically for the SI 1972 report (National Science Board 1972), and expanded over the years (Science Indicators was renamed Science & Engineering Indicators in 1984) as the underlying WOS journal coverage has expanded.

Table 1. Top 1% EU/US percentile performance ratios by field (citing years 2002 and 2012)

Field                  2002   2012
Physics                0.50   0.55
Chemistry              0.37   0.46
Medical Sciences       0.48   0.60
Biological Sciences    0.40   0.47
Mathematics            0.42   0.45
All fields             0.43   0.54

Source: National Science Board (2014, appendix table 5–58).
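As a check on the arithmetic above, the following minimal sketch (ours; Python is our choice of notation and the helper name is hypothetical) reproduces the 2002 top-centile figures from the shares quoted in the text:

```python
# Reproduces the 2002 top-centile calculation quoted above (Fig. 1A).
# The helper name is ours; the four shares are the S&EI 2002 values
# cited in the text.

def performance_ratio(share_of_top_centile, share_of_all_papers):
    """A region's share of top-1% papers divided by its share of all papers."""
    return share_of_top_centile / share_of_all_papers

usa = performance_ratio(0.570, 0.308)  # 57.0% of top centile, 30.8% of all -> 1.85
eu = performance_ratio(0.282, 0.356)   # 28.2% of top centile, 35.6% of all -> 0.79

print(f"USA {usa:.2f}, EU {eu:.2f}, EU/US {eu/usa:.2f}")
# USA 1.85, EU 0.79, EU/US 0.43
```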
Consistent with these data, other studies have compared the scientific production of the USA and the EU at different citation levels (reviewed by Albarrán et al. 2010), finding consistent leadership of the USA at high citation levels. Specifically, King (2004) compared the EU-15 and the USA in terms of total numbers of publications and citations, and the top 1% of highly cited papers, in two periods, 1993–1997 and 1997–2001. The USA was ahead in all cases. Although differences in total numbers of papers or citations were small, they were higher for citations than for papers, and in the upper 1% layer the advantage of the USA was much higher: two-fold and 70 per cent higher for the first and second periods, respectively. Herranz and Ruiz-Castillo (2013) perform the comparison using a pair of high- and low-impact indicators, and the mean citation rate, in 219 scientific sub-fields, finding that although the EU has more publications than the USA in 113 out of 219 sub-fields, the USA is ahead of the EU in 189 sub-fields in terms of the high-impact indicator.

Other papers also support our general thesis of strong US performance at the upper end of the citation distribution (Leydesdorff and Wagner 2009; Leydesdorff et al. 2014). In the 2014 paper they note that ‘the United States remains far more productive in the top 1% of all papers; China drops out of the competition for elite status; and the EU28 increases its share amongst the top cited papers from 2000 to 2010’ (Leydesdorff et al. 2014: 606). These results are obtained using an integer counting method to allocate publications to a country whenever the country’s name is present in the publication’s address line, and, more importantly, they used a detailed normalization in terms of the 226 WOS categories of journals. We used a much less detailed, field-based normalization in the new data for this paper; this difference in normalization may account for the very strong top level performance we find for China in our data, particularly in Chemistry, in the elite, top 100–495 cited papers we analyzed. The reason for this is that, if Chinese papers are relatively concentrated in hot areas of Chemistry, such as graphene and solar cells, which we do see in our data, while European chemical science is more broadly spread across older, more traditional areas of Chemistry which are not as highly cited, then the more detailed WOS journal disaggregation used by Leydesdorff et al. (2014) would tend to hide the strength of Chinese Chemistry in the hot areas which count today.

The observation that China may be concentrating in hot areas, while the EU is not, is quite consistent with a particularly interesting analysis based on top-rated scientists and institutions: Bonaccorsi (2007) concludes that European science is severely under-represented in the upper tail of scientific quality, that European science is weaker in fields of rapid growth (hot areas), and that much of the weakness can be attributed to the difficulty of the traditional European communities and institutions in adapting to change. The importance of appropriately emphasizing the hot areas of science was masterfully described by the Nobel laureate Peter Medawar in his book ‘Advice to a Young Scientist’: ‘any scientist of any age who wants to make important discoveries must study important problems’ (Medawar 1979: 13). Perhaps an important study should be made of whether European science is failing to emphasize the most important problems.
An especially relevant study, a precursor to this paper, simultaneously studied the numbers of Nobel prize-winning discoveries and the numbers of very highly cited papers (Rodríguez-Navarro 2015), reporting a remarkable similarity between the USA/European performance ratios in the attainment of these discoveries and in the extrapolated production of very highly cited papers. In both cases, the research success of the USA is two to three times that of Europe (Table 2). The consistent association of the two performance ratios in the three fields of study (chemistry, physics, and medicine or physiology) rules out a chance association and gives strong support to the notion of a two- or three-to-one superiority of US over European science. This association of Nobel awards and very high citations is not at all surprising, because it was demonstrated by Eugene Garfield a few years after the Science Citation Index was launched in 1964 (e.g. Garfield 1970, 1986).

Table 2. US/EU research performance ratio based on the frequencies of papers at the citation range that in the USA corresponds to the frequency of Nobel prizes

Year   Chemistry   Physics   Biochemistry & Molecular Biology
1982   1.48        2.41      2.41
1983   1.83                  2.98
1984               3.16
1985   1.86                  2.64
1986               2.48
1987   2.68        5.01      5.20
1988               3.68      4.03
1989   3.87
1990               3.57
1991   2.30                  2.01
1992               1.75
1993   2.23                  2.27
1994               4.30
1995   1.49                  3.16
1996               2.90
1997   1.00                  3.76
1998               2.85
1999   2.85                  2.31
2000               3.06
2001   3.76                  3.05
2002               3.50
2003   5.70                  2.85
2004   3.70        2.85
2005   3.22                  2.01
2006   3.41        2.30      2.92
2007   3.16        1.94      3.27
Mean   2.78        3.05      2.99
SD     1.18        0.87      0.85

Source: Rodríguez-Navarro (2015).

The frequency of discoveries that are awarded Nobel Prizes is very low, less than one per year in Europe, and their association with very highly cited papers occurs when the frequencies are similar. These low frequencies cannot be empirically measured and have to be estimated by extrapolation, based on logarithmic binning, which helps in studying highly skewed distributions (Fig. 2A), together with the mathematical characteristics of the high-citation tail of the citation distribution. Across both the whole range (Fig. 2A) and the high-citation tail (Fig. 2B) of citations, the data reveal the same pattern described for the S&EI data (National Science Board 2014): EU dominance in lowly but not in highly cited papers (Fig. 2A and B), and US dominance in the highly cited tail (Fig. 2B). To study the mathematical characteristics of the high-citation tail, the logarithms of paper frequencies can be plotted versus the logarithmic citation bins or versus the logarithms of citation percentiles (Fig. 2C and D); a minimal sketch of such binning is given below.
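As an illustration of the logarithmic binning just described, the sketch below (ours, with synthetic heavy-tailed citation counts, not the data of Rodríguez-Navarro 2015) pools papers into geometrically growing citation bins so that the sparse upper tail can be examined on a log–log scale:

```python
# Logarithmic binning of a citation distribution (illustrative sketch only;
# the counts are synthetic, not the chemistry data analyzed in the paper).
import numpy as np

rng = np.random.default_rng(0)
citations = rng.pareto(1.8, 50_000).astype(int)  # synthetic heavy-tailed counts

# Bin edges grow geometrically (1-2, 2-4, 4-8, ...), so the sparse counts in
# the high-citation tail are pooled into statistically usable frequencies.
edges = 2 ** np.arange(0, 14)
freq, _ = np.histogram(citations, bins=edges)

# Normalizing by bin width gives a density; plotting log(density) against
# log(bin center) makes the tail approximately linear, and its slope shows
# how quickly the number of very highly cited papers falls off.
centers = np.sqrt(edges[:-1] * edges[1:])
density = freq / np.diff(edges)
for c, d in zip(centers, density):
    if d > 0:
        print(f"bin center {c:8.1f}   log10 density {np.log10(d):6.2f}")
```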
Both of these plotting strategies lead to the same conclusion: the further up the citation distribution, the stronger the US advantage over the EU.

Figure 2. Frequency distribution of EU and US papers in chemistry published in 2003. (A) Frequency plot using logarithmic binning. (B) Amplification of the upper tail in A. (C and D) Double logarithmic plot of the upper tail in A using either logarithmic binning or percentile grouping. Source: Rodríguez-Navarro (2015).

5. New data—general comments and methods

In this study, we generalize the association between the numbers of Nobel Prize achievements and of highly cited papers, and suggest that highly cited science reflects the research that produces revolutionary science in the sense proposed by Thomas Kuhn (see e.g. Martin and Irvine 1983; Martin 1996; Rodríguez-Navarro 2011a, 2012). Therefore, to further study the US/EU research performance, we looked at the papers near the top of the citation distribution in terms of three regions: the European research area (EU), the USA, and other; papers were counted either individually or using a fractional counting method. Individual papers were classified into mutually exclusive sets: papers with US authors only, papers with EU authors only, papers with both US and EU authors but no others, and papers with other authors only. These identifications were performed in five citation categories: top 100, 200, 300, 400, and 495. This method is simple and allows a rapid analysis of many years in several fields. Although it does not take into account that the high-citation tails of the citation distributions of different countries and research areas might be divergent (Fig. 2C and D; Rodríguez-Navarro 2015), it is sufficiently reliable for the USA–EU comparison, which is the main purpose of this study.

Papers were identified as described previously (Rodríguez-Navarro 2015), except for the field of Clinical Medicine, for which the Research Area option (SU=) was constructed to include all Research Areas with a clinical activity. To identify papers on hot topics, we used the Topic (TS = graphene) and Publication (PO = nature nanotechnology OR nano letters OR nanoscale OR nano energy) options. In each case, the top 500 most cited papers were downloaded as text files, including the institutional addresses, which were used to assign the three categories mentioned above. In almost every data set there were a few papers for which no institutional addresses were given, and occasionally not even individual authors. In the end, a standard set of the 495 top cited papers for which country identification could be found was used in each data set, instead of the full set of 500 downloaded papers.
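The two counting schemes used in this study (the mutually exclusive address categories described above, and the fractional counting used in the next section) can be sketched as follows. This is our illustration, not the authors' code, and it assumes each paper has already been reduced to a list of region tags derived from its institutional addresses:

```python
# Sketch of the two counting schemes (our illustration, not the authors'
# code). A paper is represented by one region tag per institutional
# address: 'US', 'EU', or 'OTHER'.
from collections import Counter

def exclusive_category(regions):
    """Assign a paper to one of the mutually exclusive address sets."""
    present = set(regions)
    if present == {'US'}:
        return 'US only'
    if present == {'EU'}:
        return 'EU only'
    if present == {'US', 'EU'}:
        return 'US and EU only'
    return 'other'  # in this sketch, any combination involving 'OTHER'

def fractional_counts(regions):
    """Apportion one paper over regions in proportion to address counts."""
    n = len(regions)
    return {region: k / n for region, k in Counter(regions).items()}

# Example: a paper with two US addresses and one EU address.
paper = ['US', 'US', 'EU']
print(exclusive_category(paper))  # US and EU only
print(fractional_counts(paper))   # {'US': 0.666..., 'EU': 0.333...}
```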
When using a fractional counting method, in which each paper was fractionally assigned to the regions corresponding to all the author addresses, the differences from top 100 to top 495 were not large, but there did seem to be a small but rather consistent increase in the USA/EU ratio: that is, even in the very top 500 cited papers, the more selective the analysis, the greater the performance gap between the USA and the EU, consistent with the observations from the much larger data sets discussed earlier.

6. The persistence of the USA/EU research performance ratio and the proliferation of authors and institutions

In the way described above, we investigated four fields of research (chemistry, physics, biochemistry & molecular biology, and clinical medicine) every three years from 1990 to 2011. Figure 3, for chemistry, is an example of the data that we obtained. The USA/EU paper ratios are substantially above 2 at the beginning (1990) and end (2011) of the time period. Although the ratio fluctuates somewhat in the intermediate years, it never approaches 1, counted either fractionally (Fig. 3B) or exclusively (Fig. 3A). Since there are more EU than USA chemistry papers (Fig. 2), the US superiority at the top of the citation distribution is even more impressive.

Figure 3. Annual count of the number of papers in the top-100 most cited papers in chemistry by address category. (A) Integer counting. (B) Fractional counting.

Another extremely interesting finding is the rapid rise of publications by the other countries in the top 100 (and top 495) chemistry papers. This is almost entirely due to the rapid ascent of Chinese papers to the top of the chemistry citation distribution. In fact, the most highly cited paper in the 2011 set, ‘Simultaneous enhancement of open-circuit voltage, short-circuit current density, and fill factor in polymer solar cells’, is authored by scientists at four different Chinese institutions (He et al. 2011). Furthermore, 5 of the 20 top cited papers in 2011 are authored entirely by Chinese scientists, generally in the areas of solar cells and graphene.

Table 3 summarizes the USA/EU paper ratios for the top 100 and top 495 papers, showing both the ratio between the numbers of papers in the US only and EU only address categories, and the ratio of the fractional counts of US and EU author addresses. Aside from annual variability and field peculiarities (to be discussed elsewhere), the results show a clear persistence of the propensity of the US research system to produce a larger number of highly cited papers than the EU system.
Table 3. US/EU research performance ratios based on the annual number of papers in the top-100 and top-495 most cited papers, for different fields and using two counting methods

                                              US only/EU only    US/EU fractional counting
Field and number of top-cited papers           1990     2011       1990     2011
Biochemistry & Molecular Biology—top-100       2.36     1.61       2.86     1.76
Biochemistry & Molecular Biology—top-495       2.03     1.85       2.81     1.72
Chemistry—top-100                              2.78     2.23       2.48     2.80
Chemistry—top-495                              1.79     1.43       1.73     1.66
Clinical Medicine—top-100                      3.16     1.80       5.00     1.40
Clinical Medicine—top-495                      2.22     2.17       2.64     1.16
Physics—top-100                                2.70     2.00       1.76     0.82
Physics—top-495                                2.42     1.32       1.61     0.77

The only fields in which the USA and EU research performances are converging are Clinical Medicine and Physics, and then only because in these fields there has been a rapid increase in collaborative research (Fig. 4), and these multi-region papers appear to be the ones bringing up the EU performance. In 14 of the 16 cases, the US performance is stronger in the top 100 papers than in the top 495—showing that even in this very select set of highly cited papers, the more highly cited the papers are, the stronger the US position is relative to Europe, with the only exceptions being the year 2011 in the two biologically related fields.

Figure 4. Annual count of the number of papers and addresses in the top-495 most cited papers in clinical medicine. (A) Mutually exclusive sets. (B) Number of addresses per paper.

A second general observation is that, while the EU performance relative to the USA has improved markedly from 1990 to 2011 in almost every one of the 16 cases, it is only in the fractional counts in Physics in 2011 that the EU performance appears to be stronger than that of the USA. In this field there is an enormous amount of collaborative research, and it is quite likely that the strong European performance here is a result of collaborative world-class research being done by Europeans with their colleagues from the USA and other countries, and not of European science on its own.

Previous studies have noted that over the last decades there has been a rapid increase in the number of papers published, and in the number of authors on the papers. This increase has been attributed by some to a misguided science evaluation process, one which encourages scientists to put their names on many papers, and thus enhance their apparent productivity. A particularly clear contribution of the evaluation processes to this effect was shown by Linda Butler (2003).
She measured a notable increase in the number of papers from Australian scientists immediately following the inclusion of numbers of published papers as part of the evaluation criteria for future grants. Olle Persson and colleagues have addressed some of the complexities and unknown effects this bibliometric inflation adds to the evaluative process (Persson et al. 2004; Persson 2010). This rapid increase in institutional and individual authorship over the last two decades appears to be clearly present in our sets of very highly cited papers, and both EU and US scientists seem to be equally guilty.

7. Hot areas

A bibliometric study of hot areas is beyond the scope of the present study, but a basic analysis of one such area illustrates the weak response of EU research to a scientific challenge. For this purpose, we selected graphene, because EU research played a central role in the outstanding breakthrough in graphene research that took place in 2004–2005. This EU leadership is evidenced by the Nobel Prize in Physics 2010, which was awarded to two researchers at the University of Manchester. This breakthrough triggered an explosion of graphene research; the number of papers has been growing exponentially since 2008. However, despite its leadership in 2005, the EU responded slowly and its research on graphene lagged behind that of other countries, especially China. This EU lag occurs in terms of both total (Fig. 5) and highly cited (Table 4) papers. The countries’ shares of highly cited papers on graphene in 2012 and 2013 show the repeatedly described EU lag with reference to the USA. More importantly, in addition to the leadership of China, the countries’ shares reveal that the most advanced countries in Europe, such as Germany and the UK, lag behind South Korea and Singapore.

Table 4. Fractional counts in the 100 most cited papers in 2012 and 2013 on graphene and in journals of nanotechnology, by the geographical origin of authors (a)

Country        Graphene 2012   Nanojournals 2012 (b)   Graphene 2013   Nanojournals 2013 (b)
EU             16.62           13.50                   10.70           15.26
USA            29.84           45.09                   42.81           48.60
China          28.62           21.17                   22.94           18.69
Singapore       7.69            3.68                    4.65            4.36
South Korea     7.38            7.67                    7.91            5.92
Germany         5.23            1.84                    2.14            5.30
Spain           4.00            1.84                    0.92            1.56
Japan           3.69            3.07                    3.98            0.93
UK              2.77            2.46                    4.59            2.18
Taiwan          2.77            2.15                    2.50            1.56

(a) Only most productive countries. (b) Nanotechnology journals: Nature Nanotechnology, ACS Nano, Nano Letters, Nanoscale, and Nano Energy.

Figure 5. Annual count of the number of papers on graphene. World and domestic papers from China and the EU (no external addresses: China only and EU only).

To rule out the possibility that a search restricted to a specific topic, graphene, might provide a misleading response for unexpected reasons, we made a second search based on nanotechnology journals, which broadens the range of hot topics covered; both search forms are sketched below.
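For concreteness, the two search forms can be written as the following query strings, reproduced from the field tags quoted in Section 5 (the constant names are ours, and the exact field-tag syntax may vary with the version of the search interface):

```python
# The two queries used for the hot-area searches, as quoted in Section 5
# (constant names are ours; exact field-tag syntax may vary by interface).
TOPIC_QUERY = "TS = graphene"
PUBLICATION_QUERY = ("PO = nature nanotechnology OR nano letters "
                     "OR nanoscale OR nano energy")
```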
The results of this new search, although not identical to those of the previous one, were very similar (Table 4). Table 4 summarizes the fractional counts in the 100 most cited papers; similar counts using the 200 or 500 most cited papers provided similar results that lead to the same conclusions about the backwardness of EU research on graphene and other hot areas.

8. Discussion and conclusions

In the previous sections we have shown that many different analyses all lead to the same conclusion: at the upper end of the research continuum, where important discoveries are made, the US research performance is usually between 1.5 and 3 times stronger than that of Europe, depending on the research area and method of calculation. These observations and others published previously rule out the existence of a European paradox (e.g. Dosi et al. 2006; Herranz and Ruiz-Castillo 2013). Our conclusions are based on paper counts in the high-citation tail of the citation distribution, while the only data supporting the European and American (Shelton 2008) paradoxes are total numbers of papers.

Our approach does not imply that papers with citations below our counting thresholds (at least 99 per cent of all papers) are not important. In the experimental sciences, we do not believe that lowly cited papers are dispensable for the progress of science (Cole and Cole 1972; Bornmann et al. 2010). In fact, in Kuhn’s (1962) terms, revolutionary science could not exist without normal science; to continue the football example given above, a match consisting only of shots on goal does not exist. The sentence ‘nurturing revolutionary science requires its embedding into a sufficiently large volume of related normal science output, facilitating the validation of the revolutionary work’ (Andras 2011: 103) is an acceptable description of the issue. As has previously been said of patents (Narin et al. 2004), we do not mean that every important paper is highly cited, or that every highly cited paper is important. The approach of counting highly cited papers is based on the idea that ‘a very large amount of ultimate impact on society may be centered around a few relatively rare, extremely high impact discoveries, papers, invention, etc.’ (Narin and Hamilton 1996). The number of these infrequent discoveries correlates with the total number of papers in a few specific cases but not in most cases (Rodríguez-Navarro 2011b, 2015). In contrast, the correlation between the number of Nobel discoveries and the extension and characteristics of the high-citation tail has been established (Rodríguez-Navarro 2011a, 2015). Overall, the use of highly cited papers for research assessment is growing (e.g. Tijssen et al. 2002; King 2004; Allik 2013; Ponomarev et al. 2014); the introduction of the PP (top 1%) indicator in the CWTS Leiden Ranking (Waltman et al. 2012; http://www.leidenranking.com/) provides further support for the use of highly cited papers in research evaluation.

The necessary existence of normal science as a basis of scientific progress raises an interesting question about how much normal science is necessary for that progress. The repeated finding of US leadership over the EU in highly cited papers, but not in lowly cited papers, clearly indicates that the EU research system has an important problem of research efficiency. This problem can be expressed numerically in terms of the ratio between the number of papers and the number of discoveries awarded the Nobel Prize (Rodríguez-Navarro 2011b).
Below we discuss some causes of the low performance of the EU research system, but we can note in advance that, although publish or perish pressure is an obvious cause (Butler 2003; Rodríguez-Navarro 2009), the overall causes that explain the low production of revolutionary science are complex (Andras 2011). Expressing our findings in terms of population, we can conclude that the number of important discoveries per inhabitant is at least three times higher in the USA than in the EU, and might be up to 4.5 times higher if we attend to very highly cited papers (Table 2). These are impressive figures that deserve analysis of both their causes and their consequences.

The fundamental question is why the US performance is so superior to that of Europe at a time when the European economy is as large as that of the USA. Several causes could explain the much higher US research performance described above, but a lower level of research activity must be excluded because, as we have repeated in this study, this superiority appears only in highly cited papers, which are a very small proportion of the total. Similarly, a different intrinsic capacity of researchers on the two sides of the Atlantic can also be ruled out, because the populations on both sides are culturally very similar and because many successful US researchers moved there from Europe (Stephan and Levin 2001).

The most obvious cause of the low ratio between highly and lowly influential papers in the EU is EU research policy. Research policy makers have to deal with different choices (Weinberg 1962), and to be successful they have to make the correct decisions. But correct decisions cannot be taken without a solid analysis of the actual situation, which is lacking in the EU. Thus, EU research policy is based on the supposition of a European paradox and on wrong conclusions about the excellence of EU research. This erroneous notion of excellent research and weak innovation is spread among research policy makers and appears in important documents of the European Commission, as we have described. Ultimately, research funding is improperly allocated.

Further evidence suggesting that a wrong research policy is the cause of the low production of highly cited papers in EU research comes from the comparison of the EU with either China or South Korea in graphene research. In 2005, China and South Korea published 14 and 5 papers on graphene, respectively, and none was highly cited. By 2012–2013, the EU leadership of 2005 had disappeared (Fig. 5); China was ahead of the EU, and South Korea and Singapore were ahead of both Germany and the UK (Table 4). Although of lesser importance for our conclusions, it is worth mentioning that the rapid increase in citations to Chinese research publications may be accelerated by ‘clubbing’ self-citations (Tang et al. 2015).

The nonexistence of the European paradox leads one to consider that the EU manufacturing sector might be more competitive than thought, but that, in contrast with the US sector, it lacks a sufficient base of scientific discoveries on which to develop its innovative capacity. It must be emphasized once more that the links between academic research and innovation are strong in the USA (Narin et al. 1997). These links might not occur in other countries, but deficiencies in the manufacturing sector cannot be blamed if academic research does not produce discoveries.
Other studies have demonstrated that, in economic terms, the social rate of return from academic research is very high (Mansfield 1991), but it seems obvious that this conclusion cannot apply to all research, regardless of its type or rate of success. The literature on the returns of public research is extensive and generally agrees on the difficulty of measuring these returns properly (e.g. Salter and Martin 2001). However, much less has been investigated regarding how to measure the product of research. Although it seems clear that the product of research that influences the economy is the discovery, most studies estimate research by using either R&D investments or the number of published papers, which in no case can be taken as proxies for discoveries. If we assume that the number of important discoveries per inhabitant is four times higher in the USA than in the EU, it seems obvious that the returns of public research for the US and EU societies will be very different, but we do not have reliable data. Salter and Martin (2001: 529) state:

Currently, we do not have the robust and reliable methodological tools needed to state with any certainty what the benefits of additional public support for science might be, other than suggesting that some support is necessary to ensure that there is a ‘critical mass’ of research activities. The literature available has shown that there are considerable differences across areas of research and across countries and that additional research is needed to better define and understand these differences.

The complexity of this issue increases when the study is extended to the peripheral regions of the EU (e.g. Bilbao-Osorio and Rodríguez-Pose 2004; Rodríguez-Pose and Crescenzi 2008), which have a weaker research tradition. In these cases the social returns of research might be low, and it is conceivable that some EU economists doubt the economic importance of that research. This would explain why the cuts of about 50 per cent over the last four years to academic research funding in Spain (funding that had always been very low) have not been questioned by the European Commission. Low confidence in the economic importance of research leads to lower funding and political attention; this in turn leads to fewer discoveries, and again to further funding cuts.

In addition to these fundamental problems there are other, apparently less fundamental, factors that might be highly important. We will now indulge in some speculation about two of these, in the hope that further research seriously investigates the situation. In many ways, the US research support system is quite decentralized, competitive, and freewheeling. A scientist with a fundamental biochemical research proposal can take that proposal to the NIH, to the NSF, to the Department of Defense, or to many different private support sources, which increases the probability of funding. This multifaceted, competitive support system is especially important for young scientists, who are not constrained in the USA by a monolithic institutional framework. A study of the ages at which European versus US scientists attain independence in research support, and of the diversity of their support over their research careers, would certainly provide insight into this question. We agree with Andras (2011: 104):

Research funding agencies usually prefer low-risk normal science research. The larger the volume of a mature science, the stronger is the preference of funding agencies to fund normal science research.
One possibility to increase willingness to support revolutionary science is to have more competition between funding agencies to fund high-impact research.

We also note that the general field of scientometrics is currently rather quiescent in the USA compared with Europe. Perhaps the relatively laissez-faire system of research support in the USA, functioning without any centralized responsibility, reduces the need for system-wide analysis. Is this a net benefit, or a net loss, to the productivity of the USA? One of the objectives of this paper is to encourage our colleagues to begin to seriously address some of these fundamental European policy issues.

References

Albarrán P., et al. (2010) ‘A Comparison of the Scientific Performance of the U.S. and the European Union at the Turn of the 21st Century’, Scientometrics, 85: 329–44.
Allik J. (2013) ‘Factors Affecting Bibliometric Indicators of Scientific Quality’, Trames, 17: 199–214.
Andras P. (2011) ‘Research: Metrics, Quality, and Management Implications’, Research Evaluation, 20: 90–106.
Bilbao-Osorio B., Rodríguez-Pose A. (2004) ‘From R&D to Innovation and Economic Growth in the EU’, Growth and Change, 35: 434–55.
Bonaccorsi A. (2007) ‘Explaining Poor Performance of European Science: Institutions Versus Policies’, Science and Public Policy, 34: 303–16.
Bornmann L., de Moya Anegón F., Leydesdorff L. (2010) ‘Do Scientific Advancements Lean on the Shoulders of Giants? A Bibliometric Investigation of the Ortega Hypothesis’, PLoS One, 5/10: e13327.
Butler L. (2003) ‘Explaining Australia’s Increased Share of ISI Publications—the Effects of a Funding Formula Based on Publication Counts’, Research Policy, 32: 143–55.
Cole J. R., Cole S. (1972) ‘The Ortega Hypothesis—Citation Analysis Suggests that only a Few Scientists Contribute to Scientific Progress’, Science, 178: 368–75.
Dosi G., Llerena P., Labini M. S. (2006) ‘The Relationships between Science, Technologies and their Industrial Exploitation: An Illustration through the Myths and Realities of the So-called “European Paradox”’, Research Policy, 35: 1450–64.
European Commission (1994) The European Report on Science and Technology Indicators 1994. Executive Summary. Directorate-General XII. Luxembourg: Office for Official Publications of the European Communities. ISBN 92-826-9010-5.
European Commission (1995) Green Paper on Innovation. <http://europa.eu/documents/comm/green_papers/pdf/com95_688_en.pdf> accessed 15 Dec 2015.
European Commission (1997) Le Deuxième Rapport Européen sur les Indicateurs Scientifiques et Technologiques 1997. Direction Générale XII. Brussels. ISBN 92-828-0272-8.
European Commission (2003) Third European Report on Science & Technology Indicators. Directorate General for Research Information and Communication Unit. Luxembourg: Office for Official Publications of the European Communities. ISBN 92-894-1795-1.
European Commission (2011) Proposal for a Decision of the European Parliament and of the Council on the Strategic Innovation Agenda of the European Institute of Innovation and Technology (EIT): the Contribution of the EIT to a More Innovative Europe. COM(2011) 822 final; 2011/0387 (COD). <http://www.europarl.europa.eu/document/activities/cont/201203/20120316ATT41114/20120316ATT41114EN.pdf> accessed 26 Oct 2015.
Garfield E. (1970) ‘Citation Index for Studying Science’, Nature, 227: 669–71.
Garfield E. (1986) ‘Do Nobel Prize Winners Write Citation Classics?’, Current Contents, 23: 3–8. <http://garfield.library.upenn.edu/essays/v9p182y1986.pdf> accessed 14 Oct 2015.
He Z., et al. (2011) ‘Simultaneous Enhancement of Open-Circuit Voltage, Short-Circuit Current Density, and Fill Factor in Polymer Solar Cells’, Advanced Materials, 25: 4636–43.
Herranz N., Ruiz-Castillo J. (2013) ‘The End of the European Paradox’, Scientometrics, 95: 453–64.
King D. A. (2004) ‘The Scientific Impact of Nations, What Different Countries Get for Their Research Spending’, Nature, 430: 311–6.
Kosmulski M. (2013) ‘Family-Tree of Bibliometric Indices’, Journal of Informetrics, 7: 313–7.
Kuhn T. S. (1962) The Structure of Scientific Revolutions. Chicago: University of Chicago Press.
Leydesdorff L., Wagner C. S. (2009) ‘Is the United States Losing Ground in Science? A Global Perspective on the World Science System’, Scientometrics, 78: 23–36.
Leydesdorff L., Wagner C. S., Bornmann L. (2014) ‘The European Union, China, and the United States in the Top-1% and Top-10% Layers of Most-Frequently Cited Publications: Competition and Collaborations’, Journal of Informetrics, 8: 606–17.
Mansfield E. (1991) ‘Academic Research and Industrial Innovation’, Research Policy, 20: 1–12.
Martin B. R. (1996) ‘The Use of Multiple Indicators in the Assessment of Basic Research’, Scientometrics, 36: 343–62.
Martin B. R., Irvine J. (1983) ‘Assessing Basic Research, Some Partial Indicators of Scientific Progress in Radio Astronomy’, Research Policy, 12: 61–90.
Medawar P. B. (1979) Advice to a Young Scientist. Alfred P. Sloan Foundation. ISBN 0-465-00092-4.
Narin F., Hamilton K. S. (1996) ‘Bibliometric Performance Measures’, Scientometrics, 36: 293–310.
Narin F., Hamilton K. S., Olivastro D. (1997) ‘The Increasing Linkage between U.S. Technology and Public Science’, Research Policy, 26: 317–30.
Narin F., Breitzman A., Thomas P. (2004) ‘Using Patent Citation Indicators to Manage a Stock Portfolio’, in Moed H. F., et al. (eds) Handbook of Quantitative Science and Technology Research, pp. 553–68. The Netherlands: Kluwer Academic Publishers.
National Science Board (1972) Science and Engineering Indicators. Washington, DC: NSF. <http://files.eric.ed.gov/fulltext/ED084150.pdf> accessed 14 Oct 2015.
National Science Board (2014) Science & Engineering Indicators. <http://www.nsf.gov/statistics/seind14/content/etc/nsb1401.pdf>, <http://www.nsf.gov/statistics/seind14/content/chapter-5/at05-60.pdf> accessed 15 Oct 2015.
OECD (2000) Policy Brief: Science, Technology and Innovation in the New Economy. <http://www.oecd.org/science/sci-tech/1918259.pdf> accessed 14 Oct 2015.
Persson O. (2010) ‘Are Highly Cited Papers More International?’, Scientometrics, 83: 397–401.
Persson O., Glänzel W., Danell R. (2004) ‘Inflationary Bibliometric Values: The Role of Scientific Collaboration and the Need for Relative Indicators in Evaluative Studies’, Scientometrics, 60: 421–32.
Ponomarev I. V., et al. (2014) ‘Predicting Highly Cited Papers: A Method for Early Detection of Candidate Breakthroughs’, Technological Forecasting & Social Change, 81: 49–55.
Rodríguez-Navarro A. (2009) ‘Sound Research, Unimportant Discoveries: Research, Universities, and Formal Evaluation of Research in Spain’, Journal of the American Society for Information Science and Technology, 60: 1845–58.
Rodríguez-Navarro A. (2011a) ‘A Simple Index for the High-Citation Tail of Citation Distribution to Quantify Research Performance in Countries and Institutions’, PLoS One, 6/5: e20510.
Rodríguez-Navarro A. (2011b) ‘Measuring Research Excellence. Number of Nobel Prize Achievements Versus Conventional Bibliometric Indicators’, Journal of Documentation, 67: 582–600.
Rodríguez-Navarro A. (2012) ‘Counting Highly Cited Papers for University Research Assessment: Conceptual and Technical Issues’, PLoS One, 7/10: e47210.
Rodríguez-Navarro A. (2015) ‘Research Assessment Based on Infrequent Achievements: A Comparison of the United States and Europe in Terms of Highly Cited Papers and Nobel Prizes’, Journal of the Association for Information Science and Technology, 67: 731–40.
Rodríguez-Pose A., Crescenzi R. (2008) ‘R&D, Spillovers, Innovation Systems and the Genesis of Regional Growth in Europe’, Regional Studies, 42/1: 51–67.
Sachwald F. (2015) ‘Europe’s Twin Deficits: Excellence and Innovation in New Sectors’. Policy paper by the Research, Innovation, and Science Policy Experts (RISE). European Commission, Directorate-General for Research and Innovation. EUR 27371 EN.
Salter A. J., Martin B. R. (2001) ‘The Economic Benefits of Publicly Funded Basic Research: A Critical Review’, Research Policy, 30: 509–32.
Shelton R. D. (2008) ‘Relations between National Research Investment and Publication Output: Application to an American Paradox’, Scientometrics, 74: 191–205.
Stephan P. E., Levin S. G. (2001) ‘Exceptional Contributions to US Science by Foreign-Born and Foreign-Educated’, Population Research and Policy Review, 20: 59–79.
Tang L., Shapira P., Youtie J. (2015) ‘Is there a Clubbing Effect Underlying Chinese Research Citation Increases?’, Journal of the Association for Information Science and Technology, 66: 1923–32.
Tijssen R. J. W., Visser M. S., van Leeuwen T. N. (2002) ‘Benchmarking International Scientific Excellence: Are Highly Cited Research Papers an Appropriate Frame of Reference?’, Scientometrics, 54: 381–97.
Waltman L., et al. (2012) ‘The Leiden Ranking 2011/2012: Data Collection, Indicators, and Interpretation’, Journal of the American Society for Information Science and Technology, 63: 2419–32.
Weinberg A. M. (1962) ‘Criteria for Scientific Choice’, Minerva, 1/2: 158–71. [Reprinted (2000) Minerva, 38: 253–69.]
[Reprinted (2000) Minerva, 38: 253–69.] © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com http://www.deepdyve.com/assets/images/DeepDyve-Logo-lg.png Science and Public Policy Oxford University Press

European Paradox or Delusion—Are European Science and Economy Outdated?

Loading next page...
 
/lp/ou_press/european-paradox-or-delusion-are-european-science-and-economy-outdated-mu4oi1Q5Zl
Publisher
Oxford University Press
Copyright
© The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
ISSN
0302-3427
eISSN
1471-5430
D.O.I.
10.1093/scipol/scx021
Publisher site
See Article on Publisher Site

Abstract

Abstract The European Union (EU) seems to presume that the mass production of European research papers indicates that Europe is a leading scientific power, and the so-called European paradox of strong science but weak technology is due to inefficiencies in the utilization of this top level European science by European industry. We fundamentally disagree, and will show that Europe lags far behind the USA in the production of important, highly cited research. We will show that there is a consistent weakening of European science as one ascends the citation scale, with the EU almost twice as effective in the production of minimal impact papers, while the USA is at least twice as effective in the production of very highly cited scientific papers, and garnering Nobel prizes. Only in the highly multinational, collaborative fields of Physics and Clinical Medicine does the EU seem to approach the USA in top scale impact. 1. Introduction In this paper, we will show persuasive evidence that the so-called European paradox (e.g., Dosi et al. 2006)—outstanding European scientific performance despite the second-rate European economic performance—is quite misleading, because it is based on simple counts of research papers, instead of on a much more relevant analysis of the European involvement in promoting the advancement of important science. In numerical terms, important science can be expressed as highly cited papers, as very highly cited, Nobel level papers, and as Nobel prizes. In fact we will show, using a variety of counting techniques and in numerous fields and journal sets, that the farther you go up the citation distribution the weaker is the European scientific performance, and that by various measures of research papers that are likely to be of significance—the very highly cited papers with important discoveries—the US position is at least twice as strong as the European position. Furthermore, this US lead over European science has, at best, only decreased a small amount over the last 20 years. In fact the only real change in position we see from 1990 to 2011 is a remarkable recent increase in the number of highly cited Chinese papers in almost every set of papers studied, which has also been observed by others (e.g. Leydesdorff et al. 2014). There is also some indication of a modest increase in European performance in Clinical Medicine and Physics, largely due, not to European Union (EU) scientists per se, but rather to the increasing co-authorship of European scientist with USA and other countries at the top of the citation distribution. Furthermore, European performance is dangerously low in some hot science areas of high technological relevance, such as graphene. It goes without saying that most of the European economies have waddled along like ducks out of water over the last decade or so, at least partially because of bureaucratic meddling, and because of the EU’s chronic failure to encourage ambitious entrepreneurs (Economist 2012, http://www.economist.com/node/21559618, accessed 21 December 2015). We raise the question here whether some similar factors have been driving much of European science to also waddle along, away from significant, highly cited discoveries, and toward the large-scale production of relatively superfluous papers, in order to satisfy inappropriate, bureaucratic, publish or perish criteria. 
We acknowledge that the European scientific system is still capable of brilliant accomplishment—as the outstanding performance of the European Organization for Nuclear Research (CERN) demonstrates, and the remarkable achievement and science of landing on a comet, and transmitting pictures and data back to earth demonstrate. We also wonder why there has been a Cambrian explosion of h-, g-, and w-indices and other rather elegant measures of individual and occasionally institutional performance (e.g. Kosmulski 2013) within the EU, without much EU research asking the much larger question of what European science is really accomplishing for the large investment in science by the European Community and European countries. To use a sports analogy this profusion of new indicators is somewhat like a system where one counts the kicks in European football rather than counting the goals—no matter how sophisticated the measures of angles and velocity and attempts is, that is not at all the same as measuring how often goals are scored. 2. The importance of demonstrating the importance of science Both of the authors of this paper believe deeply in the value of research, and the many benefits it has brought to society. Our concern is that, unless that value is measured and delineated, in a manner that is clear and demonstrable to macro-scale decision makers, research will eventually be treated as just a dispensable social program, subject to the whims of budgetary fluctuations. As any research manager knows, it takes years to develop a research team, but only days to destroy it. In times of rapid economic expansion the scientific community flourishes, as happened in the USA in the decades following World War II (WWII), when the USA ascended to the pinnacle of world science. Europe has also recovered remarkably, especially as the EU has grown. However, given the economic stagnation that now seems to be occurring, and the limited resources available for public use, the competition for funds will intensify, both within science, and between science and other public activities. It is our contention that research policy makers have been far too preoccupied with micro- and meso-scale analysis of research activities and far too neglectful of demonstrating to society the benefits of maintaining a high quality, productive, and relevant scientific community. In numerous reports, the OECD (e.g. 2000) has emphasized that scientific advances and technological change are important drivers of economic performance, and the importance of an active role of governments in boosting research. There is much bibliography highlighting the benefits of publicly funded basic research; an excellent review by Salter and Martin (2001) discusses this matter. However, there are fewer studies addressing the question of whether all research, highly or never cited, in hot or quiescent areas, is of equal value for society; in the absence of clear answers to these questions, the decisions of policy makers can be wrong. In a brilliant seminal paper in Minerva in 1962, Alvin M. Weinberg, then Director of the Oak Ridge National Laboratory, discussed the need for, and criteria for, scientific choice. Regarding the latter Weinberg (1962; reprinted 2000: 253) states that As science grows, its demands on our society’s resources grow. It seems inevitable that science’s demands will eventually be limited by what society can allocate to it. We shall then have to make choices. These choices are of two kinds. 
We shall have to choose among different, often incommensurable, fields of science—between, for example, high-energy physics and oceanography or between molecular biology and science of metals. We shall also have to choose among the different institutions that receive support for science from the government—among universities, governmental laboratories and industry. The first choice I call scientific choice; the second, institutional choice.

A few years later the US government began its quantitative study of macro-scale scientific performance. The Science and Engineering series of biennial reports started with Science Indicators 1972, the goal of which was to develop ‘a set of indices which will reveal the strengths and weaknesses of U.S. science and technology, in terms of the capacity and performance of the enterprise in contributing to national objectives’ (National Science Board 1972: 1). Note the inclusion of ‘contributing to national objectives’: from the beginnings of large-scale support of science in the USA there was concern for the relevance of the outcome to the country’s well-being. Subsequent studies in the USA have contributed importantly to this goal, and have had significant impact in providing simple, understandable measures that allow proponents to defend the public support of science in the USA. One of the most important of these was Edwin Mansfield’s paper ‘Academic research and industrial innovation’, which found that ‘A very tentative estimate of the social rate of return from academic research during 1975–78 is 28 percent’ (Mansfield 1991: 11). Another US study providing evidence for the support of public science is Narin et al. (1997: 317), which found that ‘seventy-three percent of the papers cited by U.S. industry patents are public science, authored at academic, governmental, and other public institutions.’ That study further analyzed the cited public science, concluding that the papers cited in patents are (i) highly cited by other papers, (ii) published in journals that are clearly prestigious and influential, (iii) authored at prestigious universities and laboratories, and (iv) supported by prestigious US governmental and other research support agencies. These findings lead to the conclusion that not all papers are equally useful for the economy of a country. The importance of research is not in question, but clearly not all published papers are similarly relevant to society. Underlying this paper is our concern that the European research evaluation process places far too much emphasis on counting papers, and far too little on counting important discoveries, discoveries that are far more likely to be represented in the highly cited research tail than in the vast bulk of supporting, but less cited, papers.

3. Research paradoxes

Wrong assessments of scientific research give rise to false research paradoxes. Research paradoxes arise when the widely assumed strong link between scientific research and innovation and technology development is not observed. Although such paradoxes could exist, demonstrating their actual existence depends on the correct assessment of both scientific and technological performance. The first mention of a research paradox in the EU is in The European Report on Science and Technology Indicators 1994 (European Commission 1994). This report states (p. 17): ‘Thus, we observe the emergence of a “research paradox”: for a country to have a relatively high R&D intensity in a sector appears not necessarily to be a good indicator of successful industrial performance.’
One year later the Green Paper on Innovation (European Commission 1995) defined the ‘European Paradox’ (p. 5), contrasting research, which is considered excellent, with innovation, which is considered insufficient: ‘Compared with the scientific performance of its principal competitors, that of the EU is excellent’ and ‘One of Europe’s major weaknesses lies in its inferiority in terms of transforming the results of technological research and skills into innovations and competitive advantages.’ Although this document does not explain how research excellence can be measured, it defines ‘scientific performance’ in terms of ‘number of publications per million ecus’ (p. 6). Since then, almost every EU document on science and technology refers to the ‘European Paradox’ and to the scientific excellence of EU research. For example, Le Deuxième Rapport Européen sur les Indicateurs Scientifiques et Technologiques (European Commission 1997) states that ‘some evidence supports the existence of a European paradox, whereby the EU’s excellence in scientific research does not translate into technological and commercial results’ (translated by the authors); the Third European Report on Science & Technology Indicators 2003 (European Commission 2003) states that ‘several authors have been emphasising the strength of Europe’s educational and science base on the one hand, but particularly its inability, on the other hand, to convert this advantage into strong technological and economic performance.’ In the latest EU Framework Programme for Research and Innovation, Horizon 2020, important documents, negotiated for years and finally passed by the European Parliament, repeat the same concepts. For example: ‘A genuine change in our innovation systems and paradigms is therefore necessary. Still too often, excellence in higher education, research and innovation, while clearly existing across the EU, remains fragmented’ and ‘It will address the European paradox, since it will capitalize EU’s strong research base and find new innovative approaches to ensure a more competitive, sustainable and resource-efficient manufacturing sector.’ (European Commission 2011: 6, 30). The inconsistencies of the European paradox have been clearly established (Dosi et al. 2006); Albarrán et al. (2010) provide a thorough review of the issue and describe how the European Commission is more interested in publication shares than in citation impact indicators. Our study further supports the view that the EU’s presumption of research excellence originates from a misleading research assessment, and that current EU research policy may lead the EU to ever decreasing levels of competitiveness in a knowledge-based economy. A fairly recent EC report, ‘Europe’s twin deficits: Excellence and innovation in new sectors’ (Sachwald 2015), strongly supports our thesis of European weakness in high impact research, but does not yet seem to have significantly influenced EU policies.

4. Key previous findings

In this and the following sections, we will discuss various measures of US and EU performance, typically at the top of the citation distributions. Some of these measures are based on fractional counts, where national authorship is apportioned over the various corporate addresses attached to the paper.
Some are based on whole counts, where national authorship is credited to each author from an individual country. Some are based on various categories of exclusivity, for example papers with only US or only EU authors, or papers jointly authored by US and EU authors as well as authors from other countries, etc. A most important point is that the findings are almost independent of the various ways in which authorship is apportioned—the EU/US success ratio at the upper end of the citation distribution is often down in the range of 0.5 or less, independent of the details of how national authorship is apportioned. That is, the USA is twice as effective as the EU in the production of high impact research. We start with the data in the US Science & Engineering Indicators (S&EI; National Science Board 2014), which show that EU performance, compared with the USA, decreases monotonically as we ascend the citation scale, and that the EU/US ratio in the top 1% is roughly 0.5—that is, the USA has twice as many very highly cited papers per paper published. Specifically, Fig. 1A and B are based on data in appendix table 5–58 of S&EI 2014. The author address counts are fractional, and the citations are from a three-year publication window with a two-year lag—that is, for example, references made in the 2002 Science Citation Index tapes to papers published in 1998–2000.

Figure 1. EU/USA ratio of the proportion of papers in all research fields distributed by citation percentiles. (A) 2002, 1.8 million papers. (B) 2012, 2.4 million papers. Source: National Science Board (2014).

In the 2002 data, 57 per cent of the top centile papers are from the USA, whereas 30.8 per cent of all the papers are from the USA, giving the USA a top centile performance ratio of 1.85. Similarly, 28.2 per cent of the top centile papers are from the EU, while 35.6 per cent of all papers are from the EU, giving the EU a top centile performance ratio of 0.79. The EU/US percentile performance ratio shown in Fig. 1A is then 0.79/1.85, or 0.43. Note that this EU/US performance ratio rises slightly over the next decade, to 0.94/1.74, or 0.54, for 2012, but is still only half that of the USA. The weak EU/US performance ratio at the top 1% level shown in Fig. 1 for all fields, together with the modest improvement over the decade from 2002 to 2012, is shown to hold for individual fields in Table 1. Note that in this S&EI report, the fields are defined in terms of journal sets created specifically for the SI 1972 report (National Science Board 1972), and expanded over the years (Science Indicators was renamed Science & Engineering Indicators in 1984) as the underlying WOS journal coverage has expanded.

Table 1. Top 1% EU/US percentile performance ratios by field

Field                  Citing year 2002   Citing year 2012
Physics                0.50               0.55
Chemistry              0.37               0.46
Medical Sciences       0.48               0.60
Biological Sciences    0.40               0.47
Mathematics            0.42               0.45
All fields             0.43               0.54

Source: National Science Board (2014, appendix table 5–58).
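For readers who want the percentile performance ratio in compact form, it can be written as follows; the notation (with s denoting a region’s percentage share of papers) is ours, not that of the S&EI report:

```latex
\[
P_r \;=\; \frac{s_r^{\text{top 1\%}}}{s_r^{\text{all}}},
\qquad
R_{\text{EU/US}} \;=\; \frac{P_{\text{EU}}}{P_{\text{US}}}
\;=\; \frac{28.2/35.6}{57.0/30.8}
\;\approx\; \frac{0.79}{1.85}
\;\approx\; 0.43 .
\]
```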
Consistent with these data, other studies have compared the scientific production of the USA and the EU at different citation levels (reviewed by Albarrán et al. 2010), consistently finding US leadership at high-citation levels. Specifically, King (2004) compared the EU-15 and the USA in terms of total numbers of publications and citations, and of top 1% highly cited papers, in two periods, 1993–1997 and 1997–2001. The USA was ahead in all cases. Although the differences in total numbers of papers or citations were small, they were larger for citations than for papers, and in the upper 1% layer the US advantage was much greater: two-fold and 70 per cent higher for the first and second periods, respectively. Herranz and Ruiz-Castillo (2013) perform the comparison using a pair of high- and low-impact indicators and the mean citation rate in 219 scientific sub-fields, finding that although the EU has more publications than the USA in 113 out of 219 sub-fields, the USA is ahead of the EU in 189 sub-fields in terms of the high-impact indicator. Other papers also support our general thesis of strong US performance at the upper end of the citation distribution (Leydesdorff and Wagner 2009; Leydesdorff et al. 2014). In the 2014 paper they note that ‘the United States remains far more productive in the top 1% of all papers; China drops out of the competition for elite status; and the EU 28 increases its share amongst the top cited papers from 2000 to 2010’ (Leydesdorff et al. 2014: 606). These results were obtained using an integer counting method, which allocates a publication to a country whenever the country’s name is present in the publication’s address line, and, more importantly, a detailed normalization in terms of the 226 WOS categories of journals. We used a much less detailed, field-based normalization in the new data for this paper; this difference in normalization may account for the very strong top level performance we find for China in our data, particularly in Chemistry, in the elite, top 100–495 cited papers we analyzed. The reason is that if Chinese papers are relatively concentrated in hot areas of Chemistry, such as graphene and solar cells, which we do see in our data, while European chemical science is more broadly spread across older, more traditional areas of Chemistry that are not as highly cited, then the more detailed WOS journal disaggregation used by Leydesdorff et al. (2014) would tend to hide the strength of Chinese Chemistry in the hot areas that count today. The observation that China may be concentrating in hot areas while the EU is not is quite consistent with a particularly interesting analysis based on top-rated scientists and institutions, in which Bonaccorsi (2007) concludes that European science is severely under-represented in the upper tail of scientific quality, that European science is weaker in fields of rapid growth (hot areas), and that much of the weakness can be attributed to the difficulty of traditional European communities and institutions in adapting to change. The importance of appropriately emphasizing the hot areas of science was masterfully described by the Nobel laureate Peter Medawar in his book ‘Advice to a Young Scientist’: ‘any scientist of any age who wants to make important discoveries must study important problems’ (Medawar 1979: 13). An important study might examine whether European science is failing to emphasize the most important problems.
An especially relevant study, a precursor to this paper (Rodríguez-Navarro 2015), simultaneously examined the numbers of Nobel prize-winning discoveries and of very highly cited papers, reporting a remarkable similarity between the USA/European performance ratios in the attainment of these discoveries and in the extrapolated production of very highly cited papers. In both cases, the research success of the USA is two to three times that of Europe (Table 2). The consistent association of the two performance ratios in the three fields of study (chemistry, physics, and medicine or physiology) rules out a chance association and strongly supports the notion of a two- or three-to-one superiority of US over European science. This association of Nobel awards and very high citations is not at all surprising, because it was demonstrated by Eugene Garfield a few years after the Science Citation Index was launched in 1964 (e.g. Garfield 1970, 1986).

Table 2. US/EU research performance ratio based on the frequencies of papers at the citation range that in the USA corresponds to the frequency of Nobel prizes

Year   Chemistry   Physics   Biochemistry & Molecular Biology
1982   1.48        2.41      2.41
1983   1.83                  2.98
1984               3.16
1985   1.86                  2.64
1986               2.48
1987   2.68        5.01      5.20
1988               3.68      4.03
1989   3.87
1990               3.57
1991   2.30                  2.01
1992               1.75
1993   2.23                  2.27
1994               4.30
1995   1.49                  3.16
1996               2.90
1997   1.00                  3.76
1998               2.85
1999   2.85                  2.31
2000               3.06
2001   3.76                  3.05
2002               3.50
2003   5.70                  2.85
2004   3.70        2.85
2005   3.22                  2.01
2006   3.41        2.30      2.92
2007   3.16        1.94      3.27
Mean   2.78        3.05      2.99
SD     1.18        0.87      0.85

Source: Rodríguez-Navarro (2015).

The frequency of discoveries that are awarded Nobel Prizes is very low, less than one per year in Europe, and their association with very highly cited papers occurs when the frequencies are similar. These low frequencies cannot be measured empirically and have to be estimated by extrapolation, based on logarithmic binning, which helps in studying highly skewed distributions (Fig. 2A), and on the mathematical characteristics of the high-citation tail of the citation distribution. Considering both the whole range (Fig. 2A) and the high-citation tail (Fig. 2B) of the citation distribution, the data reveal the same pattern described above for the S&EI data (National Science Board 2014): EU dominance among low-cited but not highly cited papers (Fig. 2A and B), and US dominance in the highly cited tail (Fig. 2B). To study the mathematical characteristics of the high-citation tail, the logarithms of paper frequencies can be plotted versus the logarithmic citation bins or versus the logarithms of citation percentiles (Fig. 2C and D).
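To make the binning procedure concrete, here is a minimal sketch of logarithmic binning of a citation distribution in Python. The function name, the bin base, and the synthetic Pareto sample are our illustrative choices; this is not the exact extrapolation procedure of Rodríguez-Navarro (2015).

```python
import numpy as np

def log_bin_frequencies(citations, base=2):
    """Bin citation counts into logarithmic bins [1, 2), [2, 4), [4, 8), ...
    Returns bin lower edges and frequencies divided by bin width, which
    makes the heavy upper tail visible on a double logarithmic plot."""
    citations = np.asarray(citations, dtype=float)
    citations = citations[citations >= 1]              # log bins start at 1
    max_exp = max(1, int(np.ceil(np.log(citations.max()) / np.log(base))))
    edges = float(base) ** np.arange(max_exp + 1)      # 1, 2, 4, 8, ...
    counts, _ = np.histogram(citations, bins=edges)
    widths = np.diff(edges)
    return edges[:-1], counts / widths                 # density per citation

# Illustration with a synthetic, highly skewed (Pareto-like) sample.
rng = np.random.default_rng(0)
sample = rng.pareto(2.0, size=10_000) * 5 + 1
x, density = log_bin_frequencies(sample)

# The slope of log(density) versus log(bin edge) characterizes the tail.
mask = density > 0
slope = np.polyfit(np.log(x[mask]), np.log(density[mask]), 1)[0]
print(f"approximate tail exponent: {slope:.2f}")
```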
Both of these plotting strategies (logarithmic binning and percentile grouping) lead to the same conclusion: the further up the citation distribution, the stronger the US advantage over the EU.

Figure 2. Frequency distribution of EU and US papers in chemistry published in 2003. (A) Frequency plot using logarithmic binning. (B) Amplification of the upper tail in A. (C and D) Double logarithmic plots of the upper tail in A using either logarithmic binning or percentile grouping. Source: Rodríguez-Navarro (2015).

5. New data—general comments and methods

In this study, we generalize the association between the numbers of Nobel Prize achievements and of highly cited papers, and suggest that highly cited science reflects the research that produces revolutionary science in the sense proposed by Thomas Kuhn (see e.g. Martin and Irvine 1983; Martin 1996; Rodríguez-Navarro 2011a, 2012). Therefore, to further study US/EU research performance, we looked at the papers near the top of the citation distribution in terms of three regions: the European research area (EU), the USA, and other, either individually or using a fractional counting method. Individual papers were classified into mutually exclusive sets: papers with US authors only, papers with EU authors only, papers with both US and EU authors but no others, and papers with other authors only. These identifications were performed in five citation categories: top 100, 200, 300, 400, and 495. This method is simple and allows rapid analysis of many years in several fields. Although it does not take into account that the high-citation tails of the citation distributions of different countries and research areas might diverge (Fig. 2C and D; Rodríguez-Navarro 2015), it is sufficiently reliable for the USA–EU comparison, which is the main purpose of this study. Papers were identified as described previously (Rodríguez-Navarro 2015), except for the field of Clinical Medicine; in this case, the Research Area option (SU=) was constructed to include all Research Areas with clinical activity. To identify papers on hot topics, we used the Topic (TS = graphene) and Publication (PO = nature nanotechnology OR nano letters OR nanoscale OR nano energy) options. In each case, the top 500 most cited papers were downloaded as text files, including the institutional addresses, which were used to assign the three categories mentioned above. In almost every data set there were a few papers for which no institutional addresses were given, and occasionally not even individual authors. In the end, a standard set of the 495 top cited papers for which country identification could be found was used in each data set, instead of the full set of 500 downloaded papers.
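To make this address-based bookkeeping concrete, the sketch below implements the two counting schemes just described, assuming each downloaded record has already been reduced to a list of author-address country codes. The function names, the abbreviated country sets, and the residual ‘mixed with other’ category (for papers combining US or EU addresses with third countries, which the four sets above do not cover explicitly) are our illustrative assumptions.

```python
from collections import Counter

# Abbreviated country sets for illustration (the study used all EU members).
EU = {"DE", "FR", "GB", "IT", "ES", "NL", "SE", "PL", "BE", "AT", "DK", "FI"}
US = {"US"}

def exclusive_category(countries):
    """Assign a paper to one mutually exclusive address category."""
    c = set(countries)
    if c <= US:
        return "US only"
    if c <= EU:
        return "EU only"
    if c <= US | EU:
        return "US-EU only"          # both regions, nobody else
    if c.isdisjoint(US | EU):
        return "other only"
    return "mixed with other"        # residual bucket, our addition

def fractional_counts(countries):
    """Apportion one paper over regions by equal address shares."""
    n = len(countries)
    shares = Counter()
    for country in countries:
        region = "US" if country in US else "EU" if country in EU else "other"
        shares[region] += 1 / n
    return shares

# Example: a paper with two US addresses, one German, and one Chinese.
addresses = ["US", "US", "DE", "CN"]
print(exclusive_category(addresses))   # -> mixed with other
print(fractional_counts(addresses))    # -> US 0.5, EU 0.25, other 0.25
```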
When using a fractional counting method, in which each paper was fractionally assigned to the regions corresponding to all of its author addresses, the differences from top 100 to top 495 were not large, but there did seem to be a small but rather consistent increase in the USA/EU ratio: that is, even within the very top 500 cited papers, the more selective the analysis, the greater the performance gap between the USA and the EU, consistent with the observations from the much larger data sets discussed earlier.

6. The persistence of the USA/EU research performance ratio and the proliferation of authors and institutions

In the way described above, we investigated four fields of research (chemistry, physics, biochemistry & molecular biology, and clinical medicine) every three years from 1990 to 2011. Figure 3, for chemistry, is an example of the data that we obtained. The USA/EU paper ratios are substantially above 2 at the beginning (1990) and end (2011) of the time period. Although the ratio fluctuates somewhat in the intermediate years, it never approaches 1, counted either fractionally (Fig. 3B) or exclusively (Fig. 3A). Since there are more EU than USA chemistry papers (Fig. 2), the US superiority at the top of the citation distribution is even more impressive.

Figure 3. Annual count of the number of papers in the top-100 most cited papers in chemistry by address category. (A) Integer counting. (B) Fractional counting.

Another extremely interesting finding is the rapid rise of publications by the other countries in the top 100 (and top 495) chemistry papers. This is almost entirely due to the rapid ascent of Chinese papers to the top of the chemistry citation distribution. In fact, the most highly cited paper in the 2011 set, ‘Simultaneous enhancement of open-circuit voltage, short-circuit current density, and fill factor in polymer solar cells’, is authored by scientists at four different Chinese institutions (He et al. 2011). Furthermore, 5 of the 20 top cited papers in 2011 are authored entirely by Chinese scientists, generally in the areas of solar cells and graphene. Table 3 summarizes the USA/EU paper ratio for the top 100 and top 495 papers, showing the ratios between the numbers of US-only and EU-only papers, and between the fractional counts of US and EU author addresses. Aside from annual variability and field peculiarities (to be discussed elsewhere), the results show a clear persistence of the propensity of the US research system to produce a larger number of highly cited papers than the EU system.
Table 3. US/EU research performance ratios based on the annual number of papers in the top-100 and top-495 most cited papers, for different fields and using two counting methods

Field and number of top-cited papers          US only/EU only    US/EU fractional counting
                                              1990      2011     1990      2011
Biochemistry & Molecular Biology—top-100      2.36      1.61     2.86      1.76
Biochemistry & Molecular Biology—top-495      2.03      1.85     2.81      1.72
Chemistry—top-100                             2.78      2.23     2.48      2.80
Chemistry—top-495                             1.79      1.43     1.73      1.66
Clinical Medicine—top-100                     3.16      1.80     5.00      1.40
Clinical Medicine—top-495                     2.22      2.17     2.64      1.16
Physics—top-100                               2.70      2.00     1.76      0.82
Physics—top-495                               2.42      1.32     1.61      0.77

The only cases in which US and EU research performance are converging are Clinical Medicine and Physics, and then only because these fields have seen a rapid increase in collaborative research (Fig. 4); these multi-region papers appear to be the ones lifting EU performance. In 14 of the 16 cases, the US performance is stronger in the top 100 papers than in the top 495—showing that even within this very select set of highly cited papers, the more highly cited the papers, the stronger the US position relative to Europe, the only exceptions being the year 2011 in the two biologically related fields.

Figure 4. Annual count of the number of papers and addresses in the top-495 most cited papers in clinical medicine. (A) Mutually exclusive sets. (B) Number of addresses per paper.

A second general observation is that, while EU performance relative to the USA improved markedly from 1990 to 2011 in nearly every one of the 16 cases, it is only in the fractional counts in Physics in 2011 that EU performance appears to be stronger than that of the USA. In this field there is an enormous amount of collaborative research, and it is quite likely that the strong European performance here results from collaborative world-class research done by Europeans with their colleagues from the USA and other countries, and not from European science on its own. Previous studies have noted that over the last decades there has been a rapid increase in the number of papers published, and in the number of authors per paper. This increase has been attributed by some to a misguided science evaluation process, one which encourages scientists to put their names on many papers, and thus enhance their apparent productivity. A particularly clear contribution of the evaluation process to this effect was shown by Linda Butler (2003).
She measured a notable increase in the number of papers from Australian scientists immediately following the inclusion of the number of published papers as part of the evaluation criteria for future grants. Olle Persson and colleagues have addressed some of the complexities and unknown effects that this inflationary bibliometrics adds to the evaluative process (Persson et al. 2004; Persson 2010). This rapid increase in institutional and individual authorship over the last two decades appears to be clearly present in our sets of very highly cited papers, and EU and US scientists seem to be equally guilty.

7. Hot areas

A full bibliometric study of hot areas is beyond the scope of the present study, but a basic analysis of this issue illustrates the weak response of EU research to a scientific challenge. For this purpose, we selected graphene, because EU research played a central role in the outstanding breakthrough in graphene research that took place in 2004–2005. This EU leadership is evidenced by the Nobel Prize in Physics 2010, which was awarded to two researchers at the University of Manchester. This breakthrough triggered an explosion of graphene research; the number of papers has been growing exponentially since 2008. However, despite its leadership in 2005, the EU responded slowly and its research on graphene lagged behind that of other countries, especially China. This EU lag occurs in terms of both total (Fig. 5) and highly cited (Table 4) papers. The country shares of highly cited papers on graphene in 2012 and 2013 show the repeatedly described EU lag relative to the USA. More importantly, in addition to the leadership of China, the country shares reveal that the most scientifically advanced countries in Europe, such as Germany and the UK, lag behind South Korea and Singapore.

Table 4. Fractional counts in the 100 most cited papers in 2012 and 2013 on graphene and in journals of nanotechnology, by the geographical origin of authors (a)

               2012                         2013
Country        Graphene   Nanojournals (b)  Graphene   Nanojournals (b)
EU             16.62      13.50             10.70      15.26
USA            29.84      45.09             42.81      48.60
China          28.62      21.17             22.94      18.69
Singapore      7.69       3.68              4.65       4.36
South Korea    7.38       7.67              7.91       5.92
Germany        5.23       1.84              2.14       5.30
Spain          4.00       1.84              0.92       1.56
Japan          3.69       3.07              3.98       0.93
UK             2.77       2.46              4.59       2.18
Taiwan         2.77       2.15              2.50       1.56

(a) Only the most productive countries.
(b) Nanotechnology journals: Nature Nanotechnology, ACS Nano, Nano Letters, Nanoscale, and Nano Energy.

Figure 5. Annual count of the number of papers on graphene: world totals and domestic papers from China and the EU (no external addresses: China only and EU only).

To rule out the possibility that a search restricted to a single topic, graphene, might give a misleading picture for unexpected reasons, we made another search based on nanotechnology journals, which broadens the range of hot topics covered by the search.
The results of this new search, although not identical to the previous one, were very similar (Table 4, which summarizes the fractional counts in the 100 most cited papers). Counts using the 200 or 500 most cited papers gave similar results, leading to the same conclusions about the lag of EU research on graphene and other hot areas.

8. Discussion and conclusions

In the previous sections we have shown that many different analyses all lead to the same conclusions: that, at the upper end of the research continuum, where important discoveries are made, US research performance is usually between 1.5 and 3 times stronger than that of Europe, depending on the research area and the method of calculation. These observations, and others published previously, rule out the existence of a European paradox (e.g. Dosi et al. 2006; Herranz and Ruiz-Castillo 2013). Our conclusions are based on paper counts in the high-citation tail of the citation distribution, while the only data supporting the European and American (Shelton 2008) paradoxes are total paper counts. Our approach does not imply that papers with citations below our counting thresholds (at least 99 per cent of all papers) are unimportant. In the experimental sciences, we do not believe that low-cited papers are dispensable for the progress of science (Cole and Cole 1972; Bornmann et al. 2010). In fact, in Kuhn’s (1962) terms, revolutionary science could not exist without normal science; to follow the football example given above, there is no football match consisting only of shots on goal. The sentence ‘nurturing revolutionary science requires its embedding into a sufficiently large volume of related normal science output, facilitating the validation of the revolutionary work’ (Andras 2011: 103) is an apt description of the issue. As previously noted for patents (Narin et al. 2004), we do not mean that every important paper is highly cited, or that every highly cited paper is important. The approach of counting highly cited papers is based on the idea that ‘a very large amount of ultimate impact on society may be centered around a few relatively rare, extremely high impact discoveries, papers, invention, etc.’ (Narin and Hamilton 1996). The number of these infrequent discoveries correlates with the total number of papers in a few specific cases, but not in most (Rodríguez-Navarro 2011b, 2015). In contrast, the correlation between the number of Nobel-level discoveries and the extent and characteristics of the high-citation tail has been established (Rodríguez-Navarro 2011a, 2015). Overall, the use of highly cited papers for research assessment is growing (e.g. Tijssen et al. 2002; King 2004; Allik 2013; Ponomarev et al. 2014); the introduction of the PP (top 1%) indicator in the CWTS Leiden Ranking (Waltman et al. 2012; http://www.leidenranking.com/) is further support for the use of highly cited papers in research evaluation. The necessary existence of normal science as a basis of scientific progress raises an interesting question about how much normal science is necessary for that progress. The repeated finding of US leadership over the EU in highly cited but not in low-cited papers clearly indicates that, in terms of research efficiency, the EU research system has an important problem. This problem can be expressed numerically as the ratio between the number of papers and the number of discoveries awarded a Nobel Prize (Rodríguez-Navarro 2011b).
Below we discuss some causes of the low performance of the EU research system, but we can advance here that, although publish-or-perish pressure is an obvious cause (Butler 2003; Rodríguez-Navarro 2009), the overall causes that explain the low production of revolutionary science are complex (Andras 2011). Expressing our findings in terms of population, we can conclude that the number of important discoveries per inhabitant is at least three times higher in the USA than in the EU, and might be up to 4.5 times higher if we consider very highly cited papers (Table 2). These are striking figures whose causes and consequences deserve analysis. The fundamental question is why US performance is so superior to that of Europe at a time when the European economy is as large as that of the USA. Several causes could explain the much higher US research performance described above, but a lower level of research activity can be excluded because, as we have repeatedly shown in this study, the US superiority appears only in highly cited papers, which are a very small proportion of the total. Similarly, a different intrinsic capacity of researchers on the two sides of the Atlantic can also be ruled out, because the populations on both sides are culturally very similar and because many successful US researchers moved there from Europe (Stephan and Levin 2001). The most obvious cause of the low ratio between highly and minimally influential papers in the EU is EU research policy. Research policy makers have to deal with different choices (Weinberg 1962) and, to be successful, they have to make the correct decisions. But correct decisions cannot be made without a solid analysis of the actual situation, and such an analysis is lacking in the EU. Thus, EU research policy is based on the supposition of a European paradox and on wrong conclusions about the excellence of EU research. This erroneous notion of excellent research and weak innovation is widespread among research policy makers and appears in important documents of the European Commission, as we have described. Ultimately, research funding is improperly allocated. Further evidence that a misguided research policy causes the EU’s low production of highly cited papers comes from comparing the EU with China and South Korea in graphene research. In 2005, China and South Korea published 14 and 5 papers on graphene, respectively, and none was highly cited. In 2012–2013, the EU leadership of 2005 had disappeared (Fig. 5); China was ahead of the EU, and South Korea and Singapore were ahead of both Germany and the UK (Table 4). Although of lesser importance for our conclusions, it is worth mentioning that the rapid increase in citations to Chinese research publications may be accelerated by ‘clubbing’ self-citations (Tang et al. 2015). The nonexistence of the European paradox suggests that the EU manufacturing sector might be more competitive than commonly thought, but that, in contrast with the US sector, it lacks a sufficient base of scientific discoveries on which to develop its innovative capacity. It must be emphasized once more that the links between academic research and innovation are strong in the USA (Narin et al. 1997). These links might not exist in other countries, but deficiencies in the manufacturing sector cannot be blamed if academic research does not produce discoveries.
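The step from the roughly two-fold US advantage in absolute counts to the per-inhabitant figures above can be reconstructed as follows; the round population figures (about 316 million for the USA and 507 million for the EU-28, circa 2013) are our assumptions for illustration, not values given in the text:

```latex
\[
\frac{d_{\text{US}}/p_{\text{US}}}{d_{\text{EU}}/p_{\text{EU}}}
\;=\; R \cdot \frac{p_{\text{EU}}}{p_{\text{US}}}
\;\approx\; 2 \times \frac{507}{316} \;\approx\; 3.2,
\qquad
2.8 \times \frac{507}{316} \;\approx\; 4.5,
\]
% where R is the US/EU ratio of highly cited papers (about 2 from the
% top-centile data, and about 2.8 from the Table 2 means), d the number
% of important discoveries, and p the population.
```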
Other studies have demonstrated that, in economic terms, the social rate of return from academic research is very high (Mansfield 1991), but this conclusion obviously cannot apply to every type of research at every rate of success. The literature on the returns of public research is extensive and generally agrees on the difficulty of properly measuring these returns (e.g. Salter and Martin 2001). However, much less has been done on how to measure the product of research. Although it seems clear that the product of research that influences the economy is the discovery, most studies measure research using either R&D investment or the number of published papers, neither of which can be taken as a proxy for discoveries. If we assume that the number of important discoveries per inhabitant is four times higher in the USA than in the EU, it seems obvious that the return of public research for the US and EU societies will be very different, but we lack reliable data. Salter and Martin (2001: 529) state: Currently, we do not have the robust and reliable methodological tools needed to state with any certainty what the benefits of additional public support for science might be, other than suggesting that some support is necessary to ensure that there is a ‘critical mass’ of research activities. The literature available has shown that there are considerable differences across areas of research and across countries and that additional research is needed to better define and understand these differences. The complexity of this issue increases when the study is extended to the peripheral regions of the EU, which have a weaker research tradition (e.g. Bilbao-Osorio and Rodríguez-Pose 2004; Rodríguez-Pose and Crescenzi 2008). In these cases the social returns of research might be low, and it is conceivable that some EU economists doubt the economic importance of that research. This would explain why the cuts of about 50 per cent to academic research funding in Spain over the last four years, funding that had always been very low, have not been questioned by the European Commission. Low confidence in the economic importance of research leads to lower funding and less political attention; this in turn leads to fewer discoveries, and again to more funding cuts. In addition to these fundamental problems there are other, apparently less fundamental, factors that might be highly important. We will now speculate about two of these, in the hope that further research will seriously investigate the situation. In many ways, the US research support system is quite decentralized, competitive, and freewheeling. A scientist with a fundamental biochemical research proposal can take that proposal to the NIH, to the NSF, to the Department of Defense, or to many different private support sources, which increases the probability of funding. This multifaceted, competitive support system is especially important for young scientists, who are not constrained in the USA by a monolithic institutional framework. A study of the ages at which European versus US scientists attain independence in research support, and of the diversity of their support over their research careers, would certainly provide insight into this question. We agree with Andras (2011: 104): ‘Research funding agencies usually prefer low-risk normal science research. The larger the volume of a mature science, the stronger is the preference of funding agencies to fund normal science research.
One possibility to increase willingness to support revolutionary science is to have more competition between funding agencies to fund high-impact research.’ We also note that the general field of scientometrics is currently rather quiescent in the USA compared with Europe. Perhaps the relatively laissez-faire system of research support in the USA, functioning without any centralized responsibility, reduces the need for system-wide analysis. Is this a net benefit, or a net loss, to the productivity of the USA? One of the objectives of this paper is to encourage our colleagues to begin to seriously address some of these fundamental European policy issues.

References

Albarrán P., et al. (2010) ‘A Comparison of the Scientific Performance of the U.S. and the European Union at the Turn of the 21st Century’, Scientometrics, 85: 329–44.
Allik J. (2013) ‘Factors Affecting Bibliometric Indicators of Scientific Quality’, Trames, 17: 199–214.
Andras P. (2011) ‘Research: Metrics, Quality, and Management Implications’, Research Evaluation, 20: 90–106.
Bilbao-Osorio B., Rodríguez-Pose A. (2004) ‘From R&D to Innovation and Economic Growth in the EU’, Growth and Change, 35: 434–55.
Bonaccorsi A. (2007) ‘Explaining Poor Performance of European Science: Institutions Versus Policies’, Science and Public Policy, 34: 303–16.
Bornmann L., de Moya Anegón F., Leydesdorff L. (2010) ‘Do Scientific Advancements Lean on the Shoulders of Giants? A Bibliometric Investigation of the Ortega Hypothesis’, PLoS One, 5/10: e13327.
Butler L. (2003) ‘Explaining Australia’s Increased Share of ISI Publications—the Effects of a Funding Formula Based on Publication Counts’, Research Policy, 32: 143–55.
Cole J. R., Cole S. (1972) ‘The Ortega Hypothesis—Citation Analysis Suggests that only a Few Scientists Contribute to Scientific Progress’, Science, 178: 368–75.
Dosi G., Llerena P., Labini M. S. (2006) ‘The Relationships between Science, Technologies and their Industrial Exploitation: An Illustration through the Myths and Realities of the So-called “European Paradox”’, Research Policy, 35: 1450–64.
European Commission. (1994) The European Report on Science and Technology Indicators 1994. Executive Summary. Directorate-General XII. Luxembourg: Office for Official Publications of the European Communities. ISBN 92-826-9010-5.
European Commission. (1995) Green Paper on Innovation. <http://europa.eu/documents/comm/green_papers/pdf/com95_688_en.pdf> accessed 15 Dec 2015.
European Commission. (1997) Le Deuxième Rapport Européen sur les Indicateurs Scientifiques et Technologiques 1997. Direction Générale XII. B-1049 Bruxelles. ISBN 92-828-0272-8.
European Commission. (2003) Third European Report on Science & Technology Indicators. Directorate General for Research Information and Communication Unit. Luxembourg: Office for Official Publications of the European Communities. ISBN 92-894-1795-1.
European Commission. (2011) Proposal for a Decision of the European Parliament and of the Council on the Strategic Innovation Agenda of the European Institute of Innovation and Technology (EIT): the Contribution of the EIT to a More Innovative Europe.
COM(2011) 822 final; 2011/0387 (COD). <http://www.europarl.europa.eu/document/activities/cont/201203/20120316ATT41114/20120316ATT41114EN.pdf> accessed 26 Oct 2015.
Garfield E. (1970) ‘Citation Index for Studying Science’, Nature, 227: 669–71.
Garfield E. (1986) ‘Do Nobel Prize Winners Write Citation Classics?’, Current Contents, 23: 3–8. <http://garfield.library.upenn.edu/essays/v9p182y1986.pdf> accessed 14 Oct 2015.
He Z., et al. (2011) ‘Simultaneous Enhancement of Open-Circuit Voltage, Short-Circuit Current Density, and Fill Factor in Polymer Solar Cells’, Advanced Materials, 23: 4636–43.
Herranz N., Ruiz-Castillo J. (2013) ‘The End of the European Paradox’, Scientometrics, 95: 453–64.
King D. A. (2004) ‘The Scientific Impact of Nations: What Different Countries Get for Their Research Spending’, Nature, 430: 311–6.
Kosmulski M. (2013) ‘Family-Tree of Bibliometric Indices’, Journal of Informetrics, 7: 313–7.
Kuhn T. S. (1962) The Structure of Scientific Revolutions. Chicago: University of Chicago Press.
Leydesdorff L., Wagner C. S. (2009) ‘Is the United States Losing Ground in Science? A Global Perspective on the World Science System’, Scientometrics, 78: 23–36.
Leydesdorff L., Wagner C. S., Bornmann L. (2014) ‘The European Union, China, and the United States in the Top-1% and Top-10% Layers of Most-Frequently Cited Publications: Competition and Collaborations’, Journal of Informetrics, 8: 606–17.
Mansfield E. (1991) ‘Academic Research and Industrial Innovation’, Research Policy, 20: 1–12.
Martin B. R. (1996) ‘The Use of Multiple Indicators in the Assessment of Basic Research’, Scientometrics, 36: 343–62.
Martin B. R., Irvine J. (1983) ‘Assessing Basic Research: Some Partial Indicators of Scientific Progress in Radio Astronomy’, Research Policy, 12: 61–90.
Medawar P. B. (1979) Advice to a Young Scientist. Alfred P. Sloan Foundation. ISBN 0-465-00092-4.
Narin F., Hamilton K. S. (1996) ‘Bibliometric Performance Measures’, Scientometrics, 36: 293–310.
Narin F., Hamilton K. S., Olivastro D. (1997) ‘The Increasing Linkage between U.S. Technology and Public Science’, Research Policy, 26: 317–30.
Narin F., Breitzman A., Thomas P. (2004) ‘Using Patent Citation Indicators to Manage a Stock Portfolio’, in Moed H. F., et al. (eds) Handbook of Quantitative Science and Technology Research, pp. 553–68. The Netherlands: Kluwer Academic Publishers.
National Science Board. (1972) Science Indicators 1972. Washington, DC: NSF. <http://files.eric.ed.gov/fulltext/ED084150.pdf> accessed 14 Oct 2015.
National Science Board. (2014) Science & Engineering Indicators 2014. <http://www.nsf.gov/statistics/seind14/content/etc/nsb1401.pdf>, <http://www.nsf.gov/statistics/seind14/content/chapter-5/at05-60.pdf> accessed 15 Oct 2015.
OECD. (2000) Policy Brief: Science, Technology and Innovation in the New Economy. <http://www.oecd.org/science/sci-tech/1918259.pdf> accessed 14 Oct 2015.
Persson O. (2010) ‘Are Highly Cited Papers More International?’, Scientometrics, 83: 397–401.
Persson O., Glänzel W., Danell R. (2004) ‘Inflationary Bibliometric Values: The Role of Scientific Collaboration and the Need for Relative Indicators in Evaluative Studies’, Scientometrics, 60: 421–32.
Ponomarev I. V., et al. (2014) ‘Predicting Highly Cited Papers: A Method for Early Detection of Candidate Breakthroughs’, Technological Forecasting & Social Change, 81: 49–55.
Rodríguez-Navarro A. (2009) ‘Sound Research, Unimportant Discoveries: Research, Universities, and Formal Evaluation of Research in Spain’, Journal of the American Society for Information Science and Technology, 60: 1845–58.
Rodríguez-Navarro A. (2011a) ‘A Simple Index for the High-Citation Tail of Citation Distribution to Quantify Research Performance in Countries and Institutions’, PLoS One, 6/5: e20510.
Rodríguez-Navarro A. (2011b) ‘Measuring Research Excellence: Number of Nobel Prize Achievements Versus Conventional Bibliometric Indicators’, Journal of Documentation, 67: 582–600.
Rodríguez-Navarro A. (2012) ‘Counting Highly Cited Papers for University Research Assessment: Conceptual and Technical Issues’, PLoS One, 7/10: e47210.
Rodríguez-Navarro A. (2015) ‘Research Assessment Based on Infrequent Achievements: A Comparison of the United States and Europe in Terms of Highly Cited Papers and Nobel Prizes’, Journal of the Association for Information Science and Technology, 67: 731–40.
Rodríguez-Pose A., Crescenzi R. (2008) ‘R&D, Spillovers, Innovation Systems and the Genesis of Regional Growth in Europe’, Regional Studies, 42/1: 51–67.
Sachwald F. (2015) ‘Europe’s Twin Deficits: Excellence and Innovation in New Sectors’. Policy paper by the Research, Innovation, and Science Policy Experts (RISE). European Commission, Directorate-General for Research and Innovation. EUR 27371 EN.
Salter A. J., Martin B. R. (2001) ‘The Economic Benefits of Publicly Funded Basic Research: A Critical Review’, Research Policy, 30: 509–32.
Shelton R. D. (2008) ‘Relations between National Research Investment and Publication Output: Application to an American Paradox’, Scientometrics, 74: 191–205.
Stephan P. E., Levin S. G. (2001) ‘Exceptional Contributions to US Science by Foreign-Born and Foreign-Educated’, Population Research and Policy Review, 20: 59–79.
Tang L., Shapira P., Youtie J. (2015) ‘Is There a Clubbing Effect Underlying Chinese Research Citation Increases?’, Journal of the Association for Information Science and Technology, 66: 1923–32.
Tijssen R. J. W., Visser M. S., van Leeuwen T. N. (2002) ‘Benchmarking International Scientific Excellence: Are Highly Cited Research Papers an Appropriate Frame of Reference?’, Scientometrics, 54: 381–97.
Waltman L., et al. (2012) ‘The Leiden Ranking 2011/2012: Data Collection, Indicators, and Interpretation’, Journal of the American Society for Information Science and Technology, 63: 2419–32.
Weinberg A. M. (1962) ‘Criteria for Scientific Choice’, Minerva, I/2: 158–71.
[Reprinted (2000) Minerva, 38: 253–69.]
