Abstract

Dominant research evaluation systems potentially lead to a homogenization of research. A focus on the total number of citations or on journal impact factors can motivate researchers in non-English contexts to publish in English only. Efforts to publish in languages other than English run the risk of becoming 'lost science'. The purpose of this article is to offer an indicator, the PLOTE-index, which measures the percentage of citations flowing from the non-English publications of a researcher or a group of researchers. Only if the spread of citations over language areas is measured does it become visible; only then can it be analyzed, debated, and evaluated. In a feasibility study, PLOTE scores are calculated for 40 professors in political science in Denmark. The relation between the PLOTE-index and the total number of citations is discussed. Variations across subfields are demonstrated and discussed, as is a decline in PLOTE scores over time. The potential use of the PLOTE-index in research policy, research evaluation, strategy-making, and recruitment is discussed, as well as future developments of the index.

'English prayers get answered first' (Trevor Noah)

1. Introduction

Citation analyses are increasingly used as indicators of performance in research evaluation systems and play a role in official policymaking and management (Geuna and Martin 2003). The increasing use of international models for bibliometric measures of citations as a component in research evaluation creates special problems for researchers working in contexts where English is not the preferred language of communication (Archambault et al. 2006; González-Alcaide, Valderrama-Zurián and Aleixandre-Benavent 2012; López-Navarro et al. 2015). At first sight, the sheer number of citations is language-neutral: citations in Danish, English, Spanish, or Finnish count the same.
However, English publications usually have wider circulation and larger potential audiences, which may in itself constitute an incentive to publish in English if the goal is to maximize the total number of ‘language-neutral’ citations. Furthermore, dominant databases such as Scopus and Web of Science (WoS) are biased in favor of journals in English (van Leeuwen et al. 2001; Archambault et al. 2006; Mongeon and Paul-Hus 2016). Impact in the English-speaking world is sometimes regarded as a proxy for impact in general, if not conflated with it, which may create a ‘straight-jacket’ for scholars otherwise working in LOTE (languages other than English; Salager-Meyer 2008). Some policymakers might wish to enhance particular kinds of research related to cooperation with national and local stakeholders (López-Navarro et al. 2015), perhaps building bridges that increase the use of knowledge in national decision-making, in organizational capacity-building, and social innovation (Chavarro, Tang and Ràfols 2017). Some may wish to defend and cultivate a national language and its use in all domains, including those of academia (López-Navarro et al. 2015). One national goal among others may be to make scientific results available to taxpayers who have helped finance public universities. Although these goals are not articulated as ‘anti-English’, their practical implications draw researchers in the direction of publishing in their preferred national language. Let us call the intuitively preferred first language ‘L1’, whereas the generic term for languages other than English is LOTE. If researchers sometimes have good reasons to publish in LOTE, but dominant metrics continue to count citations as if language did not matter (thereby privileging English), do we then quantify good practice in a way that does not respect all instances of good practice? Is there a ‘self-perpetuating cycle in which English becomes more important?’ (Tardy 2004: 249). 
Are local issues being marginalized, and are research findings being disseminated into local contexts to a diminishing degree, as suggested by López-Navarro et al. (2015: 942)? Is the proportion of impact that occurs as a result of publications in non-English languages declining as a consequence of evaluative metrics that favor English explicitly or implicitly? As the Leiden Manifesto (Hicks et al. 2015) suggests, decisions about research should not be based on one-eyed measures. A plurality of measures should be applied according to a principle that Lamont (2009) calls 'cognitive contextualization', meaning that the choice of criteria should be responsive to variations in the contexts and purposes of research. Existing bibliometric indicators not only lead to an unfair lack of appreciation of non-mainstream journals; they also play an active and constitutive role in defining the purposes of research (Dahler-Larsen 2014). For example, the language of research communication is linked to the choice of research topics (López Piñeiro and Hicks 2015). There is a need to make new and alternative measures available that are sensitive to differences in impact across languages (López Piñeiro and Hicks 2015) and that, more specifically, take into account the relative weight of citations due to language or country (González-Alcaide, Valderrama-Zurián and Aleixandre-Benavent 2012: 308). The contribution of this article is very specific. It offers a new index, the PLOTE-index (percentage of citations in languages other than English), which measures the percentage of citations flowing from the non-English publications of a researcher or a group of researchers. Through a feasibility study of 40 professors in political science in one country, it is demonstrated that PLOTE data can be collected and that variations in PLOTE across subfields and over time can be meaningfully interpreted. The construction of an index is not just a collection of data; it is a social experiment.
Although the PLOTE-index is mathematically simple, its construction is important. Measurement of a phenomenon helps promote a common social recognition of the objective existence of that phenomenon (Porter 1995). Only if variations in citations of publications in various languages are actually measured can they be acknowledged, debated, assessed, and appreciated in research evaluation. For example, national policymakers should be aware that the adoption of apparently 'language-neutral' bibliometric indicators for research evaluation purposes may in fact help undermine the dissemination of research knowledge in LOTE (Salager-Meyer 2014). In decisions about recruitment and promotion, a consideration of PLOTE data along with other data can lead to a fuller appreciation of a researcher's impact upon local and national stakeholders along with his or her international success. In general, the function of the PLOTE-index is to direct attention toward the impact of non-English publications, saving them from becoming 'lost science' (González-Alcaide, Valderrama-Zurián and Aleixandre-Benavent 2012). Given the 'functional split of languages' and the 'fragile balance' between English and LOTE in non-Anglophone research settings (López-Navarro et al. 2015: 962), the PLOTE-index quantifies the consequences of the trade-offs researchers have to make, thereby taking their dilemma seriously. The article is structured as follows. First, the potential biases against non-English languages in conventional citation analysis are discussed. Then comes a review of reasons why researchers may publish in LOTE versus English. The actual construction of the PLOTE-index and the choice of database are then described. The feasibility of the PLOTE-index is demonstrated through the use of secondary Google Scholar data to derive PLOTE scores for 40 professors at political science departments in Denmark. Variations across subfields and a decline over time in PLOTE scores are demonstrated.
Finally, the article concludes with a list of potential further uses of the PLOTE-index and a brief discussion.

2. Language as an issue in research evaluation

As mentioned earlier, citation analyses are increasingly used as indicators of performance in research evaluation systems (Geuna and Martin 2003). Publications in journals published mainly in English, and citations thereof, are the main focus for most evaluation agencies when assessing research productivity and performance (López-Navarro et al. 2015: 942). English-language journals were already found to be overrepresented, to the detriment of other languages, in databases such as WoS and Scopus in 2006 (Archambault et al. 2006). This finding was confirmed about 10 years later by Mongeon and Paul-Hus (2016). Over time, the proportion of research published in languages other than English in the Science Citation Index has been decreasing in medicine (van Leeuwen et al. 2000) as well as more generally (González-Alcaide, Valderrama-Zurián and Aleixandre-Benavent 2012: 298). A number of interactive and sometimes self-fulfilling mechanisms enhance English as the preferred language of research communication. Of course, outlets with international circulation are often more prestigious and may attract the better contributions. Editors and reviewers of international publications (more often than not English publications) are more demanding, and competition is harder. The best and toughest reviews come from prestigious international journals. In addition, circulation is often much greater. Even if the recent focus on journal impact factors may sometimes be built on 'folk theories' rather than exact knowledge (Rushforth and de Rijcke 2015), adoption of the impact factor as an evaluative criterion has favored the consolidation of English-language journals in the diffusion of scientific knowledge (González-Alcaide, Valderrama-Zurián and Aleixandre-Benavent 2012: 301).
All other things being equal, the lower number of citations for publications in LOTE is a disadvantage for researchers who publish in both English and LOTE when it comes to bibliometric performance per publication (van Leeuwen et al. 2001). The problem is self-sustaining to the extent that non-mainstream journals are less visible and recognized. Even if national journals are not explicitly excluded from the most prestigious category of journals, policies may have that effect. For example, in the Norwegian system, to be placed in the most prestigious category, journals must have 'visibility to the widest relevant audience' (Schneider 2009: 367), be 'leading in a field', and have 'an international audience' (Aagaard 2015: 726). Impact factors are also used as guidelines in the selection of journals that give extraordinary publication points. In practice, these criteria work against journals in L1. Sometimes, international evaluation panels do not have the capacity to read publications in LOTE. Sometimes, with no methodological justification, reviews ignore publications that are not in English (Tardy 2004: 251; González-Alcaide, Valderrama-Zurián and Aleixandre-Benavent 2012: 304). Sometimes, a hierarchical understanding is ingrained institutionally in bibliometric systems when quality is conflated with international and international is conflated with 'in English'. This assumption helps construct reality. If researchers are pressured to send their best pieces of work to outlets in English with a high impact factor (Salager-Meyer 2014: 129), the socially constructed assumption that journals in English represent higher quality than other journals in fact becomes a valid descriptor of reality over time. The increasingly dominant position of English is part of a larger configuration of social and economic factors.
Knowledge of English, financial interests, technological equipment, and infrastructure tend to work together, privileging the center and producing biases against the peripheries (Rafols, Molas-Gallart and Woolley 2015). Financial and linguistic barriers make access to journals unequally distributed around the world (Salager-Meyer 2008: 128; Chavarro, Tang and Ràfols 2017). It is very costly to construct alternative databases with LOTE publications and their citations (López Piñeiro and Hicks 2015). If the language of publications is not automatically registered by the evaluation machines producing citation analysis, language-sensitive metrics (such as the PLOTE-index) have to be produced manually. The ease or difficulty of counting aspects of research and its impact cannot be seen in isolation from the larger industrial, commercial, and professional interests and social structures in which the technology of bibliometrics is embedded. Nevertheless, many nations adopt research policies in which a key role is played by conventional bibliometric measures that do not pay particular attention to issues of language. A number of consequences follow. First, publications that are published internationally have to match the interests of an international audience. International publication is not entirely neutral with regard to topics. A systematic comparison of national (Spanish) and international publications in the same discipline (sociology) has shown that the distribution of publications on topics differs between these two types of outlets. For example, Spanish journals published relatively more on migration and family, whereas international publications provided relatively more material on methodology (López Piñeiro and Hicks 2015). However, 'international' does not mean 'an imagined universal community of scholars wishing to share their best research' (Lillis and Curry 2010: 137). Often it means 'Anglophone'.
Internationally oriented bibliometric indicators may reduce the motivation for researchers in LOTE contexts to attend to the 'social and cultural realities' of their own communities and societies and to topics of high local relevance (Belcher 2007; López-Navarro et al. 2015; Chavarro, Tang and Ràfols 2017). A shift toward more international publication may lead to a loss of genres or registers in 'otherwise healthy languages' (Tardy 2004: 251). The reduction of a national body of texts in particular scientific domains creates barriers for the access of non-privileged groups to knowledge and education (Alexander 2013). It may also dry out advanced vocabularies in LOTE as useful media for important national debates on public issues. Because of the need to 'communicate with relevant constituencies' and 'interlocutors' in their relevant languages (Alexander 2013: 86), there can be a legitimate national and political interest in combating a situation where the dominance of English reinforces socioeconomic inequality. It may also be a national goal that a population benefits in its own language from investments of taxpayers' money in research. Having considered how languages of research communication are socially embedded, let us turn to the choice of language made by individual researchers.

3. Why researchers publish in LOTE or English

As explained by Hicks (2004), researchers may aim to make a scientific contribution and to earn scientific recognition among their peers. Yet, as professionals of the national research community, they are also required to be accountable to their employers and their society. Researchers may associate publishing in English with greater intellectual feedback, broader diffusion of their ideas, recognition, better chances for promotion, and more citations (López-Navarro et al. 2015: 951).
However, even if the prestige and function of mainstream journals is recognized, researchers sometimes choose non-mainstream journals, as these journals may help knowledge-bridging (sharing knowledge with stakeholders) and gap-filling (covering topics neglected in conventional journals) (Chavarro, Tang and Ràfols 2017). These concerns may be balanced in different ways depending on research fields (López-Navarro et al. 2015). Even within one field, say political science, which is the subject of the feasibility study (see following text), there might be subfields with varying proportions of publications and impacts in different language areas. For example, in international relations and European Union (EU) studies you might find a higher proportion of English language publications than in administrative law or public administration. An uneven distribution of citations would be a logical consequence. Next, type of publication might matter. If a book in English is more burdensome to write and is not recognized in the most prestigious bibliometrics, which only count journal articles, that would enter into the equation, too. In some situations, if you aim at maximizing readership, citations, or impact, your choice of language for your book would be considered more carefully than when you make the obvious choice of English for an (international) article. Researchers play different roles in relation to various forms of impact. Some forms of impact are channeled through high-ranking academic publications. Other forms are facilitated through white papers, government reports, evaluation reports, and widely read books that inform decision-makers. Still other forms of impact are created when a researcher helps build organizational capacity or informs the public about some controversial issue. Some researchers are positioned to facilitate these forms of impact without first going through a well-recognized international journal. 
The distribution of roles related to various types of impact may be uneven across subfields. Timing and speed may also be an issue here. Some issues are burning and therefore require swift publication to audiences, which may be national or local; these may be reached more quickly than international ones. Not surprisingly, research also shows that the English skills of researchers play a role. One study found that the most common reasons for choosing one's own language in a research context were ease of communication and comfort level (Tardy 2004). Lack of skills in English should be taken seriously as an impediment to the international publication of otherwise good and original research. Solutions to this problem may be found in training, resources for translation, new editorial policies, and multilingual journals (Salager-Meyer 2014). However, skills should not only be seen from a deficiency perspective. The skills that researchers have in their L1 may help them express their findings more eloquently and precisely for audiences who may, in turn, enjoy reading scientific work in their own L1. In other words, there can be a number of legitimate reasons for carefully considering the balance between English and non-English languages in the choice of research communication. Some researchers in non-English contexts are native-born English speakers, so for them English is a natural choice. There can also be traditions or bad habits in particular research settings that work against English or against internationalism. Altogether, the choice of language can be influenced by many factors. The conclusion of this section is, however, that there are sufficiently many good reasons why the choice of language is often a difficult trade-off. Variations in practice can therefore be expected. The next step is to measure the consequences of these variations in terms of the proportion of citations that flow from publications in LOTE versus English, respectively.

4. Definition of PLOTE

The PLOTE-index1 is defined as the number of citations of publications in LOTE authored by a researcher or a group of researchers, divided by the number of citations from all publications authored by the researcher (or group), multiplied by 100. Notice that what matters is the language of the cited publication; the language of the publication in which the citation occurs is not relevant. By capturing the percentage of citations that stem from non-English publications, the PLOTE-index describes the degree to which a researcher departs from an all-English set of citation-producing publications. To provide a clear alternative to conventional bibliometric indicators, the index is defined so that alternatives to English bring a researcher closer to 100%, whereas an all-English impact strategy results in a score of zero. The index may or may not be defined for particular periods.

5. Why citations? Proof over promise

The PLOTE-index measures citations, not publications. This choice is consistent with the present focus on citations in research evaluation. It is also consistent with a 'proof over promise' argument (Harzing and Mijnhardt 2015): the actual functions of publications (be it in an international or a national arena) are better captured by citations than by the number of publications or, for that matter, by the impact factor of the journal in which the publications occur. The focus on citations means that a text that is never cited is regarded as inconsequential. It has no influence on the PLOTE score. As a fortunate methodological consequence hereof, the problem of 'stray publications' that bedevils some databases is partly circumvented, as non-existing stray publications are usually not cited.
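The definition above reduces to a one-line computation. As a concrete illustration, the following minimal sketch computes a PLOTE score from a list of (language, citations) pairs; the function name and the sample record are hypothetical, not drawn from the study's data.

```python
def plote_index(publications):
    """Percentage of citations flowing from non-English (LOTE) publications.

    `publications` is a list of (language, citations) pairs, where the
    language is that of the cited publication, not of the citing one.
    """
    total = sum(cites for _, cites in publications)
    if total == 0:
        return None  # undefined for a researcher with no citations at all
    lote = sum(cites for lang, cites in publications if lang != "English")
    return 100 * lote / total

# Hypothetical researcher: two English and two Danish publications.
record = [("English", 80), ("English", 10), ("Danish", 8), ("Danish", 2)]
print(plote_index(record))  # 10 of 100 citations stem from LOTE texts: 10.0
```

Note that, as in the definition, an all-English record yields 0, while publishing exclusively in LOTE would yield 100.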
Another positive aspect of a citation-based approach is that it represents the diversity of researchers in terms of age, gender, discipline, and academic affiliation better than a mere count of publications or a count of publications in journals with a high impact factor (Harzing and Mijnhardt 2015).

6. Choice of primary database

To calculate PLOTE scores, a source of data had to be selected. Google Scholar was chosen based on the following criteria: (1) the database should have the best possible representation of materials in LOTE; (2) it should include citations of and in a broad set of materials; (3) it should be democratically available, free, and easy to use, so that it would not further enhance existing inequalities; and (4) it should be appropriate for the purpose at hand (a feasibility study of professors in political science). Regarding Criterion 1, Google Scholar is less biased in favor of English than other databases and more representative of the world's research activity (Delgado-López-Cózar and Cabezas-Clavijo 2013). Google Scholar has better coverage of non-English journals (Mongeon and Paul-Hus 2016). Halevi, Moed and Bar-Ilan (2017) found that Google Scholar includes about three times as many non-English documents as WoS. Whereas only about 1% of the citations in WoS and Scopus were from LOTE, the corresponding figure in Google Scholar is ∼7% (Harzing and van der Wal 2008). Mas-Bleda and Thelwall (2016) showed that moving to social media data does not reduce the bias against non-English publications relative to Scopus. Regarding Criterion 2, Google Scholar includes two or three times as many journals as conventional databases (Delgado-López-Cózar and Cabezas-Clavijo 2013). Google Scholar is 'relevant societally', as it includes publication outlets beyond a narrow set of academic journals (Harzing and Mijnhardt 2015: 739). It has 'the most comprehensive coverage' (Harzing and Alakangas 2016: 801).
One study finds that Google Scholar reports three times as many citations of books as Scopus does (Harzing 2013). Furthermore, Google Scholar retrieves data rapidly (Delgado-López-Cózar and Cabezas-Clavijo 2013). Regarding Criterion 3, Google Scholar is freely available, at least for everyone with Internet access. It is also well known and easy to use. These factors contribute to equal access and use. Many use Google Scholar to get a first impression of a researcher and his or her work. Many researchers themselves use their Google Scholar H-index as a marketing device (Burrows 2012: 368). With respect to Criterion 4, a golden rule in research evaluation is to choose 'the right tool for the task' (Hicks et al. 2015; Halevi, Moed and Bar-Ilan 2017: 226). The feasibility study (presented later) uses professors in political science as examples. In the social sciences and humanities in general, citation rates in Google Scholar are about four times higher than in Scopus (Halevi, Moed and Bar-Ilan 2017), or up to 14 times higher than in WoS (Harzing and Alakangas 2016: 795). One of the strengths of Google Scholar lies in its broader coverage of citations, particularly in the social sciences and the humanities (Halevi, Moed and Bar-Ilan 2017: 827). Google Scholar can redress the traditionally disadvantaged position of the social sciences in citation analysis (Harzing 2013: 1074), a point that fits nicely with an analysis of political science. Although Google Scholar scores well on Criteria 1–4, its broad and rapid coverage (provided by machines) comes at a price. In particular, the literature identifies four problems with Google Scholar data (Martín-Martín et al.
2016; Halevi, Moed and Bar-Ilan 2017): Problem 1, stray publications (miscreditation of author name or other publication details, including ghost publications that do not exist); Problem 2, duplications; Problem 3, potential manipulation of data; and Problem 4, a general lack of transparency, including the lack of a master list and of a description of quality control (Delgado-López-Cózar and Cabezas-Clavijo 2013: 105; Halevi, Moed and Bar-Ilan 2017: 830). Although all of these problems should be taken seriously, studies have found that 89% of data points in Google Scholar could be verified (Prins 2016) and that the error rate in some respects is as low as 0.5% (Harzing 2013). The first line of defense against Problems 1–4 is that our feasibility study builds on the names of verifiable university professors, thereby ignoring stray publications written by unknown ghost authors. The focus on citations, not publications, further reduces the problem of stray publications: publications that do not exist are usually not cited very much. The problem of duplications is also negligible because each citation occurs only once; a PLOTE score is not substantially changed just because the same set of citations might be split into two smaller sets. To reduce Problems 1–4 as much as possible, further steps were taken. Data were captured from the professors' own publicly available Google Scholar profiles. These are secondary data because they are subject to the corrections made by the professors themselves, including the removal of stray publications and the merging of duplicates. Although this cleansing of data might not be perfect, professors have an interest in presenting their own publications correctly. One important motivation here is that professors wish to be cited correctly.
Furthermore, although it is generally claimed that Google Scholar data can be manipulated, the professors have absolutely no interest in having manipulated data represented in their own publicly available profiles (Harzing and Mijnhardt 2015: 102). University professors who identify themselves by name and institution have something at stake. In other words, bringing the Google Scholar data back to both individual and public scrutiny under the official name of each university professor helps re-create, at the secondary data level, a form of accountability that is not present in primary Google Scholar data. A final check on potentially remaining stray publications was carried out manually during the feasibility study. Several spot tests performed on the resulting data consistently showed that frequently cited publications were in fact genuine publications. Only frequently cited texts make a substantial difference for PLOTE scores. Small errors that occur randomly through computerized counting are not likely to constitute systematic errors across researchers or across subfields. When comparing PLOTE scores across subfields or over time, sources of error may cancel each other out, unless there is a systematic measurement error in particular subfields of political science only or in a particular period only.2 Thus, the methodology used in the feasibility study presented here resonates with several sources claiming that if corrective procedures are in place, and data fit a given analytical purpose, Google Scholar can in fact be used meaningfully (Harzing and van der Wal 2008; Harzing and Mijnhardt 2015; Prins 2016).

7. Feasibility study and methodology

The feasibility study concerns professors at all political science departments at universities in Denmark. Political science is a part of social science that is interestingly situated between the 'Erklärung' (explanation) of the natural sciences and the 'Verstehen' (understanding) of the humanities.
Political scientists contribute to general theory and they study the international order, but they also study a national political system, stimulate a national debate, write textbooks, and sit on committees. As they serve a variety of international and national audiences, variations in their PLOTE-index can be expected. By focusing on full professors, most of the variation in seniority and form of employment is controlled for. A low number of citations from very few publications by very young researchers does not randomly distort the picture. All professors were identified on the official webpages of all departments named 'Statskundskab' (the Danish term that comes closest to political science). Their names were typed into the search field in Google Scholar. Forty professors with a Google Scholar profile were included in the feasibility study, whereas 31 professors were not included because no Google Scholar profile could be found using plausible variations in the spelling of the researcher's name. The employment status and affiliation of two persons at the time of the feasibility study could not be established with certainty. Only the 40 professors verified in this procedure by two independent research assistants were included in the analysis. Each professor was allocated to a broad category denoting his or her main subfield. Although some work in more than one subfield, official website data describing research groups and centers were used to the largest possible extent to determine subfields and their labels. Only subfields that include four or more professors will be reported in the following text. In all, 39 professors fall into these categories. Using Google Scholar, each publication was registered along with the language in which it occurred, as well as the number of citations. There was no control for co-authorship or self-citation.
In addition to the main analysis covering all 40 professors up to and including 2016, a separate analysis with the same individuals was carried out covering only the years 2014–2017 (until April) to capture changes over time.

8. Findings

Data were collected for 40 professors. Their PLOTE-index ranges from 0.00 to 45.85%. Table 1 shows the number of publications and citations broken down by language of the publication. The table also shows the number of citations per publication.

Table 1. Number of articles and citations in English and non-English among 40 professors

Language of publication   Number of publications   Number of citations   Citations per publication
English                   4,125                    75,703                18.35
LOTE                      2,948                    6,643                 2.25

There is a striking difference between the number of citations per publication in the two language categories. On average, a publication in English is cited about eight times more frequently than a publication in LOTE. It is not known whether this is due to a higher quality or wider circulation of English publications, or whether there are differences in the types of publications across the two categories that explain the differences in numbers of citations. We also do not know whether it took more than eight times more effort to produce an English publication. Some of the professors are native English speakers, so for them English is not English as an additional language (EAL).
For others, the desire to publish in EAL may have directed them in particular thematic or methodological directions. Yet, all other things being equal, if one's intention is to maximize the number of citations of a publication, English is a better choice. Remember, however, that a massive share of citations is carried by a few publications. There is no guarantee that a text in English will receive an average number of citations; most texts do not. All in all, however, English texts are cited more on average. All other things being equal, both a bibliometric system that rewards publications in English and one that simply rewards a high number of Google Scholar citations regardless of language provide strong incentives to write in English. Do professors actually do that? The association between citations (regardless of language) and the proportion of publications written in LOTE by the professors in the sample is shown in Figure 1.

Figure 1. Citations and proportion of LOTE publications.

As is usually found, the number of citations across scholars is highly skewed. The majority of professors have fewer than 1,500 citations, a few have between 1,500 and 7,000, and one has more than 20,000. The proportion of non-English publications is somewhat higher among professors with a relatively low number of citations than among the rest, but there is great variation also in the middle group, and even the top scorer in terms of citations is not publishing in English only, as more than 20% of his/her oeuvre is in LOTE. Figure 2 shows the PLOTE-index against the number of citations.

Figure 2. Citations and PLOTE-index.

Professors with more than ∼1,500 citations typically have a relatively low PLOTE-index, less than ∼5%.
There are three exceptions with more than 1,500 citations and a PLOTE-index above 35%. Yet, none of them passed the 5,000-citation threshold. And the PLOTE score for the professor with the highest number of citations is very close to zero, even though more than 20% of his or her work is in non-English languages. So, with some variation and exceptions, the road to a high number of citations has gone through publications in English. Perhaps the H-index is more often referred to than the number of citations itself. The H-index also moderates the effects of extreme outliers in the absolute number of citations. The H-index is the largest number h such that h publications each have at least h citations; e.g. the H-index is 20 when 20 publications have at least 20 citations each. Figure 3 shows the PLOTE-index against the H-index.

Figure 3. H-index and PLOTE-index.

The pattern is relatively clear. Eight professors achieve an H-index above 20 in combination with a PLOTE-index below 5%. Only three have an H-index above 20 in combination with a PLOTE-index above roughly 35%. These sets of observations may correspond to different publication strategies. A relatively high H-index is demonstrably possible in combination with a high proportion of non-English impact, but it is not the strategy chosen by most of the professors in our sample. The typical road to many citations goes through English publications. Let us now look at subfields. Table 2 shows the average PLOTE-index within broadly defined subfields in political science.

Table 2. Average PLOTE-index in subfields of political science among 39 professors

Subfield                              No. of professors   Average PLOTE
International relations               10                  3.06
EU studies                            4                   5.04
Political behavior and institutions   6                   11.29
Comparative politics                  10                  13.75
Welfare and labor market studies      5                   15.21
Public administration                 4                   32.36

This table includes professors who could be grouped into subfields with four or more members. One professor could not be placed in such a group.

Not surprisingly, professors occupied with international relations and EU studies have a relatively low PLOTE-index. Some of them are international experts who work in English only. Their typical audiences, interested in the same topics, are international. As predicted, the average PLOTE-index here is low. The middle groups in terms of PLOTE-index are the subfields of political behavior and institutions, comparative politics, and welfare and labor market studies. Finally, researchers in public administration have an average PLOTE score more than twice that of any other group. The high PLOTE score in this group can be explained by overlapping topical interests with national stakeholders and decision-makers. Professors in public administration train public administrators, advise on organizational change, carry out evaluations, sit in public committees, and engage in national debates about the public sector.
These activities may result in citations within the national language area. But are these traits necessarily inherent in the subfield as such and unique to it? Consider the counter-arguments. Have particular research traditions or traditional publication patterns within public administration created path dependency, which results in a high PLOTE score, when these patterns are not a necessary part of the subfield as such? Could scholars studying public administration articulate their research interests so that they would resonate more with international audiences? And do professors in other subfields (such as international relations or EU studies) not also have some obligations toward national audiences? In Brexit times, is EU research not highly relevant for national audiences such as the Danish one? Table 2 describes stark differences in the PLOTE-index across subfields of political science. It is a matter of interpretation and discussion whether these differences should be understood as inherent, ‘natural’, and desirable properties of the subfields, or whether actions should be taken to reduce or eliminate the differences in the PLOTE-index. The empirical finding itself does not warrant a normative conclusion. Nevertheless, evaluative metrics that do not acknowledge language at all would probably have a homogenizing effect (a downwards pressure on the PLOTE-index) in the long run (Geuna and Martin 2003: 296). With Table 2 in hand, however, there is an informed starting point for a normative discussion about whether subfield differences in PLOTE scores are appropriate, justified, and meaningful. Finally, let us see if PLOTE scores change over time. A special analysis was carried out covering only the most recent years 2014–2017 (until April). Many of the new publications are not cited yet. 
The data are therefore not broken down into subgroups; instead, a comparison is made between the new period and the rest of the data set, which covers all years until and including 2016. The sample of professors remains the same. A change over time is therefore not due to new blood in the group but due to, for example, responses to changing demands (or to age or seniority). The PLOTE-index is declining. It is now less than 4%. Among the professors in the sample, there is an underlying decline, although less dramatic, in the proportion of non-English publications (Table 3). The change may be due to reactions to deliberate research policy and evaluation regimes and/or to changing informal norms among researchers. One might speculate that the PLOTE-index is very sensitive to further changes, because a small drop in the proportion of non-English publications has already led to a much sharper drop in the PLOTE-index. Although this difference can also be due to a difference in the timing of the transformation of publications into citations across language areas, the decline in PLOTE is substantial and consistent with observations in the literature of declining proportions of national citations over time (González-Alcaide, Valderrama-Zurián and Aleixandre-Benavent 2012).

Table 3. Proportion of non-English publications and PLOTE-index over time among 40 professors

Time period                Proportion of non-English publications   PLOTE
Until and including 2016   41.68                                    8.07
2014–2017                  35.81                                    3.57

This table is based on the total number of citations and publications from all professors in the sample. The resulting PLOTE score is thus not entirely comparable with the average of individual PLOTE scores in the earlier analyses.

9. The PLOTE-index: What is there to gain?

The PLOTE-index can be constructively used in policymaking, research evaluation, and strategizing. In the making of research policy, it is reasonable to clarify goals regarding international and national impact. Many countries have institutionalized mechanisms for measuring the impact of research (Geuna and Martin 2003). Many use bibliometric measures directly or indirectly in funding decisions. More often than not, this happens without any explicit concern for the balance between impacts across language areas. Bibliometric indicators should not be applied indiscriminately (González-Alcaide, Valderrama-Zurián and Aleixandre-Benavent 2012: 308). Evaluative machineries should not be inconsistent with policy goals, so maybe the focus of these machineries should be reconsidered, if not the focus on these machineries. Are evaluation machines making policy? If research policies have negative consequences for the production of national scientific publications (González-Alcaide, Valderrama-Zurián and Aleixandre-Benavent 2012: 306), the PLOTE-index can help quantify these consequences so that research policy decisions are made with open eyes. For example, even if national citation rates have been historically declining, do policymakers really want a further decline in PLOTE below the 4% found in the present study?
Perhaps decision-makers would accept a relative decline in PLOTE as long as the number of LOTE publications does not fall in absolute terms. Again, facts may make the decision more enlightened. If policymakers deliberately want the total number of citations to increase and the number of LOTE publications not to drop (or, more ambitiously, PLOTE not to drop), then the workload on researchers working in LOTE contexts will simply increase, as they must do more of everything. Objectives related to conventional citation measures may conflict with another item on the agenda in today's research policies, i.e. the increasing interest in the impact of research, including variety in forms of impact (Donovan 2011; Penfield 2014; Derrick and Samuel 2016). An important dimension has to do with non-academic or social impact, including impact understood as the involvement of researchers in cooperative work with stakeholders, and the contribution of research to economic growth, innovation, welfare, and an enlightened democratic debate. A significant part of these kinds of impact is likely to fall within a national arena. Although citations of LOTE publications are not a measure of all kinds of social impact, a high number of LOTE citations indicates some level of relevance of a researcher's work for stakeholders in his or her society. The pressure toward international citations and the pressure toward social impact may be in conflict, or at least not easily compatible.
If some researchers are tempted to manage this cross-pressure by pragmatically publishing more or less similar texts in several languages, then the strategy is partly inhibited by codes of conduct, which say that researchers are allowed to publish an original finding only once, regardless of language area, except under particular circumstances.2 Allegedly, these rules and regulations are countermeasures against a desire among researchers to 'inflate' their publication lists by publishing versions of the same results several times. This countermeasure, however, does not take into account that in some respects it may be good, common practice to publish similar texts in both English and LOTE versions (to make findings known to both international and national audiences). If this practice is banned or made suspicious by a code of conduct, good publications are even less likely to appear in national outlets. Therefore, it should be carefully considered whether codes of conduct in research unintentionally interact with bibliometric indicators, so that further pressure is put on researchers to send their best material to international outlets only, while the PLOTE-index drops and forms of national, social impact receive less attention despite their declared importance in official policy statements. Next, the PLOTE-index can be used in research evaluation. Researchers with high PLOTE-indexes are, all other things being equal, likely to have a lower total number of citations, because it requires many more publications to produce a given impact in LOTE than in English. Without control for variations in the PLOTE-index across fields and subfields, if the number of citations regardless of language is crucial, then researchers with a high PLOTE-index are in an unfavorable position when under evaluation, because they have fewer citations in total, all other things being equal.
If the PLOTE-index varies systematically across fields and subfields, should these variations be controlled for in research evaluation? When comparing research groups across fields and subfields, normalization is usually regarded as sound (Wouters et al. 2008), although there may be difficulties with normalization in practice (Leydesdorff and Bornmann 2016). However, for the issue at hand, there is no objective way to normalize PLOTE scores until a systematic registration of the underlying data is implemented. In addition, the use of normalized data requires a clarification of the difference between norms in the sense of what people normally do and norms in the sense of ideals to follow. Just because various PLOTE scores are normal does not mean they are good. Nevertheless, research evaluation in disciplines such as political science could at least take into account that there are presumably sound reasons for at least some of the variations in PLOTE scores from one subdiscipline to another. Furthermore, the PLOTE-index might be relevant in recruitment decisions. In some situations, a high number of citations (regardless of language) may indicate an important aspect of research quality in a candidate. In other situations, depending on the job, it may be relevant to take into account the ability of the applicant to traverse multiple language areas and work with stakeholders in a particular national context or in several non-English-speaking domains. The PLOTE-index can be used to assess the ability of a researcher to balance his or her impact appropriately across the English/non-English divide. In some recruitment decisions, where national influence and visibility play a role, a researcher with a moderate PLOTE score and a good number of total citations may be as fine a candidate as someone with a high number of citations and a PLOTE score of zero.
Without such considerations, applicants in subfields of political science with a high PLOTE-index are systematically disadvantaged in totally open calls where the total number of citations and/or the number of articles in journals with a high impact factor are used as selection criteria. Finally, PLOTE scores can be used for strategizing. When subfields within the same department have markedly different PLOTE scores, a given research group may ask itself whether it is positioned the way it wishes to be with regard to citations and the PLOTE-index. If there is a trade-off between high total impact and a balanced PLOTE-index, how are these conflicting concerns managed? This question may trigger an interesting reflection on motivations to publish in English or other languages as they unfold in the local context (López-Navarro et al. 2015). Let us take the public administration group in the feasibility study as an example. One strategic trajectory for this group involves a reduction of its PLOTE-index over time to become more isomorphic with other groups. This would probably require deliberate efforts to publish more in English and saying goodbye to some non-English impact. In this way, the group and its members position themselves more strategically in research evaluations and recruitment situations where there is an emphasis on international impact or simply on a high number of citations. Another strategic option is to argue compellingly for the continued relevance of having a higher PLOTE-index than other groups. A variation of this argument says that various subfields may contribute in different ways to the achievement of multiple goals in a department. One could also argue that the overall agenda in policy and management is presently shifting toward more emphasis on non-academic and social impact, which in practice often means impact in a national arena.
So, a golden dawn awaits those with a high PLOTE-index (unless political emphasis shifts to impact not flowing from publications at all, but from other kinds of research activity). Researchers in and across groups may hold different views about likely changes in the goals of future research policies, and about the amount of ambiguity and discretion in interpreting them. At the level of individual researchers, a glance at the PLOTE-index may also lead to reflexive questions. Even if there is a general pressure to publish more in English, the link between citations and the PLOTE-index (as well as between the H-index and the PLOTE-index) does not follow a linear natural law. General tendencies and individual strategies are not the same. Individual researchers may ask themselves whether they are happy with their position vis-à-vis each of these metrics. They may also post their PLOTE-index as a personal marketing and branding strategy, as a supplement to the apparently language-neutral H-index, to highlight their relevance for LOTE audiences.

10. Limitations

What is presented here is only a first, preliminary version of the PLOTE-index. The feasibility study has limitations. The data set describes a relatively small sample of professors in political science, and findings cannot be generalized to other fields or other countries. Although a number of corrections of primary Google Scholar data were made to produce usable secondary data, not enough is known about the rules for inclusion and data quality in Google Scholar to begin with. It remains unclear exactly what kinds of materials count as 'a citation'. Google Scholar does not make language data available and searchable. The manual work required to calculate PLOTE scores from Google Scholar data constitutes a barrier to immediate large-scale diffusion of the PLOTE-index.

11. Further applications and improvements

The PLOTE-index and its applicability could easily be improved if more advanced digital metrics supported it automatically, making manual counting superfluous. Such a development would be welcome. If data quality in Google Scholar improves, the raw material used for the construction of the PLOTE-index may improve further. In the future, the fundamental idea in the PLOTE-index can also be refined and further explored using other suppliers of data instead of Google Scholar. The index can be developed in a number of directions, each serving specific purposes. PLOTE scores can be calculated not only for individuals but also for disciplines and nations. The latter may be relevant to understand and monitor the consequences of national research policies and systems for research evaluation for national publications. The PLOTE-index can be refined by taking into account the language of the citing publication, not only the cited one. That would further empirical research on the interaction between different kinds of literature, as suggested by Chavarro, Tang, and Ràfols (2017: 1674). Instead of non-English languages, the numerator could count citations in the language area where the researcher has his or her second-largest number of citations. The PLOTE-index would then measure the ability of the researcher to extend beyond his or her preferred language area, regardless of which language area that might be. The force of this version of the indicator could be enhanced if an extra premium (such as a factor of 2, 3, or 4) were given to impact in additional language areas beyond the two best-scoring ones.

12. Conclusion and perspectives

The issue of language is particularly sensitive for researchers operating in non-English-speaking contexts. They have a range of good reasons to publish in English as well as in non-English languages.
The outcome of their decisions in terms of a spread of citations across language areas can be measured. This is the function of the PLOTE-index. A feasibility study of 40 professors at political science departments in Denmark was carried out. Although not generalizable, the case study is interesting because professors in political science are engaged in abstract theory, the study of international organizations, as well as in issues related to national policy, parties, and institutions. Therefore, variations in their PLOTE scores were expected. Such variations were in fact found using data from the professors’ Google Scholar profiles. A high number of citations and a high H-index are usually associated with a relatively low PLOTE-index. These findings confirm that with few exceptions, the road to many citations goes through publications in English. Many scholars with high and very high impact scores have in fact published considerably in LOTE, even if their PLOTE-index is low. They have not given up publishing in non-English languages even if it contributes little to their total number of citations. The analysis also showed that there is stark variation in the average PLOTE-index across subfields in political science. Finally, it was shown that the proportion of publications in non-English languages is declining over time and so is the PLOTE-index. In recent years, among the professors studied, the proportion of their impact in non-English languages has dropped below 4%. Unless a further drop in the PLOTE-index is a deliberate policy goal, attention to the PLOTE-index should be considered in policymaking. Potential use of the PLOTE-index at the level of managers, research groups, and individual researchers has also been suggested. For example, in research evaluation, variations in PLOTE across subfields in a discipline could be taken into account. 
In general, the PLOTE-index turns the proportion of citations that flow from non-English publications into something that is measured and therefore potentially understood, appreciated, and considered in policymaking, research management, and research evaluation. Further theoretical and empirical work can help us highlight and understand variations in the PLOTE-index across nations, fields, subfields, and individual researchers, and over time. Conflict of interest statement. None declared. Footnotes 1 Alternative names of the PLOTE-index were considered. A simpler name, the L-index, is already occupied by a logarithmic index (Belikov and Belikov 2015). ‘L1-index’ was also considered but lacks precision, as publications in all non-English languages are counted, not only L1. LOTE (languages other than English) borders on the derogatory because of its negative definition of non-English, but it is analytically precise. P is kept to remind the reader that PLOTE is a percentage, not an absolute number. 2 For the Danish example, see http://ufm.dk/publikationer/2014/filer-2014/the-danish-code-of-conduct-for-research-integrity.pdf, especially p. 11. References Aagaard K. ( 2015) ‘How Incentives Trickle Down: Local Use of a National Bibliometric Indicator System’, Science and Public Policy , 42: 725– 37. Google Scholar CrossRef Search ADS Alexander N. ( 2013). Thoughts on the New South Africa . Pretoria: Jacana. Archambault É. et al. ( 2006) ‘Benchmarking Scientific Output in the Social Sciences and Humanities: The Limits of Existing Databases’, Scientometrics , 68/ 3: 329– 42. Google Scholar CrossRef Search ADS Belcher D. D. ( 2007) ‘Seeking Acceptance in an English-Only Research World’, Journal of Second Language Writing , 16/ 1: 1– 22. Google Scholar CrossRef Search ADS Belikov A. V., Belikov V. V. ( 2015) ‘A Citation-Based, Author- and Age-Normalized, Logarithmic Index for Evaluation of Individual Researchers Independently of Publication Counts’, F1000Research , 4: 884. 
Google Scholar CrossRef Search ADS Burrows R. ( 2012) ‘Living with the H-Index? Metric Assemblages in the Contemporary Academy’, The Sociological Review , 60/ 2: 355– 72. Google Scholar CrossRef Search ADS Chavarro D., Tang P., Ràfols I. ( 2017) ‘Why Researchers Publish in Non-Mainstream Journals: Training, Knowledge Bridging, and Gap Filling’, Research Policy , 46/ 9: 1666– 80. Google Scholar CrossRef Search ADS Dahler-Larsen P. ( 2014) ‘Constitutive Effects of Performance Indicators—Getting Beyond Unintended Consequences’, Public Management Review , 16/ 7: 969– 86. Google Scholar CrossRef Search ADS Delgado-López-Cózar E., Cabezas‐Clavijo Á. ( 2013) ‘Ranking Journals: Could Google Scholar Metrics be an Alternative to Journal Citation Reports and Scimago Journal Rank?’, Learned Publishing , 26/ 2: 101– 14. Google Scholar CrossRef Search ADS Derrick G., Samuel G. ( 2016) ‘The Evaluation Scale: Exploring Decisions About Societal Impact in Peer Review Panels’, Minerva , 54/ 1: 75– 97. Google Scholar CrossRef Search ADS PubMed Donovan C. ( 2011) ‘State of the Art in Assessing Research Impact: Introduction to a Special Issue’, Research Evaluation , 20/ 3: 175– 9. Google Scholar CrossRef Search ADS Geuna A., Martin B. R. ( 2003) ‘University Research Evaluation and Funding: An International Comparison’, Minerva , 41/ 4: 277– 304. Google Scholar CrossRef Search ADS González-Alcaide G., Valderrama-Zurián J., Aleixandre-Benavent R. ( 2012) ‘The Impact Factor in Non-English-Speaking Countries’, Scientometrics , 92/ 2: 297– 311. Google Scholar CrossRef Search ADS Halevi G., Moed H., Bar-Ilan J. ( 2017) ‘Suitability of Google Scholar as a Source of Scientific Information and as a Source of Data for Scientific Evaluation—Review of the Literature’, Journal of Informetrics , 11/ 3: 823– 34. Google Scholar CrossRef Search ADS Harzing A.-W. 
(2013) ‘A Preliminary Test of Google Scholar as a Source for Citation Data: A Longitudinal Study of Nobel Prize Winners’, Scientometrics, 94/3: 1057–75.
Harzing A.-W., Alakangas S. (2016) ‘Google Scholar, Scopus and the Web of Science: A Longitudinal and Cross-Disciplinary Comparison’, Scientometrics, 106/2: 787–804.
Harzing A.-W., Mijnhardt W. (2015) ‘Proof Over Promise: Towards a More Inclusive Ranking of Dutch Academics in Economics & Business’, Scientometrics, 102/1: 727–49.
Harzing A.-W., van der Wal R. (2008) ‘Google Scholar as a New Source for Citation Analysis’, Ethics in Science and Environmental Politics, 8/1: 61–73.
Hicks D. (2004) ‘The Four Literatures of Social Science’, in Moed H. F., Glänzel W., Schmoch U. (eds) Handbook of Quantitative Science and Technology Research, pp. 473–96. Dordrecht: Springer Netherlands.
Hicks D. et al. (2015) ‘Bibliometrics: The Leiden Manifesto for Research Metrics’, Nature, 520: 429–31.
Lamont M. (2009) How Professors Think: Inside the Curious World of Academic Judgment. Cambridge: Harvard University Press.
Leydesdorff L., Bornmann L. (2016) ‘The Operationalization of “Fields” as WoS Subject Categories (WCs) in Evaluative Bibliometrics: The Cases of “Library and Information Science” and “Science & Technology Studies”’, Journal of the Association for Information Science and Technology, 67/3: 707–14.
Lillis T., Curry M. J. (2010) Academic Writing in a Global Context—The Politics and Practices of Publishing in English. New York, NY: Routledge.
López Piñeiro C., Hicks D. (2015) ‘Reception of Spanish Sociology by Domestic and Foreign Audiences Differs and Has Consequences for Evaluation’, Research Evaluation, 24/1: 78–89.
López-Navarro I. et al. (2015) ‘Why Do I Publish Research Articles in English Instead of My Own Language? Differences in Spanish Researchers’ Motivations across Scientific Domains’, Scientometrics, 103/3: 939–76.
Martín-Martín A. et al. (2016) ‘The Counting House, Measuring Those Who Count: Presence of Bibliometrics, Scientometrics, Informetrics, Webometrics and Altmetrics in the Google Scholar Citations, ResearcherID, ResearchGate, Mendeley & Twitter’, EC3 Working Papers, 21. University of Granada, https://arxiv.org/ftp/arxiv/papers/1602/1602.02412.pdf.
Mas-Bleda A., Thelwall M. (2016) ‘Can Alternative Indicators Overcome Language Biases in Citation Counts? A Comparison of Spanish and UK Research’, Scientometrics, 109/3: 2007–30.
Mongeon P., Paul-Hus A. (2016) ‘The Journal Coverage of Web of Science and Scopus: A Comparative Analysis’, Scientometrics, 106/1: 213–28.
Penfield T. et al. (2014) ‘Assessment, Evaluations, and Definitions of Research Impact: A Review’, Research Evaluation, 23/1: 21–32.
Porter T. M. (1995) Trust in Numbers. Princeton: Princeton University Press.
Prins A. A. M. et al. (2016) ‘Using Google Scholar in Research Evaluation of Humanities and Social Science Programs: A Comparison with Web of Science Data’, Research Evaluation, 25/3: 264–70.
Rafols I., Molas-Gallart J., Woolley R. (2015) ‘Science and Technology Indicators In & For the Peripheries. A Research Agenda’, paper presented at the 15th International Conference on Scientometrics & Informetrics, Istanbul, Turkey, 29 June–4 July 2015, https://pdfs.semanticscholar.org/8fa4/b1d0f0ae49e97d0cfcabeb55bd100a20d274.pdf.
Rushforth A., de Rijcke S. (2015) ‘Accounting for Impact? The Journal Impact Factor and the Making of Biomedical Research in The Netherlands’, Minerva, 53/2: 117–39.
Salager-Meyer F. (2008) ‘Scientific Publishing in Developing Countries: Challenges for the Future’, Journal of English for Academic Purposes, 7/2: 121–32.
Salager-Meyer F. (2014) ‘Writing and Publishing in Peripheral Scholarly Journals: How to Enhance the Global Influence of Multilingual Scholars?’, Journal of English for Academic Purposes, 13/1: 78–82.
Schneider J. (2009) ‘An Outline of the Bibliometric Indicator Used for Performance-Based Funding of Research Institutions in Norway’, European Political Science, 8: 364–78.
Tardy C. (2004) ‘The Role of English in Scientific Communication: Lingua Franca or Tyrannosaurus Rex?’, Journal of English for Academic Purposes, 3/2: 247–69.
van Leeuwen T. N. et al. (2000) ‘First Evidence of Serious Language-Bias in the Use of Citation Analysis for the Evaluation of National Science Systems’, Research Evaluation, 9/2: 155–6.
van Leeuwen T. N. et al. (2001) ‘Language Biases in the Coverage of the Science Citation Index and Its Consequences for International Comparisons of National Research Performance’, Scientometrics, 51/1: 335–46.
Wouters P. et al. (2008) ‘Messy Shapes of Knowledge - STS Explores Informatization, New Media and Academic Work’, in Hackett E. J. et al. (eds) The Handbook of Science and Technology Studies, pp. 319–51. Cambridge: The MIT Press.
© The Author(s) 2018. Published by Oxford University Press. All rights reserved.
Research Evaluation – Oxford University Press
Published: Apr 11, 2018