Abstract

Over the last decades, the use of mixed methods (MMs) has been burgeoning across social sciences. MMs have also been recommended as a viable research strategy for conducting studies in Public Administration. Through a systematic review conducted on seven journals, the article explores the extent to which the field has seized this opportunity and how it has done so. The review is organized around a framework that offers guidance on the analysis and design of MMs and is based on three pillars: selecting, sequencing, and connecting methods. The findings show that the field has been increasingly receptive to the adoption of designs combining at least a quantitative and a qualitative component. They indicate awareness of the repertoire of sequences available, ranging from parallel to sequential. They show signs of transparent reporting of unexpected results. However, they also show critical elements that may weaken the execution, hence the contribution yielded by the design. Drawing on the analysis of these limitations, the article points to the need to engage systematically in combining the results obtained through the separate research processes and to fully employ the evidence collected, especially through interviews, as a connecting point.

Introduction

This study maps the use of mixed methods (MMs) in public administration (PA) studies and offers a framework for their analysis and design. Over the last two decades, the use of MMs has been burgeoning across social sciences (Biesenbender and Héritier 2014), as witnessed by textbooks, the launch in 2007 of the Journal of Mixed Methods Research, and dedicated sessions included in the major disciplinary and field conferences. The intrinsic appeal of MMs lies in the opportunity to leverage the strengths and attenuate the limitations of single-method approaches, thus reconciling the ardent disputes between nomothetic and idiographic perspectives and challenging the "incompatibility thesis" between quantitative and qualitative methods. The variety of perspectives and richness of data offered by MMs appear particularly suited to advance our understanding of complex phenomena and to draw implications from the analysis. Owing to their undeniable merits, MMs are often attributed an a priori positive valence without much conceptual problematization, definitional clarity, or adequate assessment of the challenges associated with their adoption in a research project. In order to address these three interrelated uncertainties, we start by casting MMs in the philosophical frame of pragmatism (Tashakkori and Teddlie 2010), a perspective typically purported as a third way that bridges or at least overcomes the contrast between positivism/post-positivism and constructivism.1 From its early days (Dewey 1925; Peirce 1905) to more recent versions (Maxcy 2003; Morgan 2007; Murphy 1994), pragmatism has shown a clear focus on the problem to be analyzed and the practical implications of research.2 Its epistemological stance is practicality; that is to say, scholars should select and process data using "what works" to address the research question as a criterion. Therefore, it is not surprising that PA scholarship has turned its attention to MMs. A field with a predominantly applied focus (Van Thiel 2014), PA is concerned with employing "scientific knowledge to solve practical problems in highly politicized environments" (Ricucci 2010, 25).
We then define MMs as a methodology, that is, a design and process of research, based on the combination of methods for data collection and analysis. In turn, "methodological presuppositions inform the methods used—the various techniques that a researcher draws on" (Haverland and Yanow 2012, 401). MMs do not grant primacy to any specific paradigm nor hold researchers as "prisoners of a particular method or technique" (Robson 1993, 291). Following the intent of a study, MMs span viewpoints to inferences (Creswell and Plano Clark 2018) and promise to accommodate not only "top-down deductive research design but also grounded inductive or abductive research" (Feilzer 2010, 9). Specifically, we refer in this article to MMs as those including "at least one quantitative and qualitative method in the same research project" (Hesse-Biber 2015, 3). Finally, we argue that the pluralistic perspective entailed by MMs should not be taken as a tacit assent to conduct research in an expedient or sloppy way (Denscombe 2008). Quite the opposite, the purposefulness derived from pragmatism requires clarifying why and how MMs are employed. This allows a clear line to be drawn between a legitimate scholarly attitude to employ "what works" and the "whatever works" position that is sometimes associated with MMs and has predictably attracted skepticism from a few quarters (Bryman 2006; Pawson and Tilley 1997). Wrapping things up, at the heart of our argument lies the conception of MMs as a methodology aligned with pragmatism, a philosophical perspective consistent with the PA scholarly aim to build theories in an applied field. MMs have in fact been included among the main research strategies recommended, albeit rarely exemplified, as a viable option to conduct studies in PA (Ricucci 2010; Yang, Zhang, and Holzer 2008). However, we know neither the extent to which the field has seized this opportunity nor how it has done so. Our paper aims to fill this gap by mapping the use of MMs in PA research through a systematic literature review. The analysis of the selected articles is organized around a framework for assessing and designing MMs. The framework is based on common themes emerging from an extensive review of well-established studies on MMs (Caracelli and Greene 1993; Creswell and Plano Clark 2018; Greene 2007; Hesse-Biber 2010; Morse 1991; Onwuegbuzie and Teddlie 2003; Plano Clark and Ivankova 2015; Tashakkori and Teddlie 1998, 2010; Teddlie and Tashakkori 2003) and composed of three main pillars: selecting, sequencing, and connecting methods. Taking stock is our first contribution. In a nutshell, we can state that over the last decade, the PA field has been increasingly receptive to the adoption of MMs. Our results indicate scholarly awareness of the whole repertoire of MM sequences available, ranging from parallel to sequential. We have also found indications of transparent reporting of unexpected or counterintuitive results that, we believe, stem from the inherent features of a design that lends itself to recalibration. On a less reassuring note, however, we submit that MMs in PA could further unleash their potential, if scholars engaged more systematically in the combination of the results obtained through the separate research processes and if they did not relegate to the background some of the evidence collected, especially that resulting from interviews.
This leads us to our second contribution, that is, a framework that can offer guidance on the analysis and design of MMs, while guaranteeing clarity and parsimony. We now introduce and define the components of our framework based on themes distilled from the specialized literature on MMs.

A Framework for Mixed Methods

Starting with a clear definition of what we mean by methods and which methods can be mixed, we refer to "all of those tools and techniques that are used to carry out research" (Yanow and Schwartz-Shea 2014, 401) that are informed by specific methodological premises. Ospina, Esteve, and Lee (2018) have recently provided a nonexhaustive yet thorough synthesis of different methods employed in PA research that seems useful to consider for definitional purposes. According to their classification, quantitative methods and a deductive approach are typically aligned with positivist/post-positivist presuppositions and include experimental and quasi-experimental designs, econometric models, survey methods, and quantitative case studies. Qualitative methods and inductive or abductive approaches are normally aligned with interpretivist presuppositions (for a discussion of different epistemological stances on qualitative inquiry, see Nowell and Albrecht (forthcoming), this issue) and "tend to favor methods that focus on language and representation, such as narrative inquiry, qualitative case studies, ethnographic, phenomenological, hermeneutical or historical analysis, and participatory research" (2017, 2).3 In this article, we focus on designs that include at least one quantitative and one qualitative method. We acknowledge, however, that scholars have used mixed methods that rely on a single paradigm, be it qualitative (e.g., a study mixing an ethnography and a qualitative case study) or quantitative (e.g., a randomized control trial and a survey). This choice has been criticized for leaving unattended the essential shortcomings intrinsic in each paradigm (Denzin 1978), such as the inability to explain an observed causal relation or the inability to generalize the findings.4 Consequently, designs mixing at least one quantitative and one qualitative method are not only far more common but also considered by several scholars as authentic MMs (Creswell 2009; Greene 2007; Johnson, Onwuegbuzie, and Turner 2007; Patton 2002; Ricucci 2010). The purposefulness derived from pragmatism requires clarifying not only what MMs are but also why and how they are employed. We do so by offering a framework that presents the two predominant rationales for selecting a MM design, illustrates the options available when sequencing methods, and draws attention to the importance of connecting methods, hence results. For clarity, these components are presented in separate sections, although they are often intertwined in the actual design of MMs.

Selecting a MMs Design

Perhaps the most frequent reason to select a MM design is triangulation (Hammersley 2008; Ricucci 2010).5 It could be argued that triangulation offers a blanket justification for preferring a mixed over a single method, and one we may find applied to data, methods, theory, and even researchers (Denzin 1978; Johnson, Onwuegbuzie, and Turner 2007). However, we contend that it is useful to unpack further what can be considered the underlying principle of any MM (Johnson, Onwuegbuzie, and Turner 2007).
Several scholars have accounted for the rationale for a MM design (Creswell and Plano Clark 2018; Denzin 1978; Greene, Caracelli, and Graham 1989; Jick 1979; Rossman and Wilson 1985; Shannon-Baker 2015; Tashakkori and Teddlie 2010). Summarizing their findings, we submit that MMs are chosen over single-method designs with the aim to (i) corroborate or complement findings, thus increasing the validity of results, and (ii) gain a multilevel understanding of the phenomenon under study. Rather than being mutually exclusive, these two reasons specify the predominant rationale for preferring a MM over a single-method design. After specifying the "what" and "why" of MMs, we now focus on the "how"; that is to say, we ask how to sequence and how to connect methods, respectively.

Sequencing

We define sequencing as the logical and chronological combination of methods in a design. The vast repertoire of possible combinations includes some that have been defined as "exercises in logically possible types of integration, rather than being built up out of examples" (Bryman 2007, 100). This caveat leads us to employ a parsimonious classification of MM concatenations, that is, parallel or sequential (for an in-depth analysis of "core mixed methods designs" see also Creswell and Plano Clark 2018, 51–99). The parallel design is employed to triangulate results that serve the same research purpose, although they are obtained through different methods. Many of the labels coined for this design, such as parallel, simultaneous, or concurrent (Morse 1991; Tashakkori and Teddlie 2010), imply that the two parts of the research are to be conducted separately from one another. Their findings are analyzed independently and, last, compared in order to gain a better understanding of the problem, as well as for corroboration and validation purposes. A sequential design instead includes two or more consecutive phases: a quanti–qualitative (explanatory) design or a quali–quantitative (exploratory) design. In the quanti–qualitative (explanatory) design, the second phase is aimed at explaining and refining the results obtained in the first phase. The results of the first phase may be puzzling because they differ from expectations. Or the researchers simply may feel the data are not revealing enough and employ qualitative methods to understand the causal mechanisms assumed to underlie correlational or causal quantitative findings. Furthermore, the qualitative phase may contribute to a refinement of the findings, by shedding light on the inner views of the actors (for a discussion and illustration of qualitative case studies as a way of addressing econometric challenges see Honig (forthcoming), this issue). The decision to embark on a second phase may be anticipated or may emerge in the course of the project (for a focus on fixed versus emergent MM designs see also Creswell and Plano Clark 2018 and Morse 1991). Two variants of quanti–qualitative designs deserve separate discussion. One is characterized by a comparative quantitative analysis followed by in-depth case studies that build on the results of the first phase. "Nested analysis," a term widely employed in comparative research in political science, designates the use of small-N case studies nested in large-N comparative analysis (Lieberman 2005). In the other variant, the contribution of the quantitative phase is to support the design of the second, qualitative, phase.
This variant is typically employed to ensure a more rigorous selection of case studies (Creswell and Plano Clark 2018).6 In the quali–quantitative (exploratory) design, the second phase is used to "augment" the findings of the first phase (Hanson et al. 2005, 229). In other words, the first phase generates the main innovative contribution of the study, while the second is meant to test the findings, thus generalizing or refining them (Bergman 2008; Morse and Niehaus 2009). A hybrid design type could be included in this taxonomy to describe combinations of two or more of the other design types, such as in the case of a qualitative–quantitative–qualitative method sequence (Schoonenboom and Johnson 2017).

Connecting Methods

Whether parallel or sequential, every MM design demands integration through connecting points between the research components (Tashakkori and Creswell 2007; Teddlie and Tashakkori 2003). Confirming this view, several scholars have referred to interwovenness as one of the most distinctive features of MMs (Biesenbender and Héritier 2014; Johnson, Onwuegbuzie, and Turner 2007; Morse 2010). Despite its centrality in MMs, this stage too often tends to be overlooked. Prima facie, this may be the result of a broad tendency toward inadequate reporting in MM designs. However, we argue that it also signals the challenges of integrating both during the process and ex post. It is possible to identify specific connecting points that are recurrently, albeit not uniquely, associated with a MM design. In the case of a parallel design, integration requires assembly and comparison of the results obtained from the two different methods. Considering that the trademark of a parallel design is the independent conduct of the research components, connecting points are fundamental. The most obvious is the empirical context common to the two research processes, albeit the research question(s), together with the combination and analysis of results, guarantee a tighter design. In a quanti–qualitative (explanatory) design, the key stage for integration should be between the quantitative analysis of the first stage and the qualitative analysis of the second or, more specifically, the connection between the results of the first phase and the data collection of the second. Two main methodological devices enable integration of the sequences. First, information from the first sample, which typically derives from a probability sampling procedure, is often required to draw the second, purposeful sample; that is, the selection of units of analysis is instrumental to address the specific research questions (Kemper, Stringfield, and Teddlie 2003). Unlike representative sampling, purposeful sampling calls for the deliberate selection of individuals, organizations, or events specifically on the basis of the information they have to offer (Teddlie and Yu 2007). Second, following an "interview protocol development" (Ivankova, Creswell, and Stick 2006, 13), researchers can ground the research questions in the results from the first, quantitative phase, searching for confirmation, clarification, richer accounts, and inner perspective. Integration in the quali–quantitative (exploratory) design is a delicate stage. It requires embedding the results of the first, qualitative phase in the theory testing enacted through the second, quantitative one. A first connecting device, i.e.
sampling, proceeds symmetrically to the quanti–qualitative design, with purposeful sampling in the first phase followed by representative sampling in the second one. Creswell and Plano Clark suggest that "ideally both samples should be from the same population" (2018, 89), albeit the number of participants in the first phase is frequently much smaller than in the second one. Furthermore, the results of the exploration conducted through the qualitative phase ought to be turned into inputs for the second, quantitative phase. The connecting device here is often represented by emerging themes, then translated into hypotheses or survey questions for the theory-testing phase. After presenting the components of our framework based on themes distilled from the specialized literature on MMs, we now account for how we conducted the systematic literature review.

Collecting and Analyzing the Primary Studies

Considering that one of the main goals of the paper was to map the use of mixed methods in PA scholarship, we decided to employ a systematic review (Petticrew and Roberts 2006) driven by the following questions: To what extent have MMs been used by PA scholars? Why and how do they employ MMs? We performed and reported our review following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA, see Supplementary Table A1), following Liberati et al. (2009). We engaged in a round of consultation both with members of editorial boards of PA journals and with experts in MMs (six experts in total). We submitted to them the key choices entailed by the sample selection, asking for their validation and advice. We consulted two of these scholars three times. We focused our analysis on journal articles, as they increasingly represent the principal research outlet and capture the main contemporary trends in academic studies (Ospina, Esteve, and Lee 2018). Also, the peer review process that is integral to the acceptance and publication of journal articles provides a quality control mechanism, which is not the case for some conference papers and books (Bryman 2006). As a further rationale, and one pertinent to the purpose of this paper, employing a MM design for a journal article differs remarkably from its use in a book-length project. Book authors do not face the same pressure to condense distinct methodological and empirical sections and may define empirical side-projects as mixed methods, often in the form of chapters coupled with each other more loosely than would be conventionally accepted in a paper. We next selected seven journals to include in the systematic review. Based on recent methodological reviews in the public administration literature (Ospina, Esteve, and Lee 2018; Groeneveld et al. 2015), we included six top-ranked journals from the public administration category of the most recent ISI Journal Citation Reports—Social Science Edition (© Thomson Reuters 2017): Governance, International Public Management Journal (IPMJ), Journal of Public Administration Research and Theory (JPART), Public Administration (PA), Public Administration Review (PAR), and Public Management Review (PMR). We complemented these journals by adding the American Review of Public Administration (ARPA), reflecting the journal's rising impact over the last decade. We conducted a preliminary search for articles in our seven journals using not only "mix* method*" but also "multiple method*" and "multimethod*" as keywords to be found anywhere in the article, with no time restriction.
These search criteria ensured the inclusion of all the articles explicitly adopting a MM approach. While aware that some scholars have drawn a distinction between mixed methods and multiple methods (Biesenbender and Héritier 2014), we decided to cast a wider net here. Both the preliminary results of our search and the consultation with the experts pointed to the fact that these terms are often used interchangeably. Acknowledging that scholars may have employed more than one method despite not explicitly referring to mixed or multiple methods, we also engaged in a manual screening of all articles in our seven journals between 2011 and 2017. We decided to limit the timeframe in order to guarantee the feasibility of an accurate manual search, and we selected this specific span based on our preliminary keyword search, which revealed a steep increase in articles employing MMs beginning in 2011 (see Supplementary Figure A1). Our focus on MM as a specific research strategy in PA led us to identify three criteria for an article to be eligible for our systematic review: (i) it must be an empirical study, (ii) it must adopt at least one quantitative and one qualitative method, and (iii) it must have been published between 2011 and 2017. Although aware that scholars can employ many combinations of methods, we decided to be consistent with the mainstream definition and application (Johnson, Onwuegbuzie, and Turner 2007); thus, we focused on MM research strategies combining at least one quantitative and one qualitative method. More importantly, we found this significantly enhanced the "objectivity" and replicability of the systematic review process (Liberati et al. 2009). Specifically, for an empirical inquiry to be considered quantitative, we looked for some form of statistical inference. For example, the mere presentation of frequencies would not be considered a quantitative method for our manual search. At the same time, we regarded as qualitative methods all empirical inquiries reporting "data in the form of words, that is, language in the form of extended text" (Miles and Huberman 1994, 9). Figure 1 synthesizes the PRISMA flowchart, i.e., the search and selection process of articles published in our seven journals between 2011 and 2017, leading us to include 104 primary studies in the systematic review.

Figure 1. PRISMA Flowchart

This process followed two different paths. In one, we ran an easily replicable keyword search process, in which, as a first step, we read the titles and abstracts of the 103 resulting publications. At this stage, we were able to exclude 32 records among book reviews, theoretical articles, and literature reviews that did not actually adopt a MM research strategy. We then carefully read the methods section of the remaining 71 empirical articles and dropped from our review those articles adopting a single-method strategy.7 After reading each article, we excluded articles employing Q-methodology (e.g., Palmer 2013) and one combining theory with qualitative analysis (i.e., Ferlie and McGivern 2013), both characterized by the authors as mixed method studies.
Also, upon consultation with the experts, we concluded that the standard Q-methodology template for selecting and mixing methods leaves little if any room for the discretionary choices of selecting, sequencing, and connecting that we are presenting here, whereas a study employing theory and qualitative research as two distinct components of the design is not comparable to one combining two different empirical methods. Moreover, we excluded six articles adopting combinations of two quantitative methods or two qualitative ones (Donahue and O’Leary 2011; Kaehne 2013; Lawton and Macaulay 2017; Morrell and Currie 2015; Musso et al. 2011; O’Brien et al. 2017). Finally, we double-checked the articles selected to determine if, consistent with our expectations, "mixed methods" and "multiple methods" were employed interchangeably and confirmed their inclusion. Thirty-four primary studies were selected during this phase of the search process. In the second path, we screened all published articles to manually select, according to the criteria described above, those adopting a MM even though not explicitly referring to it as such, ending up with 70 additional primary studies. For each study included in the review, in addition to author(s), publication year, title, and journal, we extracted information based on the following categories: predominant rationale for adopting a MM design, actual MM design, methods employed, their sequence, and connecting points. Nonetheless, at times we had to infer key information such as the predominant rationale for, or type of, MM design; for this task, each author performed an independent analysis of the articles on the basis of the identified categories. We then exchanged views and reached consensus on those articles whose methodological sections were considered unclear (six articles) or lacked sufficient information to be coded on the basis of our categories (nine articles).

Mixed Methods and Public Administration Scholarship

Our final sample comprised 104 primary studies combining at least one quantitative and one qualitative method; 34 (33%) explicitly referred to the use of MMs, while 70 (67%) adopted this methodology without referring to it. Over half of our primary studies (59 out of 104; 57%) were published in three journals: PAR, JPART, and PA, with 21 (20%), 20 (19%), and 18 (17%) articles, respectively. We found 13 out of 104 articles published in IPMJ (13%), 12 in Governance (12%), 12 in ARPA (12%), and 8 in PMR (8%). Interestingly, the percentage of explicit primary studies within each journal varied from 43% (9 out of 21) and 40% (8 out of 20) for PAR and JPART to 17% (2 out of 12) and 23% (3 out of 13) for Governance and IPMJ, respectively. Qualitative analysis of interviews was the method most frequently combined with another (78 out of 104 studies; 75%), especially with quantitative analysis of survey data (45 out of 104 studies; 43%) and with quantitative analysis of archival data (34 out of 104 studies; 33%). Less frequent combinations included qualitative and quantitative analysis of interviews (Mitchell 2014), qualitative and quantitative analysis of archival data (Vogel 2012), quantitative analysis of archival data mixed with case studies (Di Mascio and Natalini 2013), quantitative analysis of survey data mixed with case studies (Andrews, Cowell, and Downe 2011), and quantitative and qualitative analysis of data from the same survey, which in this case included both closed- and open-ended questions (Tummers, Steijn, and Bekkers 2012).
Selecting Methods

We found that PA studies selected MM designs in a manner consistent with the rationales identified by the methodological literature. A first rationale is associated with the attempt to secure more robust results and to enhance their validity, confirming "the importance of triangulation from multiple methods […] as a validity check" (Freeman and Peck 2007, 925). In other words, this rationale is associated with a research orientation keen on uncovering possible contradictions in order to corroborate results and interpretations. Illustrative of how MMs can be employed to validate results was a study that examined network implementation capacity and developed an innovative framework focused on three dimensions. Wang, Chen, and Berman (2016) conducted a validity check by randomly selecting interviewees among the respondents to the previous survey: "[they were] asked to provide examples, cases, and documents as evidence of their claims, thereby providing an assessment of the reliability of survey results." MMs can also be employed to check construct validity. In a study on the effect of administrative burden on the bureaucratic perception of policies, the authors employed a MM approach in which the qualitative component "provides support for the validity of the concept of administrative burden," whereas the quantitative component models the potential implications for bureaucratic policy preferences (Burden et al. 2012). Validation through MMs seems especially sensible in studies venturing into areas that are substantially unexplored. For example, in their study on how residents of low-income communities perceive the role of nonprofit organizations in representing their interests, the authors stated they "chose a mixed-methods approach due to the lack of prior research in this area" (Mosley and Grogan 2012, 846). Second, studies employ MMs in order to secure a multilevel understanding of a phenomenon or of a research problem. For instance, a multidimensional framework for assessing the impact of a technological innovation on an organization's performance as perceived by different stakeholders prompted the authors to mix the quantitative analysis of a survey with the qualitative analysis of open-ended questions and interviews (Cucciniello and Nasi 2014). Moreover, scholars may engage in multidimensional studies aimed at establishing a clear pattern linking variables, while also exploring the underlying mechanisms of this pattern. Koski et al. (2018) were motivated to choose MMs in their study on representation in collaborative governance to understand both general trends and the factors that inform these trends. It should be noted that 20 out of 104 studies (19%) did not specify any rationale for the adoption of a MM design. Unsurprisingly, this was more common among the studies that we found through the manual search and characterized as implicit MMs (19 out of 70 studies; 27%) than among those explicitly referring to MMs (1 out of 34 studies; 3%). After discussing why PA scholars selected MMs, we now focus on how they did so. Table 1 provides a summary of the articles according to sequencing and connecting categories. In order to maximize the transparency of the process, we have also included a synthesis of our findings for each article (Supplementary Table A2).
Table 1. Summary Results

Studies included: 104 (100%)

Sequencing
  Parallel: 32 (31%)
  Sequential: 68 (65%)
    Explanatory: 39 (38%)
    Exploratory: 29 (28%)
  Hybrid: 4 (4%)

Connecting
  Empirical context: 104 (100%)
  Findings combination: 67 (64%)
  Purposive sample: 43 (42%)
  Themes (survey): 13 (13%)
  Same sample: 12 (12%)
  Case selection: 11 (11%)
  Themes (hypotheses): 10 (10%)
  Interviews protocol: 5 (5%)
  Themes (variables): 5 (5%)
  Same source of data: 3 (3%)

Sequencing Methods

Our systematic review showed that the parallel design was selected by 32 out of 104 studies (31%), while 72 out of 104 studies (69%) adopted the sequential design. We found a higher proportion of sequential designs among the studies explicitly adopting MMs (27 out of 34 studies; 79%) compared to those implicitly adopting MMs (45 out of 70 studies; 64%). In a parallel design, the results obtained through different methods conducted independently but not necessarily concomitantly are compared or integrated, depending on the rationale of the study, at a stage that represents the point of convergence. Among the 32 primary studies adopting a parallel design, qualitative analysis of interviews was most often combined with a quantitative analysis of survey data (12 out of 32 studies; 38%), followed by quantitative analysis of archival data (7 out of 32 studies; 22%) and of the same interviews (2 out of 32 studies; 6%). Four primary studies (13%) used archival data as the main source of data for both qualitative and quantitative analyses. In four other cases (13%), the authors administered a survey with both closed- and open-ended questions, which allowed them to perform quantitative and qualitative analyses. Qualitative analyses of data collected through ethnography (Hyun, Post, and Ray 2018), a case study (Thom 2013), and observations (Shea 2011) were among the other methods combined with the quantitative analysis of archival data. The combination of quantitative and qualitative analysis was used to enhance understanding or to elicit additional findings in 12 of the 32 primary studies adopting a parallel design (38%), whereas it served to validate the findings generated through a single method in 8 studies (25%). In all the other cases, the rationale for mixing methods was not explicit.
Different methods can be employed independently and their findings jointly presented to integrate perspectives on a subject, often within a case study. This instance is exemplified by a study on the decision processes of line officers responsible for fire and fuel management projects, a field characterized by multiple layers of institutions. To examine how and why on-the-ground decisions and outcomes differ, Reiners (2012) conducted a case study combining the statistical analysis of a survey and the qualitative analysis of in-depth interviews. Moreover, findings obtained through different methods can be compared as a validity check. In one study, the authors investigated the commitment to public values of special district managers and validated their findings by comparing the quantitative analysis of a survey and the thematic analysis of interviews (Berman and West 2012). Similarly, in a study investigating the factors driving the enactment of pension funding, the author combined a logistic regression with two, albeit brief, case studies, concluding that "both quantitative and qualitative robustness checks largely reinforce these findings" (Thom 2013, 480). Different methods may, of course, offer findings that are not mutually validated, as reported in 4 of our 32 studies (13%). For example, to explore a sensitive issue such as the implications of nonprofit leaders' government ties for nonprofit operations in China, the authors combined the statistical analysis of a survey and the qualitative analysis of interviews, finding contrasting results that were then discussed in the concluding section (Zhan and Tang 2016). Turning to the sequential design, our review indicated that more than half of the studies using a sequential design employed a quanti–qualitative (explanatory) design (39 out of 72 studies; 54%), more than one-third used a quali–quantitative (exploratory) design (29 out of 72; 40%), and in a few cases (4 out of 72 studies; 6%) both explanatory and exploratory sequential (hybrid) designs were adopted. Specifically, in quanti–qualitative (explanatory) designs, the second phase is aimed at explaining and refining the results obtained in the first one. When adopting this design, PA scholars often combined quantitative analysis of archival data (15 out of 39 studies; 38%) or survey data (12 out of 39 studies; 31%) with qualitative analysis of interviews. Case studies followed quantitative analysis of either archival or survey data in 6 articles (15%), while other combinations included quantitative and qualitative analysis of the data collected through the same survey, interviews, or archival sources (6 out of 39 studies; 15%). Researchers employing this design are usually driven by the desire to delve deeper into the findings. Illustrative of a classical sequence and division of labor between the two phases, Soss, Fording, and Schram, in their "Empirical critique of performance systems and New Public Management" (2011, 204), used administrative data to establish a pattern linking performance pressures on providers to sanctions imposed on clients. They then performed field research on case manager discretion in order to explain this relationship and clarify its underlying mechanisms.
In the same vein, a study on representative bureaucracy investigating the effects of teachers' representation on teen pregnancy rates combined linear regression analysis with interviews in order to substantiate the hypotheses, as well as to "broaden and deepen the understanding of the possible causal mechanisms that underpin them" (Atkins and Wilkins 2013, 777). The results of the first phase may differ from expectations, prompting scholars to engage in a second research process, as reported in 6 of our 39 studies (15%). In an article on the difference between for-profit and nonprofit providers in the failure of welfare services, the author conducted case studies in order to "explore the source of the perplexing outcomes" (Suda 2011, 34) resulting from the analysis of variance and covariance. Similarly, in a study on conflicts of interest faced by international civil servants, the authors explained that the findings of the quantitative survey "hardly confirmed" their initial hypothesis. Therefore, puzzled by the results, they "further explored this issue through semi-structured interviews with a sub-set of the survey respondents" (Mele, Anderfuhren-Biget, and Varone 2016, 492). Or, simply, the researchers may feel the data are not revealing enough and employ qualitative methods to go "beyond simple linear understandings and explore equifinality, causal asymmetry, and complex interactions among explanatory variables" (Wang 2016, 376). Moreover, the second phase may contribute to a refinement of the findings, by shedding light on the inner views of the actors. In a study on the process changes that may reduce individual administrative burden, Herd et al. (2013) combined administrative panel data with interviews and documentary analysis that incorporated the views of policy makers and stakeholders. Considering that roughly two-thirds of the studies adopting a quanti–qualitative design employed qualitative analysis of interviews in the second phase (28 out of 39 studies; 72%), it is important to highlight that the actual use of the evidence collected through interviews varied remarkably. The studies adopting this design that explicitly referred to MMs (14) drew quite extensively on the interviews. For instance, in the section on "Qualitative Field Research in Communities" of their study on forcible stops in New York City, Eterno, Barrow, and Silverman (2017, 188) convincingly organized the findings based on the interviews that followed their multivariate regression. Instead, among the studies that did not refer explicitly to MMs and that we categorized as quanti–qualitative (explanatory) designs (14), a few made only cursory use of the evidence collected through interviews. By way of illustration, in one study exploring how networks differ between municipalities, and between politicians and bureaucrats, the authors carried out an ambitious project. After performing an analysis of variance test to identify differences in tie patterns, they conducted 301 interviews. Despite this impressive empirical effort, the evidence collected through the interviews remained underemployed in the analysis (Alexander, Lewis, and Considine 2011). We also found a few instances (6 out of 39 studies; 15%) of two sub-types of MM sequential explanatory designs.
The use of small-N case studies nested in large-N comparative analyses was instantiated by a study on urban school districts in New York City (Destler 2017), where the author first quantitatively assessed the relation between organizational climate and performance management values, and then qualitatively investigated the underlying mechanisms, finding that trust among colleagues and supervisory support play a significant role in shaping teachers' willingness to embrace performance management systems. The other sub-type, the so-called case-selection variant, was exemplified by a study on the process and outcomes of performance management introduction at the local level, where the authors combined a large-N analysis, particularly suited for selecting diverse cases, with in-depth case studies "that are seen as being superior for identification of causal processes linking contexts and outcomes" (Di Mascio and Natalini 2013, 147). Moving to the PA studies that adopted a quali–quantitative (exploratory) design (29 out of 104; 28%), qualitative analysis of interviews was often followed by the quantitative analysis of survey data (17 out of 29 studies; 59%), archival data (4 out of 29 studies; 14%), and the same interviews (2 out of 29 studies; 7%). Other studies adopting this design relied on qualitative analysis of data collected from focus groups or through case studies (2 out of 29 studies; 7% each), or through open-ended questions or archival data (1 out of 29 studies; 3% each), and combined these data with quantitative analysis of archival or survey data. PA scholars selecting this design effectively employed the qualitative phase to generate either hypotheses or questions that were then tested in a survey. They often did so when facing theoretical alternatives. As an illustration, a study on the effects of reputational threats on the enforcement pace of a public agency started with a preliminary qualitative analysis to identify the specific aspect of the agency's reputation that was challenged. This enabled the researchers to identify an appropriate testable hypothesis for the quantitative analysis, selecting from two theoretical alternatives (Maor and Sulitzeanu-Kenan 2013). Exploratory studies based on preliminary interviews are also useful when the analysis of a research project is at the individual level. A study that extended the empirical investigation of bureaucratic burden by including its individual impacts may serve as an illustration. Recognizing the importance of understanding the personal experiences of beneficiaries, the author engaged in a qualitative phase based on interviews that informed the design of the surveys (Heinrich 2015). Sometimes, the qualitative phase can inform the generation of both hypotheses and survey questions, as in a study on the use of incentives to hold contractors accountable for their performance. Since the objective was "to glean insights into the factors that influence the decisions public managers make" (Girth 2012, 321), a qualitative phase based on the analysis of contract documentation as well as on interviews with contract managers shaped the hypotheses and also helped in refining the survey questions. While most MM designs fell into either the quanti–qualitative (explanatory) or the quali–quantitative (exploratory) category, some researchers engaged fully in both designs, thus executing three distinct, yet concatenated, research phases. Four of our 72 sequential studies (6%) chose this hybrid design.
A typical case was a study starting with the qualitative analysis of interviews that informed the survey questions, whose quantitative analysis was then followed by another, distinct, qualitative phase designed to understand the survey results in greater depth (see Gerlak and Heikkila 2011).

Connecting Methods

As discussed earlier, each MM design is characterized by a set of typical connecting points or devices. Far from being a mechanical exercise, however, connecting remains one of the most distinctive challenges of mixed versus single methods. Overall, our analysis found several convincing examples that we employ below to show how methods can be connected. At the same time, we found a significant number of studies that did not adequately integrate the components of a research project or their respective findings. Starting with the parallel design, our systematic review showed that all of the 32 studies adopting a parallel design had the empirical context as a connecting device. There did not seem to be a specific type of context that called for a MM design, and contexts in our sample varied remarkably, ranging from a single organization, such as a Korean university (Kim and Bak 2016), to an entire sector, such as the nonprofit sector in the San Francisco Bay Area (Hwang and Bromley 2015); and from a category of civil servants, such as Swedish senior local public managers (Lundin, Öberg, and Josefsson 2015), to a comparative cross-country setting, such as the exploration of regulatory responses to the financial crisis in the OECD countries (Young and Park 2013). It should be said, however, that relying on the same empirical context as the only connecting point does not ensure a tight parallel design. We argue that a more compelling connecting point that should characterize all parallel MM studies is the combination, by comparison or integration, of the findings obtained through separate research processes.8 According to our analysis, 20 of the 32 studies adopting a parallel design (63%) succeeded in comparing, at least partially, quantitative and qualitative findings. This result does not change much between explicit (4 out of 7 studies; 57%) and implicit (16 out of 25 studies; 64%) MM parallel studies. The lack of an adequate combination represents a missed opportunity, especially after investing in a significant empirical effort. By way of illustration, Salge and Vera (2012) examined whether the payoff from engaging in innovation-generating activities is contingent on specific organizational conditions. They pursued a demanding study mixing estimation based on dynamic panel data from 153 hospitals with 72 semi-structured interviews to validate their proposed measures, a rationale fully compatible with the parallel design. However, they did not show how validation occurred. In the quanti–qualitative (explanatory) design, PA studies employed sampling as the main connecting point between the two phases (22 out of 39 studies; 56%), in a sequence consistent with the recommendations of the methodological literature, that is to say, representative sampling followed by purposeful sampling. This sequence was effectively exemplified by a study investigating the impact of institutional factors on the relationship between grant funding and local debt in England and Germany.
After analyzing a panel dataset in order to determine this relationship, the author conducted semi-structured interviews with intergovernmental actors in the two systems, "selected based upon their direct contribution to, and knowledge of, intergovernmental financial interactions" (De Widt 2016, 678).9 For the most part, PA studies employing a quanti–qualitative sequence seemed to execute adequately the connection between results (35 out of 39 studies; 90%), thus confirming the general notion that, owing to a high level of codification, this design is relatively less problematic to implement than the others (Teddlie and Tashakkori 2010). Turning to the quali–quantitative (exploratory) design, we found the main connecting point to be the identification of themes emerging from the qualitative analysis that were then incorporated in testable hypotheses or translated into survey questions and variables for the second phase. In fact, all but two quali–quantitative designs (27 out of 29 studies; 93%) included in our review employed themes derived from the first, qualitative phase to elaborate either the survey instrument (i.e., the questions) used to collect the quantitative data (14 out of 29 studies; 48%), the hypotheses that could be tested in the second phase of the research (10 out of 29 studies; 34%), or the variables that could be used in the quantitative analysis (5 out of 29 studies; 17%).10 In general, we found a clear connection between the qualitative results of the first phase and the hypotheses of the second, for example, by embedding verbatim comments from the interviews in the hypothesis formulation section. A study on the structural embeddedness of political executives as an antecedent of policy isomorphism in municipalities offers a good illustration. To derive hypotheses about the importance of mayoral network positions, the author employed qualitative data obtained from semi-structured interviews with mayors as well as media representatives, including some quotations that enabled them to capture the inner view of respondents, thus making the section more vivid (Villadsen 2011, 575–582). However, we found that studies where the qualitative interviews of the first phase informed the survey questions rarely showed how the survey questions were connected to the qualitative phase. For example, in an article on collaborative policy-making outcomes, the authors merely stated that they constructed the online survey with the support of interviews, which helped "researchers identify the key dimensions of concepts explored through survey questions" (Siddiki and Goel 2017, 259), albeit they provided no evidence yielded by the interviews. We argue that such limited engagement with the qualitative findings may be problematic, especially considering that the generalist methodological literature portrays this design as one where the main theoretical thrust lies in the qualitative phase.

Discussion and Conclusion

Our results show that PA scholarship is increasingly receptive to the use of MMs. Owing to their unique adaptability for research purposes, MMs have enriched our methodological ammunition and are fruitfully employed in the constant search for a better understanding of, and public solutions to, complex phenomena. Taken as a whole, the studies in our sample were driven by a rationale fully compatible with the choice of MMs, whether a search for validity and corroboration or for a multilevel perspective on the unit of analysis.
The variety of combinations signals the field's awareness of available options and the ability to operationalize them. PA scholars are especially interested in selecting MM designs under specific conditions. These include, for example, subject areas that are unexplored, the need to capture various perspectives at different levels of analysis, and sensitive subjects where multiple access to data is fundamental. Our analysis reveals that the implementation and reporting of specific components of MMs leave room for improvement, principally along the lines of explaining better the choices entailed by this design and fully employing the results obtained through the separate methods. First, the reasons for choosing a MM study should be explained more fully. The lack of adequate reporting may be particularly problematic, for the pragmatist roots of this design not only indicate a functionalist perspective, that is, "what works," but also require clarification of why and how the design is employed. In this respect, we found that most PA studies explained the rationale for adopting a MM design. They also tended to describe in detail the operationalization of the distinct research methods. However, choices pertaining to the MM design as a whole, such as how the sequence unfolded, how the phases were connected, and how the findings were integrated, were less frequently explained. Relatedly, we found encouraging although still limited signs of transparent reporting in the case of unexpected and even disappointing results. This usually occurs when findings obtained through two or more independent research processes do not confirm each other, thus prompting authors to search for alternative explanations. Or when findings from a quantitative analysis do not confirm the hypotheses, thus leading to a second, qualitative phase. We argue that such honest reporting builds on a crucial asset of MMs, a design uniquely suitable for turning the threat of unforeseen results into an opportunity to refine causal arguments. Reporting iterations and recalibrations in MMs could even mitigate the "language of deductive proceduralism" (Yom 2015, 2) that, by mimicking successful hypothesis testing, easily covers practices such as selective reporting and ignoring conflicting results. Second and more important, we call for a tighter combination of the findings obtained through separate research processes, especially when the rationale for MMs is the search for validity. This is what differentiates mixing methods from juxtaposing them. We also point to the need for a less cursory use of the qualitative findings derived from interviews, both when they follow a quantitative analysis and when they are meant to generate survey questions. We remain persuaded that, these limitations notwithstanding, employing MMs in PA is likely to enhance the quality of a study by, at the very least, increasing the familiarity of researchers with a specific empirical context and making the interpretation of results more accurate and nuanced. At the same time, we argue that attending to these limitations may help scholars make their contributions more convincing, thus justifying the demanding empirical efforts often required by this design.

Acknowledgements

We would like to thank the symposium editors and the three anonymous reviewers for their constructive comments. We are also grateful to Nicola Bellé, Tony Bertelli, Giovanni Fattore, Sonia Ospina, Vicki Plano Clark and Sandra van Thiel for their critical remarks and helpful suggestions.
References11 Alexander , Damon , Jenny M. Lewis , and Mark Considine . 2011 . How politicians and bureaucrats network: A comparison across governments . Public Administration 89 : 1274 – 1292 . Google Scholar Crossref Search ADS Andrews , Rhys , Richard Cowell , and James Downe . 2011 . Promoting civic culture by supporting citizenship: What difference can local government make ? Public Administration 89 : 595 – 610 . Google Scholar Crossref Search ADS Atkins , Danielle N. , and Vicky M. Wilkins . 2013 . Going beyond reading, writing, and arithmetic: The effects of teacher representation on teen pregnancy rates . Journal of Public Administration Research and Theory 23 : 771 – 790 . Google Scholar Crossref Search ADS Bergman , Manfred Max . 2008 . Advances in mixed methods research: Theories and applications . London, UK : Sage . Berman , Evan M. , and Jonathan P. West . 2012 . Public values in special districts: A survey of managerial commitment . Public Administration Review 72 : 43 – 54 . Google Scholar Crossref Search ADS Biesenbender , Sophie , and Adrienne Héritier . 2014 . Mixed-methods designs in comparative public policy research: the dismantling of pension policies . In Comparative Policy Studies , 237 – 264 . London, UK : Palgrave Macmillan . Nowell , Branda , and Kate Albrecht . Forthcoming. A Reviewer’s guide to qualitative Rigor . Journal of Public Administration Research and Theory . Bryman , Alan . 2006 . Integrating quantitative and qualitative research: How is it done ? Qualitative Research 6 : 97 – 113 . Google Scholar Crossref Search ADS ———. 2007 . Barriers to integrating quantitative and qualitative research . Journal of Mixed Methods Research 1 : 8 – 22 . Crossref Search ADS Burden , Barry C. , David T. Canon , Kenneth R. Mayer , and Donald P. Moynihan . 2012 . The effect of administrative burden on bureaucratic perception of policies: Evidence from election administration . Public Administration Review 72 : 741 – 751 . Google Scholar Crossref Search ADS Cameron , Roslyn . 2009 . A sequential mixed model research design: Design, analytical and display issues . International Journal of Multiple Research Approaches 3 : 140 – 152 . Google Scholar Crossref Search ADS Caracelli , Valerie J. , and Jennifer C. Greene . 1993 . Data analysis strategies for mixed-method evaluation designs . Educational Evaluation and Policy Analysis 15 : 195 – 207 . Google Scholar Crossref Search ADS Creswell , John W . 2009 . Mapping the field of mixed methods research . Journals of Mixed Methods Research 3 : 95 – 108 . Google Scholar Crossref Search ADS Creswell , John W. , and Vicky L. Plano Clark . 2018 . Designing and conducting mixed methods research , 3 rd ed. Thousand Oaks, CA : SAGE . Cucciniello , Maria , and Greta Nasi . 2014 . Evaluation of the impacts of innovation in the health care sector: A comparative analysis . Public Management Review 16 : 90 – 116 . Google Scholar Crossref Search ADS Datta , Lois E . 1994 . Paradigm wars: A basis for peaceful coexistence and beyond . New Directions for Evaluation 1994 : 53 – 70 . Google Scholar Crossref Search ADS De Widt , Dennis . 2016 . Top‐down and bottom‐up: Institutional effects on debt and grants at the English and German local level . Public Administration 94 : 664 – 684 . Google Scholar Crossref Search ADS Denscombe , Martyn . 2008 . Communities of practice: A research paradigm for the mixed methods approach . Journal of Mixed Methods Research 2 : 270 – 283 . Google Scholar Crossref Search ADS Denzin , Norman K . 1978 . 
Destler, Katharine N. 2017. A matter of trust: Street level bureaucrats, organizational climate and performance management reform. Journal of Public Administration Research and Theory 27: 517–534.
Dewey, John. 1925. Experience and nature. Whitefish, MT: Kessinger.
Di Mascio, Fabrizio, and Alessandro Natalini. 2013. Context and mechanisms in administrative reform processes: Performance management within Italian local government. International Public Management Journal 16: 141–166.
Donahue, Amy K., and Rosemary O'Leary. 2011. Do shocks change organizations? The case of NASA. Journal of Public Administration Research and Theory 22: 395–425.
Eterno, John A., Christine S. Barrow, and Eli B. Silverman. 2017. Forcible stops: Police and citizens speak out. Public Administration Review 77: 181–192.
Feilzer, Y. Martina. 2010. Doing mixed methods research pragmatically: Implications for the rediscovery of pragmatism as a research paradigm. Journal of Mixed Methods Research 4: 6–16.
Ferlie, Ewan, and Gerry McGivern. 2013. Bringing Anglo-governmentality into public management scholarship: The case of evidence-based medicine in UK health care. Journal of Public Administration Research and Theory 24: 59–83.
Freeman, Tim, and Edward Peck. 2007. Performing governance: A partnership board dramaturgy. Public Administration 85: 907–929.
Gerlak, Andrea K., and Tanya Heikkila. 2011. Building a theory of learning in collaboratives: Evidence from the Everglades Restoration Program. Journal of Public Administration Research and Theory 21: 619–644.
Girth, Amanda M. 2012. A closer look at contract accountability: Exploring the determinants of sanctions for unsatisfactory contract performance. Journal of Public Administration Research and Theory 24: 317–348.
Greene, Jennifer C. 2007. Mixed methods in social inquiry, vol. 9. New York, NY: John Wiley & Sons.
Greene, Jennifer C., Valerie J. Caracelli, and Wendy F. Graham. 1989. Toward a conceptual framework for mixed-method evaluation designs. Educational Evaluation and Policy Analysis 11: 255–274.
Groeneveld, Sandra, Lars Tummers, Babette Bronkhorst, Tanachia Ashikali, and Sandra Van Thiel. 2015. Quantitative methods in public administration: Their use and development through time. International Public Management Journal 18: 61–86.
Hammersley, Martyn. 2008. Questioning qualitative inquiry: Critical essays. London, UK: Sage.
Hanson, William E., John W. Creswell, Vicki L. Plano Clark, Kelly S. Petska, and J. David Creswell. 2005. Mixed methods research designs in counseling psychology. Journal of Counseling Psychology 52: 224.
Haverland, Markus, and Dvora Yanow. 2012. A hitchhiker's guide to the public administration research universe: Surviving conversations on methodologies and methods. Public Administration Review 72: 401–408.
Heinrich, Carolyn J. 2015. The bite of administrative burden: A theoretical and empirical investigation. Journal of Public Administration Research and Theory 26: 403–420.
Herd, Pamela, Thomas DeLeire, Hope Harvey, and Donald P. Moynihan. 2013. Shifting administrative burden to the state: The case of Medicaid take-up. Public Administration Review 73 (s1): S69–S81.
Hesse-Biber, Sharlene N. 2010. Mixed methods research: Merging theory with practice. New York, NY: Guilford Press.
———. 2015. Mixed methods research: The "thing-ness" problem. Qualitative Health Research 25: 775–788.
Honig, Dan. Forthcoming. Case study design and analysis as a complementary empirical strategy to econometric analysis in the study of public agencies: Deploying mutually supportive mixed methods. Journal of Public Administration Research and Theory.
Hwang, Hokyu, and Patricia Bromley. 2015. Internal and external determinants of formal plans in the nonprofit sector. International Public Management Journal 18: 568–588.
Hyun, Christopher, Alison E. Post, and Isha Ray. 2018. Frontline worker compliance with transparency reforms: Barriers posed by family and financial responsibilities. Governance 31: 65–83.
Ivankova, Nataliya V., John W. Creswell, and Sheldon L. Stick. 2006. Using mixed-methods sequential explanatory design: From theory to practice. Field Methods 18: 3–20.
Jick, Todd D. 1979. Mixing qualitative and quantitative methods: Triangulation in action. Administrative Science Quarterly 24: 602–611.
Johnson, R. Burke, Anthony J. Onwuegbuzie, and Lisa A. Turner. 2007. Toward a definition of mixed methods research. Journal of Mixed Methods Research 1: 112–133.
Kaehne, Axel. 2013. Partnerships in local government: The case of transition support services for young people with learning disabilities. Public Management Review 15: 611–632.
Kemper, Elizabeth A., Sam Stringfield, and Charles Teddlie. 2003. Mixed methods sampling strategies in social science research. In Handbook of Mixed Methods in Social and Behavioral Research, ed. Abbas Tashakkori and Charles Teddlie, 273–296. Thousand Oaks, CA: Sage.
Kim, Do Han, and Hee-Je Bak. 2016. How do scientists respond to performance-based incentives? Evidence from South Korea. International Public Management Journal 19: 31–52.
Koski, Chris, Saba Siddiki, Abdul-Akeem Sadiq, and Julia Carboni. 2018. Representation in collaborative governance: A case study of a food policy council. The American Review of Public Administration 48: 359–373.
Lawton, Alan, and Michael Macaulay. 2017. From birth to death: The life of the Standards Board for England. Public Administration Review 77: 720–729.
Liberati, Alessandro, Douglas G. Altman, Jennifer Tetzlaff, Cynthia Mulrow, Peter C. Gøtzsche, John P. A. Ioannidis, Mike Clarke, P. J. Devereaux, Jos Kleijnen, and David Moher. 2009. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: Explanation and elaboration. PLoS Medicine 6: e1000100.
Lieberman, Evan S. 2005. Nested analysis as a mixed-method strategy for comparative research. American Political Science Review 99: 435–452.
Lundin, Martin, PerOla Öberg, and Cecilia Josefsson. 2015. Learning from success: Are successful governments role models? Public Administration 93: 733–752.
Maor, Moshe, and Raanan Sulitzeanu-Kenan. 2013. The effect of salient reputational threats on the pace of FDA enforcement. Governance 26: 31–61.
Maxcy, Spencer J. 2003. Pragmatic threads in mixed methods research in the social sciences: The search for multiple modes of inquiry and the end of the philosophy of formalism. In Handbook of Mixed Methods in Social and Behavioral Research, ed. Abbas Tashakkori and Charles Teddlie, 51–89. Thousand Oaks, CA: Sage.
Mele, Valentina, Simon Anderfuhren-Biget, and Frederic Varone. 2016. Conflicts of interest in international organizations: Evidence from two United Nations humanitarian agencies. Public Administration 94: 490–508.
Miles, Matthew B., and A. Michael Huberman. 1994. Qualitative data analysis: An expanded sourcebook, 2nd ed. Beverly Hills, CA: Sage.
Mitchell, George E. 2014. Collaborative propensities among transnational NGOs registered in the United States. The American Review of Public Administration 44: 575–599.
Morgan, David L. 2007. Paradigms lost and pragmatism regained: Methodological implications of combining qualitative and quantitative methods. Journal of Mixed Methods Research 1: 48–76.
———. 2013. Integrating qualitative and quantitative methods: A pragmatic approach. Thousand Oaks, CA: Sage.
Morrell, Kevin, and Graeme Currie. 2015. Impossible jobs or impossible tasks? Client volatility and frontline policing practice in urban riots. Public Administration Review 75: 264–275.
Morse, Janice M. 1991. Approaches to qualitative-quantitative methodological triangulation. Nursing Research 40: 120–123.
———. 2010. Sampling in grounded theory. In The SAGE Handbook of Grounded Theory, ed. Antony Bryant and Kathy Charmaz, 229–244. London, UK: Sage.
Morse, Janice M., and Linda Niehaus. 2009. Principles and procedures of mixed methods design. New York, NY: Routledge.
Mosley, Jennifer E., and Colleen M. Grogan. 2012. Representation in nonelected participatory processes: How residents understand the role of nonprofit community-based organizations. Journal of Public Administration Research and Theory 23: 839–863.
Murphy, John J. 1994. Working with what works: A solution-focused approach to school behavior problems. The School Counselor 42: 59–65.
Musso, Juliet, Christopher Weare, Thomas Bryer, and Terry L. Cooper. 2011. Toward "Strong Democracy" in global cities? Social capital building, theory-driven reform, and the Los Angeles neighborhood council experience. Public Administration Review 71: 102–111.
Neuman, W. Lawrence. 2013. Social research methods: Qualitative and quantitative approaches. London, UK: Pearson Education.
Nowell, Branda, and Kate Albrecht. Forthcoming. A reviewer's guide to qualitative rigor. Journal of Public Administration Research and Theory.
O'Brien, Daniel T., Dietmar Offenhuber, Jessica Baldwin-Philippi, Melissa Sands, and Eric Gordon. 2017. Uncharted territoriality in coproduction: The motivations for 311 reporting. Journal of Public Administration Research and Theory 27: 320–335.
Onwuegbuzie, Anthony J., and Charles Teddlie. 2003. A framework for analyzing data in mixed methods research. In Handbook of Mixed Methods in Social and Behavioral Research, ed. Abbas Tashakkori and Charles Teddlie, 397–430. Thousand Oaks, CA: Sage.
Ospina, Sonia M., Marc Esteve, and Seulki Lee. 2018. Assessing qualitative studies in public administration research. Public Administration Review 78: 593–605.
Palmer, Dalmer J. 2013. College administrators as public servants. Public Administration Review 73: 441–451.
Patton, Michael Q. 2002. Qualitative evaluation and research methods, 3rd ed. Thousand Oaks, CA: Sage.
Pawson, Ray, and Nick Tilley. 1997. An introduction to scientific realist evaluation. Thousand Oaks, CA: Sage.
Peirce, Charles S. 1905. What pragmatism is. The Monist 15: 161–181.
Petticrew, Mark, and Helen Roberts. 2006. How to appraise the studies: An introduction to assessing study quality. In Systematic Reviews in the Social Sciences: A Practical Guide, 125–163. Malden, MA: Blackwell Publishing.
Plano Clark, Vicki L., and Nataliya V. Ivankova. 2015. Mixed methods research: A guide to the field, vol. 3. Thousand Oaks, CA: Sage.
Reiners, Derek. 2012. Institutional effects on decision making on public lands: An interagency examination of wildfire management. Public Administration Review 72: 177–186.
Riccucci, Norma M. 2010. Public administration: Traditions of inquiry and philosophies of knowledge. Washington, DC: Georgetown University Press.
Robson, Colin. 1993. Real world research: A resource for social scientists and practitioner-researchers. MA: Blackwell Publishers.
Rorty, Richard. 1999. Philosophy and social hope. London, UK: Penguin.
Rossman, Gretchen B., and Bruce L. Wilson. 1985. Numbers and words: Combining quantitative and qualitative methods in a single large-scale evaluation study. Evaluation Review 9: 627–643.
Salge, Torsten Oliver, and Antonio Vera. 2012. Benefiting from public sector innovation: The moderating role of customer and learning orientation. Public Administration Review 72: 550–559.
Schoonenboom, Judith, and R. Burke Johnson. 2017. How to construct a mixed methods research design. KZfSS Kölner Zeitschrift für Soziologie und Sozialpsychologie 69: 107–131.
Shannon-Baker, Peggy. 2015. "But I wanted to appear happy": How using arts-informed and mixed methods approaches complicate qualitatively driven research on culture shock. International Journal of Qualitative Methods 14: 34–52.
Shea, Jennifer. 2011. Taking nonprofit intermediaries seriously: A middle-range theory for implementation research. Public Administration Review 71: 57–66.
Siddiki, Saba, and Shilpi Goel. 2017. Assessing collaborative policymaking outcomes: An analysis of US marine aquaculture partnerships. The American Review of Public Administration 47: 253–271.
Soss, Joe, Richard Fording, and Sanford Schram. 2011. The organization of discipline: From performance management to perversity and punishment. Journal of Public Administration Research and Theory 21 (suppl_2): i203–i232.
Suda, Yuko. 2011. For-profit and nonprofit dynamics and providers' failures: The long-term care insurance system in Japan. Public Management Review 13: 21–42.
Tashakkori, Abbas, and John W. Creswell. 2007. Exploring the nature of research questions in mixed methods research. Journal of Mixed Methods Research 1: 207–211.
Tashakkori, Abbas, and Charles Teddlie. 1998. Mixed methodology: Combining qualitative and quantitative approaches, vol. 46. London, UK: Sage.
———. 2010. Sage handbook of mixed methods in social & behavioral research. Thousand Oaks, CA: Sage.
Teddlie, Charles, and Abbas Tashakkori. 2003. Major issues and controversies in the use of mixed methods in the social and behavioral sciences. In Handbook of Mixed Methods in Social and Behavioral Research, ed. Abbas Tashakkori and Charles Teddlie, 3–50. Thousand Oaks, CA: Sage.
Teddlie, Charles, and Fen Yu. 2007. Mixed methods sampling: A typology with examples. Journal of Mixed Methods Research 1: 77–100.
Thom, Michael. 2013. Politics, fiscal necessity, or both? Factors driving the enactment of defined contribution accounts for public employees. Public Administration Review 73: 480–489.
Thomson Reuters. 2017. Journal Citation Reports (Science edition). Retrieved from: http://thomsonreuters.com/en/products-services/scholarly-scientific-research/research-management-andevaluation/journal-citation-reports.html.
Tummers, Lars, Bram Steijn, and Victor Bekkers. 2012. Explaining the willingness of public professionals to implement public policies: Content, context, and personality characteristics. Public Administration 90: 716–736.
Van Thiel, Sandra. 2014. Research methods in public administration and public management: An introduction. New York, NY: Routledge.
Villadsen, Anders R. 2011. Structural embeddedness of political top executives as explanation of policy isomorphism. Journal of Public Administration Research and Theory 21: 573–599.
Vogel, Rick. 2012. Framing and counter-framing new public management: The case of Germany. Public Administration 90: 370–392.
Wang, Weijie. 2016. Exploring the determinants of network effectiveness: The case of neighborhood governance networks in Beijing. Journal of Public Administration Research and Theory 26: 375–388.
Wang, XiaoHu, Kai Chen, and Evan M. Berman. 2016. Building network implementation capacity: Evidence from China. International Public Management Journal 19: 264–291.
Yang, Kaifeng, Yahong Zhang, and Marc Holzer. 2008. Dealing with multiple paradigms in public administration research. Public Administration and Public Policy-New York 134: 25.
Yanow, Dvora, and Peregrine Schwartz-Shea. 2014. Interpretation and method: Empirical research methods and the interpretive turn. New York, NY: Routledge.
Yom, Sean. 2015. From methodology to practice: Inductive iteration in comparative research. Comparative Political Studies 48: 616–644.
Young, Kevin L., and Sung Ho Park. 2013. Regulatory opportunism: Cross-national patterns in national banking regulatory responses following the global financial crisis. Public Administration 91: 561–581.
Zhan, Xueyong, and Shui-Yan Tang. 2016. Understanding the implications of government ties for nonprofit operations and functions. Public Administration Review 76: 589–600.

Footnotes
1 It should be noted that MMs could also be framed within different philosophical paradigms. However, pragmatism is considered the mainstream paradigm by MMs researchers and theorists (Cameron 2009). For an analysis of the pragmatic threads in MMs research, see Maxcy (2003).
2 Pragmatism stems from the Ancient Greek πρᾶγμα (pragma), meaning "act". The etymology of the term resonates with the original conception of pragmatism, that is, "the rational purport of a word or other expression, lies exclusively in its conceivable bearing upon the conduct of life" (Peirce 1905, 162). Pragmatist ontology circumvents the dichotomy between truth and reality by accepting that there are both singular and multiple realities (Rorty 1999).
3 This dichotomy is clearly a simplification that can be employed as a starting point. These categories are ideal types or idealized, simplified models of more complex arguments. According to Neuman (2013), few social researchers agree with all parts of an approach, and they often mix elements from each.
4 Scholars often contrast between-methods and within-methods designs, that is, on the one hand, designs that include at least one quantitative and one qualitative method and, on the other hand, designs that include at least two qualitative or two quantitative methods.
5 The correspondence between the concepts is understandable. In the early days of the social sciences, the metaphor of triangulation was borrowed from the military naval lexicon, where it indicated the multiple reference points employed to locate the exact position of a target (for the history of triangulation and MMs, see Datta 1994; Hanson et al. 2005).
6 This design has also been termed the preliminary quantitative input design (Morgan 2013).
7 Often in these articles, "mixed methods" were simply called for in the final section on the future research agenda.
8 A less significant connecting point for the parallel design is that the data come from the same source, either the same sample of respondents (seven studies; 22%) or the same archival data (two studies; 6%).
9 For completeness, we also reported the less frequent connecting points for the quanti–qualitative design: case selection (6 studies out of 39; 15%), insights from the first phase used to draft the interview protocol of the second phase (4 studies out of 39; 10%), the same sample of respondents (4 studies out of 39; 10%), and the same source of archival data (1 study out of 39; 3%). The empirical context is a very weak connecting point for this design and is unsurprisingly present in all the studies analyzed (39 out of 39).
10 Purposive sampling connects qualitative to quantitative methods in 18 out of 29 studies (62%).
11 For a complete list of the references not cited in the article but included in the systematic review, see the online Supplementary Material.
© The Author(s) 2018.
Published by Oxford University Press on behalf of the Public Management Research Association. All rights reserved. TI - Mixed Methods in Public Administration Research: Selecting, Sequencing, and Connecting JO - Journal of Public Administration Research and Theory DO - 10.1093/jopart/muy046 DA - 2019-04-02 UR - https://www.deepdyve.com/lp/oxford-university-press/mixed-methods-in-public-administration-research-selecting-sequencing-l7TXdRddzi SP - 334 VL - 29 IS - 2 DP - DeepDyve ER -