Abstract
Performance management reforms are a popular way to try to create responsive and improving government. These types of reforms have become commonplace in education policy, and the Journal of Public Administration Research and Theory (JPART) has been one of the leading venues for research on these topics. However, the ways in which performance management policies represent an antipolitical bent to education reform remain under-analyzed. We outline an argument that avoiding political decisionmaking in favor of reforms that create authoritative or purportedly neutral data risks producing policy changes that are not as meaningful as hoped. We select eight articles that represent research on performance management broadly and are thought provoking for a broader consideration of performance management in education policy.
Introduction
In the 1980s and 1990s, policies promising to “reinvent government” (Osborne and Gaebler 1992) were among the most popular reforms. Faced with declining confidence in public institutions, frustration with perceived inefficiency, and displeasure with governmental outcomes, reformers sought to make government more efficient, increase performance, and regain public confidence. Public education has not been spared this discontent. Although nearly half of Americans grade their local schools as worthy of an “A” or “B,” only 24% give these grades to the nation’s schools as a whole (Phi Delta Kappan 2016). Belief that our schools are failing to live up to our expectations has led to many waves of educational reform. One of the most recent waves of K-12 education reforms focused on improving public education through performance management. These reforms typically promised to improve school performance by establishing clear benchmarks, collecting and publishing data on student progress towards these benchmarks, and rewarding or sanctioning schools (or sometimes individual teachers) based on their progress (Jacobsen and Saultz 2016; McDermott 2011).
Performance management reforms became omnipresent in public education in the United States with the passage of No Child Left Behind (NCLB), which impacted all K-12 schools, but the roots of these reforms extend back much further (Ginsberg et al. 1988; Mehta 2013). Efforts to improve the performance of public institutions cut across a number of policy domains. However, public education is the largest public institution, educating approximately 90% of our nation’s children (National Center for Education Statistics 2017a) with an annual expenditure of approximately $634 billion (National Center for Education Statistics 2017b), making it of particular interest to researchers concerned with the impact of performance management strategies in public institutions. Yet the historical divide between public administration and education scholarship (Raffel 2007) has limited the degree to which scholars of public administration have researched public education. The Journal of Public Administration Research and Theory (JPART) attempts to bridge this divide and continues to be a leading venue for research that analyzes education issues through public administration lenses. By our count, JPART has published 28 articles focusing on education since 1991. Within this set, we identify eight articles that relate to performance management reforms in public education and include them in this virtual issue. Beyond their value as compelling research on the impact of performance management policies, we believe they are useful pieces to examine when considering the potential for performance management reforms to operate in antipolitical ways. That is, these policies tend to be cast as value-neutral even though they simultaneously narrow the varied outcomes expected from the American public education system and limit the ability of some voices to participate in the education policy process. Our introduction is organized into three main parts.
First, we outline the primary facets of performance management present in education reforms. Second, we argue that a focus on performance management reforms in education can be viewed as an example of antipolitics—a narrow focus on promoting efficiency at the expense of other goals present in a public and democratic system. Third, we discuss each previously published article included in this virtual issue and its relevance to performance management, and highlight some thoughts or questions spurred by each article when considering our comments on the antipolitical undertones of performance management reforms in education.
Performance Management in Education
Performance management reforms in education seek to improve efficiency in four key ways: by setting performance goals, gathering data to help decide whether goals are being met, rewarding or sanctioning those responsible for achieving goals, and relaying information about performance to those who can make decisions with that information (Jacobsen and Saultz 2016). First, setting performance standards is most often done by creating a proficiency threshold on state academic tests. Second, K-12 policies mandate the annual collection of performance data on reading and mathematics standardized tests as well as the disaggregation of data by student types (e.g., race) and by schools. Third, based upon these performance data, schools, and at times individual teachers, may receive rewards or face sanctions in an attempt to induce improved performance. NCLB provides an emblematic example of the most popular way of creating sanctions or rewards—it would deprive schools of funding allocations if they failed to meet performance targets. Finally, performance data are publicly reported to enable better parental and citizen oversight of their public schools. The most common way of doing this has been to create report cards that publicize every school’s performance.
These publicly available reports for every public school include information on academic performance, teacher qualifications, and various other issues. Various internal data reporting methods also exist within states, districts, and schools that, while not public, also inform decisionmaking. In their design, these reforms treat politics and organizational change as separate enterprises—the problems within the system are primarily cast as technical issues to be addressed by performance management and not linked to broader ideologies about education. In 1994, Terry Moe criticized this tendency to separate politics and administration in the pages of JPART. He called this separation “the dichotomy that wouldn’t die” (Moe 1994, 17), and many before Moe similarly characterized this politics-administration divide as myth. Treating inadequate performance as a puzzle to be solved by experts, not political decisionmaking, is a long-standing tradition in education. Indeed, many schooling reforms that continue to impact the system to this day came to fruition thanks to Taylorist principles and a focus on efficiency in the first decades of the 20th century (Reese 2005). The physical design of the large comprehensive high school, the organization of topics and curricular sequencing, and the placement of students into tracks based on potential life or career outlooks are but a few examples of this historical precedent. Today, a similar belief operates in education, where policy makers and school leaders focus on efficiency and performance monitoring as the primary reform mechanism, often attempting to escape the realm of political debate. Trujillo (2014) dubs this a “new cult of efficiency.” She argues that basing reforms on data monitoring creates a new technocracy that fails to address the complexity of public education, much of which is unmeasured and extremely difficult to measure.
Today’s accountability policies, grounded in performance management theories, fall prey to familiar technocratic assumptions. Current debates about performance management policies tend to focus on issues such as test frequency and when tests should occur (Klein 2017), what level of mastery is indicative of “proficiency” (Mitchell 2017), how performance ought to be calculated to assess teacher quality (Sawchuk 2013), whether similar standards ought to apply across all student groups, and for what types of measures schools should be held accountable (Hoff 2007). These are primarily process and organizational questions that ignore larger, potentially more important issues facing public education. Issues about what students should know and be able to do, what kinds of schools should be publicly supported, how we should measure other goals of public education beyond academic knowledge, and who should control education policy decisionmaking are discussed far less frequently in policy and administration circles.
The Continued Bend Toward Antipolitics
Education performance management regimes in their current forms are relatively new, but the underlying desire to create the “one best system” is not (Tyack 1974). Often, in the search for a singular best system, reforms attempt to avoid politics. Rather than engage in the messy and slow realm of politics, policies attempt to create or shift to more authoritative and less political decisionmaking realms. Previous scholars have discussed the ways in which education reformers sought to engage in antipolitics by focusing on institutional choice (Plank and Boyd 1994). Common versions of this include reforms that centralize decisionmaking authority, reformers who seek remedy in the courts to escape the slow work of political mobilization, and reforms that expand the use of market mechanisms (e.g., school choice).
At their core, these reforms seek to avoid political uncertainty and shift decisions to more authoritative venues (e.g., individual parents and the courts versus elected school boards). Although performance management reforms operate differently, they still try to create the appearance of more authoritative decisionmaking to skirt more political processes. In some cases, the data themselves trigger decisions—for example, the sanctioning under NCLB of schools failing to meet performance targets. In others, the data are pointed to as the definitive basis for decisions. In this piece, we use “antipolitics” to mean the propensity to circumvent conflicted political decisionmaking in favor of something cast as more authoritative and neutral. In education contexts, performance management reforms fit this bill. The term “antipolitics” is not new to scholars of public administration. In the introduction to In Defense of Politics in Public Administration, Michael Spicer (2010) traces the history of antipolitical thought back at least as far as Woodrow Wilson, who believed politics did not belong in administration because groups suffered from too many differences and competing values to find an effective path forward. A similar argument has often been advanced for why politics ought to be removed from education policy decisionmaking—how to educate our nation’s most precious resource, our children, ought not be left to the messy and often corrupt world of politics (Reese 2005). Unlike Wilson and others, Spicer argues “the practice of politics is useful, if not actually necessary…” (Spicer 2010, 10). Similarly, the term antipolitics is not new to scholars of K-12 education. Plank and Boyd (1994) concluded with a call similar to Spicer’s (2010): “these conflicts must be accommodated and reconciled, not circumvented or suppressed” by education decisionmaking (277).
These authors collectively cast doubt on whether the development and administration of policies can be most successful if they avoid political decisionmaking in favor of something purporting authoritative answers. Performance management reforms, whether consciously or not, mask the underlying political issues associated with our largest public institution by turning power over to the data themselves. It is not uncommon within education circles to hear school leaders refer to performance data as if there can be no discussion or disagreement because the data have spoken. Policy has increasingly focused on the process of data creation and the organizational capacity to develop data, rather than broader questions about whether our schools provide the kind of education we, as a society, value. Thus, school performance data are not a value-neutral proposition even though they may provide an authoritative representation of school goals and a basis for policy. Some symptoms of this larger antipolitical environment include the shift in policy conversations towards technocratic questions, certain types of data, and the expansion of such data. Some have documented the ways in which faith in “big data” corresponds with goal displacement in education. For example, most states are designing or refining large-scale student data systems. But advances in this area shift priorities toward those items more easily measured (e.g., mathematics skill development, reading comprehension, etc.) and away from other important goals (Lavertu 2016). These other goals, such as persistence, collaborative problem solving, and commitment to civic and political life, have been neglected because they are skills that are not easily measured and included in burgeoning data systems.
Today, it is easy to be lured into the trap of believing that a singular goal—student academic performance—is the goal public schools ought to pursue and that performance management strategies can best help us achieve better outcomes toward this goal. However, this belies the many other functions schools serve, including practical functions (e.g., vocational education) as well as their role in creating civically minded citizens and a shared identity of what it means to be an American. These goals are not merely historical aims that have faded over time—Rothstein et al. (2008) used nationally representative surveys and found that although individuals may vary in the degree to which they believe multiple goals ought to be pursued, only a tiny minority prioritized academic performance as the singular goal for our schools. Despite this empirical evidence that the public holds multiple, and at times competing, goals for public education, education policy has drifted toward an exclusive focus on academic performance. The emphasis on performance management reforms artificially masks deeper debates regarding the goals of education, debates that must necessarily take place within the public realm. One way performance management reforms fall into this trap is through performance reporting. Performance reports often signal that academic performance is the only important outcome. For example, school-level report cards are the most common type of reporting. Even if accountability systems and reports are not solely focused on student academic outcomes, academic performance tends to be much more prominent than any other indicator. Although academics are important, they do not represent the entirety of what the public wants from public education. An onlooker may therefore have an overly negative view of a school given a failing grade based on its academic performance even though there may be other positive outcomes generated at the school.
In the instance of school report cards, a school receiving an “F” might be widely labeled as failing—not just academically, but also in achieving its many and varied goals that are not reported. The issue of school closures provides another useful example. Often, attempts to close schools that underperform academically have been recast as technical decisions based on performance data. Yet school closures have immense political consequences for communities, consequences that are no longer considered under performance management systems that turn to the data as the authority on a school’s value.
Why Antipolitics Hinders Reform
By focusing solely on outcomes that, although important, are not the lone concern of educational policy, performance management reforms hinder their ultimate goal of improving education systems for all students. The “reinventing government” (Osborne and Gaebler 1992) reforms of the 1980s and 1990s tried to improve both performance and trust. Although performance management education reforms may ultimately reach their intended performance goals, our primary worry, and one that requires more empirical study, is the degree to which the focus on performance among current performance management regimes creates the possibility for an erosion of public trust. Performance management reforms run the risk of marginalizing education preferences outside the policy’s focus. Vocal critics have emerged and denounced the ways in which standardized tests have become the primary measure of student progress (e.g., Ravitch 2010). At times, the resulting push-back has been fierce. For example, prominent recent movements encouraged parents to opt their students out of standardized testing (Pizmony-Levy and Saraisky 2016) and opposed the Common Core State Standards (McDonnell 2013).
These were highly visible attempts to combat the perceived negative practices stemming from performance management; they signaled lament over a perceived narrowing of the curriculum to those items contained in standards and tests, as well as distrust of educational tests as meaningful measures of student performance. Other concerns may similarly highlight the insufficiency of performance management reforms. One is that inherent structural and non-school inequities are disguised when focusing narrowly on academic performance (Rothstein 2004). Thus, although schools have effects on their students, the effects of outside-of-school factors likely matter more. Performance management reforms maintain a focus on schools and may not incentivize the kind of broad social policy approach necessary to improve intractable problems. Additionally, school performance data are often treated as neutral and unbiased representations even though they may instead reinforce the negative effects of factors like poverty on student outcomes (Hansen 2016). This antipolitical undertone to performance management reforms—where certain values are privileged yet conveyed as value-neutral—leads to a system where purportedly objective and complete pictures of school quality are widely disseminated. We believe those who do not agree that narrowly-defined achievement and performance management outcomes should be the primary basis for reform are less likely to trust the degree to which the system operates for the benefit of all students. This is the primary danger of antipolitics in education policy—that the failure to recognize and provide opportunity for public deliberation on the varied wants and interpretations of the education system will lead to a system distrusted by a portion of the population. The ensuing dissatisfaction may prevent schools from obtaining the support they need and foreclose the sustained and broad debate that creates the most meaningful reform.
Articles in this Virtual Issue
As we look across the articles we selected that focus on performance management and education, we see not only a broad understanding of the ways in which performance management reforms have impacted public education, but also the ways in which these articles provoked our thinking based on our discussion above. We discuss each article based on which of the four facets of performance management reforms identified above it most relates to (establishing standards, collecting performance data, the use of sanctions or rewards, or performance reporting). The eight articles selected for the virtual issue are marked with an asterisk (*).
Establishing Standards
Establishing standards is a process more complicated than top–down creation and subsequent adherence. Instead, standards that are set at one level—most often the state in K-12 education policy—are mediated by a host of factors. State-to-state standards vary substantially, and within-state variation in the goals set by districts or schools to achieve their broader accountability goals is an under-studied phenomenon. Using New York City as their case, Favero, Meier, and O’Toole (2016) use novel methods of assessing school building-level managers’ internal management. In short, they find internal management influences multiple school measures, including student performance, parent satisfaction, and student attendance. Especially important is the ability of administrators to set clear goals that include high expectations. This piece is extremely valuable because, first, it investigates building-level management, a level often overlooked in the literature despite the fact that administrator motivations and values play a large role in implementation. Second, it peers inside the management issues that can influence whether performance management achieves its lofty goals.
Future consideration of the factors that influence managerial practices, such as the ways administrators grapple with constraints placed on them by their communities and with expectations from superiors, will help further clarify how managerial practices influence performance management reforms in schools. Although not focused on US K-12 education, Nielsen (2014) provides information on the ways performance management operates in Denmark. Using a panel data set, he finds that two primary activities among school-level administrators influence student scores, whereas others do not. First, hiring/firing authority corresponds with increased student performance. Second, devolving academic goal setting to schools correlates with decreased student performance. Like Favero, Meier, and O’Toole (2016), Nielsen (2014) shows the important role played by building-level actors. This study also leads the reader to wonder about the optimal amount of authority left to school administrators. It may be that authority at more local levels produces positive effects, but too much authority, or authority over the wrong operations, negatively affects outcomes. Furthermore, this balance may vary across communities because local values and politics are not uniform. Future policy and scholarship must grapple with these issues. For example, union influence or active community groups might play a role in administrator behavior. We also wonder about the politics of intergovernmental relations when considering these questions. Namely, what are the intergovernmental environments that spur positive actions on the parts of policymakers and implementers?
Collecting Performance Data
The collection of performance data extends beyond state agencies collecting data on school performance. NCLB also required schools to set aside funds for externally-provided Supplemental Education Services (SES) and monitor those providers’ effectiveness.
Heinrich (2010) examines this arrangement—where underperforming schools authorize SES providers and provide information to parents, who choose providers. Within her study, she finds that little guidance was provided to parents in selecting SES providers and that decisions may have been made according to the incentives offered by the provider (e.g., when allowed, the offer of free iPods, computers, or other items attracted enrollment). Additionally, she highlights the difficulty local education agencies had in holding providers accountable even though they were generally unsatisfied with provider services. Some problems included providers creating their own marketing data and initiatives to attract students and the local education agencies’ difficulty maintaining adequate performance data as providers exited and entered the market. Her findings also challenge the degree to which a system of supplemental services embodies common education system values like equitable access to services or equal representation. We believe Heinrich (2010) highlights a superb example of what can happen when accountability reforms pay insufficient attention to incentives for undesired behaviors. Furthermore, we believe this study identifies the very real tensions in creating a system predicated on data collection and dissemination when resources are insufficient and when providers are actively creating and disseminating different data in parallel. Meier and O’Toole (2013) show that using subjective measures of organizational performance can lead to biased estimates. In their study, they investigate how perceptions of school performance among managers align with more “objective” data (e.g., standardized test scores). In general, they find a large degree of bias in these perception measures. However, the direction and magnitude of bias are not always the same, and questions with a more specific focus tended to suffer from less bias.
Although the authors are careful to note that their concerning findings apply only to self-assessments by administrators, the study left us with many questions and avenues for future consideration. One was how we might begin to investigate the sources of bias with regard to citizen satisfaction. Is it that administrator perceptions align more closely with citizen perceptions of the district than with testing results? If so, this may be a powerful signal that the local political climate plays an important role in administrator perceptions. Furthermore, we wonder whether administrator perceptions correspond with different practices. Challenges may exist to performance management theories if, even in the face of data suggesting otherwise, managers continue to view their districts as performing well and do not modify behaviors to achieve better results.
The Use of Sanctions or Rewards
Performance management relies on sanctions and rewards to spur improved results. In her study of Massachusetts accountability reforms, McDermott (2006) explains how the reliance on these tools, coupled with limited capacity, presented difficulties for implementation. These reforms pre-dated NCLB, but many of the same tenets discussed above were present in Massachusetts policy. Chief among these was a threat of sanctions against schools without substantive incentives. McDermott (2006) argues the sanctions alone were ineffective and that a lack of capacity at state and local levels further eroded the effectiveness of these tools. At the state level, there were insufficient resources and staff to seriously assist districts and enforce policies. At the local level, districts lacked capacity as well as the resources to build it, further stymieing reforms. The study also provides an excellent example of the antipolitical assumptions embedded within reforms.
McDermott (2006) summarizes how “rather than being an automatic result of getting incentives and governance structures right, reform…requires a combination of capacity, will, and trust…” (60). However, reforms were met with mistrust and reticence by local educators in this case, and the ensuing political impediments seemed unavoidable. These problems are not unique to Massachusetts’s context, and McDermott (2006) deftly highlights the peril of ignoring how the politics of implementation can derail plans.
Performance Reporting
Recent scholarship in JPART has focused on the role of performance reporting in education reforms. Carlson, Cowen, and Fleming (2014) examine the implementation of performance measurement and reporting within a voucher system. Milwaukee, Wisconsin, established the nation’s first school voucher program—where students can enroll in private schools and the school receives a set amount for each student—and has continued to modify the system since its inception in 1990. Beginning in 2010, all voucher schools were required to test students and report on performance in ways very similar to public accountability systems under NCLB. Unlike NCLB, the state could not levy sanctions on private schools; the primary negative repercussion was the presumed disenrollment of students if a school was shown to be performing poorly. Carlson, Cowen, and Fleming (2014) find that in response to this reform, private school performance jumped significantly. They posit the best explanation is that schools changed practices because they believed parents would use test results in future education decisions. However, the authors are unable to unpack the mechanisms by which academic improvement occurred (which they freely admit). A follow-up question we have is whether performance management reforms narrowed the practices of private schools in the same ways they have been accused of limiting experiences in public schools.
The theoretical underpinnings of school choice regimes like vouchers include the assumption that schools, free from the constraints present in a more bureaucratic system, will innovate to attract students. If test preparation replaced some potentially innovative practices, performance management reforms may have inadvertently displaced school practices popular with the community. This echoes the antipolitical concerns expressed earlier. Two other recent articles also focus on performance reporting and its influence on citizen satisfaction. Jacobsen, Snyder, and Saultz (2015) utilize a nationally representative survey to test whether varied public expectations for schools exist and whether satisfaction differs among groups holding different expectations. They find different expectations do exist, and those preferring a more well-rounded education that pursues multiple goals rated schools more highly even when academic performance declined. We believe this article raises an important consideration when contemplating political decisions about the format of performance data. These data may fail to present a picture of schools that speaks to the varied goals citizens care about, and in so doing may skew the ways in which citizens assess their schools. For example, if the data displayed only academic performance to respondents, rather than information on other goals as well, satisfaction may have declined for those with more well-rounded preferences. Finally, Barrows et al. (2016) also examine the ways in which decisions about school performance data influence satisfaction. Using national survey experiments, the authors find that including information about a school’s academic performance relative to state, national, or international comparative data decreases satisfaction. Moreover, they also find learning effects that depress satisfaction.
That is, in a second experiment, respondents were more likely to adjust their satisfaction judgments downward when receiving new information, suggesting they initially overestimated their local schools. The suggestion that relative information influences satisfaction raises interesting questions for the politics of education. One such question is whether the results hold for non-academic considerations. That is, would information about relative school safety or non-academic post-graduation outcomes also produce effects?
Conclusion
In total, we believe this collection of articles provides insight into some facets of performance management reforms and also proves generative for future thinking about the topic. Although we argue performance management policies have come to represent an antipolitical reform, this should not be read as a belief that these reforms are a failure. Instead, we hope to convey a worry that by avoiding politics, a focus on efficiency through performance management regimes runs serious risks for public education. To borrow from Lasswell (1936), these policies drastically alter who gets what, as well as when and how they get it—the very definition of political. Thus, circumventing conflicted political decisionmaking in favor of more authoritative and purportedly neutral data-driven decisions may prove a risky long-term proposition. Rather than leading to sustained and meaningful reform, performance management may find that the varied goals and values present in education policy hinder its ability to create the type of improvements we seek.
References
*Barrows, Samuel, Michael Henderson, Paul E. Peterson, and Martin R. West. 2016. Relative performance information and perceptions of public service quality: Evidence from American school districts. Journal of Public Administration Research and Theory 26: 571–83.
*Carlson, Deven E., Joshua M. Cowen, and David J. Fleming. 2014.
Third-party governance and performance measurement: A case study of publicly funded private school vouchers. Journal of Public Administration Research and Theory 24: 897– 922. Google Scholar CrossRef Search ADS *Favero, Nathan , Meier Kenneth J. , and O’Toole Laurence J., Jr . 2016. Goals, trust, participation, and feedback: Linking internal management with performance outcomes. Journal of Public Administration Research and Theory 26: 327– 43. Google Scholar CrossRef Search ADS Ginsberg, Alan L. , Noell Jay , and Plisko Valencia W . 1988. Lessons from the wall chart. Educational Evaluation and Policy Analysis 10: 1– 12. Google Scholar CrossRef Search ADS Hansen, Michael . 2016. New school accountability measures must be careful not to reinforce student poverty . Brown Center Chalkboard. Washington, D.C.: Brookings Institution. https://www.brookings.edu/blog/brown-center-chalkboard/2016/01/04/new-school-accountability-measures-must-be-careful-not-to-reinforce-student-poverty/ (accessed August 10, 2017). *Heinrich, Carolyn J . 2010. Third-party governance under No Child Left Behind: Accountability and performance management challenges. Journal of Public Administration Research and Theory 20: i59– 80. Google Scholar CrossRef Search ADS Hoff, David J . 2007. ‘Growth models’ gaining ground in accountability debate. Education Week , December 19. Jacobsen, Rebecca and Saultz Andrew . 2016. Comparing satisfaction in schools and other public arenas. Public Performance and Management Review 39: 476– 97. Google Scholar CrossRef Search ADS *Jacobsen, Rebecca , Snyder Jeffrey W. , and Saultz Andrew . 2015. Understanding satisfaction with schools: The role of expectations. Journal of Public Administration Research and Theory 25: 831– 48. Google Scholar CrossRef Search ADS Klein, Alyson . 2017. Betsy DeVos: States should decide how much testing is “actually necessary.” Education Week , March 26. Laswell, Harold . 1936. Politics: Who gets what, when, how . 
New York, NY: McGraw Hill. Lavertu, Stéphane . 2016. We all need help: “Big data” and the mismeasure of public administration. Public Administration Review 76: 864– 72. Google Scholar CrossRef Search ADS *McDermott, Kathryn A . 2006. Incentives, capacity, and implementation: Evidence from Massachusetts education reform. Journal of Public Administration Research and Theory 16: 45– 65. Google Scholar CrossRef Search ADS McDermott, Kathryn A . 2011. High-stakes reform: The politics of educational accountability . Washington, DC: Georgetown Univ. Press. McDonnell, Lorraine M . 2013. Educational accountability and policy feedback. Educational Policy 27: 170– 89. Google Scholar CrossRef Search ADS Mehta, Jal . 2013. The Allure of Order . New York, NY: Oxford Univ. Press. *Meier, Kenneth J. and O’Toole Laurence J., Jr . 2013. Subjective organizational performance and measurement error: Common source bias and spurious relationships. Journal of Public Administration Research and Theory 23: 429– 56. Google Scholar CrossRef Search ADS Mitchell, Corey . 2017. Is a new English-proficiency test too hard? Educators and experts debate. Education Week , August 4. Moe, Terry M . 1994. Integrating politics and organizations: Positive theory and public administration. Journal of Public Administration Research and Theory 4: 17– 25. National Center for Education Statistics . 2017a. Private school enrollment. https://nces.ed.gov/programs/coe/indicator_cgc.asp (accessed August 10, 2017). ———. 2017b. Public school expenditures. https://nces.ed.gov/programs/coe/indicator_cmb.asp (accessed August 10, 2017). *Nielsen, Poul A . 2014. Performance management, managerial authority, and public service performance. Journal of Public Administration Research and Theory 24: 431– 58. Google Scholar CrossRef Search ADS Osborne, David and Gaebler Ted . 1992. Reinventing government: How the entrepreneurial spirit is transforming the public sector . Reading, MA: Addison-Wesley. Phi Delta Kappan . 2016. 
The 48th annual PDK poll of the public’s attitudes toward the public schools . Bloomington, IN: PDK International. Pizmony-Levy, Oren and Saraisky Nancy Green . 2016. Who opts out and why? Results from a national survey on opting out of standardized tests . New York, NY: Teachers College, Columbia Univ. Plank, David N. and Boyd William Lowe . 1994. Antipolitics, education, and institutional choice: The flight from democracy. American Educational Research Journal 31: 263– 81. Google Scholar CrossRef Search ADS Raffel, Jeffrey A . 2007. Why has public administration ignored public education, and does it matter? Public Administration Review 67: 135– 51. Google Scholar CrossRef Search ADS Ravitch, D . 2010. The death and life of the great American school system: How testing and choice are undermining education . New York, NY: Basic Books. Reese, William J . 2005. America’s public schools: From the common school to “No Child Left Behind.” Baltimore, MD: The Johns Hopkins Univ. Press Rothstein, Richard . 2004. Class and schools: using social, economic, and educational reform to close the black-white achievement gap . Washington, DC: Economic Policy Institute. Rothstein, Richard , Jacobsen Rebecca , and Wilder Tamara . 2008. Grading education: Getting accountability right . New York, NY: Teachers College Press. Sawchuk, Stephen . 2013. Teachers’ ratings still high despite new measures. Education Week , February 5. Spicer, Michael W . 2010. In defense of politics in public administration: A value pluralist perspective . Tuscaloosa, AL: Univ. of Alabama Press. Trujillo, Tina . 2014. The modern cult of efficiency: Intermediary organizations and the new scientific management. Educational Policy 28: 207– 32. Google Scholar CrossRef Search ADS Tyack, David B . 1974. The one best system: A history of American urban education . Cambridge, MA: Harvard Univ. Press. © The Author 2017. Published by Oxford University Press on behalf of the Public Management Research Association. 
All rights reserved.
Journal of Public Administration Research and Theory – Oxford University Press
Published: Oct 4, 2017