
Maranto Admin Accountability for 2018 SSRJ


Does Administrative Accountability Capture Student Learning?  An Arkansas Test

Robert Maranto (rmaranto@uark.edu), University of Arkansas

Kaitlin Anderson (ande2018@msu.edu), Michigan State University

Alexandra Boyd (boydalexandra@gmail.com), University of Arkansas 

Every restraint and requirement originated in somebody’s demand for it.

Herb Kaufman (1977, 29)

Abstract

Market critics propose that administrative accountability is superior to school choice in promoting school quality. We use Arkansas school level value added measures of student learning to test whether schools that are less effective academically are more likely to face administrative sanctions. We find very modest, but statistically significant, relationships between school academic performance and state sanctions: fully accredited (non-sanctioned) schools are slightly more effective academically. We hypothesize that charter schools are more likely to face sanctions since they have fewer administrative resources; this is not supported by the data. We discuss the relevant policy implications.

Introduction

State based regulation and market based choice are often seen as competing paradigms for school improvement. Proponents argue that markets better innovate than the state sector, that markets better match individual consumer needs, that parents rather than the state should most influence children, and that consumer sovereignty better guarantees school quality than do state mandates, since parents are more apt than bureaucrats to seek maximization of student success, and since distant authorities may impose measurements which fail to match local needs (Friedman 1962; Coulson, 1999; Chubb & Moe, 1990; Greene, 2005; Hill, Pierce & Guthrie, 1997; Merrifield, 2001; Vaughn & Witko, 2013). In contrast, proponents of state provided and regulated schooling argue that school consumers lack expert knowledge, that markets sort but do not develop students, that markets undercut democracy, and that markets may prove divisive over the long term (Henig, 1994; Hirsch, 1996, 2009; Glass, 2008; Ravitch, 2010).

Lost in the debates is that education in America and most other democracies is not provided by a single monopoly so much as by private entities, or more often, locally governed state owned enterprises (SOEs), heavily regulated by national and sub-national politicians and bureaucrats (see works within Glenn & De Groof, 2012). As in any such regime, regulations reflect the preferences of those politicians and bureaucrats (Downs, 1967), but even more the preferences of interest groups seeking to institutionalize their own material and ideological priorities (Lowi, 1979; regarding schooling see Chubb & Moe, 1990;  Moe, 2011). Those preferences may have relatively little to do with student learning (Maranto & McShane, 2012; Merrifield, 2001).

We use a sophisticated school level measure of student learning (value added) to test whether the regulatory compliance implemented by the Arkansas Department of Education (ADE) is more likely to penalize public schools in which students learn relatively little, and secondly, whether compliance measures are more apt to penalize charter schools, which may lack the administrative resources to comply with administrative accountability demands (“red tape”). Findings indicate that schools with higher value added are slightly less likely to face sanctions, though the relationship is very weak. Charter schools are also less rather than more likely to be sanctioned.

The Hypotheses: Regulatory Compliance and Student Learning

As is characteristic of regulation generally and regulation of public organizations in particular (Lowi, 1979; Behn, 2001; Kaufman, 1977; Bozeman & Feeney, 2011), public schooling regulations are intended to advance a variety of goals. We propose that regulation of public schools reflects four broad goals: the general public interest, at least as conceived of by those advocating a particular regulatory scheme (as in the matter of school textbooks, see Ravitch, 2003); the material interests of school employees (on teacher unions, see Moe 2011); the turf and power of policy-making bureaucrats themselves (Downs, 1967; Niskanen, 1971; regarding schools see Williams, 2005; Levenson, 2012); and finally, the credit-claiming of elected politicians (regarding school boards, see Maeroff, 2011; regarding school superintendents see Hess 1999; regarding politicians generally see McCluskey, 2007). Only the first of the four goals directly relates to student achievement. This may partly explain why large increases in school resources have generally not increased student learning, even as demographic factors such as childhood poverty, the percentage of low birth weight babies, cognitive challenge rates, and family size have generally declined since 1960 (Maranto & McShane, 2012; Greene, 2005). The second (material interests of employees) and third (bureaucratic turf) goals, in particular, may lead to the creation of barriers to entry keeping alternative providers out of education, but not protecting students, as may typify regulation generally (Friedman, 1962).

A problem of regulation generally and that of schools in particular is that regulations designed to promote accountability for performance often become organizational ends rather than means to broader public ends, “goal displacement” (Merton, 1940; Anechiarico & Jacobs, 1996; Bozeman & Feeney, 2011; Howard, 1994). Further, distant central authorities find it difficult to mandate exemplary outcomes in part since such outcomes require innovation and initiative rather than compliance with central directives. Accordingly, even done well, regulation is unlikely to establish more than a basic floor for outcomes (Friedman, 1962; Hill, Pierce, & Guthrie, 1997; Hess, 2013; Ouchi, 2009). This seems to characterize teacher certification. Despite extensive regulation of teacher training and certification, it is not clear that certified teachers are better teachers; rather, certification seems to combine low performance standards with high administrative compliance barriers to entry (Hanushek & Lindseth, 2009; Hess, 2010; Winters, 2012). In fact, using Arkansas data, Shuls and Trivitt (2015) find that traditionally trained and alternatively licensed teachers were not significantly more or less effective than each other, but that alternatively licensed teachers scored higher on teacher licensure exams than traditionally licensed teachers.

Further, over-regulation may cause long-term organizational pathologies. A vast literature shows that societies directed by central planners fail to find correct information and thus adapt to dynamic or localized circumstances, work from classical economists like Friedman (1962), but also from political scientists (Lindblom & Cohen, 1975), policy analysts (Brandl, 1998), and organization theorists (Hult & Walcott, 1990). This reflects planners’ short term mistakes in policy and resource allocation, but also long term impacts on institutional legitimacy and organization culture. At the organizational level, in their review of the literature, Bozeman and Feeney (2011) complain that “red tape” generally has substantial performance costs and opportunity costs, meaning that employees complying with procedural demands are not then working to further organization missions. Some studies indicate that more red tape leads to a less risk-acceptant organization culture, less public service motivation on the part of bureaucrats, and ultimately lower performance. Downs (1967) theorizes that more rule bound organizations attract and develop “conservers,” bureaucrats seeking to maximize their own security rather than either self-advancement or altruistic public service; such officials prefer to work in rule-bound environments since rules remove discretion and thus make work easier and more predictable, if less productive. Warwick (1975) finds evidence that such bureaucrats abounded in the rule bound U.S. State Department of the 1960s. Within schools, Hess (2010, 2013), Moe (2011), and Shuls and Maranto (2013) offer evidence that traditional public school personnel systems tend to recruit and attract conservers rather than more public service motivated leaders and teachers, in sharp contrast to certain charter school networks.
Along these lines Theodore Sizer warns that “[t]he more the higher authorities impose standardized procedures and demand that school level people adhere to them…the greater the likelihood that the schools will be mediocre, even harmful to some children, and unable to attract and hold a full complement of able staff” (quoted in McCluskey, 2007, p. 170). Similarly, empirical studies by Ingersoll (2003) find that the greater the efforts administrators make to control teachers, the less effective that control tends to be. Likewise, in his work on urban schools Payne (2008) finds that centralized school systems lead teachers and principals to focus on protecting their prerogatives rather than serving children. Central offices, for their part, lack the information to hold these school level actors accountable. Hill et al. (1997) make similar points.

Taken as a whole, this suggests:

H1. The relationship between regulatory sanctions and student learning at the school level will be weak or nonexistent.

Regarding actual schooling accountability sanctions, as implied above, regulatory sanctions are a blunt instrument in several respects. Regulatory sanctions developed in the state or national capital may not reflect the complexities at the school level; thus, schools may excel on bureaucratic compliance measures while being unsafe and performing poorly academically. In part for this reason, many theorists prefer school choice allowing individual parents to hold schools accountable (Hill et al. 1997; Ouchi, 2009; McCluskey, 2007). Indeed, a vast “reinventing government” literature (Osborne & Gaebler, 1992; Knott & Miller, 1987; Barzelay 1993; works within DiIulio, 1994; for a summary applied to schools, see Maranto et al., 2001, ch. 3) suggests that regulatory sanctions are most apt to target either woefully under-performing schools, or very high performing schools. As Payne (2008) documents, certain dysfunctional schools neither provide a good education nor succeed at basic paperwork; in such schools nothing works well. On the other hand, as Thernstrom and Thernstrom (2003) detail, certain high performing schools readily ignore or even willfully disobey rules that interfere with educating students. Indeed, at least one alternative certification organization, Teach for America, and at least one charter school network, KIPP, encourage this sort of mission driven thinking on the part of teachers (Maranto & Shuls, 2011).

Within Arkansas, Maranto (2010) reports that a number of the state’s highest performing schools, both district and charter, have suffered state administrative sanctions. Springdale’s Har-Ber High School, a traditional public school, was sanctioned for responding to student requests by offering an elective in Mandarin taught by the school’s only speaker of the language who, while certified, lacked certification in that language. (The certification rules were sufficiently complex that district administrators believed they had complied.) The KIPP Delta charter high school, a small college prep school, suffered state sanctions for offering but not forcing students to enroll in vocational classes. Both KIPP and Har-Ber have among the best test scores in the state, and excel on student level measures of value added. Their facing sanctions despite outstanding academic performance should not come as a surprise. As one longtime Arkansas superintendent reports, very few of the roughly 170 items audited by state education authorities to create the three sanction categories even tangentially relate to student learning. Notably, charter schools are held accountable for most of these items (Compton & Maranto, 2009). As Chubb and Moe (1990) and Merrifield (2001) show, education market systems in which parental decisions determine resource allocations, ideally with autonomous schools able to determine their own prices and policies, are probably more likely to produce individually and socially desirable outcomes.

Accordingly, we will test:

H2. The relationship between regulatory sanctions and school success will be curvilinear, with the highest and lowest value added schools more likely to receive ADE sanctions. 

The simultaneous bluntness and complexity of any regulatory regime suggest that in addition to high and low performers, small schools that lack an adequate administrative apparatus will have the most difficulty complying with regulations. Charter schools, which are typically small, receive less funding than traditional public schools, and are oriented toward parents rather than regulatory authorities (Finn, Manno, & Vanourek, 2000; Maranto et al., 2001; Batdorff, Maloney, May, Speakman, Wolf, & Cheng, 2014), may have particular difficulties complying with administrative accountability. The Arkansas charter sector, which was more successful academically than traditional public schools in the 2006-08 period though somewhat less so in 2010-12 using school level value added (CREDO, 2013), should suffer higher levels of academic sanction—if the state regulatory regime reflects values other than student learning. We will thus test:

H3. Charter schools will be more likely to receive regulatory sanctions than will traditional public schools. 

Below, we test these hypotheses.

Methods

We test these hypotheses using school level value added data on student achievement (VALUE ADDED) for the 2009-11 period, calculated as follows from a parsimonious specification of the value-added model. Our model takes advantage of multiple test score observations for individual students to control for student level time-invariant, unobservable characteristics. The model takes the following form:

Y_ikjt = ρ_k + γ_j + ε_ikjt

where Y is student i’s standardized scale score on a State Benchmark Exam in grade k at school j in year t. Variables ρ, γ, and ε represent the impact of grades, schools, and random error. In this specification, the γ terms are the estimates of individual school quality. The model includes two years of prior test score data, and combines gains in math and literacy to form a single measure. School level value added is summed across these combined gains and across students. A school level calculation of 0 indicates that students in that school learned (as measured on state standardized tests) the mean for Arkansas public schools (not individual students). Since some schools opened or closed in the three year time period or were too small for analyses in a given year, value added figures are calculated for only 1,006 of 1,075 Arkansas public schools operating in the 2010-11 school year. We visited 14 schools marked as high performing on this value added measure, and have found that they have achievement-oriented cultures; thus, the quantitative evaluations were reinforced by qualitative fieldwork (Maranto, 2016).
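As a rough illustration of the centering logic behind a school level value added measure (this is not the authors’ actual estimation code; the data and the single-prior-score gain specification are hypothetical simplifications), the calculation might be sketched as:

```python
import numpy as np

def school_value_added(score, prior, school):
    """Toy sketch of a school level value added measure: the mean student
    gain in each school, centered on the statewide mean so that 0 indicates
    the average school. Simplifying assumptions: one prior score per student
    and no grade effects."""
    score, prior, school = map(np.asarray, (score, prior, school))
    gain = score - prior                      # student-level growth
    statewide_mean = gain.mean()              # centering constant
    return {s: float(gain[school == s].mean() - statewide_mean)
            for s in np.unique(school)}

# Hypothetical standardized scores for six students in two schools.
va = school_value_added(score=[1.2, 0.8, 1.0, 0.1, -0.1, 0.0],
                        prior=[0.5, 0.5, 0.5, 0.2, 0.2, 0.2],
                        school=["A", "A", "A", "B", "B", "B"])
```

By construction, value added figures sum to zero across schools of equal size, matching the paper’s convention that 0 denotes the Arkansas mean.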

Second, we use a simple dummy variable for charter schools (CHARTER), coded 0 if a school is not a charter (n=1,043) and 1 if it is (n=26). Here we include 11 district-run charter schools, which in Arkansas have obtained substantial waivers from ADE regulations and operate as schools of choice; hence, they resemble charters for our purposes. Third, we use the percentage of free and reduced lunch students (FRL), since this is widely seen as a variable affecting the difficulty of teaching, and of complying with administrative sanctions (Miller, Kerr, & Ritter, 2008).

Finally, our key dependent variable is calculated using the ADE school sanctions lists from the 2010-11 school year, to conduct cross sectional analyses. These were quickly removed from the ADE web site, but were provided to us by the Arkansas Democrat-Gazette. For the 2010-11 school year, 179 of 930 Arkansas public schools in the data were sanctioned for licensure issues, 99 for financial variances (typically having insufficient reserve funds), and an additional nine for curricular issues. Most sanctions are symbolic, somewhat affecting school reputations and in turn, principal and superintendent career paths. To calculate the variable SANCTION, fully accredited schools (schools with no sanctions) are coded as 0 (n=806), those with a single sanction as 1 (n=239), and those with two sanctions as 2 (n=24). (In theory, a school could have three sanctions, but none did.) The three sanction types do not strongly correlate. Curricular and licensure sanctions correlate at .18 (p=.06; n=1,059), but neither correlates at above .01 with sanctions for financial variances.
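The coding of SANCTION described above, and the weak pairwise correlations among sanction types, can be mimicked with a small sketch (the 0/1 indicator data here are invented for illustration, not the ADE records):

```python
import numpy as np

# Invented indicators for eight schools: 1 = received that sanction type.
licensure  = np.array([1, 0, 1, 0, 0, 1, 0, 0])
curricular = np.array([1, 0, 0, 0, 0, 1, 0, 0])
financial  = np.array([0, 1, 0, 0, 1, 0, 0, 0])

# SANCTION counts how many sanction types a school received. (Three is
# possible in principle, but no Arkansas school reached it.)
sanction = licensure + curricular + financial

# Pairwise Pearson correlation between two sanction types.
r_lic_cur = np.corrcoef(licensure, curricular)[0, 1]
```

With real data one would also check the p-value for each correlation, as the paper does, before treating the sanction types as largely independent.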

Results

Results tend to confirm H1, but not H2 or H3. The relationships between sanctions and school effectiveness, as measured by value added, are weak. Each of the three sanctions (licensure, curricular, and financial) correlates with 2009-11 school level value added at -.05 to -.07 (meaning that higher value added schools receive fewer sanctions), which in the large data set reaches or approaches statistical significance: for financial variances at p=.09, and for curricular and licensure sanctions at p=.04 (each). Overall, SANCTION correlates with VALUE ADDED at -.09, statistically significant at .01 in the large data set (n=930). Accordingly, there is a relationship between measured school academic effectiveness and the likelihood that a school receives sanctions, but it is a very weak relationship, tending to confirm H1.

Relationships are direct rather than curvilinear, contradicting H2. Of the middle 90% of schools on VALUE ADDED, 74.4% receive no sanctions and 2.5% receive two sanctions, no different from schools on the tail ends of the value added distribution (Pearson chi-square=3.823, d.f.=2, p=.148). Of the bottom 5% VALUE ADDED schools (n=46), 80.4% receive no sanctions, statistically no different from the 86.7% of top 5% value added schools (n=45) which are unsanctioned. (Only 2% of each group received two sanctions.)

Finally, only one of 26 charter schools (3.8%) received a single sanction, while among 1,043 other public schools, 238 (22.8%) received a single sanction and 24 (2.3%) received two sanctions (Pearson chi-square=6.205, d.f.=2, p=.045); thus charters are actually less likely to be sanctioned, rejecting H3. This may reflect the relative difficulty of obtaining a charter in Arkansas. Alternatively, it may reflect the fact that charter schools often have waivers from certain regulations, and therefore may be less likely to receive sanctions. Indeed, in fieldwork charter operators report receiving more scrutiny than traditional public schools. Interestingly, charters do slightly worse than traditional public schools academically, with a correlation of -.08 (p=.016) between VALUE ADDED and CHARTER.
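The Pearson chi-square tests of independence reported above can be reproduced in outline. A minimal implementation, applied to an invented contingency table of sanction counts (0, 1, or 2) by school group, might look like:

```python
import numpy as np

def pearson_chi_square(table):
    """Pearson chi-square statistic for a contingency table
    (rows = school groups, columns = number of sanctions: 0, 1, 2)."""
    table = np.asarray(table, dtype=float)
    row = table.sum(axis=1, keepdims=True)
    col = table.sum(axis=0, keepdims=True)
    expected = row @ col / table.sum()   # expected counts under independence
    return float(((table - expected) ** 2 / expected).sum())

# Hypothetical counts, NOT the paper's data: two school groups by
# sanction count (0, 1, 2 sanctions).
stat = pearson_chi_square([[670, 200, 30],
                           [70, 20, 10]])
```

For a 2x3 table the degrees of freedom are (2-1)(3-1) = 2, so the statistic would be compared against the chi-square distribution with d.f.=2, as in the analyses above.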

Considering the possibility that regulatory authorities focus on the level of student achievement rather than value added, we repeat analyses using levels of student achievement calculated in the same manner as VALUE ADDED, with school level means summed across combined math and literacy scores for the 2010-11 school year. Again, a school level calculation of 0 indicates that students achieved (as measured on state standardized tests) the mean for Arkansas public schools (not individual students); we designate this measure ACHIEVE. Unlike value added measures, this of course does not take into account a student’s prior performance, and thus may offer a less valid measure of school level success. Still, VALUE ADDED and ACHIEVE correlate at .45 (p=.001; n=1,069). Notably, FRL percentage correlates with ACHIEVE at -.53 (p=.000), but only at -.12 (p=.000) with VALUE ADDED. We find only weak correlations between SANCTION and ACHIEVE (-.06, p=.07), indeed somewhat weaker than for SANCTION and VALUE ADDED.

TABLE 1 HERE

In Table 1, we employ Ordinal Logistic Regression (OLR) to predict whether in 2011 charter schools and low value added schools are more likely to suffer administrative sanctions, controlling for free and reduced lunch percentage. (FRL correlates with SANCTION at .08, p=.02; and with CHARTER at -.14, p=.000). Here, we find that low performing schools were very slightly but significantly more likely to be sanctioned (p=.000); schools with higher FRL percentages were slightly but not significantly more likely to be sanctioned, and charter schools were slightly but not significantly less likely to be sanctioned, again tending to disprove H3.

Further, the rather weak (though in a large sample, statistically significant) relationship between the percentage of schools sanctioned and academic outcomes again tends to confirm H1.

We must stress the tentativeness of findings in this exploratory study. The statistical relationships found here are weak. One can raise reasonable questions about the external validity of findings from a single American state. Further, given that accountability regimes, including testing regimens and administrative sanction items, may change too often to affect bureaucratic organizations, it may be premature to expect any findings at all.

Implications: Can Markets Do Better?

Value added measures of academic achievement have been touted as techniques to determine what works in schooling and to drive gradual improvement (Winters, 2012; Maranto & McShane, 2012), but this paper may be the first to test whether value added measures of student learning correlate with administrative accountability measures of school effectiveness. Accordingly, despite the caveats noted above, this work has value as an exploratory study of a question central to recent American education reform (Maranto & McShane, 2012; Hess & Eden, 2017): Do regulatory sanctions embarrass and perhaps even punish school officials for poor academic results rather than merely faulty administrative compliance matters having little to do with teaching children? Findings indicate that the ADE regulatory regime only weakly reflects academic performance defined as school level value added measures of student learning. On the other hand, there is no evidence that ADE sanctions are more apt to target higher performing schools; indeed, they are very slightly less apt to do so. Further, there is no evidence that ADE sanctions disproportionately target charter schools; indeed charters are less likely to face sanctions, suggesting that regulations have not been used to disadvantage charters. This has implications for the school choice versus administrative regulation debate, a debate of both theoretical and applied importance (Vaughn & Witko, 2013).

Findings indicate that ADE’s regulatory regime does little to embarrass poor academic performance or to highlight good performance. This may indeed indicate the limits of centralized accountability mechanisms to enforce and establish school quality: even a well-functioning centralized administration may have limited incentive and ability to improve school functioning. In contrast, either public sector (Hill et al., 1997; Ouchi, 2009; Smarick, 2012) or private sector markets may send signals through parental choice, with policy-makers then incentivized to increase the supply of options proving popular with parents, who presumably have both knowledge of their children and the strong desire to see their children succeed. Further, private sector markets may have certain informational advantages in that producers can set prices different from those enforced by state regulators, leading to greater innovation and productivity over the long term (Merrifield, 2001). Yet even under the Every Student Succeeds Act, school accountability regimes are not going away (Hess & Eden, 2017), making it important to conduct further research on those regimes. Perhaps over the long run market and administrative accountability can work in combination to improve education, though the implications here are that the latter suffers enormous constraints.

References

Anechiarico, F. & J.B. Jacobs. (1996). The Pursuit of Absolute Integrity. Chicago: University of Chicago Press.

Barzelay, M. with B.J. Armijani. (1993). Breaking Through Bureaucracy. Berkeley: University of California Press.

Batdorff, M., Maloney, L., May, J., Speakman, S., Wolf, P., & Cheng, A. (2014). Charter School Funding: Inequity Expands. Fayetteville: University of Arkansas Department of Education Reform at http://www.uaedreform.org/research-reports/.

Behn, R. D. (2001). Rethinking Democratic Accountability. Washington: Brookings Institution.

Brandl, J.E. (1998). Money and Good Intentions are Not Enough. Washington: Brookings Institution.

Bozeman, B. & Feeney, M. K. (2011). Rules and Red Tape. Armonk: M.E. Sharpe.

Compton, G. & Maranto, R. (2009, December 27). Three ideas to make Arkansas public schools the best. Arkansas Democrat-Gazette, p. H1. http://www.uark.edu/ua/der/People/Maranto/20090621_OpEd.pdf

Chubb, J. E. & Moe, T. M. (1990). Politics, Markets, and America’s Schools. Washington: Brookings Institution.

Coulson, A. J. (1999). Market Education: The Unknown History. New Brunswick: Transaction.

CREDO. (2013). National Charter School Study. Stanford: Center for Research on Education Outcomes.  http://credo.stanford.edu/documents/NCSS%202013%20Final%20Draft.pdf

DiIulio, J. J., Jr. (Ed.). (1994). Deregulating the Public Sector. Washington: Brookings Institution.

Downs, A. (1967). Inside Bureaucracy. Boston: Little, Brown.

Finn, C. E., Jr., Manno, B. V., & Vanourek, G. (2000). Charter Schools in Action. Princeton: Princeton University Press.

Friedman, M. (1962). Capitalism and Freedom. Chicago: University of Chicago Press.

Glass, G. V. (2008). Fertilizers, Pills, and Magnetic Strips. Charlotte: Information Age Publishing.

Glenn, C. L. & De Groof, J. (Eds.). (2012). Balancing Freedom, Autonomy and Accountability in Education (Volume 2). Oisterwijk, Netherlands: Wolf Legal Publishers.

Greene, J. P. (2005). Education Myths. Lanham, MD: Rowman and Littlefield.

Henig, J. R. (1994). School Choice: Limits of the Market Metaphor. Princeton: Princeton University Press.

Hess, F. M. (2013). Cagebusting Leadership. Cambridge: Harvard Education Press.

Hess, F. M. (2010). Education Unbound. Arlington: ASTD.

Hess, F. M. (1999). Spinning Wheels. Washington: Brookings Institution.

Hess, F.M. & M. Eden, editors. (2017). The Every Student Succeeds Act. Cambridge: Harvard Education Press.

Hill, P., Pierce, L. C., & Guthrie, J. W. (1997). Reinventing Public Education. Chicago: University of Chicago Press.

Hirsch, E. D. (2009). The Making of Americans. New Haven: Yale University Press.

Hirsch, E. D. (1996). The Schools We Need and Why We Don’t Have Them. New York: Doubleday.

Horn, M. J. (1995). The Political Economy of Public Administration. London: Cambridge University Press.

Howard, P. K. (1994). The Death of Common Sense. New York: Random House.

Hult, K. M. & Walcott, C. (1990). Governing Public Organizations: Politics, Structures, and Institutional Design. Pacific Grove: Brooks/Cole.

Ingersoll, R. M. (2003). Who Controls Teachers’ Work? Cambridge: Harvard University Press.

Kaufman, H. (1977). Red Tape: Its Origins, Uses, and Abuses. Washington: Brookings Institution.

Knott, J. H. & G.J. Miller. (1987). Reforming Bureaucracy: The politics of institutional choice. Englewood Cliffs: Prentice Hall.

Levenson, N. (2012). Smarter Budgets, Smarter Schools. Cambridge: Harvard Education Press.

Lindblom, C. E. & Cohen, D. (1975). Usable Knowledge. New Haven: Yale University Press.

Lowi, T. J. (1979). The End of Liberalism, 2nd edition. New York: Norton.

Maeroff, G. I. (2011). School Boards in America: A flawed exercise in Democracy. New York: Palgrave/Macmillan.

Maranto, R. (2010, August 9).  Put Learning First: Standards outdated. Arkansas Democrat-Gazette, p. 7B.

Maranto, R. (2016, July 18). Good leadership: It makes Rogers schools stand out. Arkansas Democrat Gazette, p. 9B.

Maranto, R., & McShane, M. Q. (2012). President Obama and Education Reform: The Personal and the Political. New York: Palgrave/Macmillan.

Maranto, R., Milliman, S. R., Hess, F. & Gresham, A.W. (Eds.). (2001). School Choice in the Real World: Lessons from Arizona Charter Schools. Boulder: Westview.

Maranto, R. & Shuls, J.V. (2011, November). “Lessons from KIPP Delta,” Phi Delta Kappan, 93, 52-56.

McCluskey, N. P. (2007). Feds in the Classroom. Lanham: Rowman and Littlefield.

Merrifield, J. (2001). The School Choice Wars. Lanham: Scarecrow Education.

Merton, R. (1940). Bureaucratic Structure and Personality. Social Forces, 18(3), 560-68.

Miller, W. H., Kerr, B. & Ritter, G. (2008). “School Performance Measurement: Politics and Equity,” The American Review of Public Administration, 38(1), 100-117.

Moe, T. M. (2011). Special Interest: Teachers Unions and America’s Public Schools. Washington, DC: Brookings Institution.

Niskanen, W. A. (1971). Bureaucracy and Representative Government. Chicago: Aldine-Atherton.

Osborne, D. & Gaebler, T. (1992). Reinventing Government. Boston: Addison-Wesley.

Ouchi, W.G. (2009). The Secret of TSL. New York: Simon and Schuster.

Payne, C. M. (2008). So Much Reform, So Little Change. Cambridge: Harvard Education Press.

Ravitch, Diane. (2003). The Language Police: How Pressure Groups Restrict What Students Learn. New York: Knopf.

Ravitch, D. (2010). The Death and Life of the Great American School System. New York: Basic Books.

Shuls, J. & Maranto, R. (2013). Show Them the Mission: A Comparison of Materialistic and Idealistic Teacher Recruitment Incentives in High Need Communities. Social Science Quarterly, 95(1), 239-252.

Shuls, J.V. & Trivitt, J.R. (2015). Teacher Effectiveness: An Analysis of Licensure Screens. Educational Policy, 29(4), 645-675.

Smarick, A. (2012). The Urban School System of the Future. Lanham: Rowman and Littlefield Education.

Thernstrom, A. & Thernstrom, S. (2003). No Excuses: Closing the Racial Gap in Learning. New York: Simon and Schuster.

Vaughn, M. G. & Witko, C. (2013). Does the amount of school choice matter for student engagement? Social Science Journal, 50(1), 23-33.

Warwick, D. P. (1975). A Theory of Public Bureaucracy. Cambridge: Harvard University Press.

Williams, J. (2005). Cheating Our Kids. New York: Palgrave/Macmillan.

Winters, M. A. (2012). Teachers Matter.  Lanham: Rowman and Littlefield.

 

Table One. Ordinal Logistic Regression summary with SANCTION as dependent variable.

                            Estimate  Std. Error     Wald   df   Sig.   95% CI Lower   95% CI Upper
Threshold [SANCTION = 0]       1.402        .232   36.482    1   .000           .947          1.857
          [SANCTION = 1]       3.981        .306  169.095    1   .000          3.381          4.581
Location  VALUE ADDED         -1.826        .688    7.050    1   .008         -3.175          -.478
          FRL                   .582        .367    2.513    1   .113          -.138          1.302
          CHARTER             -1.305       1.061    1.513    1   .219         -3.385           .775
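Table 1’s ordered-logit estimates can be turned into predicted probabilities by hand. Under the proportional-odds specification, P(SANCTION ≤ k) = logistic(threshold_k − x′β). The sketch below plugs in the table’s coefficients for a hypothetical non-charter school with all predictors set to zero (VALUE ADDED = 0, i.e. statewide-average growth, and FRL = 0); the school itself is invented for illustration:

```python
import math

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

def ordered_logit_probs(x_value_added, x_frl, x_charter):
    """Predicted P(SANCTION = 0, 1, 2) from Table 1's ordered-logit
    estimates, using P(SANCTION <= k) = logistic(threshold_k - x'beta)."""
    thresholds = [1.402, 3.981]                  # Table 1 threshold estimates
    xb = (-1.826 * x_value_added                 # VALUE ADDED coefficient
          + 0.582 * x_frl                        # FRL coefficient
          - 1.305 * x_charter)                   # CHARTER coefficient
    cum = [logistic(t - xb) for t in thresholds] # P(<=0), P(<=1)
    return [cum[0], cum[1] - cum[0], 1.0 - cum[1]]

# Hypothetical non-charter school at VALUE ADDED = 0 with FRL = 0.
p0, p1, p2 = ordered_logit_probs(0.0, 0.0, 0.0)
```

With all predictors at zero the model implies roughly an 80% chance of full accreditation, broadly in line with the 806 of 1,069 schools (about 75%) observed with no sanctions.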

 
