Sunday, December 25, 2016

Merry Christmas and a Happy New Year!

May war and human persecution be replaced by peace and respect for all peoples.

Thursday, December 22, 2016

Our Schools Are Mostly Teaching the Wrong Stuff!

Here is a video that demonstrates just how helpless and ill-equipped for real life our graduates really are after all the stupid reforms we have imposed on our public schools.

Several years after trying to jam the Common Core stuff down all students' throats, we find them averaging a pitifully low score of 38% on their English and math tests, and they still can't balance their checkbooks or calculate the devastating effect credit card interest has on their standard of living. But to add insult to injury, they can't even cook their families a simple healthy meal. (See the video.) So now about two-thirds of our graduates are overweight and prone to diseases like diabetes. And what was the answer of our school reformers? Cut out physical education!
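For readers who want to see the arithmetic, here is a minimal sketch in Python, using entirely hypothetical numbers, of the compounding cost of carrying a credit card balance while making only the minimum payments:

```python
# A minimal sketch with hypothetical numbers: the long-term cost of
# carrying a credit card balance while making only minimum payments.
balance = 3000.00        # hypothetical starting balance
annual_rate = 0.22       # hypothetical 22% APR
months = 0
total_interest = 0.0

while balance > 0 and months < 600:  # 600-month safety cap
    interest = balance * annual_rate / 12
    total_interest += interest
    balance += interest
    payment = min(balance, max(balance * 0.03, 25.00))  # 3% minimum, $25 floor
    balance -= payment
    months += 1

print(f"Paid off in {months} months ({months / 12:.1f} years)")
print(f"Total interest paid: ${total_interest:,.2f}")
```

On these hypothetical terms, the payoff takes well over a decade and the interest paid adds up to a large share of the original balance. That is exactly the kind of real-life calculation our graduates should be able to do.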

About ten years ago our guidance counselors answered a survey about changes to the curriculum they would recommend. A huge majority said we needed more vocational career choices in our schools for students who did not plan to go to college. So what did BESE do? They forced more students to take the college prep curriculum, based on the assumption that more strong medicine would do the trick. The result? We now have a smaller percentage of our students staying in college than we had then. Many of our college prep courses have been watered down so we could increase our graduation rate. So both the potential college prep and the technical prep students have been shortchanged.

When are we going to fire the phony educators like Betsy DeVos (Trump's nominee for U.S. Secretary of Education) and John White from their positions of authority over education and put professional educators in charge who believe in preparing our kids for the real world?

Saturday, December 17, 2016

Can High Poverty Schools Beat the Odds?

Part II: An Analysis of School Performance Related to Family Poverty.
 

Unintended negative consequences of Louisiana’s school grading system

Does the grading of schools based on student test scores produce higher student performance?

My analysis reveals that the evidence builds with each successive year that such pressure produces more negative consequences than positive ones.

I suggest readers spend a few minutes reading the two recent articles by Danielle Dreilinger at nola.com here and here, which examine the rise of rampant cheating on state tests caused by the intense pressure to produce higher test scores. In addition, here is another article about alleged cheating at SciTech Academy. This is exactly the kind of cheating that sent educators to jail in Atlanta, Georgia.

Time after time, it has been found that charter schools in the New Orleans Recovery District have used various forms of cheating and test question teaching to artificially raise their school performance scores. Time after time school performance scores in the RSD have dropped like a rock the year after instances of cheating are exposed. The graduation rate of the RSD dropped by almost ten percentage points when the LDOE clamped down on the misreporting of dropouts as transfers.

Here is a study by Stanford University that shows that grading and closing schools in New Orleans neglects and underserves the students that are most at risk.

Yes, it is very clear that one major outcome of grading schools and evaluating educators using student test scores is cheating! The articles above show that the cheating includes educators changing student test answers and educators making copies of test questions so that the answers can be taught to students before they take the state tests.

The problem of cheating is compounded by the long-standing policy of the Department of Education that allows school districts or charter groups to investigate themselves when allegations of cheating arise. Such a policy probably makes it more hazardous for the whistleblowers who report cheating than for the perpetrators of the cheating.

To better understand the pressure that produces cheating, let’s look more closely at the relation between school ratings and student poverty.

There are only 10 public school districts in Louisiana with less than 60% of their students qualifying for free or reduced-price lunch. (See the spreadsheet in Part I of this investigation.) Free or Reduced Lunch (FRL) has become a standard measure of the degree of poverty in public schools nationwide. Louisiana is one of the highest poverty states in the nation. All public school systems in Louisiana except one have more than 50% of their students on FRL. That one exception is the Zachary Community School system, which has 45% of its students on FRL. It just so happens that the Zachary Community School system is the highest performing school district in Louisiana. It is rated as an "A" school district.

Out of the 10 school districts with fewer than 60% FRL students, 9 are also rated as “A” school systems. Is this a coincidence? I don’t think so.

One of the highest poverty districts, which seems to have cracked the code for producing higher test scores, just happens to be the district described in one of the Dreilinger articles linked above. The state inspector general has investigated that district for alleged cheating. The IG determined that cheating was a serious problem in the district ("extensive violations of test security policy") but apparently took no corrective action. The LDOE has allowed the highly questionable test results and the improved district grade to stand. Except for the Dreilinger article on nola.com, a news outlet based hundreds of miles away from the district implicated, no local news media has even bothered to inform the public in the district of the alleged cheating.

The State Department of Education has also made it easier for school systems to appear to have improved performance by lowering the cut scores on state tests. In addition, a state policy of curving school performance scores over the last two years has kept scores artificially high. As a result, no school system was assigned an "F" grade in 2016. At the same time, the ranking of Louisiana compared to other states on the NAEP tests has dropped even lower.

It is obvious that rating and grading schools using primarily student test scores reveals mostly the level of poverty of the students attending each school. Does such a rating system really tell us something about the quality of instruction? What do you think?

There is also strong evidence that the grading of schools based on student test scores results in neglect of students with disabilities in some schools because such students have little effect on improving school performance scores. One administrator was quoted advising teachers not to waste their time on such students even though the school had contrived to receive extra funding for more students with disabilities.

I must argue forcefully that the grading of high poverty schools places an unfair stigma or assumption of blame on the teachers and administrators of such schools. The general public automatically assumes that students in so-called "failing" schools are not getting good instruction and that students in "A" schools are getting the best instruction. Surprisingly, Herb Bassett has reviewed data indicating mixed results, or at best very slight improvement, for students transferring to schools with higher letter grades.

Another unintended consequence of the grading of schools using student test scores frustrates the mission of schools designed to address the needs of handicapped or troubled students. The schools for the deaf and visually impaired and a charter school for dyslexia are all rated F by our system of rating schools. This is outrageous considering the valuable services provided by these schools. In addition, practically all the alternative schools addressing the needs of suspended students and potential dropouts are rated F. The rating system has no relation to the purpose of those schools and it serves only to smear the reputations of their highly dedicated educators.

I believe that school grades tell us almost nothing about the quality of instruction. Poverty factors seem to be the dominant force in determining a school’s grade. So why do we still insist on stigmatizing the teachers and administrators that serve students struggling with the negative effects of poverty?

I happen to live in the Zachary Community School system. All my children and most of my grandchildren have attended the Zachary system, which continues to receive the highest ratings in the state. I have firsthand knowledge that it is indeed an excellent school system!

I personally know many of the teachers and administrators in the Zachary system and can attest to the fact that they are superior educators. We are lucky to have them in our community.

But another negative unintended consequence of the school rating system in Louisiana is that it makes it easier for top systems like Zachary to continue to attract the very best educators from the systems that are rated near the bottom. What possible benefit is there for a top educator to go to, or remain in, one of the "D" rated systems? So the rating system automatically drives top educators away from the students who need them most. This is the ultimate insult to our students and educators caused by the school grading system.



Thursday, December 15, 2016

The Secret Behind the Best School Performance Scores

Part I: A look at the data relating school performance scores and school letter grades to the rate of family poverty in Louisiana public schools

It took three successful public records lawsuits against the Louisiana Department of Education, plus the successful defense of a fourth lawsuit filed against me by the LDOE, to get the accurate data that can be used to compare school performance scores to family poverty in each public school district. Now any person interested in the possible effect of poverty on school performance can view the results of this comparison.

The following chart was created by my son Donny, who is a graduate of the LSU School of Business and holds an MBA degree from Mississippi State. Donny majored in Quantitative Business Analysis and does statistical analysis for one of the largest companies in Baton Rouge. It turns out his skills can be applied to analyzing our Louisiana public education data.

The graph below compares the poverty of families of children enrolled in each school district in 2016 to average School Performance Scores (SPS). Poverty is determined by the percentage of families qualifying for free or reduced-price lunch or a comparable measure of economic disadvantage. The correlation coefficient comparing school performance to poverty is -0.826. The negative sign indicates an inverse correlation: as the percentage of economically disadvantaged families rises, school performance falls. This relationship is statistically very strong.
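For readers who want to check the math, here is a minimal sketch in Python of how a correlation coefficient like the -0.826 reported above is computed. The figures here are made up for illustration; the real numbers are in the chart below.

```python
# A minimal sketch (made-up figures, not the actual district data) of
# how the poverty-vs-SPS correlation reported above is computed.
from statistics import correlation  # Python 3.10+

frl_pct = [45, 52, 61, 68, 74, 81, 88, 93]    # % on free/reduced lunch, per district
sps     = [112, 103, 99, 96, 88, 86, 79, 71]  # average SPS, per district

r = correlation(frl_pct, sps)  # Pearson's r
print(f"r = {r:.3f}")          # strongly negative: higher poverty, lower SPS
```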


The Excel chart below is a ranking of all public school districts in Louisiana based on poverty. The ranking starts with the school systems serving the smallest percentage of students living in poverty. This ranking is the data that was used to produce the graph shown above. Notice that even though this is not a ranking by school performance scores, it ends up being almost the same. That's because of the strong correlation of family poverty, based on free or reduced-price lunch, with school performance. (Note: The gap in the chart is simply a page break.)
The school districts near the top of the chart are those with the smallest percentage of families struggling with poverty. Note that they are also the school districts with the top letter grades. These rankings demonstrate an uncanny relationship between school performance and poverty in Louisiana. A smaller percentage of families living in poverty is the secret behind the best school performance scores!

I believe this data calls into question Louisiana's policy of grading public schools without considering the negative effects of poverty on SPS and therefore school grades. Apparently all we are doing is grading our schools based on the poverty of their students. What is the purpose of a system that classifies community schools as failing just because they happen to serve a large percentage of students that are struggling with poverty?

Think about it. Is this a fair way of rating our schools?

Part II will look at specific cases of school performance and examine the unintended consequences of our obsession with rating and grading schools using test score results.

Michael Deshotels

Monday, November 28, 2016

Our Louisiana School Principal Evaluation System is Seriously Flawed

Note to readers: Recently I became concerned about Superintendent White’s proposal that the new calculation of a school’s performance score would have a 25% component based on the annual improvement of student test performance. I especially question such a system applied to all schools, particularly since A rated schools often have little room to improve. This concern only added to my fears that our new Louisiana school principal evaluation system may also be overly reliant on perpetual student test score increases.

With the above concerns in mind, I asked Herb Bassett, a Louisiana educator whom I regard as one of the best analysts of statistics-based rating systems, to study the new principal evaluation system now in operation in Louisiana and provide my readers with his insights as to the appropriateness of this model for principal evaluation.

Mr. Bassett’s conclusions described below are very worrisome, and lead me to believe that principals and district superintendents have been saddled with a poorly designed and extremely unfair system for principal evaluations. Please review Mr. Bassett’s analysis and also my commentary following the analysis.

Bassett’s Analysis of Guidelines for Principal Evaluations

The following is an analysis of SPS growth targets recommended by the Louisiana Department of Education as part of the latest principal evaluation system. The analysis explains how LDOE:

1.     imposed what amounts to a stack ranking system designed to fail both 25% of A school principals and 25% of D school principals on at least one component of their evaluations 
  Note from editor: Stack ranking is a type of employee evaluation system that ranks employees against one another. It is common in stack ranking to designate a certain percentage of employees each year as unsatisfactory, a certain percentage as satisfactory, and a certain percentage as high performers. This procedure amounts to a quota system for each level in the evaluation system. Even though the ranking affects only part of the principal's evaluation, it can make a huge difference in the final evaluation.
2.     overrode its own Achievement Level Descriptions for the majority of its target recommendations; the overrides had a downward influence on principal ratings, and LDOE did not clearly explain them in its Goal-Setting Toolkits, 
3.     used an incorrect method to establish its target recommendations. This resulted in unrealistically high targets for A school principals while allowing relatively lax targets for D and F school principals.

This year, the LDOE convinced the Accountability Commission and BESE to tie principals' evaluations directly to SPS growth. Bulletin 130 now states:

§305. Measures of Growth in Student Learning Targets
D. Principals and Administrators. A minimum of two student learning targets shall be identified for each administrator.
1. For principals, the LDE shall provide recommended targets to use in assessing the quality and attainment of both student learning targets, which will be based upon a review of “similar” schools. The LDE will annually publish the methodology for defining “similar” schools.
2. For principals, at least one learning target shall be based on overall school performance improvement in the current school year, as measured by the school performance score.

LDOE was left to decide how it would set the targets. Its 2016 overall SPS Improvement target recommendations would lead to the following (based on the recently released 2016 SPSs):

·      Over two-thirds of principals of A-rated high schools would get the lowest rating while only one principal of an F-rated high school would do so.

·      No A-rated combination school principal would make the highest rating while exactly half of the principals of F-rated combination schools would.

·      More than one-third of all principals would make the lowest rating.

These outcomes defy common sense. LDOE's recommended targets were unrealistic for A rated schools and comparatively lax for D and F schools. Why would we require an A school to improve its SPS more than twice as much as a D school to rate full attainment? These inverted expectations came from a questionable ranking system and from using incorrect methods to calculate those rankings.


Principals were encouraged to base a second goal on an individual component of the SPS. LDOE recommended similarly flawed targets for those as well.

I certainly hope that many principals and their supervisors chose to override the LDOE recommendations when they set their goals. This linked spreadsheet shows how the flawed targets would negatively impact principal evaluations sorted by each school configuration and letter grade.

1) LDOE essentially applied stack-ranking to each letter-grade category of schools to achieve the same 25% "insufficient" quotas from A-school principals and D-school principals.

LDOE published targets in its Principal Goal Setting Toolkits for K-8 Schools, Combination Schools, and High Schools. "Similar schools" were defined by school type - Elementary, Combination, and High Schools - and further subdivided by school letter grade.

LDOE's achievement level descriptions indicate that targets were set by the prior year SPS growth of the schools at the 25th, 50th, and 75th percentile within each "similar school" category. My analysis finds that LDOE used a system that required more growth from A schools than D or F schools to reach "full attainment".

This amounts to a stack ranking system with arbitrary quotas of 25% insufficient, 25% partial, 25% full, and 25% exceeds attainment within each school letter grade category.
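To make the mechanics concrete, here is a minimal sketch in Python of how a percentile-based quota of this kind works. It is an illustration of the general technique with hypothetical growth figures, not LDOE's actual computation:

```python
# A sketch of a percentile-based quota system (an illustration of the
# general technique, not LDOE's actual computation). Cut points come
# from last year's growth at the 25th/50th/75th percentiles within a
# letter-grade group of "similar" schools.
import numpy as np

def set_targets(prior_year_growth):
    # Last year's growth distribution becomes this year's cut points.
    return np.percentile(prior_year_growth, [25, 50, 75])

def rate(growth, targets):
    partial, full, exceeds = targets
    if growth < partial:
        return "insufficient"
    if growth < full:
        return "partial"
    if growth < exceeds:
        return "full"
    return "exceeds"

# hypothetical prior-year SPS growth for one group of "similar" schools
prior = [-2.1, -0.5, 0.3, 1.0, 1.8, 2.5, 3.4, 5.0]
targets = set_targets(prior)
print([rate(g, targets) for g in (-1.0, 0.5, 2.0, 4.0)])
# -> ['insufficient', 'partial', 'full', 'exceeds']
```

If this year's growth distribution resembles last year's, roughly a quarter of principals land in each rating by construction, regardless of how well the group as a whole actually performed.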

By requiring principals to set an Overall SPS Improvement target, LDOE's system effectively made A school principals compete against A school principals, B school principals compete against B school principals, and so on. In setting its recommended targets, LDOE ignored the thorny question: "Are A school principals as good or bad as D and F school principals on the whole?" If not, why use the same quota for each group?

The data presented in the figure below - compiled from the Goal Setting Toolkits - show that A schools significantly outperform D and F schools at moving struggling students past their VAM expectations. Why, then, should we accept a system designed to rate principals of A schools "insufficient" as often as principals of D and F schools? Such a formula for rating principals seems contrary to the entire theory of rating schools and their staffs using SPS and the letter grading system. The logic also runs completely contrary to some of the assumptions made by the U.S. Dept. of Education in recent years, which resulted in the firing of some principals of low performing schools as a form of restructuring designed to produce "school turnaround".

2) LDOE overrode its own Achievement Level Descriptions in a manner that would produce higher-than-quota percentages of "insufficient" and "partial" attainment.

LDOE's Achievement Level Descriptions indicate that the "partial attainment" target was set by the growth of the school at the 25th percentile in the prior year. I found, however, that if that school had negative growth, LDOE set the minimum target for "partial attainment" to 0.1 even though that value corresponded to a higher percentile. Presumably LDOE interpreted "school performance improvement" to exclude negative growth. The override applied to the vast majority of the targets. LDOE marked such data with "^" but gave no explanation that I could find in the Toolkits.

Because of this override, the principal of any school showing negative growth would automatically rate "insufficient" and overall, that would result in more than 25% "insufficient" ratings.

Additionally, when that 25th percentile override applied, LDOE also raised the "full attainment" target to a value higher than the growth of the 50th percentile school even if that school showed positive growth.

Thus, most of the recommended targets for partial attainment and full attainment were actually set from higher percentile ranks than what the Achievement Level Descriptions stated.
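The override described above can be sketched the same way. The 0.1 floor is stated in the Toolkits' markings; the amount by which the "full attainment" target was raised is a placeholder here, since LDOE did not publish the exact adjustment:

```python
# A sketch of the inferred override (the 0.1 floor is from the
# Toolkits; "bump" is a placeholder, since LDOE did not publish how
# far it raised the "full" target above the 50th-percentile growth).
def apply_override(p25_growth, p50_growth, p75_growth, bump=0.1):
    partial, full, exceeds = p25_growth, p50_growth, p75_growth
    if p25_growth < 0:
        partial = 0.1                 # negative growth can never rate "partial"
        full = max(full, 0.1) + bump  # raised above the 50th-percentile growth
    return partial, full, exceeds

# hypothetical group where the 25th-percentile school lost ground
print(apply_override(-1.4, 0.8, 2.6))
# -> (0.1, 0.9, 2.6): any school with negative growth rates "insufficient"
```

The consequence is exactly what is described above: in any year when more than a quarter of a group's schools show negative growth, more than a quarter of that group's principals must rate "insufficient".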

There is an important issue that accompanies setting targets based on the previous year's growth. A year in which there is exceptionally strong overall growth dooms the next year to high "insufficient" rates because the new targets are based on an unsustainable rate of growth.

To its credit, LDOE did provide two years of Overall SPS Improvement data and three years of individual component data for reference. However, it provided targets clearly labeled "2015-2016 Recommended Targets: based on 2013-2014 and 2014-2015 results."

3) LDOE's recommended targets were based on flawed methodology.

I reconstructed LDOE's high school targets and found that LDOE sorted the prior year growth by schools' ending letter grades rather than their starting letter grades. Yet principals were asked to set goals based on their schools' starting letter grades. For an accurate "similar school" comparison, we must compare the schools starting this year with an A to schools that started last year with an A.

In calculating its recommended targets for the A schools, LDOE included schools that started with B's and C's the previous year but grew to an A, while simultaneously it excluded any school that started with an A but dropped to a B. By systematically adding in schools with excellent growth and excluding some schools with negative growth, the rankings were skewed and led to unrealistically high expectations being set for schools that were starting from an A.

In setting the F school targets, LDOE removed schools that rose from an F to a D or C. It computed the target based only on the schools that remained an F. This resulted in LDOE setting much lower recommended targets for F and D schools than for A schools.
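A small hypothetical example shows why the sorting matters. Grouping last year's growth by ending letter grade stacks the A pool with fast risers and drops the A schools that slipped, while grouping by starting letter grade makes the fair "similar schools" comparison:

```python
# A sketch (hypothetical schools, not real data) of sorting by ending
# vs. starting letter grade. The ending-grade sort inflates the A target.
import numpy as np

# (starting grade, ending grade, SPS growth) for a hypothetical year
schools = [("A", "A", 1.0), ("A", "A", 2.0), ("A", "B", -3.0),
           ("B", "A", 8.0), ("C", "A", 12.0), ("B", "B", 0.5)]

def median_growth(grade, idx):
    # idx 0 groups by starting grade; idx 1 groups by ending grade
    return np.median([rec[2] for rec in schools if rec[idx] == grade])

print("A target, sorted by ending grade:  ", median_growth("A", 1))  # 5.0
print("A target, sorted by starting grade:", median_growth("A", 0))  # 1.0
```

In this toy example the ending-grade sort demands five points of growth from A schools, while the fair starting-grade sort demands only one.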

The targets for the B, C, and D schools were skewed, but to a lesser extent because some of the movement between school letter grades was offsetting in these middle letter grade categories.

The chart below provides LDOE's recommended targets for high schools, my reconstruction of LDOE's targets, and the targets that would have resulted from sorting by starting letter grade rather than by ending letter grade. Note that the targets based on sorting by the starting letter grade fit the expected pattern of generally requiring more improvement from the lower letter grade schools and less improvement from the higher letter grade schools.

While I have not reconstructed every one of LDOE's target recommendations, it is clear that all of the recommended targets, both for Overall SPS Improvement and individual components and for all years' data were computed using the wrong sorting. Principals have been asked to base their targets on faulty data.

Consider the impact on Ruston High School. In 2016 it grew 7.0 points, from 100.3 (A) to 107.3 (A). The flawed recommended target would rate that growth only "partial," whereas under the version using the correctly sorted data, that seven-point growth would rate "exceeds."

I question the logic of trying to force the same quotas of each letter-grade school principals into the four performance categories, and I question the efficacy of attempting to force high rates of low evaluations. If we discourage and run off too many principals, where will we find their replacements?

I especially question the wisdom of using a system that most of the time assigns lower improvement targets to D and F schools than A and B schools. Such recommendations create pressure to widen the achievement gap rather than narrow it.

I urge principals and superintendents to recommend a better evaluation system for the future and consider what measures should be taken to rectify LDOE's mistakes in recommending targets for this year.

Finally, I urge BESE and LDOE to allow principals who feel that the recommended targets have caused them to receive unjust ratings to adjust their targets retroactively in consultation with their supervisors and superintendents.

Herb Bassett, Grayson, LA

My Observations and Commentary

I believe Mr. Bassett’s findings reveal an extremely ill-conceived and careless method for a critical part of our school principal evaluation system. If applied as designed by the Louisiana Department of Education, I believe it will result in unfair evaluations and a lowering of the morale of our dedicated school principals. Not only do I find the new guidelines statistically flawed, but I also question the motive behind such a negatively skewed rating system.

I believe this new principal evaluation system, along with the 25% school improvement component for rating and grading schools, reflects an obsession with perpetually raising student test scores. Teachers are already complaining that the incessant drive by our Louisiana Department of Education to simply raise student test scores each year is interfering with a healthy teaching and learning environment in our schools. Tests are important, but they should not embody the whole of the education experience for our children. This incessant pressure to raise test scores, no matter what, is what caused the cheating scandals we have seen in Atlanta, Washington, D.C., and El Paso.

Educators in Louisiana have already been through a very unfortunate and unfair experience when the state attempted to mandate that each year 10% of teachers of tested subjects would automatically be rated “ineffective” based on the flawed value added model. Now the state is trying to impose a failure rate for principals based on student test scores, with a higher resulting failure rate for principals of our highest rated schools. This is a negative evaluation quota system on steroids! Such a system is insanity and should be scrapped immediately!


Finally, my overarching concern about this whole matter is that I have come to believe that this latest scheme to mandate an extremely harsh evaluation system for principals is really designed to pressure principals to fire more teachers based on student test scores. I do not believe the state should be attempting to negatively evaluate and fire more principals and teachers in A rated schools, but I also do not believe it would be proper to arbitrarily fire more personnel in D and F rated schools! That's because the evidence is overwhelming that student test scores are much more heavily influenced by socio-economic factors than by the school and its personnel. Until our education reformers understand this fact, we will be forever doomed to scapegoating our professional educators for factors over which they have little control.

Mike Deshotels