Saturday, September 27, 2014

Educators Speak Up Now or Forever Live With VAM and The Dog and Pony Show Called COMPASS

Note to readers: Legislators got so many complaints from both teachers and administrators last year on the new teacher evaluation system and the use of VAM to evaluate teachers that they passed Act 240 by Rep. Hoffmann, which sets up a special subcommittee of the Accountability Commission with the assignment of recommending changes in the teacher and principal evaluation system. That subcommittee has representatives of teachers and administrators who are actually working in the trenches of education, selected by the legislature to work with the regular members of the Accountability Commission to recommend a revamp of the evaluation system to make it more effective. I will be getting the email addresses of all these members and their home parishes so that those educators who are most concerned about the possible flaws in the new evaluation system can relay their concerns to one or more of their representatives on this special subcommittee. If you don't contact them and tell them your recommendations for change, you may be doomed to suffer the flaws of this system for a long, long time. The subcommittee meets on November 7, 2014, to consider changes in the evaluation system. Be sure to contact them before that date.

Do you sometimes wonder why most of the education reform programs in recent years have been such boondoggles? Why is VAM so erratic and unfair to so many teachers? Why is so much of each school administrator's valuable time being tied up in watching teachers perform a dog and pony show that is not always related to real teaching? Why are some of the teachers in the highest performing school districts getting some of the lowest evaluations? Does the new teacher and principal evaluation system mandated by Act 54 of 2010 have any connection to reality in our public schools?

Here are my observations about what may have gone wrong:
  1. The whole COMPASS and VAM system was designed and implemented under the direction of a person who had absolutely no teaching or school administrator experience and no training whatsoever in education. Her name is Rayne Martin, and she was appointed to design and administer the new teacher evaluation program by former Superintendent Paul Pastorek. She now heads up one of the fake grassroots organizations (an "AstroTurf" organization) here in Louisiana called Stand for Children, which is almost totally financed by out-of-state entrepreneurs whose goal is to privatize public education and to allow out-of-state companies to make as much money as possible using our children and our tax dollars. What an insult to the professional educators of this state! Some of the concepts behind the new evaluation system came from a pretty good system of teacher observation developed by Charlotte Danielson (except that I think it is much too inflexible to be applied to all types of teaching). But Rayne Martin was certainly not qualified to design a practical system. That's why it is so unworkable. Danielson herself has disavowed any connection with this scheme and does not endorse it in any way.
  2. The new director of the evaluation system appointed by John White is almost as poorly qualified as Rayne Martin to run and improve it. Her name is Jessica Baghian, and her background is (a) five weeks of training as a TFA corps member with no formal education training, (b) a couple of years as a TFA teacher in a low-performing charter school in New Orleans, (c) no training as an administrator or teacher evaluator, and (d) no experience evaluating teachers herself. Ms. Baghian may be a very nice and hardworking person, but why is she in charge of telling 30-year veteran administrators how to evaluate teachers?
  3. The Value Added Model was designed for Louisiana by Dr. George Noell of LSU, who was apparently qualified, at least on paper, to design the technical components of a VAM system. But it is useful to note that he, too, has never had experience teaching or supervising at the elementary/secondary level. The problem is that his product, the VAM system, does not work consistently in identifying effective and ineffective teaching, and there are all sorts of built-in glitches that no one has been able to fix. For example, Dr. Noell admits that the reliability of VAM in producing a consistent ineffective rating based on student test scores is extremely low. (See this post) Dr. Noell was asked: “What is the probability that a teacher who gets an ineffective rating one year because of VAM data will get an ineffective rating the following year if he/she changes nothing in his/her teaching?” The answer he gave is 26.8%. So the VAM method of rating teachers as ineffective is going to be wrong 3 out of 4 times when you try to extend it to a second year. Why would any personnel management system want to use such an unreliable process? In addition to the general problem with VAM, no one can figure out how to use it to evaluate teachers of handicapped children and teachers of gifted children. It simply breaks down when applied to the two extremes. This is the same result that is being observed in all other states where VAM is being used to rate teachers. There is a simple conclusion here: VAM is totally unreliable and should never be used to evaluate teachers. Period.
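Taking the quoted 26.8% figure at face value, the arithmetic behind the "wrong 3 out of 4 times" claim is simple enough to check in a few lines. This is only a back-of-the-envelope illustration of the stated number, not a formal reliability analysis:

```python
# Dr. Noell's stated probability that a teacher rated "ineffective" by VAM
# one year gets the same rating the next year, with nothing changed:
p_repeat = 0.268

# Probability the "ineffective" label does NOT repeat the following year:
p_no_repeat = 1 - p_repeat

print(f"Chance the label repeats next year:        {p_repeat:.1%}")
print(f"Chance the label fails to repeat:          {p_no_repeat:.1%}")
# Roughly 73% -- about 3 times out of 4, the label does not hold up.
```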
So what can you as an education practitioner (either teacher or school administrator) whose whole career may be adversely affected by this ineffective, inefficient system do to fix it for the future?

Here is my opinion, for what it is worth: Legislators hate to admit they were wrong in passing any law. I don't think we can get them to simply repeal Act 54 of 2010, which created the VAM and the COMPASS systems, but we may be able to get them to adopt major changes if we contact and lobby the members of the revision subcommittee. That means if you are affected by this system you should be willing to send at least an email or more to one or more members of the subcommittee and to the original members of the Accountability Commission. All of them have a vote in the final recommendations. Today I will list for you the regular members of the Accountability Commission and then as soon as I get them I will include the contact information for the newly appointed members of the subcommittee.
Here are the recommendations I would make to the Commission subcommittee. You may want to recommend some of these or formulate your own:
  1. Suspend the use of VAM or at least reduce the percentage of VAM in the teacher and principal evaluation to a much lower percentage than the present 50%. But again, my preference would be to suspend VAM indefinitely.
  2. If VAM cannot be removed, at least remove the requirement that an ineffective rating on VAM overrules the principal's effective rating. This was not part of the Act 54 law but was added by rule.
  3. Remove the requirement that one bad evaluation nullifies teacher tenure.
  4. Simplify and allow modifications of COMPASS to fit different situations such as special education, PE, early childhood, remedial classes, advanced classes, etc. Stop trying to evaluate many different types of teaching by one inflexible system.
  5. Stop insisting that teachers perform a dog and pony show where the observer has to document each little component of the COMPASS rubric each time for the teacher to get a good evaluation.
  6. Stop insisting that the ultimate demonstration of good teaching is the demonstration of student directed learning. Sometimes the teacher just has to take the lead in laying out what is to be learned rather than to always expect to see maximum student participation.
  7. I guess what I am saying in the 3 points above, is that each major subject area or department in a school system should be able to get together and redesign the COMPASS system to more accurately fit what the folks in that area do every day. Why can't we trust the teachers and administrators to modify COMPASS to do what will work best in different situations?
Here is the contact information for the members of the Accountability Commission:

Brett Duncan, School Board Rep. Chair of Commission., Tangipahoa Parish; brett.duncan@tangischools.org
Jeanne Burns, Board of Regents Rep.; burnsj@REGENTS.LA.GOV
 Laurie Carlton, Curriculum Coordinator, Plaquemines Parish; lcarlton@ppsb.org
Stephanie Desselle, Community Rep. (CABL); desselle@cabl.org
Giselle Juneau, Pupil appraisal, St. John, gjuneau@stjohn.k12.la.us
Mickey Landry, Charter School Rep., mickey.landry@lafayetteacademyno.org
Anna Larriviere, Nonpublic school Rep.; Alarriviere@diolaf.org
Sandra McCalla, Principal, Caddo Parish; smccalla@caddo.k12.la.us
Debbie Meaux, LAE President, Debbie.Meaux@lae.org
Steve Monaghan, LFT President, StevemonaghanLFT@aol.com
Brigitte Nieland, Community Rep., LABI; brigitten@labi.org
Carol Price, High School Math teacher, Zachary HS; Carol.price@zacharyschools.org
Patrice Pujol, Superintendent of Schools, Ascension; Patrice.Pujol@apsb.org
Debbie Schum, Principal Rep., Principal's Association; debra.schum@laprincipals.org
Brandy Thomas, Parent Rep., Allen Parish; Theyoncelived@gmail.com
Judy Vail, LEA Administrator, Calcasieu Parish; judy.vail@cpsb.org
Lee Ann Wall, A+PEL Rep.; lwall@acadia.k12.la.us
Representative Frank Hoffmann, hoffmanf@legis.state.la.us
Senator Conrad Appel, appelc@legis.la.gov

Wednesday, September 24, 2014

Experienced Educator on Predatory Charters

Note to my readers: The following is a letter to the editor that was printed in shortened form here in The Lafayette Advertiser. This letter by an experienced school principal warns us about the destructive influence of a special group of charter schools that I refer to as predatory charters. All parents and educators should read and listen to the advice in this letter.

Letter to the editor:
If charter schools can’t deliver high scores, then what? The Daily Advertiser asked this question in its editorial on July 10. As a public educator with 31 years of experience, I have some thoughts.

As stated in the editorial, the students of the two for-profit charter companies that edged their way into Lafayette Parish fared about as well as their counterparts in traditional public schools. The response from the companies was that their students had taken more difficult assessments in 2013-2014. As the editorial pointed out, so did public school students. So where does this leave us?

There is no magic bullet in education that will automatically produce higher test scores. Research has shown over and over that open admission charters perform no better than traditional public schools. The charter schools that produce higher scores generally have selective admission policies that allow mostly higher-performing students to attend. This gives the false impression that charter schools provide better education than traditional public schools. This is what the out-of-state, for-profit charter companies and members of the privatization movement would have you believe. This is just one more farce of the education reform/privatization movement that has swept across the US and Louisiana.

What will begin to happen, and already has, is that as greater numbers of higher-performing students are “accepted” into charter schools, the charter scores will naturally increase. So would any school's. Charters will continue to drain money out of the public school system, as they already have, which will cause teachers, programs, and facility upgrades to be cut. As time goes by, the Lafayette Parish School System will come to look very similar to what has evolved in Recovery School District-New Orleans and RSD-Baton Rouge, the lowest performing public schools in the state. There will come a future day when citizens of Lafayette Parish will look back and ask themselves, “What happened to our public schools?” They need look no further than the day BESE overrode the local school board and allowed for-profit charters to come into the district.

If it weren’t so disheartening, I would laugh when I hear “education reform leaders” like John White, Chas Roemer, and Holly Boffy accuse Gov. Bobby Jindal of “playing politics” with education and the Common Core State Standards. Have they forgotten how and why they got where they are? The hypocrisy of it all is shameful. The whole education reform movement from the beginning was nothing more than politics and money. Ask the hundreds of teachers who were forced to wait outside of the state Capitol when the education reform movement was ramrodded through the Legislature without any input from real educators. I agree reform is needed and support some of the initiatives, but that is a discussion for another day.

For those who truly want to improve education, it is really quite simple. First, focus on early childhood education, particularly, for children of generational poverty. Next, increase the length of the school year and school day. Provide early emphasis on language, reading, and math skills with reduced class sizes and smaller schools. Of course, having effective teachers and administrators goes without saying. Finally and most importantly, address the issue of generational poverty, because it is the main reason for lack of educational achievement in this state and country. It is no coincidence that Louisiana’s poverty rate falls right in line with its academic achievement when compared to other states. Louisiana ranks second highest in poverty in the nation following Mississippi.

So back to the question, if charters can’t deliver high scores, then what? I expect a public school system that is on par with the schools of the Recovery School Districts in New Orleans and Baton Rouge, the lowest performing schools in the state. All the while our local tax dollars will be increasingly flowing out of state to for-profit charters. The students who really need the most help will be the biggest losers in all of this. I sincerely hope I’m wrong, but I fear I am not! By the way, private and parochial schools will also be impacted.

Michael Kreamer
Life-long resident of Lafayette Parish
Principal, St. Martinville High

Thursday, September 11, 2014

Higher Student Expectations?

What is an acceptable level of performance for an 8th grade math student? All public school teachers are required to set student learning targets (SLTs) each year for each classroom of students. Where should the passing scores be set and what is an acceptable level of student failure? One good point of reference should be the state administered 8th grade LEAP math test. Let's examine what is expected by our Department of Education for 8th grade Math.

Superintendent John White has repeatedly suggested that the state should have high expectations of all students and that if we have high expectations on state tests, students will achieve more. Is he really following his own advice in designing and grading state tests?

Do you think Superintendent White would accept a passing score of slightly over 40% on a teacher-made final test as acceptable? If the teacher set 40.1% as the minimum score for passing, would it be acceptable for 35% of a classroom of students to score below 40.1%? That's a pretty low expectation, isn't it? Yet that's the result of the state-wide performance of our students on the new Common Core aligned 2014 LEAP given this Spring. The official scale score for a level of Basic on the 8th grade Math LEAP remained at exactly the point total used in the past (321 out of a total of 500 points). But the actual percentage of correct answers needed for a score of Basic (the minimum percentage) has been lowered this Spring to only 40.1%. And even with that low expectation, 35% of students state-wide failed to reach the level of Basic. Yet John White, in releasing the LEAP results in May, announced that student performance on the new Common Core aligned LEAP tests was "steady" even though the test was more challenging.

It is also interesting that in an earlier press release the LDOE commented on a survey of students taking the new Common Core LEAP, and concluded that students did not find the new tests to be very difficult compared to their classroom work. Here is the quote:

"Of those students taking the computer-based test during the first phase, nearly 70 percent said the test was easier or about the same as their current school work. And, when asked if there were questions about things they have not learned this school year nearly 85 percent said there were none or few questions."

That's a strange conclusion, because most of the students I taught could always tell when a particular test was more difficult than usual. But apparently this time, students had no idea that the passing score would have to be lowered to 40.1% in order for 65% of the students state-wide to pass it. Similar results occurred on another LEAP test measuring ELA performance for 4th graders.

I believe the new Common Core aligned LEAP tests this year were poorly designed and resulted in abnormally low scores for students across the state, but that those low scores were hidden from the public by rigging the scale scores to make it look like students had no big problems with the new Common Core aligned tests. That's why I decided to write to the Accountability Commission to express my concerns. The Accountability Commission is a committee composed of teachers, superintendents, principals, parents, and business representatives who are supposed to advise the LDOE and BESE on the proper implementation of accountability policy. I am hoping that the Commission members will demand real accountability from our LDOE for the construction and grading of the most recent LEAP and iLEAP tests. Next year the state will be using the PARCC tests, and the results of those tests may not look so good. I would hate to see teachers and schools punished because of another drop in test scores on tests that may not be appropriate for our students.

And by the way, I do not agree with John White that higher expectations of students somehow magically results in higher performance. When standards are not well designed, when standards are not age appropriate and when tests are poorly designed, higher expectations alone mean nothing. They may just result in disappointment and further condemnation of our public education system. Here is my letter to the members of the Accountability Commission:

Dear Commission members:

My name is Michael Deshotels and I am writing to the Commission as a retired educator and a father of three children who all attended public schools, and a grandfather of 10 grandchildren who are in various stages of attending public schools today.

My reason for writing to the Commission is that in my research for my blog The Louisiana Educator, I have observed some apparent problems with test validity and possible double standards in the testing and grading of students using this year's LEAP tests compared to other tests. I am calling upon the members of the Accountability Commission to look into these matters and possibly recommend better policies. Here are my concerns:

In reviewing the results of the recent LEAP testing from the Spring of 2014 I focused primarily on the English Language Arts (ELA) and Math sections because these were the portions of LEAP that were changed to increase alignment with the new Common Core standards. I noticed that the scale scores for LEAP had not changed for this year compared to the previous years but I wondered if the underlying percentages of correct answers on each test related to the various achievement levels had changed. I made a public records request in May of this year for the minimum percentage of correct answers required for the achievement levels of Basic and Mastery on the recent LEAP tests. To date I have only received the results for the Basic level which is shown in the table below. I am still waiting for the minimum percentages for Mastery, and a court order has now been issued as a result of my lawsuit to require the Education Department to produce those records by September 22.



Percentage of Total Raw Score Points Required to Earn “Basic”

Grade  Year    ELA      Math     Science   Social Studies
4th    2014    44.62%   47.22%   58.93%    53.03%
4th    2013    51.54%   50.00%   56.90%    56.06%
4th    2012    53.85%   52.78%   62.07%    57.58%
8th    2014    58.70%   40.13%   58.93%    50.00%
8th    2013    57.97%   48.61%   56.90%    52.63%
8th    2012    57.97%   55.26%   56.90%    52.63%


From this data, I made two observations: First, the raw percentage-correct scores for a rating of Basic vary quite a bit from year to year even though the scale score for Basic remains the same. Second, I observed that the percentage of correct answers needed for a student to receive a rating of Basic dropped significantly for three out of four categories of LEAP for the 2014 school year compared to previous years. For example, I noticed that a student taking the 8th grade math test needed to get only 40% of the answers right this year to get a rating of Basic. There is a similar result for 4th grade ELA, which required only 44.6% of the answers right for a rating of Basic.

My concern is this: The LEAP scale score for a rating of Basic for 4th grade ELA remained at 301 points out of a possible 500 points, which to the average parent seems to be about 60% of the possible points, yet the actual percentage of correct answers needed for Basic was only 44.6%. The minimum scale score for Basic for 8th grade Math remained at 321 out of 500, which seems like 64% to the average parent, but the real cut percentage is only 40.13%. In addition, the cut percentages have been lowered significantly even though the public is given the impression that cut scores have remained the same. At the time the scores from the Spring LEAP testing were announced, the Department of Education stated that the average performance of our students at the level of Basic had remained steady even though the new Common Core aligned tests were more difficult. This statement appears to be misleading.
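The mismatch can be made concrete with a little arithmetic. The scale cuts and raw-percentage cuts below are the figures cited in this letter; the "apparent" percentage is simply the scale cut divided by the maximum scale score, which is how a casual reader might misread it:

```python
# Apparent vs. actual cut percentages for a "Basic" rating,
# using the figures cited in the letter.
cuts = {
    # subject: (scale cut, max scale score, actual raw-percent cut)
    "4th grade ELA":  (301, 500, 44.62),
    "8th grade Math": (321, 500, 40.13),
}

for subject, (scale_cut, scale_max, actual_pct) in cuts.items():
    apparent_pct = 100 * scale_cut / scale_max
    print(f"{subject}: looks like {apparent_pct:.1f}% of the points, "
          f"but the real raw cut is only {actual_pct}%")
```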

I want to assure the Commission that I am well aware that the official scale scores should not be viewed as proportionate to the number of correct answers on the test, but this is not clear to the public. I am also aware of the process called “leveling,” where the testing company, with the approval of the Department, routinely adjusts the cut percentages on new forms of the LEAP test to take into account the comparative difficulty of different forms of the test and to ensure that students are not penalized or rewarded when tests get more difficult or easier. This is apparently the explanation for the drastic lowering of the cut percentages on this year's LEAP tests for ELA and Math. But I still find the statement that our students' performance has remained “steady” even though the tests were “aligned to more challenging learning standards” to be misleading.

I am also concerned that when a cut score has to be lowered to as low as 40% for a rating of Basic, the test itself is getting close to being invalid as a measure of learning. That is because, since the majority of the questions are still multiple choice, a student can get close to a passing score just by guessing at all the questions where he/she does not know the answer and combining those guesses with a very few known answers. I believe that the Commission should seriously question the validity of tests that result in such low cut scores.
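The guessing argument can be quantified. Assuming four-option multiple-choice items (an assumption for illustration; actual LEAP item formats vary), a student who guesses blindly expects to pick up one quarter of the unknown questions:

```python
# Expected score for a student who truly knows a fraction `known` of the
# answers and guesses randomly on the rest of a multiple-choice test.
def expected_score(known, options=4):
    # Known items are answered correctly; each unknown item is a 1-in-`options` guess.
    return known + (1 - known) * (1 / options)

# A student who genuinely knows only 20% of the material already expects
# a 40% score -- the 2014 cut for Basic on the 8th grade Math LEAP.
print(f"{expected_score(0.20):.0%}")
```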

Such low cut scores also introduce the issue of a double standard in state testing of students compared to acceptable Student Learning Targets (SLTs). I do not know of any schools that approve student learning targets as low as 40 to 45% for a passing score, where it is acceptable for 35% of students in a classroom to score below 40% on the final test. Yet this is the situation we find in the setting of the 8th grade LEAP math standard for Basic for the entire state. I believe there was something wrong with the 8th grade LEAP Math test and also with some of the other tests, yet the real results were covered up by the pronouncement by the Department of “steady” performance of our students. Let me make this clear: I don't blame the students or the teachers for this dismal result; I blame the test and the test designers. I also blame the LDOE for using the false stability of scale scores as a smokescreen to hide the real performance on this year's LEAP test.

Finally, I must object to the low level of transparency exhibited by the Department in making this vital information available to educators and the public. Has the information above been produced for review by the Accountability Commission? When I first requested this information in May of this year, I was told that the information may be available in November. But of course, the cut percentages were known by the Department since the scores were issued in April. Why is it necessary to withhold such information from the public until November? Shouldn't at least the Accountability Commission be allowed to review this critical information? This is important because such scoring methods will possibly become even more important as the state fully transitions to the PARCC testing. Will the percentage cut scores be kept secret on the PARCC testing also?

I am requesting that the Accountability Commission look into these matters and recommend a more transparent policy on releasing results and scoring changes and also demand a thorough analysis of the validity and appropriateness of the tests which will be used to grade our students, our teachers and our schools. This analysis should be conducted by someone independent of the State Department of Education and the testing company.

Finally I would appreciate an opportunity to address the next meeting of the Accountability Commission to further clarify my concerns. Please feel free to contact me with any questions or comments.

Sincerely,
Michael Deshotels
Phone 225-235-1632