Thursday, July 18, 2013

Moving Day

It's been a great run on Blogger, but the time has come to move on to WordPress.

You can find my blog, along with all the original content, at:

freerangingideas.net

Monday, July 1, 2013

Pushing The Stone Up The Hill

Last week I attended the Ohio ASCD summer conference in Columbus. In a session on the new accountability standards, I was again reminded of the huge job ahead for school districts in communicating the coming report card cliff that everyone is going to fall off of.

In general, the rule of thumb is that the percentage of students who currently score in the accelerated and advanced range will become the percentage of students who score proficient or better on the report card in the 2014-2015 school year. For example, a district where 40% of students currently score accelerated or advanced should expect roughly 40% of students to rate proficient or better under the new system.

ODE has created a presentation on the simulated grades under the new report card system. The link to the StateImpact Ohio story from March by Ida Lieszkovszky, along with the accompanying presentation, can be found here.

It is important to note that the bad news in the report card simulation only takes into account the changes coming with the re-designed report card measures in 2012-13 and the increase in the indicator percentage to 80% proficient or above in 2013-2014.

One can reasonably assume that results will continue to go down for all districts in the first year of the PARCC assessments.

So, the somewhat rhetorical question is: what are you (and I) doing as district leaders to communicate the coming dip to our school boards and community members?

While he had his flaws, former State Superintendent Stan Heffner did a nice job communicating the issue of low cut score thresholds and the inflated sense of achievement, relative to actual performance, that the scores gave to communities. In general, a student can answer fewer than 50% of the questions correctly and still be considered proficient. This is the reason why accelerated and advanced scores are projected to be the 'new' proficient.

Dr. Bobby Moore from BFK gave an enlightening presentation on the reality of the current assessments and the false sense of achievement they give high-performing districts. Using two anonymous districts with high performance index scores, Dr. Moore demonstrated how raising expectations has a dramatic effect on the percentage of students who would be considered proficient. The proficient column in the graphics below illustrates the percentage of students at or above proficient using existing cut scores. If you were to raise the cut score so that a student had to earn at least 75% of the raw point total on a given test to be considered proficient (note: 75% is considered a C in most grading scales), look at what happens to the percentage of students who would be considered proficient or above.

[Graphics: percentage of students at or above proficient in two anonymous high performance index districts, under existing cut scores and under a 75%-of-raw-points cut]

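To make the effect concrete, here is a small back-of-the-envelope sketch in Python. The 50-point test and the raw scores are invented for illustration (they are not data from either of the anonymous districts in Dr. Moore's presentation), and the 48% and 75% cuts simply stand in for "existing cut score" and "a C-level cut score."

# Hypothetical illustration of how raising the proficiency cut score
# shrinks the percentage of students labeled proficient.
# The scores below are invented; they are not real district data.

def percent_proficient(raw_scores, total_points, cut_fraction):
    """Percent of students at or above a cut expressed as a fraction
    of the total raw points available on the test."""
    cut = cut_fraction * total_points
    return 100 * sum(score >= cut for score in raw_scores) / len(raw_scores)

# Twenty hypothetical raw scores on a 50-point test.
scores = [22, 24, 25, 27, 28, 29, 30, 31, 32, 33,
          34, 35, 36, 37, 38, 39, 40, 42, 44, 47]

print(percent_proficient(scores, 50, 0.48))  # existing-style cut (under 50% correct) -> 95.0
print(percent_proficient(scores, 50, 0.75))  # 75% of raw points -> 30.0

Same students, same test, and the "proficient or above" figure drops from 95% to 30% just by moving the cut score.
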
(A special thank you to Dr. Moore for his presentation and personal follow-up correspondence for this post. You can follow him on Twitter @BobbyMooreBFK)

Do the parents and community members in these districts have ANY sense of the performance description inflation that currently underpins the accountability measures in the state?  Are communities prepared for the re-norming of performance descriptors and the looming drop in ratings?

What about districts that work, strive, and struggle to improve scores each year, yet consistently fail to move their performance index scores past the mid-90s? What will that cliff look like?

I don't think you would find anyone who would oppose re-norming accountability measures so that they accurately reflect students' current skills and knowledge. Helping parents understand how the performance levels got to where they are (game playing with NCLB standards) and how scores can improve under the new system is vital.

Every school district in Ohio should be out promoting the coming changes now, and ODE also needs to provide communication tools to help with this massive endeavor. Districts have played by the rules through the entire NCLB accountability era, and they must be supported in telling the change story now that the metrics for earning a high summative letter grade on the 2014-2015 report card are changing so radically.

(A postscript to this blog post: Christina Hank writes in her blog "Turn On Your Brain" about the morale-busting implications of letting a single measure at a single point in time be the sole definition of teacher and school district quality, and argues for broader metrics to define success.)

Accountability Is Good (If Done Correctly)

An interesting send-up of value-added on the heels of the recent CPD/SIO VA series:

http://dianeravitch.net/2013/06/17/jan-resseger-on-absurdity-of-ohio-vam/

A key paragraph from the article:


A thought about using value added in a different way: for each teacher who has value-added scores, report the results as the percentage of that teacher's students in each category (x% more than 2 SD above the gain line, y% between 1 and 2 SD above, z% between 0 and 1 SD above, etc.). Then, for policy purposes, examine the corresponding percentage of students in each band who are considered to come from poverty based on subgroup guidelines. The current method of assigning a single VA score to each teacher does not give credit for the students for whom the measure indicates the teacher did cause growth.
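As a rough sketch of that banded reporting idea, here is what a teacher-level report might look like in Python. The roster, the band boundaries, and the idea of expressing each student's growth in standard deviations above or below the gain line are all assumptions for illustration; real value-added estimates come out of the state's statistical models, not a calculation this simple.

# Minimal sketch: report value added as the percentage of a teacher's
# students in each growth band, rather than as one summary number.
# Student gains below are hypothetical, expressed in standard deviations
# relative to the expected gain line.

from collections import Counter

def band(gain_sd):
    """Assign a student's growth (in SD relative to the gain line) to a band."""
    if gain_sd >= 2:
        return "more than 2 SD above"
    if gain_sd >= 1:
        return "between 1 and 2 SD above"
    if gain_sd >= 0:
        return "between 0 and 1 SD above"
    return "below the gain line"

def banded_report(student_gains_sd):
    """Return the percentage of students in each band for one teacher."""
    counts = Counter(band(g) for g in student_gains_sd)
    n = len(student_gains_sd)
    return {label: round(100 * count / n, 1) for label, count in counts.items()}

roster = [2.3, 1.4, 0.6, 0.2, -0.5, 1.1, 0.0, -1.2, 2.1, 0.8]
print(banded_report(roster))

Pairing each band with the percentage of its students who come from poverty would then be a straightforward cross-tab against the subgroup data.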

Much larger than this is the issue that VA scores are derived from one test given at one point in time. That singular, two-hour window cannot account for the other 900 hours of instruction that children receive, or for all of the intangible value teachers add to students throughout the course of a year.

If the state and federal government are serious about measuring the value that teachers add to students, they should create a series of quarterly assessments for each subject, each year, and combine those scores with a portfolio of student work that is rubric-scored and normed against expected work outputs at each grade level.
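A very rough sketch of what that kind of blended measure could look like, with the weights, scales, and 0-100 norming all invented for illustration (nothing here reflects an actual ODE or PARCC design):

# Hypothetical blend of four quarterly assessment scores (0-100 each)
# with a portfolio rubric score already normed to a 0-100 scale.
# Weights are assumptions, not policy.

from statistics import mean

def blended_growth_score(quarterly_scores, portfolio_score,
                         assessment_weight=0.6, portfolio_weight=0.4):
    """Weighted combination of quarterly assessments and a normed portfolio score."""
    assert abs(assessment_weight + portfolio_weight - 1.0) < 1e-9
    return (assessment_weight * mean(quarterly_scores)
            + portfolio_weight * portfolio_score)

# A hypothetical student: improving quarter over quarter, with a strong portfolio.
print(blended_growth_score([68, 72, 78, 83], 85))  # -> 79.15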

(An article from The Atlantic that also addresses the issues around the reform movement and accountability)

Singular measures of student growth are the least statistically reliable. The solution above would be expensive. But if the bureaucrats and private corporations ever want these measures to be taken seriously, there must be a movement away from tests given at one point in a school year driving the entire accountability structure for teachers and schools.