Tuesday, September 23, 2014

Elementary and Secondary Education Act Needs to be Reauthorized

Last week the Iowa Department of Education released the results from the federal accountability law known as 'No Child Left Behind'. It should come as no surprise that more schools and districts have been labeled as 'in need of assistance'. According to the Department of Education website, 852 of 1,288 schools, or 66.1%, missed the targets known as Adequate Yearly Progress.

Adequate Yearly Progress, or AYP, is based on student proficiency as measured by the Iowa Assessments, along with participation rates. If a school does not test at least 95% of its students in every subgroup, it is considered to have missed AYP. As for proficiency, the targets have climbed steadily since 2001: 80% in 2011-2012, 94% in 2012-2013, and finally 100% in 2013-2014. The law requires schools to meet these proficiency targets for the overall student population and for demographic subgroups such as socio-economic status, limited English proficiency, race/minority, and special education. While there is a component of growth factored into the equation, it doesn't go far enough.
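For readers who like to see the mechanics, here is a rough sketch in Python of how those two requirements interact. The group names and rates are my own invention for illustration, and this is a simplification, not the state's actual calculation (which includes provisions like safe harbor that I am leaving out):

```python
# A minimal sketch of the two AYP requirements described above:
# participation and proficiency, checked for every subgroup.
# Group names and rates are hypothetical, not the state's calculation.

PARTICIPATION_FLOOR = 0.95   # at least 95% of students must be tested
PROFICIENCY_TARGET = 1.00    # the 2013-2014 target: 100% proficient

def meets_ayp(groups):
    """groups maps a subgroup name to (tested_rate, proficient_rate);
    every subgroup must clear both bars for the school to make AYP."""
    for name, (tested, proficient) in groups.items():
        if tested < PARTICIPATION_FLOOR:
            return False, f"{name}: tested {tested:.0%}, below 95%"
        if proficient < PROFICIENCY_TARGET:
            return False, f"{name}: proficient {proficient:.0%}, below target"
    return True, "met AYP"

# With a 100% target, even a school at 96% proficiency misses AYP.
print(meets_ayp({
    "all students": (0.98, 0.96),
    "special education": (0.97, 0.82),
}))  # (False, 'all students: proficient 96%, below target')
```

Notice that a single subgroup falling short on either measure is enough to miss AYP, which is part of why the list of schools 'in need of assistance' keeps growing as the targets climb.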

Proficiency is based on a specified benchmark score on the test that students must reach. The Iowa Assessments are a norm-referenced test, meaning that pupils are ranked against one another. For example, if 100 students took the test, they would be ordered from 1 to 100. The net effect is that children are measured against each other rather than against what they actually know. A student at the 30th percentile scored better than 30 of those students, which means that 70 performed better. The end result, of course, is that a certain number of students will never meet the proficiency benchmark. A better measure of student progress would be a criterion-referenced test, one that actually measures the knowledge a student has gained. It is more useful to know whether a student understands his or her math facts than to know that he or she knows more math facts than other students.
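To make that distinction concrete, here is a small illustration; the scores and the cut score of 70 are invented for the example:

```python
# Illustrative only: the difference between a percentile rank
# (norm-referenced) and a fixed cut score (criterion-referenced).
# The scores and the cut score of 70 are invented.

def percentile_rank(score, all_scores):
    """Norm-referenced: where a student sits relative to peers."""
    below = sum(1 for s in all_scores if s < score)
    return 100 * below / len(all_scores)

def is_proficient(score, cut_score=70):
    """Criterion-referenced: did the student clear a fixed benchmark?"""
    return score >= cut_score

scores = list(range(1, 101))        # 100 students, one at each score
print(percentile_rank(30, scores))  # 29.0 -- roughly 70 students did better
print(is_proficient(30))            # False against the fixed bar

# By construction, someone always ranks near the bottom on a
# norm-referenced test, no matter how much every student knows.
```

The last comment is the crux: a percentile rank depends on the cohort, so perfection is mathematically impossible, while a criterion cut score at least describes what a student can actually do.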

Yet we shouldn't be lulled into a false sense of security, or into the promise that we can fix the problem merely by changing the instrument. While a criterion-referenced test would be a significant improvement over a norm-referenced one, a small percentage of students might still be unable to meet the benchmark. The best possible approach would be to measure the growth a student makes over the course of the year, or in the case of the Iowa Assessments, from one testing cycle to the next. Student learning outcomes could then be designed to meet the needs of a diverse group of students: those served by special education programs or with limited English proficiency have a different starting point than those served by the general education program. But even with these types of instruments in place, there may still be students who fall short of their goals.
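In rough terms, a growth-based check would look something like the sketch below. It assumes we already have a defensible expected-growth value for each starting point; the function and the student numbers are hypothetical, not the Iowa model:

```python
# A sketch of a growth-based check, assuming a defensible
# expected-growth value exists for each starting point. The function
# and the student numbers are hypothetical, not Iowa's actual model.

def met_growth_target(fall_score, spring_score, expected_growth):
    """A student 'grows a year' when realized growth meets or exceeds
    the growth expected for someone with that starting score."""
    return (spring_score - fall_score) >= expected_growth

# Two hypothetical students with different starting points: both can
# succeed on this measure even if only one would clear a single
# proficiency cut score.
print(met_growth_target(fall_score=150, spring_score=170, expected_growth=13))  # True
print(met_growth_target(fall_score=120, spring_score=132, expected_growth=8))   # True
```

The design choice worth noting is that the target travels with the student: a child who starts behind can still demonstrate a successful year, which a single proficiency bar can never show.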

So then, you are probably wondering how Hudson stacked up against the federal accountability requirements. We met all the participation goals up and down the line: as a district and as an elementary, middle, and high school. As a district, we also met the AYP achievement targets. In the high school, all targets were met for both participation and achievement. Our middle school is on delay status for reading (meaning it met the goal this year but missed it last year), and on the SINA (School in Need of Assistance) list for math. Finally, the elementary school is on the Watch list for reading and met the target for math. Here is where it gets a bit perplexing.

Recall that we are on the SINA list for math. Let's take a look at those scores in terms of growth. In the growth table, notice that students from grades 5-6 grew an average of 24 points, while the expected growth for this group was only 13 (based on a median percentile rank of 50). Not only did they meet expected growth; statistically speaking, they grew in excess of one year! The same holds true from grades 6-7: expected, 12; realized, 21. And from grades 7-8: expected, 11; realized, 17. Granted, that is the entire population and not disaggregated by subgroup. When you look at the subgroup data, the far-right column, labeled PR, shows that as a subgroup the students are not proficient. An example follows below.

This is for one of our subgroups (I am purposely not naming the group at this point). Note that between grades 6 and 7, as a subgroup they score only at the 25th percentile, which is not proficient. However, based on that percentile rank, the expected growth for this subgroup is 8 points. They actually grew by 24 points. To reach proficiency, a student needs to exceed the expected growth; this is known as closing the gap. I don't know about you, but this most certainly does not look like a school that should be on the SINA list.
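To put all of those numbers in one place, here they are side by side; the figures are the ones reported above, with the last row being the unnamed subgroup:

```python
# Expected versus realized growth, as reported in the post.
# The last row is the unnamed subgroup between grades 6 and 7.

growth = {
    "grades 5-6 (all students)": {"expected": 13, "realized": 24},
    "grades 6-7 (all students)": {"expected": 12, "realized": 21},
    "grades 7-8 (all students)": {"expected": 11, "realized": 17},
    "grades 6-7 (subgroup)":     {"expected": 8,  "realized": 24},
}

for span, g in growth.items():
    surplus = g["realized"] - g["expected"]
    print(f"{span}: grew {surplus} points beyond expectation")
```

Every single span, subgroup included, beat its expected growth, and yet the proficiency column alone lands the school on the SINA list.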

Most industries do not place these kinds of arbitrary standards on the products they send out the factory door, nor do they impose punitive penalties for falling short. For that reason, it is imperative that Congress act to reauthorize the Elementary and Secondary Education Act. While it is indeed a noble goal to strive for perfection from our students and schools, we must also be realistic and provide fair measures. As the data shared here clearly illustrates, there is much more to student achievement than a percentile ranking. By the way, schools are not opposed to data, and we are just fine with accountability; we use data all the time to shape instruction. What schools are leery of are unfair measures that don't look at the entire scope of student achievement, or data sets that aren't evaluated in the proper context.



