Wednesday, November 20, 2019

The Results Are In

Last spring, students in Iowa took the Iowa Statewide Assessment of Student Progress (ISASP). This marked the first time Iowa students have been administered a test that actually aligns to the Iowa Core Academic Standards, which are the 'blueprint' of what is being taught in our classrooms. This test replaces the antiquated Iowa Assessment, which is essentially the test you and I grew up taking: the Iowa Tests of Basic Skills. The student experience with the ISASP is much better than with the Iowa Assessment. Instead of needing a number two pencil to fill in the bubbles, our students took the test on a secure online platform. Future iterations of the test will also be computer adaptive, meaning the test progressively adjusts its difficulty based on how the student answers the questions.
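
For readers curious about what 'computer adaptive' looks like under the hood, here is a minimal sketch of the general idea. To be clear, this is not the ISASP's actual algorithm; the ten-point difficulty scale, the starting level, and the simple step-up/step-down rule are all invented purely for illustration.

```python
# Minimal sketch of a computer-adaptive test loop. The difficulty
# scale and step rule are invented; they are not the ISASP algorithm.

def run_adaptive_test(items_by_difficulty, answer_is_correct, num_items=10):
    """Serve items one at a time, moving to a harder item after a
    correct answer and to an easier one after a miss."""
    difficulty = 5  # start in the middle of a hypothetical 1-10 scale
    for _ in range(num_items):
        item = items_by_difficulty[difficulty]
        if answer_is_correct(item):
            difficulty = min(difficulty + 1, 10)  # level up
        else:
            difficulty = max(difficulty - 1, 1)   # level down
    return difficulty  # the ending level approximates the student's ability
```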

The differences between the two assessments don't stop there. For starters, the ISASP is administered during the final quarter of the school year. This means all students across the state have had a roughly equivalent amount of instruction leading up to the test. Previously, schools in Iowa had the option of taking the test in the fall, winter, or spring. As a reminder, Hudson students historically took the Iowa Assessments during the winter.

Another important difference is that the Iowa Assessment was a norm-referenced test, which meant that students were compared to one another. This made it incredibly difficult to determine how a student actually performed on the test. With the Iowa Assessment we might know that Student A performed better than 50% of all the other students who took the test (for example), but we didn't necessarily know the percentage of correct responses on that test.
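
To make that distinction concrete, a percentile rank of the 'better than 50%' variety is computed roughly like this. This is a simplified sketch with made-up scores; real norm tables involve far more statistical machinery.

```python
def percentile_rank(student_score, all_scores):
    """Percentage of test takers this student outscored: a comparison
    to other students, not a measure of how much content was mastered."""
    below = sum(1 for s in all_scores if s < student_score)
    return 100 * below / len(all_scores)

# Example: a score of 21 beats two of these five scores,
# so the student lands at the 40th percentile.
print(percentile_rank(21, [12, 18, 21, 25, 30]))  # 40.0
```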

The ISASP, on the other hand, is a criterion-referenced test. This means we aren't comparing students to one another, but rather measuring the number of correct responses on the test. In other words, do they know the correct answer or don't they? In fact, when the student reports are sent out later in the month, you as parents will be able to see the percentage of correct responses for each area. I can assure you the parent report will be much more useful and easier to understand than the reports that were issued with the Iowa Assessment!
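
The criterion-referenced calculation is even simpler. The 34-of-40 figures below are made up for the example:

```python
def percent_correct(num_correct, num_items):
    """Criterion-referenced score: the share of questions answered
    correctly, independent of how anyone else performed."""
    return 100 * num_correct / num_items

# Example: 34 correct answers out of 40 questions.
print(percent_correct(34, 40))  # 85.0
```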

The final, and perhaps biggest, difference between the two tests is the rigor of the ISASP. Because the Iowa Assessment was solely a 'fill in the bubble' test, the questions were largely recall. They didn't really require students to think very hard, nor did they determine whether the student truly grasped the content. In education, we call this 'Depth of Knowledge' (DOK). The questions on the Iowa Assessment were generally DOK 1, or low-level recall. There are four levels of DOK, and the ISASP has been designed to spread its questions across the spectrum. As a result, the number of DOK 1 questions has been minimized. Perhaps that is why there was trepidation that when the results came out the scores were going to be lower. But that theory misses an important point: it is a completely different test, and it would be inappropriate; strike that, impossible to compare the ISASP to the Iowa Assessment. In many ways it would be like comparing apples to oranges.
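
For those who like to see the arithmetic, here is a small sketch of what 'spreading the DOK across the spectrum' means in practice. The item counts below are invented for illustration; I am not quoting the actual ISASP test blueprint.

```python
from collections import Counter

# Hypothetical item bank: each entry is the DOK level (1-4) of one question.
# These counts are invented, not the real ISASP blueprint.
item_dok_levels = [1, 2, 2, 3, 2, 4, 3, 1, 2, 3]

def dok_distribution(levels):
    """Percentage of test questions at each Depth of Knowledge level."""
    counts = Counter(levels)
    return {dok: 100 * counts[dok] / len(levels) for dok in sorted(counts)}

print(dok_distribution(item_dok_levels))
# {1: 20.0, 2: 40.0, 3: 30.0, 4: 10.0}
```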

That said, the results are in. Students in grades 3-11 are tested in English/Language Arts (ELA) and math. The ELA suite of tests also includes a writing component, something new, which increases the rigor (DOK) of the test. Additionally, students in grades 5, 8, and 10 are also administered a science test. We are just now beginning the process of analyzing our data, but here is a snapshot:

[Chart: percentage of Hudson students proficient at each grade level, compared to the statewide average]
The light blue column is the percentage of Hudson students at each grade level who are currently proficient. The column at far right is the difference between our Hudson scores and the statewide proficiency rate. The dark blue color indicates a grade level exceeding the statewide average (and by how much), whereas the red indicates a grade level below the statewide average. This data gives us a good starting point for discussions in our school about why the scores are what they are, and it will ultimately lead us to an action plan for improvement! At the same time, there is quite a bit to be proud of in this data set. The percentage of students proficient in some of our grade levels is staggering, a testament not only to the hard work of our student test takers, but to the quality of instruction occurring in our classrooms. Further, I might suggest this data shows the impact of our instruction is cumulative: the longer students are in our system, the more they improve.
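
For anyone who wants to reproduce the arithmetic behind the chart, the comparison is straightforward subtraction. The grade-level percentages in this sketch are invented placeholders, not our actual results:

```python
# Every percentage below is an invented placeholder; our actual numbers
# are in the chart above and in the parent reports.
hudson_proficient = {3: 82.0, 4: 75.0, 5: 90.0}  # percent proficient, by grade
state_proficient = {3: 70.0, 4: 72.0, 5: 68.0}

for grade in sorted(hudson_proficient):
    diff = hudson_proficient[grade] - state_proficient[grade]
    label = "above" if diff >= 0 else "below"
    print(f"Grade {grade}: {abs(diff):.1f} points {label} the statewide average")
```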

A more elusive, and equally important, metric still missing is student growth. Growth has typically been measured by comparing this year's standard score to last year's standard score. Although we do have a standard score with the ISASP, it would be inappropriate to measure it against last year's standard score because, again, it's a different test. There is a statistical reconciliation that can be done between the two, but I fear reliability would still be a concern.
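
In code form, the traditional growth calculation is trivial, which is exactly why the caveat matters. The scores below are hypothetical:

```python
def growth(this_years_standard_score, last_years_standard_score):
    """The traditional growth metric: a simple year-over-year change in
    standard score. It is only meaningful when both scores come from the
    same test on the same scale, which is why it breaks down across the
    switch from the Iowa Assessment to the ISASP."""
    return this_years_standard_score - last_years_standard_score

print(growth(230, 215))  # 15 scale points of growth (same test only)
```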

Nevertheless, we have successfully completed our first administration of the test, and we now have a baseline from which to grow! As we continue our analysis of the data, I'll be sure to pass along additional findings. You can expect to see your child's results when we send report cards home. In the interim, if you have any specific questions, please feel free to contact your building principal.
