
Ask the Experts

Student Assessment - General Assessment Questions

My school currently uses AIMSWEB to progress monitor our students' reading. I have a question regarding the statistical validity of our testing procedures. For example, at the beginning of the year I test my students on passages 1, 2, and 3. When I retest midyear, is it correct to then test on passages 4, 5, and 6? If I retest on passages 1, 2, and 3, doesn't this present a statistical validity issue due to repeated exposure? We have a discrepancy in our building as to which procedure to follow based on how CBM was designed. Your quick response would be much appreciated, as we would like to share accurate information with parents during upcoming conferences. Thanks so much for your time!


Response from Evelyn Johnson, Ph.D.:

I'm assuming you are talking about ORF passages here and not maze, so my answer is targeted to ORF. As most of your teachers have probably noticed, there is a substantial amount of "bounce" in ORF passages. In other words, even though the passages within a grade level are written at the same readability level, some research has indicated that as much as 15% of the variability in student performance is due to passage effects. There is also research showing that, depending on the order in which passages are presented, you might actually see different individual growth lines for students!

That said, there are a number of routes you might consider to address this concern, which I've briefly outlined below, and you'll have to make a decision with your staff that you can stand behind and explain to parents. Whichever route you choose, remember that AIMSWEB benchmarks are only one source of data you have on student performance; keeping that in mind is critical to the continued appropriate use of these tools in informing your instructional program.


  1. Equating passages. In Francis, D. J., Santi, K. L., Barr, C., Fletcher, J. M., Varisco, A., & Foorman, B. R. (2008), Form effects on the estimation of students' oral reading fluency using DIBELS, Journal of School Psychology, 46, 315-342, the authors describe a process for equating passages through equipercentile equating. The specifics of that process are outlined in the article, but in general, the idea is that raw scores on each passage are converted to percentile ranks, and performance is then reported as percentile ranks rather than raw scores, so results from different passages are directly comparable (a minimal sketch of this idea appears after this list).
  2. Use the same passages. Given that you are using these passages for benchmarking, my guess is that the time between assessment periods is about 14-15 weeks. Over that span, the likelihood of a practice effect from reusing the same passages is greatly reduced, and reusing them might give you more stability in performance levels. In other words, you might reduce some of the variability in performance due to passage effects by using the same passages for benchmarking your students.
  3. Report median scores, and further assess students in the gray area. Finally, you might simply use different passages each benchmarking period, and for students whose scores fall within a range that raises concern (i.e., a few words above or below the "cut" score), you could administer 3 passages and use the median score to get a more accurate picture of those "gray area" students (see the second sketch after this list).
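
To make the equating idea in option 1 concrete, here is a minimal sketch in Python. It is not the procedure from the Francis et al. article, and the norm samples, score values, and function name are all hypothetical; the point is only that once raw scores on each passage are converted to percentile ranks against that passage's norms, performances on different passages can be compared on a common scale.

    from bisect import bisect_right

    def percentile_rank(score, norm_scores):
        """Percent of the norm sample scoring at or below this score."""
        ranked = sorted(norm_scores)
        return 100.0 * bisect_right(ranked, score) / len(ranked)

    # Hypothetical norm samples of words-correct-per-minute (WCPM)
    # for two passages of the same nominal readability.
    passage_1_norms = [42, 55, 61, 68, 73, 80, 88, 95, 102, 110]
    passage_4_norms = [38, 50, 57, 63, 70, 76, 84, 91, 99, 107]

    # 73 WCPM on passage 1 and 70 WCPM on passage 4 land at the same
    # percentile rank, so the two performances are comparable even
    # though the raw scores differ.
    print(percentile_rank(73, passage_1_norms))  # 50.0
    print(percentile_rank(70, passage_4_norms))  # 50.0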

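In the same spirit, here is a minimal sketch of the median approach in option 3, again with a hypothetical cut score, gray-area band, and student scores. The median is used rather than the mean because one unusually easy or hard passage pulls the mean but not the median.

    from statistics import median

    CUT_SCORE = 90   # hypothetical benchmark cut score, in WCPM
    GRAY_BAND = 5    # hypothetical "gray area" of +/- 5 words around the cut

    def in_gray_area(wcpm):
        """Flag a single-passage score that falls near the cut score."""
        return abs(wcpm - CUT_SCORE) <= GRAY_BAND

    # For a flagged student, administer three passages and report the
    # median, which is less affected by one atypical passage.
    benchmark_score = 87
    if in_gray_area(benchmark_score):
        follow_up_scores = [87, 94, 89]
        print("Median of three passages:", median(follow_up_scores))  # 89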

