
Curriculum-Based Measurement: From Skeptic to Advocate


Although CBM-R has been available to educators for more than 25 years, it has not been widely used by teachers or specialists. A variety of explanations are likely. An earlier article by Dr. Hasbrouck and her colleagues on this same topic (Hasbrouck et al., 1999) cited some of the reasons for the limited use of CBM-R. The two most common reasons educators gave for not using CBM-R consistently were concerns about how much time it would take and uncertainty about how to implement it. In addition, teachers are sometimes hesitant to add something new to an already busy and challenging workload. This article describes the experience of one reading teacher who, like many teachers, started off a skeptic and became a strong advocate of CBM.

A CBM Skeptic

Candyce Ihnot has been an educator for over 30 years. More than two decades ago, when she was a first-year special education teacher in a large urban school district in the Midwest, Candyce was informed that all special education teachers were to begin using curriculum-based measurement to monitor their students’ progress in reading. Candyce describes her reaction:

"My first experience with curriculum-based measurement was about 22 years ago at a large, urban, intermediate school where, as the newest special education teacher, I was given a book storage room to serve as my classroom. The school district’s special education administration soon issued a mandate: All special education teachers were now required to use CBM assessments and to graph their students’ performance three times a week in whatever program they were teaching (reading, math, spelling, or writing).

My initial reaction to this new mandate was a combination of frustration and fear. My job was teaching. I did not feel I had enough time to do my job well as it was. Why should I take so much time away from teaching to assess and do even more paperwork? I was also worried about the increased accountability. Now, on a weekly basis, I would be producing concrete documentation of the effects of my instruction on hard-to-teach students. There was an expectation that students’ CBM graphs would be shared with parents, other teacher colleagues, and even the principal for student decision making. What if my students did not show that they were making progress, despite my best efforts? My negative emotional reaction to this new requirement was so strong that I gave serious consideration to quitting my special education position.

However, today, after years of experience with CBM, I now have very different feelings about it. It did take a while to learn the procedures and get comfortable with the process. And I know that if I had not been forced to use CBM, I would likely never know what I know today: that CBM is very valuable. In fact, if they were to say to me, ‘Candyce, you may no longer use CBM,’ I would go back to that same closet, gather all my kids back there with a flashlight, and continue to use CBM with them. I just cannot imagine teaching without it. That is how much I rely on it, even though it means I have a few minutes less for teaching and a few minutes more of paperwork."

Becoming a CBM Advocate

How did this transformation from a scared and frustrated skeptic to a passionate advocate of CBM take place? Candyce identified three primary reasons why she eventually came to consider CBM a vital professional tool, and why she now strongly recommends its use to other educators. The first reason is that even when a struggling reader is making some progress, that progress is often hard to see and difficult to document. This lack of verification can lead a teacher to continue using a program or instructional strategy that is not having a positive effect. It can also leave students feeling discouraged and unmotivated: even though they may be trying hard, they feel that their skills are not improving. Curriculum-based measures document even small changes in performance, and a CBM graph allows the teacher and students to see concrete evidence of improvement and to celebrate progress toward their goals. Parents can also see their child’s success, making them more motivated to provide ongoing encouragement and support.

The second reason Candyce values CBM as a professional tool in her classroom is that if a student is not making progress, the CBM graphs help her spot this early. An instructional change may not be immediately necessary, but the graphs can help a teacher notice if a student’s progress is leveling off or if performance is slipping. Having that information helps the teacher prepare a plan in the event the pattern continues.

Candyce also found that the immediate feedback provided by CBM graphs allows her to manage her instructional time more efficiently. The graphed CBM data helps a teacher decide with confidence to spend a bit less time with a student who is making good progress and give more assistance to a student who is struggling. Candyce found that these three aspects of CBM far outweighed the relatively minor cost in time and paperwork.

Candyce readily admits that another reason she changed her opinion about CBM was the amount of guidance and support available to her and her colleagues during the early phase of the implementation. Her district’s mandate for teachers to use CBM was accompanied by extensive training and support that helped the teachers learn accurate and efficient ways to use CBM. Teachers were also given time to share their experiences with their peers. They heard each other’s success stories and learned successful strategies for making some of the more cumbersome or confusing parts of the process work more smoothly in their own classrooms. These resources made incorporating CBM procedures into daily practice feasible and helped ensure a successful implementation.

An Example of CBM Implementation

Candyce continued to use CBM later in her teaching career when she began teaching in an urban primary school that served approximately 650 students from kindergarten to third grade. English was a second language for about 150 of these students (ESL; most were from Laos and spoke Hmong), and another 90 to 100 were eligible for special services based on low performance (below the 40th percentile) on the spring administration of a standardized achievement test. Approximately 30 of these low-performing students were served in special education and the remainder in a remedial reading program.

For over a decade prior to Candyce’s arrival, this school had used a collaborative inclusion model for students with special needs. ESL, special education, and Title I/remedial reading teachers went into the general education classrooms and worked there with the identified students rather than employing a pull-out system. This model allowed for close working relationships between the specialists and the general education teachers. The school used a Joplin plan for organizing reading instruction: Students were homogeneously grouped across classrooms according to their instructional needs.

CBM was implemented at this school as one key source of information for grouping students at the beginning of the year. In the fall, every student’s oral reading fluency was measured with benchmarking assessments on three unpracticed, grade-level passages. The median score was used as the student’s beginning score. In the spring, all students were assessed again on three unpracticed passages. To determine whether students were benefiting from the school’s reading program, the results from the school-wide CBM benchmarking data were examined in three ways: 1) as a combination of all student scores across the grades, 2) by grade level, and 3) by individual student. Those students identified as being “at risk of reading failure” and served in special education, ESL, or Title I/remedial programs were assessed weekly using CBM measures, and their individual performance graphs were retained.
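For readers who want the benchmarking rule in concrete form, here is a minimal sketch in Python of the median-of-three step described above; the function name and the sample scores are hypothetical, not part of the school’s actual procedure.

```python
# A minimal sketch of the fall benchmarking step: each student reads
# three unpracticed grade-level passages, and the median of the three
# WCPM scores becomes the beginning-of-year score.
from statistics import median

def benchmark_score(passage_scores: list[int]) -> int:
    """Return the median WCPM from three unpracticed passages."""
    assert len(passage_scores) == 3, "benchmarking uses three passages"
    return median(passage_scores)

# Example: three fall passages; the middle score, 46, is kept.
print(benchmark_score([42, 51, 46]))  # -> 46
```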

CBM data tells a teacher that a student’s pattern of progress is or is not acceptable, but it cannot identify the causes for those patterns. A teacher must use other sources of information to determine what actions to take to help students improve their reading skills. Teachers and specialists commonly use diagnostic assessments, informal classroom observations, contacts with home, conferences with school personnel, and other sources for additional information, as necessary, to make decisions about a student’s instructional program.

When Candyce noted that a student’s graph was not showing progress, she responded in a variety of ways, some of which were very simple to do. For example, she had students change seats if they were being distracted by a peer, she noted illness patterns and discussed them with a child’s parents, and she spoke to students individually to make them more aware of their efforts and to encourage them to work harder.

At times, students required more targeted academic interventions, such as individually designed homework packets to increase practice time, help from an instructional assistant or paraprofessional to practice a specific skill for five to 10 minutes a day, or schedule changes to free up an extra five minutes to preview or review a lesson with the student.

Students in Candyce’s school were usually monitored in materials one level above their current instructional level so that a yearly goal could be set in the fall and students’ progress toward that goal could be monitored across the 9 months of the school year. The goal for Title I/remedial students was to gain 2 words correct per minute (WCPM) each week, and special education students were expected to gain 1.5 WCPM each week. These goals corresponded with the findings of Fuchs, Fuchs, Hamlett, Walz, and Germann (1993), who found that, on average, students in first grade can be expected to gain 2 correct words per week in oral reading fluency in the second half of the year, whereas ambitious goals for first-grade students would be gains of 3 words per week. Goals identified for second-grade students were 1.5 words gained per week, whereas a goal of either 1.0 or 1.5 words gained per week was deemed reasonable for students in third grade.
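As a worked illustration of this goal arithmetic, here is a minimal Python sketch using the growth rates just described; the function name, program labels, baseline, and week count are hypothetical.

```python
# A minimal sketch of the goal-setting arithmetic described above.
# The weekly growth rates (2.0 WCPM for Title I/remedial students,
# 1.5 WCPM for special education) come from the article; everything
# else below is illustrative only.
WEEKLY_GAIN = {"title_i": 2.0, "special_ed": 1.5}

def year_end_goal(baseline_wcpm: float, program: str,
                  weeks_of_instruction: int) -> float:
    """Project a yearly WCPM goal from a fall baseline score."""
    return baseline_wcpm + WEEKLY_GAIN[program] * weeks_of_instruction

print(year_end_goal(40, "title_i", 30))     # -> 100.0
print(year_end_goal(40, "special_ed", 30))  # -> 85.0
```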

Although students at this school who were served in the Title I/remedial and special education programs were ultimately assessed once per week using CBM-R procedures, originally all students in special education programs in the district were assessed three times per week. The teachers complained that assessment was taking too much time away from instruction. Some experimentation by teachers—and a study conducted in the district by CBM researchers—indicated that very similar results were obtained with less frequent assessments when the median score of the past three weeks was used to graph the results. (See Jenkins, Hudson, and Lee in this issue for more guidance on how frequently to assess students.) Each week the teachers recorded the student’s newest score. The highest and lowest scores from the past three weeks were ignored and only the middle, or median, score was graphed. This process, called a moving median, was found to accurately represent a student’s actual performance over time. It allows for the natural fluctuation and variability in performance that can be caused by factors such as illness, inattention, and lack of interest in or experience with the vocabulary of a particular passage. Although the moving median has been criticized as an unsophisticated technique, the median score represents an actual performance score, unlike the mean. Over the long run, these teachers found that using this method helped them take into account the normal fluctuation in children’s reading and differences in difficulty among the passages used in assessments.
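The smoothing rule itself can be stated in a few lines. Here is a minimal sketch, assuming weekly WCPM scores in time order; the function name and sample data are hypothetical.

```python
# A minimal sketch of the "moving median" graphing rule described
# above: each week, only the median of the three most recent weekly
# scores is plotted, which damps one-week flukes from illness,
# inattention, or an unusually hard passage.
from statistics import median

def moving_median(weekly_scores: list[int]) -> list[int]:
    """Median of each trailing three-week window of WCPM scores."""
    return [median(weekly_scores[i - 2:i + 1])
            for i in range(2, len(weekly_scores))]

raw = [38, 52, 41, 44, 39, 58]   # raw weekly WCPM scores
print(moving_median(raw))        # -> [41, 44, 41, 44]
```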

Following standard CBM procedures, Candyce drew aim lines on each student’s graph to indicate a reasonable pattern of expected growth for the school year, based on the student’s baseline performance (the median score from three separate assessments) in the fall and the projected increase in words per week appropriate for that individual student. The aim line connects the original score and the goal score at the end of a specified period of time, calculated from information such as that presented by Fuchs et al. (1993) or the Hasbrouck and Tindal oral reading fluency norms (Hasbrouck & Tindal, 2006). Once a week the teacher plotted each student’s CBM WCPM score on that student’s graph. At times, an aim line would be redrawn if the original goal was determined to be too ambitious or too easy. The teacher made this change only after trying three modifications in the student’s instructional program.

Ihnot and her colleagues used CBM graphs to guide instructional decision making. If a student’s score fell below the plotted aim line for three consecutive weeks, indicating less-than-expected progress, the school policy mandated that the teacher consider making a change in the student’s instructional program. When this change was implemented, a vertical line was drawn on the graph to indicate that there had been a modification in the student’s program. An intervention was continued for at least three weeks to determine if it was having a positive effect on the student’s performance. If the effect was positive, the intervention remained in place. If there was no effect, another program change was made and another vertical line was drawn on the graph. If this pattern of little or no growth continued for several weeks, a referral for a student staffing was made.
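To make the rule concrete, here is a hedged Python sketch of this decision logic, treating the aim line as a straight line from baseline to goal; the function names and sample numbers are illustrative, not the school’s actual software.

```python
# A hedged sketch of the decision rule described above: three
# consecutive weekly scores below the aim line trigger consideration
# of a program change. Names and numbers are illustrative only.
def aim_line_value(baseline: float, goal: float,
                   total_weeks: int, week: int) -> float:
    """Expected WCPM at a given week along a straight aim line."""
    return baseline + (goal - baseline) * week / total_weeks

def needs_program_change(scores: list[float], baseline: float,
                         goal: float, total_weeks: int) -> bool:
    """True if the three most recent weekly scores all fall below the aim line."""
    if len(scores) < 3:
        return False
    recent = list(enumerate(scores, start=1))[-3:]
    return all(s < aim_line_value(baseline, goal, total_weeks, w)
               for w, s in recent)

# Example: baseline 46 WCPM, goal 96 WCPM over 25 weeks; the last
# three weekly scores all sit below the aim line, so a change is due.
print(needs_program_change([48, 49, 49, 49], 46, 96, 25))  # -> True
```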

CBM graphs were also used to keep students informed of their progress (or lack of progress) and to show parents during conferences. The benefit of including students in their own progress monitoring by having them record scores and analyze graphs has strong support from research (Fuchs, Deno, & Mirkin, 1984; Stecker & Fuchs, 2000).

Six Case Studies

Candyce selected graphs from six of her students to demonstrate the variety of concerns and interventions addressed when teachers use CBM for decision making. The students’ identities have been masked to protect their privacy. Candyce Ihnot served each of these first, second, or third graders in either a special education or Title I remedial program.

[Figure 1]

Jeff
(Figure 1) was a second grader reading about 1 year below grade level. He was monitored with CBM assessments in second-grade materials because his instructional goal was to be reading at grade level by the end of the year. Jeff’s graph shows that he started the year reading only 46 WCPM, just slightly below the 50th percentile for second-grade readers according to the oral reading fluency norms developed by Hasbrouck and Tindal (2006). Candyce set an ambitious goal for Jeff: to improve his reading by 2 WCPM each week across 25 weeks of instruction. An aim line was then drawn on Jeff’s graph, from his initial score of 46 WCPM to his goal score of 96 WCPM (25 weeks of instruction × 2 words per week = an overall gain of 50 WCPM; 46 WCPM + 50 WCPM = 96 WCPM).

Jeff demonstrated very good progress in the first few weeks of his reading program. His initial progress surpassed that of the other students in his reading group, but when his CBM graph indicated that his progress had leveled off around the fourth week of school, Candyce consulted with Jeff’s classroom teacher. They decided to move Jeff to a higher reading group. The vertical line on Jeff’s graph documents this program change (Intervention Line A). As can be seen on his graph, this intervention made little difference in Jeff’s weekly CBM scores. Based on their knowledge of Jeff as a student, along with the CBM data, the teachers decided to make another program change 4 weeks later to help Jeff reach his goal of reading on grade level by the end of the year. This time, the difficulty of the instructional materials was increased (Intervention Line B). This approach seemed to do the trick; Jeff’s scores started climbing again. In the spring, Jeff missed several days of school due to illness, so the graph is blank for the period when he missed his CBM assessments (Weeks 18–20). On his return, Jeff’s performance initially declined from where he left off, but his upward growth resumed by the second week after his return.

[Figure 2]

Marlene
(Figure 2) was a third grader who had been identified with learning disabilities. However, her teachers believed that Marlene’s problems were caused primarily by her frequent and extended absences. Marlene’s CBM graph documented the serious attendance problem. She was frequently absent on the day of the week designated for the CBM monitoring, so she was often tested on a different day. Although her sporadic scores showed that Marlene was on track according to her aim line for the first 12 weeks, both teachers believed that she could do even better. Their hypothesis was that Marlene’s poor attendance was the key factor, but they initially tried some school-based interventions anyway, including raising the difficulty level of the materials and increasing her instructional performance goal.

Marlene’s graph documents four different interventions (Intervention Lines A–D) implemented with little effect. Finally, the school’s social worker was alerted and a truancy letter was sent to the student’s home (Intervention Line E). At that point, Marlene’s attendance improved and her reading scores began to increase steadily. The CBM graph was later used at a parent conference to show Marlene’s parents that regular school attendance was truly important and clearly made a difference in their daughter’s reading. Marlene’s graph also shows a decline in the spring, which is common among the population of students in this school.

[Figure 3]

Mary
(Figure 3), a second-grade student, initially made excellent progress with her reading, exceeding her teachers’ expectations. At Week 5, Mary’s performance fell off quite dramatically and showed no significant improvement for a month. After the third week of essentially no gains, the teachers decided to move Mary to a higher level of materials (Intervention Line A). This seemed to work, and Mary again began to show steady progress. The teachers were satisfied the intervention had made the difference.

A few weeks later, Mary’s father came to a parent conference (documented with Intervention Line B). CBM graphs were always reviewed during parent-teacher conferences, so the teachers showed Mary’s reading graph to her father. The earlier month-long slump in his daughter’s reading was pointed out, and the teachers discussed how a program change turned that around. Mary’s father examined the graph carefully, and then quietly informed the teachers that this dramatic dip in Mary’s performance coincided exactly with the time when her mother unexpectedly left the family. The teachers saw this as evidence that, although most often a student’s academic performance is directly related to school-based activities and events, the influence of home is also very powerful. In this case, the improvement in Mary’s reading following the intervention may indeed have been influenced by the program change, or it may have simply reflected her adjustment to her new situation at home. Both teachers were impressed at the sensitivity of this simple measure to capture the effects of such an important occurrence in a child’s life, and they began to include this new awareness in their future interpretations of CBM graphs.

[Figure 4]

Michael
(Figure 4) was a second grader in Candyce’s Title I reading program who received daily instruction in first-grade materials. At the start of the year, Michael was showing steady progress that exactly matched his aim line until suddenly he made a large gain in performance at Week 8. Neither of Michael’s teachers could account for this gain instructionally, so they hypothesized that this jump may have been a developmental change. Wanting to capitalize on this improvement, Michael’s teachers discussed with him the possibility of moving to a higher reading group because he was now reading so well. Michael was not enthusiastic, but he agreed, with reluctance, to give it a try (Intervention Line A). The graph shows that this move was not beneficial to Michael, so he was returned to his original group where he again showed positive gains (Intervention Line B). Michael’s teachers believed that he was simply more comfortable in a situation in which he was a top performer. Both teachers were convinced that without the CBM data they would have been inclined to leave Michael in the higher group and would have spent the remainder of the year encouraging him to try harder.

[Figure 5]

Lisa
(Figure 5) was a second-grade student who was reading so poorly at the beginning of the year (less than 10 WCPM) that she could not be timed for the assessment with second-grade materials. Consequently, her CBM progress-monitoring assessments were conducted with first-grade-level reading materials (as shown in Figure 5). Her teachers considered referring her for special education services. After Lisa showed little progress on her graph, her teachers made an instructional change. She was placed in a fluency-building program (a combination of reading along with an audiotape, repeated readings, and daily progress monitoring with a performance graph; Intervention Line A) and made immediate gains. Teachers had noted that first grade students receiving instruction in this fluency intervention were often able to catch up in reading skill with their grade-level peers. Her teachers wanted to ensure that Lisa was also making this kind of progress, so at Week 14 they changed the level of monitoring materials to second grade (Intervention Line C). To indicate this change on the graph, a new performance baseline was established and a new aim line drawn. Lisa’s progress in this more difficult level continued, although at a lower level and slower rate, and she was able to continue without special education.

[Figure 6]

Wendy’s graph
(Figure 6) is an example of how Candyce used CBM procedures with first-grade students. Because most first graders are not reading fluently at the beginning of the year, early progress in reading can be measured with weekly assessments of their ability to correctly identify the sounds of letters in 1-minute timings. Some programs use letter names for first-grade CBM assessments, but because the focus of this school’s program for first graders was phonics and phonemic awareness, the teachers believed that knowledge of letter sounds was a better match between instruction and assessment.

Wendy was placed in the reading readiness program, and her graph showed steady progress on identifying letter sounds. By March, her CBM performance indicated that she would most likely be successful in reading, so her graph was marked to indicate a change to an assessment measuring oral reading fluency in WCPM (Intervention Line A). Wendy started this new graph with 3 days of baseline data and a two-word-per-week aim line.

Summary

A large body of evidence has established the reliability and validity of CBM as well as its potential value in improving the instruction of students who struggle with reading, including students with dyslexia. Unfortunately, there is evidence that few teachers or specialists use this powerful tool. This article described the transformation of one teacher from a frustrated and reluctant user of CBM who initially felt forced to incorporate this new procedure into her practice, to one who enthusiastically embraced it, used it daily as part of her instructional routine, and went so far as to say, “I just cannot imagine teaching without it.” Perhaps this one teacher’s experience and the six illustrative case studies can help more teachers find a way to add CBM to their repertoire of effective and valuable professional tools.

References

Fuchs, L. S., Deno, S. L., & Mirkin, P. K. (1984). Effects of frequent curriculum-based measurement and evaluation on pedagogy, student achievement, and student awareness of learning. American Educational Research Journal, 21, 449–460.

Fuchs, L. S., Fuchs, D., Hamlett, C. L., Walz, L., & Germann, G. (1993). Formative evaluation of academic progress: How much growth should we expect? School Psychology Review, 22, 27–48.

Hasbrouck, J., & Tindal, G. A. (2006). Oral reading fluency norms: A valuable assessment tool for reading teachers. The Reading Teacher, 59(7), 636–644.

Hasbrouck, J. E., Woldbeck, T., Ihnot, C., & Parker, R. I. (1999). One teacher’s use of curriculum-based measurement: A changed opinion. Learning Disabilities Research & Practice, 14(2), 118–126.

Stecker, P. M., & Fuchs, L. S. (2000). Effecting superior achievement using curriculum-based measurement: The importance of individual progress monitoring. Learning Disabilities Research & Practice, 15, 128–134.


Adapted from Hasbrouck, J. E., Woldbeck, T., Ihnot, C., & Parker, R. I. (1999). One teacher’s use of curriculum-based measurement: A changed opinion. Learning Disabilities Research & Practice, 14(2), 118–126.

This article was originally published in Perspectives on Language and Literacy, vol. 33, No. 2, Spring 2007, copyright by The International Dyslexia Association. Used with permission.


