Using CBM-Reading Assessments to Monitor Progress

Learning to read is one of the great achievements of childhood, and listening to a child read a story fluently, with excellent expression, is a joy. For some children, however, learning to read is not an easy process. Reading is an extraordinarily complex cognitive task. It encompasses a set of intricately orchestrated, fast-operating processes that must work together precisely—translating letters into sounds; integrating sound, letter pattern, and word meanings together to construct larger meanings; making connections between ideas in text; linking text ideas to prior knowledge; and making inferences to fill in missing information. These activities occur simultaneously, and problems in any area can lead to a total or partial breakdown. A lot can go wrong. The road to reading is often treacherous for those with dyslexia. These individuals require intense, precisely focused instruction.

Teaching Struggling Readers Is a Challenge

Children who struggle with reading are a heterogeneous group. They encounter difficulty with different aspects of reading, and they acquire specific reading skills at different rates. Some encounter difficulty with learning to decode, some struggle to develop fast, automatic word recognition, some face challenges in linking ideas in text, and some lack background knowledge that allows them to interpret an author’s message. Moreover, struggling readers respond differently to reading instruction, even to a specific reading lesson. They also differ in motivation levels for engaging in reading and in the considerable practice that success in reading requires. These individual differences mean that struggling readers require different kinds of instruction at different times. And, here is the crux of the problem—for an individual student, it is not possible to know ahead of time which instructional approach will lead to the greatest success in learning to read; choosing the best approach requires ongoing assessment and analysis of the information.

How Progress Monitoring Can Help

Teachers realize that there is never sufficient instructional time, and they must get the most out of every lesson. Teachers can maximize their effectiveness by adopting a scientific stance toward instruction—gathering information, thoughtfully analyzing their students’ learning needs, and theorizing about the reading instruction that would be most productive. They think about whether a student should (a) practice linking specific letters to sounds (graphemes to phonemes), (b) practice applying those links in sounding out unfamiliar words, (c) practice reading word lists, spelling, vocabulary, text reading, or making connections between ideas in text to develop automaticity in those areas, or (d) build background knowledge.

Teachers theorize about the amount of lesson time that should be devoted to these components for each student, then design and teach in a way that is consistent with their analysis. For teachers to operate like scientists, however, they must also test their theories by collecting data through monitoring and evaluating students’ reading growth. Using these data, teachers can ask, “Is instruction producing satisfactory growth in my students’ reading achievement?” If the answer is “yes,” they can continue with the instructional elements that are working. If the answer is “no,” they can replace old instructional practices with ones that work better. Careful progress monitoring and analysis of student performance are the key elements of a scientific approach to instruction that has the most promise to meet the unique needs of students with dyslexia.

How to Monitor Progress in Reading

How do teachers know whether their students are improving satisfactorily in reading achievement? The most common means of monitoring progress is to carefully observe students’ performance during reading instruction. As they instruct, teachers ask themselves questions. Are students demonstrating growth during the lesson? Are they mastering particular letter-sound correspondences? Are they accurate and fluent in sounding out new words? Can they read word lists accurately and swiftly? Do they read text smoothly? Do some students struggle with some aspects of the lesson? Which parts? Much can be learned by carefully observing students’ performance during reading lessons; however, it is more informative to actually measure reading performance. It is a lot like tracking weight gain. Recording the calories consumed is not as informative as climbing on the scale every day or two. The trick is finding a suitable reading achievement measure that can be given repeatedly to measure student progress.

Norm-referenced reading achievement tests will not suffice because they cannot be given repeatedly throughout the year; they require too much time to administer (taking time from instruction); they are not sensitive to reading gains over intervals of a few weeks; and, rather than measuring reading growth, they merely compare an individual’s performance to a peer group. By contrast, Curriculum-Based Measures in Reading (CBM-R; Deno, 1985) can be given frequently, take little time to administer, are sensitive to reading growth, and are well correlated with reading comprehension tests. CBM-R uses the number of words read correctly (WRC) to paint a picture of a student’s overall reading proficiency.

Because reading aloud is such a complex endeavor requiring coordination among several cognitive processes, it serves as an index of the student’s general reading achievement and is extremely useful for monitoring a student’s response to instruction (Fuchs, Fuchs, Hosp, & Jenkins, 2001). Just as a person’s body temperature is one index of his or her general health, CBM-R can indicate whether students are progressing satisfactorily or whether a problem needs to be addressed.

How to Monitor with CBM-R

The following steps describe how to use CBM-R to monitor students’ progress in reading.


Finding the Right Reading Passages

In using CBM to monitor reading growth, teachers measure students’ reading performance repeatedly across the school year by having them read from passages that fall within the annual curriculum (i.e., passages randomly selected from the students’ grade level). Thus, each test falls within a set range (i.e., one grade level) of difficulty.

Hence, the first step in preparing CBMs is to identify 25–30 suitable reading passages per grade level. Although passages could be selected randomly from the reading curriculum used in the classroom, standard passages are preferred for several reasons. First, within a grade level, standard passages are roughly grade equivalent (GE) in readability (e.g., they range from 2.0 to 2.9 GE). Second, using standard passages allows for comparisons across classrooms, grades, schools, districts, and states. Third, standard passages generally have undergone a process of development and revision that screens out any passage that is atypically difficult or easy. It is important to have many passages at the same level of difficulty because students will read a new passage every time their progress is monitored. Table 1 provides information on where to obtain passages for progress monitoring. Several of the sources listed provide free downloads of passages; others require a payment.


Deciding on a Measurement Level

The next step is to determine the grade level of passages to use with each student. Because most teachers and administrators want to determine how students perform in grade-level reading material, the favored practice is to monitor progress with passages at the student’s assigned grade level (e.g., give a third-grade student passages at the third-grade level). However, if a student is unable to read the assigned grade-level passage with 90% accuracy or better, then his or her performance should be monitored at the grade level of text where the student can read with 90% accuracy (e.g., a third grader may need to be monitored in first-grade passages if she cannot read third- or second-grade passages with 90% accuracy). If a student struggles with first-grade passages (less than 90% accuracy or fewer than 20 words correct), then using CBM word lists rather than passages may be appropriate. Several sources in Table 1 also provide word lists for progress monitoring students who struggle to read first-grade passages.
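The level-selection rule above can be sketched in code. This is a minimal illustration only, not part of the CBM-R materials; the function names and the `accuracy_by_grade` input are hypothetical.

```python
def accuracy(words_correct, total_words):
    """Fraction of words read correctly in a passage."""
    return words_correct / total_words

def monitoring_level(accuracy_by_grade, assigned_grade, threshold=0.90):
    """Walk down from the assigned grade to find the highest grade level
    the student reads with at least 90% accuracy (hypothetical helper)."""
    for grade in range(assigned_grade, 0, -1):
        if accuracy_by_grade.get(grade, 0.0) >= threshold:
            return grade
    return None  # below 90% even on first-grade passages: consider word lists

# A third grader reading third- and second-grade passages below 90% accuracy
# would be monitored in first-grade passages:
level = monitoring_level({3: 0.82, 2: 0.86, 1: 0.93}, assigned_grade=3)
```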


Standardized Administration and Scoring

Progress monitoring with CBM requires teachers to follow a set of standardized administration and scoring procedures. Before conducting an assessment, collect the following materials:

  • Student copy of the reading passage
  • Examiner copy of the reading passage
  • Pencil for scoring
  • Timer or stopwatch
  • Administration script

Establishing Baseline

Progress monitoring begins with a baseline, or starting point, measurement. A baseline is obtained by asking students to read three or four passages, usually in one sitting. These passages are either at a student’s grade level or at the level of difficulty where he or she can read with 90% accuracy. Teachers calculate the WRC baseline level as either the median (middle value) or the mean of the student’s scores (see “Curriculum-Based Measurement: From Skeptic to Advocate” in this issue for additional information on when to use the median rather than the mean). This baseline is the first data point on the student’s graph.
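As a small worked example, the baseline computation looks like this in Python (the three WRC scores are invented for illustration):

```python
from statistics import mean, median

# Hypothetical WRC scores from three baseline passages read in one sitting
baseline_scores = [42, 38, 47]

baseline_median = median(baseline_scores)  # robust to one atypical passage
baseline_mean = mean(baseline_scores)

print(baseline_median)  # 42 — the first data point on the student's graph
```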

Setting Goals

Typically, developing readers increase their WRC scores every year throughout the elementary grades. First graders make the largest gains (1–3 WRC per week), second graders the next largest (1–2 WRC per week), with smaller gains for students in later grades (Deno, Fuchs, Marston, & Shin, 2001). On average, students in learning disability programs and those with dyslexia gain around one WRC per week, but can gain more when they receive intensive reading instruction. Table 3 shows types of improvement goals (modest, reasonable, and ambitious) in WRC per week. After selecting a weekly improvement goal (e.g., 1.0 WRC improvement per week), compute an aimline using the formula: Goal = (Number of Weeks of Instruction x Rate of Improvement) + Baseline Median. When plotted on the student’s chart, the aimline shows the desired rate of progress from the baseline week to the end of instruction. Teachers using one of the CBM Web sites (e.g., AIMSweb, Edcheckup, DIBELS) can enter this information on-line, or they can use the University of Washington CBM-R Slope Calculator (UW Slope Calculator), which automatically creates a graph from the student’s baseline score and desired rate of improvement.
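The goal formula and aimline above can be expressed directly in code. A sketch follows; the function names are ours, not part of the CBM materials, and the example numbers are hypothetical.

```python
def end_of_year_goal(baseline_median, rate_per_week, weeks_of_instruction):
    # Goal = (Number of Weeks of Instruction x Rate of Improvement) + Baseline Median
    return weeks_of_instruction * rate_per_week + baseline_median

def aimline(baseline_median, rate_per_week, weeks_of_instruction):
    # Desired WRC for each week, from the baseline week (week 0) onward
    return [baseline_median + rate_per_week * week
            for week in range(weeks_of_instruction + 1)]

# A student with a baseline median of 42 WRC and a goal of
# 1.0 WRC improvement per week over 30 weeks of instruction:
goal = end_of_year_goal(42, 1.0, 30)  # 72.0 WRC
```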


Recording Results

After each session, record the student’s median score on a recording form and then chart it. Teachers can (a) plot it with the previous data points on a chart using pencil and paper or a graphing program, (b) use one of the CBM websites to enter the scores on-line and receive a chart of performance, or (c) download, at no expense, the UW Slope Calculator, a spreadsheet that automatically charts the scores and calculates the weekly growth slope from baseline to the most recent CBM-R score.
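A weekly growth slope of this kind is typically computed as an ordinary least-squares fit of WRC scores against weeks. A minimal sketch of that calculation follows (we have not reproduced the UW Slope Calculator’s exact method, and the scores are invented):

```python
def weekly_slope(weeks, wrc_scores):
    """Least-squares slope of WRC against week: estimated WRC gained per week."""
    n = len(weeks)
    mean_week = sum(weeks) / n
    mean_wrc = sum(wrc_scores) / n
    numerator = sum((w - mean_week) * (s - mean_wrc)
                    for w, s in zip(weeks, wrc_scores))
    denominator = sum((w - mean_week) ** 2 for w in weeks)
    return numerator / denominator

# Hypothetical scores gaining about one word per week:
slope = weekly_slope([0, 1, 2, 3, 4], [40, 42, 41, 44, 44])
```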

Common Questions About CBM-R and Progress Monitoring

How often should progress be monitored?

In general, the more frequently teachers administer CBM-R, the more accurate the estimates of reading growth. Although some authorities advocate collecting CBM-R once or twice every week, this may not be practical for some teachers. Time devoted to assessment is usually time taken from instruction, and striking the right balance between the two is important. Although more frequent assessment yields a more accurate measure of growth, there is a point of diminishing returns in the number of assessments needed to gauge growth. In fact, teachers can obtain a very good idea of students’ reading growth with less frequent measurements: CBM-R collected every three weeks provides a reasonably accurate picture of growth (Jenkins, Graff, & Miglioretti, 2006). However, there is a trade-off. Teachers who monitor progress every week need only administer one CBM passage per week, whereas a sparser schedule (e.g., measuring every 3–5 weeks) requires the student to read three or four passages on each measurement occasion to obtain a reliable estimate of achievement.

How long will it take to determine growth in general reading proficiency?

It takes longer than you would think to get a clear picture of a student’s overall reading growth. By contrast, it does not take long to ascertain if students are learning specific skills (e.g., whether students are mastering specific letter-sound correspondences, sounding out specific words, or automatically reading specific words). By closely observing students during their reading lessons, within a day or two it is possible to get a reasonable idea about whether the reading lessons are working and students are improving. Although observing a student’s lesson performance provides information about specific reading improvements, it does not indicate if his or her overall reading proficiency is changing in a measurable way. That takes longer. In fact, it takes around 9 weeks (and sometimes longer) after the baseline to determine reliably the amount of real reading growth that a student is making (Fuchs, Compton, Fuchs, & Bryant, 2006).

How can I tell how much reading progress students have made?

A simple (and free) way to determine the amount of reading growth students are making is to enter their WRC scores into the UW Slope Calculator (see Table 5 as an example). Alternatively, for a fee teachers can use one of several on-line services.

How can I tell if my students are making adequate progress?

In general, progress is adequate when a student’s weekly WRC growth is at or above his or her growth goal. This is a signal to the teacher to proceed with the current instruction. By contrast, when a student’s growth is below his or her growth goal, progress is inadequate—a signal that instruction should be changed. Table 4 provides guidelines for determining when teachers should modify instruction according to different progress-monitoring schedules. The first row shows decision rules based on a weekly monitoring schedule, the second row shows decision rules based on a biweekly monitoring schedule, and so on. Depending on the monitoring schedule, teachers may have to wait 9–12 weeks after baseline to evaluate a student’s progress, and then use either the Graphed Scores or the Calculated Slopes method to check the adequacy of student progress. The decision rules for both methods also depend on a teacher’s monitoring schedule, as illustrated in the following examples.


Robert’s Example. Table 5 shows Robert’s WRC scores displayed in the UW Slope Calculator. His teacher set a goal of 1.0 WRC growth per week, monitored progress weekly, and employed the Graphed Scores method to evaluate progress. After 9 weeks of instruction, Robert’s graph revealed three consecutive scores below the aimline. Under the guidelines in Table 4 for Graphed Scores and Weekly Progress Monitoring, three consecutive scores below the goal signal inadequate progress and prompt his teacher to adjust instruction. Alternatively, Robert’s teacher could have employed the Calculated Slopes guidelines for Weekly Progress Monitoring. Robert’s slopes at weeks 8 and 9 (.72 and .57, respectively) indicate that instruction is not strong enough and signal his teacher to make an instructional change. The advantage of the Calculated Slopes method over the Graphed Scores method is that an invalid baseline score (one that is artificially high or low) has less effect on the student’s growth estimate.
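The two decision rules in Robert’s example can be sketched as simple checks: the Graphed Scores rule flags three consecutive scores below the aimline, and the Calculated Slopes rule compares the computed weekly slope to the weekly goal. (Table 4’s full guidelines are not reproduced here; the function names and scores below are ours.)

```python
def graphed_scores_flag(scores, aimline_targets, run_length=3):
    """True when the most recent `run_length` scores all fall below the aimline."""
    recent = list(zip(scores, aimline_targets))[-run_length:]
    return len(recent) == run_length and all(s < target for s, target in recent)

def calculated_slope_flag(slope, goal_rate):
    """True when computed weekly growth falls below the weekly goal."""
    return slope < goal_rate

# Weekly monitoring with a goal of 1.0 WRC/week (hypothetical scores):
change_needed = graphed_scores_flag([48, 49, 50], [51, 52, 53])  # True
slope_check = calculated_slope_flag(0.57, 1.0)                   # True
```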


Sally’s Example. In Sally’s example shown in Table 6, her teacher set a goal of 1.0 WRC growth per week, used an Every-Three-Weeks Monitoring Schedule, and employed the Calculated Slope method to evaluate progress. She measured Sally’s reading with three passages every three weeks and entered the median of the three scores into the UW Slope Calculator. After 9 weeks of instruction, she determined that Sally’s slope (1.0) was adequate. However, after 15 weeks of instruction, Sally’s slope had fallen below her growth goal for two consecutive measurements, signaling her teacher to adjust instruction. After changing instruction, Sally’s teacher waited 9 weeks (as prescribed in Table 4) to reevaluate progress.


Making Instructional Changes

The whole point of monitoring progress is to improve instruction and student reading outcomes. CBM-R progress monitoring indicates whether students are benefiting sufficiently from instruction (i.e., meeting their growth goal) and when instruction should be adjusted. It does not tell how instruction should change, only whether the current approach is working. Exactly how instruction should change is left to the teacher’s professional judgment. This decision entails reanalyzing a student’s skills, motivation, and response to instruction, and theorizing about adjustments likely to produce more growth. Teachers should consider whether to increase intensity (allotting more time to instruction); redistribute instruction and practice to different aspects of reading (e.g., decoding, reading by sight, vocabulary, comprehension strategies); revise motivational procedures (e.g., rewarding diligence, providing more interesting text for instruction); or redesign the general instructional approach (e.g., emphasize the sociocultural meaning and purposes of literacy).


CBM-R gives the clearest picture of students’ ongoing reading growth. It is a measure that adds significantly to the insights teachers glean from observing student performance during reading lessons. It indicates how well students are responding to current instruction, when to change instruction, and if changes have worked. Research (Fuchs, Deno, & Mirkin, 1984) shows that students with reading disabilities make stronger reading gains when teachers use CBM-R. It helps us amend instruction until it is effective.


Deno, S. L. (1985). Curriculum-based measurement: The emerging alternative. Exceptional Children, 52, 219–232.

Deno, S. L., Fuchs, L. S., Marston, D., & Shin, J. (2001). Using curriculum-based measurement to establish growth standards for students with learning disabilities. School Psychology Review, 30, 507–524.

Fuchs, D. F., Compton, D. L., Fuchs, L. S., & Bryant, J. D. (2006, February). The prevention and identification of reading disability. Paper presented at the Pacific Coast Research Conference, San Diego, CA.

Fuchs, L. S., Deno, S. L., & Mirkin, P. K. (1984). The effects of frequent curriculum-based measurement and evaluation on pedagogy, student achievement, and student awareness of learning. American Educational Research Journal, 21, 449–460.

Fuchs, L. S., Fuchs, D. F., Hosp, M. K., & Jenkins, J. R. (2001). Oral reading fluency as an indicator of reading competence: A theoretical, empirical, and historical analysis. Scientific Studies of Reading, 5, 239–256.

Jenkins, J. R., Graff, J. J., & Miglioretti, D. L. (2006, February). How often must we measure to estimate oral reading growth? Paper presented at the Pacific Coast Research Conference, San Diego, CA.

This article was originally published in Perspectives on Language and Literacy, vol. 33, No. 2, Spring 2007, copyright by The International Dyslexia Association. Used with permission.
