
Progress Monitoring Within a Response-to-Intervention Model


The purpose of this article is to discuss progress monitoring within a Response-to-Intervention (RTI) model and to assist the reader in making informed decisions when selecting and interpreting progress-monitoring measures. To that end, this article answers the following questions:

  1. What is progress monitoring?
  2. How does progress monitoring work in RTI?
  3. What are the elements of effective progress-monitoring measures?
  4. What are some common progress-monitoring measures?
  5. What progress-monitoring measures were used in the RTI models in our research review of field studies?
  6. What can we conclude from the research and what directions look promising for future research?

What Is Progress Monitoring?


In the context of an RTI prevention model, progress monitoring is used to assess the progress or performance of students in those areas in which universal screening identified them as at risk for failure (e.g., reading, mathematics, social behavior). It is the method by which teachers or other school personnel determine whether students are benefitting appropriately from the typical (e.g., grade-level, locally determined) instructional program, identify students who are not making adequate progress, and help guide the construction of effective intervention programs for students who are not profiting from typical instruction (Fuchs & Stecker, 2003). Although progress monitoring is typically implemented to follow the performance of individual students who are at risk for learning difficulties, it can also follow an entire classroom of students (Fuchs & Fuchs, 2006).

How Does Progress Monitoring Work in RTI?


As soon as a student is identified as at risk for achievement deficits by the universal screening measure, his or her progress should be monitored in relation to Tier 1 instruction (Fletcher, Lyon, Fuchs, & Barnes, 2007). Progress should be monitored frequently: at least monthly, but ideally weekly or biweekly (Fuchs & Fuchs, 2006). A student's progress is measured by comparing his or her expected rate of learning (e.g., based on local or national norms) with his or her actual rate of learning (Fuchs, Fuchs, & Zumeta, 2008). A teacher can use these measurements to gauge the effectiveness of instruction and to adjust instructional techniques to meet the needs of the individual student. A student who is not responding adequately to Tier 1 instruction moves on to Tier 2 and increasingly intensive levels of intervention and instruction. The currently recommended period for measuring response to Tier 1 instruction is 8–10 weeks (Fuchs & Fuchs, 2005; McMaster & Wagner, 2007; Vaughn, Linan-Thompson, & Hickman, 2003), and nonresponsiveness is typically determined by a percentile cut on norm-referenced tests (e.g., < 20th percentile) or a cut score on a curriculum-based measurement (CBM).
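To make the comparison of expected and actual rates of learning concrete, here is a minimal sketch in Python. It is an illustration under stated assumptions, not a prescribed RTI procedure: the weekly scores, the norm-based expected gain of 1.5 points per week, the cut score of 40, and the combined rate-and-level rule are all hypothetical values and choices of ours.

```python
# Minimal sketch of a nonresponsiveness check (illustrative values only).
# Assumptions: weekly CBM scores, an assumed norm-based expected weekly
# gain, and an assumed cut score; none of these come from the article.

def weekly_growth(scores):
    """Average week-to-week gain across the monitoring period."""
    return (scores[-1] - scores[0]) / (len(scores) - 1)

def is_nonresponsive(scores, expected_rate, cut_score):
    """Flag a student whose growth rate and final level both fall short."""
    actual_rate = weekly_growth(scores)
    return actual_rate < expected_rate and scores[-1] < cut_score

# Hypothetical 8 weeks of Tier 1 progress-monitoring data.
scores = [22, 23, 23, 25, 24, 26, 25, 27]
print(weekly_growth(scores))                                       # ~0.71 points/week
print(is_nonresponsive(scores, expected_rate=1.5, cut_score=40))   # True
```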

According to the National Center on Student Progress Monitoring, progress monitoring has the following benefits when it is implemented correctly: 1) students learn more quickly because they are receiving more appropriate instruction; 2) teachers make more informed instructional decisions; 3) documentation of student progress is available for accountability purposes; 4) communication improves between families and professionals about student progress; 5) teachers have higher expectations for their students; and, in many cases, 6) there is a decrease in special education referrals. Overall, progress monitoring is relevant for classroom teachers, special educators, and school psychologists alike because the interpretation of these assessment data is vital when making decisions about the adequacy of student progress and formulating effective instructional programs (Fuchs, Compton, Fuchs, & Bryant, 2008).

Elements of Effective Progress-Monitoring Measures


To be effective, progress-monitoring measures must be available in alternate forms, comparable in difficulty and conceptualization, and representative of the performance desired at the end of the year (Fuchs, Compton, Fuchs, & Bryant, 2008). Measures that vary in difficulty and conceptualization over time may produce inconsistent results that are difficult to quantify and interpret. Likewise, using the same measure for each administration may produce a testing effect, wherein performance on a subsequent administration is influenced by student familiarity with the content. By using measures that have alternate forms and are comparable in difficulty and conceptualization, a teacher can use slope (i.e., the rate of change in academic performance across time) to quantify rate of learning (Fuchs & Fuchs, 2008). Slope can also be used to measure a student's response to a specific instructional program, signaling a need for program adjustment when responsiveness is inadequate (Fuchs et al., 2008).
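As an illustration of how slope quantifies rate of learning, the sketch below fits an ordinary least-squares trend line to weekly probe scores. The scores are hypothetical, and in practice teachers typically rely on graphing tools or published growth norms rather than hand computation.

```python
# Illustrative sketch: quantify rate of learning as the least-squares
# slope of weekly CBM probe scores (score points per week).

def slope(scores):
    """Ordinary least-squares slope of scores regressed on week number."""
    n = len(scores)
    weeks = range(n)
    mean_x = sum(weeks) / n
    mean_y = sum(scores) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(weeks, scores))
    den = sum((x - mean_x) ** 2 for x in weeks)
    return num / den

scores = [18, 20, 19, 22, 24, 23, 26, 27]   # eight hypothetical weekly probes
print(round(slope(scores), 2))              # ~1.27 points gained per week
```

A least-squares slope uses every data point rather than only the first and last scores, so a single unusually good or bad probe has less influence on the estimated rate of learning.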

Effective progress-monitoring measures should also be short and easily administered by a classroom teacher, special education teacher, or school psychologist (Fuchs & Stecker, 2003). According to Fletcher et al. (2007), there is much research to support the use of short, fluency-based probes in deficit areas such as word reading fluency and accuracy, mathematics, and spelling. However, for areas such as reading comprehension and composition, there is less research support for specific kinds of probes because these domains demonstrate less rapid change and require methods for assessing progress over longer periods of time (Fletcher et al., 2007; McMaster & Wagner, 2007).

Common Progress-Monitoring Measures


Progress can be monitored by a variety of methods. From a norm-referenced standpoint, it is possible to use widely available assessments such as the Test of Word Reading Efficiency (TOWRE; Torgesen et al., 1999) or the Woodcock-Johnson III Tests of Achievement (Woodcock, McGrew, & Mather, 2001). With such tests, alternate forms are available to demonstrate student improvement over time, but administrations are usually at least three months apart (Fletcher et al., 2007). Other measures, such as the Dynamic Indicators of Basic Early Literacy Skills (DIBELS; Good, Simmons, & Kame'enui, 2001), have been reviewed by the National Center on Student Progress Monitoring and vary considerably in reliability, validity, and other key progress-monitoring standards.

CBM, one approach to progress monitoring, has the best-supported measures in the research base. According to Fuchs and Fuchs (2006),
More than 200 empirical studies published in peer-reviewed journals (a) provide evidence of CBM's reliability and validity for assessing the development of competence in reading, spelling, and mathematics and (b) document CBM's capacity to help teachers improve student outcomes at the elementary grades (p. 1).
CBM is a form of classroom assessment that 1) describes academic competence in reading, spelling, and mathematics; 2) tracks academic development; and 3) improves student achievement (Fuchs & Stecker, 2003). It can be used to determine the effectiveness of instruction for all students and to enhance educational programs for students who are struggling (McMaster & Wagner, 2007). This body of research indicates that CBM produces accurate, meaningful information about students' academic levels and growth, is sensitive to student improvement, and shows that when teachers use CBM to inform their instructional decisions, students achieve more (Fuchs & Fuchs, 2006).

Fuchs and Stecker (2003) warn that most classroom assessment is based on mastery of a series of short-term instructional objectives, or "mastery measurement." To implement this type of assessment, the teacher determines the educational sequence for the school year and designs criterion-referenced tests to match each step in that sequence. According to Fuchs and Stecker, problems with mastery measurement include: 1) the hierarchy of skills is logical, not empirical; 2) assessment does not reflect maintenance or generalization; 3) measurement methods are designed by teachers, with unknown reliability and validity; and 4) the measurement framework is highly associated with a particular set of instructional methods. CBM avoids these problems by making no assumptions about instructional hierarchy (so it fits with any instructional approach) and by incorporating automatic tests of retention and generalization. According to Fuchs and Fuchs (2006), CBM and mastery measurement differ in another significant way:
CBM also differs from mastery measurement because it is standardized; that is, the progress monitoring procedures for creating tests, for administering and scoring those tests, and for summarizing and interpreting the resulting database are prescribed. By relying on standardized methods and by sampling the annual curriculum on every test, CBM produces a broad range of scores across individuals of the same age. The rank ordering of students on CBM corresponds with rank orderings on other important criteria of student competence. For example, students who score high (or low) on CBM are the same students who score high (or low) on the annual state tests. For these reasons, CBM demonstrates strong reliability and validity. At the same time, because each CBM test assesses the many skills embedded in the annual curriculum, CBM yields descriptions of students' strengths and weaknesses on each of the many skills contained in the curriculum. These skills profiles also demonstrate reliability and validity (p. 2).
The tasks measured by CBM include 1) pre-reading (phoneme segmentation fluency; letter sound fluency); 2) reading (word identification fluency; passage reading fluency; maze fluency); 3) mathematics (computation; concepts and applications); 4) spelling; and 5) written expression (correct word sequences).
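As a concrete example of how one of these tasks is commonly scored, the sketch below computes a words-correct-per-minute score for a passage reading fluency probe. The scoring convention (words attempted minus errors, scaled to a one-minute rate) is a common one for oral reading fluency, but the function name and sample numbers here are hypothetical.

```python
# Illustrative sketch: scoring a CBM passage reading fluency probe as
# words correct per minute (WCPM). Sample values are hypothetical.

def words_correct_per_minute(words_attempted, errors, seconds=60):
    """Words read correctly, scaled to a one-minute rate."""
    correct = words_attempted - errors
    return correct * 60 / seconds

# A student reads 87 words in 60 seconds with 4 errors.
print(words_correct_per_minute(87, 4))   # 83.0 WCPM
```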

Progress Monitoring in Field Studies


In our research review of RTI field studies, all but one study mentioned progress monitoring. However, the specific progress-monitoring measures and the frequency of their use varied considerably across studies. Table 1 provides information (e.g., type of measure and frequency) on the progress-monitoring measures used in each of the 11 studies in our review.


Table 1: Progress Monitoring in the Field Studies

Authors | Model Name* | Progress Monitoring Mentioned? | Type | How Often?
Ardoin et al. (2005) | SPMM | Yes | CBM Math (multiplication, addition, subtraction) | Not reported
Bollman et al. (2007) | SCRED | Yes | CBM Math and CBM Reading | Monthly (students having some difficulty) or weekly (students having great difficulty)
Callender (2007) | RBM | Yes | DIBELS; CBM Math; CBM Writing | Every 3 to 4 weeks
Fairbanks et al. (2007) | BSM | Yes | Check-in/check-out card | 2 to 4 times a week
Kovaleski et al. (1999) | IST | Yes | Academic Learning Time (ALT) checklist | 45 days and 80 days after initial observation
Marston et al. (2003) | MPSM | Yes | CBM Reading | Weekly
O'Connor et al. (2005) | TRI | Yes | WRMT-R subtests; CBM Reading | Not reported
Peterson et al. (2007) | FSDS | Yes | CBM Reading | Weekly
Telzrow et al. (2000) | IBA | No | N/R | Not reported
VanDerHeyden et al. (2007) | STEEP | Yes | CBM Reading | 3 probes (frequency not reported)
Vaughn et al. (2003) | EGM | Yes | TPRI; CBM Reading | Weekly
*Model names:

SPMM - Standard-protocol mathematics model
SCRED - St. Croix River education district model
RBM - Idaho results-based model
BSM - Behavior support model
IST - Pennsylvania instructional support teams
MPSM - Minneapolis problem-solving model
TRI - Tiers of reading intervention
FSDS - Illinois flexible service delivery system model
IBA - Ohio intervention-based assessment
STEEP - System to enhance educational performance
EGM - Exit group model

Conclusion and Directions for Future Research


Progress monitoring is paramount in determining whether students are benefitting appropriately from the typical instructional program, identifying students who are not making adequate progress, and guiding the construction of effective intervention programs for students who are not profiting from typical instruction. However, it is important to note that while CBM and other measures can be helpful tools for monitoring progress, there are some potential challenges to successful implementation. Teachers must be trained to use these assessments effectively, as well as to use the data to quantify rates of progress and, subsequently, adjust the educational program for struggling students (Fuchs, Fuchs, & Zumeta, 2008). Without that training, the usefulness of any progress-monitoring measure is greatly limited. It is crucial that schools and districts support data-driven approaches and make training available to all teachers.


References


Fletcher, J. M., Lyon, G. R., Fuchs, L. S., & Barnes, M. A. (2007). Learning disabilities: From identification to intervention. New York: The Guilford Press.

Fuchs, D., Compton, D. L., Fuchs, L. S., & Bryant, J. (2008). Making "secondary intervention" work in a three-tier responsiveness-to-intervention model: Findings from the first-grade longitudinal reading study at the National Research Center on Learning Disabilities. Reading and Writing: An Interdisciplinary Journal, 21, 413–436.

Fuchs, D., & Fuchs, L. S. (2005). Responsiveness-to-intervention: A blueprint for practitioners, policymakers, and parents. Teaching Exceptional Children, 38, 57–61.

Fuchs, D., & Fuchs, L. S. (2006). Introduction to responsiveness-to-intervention: What, why, and how valid is it? Reading Research Quarterly, 41, 93–99.

Fuchs, L. S., & Fuchs, D. (2008). The role of assessment within the RTI framework. In D. Fuchs, L. S. Fuchs, & S. Vaughn (Eds.), Response to intervention: A framework for reading educators (pp. 27–49). Newark, DE: International Reading Association.

Fuchs, L. S., & Stecker, P. M. (2003). Scientifically based progress monitoring. Washington, DC: National Center on Student Progress Monitoring. Retrieved May 15, 2009.

Good, R. H., Simmons, D. C., & Kame'enui, E.J. (2001). The importance and decision-making utility of a continuum of fluency-based indicators of foundational reading skills for third-grade high-stakes outcomes. Scientific Studies of Reading, 5, 257–288.

McMaster, K. L., & Wagner, D. (2007). Monitoring response to general education instruction. In S. R. Jimerson, M. K. Burns, & A. M. VanDerHeyden (Eds.). Handbook of response to intervention: The science and practice of assessment and intervention (pp. 223–233). New York: Springer.

Torgesen, J. K., Alexander, A. W., Wagner, R. K., Rashotte, C. A., Voeller, K., & Conway, T. (2001). Intensive remedial instruction for children with severe reading disabilities: Immediate and long-term outcomes from two instructional approaches. Journal of Learning Disabilities, 34, 33–58.

Torgesen, J. K., Wagner, R. K., Rashotte, C. A., Rose, E., Lindamood, P., & Conway, T. (1999). Preventing reading failure in young children with phonological processing disabilities: Group and individual responses to instruction. Journal of Educational Psychology, 91, 579–593.

Vaughn, S., Linan-Thompson, S., & Hickman, P. (2003). Response to instruction as a means of identifying students with reading/learning disabilities. Exceptional Children, 69, 391–409.

Woodcock, R. W., McGrew, K. S., & Mather, N. (2001). Woodcock-Johnson III. Itasca, IL: Riverside Publishing.
