
Making Decisions About Adequate Progress in Tier 2


The purpose of this article is to discuss the identification of students who are not progressing adequately in Tier 2 of a Response-to-Intervention (RTI) model and to help the reader make informed decisions about research-based methods and measures for Tier 3 identification. To that end, the article is divided into the following sections:

  1. Overview
  2. Prevalence of Students Not Progressing Adequately in Tier 2
  3. Methods of Identifying Students Not Progressing Adequately in Tier 2
  4. Research Evidence by Method
  5. Identification of Nonresponders in our Research Review of RTI Field Studies
  6. Conclusions and Recommendations

Overview


In an RTI framework, progress of students who are receiving Tier 2 interventions is monitored frequently (e.g., weekly or monthly) and compared to classroom averages. These progress data are used to inform instructional practice as well as make decisions about student movement between tiers of intervention. Based on these data, there are three possible outcomes for students receiving Tier 2 interventions: movement back into Tier 1, continuation of Tier 2 interventions, or movement into Tier 3 for more intensive interventions. This latter outcome includes the subset of students most at risk for academic failure and in the most need of specialized, intense supports. Because decisions about movement between tiers are so important, it is crucial to have a valid and reliable system to measure response to Tier 2 interventions.

While there is general consensus among researchers for measuring response to Tier 1 instruction (e.g., 8–10 weeks of progress monitoring; below cut score on curriculum-based measurement [CBM]), there is much less consensus for measuring response to Tier 2 instruction and deciding when to begin Tier 3. Because a number of researchers associate Tier 3 interventions with special education services (Boardman & Vaughn, 2007; D. Fuchs, Compton, Fuchs, & Bryant, 2008; D. Fuchs & Deshler, 2007; D. Fuchs, Fuchs, & Compton, 2004; Vaughn, Linan-Thompson, & Hickman, 2003), identification of students not progressing adequately in Tier 2 is critical for their academic success.

Prevalence of Students Not Progressing Adequately in Tier 2


Based on the assumption of a normal distribution, D. Fuchs and Deshler (2007) estimate that the number of students who do not show improvement in response to increasingly intensive Tier 2 interventions and are moved into Tier 3 should fall between 2% and 7% of the general population. However, there is no clear methodological definition of how or when a student is to be identified as a nonresponder to intervention, what intervention is to be used, who is to deliver the intervention, or how nonresponsiveness is to be measured. This lack of clarity creates the potential for inconsistencies in identification of students not progressing adequately in Tier 2 and for highly variable prevalence rates at the school, district, state, and national levels (D. Fuchs et al., 2008).

Methods of Identifying Students Not Progressing Adequately in Tier 2


At least six methods are currently being promoted for identifying nonresponders to Tier 2. D. Fuchs and Deshler (2007) defined five methods: (a) dual discrepancy, (b) median split, (c) final normalization, (d) final benchmark, and (e) slope discrepancy. Vaughn et al. (2003) described a sixth method: (f) exit groups. A description of each method is provided in Table 1.

Table 1: Methods of Identifying Nonresponders to Tier 2 Intervention Using Progress Monitoring Data

Method of Identification | Author(s) Introducing Method | How Are Nonresponders Identified?
Dual discrepancy | L. S. Fuchs and Fuchs (1998) | Slope of improvement during treatment and performance level at the end of treatment. Slope and performance levels below a given point (e.g., 1 SD) in comparison with classroom peers.
Median split | Vellutino et al. (1996) | Slope of improvement never meets or exceeds the rank-ordered median of the intervention group.
Final normalization | Torgesen et al. (2001) | Standard scores on a mastery test at the end of a tutoring intervention. A nonresponder would score below a given percentile rank (e.g., 25th percentile).
Final benchmark | Good et al. (2001) | Criterion-referenced benchmark at the end of the intervention. A nonresponder would score below a given benchmark (e.g., <40 on DIBELS ORF).
Slope discrepancy | D. Fuchs et al. (2004) | Slope of academic performance compared to a normative cut-point referenced by the classroom, school, district, or nation.
Exit groups | Vaughn et al. (2003) | After 30 weeks of supplemental instruction, failing three times (once every 10 weeks) to meet criteria on the TPRI and TORF measures.

Research Evidence by Method


We reviewed the empirical literature on the six methods of identifying nonresponders to Tier 2 instruction and found 11 studies. Several of the studies (e.g., D. Fuchs et al., 2008, 2004) included more than one of the methods. The results are presented here by method.

Dual Discrepancy Method

Researchers used a dual discrepancy method to identify nonresponders to intervention in six of the studies. Speece and Case (2001) and Case, Speece, and Molloy (2003) conducted studies with at-risk groups comprising students in the bottom 25% of their classrooms (Ns = 144 and 53, respectively). The researchers provided the at-risk groups with interventions for two 8-week periods. No information was reported on frequency per week or duration per session. In both studies, the general education classroom teacher implemented the intervention. The interventions, designed by the researchers and teachers, included phonics instruction and partner-reading activities. Ongoing progress was monitored using a CBM of oral reading fluency (ORF). A CBM evaluates a student's rate of progress on a given skill. An ORF probe consists of a student reading three separate passages for 1 minute each, with the number of words read correctly recorded.

In Speece and Case (2001), students were identified as nonresponders based on at least 10 ORF probes administered across the year. If their slope of progress across the year and level of performance (mean of the last two probes) at the end of the year were more than 1 SD below the slope and level of their classmates, they were designated as nonresponders. This method yielded 47 students, or a 6.7% prevalence rate. Case et al. (2003) used the same identification criteria but judged nonresponsiveness several times during the school year. This allowed them to create three groups: never dually discrepant, infrequently dually discrepant, and frequently dually discrepant (FDD). The FDD group yielded 7 nonresponders, or a 2.8% prevalence rate.
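The dual discrepancy rule can be sketched in a few lines of code. This is a hypothetical illustration, not the authors' actual procedure: the function names and probe data are invented, and the computation simply follows the general description in Speece and Case (2001), with the slope estimated by ordinary least squares and the level taken as the mean of the last two probes.

```python
import statistics

def slope(scores):
    """Least-squares slope of a student's probe scores (growth per probe)."""
    n = len(scores)
    x_mean = (n - 1) / 2
    y_mean = sum(scores) / n
    num = sum((i - x_mean) * (y - y_mean) for i, y in enumerate(scores))
    den = sum((i - x_mean) ** 2 for i in range(n))
    return num / den

def is_dually_discrepant(student, classmates, sd_cut=1.0):
    """Nonresponder only if BOTH slope and final level (mean of the last two
    probes) fall more than `sd_cut` SDs below the classroom distribution."""
    c_slopes = [slope(c) for c in classmates]
    c_levels = [statistics.mean(c[-2:]) for c in classmates]
    slope_cut = statistics.mean(c_slopes) - sd_cut * statistics.stdev(c_slopes)
    level_cut = statistics.mean(c_levels) - sd_cut * statistics.stdev(c_levels)
    return slope(student) < slope_cut and statistics.mean(student[-2:]) < level_cut
```

Against a classroom of steadily improving peers, a student with a flat, low trajectory is flagged on both criteria, while a low-starting student who grows at the class rate is not; this "both criteria" requirement is what distinguishes dual discrepancy from a simple level or slope cut.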

McMaster, Fuchs, Fuchs, and Compton (2005); D. Fuchs et al. (2004; 2nd grade); and D. Fuchs et al. (2008) conducted studies with intervention groups (Ns = 176, 48, and 252, respectively) comprising the lowest performing students in each classroom based on a CBM probe of rapid letter naming (RLN). An RLN probe consists of students quickly naming upper- and lowercase letters in black print. In each of the three studies, interventions consisted of Peer-Assisted Learning Strategies (PALS). PALS is a structured peer tutoring program that emphasizes phonological awareness, decoding, and fluency (D. Fuchs et al., 2001). Teachers paired higher performing readers with lower performing readers, and activities were conducted in these pairs. In each study, PALS was used for three 35-minute sessions per week, for 7 weeks (McMaster et al., 2005), 10 weeks (D. Fuchs et al., 2008), or 10–12 weeks (D. Fuchs et al., 2004; 2nd grade). In each study, students not making progress in the first 2 weeks of PALS were given one-to-one or small-group tutoring, with graduate assistants serving as the tutors. Progress was monitored weekly by two word-level CBM measures, a nonsense word fluency (NWF) probe and a word identification fluency (WIF) probe.

In each of the three studies, levels (i.e., mean correct words per minute on the last two probes) and slopes (i.e., rate of growth in correct words per minute across monitoring occasions) were calculated for each at-risk and average student. In McMaster et al. (2005), nonresponders were identified as those 0.50 SD below the average readers in level and slope, yielding 66 students, or a 13.3% prevalence rate. In D. Fuchs et al. (2008), nonresponders were identified as those 1 SD below average readers in level and slope, yielding an 8.6% prevalence rate. In D. Fuchs et al. (2004; 2nd grade), nonresponders demonstrated growth below 1.5 words per week (slope) and a level below a 75-word benchmark. This yielded a prevalence rate of 2.2%.

Burns and Senesac (2005) conducted a study with students scoring at or below the 25th percentile on a district-administered test of reading (N = 151). Two interventions were used: the Help One Student to Succeed (HOSTS) program (Blunt & Gordon, 1998) and Title 1 support. HOSTS is a structured comprehensive literacy program designed to supplement classroom reading instruction, delivered by trained tutors (Bryant, Edwards, & LeFlies, 1995). Students received four 30-minute tutoring sessions per week for 15 weeks. Title 1 intervention varied: students received either weekly individual reading instruction from a Title 1 consultant or small-group instruction. Half the sample received HOSTS, and the other half Title 1. Progress was monitored using the Dynamic Indicators of Basic Early Literacy Skills (DIBELS; Good & Kaminski, 2002). DIBELS is an assessment system using frequent measurement to assess progress in early literacy skill development; it includes measures of NWF and ORF. DIBELS ORF was measured twice during the 15-week period.

Students were designated as nonresponders when they scored in the at-risk range on an end-of-year DIBELS administration (<20 words per minute) and their fluency growth (slope) was at or below the nonresponsiveness criterion (1 SD below average). This yielded a total of 18 students, or a 3% prevalence rate.

Median Split Method

Researchers used a median split method to identify nonresponders to intervention in four of the studies. Vellutino et al. (1996) conducted a study with students (N = 186) demonstrating low levels of reading ability as rated by their teachers (no specific criteria were provided). The intervention was daily one-to-one tutoring (30 minutes per session) for a minimum of 15 weeks to a maximum of 25 weeks (typically 70–80 sessions). Tutors were non-school personnel certified in reading, elementary education, or both. Student progress was measured on the Woodcock Reading Mastery Test–Revised (WRMT-R; Woodcock, 1987) several times over a 2-year period.

To identify nonresponders, Vellutino et al. (1996) charted the slope of improvement for each student over each administration of the WRMT-R. They then rank ordered the slopes of each child and determined the median. Nonresponders’ slope never met or exceeded the median. This yielded a total of 19 students, or a 1.4% prevalence rate.

D. Fuchs et al. (2004; 1st grade and 2nd grade) and D. Fuchs et al. (2008) also used the median split method to identify nonresponders. D. Fuchs et al. (2004; 1st grade) identified their sample (N = 54) from 20 1st-grade classrooms. The participants were the lowest performing 2–3 students per classroom as measured by ORF probes. These 54 students were assigned to one-to-one tutoring or PALS in the classroom. The tutoring sessions occurred for 10–12 weeks, three times per week, for 30–35 minutes per session. Ongoing progress was measured using a WIF probe. Participants, interventions, and measures of D. Fuchs et al. (2004; 2nd grade) and D. Fuchs et al. (2008) were described earlier.

In each of the three studies, slope of improvement was charted for each student over each WIF probe. The slopes were rank ordered and the median was determined. Students whose slope never met or exceeded the median were defined as nonresponders. This yielded prevalence rates of 3.5% (D. Fuchs et al., 2004; 1st grade), 3.5% (D. Fuchs et al., 2004; 2nd grade), and 9.8% (D. Fuchs et al., 2008).

Final Normalization Method

Researchers used a final normalization method to identify nonresponders to intervention in four of the studies. Torgesen, Alexander, Wagner, Rashotte, Voeller, and Conway (2001) conducted a study of students identified as having reading difficulties based on test scores (WRMT-R) and teacher ratings (N = 60). All students in the intervention group received 67.5 hours of one-to-one reading instruction (specific intervention not reported) in two 50-minute sessions per day for 8 weeks. Tutors were six special education teachers with at least 1 year of experience each. Reports from the tutors served as progress monitoring.

To identify nonresponders, Torgesen et al. (2001), as well as D. Fuchs et al. (2004; 1st grade and 2nd grade) and D. Fuchs et al. (2008), tested students on the WRMT-R at the end of the intensive tutoring intervention. Standard scores were computed and those students scoring above a standard score of 90 (25th percentile) were deemed responsive. Those below the 25th percentile were nonresponsive. This yielded prevalence rates of 4.4% (Torgesen et al., 2001), 1.4% (D. Fuchs et al., 2004; 1st grade), 3.5% (D. Fuchs et al., 2004; 2nd grade), and 4.2% (D. Fuchs et al., 2008).

Final Benchmark Method

Researchers used a final benchmark method to identify nonresponders to intervention in five of the studies. Good, Simmons, and Kame'enui (2001) conducted a study of students chosen based on teacher reports of reading difficulty (N = 378). These students received an Accelerating Children's Competence in Early Reading and Literacy–Schoolwide (ACCEL-S) intervention funded by the U.S. Department of Education and designed to improve the reading of students in Grades K–3 (Simmons, Kame'enui, & Good, 1998). Progress monitoring was measured using DIBELS ORF.

Al Otaiba and Fuchs (2006) conducted a study of students chosen based on teacher recommendation (N = 104). These 104 students received a PALS intervention for 16 weeks, three times per week, for 20 minutes per session. Ongoing progress monitoring was measured using ORF probes.

To identify nonresponders, all five studies used a benchmark of 40 on DIBELS ORF. Students scoring below 40 at the end of the year were designated nonresponsive. This yielded prevalence rates of 8.8% (Good et al., 2001), 9.1% (Al Otaiba & Fuchs, 2006), 8.4% (D. Fuchs et al., 2004; 1st grade), 7.2% (D. Fuchs et al., 2004; 2nd grade), and 8.7% (D. Fuchs et al., 2008).

Slope Discrepancy Method

Researchers used a slope discrepancy method to identify nonresponders to intervention in two of the studies. Both D. Fuchs et al. (2004; 2nd grade) and D. Fuchs et al. (2008) measured slope on weekly WIF probes. Students below the normative cut-point (<1.5 words increase per week) were designated nonresponsive. This yielded prevalence rates of 2.4% (D. Fuchs et al., 2004; 2nd grade) and 7.6% (D. Fuchs et al., 2008).
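A minimal sketch of the slope discrepancy rule, assuming weekly probe scores and a 1.5 words-per-week cut-point. The function name and data are illustrative, not the authors' implementation; the slope is estimated by ordinary least squares over the weekly scores.

```python
def below_slope_cutpoint(weekly_scores, cut=1.5):
    """True (nonresponsive) if the least-squares slope of the weekly probe
    scores falls below the normative cut-point in words gained per week."""
    n = len(weekly_scores)
    x_mean = (n - 1) / 2
    y_mean = sum(weekly_scores) / n
    num = sum((i - x_mean) * (y - y_mean) for i, y in enumerate(weekly_scores))
    den = sum((i - x_mean) ** 2 for i in range(n))
    return num / den < cut
```

Unlike dual discrepancy, this rule ignores the student's final level entirely: a student gaining 1 word per week is flagged even if her scores are high, and a student gaining 2 words per week is not flagged even if his scores remain low.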

Exit Groups Method

Vaughn et al. (2003) conducted a study of students (N = 45) selected based on teacher recommendations and scores on the Texas Primary Reading Inventory (TPRI; Texas Education Agency, 1998). Intervention consisted of small-group supplemental instruction for 10–30 weeks, five sessions per week, for 35 minutes per session. Researcher-trained tutors implemented the intervention. Progress was monitored through TPRI and Test of Oral Reading Fluency (TORF; Children's Educational Services, Inc., 1987) administrations every 10 weeks.

Students who met criteria on the TPRI and TORF at any of the three assessment points were "exited" out of intervention. Students who did not meet criteria after 30 weeks were designated nonresponders. This yielded a prevalence rate of 2.4%.

Identification of Nonresponders in the Field Studies


In our research review of RTI field studies, all but two studies provided information on identifying nonresponders to Tier 2 instruction. Of the studies providing prevalence data, there was a range of 2.4% to 18% of students identified as not progressing adequately in Tier 2. Table 2 provides information (e.g., method, prevalence, etc.) on identifying nonresponders in each of the 11 studies found in our research review.

Table 2: Identification of Nonresponders to Tier 2 Interventions

Authors | Model Name* | Identification Criteria Included? | Method | Prevalence
Ardoin et al. (2005) | SPMM | Yes | Final benchmark | Not specified
Bollman et al. (2007) | SCRED | Yes | Dual discrepancy | Not specified
Callender (2007) | RBM | Yes | Dual discrepancy | Not specified
Fairbanks et al. (2007) | BSM | Yes | Not applicable (behavior) | Not specified
Kovaleski et al. (1999) | IST | No | Not specified | Not specified
Marston et al. (2003) | MPSM | Yes | Final normalization | ~7%
O'Connor et al. (2005) | TRI | Yes | Exit groups | ~8%
Peterson et al. (2007) | FSDS | Yes | Final benchmark | ~18%
Telzrow et al. (2000) | IBA | No | Not specified | Not specified
VanDerHeyden et al. (2007) | STEEP | Yes | Final benchmark | ~11%
Vaughn et al. (2003) | EGM | Yes | Exit groups | ~2.4%

*Model names: SPMM = standard-protocol mathematics model; SCRED = St. Croix River education district model; RBM = Idaho results-based model; BSM = behavior support model; IST = Pennsylvania instructional support teams; MPSM = Minneapolis problem-solving model; TRI = tiers of reading intervention; FSDS = Illinois flexible service delivery system model; IBA = Ohio intervention-based assessment; STEEP = system to enhance educational performance; EGM = exit group model.

Conclusions and Recommendations


Clearly, depending on which method is used, there is potential for variation in the number of students identified as nonresponders. Our review of the empirical literature and review of field studies found prevalence rates of nonresponders ranging from 1.4% to 18%, depending on the method used.

Perhaps best illustrating this variation, D. Fuchs et al.'s (2008) longitudinal study of a single sample of first graders found considerable differences in the percentage of nonresponders identified depending on the method used (dual discrepancy = 8.6%, median split = 9.8%, final normalization = 4.2%, final benchmark = 8.7%, and slope discrepancy = 7.6%). In addition, D. Fuchs et al. made the point that prevalence alone may be too narrow a basis for choosing a method. Sensitivity (the rate of true positives) and specificity (the rate of true negatives) are also extremely important when selecting a measure and method to identify nonresponders to Tier 2 interventions.

While choosing measures and methods of identifying nonresponders to Tier 2 instruction may seem daunting, D. Fuchs et al. (2008) provided some recommendations for this important decision-making process. They reported three measures and methods with acceptable prevalence, sensitivity, and specificity: (1) final normalization using the Test of Word Reading Efficiency (Torgesen, Wagner, & Rashotte, 1999) Sight Word Efficiency subtest, (2) slope discrepancy using CBM WIF, and (3) dual discrepancy using CBM Passage Reading Fluency for level and CBM WIF for slope. This is, at least, a place to start for making decisions about adequate progress within the second tier of an RTI program for the students most in need of intensive help.
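To see concretely how the choice of method changes who is identified, consider a small, entirely invented data set of five at-risk students. The dual discrepancy check here is a simplified fixed-cut-point variant (both a level criterion and a slope criterion at once), not the SD-based peer comparison used in most of the studies; all names and numbers are hypothetical.

```python
# Hypothetical end-of-year data: student -> (final ORF words correct per
# minute, growth slope in words per week). Values invented for illustration.
students = {
    "A": (35, 1.8),   # below benchmark, adequate growth
    "B": (45, 1.0),   # above benchmark, weak growth
    "C": (30, 0.8),   # fails both criteria
    "D": (55, 2.0),   # passes both
    "E": (38, 1.6),   # below benchmark, adequate growth
}

final_benchmark = {s for s, (lvl, _) in students.items() if lvl < 40}     # <40 on ORF
slope_discrepancy = {s for s, (_, slp) in students.items() if slp < 1.5}  # <1.5 words/week
dual_discrepancy = final_benchmark & slope_discrepancy                    # both at once

print(sorted(final_benchmark))    # ['A', 'C', 'E']
print(sorted(slope_discrepancy))  # ['B', 'C']
print(sorted(dual_discrepancy))   # ['C']
```

Even on this tiny sample the three rules identify different students (three, two, and one, respectively), and only partially overlapping ones, which is the pattern the D. Fuchs et al. (2008) comparison documents at scale.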

References


Al Otaiba, S., & Fuchs, D. (2006). Who are the young children for whom best practices in reading are effective? An experimental and longitudinal study. Journal of Learning Disabilities, 39, 414–431.

Ardoin, S. P., Witt, J. C., Connell, J. E., & Koenig, J. L. (2005). Application of a three-tiered response to intervention model for instructional planning, decision making, and the identification of children in need of services. Journal of Psychoeducational Assessment, 23, 362–380.

Blunt, T., & Gordon, A. (1998). Using the HOSTS structured mentoring strategy to engage the community and increase student achievement. ERS Spectrum, 16, 24–27.

Boardman, A. G., & Vaughn, S. (2007). Response to intervention as a framework for the prevention and identification of learning disabilities: Which comes first, identification or intervention? In J. B. Crockett, M. M. Gerber, & T. J. Landrum (Eds.), Achieving the radical reform of special education: Essays in honor of James M. Kauffman (pp. 15–35). New York: Erlbaum.

Bollman, K. A., Silberglitt, B., & Gibbons, K. A. (2007). The St. Croix River education district model: Incorporating systems-level organization and a multi-tiered problem-solving process for intervention delivery. In S. R. Jimerson, M. K. Burns, & A. M. VanDerHeyden (Eds.), Handbook of response to intervention: The science and practice of assessment and intervention (pp. 319–330). New York: Springer.

Bryant, H. D., Edwards, J. P., & LeFlies, D. C. (1995). The HOSTS program: Early intervention and one-to-one mentoring help students to succeed. ERS Spectrum, 13, 3–6.

Burns, M. K., & Senesac, B. V. (2005). Comparison of dual discrepancy criteria to assess response to intervention. Journal of School Psychology, 43, 393–406.

Callender, W. A. (2007). The Idaho results-based model: Implementing response to intervention statewide. In S. R. Jimerson, M. K. Burns, & A. M. VanDerHeyden (Eds.), Handbook of response to intervention: The science and practice of assessment and intervention (pp. 331–342). New York: Springer.

Case, L. P., Speece, D. L., & Molloy, D. E. (2003). The validity of a response-to-instruction paradigm to identify reading disabilities: A longitudinal analysis of individual differences and contextual factors. School Psychology Review, 32, 557–582.

Children's Educational Services, Inc. (1987). Test of Oral Reading Fluency. Minneapolis, MN: Author.

Fairbanks, S., Sugai, G., Guardino, D., & Lathrop, M. (2007). Response to intervention: Examining classroom behavior support in second grade. Exceptional Children, 73, 288–310.

Fuchs, D., Compton, D. L., Fuchs, L. S., & Bryant, J. (2008). Making "secondary intervention" work in a three-tier responsiveness-to-intervention model: Findings from the first-grade longitudinal reading study at the National Research Center on Learning Disabilities. Reading and Writing: An Interdisciplinary Journal, 21, 413–436.

Fuchs, D., & Deshler, D. D. (2007). What we need to know about responsiveness to intervention (and shouldn't be afraid to ask). Learning Disabilities Research & Practice, 22, 129–136.

Fuchs, D., Fuchs, L. S., & Compton, D. L. (2004). Identifying reading disabilities by responsiveness-to-instruction: Specifying measures and criteria. Learning Disability Quarterly, 27, 216–227.

Fuchs, D., Fuchs, L. S., Svenson, E., Yen, L., Thompson, A., McMaster, K., et al. (2001). Peer assisted learning strategies: First grade reading. Nashville, TN: Vanderbilt University.

Fuchs, L. S., & Fuchs, D. (1998). Treatment validity: A unifying concept for reconceptualizing the identification of learning disabilities. Learning Disabilities Research & Practice, 13, 204–219.

Good, R. H., & Kaminski, R. A. (2002). Dynamic Indicators of Basic Early Literacy Skills. Eugene, OR: Institute for the Development of Educational Achievement.

Good, R. H., Simmons, D. C., & Kame'enui, E. J. (2001). The importance and decision-making utility of a continuum of fluency-based indicators of foundational reading skills for third-grade high-stakes outcomes. Scientific Studies of Reading, 5, 257–288.

Kovaleski, J. F., Gickling, E. E., Morrow, H., & Swank, H. (1999). High versus low implementation of instructional support teams: A case for maintaining program fidelity. Remedial and Special Education, 20, 170–183.

Marston, D., Muyskens, P., Lau, M., & Canter, A. (2003). Problem-solving model for decision making with high-incidence disabilities: The Minneapolis experience. Learning Disabilities Research & Practice, 18, 187–200.

McMaster, K. L., Fuchs, D., Fuchs, L. S., & Compton, D. L. (2005). Responding to nonresponders: An experimental field trial of identification and intervention methods. Exceptional Children, 71, 445–463.

O'Connor, R. E., Harty, K. R., & Fulmer, D. (2005). Tiers of intervention in kindergarten through third grade. Journal of Learning Disabilities, 38, 532–538.

Peterson, D. W., Prasse, D. P., Shinn, M. R., & Swerdlik, M. E. (2007). The Illinois flexible service delivery model: A problem-solving model initiative. In S. R. Jimerson, M. K. Burns, & A. M. VanDerHeyden (Eds.), Handbook of response to intervention: The science and practice of assessment and intervention (pp. 300–318). New York: Springer.

Simmons, D. C., Kame'enui, E. J., & Good, R. H., III. (1998). Accelerating Children's Competence in Early Reading and Literacy–Schoolwide: Project ACCEL-S (Federal OSEP Grant H324M980127). Eugene: University of Oregon.

Speece, D. L., & Case, L. P. (2001). Classification in context: An alternative approach to identifying early reading disability. Journal of Educational Psychology, 93, 735–749.

Telzrow, C. F., McNamara, K., & Hollinger, C. L. (2000). Fidelity of problem-solving implementation and relationship to student performance. School Psychology Review, 29, 443–461.

Texas Education Agency. (1998). Texas Primary Reading Inventory (TPRI). Austin, TX: Author.

Torgesen, J. K., Alexander, A. W., Wagner, R. K., Rashotte, C. A., Voeller, K., & Conway, T. (2001). Intensive remedial instruction for children with severe reading disabilities: Immediate and long-term outcomes from two instructional approaches. Journal of Learning Disabilities, 34, 33–58.

Torgesen, J. K., Wagner, R. K., & Rashotte, C. A. (1999). Test of word reading efficiency (TOWRE). Austin, TX: Pro-Ed.

VanDerHeyden, A. M., Witt, J. C., & Gilbertson, D. (2007). A multi-year evaluation of the effects of a response to intervention (RTI) model on identification of children for special education. Journal of School Psychology, 45, 225–256.

Vaughn, S., Linan-Thompson, S., & Hickman, P. (2003). Response to instruction as a means of identifying students with reading/learning disabilities. Exceptional Children, 69, 391–409.

Vellutino, F. R., Scanlon, D. M., Sipay, E. R., Small, S. G., Chen, R., Pratt, A., & Denckla, M. B. (1996). Cognitive profiles of difficult-to-remediate and readily remediated poor readers: Early intervention as a vehicle for distinguishing between cognitive and experimental deficits as basic causes of specific reading disability. Journal of Educational Psychology, 88, 601–638.

Woodcock, R. W. (1987). Woodcock Reading Mastery Test–Revised. Circle Pines, MN: American Guidance Service.
