
The Use of RTI to Identify Students With Learning Disabilities: A Review of the Research


The idea of using a Response-to-Intervention (RTI) process as the method for identifying the presence of a learning disability (LD) has been around since the 1980s (Fuchs & Fuchs, 2006). Over the ensuing years, it has been further refined and championed by some educators and researchers as the primary method for LD identification, resulting in its inclusion in the Individuals with Disabilities Education Improvement Act of 2004 (IDEA 2004) as “response to scientific, research-based intervention.”

Much of the rationale for its use as an LD identification process stems from the dissatisfaction of many educators with the IQ–achievement discrepancy model and, more generally, with standardized, norm-referenced tests that measure intelligence and underlying cognitive processes (e.g., processing speed, short-term/working memory). It is not the purpose of this article to explore and define these issues in detail. Rather, we present the most frequently stated criticisms of the current identification methods and then examine published research on the impact of using RTI procedures in the identification process to see whether that research addresses the concerns that RTI proponents raise about the IQ–discrepancy method.

We identified the following four concerns and criticisms of the current model of identification, all of which proponents of RTI say can be addressed using RTI:

  1. Overidentification of students with LD
  2. Overrepresentation of minorities in special education
  3. Reliability and validity (i.e., too many false positives and false negatives)
  4. Variability of identification rates across settings (e.g., states, districts).

Does RTI Reduce the Number of Students Identified?


A longstanding issue in special education is the overidentification of students with LD. Many in the field blame the IQ–discrepancy method of identification as the cause of this issue. The major concerns of this group are that IQ tests are a poor index of intelligence, that the IQ–discrepancy approach is a “wait-to-fail” model since students must perform poorly for years before achievement scores are sufficiently below their IQ scores, and that low achievement for many students is actually caused by poor instruction rather than disability (Fuchs, Mock, Morgan, & Young, 2003).

The problem of overidentification for school districts is largely financial. Many districts already operating on tight budgets spend considerable money and staff time on special education services for students who do not need them. For the students themselves, being placed in these programs and receiving services they do not need may have a detrimental effect on their psychological well-being (Harris-Murri, King, & Rostenberg, 2006).

How Is RTI Purported to Reduce Overidentification?


In examining the literature on this topic, two aspects of RTI are presented as addressing the issue of overidentification:

  1. Access to effective instruction and curricula for all. This purports to rule out ineffective instruction as the cause of low achievement.
  2. Early intervention. Students receive increasingly intensive intervention as soon as learning deficits are demonstrated. This purports to correct the wait-to-fail model described above.

Research on the Effects of RTI on Reducing Overidentification


Based on our review of published literature, we found six studies that reported data on special education referrals and placements under RTI (Bollman, Silberglitt, & Gibbons, 2007; Callender, 2007; Marston, Muyskens, Lau, & Canter, 2003; O’Connor, Harty, & Fulmer, 2005; Peterson, Prasse, Shinn, & Swerdlik, 2007; VanDerHeyden, Witt, & Gilbertson, 2007). All of these studies are described in more detail in our review of field studies that can be found on this website.

Bollman and colleagues (2007) examined the effect of an RTI model on the rate of identification for special education services and reported that placement rates dropped from 4.5% to 2.5% over a 10-year period. They indicate that the statewide prevalence rate over the same time period dropped from 4% to 3.3%. Callender (2007) reported that placements decreased by 3% for “districts with at least one school implementing an RTI model,” whereas the state rate decreased by 1%. Marston and his co-authors (2003) indicated that special education placement rates stayed constant over time for Minneapolis RTI schools, as did the rates for the district as a whole. Peterson et al. (2007) reported similar information: Referrals and placements stayed relatively stable over time after RTI implementation. O’Connor et al. (2005) examined the effect of the tiers of reading intervention model on placement rates. They found that during the 4 years of implementation, placement rates fell to 8%, compared with a historical contrast group (same schools, same teachers) for which the rate was 15%. Finally, VanDerHeyden and colleagues (2007) reported that for the four schools included in their study, there was a decrease in referrals and an increase in placements. The authors interpreted this pattern as an indication of more appropriate referrals.

While these results are promising, they must be qualified for two reasons. First, as described in our review of field studies on this website, the research designs of these studies are not rigorous enough to establish causation. Second, with the exception of Bollman et al. (2007), these studies do not contain adequate longitudinal data on the eventual outcomes of participants. That is, we do not know whether the nonreferred students are eventually identified for service. It is not clear from these studies whether RTI is actually decreasing the number of students identified as having an LD or is simply delaying services. More longitudinal research is necessary to discern whether this is a decrease or a delay.

Does RTI Reduce Disproportionality of Minorities in Special Education?


The issue of disproportionality in the context of this article relates to the overrepresentation of racial, cultural, and linguistic minorities in special education. The disproportionate inclusion of minority and linguistically different students is often seen as a result of misidentification and misplacement of these students due to the use of biased assessment or other discriminatory educational procedures that result in a form of segregation (Equity Assistance Centers, 2008). In addition to inequitable treatment, disproportionality is also viewed as a problem because some believe that special education services are not effective in terms of student outcomes (Hosp, 2008).

The simplest way to ascertain whether minorities are being overidentified for special services is to compare the proportion (percentage) of each group placed in special education. For example, if the placement rate for Caucasian students is 5% and the placement rate for African American students is 9%, one could conclude that the latter group is disproportionately represented in special education. Again, if disproportionality occurs, the assumption is that some form of discrimination has occurred.
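
As a simple illustration of this calculation (the enrollment and placement counts below are hypothetical and are not drawn from any study reviewed here), the comparison can be expressed in a few lines of Python:

    # Hypothetical enrollment and special education placement counts for two groups.
    groups = {
        "Group A": {"enrolled": 1000, "placed": 50},  # 5% placement rate
        "Group B": {"enrolled": 400, "placed": 36},   # 9% placement rate
    }

    # Placement rate for each group.
    rates = {name: g["placed"] / g["enrolled"] for name, g in groups.items()}
    for name, rate in rates.items():
        print(f"{name}: {rate:.1%} placed in special education")

    # Risk ratio: how many times more likely Group B is to be placed than Group A.
    print(f"Risk ratio (B vs. A): {rates['Group B'] / rates['Group A']:.2f}")  # 1.80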

How Is RTI Purported to Reduce Disproportionality?


In examining the literature on this topic, several aspects of RTI are presented as addressing the issue of overrepresentation:

  1. Assessment instruments used in RTI (e.g., curriculum-based measures) are unbiased, in contrast to other forms of assessment.
  2. All students receive effective instruction and thus most students, including minorities, will progress satisfactorily.
  3. Instructional decisions (e.g., movement to or from a tier) are based solely on academic performance.
  4. If, after receiving Tier 1 instruction, more minorities are identified as being at risk (based on universal screening data) than majority students, the instruction will be evaluated and modifications will be made to the core program.
  5. Providing more intensive instruction in Tier 2 will result in fewer students moving into special education.

Research on the Effects of RTI on Reducing Disproportionality


Based on our review of published literature, we found two studies that directly examined the impact of an RTI program on reducing disproportionality. Both of these studies (Marston et al., 2003; VanDerHeyden et al., 2007) are described in more detail in our review of field studies that can be found on this website.

Marston et al. (2003) used a historical contrast design to examine whether the implementation of an RTI program in Minneapolis, Minnesota, reduced the proportion of African American students placed in special education. They stated that an odds-ratio analysis (Parrish, 2002), whereby the probability of one group being placed in a category is compared to that of another group, was a more accurate method for reviewing disproportionality for students of color. In this method, if the probabilities turn out to be equal, the ratio is 1. An odds ratio of 2 means that one group is two times more likely to be placed in a category. In their article, Marston et al. stated that the average odds ratio for African American students in the state of Minnesota being identified as having an LD was about 2.7. They then stated that the odds ratio for the city of Minneapolis over the study period ranged from 1.9 to 2.1 (i.e., African American students were about two times as likely to be placed in special education). We were somewhat confused as to why the authors compared their odds ratio for Minneapolis to that of the entire state, as this does not clearly indicate whether the program reduced overrepresentation. It would have been more useful if similar data from the city of Minneapolis before implementation of RTI had been used for comparison. When we analyzed the graph showing the odds ratios for each of the five years (1997–2002), the ratios were fairly stable over that time period, with a slight increase during the last year.
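
To make the odds-ratio logic concrete, here is a minimal sketch using hypothetical counts (the figures are illustrative only and are not taken from Marston et al. or the Minnesota data):

    def odds(placed, enrolled):
        """Odds of placement: probability placed divided by probability not placed."""
        p = placed / enrolled
        return p / (1 - p)

    # Hypothetical placement counts for two student groups of equal size.
    odds_a = odds(placed=90, enrolled=1000)   # comparison group
    odds_b = odds(placed=180, enrolled=1000)  # group of interest

    # 1.0 means equal odds of placement; 2.0 means twice the odds.
    print(f"Odds ratio: {odds_b / odds_a:.2f}")  # about 2.2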

VanDerHeyden et al. (2007) addressed the question of whether their RTI program would have any effect on identification rates by ethnicity. They compared expected and actual rates of minority students evaluated for special education services across five schools at three to four points in time (2001–2005). They reported no statistical differences between expected and actual rates of special education evaluations of minority students. It is worth pointing out that while this study did not report any significant impact on disproportionality (for evaluations; placement data were not presented), that was because there was no disproportionality to begin with.

In summary, the research we found does not provide strong support for RTI decreasing disproportionality. Marston et al. (2003) did present some encouraging data; however, concerns about the analysis procedures and the research design quality call into question the validity of their results.

Is RTI Reliable and Valid?


The reliability and validity of RTI in identifying students who have an LD are based on sensitivity and specificity. Sensitivity refers to the degree to which a given operationalization of RTI reliably identifies students who, in fact, are designated as having an LD (Jenkins, Hudson, & Johnson, 2007). These students are referred to as true positives, those who truly are at risk for future academic difficulties. This is critical in an RTI model so that all students needing extra assistance receive it. Specificity refers to the degree to which a given operationalization of RTI accurately identifies students who do not need extra help (Jenkins, 2003). These students are referred to as true negatives, those who truly are not at risk for future academic difficulties. A high level of specificity will reduce the number of false positives. This is critical in an RTI model because false positives waste time and money and may result in inappropriate instruction for students who do not need it.
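
For readers unfamiliar with these indices, the sketch below shows how sensitivity and specificity are computed once each student’s screening decision is compared with his or her actual risk status (all counts are hypothetical):

    # Hypothetical screening outcomes for 100 students.
    true_positives = 18    # flagged as at risk and truly at risk
    false_negatives = 2    # not flagged but truly at risk
    true_negatives = 68    # not flagged and truly not at risk
    false_positives = 12   # flagged but not actually at risk

    # Sensitivity: proportion of truly at-risk students the procedure identifies.
    sensitivity = true_positives / (true_positives + false_negatives)

    # Specificity: proportion of not-at-risk students the procedure correctly screens out.
    specificity = true_negatives / (true_negatives + false_positives)

    print(f"Sensitivity: {sensitivity:.2f}")  # 0.90
    print(f"Specificity: {specificity:.2f}")  # 0.85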

A criticism of the IQ–discrepancy model for LD identification is that it does not provide an adequate level of sensitivity and specificity. That is, it produces too many false positives and false negatives. For example, a student may be incorrectly identified as having a disability when, in fact, the student simply has not been exposed to quality instruction. Likewise, many students with relatively low IQ scores who would accurately be identified as having an LD are denied services because their achievement scores are not sufficiently discrepant from their IQ scores (Fuchs et al., 2003).

How Is RTI Purported to Increase Reliability and Validity?


In examining the literature on this topic, two aspects of RTI are presented as addressing the issue of reliability and validity:

  1. Quality curricula and instruction in Tier 1 will result in more students progressing satisfactorily.
  2. The filtering process of the tiers of intervention will clearly differentiate true positives and true negatives.

Research on the Reliability and Validity of RTI


Our review of the RTI field studies yielded no information on the reliability and validity of RTI as an identification mechanism. Because of the design of these studies, there was no way to ascertain if identified students were true positives or if unidentified students were true negatives. There was no follow-up to the initial disability classification. However, we were able to identify an article (Fuchs, Compton, Fuchs, & Bryant, 2008) in which the authors retrospectively analyzed student datasets using multiple methods and measures to assess sensitivity and specificity. Using a dataset of reading scores for 252 1st-grade students who were not part of an RTI program, the researchers reported three promising combinations of measures and methods that provide acceptable sensitivity and specificity (i.e., >.80) in determining disability status:

  1. final normalization using the Test of Word Reading Efficiency (Torgesen, Wagner, & Rashotte, 1999),
  2. slope discrepancy using CBM Word Identification Fluency (WIF), and
  3. dual discrepancy using CBM Passage Reading Fluency for level and CBM WIF for slope (a detailed description of each method can be found here; a simple sketch of this method appears below).
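
To illustrate the third of these methods, the following is a minimal sketch of one way a dual-discrepancy rule might be operationalized. The data and the 1-standard-deviation cut-points are hypothetical assumptions; the specific measures and criteria used by Fuchs et al. (2008) may differ.

    from statistics import mean, stdev

    def dual_discrepancy(levels, slopes, cut_sd=1.0):
        """Flag students whose final level AND weekly growth slope both fall more
        than cut_sd standard deviations below the group mean (hypothetical rule)."""
        level_cut = mean(levels) - cut_sd * stdev(levels)
        slope_cut = mean(slopes) - cut_sd * stdev(slopes)
        return [lvl < level_cut and slp < slope_cut
                for lvl, slp in zip(levels, slopes)]

    # Hypothetical end-of-year reading levels and weekly growth slopes for eight students.
    levels = [42, 55, 60, 38, 70, 65, 30, 58]
    slopes = [1.1, 1.5, 1.6, 0.7, 2.0, 1.8, 0.5, 1.4]

    flags = dual_discrepancy(levels, slopes)
    print([i for i, nonresponder in enumerate(flags) if nonresponder])  # students 3 and 6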

In summary, while the Fuchs et al. (2008) findings are promising, much more research is needed to determine the best combinations of methods and measures for identifying students with LD in an actual RTI model. Without a gold standard for methods and measures, and given the very large number of possible combinations within an RTI framework, the numbers of false positives and false negatives will remain unacceptably high and highly variable across districts and states.

Does RTI Increase Consistency of LD Identification?


One of the major criticisms of the IQ–discrepancy method of identifying students with LD is the inconsistency in LD prevalence rates between districts and across states (Scruggs & Mastropieri, 2002). According to Fuchs et al. (2003), this inconsistency is the result of districts and states using variable definitions of discrepancy (e.g., standard IQ minus standard achievement vs. regression of IQ on achievement), size of discrepancy (e.g., 1.0 SD vs. 2.0 SDs), and which specific IQ and achievement tests are used. The idea that a student can be identified with LD in one district, but not identified in the next district over has led to a “widespread view that the LD designation is arbitrary” (Fuchs et al., 2003, p. 158).
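
To see how the choice of discrepancy definition can change who is identified, consider the following illustrative sketch. The student’s scores, the 1.5-SD cut-point, and the .60 IQ–achievement correlation are hypothetical assumptions (and the regression approach is deliberately simplified), not values drawn from any particular state’s criteria.

    # Standard scores (mean 100, SD 15) for one hypothetical student.
    iq, achievement = 110, 86
    sd, corr, cut_sds = 15.0, 0.60, 1.5  # assumed SD, IQ-achievement correlation, cut-point

    # Definition 1: simple difference between IQ and achievement standard scores.
    simple_gap = iq - achievement               # 24 points
    simple_flag = simple_gap >= cut_sds * sd    # 24 >= 22.5 -> identified

    # Definition 2 (simplified): regression-based expected achievement given IQ.
    expected = 100 + corr * (iq - 100)                 # 106
    regression_gap = expected - achievement            # 20 points
    regression_flag = regression_gap >= cut_sds * sd   # 20 >= 22.5 -> not identified

    print(simple_flag, regression_flag)  # True False: same student, two different decisions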

How Is RTI Purported to Increase Consistency of LD Identification?


In examining the literature on this topic, several aspects of RTI are presented as addressing the issue of consistency:

  1. Effective instruction for all in Tier 1.
  2. Standard protocol procedures for determining movement between Tier 1 and Tier 2 will produce similar at-risk pools across districts and states.
  3. Standard protocol procedures for determining nonresponsiveness to Tier 2 and Tier 3 interventions will consistently identify the most at-risk students for further evaluation or LD identification across districts and states.

Research on the Effects of RTI on Increasing Consistency of LD Identification


Based on our review of published literature, we found five studies (Marston et al., 2003; O’Connor et al., 2005; Peterson et al., 2007; VanDerHeyden et al., 2007; Vaughn, Linan-Thompson, & Hickman, 2003) that reported prevalence rates of students deemed nonresponsive to Tier 2 interventions. While movement into Tier 3 does not necessarily equal LD identification, the more intensive nature of Tier 3 interventions often resembles special education services (Fuchs et al., 2003). Using one of six methods (dual discrepancy, median split, final normalization, final benchmark, slope discrepancy, and exit groups) for identifying nonresponders to Tier 2 interventions, prevalence rates from these five studies ranged from 1.3% to 18% (a detailed description of each method can be found here). While this large range is concerning, it is difficult to draw definitive conclusions about consistency as the samples, methods, and measures all differed across studies. A better gauge of consistency would be to overlay a number of possible operationalizations of RTI over the same sample of students. Barth et al. (2008) did just that.

Barth and her colleagues (2008) analyzed an existing database of 399 1st-grade students in order to better understand the extent to which operationalizations of RTI overlap and agree in identifying nonresponders. The research team evaluated the database in relation to cut-points, measures, and the methods described above (Fuchs et al., 2008) for identifying nonresponders; a total of 808 comparisons of association were computed to address the agreement of different operationalizations of RTI. The results indicate that agreement is generally poor, with only 15% of the comparisons yielding the minimum level of agreement (kappas > 0.40), and that no single method (e.g., dual discrepancy, final benchmark) was superior to another in identifying nonresponders. The authors contend that the cut-point is the most significant determinant of responder status and that different cut-points will yield different proportions of responders and nonresponders.
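
For reference, the chance-corrected agreement between two ways of classifying the same students can be computed with Cohen’s kappa, roughly as sketched below (the nonresponder flags shown are hypothetical, not data from Barth et al.):

    def cohens_kappa(labels_a, labels_b):
        """Chance-corrected agreement between two binary classifications."""
        n = len(labels_a)
        observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
        p_a = sum(labels_a) / n                       # proportion flagged by method A
        p_b = sum(labels_b) / n                       # proportion flagged by method B
        expected = p_a * p_b + (1 - p_a) * (1 - p_b)  # agreement expected by chance
        return (observed - expected) / (1 - expected)

    # Hypothetical nonresponder flags (1 = nonresponder) from two RTI operationalizations.
    method_a = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]
    method_b = [1, 0, 1, 0, 0, 1, 0, 0, 0, 0]

    print(round(cohens_kappa(method_a, method_b), 2))  # about 0.35, below the 0.40 minimum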

In summary, it is clear from the results of the Barth et al. (2008) study that the inconsistencies in identifying nonresponders in an RTI model are eerily similar to inconsistencies in the IQ–achievement discrepancy method that RTI purports to correct. At this point, the LD designation based solely on the RTI model would be just as arbitrary. In fact, as Fuchs and Deshler (2007, p. 134) pointed out, “If practitioners across the nation choose different RTI methods of identification, there may be even greater variation in number and type of children identified as having LD than the variation produced by use of IQ–achievement discrepancy.”

Conclusion


Based on our analysis, RTI has a limited research base that supports its capability to address the issues of overidentification, disproportionality, reliability and validity, and consistency in identifying students with LD. This is concerning for a number of reasons. While IDEA 2004 requires that children suspected of having LD receive a comprehensive evaluation, it is not clear if states are interpreting this mandate correctly. According to the National Center on Response to Intervention’s (2011) RTI State Database, more than 10 states allow LD identification based on “RTI only.” It is not entirely clear if these states are truly using RTI only or the alternative hybrid model proposed by Fletcher (2011) and other researchers. Based on the three criteria (i.e., demonstration of low achievement, insufficient response to effective, research-based interventions, and consideration of exclusionary factors such as mental retardation, sensory deficits, language minority status, etc.) put forth by a consensus group of researchers convened by the U.S. Department of Education Office of Special Education Programs in 2001 (Fletcher, 2011), the hybrid model is a comprehensive data gathering process tailored to an individual student’s needs. That is, only short, norm-referenced measures in the student’s at-risk area are administered, rather than a prescriptive, mandated battery of the same tests for every student. While we agree this more individualized approach makes sense as an alternative to the IQ–discrepancy method, the result may be inconsistency in the prevalence rates of LD. The sheer number of possible combinations of methods and measures (as noted in this article) will potentially result in greater variability in prevalence rates, sensitivity, and specificity across states and districts. It is clear that more research is warranted to evaluate the role of RTI in the LD identification process.

References


Barth, A. E., Stuebing, K. K., Anthony, J. L., Denton, C. A., Mathes, P. G., Fletcher, J. M., & Francis, D. J. (2008). Agreement among response to intervention criteria for identifying responder status. Learning and Individual Differences, 18, 296–307.

Bollman, K. A., Silberglitt, B., & Gibbons, K. A. (2007). The St. Croix River Education District model: Incorporating systems-level organization and a multi-tiered problem-solving process for intervention delivery. In S. R. Jimerson, M. K. Burns, & A. M. VanDerHeyden (Eds.), Handbook of response to intervention: The science and practice of assessment and intervention (pp. 319–330). New York: Springer.

Callender, W. A. (2007). The Idaho results-based model: Implementing response to intervention statewide. In S. R. Jimerson, M. K. Burns, & A. M. VanDerHeyden (Eds.), Handbook of response to intervention: The science and practice of assessment and intervention (pp. 331–342). New York: Springer.

Equity Assistance Centers. (2008). Response to intervention: An equity perspective: The Equity Assistance Centers identify civil rights concerns with the implementation of response to intervention. Retrieved from http://www.idra.org/south_central_collaborative_for_equity/RTI/

Fletcher, J. M. (2011). Identifying learning disabilities in the context of response to intervention: A hybrid model. Retrieved from the RTI Action Network website: http://www.rtinetwork.org/learn/ld/identifyingld

Fuchs, D., Compton, D. L., Fuchs, L. S., & Bryant, J. (2008). Making “secondary intervention” work in a three-tier responsiveness-to-intervention model: Findings from the first-grade longitudinal reading study at the National Research Center on Learning Disabilities. Reading and Writing: An Interdisciplinary Journal, 21, 413–436.

Fuchs, D., & Deshler, D. D. (2007). What we need to know about responsiveness to intervention (and shouldn’t be afraid to ask). Learning Disabilities Research & Practice, 22, 129–136.

Fuchs, D., & Fuchs, L. S. (2006). Introduction to response to intervention: What, why, and how valid is it? Reading Research Quarterly, 41, 93–99.

Fuchs, D., Mock, D., Morgan, P. L., & Young, C. L. (2003). Responsiveness-to-intervention: Definitions, evidence, and implications for the learning disabilities construct. Learning Disabilities Research & Practice, 18, 157–171.

Harris-Murri, N., King, K., & Rostenberg, D. (2006). Reducing disproportionate minority representation in special education programs for students with emotional disturbances: Toward a culturally responsive response to intervention model. Education and Treatment of Children, 29, 779–799.

Hosp, J. L. (2008). Response to intervention and the disproportionate representation of culturally and linguistically diverse students in special education. Retrieved from the RTI Action Network website: http://www.rtinetwork.org/learn/31

Individuals with Disabilities Education Improvement Act of 2004, Pub. L. No. 108-446 § 1400 et seq.

Jenkins, J. R. (2003, December). Candidate measures for screening at-risk students. Paper presented at the National Research Center on Learning Disabilities Responsiveness-to-Intervention symposium, Kansas City, MO. Retrieved May 15, 2008, from http://www.nrcld.org/symposium2003/jenkins/index.html

Jenkins, J. R., Hudson, R. F., & Johnson, E. S. (2007). Screening for at-risk readers in a response to intervention framework. School Psychology Review, 36, 582–600.

Marston, D., Muyskens, P., Lau, M., & Canter, A. (2003). Problem-solving model for decision making with high-incidence disabilities: The Minneapolis experience. Learning Disabilities Research & Practice, 18, 187–200.

National Center on Response to Intervention. (2011). RTI State Database. Retrieved from http://state.rti4success.org/index.php?option=com_chart&order=sld&asc=desc

O’Connor, R. E., Harty, K. R., & Fulmer, D. (2005). Tiers of intervention in kindergarten through third grade. Journal of Learning Disabilities, 38, 532–538.

Parrish, T. (2002). Racial disparities in the identification, funding, and provision of special education. In D. J. Losen & G. Orfield (Eds.), Racial inequity in special education (pp. 15–37). Cambridge, MA: Harvard Education Press.

Peterson, D. W., Prasse, D. P., Shinn, M. R., & Swerdlik, M. E. (2007). The Illinois flexible service delivery model: A problem-solving model initiative. In S. R. Jimerson, M. K. Burns, & A. M. VanDerHeyden (Eds.), Handbook of response to intervention: The science and practice of assessment and intervention (pp. 300–318). New York: Springer.

Scruggs, T. E., & Mastropieri, M. A. (2002). On babies and bathwater: Addressing the problems of identification of learning disabilities. Learning Disability Quarterly, 25, 155–168.

Torgesen, J. K., Wagner, R. K., & Rashotte, C. A. (1999). Test of Word Reading Efficiency (TOWRE). Austin, TX: Pro-Ed.

VanDerHeyden, A. M., Witt, J. C., & Gilbertson, D. (2007). A multi-year evaluation of the effects of a response to intervention (RTI) model on identification of children for special education. Journal of School Psychology, 45, 225–256.

Vaughn, S., Linan-Thompson, S., & Hickman, P. (2003). Response to instruction as a means of identifying students with reading/learning disabilities. Exceptional Children, 69, 391–409.
