Field Studies of RTI Programs, Revised
In this article we present a review of published studies on the effectiveness of different RTI models. These studies, often referred to as field studies, are examinations of the impact of multi-tier and multi-component RTI models. It is understandable that one might ask the following question: "Do we really need research on RTI to have confidence in its effectiveness?" After all, RTI programs generally use scientifically based instruction for all students, keep track of student progress using valid and reliable measures, use data to identify students who do not meet well-developed standards and benchmarks, and then provide those students with specially designed, evidence-based, and intensive intervention. However, many educational approaches or innovations that seem sensible do not always work in practice (see Ellis, 2001, for a review of educational innovations). VanDerHeyden, Witt, and Gilbertson (2007) stressed this point by stating:
The research conducted to date with few exceptions… has focused primarily on the efficacy of the components individually but not on the efficacy of the RTI process as an integrated whole. In theory, if the components are effective, then the overall process would be expected to produce results; however, the question of whether the overall process is effective must also be addressed. (p. 226)
We used a four-step procedure to identify RTI field studies for inclusion in this review. First, we established a priori criteria for inclusion. To be included in this review, the study must have
- been published in a peer-reviewed journal, edited review journal, or edited textbook.
- employed instruction or intervention in at least two tiers of an RTI model for students experiencing academic or behavioral difficulties. Hence, we excluded those studies that simply compared interventions that conceivably could be used in one or more tiers of an RTI model (e.g., Torgesen et al., 2001; Vellutino et al., 1996). These studies are reviewed in subsequent sections of this Web site that investigate the components of RTI (i.e., intensive interventions used in Tier 2 and higher).
- provided quantifiable measures of student academic/behavioral outcomes and/or systemic outcomes (e.g., special education referrals/identifications) and complete descriptions of how the data were obtained and analyzed. Hence, we excluded published articles that described programs but did not include information about how the study of the program was conducted (e.g., Ikeda, Tilly, Stumme, Volmer, & Allison, 1996; Orosco & Klingner, 2010).
Second, we generated a list of search terms. We used search terms selected for a previous meta-analysis of RTI models (i.e., Burns, Appleton, & Stehouwer, 2005). We also used descriptors of well-known RTI models (e.g., Heartland model; Minneapolis problem-solving model). We used the following descriptors: response* to intervention, response* to instruction, RTI, tiers, tiered intervention, data-based decision making, Heartland model, Minneapolis problem-solving model, Ohio intervention-based assessment, Pennsylvania instructional support team, responders, nonresponders, disab* identification, special education identification, problem solving, and intervention-based assessment. Our search of the PsycINFO and ERIC databases and searches via Google Scholar and ProQuest identified 13 studies that met our inclusion criteria.
Third, we searched the reference lists of each included study, as well as a previous meta-analysis of RTI (i.e., Burns et al., 2005) and a previous review of RTI programs (i.e., Fuchs, Mock, Morgan, & Young, 2003). This procedure yielded three additional studies. Fourth, we hand-searched five journals from January 1996 through January 2008 for studies that may not have been entered into the research databases or available via search engine or content aggregator. These journals were the Journal of Learning Disabilities, Learning Disabilities Research & Practice, Remedial and Special Education, Exceptional Children, and the Journal of School Psychology. We selected these journals because they have published RTI studies in the past. Our hand-search did not yield any additional studies. Thus, our search procedures yielded 16 studies that met inclusion criteria for this review.
Once a study was identified for inclusion, we conducted a descriptive analysis. Key descriptive variables for each study are presented in Table 1. In addition, we provide an expanded description of each study (i.e., program description, purpose of the study, methodology, research questions asked, and reported results). These expanded descriptions can be accessed by the links in Table 1.
In addition, we analyzed the studies in terms of the quality of the research design used, as well as other methodological variables (e.g., whether a teacher or researcher provided instruction, level of detail regarding interventions, data collection, fidelity of implementation). This analysis establishes the overall quality of the research so that readers can make informed judgments about the degree of confidence they can have in the study results. Descriptions of the research designs used in the studies, as well as a brief description of their level of rigor, are presented in Figure 1.
Figure 1: Types of Research Designs
- Randomized Control Trials (RCTs). The RCT is considered the strongest design for controlling threats to internal validity because study participants are randomly assigned to groups (i.e., experimental or control group). Randomization makes the groups equivalent, on average, on all variables; thus, an outcome (e.g., increased reading skills) can be attributed to the intervention rather than to some other variable.
- Quasi-Experimental Designs (QEDs). The QED is a group design that includes a control group but does not use randomization procedures and thus is considered less rigorous than the RCT. This shortcoming can be partially compensated for if the researchers can show that the experimental and control groups are equivalent at baseline/pretest on all measured variables.
- Historical Contrast Design (HCD). In this design, the posttest of the group receiving the treatment (e.g., RTI) is compared to a similar group from the past. For example, data are obtained for students exposed to RTI for a period of time and the postintervention outcomes (e.g., reading level, rates of referral) are compared to those for students from the same district or school before the implementation of the RTI program. This design is considered to be relatively weak in establishing causality (Shadish, Cook, & Campbell, 2002).
- Descriptive. With this type of study, data are collected (e.g., referral and placement rates) at the onset of implementation of RTI and then any changes or trends over time are noted. The lack of a control or contrast group limits conclusions about the impact of the intervention.
- Multiple Baseline (MB). Multiple baseline designs are a type of single-case methodology in which the intervention (e.g., an RTI program) is introduced to one school at a time to see if changes (e.g., in levels of special education referrals) occur when, and only when, the intervention is introduced, thus controlling for threats to internal validity.
- A-B Design (AB). The weakest of all single-case designs, the A-B design involves taking baseline data (e.g., performance on an academic task) and then introducing the intervention to see if performance increases. However, this procedure does not rule out competing explanations for why the behavior changed.
- Correlational. This design statistically quantifies the relationship between two variables (e.g., degree of implementation fidelity and student outcome). Although this design quantifies a relationship, it does not establish causality.
Results and Discussion
Each of the 16 RTI programs included in this review can be classified as either a problem-solving or standard protocol model as well as an existing or a researcher-developed model. A problem-solving model uses individually tailored interventions designed to address student failure to adequately respond to instruction, and these interventions are typically developed or selected through a team-based decision process. The standard protocol model relies on preselected, uniform interventions that are implemented when personnel determine that the existing intervention has not led to the desired response by the student (Fuchs & Fuchs, 2006). Existing model studies are studies of the effectiveness of an in-place RTI program typically developed by school, district, or state-level personnel, with the interventions delivered by building-level personnel (e.g., teacher, school psychologist). The researcher-developed model examines the effects of an RTI program developed and implemented primarily by university-based researchers.
Of the 16 studies, eight were problem solving, three were standard protocol, and one was a combination of both (Callender, 2007) (see Table 1). Of the eight problem-solving models, seven were existing models with school personnel implementing the interventions. There was one researcher-developed, problem-solving model that used both researchers and teachers to implement tiered interventions (O’Connor, Harty, & Fulmer, 2005). Overall, existing models tended to use problem-solving procedures for selecting interventions, with school personnel implementing the program; researcher-developed models tended to be standard protocol designs. As shown in Table 1, all of the studies were conducted at the elementary school level, with four studies extending into Grade 8 or above. Those studies that included only elementary students typically focused on Grade 4 or lower.
|Table 1: Programmatic Field Studies of RTI
(Click author to view field study)
||Problem Solving or Standard Protocol
||# of Schools/# of Students Used
|Ardoin et al. (2005)
||Mathematics outcomes (fluency, calculation)
|Bollman et al. (2007)
||Descriptive, QED, & HCD
||Reading outcomes/SpecED placements
||Problem solving & standard protocol
||Descriptive & QED
||Reading outcomes/SpecED placements
|Duhon et al. (2009)
||Mathematics/intervention intensity quantification
|Fairbanks et al. (2007)
||Researcher & teacher
||A-B & MB
||Intervals of problem behaviors/office referrals
|Gettinger & Stoiber (2007)
||Researcher & teacher
|Kovaleski et al. (1999)
||High versus low implementation on academics
|Marston et al. (2003)
||Placement rates/achievement/ referral rates/disproportion
|Murray et al. (2010)
||Researcher & teacher
||Retention rates/reading outcomes
|O'Connor et al. (2005)
||Researcher & teacher
||Reading outcomes (word identification, word attack, passage comprehension, fluency)/SpecED placement
||Occurrences of maladaptive behaviors
||Referral rates/SpecED placements/parent and educational staff satisfaction
|Telzrow et al. (2000)
||Correlational & descriptive
||Implementation fidelity/relationship between fidelity & student goal attainment
|VanDerHeyden et al. (2007)
||MB across schools
||No. of SpecED referrals/No. of SpecED placements
|Vaughn et al. (2003)
||Researcher & teacher
||Reading outcomes (fluency, word attack, passage comprehension, phonological awareness, rapid letter naming)
|Vellutino et al. (2008)
||Reading outcomes and disability prediction
|SPMM - Standard-protocol mathematics model
SCRED - St. Croix River education district model
RBM - Idaho results-based model
MII - Midwestern intervention intensity
BSM - Behavior support model
EMERGE - Exemplary model of early reading growth and excellence
IST - Pennsylvania instructional support teams
MPSM - Minneapolis problem-solving model
RTI&R - Response to intervention and retention
TRI - Tiers of reading intervention
SDBM - South Dakota behavior model
FSDS - Illinois flexible service delivery system model
IBA - Ohio intervention-based assessment
STEEP - System to enhance educational performance
EGM - Exit group model
ARTI - Albany response to intervention
In terms of outcome measures, reading progress was a focus of seven studies; two studies measured math performance, two measured the frequency of problem behaviors and office referrals for behavior, one examined retention rates, and another examined time on task and task completion. Six studies looked at the impact of RTI on special education referral and/or placement rates.
A variety of research designs were used to establish the impact of the program on the selected outcome(s). Five studies used single-case methodology (i.e., multiple-baseline or A-B), four used HCDs, four included QEDs, one used correlational procedures, and six included descriptive methods. Only one study used an RCT design, and those that used QEDs did not report whether baseline equivalency was established between the treatment and control groups.
Academic Outcome Studies
Reading. As noted earlier, seven of the studies reported measuring reading outcomes linked to an RTI program (Bollman, Silberglitt, & Gibbons, 2007; Callender, 2007; Gettinger & Stoiber, 2007; Murray, Woodruff, & Vaughn, 2010; O'Connor et al., 2005; Vaughn, Linan-Thompson, & Hickman, 2003; Vellutino, Scanlon, Zhang, & Schatschneider, 2008). Bollman et al. (2007) noted that St. Croix River Education District (SCRED) students showed a gradual rise on curriculum-based measures over a 10-year period but lacked a control group against which to compare gains, thus making it difficult to attribute improvement to the program. They did use historical contrasts of non-SCRED student performance on the Minnesota statewide assessment, and reported that the rate of SCRED students reaching grade-level standards over a 7-year period was "slightly faster" than that of non-SCRED students from earlier years. In another study, Callender (2007) reported higher reading outcomes for students in the Idaho results-based model (RBM) program who had reading intervention plans than for students who did not have reading plans. Unfortunately, the reading skills measured were not specified, nor was detail provided about the comparison group (e.g., selection or equivalency procedures).
O’Connor and colleagues (2005) investigated the effects of Tier 2 and Tier 3 reading interventions on a variety of reading skills. Tier 2 instruction consisted of small-group instruction (10–20 minutes per session) delivered three times per week. Tier 3 intervention consisted of five daily, 30-minute sessions that incorporated group and individualized instruction. When compared with an historical contrast group, students who had received tiered instruction performed higher on all reading measures. Vaughn and her co-authors (2003) implemented a tiered intervention program consisting of supplemental instruction in small groups (five times per week for 35 minutes each) and noted how many at-risk students met exit criteria (i.e., scored in the average range on several reading measures) at 10-week intervals. The authors reported that of the 45 students (primarily students in English as a second language [ESL] programs) participating in the study, 10 exited after 10 weeks of intervention, 14 after 20 weeks, and 10 after 30 weeks, with 11 students (24%) never meeting exit criteria. All students showed large gains on reading measures, especially those exposed to 30 weeks of intervention. They also found that approximately a third of the exiting students failed to "thrive" in the general education class and needed more supplemental instruction at later points in time.
Math. In the first of two RTI math studies, Ardoin, Witt, Connell, and Koenig (2005) implemented the standard-protocol mathematics model (SPMM) to ascertain whether a Tier 2 classwide intervention (i.e., explicit instruction) and a Tier 3 intervention consisting of individualized instruction and peer tutoring would improve the math performance (i.e., fluency and calculation) of 15 low-performing 4th graders. They found that 5 students did not respond adequately to Tier 2 instruction and were provided the Tier 3 instruction. They reported that only one student did not respond adequately to the individualized instruction.
Duhon, Mesmer, Atkins, Greguson, and Olinger (2009) implemented the Midwestern intervention intensity (MII) model to determine whether increasing the frequency of a fluency-based intervention package would bring the mathematics performance of poorly responding students to levels similar to those of typically responding peers. Of the 35 students identified as at risk, 32 were able to reach benchmark after the Tier 1 intervention. The remaining three students were able to reach benchmark after varying levels of intensity of intervention in Tier 2. While these three students met benchmark after intense interventions, the authors reported that their performance regressed to baseline levels during maintenance.
Academically Related Behaviors. One study (Kovaleski, Gickling, Morrow, & Swank, 1999) examined academic performance, specifically the academically related behaviors of time on task, task completion, and task comprehension. The authors wanted to see if students who were exposed to the IST model performed better on these variables than students at schools where the model was not in use. Additionally, they wanted to see if the implementation level of the IST program (i.e., high or low implementation) would have an impact on performance. They found that students who were receiving high implementation of the model did better on all measured variables than did students in the low implementation situation as well as those students who did not receive IST services. Additionally, the non-exposed students performed better than those in the low implementation group. The specific criteria for identifying schools as low or high implementation were not clear (a checklist was mentioned, but most items were not described; in addition, different checklists were used for the two IST cadres in the study), and although a QED was used, equivalency between the two groups was not reported.
General Academic Performance. Marston, Muyskens, Lau, and Canter (2003) presented data for the purpose of comparing the level and rate of performance on a statewide achievement test for 34 "students needing alternative programming" (SNAPs) to 87 students who had been identified as having a learning disability using traditional methods. The results, as reported by Heistad and Casey (2002), indicate that both groups performed and progressed at similar levels and that both were below the performance and growth rates of students who were on track to pass the Minnesota Basic Standards Test. Thus, although the study established little difference in academic growth rates between SNAPs and students identified as having LD, it did not provide information about the academic impact of the problem-solving model.
Retention Rates. One study (Murray, Woodruff, & Vaughn, 2010) examined the impact of an RTI model on retention rates of first graders across six Title I elementary schools. The authors defined "retention" as either (a) the school district retained the student at the end of the academic year or (b) the student was moved back to the previous grade level within the first 3 months of the next academic year. The authors sought to determine if their RTI reading framework also positively affected retention. An HCD was used to compare current retention rates with those of previous years. Although this design is considered weak for determining causality, retention decreased by 47% over the 2 years the RTI framework was being implemented.
Referral and Placement Rates. As noted earlier, six of the 16 studies reported data on the effects of their programs on special education referral and/or placement rates. Bollman and colleagues (2007) examined the effect of the SCRED model on the rate of identification for special education services and reported that placement rates dropped from 4.5% to 2.5% over a 10-year period. They indicate that the statewide prevalence rate over the same time period dropped from 4% to 3.3%. Callender (2007) reported that placements decreased by 3% for "districts with at least one RBM school," whereas the state rate decreased by 1%. Marston and his co-authors (2003) indicated that special education placement rates stayed constant over time for Minneapolis problem-solving model schools, as they did for the district as a whole. Peterson, Prasse, Shinn, and Swerdlik (2007) reported similar information: Referrals and placements stayed relatively stable over time after RTI implementation. O'Connor et al. (2005) examined the effect of the tiers of reading intervention model on placement rates. They found that during the 4 years of implementation, rates fell to 8% compared to an historical contrast group (same schools, same teachers) for which the rate was 15%. Finally, VanDerHeyden and colleagues (2007) reported that for the four schools included in their study, there was a decrease in referrals and an increase in placements. The authors interpreted this pattern as an indication of more appropriate referrals.
Based on our evaluation of these selected studies, we have developed several conclusions and observations about the findings of the 16 studies included in this review.
Finding 1. All of the studies examining the impact of an RTI program on academic achievement or performance resulted in some level of improvement, and the authors attributed the changes to the RTI approach they used. Thus, there is emerging evidence that a tiered early intervention approach can improve the academic performance of at-risk students. These findings are qualified, however, by the use of research designs and procedures that limit the degree to which the outcomes can be attributed to the intervention programs, especially for "existing program" studies. Others have noted these limitations of RTI field study research (Burns et al., 2005; Fuchs et al., 2003; VanDerHeyden et al., 2007). In fairness, we acknowledge that the large-scale RTI models were developed for purposes other than conducting technically sound research and that conducting RCTs and QEDs in which equivalency is established is difficult in large-scale program evaluations.
Finding 2. While there is some level of support for RTI programs improving academic performance, this finding relates primarily to early reading skills for students at the elementary level. There were only two math studies (Ardoin et al., 2005; Duhon et al., 2009), and because of their limited sample sizes (N = 14 and N = 35, respectively), findings for math are tentative. More studies are needed that focus on higher level reading skills (e.g., comprehension); on other academic areas such as math, writing, and content area instruction (e.g., social studies, science); and on the middle and high school levels, in order to establish the breadth of impact for RTI programs. This limitation of RTI research has been noted by others in the field (e.g., Division for Learning Disabilities, 2007; Fuchs & Deshler, 2007; National Joint Committee on Learning Disabilities, 2005).
Finding 3. With regard to the impact of RTI programs on referral and placement rates, it appears that, overall, referral and placement rates stayed fairly constant, with some studies showing decreases. The design concerns discussed for Finding 1 above apply here also. Thus, while there are emerging data indicating that RTI may not lead to increased special education placements, it is difficult to draw firm conclusions given that many studies did not clearly specify how nonresponders were identified (e.g., cutoff scores used) or delineate the specific processes and procedures used to establish eligibility. Another concern when using RTI approaches to identify nonresponders in subsequent tiers of intervention (and thus those eligible for special education services) was noted by O'Connor and her colleagues (2005). They observed that a number of students identified early on as nonresponders did not need Tier 2 or Tier 3 interventions in later grades. Conversely, some students who were responding adequately to Tier 1 interventions in early grades had difficulties in later grades, possibly due to changes in the types of skills taught in later grades. Additionally, some students were characterized as "in and out" because they moved back and forth between tiers. Based on these observations, there is a clear need for more longitudinal RTI research so that questions about its impact on the stability of placement rates and eligibility decisions can be better answered.
Finding 4. Although not the focus of our review and not an intervention variable that was directly measured, the types of supporting factors that appeared necessary for scalability and sustainability of RTI programs were striking in their consistency. These factors, described in most of the studies we reviewed, include the following:
- extensive, ongoing professional development,
- administrative support at the system and building level,
- teacher buy-in and willingness to adjust their traditional instructional roles,
- involvement of all school personnel, and
- adequate meeting time for coordination.
In summary, we characterize the research base for establishing the impact of various models or approaches to RTI as emerging. As with many educational interventions, more longitudinal research is needed in order for professionals to be confident that RTI is an effective early intervention approach for all students, as well as confident in its impact on referral and placement rates in special education. In addition to research on the efficacy of RTI, examination of factors necessary for developing and sustaining RTI is also needed to assist educators as they consider adoption of this approach.
Ardoin, S. P., Witt, J. C., Connell, J. E., & Koenig, J. L. (2005). Application of a three-tiered response to intervention model for instructional planning, decision making, and the identification of children in need of services. Journal of Psychoeducational Assessment, 23, 362–380.
Bollman, K. A., Silberglitt, B., & Gibbons, K. A. (2007). The St. Croix River education district model: Incorporating systems-level organization and a multi-tiered problem-solving process for intervention delivery. In S. R. Jimerson, M. K. Burns, & A. M. VanDerHeyden (Eds.), Handbook of response to intervention: The science and practice of assessment and intervention (pp. 319–330). New York, NY: Springer.
Burns, M. K., Appleton, J. J., & Stehouwer, J. D. (2005). Meta-analytic review of responsiveness-to-intervention: Examining field-based and research-implemented models. Journal of Psychoeducational Assessment, 23, 381–394.
Callender, W. A. (2007). The Idaho results-based model: Implementing response to intervention statewide. In S. R. Jimerson, M. K. Burns, & A. M. VanDerHeyden (Eds.), Handbook of response to intervention: The science and practice of assessment and intervention (pp. 331–342). New York, NY: Springer.
Division for Learning Disabilities. (2007). Thinking about response to intervention and learning disabilities: A teacher's guide. Arlington, VA: Author.
Duhon, G. J., Mesmer, E. M., Atkins, M. E., Greguson, L. A., & Olinger, E. S. (2009). Quantifying intervention intensity: A systematic approach to evaluating student response to increasing intervention frequency. Journal of Behavioral Education, 18, 101–118.
Ellis, A. K. (2001). Research on educational innovations (3rd ed.). Larchmont, NY: Eye on Education.
Fairbanks, S., Sugai, G., Guardino, D., & Lathrop, M. (2007). Response to intervention: Examining classroom behavior support in second grade. Exceptional Children, 73, 288–310.
Fuchs, D., & Deshler, D. D. (2007). What we need to know about responsiveness to intervention (and shouldn't be afraid to ask). Learning Disabilities Research & Practice, 22, 129–136.
Fuchs, D., & Fuchs, L. S. (2006). Introduction to response to intervention: What, why, and how valid is it? Reading Research Quarterly, 41, 93–99.
Fuchs, D., Mock, D., Morgan, P. L., & Young, C. L. (2003). Responsiveness-to-intervention: Definitions, evidence, and implications for the learning disabilities construct. Learning Disabilities Research & Practice, 18, 157–171.
Gettinger, M., & Stoiber, K. (2007). Applying a response-to-intervention model for early literacy development in low-income children. Topics in Early Childhood Special Education, 27, 198–213.
Heistad, D., & Casey, A. (2002, August). Narrowing the gap in early literacy: Results from Minneapolis kindergarten classes. Paper presented at the Council of Great City Schools Annual Meeting, Fort Lauderdale, FL.
Ikeda, M. J., Tilly, W. D. III, Stumme, J., Volmer, L., & Allison, R. (1996). Agency-wide implementation of problem solving consultation: Foundations, current implementation, and future directions. School Psychology Quarterly, 11, 228–243.
Kovaleski, J. F., Gickling, E. E., Morrow, H., & Swank, H. (1999). High versus low implementation of instructional support teams: A case for maintaining program fidelity. Remedial and Special Education, 20, 170–183.
Marston, D., Muyskens, P., Lau, M., & Canter, A. (2003). Problem-solving model for decision making with high-incidence disabilities: The Minneapolis experience. Learning Disabilities Research & Practice, 18, 187–200.
Murray, C. S., Woodruff, A. L., & Vaughn, S. (2010). First-grade student retention within a 3-tier reading framework. Reading and Writing Quarterly, 26, 26–50.
National Joint Committee on Learning Disabilities. (2005). Responsiveness to intervention and learning disabilities. Learning Disability Quarterly, 28, 249–260.
O'Connor, R. E., Harty, K. R., & Fulmer, D. (2005). Tiers of intervention in kindergarten through third grade. Journal of Learning Disabilities, 38, 532–538.
Orosco, M. J., & Klingner, J. (2010). One school’s implementation of RTI with English language learners: “Referring into RTI.” Journal of Learning Disabilities, 43, 269–288.
Pearce, L. R. (2009). Helping children with emotional difficulties: A response to intervention investigation. The Rural Educator, 30, 34–46.
Peterson, D. W., Prasse, D. P., Shinn, M. R., & Swerdlik, M. E. (2007). The Illinois flexible service delivery model: A problem-solving model initiative. In S. R. Jimerson, M. K. Burns, & A. M. VanDerHeyden (Eds.), Handbook of response to intervention: The science and practice of assessment and intervention (pp. 300–318). New York, NY: Springer.
Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston, MA: Houghton Mifflin.
Telzrow, C. F., McNamara, K., & Hollinger, C. L. (2000). Fidelity of problem-solving implementation and relationship to student performance. School Psychology Review, 29, 443–461.
Torgesen, J. K., Alexander, A. W., Wagner, R. K., Rashotte, C. A., Voeller, K., & Conway, T. (2001). Intensive remedial instruction for children with severe reading disabilities: Immediate and long-term outcomes from two instructional approaches. Journal of Learning Disabilities, 34, 33–58.
VanDerHeyden, A. M., Witt, J. C., & Gilbertson, D. (2007). A multi-year evaluation of the effects of a response to intervention (RTI) model on identification of children for special education. Journal of School Psychology, 45, 225–256.
Vaughn, S., Linan-Thompson, S., & Hickman, P. (2003). Response to intervention as a means of identifying students with reading/learning disabilities. Exceptional Children, 69, 391–409.
Vellutino, F. R., Scanlon, D. M., Sipay, E. R., Small, S. G., Chen, R., Pratt, A., & Denckla, M. B. (1996). Cognitive profiles of difficult-to-remediate and readily remediated poor readers: Early intervention as a vehicle for distinguishing between cognitive and experimental deficits as basic causes of specific reading disability. Journal of Educational Psychology, 88, 601–638.
Vellutino, F. R., Scanlon, D. M., Zhang, H., & Schatschneider, C. (2008). Using response to kindergarten and first grade intervention to identify children at-risk for long-term reading difficulties. Reading and Writing, 21, 437–480.