Considering Tier 3 Within a Response-to-Intervention Model

Schools can be viewed as intervention systems focused on promoting outcomes (e.g., literacy, social-emotional competence) deemed important to society (Deno, 2002). This is a complex task considering the nature of learning and development and the growing diversity of problems and issues facing school-aged youth. Research on child development informs us that children learn and develop skills at different rates. Specifically, children enter the learning environment with different skill sets, and an individual child’s Response to Intervention (RTI) is unique and dependent on biology, social learning history, and context. To reach desired outcomes in school, some students may require additional or unique instructional strategies or interventions beyond those typically available. Thus, for schools to meet the needs of all students, it is important to establish a comprehensive continuum of multi-layered or multi-tiered systems of prevention/intervention services. This continuum should include intervention options of varying intensity that can be linked to the specific learning needs of students who are experiencing difficulties. To ensure that prevention and intervention strategies are provided in a timely manner and to students who need them, schools should establish a clear process for a) determining which students are experiencing difficulties, b) selecting intervention strategies or supports and matching these supports to students, and c) evaluating whether the intervention strategies are helpful to students.


One common multi-layered arrangement involves three tiers of prevention or intervention supports for students. At Tier 1 (i.e., primary prevention/intervention), universal (i.e., school-wide) prevention efforts are established to promote learning for all students, anticipating that most students (e.g., 80%) will respond to these strategies and will not require additional intervention. For example, a school considering Tier 1 activities might adopt a research-based reading curriculum and screen all students for reading problems three times per year to determine which students might need supports beyond the school-wide reading curriculum. At Tier 2 (secondary prevention or strategic intervention), students who are identified as being at risk of experiencing problems receive supplemental or small-group interventions. For example, when school-wide screening reveals that some students (e.g., 15%) in Grade 3 are at risk of developing reading problems, the school might provide supplemental reading support through a classwide peer tutoring intervention. Similarly, when school-wide data indicate that higher rates of office discipline referrals are occurring on the playground, the school improvement team might look into interventions that promote appropriate playground play (e.g., Ervin, Schaughency, Matthews, Goodman, & McGlinchey, 2007). At Tier 3 (tertiary prevention), an additional layer of intensive supports is available to address the needs of a smaller percentage of students (e.g., 2%–7%) who are experiencing problems and are at risk of developing more severe problems. At Tier 3, the goal is remediation of existing problems and prevention of more severe problems or of secondary concerns that develop as a result of persistent problems.
For example, at Tier 3, a student whose reading performance falls significantly below that of his or her peers, despite intervention, might receive intensive reading support from the learning assistant four times per week with close monitoring of his or her progress.


The purpose of this article is to provide a general overview of special considerations pertaining to the provision of Tier 3 prevention and intervention efforts. Specifically, this article describes a self-questioning process to guide decision making at Tier 3. For each step of the process, readers are referred to additional references and resources.


Establishing a Process to Guide Decision Making at Tier 3


As noted earlier, within a multi-tier RTI approach it is important to establish a process for a) determining which students are experiencing difficulties, b) selecting intervention strategies or supports and matching these supports to students, and c) evaluating whether the intervention strategies are helpful. At each tier along the continuum, the process may vary in its intensity, yet it will always follow a consistent series of questions or steps. Practitioners can guide their decision making by adhering to a self-questioning process wherein they ask themselves the following questions:


  1. Who is experiencing a problem and what specifically is the problem?
  2. What intervention strategies can be used to solve the problem or reduce its severity?
  3. Did the problem (or problems) go away or decline in severity as a result of the intervention(s)?


This self-questioning process is familiar to most educators and is used formally or informally by many effective teachers as they proactively work to assess the progress of students in their classrooms. For example, teachers who are responsive to the individual needs of students in their classrooms regularly assess students’ skills and responsiveness to instructional strategies, providing additional supports and remediation at a whole-class, small-group, or individual level as necessary.


In school-wide, multi-tier approaches to RTI, a similar, but often more formalized, process is applied at a whole-school, classroom, and individual student level. Across tiers, the nature of services and support provided are differentiated on the basis of the intensity of the problems and the magnitude of need. At Tier 3, efforts focus on the needs of individual students who are experiencing significant problems in academic, social, and/or behavioral domains. Thus, the process at this level is more intensive and individualized than it is at other levels. In the sections that follow, considerations during each step of a Tier 3 self-questioning process are discussed.


Step 1: Who is experiencing a problem and what, specifically, is the problem?


The first step in the process is to define the problem, and embedded within this step is noting who is experiencing the problem and what level of support (i.e., Tier 1, Tier 2, or Tier 3) is warranted. When defining a problem, it is important to clearly describe what the problem “looks like” in objective, observable terms, so that all persons involved know they are talking about the same thing. Measurement of a problem should be direct and occur within the context (e.g., classroom setting or situation) in which the problem occurs. To quantify how much of a problem exists, the problem should be described in measurement terms (e.g., frequency, rate, duration, magnitude). Furthermore, to stay focused on working toward improving problem situations, it is helpful to describe problems as discrepancies between a student’s actual or current performance (i.e., “what is”) and desired or expected performance (i.e., “what should be”). Thus, in addition to measuring a student’s actual performance, criteria regarding expected levels of performance need to be established. By quantifying problems as discrepancies, educators can use this information to determine the magnitude or severity of a problem. This information can be useful in formalizing goals (i.e., a reduction in the discrepancy) and in prioritizing problems within and across students.


To illustrate this process, consider reading as an example. One measure of “reading health” shown to be predictive of later reading fluency and comprehension is the number of words a student reads correctly per minute, or oral reading fluency (Hosp & Fuchs, 2005). The Dynamic Indicators of Basic Early Literacy Skills (DIBELS) is a research-based, standardized, norm-referenced measure of pre-reading and reading skills that includes a measure of oral reading fluency for Grades 1 to 6 (Good, Gruba, & Kaminski, 2002). The DIBELS measures were designed for use as screening and evaluation tools, and scores on the DIBELS can be used to place students in categories of reading risk. Prespecified, research-based goal rates have been established for the DIBELS and are available on the DIBELS Web site. These goal rates might be used as “expected performance” standards against which to compare actual student performance in an RTI model. Specifically, students who read at or above recommended (i.e., benchmark) rates are considered to be at low risk of reading problems. In contrast, students who perform below benchmark rates are considered to be either at “some risk” or “at risk” of developing reading problems.


The DIBELS benchmark criteria suggest, for example, that a 3rd grade student is expected to read 77 or more words correctly per minute in the beginning (fall term) of 3rd grade, 92 or more words in the middle (winter term), and 110 or more at the end (spring term). Thus, a student who reads fewer words correctly per minute than the specified benchmark amount (i.e., 77 words in the fall of Grade 3) might be viewed as experiencing a reading problem and, depending on their scores, might be viewed as in need of strategic (Tier 2) or intensive (Tier 3) reading intervention supports. To illustrate this more clearly, consider hypothetical data taken in the fall from all 3rd grade students at one elementary school. Imagine that all of the students in Grade 3 were screened for reading difficulties using the DIBELS. As with any screening device, the DIBELS is designed to be sensitive enough to identify students who may be at risk of experiencing reading problems. Thus, to determine who might be at risk of experiencing reading difficulties, the team of 3rd grade teachers would look to see which students scored below the expected goal rate of 77 words read correctly per minute. For example, let’s assume that Ben read at a rate of 67 words correctly per minute, which means he read 10 fewer words correctly per minute than the desired rate (i.e., 77 – 67 = 10). Ella, who read 30 words correctly per minute, read 47 fewer words correctly per minute than the desired rate (i.e., 77 – 30 = 47). Both children are reading at rates less than the desired rate of 77 and may be in need of additional reading supports, but the quantified problem (i.e., discrepancy between actual and expected performance) is greater for Ella. Of course, this is not to suggest that a student should be placed in a category of Tier 2 or Tier 3 support on the basis of a single score. 
Instead, screening devices like the DIBELS, which are time-efficient and can be administered repeatedly, are useful because they can help identify students who may be in need of additional intervention supports or of further assessment to determine the need for support. See the Jenkins and Johnson article on Universal Screening elsewhere on this Web site for more information.
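The screening logic just described, quantifying each student’s problem as a discrepancy from a benchmark and ranking students by severity, can be sketched in code. This is an illustrative sketch only: the 77-words-per-minute benchmark comes from the fall Grade 3 example above, while the at-risk cutoff of 53 and the `screen` function are hypothetical values chosen for illustration, not official DIBELS criteria.

```python
# Illustrative sketch: quantify reading problems as discrepancies from a
# benchmark goal rate and sort students by severity of need.

FALL_GRADE3_BENCHMARK = 77  # words read correctly per minute (from the text)
AT_RISK_CUTOFF = 53         # hypothetical cutoff separating "some risk" from "at risk"

def screen(scores):
    """Return (name, discrepancy, category) tuples, most severe first."""
    results = []
    for name, wcpm in scores.items():
        # Discrepancy = expected performance minus actual performance.
        discrepancy = max(0, FALL_GRADE3_BENCHMARK - wcpm)
        if wcpm >= FALL_GRADE3_BENCHMARK:
            category = "low risk"
        elif wcpm >= AT_RISK_CUTOFF:
            category = "some risk"
        else:
            category = "at risk"
        results.append((name, discrepancy, category))
    # Largest discrepancies first, to help prioritize problems across students.
    return sorted(results, key=lambda r: r[1], reverse=True)

# Ben and Ella from the example above, plus a hypothetical low-risk peer.
print(screen({"Ben": 67, "Ella": 30, "Ava": 95}))
```

Run on the example data, Ella surfaces first with a discrepancy of 47, matching the prioritization described in the text.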


One important question that schools need to consider is whether a student should receive Tier 1, 2, or 3 services. Tier 3 services are designed to address the needs of students who are experiencing significant problems and/or are unresponsive to Tier 1 and Tier 2 efforts. Schools should establish guidelines for determining how students will enter into Tier 1, 2, or 3 levels of support. Although guidelines may vary from school to school, students in need of Tier 3 services should be able to access these services in one of two ways. First, students receiving Tier 1 or Tier 2 supports who are not making adequate progress and are unresponsive to the continuum of supports available at Tier 1 or Tier 2 might be moved into Tier 3 to receive more intensive intervention supports. Second, there should be a mechanism through which students who are experiencing very severe or significant academic, behavioral, or social-emotional problems can be triaged directly into Tier 3 to receive necessary intensive and individualized intervention supports. For some students, the second option is necessary to provide needed supports in a timely fashion rather than delaying access to these supports by making students wait to go through Tier 1 and Tier 2 intervention services. Thus, in contrast to a fixed multi-gating system wherein students would only be able to receive more intensive services (i.e., Tier 3) following some time period of less intensive (i.e., Tier 1 or 2) services, the RTI approach should allow some flexibility to serve students based on their level of need in a timely and efficient manner.


As educators establish a process for determining which students at their school should receive Tier 1, 2, or 3 services, they face challenges associated with selecting criteria for tiers (for discussion, see Kovaleski, 2007). Research-based criteria of risk, like those provided by the DIBELS, are also available when looking at office discipline referrals for behavioral issues (see the School-Wide Information System [SWIS]), and these criteria can be useful to schools in determining whether students should receive Tier 1, 2, or 3 services (for an example, see Ervin, Schaughency, Goodman, McGlinchey, & Matthews, 2006). Unfortunately, research-based risk criteria are not always available for other important targets, meaning that educators need to consider how they will decide to match tiered services to student needs. When research-based risk criteria for expected levels of performance are unavailable, educators must select standards for comparison (e.g., professional experience, teacher expectations, parental expectations, developmental norms, medical standards, curriculum standards, national norms, local norms, and classroom peer performance), and this is not an easy task (see Kovaleski, 2007). Furthermore, even when research-based risk criteria are available, schools serving high numbers of students at risk for reading and/or behavioral problems may not have sufficient resources to provide Tier 3 interventions to all students who fall into risk categories. In one high-needs school in Michigan, for example, school-wide screening data revealed that fewer than 40% of the students at the school met benchmark reading goal rates according to the DIBELS, meaning more than 60% of the student population was at risk for developing reading problems (Ervin et al., 2006).
Given such high numbers of students in need of support, coupled with limited school resources and time available to provide intensive intervention, the school-based team at this school decided to implement an early reading intervention program in kindergarten and 1st grade rather than attempt to design individualized reading plans for each student at risk for developing reading problems (see Ervin et al., 2006; Ervin, Schaughency, Goodman, McGlinchey, & Matthews, 2007). Students who continued to experience reading difficulties despite the classwide interventions were referred to grade-level teams and considered for Tier 3 intervention supports. This example illustrates that as educators develop a process for determining which students should receive Tier 3 intervention services, they need to consider how they will best use the available time and resources to provide a continuum of interventions to support the diverse learning needs of students.


Step 2: What intervention strategies can be used to reduce the magnitude or severity of the problem?


When a student has been identified as being in need of Tier 3 intervention supports, the next step in the self-questioning process is the selection and implementation of appropriate intervention supports. One option at this step is to move directly into intervention by selecting an evidence-based intervention strategy that has a standard protocol for implementation. There are many intervention strategies from which to choose, and several Web sites provide teacher-friendly intervention resources.


A second option at this stage is to collect more information before moving to intervention. To assist in the development and selection of an intervention for a specific problem, it may be important to conduct an analysis of the problem’s context and function. To do so, we must ask what factors are contributing to the problem and in what ways we can alter those factors to promote learning and reduce the magnitude or severity of the problem. One end goal of this stage in the process is to “diagnose the conditions under which students’ learning is enabled” (Tilly, 2002, p. 29). This goal is accomplished by gathering information (e.g., direct observation, interviews, rating scales, curriculum-based measures of academic skills, review of records) from a number of sources (e.g., the student, teacher, parent, peers, administrator) to answer questions helpful in furthering our understanding of why (i.e., under what conditions) the problem is occurring. Specifically, we want to know where, when, with whom, and during what activities the problem is likely or unlikely to occur.


Although many questions can be asked at this stage, it is important to stay focused on identifying the factors that we can change (i.e., instructional strategies, curriculum materials) in attempting to mitigate the problem situation. For example, when a child’s classroom performance is below our expectations, we might ask whether the problem is a skill (i.e., can’t do) or a performance (i.e., won’t do) problem (for more information on this process, see Daly, Chafouleas, & Skinner, 2005; Daly, Witt, Martens, & Dool, 1997; Witt, Daly, & Noell, 2000). Another important, and related, question to ask concerning learning problems is whether the alignment between the student’s skill level, the curriculum materials, and instructional strategies is appropriate (Howell & Nolet, 2000). When the problem involves performance that falls below what is expected, it is important to ask whether this is because the student a) does not want to perform the task or activity, b) would rather be doing something else, c) gets something (e.g., attention, access to a preferred activity) by not doing the task, d) does not have the prerequisite skills to perform the task, e) is given work that is too difficult or presented in a manner that the student hasn’t seen before, or f) has been given insufficient time to practice the skill to fluency.


In answering the above questions, there is a direct link between our questioning and the development of a solution. For example, if the information we collect suggests that the student has the prerequisite skills needed to decode connected text but does so slowly, one hypothesis we might have is that the student has not had sufficient time to practice reading to develop fluency. An appropriate intervention for this student might focus on building reading fluency through an intervention that involves increased reading practice, such as repeated reading (see Daly et al., 2005, for a description of repeated reading). Alternatively, if we suspect that a student’s reading problem is related to not having enough assistance to acquire the skill and/or a deficit in pre-reading skills (e.g., problems with phonemic awareness), our hypothesized intervention strategy might focus on direct development of prerequisite skills, with prompting and corrective feedback. In each example, the reading problem was related to a skill issue, and the solutions were linked to the type of skill problem (e.g., acquisition, fluency).


If the information we gather suggests that the reading problem is not a skill problem, but rather a performance (i.e., won’t do) issue, then the intervention should focus on addressing the function (e.g., escape from a task) of the behavior. Much has been written about linking assessment to intervention through functional behavioral assessment, and when problems are performance issues, interventions can address behavior function in several ways. When a student’s behavior is maintained by escape from a task, for example, the intervention might reduce the student’s motivation to escape the task by making the task less aversive (e.g., adjusting the choice of materials to increase interest), teach the student a more appropriate way to communicate that the task is aversive (e.g., requesting a brief break), or allow escape from the task following performance of the task for a prespecified time period.


Regardless of whether educators decide to move directly to intervention or to collect more information to analyze the problem, the focus of this step in the self-questioning process is on selecting a solution (intervention strategy) that reduces the magnitude or severity of the problem (i.e., reduces the discrepancy between the student’s current and expected performance). Interventions should be selected on the basis of their functional relevance to the problem (i.e., match to why the problem is occurring), contextual fit (i.e., match to the setting and situation in which the problem occurs), and likelihood of success (i.e., demonstrated success within the research literature). Tier 3 interventions are designed to address significant problems for which students are in need of intensive interventions. As a result, Tier 3 interventions require careful planning. Specifically, an intervention plan should describe the following:


  1. What the intervention will look like (i.e., its steps or procedures)
  2. What materials and/or resources are needed and whether these are available within existing resources
  3. Roles and responsibilities with respect to intervention implementation (i.e., who will be responsible for running the intervention, preparing materials, etc.)
  4. The intervention schedule (i.e., how often, for how long, and at what times in the day?) and context (i.e., where, and with whom?)
  5. How the intervention and its outcomes will be monitored (i.e., what measures, by whom, and on what schedule?) and analyzed (i.e., compared to what criterion?).


In addition, an intervention plan should specify timelines for implementing objectives and for achieving desired goals. The end goal of this stage of the process is a clearly delineated intervention plan. (For examples of evidence-based intervention strategies, see the What Works Clearinghouse, a resource developed by the Institute of Education Sciences, U.S. Department of Education.)
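As a rough illustration, the five plan elements listed above, plus timelines, can be represented as a simple record that a team might check for completeness before implementation begins. The `InterventionPlan` structure and its field names are hypothetical, invented for this sketch rather than drawn from any published planning form.

```python
# Hypothetical sketch: the intervention-plan elements as a record,
# with a completeness check run before the intervention starts.
from dataclasses import dataclass, fields

@dataclass
class InterventionPlan:
    procedures: str        # 1. what the intervention will look like
    materials: str         # 2. materials/resources needed and their availability
    responsibilities: str  # 3. who runs the intervention, prepares materials, etc.
    schedule: str          # 4. how often, how long, when, where, and with whom
    monitoring: str        # 5. measures, by whom, on what schedule, vs. what criterion
    timeline: str          # timelines for objectives and desired goals

def missing_elements(plan):
    """Return the names of any plan elements left blank."""
    return [f.name for f in fields(plan) if not getattr(plan, f.name).strip()]

plan = InterventionPlan(
    procedures="Repeated reading, delivered one-to-one",
    materials="Grade-level passages; progress chart",
    responsibilities="Learning assistant delivers; teacher reviews data weekly",
    schedule="20 minutes, four times per week, in the reading corner",
    monitoring="Weekly oral reading fluency probe compared to goal rate",
    timeline="",  # left blank here, so it is flagged below
)
print(missing_elements(plan))  # -> ['timeline']
```

A check like this simply makes explicit the careful planning the text calls for: a Tier 3 plan with any element left undefined is not yet ready to implement.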


Step 3: Did the student’s problem get resolved as a result of the intervention?


An individual’s RTI can only be known following actual implementation of an intervention and careful (i.e., reliable and valid), repeated measurement of his or her behavior over time. Although a thorough description and analysis of the problem, why it is occurring, and what interventions are likely to be effective is important to the self-questioning process at Tier 3, the process is incomplete until practitioners ask if the student’s problem was resolved as a result of the intervention. The best way to determine whether a student is making progress toward the desired goals in RTI is to collect ongoing information regarding the integrity with which the intervention was implemented and, relative to intervention implementation, the discrepancy between desired and actual performance. The intervention process does not end until the problem (i.e., discrepancy between what is and what should be) is resolved. Thus, continuous monitoring and evaluation are essential parts of an effective RTI process. Specifically, information should be collected on targeted student outcomes (i.e., measurement of change in behavior relative to desired goals), proper implementation of the intervention (i.e., measure whether the intervention is implemented as planned), and social validity (practicality and acceptability of the intervention and outcome). When data are reviewed and analyzed, a decision should be made regarding whether the intervention plan should be revised or goals adjusted. Single-subject design methods are key to determining a student’s RTI (for further information, see Olson, Daly, Andersen, Turner, & LeClair, 2007).
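One way to picture the repeated measurement described above is an “aimline” comparison, a common charting convention in curriculum-based progress monitoring: draw a line from the student’s baseline score to the goal, then check whether recent data points fall below it. The functions below and the three-point decision rule are an illustrative sketch under those assumptions, not a prescribed RTI procedure.

```python
# Illustrative sketch: compare repeated progress-monitoring scores
# against an aimline drawn from baseline to goal.

def aimline(baseline, goal, n_weeks):
    """Expected score for each week, interpolating baseline -> goal."""
    step = (goal - baseline) / n_weeks
    return [baseline + step * week for week in range(n_weeks + 1)]

def below_aimline(observed, expected, k=3):
    """True if the last k data points all fall below the aimline --
    one common decision rule for revising an intervention plan."""
    recent = list(zip(observed, expected))[-k:]
    return all(obs < exp for obs, exp in recent)

# Hypothetical case: baseline of 30 words correct per minute,
# aiming for the 77-word benchmark over a 10-week intervention.
expected = aimline(baseline=30, goal=77, n_weeks=10)
observed = [30, 33, 34, 36, 37, 38, 39, 40]  # weekly scores so far
print(below_aimline(observed, expected))  # -> True: growth, but below the aimline
```

Here the student is improving, yet the last three data points sit below the aimline, so the team would consider revising the plan or adjusting the goal, exactly the decision point the text describes.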




References


Daly, E. J., Chafouleas, S. M., & Skinner, C. H. (2005). Interventions for reading problems: Designing and evaluating effective strategies. New York: Guilford Press.


Daly, E. J., Witt, J. C., Martens, B. K., & Dool, E. J. (1997). A model for conducting a functional analysis of academic performance problems. School Psychology Review, 26, 554–574.


Deno, S. L. (2002). Problem-solving as “best practice.” In A. Thomas & J. Grimes (Eds.), Best practices in school psychology IV (Vol. 1, pp. 37–55). Bethesda, MD: National Association of School Psychologists.


Ervin, R. A., Schaughency, E., Goodman, S. D., McGlinchey, M. T., & Matthews, A. (2006). Moving research and practice agendas to address reading and behavior schoolwide. School Psychology Review, 35, 198–223.


Ervin, R. A., Schaughency, E., Goodman, S. D., McGlinchey, M. T., & Matthews, A. (2007). Moving from a model demonstration project to a statewide initiative in Michigan: Lessons learned from merging research-practice agendas to address reading and behavior. In S. R. Jimerson, M. K. Burns, & A. M. VanDerHeyden (Eds.), The handbook of response to intervention: The science and practice of assessment and intervention (pp. 354–377). New York: Springer.


Ervin, R. A., Schaughency, E., Matthews, A., Goodman, S. D., & McGlinchey, M. T. (2007). Primary and secondary prevention of behavior difficulties: Developing a data-informed problem-solving model to guide decision making at a schoolwide level. Psychology in the Schools, 44, 7–18.


Good, R. H., III, Gruba, J., & Kaminski, R. A. (2002). Best practices in using Dynamic Indicators of Basic Early Literacy Skills (DIBELS) in an outcomes-driven model. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology IV (Vol. 1, pp. 699–720). Bethesda, MD: National Association of School Psychologists.


Hosp, M. K., & Fuchs, L. S. (2005). Using CBM as an indicator of decoding, word reading, and comprehension: Do the relations change with grade? School Psychology Review, 34, 9–26.


Howell, K. W., & Nolet, V. (2000). Curriculum-based evaluation: Teaching and decision making (3rd ed.). Belmont, CA: Wadsworth.


Kovaleski, J. F. (2007). Potential pitfalls of response to intervention. In S. R. Jimerson, M. K. Burns, & A. M. VanDerHeyden (Eds.), The handbook of response to intervention: The science and practice of assessment and intervention (pp. 80–92). New York: Springer.


Olson, S. C., Daly, E. J., Andersen, M., Turner, A., & LeClair, C. (2007). Assessing student response to intervention. In S. R. Jimerson, M. K. Burns, & A. M. VanDerHeyden (Eds.), The handbook of response to intervention: The science and practice of assessment and intervention (pp. 117–129). New York: Springer.


Tilly, D. (2002). Best practices in school psychology as a problem-solving enterprise. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology IV (Vol. 1, pp. 21–36). Bethesda, MD: National Association of School Psychologists.


Witt, J. C., Daly, E. J., & Noell, G. (2000). Functional assessments: A step-by-step guide to solving academic and behavior problems. Longmont, CO: Sopris West.
