What’s Your Plan? Accurate Decision Making within a Multi-Tier System of Supports: Critical Areas in Tier 1


Each day, hundreds of decisions are made within schools: What counts as a passing grade? Who needs more support? What kind of support is needed and how will it be delivered?

Most decisions within the framework of a multi-tier system of supports (MTSS) are made by teams—building leadership teams, student support teams, or grade-level teams. These teams may or may not use a deliberate, data-driven decision-making process, and even when they do, a variety of factors, such as lack of consensus, the restrictions of the building schedule, the skills of available interventionists, and the amount of resources available to meet student needs, can distract the team from its task. That can be a problem when the success of MTSS implementation depends on making good decisions.


This article, the first in a two-part series, addresses the complexity of MTSS decision making and identifies critical decision points within Tier 1 (universal or core instruction for all students) at the building level. The building level refers to the school unit that is implementing MTSS (e.g., an elementary or middle school), as opposed to the larger district unit. The second article addresses Tier 2 (secondary prevention) intervention selection, management, and evaluation. Both articles describe typical barriers that teams may expect to encounter in implementing MTSS and provide tools that can improve student outcomes by sharpening the precision of team decision making.

Unpacking the Complexity of MTSS Decision Making


Successful MTSS implementation is a highly complex process that involves the following tasks:
  • Gathering accurate and reliable data
  • Correctly interpreting and validating data
  • Using data to make meaningful instructional changes for students
  • Establishing and managing increasingly intensive tiers of support
  • Evaluating the process at all tiers to ensure the system is working

MTSS decisions are made through a team-based process. The steps described above should be coordinated through an MTSS building leadership team, which is responsible for coordinating and communicating all MTSS implementation efforts for the building. The team uses a problem-solving process at both the system and student levels. At the system level, the team might ask, "Is the core instruction effective?" At the student level, it would ask, "Which students need additional support?" Teams examine both system- and student-level problems by asking 1) What is the problem? 2) Why is the problem occurring? 3) What should we do about the problem? and 4) Did our solution work?
Accurate and timely data are also crucial to effective problem solving. MTSS is a framework, not a rigid filter, so teams should make decisions based on student performance data whose benchmarks and effectiveness have been established by empirical studies, especially in the areas of screening, progress monitoring, and intervention effectiveness.
In addition to student performance data, teams should familiarize their members with guidelines, indicators, flow charts, and checklists that improve the functioning of the MTSS process. Checklists can be particularly helpful. They are used in a variety of professions, such as aviation and medicine, because they minimize human error by guiding the user through the many steps and activities involved in complex work (Gawande, 2010). Checklists should identify critical elements of the work and give assurance that the system is working effectively. At the same time, checklists should not be overly detailed: having to review too much information can shift the focus from the system to the checklist, overwhelming and paralyzing the team and causing it to lose momentum. Table 1 provides a list of checklists that support MTSS implementation in the areas of reading and behavior:
Checklist Title | Author(s) | Purpose
Planning and Evaluation Tool for Effective Schoolwide Reading Programs, Revised (PET-R) | Simmons & Kame’enui (2003) | Provides a checklist and rating system for seven critical elements of an elementary, schoolwide reading program
Benchmarks for Advanced Tiers (BAT) | Anderson, Childs, Kincaid, Horner, George, et al. (2009) | Provides questions and a scoring rubric for key features of a schoolwide behavior support system (K-12)
Benchmarks of Quality (BoQ) | Kincaid, Childs, & George (2010) | Provides a set of questions linked to 10 critical elements within schoolwide PBIS; assists teams in identifying strengths and needs for action planning
Table 1: Checklists that Support Implementation of a Multi-tier System of Supports

All of the checklists listed above are available on Michigan’s Integrated Behavior and Learning Support Initiative (MiBLSi) website; see also PBIS Assessments.

Tier 1 Critical MTSS Decisions


MTSS decisions at the Tier 1 building level focus on balancing the needs of the entire student population against the resources available to the building. Critical areas for teams to examine include the identification of student needs and the effectiveness of core instruction, that is, the instruction that all students receive every day.

Who needs more support?


One guideline for MTSS implementation is having approximately 80% of students reach the benchmark criteria established by the screening tool. If the percentage is significantly lower than 80%, buildings should intensify their focus on improving Tier 1 instruction, for two reasons: 1) buildings do not have the resources to intervene with a large percentage of students, and 2) you cannot “intervene” your way out of core instruction that is not effective. Given these limitations, it is critical for teams to choose reliable and valid criteria for screening. Some buildings may base their screening decisions on benchmark criteria established by a curriculum-based measurement screening tool (for example, Dynamic Indicators of Basic Early Literacy Skills (DIBELS) or AIMSweb); others may choose a norm-referenced criterion (for example, their lowest 15%); still others may use a combination of both. The challenge is to find the right students and match them to the right interventions as early as possible.
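To make the arithmetic behind these decision points concrete, here is a minimal sketch of how a team’s data system might compute the percent of students at benchmark and flag students for support using a benchmark cut score, a norm-referenced lowest 15%, or both. Every name, score, and cut score below is a hypothetical illustration, not a value from DIBELS, AIMSweb, or any other published tool.

```python
# Illustrative sketch: flagging students for additional support from
# screening data, using a benchmark cut score, a norm-referenced
# percentile (e.g., lowest 15%), or both. All names and numbers are
# hypothetical, not drawn from any published screening tool.

def percent_at_benchmark(scores, cut_score):
    """Percent of students scoring at or above the benchmark cut score."""
    at_benchmark = sum(1 for s in scores.values() if s >= cut_score)
    return 100.0 * at_benchmark / len(scores)

def flag_for_support(scores, cut_score, norm_fraction=0.15):
    """Students below benchmark OR in the lowest norm_fraction of the group."""
    ranked = sorted(scores, key=scores.get)  # lowest scores first
    lowest = set(ranked[: max(1, int(len(ranked) * norm_fraction))])
    below = {s for s, score in scores.items() if score < cut_score}
    return below | lowest

# Hypothetical winter screening scores for one grade level
scores = {"A": 52, "B": 38, "C": 61, "D": 29, "E": 47, "F": 55}
cut = 40  # hypothetical benchmark cut score

pct = percent_at_benchmark(scores, cut)
print(f"{pct:.0f}% at benchmark")
if pct < 80:
    print("Below the ~80% guideline: prioritize improving Tier 1 core instruction.")
print("Flagged for support:", sorted(flag_for_support(scores, cut)))
```

Note how the two criteria can disagree: a student can sit above the cut score yet fall in the building’s lowest 15%, which is why some buildings combine both rules.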
Within screening, there are additional decision points beyond identifying a score that signals a level of risk for a student. Because screening serves a critical function within MTSS, the data must be accurate. The following fidelity checks, applied at each screening period, are an important part of systematically checking for human error in collecting screening data:
  • Are assessors given a checklist of standard administration and scoring rules?
  • Are the administration and scoring rules reviewed with the team before each screening period?
  • Is the data entry process checked for clerical errors? (A simple automated check is sketched after this list.)
  • Do the assessors have adequate training and coaching?
  • Does the building have an efficient schedule to collect screening data in a timely manner?
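Most of these checks are human processes (training, coaching, scheduling), but the data entry check can be partly automated. The sketch below is a hypothetical example of scanning entered scores for out-of-range values and duplicate student records; the field names and the valid score range are assumptions, not part of any screening tool.

```python
# Hypothetical sketch: simple clerical checks on entered screening data.
# The field names and the valid score range are illustrative assumptions.

VALID_RANGE = (0, 80)  # assumed minimum/maximum possible score for the measure

def clerical_errors(records):
    """Return human-readable problems found in a list of score records."""
    problems, seen = [], set()
    for rec in records:
        sid, score = rec["student_id"], rec["score"]
        if sid in seen:
            problems.append(f"duplicate entry for student {sid}")
        seen.add(sid)
        if not VALID_RANGE[0] <= score <= VALID_RANGE[1]:
            problems.append(f"score {score} out of range for student {sid}")
    return problems

records = [
    {"student_id": "001", "score": 47},
    {"student_id": "002", "score": 470},  # likely a typo during entry
    {"student_id": "001", "score": 47},   # entered twice
]
for p in clerical_errors(records):
    print("CHECK:", p)
```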

Finally, if screening is working properly, it can assist schools in deciding 1) whether they are getting better over time and 2) what changes they need to make to the core curriculum. Graphs that provide a visual picture of student growth are extremely helpful in making these decisions:
Figure 1: DIBELS Phoneme Segmentation Fluency Data
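As an illustration of the kind of graph Figure 1 represents, the following sketch charts a building’s percent of students at benchmark across fall, winter, and spring screening periods using matplotlib; the numbers are hypothetical, not actual DIBELS results.

```python
# Illustrative sketch: charting percent of students at benchmark across
# screening periods, in the spirit of Figure 1. All numbers are hypothetical.
import matplotlib.pyplot as plt

periods = ["Fall", "Winter", "Spring"]
pct_at_benchmark = [58, 71, 83]  # hypothetical schoolwide results

plt.plot(periods, pct_at_benchmark, marker="o")
plt.axhline(80, linestyle="--", label="~80% guideline")  # Tier 1 guideline
plt.ylabel("Percent of students at benchmark")
plt.title("Schoolwide screening results over time")
plt.ylim(0, 100)
plt.legend()
plt.show()
```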

Is the core instruction working?


Within a school district, assume there are two buildings (School A and School B) with similar student populations, staff, and resources, and an identical reading curriculum. In the spring, School A has 91% of its 2nd graders at the screening benchmark goal, while School B has 36%. State test scores between the two buildings show a similar gap. Why? One hypothesis is that something within Tier 1 of MTSS is not working in School B, and deep analysis is needed at this level before focusing on individual student issues.
To avoid being overwhelmed, School B can prioritize Tier 1 actions by critically and honestly completing the PET-R (Simmons & Kame’enui, 2003). The PET-R has seven sections (with five to ten items per section) that examine the following areas:
  1. Goals, Objectives, Priorities
  2. Assessment
  3. Instructional Program and Materials
  4. Instructional Time
  5. Differentiated Instruction/Grouping/Scheduling
  6. Administrative/Organization/Communication
  7. Professional Development

Buildings complete the PET-R once a year with a four- to six-person leadership team, preferably with balanced representation from teachers, coaches or itinerant staff, and administration. Teams read the items and rate how MTSS for reading is functioning in their building according to a scale: 0 (not in place), 1 (partially in place), or 2 (in place). An accompanying document, called the PET-A, provides examples and ideas for comparison. If the team rates one area of the PET-R particularly low, one recommendation is to have a larger representation of staff (for example, all staff teaching one grade level or all lower elementary staff) complete the items within that section.
In our example, School B gave its schoolwide reading system a total score of 56% on the PET-R, while School A gave itself 96%. School B’s lowest area was Instructional Time, with a subscore of 42%. According to these scores, School B has a critical area to prioritize: How are we using our instructional time? How much is allocated? How much do we actually use? How long are students engaged? How much time are students given to respond and receive feedback?
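The percentage arithmetic behind these PET-R scores is straightforward. The sketch below illustrates it under the assumption that each item is rated 0, 1, or 2 and a subscore is points earned over points possible; the item ratings shown are hypothetical, not School B’s actual item-level data.

```python
# Illustrative sketch of PET-R percentage arithmetic: each item is rated
# 0 (not in place), 1 (partially in place), or 2 (in place), so a section's
# subscore is points earned over points possible. The ratings below are
# hypothetical examples only.

def section_percent(ratings):
    """Percent of possible points earned for one PET-R section."""
    return 100.0 * sum(ratings) / (2 * len(ratings))

sections = {
    "Goals, Objectives, Priorities": [2, 1, 2, 1, 2],
    "Instructional Time":            [1, 0, 1, 1, 1, 0],
    # ... the remaining five sections would be listed the same way
}

total_earned = sum(sum(r) for r in sections.values())
total_possible = 2 * sum(len(r) for r in sections.values())

for name, ratings in sections.items():
    print(f"{name}: {section_percent(ratings):.0f}%")
print(f"Total: {100.0 * total_earned / total_possible:.0f}%")
```

A low subscore such as School B’s 42% for Instructional Time simply means the team earned well under half of the possible points in that section, which is what flags it for priority action.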
An additional tool for Tier 1 analysis is an inventory of practices: survey teachers on the following questions and have them provide a scope and sequence of their day-to-day instruction in a subject area.
  • What instructional routines are used? Are the routines consistent from classroom to classroom, general education to special education?
  • Is there evidence of scaffolding and explicit instruction, especially when students are learning something new?
  • Is there evidence of distributed practice of critical skills?
  • Is cumulative review built in on a systematic basis?
  • How much time is allocated? How is that time used (for example, whole group instruction, small group instruction, or independent practice)?
  • Does the pace of the instruction match student needs?
  • Do students have multiple opportunities for response and feedback? Are students actively engaged (that is, are they saying, writing, and doing)?

A careful analysis of the time, materials, and delivery of core instruction is essential to knowing which components of the Tier 1 system are working well and which need to be improved.

Troubleshooting Guide for MTSS Decisions


An additional tool that may assist MTSS leadership teams in sharpening their decision-making process is the Troubleshooting Guide for MTSS Decisions: Building Level. This guide was designed to provide ideas, resources, and additional tools for critical decision points within MTSS, such as screening, instruction, supplemental intervention, evaluation, and implementation.

Plan for Success


Successful MTSS implementation is highly complex and relies upon accurate decisions. Good decisions are made when teams have accurate and timely data, not only for student outcomes, but also for critical components of MTSS such as screening, core instruction, intervention, progress monitoring, and evaluation. Effective MTSS teams collect and analyze systems data just as systematically as they collect and review student data. Only when we follow these practices can we ensure that MTSS is carefully planned and implemented, efficient, and effective for our students.

References


AIMSweb. http://www.aimsweb.com/

Anderson, Childs, Kincaid, Horner, George, et al. (2009). Benchmarks for Advanced Tiers (BAT). Educational and Community Supports, University of Oregon & University of South Florida.

Dynamic Indicators of Basic Early Literacy Skills (DIBELS®). http://dibels.org/

Gawande, A. (2010). The checklist manifesto: How to get things right. New York: Metropolitan Books.

Kincaid, D., Childs, K., & George, H. (2010). Schoolwide Benchmarks of Quality (Revised). Unpublished instrument. University of South Florida, Tampa, FL.

Simmons, D.C., & Kame’enui, E.J. (2003). Planning and evaluation tool for effective schoolwide reading programs, revised. Institute for the Development of Educational Achievement, College of Education, University of Oregon.
