Implementing a Combined RTI/PBS Model: Improving on Screening Practices



    The beginning of the 2010 school year found us back at Silver Sage Elementary School, ready to continue work on our RTI/PBIS implementation project, with a strong focus on Tier 2. We ended the previous school year with plans to begin screening early in the fall and to include screening for language arts because the school did not meet adequate yearly progress (AYP) in that area.

    We started the year with a review of the many accomplishments from the previous year (see our earlier post – A Preliminary Look at Year 1 Outcomes). Implementing new practices is difficult and we wanted to spend time rewarding the staff for their hard work! We discussed the strategies for the coming year, including the use of a new screening measure, the implementation of a math and writing intervention system, and the use of Check-In, Check-Out (CICO) as a Tier 2 behavior intervention.

    Fall academic screening was conducted early in the school year, during the second week of school. The school used three measures: 1) the Test of Silent Reading Efficiency and Comprehension (TOSREC) for reading; 2) Monitoring Basic Skills Progress (MBSP) Concepts and Applications for math; and 3) Total Words Written (TWW) from story starters for writing. In addition to these measures, the school also conducts fall Measures of Academic Progress (MAP) testing during the middle-to-end of September.

    It took about a week to score and record screening data for the curriculum-based measures (CBMs) that the school uses. Teachers were accustomed to the TOSREC and MBSP by now, but the TWW measure was new and was met with some resistance. Results indicated that a significant number of students across grades fell below national benchmarks, and the teachers were uncertain of the measure's face validity. "How does this measure assess the quality of writing?" one teacher asked. When faced with the high percentages of students not meeting benchmark, another teacher said, "We don't do this kind of writing in class; in fact, we don't do much writing at all. This isn't the way the Language Usage part of the state test is conducted." Across measures, screening results showed that many students in the early grades were below benchmark, prompting one teacher to comment, "These kids aren't used to following directions and the kinds of tasks we are asking them to do."
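    To make the kind of summary we generate from screening data concrete, here is a minimal sketch of computing the percentage of students below benchmark by grade. The benchmark values and student records are hypothetical placeholders, not the school's actual data or the published TWW norms.

```python
# Hypothetical sketch: summarizing fall TWW screening results by grade.
# Benchmarks and student records below are illustrative placeholders.
from collections import defaultdict

# Illustrative fall TWW benchmarks (total words written) by grade
TWW_BENCHMARKS = {2: 15, 3: 25, 4: 35}

# Each record: (student_id, grade, total_words_written)
screening_records = [
    ("s001", 2, 10), ("s002", 2, 18), ("s003", 3, 20),
    ("s004", 3, 30), ("s005", 4, 28), ("s006", 4, 40),
]

below = defaultdict(int)
total = defaultdict(int)
for _sid, grade, tww in screening_records:
    total[grade] += 1
    if tww < TWW_BENCHMARKS[grade]:
        below[grade] += 1

for grade in sorted(total):
    pct = 100 * below[grade] / total[grade]
    print(f"Grade {grade}: {below[grade]}/{total[grade]} below benchmark ({pct:.0f}%)")
```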

    This led us to wonder about a few things. First, is it wise to conduct screening before any real instruction has taken place? The typical mantra is that earlier is better: identifying kids sooner means there will be more time to intervene before the end of the year. But is there such a thing as screening too early in the year? If students haven't been exposed to any instruction for three months (summer vacation), will their performance be underestimated simply because they have been out of practice with academic tasks? We decided to test this idea by scheduling a second TWW screening in October. Indeed, we found that many of the students initially flagged had "self-corrected" by the second administration, before any interventions were put into place. We are discussing as a team whether to delay beginning-of-year screening or whether to conduct an early screening followed by a four-week follow-up screen. Given the school's limited resources, we are leaning towards delaying administration, but the district office determines the screening schedule, so the decision might be more complicated than that.
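    The comparison itself is simple set arithmetic. A minimal sketch, assuming each administration produces a set of flagged student IDs (the IDs here are made up):

```python
# Hypothetical sketch: which students flagged at the September TWW screening
# were still below benchmark at the October re-screen? IDs are illustrative.

september_flags = {"s001", "s003", "s005", "s007"}  # below benchmark, week 2
october_flags = {"s003", "s007"}                    # still below in October

self_corrected = september_flags - october_flags    # recovered without intervention
persistent = september_flags & october_flags        # candidates for Tier 2 support

print(f"Self-corrected by October: {sorted(self_corrected)}")
print(f"Still below benchmark: {sorted(persistent)}")
```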

    Second, should state assessment performance be the "gold standard" outcome that dictates everything we screen for? Our state assessment is a multiple-choice standardized measure, in part to increase the reliability of scoring and in part to reduce administration and scoring costs. No writing is included in the assessment, but does that mean writing ability is no longer an important academic outcome? Similarly, our reading and math standards may be lower than those of other states and lower than national standards, as evidenced by the disparity between the percentage of students meeting state standards and the percentage meeting national standards. In using our state test as the "bar" for outcomes, will we under-identify students who need additional support in order to become strong readers, writers, and mathematicians?

    Finally, is it more important for screening tools to have face validity (so that teachers will be willing to make data-based decisions when presented with the results) OR is it more important for them to have strong classification accuracy (so that when decisions are made, we can have confidence in those decisions)? Prior to beginning this implementation project, I would have said classification accuracy is paramount. Now, I'm not so sure. If teachers can't identify with the way the screening tool is constructed, and if they aren't clear on the purpose of screening, then it can be very difficult to get them to use data, especially if they were resistant to using data in the first place. For now, we are trying to increase the staff's assessment literacy by explaining the purposes of screening and by providing suggestions for how to act on the data they've received.
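    For readers unfamiliar with the term, classification accuracy is usually summarized with sensitivity and specificity: how many truly at-risk students the screener catches, and how many not-at-risk students it correctly leaves alone. A minimal sketch of the computation, using made-up paired records of (flagged at screening, below standard at year's end):

```python
# Hypothetical sketch of classification accuracy: how well do fall screening
# flags predict end-of-year outcomes? The paired records are illustrative.

# Each record: (flagged_by_screener, below_standard_at_year_end)
records = [
    (True, True), (True, False), (False, False), (True, True),
    (False, True), (False, False), (True, False), (False, False),
]

tp = sum(1 for flagged, below in records if flagged and below)        # true positives
fn = sum(1 for flagged, below in records if not flagged and below)    # misses
fp = sum(1 for flagged, below in records if flagged and not below)    # false alarms
tn = sum(1 for flagged, below in records if not flagged and not below)

sensitivity = tp / (tp + fn)  # share of truly at-risk students the screener caught
specificity = tn / (tn + fp)  # share of not-at-risk students correctly not flagged

print(f"Sensitivity: {sensitivity:.2f}, Specificity: {specificity:.2f}")
```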