
Field Studies of RTI Effectiveness

System to Enhance Educational Performance (STEEP)


Study Citation

 

VanDerHeyden, A. M., Witt, J. C., & Gilbertson, D. (2007). A multi-year evaluation of the effects of a response to intervention (RTI) model on identification of children for special education. Journal of School Psychology, 45, 225–256.


Program Description

 

The System to Enhance Educational Performance (STEEP) model is a standard-protocol approach to identifying and evaluating children for special education. STEEP uses a commercially available set of curriculum-based assessment (CBA) and curriculum-based measurement (CBM) probes in reading and math to obtain data on target students’ levels of performance relative to same-class peers. VanDerHeyden, Witt, and Gilbertson (2007) identified five critical steps in the STEEP process:

 

  1. Universal classwide screening;
  2. Classwide intervention;
  3. Performance/skill deficit assessment;
  4. Individual intervention;
  5. Special education referral process.

Within this model, general education teachers are responsible for universal screening, administering CBM probes at least twice a year, and for implementing classwide intervention (e.g., modeling the target skill, guided practice with frequent opportunities to respond followed by immediate feedback, independent timed practice, and rewards for beating the previous timed score).

 

A school psychologist conducts the performance/skill deficit assessment for students scoring at or below the 16th percentile on the classwide intervention measure. Students exhibiting skill deficits receive individual intervention from the classroom teacher (or a teacher designee) in daily sessions lasting about 10 minutes. School psychologists are responsible for selecting the intervention.
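
To make the decision rule concrete, the sketch below is a minimal illustration, not part of the study; the student names, scores, and percentile calculation are assumptions. It simply shows how students scoring at or below the 16th percentile on a classwide measure could be flagged for the performance/skill deficit assessment.

    # Minimal illustration only: VanDerHeyden et al. (2007) do not specify how
    # percentile ranks were computed; names and scores below are hypothetical.

    def percentile_rank(score, class_scores):
        """Percentage of classwide scores at or below the given score."""
        return 100.0 * sum(s <= score for s in class_scores) / len(class_scores)

    def flag_for_skill_deficit_assessment(scores_by_student, cutoff=16):
        """Return students whose screening score falls at or below the cutoff percentile."""
        all_scores = list(scores_by_student.values())
        return [student for student, score in scores_by_student.items()
                if percentile_rank(score, all_scores) <= cutoff]

    # Hypothetical classwide CBM screening scores (e.g., digits correct per minute)
    screening = {"Student A": 62, "Student B": 55, "Student C": 18, "Student D": 71,
                 "Student E": 23, "Student F": 60, "Student G": 45}
    print(flag_for_skill_deficit_assessment(screening))  # ['Student C']

Students flagged in this way would then receive the brief daily individual intervention described above.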

Training of school psychologists in the STEEP model was the responsibility of university-based researchers. Training sessions occurred one day per week for approximately 15 weeks. Classroom teachers received seven all-day training sessions with a literacy and classroom management coach. By 2004, a total of five elementary schools were participating in the STEEP model.


Purpose of Study

 

VanDerHeyden et al. (2007) conducted the study to examine the effects of the STEEP model on the identification and evaluation of children for special education. Specifically, the purpose of the study was to answer the following questions:

 

  1. What effect would STEEP implementation have on the number of evaluations and the percentage of evaluations resulting in qualification for special education services?
  2. To what degree would the decision-making teams use STEEP data to determine whether an evaluation should be conducted?
  3. What effect did STEEP implementation have on identification rates by ethnicity, sex, free or reduced-price lunch status, and primary language status?
  4. Did the use of STEEP reduce assessment and placement costs for the district, and how were these funds reallocated?

Study Method

 

Beginning in April 2002, the STEEP model was implemented in five elementary schools (Grades 1–5): two schools in 2002–2003, one additional school in 2003–2004, and two more schools in 2004–2005. Effects were examined via a multiple-baseline-across-schools design. STEEP procedures were sequentially introduced across schools and evaluated for their effects on the number of initial evaluations and the percentage of children evaluated who qualified for services (an estimate of diagnostic efficiency). STEEP effects were also evaluated for differences by gender, ethnicity, and socioeconomic status (SES).

 

Study Results


Question 1: The number of special education evaluations across the five schools decreased from baseline, as did the number of students qualifying for special education services. During baseline, a little over one-half of evaluations across the schools resulted in students qualifying for special education services; after STEEP implementation, 69.5% of evaluations resulted in qualification for services. VanDerHeyden et al. (2007) viewed the increase as a measure of evaluation efficiency (i.e., the STEEP program resulted in fewer false positives).

Question 2: The authors found that decision-making teams used STEEP data to a greater degree when determining whether an evaluation should be conducted.

Question 3: The authors found that, during baseline, the observed proportion of evaluated minority students (.31) did not differ significantly from the expected proportion (.26). During the baseline years, however, the expected proportion of evaluated males (.50) differed significantly from the observed proportion (.62); that is, more males were evaluated than would be expected. After STEEP implementation, there were no significant differences between the expected and observed proportions of evaluated males.

Question 4: The authors reported that both assessment and placement costs were reduced after STEEP implementation. They estimated that assessment costs were reduced by 50% and placement costs by approximately 55%. These percentages were estimated using hypothetical costs per assessment and student.


