Field Studies of RTI Effectiveness
Ohio Intervention-Based Assessment (IBA)
Telzrow, C. F., McNamara, K., & Hollinger, C. L. (2000). Fidelity of problem-solving implementation and relationship to student performance. School Psychology Review, 29, 443–461.
The Ohio Intervention-Based Assessment (IBA) is a problem-solving model that includes collaborative consultation. It is a pre-referral system in that all students are taught using evidence-based curricula and students who are identified as “nonresponders” receive individualized interventions prior to referral for special education eligibility evaluation. Telzrow, McNamara, and Hollinger (2000) identified several components of IBA:
- Behaviorally defining a student's target behavior;
- Collecting baseline data;
- Identifying outcome goals for a student;
- Hypothesis generation regarding reasons for the problem;
- Developing an intervention plan;
- Collecting evidence of treatment fidelity;
- Collecting data about student Response to Intervention (RTI);
- Comparing RTI data to baseline performance.
Within this model, a multidisciplinary team (MDT) is responsible for implementing problem-solving procedures and consists of the school principal, the school psychologist, a general education teacher, and a special education teacher. Parent involvement is encouraged, especially when the MDT suspects the presence of a disability. Training of school personnel in the IBA model took place over several years and was the responsibility of personnel working in Ohio’s Special Education Regional Resource Centers. By 1997, 329 schools throughout Ohio were involved in the IBA efforts.
Purpose of Study
The authors conducted the study to discover the level of fidelity with which the IBA model was being implemented, as well as its relationship to student outcomes. Specifically, the purpose of the study was to answer the following questions:
- With what degree of fidelity did MDTs in participating schools implement the IBA problem-solving components?
- To what degree did students for whom the problem-solving process was used attain target academic or behavioral goals?
- What was the relationship between fidelity of problem-solving implementation and student outcomes?
Data were collected by asking 227 participating schools (90% were elementary schools) to submit two forms of documentation: a) a problem-solving worksheet (PSW) in which MDTs recorded information related to the eight previously identified components and b) an Evaluation Team Report (ETR) form, which included descriptions of learning concerns, implemented interventions, and data from progress monitoring. School personnel were directed to submit only their “best case” documentation (i.e., “products that would reflect their most complete and accurate implementation of the problem-solving process”; Telzrow et al., 2000, p. 449). Additionally, the authors used a Case Evaluation Instrument, a 5-point Likert-type rating scale, to evaluate implementation fidelity and student change as presented in the PSW and the ETR.
Question 1: With regard to fidelity of implementation, ratings on the Likert scale were variable, with behaviorally defining the target behavior receiving the highest rating (M = 4.33/5), followed by clearly identified goal (M = 3.96/5). The lowest two ratings were hypothesized reason for problem (M = 2.18/5) and treatment fidelity/integrity (M = 2.60/5). The average rating for all components was 3.28/5. Note that a rating of 3 indicated that “some elements” of the problem solving components were present in the documentation.
Question 2: The authors concluded that there was overall improvement in student outcomes, as indicated by an average rating of 4 for the 291 submitted academic or behavioral goals. A rating of 4 was defined as “intermediate between” no progress and significant progress.
Question 3: Based on a series of correlation procedures, the authors reported modest but statistically significant relationships between student outcomes and two of the problem-solving components: a clearly identified goal and data indicating student response to intervention. Together, these two components accounted for 8% of the variance in student outcome ratings. The relationship between student outcomes and integrity of implementation was low.
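The "8% of variance" figure corresponds to the R² statistic from a regression of outcome ratings on the two component ratings. As a minimal sketch, assuming simulated 5-point ratings (not the study's actual data; all variable names and values here are hypothetical), the computation looks like this:

```python
# Hypothetical illustration only (simulated ratings, NOT the Telzrow et al.
# data): how two problem-solving component ratings can jointly account for a
# share of the variance (R^2) in student-outcome ratings via least squares.
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Simulated 1-5 Likert-style ratings for the two predictive components.
goal_clarity = rng.integers(1, 6, size=n).astype(float)
rti_data = rng.integers(1, 6, size=n).astype(float)

# Outcome weakly related to both predictors, plus noise, so R^2 stays modest.
outcome = 0.15 * goal_clarity + 0.15 * rti_data + rng.normal(0.0, 1.0, size=n)

# Design matrix with an intercept column; fit ordinary least squares.
X = np.column_stack([np.ones(n), goal_clarity, rti_data])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)

# R^2 = 1 - (residual sum of squares / total sum of squares).
predicted = X @ coef
ss_res = np.sum((outcome - predicted) ** 2)
ss_tot = np.sum((outcome - outcome.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(f"R^2 = {r_squared:.3f}")  # proportion of outcome variance explained
```

An R² near 0.08 would mean that roughly 8% of the spread in outcome ratings is predictable from the two component ratings combined, leaving the remaining variance to other factors.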