
Treatment Integrity: Ensuring the “I” in RtI



Response to intervention (RtI) has been conceptualized as having two meanings. First, RtI has served as a synonym for a multi-tier system of support (MTSS); as such, it includes those assessment and instruction/intervention procedures that have as a goal the attainment of proficiency in basic academic skills. Alternatively, RtI describes the use of assessment data that are collected on students during the course of instruction and/or intervention for the purpose of making both low- and high-stakes decisions about those students. In either usage, it is presupposed that the student’s response can be validly and reliably measured and that an intervention has been used that is reasonably calculated to facilitate student learning. This latter determination depends on the extent to which the intervention used is based in scientific research (i.e., has been shown to work with students under appropriately controlled conditions) and whether the intervention has been implemented with fidelity. The extent to which an intervention is delivered in adherence to its design features has been termed treatment integrity and has been identified as a critical element of RtI programs (Zirkel & Thomas, 2010). How treatment integrity is defined, operationalized, and evaluated within an MTSS is the topic of this article.

Basic Considerations

 

Throughout the literature on this topic, treatment fidelity and treatment integrity often are used interchangeably. For the sake of simplicity, we use treatment integrity in this article. Treatment integrity has been defined as the degree to which an intervention or treatment is implemented as planned, intended, or originally designed (Gresham, 1989, 2004; Gresham, MacMillan, Beebe-Frankenberger, & Bocian, 2000; Lane, Bocian, MacMillan, & Gresham, 2004). Treatment integrity has been an important topic in the research literature because it is critical to ascertain whether the treatment being investigated was implemented reliably if a causal relationship with the dependent variable is to be supported. Charters and Jones (1974), in an early paper on this topic, argued for the necessity of measuring treatment integrity in empirical research and noted that many studies failed to account for the extent to which treatment integrity was in place. When the level of implementation of an experimental treatment is not considered, threats to internal and external validity make it impossible to reach accurate conclusions about the effectiveness of the treatment or to replicate a research study in the hope of obtaining the same results (Bellg et al., 2004). The goal of research is to determine whether changes in the dependent variable (outcomes) are due to changes in the independent variable (intervention). The impact that the intervention has on outcomes can only be determined when researchers demonstrate that the intervention was implemented as intended without modifications (Gresham et al., 2000). Clearly, practitioners seeking to implement research-based interventions need to be cautious in adopting practices that are not supported by research studies in which treatment integrity is meaningfully measured.

 

Although treatment integrity is important in empirical research, our focus in this article is on how the concept and operationalization of treatment integrity applies to the implementation of instruction and interventions in schools. In this context, Hagermoser Sanetti and Kratochwill (2009) defined treatment integrity as “the extent to which essential intervention components are delivered in a comprehensive and consistent manner by an interventionist trained to deliver the intervention” (p. 448). The central concept is that a student’s change in performance can be reasonably attributed to the intervention that has been delivered. It is assumed that there is a direct relationship between the extent to which the intervention is delivered and the likelihood that the student will make meaningful gains. Further, it is presumed that an intervention needs to be delivered with sufficient fidelity for an effect to occur. However, what level or threshold of treatment integrity needs to be attained has not been established at present and likely varies with academic domain, the particular intervention, and a number of other factors. The reader is referred to the article by Hagermoser Sanetti and Kratochwill for a comprehensive listing of these factors.

 

Treatment integrity is considered to be a multidimensional construct. Schulte, Easton, and Parker (2009) articulated a number of critical features of treatment integrity, including those related to the delivery of the intervention, how the intervention is received by the participant, and how the participant is able to use the learned skills in a natural environment. In terms of delivery, the measurement of treatment integrity is based on the notion that the intervention can be delineated into a series of steps or actions that are considered essential to the intervention per se, as contrasted with those actions that are extraneous to, although not interfering with, the intervention and those actions that are proscribed during an intervention session (Hagermoser Sanetti & Kratochwill, 2009). The essential actions can then be arranged in a list (either sequential or inclusive) that allows for an appraisal of adherence in quantifiable terms (e.g., percentage or number of actions implemented).

 

Treatment Integrity and RtI

 

The introduction of RtI in schools has called attention to treatment integrity because one of the primary tenets of the RtI model is that evidence-based interventions are implemented with integrity. In essence, the validity of RtI depends on the thorough and effective implementation of the intervention (the I). If treatment integrity is not ensured, practitioners are unable to determine if the student’s progress is traceable to the intervention used. More important, if a student fails to make progress in response to a scientifically validated intervention, it is critical to ascertain whether the intervention, which has been established as effective for other students with similar needs, was implemented with sufficient integrity. Failure to check the fidelity of the treatment can lead to a potentially erroneous conclusion that the student’s academic deficiencies are the result of a disabling condition, such as a specific learning disability (Kovaleski, VanDerHeyden, & Shapiro, 2013).

 

The necessity of ensuring treatment integrity within an RtI framework not only is important programmatically but also is implied in federal law and regulations. The Individuals with Disabilities Education Improvement Act (IDEIA) regulations (2006) specify that 

 

A child must not be determined to be a child with a disability … if the determinant factor for that determination is lack of appropriate instruction in reading (or math), including the essential components of reading instruction.... (34 C.F.R. §300.306 [b][1][i-iii])

To ensure that underachievement in a child suspected of having a specific learning disability is not due to lack of appropriate instruction in reading or math, the group must consider, as part of the evaluation … data that demonstrate that prior to, or as part of, the referral process, the child was provided appropriate instruction in the regular education setting, delivered by qualified personnel…. (34 C.F.R. §300.309 [b][1])

 

These regulations indicate that an evaluation team must rule out a lack of appropriate instruction, arguably by ensuring and documenting that appropriate instruction has occurred in general education. Further, the IDEIA regulations indicate that an evaluation team must document “the strategies used for increasing the child’s rate of learning” (34 C.F.R. §300.311 [a][7][ii][B]), by which it is understood that robust interventions to address the student’s need would be provided. Thus, not only does a student need to receive effective instruction and interventions prior to determination of eligibility for special education, but data must also be collected to document that this instruction and intervention occurred. In our view, these data should include information regarding the fidelity of implementation of instruction and intervention, in addition to student outcome measures. As Lane et al. (2004) reported, “It is absolutely essential that treatment integrity data be collected when conducting school-based interventions in order to draw accurate conclusions about the effectiveness of the interventions” (p. 37). Teams can only make adjustments to an intervention and assess its effectiveness if the fidelity of implementation is monitored over time (Gable, Hendrickson, & VanAcker, 2001).

 

Additionally, when a student is identified as having a disability and needing special education, the student’s individualized education program (IEP) includes a specification of those specially designed instructional practices that are reasonably calculated to facilitate meaningful progress for the student. Delivering the program as designed by the IEP team is a due process right afforded to parents of students with disabilities. Consequently, documenting treatment integrity within a special education program serves as a due process protection for students (Noell & Gansle, 2006).


Have Interventions Historically Been Implemented With Integrity?

 

As indicated above, for RtI to be truly effective as a service delivery model, substantive efforts are required in practical settings to ensure that interventions are implemented with fidelity and that treatment integrity is evaluated. Nonetheless, despite its importance, treatment integrity has historically been overlooked in research and in practice. In the research literature, Detrich (1999) noted that more is known about the effectiveness of interventions than about the integrity with which they are implemented. Only a small percentage of studies have reported measuring implementation and treatment integrity variables (Batsche, 2006; Walker, 2004). For example, in a review of studies published in three major learning disability journals over a 5-year period, Gresham et al. (2000) found that only 18.5% of the articles measured the treatment integrity of academic interventions.

 

Similarly, Noell and Witt (1996) noted that the science of behavioral consultation has been slow to evolve and that the limited understanding of fidelity issues may have been a contributing factor. Furthermore, few behavioral studies have assessed treatment integrity even though it is one of the most important components in the scientific process of studying behavior change (Gresham, 1989) and has been demonstrated to be correlated with more positive outcomes (Noell et al., 2005).

 

In practice, there has also been limited attention paid to treatment integrity by educators. Two seminal studies by Flugum and Reschly (1994) and Telzrow, McNamara, and Hollinger (2000) indicated low rates of implementation for various components of the problem-solving process. Flugum and Reschly (1994) reported that, of six quality indices for the problem-solving process, five were implemented with low rates of fidelity (behavioral definition, direct measure/baseline, step-by-step plan, graphing results, and comparing results to baseline data). Interestingly, three quarters of the respondents in this study reported that the intervention was implemented as planned, despite over half of the respondents indicating that fewer than half of the quality indices were used, which suggests that the quality with which interventions are implemented varies greatly across practitioners. Telzrow et al. (2000) similarly reported that, overall, “evidence of treatment integrity was absent or vague” and “below desired standards” (p. 454). Gresham (1989, 2004) has noted that many failures in consultation and interventions probably can be attributed to intervention plans not being implemented with fidelity.

 

Some signs that a lack of intervention integrity is interfering with the effectiveness of the intervention, and therefore needs to be assessed, include the following: a lack of data regarding implementation, a lack of progress-monitoring data on outcomes, data indicating that the intervention is rarely implemented with integrity, an absence of training in correct implementation, or continued incorrect implementation despite the supports provided to teachers (Witt, VanDerHeyden, & Gilbertson, 2004).


What Affects Treatment Integrity?

 

When interventions are implemented in the field, rather than in a controlled empirical study, a variety of confounding factors may have an impact on the integrity with which they are implemented. The transition of an intervention from the lab to the classroom may be accompanied by a decrease in intervention effectiveness because of the decreases in treatment integrity associated with this transition (Hulleman & Cordray, 2009). Variables that affect treatment integrity in the classroom include the characteristics of the child, because teachers are more likely to be responsive to students who are more skilled (Detrich, 1999); the resources required for the intervention, because implementation will not occur if resources put a strain on the classroom (Detrich, 1999; Gresham, 1989; Gresham et al., 2000); the similarity of the intervention to current classroom practices (Detrich, 1999); the complexity of the treatments (Gresham, 1989; Gresham et al., 2000); the time required to implement interventions (Gresham, 1989; Gresham et al., 2000); the number of staff required to implement interventions (Gresham, 1989); the motivation of the staff to implement interventions (Gresham, 1989); and the perceived and actual effectiveness of the interventions (Gresham, 1989; Gresham et al., 2000).

 

Additionally, many decreases in treatment integrity in the classroom are related to educator behaviors, attitudes, and perceptions rather than student factors (Biggs, Vernberg, Twemlow, Fonagy, & Dill, 2008; Hulleman & Cordray, 2009; Ransford, Greenberg, Domitrovich, Small, & Jacobson, 2009). Ransford et al. (2009) examined the impact of teacher burnout and efficacy on the dosage and quality of intervention delivery. Their results indicated that burnout was negatively related to dosage (i.e., higher burnout was associated with lower delivery of supplemental components) and that teacher efficacy was positively related to dosage (i.e., higher efficacy was associated with an increased likelihood of delivering supplemental components of an intervention). Additionally, administrative support was related to implementation quality but not to dosage, and teachers’ positive perceptions of training and coaching in intervention delivery were associated with improved implementation quality. Overall, teachers with high burnout and low curriculum supports showed the lowest levels of intervention quality and dosage.

 

How Is Treatment Integrity Measured?

 

By assessing treatment integrity, researchers and educators can have greater confidence in the results of interventions (Bellg et al., 2004). It cannot be assumed that simply talking about changes in behavior will lead to those changes occurring, so treatment integrity checks and intervention monitoring are important components of consultation and intervention implementation (Telzrow & Beebe, 2002). Quantitative methods can be used to assess the integrity of interventions, and decisions can then be made about the extent to which the results are due to the particular intervention.

 

A number of different tools have been utilized throughout the literature to assess and ensure treatment fidelity. The most commonly used fidelity devices can be divided into two broad categories: direct measures and indirect measures (Fuchs, Fuchs, Yazdian, & Powell, 2002; Gable et al., 2001; Gresham et al., 2000; Lane et al., 2004; Sheridan, Swanger-Gagne, Welch, Kwon, & Garbacz, 2009; Telzrow & Beebe, 2002).

 

Direct Measures

 

Direct measures include systematic observation of behavior in the classroom, videotaping, audiotaping, and using computer software (Gable et al., 2001; Gresham et al., 2000; Lane et al., 2004). In direct observations of behavior, the observer codes the treatment integrity (Gresham et al., 2000). Four steps are involved when developing a direct observation system (Gresham, 1989; Gresham et al., 2000; Lane et al., 2004):

 

    1. creating a detailed list or task analysis of the intervention,
    2. defining the components of the treatment in observational terms,
    3. rating the occurrence and nonoccurrence of each treatment component to calculate a percentage of treatment integrity,
    4. graphing the integrity and outcome data over time.

 

Teachers can also audiotape their lesson and self-assess their interaction with the students (Gable et al., 2001).

 

It is recommended that the treatment’s components be defined in specific, behavioral terms so that each component of an intervention can be adequately assessed (Gresham, 1989). Components of interventions can be defined in several ways—global, intermediate, or molecular. Global definitions state general principles of instruction or behavior change. Intermediate definitions outline the major steps of the intervention program. Molecular definitions provide a detailed task analysis of each event in an intervention (Gresham et al., 2000). Implementation of the components can then be summarized as the number and percentage of components delivered accurately for each treatment session and across days. Observers can likely obtain a reasonable idea of the treatment integrity of an intervention by conducting three to five observations that are 20–30 minutes in duration (Gresham, 1989).
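
To make the arithmetic concrete, the following brief sketch (ours, not drawn from the sources cited above) shows how occurrence/nonoccurrence ratings from a task-analyzed checklist might be summarized as a percentage of treatment integrity for each observed session and across days. The component names and ratings are hypothetical.

    # Hypothetical observation record: each intervention component is rated as
    # occurring (1) or not occurring (0) during an observed session.
    sessions = {
        "Monday": {"states objective": 1, "models skill": 1, "guided practice": 0,
                   "independent practice": 1, "provides feedback": 1},
        "Wednesday": {"states objective": 1, "models skill": 0, "guided practice": 0,
                      "independent practice": 1, "provides feedback": 1},
        "Friday": {"states objective": 1, "models skill": 1, "guided practice": 1,
                   "independent practice": 1, "provides feedback": 0},
    }

    def session_integrity(ratings):
        """Percentage of treatment components implemented in one observed session."""
        return 100 * sum(ratings.values()) / len(ratings)

    # Per-session percentages (step 3 of the observation system described above).
    for day, ratings in sessions.items():
        print(f"{day}: {session_integrity(ratings):.0f}% of components implemented")

    # Integrity across days: the mean of the per-session percentages.
    overall = sum(session_integrity(r) for r in sessions.values()) / len(sessions)
    print(f"Across observed sessions: {overall:.0f}%")

These per-session percentages are the values that would be graphed over time alongside student outcome data, as described in step 4 of the observation system above.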

 

Sheridan et al. (2009) also recommended that, when determining fidelity criteria, each criterion be selected based on its demonstrated treatment utility. Although high fidelity may be observed when objective criteria are selected, little information about an intervention’s efficacy can be gathered if the criteria do not contribute to the desired outcome. For this reason, the steps in the intervention that are highly predictive of the desired outcome should be included in the fidelity criteria.

 

The drawback to direct observation as a measure of treatment integrity is observer reactivity; that is, the treatment may be implemented with high integrity only when observers are present (Gresham et al., 2000). To address the reactive effect that direct observation may have on interventionists, consultants can conduct observations at random times and complete spot checks of intervention implementation.

 

Indirect Measures

 

In addition to direct observation methods, indirect methods can be used to supplement the direct measures (Gresham, 1989; Gresham et al., 2000). Indirect measures include self-reports, rating scales, interviews, checklists, Likert scales, lesson plan reviews, and permanent products (Gable et al., 2001; Gresham et al., 2000; Kovaleski et al., 2006; Lane et al., 2004; Telzrow & Beebe, 2002). Indirect assessment measures are less time consuming, more efficient, less likely to be influenced by social desirability, and less reactive than other integrity assessment methods, and they have the potential to be more accurate (Gresham, 1989; Gresham et al., 2000).

 

Self-reports require the person administering the treatment to rate the extent to which he or she implemented each treatment component (Gresham et al., 2000). By having teachers rate their own level of implementation, it may cue them to implement forgotten steps in subsequent sessions, leading to better integrity. However, self-reports may also lead to an overestimate of the level of correct implementation (Witt et al., 2004).

 

Permanent products related to each treatment component can also be examined to determine treatment integrity indirectly (Gresham et al., 2000). Permanent products have tangible features that can be coded for reliability. For instance, if part of the treatment was for the teacher to ask the student to write a sentence using a vocabulary word, a permanent product would be the sentence written in the student’s journal.

 

Available Treatment Integrity Protocols

 

Efforts to create and collect task-analyzed, step-by-step protocols for use in the assessment of treatment integrity have to date been a localized activity. There are currently no publicly available repositories of treatment integrity protocols for either generic instructional or management tactics (e.g., the impress method) or commercially produced intervention packages (e.g., Read Naturally). The first author has assembled a number of available treatment integrity protocols on this website. The protocols have been developed by a variety of sources (publishers, graduate students, practitioners), and no claim is made for their sufficiency or thoroughness. They are posted as an aid to practitioners and researchers and should generally be considered experimental products that require research as to their psychometric characteristics.

 

Selecting Treatment Integrity Assessments

 

The type of assessment measure used to evaluate treatment fidelity should match the desired outcome. For example, if the desired measurement outcome is teacher attitude toward the intervention, then self-report measures may be utilized, while behavioral observations may be used to assess teacher adherence to specific components of an intervention (Bellg et al., 2004). In addition to the technical aspects of treatment integrity, relational characteristics between client and practitioner can be examined as well (McLeod, Southam-Gerow, & Weisz, 2009). As McLeod and colleagues state, when assessing treatment integrity, observational methods provide objectivity, but they can be resource-intensive. As such, the importance of developing self-report measures cannot be overstated. However, because of the limitations with self-report measures, observational measures should be developed and used to validate self-reports. Sheridan et al. (2009) explored the psychometric qualities of various fidelity measures used in consultation (self-reports, permanent products, and direct observation) and found promising results for each type of measure, especially permanent products. However, due to limitations in each approach, these authors recommended a multi-method approach to measuring fidelity. To date, however, there has been no direct published guidance regarding how to best combine multiple data sources when measuring fidelity.

 

Who Conducts Treatment Integrity Assessments?

 

An interesting and potentially contentious issue regarding treatment integrity assessments is who should be tasked with conducting them. A number of possibilities come to mind in regard to both direct and indirect measures of treatment integrity. First, as indicated above, teachers or interventionists could conduct self-assessments. This practice likely has good potential for enhancing teachers’ self-reflection on their work but is probably not sufficient for a full and objective determination of actual treatment fidelity.

 

The second option is for colleague teachers to conduct treatment integrity checks. For example, teachers could spend 20 minutes per week during their planning period observing another teacher in the classroom (Gable et al., 2001). Similarly, faculty who are providing supplemental interventions in reading can do fidelity checks, perhaps on an interbuilding level. The advantage of this option is that it builds a sense of collegiality and might enhance teachers’ understanding and use of the instructional program or intervention. Another advantage of this arrangement is that it allows for the monitoring of treatment integrity in a nonevaluative manner.

 

A third option is for teachers to be observed by specialists who have special expertise in the particular instructional program or intervention. In recent years, a number of schools have hired reading and/or math coaches who are charged with consulting with teachers to improve overall instructional practices. Similarly, school psychologists have a long tradition of consulting with teachers on both behavioral and academic interventions. These personnel would seem to be in a very advantageous role to conduct treatment integrity checks because they would likely have extensive expertise in the instructional program or interventions and would also be seen more as supportive peers than as evaluators. However, to the extent that treatment integrity assessment might be perceived by teachers as evaluation, these practitioners would likely need to take steps to make clear that results of integrity checks do not make their way into the teacher-evaluation process.

 

Finally, building administrators (e.g., principals, assistant principals) are obvious choices for conducting treatment integrity checks. Principals have responsibility for ensuring that effective instructional practices are carried out in their schools and typically are required to conduct classroom observations of both teachers and specialists. Having treatment integrity protocols (e.g., checklists, as described above) would likely give added structure to these observations and enable the principal to clearly communicate expectations regarding integrity issues to the teacher. An issue here, however, is that these treatment integrity observations might have a more direct connection to teacher evaluation.

 

How Much Is Needed? Impact of Deviations From Protocol

 

As Schulte et al. (2009) reported, although the importance of implementing interventions with fidelity is well understood, in practice there is no empirical guidance regarding the level of fidelity of implementation that is needed to realize a meaningful gain in student performance. For example, it may not be clear whether a particular intervention is effective only if implemented with 100% fidelity or whether a lesser level (e.g., 90%, 80%) would be adequately effective. Therefore, educators do not know to what extent deviations from a prescribed intervention plan can occur while still obtaining the expected results (Gresham et al., 2000). There is currently no standardized generic treatment integrity instrument with which to collect fidelity of implementation data. If such an instrument existed, practitioners could establish cut-points that define the extent of intervention fidelity required in an RtI model (Schulte et al., 2009).

 

In general, higher levels of treatment integrity typically result in better outcomes (Hagermoser Sanetti & Kratochwill, 2009). One study examined the impact of three different levels of treatment integrity on second-grade students’ performance during addition and subtraction instruction (Noell, Gresham, & Gansle, 2002). The treatments were implemented by computer so that instruction was consistently delivered at the intended levels of fidelity. The students who received the intervention with full integrity displayed higher outcomes than those who received the intervention at one-third or two-thirds implementation.

 

The National Research Center on Learning Disabilities (2006) reported on how the adequacy of schools’ implementation of the curriculum affects student outcomes. The report detailed a study that ranked schools on six criteria: presence of an evidence-based core curriculum, fidelity of implementation of the core curriculum at 86% or better, presence of small-group reading interventions for at-risk students, fidelity of implementation of those interventions at 86% or better, data-based decision making used for interventions, and an instructional leader who manages the reading interventions. Students in schools with higher ratings on these criteria performed better on various measures than students in schools with lower ratings. These results indicate that the level of treatment integrity of curricula and interventions is an important component related to student outcomes, although they do not provide conclusive evidence regarding the specific level of fidelity needed for instruction or interventions per se.

 

McCurdy and Watson (1999) reported that in three single-case studies, interventions may produce positive behavior changes when integrity is 60%–65%, indicating that perhaps 100% implementation integrity is not necessary. Nonetheless, if an intervention is not implemented with 100% integrity, any positive outcomes cannot be attributed with confidence to the intervention because they may have been caused by added or omitted factors. Similarly, intervention failure cannot be attributed to the intervention itself because it was not implemented with full integrity. This can lead to the discarding of interventions that might have been effective if implemented properly (Bellg et al., 2004; Gresham, 1989; Gresham et al., 2000). The cause of outcomes is therefore unknown, and sound decisions cannot be made regarding the interventions or treatments. In addition, making decisions about the effectiveness or ineffectiveness of interventions without data to support appropriate implementation can lead to avoidable increases in time and costs (Bellg et al., 2004).

 

However, lower levels of treatment integrity do not necessarily always result in lower outcomes, for a variety of reasons (Hagermoser Sanetti & Kratochwill, 2009). Clinicians may use their judgment to modify an intervention in order to better meet the needs of a client. Very strict adherence to the treatment protocol may limit useful adaptations that would be more effective or meaningful for the individual, particularly for more complex interventions (Schulte et al., 2009). Modifying interventions can be an acceptable practice because such adaptations may introduce a more successful strategy, but simple interventionist drift that results in modifications that do not take into account the client’s needs is not justifiable (Hagermoser Sanetti & Kratochwill, 2009).

 

Another possibility regarding the efficacy level of various interventions is that some treatment components may be more critical than others in producing meaningful student outcomes. For example, in a treatment integrity checklist consisting of 10 intervention actions, perhaps six of them are critical to producing an effect and the other four are not essential. Consequently, a simple overall percentage of components implemented might obscure what is actually needed. In this example, if there are indeed six critical items, a 60% implementation percentage that hit all the critical items might produce a meaningful gain, while a 60% implementation percentage that hit only some of the critical items might not produce the desired effects. This problematic situation is exacerbated by the likelihood that different interventions have different numbers of critical features. Currently, there is little to no research on either “dosage level” or “necessary ingredients” (to use a medical metaphor) in regard to the many and varied interventions that are used in schools, including those that have research evidence of effectiveness. A generation of research studies is needed to probe this labyrinth.
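
To illustrate the point with invented, hypothetical checklist items (and an assumed set of critical components that, in reality, would have to be established empirically), the following sketch shows how two observations with the same overall implementation percentage can differ sharply in their coverage of the critical items.

    # Hypothetical 10-item checklist; which items count as "critical" is assumed
    # here for illustration only.
    CRITICAL = {"item1", "item2", "item3", "item4", "item5", "item6"}
    OPTIONAL = {"item7", "item8", "item9", "item10"}
    ALL_ITEMS = CRITICAL | OPTIONAL

    def summarize(implemented):
        """Return (overall %, % of critical items) for one observed session."""
        overall = 100 * len(implemented & ALL_ITEMS) / len(ALL_ITEMS)
        critical = 100 * len(implemented & CRITICAL) / len(CRITICAL)
        return overall, critical

    # Observation A: all six critical items delivered, none of the optional ones.
    obs_a = set(CRITICAL)
    # Observation B: four critical items plus two optional items delivered.
    obs_b = {"item1", "item2", "item3", "item4", "item7", "item8"}

    for name, obs in [("A", obs_a), ("B", obs_b)]:
        overall, critical = summarize(obs)
        print(f"Observation {name}: {overall:.0f}% overall, {critical:.0f}% of critical items")
    # Both observations score 60% overall, but A covers 100% of the critical
    # items, whereas B covers only about 67% of them.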

 

Improving Treatment Integrity

 

One of the primary ways to ensure that interventions are implemented with fidelity is to collect data. Witt et al. (2004) recommended that if data about treatment integrity are not being collected, at least one assessment method discussed previously should be initiated. It is only through the collection of data that practitioners can be sure that interventions are delivered as intended. If data are being collected and they suggest that the intervention is not being implemented with integrity, training and supports should be provided to teachers using methods such as scripted instruction, performance feedback, and follow-up support. If the teacher has been receiving these supports and the intervention continues to be implemented without fidelity, then consultants can provide teachers with weekly updates of treatment integrity data and student outcome data in a graphic format to assist with intervention planning.
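
The kind of graphic summary described above could be produced with any charting tool. The brief sketch below, with invented weekly values, simply pairs treatment integrity percentages with a student outcome measure on a single graph so that both trends can be reviewed together at a consultation meeting.

    # Hypothetical weekly data: treatment integrity (% of steps implemented) and a
    # student outcome measure (e.g., words read correctly per minute). All values
    # are invented for illustration.
    import matplotlib.pyplot as plt

    weeks = [1, 2, 3, 4, 5, 6]
    integrity_pct = [55, 70, 85, 90, 88, 92]
    outcome_wcpm = [38, 41, 47, 52, 58, 63]

    fig, ax1 = plt.subplots()
    ax1.plot(weeks, integrity_pct, marker="o", color="tab:blue")
    ax1.set_xlabel("Week")
    ax1.set_ylabel("Treatment integrity (% of steps implemented)", color="tab:blue")
    ax1.set_ylim(0, 100)

    ax2 = ax1.twinx()  # second y-axis for the student outcome measure
    ax2.plot(weeks, outcome_wcpm, marker="s", color="tab:orange")
    ax2.set_ylabel("Words read correctly per minute", color="tab:orange")

    plt.title("Weekly treatment integrity and student outcome data")
    fig.tight_layout()
    plt.show()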

 

As indicated earlier, performance feedback is an effective way to improve and maintain treatment integrity. Performance feedback can be provided during a consultant–consultee interaction (Noell, Duhon, Gatti, & Connell, 2002) or in the public forum of a school team meeting (Duhon, Mesmer, Gregerson, & Witt, 2009). Duhon et al. (2009) explained that if there are problems with treatment integrity or if there are concerns about possible implementation fall-off, school teams can discuss ways to improve intervention integrity in team meetings. Data-analysis team meetings (Kovaleski & Pedersen, 2008) or professional learning communities (DuFour & Eaker, 1998) would appear to be ideal venues for these types of discussions. It is important in these venues to ensure that the analysis of treatment integrity be carried out as a problem-solving activity rather than as a procedure for evaluating a particular teacher or interventionist. It seems reasonable to conclude that the provision of performance feedback should lead to increased and stable levels of implementation, which in turn should result in improved intervention outcomes (Noell, Duhon, et al., 2002; Noell et al., 2005).

 

Conclusion

 

In this paper, we have argued that an MTSS that uses students’ RtI to determine progress and to inform instructional decisions requires that the instruction and interventions that are delivered be implemented with high degrees of fidelity to achieve meaningful student outcomes. A number of direct and indirect measures of treatment integrity have been described in the literature. Each technique has both positive aspects and limitations. Left unresolved at this point are a number of empirical questions that need to be addressed by researchers, most especially those that pertain to ingredients (which features of an intervention are essential and which are optional) and dosage (what percentage of essential features are required to produce meaningful student outcomes). What is particularly daunting about this needed line of research is that the answers to these questions will likely differ for different interventions or intervention packages. Consequently, following the recommendations of other writers in this field, we take the position that in applied settings a combination of methods should be used to most accurately determine the actual level of treatment integrity.

 

REFERENCES

 

Assistance to States for the Education of Children With Disabilities and Preschool Grants for Children With Disabilities; Final Rule, 71 Fed. Reg. 46539-46845 (August 14, 2006).

 

Batsche, G. M. (2006, March). Problem-solving and response to intervention: Implications for state and district policies and practices. Workshop presented at the spring conference of the Association of School Psychologists of Pennsylvania, Harrisburg, PA.

 

Bellg, A. J., Resnick, B., Minicucci, D. S., Ogedegbe, G., Ernst, D., Borrelli, B., … Czajkowski, S. (2004). Enhancing treatment fidelity in health behavior change studies: Best practices and recommendations from the NIH behavior change consortium. Health Psychology, 23, 443–451.

 

Biggs, B. K., Vernberg, E. M., Twemlow, S. W., Fonagy, P., & Dill, E. J. (2008). Teacher adherence and its relation to teacher attitudes and student outcomes in an elementary school-based violence prevention program. School Psychology Review, 37, 533–549.

 

Charters, W. W., & Jones, J. E. (1974). On neglect of the independent variable in program evaluation. Eugene: University of Oregon, Project MITT.

 

Detrich, R. (1999). Increasing treatment fidelity by matching interventions to contextual variables within the educational setting. School Psychology Review, 28, 608–620.

 

DuFour, R., & Eaker, R. E. (1998). Professional learning communities at work: Best practices for enhancing student achievement. Bloomington, IN: National Education Service.

 

Duhon, G. J., Mesmer, E. M., Gregerson, L., & Witt, J. C. (2009). Effects of public feedback during RTI team meetings on teacher implementation integrity and student academic performance. Journal of School Psychology, 47, 19–37.

 

Flugum, K. R., & Reschly, D. J. (1994). Prereferral interventions: Quality indices and outcomes. Journal of School Psychology, 32, 1–14.

 

Fuchs, L. S., Fuchs, D., Yazdian, L., & Powell, S. R. (2002). Enhancing first-grade children’s mathematical development with peer-assisted learning strategies. School Psychology Review, 31, 569–583.

 

Gable, R. A., Hendrickson, J. M., & VanAcker, R. (2001). Maintaining the integrity of FBA-based interventions in schools. Education and Treatment of Children, 24, 248–260.

 

Gresham, F. M. (1989). Assessment of treatment integrity in school consultation and prereferral intervention. School Psychology Review, 18, 37–50.

 

Gresham, F. M. (2004). Current status and future directions of school-based behavioral interventions. School Psychology Review, 33, 326–343.

 

Gresham, F., MacMillan, D. L., Beebe-Frankenberger, M. B., & Bocian, K. M. (2000). Treatment integrity in learning disabilities intervention research: Do we really know how treatments are implemented? Learning Disabilities Research and Practice, 15, 198–205.

 

Hagermoser Sanetti, L. M., & Kratochwill, T. R. (2009). Toward developing a science of treatment integrity: Introduction to the special series. School Psychology Review, 38, 445–459.

 

Hulleman, C. S., & Cordray, D. S. (2009). Moving from the lab to the field: The role of fidelity and achieved relative intervention strength. Journal of Research on Educational Effectiveness, 2, 88–110.

 

Kovaleski, J. F., & Pedersen, J. A. (2008). Best practices in data analysis teaming. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology, V (Vol. 2, pp. 115–129). Bethesda, MD: National Association of School Psychologists.

 

Kovaleski, J. F., Shapiro, E., Tuleya-Payne, H., Hall, R., Smith, A., & Lowery, P. (2006, October). The school psychologist as shaper of data in a response to intervention decision making model. Paper presented at the annual conference of the Association of School Psychologists of Pennsylvania, State College, PA.

 

Kovaleski, J. F., VanDerHeyden, A. M., & Shapiro, E. S. (2013). The RTI approach to evaluating learning disabilities. New York, NY: Guilford.

 

Lane, K. L., Bocian, K. M., MacMillan, D. L., & Gresham, F. M. (2004). Treatment integrity: An essential—but often forgotten—component of school-based interventions. Preventing School Failure, 48(3), 36–43.

 

McCurdy, M., & Watson, T. S. (1999, February). Techniques to strengthen the practice of school-based consultation using direct behavioral consultation. Presentation at the NASP Annual Convention, Las Vegas, NV.

 

McLeod, B. D., Southam-Gerow, M. A., & Weisz, J. R. (2009). Conceptual and methodological issues in treatment integrity measurement. School Psychology Review, 38, 541–546.

 

National Research Center on Learning Disabilities. (2006). Executive summary of the NRCLD topical forum applying responsiveness to intervention to specific learning disability determination decisions. Lawrence, KS: Author.

 

Noell, G. H., Duhon, G. J., Gatti, S. L., & Connell, J. E. (2002). Consultation, follow-up, and implementation of behavior management interventions in general education. School Psychology Review, 31(2), 217–234.

 

Noell, G. H., & Gansle, K. A. (2006). Assuring the form has substance: Treatment plan implementation as the foundation of assessing response to intervention. Assessment for Effective Intervention, 32, 32–39.

 

Noell, G. H., Gresham, F. M., & Gansle, K. A. (2002). Does treatment integrity matter? A preliminary investigation of instructional implementation and mathematics performance. Journal of Behavioral Education, 11, 51–67.

 

Noell, G. H., & Witt, J. C. (1996). A critical re-evaluation of five fundamental assumptions underlying behavioral consultation. School Psychology Quarterly, 11, 189–203.

 

Noell, G. H., Witt, J. C., Slider, N. J., Connell, J. E., Gatti, S. L., Williams, K. L., … Resetar, J. L. (2005). Treatment implementation following behavioral consultation in schools: A comparison of three follow-up strategies. School Psychology Review, 34, 87–106.

 

Ransford, C. R., Greenberg, M. T., Domitrovich, C. E., Small, M., & Jacobson, L. (2009). The role of teachers’ psychological experiences and perceptions of curriculum supports on the implementation of a social and emotional learning curriculum. School Psychology Review, 38, 510–532.

 

Schulte, A. C., Easton, J. E., & Parker, J. (2009). Advances in treatment integrity research: Multidisciplinary perspectives on the conceptualization, measurement, and enhancement of treatment integrity. School Psychology Review, 38, 460–475.

 

Sheridan, S. M., Swanger-Gagne, M., Welch, G. W., Kwon, K., & Garbacz, S. A. (2009). Fidelity measurement in consultation: Psychometric issues and preliminary examination. School Psychology Review, 38, 476–495.

 

Telzrow, C. F., & Beebe, J. J. (2002). Best practices in facilitating intervention adherence and integrity. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology (4th ed., pp. 503–516). Bethesda, MD: National Association of School Psychologists.

 

Telzrow, C. F., McNamara, K., & Hollinger, C. L. (2000). Fidelity of problem-solving implementation and relationship to student performance. School Psychology Review, 29, 443–461.

 

Walker, H. M. (2004). Commentary: Use of evidence-based interventions in schools: Where we’ve been, where we are, and where we need to go. School Psychology Review, 33, 398–407.

 

Witt, J. C., VanDerHeyden, A. M., & Gilbertson, D. (2004). Troubleshooting behavioral interventions: A systematic process for finding and eliminating problems. School Psychology Review, 33, 363–383.

 

Zirkel, P. A., & Thomas, L. B. (2010). State laws and guidelines for implementing RTI. Teaching Exceptional Children, 43, 60–73.

 

