Integrating Academic and Behavior Supports Within an RtI Framework, Part 3: Secondary Supports

Secondary supports (alternatively, targeted or Tier II supports) are provided to students who do not respond to universal supports within academic and behavior Response to Intervention (RtI) models. Secondary supports are the next level of support in an RtI framework—students who do not respond to universal supports are provided one or more secondary interventions, and if students are not successful with this level of support, it signals the need for more intensive, individualized treatments (Hawken, Adolphson, MacLeod, & Schumann, 2009). The theoretical proportion of students in a school who are provided secondary supports is approximately 15% (H. M. Walker et al., 1996), though the actual proportion depends on the quality of the universal supports provided and the effectiveness and fidelity of the secondary interventions used (McIntosh, Reinke, & Herman, 2009). This logic holds true for both academic and behavior systems (Sugai, Horner, & Gresham, 2002).

Critical Features of Secondary Supports

Secondary support systems in academics and behavior share a surprising number of critical features. First, secondary supports are often overseen by a team charged with pre-referral consultation, screening, assessment, and progress monitoring, in addition to actual intervention (Lewis-Palmer, Bounds, & Sugai, 2004). Because the goal of secondary supports is to serve a large number of students with similar needs efficiently, it is most practical at this level to use ongoing, generic interventions—programs that are applied to many students in the same way, with little to no individualization (i.e., a standard protocol approach; Vellutino et al., 1996). Rather than designing a new program for each student, assessment focuses on selecting the best choice(s) from among a few ongoing secondary interventions. If the student does not respond, teams may elect to individualize a secondary program, which would be considered tertiary supports (Hawken et al., 2009).

Strategies used in secondary interventions, whether academic or behavioral in nature, usually include (a) a focus on additional instruction and practice and (b) increased structure or explicitness. Additional instruction may include reteaching of critical skills ("double-dosing" an academic or social behavior lesson) or teaching lessons at the student's instructional level, with ample opportunities for practice. Examples include repeated reading (Hasbrouck, Ihnot, & Rogers, 1999), math fluency timings (Rathvon, 2003), and teaching or reteaching school-wide expectations or social skills lessons (Gresham, 2002; Langland, Lewis-Palmer, & Sugai, 1998). Increasing the structure or explicitness provides students with high-probability opportunities for success (Fuchs, 2009).

Either the curriculum and instruction or the physical environment is changed to place students in situations where correct responding is more likely. In academics, students may be instructed in smaller groups, using a carefully sequenced curriculum with instruction in conspicuous strategies (Vaughn, Linan-Thompson, & Hickman, 2003). In the domain of behavior, secondary interventions add additional structure to the school day or challenging routines, often through increased adult or peer role model contact and/or set routines, such as a check-in/check-out feedback and mentoring intervention (Crone, Horner, & Hawken, 2003).

Critical Questions Regarding Integrating Secondary Academic and Behavior Supports

Though secondary systems are commonly seen simply as standalone programs, it is more helpful to look at them through the same systems, data, and practices framework (Sugai & Horner, 2002) used previously. A true system for secondary supports includes many structures and recurring tasks regarding who receives supports, what type of supports, and for how long. At this level, academic and behavior RtI systems may be separate, but particular situations require consideration of integration. Through these lenses, it is important to consider four areas when integrating systems: screening, assessment, intervention, and progress monitoring.

Universal Screening for Secondary Supports

Not all students will respond to universal systems, and a critical goal of school staff is to identify which students require more support than universal prevention to be successful, both academically and behaviorally. Though teacher referral is one option, a screening process can identify students who might not be identified by the classroom teacher (e.g., because of teachers' varying expectations for academic progress and tolerances for behavior; Severson, Walker, Hope-Doolittle, Kratochwill, & Gresham, 2007). Effective screening systems consider every student in the school at regular intervals (e.g., after each school-wide benchmark period).

Screening Measures

Systems use formative data that are valid and reliable for screening purposes, repeatable, sensitive to growth, time-efficient, and indicators of critical developmental skills (McIntosh, Reinke, & Herman, 2009). For academics, the most common measures are curriculum-based measurement probes in reading (Fuchs & Fuchs, 1992; Good & Kaminski, 2002), math (Clarke & Shinn, 2004; VanDerHeyden & Burns, 2008), spelling (Shapiro, 2004), and written expression (Espin et al., 2008; McMaster & Campbell, 2008). These measures are well-known and have adequate to strong psychometric properties. For behavior, common screening measures include office discipline referrals (ODRs; Sugai, Sprague, Horner, & Walker, 2000) and multiple-gate screening measures (Severson et al., 2007; B. Walker, Cheney, Stage, & Blum, 2005; H. M. Walker & Severson, 1992). Measures for screening in behavior are less fully developed, partially because of the expense of widespread observation and the limitations of some indirect measures (Briesch & Volpe, 2007). School teams commonly use ODRs as screeners, with decision rules such as two or more ODRs in one year as indicative of a lack of response to universal behavior systems (Horner, Sugai, Todd, & Lewis-Palmer, 2005). Though this cutoff has been validated as a predictor of disruptive problem behavior (B. Walker et al., 2005), recent research has indicated that ODR cut points are not valid indicators of "internalizing" problem behavior, such as anxiety and depression (McIntosh, Campbell, Carter, & Zumbo, 2009). As such, school teams using ODRs as their sole screening measures for behavior are advised to add measures that will indicate risk for anxiety and depression (H. M. Walker & Severson, 1994).
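The ODR decision rule described above can be sketched as a simple screening function. This is a minimal illustration of the logic only, not a tool drawn from the cited research; the student IDs and referral counts are hypothetical.

```python
# Screening sketch: flag students with two or more office discipline
# referrals (ODRs) in a school year as non-responders to universal
# behavior supports (Horner, Sugai, Todd, & Lewis-Palmer, 2005).
# Student IDs and counts below are hypothetical.

ODR_CUT_POINT = 2  # two or more ODRs in one year


def flag_for_secondary_support(odr_counts):
    """Return IDs of students at or above the ODR cut point."""
    return [student for student, odrs in odr_counts.items()
            if odrs >= ODR_CUT_POINT]


odrs_this_year = {"S001": 0, "S002": 3, "S003": 1, "S004": 2}
flagged = flag_for_secondary_support(odrs_this_year)
print(flagged)  # S002 and S004 meet the cut point
```

Note that, as the text cautions, a rule like this captures externalizing behavior only; a school would still need a separate measure to screen for internalizing risk such as anxiety and depression.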

Integrated Academic and Behavior Screening Systems

There are considerable benefits to combining the groups charged with screening for academic and behavior challenges into one team. First, the processes of screening for both are remarkably similar. Though data sources are different, the decision-making steps are exactly the same. Second, considering both sets of data at the same table provides advantages beyond examining them separately. For example, when a student is flagged in both areas at the same time, it may indicate a more significant (perhaps tertiary) need that may have otherwise been missed (Reinke, Herman, Petros, & Ialongo, 2008). In addition, problems in one area may serve as an effective screener for problems in another. Given the low rates of ODRs in kindergarten and prediction of behavior problems from kindergarten reading deficits (McIntosh, Horner, Chard, Boland, & Good, 2006; McIntosh, Sadler, & Brown, 2009), intensive reading needs can be used as a screener for behavior, picking up behavior needs more quickly. Conversely, when students frequently receive ODRs or suspensions, their classroom instruction is interrupted, signaling the need to monitor academic skills more closely. Finally, using both types of data can help predict problems that are not solely academic or behavioral in nature, such as dropout. Effective dropout screening involves assessing both data sources simultaneously (e.g., ODRs, GPA, and credits toward graduation; McIntosh, Flannery, Sugai, Braun, & Cochrane, 2008). Hence, an integrated screening team can see better outcomes with less time spent.
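The integrated-screening logic above—comparing who is flagged academically, behaviorally, or in both areas—amounts to simple set operations on the two screening lists. The sketch below is illustrative only; the student IDs and benchmarks are hypothetical.

```python
# Integrated screening sketch: students flagged by both academic and
# behavior screeners may indicate a more significant (perhaps tertiary)
# level of need. Student IDs are hypothetical.

academic_risk = {"S002", "S004", "S007"}  # e.g., below reading benchmark
behavior_risk = {"S004", "S005", "S007"}  # e.g., two or more ODRs

both_areas = academic_risk & behavior_risk     # possible tertiary need
academic_only = academic_risk - behavior_risk  # secondary academic support
behavior_only = behavior_risk - academic_risk  # secondary behavior support

print(sorted(both_areas))  # ['S004', 'S007']
```

A single integrated team reviewing both lists side by side can apply exactly this comparison at each benchmark period, which is why the screening processes are so readily combined.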

Assessment for Intervention Selection

The process of screening identifies which students are not responding and are in need of additional supports, but additional information is often required to select the appropriate intervention, described as diagnostic testing (Salvia, Ysseldyke, & Bolt, 2006). In this case, diagnostic testing refers not to the tools of traditional special education eligibility testing (e.g., cognitive assessments), but to assessment for problem analysis, or to understand why the problem is happening, with a focus on variables that can be changed over those that cannot (Christ, 2008; Tilly, 2008).

In some cases, reanalysis of screening data may provide much of this information. For example, reading benchmark data may indicate whether intervention should focus primarily on skill acquisition (data indicating low accuracy) or fluency (data indicating accurate but slow reading rates; Daly, Chafouleas, & Skinner, 2005). Use of ODR data may indicate whether the student has difficulty interacting with peers or teachers and which school settings should be targeted for additional supports (Newton, Horner, Algozzine, Todd, & Algozzine, 2009). In many cases, however, additional data (e.g., brief functional behavior assessment, brief experimental analysis, can’t do/won’t do assessment; VanDerHeyden & Witt, 2008) will help improve intervention selection.

Functional Behavior Assessment as a Link Between Academic and Behavior Assessment

One method that is critical in understanding integrated academic and behavior supports is functional behavior assessment (FBA). The FBA is a process conducted to understand problem behavior within an environmental context, particularly the events that evoke and maintain problem behavior (O'Neill et al., 1997; Sugai, Lewis-Palmer, & Hagan-Burke, 1999). The final steps of an FBA are to select or design intervention strategies that will prevent problem behavior and teach skills that serve the same function as problem behavior, and then to monitor the plan's implementation and effectiveness (Sugai, Lewis-Palmer, & Hagan, 1998). This process can be considered an evidence-based practice for individuals with significant disabilities (Carr et al., 1999), and a growing body of research shows the effectiveness of FBA with general education populations as well (McIntosh, Brown, & Borgmeier, 2008). Moreover, FBA has been used to distinguish between students who are likely or not likely to respond to particular secondary-tier interventions (Carter & Horner, 2007; March & Horner, 2002; McIntosh, Campbell, Carter, & Dickey, 2009).

The FBA process plays a pivotal role in helping teams understand whether integrated academic and behavior supports are needed, or whether one or the other will suffice. If the function of problem behavior is to obtain or escape social interactions (e.g., teacher attention), there may be no academic component needed for an effective intervention (McIntosh, Horner, Chard, Dickey, & Braun, 2008). However, if the function of the problem behavior is to escape academic tasks, an academic intervention is often necessary to improve behavioral functioning (Roberts, Marshall, Nelson, & Albers, 2001). In these cases, an academic-only intervention may be more effective than a behavior-only intervention (Filter & Horner, 2009; Lee, Sugai, & Horner, 1999; Preciado, Horner, & Baker, 2009). As such, identifying the likely function of problem behavior is a necessary component of selecting appropriate secondary interventions.

When integrated, academic and behavior RtI teams may have enough information to complete an efficient brief FBA (Crone & Horner, 2003). For instance, a school team could combine existing data (e.g., ODRs, attendance, grades, existing screening data, performance on statewide assessments, credit hours) for team decision making. Students receiving ODRs outside of the classroom with a perceived motivation of adult attention but without academic challenges (e.g., failing grades) may be strong candidates for secondary-tier behavior interventions. Students receiving ODRs in the classroom with a motivation of escape may be in need of additional academic supports. Existing data sets with common student identifiers can be merged using functions such as VLOOKUP in Microsoft Excel to generate combined lists of student outcomes for sorting during problem identification (e.g., in need of behavior remediation, in need of academic remediation). These data can be used to assist in grouping students into intervention groups that address related can't do (e.g., escape-based) or won't do (e.g., attention- or escape-based) needs. Such data also could be combined with research-validated interview measures, such as the Functional Assessment Checklist: Teachers and Staff (FACTS; March et al., 2000; McIntosh, Borgmeier, et al., 2008). Tracking RtI with progress-monitoring tools will help determine whether the brief FBA was sufficient or a more detailed FBA is needed.
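The VLOOKUP-style merge described above can be sketched outside of Excel as a join on a common student identifier. This is an illustrative sketch only; all field names (e.g., `odr_count`, `failing_grades`, `motivation`) and values are hypothetical, and a real team would apply its own decision rules to the merged records.

```python
# Merge sketch: join existing data sets keyed by a common student
# identifier into one record per student, as a team might do with
# VLOOKUP across spreadsheet tabs. Data below are hypothetical.

odrs = {
    "S001": {"odr_count": 4, "odr_location": "classroom",
             "motivation": "escape"},
    "S002": {"odr_count": 3, "odr_location": "playground",
             "motivation": "adult attention"},
}
grades = {"S001": {"failing_grades": 2}, "S002": {"failing_grades": 0}}


def merge_records(*datasets):
    """Combine data sets on student ID, VLOOKUP-style."""
    merged = {}
    for data in datasets:
        for student_id, fields in data.items():
            merged.setdefault(student_id, {}).update(fields)
    return merged


students = merge_records(odrs, grades)

# Classroom ODRs with escape motivation plus failing grades suggest an
# academic component; out-of-class ODRs for adult attention without
# academic challenges suggest a behavior-only secondary intervention.
for sid, rec in students.items():
    needs_academic = (rec["motivation"] == "escape"
                      and rec["failing_grades"] > 0)
    print(sid, "academic remediation" if needs_academic
          else "behavior intervention")
```

The merged list can then be sorted by need category to form intervention groups, mirroring the grouping step the text describes.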

Implementing a Range of Secondary Interventions

Some level of assessment is necessary, because a well-functioning RtI system includes a variety of interventions for secondary supports. As described above, there are a few predictable challenges that students at risk for difficulty may face (e.g., academic skill acquisition, fluency, or generalization; low levels of positive interactions), and as a result, schools should have more than one generic intervention (McIntosh, Campbell, et al., 2009). However, schools may need to take care not to have so many in place that school personnel cannot implement each with integrity (Hawken et al., 2009). A sound process begins with identifying the secondary interventions already in place in a school and the student needs that each intervention addresses (Crone et al., 2003). Next steps may involve adding interventions to fill gaps or eliminating interventions on the basis of redundancy, equivocal results, or difficulty measuring their effects (Hawken et al., 2009).

Integrated Secondary Supports

When integrating academic and behavior RtI systems, it is always helpful to consider which systems and practices could be streamlined to improve the effectiveness and efficiency of supports. It is not helpful to integrate simply for integration's sake, but rather to identify where combining resources may improve supports. At this level of efficient intervention, it probably makes more sense to continue with separate interventions and only fully integrate when providing an intensive, individualized intervention when response to secondary supports is inadequate. Given the wealth of secondary academic (Fuchs, 2009; Joseph, 2008) and behavior (Hawken et al., 2009; Sugai, 2009) interventions available, students can be provided with separate interventions in each area with relative ease.

However, there are some secondary interventions that inherently provide moderate levels of academic and behavior supports simultaneously, a benefit for students who need supports primarily in one area but who could use some assistance in the other. Small group academic interventions provide an excellent opportunity to teach and reinforce prosocial classroom behaviors in a more controlled setting. In addition, students can be reinforced socially for their academic efforts, highlighting an avenue for accessing adult attention in the general education classroom. Likewise, some secondary behavior interventions also provide a modest degree of academic supports. Self-monitoring systems, in which students assess their own classroom behavior, are used to improve student behavior, but the behaviors targeted are often academic engagement and direction following (Shapiro & Cole, 1994). As a result, these interventions can decrease problem behavior but also increase on-task behavior and work completion (Todd, Horner, & Sugai, 1999). Check-in/check-out systems also target classroom behaviors that can enhance student academic engagement (Hawken & Horner, 2003).

Fidelity of Implementation

An important and often overlooked aspect of intervention that must be given attention is fidelity of implementation (Gresham, 1989). Without considering fidelity of implementation, it is unknown whether students fail to respond to secondary supports or if staff have failed to provide adequate supports. School teams can take steps to measure and improve fidelity, including the use of direct consultation, intervention scripts, and ongoing observation and performance feedback (Telzrow & Beebe, 2002). Meeting time devoted to monitoring and improving fidelity of implementation may seem like time better spent discussing student progress, but it is a valuable and critical investment of resources for all students.

Progress Monitoring

In keeping with the principle of efficiency, most secondary interventions have built-in progress-monitoring systems. For example, repeated fluency timings can easily be graphed to show student progress on general outcome measures (Hasbrouck et al., 1999). In the same way, the daily cards used in check-in/check-out and self-monitoring systems can be graphed to monitor student progress (Cheney, Flower, & Templeton, 2008; Crone et al., 2003). If systematic data are not produced as part of the intervention process, some system will need to be added to determine student response to the intervention. However, progress-monitoring data sources that are related to universal screeners are preferable (e.g., similar to data that are collected for all students).

In academics, the options are abundant—there are many curriculum-based measurement systems available for use that are described elsewhere on this Web site (see Progress Monitoring Within a Multi-Level Prevention System). For behavior, there also is a wide range of options, each with its advantages and disadvantages (Briesch & Volpe, 2007). Existing data may include attendance and ODRs, which are feasible to collect but may not be sensitive to daily improvement in performance, particularly in regard to adaptive, prosocial behavior. Direct observation is considered the gold standard of behavioral measurement (Cone, 1997), but observation of all students receiving secondary supports is rarely feasible. Recently, the use of direct behavior rating systems (similar to those used in check-in/check-out programs) has been proposed as an efficient but reliable method for classroom teachers to rate student behavior on a daily basis (Riley-Tillman, Chafouleas, & Briesch, 2007). These systems also may have potential for use as a universal screening tool. It is interesting to note that Yeh (2007) found frequent progress monitoring more effective at improving student performance in math and reading than policy programs that include increased educational spending, voucher programs, charter schools, and increased accountability.

Assessing Response to Intervention

Measuring RtI is generally much easier in academics than in behavior. In academics, students have more stable trajectories of growth for decision making. Student progress can be compared to the growth rates for other students receiving the same level of intervention (Fuchs & Fuchs, 2008). These trajectories can be analyzed to identify whether students are progressing toward important long-term academic outcomes (Kaminski, Cummings, Powell-Smith, & Good, 2008). In behavior, there are few stable trajectories that can be tapped for short-term growth goals. Progress has typically been evaluated through visual analysis of data (Hixson, Christ, & Bradley-Johnson, 2008). Students should experience some degree of success nearly immediately upon implementation of an effective intervention. However, as with academic performance, improvement to typical behavioral functioning may take time, as new skills must be learned and used regularly to become part of a student's routine. Recently, there has been some research on methods for quantifying behavior RtI (Gresham, 2005). One metric with particular promise is the percentage of days meeting a predetermined goal (e.g., a percentage of possible points earned on a daily point card). Cheney and colleagues (2008) examined this metric for analyzing check-in/check-out data and found it a useful and logical measure of response to secondary behavior intervention. If daily point cards could be used as a universal screening instrument, there is potential for creating local norms to identify students by cut scores and growth. Such a system could be improved by adding information regarding internalizing behaviors as well.
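The metric described above—percentage of days meeting a predetermined point-card goal—is straightforward to compute. The sketch below is illustrative only; the 80% goal, the 20-point card, and the daily scores are hypothetical, not values from Cheney and colleagues (2008).

```python
# RtI metric sketch: percentage of days on which a student met a
# predetermined goal (a percentage of possible points earned on a
# daily point card). All values below are hypothetical.

GOAL = 0.80  # e.g., goal of 80% of possible daily points


def percent_days_meeting_goal(daily_points, possible_points, goal=GOAL):
    """Return the percentage of days at or above the point goal."""
    days_met = sum(1 for pts in daily_points
                   if pts / possible_points >= goal)
    return 100.0 * days_met / len(daily_points)


points = [16, 19, 12, 20, 18]  # points earned each school day
print(percent_days_meeting_goal(points, possible_points=20))  # 80.0
```

Because the same computation works for any student on the same card, it lends itself to the local norms and cut scores the text envisions.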

Determining Whether Secondary Supports Are Sufficient

The bottom line decision to be made by teams is whether the individual student is successful at this level of support. The ultimate goal of any RtI system is student success in both academics and social behavior. Students themselves do not fit into a tier of supports; instead, their needs are addressed by the tiers provided. As such, some students may be successful with secondary supports in one area, and universal or tertiary supports in another. Therefore, the measurement of progress in both areas is needed.


Briesch, A. M., & Volpe, R. J. (2007). Important considerations in the selection of progress-monitoring measures for classroom behaviors. School Psychology Forum, 1, 59–74.


Carr, E. G., Horner, R. H., Turnbull, A., Marquis, J., Magito-McLaughlin, D., McAtee, M., et al. (1999). Positive behavior support as an approach for dealing with problem behavior in people with developmental disabilities: A research synthesis. Washington, DC: American Association on Intellectual and Developmental Disabilities.


Carter, D. R., & Horner, R. H. (2007). Adding functional behavioral assessment to First Step to Success: A case study. Journal of Positive Behavior Interventions, 9, 229–238.


Cheney, D., Flower, A., & Templeton, T. (2008). Applying response to intervention metrics in the social domain for students at risk of developing emotional or behavioral disorders. Journal of Special Education, 42, 108–126.


Christ, T. J. (2008). Best practices in problem analysis. In A. Thomas & J. P. Grimes (Eds.), Best practices in school psychology V (pp. 159–176). Bethesda, MD: National Association of School Psychologists.


Clarke, B., & Shinn, M. R. (2004). A preliminary investigation into the identification and development of early mathematics curriculum-based measurement. School Psychology Review, 33, 234–248.


Cone, J. D. (1997). Issues in functional analysis in behavioral assessment. Behavior Research and Therapy, 35, 259–275.


Crone, D. A., & Horner, R. H. (2003). Building positive behavior support systems in schools: Functional behavioral assessment. New York: Guilford.


Crone, D. A., Horner, R. H., & Hawken, L. S. (2003). Responding to problem behavior in schools: The Behavior Education Program. New York: Guilford.


Daly, E. J., Chafouleas, S. M., & Skinner, C. H. (2005). Interventions for reading problems: Designing and evaluating effective strategies. New York: Guilford.


Espin, C., Wallace, T., Campbell, H., Lembke, E. S., Long, J. D., & Ticha, R. (2008). Curriculum-based measurement in writing: Predicting the success of high-school students on state standards tests. Exceptional Children, 74, 174–193.


Filter, K. J., & Horner, R. H. (2009). Function-based academic interventions for problem behavior. Education and Treatment of Children, 32, 1–19.


Fuchs, L. S. (2009). Mathematics intervention at the secondary prevention level of a multi-tier prevention system: Six key principles. Retrieved January 31, 2009.


Fuchs, L. S., & Fuchs, D. (1992). Identifying a measure for monitoring student reading progress. School Psychology Review, 21, 45–58.


Fuchs, L. S., & Fuchs, D. (2008). Best practices in progress monitoring reading and mathematics at the elementary grades. In A. Thomas & J. P. Grimes (Eds.), Best practices in school psychology V (pp. 2147–2164). Bethesda, MD: National Association of School Psychologists.


Good, R. H., & Kaminski, R. A. (Eds.). (2002). Dynamic Indicators of Basic Early Literacy Skills (6th ed.). Eugene, OR: Institute for the Development of Education Achievement.


Gresham, F. M. (1989). Assessment of treatment integrity in school consultation and prereferral intervention. School Psychology Review, 18, 37–50.


Gresham, F. M. (2002). Teaching social skills to high-risk children and youth: Preventative and remedial strategies. In M. R. Shinn, H. M. Walker, & G. Stoner (Eds.), Interventions for academic and behavior problems II: Preventive and remedial approaches (pp. 403–432). Bethesda, MD: National Association of School Psychologists.


Gresham, F. M. (2005). Response to intervention: An alternative means of identifying students as emotionally disturbed. Education & Treatment of Children, 28, 328–344.


Hasbrouck, J. E., Ihnot, C., & Rogers, G. (1999). Read Naturally: A strategy to increase oral reading fluency. Reading Research and Instruction, 39, 27–37.


Hawken, L. S., Adolphson, S. L., MacLeod, K. S., & Schumann, J. (2009). Secondary-tier interventions and supports. In W. Sailor, G. Sugai, R. H. Horner, & G. Dunlap (Eds.), Handbook of positive behavior support (pp. 395–420). New York: Springer.


Hawken, L. S., & Horner, R. H. (2003). Evaluation of a targeted group intervention within a school-wide system of behavior support. Journal of Behavioral Education, 12, 225–240.


Hixson, M., Christ, T. J., & Bradley-Johnson, S. (2008). Best practices in the analysis of progress-monitoring data and decision making. In A. Thomas & J. P. Grimes (Eds.), Best practices in school psychology V (pp. 2133–2146). Bethesda, MD: National Association of School Psychologists.


Horner, R. H., Sugai, G., Todd, A. W., & Lewis-Palmer, T. (2005). School-wide positive behavior support. In L. Bambara & L. Kern (Eds.), Individualized supports for students with problem behaviors: Designing positive behavior plans (pp. 359–390). New York: Guilford.


Joseph, L. M. (2008). Best practices on interventions for students with reading problems. In A. Thomas & J. P. Grimes (Eds.), Best practices in school psychology V (Vol. 4, pp. 1163–1180). Bethesda, MD: National Association of School Psychologists.


Kaminski, R. A., Cummings, K. D., Powell-Smith, K. A., & Good, R. H. (2008). Best practices in using Dynamic Indicators of Basic Early Literacy Skills (DIBELS) for formative assessment and evaluation. In A. Thomas & J. P. Grimes (Eds.), Best practices in school psychology V (pp. 1181–1203). Bethesda, MD: National Association of School Psychologists.


Langland, S., Lewis-Palmer, T., & Sugai, G. (1998). Teaching respect in the classroom: An instructional approach. Journal of Behavioral Education, 8, 245–262.


Lee, Y., Sugai, G., & Horner, R. H. (1999). Using an instructional intervention to reduce problem and off-task behaviors. Journal of Positive Behavior Interventions, 1, 195–204.


Lewis-Palmer, T., Bounds, M., & Sugai, G. (2004). Districtwide system for providing individual student support. Assessment for Effective Intervention, 30, 53–65.


March, R. E., & Horner, R. H. (2002). Feasibility and contributions of functional behavioral assessment in schools. Journal of Emotional and Behavioral Disorders, 10, 158–170.


March, R. E., Horner, R. H., Lewis-Palmer, T., Brown, D., Crone, D., Todd, A. W., et al. (2000). Functional Assessment Checklist: Teachers and Staff (FACTS).


McIntosh, K., Borgmeier, C., Anderson, C. M., Horner, R. H., Rodriguez, B. J., & Tobin, T. J. (2008). Technical adequacy of the Functional Assessment Checklist: Teachers and Staff (FACTS) FBA interview measure. Journal of Positive Behavior Interventions, 10, 33–45.


McIntosh, K., Brown, J. A., & Borgmeier, C. J. (2008). Validity of functional behavior assessment within an RTI framework: Evidence and future directions. Assessment for Effective Intervention, 34, 6–14.


McIntosh, K., Campbell, A. L., Carter, D. R., & Dickey, C. R. (2009). Differential effects of a tier two behavior intervention based on function of problem behavior. Journal of Positive Behavior Interventions, 11, 82–93.


McIntosh, K., Campbell, A. L., Carter, D. R., & Zumbo, B. D. (2009). Concurrent validity of office discipline referrals and cut points used in school-wide positive behavior support. Behavioral Disorders, 34, 100–113.


McIntosh, K., Flannery, K. B., Sugai, G., Braun, D., & Cochrane, K. L. (2008). Relationships between academics and problem behavior in the transition from middle school to high school. Journal of Positive Behavior Interventions, 10, 243–255.


McIntosh, K., Horner, R. H., Chard, D. J., Boland, J. B., & Good, R. H. (2006). The use of reading and behavior screening measures to predict non-response to school-wide positive behavior support: A longitudinal analysis. School Psychology Review, 35, 275–291.


McIntosh, K., Horner, R. H., Chard, D. J., Dickey, C. R., & Braun, D. H. (2008). Reading skills and function of problem behavior in typical school settings. Journal of Special Education, 42, 131–147.


McIntosh, K., Reinke, W. M., & Herman, K. E. (2009). Schoolwide analysis of data for social behavior problems: Assessing outcomes, selecting targets for intervention, and identifying need for support. In G. G. Peacock, R. A. Ervin, E. J. Daly, & K. W. Merrell (Eds.), The practical handbook of school psychology: Effective practices for the 21st century (pp. 135–156). New York: Guilford.


McIntosh, K., Sadler, C., & Brown, J. A. (2009). Kindergarten reading skill and response to instruction as risk factors for problem behavior. Manuscript submitted for publication.


McMaster, K. L., & Campbell, H. (2008). New and existing curriculum-based writing measures: Technical features within and across grades. School Psychology Review, 37, 550–566.


Newton, J. S., Horner, R. H., Algozzine, R. F., Todd, A. W., & Algozzine, K. M. (2009). Using a problem-solving model to enhance data-based decision making in schools. In W. Sailor, G. Dunlap, G. Sugai, & R. H. Horner (Eds.), Handbook of positive behavior support (pp. 551–580). New York: Springer.


O'Neill, R. E., Horner, R. H., Albin, R. W., Sprague, J. R., Storey, K., & Newton, J. S. (1997). Functional assessment and program development for problem behavior: A practical handbook (2nd ed.). Pacific Grove, CA: Brooks/Cole.


Preciado, J. A., Horner, R. H., & Baker, S. K. (2009). Using a function-based approach to decrease problem behavior and increase reading academic engagement for Latino English language learners. Journal of Special Education, 42, 227–240.


Rathvon, N. (2003). Effective school interventions: Strategies for enhancing academic achievement and social competence. New York: Guilford.


Reinke, W. M., Herman, K. C., Petros, H., & Ialongo, N. (2008). Empirically-derived subtypes of child academic and behavior problems: Co-occurrence and distal outcomes. Journal of Abnormal Child Psychology, 36, 759–777.


Riley-Tillman, T. C., Chafouleas, S. M., & Briesch, A. M. (2007). A school practitioner's guide to using daily behavior report cards to monitor student behavior. Psychology in the Schools, 44, 77–89.


Roberts, M. L., Marshall, J., Nelson, J. R., & Albers, C. A. (2001). Curriculum-based assessment procedures embedded within functional behavioral assessments: Identifying escape-motivated behaviors in a general education classroom. School Psychology Review, 30, 264–277.


Salvia, J., Ysseldyke, J., & Bolt, S. (2006). Assessment in special and inclusive education. Boston: Houghton Mifflin.


Severson, H. H., Walker, H. M., Hope-Doolittle, J., Kratochwill, T. R., & Gresham, F. M. (2007). Proactive, early screening to detect behaviorally at-risk students: Issues, approaches, emerging innovations, and professional practices. Journal of School Psychology, 45, 193–223.


Shapiro, E. S. (2004). Academic skills problems: Direct assessment and intervention (3rd ed.). New York: Guilford.


Shapiro, E. S., & Cole, C. L. (1994). Behavior change in the classroom: Self-management interventions. New York: Guilford.


Sugai, G. (2009). School-wide positive behavior support and response to intervention. Retrieved January 31, 2009.


Sugai, G., & Horner, R. H. (2002). The evolution of discipline practices: School-wide positive behavior supports. Child and Family Behavior Therapy, 24, 23–50.


Sugai, G., Horner, R. H., & Gresham, F. M. (2002). Behaviorally effective school environments. In M. R. Shinn, H. M. Walker, & G. Stoner (Eds.), Interventions for academic and behavior problems II: Preventive and remedial approaches (pp. 315–350). Bethesda, MD: National Association of School Psychologists.


Sugai, G., Lewis-Palmer, T., & Hagan-Burke, S. (1999). Overview of the functional behavioral assessment process. Exceptionality, 8, 149–160.


Sugai, G., Lewis-Palmer, T., & Hagan, S. (1998). Using functional assessments to develop behavior support plans. Preventing School Failure, 43, 6–13.


Sugai, G., Sprague, J. R., Horner, R. H., & Walker, H. M. (2000). Preventing school violence: The use of office discipline referrals to assess and monitor school-wide discipline interventions. Journal of Emotional and Behavioral Disorders, 8, 94–101.


Telzrow, C. F., & Beebe, J. J. (2002). Best practices in facilitating intervention adherence and integrity. In A. Thomas & J. P. Grimes (Eds.), Best practices in school psychology IV (pp. 503–516). Bethesda, MD: National Association of School Psychologists.


Tilly, W. D. (2008). The evolution of school psychology to science-based practice: Problem-solving and the three-tiered model. In A. Thomas & J. P. Grimes (Eds.), Best practices in school psychology V (pp. 17–36). Bethesda, MD: National Association of School Psychologists.


Todd, A. W., Horner, R. H., & Sugai, G. (1999). Self-monitoring and self-recruited praise: Effects on problem behavior, academic engagement, and work completion in a typical classroom. Journal of Positive Behavior Interventions, 1, 66–76.


VanDerHeyden, A. M., & Burns, M. K. (2008). Examination of the utility of various measures of mathematics proficiency. Assessment for Effective Intervention, 33, 215–224.


VanDerHeyden, A. M., & Witt, J. C. (2008). Best practices in can't do/won't do assessment. In A. Thomas & J. P. Grimes (Eds.), Best practices in school psychology V (pp. 131–139). Bethesda, MD: National Association of School Psychologists.


Vaughn, S., Linan-Thompson, S., & Hickman, P. (2003). Response to instruction as a means of identifying students with reading/learning disabilities. Exceptional Children, 69, 391–409.


Vellutino, F. R., Scanlon, D. M., Sipay, E. R., Small, S. G., Pratt, S., Chen, R., et al. (1996). Cognitive profiles of difficult-to-remediate and readily remediated poor readers: Early intervention as a vehicle for distinguishing between cognitive and experiential deficits as basic causes of specific reading disability. Journal of Educational Psychology, 88, 601–638.


Walker, B., Cheney, D., Stage, S. A., & Blum, C. (2005). Schoolwide screening and positive behavior supports: Identifying and supporting students at risk for school failure. Journal of Positive Behavior Interventions, 7, 194–204.


Walker, H. M., Horner, R. H., Sugai, G., Bullis, M., Sprague, J. R., Bricker, D., et al. (1996). Integrated approaches to preventing antisocial behavior patterns among school-age children and youth. Journal of Emotional and Behavioral Disorders, 4, 194–209.


Walker, H. M., & Severson, H. (1992). Systematic screening for behavior disorders (2nd ed.). Longmont, CO: Sopris West.


Walker, H. M., & Severson, H. (1994). Replication of the systematic screening for behavior disorders (SSBD) procedure for the identification of at-risk children. Journal of Emotional and Behavioral Disorders, 2, 66–78.


Yeh, S. S. (2007). The cost-effectiveness of five policies for improving student achievement. American Journal of Evaluation, 28, 416–436.

