
Problem Analysis Within an RTI Framework at a Secondary School



Middle schools and high schools across the country are actively working to improve the academic skills and postgraduation outcomes of the students they serve. Many schools are engaged in various reform efforts, which should include using data to diagnose student needs and providing intensive interventions for struggling learners (Alliance for Excellent Education, NGA Center for Best Practices, National Association of Secondary School Principals, & National Association of State Boards of Education, 2005). However, only approximately 75% of students who enter 9th grade graduate from high school (Chapman, Laird, & KewalRamani, 2010), and low academic skills contribute to students dropping out of high school. Thus, it is reasonable to assume that at least 25% of students who enter 9th grade have some form of academic deficit.

If 25% of students have academic difficulties, then a high school with approximately 1,600 students would likely have approximately 400 students who need additional support. School personnel cannot derive individualized interventions for that many students. A relatively low-level problem analysis is needed within Tier 2 to identify categories of deficits from which subsequent interventions can be developed. We discuss this in more detail in other sources (Burns & Gibbons, 2012; Burns, Riley-Tillman, & VanDerHeyden, 2012; VanDerHeyden & Burns, 2010), but below we provide an overview of the process of using data to determine appropriate interventions. We are focusing here on Tier 2 interventions because an effective Tier 2 system is needed before effective Tier 3 interventions can occur, and many high schools and middle schools are set up to successfully implement Tier 2 interventions.

Problem analysis within Tier 2 focuses on identifying the category of the problem and using that information to assign a small group intervention. In other papers in this series, we have outlined types of data that can be used to identify students who need intervention, and those data can also be used to analyze problems. Problem analysis at the elementary level often begins with universal screening data such as measures of oral reading fluency. Students who score low on fluency are then assessed to determine decoding skills and potentially phonemic awareness difficulties. However, analysis at the secondary level has to go one step further by first examining student engagement; phonemic awareness only rarely needs to be addressed. Data such as high absenteeism (e.g., 20%), office discipline referrals, grade point average (GPA), and number of credits accrued (or lack thereof) can all be indicators of disengagement (Kennelly & Monrad, 2007) and are excellent places to start analyzing the problem, which may then lead to additional assessments of academic skills.
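As an illustration, the disengagement indicators above can be combined into a simple flagging rule that a data team might run against a student information system export. The sketch below is hypothetical: apart from the 20% absenteeism example mentioned above, the cutoffs (a 2.0 GPA, two office discipline referrals) are placeholder thresholds a school would set locally, not values from the sources cited.

```python
def disengagement_flags(absence_rate, gpa, office_referrals,
                        credits_earned, credits_expected):
    """Collect indicators of low academic engagement for one student.

    All thresholds except the 20% absenteeism example are illustrative
    assumptions, not empirically derived cutoffs.
    """
    flags = []
    if absence_rate >= 0.20:               # absent on 20%+ of school days
        flags.append("high absenteeism")
    if gpa < 2.0:                          # assumed local GPA cutoff
        flags.append("low GPA")
    if office_referrals >= 2:              # assumed local referral cutoff
        flags.append("office discipline referrals")
    if credits_earned < credits_expected:  # behind on credits toward graduation
        flags.append("behind in credits")
    return flags
```

Students who accumulate one or more flags would be candidates for the follow-up engagement and academic assessments described below.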

Engagement


We suggest that indicators of student engagement should be part of any universal screening system at the secondary level. Engagement is the commitment to learning, sense of belongingness, and willingness to participate in learning and extracurricular activities that is associated with positive student outcomes (Christenson et al., 2008). Engagement is clearly a multidimensional construct (Appleton, Christenson, & Furlong, 2008; Appleton, Christenson, Kim, & Reschly, 2006). Academic engagement is reflected in indicators such as credits completed and GPA, and it is a good starting point because it is so easily measured. Cognitive engagement involves self-regulation and valuing of learning, and psychological engagement involves a sense of belonging and identification with the school, both of which are important but much more difficult to measure than academic engagement (Appleton et al., 2006).

The Student Engagement Instrument (SEI; Appleton et al., 2006) is perhaps the most psychometrically sound approach to assessing cognitive and psychological engagement. The SEI is a 35-item self-report measure that is grounded in engagement research and has been well validated. Some high schools across the country are using the SEI as a universal screening measure, which would certainly provide useful data. However, SEI data may not be needed for students who are performing well. Thus, it may be appropriate to administer the SEI only to students who demonstrate difficulties on academic and behavioral indicators. Fredricks et al. (2011) published a review of several measures of student engagement (available at the Institute of Education Sciences’ Regional Educational Laboratory Program website), which may be helpful for practitioners who are considering various measures of engagement.

Although research regarding interventions for cognitive and psychological engagement is limited, there are specific systemic interventions such as team teaching; smaller class sizes; extended class time through block scheduling, extended periods, and advisory periods; and encouragement of participation in extracurricular activities (Dynarski et al., 2008). Moreover, Check & Connect is a well-researched intervention that could be used at Tier 2. More specifically, students who lack cognitive engagement would probably benefit from interventions that involve setting personal goals, self-monitoring progress toward those goals, and teaching specific strategies to reach personal and academic goals. Those who lack psychological engagement would likely benefit from a personal relationship with a caring adult or some other mentor, increased participation in group activities, social support combined with appropriately challenging academic work, and a caring and supportive environment (Christenson et al., 2008).

Academic Skill Deficits


Students who demonstrate low academic engagement (e.g., behind in credits to graduate, low GPA, high absenteeism, etc.) could be administered the SEI to determine if they are cognitively and psychologically engaged with school. However, the academic skills of students who are disengaged should also be assessed because low academic skills are closely linked to disengagement. If data suggest that a student is not psychologically or cognitively engaged and also indicate that the student has a skill deficit, then the interventions listed above should occur to help develop engagement, but academic interventions should also occur. Below, we discuss assessment systems useful for identifying skill deficits in reading and math.

Reading. Students who exhibit the risk factors discussed above should also be screened for reading skills. This screening could be done using data that already exist (e.g., previous reading assessments such as state-mandated accountability tests or district-administered group reading tests) or by administering a screening measure such as the Measures of Academic Progress or the Scholastic Reading Inventory. If the data suggest a reading deficit, then school personnel should implement a reading intervention, but they should also dive more deeply into the data to determine which reading intervention would be most appropriate.

The National Reading Panel (NRP, 2000) identified five areas necessary for reading instruction that could also serve as a heuristic for identifying the category of reading problem that exists. Reading instruction involves developing phonemic awareness, phonics, reading fluency, vocabulary, and comprehension. Reading is a complex process, but development tends to follow a sequence that aligns with the NRP areas, especially among struggling learners (Berninger, Abbott, Vermeulen, & Fulton, 2006). Therefore, assessing how well a student is progressing through the five skills identified by the NRP (2000) could provide a useful heuristic for most students who struggle with reading.

Recent research at the University of Minnesota found that high school students performed significantly better when small-group interventions (i.e., Tier 2) were matched to student needs than when they received an intervention that addressed multiple NRP areas (Burns, Scholin, McCarthy, & Karich, 2011). Student needs were identified by first examining data from the Measures of Academic Progress for Reading (MAP-R), which most directly assesses comprehension and vocabulary. Students who scored low on the MAP-R were then assessed with a measure of reading fluency. Practitioners could certainly conduct a curriculum-based measurement of oral reading fluency (ORF), but we wanted an assessment tool that could be administered to a group, and there was no clear criterion against which ORF data could be compared. Thus, students who scored below the 25th percentile on the MAP-R were administered the Test of Silent Contextual Reading Fluency (TOSCRF; Hammill, Wiederhold, & Allen, 2006), a standardized, group-administered measure of reading fluency with reliability coefficients that exceed .80 and .90.

Students who demonstrated difficulty with comprehension and vocabulary, as measured by the MAP-R, but who scored above the 25th percentile on the TOSCRF received a comprehension intervention. Those who scored low on both measures were further assessed with the Word Attack subtest of the Woodcock-Johnson III Tests of Achievement (Woodcock, McGrew, & Mather, 2001), which involves reading phonetically regular nonsense words as a measure of decoding skills. Again, students who scored above the 25th percentile demonstrated sufficient decoding skills, whereas scores at or below the 25th percentile suggested a decoding deficit. Students who scored low on the MAP-R and the TOSCRF but above the 25th percentile on the Word Attack subtest received a fluency intervention, and those who scored low on all three measures received a decoding intervention.
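The screening cascade described above amounts to a small decision tree. The sketch below restates it in code; the 25th-percentile cutoff and the measure names come from the study, but the function itself is an illustrative assumption, not software used by the researchers.

```python
def assign_reading_intervention(map_r_pct, toscrf_pct=None, word_attack_pct=None):
    """Map screening percentile ranks to a Tier 2 reading intervention.

    Follows the cascade MAP-R -> TOSCRF -> Word Attack, each with a
    25th-percentile cutoff. Returns None when no reading intervention is
    indicated, or the next assessment to administer when data are missing.
    """
    CUTOFF = 25
    if map_r_pct > CUTOFF:
        return None                     # adequate comprehension and vocabulary
    if toscrf_pct is None:
        return "administer TOSCRF"      # next assessment in the cascade
    if toscrf_pct > CUTOFF:
        return "comprehension"          # fluent, but weak comprehension/vocabulary
    if word_attack_pct is None:
        return "administer Word Attack"
    if word_attack_pct > CUTOFF:
        return "fluency"                # decodes adequately, but reads slowly
    return "decoding"                   # low on all three measures
```

For example, a student at the 20th percentile on the MAP-R but the 40th on the TOSCRF would be assigned a comprehension intervention, matching the first branch of the cascade described above.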

Approximately 30% of the school’s 9th and 10th graders required an academic intervention: 15% of all students received a comprehension intervention, 5% a fluency intervention, and 10% a decoding intervention. The mean TOSCRF score for students who received a targeted intervention based on the academic assessments increased from 90.17 (SD = 7.65) to 98.33 (SD = 7.27), whereas the control group that received a comprehensive intervention increased from 89.88 (SD = 9.73) to 94.32 (SD = 8.77). The MAP-R scores also increased more for the targeted group (M = 206.00 to 217.21) than for the control group (M = 211.00 to 212.40).

Math. Unfortunately, research regarding math interventions is less clear. According to the National Research Council, math proficiency is composed of (a) conceptual understanding, (b) procedural fluency, (c) the ability to formulate and mentally represent problems, (d) reasoning, and (e) successful application of math to daily activities (Kilpatrick, Swafford, & Findell, 2001). Thus, much like the NRP areas in reading, these areas could serve as a heuristic with which to develop small-group math interventions. However, the National Mathematics Advisory Panel (NMAP; 2008) found that students need to be proficient through Algebra II to be prepared for college, and algebra could therefore provide the framework for an intervention heuristic in math.

The NMAP (2008) identified six major topics for algebra: symbols and expressions, linear equations, quadratic equations, functions, algebra of polynomials, and combinatorics and finite probability. Students could be assessed in these areas and interventions delivered accordingly. However, some high school students may lack even the prerequisite skills for algebra. Geometry and measurement, along with fluency with fractions, are benchmarks toward algebra proficiency that should be attained by the end of 7th grade, and fluency with whole numbers should be attained by the end of 5th grade (NMAP, 2008). It may be necessary to assess these more basic skills to identify whether one of these areas is an appropriate intervention target.

Conclusion


In our experience, the two most common reasons why interventions do not work are that they are not implemented correctly and that they do not correctly address the student’s problem. Problem analysis is one of the foundational components of RTI. Although much of the research on problem analysis was conducted with elementary-age students, there is considerable research regarding early identification of secondary students, engagement, and targeted interventions. There certainly is more research to be conducted, but using data to diagnose student needs and providing subsequent intensive interventions for struggling learners is an important component of adolescent literacy instruction (Alliance for Excellent Education, NGA Center for Best Practices, National Association of Secondary School Principals, & National Association of State Boards of Education, 2005).

Targeting an intervention increases the likelihood of success, and school personnel would be remiss if they did not consider student engagement in the equation. Examining existing data such as credits earned, attendance, office discipline referrals, GPA, state test scores, and district-administered reading and math test scores provides a thorough student profile from which to work, and minimal additional data may be needed for a relatively small proportion of students to determine the appropriate intervention target. In our experience, training middle and high school personnel to interpret and use these data is necessary for successfully implementing an RTI model, can be the basis for secondary school reform, and is time and effort well spent.


References


Alliance for Excellent Education, NGA Center for Best Practices, National Association of Secondary School Principals, & National Association of State Boards of Education. (2005). Reading instruction is for everybody—even teenagers. Washington, DC: Author.
Appleton, J. J., Christenson, S. L., & Furlong, M. J. (2008). Student engagement with school: Critical conceptual and methodological issues of the construct. Psychology in the Schools, 45, 369–386.
Appleton, J. J., Christenson, S. L., Kim, D., & Reschly, A. L. (2006). Measuring cognitive and psychological engagement: Validation of the Student Engagement Instrument. Journal of School Psychology, 44, 427–445.
Berninger, V. W., Abbott, R. D., Vermeulen, K., & Fulton, C. M. (2006). Paths to reading comprehension in at-risk second-grade readers. Journal of Learning Disabilities, 39, 334–351.
Burns, M. K. & Gibbons, K. (2012). Response to intervention implementation in elementary and secondary schools: Procedures to assure scientific-based practices (2nd ed.). New York, NY: Routledge.
Burns, M. K., Riley-Tillman, T. C., & VanDerHeyden, A. M. (2012). Advanced RTI applications: Intervention design and implementation. New York, NY: Guilford.
Burns, M. K., Scholin, S., McCarthy, A., & Karich, A. (2011). Comparison of targeted small-group interventions and comprehensive interventions on reading achievement among high school students at risk for reading failure. Minneapolis, MN: University of Minnesota.
Chapman, C., Laird, J., & KewalRamani, A. (2010). Trends in high school dropout and completion rates in the United States: 1972–2008 (NCES 2011-012). Washington, DC: National Center for Education Statistics, Institute of Education Sciences, U.S. Department of Education.
Christenson, S. L., Reschly, A. L., Appleton, J. J., Berman, S., Spangers, D., & Varro, P. (2008). Best practices in fostering student engagement. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology V (pp. 1099–1120). Bethesda, MD: National Association of School Psychologists.
Dynarski, M., Clarke, L., Cobb, B., Finn, J., Rumberger, R., & Smink, J. (2008). Dropout prevention: A practice guide (NCEE 2008–4025). Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education.
Fredricks, J., McColskey, W., Meli, J., Mordica, J., Montrosse, B., & Mooney, K. (2011). Measuring student engagement in upper elementary through high school: A description of 21 instruments (Issues & Answers Report, REL 2011–No. 098). Washington, DC: Regional Educational Laboratory Southeast, National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education.
Hammill, D., Wiederhold, J., & Allen, E. (2006). TOSCRF: Test of Silent Contextual Reading Fluency, Examiner’s Manual. Austin, TX: Pro-Ed.
Kennelly, L., & Monrad, M. (2007). Approaches to dropout prevention: Heeding early warning signs with appropriate interventions. Washington, DC: American Institute for Research, National High School Center.
Kilpatrick, J., Swafford, J., & Findell, B. (Eds.). (2001). Adding it up: Helping children learn mathematics. Washington, DC: National Academy Press.
National Mathematics Advisory Panel. (2008). Foundations for success: The final report of the National Mathematics Advisory Panel. Washington, DC: U.S. Department of Education.
National Reading Panel. (2000). Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction. Washington, DC: National Institute of Child Health and Human Development, U.S. Department of Health and Human Services.
VanDerHeyden, A. M., & Burns, M. K. (2010). Essentials of response to intervention. New York, NY: Wiley.
Woodcock, R. W., McGrew, K. S., & Mather, N. (2001). Woodcock-Johnson III Tests of Achievement. Itasca, IL: Riverside.
