Dropping out of school has been a concern for educators internationally. According to UNESCO (2021), 24 million students are at risk of dropping out globally. School risk factors related to dropout include students’ ability to make academic progress over time, teacher absenteeism, and the quality of the education provided (Sabates et al., Reference Sabates, Akyeampong, Westbrook and Hunt2010). Preventive school climate factors that can decrease the likelihood of dropout include school safety, the physical environment, teaching and learning, and interpersonal relationships (Thapa et al., Reference Thapa, Cohen, Higgins-D’Alessandro and Guffey2012). One strategy for implementing these preventive factors is multi-tiered systems of support (MTSS).
MTSS approaches involve a three-tiered continuum of support (Durrance, Reference Durrance2023; Estrapala et al., Reference Estrapala, Rila and Bruhn2021). These interventions include schoolwide (Tier 1), group level (Tier 2), and individualised (Tier 3) support (Gamm et al., Reference Gamm, Elliott, Halbert, Price-Baugh, Hall, Walston, Uro and Casserly2012). Researchers suggest that staff who effectively implement Tier 1 strategies are likelier to implement more intensive Tier 2 and 3 interventions for their students (Eber et al., Reference Eber, Phillips, Upreti, Hyde, Lewandowski and Rose2009). MTSS approaches can take the form of schoolwide positive behaviour interventions and supports (SWPBIS), response to intervention, social and emotional learning (SEL), and school-based mental health. Schools that have implemented comprehensive MTSS have successfully addressed risk factors related to dropout for all students (Bradshaw et al., Reference Bradshaw, Mitchell and Leaf2010; Burke, Reference Burke2015; Gage et al., Reference Gage, Whitford and Katsiyannis2018) and students with disabilities (Choi et al., Reference Choi, McCart and Sailor2020a). Even with access to MTSS, many schools struggle to integrate MTSS-related strategies (e.g., academic, behavioural) into their settings (Choi et al., Reference Choi, McCart, Miller and Sailor2022; Jarl et al., Reference Jarl, Andersson and Blossing2017; Wei & Johnson, Reference Wei and Johnson2020).
How School Improvement Supports Schoolwide Interventions
School improvement is a mechanism for organising schoolwide interventions, such as MTSS. It involves a systematic approach to changing instruction and the school environment, including standardisation of instruction, progress monitoring, administrative support, distributive and collective leadership, provision of sufficient resources, and district-level support (Dolph, Reference Dolph2017; Sleegers et al., Reference Sleegers, Thoonen, Oort and Peetsma2014). Improvement processes can be incremental, context specific, and challenging to implement (Donaldson & Weiner, Reference Donaldson and Weiner2017). However, schools with higher levels of schoolwide capacity, such as those implementing MTSS, can better integrate external reforms into their current structures than schools with lower capacity (Sleegers et al., Reference Sleegers, Thoonen, Oort and Peetsma2014). Schoolwide capacity for improvement involves leadership practices, organisational conditions, teacher motivation, and teacher learning (Sleegers et al., Reference Sleegers, Thoonen, Oort and Peetsma2014). Although addressing schoolwide capacity may increase the effectiveness of implementation, there are still challenges to effective school improvement.
Using MTSS to facilitate school improvement may support factors that lead to improved student outcomes. These factors include high expectations, purposeful actions, meaningful relationships (Kaniuka & Vickers, Reference Kaniuka and Vickers2010), and systems components (e.g., administrative support, implementation teams) for interventions. For instance, the effectiveness of school improvement may be mediated by the functioning of the school improvement team (Benoliel, Reference Benoliel2021). The MTSS process provides systems, structures, and strategies that support healthy problem-solving, which could help improve team functioning (Goodman & Bohanon, Reference Goodman and Bohanon2018). Also, developing school improvement plans based on problem identification, a component of MTSS, may lead to more effective outcomes (Mintrop, Reference Mintrop2020). There is emerging research about integrating MTSS and school improvement for all students (Bohanon et al., Reference Bohanon, Wu, Kushki, LeVesseur, Harms, Vera, Carlson-Sanei and Shriberg2021; Freeman et al., Reference Freeman, Miller and Newcomer2015) and for students with disabilities (Choi et al., Reference Choi, McCart, Hicks and Sailor2019, Reference Choi, McCart and Sailor2020b; Sailor et al., Reference Sailor, Satter, Woods, McLeskey and Waldron2017). However, few researchers have examined the interaction between MTSS and school improvement in secondary schools.
Theoretical Framework: Activity Theory
The underlying theory for this study is activity theory, which involves activity systems components and their internal relationships (Engeström, Reference Engeström, Engeström, Miettinen and Punamäki1999). This theory involves a nested triangle model that includes societal and contextual factors. Activity systems consist of a subject (i.e., an individual or a subgroup), object (i.e., the orientation of activity), outcomes, community (i.e., participants with a shared goal), division of labour (i.e., distribution of roles), and rules (i.e., norms and conventions; Engeström, Reference Engeström, Engeström, Miettinen and Punamäki1999). The current model’s premise is that multiple activity systems are integrated by a shared focus on a goal or object (Engeström, Reference Engeström2008). For example, schoolwide approaches for behaviour, academic, and SEL support focus on using explicit instruction for teaching skills (Bohanon & Wu, Reference Bohanon and Wu2011). If the school improvement goals include behaviour, academic, and social outcomes, schoolwide teams may more easily integrate separate MTSS approaches into a joint plan.
Purpose of the Study
Our goal for this preliminary study was to improve our understanding of the connections between school improvement and MTSS implementation. Further, we hoped to learn how commitment to each process could improve student outcomes. We hypothesised that school improvement and MTSS would benefit from simultaneous implementation. This research is a response to calls to address how MTSS-related initiatives can improve student outcomes, including factors connected with dropping out of school (Horner et al., Reference Horner, Sugai and Fixsen2017). The following questions guided this research:
- RQ1: What were the data patterns for schools above and below the median on measures related to implementing a schoolwide intervention, school improvement, and school improvement outcomes as assessed by statewide report card data?
- RQ2: What were the changes in data patterns for schools above and below the median over time on measures related to schoolwide intervention implementation, school improvement, and school-level student outcomes (e.g., academic, graduation, dropout)?
- RQ3: What were the patterns between implementing schoolwide interventions and school-level student outcomes?
- RQ4: What data patterns emerged in office discipline referrals for individual students related to school improvement and schoolwide interventions?
Methods
This preliminary study occurred over 3 years, from the fall of 2014 to the spring of 2017. The institutional review board at Loyola University Chicago approved this study (IRB# 1396). The state education agency and its technical assistance team provided access to student-level and MTSS fidelity extant data. School-level data were publicly available on the state education agency’s website. Consent was only required for the school improvement measure designed for this research. The following section provides information regarding the participants, independent variables, dependent variables, and data analysis.
Participants
The participants for this study were staff from nine purposively sampled general education high schools. These schools implemented an integrated academic and behavioural schoolwide intervention in a Midwestern state in the United States. A statewide technical assistance team provided support for the implementation of MTSS. Technical assistance focused on providing effective academic and behavioural core instruction, using data for decision-making, developing effective team structures, and developing a continuum of student support. Participation in technical assistance included attending statewide MTSS training and submitting data related to the project (e.g., implementation fidelity data). These nine high schools represented all general education high schools implementing MTSS as a part of this statewide project at the time of data collection.
We mailed surveys (described in the Variables section) to the nine schools for this study to measure levels of school improvement. Table 1 includes demographic information for the sample schools with participants who did (n = 5, 56% response rate) and did not (n = 4) respond to the survey. In addition to the nine schools in the sample, we conducted a comparative analysis with 20 additional randomly selected high schools across the state. This analysis aimed to determine if schools in the study were similar to secondary schools throughout the state. Results from a permutation test indicated that the study schools were not significantly different from randomly selected schools in the state based on student demographic variables (e.g., number of students, socio-economic status, dropout rate).
Note. TIERS = Tiered Inventory of Effective Resources in Schools.
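The comparative analysis above relied on a permutation test. A generic two-group permutation test on a single demographic variable can be sketched as follows; this is an illustrative implementation only, and the data values shown are hypothetical, not the study schools' actual figures:

```python
import random

def permutation_test_mean_diff(group_a, group_b, n_perm=10_000, seed=0):
    """Two-sided permutation test on the absolute difference in group means.

    Returns the proportion of shuffled group labelings whose mean difference
    is at least as extreme as the observed one (an approximate p-value).
    """
    rng = random.Random(seed)
    mean = lambda v: sum(v) / len(v)
    observed = abs(mean(group_a) - mean(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(mean(pooled[:n_a]) - mean(pooled[n_a:]))
        if diff >= observed:
            extreme += 1
    return extreme / n_perm

# Hypothetical dropout rates (%) for study schools vs. randomly selected schools
study = [2.1, 3.4, 1.8, 2.9, 4.0, 2.5, 3.1, 2.2, 3.6]
comparison = [2.8, 3.0, 1.9, 4.2, 2.4, 3.3, 2.0, 2.7, 3.8, 2.6]
p_value = permutation_test_mean_diff(study, comparison)
```

A large p-value, as reported for the study sample, indicates no evidence that the two groups of schools differ on the variable tested.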
Variables
Fidelity measures for schoolwide supports
Two measures assessed the fidelity of implementation for schoolwide efforts: the Benchmarks of Quality (BoQ; Year 1 of the study – 2014–2015; Kincaid et al., Reference Kincaid, Childs and George2010) and the SWPBIS Tiered Fidelity Inventory (TFI; Year 2 of the study – 2015–2016; Algozzine et al., Reference Algozzine, Barrett, Eber, George, Horner, Lewis, Putnam, Swain-Bradway, McIntosh and Sugai2014). The state technical assistance provider switched between the two instruments during the study. Both tools are self-assessments of MTSS that emphasise SWPBIS. The systems-level components (e.g., administrative support, leadership team development) for both instruments are similar to measures of MTSS for other domains (Bohanon & Wu, Reference Bohanon and Wu2011; e.g., academic support, SEL). Internal schoolwide teams completed the BoQ and TFI with guidance from an external coach familiar with the instruments and MTSS.
Benchmarks of Quality
The BoQ included 53 items related to implementing schoolwide behaviour support at Tier 1 (e.g., faculty commitment, procedures for dealing with discipline). Schoolwide leadership teams scored each item on a Likert scale, ranging from 0 to a maximum of 3 points per item. Each item included a unique description for scoring purposes. Researchers found the BoQ to be valid and reliable for measuring the implementation of SWPBIS. The overall internal consistency of BoQ is α = .96 (Cohen et al., Reference Cohen, Kincaid and Childs2007). The BoQ overall implementation score for this study’s sample was 38%, with a range of 20% to 59%.
Tiered Fidelity Inventory
The TFI also measures implementation fidelity of MTSS related to behaviour. The TFI includes 45 items related to implementing SWPBIS across three tiers (e.g., team composition, discipline policies). The items on the TFI are scored on a Likert scale, with values ranging from 0 to 2 points. The overall internal consistency of the TFI is α = .96. The TFI also includes subscales for all three tiers of schoolwide support. Tiers 1, 2, and 3 sections had alpha scores of .87, .96, and .98, respectively (McIntosh et al., Reference McIntosh, Massar, Algozzine, George, Horner, Lewis and Swain-Bradway2017). Researchers identified moderate convergent validity between scores on the BoQ and the TFI (Mercer et al., Reference Mercer, McIntosh and Hoselton2017). The overall average TFI total score for the study schools was 20%, with a range of 7% to 29%. The Tier 1 average score across schools was 56%, ranging from 6% to 89%. For Tier 2, the average score was 9%, ranging from 0% to 63%. Finally, the Tier 3 school average was 5%, ranging from 0% to 50%. For analysis purposes, the BoQ total score and the TFI Tier 1 total score yielded the best comparisons due to the similarity of items in each instrument. Both BoQ and TFI data were collected during the second semester of each school year.
Tiered Inventory of Effective Resources in Schools
Based on our literature review, we could not find existing tools measuring elements of school improvement related to MTSS. Therefore, we designed the Tiered Inventory of Effective Resources in Schools (TIERS) for this study. We collected the TIERS data at the end of Year 2 and the beginning of Year 3 (2016–2017). The goal of the instrument was to measure components of school improvement implementation and MTSS. Most of the 25 items on the TIERS use Likert-type scales with nominal or ordinal response options. The prompts include constructs from both school improvement and MTSS. A copy of the TIERS is included as supplementary material for this article.
We gathered evidence of validity based on internal structure, indicating the degree to which the relationships among test items and components conformed to the constructs under study (American Educational Research Association, American Psychological Association, & National Council on Measurement in Education, 2014, p. 13). Specifically, we assessed construct validity by evaluating the items’ content validity (Sireci & Padilla, Reference Sireci and Padilla2014). We addressed content validity using two methods. First, we designed the TIERS items from a literature review on school improvement. Second, experts in schoolwide support and psychometrics reviewed the TIERS. The expert reviewers judged that the survey’s content addressed school improvement systems and data factors related to MTSS and that the scaling was appropriate for measuring the survey constructs (Adams & Lawrence, Reference Adams and Lawrence2018; Forman & Crystal, Reference Forman and Crystal2015). We established initial criterion-referenced validity (Kim & Shin, Reference Kim and Shin2022) in a previous study (Bohanon et al., Reference Bohanon, Wu, Kushki, LeVesseur, Harms, Vera, Carlson-Sanei and Shriberg2021). A Kendall rank-order coefficient test (W; Kendall, Reference Kendall1938; Puth et al., Reference Puth, Neuhäuser and Ruxton2015) yielded a statistically significant correlation (W = 1.00, p < .025) between the scores on a statewide measure of school improvement and the TIERS. A higher score on the TIERS correlated with higher scores on aggregate school improvement outcomes (e.g., attendance rates, graduation rates, performance on standardised scores).
Based on the TIERS data, 100% of the responders (n = 34) across the five schools that provided data reported having a school improvement plan and leadership team. Most (71%) reported having 6–10 leadership team members. Forty-four percent said it was usually or always true they reviewed data related to their school improvement plan three times per year. Additionally, 53% indicated using data to progress monitor all interventions for students in their schools. Although the schools appeared to have a plan and leadership teams in place, they implemented improvement data practices to a lesser degree.
Outcome data
School-level data
The state provided the school-level data through a public data warehouse. There have been calls for using publicly available extant data because they are cost effective and provide access to data that would not otherwise be available (Watkins & Johnson, Reference Watkins, Johnson, Tierney, Rizvi and Ercikan2023). In the case of this study, we accessed schoolwide student data we could not have collected on our own. We measured school improvement outcomes using school-level scorecard data (RQs 1–2). These datasets included students from all demographics, including those with and without disabilities. The data were available for the second year of MTSS implementation (2015–2016). These data include variables typically associated with early warning systems (Carl et al., Reference Carl, Richardson, Cheng, Kim and Meyer2013). The state provided each school’s raw score and total points based on the scorecard dataset. The scorecard data included a composite of (a) graduation rates; (b) educator evaluations; (c) compliance factors (e.g., submitting a school improvement plan); (d) student proficiency on standardised assessments; (e) the percentage of students who participate in standardised assessments; and (f) attendance rates.
Data were also publicly available at the school level for graduation and dropout rates for Year 1 of MTSS implementation (2014–2015) and the prior year (2013–2014; RQ3). Standardised test scores for college and career readiness for maths and all subject areas were available for Years 1 (2014–2015) and 2 (2015–2016) of MTSS implementation (RQ2) for analysis with school improvement (i.e., TIERS). Only standardised maths scores were available for analysis with schoolwide interventions (i.e., BoQ) for Years 1 and 2 of MTSS implementation. Although student outcome data were collected some time ago, these same variables are still collected to evaluate student progress and school improvement. Therefore, these data appear to be relevant to current educational issues.
Student-level data
We used student-level office discipline referrals (ODR) to study patterns with MTSS and school improvement (RQ4). Both students with and without disabilities received ODRs in schools in this state. However, the state only required schools to collect ODR data for students with disabilities. Due to this limitation, the analysis for RQ4 focused on students with disabilities. These data were not publicly available, and we were granted access after formally applying to the state department of education. Although not all students with disabilities receive ODRs, researchers have shown that these students are more likely to receive disciplinary action (Green et al., Reference Green, Cohen and Stormont2019). However, with proper contextual support, these students may be less likely to receive punitive responses (Hurwitz et al., Reference Hurwitz, Cohen and Perry2021). Therefore, we considered analysis of this subgroup to be a valuable addition to understanding the role of school improvement and MTSS-related interventions. Data for Years 1 (2014–2015) and 2 (2015–2016) of MTSS implementation were available. Based on previous research, ODR data are typically organised by the percentage of students with zero to one (i.e., Tier 1), two to five (i.e., Tier 2), and six or more (i.e., Tier 3) referrals (PBISApps, 2022). These cut points provide a valid method for identifying the support students need related to externalised problem behaviour (McIntosh et al., Reference McIntosh, Campbell, Carter and Zumbo2009). Seven schools (n = 7) reported ODR data for 2014–2015, and eight schools (n = 8) reported ODR data for 2015–2016. The data included a summary of the total number of referrals for each student with a disability by school.
Analysis
We used descriptive statistics for RQ1 and RQ2. We first analysed the TIERS, BoQ, TFI, and school improvement data for all five schools based on total scores on each variable. For RQ1 and RQ2, we analysed the data based on schools whose TIERS scores were above and below the median score. We used average scores and growth between years where the outcome variables were available. For RQ1, we also used the Kendall W to determine if there was a relationship between the rank order of the data on schoolwide interventions as measured by the BoQ and TFI, school improvement as measured by the TIERS, and the school improvement scorecard data.
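The median-split grouping described above can be illustrated with a brief sketch; the school names and TIERS scores below are hypothetical, not the study data:

```python
from statistics import median

# Hypothetical TIERS total scores (percentage of possible points) for five schools
tiers_scores = {"School A": 84, "School B": 77, "School C": 60,
                "School D": 52, "School E": 45}

med = median(tiers_scores.values())
above = sorted(s for s, v in tiers_scores.items() if v > med)
below = sorted(s for s, v in tiers_scores.items() if v < med)
# With an odd number of schools, the median school falls in neither group,
# leaving two schools above and two schools below the median.
```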
We applied the Spearman rank-order correlation coefficient (ρ) to analyse RQ3. The focus of RQ3 was the relationship between intervention and student outcomes at the school level. For the analysis, we focused on the BoQ measured in 2015 and its association with the change in three student outcomes: maths (between 2015 and 2016), graduation rate (between 2014 and 2015), and dropout rate (between 2014 and 2015). Although data interpretation guidelines exist, the cut points are arbitrary and inconsistent across recommendations (Schober et al., Reference Schober, Boer and Schwarte2018). Therefore, readers should use caution when interpreting descriptors associated with cut points for correlation coefficients. The Spearman ρ yields scores ranging from −1 to +1, and the Kendall W ranges from 0 to 1; the higher the absolute value, the stronger the association. For the Spearman ρ, a positive result indicates that a higher value on one variable is associated with a higher value on another, whereas a negative result indicates that a higher value on one variable is associated with a lower value on another (Puth et al., Reference Puth, Neuhäuser and Ruxton2015). For the Spearman ρ, general guidance is that 0 to ± 0.20 = negligible, ± 0.21 to ± 0.40 = weak, ± 0.41 to ± 0.60 = moderate, ± 0.61 to ± 0.80 = strong, and ± 0.81 to ± 1.00 = very strong (Prion & Haerling, Reference Prion and Haerling2014). For the Kendall W, recommendations are 0 to 0.05 = negligible, 0.06 to 0.25 = weak, 0.26 to 0.48 = moderate, 0.49 to 0.70 = strong, and 0.71 to 1.00 = very strong (Schober et al., Reference Schober, Boer and Schwarte2018). The Kendall W and the Spearman ρ, both nonparametric statistics, were appropriate because of the study’s small sample size. Further, these statistics do not require assumptions of normality (Puth et al., Reference Puth, Neuhäuser and Ruxton2015; Siegel & Castellan, Reference Siegel and Castellan1988).
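The two rank-based statistics can be sketched from their standard formulas. The implementation below is illustrative only (it is not the authors' analysis code, and the Spearman function assumes no tied ranks):

```python
def rank(values):
    """Return 1-based ranks of the values (assumes no ties)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

def spearman_rho(x, y):
    """Spearman rank-order correlation: 1 - 6*sum(d^2) / (n*(n^2 - 1))."""
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rank(x), rank(y)))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

def kendalls_w(rank_rows):
    """Kendall's coefficient of concordance over m rankings of n items."""
    m, n = len(rank_rows), len(rank_rows[0])
    rank_sums = [sum(row[j] for row in rank_rows) for j in range(n)]
    grand_mean = sum(rank_sums) / n
    s = sum((t - grand_mean) ** 2 for t in rank_sums)
    return 12 * s / (m ** 2 * (n ** 3 - n))
```

For example, two measures that rank a set of schools identically yield W = 1.0, matching the perfect concordance reported between the fidelity measures and the scorecard data.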
The ODR data analysis (RQ4) was descriptive and visual. We determined the percentage of students with disabilities with ODRs for each school with data. Specifically, we determined the percentage of students who had zero to one ODRs (who responded to Tier 1 strategies), two to five ODRs (who required Tier 2 supports), and six or more ODRs (who required Tier 3 supports). The comparisons included the schools above and below the median score on the BoQ (n = 6), the TFI (n = 8), and the TIERS (n = 4). Caution should be used when interpreting these ODR results. Districts were only required to report discipline data for students with disabilities, and only if the behavioural incident resulted in suspension of 10 days or more. These data likely underrepresent incidents of schoolwide discipline, even for students with individualised education plans.
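The tier classification described above can be expressed as a simple function using the PBISApps cut points; the ODR counts in the example are hypothetical:

```python
def odr_tier(referrals):
    """Map a student's annual ODR count to a support tier (PBISApps cut points)."""
    if referrals <= 1:
        return 1  # 0-1 ODRs: responded to Tier 1 strategies
    if referrals <= 5:
        return 2  # 2-5 ODRs: candidate for Tier 2 supports
    return 3      # 6+ ODRs: candidate for Tier 3 supports

def tier_percentages(odr_counts):
    """Percentage of students falling at each tier."""
    n = len(odr_counts)
    tiers = [odr_tier(c) for c in odr_counts]
    return {t: 100 * tiers.count(t) / n for t in (1, 2, 3)}

# Hypothetical annual ODR counts for ten students with disabilities at one school
print(tier_percentages([0, 0, 1, 2, 3, 6, 0, 1, 1, 2]))
# -> {1: 60.0, 2: 30.0, 3: 10.0}
```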
Results
We present the results of this study based on the research questions.
Patterns Between School Improvement and Schoolwide Intervention (RQ1)
The analysis for RQ1 involved comparing school improvement, as measured by the TIERS, and schoolwide interventions, as measured by the BoQ and TFI. Table 2 provides information on the five schools’ respondents for the TIERS survey. Although the TIERS data were collected later than the BoQ and TFI, the table illustrates a possible trajectory connected to school improvement and school-level outcomes. The table includes the total points schools could earn across the TIERS, BoQ, TFI, and the school improvement scorecard data. We present the data in the order of the highest to the lowest score on the TIERS. As illustrated in the table, there did appear to be a pattern between the scores on the TIERS and school improvement scorecard data, with higher TIERS scores occurring for schools with higher school improvement scores. Table 3 includes data that compare the scores above and below the median score on the TIERS. The average score for the BoQ (36%) and the TFI at Tier 1 (70%) was higher for schools above the median (n = 2) on the TIERS than for the schools that were below the median (n = 2, BoQ = 29%, TFI = 42%). Additionally, the schools with scores above the median on the BoQ and the TFI for Tier 1 also had higher school improvement data scores.
Note. TIERS = Tiered Inventory of Effective Resources in Schools; BoQ = Benchmarks of Quality; TFI = Tiered Fidelity Inventory. Data are presented from the highest to lowest score on the TIERS.
Note. MTSS = multi-tiered systems of support; TIERS = Tiered Inventory of Effective Resources in Schools; BoQ = Benchmarks of Quality; TFI = Tiered Fidelity Inventory. Data are presented based on the schools above and below the median score on the TIERS (n = 5 schools).
There were mixed results in the relationships among school improvement, schoolwide interventions, and school improvement outcomes. The relationship between the BoQ and the TIERS was not statistically significant (W = .36, p < .18). Likewise, the relationship between the TFI Tier 1 scores and the TIERS was not significant (W = .00, p < 1.00). Additionally, the relationship between the BoQ and the TFI Tier 1 scores was not statistically significant (W = .36, p < .18). However, the BoQ and TFI Tier 1 were each statistically significantly related to the school improvement scorecard data (W = 1.00, p < .025).
Patterns Among School Improvement, Schoolwide Intervention, and School-Level Outcomes (RQ2)
Research Question 2 involved school improvement, schoolwide interventions, and available school-level outcomes. Table 3 includes information comparing the scores for schools above and below the median on the TIERS. In this sample, the schools above the median on the TIERS (n = 2, M = 80.5%) also earned two percentage points more in school improvement points than schools below the median score.
Table 4 includes data on school improvement (as measured by the TIERS) and change over time in graduation and dropout rates. As shown in Table 4, schools with lower levels of school improvement, as measured by the TIERS, saw decreases in graduation rates, whereas those with higher levels had comparatively improved graduation rates. Schools above the median on the TIERS also reduced their dropout rates across years (M = −35.9%), whereas dropout rates for schools below the median increased (M = 86.8%).
Note. MTSS = multi-tiered systems of support; TIERS = Tiered Inventory of Effective Resources in Schools. Data are presented based on the schools above and below the median score on the TIERS (n = 5 schools).
Table 5 illustrates the data for school improvement and the percentage of students who were college and career ready, as measured by statewide standardised assessments. Data were only available for maths and a summative score for all subject areas. The schools below the median on the TIERS started with higher scores for maths and all subjects. However, the schools with TIERS scores above the median demonstrated more growth for maths (M = 12.4%) and all academic subjects (M = 52.3%) across the 2 years.
Note. TIERS = Tiered Inventory of Effective Resources in Schools. Data are presented based on the schools above and below the median score on the TIERS (n = 5 schools).
Relationship Among Schoolwide Interventions and Student Outcomes (RQ3)
This analysis includes all nine general education high schools from the sample (n = 9). In some instances, the total score on the BoQ seemed to be related to student outcomes. The relationship between BoQ and the following year’s maths change was positive yet negligible (rs = .07). The relationship between BoQ and graduation rate (rs = .35) was weak. The relationship between BoQ and the dropout rate (rs = −.50) was moderate.
ODR Data Patterns for Schoolwide Interventions and School Improvement (RQ4)
We present the ODR data analysis for students with disabilities through comparisons with the TIERS, BoQ, and TFI at Tier 1. Figure 1 compares the schools above and below the median on the TIERS on the percentage of students with zero to one, two to five, and six or more ODRs from the 2015–2016 school year. We chose this year for the ODR comparison because it was closest to the time of TIERS data collection. The average number of students with disabilities at each school was M = 89 (n = 444). Across all schools with TIERS and ODR data (n = 5), an average of 90% (SD = 6.87%) of the students with disabilities had zero to one ODRs, 9% (SD = 7.12%) had two to five ODRs, and 1% (SD = 1.42%) had six or more ODRs. However, schools above the median on the TIERS (n = 2) had more students with one or fewer ODRs and fewer students with two to five ODRs than the two schools below the median score. The schools above the median on the TIERS also had a lower percentage of students with six or more ODRs.
Seven schools had BoQ and ODR data (n = 7) during the 2014–2015 school year. The average number of students per school was M = 101, totalling 708 students with disabilities with ODR data. When comparing the BoQ (see Figure 2) and ODRs, these schools (n = 7) had an average of 94% (SD = 2.72%) of the students with disabilities with one or fewer ODRs, 6% (SD = 2.62%) with two to five ODRs, and 0.11% (SD = 0.3%) had six or more ODRs. Figure 2 provides a comparison of the schools above and below the median on the BoQ in terms of the percentage of students with disabilities with one or fewer, two to five, and six or more ODRs. Schools above the median on the BoQ (n = 3) had more students with one or fewer ODRs and fewer students with two to five ODRs than schools below the median score (n = 3). The schools above and below the median on the BoQ had approximately the same number of students with six or more ODRs.
Eight schools from the sample had TFI Tier 1 and ODR data (n = 8) for the 2015–2016 school year. The average number of students with disabilities at each school was M = 96 (n = 764). When comparing the TFI at Tier 1 with ODRs (see Figure 3), all schools with data (n = 8) had an average of 92% (SD = 6.58%) of the students with disabilities with one or fewer ODRs, 7% (SD = 6.54%) with two to five ODRs, and .52% (SD = 1.13%) with six or more ODRs. Due to tied scores, we calculated a median for the TFI at Tier 1 based on the total range of the scores. Schools above the median on the TFI Tier 1 (n = 4) had more students with zero to one ODRs and fewer students with two to five ODRs than schools below the median score (n = 4).
Discussion
This preliminary study’s purpose was to better understand the relationships between school improvement, schoolwide interventions, and student outcomes. The underlying theory behind connecting school improvement and MTSS was activity theory (Engeström, Reference Engeström, Engeström, Miettinen and Punamäki1999). At its core, activity theory is concerned with understanding how individuals interact with their environment, including other people, artefacts, and institutions. The framework assumes that human activity is mediated by tools, which can be physical, social, or symbolic. These tools shape our interactions with the environment and influence how we perceive, think, and act.
In the context of school improvement and MTSS, activity theory was used to understand how these systems are designed, implemented, and evaluated. By analysing the activity system, including the roles of each stakeholder, the tools they use, and the institutional contexts in which they operate, we better understand the challenges and opportunities for improving student outcomes. In implementing MTSS, activity theory can help us understand how teachers and administrators use different tools, such as progress monitoring assessments or behaviour interventions, to support student learning and behaviour. It can also help us to identify the institutional barriers that may prevent effective implementation, such as a lack of resources or professional development opportunities.
Specifically, activity theory is helpful in interpreting the study’s results. School improvement and MTSS both have activity systems that could be mutually beneficial (Engeström, Reference Engeström2008). School improvement planning involves developing shared goals and distributing roles and responsibilities (Mintrop, Reference Mintrop2020; Sleegers et al., Reference Sleegers, Thoonen, Oort and Peetsma2014). Both MTSS and school improvement focus on creating systems to support procedures (Benoliel, Reference Benoliel2021; Gamm et al., Reference Gamm, Elliott, Halbert, Price-Baugh, Hall, Walston, Uro and Casserly2012). Further, MTSS includes methods for providing a continuum of support. Therefore, activity theory provides a useful lens with which to identify each system’s similarities, differences, and mutual benefits for improving student outcomes.
In this study, there appeared to be a modest positive relationship between school improvement efforts and the implementation of schoolwide interventions. Additionally, two of the datasets used in this study, the TIERS and school improvement scorecard data, often showed similar patterns (i.e., high or low) within schools. For instance, schools with higher levels of school improvement showed greater reductions in dropout rates and comparatively greater improvements in graduation rates for all students. Reductions in ODRs for students with disabilities and growth in all students’ college and career readiness were also greater for schools with higher TIERS scores. These observations may be partially due to the connections between school improvement (e.g., shared goals, division of labour) and schoolwide MTSS interventions (e.g., shared norms and conventions; Benoliel, Reference Benoliel2021; Engeström, Reference Engeström2008; Goodman & Bohanon, Reference Goodman and Bohanon2018).
Similar to other research (Sleegers et al., Reference Sleegers, Thoonen, Oort and Peetsma2014), the schools in this study with higher levels of schoolwide capacities, as measured by the TFI and BoQ, appeared to be better able to implement school improvement strategies, as measured by the TIERS. This outcome may be partly due to the schools’ focus on leadership capacity, problem-solving, team functioning, and other organisational conditions through their work on MTSS (Goodman & Bohanon, Reference Goodman and Bohanon2018; Lane et al., Reference Lane, Menzies, Ennis and Bezdek2013). None of the schools in this study fully implemented MTSS based on the BoQ and TFI data. However, their capacity to implement school improvement, as measured by the TIERS, may have improved their ability to integrate MTSS into their setting.
The systems development that is a part of MTSS may also have improved the staff’s capacity to address school improvement priorities in their settings (Sleegers et al., Reference Sleegers, Thoonen, Oort and Peetsma2014). For example, the focus on healthy team functioning included in MTSS may have supported the school leadership teams’ ability to implement improvement strategies (Benoliel, Reference Benoliel2021). Schools in the study with higher levels of MTSS implementation demonstrated a higher degree of improvement in outcomes for all students (e.g., maths, dropout). This result may have been a function of the teams’ ability to use MTSS problem-solving strategies (e.g., reviewing screening data, problem identification) within their school improvement approaches (Goodman & Bohanon, Reference Goodman and Bohanon2018; Mintrop, Reference Mintrop2020).
Additionally, the schools above the median on the BoQ and TFI reflected the expected proportion of students with disabilities with ODRs based on models of MTSS (i.e., 80% with zero to one ODRs, 5%–15% with two to five ODRs, less than 5% with six or more ODRs). Interestingly, there appeared to be a higher proportion of students at the universal or targeted level, such as those with five or fewer ODRs, in schools above the median on the BoQ, TFI, and TIERS. These findings are similar to other studies on decreases in ODRs related to MTSS approaches (Bradshaw et al., Reference Bradshaw, Mitchell and Leaf2010; Gage et al., Reference Gage, Whitford and Katsiyannis2018).
This preliminary study extends the documented connection between MTSS and reductions in ODRs to include an association with school improvement, as measured through the TIERS. Additionally, the results involving ODRs relate to existing research indicating that schoolwide efforts and school improvement can improve outcomes for students with disabilities (Choi et al., Reference Choi, McCart, Hicks and Sailor2019, Reference Choi, McCart and Sailor2020b; Sailor et al., Reference Sailor, Zuna, Choi, Thomas, McCart and Roger2006). Students with disabilities already receiving disciplinary action (Green et al., Reference Green, Cohen and Stormont2019) are less likely to receive punitive responses once they receive specialised services and support (Hurwitz et al., Reference Hurwitz, Cohen and Perry2021). The school staff’s ability to support students with disabilities and behavioural issues may be related to the effectiveness of their schoolwide environment (Eber et al., Reference Eber, Phillips, Upreti, Hyde, Lewandowski and Rose2009). Although the results of this study may be promising, readers should consider the study outcomes in light of its limitations.
Limitations
Given the small sample size, it was difficult to quantitatively establish the validity of the TIERS for measuring MTSS and school improvement. This preliminary research was a case study based on a sample of all nine general education high schools implementing MTSS in one state. As in the state where this study occurred, far fewer high schools implement MTSS than primary schools, which made it challenging to increase the sample size. However, these results may provide insights that could guide future researchers studying the connections between MTSS and school improvement. Readers should consider how these findings might transfer to their setting, rather than generalising to all high school settings. Further, the nonparametric statistics used in this study do not assume a normal distribution and are designed to detect differences in underlying distributions with smaller samples, making them appropriate for smaller datasets (Puth et al., Reference Puth, Neuhäuser and Ruxton2015; Siegel & Castellan, Reference Siegel and Castellan1988). There also were limitations in data availability from the state board of education. To address this concern, we only used outcome data where we could consistently compare data across school years. Additional studies should include datasets across more academic subjects and perhaps include SEL measures.
Although we addressed the TIERS’ content validity through qualitative efforts (e.g., grounding items in the research literature, expert review), researchers in future studies should use a larger sample to determine the TIERS’ psychometric properties. We also attempted to address the TIERS’ construct validity. However, future studies should address its overall reliability and validity. For example, researchers can use cognitive pretesting (Lenzner et al., Reference Lenzner, Neuert and Otto2016) to understand how respondents perceived the TIERS’ items. Although we attempted to reduce respondent fatigue by keeping the tool shorter, further research is needed to pinpoint any underlying problems with the instrument from the participants’ perspective. Although this research does not completely establish criterion validity, it is a small step toward understanding the criterion-referenced validity of the instrument. Researchers in future studies should draw upon larger samples to continue establishing criterion-referenced validity for the TIERS. Due to these limitations, the reader should use caution when considering these results. Future studies with larger sample sizes could also better describe the magnitude of the outcomes included in this study. These studies should also include more sites that have reached full MTSS implementation. Researchers also should consider interview and observational data to identify factors related to MTSS and school improvement.
In most cases, MTSS implementation was below the full implementation threshold. Only one school met the fidelity of implementation benchmark on the TFI Tier 1 scale. This lack of implementation fidelity limits the study’s findings. Although not impossible, implementing schoolwide interventions at full capacity may be more difficult to achieve quickly in secondary than in primary schools (Durrance, Reference Durrance2023; Estrapala et al., Reference Estrapala, Rila and Bruhn2021). Contextual barriers include organisational structures, school size, and student development (Estrapala et al., Reference Estrapala, Rila and Bruhn2021). Implementation in secondary settings requires more time and resources to reach full fidelity (Durrance, Reference Durrance2023). At the time of the study, the schools in this sample had 2 years of documented MTSS implementation; secondary schools may take more than 2 years to fully implement MTSS (Durrance, Reference Durrance2023). Again, this sample represented all general education high schools implementing MTSS supported by the state. Therefore, subtle differences in the data may provide insights into the early stages of MTSS implementation in secondary schools. In terms of ODRs, future studies should include data for all students, including those without disabilities. Although ODRs were only available for students with disabilities, we believe interpreting these data tells part of the story of a subgroup keenly impacted by disciplinary actions (Green et al., Reference Green, Cohen and Stormont2019).
Conclusion
Preventing school dropout may include addressing school improvement and schoolwide factors. In this study, schools with higher levels of school improvement generally had higher levels of MTSS implementation. Overall, schools in this preliminary study that were higher on both school improvement and MTSS implementation had better outcomes associated with predictors of dropping out of school. School improvement and MTSS may be mutually beneficial enterprises that help school staff address factors related to dropping out. MTSS may help develop systems capacity and response arrays to better implement school improvement. School improvement may provide a common structure to help MTSS approaches (e.g., behaviour, academic, social and emotional) integrate into one cohesive system. Because the BoQ and TFI did not measure the academic form of MTSS, some of the improvement in academic and dropout outcomes may have been due to improvements in core instruction, which were not measured. Both measures of MTSS assessed whether the system supports were in place to implement schoolwide interventions effectively. More research is needed to better understand how school improvement and MTSS approaches can support improved outcomes for students who are most at risk of dropping out of school, including students with disabilities. We hope this preliminary study contributed to a better understanding of the relationship between these two schoolwide approaches.
Supplementary material
The supplementary material for this article can be found at https://doi.org/10.1017/jsi.2023.15
Acknowledgements
The authors thank Karen Berlin, coordinator, Region 4 Training and Technical Assistance Center at George Mason University, and Dr Rachel Freeman, director of state initiatives, Institute on Community Integration at the University of Minnesota, for their feedback on this article. The authors also thank Dr Anna Harms and Dr Steve Goodman from Michigan’s MTSS Technical Assistance Center for supporting this project.
Funding
This research was partially sponsored by an internal research stimulation grant from Loyola University of Chicago.