Sunday, August 11, 2019

Ten Years of Response to Intervention: Trends in the School Psychology Literature

Abstract

With the reauthorization of the Individuals with Disabilities Education (Improvement) Act (IDEA) in 2004, local education agencies (LEAs) were no longer required to employ an IQ-achievement discrepancy model in the identification of a specific learning disability (SLD). Rather, districts were permitted to use data from a Response to Intervention (RTI) framework in determining a student’s eligibility for special services under the SLD classification. Because this change in legislation has the potential to impact the ways in which schools provide services to students, it is important to review the research base that informs practice. This review of the trends in the RTI literature examines the frequency and type of published research in the 10 years that followed the changes to IDEA. Results indicate that further research is warranted, particularly in the areas of procedural integrity and full model implementation.

Predicting School Suspension Risk from Eighth Through Tenth Grade Using the Strengths and Difficulties Questionnaire

Abstract

The current study examined (1) whether the Strengths and Difficulties Questionnaire (SDQ) would yield alternative factor structures related to either symptoms or strengths with early adolescent students when an exploratory factor analysis (EFA) is used; (2) which scales best predicted suspensions of typically developing early adolescents; and (3) what cutoff scores were useful for identifying youth at risk for suspension. The study included 321 parent-student dyads, who were followed from the middle of eighth grade until the end of tenth grade. A symptom-based EFA yielded three factors: Misbehavior, Isolation, and Agitation. A strength-based EFA also yielded three factors: Emotional, Social, and Moral competence. Logistic regression path analyses were used to predict risk of any suspension at the end of eighth, ninth, and tenth grades. The predictor variables were the original SDQ Conduct Problems and Hyperactivity scales in one model, the Misbehavior and Agitation scales in a second model, and the Emotional and Moral competence scales in a third model. Only the Misbehavior scale consistently predicted suspensions across all three grades (b = .27, OR = 1.32, p < .001; b = .15, OR = 1.18, p = .029; b = .17, OR = 1.18, p = .029, respectively). Cutoff scores reflecting the 75th and 90th percentiles were established for the Misbehavior scale; however, each cutoff demonstrated both strengths and weaknesses for identifying at-risk students. The expectation that screening can identify youth at risk for suspension, a complex school discipline decision, is discussed.
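
As a hedged illustration of the kind of single-predictor model summarized above (not the authors' actual analysis), the sketch below fits a logistic regression of suspension status on a Misbehavior score and converts the coefficient b to an odds ratio. The data file and column names are hypothetical.

# Minimal sketch, assuming a hypothetical file "sdq_followup.csv" with one row per
# student, a "misbehavior" factor score, and a binary "suspended" outcome.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("sdq_followup.csv")
X = sm.add_constant(df["misbehavior"])   # predictor plus intercept
y = df["suspended"]                      # 1 = any suspension, 0 = none

model = sm.Logit(y, X).fit(disp=False)   # logistic regression
b = model.params["misbehavior"]          # logit coefficient (the reported b)
odds_ratio = np.exp(b)                   # OR = exp(b)
p_value = model.pvalues["misbehavior"]
print(f"b = {b:.2f}, OR = {odds_ratio:.2f}, p = {p_value:.3f}")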

Examining the Association Between DIBELS Next® and the SBAC ELA Achievement Standard

Abstract

This study examined expectations for reading proficiency in the context of Common Core State Standards assessments and how DIBELS Next can inform decisions about student skills relative to these expectations. Data for cohorts of students in grades 3–5 were analyzed to determine the concurrent and predictive validity of the DIBELS Next Composite Score (DCS) relative to outcomes on the Smarter Balanced Assessment Consortium (SBAC) English Language Arts (ELA) achievement standard. We examined the strength of the association between the DCS and the SBAC ELA achievement standard. The percentage of students who met or exceeded the grade-level SBAC ELA achievement standard was determined for each DIBELS Next benchmark status category. In addition, the likelihood of meeting or exceeding the SBAC ELA achievement standard given each DCS was determined. Results are discussed with respect to implications for practice and future research.
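
To make the category-level summary concrete, the sketch below computes the percentage of students meeting or exceeding the ELA standard within each DIBELS Next benchmark status category. It is a minimal illustration under assumed data; the file name and column names ("benchmark_status", "met_standard") are hypothetical.

# Minimal sketch: percentage meeting the SBAC ELA standard by DIBELS Next benchmark
# status category; "dibels_sbac.csv" and its columns are hypothetical.
import pandas as pd

df = pd.read_csv("dibels_sbac.csv")                   # one row per student
pct_meeting = (
    df.groupby("benchmark_status")["met_standard"]    # met_standard: 1 = met/exceeded
      .mean()                                         # proportion within each category
      .mul(100)
      .round(1)
)
print(pct_meeting)   # conditional likelihood of meeting the standard, by category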

Measuring Teacher Practices to Inform Student Achievement in High Poverty Schools: a Predictive Validity Study

Abstract

The present study examined the predictive validity of a classroom observation measure, the Classroom Strategies Assessment System (CSAS)-Observer Form, as a predictor of student performance on statewide tests of mathematics and English language arts. The CSAS is a teacher practice assessment that measures evidence-based instructional and behavioral management practices (Reddy and Dudek 2014). The sample comprised 35 teachers and 829 third- through eighth-grade students from six urban high-poverty schools. Six school administrators, trained to criterion on the CSAS, conducted three classroom observations for each teacher as part of the yearly evaluation process. Zero-order correlations revealed negative relationships between CSAS Rating Scale discrepancy scores (i.e., Σ|recommended frequency rating − observed frequency rating|) and mathematics and English language arts proficiency scores. A series of two-level hierarchical generalized linear models was fitted to the data to assess whether CSAS Instructional Strategy and Behavioral Management Strategy Rating Scale Total and Composite discrepancy scores predicted statewide mathematics and English language arts proficiency scores. Results indicated that CSAS Total and Instructional and Behavior Management Composite discrepancy scores significantly predicted both mathematics and English language arts proficiency scores, suggesting that larger discrepancies between observer ratings of what teachers did and what should have been done were associated with lower proficiency scores. Results offer evidence of the utility of measuring teacher practices via the CSAS to inform student achievement in high-poverty settings. Implications for school psychological practice, teacher evaluation, and professional development are discussed.
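
The discrepancy score itself is straightforward to compute. The sketch below shows the sum-of-absolute-differences idea under hypothetical ratings; it does not reproduce the actual CSAS items or scoring rules.

# Minimal sketch of a discrepancy score: the sum of absolute differences between
# recommended-frequency and observed-frequency ratings. Values are hypothetical.
import numpy as np

recommended = np.array([3, 3, 2, 3, 1])   # observer's recommended-frequency ratings
observed    = np.array([2, 3, 1, 1, 1])   # observer's observed-frequency ratings

discrepancy_score = np.abs(recommended - observed).sum()
print(discrepancy_score)   # larger values = larger gap between observed and recommended practice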

A Comparison of Comprehension Accuracy and Rate: Repeated Readings and Listening While Reading in Second-Grade Students

Abstract

Researchers have evaluated the effects of repeated reading and listening-while-reading interventions on oral reading fluency and comprehension, and have compared the effects of these two interventions on indirect measures of comprehension. The current study was designed to extend this research by evaluating and comparing the effects of these two interventions using direct measures of reading comprehension and reading comprehension rates, or the amount of passage comprehended per unit of time spent reading. To determine whether an interaction exists between passage difficulty and intervention condition, students read two passages in each condition, one easier and one harder. Results revealed main effects on comprehension rate, but not on comprehension accuracy. These findings suggest that neither intervention enhanced comprehension accuracy, but listening while reading enhanced comprehension rates on both easier and harder passages, indicating that it may be a significantly more efficient procedure for enhancing comprehension. Implications for measurement, academic accommodations, class-wide instruction, and future research are discussed.
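
As a rough illustration of the rate metric described above, the sketch below computes comprehension accuracy and one plausible operationalization of comprehension rate (correct responses per minute of reading time). The numbers are hypothetical, and this is not necessarily the exact scoring used in the study.

# Minimal sketch, assuming hypothetical values: comprehension accuracy vs. a
# comprehension rate expressed as correct responses per minute of reading time.
questions_correct = 8        # comprehension questions answered correctly
questions_total = 10         # questions asked about the passage
reading_minutes = 4.0        # time spent reading the passage

comprehension_accuracy = questions_correct / questions_total   # 0.80
comprehension_rate = questions_correct / reading_minutes       # 2.0 correct per minute
print(comprehension_accuracy, comprehension_rate)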

A Systematic Review of Treatment Integrity Assessment from 2004 to 2014: Examining Behavioral Interventions for Students with Autism Spectrum Disorder

Abstract

Many students with autism spectrum disorder (ASD) receive behavioral interventions to improve academic and prosocial functioning and remediate current skill deficits. Sufficient treatment integrity is necessary for these interventions to be successful. However, a literature review of studies that evaluated behavioral interventions for students with ASD from 1993 to 2003 found that only 18% of studies included an operational definition of the intervention and measured treatment integrity. The purpose of the present study was to update this review and incorporate recent advances in implementation science (e.g., dimensions, implementation support). Of the 130 studies reviewed, treatment integrity was assessed in 43%. When treatment integrity data were reported, they most often captured adherence and were collected through observation or self-report. Implications for future research and practice are discussed.

Evidence-Based Assessment: Best Practices, Customary Practices, and Recommendations for Field-Based Assessment

Abstract

The purpose of the current review is to examine three frequently employed types of assessment: (a) standardized tests, (b) screening, and (c) behavioral assessment. The aims are to advocate for best practices with evidence-based assessments (EBAs) and to provide guidance for implementing EBAs in applied settings. Current best practices, customary field-based practices, and recommendations for improved practice are provided for each assessment type. Further, a framework is provided for using standardized tests, screening, and behavioral assessment within best practices to determine student intervention needs and potential for disability.

Receiver Operating Characteristic Analysis of Oral Reading Fluency Predicting Broad Reading Scores

Abstract

Oral reading fluency was investigated as a predictor of standard scores on the Woodcock-Johnson Tests of Achievement III Broad Reading Cluster (BRC). Participants included first- through third-grade students (n = 1301) from elementary schools in and around Houston, Texas. Median words-correct-per-minute scores from three oral reading passages were analyzed as predictors of BRC achievement based on national and local norms using receiver operating characteristic (ROC) analysis. ROC analysis yielded an area under the curve exceeding .88 for all three grades. Local norms yielded higher percentile ranks and cut scores at each grade, indicating that educators may consider using ROC analysis to generate and use screening parameters targeted to their student population’s specific resources and needs.
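
A minimal ROC sketch under assumed data follows: it computes the area under the curve for median words correct per minute (WCPM) as a predictor of meeting a Broad Reading criterion, then picks a candidate local cut score by maximizing Youden's J. The file name, column names, and the cut-score rule are illustrative assumptions, not the study's procedure.

# Minimal sketch, assuming a hypothetical file "orf_brc.csv" with one row per student.
import pandas as pd
from sklearn.metrics import roc_auc_score, roc_curve

df = pd.read_csv("orf_brc.csv")
y_true = df["met_brc_criterion"]     # 1 = at/above the BRC standard-score criterion
scores = df["median_wcpm"]           # median words correct per minute across three passages

auc = roc_auc_score(y_true, scores)               # area under the ROC curve
fpr, tpr, thresholds = roc_curve(y_true, scores)

best = (tpr - fpr).argmax()                       # Youden's J: maximize sensitivity - (1 - specificity)
print(f"AUC = {auc:.2f}, candidate cut score = {thresholds[best]:.0f} WCPM")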

Assessment in the Every Student Succeeds Act: Considerations for School Psychologists

Abstract

The Every Student Succeeds Act (ESSA) aims to ensure that all students are college- and career-ready by requiring all schools to implement high-quality accountability systems and services for students. The ESSA impacts assessment practices in schools by requiring staff to account for a broader range of variables related to student well-being, including both academic and non-academic variables (e.g., student mental health). Schools likely will (and should) rely on the expertise of school psychologists in designing and implementing high-quality assessment systems. Thus, school psychologists must be prepared to review a variety of assessment materials and select instruments that are best suited for specific purposes, contexts, and populations. The aim of this article is to familiarize school psychologists with the ESSA requirements for school accountability as well as critical issues in evaluating assessment systems. More specifically, this article presents considerations for evaluating the validity and reliability of assessment tools and procedures. Implications for school psychologists engaged in implementation of the ESSA are described.

Response to Intervention (RtI) and the Impact on School Psychologist Roles: Perceptions and Acceptance of Systems Change

Abstract

This study examined school psychologists’ perceptions and acceptability of a state-mandated response to intervention (RtI) model. The purpose of this study was to examine the role school psychologists play in the RtI process as well as to investigate factors influencing school psychologists’ involvement in RtI. A survey was disseminated through snowball sampling to school psychologists to identify the impact of RtI on school psychologists’ roles as well as district preparedness. A principal component analysis identified four clear survey components. Results from survey participants (n = 80) showed that most school psychologists felt prepared to implement RtI but did not believe their school would be able to implement RtI with fidelity. Additionally, school psychologists working in schools already using RtI procedures felt more comfortable and confident with RtI than those working in schools not already using RtI. The current study suggests that mandatory state-wide RtI implementation can be beneficial, but more training for teachers and administrators is needed.
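
For readers unfamiliar with the technique, the sketch below runs a principal component analysis on standardized survey items and retains four components, mirroring the four-component structure reported above. The data file and item column names are hypothetical, and this is not the authors' analysis.

# Minimal sketch: PCA of Likert-type survey items; "rti_survey.csv" and the
# "item_" column prefix are hypothetical.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

items = pd.read_csv("rti_survey.csv").filter(like="item_")   # survey item responses
X = StandardScaler().fit_transform(items)                    # standardize each item

pca = PCA(n_components=4)                                    # retain four components
component_scores = pca.fit_transform(X)
print(pca.explained_variance_ratio_)                         # variance explained per component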
