2. Case Studies

Case Study 1 

Dogwood Elementary School is working toward the goal of having all students read on grade level by the end of third grade. As part of their ongoing work toward that goal, teachers in grades K–3 administer grade-appropriate subtests of DIBELS (8th edition) three times each year. The Dogwood team chose DIBELS because it is a validated assessment that can be used for both screening and progress monitoring. Research shows that the DIBELS subtests are accurate for identifying student literacy needs (National Center for Intensive Intervention, 2021). As an added benefit, the measures can be obtained for free and are relatively easy to administer. (School leaders may also opt for a paid version of DIBELS that provides detailed data reports.) DIBELS provides benchmark scores for the beginning, middle, and end of the school year. Students who achieve these benchmark scores or higher are considered to be at minimal risk for future difficulties and are expected to make progress with core instruction, while students performing below these scores may need strategic or intensive support (University of Oregon, 2020).

After screenings are administered for fall, mid-year, and spring, teachers from each grade level meet as a Professional Learning Community (PLC) to discuss their data and make instructional decisions. The data are also reviewed by the school-level Literacy Leadership team, which includes the principal, literacy specialist, school counselor, one teacher from each grade level, a special education teacher, and an ESOL teacher. Grade-level PLCs evaluate score patterns to determine how their instruction is impacting literacy development within the grade level. They also use the data to form student groups for supplemental instruction, including remediation or extension, and to identify students who may need intensive intervention. The Literacy Leadership team examines the data more holistically, looking for patterns within and across grade levels. They use the data to make decisions about professional development needs for teachers, and to consider how time, staff, space, and materials can be used most effectively to support literacy instruction.

The second grade PLC at Dogwood is preparing to review their mid-year DIBELS data. The team consists of four second-grade teachers, plus the ESOL and special education teachers who support grades K–2. Mr. Hartmann is the team lead. He has been teaching second grade for six years. His class includes several emergent bilingual students (English Learners), and he co-teaches with the ESOL teacher, Mrs. Saeedi, for 30 minutes of the literacy block each day. Mrs. Angelo and Ms. Dahlia have been teaching second grade for three years, though Mrs. Angelo is new to Dogwood. Mr. Andrews is a new teacher. All of the teachers have students with Individualized Education Programs (IEPs) or Section 504 plans in their classes. Mrs. Jackson, the special education teacher, pushes into Mrs. Angelo’s, Ms. Dahlia’s, and Mr. Andrews’ classes for 20 minutes per day and also pulls some students for literacy intervention in the special education resource classroom during the grade-level intervention/enrichment period three times per week.

The PLC has a systematic process for evaluating the DIBELS screening data. They begin by looking at composite scores, which provide an overall picture of group and individual performance. The composite score for second graders is calculated by “weighting” each of the six subtests used at the grade level, then adding those weighted scores together (University of Oregon, 2020). After examining patterns related to composite scores, the teachers examine student performance on subtests to determine where instructional changes might be needed. They also use the subtest data to look for students who have similar instructional needs and use that information to form instructional groupings for strategic or intensive support. The benchmarks for the second-grade composite score and each of the subtests are provided in Table 2.1. The composite score that indicates a student could be in need of intensive intervention is provided in parentheses. Any student performing below this score is referred to the reading specialist if they are not already receiving special education services.
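The “weight, then sum” computation described above can be sketched as follows. The actual DIBELS 8 subtest weights are defined in the publisher’s scoring materials and are not given in this case; the weights below are placeholders used only to illustrate the shape of the calculation.

```python
# Hypothetical subtest weights -- placeholders only. The real weights are
# defined in the DIBELS 8 scoring materials and are not reproduced here.
HYPOTHETICAL_WEIGHTS = {
    "NWF-CLS": 1.0,
    "NWF-WRC": 2.0,
    "WRF": 2.0,
    "ORF-WRC": 1.5,
    "ORF-ACC": 1.0,
    "MAZE": 4.0,
}

def composite_score(subtest_scores: dict[str, float]) -> float:
    """Weight each subtest score, then sum the weighted values."""
    return sum(HYPOTHETICAL_WEIGHTS[name] * score
               for name, score in subtest_scores.items())
```

Because the composite folds six subtests into one number, the PLC’s practice of following a composite review with a subtest-level review is what localizes where an instructional change is needed.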

Table 2.1

DIBELS benchmark scores for second grade students

Subtest | Beginning of Year Benchmark | Mid-Year Benchmark | End of Year Benchmark
Nonsense Word Fluency – Correct Letter Sounds (NWF-CLS) | 82 | 102 | 116
Nonsense Word Fluency – Words Recoded Correctly (NWF-WRC) | 24 | 35 | 38
Word Reading Fluency (WRF) | 26 | 36 | 43
Oral Reading Fluency – Words Read Correctly (ORF-WRC) | 49 | 78 | 94
Oral Reading Fluency – Accuracy (ORF-A) | 92+ | 96+ | 96+
MAZE | 5.0 | 9.0 | 9.5
Composite Benchmark (Intensive Intervention) | 329 (< 315) | 389 (< 372) | 439 (< 420)

 

The following tables show the Beginning of Year (BOY) and Mid-Year (MY) composite scores for the students in each teacher’s class. The PLC will identify patterns across the grade level, as well as within each teacher’s class.

Table 2.2

Mr. Hartmann’s Class

Student Beginning of Year Composite (Benchmark = 329) Mid-Year Composite (Benchmark = 389) Special Services (ESOL, IEP, 504)
Alberto 331 395 ESOL
Beryl 351 401
Chris 342 413
Debby 340 413 504
Ernesto 334 412
Francine 317 381
Gordon 337 407
Helene 310 348 ESOL
Isaac 312 358 ESOL
Joyce 359 422
Kirk 349 410
Leslie 359 429
Milton 332 375
Nadine 333 422 ESOL
Oscar 335 410
Patty 324 381 504
Rafael 342 399
Sara 341 400
Tony 347 428
Valerie 352 394

 

Table 2.3

Mrs. Angelo’s Class

Student Beginning of Year Composite (Benchmark = 329) Mid-Year Composite (Benchmark = 389) Special Services (ESOL, IEP, 504)
Andrea 340 404
Barry 352 409
Chantal 311 335
Dexter 217 235 IEP
Erin 352 402
Ferdinand 361 428
Gabrielle 329 375
Humberto 350 392
Imelda 320 378 IEP
Jerry 343 386
Karen 360 423
Lorenzo 300 360 IEP
Melissa 361 393
Nestor 360 425
Olga 360 427
Pablo 329 379
Rebekah 345 416 504
Sebastien 357 420
Tanya 333 380
Van 325 332 IEP

 

Table 2.4

Ms. Dahlia’s Class

Student Beginning of Year Composite (Benchmark = 329) Mid-Year Composite (Benchmark = 389) Special Services (ESOL, IEP, 504)
Arlene 328 395
Bret 361 426
Cindy 358 428
Don 336 424
Emily 334 423 504
Franklin 308 377 IEP
Gert 331 415
Harold 322 391
Idalia 343 416
Jose 344 419
Katia 341 400 IEP
Lee 330 403
Margot 314 376 IEP
Nigel 345 398
Ophelia 350 425
Philippe 356 391
Rina 333 391
Sean 331 393 IEP
Tammy 337 382
Vince 353 411

 

Table 2.5

Mr. Andrews’ Class

Student Beginning of Year Composite (Benchmark = 329) Mid-Year Composite (Benchmark = 389) Special Services (ESOL, IEP, 504)
Arthur 329 380
Bertha 356 421
Cristobal 331 380 504
Dolly 338 419 504
Edouard 343 427
Fay 357 422
Gonzalo 335 401
Hanna 355 413
Isaias 348 412
Josephine 332 407
Kyle 351 422
Leah 299 321 IEP
Marco 329 390
Nana 330 407
Omar 345 400
Paulette 326 373 IEP
Rene 327 400
Sally 350 409
Teddy 336 416
Vicky 336 415

 

After examining the grade-level and whole-class data, the teachers collaboratively examine the subtest data for individual students who performed below the benchmark. They begin with the students in Mr. Hartmann’s class, reviewing the data for Francine, Helene, Isaac, Milton, and Patty. The students’ mid-year subtest scores are presented in Table 2.6. The subtest names are abbreviated in the top row. The full subtest name, benchmarks identifying those in need of strategic or intensive support, and a brief description of the skills assessed follow:

  • Nonsense Word Fluency – Words Read Correctly (NWF-WRC):

Benchmark for Strategic Support = 20;  Benchmark for Intensive Support = 15

Nonsense Word Fluency measures a student’s ability to apply letter-sound knowledge (more formally, grapheme-phoneme correspondence) when reading made-up words. The use of nonsense words in this assessment allows the evaluator to distinguish a student’s decoding skills from their automatic word recognition skills. In this subtest, students are awarded points for each whole nonsense word they decode correctly.

  • Nonsense Word Fluency – Correct Letter Sounds (NWF-CLS):

Benchmark for Strategic Support = 68;  Benchmark for Intensive Support = 54

The NWF – CLS score is derived from the same decoding activity as NWF – WRC. The NWF – CLS is a more sensitive measure of grapheme-phoneme correspondence than the NWF – WRC because it allows the evaluator to capture the number of specific sounds that a student accurately recognizes, even if the student is unable to accurately decode the whole nonsense word. Error analysis of the NWF – CLS allows teachers to identify specific phonics patterns that may require additional attention.

  • Word Reading Fluency (WRF):

Benchmark for Strategic Support = 36;  Benchmark for Intensive Support = 23

The WRF subtest presents the student with a list of words to read aloud. It measures the student’s ability to read words in isolation.

  • Oral Reading Fluency – Words Read Correctly (ORF – WRC):

Benchmark for Strategic Support = 78;  Benchmark for Intensive Support = 59

In this measure, students are given a grade-level passage and asked to read aloud. The score represents the number of words read correctly in one minute. The ORF – WRC score is a measure of a student’s ability to read words in context. Passage reading performance may differ from word-list reading performance if students use context cues or prior knowledge to predict unfamiliar words.

  • Oral Reading Fluency – Accuracy (ORF – ACC):

Benchmark for Strategic Support = 96%;  Benchmark for Intensive Support = 91%

The ORF – ACC is derived from the same passage reading assessment as the ORF – WRC. The ORF – ACC score is calculated by dividing the number of words read correctly by the total number of words attempted, then multiplying by 100 to express accuracy as a percentage.

  • MAZE:

Benchmark for Strategic Support = 9.0;  Benchmark for Intensive Support = 6.5

The MAZE assessment measures both reading fluency and comprehension. Students are given three minutes to silently read a grade-level passage with missing words, filling in each gap by selecting the best option from a field of three choices. The score awards one point for each word selected correctly, minus 0.5 points for each incorrect response.
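Two of the scores above are simple calculations, and working through them once makes the subtest tables easier to read. The sketch below implements the accuracy and MAZE formulas exactly as described; the function names are ours, for illustration.

```python
def orf_accuracy(words_correct: int, words_attempted: int) -> float:
    """ORF accuracy: words read correctly divided by words attempted,
    expressed as a percentage."""
    return 100 * words_correct / words_attempted

def maze_score(correct: int, incorrect: int) -> float:
    """MAZE: one point per correct selection, minus 0.5 per incorrect."""
    return correct - 0.5 * incorrect

# A student who attempts 80 words and reads 76 correctly scores 95% accuracy.
print(orf_accuracy(76, 80))   # 95.0
# Ten correct MAZE choices with two errors yield 9.0, the mid-year benchmark.
print(maze_score(10, 2))      # 9.0
```

Note that the MAZE penalty for wrong answers means two students with the same number of correct selections can land on different sides of a benchmark.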

Table 2.6

Mid-Year Subtest Data for Mr. Hartmann’s Students Who Performed Below Benchmark

Name NWF – WRC NWF – CLS WRF ORF – WRC ORF – ACC MAZE
Francine 14 53 30 76 92 7.0
Helene 4 30 10 25 65 2.0
Isaac 8 40 15 39 72 4.0
Milton 14 54 23 61 92 7.0
Patty 13 50 35 77 95 8.5

 

Supporting Helene

In further reviewing Mr. Hartmann’s class data, the grade-level PLC determines that several students need additional Tier 2 support. One of these students is Helene. Since Helene is an emergent bilingual learner (English Learner), the team knows they will need to implement an intervention that has proven effectiveness with this population. They decide to implement 60 minutes of computer-based literacy intervention each week using a program that has research supporting its use for the development of fluency skills with emergent bilingual students. The team decides to use the DIBELS Oral Reading Fluency measures for weekly progress monitoring and to reconvene after four weeks of intervention to determine whether the program is helping Helene make sufficient gains to close the gap in her reading development. They establish a year-end goal of 77 words read correctly on the measure with 91% accuracy, as these scores will represent a large gain and move Helene from the Intensive Support range to the Strategic Support range on the DIBELS progress chart. The progress monitoring data for Helene following implementation of the computer-based intervention are found in Table 2.7.

Table 2.7

Helene’s Weekly Progress Monitoring Data

Assessment Date ORF – WRC ORF – ACC
Mid-Year 25 65
Intervention 1 – Week 1 26 66
Intervention 1 – Week 2 28 68
Intervention 1 – Week 3 27 68
Intervention 1 – Week 4 29 71

 

After four weeks of intervention, the team reconvenes to review Helene’s data. They graph her weekly ORF scores, create a goal line, and compare Helene’s weekly progress to the goal line. The team notices that Helene is making some progress, but not enough to meet the goal they have established for her by the end of the year. With this information, the team decides to intensify Helene’s supplemental intervention. In addition to the 60 minutes of computer-based intervention, which occurs over two 30-minute time slots during the grade-level intervention/enrichment period, Helene will also now participate in a phonics intervention group that is co-taught by the literacy specialist and ESOL teacher for three 30-minute sessions each week. As with the first round of intervention, the team will continue to conduct weekly progress monitoring using DIBELS Oral Reading Fluency.
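The goal-line comparison the team performs can be sketched numerically. The baseline (25 words correct at mid-year) and the year-end goal (77 words correct) come from the case; the 18-week window between the mid-year and end-of-year screenings is our assumption for illustration.

```python
# Goal-line sketch using the case's numbers: baseline ORF-WRC of 25 at
# mid-year and a year-end goal of 77. The 18-week window between the
# mid-year and end-of-year screenings is an assumed value.
BASELINE, GOAL, WEEKS_TO_GOAL = 25, 77, 18

def goal_line(week: int) -> float:
    """Expected words-correct score in a given week on the goal line."""
    slope = (GOAL - BASELINE) / WEEKS_TO_GOAL   # about 2.9 words per week
    return BASELINE + slope * week

# Helene's observed scores for the first four weeks of intervention.
observed = {1: 26, 2: 28, 3: 27, 4: 29}
for week, score in observed.items():
    expected = goal_line(week)
    status = "on track" if score >= expected else "below the goal line"
    print(f"Week {week}: observed {score}, expected {expected:.1f} ({status})")
```

Under this assumed timeline, every observed point falls below the goal line, which matches the team’s conclusion that Helene’s progress, while real, is not on pace to reach the year-end goal.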

Table 2.8

Helene’s Weekly Progress Monitoring Data with Intervention 2

Assessment Date ORF – WRC ORF – ACC
Mid-Year 25 65
Intervention 1 – Week 1 26 66
Intervention 1 – Week 2 28 68
Intervention 1 – Week 3 27 68
Intervention 1 – Week 4 29 71
Intervention 2 – Week 1 31 76
Intervention 2 – Week 2 44 80
Intervention 2 – Week 3 53 84
Intervention 2 – Week 4 56 85

 

The team reconvenes to review Helene’s progress with the intensified intervention. Her ORF scores show clear, substantial growth. Using graphing and Curriculum-Based Measurement (CBM) decision-making rules, the team will determine whether the intervention should be continued, modified, or discontinued.
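One widely taught version of these decision rules is a “four-point rule”: compare the four most recent data points to the goal line. The sketch below is an illustration of that general rule, not the team’s actual protocol; specific rules and thresholds vary across districts and CBM materials.

```python
def cbm_decision(recent_scores, goal_line_values):
    """Four-point CBM rule sketch: compare the four most recent scores to
    the goal-line values for the same weeks and recommend a next step."""
    above = [score > goal for score, goal in zip(recent_scores, goal_line_values)]
    if all(above):
        return "raise the goal"
    if not any(above):
        return "change or intensify the intervention"
    return "continue the current intervention"

# Four consecutive points above the goal line suggest raising the goal.
print(cbm_decision([44, 53, 56, 58], [42, 45, 48, 51]))  # raise the goal
```

This is the same logic the team applied after the first four weeks: four points below the goal line prompted them to intensify Helene’s intervention rather than continue it unchanged.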

Resources

MTSS: Assessment Practices Within a Multi-Tiered System of Supports (https://ceedar.education.ufl.edu/wp-content/uploads/2020/12/Assessment-Practices-Within-a-Multi-Tiered-System-of-Supports-2.pdf)

EBPs: What Works Clearinghouse (https://ies.ed.gov/ncee/wwc)

HLPs: High Leverage Practices for Students with Disabilities (https://highleveragepractices.org/)

Data-Based Decision Making: IRIS Module – Progress Monitoring: Reading (vanderbilt.edu)

License

A Case Study Guide to Special Education Copyright © by Jennifer Walker; Melissa C. Jenkins; and Danielle Smith. All Rights Reserved.
