RTI Assessment

 

Effective, Practical Assessment

 

  

 

The following is an excerpt from the book "Help, I Can't Read!" (available on this website).

One of the most intimidating aspects of RTI is the emphasis on data analysis driving our instructional decisions. Administrators are under pressure at the local, state and federal levels to prove that their students have been assessed, sorted, and given appropriate research-based interventions. They need statistics to satisfy the higher-ups. That’s a reality of today’s educational landscape, so let’s not shoot the messengers. (They’re not the enemy.) Instead, let’s find a reasonable assessment plan that works best for everyone involved. To be reasonable, it must:

  1. Analyze current, meaningful data about individual students.
  2. Use that data to design an educational plan that helps each student reach his or her highest potential.
  3. AND, not overburden the teachers. (We're teachers, not testers!)
  4. Last, but not least, work. Struggling readers must improve, or the plan isn't worth the time or trouble.

 

Too much testing creates stressed-out teachers, but even more important, it stresses out the students to the point that they don't try their hardest. Then they end up making pretty designs in all the little bubbles on the answer sheets of important standardized assessments—anything just to be done!

But not enough testing leads to ineffective interventions.

 

Knowing the Score

The following are the minimum data teachers should know for every student:      

  • A standardized fluency score
  • A standardized comprehension score
  • A phonics score (grade-level appropriate)
  • A ZPD score (Zone of Proximal Development* showing reading level)
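
If you track these scores electronically, one simple record per student is enough. Below is a minimal sketch of such a record in Python; the field names and score formats are my own illustration, not something the book prescribes.

  from dataclasses import dataclass

  @dataclass
  class StudentReadingRecord:
      """One student's minimum assessment data (illustrative field names)."""
      name: str
      fluency_score: int        # standardized oral reading fluency, e.g., words correct per minute
      comprehension_score: int  # standardized comprehension score, e.g., a percentile
      phonics_score: int        # grade-level-appropriate phonics screening result
      zpd_reading_level: str    # reading level from whichever leveling system you use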

 

Fluency

Fluency is not the goal of reading—comprehension is. Even the developers of DIBELS (Dynamic Indicators of Basic Early Literacy Skills) stated that very clearly at an advanced training I attended. So, why is there so much emphasis on oral reading fluency (ORF)? The answer is simple. Fluency is a by-product of good reading skills—it's not the goal; it's a symptom that the goal has been reached. Current research shows it is a fairly accurate measure of reading competence.

So, why not just use an ORF score? Unfortunately, as with any other good thing, excessive ORF-ing is detrimental. When ORF scores are over-emphasized, students believe that good reading must be "fast." This produces a phenomenon sometimes referred to as "word-calling." Some students read very quickly, but when they finish, they have little comprehension of what they've read. They are word-callers. Because of the current trend to use ORF scores as often as every week or two as a progress-monitoring tool, word-calling is becoming more common, and it is a difficult problem to solve.

To avoid pushing our students into word-calling, we must take a more balanced approach. Fluency practice should usually have a comprehension element attached. It's a simple thing to do—ask the students to tell you what they just read. This puts the emphasis on understanding rather than speed. Can they give you the main details? If not, you'd better rethink your assessment plan and put more emphasis on comprehension.

Comprehension

In order to provide a complete picture of a student's performance, we need to balance the ORF score with a comprehension score. DIBELS, for example, attempts to do this with a retelling score, but its developers have not been able to assign benchmarks that are statistically accurate. After reading each of the ORF test's stories, the students are instructed to retell everything they can remember from the short passage they just read, while the teacher tries to count the words in the response.

Many times teachers skip this part of the test because there’s no objective way to compare scores with other children. Some of the shortest answers show the deepest understanding, while some of the longer ones show no comprehension at all—just word babble.

I recommend teachers ask their students to retell the passages whenever testing for ORF, and then take anecdotal notes on each student's ability. When a student retells the passage, the teacher gets a snapshot of that student's thinking process—does he or she get it, or not? And the teacher can hear what kinds of errors the student is making. This is very valuable, but it is not a statistically accurate measure of a student's performance against a benchmark.

School districts administer summative* reading tests at the end of the school year, which give an accurate, standardized comprehension score. Use it to balance the ORF. In my opinion, the previous spring's score is more reliable than the current fall score. If we don't put as much importance and effort into establishing the same testing conditions in the fall as we do in the spring, the fall score isn't going to be valid. Much of the time it's lower than the spring score from the previous year, so unless there's a good reason, I don't see the need to give the fall test.

However, there are a few good reasons to give a fall standardized reading test:

  • Students don’t have a spring score—they’re new to your school.
  • Something unusual happened over the summer that could affect the score either for the better or for the worse.
  • Pressure from the government to show Adequate Yearly Progress* forces you to give a fall test (not a good reason, but a real one).

Why subject the students to extra testing? They do better on the spring test if they don't have "high-stakes" test fatigue.

The last teaching assignment I had was as a reading specialist for what we called our "Double Dose" classes, where I met with small groups of "intensive" readers from second through sixth grades. I taught reading skills and strategies to those students at an extra reading time that did not conflict with core instruction in reading or math. When I set up my Double Dose classes, I devised a formula that combined the comprehension score and the ORF score, giving the comprehension score the most weight. I couldn't meet with all the students who needed a double dose—there were too many—so I had to sort out the kids who needed my class the most. If students were low on both fluency and comprehension, they would come to Double Dose. But there were always struggling readers whose fluency OR comprehension was at grade level or above. If I had automatically put them in my Double Dose class, that would have been an inappropriate placement…
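
For readers who like to see the arithmetic, here is a minimal sketch of how such a weighted combination might work. The 70/30 weighting and the percentile inputs are my own assumptions for illustration; the book does not give the exact formula.

  # A hypothetical weighted screening score. The 70/30 split is an
  # assumption for illustration, not the author's actual weights.
  def double_dose_priority(comprehension_pct, fluency_pct, comp_weight=0.7):
      """Combine two percentile scores, weighting comprehension most."""
      return comprehension_pct * comp_weight + fluency_pct * (1.0 - comp_weight)

  # Students with the LOWEST combined score need the class the most.
  students = [
      ("Ana", 25, 40),   # low comprehension, moderate fluency
      ("Ben", 55, 15),   # very low fluency, comprehension near grade level
      ("Cody", 18, 22),  # low on both: a clear Double Dose candidate
  ]
  students.sort(key=lambda s: double_dose_priority(s[1], s[2]))
  for name, comp, flu in students:
      print(f"{name}: combined score {double_dose_priority(comp, flu):.1f}")

Notice how "Ben," whose comprehension is near grade level despite very low fluency, sorts to the bottom of the priority list. That is exactly the kind of student the weighting keeps out of an inappropriate placement.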

Grade Level Phonics Score

When I originally wrote this chapter, I didn't include the phonics score as one of the essential scores. But when a sixth-grade teacher ran her class through the Assessment Sieve and administered the Informal Decoding Inventory (IDI) to her whole class, she received some very valuable information. There were several readers who passed the district comprehension and fluency tests (barely), but they were unable to pass the phonics screening even at the pre-reading level. These were all students who should have been able to do much better on the state reading test, but their lack of phonics skills had caught up with them. They were unable to decode the multisyllabic words they were now encountering in their reading—and they were falling behind.

Now, I strongly recommend giving the IDI to your whole class. It's included in this book and is easily administered in a whole-group setting. (It also sets a baseline for the accompanying phonics progress-monitoring tool you can use for the rest of the year.)

 

Zone of Proximal Development

Another required score is the Zone of Proximal Development (ZPD). The concept was developed by Lev Vygotsky, whose work became widely available in English in the 1970s. Basically, he identified a zone where students were most likely to improve in reading ability. He found that if students missed or paused over more than one in ten of the words on the page, the material was too difficult. Students spent so much effort trying to decode the words that they'd forget what they read. (We'll discuss this more in Chapter 13.) On the other side of the zone, if they were reading almost all the words correctly, the material was too easy and wouldn't encourage growth.

  Vygotsky's "Zone of Proximal Development"1

Level
Number of words that
cause the student to stumble when reading.
Setting where used
Independent Level

 

The student misses no more than 1 out of 20 words

(95% to 100% accuracy).

Any reading material the student reads alone needs to be at independent level.

Instruction Level

(Success Zone)

Student misses no more than

1 out of 10 words.

(90% to 94% accuracy)

Students can work at this level with support from the teacher or another student who reads at a higher level.

Frustration Level

Student misses more than

1 out of 10 words

(89% or less accuracy)

Student should never be required to read at this level without assistance.
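
Because these bands are just accuracy percentages, the level check is quick arithmetic. Here is a minimal sketch of the calculation; the function name and sample numbers are mine, but the cutoffs come straight from the table above.

  # Classify a reading sample into the three levels from the table.
  # Accuracy = words read correctly / total words attempted.
  def zpd_level(total_words, missed_words):
      accuracy = 100.0 * (total_words - missed_words) / total_words
      if accuracy >= 95:    # misses no more than 1 out of 20 words
          return "Independent Level"
      elif accuracy >= 90:  # misses no more than 1 out of 10 words
          return "Instruction Level (Success Zone)"
      else:                 # misses more than 1 out of 10 words
          return "Frustration Level"

  # Example: 7 misses in a 100-word passage is 93% accuracy,
  # which falls in the Success Zone.
  print(zpd_level(100, 7))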

 

Teachers need to know the approximate reading level of each of their students, and this applies from kindergarten through college. If we really expect to teach people anything, we need to give them materials they have the ability to read…period…end of subject. (In Chapter 7, I will go over several simple ways you can make effective accommodations for struggling readers so they will have access to your grade-level curriculum.)

There are several systems for leveling reading material, including:

  • Accelerated Reader (Renaissance Learning)
  • DRA (Developmental Reading Assessment)
  • Fountas and Pinnell
  • Lexile

 It doesn’t matter which one we use, as long as we use one of them to provide materials that our students can read...

(Step-by-step instructions for assessing your class are found in "Help, I Can't Read!")

* Word found in "Glossary."

 



1 Vygotsky, L. "Interaction between Learning and Development." Mind in Society. Trans. M. Cole. Cambridge, MA: Harvard University Press, 1978. 79-91.

 

 
