Benefits of Repeated Reading Intervention for Low-Achieving Fourth- and Fifth-Grade Students

By Patricia F. Vadasy and Elizabeth A. Sanders

Many students have difficulty achieving reading fluency, and nearly half of fourth graders are not fluent readers in grade-level texts. Intensive and focused reading practice is recommended to help close the gap between students with poor fluency and their average-reading peers. In this study, the Quick Reads fluency program was used as a supplemental fluency intervention for fourth and fifth graders with below-grade-level reading skills. Quick Reads prescribes a repeated reading procedure with short nonfiction texts written on grade-appropriate science and social science topics. Text characteristics are designed to promote word recognition skills. Students were randomly assigned either to Quick Reads instruction, implemented by trained paraeducator tutors with pairs of students for 30 minutes per day, 4 days per week, for 18 weeks, or to a classroom control condition. At posttest, Quick Reads students significantly outperformed classroom controls in vocabulary, word comprehension, and passage comprehension. Fluency rates for both treatment and control groups remained below grade level at posttest.

Keywords: fluency; repeated reading; paraeducator tutors

Skilled reading appears deceptively effortless but is coming to be better appreciated as the balanced coordination and timing of subskills involved in word recognition and comprehension (Perfetti, 1992; Stanovich, 1980). The effortless and fluent reading of text, or reading fluency, is often summarized as “the ability to read connected text rapidly, smoothly, effortlessly, and automatically with little conscious attention to the mechanics of reading such as decoding” (Meyer & Felton, 1999, p. 284). This definition guides the most common procedure for describing fluency in terms of rate and accuracy on an oral reading measure, such as a curriculum-based assessment. More comprehensive fluency assessment may include standardized procedures that capture prosody and comprehension (Pikulski & Chard, 2005). This definition also guides the most widely used instructional approaches to improve reading fluency, including repeated reading (Dahl, 1979; Samuels, 1979).

Our understanding of reading fluency continues to be deepened by research that underscores the developmental nature of fluency (Bowers, 1995; Kame’enui, Simmons, Good, & Harn, 2001; Manis & Freedman, 2001; Pikulski, 2006), the underlying lexical and sublexical processes that support fluency (Ehri, 1995; Wolf & Bowers, 1999), and the coordination of both cognitive and attentional resources (Berninger, Abbott, Billingsley, & Nagy, 2001; Breznitz, 2001) that supports this illusion of effortless fluent reading. As it is most commonly described (National Reading Panel, 2000), fluency has three components: accuracy, rate, and prosody, the latter of which is more difficult to objectively measure and quantify and has been less well studied than accuracy and rate.

The accuracy component of fluency most often refers to decoding accuracy (Meyer & Felton, 1999). Research has informed our understanding of the coordination of subprocesses that underlie decoding accuracy (Berninger, 1994; Berninger & Abbott, 1994; Berninger et al., 2001; Breznitz, 2001). This coordination includes making connections at the one- and two-letter spelling-pattern level (the orthographic layer) as well as connections at the morphological level (affixes and inflections). Torgesen, Rashotte, and Alexander (2001) reviewed variables most correlated with text reading rate across five intervention cohorts, ranging from second to seventh grade. Across studies, word reading accuracy and efficiency consistently explained the most variance in reading rate. As others (Berninger et al., 2001; Bowers & Wolf, 1993; Torgesen, Alexander, et al., 2001; Wolf, 2001) have shown, fluency is influenced by the level of word reading accuracy, but this process has both lexical and sublexical layers that become more well coordinated over time and are influenced by attentional resources.

Rate of processing and retrieving lexical information also influences reading fluency, and deficits in rapid automatized naming, or the very complex processes involved in perceiving, representing, and retrieving verbal labels for visual stimuli (Wolf, 1991; Wolf & Bowers, 1999), explain variance in reading skills from kindergarten through adult age (Meyer, Wood, Hart, & Felton, 1998; Wolf et al., 2002). Naming speed is strongly related to word reading fluency (Bowers, Golden, Kennedy, & Young, 1994; Wolf & Bowers, 1999) and is predictive of reading performance across both regular and less regular orthographies (Breznitz, 2001; Ho, Chan, Tsang, & Lee, 2002; Wimmer & Hummer, 1990). Naming speed may more strongly predict word reading accuracy for older than for younger students (Manis & Freedman, 2001). Students’ general cognitive speed of processing exerts a powerful influence on reading fluency. In the regression analyses Torgesen, Rashotte, et al. (2001) conducted on five cohorts, rapid naming speed for letters consistently explained additional variance in reading rate beyond that accounted for by the word reading accuracy and efficiency measures.

Remedial interventions have been designed to address limited fluency component skills, with a focus on making the word reading process effortless and automatic (LaBerge & Samuels, 1974) so that resources can be allocated to comprehension. Repeated reading is the most widely applied and studied remedial method to develop fluency (Dahl, 1974, 1979; Samuels, 1979, 1988). Repeated reading was found by the National Reading Panel (2000) to be the only well-supported approach for improving reading fluency (see reviews by Chard, Vaughn, & Tyler, 2002; Kuhn & Stahl, 2003; Meyer & Felton, 1999; Wolf & Katzir-Cohen, 2001). Quite simply, the repeated reading procedure involves reading a short passage aloud several times. The instructor often models and provides scaffolding and corrections. Students may read a passage a prescribed number of times, until reaching a certain reading rate goal (in terms of words per minute; Samuels, 1979), or until demonstrating a set number of rate improvements (Weinstein & Cooke, 1992). Fluency norms (Hasbrouck & Tindal, 2006) and benchmarks (Good & Kaminski, 2002) have become available to identify students who are not developing fluency and to monitor their progress. Students placed in a repeated reading intervention usually read from short passages that are selected to be at a certain level of difficulty or that feature repeated words across passages (Rashotte & Torgesen, 1985). Kuhn and Stahl (2003) concluded that passages read at a more difficult level contributed to larger reading gains than easier passages. Repeated reading may be assisted or unassisted. In their review, Chard et al. (2002) found that reading with a model was more effective for low-skilled students than reading without a model, and having the teacher model reading seemed to promote comprehension. These researchers found benefits for gradually increasing the difficulty of the texts and for providing feedback and correction for word reading errors.
Most repeated reading interventions have been relatively short: Those reviewed by Wolf and Katzir-Cohen (2001) ranged from 1 to 15 days, and the median length of interventions reviewed by Meyer and Felton (1999) was four sessions. The number of rereadings varies from seven times (O’Shea, Sindelar, & O’Shea, 1985) to three to four times, which appears most common. Most studies have not addressed the level of training needed to provide repeated reading practice, and although most interventions have been provided by teachers, repeated reading has also been implemented by teaching assistants (Mercer, Campbell, Miller, Mercer, & Lane, 2000), by peer tutors (Mathes & Fuchs, 1993), or in a computer-based instructional situation (Carver & Huffman, 1981). As both the Meyer and Felton (1999) and Wolf and Katzir-Cohen (2001) reviews concluded, the effects of repeated reading on comprehension are unresolved. Other unresolved questions about repeated reading include the subtypes of students who most benefit from fluency interventions, the particular method and intensity of repeated reading, and the impact of these interventions on fluency components and subcomponents (e.g., word identification and decoding fluency).

The Quick Reads Program

In this article, we report on the 1st year of a 3-year evaluation of a published fluency program, Quick Reads (Hiebert, 2003), designed for use with students in Grades 2 through 6. In this study, the program was implemented with fourth and fifth graders. Schools often seek effective reading interventions for the nearly 4 in 10 fourth graders who read below basic level (U.S. Department of Education, 2004). The well-known reciprocal relationship between fluency and comprehension (Pinnell et al., 1995; Shinn, Good, Knutson, Tilly, & Collins, 1992; Tan & Nicholson, 1997) further supports providing students with reading practice to develop the skilled word reading and fluency skills that allow students to allocate attention to comprehension. In Kuhn and Stahl’s (2003) review of fluency interventions, gains in fluency were associated with gains in comprehension, although less so when more general standardized comprehension measures were used that draw from background knowledge or inferencing skills (see Francis et al., 2006).

Preliminary Research on Quick Reads

Hiebert (2005) conducted a quasi-experimental study of the fluency intervention for second graders. Three schools were randomly assigned to one of three groups: a literature-based intervention, using the district’s literature-based basal reading program; a content intervention, using Quick Reads texts; and a control classroom, using its regular basal reading instruction. The first two fluency groups used the procedures outlined for fluency-oriented reading instruction (FORI; Stahl, Heubach, & Cramond, 1997): The teacher modeled fluent reading of basal text, the students worked with a partner to reread text, the teacher led choral or echo reading, and the teacher conducted comprehension extension activities. The fluency intervention extended across a 20-week period. Students were assessed before and after the intervention on two passages, which were scored for fluency, phrasing, and comprehension. The Quick Reads fluency group significantly outperformed the control group on fluency at posttest despite having less opportunity to engage in repeated reading (as revealed by four indices that measured opportunity for reading in the seven intervention classrooms that were required to follow different school policies on reading time allocation). The present study extends the work of Hiebert with a more rigorous research design and a focus on students using the upper grade levels of the program.

The purpose of our research was to determine the effectiveness of the Quick Reads program used as a supplemental remedial fluency intervention for low-skilled fourth and fifth graders. We specifically considered the use of Quick Reads implemented by paraeducator tutors with student dyads in a pull-out tutoring intervention. The simplicity of the repeated reading procedure and the engaging nature of the Quick Reads passages make the program well suited for use by nonteachers with minimal background and training in reading and in student management.

Method

Students

Referral and screening. In the fall of the academic year, 40 fourth- and fifth-grade teachers in 12 public elementary schools in a large northwestern city were asked to refer students who (a) had never been retained, (b) had low rates of reading fluency or comprehension, and (c) would particularly benefit from a fluency-oriented intervention (i.e., teachers were asked to recommend students with adequate word reading skills who would benefit most from fluency instruction and who could be pulled out for instruction). Once active parent consents were obtained, referred students were screened for eligibility. Students were considered eligible for participation if they demonstrated at-risk performance on the average of three grade-level reading passages from the Oral Reading Fluency (ORF) subtest of the Dynamic Indicators of Basic Early Literacy Skills (DIBELS; Good & Kaminski, 2002); fourth-grade at-risk performance was defined as scoring below 93 words correct per minute (WCPM) on fourth-grade passages, and fifth-grade at-risk performance was defined as scoring below 104 WCPM on fifth-grade passages. Of those screened, one fifth grader was recommended for an alternative intervention, as the student was able to read only 11 WCPM (far lower than the bottom 10th percentile performance of 61 WCPM for fifth graders; see Hasbrouck & Tindal, 2006). Students eligible for participation were administered the full pretest battery.
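The screening rule above can be expressed compactly. The following sketch uses the at-risk WCPM cutoffs reported in the text; the function name and data layout are our own illustration, not part of the study's materials.

```python
# Sketch of the DIBELS ORF eligibility screen described above.
# Cutoffs (words correct per minute) are the at-risk thresholds from the text;
# everything else here is illustrative.

AT_RISK_CUTOFF = {4: 93, 5: 104}  # grade -> WCPM cutoff (scores below are at risk)

def is_eligible(grade, passage_wcpm):
    """Average WCPM across three grade-level passages; at risk if below cutoff."""
    mean_wcpm = sum(passage_wcpm) / len(passage_wcpm)
    return mean_wcpm < AT_RISK_CUTOFF[grade]

# A fourth grader averaging (85 + 90 + 88) / 3 = 87.7 WCPM is eligible:
print(is_eligible(4, [85, 90, 88]))     # True
print(is_eligible(5, [110, 105, 108]))  # False
```

Note that eligibility is defined on the mean of the three passages, so one strong passage does not disqualify a student whose average remains below the cutoff.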

Group assignment. Group assignment was a two-stage process. First, eligible students were randomly assigned within schools to dyads (pairs of students). Next, dyads were randomly assigned to one of two conditions: treatment (supplemental Quick Reads tutoring) or control (no tutoring; classroom instruction only). A few schools had uneven numbers of eligible students, and three eligible students were not assigned to either condition (through a random selection process) and were subsequently removed from study participation. Although it would have been preferable to randomly assign students within classrooms to dyads and conditions, there were too few eligible students within each classroom. Thus, dyads were cross-classroom, cross-grade pairs, and controls were typically dyads in name only: control pairs seldom had the same teacher.

Attrition. After group assignment, the sample comprised 70 students in each condition. By the end of the year, 16 (23%) treatment and 5 (7%) control students were lost to attrition. The treatment group attrition was greater due to study design: If 1 treatment student left, the student’s dyad partner was also removed from the study (although in three treatment dyads, both students moved), because tutoring instruction was designed for pairs. Controls, however, were dyads in name only, and thus the attrition of 1 control student did not affect another student. Treatment group attrition included 6 students who moved from their schools, 3 who were removed due to scheduling conflicts, 1 who had severe behavior problems during tutoring, 1 whose parent requested study removal, and 5 whose dyad was no longer intact. All control group attrition was due to students’ moving from their schools. Final sample sizes were thus 54 treatment students (27 dyads) and 65 control students. As reported in Table 1, there were no significant differences between groups on grade or status variable frequencies (all ps > .05).
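The attrition figures reported above follow directly from the assignment counts; the short sketch below reproduces the percentages and final sample sizes from the numbers given in the text.

```python
# Reproducing the attrition arithmetic reported in the text.
n_assigned = 70            # students per condition after group assignment
treat_lost, control_lost = 16, 5

print(round(100 * treat_lost / n_assigned))    # 23 (% treatment attrition)
print(round(100 * control_lost / n_assigned))  # 7  (% control attrition)
print(n_assigned - treat_lost)                 # 54 final treatment students
print(n_assigned - control_lost)               # 65 final control students
```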

Tutors

Twenty tutors were recruited from their school communities. Tutors’ educational levels, general tutoring experience, and experience working with fourth and fifth graders varied. Two tutors were employees of the district and served regularly as instructional assistants (IAs), and 3 were hourly employees. Prior to the study, tutors had from 0 to 11 years of tutoring experience and averaged 15 years of education: Two had master’s degrees, 11 had bachelor’s degrees (3 with teaching certificates), 2 had associate’s degrees, and 5 had attended some college. The average educational attainment of tutors in this study matches the paraeducator competency requirements under the No Child Left Behind Act of 2001.

Intervention

The Quick Reads program includes short, nonfiction passages written for Grade Levels 2 through 6. Each grade level includes nine science topics and nine social studies topics chosen on the basis of national and state grade-level curriculum standards for science and social science. Each topic is developed in five reading passages. For example, for Level D (fourth grade), science topics include “The Human Body,” “Volcanoes,” and “Wind and Solar Energy.” Level D social studies topics include “The Constitution of the United States,” “Natural Resources and the Economy,” and “The History of Sports.” An important feature of the Quick Reads program is its attention to text features that are expected to influence fluency rate. Texts that include a large number of unknown and difficult words can be discouraging for the struggling reader and difficult to process and comprehend. Therefore, words chosen for inclusion in Quick Reads texts reflect expected grade-level decodability. The number of unique or rare words in Quick Reads texts is minimized, as students are unlikely to acquire these words as sight words from single exposures and are unlikely to encounter them again in classroom reading texts. The vocabulary features many high-frequency words: Ninety-eight percent of the words used in the texts are high-frequency words or words that reflect grade-level phonics and syllable patterns. Texts emphasize words that students most need to learn to recognize automatically, and words are repeated to build opportunities for sight word learning. In previous studies of repeated reading (Faulkner & Levy, 1994; Rashotte & Torgesen, 1985), students’ reading fluency increased when they read texts that had a high overlap of words. Quick Reads text characteristics are hypothesized to develop underlying lexical accuracy and automaticity, prerequisite skills often overlooked in traditional conceptions of repeated reading as a remedial intervention.

Quick Reads is designed for classroom or small-group use either as part of the regular reading program or as a supplemental intervention. The number of passages students cover daily and the length of the intervention depend on how it is used. If a teacher follows the recommended classroom instructional routine, students use Quick Reads for 15 minutes a day for one semester, or 18 weeks. Each passage is read three times:

First read: The teacher activates background knowledge about the topic and asks students to find two words that are challenging. Students read the passage aloud or silently and then write notes or phrases of key ideas.

Second read: The teacher reads aloud with the students, setting a model for fluent reading, with the whole passage read aloud at the target rate of about 1 minute. The teacher asks students to “tell the one thing the author wants you to remember.”

Third read: The teacher tells the students that their goal is to read as much of the passage as they can read in 1 minute. The students then read silently for 1 minute, and when the time is up, each student records the number of words read. The teacher and student review the comprehension questions together.

The teacher’s manual suggests many classroom extension activities to develop vocabulary skills and comprehension, including suggestions for supporting English language learner students. For example, activities suggested to build prior knowledge include discussing the student’s experience with the topic and creating a word web related to the topic. Activities suggested to build vocabulary include listing difficult words on the board and asking students to generate definitions and to find the vocabulary words in their readings. Activities suggested to identify key ideas include asking students to retell what they recall from the passage. Extension activities are recommended for each of the three readings, and additional reading is suggested for each topic.

Treatment students received supplemental Quick Reads tutoring in dyads for 30 minutes per day, 4 days per week, for 20 weeks (November to June). Students assigned to the control group received regular classroom instruction while treatment students received tutoring. Students were pulled out for tutoring on the basis of teachers’ scheduling preferences. Classroom activities missed by treatment students during tutoring sessions (as reported by their classroom teachers) included reading or language arts, social studies, science, music, library, and physical education. More than half (55%) of treatment students were pulled out of literacy instruction at least some of the time during the intervention.

Tutoring sessions. The Quick Reads instruction for this evaluation was scripted to ensure that all tutors used the same procedures (as noted above, the teacher manual is written for classroom or small-group instruction and assumes that the teacher chooses the enrichment activities, coordinated with student need and other reading instruction). Each session began and ended with vocabulary instruction that had been designed by the research staff to match the vocabulary introduced in the Quick Reads passages.

Vocabulary extension activity. To incorporate the type of teacher support that would be provided if the program were used by the classroom teacher, we scripted a layer of brief vocabulary instruction, one of the extension activities suggested in the program. For each passage, one or two challenging words were identified. Criteria for selecting words for vocabulary instruction were that (a) the word was difficult to decode (e.g., symbol, engineer), (b) the word was important for understanding the topic being developed in the passages, and (c) the word was repeated in the passage at least twice and was also repeated across passages. We selected high-frequency, Tier 2 words (Beck & McKeown, 1985) that would be useful in middle school reading and that might not be familiar to students who are English language learners. We identified about 60 words per level and wrote clear, accessible definitions for the tutors to use when introducing and reviewing the words. Tutors were provided with suggestions for several vocabulary review activities (e.g., asking the student to provide examples or related words or to discriminate between two words) based on principles for effective vocabulary instruction (Baumann & Kame’enui, 2004; Beck, McKeown, & Kucan, 2002; Biemiller, 2001). Penno, Wilkinson, and Moore (2002) reported that students learn vocabulary from listening to stories being reread, with enhanced vocabulary acquisition when the teacher provided explanations of the words as they occurred in context.

Each tutoring session had seven steps, as follows.

1. New vocabulary: Tutor introduces new vocabulary word.

2. First read: Tutor introduces the passage and its main idea. Students take turns reading the passage.

3. Second and third reads: Tutor and students read the passage aloud together twice, with the tutor modeling smooth and fluent reading.

4. Fourth read: Each student completes a 1-minute timed reading.

5. Comprehension: Tutor and students read aloud the two comprehension questions that accompany each passage.

6. Vocabulary review: Tutor reviews vocabulary word from previous passage.

7. Read new passage: Students complete Steps 1 through 5 for a second passage (such that students read a minimum of two passages per session).

Quick Reads placement and coverage. Quick Reads passages are organized by grade levels (A = first grade, B = second grade, C = third grade, D = fourth grade, and E = fifth grade). We placed dyads into levels based on the grade level for which their averaged pretest reading fluency most closely matched the 50th percentile (Hasbrouck & Tindal, 2006). Our sample, after attrition, included 1 dyad placed into Level B, 7 dyads placed into Level C, 18 dyads placed into Level D, and 1 dyad placed into Level E. Each Quick Reads level has three books, and each book contains six content areas with 5 passages per content area, for a total of 90 passages per level.
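The placement rule can be sketched as a nearest-norm match. The norm values below are illustrative placeholders, not the actual Hasbrouck and Tindal (2006) table, and the function name is our own.

```python
# Sketch of the dyad placement rule: choose the Quick Reads level whose
# grade-level 50th-percentile fluency norm is closest to the dyad's averaged
# pretest WCPM. Norm values here are hypothetical placeholders.

NORM_50TH = {"A": 55, "B": 75, "C": 95, "D": 115, "E": 130}  # hypothetical WCPM norms

def place_dyad(student1_wcpm, student2_wcpm):
    """Return the level whose norm is nearest the dyad's mean pretest fluency."""
    dyad_mean = (student1_wcpm + student2_wcpm) / 2
    return min(NORM_50TH, key=lambda lvl: abs(NORM_50TH[lvl] - dyad_mean))

print(place_dyad(88, 96))    # "C" under the placeholder norms (mean 92 is nearest 95)
print(place_dyad(120, 130))  # "E" under the placeholder norms
```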

Each session, tutors recorded attendance, including the Quick Reads passage(s) covered. By the end of intervention, treatment students covered an average of 90 passages (SD = 22) and attended an average of 57 tutoring sessions (SD = 7), or 28.5 hours of intervention. After computing students’ individual passage coverage per session, we found that our treatment group averaged 1.6 passages per session (SD = 0.25; range = 1.1 to 1.9).
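The dosage figures above are internally consistent, as the following short check shows.

```python
# Reproducing the dosage figures: 57 sessions of 30 minutes each is 28.5 hours,
# and 90 passages over 57 sessions is roughly 1.6 passages per session.
sessions, minutes_per_session, passages = 57, 30, 90

print(sessions * minutes_per_session / 60)   # 28.5 hours of intervention
print(round(passages / sessions, 1))         # 1.6 passages per session
```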

Tutor training. Tutors participated in one initial 4-hour training by project staff. Training included an overview of reading fluency development and the repeated reading method. Research staff then modeled the use of Quick Reads materials and vocabulary instruction. The tutors practiced and received feedback before they began work with students. Following initial training, coaches visited tutors weekly to provide coaching and modeling and to collect observation data on tutor instruction and management. Midyear, tutors attended a 3-hour workshop provided by research staff to reinforce tutoring strategies and effective student management. The workshop addressed specific tutor skills for successful Quick Reads lessons and included a demonstration of a Quick Reads lesson with students in the study.

Tutor coaching. Throughout the 20-week intervention, research staff supported and conducted observations of the tutors. Specific researcher coaches were assigned to a set of tutors, and a minimum of eight observations were conducted for each tutor (including at least two observations per dyad). Coaches met monthly to discuss tutoring implementation progress.

Tutor observations (fidelity). To monitor treatment implementation fidelity, data were collected via observation forms, including (a) tutors’ adherence to scripted Quick Reads protocols, (b) tutor behavior in terms of both organization and responsiveness to students’ needs, and (c) student progress in terms of the amount of time spent actively engaged in reading passages. Tutors’ fidelity to scripted protocols was measured using a dichotomous (yes-no) implementation checklist that included two to five criteria for each of the Quick Reads instructional steps (the percentage of observed correct criteria across all steps was calculated per observation). Tutors’ behavior was measured using a 5-point rating scale of 0 (never) to 4 (always) on eight criteria, including “organizational materials,” “tutoring time spent on instruction,” “full tutoring time used,” “smooth transitions,” “corrections match error and skill,” “use of specific praise,” “quick pace,” and “keeps students engaged.” Student progress was measured by recording the amount of time (in seconds) students were actively reading text. Across all three measurements (protocols, behaviors, and student progress), only components actually observed were recorded (i.e., if the beginning of the tutoring session was the only component observed, then tutor behavior and student progress were not recorded). A total of 54 paired observations from five pairs of raters (one researcher-rater was used as a baseline for comparison with the other five) were used to obtain interobserver reliability. Reliability for adherence to scripted Quick Reads protocols ranged from r = .53 to r = .91 and averaged r = .76. Reliabilities averaged r = .81 for tutor behavior ratings and r = .92 for the amount of time students spent on passage reading.

Across 254 observations (approximately 13 per tutor), adherence to protocols averaged M = 90% (SD = 11.5%); across 248 observations (approximately 12 per tutor), tutor behaviors averaged M = 3.7 (SD = 0.55), and across 206 observations, each student (within the dyad) spent an average of M = 7.8 minutes per session (SD = 4.42) actively engaged in orally reading Quick Reads text.

Student Assessments

Students were individually pretested and posttested by trained testers unaware of group assignment on skills hypothesized to be affected by intervention, including word reading accuracy, word reading efficiency, word comprehension, vocabulary, fluency rate, and passage comprehension. Attention and rapid automatized naming were measured only as a way to better describe the sample. Except for attention, vocabulary, and fluency rate, standard scores were used in analyses (population mean of 100 and standard deviation of 15). Measure descriptions are as follows.

1. Attention was measured midyear using the Attention scale from the Multigrade Inventory for Teachers (MIT; Shaywitz, 1987). To calculate the score, we averaged four items from the MIT together (those with reliable item-factor loadings; Agronin, Holahan, Shaywitz, & Shaywitz, 1992, pp. 98-99). Scores range from 0 to 4, with higher scores indicative of worse attention. Internal consistencies reported by the authors for Grades 4 and 5 are .91 and .92, respectively. For our sample, internal consistency is .90.

2. Rapid automatized naming was measured at pretest using the Rapid Letter Naming subtest of the Comprehensive Test of Phonological Processing (Wagner, Torgesen, & Rashotte, 1999). Students are presented with a card containing five letters repeated in random order, which they must name as fast as possible. The raw score is the number of seconds required to name all of the letters on two 36-item stimulus cards. Test-retest reliability reported in the test manual is .72 for 8- to 17-year-olds. For our sample, alternate form reliability is .75 (Form A and Form B letters correct per second).

3. Word reading accuracy was measured using the Word Identification subtest from the norm-referenced, standardized Woodcock Reading Mastery Test-Revised/Normative Update, Form H (WRMT-R/NU; Woodcock, 1998). This assessment requires students to read increasingly difficult words, and testing is discontinued after six consecutive incorrect responses. Split-half reliability (alternating items) reported in the test manual averages .99 for third graders and .91 for fifth graders. Internal consistencies for our sample of fourth and fifth graders are .93 at pretest and .94 at posttest.

4. Word reading efficiency was measured using the Sight Word subtest from the norm-referenced, standardized Test of Word Reading Efficiency, Form B (Torgesen, Wagner, & Rashotte, 1999). The Sight Word subtest requires students to read as many words as possible in 45 seconds from a list of increasingly difficult words. Test-retest reliability reported in the test manual for 6- to 9-year-olds is .96. For our sample, internal consistencies are .94 at pretest and .95 at posttest.

5. Vocabulary was assessed with a multiple-choice, curriculum-based measure of vocabulary developed by research staff in consultation with Dr. Judith Scott. Eighty initial items were constructed using 20 words sampled from each of four levels (B, C, D, and E) of the Quick Reads passages (no students were expected to be placed in Quick Reads Level A). Criteria used for constructing the initial item pool included (a) words that appeared in at least three of the passages within the Quick Reads level and (b) words that were high-utility content words (e.g., symbol, sense, native). Three distractors were written for each word and were constructed to have matching syntactic form to the correct definition and to have adequate semantic separation (the position of the correct definition among the four choices was randomly sorted). Students were asked to read each item and the four choices silently. All items were administered consecutively in the order of the Quick Reads levels, with 1 point awarded for each correct response. From the highest item-total (point-biserial) correlations at pretest, we selected half (10 of the original 20 items per level) of the items as our final measure. Thus, students’ scores reflect their raw number of items correct out of 40 items. Internal consistencies for our sample are .83 and .84 at pretest and posttest, respectively.

6. Word comprehension was assessed using the WRMT-R/NU Word Comprehension subtest. The WRMT-R/NU Word Comprehension subtest, measured at each test period, includes three increasingly complex subsections: Antonyms, Synonyms, and Analogies. For each section, items are arranged in increasing difficulty, and testing is discontinued after six consecutive incorrect responses. The Antonyms section requires students to read a word and supply a word opposite in meaning, and the Synonyms section requires students to read a word and supply a word similar in meaning. The third section, Analogies, requires students to read a pair of related words aloud, then read a single word aloud, and then supply a word related to the single word (using the relationship of the pair). For example, the student would read the word pair on-off followed by the single word in, and the correct response would be out. The raw score for the Word Comprehension test is the total number correct across all three sections. The test manual reports split-half reliability (alternating items) as .90 for fifth graders. For our sample, internal consistency is .92 at both pretest and posttest.

7. Fluency rate was assessed using students’ mean WCPM on three grade-level passages drawn from DIBELS ORF benchmarks. Specifically, fourth graders read the following DIBELS ORF Grade 4 passages at pretest: “Water Cycle,” “Land at the Top of the World,” and “Georgia O’Keefe”; and at posttest: “The Youngest Rider,” “Maid of the Mist,” and “She Reached for the Stars.” Fifth graders read the following DIBELS ORF Grade 5 passages at pretest: “Something’s Missing,” “A New Habitat,” and “Mount Rainier”; and at posttest: “Help Is on the Way,” “Whale Song,” and “Mount Everest.” Students read each passage aloud while the tester recorded errors; testing was discontinued after 1 minute. Omissions, substitutions, and hesitations of more than 3 seconds are scored as errors (words self-corrected within 3 seconds are scored as accurate). Test-retest reliabilities for elementary students are reported by Tindal, Marston, and Deno (1983) and range from .92 to .97. For our sample, Grade 4 passage fluency rate intercorrelations are .84 to .91 at pretest and .79 to .82 at posttest (all ps < .05).
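As a concrete illustration of the scoring rule above (WCPM averaged over three 1-minute passage readings), here is a small sketch; the passage counts are hypothetical.

```python
def wcpm(words_attempted, errors, seconds=60):
    """Words correct per minute for one timed passage reading.
    Omissions, substitutions, and hesitations over 3 s count as errors;
    self-corrections within 3 s count as correct."""
    return (words_attempted - errors) * 60.0 / seconds

# Hypothetical fourth grader: three DIBELS benchmark passages, 1 minute each,
# as (words attempted, errors) pairs.
passages = [(98, 6), (104, 9), (95, 4)]
rates = [wcpm(w, e) for w, e in passages]
mean_wcpm = sum(rates) / len(rates)  # the study used the mean across three passages
```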

8. Passage comprehension was measured using the WRMT-R/NU Passage Comprehension subtest, which requires students to supply a missing word that would be appropriate in the context of each sentence or passage read (silently). A list of acceptable responses is provided on the easel page for the tester. Items are increasingly difficult, and testing is discontinued after six consecutive incorrect responses. The test manual reports split-half reliability (alternating items) as averaging .97 for first and third graders and .73 for fifth graders. For our sample, internal consistency is .89 at both pretest and posttest.

Data Analysis Strategy

Because pretest occurred prior to group assignment, pretest data were analyzed in one-way analyses of variance (SPSS 13.0 was used for these analyses). However, two issues inherent in our research design required a more complex analysis strategy for posttest data. First, students in the treatment condition received instruction in pairs (dyads) during most tutoring sessions, which means that students within a dyad cannot be expected to have outcomes independent of one another. As such, the assumption of independence required for typical analyses of variance and covariance (ANOVAs and ANCOVAs) is not tenable.

One strategy to handle this problem would be to collapse dyads into their means (cf. Graham, Harris, & Chorzempa, 2002); this is not a preferable option, because use of dyad means would lead to loss of individual student information as well as difficulty in drawing meaningful conclusions from analysis results. In addition, students in the control condition were dyads by label only (thus, it is reasonable to assume that students within a control dyad will have outcomes independent of one another). Another strategy, then, would be use of hierarchical linear modeling (also known as multilevel modeling, mixed modeling, and random effects modeling) to account for the nonindependence of students within treatment dyads while simultaneously allowing each control student to serve as his or her own dyad. However, this leads to a secondary but no less important issue: By posttest, we did not expect the treatment and control groups to have within-group variances similar enough to pool, which is the assumption in typical hierarchical linear model specifications (the treatment group, composed of dyads, was expected to be more homogeneous in their reading skills as a function of paired instruction, whereas controls’ variance was hypothesized to be stable at both pretest and posttest). Given these issues, our data analysis strategy was to specify ANCOVA-like hierarchical linear models that allow for heterogeneous group variances as well as random variation between dyads (cf. Raudenbush, Bryk, Cheong, & Congdon, 2004, pp. 52-54). Respective pretests were group-mean centered for use as covariates, and group was dummy coded for ease of interpretation (1 = treatment; 0 = control). HLM 6.0 (Raudenbush, Bryk, & Congdon, 2004) was used for all hierarchical analyses. The general equation used for all posttest analyses is provided below.

The general model can be written as Yij = γ00 + γ01(TREATMENTj) + γ10(PRETESTij) + u0j + rij, where PRETESTij is group-mean centered. Interpretation of the general model is as follows: Each outcome estimated for student i in dyad j (controls were coded as one dyad) is equal to (a) the fixed effect of intercept γ00 (posttest mean for all students, holding group and pretest constant), plus (b) the fixed effect of slope γ01 (effect of treatment group membership, holding pretest constant), plus (c) the fixed effect of slope γ10 (effect of the individual’s pretest ability relative to his or her group mean), plus (d) the random effect of variance between dyads u0j, plus (e) the random effect of within-group variance rij.
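To make the dyad-nesting issue concrete, the following sketch simulates treatment-group data from the model just described and computes the intraclass correlation implied by the shared dyad effect u0j. All parameter values (intercept, slopes, variance components, sample sizes) are hypothetical, chosen only for illustration; this does not reproduce the study's HLM 6.0 heterogeneous-variance estimation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Posttest model from the text, with hypothetical parameter values:
#   Yij = g00 + g01*TREATMENTj + g10*PRETESTij + u0j + rij
g00, g01, g10 = 90.0, 4.0, 0.6
n_dyads = 40                 # hypothetical treatment dyads, 2 students each
tau, sigma = 3.0, 4.0        # between-dyad SD and within-dyad (residual) SD

pretest = rng.normal(0.0, 10.0, size=(n_dyads, 2))  # group-mean-centered pretests
u0 = rng.normal(0.0, tau, size=n_dyads)             # shared random dyad effect
r = rng.normal(0.0, sigma, size=(n_dyads, 2))       # student-level residual
y = g00 + g01 * 1 + g10 * pretest + u0[:, None] + r  # TREATMENT = 1 for all dyads here

# Students in the same dyad share u0j, so their outcomes are correlated;
# the implied intraclass correlation is tau^2 / (tau^2 + sigma^2).
icc = tau**2 / (tau**2 + sigma**2)
```

A nonzero ICC is exactly the violation of independence that rules out ordinary ANCOVA for the treatment group, while control students (dyads by label only) have u0j effectively zero.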

Results

Pretests

Group means and standard deviations at pretest and posttest are shown in Table 2. We used a series of one-way ANOVAs to test for possible group differences at pretest, because students had yet to receive any treatment in dyads. The results of these analyses showed no evidence of group differences for any measure (all Fs ≤ 1.2, all ps > .05), which was expected because students within dyads were randomly assigned to condition within schools. Pretests, shown in Table 2, reveal that the combined sample averaged in approximately the 20th to 25th percentile on all norm-referenced measures (as per the average standard scores) as well as fluency rate (see Hasbrouck & Tindal, 2006).

Posttests

To determine whether schools should be considered in our nesting structure for hierarchical posttest analyses, we tested for pretest differences between schools using one-way ANOVAs followed up with Tukey’s HSD pairwise comparisons. Because we found no significant differences between schools on any measure (all Fs ≤ 1.7, ps > .05), schools were not incorporated into posttest analyses.
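The school-level screening above rests on the usual one-way ANOVA F ratio of between- to within-group mean squares. A minimal sketch, using hypothetical school data (the study's raw scores are not available):

```python
import numpy as np

def one_way_F(groups):
    """One-way ANOVA F statistic: between-group over within-group mean square."""
    all_x = np.concatenate(groups)
    grand = all_x.mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_between = len(groups) - 1
    df_within = len(all_x) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical pretest scores for students at three schools (no true differences):
rng = np.random.default_rng(1)
schools = [rng.normal(92.0, 8.0, size=25) for _ in range(3)]
F = one_way_F(schools)
```

A small F (near 1) indicates that between-school variation is no larger than would be expected from within-school variation alone, which is the basis for omitting schools from the nesting structure.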

Results from our hierarchical linear models (fixed effects shown in Table 3) revealed significant treatment effects for vocabulary, word comprehension, and passage comprehension but not for word-level reading or fluency rate. The model estimates show that students in the treatment group had a 3-point advantage over controls on the curriculum-based vocabulary measure (d = .42); for word comprehension, the treatment group is estimated as having an advantage of 3 standard score points (d = .27); and finally, for passage comprehension, treatment students are estimated as having an advantage of 4 standard score points (d = .50). For our norm-referenced measures, these results imply that the treatment group averaged in the 30th percentile at posttest on both word comprehension and passage comprehension, whereas the control group averaged in the 25th and 10th percentiles, respectively. (Although we report Cohen’s d for significant treatment effects, calculated as the difference between groups divided by the pooled standard deviation [Cohen, 1988], we wish to note that this computation assumes the variances of the two groups can be pooled, which is unreasonable given our research design. As such, we encourage readers to use caution in interpreting this kind of effect size and to focus instead on the estimated differences between groups in terms of standard score points and percentile ranks.) In all analyses, pretest accounted for significant variation in posttest (all ps < .05). Between-dyad variation did not exceed chance (ps > .05 in chi-square tests), with the exception of fluency rate, for which between-dyad variation was greater than chance (p < .05).
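The two quantities discussed above (Cohen's d from a pooled SD, and the percentile rank of a standard score on a mean-100, SD-15 norm-referenced scale) follow standard formulas. A sketch with hypothetical means, SDs, and ns (not the study's actual values):

```python
from math import erf, sqrt

def cohens_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Cohen's d using a pooled SD; assumes the two variances can be pooled,
    which is the caveat raised in the text."""
    pooled = sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled

def percentile_of_standard_score(ss, mean=100.0, sd=15.0):
    """Percentile rank of a standard score, via the normal CDF."""
    z = (ss - mean) / sd
    return 100 * 0.5 * (1 + erf(z / sqrt(2)))

# Hypothetical example: a 4-point standard-score advantage with equal 8-point SDs.
d = cohens_d(96, 92, 8, 8, 50, 50)       # 4 / 8 = 0.5
p_t = percentile_of_standard_score(96)   # roughly the 40th percentile
p_c = percentile_of_standard_score(92)   # roughly the 30th percentile
```

The percentile conversion shows why small standard-score differences can translate into visible percentile gaps near the middle of the distribution.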

Exploration of Fluency Rate Relationships

We hypothesized that the lack of treatment effects for fluency rate might be related either to the generally low word-level reading skills (accuracy and efficiency) of our sample or to treatment-related variables. Intercorrelations were computed and are provided in Tables 4 and 5 for the treatment and control groups, respectively. For the treatment group, we added three treatment-related variables: (a) Quick Reads total passage coverage (number of unique passages read), (b) rate of coverage (average number of unique Quick Reads passages covered per session), and (c) dichotomously coded classroom reading instruction missed during tutoring (1 = missed some literacy instruction; 0 = otherwise). (Although not shown in the table, tutor protocol fidelity was not related to any pretest or posttest measure, all ps > .05.)

Examination of the treatment group’s intercorrelations shows that although pretest word reading accuracy and efficiency had a low to moderate relationship with posttest fluency rate (r = .28, p < .05), neither uniquely predicted posttest fluency rate (ps > .05), whereas passage coverage rate uniquely accounted for 10.6% of posttest variance after pretest fluency rate and word reading efficiency were taken into account, F change(1, 50) = 12.707, p < .001.
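The unique-variance test above is a hierarchical regression F-change statistic: the increment in R-squared when a predictor enters, divided by the residual variance of the full model. A sketch of the formula follows; the R-squared values are illustrative, chosen only to be consistent with the reported df of (1, 50) and a .106 increment, not taken from the study.

```python
def f_change(r2_reduced, r2_full, n, k_added, p_full):
    """F test for the increase in R^2 when k_added predictors enter a regression.
    p_full is the total number of predictors in the full model;
    the denominator df is n - p_full - 1."""
    df2 = n - p_full - 1
    return ((r2_full - r2_reduced) / k_added) / ((1 - r2_full) / df2)

# Illustrative values: a 3-predictor full model with n = 54 (so df = (1, 50)),
# where passage coverage rate adds .106 to R^2 over the two pretest covariates.
F = f_change(r2_reduced=0.477, r2_full=0.583, n=54, k_added=1, p_full=3)
```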

Attention and rapid automatized naming were not strong predictors of posttest outcomes for either group. For treatment students, attention accounted for 11% of the variance in fluency rate, and rapid automatized naming accounted for approximately 11% of the variance in word reading accuracy and 14% in passage comprehension. For the control group, attention accounted for approximately 7% of the variance in vocabulary, and rapid automatized naming accounted for approximately 9% of the variance in word reading efficiency and 11% in fluency rate.

Discussion

The aim of our study was to evaluate the use of the Quick Reads fluency program for fourth- and fifth-grade students whose fluency rate was below grade level in the fall of the academic year. Students were referred by classroom teachers for supplemental fluency instruction, screened, and randomly assigned to dyads within schools. Dyads were then randomly assigned to either Quick Reads tutoring or no tutoring (regular classroom reading instruction only). Quick Reads instruction was provided by paraeducator tutors to pairs of students who worked on passages estimated at their average reading level for approximately 20 weeks. At posttest in the spring of the academic year, the treatment group significantly outperformed the control group on measures of vocabulary, word comprehension, and passage comprehension; however, we found no treatment effects for word-level reading or oral reading fluency. At posttest, both treatment and control students’ average fluency rate performance was in the lowest quartile, according to Hasbrouck and Tindal’s (2006) grade-level norms.

As noted by others (LaBerge & Samuels, 1974; Perfetti, 1985; Wolf & Katzir-Cohen, 2001), the complexity and developmental nature of reading fluency suggest that intervention efforts begin with attention to sublexical and word-level skills. In this study, we asked teachers to refer students who had poor reading fluency and basic word-reading skills. We further attempted to impose a minimum level of word reading skill for eligibility to ensure that students would be able to participate in and benefit from dyadic oral reading practice. We should note that teachers were not willing to refer students with higher reading skills for this pull-out intervention. Yet at pretest, treatment students averaged nearly one standard deviation below the population mean in word reading accuracy and efficiency, which may have been too low to allow them to develop reading rate and therefore reduced the fluency effects of the intervention.

If we take the developmental perspective outlined by Kame’enui et al. (2001), a more effective fluency approach for this low-skilled cohort of fourth and fifth graders might first target word-level reading efficiency. Our findings support the recommendations of others (Kame’enui et al., 2001; Torgesen, Rashotte, et al., 2001; Wolf & Katzir-Cohen, 2001) that effective fluency intervention must attend to all components of fluency, including explicit instruction to address gaps in sublexical and word-level skills as well as semantic, orthographic, and morphological processes. In choosing the vocabulary extension activity for this field test, we had hoped to address what we expected would be the most developmentally appropriate word-level instruction that would fit into the 30-minute instructional block without reducing the intensity of repeated reading time required to test its effects. In retrospect, we would replace the vocabulary extension activity with added explicit instruction in alphabetics and decoding efficiency. Many students in the treatment group struggled to recognize single- as well as multiletter spelling patterns. We were surprised and dismayed to find that many students had word-reading miscues due to confusion about single vowel sounds. In the 2nd year of our evaluation of Quick Reads, we will consider its use in a more preventive than remedial application for second- and third-grade students. In that evaluation, we will incorporate explicit instruction in the alphabetic principle as the short extension activity to accompany repeated reading procedures. One limitation of the current study is the lack of data on classroom reading instruction. In our future research with younger students, we will conduct observations to account for the influence of classroom reading instruction on student outcomes.

Because this study was designed to evaluate the effectiveness of the Quick Reads program, which is a repeated reading approach, the final stage of passage-level fluency was the developmental fluency target. In spite of students’ lexical-level deficiencies, treatment students derived benefits from the Quick Reads intervention in the areas of vocabulary and comprehension. The Quick Reads text characteristics may have enabled these low-skilled readers to develop vocabulary and comprehension skills even though they continued to struggle at the lexical and sublexical levels in word reading and decoding (i.e., perhaps through the text characteristics: high-frequency words, word repetitions, and connections with grade-level concepts being taught in the classroom). One implication of this evaluation is that the effectiveness of Quick Reads might be increased if the extension activities for low-skilled students target explicit instruction to develop word reading efficiency. The intentional word choice and word repetitions that characterize the Quick Reads passages would allow a skilled teacher to readily add this instruction to the repeated oral reading practice. We expect it may be even more important to attend to students’ lexical level of fluency in earlier grades, the stage in development when many students are consolidating these underlying component skills.

Authors’ Note: Grant R305G040103 from the Institute of Education Sciences, U.S. Department of Education, supported this research. We especially acknowledge Sueanne Sluis for coordinating the intervention, and we thank Julia Peyton and Sarah Tudor for coordinating student assessments.
We also thank coaches, testers, tutors, and data entry staff, especially Kathryn Compton, Katy Compton, Eleanor Garner, Rayma Haas, Robin Horton, Ruth McPhaden, Siobhain Mogensen, Steven Pearson, Nancy Rerucha-Borges, Katia Roberts, Linda Romanelli, Laura Root, Tyler Rothnie, Yu Linda Song, Jason Vincion, and Lynn Youngblood. Finally, we are most grateful to the teachers, staff, and children of the schools for their support and participation in this study.

References

Agronin, M. E., Holahan, J. M., Shaywitz, B. A., & Shaywitz, S. E. (1992). The Multi-Grade Inventory for Teachers (MIT): Scale development, reliability, and validity of an instrument to assess children with attention deficits and learning disabilities. In S. E. Shaywitz & B. A. Shaywitz (Eds.), Attention deficit disorder comes of age: Toward the twenty-first century (pp. 89-116). Austin, TX: Pro-Ed.

Baumann, J. F., & Kame’enui, E. J. (2004). Vocabulary instruction: Research to practice. New York: Guilford.

Beck, I. L., & McKeown, M. G. (1985). Teaching vocabulary: Making the instruction fit the goal. Educational Perspectives, 23, 11-15.

Beck, I. L., McKeown, M. G., & Kucan, L. (2002). Bringing words to life: Robust vocabulary instruction. New York: Guilford.

Berninger, V. W. (1994). Reading and writing acquisition: A developmental neuropsychological perspective. Madison, WI: Brown and Benchmark.

Berninger, V. W., & Abbott, R. D. (1994). Multiple orthographic and phonological codes in literacy acquisition: An evolving research program. In V. W. Berninger (Ed.), The varieties of orthographic knowledge I: Theoretical and developmental issues. Dordrecht, Netherlands: Kluwer Academic.

Berninger, V. W., Abbott, R. D., Billingsley, F., & Nagy, W. (2001). Processes underlying timing and fluency of reading: Efficiency, automaticity, coordination, and morphological awareness. In M. Wolf (Ed.), Dyslexia, fluency, and the brain (pp. 383-414). Baltimore: York.

Biemiller, A. (2001). Teaching vocabulary: Early, direct, sequential. American Educator, 25, 24-28, 47.

Bowers, P. G. (1995). Tracing symbol naming speed’s unique contributions to reading disabilities over time. Reading and Writing: An Interdisciplinary Journal, 7, 189-216.

Bowers, P. G., Golden, J. O., Kennedy, A., & Young, A. (1994). Limits upon orthographic knowledge due to processes indexed by naming speed. In V. W. Berninger (Ed.), The varieties of orthographic knowledge: Theoretical and developmental issues (pp. 173-218). Dordrecht, Netherlands: Kluwer Academic.

Bowers, P. G., & Wolf, M. (1993). Theoretical links between naming speed, precise timing mechanisms and orthographic skills in dyslexia. Reading and Writing: An Interdisciplinary Journal, 5, 69- 85.

Breznitz, Z. (2001). The determinants of reading fluency: A comparison of dyslexic and average readers. In M. Wolf (Ed.), Dyslexia, fluency, and the brain (pp. 245-276). Timonium, MD: York.

Carver, R. P., & Hoffman, J. V. (1981). The effect of practice through repeated reading on gain in reading ability using a computer- based instructional system. Reading Research Quarterly, 16, 374- 390.

Chard, D. J., Vaughn, S., & Tyler, B. (2002). A synthesis of research on effective interventions for building reading fluency with elementary students with learning disabilities. Journal of Learning Disabilities, 35, 386-406.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed). Hillsdale, NJ: Lawrence Erlbaum.

Dahl, P. (1974). An experimental program for teaching high speed word recognition and comprehension skills (Final Rep. Project No. 3-1154). Washington, DC: National Institute of Education.

Dahl, P. (1979). An experimental program for teaching high speed word recognition and comprehension skills. In J. E. Button, T. Lovitt, & T. Rowland (Eds.), Communications research in learning disabilities and mental retardation (pp. 33-65). Baltimore: University Park.

Ehri, L. C. (1995). Stages of reading development in learning to read words by sight. Journal of Research in Reading, 18, 116-125.

Faulkner, H. J., & Levy, B. A. (1994). Fluent and nonfluent forms of transfer in reading: Words and their message. Psychonomic Bulletin and Review, 6, 111-116.

Francis, D. J., Snow, C. E., August, D., Carlson, C. D., Miller, J., & Iglesias, A. (2006). Measures of reading comprehension: A latent variable analysis of the diagnostic assessment of reading comprehension. Scientific Studies of Reading, 10, 301-322.

Good, R. H., & Kaminski, R. A. (2002). Dynamic Indicators of Basic Early Literacy Skills (DIBELS) (6th ed.). Eugene, OR: Institute for the Development of Educational Achievement.

Graham, S., Harris, K. R., & Chorzempa, B. F. (2002). Contribution of spelling instruction to the spelling, writing, and reading of poor spellers. Journal of Educational Psychology, 94, 669- 686.

Hasbrouck, J., & Tindal, G. A. (2006). ORF norms: A valuable assessment tool for reading teachers. Reading Teacher, 59, 636-644.

Hiebert, E. H. (2003). Quick Reads. Parsippany, NJ: Pearson Learning.

Hiebert, E. H. (2005). The effects of text difficulty on second graders’ fluency development. Reading Psychology, 26, 183-209.

Ho, C. S.-H., Chan, D., Tsang, S.-M., & Lee, S.-H. (2002). The cognitive profile and multiple-deficit hypothesis in Chinese developmental dyslexia. Developmental Psychology, 38, 543-553.

Kame’enui, E. J., Simmons, D. C., Good, R. H., & Harn, B. A. (2001). The use of fluency-based measures in early identification and evaluation of intervention efficacy in schools. In M. Wolf (Ed.), Dyslexia, fluency, and the brain (pp. 307-332). Baltimore: York.

Kuhn, M. R., & Stahl, S. A. (2003). Fluency: A review of developmental and remedial practices. Journal of Educational Psychology, 95, 3-21.

LaBerge, D., & Samuels, S. J. (1974). Toward a theory of automatic information processing in reading. Cognitive Psychology, 6, 293-323.

Manis, F. R., & Freedman, L. (2001). The relationship of naming speed to multiple reading measures in disabled and normal readers. In M. Wolf (Ed.), Dyslexia, fluency, and the brain (pp. 65-92). Timonium, MD: York.

Mathes, P. G., & Fuchs, L. S. (1993). Peer-mediated reading instruction in special education resource rooms. Learning Disabilities Research and Practice, 8, 233-243.

Mercer, C. D., Campbell, K. U., Miller, M. D., Mercer, K. D., & Lane, H. B. (2000). Effects of a reading fluency intervention for middle schoolers with specific learning disabilities. Learning Disabilities Research and Practice, 15, 179-189.

Meyer, M. S., & Felton, R. H. (1999). Repeated reading to enhance fluency: Old approaches and new directions. Annals of Dyslexia, 49, 283-306.

Meyer, M. S., Wood, F. B., Hart, L. A., & Felton, R. H. (1998). The selective predictive values in rapid automatized naming within poor readers. Journal of Learning Disabilities, 31, 106-117.

National Reading Panel. (2000). Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction. Washington, DC: National Institute of Child Health and Human Development.

No Child Left Behind Act of 2001, Pub. L. No. 107-110, 115 Stat. 1425 (2002).

O’Shea, L., Sindelar, P. T., & O’Shea, D. J. (1985). The effects of repeated readings and attentional cues on reading fluency and comprehension. Journal of Reading Behavior, 17, 129-142.

Penno, J. F., Wilkinson, I. A. G., & Moore, D. W. (2002). Vocabulary acquisition from teacher explanation and repeated listening to stories: Do they overcome the Matthew effect? Journal of Educational Psychology, 94, 23-33.

Perfetti, C. A. (1985). Reading ability. New York: Oxford University Press.

Perfetti, C. A. (1992). The representation problem in reading acquisition. In P. B. Gough, L. C. Ehri, & R. Treiman (Eds.), Reading acquisition (pp. 145-174). Hillsdale, NJ: Lawrence Erlbaum.

Pikulski, J. J. (2006). Fluency: A developmental and language perspective. In S. J. Samuels & A. E. Farstrup (Eds.), What research has to say about fluency instruction (pp. 70-93). Newark, DE: International Reading Association.

Pikulski, J. J., & Chard, D. J. (2005). Fluency: Bridge between decoding and reading comprehension. Reading Teacher, 58, 510-519.

Pinnell, G. S., Pikulski, J. J., Wixson, K. K., Campbell, J. R., Gough, P. B., & Beatty, A. S. (1995). Listening to children read aloud: Data from NAEP’s integrated reading performance record (IRPR) at grade 4. Washington, DC: U.S. Department of Education, Office of Educational Research and Improvement.

Rashotte, C. A., & Torgesen, J. K. (1985). Repeated reading and reading fluency in learning disabled children. Reading Research Quarterly, 20, 180-188.

Raudenbush, S. W., Bryk, A. S., Cheong, Y. F., & Congdon, R. T. (2004). HLM 6: Hierarchical linear and nonlinear modeling. Lincolnwood, IL: Scientific Software International.

Raudenbush, S. W., Bryk, A. S., & Congdon, R. T. (2004). HLM for Windows 6.0 [Computer software]. Lincolnwood, IL: Scientific Software International.

Samuels, S. J. (1979). The method of repeated readings. Reading Teacher, 32, 403-408.

Samuels, S. J. (1988). Decoding and automaticity: Helping poor readers become automatic at word recognition. Reading Teacher, 41, 756-760.

Shaywitz, S. E. (1987). Multigrade Inventory for Teachers. New Haven, CT: Yale University School of Medicine.

Shinn, M. R., Good, R. H., Knutson, N., Tilly, W. D., & Collins, V. L. (1992). Curriculum-based measurement of oral reading fluency: A confirmatory analysis of its relation to reading. School Psychology Review, 21, 459-479.

Stahl, S. A., Heubach, K., & Cramond, B. (1997). Fluency-oriented reading instruction (Reading Research Report No. 79). Athens, GA, and College Park, MD: Universities of Georgia and Maryland, National Reading Research Center.

Stanovich, K. E. (1980). Toward an interactive-compensatory model of individual differences in the development of reading fluency. Reading Research Quarterly, 16, 32-71.

Tan, A., & Nicholson, T. (1997). Flashcards revisited: Training poor readers to read words faster improves their comprehension of text. Journal of Educational Psychology, 89, 276-288.

Tindal, G., Marston, D., & Deno, S. L. (1983). The reliability of direct and repeated measurement (Research Report 109). Minneapolis: University of Minnesota, Institute for Research on Learning Disabilities.

Torgesen, J. K., Alexander, A. W., Wagner, R. K., Rashotte, C. A., Voeller, K. S., & Conway, T. (2001). Intensive remedial instruction for children with severe reading disabilities: Immediate and long-term outcomes from two instructional approaches. Journal of Learning Disabilities, 34, 33-48, 78.

Torgesen, J. K., Rashotte, C. A., & Alexander, A. W. (2001). Principles of fluency instruction in reading: Relationships with established empirical outcomes. In M. Wolf (Ed.), Dyslexia, fluency, and the brain (pp. 333-355). Baltimore: York.

Torgesen, J. K., Wagner, R. K., & Rashotte, C. A. (1999). Test of Word Reading Efficiency. Austin, TX: PRO-ED.

U.S. Department of Education. (2004). The nation’s report card: Reading highlights 2003 (NCES 2004-452). Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Statistics. Retrieved January 8, 2008, from http://www.nces.ed.gov/nationsreportcard/pdf/main2003/2004452.pdf

Wagner, R., Torgesen, J. K., & Rashotte, C. A. (1999). Comprehensive Test of Phonological Processing (CTOPP). Austin, TX: PRO-ED.

Weinstein, G., & Cooke, N. L. (1992). The effects of two repeated reading interventions on generalization of fluency. Learning Disability Quarterly, 15, 21-28.

Wimmer, H., & Hummer, P. (1990). How German-speaking first graders read and spell: Doubts on the importance of the logographic stage. Applied Psycholinguistics, 11, 349-368.

Wolf, M. (1991). Naming speed and reading: The contribution of the cognitive neurosciences. Reading Research Quarterly, 26, 123- 141.

Wolf, M. (Ed.). (2001). Dyslexia, fluency, and the brain. Timonium, MD: York.

Wolf, M., & Bowers, P. (1999). The “double-deficit hypothesis” for the developmental dyslexias. Journal of Educational Psychology, 91, 1-24.

Wolf, M., & Katzir-Cohen, T. (2001). Reading fluency and its intervention. Scientific Studies of Reading, 5, 211-238.

Wolf, M., Goldberg, A., Gidney, C., Cirino, P., Morris, R., & Lovett, M. (2002). The second deficit: An investigation of the independence of phonological and naming-speed deficits in developmental dyslexia. Reading and Writing, 15, 43-72.

Woodcock, R. (1998). Woodcock Reading Mastery Test-Revised/ Normative Update. Circle Pines, MN: American Guidance Service.

Patricia F. Vadasy

Washington Research Institute

Elizabeth A. Sanders

University of Washington

Patricia F. Vadasy, PhD, is a senior researcher at Washington Research Institute. Her research interests include reading acquisition and reading intervention.

Elizabeth A. Sanders, MEd, is a doctoral student in measurement, statistics, and research design in the College of Education at the University of Washington. Her academic interests are in quantitative methods in educational research.

Copyright PRO-ED Journals Jul/Aug 2008
