What about computerized “reading incentive” programs like Accelerated Reader and Reading Counts?
by Jim Trelease
Twenty-five years ago, when The Read-Aloud Handbook was first published, the idea of computerized reading-incentive/reading-management programs would have sounded like science fiction. Today it is one of the most hotly debated concepts among both educators and parents: Should children read for "intrinsic" rewards (the pleasure of the book itself), or should they be enticed to read for "extrinsic" rewards (prizes, privileges, or grades)?
Advantage Learning System’s Accelerated Reader and Scholastic’s Reading Counts, the two incentive industry leaders, work this way: The school library contains a core collection of popular and traditional children’s books, each rated by difficulty (the harder the book, the more points it has). Accompanying the books is a computer program that poses questions after the student has read the book. Passing the computer quiz earns points for the student reader, which can be redeemed for prizes like school T-shirts, privileges, or items donated by local businesses. Both programs strongly endorse SSR as an integral part of their program and require substantial library collections. Both Accelerated Reader and Reading Counts have expanded their scope beyond “incentives” to include substantial student management and assessment tools, with Accelerated Reader having the largest customer base nationally.
Too many schools are doing the same thing with reading programs that other districts sadly have done with the game of basketball.
Before going forward on this subject, I must note, in the spirit of full disclosure, that I have been a paid speaker at three Accelerated Reader national conventions. I spoke on the subjects of reading aloud, SSR, and home/school communication problems, topics I have addressed at conventions for nearly all the major education conferences, from International Reading Association (IRA) and the American Library Association (ALA) to the National Association for the Education of Young Children (NAEYC).
I have written and spoken both favorably and critically about these computerized programs, but in recent years I've grown increasingly uneasy with the way they are being used by districts. Too often I see them being abused in ways similar to basketball. When Dr. James Naismith invented basketball (less than three miles from the home I'm sitting in right now), he was trying to shape an "indoorsy" exercise that would have an "outdoorsy" flavor to it. A century later, some people still use it as a form of exercise, some as a form of sport, and others take it to another level and turn it into a local obsession, maybe even a form of legalized child abuse, while warping the original intention of the sport. I don't have to spell out those towns, cities, and states.
When I survey my seminar audiences nationally, I meet an increasing number of dedicated educators and librarians who are alarmed by the way these programs are being used. The original design was a kind of "carrot-on-a-stick"—using points and prizes to lure reluctant readers to read more. For a while the big complaint from critics was about these points or incentives. But I didn't have a problem with that as long as the rewards didn't get out of hand (and some have). As for incentives, my family's been benefiting from those frequent traveler "point programs" for decades. Every professional athlete, every CEO, and most sales reps have incentive clauses in their contracts. Who says this is bad business?
As I see it, the real problem arrived when districts bought the programs with the idea they would absolutely lift reading scores. "Listen," declares the school board member, "if we're spending $50 grand on this program that's supposed to raise scores, then how can we allow it to be optional? You know the kids who'll never opt for it—the ones with the low scores, who drag everyone else's scores down. No—it's gotta be mandatory participation." And to cement it into place, the district makes the point system 25 percent of the child's grade for a marking period. Oooops! They just took the "carrot" off the stick, leaving just the stick—a new grading weapon. Do you see the basketball connection now?
Here is a scenario that has been painted by more than a few irate librarians (school and public) in affluent districts that are using the computerized programs:
The parent comes into the library looking desperately for a "7-point book."
"What kind of book does your son like to read?" asks the librarian.
The parent replies impatiently, "Doesn't matter. He needs 7 more points to make his quota for the marking period, which ends this week. Give me anything."
In cases like that, we're just back to the same ol', same ol': "I need a book for a book report. But it's due on Friday—so it can't have too many pages."
Draftees vs. Enlistees: The difference is in the "attitude."
The only time the incentives really work on attitudes is when participation is voluntary. It's the difference between "enlistees" and "draftees" in the Army. There's a big gap in their attitudes: the enlistee (volunteer) is in for a career; the draftee is in for as little time and work as possible.
As for the research supporting the computerized programs, it is hotly contested, with no long-term studies using adequate control groups. True, the students read more, but is that because of the program itself, or because the district has poured all that money into school libraries and added SSR to the daily schedule? Where's the research comparing 25 "computerized" classes with 25 classes that have rich school and classroom libraries and daily SSR in the schedule? So far, it's not there.
Believe it or not, high reading scores have been achieved in communities without computerized incentive programs: places with first-class school and classroom libraries, where teachers motivate children by reading aloud to them, giving book talks, and including SSR/DEAR time as an essential part of the daily curriculum, and where the money that would have gone to the computer tests went instead to building a larger library collection. Unfortunately, such instances are rare. Where the scores are low, oftentimes so is the teacher's knowledge of children's literature, the library collection is meager to dreadful, and drill-and-skill supplants SSR/DEAR time.
Are there any other negatives associated with these programs?
Here are some serious negatives to guard against:
- Some teachers and librarians have stopped reading children’s and young adult books because the computer will ask the questions instead.
- Class discussion of books decreases because a discussion would give away test answers, and all that matters is the electronic score.
- Students narrow their book selection to only those titles included in the program's point list.
- In areas where the “points” have been made part of either the grade or classroom competition, some students attempt books far beyond their level and end up frustrated.
Before committing precious dollars to such a program, a district must decide its purpose: Is it there to motivate children to read more or to create another grading platform?