“Best elementary school program resisted”
Posted: November 27th, 2013 | Author: Michael Goldstein | 6 Comments »
The wonderful Jay Mathews of the Washington Post writes:
Best elementary school program resisted
A program called Success for All, born in Baltimore 26 years ago to improve elementary schools, has set a record for most glowing reports from tough researchers. But the latest study showing how well it works also hints at why it has not become more popular: It uses ability grouping and scripted lessons, both disliked by many teachers.
Hmm. What gives? Maybe the research is weak.
Success for All is found in four schools in Alexandria and three in Prince George’s County. Those numbers might increase in light of a new report by the well-respected research group MDRC. The report says that Success for All kindergartners did significantly better than similar students in 18 control group schools based on a standardized test of phonics.
That endorsement by researchers can be added to past accolades. A comparison of whole-school reform models by the Washington-based American Institutes for Research in 1999 gave Success for All and Oregon-based Direct Instruction the highest ratings. And a three-year study by the Philadelphia-based Consortium for Policy Research in Education in 2009 revealed that Success for All moved students from the 40th to the 50th percentile in reading between kindergarten and the end of second grade.
The school improvement model, the brainchild of Johns Hopkins University researchers Robert E. Slavin and Nancy Madden, has spread through much of the country, with 500,000 students in 1,000 schools. It received a $49.3 million grant in 2009 from the federal i3 program to add more schools and increase training for teachers and staff members.
Okay then. That seems like strong research. So maybe teachers don’t like it. What about policymakers?
But it still gets a grumpy response from many policymakers. During the Bush administration, despite a rash of new education spending, Success for All got little notice. From its beginning, when a Maryland political dynamo named Buzzy Hettleman challenged Slavin and Madden to create a practical program for Baltimore schools, teachers have complained about being told what to say in class and how much time to devote to each lesson.
The MDRC study gathered not only test data but attitudes of teachers in the Success for All and control group schools. In the Success for All schools, 54.8 percent agreed or strongly agreed that their reading program was too rigid or scripted. Only 19.8 percent of the control group teachers said that about their non-Success for All programs.
Ability grouping — dividing students into groups based on their achievement levels — has also been standard for Success for All, with a twist. In traditional ability grouping, students rarely move from a lower to a higher group. Success for All dictates that every child be reevaluated every two months with an eye to moving those who are doing better up to a higher group.
The MDRC study found that 96.9 percent of Success for All teachers said their students were grouped by ability for reading, compared with only 54.8 percent of teachers in the control group.
Well, maybe the creators of Success for All are hard-core right-wingers who are dismissive of teachers.
Oh, they’re lefties?
It is startling to see Slavin and Madden, a couple with progressive sensibilities who met as undergraduates at left-leaning Reed College, criticized for methods seen by many educators as too conservative and restrictive. But they have gone with what works for kids and have an unusual rule that limits faculty resistance to their ideas. They won’t let a school have the program unless 75 percent of the teachers have voted for it in a secret ballot.
According to MDRC, which plans two more reports on the program, Success for All beat the control group by about 12 percent of the average annual growth for kindergartners.
Programs as old as this one are often rejected as out-of-date by school districts. But can this region’s school leaders, who often say they base their decisions on research, afford to ignore something this successful?
Bridge, which is young, has some early, very promising data.
A challenge is that some folks we encounter, even if they’ll privately concede the schools help kids become good readers, worry more about teacher comfort with the curriculum.
I think you’re buying into the hype way too much here, which is unexpected for someone of your expertise. Almost all the statistics in this article (and report) actually have nothing to do with student outcomes, just with indicators of school organization, which can be misleading (ability grouping can happen at the teacher level, often more effectively than at the school level, where more data can be used to make the decision; PD can be more effective than alternatives but still not improve student outcomes). Notice how the one grade where positive student outcomes were measured or reported was kindergarten, while SFA has K-8 literacy programs. Scripting phonics is a totally different ballgame from scripting reading comprehension, and it’s therefore no surprise both that (3-8) teachers are frustrated with the program and that almost all the positive results come from K-2.
I taught at a school with SFA after teaching at one with balanced literacy, where I actually read with the students, had conversations based on the text, and grouped them by their ability to read and understand a book of a particular complexity. SFA does none of these things: the ability grouping is based on a multiple-choice test, and in my class I had students who scored anywhere from the beginning of 4th grade to the end of 5th grade. I was unable to have real discussions with the students due to the regimented structure of the lesson, nor was I able to actually read with them, ask questions, or diagnose their word-solving or comprehension challenges. As a result, despite my having more teaching experience, my students made much less progress than my students at the first school had, a trend that held school-wide. Perhaps you should view teacher resistance not as a necessary wall to demolish, but as a signal of an overly restrictive and ineffective program.
What’s scary for me is that I know you’re missing a lot in hyping this program only because I actually taught it. How many times have you (or I, or anyone writing about education) taken a cursory glance at a study and gotten the wrong impression of a program or policy? I think we all need to do a better job of being consumers of educational research.
Hi Max,
Great thoughts. I’m mixed here. I agree that hype often gets in the way, as do anecdotes. It can cut both ways: a program can be overly promoted or unfairly castigated.
Here is an independent review of the SFA research, done by the research arm of the USDOE.
http://ies.ed.gov/ncee/wwc/interventionreport.aspx?sid=496
Thanks for the response. It’s interesting; I was looking at the What Works Clearinghouse as well, and I came to some different conclusions: that there are no studies addressing grades 4-8, that the strongest results were with alphabet skills, which are most relevant to K-1 (maybe K-2), and that there was only mixed evidence on comprehension skills, which are most relevant to grades 3+. I’m also a little concerned by the total number of studies (112) compared to the number that meets their standards (1). What do you think is the cause and effect of 111 studies (or 105, if we exclude those that meet standards with reservations) that are not rigorous or objective enough to meet their standards floating around the internet, the media, etc.? I imagine that a lot of SFA’s strong reputation and widespread presence may stem from these sub-par studies.
I think we’re both reasonable people, but it’s interesting that you see the evidence as saying there isn’t enough SFA, and I see the evidence as saying there is perhaps too much SFA in grades 4-8. At the very least, there need to be more rigorous studies of its effectiveness before we make any drastic changes in elementary school reading instruction.
I think the cause of the 111 to 1 is how much energy and sometimes $ it takes to do a true randomized trial.
I’m sympathetic to the professors and doctoral students who can’t do an RCT but still want to “study things.”
So they do correlation studies and publish them. Alas, I think it clouds the research base and I wish their incentives were different.