Blog: The Educated Reporter

Guest Post: Seeking Common Ground on Teacher Evaluations

EWA asked some of the education reporters who joined us at our 66th National Seminar (held at Stanford University in May) to contribute blog posts from the sessions. Today’s guest blogger is Debbie Cafazzo of the Tacoma News-Tribune. Stream sessions from National Seminar in your browser, or subscribe via RSS or iTunes. For more on teacher evaluations, visit EWA’s Story Starters online resource. We also recently held a seminar for journalists on this issue at the University of Chicago, and we’ll be sharing content from the event in the coming weeks.

Linda Darling-Hammond, a professor of education at Stanford University, has been on a quest for the ideal teacher evaluation system for decades. She started researching the subject in the mid-1980s. Back then, she said, it was like “searching for a needle in a haystack.”

Two factors caused the movement to take off in recent years: the federal government’s “Race to the Top” grant competition and its willingness to grant states waivers from provisions of the No Child Left Behind Act.

Both federal initiatives rewarded states that tried to strengthen evaluation systems. She said there was consensus among educators that a fleeting classroom visit, a look at how the teacher is dressed, and a scan of classroom bulletin boards are not the best way to sort out who’s doing an exemplary job.

Roadblocks to getting it right include principals with too many teachers to evaluate, as well as principals who lack expertise in all the content areas they are asked to evaluate, Darling-Hammond said.

Her new book, “Getting Teacher Evaluation Right: What Really Matters for Effectiveness and Improvement,” offers examples of states and districts that are getting it right.

The best start is to begin with a set of standards: “When feedback occurs around standards, teachers get better.”

She emphasizes the importance of having teachers demonstrate upon entry to the profession that they’re ready to teach. And teachers need authentic professional development – not “spray and pray workshops” – to keep them moving on the right track.

“If we do it well, it enhances the teaching profession,” she said. “If we do it well, we send a message that teachers are professionals who are doing important work.”

Ray Salazar, an English teacher in Chicago since 1995 and blogger at The White Rhino, observes that teacher evaluations have gone from “a private conversation between a teacher and a principal” to a public conversation that includes “anybody and everybody.”

The result: added tension, but also an opportunity to make needed changes.

In Chicago, the 1980s-style checklist of strengths and weaknesses is out, and a new framework based on the Danielson model is in. (Darling-Hammond endorsed the model, calling it “thoughtful,” but said there are also other good ones out there.)

Elements include planning and preparation, classroom environment, instruction, and professional responsibilities.

The national conversation about evaluation systems is helping to define what makes a good teacher. But Salazar’s kids at Hancock High already know how they define one: “A good teacher believes that even the student in the back of the class, with his head down, can succeed. … Good teachers react quickly when they notice that a student is struggling.”

Salazar’s students believe they need to have a voice in teacher evaluations as well.

Good evaluations can help bad teachers either improve or choose to leave the profession.

Doing what’s best for students is not enough, Salazar said. You also have to do what’s manageable for teachers.

“We have to get feedback at regular intervals … that are thoughtful, that are from a person who can engage with teachers at different performance levels, and really help them improve their craft,” Salazar said.

David Steele is chief information and technology officer for Hillsborough County Public Schools in Florida. The district won a $100 million grant over seven years from the Bill & Melinda Gates Foundation to develop a new teacher induction and mentoring system, as well as to pursue other goals, including improving teacher and principal evaluations.

Hillsborough is investing “tremendous effort” in supporting first- and second-year teachers, Steele said. Last year, 94 percent of the district’s first-year teachers returned.

The district aligns professional development with evaluations. Evaluations draw not only on principal observations but also on peer evaluators assigned to every teacher. Each peer evaluator works with about 20 teachers.

The district uses the Danielson framework, and it has worked with the University of Wisconsin on its value-added measurement system, which counts for 40 percent of a teacher’s evaluation. Steele said the district’s value-added system uses more than one type of test: “We don’t want to get to where evaluations are based on one test on one day.”

Steele also believes it’s important for observers in the classroom to be properly trained. Those in his district receive 50 hours of training before they are allowed to do their first teacher observation.

Finally, the district wants to revamp its compensation system to reward great teachers earlier in their careers. Steele said it’s important to do the work with teachers, not to teachers.

“It’s all about student achievement,” Steele said. “Effective teachers drive student achievement.”

The practice of incorporating value-added measures into teacher evaluations has grown quickly in recent years, generating significant controversy. Such measures use statistical models that track student growth on standardized tests to estimate a teacher’s impact on student learning.
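For readers who want a concrete sense of the arithmetic, here is a minimal sketch of a growth-based measure under simplified assumptions that are ours, not any district’s: each student’s expected score is predicted from a prior-year score with a single straight-line fit, and a teacher’s “value added” is the average gap between actual and expected scores for that teacher’s students. The data and variable names are hypothetical.

```python
# Toy illustration of a growth-based ("value-added") measure.
# Assumptions: expected current-year score is a simple linear function of the
# prior-year score, fit across all students; a teacher's estimate is the
# average residual (actual minus expected) for that teacher's students.
# Real systems are far more elaborate (multiple tests, student covariates,
# shrinkage adjustments).

from statistics import mean

# (prior_score, current_score, teacher) for a handful of hypothetical students
students = [
    (62, 70, "A"), (75, 80, "A"), (50, 58, "A"),
    (62, 64, "B"), (75, 74, "B"), (50, 55, "B"),
]

# Fit expected_current = intercept + slope * prior by ordinary least squares.
priors = [s[0] for s in students]
currents = [s[1] for s in students]
p_bar, c_bar = mean(priors), mean(currents)
slope = sum((p - p_bar) * (c - c_bar) for p, c in zip(priors, currents)) / sum(
    (p - p_bar) ** 2 for p in priors
)
intercept = c_bar - slope * p_bar

# A teacher's estimate: average of (actual - expected) across their students.
residuals = {}
for prior, current, teacher in students:
    expected = intercept + slope * prior
    residuals.setdefault(teacher, []).append(current - expected)

for teacher, gaps in sorted(residuals.items()):
    print(f"Teacher {teacher}: value-added ~ {mean(gaps):+.1f} points")
```

In this toy example, a positive number means a teacher’s students outgrew what their prior scores predicted; real value-added models layer on many more controls and adjustments, which is part of why Darling-Hammond and others caution against leaning on them alone.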

For the first time this year, 10 percent of Salazar’s evaluation will be based on a value-added measurement, using a reading test tied to the Common Core State Standards. Value-added measures can be part of an evaluation, Salazar said, but they shouldn’t be all of it: “There’s too many nuances to teaching to be able to say this one assessment is going to determine if I keep my job or what my reputation as an educator is.”

Steele said that over-interpreting or over-using value-added measures is just as bad as not using student results at all. He’s critical of a Florida law that forces officials to set overly rigid targets for teacher ratings.

Darling-Hammond said lots of studies show value-added isn’t reliable for many teachers. Value-added scores can be one measure among multiple measures, but shouldn’t be used alone, she said.

Darling-Hammond said many state tests only measure student performance against a grade-level standard, and measure low-level skills. Putting too much reliance on those tests forces teachers to focus too heavily on those skills, to the detriment of higher-level skills, she argued.

Have a question, comment or concern for the Educated Reporter? Email EWA public editor Emily Richmond at erichmond@ewa.org. Follow her on Twitter: @EWAEmily.

 


