Guest Post: Measuring Early Childhood Classroom Quality
The American Educational Research Association (AERA) held its annual meeting in San Francisco in May, and we asked some of the journalists in attendance to cover a few of the sessions for us. Given that early childhood education is back on the front burner, it seemed like a good time to share this post from Martha Dalton of Public Broadcasting Atlanta. In case you missed it, U.S. Secretary of Education Arne Duncan fielded questions from reporters last week in an exclusive EWA webinar. You can catch the replay here. And for more on early childhood and preschool, check out EWA’s Story Starters online resource.
Early childhood education programs in the United States are like a patchwork quilt: Most states have a combination of programs, ranging from state-run prekindergarten to independently operated centers to federal initiatives such as Head Start.
And because so many of these programs are fairly new, so are the systems used to evaluate them.
As a result, research on effective evaluations for both these educators and the programs overall is preliminary, according to Jerry West of Mathematica Policy Research. Speaking at the American Educational Research Association conference in San Francisco, West said many current evaluation systems rely heavily on classroom observations. “We are just beginning to understand what makes good and bad observation protocol,” he said.
Education publishing companies have not wasted any time offering their own tools, however. There are several of these instruments on the market to help administrators evaluate preschool teachers. But Debra J. Ackerman, a researcher with Educational Testing Service, noted that those systems are imperfect. “No one tool can measure everything you need to know,” she said.
Ackerman surveyed 54 early childhood education programs to identify how they evaluate staff. She found, as West did, that many evaluation models are built around classroom observations. Due to the subjective nature of observations, Ackerman said, several factors can affect a teacher’s score, including her relationship with the observer.
One instrument that is widely used to evaluate preschool teachers is the Classroom Assessment Scoring System, or CLASS. Martha J. Buell, a professor at the University of Delaware, studied how the CLASS is used. CLASS grades teachers on three criteria: emotional support, classroom organization and instructional support.
Using data collected from 31 teachers in two Reading First programs, Buell concluded that the context in which the CLASS is used affects its results for teachers. For example, she said, some teachers are better at certain skills in different situations. At centers that allow for more independent student learning, a teacher may be able to move around the room and provide more instructional support. In addition, Buell said, the type of lesson a teacher delivers during an evaluation can affect her score.
Programs are also required to undergo evaluations. One such system is the Program Administration Scale. Asia Foster Nelson, a professor at Johnson County Community College in Kansas, studied how programs apply changes based on feedback from the PAS. Nelson also examined staff perception of the changes.
Through interviews with program directors and staff, focus groups, and surveys, Nelson found most teachers viewed the instrument as an administrative-centered tool. Staff members said if they had more input, they would feel a larger sense of ownership or buy-in to the program. Nelson concluded that staff members need to be included in decision-making in order to gain a full understanding of the program.
One factor hindering that staff engagement, Nelson found, is that program directors often have a hard time sharing responsibility. “They think if they hand the program over to someone else, it’s like a house of cards; everything will fall apart,” Nelson said.
She concluded that program directors will need to learn to let go a little in order to bring their staff into the fold and create a sense of shared responsibility for their programs.