Assessing assessment

This afternoon I attended the annual meeting of the Independent Curriculum Group, and for a pretty mellow group of educators we got ourselves kind of stirred up, in a good way.

As an organization we aim to get schools talking about curriculum, and about why it’s a good thing for schools to develop their capacities to create their own. This year we’ve been focused on assessment, with groups of teachers, administrators, and occasionally students (Yay!) gathering in groups around the country to talk about the challenges of assessing “21st-century learning.” These conversations seem to have ranged from smallish and speculative to big and substantive—in every case, as I take it, exemplary of the kind of teacher talk of which we get to do far too little in our day-to-day lives in schools. A few groups have used the Looking At Student Work protocol approach that I wrote about a few weeks back—the Tuning Protocol, popular in Critical Friends work, proved especially useful.

A question on everyone’s mind seemed to be, So where do we go next with these conversations? Do we compile a giant “assessment bank,” or do we gather more groups and create “master assessments” that could be used in multiple schools to measure the efficacy of our collective work teaching certain kinds of skills? Some folks suggested generating documents on the essential characteristics of effective assessments, while others suggested creating a typology of assessments. Appropriate links were made to models like Understanding by Design and the work of Project Zero. Would master assessments become a new kind of “standardized test,” the very habit the ICG was founded to help schools kick?

Some interesting things emerged: schools that don’t give tests, schools that take exhibitions very seriously, and schools whose students (having learned this from teachers, I presume) now use “assessment” as a euphemism for all the work they’re expected to do, but especially for tests, as in, “You have a full-period math assessment on Chapter 5 on Tuesday.” At one point the talk grew lofty, with reference to mission and values, including a very astute observation by Emily Jones of The Putney School that “We’re all assessing different things for different purposes with different children”—a nice reminder that each school provides its own context for even the most standardized of tests. Josie Holford of Poughkeepsie Day School knocked our socks off by suggesting that any enumeration of “essential 21st-century learning capacities” (yes, I’m talking about YOU, NAIS Guide to Becoming a School of the Future) ought to include “resiliency in the face of failure” as one of those capacities.

In the end we kind of left ourselves up to the knees, at least, in the gelatinous goo that is the assessment of learning. I think we even agreed, more or less, that the work of the small groups should continue, perhaps focusing on how to tease out or define the qualities and characteristics of these oft-cited essential capacities. (Think about it: “Global Perspective”—What does this look like? How do you construct the scaffold by which it can be taught? How do you measure it? The capacities may be easy to list, but they’re damn hard to assess.)

Nothing earth-shattering took place in our meeting, but I think we all went away kind of jazzed by the conversation, no small feat for a late afternoon at a big convention. (We are all at the National Association of Independent Schools Annual Conference, whose theme, “Innovation,” we seem determined to make into the new “excellence” (see my earlier grumpy post on this trend), when instead it should mean, “It’s time for all you schools to catch up with what’s been going on in education for the last 40 years or so!”) It’s nice to be among people from a variety of schools from all over the place who care about all this.

It’s nice to work in this profession, in fact.
