(Meta-)Assessing (with) CDP

Guiding Questions

  1. How does CDP change the intent of, or need for, assessment?
  2. What assessments (or tools) work? How will we assess our own work this week?

If liberatory education is student-led and student-driven, how can it be assessed? Once assessment enters the learning equation, the relationship between teacher and student irreversibly changes. When we see ourselves as guides or supports, we relate to students essentially in their service, functioning as a resource they can use to their benefit. Once we assume the task of assessing their performance, our role shifts to that of judge or even gatekeeper. We relate to students as their superiors, and they serve our needs until we are satisfied with their work. The relationship is inherently adversarial. Any kind of growth in that situation becomes a challenge.

Assessing With CDP

How, then, should we assess students? Are we even the right fit for the job? Today’s readings challenge that paradigm. Peter Elbow asserts that assessment (which he calls “judgment”) serves three specific functions, none of which improves learning. I wrote that we should outsource grading, and Cathy N. Davidson reports success with crowd-sourcing it. Shea Swauger uncovers the terrors of test proctoring, and Asao B. Inoue proposes ways that changing assessment can reduce social violence.

Each of these authors encourages us to re-imagine how assessment works and who should be doing it. As I mentioned in one Monday Jam Session, if evaluation appears at the top of Bloom’s original taxonomy, students evaluating work should be learning at the most advanced levels. Yet schools rarely ask students to evaluate work, TAs notwithstanding.

The Goal of Assessment

What would assessment look like under the premises of critical digital pedagogy? How could assessment work in our classes if we hand that responsibility to students? How might we address the inevitable concerns of those who champion rigor?

As an experiment, try this activity: Compose a position statement on the goal of assessment, using exactly one full tweet. In other words, use exactly 270 characters, plus a space, plus the #CritPrax hashtag, to fill the 280 characters allowable in a single tweet. Then, compose a follow-up tweet in which you apply that position statement to your first tweet. In other words, grade yourself. How well does that process work? What standards did you use for assessment?
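The character budget in that activity can be checked in a few lines. This is only a sketch: it assumes the standard 280-character tweet cap and counts characters with Python string length, whereas Twitter’s own counting weights some characters (such as emoji and URLs) differently.

```python
# Sketch of the tweet-budget arithmetic from the activity above.
# Assumptions: a flat 280-character limit, plain len() counting.

HASHTAG = "#CritPrax"   # 9 characters
TWEET_LIMIT = 280

# Room left for the position statement after " #CritPrax" is appended.
statement_budget = TWEET_LIMIT - len(" " + HASHTAG)
print(statement_budget)  # 270

def fills_tweet(statement: str) -> bool:
    """True if statement + space + hashtag exactly fills one tweet."""
    return len(f"{statement} {HASHTAG}") == TWEET_LIMIT

# A 270-character statement exactly fills the tweet.
print(fills_tweet("x" * 270))  # True
```

So the 270 characters, the single space, and the nine-character hashtag together account for exactly 280.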

In Radical Hope: A Teaching Manifesto, Kevin Gannon shares a beguiling activity he does with his seminars. I won’t spoil the fun of his anecdote by sharing the details here, but his point is that our standards for assessment need to be relevant, obvious, and sensible. If they aren’t, even the most conscientious teachers can get viscerally angry. And that doesn’t help anyone.

Assessing CDP Itself

How do we know whether critical digital pedagogy actually works? Much has been written about critical pedagogy (see especially Freire, hooks, and Shor) and about critical digital pedagogy (see especially Stommel and Morris), but the vast majority of this work uses a critical narrative style that eschews empirical study and draws criticism for confirmation bias. Beyond the intuitive sense CDP often seems to make, how can we know it’s working?

In Wednesday afternoon’s Jam Session, we chatted briefly about outcomes. Specifically, we discussed how broad outcomes can be (“Students will have an epiphany”) or how ineffective they can be (“Students will learn stuff”). But is there a way to create student-learning outcomes that will meet the needs of institutional accreditation and vertical alignment while also serving the principles of CDP? Can we have our cake and eat it, too?