Reflection on my demo of “Avoiding FRACK-ing by Building Better Rubrics” #uiwp2012

This post is in response to my presentation / teaching demonstration, “Avoiding FRACK-ing by Building Better Writing Rubrics” (slides here), for the 2012 University of Illinois Writing Project Summer Institute. After I presented, other UIWP fellows also sent me feedback.

I ran out of time, but I feel such is the case for presentations that rely so heavily on tacit knowledge. Rubrics are a touchy topic, but one I had a lot to reflect on. In fact, most of the demo/presentation was really a reflection on my growth over the past year in the New Tech / PBL environment. For me, the biggest downfall I experienced while assessing writing last year was trying to build a 3-column project rubric for writing. Project rubrics often resemble checklists more than scales of execution; writing rubrics need to come with more columns and more emphasis on the values of writing and on critical thinking.

Likes

I’m glad to see that my conceptualized “Discovery Rubrics” (see slide 22) were well received. With time, I hope that idea continues to evolve, offering me a way of promoting risk-taking and revision based on student inquiry/choice. With all of the talk about getting students to write more but revise more carefully, I hope the discovery rubric will invite and structure those goals in my classroom next year. My audience definitely took from me that rubrics should be looked at as tools for assessment/feedback (and thus formatted as such) as much as they should be seen as tools for evaluation.

I only mentioned once that rubrics could be community-constructed, and the rest of the cohort here at UIWP seemed to really latch onto this. Sure, I was at the end of the line of a series of great presenters, but I am glad to hear that many would welcome an added discussion on how to go about co-creating rubrics with students.

Wonders

My audience wondered about the authenticity of the breakout activity (they were to try to use Bloom’s terms and a critical thinking score to build a rubric…in teams/groups). Since everyone in a team/group was coming from a different background, they found themselves still generating generalized categories without having an assignment in mind. The task would have been better accomplished individually, so that my audience could “localize” a rubric (a term I used to describe attributing to a rubric the values of a classroom’s writing community) to a task they would use in the next school year. Unfortunately, collaboration fell apart here under such a time crunch. I wonder where conversations could have taken us had we had the time for discussion and had the activity been paired or individual rather than group-engaged.

Final thoughts

I was reminded by another UIWP fellow as I was debriefing that “the same way that formulas do the hard thinking for kids is the same way that rubrics do the hard thinking for teachers… just ahead of time.” As I continue to grow as a teacher, I want to think hard about feedback (and then assessment) ahead of time; I should not be let off the hook in that regard. Just as there should be no formula for my kids to cruise along on, so should it be with my evaluation/assessment tools: we must think critically and reflect so that we do not mindlessly fill in our gradebooks…or follow the guy in front of us off the bridge.
