‘Scuse Me, While I Kiss This Guy!

In the annals of misunderstandings, maybe my favorite is the “Purple Haze” syndrome. Mishearing lyrics or poetry even has a name: the mondegreen. I remember, hilariously, a friend making this mistake in real time; so awesome is the memory that I doubt its existence, like snipe hunting or cow tipping, a signifier masking misadventure.

In fact, so pervasive is Purple Haze syndrome that some believe the mishearing to be based on fact, even naming their mondegreen websites after the lyric, ironically exploring the possibility that, if Hendrix really did sing it that way, the whole paradigm of their site is bankrupt.

But that’s not what I want to write about.

Sometimes, even when we say what we mean, it gets misheard. Misunderstandings arise as language bubbles through emotional and physical filters, from stress to the cochlea. Saying exactly what we mean, then, is essential, especially when we are instructing children or offering feedback.

A student came into my classroom yesterday, venting: “I just don’t know what she wants from me!” What is she hearing? What is being said? There’s almost no way of knowing.

I’m working on condensing a general-use 6 Traits rubric to 4 traits based on feedback from my English department. People seem generally happy with it so far, though some colleagues found it too specific. I’ve been processing that feedback, and I’ve come to an understanding: specificity expresses expectations. An analytic rubric should express expectations for the product. As such, an analytic rubric must be specific.

Additionally, being specific demands that we make decisions about what good products or outcomes are. Too often, the hidden curriculum of what a teacher likes or wishes for filters through a rubric, leading only to grades in the end. The student gets a grade and tries again next time. A solid analytic rubric communicates expectations, ideally in language the student understands and has practiced with. The hidden curriculum will still exist in some form, but the student can be empowered to improve in a creative cycle of solid feedback and reflection built around a good analytic writing rubric, for example.

Even when expectations are clear, the student has to apply them and come to know them personally, through their own writing (or other performance) and through their own personally significant mental models. Until then, pieces of a complex rubric will be mini-mondegreens, limiting student learning and agency.

We’ve got to be specific and clear. We’ve got to be repetitive when it matters. We’ve got to engage in cycles of attempts and feedback. And we’ve got to give students experience with the expectations to internalize them meaningfully. Because even when we do, someone is going to hear something differently.

Now, excuse me, while I…

ZIS COETAIL Course 4 Project – Vertical Collaboration on Media Rubrics…And Beyond!

Crossposted from the ZIS COETAIL cohort blog.

Shea and I worked on revising media rubrics for our Course 4 project. In my two years at ZIS, we haven’t done much cross-divisional work between the English curriculum areas (CAs), probably because we are busy, busy, busy people. As such, this has been a very illuminating peek inside the villa, checking out how the English CA is using rubrics to assess and instruct student writing and media creation.

My original media rubrics assessed the media product. For performance assessments, the performance itself often makes up the assessable product, so this made sense. These rubrics were based on the Upper School English CA’s Writing Rubric, which the CA developed before my arrival. Later media rubrics, however, focused more on the genre of writing or media that students were asked to create. Interestingly, student feedback on the earlier rubrics was that they weren’t terribly helpful for reflection or for identifying areas for improvement. Because we were learning media creation by consuming and analyzing media models, such as Radiolab for podcasts, I asked students to write our News Writing rubric based on the models they listened to and read, though in a different form than before. My Master’s action research was on student-created rubrics built from models, and I am a big fan: students determine, and therefore internalize, the expectations for outcomes.

I chose to use a blank 6 Traits rubric because I have used the 6 Traits for years and find the breakdown apt for decoding and planning good writing. Students filled in the blanks based on what they saw as good, bad, or mediocre work. When we reached the conventions band, we realized together that, as some groups were writing and others were podcasting, we needed dual conventions bands, one for each media type. This proved really powerful. Recently, I have begun working on a video rubric, as the kids are doing investigative reporting and creating a video report. Through revising my existing rubrics to jibe with Shea’s, I had an epiphany that also drew on the earlier experience of student-created rubrics: media is defined by its conventions. I never needed that podcast rubric; rather, I needed kids to know the conventions of the form. In addition to adherence to conventions, content, style, creativity, and format determine quality. Rubrics should reflect degrees of quality.

As I began to work with Shea, sharing feedback and making revisions, it became obvious that our five-column rubrics clashed with the middle school’s four-column rubrics. A four-column rubric is best because it eliminates the lure of the middle ground and forces a decision on the part of the assessor. I often borrow bits from different grade bands as I assess a piece, which is as much a part of how I write rubrics as of how I see student work. However, the new four-column rubrics wound up stronger, I believe, than their predecessors. You may also note the blank band for video conventions: my students are viewing more media examples this weekend in order to fill in the blanks on Monday. Next, they will create a rubric for investigation, and we will simply copy and paste the genre conventions below it, merging the elements of quality into one rubric.

As I review these rubrics today, I see room for further improvement: “Sentence Fluency” could be better described (students wrote that, though, so it is meaningful to them). Also, what Sentence Fluency means for video may be so abstract as to demand a new band title. We’ll see. However, this process has led me to understand instruction and assessment of media creation in a new, more purposeful way. We can’t divorce content from form, period, and so our assessment tools should reflect that.

Further, by collaborating with Shea, I have seen in her revision an excellent clarification of media conventions wedded to content: media literacy demands are now embedded directly into her “Sell It” rubric. Also, I really appreciated her addition of an “Overall/Voice” band, which ties together the norms of an advertisement with the voice behind it. I’m not sure how to incorporate this into my current rubrics, but I will be considering a way to do so, because it succinctly and explicitly illustrates the purpose and function of the project’s outcome. Cool!

Working with Shea was great because it made creating better rubrics easier: my process was much quicker and my final products stronger, less cluttered, and built on higher expectations for success. I look forward to more vertical teaming with Shea and my middle school colleagues in the future, not only because it is an enjoyable learning experience, but because it improves my teaching practice and, by extension, student learning.