Language and Identification

A recent, totally engrossing Radiolab episode on color used Homer’s handling of color in “The Odyssey” and “The Iliad” to illustrate a startling fact: not once in either epic does Homer use the word “blue.” Dawn is rosy-fingered, the sea wine-dark, nothing blue. Later in the episode, the hosts visit the Himba tribe in Namibia, who have no word for blue and who struggle to pick out a sky-blue square on a computer monitor otherwise full of green squares (the official BBC link is here). The Himba people have fully functional color vision, but their brains aren’t seeing blue. Why not?

One hypothesis mentioned in the piece is that a culture must be able to synthesize a color before naming it, and, as teachers have been known to say, to name it is to know it. Differentiating shades and colors demands parsing an abstraction. As in deconstructing language or writing, terminology lets us apply labels to abstractions, just as blue labels the curious deep safety of green-minus-yellow. When students struggle to see what needs improvement in their writing, or even in their ideas, language helps. All too often, schools approach writing instruction haphazardly or formulaically, because it is so challenging a task. I have seen the power of a shared vocabulary and common, basic rubrics in action, like those adapted from “The Six Traits.”

Over the past few weeks I have watched students coin neologisms and contort common words like “flow” into descriptors for what they wish to create in their writing. This is my failure; I should provide a useful rubric for students to practice with, grow comfortable with, and apply to their own writing for personal growth. Once students begin to see distinct zones for improvement in their writing, they can learn independently by playing with their writing. If a student can’t name a fragment or a run-on sentence, she can’t find them or fix them. If a student can label distinct parts of her organizational structure, or identify strategies for improving her sentence fluency, she cannot be stopped from learning and making improvements as she sees fit.

Across the curriculum, if students and teachers share a basic functional vocabulary for writing, we will all see anew, see kernels amidst the chaos, see something hiding right now before our very eyes, obscured by the blindness of our minds. Language can begin to unwind the blindfold. We should let it.

Gamification: Bad Design, Good Design

I was struck today during a portfolio conference with a student who was laboring under the perception that we were adversaries, that her job was to guess my mindset and reflect it back at me, to fool me into believing she had learned something or improved her writing. She declared that she thought I might be refreshed by honesty. After a dozen amazing conversations with kids about their writing, I was rather stunned. Just prior to this conversation, I had proctored a final exam, after which a student squealed repeatedly about all of the “C” answers on the multiple choice: he had changed some because it seemed like too many Cs in a row to be correct.

So I believe this about gamification: when grades are on the line, design the game or be at the mercy of an implicit, insidious game. This game is inherent in school’s current design; we cheat to win games, to gain advantage. Shortcuts in games may win us some upper hand. When I was an offensive lineman, I was a master of holding, which is a penalty if caught. In fact, any decent O-lineman can tell you how not to be caught: keep your hands inside and let go if the defender spins or gets separation. Cheating in this case is built into the gameplay; defensive linemen learn how to get loose, to “break the hands” off the jersey. This isn’t necessarily a flaw in the design of the game, though, because a certain type of holding (hands outside the body) gets penalized 95% of the time.

In school, shortcuts save time or effort, and cheating saves even more. However, this isn’t built into the gameplay of school, because when kids cheat or take shortcuts, they lose. Granted, when we offer nothing of value to students, maybe they don’t lose. They do, however, develop odd superstitions like Skinner’s pigeons and erase a few Cs when all signs point to C. As I reflect on my own gaming of school as a teenager, I see that I suffered from arrogant self-perception (and likely still do; after all, I blog). The choices I made cheated me out of learning and out of experiences like speaking another language, and they hurt me. I like to believe that if I am open and transparent, and if I give students control over their learning and the expression of their learning, as in this portfolio assessment, they will take some ownership and do something that displays growth. Many do, some don’t, and I’m focused here on the negative.

What is the solution to the implicit games of school? Relationships first, transparency second. Third and fourth, choice and authenticity. If I can design a curriculum that is open, student-centered, and constructivist in nature, most students will come along. Designing a framework in which students can learn language skills by doing isn’t even that hard, but it sure looks different. Good design, thoughtful design, is important, because otherwise we stay victims of the implicit design and fight the same battles again and again as the gamers lose. At the very least, we could try to design in some more fun.

ZIS COETAIL Course 4 Project – Vertical Collaboration on Media Rubrics…And Beyond!

Cross-posted from the ZIS COETAIL cohort blog.

Shea and I worked on revising media rubrics for our Course 4 project. In my two years at ZIS, we haven’t done much cross-divisional work between English curriculum areas (CAs), probably because we are busy, busy, busy people. As such, this has been a very illuminating peek inside the villa, a look at how the English CA is using rubrics to assess and instruct student writing and media creation.

My original media rubrics assessed the media product. For performance assessments, the performance itself often constitutes the assessable product, so this made sense. These rubrics were based on the Upper School English CA’s Writing Rubric, which the CA developed before my arrival. Later media rubrics, however, focused more on the genre of writing or media that students were asked to create. Interestingly, student feedback on the earlier rubrics was that they weren’t terribly helpful for reflection or for identifying areas for improvement. Because we were learning media creation by consuming and analyzing media models, such as Radiolab for podcasts, I asked students to write our News Writing rubric based on the models they listened to and read, but in a different form than before. My master’s action research was on student-created rubrics built from models, and I am a big fan because students determine, and therefore internalize, the expectations for outcomes.

I chose to use a blank 6 Traits rubric because I have used the 6 Traits for years and find the breakdown apt for decoding and planning good writing. Students filled in the blanks based on what they saw as good, bad, or mediocre. When we reached the conventions band, we realized together that, since some groups were writing and others were podcasting, we needed dual conventions bands, one for each media type. This proved really powerful. Recently, I have begun working on a video rubric, as the kids are doing investigative reporting and creating a video report. Through revising my existing rubrics to jibe with Shea’s, I had an epiphany that also drew on the earlier experience of student-created rubrics: media is determined by conventions. I never needed that podcast rubric; I needed kids to know the conventions of the form. Beyond adherence to conventions, content, style, creativity, and format determine quality, and rubrics should reflect degrees of quality.

As I began to work with Shea, sharing feedback and making revisions, what became obvious was that our five-column rubrics clashed with the middle school’s four-column rubrics. A four-column rubric is best because it eliminates the lure of the middle ground and forces a decision on the part of the assessor. I often borrow bits from different grade bands as I assess a piece, which is as much a part of how I write rubrics as of how I see student work. However, the new four-column rubrics wound up stronger, I believe, than their predecessors. You may also note the blank band for video conventions. My students are viewing more media examples this weekend in order to fill in the blanks on Monday. Next, they will create a rubric for investigation, and we will simply copy and paste the genre conventions below it, merging the elements of quality into one rubric.

As I review these rubrics today, I see room for further improvement: “Sentence Fluency” could be better described (students wrote that, though, so it is meaningful to them). Also, what Sentence Fluency means for video may be so abstract as to demand a new band title. We’ll see. However, this process has led me to understand instruction and assessment of media creation in a new, more purposeful way. We can’t divorce content from form, period, and so our assessment tools should reflect that.

Further, by collaborating with Shea, I have seen in her revision an excellent clarification of media conventions wedded to content – media literacy demands are now embedded directly into her “Sell It” rubric. Also, I really appreciated her addition of an “Overall/Voice” band, which ties together the norms of an advertisement with the voice behind it. I’m not sure how to incorporate this into my current rubrics, but I will be considering a way to do so because it succinctly and explicitly illustrates the purpose for and function of the project’s outcome. Cool!

Working with Shea was great because it made creating better rubrics easier: our collaboration made my process much quicker and my final products stronger, less cluttered, and built on higher expectations for success. I look forward to more vertical teaming with Shea and my middle school colleagues in the future, not only because it is an enjoyable learning experience, but because it improves my teaching practice and, by extension, student learning.

On High Stakes Testing

I just read a fascinating blog post from Will Richardson entitled “The Parent Factor.” In it, Richardson discusses his experience of a meeting between the superintendent of a New York school district and 15 parents about changing their curriculum “from a traditional classroom to a more student centered, authentic, inquiry based classroom” and the possible impact on test scores. Richardson notes that high-stakes tests frustrate both parents and children, but that “test scores are seen as a hugely important factor in maintaining property values and in tracking student achievement.” I highly recommend giving his entire piece a read.

What struck me most reading this was how truly crazy the entire situation behind this meeting is. The tests don’t actually track student achievement in a meaningful way. The superintendent knows that, Richardson knows that, and the parents suspect that, but the stakes are so high for communities, from housing values to teacher employment to school funding, that they are forced to maneuver in convoluted ways to avoid the brutal punishment of NCLB censure: In Need of Improvement, Restructuring, and so on. A metaphor flashed repeatedly before me: it’s as if a village knows more or less how to meet all of its own needs, but a moody and capricious dragon in its midst must be consistently appeased through repetitive, time-consuming, mind-numbing ritual. Time is taken away from sowing fields, from teaching children, from playing games, from commerce, from conversation. Appeasement, always managing the basics to avoid the dragon’s scorch, overrides all other concerns.

I strive to make my views of high-stakes testing clear, and those views crystallized the day I saw a Navajo student in Tohatchi, New Mexico, try to answer this question on a state test:

Where are you most likely to see a motorboat?

  1. In the sky
  2. On a lake
  3. On the highway
  4. In a volcano (or something silly)
  5. Again, a silly option

Now, most students will recognize which answer the question wants, but that answer is not the right one in Tohatchi. A child in Tohatchi is far, far more likely to see a motorboat go by on the highway en route from Denver to Lake Powell than anywhere else. This test question could never judge achievement; it judges cultural capital. How can the good people of Tohatchi appease this dragon in an ethical manner? Why would they want to, if the fate of their schools and their children did not hang in the balance? My heart goes out to educators working to shape authentic, student-centered curriculum while the dragon lurks. At long last, when will we free our children from this abject nonsense and work to solve the actual problems at hand, rather than paper over them with bubble sheets?