Growth Mindset Research – Sigh.

It seems like the hottest trend in “hard” science academic research is to find the sacred cows of social science research, qualitative or quantitative, and slaughter them. Amy Cuddy likely stands as the poster child for this wave, but any research that suggests implicit bias, like the stereotype threat research, has been caught in the crossfire of the culture war that academia, and the rest of us, seem intent on waging (on itself).

Next up? Stanford’s Carol Dweck and her acolytes, like David Yeager at the University of Texas, are having the validity of their research challenged via meta-analysis, on the basis of what appears to be one of the current threads of attack on social science research: publication bias. In the metastudy from Michigan State, researchers contacted other academics whose growth mindset interventions failed to show results and were never published.

The researchers found a weak correlation between growth mindset interventions and academic achievement, terms they no doubt operationalized somehow. According to the study, interventions for children and adolescents had a larger effect than those for adults, and the researchers found that “students with low socioeconomic status or who are academically at risk might benefit from mind-set interventions.”

So, ok. I, too, doubt the depth of effect of short reading and writing interventions on long-term implicit belief frameworks and self-concept. However, all of this – all of it – seems to miss the point of growth vs. fixed or entity mindset as a conceptual framework. The deep DNA of schooling in the English-speaking world is to sort the children into ability groups largely predetermined by the social power of the group a child is born into. The concept of a fixed, largely predetermined, innate level of inherited ability is old, but not dead (of course). Sir Ken Robinson’s famous “19th century factory model” analogy resonates, and the bell curve undergirds most testing and assessment. Leading reactionary asshole Jordan Peterson makes a pretty Brave New Argument about the sorts of jobs people can handle by IQ. Fixed mindset, originally termed entity mindset to denote ability as something born within us all, is paradigmatic within and beyond education, so much so that Peterson and his ilk view ability with Joseph Campbell-like depth, woven into reality and expressed by it. But lots of us believe it. Just ask anyone if they are a “math person.”

But as controversial as much of the above might be, what is uncontroversial is the expectancy effect, commonly referred to as the Pygmalion Effect, a much-replicated reality in which expectations drive outcomes. Rosenthal produced a study in the early 1960s in which researchers were asked to time rats running a maze; some were told their rats had been bred for exceptional intelligence – high fixed ability, one might say – others were told nothing, and still others were told their rats were of low intelligence. The rats were all just rats, like us all, really. The “smart” rats were the fastest, etc. Labeling – the labeling effect – matters, and drives outcomes. It works in the classroom in exactly the same way.

Growth mindset is an important implicit belief for teachers to hold, truly, as a north star. Without it, they will implicitly lower their expectations for kids, particularly if those children arrive in the classroom with labels that suggest low ability. Everyone can learn, and of course, for certain kinds of learning, some kids come with innate strengths thanks to biology, nutrition, the number of books in their home, and so on. I don’t believe any metastudy has operationalized the implicit beliefs of teachers through a growth mindset lens, and I won’t hold my breath waiting for one. Truthfully, most people’s experiences with school echo those of Peterson – the infallibility of the Sorting Hat effect. They’ve internalized the fixed mindset communicated by the very structure and purpose of school.

But it doesn’t have to be that way. We can build for growth, and teach for growth relative only to more growth, to paraphrase Dewey. This is harder to measure than brief interventions, but it is what will prove the deep conceptual importance of Dweck’s work.

 

The iPad 2 for Learning: Initial Questions

As we move forward in the thick stew that is the beginning of any international school year, as faculty shake off the jet lag and slowly lose their suntans, a few questions have arisen in our iPad 2 pilot project that need to be (and soon will be) ironed out once the higher-priority tasks are ticked off the list. They are:

  • Syncing – If students have individual iPads to use daily and take home at night, should they be synced to an individual laptop in a 1 to 1 school like ours or to a central Mac for managing purchased apps, etc? It’s a pilot and we are well-resourced, but we don’t have a blank check for sweet games at 10 bucks a pop. We are heading toward syncing to a central computer, but that brings up…
  • When do iPads get synced to a central computer? How much ownership will kids lose or perceive themselves as losing when they give up the iPad for syncing? Does this matter at all?
  • What about the accessories? Clearly, the iPad needs a protective case, needs paid apps, needs charging, which, if centralized, becomes pretty expensive quickly.
  • In a 1 to 1 environment with laptops and iPads, how will students manage care of their electronics? Are their hands full already in a purely concrete respect?
  • Will the iPad create an efficient workflow for kids, or will it be a Personal Distraction Device?
  • The million (and millions of) dollar question: How on Earth can the iPad be anything other than an engaging, useful sidecar to a solid computer? I spent an hour today making a Google Docs workflow document in Adobe InDesign, complete with flow charts, and I couldn’t even come close to duplicating it on an iPad based on what I have been able to find so far. It’s simply not tooled up for that level of creativity. Which brings me to…
  • What do we want kids to do in school? If the iPad doesn’t unleash the full potential of current computing technology – for kids to explore, tinker, discover, and make – and we treat it as a laptop replacement, what are we doing wrong?

These are my big questions so far, and the students aren’t even back yet. But within two weeks kids will have their hands on the iPads, so I want to be collecting answers and revising questions immediately. I really wonder what issues and questions other teachers working with iPads have at this point, and I need to do a little digging in the next few weeks.

Thinking About Feedback

As I rounded essay number 45 or so and headed for third base today, my eyes were dry and I had the familiar essay ache that doubtless plagued my students at the end of their timed write. I enjoy reading student writing, and actually look forward to assessments like timed essays because they give me data, information on what kids have learned, improved upon, missed completely, or ignored outright. I write a lot of feedback on student writing, and I push myself to be specific every time. I also try to focus on no more than three areas of growth, tied to our writing rubric, for each kid each time. There are many balls to keep in the air, including goals from previous writing assessments, but I dig it and enjoy the interactive nature of reading student writing and providing specific, targeted feedback.

So I read, I write, and I give students back their writing. They flip to the grade, roll their eyes, give high-fives, gasp in delight or horror, and ignore everything else. In the past, I had students who were much less grade-driven and/or classes with so few students that I could sit down with each one individually and discuss their performance at length. I’ve made some minor changes to how I provide feedback, asking students to write metacognitive responses prior to seeing feedback or grades, but in larger (but by no means large) classes, I haven’t found the magic trick that will move students past simply looking at grades and shutting down or throwing up defensive walls. Of course, the same thing that works every time takes a long time to establish: a mutually respectful, open, and honest collegial relationship.

So, I have some ideas about what works and what doesn’t when it comes to feedback and focusing students on feedback. What doesn’t work:

  • Grade Centrism – Grades just get in the way. In a perfect situation, in which any rubrics handed down from on high are valid, used with and by students regularly, and common across curriculum areas, grades become measures of performance. In less-than-perfect situations, grades quickly turn into arbitrary judgements of the good and the bad, the smart and the not-smart, or whatever the teenaged mind might read into the ambiguity between performance and grades. Not good; feedback doesn’t get through here.
  • Competitive Academic Environments – Collegiality counts. If you are an obstacle to my success, if this is a zero-sum situation, we’re in trouble. Related to the above.
  • Shifting Language – As a writing teacher, it’s a little crazy to me how many terms teachers have for the word “thesis.” It’s equally crazy how many different ideas teachers have about what a thesis should be. If I laud a student’s voice, and another teacher applauds that student’s style, and another teacher cheers that student’s tone (without meaning tone as I define it: the speaker’s relationship to the subject), the student will think she is doing three things well. If one of us gives negative feedback on voice/style/tone/etc., how will she fix the problem? This even happens in math, I think, when kids learn different terms for the same operations at different levels. We have to recognize that this means learning the same thing differently, time and again. Getting our language aligned can streamline learning and certainly make feedback laser-focused.
  • Vague Feedback – I learned this from Grant Wiggins. “Good job!” tells a student nothing. Every time I write “Yes!” or “Great!” it’s a clarion call to keep writing and get specific: “Yes – sensible identification of tone in narration and its effect on the theme of confusion in the text!” or “Great use of a signal verb to introduce a detail from the text!”
  • Dropping It Like It’s Hot – Handing work back and moving on doesn’t cut it. Got, got, got to go metacognitive, ideally before they see my feedback at all. This can be tough sometimes, but it must be done. It goes hand in hand with portfolio assessment, which is why I say we’ve got, got, got to be doing e-portfolios, but that is for another day.

There are more things that don’t work. What works reads like a flipped list:

  • Performance Feedback, not Grades – Sure, grades, I get it. It’s the way we do things. Sweet. Still, let’s change school cultures to focus on performance, through authentic performance tasks for assessment. Let’s show kids what great is, how to create great, and then assess the result with lots of specific feedback.
  • Cooperative Academic Environments – Nobody is an obstacle to your success – they are an asset, either utilized or ignored. It’s a paradigm for mutual success. If this is working, everybody can provide constructive, specific feedback at any level, in any direction, and everybody learns, including instructors and administrators.
  • Aligned Language – Make the language match across the disciplines. Wow, does this take a lot of work. It’s worth it, though. Ancillary benefits include clearer expectations and a broader conversation around big ideas like differentiated instruction and assessment: what that means, and what non-negotiable performance benchmarks might be. I can’t think of any bad outcomes from this slow process.
  • Specific Feedback – Specific and aligned to expectations shared in advance of, as part of, or through instruction. Language must be non-judgmental, but also clear in terms of what has been done well, what hasn’t, the implications, and the path forward.
  • Spending Time with Feedback – Here’s a great opportunity for metacognitive response, conferencing (portfolios!), revision, peer discussions, and so much more. My action research for my MAT focused on student-created rubrics built from model work or exemplars – not all of it was perfect, so perhaps “model” is a misleading term. Students can create powerful assessment tools and, in so doing, truly internalize the expectations and produce amazing products as a result. It’s like a feedback loop inside a feedback loop.

Anyway, here’s a quick breakdown of what works from Grant Wiggins, as published by NewHorizons.org:

Elements of an educative assessment system:

1. Standards

  • specifications (e.g. 80 wpm w/ 0 mistakes)
  • models (exemplars of each point on the scale – e.g. anchor papers)
  • criteria: conditions to be met to achieve goals – e.g. “persuasive and clear” writing

2. Feedback

  • Facts: what events/behavior happened, related to goal
  • Impact: a description of the effects of the facts (results and/or reactions)
  • Commentary: the facts and impact explained in the context of the goal; an explanation of all confirmation and disconfirmation concerning the results

3. Elements of Evaluation

  • Evaluation: value judgments made about the facts and their impact
  • Praise / Blame: appraisal of individual’s performance in light of expectations for that performer

4. Elements of Guidance

  • Advice about what to do in light of the feedback
  • Re-direction of current practice in light of results

There is more outstanding information in the Wiggins article linked above, and here, regarding how to create a feedback cycle. It’s genius in its simplicity and power. At any rate, as I read, wrote, and reflected, I wondered what makes me effective as a writing teacher. As I consider all of the things I’m doing differently now from last year, it’s the consistency of my feedback on student writing that helps students learn and improve more than any one thing. At least, that’s my thought for this busy Sunday, and it’s what led to the reflections herein. I wonder what works for other people in terms of providing feedback for student learning.

Learning Information is a Reflective Process; Get Started with a Test!

A fascinating study has just been published in the journal Science regarding study strategies and their effectiveness in improving later recall, or retrieval, of information. An article on the study is linked below, and all quotes come from the linked text.

In brief, when compared to strategies such as repeated reading, cramming, or concept mapping, taking tests proved more effective at increasing later recall. I find this terribly interesting because of the conclusion that making mistakes on tests leads learners to revise their understanding:

The students who took the recall tests may “recognize some gaps in their knowledge,” said Marcia Linn, an education professor at the University of California, Berkeley, “and they might revisit the ideas in the back of their mind or the front of their mind.”

When they are later asked what they have learned, she went on, they can more easily “retrieve it and organize the knowledge that they have in a way that makes sense to them.”

It may also be that the struggle involved in recalling something helps reinforce it in our brains.

Maybe that is also why students who took retrieval practice tests were less confident about how they would perform a week later.

“The struggle helps you learn, but it makes you feel like you’re not learning,” said Nate Kornell, a psychologist at Williams College. “You feel like: ‘I don’t know it that well. This is hard and I’m having trouble coming up with this information.’ ”

By contrast, he said, when rereading texts and possibly even drawing diagrams, “you say: ‘Oh, this is easier. I read this already.’ ”

So, overconfidence about the quality of one’s knowledge leads to poorer retrieval, no doubt because the brain does far fewer reflective cycles over the information and is not forced to revise understandings for greater clarity or correctness, as it must after flaws or gaps are pointed out via a test. Of course, the learner has to experience the results of the test. As such, our practice AP tests are of much higher value in improving your knowledge of the subject than the final exams themselves, because you will be given the results and be asked to revise answers, consider errors, and generally revisit the material. In short, you will naturally reflect on the outcome and seek to improve. At least, this is my reading of this study.

Also important is the conclusion at the end of the article that this study doesn’t mean more standardized tests are needed in American public schools, for example. Those tests are generally worthless for learning, as students never receive feedback on the test beyond a score. Sometimes, the lucky ones get a printout with numbers in categories. None of this leads the learner to reflect on the specific gaps in his or her knowledge, as determined by the test. As such, tests useful for improving learning should be specific, meaningful, valid, regularly occurring, and (probably) fairly brief. Tests should be given back to the learner soon after scoring, and the learner should be led through a reflective process by the instructor as much as possible. These are, of course, my conclusions and not necessarily the conclusions of the study.

What does this mean for the secondary school student? When you have an identifiable set of information to learn – say, a text like Slaughterhouse-Five, in which recall of many specific details is necessary for writing and discussing, or the light and dark reactions of photosynthesis, with myriad attendant details and processes – find a partner and devise a little test for each other, focusing on the most important ideas and essential details as you see them. Take the tests, swap, and discuss. That process would use your study time much, much better than sitting in a room and rereading lines that flow in through your eyes and out through your mouth, hanging open with boredom. Would any of you try that process? I’m genuinely curious. Would a session on writing test questions be helpful as a study strategy?

“To Really Learn, Quit Studying and Take a Test” – The New York Times