Not Taking Bad Advice: a Pedagogical Model
The text for my flipped keynote at Digital Pedagogy Lab 2020.
I’ve never created a model for online, digital, or hybrid pedagogies. As long as I’ve been teaching, and as long as I’ve been teaching teachers, I’ve encountered, been flummoxed by, or have cast off models. I have yet to see a single pedagogical model worth its salt. And yet I watch how quickly models spread. The neater and tidier the model, the more likely it seems to be broadly adopted by an institution: Learning Styles, Bloom’s Taxonomy, ADDIE, Scaffolding, Design Thinking, Quality Matters, Andragogy, HyFlex. Lists, frameworks, Venn diagrams, rubrics, templates. Six principles of Andragogy, five stages of the ADDIE development process, six levels of Bloom’s Taxonomy, forty-two review standards of the Quality Matters Rubric. And, even when these models are thoroughly debunked, they continue to retain traction. According to the Association for Psychological Science, “No less than 71 different models of learning styles have been proposed over the years … But psychological research has not found that people learn differently, at least not in the ways learning-styles proponents claim.” And, yet, the American Psychological Association found, “in two online experiments with 668 participants, more than 90 percent of them believed people learn better if they are taught in their predominant learning style.” In higher education, too many of us cling to other people’s models, because we have rarely been taught, encouraged, or given the support we need to create our own.
Some of the more insidious models are fashioned as needlessly complex in order to create a mystique of intellectual rigor. Even when these models aren’t based in research, they are made to seem as though they are. And the worst of these models have strange backgrounds, feed the motivations of for-profit companies, or aim to (or simply do) create edu-celebrities. I have always known I could make a lot more money as a speaker or consultant if I fashioned a fancily-named pedagogical model for myself. I wouldn’t even have to craft something original, because so many of the existing models simply repackage other models with a pie chart instead of a pyramid, a set of five circles instead of cubes. An easy image-search for “pedagogical model” brings up a few dozen geometric arrangements. Teaching is reduced to mechanistic gestures across x- and y-axes and learning is pattern recognition.
At the first teacher training I attended, I was introduced to Bloom’s taxonomy, which has since been burned into my brain through several dozen iterations. There is a certain comfort in the architecture of Bloom’s taxonomy, usually represented as a pyramid, sometimes in three dimensions, usually in rainbow colours. The model was originally published in 1956 (the version I first learned) and significantly revised in 2001. These days you’re more likely to encounter the revised version, which swaps out several of the less friendly words from the original, and appears to give students a bit more agency by adding the word “create” to the top. The new version also changes all the categories to verbs (like “remember” and “apply”), making it easier to tie each level directly to learning outcomes. I’d say it’s six of the least vibrant verbs we could apply to learning, but this is what learning outcomes so often call for in their pursuit of being measurable.
Dull verbs aside, my biggest issue with Bloom’s taxonomy is that it’s hierarchical. Each level of the pyramid is supposedly built upon the level below. So, you can’t “create” or “evaluate” until you first “remember” and “understand.” (I’m certain my 3-year-old would take issue with that.) The whole thing feels less like a method to encourage or inspire learning and more like a way to police students (and also teachers), laying out a series of hoops for them to jump through with a built-in defence of their existence, what Jeffrey Moro calls “cop shit,” or as I’ve come to call it, “the student agency military industrial complex.” Of course, the educators, designers, and institutions trafficking in Bloom’s don’t mean for us to take scaling the pyramid literally, but far too often they really do. Ultimately, models like Bloom’s are a distraction from the hard conversations we should be having about teaching and learning, and I don’t think that’s an accident.
I have tracked an anti-pedagogical bent in higher education (and education more generally) since I started teaching in 1999 and teaching teachers in 2003. And while there is more direct attention to design within online learning circles, there is also even more reliance on models and packaging of best practices. Pedagogy is praxis, the intersection between the philosophy and practice of teaching. Best practices, which aim to standardize teaching and flatten the differences between students, are anathema to pedagogy. The most egregious example is the Quality Matters rubric, which the organization (a standalone non-profit as of 2014) calls the “QM Quality Assurance System,” said to “create a culture of continuous improvement” to “deliver the promise” of online learning. I have previously analyzed the marketing copy of Turnitin and various learning management systems. This work is not a mere exercise. When we’re deciding what tools we use, we should be looking carefully at the discrepancies between what the companies say the tools do and what they actually do. At the points where these diverge, we begin to see the cracks in a tool’s promise.
From the front page of the Quality Matters Web site: “With online learning, everyone has a goal. Learners need to improve and grow. You work to nurture them with well-conceived, well-designed, well-presented courses and programs.” It all seems benign, and I do believe the intentions of the organization are probably good, and I certainly believe most of the educators who use the Quality Matters rubric are working toward the aims described there. My biggest concern, right now, is the way I’ve seen Quality Matters and its rubric being presented as a solution to emergency remote teaching. If a teacher with no experience working fully online has been asked to shift all their teaching into an online or hybrid format, dumping a 42- or 43-item rubric in their lap really isn’t going to help. And it will likely do harm.
Quality Matters is a special kind of model, almost the polar opposite of Bloom’s, because QM is decidedly not at all simple. It is, in fact, needlessly complex to the point of being inscrutable, which ultimately helps QM sell an annotated version of the rubric, as well as courses, workshops, certification programs, subscriptions, and institutional memberships. In the rubric itself, the words “clear” or “clarity” appear over a dozen times. (There are 42 items on the latest higher education rubric and 43 on the latest K-12 rubric, all with point values assigned to them, along with a mechanism for tallying a final score.) Item 5.4 on the higher education rubric, for example, says that “the requirements for learner interaction are clearly stated.” While I do think teachers should make their requirements for a course clear, writing longer syllabi, adding more items or levels to a rubric, and spelling out more requirements do not (in my experience) make anything more clear for students. The word “quality” is itself problematic, and perhaps even suspect. Many of the best examples of online learning would fail miserably when measured against the QM rubric. In “The Trouble with Rubrics,” Alfie Kohn writes, “Consistent and uniform standards are admirable, and maybe even workable, when we’re talking about, say, the manufacture of DVD players.”
QM promises efficiency and objectivity, but actually creates more work and merely provides cover for all the biases and subjectivity that continue to exist (and are left unchecked) in spite of the rubric. The words “access,” “accessible,” and “accessibility” do appear five times on the Quality Matters rubric, but each use refers to either providing “policies” and “statements,” or having accessible “text and image files” and “multimedia content” that meet “the needs of diverse learners” (the only context in which “diversity” is mentioned). This doesn’t go nearly far enough. The word “privacy” appears once, but the onus is put on students to protect their data and privacy; the course needs only to provide the necessary “information.” Here are a few words that do not appear anywhere on the Quality Matters rubric: “community,” “agency,” “inclusivity,” “flexibility,” “joy,” “compassion,” “question,” and “human.”
In “Why I Threw Away My Rubrics,” Jennifer Hurley writes, “Where is the human response in all of this?” Hurley argues rubrics are especially damaging to those who would be most successful and to those who are struggling. The best teachers are left to rest on their laurels or even encouraged to step backward as they jump through hoops not really designed for them. The advice they get from QM runs counter to their better instincts, which see learning (and online learning) as a complex set of human experiences, behaviours, and interactions, all of which can’t be neatly measured by a rubric. There is no incentive to push past the boundaries set up by the QM rubric, and, in fact, there are risks to doing so.
Meanwhile, the teachers who are struggling (or teaching online for the first time) find themselves bewildered by the sea of categories the QM rubric contains and the way the rubric patronizes its users. For the new or struggling teachers, the rubric ends up feeling like a crude (and mechanistic) tool for administrators and institutions to police teaching – more “cop shit.” This is not a good entry point into the work of being a teacher, especially the very human work of being a teacher online.
None of this is to say Bloom’s taxonomy or the Quality Matters rubric have never ever been used to support good pedagogy. My point is that these are not the first places we should be turning as we begin to imagine what online and hybrid learning could be, especially in a moment like this one. The staff of Teaching Tolerance created a set of guidelines I find much more useful right now: “A Trauma-Informed Approach to Teaching Through Coronavirus.” They emphasize the need for “clear” communication and routines, as opposed to clear requirements or policies. They talk at length about helping students “maintain a sense of psychological safety.” The word “hope” appears 11 times in their guidance.
Carey Borkoski writes in “Cultivating Belonging,” “There is consensus in the literature about the benefits of a student’s sense of belonging. Researchers suggest that higher levels of belonging lead to increases in GPA, academic achievement, and motivation.” And from a follow-up piece co-authored with Brianne Ross on “Cultivating Belonging Online During COVID-19”: “The new instability and isolation within our current environments contribute to an evolving and unstable sense of belonging for students, teachers, leaders, and parents.” There has been much talk over the last several months about maintaining “continuity” of instruction and assessment, but less discussion about how we maintain the communities at the heart of our educational institutions. This is the design challenge before us.
There is no one-size-fits-all set of best practices for building a learning community, whether on-ground or online. And there is no secret mix of ingredients that create the perfect hybrid strategy.
Our efforts toward building community should be directed toward the students who need that community the most, the ones most likely to have been feeling isolated even before the pandemic: disabled students, chronically ill students, homeless students, BIPOC students, LGBTQ students, etc. We need to build courses, and imagine new ways forward, for these students, the ones already struggling, already facing exclusion.
Flexibility and trust are key principles of any pedagogy worth its salt, but they are particularly important when we’re in crisis. Right now, we need to focus on “teaching the students we have, not the students we wish we had.” We don’t need more models for hypothetical students we haven’t yet met. And so, my pedagogical model has exactly one point:
Stop looking for models and begin by talking to students.