I trust we can see what's happening in this illustration, which comes from a manuscript written in Europe in the early 1500s. This is obviously a long-standing problem for our species. I believe the problem emerges from the same place that our truly good ideas come from: that fascinating place we call the human brain. The situation we see in the picture is not just a bad idea; it's almost a good idea. It's a classic case of an insufficient dose of ingenuity.
Tricky thing, ingenuity. Sometimes anything less than a full dose is poison.
Take for example the seemingly endless fascination with “learning outcomes.” Who could argue that we should not think about what our students learn? The whole idea here is to move from a “teaching” paradigm to a “learning” paradigm. Barr, Tagg, Chickering, Bloom, Boyer, and a flotilla of other writers have insisted that it’s all about the learning. If a teacher teaches but no learning occurs, then teaching hasn’t really occurred either. This all seems painfully, even hammeringly obvious to me, but I know there are indeed professors who believe their responsibility is simply to show up and talk in a way they themselves understand as they “cover the material,” an activity something like pulling the sheet all the way up over the deceased’s head. In this case, the cadaver is both the subject of the class and the subjects in the class, both of which (whom?) become not subjects but objects.
So the ingenious idea emerges: teachers should think about what they believe should happen in the student as a result of the class. Teachers should think not about what they are teaching, but about what the students are learning. There are even extraordinary efforts to refine the idea of "learning outcomes" by distinguishing "learning outcomes" from "learning objectives," as the latter are still not sufficiently student-centered.
Yet something is deeply amiss, in my view. As we seek to perfect the language and institutionalization of a culture of "learning outcomes," it seems we are necessarily moving away from what Jerome Bruner calls the "cognitive turn" in learning theory and ever more deliberately toward a strictly behaviorist, stimulus-response paradigm of learning. This behaviorist turn can be very sophisticated and refined. The behaviors specified, measured, and tracked can be cognitively demanding "smart human tricks." There can even be qualitatively measured learning outcomes, though these appear less frequent than quantitative metrics, for reasons I think are obvious. Yet these are still behaviors, specified in a set of what I can only describe as jawohl! statements, all rewarding the bons élèves and marching toward compliance, away from more elusive and disruptive concepts like curiosity or wonder. For example, here are pretty much canonical examples of learning outcomes from the University of Toronto's Centre for Teaching Support and Innovation:
- By the end of this course, students will be able to categorize macroeconomic policies according to the economic theories from which they emerge.
- By the end of this unit, students will be able to describe the characteristics of the three main types of geologic faults (dip-slip, transform, and oblique) and explain the different types of motion associated with each.
- By the end of this course, students will be able to ask questions concerning language usage with confidence and seek effective help from reference sources.
- By the end of this course, students will be able to analyze qualitative and quantitative data, and explain how evidence gathered supports or refutes an initial hypothesis.
- By the end of this course, students will be able to work cooperatively in a small group environment.
- By the end of this course, students will be able to identify their own position on the political spectrum.
Learning outcomes should use specific language, and should clearly indicate expectations for student performance.
I see these examples and admonitions everywhere. Students will … students will … students will … students will. (Meantime the students' will becomes defined for them, or ignored, or crushed.) Each of the above statements assumes a linear, non-paradoxical, cleanly defined world. The sun shines. Experience is orderly. Tab A goes into Slot B. Problem solved. Please note that I am not arguing against specific knowledge. I love engineering and many engineers as well. Expertise is vital. But there is more to the story than repeat-after-me. One item of specific knowledge that's vital for all learning is the knowledge of complexity and the emergent phenomena springing from it. Another is knowledge of ambiguity and the fluidity of concepts articulated so beautifully by Douglas Hofstadter in Fluid Concepts and Creative Analogies. Another is that interest, wonder, awe, and curiosity are themselves vital preconditions and outcomes of any learning experience. They shape the complex readiness (cognitive, affective, social, etc.) of students for the learning experience at hand, and that learning experience in turn shapes the students' readiness (cognitive, affective, social, etc.) for the next experience.
The soldiers-on-parade list of “students will” statements characterizing “learning outcomes” may be necessary, but it’s also crucially, even tragically insufficient–yet that is where our ingenuity seems to be stopping as we whack away at the branch we’re sitting on.
For it turns out that two of the words we must never, ever use are “understand” and “appreciate.” These are vague words, we are told. Instead, we must use specific words like “describe,” “formulate,” “evaluate,” “identify,” and so forth. You know, action verbs that we believe we can measure with confidence. This is the doctrine, repeated faithfully across multiple contexts, that defines much of the practice of those in higher education (and K-12 as well) who seek a more learning-centered environment. Chronicle blogger and math professor Robert Talbert provides a recent iteration in his blog post about flipped classrooms in calculus:
A clear set of learning objectives is at the heart of any successful learning experience, and it’s an essential ingredient for self-regulated learning since self-regulating learners have a clear set of criteria against which to judge their learning progress. And yet, many instructors – myself included in the early years of my career – never map out learning objectives either for themselves or for their students. Or, they do, and they’re so mushy that they can’t be measured – like any so-called objective beginning with the words “understand” or “appreciate”. [Hyperlink in the original.]
Clear objectives vs. mushy objectives, the latter kicked to the curb with the scornful phrase "so-called," because they "can't be measured." As he continues his post, Talbert cites the familiar Bloom's Taxonomy. Oddly, "understanding" appears as level two of the pyramid, but Talbert doesn't note the irony or indicate the complexities and divergences around this taxonomy, including the fact that the version he cites is a widely circulated revision, and that it coexists with a digital version, etc. Many questions have emerged about this taxonomy. Perhaps it should be inverted? Perhaps it maps the learner's progress toward higher-order thinking in far too linear a fashion? Does understanding really precede creation? Or does creation facilitate understanding, in a weirdly recursive way? If a writer says "I write in order to discover what I have to say," where did she begin on the taxonomy, and where does she arrive? Or does she arrive? Is this taxonomy a pyramid or a wheel?
At this point the reader may object that I am introducing far too many complexities into what was intended as simple advice for professors who want to flip their classrooms. Unfortunately, these complexities matter. When confident, simple, plain, orderly advice is given about a complex matter, I hear the sound of the hatchet replaced by the sound of wood snapping as the branch I’m sitting on gives way. Again quoting from Talbert:
Bloom’s Taxonomy is a standard means of categorizing cognitive tasks by complexity, with the simplest (Knowledge, or “Remembering”) at the bottom and the most complicated (“Creating”) at the top. Go through each of your learning objectives and decide what level of Bloom they most closely correspond to. Then shuffle them around so that the higher up the list you go, the more complex the task is.
Compare this advice to the observations John Carroll and Mary Beth Rosson make in their essay “The Paradox of the Active User” (download here):
A motivational paradox arises in the "production bias" people bring to the task of learning and using computing equipment. Their paramount goal is throughput. This is a desirable state of affairs in that it gives users a focus for their activity with a system, and it increases their likelihood of receiving concrete reinforcement from their work. But on the other hand, it reduces their motivation to spend any time just learning about the system, so that when situations appear that could be more effectively handled by new procedures, they are likely to stick with the procedures they already know, regardless of their efficacy.
A second, cognitive paradox devolves from the “assimilation bias”: people apply what they already know to interpret new situations. This bias can be helpful, when there are useful similarities between the new and old information (as when a person learns to use a word processor taking it to be a super typewriter or an electronic desktop). But irrelevant and misleading similarities between new and old information can also blind learners to what they are actually seeing and doing, leading them to draw erroneous comparisons and conclusions, or preventing them from recognizing possibilities for new function.
It is our view that these cognitive and motivational conflicts are mutually reinforcing, thus exaggerating the effect either problem might separately have on early and long-term learning. These paradoxes are not defects in human learning to be remediated. They are fundamental properties of learning. If learning were not at least this complex, then designing learning environments would be a trivial design problem (Thomas and Carroll, 1979).
One may immediately object that Carroll and Rosson are analyzing a very specific learning situation, that of someone trying to master unfamiliar software. But look again, especially at that last paragraph. “These paradoxes,” ones in which prior learning, motivation, etc. both propel and block learning, “are not defects in human learning to be remediated. They are fundamental properties of learning.” Carroll and Rosson are discussing learning, period, even though their analysis focuses on a particular learning task. Moreover, they approach the task of design for learning as a set of “programmatic tradeoffs” within a shifting field of paradoxical encounters. The last sentence quoted above is bracing and entirely to the point: “If learning were not at least this complex, then designing learning environments would be a trivial design problem.”
Much of the "learning paradigm" discussion, like the discussion around "analytics" and other current instructional interventions, treats designing learning environments as a trivial design problem. The effort required isn't trivial, mind you. It can be hard work building out complicated environments based on straightforward design concepts. There are all these rubrics to write, all these Standards of Learning to formulate, revise, vote on, adopt, and implement. These are indeed complicated processes that take a lot of time. The effort and the time involved can convince us that we're doing something very complex, rigorous, and highly responsible. But note that Carroll and Rosson are arguing that the problem of designing learning environments is non-trivial. It must engage with paradox, not seek to remediate paradox. By extension, Carroll and Rosson are implying that to attempt to remediate paradox (taxonomies are typically anti-paradoxical) is to end up with something far less complex than learning. In other words, when we "solve the problem" of learning, we simply substitute a simpler question for a harder question, a process mapped out by Daniel Kahneman in his recent book Thinking, Fast and Slow.
Now read the advice from The Chronicle again. Count the number of times the word "paradox" is used. Hmm. Instead, there's this voilà! (or perhaps a QED):
Further down the line, the lists of learning objectives are also a ready-made topic list for timed assessments like tests and the final exam. Want to know what’s on the test? Just take the set union of all the learning objectives we’ve seen up to now.
As they used to say in the TV pitches, “it’s just that simple.”
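And mechanically speaking, it really is that simple. A minimal sketch makes the point (the course data and function name below are hypothetical, invented purely for illustration, not drawn from Talbert's actual materials):

```python
# If each unit is nothing more than a list of "students will..."
# objectives, then "what's on the test" really is just a set union.

unit_objectives = {
    "unit_1": {"identify the three main types of geologic faults",
               "describe dip-slip motion"},
    "unit_2": {"describe dip-slip motion",
               "explain transform faults"},
    "unit_3": {"analyze qualitative and quantitative data"},
}

def exam_topics(objectives_by_unit):
    """Take the set union of all the learning objectives seen so far."""
    topics = set()
    for objectives in objectives_by_unit.values():
        topics |= objectives  # union-update: duplicates collapse silently
    return topics

print(sorted(exam_topics(unit_objectives)))
```

That a dozen lines of code exhaust the design is, of course, exactly the problem: nothing in this pipeline can represent paradox, recursion, or wonder.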
I certainly do not intend to demonize Bloom or anyone using his ideas or anyone’s ideas deriving from his ideas. There’s plenty of demonization out there without my feeding the beast. Teaching and learning are difficult, sometimes bewildering activities, and it’s natural to want to have clarity about it all. It’s also natural, and to some extent a good thing too, when we seek accountability for our professional activities. Asking “what do we want to happen, and how will we know if we get there?” is an entirely fair and just thing to do. It’s when we’re forbidden to use “mushy” words like “understand” and “appreciate” because “they can’t be measured” that the trouble begins. And it’s when we believe that an “ordered list” will take us through the paradoxical encounters of meaning-making, curiosity, awe, and wonder so that we safely arrive at “student success” that we end up with what Ted Nelson famously termed “a forced march across a flattened plane.”
The Chronicle article clears the way for systematic learning that could easily be programmed into a sophisticated Computer-Aided Instruction machine. This means that one day it will indeed be administered by a computer:
This is far from a perfect system, but it’s a reliable way to align learning objectives with the actions you want students to perform and the means you want to use to assess them, and it gives students a key ingredient for self-regulated learning: A clear set of criteria that will tell them what they need to know and how to measure whether or not they know it.
If a thing can be automated, perhaps it should be automated. If we are going to argue that human beings who teach have an important role to play in learning, even in areas like mathematics (and why not especially there? see Lockhart’s Lament, an essay that I return to again and again), then we are going to have to engage with paradox and start talking again about “learning outcomes” that are beyond algorithms.
Without a strong view of "understanding" and "appreciation" (which I'd say means "something that gains in value for the learner because of the learning"), what can we possibly have to say about Spritz, a new instantiation of an older idea about computer-aided reading? How can we as educators mount a challenge to a learning design paradigm in which reading turns into what Ian Bogost aptly calls "Reading To Have Read"? Some excerpts from Bogost's article (I urge you to read the whole thing, slowly):
In today’s attention economy, reading materials (we call it “content” now) have ceased to be created and disseminated for understanding. Instead, they exist first (and primarily) for mere encounter….
If ordinary readings are read to be understood, to be pondered and discussed and reflected upon rather than to be completed or collected, then perhaps it’s best to think of Spritzing as reading that is done to have been read. Indeed, the idea of Spritzing is the apotheosis of speed reading: reading in which completion is the only goal.
Spritzing is reading to get it over with. It is perhaps no accident that Spritze means injection in German. Like a medical procedure, reading has become an encumbrance that is as necessary as it is undesirable.
The Spritz FAQ snickers a bit at the German meaning, a nasty little snicker I’d say, acknowledging that they embrace the meaning and view it as a witty little mnemonic device for effective branding. The whole FAQ has a weird hipster vibe that seems to make the whole thing into competitive eating or sack races: “Hehehehehehe! Do you know what Spritz means in German? ROFL! LMAO! One of our founders is from Munich, so yes, we know. We bet you won’t forget it though, will you?” No, I don’t suppose I will, though I will CMEO, not LMAO.
Back to Bogost:
Spritz hasn’t stepped in to sabotage comprehension, but to formalize and excuse its eradication.
In other words, Spritz avoids mushy words like “understanding” and “appreciation,” the sort of things for which one creates opportunities for pondering, discussion, and reflection. If we as educators subscribe uncritically to the typical “learning outcome” paradigm, though, how can we possibly criticize Spritz? We have sawn off the branch we’re sitting on.
In a blog post responding to Bogost’s Atlantic article (which is how I found the Bogost piece, in fact), Alex Reid poignantly notes that two questions emerge from any analysis of reading (especially if reading is broadly construed as meaning-making arising from symbolic expression):
First, an ontological one, which is what are we or what are we capable of being? And second an ethico-political one, what should we be? Inasmuch as we are intertwined with symbolic behavior, the question of how we produce and consume symbols will be involved in these concerns.
To which one might add a third question: does a learning paradigm that avoids “understanding” and “appreciation” reduce symbolic behavior to indexicality alone? Poets (and mathematicians like Lockhart, Hardy, and Hofstadter–artists all) know that symbols not only contain representation but also stimulate representation. As a Miltonist very dear to me once wrote, symbols are not simply reiterative. They are generative, too.
One metric for that generative outcome might be called “civilization.” The best kinds of generative outcomes might be called “wisdom.”