Understanding and learning outcomes

Cutting off the branch on which he sits. From the Catalog of Illuminated MS at the British Library: BL Stowe 955 f. 15 Man cutting down a tree

I trust we can see what’s happening in this illustration, which comes from a manuscript written in Europe in the early 1500s. This is obviously a problem of long standing for our species. I believe the problem emerges from the same place our truly good ideas come from: that fascinating place we call the human brain. The situation we see in the picture is not just a bad idea; it’s almost a good idea. It’s a classic case of an insufficient dose of ingenuity.

Tricky thing, ingenuity. Sometimes anything less than a full dose is poison.

Take for example the seemingly endless fascination with “learning outcomes.” Who could argue that we should not think about what our students learn? The whole idea here is to move from a “teaching” paradigm to a “learning” paradigm. Barr, Tagg, Chickering, Bloom, Boyer, and a flotilla of other writers have insisted that it’s all about the learning. If a teacher teaches but no learning occurs, then teaching hasn’t really occurred either. This all seems painfully, even hammeringly obvious to me, but I know there are indeed professors who believe their responsibility is simply to show up and talk in a way they themselves understand as they “cover the material,” an activity something like pulling the sheet all the way up over the deceased’s head. In this case, the cadaver is both the subject of the class and the subjects in the class, both of which (whom?) become not subjects but objects.

So the ingenious idea emerges: teachers should think about what they believe should happen in the student as a result of the class. Teachers should think not about what they are teaching, but about what the students are learning. There are even extraordinary efforts to refine the idea of “learning outcomes” by distinguishing “learning outcomes” from “learning objectives,” as the latter are still not sufficiently student-centered.

Yet something is deeply amiss, in my view. As we seek to perfect the language and institutionalization of a culture of “learning outcomes,” it seems we are necessarily moving toward a strictly behaviorist paradigm of learning, away from what Jerome Bruner refers to as the “cognitive turn” in learning theory and ever more deliberately toward a stimulus-response paradigm. This behaviorist turn can be very sophisticated and refined. The behaviors specified, measured, and tracked can be cognitively demanding “smart human tricks.” There can even be qualitatively measured learning outcomes, though it appears these are less frequent than quantitative metrics, for reasons I think are obvious. Yet these are still behaviors, specified with a set of what I can only describe as jawohl! statements, all rewarding the bons élèves and marching toward compliance and away from more elusive and disruptive concepts like curiosity or wonder. For example, here are pretty much canonical examples of learning outcomes from the University of Toronto’s Centre for Teaching Support and Innovation:

Content

  • By the end of this course, students will be able to categorize macroeconomic policies according to the economic theories from which they emerge.
  • By the end of this unit, students will be able to describe the characteristics of the three main types of geologic faults (dip-slip, transform, and oblique) and explain the different types of motion associated with each.

Skills

  • By the end of this course, students will be able to ask questions concerning language usage with confidence and seek effective help from reference sources.
  • By the end of this course, students will be able to analyze qualitative and quantitative data, and explain how evidence gathered supports or refutes an initial hypothesis.

Values

  • By the end of this course, students will be able to work cooperatively in a small group environment.
  • By the end of this course, students will be able to identify their own position on the political spectrum.

Learning outcomes should use specific language, and should clearly indicate expectations for student performance.

I see these examples and admonitions everywhere. Students will … students will … students will … students will. (Meantime the students’ will becomes defined for them, or ignored, or crushed.) Each of the above statements assumes a linear, non-paradoxical, cleanly defined world. The sun shines. Experience is orderly. Tab A goes into Slot B. Problem solved. Please note that I am not arguing against specific knowledge. I love engineering and many engineers as well. Expertise is vital. But there is more to the story than repeat-after-me. One item of specific knowledge that’s vital for all learning is the knowledge of complexity and the emergent phenomena springing from it. Another is knowledge of ambiguity and the fluidity of concepts articulated so beautifully by Douglas Hofstadter in Fluid Concepts and Creative Analogies. Another is that interest, wonder, awe, and curiosity are themselves vital preconditions and outcomes of any learning experience. They shape the complex readiness (cognitive, affective, social, etc.) of students for the learning experience at hand, and that learning experience in turn shapes the students’ readiness (cognitive, affective, social, etc.) for the next experience.

The soldiers-on-parade list of “students will” statements characterizing “learning outcomes” may be necessary, but it’s also crucially, even tragically insufficient–yet that is where our ingenuity seems to be stopping as we whack away at the branch we’re sitting on.

For it turns out that two of the words we must never, ever use are “understand” and “appreciate.” These are vague words, we are told. Instead, we must use specific words like “describe,” “formulate,” “evaluate,” “identify,” and so forth. You know, action verbs that we believe we can measure with confidence. This is the doctrine, repeated faithfully across multiple contexts, that defines much of the practice of those in higher education (and K-12 as well) who seek a more learning-centered environment. Chronicle blogger and math professor Robert Talbert provides a recent iteration in his blog post about flipped classrooms in calculus:

A clear set of learning objectives is at the heart of any successful learning experience, and it’s an essential ingredient for self-regulated learning since self-regulating learners have a clear set of criteria against which to judge their learning progress. And yet, many instructors – myself included in the early years of my career – never map out learning objectives either for themselves or for their students. Or, they do, and they’re so mushy that they can’t be measured – like any so-called objective beginning with the words “understand” or “appreciate”. [Hyperlink in the original.]

Clear objectives vs. mushy objectives, the latter kicked to the curb with the scornful phrase “so-called” because they “can’t be measured.” As he continues his post, Talbert cites the familiar Bloom’s Taxonomy. Oddly, “understanding” appears as level two of the pyramid, but Talbert doesn’t note the irony or indicate the complexities and divergences around this taxonomy, including the fact that the version he cites is a widely circulated revision that coexists with a digital version, etc. Many questions have emerged about this taxonomy. Perhaps it should be inverted? Perhaps it maps the learner’s progress toward higher-order thinking in far too linear a fashion? Does understanding really precede creation? Or does creation facilitate understanding, in a weirdly recursive way? If a writer says “I write in order to discover what I have to say,” where did she begin on the taxonomy, and where does she arrive? Or does she arrive? Is this taxonomy a pyramid or a wheel?

At this point the reader may object that I am introducing far too many complexities into what was intended as simple advice for professors who want to flip their classrooms. Unfortunately, these complexities matter. When confident, simple, plain, orderly advice is given about a complex matter, I hear the sound of the hatchet replaced by the sound of wood snapping as the branch I’m sitting on gives way. Again quoting from Talbert:

Bloom’s Taxonomy is a standard means of categorizing cognitive tasks by complexity, with the simplest (Knowledge, or “Remembering”) at the bottom and the most complicated (“Creating”) at the top. Go through each of your learning objectives and decide what level of Bloom they most closely correspond to. Then shuffle them around so that the higher up the list you go, the more complex the task is.
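
Taken literally, this is an algorithm: tag each objective with a Bloom level, then sort. A minimal sketch in Python, where the level numbers follow the revised taxonomy (1 = Remembering … 6 = Creating) and the sample objectives are invented stand-ins, not quotes from any syllabus:

```python
# Bloom levels in the revised taxonomy, simplest to most complex.
BLOOM = {"remember": 1, "understand": 2, "apply": 3,
         "analyze": 4, "evaluate": 5, "create": 6}

# Hypothetical (level, objective) pairs for illustration only.
objectives = [
    ("evaluate", "Judge which economic theory best fits a given policy"),
    ("remember", "List the three main types of geologic faults"),
    ("apply", "Compute the slip along a dip-slip fault"),
]

# "Shuffle them around so that the higher up the list you go,
# the more complex the task is."
ordered = sorted(objectives, key=lambda pair: BLOOM[pair[0]])
for level, text in ordered:
    print(f"{BLOOM[level]}. {text}")
```

That the advice reduces to a one-line `sorted` call is, of course, part of the point I pursue below.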

Compare this advice to the observations John Carroll and Mary Beth Rosson make in their essay “The Paradox of the Active User” (download here):

A motivational paradox arises in the “production bias” people bring to the task of learning and using computing equipment. Their paramount goal is throughput. This is a desirable state of affairs in that it gives users a focus for their activity with a system, and it increases their likelihood of receiving concrete reinforcement from their work. But on the other hand, it reduces their motivation to spend any time just learning about the system, so that when situations appear that could be more effectively handled by new procedures, they are likely to stick with the procedures they already know, regardless of their efficacy.

A second, cognitive paradox devolves from the “assimilation bias”: people apply what they already know to interpret new situations. This bias can be helpful, when there are useful similarities between the new and old information (as when a person learns to use a word processor taking it to be a super typewriter or an electronic desktop). But irrelevant and misleading similarities between new and old information can also blind learners to what they are actually seeing and doing, leading them to draw erroneous comparisons and conclusions, or preventing them from recognizing possibilities for new function.

It is our view that these cognitive and motivational conflicts are mutually reinforcing, thus exaggerating the effect either problem might separately have on early and longterm learning. These paradoxes are not defects in human learning to be remediated. They are fundamental properties of learning. If learning were not at least this complex, then designing learning environments would be a trivial design problem (Thomas and Carroll, 1979).

One may immediately object that Carroll and Rosson are analyzing a very specific learning situation, that of someone trying to master unfamiliar software. But look again, especially at that last paragraph. “These paradoxes,” ones in which prior learning, motivation, etc. both propel and block learning, “are not defects in human learning to be remediated. They are fundamental properties of learning.” Carroll and Rosson are discussing learning, period, even though their analysis focuses on a particular learning task. Moreover, they approach the task of design for learning as a set of “programmatic tradeoffs” within a shifting field of paradoxical encounters. The last sentence quoted above is bracing and entirely to the point: “If learning were not at least this complex, then designing learning environments would be a trivial design problem.”

Much of the “learning paradigm” discussion, like the discussion around “analytics” and other current instructional interventions, treats designing learning environments as a trivial design problem. The effort required isn’t trivial, mind you. It can be hard work building out complicated environments based on straightforward design concepts. There are all these rubrics to write, all these Standards of Learning to formulate, revise, vote on, adopt, and implement. These are indeed complicated processes that take a lot of time. The effort and the time involved can convince us that we’re doing something very complex, rigorous, and highly responsible. But note that Carroll and Rosson are arguing that the problem of designing learning environments is non-trivial. It must engage with paradox, not seek to remediate paradox. By extension, Carroll and Rosson are implying that to attempt to remediate paradox (taxonomies are typically anti-paradoxical) is to end up with something far less complex than learning. In other words, when we “solve the problem” of learning, we simply substitute a simpler question for a harder question, a process mapped out by Daniel Kahneman in his recent book Thinking, Fast and Slow.

Now read the advice from The Chronicle again. Count the number of times the word “paradox” is used. Hmm. Instead, there’s this voilà! (or perhaps a QED):

Further down the line, the lists of learning objectives are also a ready-made topic list for timed assessments like tests and the final exam. Want to know what’s on the test? Just take the set union of all the learning objectives we’ve seen up to now.

As they used to say in the TV pitches, “it’s just that simple.”
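
And it is just that simple, computationally speaking: the exam topic list falls out of a one-line set operation. A minimal sketch, with invented objective strings standing in for real ones:

```python
# Each unit contributes a set of learning objectives; per the advice,
# what's on the test is literally the union of those sets.
# The objective strings below are hypothetical placeholders.
unit_objectives = [
    {"categorize macroeconomic policies", "describe geologic faults"},
    {"describe geologic faults", "analyze qualitative data"},
    {"identify position on the political spectrum"},
]

exam_topics = set().union(*unit_objectives)
print(sorted(exam_topics))
```

A procedure this mechanical is exactly the sort of thing a machine could run unattended, which bears on what follows.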

I certainly do not intend to demonize Bloom or anyone using his ideas or anyone’s ideas deriving from his ideas. There’s plenty of demonization out there without my feeding the beast. Teaching and learning are difficult, sometimes bewildering activities, and it’s natural to want to have clarity about it all. It’s also natural, and to some extent a good thing too, when we seek accountability for our professional activities. Asking “what do we want to happen, and how will we know if we get there?” is an entirely fair and just thing to do. It’s when we’re forbidden to use “mushy” words like “understand” and “appreciate” because “they can’t be measured” that the trouble begins. And it’s when we believe that an “ordered list” will take us through the paradoxical encounters of meaning-making, curiosity, awe, and wonder so that we safely arrive at “student success” that we end up with what Ted Nelson famously termed “a forced march across a flattened plane.”

The Chronicle article clears the way for systematic learning that could easily be programmed into a sophisticated Computer-Aided Instruction machine. This means that one day it will indeed be administered by a computer:

This is far from a perfect system, but it’s a reliable way to align learning objectives with the actions you want students to perform and the means you want to use to assess them, and it gives students a key ingredient for self-regulated learning: A clear set of criteria that will tell them what they need to know and how to measure whether or not they know it.

If a thing can be automated, perhaps it should be automated. If we are going to argue that human beings who teach have an important role to play in learning, even in areas like mathematics (and why not especially there? see Lockhart’s Lament, an essay that I return to again and again), then we are going to have to engage with paradox and start talking again about “learning outcomes” that are beyond algorithms.

Without a strong view of “understanding” and “appreciation” (which I’d say means “something that gains in value for the learner because of the learning”), what can we possibly have to say about Spritz, a new instantiation of an older idea about computer-aided reading? How can we as educators mount a challenge to a learning design paradigm in which reading turns into what Ian Bogost aptly calls “Reading To Have Read”? Some excerpts from Bogost’s article (I urge you to read the whole thing, slowly):

In today’s attention economy, reading materials (we call it “content” now) have ceased to be created and disseminated for understanding. Instead, they exist first (and primarily) for mere encounter….

If ordinary readings are read to be understood, to be pondered and discussed and reflected upon rather than to be completed or collected, then perhaps it’s best to think of Spritzing as reading that is done to have been read. Indeed, the idea of Spritzing is the apotheosis of speed reading: reading in which completion is the only goal.

Spritzing is reading to get it over with. It is perhaps no accident that Spritze means injection in German. Like a medical procedure, reading has become an encumbrance that is as necessary as it is undesirable.

The Spritz FAQ snickers a bit at the German meaning, a nasty little snicker I’d say, acknowledging that they embrace the meaning and view it as a witty little mnemonic device for effective branding. The whole FAQ has a weird hipster vibe that seems to make the whole thing into competitive eating or sack races: “Hehehehehehe! Do you know what Spritz means in German? ROFL! LMAO! One of our founders is from Munich, so yes, we know. We bet you won’t forget it though, will you?” No, I don’t suppose I will, though I will CMEO, not LMAO.

Back to Bogost:

Spritz hasn’t stepped in to sabotage comprehension, but to formalize and excuse its eradication.

In other words, Spritz avoids mushy words like “understanding” and “appreciation,” the sort of things for which one creates opportunities for pondering, discussion, and reflection. If we as educators subscribe uncritically to the typical “learning outcome” paradigm, though, how can we possibly criticize Spritz? We have sawn off the branch we’re sitting on.

In a blog post responding to Bogost’s Atlantic article (which is how I found the Bogost piece, in fact), Alex Reid poignantly notes that two questions emerge from any analysis of reading (especially if reading is broadly construed as meaning-making arising from symbolic expression):

First, an ontological one, which is what are we or what are we capable of being? And second an ethico-political one, what should we be? Inasmuch as we are intertwined with symbolic behavior, the question of how we produce and consume symbols will be involved in these concerns.

To which one might add a third question: does a learning paradigm that avoids “understanding” and “appreciation” reduce symbolic behavior to indexicality alone? Poets (and mathematicians like Lockhart, Hardy, and Hofstadter–artists all) know that symbols not only contain representation but also stimulate representation. As a Miltonist very dear to me once wrote, symbols are not simply reiterative. They are generative, too.

One metric for that generative outcome might be called “civilization.” The best kinds of generative outcomes might be called “wisdom.”

10 thoughts on “Understanding and learning outcomes”

  1. Thanks for the insightful post. I agree on many points here — especially the point about the non-triviality of designing good curriculum meant to measure outcomes. Thank you for stressing that fact. I would add one layer to the discussion, though. That layer is the lens of the learner and the context of his/her course. Here are some examples:

- Compare a traditional 18-year-old student, full-time in college, just after graduating high school (U.S.-centric terms) vs. a 32-year-old adult, taking college classes part-time, with the intent of getting a degree to further his/her career

    - Compare the learning outcomes of a college algebra class to the learning outcomes of an Organizational Theory class

My point is that this student/course context has a significant effect on the validity/viability/utility of learning outcomes. I would say that measuring the outcomes for the 18-year-old in the Org Theory class would be fruitless, while measuring outcomes for the 32-year-old in the Algebra class is reasonable.

Full disclosure — I started a higher ed analytics company last year, and I have both worked and taught for an open admissions institution that caters to working adults. This background leads me to espouse the concept of learning outcomes in cases where it is both valuable and reasonably reliable.

  2. “human beings who teach have an important role to play in learning…”

    “We know we have met a teacher when we come away amazed
    not at what the teacher was thinking but at what we are thinking.
    Those around whom surprising thinking emerges are teachers” (Carse, 1995)

    It is when we are required to certify learning that the counting begins.

The argument to go beyond Bloom was presented some time ago in the late 1990s by some researchers in the Project Zero group at Harvard led by Howard Gardner, David Perkins, and Vito Perrone. The Teaching for Understanding framework was developed as a result of this project (http://www.pz.gse.harvard.edu/teaching_for_understanding.php). I grew as a teacher/designer when I encountered this research work. Indeed, developing understanding in students is a valid learning goal or “outcome”; understanding itself is complex and layered. It does not make sense to insist that thinking is hierarchical or sequential in the way articulated in Bloom’s Taxonomy. Learning theories, taxonomies, and research on how we learn are continually evolving and building upon that which has been previously discovered. So I’m with you that it is time to change and move away from the “Student will…” paradigm. The TFU framework focuses on generative topics, understanding goals, performances of understanding, and ongoing assessments. I won’t expand on the TFU framework here as there are many available resources on it. I have found it rewarding to focus on “understanding,” throughlines in a course, and the different thinking moves that underlie the attainment of understanding.

Thank you for this, it helps add structure to some of the mushy thoughts that have been concerning me while taking a “techniques in engineering assessment” class. A few times we have touched upon these issues in our discussions of learning objectives and outcomes, but the various data we have to analyze capture only the aspects of learning that are easily measurable. I recently found out that a college-wide e-Portfolio initiative was axed after the first year because the cost of evaluating 1200 e-Portfolios was considered too high. Of course “the understanding gained by producing a holistic view of one’s own work in a public forum” wasn’t part of the equation in the cost/benefit analysis, and it made me wonder what other kinds of activities we might not be doing simply because no one knows how to measure the learning outcome.

  5. Gardner, thank you for the link to my Chronicle post and for adding to this conversation. I just wanted to clarify a few things about what I wrote.

    My point in the post was that when designing *specific* learning activities, instructors and students need to be specific about what students should do that provides evidence of understanding. I did not say anywhere that we should *never* focus on broader concerns about understanding and appreciation — far from it, a broad appreciation of the utility and beauty of a subject is, for me, the primary purpose of learning that subject (and of education in general).

    Both instructors and students benefit from looking at the broad notion of understanding a subject or topic in terms of concrete, measurable outcomes. Instructors can use these to organize learning activities for students. Students, as I noted in the original, can use (indeed, need to use) unambiguous objectives to tell for themselves whether they are on the right track for understanding a topic, as a component of Pintrich’s self-regulated learning framework.

    Understanding and appreciation are great things. Those terms are great for high-level goals for a course. For levels much lower to the ground, more specificity is needed. Learning is more complicated than the sum total of one’s learning objectives — believe me, I know, since I teach three classes a semester and have three kids in elementary school. In some sense, the call to have concrete and measurable learning objectives is an effort to discretize the continuous. But I also think it’s a necessary first step along the road to having students who really do understand a subject and *know* that they understand it, rather than simply feel like they do.

    Here’s the link to the original post, since I didn’t see that in the article: http://chronicle.com/blognetwork/castingoutnines/2014/03/05/creating-learning-objectives-flipped-classroom-style/

  6. @Mike: Context is crucial, for lots of reasons, including the all-too-often elided factor of affect. I’m reminded of Seymour Papert’s insistence that he “fell in love with the gears,” as well as his observation that “love for the gears” wouldn’t have shown up on a pre-test/post-test assessment.

    @Joyce: That’s a stunning quotation. Thanks for sharing it, and for sending me to Carse. I look forward to the reading.

    @YinWah Thanks very much for that TFU information. More learning! And yes, more please. Clearly many folks have been working on this question–perhaps it’s *the* question.

    @Darren Your work will be absolutely crucial to the future of engineering education. I hope I stated that unambiguously and with precision! I continue to read your blog and I continue to learn from you at every opportunity. Thank you.

    @Robert I’m honored you’d stop by to read and comment, and I thank you for taking the time to do so. I’ve just written another post to try to get at some of the questions and concerns I’ve articulated in this one. While the new post is not a point-by-point response to your comment, it’s certainly a continuing response to the larger issues and thus to your comment as well.

    I do think you’ve softened your initial position a bit in your comment here. Part of what drove my response to your post was what seemed to me to be very narrowly prescriptive procedural advice of the kind I encounter pretty frequently in “faculty development workshops” and the like. I imagined there must be more nuance in your thinking, and I’m glad to see that here in your comment. That said, I think we still disagree. :) Our sharpest disagreement here, for me anyway, emerges in your comment’s third and fourth paragraphs. I believe I understand why you believe what you say there, but in my own experience learners are all-too-often repelled or narcotized by excessive disambiguation or over-organized learning activities. Even worse, some students–the *bon eleves* I mention (and Nassim Nicholas Taleb castigates very harshly)–will get so good at learning activities and concrete, measurable outcomes that they’ll mistake their success for understanding. I’ve known that to happen to faculty, too.

    I don’t think the problem is entirely one in which we discretize the continuous, though I do admire that phrase–and I am enamored of the Nyquist Theorem. I also agree, very much so, that students can “skim” and believe they’ve understood something when they have no clue about the necessary procedures or how to do them. I also agree that we can and should teach students how to tell when they’re skimming, and how to exercise their brains in ways that make procedural learning more effective. I think the problem, for me, lies more along the lines of what I’m trying to get at in my subsequent post about teaching machines. Perhaps you and I mean something different by “understanding.” In any event, thanks again for stopping by. And by the way, I do apologize for omitting the link to your blog post in the original version–this was an oversight born of haste as I tried to wrap up my post. I’ve fixed it now, and the link is in the post itself.

  7. I wouldn’t say that I’ve changed (“softened”) my position. Rather, here I am adding some context to my blog post that wasn’t necessarily clear in the original. The post was one of a series on how I created a flipped calculus class, and when I talk or write about this, the #1 request from people following along is just to give some insight as to how I went about doing things. So that post was unapologetically a “how to” post where a lot of the larger issues were omitted, intended to give faculty following along some concrete steps for designing out-of-class activities around unambiguous learning objectives. You can find further discussion of those issues, using pretty much the same language as in my comment above, elsewhere in my posts — although I wouldn’t expect someone to go dumpster-diving on my blog to find it, so it’s understandable how I could have come off as the edu-equivalent of Billy Mays. (Now that I have a beard I do actually sort of look like him.)

    My experience with students is pretty much the opposite of yours, it would seem — perhaps it’s a disciplinary thing, with students in the STEM disciplines having different needs from those in the humanities, in particular a felt need for structure. I agree with you that structure can lead to a “check box” mentality where students mistake the attainment of milestones for true understanding of a discipline. The antidote for me is to make sure that student experiences with a subject *start* with clear, unambiguous learning objectives but don’t *end* there. In particular, students’ experiences in a course need to involve learning activities that involve the synthesis of those learning objectives into something that supersedes them — like a design project for an engineering class, or a poster project for a math class, or something. That’s something that, for me, is the *sine qua non* of the flipped classroom — the in-class learning experience which uses class time to do work that is *not* just a subset of the bullet points from the pre-class activity as well as post-class activities which involve students in significant high-level learning experiences at the peak of the Bloom pyramid. (This is something, BTW, that I’ve written about before but I will be writing about this week, now that our exam period is over and I can actually think again.)

    But I still think that those kinds of experiences *start* with a discretized approach. Indeed Pintrich’s self-regulated learning paradigm requires that it be so, and hopefully (this is part of what the flipped class attempts to train students to do) when students get to the point of higher-level learning activities, they can start to generate their own criteria for comparing their learning against a standard and adjusting their behaviors accordingly.

    Also, I probably should have known the Nyquist Theorem. That’s why I blog — so smarter people can clue me in. :)

  8. @Robert I appreciate your continued contributions to the conversation–really I do. I also feel we may have come to that sad, common point when we will simply have to agree to disagree. I always struggle with sharp disagreements with colleagues I respect, because most of the time in academia “sharp disagreements” result mostly in demonization or dismissal. But my disagreements here *are* sharp and I do believe there are critical mistakes in what you’re arguing. I can see you believe that about my argument, too, hence the stage when our disagreements probably can’t be bridged in a meaningful or substantial way.

    Re the “how to”:
My post here argues that crucial parts of your “how to” were based on flawed, incomplete, and in many cases damaging models of learning. I also argue that a “how to” or a “just give some insight about [procedure x]” risks repeating the mistake of omitting understanding, generalizability, and all the other Higher Order Thinking Skills that the “learning objectives” (or “outcomes”) paradigms appear to support but typically erase from the conversation–because they’re “mushy” and “so-called” and “can’t be measured.” I’ve seen this at its worst when “learning technologists” spend most of their time “training” faculty in “tools,” and this has sensitized me to the problem to the point of PTSD, or nearly so.

    Re the “why to”:
    I also argue that the “why to” must always be in sight, and that the “why to” must always be bigger than “so you can master this skill” or “so you can pass this class” or both. Engineering any course or curriculum around simplistic paradigms of “skill mastery,” even if the skill in question is difficult to master, inevitably (in my experience) bequeaths huge problems to the future, the future I want my students not only to inhabit but also to build–and improve. To be fair, I argue this point more fully in my subsequent post on my first teaching machine.

    I very sharply disagree that the dividing line here is one between STEM and the humanities. I live on both sides of that line (cue Joni Mitchell). The humanities talk smack about the scientists (and especially the engineers), and the scientists (and especially the engineers) talk smack about the humanities people. You’re not doing that, but bringing up the idea of “structure” as if we humanists don’t do that, or don’t see the need for it in the humanities, moves in that direction, just as humanities folks who talk about “values” as if scientists (particularly engineers) never consider them move in the opposite one. Indeed, I think we keep missing the boat in our discussions of technology because that’s precisely where structure and values meet–but we can’t see that, in part because of that disciplinary divide. Well, some folks can, doubtless: Doug Engelbart for one, and Brian Arthur for another, and Janet Murray for a third, and Adele Goldberg for a fourth. But these folks aren’t generally read or discussed when we talk about pedagogy or curriculum or “edtech” or STEM–though they sure should be. Engineer, Economist, Literary Scholar, Programmer: a university.

    I pause to celebrate one point of sharp agreement: the need for synthesis. I’m right with you there. Where we once again part ways is in the idea of reaching synthesis by means of the bullet points. Gregory Bateson notes that levels of learning are discontinuous. Getting to the superseding level is NOT a linear process. I take his observation as fundamental to any discussion of learning.

    I aim for a dance (structure and expressivity, yes?) between the micro and the macro, between the “how to” and the deeper, broadly human “why to.” I think they inspire and support each other. I think most of the talk of “learning outcomes” or “objectives,” including (I am sad to say again) the discussion in your original post, describes learning more like a conveyor belt than like that dance.

    One of the especially sad consequences is what I hear many of my colleagues say: first-year students shouldn’t publish to the web because they have nothing to say yet. The reasoning is that they won’t have anything to say until they have been schooled by us. I don’t believe that for a minute. When one of our Twitter correspondents tweeted “When students know how to learn, great, we’ll fade the LOs. Until then, LOs are way points,” I recoiled in horror. Our students know how to learn. They don’t always know how they can turn that knowledge to their advantage within the context of school–a context that I still stubbornly believe can be helpful–and we as instructors can help guide them in this respect. But my fear is that the culture of “learning outcomes” and “objectives” appears to answer the question of “what is learning?” by brutally reductive means along neo-behaviorist paradigms, and the outcome of *that* reduction stunts our students’ ability to imagine, let alone create, a better world.

    Again, my thanks for your contributions to the conversation. I know what I’ve written will likely seem to be an attack, and it does in fact attack your argument, but I do not mean to attack you! I do not know if I have succeeded. But I remain grateful for your help in prodding me to articulate my own thinking in fuller and I hope deeper ways. My best to you.

  9. Some thoughts – FWIW (The numbers are more for the sake of brevity and thinking things through than anything else. Apologies for any overstatement of the obvious in places.):

    1) Behavioral objectives use behaviors as *evidence* of learning. Behaviorist methods focus on *conditioning* of behavior. There is a key difference between the two (design and analysis versus process).
    2) In my experience with them, learning paradigms are focused on creating environments for constant and varied levels of learning, rather than places where knowledge or understanding is passed down or simplified.
    3) In order to determine whether learning has occurred or not, we must define learning.
    4) Understanding happens in the mind of the learner. Teachers are not mind readers.
    5) Behaviors (and objectives) help to define understanding and measure learning.
    6) Common and clear points of reference between teacher and learner are necessary when measuring learning.
    7) Use of the word “understand” tends to function as a cipher for many, promoting subjectivity at the expense of clarity.
    8) Complexity necessarily includes essential elements.
    9) Concepts of simplicity and complexity can become inverted when we conflate details with simplicity.
    10) Complexity ≠ uncertainty or subjectivity
    11) Applying objectives or outcomes when designing learning environments is not a way to resolve a trivial design problem. It is a means to clarify the expectations the teacher has for the learner, measure learner progress, and analyze both simple and complex modes of learning.
    12) The Carroll and Rosson paradoxes cannot be remediated by omitting the essentials any more than by ignoring the complexities.
    13) Objectives are not limited in use to lower or simple modes of learning.
    14) As objectives focus on learner behaviors, they can measure process and attitude as well as results. (Whether or not teachers use them to do so is another issue.)
    15) The complexity of subjective and symbolic concepts is not generally analyzed by recourse to more subjectivity and symbolism.
    16) Subjective learning can be measured in objective ways.
    17) Measurement and structure are not the whole. Learning is not limited to the specific objectives. Rather, the objectives define the minimum expectations, the waypoints of learning. A well designed course will allow learners freedom to expand and individualize their learning beyond the objectives and will elicit this learning wherever possible.
    18) Objectives and outcomes are a key part of the branch the proverbial teacher stands on. We chop at them at our own peril.

  10. Dr. Campbell,

    You raised some interesting questions for me about not evading instruction that plans for specific knowledge, but also making room for the much-needed knowledge of complexity, ambiguity, fluidity of concepts, interest, wonder, awe, and curiosity. You described these as vital preconditions and outcomes of any learning experience, and went on to say that these things …

    “… shape the complex readiness (cognitive, affective, social, etc.) of students for the learning experience at hand, and that learning experience in turn shapes the students’ readiness (cognitive, affective, social, etc.) for the next experience.”

    Your words on the topic carry a lot of weight, given the scholarly research you have done on these issues over the years. Once again, you have caused me to step back, think, and reflect on my own instructional design practice.

    Thank you,
    Dave Goodrich

    You inspired me to write a follow up post:
    Blended Designs that Anticipate Serendipity | http://myblend.org/blog/item/blended-designs-that-anticipate-serendipity
