Assessment in a Web 2.0 Environment

I agree in principle that we who work in education should be able to describe what we intend to do, and that it is important that we find a way to demonstrate to what extent we have met those goals.

But that principle is a principle of almost unimaginable complexity.

Rather than proliferate crude measures of recall or reductive “normed” evaluations of various templated essays, we should think much more deeply and comprehensively about assessment. To do this, we’ll have to start with what it means not only to learn something in the sense of committing it to memory, vital as that is, but also to understand it, to be able to sense and articulate and share the structure of that knowledge as well as the conjectures and dilemmas that surround it and propel it into new areas of inquiry. We need to think about domain transfer, and ask what kind of learning fosters the analogical and metaphorical thinking that leads to conceptual breakthroughs. We need to think about the teacher’s theory of other minds, as well as the students’. We need to master strategies of indirection that empower each other to imagine and perform what Douglas Hofstadter calls “perceptual regrouping,” that trick of the mind that can perform figure-ground reversals, separate sequences into smaller groups to yield new possibilities, and adapt Polya-esque heuristics to apparently novel situations to reveal surprising connections with apparently far-flung domains.

I have colleagues working as hard as they can to answer the need for complexity. I just hope their work can stem the tide of unthinking “learning outcomes assessment” that Jonathan Kozol pillories in Letters to a Young Teacher.

I really, truly do not think that Likert scales or uniform tests or other simplistic measures are up to the task of helping us map or understand this most profound practice we call “education,” by which I take it we mean a deliberate approach to learning, part of which must include learning about one’s own learning. In other words, the deliberate practice of leading another’s cognition into a richer and more effective relationship with itself.

Of empowering and advancing the brain’s self-shaping capabilities.

I don’t have answers, but I do have a deep intuition that we can best think about this kind of complexity by thinking about similar networks of complexity that have emerged in human experience. (Here’s where I wish I’d majored in anthropology.) There are two such networks I think about a lot these days: language, particularly written language, and the Internet. In this podcast, which records a presentation I did over a year ago at an EDUCAUSE Learning Initiative annual meeting at the invitation of my hero, friend, and colleague Chuck Dziuban, I try to think about assessment by thinking about the emergent properties of the World Wide Web. It seems to me very interesting that a big part of Web 2.0 has to do with assessment, evaluation, reviews, and so forth. Is there a way these emergent phenomena could suggest more comprehensive, inclusive, and meaningful modes of assessing learning? I don’t know, but I do think it’s a question worth asking.

Longtime listeners will hear some familiar themes in this podcast, but cast in a different light. The Shakespeare bits develop some ideas I first began to work on in the “Proof That Matters” talk I did for a K-12 Online Conference a few months before I did this talk. All the ideas here need a great deal more development. I do hope, however, that they’re moving in a more answerable direction than most of the assessment talk I’ve encountered during the last few years.

EDIT: Janet Hawkins alerts me to some parallel thoughts:


10 thoughts on “Assessment in a Web 2.0 Environment”

  1. If Columbus had only drawn a careful map of exactly where he planned to go, he would have had far less difficulty getting there . . . but while he was figuring it out, the New World would have been discovered by someone else.

    In order to create accurate assessment, it would seem to be essential to know where education should go in the coming decade. Everyone who can predict the future please hold up their hands. Anyone?

  2. Thanks Gardner. Assessment is about value, and as such is fundamentally in the domain of ethics as I see it. That’s why teaching has to be a profession, with professional, ethical obligation. Of course we say we don’t trust teachers anymore and instead we decide to trust some mathematical formula, regardless of whether or not the formula is any good. Though why we trust the ethics of the formula makers and measurers over the teachers I’m not sure.

    The underlying problem, though, as you suggested in your talk, is that learning is not about valuation. It’s not about the knowledge a student can recite. Give me a reliable theory of cognition and maybe we can get to a theory of learning, and maybe, just maybe, we can figure out a way to measure learning. All we do now is some fancy footwork of what I, as a teacher, can represent, through grades, about what I know about what students know at a certain point in time.

    There’s no way for anyone to assess what I have learned from listening to your talk. I can’t do it myself. So all we are left with is the ethical obligation to learn from our experiences and seek improvement (i.e. to assess).

  3. Pingback: Do blogs eat brains? « Soundings: Best Practices in Teaching and Technology

  4. As teachers we all want to educate, and we do that. We usually also have to teach the items identified in the curriculum.

    Although it is very difficult to assess “education”, it is quite easy to assess whether you are meeting the goals of the curriculum. Yet very few teachers even make an effort to do so.

    Assessment in a Web2.0 environment is not that different. Web2.0 provides a platform for different assessment methods, activities and reports. But assessment remains quite unchanged.

    We should never really worry too much about complexity. Think of how complex a task it would be to properly compensate someone for their time given that we live only for a short while. Yet something as simple as money does the trick.


  5. When I was a first year grad student in Economics, 1976, I was told by some of the faculty that the GRE Econ exam had no predictive power whatsoever in how students would do in the program (so they didn’t require us to take it) but that the math part of the regular GRE had strong predictive power, so they used that as a primary entry screen and determination of who was to get fellowships. Within the last five or six years I’ve heard a very similar argument about how students will do in General Chemistry and the math part of the ACT exam. Why bring this up? I believe that in certain domains of knowledge you can measure rather deep cognitive understanding (or lack) via the type of tests you’d want us to dispense with. Even in those areas, however, at some point intelligence becomes about asking good questions rather than providing answers to closed ended problems. The asking good questions skill I believe is in the realm where meaningful assessment doesn’t come easily. But I don’t believe you can have that skill if you can’t solve the closed ended problems readily, because if you can’t there is no way to see through issues to what is important to get at.

    If I were to teach an Econ course now I’d evaluate the students in two quite different ways, one on understanding models (and then, depending on how advanced the course, also on building models of their own) and two on telling stories that relate the models to known economic phenomena. Engineering students have a reputation for being good with the models but bad with the stories. Many Business students are just the opposite. You really need multiple ways of thinking about the economics, and I suspect also about many other disciplines.

    One other point. Accreditation is a reality, as are grades (in my ideal world both would go away, but I still believe in the tooth fairy), so there is the real issue of whether whatever assessment we come up with can be aggregated in some meaningful way, and whether we can make comparisons across students and across cohorts of students. For both of these some Likert-style indicators are useful, a necessary evil if you will. I liked the participant perception indicator that Carl Berger showed us some years back.
    It is do-able, a definite plus, can be used in a longitudinal way, and does recognize that multiple dimensions in assessment can be better than reducing everything down to a single dimension. It does measure attitudes, not performance. The latter, as you say, is messy to assess in the meaningful cases. For that I’d prefer to give written feedback. Maybe we need both.

  6. Pingback: ICTlogy » ICT4D Blog » Funneling concepts in Education 2.0: PLE, e-Portfolio, Open Social Learning

  7. Larry points to an extremely important distinction at the beginning of his comment – the Econ department is using assessment for _predictive_ value, not _summative_ value. What would happen if we said that the important thing to evaluate is not how well a student did in this course, but how likely they are to do well further down the road?

    While this raises a host of thorny issues, it also would accomplish one of the critical steps in getting people to buy into assessment – the data would be usable and, more importantly, _used_.

    (Although it does remind me of the professor in library school who claimed that no student who got a B or higher in her course had ever failed their comps. I got my A, and I passed my comps, but it’s a pretty weird correlation to hang your hat on…)

  8. Pingback: Soundings: Best Practices in Teaching and Technology » Gardner Campbell’s ideas on Web 2.0 and Assessment

  9. Pingback: Tablets, Moodle, and Teachers « The Xplanation

  10. Pingback: Weekly Research Index | March 12, 2010 « The Xplanation
