Doug Engelbart, transcontextualist

@GardnerCampbell's TEDx Talk: Wisdom as a Learning Outcome

I’ve been mulling over this next post for far too long, and the results will be brief and rushed (such bad food, and such small portions!). You have been warned.

The three strands, or claims, I’m engaging with (EDIT: I’ve tried to make things clearer and more parallel in the list below):

1. The computer is “just a tool.” This part’s in partial response to the comments on my previous post.

2. Doug Engelbart’s “Augmenting Human Intellect: A Conceptual Framework” is “difficult to understand” or “poorly written.” This one’s a perpetual reply. 🙂 It was most recently triggered by an especially perplexing Twitter exchange shared with me by Jon Becker.

3. Engelbart’s ideas regarding the augmentation of human intellect aim for an inhuman and inhumane parsing of thought and imagination, an “efficiency expert” reduction of the richness of human cognition. This one tries to think about some points raised in the VCU New Media Seminar this fall.

These are the strands. The weave will be loose. (Food, textiles, textures, text.)

1. There is no such thing as “just a tool.” McLuhan wisely notes that tools are not inert things to be used by human beings, but extensions of human capabilities that redefine both the tool and the user. A “tooler” results, or perhaps a “tuser” (pronounced “TOO-zer”). I believe those two words are neologisms but I’ll leave the googling as an exercise for the tuser. The way I used to explain this in my new media classes was to ask students to imagine a hammer lying on the ground and a person standing above the hammer. The person picks up the hammer. What results? The usual answers are something like “a person with a hammer in his or her hand.” I don’t hold much with the elicit-a-wrong-answer-then-spring-the-right-one-on-them school of “Socratic” instruction, but in this case it was irresistible and I tried to make a game of it so folks would feel excited, not tricked. “No!” I would cry. “The result is a HammerHand!” This answer was particularly easy to imagine inside Second Life, where metaphors become real within the irreality of a virtual landscape. In fact, I first came up with the game while leading a class in Second Life–but that’s for another time.

So no “just a tool,” since a HammerHand is something quite different from a hammer or a hand, or a hammer in a hand. It’s one of those small but powerful points that can make one see the designed built world, a world full of builders and designers (i.e., human beings), as something much less inert and “external” than it might otherwise appear. It can also make one feel slightly deranged, perhaps usefully so, when one proceeds through the quotidian details (so-called) of a life full of tasks and taskings.

To complicate matters further, the computer is an unusual tool, a meta-tool, a machine that simulates any other machine, a universal machine with properties unlike any other machine. Earlier in the seminar this semester a sentence popped out of my mouth as we talked about one of the essays–“As We May Think”? I can’t remember now: “This is your brain on brain.” What Papert and Turkle refer to as computers’ “holding power” is not just the addictive cat videos (not that there’s anything wrong with that, I imagine), but something weirdly mindlike and reflective about the computer-human symbiosis. One of my goals continues to be to raise that uncanny holding power into a fuller (and freer) (and more metaphorical) (and more practical in the sense of able-to-be-practiced) mode of awareness so that we can be more mindful of the environment’s potential for good and, yes, for ill. (Some days, it seems to me that the “for ill” part is almost as poorly understood as the “for good” part, pace Morozov.)

George Dyson writes, “The stored-program computer, as conceived by Alan Turing and delivered by John von Neumann, broke the distinction between numbers that mean things and numbers that do things. Our universe would never be the same” (Turing’s Cathedral: The Origins of the Digital Universe). This is a very bold statement. I’ve connected it with everything from the myth of Orpheus to synaesthetic environments like the one @rovinglibrarian shared with me in which one can listen to, and visualize, Wikipedia being edited. Thought vectors in concept space, indeed. The closest analogies I can find are with language itself, particularly the phonetic alphabet.
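If it helps to see Dyson’s point in miniature, here is a toy sketch in Python of a single memory whose numbers are read sometimes as instructions and sometimes as data. The opcodes are invented for illustration and stand in for no historical machine, von Neumann’s or anyone else’s.

```python
# A toy stored-program machine: one flat memory of integers, where the same
# numbers serve as instructions ("do things") and as data ("mean things").

def run(memory):
    pc = 0  # program counter
    while True:
        op = memory[pc]
        if op == 0:                       # HALT
            return memory
        elif op == 1:                     # ADD a, b, dest: mem[dest] = mem[a] + mem[b]
            a, b, dest = memory[pc + 1:pc + 4]
            memory[dest] = memory[a] + memory[b]
            pc += 4
        elif op == 2:                     # STORE value, dest: overwrite any cell, even an instruction
            value, dest = memory[pc + 1:pc + 3]
            memory[dest] = value
            pc += 3
        else:
            raise ValueError(f"unknown opcode {op} at {pc}")

mem = [1, 9, 10, 11,   # ADD mem[9] + mem[10] -> mem[11]
       2, 0, 4,        # STORE 0 into cell 4 (the program rewrites one of its own instructions)
       0, 0,           # HALT (and padding)
       6, 7, 0]        # data cells 9, 10, 11
print(run(mem)[11])    # prints 13
```

The cell that “means” something one moment can “do” something the next; program and data share the same universe of numbers, which is the break Dyson describes.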

The larger point is now at the ready: in fullest practice and perhaps even for best results, particularly when it comes to deeper learning, it may well be that nothing is just anything. Bateson describes the moment in which “just a” thing becomes far more than “just a” thing as a “double take.” For Bateson, the double take bears a thrilling and uneasy relationship to the double bind, as well as to some kinds of derangement that are not at all beneficial. (This is the double-edged sword of human intellect, a sword that sometimes has ten edges or more–but I digress.) This double take (the kids call it, or used to call it, “wait what?”) indicates a moment of what Bateson calls “transcontextualism,” a paradoxical level-crossing moment (micro to macro, instance to meta, territory to map, or vice-versa) that initiates or indicates (hard to tell) deeper learning.

It seems that both those whose life is enriched by transcontextual gifts and those who are impoverished by transcontextual confusions are alike in one respect: for them there is always or often a “double take.” A falling leaf, the greeting of a friend, or a “primrose by the river’s brim” is not “just that and nothing more.” Exogenous experience may be framed in the contexts of dream, and internal thought may be projected into the contexts of the external world. And so on. For all this, we seek a partial explanation in learning and experience. (“Double Bind, 1969,” in Steps to an Ecology of Mind, U Chicago Press, 2000, p. 272). (EDIT: I had originally typed “eternal world,” but Bateson writes “external.” It’s an interesting typo, though, so I remember it here.)

It does seem to me, very often, that we do our best to purge our learning environments of opportunities for transcontextual gifts to emerge. This is understandable, given how bad and indeed “unproductive” (by certain lights) the transcontextual confusions can be. No one enjoys the feeling of falling, unless there are environments and guides that can make the falling feel like flying–more matter for another conversation, and a difficult art indeed, and one that like all art has no guarantees (pace Madame Tussaud).

2. So now the second strand, regarding Engelbart’s “Augmenting Human Intellect: A Conceptual Framework.” Much of this essay, it seems to me, is about identifying and fostering transcontextualism (transcontextualization?) as a networked activity in which both the individual and the networked community recognize the potential for “bootstrapping” themselves into greater learning through the kind of level-crossing Bateson imagines (Douglas Hofstadter explores these ideas too, particularly in I Am A Strange Loop and, it appears, in a book Tom Woodward is exploring and brought to my attention yesterday, Surfaces and Essences: Analogy as the Fuel and Fire of Thinking. That title alone makes the recursive point very neatly). So when Engelbart switches modes from engineering-style-specification to the story of bricks-on-pens to the dialogue with “Joe,” he seems to me not to be willful or even prohibitively difficult (though some of the ideas are undeniably complex). He seems to me to be experimenting with transcontextualism as an expressive device, an analytical strategy, and a kind of self-directed learning, a true essay: an attempt:

And by “complex situations” we include the professional problems of diplomats, executives, social scientists, life scientists, physical scientists, attorneys, designers–whether the problem situation exists for twenty minutes or twenty years.

A list worthy of Walt Whitman, and one that explicitly (and for me, thrillingly) crosses levels and enacts transcontextualism.

Here’s another list, one in which Engelbart tallies the range of “thought kernels” he wants to track in his formulative thinking (one might also say, his “research”):

The “unit records” here, unlike those in the Memex example, are generally scraps of typed or handwritten text on IBM-card-sized edge-notchable cards. These represent little “kernels” of data, thought, fact, consideration, concepts, ideas, worries, etc., that are relevant to a given problem area in my professional life.

Again, the listing enacts a principle: we map a problem space, a sphere of inquiry, along many dimensions–or we should. Those dimensions cross contexts–or they should. To think about this in terms of language for a moment, Engelbart’s idea seems to be that we should track our “kernels” across the indicative, the imperative, the subjunctive, the interrogative. To put it another way, we should be mindful of, and somehow make available for mindful building, many varieties of cognitive activity, including affect (which can be distinguished but not divided from cognition).
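To make the edge-notched-card idea a little more concrete, here is a small hypothetical sketch in Python. The names (Kernel, contexts, select) and the sample cards are mine, not Engelbart’s; the only point is that one small record can be retrieved along many dimensions, the way a needle pulls every card that shares a notch.

```python
from dataclasses import dataclass, field

@dataclass
class Kernel:
    text: str
    contexts: set = field(default_factory=set)   # e.g. {"worry", "fact", "interrogative"}

def select(cards, *notches):
    """Return every kernel notched with all of the given contexts."""
    return [k for k in cards if set(notches) <= k.contexts]

cards = [
    Kernel("Deadline for the framework report", {"fact", "imperative"}),
    Kernel("What would augmentation mean for a diplomat?", {"idea", "interrogative"}),
    Kernel("Uneasy about the efficiency framing", {"worry", "affect"}),
]

# Cross-context retrieval: the same pool of kernels answers different questions.
for kernel in select(cards, "interrogative"):
    print(kernel.text)
```

The retrieval step is the software analogue of the needle: every kernel tagged with the chosen context falls out, whatever else it happens to be.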

3. I don’t think this activity increases efficiency, if efficiency means “getting more done in less time.” (A “cognitive Taylorism,” as one seminarian put it.) More what is always the question. For me, Engelbart’s transcontextual gifts (and I’ll concede that there are likely transcontextual confusions in there too–it’s the price of transcontextualism, clearly) are such that the emphasis lands squarely on effectiveness, which in his essay means more work with positive potential (understanding there’s some disagreement but not total disagreement about what “positive” means).

It’s an attempt to tell more of the whole truth about experience, and to build a better world out of those double takes. Together.

Is Engelbart’s essay a flawless attempt? Of course not. But for me, Bateson’s idea of transcontextualism helps to explain the character of the attempt, and to indicate how brave and necessary it is, especially within a world we can and must (and do, yet often willy-nilly) build together.

Not perfect; just miraculous.

More on this anon!

Understanding the machine

Last week, VCU’s New Media Faculty-Staff Development Seminar took up two related but also quite distinct essays: Norbert Wiener’s “Men, Machines, and the World About” and J.C.R. Licklider’s “Man-Computer Symbiosis.” Aside from the regrettable (but understandable) androcentric language, both essays are forward-looking, yet in different ways. Each of them understands that human history moves in the direction of greater complexity, especially in the accelerating streams of technological innovation and invention. (Wiener wrote a whole book on the subject of invention, one well worth reading, though it was not published until years after his death.) Both writers take up machines, systems, and human-machine interaction, and both emphasize that the computer is a new kind of machine. Wiener writes of a “logical machine” with feedback loops, and Licklider emphasizes the “routinizable, clerical” capabilities of the computer. Although neither one uses the magical phrase “universal machine” that Alan Turing uses, they both seem to understand that a difference in degree (speed, memory) can mean a difference in kind. Wiener also writes of “the machine whose taping [i.e., programming] is continually being modified by experience” and concludes that this kind of machine “can, in some sense, learn.” Such machine learning, and research into its possibilities, is going on all around us today, and that pace too is accelerating. (Google Translate is but one example. Notice that it keeps getting better?)
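As a minimal sketch of a “taping . . . continually being modified by experience,” here is a one-number predictor in Python that corrects itself after every observation. The update rule is an ordinary running adjustment chosen for illustration; it is not drawn from Wiener’s or Licklider’s texts.

```python
def make_predictor(rate=0.2):
    state = {"estimate": 0.0}              # the "taping" that experience modifies

    def predict():
        return state["estimate"]

    def observe(actual):
        error = actual - state["estimate"]
        state["estimate"] += rate * error  # feedback loop: the error corrects the machine
        return error

    return predict, observe

predict, observe = make_predictor()
for measurement in [10, 12, 11, 13, 12]:
    print(f"predicted {predict():.2f}, saw {measurement}")
    observe(measurement)
```

Each pass through the loop leaves the program slightly different from the program that ran before, which is all that “can, in some sense, learn” needs to mean here.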

Part of the experience computers learn from, of course, is our experience–that is, computers can be made and programmed so that they adapt to (learn from) our uses of them. It was hard to see this happening in the pre-Internet era. We could customize various things in DOS, and on the Macintosh, and on Windows (yes, even on Windows), but we didn’t have the feeling of the computer adapting to our uses. For that phenomenon to become truly visible, we needed the World Wide Web and cloud computing. (If you see an unidiomatic translation in Google Translate, click on the word, and Google Translate gives you the opportunity to teach it something.) The computer that learns from us most visibly is the computer formed of the decentralized, open, ubiquitous Internet, as that medium is harnessed by various entities. The most powerful application ever deployed on the Internet, the platform that enabled the macro-computer of the Internet to become visible and self-stimulating, is the World Wide Web.

Which leads me to my point, one already made more elegantly by Michael Wesch (see “The Machine is Us/ing Us”), Kevin Kelly, and Jon Udell, among many others. As we publish to the Web, purposefully and variously and creatively, we also make the Web. This is also true on the micro scale of personal computing, deeply considered, but we see the effects most powerfully at the macro scale of networked, interactive, personal computing enabled by the World Wide Web. The Web, freely given to the world by Tim Berners-Lee, is a metaplatform with the peculiar recursive phenomenon of unrolling before your eyes as you walk forward upon it. It is a world that appears in the very making–assuming, of course, that you are indeed a web maker and not simply a web user.

Wiener writes, “If we want to live with the machine, we must understand the machine, we must not worship the machine…. It is going to be a difficult time. If we can live through it and keep our heads, and if we are not annihilated by war itself and our other problems, there is a great chance of turning the machine to human advantage, but the machine itself has no particular favor for humanity.” If the machine is us, however, as Michael Wesch argues (and in the case of the machine of networked, interactive, personal computing on the World Wide Web, I agree), then Wiener’s statement reads like this:

If we want to live with ourselves, we must understand ourselves, we must not worship ourselves…. It is going to be a difficult time. If we can live through it and keep our heads, and if we are not annihilated by war itself and our other problems, there is a great chance of turning ourselves to human advantage, but we ourselves have no particular favor for humanity.

The idea of enlarging human capabilities should make us nervous, I suppose, but it’s a step forward to understand that that is what we’re thinking about, and that is what’s uniquely empowered and enlarged by interactive, networked, personal computing. From art to medicine to engineering to business and beyond, one capability we have and share, to an alarming and exhilarating extent, is a capability for enlarging our capabilities. Computers are an interesting manifestation of that capability, and a powerful means of using (exploiting, unleashing) that capability. As is education. (Schooling? Depends on the day and the school and the teacher.)

Once we understand that, deeply, we may turn to Poincaré’s observation, quoted by Licklider: “The question is not, ‘What is the answer?’ The question is, ‘What is the question?’”

Licklider dreamed of using computers to help humans “through an intuitively guided trial-and-error procedure” to formulate better questions. I am hopeful that awakening our digital imaginations will lead us to formulate better questions about our species’ inquiring nature and our very quest for understanding itself.