Happy New Year: Here’s Our Challenge

On December 9, 2013, Doug Engelbart and his work were honored in a memorial gathering at the Computer History Museum in Mountain View, California. The tributes, with a panel discussion following, are up on YouTube. If you’re at all curious about Doug’s vision and the legacy it offers us, I urge you to watch the video. It’s two hours very well spent.

There are several challenges in the video, not just one, but the one I want to highlight here comes from the tribute by Elizabeth “Jake” Feinler, who worked with Doug for several years at the Augmentation Research Center (ARC) and went on to become the founding director of ARPANET’s Network Information Center. Her tribute starts at 30:12 into the video.

Ms. Feinler had many great memories of Doug and ARC, but the part that resonated most deeply with me came in words she quoted from Doug himself. The words illustrated Feinler’s experience of ARC as well as her intense admiration for Doug’s vision and humanity. Here’s what Feinler read, from an essay titled “Working Together,” written by Doug and Harvey Lehtman (also of ARC) and published in the December 1988 issue of Byte magazine.

We thought that success in tools for collaborative knowledge work was essential to the necessary evolution of work groups in increasingly knowledge-rich societies and to increasing organizational effectiveness. Until the recent growing interest in CSCW [computer supported collaborative work], most developers limited their analyses to technical issues and ignored the social and organizational implications of the introduction of their tools; such considerations were, however, key to our work.

There is growing recognition that some of the barriers to acceptance of fully integrated systems for augmenting groups of knowledge workers may be more significantly social, not solely technical. The availability of rapidly evolving new technologies implies the need for concomitant evolution in the ways in which work is done in local and geographically distributed groups.

ARC [the Augmentation Research Center] experienced this phenomenon continuously. The bootstrapping approach, so important to the continuing evolution of the system, caused us to constantly undercut our world: As soon as we became used to ways of doing things, we replaced platforms to which we were just becoming accustomed. We needed to learn new roles, change attitudes, and adopt different methods because of growth in the technical system we ourselves produced.

We brought in psychologists and social scientists to serve as observers and facilitators. They were as important to our team as the hardware and software developers. The resistance to change, which we soon realized was an essential part of introducing new technologies into established organizational settings, and the psychological and organizational tensions created by that resistance were apparent in ourselves. We were required to observe ourselves in order to create appropriate methodologies and procedures to go along with our evolving computer technologies. [my emphases]

This language, shifted only slightly, applies equally well to the process of education itself. True learning generates both increasing complexity and, at a meta level, an increasing awareness of the nature and potential uses of that complexity–that is, strategies of mindfulness. A university is an augmentation research center, is it not? And yet how much time do we spend in fruitful self-observation, in bootstrapping ourselves into higher levels of mindfulness and invention, even though by doing so we inevitably, constantly “undercut our world”? “World” here means not the planet or civilization, but the structures and organizations that inevitably choke new growth. These “worlds” must serve the values we profess, not the other way around. These “worlds,” and ourselves as their architects and inhabitants, must evolve and grow even as we struggle to keep up with the change we have set in motion. “Undercut” is one way to acknowledge the struggle–but “reinvent” is also apt, for it points to the essential goals of learning.

What is the alternative? Bureaucratic self-defense? Does the world need more lessons in that?

2014 is likely to be a full-on year of Engelbart activity for me. The cMOOC I’ll be teaching with Jon Becker (with the able disruptive ingenuity of Tom Woodward) will explore topics central to Doug’s vision and work. There will be at least two Engelbart Scholars among the VCU students who take that course (more details coming soon). And as always, I will be doing my level best to bring Doug’s ideas and the ongoing work of the Engelbart Institute to the conversation about networked learning, wherever I can find it (or it can find me).

Two birthdays, an anniversary, and a brief lament

First, the birthdays.

Happy birthday to the author I’ve studied, and delighted in, for the last thirty-three years: John Milton, born 1608.

John Milton, busted.

I never imagined I’d spend my life reading and thinking and writing about this writer. Just goes to show. (Show what? I’ll leave that as an exercise for the reader.)

Happy birthday wishes also go out to Rear Admiral Grace Hopper, the mother of COBOL, a fountain of wit and wisdom, and a pioneering genius of computer science. I first learned about Admiral Hopper from Dr. David Evans’ Udacity course CS101. (Yes, Udacity. It just goes to show.) Dr. Evans linked to her famous interview with David Letterman, and I was an instant fan.

“Grace Murray Hopper at the UNIVAC keyboard, c. 1960.” From Wikipedia.

The anniversary: 45 years ago today, Dr. Douglas Engelbart sat on a stage in San Francisco and, according to one awestruck observer, “dealt lightning with both hands.” The event has come to be known as “the mother of all demos.” There’s a very nice remembrance of Doug and his demo in The Atlantic today. I know there’s also a memorial happening right about now in Mountain View, as his daughter Christina and many of Doug’s family, friends, and admirers are gathered to remember the demo and Doug, who passed away this year on July 2.

I talked to Doug for about an hour, back in 2006. I met him and shook his hand in 2008 on the night before the 40th anniversary of the mother of all demos. I am humbled to be working with Christina on a project for this summer and beyond. I am so very grateful to be linked in spirit and work with Doug’s vision. When there are dark or confusing days, I try to remember how lucky I am to have found that vision, and to have thanked that visionary, while he was still with us.

Here’s the first part of the mother of all demos:

And here’s a version my wonderful student Phillip Heinrich did for a final project in my second-ever “Introduction to New Media Studies” class, the course that eventually became “From Memex To YouTube: Cognition, Learning, and the Internet” (and will have another morphing this summer at VCU and worldwide–watch this space):

Phillip’s work is a conceptual mashup of Doug’s demo and Michael Wesch’s “The Machine Is Us/ing Us.” Even four years later, Phillip’s work still dazzles. Apparently Doug himself saw it at one point, which makes me very joyful.


I write these words from a hotel room in Atlanta, where I’m attending the annual meeting of the Southern Association of Colleges and Schools Commission on Colleges. I’ve heard some inspiring speakers and learned a great deal about more of the vast machinery of higher education. At the same time, I’ve seen many folks whose eyes are on fire with a passionate devotion to learning and teaching. I honor them, and salute their survival despite the vast machinery that exists, in part and sometimes ironically, to support them and their vocations.

The lament is for the ways in which the notion of “technology” that surrounds me here is untouched by the vision of either Grace Hopper or Doug Engelbart. When I hear a presenter say that a survey couldn’t include questions about “technology” as part of its core because “technology changes so rapidly,” I groan inwardly. In addition to the (typically) underthought use of the word “technology,” the speaker has obviously confused computing devices with computing. In the latter sense, “technology” has not changed substantially since the introduction of networked, interactive, personal computing, with the possible exception of mobile computing. But the confusion here keeps “technology” questions in a different survey “module,” and keeps educators from learning, or even asking, what they don’t know. (And eventually we all suffer.)

Similarly, when I hear another presenter say “we didn’t know technology would eliminate jobs the way it has,” then offer a list of “technology improvements” for the organization that include new computers and monitors, new office software, etc., I have to gnash my teeth (quietly, but still). How can we be in 2013 and still be so far removed from even the outer edges of the bright light shed by the visions of Hopper and Engelbart, among many others? How can we call ourselves educators and be content not only to remain in darkness, but to spread it through inaction and (I’m sorry, but it must be said) ignorance?

More than once at this conference I’ve heard presenters talk about “technology” in the same breath that they lament how old they are and how strange youth culture seems to them. Sometimes the lament is mingled with a little of that “kids, get off my lawn” curmudgeonliness. We all get to be a little prickly as we age, I guess, but methinks we do protest too much. Doug Engelbart and Grace Hopper didn’t surrender their visions as age overtook them. We do ourselves and our students no good service to remain in the shallows we have created for ourselves, the shallows we continue to excuse and extend. As Janet Murray writes, “When will we recognize the gift for what it is…?” Or as Doug Engelbart asked on that San Francisco stage forty-five years ago today:

If, in your office, you as an intellectual worker were supplied with a computer display backed up by a computer that was alive for you all day and was instantly responsive to every action you have, how much value could you derive from that?

Both Murray’s and Engelbart’s questions remain unanswered, and that itself is worth lamenting. The real grief comes, for me, because the questions are almost never asked, even among those who pride themselves on the arts of inquiry.

A sad case, but there is still hope. My students have taught me that.

Thomas Merton on Education

 

Thomas Merton’s hermitage.

Very salutary readings for a rainy Sunday morning at the SACS-COC conference in Atlanta, Georgia. This is the first time I’ve attended this annual meeting. Higher education is my vocation, so you wouldn’t think I’d have culture shock here–but I find I do. Perhaps that’s a first-timer’s gift. I must practice gratitude!

Here are some of Merton’s thoughts. These come from a man who had been educated in France, England (where he studied at Cambridge), and the US (graduating with an MA from Columbia University). For a short time, he was a professor of English at St. Bonaventure. So he knows whereof he speaks.

“The danger of education, I have found, is that it so easily confuses means with ends. Worse than that, it quite easily forgets both and devotes itself merely to the mass production of uneducated graduates–people literally unfit for anything except to take part in an elaborate and completely artificial charade which they and their contemporaries have conspired to call ‘life’.”

“The least of the work of learning is done in classrooms.”

“Anyone who regards love as a deal made on the basis of ‘needs’ is in danger of falling into a purely quantitative ethic. If love is a deal, then who is to say that you should not make as many deals as possible?” [One can substitute “learning” for “love” and reach the same conclusion.]

“[A publisher asked me to write something on ‘The Secret of Success,’ and I refused.] If I had a message to my contemporaries, I said, it was surely this: Be anything you like, be madmen, drunks, and bastards of every shape and form, but at all costs avoid one thing: success. … If you have learned only how to be a success, your life has probably been wasted. If a university concentrates on producing successful people, it is lamentably failing in its obligation to society and to the students themselves.” [Particularly bracing words given the buzz here–and in my own title at work!–regarding “student success.” Who would wish that our students would fail? Yet too narrow a view of success may be the most insidious route to failure of them all.]

And finally, in words that I would love to see above every classroom door and on the cover of every learning-related conference (my editorial material is clumsy but I want to present Merton generously):

“The purpose of education is to show a person how to define himself [or herself] authentically and spontaneously in relation to his [or her] world–not to impose a prefabricated definition of the world, still less an arbitrary definition of the individual himself [or herself].”

Source: Love and Living.
h/t @rovinglibrarian, @graceiseverywhere

No way out but through

A-Bomb group leaders, via NY Times/Bettmann/Corbis

Last week’s New Media Faculty-Staff Development Seminar here at Virginia Commonwealth University discussed Vannevar Bush’s epochal (and, in its way, epic) “As We May Think.” The essay truly marks a profound shift, appearing just as WWII was about to conclude with a display of horrific invention that still has the power to make one’s mind go blank with fear. From Resnais’ Hiroshima Mon Amour to a film that can still give me nightmares, The Day After, the mushroom cloud that signifies this invention hung over my childhood and adolescence–and I don’t expect it will ever go away. Now that we know how, there is no unknowing unless civilization erases itself.

But as myth, fiction, and science continue to demonstrate, each in its own way, thousands of reminders of the real problem lie to hand every day: human ingenuity. It’s easy to get distracted by the name “technology,” as if it’s what we make, rather than our role as makers, that’s to blame. But no, it’s the makers we should lament. Or celebrate. Or watchfully, painfully love.

The state of man does change and vary,
Now sound, now sick, now blyth, now sary,
Now dansand mirry, now like to die:
Timor mortis conturbat me.

William Dunbar, “Lament for the Makers”

What shall we do with these vexing, alarming, exhilarating abilities? We learn, we know, we symbolize. Sometimes we believe we understand. We find a huddling place. We explore, and share our stories.

Presumably man’s spirit should be elevated if he can better review his shady past and analyze more completely and objectively his present problems.

Vannevar Bush, “As We May Think”

For several iterations through the seminar, that word “presumably” leapt out at me, signalling a poignant, wary hope as well as a frank admission that all hope is a working assumption and can be nothing more. This time, however, the word “review” glows on the page. Re-view. Why look again? How can repetition make the blind see? Ever tried to find something hiding in plain sight? Ever felt the frustration of re-viewing with greater intensity, while feeling deep down that the fiercer looking merely amplifies the darkness? (Ever tried to proofread a paper?)

We console ourselves with the joke, attributed to Einstein, that the definition of insanity is to do the same thing again and again while expecting different results. Yet we hope that thinking, mindfully undertaken, may contradict that wry observation. We hope that thinking again can also mean thinking differently, that a re-view strengthened by a meta-view can yield more insight and bring us a better result than the initial view did. Look again. Think again. And, in Vannevar Bush’s dream of a future, a dream that empowered epochal making, looking again and thinking again would be enriched, not encumbered, by a memory extender, a “memex”:

[Man] has built a civilization so complex that he needs to mechanize his records more fully if he is to push his experiment to its logical conclusion and not merely become bogged down part way there by overtaxing his limited memory.

What is this experiment? When exactly did we sign the papers giving our informed consent to any such thing?

Our ingenuity is the experiment, the problem, the hope. Our birthright may also be our death warrant. Is that the logical conclusion?

Yet, in the application of science to the needs and desires of man, it would seem to be a singularly unfortunate stage at which to terminate the process, or to lose hope as to the outcome.

The word “science” signifies more than simply the methodological revolutions emerging in Renaissance Europe. For me, it signifies knowing. We in the humanities enact our own experiments in knowing, exerting our own ingenuity both constructively and destructively. We too are makers.

Re-view. Analyze more completely. “Encompass the great record and … grow in the wisdom of race [i.e., species] experience.” As we may think, and create and share “momentary stays against confusion.”

No way out but through.

Optimism

3. Hopefulness and confidence about the future or the successful outcome of something; a tendency to take a favourable or hopeful view. Contrasted with pessimism n. 2.

So the Oxford English Dictionary. I picked sense 3 because it seems most resilient in the face of abundant evidence that this is in fact NOT the best of all possible worlds (pace Leibniz, at least as he’s pilloried by Voltaire).

It seems to me that educators, no matter how skeptical their views (skepticism is necessary but not sufficient for an inquiring mind), are implicitly committed to optimism. Otherwise, why learn? and why teach?

Satan Overlooking Paradise

I think of this as I begin another semester thinking with faculty and staff across the university (last term Virginia Tech, this term Virginia Commonwealth University) about the possible good we could co-create, and derive, from interactive, networked, personal computing. To be pessimistic (not skeptical, pessimistic–they are not synonyms) about personal, networked, interactive computing is to be pessimistic not about an invention, but about invention itself–that is, about one of our most powerful distinctions as a species.

Computers have become woven into our lives in ways we can barely imagine, but the best dreams about the texture of such a world are hopeful, and stimulate hope. Are we there yet? Of course not. But to be pessimistic about computers is to be pessimistic about humanity. And while that’s certainly a defensible position generally speaking, it seems to me that education is an activity, a co-creation, a calling, that runs clean counter to pessimism.

Last week in the seminar we read Janet Murray’s stirring introduction to The New Media Reader. A colleague from the School of Dentistry. A colleague from the library. A colleague from the Center for Teaching Excellence. Colleagues from University College. And more. Once again, I read these words:

We are drawn to a new medium of representation because we are pattern makers who are thinking beyond our old tools. We cannot rewind our collective cognitive effort, since the digital medium is as much a pattern of thinking and perceiving as it is a pattern of making things.

Indeed–yet this is not to deny the meta level at which we consider our consideration, and think about our blind spots so we can find more light:

We are drawn to this medium because we need to understand the world and our place in it.

Yes–and now the world we need to understand is also a world transformed, for good and for ill but potentially for good, why not?, by the medium itself. Recursive, yes–but more deeply, a paradox, not an infinite regress. That’s the hope, anyway. And educators are committed to hope.

To return to [Vannevar] Bush’s speculations: now that we have shaped this new medium of expression, how may we think? We may, if we are lucky and mindful enough, learn to think together by building shared structures of meaning.

That mindfulness is the meta level. I am optimistic about that meta level. As a learner, I have to be. If mindfulness is impossible, then it’s truly turtles all the way down, and who would care?

How will we escape the labyrinth of deconstructed ideologies and self-reflective signs? We will, if we are lucky enough and mindful enough, invent communities of communication at the widest possible bandwidth and smallest possible granularity.

Lucky, and mindful. Chance favors the mindful mind.

We need not imagine ourselves stranded somewhere over the evolutionary horizon, separated from our species by the power of our own thinking.

Or separated from our history, or from our loved ones–though clearly Hamlet (to name only one) demonstrates that mindfulness alone is no guarantee of anything. But what is on the other side of the horizon? What do we find when we return to the place we left and see it for the first time?

The machine like the book and the painting and the symphony and the photograph is made in our own image, and reflects it back again.

To which I would add: the syntax and punctuation in Murray’s sentence above enact the pulses of ways we may think. Those pulses and the ways they enact are poetry. What more complex shared structure of meaning is there–unless it’s true that all art aspires to the condition of music? Poetry “begins in delight and ends in wisdom,” Frost writes. He continues: “the figure is the same as for love.” Can the shared structures of meaning emerging from our species’ collective cognitive effort begin in delight and end in wisdom, too? Can the figure our collective cognitive efforts make be the same as for love? I think: I hope so. I think: it better be. I think: how can I try to help? The seminar is one answer, a crux of hopes, the discovery of an invisible republic of optimism.

The task is the same now as it ever has been, familiar, thrilling, unavoidable: we work with all our myriad talents to expand our media of expression to the full measure of our humanity.

And by doing so, that measure increases. May we use that abundance wisely, fairly, and lovingly within this mean old brave new world.

With luck and mindfulness, I am hopeful that we can.

for my Jewish mother, Dr. Janet Murray, with love and deepest gratitude

So let’s recap

Soaring into the eye of the gods

In mid-March I got an email telling me I was nominated in the search for a senior leadership position at Virginia Commonwealth University: Vice Provost for Learning Innovation and Student Success. I was intrigued. I looked at the leadership profile. I was mightily interested. Can’t hurt to apply, I said to myself. So I did.

The hectic, rewarding pace of life went on. Janet Murray came to VT as the third Distinguished Innovator in Residence. (Very exciting.) The Center for Innovation in Learning prepared its first call for Innovation Grant proposals. (Ditto above.) Learning Technologies began its metamorphosis into Technology-enhanced Learning and Online Strategies. I traveled to Richmond, Boston, Bethlehem (Pennsylvania), and in June, to Rome (Italy) for conference presentations and faculty seminars. And to my wonder and delight, my candidacy continued to advance in the VCU search.

On June 7, as I sat in my lodgings in Barcelona, I spoke with VCU’s Provost, Dr. Beverly Warren, who offered me the job. As a literature scholar, I am of course duty-bound to tell you that the last time I was in Barcelona, in October of 2010, I was offered the Virginia Tech job. I guess Barcelona is my lucky town in the narrative of my professional life. (No one who’s been there will be in the least surprised.)

On June 24, back in the States, I signed the contract.

On August 1, I formally began my work, though I’d been ramping up at VCU and ramping down at VT since my return to the US. On this same day, my wife and I closed on our new home in Richmond.

Oh, and the conference in Rome was wonderful, far beyond my already-high expectations. The city and country were also pretty stupendous (litotes alert). As was Spain the week before, as was England the week before that. A summer of summers.

Sadly, a cloud hung over the trip: the death of my beloved mother-in-law on the same day that her youngest daughter, my wife, arrived in Madrid to join me in my travels. That grieving continues. If my experience with my parents at their passing is any guide, one learns to live with death, but one never gets over it.

I guess I’m a little behind in my blogging. Perhaps you can see why? The problem seems to be time, but it isn’t really. Time has become extremely compressed, yes, and spare time has become a vanishing commodity. My perception of time many days borders on the surreal as I adjust to the scale, scope, pace, and challenges of the new job–all very exciting, all very welcome, and all very demanding. Yet the real problem is, as ever, too much to say.

Time to write anyway. Not that I’ve been idle in that department, but I have been silent in this space, and I miss it. I did get 6000+ words done in an article on temptation in Paradise Lost, however–turns out I miss that kind of writing, too. Yes, Gardner writes, even if you haven’t seen it here for several months. Time to write anyway.

McLuhan and our plight

“Plight” is an interesting word. We are in a plight, meaning we’re in a tangle, a mess, a terrible fix, with “fix” itself an ironic noun in this context. Yet we also plight our troth, meaning “pledge our truth.” Plight-as-peril and plight-as-pledge both come from an earlier word meaning “care” or “responsibility” or (my favorite from the Oxford English Dictionary) “to be in the habit of doing.” Along a different etymological path, we arrive at the word meaning to braid or weave together. The word “plait” is a variant that makes this meaning more explicit. It’s not too far into poets’ corner before weaving, promising, and care-as-a-plight become entangled, at least in my mind, and perhaps usefully so.

The first McLuhan reading in the New Media Faculty-Staff Development Seminar is from The Gutenberg Galaxy, specifically the chapter called “The Galaxy Reconfigured or the Plight of Mass Man in an Individualist Society.” I don’t know if McLuhan is punning here, but it’s not implausible that the man who coined the term “the global village” and paid special attention to the role of mediation in human affairs–mediation considered as extensions of humanity–might think not only about the plight we find ourselves in but also the plighting of troth we might explore or co-create or braid.

The trick (and McLuhan is nothing if not a trickster, as others have noted) is that the plighting cannot be straightforward or “lineal,” lest it not be a genuine pledge or an authentic weaving. His very writing is obviously a plight for many readers, but it’s also a brave (and sometimes wacky) attempt to do a plighting of the plaiting kind as a sort of pledge of responsibility. He writes these stirring words for our consideration:

For myth is the mode of simultaneous awareness of a complex group of causes and effects. In an age of fragmented lineal awareness, such as produced and was in turn greatly exaggerated by Gutenberg technology, mythological vision remains quite opaque. The Romantic poets fell far short of Blake’s mythical or simultaneous vision. They were faithful to Newton’s single vision and perfected the picturesque outer landscape as a means of isolating single states of the inner life.

From which I draw these conclusions regarding McLuhan’s argument (or plighting):

1. “Lineal” does not mean “synthesized” or “unified.” The straight path or bounded area leads only to fragmentation and reduction. It is not a weaving and cannot be. The lineal and the fragmented are perilously broken promises.

2. Mythological vision is a technology for enlarging awareness of complexity. Mythological vision is both plighted-woven and a means for plighting-weaving.

3. Fragmented, lineal awareness invents technologies of self-propagation that reinforce more lineality, more fragmentation, while giving the illusion of doing quite the opposite. Single-point perspective is not the same as a unifying vision or a simultaneous awareness of a complex group of causes and effects. It is, instead, reductive while pretending to be unified.

4. Even self-consciously or self-proclaimed liberatory movements such as Romantic poetry (or any number of other such apparently radical departures) may quail before the complexity and simply reinscribe a slightly shifted set of boundaries, thus perpetuating a reduction of complexity and a lack of awareness that dooms our technologies to reproducing our failures.

What technologies might reveal, restore, or help us co-construct a mythological vision, a species-wide simultaneous awareness of a complex group of causes and effects? It’s a political question that reaches into the realm of complexity science, art, and potentially even philosophy or (gasp) theology. Does Doug Engelbart’s idea of “augmentation” and complex symbolic innovation answer such a call? Does Bill Viola’s anti-condominium campaign? Is there an eternal golden braid to be had, or woven? What loom should we choose, or make?

Of Flutes and Filing Cabinets

Last week in our New Media Faculty-Staff Development Seminar, Nathan Hall (University Libraries) and Janine Hiller (College of Business) teamed up to take us through the Alan Kay / Adele Goldberg essay “Personal Dynamic Media.” Janine and Nathan took an inspired approach to their task. Nathan’s a digital librarian, and he brought his training and interest in information science to bear on Kay and Goldberg’s ideas. Janine’s work is in business law, so intellectual property would have been a logical follow-on for discussion. But wily Nathan segued into wily Janine’s swerve in a direction that in retrospect makes perfect sense but at the time came with the force of a deep and pleasant surprise: the information science of metaphor.

As I look back on the session, I have to admire the canny way in which the info science/metaphor combination acted out the very nature of metaphor itself: the comparison of two unlike objects. Having made the comparison, of course, one begins to see very interesting disjunctions and conjunctions. The mind begins to buzz. Wholly novel ideas emerge, such as the metamedium of the computer being like a pizza. Seriously.

Janine shared with us a lovely TED video on metaphor …

… and challenged us in small groups to come up with our own metaphors for computing as a metamedium (think of them as seminarian family-isms). We very quickly got to pizza in our group, courtesy of the talented Joycelyn Wilson. (Amy Nelson riffs on that metaphor in her own blog post.) Another group found itself circling back, recursively but sans recursing (dagnabbit), to the powerful and complex metaphor of the “dream machine.” (Go ahead and revive that metaphor by thinking about it again. And again. Stranger than one might suppose, eh?) (Oh, and to get another link in, I believe it was 21st-century studies lamplighter Bob Siegle who led us there.) In our closing moments, we began thinking about metaphor as a metaphor for computing, and computing as a metaphor for metaphor. I do believe Alan and Adele would have enjoyed the conversation.

At the end, Nathan sketched out a continuum between the procedural and the conceptual/metaphorical that he had found in “Personal Dynamic Media.” At one end was the filing cabinet (cf. Memex, cf. info science). At the other end was the flute (a metaphor that Janine beautifully led us to unpack in our discussion). And then, a few minutes after the seminar was over and I was walking to the car, a connection appeared for me.

There is indeed an apparent dichotomy between filing cabinets and flutes, between quotidian documents and art, between the minutiae of our task-filled lives and the glorious expressive possibilities of musical performance, especially with an instrument like the flute (I am a mediocre but enthusiastic flautist) that one plays in such intimate connection with one’s body and breath. It’s simple, direct, a column of air that resonates within the instrument as well as within the hollow, air-filled spaces within one’s own face and chest.

What could be more pedestrian, ugly, and (depending on the tasks) repellent than a filing cabinet? What could be more liberating and beautiful than a well-played flute?

“Why is a raven like a writing-desk?” the Mad Hatter asks in Alice’s Adventures in Wonderland. The question is never answered. (Brian Lamb once answered it–“Poe wrote on both”–but alas his ingenuity came many decades too late for poor Alice.)

How is a flute like a filing-cabinet? The question makes even less sense. At least, at first.

But considered within the world of Alan Kay’s aphorism that “the computer is an instrument whose music is ideas,” I find myself inspired to think that one may indeed make a flute of a filing-cabinet, awakening and ennobling the detritus of our dreary records and messy operational details with the quicksilver music and responsiveness of a well-played flute.

What if we could bring that vision into our lives? Our learning? Our schools? What if our filing cabinets were less like the warehouse in which the Ark of the Covenant is boxed and lost, and more like thought-vectors in concept space sounding something like the music of the spheres?

It may not be as hard as we think–unless we actually prefer meaninglessness and stasis to delight and melody.

As Hoagy Carmichael once wrote, “Sometimes I wonder.”

Intuition: Use, Agency, Invitation

You’ve seen the ad copy. I have too. The hard sell for the soft, gentle learning curve promised for a new device is that the device is “intuitive.” That is, the device is easy to use because you can make the device do what you want because the interface design helpfully indicates how to operate the device. You want to save a file? Click on the icon. Of course, in MS Word (and MS Office generally) the icon is a floppy disk. One used to save files on floppy disks. They used to look like that, too–the 3.5 inch not-floppy diskette. Yes, this is getting complicated already. Let’s stop the cascade by admitting that “intuitive” means “familiar,” and that “familiar” itself is more of a moving target than we’d like to think. And there’s a Gordian knot for another time. (Recommended reading: “The Paradox of the Active User,” a major addition to my intellectual armamentarium courtesy of Ben Hanrahan, a wonderful student in last year’s “Cognition, Learning, and the Internet” course.)

So let’s move on. “Intuition” (home of the intuitive) can mean something much deeper than “I bet that’s how I can do that.” It can mean “I bet this device ought to be able to do that.” In “Personal Dynamic Media,” Alan Kay and Adele Goldberg tell the story of one such intuitionist:

One young girl, who had never programmed before, decided that a pointing device ought to let her draw on the [computer] screen.

This kind of intuition is a creative intuition that isn’t about “ease of use” or “I bet I already know how to do that.” It’s an educated guess, a contextual surmise, and a leap of faith. Note the fascinating language in this description. She decided (moment of agency and commitment) that a pointing device ought to let her. This kind of intuition is something like the belief in “Mathgod” that Douglas Hofstadter describes so winsomely in Fluid Concepts and Creative Analogies. It’s also (no coincidence) what Jon Udell keeps talking about when he talks about how people “don’t have intuitions” about the World Wide Web. To connect Kay-Goldberg with Udell, to have intuitions about the Web would be to decide that the Web (and the Internet that supports it) ought to let one do this or that–meaning, “given what this system is and what it supports, this thing I imagine or invent should be possible.” Note that you have to know something about what kind of a thing, or network, or web you’re working with. Indeed. But note also that the paranoia, hebephrenia, or catatonia induced by the many double-binds that formal schooling presents to learners are responses that pretty much guarantee that such intuitions will simply not develop.

Try to imagine an entering class vigorously discussing among themselves, “Given the mission statement of our university, this thing I imagine or would like to invent with regard to my own learning ought to be possible.” Feel your brain cramping in both hemispheres? Do students read mission statements? If they did, would they seek to shape their learning in terms of it? Do the structures we build to support what we say we intend, we value, we desire, actually stimulate any such activity? Exactly. Learners in formal schooling are not very likely, most of the time, to decide that school ought to let one do this or that related to learning. And if they try to make such a decision, based on such an intuition, they are often hammered back into line. Not always, but often. And any such repression is too much.

But here’s the third level, and it comes next in “Personal Dynamic Media”:

She then built a sketching tool without ever seeing ours…. She constantly embellished it with new features including a menu for brushes selected by pointing. She later wrote a program for building tangram designs.

This level of intuition is the invitationist level. This intuition is an intuition not so much about the device per se but about the learning context, an ecosystem of device, peers, teachers, etc. Kay and Goldberg praise the young girl for building her own sketching tool “without ever seeing ours.” Another teacher might have said “did you do your homework? Did you consult the manual? Did you follow directions?” These are often important questions, but they miss the most powerful intuition engine of all: the invitation.

In “The Loss of the Creature,” an essay that articulates the paradox of the active learner with haunting precision, Walker Percy writes about the recovery of being, by which he means the recovery of the person as well as the recovery of the person’s experience. He believes both person and experience to be lost to “packages” which we simply “consume” with an ever-increasing anxiety that our consumption be certified as genuine by others. Worse yet, we become increasingly numb to our consumption, unaware that our souls are rotting from the inside out. As Kierkegaard observes and Percy reminds us, the worst despair is not even to know one is living without hope. No surface receiving our “cognition prints.” No mark of our learning or inquiry or existence left behind. We do not even think to ask.

Toward the end of the essay, Percy tells a story about two modes of experience, a story of music and being:

One remembers the scene in The Heart is a Lonely Hunter where the girl hides in the bushes to hear the Capehart in the big house play Beethoven. Perhaps she was the lucky one after all. Think of the unhappy souls inside, who see the record, worry about scratches, and most of all worry about whether they are getting it, whether they are bona fide music lovers. What is the best way to hear Beethoven: sitting in a proper silence around the Capehart or eavesdropping from an azalea bush?

However it may come about, we notice two traits of the second situation: (1) an openness of the thing before one–instead of being an exercise to be learned according to an approved mode, it is a garden of delight which beckons to one; (2) a sovereignty of the knower–instead of being a consumer of a prepared experience, I am a sovereign wayfarer, a wanderer in the neighborhood of being who stumbles into the garden.

A big house with a Capehart that looks like a casket ready for an embalmed Beethoven and his embalmed listeners. Or: a sovereign wayfarer in the neighborhood of being, and a garden of delight which beckons to one.

We need to work on our beckoning. Beckoning is what Bakhtin calls addressivity: the quality of turning to someone. From design to cohort to community and everywhere in between, especially in the schools that face our present times and equip us to invent our futures: how can we work on our invitations?

The Web is not the same as the Internet, and why that matters

There must be some kind of way out of here.

I have been following John Naughton ever since I found his book A Brief History of the Future in a secondhand bookstore in South Philadelphia in the fall of 2011. (My thanks to Kathy Propert for taking me there.) Naughton is Emeritus Professor of the Public Understanding of Technology at the Open University in the UK. He’s a blogger at the aptly named Memex 1.1, he’s Vice-President of Wolfson College in Cambridge, he’s an adjunct professor at University College, Cork, his latest book is the extraordinary From Gutenberg to Zuckerberg: What You Really Need to Know About the Internet, and he’s a crackerjack journalist for The Guardian. This morning, Naughton’s blog linked to his latest Guardian column, “Kicking Away the Ladder,” which concerns among many other things the persistent, pernicious error of confusing the Internet with the World Wide Web. Naughton explores why that error matters, in fact why it may be a fatal error, one that could mean the end of the “open, permissive” infrastructure that has allowed these extraordinary telecommunications innovations we’ve witnessed over the last few decades to grow and flourish.

The essay is essential and sobering reading. Please go read it now (it’ll open in a new tab). I’ll be here when you get back.

Is Naughton overreacting? Not at all. The danger is clear and present. And he knows his history, so Naughton understands well what we have gained from the Internet and the World Wide Web. He knows how they were made, and what principles animated and informed their design. And he knows what we stand to lose in the face of the strategies controlled by those who understand elementary facts about internet and computing infrastructure, history, and design, facts that far too many people are too incurious even to inquire after. These are elementary facts. They are not difficult to understand. Their implications take a little more work to get your head around, yes, but it’s nothing that a basic program in digital citizenship couldn’t address successfully–assuming that program was about how to make open, permissive use of the open, permissive platform. That is, assuming digital citizenship is about the arts of freedom and not simply the duties and dull “vocations” of compliance and consumption.

I read parts of the essay aloud to my dearest friend and companion, the Roving Librarian, and she asked me a great question: “So, if you had to explain the difference between the Internet and the Web, how would you do it?” And as so often happens in the presence of a great and greatly foundational question asked in the spirit of mutual inquiry and respect and love, a cascade of thoughts was triggered. (Not a bad learning outcome, that.)

Here’s what I have so far. It’s coming out quickly and will need much development, but I need to write it down now. I welcome your comments and questions and elaborations and collegial friendly amendments. (No blame should attach to the Roving Librarian, by the way, for any mistakes I make. Lots of credit goes to her, though, for anything that’s worthwhile.)

The Internet is about data transmission. It’s a network that enables any node to transmit any kind of data to any other node, and any group of nodes (any network) to transmit any kind of data to any other group of nodes. It’s a network and a network-of-networks. It thus engages, stimulates, and empowers data exchange that’s one-to-one, one-to-many, many-to-many, and many-to-one. As Naughton points out in another essential essay, this structure permits unique and disruptive emergent phenomena, some of which will be disturbing and harmful, some of which will simply be puzzling or appear irrelevant (or be denounced as such), and some of which will be enormously beneficial. Naughton is not alone in his explorations. Clay Shirky indefatigably points out the enormous good that we can derive from the Internet. He points out the dangers, too, but when people call him names, they call him a “techno-utopian,” which as far as I can tell means he remains hopeful about our species’ powers of invention. Joi Ito, director of MIT’s Media Lab, emphasizes over and over again that the Internet is not so much a technology as the technological manifestation of a system of values and beliefs; not a technology, but a philosophy.
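
The design principle in that paragraph, that bytes move from node to node with no regard for what they mean, can be sketched in a few lines of Python. This is my own toy illustration, not anything from Naughton or Shirky; a local socket pair stands in for the real network of networks.

```python
import socket

def transmit(payload: bytes) -> bytes:
    """Send bytes from one endpoint to another and return what arrives.

    The transport neither knows nor cares whether the payload is text,
    image, audio, or "whatnot": that agnosticism is the point.
    """
    sender, receiver = socket.socketpair()
    try:
        sender.sendall(payload)
        chunks = []
        remaining = len(payload)
        while remaining:
            chunk = receiver.recv(remaining)
            chunks.append(chunk)
            remaining -= len(chunk)
        return b"".join(chunks)
    finally:
        sender.close()
        receiver.close()

# One-to-one here, but nothing in the transport changes if the same
# bytes fan out one-to-many or converge many-to-one.
for payload in [b"plain text", b"\x89PNG fake image bytes", b"\x00\x01 audio"]:
    assert transmit(payload) == payload
```

The same bytes go in and come out whether they spell a sentence or fake a picture; meaning is supplied in the layers above the transport, which is where the Web enters.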

To summarize, then: the Internet permits open data transmission one-to-one, one-to-many, many-to-many, and many-to-one. Seems clear enough. And that Clay Shirky talk on social media and revolutions I linked to above makes my point very vividly and clearly. (In fact, I learned to explain things this way from Shirky, from his blog and his two books Here Comes Everybody and Cognitive Surplus and in other venues as well.)

So, then, how is the Web different from the Internet? Naughton says that it’s an application that runs on the Internet. The innovation Tim Berners-Lee brought into the world about a decade prior to the turn of the century could not have been imagined or built without the open, permissive foundation that the Internet was designed to be.

But then comes the logical next question: how then is the Web significantly different from the Internet, aside from providing a layer of eye candy that makes the Internet more appealing and the metaphor of a “page” that makes the Internet seem more familiar? Gregory Bateson says that a unit of information may be defined as a difference that makes a difference. So what difference does the difference of Web make?

If I can’t answer that question, then no explanation of the difference matters, because it fails the “so what?” test.

And the answer that comes to me, mediated through the readings I’ve done in learning environments like these (I consider a course a learning environment, a carefully crafted cognitive space or occasion that’s also a foundation for collaborative building), mediated through Jerome Bruner, and mediated through Mike Wesch’s evergreen “The Machine is Us/ing Us” and all that his creation mediates (like Kevin Kelly’s essay), is that the crucial difference is the link.

That’s all, and that’s everything.

The link allows us (and once we’ve seen it happen, it invites and entices us) to construct a thought network out of (upon, within, on top of, emerging from) a data network. That’s all, and that’s everything. It is the essential move that turns sensation–a matter of data transmission along nerve fibers–into what, given enough interconnections and enough ideas about interconnections, becomes cognition, a level-crossing connectome out of which abstractions, concepts, and conceptual frameworks will emerge.

The Internet passes data agnostically (video, text, audio, whatnot) and the Web allows us to create conceptual structures out of data by means of simple, direct, open, thoughtful, permissive linking. The linking is idiosyncratic, like cognition, but like cognition, it is not merely idiosyncratic. The linking is never random–human beings can’t be random–though it may be surprising or the relation may be obscure (at first). Some sets of links are more powerful than others, but none is as powerful as the very idea of linking, just as the most powerful concept we have is the notion of concept, something I delight in exploring with students and colleagues when we get to Engelbart’s “Augmenting Human Intellect: A Conceptual Framework.”
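
That claim can be made concrete with a toy sketch of my own (the document names are invented, not drawn from Engelbart or Berners-Lee): the same few documents, stored as inert data, acquire a traversable conceptual structure the moment links are laid over them.

```python
# Inert data: three documents the transport would happily move as bytes.
documents = {
    "memex": "Bush imagines associative trails through a library...",
    "augment": "Engelbart frames the augmentation of human intellect...",
    "dynabook": "Kay and Goldberg describe personal dynamic media...",
}

# The Web's move: links laid over the data, each one an idea about
# how one document bears on another.
links = {
    "memex": ["augment"],
    "augment": ["dynabook"],
    "dynabook": ["memex"],
}

def trail(start: str, hops: int) -> list[str]:
    """Follow first links outward from a document: an associative trail."""
    path = [start]
    for _ in range(hops):
        path.append(links[path[-1]][0])
    return path

# The data never changed; the links make it navigable as thought.
print(trail("memex", 3))  # ['memex', 'augment', 'dynabook', 'memex']
```

Remove the `links` dictionary and the documents remain exactly what they were, a pile of transmissible data; add it back and a reader can move through them as a line of thought.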

The Internet transmits information. The Web enables (stimulates, encourages) a set of connections that, from the first link to the enormous set of links we now experience, symbolize ideas about relationship.

The Internet permits the pre-existing connectomes within each mind and among many minds working together to pass their nerve impulses freely along a meta-set of data connections, a network of networks, an Internetwork. The Internet is a protocol and a foundation for the data transmission that enables communication considered as information transmission.

But this is only the beginning, an open, permissive, and thus powerful light-speed beginning. The next advance occurs when information transmission can be made into a foundation for sharing not just perception but experience, for sharing not just neural connections but the experience of cognition that emerges within each mind. And that level of sharing means not just sharing information but empowering and stimulating new ways of creating and sharing meaningful structures of information. (More Engelbart here, obviously.) The link is not merely a link, but a concept that enacts itself–as concepts do when we build them, and build on them.

To sum up:

The Internet is like sensation or at most perception.

(A crucial first step, and we could have gotten a network that allowed us to look at only a few things in a few ways, a walled garden a la Facebook. Instead we got something open and permissive, like a neural network of small pieces loosely joined whose emergent power emerged from the possibility of connection, not from strict specialization or over-particular design. More like cells and atoms, in other words.)

The World Wide Web is like perception leading to thinking.

(It’s like making concepts. Here Vannevar Bush missed an opportunity that we’d need a Doug Engelbart to explore. What Bush described as “associative trails” are not a mere search history. They are links, yes, but links that reveal conceptual frameworks, that symbolize conceptual frameworks, that stimulate conceptual frameworks. They are not merely a scaffolding–though to be fair, Bush does describe the scaffolding in rich ways that probably do rise to the level of what I’m talking about here. The links are fundamentally social both in the intracranial sense–the connectome in my head–and the intercranial sense–built out of the social experiment we call civilization, and returning to it as another layer of invention and potential.)

The foundational commitment in both the Internet and the World Wide Web is the same: both are built as “open, permissive” structures (to use Naughton’s words). These structures are not unlike the distributed (neuroplastic) design of the brain itself, one that, as it happens, permits all the higher orders of cognition to emerge, higher orders built of “adjacent possibles” and “liquid networks” that in turn enable even higher orders of cognition to emerge. From this open, permissive, distributed structure emerges our distinctiveness as a species. And our links within the World Wide Web enact this emergence, represent this emergence, and thus stimulate further emergent phenomena as we create and share even more powerfully demonstrated ideas about shared cognition.

The Internet is like sensation. The World Wide Web is like thinking.

Or:

The Internet transmits data of all kinds: text, images, sounds, moving pictures, etc. The World Wide Web is a newly powerful word (or medium of symbolic representation, or language) that allows us to imagine and create newly powerful n-dimensional representations of the n-dimensional possibilities of “coining words” (making and realizing representations) together.

And:

A foundational commitment to an open, permissive architecture of creation and sharing enables the next layer (species, experience) of complexity and wonder and curiosity to emerge. This open, permissive architecture enables both the cognoplasticity of individual minds and the shared thinking and building that enables the macro-cognoplasticity of civilization.

There’s a fractal self-similarity involved that makes it difficult to tell Internet from Web, just as it’s sometimes hard to tell where I end and where you, or my history, or my friends, or my reading, begin. (Bakhtin’s “Speech Genres” maps these complexities most wonderfully–definitely worth extending your cognoplasticity in that direction, dear reader, with Professor Martin Irvine’s fine guide as a beginning.) But the difference is there, and it is vital. I suspect the problem is that the difference is not well conceptualized because the conceptual framework rarely rises beyond, or in a different direction from, the technical distinctions. But then technical distinctions are rarely explored in ways that reveal the conceptual frameworks they represent and stimulate–hence Naughton’s frustration as well as the importance of his observations.

Now let’s connect these ideas to Bruner and his ideas in Toward a Theory of Instruction, ideas that influenced Alan Kay and other learning researchers who helped to envision and build the personal, interactive, networked computing environment we now live within with varying degrees of openness and permissiveness.

In Toward a Theory of Instruction, Bruner distinguishes three levels of communicating (and thus three paths to learning):

1. The enactive: we communicate by doing something physically representational in view of others. If we want water, we mime the action of drinking or lapping up water, and do so in the presence of others whom we believe might relieve our thirst.

2. The iconic: we communicate by pointing to something that materially represents at one remove, while still being physically connected to, the thing we mean or seek to draw attention to. Instead of miming the act of drinking, we might point to a cup or a water fountain, perhaps making a noise of some kind to indicate the degree of urgency we feel. This level is considerably more advanced than the first level because it entails a more sophisticated “theory of other minds,” a belief (supported by learning in a social context) that we can communicate shared experience directly through a shared locus of attention that does not directly connect to our physical bodies. To point to a cup indicates an experience of shared experience. To mime drinking almost gets there, but one might do this in one’s sleep as one dreams of drinking water. The enactive doesn’t necessarily indicate a theory of other minds–though miming drinking in the presence of someone whom one believes to be paying attention may approach the iconic and cross over to it, as when someone mimes drinking from a cup.

3. The conceptual (Bruner calls this “the symbolic,” but since it’s easy to confuse “symbol” and “icon,” I’ll use “conceptual” most of the time): we communicate by means of a set of shared concepts or abstractions. Here we don’t mime drinking, and we don’t point to a cup. We speak or write, “I am thirsty.” This is a wild and crazy thing, no? A set of squeaks and grunts. A set of ink marks (or pixel shadings). Words. Every one of those three words “I am thirsty” enacts, represents, creates, and communicates a state of enormous cognitive complexity that’s hidden from us because of our mastery. The familiarity cloaks the miracle. You can’t drink the word “water,” but behold, the word may bring you what you desire, or cause you to help another human being. (Obviously I’m thinking of Hofstadter here as well, and I can recommend Fluid Concepts and Creative Analogies (with profound thanks to Jon Udell), I Am a Strange Loop, and for a rapid overview, his talk at Stanford on “Analogy as the Core of Cognition.” It’s all “metaphors we live by.”)

I think most education in our schools pretends to get to the conceptual but in fact stops at the iconic or perhaps even the enactive level. Pointing pointing pointing. Proctoring proctoring proctoring, the student always in the instructor’s presence. See-do, see-do, with “critical thinking” at a level of “see-do in the sophisticated complex way I your teacher have already imagined for you, and pointed to for you, as my expertise permits me exhaustively to define excellence for your seeing and doing.” A closed and impermissive architecture mediated through language, but not really conceptual and sometimes hardly even iconic–because it doesn’t support or represent emergent phenomena, what Bruner calls problem-finding:

Children, of course, will try to solve problems if they recognize them as such. But they are not often either predisposed to or skillful in problem finding, in recognizing the hidden conjectural feature in tasks set them…. Children, like adults, need reassurance that it is all right to entertain and express highly subjective ideas, to treat a task as a problem where you invent an answer rather than finding one out there in the book or on the blackboard. (157-158)

Like Facebook, our schools and the classrooms and curricula they provide form a walled garden full of “finding” by merely clicking on icons (including the face of the teacher, which when clicked upon may yield “what the teacher wants”), partly for administrative convenience, partly for administered intellectuality that hides our own conjectures (lest emergent conceptual frameworks undermine the power, authority, and wealth of the old architects), partly because it’s a good business model. Ah, the business model. Tim Berners-Lee put the Web in the public domain, and what kind of a business model is that? Unless one considers it an investment made to benefit the species–a mission we say we follow in higher ed, of course.

Did I say that remaining at the enactive or at best the iconic while feigning the conceptual is a “good” business model? I meant a great business model, especially if one enjoys exploiting others without leaving visible marks, since it’s education that gives us the constrictive framework of pointing that enables, encourages, and stimulates the narrow ways we are able to imagine thinking about business models. Or even, at the level of curriculum, to imagine thinking about thinking about business models. I’ll drop the sarcasm and say that’s really bad news. If education fails us because its “great business model” and massively convenient administrative structures cannot or will not allow its participants to work at a truly conceptual level, a truly problem-finding level where the lowest and highest arenas of problem-finding are centrally concerned with learning itself, then we are trapped. There will be no portals. (The cake is a lie.)

So back to the Internet, now mapped along Bruner’s levels. The Internet permits enactive communication. Data transfer in an open and permissive network-of-networks, like sensation in complexly open and permissive internal neural networks, permits a kind of data-telepresence that supports all sorts of miming-based communication.

The Web appears to be a graphical user interface for the Internet, but this is a dangerous misperception. Clicking on images (or even links, for that matter) is really no more than Bruner’s enactive level of communication. The Web is an environment for linking, which means it openly and permissively enables (encourages, stimulates), with each and every act and experience of linking and linked, an iconic level of communication that contains within it the potential of a powerful experience of the abstract and conceptual, an appeal (implicitly or explicitly) to shared experience at a symbolic level that depends on a more complex idea of other minds than the merely enactive or iconic levels of communication do.

People conflate school and education the way people conflate the Internet and the World Wide Web. Education appears to be synonymous with school, which is designed to be an environment for the focused and controlled delivery of content. This is a dangerous misperception that’s similar to the dangerous misperception that says the Internet and the World Wide Web are the same thing. I think the two misperceptions are related. One may cause the other. They may cause each other in a vicious circle. Hard to say. But the danger is the same. And Facebook is like Facebook because that’s the way we like to make a world, or have a world made for us, and school is school because we need to convince ourselves that any other way is stupid, wrong, or crazy.

At any given moment, however, there are people who, like Puddleglum in The Silver Chair, insist that there’s a better and different and more open and freer world above and outside the walls of the cave. And at some lucky moments, those people get to build something that reflects that belief. Something we can build on, too, and not simply react within.

Yes, this is like moving from Flatland into a three dimensional space. We face the same difficulty, too: how to imagine a dimension that we cannot explain in terms of the data of immediate (two-dimensional) perception? Thankfully, the two-dimensional world of Flatland has a word for “dimension,” which some Flatland Folk might become curious about. And once that curiosity is awakened, you never know, some of those folk may ask themselves whether the abstraction of “dimension” might be a portal into something real that they simply cannot experience except through that portal of abstraction.

Isn’t that something like how language works? If you think about it, doesn’t language itself seem to open up n-dimensional possibilities that lead us to co-create new realities out of nothing but thought itself? Like the poet, lunatic, and lover “of imagination all compact,” as Shakespeare has a typically dense administrator pronounce, the result is that we “give to airy nothing / A local habitation and a name” (A Midsummer Night’s Dream V.i.16-17). The dense administrator, the mighty King Theseus himself, imagines this ability to be a bug, not a feature. Poor King Theseus! Luckily he married up when he found Hippolyta, who responds to her husband’s pontification with practical visionary good sense:

But all the story of the night told over,
And all their minds transfigured so together,
More witnesseth than fancy’s [imagination’s] images
And grows to something of great constancy;
But howsoever, strange and admirable. (23-27)

Minds “transfigured so together.” Too many linkings to be anything less than constant, strange, and admirable. A problem-finding education.

In Computer Lib/Dream Machines, Ted Nelson writes,

What few people realize is that big pictures can be conveyed in more powerful ways than they know. The reason they don’t know it is that they see the content in the media, and not how the content is being gotten across to them–that in fact they have been given very big pictures indeed, but don’t know it. (I take this point to be the Nickel-Iron Core of McLuhanism.)

Brilliant, but there’s more at the core: the big-picture-conveyance is not just delivery but itself a new symbol, a symbol of a specific instance (and a generalizable example) of the possibility of big-picture-conveyance. There is information about information itself, and the possibilities of conveying and sharing experience, being conveyed and shared in that big-picture-instance. Nelson’s word “conveyed” is still too close to “delivered.” McLuhan’s insight is still deeper, that what is “delivered” is always a metastatement about the conditions and means of conveyance considered largely. To put it another way, symbols do not only contain and transmit meaning. Symbols also generate meaning, the way “link” is both a noun and a verb. A medium not only of figuration, but a figure and medium of transfiguration. Our “minds transfigured so together.”

As a species, among our many failings, we also have the wonderful endowment of brains that are bigger on the inside than they are on the outside. “Further up, and further in!” Truth, in-deed. A blogging initiative like the one going at Virginia Tech right now at the Honors Residential College is an attempt to enable, stimulate, model, and encourage intra- and intercranial cognoplasticity, the experience of “bigger on the inside than on the outside,” thus extending the inside (of a small group selected for academic ability) to the outside (which must exist in fruitfully reciprocal relationship lest the experience be merely elitism, defensive, or mutually destructive “othering”). But there’s no way to do this in our newly mediated environment without asking people to narrate, curate, and share on the open web. Until one speaks a language, a word is only a sound (an enactment). Until one reads a language, a word is only a picture (an icon). Until one writes in a language (or medium), one cannot imagine or experience or help build the portal to the thinking-together, the macro-cognoplasticity, the networked transcontextualism, the planetary double-take, that represents the next dimension we need (and desire and dread, too). Our goal is to become first-class peers for each other. Conceptacular colleagues, not just rowers in someone else’s galley.

And here I conclude for now. We have, if we choose, the ability to maintain the open, permissive architecture of the Internet and the open, permissive architecture of the Web that resides within and emerges from the Internet. If we choose to preserve the open, permissive architecture we have been lucky enough to build and lucky enough not to wreck quite yet, we may move to the third level of communication Bruner notes: the conceptual, abstract, symbolic level. For the Web is a network of links, but to call it that is only to approach the realization of the next level of understanding, the mode of conceptual communication and enactment (yes, there is recursion here) that Bruner terms the symbolic. The World Wide Web is not simply a collection of links but an enactment of, an icon of, and an idea about (a symbol of) the complexly open and permissive activity we call linking, out of which we build together the linked and linking and open-to-linking realities of our next stage of cognoplasticity as a species.

This must also be the figure an education makes. Education is the technology that amplifies and augments the natural process of learning. Education brings Flatlanders to consider “dimension” not just as an experience one enacts or points to (a line can go this way, or that; see; now let’s test you to see if you remember that) but as a symbol that can be abstracted from experience and thus (paradoxically) lead to greater, more complex, more possibility-filled and possibility-fueled experience. To use Hofstadter’s language, education must partake of, and stimulate, and empower, the experience and emergence and creation of strange “level-crossing loops.”

The Internet permits, and the World Wide Web enacts and pictures and symbolizes, that experience and that emergence and that creation, those possibilities. (And it all recurs and is recursive, each level leading from and to each level–but that’s a level I can’t get to in this post, except momentarily here.) Education must do the same.

It is no accident that computers and cognition and communication and education have been so intertwingled in the history of our digital age. From Charles Babbage, Ada Lovelace, and Alan Turing onward, all along the watchtower where the resonant frequencies are transmitted and received, a “wild surmise” about learning within and among the amphitheatres and launch-pads of shared cognition has accompanied each development in the unfolding n-dimensional narrative of unfolding n-dimensional possibilities and awakenings. It’s exhilarating in that tower, and exhausting as one strains to see distant shifting shapes. It’s cold, especially in the darkest moments. Or so I imagine.

[Image: All Along the Watchtower]

[Correction: the Naughton column is from The Guardian. Naughton called it The Observer in his Memex 1.1 post, for reasons I don’t yet understand. At any rate, I’ve made the correction in paragraph one, above.]