Tuesday, September 24, 2013

I'm J.K. Rowling ... and so's my wife!

It's an infamous scene in Monty Python's Life of Brian (and a sly parody of Kubrick's Spartacus): a centurion arrives at the place of crucifixion with orders to release "Brian" -- but he has just one problem: which one of these poor sods hanging from crosses is Brian? He takes the practical route: "Where's Brian of Nazareth?" he calls out, "I have an order for his release!" And then, one by one, everyone (except of course the real Brian of Nazareth) starts calling out "I'm Brian!" "No, I'm Brian!" -- after which one especially bold fellow calls out "I'm Brian! And so's my wife!"

Needless to add, the real Brian isn't rescued. And that's how I felt about the recent brouhaha over J.K. Rowling's try at another, pseudonymous authorial life as "Robert Galbraith." It's certainly her right to have given it a try -- if I were as well-known for one series of books as she is, I can imagine wanting to escape and re-invent myself. And, as she explained when the whole thing was uncovered, she'd done it very discreetly -- so much so that The Cuckoo's Calling was given the usual treatment accorded first-time novelists whose book hasn't been singled out for a big campaign (that would be most of them): some review copies were sent out, the proper ISBNs and ASINs were sent to major retailers, along with a few copies of the book. It garnered some good reviews, too -- but, just like others of its seeming kind, it sold around 1,000 copies. Tell me about it -- I've been there.

Which is perfectly fine, I suppose, except for what happened once Rowling's authorship was revealed -- the book shot to #1 on the bestseller lists, and the publishers hastened to print the hundreds of thousands of copies they now knew it would sell. As James Stewart commented in the New York Times, it was not just a depressing sign of how little effort publishers put into promoting most new novels, but of how difficult it is to promote a book at all. One can Tweet, and blog, and Tumble all one wants; one can give readings to as many near-empty bookstores as one can stand; one can whisper into as many grapevines as one wants -- but there's no way to make sure a new book, however good it may be, escapes being swept away in a greyish fog of indifference. In one especially sad consequence of the success of the Harry Potter books, Bloomsbury -- which went from tiny publisher to UK giant on the sales of Rowling's books -- no longer even has a slush-pile, which was where the first Potter book's manuscript was rescued from obscurity.

But maybe there is a way. After all, we don't know whether this is Rowling's first outing in disguise. She might well have written others, and who knows under how many names. In fact, it seems to me that she might possibly have written my novel, and perhaps those of other lesser-known writers as well. How could one prove otherwise, in an age when denial is the strongest sign of the truth of what's denied?

So I'll say it now: I'm not Russell Potter (wasn't that name a bit of a give-away?) -- I'm actually  J.K. Rowling.

And I'd encourage every other writer I know to say the same thing. Go ahead, prove us wrong! Conduct a computer analysis of our writing habits, track down the falsified contracts, call the publishers' representatives.  In the meantime, while all that's going on, we'll be enjoying selling more books in a day than we have in the past five years.

But seriously: I feel for J.K. Rowling. It's been harder for her to publish something under a pseudonym than it was for Prince Hal to pass unnoticed among his troops at Agincourt. But if she really wants to earn some respect from the actual "Robert Galbraiths" of the world, she should tell her publishers to re-open that slush pile. Heck, she might try reading a few manuscripts herself.

Thursday, August 1, 2013

Literature is bad for you

With attacks on the humanities -- and on my field of English in particular -- coming from every direction, I feel that it's time to consider some new and perhaps radical strategies to promote the reading of literature in these days of dwindling attention spans. What was the last straw? Was it when a student in one of my classes condemned Edgar Allan Poe's "A Descent into the Maelstrom" as "too detailed"? Was it when a student in the midst of a discussion of Shirley Jackson's The Haunting of Hill House asked whether its author was mentally ill? Or perhaps it was the day that someone asked whether, since they'd already read Hamlet in eleventh grade, they'd have to read it "all over again" in my class?

We who teach literature labor under a false premise: that if a teacher or college professor tells young people that a particular book or author is "good" for them, they'll a) take our word for it; and b) read it. But what if our saying that it's "good" is precisely the problem? After all, none of the propaganda about the virtues of spinach, whole wheat bread, or tofu has made any of those foods more popular with teenagers. And what sort of honor is it, anyway, to have one's work declared to be a "classic"? I think with sorrow on the moment when Groucho Marx, who was having dinner with the poet T.S. Eliot, boasted that his daughter was studying "The Waste Land" at Beverly High. "I'm sorry to hear that," the great poet replied; "I have no wish to become compulsory reading."

And herein lies the dark essence of my plan: instead of saying that reading literature is good for you, I think we should start telling people it's bad for them. After all, it has some serious side-effects: for a time, at least, readers believe in, and worry about, the travails of completely non-existent people. Their hearts beat faster, they break out in a sweat, they turn the pages feverishly -- all in the quest to discover the fate of a woman or a man who has never lived. No less a light than St. Augustine condemned Virgil's Æneid for this reason: why should he be made to weep for Dido, a woman who was little more than a chimera created by a whiff of words, when his own immortal soul, not yet confessed unto Christ, was in so much greater and more profound peril?

The Irish novelist Flann O'Brien put it succinctly: a novel, like a drug, is self-administered in private, and its key effect is to convince the user of the reality of nonexistent beings. So why not regulate, or better yet, ban it, as we do other hallucinogenic drugs? Asking kids to read novels over their summer vacations is little better than popping LSD in their lunch-bags: the results will be much the same. Warning labels, at least, seem to be in order. Searches of backpacks should be conducted at every school, and every one of these illusionary text-drugs confiscated. Libraries? Let them, like those of Dickens's Mr. Gradgrind, contain nothing but facts. A book which consists of facts, after all, does not deceive its readers into believing in the reality of imaginary people or places. Teaching literature? Eliminated. Let those who still wish to read novels be obliged to purchase them illegally, on street-corners, and hide them within paper bags until they reach the safety of their homes. And as for e-books, well -- aren't e-cigarettes, as much as the older, match-lit variety, little more than drug delivery vehicles? Fortunately, Amazon.com can easily delete all fiction from the Kindle readers of offenders, refunding the cost so that users can more wisely spend their funds on non-fiction and works of reference.

And then, finally, I think you'd get the next generation interested in reading again. The thrill of the forbidden, the discovery of books as contraband, and the risk of arrest would make books cool. People would brag about scoring a gram of Poe down on the corner, or dropping a little Shakespeare in some dark alley. Bootleg editions of Woolf and Cather would be printed with plain brown covers, and once more, you'd have to smuggle copies of Joyce's Ulysses across the border inside boxes labelled "sanitary towels." Underground book groups would form, and meet in secret, shifting locations, the address sent out via encrypted e-mails. Oh, sure, there'd be some places -- Amsterdam, I suppose -- that would tolerate fiction, setting up "reader parks" where you could openly turn the pages of Kerouac or Kesey. But we'd all know that would never work; fiction is just a gateway drug, and the only solution is one of zero tolerance. For, as the town manager in Terry Gilliam's Munchausen notes, we can't have people escaping at a time like this.

Thursday, July 11, 2013

What's a Book?

The late great Maurice Sendak, irascible and sharp as ever in his last interviews, had this to say of e-books: "I hate them. It's like making believe there's another kind of sex. There isn't another kind of sex. There isn't another kind of book! A book is a book is a book." It's a hard quote to improve on, but it's also worth considering, really: what is a book? And is an e-book really a book at all?

I'd say that, to be a book, whether material or virtual, there are a few basic qualifications -- I can think of six off the top of my head:

• A book must contain readable text.
• It must be portable -- the ability to take a book anywhere is one of its key strengths.
• The text must be persistent -- that is, it should still be there if you go away and come back again later.
• You should be able to do what you want with it: store it, loan it, give it away, bequeath it, and (yes) destroy it if you have a mind to.
• It should be able to be annotated, written in, drawn in, dog-eared or place-marked. Call it "interactivity" if you like.
• It shouldn't vanish unexpectedly. And, if undisturbed, it should last for years.

So is a typical e-book a book by these measures? In most cases, no. It meets the first two criteria, yes -- but is it persistent? Some e-books lent by libraries expire after a certain date and can no longer be read; some e-books can only be read in certain places (as with Barnes & Noble's 'share-in-store' café feature) -- that's not real persistence. The fourth qualification, though, is the biggest stumbling block: almost no commercial e-book format allows lending or giving of any kind. If, in a lifetime, you amass a library of physical books on which you spend tens of thousands of dollars, you can give it to a friend, leave it to your kids, or donate it to a library. If you did the same with e-books, you'd have nothing -- your death would be the death of every e-book you'd bought.

Annotation? Some platforms allow this, and there's even one model in which one can see other people's annotations -- wow, just like a book! There are "signed" e-books never touched by an author's hand. But if the lifespan of an e-book is uncertain, the duration of these user-added annotations is even more questionable.

And disappearing? Amazon.com famously deleted copies of Orwell's Animal Farm from its users' Kindles, kindly crediting them 99 cents, after the company was informed by the Orwell estate that the book was still in copyright -- talk about Orwellian. And there's nothing to say Amazon or some other vendor couldn't do it again. What's more, if you decided not to be an Amazon customer, or not to replace a broken Kindle, or if Kindle were to be replaced by some hardware or software that wasn't backwards-compatible with older e-book formats, then your books would have vanished for you.

Lastly, what would one make of some archaeologist of the far future, coming upon a buried e-library? If Amazon didn't exist in the future, there'd be no way to recover these battered e-readers and tablets -- their data was mostly stored on a cloud that once floated in the sky of a lost civilization. And, like clouds, there'd be no getting it back.

So I suggest a label, or some sort of certification: Only e-books and readers that met the six criteria above would be certified as genuine "books" -- everything else would have to use some other word: text-sacks, wordblobs, readoids, or libri-fizzles. Anything but "books."


Tuesday, June 25, 2013

HG Wells, Television, and "Things to Come"

I'm an enormous fan of the folks at the Criterion Collection, and keenly awaited their restored DVD of HG Wells and Alexander Korda's 1936 film Things to Come. And the disc did not disappoint; the restoration is brilliant, and the bonus features and commentaries are all illuminating as always. But they did miss one point, and I think it's a point worth making: Wells included television in his futuristic vision, and there was a very direct connection between his ideas and the work of the Scottish engineer John Logie Baird, who had invented and demonstrated the first practical system of television a decade before the film's release. There's ample evidence of the connection, and it would have made perfect commentary for the moment at which the rebel artist "Theotocopulos" beams his image around the ultramodern city. Indeed, the word that Wells consistently used in his original treatment/script was "televisor," and that was the very term Baird used for his apparatus.

Baird's original design used an electro-mechanical interface: a spinning disc with a spiral of precisely-placed holes served both as camera and as receiver. The camera/transmitter placed one or more photocells in front of the frame, and the receiver backlit an identically-proportioned disc with a small neon bulb which replicated the pulses recorded by the photocells. This "disc" system was used in the earliest public broadcasts of television in the early 1930's, during which home viewers -- "lookers-in" as they were called -- could tune to the high end of the radio band and receive remarkably clear pictures in the tiny window of their sets. The pictures were only 30 lines in resolution, but image captures and a few surviving off-air recordings show distinct and recognizable images of human faces. The aspect ratio of these broadcasts was tall and narrow -- 3:7 -- designed to capture the presenter's upper body, a framing one of the BBC's engineers wryly called "head and shoulders."
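For anyone curious about the geometry, here is a minimal Python sketch of how such a disc lays out its holes: one hole per scanned line, each hole stepped slightly inward and spaced evenly around the circumference, so that successive holes sweep adjacent strips of the frame as the disc spins. The 30 lines and the 3:7 frame come from the broadcasts described above; the disc radius and aperture height are assumed values for illustration only, not Baird's actual specifications.

# An illustrative Nipkow-disc layout: one hole per scanned line, each hole
# offset in angle and stepped inward in radius so that successive holes
# sweep adjacent strips of the frame. Dimensions marked "assumed" are
# placeholders, not historical values.

LINES = 30                   # one hole per scanned line, as in the 30-line broadcasts
ASPECT_W, ASPECT_H = 3, 7    # the tall, narrow "head and shoulders" frame
DISC_RADIUS_MM = 200.0       # assumed outer radius of the disc
FRAME_HEIGHT_MM = 70.0       # assumed height of the scanning aperture

FRAME_WIDTH_MM = FRAME_HEIGHT_MM * ASPECT_W / ASPECT_H
LINE_PITCH_MM = FRAME_WIDTH_MM / LINES   # radial step between neighbouring holes

def hole_positions():
    """Return (radius_mm, angle_deg) for each hole on the spiral."""
    return [(DISC_RADIUS_MM - i * LINE_PITCH_MM, i * 360.0 / LINES)
            for i in range(LINES)]

if __name__ == "__main__":
    for r, a in hole_positions():
        print(f"hole at r = {r:6.2f} mm, angle = {a:5.1f} deg")
    print(f"frame: {FRAME_WIDTH_MM:.1f} mm wide x {FRAME_HEIGHT_MM:.1f} mm tall, {LINES} lines")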

And what we see in Things to Come is indeed a very tall and narrow aspect ratio. Wells's original treatment also refers on several occasions to the televisor's "disc," though in the final production design no discs are seen; instead of a close-up of a disc with the outline of Cabal's head and shoulders, we're shown what looks like a filmstrip moving along a table. The screens in the film are of various sizes, including a small one in a translucent frame mounted on wheels, which we see Cabal pushing aside on his desk, disgusted with what he "sees" (a clever shot which eliminated the need to show the screen image). Showing the same image on devices of many sizes, and at several locations, certainly prefigures our age of desktops, laptops, tablets, and smartphones -- a connection that goes unremarked by the disc's commentator, David Kalat.

And interestingly, if Wells was fascinated by Baird's invention, Baird had grown up as an avid fan of Wells, whom he jokingly referred to as his 'demigod.' By chance, in 1931, both were passengers on the US-bound Aquitania, and Baird was able to finally meet his hero, though by his account the conversation was awkward, and did not touch on television; the snapshot above shows their meeting. It's hard to imagine a more resonant pairing -- the inventor of television with the pre-eminent writer of science fiction -- and yet what comes through most clearly is that both were, as Baird remarked, 'poor vulgar creatures' -- mere mortals who, as things turned out, would not live to see the ultimate forms the technologies they imagined would take.

Sunday, April 21, 2013

Tom Lehrer

No prophet is accepted in his own country, it's said -- and yet oddly enough, nearly fifty years ago there once was a man by the name of Tom Lehrer -- erstwhile Harvard mathematics professor, wry songster, and television parodist -- who managed to make music out of some of the most irksome and intractable issues of his day. Racism, pollution, Jim Crow, pornography, the nuclear arms race, the Second Vatican Council, and even World War III were all fodder for his astonishing show tunes, and there often seemed to be scarcely any line of propriety he wouldn't cross. And yet, since he sang every one with such verve, he managed to make nearly everyone laugh at their own folly, rather than throw rotten vegetables at the stage. By the late 1960's, his songs were the stuff of legend, exchanged by high school and college students like a sort of secret code: do you know "The Vatican Rag"? "Poisoning Pigeons in the Park?" "Smut?" At my high school, all the cool kids (that is, all the geeks, since I went to an alternative hippie Quaker school) had at least one of his songs memorized.

It was amazing to me and my friends in the late '70's and early '80's to think that many of these songs were first heard on network television in 1964 and 1965 on a program called That Was the Week that Was. You'll have to remember, this was a long, long, long time before Stephen Colbert.

Lehrer retired from comic songstering for more than 25 years, re-emerging briefly at a 1998 concert, where he was introduced by his old friend Stephen Sondheim and performed a lovely redux of his famous anthem "Poisoning Pigeons in the Park."

Since then, he seems to have re-retired, though I have it on good authority that he still lives in -- or at least is occasionally seen near -- Cambridge, Massachusetts. Asked in an interview some years ago why he wasn't writing songs satirizing our present moment, he observed that these days,  "everything is so weird in politics that it's very hard to be funny about it." True enough.


Tuesday, April 16, 2013

The Problem with Evil

The word "evil" seems to be trending once more. It's a harsh word, a powerful word, a sweeping word. There's no way to dilute it or qualify it; a person or a deed can't be more or less evil, just a little evil, moderately evil -- it's all or nothing. We reach for it in the same way we reach for the emergency brake switch on a train -- as a last resort, knowing that pulling that switch will be an irrevocable act.

"Evil" works for us when nothing else will. Like a pair of asbestos mitts, it enables us to handle things we could otherwise not bear to touch. "Evil" enables us to categorize and keep safe distance from people who would otherwise almost crush us with their horror, their unfathomability: Hitler, Stalin, Pol Pot, Idi Amin. And it gives us unlimited power to denounce them, to condemn them, to convince ourselves that never, never, never would we have anything to do with them. Those who are "evil" are consigned to the world of devils and demons, the underworld of those whose motivations, personalities, influences, or thoughts no longer matter. How could they? -- they're evil.

But "evil" also blinds us. It convinces us that, while some judgments are just a matter of perspective or cultural context, others are absolute, and apply universally. And yet when, in re-creations such as that in the film Argo, we see the United States denounced in billboards as "The Great Satan," we smirk and think how ridiculous that is: "What, us, Satan?"

And this is the essential problem. In the relentless march of cultural and historical amnesia that our modern media-saturated world embodies, "evil" is the ultimate memory zapper. It convinces us that all we need to know about someone is that they were "evil" -- no sense in learning anything more about their lives or motivations. Case closed. The fact that so many of the people we write off in this manner were, in their country and in their heyday, beloved by millions and widely admired, strikes us as irrelevant. The fact that so many people who ended up being "evil" started out being "good" seems merely an inconvenient detail. When we see women holding up their babies for Hitler to kiss, or families weeping at the funeral of Kim Jong-il, we think to ourselves, what foolish people! How were they so hoodwinked?

But perhaps it is we who wear the blinders. "Evil" works so well in retrospect; it solves for us the problem of history. But if we really want to prevent future Hitlers and Stalins from arising, it's absolutely useless. No one steps forward and calls themselves evil -- to do that would be almost comic, like Austin Powers or Aleister Crowley. No, the people whom we may, eventually, find to be evil will always be people who arrived to meet one human wish or another: the wish for stability, the wish for prosperity, the wish for revenge, the wish for power. They will begin by being very attractive people indeed, so attractive that we don't see them in time to stop them -- or ourselves.

Friday, April 5, 2013

Robo-grading

"Essay Grading Software Offers Professors a Break," read the headline in the New York Times. With the usual fanfare according some new and seemingly innovative educational development, the article described the new system developed by edX, a nonprofit corporation set up and sponsored by a group of élite colleges and universities, and which was (until now) best-known for developing software for, and hosting, MOOCs.  The president of edX, Dr. Anant Agarwal, is an electrical engineer (of course!) and has overseen this new system with the goal of 'instant feedback' in mind.  After all, who wants to wait for a professor to diligently grade an essay by hand, a process which can take -- if you have 25-30 students in a class, as I often do -- a week at least, and sometimes more -- especially when you can get it "graded"instantly online, and instantly work to "improve" it. All of which raises two questions: 1) What is it that professors would be "freed" to do if they didn't do one of the most essential tasks of teaching? -- and 2) How can a computer possibly give meaningful feedback on an essay?

But it's not the first time. Years ago, there was a little test which would, like magic, determine the "readability" and grade level of an essay; it was called the Flesch-Kincaid Grade Level test. It was based on two things that roughly -- very roughly -- correlated with readability and sophistication: the length of sentences (surely longer ones were more challenging to read) and the length of words (given that many technical and specialized words of more than two syllables are derived from Latin or Greek, this again offered a sort of metric). Of course one could write none but brief words -- low grade level! Or someone preparing to compose a disquisition upon some substantive matter of polysyllabic significance could readily simulate the species of composition that would elicit a higher plateau of evaluative achievement. Not surprisingly, the Flesch-Kincaid test was originally developed for the US Navy, but it took on a new life when Microsoft Word included its metrics as an add-on to its built-in spelling and grammar checker. By its metrics, the lowest-rated texts would be those packed with monosyllables, such as Dr. Seuss's Green Eggs and Ham, while a long-winded theological or legal treatise loaded with multi-syllable words would score high.
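The arithmetic behind it is simple enough to sketch in a few lines of Python. The grade-level formula below is the standard published one; the syllable counter, though, is a crude vowel-run heuristic of my own for illustration, where a serious implementation would use a pronunciation dictionary.

import re

# A rough sketch of the Flesch-Kincaid Grade Level calculation: longer
# sentences and more syllable-heavy words push the score up.

def count_syllables(word):
    # Approximate syllables as runs of vowels -- very rough.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade_level(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / sentences)
            + 11.8 * (syllables / len(words))
            - 15.59)

print(fk_grade_level("I do not like green eggs and ham. I do not like them, Sam-I-Am."))
print(fk_grade_level("Someone preparing to compose a disquisition upon some "
                     "substantive matter of polysyllabic significance could "
                     "readily simulate sophistication."))

Run it, and the Seuss-style sentences land near the bottom of the scale, while the polysyllabic disquisition scores far above any real grade level -- which is exactly the game the formula invites.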

So how does edX's system work? Not surprisingly, it takes a page from the neural network systems developed years ago at MIT to handle complex tasks like parsing language or manipulating a robot hand. The idea of a neural network is that it programs itself by repeating a task over and over, getting feedback as to the success of each result.  When the feedback is good, the network "remembers" that behavior, and prioritizes whatever it did to achieve it; when feedback is bad, routines are dropped, or changed and tried again. It's not unlike the way babies learn simple motor skills.

And so, in order to judge whether an essay is "good," the edX system asks for 100 essays already graded by professors. It internalizes all the patterns it can find in the essays marked as "good" or "bad," and then tests itself by applying these to additional papers; if its results match those of the human graders, it regards that outcome as "good" and works to replicate it.  Of course, such a system can only possibly be as good as whatever the professors who use it think is good; it might well be that what is good at Ho-Ho-Kus Community College is not so good at Florida A&T or Cal Poly. And different assignments might demand different metrics, which might even vary over time; such a machine would need regular re-calibration.
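In spirit -- though certainly not in its particulars, which the article doesn't spell out -- this is ordinary supervised machine learning: turn each graded essay into features, fit a model to the human-assigned grades, and then predict grades for new essays. Here is a minimal sketch using off-the-shelf tools and made-up data, offered only to illustrate the paradigm, not to reconstruct edX's actual system.

# A minimal sketch of the "train on human-graded essays" idea, using
# TF-IDF features and logistic regression. This illustrates the general
# paradigm only -- it is not edX's system, whose features and models
# are not described in the article.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: essays already graded by professors. In
# practice the system would want something like the 100 graded samples
# mentioned above, per assignment.
graded_essays = [
    ("The novel's narrator is unreliable, which complicates our sympathy for her.", "A"),
    ("This book was good. I liked the part with the whale. It was good.", "C"),
]

texts = [essay for essay, grade in graded_essays]
grades = [grade for essay, grade in graded_essays]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, grades)

# The model can only echo whatever surface patterns distinguished the
# "A" pile from the "C" pile in its training set.
new_essay = "Melville's digressions are not padding but argument."
print(model.predict([new_essay])[0])

The limitation is visible even in a toy like this: the model rewards whatever surface patterns happened to separate the piles of graded essays it was trained on, which is precisely why its judgments can only ever be predictive of the human graders it imitates.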

But can such a computer program be said to really be evaluating these essays? No. It only works to be predictive of the grade that a human grader would tend to assign. And, with so-called "outliers" -- papers that are unusually good, or unusually bad -- its rate of error could be quite high. If we imagine a paper which breaks the usual rules of writing in order to obtain a certain effect, such a paper might indeed get very high marks from a human grader, but be flunked by a machine which believes there is no such thing as a good reason to violate a grammatical or structural rule.

So we're back to square one. If there were a large lecture where a standard sort of essay was expected, with very strict parameters, a program like this might be effective at matching its assessments to those of human evaluators. But this isn't how any college or university in fact teaches writing; in the best programs, the classes are small, the assignments varied and often have elements of creative writing, and the level of individual attention -- and variation -- is high.  Replacing professors in these classes with robo-graders would almost certainly result in much poorer learning.

And what are we freeing these professors up to do? What higher calling awaits those "freed" from grading essays? Recent surveys show that the average tenured or tenure-line professor in higher education today is teaching fewer classes than ever before; the figure was once over three courses per semester, and is now falling closer to two. Of course, at some smaller liberal-arts colleges, such as the one I teach at, our standard load is three; I myself teach four a semester, as well as four every summer -- twelve courses a year in all (hey, I've got kids in college myself, and I need the "extra" pay!). And somehow despite all that grading I've managed to write three books and dozens of articles. Meanwhile, at the big research universities, some professors get so much course relief that they teach as few as two courses a year -- over my career, I'll have taught more courses than six such professors combined.  So I don't think the argument holds that professors need to be "freed up" from anything, unless they're teaching multiple large lectures, in which case they doubtless have teaching assistants anyway.

So go ahead, robo-grade your papers. Give your lectures to a video camera and have everyone just watch your MOOC. At that point, you don't really need to be on campus anyway, so why not just take an extended vacation?  But if the parents who are laboring and borrowing to gather up the funds to pay the tuition that pays your salary start to feel that your vacation should be a permanent one, don't be surprised.

Friday, March 8, 2013

Return to Oz

A little later today, I'll be off to see the Wizard -- the latest one, as portrayed by James Franco -- without especially high hopes, as the film has been widely panned for all of the reasons -- unconvincing characters, jumbled plot, over-reliance on digital effects -- that I was most concerned about when I first heard of it last year.  Still, having seen every other surviving film version of L. Frank Baum's story (along with the musical Wicked), I feel a certain obligation to check in on my old favorite fairyland, if only to reflect on how it's changed.  Of course, no film will ever be able to compete with the glorious 1939 musical version, a film I have watched more than any other, starting in the 1960's when it was hosted on television every spring by Danny Kaye, and when -- since I didn't have a color set -- the "not in Kansas anymore" line wasn't about color, but about the slow greyscale panorama of plastic plants, concealed Munchkins, and the invisible, ethereal chorus (referred to in the screenplay as "optimistic voices"). It remains, for me, the most important film of my childhood, and one that I will always love more than any other.

But as critics have lined up to disparage this latest effort, they've been painting with a rather broad brush -- and in some cases, a roller -- as with Chris Heller's piece in today's Atlantic, where he laments the "Sad, Century-Long History of Terrible 'Wizard of Oz' Movies." And, though there sure have been a few stinkers along the way, as well as versions that, for some reason, I personally can't stand watching (The Wiz), there have also been some remarkable films.  L. Frank Baum himself staged his Oz tales as "Radio Plays," combining puppets, full-size actors, film, and lantern slides; he also licensed them to the Oz Film Company, whose efforts -- uneven at best -- can be seen on supplemental DVD's with most sets of the 1939 film as well as on YouTube.  A 1925 silent, featuring Larry Semon as the Scarecrow and Oliver Hardy (yes, that Oliver Hardy) as a rather rotund Tin Woodsman, is marginally better, and deserves remembering for its innovation of the "dream sequence" concept, in which the Kansas farmhands are played by the same actors as Dorothy's later friends in Oz.

The 1939 film, for all its wonders, was not an immediate hit; it wasn't until it started being shown on television in the late 1950's and early 1960's that it gained the wide audience of children who took it directly into their hearts; what other televised film can count among its fans both Denzel Washington and Salman Rushdie?

For Chris Heller, the greatness of the 1939 film forms a narrative bridge -- I'll resist calling it a rainbow -- from which to disparage every other film adaptation of Oz, even those I somehow suspect he hasn't actually seen. For, among the list of films he disparages, there is in fact one true gem -- an adaptation of Baum's Oz stories that is richer, darker, and more faithful to the original books than even the 1939 film: Walter Murch's 1985 Return to Oz.  Murch, for whom this remains, so far, his only turn in the director's chair, is perhaps best known for his work on sound design; in the years of difficulty Francis Ford Coppola spent editing Apocalypse Now, Murch was perhaps his closest collaborator. Murch got the Oz job at the recommendation of George Lucas, and kept it thanks to Lucas when Disney at one point wanted to replace him -- and thank goodness they didn't. Return to Oz is gloriously, richly, darkly, deeply true to the spirit of Oz, and Fairuza Balk in her film début is brilliant as Dorothy.

Heller makes fun of the frame narrative, in which Aunt Em and Uncle Henry, desperate to find a cure for their niece's delusional chatter about Oz, take her to an "electrical" physician for what appears to be some early form of shock therapy. It evokes the 1939 film's dream-residue frame narrative, and deepens it -- this film is about a child's determination not to condemn and reject her love of a place she believes in with all her heart. Her new companions -- a talking hen named Billina, a wind-up man known as Tik-Tok, and a pumpkin-headed mannikin named Jack -- are perfectly brought to life from the pages of Baum's original stories. The villains are Mombi (a conflation of two other wicked Baum witches) and the Nome King (one of Baum's most original creations, whose one great fear is eggs -- played in a career-best turn by Nicol Williamson). The ending is far more satisfying and emotionally resonant than that of the 1939 film -- for Fairuza Balk's Dorothy knows, as do we, that Oz is in fact a really, truly, live place.

I don't want to spoil the movie for those who haven't seen it -- but if somehow, like most people, you missed this film (when I saw it in a theatre in 1985, it was just me and the guy collecting for the Jimmy Fund), you simply must see it.  If Oz the Great and Powerful turns out to be as bad as some say it is, I can't think of a better antidote.

Wednesday, March 6, 2013

Massively Odious Online "Courses"

Columnist Thomas Friedman of the New York Times is the latest to add his voice to the chorus in support of the wonders and glories of MOOC's -- Massively Open Online Courses -- the lecture-driven streaming video productions which many élite schools (Harvard and MIT among them) are now touting as the next big thing in education.  Just think: millions of people around the world can hear the lectures of top-flight genius professors -- for free! Of course no one will get any course credit, or a degree from any of these fine institutions, but no matter! -- Maybe we'll have quizzes, and the students who do really well on the quizzes will get some kind of scholarship.  As Jon Lovitz might say, "Yeah, that's the ticket!"

Look, university education has evolved enormously since a few teachers and students rented some rooms in Oxford in the twelfth century.  In those truly "olden" days, the seven liberal arts -- Grammar, Rhetoric, Logic, Arithmetic, Geometry, Music, and Astronomy -- made up the essence of the curriculum. And, since Oxford (and Cambridge, and most other medieval universities) existed to prepare people for lives in the church or government, Theology was soon added. Professors stood at lecterns, and students sat on benches, with only the most senior students allowed at the front bench, from which actual questions -- at least, after the innovations of Peter Abelard at the University of Paris -- could be directed at those professors.  There was, in general, no tuition, although the cost of room and board, along with textbooks (which in those days had to be rented, being too expensive to purchase) made the taking of a degree prohibitively expensive, unless one were wealthy, or (like Chaucer's diligent Clerk) had friends to pay one's fees, in return for which one would diligently pray for the salvation of their souls.

One thing, however, has never changed, really: the idea that "education" (the word comes from the Latin ex + ducere, which, intriguingly, can mean either "to lead forth" or "to draw out") is something which takes place when teachers and students are in the same room. The UK system has evolved considerably, and now typically consists of a course of lectures, at which one sits, examinations, which are scored by fairly standard rubrics, and tutorials, which give students invaluable one-on-one experience.  In the United States, the first few private colleges (Harvard, Yale, Dartmouth, Brown, and the rest) followed the British model for the most part, while the Land Grant Universities (established by the Morrill Acts of 1862 and 1890) used a similar set of introductory lectures followed by smaller and more advanced classes, and expanded the curriculum at first to the more "useful" arts and sciences (such as agriculture and engineering) as well as professional certifications (education, social work, nursing, and so forth).  Universities sometimes had urban extension classes, and the UK system was made more accessible through the Open University scheme, made famous in the film Educating Rita.

But how to make education more "open" -- more available to the masses -- has remained a tricky question. Systems set up to provide college-level training to the less-privileged, such as the City College of New York, eventually started charging tuition and fees, and once again access skewed along class lines.  The "Open University" system in the UK was preceded by regional institutions, the "Red Bricks," which sought to offer on-site classes comparable to those of the ancient universities.  Somehow, though, neither of these transcended the old "open élite" model: one educated the wealthy, along with those who, from among the poorer classes, could be culled and promoted by 'merit,' leading both to their success and to their co-optation by the ruling classes. For those who "made the grade," a lifetime of improved prosperity and prestige awaited.

Today, a terrible idea has begun to flourish: that an education should be measured by the economic benefits it confers. This is doubly strange because so many of the institutions of opportunity which sought to elevate those who strove to better themselves had also adopted the model of a 'liberal arts' education that had its roots in the ancient schools of learning. Education has always had some sort of economic benefit, but before now this was never its reason for being. A richer life, not a richer paycheck, was once considered its highest goal.

The MOOC madness well fits this new conception. Education is not for the enrichment of life for the many, it is for awarding laurels to the victors; it is not a mechanism to improve the intellectual life of  all, but a "race to the top" where the best and brightest will receive cash prizes.  The highly exclusive institutions which are poised to offer the widest array of MOOC's may seem, at first, to be the world's great benefactors: behold, they give away education for free!  But what they give is not in fact education at all; they give only a repeated, recorded message, like the lecture of one of the many dead professors at Hogwarts School in J.K. Rowling's universe.  Those who receive these lectures are, in fact, ghosts as well: they have no student ID's, cannot use the library, will receive no in-person advising or counseling, and the work they do will not accumulate.

Of course Harvard and MIT won't keep doing this for nothing.  Eventually, the MOOC's will have to be capitalized, will have to pay.  The first few tiers may be free, and then payment, or suffering through advertisements for cruises or medicines, will have to carry the freight.  A few people will, perhaps, be identified and elevated: come on down, you're the next contestant on This Education's For You! But in fact, the old inequities will be the same as ever: personal development and satisfaction for those who excel, and a broom and a mop for those who don't. And pure MOOC's, the ones that offer exams and follow-up but no in-person component, have a terrifically high rate of attrition and failure. "Blended" or "hybrid" courses, which mix online and in-person instruction, fare much better, for reasons that seem to me obvious -- but in whatever form, the MOOC bandwagon looks set to roll on, as our latest educational juggernaut.

Once upon a time, there were institutions in which many people, men and women, studied and prayed and lived a devout life which their communities believed redounded to the public good. But then one man, Henry VIII, asked cui bono? And the monasteries, abbeys, and religious foundations of England were closed at a stroke, the monks and nuns sent to fend for themselves, while the King's men rolled up the lead roofing and carried away the books.  What was left was pilfered by the local peasants, and all that remained were mere ruins.

If MOOC's become the way of the future, I fear for education in America, and the world: if no higher purpose than increased earnings can be found, then our system will, within a generation, be little more than a ruin itself.

Tuesday, February 12, 2013

Nam ... Rep ... Us!


The first person I ever heard about whose life made any sense to me was Superman.  He had come from an alien planet, Krypton, and wasn’t really related to Ma and Pa Kent, or anyone else on Earth for that matter.  Still, his parents were good people, and only wanted the best for him, so he had to figure out how to blend in with Earth people, so that he wouldn’t get in trouble.  In one of the comics I had, there was a story-line where Superman had to tell the truth about everything – fortunately, no one asked him about his secret identity.  Still, he had to tell the people around him what he really thought of them, when he could see with his superior intellect and x-ray vision that they were all in some way flawed or pathetic.  After one day of honesty, there were “down with Superman” rallies all over the country.  This was around the same time that everyone was burning their Beatles albums because John Lennon had said they were ‘more popular than Jesus.’  So much for honesty.

Another thing that fascinated me was one of Superman’s lesser-known enemies, Mr. Mxyzptlk. Mr. Mxyzptlk was always trying to trick Superman into saying his name backwards, which would instantly transport him to the fifth dimension.  One time, Mr. Mxyzptlk tricked Superman into announcing the winners of a cricket race that worked like a horse race. He then bribed the cricket jockeys so that crickets named “Nam,” “Rep,” and “Us” would be the first three winners.  Superman went ahead and announced them, but he didn’t suddenly get zapped into the fifth dimension.  Mr. Mxyzptlk was beside himself with rage, but Superman explained that it hadn’t worked because “Superman” wasn’t his real name.  His real name was Kal-el, so in order to send him to the fifth dimension, he’d have to say “Le-lak” (of course he didn’t really say those last words).  Foiled again.

Then there was the Bizarro world, where everyone looked like cubist paintings of their ordinary selves.  Not only that, but they all talked and thought backwards over there, mixing up “I” and “Me” and doing everything the opposite of the normal way.  There was even a Bizarro code: "Us do opposite of Earthly things! Us hate beauty, us love ugliness! Is big crime to make anything perfect on Bizarro World."

Everyone who existed in the regular Superman world had a counterpart in the other: there was a Bizarro Superman, a Bizarro Lois Lane, a Bizarro Jimmy Olsen, and so on.  For some reason that, even now, I can’t precisely identify, the Bizarro world made sense to me too.  The Bizarro Supermen were all married to Bizarro Lois Lanes, and their Bizarro super-kids were expected to do badly in school. If they did well, they were given Superman bad-tutors (complete with mortarboard hats) who taught them how to do it all wrong again.

Of course there was another code beside the Bizarro one, which was even more bizarre: the Comics Code, instituted after the moral panic over the evil influence of comic books on teenagers. No one could actually shoot anyone, no one could get killed, and bad guys always finished last.  Maybe it was the narrow restrictions of the Comics Code that drove writers to invent Mr. Mxyzptlk and the Bizarro World -- maybe not.  But if you could accept a guy in a blue suit and a red cape who could leap over tall buildings in a single bound, you could accept anything.

Thursday, February 7, 2013

The End of the Web (not)

Self-appointed computer 'visionary' David Gelernter has news for us -- buzz buzz! -- the end of the Web is coming! Well, not exactly, but there will be no 'next' browser or web protocol; instead, we'll stop thinking spatially, in terms of 'pages,' and start thinking in a time-based manner, in terms of 'streams.' Presumably, our metaphors will change as well; we'll no longer speak of "going to" or "visiting" internet resources; neither will we 'surf' or 'browse' or 'explore.' No, we'll just swim in the stream, merge one stream with another, blend streams into a custom cappuccino of information, and search streams using exclusionary paradigms -- his example is a stream which we tell to temporarily edit itself so that it displays only those moments that mention cranberries. We won't use any of the old search language; instead we'll dynamically edit constant real-time streams. We will, however, do a heck of a lot of "scrolling."

But of course there's just one problem with this: we won't. The metaphors of information are, and have been, spatial ones, since the era of hieroglyphics and cuneiform. Our minds are, it seems, programmed for a sort of visual/spatial thinking -- it goes back, doubtless, to our very old days as hunter/gatherers. Whereas time, that seemingly old friend of ours, is quite a recent invention, and an annoying one as well; until well into the modern era, with the invention of the bimetal strip, which led to reliable and affordable pocket-watches, no one, quite literally, knew 'what time it was.' Time, although it exists in our minds as a constant flow, is in fact made up of all kinds of disparate material that our conscious minds work to stitch together; it is a production, not an exterior condition.  And, when it comes to the past, time gets murky; studies have shown that each time we recall past events, we alter our memory of them; it's the reason eyewitness testimony is often unreliable. A "line" of time, unlike a horizon-line in an image, is very much a cultural construct. 

Beyond that, we have some very suggestive empirical evidence that Internet users don't like to interface with life this way. Facebook has tried this with their "timelines," and the result has been almost universal hatred. Gelernter points to blogs, and to Twitter, as time-stream paradigms, but in fact the vast majority of Tweets that have any lasting impact contain URL's to more 'static' web resources. Blogs do indeed self-archive, and put the newest postings first, but people rarely search through these archives; if they come upon archived pages, it's usually through lateral links such as those generated by a search engine. Who among us has read a blog from start to finish?  Who would want to?  But Gelernter goes even farther; he expects that everyone will be accessing everything through streams that constantly flow in real time.  But do we want that either?  The number of Facebook users who leave or quit, frustrated with the continual barrage of 'news' and 'likes,' suggests that this paradigm isn't going to be a crowd pleaser.  And isn't the current web founded on the pleasure of crowds, whence comes their (often unpaid) labor?

But the other reason that the spatially-metaphored web isn't going to come to an end in favor of a time-metaphored one is that, to paraphrase Sun Ra, it's after the end of the world.  We already make time for our online doings, and whatever we do online becomes, if we want it to, part of a stream.  Those who want to access it that way already have all kinds of software to do so; if you'd like to get real-time updates to all the blogs you follow as an RSS stream, you can do it.  And more: if you want to think of the internet as a creature of time and flowing data, you already can think of it that way, model it that way, study it that way.  But while you're doing that, most of the people who are using it will be using it with spatial metaphors, and software to match.
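To take just one example -- a minimal sketch, assuming the third-party feedparser library and a couple of placeholder feed URLs -- merging a handful of blogs into a single reverse-chronological "stream" takes only a few lines of Python:

# Merge several blogs' RSS feeds into one time-ordered stream.
# The feed URLs below are placeholders, not real endpoints, and
# feedparser is a third-party library (pip install feedparser).

import time
import feedparser

FEEDS = [
    "https://example.com/blog-one/rss",
    "https://example.com/blog-two/rss",
]

entries = []
for url in FEEDS:
    parsed = feedparser.parse(url)
    for entry in parsed.entries:
        # Fall back to "now" if a feed omits its publication date.
        published = entry.get("published_parsed") or time.gmtime()
        entries.append((published, entry.get("title", "(untitled)"), entry.get("link", "")))

# Newest first: the "stream" view.
for published, title, link in sorted(entries, reverse=True):
    print(time.strftime("%Y-%m-%d", published), title, link)

The point isn't that this is clever -- it's that it's already trivial, which is why no new paradigm is required to give streams to the people who actually want them.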

Tuesday, January 15, 2013

Conspiracy Theory Theory

We seem to live in the age of conspiracy theories. Not that they haven't been around for a while, but that something in this postmodern moment seems to supply them, amplify them, and keep them alive longer than at any time in the past. An event scarcely settles into public awareness before someone steps forward to "question" it -- not simply to question why or how it happened, but whether it happened at all.

In the course of this post, I'm going to try not to mention any specific such theories -- I have no desire to give their promulgators more attention -- but I do want to look at some of the reasons for the plethora of such theories at the present moment, and I believe I can do so without going into the particulars of any of them (though I will, on occasion, mention such theories about the Kennedy assassination, which in many ways is the model for them all).

What kinds of events generate these theories? And what does their structure tell us about the shifting shape of mass media, and our own shifting modes of understanding the 'true' and the 'real'?

To generate such theories, an event has to be of a very specific nature: something that, at its very occurrence, was completely unanticipated, spectacular, and ideologically charged. Whenever something terrible happens to an individual -- a friend dies, one's home catches fire, or one's finances collapse -- we ask ourselves why.  Sometimes, there's a simple answer: he got cancer, the ashes were still hot, you invested in what turned out to be a pyramid scheme.  Even then, though, there are leftover questions: Why did someone with such a promising future have to die? How could such a small mistake destroy a home? Why didn't I realize much sooner that the promised returns were too good to be true?

It can take an individual years to sort through these issues, and to eventually decide it's time to move on. But what about when such a disaster overtakes an entire region, nation, or the planet as a whole? The causes and consequences of these disasters are vast and complex, and there may never be a completely clear and unambiguous reason why.  Why did the Light Brigade charge into the cannon? How was it that Roosevelt didn't anticipate the attack on Pearl Harbor? How could a lone gunman have killed the leader of the free world, flush in the youth of his success?

The psychic cost of letting ourselves understand and accept such events is enormous, and the collective soul searches for some, for any kind of release from this torment. In the past, scapegoating was the easiest option; to identify and attack the purported cause of something terrible is undeniably cathartic, even if it later turns out that the person or people blamed were innocent.  It's harder to do today, although it can still happen.  But as it turns out, alleging a conspiracy -- even one so vast and profound that it turns out that what we thought happened didn't actually happen at all -- provides an even more effective balm.

On the face of it, these claims are absurd, but that doesn't matter. The people who make them do not put themselves under the burden of constructing a single, plausible alternative to what everyone else believes has happened; they simply exploit doubt and uncertainty to a sufficient degree that they can discard what they regard as the "official" and therefore false version. Having done that, they hint vaguely at a series of dark alternatives; they don't have to pick just one, and if one is shown to be patently false, they just point to another, and another, and another.

The Kennedy assassination's "grassy knoll" is one such point. One frame of the infamous Zapruder film seems to show a puff of smoke at this location.  Is it a puff of smoke? A puff of car exhaust? A slight fogging of the film? The first attack is simply to cast doubt on the "single shooter" version, and in this charge the puff of smoke is just the beginning.  The second move is to speak of another shooter as if he or she was definitely known to exist, and to search all the testimony one can find that correlates with this possibility; if any such claim is doubted, one simply cites another.

And here is why in this age, such theories have such traction: the informational background -- official reports, statements, maps, photographs, blog postings, 911 recordings, satellite photos, and so forth -- is so vast, and daily growing so much vaster, that the amount of informational fuel available is, for all practical purposes, infinite.

As anyone who has tried can confirm, it's impossible to ever win an argument with a conspiracy theorist. For one, they have bushels of information at their fingertips, and warehouses more if those run out.  For two, they don't have to produce a coherent account of what actually took place, just cast doubt on every particular claim, one after another, until eventually the whole thing shudders under piecemeal attacks.

Recently, some such theorists have defended their statements by saying that they are just practicing "critical thinking," that they are "questioning assumptions" and doing "investigative journalism." But none of these actually apply: what these theorists -- who often have, or are given, the name "truthers" -- actually possess is a very poor understanding of the nature of truth.  To them, "truth" must be consistent, not only with every conceivable piece of data, but with their own ideological presuppositions. Anything inconsistent stinks to them of untruth.  And yet, the strange fact is, the truth of any actual event is full of inconsistencies, many of which can simply not be resolved completely.

The "truther" path dares those who accept a commonly-known fact or event to "prove" it happened. And yet nearly all human events cannot be proved in this way.  Prove that the ancient Egyptians existed! One points to the pyramids, and the Temple of Luxor.  But what if these were actually fake ruins put there by the Greeks thousands of years later? Prove that evolution is evidenced in fossilized life! But what if these fossils were put there by God to test our faith and confuse us?

Truthers are fond of documents taken out of context; they obsess over documentation, but when documentation is provided, they simply say that it was forged or faked.  They insisted that neither the President's short-form nor long-form birth certificate was "real" -- and yet an obviously, clumsily faked Kenyan one was held forth as confirmation of their suspicions. They point to photographs quite a bit -- and yet when it's shown that the image doesn't depict what they claimed, they simply say the photograph was altered by people trying to discredit them.

And they love eyewitness testimony.  Never mind that it's been conclusively shown that eyewitness testimony is quite often unreliable, they love the way the acid of testimonial inconsistency eats away at the "official" version.

The irony is that the reason eyewitness testimony is unreliable is the exact same reason that conspiracy theories are so attractive to their adherents. It's because the human mind detests information in a vacuum; we make stories of our memories even as we are making our memories, and the stories stick -- we hate to change them later.  In a classic Peanuts strip, Lucy sees what she thinks is a rare butterfly from Brazil on the sidewalk,  and wonders at the amazing natural ability of these tiny creatures to travel so far. In the next panel, her kid brother Linus points out that it's actually a potato chip. To which Lucy replies,  "Well, I’ll be! So it is! I wonder how a potato chip got all the way up here from Brazil?"



And so it is with the conspiracy theorist. The things which evidence may or may not point to are taken as givens, and any attempt to show that the preponderance of the evidence shows that it's a potato chip of local origin simply does not compute.  And, in the age when anyone who wants to can Google up millions of pages of information about butterflies, those of us who still see a potato chip are in trouble.