Healthcare.gov and the Gulf Between Planning and Reality

Back in the mid-1990s, I did a lot of web work for traditional media. That often meant figuring out what the client was already doing on the web, and how it was going, so I’d find the techies in the company, and ask them what they were doing, and how it was going. Then I’d tell management what I’d learned. This always struck me as a waste of my time and their money; I was like an overpaid bike messenger, moving information from one part of the firm to another. I didn’t understand the job I was doing until one meeting at a magazine company. 

The thing that made this meeting unusual was that one of their programmers had been invited to attend, so management could outline their web strategy to him. After the executives thanked me for explaining what I’d learned from log files given me by their own employees just days before, the programmer leaned forward and said “You know, we have all that information downstairs, but nobody’s ever asked us for it.”

I remember thinking “Oh, finally!” I figured the executives would be relieved this information was in-house, delighted that their own people were on it, maybe even mad at me for charging an exorbitant markup on local knowledge. Then I saw the look on their faces as they considered the programmer’s offer. The look wasn’t delight, or even relief, but contempt. The situation suddenly came clear: I was getting paid to save management from the distasteful act of listening to their own employees. 

In the early days of print, you had to understand the tech to run the organization. (Ben Franklin, the man who made America a media hothouse, called himself Printer.) But in the 19th century, the printing press became domesticated. Printers were no longer senior figures — they became blue-collar workers. And the executive suite no longer interacted with them much, except during contract negotiations.

This might have been nothing more than a previously hard job becoming easier, Hallelujah. But most print companies took it further. Talking to the people who understood the technology became demeaning, something to be avoided. Information was to move from management to workers, not vice-versa (a pattern that later came to other kinds of media businesses as well). By the time the web came around and understanding the technology mattered again, many media executives hadn’t just lost the habit of talking with their own technically adept employees, they’d actively suppressed it.

I’d long forgotten about that meeting and those looks of contempt (I stopped building websites before most people started) until the launch of Healthcare.gov.

* * *

For the first couple of weeks after the launch, I assumed any difficulties in the Federal insurance market were caused by unexpected early interest, and that once the initial crush ebbed, all would be well. The sinking feeling that all would not be well started with this disillusioning paragraph about what had happened when a staff member at the Centers for Medicare & Medicaid Services, the agency responsible for Healthcare.gov, warned about difficulties with the site back in March. In response, his superiors told him…

[…] in effect, that failure was not an option, according to people who have spoken with him. Nor was rolling out the system in stages or on a smaller scale, as companies like Google typically do so that problems can more easily and quietly be fixed. Former government officials say the White House, which was calling the shots, feared that any backtracking would further embolden Republican critics who were trying to repeal the health care law.

The idea that “failure is not an option” is a fantasy version of how non-engineers should motivate engineers. That sentiment was invented by a screenwriter, riffing on an after-the-fact observation about Apollo 13; no one said it at the time. (If you ever say it, wash your mouth out with soap. If anyone ever says it to you, run.) Even NASA’s vaunted moonshot, so often referred to as the best of government innovation, was preceded by dozens of unmanned test missions, several of which failed outright.

Failure is always an option. Engineers work as hard as they do because they understand the risk of failure. And for anything it might have meant in its screenplay version, here that sentiment means the opposite; the unnamed executives were saying “Addressing the possibility of failure is not an option.”

* * *

The management question, when trying anything new, is “When does reality trump planning?” For the officials overseeing Healthcare.gov, the preferred answer was “Never.” Every time there was a chance to create some sort of public experimentation, or even just some clarity about its methods and goals, the imperative was to avoid giving the opposition anything to criticize.

At the time, this probably seemed like a way of avoiding early failures. But the project’s managers weren’t avoiding those failures. They were saving them up. The actual site is worse—far worse—for not having early and aggressive testing. Even accepting the crassest possible political rationale for denying opponents a target, avoiding all public review before launch has given those opponents more to complain about than any amount of ongoing trial and error would have. 

In his most recent press conference about the problems with the site, the President ruefully compared his campaigns’ use of technology with Healthcare.gov:

And I think it’s fair to say that we have a pretty good track record of working with folks on technology and IT from our campaign, where, both in 2008 and 2012, we did a pretty darn good job on that. […] If you’re doing it at the federal government level, you know, you’re going through, you know, 40 pages of specs and this and that and the other and there’s all kinds of law involved. And it makes it more difficult — it’s part of the reason why chronically federal IT programs are over budget, behind schedule.

It’s certainly true that Federal IT is chronically challenged by its own processes. But the biggest problem with Healthcare.gov was not timeline or budget. The biggest problem was that the site did not work, and the administration decided to launch it anyway. 

This is not just a hiring problem, or a procurement problem. This is a management problem, and a cultural problem. The preferred method for implementing large technology projects in Washington is to write the plans up front, break them into increasingly detailed specifications, then build what the specifications call for. It’s often called the waterfall method, because on a timeline the project cascades from planning, at the top left of the chart, down to implementation, on the bottom right.

Like all organizational models, waterfall is mainly a theory of collaboration. By putting the most serious planning at the beginning, with subsequent work derived from the plan, the waterfall method amounts to a pledge by all parties not to learn anything while doing the actual work. Instead, waterfall insists that the participants will understand best how things should work before accumulating any real-world experience, and that planners will always know more than workers.

This is a perfect fit for a culture that communicates in the deontic language of legislation. It is also a dreadful way to make new technology. If there is no room for learning by doing, early mistakes will resist correction. If the people with real technical knowledge can’t deliver bad news up the chain, potential failures get embedded rather than uprooted as the work goes on.

At the same press conference, the President also noted the degree to which he had been kept in the dark:

OK. On the website, I was not informed directly that the website would not be working the way it was supposed to. Had I been informed, I wouldn’t be going out saying “Boy, this is going to be great.” You know, I’m accused of a lot of things, but I don’t think I’m stupid enough to go around saying, this is going to be like shopping on Amazon or Travelocity, a week before the website opens, if I thought that it wasn’t going to work.

Healthcare.gov is a half-billion dollar site that was unable to complete even a thousand enrollments a day at launch, and for weeks afterwards. As we now know, programmers, stakeholders, and testers all expressed reservations about Healthcare.gov’s ability to do what it was supposed to do. Yet no one who understood the problems was able to tell the President. Worse, every senior political figure—every one—who could have bridged the gap between knowledgeable employees and the President decided not to. 

And so it was that, even on launch day, the President was allowed to make things worse for himself and his signature program by bragging about the already-failing site and inviting people to log in and use something that mostly wouldn’t work. Whatever happens to government procurement or hiring (and we should all hope those things get better), a culture that prefers deluding the boss over delivering bad news isn’t well equipped to try new things.

* * *

With a site this complex, things were never going to work perfectly the first day, whatever management thought they were procuring. Yet none of the engineers with a grasp of this particular reality could successfully convince the political appointees to adopt the obvious response: “Since the site won’t work for everyone anyway, let’s decide what tests to run on the initial uses we can support, and use what we learn to improve.” 

In this context, testing does not just mean “Checking to see what works and what doesn’t.” Even the Healthcare.gov team did some testing; it was late and desultory, but at least it was there. (The testers recommended delaying launch until the problems were fixed. This did not happen.) Testing means seeing what works and what doesn’t, and acting on that knowledge, even if that means contradicting management’s deeply held assumptions or goals. In well-run organizations, information runs from the top down and from the bottom up.

One of the great descriptions of what real testing looks like comes from Valve software, in a piece detailing the making of its game Half-Life. After designing a game that was only sort of good, the team at Valve revamped its process, including constant testing: 

This [testing] was also a sure way to settle any design arguments. It became obvious that any personal opinion you had given really didn’t mean anything, at least not until the next test. Just because you were sure something was going to be fun didn’t make it so; the testers could still show up and demonstrate just how wrong you really were.

“Any personal opinion you had given really didn’t mean anything.” So it is in the government; any insistence that something must work is worthless if it actually doesn’t. 

An effective test is an exercise in humility; it’s only useful in a culture where desirability is not confused with likelihood. For a test to change things, everyone has to understand that their opinion, and their boss’s opinion, matters less than what actually works and what doesn’t. (An organization that isn’t learning from its users has decided it doesn’t want to learn from its users.)

Given comparisons with technological success from private organizations, a common response is that the government has special constraints, and thus cannot develop projects piecemeal, test with citizens, or learn from its mistakes in public. I was up at the Kennedy School a month after the launch, talking about technical leadership and Healthcare.gov, when one of the audience members made just this point, proposing that the difficult launch was unavoidable, because the government simply couldn’t have tested bits of the project over time.

That observation illustrates the gulf between planning and reality in political circles. It is hard for policy people to imagine that Healthcare.gov could have had a phased rollout, even while it is having one.

At launch, on October 1, only a tiny fraction of potential users could actually try the service. They generated concrete errors. Those errors were handed to a team whose job was to improve the site, already public but only partially working. The resulting improvements are incremental, and put in place over a period of months. That is a phased rollout, just one conducted in the worst possible way. 

The vision of “technology” as something you can buy according to a plan, then have delivered as if it were coming off a truck, flatters and relieves managers who have no idea how this stuff works and no interest in finding out, but it’s also a breeding ground for disaster. The mismatch between technical competence and executive authority is at least as bad in government now as it was in media companies in the 1990s, but with much more at stake.

* * *

Tom Steinberg, in his remembrance of his brilliant colleague Chris Lightfoot, said this about Lightfoot’s view of government and technology:

[W]hat he fundamentally had right was the understanding that you could no longer run a country properly if the elites don’t understand technology in the same way they grasp economics or ideology or propaganda. His analysis and predictions about what would happen if elites couldn’t learn were savage and depressingly accurate.

Now, and from now on, government will interact with its citizens via the internet, in increasingly important ways. This is a non-partisan issue; whichever party is in the White House will build and launch new forms of public service online. Unfortunately for us, our senior political figures have little habit of talking to their own technically adept employees.

If I had to design a litmus test for whether our political class grasps the internet, I would look for just one signal: Can anyone with authority over a new project articulate the tradeoff between features, quality, and time?

When a project cannot meet all three goals—a situation Healthcare.gov was clearly in by March—something will give. If you want certain features at a certain level of quality, you’d better be able to move the deadline. If you want overall quality by a certain deadline, you’d better be able to simplify, delay, or drop features. And if you have a fixed feature list and deadline, quality will suffer. 

Intoning “Failure is not an option” will be at best useless, and at worst harmful. There is no “Suddenly Go Faster” button, no way you can throw in money or additional developers as a late-stage accelerant; money is not directly tradable for either quality or speed, and adding more programmers to a late project makes it later. You can slip deadlines, reduce features, or, as a last resort, just launch and see what breaks. 

Denying this tradeoff doesn’t prevent it from happening. If no one with authority over the project understands that, the tradeoff is likely to mean sacrificing quality by default. That just happened to this administration’s signature policy goal. It will happen again, as long as politicians are allowed to imagine that if you just plan hard enough, you can ignore reality. It will happen again, as long as department heads imagine that complex technology can be procured like pencils. It will happen again as long as management regards listening to the people who understand the technology as a distasteful act.

Napster, Udacity, and the Academy

Fifteen years ago, a research group called the Fraunhofer Institute announced a new digital format for compressing movie files. This wasn’t a terribly momentous invention, but it did have one interesting side effect: Fraunhofer also had to figure out how to compress the soundtrack. The result was the Moving Picture Experts Group Format 1, Audio Layer III, a format you know and love, though only by its acronym, MP3.

The recording industry concluded this new audio format would be no threat, because quality mattered most. Who would listen to an MP3 when they could buy a better-sounding CD at the record store? Then Napster launched, and quickly became the fastest-growing piece of software in history. The industry sued Napster and won, and Napster collapsed even more suddenly than it had arisen.

If Napster had only been about free access, control of legal distribution of music would then have returned to the record labels. That’s not what happened. Instead, Pandora happened. Last.fm happened. Spotify happened. iTunes happened. Amazon began selling songs in the hated MP3 format.

How did the recording industry win the battle but lose the war? How did they achieve such a decisive victory over Napster, then fail to regain control of even legal distribution channels? They crushed Napster’s organization. They poisoned Napster’s brand. They outlawed Napster’s tools. The one thing they couldn’t kill was the story Napster told.

The story the recording industry used to tell us went something like this: “Hey kids, Alanis Morissette just recorded three kickin’ songs! You can have them, so long as you pay for the ten mediocrities she recorded at the same time.” Napster told us a different story. Napster said “You want just the three songs? Fine. Just ‘You Oughta Know’? No problem. Every cover of ‘Blue Suede Shoes’ ever made? Help yourself. You’re in charge.”

The people in the music industry weren’t stupid, of course. They had access to the same internet the rest of us did. They just couldn’t imagine—and I mean this in the most ordinarily descriptive way possible—could not imagine that the old way of doing things might fail. Yet things did fail, in large part because, after Napster, the industry’s insistence that digital distribution be as expensive and inconvenient as a trip to the record store suddenly struck millions of people as a completely terrible idea. 

Once you see this pattern—a new story rearranging people’s sense of the possible, with the incumbents the last to know—you see it everywhere. First, the people running the old system don’t notice the change. When they do, they assume it’s minor. Then that it’s a niche. Then a fad. And by the time they understand that the world has actually changed, they’ve squandered most of the time they had to adapt.

It’s been interesting watching this unfold in music, books, newspapers, TV, but nothing has ever been as interesting to me as watching it happen in my own backyard. Higher education is now being disrupted; our MP3 is the massive open online course (or MOOC), and our Napster is Udacity, the education startup.

We have several advantages over the recording industry, of course. We are decentralized and mostly non-profit. We employ lots of smart people. We have previous examples to learn from, and our core competence is learning from the past. And armed with these advantages, we’re probably going to screw this up as badly as the music people did.

* * * 

A massive open online class is usually a series of video lectures with associated written materials and self-scoring tests, open to anyone. That’s what makes them OOCs. The M part, though, comes from the world. As we learned from Wikipedia, demand for knowledge is so enormous that good, free online materials can attract extraordinary numbers of people from all over the world. 

Last year, Introduction to Artificial Intelligence, an online course from Stanford taught by Peter Norvig and Sebastian Thrun, attracted 160,000 potential students, of whom 23,000 completed it, a scale that dwarfs anything possible on a physical campus. As Thrun put it, “Peter and I taught more students AI, than all AI professors in the world combined.” Seeing this, he quit and founded Udacity, an educational institution designed to offer MOOCs.

The size of Thrun and Norvig’s course, and the attention attracted by Udacity (and similar organizations like Coursera, P2PU, and University of the People), have many academics worrying about the effect on higher education. The loudest such worrying so far has been The Trouble With Online Education, a New York Times OpEd by Mark Edmundson of the University of Virginia. As most critics do, Edmundson focused on the issue of quality, asking and answering his own question: “[C]an online education ever be education of the very best sort?”

Now you and I know what he means by “the very best sort”—the intimate college seminar, preferably conducted by tenured faculty. He’s telling the story of the liberal arts education in a selective residential college and asking “Why would anyone take an online class when they can buy a better education at UVA?”

But who faces that choice? Are we to imagine an 18-year-old who can set aside $250K and 4 years, but who would have a hard time choosing between a residential college and a series of MOOCs? Elite high school students will not be abandoning elite colleges any time soon; the issue isn’t what education of “the very best sort” looks like, but what the whole system looks like.

Edmundson isn’t crazy enough to argue that all college experiences are good, so he hedges. He tells us “Every memorable class is a bit like a jazz composition”, without providing an analogy for the non-memorable ones. He assures us that “large lectures can also create genuine intellectual community”, which of course means they can also not do that. (He doesn’t say how many large lectures fail his test.) He says “real courses create intellectual joy,” a statement that can be accurate only as a tautology. (The MOOC Criticism Drinking Game: take a swig whenever someone says “real”, “true”, or “genuine” to hide the fact that they are only talking about elite schools instead of the median college experience.)

I was fortunate enough to get the kind of undergraduate education Edmundson praises: four years at Yale, in an incredible intellectual community, where even big lecture classes were taught by seriously brilliant people. Decades later, I can still remember my art history professor’s description of the Arnolfini Wedding, and the survey of modern poetry didn’t just expose me to Ezra Pound and H.D., it changed how I thought about the 20th century.

But you know what? Those classes weren’t like jazz compositions. They didn’t create genuine intellectual community. They didn’t even create ersatz intellectual community. They were just great lectures: we showed up, we listened, we took notes, and we left, ready to discuss what we’d heard in smaller sections.

And did the professors teach our sections too? No, of course not; those were taught by graduate students. Heaven knows what they were being paid to teach us, but it wasn’t a big fraction of a professor’s salary. The large lecture isn’t a tool for producing intellectual joy; it’s a tool for reducing the expense of introductory classes.

* * *

Higher education has a bad case of cost disease (sometimes called Baumol’s cost disease, after one of its theorizers). The classic example is the string quartet; performing a 15-minute quartet took a cumulative hour of musician time in 1850, and takes that same hour today. This is not true of the production of food, or clothing, or transportation, all of which have seen massive increases in value created per hour of labor. Unfortunately, the obvious ways to make production more efficient—fewer musicians playing faster—wouldn’t work as well for the production of music as for the production of cars.

An organization with cost disease can use lower paid workers, increase the number of consumers per worker, subsidize production, or increase price. For live music, this means hiring less-talented musicians, selling more tickets per performance, writing grant applications, or, of course, raising ticket prices. For colleges, this means more graduate and adjunct instructors, increased enrollments and class size, fundraising, or, of course, raising tuition.

The great work on college and cost-disease is Robert Archibald and David Feldman’s Why Does College Cost So Much? Archibald and Feldman conclude that institution-specific explanations—spoiled students expecting a climbing wall; management self-aggrandizement at the expense of educational mission—hold up less well than the generic observation: colleges need a lot of highly skilled people, people whose wages, benefits, and support costs have risen faster than inflation for the last thirty years.

Cheap graduate students let a college lower the cost of teaching the sections while continuing to produce lectures as an artisanal product, from scratch, on site, in real time. The minute you try to explain exactly why we do it this way, though, the setup starts to seem a little bizarre. What would it be like to teach at a university where you could only assign books you yourself had written? Where you could only ask your students to read journal articles written by your fellow faculty members? Ridiculous. Unimaginable.

Every college provides access to a huge collection of potential readings, and to a tiny collection of potential lectures. We ask students to read the best works we can find, whoever produced them and where, but we only ask them to listen to the best lecture a local employee can produce that morning. Sometimes you’re at a place where the best lecture your professor can give is the best in the world. But mostly not. And the only thing that kept this system from seeming strange was that we’d never had a good way of publishing lectures.

This is the huge difference between music and education. Starting with Edison’s wax cylinders, and continuing through to Pandora and the iPod, the biggest change in musical consumption has come not from production but from playback. Hearing an excellent string quartet play live in an intimate venue has indeed become a very expensive proposition, as cost disease would suggest, but at the same time, the vast majority of music listened to on any given day is no longer recreated live.

* * *

Harvard, where I was fortunate enough to have a visiting lectureship a couple of years ago, is our agreed-upon Best Institution, and it is indeed an extraordinary place. But this very transcendence should make us suspicious. Harvard’s endowment, 31 billion dollars, is over three hundred times the median, and only one college in five has an endowment in the first place. Harvard also educates only about a tenth of a percent of the 18 million or so students enrolled in higher education in any given year. Any sentence that begins “Let’s take Harvard as an example…” should immediately be followed up with “No, let’s not do that.”

This atypical bent of our elite institutions covers more than just Harvard. The top 50 colleges on the US News and World Report list (which includes most of the ones you’ve heard of) only educate something like 3% of the current student population. The entire list, about 250 colleges, educates fewer than 25%. 

The upper reaches of the US college system work like a potlatch, that festival of ostentatious giving. The very things the US News list of top colleges prizes—low average class size, a high ratio of staff to students—mean that any institution that tries to create a cost-effective education will move down the list. This is why most of the early work on MOOCs is coming out of Stanford and Harvard and MIT. As Ian Bogost says, MOOCs are marketing for elite schools.

Outside the elite institutions, though, the other 75% of students—over 13 million of them—are enrolled in the four thousand institutions you haven’t heard of: Abraham Baldwin Agricultural College. Bridgerland Applied Technology College. The Laboratory Institute of Merchandising. When we talk about college education in the US, these institutions are usually left out of the conversation, but Clayton State educates as many undergraduates as Harvard. Saint Leo educates twice as many. City College of San Francisco enrolls as many as the entire Ivy League combined. These are where most students are, and their experience is what college education is mostly like.

* * *

The fight over MOOCs isn’t about the value of college; a good chunk of the four thousand institutions you haven’t heard of provide an expensive but mediocre education. For-profit schools like Kaplan’s and the University of Phoenix enroll around one student in eight, but account for nearly half of all loan defaults, and the vast majority of their enrollees fail to get a degree even after six years. Reading the academic press, you wouldn’t think that these statistics represented a more serious defection from our mission than helping people learn something about Artificial Intelligence for free.

The fight over MOOCs isn’t even about the value of online education. Hundreds of institutions already offer online classes for credit, and half a million students are already enrolled in them. If critics of online education were consistent, they would believe that the University of Virginia’s Bachelor of Interdisciplinary Studies or Rutgers’ MLIS degree are abominations, or else they would have to believe that there is a credit-worthy way to do online education, one MOOCs could emulate. Neither argument is much in evidence.

That’s because the fight over MOOCs is really about the story we tell ourselves about higher education: what it is, who it’s for, how it’s delivered, who delivers it. The most widely told story about college focuses obsessively on elite schools and answers a crazy mix of questions: How will we teach complex thinking and skills? How will we turn adolescents into well-rounded members of the middle class? Who will certify that education is taking place? How will we instill reverence for Virgil? Who will subsidize the professor’s work? 

MOOCs simply ignore a lot of those questions. The possibility MOOCs hold out isn’t replacement; anything that could replace the traditional college experience would have to work like one, and the institutions best at working like a college are already colleges. The possibility MOOCs hold out is that the educational parts of education can be unbundled. MOOCs expand the audience for education to people ill-served or completely shut out from the current system, in the same way phonographs expanded the audience for symphonies to people who couldn’t get to a concert hall, and PCs expanded the users of computing power to people who didn’t work in big companies. 

Those earlier systems started out markedly inferior to the high-cost alternative: records were scratchy, PCs were crashy. But first they got better, then they got better than that, and finally, they got so good, for so cheap, that they changed people’s sense of what was possible.

In the US, an undergraduate education used to be an option, one way to get into the middle class. Now it’s a hostage situation, required to avoid falling out of it. And if some of the hostages having trouble coming up with the ransom conclude that our current system is a completely terrible idea, then learning will come unbundled from the pursuit of a degree just as songs came unbundled from CDs.

If this happens, Harvard will be fine. Yale will be fine, and Stanford, and Swarthmore, and Duke. But Bridgerland Applied Technology College? Maybe not fine. University of Arkansas at Little Rock? Maybe not fine. And Kaplan College, a more reliable producer of debt than education? Definitely not fine.

* * *

Udacity and its peers don’t even pretend to tell the story of an 18-year-old earning a Bachelor’s degree in four years from a selective college, a story that only applies to a small minority of students in the US, much less the world. Meanwhile, they try to answer some new questions, questions that the traditional academy—me and my people—often don’t even recognize as legitimate, like “How do we spin up 10,000 competent programmers a year, all over the world, at a cost too cheap to meter?”

Udacity may or may not survive, but as with Napster, there’s no containing the story it tells: “It’s possible to educate a thousand people at a time, in a single class, all around the world, for free.” To a traditional academic, this sounds like crazy talk. Earlier this fall, a math instructor writing under the pen name Delta enrolled in Thrun’s Statistics 101 class, and, after experiencing it first-hand, concluded that the course was

…amazingly, shockingly awful. It is poorly structured; it evidences an almost complete lack of planning for the lectures; it routinely fails to properly define or use standard terms or notation; it necessitates occasional massive gaps where “magic” happens; and it results in nonstandard computations that would not be accepted in normal statistical work.

Delta posted ten specific criticisms of the content (Normal Curve Calculations), teaching methods (Quiz Regime) and the MOOC itself (Lack of Updates). About this last one, Delta said:

So in theory, any of the problems that I’ve noted above could be revisited and fixed on future pass-throughs of the course. But will that happen at Udacity, or any other massive online academic program?

The very next day, Thrun answered that question. Conceding that Delta “points out a number of shortcomings that warrant improvements”, Thrun detailed how they were going to update the class. Delta, to his credit, then noted that Thrun had answered several of his criticisms, and went on to tell a depressing story of a fellow instructor at his own institution who had failed to define the mathematical terms he was using despite student requests. 

Tellingly, when Delta was criticizing his peer, he didn’t name the professor, the course, or even his institution. He could observe every aspect of Udacity’s Statistics 101 (as can you) and discuss them in public, but when criticizing his own institution, he pulled his punches.

Open systems are open. For people used to dealing with institutions that go out of their way to hide their flaws, this makes these systems look terrible at first. But anyone who has watched a piece of open source software improve, or remembers the Britannica people throwing tantrums about Wikipedia, has seen how blistering public criticism makes open systems better. And once you imagine educating a thousand people in a single class, it becomes clear that open courses, even in their nascent state, will be able to raise quality and improve certification faster than traditional institutions can lower cost or increase enrollment. 

College mottos run the gamut from Bryn Mawr’s Veritatem Dilexi (I Delight In The Truth) to the Laboratory Institute of Merchandising’s Where Business Meets Fashion, but there’s a new one that now hangs over many of them: Quae Non Possunt Non Manent. Things That Can’t Last Don’t. The cost of attending college is rising above inflation every year, while the premium for doing so shrinks. This obviously can’t last, but no one on the inside has any clear idea about how to change the way our institutions work while leaving our benefits and privileges intact. 

In the academy, we lecture other people every day about learning from history. Now it’s our turn, and the risk is that we’ll be the last to know that the world has changed, because we can’t imagine—really cannot imagine—that the story we tell ourselves about ourselves could start to fail. Even when it’s true. Especially when it’s true.

Wikileaks and the Long Haul

Like a lot of people, I am conflicted about Wikileaks.

Citizens of a functioning democracy must be able to know what the state is saying and doing in our name, to engage in what Pierre Rosanvallon calls “counter-democracy”*, the democracy of citizens distrusting rather than legitimizing the actions of the state. Wikileaks plainly improves those abilities.

On the other hand, human systems can’t stand pure transparency. For negotiation to work, people’s stated positions have to change, but change is seen, almost universally, as weakness. People trying to come to consensus must be able to privately voice opinions they would publicly abjure, and may later abandon. Wikileaks plainly damages those abilities. (If Aaron Bady’s analysis is correct, it is the damage and not the oversight that Wikileaks is designed to create.*)

And so we have a tension between two requirements for democratic statecraft, one that can’t be resolved, but can be brought to an acceptable equilibrium. Indeed, like the virtues of equality vs. liberty, or popular will vs. fundamental rights, it has to be brought into such an equilibrium for democratic statecraft not to be wrecked either by too much secrecy or too much transparency. 

As Tom Slee puts it, “Your answer to ‘what data should the government make public?’ depends not so much on what you think about data, but what you think about the government.”* My personal view is that there is too much secrecy in the current system, and that a corrective towards transparency is a good idea. I don’t, however, believe in pure transparency, and even more importantly, I don’t think that independent actors who are subject to no checks or balances are a good idea in the long haul.

If the long haul were all there was, Wikileaks would be an obviously bad thing. The practical history of politics, however, suggests that the periodic appearance of such unconstrained actors in the short haul is essential to increased democratization, not just of politics but of thought. 

We celebrate the printers of 16th century Amsterdam for making it impossible for the Catholic Church to constrain the output of the printing press to Church-approved books*, a challenge that helped usher in, among other things, the decentralization of scientific inquiry and the spread of politically seditious writings advocating democracy. 

This intellectual and political victory didn’t, however, mean that the printing press was then free of all constraints. Over time, a set of legal limitations around printing rose up, including restrictions on libel, the publication of trade secrets, and sedition. I don’t agree with all of these laws, but they were at least produced by some legal process.

Unlike the United States’ current pursuit of Wikileaks. 

I am conflicted about the right balance between the visibility required for counter-democracy and the need for private speech among international actors. Here’s what I’m not conflicted about: When authorities can’t get what they want by working within the law, the right answer is not to work outside the law. The right answer is that they can’t get what they want.

The United States is–or should be–subject to the rule of law, which makes the extra-judicial pursuit of Wikileaks especially nauseating. (Calls for Julian’s assassination are even more nauseating.) It may be that what Julian has done is a crime. (I know him casually, but not well enough to vouch for his motivations, nor am I a lawyer.) In that case, the right answer is to bring the case to a trial.

In the US, however, the government has a “heavy burden” for engaging in prior restraint of even secret documents, an established principle since New York Times Co. vs. The United States*, when the Times published the Pentagon Papers. If we want a different answer for Wikileaks, we need a different legal framework first.

Though I don’t like Senator Joseph Lieberman’s proposed SHIELD law (Securing Human Intelligence and Enforcing Lawful Dissemination*), I do like the fact that it is a law, and not an extra-legal avenue (of which Senator Lieberman is also guilty.*) I also like the fact that the SHIELD Law makes it clear what’s at stake: the law proposes new restraints on publishers, and would apply to the New York Times and The Guardian as well as to Wikileaks. (As Matthew Ingram points out, “Like it or not, Wikileaks is a media entity.”*) SHIELD amounts to an attempt to reverse parts of New York Times Co. vs. The United States.

I don’t think such a law should pass. I think the current laws, which criminalize the leaking of secrets but not the publishing of leaks, strike the right balance. However, as a citizen of a democracy, I’m willing to be voted down, and I’m willing to see other democratically proposed restrictions on Wikileaks put in place. It may even be that whatever checks and balances do get put in place by the democratic process make anything like Wikileaks impossible to sustain in the future. 

The key, though, is that democracies have a process for creating such restrictions, and as a citizen it sickens me to see the US trying to take shortcuts. The leaders of Myanmar and Belarus, or Thailand and Russia, can now rightly say to us “You went after Wikileaks’ domain name, their hosting provider, and even denied your citizens the ability to register protest through donations, all without a warrant and all targeting overseas entities, simply because you decided you don’t like the site. If that’s the way governments get to behave, we can live with that.” 

Over the long haul, we will need new checks and balances for newly increased transparency — Wikileaks shouldn’t be able to operate as a law unto itself any more than the US should be able to. In the short haul, though, Wikileaks is our Amsterdam. Whatever restrictions we eventually end up enacting, we need to keep Wikileaks alive today, while we work through the process democracies always go through to react to change. If it’s OK for a democracy to just decide to run someone off the internet for doing something they wouldn’t prosecute a newspaper for doing, the idea of an internet that further democratizes the public sphere will have taken a mortal blow.

The Times’ Paywall and Newsletter Economics

It is, perhaps, the end of the beginning.

In early July, Rupert Murdoch’s News Corporation placed its two London-based “quality” dailies, the Times and Sunday Times, behind a paywall, charging £1 for 24 hours of access, or £2 a week (after an introductory £1 for the first month.*) At the same time, News Corp also forbade the UK’s Audit Bureau of Circulations from reporting site traffic*, so that no meaningful measure of the paywall’s effect was available.

That situation has now been partially reversed, with News reporting some of its own numbers: they claim 105,000 total transactions for digital content between July and October.* (Several people have wrongly reported this as 105,000 users. The number of users is smaller, as there can be more than one transaction per user.) News Corp notes that about half of those transactions were one-offs, meaning only about 50,000 transactions in those four months were by people with any commitment to the site longer than a single day. 

Because that 50K number includes not just web site transactions, but Kindle and iPad sales as well, web subscribers are, at best, in the low tens of thousands. However, we don’t know how small the digital subscriber number is, for two reasons. First, the better the Kindle and iPad numbers are, the worse the web numbers are. Second, News did not report, for example, whether a loyal reader from July to October would count as a single transaction or several consecutive transactions. (If iPad sales are good, and loyal users create multiple transactions, then monthly web subscribers could be under 10,000.) 

The other figure News reported is that something like 100,000 print subscribers have requested web access. Combining digital-only and print subscribers, and comparing them with comScore’s pre-paywall estimate of roughly six million unique readers worldwide*, the reduction in total web audience seems to be on the order of 97%. (Note that this reduction can’t be measured by before and after traffic, as the home pages are outside the paywall, so people who refuse to pay still show up as visitors.)
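That estimate is easy to check in a few lines (a rough sketch using only the figures reported above, and treating the “committed” transactions as a generous upper bound on paying users):

```python
# Back-of-the-envelope check of the audience figures cited above.
# All inputs are the reported/estimated numbers from the text.

pre_paywall_uniques = 6_000_000   # comScore pre-paywall estimate
digital_transactions = 105_000    # total transactions, July-October
committed = digital_transactions // 2   # about half were one-offs
print_subs_with_access = 100_000  # print subscribers with web access

remaining = committed + print_subs_with_access   # upper bound on audience
reduction = 1 - remaining / pre_paywall_uniques
print(f"remaining audience: {remaining:,}")      # 152,500
print(f"reduction: {reduction:.1%}")             # 97.5%
```

Even with every committed transaction counted as a distinct user, the remaining audience is about 2.5% of the pre-paywall readership, which is where the “on the order of 97%” figure comes from.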

Because the print subscribers outnumber digital-only users, most of the remaining 3% pay nothing for the site. Subscription to the paper is now a better draw for website use than any case News has been able to make for paid access.

Given the paucity of the data, the key question of churn remains unanswerable. After the introductory £1 a month offer, the annualized rate rises from £12 to £104. This will cause additional users to bail out, but we have no way of guessing how many.

As with every aspect of The Times’ paywall, interpretation of these numbers varies widely. There are people arguing that these numbers are good news; Robert Andrews at PaidContent sees hope in the Times now having recurring user revenues.* There are people arguing that they are bad news; Mike Masnick at TechDirt believes those revenues are unlikely to offset new customer acquisition costs and the loss of advertising.* What is remarkable though, what seems to need more explaining than News’s strategy itself, is why anyone regards this particular paywall as news at all.

* * *

The “paywall problem” isn’t particularly complex, either in economic or technological terms. General-interest papers struggle to make paywalls work because it’s hard to raise prices in a commodity market. That’s the problem. Everything else is a detail.

The classic description of a commodity market uses milk. If you own the only cow for 50 miles, you can charge usurious rates, because no one can undercut you. If you own only one of a hundred such cows, though, then everyone can undercut you, so you can’t charge such rates. In a competitive environment like that, milk becomes a commodity, something whose price is set by the market as a whole. 

Owning a newspaper used to be like owning the only cow, especially for regional papers. Even in urban markets, there was enough segmentation–the business paper, the tabloid, the alternative weekly–and high enough costs to keep competition at bay. No longer.

The internet commodifies the business of newspapers. Any given newspaper competes with a few other newspapers, but any newspaper website competes with all other websites. As Nicholas Carr pointed out during the 2009 pirate kidnapping, Google News found 11,264 different sources for the story, all equally accessible.* The web puts newspapers in competition with radio and TV stations, magazines, and new entrants, both professional and amateur. It is the war of each against all.

None of this is new. The potential disruptive effects of the internet on newspapers have been observable since ClariNet in 1989.* Nor has the business case for paywalls changed. The advantage of paywalls is that they raise revenue from users. The disadvantages are that they reduce readership, increase customer acquisition and retention costs, and eliminate ad revenue from user-forwarded content. In most cases, the disadvantages have outweighed the advantages.

So what’s different about News’ paywall? Nothing. It’s no different from other pay-for-access plans, whether the NY Times’ TimesSelect* or the Harlingen, Texas Valley Morning Star.* News Corp has produced no innovation in content, delivery, or payment, and the idea of 90%+ loss of audience was already a rule of thumb over a decade ago. Yet something clearly feels different.

Over the last fifteen years, many newspaper people have assumed continuity with the analog business model, which is to say they assumed that readers could eventually be persuaded or forced to pay for digital editions. This in turn suggested that the failure of any given paywall was no evidence of anything other than the need to try again.

What is new about the Times’ paywall–what may in fact make it a watershed–isn’t strategy or implementation. What’s new is that it has launched as people in the news business are re-thinking assumed continuity. It’s new because the people paying attention to it are now willing to regard the results as evidence of something. To the newspaper world, TimesSelect looked like an experiment. The Times and Sunday Times look like a referendum on the future.

* * *

One way to escape a commodity market is to offer something that isn’t a commodity. This has been the preferred advice of people committed to the re-invention of newspapers. It is a truism bordering on drinking game material that anyone advising newspapers will at some point say “All you need to do is offer a product so relevant and valuable the consumer is willing to pay for it!”

This advice is well-meaning. It’s just not much help. The suggestion that newspapers should, in the future, create a digital product users are willing to pay for is merely a restatement of the problem, by way of admission that the current product does not pass that test.

Most of the historical hope for paywalls assumed that through some combination of reader desire and supplier persuasiveness, the current form of the newspaper could survive the digital transition without significant alteration.

Paywalls, as actually implemented, have not accomplished this. They don’t expand revenue from the existing audience; they contract the audience to that subset willing to pay. Paywalls do indeed help newspapers escape commodification, but only by ejecting the readers who think of the product as a commodity. This is, invariably, most of them.

* * *

You can see this contraction at the Times and Sunday Times in the reversal of digital to print readers. Before the paywall, the two sites had roughly six times more readers than there were print sales of the paper edition. (6M web vs. 1M print for the Sunday Times* .) Post-paywall, the web audience is less than a sixth of print sales (down to <150K vs. 1M). The paying web audience is less than a twentieth of print sales (<50K vs. 1M), and possibly much less.

One way to think of this transition is that online, the Times has stopped being a newspaper, in the sense of a generally available and omnibus account of the news of the day, broadly read in the community. Instead, it is becoming a newsletter, an outlet supported by, and speaking to, a specific and relatively coherent and compact audience. (In this case, the Times is becoming the online newsletter of the Tories, the UK’s conservative political party, read much less widely than its paper counterpart.)

Murdoch and News Corp, committed as they have been to extracting revenues from the paywall, still cannot execute in a way that does not change the nature of the organizations behind the wall. Rather than simply shifting relative subsidy from advertisers to users for an existing product, they are instead re-engineering the Times around the newsletter model, because the paywall creates newsletter economics. 

As of July, non-subscribers can no longer read Times stories forwarded by colleagues or friends, nor can they read stories linked to from Facebook or Twitter. As a result, links to Times stories now rarely circulate in those media. If you are going to produce news that can’t be shared outside a particular community, you will want to recruit and retain a community that doesn’t care whether any given piece of news spreads, which means tightly interconnected readerships become the ideal ones. However, tight interconnectedness correlates inversely with audience size, making for a stark choice, rather than offering a way of preserving the status quo.

This re-engineering suggests that paywalls don’t and can’t rescue current organizational forms. They offer instead yet another transformed alternative to them. Even if paywall economics can eventually be made to work with a dramatically reduced audience, this particular referendum on the future (read: the present) of newspapers is likely to mean the end of the belief that there is any non-disruptive way to remain a going concern.


I’ve bundled some replies to various questions in the comments from November 9th here and from November 10th here.

Also, nota bene: One of the problems with the various “Hey you guys, I just had a great idea for saving newspapers!” micropayment comments showing up in my moderation queue is that the proposers often exhibit no understanding that micropayments have a 20-year history of failure.

I will not post comments suggesting that micropayments will save the news industry unless those comments refer to at least some of the theoretical or practical literature on previous attempts to make them work for the news business. Start here: Why Small Payments Won’t Save Publishers

The Collapse of Complex Business Models

I gave a talk last year to a group of TV executives gathered for an annual conference. From the Q&A after, it was clear that for them, the question wasn’t whether the internet was going to alter their business, but about the mode and tempo of that alteration. Against that background, though, they were worried about a much more practical matter: When, they asked, would online video generate enough money to cover their current costs?

That kind of question comes up a lot. It’s a tough one to answer, not just because the answer is unlikely to make anybody happy, but because the premise is more important than the question itself. 

There are two essential bits of background here. The first is that most TV is made by for-profit companies, and there are two ways to generate a profit: raise revenues above expenses, or cut expenses below revenues. The other is that, for many media businesses, that second option is unreachable.

Here’s why.

* * *

In 1988, Joseph Tainter wrote a chilling book called The Collapse of Complex Societies. Tainter looked at several societies that gradually arrived at a level of remarkable sophistication then suddenly collapsed: the Romans, the Lowlands Maya, the inhabitants of Chaco Canyon. Every one of those groups had rich traditions, complex social structures, advanced technology, but despite their sophistication, they collapsed, impoverishing and scattering their citizens and leaving little but future archeological sites as evidence of previous greatness. Tainter asked himself whether there was some explanation common to these sudden dissolutions.

The answer he arrived at was that they hadn’t collapsed despite their cultural sophistication, they’d collapsed because of it. Subject to violent compression, Tainter’s story goes like this: a group of people, through a combination of social organization and environmental luck, finds itself with a surplus of resources. Managing this surplus makes society more complex—agriculture rewards mathematical skill, granaries require new forms of construction, and so on. 

Early on, the marginal value of this complexity is positive—each additional bit of complexity more than pays for itself in improved output—but over time, the law of diminishing returns reduces the marginal value, until it disappears completely. At this point, any additional complexity is pure cost.
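The shape of that curve is easy to illustrate numerically. What follows is a toy model of my own, not Tainter’s: benefit grows with diminishing returns while each added unit of complexity carries a fixed upkeep cost.

```python
def marginal_value(c, upkeep=0.05):
    """Value of adding the c-th unit of complexity: the extra output
    it produces (sqrt-shaped, so it shrinks as c grows) minus a
    fixed upkeep cost per unit. Purely illustrative numbers."""
    extra_output = c ** 0.5 - (c - 1) ** 0.5
    return extra_output - upkeep

# Early complexity pays for itself; late complexity is pure cost.
for c in (1, 10, 100, 200):
    print(c, round(marginal_value(c), 4))
```

In this sketch the first units of complexity are strongly positive, the hundredth is roughly break-even, and everything after that subtracts value, which is the point at which, in Tainter’s telling, a society has extracted all it can “and then some.”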

Tainter’s thesis is that when society’s elite members add one layer of bureaucracy or demand one tribute too many, they end up extracting all the value from their environment it is possible to extract and then some. 

The ‘and then some’ is what causes the trouble. Complex societies collapse because, when some stress comes, those societies have become too inflexible to respond. In retrospect, this can seem mystifying. Why didn’t these societies just re-tool in less complex ways? The answer Tainter gives is the simplest one: When societies fail to respond to reduced circumstances through orderly downsizing, it isn’t because they don’t want to, it’s because they can’t.

In such systems, there is no way to make things a little bit simpler – the whole edifice becomes a huge, interlocking system not readily amenable to change. Tainter doesn’t regard the sudden decoherence of these societies as either a tragedy or a mistake—”[U]nder a situation of declining marginal returns collapse may be the most appropriate response”, to use his pitiless phrase. Furthermore, even when moderate adjustments could be made, they tend to be resisted, because any simplification discomfits elites. 

When the value of complexity turns negative, a society plagued by an inability to react remains as complex as ever, right up to the moment where it becomes suddenly and dramatically simpler, which is to say right up to the moment of collapse. Collapse is simply the last remaining method of simplification.

* * *

In the mid-90s, I got a call from some friends at ATT, asking me to help them research the nascent web-hosting business. They thought ATT’s famous “five 9’s” reliability (services that work 99.999% of the time) would be valuable, but they couldn’t figure out how $20 a month, then the going rate, could cover the costs for good web hosting, much less leave a profit.

I started describing the web hosting I’d used, including the process of developing web sites locally, uploading them to the server, and then checking to see if anything had broken.

“But if you don’t have a staging server, you’d be changing things on the live site!” They explained this to me in the tone you’d use to explain to a small child why you don’t want to drink bleach. “Oh yeah, it was horrible”, I said. “Sometimes the servers would crash, and we’d just have to re-boot and start from scratch.” There was a long silence on the other end, the silence peculiar to conference calls when an entire group stops to think. 

The ATT guys had correctly understood that the income from $20-a-month customers wouldn’t pay for good web hosting. What they hadn’t understood, were in fact professionally incapable of understanding, was that the industry solution, circa 1996, was to offer hosting that wasn’t very good. 

This, for the ATT guys, wasn’t depressing so much as confusing. We finished up the call, and it was polite enough, but it was perfectly clear that there wasn’t going to be a consulting gig out of it, because it wasn’t a market they could get into, not because they didn’t want to, but because they couldn’t.

It would be easy to regard this as short-sighted on their part, but that ignores the realities of culture. For a century, ATT’s culture had prized—insisted on—quality of service; they ran their own power grid to keep the dial-tone humming during blackouts. ATT, like most organizations, could not be good at the thing it was good at and good at the opposite thing at the same time. The web hosting business, because it followed the “Simplicity first, quality later” model, didn’t just present a new market, it required new cultural imperatives.

* * *

Dr. Amy Smith is a professor in the Department of Mechanical Engineering at MIT, where she runs the Development Lab, or D-Lab, a lab organized around simple and cheap engineering solutions for the developing world. 

Among the rules of thumb she offers for building in that environment is this: “If you want something to be 10 times cheaper, take out 90% of the materials.” Making media is like that now except, for “materials”, substitute “labor.”

* * *

About 15 years ago, the supply part of media’s supply-and-demand curve went parabolic, with a predictably inverse effect on price. Since then, a battalion of media elites have lined up to declare that exactly the opposite thing will start happening any day now. 

To pick a couple of examples more or less at random, last year Barry Diller of IAC said, of content available on the web, “It is not free, and is not going to be,” Steve Brill of Journalism Online said that users “just need to get back into the habit of doing so [paying for content] online”, and Rupert Murdoch of News Corp said “Web users will have to pay for what they watch and use.”

Diller, Brill, and Murdoch seem to be stating a simple fact—we will have to pay them—but this fact is not in fact a fact. Instead, it is a choice, one its proponents often decline to spell out in full, because, spelled out in full, it would read something like this:

“Web users will have to pay for what they watch and use, or else we will have to stop making content in the costly and complex way we have grown accustomed to making it. And we don’t know how to do that.”

* * *

One of the interesting questions about Tainter’s thesis is whether markets and democracy, the core mechanisms of the modern world, will let us avoid complexity-driven collapse, by keeping any one group of elites from seizing unbroken control. This is, as Tainter notes in his book, an open question. There is, however, one element of complex society into which neither markets nor democracy reach—bureaucracy.

Bureaucracies temporarily suspend the Second Law of Thermodynamics. In a bureaucracy, it’s easier to make a process more complex than to make it simpler, and easier to create a new burden than kill an old one. 

In spring of 2007, the web video comedy In the Motherhood made the move to TV. In the Motherhood started online as a series of short videos, with viewers contributing funny stories from their own lives and voting on their favorites. This tactic generated good ideas at low cost as well as endearing the show to its viewers; the show’s tag line was “By Moms, For Moms, About Moms.”

The move to TV was an affirmation of this technique; when ABC launched the public forum for the new TV version, they told users their input “might just become inspiration for a story by the writers.”

Or it might not. Once the show moved to television, the Writers Guild of America got involved. They were OK with For and About Moms, but By Moms violated Guild rules. The producers tried to negotiate, to no avail, so the idea of audience engagement was canned (as was In the Motherhood itself some months later, after failing to engage viewers as the web version had).

The critical fact about this negotiation wasn’t about the mothers, or their stories, or how those stories might be used. The critical fact was that the negotiation took place in the grid of the television industry, between entities incorporated around a 20th century business logic, and entirely within invented constraints. At no point did the negotiation about audience involvement hinge on the question “Would this be an interesting thing to try?”

* * *

Here is the answer to that question from the TV executives. 

In the future, at least some methods of producing video for the web will become as complex, with as many details to attend to, as television has today, and people will doubtless make pots of money on those forms of production. It’s tempting, at least for the people benefitting from the old complexity, to imagine that if things used to be complex, and they’re going to be complex, then everything can just stay complex in the meantime. That’s not how it works, however. 

The most watched minute of video made in the last five years shows baby Charlie biting his brother’s finger. (Twice!) That minute has been watched by more people than the viewership of American Idol, Dancing With The Stars, and the Super Bowl combined. (174 million views and counting.)

Some video still has to be complex to be valuable, but the logic of the old media ecosystem, where video had to be complex simply to be video, is broken. Expensive bits of video made in complex ways now compete with cheap bits made in simple ways. “Charlie Bit My Finger” was made by amateurs, in one take, with a lousy camera. No professionals were involved in selecting or editing or distributing it. Not one dime changed hands anywhere between creator, host, and viewers. A world where that is the kind of thing that just happens from time to time is a world where complexity is neither an absolute requirement nor an automatic advantage.

When ecosystems change and inflexible institutions collapse, their members disperse, abandoning old beliefs, trying new things, making their living in different ways than they used to. It’s easy to see the ways in which collapse to simplicity wrecks the glories of old. But there is one compensating advantage for the people who escape the old system: when the ecosystem stops rewarding complexity, it is the people who figure out how to work simply in the present, rather than the people who mastered the complexities of the past, who get to say what happens in the future.

Newspapers and Thinking the Unthinkable

Back in 1993, the Knight-Ridder newspaper chain began investigating piracy of Dave Barry’s popular column, which was published by the Miami Herald and syndicated widely. In the course of tracking down the sources of unlicensed distribution, they found many things, including the copying of his column to alt.fan.dave_barry on usenet; a 2000-person strong mailing list also reading pirated versions; and a teenager in the Midwest who was doing some of the copying himself, because he loved Barry’s work so much he wanted everybody to be able to read it.

One of the people I was hanging around with online back then was Gordy Thompson, who managed internet services at the New York Times. I remember Thompson saying something to the effect of “When a 14 year old kid can blow up your business in his spare time, not because he hates you but because he loves you, then you got a problem.” I think about that conversation a lot these days.

The problem newspapers face isn’t that they didn’t see the internet coming. They not only saw it miles off, they figured out early on that they needed a plan to deal with it, and during the early 90s they came up with not just one plan but several. One was to partner with companies like America Online, a fast-growing subscription service that was less chaotic than the open internet. Another plan was to educate the public about the behaviors required of them by copyright law. New payment models such as micropayments were proposed. Alternatively, they could pursue the profit margins enjoyed by radio and TV, if they became purely ad-supported. Still another plan was to convince tech firms to make their hardware and software less capable of sharing, or to partner with the businesses running data networks to achieve the same goal. Then there was the nuclear option: sue copyright infringers directly, making an example of them.

As these ideas were articulated, there was intense debate about the merits of various scenarios. Would DRM or walled gardens work better? Shouldn’t we try a carrot-and-stick approach, with education and prosecution? And so on. In all this conversation, there was one scenario that was widely regarded as unthinkable, a scenario that didn’t get much discussion in the nation’s newsrooms, for the obvious reason.

The unthinkable scenario unfolded something like this: The ability to share content wouldn’t shrink, it would grow. Walled gardens would prove unpopular. Digital advertising would reduce inefficiencies, and therefore profits. Dislike of micropayments would prevent widespread use. People would resist being educated to act against their own desires. Old habits of advertisers and readers would not transfer online. Even ferocious litigation would be inadequate to constrain massive, sustained law-breaking. (Prohibition redux.) Hardware and software vendors would not regard copyright holders as allies, nor would they regard customers as enemies. DRM’s requirement that the attacker be allowed to decode the content would be an insuperable flaw. And, per Thompson, suing people who love something so much they want to share it would piss them off.

Revolutions create a curious inversion of perception. In ordinary times, people who do no more than describe the world around them are seen as pragmatists, while those who imagine fabulous alternative futures are viewed as radicals. The last couple of decades haven’t been ordinary, however. Inside the papers, the pragmatists were the ones simply looking out the window and noticing that the real world was increasingly resembling the unthinkable scenario. These people were treated as if they were barking mad. Meanwhile the people spinning visions of popular walled gardens and enthusiastic micropayment adoption, visions unsupported by reality, were regarded not as charlatans but saviors.

When reality is labeled unthinkable, it creates a kind of sickness in an industry. Leadership becomes faith-based, while employees who have the temerity to suggest that what seems to be happening is in fact happening are herded into Innovation Departments, where they can be ignored en masse. This shunting aside of the realists in favor of the fabulists has different effects on different industries at different times. One of the effects on the newspapers is that many of their most passionate defenders are unable, even now, to plan for a world in which the industry they knew is visibly going away.

* * *

The curious thing about the various plans hatched in the ’90s is that they were, at base, all the same plan: “Here’s how we’re going to preserve the old forms of organization in a world of cheap perfect copies!” The details differed, but the core assumption behind all imagined outcomes (save the unthinkable one) was that the organizational form of the newspaper, as a general-purpose vehicle for publishing a variety of news and opinion, was basically sound, and only needed a digital facelift. As a result, the conversation has degenerated into the enthusiastic grasping at straws, pursued by skeptical responses.

“The Wall Street Journal has a paywall, so we can too!” (Financial information is one of the few kinds of information whose recipients don’t want to share.) “Micropayments work for iTunes, so they will work for us!” (Micropayments work only where the provider can avoid competitive business models.) “The New York Times should charge for content!” (They’ve tried, with QPass and later TimesSelect.) “Cook’s Illustrated and Consumer Reports are doing fine on subscriptions!” (Those publications forgo ad revenues; users are paying not just for content but for unimpeachability.) “We’ll form a cartel!” (…and hand a competitive advantage to every ad-supported media firm in the world.)

Round and round this goes, with the people committed to saving newspapers demanding to know “If the old model is broken, what will work in its place?” To which the answer is: Nothing. Nothing will work. There is no general model for newspapers to replace the one the internet just broke. 

With the old economics destroyed, organizational forms perfected for industrial production have to be replaced with structures optimized for digital data. It makes increasingly less sense even to talk about a publishing industry, because the core problem publishing solves — the incredible difficulty, complexity, and expense of making something available to the public — has stopped being a problem.

* * *

Elizabeth Eisenstein’s magisterial treatment of Gutenberg’s invention, The Printing Press as an Agent of Change, opens with a recounting of her research into the early history of the printing press. She was able to find many descriptions of life in the early 1400s, the era before movable type. Literacy was limited, the Catholic Church was the pan-European political force, Mass was in Latin, and the average book was the Bible. She was also able to find endless descriptions of life in the late 1500s, after Gutenberg’s invention had started to spread. Literacy was on the rise, as were books written in contemporary languages, Copernicus had published his epochal work on astronomy, and Martin Luther’s use of the press to reform the Church was upending both religious and political stability.

What Eisenstein focused on, though, was how many historians ignored the transition from one era to the other. To describe the world before or after the spread of print was child’s play; those dates were safely distanced from upheaval. But what was happening in 1500? The hard question Eisenstein’s book asks is “How did we get from the world before the printing press to the world after it? What was the revolution itself like?”

Chaotic, as it turns out. The Bible was translated into local languages; was this an educational boon or the work of the devil? Erotic novels appeared, prompting the same set of questions. Copies of Aristotle and Galen circulated widely, but direct encounter with the relevant texts revealed that the two sources clashed, tarnishing faith in the Ancients. As novelty spread, old institutions seemed exhausted while new ones seemed untrustworthy; as a result, people almost literally didn’t know what to think. If you can’t trust Aristotle, who can you trust?

During the wrenching transition to print, experiments were only revealed in retrospect to be turning points. Aldus Manutius, the Venetian printer and publisher, invented the smaller octavo volume along with italic type. What seemed like a minor change — take a book and shrink it — was in retrospect a key innovation in the democratization of the printed word. As books became cheaper, more portable, and therefore more desirable, they expanded the market for all publishers, heightening the value of literacy still further.

That is what real revolutions are like. The old stuff gets broken faster than the new stuff is put in its place. The importance of any given experiment isn’t apparent at the moment it appears; big changes stall, small changes spread. Even the revolutionaries can’t predict what will happen. Agreements on all sides that core institutions must be protected are rendered meaningless by the very people doing the agreeing. (Luther and the Church both insisted, for years, that whatever else happened, no one was talking about a schism.) Ancient social bargains, once disrupted, can neither be mended nor quickly replaced, since any such bargain takes decades to solidify.

And so it is today. When someone demands to know how we are going to replace newspapers, they are really demanding to be told that we are not living through a revolution. They are demanding to be told that old systems won’t break before new systems are in place. They are demanding to be told that ancient social bargains aren’t in peril, that core institutions will be spared, that new methods of spreading information will improve previous practice rather than upending it. They are demanding to be lied to.

There are fewer and fewer people who can convincingly tell such a lie.

* * *

If you want to know why newspapers are in such trouble, the most salient fact is this: Printing presses are terrifically expensive to set up and to run. This bit of economics, normal since Gutenberg, limits competition while creating positive returns to scale for the press owner, a happy pair of economic effects that feed on each other. In a notional town with two perfectly balanced newspapers, one paper would eventually generate some small advantage — a breaking story, a key interview — at which point both advertisers and readers would come to prefer it, however slightly. That paper would in turn find it easier to capture the next dollar of advertising, at lower expense, than the competition. This would increase its dominance, which would further deepen those preferences, repeat chorus. The end result is either geographic or demographic segmentation among papers, or one paper holding a monopoly on the local mainstream audience. 

For a long time, longer than anyone in the newspaper business has been alive in fact, print journalism has been intertwined with these economics. The expense of printing created an environment where Wal-Mart was willing to subsidize the Baghdad bureau. This wasn’t because of any deep link between advertising and reporting, nor was it about any real desire on the part of Wal-Mart to have their marketing budget go to international correspondents. It was just an accident. Advertisers had little choice other than to have their money used that way, since they didn’t really have any other vehicle for display ads. 

The old difficulties and costs of printing forced everyone doing it into a similar set of organizational models; it was this similarity that made us regard Daily Racing Form and L’Osservatore Romano as being in the same business. That the relationship between advertisers, publishers, and journalists has been ratified by a century of cultural practice doesn’t make it any less accidental.

The competition-deflecting effects of printing cost got destroyed by the internet, where everyone pays for the infrastructure, and then everyone gets to use it. And when Wal-Mart, and the local Maytag dealer, and the law firm hiring a secretary, and that kid down the block selling his bike, were all able to use that infrastructure to get out of their old relationship with the publisher, they did. They’d never really signed up to fund the Baghdad bureau anyway.

* * *

Print media does much of society’s heavy journalistic lifting, from flooding the zone — covering every angle of a huge story — to the daily grind of attending the City Council meeting, just in case. This coverage creates benefits even for people who aren’t newspaper readers, because the work of print journalists is used by everyone from politicians to district attorneys to talk radio hosts to bloggers. The newspaper people often note that newspapers benefit society as a whole. This is true, but irrelevant to the problem at hand; “You’re gonna miss us when we’re gone!” has never been much of a business model. So who covers all that news if some significant fraction of the currently employed newspaper people lose their jobs?

I don’t know. Nobody knows. We’re collectively living through 1500, when it’s easier to see what’s broken than what will replace it. The internet turns 40 this fall. Access by the general public is less than half that age. Web use, as a normal part of life for a majority of the developed world, is less than half that age. We just got here. Even the revolutionaries can’t predict what will happen.

Imagine, in 1996, asking some net-savvy soul to expound on the potential of craigslist, then a year old and not yet incorporated. The answer you’d almost certainly have gotten would be extrapolation: “Mailing lists can be powerful tools”, “Social effects are intertwining with digital networks”, blah blah blah. What no one would have told you, could have told you, was what actually happened: craigslist became a critical piece of infrastructure. Not the idea of craigslist, or the business model, or even the software driving it. Craigslist itself spread to cover hundreds of cities and has become a part of public consciousness about what is now possible. Experiments are only revealed in retrospect to be turning points.

In craigslist’s gradual shift from ‘interesting if minor’ to ‘essential and transformative’, there is one possible answer to the question “If the old model is broken, what will work in its place?” The answer is: Nothing will work, but everything might. Now is the time for experiments, lots and lots of experiments, each of which will seem as minor at launch as craigslist did, as Wikipedia did, as octavo volumes did.

Journalism has always been subsidized. Sometimes it’s been Wal-Mart and the kid with the bike. Sometimes it’s been Richard Mellon Scaife. Increasingly, it’s you and me, donating our time. The list of models that are obviously working today, like Consumer Reports and NPR, like ProPublica and WikiLeaks, can’t be expanded to cover any general case, but then nothing is going to cover the general case. 

Society doesn’t need newspapers. What we need is journalism. For a century, the imperatives to strengthen journalism and to strengthen newspapers have been so tightly wound as to be indistinguishable. That’s been a fine accident to have, but when that accident stops, as it is stopping before our eyes, we’re going to need lots of other ways to strengthen journalism instead. 

When we shift our attention from ‘save newspapers’ to ‘save society’, the imperative changes from ‘preserve the current institutions’ to ‘do whatever works.’ And what works today isn’t the same as what used to work.

We don’t know who the Aldus Manutius of the current age is. It could be Craig Newmark, or Caterina Fake. It could be Martin Nisenholtz, or Emily Bell. It could be some 19 year old kid few of us have heard of, working on something we won’t recognize as vital until a decade hence. Any experiment, though, designed to provide new models for journalism is going to be an improvement over hiding from the real, especially in a year when, for many papers, the unthinkable future is already in the past.

For the next few decades, journalism will be made up of overlapping special cases. Many of these models will rely on amateurs as researchers and writers. Many of these models will rely on sponsorship or grants or endowments instead of revenues. Many of these models will rely on excitable 14 year olds distributing the results. Many of these models will fail. No one experiment is going to replace what we are now losing with the demise of news on paper, but over time, the collection of new experiments that do work might give us the journalism we need.

Why Small Payments Won’t Save Publishers

With continued turmoil in the advertising market, people who work at newspapers and magazines are wondering if micropayments will save them, with recent speculation in this direction by David Sarno of the LA Times, David Carr of the NY Times, and Walter Isaacson in Time magazine. Unfortunately for the optimists, micropayments — small payments made by readers for individual articles or other pieces of a la carte content — won’t work for online journalism.

To understand why not, there are two key pieces of background. 

First, the label micropayments no longer makes any sense. Some of the early proposals for small payment systems did indeed imagine digital bookkeeping and billing for amounts as small as a thousandth of a cent; this was what made such payments “micro”. Current proposals, however, imagine pricing digital content in the range of a dime to a dollar. These aren’t micro-anything, they are just ordinary but small payments, no magic anywhere.

The essential thing to understand about small payments is that users don’t like being nickel-and-dimed. We have the phrase ‘nickel-and-dimed’ because this dislike is both general and strong. The result is that small payment systems don’t survive contact with online markets, because we express our hatred of small payments by switching to alternatives, whether supported by subscription or subsidy.

The other key piece of background isn’t about small payments themselves, but about the conversation. Such systems solve no problem the user has, and offer no service we want. As a result, conversations about small payments take place entirely among content providers, never involving us, the people who will ostensibly be funding these transactions. Nor is the subject a normal part of the ongoing conversation among publishers. Instead, the word ‘micropayment’ is a trope for desperation, entering the vernacular of a given media market only after threats to older models become visibly dire (as with the failed attempts to adopt small payments for webzines in the late ’90s, or for solo content like web comics and blogs earlier in this decade.)

The invocation of micropayments involves a displaced fantasy that the publishers of digital content can re-assert control over us unruly users in a media environment with low barriers to entry for competition. News that this has been tried many times in the past and has not worked is unwelcome precisely because if small payment systems won’t save existing publishers in their current form, there might not be a way to save existing publishers in their current form (an outcome generally regarded as unthinkable by existing publishers.)

Faith in salvation from small payments all but requires the adherent to ignore the past, whether existing critiques (e.g. Szabo 1996; Shirky 2000, 2003; Odlyzko 2003) or previous failures. Isaacson’s recent Time magazine cover story on micropayments, How to Save Your Newspaper, a classic of the form, recapitulates the argument put forward by Scott McCloud in his 2003 Misunderstanding Micropayments. That McCloud advanced the same argument that Isaacson does, and that the small payment system McCloud was proselytizing for failed exactly as predicted, seems not to have troubled Isaacson much, even though he offers no argument different from McCloud’s.

Another strategy among the faithful is to extrapolate from systems that do rely on small payments: iTunes, ringtone sales, or sales of digital goods in environments such as Cyworld. (This is the idea explored by David Carr in Let’s Invent an iTunes for News.) The lesson of iTunes et al (indeed, the only real lesson of small payment systems generally) is that if you want something that doesn’t survive contact with the market, you can’t let it have contact with the market. 

Cyworld, a wildly popular online forum in Korea, is able to collect small payments for digital items, denominated in a currency called Dotori (“acorn”), because once a user is in Cyworld, SK Telecom, the corporate parent, controls all the distribution options. A Cyworld user who wants a certain kind of digital decoration for their online presence has to buy it through Cyworld if they want it; the monopoly within the environment is enough to prevent competition for pricing of digital goods. Similarly, mobile phone carriers go to great lengths to prevent the ringtone distribution network from becoming general-purpose, lest freely circulating mp3s drive the price to zero. In these cases, control over the users’ environment is essential to preventing competition from destroying the payment model.

Apple’s ITMS (iTunes Music Store) is perhaps the most interesting example. People are not paying for music on ITMS because we have decided that fee-per-track is the model we prefer, but because there is no market in which commercial alternatives can be explored. Everything from Napster to online radio has been crippled or killed by fiat; small payments survive in the absence of a market for other legal options. What’s interesting about ITMS, though, is that it contains other content that illustrates the dilemma of the journalists most sharply: podcasts. Apple has the machinery in place to charge for podcasts. Why don’t they?

Because they can’t afford to. Were they to start charging, their users would start looking around for other sources, as podcasts are offered free elsewhere. Losing user attention would be anathema to a company that wants as tight a relationship between ITMS and the iPod as it can get; the potential revenues are not worth the erosion of audience. 

Without the RIAA et al, Apple is unable to corner the market on podcasts, and thus unable to charge. Unless Apple could get the world’s unruly podcasters to behave as a cartel, and convince all new entrants to forgo filling the resulting vacuum of attention, podcasts will continue to circulate without individual payments. With every single tool in place to have a functioning small payment system, even Apple can’t defy the users if there is any way for us to express our preferences.

Which brings us to us. 

Because small payment systems are always discussed in conversations by and for publishers, readers are assigned no independent role. In every micropayments fantasy, there is a sentence or section asserting that what the publishers want will be just fine with us, and, critically, that we will be possessed of no desires of our own that would interfere with that fantasy.

Meanwhile, back in the real world, the media business is being turned upside down by our new freedoms and our new roles. We’re not just readers anymore, or listeners or viewers. We’re not customers and we’re certainly not consumers. We’re users. We don’t consume content, we use it, and mostly what we use it for is to support our conversations with one another, because we’re media outlets now too. When I am talking about some event that just happened, whether it’s an earthquake or a basketball game, whether the conversation is in email or Facebook or Twitter, I want to link to what I’m talking about, and I want my friends to be able to read it easily, and to share it with their friends. 

This is superdistribution — content moving from friend to friend through the social network, far from the original source of the story. Superdistribution, despite its unwieldy name, matters to users. It matters a lot. It matters so much, in fact, that we will routinely prefer a shareable amateur source to a professional source that requires us to keep the content a secret on pain of lawsuit. (Wikipedia’s historical advantage over Britannica in one sentence.)

Nickel-and-diming us for access to content made less useful by those very restrictions simply isn’t appealing. Newspapers can’t entice us into small payment systems, because we care too much about our conversation with one another, and they can’t force us into such systems, because Off the Bus and ProPublica and Gawker and Global Voices and Ohmynews and Spot.us and Smoking Gun all understand that not only is a news cartel unworkable, but that if one existed, their competitive advantage would be in attacking it rather than defending it.

The threat from micropayments isn’t that they will come to pass. The threat is that talking about them will waste our time, and now is not the time to be wasting time. The internet really is a revolution for the media ecology, and the changes it is forcing on existing models are large. What matters at newspapers and magazines isn’t publishing, it’s reporting. We should be talking about new models for employing reporters rather than resuscitating old models for employing publishers; the more time we waste fantasizing about magic solutions for the latter problem, the less time we have to figure out real solutions to the former one.

Web Traffic and TV Ratings

First published on FEED, 4/20/1999.

Last week, according to Nielsen, Yahoo was seen by 100,000 more households than the “X-Files.” Yahoo is more popular than “Ally McBeal,” “NYPD Blue,” and “Everybody Loves Raymond” by the same per-week measure. Sometime in the next year, Yahoo will almost certainly become more popular than “Friends,” “Frasier,” and “E.R.,” which is to say more popular than the most popular TV shows going. Big web sites are now as widely seen as any TV series. For years, the arguments about “web as mass media” have been conducted in the future tense, but the web’s ascendancy has to a certain extent already happened, as a handful of sites eclipse popular television shows as the most common media experience in America. For the foreseeable future, only the Super Bowl and the occasional war will propel individual TV programs into prominence over these mega-sites.

Yahoo is not unique in this; the household reach of several of the current portal sites would put them in the Nielsen Top 20 as well, and the web population, which currently reaches one household in three, is still growing by at least 25% a year. (TV, needless to say, is saturated.) These figures mean that the average working American will be online by the end of this year, and the average American, period, will be online by the end of next year. People dealing with the web as a media channel have often treated it as a small market, inherently targeted to rich white men, but the vast sprawling network we have no longer matches that description.

Surveying the reach of sites like Yahoo, it’s tempting to conclude that the web has finally become like TV, but nothing could be further from the truth. The web’s remarkable population explosion is only part of the story — the rest of it is explained by the growing fragmentation of the television landscape and the growing centralization of the web. From a certain angle, those Nielsen numbers make it look as if the Yahoos of the web are turning it into something as centralized as TV. But the real story is more complicated — and more interesting — than that.

Mere size doesn’t make the web like TV; even TV isn’t like TV anymore, and it never will be again. Many of our basic assumptions about TV — its bland content, its culturally homogenizing effects, its appeal to the lowest common denominator — have nothing to do with TV as a medium and everything to do with the number of available channels. While the web was growing, TV was fragmenting, undergoing a 10-fold increase in channels over the last 20 years. “Gunsmoke,” one of the most popular television series ever, was watched by almost half of the country in its day, while “Seinfeld,” our recently departed popularity heavyweight, was only seen by about a quarter of the country. You could conclude that “Gunsmoke” was twice as good as “Seinfeld” and indulge yourself in some old-fashioned handwringing about how they don’t make ’em like they used to, but in reality “Gunsmoke” vs. “Seinfeld” is apples and oranges. “Gunsmoke”’s popularity had less to do with its content than with the constriction of the old three-network world.

In the days of three networks, an average show would get a third of the available audience, just for showing up. With only three channels, running a prime time show that would appeal to 10% of the audience would be money-losing, and running a show that only appealed to 1% of that audience would be suicide. From the late ’50s to the late ’80s, the heyday of network TV, anything that got the attention of less than 10 million households was niche, and anything that appealed to a paltry million was unproducible. It was this pressure, and not anything inherent in the medium itself, that led to “lowest common denominator” TV. Locked in a permanent three-way race, the networks would always choose something that 50 million people would watch versus something 5 million people would rave about. In a world with only ABC, NBC, and CBS, size meant more than enthusiasm, day in and day out, for decades.

In these days of cable, satellites, and the beginnings of digital TV, it’s all niche programming now. A show that gets 10 million households — 10 ratings points — is a Top 10 hit, and a show that gets a million has gone from unproducible to solid performer. The miracle of “Seinfeld” was that any show could reach a quarter of the country in the face of this proliferation of choices. “E.R.,” the post-“Seinfeld” heavyweight, only reaches 20% of the country, and no show that has launched this season has broken 15%. The handwringing about the end of “Seinfeld” was occasioned in part by the recognition that a show that reached even a quarter of the country at once was the last of a dying breed. Sic transit nihilum mundi.

Even as TV has become more decentralized because of the increase in the number of available channels, the core of the web has become more centralized under the same pressures. The web removes technological bottlenecks, but creates attention bottlenecks — unlike TV, whose universe is so small that it can be chronicled in a single magazine, the web is a vast and unmanageable riot of possibility and disappointment. In this environment, “surfing” as a strategy is dead. With a remote, a reasonably dexterous thumb, and five seconds a channel, a TV viewer can survey the available TV channels in less than five minutes. On the web, assuming that sites loaded instantaneously, five seconds a site would require four months of 24/7 surfing just to see what was out there. Enter the search engines and “portals,” the guides to web content. It’s no accident that the web sites that would rate in Nielsen’s Top 20 are all portals of one sort or another, and this leads to a curious situation: the web, seemingly so decentralized, may actually be more hit driven than TV, if a web “hit” is defined by user attention.
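That surfing arithmetic is easy to reproduce. A back-of-the-envelope sketch in Python — the two-million-site count is my assumption, chosen because it matches the rank used in Nielsen’s example; the paragraph itself does not state a total:

```python
# Back-of-the-envelope check of the surfing arithmetic: five seconds
# per site, around the clock. The two-million-site count is an
# assumption, not a figure stated in the paragraph above.
sites = 2_000_000
seconds_per_site = 5
total_seconds = sites * seconds_per_site
days = total_seconds / (60 * 60 * 24)
months = days / 30  # rough 30-day months
print(f"{days:.0f} days of 24/7 surfing, about {months:.1f} months")
# prints: 116 days of 24/7 surfing, about 3.9 months
```

Roughly four months, as the text says — and the site count only grew from there.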

Jakob Nielsen, the web “usability” guru behind useit.com and a principal, with Don Norman, of the Nielsen Norman Group, proposes a formula for understanding this enormous range of web-site popularity. Nielsen suggests that traffic to web sites lies on a “Zipf” distribution, whose effects can be understood this way: the 100th most popular web site would get 1/100th of the traffic of the most popular site (Yahoo), and the 2,000,000th most popular web site would see 1/2,000,000th of Yahoo’s traffic, leading to a curve that looks like this:

[Figure: zipflinear.gif — traffic falling off steeply with site rank]

Nielsen describes this pattern as having:

a few elements that score very high (the left tail in the diagram)
a medium number of elements with middle-of-the-road scores (the middle part of the diagram)
a huge number of elements that score very low (the right tail in the diagram)

He likens this distribution of web sites to the distribution of words in English sentences: a few ubiquitous words (a, and, the), many frequent words (toy, table, run) and a vast collection of rare words (marzipan, cerulean, noninterdenominationally). If Nielsen is right, then the web is more decentralized than TV when considered as a whole, but more centralized than TV in its most popular elements. While the web’s five million sites dwarf the TV universe, a Zipf distribution means that a mere 1% of existing web sites account for 70% of total web traffic.
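The 1%-to-70% figure follows directly from the Zipf model. A quick sketch, assuming pure Zipf weights (traffic to the k-th ranked site proportional to 1/k) over the five million sites mentioned above — an illustrative model, not real traffic data:

```python
# Sketch: under a pure Zipf model, the k-th most popular of N sites
# gets traffic proportional to 1/k. With N = 5,000,000 (the figure in
# the text), what share of total traffic goes to the top 1% of sites?
N = 5_000_000
total = sum(1.0 / k for k in range(1, N + 1))
top_share = sum(1.0 / k for k in range(1, N // 100 + 1)) / total
print(f"Top 1% of sites carry {top_share:.0%} of Zipf-modeled traffic")
# prints: Top 1% of sites carry 71% of Zipf-modeled traffic
```

The model lands at about 71%, in line with the 70% figure in the text.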

TV is a fixed sum game — when one TV show’s ratings go up, another’s necessarily go down. On the web, on the other hand, the more small sites there are (the right-hand tail of Nielsen’s diagram), the more need there is for the Yahoos of the world to make sense of it. It is this effect — the left tail grows up as the right tail grows out — which accounts for Yahoo’s powerful reach relative to TV shows. The web still only reaches a third of American households, but it’s better to have half of that third than 15% of the total. Speculating on the financial effects of this niche-oriented universe, Nielsen goes on to say: “[T]he greatest value of the web is narrowcasting of very specific services that people can’t easily get otherwise… My original article on the Zipf distribution predicted that the value of a page view for a huge site would be about a cent, but the actual value these days seem to be close to half a cent, if you look at Yahoo’s latest quarterly report. So I may have over-estimated the value of a page view for the big sites. For the small sites, I estimated a value of a dollar per page view, and that may even turn out to be an under-estimate as we get more business-to-business use of the web going.”

This is where web-TV convergence is really happening — not converging content but converging landscape. Both the web and TV are being divided into three tiers; a handful of huge properties (Yahoo; the Super Bowl), a small group of large properties (AltaVista; “Dharma & Greg”), and an enormous group of small properties (EarAllergy.com; “Dr. Quinn”). The TV curve will always be flatter than the web’s, of course — the difference between a hit TV show and an average one could be a factor of 100 (10% of the audience to .1%), while the difference between a web mega-site and someone’s collection of wedding pictures easily exceeds a million-fold — but the trends are similar. As advertisers, content creators and users get used to this changed landscape, the advantage may move from simply being the biggest to being the best loved — a world where it is better to be loved by 50 thousand than to be liked by five million.

In the end, these three tiers will be what drive media strategy. The bottom tier — the millions of sites with very little traffic — will be composed almost entirely of labors of love, or expensive subscription-driven sites with very selective offerings. The middle tier — say, the 1,000th to 100,000th most popular sites — will use its closer connection with its users’ interests while banding together into networks, web rings and so on in order to aggregate its reach. The top tier, the Yahoos and GeoCities of the world, may be victimized by its own success, managing increasing problems of supply and demand. If the reach of these sites is too vast, it may become hard to support their current advertising rates, in the face of a potential oversupply. The two things these mega-sites have on their side are laziness — it will always be easier for advertisers to write one check — and the possibility that they can become so large that they can segment themselves into internal niches. Yahoo’s strategy may match that of Procter & Gamble, which makes several brands of detergent so that no matter which public brand you choose, it’s a P&G product. Yahoo could become both the 800-pound gorilla and the scrappy underdog (think GM’s Saturn car division, or AT&T’s “Lucky Dog” phone service) by running both the centralized portal and as many of the niche sites as they can.

The people preaching traditional ideas of convergence between the web and TV made a fundamental miscalculation — betting on a stable TV landscape, and on the premise that when the web got that big it would be like TV too — when the reality is much more fluid. The web is not a mass medium, and it never will be, but neither is TV anymore. Both media reach large numbers of people, but now they reach them in self-selecting groups, not as part of a largely undifferentiated lump of “America” the way “Gunsmoke” or even “Seinfeld” did. In the end, the 40-year reign of the Big Three networks may have been an unusual island of stability in an ordinarily chaotic media universe. We will always have massive media, but the days of mass media are over, killed by the explosion of possibility and torn into a thousand niches.