Nomic World: By the players, for the players

First published May 27, 2004 on the “Networks, Economics, and Culture” mailing list.

[This is an edited version of the talk I gave last fall at the State of Play conference.]

I’m sort of odd-man-out in a Games and Law conference, in that my primary area of inquiry isn’t games but social software. Not only am I not a lawyer, I don’t even spend most of my time thinking about game problems. I spend my time thinking about software that supports group interaction across a fairly wide range of social patterns. 

So, instead of working from case law out, which has been a theme here (and here’s where I insert the “I am not a lawyer” disclaimer) I’m going to propose a thought experiment looking from the outside in. And I want to pick up on something that Julian [Dibbell] said earlier about game worlds: ‘users are the state.’ The thought experiment I want to propose is to agree with that sentiment, and to ask “How far can we go in that direction?”

Instead of looking for the places where game users are currently suing or fighting one another, forcing the owners of various virtual worlds to deal with these things one crisis at a time, I want to ask the question “What would happen if we wanted to build a world where we maximized the amount of user control? What would that look like?”

I’m going to make that argument in three pieces. First, I’m going to do a little background on group structure and the tension between the individual and the group. Then I want to contrast briefly governance in real and virtual worlds. Finally I want to propose a thought experiment on placing control of online spaces in the hands of the users.

Background [This material is also covered in A Group Is Its Own Worst Enemy — ed.]

The background first: The core fact about human groups is that they are first-class entities. They exhibit behaviors that can’t be predicted by looking at individual psychologies. When groups of people get together they do surprising things, things you can’t predict from watching the behavior of individuals. I want to illustrate this with a story, and I want to illustrate it with a story from your life, because even though I don’t know you, I know what I’m about to describe has happened to you.

You’re at a party and you get bored — it’s not doing it for you anymore. The people you wanted to talk to have already left, you’ve been there a long time, you’d rather be home playing Ultima, whatever. You’re ready to go. And then a really remarkable thing happens – you don’t actually leave. You decide you don’t like this party anymore, but you don’t walk out. That second thing, that thing keeping you there is a kind of social stickiness. And so there’s this tension between your intellectual self and your membership, however tenuous, in the group. 

Then, twenty minutes later, another really remarkable thing happens. Somebody else gets their coat, ‘Oh, look at the time.’ What happens? Suddenly everybody is leaving all at once. So you have this group of people, each of whom is perfectly capable of making individual decisions and yet they’re unconsciously synchronizing in ways that you couldn’t have predicted from watching each of them. 

We’re very used to talking about social structure in online game spaces in terms of guilds or other formal organizations. But in fact, human group structure kicks in incredibly early, at very low levels of common association. Anything more focused than a group of people standing together in an elevator is likely to exhibit some of these group effects. 

So what’s it like to be there at that party, once you’ve decided to leave but are not leaving? It’s horrible. You really want to go and you’re stuck. And that tension is between your emotional commitment to the group fabric and your intellectual decision that this is not for you. The tension between the individual and the group is inevitable, it arises over and over again — we’ve seen that pattern for as long as we’ve had any history of human groups we can look at in any detail. 

Unfortunately the literature is pretty clear this isn’t a problem you outgrow. The tension between the individual and the group is a permanent fact of human life. And when groups get to the point where this tension becomes a crisis, they have to say “Some individual freedoms have to be curtailed in order for group cohesion to be protected.” 

This is an extremely painful moment, especially in communities that privilege individual freedoms, and the first crisis, of course, is the worst one. In the first crisis, not only do you have to change the rules, you don’t even have those rules spelled out in the first place — that’s the constitutional crisis. That’s the crisis where you say, this group of people is going to be self-governing.

Group structure, even when it’s not explicitly political, is in part a response to this tension. It’s a response to the idea that the cohesion of the group sometimes requires limits on individual rights. (As an aside, this is one of the reasons that libertarianism in its extreme form doesn’t work, because it assumes that groups are simply aggregates of individuals, and that those individuals will create shared value without any sort of coercion. In fact, the logic of collective action, to use Mancur Olson’s phrase, requires some limits on individual freedom. Olson’s book on the subject, by the way, is brilliant if a little dry.)

Fork World

If you want to see why the tension between the individual and the group is so significant, imagine a world, call it Fork World, where the citizens were given the right to vote on how the world was run. In Fork World, however, the guiding principle would be “no coercion.” Players would vote on rule changes, but instead of announcing winners and losers at the end of a vote, the world would simply be split in two with every vote.

Imagine there was a vote in Fork World on whether players can kill one another, say, which has been a common theme in political crises in virtual worlds. After the vote, instead of imposing the results on everyone, you would send everyone who voted Yes to a world where player killing is allowed, and everyone who voted No to an alternate world, identical in every respect except that player killing was not allowed.

And of course, after 20 such votes, you would have subdivided the player base 20 times, leaving you with 2 to the 20th, roughly a million, potential worlds — a world with player killing where your possessions can be stolen when you die and where you re-spawn vs. a world with player killing and possession stealing but permanent death, and so on. Even if you started with a million players on Day One, by your 20th vote each world would contain, on average, roughly one player. You would have created a new category of MMO — the Minimally Multi-player Online game.
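
To make the arithmetic concrete, here is a minimal sketch in Python, assuming, purely for illustration, that every vote splits each world evenly:

```python
# Fork World arithmetic: each binary vote forks every existing world in
# two, so 20 votes produce 2**20 worlds. The even split is an assumption
# for illustration; lopsided votes only change which worlds empty first.

players = 1_000_000
votes = 20

worlds = 2 ** votes                      # 1,048,576
average_population = players / worlds    # roughly 0.95 players per world

print(f"Worlds after {votes} votes: {worlds:,}")
print(f"Average players per world: {average_population:.2f}")
```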

This would fulfill the libertarian fantasy of no coercion on behalf of the group, because no one would ever be asked to assent to rules they hadn’t voted for, but it would also be approximately no fun. To get the pleasure of other people’s company, people have to abide by rules they might not like considered in isolation. Group cohesion has significant value, value that makes accepting majority rule worthwhile.

Simple Governance Stack

Since tensions between group structure and individual behavior are fundamental, we can look at ways that real world group structure and virtual world group structure differ. To illustrate this, I’m going to define the world’s tiniest stack, a three-layer stack of governance functions in social spaces. This is of course a tremendous over-simplification; you could draw the stack at any number of levels of complexity. I’ve used three levels because it’s what fits on a Powerpoint slide with great big text.

– Social Norms
– Interventions
– Mechanics

At the top level are social norms. We’ve heard this several times at the conference today — social norms in game worlds have the effect of governance. There are some societies where not wearing white shoes after Labor Day has acquired the force of law. It’s nowhere spelled out, no one can react to you in any kind of official way if you violate that rule, and yet there’s a social structure that keeps that in place.

Then at the bottom of the stack is mechanics, the stuff that just happens. I’ve pulled Norman Mailer’s quote about Naked Lunch here — “As implacable as a sales tax.” Sales tax just happens as a side-effect of living our normal lives. We have all sorts of mechanisms for making it work in this way, but the experience of the average citizen is that the tax just happens. 

And between these top and bottom layers in the stack, between relatively light-weight social norms and things that are really embedded into the mechanics of society, are lots and lots of interventions, places where we give some segment of society heightened power, and then allow them to make judgment calls, albeit with oversight.

Arresting someone or suing someone are examples of such interventions, where human judgment is required. I’ve listed interventions in the middle of the stack because they are more than socially enforceable (suing someone for libel is more than just social stigma), but they are not mechanical (libel is a judgment call, so some human agency is required to decide whether libel has happened, and if so, how it should be dealt with).

And of course these layers interact with one another as well. One of the characteristics of this interaction is that in many cases social norms acquire the force of law. If the society can be shown to have done things in a certain way consistently and for a long time, the courts will, at least in common law societies, abide by that. 

The Stack in Social Worlds

Contrast the virtual world. Social norms – the players have all sorts of ways of acting on and enacting social norms. There are individual behaviors, like trolling and flaming, which are in part indoctrination rituals and in part population control; then there are guilds and more organized social structures, so there’s a spectrum of formality in the social controls of the game world.

Beneath that there’s intervention by wizardly fiat. Intervention comes from the people who have a higher order of power, some sort of direct access to the software that runs the system. Sometimes it is used to solve social dilemmas, like the toading of Mr. Bungle in LambdaMOO, or for dispute resolution, where two players come to a deadlock and it can’t be worked out except by a third party who has more power. Sometimes it’s used to fix places where system mechanics break down, as with the story from Habitat about accidentally allowing a player to get a too-powerful gun.

Intervention is a key lubricator, since it allows the ad hoc solution of unforeseen problems, and the history of both political norms and computer networks is the history of unforeseen problems.

And then there’s mechanics. The principal difference between real world mechanics and virtual world mechanics is ownership. Someone owns the server – there is a deed for a box sitting in a hosting company somewhere, and that server could be turned off by the person who owns it. The irony is that although we’re used to computers greatly expanding the reach of the individual, as they do in many aspects of our lives, in this domain they actually contract it. Players live in an online analog to a shopping mall, which seems like public space but is actually privately owned. And of course the possibilities for monitoring and control in virtual worlds are orders of magnitude greater than in a shopping mall.

The players have no right to modification of the game world, or even to oversight of the world’s owners. There are very few environments where the players can actually vote to compel either the system administrators or the owners to do things, in a way that acquires the force of law. (Michael Froomkin has done some interesting work on testing legal structures in game worlds.)

In fact what often happens, both online and off, is that structures are created which look like citizen input, but these structures are actually designed to deflect participation while providing political cover. Anyone in academia knows that faculty meetings exist so the administration can say “Well you were consulted” whenever something bad happens, even though the actual leverage the faculty has over the ultimate decision is nil. The model here is customer service — generate a feeling of satisfaction at the lowest possible cost. Political representation, on the other hand, is a high-cost exercise, not least because it requires group approval.

Two Obstacles

So, what are the barriers to self-governance by the users? There are two big ones — lots of little ones, but two big ones. 

The first obstacle is code, the behavior of code. As Lessig says, code is law. In online spaces, code defines the environment we operate in. It’s difficult to share the powers of code among the users, because our current hardware design center is the ‘Personal Computer’; we don’t have a design that allows for social constraints on individual use.

Wizards have a higher degree of power than other players, and simply allowing everyone to be a wizard tends to very quickly devolve into constant fighting about what to do with those wizardly powers. (We’ve seen this with IRC [internet relay chat], where channel operators have higher powers than mere users, leading to operator wars, where the battle is over control of the space.)

The second big obstacle is economics — the box that runs the virtual world is owned by someone, and it isn’t you. When you pay your $20 a month to Sony or Microsoft, you’re subscribing to a service, but you’re not actually paying for the server directly. The ownership passes through a series of layers that dilutes your control over it. The way our legal system works, it’s hard for groups to own things without being legal entities. It’s easy for Sony to own infrastructure, but for you and your 5 friends to own a server in common, you’d have to create some kind of formal entity. IOUs and social agreements to split the cost won’t get you very far. 

So, if we want to maximize player control, if we want to create an environment in which users have refined control, political control, over this stack I’ve drawn, we have to deal with those two obstacles — making code subject to political control, and making it possible for the group to own their own environment.

Nomic World

Now what would it be like if we set out to design a game environment like that? Instead of just waiting for the players to argue for property rights or democratic involvement, what would it be like to design an environment where they owned their online environment directly, where we took the “Code is Law” equation at face value, and gave the users a constitution that included the ability to both own and alter the environment?

There’s a curious tension here between political representation and games. The essence of political representation is that the rules are subject to oversight and alteration by the very people expected to abide by them, while games are fun in part because the rule set is fixed. Even in games with highly idiosyncratic adjustments to the rules, as with Monopoly say, the particular rules are fixed in advance of playing.

One possible approach to this problem is to make changing the rules fun, to make it part of the game. This is exactly the design center of a game called Nomic. It was invented in 1982 by the philosopher Peter Suber. He included it as an appendix to a book called The Paradox of Self-Amendment, which concerns the philosophical ramifications of having a body of laws that includes the instructions for modifying those laws.

Nomic is a game in which changing the rules is a legitimate move within the game world, which makes it closer to the condition of a real government than to, say, Everquest. The characteristics of the Nomic rules are, I think, the kind of thing you would have to take on if you wanted to build an environment in which players had real control. Nomic rules are alterable, and they’re also explicit – one of the really interesting things about designing a game in which changing the rules is one of the rules, is you have to say much more carefully what the rules actually are. 

The first rule of Nomic, Rule 101, is “All players must abide by the rules.” Now that’s an implicit rule of almost any game anyone ever plays, but in Nomic it needs to be spelled out, and ironically, once you spell it out it’s up for amendment – you can have a game of Nomic in which you allow people to no longer play by the rules.

Suber’s other key intuition, I think, in addition to making mutability a move in the game, is making Nomic contain both deep and shallow rules. There are rules that are “immutable”, and rules that are mutable. I put immutable in quotes because Rule 103 allows for “…the transmutation of an immutable rule into a mutable rule or vice versa.”

Because the players can first vote to make an immutable rule mutable, and then can vote to change the newly mutable rule, Nomic works a little like the US Constitution: there are things that are easier to change and harder to change, but nothing is beyond change. For example, flag burning is currently protected speech under our First Amendment, so laws restricting flag burning are invariably struck down as unconstitutional. An amendment to the constitution making flag burning illegal, however, would not, by definition, be unconstitutional, but such an amendment is much much harder to pass than an ordinary law. Same pattern.
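
To make the two-tier structure concrete, here is a minimal sketch of rules carrying a mutability flag, with a Rule 103-style transmutation vote. This is not Suber’s actual ruleset, and the vote thresholds are assumptions:

```python
# Minimal sketch of Nomic's two-tier rules: each rule carries a mutability
# flag, ordinary amendment touches only mutable rules, and a separate
# transmutation vote (Rule 103 in spirit) moves a rule between tiers.
# The majority and supermajority thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Rule:
    number: int
    text: str
    mutable: bool

rules = {
    101: Rule(101, "All players must abide by the rules.", mutable=False),
    201: Rule(201, "Players may not kill one another.", mutable=True),
}

def amend(number: int, new_text: str, yes_fraction: float) -> bool:
    """Ordinary amendment: simple majority, mutable rules only."""
    rule = rules[number]
    if not rule.mutable or yes_fraction <= 0.5:
        return False
    rule.text = new_text
    return True

def transmute(number: int, yes_fraction: float) -> bool:
    """Flip a rule between immutable and mutable; requires a supermajority."""
    rule = rules[number]
    if yes_fraction < 2 / 3:
        return False
    rule.mutable = not rule.mutable
    return True

# Rule 101 can't be amended directly...
assert not amend(101, "Players may ignore the rules.", yes_fraction=0.9)
# ...but once transmuted to mutable, it can be -- the constitutional pattern.
assert transmute(101, yes_fraction=0.8)
assert amend(101, "Players may ignore the rules.", yes_fraction=0.6)
```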

The game Nomic has the advantage of being mental and interpretive, unlike software-mediated environments, where the rules are blindly run by the processor. We know (thank you, Herr Gödel) that we cannot prove that any sufficiently large set of rules is also self-consistent, and we know (from bitter experience) that even simple software contains bugs.

The task of instantiating a set of rules in code and then trying to work in the resulting environment, while modifying it, can seem daunting. I think it’s worth trying, though, because an increasing part of our lives, personal, social and political, is going to be lived in these mediated spaces. The importance of online spaces as public gatherings is so fundamental, in fact, that for the rest of this talk, I’m going to use the words player and citizen interchangeably.

How to build it?

How to build a Nomic world? Start with economics. The current barrier to self-ownership by users is simple: the hardware running the social environment is owned by someone, and we have a model of contractual obligation for ownership of hardware, rather than a model of political membership.

One possible response to current economic barriers, call it the Co-operative Model, is to use contracts to create political rights. Here we would set up a world or game in which the people running it are actually the employees of the citizens, not the employees of the game company, and their relationship to the body of citizens is effectively as work-for-hire. This would be different than the current ‘monthly subscriber’ model for most game worlds. In the co-operative model, when you’re paying for access to the game world your dollars would buy you shares of stock in a joint stock corporation — citizens would be both stakeholders and shareholders. There would be a fiduciary duty on the part of the people running the game on your behalf to act on the political will of the people, however expressed, rather than the contractual relationship we have now. 

The downside of this model is that the contractual requirements to do such a thing are complex. The Open Source world gives us a number of interesting models for valuable communal property like the license to a particular piece of software being held in trust. When such a communal trust, though, wants to have employees, the contracts become far more complex, and the citizens’ co-op becomes an employer. Not un-do-able, but not a great target for quick experiments either.

A second way to allow a group to own their own social substrate is with a Distributed Model, where you would attack the problem down at the level of the infrastructure. If the issue is that any one server has to be owned somewhere, distribute the server. Run the environment on individual PC’s and create a peer-to-peer network, so that the entirety of the infrastructure is literally owned by the citizens from the moment they join the game. That pushes some technological constraints, like asynchronicity, into the environment, but it’s also a way of attacking ownership, and one that doesn’t require a lot of contractual specification with employees. 

[Since I gave this talk, I’ve discovered BlogNomic, a version of Nomic run on weblogs, which uses this “distributed platform” pattern.]

A third way could be called the Cheap Model: simply work at such low cost that you can piggyback on low-cost or free infrastructure. If you wanted to build a game, this would probably mean working in text-based strategy mode, rather than the kind of immersive graphic worlds we’ve been talking so much about today. There are a number of social tools — wikis and mailing lists and so on — that are freely available and cheap to host. In this case, a one-time donation of a few dollars per citizen at the outset would cover hosting costs for some time.

Those are some moves that would potentially free a Nomic environment from the economic constraints of ownership by someone other than the citizens. The second constraint is dealing with the code, the actual software running the world. 

Code

Code is a much harder thing to manipulate than economics. Current barriers in code, as I said, are levels of permission and root powers. The real world has nothing like root access — there is an urban legend, a rural legend, I guess, about the State of Tennessee declaring the value of pi to be 3, as irrational numbers were decidedly inconvenient. However, the passage of such a law couldn’t actually change the value of pi.

In an online world, on the other hand, it would be possible to redefine pi, or indeed any other value, which would in some cases cause the world itself to grind to a halt. 

This situation, where a rule change ends the game, is possible even in Nomic. In one game a few years ago, a set of players made a sub-game of trying to pass game-ending rules. Now in theory Nomic shouldn’t allow such a thing, since such rules could be repealed, so this group of players specifically targeted unrepealability as the core virtue of all their proposed changes. One such change, for example, would have made the comment period for subsequent rule changes 54 years long. Such a rule could eventually have been repealed, of course, but not in a year with two zeros in the middle.

So unlike actual political systems, where the legislators are allowed to create nonsensical but unenforceable laws, in virtual worlds, it’s possible to make laws that are nonsensical and enforceable, even at the expense of damaging the world itself. This means that any citizen-owned and operated environment would have to include a third set of controls, designed to safeguard the world itself against this kind of damage.

One potential response is to create Platform World, with a third, deeper level of immutability, enforced with the choice of platform. You could announce a social experiment, using anything from mailing list software to There.com, and that would set a bounded environment. You can imagine that software slowly mutating away from the original code base, as citizens made rule changes that required code changes, but by picking a root platform, you would actually have a set of rules embodied in code that was harder to change than classic Nomic rules. The top two layers of the stack, the social and interventionist changes, could happen in the environment, but re-coding the mechanics of the environment itself would be harder.

A second possibility, as a move completely away from that, would be Coder World. Here you would only allow users who are comfortable coding to play or participate in this environment, so that a kind of literacy becomes a requirement for participation. This would greatly narrow the range of potential participants, but would flatten the stack so that all citizens could participate directly in all layers. This flattening would lead to problems of its own of course, and would often devolve into tests of hacking prowess, and even attempts to crash the world, as with Nomic, but that might be interesting in and of itself.

Third, and this is the closest to the current game world model, would be Macro World. Here you would create a bunch of macro-languages, to create ways in which end-users who aren’t coders could nevertheless alter the world they inhabit. And obviously object creation, the whole history of creating virtual objects for virtual environments, works this way now, but it’s not yet at the level of creating the environment itself from scratch. You come into an environment in which you create objects, rather than coming into a negative space and letting the citizens build up the environment, including the rules.

A fourth and final possibility is a CVS World. CVS, the Concurrent Versions System, is what programmers use to keep code safe, so that when they make a change that breaks something, they can roll back to a previous version. Wikis, collaborative workspaces where users create the site together and on the fly, have shown that the CVS pattern can have important social uses as well.

In the Matrix tradition, because I guess that everybody’s referred to the Matrix in every talk today, CVS World would be a world in which you simply wouldn’t care if citizens made mistakes and screwed up the world, because the world could always be rolled back to the last working version. 

In this environment, the ‘crash the world’ problem stops being a problem not because there is a defense against crashing, but because there is a response. If someone crashes the world, for whatever reason, it rolls back to the last working version. That would potentially be the most dynamic model in terms of experimentation, but it would also probably cause the most disruption of game play.
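
A minimal sketch of that rollback pattern, assuming the world state can be snapshotted as plain data; the ‘crash’ check below is just a stand-in:

```python
# Sketch of CVS World: every accepted change snapshots the world state,
# and a crash is handled by rolling back to the last working version.
# The state layout and the "crash" test are illustrative assumptions.

import copy

class VersionedWorld:
    def __init__(self, initial_state: dict):
        self.history = [copy.deepcopy(initial_state)]   # version 0

    @property
    def current(self) -> dict:
        return self.history[-1]

    def commit(self, new_state: dict) -> None:
        """Record a citizen-made change as a new version."""
        self.history.append(copy.deepcopy(new_state))

    def rollback(self) -> dict:
        """Discard the latest version and return to the previous one."""
        if len(self.history) > 1:
            self.history.pop()
        return self.current

world = VersionedWorld({"pi": 3.14159, "player_killing": False})
world.commit(dict(world.current, pi=3))      # a citizen "redefines pi"

if world.current["pi"] != 3.14159:           # stand-in for "the world crashed"
    world.rollback()

print(world.current)                         # back to the last working version
```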

Why do it?

The looming question here, of course, is “Would it be fun?” Would it be fun to be in a virtual environment where citizens have a significant amount of control, and where the legal structures reflect the legal structures we know in the real world? And the answer is, maybe no.

One of my great former students, Elizabeth Goodman, said the reason academics like to talk about play but not about fun is that you can force people to play, but you can’t force them to have fun. Much of what makes a game fun is mastering the rules — both winning as defined by the rules and gaming the system are likely to be more fun than taking responsibility for what the rules are. There is a danger that by dumping so much responsibility into the citizens’ laps, we would end up re-creating all the fun of city planning board meetings in an online environment (though given players’ willingness to sit around all day making armor, maybe that’s not a fatal flaw).

Despite all of this, though, I think it’s worth maximizing citizen involvement through experiments in ownership and control of code, for several reasons.

First, as Ted [Castronova]’s work shows, the economic seriousness of game worlds has surpassed anything any of us would have expected even a few years ago. Economics and politics are both about distributed optimization under constraints, and given the surprises we’ve seen in the economic sphere, with inflation in virtual worlds and economy hacking by players, it would be interesting to see if similar energy would be devoted to political engagement, on a level more fundamental than making and managing Guild rules.

Next, it’s happening anyway, so why not formalize it? As Julian [Dibbell]’s work on everything from LambdaMOO to MMO Terms of Service demonstrates, the demands for governance are universal features of virtual worlds, and rather than simply re-invent solutions one crisis at a time, we could start building a palette of viable political systems.

Finally, and this is the most important point, we are moving an increasing amount of our speech to owned environments. The economic seriousness of these worlds undermines the ‘it’s only a game’ argument, and the case of tk being run out of The Sims for publishing reports critical of the game shows how quickly freedom of speech issues can arise. The real world is too difficult to control by fiat — pi remains stubbornly irrational no matter who votes on it — but the online world is not. Even in non-game and non-fee-collecting social environments like Yahoo Groups, the intrusiveness of advertising and the right of the owners to unilaterally change the rules create many fewer freedoms than we enjoy offline.

We should experiment with game-world models that dump a large and maybe even unpleasant amount of control into the hands of the players because it’s the best lab we have for experiments with real governance in the 21st century agora, the place where people gather when they want to be out in public. 

While real world political culture has the unfortunate habit of presenting either/or choices — unicameral or bicameral legislatures, president or prime minister, and so on — the online world offers us a degree of flexibility that allows us to model rather than theorize. Wonder what the difference is between forcing new citizens to have sponsors vs. dumping newbies into the world alone? Try it both ways and see how the results differ.

This is really the argument for Nomic World, for making an environment as wholly owned and managed by and for the citizens as a real country — if we’re going to preserve our political freedoms as we move to virtual environments, we’re going to need novel political and economic relations between the citizens and their environments. We need this, and we can’t get it from the real world. So we might as well start experimenting now, because it’s going to take a long time to get good at it, and if we can enlist the players’ efforts, we’ll learn more, much more, than if we leave the political questions in the hands of the owners and wizards.

And with that, I’ll sit down. Thanks very much.

Fame vs Fortune: Micropayments and Free Content

First published September 5, 2003 on the “Networks, Economics, and Culture” mailing list.

Micropayments, small digital payments of between a quarter and a fraction of a penny, made (yet another) appearance this summer with Scott McCloud’s online comic, The Right Number, accompanied by predictions of a rosy future for micropayments.

To read The Right Number, you have to sign up for the BitPass micropayment system; once you have an account, the comic itself costs 25 cents.

BitPass will fail, as FirstVirtual, Cybercoin, Millicent, Digicash, Internet Dollar, Pay2See, and many others have in the decade since Digital Silk Road, the paper that helped launch interest in micropayments. These systems didn’t fail because of poor implementation; they failed because the trend towards freely offered content is an epochal change, to which micropayments are a pointless response.

The failure of BitPass is not terribly interesting in itself. What is interesting is the way the failure of micropayments, both past and future, illustrates the depth and importance of putting publishing tools in the hands of individuals. In the face of a force this large, user-pays schemes can’t simply be restored through minor tinkering with payment systems, because they don’t address the cause of that change — a huge increase in the power and reach of the individual creator.

Why Micropayment Systems Don’t Work

The people pushing micropayments believe that the dollar cost of goods is the thing most responsible for deflecting readers from buying content, and that a reduction in price to micropayment levels will allow creators to begin charging for their work without deflecting readers.

This strategy doesn’t work, because the act of buying anything, even if the price is very small, creates what Nick Szabo calls mental transaction costs, the energy required to decide whether something is worth buying or not, regardless of price. The only business model that delivers money from sender to receiver with no mental transaction costs is theft, and in many ways, theft is the unspoken inspiration for micropayment systems.

Like the salami slicing exploit in computer crime, micropayment believers imagine that such tiny amounts of money can be extracted from the user that they will not notice, while the overall volume will cause these payments to add up to something significant for the recipient. But of course the users do notice, because they are being asked to buy something. Mental transaction costs create a minimum level of inconvenience that cannot be removed simply by lowering the dollar cost of goods.

Worse, beneath a certain threshold, mental transaction costs actually rise, a phenomenon that is especially significant for information goods. It’s easy to think a newspaper is worth a dollar, but is each article worth half a penny? Is each word worth a thousandth of a penny? A newspaper, exposed to the logic of micropayments, becomes impossible to value.

If you want to feel mental transaction costs in action, sign up for the $3 version of BitPass, then survey the content on offer. Would you pay 25 cents to view a VR panorama of the Matterhorn? Are Powerpoint slides on “Ten reasons why now is a great time to start a company” worth a dime? (And if so, would each individual reason be worth a penny?)

Mental transaction costs help explain the general failure of micropayment systems. (See Odlyzko, Shirky, and Szabo for a fuller accounting of the weaknesses of micropayments.) The failure of micropayments in turn helps explain the ubiquity of free content on the Web.

Fame vs Fortune and Free Content

Analog publishing generates per-unit costs — each book or magazine requires a certain amount of paper and ink, and creates storage and transportation costs. Digital publishing doesn’t. Once you have a computer and internet access, you can post one weblog entry or one hundred, for ten readers or ten thousand, without paying anything per post or per reader. In fact, dividing up front costs by the number of readers means that content gets cheaper as it gets more popular, the opposite of analog regimes.
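
The cost curve can be sketched in a few lines; the dollar figures below are illustrative assumptions, not real publishing costs:

```python
# Per-reader cost: analog publishing carries a marginal cost per copy,
# digital publishing essentially does not, so digital content gets cheaper
# per reader as the audience grows. All dollar amounts are made up.

def per_reader_cost(fixed: float, marginal: float, readers: int) -> float:
    return fixed / readers + marginal

for readers in (10, 1_000, 100_000):
    analog = per_reader_cost(fixed=1000.0, marginal=2.00, readers=readers)
    digital = per_reader_cost(fixed=1000.0, marginal=0.00, readers=readers)
    print(f"{readers:>7} readers   analog ${analog:8.2f}   digital ${digital:8.2f}")

# Analog cost bottoms out at the per-copy cost; digital cost falls toward zero.
```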

The fact that digital content can be distributed for no additional cost does not explain the huge number of creative people who make their work available for free. After all, they are still investing their time without being paid back. Why?

The answer is simple: creators are not publishers, and putting the power to publish directly into their hands does not make them publishers. It makes them artists with printing presses. This matters because creative people crave attention in a way publishers do not. Prior to the internet, this didn’t make much difference. The expense of publishing and distributing printed material is too great for it to be given away freely and in unlimited quantities — even vanity press books come with a price tag. Now, however, a single individual can serve an audience in the hundreds of thousands, as a hobby, with nary a publisher in sight.

This disrupts the old equation of “fame and fortune.” For an author to be famous, many people had to have read, and therefore paid for, his or her books. Fortune was a side-effect of attaining fame. Now, with the power to publish directly in their hands, many creative people face a dilemma they’ve never had before: fame vs fortune.

Substitutability and the Deflection of Use

The fame vs fortune choice matters because of substitutability, the willingness to accept one thing as a substitute for another. Substitutability is neutralized in perfect markets. For example, if someone has even a slight preference for Pepsi over Coke, and if both are always equally available in all situations, that person will never drink a Coke, despite being only mildly biased.

The soft-drink market is not perfect, but the Web comes awfully close: If InstaPundit and Samizdata are both equally easy to get to, the relative traffic to the sites will always match audience preference. But were InstaPundit to become less easy to get to, Samizdata would become a more palatable substitute. Any barrier erodes the user’s preferences, and raises their willingness to substitute one thing for another.

This is made worse by the asymmetry between the author’s motivation and the reader’s. While the author has one particular thing they want to write, the reader is usually willing to read anything interesting or relevant to their interests. Though each piece of written material is unique, the universe of possible choices for any given reader is so vast that uniqueness is not a rare quality. Thus any barrier to a particular piece of content (even, as the usability people will tell you, making it one click further away) will deflect at least some potential readers.

Charging, of course, creates just such a barrier. The fame vs fortune problem exists because the web makes it possible to become famous without needing a publisher, and because any attempt to derive fortune directly from your potential audience lowers the size of that audience dramatically, as the added cost encourages them to substitute other, free sources of content.

Free is a Stable Strategy

For a creator more interested in attention than income, free makes sense. In a regime where most of the participants are charging, freeing your content gives you a competitive advantage. And, as the drunks say, you can’t fall off the floor. Anyone offering content free gains an advantage that can’t be beaten, only matched, because the competitive answer to free — “I’ll pay you to read my weblog!” — is unsupportable over the long haul.

Free content is thus what biologists call an evolutionarily stable strategy. It is a strategy that works well when no one else is using it — it’s good to be the only person offering free content. It’s also a strategy that continues to work if everyone is using it, because in such an environment, anyone who begins charging for their work will be at a disadvantage. In a world of free content, even the moderate hassle of micropayments greatly damages user preference, and increases their willingness to accept free material as a substitute.

Furthermore, the competitive edge of free content is increasing. In the 90s, as the threat the Web posed to traditional publishers became obvious, it was widely believed that people would still pay for filtering. As the sheer volume of free content increased, the thinking went, finding the good stuff, even if it was free, would be worth paying for because it would be so hard to find.

In fact, the good stuff is becoming easier to find as the size of the system grows, not harder, because collaborative filters like Google and Technorati rely on rich link structure to sort through links. So offering free content is not just an evolutionarily stable strategy, it is a strategy that improves with time, because the more free content there is, the greater the advantage it has over for-fee content.

The Simple Economics of Content

People want to believe in things like micropayments because without a magic bullet to believe in, they would be left with the uncomfortable conclusion that what seems to be happening — free content is growing in both amount and quality — is what’s actually happening.

The economics of content creation are in fact fairly simple. The two critical questions are “Does the support come from the reader, or from an advertiser, patron, or the creator?” and “Is the support mandatory or voluntary?”

The internet adds no new possibilities. Instead, it simply shifts both answers: it makes all user-supported schemes harder, and all subsidized schemes easier. It likewise makes collecting fees harder, and soliciting donations easier. And these effects are multiplicative: the internet makes collecting mandatory user fees much harder, and makes voluntary subsidy much easier.
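
A small sketch of the two questions as a grid, using only the schemes named in this essay; the classification follows the argument above, while the layout itself is mine:

```python
# The two questions of content economics, applied to schemes the essay
# names. Which quadrant a scheme sits in follows the essay's argument;
# the dictionary layout is just one way to write it down.

schemes = {
    # scheme: (who supports it, mandatory or voluntary)
    "subscription":              ("reader",     "mandatory"),
    "micropayments":             ("reader",     "mandatory"),
    "donations":                 ("reader",     "voluntary"),
    "advertising":               ("advertiser", "voluntary"),
    "sponsorship":               ("patron",     "voluntary"),
    "creator-subsidized weblog": ("creator",    "voluntary"),
}

# The internet makes user-supported, mandatory schemes harder, and
# subsidized or voluntary schemes easier.
harder = [s for s, (src, mode) in schemes.items()
          if src == "reader" and mode == "mandatory"]
easier = [s for s, (src, mode) in schemes.items()
          if src != "reader" or mode == "voluntary"]

print("made harder by the internet:", harder)
print("made easier by the internet:", easier)
```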

Weblogs, in particular, represent a huge victory for voluntarily subsidized content. The weblog world is driven by a million creative people, driven to get the word out, willing to donate their work, and unhampered by the costs of xeroxing, ink, or postage. Given the choice of fame vs fortune, many people will prefer a large audience and no user fees to a small audience and tiny user fees. This is not to say that creators cannot be paid for their work, merely that mandatory user fees are far less effective than voluntary donations, sponsorship, or advertising.

Because information is hard to value in advance, for-fee content will almost invariably be sold on a subscription basis, rather than per piece, to smooth out the variability in value. Individual bits of content that are even moderately close in quality to what is available free, but wrapped in the mental transaction costs of micropayments, are doomed to be both obscure and unprofitable.

What’s Next?

This change in the direction of free content is strongest for the work of individual creators, because an individual can produce material on any schedule they like. It is also strongest for publication of words and images, because these are the techniques most easily mastered by individuals. As creative work in groups creates a good deal of organizational hassle and often requires a particular mix of talents, it remains to be seen how strong the movement towards free content will be for endeavors like music or film.

However, the trends are towards easier collaboration, and still more power to the individual. The open source movement has demonstrated that even phenomenally complex systems like Linux can be developed through distributed volunteer labor, and software like Apple’s iMovie allows individuals to do work that once required a team. So while we don’t know what the ultimate effect of the economics of free content on group work will be, we do know that the barriers to such free content are coming down, as they did with print and images when the Web launched.

The interesting questions regarding free content, in other words, have nothing to do with bland “End of Free” predictions, or unimaginative attempts at restoring user-pays regimes. The interesting questions are how far the power of the creator to publish their own work is going to go, how much those changes will be mirrored in group work, and how much better collaborative filters will become in locating freely offered material. While we don’t know what the end state of these changes will be, we do know that the shift in publishing power is epochal and accelerating.

Group as User: Flaming and the Design of Social Software

First published November 5, 2004 on the “Networks, Economics, and Culture” mailing list.

When we hear the word “software,” most of us think of things like Word, Powerpoint, or Photoshop, tools for individual users. These tools treat the computer as a box, a self-contained environment in which the user does things. Much of the current literature and practice of software design — feature requirements, UI design, usability testing — targets the individual user, functioning in isolation.

And yet, when we poll users about what they actually do with their computers, some form of social interaction always tops the list — conversation, collaboration, playing games, and so on. The practice of software design is shot through with computer-as-box assumptions, while our actual behavior is closer to computer-as-door, treating the device as an entrance to a social space.

We have grown quite adept at designing interfaces and interactions between computers and machines, but our social tools — the software the users actually use most often — remain badly misfit to their task. Social interactions are far more complex and unpredictable than human/computer interaction, and that unpredictability defeats classic user-centric design. As a result, tools used daily by tens of millions are either ignored as design challenges, or treated as if the only possible site of improvement is the user-to-tool interface.

The design gap between computer-as-box and computer-as-door persists because of a diminished conception of the user. The users of a piece of social software are not just a collection of individuals, but a group. Individual users take on roles that only make sense in groups: leader, follower, peacemaker, process nazi, and so on. There are also behaviors that can only occur in groups, from consensus building to social climbing. And yet, despite these obvious differences between personal and social behaviors, we have very little design practice that treats the group as an entity to be designed for.

There is enormous value to be gotten in closing that gap, and it doesn’t require complicated new tools. It just requires new ways of looking at old problems. Indeed, much of the most important work in social software has been technically simple but socially complex.

Learning From Flame Wars

Mailing lists were the first widely available piece of social software. (PLATO beat mailing lists by a decade, but had a limited user base.) Mailing lists were also the first widely analyzed virtual communities. And for roughly thirty years, almost any description of mailing lists of any length has mentioned flaming, the tendency of list members to forgo standards of public decorum when attempting to communicate with some ignorant moron whose to stupid to know how too spell and deserves to DIE, die a PAINFUL DEATH, you PINKO SCUMBAG!!!

Yet despite three decades of descriptions of flaming, it is often treated by designers as a mere side-effect, as if each eruption of a caps-lock-on argument was surprising or inexplicable.

Flame wars are not surprising; they are one of the most reliable features of mailing list practice. If you assume a piece of software is for what it does, rather than what its designer’s stated goals were, then mailing list software is, among other things, a tool for creating and sustaining heated argument. (This is true of other conversational software as well — the WELL, usenet, Web BBSes, and so on.)

This tension in outlook, between ‘flame war as unexpected side-effect’ and ‘flame war as historical inevitability,’ has two main causes. The first is that although the environment in which a mailing list runs is computers, the environment in which a flame war runs is people. You couldn’t go through the code of the Mailman mailing list tool, say, and find the comment that reads “The next subroutine ensures that misunderstandings between users will be amplified, leading to name-calling and vitriol.” Yet the software, when adopted, will frequently produce just that outcome.

The user’s mental model of a word processor is of limited importance — if a word processor supports multiple columns, users can create multiple columns; if not, then not. The users’ mental model of social software, on the other hand, matters enormously. For example, ‘personal home pages’ and weblogs are very similar technically — both involve local editing and global hosting. The difference between them was mainly in the user’s conception of the activity. The pattern of weblogging appeared before the name weblog was invented, and the name appeared before any of the current weblogging tools were designed. Here the shift was in the user’s mental model of publishing, and the tools followed the change in social practice.

In addition, when software designers do regard the users of social software, it is usually in isolation. There are many sources of this habit: ubiquitous network access is relatively recent, it is conceptually simpler to treat users as isolated individuals than as social actors, and so on. The cumulative effect is to make maximizing individual flexibility a priority, even when that may produce conflict with the group goals. 

Flaming, an un-designed-for but reliable product of mailing list software, was our first clue to the conflict between the individual and the group in mediated spaces, and the initial responses to it were likewise an early clue about the weakness of the single-user design center.

Netiquette and Kill Files

The first general response to flaming was netiquette. Netiquette was a proposed set of behaviors that assumed that flaming was caused by (who else?) individual users. If you could explain to each user what was wrong with flaming, all users would stop.

This mostly didn’t work. The problem was simple — the people who didn’t know netiquette needed it most. They were also the people least likely to care about the opinion of others, and thus couldn’t be easily convinced to adhere to its tenets.

Interestingly, netiquette came tantalizingly close to addressing group phenomena. Most versions advised, among other techniques, contacting flamers directly, rather than replying to them on the list. Anyone who has tried this technique knows it can be surprisingly effective. Even here, though, the collective drafters of netiquette misinterpreted this technique. Addressing the flamer directly works not because he realizes the error of his ways, but because it deprives him of an audience. Flaming is not just personal expression, it is a kind of performance, brought on in a social context.

This is where the ‘direct contact’ strategy falls down. Netiquette docs typically regarded direct contact as a way to engage the flamer’s rational self, and convince him to forgo further flaming. In practice, though, the recidivism rate for flamers is high. People behave differently in groups, and while momentarily engaging them one-on-one can have a calming effect, that is a change in social context, rather than some kind of personal conversion. Once the conversation returns to a group setting, the temptation to return to performative outbursts also returns.

Another standard answer to flaming has been the kill file, sometimes called a bozo filter, which is a list of posters whose comments you want filtered by the software before you see them. (In the lore of usenet, there is even a sound effect — *plonk* — that the kill-file-ee is said to make when dropped in the kill file.)

Kill files are also generally ineffective, because merely removing one voice from a flame war doesn’t do much to improve the signal to noise ratio — if the flamer in question succeeds in exciting a response, removing his posts alone won’t stem the tide of pointless replies. And although people have continually observed (for thirty years now) that “if everyone just ignores user X, he will go away,” the logic of collective action makes that outcome almost impossible to orchestrate — it only takes a couple of people rising to bait to trigger a flame war, and the larger the group, the more difficult it is to enforce the discipline required of all members.

The Tragedy of the Conversational Commons

Flaming is one of a class of economic problems known as The Tragedy of the Commons. Briefly stated, the tragedy of the commons occurs when a group holds a resource, but each of the individual members has an incentive to overuse it. (The original essay used the illustration of shepherds with common pasture. The group as a whole has an incentive to maintain the long-term viability of the commons, while each individual shepherd has an incentive to overgraze, to maximize the value he can extract from the communal resource.)

In the case of mailing lists (and, again, other shared conversational spaces), the commonly held resource is communal attention. The group as a whole has an incentive to keep the signal-to-noise ratio high and the conversation informative, even when contentious. Individual users, though, have an incentive to maximize expression of their point of view, as well as maximizing the amount of communal attention they receive. It is a deep curiosity of the human condition that people often find negative attention more satisfying than inattention, and the larger the group, the likelier someone is to act out to get that sort of attention.

However, proposed responses to flaming have consistently steered away from group-oriented solutions and towards personal ones. The logic of collective action, alluded to above, rendered these personal solutions largely ineffective. Meanwhile, encoding social bargains into the software itself wasn’t attempted, because of the twin forces of door culture (a resistance to regarding social features as first-order effects) and a horror of censorship (maximizing individual freedom, even when it conflicts with group goals).

Weblog and Wiki Responses

When considering social engineering for flame-proofed-ness, it’s useful to contemplate both weblogs and wikis, neither of which suffers from flaming in anything like the degree mailing lists and other conversational spaces do. Weblogs are relatively flame-free because they provide little communal space. In economic parlance, weblogs solve the tragedy of the commons through enclosure, the subdividing and privatizing of common space.

Every bit of the weblog world is operated by a particular blogger or group of bloggers, who can set their own policy for accepting comments, including having no comments at all, deleting comments from anonymous or unfriendly visitors, and so on. Furthermore, comments are almost universally displayed away from the main page, greatly limiting their readership. Weblog readers are also spared the need for a bozo filter. Because the mailing list pattern of ‘everyone sees everything’ has never been in effect in the weblog world, there is no way for anyone to hijack existing audiences to gain attention.

Like weblogs, wikis also avoid the tragedy of the commons, but they do so by going to the other extreme. Instead of everything being owned, nothing is. Whereas a mailing list has individual and inviolable posts but communal conversational space, in wikis, even the writing is communal. If someone acts out on a wiki, the offending material can be subsequently edited or removed. Indeed, the history of the Wikipedia, host to communal entries on a variety of contentious topics ranging from Islam to Microsoft, has seen numerous and largely failed attempts to pervert or delete entire entries. And because older versions of wiki pages are always archived, it is actually easier to restore damage than cause it. (As an analogy, imagine what cities would look like if it were easier to clean graffiti than to create it.)

Weblogs and wikis are proof that you can have broadly open discourse without suffering from hijacking by flamers, by creating a social structure that encourages or deflects certain behaviors. Indeed, the basic operation of both weblogs and wikis — write something locally, then share it — is the pattern of mailing lists and BBSes as well. Seen in this light, the assumptions made by mailing list software look less like The One True Way to design a social contract between users, and more like one strategy among many.

Reviving Old Tools

This possibility of adding novel social components to old tools presents an enormous opportunity. To take the most famous example, the Slashdot moderation system puts the ability to rate comments into the hands of the users themselves. The designers took the traditional bulletin board format — threaded posts, sorted by time — and added a quality filter. And instead of assuming that all users are alike, the Slashdot designers created a karma system, to allow them to discriminate in favor of users likely to rate comments in ways that would benefit the community. And, to police that system, they created a meta-moderation system, to solve the ‘Who will guard the guardians’ problem. (All this is documented in the Slashdot FAQ, our version of Federalist Papers #10.)
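
As a sketch of that architecture (not Slashdot’s actual code, and with thresholds that are pure assumptions), the three mechanisms fit together roughly like this:

```python
# Sketch of the rating / karma / meta-moderation loop described above.
# Eligibility rules, score ranges, and the visibility threshold are all
# illustrative assumptions, not Slashdot's real values.

from collections import defaultdict

comment_scores = defaultdict(int)   # comment_id -> score
karma = defaultdict(int)            # user_id -> karma
moderations = []                    # (moderator, comment_id, delta)

def moderate(moderator: str, comment_id: str, delta: int) -> None:
    """Users in good standing rate a comment up or down."""
    if karma[moderator] < 0:                 # assumed eligibility rule
        return
    comment_scores[comment_id] += delta
    moderations.append((moderator, comment_id, delta))

def meta_moderate(judge: str, moderation_index: int, fair: bool) -> None:
    """Other users judge the moderations, feeding back into moderator karma."""
    moderator, _, _ = moderations[moderation_index]
    karma[moderator] += 1 if fair else -1

def visible(comment_id: str, threshold: int = 0) -> bool:
    """Readers filter by score, quarantining low-rated posts."""
    return comment_scores[comment_id] >= threshold

moderate("alice", "comment-1", +1)
meta_moderate("bob", moderation_index=0, fair=True)   # alice gains karma
print(comment_scores["comment-1"], karma["alice"], visible("comment-1"))
```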

Rating, karma, meta-moderation — each of these systems is relatively simple in technological terms. The effect of the whole, though, has been to allow Slashdot to support an enormous user base, while rewarding posters who produce broadly valuable material and quarantining offensive or off-topic posts. 

Likewise, Craigslist took the mailing list, and added a handful of simple features with profound social effects. First, all of Craigslist is an enclosure, owned by Craig (whose title is not Founder, Chairman, and Customer Service Representative for nothing.) Because he has a business incentive to make his list work, he and his staff remove posts if enough readers flag them as inappropriate. Like Slashdot, he violates the assumption that social software should come with no group limits on individual involvement, and Craigslist works better because of it. 

And, on the positive side, the addition of a “Nominate for ‘Best of Craigslist'” button in every email creates a social incentive for users to post amusing or engaging material. The ‘Best of’ button is a perfect example of the weakness of a focus on the individual user. In software optimized for the individual, such a button would be incoherent — if you like a particular post, you can just save it to your hard drive. But users don’t merely save those posts to their hard drives; they click that button. Like flaming, the ‘Best of’ button also assumes the user is reacting in relation to an audience, but here the pattern is harnessed to good effect. The only reason you would nominate a post for ‘Best of’ is if you wanted other users to see it — if you were acting in a group context, in other words.

Novel Operations on Social Facts

Jonah Brucker-Cohen’s Bumplist stands out as an experiment with the social aspects of mailing lists. Bumplist, whose motto is “an email community for the determined”, is a mailing list for 6 people, which anyone can join. When the 7th user joins, the first is bumped and, if they want to be back on, must re-join, bumping the second user, ad infinitum. (As of this writing, Bumplist is at 87,414 subscriptions and 81,796 re-subscriptions.) Bumplist’s goal is more polemical than practical; Brucker-Cohen describes it as a re-examination of the culture and rules of mailing lists. However, it is a vivid illustration of the ways simple changes to well-understood software can produce radically different social effects.
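The mechanism is simple enough to sketch in a few lines of Python; the six-member cap comes from the description above, and the rest is an illustrative guess at the plumbing.

    from collections import deque

    # A Bumplist-style roster: a fixed-size queue where each new subscriber
    # bumps the oldest one off. The capacity of 6 is from the description
    # above; the function names are invented for illustration.
    members = deque(maxlen=6)

    def join(address):
        # returns the address that got bumped off the list, if any
        bumped = members[0] if len(members) == members.maxlen else None
        members.append(address)     # a full deque silently drops its oldest entry
        return bumped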

You could easily imagine many such experiments. What would it take, for example, to design a mailing list that was flame-retardant? Once you stop regarding all users as isolated actors, a number of possibilities appear. You could institute induced lag, where, once a user contributed 5 posts in the space of an hour, a cumulative 10 minute delay would be added to each subsequent post. Every post would be delivered eventually, but it would retard the rapid-reply nature of flame wars, introducing a cooling off period for the most vociferous participants.
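As a sketch of how little code the induced-lag rule would take: the five-post trigger and ten-minute increment are the numbers from the paragraph above, and the bookkeeping around them is an illustrative assumption.

    import time

    WINDOW = 3600        # one hour, in seconds
    TRIGGER = 5          # posts within the window before lag kicks in (from the text)
    INCREMENT = 600      # ten minutes of extra delay per post past the trigger (from the text)

    recent = {}          # sender -> timestamps of that sender's posts within the last hour

    def delivery_delay(sender, now=None):
        # returns how many seconds to hold this sender's next post before delivery
        now = time.time() if now is None else now
        history = [t for t in recent.get(sender, []) if now - t < WINDOW]
        history.append(now)
        recent[sender] = history
        excess = max(0, len(history) - TRIGGER)
        return excess * INCREMENT    # cumulative: the 6th post waits 10 minutes, the 7th 20, and so on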

You could institute a kind of thread jail, where every post would include a ‘Worst of’ button, in the manner of Craigslist. Interminable, pointless threads (e.g. Which Operating System Is Objectively Best?) could be sent to thread jail if enough users voted them down. (Though users could obviously change subject headers and evade this restriction, the surprise, first noted by Julian Dibbell, is how often users respect negative communal judgment, even when they don’t respect the negative judgment of individuals. [ See Rape in Cyberspace — search for “aggressively antisocial vibes.”])
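Again, the machinery is trivial; a sketch, with the vote threshold picked out of thin air:

    JAIL_THRESHOLD = 10              # distinct 'Worst of' votes needed; an arbitrary assumption
    worst_of_votes = {}              # thread id -> set of readers who voted it down

    def vote_worst(thread_id, voter):
        # record one reader's 'Worst of' vote; returns True once the thread is jailed
        voters = worst_of_votes.setdefault(thread_id, set())
        voters.add(voter)            # one vote per reader per thread
        return len(voters) >= JAIL_THRESHOLD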

You could institute a ‘Get a room!’ feature, where any conversation that involved two users ping-ponging six or more posts (substitute other numbers to taste) would be automatically re-directed to a sub-list, limited to that pair. The material could still be archived, and so accessible to interested lurkers, but the conversation would continue without the attraction of an audience.
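One way the ping-pong detection might work, as a sketch: the six-post threshold is from the paragraph above, while the strict-alternation test is an assumption about what should count.

    PING_PONG_LENGTH = 6             # the "six or more posts" from the text

    def is_ping_pong(senders):
        # senders: the authors of a thread's posts, oldest first
        tail = senders[-PING_PONG_LENGTH:]
        if len(tail) < PING_PONG_LENGTH or len(set(tail)) != 2:
            return False
        # require strict alternation between the two participants,
        # not just two loud voices in a wider conversation
        return all(a != b for a, b in zip(tail, tail[1:]))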

You could imagine a similar exercise, working on signal/noise ratios generally, and keying off the fact that there is always a most active poster on mailing lists, who posts much more often than even the second most active, and much much more often than the median poster. Oddly, the most active poster is often not even aware that they occupy this position (seeing ourselves as others see us is difficult in mediated spaces as well), but making them aware of it often causes them to self-moderate. You can imagine flagging all posts by the most active poster, whoever that happens to be, or throttling the maximum number of posts by any user to some multiple of the average posting tempo, as in the sketch below.
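A sketch of both interventions over some recent window of posts; the three-times-average cap is an arbitrary assumption.

    from collections import Counter

    THROTTLE_MULTIPLE = 3            # cap any sender at 3x the average tempo; an arbitrary assumption

    def most_active(senders):
        # the single most frequent poster in the window, the one worth flagging
        counts = Counter(senders)
        return counts.most_common(1)[0][0] if counts else None

    def over_quota(sender, senders):
        # True if this sender has posted more than the throttle allows
        counts = Counter(senders)
        if not counts:
            return False
        average = len(senders) / len(counts)     # mean posts per participant in the window
        return counts[sender] > THROTTLE_MULTIPLE * average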

And so on. The number of possible targets for experimentation is large and combinatorial, and those targets exist in any social context, not just in conversational spaces.

Rapid, Iterative Experimentation

Though most of these sorts of experiments won’t be of much value, rapid, iterative experimentation is the best way to find those changes that are positive. The Slashdot FAQ makes it clear that the now-stable ratings+karma+meta-moderation system could only have evolved with continued adjustment over time. This was possible because the engineering challenges were relatively straightforward, and the user feedback swift.

That sort of experimentation, however, has been the exception rather than the rule. In thirty years, the principal engineering work on mailing lists has been on the administrative experience — the Mailman tool now offers a mailing list administrator nearly a hundred configurable options, many with multiple choices. However, the social experience of a mailing list over those three decades has hardly changed at all.

This is not because experimenting with social experience is technologically hard, but because it is conceptually foreign. The assumption that the computer is a box, used by an individual in isolation, is so pervasive that it is adhered to even when it leads to investment of programmer time in improving every aspect of mailing lists except the interaction that makes them worthwhile in the first place.

Once you regard the group mind as part of the environment in which the software runs, though, a universe of un-tried experimentation opens up. A social inventory of even relatively ancient tools like mailing lists reveals a wealth of untested models. There is no guarantee that any given experiment will prove effective, of course. The feedback loops of social life always produce unpredictable effects. Anyone seduced by the idea of social perfectibility or total control will be sorely disappointed, because users regularly reject attempts to affect or alter their behavior, whether by gaming the system or abandoning it. 

But given the breadth and simplicity of potential experiments, the ease of collecting user feedback, and, above all, the importance users place on social software, even a few successful improvements, simple and iterative though they may be, can create disproportionate value, as they have done with Craigslist and Slashdot, and as they doubtless will with other such experiments.

The Wal-Mart Future

Business-to-consumer retail Websites were going to be really big. Consumers were going to be dazzled by the combination of lower prices and the ability to purchase products from anywhere. The Web was supposed to be the best retail environment the world had ever seen.

This imagined future success created an astonishingly optimistic investment climate, where people believed that any amount of money spent on growth was bound to pay off later. You could build a $5 million Website, buy a Super Bowl ad for $70,000 per second, sell your wares at cost, give away shipping, and rest assured the markets would support you all the way.

The end of this ideal was crushing, as every advantage of B-to-C turned out to have a deflationary downside. Customers lured to your site by low prices could just as easily be lured away by lower prices elsewhere. And the lack of geographic segmentation meant that everyone else could reach your potential customers as easily as you could.

Like a scientist who invents a universal solvent and then has nowhere to keep it, online retail businesses couldn’t find a way to contain the deflationary currents they unleashed, ultimately diminishing their own bottom lines.

B-to-C: Not so bad after all

The interpreters of all things Internet began to tell us that ecommerce was much more than silly old B-to-C. The real action was going to be in B-to-B-to-C or B-to-G or B-to-B exchanges or even E-to-E, the newly minted “exchange-to-exchange” sectors.

So we have the newly received wisdom. B-to-C is a bad business to be in, and only ecommerce companies that operate far, far from the consumer will prosper.

This, of course, is nonsense. Selling to consumers cannot, by definition, be bad business. Individual companies can fail, but B-to-C as a sector cannot.

Money comes from consumers. If you sell screws to Seagate Technology, which sells hard disks to Dell Computer, which sells Web servers to Amazon.com, everybody in that chain is getting paid because Amazon sells books to consumers. Everything in B-to-B markets–steel, software, whatever–is being sold somewhere down the line to a company that sells to consumers.

When the market began punishing B-to-C stocks, it became attractive to see the consumer as the disposable endpoint of all this great B-to-B activity, but that is exactly backward. The B-to-B market is playing with the consumers’ money, and without those revenues flowing upstream in a daisy chain of accounts receivable and accounts payable, everything else dries up.

The fundamental problem to date with B-to-C is that it pursued an inflationary path to a deflationary ideal. The original assessment was correct: the Web is the best retail environment the world has ever seen, because it is deflationary. However, this means businesses with trendy loft headquarters, high burn rates, and $2 million Super Bowl ads are precisely the wrong companies to be building efficient businesses that lower both consumer prices and internal costs.

The future of B-to-C used to look like boo.com–uncontrolled spending by founders who thought that the stock market would support them no matter how much cash they burned pursuing growth.

I’ve seen the future…

Now the future looks like Wal-Mart, a company that enjoys global sales rivaled by only Exxon Mobil and General Motors.

Wal-Mart recently challenged standard operating procedure by pulling its Website down for a few weeks for renovation. While not everyone understood the brilliance of this move – fuckedcompany.com tut-tutted that “No pure-ecommerce company would ever do that” – anyone who has ever had the misfortune to retool a Website while leaving it open for business knows that it can cost millions more than simply taking the old site down first.

The religion of 24/7 uptime, however, forbids these kinds of cost savings.

Wal-Mart’s managers took the site down anyway, in the same way they’d close a store for remodeling, because they know that the easiest way to make a dollar is to avoid spending one, and because they don’t care how people do it in Silicon Valley. Running a B-to-C organization for the long haul means saving money wherever you can. Indeed, making a commitment to steadily lowering costs as well as prices is the only way to make B-to-C (or B-to-B or E-to-E, for that matter) work.

Despite all of the obstacles, the B-to-C sector is going to be huge. But it won’t be dominated by companies trying to spend their way to savings.

It’s too early to know if the Wal-Mart of the Web will be the same Wal-Mart we know. But it isn’t too early to know that the businesses that succeed in the B-to-C sector will be the ones that invest in holding down costs and force their suppliers to do the same, rather than the ones that invest in high-priced staffs and expensive ad campaigns.

The deflationary pressures the Web unleashes can be put to good use, but only by companies that embrace cost control for themselves, not just for their customers.

AOL’s Brilliant Climbdown

1/12/2000

The word “synergy” always gets a workout whenever two media behemoths join forces (usually accompanied by “unique” and “unprecedented”), and Monday’s press release announcing AOL’s acquisition of Time Warner delivered its fair share of breathless prose. But in practical terms, Monday’s deal was made only for the markets, not for the consumers. AOL and Time Warner are in very different parts of the media business, so there will be little of the cost-cutting that usually follows a mega-merger. Likewise, because AOL chief Steve Case has been waging a war against the regional cable monopolies, looking for the widest possible access to AOL content, it seems more likely that AOL-Time Warner will use its combined reach to open new markets instead of closing existing ones. This means that most of the touted synergies are little more than bundling deals and cross-media promotions — useful, but not earth-shaking. The real import of the deal is that its financial effects are so incomparably vast, and so well timed, that every media company in the world is feeling its foundations shaken by the quake.

The back story to this deal was AOL’s dizzying rise in valuation — 1500% in two years — which left it, like most dot-com stocks, wildly overvalued by traditional measures, and put the company under tremendous pressure to do something to lock in that value before the stock price returned to earth. AOL was very shrewd in working out the holdings of the new company. Although it was worth almost twice as much as Time Warner on paper, AOL stockholders will take a mere 55% of the new company. This is a brilliant way of backing down from an overvalued stock without causing investors to head for the aisles. Time Warner, meanwhile, got its fondest wish: Once it trades on the markets under the “AOL” ticker, it has a chance to achieve internet-style valuations of its offline assets. The timing was also impeccable; when Barry Diller tried a similar deal last year, linking USA Networks and Lycos, the market was still dreaming of free internet money and sent the stocks of both companies into a tailspin. In retrospect, people holding Lycos stock must be gnashing their teeth.

This is not to say, back in the real world, that AOL-Time Warner will be a good company. Gerald Levin, current CEO of Time Warner, will still be at the helm, and while all the traditional media companies have demonstrated an uncanny knack for making a hash of their web efforts, the debacle of Pathfinder puts Time Warner comfortably at the head of that class. One of the reasons traditional media stocks have languished relative to their more nimble-footed internet counterparts is that the imagined synergies from the round of media consolidations have largely failed to materialize, and this could end up sandbagging AOL as well. There is no guarantee that Levin will forgo the opportunity to limit intra-company competition: AOL might find its push for downloadable music slowed now that it’s joined at the hip to Warner Music Group. But no matter — the markets are valuing the sheer size of the combined companies, long before any real results are apparent, and it’s this market reaction (and not the eventual results from the merger) that will determine the repercussions of the deal.

With Monday’s announcement, the ground has shifted in favor of size. As “mass” becomes valuable in and of itself in the way that “growth” has been the historic mantra of internet companies, every media outlet, online or offline, is going to spend the next few weeks deciding whether to emulate this merger strategy or to announce some counter-strategy. A neutral stance is now impossible. There is rarely this much clarity in these sorts of seismic shifts — things like MP3s, Linux, web mail, even the original Mosaic browser, all snuck up on internet users over time. AOL-Time Warner, on the other hand, is page one from day one. Looking back, we’ll remember that this moment marked the end of the division of media companies into the categories of “old” and “new.” More important, we’ll remember that it marked the moment where the markets surveyed the global media landscape and announced that for media companies there is no such category as “too big.”

The Abuse of Intellectual Property Law

First published in FEED, 12/99.

1999 is shaping up to be a good year for lawyers. This fall saw the patent lawyers out in force, with Priceline suing Expedia over Priceline’s patented “name your own price” business model, and Amazon suing Barnes and Noble for copying Amazon’s “One-Click Ordering.” More recently, it’s been the trademark lawyers, with Etoys convincing a California court to issue a preliminary injunction against etoy.com, the Swiss art site, because the etoy.com URL might “confuse” potential shoppers. Never mind that etoy.com registered its URL years before Etoys existed: etoy has now been stripped of its domain name without so much as a trial, and is only accessible at its IP address (http://146.228.204.72:8080). Most recently, MIT’s journal of electronic culture, Leonardo, is being sued by a company called Transasia which has trademarked the name “Leonardo” in France, and is demanding a million dollars in damages on the grounds that search engines return links to the MIT journal, in violation of Transasia’s trademark. Lawsuits are threatening to dampen the dynamism of the internet because, even when they are obviously spurious, they add so much to the cost of doing business that soon amateurs and upstarts might not be able to afford to compete with anyone who can afford a lawyer.

The commercialization of the internet has been surprisingly good for amateurs and upstarts up until now. A couple of college kids with a well-managed bookmark list become Yahoo. A lone entrepreneur founds Amazon.com at a time when Barnes and Noble doesn’t even have a section called “internet” on its shelves, and now he’s Time’s Man of the Year. A solo journalist triggers the second presidential impeachment in US history. Over and over again, smart people with good ideas and not much else have challenged the pre-wired establishment and won. The idea that the web is not a battle of the big vs. the small but of the fast vs. the slow has become part of the web’s mystique, and big slow companies are being berated for not moving fast enough to keep up with their net-savvy competition. These big companies would do anything to find a way to use what they have — resources — to make up for what they lack — drive — and they may have found an answer to their prayers in lawsuits.

Lawsuits offer a return to the days of the fight between the big and the small, a fight the big players love. Ever since patents were expanded to include business models, patents have been applied to all sorts of ridiculous things — a patent on multimedia, a patent on downloading music, a patent on using cookies to allow shoppers to buy with one click. More recently, trademark law has become an equally fruitful arena for abuse. Online, a company’s URL is its business, and a trademark lawsuit which threatens a URL threatens the company’s very existence. In an adversarial legal system, a company can make as spurious an accusation as it likes if it knows its target can’t afford a defense. As odious as Amazon’s suit against Barnes and Noble is, it’s hard to shed any tears over either of them. etoy and Leonardo, on the other hand, are both not-for-profits, and defending what is rightfully theirs might bankrupt them. If etoy cannot afford the necessary (and expensive) legal talent, the preliminary injunction stripping them of their URL might as well be a final decision.

The definition of theft depends on the definition of property, and in an age when so much wealth resides in intelligence, it’s no wonder that those with access to the legal system are trying to alter the definition of intellectual property in their favor. Even Amazon, one of the upstarts just a few years ago, has lost so much faith in its ability to innovate that it is now behaving like the very dinosaurs it challenged in the mid-90’s. It’s also no surprise that both recent trademark cases — etoy and Leonardo — ran across national borders. Judges are more likely to rule in favor of their fellow citizens and against some far away organization, no matter what principle is at stake. The web, which grew so quickly because there were so few barriers to entry, has created an almost irresistible temptation to create legal barriers where no technological ones exist. If this spate of foolish lawsuits continues — and there is every indication that it will — the next few years will see a web where the law becomes a tool for the slow to retard the fast and the big to stymie the small.

Why Smart Agents Are A Dumb Idea

Smart agents are a dumb idea. Like several of the other undead ideas floating around (e.g. Digital Cash, Videophones), the idea of having autonomous digital agents that scour the net acting on your behalf seems so attractive that, despite a string of failures, agents enjoy periodic resurgences of interest. A new such surge seems to be beginning, with another round of stories in the press about how autonomous agents equipped with instructions from you (and your credit card number) are going to shop for your CDs, buy and sell your stocks, and arrange your travel plans. The primary thing smart agents seem to have going for them is the ‘cool’ factor (as in ‘This will work because it would be cool if it did.’) The primary thing they have going against them is that they do not work and they never will work, and not just because they are impractical, but because they have the wrong problems in their domain, and they solve them in the wrong order.

Smart agents — web crawling agents as opposed to stored preferences in a database — have three things going against them:

  • Agents’ performance degrades with network growth
  • Agents ask people to do what machines are good at (waiting) and machines to do what people are good at (thinking).
  • Agents make the market for information less efficient rather than more

These three barriers render the idea of agents impractical for almost all of the duties they are supposedly going to perform.

Consider these problems in context; the classic scenario for the mobile agent is the business trip. You have business in Paris (or, more likely, Peoria) and you need a flight, a hotel and a rental car. You instruct your agent about your dates, preferences, and price limits, and it scours the network for you, putting together the ideal package based on its interpretation of your instructions. Once it has secured this package, it makes the purchases on your behalf, and presents you with the completed travel package, dates, times and confirmation numbers in one fell swoop.

A scenario like this requires a good deal of hand waving to make it seem viable, to say nothing of worthwhile, because it assumes that the agent’s time is more valuable than your time. Place that scenario in a real world context – your boss tells you you need to be in Paris (Peoria) at the end of the week, and could you make the arrangements before you go home? You fire up your trusty agent, and run into the following problems:

  • Agents’ performance degrades with network growth

Upon being given its charge, the agent needs to go out and query all the available sources of travel information, issue the relevant query, digest the returned information and then run the necessary weighting of the results in real time. This is like going to Lycos and asking it to find all the resources related to Unix and then having it start indexing the Web. Forget leaving your computer to make a pot of coffee – you could leave your computer and make a ship in a bottle.

One of the critical weaknesses in the idea of mobile agents is that the time taken to run a query improves with processor speed (~2x every 18 months) but degrades with the amount of data to be searched (~2x every 4 months). A back-of-the-envelope calculation comparing Moore’s Law with traffic patterns at public internet interconnect points suggests that an autonomous agent’s performance for real-time requests should degrade roughly fivefold every year, or an order of magnitude every eighteen months. Even if you make optimistic assumptions about algorithm design and multi-threading and assume that data sources are always accessible, mere network latency across an exploding number of sources prohibits real-time queries. The right way to handle this problem is the mundane way – gather and index the material to be queried in advance.
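For the skeptical, here is the back-of-the-envelope arithmetic in a few lines of Python, using only the doubling periods assumed above; the final line just restates the annual figure over a year and a half.

    # How much slower does a real-time crawl get each year, if processor speed
    # doubles every 18 months while the data to be searched doubles every 4 months?
    hardware_speedup_per_year = 2 ** (12 / 18)      # ~1.6x
    data_growth_per_year = 2 ** (12 / 4)            # 8x

    slowdown_per_year = data_growth_per_year / hardware_speedup_per_year
    print(round(slowdown_per_year, 1))              # ~5.0x per year
    print(round(slowdown_per_year ** 1.5, 1))       # ~11x over 18 months: an order of magnitude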

  • Agents ask people to do what machines are good at (waiting) and machines to do what people are good at (thinking).

The usual answer to this problem with real-time queries is to assume that people are happy to ask a question hours or days in advance of needing the answer, a scenario that occurs with a frequency of approximately never. People ask questions when they want to know the answer – if they wanted the answer later, they would have asked the question later. Agents thus reverse the appropriate division of labor between humans and computers — in the agent scenario above, humans do the waiting while agents do the thinking. The humans are required to state the problem in terms rigorous enough to be acted on by a machine, and be willing to wait for the answer while the machine applies the heuristics. This is in keeping with the Central Dream of AI, namely that humans can be relegated to a check-off function after the machines have done the thinking.

As attractive as this dream might be, it is far from the realm of the possible. When you can have an agent which understands why 8 hours between trains in Paris is better than 4 hours between trains in Frankfurt but 8 hours in Peoria is worse than 4 hours in Fargo, then you can let it do all the work for you, but until then the final step in the process is going to take place in your neural network, not your agent’s.

  • Agents make the market for information less efficient

This is the biggest problem of all – agents rely on a wrong abstraction of the world. In the agent’s world, their particular owner is at the center, and there are a huge number of heterogeneous data sources scattered all around, and one agent makes thousands of queries outwards to perform one task. This ignores the fact that the data is neither static nor insensitive to the agent’s request. The agent is not just importing information about supply, it is exporting information about demand at the same time, thus changing the very market conditions it is trying to record. The price of a Beanie Baby rises as demand rises since Beanie Babies are an (artificially) limited resource, while the price of bestsellers falls with demand, since bookstores can charge lower prices in return for higher volume. Airline prices are updated thousands of times a day, currency exchange rates are updated tens of thousands of times a day. Net-crawling agents are completely unable to deal with markets for information like these; these kinds of problems require the structured data to be at the center, and a huge number of heterogeneous queries to be made inwards towards the centralized data, so that information about supply and demand is all captured in one place, something no autonomous agent can do.

Enter The Big Fat Webserver

So much of the history of the Internet, and particularly of the Web, has been about decentralization that the idea of distributing processes has become almost reflexive. Because the first decade of the Web has relied on PCs, which are by their very nature decentralized, it is hard to see that much of the Web’s effect has been in the opposite direction, towards centralization, and centralization of a particular kind – market-making.

The alternative to the autonomous mobile agent is the Big Fat Webserver, and while its superiority as a solution has often been overlooked next to the sexier idea of smart agents, B.F.W.s are A Good Thing for the same reasons markets are A Good Thing – they are the best way of matching supply with demand in real time. What you would really do when Paris (Peoria) beckons is go to Travelocity or some similar B.F.W. for travel planning. Travelocity runs on that unsexiest of hardware (the mainframe) in that unsexiest of architectures (centralized) and because of that, it works well everywhere the agent scenario works badly. You log into Travelocity and ask it a question about plane flights, get an answer right then, and decide.

B.F.W.s’ performance scales with database size, not network size

The most important advantage of B.F.W.s over agents is that B.F.W.s acquire and structure the data before a request comes in. Net-crawling agents are asked to identify sources, gather data and then query the results all at once, even though these functions require completely different strategies. By gathering and structuring data in advance, B.F.W.s remove the two biggest obstacles to agent performance before any request is issued.

B.F.W.s let computers do what computers are good at (gathering, indexing) and people do what people are good at (querying, deciding).

Propaganda to the contrary notwithstanding, when given a result set of sufficiently narrow range (a dozen items, give or take), humans are far better at choosing between different options than agents are. B.F.W.s provide the appropriate division of labor, letting the machine do the coarse-grained sorting, which has mostly to do with excluding the worst options, while letting the humans make the fine-grained choices at the end.

B.F.W.s make markets

This is the biggest advantage of B.F.W.s over agents — databases open to heterogeneous requests are markets for information. Information about supply and demand is handled at the same time, and the transaction takes place as close to real time as database processing plus network latency can allow.

For the next few years, B.F.W.s are going to be a growth area. They solve the problems previously thought to be in the agents’ domain, and they solve them better than agents ever could. Where the agents make the assumption of a human in the center, facing outward to a heterogeneous collection of data which can be gathered asynchronously, B.F.W.s make the assumption that is more in line with markets (and reality) – a source of data (a market, really) in the center, with a collection of humans facing inwards and making requests in real time. Until someone finds a better method of matching supply with demand than real-time markets, B.F.W.s are a better answer than agents every time.

Pretend vs. Real Economy

First published in FEED, 06/99.

The Internet happened to Merrill Lynch last week, and it cost them a couple billion dollars — when Merrill announced its plans to open an online brokerage after years of deriding the idea, its stock price promptly fell by a tenth, wiping out $2 billion in its market capitalization. The internet’s been happening like that to a lot of companies lately — Barnes and Noble’s internet stock is well below its recent launch price, Barry Diller’s company had to drop its Lycos acquisition because of damage to the stock prices of both companies, and both Borders and Compaq dumped their CEOs after it became clear that they were losing internet market share. In all of these cases, those involved learned the hard way that the internet is a destroyer of net value for traditional businesses because the internet economy is fundamentally at odds with the market for internet stocks.

The internet that the stock market has been so in love with (call it the “Pretend Internet” for short) is all upside — it enables companies to cut costs and compete without respect to geography. The internet that affects the way existing goods and services are sold, on the other hand (call it the “Real Internet”), forces companies to cut profit margins, and exposes them to competitors without respect to geography. On the Pretend Internet, new products will pave the way for enormous profitability arising from unspecified revenue streams. Meanwhile, on the Real Internet, prices have fallen and they can’t get up. There is a rift here, and its fault line appears wherever offline companies like Merrill tie their stock to their internet offerings. Merrill currently pockets a hundred bucks every time it executes a trade, and when investors see that Merrill online is only charging $30 a trade, they see a serious loss of revenue. When they go on to notice that $30 is something like three times the going rate for an internet stock trade, they see more than loss of revenue, they see loss of value. When a company can cut its prices 70% and still be three times as expensive as its competitors, something has to give. Usually that is the company’s stock price.

The internet is the locus of the future economy, and its effect is the wholesale transfer of information and choice (read: power and leverage) from producer to consumer. Producers (and the stock market) prefer one-of-a-kind businesses that can force their customers to accept continual price increases for the same products. Consumers, on the other hand, prefer commodity businesses where prices start low and keep falling. On the internet, consumers have the upper hand, and as a result, anybody who profited from offline inefficiencies — it used to be hard work to distribute new information to thousands of people every day, for example — is going to see much of their revenue destroyed with no immediate replacement in sight.

This is not to say that the internet produces no new value — on the contrary, it produces enormous value every day. It’s just that most of the value is concentrated in the hands of the consumer. Every time someone uses the net to shop on price (cars, plane tickets, computers, stock trades), the money they didn’t spend is now available for other things. The economy grows even as profit margins shrink. In the end, this is what Merrill’s missing market cap tells us — the internet is now a necessity, but there’s no way to use the internet without embracing consumer power, and any business which profits from inefficiency is going to find this embrace more constricting than comforting. The effects of easy price comparison and global reach are going to wring inefficiency (read: profits) out of the economy like a damp dishrag, and as the market comes to terms with this equation between consumer power and lower profit margins, $2 billion of missing value is going to seem like a drop in the bucket.

Who Are You Paying When You’re Paying Attention?

First published in ACM, 06/99.

Two columns ago, in “Help, the Price of Information Has Fallen and It Can’t Get Up”, I argued that traditional pricing models for informational goods (goods that can theoretically be transmitted as pure data – plane tickets, stock quotes, classified ads) fall apart on the net because so much of what’s actually being paid for when this data is distributed is not the content itself, but its packaging, storage and transportation. This content is distributed either as physical packages, like books or newspapers, or on closed (pronounced ‘expensive’) networks, like Lexis/Nexis or stock tickers, and its cost reflects both these production and distribution expenses and the scarcity that is created when only a few companies can afford to produce and distribute said content. 

The net destroys both those effects, first by removing the need to print and distribute physical objects (online newspapers, e-tickets and electronic greeting ‘cards’ are all effortless to distribute relative to their physical counterparts), and second by removing many of the barriers to distribution (only a company with access to a printing press can sell classified ads offline, but on the network all it takes is a well-trafficked site), so that many more companies can compete with one another. 

The net effect of all this, pun intended, is to remove the ability to charge direct user fees for many kinds of online content which people are willing to shell out for offline. This does not mean that the net is valueless, however, or that users can’t be asked to pay for content delivered over the Internet. In fact, most users willingly pay for content now. The only hitch is that what they’re paying isn’t money. They’re paying attention. 

THE CURRENCY EXCHANGE MODEL

Much of the current debate surrounding charging user fees on the Internet assumes that content made available over the network follows (or should follow) the model used in the print world – ask users to pay directly for some physical object which contains the content. In some cases, the whole cost of the object is borne by the users, as with books, and in other cases users are simply subsidizing the part of the cost not paid for by advertisements, as with newspapers and magazines. There is, however, another model, one more in line with the things the net does well, where the user pays no direct fees but the providers of the content still get paid – the television model. 

TV networks are like those currency exchange booths for tourists. People pay attention to the TV, and the networks collect this attention and convert it into money at agreed upon rates by supplying it in bulk to their advertisers, generally by calculating the cost to the advertiser of reaching a thousand viewers. The user exchanges their attention in return for the content, and the TV networks exchange this attention for income. These exchange rates rise and fall just like currency markets, based on the perceived value of audience attention and the amount of available cash from the advertiser. 

This model, which generates income by making content widely available over open networks without charging user fees, is usually called ‘ad-supported content’, and it is currently very much in disfavor on the Internet. I believe, however, that not only can ad-supported content work on the Internet, it can’t not work. Its success is guaranteed by the net’s very makeup – the net is simply too good at gathering communities of interest, too good at freely distributing content, and too lousy at keeping anything locked inside subscription networks, for it to fail. Like TV, the net is better at getting people to pay attention than anything else. 

OK SHERLOCK, SO IF THE IDEA OF MAKING MONEY ON THE INTERNET BY CONVERTING ATTENTION INTO INCOME IS SO BRILLIANT, HOW COME EVERYONE ELSE THINKS YOU’RE WRONG?

It’s a question of scale and time horizons. 

One of the reasons for the skepticism about applying the TV model to the Internet is the enormous gulf between the two media. This is reflected in both the relative sizes of their audiences and the incomes of those businesses – TV is the quintessential mass medium, commanding tens of millions more viewers than the net does. TV dwarfs the net in both popularity and income. 

Skeptics eyeing the new media landscape often ask “The Internet is fine as a toy, but when will it be like TV?” By this they generally mean ‘When will the net have a TV-style audience with TV-style profits?’ The question “When will the net be like TV” is easy to answer – ‘Never’. The more interesting question is when will TV be like the net, and the answer is “Sooner than you think”. 

A BRIEF DIGRESSION INTO THE BAD OLD DAYS OF NETWORK TV

Many people have written about the differences between the net and television, usually focussing on the difference between broadcast models and packet switched models like multicasting and narrowcasting, but these analyses, while important, overlook one of the principal differences between the two media. The thing that turned TV into the behemoth we know today isn’t broadcast technology but scarcity. 

From the mid-1950s to the mid-1980s, the US national TV networks operated at an artificially high profit. Because the airwaves were deemed a public good, their use was heavily regulated by the FCC, and as a consequence only three companies got to play on a national level. With this FCC-managed scarcity in place, the law of supply and demand worked in the TV networks favor in ways that most industries can only dream of – they had their own private government created and regulated cartel. 

It is difficult to overstate the effect this had on the medium. With just three players, a TV show of merely average popularity would get a third of the available audience, so all the networks were locked in a decades-long three-way race for the attention of the ‘average’ viewer. Any business which can get the attention of a third of its 100 million+ strong audience by producing a run-of-the-mill product, while being freed from any other sort of competition by the government, has a license to print money and a barrel of free ink. 

SO WHAT HAPPENED?

Cable happened – the TV universe has been slowly fracturing for the last 20 years or so, with the last 5 years seeing especially sharp movement. With growing competition from cable (and later satellite, microwave, a 4th US TV network, and most recently the Internet) draining TV watching time, the ability of television to command a vast audience with average work has suffered badly. The two most popular US shows of this year each struggle to get the attention of a fifth of the possible viewers, called a “20 share” in TV parlance, where 20 is the percentage of the possible audience tuning in. 

The TV networks used to cancel shows with a 20 share, and now that’s the best they can hope for from their most popular shows, and it’s only going to get worse. As you might imagine, this has played hell with the attention-to-cash conversion machine. When the goal was creating a multiplicity of shows for the ‘average’ viewer, pure volume was good, but in the days of the Wind-Chill Channel and the Abe Vigoda Channel, the networks have had to turn to audience segmentation, to not just counting numbers but counting numbers of women, or teenagers, or Californians, or gardeners, who are watching certain programs. 

The TV world has gone from three channels of pure mass entertainment to tens or even hundreds of interest-specific channels, with attention being converted to cash based not solely on the total number of people they attract, but also on how many people with specific interests, or needs, or characteristics are watching. 

Starting to sound a bit like the Web, isn’t it? 

THE TV PEOPLE ARE GOING TO RUE THE DAY THEY EVER HEARD THE WORD ‘DIGITAL’

All this is bad enough from the TV networks’ point of view, but it’s a mere scherzo compared to the coming effects of digitality. As I said in an earlier column, apropos CD-ROMs, “Information has been decoupled from objects. Forever.”, and this is starting to be true of information and any form of delivery. A TV is like a book in that it is both the mechanism of distribution and of display – the receiver, decoder and screen travel together. Once television becomes digital, this is over, as any digital content can be delivered over any digital medium. “I just saw an amazing thing on 60 Minutes – here, I’ll mail it to you”, “What’s the URL for ER again?”, “Once everybody’s here, we’ll start streaming Titanic”. Digital Baywatch plus frame relay is the end of ‘appointment TV’. 

Now I am not saying that the net will surpass TV in size of audience or income anytime soon, or that the net’s structure, as is, is suitable for TV content, as is. I am saying that the net’s method of turning attention into income, by letting audience members select what they’re interested in and when, where and how to view it, is superior to TV’s, and I am saying that as the net’s bandwidth and quality of service increase and television digitizes, many of the advantages TV had move over to the network. 

The Internet is a massive medium, but it is not a mass medium, and this gives it an edge as the scarcity that TV has lived on begins to seriously erode. The fluidity with which the net apportions content to those interested in it without wasting the time of those not interested in it makes it much more suited in the long run for competing for attention in the increasingly fractured environment for television programming, or for any content delivery for that matter. 

In fact, most of the experiments with TV in the last decade – high-definition digital content, interactive shopping and gaming, community organization, and the evergreen ‘video on demand’ – are all things that can be better accomplished by a high bandwidth packet switched network than by traditional TV broadcast signals. 

In the same way that AT&T held onto over two-thirds of its long-distance market for a decade after the breakup, only to see it quickly fall below 50% in the last three years, the big 3 TV networks have been coasting on that same kind of inertia. Only recently, prodded by cable and the net, is the sense memory of scarcity starting to fade, and in its place is arising a welter of competition for attention, one that the Internet is poised to profit from enormously. A publicly accessible two-way network that can accommodate both push and pull and can transmit digital content with little regard to protocol has a lot of advantages over TV as an ‘attention to income’ converter, and in the next few years those advantages will make themselves felt. I’m not going to bet on when overall net income surpasses overall TV income, but in an arena where paying attention is the coin of the realm, the net has a natural edge, and I feel confident in predicting that revenues from content will continue to double annually on the net for the foreseeable future, while network TV will begin to stagnate, caught flat in a future that looks more like the Internet than it does like network TV.

Free PC Business Models

3/5/1999

“Get a free PC if you fit our demographic profile!” “Get a free PC if you subscribe to our ISP for 3 years!” “Get a free PC if you spend $100 a month in our online store!”

Suddenly free PCs are everywhere — three offers for free PCs in the last month alone, and more on the way. Is this a gimmick or a con game? Has the hardware industry finally decided to emulate online businesses by having negative revenues? More importantly, is this the step that will lead to universal network access? When PCs and network access are free, what’s to stop everyone from participating in the digital revolution? Could it be that the long-lamented gap between the info-haves and the info-have-nots is soon to be eliminated by these ambitious free PC schemes?

The three offers rolled out last month — from the start-up free-pc.com, the ISP empire.net, and the online mall shopss.com respectively — are actually the opening salvo of the biggest change in the computer industry since the introduction of the PC in the late 70’s. The free PC marks the end of the transition from PCs as standalone boxes to PCs as network-connected computers, and from PC as product to PC as service. But however revolutionary the underlying business model may be, it is unlikely to change the internet from a limited to a mass medium, at least for the coming years. To understand why, though, you need to understand the underlying economics of the free PC.

We’ve gotten used to cheaper and cheaper computers over the years, but this is different. Free isn’t even the same kind of thing as cheap – a $399 computer is cheaper than a $999 computer, but a free computer isn’t just cheaper still, it’s a different kind of computer altogether. $999 computers and $399 computers are both products; a free computer is a service.

To see why this transition from computer as product to computer as service is a necessary condition for a free PC, consider the forces making the free PC possible in the first place. The first is cost, of course: PC prices have been falling for years, and the recent fall over the last three years from $999 to $399 has been especially precipitous.

A $399 PC is still not free, however, and no matter how far prices fall, they will never fall to zero, so the second part of the equation is cost savings. A $399 PC is cheap enough that companies can give them away and write off the cost, if they can make back the money elsewhere. A free PC is really a loan on expected future revenues, revenues derived from customer attention of one sort or another.

ABC’s TV slogan, “Hello? It’s free” isn’t quite right. You may not be paying ABC money when you watch “Nightline,” but you’re still paying attention, and they sell that attention to their advertisers. TV, radio and a variety of other media have figured out a way to distribute free programming and profit from consumers’ attention. The low price of computers and their growing use as media outlets and not just as fancy typewriters now brings them into this realm as well. Companies can absorb the cost of giving away hardware if they can profit from getting you to pay attention.

To see how a company might do this, consider these three bits of marketing jargon – “Customer Acquisition Cost,” “Customer Retention,” and “Lifetime Value of a Customer.” Each of these is a possible source for making back that $399, and in fact, each of the current free PC offers uses one or more of these sources to underwrite their offering.

Customer Acquisition Cost is simply the marketing budget divided by the number of new customers. If I spend a million dollars in advertising and get 2000 new customers, then my Customer Acquisition Cost is $500 per customer. If, by giving away a $399 PC I can cut my other customer acquisition costs to $100, I come out a dollar ahead per customer, even though I’ve given away hardware to get there. This is the model free-pc.com is following, where they are acquiring an audience cheaply as a first step towards selling that audience’s attention to their advertisers.

Customer Retention is a way of expressing the difficulty of keeping a customer once you get them. Telephone companies, for example, are especially vulnerable to this — customers switch long distance services all the time. If I can give away a $399 PC and keep a customer longer, I make the money back by increasing customer retention. (This is the rationale behind the “free cell phone” as well.) The New England ISP empire.net is following this model — in return for your computer, they have you as a customer for 3 years — guaranteed retention.

Lifetime Value of a Customer is a way of thinking about repeat business — if the average customer at a grocery store spends a hundred dollars a month and lives in the neighborhood for 5 years, then the lifetime value of the average customer is $6000. If a company can give away a $399 PC in order to raise the recipient’s lifetime value, they can absorb the initial cost and make it back over time. This is the model shopss.com is following, where you get a free iMac in return for agreeing to spend $100 a month at their online store for three years.
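The arithmetic behind all three models is the same question: does the value recovered from the customer exceed the $399 subsidy? A minimal sketch in Python, using the figures from the examples above where they exist, and clearly invented margins where they don’t:

    PC_COST = 399    # the giveaway being underwritten

    def acquisition_model(old_cac=500, new_cac=100):
        # savings in customer acquisition cost, net of the giveaway
        return (old_cac - new_cac) - PC_COST          # 500 - 100 - 399 = 1 dollar ahead

    def retention_model(monthly_margin=12, months_locked_in=36):
        # margin earned over a guaranteed subscription term, net of the giveaway;
        # the $12/month margin is an invented figure, the 3-year term is from the text
        return monthly_margin * months_locked_in - PC_COST

    def lifetime_value_model(monthly_spend=100, months=36, margin=0.15):
        # margin on committed store spending, net of the giveaway;
        # the 15% gross margin is invented, the $100/month and 3 years are from the text
        return monthly_spend * months * margin - PC_COST

Each model recovers the same $399 from a different line item: lower marketing spend, guaranteed subscription revenue, or committed store spending.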

It’s worth pointing out that all three models depend on one critical element, without which none of them would work: the Internet. Every one of these plans starts with the assumption that an Internet connection for these PCs is a basic feature, not merely an option. This is the final step, the thing that turns the computer from product to service. Without the Internet, there is no way to stay connected to your users. Every business model for the free PC treats it as an up-front loan that the user will pay back over time. This requires that a computer be a connected media outlet, not an isolated box, and the Internet is the thing that connects computers together.

A few years ago, engineers at Sun and Oracle began talking about creating an NC — a “network computer” — but while they’ve been talking, the Internet has been quietly turning all PCs into NCs, without needing any change to the hardware at all. Everyone giving away free PCs (and there will be more, many more, following this model over the next few months) has recognized that the real PC revolution is not in hardware but in use. The PC’s main function has slowly but decisively passed from standalone computing to network connectivity over the 20 years of its life. We’ve already seen that transformation alter the business models of the software world — the free PC movement gives us our first glimpse of how it might transform hardware as well.

If personal computers are now primarily devices of connection rather than computation, this raises the larger question of access. There is a technotopian dream of universal access floating around, where the Internet connects every man, woman and child together in what John Perry Barlow memorably called “the re-wiring of human consciousness.” For years now, old-style progressives have been arguing that we need public-sector investment to make the digital revolution more democratic, while free-market devotees have contended that Moore’s Law alone would take care of widening the wired world. Is the free PC the development that will finally lead us into this promised land of universal connectivity?

No.

There’s no such thing as a free lunch. The free PC is not a free product, but a short-term loan in return for your use of a service. The free PC, in all its incarnations, is really a “marketing supported PC” (a more accurate albeit less catchy description). Its cost is supported by your value as a consumer of marketing, or of products bought online. The inverse is also true: people who do not have enough value as consumers won’t get free PCs. (Imagine answering one of those ads for a free cell phone, and telling the phone company that you wanted them to send you the phone itself, but you didn’t want to subscribe to their service.)

For the whole short history of the commercial Internet (1991-present), it has been an inverted medium, with more representation among higher income brackets. This means that for a marketing supported medium, the most valuable clients for a free PC are, paradoxically, people who already own a PC. Indeed, free-pc.com makes this assumption when they ask the prospective recipient of a free PC how many computers they already own. Access to the Internet is still growing rapidly, but it is growing in stages — first it was populated by computer scientists, then academics, then “early adopters,” and so on. The free PC doesn’t change the Internet’s tendency to expand in phases, it’s just the next phase. The free PC will extend the reach of the Internet, but it will start by reaching the people who are the closest to getting there already, and only expand beyond that audience over time.

The free PC isn’t charity, but it also isn’t a temporary marketing gimmick. It is simply the meeting of two long-term trends — falling hardware costs and the rising value of connecting a customer to the Internet. In retrospect, it was inevitable that at some point those two figures would cross — that the Internet would become valuable enough, and PCs cheap enough, that the PC could become like a cell phone, a dangle to get people to subscribe to a service. The number of people who can afford to connect to the Internet will continue to rise, not instantly but inexorably, as the value of the Internet itself rises, and the cost of hardware continues to fall. 1999 will be remembered as the year in which this shift from computer as product to computer as service began.