Windows 2000, just beginning to ship, and slated for a high-profile launch next month, will fundamentally alter the nature of Windows’ competition with Linux, its only real competitor. Up until now, this competition has focused on two separate spheres: servers and desktops. In the server arena, Linux is largely thought to have the upper hand over Windows NT, with a smaller installed base but much faster growth. On the desktop, though, Linux’s success as a server has had as yet little effect, and the ubiquity of Windows remains unchallenged. With the launch of Windows 2000, the battle will no longer be fought in two separate arenas, because just as rising chip power destroyed the distinction between PCs and “workstations,” growing connectivity is destroying the distinction between the desktop and the server. All operating systems are moving in this direction, but the first one to catch the average customer’s eye will rock the market.
The fusion of desktop and server, already underway, is turning the internet inside out. The current network is built on a “content in the center” architecture, where a core of always-on, always-connected servers provides content on demand to a much larger group of PCs which only connect to the net from time to time (mostly to request content, rarely to provide it). With the rise of faster and more stable PCs, however, the ability for a desktop machine to take on the work of a server increases annually. In addition, the newer networking services like cable modems and DSL offer “always on” connectivity — instead of dialing up, their connection to the internet is (at least theoretically) persistent. Add to these forces an increasing number of PCs in networked offices and dorms, and you have the outlines of a new “content at the edges” architecture. This architecture is exemplified by software like Napster or Hotline, designed for sharing MP3s, images, and other files from one PC to another without the need for a central server. In the Napster model, the content resides on the PCs at the edges of the net, and the center is only used for bit-transport. In this “content at the edges” system, the old separation between desktop and server vanishes, with the PC playing both functions at different times. This is the future, and Microsoft knows it.
In the same way Windows 95 had built-in dial-up software, Windows 2000 has a built-in Web server. The average user has terrible trouble uploading files, but would like to use the web to share their resumes, recipes, cat pictures, pirated music, amateur porn, and PowerPoint presentations, so Microsoft wants to make running a web server with Windows 2000 as easy as establishing a dial-up connection was with Windows 95. In addition to giving Microsoft potentially huge competitive leverage over Linux, this desktop/server combo will also allow them to better compete with the phenomenally successful Apache web server and give them a foothold for establishing Microsoft Word, rather than HTML, as the chosen format for web documents — as long as both sender and receiver are running Windows 2000.
The Linux camp’s response to this challenge is unclear. Microsoft has typically employed an “attack from below” strategy, using incremental improvements to an initially inferior product to erode a competitor’s advantage. Linux has some defenses against this strategy — the Open Source methodology gives Linux the edge in incremental improvements, and the fact that Linux is free gives Microsoft no way to win a “price vs. features” comparison — but the central fact remains that as desktop computers become servers as well, Microsoft’s desktop monopoly will give them a huge advantage, if they can provide (or even claim to provide) a simple and painless upgrade. Windows 2000 has not been out long, it is not yet being targeted at the home user, and developments on the Linux front are coming thick and fast, but the battle lines are clear: The fusing of the functions of desktop and server represents Microsoft’s best (and perhaps last) chance to prevent Linux from toppling its monopoly.
Freedom of speech in the computer age was thrown dramatically into question by a pair of recent stories. The first was the news that Ford would be offering its entire 350,000-member global work force an internet-connected computer for $5 a month. This move, already startling, was made more so by the praise Ford received from Stephen Yokich, the head of the UAW, who said “This will allow us to communicate with our brothers and sisters from around the world.” This display of unanimity between management and the unions was in bizarre contrast to an announcement later in the week concerning Northwest Airlines flight attendants. US District Judge Donovan Frank ruled that the home PCs of Northwest Airlines flight attendants could be confiscated and searched by Northwest, which was looking for evidence of email organizing a New Year’s sickout. Clearly corporations do not always look favorably on communication amongst their employees — if the legal barriers to privacy on a home PC are weak now, and if a large number of workers’ PCs will be on loan from their parent company, the freedom of speech and relative anonymity we’ve taken for granted on the internet to date will be seriously tested, and the law may be of little help.
Freedom of speech evangelists tend to worship at the altar of the First Amendment, but many of them haven’t actually read it. As with many sacred documents, it is far less sweeping than people often imagine. Leaving aside the obvious problem of its applicability outside the geographical United States, the essential weakness of the Amendment at the dawn of the 21st century is that it only prohibits governmental interference in speech; it says nothing about commercial interference in speech. Though you can’t prevent people from picketing on the sidewalk, you can prevent them from picketing inside your place of business. This distinction relies on the adjacency of public and private spaces, and the First Amendment only compels the Government to protect free speech in the public arena.
What happens if there is no public arena, though? Put another way, what happens if all the space accessible to protesters is commercially owned? These questions call to mind another clash between labor and management in the annals of US case law, Hudgens v. NLRB (1976), in which the Supreme Court ruled that private space falls under First Amendment control only if it has “taken on all the attributes of a town” (a doctrine which arose to cover worker protests in company towns). However, the attributes the Court requires in order to consider something a town don’t map well to the internet, because they include municipal functions like a police force and a post office. By that measure, has Yahoo taken on all the functions of a town? Has AOL? If Ford provides workers their only link with the internet, has Ford taken on all the functions of a town?
Freedom of speech is following internet infrastructure, where commercial control blossoms and Government input withers. Since Congress declared the internet open for commercial use in 1991, there has been a wholesale migration from services run mostly by state colleges and Government labs to services run by commercial entities. As Ford’s move demonstrates, this has been a good thing for internet use as a whole — prices have plummeted, available services have mushroomed, and the number of users has skyrocketed — but we may be building an arena of all private stores and no public sidewalks. The internet is clearly the new agora, but without a new litmus test from the Supreme Court, all online space may become the kind of commercial space where the protections of the First Amendment will no longer reach.
The word “synergy” always gets a workout whenever two media behemoths join forces (usually accompanied by “unique” and “unprecedented”), and Monday’s press release announcing AOL’s acquisition of Time Warner delivered its fair share of breathless prose. But in practical terms, Monday’s deal was made only for the markets, not for the consumers. AOL and Time Warner are in very different parts of the media business, so there will be little of the cost-cutting that usually follows a mega-merger. Likewise, because AOL chief Steve Case has been waging a war against the regional cable monopolies, looking for the widest possible access to AOL content, it seems more likely that AOL-Time Warner will use its combined reach to open new markets instead of closing existing ones. This means that most of the touted synergies are little more than bundling deals and cross-media promotions — useful, but not earth-shaking. The real import of the deal is that its financial effects are so incomparably vast, and so well timed, that every media company in the world is feeling its foundations shaken by the quake.
The back story to this deal was AOL’s dizzying rise in valuation — 1500% in two years — which left it, like most dot-com stocks, wildly overvalued by traditional measures, and put the company under tremendous pressure to do something to lock in that value before its stock price returned to earth. AOL was very shrewd in working out the holdings of the new company. Although it was worth almost twice as much as Time Warner on paper, AOL stockholders will take a mere 55% of the new company. This is a brilliant way of backing down from an overvalued stock without causing investors to head for the exits. Time Warner, meanwhile, got its fondest wish: Once it trades on the markets under the “AOL” ticker, it has a chance to achieve internet-style valuations of its offline assets. The timing was also impeccable; when Barry Diller tried a similar deal last year, linking USA Networks and Lycos, the market was still dreaming of free internet money and sent the stocks of both companies into a tailspin. In retrospect, people holding Lycos stock must be gnashing their teeth.
This is not to say, back in the real world, that AOL-Time Warner will be a good company. Gerald Levin, current CEO of Time Warner, will still be at the helm, and while all the traditional media companies have demonstrated an uncanny knack for making a hash of their web efforts, the debacle of Pathfinder puts Time Warner comfortably at the head of that class. One of the reasons traditional media stocks have languished relative to their more nimble-footed internet counterparts is that the imagined synergies from the last round of media consolidations have largely failed to materialize, and this could end up sandbagging AOL as well. There is no guarantee that Levin will forgo the opportunity to limit intra-company competition: AOL might find its push for downloadable music slowed now that it’s joined at the hip to Warner Music Group. But no matter — the markets are valuing the sheer size of the combined companies, long before any real results are apparent, and it’s this market reaction (and not the eventual results from the merger) that will determine the repercussions of the deal.
With Monday’s announcement, the ground has shifted in favor of size. As “mass” becomes valuable in and of itself, much as “growth” has been the historic mantra of internet companies, every media outlet, online or offline, is going to spend the next few weeks deciding whether to emulate this merger strategy or to announce some counter-strategy. A neutral stance is now impossible. There is rarely this much clarity in this sort of seismic shift — things like MP3s, Linux, web mail, even the original Mosaic browser, all snuck up on internet users over time. AOL-Time Warner, on the other hand, is page one from day one. Looking back, we’ll remember that this moment marked the end of the division of media companies into the categories of “old” and “new.” More important, we’ll remember that it marked the moment when the markets surveyed the global media landscape and announced that for media companies there is no such category as “too big.”
First published on O’Reilly’s OpenP2P, 12/01/2000.
As the excitement over P2P grew during the past year, it seemed that decentralized architectures could do no wrong. Napster and its cousins managed to decentralize costs and control, creating applications of seemingly unstoppable power. And then researchers at Xerox brought us P2P’s first crisis: freeloading.
Freeloading is the tendency of people to take resources without paying for them. In the case of P2P systems, this means consuming resources provided by other users without providing an equivalent amount of resources (if any) back to the system. The Xerox study of Gnutella (now available at First Monday) found that “… a large proportion of the user population, upwards of 70 percent, enjoy the benefits of the system without contributing to its content,” and labeled the problem a “Tragedy of the Digital Commons.”
The Tragedy of the Commons is an economic problem with a long pedigree. As Mojo Nation, a P2P system set up to combat freeloading, states in its FAQ:
Other file-sharing systems are plagued by “the tragedy of the commons,” in which rational folks using a shared resource eat the resources to death. Most often, the “Tragedy of the Commons” refers to farmers and pasture, but technology journalists are writing about users who download and download but never contribute to the system.
To combat this problem, Mojo Nation proposes creating a market for computational resources — disk space, bandwidth, CPU cycles. In its proposed system, if you provide computational resources to the system, you earn Mojo, a kind of digital currency. If you consume computational resources, you spend the Mojo you’ve earned. This system is designed to keep freeloaders from consuming more than they contribute to the system.
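In outline, the mechanism is simple bookkeeping: providing resources credits a per-user balance, consuming them debits it, and an empty balance shuts off consumption. A minimal sketch of that idea follows; the class, names, and numbers are invented for illustration and are not Mojo Nation’s actual implementation.

```python
class ResourceLedger:
    """Toy model of a Mojo-style resource market: contribute to earn
    credit, spend credit to consume. All units are arbitrary."""

    def __init__(self):
        self.balances = {}  # user -> credit balance

    def contribute(self, user, units):
        """Credit a user for resources (disk, bandwidth, CPU) provided."""
        self.balances[user] = self.balances.get(user, 0) + units

    def consume(self, user, units):
        """Debit a user for resources consumed; refuse if the balance is short."""
        balance = self.balances.get(user, 0)
        if balance < units:
            return False  # the would-be freeloader is turned away
        self.balances[user] = balance - units
        return True


ledger = ResourceLedger()
ledger.contribute("alice", 10)      # Alice serves 10 units of files
print(ledger.consume("alice", 4))   # True: she may download 4 units
print(ledger.consume("bob", 1))     # False: Bob has contributed nothing
```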
A very flawed premise
Mojo Nation is still in beta, but it already faces two issues — one fairly trivial, one quite serious. The trivial issue is that the system isn’t working out as planned: Users are not flocking to the system in sufficient numbers to turn it into a self-sustaining marketplace.
The serious issue is that the system will never work for public file-sharing, not even in theory, because the problem of users eating resources to death does not pose a real threat to systems such as Napster, and the solution Mojo Nation proposes would destroy the very things that allow file-sharing systems like Napster to work.
The Xerox study on Gnutella makes broad claims about the relevance of its findings, even as Napster, which adds more users each day than the entire installed base of Gnutella, is growing without suffering from the study’s predicted effects. Indeed, Napster’s genius in building an architecture that understands the inevitability of freeloading and works within those constraints has led Dan Bricklin to christen Napster’s effects “The Cornucopia of the Commons.”
Systems that set out to right the imagined wrongs of freeloading are more marketing efforts than technological ones, in that they attempt to inflame our sense of injustice at the users who download and download but never contribute to the system. This plays well in the press, of course, garnering headlines like “A revolutionary file-sharing system could spell the end for dot-communism and Net leeches” or labeling P2P users “cyberparasites.”
This sense of unfairness, however, obscures two key aspects of P2P file-sharing: the economics of digital resources, which are either replicable or replenishable; and the ways the selfish nature of user participation drives the system.
One from one equals two
Almost without fail, anyone addressing freeloading refers to the aforementioned “Tragedy of the Commons.” This is an economic parable illustrating the threat to commonly held resources. Imagine a pasture owned in common by a group of farmers who graze their sheep there. In this situation, it is in the farmers’ collective best interest to maintain herds of moderate size in order to keep the pasture from being overgrazed. However, it is in the best interest of each individual farmer to increase the size of his herd as much as possible, because the shared pasture is a free resource.
Even worse, although each farmer recognizes that all of them should forgo larger herds for the good of the group, they also recognize that every other farmer has the same incentive to expand. In this scenario, it is in each individual’s interest to take as much of the common resource as they can, partly because they benefit directly and partly because if they don’t, someone else will, even though the result is a bad outcome for the group as a whole.
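The incentive structure can be put in toy numbers; the formula and herd sizes below are invented purely for illustration, not drawn from the original parable.

```python
def per_sheep_value(total_sheep):
    """Grass per sheep shrinks as the common pasture gets more crowded
    (an illustrative formula, not part of the original parable)."""
    return max(0, 12 - total_sheep)

def payoff(my_sheep, others_sheep):
    return my_sheep * per_sheep_value(my_sheep + others_sheep)

# Four farmers, two sheep each: 8 sheep on the commons.
print(payoff(2, 6))   # 8  -> each farmer's payoff if everyone shows restraint
print(payoff(3, 6))   # 9  -> the defector gains by adding a sheep...
print(payoff(2, 7))   # 6  -> ...while every other farmer loses
print(payoff(3, 9))   # 0  -> and if all four defect, the pasture is ruined
```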
The Tragedy of the Commons is a simple, compelling illustration of what can happen to commonly owned resources. It is also almost completely inapplicable to the digital world.
Start with the nature of consumption. If your sheep takes a mouthful of grass from the common pasture, the grass exits the common pasture and enters the sheep, a net decrease in commonly accessible resources. If you take a copy of the Pink Floyd song “Sheep” from another Napster user, that song is not deleted from that user’s hard drive. Furthermore, since your copy also exists within the Napster universe, this sort of consumption creates commonly accessible resources, rather than destroying them. The song is replicated; it is not consumed. Thus the Xerox thesis — that a user replicating a file is consuming resources — seems problematic when the original resource is left intact and a new copy is created.
Even in the worst scenario, where you download the song and never make it available to any other Napster user, there is no net loss of available songs. And in any file-sharing system where even a small percentage of new users make the files they download subsequently available, the system will grow in resources, which will in turn attract new users, which will in turn create new resources, whether the system has freeloaders or not. In fact, in the Napster architecture, it is the most replicated resources that suffer least from freeloading, because even with a large percentage of freeloaders, popular songs will tend to become more available.
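That claim is easy to check with a toy simulation; every parameter below (one seed copy, a thousand new users per round, a 70 percent freeloading rate borrowed from the Xerox figure) is invented for illustration.

```python
import random

random.seed(1)

def simulate(rounds=10, new_users_per_round=1000, freeload_rate=0.7):
    """Toy model: each new user downloads one copy of a popular song;
    only the non-freeloaders leave their copy available afterward."""
    available_copies = 1  # one seed copy to start
    for _ in range(rounds):
        sharers = sum(1 for _ in range(new_users_per_round)
                      if random.random() > freeload_rate)
        available_copies += sharers  # freeloaders add nothing, remove nothing
    return available_copies

print(simulate())  # roughly 3,000 copies after 10 rounds, despite 70% freeloading
```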
Bandwidth over time is infinite
But what of bandwidth, the other resource consumed by file sharing? Here again, the idea of freeloading misconstrues digital economics. If you saturate a 1 Mb DSL line for 60 seconds while downloading a song, how much bandwidth do you have available in the 61st second? One meg, of course, just like every other second. Again, the Tragedy of the Commons is the wrong comparison, because the notion that freeloading users will somehow eat the available resources to death doesn’t apply. Unlike grass, bandwidth can’t be “used up,” any more than CPU cycles or RAM can.
Like a digital horn of plenty, most of the resources that go into networking computers together are constantly replenished; “Bandwidth over time is infinite,” as the Internet saying goes. By using all the available bandwidth in any given minute, you have not reduced future bandwidth, nor have you saved anything on the cost of that bandwidth when it’s priced at a flat rate.
Bandwidth can’t be conserved over time either. By not using all the available bandwidth in any given minute, you have not saved any bandwidth for the future, because bandwidth is an event, not a conservable resource. Unused bandwidth expires just like unused plane tickets do, and as long as the demand on bandwidth is distributed through the system — something P2P systems excel at — no single node suffers from the Slashdot effect, the tendency of sites to crash under massive load (named for the small sites that crash after getting front-page placement on the news site Slashdot.org).
Given this quality of persistently replenished resources, we would expect users to dislike sharing resources they want to use at that moment, but to be indifferent to sharing resources they make no claim on, such as available CPU cycles or bandwidth when they are away from their desks. Conservation of resources, in other words, should be situational and keyed to user behavior, and it is in misreading user behavior that attempts to discourage freeloading really jump the rails.
Selfish choices, beneficial outcomes
Attempts to prevent freeloading are usually framed in terms of preventing users from behaving selfishly, but selfishness is a key lubricant in P2P systems. In fact, selfishness is what makes the resources used by P2P available in the first place.
Since the writings of Adam Smith, literature detailing the workings of free markets has put the selfishness — or more accurately, the self-interest — of the individual actor at the center of the system, and the situation with P2P networks is no different. Mojo Nation’s central thesis about existing file-sharing systems is that some small number of users in those systems choose, through regard for their fellow man, to make available resources that a larger number of freeloaders then take unfair advantage of. This does not jibe with the experience of millions of present-day users.
Consider an ideal Napster user, with a 10 GB hard drive, a 1 Mb DSL line, and a computer connected to the Net round the clock. Did this user buy her hard drive in order to host MP3s for the community? Obviously not — the size of the drive was selected solely out of self-interest. Does she store MP3s she feels will be of interest to her fellow Napster users? No, she stores only the music she wants to listen to, self-interest again. Bandwidth? Is she shelling out for fast DSL so other users can download files quickly from her? Again, no. Her check goes to the phone company every month so she can have fast download times.
Likewise, decisions she makes about leaving her computer on and connected are self-interested choices. Bandwidth is not metered, and the pennies it costs her to leave her computer on while she is away from her desk, whether to make a pot of coffee or get some sleep, are a small price to pay for not having to sit through a five-minute boot sequence on her return.
Accentuate the positive
Economists call these kinds of valuable side effects “positive externalities.” The canonical example of a positive externality is a shade tree. If you buy a tree large enough to shade your lawn, there is a good chance that for at least part of the day it will shade your neighbor’s lawn as well. This free shade for your neighbor is a positive externality, a benefit to them that costs you nothing more than what you were willing to spend to shade your own lawn anyway.
Napster’s single economic genius is to coordinate such effects. Other than the central database of songs and user addresses, every resource within the Napster network is a positive externality. Furthermore, Napster coordinates these externalities in a way that encourages altruism. The system is resistant to negative effects of freeloading, because as long as Napster users are able to find the songs they want, they will continue to participate in the system, even if the people who download songs from them are not the same people they download songs from.
As long as even a small portion of the users accept this bargain, the system will grow, bringing in more users, who bring in more songs. In such a system, trying to figure out who is freeloading and who is not isn’t worth the effort of the self-interested user.
Real life is asymmetrical
Consider the positive externalities our self-interested user has created. While she sleeps, the Lynyrd Skynyrd and N’Sync songs can fly off her hard drive at no additional cost over what she is willing to pay to have a fast computer and an always-on connection. When she is at her PC, there are a number of ways for her to reassert control of her local resources when she doesn’t want to share them. She can cancel individual uploads unilaterally, disconnect from the Napster server or even shut Napster off completely. Even her advertised connection speed acts as a kind of brake on undesirable external use of resources.
Consider a second user on a 14.4 modem downloading a song from our user with her 1 Mb DSL. At first glance, this seems unfair, since our user seems to be providing more resources. This is, however, the most desirable situation for both users. The 14.4 user is getting files at the fastest rate he can, a speed that takes such a small fraction of our user’s DSL bandwidth that she may not even notice it happening in the background.
Furthermore, reversing the situation to create “fairness” would be a disaster — a transfer from 14.4 to DSL would saturate the 14.4 line and all but paralyze that user’s Internet connection for a file transfer not in that user’s self-interest, while giving the DSL user a less-than-optimum download speed. Asymmetric transfers, far from being unfair, are the ideal scenario — as fast as possible on the downloads, and so slow when other users download from you that you don’t even notice.
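The arithmetic behind that claim, using the link speeds above and an assumed 4 MB MP3 (the file size is my invention), runs roughly as follows.

```python
# Illustrative numbers: a 14.4 kbps modem versus a 1 Mbps DSL line,
# moving an assumed 4 MB (32 megabit) MP3.
FILE_MEGABITS = 4 * 8
MODEM_MBPS = 0.0144
DSL_MBPS = 1.0

# Modem user downloading from the DSL user: limited by the modem.
print(FILE_MEGABITS / MODEM_MBPS / 60)   # ~37 minutes to transfer
print(MODEM_MBPS / DSL_MBPS)             # ~1.4% of the DSL line consumed

# Reversed "fair" direction: the DSL user downloads through the modem,
# saturating the modem entirely while still taking ~37 minutes.
print(MODEM_MBPS / MODEM_MBPS)           # 100% of the modem line consumed
```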
In any system where the necessary resources like disk space and bandwidth are priced at a flat rate, these economics will prevail. The question for Napster and other systems that rely on these economics is whether flat-rate pricing is likely to disappear.
Setting prices
The economic history of telecommunications has returned again and again to one particular question: flat-rate vs. unit pricing. Simple economic theory tells us that unit pricing — a discrete price per hour online, per e-mail sent or file downloaded — is the most efficient way to allocate resources. By allowing users to take only those resources they are willing to pay for, per-unit pricing distributes resources most efficiently. Some form of unit pricing is at the center of almost all attempts to prevent freeloading, even if the currency the units are priced in is a notional unit such as Mojo.
Flat-rate pricing, meanwhile, is too blunt an instrument to create such efficiencies. In flat-rate systems, light users pay a higher per-unit cost, thus subsidizing the heavy users. Additionally, the flat-rate price for resources has to be high enough to cover the cost of unexpected spikes in usage, meaning that the average user is guaranteed to pay more in a flat-rate system than in a per-unit system.
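A worked example with invented numbers makes the subsidy visible: three subscribers with very different usage split a flat fee that recovers the same revenue as per-unit pricing, and the light user ends up paying several times the heavy user’s effective rate.

```python
# Hypothetical usage for three subscribers (hours per month).
usage = {"light": 5, "average": 20, "heavy": 100}

cost_per_hour = 0.50                     # what per-unit pricing would charge
total_cost = cost_per_hour * sum(usage.values())
flat_fee = total_cost / len(usage)       # provider recovers the same revenue

for user, hours in usage.items():
    print(user, round(flat_fee / hours, 2))  # effective price per hour
# light   4.17  -> pays over 8x the per-unit rate
# average 1.04  -> pays about double
# heavy   0.21  -> is subsidized by everyone else
```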
Flat-rate pricing is therefore always unfair to someone, whether by overcharging light and average users or by subsidizing heavy users at everyone else’s expense. Given the obvious gap in efficient allocation of resources between the two systems, we would expect to see unit pricing ascendant in all situations where the two methods of pricing are in competition. The opposite, of course, is the actual case.
Too cheap to meter
Despite the insistence of economic theoreticians, in the real world people everywhere have expressed an overwhelming preference for flat-rate pricing in their telecommunications systems. Prodigy and CompuServe were forced to abandon their per-e-mail prices in the face of competition from systems that allowed unlimited e-mail. AOL was forced to drop its per-hour charges in the face of competition from ISPs that offered unlimited Internet access for a single monthly charge. Today, the music industry is caught in a struggle between those who want to preserve per-song charges and those who understand the inevitability of subscription charges for digital music.
For years, the refusal of users to embrace per-unit pricing for telecommunications was regarded by economists as little more than a perversion, but recently several economic theorists, especially Nick Szabo and Andrew Odlyzko, have worked out why a rational user might prefer flat-rate pricing; the answer revolves around the phrase “Too Cheap to Meter,” or, put another way, “Not Worth Worrying About.”
People like to control costs, but they like to control anxiety as well. Prodigy’s per-e-mail charges and AOL’s hourly rates gave users complete control of their costs, but they also created a scenario where the user was always wondering whether the next e-mail or the next hour was worth the price. When offered systems with slightly higher prices but no anxiety, users embraced them so wholeheartedly that Prodigy and AOL were each forced to give in to user preference. Lowered anxiety turned out to be worth paying for.
Anxiety is a kind of mental transaction cost, the cost incurred by having to stop to think about doing something before you do it. Mental transaction costs are what users are minimizing when they demand flat-rate systems. They are willing to spend more money to save themselves from having to make hundreds of individual decisions about e-mail, connect time or files downloaded.
As with Andrew Odlyzko’s notion of “Paris Metro Pricing,” where one price gets you into a particular class of service without requiring you to differentiate between short and long trips, users prefer systems where they pay to get in but are not asked to constantly price resources on a case-by-case basis afterward. This is why micropayment systems for end users have always failed: micropayments overestimate the value users place on paying only for the resources they use, and underestimate the value they place on predictable costs and peace of mind.
The taxman
In the face of this user preference for flat-rate systems, attempts to stem freeloading with market systems are actually reintroducing mental transaction costs, thus destroying the advantages of flat-rate systems. If our hypothetical user is running a distributed computing client like SETI@Home, it is pointless to force her to set a price on her otherwise unused CPU cycles. Any cycles she values she will use, and the program will remain in the background. So long as she has chosen what she wants her spare cycles used for, any cycles she wouldn’t otherwise use for herself aren’t worth worrying about anyway.
Mojo Nation would like to suggest that Mojo is a currency, but it is more like a tax, a markup on an existing resource. Our user chose to run SETI, and since it costs her nothing to donate her unused cycles, any mental transaction costs incurred in pricing the resources raise the cost of the cycles above zero for no reason. Like all tax systems, this creates what economists call “deadweight loss,” the loss that comes from people simply avoiding transactions whose price is pushed too high by the tax itself. By asking users to price something they could give away free without incurring any loss, such systems discourage the benefits that come from coordinating positive externalities.
Lessons from Napster
Napster’s ability to add more users per week than all other P2P file-sharing systems combined is based in part on the ease of use that comes from its ability to tolerate freeloading. By decentralizing the parts of the system that are already paid for (disk space, bandwidth) while centralizing the parts of the system that individuals would not provide for themselves working individually (databases of songs and user IDs), Napster has created a system that is far easier to use than most of the purely decentralized file-sharing systems.
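The division of labor is easy to see in schematic form. The sketch below is not Napster’s actual protocol, just the shape of the hybrid: a central index maps song titles to the peers that hold them, while the files themselves move from edge to edge.

```python
class CentralIndex:
    """The only centralized piece: a searchable map of song -> peers."""

    def __init__(self):
        self.catalog = {}  # song title -> set of peer addresses

    def register(self, peer, songs):
        for song in songs:
            self.catalog.setdefault(song, set()).add(peer.address)

    def search(self, song):
        return self.catalog.get(song, set())


class Peer:
    """The decentralized piece: already-paid-for disk and bandwidth."""

    def __init__(self, address, shared_files):
        self.address = address
        self.shared_files = shared_files  # song title -> file contents

    def fetch(self, song):
        return self.shared_files.get(song)


index = CentralIndex()
alice = Peer("alice.example:6699", {"sheep.mp3": b"..."})
index.register(alice, alice.shared_files)

# Another user searches the center, then downloads from the edge.
holders = index.search("sheep.mp3")
print(holders)                   # {'alice.example:6699'}
print(alice.fetch("sheep.mp3"))  # the bytes move peer to peer
```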
This does not mean that Napster is the perfect model for all P2P systems. It is specific to the domain of popular music, and attempts to broaden its appeal to general file-sharing have largely failed. Nor does it mean that there is not some volume of users at which Napster begins to suffer from freeloading; all we know so far is that it can easily handle numbers in the tens of millions.
What Napster does show us is that, given the right architecture, freeloading is not the automatically corrosive problem that people believe it to be, and that creating systems which rely on micropayments or other methods of ensuring evenness between production and consumption is not the ideal alternative.
P2P systems use replicable or replenishable resources at the edges of the Internet, resources that tend to be paid for in lump sums or at rates that are insensitive to usage. Therefore, P2P systems that allow users to share resources they would have paid for anyway, so long as they are either getting something in return or contributing to a project they approve of, will tend to have better growth characteristics than systems that attempt to shut off freeloading altogether. If Napster is any guide, the ability to tolerate, rather than deflect, freeloading will be key to driving the growth of P2P.