Napster has joined the pantheon of Netscape, Hotmail, and ICQ as a software-cum-social movement, and its growth shows no sign of abating any time soon. Needless to say, anything this successful needs its own lawsuit to make it a full-fledged Net phenomenon. The Recording Industry Association of America has been only too happy to oblige, with a suit seeking up to a hundred thousand dollars per copyrighted song exchanged (an amount that would be on the order of a trillion dollars, based on Napster usage to date). Unfortunately for the RIAA, the history of music shows that when technological change comes along, the defenders of the old order are powerless to stop it.
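That trillion-dollar figure is easy to sanity-check. Here is the back-of-the-envelope arithmetic as a sketch in Python; the ten-million-song count is an assumption chosen to match the essay's order of magnitude, not an RIAA number.

    # Sanity check on the RIAA's claimed damages (all figures assumptions).
    damages_per_song = 100_000      # upper end of the per-song claim, in dollars
    songs_exchanged = 10_000_000    # assumed total Napster exchanges to date

    total = damages_per_song * songs_exchanged
    print(f"${total:,}")            # $1,000,000,000,000 -- a trillion dollars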
In the twenties, the American Federation of Musicians launched a protest when The Jazz Singer inaugurated the talkies and put silent-movie orchestras out of business. The protest was as vigorous as it was ineffective. Once the talkies created a way to distribute movie music without needing to hire movie musicians, there was nothing anyone could do to hold it back, paving the way for new sorts of organizations that embraced recorded music — organizations like the RIAA. Now that the RIAA is faced with another innovation in distribution, it shouldn’t be wasting its time arguing that Napster users are breaking the law. As we’ve seen with the distribution of print on the Web, efficiency trumps legality, and the RIAA needs to be developing new models that work with electronic distribution rather than against it.
In the early nineties, a service called Clarinet was launched that distributed newswire content over the Net, but this distribution came with a catch — users were never, ever supposed to forward the articles they read. The underlying (and fruitless) hope behind this system was that if everyone could be made to pretend that the Net was no different from paper, then the newspaper’s “pay directly for content” model wouldn’t be challenged on-line. What sidelined this argument — and Clarinet — was that a bunch of competing businesses said, in effect, “Publish and be damned,” and the Yahoos and News.coms of the world bypassed Clarinet by developing business models that encouraged copying. Even the companies that recognized early on that Clarinet’s approach was wrong took years to develop models that worked. The idea that people shouldn’t forward articles to one another has collapsed so completely that it’s hard to remember when it was taken seriously. Years of dire warnings that violating the print model of copyright would lead to writers starving in the streets and render the Web a backwater of amateur content have come to naught. The quality of written material available on-line is rising every year.
The lesson for the RIAA here is that old distribution models can fail long before anyone has any idea what the new models will look like. As with digital text, so now with music. People have a strong preference for making unlimited perfect copies of the music they want to hear. Napster now makes it feasible to do so in just the way the Web made it possible with text. Right now, no one knows how musicians will be rewarded in the future. But the lack of immediate alternatives doesn’t change the fact that Napster is the death knell for the current music distribution system. The music industry does not need to know how musicians will be rewarded when this new system takes hold to know that musicians will be rewarded somehow. Society can’t exist without artists; it can, however, exist without A&R departments.
The RIAA-Napster suit feels like nothing so much as the fight over the national speed limit in the seventies and eighties. The people arguing in favor of keeping the 55-MPH limit had almost everything on their side — facts and figures, commonsense concerns about safety and fuel efficiency, even the force of federal law. The only thing they lacked was the willingness of the people to go along. As with the speed limit, Napster shows us a case where millions of people are willing to see the law, understand the law, and violate it anyway on a daily basis. The bad news for the RIAA is not that the law isn’t on their side. It plainly is. The bad news for the RIAA is that in a democracy, when the will of the people and the law diverge too strongly for too long, it is the law that changes. Thus are speed limits raised.
WAP is in the air, both literally and figuratively. Unwired Planet, the company at the center of the mobile phone industry’s WAP consortium, has been working on WAP (Wireless Application Protocol) since May of 1995 in an effort to establish the foundation for the mobile phone’s version of the Web. After several false starts, that work seems to be bearing fruit this year: Nokia was caught by surprise at the demand for its first WAP-enabled phone, Ericsson is right behind with its model, and analysts are predicting that by 2002, more people will access the internet through mobile phones than through PCs. However, we’ve got to be careful when we tout WAP as the next major networking development after the Web itself, because it differs in two crucial ways: the Web grew organically (and non-commercially) in its first few years, and anyone could create or view Web content without a license. WAP, by contrast, is being pushed commercially from the jump, and it is fenced in by a remarkable array of patents which will affect both producers and consumers of WAP content. These differences put WAP’s development on a collision course with the Web as it exists today.
Even after years of commercial development, the Web we have is still remarkably cross-platform, open to amateur content, unmanaged, and unmanageable, and it’s tempting to think that that’s just what global networks look like in the age of the internet. However, the Web is not just the story of the internet, it’s also a story of the computing ecology of the 1990s. The Web has grown up in an environment where hardware is radically divorced from software: Anyone can install anything on their own PC with no interference (or even knowledge) from the manufacturer. The ISP business operates with a total separation of content and delivery: Internet access is charged by the month, not by the download. And most important of all, the critical pair of protocols — HTTP and HTML — were allowed to spread unhampered by intellectual property laws. The separation of these layers meant that ISPs didn’t have to co-ordinate with browser engineers, who didn’t have to co-ordinate with site designers, who didn’t have to co-ordinate with hardware manufacturers, and this freedom to innovate one layer at a time has been part and parcel of the Web’s remarkable growth.
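Just how thin and unencumbered that layering is can be seen in a small sketch: speaking HTTP to a web server takes nothing but an ordinary network socket: no license, no vendor toolkit, no coordination with the ISP or the handset maker. The sketch below uses modern Python purely for illustration.

    import socket

    # A complete, well-formed HTTP request, written by hand. Anyone may send
    # one to any web server; the protocol itself is unowned and unpatented.
    sock = socket.create_connection(("example.com", 80))
    sock.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    print(sock.recv(4096).decode("latin-1"))  # status line, headers, HTML
    sock.close()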
None of those things are true with WAP. The integration of WAP software with the telephone hardware is far tighter than it was on the PC. The mobile phone business is predicated on charging either per minute or per byte, making it much easier to charge directly for content. Most importantly, WAP’s patents have been designed from the beginning to prevent anyone from creating a way to get content onto mobile phones without cutting the phone companies themselves in on the action, as evidenced by Unwired Planet’s first patent in 1995, the astonishingly broad “Method and architecture for an interactive two-way data communication network.” WAP, in other words, offers a chance to rebuild the Web, without all that annoying freedom, and without all that annoying competition.
Many industries have looked at the Web and thought that it was almost perfect, with two exceptions — they didn’t own it, and it was too difficult to stifle competition. Microsoft’s first versions of MSN, Apple’s eWorld, the pre-dot-com AOL, were all attempts to build a service that would grow like the Web but let them charge consumers like pay-per-view TV. All such attempts have failed so far, because wherever restrictions on either content creators or users were put in place, growth faltered in favor of the freer medium. With WAP, however, we are seeing the first attempt at a walled garden that faces no competition from a freer medium — the Unwired Planet patents cover every mobile device ever made, which may give the consortium the leverage to enforce its ideal of total commercial control of mobile internet access. If predictions of the protocol’s growth, ubiquity, and hegemony are correct, then WAP may pose the first real threat to the freewheeling internet.
Windows2000, just beginning to ship, and slated for a high-profile launch next month, will fundamentally alter the nature of Windows’ competition with Linux, its only real competitor. Up until now, this competition has focused on two separate spheres: servers and desktops. In the server arena, Linux is largely thought to have the upper hand over WindowsNT, with a smaller installed base but much faster growth. On the desktop, though, Linux’s success as a server has had as yet little effect, and the ubiquity of Windows remains unchallenged. With the launch of Windows2000, the battle will no longer be fought in two separate arenas, because just as rising chip power destroyed the distinction between PCs and “workstations,” growing connectivity is destroying the distinction between the desktop and the server. All operating systems are moving in this direction, but the first one to catch the average customer’s eye will rock the market.
The fusion of desktop and server, already underway, is turning the internet inside out. The current network is built on a “content in the center” architecture, where a core of always-on, always-connected servers provides content on demand to a much larger group of PCs which only connect to the net from time to time (mostly to request content, rarely to provide it). With the rise of faster and more stable PCs, however, the ability of a desktop machine to take on the work of a server increases annually. In addition, the newer networking services like cable modems and DSL offer “always on” connectivity — instead of dialing up, their connection to the internet is (at least theoretically) persistent. Add to these forces an increasing number of PCs in networked offices and dorms, and you have the outlines of a new “content at the edges” architecture. This architecture is exemplified by software like Napster or Hotline, designed for sharing MP3s, images, and other files from one PC to another without needing a central server to store the content. In the Napster model, the content resides on the PCs at the edges of the net, and the center is only used for bit-transport. In this “content at the edges” system, the old separation between desktop and server vanishes, with the PC playing both roles at different times. This is the future, and Microsoft knows it.
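A minimal sketch makes the role-fusion concrete: the same machine can serve its own files to other peers while fetching files from them. This is an illustration in Python, not any actual Napster or Hotline code; the peer address is a placeholder.

    import threading
    from http.server import HTTPServer, SimpleHTTPRequestHandler
    from urllib.request import urlopen

    # Server role: share this PC's files with other peers in the background.
    server = HTTPServer(("0.0.0.0", 8000), SimpleHTTPRequestHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # Client role: download from another peer. The address is a placeholder;
    # in practice it would be learned from some index of who has what.
    data = urlopen("http://192.0.2.7:8000/song.mp3").read()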
In the same way Windows95 had built-in dial-up software, Windows2000 has a built-in Web server. Average users have terrible trouble uploading files, but would like to use the web to share their resumes, recipes, cat pictures, pirated music, amateur porn, and PowerPoint presentations, so Microsoft wants to make running a web server with Windows2000 as easy as establishing a dialup connection was with Windows95. In addition to giving Microsoft potentially huge competitive leverage over Linux, this desktop/server combo will also allow them to better compete with the phenomenally successful Apache web server and give Microsoft Word a foothold as the chosen format for web documents, displacing HTML — as long as both sender and receiver are running Windows2000.
The Linux camp’s response to this challenge is unclear. Microsoft has typically employed an “attack from below” strategy, using incremental improvements to an initially inferior product to erode a competitor’s advantage. Linux has some defenses against this strategy — the Open Source methodology gives Linux the edge in incremental improvements, and the fact that Linux is free gives Microsoft no way to win a “price vs. features” comparison — but the central fact remains that as desktop computers become servers as well, Microsoft’s desktop monopoly will give them a huge advantage, if they can provide (or even claim to provide) a simple and painless upgrade. Windows2000 has not been out long, it is not yet being targeted at the home user, and developments on the Linux front are coming thick and fast, but the battle lines are clear: The fusing of the functions of desktop and server represents Microsoft’s best (and perhaps last) chance to prevent Linux from toppling its monopoly.
Freedom of speech in the computer age was thrown dramatically into question by a pair of recent stories. The first was the news that Ford would be offering its entire 350,000-member global work force an internet-connected computer for $5 a month. This move, already startling, was made more so by the praise Ford received from Stephen Yokich, the head of the UAW, who said “This will allow us to communicate with our brothers and sisters from around the world.” This display of unanimity between management and the unions was in bizarre contrast to an announcement later in the week: US District Judge Donovan Frank ruled that the home PCs of Northwest Airlines flight attendants could be confiscated and searched by Northwest, which was looking for evidence of email organizing a New Year’s sickout. Clearly corporations do not always look favorably on communication amongst their employees — if the legal barriers to privacy on a home PC are weak now, and if a large number of workers’ PCs will be on loan from their parent company, the freedom of speech and relative anonymity we’ve taken for granted on the internet to date will be seriously tested, and the law may be of little help.
Freedom of speech evangelists tend to worship at the altar of the First Amendment, but many of them haven’t actually read it. As with many sacred documents, it is far less sweeping than people often imagine. Leaving aside the obvious problem of its applicability outside the geographical United States, the essential weakness of the Amendment at the dawn of the 21st century is that it only prohibits governmental interference in speech; it says nothing about commercial interference in speech. Though you can’t prevent people from picketing on the sidewalk, you can prevent them from picketing inside your place of business. This distinction relies on the adjacency of public and private spaces, and the First Amendment only compels the Government to protect free speech in the public arena.
What happens if there is no public arena, though? Put another way, what happens if all the space accessible to protesters is commercially owned? These questions call to mind another clash between labor and management in the annals of US case law, Hudgens v. NLRB (1976), in which the Supreme Court ruled that private space only fell under First Amendment control if it had “taken on all the attributes of a town” (a doctrine which arose to cover worker protests in company towns). However, the attributes the Court requires in order to consider something a town don’t map well to the internet, because they include municipal functions like a police force and a post office. By that measure, has Yahoo taken on all the functions of a town? Has AOL? If Ford provides workers their only link with the internet, has Ford taken on all the functions of a town?
Freedom of speech is following internet infrastructure, where commercial control blossoms and Government input withers. Since Congress declared the internet open for commercial use in 1991, there has been a wholesale migration from services run mostly by state colleges and Government labs to services run by commercial entities. As Ford’s move demonstrates, this has been a good thing for internet use as a whole — prices have plummeted, available services have mushroomed, and the number of users has skyrocketed — but we may be building an arena of all private stores and no public sidewalks. The internet is clearly the new agora, but without a new litmus test from the Supreme Court, all online space may become the kind of commercial space where the protections of the First Amendment will no longer reach.
The word “synergy” always gets a workout whenever two media behemoths join forces (usually accompanied by “unique” and “unprecedented”), and Monday’s press release announcing AOL’s acquisition of Time Warner delivered its fair share of breathless prose. But in practical terms, Monday’s deal was made only for the markets, not for the consumers. AOL and Time Warner are in very different parts of the media business, so there will be little of the cost-cutting that usually follows a mega-merger. Likewise, because AOL chief Steve Case has been waging a war against the regional cable monopolies, looking for the widest possible access to AOL content, it seems more likely that AOL-Time Warner will use its combined reach to open new markets instead of closing existing ones. This means that most of the touted synergies are little more than bundling deals and cross-media promotions — useful, but not earth-shaking. The real import of the deal is that its financial effects are so incomparably vast, and so well timed, that every media company in the world is feeling its foundations shaken by the quake.
The back story to this deal was AOL’s dizzying rise in valuation — 1500% in two years — which left it, like most dot-com stocks, wildly overvalued by traditional measures, and put the company under tremendous pressure to do something to lock in that value before the stock price returned to earth. AOL was very shrewd in working out the holdings of the new company. Although it was worth almost twice as much as Time Warner on paper, AOL stockholders will take a mere 55% of the new company. This is a brilliant way of backing down from an overvalued stock without causing investors to head for the exits. Time Warner, meanwhile, got its fondest wish: Once it trades on the markets under the “AOL” ticker, it has a chance to achieve internet-style valuations of its offline assets. The timing was also impeccable; when Barry Diller tried a similar deal last year, linking USA Networks and Lycos, the market was still dreaming of free internet money and sent the stocks of both companies into a tailspin. In retrospect, people holding Lycos stock must be gnashing their teeth.
This is not to say, back in the real world, that AOL-Time Warner will be a good company. Gerald Levin, current CEO of Time Warner, will still be at the helm, and while all the traditional media companies have demonstrated an uncanny knack for making a hash of their web efforts, the debacle of Pathfinder puts Time Warner comfortably at the head of that class. One of the reasons traditional media stocks have languished relative to their more nimble-footed internet counterparts is that the imagined synergies from the last round of media consolidations have largely failed to materialize, and this could end up sandbagging AOL as well. There is no guarantee that Levin will forgo the opportunity to limit intra-company competition: AOL might find its push for downloadable music slowed now that it’s joined at the hip to Warner Music Group. But no matter — the markets are valuing the sheer size of the combined companies, long before any real results are apparent, and it’s this market reaction (and not the eventual results from the merger) that will determine the repercussions of the deal.
With Monday’s announcement, the ground has shifted in favor of size. As “mass” becomes valuable in and of itself in the way that “growth” has been the historic mantra of internet companies, every media outlet, online or offline, is going to spend the next few weeks deciding whether to emulate this merger strategy or to announce some counter-strategy. A neutral stance is now impossible. There is rarely this much clarity in this sort of seismic shift — things like MP3s, Linux, web mail, even the original Mosaic browser, all snuck up on internet users over time. AOL-Time Warner, on the other hand, is page one from day one. Looking back, we’ll remember that this moment marked the end of the division of media companies into the categories of “old” and “new.” More important, we’ll remember that it marked the moment where the markets surveyed the global media landscape and announced that for media companies there is no such category as “too big.”
First published on O’Reilly’s OpenP2P, 12/01/2000.
As the excitement over P2P grew during the past year, it seemed that decentralized architectures could do no wrong. Napster and its cousins managed to decentralize costs and control, creating applications of seemingly unstoppable power. And then researchers at Xerox brought us P2P’s first crisis: freeloading.
Freeloading is the tendency of people to take resources without paying for them. In the case of P2P systems, this means consuming resources provided by other users without providing an equivalent amount of resources (if any) back to the system. The Xerox study of Gnutella (now available at First Monday) found that “… a large proportion of the user population, upwards of 70 percent, enjoy the benefits of the system without contributing to its content,” and labeled the problem a “Tragedy of the Digital Commons.”
The Tragedy of the Commons is an economic problem with a long pedigree. As Mojo Nation, a P2P system set up to combat freeloading, states in its FAQ:
Other file-sharing systems are plagued by “the tragedy of the commons,” in which rational folks using a shared resource eat the resources to death. Most often, the “Tragedy of the Commons” refers to farmers and pasture, but technology journalists are writing about users who download and download but never contribute to the system.
To combat this problem, Mojo Nation proposes creating a market for computational resources — disk space, bandwidth, CPU cycles. In its proposed system, if you provide computational resources to the system, you earn Mojo, a kind of digital currency. If you consume computational resources, you spend the Mojo you’ve earned. This system is designed to keep freeloaders from consuming more than they contribute to the system.
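In outline, the proposed market is a simple ledger. The sketch below, in Python, is an invented illustration of that outline; none of these names come from Mojo Nation's actual code or API.

    # A minimal sketch of a Mojo-style resource market (names invented).
    class ResourceLedger:
        def __init__(self):
            self.balances = {}  # user id -> Mojo balance

        def provide(self, user, units):
            # Credit a user for disk, bandwidth, or CPU contributed.
            self.balances[user] = self.balances.get(user, 0) + units

        def consume(self, user, units):
            # Debit a user for resources taken; refuse overdrafts, which is
            # exactly the mechanism meant to stop freeloading.
            if self.balances.get(user, 0) < units:
                raise ValueError("insufficient Mojo: contribute first")
            self.balances[user] -= units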
A very flawed premise
Mojo Nation is still in beta, but it already faces two issues — one fairly trivial, one quite serious. The trivial issue is that the system isn’t working out as planned: Users are not flocking to the system in sufficient numbers to turn it into a self-sustaining marketplace.
The serious issue is that the system will never work for public file-sharing, not even in theory, because the problem of users eating resources to death does not pose a real threat to systems such as Napster, and the solution Mojo Nation proposes would destroy the very things that allow file-sharing systems like Napster to work.
The Xerox study on Gnutella makes broad claims about the relevance of its findings, even as Napster, which adds more users each day than the entire installed base of Gnutella, is growing without suffering from the study’s predicted effects. Indeed, Napster’s genius in building an architecture that understands the inevitability of freeloading and works within those constraints has led Dan Bricklin to christen Napster’s effects “The Cornucopia of the Commons.”
Systems that set out to right the imagined wrongs of freeloading are more marketing efforts than technological ones, in that they attempt to inflame our sense of injustice at the users who download and download but never contribute to the system. This plays well in the press, of course, garnering headlines like “A revolutionary file-sharing system could spell the end for dot-communism and Net leeches” or labeling P2P users “cyberparasites.”
This sense of unfairness, however, obscures two key aspects of P2P file-sharing: the economics of digital resources, which are either replicable or replenishable; and the way the selfish nature of user participation drives the system.
One from one equals two
Almost without fail, anyone addressing freeloading refers to the aforementioned “Tragedy of the Commons.” This is an economic parable illustrating the threat to commonly held resources. Imagine an area of farmland where the entire pasture is held in common by a group of farmers who graze their sheep there. In this situation, it is in the farmers’ collective interest to maintain herds of moderate size in order to keep the pasture from being overgrazed. However, it is in each individual farmer’s interest to increase the size of his herd as much as possible, because the shared pasture is a free resource.
Even worse, although each farmer will recognize that all of them should forgo increases in the size of their herds if they are acting for the good of the group, each also recognizes that every other farmer has the same incentive to expand. In this scenario, it is in each individual’s interest to take as much of the common resource as he can, partly because he benefits directly and partly because if he doesn’t, someone else will, even though doing so produces a bad outcome for the group as a whole.
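The perverse incentive can be put in three lines of arithmetic. In the sketch below (every parameter is invented for illustration), each extra sheep pays its owner in full but spreads its cost across the whole group:

    # Toy model of the commons; every number here is an assumption.
    FARMERS = 10
    GAIN_TO_OWNER = 1.0    # value of one more sheep, captured by its owner
    COST_TO_PASTURE = 2.0  # total damage that sheep does to the shared land

    net_to_owner = GAIN_TO_OWNER - COST_TO_PASTURE / FARMERS  # +0.8: add it!
    net_to_group = GAIN_TO_OWNER - COST_TO_PASTURE            # -1.0: ruin

    # Each farmer's rational move (+0.8) sums to a collective loss (-1.0
    # per extra sheep): the tragedy, in miniature.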
The Tragedy of the Commons is a simple, compelling illustration of what can happen to commonly owned resources. It is also almost completely inapplicable to the digital world.
Start with the nature of consumption. If your sheep takes a mouthful of grass from the common pasture, the grass exits the common pasture and enters the sheep, a net decrease in commonly accessible resources. If you take a copy of the Pink Floyd song “Sheep” from another Napster user, that song is not deleted from that user’s hard drive. Furthermore, since your copy also exists within the Napster universe, this sort of consumption creates commonly accessible resources, rather than destroying them. The song is replicated; it is not consumed. Thus the Xerox thesis — that a user replicating a file is consuming resources — seems problematic when the original resource is left intact and a new copy is created.
Even if, in the worst scenario, you download the song and never make it available to any other Napster user, there is no net loss of available songs. In any file-sharing system where even some small percentage of new users makes the files they download subsequently available, the system will grow in resources; those resources will in turn attract new users, who will in turn create new resources, whether the system has freeloaders or not. In fact, in the Napster architecture, it is the most replicated resources that suffer least from freeloading, because even with a large percentage of freeloaders, popular songs will tend to become more available.
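A toy simulation shows how forgiving this arithmetic is. Even granting the Xerox study's 70 percent freeloading rate (the demand figure below is invented), the pool of copies of a popular song only grows:

    # Replication beats freeloading; demand-per-copy is an assumed figure.
    copies = 1        # one user shares a song
    SHARE_RATE = 0.3  # 30% of downloaders re-share (70% freeload, per Xerox)

    for step in range(5):
        downloads = copies * 10           # assumed demand per shared copy
        copies += downloads * SHARE_RATE  # only the sharers enlarge the pool
        print(step, int(copies))          # 4, 16, 64, 256, 1024: growth anyway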
Bandwidth over time is infinite
But what of bandwidth, the other resource consumed by file sharing? Here again, the idea of freeloading misconstrues digital economics. If you saturate a 1 Mb DSL line for 60 seconds while downloading a song, how much bandwidth do you have available in the 61st second? One meg, of course, just like every other second. Again, the Tragedy of the Commons is the wrong comparison, because the notion that freeloading users will somehow eat the available resources to death doesn’t apply. Unlike grass, bandwidth can’t be “used up,” any more than CPU cycles or RAM can.
Like a digital horn of plenty, most of the resources that go into networking computers together are constantly replenished; “Bandwidth over time is infinite,” as the Internet saying goes. By using all the available bandwidth in any given minute, you have not reduced future bandwidth, nor have you saved anything on the cost of that bandwidth when it’s priced at a flat rate.
Bandwidth can’t be conserved over time either. By not using all the available bandwidth in any given minute, you have not saved any bandwidth for the future, because bandwidth is an event, not a conservable resource. Unused bandwidth expires just like unused plane tickets do, and as long as the demand on bandwidth is distributed through the system — something P2P systems excel at — no single node suffers from the Slashdot effect, the tendency of sites to crash under massive load (named for the small sites that buckle after getting front-page placement on the news site Slashdot.org).
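The point is almost embarrassingly simple when written out; the line speed below is the essay's illustrative figure:

    # Bandwidth is an event, not a stock.
    LINE = 1_000_000               # a "1 Mb" DSL line, in bits per second

    used_last_minute = LINE * 60   # saturate the line for 60 seconds...
    available_next_second = LINE   # ...and the 61st second is still 1 Mb
    banked_by_idling = 0           # idle for an hour and you bank nothing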
Given this quality of persistently replenished resources, we would expect users to dislike sharing resources they want to use at that moment, but to be indifferent to sharing resources they make no claim on, such as available CPU cycles or bandwidth when they are away from their desks. Conservation of resources, in other words, should be situational and keyed to user behavior, and it is in misreading user behavior that attempts to discourage freeloading really jump the rails.
Selfish choices, beneficial outcomes
Attempts to prevent freeloading are usually framed in terms of preventing users from behaving selfishly, but selfishness is a key lubricant in P2P systems. In fact, selfishness is what makes the resources used by P2P available in the first place.
Since the writings of Adam Smith, literature detailing the workings of free markets has put the selfishness — or more accurately, the self-interest — of the individual actor at the center of the system, and the situation with P2P networks is no different. Mojo Nation’s central thesis about existing file-sharing systems is that some small number of users in those systems choose, through regard for their fellow man, to make available resources that a larger number of freeloaders then take unfair advantage of. This does not jibe with the experience of millions of present-day users.
Consider an ideal Napster user, with a 10 GB hard drive, a 1 Mb DSL line, and a computer connected to the Net round the clock. Did this user buy her hard drive in order to host MP3s for the community? Obviously not — the size of the drive was selected solely out of self-interest. Does she store MP3s she feels will be of interest to her fellow Napster users? No, she stores only the music she wants to listen to, self-interest again. Bandwidth? Is she shelling out for fast DSL so other users can download files quickly from her? Again, no. Her check goes to the phone company every month so she can have fast download times.
Likewise, decisions she makes about leaving her computer on and connected are self-interested choices. Bandwidth is not metered, and the pennies it costs her to leave her computer on while she is away from her desk, whether to make a pot of coffee or get some sleep, are a small price to pay for not having to sit through a five-minute boot sequence on her return.
Accentuate the positive
Economists call these kinds of valuable side effects “positive externalities.” The canonical example of a positive externality is a shade tree. If you buy a tree large enough to shade your lawn, there is a good chance that for at least part of the day it will shade your neighbor’s lawn as well. This free shade for your neighbor is a positive externality, a benefit to them that costs you nothing more than what you were willing to spend to shade your own lawn anyway.
Napster’s single economic genius is to coordinate such effects. Other than the central database of songs and user addresses, every resource within the Napster network is a positive externality. Furthermore, Napster coordinates these externalities in a way that encourages altruism. The system is resistant to the negative effects of freeloading, because as long as Napster users are able to find the songs they want, they will continue to participate in the system, even if the people who download songs from them are not the same people they download songs from.
As long as even a small portion of the users accept this bargain, the system will grow, bringing in more users, who bring in more songs. In such a system, trying to figure out who is freeloading and who is not isn’t worth the effort of the self-interested user.
Real life is asymmetrical
Consider the positive externalities our self-interested user has created. While she sleeps, the Lynyrd Skynyrd and N’Sync songs can fly off her hard drive at no additional cost over what she is willing to pay to have a fast computer and an always-on connection. When she is at her PC, there are a number of ways for her to reassert control of her local resources when she doesn’t want to share them. She can cancel individual uploads unilaterally, disconnect from the Napster server or even shut Napster off completely. Even her advertised connection speed acts as a kind of brake on undesirable external use of resources.
Consider a second user on a 14.4 modem downloading a song from our user with her 1 Mb DSL. At first glance, this seems unfair, since our user seems to be providing more resources. This is, however, the most desirable situation for both users. The 14.4 user is getting files at the fastest rate he can, a speed that takes such a small fraction of our user’s DSL bandwidth that she may not even notice it happening in the background.
Furthermore, reversing the situation to create “fairness” would be a disaster — a transfer from 14.4 to DSL would saturate the 14.4 line and all but paralyze that user’s Internet connection for a file transfer not in that user’s self-interest, while giving the DSL user a less-than-optimum download speed. Asymmetric transfers, far from being unfair, are the ideal scenario — as fast as possible on the downloads, and so slow when other users download from you that you don’t even notice.
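The asymmetry is worth quantifying, using the essay's own line speeds:

    # Why the "unfair" direction is the right one (illustrative speeds).
    DSL = 1_000_000   # our user's line, bits per second
    MODEM = 14_400    # the downloader's line, bits per second

    print(f"{MODEM / DSL:.1%}")  # 1.4% of the DSL line: barely noticeable

    # Reverse the roles in the name of "fairness" and the modem user's line
    # is 100% saturated serving someone else's download -- the worst
    # outcome for both parties.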
In any system where the necessary resources like disk space and bandwidth are priced at a flat rate, these economics will prevail. The question for Napster and other systems that rely on these economics is whether flat-rate pricing is likely to disappear.
Setting prices
The economic history of telecommunications has returned again and again to one particular question: flat-rate vs. unit pricing. Simple economic theory tells us that unit pricing — a discrete price per hour online, per e-mail sent or file downloaded — is the most efficient way to allocate resources. By allowing users to take only those resources they are willing to pay for, per-unit pricing distributes resources most efficiently. Some form of unit pricing is at the center of almost all attempts to prevent freeloading, even if the units are priced in a notional currency such as Mojo.
Flat-rate pricing, meanwhile, is too blunt an instrument to create such efficiencies. In flat-rate systems, light users pay a higher per-unit cost, thus subsidizing the heavy users. Additionally, the flat-rate price for resources has to be high enough to cover the cost of unexpected spikes in usage, meaning that the average user is guaranteed to pay more in a flat-rate system than in a per-unit system.
Flat-rate pricing is therefore unfair to all users, whether by creating unfair costs for light and average users, or by unfairly subsidizing heavy users. Given the obvious gap in efficient allocation of resources between the two systems, we would expect to see unit pricing ascendant in all situations where the two methods of pricing are in competition. The opposite, of course, is the actual case.
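The cross-subsidy is plain in a two-user example; the prices below are invented for illustration:

    # Flat rate vs. unit pricing for a light and a heavy user (assumed prices).
    UNIT_PRICE = 0.10   # per download
    FLAT_RATE = 20.00   # per month

    for downloads in (50, 500):  # light user, heavy user
        print(downloads,
              round(UNIT_PRICE * downloads, 2),   # unit pricing: $5 vs. $50
              round(FLAT_RATE / downloads, 2))    # flat rate: $0.40 vs. $0.04
                                                  # per download
    # Under unit pricing, each pays for exactly what they use. Under flat
    # rate, both pay $20: the light user subsidizes the heavy one.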
Too cheap to meter
Despite the insistence of economic theoreticians, in the real world people everywhere have expressed an overwhelming preference for flat-rate pricing in their telecommunications systems. Prodigy and CompuServe were forced to abandon their per-e-mail prices in the face of competition from systems that allowed unlimited e-mail. AOL was forced to drop its per-hour charges in the face of competition from ISPs that offered unlimited Internet access for a single monthly charge. Today, the music industry is caught in a struggle between those who want to preserve per-song charges and those who understand the inevitability of subscription charges for digital music.
For years, the refusal of users to embrace per-unit pricing for telecommunications was regarded by economists as little more than a perversion, but recently several economic theorists, especially Nick Szabo and Andrew Odlyzko, have worked out why a rational user might prefer flat-rate pricing, and it revolves around the phrase “Too Cheap to Meter,” or, put another way, “Not Worth Worrying About.”
People like to control costs, but they like to control anxiety as well. Prodigy’s per-e-mail charges and AOL’s hourly rates gave users complete control of their costs, but they also created a scenario where the user was always wondering if the next e-mail or the next hour was worth the price. When offered systems with slightly higher prices but no anxiety, users embraced them so wholeheartedly that Prodigy and AOL were each forced to give in to user preference. Lowered anxiety turned out to be worth paying for.
Anxiety is a kind of mental transaction cost, the cost incurred by having to stop to think about doing something before you do it. Mental transaction costs are what users are minimizing when they demand flat-rate systems. They are willing to spend more money to save themselves from having to make hundreds of individual decisions about e-mail, connect time or files downloaded.
Like Andrew Odlyzko’s notion of “Paris Metro Pricing,” where one price gets you into a particular class of service in the system without requiring you to differentiate between short and long trips, users prefer systems where they pay to get in, but are not asked to constantly price resources on a case-by-case basis afterward, which is why micropayment systems for end users have always failed. Micropayments overestimate the value users place on resources and underestimate the value they place on predictable costs and peace of mind.
The taxman
In the face of this user preference for flat-rate systems, attempts to stem freeloading with market systems are actually reintroducing mental transaction costs, thus destroying the advantages of flat-rate systems. If our hypothetical user is running a distributed computing client like SETI@Home, it is pointless to force her to set a price on her otherwise unused CPU cycles. Any cycles she values she will use, and the program will remain in the background. So long as she has chosen what she wants her spare cycles used for, any cycles she wouldn’t otherwise use for herself aren’t worth worrying about anyway.
Mojo Nation would like to suggest that Mojo is a currency, but it is more like a tax, a markup on an existing resource. Our user chose to run SETI, and since it costs her nothing to donate her unused cycles, any mental transaction cost incurred in pricing the resources raises the cost of the cycles above zero for no reason. Like all tax systems, this creates what economists call “deadweight loss,” the loss that comes from people simply avoiding transactions whose price is pushed too high by the tax itself. By asking its users to price something that they could give away free without incurring any loss, these systems discourage the benefits that come from coordinating positive externalities.
Lessons From Napster
Napster’s ability to add more users per week than all other P2P file-sharing systems combined is based in part on the ease of use that comes from its ability to tolerate freeloading. By decentralizing the parts of the system that are already paid for (disk space, bandwidth) while centralizing the parts of the system that individuals would not provide for themselves working individually (databases of songs and user IDs), Napster has created a system that is far easier to use than most of the purely decentralized file-sharing systems.
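That split, a cheap centralized index over expensive already-paid-for edges, fits in a few lines. The sketch below is an illustration of the architecture with invented names; it is not Napster's actual protocol:

    # The center holds only the index; files move between the edges.
    index = {}  # song title -> list of peer addresses (the central part)

    def register(peer, songs):
        # A peer announces what it is willing to serve from its own disk.
        for title in songs:
            index.setdefault(title, []).append(peer)

    def find(title):
        # The center answers "who has it?"; the transfer itself then
        # happens directly between the two PCs.
        return index.get(title, [])

    register("10.0.0.5:6699", ["sheep.mp3"])
    print(find("sheep.mp3"))  # ['10.0.0.5:6699']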
This does not mean that Napster is the perfect model for all P2P systems. It is specific to the domain of popular music, and attempts to broaden its appeal to general file-sharing have largely failed. Nor does it mean that there is not some volume of users at which Napster begins to suffer from freeloading; all we know so far is that it can easily handle numbers in the tens of millions.
What Napster does show us is that, given the right architecture, freeloading is not the automatically corrosive problem that people believe it to be, and that systems which rely on micropayments or other methods of ensuring evenness between production and consumption are not the ideal alternative.
P2P systems use replicable or replenishable resources at the edges of the Internet, resources that tend to be paid for in lump sums or at rates that are insensitive to usage. Therefore, P2P systems that allow users to share resources they would have paid for anyway, so long as they are either getting something in return or contributing to a project they approve of, will tend to have better growth characteristics than systems that attempt to shut off freeloading altogether. If Napster is any guide, the ability to tolerate, rather than deflect, freeloading will be key to driving the growth of P2P.
1999 is shaping up to be a good year for lawyers. This fall saw the patent lawyers out in force, with Priceline suing Expedia over Priceline’s patented “name your own price” business model, and Amazon suing Barnes and Noble for copying Amazon’s “One-Click Ordering.” More recently, it’s been the trademark lawyers, with eToys convincing a California court to issue a preliminary injunction against etoy.com, the Swiss art site, because the etoy.com URL might “confuse” potential shoppers. Never mind that etoy registered its URL years before eToys existed: etoy has now been stripped of its domain name without so much as a trial, and is only accessible at its IP address (http://146.228.204.72:8080). Most recently, MIT’s journal of electronic culture, Leonardo, is being sued by a company called Transasia which has trademarked the name “Leonardo” in France, and is demanding a million dollars in damages on the grounds that search engines return links to the MIT journal, in violation of Transasia’s trademark. Lawsuits are threatening to dampen the dynamism of the internet because, even when they are obviously spurious, they add so much to the cost of doing business that soon amateurs and upstarts might not be able to afford to compete with anyone who can afford a lawyer.
The commercialization of the internet has been surprisingly good for amateurs and upstarts up until now. A couple of college kids with a well-managed bookmark list become Yahoo. A lone entrepreneur founds Amazon.com at a time when Barnes and Noble doesn’t even have a section called “internet” on its shelves, and now he’s Time’s Man of the Year. A solo journalist triggers the second presidential impeachment in US history. Over and over again, smart people with good ideas and not much else have challenged the pre-wired establishment and won. The idea that the web is not a battle of the big vs. the small but of the fast vs. the slow has become part of the web’s mystique, and big slow companies are being berated for not moving fast enough to keep up with their net-savvy competition. These big companies would do anything to find a way to use what they have — resources — to make up for what they lack — drive — and they may have found an answer to their prayers in lawsuits.
Lawsuits offer a return to the days of the fight between the big and the small, a fight the big players love. Ever since patents were expanded to include business models, patents have been applied to all sorts of ridiculous things — a patent on multimedia, a patent on downloading music, a patent on using cookies to allow shoppers to buy with one click. More recently, trademark law has become an equally fruitful arena for abuse. Online, a company’s URL is its business, and a trademark lawsuit which threatens a URL threatens the company’s very existence. In an adversarial legal system, a company can make as spurious an accusation as it likes if it knows its target can’t afford a defense. As odious as Amazon’s suit against Barnes and Noble is, it’s hard to shed any tears over either of them. etoy and Leonardo, on the other hand, are both not-for-profits, and defending what is rightfully theirs might bankrupt them. If etoy cannot afford the necessary (and expensive) legal talent, the preliminary injunction stripping them of their URL might as well be a final decision.
The definition of theft depends on the definition of property, and in an age when so much wealth resides in intelligence, it’s no wonder that those with access to the legal system are trying to alter the definition of intellectual property in their favor. Even Amazon, one of the upstarts just a few years ago, has lost so much faith in its ability to innovate that it is now behaving like the very dinosaurs it challenged in the mid-90’s. It’s also no surprise that both recent trademark cases — etoy and Leonardo — ran across national borders. Judges are more likely to rule in favor of their fellow citizens and against some far away organization, no matter what principle is at stake. The web, which grew so quickly because there were so few barriers to entry, has created an almost irresistible temptation to create legal barriers where no technological ones exist. If this spate of foolish lawsuits continues — and there is every indication that it will — the next few years will see a web where the law becomes a tool for the slow to retard the fast and the big to stymie the small.
1999 was a turning point in the history of the net — for the first time (and from now on) US internet users make up less than 50% of the net’s total population, marking the end of the American Internet. (1969-1999, R.I.P.) This growing internationalization will have profound effects on the growth of the internet over the next several years, as country after country gets wired. The large and growing pressure on businesses to get the citizens of their own country online, and then to expand beyond their own borders in pursuit of further growth, will accelerate internet penetration throughout the world. Internet theorist Frances Cairncross has called the internet “the death of distance” in her book of the same name, and as the American internet fades and the global internet takes its place, it will finally begin to live up to that promise.
Countries do not get wired gradually. Instead, they pass through a tipping point in their internet population (somewhere around 10%) where for a sizable segment of the population network access stops being a luxury and starts being a necessity. Once this threshold is crossed, the wired population quickly grows large enough to begin affecting that country’s economics, politics, and culture. The US crossed that threshold in 1995, and because of this early passage, it is very fashionable these days to assume that most of the rest of the world is still several years behind the US. This view, most recently espoused by Morgan Stanley’s star internet analyst Mary Meeker, is dead wrong. What that smug, America-centric attitude overlooks is that internet adoption is accelerating — these days when countries cross the 1/10th tipping point, they grow faster than the US did. It took 4 years, 1995 to 1998, for the US internet population to go from 1/10th of the country to 1/3rd. In the UK that growth took just 15 months, from fall of 1998 to now. The Chinese internet population quadrupled in 1999. The big change in these figures is the role of business — in the US, businesses got in the way of the internet in the early years, while in the UK and elsewhere, businesses have now assumed a driving role in the internet’s growth.
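Compounding makes the gap between those two adoption curves starker than the raw dates suggest. A quick calculation from the figures above, treating the 1995 to 1998 span as four years as the text does:

    # Implied annual growth rates from 1/10th to 1/3rd of the population.
    growth = (1 / 3) / (1 / 10)                # 3.33x growth in both countries

    us_annual = growth ** (1 / 4) - 1          # over 4 years: ~35% per year
    uk_annual = growth ** (1 / (15 / 12)) - 1  # over 15 months: ~162% per year

    print(f"US {us_annual:.0%}, UK {uk_annual:.0%}")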
In the US, the online services and ISPs created a wired population long before anyone ever heard the word e-commerce, and the early reaction of most US businesses was to ignore or fear the internet, usually in that order. As it became clear that the online audience wasn’t going away, this left the businesses playing catch up, a situation people in the US are so used to that it has become almost a reflex to assume that businesses don’t “get it”. Today, however, businesses in countries at the tipping point of widespread internet adoption have learned their lesson by watching the US — once the internet comes along, businesses know they will be valued in part by their internet strategy, and generating loyal internet clients will raise their worth in the market. The result of these lessons is clear in two of today’s most dynamic markets, Britain and Brazil; in both of those countries existing banks are offering free internet access for life to their clients, in order to acquire loyal e-customers, while driving up internet adoption as a side effect. Because businesses now get it sooner, once a country crosses the tipping point, its market will grow faster and become net-savvy sooner than the US market did. The Mary Meekers of the world are going to be caught by surprise when they see how rapidly the industrialized world becomes synonymous with the wired world.
This change in the role of businesses — going from sitting on the sidelines to accelerating internet growth — will separate the world into three spheres. First will be countries with little or no internet penetration, whether for reasons of infrastructure or political resistance or both: think Cuba, Sierra Leone, Iraq. In the middle will be countries with a small but rapidly growing net population, often doubling annually — countries crossing the tipping point, like Brazil, Italy, Taiwan. Finally, there will be a few countries with a large and mature net population, whose growth will have slowed to a more leisurely 25% a year or so. The first country in this group is the US, of course, but the UK and the Scandinavian countries are joining this club as well. 1999 marks the end of businesses focused on growing within the net population of their home countries. In the next few years, the real action is going to be between tipping point countries and mature market countries, because every business in both groups is pursuing the same thing: growth.
The cultural and economic logic of internet businesses demands constant growth — in page views, unique users, sales, transaction value, in every possible measure of success. The valuations of internet stocks are based on a climate where that growth has been easy to attain — the US from 1996-1999 — but as the US nears 50% penetration, the growth in users is still strong but no longer breathtaking. This leaves mature market internet companies with two choices: break the news to their shareholders about lowered growth expectations, or look for customers in other markets. The usual answer has been international expansion: Yahoo is in over 20 countries, the biggest online bookstore and auction site in the UK are Amazon and eBay respectively, and a host of other US-based internet companies, everyone from AOL to Salon, are expanding aggressively overseas. All these companies are trying to hit other markets at the same moment: after they have crossed the tipping point, but before the playing field holds too many well-established competitors.
This pressure from the mature markets is in turn creating huge incentives for businesses in small but growing markets to go international as well, particularly if they can do it along linguistic lines, with Spanish-speaking businesses reaching out across Latin America, English-speaking businesses re-tracing the lines of the British Empire, and so on. The goal of this metaphorical land grab is twofold: first to stave off the competition from mature markets where possible and grab the growth in user base for themselves, and second to provide themselves with leverage when the inevitable partnerships and acquisition offers come in from the Yahoos and Amazons of the world.
The next five years are going to see several cycles of customer acquisition, consolidation through partnership or purchase, followed by a new round of customer acquisition, and only companies with a credible international strategy will be able to play that game at the highest levels. The US-based internet companies will have an advantage in this market, but not a complete one now that the US internet population is in the minority. 1999 marks the point where the real work of taking the World Wide Web world wide began.
When is a cable not a cable? This is the question working its way through the Ninth Circuit Court right now, courtesy of AT&T and the good people of Portland, Oregon. When the city of Portland and surrounding Multnomah County passed a regulation requiring AT&T to open its newly acquired cable lines to other internet service providers, such as AOL and GTE, the rumble over high-speed internet access began. In one corner was the City of Portland, weighing in on the side of competition, open access, and consumer choice. In the other corner stood AT&T, championing legal continuity: When AT&T made plans to buy MediaOne, the company was a local monopoly, and AT&T wants it to stay that way. It looked to be a clean fight. And yet, on Monday, one of the three appellate judges, Edward Leavy, threw in a twist: He asked whether there is really any such thing as the cable industry anymore. The answer to Judge Leavy’s simple question has the potential to radically alter the landscape of American media.
AT&T has not invested $100 billion in refashioning itself as a cable giant because it’s committed to offering its customers high-speed internet access. Rather, AT&T has invested that money to buy back what it really wants: a national monopoly. Indeed, AT&T has been dreaming of regaining its monopoly status ever since the company was broken up into AT&T and the Baby Bells, back in 1984. With cable internet access, AT&T sees its opportunity. In an operation that would have made all the King’s horses and all the King’s men gasp in awe, AT&T is stitching a national monopoly back together out of the fragments of the local cable monopolies. If it can buy up enough cable outlets, it could become the sole provider for high-speed internet access for a sizable chunk of the country.
Cable is attractive to the internet industry because the cable industry has what internet service providers have wanted for years: a way to make money off content. By creating artificial scarcity — we’ll define this channel as basic, that channel as premium, this event is free, that one is pay-per-view — the cable industry has used its monopoly over the wires to derive profits from the content that travels over those wires. What AT&T is really buying, then, is not infrastructure but control: By using those same television wires for internet access, it will be able to affect the net content its users can and can’t see (you can bet it will restrict access to Time-Warner’s offerings, for example), bringing pay-per-view economics to the internet.
In this environment, the stakes for the continued monopoly of the cable market couldn’t be higher, which is what makes Judge Leavy’s speculation about the cable industry so radical. Obviously frustrated with the debate, the Judge interjected that, “It strikes me that everybody is trying to dance around the issue of whether we’re talking about a telecommunications service.” His logic seems straightforward enough. If the internet is a telecommunications service, and cable is a way to get internet access, then surely cable is a telecommunications service. Despite the soundness of this logic, however, neither AT&T nor Portland was ready for it, because beneath its simplicity is a much more extreme notion: If the Judge is right, and anyone who provides internet access is a telecommunications company, then the entire legal structure on which the cable industry is based — monopolies and service contracts negotiated city by city — would be destroyed, and cable would be regulated by the FCC on a national level. By declaring that regulation should cover how a medium is used and not merely who owns it, Judge Leavy would move the internet to another level of the American media pecking order. If monopolies really aren’t portable from industry to industry — if owning the wire doesn’t mean owning the customer — then this latest attempt to turn the internet into a walled garden will be turned back.
Just got back from Internet World in New York. My approach to Internet World is always the same – skip the conference and the keynotes, ignore most of the mega-installations, and go straight to the trade floor, then walk every aisle and look (briefly) at every booth. Today that meant several hundred booths and took five hours.
I do this because it doesn’t matter to me when one company thinks they have a good idea, but when I see three companies with the same idea, then I start to take notice. In addition, the floor of Internet World is a good proxy for the marketplace: if a company can’t make its business proposition clear to the person passing by their booth, they aren’t going to be able to make it clear in the market either. (One company had put up a single sign, announcing only that they were “Using Advanced Technology to Create Customer-Driven Solutions.” Another said they were “The Entertainment Marketing Promotion Organization.”)
Herewith my impressions of the IW ’99 zeitgeist:
Product categories
– Any web site which seemed like a good idea in 1997 is now an entire industry sector, with both copycat websites and groups offering to outsource the function for you: free home pages, free email, vertical search engines, and so on.
– Customer service is the newest such category: the interest generated by LivePerson has led to a number of similar services. Look for the human element to re-enter the sales channel as a point of differentiation in the next 12 months.
– Filtering is also now a bona fide category. As if on cue, Xerox fired 40 employees for inappropriate use of the net this week, with slightly more than 20 of these people fired for looking at porn on the job. Filtering has left behind its “Big Brother/Religious Conservative” connotations and become a business liability issue.
– Promotions/sweepstakes/turn-key affiliate programs are hot. Last year was all about ecommerce – this year, the need for viewers from whom to extract all those dollars has loomed as an issue as well, so spending money on tricks other than pure media buys to move traffic to your site has grown into a category of its own.
– Language translation is hot. The economic rationale of high-traffic, ad-supported sites demands constant growth, but net growth in the States is slowing from impossibly fast to merely fast, even as other parts of the world hit the “annual doubling” part of their growth curve. The logical solution is prospecting for clients overseas – expect the rush to Europe to accelerate in the next 12 months.
(The previous two categories clash, as the legalities of running promotions vary enormously from place to place. Inasmuch as international traffic is valuable, promotions are a very complicated way to get it. Sometime in the next 12 months, someone is going to test the waters for multi-national promotions, probably within the EU.)
– Quality Assurance is not hot yet, but it has grown quite a bit from last year. eBay outages, post-IPO scrutiny, and the general pressures of scale and service are making people much more aware of QA issues, and several companies were hawking their services in interface design, human factors, and stress testing.
– Embedded systems. Lots and lots of embedded systems. Since it takes hardware longer to adjust than software, I don’t know how fast this is moving, but by 2001, all devices smarter than a pencil sharpener will have a TCP/IP stack.
– Internet telephony is weakening as a category, as the phone-centric view (“Let’s route telephone conversations over the net.”) gives way to a net-centric view (“Let’s turn telephone conversations into digital data”). Many companies are integrating voice digitization into their products, but instead of merely turning these into phone calls, they’re also letting the user store them, edit them, forward them as a clickable link (think voicemail.yahoo.com) etc. Voice is going from being an application to a data format.
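To make the net-centric view concrete, here is a minimal Python sketch of voice as a data format: a digitized message is written out as an ordinary file and forwarded as a clickable link rather than placed as a call. The function names and the URL scheme here are hypothetical illustrations, not any vendor’s actual API.

    import uuid
    import wave

    def save_message(pcm_frames: bytes, path: str) -> None:
        """Store raw 8 kHz mono PCM voice data as an ordinary WAV file."""
        with wave.open(path, "wb") as f:
            f.setnchannels(1)      # mono
            f.setsampwidth(2)      # 16-bit samples
            f.setframerate(8000)   # telephone-quality sample rate
            f.writeframes(pcm_frames)

    def share_link(base_url: str) -> str:
        """Forward a message as a clickable link, not a phone call."""
        return f"{base_url}/voicemail/{uuid.uuid4().hex}"

    save_message(b"\x00\x00" * 8000, "message.wav")  # one second of silence
    print(share_link("https://voicemail.example.com"))

Once the message is just a file with a URL, storing, editing, and forwarding it are ordinary data operations, which is the whole point of the shift.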
ASP – Application Service Providers (a.k.a. apps you log into)
– There is no such thing as an ASP. I went in looking for them, expecting the show to be filled with them, and found almost none. It only dawned on me after the fact that consumers buy products, not categories, and that in fact there were three major ASP categories, though they didn’t call themselves that. They were:
– Document storage. There have been ‘free net backup’ utilities for years, but this year, with mp3 and the need to store jpegs somewhere besides your home computer (ahem), the “i-drive/freedrive/z-drive” concept is really taking off.
– Document management. Many companies are vying to offer the killer app that will untie the document from the desktop, with some combination of net backup/file versioning/format conversion/update notification/web download in a kind of webified LAN (a minimal sketch of the idea follows this list). This document-driven (as opposed to device-driven) computing represents both the biggest opportunity and the biggest threat for Microsoft, since it is the document formats, not the software, that really drive their current hold on the desktop. If MS can stand the pain of interoperating with non-MS devices, they could be the major stakeholder in this arena in two years’ time.
– Conferencing/calendaring. Personal computers are best used by persons, not groups. This category is an attempt to use the Web as a kind of GC, a “group computer”, by bringing what the net has always done best, communication within and between groups, into the center of the business world.
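As promised above, here is a minimal Python sketch of the “webified LAN” idea: documents live on a server rather than on any one desktop, every save creates a new version, and watchers are notified of updates. Every name in it (DocumentStore, save, watch) is a hypothetical illustration, not any vendor’s shipping product.

    from typing import Callable, Optional

    class DocumentStore:
        """Documents live on the server, not the desktop: every save is a
        new version, and watchers are notified when a document changes."""

        def __init__(self) -> None:
            self._versions = {}   # name -> list of saved versions
            self._watchers = {}   # name -> list of notification callbacks

        def save(self, name: str, content: bytes) -> int:
            versions = self._versions.setdefault(name, [])
            versions.append(content)
            version = len(versions)
            for notify in self._watchers.get(name, []):
                notify(name, version)   # update notification
            return version

        def fetch(self, name: str, version: Optional[int] = None) -> bytes:
            """Any device can pull any version over the net."""
            versions = self._versions[name]
            return versions[-1] if version is None else versions[version - 1]

        def watch(self, name: str, notify: Callable[[str, int], None]) -> None:
            self._watchers.setdefault(name, []).append(notify)

    store = DocumentStore()
    store.watch("plan.txt", lambda n, v: print(f"{n} is now at version {v}"))
    store.save("plan.txt", b"first draft")
    store.save("plan.txt", b"second draft")
    print(store.fetch("plan.txt", version=1))   # b'first draft'

Note that nothing in this sketch cares what device the document came from or is going to, which is exactly why it threatens any hold on the desktop that depends on formats staying tied to particular software.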
The dogs that didn’t bark in the night:
– Very few companies selling pure XML products anymore – I counted 2. XML is on a lot of feature/compatibility lists, but makes up the core offering of few products.
– Ditto Java. Lots of products that use Java; very few banners that said “Java Inside”.
– Despite the large and growing impact of the net on gaming and vice-versa, there was almost no gaming presence there. Games are still a separate industry, and they have their own show (E3) which runs on a different logic. (Games are one of the only software categories currently not challenged by a serious free software movement.)
– Ditto movies. Only atomfilms was there. The Internet is everywhere, but movies are still in LA.
– Ditto music. .mp3s are not an Internet revolution – they’re just another file format, after all. What they are is a music revolution, and the real action is with the people who own the content. Lots of people advertised .mp3 compatibility, but almost no one structured their product offering around them. Those battles will be fought elsewhere.
– To my surprise, there were also few companies offering third-party solutions for warehousing/shipping. I expect this will become a big deal after we see what happens during the second ecommerce Christmas.
Random notes
– It’s impossible to take any company seriously if it only has a kiosk in a country pavilion or a “Partner Pavilion” – it just ends up looking like arm candy for Oracle or Germany, not like a real company.
– That goes double if there are people in colorful native garb in the booth.
– The conference was more feminized than last year, which was alarmingly like a car show. There were more women who knew what they were talking about on either side of the podium, and fewer “booth babes” per capita.
– The search engine war has broken out into the open (witness every request on internet mailing lists in the past year from someone asking how to improve their site’s ranking on a search engine). There were companies advertising automated creation of search-engine-only URLs to stuff the rankings. Look for the search engine firms to develop ways of filtering these within 6 months, probably using router data.
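One guess at what such a filter might look like, purely as a sketch: pages generated to stuff the rankings tend to repeat their target keywords far more densely than prose written for people, so a first-pass filter could simply flag abnormal repetition. This is a content-based approach rather than the router-data approach predicted above, and the threshold is an illustrative assumption, not any search engine’s actual method.

    from collections import Counter

    def looks_stuffed(text: str, threshold: float = 0.20) -> bool:
        """Flag a page if any single word dominates its word count,
        a telltale of machine-generated search-engine-only URLs."""
        words = [w.lower() for w in text.split() if w.isalpha()]
        if len(words) < 20:      # too short to judge
            return False
        _, top_count = Counter(words).most_common(1)[0]
        return top_count / len(words) > threshold

    print(looks_stuffed("cheap flights " * 50))    # True: obvious stuffing
    print(looks_stuffed("An ordinary paragraph about travel written "
                        "for human readers rather than for spiders. " * 3))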
– The big booth holders have moved from computing to networking. Two years ago, Dell had a big presence, but the biggest booths now were the usuals (IBM, MS, Oracle) and then the telcos – AT&T, Qwest, GTE.
– For years, IW had several PC manufacturers. This year, the companies selling servers, power supplies, and the like outnumbered the companies who concentrate on PCs. And every company at IW that does ship PCs will ship them with Linux pre-installed – no MS-only hardware vendors anymore.
– Every company has a Linux port or a story about working on one. Unlike last year, nobody says ‘Huh?’ or ‘No.’
– Your next computer will have a flat screen. There were more flat screens than glass monitors in the booths.
– The marketing effect of changing the Macintosh case colors came home; the iMac was the booth accoutrement of choice after the flat panel. Furthermore, since almost no one writes software just for the Mac anymore, having an iMac showing your product has become a visual symbol for ‘cross-platform’.
– The sound was unbearably loud at times – a perfect metaphor for the increasingly noisy market. Two illustrative moments: a bona fide smart person talking too quietly into a microphone, trying to explain how XML really works, while the song “Fame” drowned him out from the next booth as accompaniment to a guy speed-finger-painting a giant portrait on black velvet. Also, you could hardly hear what Intel was up to because of the noise from the Microsoft pavilion, and vice-versa.
– Two different companies offered internet/stereo compatibility. Expect convergence to merge home and car audio with the net before it merges the computer with the TV.
– The only large crowd with that ‘tell us more, tell us more’ look in their eyes was gathered around the Handspring booth. Handspring’s product, the Visor, is nothing more than a Palm Pilot in a PVC case with more apps and memory for fewer dollars — its biggest selling point, in fact, is how little it differs from the Pilot — but to see the crowd at the booth you’d have thought they were giving away hot buttered money.