“The Consumer” is the internet’s most recent casualty. We have often heard that the Internet puts power in the hands of the consumer, but this is nonsense — ‘powerful consumer’ is an oxymoron. The historic role of the consumer has been nothing more than a giant maw at the end of the mass media’s long conveyor belt, the all-absorbing Yin to mass media’s all-producing Yang. Mass media’s role has been to package consumers and sell their attention to the advertisers, in bulk. The consumers’ appointed role in this system gives them no way to communicate anything about themselves except their preference between Coke and Pepsi, Bounty and Brawny, Trix and Chex. They have no way to respond to the things they see on television or hear on the radio, and they have no access to any media on their own — media is something that is done to them, and consuming is how they register their response. In changing the relations between media and individuals, the Internet does not herald the rise of a powerful consumer. The Internet heralds the disappearance of the consumer altogether, because the Internet destroys the noisy advertiser/silent consumer relationship that the mass media relies upon. The rise of the internet undermines the existence of the consumer because it undermines the role of mass media. In the age of the internet, no one is a passive consumer anymore because everyone is a media outlet.
To profit from its symbiotic relationship with advertisers, the mass media required two things from its consumers — size and silence. Size allowed the media to address groups while ignoring the individual — a single viewer makes up less than 1% of 1% of 1% of Frasier’s 10-million-strong audience. In this system, the individual matters not at all: the standard unit for measuring television audiences is a million households at a time. Silence, meanwhile, allowed the media’s message to pass unchallenged by the viewers themselves. Marketers could broadcast synthetic consumer reaction — “Tastes Great!”, “Less filling!” — without having to respond to real customers’ real reactions — “Tastes bland”, “More expensive”. The enforced silence leaves the consumer with only binary choices — “I will or won’t watch I Dream of Jeannie, I will or won’t buy Lemon Fresh Pledge” and so on. Silence has kept the consumer from injecting any complex or demanding interests into the equation because mass media is one-way media.
This combination of size and silence has meant that mass media, where producers could address 10 million people at once with no fear of crosstalk, has been a very profitable business to be in.
Unfortunately for the mass media, however, the last decade of the 20th century was hell on both the size and silence of the consumer audience. As AOL’s takeover of Time Warner demonstrated, while everyone in the traditional media was waiting for the Web to become like traditional media, traditional media has become vastly more like the Web. TV’s worst characteristics — its blandness, its cultural homogeneity, its appeal to the lowest common denominator — weren’t an inevitable part of the medium, they were simply byproducts of a restricted number of channels, leaving every channel to fight for the average viewer with their average tastes. The proliferation of TV channels has eroded the audience for any given show — the average program now commands a fraction of the audience it did 10 years ago, forcing TV stations to find and defend audience niches which will be attractive to advertisers.
Accompanying this reduction in size is a growing response from formerly passive consumers. Marketing lore says that if a customer has a bad experience, they will tell 9 other people, but that figure badly needs updating. Armed with nothing more than an email address, a disgruntled customer who vents to a mailing list can reach hundreds of people at once; the same person can reach thousands on ivillage or deja; a post on slashdot or a review on amazon can reach tens of thousands. Furthermore, the Internet never forgets — a complaint made on the phone is gone forever, but a complaint made on the Web is there forever. With mass media outlets shrinking and the reach of the individual growing, the one-sided relationship between media and consumer is over, and it is being replaced with something a lot less conducive to unquestioning acceptance.
In retrospect, mass media’s position in the 20th century was an anomaly and not an inevitability. There have always been both one-way and two-way media — pamphlets vs. letters, stock tickers vs. telegraphs — but in the 20th century TV so outstripped the town square that we came to assume that ‘large audience’ necessarily meant ‘passive audience’, even though size and passivity are unrelated. With the Internet, we have the world’s first large, active medium, but when it got here no one was ready for it, least of all the people who have learned to rely on the consumer’s quiescent attention while the Lucky Strike boxes tap-dance across the screen. Frasier’s advertisers no longer reach 10 million consumers, they reach 10 million other media outlets, each of whom has the power to amplify or contradict the advertiser’s message in something frighteningly close to real time. In place of the giant maw are millions of mouths who can all talk back. There are no more consumers, because in a world where an email address constitutes a media channel, we are all producers now.
Thanks to the Wireless Application Protocol (WAP), the telephone and the PC are going to collide this year, and it’s not going to be pretty. The problem with “wireless everywhere” is that the PC and the phone can’t fuse into the tidy little converged info-appliance that pundits have been predicting for years, because while it’s easy to combine the hardware of the phone and the PC, it’s impossible to combine their philosophies.
The phone-based assumptions about innovation, freedom, and commercial control are so different from those of the PC that the upcoming battle between the two devices will be nothing less than a battle over the relationship of the Internet to its users.
The philosophy behind the PC is simple: Put as much control in the hands of the user as you possibly can. PC users can install any software they like; they can connect their PCs to any network; they can connect any peripherals; they can even replace the operating system. And they don’t need anyone’s permission to do any of these things. The phone has an equally simple underlying philosophy: Take as much control from the user as possible while still producing a usable device. Phones allow so little user control that users don’t even think of their phones as having operating systems, much less software they can upgrade or replace themselves. The phone, in other words, is built around principles of restriction and corporate control of the user interface that are anathema to the Internet as it has developed so far.
WAP extends this idea of control into the network itself, by purporting to offer Internet access while redesigning almost every protocol needed to move data across the wireless part of the network. WAP does not offer direct access to the Internet, but instead links the phone to a WAP gateway which brokers connections between the phone and the rest of the Net. The data that passes between the phone and this WAP gateway is translated from standard Internet protocols to a kind of parallel “W” universe, where HTML becomes WML, TCP becomes WTP, and so on. The implication is that the W world is simply wireless Internet, but in fact the WAP Forum has not only renamed but redesigned these protocols. WML, for example, is not in fact a markup language but a programming language, and therefore much more difficult for the average content creator to use. Likewise, WAP’s designers chose to ignore the lesson of HTML, which is so adaptable precisely because it was never designed for any particular interface.
Familiar principles
The rationale behind these redesigns is that WAP allows for error checking and for interconnecting different kinds of networks. If that sounds familiar, it’s because these were the founding principles of the Internet itself, principles that have proven astonishingly flexible over 30 or so years and are perfectly applicable to wireless networks. The redesign of the protocol lets the WAP consortium blend the functions of delivery and display so the browser choice is locked in by the phone manufacturer. (Imagine how much Microsoft would like to have pulled off that trick.) No matter what the technical arguments for WAP are, its effect is to put the phone companies firmly in control of the user. The WAP consortium is determined that no third party will be able to reach the user of a wireless device without going through an interface that one of its member companies controls and derives revenue from.
The effects of this control can be seen in a recent string of commercial announcements. Geoworks intends to enforce its WAP patents to extract a $20,000 fee from any large company using a WAP gateway (contrast the free Apache Web server). Sprint has made licensing deals with companies such as E-Compare to distribute content over its WAP-enabled phones (imagine having to negotiate a separate deal with every ISP to distribute content to PC users). Nokia announced it will use WAP to deliver ads to its users’ phones (imagine WorldNet hijacking its subscribers’ browsers to serve them ads). By linking hardware, browser, and data transport together far more tightly than they are on the PC-based Internet, the members of the WAP Forum hope to create artificial scarcity for content, and avoid having to offer individual users unfettered access to the Internet.
In the short run this might work, because WAP has a head start over other protocols for wireless data. In the long run, though, it is doomed to fail, because the only thing we’ve ever seen with the growth characteristics of the Internet is the Internet itself. The people touting WAP over the current PC-based methods of accessing the Internet want to focus on phone hardware versus PC hardware, but the real contest is between the two devices’ philosophies of control, and so far the more controlled medium has always lost to the freer one.
The message of Napster, the wildly popular mp3 “sharing” software, is plain: The internet is being turned inside out.
Napster is downloadable software that allows users to trade mp3 files with one another. It works by constantly updating a master song list, adding and removing songs as individual users connect and disconnect their PCs. When someone requests a particular song, the Napster server then initiates a direct file transfer from the user who has a copy of the song to the user who wants one. Running against the twin tides of the death of the PC and the rise of application service providers (ASPs), Napster instead points the way to a networking architecture which re-invents the PC as a hybrid client+server while relegating the center of the internet, where all the action has been recently, to nothing but brokering connections.
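To make the brokering idea concrete, here is a minimal sketch of a Napster-style directory, written in Python. This is not the actual Napster protocol or code; the class and method names are invented for illustration. The point is only that the center holds an index of who has what, while the files themselves stay on, and move directly between, the PCs at the edges.

```python
from collections import defaultdict

class SongBroker:
    """Central index mapping song titles to the peers that currently hold them.
    The broker never stores or relays the files themselves."""

    def __init__(self):
        self.index = defaultdict(set)          # title -> set of (host, port)

    def connect(self, peer_addr, shared_titles):
        # A peer announces itself and the files already sitting on its disk.
        for title in shared_titles:
            self.index[title].add(peer_addr)

    def disconnect(self, peer_addr):
        # When a peer drops off, its songs leave the index (but not its disk).
        for holders in self.index.values():
            holders.discard(peer_addr)

    def find(self, title):
        # Return the peers holding the song; the requester then opens a direct
        # connection to one of them -- the bytes never pass through the broker.
        return sorted(self.index.get(title, ()))

broker = SongBroker()
broker.connect(("10.0.0.5", 6699), ["sheep.mp3", "money.mp3"])
broker.connect(("10.0.0.9", 6699), ["sheep.mp3"])
print(broker.find("sheep.mp3"))   # the requester downloads directly from a peer
```

In this arrangement the expensive part of the system — storage and transfer — never touches the center at all, which is exactly the inversion the rest of this piece describes.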
For software which is still in beta, Napster’s success is difficult to overstate: at any given moment, Napster servers keep track of thousands of PCs, holding hundreds of thousands of songs which comprise terabytes of data. This is a complete violation of the Web’s current data model — “Content at the center” — and Napster’s success in violating it points the way to an alternative — “Content at the edges”. The current content-at-the-center model has one significant flaw: most internet content is created on the PCs at the edges, but for it to become universally accessible, it must be pushed to the center, to always-on, always-up Web servers. As anyone who has ever spent time trying to upload material to a Web site knows, the Web has made downloading trivially easy, but uploading is still needlessly hard. Napster relies on three networking innovations to get around these limitations:
It dispenses with uploading and leaves the files on the PCs, merely brokering requests from one PC to another — the mp3 files do not have to travel through any central Napster server.
PCs running Napster do not need a fixed internet address or a permanent connection to use the service.
It ignores the reigning Web paradigm of client and server. Napster makes no distinction between the two functions: if you can receive files from other people, they can receive files from you as well.
Leave aside for the moment the fact that virtually all of the file transfers brokered by Napster are illegal — piracy is often an indicator of massive untapped demand. The real import of Napster is that it is proof-of-concept for a networking architecture which recognizes that bandwidth to the desktop is becoming fast enough to allow PCs to act as servers, and that PCs are becoming powerful enough to fulfill this new role. In other words, just as the ASP space is taking off, Napster’s success represents the revenge of the PC. By removing the need to upload data (the single biggest bottleneck to using the ASP model for everything), the content-at-the-edges model points the way to a re-invention of the desktop as the center of a user’s data, only this time the user will no longer need physical access to the PC itself. The use of the PC as central repository and server of user content will have profound effects on several internet developments currently underway:
This is the ground on which the Windows2000 vs. Linux battle will be fought. As the functions of desktop and server fuse, look for Microsoft to aggressively push Web services which rely on content-at-the-edges, trying to undermine Linux’s hold on the server market. (Ominously for Linux, the Napster Linux client is not seen as a priority by Napster themselves.)
Free hosting companies like Geocities exist because the present system makes it difficult for the average user to host their own web content. With PCs increasingly able to act as Web servers, look for a Napster-like service which simply points requests to individual users’ machines.
WAP and other mobile access protocols are currently focusing on access to centralized commercial services, but when you are on the road the information you are likeliest to need is on your PC, not on CNN. An always-on, always-accessible PC is going to be the ideal source of WAP-enabled information for traveling business people.
The trend towards centralized personalization services on sites like Yahoo will find itself fighting with a trend towards making your PC the source of your calendar, phone book, and to-do list. The Palm Pilot currently syncs with the PC, and it will be easier to turn the PC itself into a Web server than to teach the average user how to upload a contact database.
Stolen mp3s are obvious targets to be served from individual machines, but they are by no means the only such content category. Everything from wedding pictures to home office documents to amateur porn (watch for a content-at-the-edges version of persiankitty) can be served from a PC now, and as long as the data does not require central management, it will be more efficient to do so.
This is not to say that the desktop will replace all web servers — systems which require steady backups or contain professionally updated content will still continue to work best on centrally managed servers. Nevertheless, Napster’s rise shows us that the versatility of the PC as a hardware platform will give the millions of desktop machines currently in use a new lease on life. This in turn means that the ASP revolution will not be as swift, nor the death of the PC as total, as the current press would have us believe. The current content-at-the-center architecture got us through the 90’s, when PCs were too poorly engineered to be servers and bandwidth was too slow and variable to open a pipe to the desktop, but with DSL and stable operating systems in the offing, much of the next 5 years will be shaped by the rise of content-at-the-edges.
Napster has joined the pantheon of Netscape, Hotmail, and ICQ as a software-cum-social movement, and its growth shows no sign of abating any time soon. Needless to say, anything this successful needs its own lawsuit to make it a full-fledged Net phenomenon. The Recording Industry Association of America has been only too happy to oblige, with a suit seeking up to a hundred thousand dollars per copyrighted song exchanged (an amount that would be on the order of a trillion dollars, based on Napster usage to date). Unfortunately for the RIAA, the history of music shows that when technological change comes along, the defenders of the old order are powerless to stop it.
In the twenties, the American Federation of Musicians launched a protest when The Jazz Singer inaugurated the talkies and put silent-movie orchestras out of business. The protest was as vigorous as it was ineffective. Once the talkies created a way to distribute movie music without needing to hire movie musicians, there was nothing anyone could do to hold it back, leading the way for new sorts of organizations that embraced recorded music — organizations like the RIAA. Now that the RIAA is faced with another innovation in distribution, it shouldn’t be wasting its time arguing that Napster users are breaking the law. As we’ve seen with the distribution of print on the Web, efficiency trumps legality, and the RIAA needs to be developing new models that work with electronic distribution rather than against it.
In the early nineties, a service called Clarinet was launched that distributed newswire content over the Net, but this distribution came with a catch — users were never, ever supposed to forward the articles they read. The underlying (and fruitless) hope behind this system was that if everyone could be made to pretend that the Net was no different from paper, then the newspaper’s “pay directly for content” model wouldn’t be challenged on-line. What sidelined this argument — and Clarinet — was that a bunch of competing businesses said, in effect, “Publish and be damned,” and the Yahoos and News.coms of the world bypassed Clarinet by developing business models that encouraged copying. Even the companies that recognized Clarinet’s approach was wrong still took years to get their new models right. The idea that people shouldn’t forward articles to one another has collapsed so completely that it’s hard to remember when it was taken seriously. Years of dire warnings that violating the print model of copyright would lead to writers starving in the streets and render the Web a backwater of amateur content have come to naught. The quality of written material available on-line is rising every year.
The lesson for the RIAA here is that old distribution models can fail long before anyone has any idea what the new models will look like. As with digital text, so now with music. People have a strong preference for making unlimited perfect copies of the music they want to hear, and Napster now makes it feasible to do so, just as the Web made it possible with text. Right now, no one knows how musicians will be rewarded in the future. But the lack of immediate alternatives doesn’t change the fact that Napster is the death knell for the current music distribution system. The music industry does not need to know how musicians will be rewarded when this new system takes hold to know that musicians will be rewarded somehow. Society can’t exist without artists; it can, however, exist without A&R departments.
The RIAA-Napster suit feels like nothing so much as the fight over the national speed limit in the seventies and eighties. The people arguing in favor of keeping the 55-MPH limit had almost everything on their side — facts and figures, commonsense concerns about safety and fuel efficiency, even the force of federal law. The only thing they lacked was the willingness of the people to go along. As with the speed limit, Napster shows us a case where millions of people are willing to see the law, understand the law, and violate it anyway on a daily basis. The bad news for the RIAA is not that the law isn’t on their side. It plainly is. The bad news for the RIAA is that in a democracy, when the will of the people and the law diverge too strongly for too long, it is the law that changes. Thus are speed limits raised.
WAP is in the air, both literally and figuratively. A mobile phone consortium called Unwired Planet has been working on WAP (the Wireless Application Protocol) since May of 1995 in an effort to establish the foundation for the mobile phone’s version of the Web. After several false starts, that work seems to be bearing fruit this year: Nokia was caught by surprise at the demand for its first WAP-enabled phone, Ericsson is right behind with its model, and analysts are predicting that by 2002, more people will access the internet through mobile phones than through PCs. However, we’ve got to be careful when we tout WAP as the next major networking development after the Web itself, because it differs in two crucial ways: the Web grew organically (and non-commercially) in its first few years, and anyone could create or view Web content without a license. WAP, by contrast, is being pushed commercially from the jump, and it is fenced in by a remarkable array of patents which will affect both producers and consumers of WAP content. These differences put WAP’s development on a collision course with the Web as it exists today.
Even after years of commercial development, the Web we have is still remarkably cross-platform, open to amateur content, unmanaged, and unmanageable, and it’s tempting to think that that’s just what global networks look like in the age of the internet. However, the Web is not just the story of the internet, it’s also a story of the computing ecology of the 1990’s. The Web has grown up in an environment where hardware is radically divorced from software: Anyone can install anything on their own PC with no interference (or even knowledge) from the manufacturer. The ISP business operates with a total separation of content and delivery: Internet access is charged by the month, not by the download. And most important of all, the critical pair of standards — HTTP and HTML — was allowed to spread unhampered by intellectual property laws. The separation of these layers meant that ISPs didn’t have to co-ordinate with browser engineers, who didn’t have to co-ordinate with site designers, who didn’t have to co-ordinate with hardware manufacturers, and this freedom to innovate one layer at a time has been part and parcel of the Web’s remarkable growth.
None of those things are true with WAP. The integration of WAP software with the telephone hardware is far tighter than it was on the PC. The mobile phone business is predicated on charging either per minute or per byte, making it much easier to charge directly for content. Most importantly, WAP’s patents have been designed from the beginning to prevent anyone from creating a way to get content onto mobile phones without cutting the phone companies themselves in on the action, as evidenced by Unwired Planet’s first patent in 1995, the astonishingly broad “Method and architecture for an interactive two-way data communication network.” WAP, in other words, offers a chance to rebuild the Web, without all that annoying freedom, and without all that annoying competition.
Many industries have looked at the Web and thought that it was almost perfect, with two exceptions — they didn’t own it, and it was too difficult to stifle competition. Microsoft’s first versions of MSN, Apple’s eWorld, and the pre-dot-com AOL were all attempts to build a service that would grow like the Web but let them charge consumers like pay-per-view TV. All such attempts have failed so far, because wherever restrictions on either content creators or users were put in place, growth faltered in favor of the freer medium. With WAP, however, we are seeing the first attempt at a walled garden that faces no competition from a freer medium — the Unwired Planet patents cover every mobile device ever made, which may give the company the leverage to enforce its ideal of total commercial control of mobile internet access. If predictions of the protocol’s growth, ubiquity, and hegemony are correct, then WAP may pose the first real threat to the freewheeling internet.
Windows2000, just beginning to ship and slated for a high-profile launch next month, will fundamentally alter the nature of Windows’ competition with Linux, its only real competitor. Up until now, this competition has focused on two separate spheres: servers and desktops. In the server arena, Linux is largely thought to have the upper hand over WindowsNT, with a smaller installed base but much faster growth. On the desktop, though, Linux’s success as a server has had as yet little effect, and the ubiquity of Windows remains unchallenged. With the launch of Windows2000, the battle will no longer be fought in two separate arenas, because just as rising chip power destroyed the distinction between PCs and “workstations,” growing connectivity is destroying the distinction between the desktop and the server. All operating systems are moving in this direction, but the first one to catch the average customer’s eye will rock the market.
The fusion of desktop and server, already underway, is turning the internet inside out. The current network is built on a “content in the center” architecture, where a core of always-on, always-connected servers provides content on demand to a much larger group of PCs which only connect to the net from time to time (mostly to request content, rarely to provide it). With the rise of faster and more stable PCs, however, the ability for a desktop machine to take on the work of a server increases annually. In addition, the newer networking services like cable modems and DSL offer “always on” connectivity — instead of dialing up, their connection to the internet is (at least theoretically) persistent. Add to these forces an increasing number of PCs in networked offices and dorms, and you have the outlines of a new “content at the edges” architecture. This architecture is exemplified by software like Napster or Hotline, designed for sharing MP3s, images, and other files from one PC to another without the need for a central server. In the Napster model, the content resides on the PCs at the edges of the net, and the center is only used for bit-transport. In this “content at the edges” system, the old separation between desktop and server vanishes, with the PC playing both functions at different times. This is the future, and Microsoft knows it.
In the same way Windows95 had built-in dial-up software, Windows2000 has a built-in Web server. The average user has terrible trouble uploading files, but would like to use the web to share their resumes, recipes, cat pictures, pirated music, amateur porn, and PowerPoint presentations, so Microsoft wants to make running a web server with Windows2000 as easy as establishing a dialup connection was with Windows95. In addition to giving Microsoft potentially huge competitive leverage over Linux, this desktop/server combo will also allow them to better compete with the phenomenally successful Apache web server, and give them a foothold for pushing Microsoft Word over HTML as the chosen format for web documents — as long as both sender and receiver are running Windows2000.
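To see how small the technical step is, here is a minimal sketch of a desktop machine serving a folder of files. It uses Python’s standard library rather than anything Windows2000 actually ships, and the port number and folder name are arbitrary; the point is only how little machinery a “personal web server” needs once the connection is always on.

```python
import http.server
import socketserver

PORT = 8080
SHARED_DIR = "public"    # e.g. resumes, recipes, cat pictures

class SharedFolderHandler(http.server.SimpleHTTPRequestHandler):
    # Serve files out of SHARED_DIR instead of the current working directory.
    def __init__(self, *args, **kwargs):
        super().__init__(*args, directory=SHARED_DIR, **kwargs)

with socketserver.TCPServer(("", PORT), SharedFolderHandler) as httpd:
    # Anyone who knows this machine's address can now fetch files from
    # SHARED_DIR with an ordinary browser -- no upload step required.
    httpd.serve_forever()
```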
The Linux camp’s response to this challenge is unclear. Microsoft has typically employed an “attack from below” strategy, using incremental improvements to an initially inferior product to erode a competitor’s advantage. Linux has some defenses against this strategy — the Open Source methodology gives Linux the edge in incremental improvements, and the fact that Linux is free gives Microsoft no way to win a “price vs. features” comparison — but the central fact remains that as desktop computers become servers as well, Microsoft’s desktop monopoly will give them a huge advantage, if they can provide (or even claim to provide) a simple and painless upgrade. Windows2000 has not been out long, it is not yet being targeted at the home user, and developments on the Linux front are coming thick and fast, but the battle lines are clear: The fusing of the functions of desktop and server represents Microsoft’s best (and perhaps last) chance to prevent Linux from toppling its monopoly.
Freedom of speech in the computer age was thrown dramatically into question by a pair of recent stories. The first was the news that Ford would be offering its entire 350,000-member global work force an internet-connected computer for $5 a month. This move, already startling, was made more so by the praise Ford received from Stephen Yokich, the head of the UAW, who said “This will allow us to communicate with our brothers and sisters from around the world.” This display of unanimity between management and the unions was in bizarre contrast to an announcement later in the week concerning Northwest Airlines flight attendants. US District Judge Donovan Frank ruled that the home PCs of Northwest Airlines flight attendants could be confiscated and searched by Northwest, who were looking for evidence of email organizing a New Year’s sickout. Clearly corporations do not always look favorably on communication amongst their employees — if the legal barriers to privacy on a home PC are weak now, and if a large number of workers’ PCs will be on loan from their parent company, the freedom of speech and relative anonymity we’ve taken for granted on the internet to date will be seriously tested, and the law may be of little help.
Freedom of speech evangelists tend to worship at the altar of the First Amendment, but many of them haven’t actually read it. As with many sacred documents, it is far less sweeping than people often imagine. Leaving aside the obvious problem of its applicability outside the geographical United States, the essential weakness of the Amendment at the dawn of the 21st century is that it only prohibits governmental interference in speech; it says nothing about commercial interference in speech. Though you can’t prevent people from picketing on the sidewalk, you can prevent them from picketing inside your place of business. This distinction relies on the adjacency of public and private spaces, and the First Amendment only compels the Government to protect free speech in the public arena.
What happens if there is no public arena, though? Put another way, what happens if all the space accessible to protesters is commercially owned? These questions call to mind another clash between labor and management in the annals of US case law, Hudgens v. NLRB (1976), in which the Supreme Court ruled that private space only fell under First Amendment control if it had “taken on all the attributes of a town” (a doctrine which arose to cover worker protests in company towns). However, the attributes the Court requires in order to consider something a town don’t map well to the internet, because they include municipal functions like a police force and a post office. By that measure, has Yahoo taken on all the functions of a town? Has AOL? If Ford provides workers their only link with the internet, has Ford taken on all the functions of a town?
Freedom of speech is following internet infrastructure, where commercial control blossoms and Government input withers. Since Congress declared the internet open for commercial use in 1991, there has been a wholesale migration from services run mostly by state colleges and Government labs to services run by commercial entities. As Ford’s move demonstrates, this has been a good thing for internet use as a whole — prices have plummeted, available services have mushroomed, and the number of users has skyrocketed — but we may be building an arena of all private stores and no public sidewalks. The internet is clearly the new agora, but without a new litmus test from the Supreme Court, all online space may become the kind of commercial space that the protections of the First Amendment no longer reach.
The word “synergy” always gets a workout whenever two media behemoths join forces (usually accompanied by “unique” and “unprecedented”), and Monday’s press release announcing AOL’s acquisition of Time Warner delivered its fair share of breathless prose. But in practical terms, Monday’s deal was made only for the markets, not for the consumers. AOL and Time Warner are in very different parts of the media business, so there will be little of the cost-cutting that usually follows a mega-merger. Likewise, because AOL chief Steve Case has been waging a war against the regional cable monopolies, looking for the widest possible access to AOL content, it seems more likely that AOL-Time Warner will use its combined reach to open new markets instead of closing existing ones. This means that most of the touted synergies are little more than bundling deals and cross-media promotions — useful, but not earth-shaking. The real import of the deal is that its financial effects are so incomparably vast, and so well timed, that every media company in the world is feeling its foundations shaken by the quake.
The back story to this deal was AOL’s dizzying rise in valuation — 1500% in two years — which left it, like most dot-com stocks, wildly overvalued by traditional measures, and put the company under tremendous pressure to do something to lock in that value before the stock price returned to earth. AOL was very shrewd in working out the holdings of the new company. Although it was worth almost twice Time Warner on paper, AOL stockholders will take a mere 55% of the new company. This is a brilliant way of backing down from an overvalued stock without causing investors to head for the exits. Time Warner, meanwhile, got its fondest wish: Once it trades on the markets under the “AOL” ticker, it has a chance to achieve internet-style valuations of its offline assets. The timing was also impeccable; when Barry Diller tried a similar deal last year, linking USA Networks and Lycos, the market was still dreaming of free internet money and sent the stocks of both companies into a tailspin. In retrospect, people holding Lycos stock must be gnashing their teeth.
This is not to say, back in the real world, that AOL-Time Warner will be a good company. Gerald Levin, current CEO of Time Warner, will still be at the helm, and while all the traditional media companies have demonstrated an uncanny knack for making a hash of their web efforts, the debacle of Pathfinder puts Time Warner comfortably at the head of that class. One of the reasons traditional media stocks have languished relative to their more nimble-footed internet counterparts is that the imagined synergies from the last round of media consolidations have largely failed to materialize, and this could end up sandbagging AOL as well. There is no guarantee that Levin will forgo the opportunity to limit intra-company competition: AOL might find its push for downloadable music slowed now that it’s joined at the hip to Warner Music Group. But no matter — the markets are valuing the sheer size of the combined companies, long before any real results are apparent, and it’s this market reaction (and not the eventual results from the merger) that will determine the repercussions of the deal.
With Monday’s announcement, the ground has shifted in favor of size. As “mass” becomes valuable in and of itself, in the way that “growth” has been the historic mantra of internet companies, every media outlet, online or offline, is going to spend the next few weeks deciding whether to emulate this merger strategy or to announce some counter-strategy. A neutral stance is now impossible. There is rarely this much clarity in these sorts of seismic shifts — things like MP3s, Linux, web mail, even the original Mosaic browser, all snuck up on internet users over time. AOL-Time Warner, on the other hand, is page one from day one. Looking back, we’ll remember that this moment marked the end of the division of media companies into the categories of “old” and “new.” More important, we’ll remember that it marked the moment where the markets surveyed the global media landscape and announced that for media companies there is no such category as “too big.”
First published on O’Reilly’s OpenP2P, 12/01/2000.
As the excitement over P2P grew during the past year, it seemed that decentralized architectures could do no wrong. Napster and its cousins managed to decentralize costs and control, creating applications of seemingly unstoppable power. And then researchers at Xerox brought us P2P’s first crisis: freeloading.
Freeloading is the tendency of people to take resources without paying for them. In the case of P2P systems, this means consuming resources provided by other users without providing an equivalent amount of resources (if any) back to the system. The Xerox study of Gnutella (now available at FirstMonday) found that “… a large proportion of the user population, upwards of 70 percent, enjoy the benefits of the system without contributing to its content,” and labeled the problem a “Tragedy of the Digital Commons.”
The Tragedy of the Commons is an economic problem with a long pedigree. As Mojo Nation, a P2P system set up to combat freeloading, states in its FAQ:
Other file-sharing systems are plagued by “the tragedy of the commons,” in which rational folks using a shared resource eat the resources to death. Most often, the “Tragedy of the Commons” refers to farmers and pasture, but technology journalists are writing about users who download and download but never contribute to the system.
To combat this problem, Mojo Nation proposes creating a market for computational resources — disk space, bandwidth, CPU cycles. In its proposed system, if you provide computational resources to the system, you earn Mojo, a kind of digital currency. If you consume computational resources, you spend the Mojo you’ve earned. This system is designed to keep freeloaders from consuming more than they contribute to the system.
A very flawed premise
Mojo Nation is still in beta, but it already faces two issues — one fairly trivial, one quite serious. The trivial issue is that the system isn’t working out as planned: Users are not flocking to the system in sufficient numbers to turn it into a self-sustaining marketplace.
The serious issue is that the system will never work for public file-sharing, not even in theory, because the problem of users eating resources to death does not pose a real threat to systems such as Napster, and the solution Mojo Nation proposes would destroy the very things that allow file-sharing systems like Napster to work.
The Xerox study on Gnutella makes broad claims about the relevance of its findings, even as Napster, which adds more users each day than the entire installed base of Gnutella, is growing without suffering from the study’s predicted effects. Indeed, Napster’s genius in building an architecture that understands the inevitability of freeloading and works within those constraints has led Dan Bricklin to christen Napster’s effects “The Cornucopia of the Commons.”
Systems that set out to right the imagined wrongs of freeloading are more marketing efforts than technological ones, in that they attempt to inflame our sense of injustice at the users who download and download but never contribute to the system. This plays well in the press, of course, garnering headlines like “A revolutionary file-sharing system could spell the end for dot-communism and Net leeches” or labeling P2P users “cyberparasites.”
This sense of unfairness, however, obscures two key aspects of P2P file-sharing: the economics of digital resources, which are either replicable or replenishable; and the ways the selfish nature of user participation drives the system.
One from one equals two
Almost without fail, anyone addressing freeloading refers to the aforementioned “Tragedy of the Commons.” This is an economic parable illustrating the threat to commonly held resources. Imagine that in an area of farmland, the entire pasture is owned by a group of farmers who graze their sheep there. In this situation, it is in the farmers’ best interest to maintain herds of moderate size in order to keep the pasture from being overgrazed. However, it is in the best interest of each farmer to increase the size of his herd as much as possible, because the shared pasture is a free resource.
Even worse, although each farmer recognizes that all of them should forgo increases in the size of their herds if they are acting for the good of the group, each also recognizes that every other farmer has the same incentive to expand. In this scenario, it is in each individual’s interest to take as much of the common resource as he can, partly because he benefits and partly because if he doesn’t, someone else will, even though doing so produces a bad outcome for the group as a whole.
The Tragedy of the Commons is a simple, compelling illustration of what can happen to commonly owned resources. It is also almost completely inapplicable to the digital world.
Start with the nature of consumption. If your sheep takes a mouthful of grass from the common pasture, the grass exits the common pasture and enters the sheep, a net decrease in commonly accessible resources. If you take a copy of the Pink Floyd song “Sheep” from another Napster user, that song is not deleted from that user’s hard drive. Furthermore, since your copy also exists within the Napster universe, this sort of consumption creates commonly accessible resources, rather than destroying them. The song is replicated; it is not consumed. Thus the Xerox thesis — that a user replicating a file is consuming resources — seems problematic when the original resource is left intact and a new copy is created.
Even if, in the worst scenario, you download the song and never make it available to any other Napster user, there is no net loss of available songs, so in any file-sharing system where even some small percentage of new users makes the files they download subsequently available, the system will grow in resources, which will in turn attract new users, which will in turn create new resources, whether the system has freeloaders or not. In fact, in the Napster architecture, it is the most replicated resources that suffer least from freeloading, because even with a large percentage of freeloaders, popular songs will tend to become more available.
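A rough simulation makes the point. The share rate and user counts below are invented assumptions, not measurements of Napster, but they show how availability climbs even when a large majority of downloaders never re-share anything.

```python
import random

def simulate(rounds=10, new_users_per_round=100, share_rate=0.3, seed_copies=1):
    available = seed_copies                  # copies reachable on the network
    for r in range(1, rounds + 1):
        new_sharers = 0
        for _ in range(new_users_per_round):
            if available == 0:
                break                        # nobody left to download from
            # Every new user downloads the song; only some re-share it afterward.
            if random.random() < share_rate:
                new_sharers += 1
        available += new_sharers
        print(f"round {r}: {available} copies available")

simulate()
# With a 30% share rate, roughly 30 new copies appear per 100 downloads,
# so availability climbs steadily despite the 70% who never contribute.
```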
Bandwidth over time is infinite
But what of bandwidth, the other resource consumed by file sharing? Here again, the idea of freeloading misconstrues digital economics. If you saturate a 1 Mb DSL line for 60 seconds while downloading a song, how much bandwidth do you have available in the 61st second? One meg, of course, just like every other second. Again, the Tragedy of the Commons is the wrong comparison, because the notion that freeloading users will somehow eat the available resources to death doesn’t apply. Unlike grass, bandwidth can’t be “used up,” any more than CPU cycles or RAM can.
Like a digital horn of plenty, most of the resources that go into networking computers together are constantly replenished; “Bandwidth over time is infinite,” as the Internet saying goes. By using all the available bandwidth in any given minute, you have not reduced future bandwidth, nor have you saved anything on the cost of that bandwidth when it’s priced at a flat rate.
Bandwidth can’t be conserved over time either. By not using all the available bandwidth in any given minute, you have not saved any bandwidth for the future, because bandwidth is an event, not a conservable resource. Unused bandwidth expires just like unused plane tickets do, and as long as the demand on bandwidth is distributed through the system — something P2P systems excel at — no single node suffers from the Slashdot effect, the tendency of small sites to crash under the massive load that follows front-page placement on the news site Slashdot.org.
Given this quality of persistently replenished resources, we would expect users to dislike sharing resources they want to use at that moment, but to be indifferent to sharing resources they make no claim on, such as available CPU cycles or bandwidth when they are away from their desks. Conservation of resources, in other words, should be situational and keyed to user behavior, and it is in misreading user behavior that attempts to discourage freeloading really jump the rails.
Selfish choices, beneficial outcomes
Attempts to prevent freeloading are usually framed in terms of preventing users from behaving selfishly, but selfishness is a key lubricant in P2P systems. In fact, selfishness is what makes the resources used by P2P available in the first place.
Since the writings of Adam Smith, literature detailing the workings of free markets has put the selfishness — or more accurately, the self-interest — of the individual actor at the center of the system, and the situation with P2P networks is no different. Mojo Nation’s central thesis about existing file-sharing systems is that some small number of users in those systems choose, through regard for their fellow man, to make available resources that a larger number of freeloaders then take unfair advantage of. This does not jibe with the experience of millions of present-day users.
Consider an ideal Napster user, with a 10 GB hard drive, a 1 Mb DSL line, and a computer connected to the Net round the clock. Did this user buy her hard drive in order to host MP3s for the community? Obviously not — the size of the drive was selected solely out of self-interest. Does she store MP3s she feels will be of interest to her fellow Napster users? No, she stores only the music she wants to listen to, self-interest again. Bandwidth? Is she shelling out for fast DSL so other users can download files quickly from her? Again, no. Her check goes to the phone company every month so she can have fast download times.
Likewise, decisions she makes about leaving her computer on and connected are self-interested choices. Bandwidth is not metered, and the pennies it costs her to leave her computer on while she is away from her desk, whether to make a pot of coffee or get some sleep, are a small price to pay for not having to sit through a five-minute boot sequence on her return.
Accentuate the positive
Economists call these kinds of valuable side effects “positive externalities.” The canonical example of a positive externality is a shade tree. If you buy a tree large enough to shade your lawn, there is a good chance that for at least part of the day it will shade your neighbor’s lawn as well. This free shade for your neighbor is a positive externality, a benefit to them that costs you nothing more than what you were willing to spend to shade your own lawn anyway.
Napster’s single economic genius is to coordinate such effects. Other than the central database of songs and user addresses, every resource within the Napster network is a positive externality. Furthermore, Napster coordinates these externalities in a way that encourages altruism. The system is resistant to the negative effects of freeloading, because as long as Napster users are able to find the songs they want, they will continue to participate in the system, even if the people who download songs from them are not the same people they download songs from.
As long as even a small portion of the users accept this bargain, the system will grow, bringing in more users, who bring in more songs. In such a system, trying to figure out who is freeloading and who is not isn’t worth the effort of the self-interested user.
Real life is asymmetrical
Consider the positive externalities our self-interested user has created. While she sleeps, the Lynyrd Skynyrd and N’Sync songs can fly off her hard drive at no additional cost over what she is willing to pay to have a fast computer and an always-on connection. When she is at her PC, there are a number of ways for her to reassert control of her local resources when she doesn’t want to share them. She can cancel individual uploads unilaterally, disconnect from the Napster server or even shut Napster off completely. Even her advertised connection speed acts as a kind of brake on undesirable external use of resources.
Consider a second user on a 14.4 modem downloading a song from our user with her 1 Mb DSL. At first glance, this seems unfair, since our user seems to be providing more resources. This is, however, the most desirable situation for both users. The 14.4 user is getting files at the fastest rate he can, a speed that takes such a small fraction of our user’s DSL bandwidth that she may not even notice it happening in the background.
Furthermore, reversing the situation to create “fairness” would be a disaster — a transfer from 14.4 to DSL would saturate the 14.4 line and all but paralyze that user’s Internet connection for a file transfer not in that user’s self-interest, while giving the DSL user a less-than-optimum download speed. Asymmetric transfers, far from being unfair, are the ideal scenario — as fast as possible on the downloads, and so slow when other users download from you that you don’t even notice.
In any system where the necessary resources like disk space and bandwidth are priced at a flat rate, these economics will prevail. The question for Napster and other systems that rely on these economics is whether flat-rate pricing is likely to disappear.
Setting prices
The economic history of telecommunications has returned again and again to one particular question: flat-rate vs. unit pricing. Simple economic theory tells us that unit pricing — a discrete price per hour online, per e-mail sent or file downloaded — is the most efficient way to allocate resources. By allowing users to take only those resources they are willing to pay for, per-unit pricing distributes resources most efficiently. Some form of unit pricing is at the center of almost all attempts to prevent freeloading, even if the currency the units are priced in is a notional one such as Mojo.
Flat-rate pricing, meanwhile, is too blunt an instrument to create such efficiencies. In flat-rate systems, light users pay a higher per-unit cost, thus subsidizing the heavy users. Additionally, the flat-rate price for resources has to be high enough to cover the cost of unexpected spikes in usage, meaning that the average user is guaranteed to pay more in a flat-rate system than in a per-unit system.
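A toy example, with entirely made-up numbers, shows the subsidy at work: price each hour of connect time at its cost, then compare what each user pays under per-unit billing with a single flat rate set high enough to cover everyone.

```python
cost_per_hour = 0.01                 # assumed cost to the provider of an hour online
hours = {"light user": 10, "average user": 40, "heavy user": 200}

# Per-unit pricing: everyone pays exactly for what they use.
per_unit_bills = {user: h * cost_per_hour for user, h in hours.items()}

# Flat-rate pricing: one price for everyone, padded to cover usage spikes.
flat_rate = sum(hours.values()) * cost_per_hour / len(hours) * 1.2

for user, bill in per_unit_bills.items():
    print(f"{user}: per-unit ${bill:.2f} vs flat-rate ${flat_rate:.2f}")
# The light and average users pay more under the flat rate than their usage
# costs; the heavy user pays far less. That gap is the subsidy described above.
```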
Flat-rate pricing is therefore unfair to all users, whether by creating unfair costs for light and average users, or by unfairly subsidizing heavy users. Given the obvious gap in efficient allocation of resources between the two systems, we would expect to see unit pricing ascendant in all situations where the two methods of pricing are in competition. The opposite, of course, is the actual case.
Too cheap to meter
Despite the insistence of economic theoreticians, in the real world people all over the world have expressed an overwhelming preference for flat-rate pricing in their telecommunications systems. Prodigy and CompuServe were forced to abandon their per-e-mail prices in the face of competition from systems that allowed unlimited e-mail. AOL was forced to drop its per-hour charges in the face of competition from ISPs that offered unlimited Internet access for a single monthly charge. Today, the music industry is caught in a struggle between those who want to preserve per-song charges and those who understand the inevitability of subscription charges for digital music.
For years, the refusal of users to embrace per-unit pricing for telecommunications was regarded by economists as little more than a perversion, but recently several economic theorists, especially Nick Szabo and Andrew Odlyzko, have worked out why a rational user might prefer flat-rate pricing, and it revolves around the phrase “Too Cheap to Meter,” or, put another way, “Not Worth Worrying About.”
People like to control costs, but they like to control anxiety as well. Prodigy’s per-e-mail charges and AOL’s hourly rates gave users complete control of their costs, but they also created a scenario where the user was always wondering if the next e-mail or the next hour was worth the price. When offered systems with slightly higher prices but no anxiety, users embraced them so wholeheartedly that Prodigy and AOL were each forced to give in to user preference. Lowered anxiety turned out to be worth paying for.
Anxiety is a kind of mental transaction cost, the cost incurred by having to stop to think about doing something before you do it. Mental transaction costs are what users are minimizing when they demand flat-rate systems. They are willing to spend more money to save themselves from having to make hundreds of individual decisions about e-mail, connect time or files downloaded.
Like Andrew Odlyzko’s notion of “Paris Metro Pricing,” where one price gets you into a particular class of service in the system without requiring you to differentiate between short and long trips, users prefer systems where they pay to get in, but are not asked to constantly price resources on a case-by-case basis afterward, which is why micropayment systems for end users have always failed. Micropayments overestimate the value users place on resources and underestimate the value they place on predictable costs and peace of mind.
The taxman
In the face of this user preference for flat-rate systems, attempts to stem freeloading with market systems are actually reintroducing mental transaction costs, thus destroying the advantages of flat-rate systems. If our hypothetical user is running a distributed computing client like SETI@Home, it is pointless to force her to set a price on her otherwise unused CPU cycles. Any cycles she values she will use, and the program will remain in the background. So long as she has chosen what she wants her spare cycles used for, any cycles she wouldn’t otherwise use for herself aren’t worth worrying about anyway.
Mojo Nation would like to suggest that Mojo is a currency, but it is more like a tax, a markup on an existing resource. Our user chose to run SETI, and since it costs her nothing to donate her unused cycles, any mental transaction costs incurred in pricing the resources raise the cost of the cycles above zero for no reason. Like all tax systems, this creates what economists call “deadweight loss,” the loss that comes from people simply avoiding transactions whose price is pushed too high by the tax itself. By asking its users to price something that they could give away free without incurring any loss, these systems discourage the benefits that come from coordinating positive externalities.
Lessons From Napster
Napster’s ability to add more users per week than all other P2P file-sharing systems combined is based in part on the ease of use that comes from its ability to tolerate freeloading. By decentralizing the parts of the system that are already paid for (disk space, bandwidth) while centralizing the parts of the system that individuals would not provide for themselves working individually (the databases of songs and user IDs), Napster has created a system that is far easier to use than most of the purely decentralized file-sharing systems.
This does not mean that Napster is the perfect model for all P2P systems. It is specific to the domain of popular music, and attempts to broaden its appeal to general file-sharing have largely failed. Nor does it mean that there is not some volume of users at which Napster begins to suffer from freeloading; all we know so far is that it can easily handle numbers in the tens of millions.
What Napster does show us is that, given the right architecture, freeloading is not the automatically corrosive problem that people believe it to be, and that systems which rely on micropayments or other methods of ensuring evenness between production and consumption are not the ideal alternative.
P2P systems use replicable or replenishable resources at the edges of the Internet, resources that tend to be paid for in lump sums or at rates that are insensitive to usage. Therefore, P2P systems that allow users to share resources they would have paid for anyway, so long as they are either getting something in return or contributing to a project they approve of, will tend to have better growth characteristics than systems that attempt to shut off freeloading altogether. If Napster is any guide, the ability to tolerate, rather than deflect, freeloading will be key to driving the growth of P2P.
1999 is shaping up to be a good year for lawyers. This fall saw the patent lawyers out in force, with Priceline suing Expedia over Priceline’s patented “name your own price” business model, and Amazon suing Barnes and Noble for copying Amazon’s “One-Click Ordering.” More recently, it’s been the trademark lawyers, with eToys convincing a California court to issue a preliminary injunction against etoy.com, the Swiss art site, because the etoy.com URL might “confuse” potential shoppers. Never mind that etoy.com registered its URL years before eToys existed: etoy has now been stripped of its domain name without so much as a trial, and is only accessible at its IP address (http://146.228.204.72:8080). Most recently, MIT’s journal of electronic culture, Leonardo, is being sued by a company called Transasia, which has trademarked the name “Leonardo” in France and is demanding a million dollars in damages on the grounds that search engines return links to the MIT journal, in violation of Transasia’s trademark. Lawsuits are threatening to dampen the dynamism of the internet because, even when they are obviously spurious, they add so much to the cost of doing business that soon amateurs and upstarts might not be able to afford to compete with anyone who can afford a lawyer.
The commercialization of the internet has been surprisingly good for amateurs and upstarts up until now. A couple of college kids with a well-managed bookmark list become Yahoo. A lone entrepreneur founds Amazon.com at a time when Barnes and Noble doesn’t even have a section called “internet” on its shelves, and now he’s Time’s Man of the Year. A solo journalist triggers the second presidential impeachment in US history. Over and over again, smart people with good ideas and not much else have challenged the pre-wired establishment and won. The idea that the web is not a battle of the big vs. the small but of the fast vs. the slow has become part of the web’s mystique, and big slow companies are being berated for not moving fast enough to keep up with their net-savvy competition. These big companies would do anything to find a way to use what they have — resources — to make up for what they lack — drive — and they may have found an answer to their prayers in lawsuits.
Lawsuits offer a return to the days of the fight between the big and the small, a fight the big players love. Ever since patents were expanded to include business models, patents have been applied to all sorts of ridiculous things — a patent on multimedia, a patent on downloading music, a patent on using cookies to allow shoppers to buy with one click. More recently, trademark law has become an equally fruitful arena for abuse. Online, a company’s URL is its business, and a trademark lawsuit which threatens a URL threatens the company’s very existence. In an adversarial legal system, a company can make as spurious an accusation as it likes if it knows its target can’t afford a defense. As odious as Amazon’s suit against Barnes and Noble is, it’s hard to shed any tears over either of them. etoy and Leonardo, on the other hand, are both not-for-profits, and defending what is rightfully theirs might bankrupt them. If etoy cannot afford the necessary (and expensive) legal talent, the preliminary injunction stripping it of its URL might as well be a final decision.
The definition of theft depends on the definition of property, and in an age when so much wealth resides in intelligence, it’s no wonder that those with access to the legal system are trying to alter the definition of intellectual property in their favor. Even Amazon, one of the upstarts just a few years ago, has lost so much faith in its ability to innovate that it is now behaving like the very dinosaurs it challenged in the mid-90’s. It’s also no surprise that both recent trademark cases — etoy and Leonardo — ran across national borders. Judges are more likely to rule in favor of their fellow citizens and against some faraway organization, no matter what principle is at stake. The web, which grew so quickly because there were so few barriers to entry, has created an almost irresistible temptation to create legal barriers where no technological ones exist. If this spate of foolish lawsuits continues — and there is every indication that it will — the next few years will see a web where the law becomes a tool for the slow to retard the fast and the big to stymie the small.