PCs Are The Dark Matter Of The Internet

First published on Biz2, 10/00.

Premature definition is a danger for any movement. Once a definitive
label is applied to a new phenomenon, it invariably begins shaping —
and possibly distorting — people’s views. So it is with the current
revolution, where Napster, SETI@Home, and their cousins now seem to be
part of a larger and more coherent change in the nature of the
internet. There have been many attempts to describe this change in a
phrase — decentralization, distributed computing — but the label
that seems to have stuck is peer-to-peer. And now that peer-to-peer is
the name of the game, the rush is on to apply this definition both as
a litmus test and as a marketing tool.

This is leading to silliness of the predictable sort — businesses
that have nothing in common with Napster, Gnutella, or Freenet are
nevertheless re-inventing themselves as “peer-to-peer” companies,
applying the term like a fresh coat of paint over a tired business
model. Meanwhile, newly vigilant interpreters of the revolution are
now suggesting that Napster itself is not “truly peer-to-peer”,
because it relies on a centralized server to host its song list.

It seems obvious, but bears repeating: definitions are only useful as
tools for sharpening one’s perception of reality. If Napster isn’t
peer-to-peer, then “peer-to-peer” is a bad description of what’s
happening. Napster is the killer app for this revolution, and defining
it out of the club after the fact is like saying “Sure it might work
in practice, but it will never fly in theory.”

No matter what you call it, what is happening is this: PCs, and in
particular their latent computing power, are for the first time being
integrated directly into the fabric of the internet.

PCs are the dark matter of the internet. Like the barely detectable
stuff that makes up most of the mass of the universe, PCs are
connected to the internet by the hundreds of millions but have very
little discernible effect on the whole, because they are largely
unused as anything other than dumb clients (and expensive dumb
clients to boot). From the point of view of most of the internet
industry, a PC is nothing more than a life-support system for a
browser and a place to store cookies.

PCs have been restricted to this expensive-but-dumb client mode for
many historical reasons — slow CPUs, small disks, flaky OSs, slow
and intermittent connections, no permanent IP addresses — but with
the steady growth in hardware quality, connectivity, and user base,
the PCs at the edges of the network now represent an astonishing and
untapped pool of computing power.

At a conservative estimate, the world’s net-connected PCs host an
aggregate 10 billion MHz of processing power and 10 thousand terabytes
of storage. This calculation assumes 100 million PCs among the
net’s 300 million users, with an average chip speed of 100 MHz and an
average 100 MB hard drive. And these numbers continue to climb —
today, sub-$2K PCs have an order of magnitude more processing power
and two orders of magnitude more storage than this assumed average.
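
The arithmetic is easy to check. A minimal sketch in Python, using the essay's assumed figures (100 million PCs, 100 MHz chips, 100 MB drives) rather than any measured data:

```python
# Back-of-the-envelope check of the "dark matter" estimate.
# Figures are the essay's conservative assumptions, not measurements.
pcs = 100_000_000          # net-connected PCs (of ~300 million users)
mhz_per_pc = 100           # average chip speed, in MHz
mb_per_pc = 100            # average hard drive, in MB

total_mhz = pcs * mhz_per_pc               # aggregate processing power
total_tb = pcs * mb_per_pc / 1_000_000     # aggregate storage, in terabytes

print(f"{total_mhz / 1e9:.0f} billion MHz")   # -> 10 billion MHz
print(f"{total_tb:,.0f} terabytes")           # -> 10,000 terabytes
```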

This is the fuel powering the current revolution — the latent
capabilities of PC hardware made newly accessible represent a huge,
untapped resource. No matter how it gets labelled (and peer-to-peer
seems likely to stick), the thing that software like the Gnutella file
sharing system and the Popular Power distributed computing network
have in common is an ability to harness this dark matter, the otherwise
underused hardware at the edges of the net.

Note though that this isn’t just “Return of the PC”, because in these
new models, PCs aren’t just personal computers, they’re promiscuous
computers, hosting data the rest of the world has access to, a la
Napster, and sometimes even hosting calculations that are of no use to
the PC’s owner at all, like Popular Power’s influenza virus
simulations. Furthermore, the PCs themselves are being disaggregated
— Popular Power will take as much CPU time as it can get but needs
practically no storage, while Gnutella needs vast amounts of disk
space but almost no CPU time. And neither kind of business
particularly needs the operating system — since the important
connection is often with the network rather than the local user, Intel
and Seagate matter more to the peer-to-peer companies than do
Microsoft or Apple.

It’s early days yet for this architectural shift, and the danger of the
peer-to-peer label is that it may actually obscure the real
engineering changes afoot. With improvements in hardware, connectivity
and sheer numbers still mounting rapidly, anyone who can figure out how to
light up the internet’s dark matter gains access to a large and
growing pool of computing resources, even if some of the functions are
centralized (again, like Napster or Popular Power.)

It’s still too soon to see who the major players will be, but don’t
place any bets on people or companies reflexively using the
peer-to-peer label. Bet instead on the people figuring out how to
leverage the underused PC hardware, because the actual engineering
challenges in taking advantage of the world’s PCs matter more — and
will create more value — than merely taking on the theoretical
challenges of peer-to-peer architecture.

The Napster-BMG Merger

Napster has always been a revolution within the commercial music business, not against it, and yesterday’s deal between BMG and Napster demonstrates that at least one of the five major labels understands that. The press release was short on details, but the rough outline of the deal has Bertelsmann dropping its lawsuit and instead working with Napster to create subscription-based access to its entire music
catalog online. Despite a year of legal action by the major labels, and despite the revolutionary fervor of some of Napster’s users, Napster’s success has more to do with the economics of digital music than with copyright law, and the BMG deal is merely a recognition of those economic realities.

Until Napster, the industry had an astonishingly successful run in producing digital music while preventing digital copying from taking place on a wide scale, managing to sideline DAT, Minidisc, and recordable CDs for years. Every time any of the major labels announced an online initiative, it was always based around digital rights
management schemes like SDMI, designed to make the experience of buying and playing digital files at least as inconvenient as physical albums and tapes.

In this environment, Napster was a cold shower. Napster demonstrated how easily and cheaply music could be distributed by people who did not have a vested interest in preserving inefficiency. This in turn reduced the industry to calling music lovers ‘pirates’ (even though Napster users weren’t in it for the money, surely the defining feature of piracy), or trying to ‘educate’ us about why we should be happy to pay as much for downloaded files as for a CD (because it was costing them so much to make downloaded music inconvenient).

As long as the labels kept whining, Napster looked revolutionary, but once BMG finally faced the economic realities of online distribution and flat rate pricing, the obvious partner for the new era was Napster. That era began in earnest yesterday, and the people in for the real surprise are not the music executives, who are after all
adept at reading popular sentiment, and who stand to make more money from the recurring revenues of a subscription model. The real surprise is coming for those users who convinced themselves that Napster’s growth had anything to do with anti-authoritarian zeal.

Despite the rants of a few artists and techno-anarchists who believed that Napster users were willing to go to the ramparts for the cause, large-scale civil disobedience against things like Prohibition or the 55 mph speed limit has usually been about relaxing restrictions, not repealing them. You can still make gin for free in your bathtub, but nobody does it anymore, because the legal liquor industry now sells high-quality gin at a reasonable price, with restrictions that society can live with.

Likewise, the BMG deal points to a future where you can subscribe to legal music from Napster for an attractive price, music which, as a bonus, won’t skip, end early, or be misindexed. Faced with the choice between shelling out five bucks a month for high-quality legal access or mastering Gnutella, many music lovers will simply plump for the subscription. This will in turn reduce the number of copyright violators, making it easier for the industry to go after them, which will drive still more people to legal subscriptions, and so on.

For a moment there, as Napster’s usage went through the roof while the music industry spread insane propaganda about the impending collapse of all professional music making, one could imagine that the collective will of 30 million people looking for free Britney Spears songs constituted some sort of grass roots uprising against The Man. As the BMG deal reverberates through the industry, though, it will become apparent that those Napster users were really just agitating for better prices. In unleashing these economic effects, Napster has almost single-handedly dragged the music industry into the internet age. Now the industry is repaying the favor by dragging Napster into the mainstream of the music business.

The Domain Name System is Coming Apart at the Seams

First published on Biz2, 10/00.

The Domain Name System is coming apart at the seams. DNS, the protocol which maps domain names like FindDentist.com to IP addresses like 206.107.251.22, is showing its age after almost 20 years. It has proved unable to adapt to dynamic internet addresses, to the number of new services being offered, and particularly to the needs of end users, who are increasingly using their PCs to serve files, host
software, and even search for extra-terrestrial intelligence. As these PCs become a vital part of the internet infrastructure, they need real addresses just as surely as yahoo.com does. This is something the DNS system can’t offer them, but the competitors to DNS can.

The original DNS system was invented, back in the early 80s, for a distinctly machine-centric world. Internet-connected computers were rare, occupying a few well-understood niches in academic and government labs. This was a world of permanence: any given computer would always have one and only one IP address, and any given IP address would have one and only one domain name. Neat and tidy and static.

Then along came 1994, the Year of the Web, when the demand for connecting PCs directly to the internet grew so quickly that the IP namespace — the total number of addresses — was too small to meet the demand. In response, the ISPs began doling out temporary IP addresses on an as-needed basis, which kept PCs out of the domain name system: no permanent IP, no domain name. This wasn’t a problem in the mid-90s — PCs were so bad, and modem connections so intermittent, that no one really thought of giving PCs their own domain names.

Over the last 5 years, though, cheap PC hardware has gotten quite good, operating systems have gotten distinctly less flaky, and connectivity via LAN, DSL and cable has given us acceptable connections. Against the background of these remarkable improvements, the DNS system got no better at all — anyone with a PC was still a
second-class citizen with no address, and it was Napster, ICQ, and their cousins, not the managers of the DNS system, who stepped into this breach.

These companies, realizing that interesting services could be run off of PCs if only they had real addresses, simply ignored DNS and replaced the machine-centric model with a protocol-centric one. Protocol-centric addressing creates a parallel namespace for each piece of software, and the mapping of ICQ or Napster usernames to temporary IP addresses is not handled by the net’s DNS servers but by
privately owned servers dedicated to each protocol — the ICQ server matches ICQ names to the users’ current IP address, and so on. As a side-effect of handling dynamic IP addresses, these protocols are also able to handle internet address changes in real time, while the current DNS system can take several days to fully propagate a change.

In Napster’s case, protocol-centric addressing merely turns Napster into a customized FTP for music files. The real action is in software like ICQ, which not only uses protocol-centric addressing schemes, but where the address points to a person, not a machine. When I log into ICQ, I’m me, no matter what machine I’m at, and no matter what IP address is presently assigned to that machine. This completely decouples what humans care about — can I find my friends and talk with them online? — from how the machines go about it — route message A to IP address X.

This is analogous to the change in telephony brought about by mobile phones. In the same way a phone number is no longer tied to a particular location but is now mapped to the physical location of the phone’s owner, an ICQ address is mapped to me, not to a machine, no matter where I am.
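
As a rough illustration of how protocol-centric addressing works, here is a minimal sketch of a private directory that maps stable usernames to whatever IP address the user currently holds. It is an assumption-laden toy, not ICQ's or Napster's actual protocol:

```python
# Sketch of a protocol-centric namespace: a privately owned directory that
# maps stable usernames to whatever dynamic IP the user has right now.
# (Illustrative only; not ICQ's or Napster's real wire protocol.)

class ProtocolDirectory:
    def __init__(self):
        self.online = {}                 # username -> current IP address

    def login(self, username, current_ip):
        # Called each time the user connects; the mapping updates instantly,
        # unlike a DNS change, which can take days to propagate.
        self.online[username] = current_ip

    def logout(self, username):
        self.online.pop(username, None)

    def locate(self, username):
        # The address points to a person, not to a machine.
        return self.online.get(username)

icq = ProtocolDirectory()
icq.login("12345678", "24.17.0.9")       # dialing in from home
print(icq.locate("12345678"))            # -> 24.17.0.9
icq.login("12345678", "128.59.22.4")     # same person, different machine
print(icq.locate("12345678"))            # -> 128.59.22.4
```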

This does not mean that the DNS system is going away, any more than landlines went away with the invention of mobile telephony. It does mean that DNS is no longer the only game in town. The rush is now on, with instant messaging protocols, single sign-on and wallet applications, and the explosion in peer-to-peer businesses, to create
and manage protocol-centric addresses, because these are essentially privately owned, centrally managed, instantly updated alternatives to DNS.

This also does not mean that this change is entirely to the good. While it is always refreshing to see people innovate their way around a bottleneck, sometimes bottlenecks are valuable. While ICQ and Napster came to their addressing schemes honestly, any number of people have noticed how valuable it is to own a namespace, and many business plans making the rounds are just me-too copies of Napster or
ICQ, which will make an already growing list of kinds of addresses — phone, fax, email, URL, ICQ, … — explode into meaninglessness.

Protocol-centric namespaces will also relegate the browser to lesser
importance, as users return to the days when they managed multiple pieces
of internet software, or it will mean that addresses like
icq://12345678 or napster://green_day_fan will have to be added to the
browser’s repertoire of recognized URLs. Expect the rise of
‘meta-address’ servers as well, which offer to manage a user’s
addresses for all of these competing protocols, and even to translate
from one kind of address to another. (These meta-address servers will,
of course, need their own addresses as well.)
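
Recognizing such addresses is mostly a matter of reading the scheme and handing the rest to the right directory. A toy dispatcher, with invented resolver behavior standing in for the real ICQ and Napster servers:

```python
# Toy dispatcher for protocol-centric addresses such as icq://12345678
# or napster://green_day_fan. The resolvers here are invented placeholders.
from urllib.parse import urlparse

resolvers = {
    "icq": lambda name: f"look up {name} on the ICQ directory server",
    "napster": lambda name: f"look up {name} on the Napster directory server",
    "http": lambda name: f"resolve {name} through ordinary DNS",
}

def resolve(address):
    parsed = urlparse(address)
    handler = resolvers.get(parsed.scheme)
    if handler is None:
        raise ValueError(f"no resolver registered for {parsed.scheme}://")
    return handler(parsed.netloc)

print(resolve("icq://12345678"))
print(resolve("napster://green_day_fan"))
```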

It’s not clear what is going to happen to internet addressing, but it is clear that it’s going to get a lot more complicated before it gets simpler. Fortunately, both the underlying IP addressing system and the design of URLs can handle this explosion of new protocols and addresses, but that familiar DNS bit in the middle (which really put the dot in dot com) will never recover the central position it has occupied over the last two decades, and that means that a critical piece of internet infrastructure is now up for grabs.


Thanks to Dan Gillmor of the San Jose Mercury News for pointing out to me the important relationship between peer-to-peer networking and DNS.

The Music Industry Will Miss Napster

First published in the Wall Street Journal, July 28, 2000.

On Wednesday, federal Judge Marilyn Hall Patel ordered Napster, a
company that provides software allowing users to swap MP3 music files
over the Internet, to stop facilitating the trading of copyrighted
material by midnight today. Now the argument surrounding digital
downloading of music enters a new phase.

In business terms, this is a crushing blow for Napster. Although
Napster gets to keep its “community” intact, in that it can still
offer its chat function, Napster users will have no reason to use the
Napster chat software for any purpose other than trading information
on alternative sources of free MP3s like Napigator and Gnutella. If
Napster is kept offline for any length of time, it will likely lose so
much market share to these competitors and others that it will not be
able to recover even if the ruling is later overturned.

MP3 Files

For the Recording Industry Association of America, which sued Napster,
Judge Patel’s order is a powerful demonstration that it can cause
enormous difficulties for companies that try to shut the major record
labels out of the Internet distribution game. But the recording
industry association should not be sanguine simply because the total
number of MP3 files being moved around the Internet will plummet, at
least temporarily.

There are still well over 10 million users out there who have become
used to downloading music over the Internet. Napster or no Napster,
someone is going to service this enormous and still growing
appetite. The recording industry cannot expect that one court ruling,
or even many court rulings, will turn back the tide of technology.

Closing Napster as an outlet will have no effect on the underlying
ability of average users to create and listen to MP3 files. It is not
Napster that is digitizing music, it is the users themselves, using
cheap personal computers and free software. And it is the users’
willingness to trade copyrighted material, not Napster’s willingness
to facilitate those trades, that the record industry should be worried
about.

For the record companies, the issue is plain and simple — what
Napster is doing is illegal, and Judge Patel has ruled in their
favor. Legality isn’t the whole story, however. The critical thing for
the music executives to realize is that while they have not lost the
right to enforce copyright laws, their ability to do so is waning.

The analogy here is to the 55 miles-per-hour speed limit, which turned
out to be unenforceable because accelerator pedals are standard
operating equipment on a car. Likewise, government cannot control the
fact that computers are capable of making an unlimited number of
perfect copies of a file.

The current legal ruling has not taken a single Rio music player off
the shelves, nor destroyed a single piece of WinAmp music software,
nor deleted a single MP3 file. More to the point, it has not, and will
not, prevent anyone from “ripping” a CD, turning every track into a
separate MP3 file, in less time than it actually takes to listen to
the CD in the first place.

Napster did achieve an ease of use coupled with a distributed source
of files that no one else has been able to touch, so the short-term
advantage here is to the recording industry. The industry now has a
more favorable bargaining position with Napster, and it has a couple
of months to regroup and propose some Napster-like system before
Napster’s users disperse to new services. This new system must give
fans a way to download music without the expense and restrictions of
the current CD format, which requires you to buy either a dozen songs
or none.

If the industry doesn’t meet this demand, and quickly, it may find
that it’s traded the evil it knew for one it doesn’t know. The lesson
that new Internet music companies will take from the Napster case is
that they should either be hosted offshore (where U.S. laws don’t
apply) or they should avoid storing information about music files in a
central location. Either option would be far worse for the record
industry than Napster.

Napster at least provides one central spot for the music industry to
monitor the activity of consumers. Driving Napster completely out of
business could lead to total market fragmentation, which the industry
could never control.

Digital Is Different

It’s very hard to explain to businesses that have for years been able
to charge high margins for distributing intellectual property in a
physical format that the digital world is different, but that doesn’t
make it any less true. If the record labels really want to keep their
customers from going completely AWOL, they will use this ruling to
negotiate a deal with Napster on their own terms.

In all likelihood, though, the record executives will believe what so
many others used to believe: The Internet may have disrupted other
business models, but we are uniquely capable of holding back the
tide. As Rocky the Flying Squirrel put it so eloquently, “That trick
never works.”

Napster and the Death of the Album Format

Napster, the wildly popular software that allows users to trade music over the Internet, could be shut down later this month if the Recording Industry Association of America gets an injunction it is seeking in federal court in California. The big record companies in the association — along with the angry artists who testified before the Senate Judiciary Committee this week — maintain that Napster is
nothing more than a tool for digital piracy.

But Napster and the MP3 technology it exploits have changed the music business no matter how the lawsuit comes out. Despite all the fuss about copyright and legality, the most important freedom Napster has spread across the music world is not freedom from cost, but freedom of choice.

Napster, by linking music lovers and letting them share their collections, lets them select from a boundless range of music, one song at a time. This is a huge change from the way the music industry currently does business, and even if Napster Inc. disappears, it won’t be easy to persuade customers to go back to getting their music as the music industry has long packaged it.

Most albums have only two or three songs that any given listener likes, but the album format forces people to choose between paying for a dozen mediocre songs to get those two or three, or not getting any of the songs at all. This all-or-nothing approach has resulted in music collections that are the barest approximation of a listener’s
actual tastes. Even CD “singles” have been turned into multi-track mini-albums almost as expensive as the real thing, and though there have been some commercial “mix your own CD” experiments in recent years, they foundered because major labels wouldn’t allow access to their collections a song at a time.

Napster has demonstrated that there are no technological barriers to gaining access to the world’s music catalogue, just commercial ones.

Napster users aren’t merely cherry-picking the hits off well-known albums. Listeners are indulging all their contradictory interests, constantly updating their playlists with a little Bach, a little Beck or a little Buckwheat Zydeco, as the mood strikes them. Because it knows nothing of genre, a Napster search produces a cornucopia of
alternate versions: Hank Williams’s “I’m So Lonesome I Could Cry” as interpreted by both Dean Martin and the Cowboy Junkies, or two dozen covers of “Louie Louie.”

Napster has become a tool for musical adventure, producing more diversity by accident than the world music section of the local record store does by design: a simple search for the word “water” brings up Simon and Garfunkel’s “Bridge Over Troubled Water,” Deep Purple’s “Smoke on the Water,” “Cool Water” by the Sons of the Pioneers and “Water No Get Enemy” by Fela Anikulapo Kuti. After experiencing this freedom, music lovers are not going to go happily back to buying albums.

The question remains of how artists will be paid when songs are downloaded over the Internet, and there are many sources of revenue being bandied about—advertising, sponsorship, user subscription, pay-per-song. But merely recreating the CD in cyberspace will not work.

In an echo of Prohibition, Napster users have shown that they are willing to break the law to escape the constraints of all-or-nothing musical choices. This won’t be changed by shutting down Napster. The music industry is going to have to find some way to indulge its customers in their newfound freedom.

Content Shifts to the Edges, Redux

First published on Biz2, 06/00.

It’s not enough that Napster is erasing the distinction between client
and server (discussed in an earlier column); it’s erasing the
distinction between consumer and provider as well. You can see the
threat to the established order in a recent legal action: A San Diego
cable ISP, Cox@Home, ordered customers to stop running Napster not
because they were violating copyright laws, but because Napster allows
Cox subscribers to serve files from their home PCs. Cox has built its
service on the current web architecture, where producers serve content
from always-connected servers at the internet’s center, and
consumers consume from intermittently connected client PCs at the
edges. Napster, on the other hand, inaugurates a model where PCs are
always on and always connected, where content is increasingly stored
and served from the edges of the network, and where the distinction
between client and server is erased. Set aside Napster’s legal woes —
“Cox vs. Napster” isn’t just a legal fight, it’s a fight over the
difference between information consumers and information
providers. The question of the day is “Can Cox (or any media business)
force its users to retain their second-class status as mere consumers
of information?” To judge by Napster’s growth and the rise of
Napster-like services such as Gnutella, Freenet, and Wrapster, the
answer is “No”.

The split between consumers and providers of information has its roots
in the internet’s addressing scheme. A computer can only be located
by its internet protocol (IP) address, like 127.0.0.1, and although
you can attach a more memorable name to those numbers, like
makemoneyfast.com, the domain name is just an alias — the IP address
is the defining feature. By the mid-90s there weren’t enough addresses to go
around, so ISPs started randomly assigning IP addresses whenever a
user dialed in. This means that users never have a fixed IP address,
so while they can consume data stored elsewhere, they can never
provide anything from their own PCs. This division wasn’t part of the
internet’s original architecture, but the proposed fix (the next
generation of IP, called IPv6) has been coming Real Soon Now for a
long time. In the meantime, services like Cox have been built with the
expectation that this consumer/provider split would remain in effect
for the foreseeable future.

How short the foreseeable future sometimes is. Napster short-circuits
the temporary IP problem by turning the domain name system inside out:
with Napster, you register a name for your PC and every time you
connect, it makes your current IP address an alias to that name,
instead of the other way around. This inversion makes it trivially
easy to host content on a home PC, which destroys the asymmetry of
“end users consume but can’t provide”. If your computer is online, it
can be reached, even without a permanent IP address, and any material
you decide to host on your PC can become globally accessible.
Napster-style architecture erases the people-based distinction of
provider and consumer just as surely as it erases the computer-based
distinction between server and client.

There could not be worse news for Cox, since the limitations of cable
ISPs only become apparent if their users actually want to do something
useful with their upstream bandwidth, but the fact that cable
companies are hamstrung by upstream speed (less than a tenth of
downstream speed, in Cox’s case) just makes them the first to face the
eroding value of the media bottleneck. Any media business that relies
on a neat division between information consumer and provider will be
affected. Sites like Geocities or The Globe, which made their money
providing fixed addresses for end user content, may find that users
are perfectly content to use their PCs as that fixed address.
Copyright holders who have assumed up until now that large-scale
serving of material could only take place on a handful of relatively
identifiable and central locations are suddenly going to find that the
net has sprung another million leaks. Meanwhile, the rise of the end
user as info provider will be good news for other businesses. DSL
companies will have a huge advantage in the race to provide fast
upstream bandwidth; Apple may find that the ability to stream home
movies over the net from a PC at home drives adoption of Mac hardware
and software; and of course companies that provide the Napster-style
service of matching dynamic IP addresses with fixed names will have
just the sort of sticky relationship with their users that VCs slaver
over.

Real technological revolutions are human revolutions as well. The
architecture of the internet has effected the largest transfer of
power from organizations to individuals the world has ever seen, and
Napster’s destruction of the serving limitations on end users
demonstrates that this change has not yet run its course. Media
businesses which have assumed that all the power that has been
transferred to the individual for things like stock broking and
airline tickets wouldn’t affect them are going to find that the
millions of passive consumers are being replaced by millions of
one-person media channels. This is not to say that all content is
going to the edges of the net, or that every user is going to be an
enthusiastic media outlet, but when the limitations of “Do I really
want to (or know how to) upload my home video?” go away, the total
amount of user generated and hosted content is going to explode beyond
anything the Geocities model allows. This will have two big effects:
the user’s power as a media outlet of one will be dramatically
increased, creating unexpected new competition with corporate media
outlets; and the spread of hosting means that the lawyers of copyright
holders can no longer go to Geocities to achieve leverage over
individual users — in the age of user-as-media-outlet, lawsuits will
have to be undertaken one user at a time. That old saw about the press
only being free for people who own a printing press is about to take
on a whole new resonance.

Content Shifts to the Edges

First published on Biz2, 04/00.

The message of Napster, the wildly popular mp3 “sharing” software, is
plain: The internet is being turned inside out.

Napster is downloadable software that allows users to trade mp3 files
with one another. It works by constantly updating a master song list,
adding and removing songs as individual users connect and disconnect
their PCs. When someone requests a particular song, the Napster server
then initiates a direct file transfer from the user who has a copy of
the song to the user who wants one. Running against the twin tides of
the death of the PC and the rise of application service providers
(ASPs), Napster instead points the way to a networking architecture
which re-invents the PC as a hybrid client+server while relegating the
center of the internet, where all the action has been recently, to
nothing but brokering connections.
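
The server's role, in other words, is pure matchmaking. A minimal sketch of that brokering function, with invented names and no attempt to reproduce Napster's real protocol:

```python
# Rough sketch of Napster-style brokering: the central index only matches
# searches to peers; the file itself moves directly between the two PCs.
# (Invented names; not Napster's actual protocol.)

class SongIndex:
    def __init__(self):
        self.peers = {}                      # username -> (ip, list of songs)

    def connect(self, username, ip, songs):
        self.peers[username] = (ip, songs)   # songs added to the master list

    def disconnect(self, username):
        self.peers.pop(username, None)       # removed when the PC goes offline

    def search(self, title):
        # Return the addresses of peers holding the song; the download that
        # follows is a direct PC-to-PC transfer, never routed via the server.
        return [(user, ip) for user, (ip, songs) in self.peers.items()
                if title in songs]

index = SongIndex()
index.connect("green_day_fan", "24.17.0.9", ["Basket Case.mp3"])
index.connect("night_owl", "128.59.22.4", ["Basket Case.mp3", "Sheep.mp3"])
print(index.search("Sheep.mp3"))   # -> [('night_owl', '128.59.22.4')]
```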

For software which is still in beta, Napster’s success is difficult to
overstate: at any given moment, Napster servers keep track of
thousands of PCs, holding hundreds of thousands of songs which
comprise terabytes of data. This is a complete violation of the
Web’s current data model — “Content at the center” — and Napster’s
success in violating it points the way to an alternative — “Content
at the edges”. The current content-at-the-center model has one
significant flaw: most internet content is created on the PCs at the
edges, but for it to become universally accessible, it must be pushed
to the center, to always-on, always-up Web servers. As anyone who has
ever spent time trying to upload material to a Web site knows, the Web
has made downloading trivially easy, but uploading is still needlessly
hard. Napster relies on three networking innovations to get around
these limitations:

  • It dispenses with uploading and leaves the files on the PCs, merely
    brokering requests from one PC to another — the mp3 files do not have
    to travel through any central Napster server.
  • PCs running Napster do not need a fixed internet address or a
    permanent connection to use the service.
  • It ignores the reigning Web paradigm of client and server. Napster
    makes no distinction between the two functions: if you can receive
    files from other people, they can receive files from you as well.

Leave aside for the moment the fact that virtually all of the file
transfers brokered by Napster are illegal — piracy is often an
indicator of massive untapped demand. The real import of Napster is
that it is proof-of-concept for a networking architecture which
recognizes that bandwidth to the desktop is becoming fast enough to
allow PCs to act as servers, and that PCs are becoming powerful enough
to fulfill this new role. In other words, just as the ASP space is
taking off, Napster’s success represents the revenge of the PC. By
removing the need to upload data (the single biggest bottleneck to
using the ASP model for everything), the content-at-the-edges model
points the way to a re-invention of the desktop as the center of a
user’s data, only this time the user will no longer need physical
access to the PC itself. The use of the PC as central repository and
server of user content will have profound effects on several internet
developments currently underway:

  • This is the ground on which the Windows2000 vs. Linux battle will
    be fought. As the functions of desktop and server fuse, look for
    Microsoft to aggressively push Web services which rely on content-
    at-the-edges, trying to undermine Linux’s hold on the server market.
    (Ominously for Linux, the Napster Linux client is not seen as a
    priority by Napster themselves.)
  • Free hosting companies like Geocities exist because the present
    system makes it difficult for the average user to host their own web
    content. With PCs increasingly able to act as Web servers, look for
    a Napster-like service which simply points requests to individual
    users’ machines.
  • WAP and other mobile access protocols are currently focussing on
    access to centralized commercial services, but when you are on the
    road the information you are likeliest to need is on your PC, not
    on CNN. An always-on always-accessible PC is going to be the
    ideal source of WAP-enabled information for travelling business
    people.
  • The trend towards centralized personalization services on sites like
    Yahoo will find itself fighting with a trend towards making your PC
    the source of your calendar, phone book, and to do list. The Palm
    Pilot currently syncs with the PC, and it will be easier to turn the
    PC itself into a Web server than to teach the average user how to
    upload a contact database.
  • Stolen mp3s are obvious targets to be served from individual
    machines, but they are by no means the only such content category.
    Everything from wedding pictures to home office documents to amateur
    porn (watch for a content-at-the-edges version of persiankitty) can
    be served from a PC now, and as long as the data does not require
    central management, it will be more efficient to do so.

This is not to say that the desktop will replace all web servers — systems
which require steady backups or contain professionally updated content
will still continue to work best on centrally managed servers.
Nevertheless, Napster’s rise shows us that the versatility of the PC as
a hardware platform will give the millions of desktop machines currently
in use a new lease on life. This in turn means that the ASP revolution
will be not be as swift nor will the death of the PC be as total as the
current press would have us believe. The current content-at-the-center
architecture got us through the 90’s, where PCs too poorly engineered to
be servers and bandwidth was too slow and variable to open a pipe to the
desktop, but with DSL and stable operating systems in the offing, much of
the next 5 years will be shaped by the rise of content-at-the-edges.

Napster and Music Distribution

Napster has joined the pantheon of Netscape, Hotmail, and ICQ as a software-cum-
social movement, and its growth shows no sign of abating any time soon. Needless
to say, anything this successful needs its own lawsuit to make it a full-fledged
Net phenomenon. The Recording Industry Association of America has been only too
happy to oblige, with a suit seeking up to a hundred thousand dollars per copyrighted
song exchanged (an amount that would be on the order of a trillion dollars, based on
Napster usage to date). Unfortunately for the RIAA, the history of music shows that
when technological change comes along, the defenders of the old order are powerless
to stop it.

In the twenties, the American Federation of Musicians launched a protest when The
Jazz Singer inaugurated the talkies and put silent-movie orchestras out of business.
The protest was as vigorous as it was ineffective. Once the talkies created a way to
distribute movie music without needing to hire movie musicians, there was nothing
anyone could do to hold it back, leading the way for new sorts of organizations that
embraced recorded music — organizations like the RIAA. Now that the RIAA is faced
with another innovation in distribution, it shouldn’t be wasting its time arguing that
Napster users are breaking the law. As we’ve seen with the distribution of print on
the Web, efficiency trumps legality, and the RIAA needs to be developing new models that work with electronic distribution rather than against it.

In the early nineties, a service called Clarinet was launched that distributed news-
wire content over the Net, but this distribution came with a catch — users were never
ever supposed to forward the articles they read. The underlying (and fruitless) hope
behind this system was that if everyone could be made to pretend that the Net was no different from paper, then the newspaper’s “pay directly for content” model wouldn’t be challenged on-line. What sidelined this argument — and Clarinet — was that a bunch of competing businesses said, literally, “Publish and be damned,” and the Yahoos and News.coms of the world bypassed Clarinet by developing business models that encouraged copying. But other companies developed new models well after realizing that Clarinet’s approach was wrong, and they still took years to get it right. The idea that people shouldn’t forward articles to one another has collapsed so completely that it’s hard to remember when it was taken seriously. Years of dire warnings that violating the print model of copyright would lead to writers starving in the streets and render the Web a backwater of amateur content have come to naught. The quality of written material available on-line is rising every year.

The lesson for the RIAA here is that old distribution models can fail long before
anyone has any idea what the new models will look like. As with digital text, so now
with music. People have a strong preference for making unlimited perfect copies of the music they want to hear. Napster now makes it feasible to do so in just the way the Web made it possible with text. Right now, no one knows how musicians will be rewarded in the future. But the lack of immediate alternatives doesn’t change the fact that Napster is the death knell for the current music distribution system. The music industry does not need to know how musicians will be rewarded when this new system takes hold to know that musicians will be rewarded somehow. Society can’t exist without artists; it can, however, exist without A&R departments.

The RIAA-Napster suit feels like nothing so much as the fight over the national speed
limit in the seventies and eighties. The people arguing in favor of keeping the 55-MPH limit had almost everything on their side — facts and figures, commonsense concerns about safety and fuel efficiency, even the force of federal law. The only thing they lacked was the willingness of the people to go along. As with the speed limit, Napster shows us a case where millions of people are willing to see the law, understand the law, and violate it anyway on a daily basis. The bad news for the RIAA is not that the law isn’t on their side. It plainly is. The bad news for the RIAA is that in a democracy, when the will of the people and the law diverge too strongly for too long, it is the law that changes. Thus are speed limits raised.

The Fusing of Desktops And Servers

First published on FEED, 1/27/2000.

Windows2000, just beginning to ship, and slated for a high profile launch next
month, will fundamentally alter the nature of Windows’ competition with Linux, its
only real competitor. Up until now, this competition has focused on two separate
spheres: servers and desktops. In the server arena, Linux is largely thought to have
the upper hand over WindowsNT, with a smaller installed base but much faster growth. On the desktop, though, Linux’s success as a server has had as yet little effect, and the ubiquity of Windows remains unchallenged. With the launch of Windows2000, the battle will no longer be fought in two separate arenas, because just as rising chip power destroyed the distinction between PCs and “workstations,” growing connectivity is destroying the distinction between the desktop and the server. All operating systems are moving in this direction, but the first one to catch the average customer’s eye will rock the market.

The fusion of desktop and server, already underway, is turning the internet inside
out. The current network is built on a “content in the center” architecture, where a
core of always-on, always-connected servers provides content on demand to a much larger group of PCs which only connect to the net from time to time (mostly to request content, rarely to provide it). With the rise of faster and more stable PCs, however, the ability for a desktop machine to take on the work of a server increases annually. In addition, the newer networking services like cable modems and DSL offer “always on” connectivity — instead of dialing up, their connection to the internet is (at least theoretically) persistent. Add to these forces an increasing number of PCs in networked offices and dorms, and you have the outlines of a new “content at the edges” architecture. This architecture is exemplified by software like Napster or Hotline, designed for sharing MP3s, images, and other files from one PC to another without the need for a central server. In the Napster model, the content resides on the PCs at the edges of the net, and the center is only used for bit-transport. In this “content at the edges” system, the old separation between desktop and server vanishes, with the PC playing both functions at different times. This is the future, and Microsoft knows it.

In the same way Windows95 had built-in dial-up software, Windows2000 has a built-in Web server. The average user has terrible trouble uploading files, but would like to use the web to share their resumes, recipes, cat pictures, pirated music, amateur porn, and PowerPoint presentations, so Microsoft wants to make running a web server with Windows2000 as easy as establishing a dialup connection was with Windows95. In addition to giving Microsoft potentially huge competitive leverage over Linux, this desktop/server combo will also allow them to better compete with the phenomenally successful Apache web server and give them a foothold for promoting Microsoft Word over HTML as the chosen format for web documents — as long as both sender and receiver are running Windows2000.

The Linux camp’s response to this challenge is unclear. Microsoft has typically
employed an “attack from below” strategy, using incremental improvements to an
initially inferior product to erode a competitor’s advantage. Linux has some defenses
against this strategy — the Open Source methodology gives Linux the edge in incremental improvements, and the fact that Linux is free gives Microsoft no way to win a “price vs. features” comparison — but the central fact remains that as desktop computers become servers as well, Microsoft’s desktop monopoly will give them a huge advantage, if they can provide (or even claim to provide) a simple and painless upgrade. Windows2000 has not been out long, it is not yet being targeted at the home user, and developments on the Linux front are coming thick and fast, but the battle lines are clear: The fusing of the functions of desktop and server represents Microsoft’s best (and perhaps last) chance to prevent Linux from toppling its monopoly.

In Praise Of Freeloaders

First published on O’Reilly’s OpenP2P, 12/01/2000.

As the excitement over P2P grew during the past year, it seemed that decentralized architectures could do no wrong. Napster and its cousins managed to decentralize costs and control, creating applications of seemingly unstoppable power. And then researchers at Xerox brought us P2P’s first crisis: freeloading.

Freeloading is the tendency of people to take resources without paying for them. In the case of P2P systems, this means consuming resources provided by other users without providing an equivalent amount of resources (if any) back to the system. The Xerox study of Gnutella (now available at FirstMonday) found that “… a large proportion of the user population, upwards of 70 percent, enjoy the benefits of the system without contributing to its content,” and labels the problem a “Tragedy of the Digital Commons.”

The Tragedy of the Commons is an economic problem with a long pedigree. As Mojo Nation, a P2P system set up to combat freeloading, states in its FAQ:

Other file-sharing systems are plagued by “the tragedy of the commons,” in which rational folks using a shared resource eat the resources to death. Most often, the “Tragedy of the Commons” refers to farmers and pasture, but technology journalists are writing about users who download and download but never contribute to the system.

To combat this problem, Mojo Nation proposes creating a market for computational resources — disk space, bandwidth, CPU cycles. In its proposed system, if you provide computational resources to the system, you earn Mojo, a kind of digital currency. If you consume computational resources, you spend the Mojo you’ve earned. This system is designed to keep freeloaders from consuming more than they contribute to the system.
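
The accounting behind such a market is simple to sketch. The following is only an illustration of the earn-and-spend idea, not Mojo Nation's actual currency or protocol:

```python
# Illustration of the resource-market idea: contribute resources, earn
# credit; consume resources, spend it. Not Mojo Nation's real system.

class ResourceMarket:
    def __init__(self):
        self.balances = {}                       # user -> credit ("mojo")

    def contribute(self, user, units):
        # Earn credit by serving disk space, bandwidth, or CPU cycles.
        self.balances[user] = self.balances.get(user, 0) + units

    def consume(self, user, units):
        # Spend credit to consume other users' resources.
        if self.balances.get(user, 0) < units:
            raise ValueError(f"{user} has too little credit to freeload")
        self.balances[user] -= units

market = ResourceMarket()
market.contribute("alice", 50)    # serves 50 units of resources
market.consume("alice", 20)       # downloads 20 units' worth
try:
    market.consume("bob", 5)      # a pure freeloader is refused
except ValueError as err:
    print(err)
```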

A very flawed premise

Mojo Nation is still in beta, but it already faces two issues — one fairly trivial, one quite serious. The trivial issue is that the system isn’t working out as planned: Users are not flocking to the system in sufficient numbers to turn it into a self-sustaining marketplace.

The serious issue is that the system will never work for public file-sharing, not even in theory, because the problem of users eating resources to death does not pose a real threat to systems such as Napster, and the solution Mojo Nation proposes would destroy the very things that allow file-sharing systems like Napster to work.

The Xerox study on Gnutella makes broad claims about the relevance of its findings, even as Napster, which adds more users each day than the entire installed base of Gnutella, is growing without suffering from the study’s predicted effects. Indeed, Napster’s genius in building an architecture that understands the inevitability of freeloading and works within those constraints has led Dan Bricklin to christen Napster’s effects “The Cornucopia of the Commons.”

Systems that set out to right the imagined wrongs of freeloading are more marketing efforts than technological ones, in that they attempt to inflame our sense of injustice at the users who download and download but never contribute to the system. This plays well in the press, of course, garnering headlines like “A revolutionary file-sharing system could spell the end for dot-communism and Net leeches” or labeling P2P users “cyberparasites.”

This sense of unfairness, however, obscures two key aspects of P2P file-sharing: the economics of digital resources, which are either replicable or replenishable; and the ways the selfish nature of user participation drives the system.

One from one equals two

Almost without fail, anyone addressing freeloading refers to the aforementioned “Tragedy of the Commons.” This is an economic parable illustrating the threat to commonly held resources. Imagine that in an area of farmland, the entire pasture is owned by a group of farmers who graze their sheep there. In this situation, it is in the farmers’ best interest to maintain herds of moderate size in order to keep the pasture from being overgrazed. However, it is in the best interest of each farmer to increase the size of his herd as much as possible, because the shared pasture is a free resource.

Even worse, although each herdsman will recognize that all of them should forgo increases in the size of their herds if they are acting for the good of the group, each also recognizes that every other farmer has the same incentive to keep growing his herd. In this scenario, it is in each individual’s interest to take as much of the common resources as he can, in part because he benefits himself and in part because if he doesn’t, someone else will, even though doing so produces a bad outcome for the group as a whole.

The Tragedy of the Commons is a simple, compelling illustration of what can happen to commonly owned resources. It is also almost completely inapplicable to the digital world.

Start with the nature of consumption. If your sheep takes a mouthful of grass from the common pasture, the grass exits the common pasture and enters the sheep, a net decrease in commonly accessible resources. If you take a copy of the Pink Floyd song "Sheep" from another Napster user, that song is not deleted from that user’s hard drive. Furthermore, since your copy also exists within the Napster universe, this sort of consumption creates commonly accessible resources, rather than destroying them. The song is replicated; it is not consumed. Thus the Xerox thesis — that a user replicating a file is consuming resources — seems problematic when the original resource is left intact and a new copy is created.

Even if, in the worst scenario, you download the song and never make it available to any other Napster user, there is no net loss of available songs, so in any file-sharing system where even some small percentage of new users makes the files they download subsequently available, the system will grow in resources, which will in turn attract new users, which will in turn create new resources, whether the system has freeloaders or not. In fact, in the Napster architecture, it is the most replicated resources that suffer least from freeloading, because even with a large percentage of freeloaders, popular songs will tend to become more available.
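
A toy simulation makes the point concrete: assuming (arbitrarily) that only 30 percent of downloaders leave their copies shared, the pool of available copies still grows every day:

```python
# Sketch: even with heavy freeloading, copying grows the common pool.
# The ~70% freeloader figure echoes the Xerox study; every other number
# here is an arbitrary illustration, not a model of Napster's real traffic.
copies = 1_000            # shared copies of a popular song today
downloads_per_day = 500
share_back_rate = 0.30    # only 30% of downloaders leave the file shared

for day in range(1, 8):
    new_shared_copies = int(downloads_per_day * share_back_rate)
    copies += new_shared_copies     # downloads never delete the originals
    print(f"day {day}: {copies} copies available")
# The pool only grows: each download leaves the original intact, and every
# re-shared copy is a new commonly accessible resource.
```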

Bandwidth over time is infinite

But what of bandwidth, the other resource consumed by file sharing? Here again, the idea of freeloading misconstrues digital economics. If you saturate a 1 Mb DSL line for 60 seconds while downloading a song, how much bandwidth do you have available in the 61st second? One meg, of course, just like every other second. Again, the Tragedy of the Commons is the wrong comparison, because the notion that freeloading users will somehow eat the available resources to death doesn’t apply. Unlike grass, bandwidth can’t be “used up,” any more than CPU cycles or RAM can.

Like a digital horn of plenty, most of the resources that go into networking computers together are constantly replenished; “Bandwidth over time is infinite,” as the Internet saying goes. By using all the available bandwidth in any given minute, you have not reduced future bandwidth, nor have you saved anything on the cost of that bandwidth when it’s priced at a flat rate.

Bandwidth can’t be conserved over time either. By not using all the available bandwidth in any given minute, you have not saved any bandwidth for the future, because bandwidth is an event, not a conservable resource. Unused bandwidth expires just like unused plane tickets do, and as long as the demand on bandwidth is distributed through the system — something P2P systems excel at — no single node suffers from the Slashdot effect, the tendency of sites to crash under massive load (named after the frequent crashes of small sites that get front-page placement on the news site Slashdot.org).

Given this quality of persistently replenished resources, we would expect users to dislike sharing resources they want to use at that moment, but to be indifferent to sharing resources they make no claim on, such as available CPU cycles or bandwidth when they are away from their desks. Conservation of resources, in other words, should be situational and keyed to user behavior, and it is in misreading user behavior that attempts to discourage freeloading really jump the rails.

Selfish choices, beneficial outcomes

Attempts to prevent freeloading are usually framed in terms of preventing users from behaving selfishly, but selfishness is a key lubricant in P2P systems. In fact, selfishness is what makes the resources used by P2P available in the first place.

Since the writings of Adam Smith, literature detailing the workings of free markets has put the selfishness — or more accurately, the self-interest — of the individual actor at the center of the system, and the situation with P2P networks is no different. Mojo Nation’s central thesis about existing file-sharing systems is that some small number of users in those systems choose, through regard for their fellow man, to make available resources that a larger number of freeloaders then take unfair advantage of. This does not jibe with the experience of millions of present-day users.

Consider an ideal Napster user, with a 10 GB hard drive, a 1 Mb DSL line, and a computer connected to the Net round the clock. Did this user buy her hard drive in order to host MP3s for the community? Obviously not — the size of the drive was selected solely out of self-interest. Does she store MP3s she feels will be of interest to her fellow Napster users? No, she stores only the music she wants to listen to, self-interest again. Bandwidth? Is she shelling out for fast DSL so other users can download files quickly from her? Again, no. Her check goes to the phone company every month so she can have fast download times.

Likewise, decisions she makes about leaving her computer on and connected are self-interested choices. Bandwidth is not metered, and the pennies it costs her to leave her computer on while she is away from her desk, whether to make a pot of coffee or get some sleep, are a small price to pay for not having to sit through a five-minute boot sequence on her return.

Accentuate the positive

Economists call these kinds of valuable side effects “positive externalities.” The canonical example of a positive externality is a shade tree. If you buy a tree large enough to shade your lawn, there is a good chance that for at least part of the day it will shade your neighbor’s lawn as well. This free shade for your neighbor is a positive externality, a benefit to them that costs you nothing more than what you were willing to spend to shade your own lawn anyway.

Napster’s single economic genius is to coordinate such effects. Other than the central database of songs and user addresses, every resource within the Napster network is a positive externality. Furthermore, Napster coordinates these externalities in a way that encourages altruism. The system is resistant to negative effects of freeloading, because as long as Napster users are able to find the songs they want, they will continue to participate in the system, even if the people who download songs from them are not the same people they download songs from.

As long as even a small portion of the users accept this bargain, the system will grow, bringing in more users, who bring in more songs. In such a system, trying to figure out who is freeloading and who is not isn’t worth the effort of the self-interested user.

Real life is asymmetrical

Consider the positive externalities our self-interested user has created. While she sleeps, the Lynyrd Skynyrd and N’Sync songs can fly off her hard drive at no additional cost over what she is willing to pay to have a fast computer and an always-on connection. When she is at her PC, there are a number of ways for her to reassert control of her local resources when she doesn’t want to share them. She can cancel individual uploads unilaterally, disconnect from the Napster server or even shut Napster off completely. Even her advertised connection speed acts as a kind of brake on undesirable external use of resources.

Consider a second user on a 14.4 modem downloading a song from our user with her 1 Mb DSL. At first glance, this seems unfair, since our user seems to be providing more resources. This is, however, the most desirable situation for both users. The 14.4 user is getting files at the fastest rate he can, a speed that takes such a small fraction of our user’s DSL bandwidth that she may not even notice it happening in the background.

Furthermore, reversing the situation to create “fairness” would be a disaster — a transfer from 14.4 to DSL would saturate the 14.4 line and all but paralyze that user’s Internet connection for a file transfer not in that user’s self-interest, while giving the DSL user a less-than-optimum download speed. Asymmetric transfers, far from being unfair, are the ideal scenario — as fast as possible on the downloads, and so slow when other users download from you that you don’t even notice.
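
A rough calculation shows just how lopsided, and how painless, the asymmetry is; the file size below is an assumed stand-in for a typical MP3.

    # Rough arithmetic behind the asymmetric-transfer argument.
    # The file size and line speeds are illustrative assumptions.
    file_size_megabytes = 4.0    # a typical MP3, assumed
    modem_kbps = 14.4            # the downloader's modem
    dsl_kbps = 1000.0            # the uploader's 1 Mb DSL line

    file_size_kilobits = file_size_megabytes * 8 * 1000

    download_minutes = file_size_kilobits / modem_kbps / 60
    dsl_share = modem_kbps / dsl_kbps

    print("Time on the 14.4 modem: ~%.0f minutes" % download_minutes)
    print("Share of the DSL user's bandwidth consumed: %.1f%%" % (dsl_share * 100))

Under these assumed numbers the modem user spends about 37 minutes downloading, while the transfer occupies under 2 percent of the DSL user's line.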

In any system where the necessary resources like disk space and bandwidth are priced at a flat rate, these economics will prevail. The question for Napster and other systems that rely on these economics is whether flat-rate pricing is likely to disappear.

Setting prices

The economic history of telecommunications has returned again and again to one particular question: flat-rate vs. unit pricing. Simple economic theory tells us that unit pricing — a discrete price per hour online, per e-mail sent, or per file downloaded — is the most efficient way to allocate resources. By allowing users to take only those resources they are willing to pay for, per-unit pricing distributes resources most efficiently. Some form of unit pricing is at the center of almost all attempts to prevent freeloading, even if the units are priced in a notional currency such as Mojo.

Flat-rate pricing, meanwhile, is too blunt an instrument to create such efficiencies. In flat-rate systems, light users pay a higher per-unit cost, thus subsidizing the heavy users. Additionally, the flat-rate price for resources has to be high enough to cover the cost of unexpected spikes in usage, meaning that the average user is guaranteed to pay more in a flat-rate system than in a per-unit system.

Flat-rate pricing is therefore unfair to all users: it overcharges light and average users while subsidizing heavy ones. Given the obvious gap in efficient allocation of resources between the two systems, we would expect to see unit pricing ascendant in all situations where the two methods of pricing are in competition. The opposite, of course, is the actual case.
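
A toy comparison makes the cross-subsidy concrete; the prices and usage figures below are invented purely for illustration.

    # Toy illustration of the cross-subsidy built into flat-rate pricing.
    # Prices and usage figures are invented for illustration only.
    flat_rate = 20.00      # assumed monthly flat fee
    unit_price = 0.50      # assumed per-hour charge under unit pricing

    for label, hours in [("light user", 10), ("heavy user", 100)]:
        effective_per_hour = flat_rate / hours
        unit_bill = unit_price * hours
        print("%s: $%.2f/hour effective under flat rate, $%.2f bill under unit pricing"
              % (label, effective_per_hour, unit_bill))

With these made-up numbers the light user pays ten times the heavy user's effective hourly rate, even though unit pricing would have billed the heavy user ten times as much.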

Too cheap to meter

Despite the insistence of economic theoreticians, people all over the world have expressed an overwhelming preference for flat-rate pricing in their telecommunications systems. Prodigy and CompuServe were forced to abandon their per-e-mail prices in the face of competition from systems that allowed unlimited e-mail. AOL was forced to drop its per-hour charges in the face of competition from ISPs that offered unlimited Internet access for a single monthly charge. Today, the music industry is caught in a struggle between those who want to preserve per-song charges and those who understand the inevitability of subscription charges for digital music.

For years, the refusal of users to embrace per-unit pricing for telecommunications was regarded by economists as little more than a perversion, but recently several economic theorists, especially Nick Szabo and Andrew Odlyzko, have worked out why a rational user might prefer flat-rate pricing, and it revolves around the phrase “Too Cheap to Meter,” or, put another way, “Not Worth Worrying About.”

People like to control costs, but they like to control anxiety as well. Prodigy’s per-e-mail charges and AOL’s hourly rates gave users complete control of their costs, but they also created a scenario where the user was always wondering whether the next e-mail or the next hour was worth the price. When offered systems with slightly higher prices but no anxiety, users embraced them so wholeheartedly that Prodigy and AOL were each forced to give in to user preference. Lowered anxiety turned out to be worth paying for.

Anxiety is a kind of mental transaction cost, the cost incurred by having to stop to think about doing something before you do it. Mental transaction costs are what users are minimizing when they demand flat-rate systems. They are willing to spend more money to save themselves from having to make hundreds of individual decisions about e-mail, connect time or files downloaded.

As with Andrew Odlyzko’s notion of “Paris Metro Pricing,” where one price gets you into a particular class of service without requiring you to differentiate between short and long trips, users prefer systems where they pay to get in but are not asked to price resources on a case-by-case basis afterward. This is why micropayment systems for end users have always failed: they overestimate users’ willingness to put a price on individual resources and underestimate the value users place on predictable costs and peace of mind.

The taxman

In the face of this user preference for flat-rate systems, attempts to stem freeloading with market systems are actually reintroducing mental transaction costs, thus destroying the advantages of flat-rate systems. If our hypothetical user is running a distributed computing client like SETI@Home, it is pointless to force her to set a price on her otherwise unused CPU cycles. Any cycles she values she will use, and the program will remain in the background. So long as she has chosen what she wants her spare cycles used for, any cycles she wouldn’t otherwise use for herself aren’t worth worrying about anyway.

Mojo Nation would like to suggest that Mojo is a currency, but it is more like a tax, a markup on an existing resource. Our user chose to run SETI, and since it costs her nothing to donate her unused cycles, any mental transaction costs incurred in pricing those resources raise the cost of the cycles above zero for no reason. Like all tax systems, this creates what economists call “deadweight loss,” the loss that comes from people simply avoiding transactions whose price is pushed too high by the tax itself. By asking users to price something they could give away free without incurring any loss, these systems forfeit the benefits that come from coordinating positive externalities.
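
The deadweight-loss argument reduces to a one-line condition: a user contributes only when the benefit of contributing covers every cost, including the mental cost of having to price the contribution. The sketch below uses arbitrary stand-in values, not data about any real system.

    # Minimal sketch of the deadweight-loss argument; values are arbitrary stand-ins.
    def will_donate_cycles(benefit, resource_cost, mental_transaction_cost):
        # A user contributes only if the benefit covers all the costs.
        return benefit >= resource_cost + mental_transaction_cost

    benefit = 0.01         # assumed tiny satisfaction from helping a project she approves of
    idle_cycle_cost = 0.0  # cycles she was never going to use anyway

    print(will_donate_cycles(benefit, idle_cycle_cost, 0.0))    # True: giving is free, so she gives
    print(will_donate_cycles(benefit, idle_cycle_cost, 0.05))   # False: pricing the gift suppresses it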

Lessons From Napster

Napster’s ability to add more users per week than all other P2P file-sharing systems combined is based in part on the ease of use that comes from its ability to tolerate freeloading. By decentralizing the parts of the system that are already paid for (disk space, bandwidth) while centralizing the parts that individuals would not provide for themselves working individually (the databases of songs and user IDs), Napster has created a system that is far easier to use than most of the purely decentralized file-sharing systems.
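
A minimal sketch of that division of labor might look like the following; the names and data structures are hypothetical, not Napster's actual protocol.

    # Hypothetical sketch of a Napster-style hybrid: a central index of songs
    # and user addresses, with transfers going directly between peers.
    central_index = {}   # song title -> list of peer addresses claiming to host it

    def register(peer_address, shared_songs):
        # Peers tell the central server what they share; only this part is centralized.
        for song in shared_songs:
            central_index.setdefault(song, []).append(peer_address)

    def search(song):
        # Lookups hit only the central database.
        return central_index.get(song, [])

    def download(song):
        # The transfer itself uses a peer's disk and bandwidth, not the server's.
        for peer in search(song):
            print("fetching '%s' directly from %s" % (song, peer))
            return
        print("'%s' not found" % song)

    register("10.0.0.5", ["Free Bird"])
    download("Free Bird")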

This does not mean that Napster is the perfect model for all P2P systems. It is specific to the domain of popular music, and attempts to broaden its appeal to general file-sharing have largely failed. Nor does it mean that there is not some volume of users at which Napster begins to suffer from freeloading; all we know so far is that it can easily handle numbers in the tens of millions.

What Napster does show us is that, given the right architecture, freeloading is not the automatically corrosive problem people believe it to be, and that systems which rely on micropayments or other methods of ensuring evenness between production and consumption are not the ideal alternative.

P2P systems use replicable or replenishable resources at the edges of the Internet, resources that tend to be paid for in lump sums or at rates that are insensitive to usage. Therefore, P2P systems that allow users to share resources they would have paid for anyway, so long as they are either getting something in return or contributing to a project they approve of, will tend to have better growth characteristics than systems that attempt to shut off freeloading altogether. If Napster is any guide, the ability to tolerate, rather than deflect, freeloading will be key to driving the growth of P2P.