Content Shifts to the Edges, Redux

First published on Biz2, 06/00.

It’s not enough that Napster is erasing the distinction between client
and server (discussed in an earlier column); it’s erasing the
distinction between consumer and provider as well. You can see the
threat to the established order in a recent legal action: A San Diego
cable ISP, Cox@Home, ordered customers to stop running Napster not
because they were violating copyright laws, but because Napster allows
Cox subscribers to serve files from their home PCs. Cox has built its
service on the current web architecture, where producers serve content
from always-connected servers at the internet’s center, and
consumers consume from intermittently connected client PCs at the
edges. Napster, on the other hand, inaugurates a model where PCs are
always on and always connected, where content is increasingly stored
and served from the edges of the network, and where the distinction
between client and server is erased. Set aside Napster’s legal woes —
“Cox vs. Napster” isn’t just a legal fight; it’s a fight over the
difference between information consumers and information
providers. The question of the day is “Can Cox (or any media business)
force its users to retain their second-class status as mere consumers
of information?” To judge by Napster’s growth and the rise of
Napster-like services such as Gnutella, Freenet, and Wrapster, the
answer is “No.”

The split between consumers and providers of information has its roots
in the internet’s addressing scheme. A computer can only be located
by its internet protocol (IP) address, like 127.0.0.1, and although
you can attach a more memorable name to those numbers, like
makemoneyfast.com, the domain name is just an alias — the IP address
is the defining feature. By the mid-’90s there weren’t enough IP
addresses to go around, so ISPs started assigning them temporarily whenever a
user dialed in. This means that users never have a fixed IP address,
so while they can consume data stored elsewhere, they can never
provide anything from their own PCs. This division wasn’t part of the
internet’s original architecture, but the proposed fix (the next
generation of IP, called IPv6) has been coming Real Soon Now for a
long time. In the meantime, services like Cox have been built with the
expectation that this consumer/provider split would remain in effect
for the foreseeable future.
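
A two-line sketch in Python makes the aliasing point above concrete (the hostname is purely illustrative); the lookup simply asks the domain name system which address the memorable name currently points at:

    import socket

    # The memorable name is only an alias; the IP address the resolver
    # returns is what actually locates the machine on the network.
    hostname = "example.com"                   # illustrative alias
    address = socket.gethostbyname(hostname)   # the defining feature
    print(hostname, "currently points at", address)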

How short the foreseeable future sometimes is. Napster short-circuits
the temporary IP problem by turning the domain name system inside out:
with Napster, you register a name for your PC and every time you
connect, it makes your current IP address an alias to that name,
instead of the other way around. This inversion makes it trivially
easy to host content on a home PC, which destroys the asymmetry of
“end users consume but can’t provide”. If your computer is online, it
can be reached, even without a permanent IP address, and any material
you decide to host on your PC can become globally accessible.
Napster-style architecture erases the people-based distinction of
provider and consumer just as surely as it erases the computer-based
distinction between server and client.
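
The mechanics of that inversion are simple enough to sketch in a few lines of Python. Everything here is made up for illustration (the nickname, the directory address, and its register endpoint are stand-ins, not any real service’s API), but the shape of the trick is the same: every time the PC connects, it reports whatever address it was handed to a well-known directory, filed under its fixed name.

    import json
    import socket
    import urllib.request

    NICKNAME = "my-home-pc"                                # fixed, user-chosen name
    DIRECTORY = "http://directory.example.net/register"    # hypothetical directory service

    def current_ip() -> str:
        """Find the address the ISP assigned for this session."""
        # Opening a UDP socket toward a routable address (no packets are
        # actually sent) is a common way to learn the local address in use.
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.connect(("198.51.100.1", 80))
            return s.getsockname()[0]

    def register() -> None:
        """Tell the directory which address the fixed name maps to right now."""
        payload = json.dumps({"name": NICKNAME, "ip": current_ip()}).encode()
        request = urllib.request.Request(
            DIRECTORY, data=payload,
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(request)

    if __name__ == "__main__":
        register()   # run on every reconnect and the name stays reachable

Lookups run the other way around: anyone who wants to reach “my-home-pc” asks the same directory for its current address, so the fixed name survives even though the address changes with every session.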

There could not be worse news for Cox, since the limitations of cable
ISPs only become apparent when users actually want to do something
useful with their upstream bandwidth. But the fact that cable
companies are hamstrung by upstream speed (less than a tenth of the
downstream speed, in Cox’s case) just makes them the first to face the
eroding value of the media bottleneck. Any media business that relies
on a neat division between information consumer and provider will be
affected. Sites like Geocities or The Globe, which made their money
providing fixed addresses for end user content, may find that users
are perfectly content to use their PCs as that fixed address.
Copyright holders who have assumed up until now that large-scale
serving of material could only take place on a handful of relatively
identifiable and central locations are suddenly going to find that the
net has sprung another million leaks. Meanwhile, the rise of the end
user as info provider will be good news for other businesses. DSL
companies will have a huge advantage in the race to provide fast
upstream bandwidth; Apple may find that the ability to stream home
movies over the net from a PC at home drives adoption of Mac hardware
and software; and of course companies that provide the Napster-style
service of matching dynamic IP addresses with fixed names will have
just the sort of sticky relationship with their users that VCs slaver
over.

Real technological revolutions are human revolutions as well. The
architecture of the internet has effected the largest transfer of
power from organizations to individuals the world has ever seen, and
Napster’s destruction of the serving limitations on end users
demonstrates that this change has not yet run its course. Media
businesses that have assumed the power already handed to individuals
in areas like stockbroking and airline tickets wouldn’t affect them
are going to find that the
millions of passive consumers are being replaced by millions of
one-person media channels. This is not to say that all content is
going to the edges of the net, or that every user is going to be an
enthusiastic media outlet, but when the limitations of “Do I really
want to (or know how to) upload my home video?” go away, the total
amount of user-generated and user-hosted content is going to explode beyond
anything the Geocities model allows. This will have two big effects:
the user’s power as a media outlet of one will be dramatically
increased, creating unexpected new competition with corporate media
outlets; and the spread of hosting means that the lawyers of copyright
holders can no longer go to Geocities to achieve leverage over
individual users — in the age of user-as-media-outlet, lawsuits will
have to be undertaken one user at a time. That old saw about the press
only being free for people who own a printing press is about to take
on a whole new resonance.

We (Still) Have a Long Way to Go

First published in Biz2, 06/00.

Just when you thought the Internet was a broken link shy of ubiquity, along comes the head of the Library of Congress to remind us how many people still don’t get it.

The Librarian of Congress, James Billington, gave a speech on April 14 to the National Press Club in which he outlined the library’s attitude toward the Net, and toward digitized books in particular. Billington said the library has no plans to digitize the books in its collection. This came as no surprise because governmental digitizing of copyrighted material would open a huge can of worms.

What was surprising were the reasons he gave as to why the library would not be digitizing books: “So far, the Internet seems to be largely amplifying the worst features of television’s preoccupation with sex and violence, semi-illiterate chatter, shortened attention spans, and a near-total subservience to commercial marketing. Where is the virtue in all of this virtual information?” According to the April 15 edition of the Tech Law Journal, in the Q&A section of his address, Billington characterized the desire to have the contents of books in digital form as “arrogance” and “hubris,” and said that books should inspire “a certain presumption of reverence.”

It seems obvious, but it bears repeating: Billington is wrong.

The Internet is the most important thing for scholarship since the printing press, and all information that can be online should be online, because that is the most efficient way to distribute material to the widest possible audience. Billington should probably be asked to resign, based on his contempt for U.S. citizens who don’t happen to live within walking distance of his library. More important, however, is what his views illustrate about how far the Internet revolution still has to go.

The efficiency chain

The mistake Billington is making is sentimentality. He is right in thinking that books are special objects, but he is wrong about why. Books don’t have a sacred essence, they are simply the best interface for text yet invented — lightweight, portable, high-contrast, and cheap. They are far more efficient than the scrolls and oral lore they replaced.

Efficiency is relative, however, and when something even more efficient comes along, it will replace books just as surely as books replaced scrolls. And this is what we’re starting to see: Books are being replaced by digital text wherever books are technologically inferior. Unlike digital text, a book can’t be in two places at once, can’t be searched by keyword, can’t contain dynamic links, and can’t be automatically updated. Encyclopaedia Britannica is no longer published on paper because the kind of information it is dedicated to — short, timely, searchable, and heavily cross-referenced — is infinitely better carried on CD-ROMs or over the Web. Entombing annual snapshots of the Encyclopaedia Britannica database on paper stopped making sense.

Books that enable quick access to short bits of text — dictionaries, thesauruses, phone books — are likely to go the way of Encyclopaedia Britannica over the next few years. Meanwhile, books that still require paper’s combination of low cost, high contrast, and portability — any book destined for the bed, the bath or the beach — will likely be replaced by the growth of print-on-demand services, at least until the arrival of disposable screens.

What is sure is that wherever the Internet arrives, it is the death knell for production in advance of demand, and for expensive warehousing, the current models of the publishing industry and of libraries. This matters for more than just publishers and librarians, however. Text is the Internet’s uber-medium, and with email still the undisputed killer app, and portable devices like the Palm Pilot and cell phones relying heavily or exclusively on text interfaces, text is a leading indicator for other kinds of media. Books are not sacred objects, and neither are radios, VCRs, telephones, or televisions.

Internet as rule

There are two ways to think about the Internet’s effect on existing media. The first is “Internet as exception”: treat the Net as a new entrant in an existing environment and guess at the eventual adoption rate. This method, so sensible for things such as microwaves or CD players, is wrong for the Internet, because it relies on the same sentimentality about the world that the Librarian of Congress does. The Net is not an addition, it is a revolution; the Net is not a new factor in an existing environment, it is itself the new environment.

The right way to think about Internet penetration is “Internet as rule”: simply start with the assumption that the Internet is going to become part of everything — every book, every song, every plane ticket bought, every share of stock sold — and then look for the roadblocks to this vision. This is the attitude that got us where we are today, and this is the attitude that will continue the Net’s advance.

You do not need to force the Internet into new configurations — the Internet’s efficiency provides the necessary force. You only need to remove the roadblocks of technology and attitude. Digital books will become ubiquitous when interfaces for digital text are uniformly better than the publishing products we have today. And as the Librarian of Congress shows us, there are still plenty of institutions that just don’t understand this, and there is still a lot of innovation, and profit, to be achieved by proving them wrong.