It’s Communication, Stupid

To hear the makers of internet-enabled phones tell it, content is going to be king again, because mobile phone subscribers are clamoring for expensive new ways of getting headline news. The feature list for wireless devices reads like a re-hash of every ‘content provider’ press release of the last five years: Travel Updates. Stock quotes. Health tips. And of course all of this great content is supposed to lead to a rise in M-Commerce, a re-hash of E-Commerce. Many wireless analysts have bought this line, and are anointing future winners already, based on their perceived ability to deliver cutting edge content like sports scores (now there’s a brainstorm). The telcos
obviously haven’t asked what their customers want in a wireless device, and when they finally do ask, they are going to be in for a rude shock, because most of their customers aren’t desperate for packaged content, no matter how ‘dynamic’ it is. It seems strange to point this out to the Nokias and Sprints of the world, but the thing
users want to do with a communications device is communicate, and communicate with each other, not with Procter & Gamble or the NBA. Stranger still, the killer wireless app is already out there, and it’s driving the adoption of a wireless device which isn’t just another mobile phone+WAP browser combo. The killer app is email, and the device in question is a pager on steroids called the BlackBerry, manufactured by RIM (Research in Motion).

Building a usable wireless device is complicated, and the BlackBerry gets a lot of things right — it sidesteps the problem of ideal form factor by offering both a pager-sized version and a PDA-sized version; it provides a surprisingly usable thumb-sized keyboard to speed text input; and it offers an always-on connection at flat-rate prices. But the thing that really has gadget-loving CEOs addicted to it is access to the only thing really worth paying for: their own email. No matter what the press releases say, mobile internet access is about staying in touch, and travelling executives have a much greater need to stay in touch with colleagues and family than with CNN or ESPN. RIM has gotten it right where the current vendors of wireless
devices have it wrong, by realizing that email is the core interactive service and everything else is an add-on, not the other way around.

Despite email’s status as the net’s most useful application, it has a long history of being underestimated. In the earliest days of the ARPANET, email was an afterthought, and caught its designers by surprise when it quickly became the most popular service on the nascent net. Fast forward to the early 90’s, when Prodigy set about raising the price of its email services in order to get people to stop wasting time talking to each other so they could start shopping, and got caught by surprise when many users defected to AOL. And just this June eMarketer.com expressed some puzzlement at the results of a PricewaterhouseCoopers survey, which found that teens were going
online primarily to talk to one another via email, not to shop. (Have these people never been to a mall?) The surprise here is that phone companies would make the same mistake, since phones were invented to let people communicate. How could the telcos have spent so many billions of dollars creating wireless services which underplay the communications capabilities of the phone?

There are several answers to that question, but they can all be rolled into one phrase: media envy. Phone companies are trying to create devices which will let them treat people as captive media subscribers, rather than as mere customers. Email is damaging to this attempt in several ways: The email protocol can’t be owned. It is difficult to insert ads without being intrusive. It allows absolute interoperability between customers and non-customers. Worst of all, telcos can’t charge sponsors for access to their user base if those users are more interested in their email than headline news. The phone companies hope to use their ability to charge by the byte or minute to recreate the ‘pay for content’ model which has failed so miserably on the wired net, and they don’t want to run into any Prodigy-style problems of users preferring email to for-fee content on the way, especially as serious email use requires the kind of keyboard and screen that is difficult to fit into a phone. Vendors of mobile phones are committed to text-based content rather than text-based
communication in large part because that is what a phone can easily be made to do.

The Nokias and Sprints of the world made a strategic miscalculation by hyping the current generation of WAP phones as ‘wireless internet’. Users understand that the most important feature of the internet is email, and it is a pipe dream to believe that users will care more about receiving packaged content than news from home. As with the development of the wired internet, communications will lead the growth
of content and commerce in the wireless space, not follow it. The RIM devices are by no means perfect, but unlike WAP phones they create in their users the kind of rapt attention usually reserved for Gameboy addicts, by giving them something of real value. Ignore the wireless analysts who don’t get that wireless devices are primarily
communications tools. Bet against any service that assumes users are eager to pay to find out what the weather is like in Sausalito. Bet on any service that makes wireless email easier to use, because whoever makes email easier will earn their users undying loyalty, and everything else will follow from that.

The Music Industry Will Miss Napster

First published in the Wall Street Journal, July 28, 2000.

On Wednesday, federal Judge Marilyn Hall Patel ordered Napster, a
company that provides software allowing users to swap MP3 music files
over the Internet, to stop facilitating the trading of copyrighted
material by midnight today. Now the argument surrounding digital
downloading of music enters a new phase.

In business terms, this is a crushing blow for Napster. Although
Napster gets to keep its “community” intact, in that it can still
offer its chat function, Napster users will have no reason to use the
Napster chat software for any purpose other than trading information
on alternative sources of free MP3s like Napigator and Gnutella. If
Napster is kept offline for any length of time, it will likely lose so
much market share to these competitors and others that it will not be
able to recover even if the ruling is later overturned.

MP3 Files

For the Recording Industry Association of America, which sued Napster,
Judge Patel’s order is a powerful demonstration that it can cause
enormous difficulties for companies that try to shut the major record
labels out of the Internet distribution game. But the recording
industry association should not be sanguine simply because the total
number of MP3 files being moved around the Internet will plummet, at
least temporarily.

There are still well over 10 million users out there who have become
used to downloading music over the Internet. Napster or no Napster,
someone is going to service this enormous and still growing
appetite. The recording industry cannot expect that one court ruling,
or even many court rulings, will turn back the tide of technology.

Closing Napster as an outlet will have no effect on the underlying
ability of average users to create and listen to MP3 files. It is not
Napster that is digitizing music, it is the users themselves, using
cheap personal computers and free software. And it is the users’
willingness to trade copyrighted material, not Napster’s willingness
to facilitate those trades, that the record industry should be worried
about.

For the record companies, the issue is plain and simple — what
Napster is doing is illegal, and Judge Patel has ruled in their
favor. Legality isn’t the whole story, however. The critical thing for
the music executives to realize is that while they have not lost the
right to enforce copyright laws, their ability to do so is waning.

The analogy here is to the 55 miles-per-hour speed limit, which turned
out to be unenforceable because accelerator pedals are standard
operating equipment on a car. Likewise, government cannot control the
fact that computers are capable of making an unlimited number of
perfect copies of a file.

The current legal ruling has not taken a single Rio music player off
the shelves, nor destroyed a single piece of WinAmp music software,
nor deleted a single MP3 file. More to the point, it has not, and will
not, prevent anyone from “ripping” a CD, turning every track into a
separate MP3 file, in less time than it actually takes to listen to
the CD in the first place.

Napster did achieve an ease of use coupled with a distributed source
of files that no one else has been able to touch, so the short-term
advantage here is to the recording industry. The industry now has a
more favorable bargaining position with Napster, and it has a couple
of months to regroup and propose some Napster-like system before
Napster’s users disperse to new services. This new system must give
fans a way to download music without the expense and restrictions of
the current CD format, which requires you to buy either a dozen songs
or none.

If the industry doesn’t meet this demand, and quickly, it may find
that it’s traded the evil it knew for one it doesn’t know. The lesson
that new Internet music companies will take from the Napster case is
that they should either be hosted offshore (where U.S. laws don’t
apply) or they should avoid storing information about music files in a
central location. Either option would be far worse for the record
industry than Napster.

Napster at least provides one central spot for the music industry to
monitor the activity of consumers. Driving Napster completely out of
business could lead to total market fragmentation, which the industry
could never control.

Digital Is Different

It’s very hard to explain to businesses that have for years been able
to charge high margins for distributing intellectual property in a
physical format that the digital world is different, but that doesn’t
make it any less true. If the record labels really want to keep their
customers from going completely AWOL, they will use this ruling to
negotiate a deal with Napster on their own terms.

In all likelihood, though, the record executives will believe what so
many others used to believe: The Internet may have disrupted other
business models, but we are uniquely capable of holding back the
tide. As Rocky the Flying Squirrel put it so eloquently, “That trick
never works.”

Napster and the Death of the Album Format

Napster, the wildly popular software that allows users to trade music over the Internet, could be shut down later this month if the Recording Industry Association of America gets an injunction it is seeking in federal court in California. The big record companies in the association — along with the angry artists who testified before the Senate Judiciary Committee this week — maintain that Napster is
nothing more than a tool for digital piracy.

But Napster and the MP3 technology it exploits have changed the music business no matter how the lawsuit comes out. Despite all the fuss about copyright and legality, the most important freedom Napster has spread across the music world is not freedom from cost, but freedom of choice.

Napster, by linking music lovers and letting them share their collections, lets them select from a boundless range of music, one song at a time. This is a huge change from the way the music industry currently does business, and even if Napster Inc. disappears, it won’t be easy to persuade customers to go back to getting their music as the music industry has long packaged it.

Most albums have only two or three songs that any given listener likes, but the album format forces people to choose between paying for a dozen mediocre songs to get those two or three, or not getting any of the songs at all. This all-or-nothing approach has resulted in music collections that are the barest approximation of a listener’s
actual tastes. Even CD “singles” have been turned into multi-track mini-albums almost as expensive as the real thing, and though there have been some commercial “mix your own CD” experiments in recent years, they foundered because major labels wouldn’t allow access to their collections a song at a time.

Napster has demonstrated that there are no technological barriers to gaining access to the world’s music catalogue, just commercial ones.

Napster users aren’t merely cherry-picking the hits off well-known albums. Listeners are indulging all their contradictory interests, constantly updating their playlists with a little Bach, a little Beck or a little Buckwheat Zydeco, as the mood strikes them. Because it knows nothing of genre, a Napster search produces a cornucopia of
alternate versions: Hank Williams’s “I’m So Lonesome I Could Cry” as interpreted by both Dean Martin and the Cowboy Junkies, or two dozen covers of “Louie Louie.”

Napster has become a tool for musical adventure, producing more diversity by accident than the world music section of the local record store does by design: a simple search for the word “water” brings up Simon and Garfunkel’s “Bridge Over Troubled Water,” Deep Purple’s “Smoke on the Water,” “Cool Water” by the Sons of the Pioneers and “Water No Get Enemy” by Fela Anikulapo Kuti. After experiencing this freedom, music lovers are not going to go happily back to buying albums.

The question remains of how artists will be paid when songs are downloaded over the Internet, and there are many sources of revenue being bandied about—advertising, sponsorship, user subscription, pay-per-song. But merely recreating the CD in cyberspace will not work.

In an echo of Prohibition, Napster users have shown that they are willing to break the law to escape the constraints of all-or-nothing musical choices. This won’t be changed by shutting down Napster. The music industry is going to have to find some way to indulge its customers in their newfound freedom.

The Toughest Virus of All

First published on Biz2, 07/00.

“Viral marketing” is back, making its return as one of the gotta-have-it phrases for dot-com business plans currently making the rounds. The phrase was coined (by Steve Jurvetson and Tim Draper in “Turning Customers into a Sales Force,” Nov. ’98, p103) to describe the astonishing success of Hotmail, which grew to 12 million subscribers 18 months after launch.

The viral marketing meme has always been hot, but now its expansion is being undertaken by a raft of emarketing sites promising to elucidate “The Six Simple Principles for Viral Marketing” or offering instructions on “How to Use Viral Marketing to Drive Traffic and Sales for Free!” As with anything that promises miracle results, there is a catch. Viral marketing can work, but it requires two things often in short supply in the marketing world: honesty and execution.

It’s all about control

It’s easy to see why businesses would want to embrace viral marketing. Not only is it supposed to create those stellar growth rates, but it can also reduce the marketing budget to approximately zero. Against this too-good-to-be-true backdrop, though, is the reality: Viral marketing only works when the user is in control and actually endorses the viral message, rather than merely acting as a carrier.

Consider Hotmail: It gives its subscribers a useful service, Web-based email, and then attaches an ad for Hotmail at the bottom of each sent message. Hotmail gains the credibility needed for successful viral marketing by putting its users in control, because when users recommend something without being tricked or co-opted, it provides the message with a kind of credibility that cannot be bought. Viral marketing is McLuhan marketing: The medium validates the message.

Viral marketing is also based on the perception of honesty: If the recipient of the ad fails to believe the sender is providing an honest endorsement, the viral effect disappears. An ad tacked on to a message without the endorsement of the author loses credibility; it’s no different from a banner ad. This element of trust becomes even more critical when firms begin to employ explicit viral marketing, where users go beyond merely endorsing ads to actually generating them.

These services — PayPal.com or Love Monkey, for example — rely on users to market the service because the value of the service grows with new recruits. If I want to pay you through PayPal, you must be a PayPal user as well (unlike Hotmail, where you just need a valid address to receive mail from me). With PayPal, I benefit if you join, and the value of the network grows for both of us and for all present and future users as well.

Love Monkey, a college matchmaking service, works similarly. Students at a particular college enter lists of fellow students they have crushes on, and those people are sent anonymous email asking them to join Love Monkey and enter their own list of crushes. It then notifies any two people whose lists include each other. Love Monkey must earn users’ trust before any viral effect can take place because Metcalfe’s Law only works when people are willing to interact. Passive networks such as cable or satellite television provide no benefits to existing users when new users join.
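The matching step at the heart of Love Monkey is easy to picture in code. The sketch below is purely illustrative, with invented names and no real service behind it; it only shows the core idea of notifying any two people whose lists include each other.

    # Illustrative sketch only: not Love Monkey's actual code.
    # Each user submits a list of people they have a crush on;
    # a match exists when two users name each other.

    crushes = {
        "alice": {"bob", "dana"},
        "bob": {"alice"},
        "carol": {"bob"},
        "dana": {"alice"},
    }

    def mutual_matches(crushes):
        """Return every pair of users who appear on each other's lists."""
        matches = set()
        for person, liked in crushes.items():
            for other in liked:
                if person in crushes.get(other, set()):
                    matches.add(frozenset((person, other)))
        return matches

    for pair in mutual_matches(crushes):
        a, b = sorted(pair)
        print(f"Notify {a} and {b}: the interest is mutual.")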

Persistent infections

Continuing the biological metaphor, viral marketing does not create a one-time infection, but a persistent one. The only thing that keeps Love Monkey users from being “infected” by another free matchmaking service is their continued use of Love Monkey. Viral marketing, far from eliminating the need to deliver on promises, makes businesses more dependent on the goodwill of their users. Any company that incorporates viral marketing techniques must provide quality services–ones that users are continually willing to vouch for, whether implicitly or explicitly.

People generally conspire to misunderstand what they should fear. The people rushing to embrace viral marketing misunderstand how difficult it is to make it work well. You can’t buy it, you can’t fake it, and you can’t pay your users to do it for you without watering down your message. Worse still, anything that is going to benefit from viral marketing must be genuinely useful, well designed, and flawlessly executed, so consumers repeatedly choose to use the service.

Sadly, the phrase “viral marketing” seems to be going the way of “robust” and “scalable” — formerly useful concepts which have been flattened by overuse. A year from now, viral marketing will simply mean word of mouth. However, the concept described by the phrase — a way of acquiring new customers by encouraging honest communication — will continue to be available, but only to businesses that are prepared to offer ongoing value.

Viral marketing is not going to save mediocre businesses from extinction. It is the scourge of the stupid and the slow, because it only rewards companies that offer great service and have the strength to allow and even encourage their customers to publicly pass judgment on that service every single day.

Content Shifts to the Edges, Redux

First published on Biz2, 06/00.

It’s not enough that Napster is erasing the distinction between client
and server (discussed in an earlier column); it’s erasing the
distinction between consumer and provider as well. You can see the
threat to the established order in a recent legal action: A San Diego
cable ISP, Cox@Home, ordered customers to stop running Napster not
because they were violating copyright laws, but because Napster allows
Cox subscribers to serve files from their home PCs. Cox has built its
service on the current web architecture, where producers serve content
from always-connected servers at the internet’s center, and
consumers consume from intermittently connected client PCs at the
edges. Napster, on the other hand, inaugurates a model where PCs are
always on and always connected, where content is increasingly stored
and served from the edges of the network, and where the distinction
between client and server is erased. Set aside Napster’s legal woes —
“Cox vs. Napster” isn’t just a legal fight, it’s a fight over the
difference between information consumers and information
providers. The question of the day is “Can Cox (or any media business)
force its users to retain their second-class status as mere consumers
of information?” To judge by Napster’s growth and the rise of
Napster-like services such as Gnutella, Freenet, and Wrapster, the
answer is “No”.

The split between consumers and providers of information has its roots
in the internet’s addressing scheme. A computer can only be located
by its internet protocol (IP) address, like 127.0.0.1, and although
you can attach a more memorable name to those numbers, like
makemoneyfast.com, the domain name is just an alias — the IP address
is the defining feature. By the mid-90’s there weren’t enough addresses
to go around, so ISPs started dynamically assigning IP addresses whenever a
user dialed in. This means that users never have a fixed IP address,
so while they can consume data stored elsewhere, they can never
provide anything from their own PCs. This division wasn’t part of the
internet’s original architecture, but the proposed fix (the next
generation of IP, called IPv6) has been coming Real Soon Now for a
long time. In the meantime, services like Cox have been built with the
expectation that this consumer/provider split would remain in effect
for the foreseeable future.

How short the foreseeable future sometimes is. Napster short-circuits
the temporary IP problem by turning the domain name system inside out:
with Napster, you register a name for your PC and every time you
connect, it makes your current IP address an alias to that name,
instead of the other way around. This inversion makes it trivially
easy to host content on a home PC, which destroys the asymmetry of
“end users consume but can’t provide”. If your computer is online, it
can be reached, even without a permanent IP address, and any material
you decide to host on your PC can become globally accessible.
Napster-style architecture erases the people-based distinction of
provider and consumer just as surely as it erases the computer-based
distinction between server and client.
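Stripped to its essentials, the inversion described above is just a lookup table that gets refreshed every time a machine comes online. The following toy sketch is not Napster's protocol, only the core bookkeeping: a registry maps a user-chosen name to whatever IP address that machine happens to hold at the moment.

    # Toy sketch of the "name first, address second" inversion described above.
    # Not Napster's actual protocol; just the bookkeeping it implies.

    registry = {}  # handle -> current IP address (or None when offline)

    def connect(handle, current_ip):
        """Called each time a PC comes online with whatever address its ISP assigned."""
        registry[handle] = current_ip

    def disconnect(handle):
        registry[handle] = None

    def locate(handle):
        """Peers ask the registry where a named PC is right now."""
        return registry.get(handle)

    # The same machine shows up at a different address every session,
    # but it stays reachable under the same name.
    connect("claysPC", "24.91.0.17")
    print(locate("claysPC"))   # 24.91.0.17
    disconnect("claysPC")
    connect("claysPC", "24.91.8.203")
    print(locate("claysPC"))   # 24.91.8.203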

There could not be worse news for Cox, since the limitations of cable
ISPs only become apparent if their users actually want to do something
useful with their upstream bandwidth, but the fact that cable
companies are hamstrung by upstream speed (less than a tenth of
downstream speed, in Cox’s case) just makes them the first to face the
eroding value of the media bottleneck. Any media business that relies
on a neat division between information consumer and provider will be
affected. Sites like Geocities or The Globe, which made their money
providing fixed addresses for end user content, may find that users
are perfectly content to use their PCs as that fixed address.
Copyright holders who have assumed up until now that large-scale
serving of material could only take place on a handful of relatively
identifiable and central locations are suddenly going to find that the
net has sprung another million leaks. Meanwhile, the rise of the end
user as info provider will be good news for other businesses. DSL
companies will have a huge advantage in the race to provide fast
upstream bandwidth; Apple may find that the ability to stream home
movies over the net from a PC at home drives adoption of Mac hardware
and software; and of course companies that provide the Napster-style
service of matching dynamic IP addresses with fixed names will have
just the sort of sticky relationship with their users that VCs slaver
over.

Real technological revolutions are human revolutions as well. The
architecture of the internet has effected the largest transfer of
power from organizations to individuals the world has ever seen, and
Napster’s destruction of the serving limitations on end users
demonstrates that this change has not yet run its course. Media
businesses which have assumed that all the power that has been
transferred to the individual for things like stock broking and
airline tickets wouldn’t affect them are going to find that the
millions of passive consumers are being replaced by millions of
one-person media channels. This is not to say that all content is
going to the edges of the net, or that every user is going to be an
enthusiastic media outlet, but when the limitations of “Do I really
want to (or know how to) upload my home video?” go away, the total
amount of user generated and hosted content is going to explode beyond
anything the Geocities model allows. This will have two big effects:
the user’s power as a media outlet of one will be dramatically
increased, creating unexpected new competition with corporate media
outlets; and the spread of hosting means that the lawyers of copyright
holders can no longer go to Geocities to achieve leverage over
individual users — in the age of user-as-media-outlet, lawsuits will
have to be undertaken one user at a time. That old saw about the press
only being free for people who own a printing press is about to take
on a whole new resonance.

We (Still) Have a Long Way to Go

First published in Biz2, 06/00.

Just when you thought the Internet was a broken link shy of ubiquity, along comes the head of the Library of Congress to remind us how many people still don’t get it.

The Librarian of Congress, James Billington, gave a speech on April 14 to the National Press Club in which he outlined the library’s attitude toward the Net, and toward digitized books in particular. Billington said the library has no plans to digitize the books in its collection. This came as no surprise because governmental digitizing of copyrighted material would open a huge can of worms.

What was surprising were the reasons he gave as to why the library would not be digitizing books: “So far, the Internet seems to be largely amplifying the worst features of television’s preoccupation with sex and violence, semi-illiterate chatter, shortened attention spans, and a near-total subservience to commercial marketing. Where is the virtue in all of this virtual information?” According to the April 15 edition of the Tech Law Journal, in the Q&A section of his address, Billington characterized the desire to have the contents of books in digital form as “arrogance” and “hubris,” and said that books should inspire “a certain presumption of reverence.”

It seems obvious, but it bears repeating: Billington is wrong.

The Internet is the most important thing for scholarship since the printing press, and all information which can be online should be online, because that is the most efficient way to distribute material to the widest possible audience. Billington should probably be asked to resign, based on his contempt for U.S. citizens who don’t happen to live within walking distance of his library. More important, however, is what his views illustrate about how far the Internet revolution still has to go.

The efficiency chain

The mistake Billington is making is sentimentality. He is right in thinking that books are special objects, but he is wrong about why. Books don’t have a sacred essence; they are simply the best interface for text yet invented — lightweight, portable, high-contrast, and cheap. They are far more efficient than the scrolls and oral lore they replaced.

Efficiency is relative, however, and when something even more efficient comes along, it will replace books just as surely as books replaced scrolls. And this is what we’re starting to see: Books are being replaced by digital text wherever books are technologically inferior. Unlike digital text, a book can’t be in two places at once, can’t be searched by keyword, can’t contain dynamic links, and can’t be automatically updated. Encyclopaedia Britannica is no longer published on paper because the kind of information it is dedicated to — short, timely, searchable, and heavily cross-referenced — is infinitely better carried on CD-ROMs or over the Web. Entombing annual snapshots of the Encyclopaedia Britannica database on paper stopped making sense.

Books which enable quick access to short bits of text — dictionaries, thesauruses, phone books — are likely to go the way of Encyclopaedia Britannica over the next few years. Meanwhile, books that still require paper’s combination of low cost, high contrast, and portability — any book destined for the bed, the bath or the beach — will likely shift to print-on-demand services, at least until the arrival of disposable screens.

What is sure is that wherever the Internet arrives, it is the death knell for production in advance of demand, and for expensive warehousing, the current models of the publishing industry and of libraries. This matters for more than just publishers and librarians, however. Text is the Internet’s uber-medium, and with email still the undisputed killer app, and portable devices like the Palm Pilot and cell phones relying heavily or exclusively on text interfaces, text is a leading indicator for other kinds of media. Books are not sacred objects, and neither are radios, VCRs, telephones, or televisions.

Internet as rule

There are two ways to think about the Internet’s effect on existing media. The first is “Internet as exception”: treat the Net as a new entrant in an existing environment and guess at the eventual adoption rate. This method, so sensible for things such as microwaves or CD players, is wrong for the Internet, because it relies on the same
sentimentality about the world that the Librarian of Congress does. The Net is not an addition, it is a revolution; the Net is not a new factor in an existing environment, it is itself the new environment.

The right way to think about Internet penetration is “Internet as rule”: simply start with the assumption that the Internet is going to become part of everything — every book, every song, every plane ticket bought, every share of stock sold — and then look for the roadblocks to this vision. This is the attitude that got us where we are today, and this is the attitude that will continue the Net’s advance.

You do not need to force the Internet into new configurations — the Internet’s efficiency provides the necessary force. You only need to remove the roadblocks of technology and attitude. Digital books will become ubiquitous when interfaces for digital text are uniformly better than the publishing products we have today. And as the Librarian of Congress shows us, there are still plenty of institutions that just don’t understand this, and there is still a lot of innovation, and profit, to be achieved by proving them wrong.

Open Source and Quake

First published in FEED, 1/6/2000.

The Open Source movement got a Christmas present at the end of 1999, some assembly required. John Carmack of id Software, arguably the greatest games programmer ever, released the source code for Quake, the wildly popular shoot-em-up. Quake, already several years old, has maintained popularity because it allows players to battle one another over the internet, with hundreds of servers hosting round-the-clock battles. It jibes with the Open Source ethos because it has already benefitted enormously from player-created ‘mods’ (or modifications) to the game’s surface appearance. By opening up the source code to the game’s legion of fanatical players, id is hoping to spur a new round of innovation by allowing anyone interested in creating these modifications to alter any aspect of the program, not just its surface. Within minutes of id’s announcement, gamers and hackers the world over were downloading the source code. Within hours they were compiling new versions of the game. Within days people began using their new knowledge of the game’s inner workings to cheat. A problem new to the Open Source movement began to surface: what to do when access to the source code opens it up to abuse.

Quake works as a multi-player game where each player has a version of Quake running on his or her own PC, and it is this local copy of the game that reports on the player’s behavior — running, shooting, hiding — to a central Quake server. This server then collates all the players’ behaviors and works out who’s killed whom. With access to the Quake source code, a tech-savvy player can put themselves on electronic steroids by altering their local version of the game to give themselves superhuman speed, accuracy, or force, simply by over-reporting their skill to the server. This would be like playing tennis against someone with an invisible racket a yard wide.
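A schematic example makes the trust problem concrete. The code below is not id's, and real Quake servers are far more elaborate; it only illustrates why a server that believes whatever a local client reports is open to electronic steroids, and why the obvious countermeasure is to validate each report against the rules of the game on the server side.

    # Schematic only: shows why trusting client-reported state invites cheating.
    MAX_SPEED = 320  # the server's rule for how far a player may move per tick

    class Server:
        def __init__(self, validate=False):
            self.positions = {}      # player -> position along one axis
            self.validate = validate

        def report_move(self, player, claimed_distance):
            """Each local client reports its own movement to the server."""
            if self.validate:
                # Server-side check: clamp any claim that breaks the game's rules.
                claimed_distance = min(claimed_distance, MAX_SPEED)
            self.positions[player] = self.positions.get(player, 0) + claimed_distance

    trusting = Server(validate=False)
    trusting.report_move("honest", 300)    # a legal move
    trusting.report_move("cheater", 5000)  # a doctored client over-reports
    print(trusting.positions)              # the cheater teleports across the map

    careful = Server(validate=True)
    careful.report_move("honest", 300)
    careful.report_move("cheater", 5000)
    print(careful.positions)               # the cheat is clamped to a legal move

The catch, and the reason the cheating problem is hard, is that server-side validation can only enforce rules the server is able to check; claims like aim and reaction time are much harder to clamp than raw speed.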

All of this matters much more than you would expect a game to matter. With Open Source now associated with truth, justice, and the Internet Way, and with Carmack revered as a genius and a hero, the idea that the combination of these two things could breed anything so mundane as cheating caught people by surprise. One school of thought has been simply to deny that there is a problem by noting that if Quake had been Open Source to begin with, this situation would never have arisen. This is true, as far as it goes, but a theory which doesn’t cover real world cases isn’t much use. id’s attempt to open the source for some of its products while keeping others closed is exactly the strategy players like Apple, IBM, and Sun are all testing out, and if the release of Quake fails to generate innovation, mere ideological purity will be cold comfort.

As so often in the digital world, what happens to the gaming industry has ramifications for the computing industry as a whole. Players in a game are simultaneously competing and co-operating, and all agree to abide by rules that sort winners from losers, a process with ramifications for online economics, education, even auctions. If Quake, with its enormous audience of tech-savvy players and its history of benefitting from user modifications, can’t make the transition from closed source to open source easily, then companies with a less loyal user base might think twice about opening their products, so id’s example is going to be watched very closely. The Quake release marks a watershed — if the people currently hard at work on the Quake cheating problem find a solution, it will be another Open Source triumph, but if they fail, the Quake release might be remembered as the moment that cheating robbed the Open Source movement of its aura of continuous progress.

RIP THE CONSUMER, 1900-1999

“The Consumer” is the internet’s most recent casualty. We have often
heard that the Internet puts power in the hands of the consumer, but this
is nonsense — ‘powerful consumer’ is an oxymoron. The historic role
of the consumer has been nothing more than a giant maw at the end of
the mass media’s long conveyor belt, the all-absorbing Yin to mass
media’s all-producing Yang. Mass media’s role has been to package
consumers and sell their attention to the advertisers, in bulk. The
consumers’ appointed role in this system gives them no way to
communicate anything about themselves except their preference between
Coke and Pepsi, Bounty and Brawny, Trix and Chex. They have no way to
respond to the things they see on television or hear on the radio, and
they have no access to any media on their own — media is something
that is done to them, and consuming is how they register their
response. In changing the relations between media and individuals,
the Internet does not herald the rise of a powerful consumer. The
Internet heralds the disappearance of the consumer altogether, because
the Internet destroys the noisy advertiser/silent consumer
relationship that the mass media relies upon. The rise of the internet
undermines the existence of the consumer because it undermines the
role of mass media. In the age of the internet, no one is a passive
consumer anymore because everyone is a media outlet.

To profit from its symbiotic relationship with advertisers, the mass
media required two things from its consumers – size and silence. Size
allowed the media to address groups while ignoring the individual — a
single viewer makes up less than 1% of 1% of 1% of Frasier’s
10-million-strong audience. In this system, the individual matters not
at all: the standard unit for measuring television audiences is a
million households at a time. Silence, meanwhile, allowed the media’s
message to pass unchallenged by the viewers themselves. Marketers
could broadcast synthetic consumer reaction — “Tastes Great!”, “Less
filling!” — without having to respond to real customers’ real
reactions — “Tastes bland”, “More expensive”. The enforced silence
leaves the consumer with only binary choices — “I will or won’t watch
I Dream of Jeannie, I will or won’t buy Lemon Fresh Pledge” and so
on. Silence has kept the consumer from injecting any complex or
demanding interests into the equation because mass media is one-way media.

This combination of size and silence has meant that mass media, where
producers could address 10 million people at once with no fear of
crosstalk, has been a very profitable business to be in.

Unfortunately for the mass media, however, the last decade of the 20th
century was hell on both the size and silence of the consumer
audience. As AOL’s takeover of Time Warner demonstrated, while
everyone in the traditional media was waiting for the Web to become
like traditional media, traditional media has become vastly more like
the Web. TV’s worst characteristics — its blandness, its cultural
homogeneity, its appeal to the lowest common denominator — weren’t an
inevitable part of the medium, they were simply byproducts of a
restricted number of channels, leaving every channel to fight for the
average viewer with their average tastes. The proliferation of TV
channels has eroded the audience for any given show — the average
program now commands a fraction of the audience it did 10 years ago,
forcing TV stations to find and defend audience niches which will be
attractive to advertisers.

Accompanying this reduction in size is a growing response from
formerly passive consumers. Marketing lore says that if a customer has
a bad experience, they will tell 9 other people, but that figure badly
needs updating. Armed with nothing more than an email address, a
disgruntled customer who vents to a mailing list can reach hundreds of
people at once; the same person can reach thousands on iVillage or
Deja; a post on Slashdot or a review on Amazon can reach tens of
thousands. Furthermore, the Internet never forgets — a complaint
made on the phone is gone forever, but a complaint made on the Web is
there forever. With mass media outlets shrinking and the reach of the
individual growing, the one-sided relationship between media and
consumer is over, and it is being replaced with something a lot less
conducive to unquestioning acceptance.

In retrospect, mass media’s position in the 20th century was an
anomaly and not an inevitability. There have always been both one-way
and two-way media — pamphlets vs. letters, stock tickers vs.
telegraphs — but in the 20th century the TV so outstripped the town
square that we came to assume that ‘large audience’ necessarily meant
‘passive audience’, even though size and passivity are unrelated.
With the Internet, we have the world’s first large, active medium, but
when it got here no one was ready for it, least of all the people who
have learned to rely on the consumer’s quiescent attention while the
Lucky Strike boxes tapdance across the screen. Frasier’s advertisers
no longer reach 10 million consumers, they reach 10 million other
media outlets, each of whom has the power to amplify or contradict the
advertiser’s message in something frighteningly close to real time. In
place of the giant maw are millions of mouths who can all talk
back. There are no more consumers, because in a world where an email
address constitutes a media channel, we are all producers now.

WAP’s Closed Door Approach

First published in Biz2, 05/00.

Thanks to the wireless application protocol (WAP), the telephone and the PC are going to collide this year, and it’s not going to be pretty. The problem with “wireless everywhere” is that the PC and the phone can’t fuse into the tidy little converged info-appliance that pundits have been predicting for years, because while it’s easy to
combine the hardware of the phone and the PC, it’s impossible to combine their philosophies.

The phone-based assumptions about innovation, freedom, and commercial control are so different from those of the PC that the upcoming battle between the two devices will be nothing less than a battle over the relationship of the Internet to its users.

The philosophy behind the PC is simple: Put as much control in the hands of the user as you possibly can. PC users can install any software they like; they can connect their PCs to any network; they can connect any peripherals; they can even replace the operating system. And they don’t need anyone’s permission to do any of these
things. The phone has an equally simple underlying philosophy: Take as much control from the user as possible while still producing a useable device. Phones allow so little user control that users don’t even think of their phones as having operating systems, much less software they can upgrade or replace themselves. The phone, in other words, is built around principles of restriction and corporate control of the user interface that are anathema to the Internet as it has developed so far.

WAP extends this idea of control into the network itself, by purporting to offer Internet access while redesigning almost every protocol needed to move data across the wireless part of the network. WAP does not offer direct access to the Internet, but instead links the phone to a WAP gateway which brokers connections between the phone and the rest of the Net. The data that passes between the phone and this WAP gateway is translated from standard Internet protocols to a kind of parallel “W” universe, where HTML becomes WML, TCP becomes WTP, and so on. The implication is that the W world is simply wireless Internet, but in fact the WAP Forum has not only renamed but redesigned these protocols. WML, for example, is not in fact a markup
language but a programming language, and therefore much more difficult for the average content creator to use. Likewise, WAP designers choose to ignore the lesson of HTML, which is so adaptable precisely because it was never designed for any particular interface.
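The structural point is easier to see in miniature. The sketch below is not WAP code and ignores the actual W-protocols entirely; it is a generic stand-in showing what any gateway architecture implies: every request from the handset terminates at a box the operator runs, which fetches the page on the phone's behalf and decides what, and how much, gets passed along.

    # Generic stand-in for a carrier-run gateway; not the actual WAP stack.
    # The handset never talks to the origin server directly: the gateway
    # fetches the page itself and returns a reworked version.
    from urllib.request import urlopen
    from html.parser import HTMLParser

    class TextOnly(HTMLParser):
        """Crude translation step: keep the text, throw the markup away."""
        def __init__(self):
            super().__init__()
            self.chunks = []
        def handle_data(self, data):
            self.chunks.append(data.strip())

    def gateway_fetch(url, limit=500):
        """What the operator's box does on the phone's behalf."""
        page = urlopen(url).read().decode("utf-8", errors="replace")
        parser = TextOnly()
        parser.feed(page)
        text = " ".join(chunk for chunk in parser.chunks if chunk)
        return text[:limit]  # the gateway, not the user, decides what gets through

    if __name__ == "__main__":
        print(gateway_fetch("http://example.com/"))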

Familiar principles

The rationale behind these redesigns is that WAP allows for error checking and for interconnecting different kinds of networks. If that sounds familiar, it’s because these were the founding principles of the Internet itself, principles that have proven astonishingly flexible over 30 or so years and are perfectly applicable to wireless
networks. The redesign of the protocol lets the WAP consortium blend the functions of delivery and display so the browser choice is locked in by the phone manufacturer. (Imagine how much Microsoft would like to have pulled off that trick.) No matter what the technical arguments for WAP are, its effect is to put the phone companies firmly in control of the user. The WAP consortium is determined that no third party will be able to reach the user of a wireless device without going through an interface that one of its member companies controls and derives revenue from.

The effects of this control can be seen in a recent string of commercial announcements. Geoworks intends to enforce its WAP patents to extract a $20,000 fee from any large company using a WAP gateway (contrast the free Apache Web server). Sprint has made licensing deals with companies such as E-Compare to distribute content over its WAP-enabled phones (imagine having to negotiate a separate deal with every ISP to distribute content to PC users). Nokia announced it will use WAP to deliver ads to its users’ phones (imagine WorldNet hijacking its subscribers’ browsers to serve them ads). By linking hardware, browser, and data transport together far more tightly than they are on the PC-based Internet, the members of the WAP Forum hope to create artificial scarcity for content, and avoid having to offer individual users unfettered access to the Internet.

In the short run this might work because WAP has a head start over other protocols for wireless data. In the long run, though, it is doomed to fail because the only thing we’ve ever seen with the growth characteristics of the Internet is the Internet itself. The people touting WAP over the current PC-based methods of accessing the Internet want to focus on phone hardware versus PC hardware

Content Shifts to the Edges

First published on Biz2, 04/00.

The message of Napster, the wildly popular mp3 “sharing” software, is
plain: The internet is being turned inside out.

Napster is downloadable software that allows users to trade mp3 files
with one another. It works by constantly updating a master song list,
adding and removing songs as individual users connect and disconnect
their PCs. When someone requests a particular song, the Napster server
then initiates a direct file transfer from the user who has a copy of
the song to the user who wants one. Running against the twin tides of
the death of the PC and the rise of application service providers
(ASPs), Napster instead points the way to a networking architecture
which re-invents the PC as a hybrid client+server while relegating the
center of the internet, where all the action has been recently, to
nothing but brokering connections.

For software which is still in beta, Napster’s success is difficult to
overstate: at any given moment, Napster servers keep track of
thousands of PCs, holding hundreds of thousands of songs which
comprise terabytes of data. This is a complete violation of the
Web’s current data model — “Content at the center” — and Napster’s
success in violating it points the way to an alternative — “Content
at the edges”. The current content-at-the-center model has one
significant flaw: most internet content is created on the PCs at the
edges, but for it to become universally accessible, it must be pushed
to the center, to always-on, always-up Web servers. As anyone who has
ever spent time trying to upload material to a Web site knows, the Web
has made downloading trivially easy, but uploading is still needlessly
hard. Napster relies on three networking innovations to get around
these limitations (a rough sketch of the brokering follows the list):

  • It dispenses with uploading and leaves the files on the PCs, merely
    brokering requests from one PC to another — the mp3 files do not have
    to travel through any central Napster server.
  • PCs running Napster do not need a fixed internet address or a
    permanent connection to use the service.
  • It ignores the reigning Web paradigm of client and server. Napster
    makes no distinction between the two functions: if you can receive
    files from other people, they can receive files from you as well.
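The rough sketch promised above: this is illustrative code, not Napster's. The index tracks which online peer holds which song, the only thing the center ever hands out is an address, and the file itself moves directly from one edge of the network to the other.

    # Illustrative sketch of content-at-the-edges brokering; not Napster's code.
    # The central index stores addresses, never the files themselves.

    index = {}  # song title -> set of addresses of online peers holding it

    def peer_connect(address, songs):
        """A PC comes online and registers the songs it is willing to serve."""
        for song in songs:
            index.setdefault(song, set()).add(address)

    def peer_disconnect(address):
        """A PC drops offline; its songs vanish from the index."""
        for holders in index.values():
            holders.discard(address)

    def find(song):
        """The broker answers with an address; the transfer itself is peer to peer."""
        holders = index.get(song, set())
        return next(iter(holders), None)

    peer_connect("24.91.0.17:6699", ["louie louie", "cool water"])
    peer_connect("198.51.7.4:6699", ["cool water"])
    print(find("cool water"))      # an address at the edge, not a central server
    peer_disconnect("24.91.0.17:6699")
    print(find("louie louie"))     # None: the content left when the PC did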

Leave aside for the moment the fact that virtually all of the file
transfers brokered by Napster are illegal — piracy is often an
indicator of massive untapped demand. The real import of Napster is
that it is proof-of-concept for a networking architecture which
recognizes that bandwidth to the desktop is becoming fast enough to
allow PCs to act as servers, and that PCs are becoming powerful enough
to fulfill this new role. In other words, just as the ASP space is
taking off, Napster’s success represents the revenge of the PC. By
removing the need to upload data (the single biggest bottleneck to
using the ASP model for everything), the content-at-the-edges model
points the way to a re-invention of the desktop as the center of a
user’s data, only this time the user will no longer need physical
access to the PC itself. The use of the PC as central repository and
server of user content will have profound effects on several internet
developments currently underway:

  • This is the ground on which the Windows 2000 vs. Linux battle will
    be fought. As the functions of desktop and server fuse, look for
    Microsoft to aggressively push Web services which rely on content-
    at-the-edges, trying to undermine Linux’s hold on the server market.
    (Ominously for Linux, the Napster Linux client is not seen as a
    priority by Napster themselves.)
  • Free hosting companies like Geocities exist because the present
    system makes it difficult for the average user to host their own web
    content. With PCs increasingly able to act as Web servers, look for
    a Napster-like service which simply points requests to individual
    users’ machines (a minimal example of a PC serving its own files
    follows this list).
  • WAP and other mobile access protocols are currently focussing on
    access to centralized commercial services, but when you are on the
    road the information you are likeliest to need is on your PC, not
    on CNN. An always-on always-accessible PC is going to be the
    ideal source of WAP-enabled information for travelling business
    people.
  • The trend towards centralized personalization services on sites like
    Yahoo will find itself fighting with a trend towards making your PC
    the source of your calendar, phone book, and to do list. The Palm
    Pilot currently syncs with the PC, and it will be easier to turn the
    PC itself into a Web server than to teach the average user how to
    upload a contact database.
  • Stolen MP3s are obvious targets to be served from individual
    machines, but they are by no means the only such content category.
    Everything from wedding pictures to home office documents to amateur
    porn (watch for a content-at-the-edges version of persiankitty) can
    be served from a PC now, and as long as the data does not require
    central management, it will be more efficient to do so.
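As mentioned in the Geocities item above, the gap between a desktop PC and a minimal Web server is already small. The lines below are a minimal illustration using Python's standard library; any scripting language could do the same in a handful of lines, serving the contents of a home directory to anyone who can reach the machine's address.

    # Minimal illustration: a home PC acting as a Web server for its own files.
    # Serves the current directory to anyone who can reach this machine's address.
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    if __name__ == "__main__":
        server = HTTPServer(("", 8080), SimpleHTTPRequestHandler)
        print("Serving this directory on port 8080 until interrupted.")
        server.serve_forever()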

This is not to say that the desktop will replace all Web servers — systems
which require steady backups or contain professionally updated content
will still continue to work best on centrally managed servers.
Nevertheless, Napster’s rise shows us that the versatility of the PC as
a hardware platform will give the millions of desktop machines currently
in use a new lease on life. This in turn means that the ASP revolution
will not be as swift, nor will the death of the PC be as total as the
current press would have us believe. The current content-at-the-center
architecture got us through the 90’s, when PCs were too poorly engineered
to be servers and bandwidth was too slow and variable to open a pipe to the
desktop, but with DSL and stable operating systems in the offing, much of
the next 5 years will be shaped by the rise of content-at-the-edges.