The Domain Name System is Coming Apart at the Seams

First published on Biz2, 10/00

The Domain Name System is coming apart at the seams. DNS, the protocol that maps domain names like FindDentist.com to IP addresses like 206.107.251.22, is showing its age after almost 20 years. It has proved unable to adapt to dynamic internet addresses, to the number of new services being offered, and particularly to the needs of end users, who are increasingly using their PCs to serve files, host
software, and even search for extra-terrestrial intelligence. As these PCs become a vital part of the internet infrastructure, they need real addresses just as surely as yahoo.com does. This is something the DNS system can’t offer them, but the competitors to DNS can.
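For concreteness, here is the whole of what a plain DNS lookup does today, sketched with Python’s standard library; the hostname is only an example, and any publicly resolvable name would do.

```python
import socket

# Forward DNS lookup: ask the resolver which IP address currently
# answers for a given domain name.
def resolve(hostname: str) -> str:
    return socket.gethostbyname(hostname)

if __name__ == "__main__":
    print(resolve("example.com"))  # prints an address along the lines of 93.184.216.34
```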

The original DNS system was invented, back in the early ’80s, for a distinctly machine-centric world. Internet-connected computers were rare, occupying a few well-understood niches in academic and government labs. This was a world of permanence: any given computer would always have one and only one IP address, and any given IP address would have one and only one domain name. Neat and tidy and static.

Then along came 1994, the Year of the Web, when the demand for connecting PCs directly to the internet grew so quickly that the IP namespace — the total number of addresses — was too small to meet the demand. In response, the ISPs began doling out temporary IP addresses on an as-needed basis, which kept PCs out of the domain name system: no permanent IP, no domain name. This wasn’t a problem in the mid-90s — PCs were so bad, and modem connections so intermittent, that no one really thought of giving PCs their own domain names.

Over the last 5 years, though, cheap PC hardware has gotten quite good, operating systems have gotten distinctly less flaky, and connectivity via LAN, DSL and cable has given us acceptable connections. Against the background of these remarkable improvements, the DNS system got no better at all — anyone with a PC was still a second-class citizen with no address, and it was Napster, ICQ, and their cousins, not the managers of the DNS system, who stepped into this breach.

These companies, realizing that interesting services could be run off of PCs if only they had real addresses, simply ignored DNS and replaced the machine-centric model with a protocol-centric one. Protocol-centric addressing creates a parallel namespace for each piece of software, and the mapping of ICQ or Napster usernames to temporary IP addresses is not handled by the net’s DNS servers but by
privately owned servers dedicated to each protocol — the ICQ server matches ICQ names to users’ current IP addresses, and so on. As a side-effect of handling dynamic IP addresses, these protocols are also able to handle internet address changes in real time, while the current DNS system can take several days to fully log a change.
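The mechanics are simple enough to sketch. What follows is a toy illustration of the idea rather than a reconstruction of ICQ’s or Napster’s actual protocols; the username and addresses are invented.

```python
# A toy, in-memory version of the directory a protocol-centric service keeps:
# usernames mapped to whatever IP address each user has at the moment.

class ProtocolDirectory:
    def __init__(self):
        self._current_ip = {}  # username -> current IP address

    def register(self, username, ip_address):
        # Called each time the user connects; overwrites any stale entry,
        # so the change takes effect immediately rather than in days.
        self._current_ip[username] = ip_address

    def lookup(self, username):
        # The answer is always the address reported at the most recent connection.
        return self._current_ip.get(username)

directory = ProtocolDirectory()
directory.register("green_day_fan", "66.31.4.8")    # first dial-up session
directory.register("green_day_fan", "24.60.19.77")  # reconnects with a new address
print(directory.lookup("green_day_fan"))            # -> 24.60.19.77
```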

In Napster’s case, protocol-centric addressing merely turns Napster into a customized FTP for music files. The real action is in software like ICQ, which not only uses a protocol-centric addressing scheme, but where the address points to a person, not a machine. When I log into ICQ, I’m me, no matter what machine I’m at, and no matter what IP address is presently assigned to that machine. This completely decouples what humans care about — can I find my friends and talk with them online? — from how the machines go about it — route message A to IP address X.

This is analogous to the change in telephony brought about by mobile phones. In the same way that a phone number is no longer tied to a particular location but is now mapped to the physical location of the phone’s owner, an ICQ address is mapped to me, not to a machine, no matter where I am.

This does not mean that the DNS system is going away, any more than landlines went away with the invention of mobile telephony. It does mean that DNS is no longer the only game in town. The rush is now on, with instant messaging protocols, single sign-on and wallet applications, and the explosion in peer-to-peer businesses, to create
and manage protocol-centric addresses, because these are essentially privately owned, centrally managed, instantly updated alternatives to DNS.

This also does not mean that this change is entirely to the good. While it is always refreshing to see people innovate their way around a bottleneck, sometimes bottlenecks are valuable. While ICQ and Napster came to their addressing schemes honestly, any number of people have noticed how valuable it is to own a namespace, and many business plans making the rounds are just me-too copies of Napster or
ICQ, which will make an already growing list of kinds of addresses — phone, fax, email, url, ICQ, … — explode into meaninglessness.

Protocol-centric namespaces will also force the browser into a lesser role, as users return to the days when they managed multiple pieces of internet software, or it will mean that addresses like icq://12345678 or napster://green_day_fan will have to be added to the browser’s repertoire of recognized URLs. Expect the rise of ‘meta-address’ servers as well, which offer to manage a user’s addresses for all of these competing protocols, and even to translate from one kind of address to another. (These meta-address servers will, of course, need their own addresses as well.)
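As a sketch only, a meta-address lookup might amount to little more than parsing the scheme and handing the name to the right service; the resolver functions below are stand-ins, since each real service would query its own servers.

```python
from urllib.parse import urlsplit

def resolve_icq(name):
    return f"(ask the ICQ server for the current IP of user {name})"

def resolve_napster(name):
    return f"(ask the Napster server for the current IP of user {name})"

# One entry per protocol-centric namespace the meta-address server understands.
RESOLVERS = {"icq": resolve_icq, "napster": resolve_napster}

def resolve(address):
    parts = urlsplit(address)              # splits scheme://name into pieces
    handler = RESOLVERS.get(parts.scheme)
    if handler is None:
        raise ValueError(f"no resolver registered for {parts.scheme}://")
    return handler(parts.netloc)

print(resolve("icq://12345678"))
print(resolve("napster://green_day_fan"))
```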

It’s not clear what is going to happen to internet addressing, but it is clear that it’s going to get a lot more complicated before it gets simpler. Fortunately, both the underlying IP addressing system and the design of URLs can handle this explosion of new protocols and addresses, but that familiar DNS bit in the middle (which really put the dot in dot com) will never recover the central position it has occupied for the last two decades, and that means that a critical piece of internet infrastructure is now up for grabs.


Thanks to Dan Gillmor of the San Jose Mercury News for pointing out to me the important relationship between peer-to-peer networking and DNS.

Darwin, Linux, and Radiation

10/16/2000

In the aftermath of LinuxWorld, the open source conference that took place in San
Jose, Calif., in August, we’re now being treated to press releases announcing Linux
as Almost Ready for the Desktop.

It is not.

Even if Linux were to achieve double-digit penetration among the world’s PC users, it
would be little more than an also-ran desktop OS. For Linux, the real action is
elsewhere. If you want to understand why Linux is the most important operating system in the world, ignore the posturing about Linux on the desktop, and pay attention to the fact that IBM has just ported Linux to a wristwatch, because that is the kind of news that illustrates Linux’s real strengths.

At first glance, Linux on a wristwatch seems little more than a gimmick–cellphone
displays and keypads seem luxurious by comparison, and a wristwatch that requires you to type “date” at the prompt doesn’t seem like much of an upgrade. The real import of the Linux wristwatch is ecological, though, rather than practical, because it
illustrates Linux’s unparalleled ability to take advantage of something called
“adaptive radiation.”

Let’s radiate

Adaptive radiation is a biological term that describes the way organisms evolve to
take advantage of new environments. The most famous example is Darwin’s finches. A single species of finch blew off of the west coast of South America and landed on the Galapagos Islands, and as these birds took advantage of the new ecological niches offered by the islands, they evolved into several separate but closely related species.

Adaptive radiation requires new environments not already crowded with competitors and organisms adaptable enough to take advantage of those environments. So it is with Linux–after a decade of computers acting as either clients or servers, new classes of devices are now being invented almost weekly–phones, consoles, PDAs–and only Linux is adaptable enough to work on most of them.

In addition to servers and the occasional desktop, Linux is being modified for use in
game machines (Indrema), Internet appliances (iOpener, IAN), handhelds (Yopy, iPAQ), mainframes (S/390), supercomputers (Los Lobos, a Beowulf cluster), phones (Japan Embedded Linux Consortium), digital VCRs (TiVo), and, of course, wristwatches. Although Linux faces fierce competition in each of these categories, no single competitor covers every one. Furthermore, given that each successful porting effort increases Linux’s overall plasticity, the gap between Linux’s diversity and that of its competitors will almost inevitably increase.

Where ‘good’ beats ‘best’

In a multidevice world, the kernel matters more than the interface. Many commentators (including Microsoft) have suggested that Linux will challenge Microsoft’s desktop monopoly, and among this camp it is an article of faith that one of the things holding Linux back is its lack of a single standardized interface. This is not merely wrong, it’s backward–the fact that Linux refuses to constrain the types of interfaces that are wrapped around the kernel is precisely what makes Linux so valuable to the individuals and companies adapting it for new uses. (The corollary is also true–Microsoft’s attempt to simply repackage the Windows interface for PDAs rendered early versions of WinCE unusable.)

Another lesson is that being merely good enough has better characteristics for adaptive
radiation, and therefore for long-term survival, than being Best of Breed.

Linux is not optimized for any particular use, and it is improved in many small
increments rather than large redesigns. Therefore, the chances that Linux will become a better high-availability server OS than Solaris, say, in the next few years, are tiny. Although not ideal, Linux is quite a good server, whereas Solaris is unusable for game consoles, digital VCRs, or wristwatches. This will keep Linux out of the best of breed competition because it is never perfectly tailored to any particular environment, but it also means that Linux avoids the best of breed trap. For any given purpose, best of breed products are either ideal or useless. Linux’s ability to adapt to an astonishing array of applications means that its chances of running on any new class of device are better than a best of breed product’s.

The real action

The immediate benefits of Linux’s adaptive radiation ability are obvious to the Linux
community. Since nothing succeeds like success, every new porting effort increases both the engineering talent pool and the available code base. The potential long-term benefit, though, is even greater. If a Linux kernel makes interoperation easier, each new Linux device can potentially accelerate a network effect, driving Linux adoption still faster.

This is not to say that Linux will someday take over everything, or even a large subset
of everything. There will always be a place for “Best of Breed” software, and Linux’s
use of open protocols means its advantage is always in ease of use, never in locking out the competition. Nevertheless, only Linux is in a position to become ubiquitous across most kinds of devices. Pay no attention to the desktop sideshow–in the operating system world, the real action in the next couple of years is in adaptive radiation.

XML: No Magic Problem Solver

First published on Biz2.0, 09/00.

The Internet is a wonderful source of technical jargon and a bubbling cauldron of alphabet soup. FTP, TCP, DSL, and a host of additional TLAs (three-letter acronyms) litter the speech of engineers and programmers. Every now and then, however, one of those bits of jargon breaks away, leaving the world of geek-speak to become that most sought-after of technological developments: a Magic Problem Solver.

A Magic Problem Solver is technology that non-technologists believe can dissolve stubborn problems on contact. Just sprinkle a little Java or ODBC or clustering onto your product or service, and, voila, problems evaporate. The downside to Magic Problem Solvers is that they never work as advertised. In fact, the unrealistic expectations created by asserting that a technology is a Magic Problem Solver may damage its real technological value: Java, for example, has succeeded far beyond any realistic expectations, but it hasn’t succeeded beyond the unrealistic expectations it spurred early on.

Today’s silver bullet

The Magic Problem Solver du jour is XML, or Extensible Markup Language, a system for describing arbitrary data. Among people who know nothing about software engineering, XML is the most popular technology since Java. This is a shame since, although it really is wonderful, it won’t solve half the problems people think it will. Worse, if it continues to be presented as a Magic Problem Solver, it may not be able to live up to its actual (and considerably more modest) promise.

XML is being presented as the ideal solution for the problem of the age: interoperability. By asserting that their product or service uses XML, vendors everywhere are inviting clients to ignore the problems that arise from incompatible standards, devices, and formats, as if XML alone could act as a universal translator and future-proofer in the post-Babel world we inhabit.

The truth is much more mundane: XML is not a format, it is a way of making formats, a set of rules for making sets of rules. With XML, you can create ways to describe Web-accessible resources using RDF (Resource Description Framework), syndicated content using ICE (Information Content Exchange), or even customer leads for the auto industry using ADF (Auto-lead Data Format). (Readers may be led to believe that XML is also a TLA that generates additional TLAs.)

Notice, however, that using XML as a format-describing language does not guarantee that the result will be well designed (XML is no more resistant to “Garbage In, Garbage Out” than any other technology), that it will be adopted industry-wide (ICE and RDF are overlapping attempts to describe types of Internet-accessible data), or even that the format is a good idea (Auto-lead Data Format?). If two industry groups settle on XML to design their respective formats, they’re no more automatically interoperable than are two languages that use the same alphabet–no more “interoperable,” for example, than are English and French.
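A small, invented example makes the point. Both documents below are well-formed XML describing the same customer lead, yet a program written for one vocabulary learns nothing from the other.

```python
import xml.etree.ElementTree as ET

# Two hypothetical XML formats describing the same customer lead. Both are
# well-formed, but they share no element names, so code written for one
# cannot read the other: shared syntax is not shared meaning.

format_a = "<lead><name>Jane Doe</name><vehicle>sedan</vehicle></lead>"
format_b = "<prospect><fullName>Jane Doe</fullName><model>sedan</model></prospect>"

def read_format_a(document):
    root = ET.fromstring(document)
    return root.findtext("name")  # only knows format A's vocabulary

print(read_format_a(format_a))  # -> Jane Doe
print(read_format_a(format_b))  # -> None: valid XML, wrong vocabulary
```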

Three sad truths

When it meets the real world, this vision of XML as a pain-free method of describing and working with data runs into some sad truths:

Sad XML Truth No. 1: Designing a good format using XML still requires human intelligence. The people selling XML as a tool that makes life easy are deluding their customers–good XML takes more work because it requires a rigorous description of the problem to be solved, and its much vaunted extensibility only works if the basic framework is sound.

Sad XML Truth No. 2: XML does not mean less pain. It does not remove the pain of having to describe your data; it simply front-loads the pain where it’s easier to see and deal with. The payoff only comes if XML is rolled out carefully enough at the start to lessen day-to-day difficulties once the system is up and running. Businesses that use XML thoughtlessly will face all of the upfront trouble of implementing XML, plus all of the day-to-day annoyances that result from improperly described data.

Sad XML Truth No. 3: Interoperability isn’t an engineering issue, it’s a business issue. Creating the Web — HTTP plus HTML — was probably the last instance where standards of global importance were designed and implemented without commercial interference. Standards have become too important as competitive tools to leave them where they belong, in the hands of engineers. Incompatibility doesn’t exist because companies can’t figure out how to cooperate with one another. It exists because they don’t want to cooperate with one another.

XML will not solve the interoperability problem because the difficulties faced by those hoping to design a single standard and the difficulties caused by the existence of competing standards have not gone away. The best XML can do is to ensure that data formats can be described with rigor by thoughtful and talented people capable of successfully completing the job, and that the standard the market selects can easily be spread, understood, and adopted. XML doesn’t replace standards competition, in other words, but if it is widely used it might at least allow for better refereeing and more decisive victories. On the other hand, if XML is oversold as a Magic Problem Solver, it might fall victim to unrealistically high expectations, and even the modest improvement it promises will fail to materialize.

It’s Communication, Stupid

To hear the makers of internet-enabled phones tell it, content is going to be king again, because mobile phone subscribers are clamoring for expensive new ways of getting headline news. The feature list for wireless devices reads like a re-hash of every ‘content provider’ press release of the last five years: Travel Updates. Stock quotes. Health tips. And of course all of this great content is supposed to lead to a rise in M-Commerce, a re-hash of E-Commerce. Many wireless analysts have bought this line, and are anointing future winners already, based on their perceived ability to deliver cutting edge content like sports scores (now there’s a brainstorm). The telcos
obviously haven’t asked what their customers want in a wireless device, and when they finally do ask, they are going to be in for a rude shock, because most of their customers aren’t desperate for packaged content, no matter how ‘dynamic’ it is. It seems strange to point this out to the Nokias and Sprints of the world, but the thing
users want to do with a communications device is communicate, and communicate with each other, not with Procter & Gamble or the NBA. Stranger still, the killer wireless app is already out there, and it’s driving the adoption of a wireless device which isn’t just another mobile phone+WAP browser combo. The killer app is email, and the device in question is a pager on steroids called the Blackberry, manufactured by RIM (Research in Motion).

Building a usable wireless device is complicated, and the Blackberry gets a lot of things right — it gets around the form factor problem of ideal size by offering both a pager-sized version and a PDA-sized version; it provides a surprisingly usable thumb-sized keyboard to speed text input; and it offers always-on connection at flat-rate prices. But the thing that really has gadget-loving CEOs addicted to it is access to the only thing really worth paying for: their own email. No matter what the press releases say, mobile internet access is about staying in touch, and travelling executives have a much greater need to stay in touch with colleagues and family than with CNN or ESPN. RIM has gotten it right where the current vendors of wireless
devices have it wrong, by realizing that email is the core interactive service and everything else is an add-on, not the other way around.

Despite email’s status as the net’s most useful application, it has a long history of being underestimated. In the earliest days of the ARPANET, email was an afterthought, and caught its designers by surprise when it quickly became the most popular service on the nascent net. Fast forward to the early ’90s, when Prodigy set about raising the price of its email services in order to get people to stop wasting time talking to each other so they could start shopping, and got caught by surprise when many users defected to AOL. And just this June eMarketer.com expressed some puzzlement at the results of a PricewaterhouseCoopers survey, which found that teens were going
online primarily to talk to one another via email, not to shop. (Have these people never been to a mall?) The surprise here is that phone companies would make the same mistake, since phones were invented to let people communicate. How could the telcos have spent so many billions of dollars creating wireless services which underplay the communications capabilities of the phone?

There are several answers to that question, but they can all be rolled into one phrase: media envy. Phone companies are trying to create devices which will let them treat people as captive media subscribers, rather than as mere customers. Email is damaging to this attempt in several ways: The email protocol can’t be owned. It is difficult to insert ads without being intrusive. It allows absolute interoperability between customers and non-customers. Worst of all, telcos can’t charge sponsors for access to their user base if those users are more interested in their email than headline news. The phone companies hope to use their ability to charge by the byte or minute to recreate the ‘pay for content’ model which has failed so miserably on the wired net, and they don’t want to run into any Prodigy-style problems of users preferring email to for-fee content on the way, especially as serious email use requires the kind of keyboard and screen it’s difficult to fit into a phone. Vendors of mobile phones are committed to text-based content rather than text-based communication in large part because that’s what it’s easy to make a phone do.

The Nokias and Sprints of the world made a strategic miscalculation by hyping the current generation of WAP phones as ‘wireless internet’. Users understand that the most important feature of the internet is email, and it is a pipe dream to believe that users will care more about receiving packaged content than news from home. As with the development of the wired internet, communications will lead the growth
of content and commerce in the wireless space, not follow it. The RIM devices are by no means perfect, but unlike WAP phones they create in their users the kind of rapt attention usually reserved for Gameboy addicts, by giving them something of real value. Ignore the wireless analysts who don’t get that wireless devices are primarily
communications tools. Bet against any service that assumes users are eager to pay to find out what the weather is like in Sausalito. Bet on any service that makes wireless email easier to use, because whoever makes email easier will earn their users’ undying loyalty, and everything else will follow from that.

The Music Industry Will Miss Napster

First published in the Wall Street Journal, July 28, 2000.

On Wednesday, federal Judge Marilyn Hall Patel ordered Napster, a
company that provides software allowing users to swap MP3 music files
over the Internet, to stop facilitating the trading of copyrighted
material by midnight today. Now the argument surrounding digital
downloading of music enters a new phase.

In business terms, this is a crushing blow for Napster. Although
Napster gets to keep its “community” intact, in that it can still
offer its chat function, Napster users will have no reason to use the
Napster chat software for any purpose other than trading information
on alternative sources of free MP3s like Napigator and Gnutella. If
Napster is kept offline for any length of time, it will likely lose so
much market share to these competitors and others that it will not be
able to recover even if the ruling is later overturned.

MP3 Files

For the Recording Industry Association of America, which sued Napster,
Judge Patel’s order is a powerful demonstration that it can cause
enormous difficulties for companies that try to shut the major record
labels out of the Internet distribution game. But the recording
industry association should not be sanguine simply because the total
number of MP3 files being moved around the Internet will plummet, at
least temporarily.

There are still well over 10 million users out there who have become
used to downloading music over the Internet. Napster or no Napster,
someone is going to service this enormous and still growing
appetite. The recording industry cannot expect that one court ruling,
or even many court rulings, will turn back the tide of technology.

Closing Napster as an outlet will have no effect on the underlying
ability of average users to create and listen to MP3 files. It is not
Napster that is digitizing music, it is the users themselves, using
cheap personal computers and free software. And it is the users’
willingness to trade copyrighted material, not Napster’s willingness
to facilitate those trades, that the record industry should be worried
about.

For the record companies, the issue is plain and simple — what
Napster is doing is illegal, and Judge Patel has ruled in their
favor. Legality isn’t the whole story, however. The critical thing for
the music executives to realize is that while they have not lost the
right to enforce copyright laws, their ability to do so is waning.

The analogy here is to the 55 miles-per-hour speed limit, which turned
out to be unenforceable because accelerator pedals are standard
operating equipment on a car. Likewise, government cannot control the
fact that computers are capable of making an unlimited number of
perfect copies of a file.

The current legal ruling has not taken a single Rio music player off
the shelves, nor destroyed a single piece of WinAmp music software,
nor deleted a single MP3 file. More to the point, it has not, and will
not, prevent anyone from “ripping” a CD, turning every track into a
separate MP3 file, in less time than it actually takes to listen to
the CD in the first place.

Napster did achieve an ease of use coupled with a distributed source
of files that no one else has been able to touch, so the short-term
advantage here is to the recording industry. The industry now has a
more favorable bargaining position with Napster, and it has a couple
of months to regroup and propose some Napster-like system before
Napster’s users disperse to new services. This new system must give
fans a way to download music without the expense and restrictions of
the current CD format, which requires you to buy either a dozen songs
or none.

If the industry doesn’t meet this demand, and quickly, it may find
that it’s traded the evil it knew for one it doesn’t know. The lesson
that new Internet music companies will take from the Napster case is
that they should either be hosted offshore (where U.S. laws don’t
apply) or they should avoid storing information about music files in a
central location. Either option would be far worse for the record
industry than Napster.

Napster at least provides one central spot for the music industry to
monitor the activity of consumers. Driving Napster completely out of
business could lead to total market fragmentation, which the industry
could never control.

Digital Is Different

It’s very hard to explain to businesses that have for years been able
to charge high margins for distributing intellectual property in a
physical format that the digital world is different, but that doesn’t
make it any less true. If the record labels really want to keep their
customers from going completely AWOL, they will use this ruling to
negotiate a deal with Napster on their own terms.

In all likelihood, though, the record executives will believe what so
many others used to believe: The Internet may have disrupted other
business models, but we are uniquely capable of holding back the
tide. As Rocky the Flying Squirrel put it so eloquently, “That trick
never works.”

Napster and the Death of the Album Format

Napster, the wildly popular software that allows users to trade music over the Internet, could be shut down later this month if the Recording Industry Association of America gets an injunction it is seeking in federal court in California. The big record companies in the association — along with the angry artists who testified before the Senate Judiciary Committee this week — maintain that Napster is
nothing more than a tool for digital piracy.

But Napster and the MP3 technology it exploits have changed the music business no matter how the lawsuit comes out. Despite all the fuss about copyright and legality, the most important freedom Napster has spread across the music world is not freedom from cost, but freedom of choice.

Napster, by linking music lovers and letting them share their collections, lets them select from a boundless range of music, one song at a time. This is a huge change from the way the music industry currently does business, and even if Napster Inc. disappears, it won’t be easy to persuade customers to go back to getting their music as the music industry has long packaged it.

Most albums have only two or three songs that any given listener likes, but the album format forces people to choose between paying for a dozen mediocre songs to get those two or three, or not getting any of the songs at all. This all-or-nothing approach has resulted in music collections that are the barest approximation of a listener’s
actual tastes. Even CD “singles” have been turned into multi-track mini-albums almost as expensive as the real thing, and though there have been some commercial “mix your own CD” experiments in recent years, they foundered because major labels wouldn’t allow access to their collections a song at a time.

Napster has demonstrated that there are no technological barriers to gaining access to the world’s music catalogue, just commercial ones.

Napster users aren’t merely cherry-picking the hits off well-known albums. Listeners are indulging all their contradictory interests, constantly updating their playlists with a little Bach, a little Beck or a little Buckwheat Zydeco, as the mood strikes them. Because it knows nothing of genre, a Napster search produces a cornucopia of
alternate versions: Hank Williams’s “I’m So Lonesome I Could Cry” as interpreted by both Dean Martin and the Cowboy Junkies, or two dozen covers of “Louie Louie.”

Napster has become a tool for musical adventure, producing more diversity by accident than the world music section of the local record store does by design: a simple search for the word “water” brings up Simon and Garfunkel’s “Bridge Over Troubled Water,” Deep Purple’s “Smoke on the Water,” “Cool Water” by the Sons of the Pioneers and “Water No Get Enemy” by Fela Anikulapo Kuti. After experiencing this freedom, music lovers are not going to go happily back to buying albums.

The question remains of how artists will be paid when songs are downloaded over the Internet, and there are many sources of revenue being bandied about—advertising, sponsorship, user subscription, pay-per-song. But merely recreating the CD in cyberspace will not work.

In an echo of Prohibition, Napster users have shown that they are willing to break the law to escape the constraints of all-or-nothing musical choices. This won’t be changed by shutting down Napster. The music industry is going to have to find some way to indulge its customers in their newfound freedom.

The Toughest Virus of All

First published on Biz2, 07/00.

“Viral marketing” is back, making its return as one of the gotta-have-it phrases for dot-com business plans currently making the rounds. The phrase was coined (by Steve Jurvetson and Tim Draper in “Turning Customers into a Sales Force,” Nov. ’98, p103) to describe the astonishing success of Hotmail, which grew to 12 million subscribers 18 months after launch.

The viral marketing meme has always been hot, but now its expansion is being undertaken by a raft of emarketing sites promising to elucidate “The Six Simple Principles for Viral Marketing” or offering instructions on “How to Use Viral Marketing to Drive Traffic and Sales for Free!” As with anything that promises miracle results, there is a catch. Viral marketing can work, but it requires two things often in short supply in the marketing world: honesty and execution.

It’s all about control

It’s easy to see why businesses would want to embrace viral marketing. Not only is it supposed to create those stellar growth rates, but it can also reduce the marketing budget to approximately zero. Against this too-good-to-be-true backdrop, though, is the reality: Viral marketing only works when the user is in control and actually endorses the viral message, rather than merely acting as a carrier.

Consider Hotmail: It gives its subscribers a useful service, Web-based email, and then attaches an ad for Hotmail at the bottom of each sent message. Hotmail gains the credibility needed for successful viral marketing by putting its users in control, because when users recommend something without being tricked or co-opted, it provides the message with a kind of credibility that cannot be bought. Viral marketing is McLuhan marketing: The medium validates the message.

Viral marketing is also based on the perception of honesty: If the recipient of the ad fails to believe the sender is providing an honest endorsement, the viral effect disappears. An ad tacked on to a message without the endorsement of the author loses credibility; it’s no different from a banner ad. This element of trust becomes even more critical when firms begin to employ explicit viral marketing, where users go beyond merely endorsing ads to actually generating them.

These services–PayPal.com or Love Monkey, for example–rely on users to market the service because the value of the service grows with new recruits. If I want to pay you through PayPal, you must be a PayPal user as well (unlike Hotmail, where you just need a valid address to receive mail from me). With PayPal, I benefit if you join, and the value of the network grows for both of us and for all present and future users as well.

Love Monkey, a college matchmaking service, works similarly. Students at a particular college enter lists of fellow students they have crushes on, and those people are sent anonymous email asking them to join Love Monkey and enter their own list of crushes. It then notifies any two people whose lists include each other. Love Monkey must earn users’ trust before any viral effect can take place because Metcalfe’s Law only works when people are willing to interact. Passive networks such as cable or satellite television provide no benefits to existing users when new users join.
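The matching step itself is easy to sketch. The names and lists below are invented, and the real service presumably does a good deal more, but the core check is nothing grander than mutual inclusion.

```python
# Each user submits a list of crushes; the service notifies only the
# pairs who named each other. All data here is made up for illustration.

crush_lists = {
    "alice": ["bob", "carol"],
    "bob":   ["alice"],
    "carol": ["dave"],
    "dave":  [],
}

def mutual_matches(lists):
    matches = []
    for person, crushes in lists.items():
        for crush in crushes:
            # Report each pair once, and only if the interest runs both ways.
            if person < crush and person in lists.get(crush, []):
                matches.append((person, crush))
    return matches

print(mutual_matches(crush_lists))  # -> [('alice', 'bob')]
```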

Persistent infections

Continuing the biological metaphor, viral marketing does not create a one-time infection, but a persistent one. The only thing that keeps Love Monkey users from being “infected” by another free matchmaking service is their continued use of Love Monkey. Viral marketing, far from eliminating the need to deliver on promises, makes businesses more dependent on the goodwill of their users. Any company that incorporates viral marketing techniques must provide quality services–ones that users are continually willing to vouch for, whether implicitly or explicitly.

People generally conspire to misunderstand what they should fear. The people rushing to embrace viral marketing misunderstand how difficult it is to make it work well. You can’t buy it, you can’t fake it, and you can’t pay your users to do it for you without watering down your message. Worse still, anything that is going to benefit from viral marketing must be genuinely useful, well designed, and flawlessly executed, so consumers repeatedly choose to use the service.

Sadly, the phrase “viral marketing” seems to be going the way of “robust” and “scalable”–formerly useful concepts which have been flattened by overuse. A year from now, viral marketing will simply mean word of mouth. However, the concept described by the phrase–a way of acquiring new customers by encouraging honest communication–will continue to be available, but only to businesses that are prepared to offer ongoing value.

Viral marketing is not going to save mediocre businesses from extinction. It is the scourge of the stupid and the slow, because it only rewards companies that offer great service and have the strength to allow and even encourage their customers to publicly pass judgment on that service every single day.

Content Shifts to the Edges, Redux

First published on Biz2, 06/00.

It’s not enough that Napster is erasing the distinction between client
and server (discussed in an earlier column); it’s erasing the
distinction between consumer and provider as well. You can see the
threat to the established order in a recent legal action: A San Diego
cable ISP, Cox@Home, ordered customers to stop running Napster not
because they were violating copyright laws, but because Napster allows
Cox subscribers to serve files from their home PCs. Cox has built its
service on the current web architecture, where producers serve content
from always-connected servers at the internet’s center, and
consumers consume from intermittently connected client PCs at the
edges. Napster, on the other hand, inaugurates a model where PCs are
always on and always connected, where content is increasingly stored
and served from the edges of the network, and where the distinction
between client and server is erased. Set aside Napster’s legal woes —
“Cox vs. Napster” isn’t just a legal fight, it’s a fight over the
difference between information consumers and information
providers. The question of the day is “Can Cox (or any media business)
force its users to retain their second-class status as mere consumers
of information?” To judge by Napster’s growth and the rise of
Napster-like services such as Gnutella, Freenet, and Wrapster, the
answer is “No”.

The split between consumers and providers of information has its roots
in the internet’s addressing scheme. A computer can only be located
by its internet protocol (IP) address, like 127.0.0.1, and although
you can attach a more memorable name to those numbers, like
makemoneyfast.com, the domain name is just an alias — the IP address
is the defining feature. By the mid-90’s there weren’t enough to go
around, so ISPs started randomly assigning IP addresses whenever a
user dialed in. This means that users never have a fixed IP address,
so while they can consume data stored elsewhere, they can never
provide anything from their own PCs. This division wasn’t part of the
internet’s original architecture, but the proposed fix (the next
generation of IP, called IPv6) has been coming Real Soon Now for a
long time. In the meantime, services like Cox have been built with the
expectation that this consumer/provider split would remain in effect
for the foreseeable future.

How short the foreseeable future sometimes is. Napster short-circuits
the temporary IP problem by turning the domain name system inside out:
with Napster, you register a name for your PC and every time you
connect, it makes your current IP address an alias to that name,
instead of the other way around. This inversion makes it trivially
easy to host content on a home PC, which destroys the asymmetry of
“end users consume but can’t provide”. If your computer is online, it
can be reached, even without a permanent IP address, and any material
you decide to host on your PC can become globally accessible.
Napster-style architecture erases the people-based distinction of
provider and consumer just as surely as it erases the computer-based
distinction between server and client.
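Here is a rough sketch of that inversion, with an in-process dictionary standing in for a Napster-style rendezvous server; the machine name is invented, and the trick for discovering the current address is a common idiom rather than anything Napster-specific.

```python
import socket

registry = {}  # fixed, user-chosen name -> whatever IP the machine has right now

def current_ip():
    # Common trick: open a UDP socket toward a public address and see which
    # local address the operating system picks. No packets are actually sent.
    probe = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        probe.connect(("192.0.2.1", 80))  # documentation-range address
        return probe.getsockname()[0]
    finally:
        probe.close()

def announce(name):
    # The name stays fixed; the address is the part that changes,
    # refreshed every time the PC comes online.
    registry[name] = current_ip()

announce("my_home_pc")
print(registry["my_home_pc"])  # the address at which this session can be reached
```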

There could not be worse news for Cox, since the limitations of cable
ISPs only become apparent when their users actually want to do something
useful with their upstream bandwidth, but the fact that cable
companies are hamstrung by upstream speed (less than a tenth of its
downstream speed in Cox’s case) just makes them the first to face the
eroding value of the media bottleneck. Any media business that relies
on a neat division between information consumer and provider will be
affected. Sites like Geocities or The Globe, which made their money
providing fixed addresses for end user content, may find that users
are perfectly content to use their PCs as that fixed address.
Copyright holders who have assumed up until now that large-scale
serving of material could only take place on a handful of relatively
identifiable and central locations are suddenly going to find that the
net has sprung another million leaks. Meanwhile, the rise of the end
user as info provider will be good news for other businesses. DSL
companies will have a huge advantage in the race to provide fast
upstream bandwidth; Apple may find that the ability to stream home
movies over the net from a PC at home drives adoption of Mac hardware
and software; and of course companies that provide the Napster-style
service of matching dynamic IP addresses with fixed names will have
just the sort of sticky relationship with their users that VCs slaver
over.

Real technological revolutions are human revolutions as well. The
architecture of the internet has effected the largest transfer of
power from organizations to individuals the world has ever seen, and
Napster’s destruction of the serving limitations on end users
demonstrates that this change has not yet run its course. Media
businesses which have assumed that all the power that has been
transferred to the individual for things like stock broking and
airline tickets wouldn’t affect them are going to find that the
millions of passive consumers are being replaced by millions of
one-person media channels. This is not to say that all content is
going to the edges of the net, or that every user is going to be an
enthusiastic media outlet, but when the limitations of “Do I really
want to (or know how to) upload my home video?” go away, the total
amount of user generated and hosted content is going to explode beyond
anything the Geocities model allows. This will have two big effects:
the user’s power as a media outlet of one will be dramatically
increased, creating unexpected new competition with corporate media
outlets; and the spread of hosting means that the lawyers of copyright
holders can no longer go to Geocities to achieve leverage over
individual users — in the age of user-as-media-outlet, lawsuits will
have to be undertaken one user at a time. That old saw about the press
only being free for people who own a printing press is about to take
on a whole new resonance.

We (Still) Have a Long Way to Go

First published in Biz2, 06/00.

Just when you thought the Internet was a broken link shy of ubiquity, along comes the head of the Library of Congress to remind us how many people still don’t get it.

The Librarian of Congress, James Billington, gave a speech on April 14 to the National Press Club in which he outlined the library’s attitude toward the Net, and toward digitized books in particular. Billington said the library has no plans to digitize the books in its collection. This came as no surprise because governmental digitizing of copyrighted material would open a huge can of worms.

What was surprising were the reasons he gave as to why the library would not be digitizing books: “So far, the Internet seems to be largely amplifying the worst features of television’s preoccupation with sex and violence, semi-illiterate chatter, shortened attention spans, and a near-total subservience to commercial marketing. Where is the virtue in all of this virtual information?” According to the April 15 edition of the Tech Law Journal, in the Q&A section of his address, Billington characterized the desire to have the contents of books in digital form as “arrogance” and “hubris,” and said that books should inspire “a certain presumption of reverence.”

It seems obvious, but it bears repeating: Billington is wrong.

The Internet is the most important thing for scholarship since the printing press, and all information that can be online should be online, because that is the most efficient way to distribute material to the widest possible audience. Billington should probably be asked to resign, based on his contempt for U.S. citizens who don’t happen to live within walking distance of his library. More important, however, is what his views illustrate about how far the Internet revolution still has to go.

The efficiency chain

The mistake Billington is making is sentimentality. He is right in thinking that books are special objects, but he is wrong about why. Books don’t have a sacred essence, they are simply the best interface for text yet invented — lightweight, portable, high-contrast, and cheap. They are far more efficient than the scrolls and oral lore they replaced.

Efficiency is relative, however, and when something even more efficient comes along, it will replace books just as surely as books replaced scrolls. And this is what we’re starting to see: Books are being replaced by digital text wherever books are technologically inferior. Unlike digital text, a book can’t be in two places at once, can’t be searched by keyword, can’t contain dynamic links, and can’t be automatically updated. Encyclopaedia Britannica is no longer published on paper because the kind of information it is dedicated to — short, timely, searchable, and heavily cross-referenced — is infinitely better carried on CD-ROMs or over the Web. Entombing annual snapshots of the Encyclopaedia Britannica database on paper stopped making sense.

Books which enable quick access to short bits of text — dictionaries, thesauruses, phone books — are likely to go the way of Encyclopaedia Britannica over the next few years. Meanwhile, books that still require paper’s combination of low cost, high contrast, and portability — any book destined for the bed, the bath or the beach — will likely migrate to print-on-demand services, at least until the arrival of disposable screens.

What is sure is that wherever the Internet arrives, it is the death knell for production in advance of demand, and for expensive warehousing, the current models of the publishing industry and of libraries. This matters for more than just publishers and librarians, however. Text is the Internet’s uber-medium, and with email still the undisputed killer app, and portable devices like the Palm Pilot and cell phones relying heavily or exclusively on text interfaces, text is a leading indicator for other kinds of media. Books are not sacred objects, and neither are radios, VCRs, telephones, or televisions.

Internet as rule

There are two ways to think about the Internet’s effect on existing media. The first is “Internet as exception”: treat the Net as a new entrant in an existing environment and guess at the eventual adoption rate. This method, so sensible for things such as microwaves or CD players, is wrong for the Internet, because it relies on the same
sentimentality about the world that the Librarian of Congress does. The Net is not an addition, it is a revolution; the Net is not a new factor in an existing environment, it is itself the new environment.

The right way to think about Internet penetration is “Internet as rule”: simply start with the assumption that the Internet is going to become part of everything — every book, every song, every plane ticket bought, every share of stock sold — and then look for the roadblocks to this vision. This is the attitude that got us where we are today, and this is the attitude that will continue the Net’s advance.

You do not need to force the Internet into new configurations — the Internet’s efficiency provides the necessary force. You only need to remove the roadblocks of technology and attitude. Digital books will become ubiquitous when interfaces for digital text are uniformly better than the publishing products we have today. And as the Librarian of Congress shows us, there are still plenty of institutions that just don’t understand this, and there is still a lot of innovation, and profit, to be achieved by proving them wrong.

Open Source and Quake

First published in FEED, 1/6/2000.

The Open Source movement got a Christmas present at the end of 1999, some assembly required. John Carmack of id Software, arguably the greatest games programmer ever, released the source code for Quake, the wildly popular shoot-em-up. Quake, already several years old, has maintained popularity because it allows players to battle one another over the internet, with hundreds of servers hosting round-the-clock battles. It jibes with the Open Source ethos because it has already benefitted enormously from player-created ‘mods’ (or modifications) to the game’s surface appearance. By opening up the source code to the game’s legion of fanatical players, id is hoping to spur a new round of innovation by allowing anyone interested in creating these modifications to alter any aspect of the program, not just its surface. Within minutes of id’s announcement, gamers and hackers the world over were downloading the source code. Within hours they were compiling new versions of the game. Within days people began using their new knowledge of the game’s inner workings to cheat. A problem new to the Open Source movement began to surface: what to do when access to the source code opens it up to abuse.

Quake works as a multi-player game where each player has a version of Quake running on his or her own PC, and it is this local copy of the game that reports on the player’s behavior — running, shooting, hiding — to a central Quake server. This server then collates all the players’ behaviors and works out who’s killed whom. With access to the Quake source code, a tech-savvy player can put themselves on electronic steroids by altering their local version of the game to give themselves superhuman speed, accuracy, or force, simply by over-reporting their skill to the server. This would be like playing tennis against someone with an invisible racket a yard wide.
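The shape of one obvious countermeasure is easy to sketch. This illustrates the trust problem rather than Quake’s actual netcode, and the numbers are invented.

```python
# A server that refuses to take the client's word for it: any reported move
# is checked against a speed limit the server enforces on its own.

MAX_SPEED = 320.0  # maximum units per second the server will believe

def accept_move(old_pos, reported_pos, dt):
    """Clamp a client-reported move to the speed the rules allow."""
    max_step = MAX_SPEED * dt
    step = reported_pos - old_pos
    if abs(step) > max_step:
        # The client claims to have moved farther than the rules permit:
        # treat it as a doctored report and cap the movement.
        step = max_step if step > 0 else -max_step
    return old_pos + step

print(accept_move(0.0, 10.0, 0.05))   # honest report inside the limit -> 10.0
print(accept_move(0.0, 500.0, 0.05))  # "electronic steroids" -> capped at 16.0
```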

All of this matters much more than you would expect a game to matter. With Open Source now associated with truth, justice, and the Internet Way, and with Carmack revered as a genius and a hero, the idea that the combination of these two things could breed anything so mundane as cheating caught people by surprise. One school of thought has been simply to deny that there is a problem by noting that if Quake had been Open Source to begin with, this situation would never have arisen. This is true, as far as it goes, but a theory which doesn’t cover real world cases isn’t much use. id’s attempt to open the source for some of its products while keeping others closed is exactly the strategy players like Apple, IBM, and Sun are all testing out, and if the release of Quake fails to generate innovation, mere ideological purity will be cold comfort.

As so often in the digital world, what happens to the gaming industry has ramifications for the computing industry as a whole. Players in a game are simultaneously competing and co-operating, and all agree to abide by rules that sort winners from losers, a process with ramifications for online economics, education, even auctions. If Quake, with its enormous audience of tech-savvy players and its history of benefitting from user modifications, can’t make the transition from closed source to open source easily, then companies with a less loyal user base might think twice about opening their products, so id’s example is going to be watched very closely. The Quake release marks a watershed — if the people currently hard at work on the Quake cheating problem find a solution, it will be another Open Source triumph, but if they fail, the Quake release might be remembered as the moment that cheating robbed the Open Source movement of its aura of continuous progress.