An Open Letter to Jakob Nielsen

[For those not subscribing to CACM, Jakob Nielsen and I have come down on opposite sides of a usability debate. Jakob believes that the prevalence of bad design on the Web is an indication that the current method of designing Web sites is not working and should be replaced or augmented with a single set of design conventions. I believe that the Web is an adaptive system and that the prevalence of bad design is how evolving systems work.

Jakob’s ideas are laid out in “User Interface Directions for the Web”, CACM, January 1999.

My ideas are laid out in “View Source… Lessons from the Web’s Massively Parallel Development”, networker, December 1998, and http://www.shirky.com/writings/view-source

Further dialogue between the two of us is in the Letters section of the March 1999 CACM.]

Jakob,

I read your response to my CACM letter with great interest, and while I still disagree, I think I better understand the disagreement, and will try to set out my side of the argument in this letter. Let me preface all of this by noting what we agree on: the Web is host to some hideous dreck; things would be better for users if Web designers made usability more of a priority; and there are some basics of interface usability that one violates at one’s peril.

Where we disagree, however, is on both attitude and method – for you, every Web site is a piece of software first and foremost, and therefore in need of a uniform set of UI conventions, while for me, a Web site’s function is something determined only by its designers and users – function is as function does. I think it presumptuous to force a third party into that equation, no matter how much more “efficient” that would make things.

You despair of any systemic fix for poor design and so want some sort of enforcement mechanism for these external standards. I believe that the Web is an adaptive system, and that what you deride as “Digital Darwinism” is what I would call a “Market for Quality”. Most importantly, I believe that a market for quality is in fact the correct solution for creating steady improvements in the Web’s usability.

Let me quickly address the least interesting objection to your idea: it is unworkable. Your plan requires both centralization and force of a sort that is impossible to achieve on the Internet. You say

“…to ensure interaction consistency across all sites it will be necessary to promote a single set of design conventions.”

and

“…the main problem lies in getting Web sites to actually obey any usability rules.”

but you never address who you are proposing to put in the driver’s seat – “it will be necessary” for whom? “[T]he main problem” is a problem for whom? Not for me – I am relieved that there is no authority who can make web site designers “obey” anything other than HTTP header validity. That strikes me as the Web’s saving grace. With the Web poised to go from 4 million sites to 100 million in the next few years, as you note in your article, the idea of enforcing usability rules will never get past the “thought experiment” stage.

However, as you are not merely a man of action but also a theorist, I want to address why I think enforced conformity to usability standards is wrong, even in theory. My objections break out into three rough categories: creating a market for usability is better than central standards for reasons of efficiency, innovation, and morality.

EFFICIENCY

In your letter, you say “Why go all the way to shipping products only to have to throw away 99% of the work?” I assume that you meant this as a rhetorical question – after all, how could anybody be stupid enough to suggest that a 1% solution is good enough? The Nielsen Solution – redesign for everybody not presently complying with a “single set of design conventions” – takes care of 100% of the problem, while the Shirky Solution, let’s call it evolutionary progress for the top 1% of sites, well what could that possibly get you?

1% gets you a surprising amount, actually, if it’s properly arranged.

If only the top 1% most trafficked Web sites made usability a priority, those sites would nevertheless account for 70% of all Web traffic. You will recognize this as your own conclusion, of course, as you have suggested on UseIt (http://www.useit.com/alertbox/9704b.html) that Web site traffic roughly follows a Zipf distribution, where the thousandth most popular site sees only 1/1000th the traffic of the most popular site. This in turn means that a very small percentage of the Web gets the lion’s share of the traffic. If you are right, then you do not need good design on 70% of the web sites to cover 70% of user traffic; you only need good design on the top 1% of web sites to reach 70% of the traffic.
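To make that arithmetic concrete, here is a minimal sketch of the calculation, assuming a pure Zipf distribution with exponent 1 over roughly 4 million sites; both assumptions are idealizations for illustration, not measurements.

```python
import math

EULER_MASCHERONI = 0.5772156649

def harmonic(n):
    """Approximate the n-th harmonic number H(n) = 1 + 1/2 + ... + 1/n."""
    return math.log(n) + EULER_MASCHERONI

def zipf_share(top_fraction, n_sites):
    """Share of total traffic captured by the top `top_fraction` of sites,
    assuming the site at rank k gets traffic proportional to 1/k."""
    return harmonic(int(n_sites * top_fraction)) / harmonic(n_sites)

print(f"{zipf_share(0.01, 4_000_000):.0%}")  # prints roughly 70%
```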

By ignoring the mass of low traffic sites and instead concentrating on making the popular sites more usable and the usable sites more popular, a market for quality is a more efficient way of improving the Web than trying to raise quality across the board without regard to user interest.

INNOVATION

A market for usability is also better for fostering innovation. As I argue in “View Source…”, good tools let designers do stupid things. This saves overhead on the design of the tools, since they only need to concern themselves with structural validity, and can avoid building in complex heuristics of quality or style. In HTML’s case, if it renders, it’s right, even if it’s blinking yellow text on a leopard-skin background. (This is roughly the equivalent of letting the reference C compiler settle arguments about syntax – if it compiles, it’s correct.)

Consider the use of HTML headers and tables as layout tools. When these practices appeared, in 1994 and 1995 respectively, they infuriated partisans of the SGML ‘descriptive language’ camp who insisted that HTML documents should contain only semantic descriptions and remain absolutely mute about layout. This in turn led to white-hot flamefests about how HTML ‘should’ and ‘shouldn’t’ be used.

It seems obvious from the hindsight of 1999, but it is worth repeating: Everyone who argued that HTML shouldn’t be used as a layout language was wrong. The narrowly correct answer, that SGML was designed as a semantic language, lost out to the need of designers to work visually, and they were able to override partisan notions of correctness to get there. The wrong answer from a standards point of view was nevertheless the right thing to do.

Enforcing any set of rules limits the universe of possibilities, no matter how well intentioned or universal those rules seem. Rules which raise the average quality by limiting the worst excesses risk ruling out the most innovative experiments as well by insisting on a set of givens. Letting the market separate good from bad leaves the door open to these innovations.

MORALITY

This is the most serious objection to your suggestion that standards of usability should be enforced. A web site is an implicit contract between two and only two parties – designer and user. No one – not you, not Don Norman, not anyone – has any right to enter into that contract without being invited in, no matter how valuable you think your contribution might be.

IN PRAISE OF EVOLVABLE SYSTEMS, REDUX

I believe that the Web is already a market for quality – switching costs are low, word of mouth effects are both large and swift, and redesign is relatively painless compared to most software interfaces. If I design a usable site, I will get more repeat business than if I don’t. If my competitor launches a more usable site, it’s only a click away. No one who has seen the development of Barnes and Noble and Amazon or Travelocity and Expedia can doubt that competition helps keep sites focussed on improving usability. Nevertheless, as I am a man of action and not just a theorist, I am going to suggest a practical way to improve the workings of this market for usability – let’s call it usable.lycos.com.

The way to allocate resources efficiently in a market with many sellers (sites) and many buyers (users) is competition, not standards. Other things being equal, users will prefer a more usable site over its less usable competition. Meanwhile, site owners prefer more traffic to less, and more repeat visits to fewer. Imagine a search engine that weighted usability in its rankings, where users knew that a good way to find a usable site was by checking the “Weight Results by Usability” box and owners knew that a site could rise in the list by offering a good user experience. In this environment, the premium for good UI would align the interests of buyers and sellers around increasing quality. There is no Commissar of Web Design here, no International Bureau of Markup Standards, just an implicit and ongoing compact between users and designers that improvement will be rewarded.

The same effect could be created in other ways – a Nielsen/Norman “Seal of Approval”, a “Usability” category at the various Web awards ceremonies, a “Usable Web Ring”. As anyone who has seen “Hamster Dance” or an emailed list of office jokes can tell you, the net is the most efficient medium the world has ever known for turning user preference into widespread awareness. Improving the market for quality simply harnesses that effect.

Web environments like usable.lycos.com, with all parties maximizing preferences, will be more efficient and less innovation-dampening than the centralized control which would be necessary to enforce a single set of standards. Furthermore, the virtues of such a decentralized system mirror the virtues of the Internet itself rather than fighting them. I once did a usability analysis on a commerce site which had fairly ugly graphics but a good UI nevertheless. When I queried the site’s owner about his design process, he said “I didn’t know anything when I started out, so I just put up a site with an email link on every page, and my customers would mail me suggestions.”

The Web is a marvelous thing, as is. There is a dream dreamed by engineers and designers everywhere that they will someday be put in charge, and that their rigorous vision for the world will finally overcome the mediocrity around them once and for all. Resist this idea – the world does not work that way, and the dream of centralized control is only pleasant for the dreamer. The Internet’s ability to be adapted slowly, imperfectly, and in many conflicting directions all at once is precisely what makes it so powerful, and the Web has taken those advantages and opened them up to people who don’t know source code from bar code by creating a simple interface design language.

The obvious short term effect of this has been the creation of an ocean of bad design, but the long term effects will be different – over time bad sites die and good sites get better. So while the short-term advantages of enforced standards may seem tempting, we would do well to remember that there is rarely any profit in betting against the power of the marketplace in the long haul.

Domain Names: Memorable, Global, Non-political?

Everyone understands that something happened to the domain name system in the mid-90s to turn it into a political minefield, with domain name squatters and trademark lawsuits and all the rest of it. It’s tempting to believe that if we could identify that something and reverse it, we could return to the relatively placid days prior to ICANN.

Unfortunately, what made domain names contentious was simply that the internet became important, and there’s no putting the genie back in that bottle. The legal issues involved actually predate not only ICANN but the DNS itself, going back to the mid-70s and the earliest decision to create memorable aliases for unmemorable IP addresses. Once the original host name system was in place — IBM.com instead of 129.42.18.99 — the system was potentially subject to trademark litigation. The legal issues were thus implicit in the DNS from the day it launched; it just took a decade or so for anyone to care enough to hire a lawyer.

There is no easy way to undo this. The fact that ICANN is a political body is not their fault (though the kind of political institution it has become is their fault.) Memorable names create trademark issues. Global namespace requires global oversight. Names that are both memorable and globally unique will therefore require global political oversight. As long as we want names we can remember, and which work unambiguously anywhere in the world, someone somewhere will have to handle the issues that ICANN currently handles.

Safety in Numbers

One reaction to the inevitable legal trouble with memorable names is simply to do away with memorable names. In this scenario, ICANN would only be responsible for assigning handles, unique IDs devoid of any real meaning. (The most articulate of these proposals is Bob Frankston’s “Safe Haven” approach.) [http://www.frankston.com/public/essays/DNSSafeHaven.asp]

In practice, this would mean giving a web site a meaningless but unique numerical address. Like a domain name today, it would be globally unambiguous, but unlike today’s domain names, such an address would not be memorable, as people are bad at remembering numbers, and terrible at remembering long numbers.

Though this is a good way to produce URLs free from trademark, we don’t need a new domain to do this. Anyone can register unmemorable numeric URLs today — whois says 294753904578.com, for example, is currently available. Since this is already possible, such a system wouldn’t free us from trademark issues, because whenever systems with numerical addresses grow popular (e.g. Compuserve or ICQ), users demand memorable aliases, to avoid dealing with horrible addresses like 71234.5671@compuserve.com. Likewise, the DNS was designed to manage memorable names, not merely unique handles, and creating a set of non-memorable handles simply moves the issue of memorable names to a different part of the system. It doesn’t make the issue go away.

Embrace Ambiguity

Another set of proposals would do away with the globally unique aspect of domain names. Instead of awarding a single firm the coveted .com address, a search for ACME would yield several different matches, which the user would then pick from. This is analogous to a Google search on ACME, but one where none of the matches had a memorable address of their own.

The ambiguity in such a system would make it impossible to automate business-to-business connections using the names of the businesses themselves. These addresses would also fail the ‘side of the bus’ test, where a user seeing a simple address like IBM.com on a bus or a business card (or hearing it over the phone or the radio) could go to a browser and type it in. Instead, there would be a market for third-parties who resolve name->address mappings.

The rise of peer-to-peer networks has given us a test-bed for market-allocated namespaces, and the news isn’t good. Despite the obvious value in having a single interoperable system for instant messaging, to take one example, we don’t have interoperability because AOL is (unsurprisingly) unwilling to abandon the value in owning the majority of those addresses. The winner in a post-DNS market would potentially have even more control and less accountability than ICANN does today.

Names as a Public Good

The two best theories of network value we have — Metcalfe’s law for point-to-point networks and Reed’s law for group-forming networks — both rely on optionality, the possibility of actually creating any of the untold potential connections that might exist on large networks. Valuable networks allow nodes to connect to one another without significant transaction costs.
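As a minimal sketch of the two laws in their usual shorthand forms — Metcalfe valuing a network by its possible point-to-point links, Reed by its possible sub-groups — the counts below are the common textbook idealizations, not figures quoted from this essay:

```python
def metcalfe_connections(n):
    """Possible point-to-point links among n nodes: n(n-1)/2."""
    return n * (n - 1) // 2

def reed_groups(n):
    """Possible groups of two or more members among n nodes: 2^n - n - 1."""
    return 2 ** n - n - 1

for n in (10, 20, 30):
    print(n, metcalfe_connections(n), reed_groups(n))
# The group-forming count explodes far faster than the pairwise count,
# which is why optionality -- keeping the cost of forming any given
# connection or group low -- matters so much to a network's value.
```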

Otherwise identical networks will thus have very different values for their users, depending on how easy or hard it is to form connections. In this theory, the worst damage spam does is not in wasting individual user’s time, but in making users skeptical of all mail from unknown sources, thus greatly reducing the possibility of unlikely connections. (What if you got real mail from Nigeria?)

Likewise, a system that provides a global namespace, managed as a public good, will create enormous value in a network, because it will lower the transaction costs of establishing a connection or group globally. It will also aid innovation by allowing new applications to bootstrap into an existing namespace without needing explicit coordination or permission. Despite its flaws, and despite ICANN’s deteriorating stewardship, this is what the DNS currently does.

Names Are Inevitable

We make sense of the world by naming things. Faced with any sort of numerical complexity, humans require tools for oversimplifying, and names are one of the best oversimplifications we have. We have only recently created systems that require global namespaces (ship registries, telephone numbers) so we’re not very good at it yet. In most of those cases, we have used existing national entities to guarantee uniqueness — we get globally unique phone numbers if we have nationally unique phone numbers and globally unique country codes.

The DNS, and the internet itself, have broken this ‘National Partition’ solution because they derive so much of their value from being so effortlessly global. There are still serious technical issues with the DNS, such as the need for domain names in non-English character sets, as well as serious political issues, like the need for hundreds if not thousands of new top-level domains. However, it would be hard to overstate the value created by memorable and globally unique domain names, names that are accessible to any application without requiring advance coordination, and which lower the transaction costs for making connections.

There are no pure engineering solutions here, because this is not a pure engineering problem. Human interest in names is a deeply wired characteristic, and it creates political and legal issues because names are genuinely important. In the 4 years since its founding, ICANN has moved from being merely unaccountable to being actively anti-democratic, but as reforming or replacing ICANN becomes an urgent problem, we need to face the dilemma implicit in namespaces generally: Memorable, Global, Non-political — pick two.

Communities, Audiences, and Scale

April 6, 2002

Prior to the internet, the difference in communication between community and audience was largely enforced by media — telephones were good for one-to-one conversations but bad for reaching large numbers quickly, while TV had the inverse set of characteristics. The internet bridged that divide, by providing a single medium that could be used to address either communities or audiences. Email can be used for conversations or broadcast, usenet newsgroups can support either group conversation or the broadcast of common documents, and so on. Most recently the rise of software for “The Writable Web”, principally weblogs, is adding two-way features to the Web’s largely one-way publishing model.

With such software, the obvious question is “Can we get the best of both worlds? Can we have a medium that spreads messages to a large audience, but also allows all the members of that audience to engage with one another like a single community?” The answer seems to be “No.”

Communities are different than audiences in fundamental human ways, not merely technological ones. You cannot simply transform an audience into a community with technology, because they assume very different relationships between the sender and receiver of messages.

Though both are held together in some way by communication, an audience is typified by a one-way relationship between sender and receiver, and by the disconnection of its members from one another — a one-to-many pattern. In a community, by contrast, people typically send and receive messages, and the members of a community are connected to one another, not just to some central outlet — a many-to-many pattern [1]. The extreme positions for the two patterns might be visualized as a broadcast star where all the interaction is one-way from center to edge, vs. a ring where everyone is directly connected to everyone else without requiring a central hub.

As a result of these differences, communities have strong upper limits on size, while audiences can grow arbitrarily large. Put another way, the larger a group held together by communication grows, the more it must become like an audience — largely disconnected and held together by communication traveling from center to edge — because increasing the number of people in a group weakens communal connection. 

The characteristics we associate with mass media are as much a product of the mass as the media. Because growth in group size alone is enough to turn a community into an audience, social software, no matter what its design, will never be able to create a group that is both large and densely interconnected. 

Community Topology

This barrier to the growth of a single community is caused by the collision of social limits with the math of large groups: As group size grows, the number of connections required between people in the group exceeds human capacity to make or keep track of them all.

A community’s members are interconnected, and a community in its extreme position is a “complete” network, where every connection that can be made is made. (Bob knows Carol, Ted, and Alice; Carol knows Bob, Ted, and Alice; and so on.) Dense interconnection is obviously the source of a community’s value, but it also increases the effort that must be expended as the group grows. You can’t join a community without entering into some sort of mutual relationship with at least some of its members, but because more members requires more connections, these coordination costs increase with group size.

For a new member to connect to an existing group in a complete fashion requires as many new connections as there are group members, so joining a community that has 5 members is much simpler than joining a community that has 50 members. Furthermore, this tradeoff between size and the ease of adding new members exists even if the group is not completely interconnected; maintaining any given density of connectedness becomes much harder as group size grows. Each new member either adds coordination effort or lowers the density of connectedness, or both, thus jeopardizing the interconnection that makes for community. [2]

As group size grows past any individual’s ability to maintain connections to all members of a group, the density shrinks, and as the group grows very large (>10,000) the number of actual connections drops to less than 1% of the potential connections, even if each member of the group knows dozens of other members. Thus growth in size is enough to alter the fabric of connection that makes a community work. (Anyone who has seen a discussion group or mailing list grow quickly is familiar with this phenomenon.)
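As a rough illustration of that density claim — with assumed, illustrative numbers rather than data — take a group of 10,000 people in which each member actively maintains ties to 50 others:

```python
group_size = 10_000     # assumed group size
ties_per_member = 50    # assumed number of other members each person knows

potential = group_size * (group_size - 1) // 2  # every possible pairing
actual = group_size * ties_per_member // 2      # each mutual tie counted once

print(f"{actual:,} of {potential:,} possible connections = {actual / potential:.2%}")
# 250,000 of 49,995,000 possible connections = 0.50%
```

Even with everyone knowing dozens of other members, such a group realizes about half a percent of its possible connections.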

An audience, by contrast, has a very sparse set of connections, and requires no mutuality between members. Thus an audience has no coordination costs associated with growth, because each new member of an audience creates only a single one-way connection. You need to know Yahoo’s address to join the Yahoo audience, but neither Yahoo nor any of its other users need to know anything about you. It is this disconnected quality that makes it possible for an audience to grow much (much) larger than a connected community can, because an audience can always exist at the minimum number of required connections (N connections for N users).

The Emergence of Audiences in Two-way Media

Prior to the internet, the outbound quality of mass media could be ascribed to technical limits — TV had a one-way relationship to its audience because TV was a one-way medium. The growth of two-way media, however, shows that the audience pattern re-establishes itself in one way or another — large mailing lists become read-only, online communities (e.g. LambdaMOO, WELL, ECHO) eventually see their members agitate to stem the tide of newcomers, users of sites like slashdot see fewer of their posts accepted. [3]

If real group engagement is limited to groups numbering in the hundreds or even the thousands [4], then the asymmetry and disconnection that characterizes an audience will automatically appear as a group of people grows in size, as many-to-many becomes few-to-many and most of the communication passes from center to edge, not edge to center or edge to edge. Furthermore, the larger the group, the more significant this asymmetry and disconnection will become: any mailing list or weblog with 10,000 readers will be very sparsely connected, no matter how it is organized. (This sparse organization of the larger group can of course encompass smaller, more densely clustered communities.)

More Is Different

Meanwhile, there are 500 million people on the net, and the population is still growing. Anyone who wants to reach even ten thousand of those people will not know most of them, nor will most of them know one another. The community model is good for spreading messages through a relatively small and tight knit group, but bad for reaching a large and dispersed group, because the tradeoff between size and connectedness dampens message spread well below the numbers that can be addressed as an audience.

It’s significant that the only two examples we have of truly massive community spread of messages on the internet — email hoaxes and Outlook viruses — rely on disabling the users’ disinclination to forward widely, either by a social or technological trick. When something like All Your Base or OddTodd bursts on the scene, the moment of its arrival comes not when it spreads laterally from community to community, but when that lateral spread attracts the attention of a media outlet [5].

No matter what the technology, large groups are different than small groups, because they create social pressures against community organization that can’t be trivially overcome. This is a pattern we have seen often, with mailing lists, BBSes, MUDs, usenet, and most recently with weblogs, the majority of which reach small and tightly knit groups, while a handful reach audiences numbering in the tens or even hundreds of thousands (e.g. andrewsullivan.com.)

The inability of a single engaged community to grow past a certain size, irrespective of the technology, will mean that over time, barriers to community scale will cause a separation between media outlets that embrace the community model and stay small, and those that adopt the publishing model in order to accommodate growth. This is not to say that all media that address ten thousand or more people at once are identical; having a Letters to the Editor column changes a newspaper’s relationship to its audience, even though most readers never write, most letters don’t get published, and most readers don’t read every letter.

Though it is tempting to think that we can somehow do away with the effects of mass media with new technology, the difficulty of reaching millions or even tens of thousands of people one community at a time is as much about human wiring as it is about network wiring. No matter how community minded a media outlet is, needing to reach a large group of people creates asymmetry and disconnection among that group — turns them into an audience, in other words — and there is no easy technological fix for that problem. 

Like the leavening effects of Letters to the Editor, one of the design challenges for social software is in allowing groups to grow past the limitations of a single, densely interconnected community while preserving some possibility of shared purpose or participation, even though most members of that group will never actually interact with one another.


Footnotes

1. Defining community as a communicating group risks circularity by ignoring other, more passive uses of the term, as with “the community of retirees.” Though there are several valid definitions of community that point to shared but latent characteristics, there is really no other word that describes a group of people actively engaged in some shared conversation or task, and infelicitous turns of phrase like ‘engaged communicative group’ are more narrowly accurate, but fail to capture the communal feeling that arises out of such engagement. For this analysis, ‘community’ is used as a term of art to refer to groups whose members actively communicate with one another. [Return]

2. The total number of possible connections in a group grows quadratically, because each member of a group must connect to every other member but themselves. In general, therefore, a group with N members has N × (N – 1) connections, which is the same as N² – N. If Carol and Ted knowing one another count as a single relationship, there are half as many relationships as connections, so the relevant number is (N² – N)/2.

Because these numbers grow quadratically, every 10-fold increase in group size creates a 100-fold increase in possible connections; a group of ten has about a hundred possible connections (and half as many two-way relationships), a group of a hundred has about ten thousand connections, a thousand has about a million, and so on. The number of potential connections in a group passes a billion as group size grows past thirty thousand. [Return]
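Those figures are easy to verify; here is a minimal sketch using the footnote’s own formula:

```python
def connections(n):
    """One-way connections in a completely connected group of n members: n^2 - n."""
    return n * n - n

for n in (10, 100, 1_000, 10_000):
    # possible connections, and half as many mutual relationships
    print(n, connections(n), connections(n) // 2)

# The number of possible connections passes a billion a little above n = 31,600.
n = 2
while connections(n) <= 1_000_000_000:
    n += 1
print("connections exceed a billion at group size", n)
```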

3. Slashdot is suffering from one of the common effects of community growth — the uprising of users objecting to the control the editors exert over the site. Much of the commentary on this issue, both at slashdot and on similar sites such as kuro5hin, revolves around the twin themes of understanding that the owners and operators of slashdot can do whatever they like with the site, coupled with a surprisingly emotional sense of betrayal that community control, in the form of moderation, is being overridden.

(More at kuro5hin and slashdot.) [Return]

4. In Grooming, Gossip, and the Evolution of Language (ISBN 0674363361), the primatologist Robin Dunbar argues that humans are adapted for social group sizes of around 150 or less, a size that shows up in a number of traditional societies, as well as in present day groups such as the Hutterite religious communities. Dunbar argues that the human brain is optimized for keeping track of social relationships in groups smaller than 150, but not larger. [Return]

5. In The Tipping Point (ISBN 0316346624), Malcolm Gladwell detailed the surprising spread of Hush Puppies shoes in the mid-’90s, from their adoption by a group of cool kids in the East Village to a national phenomenon. The breakout moment came when Hush Puppies were adopted by fashion designers, with one designer going so far as to place a 25-foot inflatable Hush Puppy mascot on the roof of his boutique in LA. The cool kids got the attention of the fashion designers, but it was the fashion designers who got the attention of the world, by taking Hush Puppies beyond the communities in which they started and spreading them outwards to an audience that looked to the designers. [Return]

Time-Warner and ILOVEYOU

First published in FEED, 05/00.

Content may not be king, but it was certainly making headlines last week. From the “content that should have been distributed but wasn’t” department, Time Warner’s spectacularly ill-fated removal of ABC from its cable delivery lineup ended up cutting off content essential to the orderly workings of America — Who Wants to Be A Millionaire? Meanwhile, from the “content that shouldn’t have been distributed but was” department, Spyder’s use of a loosely controlled medium spread content damaging to the orderly workings of America and everywhere else — the ILOVEYOU virus. Taken together, these events are making one message increasingly obvious: The power of corporations to make decisions about distribution is falling, and the power of individuals as media channels in their own right is rising.

The week started off with Time Warner’s effort to show Disney who was the boss, by dropping ABC from its cable lineup. The boss turned out to be Disney, because owning the delivery channel doesn’t give Time Warner half the negotiating leverage the cable owners at Time Warner thought it did. Time Warner was foolish to cut off ABC during sweeps month, when Disney had legal recourse, but their real miscalculation was assuming that owning the cable meant owning the customer. What had ABC back on the air and Time Warner bribing its customers with a thirty-day rebate was the fact that Americans resent any attempt to interfere with the delivery of content, legal issues or no. Indeed, the aftermath saw Peter Vallone of the NY City Council holding forth on the right of Americans to watch television. It is easy to mock this attitude, but Vallone has a point: People have become accustomed to constantly rising media access, from three channels to 150 in a generation, with the attendant rise in user access to new kinds of content. Any attempt to reintroduce artificial scarcity by limiting this access now creates so much blind fury that television might as well be ranked alongside water and electricity as utilities. The week ended as badly for Time Warner as it began, because even though their executives glumly refused to promise never to hold their viewers hostage as a negotiating tactic, their inability to face the wrath of their own paying customers had been exposed for all the world to see.

Meanwhile, halfway round the world, further proof of individual leverage over media distribution was mounting. The ILOVEYOU virus struck Thursday morning, and in less than twenty-four hours had spread further than the Melissa virus had in its entire life. The press immediately began looking for the human culprit, but largely missed the back story: The real difference between ILOVEYOU and Melissa was not the ability of Outlook to launch programs from within email, a security hole unchanged since last year. The real difference was the delivery channel itself — the number and interconnectedness of e-mail users — that makes ILOVEYOU more of a media virus than a computer virus. The lesson of a virus that starts in the Philippines and ends up flooding desktops from London to Los Angeles in a few hours is that while email may not be a mass medium that reaches millions at the same time, it has become a massive one, reaching tens of millions in mere hours, one user at a time. With even a handful of globally superconnected individuals, the transmission rates for e-mail are growing exponentially, with no end in sight, either for viruses or legitimate material. The humble practice of forwarding e-mail, which has anointed The Onion, Mahir, and the Dancing Baby as pop-culture icons, has now crossed one of those invisible thresholds that makes it a new kind of force — e-mail as a media channel more global than CNN. As the world grows more connected, the idea that individuals are simply media consumers looks increasingly absurd — anyone with an email address is in fact a media channel, and in light of ILOVEYOU’s success as a distribution medium, we may have to revise that six degrees of separation thing downwards a little.

Both Time Warner’s failure and ILOVEYOU’s success spread the bad news to several parties: TV cable companies, of course, but also cable ISPs, who hope to use their leverage over delivery to hold Internet content hostage; the creators of WAP, who hope to erect permanent tollbooths between the Internet and the mobile phone without enraging their subscribers; governments who hoped to control their citizens’ access to “the media” before e-mail turned out to be a media channel as well; and everyone who owns copyrighted material, for whom e-mail attachments threaten to create hundreds of millions of small leaks in copyright protection. (At least Napster has a business address.) There is a fear, shared by all these parties, that decisions about distribution — who gets to see what, when — will pass out of the hands of governments and corporations and into the hands of individuals. Given the enormity of the vested interests at stake, this scenario is still at the outside edges of the imaginable. But when companies that own the pipes can’t get any leverage over their users, and when users with access to e-mail can participate in a system whose ubiquity has been so dramatically illustrated, the scenario goes from unthinkable to merely unlikely.

The Toughest Virus of All

First published on Biz2, 07/00.

“Viral marketing” is back, making its return as one of the gotta-have-it phrases for dot-com business plans currently making the rounds. The phrase was coined (by Steve Jurvetson and Tim Draper in “Turning Customers into a Sales Force,” Nov. ’98, p103) to describe the astonishing success of Hotmail, which grew to 12 million subscribers 18 months after launch.

The viral marketing meme has always been hot, but now its expansion is being undertaken by a raft of emarketing sites promising to elucidate “The Six Simple Principles for Viral Marketing” or offering instructions on “How to Use Viral Marketing to Drive Traffic and Sales for Free!” As with anything that promises miracle results, there is a catch. Viral marketing can work, but it requires two things often in short supply in the marketing world: honesty and execution.

It’s all about control

It’s easy to see why businesses would want to embrace viral marketing. Not only is it supposed to create those stellar growth rates, but it can also reduce the marketing budget to approximately zero. Against this too-good-to-be-true backdrop, though, is the reality: Viral marketing only works when the user is in control and actually endorses the viral message, rather than merely acting as a carrier.

Consider Hotmail: It gives its subscribers a useful service, Web-based email, and then attaches an ad for Hotmail at the bottom of each sent message. Hotmail gains the credibility needed for successful viral marketing by putting its users in control, because when users recommend something without being tricked or co-opted, it provides the message with a kind of credibility that cannot be bought. Viral marketing is McLuhan marketing: The medium validates the message.

Viral marketing is also based on the perception of honesty: If the recipient of the ad fails to believe the sender is providing an honest endorsement, the viral effect disappears. An ad tacked on to a message without the endorsement of the author loses credibility; it’s no different from a banner ad. This element of trust becomes even more critical when firms begin to employ explicit viral marketing, where users go beyond merely endorsing ads to actually generating them.

These services – PayPal.com or Love Monkey, for example – rely on users to market the service because the value of the service grows with new recruits. If I want to pay you through PayPal, you must be a PayPal user as well (unlike Hotmail, where you just need a valid address to receive mail from me). With PayPal, I benefit if you join, and the value of the network grows for both of us and for all present and future users as well.

Love Monkey, a college matchmaking service, works similarly. Students at a particular college enter lists of fellow students they have crushes on, and those people are sent anonymous email asking them to join Love Monkey and enter their own list of crushes. It then notifies any two people whose lists include each other. Love Monkey must earn users’ trust before any viral effect can take place because Metcalfe’s Law only works when people are willing to interact. Passive networks such as cable or satellite television provide no benefits to existing users when new users join.

Persistent infections

Continuing the biological metaphor, viral marketing does not create a one-time infection, but a persistent one. The only thing that keeps Love Monkey users from being “infected” by another free matchmaking service is their continued use of Love Monkey. Viral marketing, far from eliminating the need to deliver on promises, makes businesses more dependent on the goodwill of their users. Any company that incorporates viral marketing techniques must provide quality services–ones that users are continually willing to vouch for, whether implicitly or explicitly.

People generally conspire to misunderstand what they should fear. The people rushing to embrace viral marketing misunderstand how difficult it is to make it work well. You can’t buy it, you can’t fake it, and you can’t pay your users to do it for you without watering down your message. Worse still, anything that is going to benefit from viral marketing must be genuinely useful, well designed, and flawlessly executed, so consumers repeatedly choose to use the service.

Sadly, the phrase “viral marketing” seems to be going the way of “robust” and “scalable” – formerly useful concepts which have been flattened by overuse. A year from now, viral marketing will simply mean word of mouth. However, the concept described by the phrase – a way of acquiring new customers by encouraging honest communication – will continue to be available, but only to businesses that are prepared to offer ongoing value.

Viral marketing is not going to save mediocre businesses from extinction. It is the scourge of the stupid and the slow, because it only rewards companies that offer great service and have the strength to allow and even encourage their customers to publicly pass judgment on that service every single day.

We (Still) Have a Long Way to Go

First published in Biz2, 06/00.

Just when you thought the Internet was a broken link shy of ubiquity, along comes the head of the Library of Congress to remind us how many people still don’t get it.

The Librarian of Congress, James Billington, gave a speech on April 14 to the National Press Club in which he outlined the library’s attitude toward the Net, and toward digitized books in particular. Billington said the library has no plans to digitize the books in its collection. This came as no surprise because governmental digitizing of copyrighted material would open a huge can of worms.

What was surprising were the reasons he gave as to why the library would not be digitizing books: “So far, the Internet seems to be largely amplifying the worst features of television’s preoccupation with sex and violence, semi-illiterate chatter, shortened attention spans, and a near-total subservience to commercial marketing. Where is the virtue in all of this virtual information?” According to the April 15 edition of the Tech Law Journal, in the Q&A section of his address, Billington characterized the desire to have the contents of books in digital form as “arrogance” and “hubris,” and said that books should inspire “a certain presumption of reverence.”

It seems obvious, but it bears repeating: Billington is wrong.

The Internet is the most important thing for scholarship since the printing press, and all information which can be online should be online, because that is the most efficient way to distribute material to the widest possible audience. Billington should probably be asked to resign, based on his contempt for U.S. citizens who don’t happen to live within walking distance of his library. More important, however, is what his views illustrate about how far the Internet revolution still has to go.

The efficiency chain

The mistake Billington is making is sentimentality. He is right in thinking that books are special objects, but he is wrong about why. Books don’t have a sacred essence, they are simply the best interface for text yet invented — lightweight, portable, high-contrast, and cheap. They are far more efficient than the scrolls and oral lore they replaced.

Efficiency is relative, however, and when something even more efficient comes along, it will replace books just as surely as books replaced scrolls. And this is what we’re starting to see: Books are being replaced by digital text wherever books are technologically inferior. Unlike digital text, a book can’t be in two places at once, can’t be searched by keyword, can’t contain dynamic links, and can’t be automatically updated. Encyclopaedia Britannica is no longer published on paper because the kind of information it is dedicated to — short, timely, searchable, and heavily cross-referenced — is infinitely better carried on CD-ROMs or over the Web. Entombing annual snapshots of the Encyclopaedia Britannica database on paper stopped making sense.

Books which enable quick access to short bits of text — dictionaries, thesauruses, phone books — are likely to go the way of Encyclopedia Britannica over the next few years. Meanwhile, books that still require paper’s combination of low cost, high contrast, and portability — any book destined for the bed, the bath or the beach — will likely be replaced by the growth of print-on-demand services, at least until the arrival of disposable screens.

What is sure is that wherever the Internet arrives, it is the death knell for production in advance of demand, and for expensive warehousing, the current models of the publishing industry and of libraries. This matters for more than just publishers and librarians, however. Text is the Internet’s uber-medium, and with email still the undisputed killer app, and portable devices like the Palm Pilot and cell phones relying heavily or exclusively on text interfaces, text is a leading indicator for other kinds of media. Books are not sacred objects, and neither are radios, VCRs, telephones, or televisions.

Internet as rule

There are two ways to think about the Internet’s effect on existing media. The first is “Internet as exception”: treat the Net as a new entrant in an existing environment and guess at the eventual adoption rate. This method, so sensible for things such as microwaves or CD players, is wrong for the Internet, because it relies on the same sentimentality about the world that the Librarian of Congress does. The Net is not an addition, it is a revolution; the Net is not a new factor in an existing environment, it is itself the new environment.

The right way to think about Internet penetration is “Internet as rule”: simply start with the assumption that the Internet is going to become part of everything — every book, every song, every plane ticket bought, every share of stock sold — and then look for the roadblocks to this vision. This is the attitude that got us where we are today, and this is the attitude that will continue the Net’s advance.

You do not need to force the Internet into new configurations — the Internet’s efficiency provides the necessary force. You only need to remove the roadblocks of technology and attitude. Digital books will become ubiquitous when interfaces for digital text are uniformly better than the publishing products we have today. And as the Librarian of Congress shows us, there are still plenty of institutions that just don’t understand this, and there is still a lot of innovation, and profit, to be achieved by proving them wrong.

RIP THE CONSUMER, 1900-1999

“The Consumer” is the internet’s most recent casualty. We have often heard that the Internet puts power in the hands of the consumer, but this is nonsense — ‘powerful consumer’ is an oxymoron. The historic role of the consumer has been nothing more than a giant maw at the end of the mass media’s long conveyor belt, the all-absorbing Yin to mass media’s all-producing Yang. Mass media’s role has been to package consumers and sell their attention to the advertisers, in bulk. The consumers’ appointed role in this system gives them no way to communicate anything about themselves except their preference between Coke and Pepsi, Bounty and Brawny, Trix and Chex. They have no way to respond to the things they see on television or hear on the radio, and they have no access to any media on their own — media is something that is done to them, and consuming is how they register their response. In changing the relations between media and individuals, the Internet does not herald the rise of a powerful consumer. The Internet heralds the disappearance of the consumer altogether, because the Internet destroys the noisy advertiser/silent consumer relationship that the mass media relies upon. The rise of the internet undermines the existence of the consumer because it undermines the role of mass media. In the age of the internet, no one is a passive consumer anymore because everyone is a media outlet.

To profit from its symbiotic relationship with advertisers, the mass media required two things from its consumers – size and silence. Size allowed the media to address groups while ignoring the individual — a single viewer makes up less than 1% of 1% of 1% of Frasier’s 10-million-strong audience. In this system, the individual matters not at all: the standard unit for measuring television audiences is a million households at a time. Silence, meanwhile, allowed the media’s message to pass unchallenged by the viewers themselves. Marketers could broadcast synthetic consumer reaction — “Tastes Great!”, “Less filling!” — without having to respond to real customers’ real reactions — “Tastes bland”, “More expensive”. The enforced silence leaves the consumer with only binary choices — “I will or won’t watch I Dream of Jeannie, I will or won’t buy Lemon Fresh Pledge” and so on. Silence has kept the consumer from injecting any complex or demanding interests into the equation because mass media is one-way media.

This combination of size and silence has meant that mass media, where producers could address 10 million people at once with no fear of crosstalk, has been a very profitable business to be in.

Unfortunately for the mass media, however, the last decade of the 20th century was hell on both the size and silence of the consumer audience. As AOL’s takeover of Time Warner demonstrated, while everyone in the traditional media was waiting for the Web to become like traditional media, traditional media has become vastly more like the Web. TV’s worst characteristics — its blandness, its cultural homogeneity, its appeal to the lowest common denominator — weren’t an inevitable part of the medium, they were simply byproducts of a restricted number of channels, leaving every channel to fight for the average viewer with their average tastes. The proliferation of TV channels has eroded the audience for any given show — the average program now commands a fraction of the audience it did 10 years ago, forcing TV stations to find and defend audience niches which will be attractive to advertisers.

Accompanying this reduction in size is a growing response from formerly passive consumers. Marketing lore says that if a customer has a bad experience, they will tell 9 other people, but that figure badly needs updating. Armed with nothing more than an email address, a disgruntled customer who vents to a mailing list can reach hundreds of people at once; the same person can reach thousands on ivillage or deja; a post on slashdot or a review on amazon can reach tens of thousands. Furthermore, the Internet never forgets — a complaint made on the phone is gone forever, but a complaint made on the Web is there forever. With mass media outlets shrinking and the reach of the individual growing, the one-sided relationship between media and consumer is over, and it is being replaced with something a lot less conducive to unquestioning acceptance.

In retrospect, mass media’s position in the 20th century was an anomaly and not an inevitability. There have always been both one-way and two-way media — pamphlets vs. letters, stock tickers vs. telegraphs — but in the 20th century TV so outstripped the town square that we came to assume that ‘large audience’ necessarily meant ‘passive audience’, even though size and passivity are unrelated. With the Internet, we have the world’s first large, active medium, but when it got here no one was ready for it, least of all the people who have learned to rely on the consumer’s quiescent attention while the Lucky Strike boxes tapdance across the screen. Frasier’s advertisers no longer reach 10 million consumers, they reach 10 million other media outlets, each of whom has the power to amplify or contradict the advertiser’s message in something frighteningly close to real time. In place of the giant maw are millions of mouths who can all talk back. There are no more consumers, because in a world where an email address constitutes a media channel, we are all producers now.

Ford, Subsidized Computers, and Free Speech

2/11/2000

Freedom of speech in the computer age was thrown dramatically into question by a pair of recent stories. The first was the news that Ford would be offering its entire 350,000-member global work force an internet-connected computer for $5 a month. This move, already startling, was made more so by the praise Ford received from Stephen Yokich, the head of the UAW, who said “This will allow us to communicate with our brothers and sisters from around the world.” This display of unanimity between management and the unions was in bizarre contrast to an announcement later in the week concerning Northwest Airlines flight attendants. US District Judge Donovan Frank ruled that the home PCs of Northwest Airlines flight attendants could be confiscated and searched by Northwest, which was looking for evidence of email organizing a New Year’s sickout. Clearly corporations do not always look favorably on communication amongst their employees — if the legal barriers to privacy on a home PC are weak now, and if a large number of workers’ PCs will be on loan from their parent company, the freedom of speech and relative anonymity we’ve taken for granted on the internet to date will be seriously tested, and the law may be of little help.

Freedom of speech evangelists tend to worship at the altar of the First Amendment, but many of them haven’t actually read it. As with many sacred documents, it is far less sweeping than people often imagine. Leaving aside the obvious problem of its applicability outside the geographical United States, the essential weakness of the Amendment at the dawn of the 21st century is that it only prohibits governmental interference in speech; it says nothing about commercial interference in speech. Though you can’t prevent people from picketing on the sidewalk, you can prevent them from picketing inside your place of business. This distinction relies on the adjacency of public and private spaces, and the First Amendment only compels the Government to protect free speech in the public arena.

What happens if there is no public arena, though? Put another way, what happens if all the space accessible to protesters is commercially owned? These questions call to mind another clash between labor and management in the annals of US case law, Hudgens v. NLRB (1976), in which the Supreme Court ruled that private space falls under First Amendment control only if it has “taken on all the attributes of a town” (a doctrine which arose to cover worker protests in company towns). However, the attributes the Court requires in order to consider something a town don’t map well to the internet, because they include municipal functions like a police force and a post office. By that measure, has Yahoo taken on all the functions of a town? Has AOL? If Ford provides workers their only link with the internet, has Ford taken on all the functions of a town?

Freedom of speech is following internet infrastructure, where commercial control blossoms and Government input withers. Since Congress declared the internet open for commercial use in 1991, there has been a wholesale migration from services run mostly by state colleges and Government labs to services run by commercial entities. As Ford’s move demonstrates, this has been a good thing for internet use as a whole — prices have plummeted, available services have mushroomed, and the number of users has skyrocketed — but we may be building an arena of all private stores and no public sidewalks. The internet is clearly the new agora, but without a new litmus test from the Supreme Court, all online space may become the kind of commercial space where the protections of the First Amendment will no longer reach.

ATT and Cable Internet Access

11/4/1999

When is a cable not a cable? This is the question working its way through the Ninth Circuit Court right now, courtesy of AT&T and the good people of Portland, Oregon. When the city of Portland and surrounding Multnomah County passed a regulation requiring AT&T to open its newly acquired cable lines to other internet service providers, such as AOL and GTE, the rumble over high-speed internet access began. In one corner was the City of Portland, weighing in on the side of competition, open access, and consumer choice. In the other corner stood AT&T, championing legal continuity: When AT&T made plans to buy MediaOne, the company was a local monopoly, and AT&T wants it to stay that way. It looked to be a clean fight. And yet, on Monday, one of the three appellate judges, Edward Leavy, threw in a twist: He asked whether there is really any such thing as the cable industry anymore. The answer to Judge Leavy’s simple question has the potential to radically alter the landscape of American media.

AT&T has not invested $100 billion in refashioning itself as a cable giant because it’s committed to offering its customers high-speed internet access. Rather, AT&T has invested that money to buy back what it really wants: a national monopoly. Indeed, AT&T has been dreaming of regaining its monopoly status ever since the company was broken up into AT&T and the Baby Bells, back in 1984. With cable internet access, AT&T sees its opportunity. In an operation that would have made all the King’s horses and all the King’s men gasp in awe, AT&T is stitching a national monopoly back together out of the fragments of the local cable monopolies. If it can buy up enough cable outlets, it could become the sole provider of high-speed internet access for a sizable chunk of the country.

Cable is attractive to the internet industry because the cable industry has what internet service providers have wanted for years: a way to make money off content. By creating artificial scarcity — we’ll define this channel as basic, that channel as premium, this event is free, that one is pay-per-view — the cable industry has used its monopoly over the wires to derive profits from the content that travels over those wires. So, if you think about it, what AT&T is really buying is not infrastructure but control: By using those same television wires for internet access, it will be able to control the net content its users can and can’t see (you can bet it will restrict access to Time-Warner’s offerings, for example), bringing pay-per-view economics to the internet.
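
To see how little machinery “pay-per-view economics” actually requires, here is a minimal sketch, in Python, of an access check over arbitrary content tiers. The tier names and the function itself are invented purely for illustration; nothing here describes AT&T’s or any cable operator’s actual system.

    # Purely illustrative: artificial scarcity as a simple access-control check.
    # The tier names and ordering are invented for this example.
    TIERS = {"basic": 0, "premium": 1, "pay-per-view": 2}

    def can_view(subscriber_tier, content_tier, paid_for_event=False):
        # Pay-per-view content is gated per event, regardless of subscription.
        if content_tier == "pay-per-view":
            return paid_for_event
        # Otherwise a subscriber sees everything at or below their tier.
        return TIERS[subscriber_tier] >= TIERS[content_tier]

    print(can_view("basic", "premium"))             # False
    print(can_view("premium", "basic"))             # True
    print(can_view("basic", "pay-per-view", True))  # True

The point of the sketch is that the scarcity lives entirely in the table: whoever controls the wire gets to decide what counts as “premium.”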

In this environment, the stakes for the continued monopoly of the cable market couldn’t be higher, which is what makes Judge Leavy’s speculation about the cable industry so radical. Obviously frustrated with the debate, the Judge interjected, “It strikes me that everybody is trying to dance around the issue of whether we’re talking about a telecommunications service.” His logic seems straightforward enough. If the internet is a telecommunications service, and cable is a way to get internet access, then surely cable is a telecommunications service. Despite the soundness of this logic, however, neither AT&T nor Portland was ready for it, because beneath its simplicity is a much more extreme notion: If the Judge is right, and anyone who provides internet access is a telecommunications company, then the entire legal structure on which the cable industry is based — monopolies and service contracts negotiated city by city — will be destroyed, and cable will be regulated by the FCC on a national level. By declaring that regulations should cover how a medium is used and not merely who owns it, Judge Leavy would move the internet to another level of the American media pecking order. If monopolies really aren’t portable from industry to industry — if owning the wire doesn’t mean owning the customer — then this latest attempt to turn the internet into a walled garden will be turned back.

Kasparov vs. The World

10/21/1999

It was going to be acres of good PR. After the success of Garry Kasparov’s chess matchup with IBM’s Deep Blue, Microsoft wanted to host another computerized chess match this summer — Kasparov vs. The World. The setup was simple: Kasparov, the John Henry of the Information Age, would play white, posting a move every other day on the Microsoft chess BBS. “The World” consisted of four teenage chess experts who would analyze the game and recommend counter-moves, which would also be posted on the BBS. Chess aficionados from around the world could then log in and vote for which of the four moves Black should play. This had everything Microsoft could want — community, celebrity, online collaboration, and lots of “Microsoft hosts The World!” press releases. This “experts recommend, The World votes” method worked better than anybody dared hope, resulting in surprisingly challenging chess and the ascension of one of The World’s experts, Irina Krush, into chess stardom. Things were going well up until last week, when Microsoft missed a crucial piece of email and the good PR began to hiss out of the event like helium from a leaky balloon.
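
To make the mechanics concrete, here is a minimal Python sketch of that “experts recommend, The World votes” step: the experts post candidate moves, players cast ballots, and the candidate with a plurality of votes is played. The function, the moves, and the ballots are invented for illustration; they are not taken from Microsoft’s actual system.

    from collections import Counter

    def select_world_move(recommendations, ballots):
        # recommendations: the candidate moves posted by the four experts
        # ballots: the moves voted for by individual players on the BBS
        # Only ballots matching one of the recommendations are counted;
        # the recommendation with the most votes (a plurality) is played.
        tally = Counter(vote for vote in ballots if vote in recommendations)
        if not tally:
            raise ValueError("no valid ballots were cast this turn")
        move, _ = tally.most_common(1)[0]
        return move

    # One illustrative turn: four hypothetical recommendations, a handful of ballots.
    experts = ["Qb3", "Rd8", "Nf6", "g5"]
    ballots = ["Qb3", "Nf6", "Qb3", "g5", "Qb3", "Rd8"]
    print(select_world_move(experts, ballots))  # prints "Qb3"

Note that there is no veto step anywhere in this sketch; as described below, the event broke down at precisely the moment Microsoft added one after the fact.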

While Deep Blue was a lone computer, here Kasparov’s opponent was to be the chess community itself, a kind of strategic “group mind.” Since communication was the glue that held the community together, it’s fitting that the game came unglued after a missed email. During last week’s endgame, it was generally agreed that The World had made a serious tactical error on move 52, but that there was still the possibility of a draw. Then, on October 13th, Ms. Krush’s recommendation for move 58 was delayed by mail server problems, problems compounded by a further delay in posting the information on the Microsoft server. Without Ms. Krush’s input, an inferior move was suggested and accepted, making it obvious that despite the rhetoric of collaboration, the game had become Kasparov v. Krush with kibitzing by The World. Deprived of Ms. Krush’s strategic vision, the game was doomed. The World responded to this communication breakdown with collective hara-kiri, with 66% of the team voting for a suicidal move. Facing the possibility of headlines like “The World Resigns from Microsoft,” the corporate titan rejected the people’s move and substituted one of its own. The World, not surprisingly, reacted badly.

Within hours of Microsoft’s reneging on the vote, a protest movement was launched, including press releases, coordinating web sites, and even a counter-BBS which archived articles from the Microsoft chess server before they expired. Microsoft had run afoul of the first rule of online PR: On the internet, there is no practical difference between “community” and “media”; anyone with an email address is a media outlet, a tiny media outlet to be sure, but still part of the continuum. Since online communities and online media outlets use the same tools — web sites, mailing lists, BBS’s — the border between “community interest” and “news” is far easier to cross. The Microsoft story took less than a week to go from the complaints of a few passionate members of the chess community to a story on the BBC.

Microsoft made the same famously bad bet that sidelined Prodigy: By giving The World a forum for expressing itself, it assumed that The World’s gratitude would prevent criticism of its host, should anything go wrong. As Rocky the Flying Squirrel would say, “That trick never works.” What started as a way to follow on IBM’s success with Deep Blue has become a more informative comparison than even Microsoft knew. Two computerized chess games against the World Champion — one meant to display the power of computation, the other the power of community — and the lesson is this: While computers sometimes behave the way you want them to, communities never do. Or, as the Microsoft PR person put it after the game ended: “Live by the internet, die by the internet.”