Citizens and Customers

6/15/1999

All countries are different; all customers are the same. That’s the lesson to be
learned from Freeserve ISP’s meteoric rise, and the subsequent reshaping of the UK
internet industry. Prior to Freeserve, the British adoption rate of the internet was
fairly sluggish, but Freeserve figured out how to offer free internet access by
subsidizing its service with for-fee tech support and a cut of local call revenues,
and in the six months since it launched (and spawned over 50 copycat services), the UK user base has grown from 6 to 10 million. Freeserve's main advantage over the other major ISP player, British Telecom, was the contempt BT had for the British public.

Wherever technology is concerned, there are a host of nationalistic prejudices: the
Americans are early adopters, for example, while the British are a nation of shopkeepers, suspicious of technology and fearful of change. BT held this latter view, behaving as if Britain's slow adoption of the internet was just another aspect of a national reticence about technology, and therefore treating the ISP business as an expensive service for elites rather than trying to roll it out cheaply to the masses.

This idea of national differences in the use of the internet is everywhere these days,
but it confuses content with form. There will be Czech content on the net, but
there won’t be a “Czech Way” of using the network, or a “Chinese Way” or a “Chilean Way.” The internet’s content is culturally determined, but its form is shaped by economics. Once a country gets sufficiently wired, the economic force of the internet has little to do with ethnicity or national sentiment and much to do with the unsurprising fact that given two offers of equal value, people all over the world will take the cheaper one, no matter who is offering it to them.

Unsurprising to consumers, that is; businesses all over the world are desperate to
convince themselves that national identity matters more than quality and price. (Remember the "Buy American" campaign that tried to get Americans to pay more for inferior cars? Or the suggestion that corrupt business practices were part of "Asian Values"?) Freeserve's genius was not to be swayed by the caricature of stodgy, technophobic Brits. British reticence about the internet turned out to be about price and not about national character at all — now that the price of internet access has come into line with ordinary incomes, the British have been as keen to get connected as Americans are.

Patriotism is the last refuge of an unprofitable business. We’ve seen the internet take
off in enough countries to have some idea of the necessary preconditions: when a
literate population has phones at home, cheap PCs, and competitive telecom businesses, the value of connecting to the internet rises continually while the cost of doing so falls. In these countries, any business that expects national identity to provide some defense against competition is merely using a flag as a fig leaf. In the end, countries with wired populations will see national differences reduced in importance to the level of the Local Dish and Colorful Garb, because once a country passes some tipping point, its population starts behaving less like citizens of a particular place and more like global customers, making the same demands made by customers everywhere. Businesses that fill those demands, regardless of nationality, will thrive, and businesses that ignore those demands, regardless of nationality, will die.

Why Smart Agents Are A Dumb Idea

Smart agents are a dumb idea. Like several of the other undead ideas floating around (e.g., Digital Cash, Videophones), the idea of having autonomous digital agents that scour the net acting on your behalf seems so attractive that, despite a string of failures, agents enjoy periodic resurgences of interest. A new such surge seems to be beginning, with another round of stories in the press about how autonomous agents
equipped with instructions from you (and your credit card number) are going to shop for your CDs, buy and sell your stocks, and arrange your travel plans. The primary thing smart agents seem to have going for them is the ‘cool’ factor (as in ‘This will work because it would be cool if it did.’) The primary thing they have going against them is that they do not work and they never will work, and not just because they are
impractical, but because they take on the wrong problems and solve them in the wrong order.

Smart agents — web crawling agents as opposed to stored preferences in
a database — have three things going against them:

  • Agents’ performance degrades with network growth
  • Agents ask people to do what machines are good at (waiting) and machines to do what people are good at (thinking).
  • Agents make the market for information less efficient rather than more

These three barriers render the idea of agents impractical for almost all of the duties they are supposedly going to perform.

Consider these problems in context; the classic scenario for the mobile agent is the business trip. You have business in Paris (or, more likely, Peoria) and you need a flight, a hotel and a rental car. You instruct your agent about your dates, preferences, and price limits, and it scours the network for you, putting together the ideal package based on its interpretation of your instructions. Once it has secured this package, it makes the purchases on your behalf, and presents you with the completed travel package, dates, times and confirmation numbers in one fell swoop.

A scenario like this requires a good deal of hand waving to make it seem viable, to say nothing of worthwhile, because it assumes that the agent’s time is more valuable than your time. Place that scenario in a real world context – your boss tells you you need to be in Paris (Peoria) at the end of the week, and could you make the arrangements before you go home? You fire up your trusty agent, and run into the
following problems:

  • Agents’ performance degrades with network growth

Upon being given its charge, the agent needs to go out and identify all the available sources of travel information, issue the relevant queries, digest the returned information and then run the necessary weighting of the results in real time. This is like going to Lycos and asking it to find all the resources related to Unix and then having it start indexing the Web. Forget leaving your computer to make a pot of coffee – you could leave your computer and make a ship in a bottle.

One of the critical weaknesses in the idea of mobile agents is that the time taken to run a query improves with processor speed (~2x every 18 months) but degrades with the amount of data to be searched (~2x every 4 months). A back-of-the-envelope calculation comparing Moore's Law with traffic patterns at public internet interconnect points suggests that an autonomous agent's performance on real-time requests should degrade roughly fivefold every year, falling more than an order of magnitude behind within two. Even if you make optimistic assumptions about algorithm design and multi-threading and assume that data sources are always accessible, mere network latency across an exploding number of sources prohibits real-time queries. The right way to handle this problem is the mundane way – gather and index the material to be queried in advance.
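
To make the back-of-the-envelope numbers concrete, here is a minimal Python sketch using only the doubling periods assumed above (processor speed doubling every 18 months, searchable data doubling every 4 months) and the assumption that query time scales as data volume divided by processor speed:

    # Back-of-the-envelope check: how quickly does a real-time agent query degrade?
    # Assumptions from the text: speed doubles every 18 months, data every 4 months,
    # and the time for a full sweep scales as data_volume / processor_speed.

    SPEED_DOUBLING_MONTHS = 18   # Moore's Law, as assumed above
    DATA_DOUBLING_MONTHS = 4     # growth of the data to be searched, as assumed above

    def slowdown_after(months: float) -> float:
        """Factor by which a full real-time sweep gets slower after `months`."""
        speed_gain = 2 ** (months / SPEED_DOUBLING_MONTHS)
        data_growth = 2 ** (months / DATA_DOUBLING_MONTHS)
        return data_growth / speed_gain

    for months in (12, 24, 36):
        print(f"after {months} months: ~{slowdown_after(months):.1f}x slower")
    # after 12 months: ~5.0x slower
    # after 24 months: ~25.4x slower
    # after 36 months: ~128.0x slower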

  • Agents ask people to do what machines are good at (waiting) and machines to do what people are good at (thinking).

The usual answer to this problem with real-time queries is to assume that people are happy to ask a question hours or days in advance of needing the answer, a scenario that occurs with a frequency of approximately never. People ask questions when they want to know the answer – if they wanted the answer later they would have asked the
question later. Agents thus reverse the appropriate division of labor between humans and computers — in the agent scenario above, humans do the waiting while agents do the thinking. The humans are required to state the problem in terms rigorous enough to be acted on by a machine, and be willing to wait for the answer while the machine applies the heuristics. This is in keeping with the Central Dream of AI, namely that humans can be relegated to a check-off function after the machines have done the thinking.

As attractive as this dream might be, it is far from the realm of the possible. When you can have an agent which understands why 8 hours between trains in Paris is better than 4 hours between trains in Frankfurt but 8 hours in Peoria is worse than 4 hours in Fargo, then you can let it do all the work for you, but until then the final step in the process is going to take place in your neural network, not your agent’s.

  • Agents make the market for information less efficient

This is the biggest problem of all – agents rely on the wrong abstraction of the world. In the agent's world, its particular owner is at the center, there are a huge number of heterogeneous data sources scattered all around, and one agent makes thousands of queries outwards to perform one task. This ignores the fact that the data is neither static nor insensitive to the agent's request. The agent is not just importing information about supply, it is exporting information about demand at the same time, thus changing the very market conditions it is trying to record. The price of a Beanie Baby rises as demand rises since Beanie Babies are an (artificially) limited resource, while the price of bestsellers falls with demand, since bookstores can charge lower prices in return for higher volume. Airline prices are updated thousands of times a day, and currency exchange rates are updated tens of thousands of times a day. Net-crawling agents are completely unable to deal with markets for information like these; these kinds of problems require the structured data to be at the center, and for a huge number of heterogeneous queries to be made inwards towards the centralized data, so that information about supply and demand is captured in one place, something no autonomous agent can do.
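
For illustration only, here is a toy sketch of the structure such markets require: a single store of supply at the center, with every inbound query recorded as a demand signal, so both sides of the market are captured in one place. The class, routes and surge rule are invented, not a real reservation system; the point is that the query itself moves the price, which is exactly what an outward-crawling agent cannot see.

    from collections import defaultdict

    class CentralFareMarket:
        """Toy model of a centralized market for information: supply lives in one
        place, and every inbound query is itself a demand signal."""

        def __init__(self, base_fares):
            self.base_fares = dict(base_fares)     # route -> base price (supply side)
            self.demand = defaultdict(int)         # route -> queries seen (demand side)

        def quote(self, route):
            """Answer a query and record it; heavy demand nudges the price up."""
            self.demand[route] += 1
            surge = 1 + 0.01 * self.demand[route]  # crude stand-in for yield management
            return round(self.base_fares[route] * surge, 2)

    market = CentralFareMarket({"JFK-CDG": 400.0, "ORD-PIA": 120.0})
    print(market.quote("JFK-CDG"))   # 404.0 -- the query itself moved the price
    print(market.quote("JFK-CDG"))   # 408.0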

Enter The Big Fat Webserver

So much of the history of the Internet, and particularly of the Web, has been about decentralization that the idea of distributing processes has become almost reflexive. Because the first decade of the Web has relied on PCs, which are by their very nature decentralized, it is hard to see that much of the Web’s effect has been in the opposite direction, towards centralization, and centralization of a particular kind – market-making.

The alternative to the autonomous mobile agent is the Big Fat Webserver, and while its superiority as a solution has often been overlooked next to the sexier idea of smart agents, B.F.W.s are A Good Thing for the same reasons markets are A Good Thing – they are the best way of matching supply with demand in real time. What you would really do when Paris (Peoria) beckons is go to Travelocity or some similar B.F.W. for travel planning. Travelocity runs on that unsexiest of hardware (the mainframe) in that unsexiest of architectures (centralized) and because of that, it works well everywhere the agent scenario works badly. You log into Travelocity and ask it a question about plane flights, get an answer right then, and decide.

B.F.W.s' performance scales with database size, not network size

The most important advantage of B.F.W.s over agents is that B.F.W.s acquire and structure the data before a request comes in. Net-crawling agents are asked to identify sources, gather data and then query the results all at once, even though these functions require completely different strategies. By gathering and structuring data in advance, B.F.W.s remove the two biggest obstacles to agent performance before any request is issued.
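
As a rough sketch of that division of labor (the documents and terms here are invented for illustration), the expensive work of gathering and indexing happens offline, and answering a request is reduced to a lookup over the prebuilt index rather than a crawl of every source on the network:

    from collections import defaultdict

    # Offline step: gather documents ahead of time and build an inverted index.
    documents = {
        "fare-001": "nonstop JFK CDG thursday evening",
        "fare-002": "one stop JFK CDG friday morning",
        "fare-003": "nonstop ORD PIA friday morning",
    }

    index = defaultdict(set)
    for doc_id, text in documents.items():
        for word in text.split():
            index[word].add(doc_id)

    # Online step: a user query is just set intersection over the prebuilt index.
    def search(*terms):
        results = [index[t] for t in terms]
        return set.intersection(*results) if results else set()

    print(search("nonstop", "friday"))   # {'fare-003'}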

B.F.W.s let computers do what computers are good at (gathering, indexing) and people do what people are good at (querying, deciding).

Propaganda to the contrary, when given a result set of sufficiently narrow range (a dozen items, give or take), humans are far better at choosing between different options than agents are. B.F.W.s provide the appropriate division of labor, letting the machine do the coarse-grained sorting, which has mostly to do with excluding the worst options, while letting the humans make the fine-grained choices at the end.

B.F.W.s make markets

This is the biggest advantage of B.F.W.s over agents — databases open to heterogeneous requests are markets for information. Information about supply and demand is handled at the same time, and the transaction takes place as close to real time as database processing plus network latency can allow.

For the next few years, B.F.W.s are going to be a growth area. They solve the problems previously thought to be in the agents' domain, and they solve them better than agents ever could. Where agents assume a human in the center, facing outward to a heterogeneous collection of data which can be gathered asynchronously, B.F.W.s make the assumption that is more in line with markets (and reality) – a source of data (a market, really) in the center, with a collection of humans facing inwards and making requests in real time. Until someone finds a better method of matching supply with demand than real-time markets, B.F.W.s are a better answer than agents every time.

Pretend vs. Real Economy

First published in FEED, 06/99.

The Internet happened to Merrill Lynch last week, and it cost them a couple billion dollars — when Merrill announced its plans to open an online brokerage after years of deriding the idea, its stock price promptly fell by a tenth, wiping out $2 billion in its market capitalization. The internet’s been happening like that to a lot of companies lately — Barnes and Noble’s internet stock is well below its recent launch price, Barry Diller’s company had to drop its Lycos acquisition because of damage to the stock prices of both companies, and both Borders and Compaq dumped their CEOs after it became clear that they were losing internet market share. In all of these cases, those involved learned the hard way that the internet is a destroyer of net value for traditional businesses because the internet economy is fundamentally at odds with the market for internet stocks.

The internet that the stock market has been so in love with (call it the “Pretend Internet” for short) is all upside — it enables companies to cut costs and compete without respect to geography. The internet that affects the way existing goods and services are sold, on the other hand (call it the “Real Internet”), forces companies to cut profit margins, and exposes them to competitors without respect to geography. On the Pretend Internet, new products will pave the way for enormous profitability arising from unspecified revenue streams. Meanwhile, on the Real Internet, prices have fallen and they can’t get up. There is a rift here, and its fault line appears wherever offline companies like Merrill tie their stock to their internet offerings. Merrill currently pockets a hundred bucks every time it executes a trade, and when investors see that Merrill online is only charging $30 a trade, they see a serious loss of revenue. When they go on to notice that $30 is something like three times the going rate for an internet stock trade, they see more than loss of revenue, they see loss of value. When a company can cut its prices 70% and still be three times as expensive as its competitors, something has to give. Usually that is the company’s stock price.

The internet is the locus of the future economy, and its effect is the wholesale transfer of information and choice (read: power and leverage) from producer to consumer. Producers (and the stock market) prefer one-of-a-kind businesses that can force their customers to accept continual price increases for the same products. Consumers, on the other hand, prefer commodity businesses where prices start low and keep falling. On the internet, consumers have the upper hand, and as a result, anybody who profited from offline inefficiencies — it used to be hard work to distribute new information to thousands of people every day, for example — is going to see much of their revenue destroyed with no immediate replacement in sight.

This is not to say that the internet produces no new value — on the contrary, it produces enormous value every day. It's just that most of the value is concentrated in the hands of the consumer. Every time someone uses the net to shop on price (cars, plane tickets, computers, stock trades), the money they didn't spend is now available for other things. The economy grows even as profit margins shrink. In the end, this is what Merrill's missing market cap tells us — the internet is now a necessity, but there's no way to use the internet without embracing consumer power, and any business which profits from inefficiency is going to find this embrace more constricting than comforting. The effects of easy price comparison and global reach are going to wring inefficiency (read: profits) out of the economy like a damp dishrag, and as the market comes to terms with this equation between consumer power and lower profit margins, $2 billion of missing value is going to seem like a drop in the bucket.

Who Are You Paying When You’re Paying Attention?

First published in ACM, 06/99.

Two columns ago, in "Help, the Price of Information Has Fallen and It Can't Get Up", I argued that traditional pricing models for informational goods (goods that can theoretically be transmitted as pure data – plane tickets, stock quotes, classified ads) fall apart on the net because so much of what's actually being paid for when this data is distributed is not the content itself, but its packaging, storage and transportation. This content is distributed either as physical packages, like books or newspapers, or on closed (pronounced 'expensive') networks, like Lexis/Nexis or stock tickers, and its cost reflects both these production and distribution expenses and the scarcity that is created when only a few companies can afford to produce and distribute said content.

The net destroys both those effects, first by removing the need to print and distribute physical objects (online newspapers, e-tickets and electronic greeting 'cards' are all effortless to distribute relative to their physical counterparts) and second by removing many of the barriers to distribution (only a company with access to a printing press can sell classified ads offline, but on the network all it takes is a well-trafficked site), so that many more companies can compete with one another.

The net effect of all this, pun intended, is to remove the ability to charge direct user fees for many kinds of online content which people are willing to shell out for offline. This does not mean that the net is valueless, however, or that users can’t be asked to pay for content delivered over the Internet. In fact, most users willingly pay for content now. The only hitch is that what they’re paying isn’t money. They’re paying attention. 

THE CURRENCY EXCHANGE MODEL

Much of the current debate surrounding charging user fees on the Internet assumes that content made available over the network follows (or should follow) the model used in the print world – ask users to pay directly for some physical object which contains the content. In some cases, the whole cost of the object is borne by the users, as with books, and in other cases users are simply subsidizing the part of the cost not paid for by advertisements, as with newspapers and magazines. There is, however, another model, one more in line with the things the net does well, where the user pays no direct fees but the providers of the content still get paid – the television model. 

TV networks are like those currency exchange booths for tourists. People pay attention to the TV, and the networks collect this attention and convert it into money at agreed upon rates by supplying it in bulk to their advertisers, generally by calculating the cost to the advertiser of reaching a thousand viewers. The user exchanges their attention in return for the content, and the TV networks exchange this attention for income. These exchange rates rise and fall just like currency markets, based on the perceived value of audience attention and the amount of available cash from the advertiser. 
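
The conversion itself is ordinary CPM arithmetic (the cost to an advertiser of reaching a thousand viewers); a short sketch with made-up numbers shows the exchange at work:

    cpm_dollars = 12.00          # hypothetical agreed-upon rate per thousand viewers
    viewers = 2_500_000          # audience paying attention to a given program

    revenue = viewers / 1000 * cpm_dollars
    print(f"${revenue:,.2f}")    # $30,000.00 of attention converted into income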

This model, which generates income by making content widely available over open networks without charging user fees, is usually called 'ad-supported content', and it is currently very much in disfavor on the Internet. I believe, however, that not only can ad-supported content work on the Internet, it can't not work. Its success is guaranteed by the net's very makeup – the net is simply too good at gathering communities of interest, too good at freely distributing content, and too lousy at keeping anything locked inside subscription networks, for it to fail. Like TV, the net is better at getting people to pay attention than anything else.

OK SHERLOCK, SO IF THE IDEA OF MAKING MONEY ON THE INTERNET BY CONVERTING ATTENTION INTO INCOME IS SO BRILLIANT, HOW COME EVERYONE ELSE THINKS YOU’RE WRONG?

It's a question of scale and time horizons.

One of the reasons for the skepticism about applying the TV model to the Internet is the enormous gulf between the two media. This is reflected in both the relative sizes of their audiences and the incomes of those businesses – TV is the quintessential mass medium, commanding tens of millions more viewers than the net does. TV dwarfs the net in both popularity and income. 

Skeptics eyeing the new media landscape often ask “The Internet is fine as a toy, but when will it be like TV?” By this they generally mean ‘When will the net have a TV-style audience with TV-style profits?’ The question “When will the net be like TV” is easy to answer – ‘Never’. The more interesting question is when will TV be like the net, and the answer is “Sooner than you think”. 

A BRIEF DIGRESSION INTO THE BAD OLD DAYS OF NETWORK TV

Many people have written about the differences between the net and television, usually focussing on the difference between broadcast models and packet switched models like multicasting and narrowcasting, but these analyses, while important, overlook one of the principal differences between the two media. The thing that turned TV into the behemoth we know today isn’t broadcast technology but scarcity. 

From the mid-1950s to the mid-1980s, the US national TV networks operated at an artificially high profit. Because the airwaves were deemed a public good, their use was heavily regulated by the FCC, and as a consequence only three companies got to play on a national level. With this FCC-managed scarcity in place, the law of supply and demand worked in the TV networks' favor in ways that most industries can only dream of – they had their own private, government-created and government-regulated cartel.

It is difficult to overstate the effect this had on the medium. With just three players, a TV show of merely average popularity would get a third of the available audience, so all the networks were locked in a decades-long three-way race for the attention of the 'average' viewer. Any business which can get the attention of a third of its 100 million+ strong audience by producing a run-of-the-mill product, while being freed from any other sort of competition by the government, has a license to print money and a barrel of free ink.

SO WHAT HAPPENED?

Cable happened – the TV universe has been slowly fracturing for the last 20 years or so, with the last 5 years seeing especially sharp movement. With growing competition from cable (and later satellite, microwave, a 4th US TV network, and most recently the Internet draining TV watching time), the ability of television to command a vast audience with average work has suffered badly. The two most popular US shows of this year each struggle to get the attention of a fifth of the possible viewers, called a '20 share' in TV parlance, where 20 is the percentage of the possible audience tuning in.

The TV networks used to cancel shows with a 20 share, and now that's the best they can hope for from their most popular shows, and it's only going to get worse. As you might imagine, this has played hell with the attention-to-cash conversion machine. When the goal was creating a multiplicity of shows for the 'average' viewer, pure volume was good, but in the days of the Wind-Chill Channel and the Abe Vigoda Channel, the networks have had to turn to audience segmentation, counting not just total numbers but the numbers of women, or teenagers, or Californians, or gardeners, who are watching certain programs.

The TV world has gone from three channels of pure mass entertainment to tens or even hundreds of interest-specific channels, with attention being converted to cash based not solely on the total number of people they attract, but also on how many people with specific interests, or needs, or characteristics they reach.

Starting to sound a bit like the Web, isn’t it? 

THE TV PEOPLE ARE GOING TO RUE THE DAY THEY EVER HEARD THE WORD ‘DIGITAL’

All this is bad enough from the TV networks' point of view, but it's a mere scherzo compared to the coming effects of digitality. As I said in an earlier column, apropos CD-ROMs, "Information has been decoupled from objects. Forever.", and this is starting to be true of information and any form of delivery. A TV is like a book in that it is both the mechanism of distribution and of display – the receiver, decoder and screen travel together. Once television becomes digital, this is over, as any digital content can be delivered over any digital medium. "I just saw an amazing thing on 60 Minutes – here, I'll mail it to you", "What's the URL for ER again?", "Once everybody's here, we'll start streaming Titanic". Digital Baywatch plus frame relay is the end of 'appointment TV'.

Now I am not saying that the net will surpass TV in size of audience or income anytime soon, or that the net's structure, as is, is suitable for TV content, as is. I am saying that the net's method of turning attention into income, by letting audience members select what they're interested in and when, where and how to view it, is superior to TV's, and I am saying that as the net's bandwidth and quality of service increase and television digitizes, many of the advantages TV had move over to the network.

The Internet is a massive medium, but it is not a mass medium, and this gives it an edge as the scarcity that TV has lived on begins to seriously erode. The fluidity with which the net apportions content to those interested in it without wasting the time of those not interested in it makes it much more suited in the long run for competing for attention in the increasingly fractured environment for television programming, or for any content delivery for that matter. 

In fact, most of the experiments with TV in the last decade – high-definition digital content, interactive shopping and gaming, community organization, and the evergreen ‘video on demand’ – are all things that can be better accomplished by a high bandwidth packet switched network than by traditional TV broadcast signals. 

In the same way that AT&T held onto over 2/3rds of its long-distance market for a decade after the breakup only to see it quickly fall below 50% in the last three years, the big 3 TV networks have been coasting on that same kind of inertia. Only recently, prodded by cable and the net, is the sense memory of scarcity starting to fade, and in its place is arising a welter of competition for attention, one that the Internet is poised to profit from enormously. A publicly accessible two-way network that can accommodate both push and pull and can transmit digital content with little regard to protocol has a lot of advantages over TV as an 'attention to income' converter, and in the next few years those advantages will make themselves felt. I'm not going to bet on when overall net income surpasses overall TV income, but in an arena where paying attention is the coin of the realm, the net has a natural edge, and I feel confident in predicting that revenues from content will continue to double annually on the net for the foreseeable future, while network TV will begin to stagnate, caught flat in a future that looks more like the Internet than it does like network TV.

An Open Letter to Jakob Nielsen

First published in ACM, 06/99.

[For those not subscribing to CACM, Jakob Nielsen and I have come down on opposite sides of a usability debate. Jakob believes that the prevalence of bad design on the Web is an indication that the current method of designing Web sites is not working and should be replaced or augmented with a single set of design conventions. I believe that the Web is an adaptive system and that the prevalence of bad design is how evolving systems work.

Jakob’s ideas are laid out in “User Interface Directions for the Web”, CACM, January 1999.

My ideas are laid out in “View Source… Lessons from the Web’s Massively Parallel Development”, networker, December 1998, and http://www.shirky.com/writings/view-source

Further dialogue between the two of us is in the Letters section of the March 1999 CACM.]

Jakob,

I read your response to my CACM letter with great interest, and while I still disagree, I think I better understand the disagreement, and will try to set out my side of the argument in this letter. Let me preface all of this by noting what we agree on: the Web is host to some hideous dreck; things would be better for users if Web designers made usability more of a priority; and there are some basics of interface usability that one violates at one's peril.

Where we disagree, however, is on both attitude and method – for you, every Web site is a piece of software first and foremost, and therefore in need of a uniform set of UI conventions, while for me, a Web site's function is something determined only by its designers and users – function is as function does. I think it presumptuous to force a third party into that equation, no matter how much more "efficient" that would make things.

You despair of any systemic fix for poor design and so want some sort of enforcement mechanism for these external standards. I believe that the Web is an adaptive system, and that what you deride as “Digital Darwinism” is what I would call a “Market for Quality”. Most importantly, I believe that a market for quality is in fact the correct solution for creating steady improvements in the Web’s usability.

Let me quickly address the least interesting objection to your idea: it is unworkable. Your plan requires both centralization and force of a sort it is impossible to achieve on the Internet. You say

“…to ensure interaction consistency across all sites it will be necessary to promote a single set of design conventions.”

and

“…the main problem lies in getting Web sites to actually obey any usability rules.”

but you never address who you are proposing to put in the driver's seat – "it will be necessary" for whom? "[T]he main problem" is a problem for whom? Not for me – I am relieved that there is no authority who can make web site designers "obey" anything other than httpd header validity. That strikes me as the Web's saving grace. With the Web poised to go from 4 million sites to 100 million in the next few years, as you note in your article, the idea of enforcing usability rules will never get past the "thought experiment" stage.

However, as you are not merely a man of action but also a theorist, I want to address why I think enforced conformity to usability standards is wrong, even in theory. My objections break out into three rough categories: creating a market for usability is better than central standards for reasons of efficiency, innovation, and morality.

EFFICIENCY

In your letter, you say “Why go all the way to shipping products only to have to throw away 99% of the work?” I assume that you meant this as a rhetorical question – after all, how could anybody be stupid enough to suggest that a 1% solution is good enough? The Nielsen Solution – redesign for everybody not presently complying with a “single set of design conventions” – takes care of 100% of the problem, while the Shirky Solution, let’s call it evolutionary progress for the top 1% of sites, well what could that possibly get you?

1% gets you a surprising amount, actually, if it’s properly arranged.

If only the top 1% of Web sites by traffic make usability a priority, those sites would nevertheless account for 70% of all Web traffic. You will recognize this as your own conclusion, of course, as you have suggested on UseIt (http://www.useit.com/alertbox/9704b.html) that Web site traffic is roughly a Zipf distribution, where the thousandth most popular site only sees 1/1000th the traffic of the most popular site. This in turn means that a very small percentage of the Web gets the lion's share of the traffic. If you are right, then you do not need good design on 70% of the web sites to cover 70% of user traffic, you only need good design on the top 1% of web sites to reach 70% of the traffic.
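
The arithmetic behind that figure is easy to reproduce: if the site of rank k gets traffic proportional to 1/k, the share held by the top slice is a ratio of harmonic sums. A quick sketch in Python, using the roughly 4 million sites mentioned above:

    import math

    EULER_GAMMA = 0.5772156649

    def zipf_share_of_top(n_sites: int, fraction: float) -> float:
        """Share of total traffic held by the top `fraction` of sites when the
        k-th most popular site gets traffic proportional to 1/k."""
        def harmonic(n):                      # H_n is approximately ln(n) + gamma
            return math.log(n) + EULER_GAMMA
        return harmonic(int(n_sites * fraction)) / harmonic(n_sites)

    print(f"{zipf_share_of_top(4_000_000, 0.01):.0%}")   # ~71% of traffic from the top 1%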

By ignoring the mass of low traffic sites and instead concentrating on making the popular sites more usable and the usable sites more popular, a market for quality is a more efficient way of improving the Web than trying to raise quality across the board without regard to user interest.

INNOVATION

A market for usability is also better for fostering innovation. As I argue in “View Source…”, good tools let designers do stupid things. This saves overhead on the design of the tools, since they only need to concern themselves with structural validity, and can avoid building in complex heuristics of quality or style. In HTML’s case, if it renders, it’s right, even if it’s blinking yellow text on a leopard-skin background. (This is roughly the equivalent of letting the reference C compiler settle arguments about syntax – if it compiles, it’s correct.)

Consider the use of HTML headers and tables as layout tools. When these practices appeared, in 1994 and 1995 respectively, they infuriated partisans of the SGML ‘descriptive language’ camp who insisted that HTML documents should contain only semantic descriptions and remain absolutely mute about layout. This in turn led to white-hot flamefests about how HTML ‘should’ and ‘shouldn’t’ be used.

It seems obvious from the hindsight of 1999, but it is worth repeating: Everyone who argued that HTML shouldn’t be used as a layout language was wrong. The narrowly correct answer, that SGML was designed as a semantic language, lost out to the need of designers to work visually, and they were able to override partisan notions of correctness to get there. The wrong answer from a standards point of view was nevertheless the right thing to do.

Enforcing any set of rules limits the universe of possibilities, no matter how well intentioned or universal those rules seem. Rules which raise the average quality by limiting the worst excesses risk ruling out the most innovative experiments as well by insisting on a set of givens. Letting the market separate good from bad leaves the door open to these innovations.

MORALITY

This is the most serious objection to your suggestion that standards of usability should be enforced. A web site is an implicit contract between two and only two parties – designer and user. No one – not you, not Don Norman, not anyone – has any right to enter into that contract without being invited in, no matter how valuable you think your contribution might be.

IN PRAISE OF EVOLVEABLE SYSTEMS, REDUX

I believe that the Web is already a market for quality – switching costs are low, word of mouth effects are both large and swift, and redesign is relatively painless compared to most software interfaces. If I design a usable site, I will get more repeat business than if I don’t. If my competitor launches a more usable site, it’s only a click
away. No one who has seen the development of Barnes and Noble and Amazon or Travelocity and Expedia can doubt that competition helps keep sites focussed on improving usability. Nevertheless, as I am a man of action and not just a theorist, I am going to suggest a practical way to improve the workings of this market for usability –
let's call it usable.lycos.com.

The way to allocate resources efficiently in a market with many sellers (sites) and many buyers (users) is competition, not standards. Other things being equal, users will prefer a more usable site over its less usable competition. Meanwhile, site owners prefer more traffic to less, and more repeat visits to fewer. Imagine a search engine that weighted usability in its rankings, where users knew that a good way to find a usable site was by checking the "Weight Results by Usability" box and owners knew that a site could rise in the list by offering a good user experience. In this environment, the premium for good UI would align the interests of buyers and sellers around increasing quality. There is no Commissar of Web Design here, no International Bureau of Markup Standards, just an implicit and ongoing compact between users and designers that improvement will be rewarded.
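
As a sketch of the mechanism (the scores, fields and blending weight are invented for illustration), the engine simply mixes a usability score into its ordinary relevance ranking whenever the user checks the box, so a site that invests in usability rises in the list:

    def rank_results(results, weight_usability=False, blend=0.3):
        """Order search results by relevance, optionally blended with usability.

        `results` is a list of dicts with 'url', 'relevance' and 'usability' scores
        in [0, 1]; `blend` is how much weight usability gets when the user checks
        the "Weight Results by Usability" box.
        """
        def score(r):
            if weight_usability:
                return (1 - blend) * r["relevance"] + blend * r["usability"]
            return r["relevance"]
        return sorted(results, key=score, reverse=True)

    results = [
        {"url": "cluttered-but-relevant.example", "relevance": 0.90, "usability": 0.20},
        {"url": "clean-and-relevant.example",     "relevance": 0.85, "usability": 0.95},
    ]
    print([r["url"] for r in rank_results(results)])                         # relevance only
    print([r["url"] for r in rank_results(results, weight_usability=True)])  # usability counts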

The same effect could be created in other ways – a Nielsen/Norman "Seal of Approval", a "Usability" category at the various Web awards ceremonies, a "Usable Web Ring". As anyone who has seen "Hamster Dance" or an emailed list of office jokes can tell you, the net is the most efficient medium the world has ever known for turning user preference into widespread awareness. Improving the market for quality simply harnesses that effect.

Web environments like usable.lycos.com, with all parties maximizing preferences, will be more efficient and less innovation-dampening than the centralized control which would be necessary to enforce a single set of standards. Furthermore, the virtues of such a decentralized system mirror the virtues of the Internet itself rather than fighting them. I once did a usability analysis on a commerce site which had fairly ugly graphics but a good UI nevertheless. When I queried the site's owner about his design process, he said "I didn't know anything when I started out, so I just put up a site with an email link on every page, and my customers would mail me suggestions."

The Web is a marvelous thing, as is. There is a dream dreamed by engineers and designers everywhere that they will someday be put in charge, and that their rigorous vision for the world will finally overcome the mediocrity around them once and for all. Resist this idea – the world does not work that way, and the dream of centralized control is only pleasant for the dreamer. The Internet’s ability to be adapted slowly, imperfectly, and in many conflicting directions all at once is precisely what makes it so powerful, and the Web has taken those advantages and opened them up to people who don’t know source code from bar code by creating a simple interface design language.

The obvious short-term effect of this has been the creation of an ocean of bad design, but the long-term effects will be different – over time bad sites die and good sites get better, so while the short-term gains promised by enforced standards seem tempting, we would do well to remember that there is rarely any profit in betting against the power of the marketplace in the long haul.