Citizens and Customers

6/15/1999

All countries are different; all customers are the same. That’s the lesson to be learned from the meteoric rise of the ISP Freeserve, and the subsequent reshaping of the UK internet industry. Prior to Freeserve, British adoption of the internet was fairly sluggish, but Freeserve figured out how to offer free internet access by subsidizing its service with for-fee tech support and a cut of local call revenues, and in the six months since it launched (and spawned over 50 copycat services), the UK user base has grown from 6 to 10 million. Its main advantage over the other major ISP player, British Telecom, was the contempt BT had for the British public.

Wherever technology is concerned, there are a host of nationalistic prejudices: the Americans are early adopters, for example, while the British are a nation of shopkeepers, suspicious of technology and fearful of change. BT held this latter view, behaving as if Britain’s slow adoption of the internet was just another aspect of a national reticence about technology, and therefore treating the ISP business as an expensive service for elites rather than trying to roll it out cheaply to the masses.

This idea of national differences in the use of the internet is everywhere these days, but it confuses content with form. There will be Czech content on the net, but there won’t be a “Czech Way” of using the network, or a “Chinese Way” or a “Chilean Way.” The internet’s content is culturally determined, but its form is shaped by economics. Once a country gets sufficiently wired, the economic force of the internet has little to do with ethnicity or national sentiment and much to do with the unsurprising fact that given two offers of equal value, people all over the world will take the cheaper one, no matter who is offering it to them.

Unsurprising to consumers, that is; businesses all over the world are desperate to convince themselves that national identity matters more than quality and price. (Remember the “Buy American” campaign that tried to get Americans to pay more for inferior cars? Or the suggestion that corrupt business practices were part of “Asian Values”?) Freeserve’s genius was not to be swayed by the caricature of stodgy, technophobic Brits. British reticence about the internet turned out to be about price and not about national character at all — now that internet access has come in line with ordinary incomes, the British have been as keen to get connected as Americans are.

Patriotism is the last refuge of an unprofitable business. We’ve seen the internet take
off in enough countries to have some idea of the necessary preconditions: when a
literate population has phones at home, cheap PCs, and competitive telecom businesses, the value of connecting to the internet rises continually while the cost of doing so falls. In these countries, any business that expects national identity to provide some defense against competition is merely using a flag as a fig leaf. In the end, countries with wired populations will see national differences reduced in importance to the level of the Local Dish and Colorful Garb, because once a country passes some tipping point, its population starts behaving less like citizens of a particular place and more like global customers, making the same demands made by customers everywhere. Businesses that fill those demands, regardless of nationality, will thrive, and businesses that ignore those demands, regardless of nationality, will die.

Why Smart Agents Are A Dumb Idea

Smart agents are a dumb idea. Like several of the other undead ideas floating around (e.g., Digital Cash, Videophones), the idea of having autonomous digital agents that scour the net acting on your behalf seems so attractive that, despite a string of failures, agents enjoy periodic resurgences of interest. A new such surge seems to be beginning, with another round of stories in the press about how autonomous agents equipped with instructions from you (and your credit card number) are going to shop for your CDs, buy and sell your stocks, and arrange your travel plans. The primary thing smart agents seem to have going for them is the ‘cool’ factor (as in ‘This will work because it would be cool if it did.’). The primary thing they have going against them is that they do not work and they never will, and not just because they are impractical, but because they take on the wrong problems, and they solve them in the wrong order.

Smart agents — web crawling agents as opposed to stored preferences in
a database — have three things going against them:

  • Agents’ performance degrades with network growth
  • Agents ask people to do what machines are good at (waiting) and machines to do what people are good at (thinking).
  • Agents make the market for information less efficient rather than more

These three barriers render the idea of agents impractical for almost all of the duties they are supposedly going to perform.

Consider these problems in context; the classic scenario for the mobile agent is the business trip. You have business in Paris (or, more likely, Peoria) and you need a flight, a hotel and a rental car. You instruct your agent about your dates, preferences, and price limits, and it scours the network for you, putting together the ideal package based on its interpretation of your instructions. Once it has secured this package, it makes the purchases on your behalf, and presents you with the completed travel package, dates, times and confirmation numbers in one fell swoop.

A scenario like this requires a good deal of hand-waving to make it seem viable, to say nothing of worthwhile, because it assumes that the agent’s time is more valuable than your time. Place that scenario in a real-world context – your boss tells you that you need to be in Paris (Peoria) at the end of the week, and could you make the arrangements before you go home? You fire up your trusty agent and run into the following problems:

  • Agents’ performance degrades with network growth

Upon being given its charge, the agent needs to go out and query all the available sources of travel information, issue the relevant query, digest the returned information and then run the necessary weighting of the results in real time. This is like going to Lycos and asking it to find all the resources related to Unix and then having it start indexing the Web. Forget leaving your computer to make a pot of coffee – you could leave your computer and make a ship in a bottle.

One of the critical weaknesses in the idea of mobile agents is that the time taken to run a query improves with processor speed (~2x every 18 months) but degrades with the amount of data to be searched (~2x every 4 months). A back-of-the-envelope calculation comparing Moore’s Law with traffic patterns at public internet interconnect points suggests that an autonomous agent’s performance on real-time requests should suffer by roughly an order of magnitude annually. Even if you make optimistic assumptions about algorithm design and multi-threading and assume that data sources are always accessible, mere network latency across an exploding number of sources prohibits real-time queries. The right way to handle this problem is the mundane way – gather and index the material to be queried in advance.
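
To make the arithmetic concrete, here is a minimal sketch in Python of that back-of-the-envelope calculation, using only the essay’s stated assumptions (processing power doubling every 18 months, data doubling every 4 months). The raw ratio works out to roughly 5x per year; the latency and accessibility problems described above are what push the practical figure toward an order of magnitude.

```python
# Back-of-the-envelope check, assuming query time scales with
# (data to be searched) / (processing power).

def annual_growth(doubling_months: float) -> float:
    """Growth factor over 12 months for a given doubling period."""
    return 2 ** (12 / doubling_months)

cpu_growth = annual_growth(18)    # ~1.59x per year (Moore's Law)
data_growth = annual_growth(4)    # ~8x per year (interconnect traffic)

# Real-time query performance degrades by the ratio of the two:
slowdown = data_growth / cpu_growth   # ~5x per year, before latency
print(f"CPU: {cpu_growth:.2f}x/yr, data: {data_growth:.1f}x/yr, "
      f"net slowdown: ~{slowdown:.1f}x/yr")
```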

  • Agents ask people to do what machines are good at (waiting) and machines to do what people are good at (thinking).

The usual answer to this problem with real-time queries is to assume that people are happy to ask a question hours or days in advance of needing the answer, a scenario that occurs with a frequency of approximately never. People ask questions when they want to know the answer – if they wanted the answer later they would have asked the
question later. Agents thus reverse the appropriate division of labor between humans and computers — in the agent scenario above, humans do the waiting while agents do the thinking. The humans are required to state the problem in terms rigorous enough to be acted on by a machine, and be willing to wait for the answer while the machine applies the heuristics. This is in keeping with the Central Dream of AI, namely that humans can be relegated to a check-off function after the machines have done the thinking.

As attractive as this dream might be, it is far from the realm of the possible. When you can have an agent which understands why 8 hours between trains in Paris is better than 4 hours between trains in Frankfurt but 8 hours in Peoria is worse than 4 hours in Fargo, then you can let it do all the work for you, but until then the final step in the process is going to take place in your neural network, not your agent’s.

  • Agents make the market for information less efficient

This is the biggest problem of all – agents rely on a wrong abstraction of the world. In the agent’s world, its particular owner is at the center, there are a huge number of heterogeneous data sources scattered all around, and one agent makes thousands of queries outwards to perform one task. This ignores the fact that the data is neither static nor insensitive to the agent’s request. The agent is not just importing information about supply, it is exporting information about demand at the same time, thus changing the very market conditions it is trying to record. The price of a Beanie Baby rises with demand, since Beanie Babies are an (artificially) limited resource, while the price of bestsellers falls with demand, since bookstores can charge lower prices in return for higher volume. Airline prices are updated thousands of times a day; currency exchange rates are updated tens of thousands of times a day. Net-crawling agents are completely unable to deal with markets for information like these. These kinds of problems require the structured data to be at the center, with a huge number of heterogeneous queries made inwards towards the centralized data, so that information about supply and demand is all captured in one place, something no autonomous agent can do.

Enter The Big Fat Webserver

So much of the history of the Internet, and particularly of the Web, has been about decentralization that the idea of distributing processes has become almost reflexive. Because the first decade of the Web has relied on PCs, which are by their very nature decentralized, it is hard to see that much of the Web’s effect has been in the opposite direction, towards centralization, and centralization of a particular kind – market-making.

The alternative to the autonomous mobile agent is the Big Fat Webserver, and while its superiority as a solution has often been overlooked next to the sexier idea of smart agents, B.F.W.s are A Good Thing for the same reasons markets are A Good Thing – they are the best way of matching supply with demand in real time. What you would really do when Paris (Peoria) beckons is go to Travelocity or some similar B.F.W. for travel planning. Travelocity runs on that unsexiest of hardware (the mainframe) in that unsexiest of architectures (centralized), and because of that it works well everywhere the agent scenario works badly. You log into Travelocity, ask it a question about plane flights, get an answer right then, and decide.

B.F.W.s’ performance scales with database size, not network size

The most important advantage of B.F.W.s over agents is that B.F.W.s acquire and structure the data before a request comes in. Net-crawling agents are asked to identify sources, gather data, and then query the results all at once, even though these functions require completely different strategies. By gathering and structuring data in advance, B.F.W.s remove the two biggest obstacles to agent performance before any request is issued.
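
As a toy sketch of the architectural difference (every name and data source below is an invented stand-in, not any real service’s API): the agent pays the full gathering cost on every request, while the B.F.W. pays it once, in advance, and answers each request from its index.

```python
import time

# 100 hypothetical "fare" sources, each costing one simulated
# network round-trip to read.
SOURCES = {f"source-{i}": [f"fare-{i}-{j}" for j in range(3)]
           for i in range(100)}

def crawl(source: str) -> list[str]:
    time.sleep(0.01)              # stand-in for per-source network latency
    return SOURCES[source]

def agent_query(want: str) -> list[str]:
    # Agent: identify, gather, and query at request time, every time.
    return [fare for source in SOURCES
            for fare in crawl(source) if want in fare]

# B.F.W.: gather and index once, before any request arrives...
index = [fare for source in SOURCES for fare in crawl(source)]

def bfw_query(want: str) -> list[str]:
    # ...so the query itself touches no network at all.
    return [fare for fare in index if want in fare]
```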

B.F.W.s let computers do what computers are good at (gathering, indexing) and people do what people are good at (querying, deciding).

Propaganda to the contrary notwithstanding, when given a result set of sufficiently narrow range (a dozen items, give or take), humans are far better at choosing between different options than agents are. B.F.W.s provide the appropriate division of labor, letting the machine do the coarse-grained sorting, which has mostly to do with excluding the worst options, while letting the humans make the fine-grained choices at the end.

B.F.W.s make markets

This is the biggest advantage of B.F.W.s over agents — databases open to heterogeneous requests are markets for information. Information about supply and demand is handled at the same time, and the transaction takes place as close to real time as database processing plus network latency allow.

For the next few years, B.F.W.s are going to be a growth area. They solve the problems previously thought to be in the agents’ domain, and they solve them better than agents ever could. Where agents assume a human in the center, facing outward to a heterogeneous collection of data which can be gathered asynchronously, B.F.W.s make the assumption that is more in line with markets (and reality) – a source of data (a market, really) in the center, with a collection of humans facing inwards and making requests in real time. Until someone finds a better method of matching supply with demand than real-time markets, B.F.W.s are a better answer than agents every time.

Pretend vs. Real Economy

First published in FEED, 06/99.

The Internet happened to Merrill Lynch last week, and it cost them a couple billion dollars — when Merrill announced its plans to open an online brokerage after years of deriding the idea, its stock price promptly fell by a tenth, wiping out $2 billion in its market capitalization. The internet’s been happening like that to a lot of companies lately — Barnes and Noble’s internet stock is well below its recent launch price, Barry Diller’s company had to drop its Lycos acquisition because of damage to the stock prices of both companies, and both Borders and Compaq dumped their CEOs after it became clear that they were losing internet market share. In all of these cases, those involved learned the hard way that the internet is a destroyer of net value for traditional businesses because the internet economy is fundamentally at odds with the market for internet stocks.

The internet that the stock market has been so in love with (call it the “Pretend Internet” for short) is all upside — it enables companies to cut costs and compete without respect to geography. The internet that affects the way existing goods and services are sold, on the other hand (call it the “Real Internet”), forces companies to cut profit margins, and exposes them to competitors without respect to geography. On the Pretend Internet, new products will pave the way for enormous profitability arising from unspecified revenue streams. Meanwhile, on the Real Internet, prices have fallen and they can’t get up. There is a rift here, and its fault line appears wherever offline companies like Merrill tie their stock to their internet offerings. Merrill currently pockets a hundred bucks every time it executes a trade, and when investors see that Merrill online is only charging $30 a trade, they see a serious loss of revenue. When they go on to notice that $30 is something like three times the going rate for an internet stock trade, they see more than loss of revenue, they see loss of value. When a company can cut its prices 70% and still be three times as expensive as its competitors, something has to give. Usually that is the company’s stock price.

The internet is the locus of the future economy, and its effect is the wholesale transfer of information and choice (read: power and leverage) from producer to consumer. Producers (and the stock market) prefer one-of-a-kind businesses that can force their customers to accept continual price increases for the same products. Consumers, on the other hand, prefer commodity businesses where prices start low and keep falling. On the internet, consumers have the upper hand, and as a result, anybody who profited from offline inefficiencies — it used to be hard work to distribute new information to thousands of people every day, for example — is going to see much of their revenue destroyed with no immediate replacement in sight.

This is not to say that the internet produces no new value — on the contrary, it produces enormous value every day. It’s just that most of the value is concentrated in the hands of the consumer. Every time someone uses the net to shop on price (cars, plane tickets, computers, stock trades) the money they didn’t spend is now available for other things. The economy grows even as profit margins shrink. In the end, this is what Merrill’s missing market cap tells us — the internet is now a necessity, but there’s no way to use the internet without embracing consumer power, and any business which profits from inefficiency is going to find this embrace more constricting than comforting. The effects of easy price comparison and global reach are going to wring inefficiency (read: profits) out of the economy like a damp dishrag, and as the market comes to terms with this equation between consumer power and lower profit margins, $2 billion of missing value is going to seem like a drop in the bucket.

Who Are You Paying When You’re Paying Attention?

First published in ACM, 06/99.

Two columns ago, in “Help, the Price of Information Has Fallen and It Can’t Get Up”, I argued that traditional pricing models for informational goods (goods that can theoretically be transmitted as pure data – plane tickets, stock quotes, classified ads) fall apart on the net because so much of what’s actually being paid for when this data is distributed is not the content itself, but its packaging, storage and transportation. This content is distributed either as physical packages, like books or newspapers, or on closed (pronounced ‘expensive’) networks, like Lexis/Nexis or stock tickers, and its cost reflects both these production and distribution expenses and the scarcity that is created when only a few companies can afford to produce and distribute said content.

The net destroys both those effects, first by removing the need to print and distribute physical objects (online newspapers, e-tickets and electronic greeting ‘cards’ are all effortless to distribute relative to their physical counterparts), and second by removing many of the barriers to distribution (only a company with access to a printing press can sell classified ads offline, but on the network all it takes is a well-trafficked site), so that many more companies can compete with one another.

The net effect of all this, pun intended, is to remove the ability to charge direct user fees for many kinds of online content which people are willing to shell out for offline. This does not mean that the net is valueless, however, or that users can’t be asked to pay for content delivered over the Internet. In fact, most users willingly pay for content now. The only hitch is that what they’re paying isn’t money. They’re paying attention. 

THE CURRENCY EXCHANGE MODEL

Much of the current debate surrounding charging user fees on the Internet assumes that content made available over the network follows (or should follow) the model used in the print world – ask users to pay directly for some physical object which contains the content. In some cases, the whole cost of the object is borne by the users, as with books, and in other cases users are simply subsidizing the part of the cost not paid for by advertisements, as with newspapers and magazines. There is, however, another model, one more in line with the things the net does well, where the user pays no direct fees but the providers of the content still get paid – the television model. 

TV networks are like those currency exchange booths for tourists. People pay attention to the TV, and the networks collect this attention and convert it into money at agreed-upon rates by supplying it in bulk to their advertisers, generally by calculating the cost to the advertiser of reaching a thousand viewers. The user exchanges their attention in return for the content, and the TV networks exchange this attention for income. These exchange rates rise and fall just like currency markets, based on the perceived value of audience attention and the amount of available cash from the advertiser.
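
As a toy illustration of that conversion, with invented numbers: the agreed-upon rate is the CPM, the cost to an advertiser of reaching a thousand viewers.

```python
def ad_revenue(viewers: int, cpm_dollars: float) -> float:
    """Convert collected attention into income at a given CPM
    (dollars per thousand viewers)."""
    return viewers / 1000 * cpm_dollars

# 20 million viewers' attention sold at a hypothetical $10 CPM:
print(ad_revenue(20_000_000, 10.0))   # 200000.0 dollars per ad slot
```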

This model, which generates income by making content widely available over open networks without charging user fees, is usually called ‘ad-supported content’, and it is currently very much in disfavor on the Internet. I believe, however, that not only can ad-supported content work on the Internet, it can’t not work. Its success is guaranteed by the net’s very makeup – the net is simply too good at gathering communities of interest, too good at freely distributing content, and too lousy at keeping anything locked inside subscription networks, for it to fail. Like TV, the net is better at getting people to pay attention than anything else.

OK SHERLOCK, SO IF THE IDEA OF MAKING MONEY ON THE INTERNET BY CONVERTING ATTENTION INTO INCOME IS SO BRILLIANT, HOW COME EVERYONE ELSE THINKS YOU’RE WRONG?

It’s a question of scale and time horizons.

One of the reasons for the skepticism about applying the TV model to the Internet is the enormous gulf between the two media, reflected in the relative sizes of their audiences and incomes – TV is the quintessential mass medium, commanding tens of millions more viewers than the net does, and it dwarfs the net in both popularity and income.

Skeptics eyeing the new media landscape often ask, “The Internet is fine as a toy, but when will it be like TV?” By this they generally mean ‘When will the net have a TV-style audience with TV-style profits?’ The question “When will the net be like TV?” is easy to answer – ‘Never’. The more interesting question is when TV will be like the net, and the answer is “Sooner than you think”.

A BRIEF DIGRESSION INTO THE BAD OLD DAYS OF NETWORK TV

Many people have written about the differences between the net and television, usually focusing on the difference between broadcast models and packet-switched models like multicasting and narrowcasting, but these analyses, while important, overlook one of the principal differences between the two media. The thing that turned TV into the behemoth we know today isn’t broadcast technology but scarcity.

From the mid-1950s to the mid-1980s, the US national TV networks operated at an artificially high profit. Because the airwaves were deemed a public good, their use was heavily regulated by the FCC, and as a consequence only three companies got to play on a national level. With this FCC-managed scarcity in place, the law of supply and demand worked in the TV networks’ favor in ways that most industries can only dream of – they had their own private, government-created and government-regulated cartel.

It is difficult to overstate the effect this had on the medium. With just three players, a TV show of merely average popularity would get a third of the available audience, so all the networks were locked in a decades-long three-way race for the attention of the ‘average’ viewer. Any business which can get the attention of a third of its 100-million-plus audience by producing a run-of-the-mill product, while being freed from any other sort of competition by the government, has a license to print money and a barrel of free ink.

SO WHAT HAPPENED?

Cable happened – the TV universe has been slowly fracturing for the last 20 years or so, with the last 5 years seeing especially sharp movement. With growing competition from cable (and later satellite, microwave, a fourth US TV network, and most recently the Internet draining TV-watching time), the ability of television to command a vast audience with average work has suffered badly. The two most popular US shows of this year each struggle to get the attention of a fifth of the possible viewers, called a ‘20 share’ in TV parlance, where 20 is the percentage of the possible audience tuning in.

The TV networks used to cancel shows with a 20 share; now that’s the best they can hope for from their most popular shows, and it’s only going to get worse. As you might imagine, this has played hell with the attention-to-cash conversion machine. When the goal was creating a multiplicity of shows for the ‘average’ viewer, pure volume was good, but in the days of the Wind-Chill Channel and the Abe Vigoda Channel, the networks have had to turn to audience segmentation – not just counting numbers, but counting the numbers of women, or teenagers, or Californians, or gardeners, who are watching certain programs.

The TV world has gone from three channels of pure mass entertainment to tens or even hundreds of interest-specific channels, with attention converted to cash based not solely on the total number of people they attract, but also on how many people with specific interests, needs, or characteristics they reach.

Starting to sound a bit like the Web, isn’t it? 

THE TV PEOPLE ARE GOING TO RUE THE DAY THEY EVER HEARD THE WORD ‘DIGITAL’

All this is bad enough from the TV networks’ point of view, but it’s a mere scherzo compared to the coming effects of digitality. As I said in an earlier column, apropos CD-ROMs, “Information has been decoupled from objects. Forever.”, and this is starting to be true of information and any form of delivery. A TV is like a book in that it is both the mechanism of distribution and of display – the receiver, decoder and screen travel together. Once television becomes digital, this is over, as any digital content can be delivered over any digital medium. “I just saw an amazing thing on 60 Minutes – here, I’ll mail it to you”, “What’s the URL for ER again?”, “Once everybody’s here, we’ll start streaming Titanic”. Digital Baywatch plus frame relay is the end of ‘appointment TV’.

Now I am not saying that the net will surpass TV in size of audience or income anytime soon, or that the net’s structure, as is, is suitable for TV content, as is. I am saying that the net’s method of turning attention into income – letting audience members select what they’re interested in and when, where, and how to view it – is superior to TV’s, and that as the net’s bandwidth and quality of service increase and television digitizes, many of the advantages TV had will move over to the network.

The Internet is a massive medium, but it is not a mass medium, and this gives it an edge as the scarcity that TV has lived on begins to seriously erode. The fluidity with which the net apportions content to those interested in it without wasting the time of those not interested in it makes it much more suited in the long run for competing for attention in the increasingly fractured environment for television programming, or for any content delivery for that matter. 

In fact, most of the experiments with TV in the last decade – high-definition digital content, interactive shopping and gaming, community organization, and the evergreen ‘video on demand’ – are all things that can be better accomplished by a high bandwidth packet switched network than by traditional TV broadcast signals. 

In the same way that AT&T held onto over two-thirds of its long-distance market for a decade after the breakup, only to see it quickly fall below 50% in the last three years, the big three TV networks have been coasting on that same kind of inertia. Only recently, prodded by cable and the net, is the sense memory of scarcity starting to fade, and in its place is arising a welter of competition for attention, one that the Internet is poised to profit from enormously. A publicly accessible two-way network that can accommodate both push and pull and can transmit digital content with little regard to protocol has a lot of advantages over TV as an ‘attention to income’ converter, and in the next few years those advantages will make themselves felt. I’m not going to bet on when overall net income surpasses overall TV income, but in an arena where paying attention is the coin of the realm, the net has a natural edge, and I feel confident in predicting that revenues from content will continue to double annually on the net for the foreseeable future, while network TV will begin to stagnate, caught flat in a future that looks more like the Internet than it does like network TV.