An Open Letter to Microsoft

First published on SAR, 09/99.

Dear Microsoft,

I tried to give you some of my money last weekend and you wouldn’t let me. As this sort of behavior might be bad for your long-term profitability, I thought I’d write and explain how you can fix the problem.

Last Sunday night while visiting friends, I remembered that I was running out of time for a discount plane ticket, so I opened Internet Explorer 3.0, the browser running on my friend’s machine, and went to Expedia to make my purchase. (I trust it will not be lost on you that both IE and Expedia are Microsoft products). After submitting my flight details on Expedia, the results page created so many browser errors that I couldn’t even see what flights were available, much less buy a ticket.

Do you understand that? I wanted to use your product on your service to give you my money, and you wouldn’t let me. Being a good citizen, I wrote to the Customer Service address, expecting no more than a piece of ‘Thanks, we’ll look into it’ mail. What I got instead was this:

“Thank you for contacting Expedia with your concerns. Since Microsoft has upgraded the website, they have also upgraded the version you should use with Expedia. The version will be more user-friendly with Internet Explorer 4.0 and above.”

Now before you think to yourselves, “Exactly right — we want to force people to get new browsers”, let me contextualize the situation for you:

I cannot use IE 3.0 to buy tickets on Expedia, but…

I can use IE 3.0 to buy tickets on Travelocity.
I can use Netscape 3.0 to buy tickets on Expedia.
I can even use Netscape 3.0 to buy tickets on Travelocity.

Let me assure you that I bought my tickets as planned. I just didn’t buy them with you.

I understand from what I read in the papers that your desktop monopoly has atrophied your ability to deal with real competition. Get over it. If you really want to start building customer-pleasing businesses on the Web (and given the experience I had on Expedia, I am not sure you do) then let me clue you in to the awful truths of the medium.

First, you need to understand that the operating system wars are over, and operating systems lost. Mac vs. Windows on PCs? Dead issue. Linux vs. NT? Nobody but geeks knows or cares what a web site is running on. There’s no such thing as a “personal” computer any more – any computer I can get to Amazon on is my computer for as long as my hands are on the keyboard, and any operating system with a browser is fine by me. Intentionally introducing incompatibilities isn’t a way to lock people in any more, it’s just a way to piss us off.

Second, media channels don’t get “upgrades”. You have made a lot of foolish comments about AOL over the years, because you have failed to understand that AOL is a media company, not a software company. They launched AOL 1.0 at about the same time as you launched MS Word 1.0, and in all that time AOL is only up to version 4.0. How many “upgrades” has Word had – a dozen? I’ve lost count. What AOL understands that you don’t is that software is a means and not an end. Now that you are in the media business, you can no longer force people to change software every year.

Third, and this is the most awful truth of all, your customers are the ones with the leverage in this equation, because your competition is just a click away. We get to decide what browser we should use to buy plane tickets, or anything else for that matter, and guess what? The browser we want to use is whatever browser we have whenever we decide we want to buy something. Jeff Bezos understands this – Amazon works with Netscape 1.22. Jerry Yang understands this – there is not a single piece of JavaScript on the Yahoo home page. Maybe somebody over at Microsoft will come to understand this too.

Thinking that you can force us to sit through a multi-hour “upgrade” of a product we aren’t interested in solely for the pleasure of giving you our money is daft. Before you go on to redesign all your web sites to be as error-prone as Expedia, ask yourself whether your customers will prefer spending a couple of hours downloading new software or a couple of seconds clicking over to your competition. Judging from the mail I got from your Customer Service department, the answer to that question might come as a big surprise.

Yours sincerely,

Clay Shirky

Why Stricter Software Patents End Up Benefitting Open Source

First published in FEED, 8/18/1999.

As the software industry cracks down on its customers, the software itself is opening up. Open Source software, the freely available alternative to commercial software, is making inroads in the corporate world because of its superior flexibility, adaptability, and cost. Despite this competition, the commercial software industry has decided that now’s the time to make licensed software less flexible, less adaptable, and above all, more expensive. A proposed new law, called the Uniform Computer Information Transactions Act (UCITA), would give software manufacturers enormous new powers to raise prices and to control their software even after it is in the hands of their customers. By giving the software industry additional leverage over its customers, UCITA will have two principal effects: First, it will turn software from a product you buy into a service you pay for again and again. Second, its restrictions will greatly accelerate the corporate adoption of Open Source software.

All trade groups aspire to become like OPEC, the cartel that jacks up oil prices by controlling supply. The UCITA working group is no exception: UCITA is designed to allow software companies relief from competition (companies could forbid publishing the results of software comparisons), and to artificially limit supply (any company that was acquired by a larger company could be forced to re-license all its software). The most startling part of UCITA, though, is the smugly named “self-help” clause, which would allow a software company to remotely disable software at the customer’s site, even after it has been sold. This clause could be invoked with only 15 days’ notice if a software developer felt its licensing terms were being violated, making the customer guilty until proven innocent. UCITA’s proponents disingenuously suggest that the use of “self-help” would be rare, as it would make customers unhappy — what they are not as quick to point out is that the presence of “self-help” as a credible threat would give software companies permanent leverage over their customers in negotiating all future contracts.

Unfortunately for cartel-minded software firms, the OPEC scenario is elusive because software isn’t like oil. Software has no physical scarcity, and the people who know how to create good software can’t be isolated or controlled the way oil wells can. Where UCITA sets out to make software a controlled substance, the Open Source movement sets out to take advantage of software’s innate flexibility of distribution. By making software freely available and freely modifiable, Open Source takes advantage of everything UCITA would limit — Open Source software is easy to get, easy to modify, and easy to share. If UCITA becomes law, the difference between Open Source and commercial software will become even more stark. Expect Open Source to do very well in these circumstances.

Economics 101 tells us that people make economic decisions “on the margin” — a calculation not of total worth to total cost, but of additional worth for additional cost. For someone who wants a watch for telling time but not for status, for example, the choice between a $20 Timex and a $20,000 Rolex is clear — the $19,980 marginal cost of the Rolex knocks it out of the running. In the case of Open Source vs. commercial software, the differences in cost can be equally vast — in many cases (such as the Apache web server) the Open Source solution is both cheaper and better. Cartels only work if there is no competition, a fact the UCITA group seems not to have grasped. If UCITA becomes law — that could happen as soon as December — the commercial software industry will be sending its customers scrambling for Linux, Apache, and the other Open Source products faster than they already are.

The Internet and the Size of Government

First published in FEED, 8/11/99.

Fritz Hollings, Senator from South Carolina, and Zhu Rongji, Premier of China, have the same problem — the internet has made their governments too small. In Mr. Hollings’ case, ecommerce is threatening to damage South Carolina’s local tax base, while Mr. Zhu is facing threats to the Chinese Communist Party from dissident web sites in other countries. They have both decided to extend the reach of their governments past their current geographical boundaries to attack these problems. The final thing the Senator and the Premier share is that their proposed solutions will accelerate the changes they are meant to postpone — you can’t fight the effects of the internet without embracing it, and you cannot embrace it without being changed by it.

Fritz Hollings’ problem is simple — states can collect taxes on local sales but not on ecommerce, because ecommerce has no respect for locality. His proposed solution is equally simple: a 5% national ecommerce tax (the “Sales Tax Safety Net and Teacher Funding Act”) to collect money at the Federal level and funnel it into state “subsidies” for teachers’ salaries. Unfortunately for him, “No Taxation Without Representation” cuts both ways — education policy is one of a state’s most important functions, and a Federal education tax opens the door for a Federal education policy as well. If Hollings is worried about loss of state power, extending the states’ reach to the level of national taxation is a good short-term solution, but in the long run it will worsen the problem — states that defend their autonomy will watch the internet erode their revenue base, but states that defend their revenues will watch the internet erode their autonomy.

Zhu Rongji’s dilemma is more complicated, but no less stark — China’s communist party is vulnerable to international dissent because political web sites have no respect for national borders. Shortly after the Chinese government banned the Falun Gong sect for “jeopardising social stability,” China’s Internet Monitoring Bureau attacked Falun Gong web servers in the US and UK. The People’s Liberation Army newspaper called this action a “struggle in the realm of thought,” indicating that China now respects no national borders in its attacks on dissent. During the Kosovo crisis, China argued loudly for non-intervention in the affairs of other countries, but these attacks on foreign web servers tell a different story — non-interference in a connected world is incompatible with a determination to stifle dissent. The Falun Gong attacks will help protect the Communist Party in the short term, but by demonstrating to future dissidents how afraid the Party is of the web, they make it more vulnerable to the winds of change in the long haul.

Governments, like companies, are being forced to respond to the increasingly borderless movement of money and ideas, but unlike companies they have no graceful way of going out of business when they are no longer viable. In place of mergers and bankruptcy, governments have wars and violent overthrows — governments will do anything to avoid closing up shop, even if that would be better for the people they are meant to serve. This makes governments more successful than businesses in fighting change in the short term, but in the long run, governments are more brittle and therefore more at risk. Zhu Rongji and Fritz Hollings have both adopted old men’s strategies — preserve the present at all costs — but postponing change will heighten its force when it does come. In another five years, when the internet has become truly global, the damage it will do to things like South Carolina’s tax base and the Chinese Communist Party will make it clear that Messrs. Hollings and Zhu are trying to put out fire with gasoline.

Time to Open Source the Human Genome

9/8/1999

The news this week that a researcher has bred a genetically smarter mouse is another precursor to a new era of genetic manipulation. The experiment, performed by Dr. Joe Tsien of Princeton, stimulated a gene called NR2B, producing mice with greatly heightened intelligence and memory. The story has generated a great deal of interest, especially as human brains may have a similar genetic mechanism, but there is a behind-the-scenes story which better illuminates the likely shape of 21st century medicine. Almost from the moment that the experiment was publicized, a dispute broke out about ownership of NR2B itself, which is already patented by another lab. Like an information age land-grab, whole sections of genetic information are being locked behind pharmaceutical patents as fast as they are being identified. If the current system of private ownership of genes continues, the majority of human genes could be owned in less than two years. The only thing that could save us from this fate would be an open-source movement in genetics — a movement to open the DNA source code for the human genome as a vast public trust.

Fighting over mouse brains may seem picayune, but the larger issue is ownership of the genetic recipe for life. The case of the smarter mouse, patent pending, is not an isolated one: among the patents already granted are human genes linked to blindness (Axys Pharmaceuticals), epilepsy (Progenitor), and Alzheimer’s (Glaxo Wellcome). Rights to the gene which controls arthritis will be worth millions, breast cancer, billions, and the company that patents the genes that control weight loss can write its own ticket. As the genome code becomes an essential part of industries from bio-computing and cybernetic interfaces to cosmetics and nutrition, the social and economic changes it instigates are going to make the effects of the Internet seem inconsequential. Unfortunately for us, though, the Internet’s intellectual property is mostly in the public domain, while the genome is largely — and increasingly — private.

It didn’t have to be this way. Patents exist to encourage investment by guaranteeing a discoverer time to recoup their investment before facing any competition, but patent law has become increasingly permissive in what constitutes a patentable discovery. There are obviously patents to be had in methods of sequencing genes and in methods of using those sequences to cure disease or enhance capability. But to allow a gene itself to be patented makes a mockery of “prior art,” the term which covers unpatented but widely dispersed discoveries. It is prior art which keeps anyone from patenting fire, or the wheel, and in the case of genetic information, life itself is the prior art. It is a travesty of patent law that someone can have the gene for Alzheimer’s in every cell of their body, while the patent for that gene is owned by Glaxo Wellcome.

The real action, of course, is not in mouse brains but in the human genome. Two teams, one public and one private, are working feverishly to sequence all of the 100,000 or so genes which lie within the 23 pairs of human chromosomes. The public consortium aims to release the sequence into the public domain, while the private group aims to patent much of the genome, especially the valuable list of mutations that cause genetic disease. The competition between these two groups has vastly accelerated the pace of the work — moving it from scheduled completion in 2005 to next year — but the irony is that this accelerated timetable won’t give the public time to grasp the enormous changes the project portends. By this time next year, the fate of the source code for life on earth — open or closed — will be largely settled, long before most people have even begun to understand what is at stake.

The Internet and Hit-driven Industries

First published on SAR, 07/99.

ALERT! New community-based email virus on the loose!

This virus, called the “DON’T GO” virus, primarily targets major Hollywood studios, and is known to proliferate within 24 hours of the release of a mediocre movie. If you receive a piece of email from a friend with a subject line like “Notting Hill: Don’t Go”, do not open it! Its contents can cause part of your memory to be erased, replacing expensive marketing hype with actual information from movie-goers. If this virus is allowed to spread, it can cause millions of dollars of damage to a Hollywood studio in a single weekend.

Hit-driven industries (movies, books, music, et al.) are being radically transformed by Internet communities, because the way these industries make money is directly threatened by what Internet communities do best – move word of mouth at the speed of light. Internet communities are putting so much information in the hands of the audience so quickly that the ability of studios, publishing houses, and record labels to boost sales with pre-release hype is diminishing even as the costs of that hype are rising.

Consider Hollywood, the classic hit-driven industry. The financial realities can be summed up thusly:

Every year, there are several major flops. There is a horde of movies that range from mildly unprofitable to mildly profitable. There is a tiny core of wildly profitable hits. The hits are what pay for everything else.

This is true of all hit-driven businesses – Stephen King’s books more than earn back what Marcia Clark lost, the computer game Half-Life sells enough to pay for flops like Dominion, Boyzone recoups the Spice Girls’ latest album, and so on. Many individual works lose money, but the studios, publishers, or labels overall turn a profit. Obviously, the best thing Hollywood could do in this case would be to make all the movies worth watching, but as the perennial black joke goes, “There are three simple rules for making a successful movie, but unfortunately nobody knows what they are.” Thus the industry is stuck managing a product whose popularity it can’t predict in advance, and it has responded by creating a market where the hits more than pay for the flops.

For Hollywood, this all hinges on the moment when a movie’s quality is revealed: opening weekend. Once a movie is in the theaters, the audience weighs in and its fate is largely sealed. Opening weekend is the one time when the producers know more about the product than the audience — it isn’t until Monday morning water cooler talk begins that a general sense of “thumbs up” or “thumbs down” becomes widespread. Almost everything movie marketers do is to try to use the media they do control — magazine ads, press releases, commercials, talk show spots — to manipulate the medium they don’t control — word of mouth. The last thing a studio executive wants is to allow a movie to be judged on its merits — they’ve spent too much money to leave things like that to the fickle reaction of the actual audience. The best weapon they have in this fight is that advertising spreads quickly while word of mouth spreads slowly.

Enter the Internet. A movie audience is a kind of loose community, but the way information is passed — phone calls, chance encounters at the mall, conversations in the office — makes it a community where news travels slow. Not so with email, newsgroups, and fan web pages — news that a movie isn’t worth the price of admission can spread through an Internet community in hours. Waiting till Monday morning to know about a movie’s fate now seems positively sluggish — a single piece of email can be forwarded 10 times, a newsgroup can reach hundreds or even thousands, a popular web site can reach tens of thousands, and before you know it, it’s Saturday afternoon and the people are staying away in droves.

This threat — that after many months and many millions of dollars the fate of a movie can be controlled by actual fans’ actual reactions — is Hollywood’s worst nightmare. There are two scenarios that can unfold in the wake of this increased community power: The first scenario (call it “Status Quo Plus”) is that the studios can do more of everything they’re already doing: more secrecy about the product, more pre-release hype, more marketing tie-ins, more theaters showing the movie on opening weekend. This has the effect of maximizing revenues before people talk to their friends and neighbors about the film. This is Hollywood’s current strategy, which hit its high-water mark with the marketing juggernaut of The Phantom Menace. The advantage of this strategy is that it plays to the strengths of the existing Hollywood marketing machine. The disadvantage of this strategy is that it won’t work.

Power has moved from the marketer to the audience, and there is no way to reverse that trend, because nothing is faster than the Internet. The Internet creates communities of affinity without regard to geography, and if you want the judgement of your peers you can now get it instantly. (Star Wars fan sites were posting reactions to “The Phantom Menace” within minutes of the end of the first showing.) Furthermore, this is only going to get worse, both because the Internet population is still rising rapidly and because Internet users are increasingly willing to use community recommendations in place of the views of the experts, and while experts can be bought, communities can’t be. This leaves the other scenario, the one that actually leverages the power of Internet communities: let the artists and the fans have more direct contact. If the audience knows instantly what’s good and what isn’t, let the creators take their products directly to the audience. Since the creators are the ones making the work, and the community is where the work stands or falls, much of what the studio does only adds needless expense while inhibiting the more direct feedback that might help shape future work. Businesses that halve the marketing budget and double the community outreach will find that costs go down while profits from successful work go up. The terrible price of this scenario, though, is that flops will fail faster, and the studio will have to share more revenue with the artist in return for asking them to take on more risk.

The technical issues of entertainment on the Internet are a sideshow compared to community involvement — the rise of downloadable video, MP3s, or electronic books will have a smaller long-term effect on restructuring hit-driven industries than the fact that there’s no bullshitting the audience anymore, not even for as long as a weekend. We will doubtless witness an orgy of new marketing strategies in the face of this awful truth — coupons for everything a given movie studio or record label produces in a season, discounts for repeat viewings, Frequent Buyer Miles, on and on — but in the end, hit-driven businesses will have to restructure themselves around the idea that Internet communities will sort the good from the bad at lightning speed, and only businesses that embrace that fact and work with word of mouth rather than against it will thrive in the long run.

Language Networks

7/7/1999

The 21st Century is going to look a lot like the 19th century, thanks to the internet. A recent study in the aftermath of the Asian financial storm (“Beyond The Crisis – Asia’s Challenge for Recovery,” Dentsu Institute for Human Studies) found that citizens of Asian countries who speak English are far more likely to be online than those who don’t. The study, conducted in Tokyo, Beijing, Seoul, Bangkok, Singapore, and Jakarta, found that English speakers were between two and four times as likely to use the internet as their non-English speaking fellow citizens. Within each country, this is a familiar story of haves and have-nots, but in the connections between countries something altogether different is happening — the internet is creating an American version of the British Empire, with the English language playing the role of the Royal Navy.

This isn’t about TCP/IP — in an information economy the vital protocol is language, written and spoken language. In this century, trade agreements have tended to revolve around moving physical goods across geographical borders: ASEAN, EU, OAS, NAFTA. In the next century, as countries increasingly trade more in information than hard goods, the definition of proximity changes from geographic to linguistic: two countries border one another if and only if they have a language they can use in common. The map of the world is being redrawn along these axes: traders in London are closer to their counterparts in New York than in Frankfurt, programmers in Sydney are closer to their colleagues in Vancouver than in Taipei. This isn’t an entirely English phenomenon: on the internet, Lisbon is closer to Rio than to Vienna, Dakar is closer to Paris than to Nairobi.

This linguistic map is vitally important for the wealth of nations — as the Dentsu study suggests, the degree to which a country can plug into a “language network,” especially the English network, will have much to do with its place in the 21st century economy. These language networks won’t just build new connections, they’ll tear at existing ones as well. Germany becomes a linguistic island despite its powerhouse economy. Belgium will be rent in two as its French- and Flemish-speaking halves link with French and Dutch networks. The Muslim world will see increasing connection among its Arabic-speaking nations — Iraq, Syria, Egypt — and decreasing connections with its non-Arabic-speaking members. (Even the translation software being developed reflects this bias: given the expense of developing translation software, only languages with millions of users — standard versions of English, French, Portuguese, Spanish, Italian, German — will make the cut.) And as we would expect of networks with different standards, gateways will arise: places where multi-lingual populations will smooth the transition between language networks. These gateways — Hong Kong, Brussels, New York, Delhi — will become economic centers in the 21st century because they were places where languages overlapped in the 19th.

There are all sorts of reasons why none of this should happen — why the Age of Empire shouldn’t be resurrected, why countries that didn’t export their language by force should suffer, why English shouldn’t become the Official Second Language of the 21st century — but none of those reasons will matter. We know from the 30-year history of the internet that when a new protocol is needed to continue internet growth, it’ll be acquired at any expense. What the internet economy demands more than anything right now is common linguistic standards. In the next 10 years, we will see the world’s languages sorted into two categories — those that form part of language networks will grow, and those that don’t will shrink, as the export of languages in the last century reshapes the map of the next one.

Internet Use and National Identity

First published in FEED, 7/15/99.

The United Nations released its annual Human Development Report this week, including a section concerning the distribution of Internet use among the nations of the world. It painted a picture of massively unequal distribution, showing among other things that the United States has a hundred times more Internet users per capita than the Arab States, and that Europe has 70 times more users per capita than sub-Saharan Africa. Surveying the adoption rates detailed in this report, anyone who has any contact with the Internet can only be left with one thought — “Well, duh.” There is some advantage to quantifying what is common knowledge, but the UN has muddied the issues here rather than clarifying them.

Is there really anybody who could be surprised that the country that invented the internet has more users per capita than Qatar? Is there really anyone who can work themselves up over the lack of MyYahoo accounts in nations that also lack clean water? The truth of the matter is that internet growth is not gradual, it is a phase change — when a country crosses some threshold of readiness, demand amongst its citizens explodes. Beneath that threshold, trying to introduce the internet by force is like pushing string — it is absurd to put internet access on the same plane as access to condoms and antibiotics.

Once a country reaches that threshold, though, there is one critical resource that drives internet adoption, and the UN desperately wants that resource to be money. Among the UN’s proposals is a “bit tax” (one penny per 100 emails) to build out telecommunications infrastructure in the developing world. While improving infrastructure is an admirable goal, it fudges the real issue: among countries that are ready for rapid internet adoption, the most important resource isn’t per capita income but per capita freedom. Massive internet adoption of the sort the UN envisions will require an equally massive increase in political freedom, and the UN is in no position to say that part out loud.

The HDR report is hamstrung by the UN’s twin goals of advancing human rights and respecting national sovereignty. Where the internet is concerned, these goals are incompatible. The United Arab Emirates has a much better telecom infrastructure than Argentina, but a lower per capita use of the internet. Saudi Arabia has a higher per capita income than Spain but lower internet penetration. What Argentina has more of than the UAE is neither infrastructure nor money, but the right of the citizens to get information from a wide variety of sources, and their willingness to exercise that right. Among nations of relatively equal development, it will be the freer nations and not the richer ones that adopt the internet fastest.

The report addresses this issue by suggesting a toothless campaign to “…persuade national governments not to restrict access to the internet because of its tremendous potential for human development,” while neglecting to mention that the “potential for human development” is a death sentence for many of the world’s leaders. If the UN were serious about driving internet adoption, the section on the internet would have started with the following declaration: “Attention all dictators: internet access is the last stop for your regime. You can try to pull into the station gradually, as China and Kuwait are trying to do, or you can wait to see what happens when you plow into the wall at full speed, like North Korea and Kenya, but the one thing you can’t do is keep going full steam ahead. Enjoy your ride.”

Citizens and Customers

6/15/1999

All countries are different; all customers are the same. That’s the lesson to be learned from the meteoric rise of the ISP Freeserve, and the subsequent reshaping of the UK internet industry. Prior to Freeserve, the British adoption rate of the internet was fairly sluggish, but Freeserve figured out how to offer free internet access by subsidizing its service with for-fee tech support and a cut of local call revenues, and in the six months since they launched (and spawned over 50 copycat services), the UK user base has grown from 6 to 10 million. Their main advantage over the other major ISP player, British Telecom, was the contempt BT has for the British public.

Wherever technology is concerned, there are a host of nationalistic prejudices: the Americans are early adopters, for example, while the British are a nation of shopkeepers, suspicious of technology and fearful of change. BT held this latter view, behaving as if Britain’s slow adoption of the internet was just another aspect of a national reticence about technology, and therefore treating the ISP business as an expensive service for elites rather than trying to roll it out cheaply to the masses.

This idea of national differences in the use of the internet is everywhere these days, but it confuses content with form. There will be Czech content on the net, but there won’t be a “Czech Way” of using the network, or a “Chinese Way” or a “Chilean Way.” The internet’s content is culturally determined, but its form is shaped by economics. Once a country gets sufficiently wired, the economic force of the internet has little to do with ethnicity or national sentiment and much to do with the unsurprising fact that given two offers of equal value, people all over the world will take the cheaper one, no matter who is offering it to them.

Unsurprising to consumers, that is; businesses all over the world are desperate to convince themselves that national identity matters more than quality and price. (Remember the “Buy American” campaign that tried to get Americans to pay more for inferior cars? Or the suggestion that corrupt business practices were part of “Asian Values”?) Freeserve’s genius was not to be swayed by the caricature of stodgy, technophobic Brits. British reticence about the internet turned out to be about price and not about national character at all — now that internet access has come in line with ordinary incomes, the British have been as keen to get connected as Americans are.

Patriotism is the last refuge of an unprofitable business. We’ve seen the internet take off in enough countries to have some idea of the necessary preconditions: when a literate population has phones at home, cheap PCs, and competitive telecom businesses, the value of connecting to the internet rises continually while the cost of doing so falls. In these countries, any business that expects national identity to provide some defense against competition is merely using a flag as a fig leaf. In the end, countries with wired populations will see national differences reduced in importance to the level of the Local Dish and Colorful Garb, because once a country passes some tipping point, its population starts behaving less like citizens of a particular place and more like global customers, making the same demands made by customers everywhere. Businesses that fill those demands, regardless of nationality, will thrive, and businesses that ignore those demands, regardless of nationality, will die.

Why Smart Agents Are A Dumb Idea

Smart agents are a dumb idea. Like several of the other undead ideas floating around (e.g. Digital Cash, Videophones), the idea of having autonomous digital agents that scour the net acting on your behalf seems so attractive that, despite a string of failures, agents enjoy periodic resurgences of interest. A new such surge seems to be beginning, with another round of stories in the press about how autonomous agents equipped with instructions from you (and your credit card number) are going to shop for your CDs, buy and sell your stocks, and arrange your travel plans. The primary thing smart agents seem to have going for them is the ‘cool’ factor (as in ‘This will work because it would be cool if it did.’) The primary thing they have going against them is that they do not work and they never will work, and not just because they are impractical, but because they have the wrong problems in their domain, and they solve them in the wrong order.

Smart agents — web crawling agents as opposed to stored preferences in a database — have three things going against them:

  • Agents’ performance degrades with network growth
  • Agents ask people to do what machines are good at (waiting) and machines to do what people are good at (thinking).
  • Agents make the market for information less efficient rather than more

These three barriers render the idea of agents impractical for almost all of the duties they are supposedly going to perform.

Consider these problems in context; the classic scenario for the mobile agent is the business trip. You have business in Paris (or, more likely, Peoria) and you need a flight, a hotel and a rental car. You instruct your agent about your dates, preferences, and price limits, and it scours the network for you, putting together the ideal package based on its interpretation of your instructions. Once it has secured this package, it makes the purchases on your behalf, and presents you with the completed travel package, dates, times and confirmation numbers in one fell swoop.

A scenario like this requires a good deal of hand waving to make it seem viable, to say nothing of worthwhile, because it assumes that the agent’s time is more valuable than your time. Place that scenario in a real-world context – your boss tells you you need to be in Paris (Peoria) at the end of the week, and could you make the arrangements before you go home? You fire up your trusty agent, and run into the following problems:

  • Agents’ performance degrades with network growth

Upon being given its charge, the agent needs to go out and identify all the available sources of travel information, issue the relevant query to each, digest the returned information and then run the necessary weighting of the results in real time. This is like going to Lycos and asking it to find all the resources related to Unix and then having it start indexing the Web. Forget leaving your computer to make a pot of coffee – you could leave your computer and make a ship in a bottle.

One of the critical weaknesses in the idea of mobile agents is that the time taken to run a query improves with processor speed (~2x every 18 months) but degrades with the amount of data to be searched (~2x every 4 months). A back of the envelope calculation comparing Moore’s Law vs. traffic patterns at public internet interconnect points suggests that an autonomous agent’s performance for real-time requests should suffer by roughly an order of magnitude annually. Even if you make optimistic assumptions about algorithm design and multi-threading and assume that data sources are always accessible, mere network latency in an exploding number of sources prohibits real-time queries. The right way to handle this problem is the mundane way – gather and index the material to be queried in advance.
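
As a rough sketch of that back-of-the-envelope arithmetic (assuming, as above, that processor speed doubles every 18 months while the volume of data to be searched doubles every 4 months), the compounding looks like this:

    # Rough sketch of the doubling-rate arithmetic above. The two rates are the
    # essay's assumptions (CPU speed: 2x / 18 months; searchable data: 2x / 4 months),
    # not measured figures; the point is only how quickly the gap compounds.

    def relative_query_time(years, cpu_doubling_months=18.0, data_doubling_months=4.0):
        """Query time relative to today: data growth divided by CPU speedup."""
        months = years * 12.0
        cpu_speedup = 2.0 ** (months / cpu_doubling_months)
        data_growth = 2.0 ** (months / data_doubling_months)
        return data_growth / cpu_speedup

    for year in (1, 2, 3):
        print(f"after {year} year(s): ~{relative_query_time(year):.0f}x slower")
    # prints roughly 5x, 25x, 128x under these two assumed growth rates

Whether the right number per year is five-fold or ten-fold, the direction is the same: gathering and indexing in advance, rather than crawling at query time, is the only strategy that keeps up.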

  • Agents ask people to do what machines are good at (waiting) and machines to do what people are good at (thinking).

The usual answer to this problem with real-time queries is to assume that people are happy to ask a question hours or days in advance of needing the answer, a scenario that occurs with a frequency of approximately never. People ask questions when they want to know the answer – if they wanted the answer later they would have asked the question later. Agents thus reverse the appropriate division of labor between humans and computers — in the agent scenario above, humans do the waiting while agents do the thinking. The humans are required to state the problem in terms rigorous enough to be acted on by a machine, and be willing to wait for the answer while the machine applies the heuristics. This is in keeping with the Central Dream of AI, namely that humans can be relegated to a check-off function after the machines have done the thinking.

As attractive as this dream might be, it is far from the realm of the possible. When you can have an agent which understands why 8 hours between trains in Paris is better than 4 hours between trains in Frankfurt but 8 hours in Peoria is worse than 4 hours in Fargo, then you can let it do all the work for you, but until then the final step in the process is going to take place in your neural network, not your agent’s.

  • Agents make the market for information less efficient

This is the biggest problem of all – agents rely on the wrong abstraction of the world. In the agent’s world, its particular owner is at the center, and there are a huge number of heterogeneous data sources scattered all around, and one agent makes thousands of queries outwards to perform one task. This ignores the fact that the data is neither static nor insensitive to the agent’s request. The agent is not just importing information about supply, it is exporting information about demand at the same time, thus changing the very market conditions it is trying to record. The price of a Beanie Baby rises as demand rises since Beanie Babies are an (artificially) limited resource, while the price of bestsellers falls with demand, since bookstores can charge lower prices in return for higher volume. Airline prices are updated thousands of times a day, currency exchange rates are updated tens of thousands of times a day. Net-crawling agents are completely unable to deal with markets for information like these; these kinds of problems require the structured data to be at the center, and for a huge number of heterogeneous queries to be made inwards towards the centralized data, so that information about supply and demand is all captured in one place, something no autonomous agent can do.

Enter The Big Fat Webserver

So much of the history of the Internet, and particularly of the Web, has been about decentralization that the idea of distributing processes has become almost reflexive. Because the first decade of the Web has relied on PCs, which are by their very nature decentralized, it is hard to see that much of the Web’s effect has been in the opposite direction, towards centralization, and centralization of a particular kind – market-making.

The alternative to the autonomous mobile agent is the Big Fat Webserver, and while its superiority as a solution has often been overlooked next to the sexier idea of smart agents, B.F.W.s are A Good Thing for the same reasons markets are A Good Thing – they are the best way of matching supply with demand in real time. What you would really do when Paris (Peoria) beckons is go to Travelocity or some similar B.F.W. for travel planning. Travelocity runs on that unsexiest of hardware (the mainframe) in that unsexiest of architectures (centralized) and because of that, it works well everywhere the agent scenario works badly. You log into Travelocity and ask it a question about plane flights, get an answer right then, and decide.

B.F.W.s’ performance scales with database size, not network size

The most important advantage of B.F.W.s over agents is that B.F.W.s acquire and structure the data before a request comes in. Net-crawling agents are asked to identify sources, gather data and then query the results all at once, even though these functions require completely different strategies. By gathering and structuring data in advance, B.F.W.s remove the two biggest obstacles to agent performance before any request is issued.
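
The difference is easy to see in outline. The sketch below is purely illustrative (the names and data shapes are invented for the example, not drawn from Travelocity or any real system): the agent model does source discovery, gathering, and filtering while the user waits, whereas the B.F.W. does its gathering and indexing ahead of time and spends request time on nothing but a lookup and a filter.

    # Illustrative only: invented names and toy data structures, not a real API.
    from typing import Callable, Dict, List

    Fare = Dict[str, object]  # e.g. {"route": "JFK-CDG", "carrier": "X", "price": 400}

    def agent_style_search(sources: List[Callable[[], List[Fare]]],
                           wanted: Callable[[Fare], bool]) -> List[Fare]:
        """Agent model: contact every source and filter, all while the user waits."""
        gathered: List[Fare] = []
        for fetch in sources:          # each call is a live network round-trip
            gathered.extend(fetch())
        return [fare for fare in gathered if wanted(fare)]

    class BigFatWebserver:
        """B.F.W. model: gather and index ahead of time; request time is a lookup."""
        def __init__(self) -> None:
            self.by_route: Dict[str, List[Fare]] = {}

        def ingest(self, fares: List[Fare]) -> None:
            # Runs continuously in the background, before any request arrives.
            for fare in fares:
                self.by_route.setdefault(str(fare["route"]), []).append(fare)

        def search(self, route: str, wanted: Callable[[Fare], bool]) -> List[Fare]:
            # Coarse filtering by the machine; the fine-grained choice stays with the human.
            return [fare for fare in self.by_route.get(route, []) if wanted(fare)]

Both halves answer the same question; the difference is when the expensive work happens, which is the whole of the argument above.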

B.F.W.s let computers do what computers are good at (gathering, indexing) and people do what people are good at (querying, deciding).

Propaganda to the contrary, when given a result set of sufficiently narrow range (a dozen items, give or take), humans are far better at choosing between different options than agents are. B.F.W.s provide the appropriate division of labor, letting the machine do the coarse-grained sorting, which has mostly to do with excluding the worst options, while letting the humans make the fine-grained choices at the end.

B.F.W.s make markets

This is the biggest advantage of B.F.W.s over agents — databases open to heterogeneous requests are markets for information. Information about supply and demand is handled at the same time, and the transaction takes place as close to real time as database processing plus network latency can allow.

For the next few years, B.F.W.s are going to be a growth area. They solve the problems previously thought to be in the agents’ domain, and they solve them better than agents ever could. Where the agents make the assumption of a human in the center, facing outward to a heterogeneous collection of data which can be gathered asynchronously, B.F.W.s make an assumption that is more in line with markets (and reality) – a source of data (a market, really) in the center, with a collection of humans facing inwards and making requests in real time. Until someone finds a better method of matching supply with demand than real-time markets, B.F.W.s are a better answer than agents every time.