Domain Names: Memorable, Global, Non-political?

Everyone understands that something happened to the domain name system in the mid-90s to turn it into a political minefield, with domain name squatters and trademark lawsuits and all the rest of it. It’s tempting to believe that if we could identify that something and reverse it, we could return to the relatively placid days prior to ICANN.

Unfortunately, what made domain names contentious was simply that the internet became important, and there’s no putting the genie back in that bottle. The legal issues involved actually predate not only ICANN but the DNS itself, going back to the mid-70s and the earliest decision to create memorable aliases for unmemorable IP addresses. Once the original host name system was in place — IBM.com instead of 129.42.18.99 — the system was potentially subject to trademark litigation. The legal issues were thus implicit in the DNS from the day it launched; it just took a decade or so for anyone to care enough to hire a lawyer.

There is no easy way to undo this. The fact that ICANN is a political body is not its fault (though the kind of political institution it has become is its fault). Memorable names create trademark issues. A global namespace requires global oversight. Names that are both memorable and globally unique will therefore require global political oversight. As long as we want names we can remember, and which work unambiguously anywhere in the world, someone somewhere will have to handle the issues that ICANN currently handles.

Safety in Numbers

One reaction to the inevitable legal trouble with memorable names is simply to do away with memorable names. In this scenario, ICANN would only be responsible for assigning handles, unique IDs devoid of any real meaning. (The most articulate of these proposals is Bob Frankston’s “Safe Haven” approach.) [http://www.frankston.com/public/essays/DNSSafeHaven.asp]

In practice, this would mean giving a web site a meaningless but unique numerical address. Like a domain name today, it would be globally unambiguous, but unlike today’s domain names, such an address would not be memorable, as people are bad at remembering numbers, and terrible at remembering long numbers.

Though this is a good way to produce URLs free from trademark, we don’t need a new domain to do this. Anyone can register unmemorable numeric URLs today — whois says 294753904578.com, for example, is currently available. Since this is already possible, such a system wouldn’t free us from trademark issues, because whenever systems with numerical addresses grow popular (e.g. Compuserve or ICQ), users demand memorable aliases, to avoid dealing with horrible addresses like 71234.5671@compuserve.com. Likewise, the DNS was designed to manage memorable names, not merely unique handles, and creating a set of non-memorable handles simply moves the issue of memorable names to a different part of the system. It doesn’t make the issue go away.

Embrace Ambiguity

Another set of proposals would do away with the globally unique aspect of domain names. Instead of awarding a single firm the coveted .com address, a search for ACME would yield several different matches, which the user would then pick from. This is analogous to a Google search on ACME, but one where none of the matches had a memorable address of their own.

The ambiguity in such a system would make it impossible to automate business-to-business connections using the names of the businesses themselves. These addresses would also fail the ‘side of the bus’ test, where a user seeing a simple address like IBM.com on a bus or a business card (or hearing it over the phone or the radio) could go to a browser and type it in. Instead, there would be a market for third-parties who resolve name->address mappings.

The rise of peer-to-peer networks has given us a test-bed for market-allocated namespaces, and the news isn’t good. Despite the obvious value in having a single interoperable system for instant messaging, to take one example, we don’t have interoperability because AOL is (unsurprisingly) unwilling to abandon the value in owning the majority of those addresses. The winner in a post-DNS market would potentially have even more control and less accountability than ICANN does today.

Names as a Public Good

The two best theories of network value we have — Metcalfe’s law for point-to-point networks and Reed’s law for group-forming networks — both rely on optionality, the possibility of actually creating any of the untold potential connections that might exist on large networks. Valuable networks allow nodes to connect to one another without significant transaction costs.
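As a back-of-the-envelope illustration (a sketch added here, not from the original text), the two laws can be compared directly: Metcalfe’s law counts possible point-to-point connections, which grow roughly as the square of the number of nodes, while Reed’s law counts possible subgroups, which grow exponentially:

```python
def metcalfe_value(n):
    # Metcalfe's law: potential point-to-point connections,
    # n * (n - 1) / 2, which grows as roughly n squared.
    return n * (n - 1) // 2

def reed_value(n):
    # Reed's law: potential subgroups of two or more members,
    # 2^n minus the n singletons and the empty set.
    return 2**n - n - 1

for n in (10, 20, 30):
    print(n, metcalfe_value(n), reed_value(n))
```

Even at thirty nodes, the group-forming count dwarfs the point-to-point count, which is why both theories treat easy connection-forming as the source of a network’s value.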

Otherwise identical networks will thus have very different values for their users, depending on how easy or hard it is to form connections. In this theory, the worst damage spam does is not in wasting individual users’ time, but in making users skeptical of all mail from unknown sources, thus greatly reducing the possibility of unlikely connections. (What if you got real mail from Nigeria?)

Likewise, a system that provides a global namespace, managed as a public good, will create enormous value in a network, because it will lower the transaction costs of establishing a connection or group globally. It will also aid innovation by allowing new applications to bootstrap into an existing namespace without needing explicit coordination or permission. Despite its flaws, and despite ICANN’s deteriorating stewardship, this is what the DNS currently does.

Names Are Inevitable

We make sense of the world by naming things. Faced with any sort of numerical complexity, humans require tools for oversimplifying, and names are one of the best oversimplifications we have. We have only recently created systems that require global namespaces (ship registries, telephone numbers) so we’re not very good at it yet. In most of those cases, we have used existing national entities to guarantee uniqueness — we get globally unique phone numbers if we have nationally unique phone numbers and globally unique country codes.

The DNS, and the internet itself, have broken this ‘National Partition’ solution because they derive so much of their value from being so effortlessly global. There are still serious technical issues with the DNS, such as the need for domain names in non-English character sets, as well as serious political issues, like the need for hundreds if not thousands of new top-level domains. However, it would be hard to overstate the value created by memorable and globally unique domain names, names that are accessible to any application without requiring advance coordination, and which lower the transaction costs for making connections.

There are no pure engineering solutions here, because this is not a pure engineering problem. Human interest in names is a deeply wired characteristic, and it creates political and legal issues because names are genuinely important. In the 4 years since its founding, ICANN has moved from being merely unaccountable to being actively anti-democratic, but as reforming or replacing ICANN becomes an urgent problem, we need to face the dilemma implicit in namespaces generally: Memorable, Global, Non-political — pick two.

Communities, Audiences, and Scale

April 6, 2002

Prior to the internet, the difference in communication between community and audience was largely enforced by media — telephones were good for one-to-one conversations but bad for reaching large numbers quickly, while TV had the inverse set of characteristics. The internet bridged that divide, by providing a single medium that could be used to address either communities or audiences. Email can be used for conversations or broadcast, usenet newsgroups can support either group conversation or the broadcast of common documents, and so on. Most recently the rise of software for “The Writable Web”, principally weblogs, is adding two-way features to the Web’s largely one-way publishing model.

With such software, the obvious question is “Can we get the best of both worlds? Can we have a medium that spreads messages to a large audience, but also allows all the members of that audience to engage with one another like a single community?” The answer seems to be “No.”

Communities are different than audiences in fundamental human ways, not merely technological ones. You cannot simply transform an audience into a community with technology, because they assume very different relationships between the sender and receiver of messages.

Though both are held together in some way by communication, an audience is typified by a one-way relationship between sender and receiver, and by the disconnection of its members from one another — a one-to-many pattern. In a community, by contrast, people typically send and receive messages, and the members of a community are connected to one another, not just to some central outlet — a many-to-many pattern [1]. The extreme positions for the two patterns might be visualized as a broadcast star where all the interaction is one-way from center to edge, vs. a ring where everyone is directly connected to everyone else without requiring a central hub.

As a result of these differences, communities have strong upper limits on size, while audiences can grow arbitrarily large. Put another way, the larger a group held together by communication grows, the more it must become like an audience — largely disconnected and held together by communication traveling from center to edge — because increasing the number of people in a group weakens communal connection. 

The characteristics we associate with mass media are as much a product of the mass as the media. Because growth in group size alone is enough to turn a community into an audience, social software, no matter what its design, will never be able to create a group that is both large and densely interconnected. 

Community Topology

This barrier to the growth of a single community is caused by the collision of social limits with the math of large groups: As group size grows, the number of connections required between people in the group exceeds human capacity to make or keep track of them all.

A community’s members are interconnected, and a community in its extreme position is a “complete” network, where every connection that can be made is made. (Bob knows Carol, Ted, and Alice; Carol knows Bob, Ted, and Alice; and so on.) Dense interconnection is obviously the source of a community’s value, but it also increases the effort that must be expended as the group grows. You can’t join a community without entering into some sort of mutual relationship with at least some of its members, but because every additional member requires additional connections, these coordination costs increase with group size.

For a new member to connect to an existing group in a complete fashion requires as many new connections as there are group members, so joining a community that has 5 members is much simpler than joining a community that has 50 members. Furthermore, this tradeoff between size and the ease of adding new members exists even if the group is not completely interconnected; maintaining any given density of connectedness becomes much harder as group size grows. Each new member thus either increases the required effort or lowers the density of connectedness, or both, jeopardizing the interconnection that makes for community. [2]

As group size grows past any individual’s ability to maintain connections to all members of a group, the density shrinks, and as the group grows very large (>10,000) the number of actual connections drops to less than 1% of the potential connections, even if each member of the group knows dozens of other members. Thus growth in size is enough to alter the fabric of connection that makes a community work. (Anyone who has seen a discussion group or mailing list grow quickly is familiar with this phenomenon.)
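The arithmetic behind that density collapse is simple to sketch (the group size and contact count here are illustrative assumptions, not figures from the text): in a group of 10,000 where each member maintains ties to 50 others, well under 1% of the potential connections are actually made.

```python
def density(n, avg_contacts):
    """Fraction of potential connections actually made when each of
    n members maintains ties to avg_contacts others. Each tie is
    shared by two people, so actual relationships = n * avg_contacts / 2."""
    potential = n * (n - 1) / 2
    actual = n * avg_contacts / 2
    return actual / potential

# Even if every member knows 50 others, a 10,000-person group
# realizes only about half a percent of its potential connections.
print(f"{density(10_000, 50):.2%}")
```

Since density is just `avg_contacts / (n - 1)`, holding individual capacity fixed while the group grows guarantees the fabric of connection thins out.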

An audience, by contrast, has a very sparse set of connections, and requires no mutuality between members. Thus an audience has no coordination costs associated with growth, because each new member of an audience creates only a single one-way connection. You need to know Yahoo’s address to join the Yahoo audience, but neither Yahoo nor any of its other users need to know anything about you. This disconnected quality is what makes it possible for an audience to grow much (much) larger than a connected community can, because an audience can always exist at the minimum number of required connections (N connections for N users).

The Emergence of Audiences in Two-way Media

Prior to the internet, the outbound quality of mass media could be ascribed to technical limits — TV had a one-way relationship to its audience because TV was a one-way medium. The growth of two-way media, however, shows that the audience pattern re-establishes itself in one way or another — large mailing lists become read-only, online communities (e.g. LambdaMOO, WELL, ECHO) eventually see their members agitate to stem the tide of newcomers, and users of sites like slashdot see fewer of their posts accepted. [3]

If real group engagement is limited to groups numbering in the hundreds or even the thousands [4], then the asymmetry and disconnection that characterizes an audience will automatically appear as a group of people grows in size, as many-to-many becomes few-to-many and most of the communication passes from center to edge, not edge to center or edge to edge. Furthermore, the larger the group, the more significant this asymmetry and disconnection will become: any mailing list or weblog with 10,000 readers will be very sparsely connected, no matter how it is organized. (This sparse organization of the larger group can of course encompass smaller, more densely clustered communities.)

More Is Different

Meanwhile, there are 500 million people on the net, and the population is still growing. Anyone who wants to reach even ten thousand of those people will not know most of them, nor will most of them know one another. The community model is good for spreading messages through a relatively small and tight-knit group, but bad for reaching a large and dispersed group, because the tradeoff between size and connectedness dampens message spread well below the numbers that can be addressed as an audience.

It’s significant that the only two examples we have of truly massive community spread of messages on the internet — email hoaxes and Outlook viruses — rely on overriding users’ disinclination to forward material widely, either by a social or a technological trick. When something like All Your Base or OddTodd bursts on the scene, the moment of its arrival comes not when it spreads laterally from community to community, but when that lateral spread attracts the attention of a media outlet [5].

No matter what the technology, large groups are different than small groups, because they create social pressures against community organization that can’t be trivially overcome. This is a pattern we have seen often, with mailing lists, BBSes, MUDs, usenet, and most recently with weblogs, the majority of which reach small and tightly knit groups, while a handful reach audiences numbering in the tens or even hundreds of thousands (e.g. andrewsullivan.com.)

The inability of a single engaged community to grow past a certain size, irrespective of the technology, will mean that over time, barriers to community scale will cause a separation between media outlets that embrace the community model and stay small, and those that adopt the publishing model in order to accommodate growth. This is not to say that all media that address ten thousand or more people at once are identical; having a Letters to the Editor column changes a newspaper’s relationship to its audience, even though most readers never write, most letters don’t get published, and most readers don’t read every letter.

Though it is tempting to think that we can somehow do away with the effects of mass media with new technology, the difficulty of reaching millions or even tens of thousands of people one community at a time is as much about human wiring as it is about network wiring. No matter how community minded a media outlet is, needing to reach a large group of people creates asymmetry and disconnection among that group — turns them into an audience, in other words — and there is no easy technological fix for that problem. 

Like the leavening effects of Letters to the Editor, one of the design challenges for social software is in allowing groups to grow past the limitations of a single, densely interconnected community while preserving some possibility of shared purpose or participation, even though most members of that group will never actually interact with one another.


Footnotes

1. Defining community as a communicating group risks circularity by ignoring other, more passive uses of the term, as with “the community of retirees.” Though there are several valid definitions of community that point to shared but latent characteristics, there is really no other word that describes a group of people actively engaged in some shared conversation or task, and infelicitous turns of phrase like ‘engaged communicative group’ are more narrowly accurate, but fail to capture the communal feeling that arises out of such engagement. For this analysis, ‘community’ is used as a term of art to refer to groups whose members actively communicate with one another. [Return]

2. The total number of possible connections in a group grows quadratically, because each member of a group must connect to every other member but themselves. In general, therefore, a group with N members has N × (N − 1) connections, which is the same as N² − N. If Carol and Ted knowing one another counts as a single relationship, there are half as many relationships as connections, so the relevant number is (N² − N)/2.

Because these numbers grow quadratically, every 10-fold increase in group size creates a 100-fold increase in possible connections; a group of ten has about a hundred possible connections (and half as many two-way relationships), a group of a hundred has about ten thousand connections, a thousand has about a million, and so on. The number of potential connections in a group passes a billion as group size grows past thirty thousand. [Return]
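The footnote’s arithmetic is easy to verify in a few lines (a minimal sketch of the same formulas):

```python
def connections(n):
    # Directed connections: each of n members connects to the
    # other n - 1, giving n * (n - 1), i.e. n^2 - n.
    return n * n - n

def relationships(n):
    # Mutual relationships: each pair is counted once,
    # giving (n^2 - n) / 2.
    return connections(n) // 2

for n in (10, 100, 1_000, 10_000):
    print(n, connections(n), relationships(n))
# connections(n) first exceeds one billion at n = 31,624.
```

Each tenfold jump in group size multiplies both counts by roughly a hundred, which is the quadratic growth the footnote describes.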

3. Slashdot is suffering from one of the common effects of community growth — the uprising of users objecting to the control the editors exert over the site. Much of the commentary on this issue, both at slashdot and on similar sites such as kuro5hin, revolves around the twin themes of understanding that the owners and operators of slashdot can do whatever they like with the site, coupled with a surprisingly emotional sense of betrayal that community control, in the form of moderation, is being overridden.

(More at kuro5hin and slashdot.) [Return]

4. In Grooming, Gossip, and the Evolution of Language (ISBN 0674363361), the primatologist Robin Dunbar argues that humans are adapted for social group sizes of around 150 or less, a size that shows up in a number of traditional societies, as well as in present day groups such as the Hutterite religious communities. Dunbar argues that the human brain is optimized for keeping track of social relationships in groups smaller than 150, but not larger. [Return]

5. In The Tipping Point (ISBN 0316346624), Malcolm Gladwell detailed the surprising spread of Hush Puppies shoes in the mid-’90s, from their adoption by a group of cool kids in the East Village to a national phenomenon. The breakout moment came when Hush Puppies were adopted by fashion designers, with one designer going so far as to place a 25 foot inflatable Hush Puppy mascot on the roof of his boutique in LA. The cool kids got the attention of the fashion designers, but it was the fashion designers who got the attention of the world, by taking Hush Puppies beyond the communities in which they started and spreading them outwards to an audience that looked to the designers. [Return]

Time-Warner and ILOVEYOU

First published in FEED, 05/00.

Content may not be king, but it was certainly making headlines last week. From the “content that should have been distributed but wasn’t” department, Time Warner’s spectacularly ill-fated removal of ABC from its cable delivery lineup ended up cutting off content essential to the orderly workings of America — Who Wants to Be A Millionaire? Meanwhile, from the “content that shouldn’t have been distributed but was” department, Spyder’s use of a loosely controlled medium spread content damaging to the orderly workings of America and everywhere else — the ILOVEYOU virus. Taken together, these events are making one message increasingly obvious: The power of corporations to make decisions about distribution is falling, and the power of individuals as media channels in their own right is rising.

The week started off with Time Warner’s effort to show Disney who was the boss, by dropping ABC from its cable lineup. The boss turned out to be Disney, because owning the delivery channel doesn’t give Time Warner half the negotiating leverage the cable owners at Time Warner thought it did. Time Warner was foolish to cut off ABC during sweeps month, when Disney had legal recourse, but their real miscalculation was assuming that owning the cable meant owning the customer. What had ABC back on the air and Time Warner bribing its customers with a thirty-day rebate was the fact that Americans resent any attempt to interfere with the delivery of content, legal issues or no. Indeed, the aftermath saw Peter Vallone of the NY City Council holding forth on the right of Americans to watch television. It is easy to mock this attitude, but Vallone has a point: People have become accustomed to constantly rising media access, from three channels to 150 in a generation, with the attendant rise in user access to new kinds of content. Any attempt to reintroduce artificial scarcity by limiting this access now creates so much blind fury that television might as well be ranked alongside water and electricity as utilities. The week ended as badly for Time Warner as it began, because even though their executives glumly refused to promise never to hold their viewers hostage as a negotiating tactic, their inability to face the wrath of their own paying customers had been exposed for all the world to see.

Meanwhile, halfway round the world, further proof of individual leverage over media distribution was mounting. The ILOVEYOU virus struck Thursday morning, and in less than twenty-four hours had spread further than the Melissa virus had in its entire life. The press immediately began looking for the human culprit, but largely missed the back story: The real difference between ILOVEYOU and Melissa was not the ability of Outlook to launch programs from within email, a security hole unchanged since last year. The real difference was the delivery channel itself — the number and interconnectedness of e-mail users — that makes ILOVEYOU more of a media virus than a computer virus. The lesson of a virus that starts in the Philippines and ends up flooding desktops from London to Los Angeles in a few hours is that while email may not be a mass medium that reaches millions at the same time, it has become a massive one, reaching tens of millions in mere hours, one user at a time. With even a handful of globally superconnected individuals, the transmission rates for e-mail are growing exponentially, with no end in sight, either for viruses or legitimate material. The humble practice of forwarding e-mail, which has anointed The Onion, Mahir, and the Dancing Baby as pop-culture icons, has now crossed one of those invisible thresholds that makes it a new kind of force — e-mail as a media channel more global than CNN. As the world grows more connected, the idea that individuals are simply media consumers looks increasingly absurd — anyone with an email address is in fact a media channel, and in light of ILOVEYOU’s success as a distribution medium, we may have to revise that six degrees of separation thing downwards a little.

Both Time Warner’s failure and ILOVEYOU’s success spread the bad news to several parties: TV cable companies, of course, but also cable ISPs, who hope to use their leverage over delivery to hold Internet content hostage; the creators of WAP, who hope to erect permanent tollbooths between the Internet and the mobile phone without enraging their subscribers; governments who hoped to control their citizens’ access to “the media” before e-mail turned out to be a media channel as well; and everyone who owns copyrighted material, for whom e-mail attachments threaten to create hundreds of millions of small leaks in copyright protection. (At least Napster has a business address.) There is a fear, shared by all these parties, that decisions about distribution — who gets to see what, when — will pass out of the hands of governments and corporations and into the hands of individuals. Given the enormity of the vested interests at stake, this scenario is still at the outside edges of the imaginable. But when companies that own the pipes can’t get any leverage over their users, and when users with access to e-mail can participate in a system whose ubiquity has been so dramatically illustrated, the scenario goes from unthinkable to merely unlikely.

The Toughest Virus of All

First published on Biz2, 07/00.

“Viral marketing” is back, making its return as one of the gotta-have-it phrases for dot-com business plans currently making the rounds. The phrase was coined (by Steve Jurvetson and Tim Draper in “Turning Customers into a Sales Force,” Nov. ’98, p103) to describe the astonishing success of Hotmail, which grew to 12 million subscribers 18 months after launch.

The viral marketing meme has always been hot, but now its expansion is being undertaken by a raft of emarketing sites promising to elucidate “The Six Simple Principles for Viral Marketing” or offering instructions on “How to Use Viral Marketing to Drive Traffic and Sales for Free!” As with anything that promises miracle results, there is a catch. Viral marketing can work, but it requires two things often in short supply in the marketing world: honesty and execution.

It’s all about control

It’s easy to see why businesses would want to embrace viral marketing. Not only is it supposed to create those stellar growth rates, but it can also reduce the marketing budget to approximately zero. Against this too-good-to-be-true backdrop, though, is the reality: Viral marketing only works when the user is in control and actually endorses the viral message, rather than merely acting as a carrier.

Consider Hotmail: It gives its subscribers a useful service, Web-based email, and then attaches an ad for Hotmail at the bottom of each sent message. Hotmail gains the credibility needed for successful viral marketing by putting its users in control, because when users recommend something without being tricked or co-opted, it provides the message with a kind of credibility that cannot be bought. Viral marketing is McLuhan marketing: The medium validates the message.

Viral marketing is also based on the perception of honesty: If the recipient of the ad fails to believe the sender is providing an honest endorsement, the viral effect disappears. An ad tacked on to a message without the endorsement of the author loses credibility; it’s no different from a banner ad. This element of trust becomes even more critical when firms begin to employ explicit viral marketing, where users go beyond merely endorsing ads to actually generating them.

These services — PayPal.com or Love Monkey, for example — rely on users to market the service because the value of the service grows with new recruits. If I want to pay you through PayPal, you must be a PayPal user as well (unlike Hotmail, where you just need a valid address to receive mail from me). With PayPal, I benefit if you join, and the value of the network grows for both of us and for all present and future users as well.

Love Monkey, a college matchmaking service, works similarly. Students at a particular college enter lists of fellow students they have crushes on, and those people are sent anonymous email asking them to join Love Monkey and enter their own list of crushes. It then notifies any two people whose lists include each other. Love Monkey must earn users’ trust before any viral effect can take place because Metcalfe’s Law only works when people are willing to interact. Passive networks such as cable or satellite television provide no benefits to existing users when new users join.

Persistent infections

Continuing the biological metaphor, viral marketing does not create a one-time infection, but a persistent one. The only thing that keeps Love Monkey users from being “infected” by another free matchmaking service is their continued use of Love Monkey. Viral marketing, far from eliminating the need to deliver on promises, makes businesses more dependent on the goodwill of their users. Any company that incorporates viral marketing techniques must provide quality services — ones that users are continually willing to vouch for, whether implicitly or explicitly.

People generally conspire to misunderstand what they should fear. The people rushing to embrace viral marketing misunderstand how difficult it is to make it work well. You can’t buy it, you can’t fake it, and you can’t pay your users to do it for you without watering down your message. Worse still, anything that is going to benefit from viral marketing must be genuinely useful, well designed, and flawlessly executed, so consumers repeatedly choose to use the service.

Sadly, the phrase “viral marketing” seems to be going the way of “robust” and “scalable” — formerly useful concepts which have been flattened by overuse. A year from now, viral marketing will simply mean word of mouth. However, the concept described by the phrase — a way of acquiring new customers by encouraging honest communication — will continue to be available, but only to businesses that are prepared to offer ongoing value.

Viral marketing is not going to save mediocre businesses from extinction. It is the scourge of the stupid and the slow, because it only rewards companies that offer great service and have the strength to allow and even encourage their customers to publicly pass judgment on that service every single day.

We (Still) Have a Long Way to Go

First published in Biz2, 06/00.

Just when you thought the Internet was a broken link shy of ubiquity, along comes the head of the Library of Congress to remind us how many people still don’t get it.

The Librarian of Congress, James Billington, gave a speech on April 14 to the National Press Club in which he outlined the library’s attitude toward the Net, and toward digitized books in particular. Billington said the library has no plans to digitize the books in its collection. This came as no surprise because governmental digitizing of copyrighted material would open a huge can of worms.

What was surprising were the reasons he gave as to why the library would not be digitizing books: “So far, the Internet seems to be largely amplifying the worst features of television’s preoccupation with sex and violence, semi-illiterate chatter, shortened attention spans, and a near-total subservience to commercial marketing. Where is the virtue in all of this virtual information?” According to the April 15 edition of the Tech Law Journal, in the Q&A section of his address, Billington characterized the desire to have the contents of books in digital form as “arrogance” and “hubris,” and said that books should inspire “a certain presumption of reverence.”

It seems obvious, but it bears repeating: Billington is wrong.

The Internet is the most important thing for scholarship since the printing press, and all information which can be online should be online, because that is the most efficient way to distribute material to the widest possible audience. Billington should probably be asked to resign, based on his contempt for U.S. citizens who don’t happen to live within walking distance of his library. More important, however, is what his views illustrate about how far the Internet revolution still has to go.

The efficiency chain

The mistake Billington is making is sentimentality. He is right in thinking that books are special objects, but he is wrong about why. Books don’t have a sacred essence, they are simply the best interface for text yet invented — lightweight, portable, high-contrast, and cheap. They are far more efficient than the scrolls and oral lore they replaced.

Efficiency is relative, however, and when something even more efficient comes along, it will replace books just as surely as books replaced scrolls. And this is what we’re starting to see: Books are being replaced by digital text wherever books are technologically inferior. Unlike digital text, a book can’t be in two places at once, can’t be searched by keyword, can’t contain dynamic links, and can’t be automatically updated. Encyclopaedia Britannica is no longer published on paper because the kind of information it is dedicated to — short, timely, searchable, and heavily cross-referenced — is infinitely better carried on CD-ROMs or over the Web. Entombing annual snapshots of the Encyclopaedia Britannica database on paper stopped making sense.

Books which enable quick access to short bits of text — dictionaries, thesauruses, phone books — are likely to go the way of Encyclopaedia Britannica over the next few years. Meanwhile, books that still require paper’s combination of low cost, high contrast, and portability — any book destined for the bed, the bath or the beach — will likely be replaced by the growth of print-on-demand services, at least until the arrival of disposable screens.

What is sure is that wherever the Internet arrives, it is the death knell for production in advance of demand, and for expensive warehousing, the current models of the publishing industry and of libraries. This matters for more than just publishers and librarians, however. Text is the Internet’s uber-medium, and with email still the undisputed killer app, and portable devices like the Palm Pilot and cell phones relying heavily or exclusively on text interfaces, text is a leading indicator for other kinds of media. Books are not sacred objects, and neither are radios, VCRs, telephones, or televisions.

Internet as rule

There are two ways to think about the Internet’s effect on existing media. The first is “Internet as exception”: treat the Net as a new entrant in an existing environment and guess at the eventual adoption rate. This method, so sensible for things such as microwaves or CD players, is wrong for the Internet, because it relies on the same
sentimentality about the world that the Librarian of Congress does. The Net is not an addition, it is a revolution; the Net is not a new factor in an existing environment, it is itself the new environment.

The right way to think about Internet penetration is “Internet as rule”: simply start with the assumption that the Internet is going to become part of everything — every book, every song, every plane ticket bought, every share of stock sold — and then look for the roadblocks to this vision. This is the attitude that got us where we are today, and this is the attitude that will continue the Net’s advance.

You do not need to force the Internet into new configurations — the Internet’s efficiency provides the necessary force. You only need to remove the roadblocks of technology and attitude. Digital books will become ubiquitous when interfaces for digital text are uniformly better than the publishing products we have today. And as the Librarian of Congress shows us, there are still plenty of institutions that just don’t understand this, and there is still a lot of innovation, and profit, to be achieved by proving them wrong.

RIP THE CONSUMER, 1900-1999

“The Consumer” is the internet’s most recent casualty. We have often
heard that Internet puts power in the hands of the consumer, but this
is nonsense — ‘powerful consumer’ is an oxymoron. The historic role
of the consumer has been nothing more than a giant maw at the end of
the mass media’s long conveyer belt, the all-absorbing Yin to mass
media’s all-producing Yang. Mass media’s role has been to package
consumers and sell their attention to the advertisers, in bulk. The
consumers’ appointed role in this system gives them no way to
communicate anything about themselves except their preference between
Coke and Pepsi, Bounty and Brawny, Trix and Chex. They have no way to
respond to the things they see on television or hear on the radio, and
they have no access to any media on their own — media is something
that is done to them, and consuming is how they register their
response. In changing the relations between media and individuals,
the Internet does not herald the rise of a powerful consumer. The
Internet heralds the disappearance of the consumer altogether, because
the Internet destroys the noisy advertiser/silent consumer
relationship that the mass media relies upon. The rise of the internet
undermines the existence of the consumer because it undermines the
role of mass media. In the age of the internet, no one is a passive
consumer anymore because everyone is a media outlet.

To profit from its symbiotic relationship with advertisers, the mass
media required two things from its consumers – size and silence. Size
allowed the media to address groups while ignoring the individual — a
single viewer makes up less than 1% of 1% of 1% of Frasier’s
10-million-strong audience. In this system, the individual matters not
at all: the standard unit for measuring television audiences is a
million households at a time. Silence, meanwhile, allowed the media’s
message to pass unchallenged by the viewers themselves. Marketers
could broadcast synthetic consumer reaction — “Tastes Great!”, “Less
filling!” — without having to respond to real customers’ real
reactions — “Tastes bland”, “More expensive”. The enforced silence
leaves the consumer with only binary choices — “I will or won’t watch
I Dream of Jeannie, I will or won’t buy Lemon Fresh Pledge” and so
on. Silence has kept the consumer from injecting any complex or
demanding interests into the equation because mass media is one-way media.

This combination of size and silence has meant that mass media, where
producers could address 10 million people at once with no fear of
crosstalk, has been a very profitable business to be in.

Unfortunately for the mass media, however, the last decade of the 20th
century was hell on both the size and silence of the consumer
audience. As AOL’s takeover of Time Warner demonstrated, while
everyone in the traditional media was waiting for the Web to become
like traditional media, traditional media has become vastly more like
the Web. TV’s worst characteristics — its blandness, its cultural
homogeneity, its appeal to the lowest common denominator — weren’t an
inevitable part of the medium, they were simply byproducts of a
restricted number of channels, leaving every channel to fight for the
average viewer with their average tastes. The proliferation of TV
channels has eroded the audience for any given show — the average
program now commands a fraction of the audience it did 10 years ago,
forcing TV stations to find and defend audience niches which will be
attractive to advertisers.

Accompanying this reduction in size is a growing response from
formerly passive consumers. Marketing lore says that if a customer has
a bad experience, they will tell 9 other people, but that figure badly
needs updating. Armed with nothing more than an email address, a
disgruntled customer who vents to a mailing list can reach hundreds of
people at once; the same person can reach thousands on ivillage or
deja; a post on slashdot or a review on amazon can reach tens of
thousands. Furthermore, the Internet never forgets — a complaint
made on the phone is gone forever, but a complaint made on the Web is
there forever. With mass media outlets shrinking and the reach of the
individual growing, the one-sided relationship between media and
consumer is over, and it is being replaced with something a lot less
conducive to unquestioning acceptance.

In retrospect, mass media’s position in the 20th century was an
anomaly and not an inevitability. There have always been both one-way
and two-way media — pamphlets vs. letters, stock tickers vs.
telegraphs — but in the 20th century the TV so outstripped the town
square that we came to assume that ‘large audience’ necessarily meant
‘passive audience’, even though size and passivity are unrelated.
With the Internet, we have the world’s first large, active medium, but
when it got here no one was ready for it, least of all the people who
have learned to rely on the consumer’s quiescent attention while the
Lucky Strike boxes tapdance across the screen. Frasier’s advertisers
no longer reach 10 million consumers, they reach 10 million other
media outlets, each of whom has the power to amplify or contradict the
advertiser’s message in something frighteningly close to real time. In
place of the giant maw are millions of mouths who can all talk
back. There are no more consumers, because in a world where an email
address constitutes a media channel, we are all producers now.

Ford, Subsidized Computers, and Free Speech

2/11/2000

Freedom of speech in the computer age was thrown dramatically into question by a pair of recent stories. The first was the news that Ford would be offering its entire 350,000-member global work force an internet-connected computer for $5 a month. This move, already startling, was made more so by the praise Ford received from Stephen Yokich, the head of the UAW, who said “This will allow us to communicate with our brothers and sisters from around the world.” This display of unanimity between management and the unions was in bizarre contrast to an announcement later in the week concerning Northwest Airlines flight attendants. US District Judge Donovan Frank ruled that the home PCs of Northwest Airlines flight attendants could be confiscated and searched by Northwest, which was looking for evidence of email organizing a New Year’s sickout. Clearly corporations do not always look favorably on communication amongst their employees — if the legal barriers to privacy on a home PC are weak now, and if a large number of workers’ PCs will
be on loan from their parent company, the freedom of speech and relative anonymity we’ve taken for granted on the internet to date will be seriously tested, and the law may be of little help.

Freedom of speech evangelists tend to worship at the altar of the First Amendment, but many of them haven’t actually read it. As with many sacred documents, it is far less sweeping than people often imagine. Leaving aside the obvious problem of its applicability outside the geographical United States, the essential weakness of the Amendment at the dawn of the 21st century is that it only prohibits governmental interference in speech; it says nothing about commercial interference in speech. Though you can’t prevent people from picketing on the sidewalk, you can prevent them from picketing inside your place of business. This distinction relies on the adjacency of public and private spaces, and the First Amendment only compels the Government to protect free speech in the public arena.

What happens if there is no public arena, though? Put another way, what happens if all the space accessible to protesters is commercially owned? These questions call to mind another clash between labor and management in the annals of US case law, Hudgens v. NLRB (1976), in which the Supreme Court ruled that private space only fell under First Amendment control if it had “taken on all the attributes of a town” (a doctrine which arose to cover worker protests in company towns). However, the attributes the Court requires in order to consider something a town don’t map well to the internet, because they include municipal functions like a police force and a post office. By that measure, has Yahoo taken on all the functions of a town? Has AOL? If Ford provides workers their only link with the internet, has Ford taken on all the functions of a town?

Freedom of speech is following internet infrastructure, where commercial control
blossoms and Government input withers. Since Congress declared the internet open for commercial use in 1991, there has been a wholesale migration from services run mostly by state colleges and Government labs to services run by commercial entities. As Ford’s move demonstrates, this has been a good thing for internet use as a whole — prices have plummeted, available services have mushroomed, and the number of users has skyrocketed — but we may be building an arena of all private stores and no public sidewalks. The internet is clearly the new agora, but without a new litmus test from the Supreme Court, all online space may become the kind of commercial space where the protections of the First Amendment will no longer reach.

AT&T and Cable Internet Access

11/4/1999

When is a cable not a cable? This is the question working its way through the Ninth
Circuit Court right now, courtesy of AT&T and the good people of Portland, Oregon. When the city of Portland and surrounding Multnomah County passed a regulation requiring AT&T to open its newly acquired cable lines to other internet service providers, such as AOL and GTE, the rumble over high-speed internet access began. In one corner was the City of Portland, weighing in on the side of competition, open access, and consumer choice. In the other corner stood AT&T, championing legal continuity: When AT&T made plans to buy MediaOne, the company was a local monopoly, and AT&T wants it to stay that way. It looked to be a clean fight. And yet, on Monday, one of the three appellate judges, Edward Leavy, threw in a twist: He asked whether there is really any such thing as the cable industry anymore. The answer to Judge Leavy’s simple question has the potential to radically alter the landscape of American media.

AT&T has not invested $100 billion in refashioning itself as a cable giant because it’s
committed to offering its customers high-speed internet access. Rather, AT&T has invested that money to buy back what it really wants: a national monopoly. Indeed, AT&T has been dreaming of regaining its monopoly status ever since the company was broken up into AT&T and the Baby Bells, back in 1984. With cable internet access, AT&T sees its opportunity. In an operation that would have made all the King’s horses and all the King’s men gasp in awe, AT&T is stitching a national monopoly back together out of the fragments of the local cable monopolies. If it can buy up enough cable outlets, it could become the sole provider for high-speed internet access for a sizable chunk of the country.

Cable is attractive to the internet industry because the cable industry has what internet service providers have wanted for years: a way to make money off content. By creating artificial scarcity — we’ll define this channel as basic, that channel as premium, this event is free, that one is pay-per-view — the cable industry has used its monopoly over the wires to derive profits from the content that travels over those wires. So, if you think about it, what AT&T is really buying is not infrastructure but control: By using those same television wires for internet access, it will be able to affect the net content its users can and can’t see (you can bet it will restrict access to Time-Warner’s offerings, for example), bringing pay-per-view economics to the internet.

In this environment, the stakes for the continued monopoly of the cable market couldn’t be higher, which is what makes Judge Leavy’s speculation about the cable industry so radical. Obviously frustrated with the debate, the Judge interjected, “It strikes me that everybody is trying to dance around the issue of whether we’re talking about a telecommunications service.” His logic seems straightforward enough. If the internet is a telecommunications service, and cable is a way to get internet access, then surely cable is a telecommunications service. Despite the soundness of this logic, however, neither AT&T nor Portland was ready for it, because beneath its simplicity is a much more extreme notion: If the Judge is right, and anyone who provides internet access is a telecommunications company, then the entire legal structure on which the cable industry is based — monopolies and service contracts negotiated city by city — will be destroyed, and cable will be regulated by the FCC on a national level. By declaring that regulations should cover how a medium is used and not merely who owns it, Judge Leavy would move the internet to another level of the American media pecking order. If monopolies really aren’t portable from industry to industry — if owning the wire doesn’t mean owning the customer — then this latest attempt to turn the internet into a walled garden will be turned back.

Notes from October’s “Internet World” Conference.

First published on SAR, 11/99.

Just got back from Internet World in New York. My approach to Internet World is always the same – skip the conference and the keynotes, ignore most of the mega-installations, and go straight to the trade floor, and then just walk every aisle and look (briefly) at every booth. Today was several hundred booths, and took 5 hours.

I do this because it doesn’t matter to me when one company thinks they have a good idea, but when I see three companies with the same idea, then I start to take notice. In addition, the floor of Internet World is a good proxy for the marketplace: if a company can’t make its business proposition clear to the person passing by their booth, they aren’t going to be able to make it clear in the market either. (One company had put up a single sign, announcing only that they were “Using Advanced Technology to Create Customer-Driven Solutions.” Another said they were “The Entertainment Marketing Promotion Organization.”)

Herewith my impressions of the IW ’99 zeitgeist:

Product categories

– Any web site which seemed like a good idea in 1997 is now an entire industry sector, with both copycat websites, and groups offering to outsource the function for you. Free home pages, free email, vertical search engines, etc etc etc

– Customer service is the newest such category: the interest generated in LivePerson has led to a number of similar services. Look for the human element to re-enter the sales channel as a point of differentiation in the next 12 months.

– Filtering is also now a bona fide category. As if on cue, Xerox fired 40 employees for inappropriate use of the net this week, with slightly more than 20 of these people fired for looking at porn on the job. Filtering has left behind its “Big Brother/Religious Conservative” connotations and become a business liability issue.

– Promotions/sweepstakes/turn-key affiliate programs are hot. Last year was all about ecommerce – this year, the need for viewers from whom to extract all those dollars has loomed as an issue as well, so spending money on tricks other than pure media buys to move traffic to your site has gained as a category.

– Language translation is hot. The economic rationale of high-traffic, ad-supported sites demands constant growth, but net growth in the States is slowing from impossibly fast to merely fast, even as other parts of the world hit the “annual doubling” part of their growth curve. The logical solution is prospecting for clients overseas – expect the rush to Europe to accelerate in the next 12 months.

(The previous two categories clash, as the legalities of running promotions vary enormously from place to place. In as much as international traffic is valuable, promotions are a very complicated way to get it. Sometime in the next 12 months, someone is going to test the waters for multi-national promotions, probably within the EU.)

– Quality Assurance is not hot, yet, but it has grown quite a bit from last year. eBay outages, post-IPO scrutiny, and the general pressures of scale and service are making people much more aware of QA issues, and several companies were hawking their services in interface design, human factors and stress testing.

– Embedded systems. Lots and lots of embedded systems. Since it takes hardware longer to adjust than software, I don’t know how fast this is moving, but by 2001, all devices smarter than a pencil sharpener will have a TCP/IP stack.

– Internet telephony is weakening as a category, as the phone-centric view (“Let’s route telephone conversations over the net.”) gives way to a net-centric view (“Let’s turn telephone conversations into digital data”). Many companies are integrating voice digitization into their products, but instead of merely turning these into phone calls, they’re also letting the user store them, edit them, forward them as a clickable link (think voicemail.yahoo.com) etc. Voice is going from being an application to a data format.

ASP – Application Service Providers (a.k.a. apps you log into)

– There is no such thing as an ASP. I went in looking for them, expecting the show to be filled with them, and found almost none. It only dawned on me after the fact that consumers buy products, not categories, and that in fact there were three major ASP categories, though they didn’t call themselves that. They were:

– Document storage. There have been ‘free net backup’ utilities for years, but this year, with mp3 and the need to store jpegs somewhere besides your home computer (ahem), the “i-drive/freedrive/z-drive” concept is really taking off.

– Document management. Many companies are vying to offer the killer app that will untie the document from the desktop, with some combination of net backup/file versioning/format conversion/update notification/web download in a kind of webified LAN. This document-driven (as opposed to device-driven) computing represents both the biggest opportunity and the biggest threat for Microsoft, since it is the document formats and not the software that really drives their current hold on the desktop. If MS can stand the pain of interoperating with non-MS devices, they could be the major stakeholder in this arena in two years time.

– Conferencing/calendaring. Personal computers are best used by persons, not groups. This category is an attempt to use the Web as a kind of GC, a “group computer”, by taking what the net has always done best, communication within and between groups, into the center of the business world.

The dogs that didn’t bark in the night:

– Very few companies selling pure XML products anymore – I counted 2. XML is on a lot of feature/compatibility lists, but makes up the core offering of few products.

– Ditto Java. Lots of products that use Java; very few banners that said “Java Inside”.

– Despite the large and growing impact of the net on gaming and vice-versa, there was almost no gaming presence there. Games are still a separate industry, and they have their own show (E3) which runs on a different logic. (Games are one of the only software categories currently not challenged by a serious free software movement.)

– Ditto movies. Only atomfilms was there. The Internet is everywhere, but movies are still in LA.

– Ditto music. .mp3’s are not an Internet revolution – they’re just another file format, after all. What they are is a music revolution, and the real action is with the people who own the content. Lots of people advertised .mp3 compatibility, but almost no one structured their product offering around them. Those battles will be fought elsewhere.

– To my surprise, there were also few companies offering 3rd party solutions to warehousing/shipping. I expect this will become a big deal after we see what happens during the second ecommerce Christmas.

Random notes

– It’s impossible to take any company seriously if they only have a kiosk in a country pavilion or a “Partner Pavilion” — they just end up looking like arm candy for Oracle or Germany, not like real companies.

– That goes double if there are people in colorful native garb in the booth.

– The conference is more feminized than last year, which was alarmingly like a car show. There were more women who knew what they were talking about on either side of the podium, and fewer “booth babes” per capita.

– The search engine war has broken out into the open (viz. every request on internet mailing lists in the past year by someone asking how to improve their site’s ranking on a search engine.) There were companies advertising automated creation of search-engine-only URLs to stuff the rankings. Look for the search engine firms to develop ways of filtering these within 6 months, probably using router data.

– The big booth holders have moved from computing to networking. Two years ago, Dell had a big presence, but the biggest booths now were the usuals (IBM, MS, Oracle) and then the telcos – AT&T, Qwest, GTE.

– For years, IW had several PC manufacturers. This year, the number of booths for companies who sell servers, power-supplies, etc., outnumbered the companies who concentrate on PCs. Every company at IW that does ship PCs will ship them with Linux pre-installed – no MS-only hardware vendors anymore.

– Every company has a Linux port or a story about working on one. Unlike last year, nobody says ‘Huh?’ or ‘No.’

– Your next computer will have a flat screen. There were more flat screens than glass monitors in the booths.

– The marketing effect of changing the Macintosh case colors came home; the iMac was the booth accoutrement of choice after the flat panel. Furthermore, since almost no one writes software just for the Mac anymore, having an iMac showing your product has become a visual symbol for ‘cross-platform’.

– The sound was unbearably loud at times – a perfect metaphor for the increasingly noisy market. Two illustrative moments: a bona fide smart person talking too quietly into a microphone, trying to explain how XML really works, while the song “Fame” is drowning him out from the next booth as accompaniment to a guy speed-finger-painting a giant portrait on black velvet. Also, you could hardly hear what Intel was up to because of the noise from the Microsoft pavilion, and vice-versa.

– 2 different companies offered internet/stereo compatibility. Expect convergence to merge home and car audio with the net before it merges the computer with the TV.

– The only large crowd with that ‘tell us more, tell us more’ look in their eyes was gathered around the Handspring booth. Handspring’s product, the Visor, is nothing more than a Palm Pilot in a PVC case with more apps and memory for fewer dollars — its biggest selling point in fact is how little it differs from the Pilot — but to see the crowd at the booth you’d have thought they were giving away hot buttered money.

whew. Glad that’s over til next year…

-clay

Internet World ’00: A View From the Floor

First published in SAR, 11/99.

Just got back from Internet World 2K in New York. I have gone to every Internet World in New York since 1993, and my approach is always the same – skip the conference and the keynotes and go straight to the trade floor, and then walk every aisle and look (briefly) at every booth. Today was several hundred booths, and took 7 hours, because the show not only filled the Javits end-to-end, but spilled over into a temporary structure erected north of the Javits to take the overflow.

I do this because it doesn’t matter to me when one company thinks they have a good idea, but when I see three companies with the same idea, then I start to take notice. As such, this write-up isn’t about individual companies so much as about industry themes.

In addition, the floor of Internet World is a good proxy for the marketplace: if a company can’t make its business proposition clear to the person passing by their booth, they aren’t going to be able to make it clear in the market either. Most memorable tag line this year: “Web Experience Management Solutions for Organizations that must deliver successful web interactions.” Oh good, my boss was just asking for that. Another goodie: “It begins with ‘e’” (for a company called eon). That probably tested great with the focus groups, but to my eye it looks like a commentary on how predictable naming strategies for internet businesses have become. Herewith my impressions of the IW2K zeitgeist:

What is Internet World for?

In years past, Internet World has had an identity crisis, as a tradeshow for everyone from router engineers to media buyers to end users. That is over now — Internet World, at least in NY, is now a tradeshow for companies that provide goods and services to companies using the internet for their business.

There were only a handful of booths for sites for end users — Atomfilms, MyFamily.com — and they looked completely out of place amidst the content syndicators, web site audit tools, multi-lingual translation services and HR services. Internet World has become a strictly b2b affair.

The New New Thing…

…didn’t show up. I didn’t see anything really new at the show, for either hardware or software. There were lots of new products and services, of course, and lots of new companies, but no new ideas and no new categories.

Internet World is now a trade show for a maturing industry, where the emphasis is not on creating new kinds of functions but in making improvements on existing ones — CRM, ASP, content management, hosting, et al.

The only really new thing at IW was Groove, Ray Ozzie’s audacious attempt at living up to the dream of the two-way web, and Ray was holding court in meeting rooms off to the side — Groove wasn’t even on the show floor.

I lost count of the number of times companies told me they were the leading this or the leading that in their space. No one, in 7 hours and many conversations, ever told me that their offering was unique, or that they had no competition — everyone wanted to be part of a well-defined sector, and then to be the leading company in that sector. I detect novelty fatigue, and a sense that it’s safer to be in a validated business sector than to be pioneering a new one.

No big theme jumped out this year either. In other years, there’s been Java, or push, or some other bandwagon people were eager to jump on or distance themselves from. This year, the only theme I could detect was a broad one, which I would call “Anxiety Alleviation”.

Gone are the days when convincing businesses to go online in the first place was the mission of IW: to a first approximation, every business is now using the net. Many of the exhibitors at this show were linked not by trying to provide these businesses great new opportunities, but by trying to solve particularly thorny problems, like –

– “How can I manage all this content?”

Content management is back as a category. As the home-grown solutions groan under the weight of constant updates, and more businesses have to move to dynamic content in order to be able to update the site without breaking it, the competition for web publishing/content management systems is heating up again, with Allaire introducing its first offering in this space, and Vignette, the clear leader in high-end web publishing, trying to outpace the competition by re-positioning itself as an ‘ebusiness solution’, instead of merely being a content tool.

– “How can I let my employees publish to the web while bypassing those recalcitrant sysadmins?”

Lots of competitors to Microsoft’s FrontPage as well, but designed specifically for business environments. These products are more workflow systems than full content management solutions, and are positioned around the idea that a company’s Web site can’t be allowed to suffer from the bottleneck of an organization’s IT group. The general solution for this class of products is to allow certain employees the ability to change web content, and then to give them tools to let them update pages without requiring them to know HTML.

– “How can I use the Web for groups of employees?”

Many offerings for intranet/groupware, designed to “enable collaborative workgroups” or “capture group intelligence.” The days when internet use was a largely personal affair seem to be ending, at least for businesses, as concern for creating shared Web environments grows. (The aforementioned groove.net is in this space as well.)

Another concern here was "knowledge management": companies offering ways to capture the collective wisdom of a sales force or work group.

– “How can I talk to/interact with my customers?”

Lots of companies offering to answer this question. Of all the "Anxiety Alleviators" of the show, this was the category with the most obvious anxiety around it: why are so few of the visitors to my site buying anything? There were lots of related solutions here — CRM, internet call centers, real-time chat with customers on site — and all of it is targeted at raising the number of people who buy from, and the number who return to, ecommerce web sites.

– “How can I get them this Zirconium ring they just bought?”

There were several companies offering to help with shipping and tracking, from a start-up that is taking a page from Kozmo and building package dropoffs in grocery stores, to UPS, who is trying to make up for the ground it lost in letting FedEx become the ecommerce shipper of choice for several high profile brands.

– “How can I monitor my customer base?”

The anxiety in the executive suites about how little they know about how customers are using their web site was also in full flower. WebTrends was there, of course, but most of the offerings weren't stand-alone log file analysis but full-on suites of "ebusiness intelligence monitoring" — customer experience audits, site performance analysis, data mining. All these businesses share the idea that by their behaviors, customers are trying to tell you something, and if you listen, you will be able to improve their experience and your bottom line.

– “How can I get a kicker on ecommerce revenues?”

I was amused to see several companies offering customized wrapping paper, logo printing, and other ways to generate an extra $1 on ecommerce transactions. Look for a lot of this sort of novelty/one off upsell this Christmas.

– “What if I get sued?”

There were several insurance companies, a category I haven’t even been aware of in previous Internet Worlds. How’s that for the net going mainstream?

– “How can I make my site more responsive?”

Many products or services designed to speed up the responsiveness of a web site, by caching, streaming, or optimizing the way it is served. Akamai has clearly raised the standards for web site responsiveness, and in light of the slow roll-out of broadband, there are a number of companies rushing into the breach.

– “How can I keep my servers running?”

As uptime becomes a mainstream business issue, it is driving growth in the number of companies offering remote monitoring, testing, and reporting on network health and web site responsiveness/uptime.

If they pay for you, they own you.

The only big surprise to me this year was the explosion of partner pavilions. I have always been suspicious of partner pavilions, since the real beneficiary is the sponsor, often at the expense of the nominal partner: “BEHOLD THE MAJESTY THAT IS ORACLE!!! (and here’s our partner, MumbleCo….)”.

The real effect of the Partner Pavilions is chest-beating. Companies that sell services or solutions (Oracle, MS, Real) use the size of their Partner Pavilion to signal how wide industry adoption of their offering is. As a result, I found it impossible to take any of the actual partners in the booths seriously — they are the corporate equivalent of booth babes. The bigger message, however, is that the giants we have now are largely going to be the giants we have in the future, and for smaller companies, their affiliations with these giants are going to become increasingly critical.

Even odder are the country pavilions, the “Made In Israel/Germany/ Sweden” booths with companies from their respective countries. These companies are likewise impossible to take seriously, because the message behind this supposed show of national pride is “Not Good Enough.”

Not good enough to make it to Internet World on their own, that is, a suspicion heightened when you see how many companies on the show floor have dual headquarters in the US and Tel Aviv, or Berlin, or Hong Kong without needing any help from their respective governments.

If a country really wants to spend its tax dollars helping local businesses join the internet economy, it should buy 3 or 4 separate spots on the show floor and give them to the companies, without the self-defeating “Made in Sweden” label.

The power of the hosting providers

Hosting providers, those unsexy businesses that build network centers and buy bandwidth, are suddenly the industry’s new power brokers. Globix, Global Center, and Genuity all had mammoth booths.

Because hosting providers have long-term relationships with their clients, unlike ad agencies and consultants, and because they get a monthly check, unlike product vendors, and because they are the only third party with access behind the firewall, they have emerged as the ideal broker of additional services to companies that already use them.

The global internet and the local internet

Geography has arrived as a major force. Language translation is everywhere as a service, as the days of the purely American-centric internet wane.

There are also a surprising number of services offering to aggregate content by local area, from a mapping company, to a search engine that maps URLs to geography, to a company that aggregates news/weather/etc and lets you pick content modules for your website, tailored to your area. The web is increasingly being mapped, literally and figuratively, to the real world.

Content syndication

Many companies are offering content syndication tools, where they provide some sort of dynamic service — news, calendaring, “browser buddy” navigation tools — which you license for your website.

Along with the content management tools and template-driven publishing tools, the push towards syndicated content is all part of the general anxiety over managing a dynamic and constantly updated web site without having to become a publishing company.

Mobile

Mobile has arrived, of course. The bad news for WAP is that everyone is selling “Mobile Solutions”, of which WAP is presented as a sub-set. WAP had hoped to become synonymous with the mobile internet, and that didn’t happen.

In particular, Palm is emerging as the alternate platform of choice, a move further dramatized by the side-by-side positioning of the packed Palm pavilion and the relatively sparsely attended WAP pavilion. Because Palm is a real OS, several companies are offering Palm browsers (YadaYada) or Palm servers (Thin Air) to extend Palm’s functions as a mobile device.

Talk is back

Voice over the internet is back in another incarnation. Last year, there was lots of internet telephony — that was all but gone this year, replaced by several companies offering talk as a service to increase customer retention and loyalty, whether through sales agents, pre-recorded messages, or help functions.

VR/3D is back, with a vengeance.

Another hardy perennial is the idea of 3D shopping, sometimes with a 3D avatar, sometimes just to look at the product.

This has always seemed to me like the Videophone of the Web, an idea that is always being pushed by technologists but never adopted by users. In keeping with this year’s theme of Anxiety Alleviation, however, 3D is no longer being sold as a way to make a cool web site, but as just the ticket for closing sales and encouraging customer loyalty.

I’m not holding my breath.

Ontology is back, and so are search engines

After a few years in which AI and agents were largely ignored as impractical, and in which a few major portals had the search space locked up, specialized search engines are back, especially ones armed with "inference engines" (sets of rules about which words mean what and are related to which other words) or geared to smaller-than-Web-wide searches (industry verticals, corporate intranets, or geographic areas). Google proved that as the amount of data on the Web continues to explode, new search solutions can find a market.

The dogs that didn’t bark in the night.

– Digital Signatures. There were few companies trying to take advantage of the new Digital Signature Act. By next year, expect services offering electronically enforceable signatures to be legion.

– E-books. There was only one standalone “ebook” product I saw. This is probably because ebooks don’t (yet) have any b2b applications.

– Peer-to-peer. Softwax was the only peer-to-peer company there, and no one was touting their peer-to-peer solutions. By next year this meme will either be ubiquitous or gone.

– Linux. Only one company specifically touting Linux. When it comes to operating systems, the concerns represented at the show were much more about uptime, remote management, and reporting, rather than being about which operating system to use.

– Napster, and the Entertainment Industry. Not a peep, except Atom Film and one lonely NAPTE booth in the North Pavilion. The entertainment industry's issues with the internet are being aired elsewhere.

– Likewise, there were very few companies offering employee monitoring/copyright compliance software to companies, or Digital Rights Management services to producers of content.

Other random notes.

– Two years ago, ‘IP’ at Internet World stopped meaning ‘Internet Protocol’ and started meaning ‘Intellectual Property.’ This year, ‘POP’ stopped meaning ‘Post Office Protocol’ and started meaning ‘Point of Presence’ marketing.

– Lots of job recruitment sites and outsourced business solutions for tech employees this year. Despite the layoffs in recent months, the need for skilled labor is still strong.

– Magazines are a trailing indicator of a business sector. There were lots of magazines there, presumably hoping to validate themselves as good places for dot coms to do all that spending on ads. Even the NY Times got into the act, publishing a gratuitous "Ecommerce" section on the first day of the trade show, featuring not a single article that wouldn't have been at home in the 'Circuits' section.

Expect this to contract this year. As a trailing indicator, it will soon be the magazine world's turn to get big, get niche, or get out.

– Associations are a trailing indicator as well. There were lots of associations you could join this year, which blur in my mind into a "Worldwide Consortium of Affiliated Web Managers and Marketers Society". This presumably is occasioned by the desire to keep a new crop of upstarts from displacing us, the old crop of upstarts.

– AOL's metamorphosis into an internet company is complete. The AOL booth, complete with partner pavilion, and the ads for AOL 6.0 and AOL Anywhere, put it at the center of the action.

– XML is no longer a service; now it's a feature. Last year there were several companies offering to help you with your XML needs. This year, XML is simply a check-off on many products and services.

Like Java, XML won't live up to the hype, but like Java, once it's been widely enough adopted, it will find its niche and occupy it.

– The ASP model is now the norm. I had several conversations with companies selling standalone corporate software products, but almost all of them assured me they had an ASP alternative in the works. This is related to the new power of the hosting providers, who have the leverage to hasten adoption of ASP services.

– I did see two services that took bits of old models to make a new thing. The first was Navixo, which is taking the "Yahoo Browser Bar" idea, but allowing companies to populate it differently when users come to their sites. It's a way, in other words, to add real pull-down menus to site navigation, an idea that never really caught on with the launch of DHTML.

The other was a company called Enkia, which uses its own database to build “Find more like this…” features for sites using an ASP model, rather than requiring those sites to install local software on their database.

– The personal computer was nowhere to be seen. PCs are now as taken for granted as routers, and while Compaq and IBM were there, they weren’t showing PCs, while the pure PC manufacturers like Toshiba and Gateway weren’t there at all.

– Ominously for Apple, the iMac, last year's booth accoutrement of choice, was also nowhere to be seen.

– In general, this show was the most characterless Internet World I've been to. You would never know, walking the trade floor, that this industry had ever been considered threatening to corporate America, since by now it largely is corporate America. I saw one head of purple hair, but otherwise there was almost no one there who would be unwelcome in the halls of General Electric.

As if to underline the point, as I was leaving I ran into Ugly George, the Flying Dutchman of New York's Internet Industry, lugging his albatross of a web cam with him, and not only did he seem bizarrely out of place, even his identity had been corporatized, as his access pass read "George Ugly". Glad that's over. See you next year. -clay