Enter the Decentralized Zone

Digital security is a trade-off. If securing digital data were the only concern a business had, users would have no control over their own computing environment at all: the Web would be forbidden territory; every disk drive would be welded shut. That doesn’t happen, of course, because workers also need the flexibility to communicate with one another and with the outside world. The current compromise between security and flexibility is a sort of intranet-plus-firewall sandbox, where the IT department sets the security policies that workers live within. This allows workers a measure of freedom and flexibility while giving their companies heightened security.

That was the idea, anyway. In practice, the sandbox model is broken. Some of the problem is technological, of course, but most of the problem is human. The model is broken because the IT department isn’t rewarded for helping workers do new things, but for keeping existing things from breaking. Workers who want to do new things are slowly taking control of networking, and this movement toward decentralized control cannot be reversed.

The most obvious evidence of the gap between the workers’ view of the world and the IT department’s is in the proliferation of email viruses. When faced with the I Love You virus and its cousins, the information technology department lectures users against opening attachments. Making such an absurd suggestion only underlines how out of touch the IT group is: if you’re not going to open attachments, you may as well not show up for work. Email viruses are plaguing the workplace because users must open attachments to get their jobs done; the IT department has not given them another way to exchange files. For all the talk of intranets and extranets, the only simple, general-purpose tool for moving files between users, especially users outside the corporation, is email. Faced with an IT department that thinks not opening attachments is a reasonable option, end users have done the only sensible thing: ignore the IT department.

Email was just the beginning. The Web has created an ever-widening hole in the sandbox. Once firewalls were opened up to the Web, other kinds of services, like streaming media, began arriving through the same hole: port 80. Now that workers have won access to the Web through port 80, it has become the front door to a whole host of services, including file sharing.

And now there’s ICQ. At least the IT folks knew the Web was coming; in many cases, they even installed the browsers themselves. ICQ (and its instant messaging brethren) is something else entirely: the first widely adopted piece of business software that no CTO evaluated and no administrator installed. Any worker who would ever have gone to the boss and asked for something that allowed them to trade real-time messages with anyone on the Net would have been turned down flat. So they didn’t ask, they just did it, and now it can’t be undone. Shutting off instant messaging is not an option.

The flood is coming. And those three holes (email for file transfer, port 80 drilled through the firewall, and business applications that workers can download and install themselves) are still only cracks in the dike. The real flood is coming, with companies such as Groove Networks, Roku Technologies, and Aimster lining up to offer workers groupware solutions that don’t require centralized servers, and don’t make users ask the IT department for either help or permission to set them up.
The IT workers of any organization larger than 50 people are now in an impossible situation: They are rewarded for negative events (no crashes or breaches) even as workers are inexorably eroding their ability to build or manage a corporate sandbox. The obvious parallel here is with the PC itself; 20 years ago, the mainframe guys laughed at the toy computers workers were bringing into the workplace because they knew that computation was too complex to be handled by anyone other than a centralized group of trained professionals. Today, we take it for granted that workers can manage their own computers. But we still regard network access and configuration as something that needs to be centrally managed by trained professionals, even as workers take network configuration under their control. There is no one right answer: digital security is a trade-off. But no solution that requires centralized control over what network users do will succeed. It’s too early to know what the new compromise between security and flexibility will look like, but it’s not too early to know that the old compromise is over.

Hailstorm: Open Web Services Controlled by Microsoft

First published on O’Reilly’s OpenP2P on May 30, 2001.

So many ideas and so many technologies are swirling around P2P — decentralization, distributed computing, web services, JXTA, UDDI, SOAP — that it’s getting hard to tell whether something is or isn’t P2P, and it’s unclear that there is much point in trying to do so just for the sake of a label.

What there is some point in doing is evaluating new technologies to see how they fit in or depart from the traditional client-server model of computing, especially as exemplified in recent years by the browser-and-web-server model. In this category, Microsoft’s HailStorm is an audacious, if presently ill-defined, entrant. Rather than subject HailStorm to some sort of P2P litmus test, it is more illuminating to examine where it embraces the centralization of the client-server model and where it departs by decentralizing functions to devices at the network’s edge.

The design and implementation of HailStorm is still in flux, but the tension that exists within HailStorm between centralization and decentralization is already quite vivid.

Background

HailStorm, which launched in March with a public announcement and a white paper, is Microsoft’s bid to put some meat on the bones of its .NET initiative. It is a set of Web services whose data is contained in a set of XML documents, and which is accessed from various clients (or “HailStorm endpoints”) via SOAP (Simple Object Access Protocol). These services are organized around user identity, and will include standard functions such as myAddress (electronic and geographic address for an identity); myProfile (name, nickname, special dates, picture); myCalendar; myWallet; and so on.
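As a rough sketch of that access model, the fragment below shows what a SOAP query against an identity-centric service such as myAddress might look like. The endpoint URL, XML element names, and the ticket header are hypothetical placeholders for illustration only; they are not Microsoft’s published wire format, which was still in flux at the time of writing.

```python
# Hypothetical sketch: querying a HailStorm-style "myAddress" service over SOAP.
# The endpoint URL, XML elements, SOAPAction value, and ticket header are
# illustrative placeholders, not Microsoft's actual wire format.
import urllib.request

SOAP_ENVELOPE = """<?xml version="1.0" encoding="utf-8"?>
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">
  <s:Header>
    <!-- Every request is made on behalf of an authenticated identity -->
    <passportTicket>PLACEHOLDER-TICKET-FOR-AUTHENTICATED-USER</passportTicket>
  </s:Header>
  <s:Body>
    <!-- Ask the identity-centric service for the user's work address -->
    <queryRequest service="myAddress">
      <select>address[@type='work']</select>
    </queryRequest>
  </s:Body>
</s:Envelope>"""

def query_my_address(endpoint: str) -> str:
    """POST the SOAP envelope to a (hypothetical) HailStorm endpoint, return raw XML."""
    request = urllib.request.Request(
        endpoint,
        data=SOAP_ENVELOPE.encode("utf-8"),
        headers={
            "Content-Type": "text/xml; charset=utf-8",
            "SOAPAction": '"queryRequest"',  # placeholder action name
        },
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return response.read().decode("utf-8")

if __name__ == "__main__":
    print(query_my_address("https://hailstorm.example.invalid/myAddress"))
```

The point of the sketch is simply that any device capable of sending XML over HTTP can play; nothing in the request depends on a Microsoft operating system or runtime.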

HailStorm can best be thought of as an attempt to revisit the original MS-DOS strategy: Microsoft writes and owns the basic framework, and third-party developers write applications to run on top of that framework.

Three critical things differentiate the networked version of this strategy, as exemplified by HailStorm, from the earlier MS-DOS strategy:

  • First, the Internet has gone mainstream. This means that Microsoft can exploit both looser and tighter coupling within HailStorm — looser in that applications can have different parts existing on different clients and servers anywhere in the world; tighter because all software can phone home to Microsoft to authenticate users and transactions in real time.
  • Second, Microsoft has come to the conclusion that its monopoly on PC operating systems is not going to be quickly transferable to other kinds of devices (such as PDAs and servers); for the next few years at least, any truly ubiquitous software will have to run on non-MS devices. This conclusion is reflected in HailStorm’s embrace of SOAP and XML, allowing HailStorm to be accessed from any minimally connected device.
  • Third, the world has shifted from “software as product” to “software as service,” where software can be accessed remotely and paid for in per-use or per-time-period licenses. HailStorm asks both developers and users to pay for access to HailStorm, though the nature and size of these fees are far from worked out.

Authentication-Centric

The key to shifting from a machine-centric application model to a distributed computing model is to shift the central unit away from the computer and towards the user. In a machine-centric system, the software license was the core attribute — a software license meant a certain piece of software could be legally run on a certain machine. Without such a license, that software could not be installed or run, or could only be installed and run illegally.

In a distributed model, it is the user and not the hardware that needs to be validated, so user authentication becomes the core attribute — not “Is this software licensed to run on this machine?” but “Is this software licensed to run for this user?” To accomplish this requires a system that first validates users, and then maintains a list of attributes in order to determine what they are and are not allowed to do within the system.
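A toy illustration of that shift, using invented names and data structures that do not reflect any actual HailStorm interface: the check moves from a license table keyed by machine to an attribute list keyed by authenticated user.

```python
# Minimal sketch of the machine-centric vs. user-centric check described above.
# All names and data here are hypothetical.

MACHINE_LICENSES = {"WORKSTATION-42": {"spreadsheet"}}                 # old model: license store keyed by machine
USER_ENTITLEMENTS = {"alice@example.com": {"myCalendar", "myWallet"}}  # new model: attributes keyed by user

def machine_may_run(machine_id: str, app: str) -> bool:
    """Old question: is this software licensed to run on this machine?"""
    return app in MACHINE_LICENSES.get(machine_id, set())

def user_may_use(user_id: str, service: str, authenticated: bool) -> bool:
    """New question: is this authenticated user allowed to use this service, on any device?"""
    return authenticated and service in USER_ENTITLEMENTS.get(user_id, set())

print(machine_may_run("WORKSTATION-42", "spreadsheet"))   # True: the entitlement follows the machine
print(user_may_use("alice@example.com", "myWallet", True))  # True: the entitlement follows the user
```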

HailStorm is thus authentication-centric, and is organized around Passport. HailStorm is designed to create a common set of services which can be accessed globally by authenticated users, and to this end it provides common definitions for:

  • Identity
  • Security
  • Definitions and Descriptions

or as Microsoft puts it:

From a technical perspective, HailStorm is based on Microsoft Passport as the basic user credential. The HailStorm architecture defines identity, security, and data models that are common to all HailStorm services and ensure consistency of development and operation.

Decentralization

The decentralized portion of HailStorm is a remarkable departure for Microsoft: they have made accessing HailStorm services on non-Microsoft clients a core part of the proposition. As the white paper puts it:

The HailStorm platform uses an open access model, which means it can be used with any device, application or services, regardless of the underlying platform, operating system, object model, programming language or network provider. All HailStorm services are XML Web services accessed via SOAP; no Microsoft runtime or tool is required to call them.

To underscore the point at the press conference, they demonstrated HailStorm services running on a Palm, a Macintosh, and a Linux box.

While Microsoft stresses the wide support for HailStorm clients, the relationship of HailStorm to the Web’s servers is less clear. In the presentation, they suggested that servers running non-Microsoft operating systems like Linux or Solaris can nevertheless “participate” in HailStorm, though they didn’t spell out how that participation would be defined.

This decentralization of the client is designed to allow HailStorm applications to spread as quickly as possible. Despite their monopoly in desktop operating systems, Microsoft does not have a majority market share for any of the universe of non-PC devices — PDAs, set-tops, pagers, game consoles, cell phones. This is not to say that they don’t have some notable successes — NT has over a third of the server market, the iPaq running the PocketPC operating system is becoming increasingly popular, and the XBox has captured the interest of the gaming community. Nevertheless, hardware upgrade cycles are long, so there is no way Microsoft can achieve market dominance in these categories as quickly as they did on the desktop.

Enter HailStorm. HailStorm offers a way for Microsoft to sell software and services on devices that aren’t using Microsoft operating systems. This is a big change — Microsoft typically links its software and operating systems (SQLServer won’t run outside an MS environment; Office is only ported to the Mac). By tying HailStorm to SOAP and XML rather than specific client environments, Microsoft says it is giving up its ability to control (or even predict) what software, running on which kinds of devices, will be accessing HailStorm services.

The embrace of SOAP is particularly significant, as it seems to put HailStorm out of reach of many of Microsoft’s other business battles (vs. Java, vs. Linux, vs. PalmOS, and so on): according to Microsoft, any device using SOAP will be able to participate in HailStorm without prejudice, with “no Microsoft runtime or tool” required. The full effect of this client-insensitivity, however, will depend on how much Microsoft alters Kerberos or SOAP in ways that limit or prevent other companies from writing HailStorm-compliant applications.

HailStorm is Microsoft’s most serious attempt to date to move from competing on unit sales to selling software as a service, and the announced intention to allow any sort of client to access HailStorm represents a remarkable decentralization for Microsoft.

It is not, however, a total decentralization by any means. In decentralizing their control over the client, Microsoft seeks to gain control over a much larger set of functions, for a much larger group of devices, than they have now. The functions that HailStorm centralizes are in many ways more significant than the functions it decentralizes.

Centralization

In the press surrounding HailStorm, Microsoft refers to its “massively distributed” nature, its “user-centric” model, and even makes reference to its tracking of user presence as “peer-to-peer.” Despite this rhetoric, however, HailStorm as described is a mega-service, and may be the largest client-server installation ever conceived.

Microsoft addressed the requirements for running such a mega-service, saying:

Reliability will be critical to the success of the HailStorm services, and good operations are a core competency required to ensure that reliability. […] Microsoft is also making significant operational investments to provide the level of service and reliability that will be required for HailStorm services. These investments include such things as physically redundant data centers and common best practices across services.

This kind of server installation is necessary for HailStorm, because Microsoft’s ambitions for this service are large: they would like to create the world’s largest address registry, not only of machines but of people as well. In particular, they would like to host the identity of every person on the Internet, and mediate every transaction in the consumer economy. They will fail at such vast goals, of course, but succeeding at even a small subset of such large ambitions would be a huge victory.

Because they have decentralized their support of the client, they must necessarily make large parts of HailStorm open, but always with a caveat: while HailStorm is open for developers to use, it is not open for developers to build on or revise. Microsoft calls this an “Open Access” model — you can access it freely, but not alter it freely.

This does not mean that HailStorm cannot be updated or revised by the developer community; it simply means that any changes made to HailStorm must be approved by Microsoft, a procedure they call “Open Process Extensibility.” This process is not defined within the white paper, though it seems to mean revising and validating proposals from HailStorm developers, which is to say, developers who have paid to participate in HailStorm.

With HailStorm, Microsoft is shifting from a strategy of controlling software to controlling transactions. Instead of selling units of licensed software, Microsoft will use HailStorm to offer services to other developers, even those working on non-Microsoft platforms, while owning the intellectual property which underlies the authentications and transactions, a kind of “describe and defend” strategy.

“Describe and defend” is a move away from “software as unit” to “software as service,” and means that their control of the HailStorm universe will rely less on software licenses and more on patented or copyrighted methods, procedures, and database schema.

While decentralizing client code, Microsoft centralizes the three core aspects of the service:

  • Identity (using Passport)
  • Security (using Kerberos)
  • Definitions and Descriptions (using HailStorm’s globally standardized schema)

Identity: The goal with Passport is simple — ubiquity. As Bill Gates put it at the press conference: “So it’s our goal to have virtually everybody who uses the Internet to have one of these Passport connections.”

HailStorm provides a set of globally useful services which, because they are authentication-centric, require all users to participate in its Passport program. This allows Microsoft to be a gatekeeper at the level of individual participation — an Internet user without a Passport will not exist within the system, and will not be able to access or use HailStorm services. Because users pay to participate in the HailStorm system, in practice this means that Microsoft will control a user’s identity, leasing it to them for use within HailStorm for a recurring fee.

It’s not clear how open the Passport system will be. Microsoft has a history of launching web initiatives with restrictive conditions, and then dropping the restrictions that limit growth: the original deployment of Passport required users to get a Hotmail account, a restriction that was later dropped when this adversely affected the potential size of the Passport program. You can now get a Passport with any email address, and since an email address is guaranteed to be globally unique, any issuer of email addresses is also issuing potentially valid Passport addresses.

The metaphor of a passport suggests that several different entities agree to both issue and honor passports, as national governments presently do with real passports. There are several entities that have issued email addresses to millions or tens of millions of users — AOL, Yahoo, AT&T, British Telecom, et al. Microsoft has not spelled out how or whether these entities will be allowed to participate in HailStorm, but it appears that all issuing and validation of Passports will be centralized under Microsoft’s control.

Security: Authentication of a HailStorm user is provided via Kerberos, a secure method developed at MIT for authenticating a request for a service in a computer network. Last year, Microsoft added its own proprietary extension to Kerberos, which creates potential incompatibilities between clients running non-Microsoft versions of Kerberos and servers running Microsoft’s versions.

Microsoft has published the details of its version of Kerberos, but it is not clear if interoperability with the Microsoft version of Kerberos is required to participate in HailStorm, or if there are any licensing restrictions for developers who want to write SOAP clients that use Kerberos to access HailStorm services.

Definitions and Descriptions: This is the most audacious aspect of HailStorm, and the core of the describe-and-defend strategy. Microsoft wants to create a schema which describes all possible user transactions, and then copyright that schema, in order to create and manage the ontology of life on the Internet. In HailStorm as it was described, all entities, methods, and transactions will be defined and mediated by Microsoft or Microsoft-licensed developers, with Microsoft acting as a kind of arbiter of descriptions of electronic reality:

The initial release of HailStorm provides a basic set of possible services users and developers might need. Beyond that, new services (for example, myPhotos or myPortfolio) and extensions will be defined via the Microsoft Open Process with developer community involvement. There will be a single schema for each area to avoid conflicts that are detrimental to users (like having both myTV and myFavoriteTVShows) and to ensure a consistent architectural approach around attributes like security model and data manipulation. Microsoft’s involvement in HailStorm extensions will be based on our expertise in a given area.

The business difficulties with such a system are obvious. Will the airline industry help define myFrequentFlierMiles, copyright Microsoft, when Microsoft also runs the Expedia travel service? Will the automotive industry sign up to help the owner of CarPoint develop myDealerRebate?

Less obvious but potentially more dangerous are the engineering risks in a single, global schema, because there are significant areas where developers might legitimately disagree about how resources should be arranged. Should business users record the corporate credit card as a part of myWallet, alongside their personal credit card, or as part of myBusinessPayments, alongside their EDI and purchase order information? Should a family’s individual myCalendars be a subset of ourCalendar, or should they be synched manually? Is it really so obvious that there is no useful distinction between myTV (the box, through which you might also access DVDs and even WebTV) and myFavoriteTVShows (the list of programs to be piped to the TiVo)?
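A small, purely hypothetical sketch of the problem: the same fact (a business user’s corporate credit card) can be arranged in two equally defensible ways, and code written against one arrangement silently misses data stored under the other. A single global schema must pick one arrangement, and every developer then inherits that choice. Nothing here reflects HailStorm’s actual schema.

```python
# Two equally defensible (and entirely hypothetical) arrangements of the same fact.

layout_a = {  # the corporate card lives in myWallet, beside the personal card
    "myWallet": {
        "cards": [
            {"type": "personal", "number": "4111-XXXX"},
            {"type": "corporate", "number": "5500-XXXX"},
        ]
    }
}

layout_b = {  # the corporate card lives in myBusinessPayments, beside EDI and purchase orders
    "myWallet": {"cards": [{"type": "personal", "number": "4111-XXXX"}]},
    "myBusinessPayments": {
        "cards": [{"type": "corporate", "number": "5500-XXXX"}],
        "purchaseOrders": [],
    },
}

def corporate_cards(profile: dict) -> list:
    """Code written against layout_a: look for corporate cards only in myWallet."""
    return [c for c in profile.get("myWallet", {}).get("cards", []) if c["type"] == "corporate"]

print(corporate_cards(layout_a))  # finds the card
print(corporate_cards(layout_b))  # finds nothing, even though the card exists
```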

Microsoft proposes to take over all the work of defining the conceptual entities of the system, promising that this will free developers to concentrate their efforts elsewhere:

By taking advantage of Microsoft’s significant investment in HailStorm, developers will be able to create user-centric solutions while focusing on their core value proposition instead of the plumbing.

Unmentioned is what developers whose core value proposition is the plumbing are to do with HailStorm’s global schema. With HailStorm, Microsoft proposes to divide the world into plumbers and application developers, and to take over the plumbing for itself. This is analogous to the split early in its history, when Microsoft wrote the DOS operating system and let other groups write the software that ran on top of DOS.

Unlike DOS, which could be tied to a single reference platform — the “IBM compatible” PC — HailStorm is launching into a far more heterogeneous environment. However, this also means that the competition is far more fragmented. Given the usefulness of HailStorm to developers who want to offer Web services without rethinking identity or authentication from the ground up (one of the biggest hurdles to widespread use of Sun’s JXTA), and given the possible network effects that a global credentials schema could create, HailStorm could quickly account for a plurality of Internet users. Even a 20% share of every transaction made by every Internet user would make Microsoft by far the dominant player in the world of Web services.

Non-Microsoft Participation

With HailStorm, Microsoft has abandoned tying its major software offerings to its client operating systems. Even if every operating system it has — NT/Win2k, PocketPC, Stinger, et al. — spreads like kudzu, the majority of the world’s non-PC devices will still not be controlled by Microsoft anytime soon. By adopting open standards such as XML and SOAP, Microsoft hopes to attract the world’s application developers to write for the HailStorm system now or soon, and by owning the authentication and schema of the system, they hope to be the mediator of all HailStorm users and transactions, or the licenser of all members of the HailStorm federation.

Given the decentralization on the client side, where a Java program running on a Linux box could access HailStorm, the obvious question is “Can a HailStorm transaction take place without talking to Microsoft-owned or Microsoft-licensed servers?”

The answer seems to be no, for two, and possibly three, reasons.

First, you cannot use a non-Passport identity within HailStorm, and at least for now, that means that using HailStorm requires a Microsoft-hosted identity.

Second, you cannot use a non-Microsoft copyrighted schema to broker transactions within HailStorm, nor can you alter or build on existing schema without Microsoft’s permission.

Third, developers might not be able to write HailStorm services or clients without using the Microsoft-extended version of Kerberos.

At three critical points in HailStorm, Microsoft is using an open standard (email address, Kerberos, SOAP) and putting it into a system it controls, not through software licensing but through copyright (Passport, the Microsoft-extended Kerberos, the HailStorm schema). By making the system transparent to developers but not freely extensible, Microsoft hopes to gain the growth that comes with openness, while avoiding the erosion of control that also comes with openness.

This is a strategy many companies have tried before — sometimes it works and sometimes it doesn’t. CompuServe collapsed pursuing a partly open, partly closed strategy, while AOL flourished. Linux has spread remarkably with a completely open strategy, but many Linux vendors have suffered. Sun and Apple are both wrestling with “open enough to attract developers, but closed enough to stave off competitors” strategies with Solaris and OS X, respectively.

HailStorm will not be launching in any real way until 2002, so it is too early to handicap Microsoft’s newest entrant in the “open for users but closed for competitors” category. But if it succeeds at even a fraction of its stated goals, HailStorm will mark the full-scale arrival of Web services and set the terms of both competition and cooperation within the rest of the industry.

P2P Backlash!

First published on O’Reilly’s OpenP2P.

The peer-to-peer backlash has begun. On the same day, the Wall St. Journal ran an article by Lee Gomes entitled “Is P2P plunging off the deep end?”, while Slashdot’s resident commentator, Jon Katz, ran a review of O’Reilly’s Peer-to-Peer book under the title “Does peer-to-peer suck?”

It’s tempting to write this off as part of the Great Wheel of Hype we’ve been living with for years:

New Thing happens; someone thinks up catchy label for New Thing; press picks up on New Thing story; pundits line up to declare New Thing “Greatest Since Sliced Bread.” Whole world not transformed in matter of months; press investigates further; New Thing turns out to be only best thing since soda in cans; pundits (often the same ones) line up to say they never believed it anyway.

This quick reversal is certainly part of the story here. The Journal quoted entrepreneurs and investors recently associated with peer-to-peer who are now distancing themselves from the phrase in order to avoid getting caught in the backlash. There is more to these critiques than business people simply repositioning themselves when the story crescendos, however, because each of the articles captures something important and true about peer-to-peer.

Where’s the money?

The Wall St. Journal’s take on peer-to-peer is simple and direct: it’s not making investors any money right now. Mr. Gomes notes that many of the companies set up to take advantage of file sharing in the wake of Napster’s successes have fallen on tough times, and that Napster’s serious legal difficulties have taken the bloom off the file-sharing rose. Meanwhile, the distributed computing companies have found it hard to attract either customers or investors, as the closing of Popular Power and the difficulties the rest of the field has had in finding customers have highlighted.

Furthermore, Gomes notes that P2P as a label has been taken on by many companies eager to seem cutting edge, even those whose technologies have architectures that differ scarcely at all from traditional client-server models. The principal critiques Gomes makes — P2P isn’t a well-defined business sector, nor a well-defined technology — are both sensible. From a venture capitalist’s point of view, P2P is too broad a category to be a real investment sector.

Is P2P even relevant?

Jon Katz’s complaints about peer-to-peer are somewhat more discursive, but seem to center on its lack of a coherent definition. Like Gomes, he laments the hype surrounding peer-to-peer, riffing off a book jacket blurb that overstates peer-to-peer’s importance, and goes on to note that the applications grouped together under the label peer-to-peer differ from one another in architecture and effect, often quite radically.

Katz goes on to suggest that interest in P2P is restricted to a kind of techno-elite, and is unlikely to affect the lives of “Harry and Martha in Dubuque.” While Katz’s writing is not as focused as Gomes’, he touches on the same points: there is no simple definition for what makes something peer-to-peer, and its application in people’s lives is unclear.

The unspoken premise of both articles is this: if peer-to-peer is neither a technology nor a business model, then it must just be hot air. There is, however, a third possibility besides “technology” and “business.” The third way is simply this: Peer-to-peer is an idea.

Revolution convergence

As Jon Orwant noted recently in these pages, “Peer-to-peer is not a technology, it’s a mindset.” Put another way, peer-to-peer is a related group of ideas about network architecture, ideas about how to achieve better integration between the Internet and the personal computer — the two computing revolutions of the last 15 years.

The history of the Internet has been told often — from the late ’60s to the mid-’80s, DARPA, an agency within the Department of Defense, commissioned work on a distributed computer network that used packet switching as a way to preserve the fabric of the network even if any given node failed.

The history of the PC has likewise been often told, with the rise of DIY kits and early manufacturers of computers for home use — Osborne, Sinclair, the famous Z-80, and then the familiar IBM PC and with it Microsoft’s DOS.

In an accident of history, both of those movements were transformed in January 1984, and began having parallel but increasingly important effects on the world. That month, a new plan for handling DARPA net addresses was launched. Dreamed up by Vint Cerf, this plan was called the Internet Protocol, and required changing the addresses of every node on the network over to one of the new IP addresses, a unique, global, and numerical address. This was the birth of the Internet we have today.

Meanwhile, over at Apple Computer, January 1984 saw the launch of the first Macintosh, the computer that popularized the graphical user interface (GUI), with its now familiar point-and-click interactions and desktop metaphor. The GUI revolutionized the personal computer and made it accessible to the masses.

For the next decade, roughly 1984 to 1994, both the Internet and the PC grew by leaps and bounds, the Internet as a highly connected but very exclusive technology, and the PC as a highly dispersed but very inclusive technology, with the two hardly intersecting at all. One revolution for the engineers, another for the masses.

The thing that changed all of this was the Web. The invention of the image tag, as part of the Mosaic browser (ancestor of Netscape), brought a GUI to the previously text-only Internet in exactly the same way that, a decade earlier, Apple brought a GUI to the previously text-only operating system. The browser made the Internet point-and-click easy, and with that in place, there was suddenly pressure to fuse the parallel revolutions, to connect PCs to the Internet.

Which is how we got the mess we have today.

First- and second-class citizens

In 1994, the browser created sudden pressure to wire the world’s PCs, in order to take advantage of the browser’s ability to make the network easy to use. The way the wiring happened, though — slow modems, intermittent connections, dynamic or even dummy IP addresses — meant that the world’s PCs weren’t being really connected to the Internet, so much as they were being hung off its edges, with the PC acting as no more than a life-support system for the browser. Locked behind their slow modems and impermanent addresses, the world’s PC owners have for the last half-dozen years been the second-class citizens of the Internet.

Anyone who wanted to share anything with the world had to find space on a “real” computer, which is to say a server. Servers are the net’s first-class citizens, with real connectivity and a real address. This is how the Geocities and Tripods of the world made their name, arbitraging the distinction between the PCs that were (barely) attached to the network’s edge and the servers that were fully woven into the fabric of the Internet.

Big, sloppy ideas

Rejection of this gap between client and server is the heart of P2P. As both Gomes and Katz noted, P2P means many things to many people. PC users don’t have to be second-class citizens. Personal computers can be woven directly into the Internet. Content can be provided from the edges of the network just as surely as from the center. Millions of small computers, with overlapping bits of content, can be more reliable than one giant server. Millions of small CPUs, loosely coupled, can do the work of a supercomputer.

These are sloppy ideas. It’s not clear when something stops being “file sharing” and starts being “groupware.” It’s not clear where the border between client-server and peer-to-peer is, since the two-way Web moves power to the edges of the network while Napster and ICQ bootstrap connections from a big server farm. It’s not clear how ICQ and SETI@Home are related, other than deriving their power from the network’s edge.

No matter. These may be sloppy ideas, ideas that don’t describe a technology or a business model, but they are also big ideas, and they are also good ideas. The world’s Net-connected PCs host, both individually and in aggregate, an astonishing amount of power — computing power, collaborative power, communicative power.

Our first shot at wiring PCs to the Internet was a half-measure — second-class citizenship wasn’t good enough. Peer-to-peer is an attempt to rectify that situation, to really integrate personal devices into the Internet. Someday we will not need a blanket phrase like peer-to-peer, because we will have a clearer picture of what is really possible, in the same way the arrival of the Palm dispensed with any need to talk about “pen-based computing.” 

In the meantime, something important is happening, and peer-to-peer is the phrase we’ve got to describe it. The challenge now is to take all these big sloppy ideas and actually do something with them, or, as Michael Tanne of XDegrees put it at the end of the Journal article:

“P2P is going to be used very broadly, but by itself, it’s not going to create new companies. …[T]he companies that will become successful are those that solve a problem.”

Time-Warner and ILOVEYOU

First published in FEED, 05/00.

Content may not be king, but it was certainly making headlines last week. From the “content that should have been distributed but wasn’t” department, Time Warner’s spectacularly ill-fated removal of ABC from its cable delivery lineup ended up cutting off content essential to the orderly workings of America — Who Wants to Be A Millionaire? Meanwhile, from the “content that shouldn’t have been distributed but was” department, Spyder’s use of a loosely controlled medium spread content damaging to the orderly workings of America and everywhere else — the ILOVEYOU virus. Taken together, these events are making one message increasingly obvious: The power of corporations to make decisions about distribution is falling, and the power of individuals as media channels in their own right is rising.

The week started off with Time Warner’s effort to show Disney who was the boss, by dropping ABC from its cable lineup. The boss turned out to be Disney, because owning the delivery channel doesn’t give Time Warner half the negotiating leverage the cable owners at Time Warner thought it did. Time Warner was foolish to cut off ABC during sweeps month, when Disney had legal recourse, but their real miscalculation was assuming that owning the cable meant owning the customer. What had ABC back on the air and Time Warner bribing its customers with a thirty-day rebate was the fact that Americans resent any attempt to interfere with the delivery of content, legal issues or no. Indeed, the aftermath saw Peter Vallone of the NY City Council holding forth on the right of Americans to watch television. It is easy to mock this attitude, but Vallone has a point: People have become accustomed to constantly rising media access, from three channels to 150 in a generation, with the attendant rise in user access to new kinds of content. Any attempt to reintroduce artificial scarcity by limiting this access now creates so much blind fury that television might as well be ranked alongside water and electricity as utilities. The week ended as badly for Time Warner as it began, because even though their executives glumly refused to promise never to hold their viewers hostage as a negotiating tactic, their inability to face the wrath of their own paying customers had been exposed for all the world to see.

Meanwhile, halfway round the world, further proof of individual leverage over media distribution was mounting. The ILOVEYOU virus struck Thursday morning, and in less than twenty-four hours had spread further than the Melissa virus had in its entire life. The press immediately began looking for the human culprit, but largely missed the back story: The real difference between ILOVEYOU and Melissa was not the ability of Outlook to launch programs from within e-mail, a security hole unchanged since last year. The real difference was the delivery channel itself — the number and interconnectedness of e-mail users — which makes ILOVEYOU more of a media virus than a computer virus. The lesson of a virus that starts in the Philippines and ends up flooding desktops from London to Los Angeles in a few hours is that while e-mail may not be a mass medium that reaches millions at the same time, it has become a massive one, reaching tens of millions in mere hours, one user at a time. With even a handful of globally superconnected individuals, the transmission rates for e-mail are growing exponentially, with no end in sight, either for viruses or legitimate material. The humble practice of forwarding e-mail, which has anointed The Onion, Mahir, and the Dancing Baby as pop-culture icons, has now crossed one of those invisible thresholds that makes it a new kind of force — e-mail as a media channel more global than CNN. As the world grows more connected, the idea that individuals are simply media consumers looks increasingly absurd — anyone with an e-mail address is in fact a media channel, and in light of ILOVEYOU’s success as a distribution medium, we may have to revise that six degrees of separation thing downwards a little.

Both Time Warner’s failure and ILOVEYOU’s success spread the bad news to several parties: TV cable companies, of course, but also cable ISPs, who hope to use their leverage over delivery to hold Internet content hostage; the creators of WAP, who hope to erect permanent tollbooths between the Internet and the mobile phone without enraging their subscribers; governments who hoped to control their citizens’ access to “the media” before e-mail turned out to be a media channel as well; and everyone who owns copyrighted material, for whom e-mail attachments threaten to create hundreds of millions of small leaks in copyright protection. (At least Napster has a business address.) There is a fear, shared by all these parties, that decisions about distribution — who gets to see what, when — will pass out of the hands of governments and corporations and into the hands of individuals. Given the enormous scale of the vested interests at stake, this scenario is still at the outside edges of the imaginable. But when companies that own the pipes can’t get any leverage over their users, and when users with access to e-mail can participate in a system whose ubiquity has been so dramatically illustrated, the scenario goes from unthinkable to merely unlikely.