Interoperability, Not Standards

First published on O’Reilly’s OpenP2P.

“Whatever else you think about, think about interoperability. Don’t think about standards yet.”

Nothing I said at the O’Reilly P2P conference in San Francisco has netted me more flak than that statement. To advocate interoperability while advising caution on standards seems oxymoronic — surely standards and interoperability are inextricably linked?

Indeed, the coupling of standards and interoperability is the default for any widely dispersed technology. However, there is one critical period where interoperability is not synonymous with standardization, and that is in the earliest phases of work, when it is not entirely clear what, if anything, should be standardized.

For people working with hardware, where Pin 5 had better carry voltage on all plugs from the get-go, you need a body creating a priori standards. In the squishier field of software, however, the history of RFCs demonstrates a successful model where standards don’t get created out of whole cloth, but ratify existing practice. “We reject kings, presidents and voting. We believe in rough consensus and running code,” as David Clark put it. Standardization of software can’t proceed in a single giant hop, but requires some practical solution to point to first.

I take standardization to be an almost recursive phenomenon: a standard is any official designation of a protocol that is to be adopted by any group wanting to comply with the standard. Interoperability, meanwhile, is much looser: two systems are interoperable if a user of one system can access even some resources or functions of the other system.

Because standardization requires a large enough body of existing practice to be worth arguing over, and because P2P engineering is in its early phases, I believe that a focus on standardization creates two particular dangers: risk of premature group definition and damage to meaningful work. Focusing on the more modest goals of interoperability offers a more productive alternative, one that will postpone but improve the eventual standards that do arise.

Standardization and Group Definition

A standard implies group adoption, which presupposes the existence of a group, but no real P2P group exists yet. (The P2P Working Group is an obvious but problematic candidate for such a group.) The only two things that genuinely exist in the P2P world right now are software and conversations, which can be thought of as overlapping circles:

  • There is a small set of applications that almost anyone thinking about P2P regards as foundational — Napster, ICQ and SETI@Home seem to be as close to canonical as we’re likely to get.
  • There is a much larger set of applications that combine or extend these functions, often with a view to creating a general purpose framework, like Gnutella, Jabber, Aimster, Bitzi, Allcast, Groove, Improv, and on and on.
  • There is a still larger set of protocols and concepts that seem to address the same problems as these applications, but from different angles — on the protocol front, there are attempts to standardize addressing and grid computing with things like UDDI, XNS, XML-RPC, and SOAP, and conceptually there are things like the two-way Web, reputation management and P2P journalism.
  • And covering all of these things is a wide-ranging conversation about something called P2P that, depending on your outlook, embraces some but probably not all of these things.

What is clear about this hodge-podge of concepts is that there are some powerful ideas at work here: unlocking resources at the edges of the Internet and democratizing the Internet as a media channel.

What is not clear is which of these things constitute any sort of group amenable to standards. Should content networks use a standard format for hashing their content for identification by search tools? Probably. Would the distributed computation projects benefit from a standard client engine to run code? Maybe. Should the people who care about P2P journalism create standards for all P2P journalists to follow? No.

P2P is a big tent right now, and it’s not at all clear that there is any one thing that constitutes membership in a P2P group, nor is there any reason to believe (and many reasons to disbelieve) that there is any one standard, other than eventually resolving to IP addresses for nodes, that could be adopted by even a large subset of companies who describe themselves as “P2P” companies.

Standardization and Damage to Meaningful Work

Even if P2P had, at this point, a crystal-clear definition, within which it was clear which sub-groups should be adopting standards, premature standardization would still risk destroying meaningful work.

This is the biggest single risk with premature standardization — the loss of that critical period of conceptualization and testing that any protocol should undergo before it is declared superior to its competitors. It’s tempting to believe that standards are good simply because they are standard, but to have a good standard, you first need a good protocol, and to have a good protocol, you need to test it in real-world conditions.

Imagine two P2P companies working on separate metadata schemes; call them A and B. For these two companies to standardize, there are only two options: one standard gets adopted by both groups, or some hybrid standard is created.

Now if both A and B are in their 1.0 versions, simply dropping B in favor of A for the sole purpose of having a standard sacrifices any interesting or innovative work done on B, while the idea of merging A and B could muddy both standards, especially if the protocols have different design maxims, like “lightweight” vs. “complete.”

This is roughly the position of RSS and ICE, or XML-RPC and SOAP. Everyone who has looked at these protocols has had some sense that each pair solves similar problems, but as it is not immediately obvious which one is better (and better here can mean “most lightweight” or “most complete,” “most widely implemented” or “most easily extensible,” and so on), work continues on both of them.

This could also describe things like Gnutella vs. Freenet, or further up the stack, BearShare vs. ToadNode vs. Lime Wire. What will push these things in the end will be user adoption — faced with more than one choice, the balance of user favor will either tip decisively in one direction, as with the fight over whether HTML should include visual elements, or else each standard will become useful for particular kinds of tasks, as with Perl and C++.

Premature standardization is a special case of premature optimization, the root of all evil, and in many cases standardization will have to wait until something more organic happens: interoperability.

Interoperability Can Proceed by Pairwise Cooperation

Standardization requires group definition — interoperability can proceed with just a handshake between two teams or even two individuals — and by allowing this kind of pairwise cooperation, interoperability is more peer-to-peer in spirit than standardization is. By growing out of a shared conversation, two projects can pursue their own design goals, while working out between themselves only those aspects of interoperability both consider important.

This approach is often criticized because it creates the N² problem, but the N² problem is only a problem for large values of N. Even the largest P2P category in the O’Reilly P2P directory — file sharing — contains only 50 entries, and it’s obvious that many of these companies, like Publius, are not appropriate targets for standardization now, and may not even be P2P.
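
A back-of-the-envelope sketch of why the N² objection only bites at scale: the worst case assumes every project must coordinate directly with every other, which is rarely true in practice. (The snippet below is purely illustrative; only the figure of 50 comes from the directory count above.)

    def pairwise_links(n: int) -> int:
        """Number of one-to-one relationships needed if every one of n
        projects coordinates directly with every other."""
        return n * (n - 1) // 2

    # Illustrative sizes; 50 matches the directory count cited above.
    for n in (5, 10, 50, 500):
        print(n, pairwise_links(n))
    # 5 -> 10, 10 -> 45, 50 -> 1225, 500 -> 124750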

For small numbers of parallel engineering efforts, pairwise cooperation maximizes the participation of each member of the collaboration, while minimizing bureaucratic overhead.

Interoperability Can Proceed Without Pairwise Cooperation

If a protocol or format is well-documented and published, you can also create interoperability without pairwise cooperation. The OpenNAP servers adopted the Napster protocol without having to coordinate with Napster; Gnutella was reverse-engineered from the protocol used by the original binary; and after Jabber published its messaging protocol, Roku adopted it and built a working product without ever having to get Jabber’s sign-off or help.

Likewise, in what is probably the picture-perfect test case of the way interoperability may grow into standards in P2P, the P2P conference in San Francisco was the site of a group conversation about adopting SHA1 instead of MD5 as the appropriate hash for digital content. This came about not because of a SHA1 vs MD5 Committee, but because Bitzi and OpenCOLA thought it was a good idea, and talked it up to Freenet, and to Gnutella, and so on. It’s not clear how many groups will eventually adopt SHA1, but it is clear that interoperability is growing, all without standards being sent down from a standards body.
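
For readers who want to see what “hashing content for identification” amounts to in practice, here is a minimal sketch using Python’s standard hashlib; the function name and chunk size are my own choices, not anything specified by Bitzi, Freenet or Gnutella.

    import hashlib

    def content_id(path: str) -> str:
        """Return a hex SHA-1 digest of a file's bytes, usable as a
        location-independent identifier for that content."""
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    # Two nodes holding byte-identical copies of a song compute the same
    # identifier, no matter what the file happens to be named locally.
    # print(content_id("some_song.mp3"))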

Even in an industry as young as ours, there is a tradition of alternative interfaces to file-sharing networks for things like Mac, Linux and Java clients being created by groups who have nothing more than access to publicly published protocols. There is widespread interoperability for the Napster protocol, which is a standard in all but name, and it has approached this state of de facto standardhood without any official body to nominate it.

Interoperability Preserves Meaningful Work

The biggest advantage of pursuing interoperability is that it allows for partial or three-layer solutions, where interested parties agree to overlap in some but not all places, or where an intermediate layer that speaks to both protocols is created. In the early days, when no one is sure what will work, and user adoption has not yet settled any battles, the kludgy aspects of translation layers can, if done right, be more than offset by the fact that two protocols can be made interoperable to some degree without having to adjust the core protocols themselves. 

What Is Needed

To have standards, you need a standards body. To have interoperability, you just need software and conversations, which is good news, since that’s all we have right now.

The bad news is that the conversations are still so fragmented and so dispersed. 

There are only a handful of steady sources for P2P news and opinion: this site, Peerprofits.com, the decentralization@yahoogroups.com mailing list, the P2P Working Group and a handful of people who have been consistently smart and public about this stuff — Dan Bricklin, Doc Searls, Dan Gillmor, Dave Winer and Jon Udell. While each of these sources is interesting, the conversation carried on in and between them is far from being spread widely enough to get the appropriate parties talking about interoperability.

As a quick sampling, Openp2p.com’s P2P directory and Peerprofit.com’s P2P directory list about 125 projects, but only 50 groups appear on both lists. Likewise, the Members List at the P2P Working Group is heavy on participating technology companies, but does not include Freenet, Gnutella, OpenCola or AIMster.

The P2P Working Group is one logical place to begin public conversations about interoperability, but it may be so compromised by its heritage as a corporate PR site that it can never perform this function. That in itself is a conversation we need to have, because while it may be premature to have a “Standards Body,” it is probably not premature to have a place where people are trying to hammer out rough consensus about running code.

The decentralization list is the other obvious candidate, but with 400 messages a month recently, it may be too much for people wanting to work out specific interoperability issues.

But whatever the difficulties in finding a suitable place or places to have these conversations, now is the time for it. The industry is too young for standards, but old enough for interoperability. So don’t think about standards yet, but whatever else you think about, think about interoperability.

P2P Smuggled In Under Cover of Darkness

First published on O’Reilly’s OpenP2P, 2/14/2001

2001 is the year peer-to-peer will make its real appearance in the enterprise, but most of it isn’t going to come in the front door. Just as workers took control of computing 20 years ago by smuggling PCs into businesses behind the backs of the people running the mainframes, workers are now taking control of networking by downloading P2P applications under the noses of the IT department.

Although it’s hard to remember, the PC started as a hobbyist’s toy in the late ’70s, and personal computers appeared in the business world not because management decided to embrace them, but because individual workers brought them in on their own. At the time, PCs were slow and prone to crashing, while the mainframes and minis that ran businesses were expensive but powerful. This quality gap made it almost impossible for businesses to take early PCs seriously.

However, workers weren’t bringing in PCs because of some sober-minded judgment about quality, but because they wanted to be in control. Whatever workers thought about the PC’s computational abilities relative to Big Iron, the motivating factor was that a PC was your own computer.

Today, networking — the ability to configure and alter the ways those PCs connect — is as centralized a function as computation was in the early ’80s, and thanks to P2P, this central control is just as surely and subtly being eroded. The driving force of this erosion is the same as it was with the PC: Workers want, and will agitate for, control over anything that affects their lives.

This smuggling in of P2P applications isn’t just being driven by the human drive for control of the environment. There is another, more proximate cause of the change.

You Hate the IT Department, and They Hate You Right Back

The mutual enmity between the average IT department and the average end user is the key feature driving P2P adoption in the business setting.

The situation now is all but intolerable: No matter who you are, unless you are the CTO, the IT department does not work for you, so your interests and their interests are not aligned.

The IT department is rewarded for their ability to keep bad things from happening, and that means there is a pressure to create and then preserve stability. Meanwhile, you are rewarded for your ability to make good things happen, meaning that a certain amount of risk-taking is a necessary condition of your job.

Risk-taking undermines stability. Stability deflects risk-taking. You think your IT department are jerks for not helping you do what you want to do; they consider you an idiot for installing software without their permission. Also, because of the way your interests are (mis)aligned, you are both right.

Thought Experiment

Imagine that you marched into your IT department and explained that you wanted the capability to have real-time conversations with Internet users directly from your PC, that you wanted this set up within the hour, and that you had no budget for it.

Now imagine being laughed out of the room.

Yet consider ICQ. Those are exactly its characteristics, and it is second only to e-mail, and well ahead of things such as Usenet and Web bulletin boards, as the tool of choice for text messaging in the workplace. Furthermore, chat is a “ratchet” technology: Once workers start using chat, they will never go back to being disconnected, even if the IT department objects.

And all this happened in less than 4 years, with absolutely no involvement from the IT department. Chat was offered directly to individual users as a new function, and since the business users among them knew (even if only unconsciously) that the chances of getting the IT department to help them get it were approximately “forget it,” their only option was to install and configure the application themselves, which they promptly did.

So chat became the first corporate networking software never approved by the majority of the corporations whose employees use it. It will not be the last.

Chat Is Just the Beginning

ICQ was the first application that made creating a public network address effortless. Because ICQ simply ignored the idea that anyone else had any say over how you use your computer, you never had to ask the IT department about IP addresses, domain name servers or hosting facilities. You could give your PC a network address, and that PC could talk to any other PC with an address in the ICQ name space, all on your own.

More recently, Napster has made sharing files as easy as ICQ made chat. Before Napster, if you wanted to serve files from your PC, you needed a permanent IP address, a domain name, registration with domain name servers and properly configured Web server software on the PC. With Napster, you could be serving files within 5 minutes of having downloaded the software. Napster is so simple that it is easy to forget that it performs all of the functions of a Web server with none of the hassle.
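
To make concrete how little “serving files” requires once the hard parts are hidden, here is the modern bare minimum: a stock HTTP server from Python’s standard library offering up the current directory. This is not Napster’s protocol, only a sketch of the Web-server plumbing Napster spared its users, and even this assumes you already have a reachable address.

    # Minimal file serving with the Python standard library; everything
    # Napster added (naming, search, reachability from home connections)
    # sits on top of plumbing like this.
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    server = HTTPServer(("", 8000), SimpleHTTPRequestHandler)
    print("Serving the current directory at http://localhost:8000/")
    server.serve_forever()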

Napster is optimized for MP3s, but there is no reason general purpose file sharing can’t make the same leap. File sharing is especially ripe for a P2P solution, as the current norm for file sharing in the workplace — e-mail attachments — notoriously falls victim to arbitrary limits on file sizes, mangled MIME headers and simple failure of users to attach the documents they meant to attach. (How many times have you received otherwise empty “here’s that file” mail?)

Though there are several systems vying for the title of general file-sharing network, the primary obstacle holding back systems such as Gnutella is their focus on purity of decentralization rather than ease of use. The reason chat and Napster made it into the workplace is the same reason PCs made it into the workplace two decades ago: They were easy enough to use that non-technical workers felt comfortable setting them up themselves.

Necessity Is the Mother of Adoption

Workers’ desire for something to replace the e-mail attachment system of file sharing is so great that some system or systems will be adopted. Perhaps it could be Aimster, which links chat with file sharing; perhaps Groove, which is designed to set up an extensible group work environment without a server; perhaps Roku, OpenCola or Globus, all of which are trying to create general purpose P2P computing solutions; and there are many others.

The first workplace P2P solution may also be a specific tool for a specific set of workers. One can easily imagine a P2P environment for programmers, where the version control system reverses its usual course, and instead of checking out files stored centrally, it checks in files stored on individual desktops, and where the compiler knows where the source files are, even if they are spread across a dozen PCs.
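
As a purely hypothetical sketch of what “the compiler knows where the source files are” could mean, consider a manifest that maps each source file to the desktop currently holding it; every name, host and port below is invented for illustration.

    # Hypothetical: a manifest mapping source files to the peers that hold them.
    import pathlib
    import urllib.request

    MANIFEST = {
        "src/parser.c":  "http://annes-desktop.example:8080/src/parser.c",
        "src/lexer.c":   "http://bobs-laptop.example:8080/src/lexer.c",
        "include/ast.h": "http://annes-desktop.example:8080/include/ast.h",
    }

    def gather_sources(manifest: dict, build_dir: str = "build") -> None:
        """Pull each file from whichever peer holds it, so the build can
        proceed as though the sources were local."""
        for relpath, url in manifest.items():
            target = pathlib.Path(build_dir) / relpath
            target.parent.mkdir(parents=True, exist_ok=True)
            with urllib.request.urlopen(url) as resp:
                target.write_bytes(resp.read())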

And as with chat, once a system like this exists and crosses some threshold of ease of use, users will adopt it without asking or even informing the IT department.

End-to-End

As both Jon Udell and Larry Lessig have pointed out from different points of view, the fundamental promise of the Internet is end-to-end communications, where any node can get to any other node on its own. Things such as firewalls, NAT and dynamic IP addresses violate the fundamental promise of the Internet both at the protocol level, by breaking the implicit contract of TCP/IP (two nodes can always contact each other), and on a social level (the Internet has no second-class citizens).

Business users have been second-class citizens for some time. Not only do systems such as ICQ and Napster undo this by allowing users to create their own hosted network applications, but systems such as Mojo Nation are creating connection brokers that allow two machines — both behind firewalls — to talk to each other by taking the e-mail concept of store and forward, and using it to broker requests for files and other resources.
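
A toy illustration of the store-and-forward idea (my own sketch, not Mojo Nation’s actual mechanism): because both peers only ever make outbound requests to the broker, neither firewall has to accept an inbound connection.

    from collections import defaultdict, deque

    class Broker:
        """Toy connection broker: peers behind firewalls never accept inbound
        connections; they post to and poll from mailboxes held by the broker."""

        def __init__(self):
            self.mailboxes = defaultdict(deque)

        def post(self, to_peer: str, message: dict) -> None:
            # The sender reaches the broker with an ordinary outbound request.
            self.mailboxes[to_peer].append(message)

        def poll(self, peer: str) -> list:
            # The recipient also calls out to the broker and drains its queue.
            box = self.mailboxes[peer]
            drained = list(box)
            box.clear()
            return drained

    broker = Broker()
    broker.post("bob", {"from": "alice", "request": "report.pdf"})
    print(broker.poll("bob"))   # [{'from': 'alice', 'request': 'report.pdf'}]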

The breaking of firewalls by the general adoption of port 80 as a front door is nothing compared to letting users create network identities for themselves without having to ask for either permission or help.

Security, Freedom and the Pendulum

Thus P2P represents a swing of the pendulum back toward user control. Twenty years ago, the issue was control over the center of a business where the mainframes sat. Today, it is over the edges of a business, where the firewalls sit. However, the tension between the user’s interests and corporate policy is the same.

The security-minded will always complain about the dangers of users controlling their own network access, just like the mainframe support staff worried that users of PCs were going to destroy their tidy environments with their copies of VisiCalc. And, like the mainframe guys, they will be right. Security is only half the story, however.

Everyone knows that the easiest way to secure a PC is to disconnect it from the Internet, but no one outside of the NSA seriously suggests running a business where the staff has no Internet access. Security, in other words, always necessitates a tradeoff with convenience, and there are times when security can go too far. What the widespread adoption of chat software is telling us is that security concerns have gone too far, and that workers not only want more control over how and when their computers connect to the network, but that when someone offers them this control, they will take it.

This is likely to make for a showdown over P2P technologies in the workplace, pitting the freedom of individual workers against the advantages of centralized control, and security against flexibility. Adoption of some form of P2P addressing, addressing that bypasses DNS to give individual PCs externally contactable addresses, is now counted in the tens of millions of users thanks to Napster and ICQ.

By the time general adoption of serverless intranets begins, workers will have gone too far in integrating P2P functions into their day for IT departments to simply ban them. As with the integration of the PC, expect the workers to win more control over the machines on their desks, and the IT departments to accept this change as the new norm over time.

Peak Performance Pricing

First published at Biz2, February 2001.

Of all the columns I have written, none has netted as much contentious mail as “Moving from Units to Eunuchs” (October 10, 2000, p114, and at Business2.com). That column argued that Napster was the death knell for unit pricing of online music. By allowing users to copy songs from one another with no per-unit costs, Napster introduced the possibility of “all you can eat” pricing for music, in the same way that America Online moved to “all you can eat” pricing for email.

Most of the mail I received disputed the idea that Napster had no per-unit costs. That idea, said many readers, violates every bit of common sense about the economics of resource allotment. If more resources are being used, the users must be paying more for them somewhere, right?

Wrong. The notion that Napster must generate per-unit costs fails the most obvious test: reality. Download Napster, download a few popular songs, and then let other Napster users download those songs from you. Now scan your credit-card bills to see where the extra costs for those 10 or 100 or 1,000 downloads come in.

You can perform this experiment month after month, and the per-unit costs will never show up; you are not charged per byte for bandwidth. Even Napster’s plan to charge a subscription doesn’t change this math, because the charge is for access to the system, not for individual songs.

‘Pay as you go’
Napster and other peer-to-peer file-sharing systems take advantage of the curious way individual users pay for computers and bandwidth. While common sense suggests using a “pay as you go” system, the average PC user actually pays for peak performance, not overall resources, and it is peak pricing that produces the excess resources that let Napster and its cousins piggyback for free.

Pay as you go is the way we pay for everything from groceries to gasoline. Use some, pay some. Use more, pay more. At the center of the Internet, resources like bandwidth are indeed paid for in this way. If you host a Web server that sees a sudden spike in demand, your hosting company will simply deliver more bandwidth, and then charge you more for it on next month’s bill.

The average PC user, on the other hand, does not buy resources on a pay-as-you-go basis. First of all, the average PC is not operating 24 hours a day. Furthermore, individual users prize predictability in pricing. (This is why AOL was forced to drop its per-hour pricing in favor of the now-standard flat rate.) Finally, what users pay for when they buy a PC is not steady performance but peak performance. PC buyers don’t choose a faster chip because it will give them more total cycles; they choose a faster chip because they want Microsoft Excel to run faster. Without even doing the math, users understand that programs that don’t use up all of the available millions of instructions per second will be more responsive, while those that use all the CPU cycles (to perform complicated rendering or calculations) will finish sooner.

Likewise, they choose faster DSL so that the line will be idle more often, not less. Paying for peak performance sets a threshold between a user’s impatience and the size of their wallet, without exposing them to extra charges later.

A side effect of buying peak cycles and bandwidth is that resources that don’t get used have nevertheless been paid for. People who understand the economics of money but not of time don’t understand why peak pricing works. But anyone who has ever paid for a faster chip to improve peak performance knows instinctively that paying for resources upfront, no matter what you end up using, saves enough hassles to be worth the money.

The Napster trick
The genius of Napster was to find a way to piggyback on these already-paid-up resources in order to create new copies of songs with no more per-unit cost than new pieces of email, a trick now being tried in several other arenas. The SETI@home project creates a virtual supercomputer out of otherwise unused CPU time, as do Popular Power, DataSynapse, and United Devices.

The flagship application of openCola combines two of the most talked-about trends on the Internet, peer-to-peer networking and expert communities, letting users share knowledge instead of songs. It turns the unused resources at the edge of the network into a collaborative platform on which other developers can build peer-to-peer applications, as does Groove Networks.

As more users connect to the Internet every day, and as both their personal computers and their bandwidth get faster, the amount of pre-paid but unused resources at the edges of the network is growing to staggering proportions.

By cleverly using those resources in a way that allowed it to sidestep per-unit pricing, Napster demonstrated the value of the world’s Net-connected PCs. The race is now on to capitalize on them in a more general fashion.

The Parable of Umbrellas and Taxicabs

First published on O’Reilly’s OpenP2P, 01/18/2001

Many companies in the P2P space are trying to figure out how to deploy P2P resources most effectively in a dynamic system. This problem is particularly acute for the companies selling distributed computation, such as Popular Power and United Devices, or the companies trying to build a general P2P framework, such as ROKU and Globus.

The first successful implementations of distributed computation like SETI@Home and distributed.net relied on non-financial incentives: The participants donated their cycles because they felt good about the project, or because they enjoyed “collaborative competition,” such as distributed.net’s team rankings for its encryption-cracking contests.

Ego gratification is a powerful tool, but it is a finicky one. Someone happy to donate their time to chess problems may not want to donate their cycles to designing bioengineered food. The problem that companies who rely on distributed resources face is how to get people to give time to commercial projects. 

The general solution to this problem seems to be “if you give people an incentive to do something, they will respond, so find the right incentive.” If ego gratification is an effective way to get people to donate resources, the trick is in finding other kinds of incentives to replace ego gratification for commercial services.

The obvious all-purpose incentive is money (or at least some sort of currency), and several systems are being designed with the idea that paying users is the best way to convince them to provide resources.

This solution may not work well, however, for things like cycles and disk space, because there are two kinds of resources that can be deployed in a P2P system, and they have very different characteristics — a difference that can best be illustrated by The Parable of the Umbrellas and Taxis.

Umbrellas and taxis

Anyone who’s spent any time in New York City knows that when it begins to rain, two things happen immediately: It becomes easier to buy an umbrella and it becomes harder to hail a cab. As soon as the first few drops fall, people appear on the street selling cheap umbrellas, while a lucky few pedestrians occupy all the available cabs.

Why does an increase in demand produce opposite effects on supply — more available umbrellas and fewer available taxis? The answer is the nature of the resources themselves. Umbrellas are small and inexpensive to store, so it’s easy to take them out when it’s raining and put them back when the rain stops. Additional umbrellas can be deployed in response to demand.

Taxis, on the other hand, are large and expensive to store. In addition, taxis have all sorts of up-front costs: registration for a yellow cab or car service, license for the driver, local regulations, the cost of an automobile. These up-front costs can be high or low, but whatever they are, they set some maximum number of cabs available in the city on any given day. And if it starts raining, too bad: Additional taxis cannot be deployed in response to peak demand. Every city has a total number of cabs which represents a compromise between the number of potential riders in sun vs. rain, or 4 a.m. vs. 4 p.m., or April vs. August.

Not all P2P resources are created equal

Some of the resources in a P2P system are umbrellas. Some are taxis.

Umbrella resources are what make things like file-sharing systems so powerful. If you decide at 3:17 a.m. that you must listen to Golden Earring’s “Radar Love” or you will surely die, it’s Napster to the rescue: Your demand produces extra supply.

If, however, you decide that you must have a faster chip, you’re out of luck. For that, you have to make a trip to the store. You could use someone else’s cycles, of course, but you can’t increase the total number of cycles in your system: Your demand does not produce extra supply.

This has ramifications for getting users to provide additional resources to a P2P system. If United Devices wants to use more cycles than you have, you can’t instantly produce more chips. Ditto bandwidth and disk space. You have what you have, and you can’t deploy any more than that in the short term.

Thresholds and gaps

Since the increased demand during a rainstorm doesn’t create more taxis on the spot, the way to have more taxis when it rains is to have more taxis overall — to change the overall capacity of the system. One could make it cheaper to buy a taxi, raise the rates drivers could charge, or relax the legal restrictions on the business, and any of these things would increase capacity.

What doesn’t increase taxi capacity is momentarily increased demand, at least not in well-off economies. It’s easy to see how more cab service could become available — drivers of ordinary cars could simply offer their services to damp pedestrians in times of increased demand. Why don’t they do this?

The answer is that the pedestrians aren’t willing to pay enough to make it worth the drivers’ time. There are many sorts of obstacles here — from the time it would take to haggle, to the laws regulating such a thing — but all these obstacles add up to a higher cost than the pedestrian is willing to pay. A passenger would pay more to get in a cab when it’s raining than when it’s dry, but not enough more to change the behavior of private drivers.

One particular effect here deserves mention: Unlike cab drivers, whose vehicles are deployed for use by others, drivers of private cars are presumably driving those cars for some reason other than picking up random passengers. Part of the threshold that keeps them from picking up riders for a fee is that they are unwilling to give up their own use of their cars for the period the passenger wants to use it.

Variable use equals variable value

With that in mind, consider the case of PC users in a distributed computing system like Popular Power. Assume the average computer chassis costs $1,000. Assume also that the average user replaces their computer every two and a half years. This means $1,000 buys roughly 20,000 hours of device use prior to replacement.

Now imagine you owned such a machine and were using it to play Quake, but Popular Power wanted to use it to design flu vaccines. To compensate you for an hour of your computing time, Popular Power should be willing to offer you a nickel, which is to say 1/20,000th of $1,000, the cost of your device for that hour.
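
The arithmetic behind that nickel, spelled out (using the assumptions above):

    # The nickel figure from the text, spelled out.
    chassis_cost = 1_000.00      # dollars, assumed average price of the machine
    lifetime_hours = 20_000      # roughly 2.5 years of ownership
    hourly_cost = chassis_cost / lifetime_hours
    print(f"Pro-rata cost of one hour of the machine: ${hourly_cost:.2f}")  # $0.05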

Would you be willing to give up an hour of playing Quake (or working on a spreadsheet, or chatting with your friends) for a nickel? No. And yet, once the cost crosses the nickel threshold, Popular Power is spending enough, pro-rata, to buy their own box. 

The secret to projects like Popular Power, in other words, is to use the gap between the permanence of your computer and the variability of your use of it. Of the 20,000 hours or so you will own your computer, between 1,000 and 5,000 of those hours are likely to be highly valuable to you — too valuable for Popular Power to be able to get you to give them up for any amount of money they are willing to pay.

Successful P2P programs recognize the implicit difference between the times you want to use your PC and the times you don’t. Napster gives users total control of when they want to be logged in, and allows the owner of any PC to unilaterally terminate any uploads that annoy them. Popular Power runs in the background, only actively using cycles when the user is not.

Incentives that match value

Cycles, disk space, and bandwidth are like taxis: They are resources that get provisioned up front and are used variably from then on. The way to get more such resources within a P2P system is to change the up-front costs — not the ongoing costs — since it is the up-front costs that determine the ceiling on available resources.

There are several sorts of up-front incentives that could raise this ceiling: 

  • PeoplePC could take $100 off the price of your computer in return for your spare cycles. 
  • AudioGalaxy could pay for the delta between 384- and 768-Kbps DSL in return for running the AudioGalaxy client in the background. 
  • United Devices could pay your electricity bill in return for leaving your machine on 24/7.

Those kinds of incentives match the way resources are currently deployed, and those incentives will have far more effect on the resources available to P2P systems than simply telling you that there’s momentary demand for more resources than you have.

Futures markets and big winners

Because these resources need to be provisioned in large chunks when the machines are bought, and because users don’t want to spend their time and effort putting spot prices on unused cycles, the markets that form around P2P resources are not likely to be real-time micromarkets but futures macromarkets. By providing up-front incentives, or ongoing incentives that don’t need to be re-priced (a donation to a charity, offsetting the cost of electricity or bandwidth), companies that have access to distributed computing resources are likely to be able to create and maintain vast pools of untapped computing power. 

As long as end users aren’t required to give up their use of the PC during the 1,000 to 5,000 hours they need it, they will prefer seeing the remaining 15,000 hours used in a simple way they approve of, rather than spending the time working out the bidding between companies paying pennies an hour at best. Evenness of reward and lowered mental transaction costs are a big incentive to adopt a “set it and forget it” attitude.

Indeed, the price fluctuations and market are likely to be at the other end of the scale, on a futures market for vast amounts of bandwidth. If a company wants 100,000 hours of computing time on the day they close their quarterly books, they may be willing to pay more for that than simply getting any 100,000 hours spread over three months. 

This suggests a futures market dealing in massive quantities of computing power, a market participated in by a small group of companies that can guarantee delivery of certain minimums on certain dates, or within certain time periods. The price of a bushel of wheat is a commodity price, not set or affected by individual wheat producers (short of operating a cartel), so the price fluctuation is set by the largest consumers. No one goes onto the futures market to buy guaranteed future delivery of a single barrel of oil or bushel of wheat — the price is set by buying and selling in bulk.

Far from being an atomized many-to-many market of buyers, aggregators, and sellers, distributed computing will likely be a “butterfly” market with many providers of spare cycles, many consumers of cycles, and a very few aggregators, all of whom are pursuing network effects and massive economies of scale. 

The likeliest winners are the companies or consortia that have the most open framework and the most installed clients, because once one or a few leaders emerge, they will be in a better position to create such a futures market than hundreds of also-ran competitors. (Of particular note here is Microsoft, who has access to more desktops than everyone else put together. A real P2P framework, run by Microsoft, could become the market leader in selling aggregated computing power.)

Many people in the P2P world are talking about general P2P frameworks for sharing any and all computing resources, and this language makes it seem like all resources are equally fungible. In fact, the vendors of umbrellas are operating under conditions very different from the operators of taxis. The companies aggregating and reselling the resources that are allocated up front and in big chunks will likely face brutal competition among an ever-smaller group of ever-larger players. The economics of this part of the business so favor economies of scale that within 12 months, even as the rest of the P2P infrastructure is developing, the winners in the world of distributed computation will be anointed.

The Wal-Mart Future

Business-to-consumer retail Websites were going to be really big. Consumers were going to be dazzled by the combination of lower prices and the ability to purchase products from anywhere. The Web was supposed to be the best retail environment the world had ever seen.

This imagined future success created an astonishingly optimistic investment climate, where people believed that any amount of money spent on growth was bound to pay off later. You could build a $5 million Website, buy a Super Bowl ad for $70,000 per second, sell your wares at cost, give away shipping, and rest assured the markets would support you all the way.

The end of this ideal was crushing, as every advantage of B-to-C turned out to have a deflationary downside. Customers lured to your site by low prices could just as easily be lured away by lower prices elsewhere. And the lack of geographic segmentation meant that everyone else could reach your potential customers as easily as you could.

Like a scientist who invents a universal solvent and then has nowhere to keep it, online retail businesses couldn’t find a way to contain the deflationary currents they unleashed, ultimately diminishing their own bottom lines.

B-to-C: Not so bad after all

The interpreters of all things Internet began to tell us that ecommerce was much more than silly old B-to-C. The real action was going to be in B-to-B-to-C or B-to-G or B-to-B exchanges or even E-to-E, the newly minted “exchange-to-exchange” sectors.

So we have the newly received wisdom. B-to-C is a bad business to be in, and only ecommerce companies that operate far, far from the consumer will prosper.

This, of course, is nonsense. Selling to consumers cannot, by definition, be bad business. Individual companies can fail, but B-to-C as a sector cannot.

Money comes from consumers. If you sell screws to Seagate Technology, which sells hard disks to Dell Computer, which sells Web servers to Amazon.com, everybody in that chain is getting paid because Amazon sells books to consumers. Everything in B-to-B markets–steel, software, whatever–is being sold somewhere down the line to a company that sells to consumers.

When the market began punishing B-to-C stocks, it became attractive to see the consumer as the disposable endpoint of all this great B-to-B activity, but that is exactly backward. The B-to-B market is playing with the consumers’ money, and without those revenues flowing upstream in a daisy chain of accounts receivable and accounts payable, everything else dries up.

The fundamental problem to date with B-to-C is that it pursued an inflationary path to a deflationary ideal. The original assessment was correct: the Web is the best retail environment the world has ever seen, because it is deflationary. However, this means businesses with trendy loft headquarters, high burn rates, and $2 million Super Bowl ads are precisely the wrong companies to be building efficient businesses that lower both consumer prices and internal costs.

The future of B-to-C used to look like boo.com: uncontrolled spending by founders who thought that the stock market would support them no matter how much cash they burned pursuing growth.

I’ve seen the future…

Now the future looks like Wal-Mart, a company that enjoys global sales rivaled by only Exxon Mobil and General Motors.

Wal-Mart recently challenged standard operating procedure by pulling its Website down for a few weeks for renovation. While not everyone understood the brilliance of this move (fuckedcompany.com tut-tutted that “No pure-ecommerce company would ever do that”), anyone who has ever had the misfortune to retool a Website while leaving it open for business knows that it can cost millions more than simply taking the old site down first.

The religion of 24/7 uptime, however, forbids these kinds of cost savings.

Wal-Mart’s managers took the site down anyway, in the same way they’d close a store for remodeling, because they know that the easiest way to make a dollar is to avoid spending one, and because they don’t care how people do it in Silicon Valley. Running a B-to-C organization for the long haul means saving money wherever you can. Indeed, making a commitment to steadily lowering costs as well as prices is the only way to make B-to-C (or B-to-B or E-to-E, for that matter) work.

Despite all of the obstacles, the B-to-C sector is going to be huge. But it won’t be dominated by companies trying to spend their way to savings.

It’s too early to know if the Wal-Mart of the Web will be the same Wal-Mart we know. But it isn’t too early to know that the businesses that succeed in the B-to-C sector will invest in holding down costs and forcing their suppliers to do the same, rather than those that invest in high-priced staffs and expensive ad campaigns.

The deflationary pressures the Web unleashes can be put to good use, but only by companies that embrace cost control for themselves, not just for their customers.

The Case Against Micropayments

First published on O’Reilly OpenP2P, 12/19/2000.

Micropayments are back, at least in theory, thanks to P2P. Micropayments are an idea with a long history and a disputed definition – as the W3C micropayment working group puts it, ” … there is no clear definition of a ‘Web micropayment’ that encompasses all systems,” but in its broadest definition, the word micropayment refers to “low-value electronic financial transactions.”

P2P creates two problems that micropayments seem ideally suited to solve. The first is the need to reward creators of text, graphics, music or video without the overhead of publishing middlemen or the necessity to charge high prices. The success of music-sharing systems such as Napster and Audiogalaxy, and the growth of more general platforms for file sharing such as Gnutella, Freenet and AIMster, make this problem urgent.

The other, more general P2P problem micropayments seem to solve is the need for efficient markets. Proponents believe that micropayments are ideal not just for paying artists and musicians, but for providers of any resource – spare cycles, spare disk space, and so on. Accordingly, micropayments are a necessary precondition for the efficient use of distributed resources.

Jakob Nielsen, in his essay The Case for Micropayments, writes, “I predict that most sites that are not financed through traditional product sales will move to micropayments in less than two years,” and Nicholas Negroponte makes an even shorter-term prediction: “You’re going to see within the next year an extraordinary movement on the Web of systems for micropayment … .” He goes on to predict micropayment revenues in the tens or hundreds of billions of dollars.

Alas for micropayments, both of these predictions were made in 1998. (In 1999, Nielsen reiterated his position, saying, “I now finally believe that the first wave of micropayment services will hit in 2000.”) And here it is, the end of 2000. Not only did we not get the flying cars, we didn’t get micropayments either. What happened?

Micropayments: An Idea Whose Time Has Gone

Micropayment systems have not failed because of poor implementation; they have failed because they are a bad idea. Furthermore, since their weakness is systemic, they will continue to fail in the future.

Proponents of micropayments often argue that the real world demonstrates user acceptance: Micropayments are used in a number of household utilities such as electricity, gas, and, most germanely, telecom services like long distance.

These arguments run aground on the historical record. There have been a number of attempts to implement micropayments, and they have not caught on even in a modest fashion – a partial list of floundering or failed systems includes FirstVirtual, Cybercoin, Millicent, Digicash, Internet Dollar, Pay2See, MicroMint and Cybercent. If there was going to be broad user support, we would have seen some glimmer of it by now.

Furthermore, businesses like the gas company and the phone company that use micropayments offline share one characteristic: They are all monopolies or cartels. In situations where there is real competition, providers are usually forced to drop “pay as you go” schemes in response to user preference, because if they don’t, anyone who can offer flat-rate pricing becomes the market leader. (See sidebar: “Simplicity in pricing.”)

Simplicity in pricing

The historical record for user preferences in telecom has been particularly clear. In Andrew Odlyzko’s seminal work, The history of communications and its implications for the Internet, he puts it this way:

“There are repeating patterns in the histories of communication technologies, including ordinary mail, the telegraph, the telephone, and the Internet. In particular, the typical story for each service is that quality rises, prices decrease, and usage increases to produce increased total revenues. At the same time, prices become simpler.

“The historical analogies of this paper suggest that the Internet will evolve in a similar way, towards simplicity. The schemes that aim to provide differentiated service levels and sophisticated pricing schemes are unlikely to be widely adopted.”

Why have micropayments failed? There’s a short answer and a long one. The short answer captures micropayment’s fatal weakness; the long one just provides additional detail. 

The Short Answer for Why Micropayments Fail

Users hate them.

The Long Answer for Why Micropayments Fail

Why does it matter that users hate micropayments? Because users are the ones with the money, and micropayments do not take user preferences into account.

In particular, users want predictable and simple pricing. Micropayments, meanwhile, waste the users’ mental effort in order to conserve cheap resources, by creating many tiny, unpredictable transactions. Micropayments thus create in the mind of the user both anxiety and confusion, characteristics that users have not heretofore been known to actively seek out.

Anxiety and the Double-Standard of Decision Making

Many people working on micropayments emphasize the need for simplicity in the implementation. Indeed, the W3C is working on a micropayment system embedded within a link itself, an attempt to make the decision to purchase almost literally a no-brainer.

Embedding the micropayment into the link would seem to take the intrusiveness of the micropayment to an absolute minimum, but in fact it creates a double-standard. A transaction can’t be worth so much as to require a decision but worth so little that that decision is automatic. There is a certain amount of anxiety involved in any decision to buy, no matter how small, and it derives not from the interface used or the time required, but from the very act of deciding.

Micropayments, like all payments, require a comparison: “Is this much of X worth that much of Y?” There is a minimum mental transaction cost created by this fact that cannot be optimized away, because the only transaction a user will be willing to approve with no thought will be one that costs them nothing, which is no transaction at all. 

Thus the anxiety of buying is a permanent feature of micropayment systems, since economic decisions are made on the margin – not, “Is a drink worth a dollar?” but, “Is the next drink worth the next dollar?” Anything that requires the user to approve a transaction creates this anxiety, no matter what the mechanism for deciding or paying is. 

The desired state for micropayments – “Get the user to authorize payment without creating any overhead” – can thus never be achieved, because the anxiety of decision making creates overhead. No matter how simple the interface is, there will always be transactions too small to be worth the hassle.

Confusion and the Double-Standard of Value

Even accepting the anxiety of deciding as a permanent feature of commerce, micropayments would still seem to have an advantage over larger payments, since the cost of the transaction is so low. Who could haggle over a penny’s worth of content? After all, people routinely leave extra pennies in a jar by the cashier. Surely amounts this small make valuing a micropayment transaction effortless?

Here again micropayments create a double-standard. One cannot tell users that they need to place a monetary value on something while also suggesting that the fee charged is functionally zero. This creates confusion – if the message to the user is that paying a penny for something makes it effectively free, then why isn’t it actually free? Alternatively, if the user is being forced to assent to a debit, how can they behave as if they are not spending money? 

Beneath a certain price, goods or services become harder to value, not easier, because the X for Y comparison becomes more confusing, not less. Users have no trouble deciding whether a $1 newspaper is worthwhile – did it interest you, did it keep you from getting bored, did reading it let you sound up to date – but how could you decide whether each part of the newspaper is worth a penny?

Was each of 100 individual stories in the newspaper worth a penny, even though you didn’t read all of them? Was each of the 25 stories you read worth 4 cents apiece? If you read a story halfway through, was it worth half what a full story was worth? And so on.

When you disaggregate a newspaper, it becomes harder to value, not easier. By accepting that different people will find different things interesting, and by rolling all of those things together, a newspaper achieves what micropayments cannot: clarity in pricing. 

The very micro-ness of micropayments makes them confusing. At the very least, users will be persistently puzzled over the conflicting messages of “This is worth so much you have to decide whether to buy it or not” and “This is worth so little that it has virtually no cost to you.”

User Preferences

Micropayment advocates mistakenly believe that efficient allocation of resources is the purpose of markets. Efficiency is a byproduct of market systems, not their goal. The reasons markets work are not because users have embraced efficiency but because markets are the best place to allow users to maximize their preferences, and very often their preferences are not for conservation of cheap resources.

Imagine you are moving and need to buy cardboard boxes. Now you could go and measure the height, width, and depth of every object in your house – every book, every fork, every shoe – and then create 3D models of how these objects could be most densely packed into cardboard boxes, and only then buy the actual boxes. This would allow you to use the minimum number of boxes.

But you don’t care about cardboard boxes, you care about moving, so spending time and effort to calculate the exact number of boxes conserves boxes but wastes time. Furthermore, you know that having one box too many is not nearly as bad as having one box too few, so you will be willing to guess how many boxes you will need, and then pad the number.

For low-cost items, in other words, you are willing to overpay for cheap resources, in order to have a system that maximizes other, more important, preferences. Micropayment systems, by contrast, typically treat cheap resources (content, cycles, disk) as precious commodities, while treating the user’s time as if it were so abundant as to be free.

Micropayments Are Just Payments

Neither the difficulties posed by mental transaction costs nor the historical record of user demand for simple, predictable pricing offers much hope for micropayments. In fact, as happened with earlier experiments attempting to replace cash with “smart cards,” a new form of financial infrastructure turned out to be unnecessary when the existing infrastructure proved flexible enough to be modified. Smart cards as cash replacements failed because the existing credit card infrastructure was extended to include both debit cards and ubiquitous card-reading terminals.

So it is with micropayments. The closest thing we have to functioning micropayment systems, Qpass and Paypal, are simply new interfaces to the existing credit card infrastructure. These services do not lower mental transaction costs nor do they make it any easier for a user to value a penny’s worth of anything – they simply make it possible for users to spend their money once they’ve decided to.

Micropayment systems are simply payment systems, and the size and frequency of the average purchase will be set by the user’s willingness to spend, not by special infrastructure or interfaces. There is no magic bullet – only payment systems that work within user expectations can succeed, and users will not tolerate many tiny payments.

Old Solutions

This still leaves the problems that micropayments were meant to solve. How to balance users’ strong preference for simple pricing with the enormous number of cheap, but not free, things available on the Net?

Micropayment advocates often act as if this is a problem particular to the Internet, but the real world abounds with items of vanishingly small value: a single stick of gum, a single newspaper article, a single day’s rent. There are three principal solutions to this problem offline – aggregation, subscription, and subsidy – that are used individually or in combination. It is these same solutions – and not micropayments – that are likely to prevail online as well. 

Aggregation

Aggregation follows the newspaper example earlier – gather together a large number of low-value things, and bundle them into a single higher-value transaction.

Call this the “Disneyland” pricing model – entrance to the park costs money, and all the rides are free. Likewise, the newspaper has a single cost that, once paid, gives the user free access to all the stories.

Aggregation also smoothes out the differences in preferences. Imagine a newspaper sold in three separate sections – news, business, and sports. Now imagine that Curly would pay a nickel to get the news section, a dime for business, and a dime for sports; Moe would pay a dime each for news and business but only a nickel for sports; and Larry would pay a dime, a nickel, and a dime. 

If the newspaper charges a nickel a section, each man will buy all three sections, for 15 cents. If it prices each section at a dime, each man will opt out of one section, paying a total of 20 cents. If the newspaper aggregates all three sections together, however, Curly, Moe and Larry will all agree to pay 25 cents for the whole, even though they value the parts differently.

Aggregation thus not only lowers the mental transaction costs associated with micropayments by bundling several purchase decisions together, it creates economic efficiencies unavailable in a world where each resource is priced separately. 
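To make the bundling arithmetic concrete, here is a minimal sketch in Python of the three pricing schemes, using the Curly, Moe and Larry valuations above. The function names are mine, invented purely for illustration.

```python
# Bundling arithmetic from the Curly/Moe/Larry example; all prices in cents.
valuations = {
    "Curly": {"news": 5,  "business": 10, "sports": 10},
    "Moe":   {"news": 10, "business": 10, "sports": 5},
    "Larry": {"news": 10, "business": 5,  "sports": 10},
}

def revenue_per_section(price):
    """Each reader buys only the sections he values at or above the price."""
    return sum(price
               for prefs in valuations.values()
               for value in prefs.values()
               if value >= price)

def revenue_bundled(price):
    """Each reader buys the whole paper if his total valuation meets the price."""
    return sum(price for prefs in valuations.values()
               if sum(prefs.values()) >= price)

print(revenue_per_section(5))   # 45 -- every reader buys all three sections
print(revenue_per_section(10))  # 60 -- every reader drops his nickel-valued section
print(revenue_bundled(25))      # 75 -- the bundle captures each reader's full valuation
```

The bundle outsells either per-section price, even though no two readers value the sections the same way.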

Subscription

A subscription is a way of bundling diverse materials together over a set period, in return for a set fee from the user. As the newspaper example demonstrates, aggregation and subscription can work together for the same bundle of assets. 

Subscription is more than just aggregation in time. Money has time value – $100 today is worth more than $100 a month from now. Furthermore, producers value predictability no less than consumers do, so producers are often willing to trade lower subscription prices for lump-sum payments and a more predictable revenue stream.
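A minimal sketch of that discounting point, assuming a purely illustrative 1% monthly discount rate:

```python
# Time value of money: $100 today vs. $100 a month from now.
# The 1% monthly discount rate is an assumption for illustration only.

def present_value(amount, months_from_now, monthly_rate=0.01):
    """What a future payment is worth today at the assumed discount rate."""
    return amount / ((1 + monthly_rate) ** months_from_now)

print(round(present_value(100, 1), 2))   # 99.01 -- next month's $100 is worth less today

# A producer may therefore accept less than 12x the monthly price as an
# up-front annual fee: twelve $10 payments are worth ~$112.55 today, not $120.
annual_equivalent = sum(present_value(10, m) for m in range(1, 13))
print(round(annual_equivalent, 2))       # 112.55
```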

Sidebar: Long-term incentives

Game theory fans will recognize subscription arrangements as an Iterated Prisoner’s Dilemma, where the producer’s incentive to ship substandard product, or the consumer’s to take resources without paying, is dampened by the repetition of delivery and payment.

Subscription also serves as a reputation management system. Because producer and consumer are better known to one another in a subscription arrangement than in one-off purchases, and because the consumer expects steady production from the producer while the producer hopes for renewed subscriptions from the consumer, both sides have an incentive to live up to their part of the bargain, as a way of creating long-term value. (See sidebar: “Long-term incentives”.)
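A minimal sketch of the repetition argument from the sidebar; the payoff numbers are invented purely for illustration:

```python
# Iterated-game intuition: defecting pays once, cooperating pays every period.
COOPERATE = 3   # both sides honor the arrangement this period (illustrative)
DEFECT = 5      # one-time gain from shoddy product or unpaid use (illustrative)

def total_payoff(defect_at=None, periods=12):
    """Cumulative payoff over a subscription term; defection ends the relationship."""
    total = 0
    for period in range(periods):
        if defect_at is not None and period == defect_at:
            return total + DEFECT
        total += COOPERATE
    return total

print(total_payoff(defect_at=0))  # 5  -- grab the one-time gain, lose the relationship
print(total_payoff())             # 36 -- steady cooperation wins over the full term
```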

Subsidy

Subsidy is by far the most common form of pricing for the resources micropayments were meant to target. Subsidy is simply getting someone other than the audience to offset costs. Again, the newspaper example shows that subsidy can exist alongside aggregation and subscription, since the advertisers subsidize most, and in some cases all, of a newspaper’s costs. Advertising subsidy is the normal form of revenue for most Web sites offering content.

The biggest source of subsidy on the Net overall, however, is the users themselves. The weblog movement, where users generate daily logs of their thoughts and interests, is typically user subsidized – both the time and the resources needed to generate and distribute the content are donated by the user as a labor of love.

Indeed, even as the micropayment movement imagines a world where charging for resources becomes easy enough to spawn a new class of professionals, what seems to be happening is that the resources are becoming cheap enough to allow amateurs to easily subsidize their own work.

Against users’ distaste for micropayments, aggregation, subscription and subsidy will be the principal tools for bridging the gap between atomized resources and the demand for simple, predictable pricing.

Playing by the Users’ Rules

Micropayment proponents have long suggested that micropayments will work because it would be great if they did. A functioning micropayment system would solve several thorny financial problems all at once. Unfortunately, the barriers to micropayments are not problems of technology or interface, but of user acceptance. The advantage of micropayment systems to the people receiving micropayments is clear; the value to the users whose money and time are involved is not.

Because of transactional inefficiencies, user resistance, and the increasing flexibility of the existing financial framework, micropayments will never become a general class of network application. Anyone setting out to build systems that reward resource providers will have to create payment systems that provide users with the kind of financial experience they demand – simple, predictable and easily valued. Only solutions that play by these rules will succeed.

Peers not Pareto

First published on O’Reilly’s OpenP2P, 12/15/2000.

After writing on the issue of freeriding, and particularly on why it isn’t the general problem for P2P that people think it is, a possibly neater explanation of the same issue occurred to me. 

I now think that most people working on the freeriding problem are assuming that P2P systems are “Pareto Optimal,” when they actually aren’t. 

Named after the work of the Italian economist and sociologist Vilfredo Pareto (1848-1923), “Pareto Optimal” refers to a situation in which you can’t make anybody better off without making someone else worse off. An analogy is an oversold plane with 110 ticket holders for 100 seats: you could make any of the 10 standby passengers better off, but only by making one of the seated passengers worse off. Note that this says nothing about the overall fairness of the system, or even overall efficiency. It simply describes systems that are at equilibrium.

Since free markets are so good at producing competitive equilibrium, they abound with Pareto Optimal situations. Furthermore, we are used to needing market incentives to adjust things fairly in Pareto Optimal situations. If you were told you had to get off the plane to make room for a standby passenger, you would think that was unfair, but if you were offered and accepted a travel voucher as recompense, you would not.

I think that we are in fact so used to Pareto Optimal situations that we see them even where they don’t exist. Much of the writing about freeriding assumes that P2P systems are all Pareto Optimal, so that, logically, if someone takes from you but does not give back, they have gained and you have lost.

This is plainly not true. If I leave Napster on over the weekend, I typically get more downloads than I have total songs stored, at a marginal cost of zero. Those users are better off. I am not worse off. The situation is not Pareto Optimal.
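Here is a minimal sketch of the test being applied informally above; the utility changes are stand-in numbers, not measurements of anything:

```python
# Pareto test: a change is a Pareto improvement if nobody is worse off
# and at least one person is better off. Utility deltas are illustrative.

def is_pareto_improvement(utility_deltas):
    values = utility_deltas.values()
    return all(d >= 0 for d in values) and any(d > 0 for d in values)

# Oversold plane: seating a standby passenger bumps a seated one.
print(is_pareto_improvement({"standby": +1, "bumped": -1}))
# False -- this change helps one passenger only by hurting another

# Weekend of file sharing: downloaders gain, the sharer's marginal cost is zero.
print(is_pareto_improvement({"downloader": +1, "sharer": 0}))
# True -- a free improvement exists, so the situation is not Pareto Optimal
```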

Consider the parallel of email. Like mp3s, an email is a file — it takes up disk space and bandwidth. If I send you email, who is the producer and who the consumer in that situation? Should you be charged for the time I spent writing? Should I be charged for the disk space my email takes up on your hard drive? Are lurkers on this list freeriders if they don’t post? Or are the posters freeriders because we are taking up the lurkers’ resources without paying them for the bandwidth and disk our posts are using, to say nothing of their collective time?

Markets are both cause and effect of Pareto Optimal situations. One of the bets made by the “market systems” people is that a market for resources will either move a system to an equilibrium use of resources, or that the system will reach such an equilibrium on its own, at which point a market will be necessary to allocate resources efficiently. 

On the other hand, we are already well-accustomed to situations on the Net where the resources produced and consumed are not in fact subject to real-time market pricing (like the production and consumption of the files that make up email), and it is apparent to everyone who uses those systems that real-time pricing (aka micropayments) would not make the system more efficient overall (though it would cut down on spam, which is the non-Pareto Optimal system’s version of freeriding).

My view is that P2P file-sharing is (or can be, depending on its architecture) in the same position, where some users can be better off without making other users worse off.

Wireless Auction Follies

10/30/2000

An interesting contrast in wireless strategy is taking place in Britain and Sweden. Last spring, Britain decided to auction off its wireless spectrum to the highest bidder. The results were breathtaking, with Britain raising $35.4 billion for the government coffers. This fall, Sweden will also assign its wireless spectrum to telecom companies eager to offer next-generation (or 3G) wireless services, but instead of emulating Britain’s budget-maximizing strategy, it has opted for a seemingly wasteful beauty pageant, granting hugely valuable spectrum at no cost to whichever telecom companies it judges to have the best proposals.

The contrast couldn’t be clearer. After assigning its spectrum, the British government is $35.4 billion ahead of Sweden–and its wireless industry is $35.4 billion behind.

Britain has, in effect, imposed a tax on next-generation wireless services, paid in full before (long before) the first penny of 3G revenue is earned. This in turn means that $35.4 billion over and above the cost of actually building those 3G services must be extracted from British consumers, in order for the new owners of that spectrum to remain viable businesses.

For the auction’s alleged winners, the UK sale couldn’t have taken place at a worse time. Wireless hype was at its height, and the markets were desperately looking for the Next Big Thing after the early April meltdown. In addition, WAP euphoria still reigned supreme, bringing with it visions of “m-commerce” and the massive B-to-C revenues that had been so elusive in e-commerce. In this environment, wireless spectrum looked like a license to print money, and was priced accordingly.

The air has been leaking steadily out of that balloon. First came the beginnings of the “WAPlash” and the disillusionment of designers and engineers with the difficulty of offering content or services over WAP (not only is WML difficult to program relative to HTML, but different handsets display any given WAP site differently). Next came the disillusionment of the users, who found waiting for a new download every time they changed menus to be intolerable.

Then there was the loss of customer lock-in. When British Telecom was forced to abandon its plans to lock its users into its own gateway, it destroyed the illusion that telcos would ever be able to act as the sole gatekeeper (and tollbooth) for all of their users’ wireless data.

Lastly, the competition arrived. NTT DoCoMo’s iMode and RIM’s Blackberry have both demonstrated that it’s possible to make functional and popular wireless devices based on open standards–iMode uses HTML; Blackberry handles email.

We’ve been here before. Between 1994 and 1996, when the Web was young, many companies tried to offer both telecommunications and media services, providing both dial-up access and walled gardens of content: Prodigy, CompuServe, and even AOL before it embraced the Web. These models failed when users expressed a strong preference for paying different companies for access and commercial transactions. At that point, we settled down to the Web we have today: ISPs and telcos on one side, media and commerce on the other.

As in the States, many British telecom companies disastrously flirted with the idea of transforming themselves into Internet media businesses. And as in the States, they ended up making most of their money by providing bandwidth. The wireless industry in Britain would be poised for a similar arrangement, but for one sticky wicket: having forked over $35.4 billion, the auction’s winners cannot survive a split between access providers on one side and content and commerce providers on the other.

In fact, the wireless carriers are going to be forced to behave like media companies whether they want to or not, because any money they could make selling access to their newly acquired spectrum has already been taxed away in advance. Furthermore, startups that want to build new businesses on top of that spectrum represent a threat rather than an opportunity, because anything–anything–that suggests the auction winners will capture less than 100 percent of the revenue from their customers would illustrate how badly they overpaid.

In effect, the British government has issued this decree to the winners of the wireless auction:

“Hear Ye, Hear Ye, You are enjoined from passing savings on to users, offering spectrum access to startups, or rolling out low-margin services no matter how innovative or popular they may be, until such time as the first $35.4 billion of profit has been extracted from the populace.”

And Sweden? Sweden is laughing.

In Sweden, any wireless service that will generate a krona’s worth of revenue for a krona’s worth of investment is worth trying. The beauty pageant will create a system where experimentation with new services, even moderately profitable ones, can be undertaken by the new owners of the spectrum.

Seen in this light, Sweden’s beauty contest doesn’t look so wasteful. Britain’s auction may have generated a huge sum, but at the cost of sandbagging the industry. So keep an eye on the Swedes–their “forgo the revenue” strategy will have paid off if, by refusing to tax the industry in advance, they earn an additional $35.4 billion in taxes from the growth created by their dynamic and innovative wireless industry.

PCs Are The Dark Matter Of The Internet

First published on Biz2, 10/00.

Premature definition is a danger for any movement. Once a definitive label is applied to a new phenomenon, it invariably begins shaping — and possibly distorting — people’s views. So it is with the current revolution, where Napster, SETI@Home, and their cousins now seem to be part of a larger and more coherent change in the nature of the internet. There have been many attempts to describe this change in a phrase — decentralization, distributed computing — but the label that seems to have stuck is peer-to-peer. And now that peer-to-peer is the name of the game, the rush is on to apply this definition both as a litmus test and as a marketing tool.

This is leading to silliness of the predictable sort — businesses that have nothing in common with Napster, Gnutella, or Freeserve are nevertheless re-inventing themselves as “peer-to-peer” companies, applying the term like a fresh coat of paint over a tired business model. Meanwhile, newly vigilant interpreters of the revolution are now suggesting that Napster itself is not “truly peer-to-peer”, because it relies on a centralized server to host its song list.

It seems obvious, but bears repeating: definitions are only useful as tools for sharpening one’s perception of reality. If Napster isn’t peer-to-peer, then “peer-to-peer” is a bad description of what’s happening. Napster is the killer app for this revolution, and defining it out of the club after the fact is like saying “Sure it might work in practice, but it will never fly in theory.”

No matter what you call it, what is happening is this: PCs, and in particular their latent computing power, are for the first time being integrated directly into the fabric of the internet.

PCs are the dark matter of the internet. Like the barely detectable stuff that makes up most of the mass of the universe, PCs are connected to the internet by the hundreds of millions but have very little discernible effect on the whole, because they are largely unused as anything other than dumb clients (and expensive dumb clients to boot). From the point of view of most of the internet industry, a PC is nothing more than a life-support system for a browser and a place to store cookies.

PCs have been restricted to this expensive-but-dumb client mode for many historical reasons — slow CPUs, small disks, flaky OSs, slow and intermittent connections, no permanent IP addresses — but with the steady growth in hardware quality, connectivity, and user base, the PCs at the edges of the network now represent an astonishing and untapped pool of computing power.

At a conservative estimate, the world’s net-connected PCs host an aggregate 10 billion MHz of processing power and 10 thousand terabytes of storage. This calculation assumes 100 million PCs among the net’s 300 million users, with an average chip speed of 100 MHz and an average 100 MB hard drive. And these numbers continue to climb — today, sub-$2K PCs have an order of magnitude more processing power and two orders of magnitude more storage than this assumed average.
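A back-of-the-envelope check of that estimate, using only the assumptions stated above:

```python
# Conservative estimate from the text: 100 million PCs, 100 MHz and 100 MB each.
pcs = 100_000_000
mhz_per_pc = 100
mb_per_pc = 100

total_mhz = pcs * mhz_per_pc             # aggregate processing power
total_tb = pcs * mb_per_pc / 1_000_000   # 1 terabyte = 1,000,000 megabytes

print(f"{total_mhz:,} MHz")   # 10,000,000,000 MHz -- 10 billion MHz
print(f"{total_tb:,.0f} TB")  # 10,000 TB -- 10 thousand terabytes
```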

This is the fuel powering the current revolution — the latent capabilities of PC hardware made newly accessible represent a huge, untapped resource. No matter how it gets labelled (and peer-to-peer seems likely to stick), the thing that software like the Gnutella file sharing system and the Popular Power distributed computing network have in common is an ability to harness this dark matter, the otherwise underused hardware at the edges of the net.

Note though that this isn’t just “Return of the PC”, because in these new models, PCs aren’t just personal computers, they’re promiscuous computers, hosting data the rest of the world has access to, a la Napster, and sometimes even hosting calculations that are of no use to the PC’s owner at all, like Popular Power’s influenza virus simulations. Furthermore, the PCs themselves are being disaggregated — Popular Power will take as much CPU time as it can get but needs practically no storage, while Gnutella needs vast amounts of disk space but almost no CPU time. And neither kind of business particularly needs the operating system — since the important connection is often with the network rather than the local user, Intel and Seagate matter more to the peer-to-peer companies than do Microsoft or Apple.

It’s early days yet for this architectural shift, and the danger of the peer-to-peer label is that it may actually obscure the real engineering changes afoot. With improvements in hardware, connectivity and sheer numbers still mounting rapidly, anyone who can figure out how to light up the internet’s dark matter gains access to a large and growing pool of computing resources, even if some of the functions are centralized (again, like Napster or Popular Power).

It’s still too soon to see who the major players will be, but don’t place any bets on people or companies reflexively using the peer-to-peer label. Bet instead on the people figuring out how to leverage the underused PC hardware, because the actual engineering challenges in taking advantage of the world’s PCs matter more — and will create more value — than merely taking on the theoretical challenges of peer-to-peer architecture.

The Napster-BMG Merger

Napster has always been a revolution within the commercial music business, not against it, and yesterday’s deal between BMG and Napster demonstrates that at least one of the 5 major labels understands that. The press release was short on details, but the rough outline of the deal has Bertelsmann dropping its lawsuit and instead working with Napster to create subscription-based access to its entire music catalog online. Despite a year of legal action by the major labels, and despite the revolutionary fervor of some of Napster’s users, Napster’s success has more to do with the economics of digital music than with copyright law, and the BMG deal is merely a recognition of those economic realities.

Until Napster, the industry had an astonishingly successful run in producing digital music while preventing digital copying from taking place on a wide scale, managing to sideline DAT, Minidisc, and recordable CDs for years. Every time any of the major labels announced an online initiative, it was always based around digital rights management schemes like SDMI, designed to make the experience of buying and playing digital files at least as inconvenient as physical albums and tapes.

In this environment, Napster was a cold shower. Napster demonstrated how easily and cheaply music could be distributed by people who did not have a vested interest in preserving inefficiency. This in turn reduced the industry to calling music lovers ‘pirates’ (even though Napster users weren’t in it for the money, which is surely the definition of piracy), or trying to ‘educate’ us about why we should be happy to pay as much for downloaded files as for a CD (because it was costing them so much to make downloaded music inconvenient).

As long as the labels kept whining, Napster looked revolutionary, but once BMG finally faced the economic realities of online distribution and flat rate pricing, the obvious partner for the new era was Napster. That era began in earnest yesterday, and the people in for the real surprise are not the music executives, who are after all adept at reading popular sentiment, and who stand to make more money from the recurring revenues of a subscription model. The real surprise is coming for those users who convinced themselves that Napster’s growth had anything to do with anti-authoritarian zeal.

Despite the rants of a few artists and techno-anarchists who believed that Napster users were willing to go to the ramparts for the cause, large scale civil disobedience against things like Prohibition or the 55 mph speed limit has usually been about relaxing restrictions, not repealing them. You can still make gin for free in your bathtub, but nobody does it anymore, because the legal liquor industry now sells high-quality gin at a reasonable price, with restrictions that society can live with.

Likewise, the BMG deal points to a future where you can subscribe to legal music from Napster for an attractive price, music which, as a bonus, won’t skip, end early, or be misindexed. Faced with the choice between shelling out five bucks a month for high quality legal access or mastering Gnutella, many music lovers will simply plump for the subscription. This will in turn reduce the number of copyright violators, making it easier for the industry to go after them, which will drive still more people to legal subscriptions, and so on.

For a moment there, as Napster’s usage went through the roof while the music industry spread insane propaganda about the impending collapse of all professional music making, one could imagine that the collective will of 30 million people looking for free Britney Spears songs constituted some sort of grass roots uprising against The Man. As the BMG deal reverberates through the industry, though, it will become apparent that those Napster users were really just agitating for better prices. In unleashing these economic effects, Napster has almost single-handedly dragged the music industry into the internet age. Now the industry is repaying the favor by dragging Napster into the mainstream of the music business.