P2P Smuggled In Under Cover of Darkness

First published on O’Reilly’s OpenP2P, 2/14/2001

2001 is the year peer-to-peer will make its real appearance in the enterprise, but most of it isn’t going to come in the front door. Just as workers took control of computing 20 years ago by smuggling PCs into businesses behind the backs of the people running the mainframes, workers are now taking control of networking by downloading P2P applications under the noses of the IT department.

Although it’s hard to remember, the PC started as a hobbyist’s toy in the late ’70s, and personal computers appeared in the business world not because management decided to embrace them, but because individual workers brought them in on their own. At the time, PCs were slow and prone to crashing, while the mainframes and minis that ran businesses were expensive but powerful. This quality gap made it almost impossible for businesses to take early PCs seriously.

However, workers weren’t bringing in PCs because of some sober-minded judgment about quality, but because they wanted to be in control. Whatever workers thought about the PC’s computational abilities relative to Big Iron, the motivating factor was that a PC was your own computer.

Today, networking — the ability to configure and alter the ways those PCs connect — is as centralized a function as computation was in the early ’80s, and thanks to P2P, this central control is just as surely and subtly being eroded. The driving force of this erosion is the same as it was with the PC: Workers want, and will agitate for, control over anything that affects their lives.

This smuggling in of P2P applications isn’t propelled solely by the human desire for control over one’s environment. There is another, more proximate cause of the change.

You Hate the IT Department, and They Hate You Right Back

The mutual enmity between the average IT department and the average end user is the key feature driving P2P adoption in the business setting.

The situation now is all but intolerable: No matter who you are, unless you are the CTO, the IT department does not work for you, so your interests and their interests are not aligned.

The IT department is rewarded for their ability to keep bad things from happening, and that means there is a pressure to create and then preserve stability. Meanwhile, you are rewarded for your ability to make good things happen, meaning that a certain amount of risk-taking is a necessary condition of your job.

Risk-taking undermines stability. Stability deflects risk-taking. You think the people in your IT department are jerks for not helping you do what you want to do; they consider you an idiot for installing software without their permission. And because of the way your interests are (mis)aligned, you are both right.

Thought Experiment

Imagine that you marched into your IT department and explained that you wanted the capability to have real-time conversations with Internet users directly from your PC, that you wanted this set up within the hour, and that you had no budget for it.

Now imagine being laughed out of the room.

Yet consider ICQ. Those are exactly its characteristics, and it is second only to e-mail, and well ahead of things such as Usenet and Web bulletin boards, as the tool of choice for text messaging in the workplace. Furthermore, chat is a “ratchet” technology: Once workers start using chat, they will never go back to being disconnected, even if the IT department objects.

And all this happened in less than 4 years, with absolutely no involvement from the IT department. Chat was offered directly to individual users as a new function, and since the business users among them knew (even if only unconsciously) that the chances of getting the IT department to help them were approximately “forget it,” their only option was to install and configure the application themselves, which they promptly did.

So chat became the first corporate networking software never approved by the majority of the corporations whose employees use it. It will not be the last.

Chat Is Just the Beginning

ICQ was the first application that made creating a public network address effortless. Because ICQ simply ignored the idea that anyone else had any say over how you use your computer, you never had to ask the IT department about IP addresses, domain name servers or hosting facilities. You could give your PC a network address, and that PC could talk to any other PC with an address in the ICQ name space, all on your own.
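
To make the mechanism concrete, here is a minimal sketch in Python of the kind of rendezvous service this implies. It is not ICQ’s actual protocol; the class and the port number are hypothetical. But the idea is the same: a central directory maps a user-chosen handle to whatever IP address the PC happens to have at the moment, so no DNS entry is ever needed.

    import socket

    class Directory:
        """Hypothetical rendezvous service: maps a handle to (ip, port)."""
        def __init__(self):
            self.registry = {}

        def register(self, handle, ip, port):
            # Called each time the client comes online; dynamic IP addresses
            # are fine because the mapping is refreshed at every login.
            self.registry[handle] = (ip, port)

        def lookup(self, handle):
            # Any peer can now resolve a handle to a contactable address.
            return self.registry.get(handle)

    directory = Directory()
    my_ip = socket.gethostbyname(socket.gethostname())
    directory.register("alice", my_ip, 4000)   # port chosen arbitrarily
    print(directory.lookup("alice"))           # e.g. ('10.0.0.5', 4000)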

More recently, Napster has made sharing files as easy as ICQ made chat. Before Napster, if you wanted to serve files from your PC, you needed a permanent IP address, a domain name, registration with domain name servers and properly configured Web server software on the PC. With Napster, you could be serving files within 5 minutes of having downloaded the software. Napster is so simple that it is easy to forget that it performs all of the functions of a Web server with none of the hassle.
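
The comparison to a Web server is nearly literal. As a rough sketch (this is not Napster’s wire protocol, and the index host and shared folder here are made up), a peer only has to announce its filenames to a central index and then answer download requests itself:

    import http.server
    import os
    import socketserver

    SHARED_DIR = "shared"   # hypothetical folder of files to offer
    PORT = 6699             # an arbitrary high port; no registration needed

    def announce(index_host, files):
        # A Napster-style client would send this list to the central index
        # so that searches can find this PC; here we only print it.
        print(f"announcing {len(files)} files to {index_host}")

    os.makedirs(SHARED_DIR, exist_ok=True)
    announce("index.example.com", os.listdir(SHARED_DIR))

    # Serving the files is just answering requests on a socket: every
    # function of a Web server, with none of the setup.
    os.chdir(SHARED_DIR)
    with socketserver.TCPServer(("", PORT), http.server.SimpleHTTPRequestHandler) as srv:
        print(f"serving files on port {PORT}, no domain name required")
        srv.serve_forever()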

Napster is optimized for MP3s, but there is no reason general-purpose file sharing can’t make the same leap. File sharing is especially ripe for a P2P solution, as the current norm for file sharing in the workplace — e-mail attachments — notoriously falls victim to arbitrary limits on file sizes, mangled MIME headers and the simple failure of users to attach the documents they meant to attach. (How many times have you received otherwise empty “here’s that file” mail?)

Though there are several systems vying for the title of general file-sharing network, the primary obstacle for systems such as Gnutella is their focus on purity of decentralization rather than ease of use. What brought chat and Napster into the workplace is the same thing that brought PCs into the workplace two decades ago: They were easy enough to use that non-technical workers felt comfortable setting them up themselves.

Necessity Is the Mother of Adoption

Workers’ desire for something to replace the e-mail attachment system of file sharing is so great that some system or systems will be adopted. Perhaps it will be Aimster, which links chat with file sharing; perhaps Groove, which is designed to set up an extensible group work environment without a server; perhaps Roku, OpenCola or Globus, all of which are trying to create general-purpose P2P computing solutions. And there are many others.

The first workplace P2P solution may also be a specific tool for a specific set of workers. One can easily imagine a P2P environment for programmers, where the version control system reverses its usual course, so that instead of checking out files stored centrally, it checks in files stored on individual desktops, and where the compiler knows where the source files are, even if they are spread across a dozen PCs.
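
A sketch of what that resolution step might look like, with made-up peer addresses (nothing here describes an existing product; it simply illustrates sources living on many desktops):

    import urllib.request

    # Hypothetical teammates' desktops, each sharing its source files
    PEERS = ["http://10.0.0.11:6699", "http://10.0.0.12:6699"]

    def fetch_source(filename):
        """Ask each desktop in turn until one serves the requested file."""
        for peer in PEERS:
            try:
                with urllib.request.urlopen(f"{peer}/{filename}", timeout=2) as resp:
                    return resp.read()
            except OSError:
                continue   # that peer is offline, or doesn't have the file
        raise FileNotFoundError(f"{filename} not found on any peer")

    # A compiler driver could then pull sources from wherever they live:
    # source = fetch_source("parser.c")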

And as with chat, once a system like this exists and crosses some threshold of ease of use, users will adopt it without asking or even informing the IT department.

End-to-End

As both Jon Udell and Larry Lessig have pointed out from different points of view, the fundamental promise of the Internet is end-to-end communications, where any node can get to any other node on its own. Things such as firewalls, NAT and dynamic IP addresses violate that promise both at the protocol level, by breaking the implicit contract of TCP/IP (two nodes can always contact each other), and on a social level (the Internet has no second-class citizens).

Business users have been second-class citizens for some time. Systems such as ICQ and Napster undo this by allowing users to create their own hosted network applications, and systems such as Mojo Nation go further, creating connection brokers that allow two machines, both behind firewalls, to talk to each other by taking the e-mail concept of store and forward and using it to broker requests for files and other resources.
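
The store-and-forward trick is simple to sketch (this illustrates the pattern, not Mojo Nation’s actual design). Both machines make outbound connections, which firewalls generally permit, and the broker in the middle holds each request until its recipient polls:

    from collections import defaultdict

    class Broker:
        """Hypothetical connection broker reachable by both peers."""
        def __init__(self):
            self.mailboxes = defaultdict(list)   # recipient -> queued messages

        def send(self, recipient, message):
            # Stored here because the recipient cannot accept an inbound
            # connection through its firewall.
            self.mailboxes[recipient].append(message)

        def poll(self, recipient):
            # The recipient initiates this call outbound, so no hole ever
            # needs to be opened in either firewall.
            queued, self.mailboxes[recipient] = self.mailboxes[recipient], []
            return queued

    broker = Broker()
    broker.send("machine-b", {"request": "report.doc"})  # machine A asks
    print(broker.poll("machine-b"))                      # machine B picks it up later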

The erosion of firewalls through the general adoption of port 80 as a front door is nothing compared to letting users create network identities for themselves without having to ask for either permission or help.

Security, Freedom and the Pendulum

Thus P2P represents a swing of the pendulum back toward user control. Twenty years ago, the issue was control over the center of a business, where the mainframes sat. Today, it is over the edges of a business, where the firewalls sit. However, the tension between the user’s interests and corporate policy is the same.

The security-minded will always complain about the dangers of users controlling their own network access, just as the mainframe support staff worried that PC users were going to destroy their tidy environments with their copies of VisiCalc. And, like the mainframe guys, they will be right. Security is only half the story, however.

Everyone knows that the easiest way to secure a PC is to disconnect it from the Internet, but no one outside of the NSA seriously suggests running a business where the staff has no Internet access. Security, in other words, always necessitates a tradeoff with convenience, and there are times when security can go too far. What the widespread adoption of chat software is telling us is that security concerns have gone too far, and that workers not only want more control over how and when their computers connect to the network, but that when someone offers them this control, they will take it.

This is likely to make for a showdown over P2P technologies in the workplace, pitting the freedom of individual workers against the advantages of centralized control, and security against flexibility. Adoption of P2P addressing, which bypasses DNS to give individual PCs externally contactable addresses, is already numbered in the tens of millions of users, thanks to Napster and ICQ.

By the time general adoption of serverless intranets begins, workers will have integrated P2P functions into their day too deeply for IT departments to simply ban them. As with the integration of the PC, expect the workers to win more control over the machines on their desks, and the IT departments to accept this change as the new norm over time.

Peak Performance Pricing

First published at Biz2, February 2001.

Of all the columns I have written, none has netted as much contentious mail as “Moving from Units to Eunuchs” (October 10, 2000, p114, and at Business2.com). That column argued that Napster was the death knell for unit pricing of online music. By allowing users to copy songs from one another with no per-unit costs, Napster introduced the possibility of “all you can eat” pricing for music, in the same way that America Online moved to “all you can eat” pricing for e-mail.

Most of the mail I received disputed the idea that Napster had no per-unit costs. That idea, said many readers, violates every bit of common sense about the economics of resource allotment. If more resources are being used, the users must be paying more for them somewhere, right?

Wrong. The notion that Napster must generate per-unit costs fails the most obvious test: reality. Download Napster, download a few popular songs, and then let other Napster users download those songs from you. Now scan your credit-card bills to see where the extra costs for those 10 or 100 or 1,000 downloads come in.

You can perform this experiment month after month, and the per-unit costs will never show up: you are not charged per byte for bandwidth. Even Napster’s plan to charge a subscription doesn’t change this math, because the charge is for access to the system, not for individual songs.

‘Pay as you go’
Napster and other peer-to-peer file-sharing systems take advantage of the curious way individual users pay for computers and bandwidth. While common sense suggests using a “pay as you go” system, the average PC user actually pays for peak performance, not overall resources, and it is peak pricing that produces the excess resources that let Napster and its cousins piggyback for free.

Pay as you go is the way we pay for everything from groceries to gasoline. Use some, pay some. Use more, pay more. At the center of the Internet, resources like bandwidth are indeed paid for in this way. If you host a Web server that sees a sudden spike in demand, your hosting company will simply deliver more bandwidth, and then charge you more for it on next month’s bill.

The average PC user, on the other hand, does not buy resources on a pay-as-you-go basis. First of all, the average PC is not operating 24 hours a day. Furthermore, individual users prize predictability in pricing. (This is why AOL was forced to drop its per-hour pricing in favor of the now-standard flat rate.) Finally, what users pay for when they buy a PC is not steady performance but peak performance. PC buyers don’t choose a faster chip because it will give them more total cycles; they choose a faster chip because they want Microsoft Excel to run faster. Without even doing the math, users understand that programs that don’t use up all of the available millions of instructions per second will be more responsive, while those that use all the CPU cycles (to perform complicated rendering or calculations) will finish sooner.

Likewise, they choose faster DSL so that the line will be idle more often, not less. Paying for peak performance sets a threshold between a user’s impatience and the size of their wallet, without exposing them to extra charges later.

A side effect of buying peak cycles and bandwidth is that resources that don’t get used have nevertheless been paid for. People who understand the economics of money but not of time don’t understand why peak pricing works. But anyone who has ever paid for a faster chip to improve peak performance knows instinctively that paying for resources upfront, no matter what you end up using, saves enough hassles to be worth the money.
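
The arithmetic makes the point. As a back-of-the-envelope sketch, with assumed numbers rather than anything from a real bill, an always-on DSL line bought for its peak speed leaves an enormous amount of already-paid-for capacity sitting idle:

    # Assumed figures, for illustration only
    line_speed_bps = 768_000      # a typical 2001-era DSL rate
    active_hours_per_day = 2      # hours the line is actually busy
    days_per_month = 30

    idle_seconds = (24 - active_hours_per_day) * 3600 * days_per_month
    idle_bytes = (line_speed_bps / 8) * idle_seconds
    print(f"{idle_bytes / 1e9:.0f} GB of prepaid, unused transfer per month")
    # ~228 GB: capacity the user has bought whether or not anything uses it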

The Napster trick
The genius of Napster was to find a way to piggyback on these already-paid-up resources in order to create new copies of songs with no more per-unit cost than new pieces of e-mail, a trick now being tried in several other arenas. The SETI@home project creates a virtual supercomputer out of otherwise unused CPU time, as do Popular Power, DataSynapse, and United Devices.
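
The cycle-harvesting pattern these projects share can be sketched in a few lines (a simplification, of course; a real client fetches work units from a project server and returns results to it): run the computation only when the machine is otherwise idle, so that every donated cycle is one the owner already paid for.

    import os
    import time

    def machine_is_idle():
        # Assumption for this sketch: a low one-minute load average means
        # the owner isn't using the machine. (os.getloadavg is POSIX-only.)
        return os.getloadavg()[0] < 0.5

    def crunch(work_unit_size):
        # Stand-in for real analysis of a downloaded work unit.
        return sum(i * i for i in range(work_unit_size))

    while True:
        if machine_is_idle():
            result = crunch(1_000_000)
            # A real client would now upload `result` to the project.
        time.sleep(60)   # check again in a minute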

The flagship application of OpenCola combines two of the most talked-about trends on the Internet, peer-to-peer networking and expert communities, letting users share knowledge instead of songs. It turns the unused resources at the edge of the network into a collaborative platform on which other developers can build peer-to-peer applications, as does Groove Networks.

As more users connect to the Internet every day, and as both their personal computers and their bandwidth get faster, the amount of pre-paid but unused resources at the edges of the network is growing to staggering proportions.

By cleverly using those resources in a way that allowed it to sidestep per-unit pricing, Napster demonstrated the value of the world’s Net-connected PCs. The race is now on to capitalize on them in a more general fashion.