The Divide by Zero Era: The Arrival of the Free Computer

First published in ACM, 12/98.

"I think there's a world market for about five computers."
-- attr. Thomas J. Watson (IBM Chairman), 1945

"There is is no reason for any individual to have a computer in their home."
-- Ken Olsen (President of Digital Equipment
Corporation), Convention of the World Future
Society, 1977

"The PC model will prove itself once again."
-- Bill Gates (CEO of Microsoft),
COMDEX, 1998

As with Thomas Kuhn’s famous idea of “paradigm shifts” in “The Structure of Scientific Revolutions”, the computing industry has two basic modes – steady improvement within a given architecture, a kind of “normal computing”, punctuated by the introduction of new architectures or ideas, a kind of “radical computing”. Once a radical shift has been completed, the industry reverts to “normal computing”, albeit organized around the new norm, and once that new kind of computing becomes better understood, it creates (or reveals) a new set of problems, problems which will someday require yet another radical shift.

The mainframe is introduced, then undergoes years of steady and rapid improvement. At the height of its powers, the mainframe solves all the problems within its domain, but gives rise to a new set of problems (problems of distributed computing, in this case), which couldn’t even have been discovered without the mainframe solving the previous generation of problems. These new problems do not respond to the kinds of things the mainframe does well, the community casts about for new solutions, and slowly, the client-server model (in this case) is born. 

This simplified view overstates the case, of course; in the real world these changes only appear neat and obvious after the fact. However, in spite of this muddiness, the most visible quality of such a shift is that it can be identified not just by changes in architecture, but by changes in user base. Put another way, real computing revolutions take place not when people are introduced to new kinds of computers but when computers are introduced to new kinds of people. 

Failure to understand the timing of these radical shifts is the underlying error made by all three of the computer executives quoted at the beginning of this article. Each of them dominated the computing technology of their day, and each of them failed to see that “the computer” as they understood it (and manufactured it) was too limited to be an all-inclusive solution. In particular, they all approached their contemporary computing environment as if it were the last computing environment, but in computing ecology, there is no “last”, there is just “next”.

Scarcity Drives Strategy

What characterizes any computing era is scarcity. In the earliest days, everything was scarce, and building a computer was such a Herculean effort that had the price of hardware not plummeted, and expertise in building computers not skyrocketed, Watson’s estimate of 5 computers would have made sense. However, price did plummet, expertise did skyrocket, and computers were introduced to a new class of businesses in the 60s and 70s that could never have imagined owning a computer in earlier decades. 

This era of business expansion was still going strong in the late 70s, when Olsen made his famous prediction. The scarcity at the time was processing power – well into the 80s, there were systems that billed each user for every second of CPU time consumed. In this environment, it was impossible to imagine home computing. Such a thing would require CPU cycles to become so abundant as to be free, an absurdity from Ken Olsen’s point of view, which is why the home computer took DEC by surprise.

We are now in the second decade of the “PC Model”, which took advantage of the falling cost of CPU cycles to create distributed computing through the spread of stand-alone boxes. After such a long time of living with the PC, we can see the problems it can’t solve – it centralizes too much that should be distributed (most cycles are wasted on most PCs most of the time) and it distributes too much that should be centralized (a company contact list should be maintained and updated centrally, not given out in multiple copies, one per PC). Furthermore, the “Zero Maintenance Cost” hardware solutions that are being proposed – essentially pulling the hard drive out of a PC to make it an NC – are too little, too late.

Divide by Zero

A computing era ends – computers break out of normal mode and spread outwards to new groups of users – when the previous era’s scarcity disappears. At that point the old era’s calculations run into a divide by zero error – calculating “CPU cycles per dollar” becomes pointless when CPU cycles are abundant enough to be free, charging users per kilobyte of storage becomes pointless when the ordinary unit of drive storage is the gigabyte, and so on.

I believe that we are now seeing the end of the PC era because of another divide-by-zero error: many people still wrongly assume that you will continue to be able to charge money for computers. In the near future, the price of computers will fall to free, which will in turn open computer use to an enormously expanded population.

You can already see the traditional trade press struggling with these changes, since many of them still write about the “sub $1000 PC” at a time when there are several popular sub _$500_ PCs on offer, with no end to the price cutting in sight. The emergence of free computers will be driven not just by falling costs on the supply side, but by financial advantages on the demand side – businesses will begin to treat computers as a service instead of a product, in the same way that mobile phone services give away the phone itself in order to sell a service plan. Just as falling prices have democratized the mobile phone industry, free computers will open the net to people whose incomes don’t include an extra $2000 of disposable income every two years.

There are many factors going into making this shift away from the PC model possible – the obvious one is that for a significant and growing percentage of PC users, the browser is the primary interface and the Web is the world’s largest hard drive, but there are other factors at work as well. I list three others here: 

The bandwidth bottleneck has moved innovation to the server. 

A 500 MHz chip with a 100 MHz local bus is simply too fast for almost any home use of a computer. Multimedia is the videophone of the 90s, popular among manufacturers but not among consumers, who want a computer primarily for word processing, email, and Web use. The principal scarcity is no longer clock speed but bandwidth, and the industry is stuck at about 50 kilobits per second, where it will stay for at least the next 24 months. ADSL and cable modems will simply not take the pressure off the bandwidth to the home in time to save the PC – the action has moved from the local machine to the network, and all the innovation is happening on the server, where new “applications you log into” are being unveiled on the Web daily. By the time bandwidth into the home is fast enough to require even today’s processor speeds, the reign of the PC will be over.
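
To put rough numbers on that mismatch, here is a back-of-the-envelope sketch in Python; the 5 MB file size is an illustrative assumption, not a figure from the article.

    # Rough arithmetic behind the bandwidth bottleneck: a ~50 kilobit/s link
    # versus a 500 MHz processor. The 5 MB file size is a hypothetical example.
    link_speed_bps = 50_000                  # about 50 kilobits per second
    file_size_bytes = 5 * 1024 * 1024        # a 5 MB multimedia file

    download_seconds = file_size_bytes * 8 / link_speed_bps
    print(f"Download time: {download_seconds / 60:.0f} minutes")      # ~14 minutes

    # Cycles a 500 MHz chip runs through while waiting on that download:
    idle_cycles = 500_000_000 * download_seconds
    print(f"CPU cycles elapsed in the meantime: {idle_cycles:.1e}")   # ~4.2e+11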

Flat screens are the DRAM of the next decade. 

As CPU prices have fallen, the price of the monitor has become a larger and larger part of the total package. A $400 monitor is not such a big deal if a computer costs $2500, but for a computer that costs $400 it doubles the price. In 1998, flat screens have finally reached the home market. Since flat screens are made of transistors, their costs will fall the way chip costs do, and they will finally join the CPU, RAM, and storage in delivering increasing performance for decreasing cost year over year. By the middle of the next decade, flat screen prices per square inch will be as commodified as DRAM prices per megabyte are now.

Linux and Open Source software. 

It’s hard to compete with free. Linux, the free Unix-like OS, makes the additional cost of an operating system zero, which opens up the US market for PCs (currently around 40% of the population) to a much larger segment of the public. Furthermore, since Linux can run all the basic computing apps (these days, there are actually only two, a word processor and a web browser, both of which have Open Source versions) on the 80486 architecture, it resuscitates a whole generation of previously obsolete equipment from the scrap heap. If a free operating system running free software on a 5-year-old computer can do everything the average user needs, it shifts the pressure on the computer industry from performance to price.

The “Personal” is Removed from the Personal Computer.

Even if the first generation of free computers are built on PC chassis, they won’t be PCs. Unlike a “personal” computer, with its assumption of local ownership of both applications and data, these machines will be network clients, made to connect to the Web and run distributed applications. As these functions come to replace the local software, new interfaces will be invented based more on the browser than on the desktop metaphor, and as time goes on, even these devices will share the stage with other networking clients, such as PDAs, telephones, set-top boxes, and even toasters. 

Who will be in the vanguard of distributing the first free computers? The obvious candidates are organizations with high fixed costs tied up in infrastructure, and with high marketing costs but low technological costs for acquiring a customer. This means that any business that earns monthly or annual income from its clients and is susceptible to competition can give away a computer as part of a package deal for a long-term service contract, simultaneously increasing its potential pool of customers and securing their loyalty.

If AOL knew it could keep a customer from leaving the service, it would happily give away a computer to get that customer. If having a customer switch to electronic banking lowered service costs enough, Citibank could give away a computer to anyone who moved their account online, and so on, through stock brokerages, credit card companies and colleges. Anyone with an interest in moving its customers online, and in keeping them once they are there, will start to think about taking advantage of cheap computers and free operating systems, and these machines, free to the user, will change the complexion of the network population.

Inasmuch as those of us who were watching for the rise of network computing were betting on the rise of NCs as hardware, we were dead wrong. In retrospect, it is obvious that the NC was just a doppelganger of the PC with no hard drive. The real radical shift we are seeing is that there is no one hardware model coming next, that you can have network computing without needing a thing called a “network computer”. The PC is falling victim to its own successes, as its ability to offer more speed for less money is about to cause a divide by zero error.

A computer can’t get cheaper than free, and once we get to free, computer ownership will expand outwards from people who can afford a computer to include people who have bank accounts, or people who have telephones, and finally to include everyone who has a TV. I won’t predict what new uses these new groups of people will put their computers to, but I’m sure that the results will be as surprising to us as workstations were to Thomas Watson or PCs were to Ken Olsen.

Help, The Price of Information Has Fallen And It Can’t Get Up

Among people who publish what is rather deprecatingly called ‘content’ on the Internet, there has been an oft-repeated refrain which runs thusly:

‘Users will eventually pay for content.’

or sometimes, more petulantly,

‘Users will eventually have to pay for content.’

It seems worth noting that the people who think this are wrong.

The price of information has not only gone into free fall in the last few years, it is still in free fall now, it will continue to fall for a long time before it hits bottom, and when it does, whole categories of currently lucrative businesses will be either transfigured beyond recognition or completely wiped out, and there is nothing anyone can do about it.

ECONOMICS 101

The basic assumption behind the fond hope for direct user fees for content is a simple theory of pricing, sometimes called ‘cost plus’, where the price of any given thing is determined by figuring out its cost to produce and distribute, and then adding some profit margin. The profit margin for your groceries is in the 1-2% range, while the margin for diamonds is often greater than the original cost, i.e., greater than 100%.

Using this theory, the value of information distributed online could theoretically be derived by deducting the costs of production and distribution of the physical objects (books, newspapers, CD-ROMs) from the final cost and reapplying the profit margin. If paying writers and editors for a book manuscript incurs 50% of the costs, and printing and distributing it makes up the other 50%, then offering the book as downloadable electronic text should theoretically cut 50% (but only 50%) of the cost.

If that book enjoys the same profit margins in its electronic version as in its physical version, then the overall profits will also be cut 50%, but this should (again, theoretically) still be enough profit to act as an incentive, since one could now produce two books for the same cost.
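
As a minimal sketch of that ‘cost plus’ arithmetic, here is the calculation in Python; the $10 production cost and 20% margin are illustrative assumptions, not figures from the article.

    # 'Cost plus' pricing: price = cost * (1 + margin).
    # The $10 cost and 20% margin are hypothetical, chosen only to
    # illustrate the 50/50 split described above.
    def cost_plus_price(cost, margin):
        return cost * (1 + margin)

    margin = 0.20
    physical_cost = 10.00                  # 50% manuscript, 50% printing/distribution
    electronic_cost = physical_cost / 2    # drop printing and distribution

    physical_profit = cost_plus_price(physical_cost, margin) - physical_cost        # $2 per book
    electronic_profit = cost_plus_price(electronic_cost, margin) - electronic_cost  # $1 per book

    # Profit per title is halved, but the same $10 now funds two electronic
    # titles, so total profit is (theoretically) unchanged.
    print(round(physical_profit, 2), round(2 * electronic_profit, 2))    # 2.0 2.0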

ECONOMICS 201

So what’s wrong with that theory? Why isn’t the price of the online version of your hometown newspaper equal to the cover price of the physical product minus the incremental costs of production and distribution? Why can’t you download the latest Tom Clancy novel for $8.97?

Remember the law of supply and demand? While there are many economic conditions which defy this old saw, its basic precepts are worth remembering. Prices rise when demand outstrips supply, even if both are falling. Prices fall when supply outstrips demand, even if both are rising. This second state describes the network perfectly, since the supply of material on the Web is growing even faster than the number of new users.

From the point of view of our hapless, hopeful ‘content provider’, waiting for the largesse of beneficent users, the primary benefits of the network come in the form of cost savings on storage and distribution, and in access to users worldwide. From that vantage point, using the network is (or ought to be) an enormous plus, a way of cutting costs.

This desire on the part of publishers of various stripes to cut costs by offering their wares over the network misconstrues what their readers are paying for. Much of what people are rewarding businesses for when they pay for ‘content’, even if they don’t recognize it, is not merely creating the content but producing and distributing it. Transporting dictionaries or magazines or weekly shoppers is hard work, and requires a significant investment. People are also paying for proximity, since the willingness of the producer to move newspapers 15 miles and books 1500 miles means that users only have to travel 15 feet to get a paper on their doorstep and 15 miles to get a book in the store.

Because of these difficulties in overcoming geography, there is some small upper limit to the number of players who can successfully make a business out of anything which requires such a distribution network. This in turn means that this small group (magazine publishers, bookstores, retail software outlets, etc.) can command relatively high profit margins.

ECONOMICS 100101101

The network changes all of that, in ways ill-understood by many traditional publishers. Now that the cost of being a global publisher has dropped to an up-front investment of $1000 and a monthly fee of $19.95 (and those charges are half of what they were a year ago and twice what they will be a year from now), being able to offer your product more cheaply around the world offers no competitive edge, given that everyone else in the world, even people and organizations who were not formerly your competitors, can now effortlessly reach people in your geographic locale as well.

To take newspapers as a test case, there is a delicate equilibrium between profitability and geography in the newspaper business. Most newspapers determine what regions they cover by finding (whether theoretically or experimentally) the geographic perimeter where the cost of trucking the newspaper outweighs the willingness of the residents to pay for it. Over the decades, the US has settled into a patchwork of local and regional newspapers with abutting borders.

The Internet destroys any cost associated with geographic distribution, which means that even though each individual paper can now reach a much wider theoretical audience, the competition for all papers also increases by orders of magnitude. This greatly increased competition means that anyone who can figure out how to deliver a product to the consumer for free (usually by paying the writers and producers from advertising revenues instead of direct user fees, as network television does) will have a huge advantage over its competitors.

IT’S HARD TO COMPETE WITH FREE.

To see how this would work, consider these three thought experiments showing how the cost to users of formerly expensive products can fall to zero, permanently.

  • Greeting Cards

Greeting card companies have a nominal product, a piece of folded paper with some combination of words and pictures on it. In reality, however, the greeting card business is mostly a service industry, where the service being sold is convenience. If greeting card companies kept all the cards in a central warehouse, and people needing to send a card had to order it days in advance, sales would plummet. The real selling point of greeting cards is immediate availability – they’re on every street corner and in every mall.

Considered in this light, it is clear that the network destroys any advantage based on convenience, since all Web sites are equally convenient (or inconvenient, depending on bandwidth) to get to. This ubiquity is a product of the network itself, so the value of an online ‘card’ is a fraction of its offline value. Likewise, since the cost of linking words and images has left the world of paper and ink for the altogether cheaper arena of HTML, all the greeting card sites on the Web offer their product for free, whether as a community service, as with the original MIT greeting card site, or as a free service to their users to encourage loyalty and get attention, as many magazine publishers now do.

Once a product has entered the world of the freebies used to sell boxes of cereal, it will never become a direct source of user fees again.

  • Classified Ads

Newspapers make an enormous proportion of their revenues on classified ads, for everything from baby clothes to used cars to rare coins. This is partly because the lack of serious competition in their geographic area allows them to charge relatively high prices. However, this arrangement is something of a kludge, since the things being sold have a much more intricate relationship to geography than newspapers do.

You might drive three miles to buy used baby clothes, thirty for a used car and sixty for rare coins. Thus, in the economically ideal classified ad scheme, all sellers would use one single classified database nationwide, and buyers would simply limit their searches by area. This would maximize the choice available to buyers and the prices sellers could command. It would also destroy a huge source of newspaper revenue.

This is happening now. Search engines like Yahoo and Lycos, the agora of the Web, are now offering classified ads as a service to get people to use their sites more. Unlike offline classified ads, however, the service is free to both buyer and seller, since the sites are competing with one another for differentiators in their battle to survive, and are extracting advertising revenue (on the order of one-half of one cent) every time a page on their site is viewed.

When a product can be profitable on gross revenues of one-half of one cent per use, anyone deriving income from traditional classifieds is doomed in the long run.
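
As a quick sanity check on that half-cent figure, here is a back-of-the-envelope comparison in Python; the $20 price for a traditional print classified is an illustrative assumption, not a number from the article.

    # How many half-cent page views does it take to replace one print classified?
    # The $20 print price is hypothetical, chosen only for illustration.
    revenue_per_page_view = 0.005        # one-half of one cent per page viewed
    print_classified_price = 20.00       # hypothetical cost of a newspaper classified

    views_needed = print_classified_price / revenue_per_page_view
    print(f"{views_needed:,.0f} page views match one $20 print classified")   # 4,000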

  • Real-time stock quotes

Real-time stock quotes, like the ‘ticker’ you often see running along the bottom of financial TV shows, used to cost a few hundred dollars a month when sold directly. However, much of that money went to maintaining the infrastructure necessary to get the data from point A, the stock exchange, to point B, you. When that data is sent over the Internet, the cost of that same trip falls to very near zero for both producer and consumer.

As with classified ads, once this cost is reduced, it is comparatively easy for online financial services to offer this formerly expensive service as a freebie, in the hopes that it will help them either acquire or retain customers. In less than two years, the price to the consumer has fallen from thousands of dollars annually to all but free, never to rise again.

There is an added twist with stock quotes, however. In the market, information is only valuable as a delta between what you know and what other people know – a piece of financial information which everyone knows is worthless, since the market has already accounted for it in the current prices. Thus, in addition to making real time financial data cost less to deliver, the Internet also makes it _worth_ less to have.

TIME AIN’T MONEY IF ALL YOU’VE GOT IS TIME

This last transformation is something of a conundrum – one of the principal effects of the much-touted ‘Information Economy’ is actually to devalue information more swiftly and more thoroughly than ever before. Information is only power if it is hard to find and easy to hold, but in an arena where it is as fluid as water, value now has to come from elsewhere.

The Internet wipes out both the difficulty and the expense of geographic barriers to distribution, and it does so for individuals and multinational corporations alike. “Content as product” is giving way to “content as service”, where users won’t pay for the object itself but will pay for its manipulation (editorial imprimatur, instant delivery, custom editing, filtering by relevance, and so on). In my next column, I will talk about what the rising fluidity and falling cost of pure information mean for the networked economy, and how value can be derived from content when traditional pricing models have collapsed.