The Divide by Zero Era: The Arrival of the Free Computer

First published in ACM, 12/98.

"I think there's a world market for about five computers."
-- attr. Thomas J. Watson (IBM Chairman), 1945

"There is is no reason for any individual to have a computer in their home."
-- Ken Olsen (President of Digital Equipment
Corporation), Convention of the World Future
Society, 1977

"The PC model will prove itself once again."
-- Bill Gates (CEO of Microsoft),
COMDEX, 1998

As with Thomas Kuhn’s famous idea of “paradigm shifts” in “The Structure of Scientific Revolutions”, the computing industry has two basic modes – improvements within a given architecture, a kind of “normal computing”, punctuated by introductions of new architectures or ideas, or “radical computing”. Once a radical shift has been completed, the industry reverts to “normal computing”, albeit organized around the new norm, and once that new kind of computing becomes better understood, it creates (or reveals) a new set of problems, problems which will someday require yet another radical shift. 

The mainframe is introduced, then undergoes years of steady and rapid improvement. At the height of its powers, the mainframe solves all the problems within its domain, but gives rise to a new set of problems (problems of distributed computing, in this case), which couldn’t even have been discovered without the mainframe solving the previous generation of problems. These new problems do not respond to the kinds of things the mainframe does well, the community casts about for new solutions, and slowly, the client-server model (in this case) is born. 

This simplified view overstates the case, of course; in the real world these changes only appear neat and obvious after the fact. However, in spite of this muddiness, the most visible quality of such a shift is that it can be identified not just by changes in architecture, but by changes in user base. Put another way, real computing revolutions take place not when people are introduced to new kinds of computers but when computers are introduced to new kinds of people. 

Failure to understand the timing of these radical shifts is the underlying error made by all three of the computer executives quoted at the beginning of this article. Each of them dominated the computing technology of their day, and each of them failed to see that “the computer” as they understood it (and manufactured it) was too limited to be an all-inclusive solution. In particular, they all approached their contemporary computing environment as if it were the last computing environment, but in computing ecology, there is no “last”, there is just “next”. 

Scarcity Drives Strategy

What characterizes any computing era is scarcity. In the earliest days, everything was scarce, and building a computer was such a Herculean effort that had the price of hardware not plummeted, and expertise in building computers not skyrocketed, Watson’s estimate of 5 computers would have made sense. However, price did plummet, expertise did skyrocket, and computers were introduced to a new class of businesses in the 60s and 70s that could never have imagined owning a computer in earlier decades. 

This era of business expansion was still going strong in the late 70s, when Olsen made his famous prediction. The scarcity at the time was processing power – well into the 80s, there were systems that billed each user for every second of CPU time consumed. In this environment, it was impossible to imagine home computing. Such a thing would assume that CPU cycles would become so abundant as to be free, an absurdity from Ken Olsen’s point of view, which is why the home computer took DEC by surprise. 

We are now in the second decade of the “PC Model”, which took advantage of the falling cost of CPU cycles to create distributed computing through the spread of stand-alone boxes. After such a long time of living with the PC, we can see the problems it can’t solve – it centralizes too much that should be distributed (most cycles are wasted on most PCs most of the time) and it distributes too much that should be centralized (a company contact list should be maintained and updated centrally, not given out in multiple copies, one per PC). Furthermore, the “Zero Maintenance Cost” hardware solutions that are being proposed – essentially pulling the hard drive out of a PC to make it an NC – are too little, too late. 

Divide by Zero

A computing era ends – computers break out of normal mode and spread outwards to new groups of users – when the previous era’s scarcity disappears. At that point the old era’s calculations run into a divide by zero error – calculating “CPU cycles per dollar” becomes pointless when CPU cycles are abundant enough to be free, charging users per kilobyte of storage becomes pointless when the ordinary unit of drive storage is the gigabyte, and so on. 

I believe that today we are seeing the end of the PC era because of another divide-by-zero error: many people today wrongly assume that in the future you will continue to be able to charge money for computers. In the near future, the price of computers will fall to free, which will in turn open computer use to an enormously expanded population. 

You can already see the traditional trade press struggling with these changes, since many of them still write about the “sub $1000 PC” at a time when there are several popular sub _$500_ PCs on offer, and no end to the price cutting in sight. The emergence of free computers will be driven not just by falling costs on the supply side, but by financial advantages on the demand side – businesses will begin to treat computers as a service instead of a product, in the same way that mobile phone services give away the phone itself in order to sell a service plan. In the same way that falling prices have democratized the mobile phone industry, free computers will open the net to people whose incomes don’t include an extra $2000 of disposable income every two years. 

There are many factors going into making this shift away from the PC model possible – the obvious one is that for a significant and growing percentage of PC users, the browser is the primary interface and the Web is the world’s largest hard drive, but there are other factors at work as well. I list three others here: 

The bandwidth bottleneck has moved innovation to the server. 

A 500 MHz chip with a 100 MHz local bus is simply too fast for almost any home use of a computer. Multimedia is the videophone of the 90’s, popular among manufacturers but not among consumers, who want a computer primarily for word processing, email, and Web use. The principal scarcity is no longer clock speed but bandwidth, and the industry is stuck at about 50 kilobits, where it will stay for at least the next 24 months. ADSL and cable modems will simply not take the pressure off the bandwidth to the home in time to save the PC – the action has moved from the local machine to the network, and all the innovation is happening on the server, where new “applications you log into” are being unveiled on the Web daily. By the time bandwidth into the home is fast enough to require even today’s processor speeds, the reign of the PC will be over. 

Flat screens are the DRAM of the next decade. 

As CPU prices have fallen, the price of the monitor has become a larger and larger part of the total package. A $400 monitor is not such a big deal if a computer costs $2500, but for a computer that costs $400 it doubles the price. In 1998, flat screens have finally reached the home market. Since flat screens are made of transistors, their costs will fall the way chip costs do, and they will finally join CPU, RAM, and storage in delivering increasing performance for decreasing cost year over year. By the middle of the next decade, flat screen prices per square inch will be as commodified as DRAM prices per megabyte are now. 

Linux and Open Source software. 

It’s hard to compete with free. Linux, the free Unix-like OS, makes the additional cost of an operating system zero, which opens the PC up to a much greater segment of the US population than the roughly 40% who currently own one. Furthermore, since Linux can run all the basic computing apps (these days, there are really only two, a word processor and a web browser, both of which have Open Source versions) on 80486-class hardware, it rescues a whole generation of previously obsolete equipment from the scrap heap. If a free operating system running free software on a 5-year-old computer can do everything the average user needs, it shifts the pressure on the computer industry from performance to price. 

The “Personal” is Removed from the Personal Computer.

Even if the first generation of free computers are built on PC chassis, they won’t be PCs. Unlike a “personal” computer, with its assumption of local ownership of both applications and data, these machines will be network clients, made to connect to the Web and run distributed applications. As these functions come to replace the local software, new interfaces will be invented based more on the browser than on the desktop metaphor, and as time goes on, even these devices will share the stage with other networking clients, such as PDAs, telephones, set-top boxes, and even toasters. 

Who will be in the vanguard of distributing the first free computers? The obvious candidates are organizations with high fixed costs tied up in infrastructure, and high marketing costs but low technological costs for acquiring a customer. This means that any business that earns monthly or annual income from its clients and is susceptible to competition can give away a computer as part of a package deal for a long-term service contract, simultaneously increasing the potential pool of customers and locking in loyal ones. 

If AOL knew it could keep a customer from leaving the service, they would happily give away a computer to get that customer. If it lowered service costs enough to have someone switch to electronic banking, Citibank could give away a computer to anyone who moved their account online, and so on, through stock brokerages, credit card companies and colleges. Anyone with an interest in moving its customers online, and in keeping them once they are there, will start to think about taking advantage of cheap computers and free operating systems, and these machines, free to the user, will change the complexion of the network population. 

Inasmuch as those of us who were watching for the rise of network computing were betting on the rise of NCs as hardware, we were dead wrong. In retrospect, it is obvious that the NC was just a doppelganger of the PC with no hard drive. The real radical shift we are seeing is that there is no one hardware model coming next, that you can have network computing without needing a thing called a “network computer”. The PC is falling victim to its own successes, as its ability to offer more speed for less money is about to cause a divide by zero error. 

A computer can’t get cheaper than free, and once we get to free, computer ownership will expand outwards from people who can afford a computer to include people who have bank accounts, or people who have telephones, and finally to include everyone who has a TV. I won’t predict what new uses these new groups of people will put their computers to, but I’m sure that the results will be as surprising to us as workstations were to Thomas Watson or PCs were to Ken Olsen.

The Icarus Effect

First published in ACM, 11/97.

A funny thing happened on the way from the router. 

We are all such good students of Moore’s Law, the notion that processor speeds will double every year and a half or so, that in any digital arena we have come to treat it as our ‘c’, our measure of maximum speed. Moore’s Law, the most famously accurate prediction in the history of computer science, is treated as a kind of inviolable upper limit: the implicit idea is that since nothing can grow faster than chip speed, and chip speed is doubling every 18 months, that necessarily sets the pace for everything else we do. 

Paralleling Moore’s Law is the almost equally rapid increase in storage density, with the amount of data accessible on any square inch of media growing by a similar amount. These twin effects are constantly referenced in ‘gee-whiz’ articles in the computer press: “Why, just 7 minutes ago, a 14 MHz chip with a 22K disk cost eleventy-seven thousand dollars, and now look! 333 MHz and a 9 gig drive for $39.95!” 

All this breath-taking periodic doubling makes these measurements into a kind of physics of our world, where clock speeds and disk densities become our speed of light and our gravity – the boundaries that determine the behavior of everything else. All this is well and good for stand-alone computers, but once you network them, a funny thing happens on the way from the router: this version of the speed of light is exceeded, and from a most improbable quarter. 

It isn’t another engineering benchmark that is outstripping the work at Intel and IBM; it’s the thing that often gets the shortest shrift in the world of computer science – the users of the network. 

Chip speeds and disk densities may be doubling every 18 months, but network population is doubling roughly annually, half again as fast as either of those physical measurements. Network traffic, measured in packets, is doubling semi-annually (last year MAE-East, a major East Coast Internet interconnect point, was seeing twice the load every 4 months, an 8-fold annualized increase). 
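
To make the mismatch concrete, here is a minimal sketch in Python (the doubling periods are the ones quoted above; the conversion to annual growth is mine: a quantity that doubles every d months grows by 2^(12/d) per year):

    # Annualized growth implied by the doubling periods quoted above.
    def annual_factor(doubling_months: float) -> float:
        # doubling every d months means growing by 2**(12/d) per year
        return 2 ** (12 / doubling_months)

    print(f"chips and disks (18-month doubling): {annual_factor(18):.2f}x per year")
    print(f"network population (12-month doubling): {annual_factor(12):.2f}x per year")
    print(f"MAE-East traffic (4-month doubling): {annual_factor(4):.0f}x per year")

The output – roughly 1.6x, 2x, and 8x per year – is the gap the rest of this piece is about.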

There have always been internal pressures for better, faster computers – weather modelling programs and 3-D rendering, to name just two, can always consume more speed, more RAM, more disk – but the Internet, and particularly the Web and its multimedia cousins, Java applications and streaming media, present the first external pressure on computers, one where Moore’s Law simply can’t keep up and will never catch up. The network can put more external pressure on individual computers than they can handle, now and for the foreseeable future. 

IF YOU SUCCEED, YOU FAIL. 

This leads to a curious situation on the Internet, where any new service risks the usual failure if there is not enough traffic, but also risks failure if there is too much traffic. In a literal update of Yogi Berra’s complaint about a former favorite hang-out, “Nobody goes there anymore. It’s too crowded”, many of the Web sites covering the 1996 US Presidential election crashed on election night, the time when they would have been most valuable, because so many people thought they were a good idea. We might dub this the ‘Icarus Effect’ – fly too high and you crash. 

What makes this ‘Icarus Effect’ more than just an engineering oversight is the relentless upward pressure on both population and traffic – given the same scenario in the 2000 election, computers will be roughly 8 times better equipped to handle the same traffic, but they will be asked to handle roughly 16 times the traffic. (More traffic than that even, much more, if the rise in number of users is accompanied by the same rise in time spent on the net by each user that we’re seeing today.) 

This is obviously an untenable situation – computing limits can’t be allowed to force entrepreneurs and engineers to hope for only middling success, and yet everywhere I go, I see companies exercising caution whenever they are contemplating making any moves which will increase traffic, even if that would make for a better site or service. 

FIRST, FIX THE PROBLEM. NEXT, EMBRACE FAILURE. 

We know what happens when the need for computing power outstrips current technology – it’s a two-step process, which first beefs up the current offering by improving performance and fighting off failure, and then, when that line of development hits a wall (as it inevitably does), embraces the imperfection of individual parts and adopts parallel development to fill the gap. 

Ten years ago, Wall Street had a problem similar to the Web’s today, except it wasn’t web sites and traffic, it was data and disk failure. When you’re moving trillions of dollars around the world in real time, a disk drive dying can be a catastrophic loss, and a backup that can get online ‘in a few hours’ does little to soften the blow. The first solution is to buy bigger and better disk drives, moving the Mean Time Between Failure from, say, 10,000 hours to 30,000 hours. This is certainly better, but in the end, the result is simply spreading the pain of catastrophic failure over a longer average period of time. When the failure does come, it is the same catastrophe as before. 

Even more disheartening, the price/performance curve is exponential, putting the necessary order-of-magnitude improvements out of reach. It would cost far more to go from a 30,000-hour MTBF to 90,000 hours than it did to go from 10,000 to 30,000, and going from 90,000 to 270,000 would be unthinkably expensive. 

Enter the RAID, the redundant array of inexpensive disks. Instead of hoping for the Platonic ‘ideal disk’, the RAID accepts that each disk is prone to failure, however rare, and simply groups disks together in such a way that the failure of any one of them isn’t catastrophic, because the failed disk’s data is redundantly spread across the remaining disks. As long as a new working disk is put in place of the failed drive, the array can only be killed by two disks failing at precisely the same time, which gives a RAID made of ordinary disks a theoretical MTBF of something like 900 million hours. 
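
Where does a number like 900 million hours come from? Here is a minimal sketch of the back-of-the-envelope math, with my own assumptions filled in where the text only gives the single-disk figure:

    # Rough check of the ~900-million-hour figure quoted above.
    single_mtbf = 30_000  # hours, the "bigger and better" disk from the text

    # Simplest reading: the array only dies if two disks fail at precisely
    # the same time, so the two (independent) failure odds multiply.
    print(f"{single_mtbf ** 2:,} hours")  # 900,000,000

    # A slightly more careful model (my assumption, not the author's): the
    # array dies if a second of its N disks fails before the first failed
    # disk has been replaced, say within a 24-hour swap window.
    def array_mtbf(mtbf: float, n_disks: int, repair_hours: float) -> float:
        return mtbf ** 2 / (n_disks * (n_disks - 1) * repair_hours)

    print(f"{array_mtbf(single_mtbf, 5, 24):,.0f} hours")  # ~1.9 million

Either way, the point stands: redundancy buys orders of magnitude that better individual disks cannot.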

A similar path of development happened with the overtaking of the supercomputer by the parallel processor, where the increasingly baroque designs of single-CPU supercomputers were facing the same uphill climb that building single reliable disks did, and where the notion of networking cheaper, slower CPUs proved a way out of that bottleneck. 

THE WEB HITS THE WALL. 

I believe that with the Web we are now seeing the beginning of one of those uphill curves – there is no way that chip speed and storage density can keep up with the exploding user base, and this problem will not abate in the foreseeable future. Computers, individual computers, are now too small, slow and weak to handle the demands of a popular web site, and the current solution to those demands – buy a bigger computer – simply postpones the day when that solution also fails. 

What I can see the outlines of in current web site development is what might be called a ‘RAIS’ strategy – redundant arrays of inexpensive servers. Just as RAIDs accept the inadequacy of any individual disk, a RAIS would accept that servers crash when overloaded, and that when you are facing 10% more traffic than you can handle, having to buy a much bigger and more expensive server is a lousy solution. RAIS architecture comes much closer to the necessary level of granularity for dealing with network traffic increases. 

If you were to host a Web site on 10 Linux boxes instead of one big commercial Unix server, you could react to a 10% increase in traffic with 10% more server for 10% more money. Furthermore, one server dying would only inconvenience the users who were mid-request on that particular box, and they could restart their work on one of the remaining servers immediately. Contrast this with the current norm, a 100% failure for the full duration of a restart in cases where a site is served by a single server. 
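
The arithmetic behind that claim is simple enough to sketch (the ten-box pool is the scenario above; the exact numbers are only illustrative):

    # Illustrative comparison of a single big server vs. a pool of ten
    # small ones, for capacity growth and for the blast radius of a crash.
    for n in (1, 10):
        step = 1.0 / n      # smallest increment of capacity (and cost)
        impact = 1.0 / n    # share of in-flight requests hit by one crash
        print(f"{n:2d} server(s): capacity grows in {step:.0%} steps, "
              f"one crash interrupts {impact:.0%} of in-flight requests")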

The initial RAISs are here in sites like C|NET and ESPN, where round-robin DNS configurations spread the load across multiple boxes. However, these solutions are just the beginning – their version of redundancy is often simply to mirror copies of the Web server. A true RAIS architecture will spread not only copies of the site, but functionality as well: images, a huge part of network traffic, are ‘read only’ – a server or group of servers optimized to handle only images could serve from WORM drives and keep the most popular images in RAM. Incoming CGI data, on the other hand, can potentially be ‘write only’, simply recorded onto removable media which can be imported into a database at a later date, on another computer, and so on. 
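
As a sketch of what that functional split could look like in front-end code (the pools, addresses, and routing rules here are invented for illustration, not a description of how C|NET or ESPN actually do it):

    import itertools

    # Hypothetical backend pools, one per specialty; round-robin within each.
    POOLS = {
        "image": itertools.cycle(["10.0.0.11", "10.0.0.12"]),  # read-only image boxes
        "cgi":   itertools.cycle(["10.0.0.21"]),               # write-only form recorder
        "page":  itertools.cycle(["10.0.0.31", "10.0.0.32"]),  # ordinary page servers
    }

    def route(path: str) -> str:
        """Pick a backend for a request based on what kind of work it is."""
        if path.endswith((".gif", ".jpg")):
            kind = "image"
        elif path.startswith("/cgi-bin/"):
            kind = "cgi"
        else:
            kind = "page"
        return next(POOLS[kind])

    for p in ("/index.html", "/logo.gif", "/cgi-bin/register", "/scores.html"):
        print(p, "->", route(p))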

This kind of development will ultimately dissolve the notion of discrete net servers, and will lead to server networks, where an individual network address does not map to a physical computer but rather to a notional source of data. Requests to and from this IP address will actually be handled not by individual computers, whether singly or grouped into clusters of mirroring machines, but by a single-address network, a kind of ecosystem of networked processors, disks and other devices, each optimized for handling certain aspects of the request – database lookups, image serving, redirects, etc. Think of the part of the site that handles database requests as an organ, specialized to its particular task, rather than as a separate organism pressed into that particular service. 

THE CHILD IS FATHER TO THE MAN 

It has long been observed that in the early days of the ARPANet, packet switching started out by piggy-backing on the circuit-switched network, only to overtake it in total traffic (which will happen this year) and almost certainly to subsume it completely within a decade. I believe a similar process is happening to computers themselves: the Internet is the first place where we can see that cumulative user need outstrips the power of individual computers, even taking Moore’s law into account, but it will not be the last. In the early days, computers were turned into networks, with the cumulative power of the net rising with the number of computers added to it. 

In a situation similar to the packet/circuit dichotomy, I believe that we are witnessing another such tipping point, where networks are brought into individual computers, where all computing resources, whether cycles, RAM, storage, whatever, are mediated by a network instead of being bundled into discrete boxes. This may have been the decade where the network was the computer, but in the next decade the computer will be the network, and so will everything else.

Help, The Price of Information Has Fallen And It Can’t Get Up

Among people who publish what is rather deprecatingly called ‘content’ on the Internet, there has been an oft-repeated refrain which runs thusly:

‘Users will eventually pay for content.’

or sometimes, more petulantly,

‘Users will eventually have to pay for content.’

It seems worth noting that the people who think this are wrong.

The price of information has not only gone into free fall in the last few years, it is still in free fall now, and it will keep falling for a long time before it hits bottom. When it does, whole categories of currently lucrative businesses will be either transfigured unrecognizably or completely wiped out, and there is nothing anyone can do about it.

ECONOMICS 101

The basic assumption behind the fond hope for direct user fees for content is a simple theory of pricing, sometimes called ‘cost plus’, where the price of any given thing is determined by figuring out its cost to produce and distribute, and then adding some profit margin. The profit margin for your groceries is in the 1-2% range, while the margin for diamonds is often greater than the original cost, i.e. greater than 100%.

Using this theory, the value of information distributed online could theoretically be derived by deducting the costs of production and distribution of the physical objects (books, newspapers, CD-ROMs) from the final cost and reapplying the profit margin. If paying writers and editors for a book manuscript incurs 50% of the costs, and printing and distributing it makes up the other 50%, then offering the book as downloadable electronic text should theoretically cut 50% (but only 50%) of the cost.

If that book enjoys the same profit margins in its electronic version as in its physical version, then the overall profits will also be cut 50%, but this should (again, theoretically) still be enough profit to act as an incentive, since one could now produce two books for the same cost.
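
A minimal sketch of that cost-plus arithmetic, with made-up round numbers standing in for the 50/50 split described above:

    # Hypothetical per-copy figures; only the 50/50 split comes from the text.
    content_cost = 10.00    # paying writers and editors
    physical_cost = 10.00   # printing and distributing the object
    margin = 0.25           # 25% profit margin, reapplied to both versions

    paper_price = (content_cost + physical_cost) * (1 + margin)
    electronic_price = content_cost * (1 + margin)

    print(f"paper: ${paper_price:.2f}, electronic: ${electronic_price:.2f}")
    # paper: $25.00, electronic: $12.50 - the price and the absolute profit
    # both drop by half, which is the 'theoretical' result the next section
    # argues does not survive contact with the network.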

ECONOMICS 201

So what’s wrong with that theory? Why isn’t the price of the online version of your hometown newspaper equal to the cover price of the physical product minus the incremental costs of production and distribution? Why can’t you download the latest Tom Clancy novel for $8.97?

Remember the law of supply and demand? While there are many economic conditions which defy this old saw, its basic precepts are worth remembering. Prices rise when demand outstrips supply, even if both are falling. Prices fall when supply outstrips demand, even if both are rising. This second state describes the network perfectly, since the Web is growing even faster than the number of new users.

From the point of view of our hapless hopeful ‘content provider’, waiting for the largesse of beneficent users, the primary benefits of the network come in the form of cost savings on storage and distribution, and in access to users worldwide. For them, using the network is (or ought to be) an enormous plus as a way of cutting costs. 

This desire on the part of publishers of various stripes to cut costs by offering their wares over the network misconstrues what their readers are paying for. Much of what people are rewarding businesses for when they pay for ‘content’, even if they don’t recognize it, is not merely creating the content but producing and distributing it. Transporting dictionaries or magazines or weekly shoppers is hard work, and requires a significant investment. People are also paying for proximity, since the willingness of the producer to move newspapers 15 miles and books 1500 miles means that users only have to travel 15 feet to get a paper on their doorstep and 15 miles to get a book in the store.

Because of these difficulties in overcoming geography, there is some small upper limit to the number of players who can successfully make a business out of anything which requires such a distribution network. This in turn means that this small group (magazine publishers, bookstores, retail software outlets, etc.) can command relatively high profit margins.

ECONOMICS 100101101

The network changes all of that, in ways ill-understood by many traditional publishers. Now that the cost of being a global publisher has dropped to an up-front investment of $1000 and a monthly fee of $19.95 (and those charges are half of what they were a year ago and twice what they will be a year from now), being able to offer your product more cheaply around the world offers no competitive edge, given that everyone else in the world, even people and organizations who were not formerly your competitors, can now effortlessly reach people in your geographic locale as well.

To take newspapers as a test case, there is a delicate equilibrium between profitability and geography in the newspaper business. Most newspapers determine what regions they cover by finding (whether theoretically or experimentally) the geographic perimeter where the cost of trucking the newspaper outweighs the willingness of the residents to pay for it. Over the decades, the US has settled into a patchwork of abutting borders of local and regional newspapers.

The Internet destroys any cost associated with geographic distribution, which means that even though each individual paper can now reach a much wider theoretical audience, the competition also increases for all papers by orders of magnitude. This much increased competition means that anyone who can figure out how to deliver a product to the consumer for free (usually by paying the writers and producers from advertising revenues instead of direct user fees, as network television does) will have a huge advantage over its competitors.

IT’S HARD TO COMPETE WITH FREE.

To see how this would work, consider these three thought experiments showing how the cost to users of formerly expensive products can fall to zero, permanently.

  • Greeting Cards

Greeting card companies have a nominal product, a piece of folded paper with some combination of words and pictures on it. In reality, however, the greeting card business is mostly a service industry, where the service being sold is convenience. If greeting card companies kept all the cards in a central warehouse, and people needing to send a card had to order it days in advance, sales would plummet. The real selling point of greeting cards is immediate availability – they’re on every street corner and in every mall.

Considered in this light, it is easy to see how the network destroys any issue of convenience, since all Web sites are equally convenient (or inconvenient, depending on bandwidth) to get to. This ubiquity is a product of the network, so the value of an online ‘card’ is a fraction of its offline value. Likewise, since the cost of linking words and images has left the world of paper and ink for the altogether cheaper arena of HTML, all the greeting card sites on the Web offer their product for free, whether as a community service, as with the original MIT greeting card site, or as a free service to their users to encourage loyalty and get attention, as many magazine publishers now do.

Once a product has entered the world of the freebies used to sell boxes of cereal, it will never become a direct source of user fees again.

  • Classified Ads

Newspapers make an enormous proportion of their revenues on classified ads, for everything from baby clothes to used cars to rare coins. This is partly because the lack of serious competition in their geographic area allows them to charge relatively high prices. However, this arrangement is something of a kludge, since the things being sold have a much more intricate relationship to geography than newspapers do.

You might drive three miles to buy used baby clothes, thirty for a used car and sixty for rare coins. Thus, in the economically ideal classified ad scheme, all sellers would use one single classified database nationwide, and buyers would simply limit their searches by area. This would maximize both the choice available to the buyers and the prices the sellers could command. It would also destroy a huge source of newspaper revenue.

This is happening now. Search engines like Yahoo and Lycos, the agora of the Web, are now offering classified ads as a service to get people to use their sites more. Unlike offline classified ads, however, the service is free to both buyer and seller, since the sites are competing with one another for differentiating features in their battle to survive, and are extracting advertising revenue (on the order of one-half of one cent) every time a page on their site is viewed.

When a product can be profitable on gross revenues of one-half of one cent per use, anyone deriving income from traditional classifieds is doomed in the long run.

  • Real-time stock quotes

Real time stock quotes, like the ‘ticker’ you often see running at the bottom of financial TV shows, used to cost a few hundred dollars a month, when sold directly. However, much of that money went to maintaining the infrastructure necessary to get the data from point A, the stock exchange, to point B, you. When that data is sent over the Internet, the costs of that same trip fall to very near zero for both producer and consumer.

As with classified ads, once this cost is reduced, it is comparatively easy for online financial services to offer this formerly expensive service as a freebie, in the hopes that it will help them either acquire or retain customers. In less than two years, the price to the consumer has fallen from thousands of dollars annually to all but free, never to rise again.

There is an added twist with stock quotes, however. In the market, information is only valuable as a delta between what you know and what other people know – a piece of financial information which everyone knows is worthless, since the market has already accounted for it in the current prices. Thus, in addition to making real time financial data cost less to deliver, the Internet also makes it _worth_ less to have.

TIME AIN’T MONEY IF ALL YOU’VE GOT IS TIME

This last transformation is something of a conundrum – one of the principal effects of the much-touted ‘Information Economy’ is actually to devalue information more swiftly and more fully. Information is only power if it is hard to find and easy to hold, but in an arena where it is as fluid as water, value now has to come from elsewhere. 

The Internet wipes out both the difficulty and the expense of geographic barriers to distribution, and it does so for individuals and multi-national corporations alike. “Content as product” is giving way to “content as service”, where users won’t pay for the object but will pay for its manipulation (editorial imprimatur, instant delivery, custom editing, filtering by relevance, and so on). In my next column, I will talk about what the rising fluidity and falling cost of pure information means for the networked economy, and how value can be derived from content when traditional pricing models have collapsed.

Don’t Believe the Hype

First published in ACM, 10/96.

One of my responsibilities as CTO of a New York new media agency is to evaluate new networking technologies – new browsers, new tools, new business models – in order to determine where my company should be focussing its energies in the distant future. (Like May.) The good part of that job is free t-shirts. The bad part is having to read the press releases in order to figure out what’s being offered: 

“Our company has a totally unique and unprecedentedly original proprietary product which will transform the Internet as we know it. If you don’t get in on the ground floor, you will be tossed into the nether darkness of network anachronism where demons will feast on your flesh. Expected out of alpha in the mumbleth quarter of next year, our product, Net:Web-Matrix@Thing, will …” 

Faced with the growing pile of these jewels of clarity on my desk, I realized that I could spend all my time trying to evaluate new technologies and still only be making (barely) educated guesses. What I needed was a set of principles to use to sort the wheat from the chaff, principles which take more account of what users on the network actually spend their time doing instead of what new media marketers want them to be doing. So, without further ado, I present my modest little list, “Clay Shirky’s Five Rules For Figuring Out When Networking Marketers Are Blowing Smoke.” 

RULE #1: Don’t Believe the Hype. 

This could also be called the “Everything New is Old Again” rule. The network we know and love is decades old, and the research which made it possible is older still. Despite the cult of newness surrounding the Internet, the Web, and all their cousins right now, the engines which drive the network change very slowly. Only the dashboards change quickly. 

Changing the way networking works is a complex business. Events which alter everything which comes after them (a paradigm shift, to quote Thomas Kuhn) are few and far between. The invention of DARPANet was a paradigm shift. The deployment of TCP/IP was a paradigm shift. The original NCSA Mosaic may even turn out to be a paradigm shift. The launch of Netscape Navigator 4.0 for Macintosh will not be a paradigm shift, no matter how many t-shirts they give out. 

Right now, the only two big ideas in networking are the use of a browser as an interface to the network, and the use of distributable object-oriented programs on virtual machines. Everything else is a detail, either a refinement of one of those two ideas (the much-hyped Active Desktop is a browser that looks at the local disk as well) or a refinement of something that has gone before (the Network Computer is Sun’s “diskless workstation” ten years later). We will spend the next few years just digesting those two big ideas – everyone else, no matter what their press releases say, is just helping us color inside the lines. 

RULE #2: Trust the “Would I use it?” Test. 

The first question to ask yourself when looking at new technology is not “Will it run on my server?” but rather “Would I use it?” 

I once saw a demo of a product whose sole function was to accompany the user around the Web, showing them extra ads while taking up more screen space. The sales rep was taking great pains to point out that if you charged 2 cents an ad, you could collect $20 per thousand ads served. While this certainly comported with what I remember about multiplication, I was much more interested to know why he thought people would tolerate this intrusion. He never got around to explaining that part, and I considered it rude to ask. 

We already know what people using networks want: they want to do what they do now, only cheaper, or faster, or both. They want to do more interesting stuff than they do now, for the same amount of money. They strongly prefer open systems to closed ones. They strongly prefer open standards to proprietary ones. They will accept ads if that’s what pays for interesting stuff. They want to play games, look at people in various states of undress, read the news, follow sports scores or the weather, and most of all they want to communicate with one another. 

Beware any product which claims that people would prefer information to communication, any service which claims that people will choose closed systems over open ones, and any protocol which claims that people will tolerate incompatibility to gain features. Any idea for a networking service which does not satisfy some basic desire of network users is doomed to fail. 

RULE #3: Don’t Confuse Their Ideas With Your Ideas. 

A time-honored marketing tool for sidestepping criticism is to make ridiculous assertions with enough confidence that people come to believe them. The best possible antidote for this is to simply keep a running rebuttal going in your head while someone is telling you why network users are going to flock to their particular product. 

What follows is a list of things I have actually heard marketers say to me in all seriousness. Anyone saying anything like this is not to be trusted with anything more technological than a garage door opener: 

“The way we figure it, why _wouldn’t_ users adopt our [product]?” (Same reason they don’t use Lotus Notes.) 

“We’re a year ahead of the competition.” (Nope.) 

“The way we figure it, why _wouldn’t_ users be willing to register a name and password with us?” (Because they already can’t remember their other 17 8-character-mix-of-letters-and-numbers passwords.) 

“We don’t have any competition.” (Then you don’t have a market.) 

“The way we figure it, why _wouldn’t_ users subscribe to [our service]?” (Because they can get something better for free elsewhere.) 

“If you get everyone in your organization to adopt our product all at once, then you won’t need compatibility with your current tools.” (That’s the way Communism was supposed to work, too.) 

RULE #4: Information Wants to be Cheap. 

Information has been decoupled from objects. Forever. Newspaper companies are having to learn to separate news from paper; CD-ROM companies are having to choose between being multi-media producers and vendors of plastic circles. Like the rise of written text displacing the oral tradition, the separation of data from its containers will never be reversed. 

With the Internet, the incremental cost of storing and distributing information is zero: once a Web server is up, serving 10,000 pages costs no more than serving 1000 pages. Network users recognize this, even if they can’t articulate it directly, and behave accordingly. 

Anyone who is basing their plans on the willingness of network users to pay for electronic information in the same way that they now pay for physical objects like CDs and books will fail. Many Web-based services have tried to get users to pay for pretend scarcity, as if downloading information should be treated no differently from printing, shipping, storing, displaying and selling physical objects offline. People will pay for point of view, timely information, or the imprimatur of expertise. They will not pay for simply repackaging information from other sources (they can now do that themselves) or for the inefficient business practices bred by the costs of packaging. 

RULE #5: It’s the Economy, Stupid. 

The future of the network will be driven by the economy – not the economy of dollars and cents, but the economy of millions of users acting to maximize their preferences. 

Technology is not the network, technology is just how the network works. The network itself is made up of the people who use it, and because of its ability to instantly link up people separated by distance but joined in outlook, it is the best tool for focussing group endeavors the world has ever seen. 

This ability of groups to form or change in a moment creates a fluid economy of personal preference: Individual choices are featherweight when considered alone, but in cumulative effect they are an unstoppable force. Anyone trying to control people’s access to information these days has learned this lesson, often painfully. Prodigy, the Serbian Government, and the Church of Scientology all have this in common – when they decided that they could unilaterally deny people certain kinds of information, they found that the networks made this kind of control impossible. 

The economics of personal preference will shape the network for the foreseeable future. Online services will wither unless they offer some value over what the Internet itself offers. Network phones will proliferate despite the attempts of long-distance companies to stifle them. “Groupware” programs will embrace open standards or die. The economics which will guide the network will be based on what users want to buy, not on what marketers want to sell. 

These rules have been around the world 9 times. Don’t break the chain! If you remember these rules, you will be able to ward off some of the unbearable hype currently surrounding the unveiling of every two-bit Java animation program and Web server add-on, all while keeping on top of the real user-driven forces shaping the network. If, on the other hand, you fail to remember these rules, demons will feast on your flesh.

Cyberporn!!! (So What???)

First published on Urban Desires, 03/96.

So much has been said on the net and off, about Philip Elmer-DeWitt’s Time magazine cover story, “Cyberporn” (July 3, 1995), that I have little to add except this: as bad as the decision to use Marty Rimm’s methodologically worthless study was, the subsequent damage control was worse. Compare these two passages, both written by Elmer-DeWitt after it became clear that an inadequately fact-checked story had come to grace Time’s cover:


Some [network users] clearly believe that Time, by publicizing the Rimm study, was contributing to a mood of popular hysteria, sparked by the Christian Coalition and other radical-right groups, that might lead to a crackdown. It would be a shame, however, if the damaging flaws in Rimm’s study obscured the larger and more important debate about hard-core porn on the Internet.
and
I don’t know how else to say it, so I’ll just repeat what I’ve said before. I screwed up. The cover story was my idea, I pushed for it, and it ran pretty much the way I wrote it. It was my mistake, and my mistake alone. I do hope other reporters will learn from it. I know I have.


The first quote is from a Time followup three weeks after the original story (“Fire Storm on the Computer Nets”), the second is from the Usenet newsgroup alt.culture.internet. This was the worst aspect of the cyberporn debacle – Elmer-DeWitt, living in two worlds, exploited the difference between the two cultures rather than bridging them. For those of us on the net, he was willing to write simply and directly – “It was my mistake, and my mistake alone.” Off the net, however, his contempt for his readers led to equivocation, “It would be a shame, however, if the damaging flaws…” In other words, his offline readers are not to know the truth about the decisions which went into making an unreviewed undergraduate study into a Time cover story.

This split between “us” and “them,” the people on the net and off, and what each group sees and thinks about online pornography, is at the heart of the debate about porn on the net. We are often not even speaking the same language, so the discussion devolves into one of those great American dichotomies where each side characterizes the other as not merely wrong but evil. I will have to confine my remarks here to the US, because that is where the Elmer-DeWitt article has gotten the most play – it was even entered into the US Congressional Record – and because his article frames the debate in terms of the First Amendment to the US Constitution, which guarantees the freedom of speech and of the press. This makes the debate especially thorny, because the gap between what most US citizens know about the First Amendment and what they think they know is often extreme.

Who’s On First?

The two sides of this debate are not what they seem. At first glance, the basic split appears to be between net-savvy libertarians who are opposed to government regulation of pornography on the net in any form, and anti-smut crusaders of all political stripes who want to use Federal regulations to reduce the net’s contents to the equivalent of a prime time TV broadcast. This is a false opposition – both of these groups are merely squabbling over details. The libertarians want no new laws ever, and the crusaders want strong ones now, but they share the same central conviction, namely that without new laws, the US has no legal right to control obscene material on the net which lies within its boundaries.

Both groups are wrong.

Obscene material on the net which is within the jurisdiction of the U.S. does not now and has not ever enjoyed the protection of the First Amendment. The Supreme Court was unambiguously clear on the right to regulate obscene material in Miller v. California (413 U.S. 15 1973), the landmark case which sets the current U.S. standard for obscene material. The ambiguity in that decision is in defining what precisely is obscene (about which more later), but once something is so declared, states may regulate it however they like, no matter what medium it is in.

Many people who have been sold on the net as a kind of anarchist paradise react badly to this news, arguing that since the U.S. has not bothered enforcing these laws on the net so far, it is unfair to do so now. Indeed, it probably is unfair, but the law concerning obscenity has little concerned itself with fairness. Prosecution for obscenity is almost never directed at an elite – as long as sexually explicit materials are not readily available to the general populace, the law has usually ignored them. Once they become widespread, however, both regulations and enforcement generally become much more stringent.

The Current Regulations

The most memorable thing ever said about “hard-core” pornography in the Supreme Court was the late Justice Stewart’s quote “I know it when I see it.” The force of this sentiment is so strong that although it no longer has anything to do with case law, more people know it than know the current Miller standard, which comes much less trippingly off the tongue. Miller does not define pornography per se, but rather “obscenity,” which is roughly the same as hard-core pornography. Material is obscene if it matches all three of these criteria:

1. The average person, applying contemporary community standards, would find the work, taken as a whole, appeals to the prurient interest; and 
2. The work depicts or describes, in a patently offensive way, sexual conduct specifically defined by the applicable state law; and 
3. The work, taken as a whole, lacks serious literary, artistic, political or scientific value.

Every phrase in that tripartite test has been gone over with a fine-toothed legal comb, but it is the first part which is the oddest, because it introduces as a test for obscenity the legal entity “contemporary community standards.”

Which Community? What Standards?

“Contemporary community standards” is a way of recognizing that there are different standards in Manhattan, New York and Manhattan, Kansas. Both “prurient interest” and “patently offensive” are considered local constraints. (The third test, for “serious literary, artistic, political or scientific value,” is taken to be a national standard, and most notable recent obscenity trials have hinged on this clause.) The effect of these locally determined qualities is to grant First Amendment protection to materials which a community does not consider offensive. Put another way, with the “contemporary community standards” benchmark, pornography which is popular cannot be obscene, and pornography which is obscene cannot be popular.


Back to the Marty Rimm study. The study, and Elmer-DeWitt’s treatment of it, were designed to play on the hypocrisy which surrounds most of the debates on pornography, where people profess to be shocked – SHOCKED – that such filth is being peddled in our fair etc., etc. The tenor of this debate is designed to obscure what everyone knows about pornography: many people like it. Furthermore, the more anonymously they can get to it, the more they will consume. (Increased anonymity also seems to be a factor in increasing use of pornography by women.) This is a pattern we have seen with the growth of video tapes, CD-ROMs and 1-900 numbers as well as on the net.

That many people like pornography seems self-evident, but the debate is usually couched in terms of production, not consumption. The ubiquity of pornography is laid at the feet of the people who make and distribute sexually explicit material: they are accused of pandering, of trafficking in porn, even of being pushers, as if the general populace has some natural inclination not to be interested in erotic stimulation, which inclination is being deviously eroded by amoral profiteers.

The net is the death knell for this fiction, and the anti-smut crusaders are in terror lest the law take note of this fact.

We are so comfortable with the world of magazines and books and video tapes that we often fail to notice the differences between data in a physical package, which has to exist in advance of a person wanting it, and data on the net, which does not. Physical packages of data have to be created and edited and shipped and stored and displayed, all in the hopes that they will catch someone’s eye. Data on the net, in contrast, only gets moved or copied when someone implicitly or explicitly requests it. 

This means that anyone measuring net pornography in terms of data – bandwidth, disk space, web hits, whatever – is not measuring production, they are measuring desire. 

Pornography on the net is not pushed out, it is pulled in. For a million people to have a picture of Miss September, Playboy does not have to put a million copies of it on its Web site; it only has to place one copy there and wait for a million people to request it. This means that if there are more copies of “Blacks with Blondes!!!” than “Othello” floating around cyberspace, it is because more people have requested the former than the latter. Bandwidth measures popularity.

You Can See Where This Is Going

The legal effects of demonstrating porn’s popularity on the net may well be precisely the opposite of what the anti-smut crusaders believe they will be. If they can truly demonstrate that pornography is ubiquitous on the net, it will be much harder to regulate under current case law, not easier. In particular, if the net itself can be legally established as a “community,” whose contemporary standards must be consulted in determining obscenity, all the hype will likely mean that it will become difficult to have much net porn declared obscene under Miller.

This notion of the net as a community is not as far-fetched as it sounds. There is a case winding its way through the courts right now where a California man, Robert Thomas, running the “Amateur Action BBS” out of his home, was convicted for violating the community standards of Memphis, Tennessee, where his images were downloaded by a postal inspector trying to shut his BBS down. Part of the defense is the question of which community’s standards should be used. If the verdict is ultimately upheld, every community in the country will have the same standards for net porn as Memphis, because any BBS in any other U.S. jurisdiction can be reached from Tennessee by phone. If the verdict is overturned, no community anywhere in the U.S. will be able to regulate obscenity any more stringently than the most permissive community that has a porn-distributing BBS. If the net itself is found to be a non-geographic community, then there will be two sets of laws, one for people with modems and one for people without.

The Likeliest Casualty

In any case, the likeliest casualty of this case and others like it will be Miller itself. The very notion of contemporary community standards resonates with echoes of Grover’s Corners, of a tightly knit community with identifiable and cohesive values. Communities like that, already mostly a memory in 1973, are little more than a dream now. The original Miller decision, which was 5-4, has held together mostly because the challenges raised to individual materials have usually focussed on the idea that the works in question had some serious value (e.g. the Mapplethorpe photos trial in Cincinnati). Sometime in the next couple of years, however, Miller is going to run directly into a challenge to the idea of local control of a non-localized entity like the net. The result will likely be the first obscenity law geared to 21st century America. Watch this space.

The Power of Lutefisk

From: clays@panix.com (Clay Shirky)
Newsgroups: alt.folklore.urban
Subject: Ode to Lutefisk (Long)
Date: Sun, 04 Dec 1994 09:11:19 -0500

It is my wont when travelling to forgo the touristic in favor of the real, to persuade my kind hosts, whoever they may be, that an evening in the local, imbibing pints of whatever the natives use as intoxicants, would be more interesting than another espresso in another place called Cafe Opera. Chiefest among my interests is the Favorite Dish: the plate, cup, or bowl of whatever stuff my hosts consider most representative of the region’s virtues. As I just finished a week’s work in Oslo, this dish was of course lutefisk.

(snd f/x: organ music in minor key – cresc. and out.)

The Norwegians are remarkably single-minded in their attachment to the stuff. Every one of them would launch themselves into a hydrophobic frenzy of praise on the mere mention of the word. Though these panegyrics were as varied as they were fulsome, they shared one element in common. Every testimonial to the recondite deliciousness of cod soaked in lye ended with the phrase “…but I only eat it once a year.”

When I pressed my hosts as to why they would voluntarily forswear what was by all accounts the tastiest fish dish ever for 364 days of the year, each of them said “Oh, you can’t eat lutefisk more than once a year.” (Their unanimity on this particular point carried with it the same finality as the answers you get when casually asking a Scientologist about L. Ron’s untimely demise.)

Despite my misgivings from these interlocutions, however, there was nothing for it but to actually try the stuff, as it was clearly the local delicacy. A plan was hatched whereby my hosts and I would distill ourselves to a nearby brasserie, and I would order something tame like reindeer steak, and they would order lutefisk. The portions at this particular establishment were large, they assured me, and when I discovered for myself how scrumptious jellied fish tasted, I could have an adequate amount from each of their plates to satiate my taste for this new-found treat.

Ah, but the best-laid plans… My hostess, clearly feeling in a holiday mood (and perhaps further cheered by my imminent departure as their house guest), proceeded to order lutefisks all round.

“But I was going to order reinde…”

“Nonononono,” she said, “you must have your own lutefisk. It would be rude to bring you to Norway and not give you your own lutefisk.”

My mumbled suggestion that I had never been one to stand on formality went unnoticed, and moments later, somewhere in the kitchen, there was a lutefisk with my name on it.

The waitress, having conveyed this order to the chef, returned with a bottle and three shot glasses and spent some time interrogating my host. He laughed as she left, and I asked what she said.

“Oh she said ‘Is the American really going to eat lutefisk?’ and when I told her you were, she said that it takes some time to get used to it.”

“How long?” I asked.

“Well, she said a couple of years.”

In the meantime, my hostess was busily decanting a clear liquid into one of the shot glasses and passing it my way. When I learned that it was aquavit, I demurred, as I intended to get some writing done on the train.

“Oh no,” said my hostess, donning the smile polite people use when giving an order, “you must have aquavit with lutefisk.”

To understand the relationship between aquavit and lutefisk, here’s an experiment you can do at home. In addition to aquavit, you will need a slice of lemon, a cracker, a dishtowel, ketchup, some caviar, and a Kit-Kat candy bar.

  1. Take a shot of aquavit.
  2. Take two. (They’re small.)
  3. Put a bit of caviar on a cracker.
  4. Squeeze some lemon juice on the caviar.
  5. Pour some ketchup on the Kit-Kat bar.
  6. Tie the dishtowel around your eyes.

If you can taste the difference between caviar on a cracker and ketchup on a Kit-Kat while blindfolded, you have not had enough aquavit to be ready for lutefisk. Return to step one.

The first real sign of trouble was when a plate arrived and was set in front of my host, sitting to my left. It contained a collection of dark and aromatic foodstuffs of a variety of textures. Having steeled myself for an encounter with a pale jelly, I was puzzled at its appearance, and I leaned over to get a better look.

“Oh,” said my host, “that’s not lutefisk. I changed my mind and ordered the juletid plate. It’s pork and sausages.”

“But you’re leaving for New York tomorrow, so tonight is your last chance to have lutefisk this year,” I pointed out.

“Oh well,” he said, tucking into what looked like a very tasty pork chop.

Shortly thereafter the two remaining plates arrived, each containing the lutefisk itself, boiled potatoes, and a mash of peas from which all the color had been expertly tortured. There was also a garnish of a slice of cucumber, a wedge of lemon, and a sliver of red pepper.

“This is bullshit!” said my hostess, snatching the garnish off her plate.

“What’s wrong,” I asked, “not enough lemon?”

“No, a plate of lutefisk should be totally gray!”

Indeed, with the removal of the garnish, it was totally gray, and waiting for me to dig in. There being no time like the present, I tore a forkful away from the cod carcass and lifted it to my mouth.

“Wait,” said my host, “you can’t eat it like that!”

“OK,” I said, “how should I eat it?”

“Mash up your potatoes, and then mix a bit of lutefisk in, and then add some bacon,” he said, handing me a tureen filled to the brim with bacon bits floating in fat.

I began to strain some of the bits out of the tureen. “No, not like that, like this” he said, snatching up the tureen and pouring three fingers of pure bacon grease directly over the beige mush I had made from the potatoes and lutefisk already on my plate.

“Now can I eat it?”

“No, not yet, you have to mix in the mustard.”

“And the pepper,” added my hostess, “you have to have lutefisk with lots and lots of pepper. And then you have to eat it right away, because if it gets cold it’s horrible.”

They proceeded to add pepper and mustard in amounts I felt were more appropriate to ingredients than to flavorings, but no matter. At this point what I had was an undercooked hash brown with mustard on it, flavored with a little bit of lutefisk. “How bad could it be?” I thought to myself as I lifted my fork to my mouth.

The moment every traveller lives for is the native dinner where, throwing caution to the wind and plunging into a local delicacy which ought by rights to be disgusting, one discovers that it is not only delicious but that it also contradicts a previously held prejudice about food, that it expands one’s culinary horizons to include surprising new smells, tastes, and textures.

Lutefisk is not such a dish.

Lutefisk is instead pretty much what you’d expect of jellied cod; it is a foul and odoriferous goo, whose gelatinous texture and rancid oily taste are locked in spirited competition to see which can be the more responsible for rendering the whole completely inedible.

How to describe that first bite? It’s a bit like describing passing a kidney stone to the uninitiated. If you are talking to someone else who has lived through the experience, a nod will suffice to acknowledge your shared pain, but to explain it to the person who has not been there, mere words seem inadequate to the task. So it is with lutefisk.

One could bandy about the time-honored phrases like “nauseating sordid gunk”, “unimaginably horrific”, and “lasting psychological damage”, but these seem hollow when applied to the task at hand. I will have to resort to a recipe for a kind of metaphorical lutefisk to describe the experience. Take marshmallows made without sugar, blend them together with overcooked Japanese noodles, and then bathe the whole liberally in acetone. Let it marinate in cod liver oil for several days at room temperature. When it has achieved the appropriate consistency (though the word “appropriate” is somewhat problematic here), heat it to just above lukewarm, sprinkle in thousands of tiny, sharp, invisible fish bones, and serve.

The waitress, returning to clear our plates, surveyed the half-eaten goo I had left.

She nodded conspiratorially at me, said something to my host, and left.

“What’d she say?” I asked.

“Oh, she said ‘I never eat lutefisk either. It tastes like python.’”

Clay “I think my mistake was in using the dishtowel: you need to drink enough aquavit so you can’t tell the difference between caviar on a cracker and ketchup on a Kit-Kat with your eyes open” Shirky