The Music Business and the Big Flip

First published January 21, 2003 on the ‘Networks, Economics, and Culture’ mailing list.

The first and last thirds of the music industry have been reconfigured by digital tools. The functions in the middle have not.

Thanks to software like Pro Tools and Cakewalk, the production of music is heavily digital. Thanks to Napster and its heirs like Gnutella and Kazaa, the reproduction and distribution of music is also digital. As usual, this digitization has taken an enormous amount of power formerly reserved for professionals and delivered it to amateurs. But the middle part — deciding what new music should be available — is still analog and still professionally controlled.

The most important departments at a record label are Artists & Repertoire, and Marketing. A&R’s job is to find new talent, and Marketing’s job is to publicize it. These are both genuinely hard tasks, and unlike production or distribution, there is no serious competition for those functions outside the labels themselves. Prior to its demise, Napster began publicizing itself as a way to find new music, but this was a fig leaf, since users had to know the name of a song or artist in advance. Napster did little to place new music in an existing context, and the current file-sharing networks don’t do much better. In strong contrast to writing and photos, almost all the music available on the internet is there because it was chosen by professionals.

Aggregate Judgments

The curious thing about this state of affairs is that in other domains, we now use amateur input for finding and publicizing. The last 5 years have seen the launch of Google, Blogdex, Kuro5hin, Slashdot, and many other collaborative filtering sites that transform the simple judgments of a few participants into aggregate recommendations of remarkably high quality.

This is all part of the Big Flip in publishing generally, where the old notion of “filter, then publish” is giving way to “publish, then filter.” There is no need for Slashdot’s or Kuro5hin’s owners to sort the good posts from the bad in advance, no need for Blogdex or Daypop to pressure people not to post drivel, because lightweight filters applied after the fact work better at large scale than paying editors to enforce minimum quality in advance. A side-effect of the Big Flip is that the division between amateur and professional turns into a spectrum, giving us a world where unpaid writers are discussed side-by-side with New York Times columnists.

The music industry is largely untouched by the Big Flip. The industry harvests the aggregate taste of music lovers and sells it back to us as popularity, without offering anyone the chance to be heard without their approval. The industry’s judgment, not ours, still determines the entire domain in which any collaborative filtering will subsequently operate. A working “publish, then filter” system that used our collective judgment to sort new music before it gets played on the radio or sold at the record store would be a revolution.

Core Assumptions

Several attempts at such a thing have been launched, but most are languishing, because they are constructed as extensions of the current way of producing music, not alternatives to it. A working collaborative filter would have to make three assumptions. 

First, it would have to support the users’ interests. Most new music is bad, and the users know it. Sites that sell themselves as places for bands to find audiences are analogous to paid placement on search engines — more marketing vehicle than real filter. FarmFreshMusic, for example, lists its goals as “1. To help artists get signed with a record label. 2. To help record labels find great artists efficiently. 3. To help music lovers find the best music on the Internet.” Note who comes third.

Second, life is too short to listen to stuff you hate. A working system would have to err on the side of false negatives (not offering you music you might like) rather than false positives (offering you music you might not like). With false negatives as the default, adventurous users could expand their preferences at will, while the mass of listeners would get the Google version — not a long list of every possible match, but rather a short list of high relevance, no matter what has been left out.

Finally, the system would have to use lightweight rating methods. The surprise in collaborative filtering is how few people need to be consulted, and how simple their judgments need to be. Each Slashdot comment is moderated up or down only a handful of times, by only a tiny fraction of its readers. The Blogdex Top 50 links are sometimes pointed to by as few as half a dozen weblogs, and the measure of interest is entirely implicit in the choice to link. Despite the almost trivial nature of the input, these systems are remarkably effective, given the mass of mediocrity they are sorting through. 
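A Blogdex-style tally can be sketched in a few lines of Python (a toy model, not Blogdex’s actual code): every outbound link in a crawled weblog post counts as one implicit vote, so no explicit rating step is ever needed.

```python
from collections import Counter

def blogdex_style_top(posts, n=50):
    """Rank URLs by how many weblog posts link to them.

    posts: a list of posts, each given as the list of URLs it links to.
    The link itself is the vote; interest is measured entirely implicitly.
    """
    tally = Counter(url for links in posts for url in links)
    return tally.most_common(n)

# Four posts, three of which point at the same page:
posts = [["http://a.example", "http://b.example"],
         ["http://a.example"],
         ["http://a.example", "http://c.example"],
         ["http://b.example"]]
print(blogdex_style_top(posts, 2))
```

Even with only a handful of “voters,” the most-linked item surfaces immediately, which is the whole point: the input is almost trivially lightweight.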

A working filter for music would similarly involve a small number of people (SMS voting at clubs, periodic “jury selection” of editors a la Slashdot, HotOrNot-style user uploads), and would pass the highest ranked recommendations on to progressively larger pools of judgment, which would add increasing degrees of refinement about both quality and classification. 
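A minimal sketch of such a staged filter, with invented pool sizes and thresholds (nothing here mirrors a real deployed system): the `rating` callable stands in for whatever lightweight input each pool provides — SMS votes at a club, jury scores, site ratings — and each round passes only the top fraction on to a larger pool.

```python
def staged_filter(tracks, rating, pools=(6, 30, 150), keep=0.2):
    """Pass tracks through progressively larger pools of listeners.

    rating(track, pool_size) returns an average score in [0, 1] from a
    pool of that many raters; only the top `keep` fraction survives
    each round, so later (larger) pools judge a much shorter list.
    """
    surviving = list(tracks)
    for pool_size in pools:
        scored = sorted(surviving, key=lambda t: rating(t, pool_size),
                        reverse=True)
        surviving = scored[:max(1, int(len(scored) * keep))]
    return surviving
```

With 100 candidate tracks and the defaults above, the first small pool cuts the list to 20, the second to 4, and the third hands back a short, high-relevance list — the Google version, not the exhaustive one.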

Such a system won’t undo inequalities in popularity, of course, because inequality appears whenever a large group expresses their preferences among many options. Few weblogs have many readers while many have few readers, but there is no professional “weblog industry” manipulating popularity. However, putting the filter for music directly in the hands of listeners could reflect our own aggregate judgments back to us more quickly, iteratively, and with less distortion than the system we have today.

Business Models and Love

Why would musicians voluntarily put new music into such a system? 

Money is one answer, of course. Several sorts of businesses profit from music without needing the artificial scarcity of physical media or DRM-protected files. Clubs and concert halls sell music as experience rather than as ownable object, and might welcome a system that identified and marketed artists for free. Webcasting radio stations are currently forced to pay the music industry per listener without extracting fees from the listeners themselves. They might be willing to pay artists for music unencumbered by per-listener fees. Both of these solutions (and other ones, like listener-supported radio) would offer at least some artists some revenues, even if their music were freely available elsewhere. 

The more general answer, however, is replacement of greed with love, in Kevin Kelly’s felicitous construction. The internet has lowered the threshold of publishing to the point where you no longer need help or permission to distribute your work. What has happened with writing may be possible with music. Like writers, most musicians who work for fame and fortune get neither, but unlike with writing, the internet has not yet offered wide distribution to people making music for the love of the thing. A system that offered musicians a chance at finding an audience outside the professional system would appeal to at least some of them. 

Music Is Different

There are obvious differences here, of course, as music is unlike writing in several important ways. Writing tools are free or cheap, while instruments, analog or digital, can be expensive; writing can be done solo, while music-making is usually done by a group, making coordination much more complex. Furthermore, bad music is far more painful to listen to than bad writing is to read, so the difference between amateur and professional music may be far more extreme. 

But for all those limits, change may yet come. Unlike an article or essay, a song people like gets listened to over and over again, meaning that even a small amount of high-quality music that found its way from artist to public without passing through an A&R department could create a significant change. This would not upend the professional music industry so much as alter its ecosystem, in the same way newspapers now publish in an environment filled with amateur writing. 

Indeed, the world’s A&R departments would be among the most avid users of any collaborative filter that really worked. The change would not herald the death of A&R, but rather a reconfiguration of the dynamic. A world where the musicians already had an audience when they were approached by professional publishers would be considerably different from the system we have today, where musicians must get the attention of the world’s A&R departments to get an audience in the first place. 

Digital changes in music have given us amateur production and distribution, but left intact professional control of fame. It used to be hard to record music, but no longer. It used to be hard to reproduce and distribute music, but no longer. It is still hard to find and publicize good new music. We have created a number of tools that make filtering and publicizing both easy and effective in other domains. The application of those tools to new music could change the musical landscape.

Customer-owned Networks: ZapMail and the Telecommunications Industry

First published January 7, 2003 on the ‘Networks, Economics, and Culture’ mailing list. 

To understand what’s going to happen to the telephone companies this year thanks to WiFi (otherwise known as 802.11b) and Voice over IP (VoIP) you only need to know one story: ZapMail.

The story goes like this. In 1984, flush from the success of their overnight delivery business, Federal Express announced a new service called ZapMail, which guaranteed document delivery in 2 hours. They built this service not by replacing their planes with rockets, but with fax machines.

This was CEO Fred Smith’s next big idea after the original delivery business. Putting a fax machine in every FedEx office would radically reconfigure the center of their network, thus slashing costs: toner would replace jet fuel, bike messengers’ hourly rates would replace pilots’ salaries, and so on. With a much less expensive network, FedEx could attract customers with a discount on regular delivery rates, but with the dramatically lower costs, profit margins would be huge compared to actually moving packages point to point. Lower prices, higher margins, and to top it all off, the customer would get their documents in 2 hours instead of 24. What’s not to love?

Abject failure was not to love, as it turned out. Two years and hundreds of millions of dollars later, FedEx pulled the plug on ZapMail, allowing it to vanish without a trace. And the story of ZapMail’s collapse holds a crucial lesson for the telephone companies today.

The Customer is the Competitor

ZapMail had three fatal weaknesses.

First of all, Federal Express didn’t get that faxing was a product, not a service. FedEx understood that faxing would be cheaper than physical delivery. What they missed, however, was that their customers understood this too. The important business decision wasn’t when to pay for individual faxes, as the ZapMail model assumed, but rather when to buy a fax machine. The service was enabled by the device, and the business opportunity was in selling the devices.

Second, because FedEx thought of faxing as a service, it failed to understand how the fax network would be built. FedEx was correct in assuming it would take hundreds of millions of dollars to create a useful network. (It has taken billions, in fact, over the last two decades.) However, instead of the single massive build out FedEx undertook, the network was constructed by individual customers buying one fax machine at a time. The capital expenditure was indeed huge, but it was paid for in tiny chunks, at the edges of the network.

Finally, because it misunderstood how the fax network would be built, FedEx misunderstood who its competition was. Seeing itself in the delivery business, it thought it had only UPS and DHL to worry about. What FedEx didn’t see was that its customers were its competition. ZapMail offered two hour delivery for slightly reduced prices, charged each time a message was sent. A business with a fax machine, on the other hand, could send and receive an unlimited number of messages almost instantaneously and at little cost, for a one-time hardware fee of a few hundred dollars.

There was simply no competition. ZapMail looked good next to FedEx’s physical delivery option, but compared to the advantages enjoyed by the owners of fax machines, it was laughable. If the phone network offered cheap service, it was better to buy a device to tap directly into that than to allow FedEx to overcharge for an interface to that network that created no additional value. The competitive force that killed ZapMail was the common sense of its putative users.

ZapPhone

The business Fred Smith imagined being in — build a network that’s cheap to run but charge customers as if it were expensive — is the business the telephone companies are in today. They are selling us a kind of ZapPhone service, where they’ve digitized their entire network up to the last mile, but are still charging the high and confusing rates established when the network was analog.

The original design of the circuit-switched telephone network required the customers to lease a real circuit of copper wire for the duration of their call. Those days are long over, as copper wires have been largely replaced by fiber optic cable. Every long distance phone call and virtually every local call is now digitized for at least some part of its journey.

As FedEx was about faxes, the telephone companies are in deep denial about the change from analog to digital. A particularly clueless report written for the telephone companies offers this choice bit of advice:

Telcos gain billions in service fees from […] services like Call Forwarding and Call Waiting […]. Hence, capex programs that shift a telco, say, from TDM to IP, as in a softswitch approach that might have less capital intensity, must absolutely preserve the revenue stream. [ http://www.proberesearch.com/alerts/refocusing.htm]

You don’t need to know telephone company jargon to see that this is the ZapMail strategy. 

Step #1: Scrap the existing network, which relies on pricey hardware switches and voice-specific protocols like Time Division Multiplexing (TDM). 
Step #2: Replace it with a network that runs on inexpensive software switches and Internet Protocol (IP). This new network will cost less to build and be much cheaper to run. 
Step #3: “Preserve the revenue stream” by continuing to charge the prices from the old, expensive network.

This will not work, because the customers don’t need to wait for the telephone companies to offer services based on IP. The customers already have access to an IP network — it’s called the internet. And like the fax machine, they are going to buy devices that enable the services they want on top of this network, without additional involvement by the telephone companies.

Two cheap consumer devices loom large on this front, devices that create enormous value for the owners while generating little revenue for the phone companies. The first is WiFi access points, which allow the effortless sharing of broadband connections, and the second is VoIP converters, which provide the ability to route phone calls over the internet from a regular phone.

WiFi — Wireless local networks

In classic ZapMail fashion, the telephone companies misunderstand the WiFi business. WiFi is a product, not a service, and they assume their competition is limited to other service companies. There are now half a dozen companies selling wireless access points; at the low end, Linksys sells a hundred-dollar device for the home that connects to DSL or cable modems, provides wireless access, and has a built-in ethernet hub to boot. The industry has visions of the “2nd phone line” effect coming to data networking, where multi-computer households will have multiple accounts, but if customers can share a high-speed connection among several devices with a single product, the service business will never materialize.

The wireless ISPs are likely to fare no better. Most people do their computing at home or at work, and deploying WiFi to those two areas will cost at worst a couple hundred bucks, assuming no one to split the cost with. There may be a small business in wiring “third places” — coffee shops, hotels, and meeting rooms — but that will be a marginal business at best. WiFi is the new fax machine, a huge value for consumers that generates little new revenue for the phone companies. And, like the fax network, the WiFi extension to the internet will cost hundreds of millions of dollars, but it will not be built by a few companies with deep pockets. It will be built by millions of individual customers, a hundred dollars at a time.

VoIP — Phone calls at internet prices

Voice over IP is another area where a service is becoming a product. Cisco now manufactures an analog telephone adapter (ATA) with a phone jack in the front and an ethernet jack in the back. The box couldn’t be simpler, and does exactly what you’d expect a box with a phone jack in the front and an ethernet jack in the back to do. The big advantage is that unlike the earlier generation of VoIP products — “Now you can use your computer as a phone!” — the ATA lets you use your phone as a phone, allowing new competitors to offer voice service over any high-speed internet connection.

Vonage.com, for example, is giving away ATAs and offering phone service for $40 a month. Unlike the complex billing structures of the existing telephone companies, Vonage prices the phone like an ISP subscription. A Vonage customer can make an unlimited number of unlimited-length domestic long distance calls for their forty bucks, with call waiting, call forwarding, call transfer, web-accessible voicemail and caller ID thrown in free. Vonage can do this because, like the telephone companies, they are offering voice as an application on a digital network, but unlike the phone companies, they are not committed to charging the old prices by pretending that they are running an analog network.

Voice quality is just one feature among many

True to form, the telephone companies also misunderstand the threat from VoIP (though here it is in part because people have been predicting VoIP’s rise since 1996). The core of the misunderstanding is the MP3 mistake: believing that users care about audio quality above all else. Audiophiles confidently predicted that MP3s would be no big deal, because the sound quality was less than perfect. Listeners, however, turned out to be interested in a mix of things, including accessibility, convenience, and price. The average music lover was willing, even eager, to give up driving to the mall to buy high-quality but expensive CDs, once Napster made it possible to download lower-quality but free music.

Phone calls are like that. Voice over IP doesn’t sound as good as a regular phone call, and everyone knows it. But as with music, people don’t want the best voice quality they can get no matter what the cost; they want a minimum threshold of quality, after which they will choose phone service based on an overall mix of features. And now that VoIP has reached that minimum quality, VoIP offers one feature the phone companies can’t touch: price.

The service fees charged by the average telephone company (call waiting, caller ID, dial-tone and number portability fees, etc.) add enough to the cost of a phone line that a two-line household that moved only its second line to VoIP could save $40 a month before making their first actual phone call. By simply paying for the costs of the related services, a VoIP customer can get all their domestic phone calls thrown in as a freebie. 
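The arithmetic can be made concrete with a back-of-the-envelope sketch. Every fee below is invented for illustration, chosen only to land in the essay’s ballpark, not taken from any real carrier’s rate card.

```python
# Hypothetical 2003-era monthly fees for an analog second phone line.
# All figures are illustrative assumptions, not real tariffs.
analog_second_line = {
    "line rental and dial tone": 35.00,
    "call waiting": 6.00,
    "caller ID": 8.00,
    "voicemail": 7.00,
    "number portability and other fees": 4.00,
    "metered long distance": 20.00,  # varies with actual use
}
voip_flat_rate = 40.00  # unlimited domestic calls, features included

analog_total = sum(analog_second_line.values())
savings = analog_total - voip_flat_rate
print(f"analog second line: ${analog_total:.2f}/mo, "
      f"flat-rate VoIP: ${voip_flat_rate:.2f}/mo, "
      f"saved: ${savings:.2f}/mo")
```

The point is not the particular numbers but their shape: the itemized service fees alone can exceed the flat rate, at which point every domestic call really is a freebie.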

As with ZapMail, the principal threat to the telephone companies’ ability to shrink costs but not revenues is their customers’ common sense. Given the choice, an increasing number of customers will simply bypass the phone company and buy the hardware necessary to acquire the service on their own.

And hardware symbiosis will further magnify the threat of WiFi and VoIP. The hardest part of setting up VoIP is simply getting a network hub in place. Once a hub is installed, adding an analog telephone adapter is literally a three-plug set-up: power, network, phone. Meanwhile, one of the side-effects of installing WiFi is getting a hub with open ethernet ports. The synergy is obvious: Installing WiFi? You’ve done most of the work towards adding VoIP. Want VoIP? Since you need to add a hub, why not get a WiFi-enabled hub? (There are obvious opportunities here for bundling, and later for integration — a single box with WiFi, Ethernet ports, and phone jacks for VoIP.)

The economic logic of customer-owned networks

According to Metcalfe’s Law, the value of an internet connection rises with the number of users on the network. However, the phone companies do not get to raise their prices in return for that increase in value. This is a matter of considerable frustration to them.

The economic logic of the market suggests that capital should be invested by whoever captures the value of the investment. The telephone companies are using that argument to suggest that they should either be given monopoly pricing power over the last mile, or that they should be allowed to vertically integrate content with conduit. Either strategy would allow them to raise prices by locking out the competition, thus restoring their coercive power over the customer and helping them extract new revenues from their internet subscribers.

However, a second possibility has appeared. If the economics of internet connectivity lets the user rather than the network operator capture the residual value of the network, the economics likewise suggest that the user should be the builder and owner of the network infrastructure.

The creation of the fax network was the first time this happened, but it won’t be the last. WiFi hubs and VoIP adapters allow the users to build out the edges of the network without needing to ask the phone companies for either help or permission. Thanks to the move from analog to digital networks, the telephone companies’ most significant competition is now their customers, because if the customer can buy a simple device that makes wireless connectivity or IP phone calls possible, then anything the phone companies offer by way of competition is nothing more than the latest version of ZapMail.

LazyWeb and RSS: Given Enough Eyeballs, Are Features Shallow Too?

First published on O’Reilly’s OpenP2P on January 7, 2003.

A persistent criticism of open source software is that it is more about copying existing features than creating new ones. While this criticism is overblown, the literature of open source is clearer on debugging than on design. This note concerns an attempt to apply debugging techniques to feature requests and concludes by describing Ben Hammersley’s attempt to create such a system, implemented as an RSS feed.

A key observation in Eric Raymond’s The Cathedral and the Bazaar is: “Given enough eyeballs, all bugs are shallow.” Raymond suggests that Brooks’s Law–“Adding more programmers to a late software project makes it later”–doesn’t apply here, because debugging requires less coordination than most other software development. He quotes Linus on the nature of debugging: “Somebody finds the problem, and somebody else understands it. And I’ll go on record as saying that finding it is the bigger challenge.”

Finding a bug doesn’t mean simply pointing out broken behavior, though. Finding a bug means both locating it and describing it clearly. The difference between “It doesn’t work” and “Whenever I resize the login window, the Submit button disappears” is the difference between useless mail and useful feedback. Both the description and the fix are vital, and the description precedes the fix.

Enter the LazyWeb

There is evidence that this two-step process applies to features as well, in a pattern Matt Jones has dubbed the LazyWeb. The original formulation was “If you wait long enough, someone will write/build/design what you were thinking about.” But it is coming to mean “I describe a feature I think should exist in hopes that someone else will code it.” Like debugging, the success of the LazyWeb is related at least in part to the quality of the descriptions. A feature, schema, or application described in enough detail can give the right developer (usually someone thinking about the same problem) a clear idea of how to code it quickly. 

Examples of the LazyWeb in action are Steven Johnson’s URL catcher as built by Andre Torrez, and Ben Trott’s “More Like This From Others” feature after Ben Hammersley’s initial characterization.

LazyWeb seems to work for at least three reasons:

  1. Developers have blind spots. Because a developer knows a piece of software in far more detail than its users do, they can have blind spots. A good LazyWeb description provides the developer with an alternate perspective on the problem. And sometimes the triviality of the coding involved keeps developers from understanding how valuable a feature would be to its users, as with Yoz Grahame’s “get, search and replace, display” script for revising the text of web pages. Sometimes four lines of code can make all the difference.
  2. Developers have social itches. The canonical motivation for open source developers is that they want to “scratch an itch.” In this view, most open source software is written with the developer as the primary user, with any additional use seen as a valuable but secondary side-effect. Sometimes, though, the itch a developer has is social: they want to write software other people will adopt. In this case, the advantage of the LazyWeb is not just that a new application or feature is described clearly, but that it is guaranteed to have at least one grateful user. Furthermore, LazyWeb etiquette involves publicizing any solution that does arise, meaning that the developer gets free public attention, even if only to a select group. If writing software that gets used in the wild is a motivation, acting on a LazyWeb description is in many ways a karmically optimal move.
  3. Many eyes make features shallow. This is really a meta-advantage. The above advantages would apply even to a conversation between a single describer and a single developer, if they were the right people. Expanding the conversation to include more describers and developers increases the possibility that at least one such pairing will occur.
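The Yoz Grahame script mentioned in the first point really can be about four lines. Here is a hypothetical Python equivalent of a “get, search and replace, display” script; the `fetch` parameter is an assumption added so the function can be exercised without a live URL, and none of this is Grahame’s actual code.

```python
import urllib.request

def rewrite_page(url, old, new,
                 fetch=lambda u: urllib.request.urlopen(u)
                                 .read().decode("utf-8", "replace")):
    # Get the page, search-and-replace its text, return it for display:
    # the whole trick in a few lines.
    return fetch(url).replace(old, new)
```

Trivial as it is, a script like this lets a non-programmer republish a page with altered text — exactly the kind of small-code, large-value feature a developer might never think worth writing down.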

Transaction Costs and Coordination Costs

The transaction costs of the LazyWeb are extremely low. Someone describes; someone else codes. The describer can write sketchily or in great detail. No developers are required to read the description, and those who do read it can ignore or modify the proposed design. The interface between the parties is lightweight, one-way, and optional.

However, the coordination costs of the LazyWeb as a whole are very high, and they will grow as more people try it. More people can describe features than write software, just as more people can characterize bugs than fix them. Unlike debugging, however, a LazyWeb description does not necessarily have a target application or a target group of developers. This creates significant interface problems, since maximal LazyWeb awareness would have every developer reading every description, an obvious impossibility. (Shades of Brooks’s Law.)

This would be true even if the LazyWeb were confined to skilled programmers. The ability of system architects, say, to describe new visual layout tools, or graphics programmers to characterize their filesharing needs, ensures that there will always be more capable describers than suitable developers.

Thus the LazyWeb is currently limited to those environments that maximize the likelihood that a developer with a social itch and a good grasp of the problem space will happen to read a particular LazyWeb description. In practice, this means that successful LazyWeb requests work best when posted on a few blogs read by many developers. Far from being a “More describers than developers” scenario, in other words, the current LazyWeb has many fewer describers than developers, with the developers fragmented across several sites.

Sounds Like a Job for RSS

One common answer to this kind of problem is to launch a portal for all LazyWeb requests. (There have been earlier experiments in this domain, like http://www.halfbakery.com, http://www.shouldexist.org, and http://www.creativitypool.com, and Magnetbox has launched a LazyWeb-specific site.) These sites are meant to be brokers between describers and developers.

However, nearly a decade of experimentation with single-purpose portals shows that most of them fail. As an alternative to making a LazyWeb portal, creating an RSS feed of LazyWeb descriptions has several potential advantages, including letting anyone anywhere add to the feed, letting sites that serve developers present the feed in an existing context, and letting the developers themselves fold, spindle, and mutilate the feed in any way they choose.

Ben Hammersley has designed a version of a LazyWeb feed. It has three moving parts. 

The first part is the collection of descriptions. Hammersley assumes that a growing number of people will be writing LazyWeb descriptions, and that most of these descriptions will be posted to blogs.

The second part is aggregation. He has created a trackback address, http://blog.mediacooperative.com/mt-tb.cgi/1080, for LazyWeb posts. Blog posts that point to this address are aggregated and presented at http://www.benhammersley.com/lazyweb/.

The third part is the RSS feed itself, at http://www.benhammersley.com/lazyweb/index.rdf, which is simply the XML version of http://www.benhammersley.com/lazyweb/. However, because it is a feed, third parties can subscribe to it, filter it, present it as a sidebar on their own sites, and so on.

It’s easy to see new features that could be added to this system. A LazyWeb item in RDF has only four elements, set by the Trackback spec — title, link, description, and date. Thus almost all the onus of filtering the feed falls on the subscriber, not the producer. An RDF format with optional but recommended tags (type: feature, schema, application, etc.; domain: chat, blog, email, etc.) might allow for higher-quality syndication, but would be hard with the current version of Trackback. Alternatively, community consensus about how to use title tags to characterize feature requests could help.
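A subscriber-side filter of the kind just described might look like this sketch. The sample items mirror the four Trackback-set elements (title, link, description, date), but the items themselves and the `[feature]`/`[application]` title-tag convention are invented for illustration — they are one possible community consensus, not an existing one.

```python
import xml.etree.ElementTree as ET

# A minimal stand-in for a LazyWeb feed; real feeds would carry many items.
FEED = """<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
                   xmlns="http://purl.org/rss/1.0/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
  <item rdf:about="http://example.com/1">
    <title>[feature] Trackback pings for wikis</title>
    <link>http://example.com/1</link>
    <description>Wikis should be able to send Trackback pings.</description>
    <dc:date>2003-01-07</dc:date>
  </item>
  <item rdf:about="http://example.com/2">
    <title>[application] A shared calendar for weblogs</title>
    <link>http://example.com/2</link>
    <description>Aggregate upcoming events from many blogs.</description>
    <dc:date>2003-01-07</dc:date>
  </item>
</rdf:RDF>"""

RSS = "{http://purl.org/rss/1.0/}"  # default namespace of RSS 1.0 items

def items_of_type(feed_xml, wanted):
    """Keep items whose title starts with a [type] tag, e.g. '[feature]'."""
    root = ET.fromstring(feed_xml)
    out = []
    for item in root.iter(RSS + "item"):
        title = item.findtext(RSS + "title", default="")
        if title.lower().startswith(f"[{wanted}]"):
            out.append((title, item.findtext(RSS + "link")))
    return out
```

Because the filtering happens entirely at the subscriber’s end, any developer can fold, spindle, and mutilate the feed this way without asking the producer for anything.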

And not everyone with a LazyWeb idea runs a Trackback-enabled weblog, so having some way for those people to register their ideas could be useful. Hooks for automated translation could make the feed more useful to developers working in languages other than English, and so on.

But for all the possible new features, this is a good start, having achieved a kind of bootstrap phase analogous to compiler development. The original work came out of a LazyWeb characterization made during a larger conversation Jones, Hammersley, and I have been having about social software, and some of the early LazyWeb requests are themselves feature descriptions for the system.

Will It Work?

Will it work? Who knows. Like any experiment, it could die from inactivity. It could also be swamped by a flood of low-quality submissions. It may be that the membrane that a weblog forms around its readers is better for matching describers and developers than an open feed would be. And Paul Hammond has suggested that “Any attempt to invoke the LazyWeb directly will cause the whole thing to stop working.” 

It’s worth trying, though, because the potential win is so large. If the benefits open source development offers for fixing bugs can be applied to creating features as well, it could confer a huge advantage on the development of Mob Software.

Stefano Mazzocchi of the Cocoon project has said: “Anyway, it’s a design pattern: ‘good ideas and bad code build communities, the other three combinations do not.’ This is extremely hard to understand, it’s probably the most counter-intuitive thing about open source dynamics.” If Mazzocchi is right, then a high-quality stream of feature requests could be a powerful tool for building communities of developers and users, as well as providing a significant advantage to open over closed source development.

The closed source shops could subscribe to such a feed as well, of course, but their advantage on the feature front isn’t speed, it’s secrecy. If a small group of closed source developers is working on a feature list that only they know, they will often ship it first. But for good ideas in the public domain, the open and closed development teams will have the same starting gun. And releasing early and often is where open development has always excelled.

It’s too early to tell if LazyWeb is just the flavor of the month or points to something profound about the way ideas can spread. And it’s much too early to know if an RSS feed is the right way to spread LazyWeb ideas to the developers best able to take advantage of them. But it’s not too early to know that it’s worth trying.