Diffstat (limited to 'data')
-rw-r--r-- | data/samples/current/en/the_wealth_of_networks.yochai_benkler.sst | 44
1 file changed, 22 insertions, 22 deletions
diff --git a/data/samples/current/en/the_wealth_of_networks.yochai_benkler.sst b/data/samples/current/en/the_wealth_of_networks.yochai_benkler.sst
index da86aa2..6e0d17e 100644
--- a/data/samples/current/en/the_wealth_of_networks.yochai_benkler.sst
+++ b/data/samples/current/en/the_wealth_of_networks.yochai_benkler.sst
@@ -42,7 +42,7 @@
@make:
:breaks: new=:B; break=1
:home_button_image: {won_benkler.png }http://cyber.law.harvard.edu/wealth_of_networks/Main_Page
- :footer: {The Wealth of Networks}http://cyber.law.harvard.edu/wealth_of_networks/Main_Page; {Yochai Benkler}http://http://www.doctorow.com
+ :footer: {The Wealth of Networks}http://cyber.law.harvard.edu/wealth_of_networks/Main_Page; {Yochai Benkler}http://http://www.benkler.org
:A~ @title @author
@@ -880,7 +880,7 @@ The quintessential instance of commons-based peer production has been free softw
This requires anyone who modifies software and distributes the modified version to license it under the same free terms as the original software. While there have been many arguments about how widely the provisions that prevent downstream appropriation should be used, the practical adoption patterns have been dominated by forms of licensing that prevent anyone from exclusively appropriating the contributions or the joint product. More than 85 percent of active free software projects include some version of the GPL or similarly structured license.~{ Josh Lerner and Jean Tirole, "The Scope of Open Source Licensing" (Harvard NOM working paper no. 02-42, table 1, Cambridge, MA, 2002). The figure is computed out of the data reported in this paper for the number of free software development projects that Lerner and Tirole identify as having "restrictive" or "very restrictive" licenses. }~
-Free software has played a critical role in the recognition of peer production, because software is a functional good with measurable qualities. It can be more or less authoritatively tested against its market-based competitors. And, in many instances, free software has prevailed. About 70 percent of Web server software, in particular for critical e-commerce sites, runs on the Apache Web server--free software.~{ Netcraft, April 2004 Web Server Survey, http://news.netcraft.com/archives/web_ server_survey.html. }~ More than half of all back-office e-mail functions are run by one free software program or another. Google, Amazon, and CNN.com, for example, run their Web servers on the GNU/Linux operating system. They do this, presumably, because they believe this peerproduced operating system is more reliable than the alternatives, not because the system is "free." It would be absurd to risk a higher rate of failure in their core business activities in order to save a few hundred thousand dollars on licensing fees. Companies like IBM and Hewlett Packard, consumer electronics manufacturers, as well as military and other mission-critical government agencies around the world have begun to adopt business and service strategies that rely and extend free software. They do this because it allows them to build better equipment, sell better services, or better fulfill their public role, even though they do not control the software development process and cannot claim proprietary rights of exclusion in the products of their contributions.
+Free software has played a critical role in the recognition of peer production, because software is a functional good with measurable qualities. It can be more or less authoritatively tested against its market-based competitors. 
And, in many instances, free software has prevailed. About 70 percent of Web server software, in particular for critical e-commerce sites, runs on the Apache Web server--free software.~{ Netcraft, April 2004 Web Server Survey, http://news.netcraft.com/archives/web_server_survey.html. }~ More than half of all back-office e-mail functions are run by one free software program or another. Google, Amazon, and CNN.com, for example, run their Web servers on the GNU/Linux operating system. They do this, presumably, because they believe this peerproduced operating system is more reliable than the alternatives, not because the system is "free." It would be absurd to risk a higher rate of failure in their core business activities in order to save a few hundred thousand dollars on licensing fees. Companies like IBM and Hewlett Packard, consumer electronics manufacturers, as well as military and other mission-critical government agencies around the world have begun to adopt business and service strategies that rely and extend free software. They do this because it allows them to build better equipment, sell better services, or better fulfill their public role, even though they do not control the software development process and cannot claim proprietary rights of exclusion in the products of their contributions. ={ GNU/Linux operating system +3 } The story of free software begins in 1984, when Richard Stallman started working on a project of building a nonproprietary operating system he called GNU (GNU's Not Unix). Stallman, then at the Massachusetts Institute of Technology (MIT), operated from political conviction. He wanted a world in which software enabled people to use information freely, where no one would have to ask permission to change the software they use to fit their needs or to share it with a friend for whom it would be helpful. These freedoms to share and to make your own software were fundamentally incompatible with a model of production that relies on property rights and markets, he thought, because in order for there to be a market in uses of ,{[pg 65]}, software, owners must be able to make the software unavailable to people who need it. These people would then pay the provider in exchange for access to the software or modification they need. If anyone can make software or share software they possess with friends, it becomes very difficult to write software on a business model that relies on excluding people from software they need unless they pay. As a practical matter, Stallman started writing software himself, and wrote a good bit of it. More fundamentally, he adopted a legal technique that started a snowball rolling. He could not write a whole operating system by himself. Instead, he released pieces of his code under a license that allowed anyone to copy, distribute, and modify the software in whatever way they pleased. He required only that, if the person who modified the software then distributed it to others, he or she do so under the exact same conditions that he had distributed his software. In this way, he invited all other programmers to collaborate with him on this development program, if they wanted to, on the condition that they be as generous with making their contributions available to others as he had been with his. Because he retained the copyright to the software he distributed, he could write this condition into the license that he attached to the software. 
This meant that anyone using or distributing the software as is, without modifying it, would not violate Stallman's license. They could also modify the software for their own use, and this would not violate the license. However, if they chose to distribute the modified software, they would violate Stallman's copyright unless they included a license identical to his with the software they distributed. This license became the GNU General Public License, or GPL. The legal jujitsu Stallman used--asserting his own copyright claims, but only to force all downstream users who wanted to rely on his contributions to make their own contributions available to everyone else--came to be known as "copyleft," an ironic twist on copyright. This legal artifice allowed anyone to contribute to the GNU project without worrying that one day they would wake up and find that someone had locked them out of the system they had helped to build. @@ -915,7 +915,7 @@ Free software is, without a doubt, the most visible instance of peer production 3~ Uttering Content -NASA Clickworkers was "an experiment to see if public volunteers, each working for a few minutes here and there can do some routine science analysis that would normally be done by a scientist or graduate student working for months on end." Users could mark craters on maps of Mars, classify craters that have already been marked, or search the Mars landscape for "honeycomb" terrain. The project was "a pilot study with limited funding, run part-time by one software engineer, with occasional input from two scientists." In its first six months of operation, more than 85,000 users visited the site, with many contributing to the effort, making more than 1.9 million entries (including redundant entries of the same craters, used to average out errors). An analysis of the quality of markings showed "that the automaticallycomputed consensus of a large number of clickworkers is virtually indistinguishable from the inputs of a geologist with years of experience in identifying Mars craters."~{ Clickworkers Results: Crater Marking Activity, July 3, 2001, http://clickworkers.arc .nasa.gov/documents/crater-marking.pdf. }~ The tasks performed by clickworkers (like marking craters) were discrete, each easily performed in a matter of minutes. As a result, users could choose to work for a few minutes doing a single iteration or for hours by doing many. An early study of the project suggested that some clickworkers indeed worked on the project for weeks, but that 37 percent of the work was done by one-time contributors.~{ /{B. Kanefsky, N. G. Barlow, and V. C. Gulick}/, Can Distributed Volunteers Accomplish Massive Data Analysis Tasks? http://www.clickworkers.arc.nasa.gov/documents /abstract.pdf. }~ +NASA Clickworkers was "an experiment to see if public volunteers, each working for a few minutes here and there can do some routine science analysis that would normally be done by a scientist or graduate student working for months on end." Users could mark craters on maps of Mars, classify craters that have already been marked, or search the Mars landscape for "honeycomb" terrain. The project was "a pilot study with limited funding, run part-time by one software engineer, with occasional input from two scientists." In its first six months of operation, more than 85,000 users visited the site, with many contributing to the effort, making more than 1.9 million entries (including redundant entries of the same craters, used to average out errors). 
An analysis of the quality of markings showed "that the automaticallycomputed consensus of a large number of clickworkers is virtually indistinguishable from the inputs of a geologist with years of experience in identifying Mars craters."~{ Clickworkers Results: Crater Marking Activity, July 3, 2001, http://clickworkers.arc.nasa.gov/documents/crater-marking.pdf. }~ The tasks performed by clickworkers (like marking craters) were discrete, each easily performed in a matter of minutes. As a result, users could choose to work for a few minutes doing a single iteration or for hours by doing many. An early study of the project suggested that some clickworkers indeed worked on the project for weeks, but that 37 percent of the work was done by one-time contributors.~{ /{B. Kanefsky, N. G. Barlow, and V. C. Gulick}/, Can Distributed Volunteers Accomplish Massive Data Analysis Tasks? http://www.clickworkers.arc.nasa.gov/documents/abstract.pdf. }~ ={ clickworkers project +2 ; information production inputs : NASA Clickworkers project +2 ; @@ -1975,7 +1975,7 @@ In the context of information, knowledge, and culture, because of the nonrivalry The structure of our information environment is constitutive of our autonomy, not only functionally significant to it. While the capacity to act free of constraints is most immediately and clearly changed by the networked information economy, information plays an even more foundational role in our very capacity to make and pursue life plans that can properly be called our own. A fundamental requirement of self-direction is the capacity to perceive the state of the world, to conceive of available options for action, to connect actions to consequences, to evaluate alternative outcomes, and to ,{[pg 147]}, decide upon and pursue an action accordingly. Without these, no action, even if mechanically self-directed in the sense that my brain consciously directs my body to act, can be understood as autonomous in any normatively interesting sense. All of the components of decision making prior to action, and those actions that are themselves communicative moves or require communication as a precondition to efficacy, are constituted by the information and communications environment we, as agents, occupy. Conditions that cause failures at any of these junctures, which place bottlenecks, failures of communication, or provide opportunities for manipulation by a gatekeeper in the information environment, create threats to the autonomy of individuals in that environment. The shape of the information environment, and the distribution of power within it to control information flows to and from individuals, are, as we have seen, the contingent product of a combination of technology, economic behavior, social patterns, and institutional structure or law. -In 1999, Cisco Systems issued a technical white paper, which described a new router that the company planned to sell to cable broadband providers. In describing advantages that these new "policy routers" offer cable providers, the paper explained that if the provider's users want to subscribe to a service that "pushes" information to their computer: "You could restrict the incoming push broadcasts as well as subscribers' outgoing access to the push site to discourage its use. 
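The crater-marking procedure described in this hunk, in which many redundant volunteer entries for the same crater are averaged into a consensus that matches an expert's markings, can be illustrated with a small sketch. This is not NASA's actual analysis pipeline; the grid-cell grouping, the coordinate layout, and the sample numbers below are assumptions made only to show how redundancy averages out individual error.

    # Illustrative sketch (not NASA's algorithm): merge redundant volunteer
    # markings of the same crater into a single consensus position and size.
    from collections import defaultdict

    def consensus_markings(markings, cell=10.0):
        """markings: (x, y, radius) tuples from many volunteers.
        Entries are binned into coarse grid cells (assumed to hold at most one
        crater) and each cell's entries are averaged into one consensus crater."""
        bins = defaultdict(list)
        for x, y, r in markings:
            bins[(round(x / cell), round(y / cell))].append((x, y, r))
        consensus = []
        for entries in bins.values():
            n = len(entries)
            consensus.append((sum(e[0] for e in entries) / n,   # mean x
                              sum(e[1] for e in entries) / n,   # mean y
                              sum(e[2] for e in entries) / n))  # mean radius
        return consensus

    # Three volunteers mark roughly the same crater; the consensus smooths the noise.
    print(consensus_markings([(101.0, 52.0, 5.1), (99.5, 50.5, 4.8), (100.2, 51.0, 5.0)]))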
At the same time, you could promote your own or a partner's services with full speed features to encourage adoption of your services."~{ White Paper, "Controlling Your Network, A Must for Cable Operators" (1999), http:// www.cptech.org/ecom/openaccess/cisco1.html. }~ +In 1999, Cisco Systems issued a technical white paper, which described a new router that the company planned to sell to cable broadband providers. In describing advantages that these new "policy routers" offer cable providers, the paper explained that if the provider's users want to subscribe to a service that "pushes" information to their computer: "You could restrict the incoming push broadcasts as well as subscribers' outgoing access to the push site to discourage its use. At the same time, you could promote your own or a partner's services with full speed features to encourage adoption of your services."~{ White Paper, "Controlling Your Network, A Must for Cable Operators" (1999), http://www.cptech.org/ecom/openaccess/cisco1.html. }~ ={ access : systematically blocked by policy routers +3 ; blocked access : @@ -2667,7 +2667,7 @@ The structure of mass media as a mode of communications imposes a certain set of power of mass media owners +6 } -The Sinclair Broadcast Group is one of the largest owners of television broadcast stations in the United States. The group's 2003 Annual Report proudly states in its title, "Our Company. Your Message. 26 Million Households"; that is, roughly one quarter of U.S. households. Sinclair owns and operates or provides programming and sales to sixty-two stations in the United States, including multiple local affiliates of NBC, ABC, CBS, and ,{[pg 200]}, Fox. In April 2004, ABC News's program Nightline dedicated a special program to reading the names of American service personnel who had been killed in the Iraq War. The management of Sinclair decided that its seven ABC affiliates would not air the program, defending its decision because the program "appears to be motivated by a political agenda designed to undermine the efforts of the United States in Iraq."~{ "Names of U.S. Dead Read on Nightline," Associated Press Report, May 1, 2004, http://www.msnbc.msn.com/id/4864247/. }~ At the time, the rising number of American casualties in Iraq was already a major factor in the 2004 presidential election campaign, and both ABC's decision to air the program, and Sinclair's decision to refuse to carry it could be seen as interventions by the media in setting the political agenda and contributing to the public debate. It is difficult to gauge the politics of a commercial organization, but one rough proxy is political donations. In the case of Sinclair, 95 percent of the donations made by individuals associated with the company during the 2004 election cycle went to Republicans, while only 5 percent went to Democrats.~{ The numbers given here are taken from The Center for Responsive Politics, http:// www.opensecrets.org/, and are based on information released by the Federal Elections Commission. }~ Contributions of Disney, on the other hand, the owner of the ABC network, split about seventy-thirty in favor of contribution to Democrats. It is difficult to parse the extent to which political leanings of this sort are personal to the executives and professional employees who make decisions about programming, and to what extent these are more organizationally self-interested, depending on the respective positions of the political parties on the conditions of the industry's business. 
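As a rough sketch of the capability the white paper describes, a "policy router" amounts to a lookup table that throttles flows to a disfavored push service while leaving a partner's service at full speed. The host names, rates, and table layout below are invented for illustration and are not Cisco's actual configuration language.

    # Hypothetical policy table: throttle the competing "push" service,
    # leave the operator's partner unthrottled. Names and rates are invented.
    POLICY = {
        "push.competitor.example": 64_000,   # bits/sec ceiling, to discourage use
        "push.partner.example": None,        # None = no ceiling, full speed
    }
    DEFAULT_RATE = 1_000_000                 # everyone else gets the standard rate

    def allowed_rate(destination_host):
        """Return the bandwidth ceiling applied to a flow, or None if unthrottled."""
        return POLICY.get(destination_host, DEFAULT_RATE)

    for host in ("push.competitor.example", "push.partner.example", "news.example"):
        print(host, "->", allowed_rate(host))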
In some cases, it is quite obvious that the motives are political. When one looks, for example, at contributions by Disney's film division, they are distributed 100 percent in favor of Democrats. This mostly seems to reflect the large contributions of the Weinstein brothers, who run the semi-independent studio Miramax, which also distributed Michael Moore's politically explosive criticism of the Bush administration, Fahrenheit 9/11, in 2004. Sinclair's contributions were aligned with, though more skewed than, those of the National Association of Broadcasters political action committee, which were distributed 61 percent to 39 percent in favor of Republicans. Here the possible motivation is that Republicans have espoused a regulatory agenda at the Federal Communications Commission that allows broadcasters greater freedom to consolidate and to operate more as businesses and less as public trustees. +The Sinclair Broadcast Group is one of the largest owners of television broadcast stations in the United States. The group's 2003 Annual Report proudly states in its title, "Our Company. Your Message. 26 Million Households"; that is, roughly one quarter of U.S. households. Sinclair owns and operates or provides programming and sales to sixty-two stations in the United States, including multiple local affiliates of NBC, ABC, CBS, and ,{[pg 200]}, Fox. In April 2004, ABC News's program Nightline dedicated a special program to reading the names of American service personnel who had been killed in the Iraq War. The management of Sinclair decided that its seven ABC affiliates would not air the program, defending its decision because the program "appears to be motivated by a political agenda designed to undermine the efforts of the United States in Iraq."~{ "Names of U.S. Dead Read on Nightline," Associated Press Report, May 1, 2004, http://www.msnbc.msn.com/id/4864247/. }~ At the time, the rising number of American casualties in Iraq was already a major factor in the 2004 presidential election campaign, and both ABC's decision to air the program, and Sinclair's decision to refuse to carry it could be seen as interventions by the media in setting the political agenda and contributing to the public debate. It is difficult to gauge the politics of a commercial organization, but one rough proxy is political donations. In the case of Sinclair, 95 percent of the donations made by individuals associated with the company during the 2004 election cycle went to Republicans, while only 5 percent went to Democrats.~{ The numbers given here are taken from The Center for Responsive Politics, http://www.opensecrets.org/, and are based on information released by the Federal Elections Commission. }~ Contributions of Disney, on the other hand, the owner of the ABC network, split about seventy-thirty in favor of contribution to Democrats. It is difficult to parse the extent to which political leanings of this sort are personal to the executives and professional employees who make decisions about programming, and to what extent these are more organizationally self-interested, depending on the respective positions of the political parties on the conditions of the industry's business. In some cases, it is quite obvious that the motives are political. When one looks, for example, at contributions by Disney's film division, they are distributed 100 percent in favor of Democrats. 
This mostly seems to reflect the large contributions of the Weinstein brothers, who run the semi-independent studio Miramax, which also distributed Michael Moore's politically explosive criticism of the Bush administration, Fahrenheit 9/11, in 2004. Sinclair's contributions were aligned with, though more skewed than, those of the National Association of Broadcasters political action committee, which were distributed 61 percent to 39 percent in favor of Republicans. Here the possible motivation is that Republicans have espoused a regulatory agenda at the Federal Communications Commission that allows broadcasters greater freedom to consolidate and to operate more as businesses and less as public trustees. ={ exercise of programming power +5 ; SBG (Sinclair Broadcast Group) ; Sinclair Broadcast Group (SBG) ; @@ -2981,7 +2981,7 @@ Sinclair, which owns major television stations in a number of what were consider Stolen Honor documentary +5 } -Alongside these standard avenues of response in the traditional public sphere of commercial mass media, their regulators, and established parties, a very different kind of response was brewing on the Net, in the blogosphere. On the morning of October 9, 2004, the Los Angeles Times story was blogged on a number of political blogs--Josh Marshall on talkingpointsmemo. com, Chris Bower on MyDD.com, and Markos Moulitsas on dailyKos.com. By midday that Saturday, October 9, two efforts aimed at organizing opposition to Sinclair were posted in the dailyKos and MyDD. A "boycottSinclair" site was set up by one individual, and was pointed to by these blogs. Chris Bowers on MyDD provided a complete list of Sinclair stations and urged people to call the stations and threaten to picket and boycott. By Sunday, October 10, the dailyKos posted a list of national advertisers with Sinclair, urging readers to call them. On Monday, October 11, MyDD linked to that list, while another blog, theleftcoaster.com, posted a variety of action agenda items, from picketing affiliates of Sinclair to suggesting that readers oppose Sinclair license renewals, providing a link to the FCC site explaining the basic renewal process and listing public-interest organizations to work with. That same day, another individual, Nick Davis, started a Web site, ,{[pg 222]}, BoycottSBG.com, on which he posted the basic idea that a concerted boycott of local advertisers was the way to go, while another site, stopsinclair.org, began pushing for a petition. In the meantime, TalkingPoints published a letter from Reed Hundt, former chairman of the FCC, to Sinclair, and continued finding tidbits about the film and its maker. Later on Monday, TalkingPoints posted a letter from a reader who suggested that stockholders of Sinclair could bring a derivative action. By 5:00 a.m. on the dawn of Tuesday, October 12, however, TalkingPoints began pointing toward Davis's database on BoycottSBG.com. By 10:00 that morning, Marshall posted on TalkingPoints a letter from an anonymous reader, which began by saying: "I've worked in the media business for 30 years and I guarantee you that sales is what these local TV stations are all about. They don't care about license renewal or overwhelming public outrage. They care about sales only, so only local advertisers can affect their decisions." This reader then outlined a plan for how to watch and list all local advertisers, and then write to the sales managers--not general managers--of the local stations and tell them which advertisers you are going to call, and then call those. 
By 1:00 p.m. Marshall posted a story of his own experience with this strategy. He used Davis's database to identify an Ohio affiliate's local advertisers. He tried to call the sales manager of the station, but could not get through. He then called the advertisers. The post is a "how to" instruction manual, including admonitions to remember that the advertisers know nothing of this, the story must be explained, and accusatory tones avoided, and so on. Marshall then began to post letters from readers who explained with whom they had talked--a particular sales manager, for example--and who were then referred to national headquarters. He continued to emphasize that advertisers were the right addressees. By 5:00 p.m. that same Tuesday, Marshall was reporting more readers writing in about experiences, and continued to steer his readers to sites that helped them to identify their local affiliate's sales manager and their advertisers.~{ The various posts are archived and can be read, chronologically, at http:// www.talkingpointsmemo.com/archives/week_2004_10_10.php. }~ +Alongside these standard avenues of response in the traditional public sphere of commercial mass media, their regulators, and established parties, a very different kind of response was brewing on the Net, in the blogosphere. On the morning of October 9, 2004, the Los Angeles Times story was blogged on a number of political blogs--Josh Marshall on talkingpointsmemo. com, Chris Bower on MyDD.com, and Markos Moulitsas on dailyKos.com. By midday that Saturday, October 9, two efforts aimed at organizing opposition to Sinclair were posted in the dailyKos and MyDD. A "boycottSinclair" site was set up by one individual, and was pointed to by these blogs. Chris Bowers on MyDD provided a complete list of Sinclair stations and urged people to call the stations and threaten to picket and boycott. By Sunday, October 10, the dailyKos posted a list of national advertisers with Sinclair, urging readers to call them. On Monday, October 11, MyDD linked to that list, while another blog, theleftcoaster.com, posted a variety of action agenda items, from picketing affiliates of Sinclair to suggesting that readers oppose Sinclair license renewals, providing a link to the FCC site explaining the basic renewal process and listing public-interest organizations to work with. That same day, another individual, Nick Davis, started a Web site, ,{[pg 222]}, BoycottSBG.com, on which he posted the basic idea that a concerted boycott of local advertisers was the way to go, while another site, stopsinclair.org, began pushing for a petition. In the meantime, TalkingPoints published a letter from Reed Hundt, former chairman of the FCC, to Sinclair, and continued finding tidbits about the film and its maker. Later on Monday, TalkingPoints posted a letter from a reader who suggested that stockholders of Sinclair could bring a derivative action. By 5:00 a.m. on the dawn of Tuesday, October 12, however, TalkingPoints began pointing toward Davis's database on BoycottSBG.com. By 10:00 that morning, Marshall posted on TalkingPoints a letter from an anonymous reader, which began by saying: "I've worked in the media business for 30 years and I guarantee you that sales is what these local TV stations are all about. They don't care about license renewal or overwhelming public outrage. They care about sales only, so only local advertisers can affect their decisions." 
This reader then outlined a plan for how to watch and list all local advertisers, and then write to the sales managers--not general managers--of the local stations and tell them which advertisers you are going to call, and then call those. By 1:00 p.m. Marshall posted a story of his own experience with this strategy. He used Davis's database to identify an Ohio affiliate's local advertisers. He tried to call the sales manager of the station, but could not get through. He then called the advertisers. The post is a "how to" instruction manual, including admonitions to remember that the advertisers know nothing of this, the story must be explained, and accusatory tones avoided, and so on. Marshall then began to post letters from readers who explained with whom they had talked--a particular sales manager, for example--and who were then referred to national headquarters. He continued to emphasize that advertisers were the right addressees. By 5:00 p.m. that same Tuesday, Marshall was reporting more readers writing in about experiences, and continued to steer his readers to sites that helped them to identify their local affiliate's sales manager and their advertisers.~{ The various posts are archived and can be read, chronologically, at http://www.talkingpointsmemo.com/archives/week_2004_10_10.php. }~ ={ Bower, Chris ; dailyKos.com site ; Davis, Nick +1 ; @@ -3027,7 +3027,7 @@ Electronic voting machines were first used to a substantial degree in the United In late January 2003, Bev Harris, an activist focused on electronic voting machines, was doing research on Diebold, which has provided more than 75,000 voting machines in the United States and produced many of the machines used in Brazil's purely electronic voting system. Harris had set up a whistle-blower site as part of a Web site she ran at the time, blackboxvoting.com. Apparently working from a tip, Harris found out about an openly available site where Diebold stored more than forty thousand files about how its system works. These included specifications for, and the actual code of, Diebold's machines and vote-tallying system. In early February 2003, Harris published two initial journalistic accounts on an online journal in New Zealand, Scoop.com--whose business model includes providing an unedited platform for commentators who wish to use it as a platform to publish their materials. She also set up a space on her Web site for technically literate users to comment on the files she had retrieved. In early July of that year, she published an analysis of the results of the discussions on her site, which pointed out how access to the Diebold open site could have been used to affect the 2002 election results in Georgia (where there had been a tightly contested Senate race). In an editorial attached to the publication, entitled "Bigger than Watergate," the editors of Scoop claimed that what Harris had found was nothing short of a mechanism for capturing the U.S. elections process. They then inserted a number of lines that go to the very heart of how the networked information economy can use peer production to play the role of watchdog: ={ Harris, Bev } -_1 We can now reveal for the first time the location of a complete online copy of the original data set. As we anticipate attempts to prevent the distribution of this information we encourage supporters of democracy to make copies of these files and to make them available on websites and file sharing networks: http:// users.actrix.co.nz/dolly/. 
As many of the files are zip password protected you may need some assistance in opening them, we have found that the utility available at ,{[pg 228]}, the following URL works well: http://www.lostpassword.com. Finally some of the zip files are partially damaged, but these too can be read by using the utility at: http://www.zip-repair.com/. At this stage in this inquiry we do not believe that we have come even remotely close to investigating all aspects of this data; i.e., there is no reason to believe that the security flaws discovered so far are the only ones. Therefore we expect many more discoveries to be made. We want the assistance of the online computing community in this enterprise and we encourage you to file your findings at the forum HERE [providing link to forum]. +_1 We can now reveal for the first time the location of a complete online copy of the original data set. As we anticipate attempts to prevent the distribution of this information we encourage supporters of democracy to make copies of these files and to make them available on websites and file sharing networks: http://users.actrix.co.nz/dolly/. As many of the files are zip password protected you may need some assistance in opening them, we have found that the utility available at ,{[pg 228]}, the following URL works well: http://www.lostpassword.com. Finally some of the zip files are partially damaged, but these too can be read by using the utility at: http://www.zip-repair.com/. At this stage in this inquiry we do not believe that we have come even remotely close to investigating all aspects of this data; i.e., there is no reason to believe that the security flaws discovered so far are the only ones. Therefore we expect many more discoveries to be made. We want the assistance of the online computing community in this enterprise and we encourage you to file your findings at the forum HERE [providing link to forum]. A number of characteristics of this call to arms would have been simply infeasible in the mass-media environment. They represent a genuinely different mind-set about how news and analysis are produced and how censorship and power are circumvented. First, the ubiquity of storage and communications capacity means that public discourse can rely on "see for yourself" rather than on "trust me." The first move, then, is to make the raw materials available for all to see. Second, the editors anticipated that the company would try to suppress the information. Their response was not to use a counterweight of the economic and public muscle of a big media corporation to protect use of the materials. Instead, it was widespread distribution of information--about where the files could be found, and about where tools to crack the passwords and repair bad files could be found-- matched with a call for action: get these files, copy them, and store them in many places so they cannot be squelched. Third, the editors did not rely on large sums of money flowing from being a big media organization to hire experts and interns to scour the files. Instead, they posed a challenge to whoever was interested--there are more scoops to be found, this is important for democracy, good hunting!! Finally, they offered a platform for integration of the insights on their own forum. This short paragraph outlines a mechanism for radically distributed storage, distribution, analysis, and reporting on the Diebold files. 
@@ -3187,7 +3187,7 @@ The remainder of this chapter is devoted to responding to these critiques, provi concentration of mass-media power +7 } -The first-generation critique of the claims that the Internet democratizes focused heavily on three variants of the information overload or Babel objection. The basic descriptive proposition that animated the Supreme Court in /{Reno v. ACLU}/ was taken as more or less descriptively accurate: Everyone would be equally able to speak on the Internet. However, this basic observation ,{[pg 238]}, was then followed by a descriptive or normative explanation of why this development was a threat to democracy, or at least not much of a boon. The basic problem that is diagnosed by this line of critique is the problem of attention. When everyone can speak, the central point of failure becomes the capacity to be heard--who listens to whom, and how that question is decided. Speaking in a medium that no one will actually hear with any reasonable likelihood may be psychologically satisfying, but it is not a move in a political conversation. Noam's prediction was, therefore, that there would be a reconcentration of attention: money would reemerge in this environment as a major determinant of the capacity to be heard, certainly no less, and perhaps even more so, than it was in the mass-media environment.~{ Eli Noam, "Will the Internet Be Bad for Democracy?" (November 2001), http:// www.citi.columbia.edu/elinoam/articles/int_bad_dem.htm. }~ Sunstein's theory was different. He accepted Nicholas Negroponte's prediction that people would be reading "The Daily Me," that is, that each of us would create highly customized windows on the information environment that would be narrowly tailored to our unique combination of interests. From this assumption about how people would be informed, he spun out two distinct but related critiques. The first was that discourse would be fragmented. With no six o'clock news to tell us what is on the public agenda, there would be no public agenda, just a fragmented multiplicity of private agendas that never coalesce into a platform for political discussion. The second was that, in a fragmented discourse, individuals would cluster into groups of self-reinforcing, self-referential discussion groups. These types of groups, he argued from social scientific evidence, tend to render their participants' views more extreme and less amenable to the conversation across political divides necessary to achieve reasoned democratic decisions. +The first-generation critique of the claims that the Internet democratizes focused heavily on three variants of the information overload or Babel objection. The basic descriptive proposition that animated the Supreme Court in /{Reno v. ACLU}/ was taken as more or less descriptively accurate: Everyone would be equally able to speak on the Internet. However, this basic observation ,{[pg 238]}, was then followed by a descriptive or normative explanation of why this development was a threat to democracy, or at least not much of a boon. The basic problem that is diagnosed by this line of critique is the problem of attention. When everyone can speak, the central point of failure becomes the capacity to be heard--who listens to whom, and how that question is decided. Speaking in a medium that no one will actually hear with any reasonable likelihood may be psychologically satisfying, but it is not a move in a political conversation. 
Noam's prediction was, therefore, that there would be a reconcentration of attention: money would reemerge in this environment as a major determinant of the capacity to be heard, certainly no less, and perhaps even more so, than it was in the mass-media environment.~{ Eli Noam, "Will the Internet Be Bad for Democracy?" (November 2001), http://www.citi.columbia.edu/elinoam/articles/int_bad_dem.htm. }~ Sunstein's theory was different. He accepted Nicholas Negroponte's prediction that people would be reading "The Daily Me," that is, that each of us would create highly customized windows on the information environment that would be narrowly tailored to our unique combination of interests. From this assumption about how people would be informed, he spun out two distinct but related critiques. The first was that discourse would be fragmented. With no six o'clock news to tell us what is on the public agenda, there would be no public agenda, just a fragmented multiplicity of private agendas that never coalesce into a platform for political discussion. The second was that, in a fragmented discourse, individuals would cluster into groups of self-reinforcing, self-referential discussion groups. These types of groups, he argued from social scientific evidence, tend to render their participants' views more extreme and less amenable to the conversation across political divides necessary to achieve reasoned democratic decisions. ={ Negroponte, Nicholas ; Noam, Eli +1 ; attention fragmentation +1 ; @@ -3209,7 +3209,7 @@ Therefore, we now turn to the question: Is the Internet in fact too chaotic or t There are two very distinct types of claims about Internet centralization. The first, and earlier, has the familiar ring of media concentration. It is the simpler of the two, and is tractable to policy. The second, concerned with the emergent patterns of attention and linking on an otherwise open network, is more difficult to explain and intractable to policy. I suggest, however, that it actually stabilizes and structures democratic discourse, providing a better answer to the fears of information overload than either the mass media or any efforts to regulate attention to matters of public concern. -The media-concentration type argument has been central to arguments about the necessity of open access to broadband platforms, made most forcefully over the past few years by Lawrence Lessig. The argument is that the basic instrumentalities of Internet communications are subject to concentrated markets. This market concentration in basic access becomes a potential point of concentration of the power to influence the discourse made possible by access. Eli Noam's recent work provides the most comprehensive study currently available of the degree of market concentration in media industries. It offers a bleak picture.~{ Eli Noam, "The Internet Still Wide, Open, and Competitive?" Paper presented at The Telecommunications Policy Research Conference, September 2003, http:// www.tprc.org/papers/2003/200/noam_TPRC2003.pdf. }~ Noam looked at markets in basic infrastructure components of the Internet: Internet backbones, Internet service providers (ISPs), broadband providers, portals, search engines, browser software, media player software, and Internet telephony. Aggregating across all these sectors, he found that the Internet sector defined in terms of these components was, throughout most of the period from 1984 to 2002, concentrated according to traditional antitrust measures. 
Between 1992 and 1998, however, this sector was "highly concentrated" by the Justice Department's measure of market concentration for antitrust purposes. Moreover, the power ,{[pg 240]}, of the top ten firms in each of these markets, and in aggregate for firms that had large market segments in a number of these markets, shows that an ever-smaller number of firms were capturing about 25 percent of the revenues in the Internet sector. A cruder, but consistent finding is the FCC's, showing that 96 percent of homes and small offices get their broadband access either from their incumbent cable operator or their incumbent local telephone carrier.~{ Federal Communications Commission, Report on High Speed Services, December 2003. }~ It is important to recognize that these findings are suggesting potential points of failure for the networked information economy. They are not a critique of the democratic potential of the networked public sphere, but rather show us how we could fail to develop it by following the wrong policies. +The media-concentration type argument has been central to arguments about the necessity of open access to broadband platforms, made most forcefully over the past few years by Lawrence Lessig. The argument is that the basic instrumentalities of Internet communications are subject to concentrated markets. This market concentration in basic access becomes a potential point of concentration of the power to influence the discourse made possible by access. Eli Noam's recent work provides the most comprehensive study currently available of the degree of market concentration in media industries. It offers a bleak picture.~{ Eli Noam, "The Internet Still Wide, Open, and Competitive?" Paper presented at The Telecommunications Policy Research Conference, September 2003, http://www.tprc.org/papers/2003/200/noam_TPRC2003.pdf. }~ Noam looked at markets in basic infrastructure components of the Internet: Internet backbones, Internet service providers (ISPs), broadband providers, portals, search engines, browser software, media player software, and Internet telephony. Aggregating across all these sectors, he found that the Internet sector defined in terms of these components was, throughout most of the period from 1984 to 2002, concentrated according to traditional antitrust measures. Between 1992 and 1998, however, this sector was "highly concentrated" by the Justice Department's measure of market concentration for antitrust purposes. Moreover, the power ,{[pg 240]}, of the top ten firms in each of these markets, and in aggregate for firms that had large market segments in a number of these markets, shows that an ever-smaller number of firms were capturing about 25 percent of the revenues in the Internet sector. A cruder, but consistent finding is the FCC's, showing that 96 percent of homes and small offices get their broadband access either from their incumbent cable operator or their incumbent local telephone carrier.~{ Federal Communications Commission, Report on High Speed Services, December 2003. }~ It is important to recognize that these findings are suggesting potential points of failure for the networked information economy. They are not a critique of the democratic potential of the networked public sphere, but rather show us how we could fail to develop it by following the wrong policies. 
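The "Justice Department's measure of market concentration" invoked here is, presumably, the Herfindahl-Hirschman Index (HHI): the sum of the squared percentage market shares of every firm in the market. Under the merger guidelines in force during the period Noam studied, an HHI above roughly 1,800 counted as "highly concentrated." A minimal sketch follows; the market shares are made-up numbers, not Noam's data.

    # Herfindahl-Hirschman Index: sum of squared percentage market shares.
    # The example shares are invented for illustration, not Noam's figures.
    def hhi(shares_percent):
        return sum(s * s for s in shares_percent)

    fragmented   = [10] * 10         # ten firms at 10% each  -> HHI = 1000
    concentrated = [45, 40, 10, 5]   # -> 45^2 + 40^2 + 10^2 + 5^2 = 3750
    print(hhi(fragmented), hhi(concentrated))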
={ Lessig, Lawrence (Larry) ; access : broadband services, concentration of +1 ; @@ -3301,10 +3301,10 @@ Within two months of the publication of Barabasi and Albert's article, Adamic an topical clustering +7 } -First, links are not smoothly distributed throughout the network. Sites cluster into densely linked "regions" or communities of interest. Computer scientists have looked at clustering from the perspective of what topical or other correlated characteristics describe these relatively high-density interconnected regions of nodes. What they found was perhaps entirely predictable from an intuitive perspective of the network users, but important as we try to understand the structure of information flow on the Web. Web sites cluster into topical and social/organizational clusters. Early work done in the IBM Almaden Research Center on how link structure could be used as a search technique showed that by mapping densely interlinked sites without looking at content, one could find communities of interest that identify very fine-grained topical connections, such as Australian fire brigades or Turkish students in the United States.~{ Ravi Kumar et al., "Trawling the Web for Emerging Cyber-Communities," WWW8/ Computer Networks 31, nos. 11-16 (1999): 1481-1493. }~ A later study out of the NEC Research Institute more formally defined the interlinking that would identify a "community" as one in which the nodes were more densely connected to each other than they were to nodes outside the cluster by some amount. The study also showed that topically connected sites meet this definition. For instance, sites related to molecular biology clustered with each other--in the sense of being more interlinked with each other than with off-topic sites--as did sites about physics and black holes.~{ Gary W. Flake et al., "Self-Organization and Identification of Web Communities," IEEE Computer 35, no. 3 (2002): 66-71. Another paper that showed significant internal citations within topics was Soumen Chakrabati et al., "The Structure of Broad Topics on the Web," WWW2002, Honolulu, HI, May 7-11, 2002. }~ Lada Adamic and Natalie Glance recently showed that liberal political blogs and conservative political blogs densely interlink with each other, mostly pointing within each political leaning but with about 15 percent of links posted by the most visible sites also linking across the political divide.~{ Lada Adamic and Natalie Glance, "The Political Blogosphere and the 2004 Election: Divided They Blog," March 1, 2005, http://www.blogpulse.com/papers/2005/ AdamicGlanceBlogWWW.pdf. }~ Physicists analyze clustering as the property of transitivity in networks: the increased probability that if node A is connected to node B, and node B is connected to node C, that node A also will be connected to node C, forming a triangle. Newman has shown that ,{[pg 249]}, the clustering coefficient of a network that exhibits power law distribution of connections or degrees--that is, its tendency to cluster--is related to the exponent of the distribution. At low exponents, below 2.333, the clustering coefficient becomes high. This explains analytically the empirically observed high level of clustering on the Web, whose exponent for inlinks has been empirically shown to be 2.1.~{ M.E.J. Newman, "The Structure and Function of Complex Networks," Society for Industrial and Applied Mathematics Review 45, section 4.2.2 (2003): 167-256; S. N. Dorogovstev and J.F.F. 
Mendes, Evolution of Networks: From Biological Nets to the Internet and WWW (Oxford: Oxford University Press, 2003). }~ +First, links are not smoothly distributed throughout the network. Sites cluster into densely linked "regions" or communities of interest. Computer scientists have looked at clustering from the perspective of what topical or other correlated characteristics describe these relatively high-density interconnected regions of nodes. What they found was perhaps entirely predictable from an intuitive perspective of the network users, but important as we try to understand the structure of information flow on the Web. Web sites cluster into topical and social/organizational clusters. Early work done in the IBM Almaden Research Center on how link structure could be used as a search technique showed that by mapping densely interlinked sites without looking at content, one could find communities of interest that identify very fine-grained topical connections, such as Australian fire brigades or Turkish students in the United States.~{ Ravi Kumar et al., "Trawling the Web for Emerging Cyber-Communities," WWW8/ Computer Networks 31, nos. 11-16 (1999): 1481-1493. }~ A later study out of the NEC Research Institute more formally defined the interlinking that would identify a "community" as one in which the nodes were more densely connected to each other than they were to nodes outside the cluster by some amount. The study also showed that topically connected sites meet this definition. For instance, sites related to molecular biology clustered with each other--in the sense of being more interlinked with each other than with off-topic sites--as did sites about physics and black holes.~{ Gary W. Flake et al., "Self-Organization and Identification of Web Communities," IEEE Computer 35, no. 3 (2002): 66-71. Another paper that showed significant internal citations within topics was Soumen Chakrabati et al., "The Structure of Broad Topics on the Web," WWW2002, Honolulu, HI, May 7-11, 2002. }~ Lada Adamic and Natalie Glance recently showed that liberal political blogs and conservative political blogs densely interlink with each other, mostly pointing within each political leaning but with about 15 percent of links posted by the most visible sites also linking across the political divide.~{ Lada Adamic and Natalie Glance, "The Political Blogosphere and the 2004 Election: Divided They Blog," March 1, 2005, http://www.blogpulse.com/papers/2005/AdamicGlanceBlogWWW.pdf. }~ Physicists analyze clustering as the property of transitivity in networks: the increased probability that if node A is connected to node B, and node B is connected to node C, that node A also will be connected to node C, forming a triangle. Newman has shown that ,{[pg 249]}, the clustering coefficient of a network that exhibits power law distribution of connections or degrees--that is, its tendency to cluster--is related to the exponent of the distribution. At low exponents, below 2.333, the clustering coefficient becomes high. This explains analytically the empirically observed high level of clustering on the Web, whose exponent for inlinks has been empirically shown to be 2.1.~{ M.E.J. Newman, "The Structure and Function of Complex Networks," Society for Industrial and Applied Mathematics Review 45, section 4.2.2 (2003): 167-256; S. N. Dorogovstev and J.F.F. Mendes, Evolution of Networks: From Biological Nets to the Internet and WWW (Oxford: Oxford University Press, 2003). 
}~ ={ Glance, Natalie } -Second, at a macrolevel and in smaller subclusters, the power law distribution does not resolve into everyone being connected in a mass-media model relationship to a small number of major "backbone" sites. As early as 1999, Broder and others showed that a very large number of sites occupy what has been called a giant, strongly connected core.~{ This structure was first described by Andrei Broder et al., "Graph Structure of the Web," paper presented at www9 conference (1999), http://www.almaden.ibm.com/ webfountain/resources/GraphStructureintheWeb.pdf. It has since been further studied, refined, and substantiated in various studies. }~ That is, nodes within this core are heavily linked and interlinked, with multiple redundant paths among them. Empirically, as of 2001, this structure was comprised of about 28 percent of nodes. At the same time, about 22 percent of nodes had links into the core, but were not linked to from it--these may have been new sites, or relatively lower-interest sites. The same proportion of sites was linked-to from the core, but did not link back to it--these might have been ultimate depositories of documents, or internal organizational sites. Finally, roughly the same proportion of sites occupied "tendrils" or "tubes" that cannot reach, or be reached from, the core. Tendrils can be reached from the group of sites that link into the strongly connected core or can reach into the group that can be connected to from the core. Tubes connect the inlinking sites to the outlinked sites without going through the core. About 10 percent of sites are entirely isolated. This structure has been called a "bow tie"--with a large core and equally sized in- and outflows to and from that core (see figure 7.5). +Second, at a macrolevel and in smaller subclusters, the power law distribution does not resolve into everyone being connected in a mass-media model relationship to a small number of major "backbone" sites. As early as 1999, Broder and others showed that a very large number of sites occupy what has been called a giant, strongly connected core.~{ This structure was first described by Andrei Broder et al., "Graph Structure of the Web," paper presented at www9 conference (1999), http://www.almaden.ibm.com/webfountain/resources/GraphStructureintheWeb.pdf. It has since been further studied, refined, and substantiated in various studies. }~ That is, nodes within this core are heavily linked and interlinked, with multiple redundant paths among them. Empirically, as of 2001, this structure was comprised of about 28 percent of nodes. At the same time, about 22 percent of nodes had links into the core, but were not linked to from it--these may have been new sites, or relatively lower-interest sites. The same proportion of sites was linked-to from the core, but did not link back to it--these might have been ultimate depositories of documents, or internal organizational sites. Finally, roughly the same proportion of sites occupied "tendrils" or "tubes" that cannot reach, or be reached from, the core. Tendrils can be reached from the group of sites that link into the strongly connected core or can reach into the group that can be connected to from the core. Tubes connect the inlinking sites to the outlinked sites without going through the core. About 10 percent of sites are entirely isolated. This structure has been called a "bow tie"--with a large core and equally sized in- and outflows to and from that core (see figure 7.5). 
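The "bow tie" picture described above, a strongly connected core plus an IN set that can reach the core, an OUT set reachable from it, and leftover tendrils, tubes, and isolated pieces, can be computed for any directed link graph. A minimal sketch using the networkx library on a toy graph follows; the edges are invented, whereas the studies cited ran this kind of decomposition over actual Web crawls.

    # Bow-tie decomposition of a toy directed link graph (edges are invented).
    import networkx as nx

    g = nx.DiGraph([
        ("a", "b"), ("b", "c"), ("c", "a"),   # a, b, c form the strongly connected core
        ("in1", "a"),                         # IN: reaches the core, not reached from it
        ("c", "out1"),                        # OUT: reached from the core, no path back
        ("lone", "lone2"),                    # disconnected piece
    ])

    core = max(nx.strongly_connected_components(g), key=len)
    probe = next(iter(core))
    in_set  = {n for n in g if n not in core and nx.has_path(g, n, probe)}
    out_set = {n for n in g if n not in core and nx.has_path(g, probe, n)}
    other   = set(g) - core - in_set - out_set   # tendrils, tubes, isolated sites

    print("core:", core, "IN:", in_set, "OUT:", out_set, "other:", other)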
={ Broder, Andrei ; backbone Web sites +2 ; bow tie structure of Web +2 ; @@ -3371,7 +3371,7 @@ The fourth and last piece of mapping the network as a platform for the public sp small-worlds effect +3 } -What is true of the Web as a whole turns out to be true of the blogosphere as well, and even of the specifically political blogosphere. Early 2003 saw increasing conversations in the blogosphere about the emergence of an "Alist," a number of highly visible blogs that were beginning to seem more like mass media than like blogs. In two blog-based studies, Clay Shirky and then Jason Kottke published widely read explanations of how the blogosphere ,{[pg 253]}, was simply exhibiting the power law characteristics common on the Web.~{ Clay Shirky, "Power Law, Weblogs, and Inequality" (February 8, 2003), http:// www.shirky.com/writings/powerlaw_weblog.htm; Jason Kottke, "Weblogs and Power Laws" (February 9, 2003), http://www.kottke.org/03/02/weblogs-and-power-laws. }~ The emergence in 2003 of discussions of this sort in the blogosphere is, it turns out, hardly surprising. In a time-sensitive study also published in 2003, Kumar and others provided an analysis of the network topology of the blogosphere. They found that it was very similar to that of the Web as a whole--both at the macro- and microlevels. Interestingly, they found that the strongly connected core only developed after a certain threshold, in terms of total number of nodes, had been reached, and that it began to develop extensively only in 2001, reached about 20 percent of all blogs in 2002, and continued to grow rapidly. They also showed that what they called the "community" structure--the degree of clustering or mutual pointing within groups--was high, an order of magnitude more than a random graph with a similar power law exponent would have generated. Moreover, the degree to which a cluster is active or inactive, highly connected or not, changes over time. In addition to time-insensitive superstars, there are also flare-ups of connectivity for sites depending on the activity and relevance of their community of interest. This latter observation is consistent with what we saw happen for BoycottSBG.com. Kumar and his collaborators explained these phenomena by the not-too-surprising claim that bloggers link to each other based on topicality--that is, their judgment of the quality and relevance of the materials--not only on the basis of how well connected they are already.~{ Ravi Kumar et al., "On the Bursty Evolution of Blogspace," Proceedings of WWW2003, May 20-24, 2003, http://www2003.org/cdrom/papers/refereed/p477/ p477-kumar/p477-kumar.htm. }~ +What is true of the Web as a whole turns out to be true of the blogosphere as well, and even of the specifically political blogosphere. Early 2003 saw increasing conversations in the blogosphere about the emergence of an "Alist," a number of highly visible blogs that were beginning to seem more like mass media than like blogs. In two blog-based studies, Clay Shirky and then Jason Kottke published widely read explanations of how the blogosphere ,{[pg 253]}, was simply exhibiting the power law characteristics common on the Web.~{ Clay Shirky, "Power Law, Weblogs, and Inequality" (February 8, 2003), http://www.shirky.com/writings/powerlaw_weblog.htm; Jason Kottke, "Weblogs and Power Laws" (February 9, 2003), http://www.kottke.org/03/02/weblogs-and-power-laws. }~ The emergence in 2003 of discussions of this sort in the blogosphere is, it turns out, hardly surprising. 
In a time-sensitive study also published in 2003, Kumar and others provided an analysis of the network topology of the blogosphere. They found that it was very similar to that of the Web as a whole--both at the macro- and microlevels. Interestingly, they found that the strongly connected core only developed after a certain threshold, in terms of total number of nodes, had been reached, and that it began to develop extensively only in 2001, reached about 20 percent of all blogs in 2002, and continued to grow rapidly. They also showed that what they called the "community" structure--the degree of clustering or mutual pointing within groups--was high, an order of magnitude more than a random graph with a similar power law exponent would have generated. Moreover, the degree to which a cluster is active or inactive, highly connected or not, changes over time. In addition to time-insensitive superstars, there are also flare-ups of connectivity for sites depending on the activity and relevance of their community of interest. This latter observation is consistent with what we saw happen for BoycottSBG.com. Kumar and his collaborators explained these phenomena by the not-too-surprising claim that bloggers link to each other based on topicality--that is, their judgment of the quality and relevance of the materials--not only on the basis of how well connected they are already.~{ Ravi Kumar et al., "On the Bursty Evolution of Blogspace," Proceedings of WWW2003, May 20-24, 2003, http://www2003.org/cdrom/papers/refereed/p477/p477-kumar/p477-kumar.htm. }~ ={ Kottke, Jason ; Kumar, Ravi ; Shirky, Clay ; @@ -3502,7 +3502,7 @@ This diagnosis of the potential of the networked public sphere underrepresents i voting, electronic } -The Diebold case was not an aberration, but merely a particularly rich case study of a much broader phenomenon, most extensively described in Dan Gilmore's We the Media. The basic production modalities that typify the networked information economy are now being applied to the problem of producing politically relevant information. In 2005, the most visible example of application of the networked information economy--both in its peer-production dimension and more generally by combining a wide range of nonproprietary production models--to the watchdog function of the media is the political blogosphere. The founding myth of the blogosphere's ,{[pg 263]}, journalistic potency was built on the back of then Senate majority leader Trent Lott. In 2002, Lott had the indiscretion of saying, at the onehundredth-birthday party of Republican Senator Strom Thurmond, that if Thurmond had won his Dixiecrat presidential campaign, "we wouldn't have had all these problems over all these years." Thurmond had run on a segregationist campaign, splitting from the Democratic Party in opposition to Harry Truman's early civil rights efforts, as the post?World War II winds began blowing toward the eventual demise of formal, legal racial segregation in the United States. Few positions are taken to be more self-evident in the national public morality of early twenty-first-century America than that formal, state-imposed, racial discrimination is an abomination. And yet, the first few days after the birthday party at which Lott made his statement saw almost no reporting on the statement. ABC News and the Washington Post made small mention of it, but most media outlets reported merely on a congenial salute and farewell celebration of the Senate's oldest and longestserving member. 
Things were different in the blogosphere. At first liberal blogs, and within three days conservative bloggers as well, began to excavate past racist statements by Lott, and to beat the drums calling for his censure or removal as Senate leader. Within about a week, the story surfaced in the mainstream media, became a major embarrassment, and led to Lott's resignation as Senate majority leader about a week later. A careful case study of this event leaves it unclear why the mainstream media initially ignored the story.~{ Harvard Kennedy School of Government, Case Program: " `Big Media' Meets `Bloggers': Coverage of Trent Lott's Remarks at Strom Thurmond's Birthday Party," http:// www.ksg.harvard.edu/presspol/Research_Publications/Case_Studies/1731_0.pdf.}~ It may have been that the largely social event drew the wrong sort of reporters. It may have been that reporters and editors who depend on major Washington, D.C., players were reluctant to challenge Lott. Perhaps they thought it rude to emphasize this indiscretion, or too upsetting to us all to think of just how close to the surface thoughts that we deem abominable can lurk. There is little disagreement that the day after the party, the story was picked up and discussed by Marshall on TalkingPoints, as well as by another liberal blogger, Atrios, who apparently got it from a post on Slate's "Chatterbox," which picked it up from ABC News's own The Note, a news summary made available on the television network's Web site. While the mass media largely ignored the story, and the two or three mainstream reporters who tried to write about it were getting little traction, bloggers were collecting more stories about prior instances where Lott's actions tended to suggest support for racist causes. Marshall, for example, found that Lott had filed a 1981 amicus curiae brief in support of Bob Jones University's effort to retain its tax-exempt status. The U.S. government had rescinded ,{[pg 264]}, that status because the university practiced racial discrimination--such as prohibiting interracial dating. By Monday of the following week, four days after the remarks, conservative bloggers like Glenn Reynolds on Instapundit, Andrew Sullivan, and others were calling for Lott's resignation. It is possible that, absent the blogosphere, the story would still have flared up. There were two or so mainstream reporters still looking into the story. Jesse Jackson had come out within four days of the comment and said Lott should resign as majority leader. Eventually, when the mass media did enter the fray, its coverage clearly dominated the public agenda and its reporters uncovered materials that helped speed Lott's exit. However, given the short news cycle, the lack of initial interest by the media, and the large time lag between the event itself and when the media actually took the subject up, it seems likely that without the intervention of the blogosphere, the story would have died. What happened instead is that the cluster of political blogs--starting on the Left but then moving across the Left-Right divide--took up the subject, investigated, wrote opinions, collected links and public interest, and eventually captured enough attention to make the comments a matter of public importance. 
Free from the need to appear neutral and not to offend readers, and free from the need to keep close working relationships with news subjects, bloggers were able to identify something that grated on their sensibilities, talk about it, dig deeper, and eventually generate a substantial intervention into the public sphere. That intervention still had to pass through the mass media, for we still live in a communications environment heavily based on those media. However, the new source of insight, debate, and eventual condensation of effective public opinion came from within the networked information environment.
+The Diebold case was not an aberration, but merely a particularly rich case study of a much broader phenomenon, most extensively described in Dan Gilmore's We the Media. The basic production modalities that typify the networked information economy are now being applied to the problem of producing politically relevant information. In 2005, the most visible example of application of the networked information economy--both in its peer-production dimension and more generally by combining a wide range of nonproprietary production models--to the watchdog function of the media is the political blogosphere. The founding myth of the blogosphere's ,{[pg 263]}, journalistic potency was built on the back of then Senate majority leader Trent Lott. In 2002, Lott had the indiscretion of saying, at the one-hundredth-birthday party of Republican Senator Strom Thurmond, that if Thurmond had won his Dixiecrat presidential campaign, "we wouldn't have had all these problems over all these years." Thurmond had run on a segregationist campaign, splitting from the Democratic Party in opposition to Harry Truman's early civil rights efforts, as the post-World War II winds began blowing toward the eventual demise of formal, legal racial segregation in the United States. Few positions are taken to be more self-evident in the national public morality of early twenty-first-century America than that formal, state-imposed, racial discrimination is an abomination. And yet, the first few days after the birthday party at which Lott made his statement saw almost no reporting on the statement. ABC News and the Washington Post made small mention of it, but most media outlets reported merely on a congenial salute and farewell celebration of the Senate's oldest and longest-serving member. Things were different in the blogosphere. At first liberal blogs, and within three days conservative bloggers as well, began to excavate past racist statements by Lott, and to beat the drums calling for his censure or removal as Senate leader. Within about a week, the story surfaced in the mainstream media, became a major embarrassment, and led to Lott's resignation as Senate majority leader about a week later. A careful case study of this event leaves it unclear why the mainstream media initially ignored the story.~{ Harvard Kennedy School of Government, Case Program: " `Big Media' Meets `Bloggers': Coverage of Trent Lott's Remarks at Strom Thurmond's Birthday Party," http://www.ksg.harvard.edu/presspol/Research_Publications/Case_Studies/1731_0.pdf. }~ It may have been that the largely social event drew the wrong sort of reporters. It may have been that reporters and editors who depend on major Washington, D.C., players were reluctant to challenge Lott. Perhaps they thought it rude to emphasize this indiscretion, or too upsetting to us all to think of just how close to the surface thoughts that we deem abominable can lurk. 
There is little disagreement that the day after the party, the story was picked up and discussed by Marshall on TalkingPoints, as well as by another liberal blogger, Atrios, who apparently got it from a post on Slate's "Chatterbox," which picked it up from ABC News's own The Note, a news summary made available on the television network's Web site. While the mass media largely ignored the story, and the two or three mainstream reporters who tried to write about it were getting little traction, bloggers were collecting more stories about prior instances where Lott's actions tended to suggest support for racist causes. Marshall, for example, found that Lott had filed a 1981 amicus curiae brief in support of Bob Jones University's effort to retain its tax-exempt status. The U.S. government had rescinded ,{[pg 264]}, that status because the university practiced racial discrimination--such as prohibiting interracial dating. By Monday of the following week, four days after the remarks, conservative bloggers like Glenn Reynolds on Instapundit, Andrew Sullivan, and others were calling for Lott's resignation. It is possible that, absent the blogosphere, the story would still have flared up. There were two or so mainstream reporters still looking into the story. Jesse Jackson had come out within four days of the comment and said Lott should resign as majority leader. Eventually, when the mass media did enter the fray, its coverage clearly dominated the public agenda and its reporters uncovered materials that helped speed Lott's exit. However, given the short news cycle, the lack of initial interest by the media, and the large time lag between the event itself and when the media actually took the subject up, it seems likely that without the intervention of the blogosphere, the story would have died. What happened instead is that the cluster of political blogs--starting on the Left but then moving across the Left-Right divide--took up the subject, investigated, wrote opinions, collected links and public interest, and eventually captured enough attention to make the comments a matter of public importance. Free from the need to appear neutral and not to offend readers, and free from the need to keep close working relationships with news subjects, bloggers were able to identify something that grated on their sensibilities, talk about it, dig deeper, and eventually generate a substantial intervention into the public sphere. That intervention still had to pass through the mass media, for we still live in a communications environment heavily based on those media. However, the new source of insight, debate, and eventual condensation of effective public opinion came from within the networked information environment. ={ Gilmore, Dan ; Atrios (blogger Duncan Black) ; Lott, Trent ; @@ -4266,7 +4266,7 @@ The agricultural research that went into much of the Green Revolution did not in This largely benign story of increasing yields, resistance, and quality has not been without critics, to put it mildly. The criticism predates biotechnology and the development of transgenic varieties. Its roots are in criticism of experimental breeding programs of the American agricultural sectors and the Green Revolution. However, the greatest public visibility and political success of these criticisms has been in the context of GM foods. 
The critique brings together odd intellectual and political bedfellows, because it includes five distinct components: social and economic critique of the industrialization of agriculture, environmental and health effects, consumer preference for "natural" or artisan production of foodstuffs, and, perhaps to a more limited extent, protectionism of domestic farm sectors. -Perhaps the oldest component of the critique is the social-economic critique. One arm of the critique focuses on how mechanization, increased use of chemicals, and ultimately the use of nonreproducing proprietary seed led to incorporation of the agricultural sector into the capitalist form of production. In the United States, even with its large "family farm" sector, purchased inputs now greatly exceed nonpurchased inputs, production is highly capital intensive, and large-scale production accounts for the majority of land tilled and the majority of revenue captured from farming.~{ Jack R. Kloppenburg, Jr., First the Seed: The Political Economy of Plant Biotechnology 1492-2000 (Cambridge and New York: Cambridge University Press, 1988), table 2.2. }~ In 2003, 56 percent of farms had sales of less than $10,000 a year. Roughly 85 percent of farms had less than $100,000 in sales.~{ USDA National Agriculture Statistics Survey (2004), http://www.usda.gov/ nass/aggraphs/fncht3.htm. }~ These farms account for only 42 percent of the farmland. By comparison, 3.4 percent of farms have sales of more than $500,000 a year, and account for more than 21 percent of land. In the aggregate, the 7.5 percent of farms with sales over $250,000 account for 37 percent of land cultivated. Of all principal owners of farms in the United States in 2002, 42.5 percent reported something other than farming as their principal occupation, and many reported spending two hundred or ,{[pg 334]}, more days off-farm, or even no work days at all on the farm. The growth of large-scale "agribusiness," that is, mechanized, rationalized industrial-scale production of agricultural products, and more important, of agricultural inputs, is seen as replacing the family farm and the small-scale, self-sufficient farm, and bringing farm labor into the capitalist mode of production. As scientific development of seeds and chemical applications increases, the seed as input becomes separated from the grain as output, making farmers dependent on the purchase of industrially produced seed. This further removes farmwork from traditional modes of self-sufficiency and craftlike production to an industrial mode. This basic dynamic is repeated in the critique of the Green Revolution, with the added overlay that the industrial producers of seed are seen to be multinational corporations, and the industrialization of agriculture is seen as creating dependencies in the periphery on the industrial-scientific core of the global economy. +Perhaps the oldest component of the critique is the social-economic critique. One arm of the critique focuses on how mechanization, increased use of chemicals, and ultimately the use of nonreproducing proprietary seed led to incorporation of the agricultural sector into the capitalist form of production. In the United States, even with its large "family farm" sector, purchased inputs now greatly exceed nonpurchased inputs, production is highly capital intensive, and large-scale production accounts for the majority of land tilled and the majority of revenue captured from farming.~{ Jack R. 
Kloppenburg, Jr., First the Seed: The Political Economy of Plant Biotechnology 1492-2000 (Cambridge and New York: Cambridge University Press, 1988), table 2.2. }~ In 2003, 56 percent of farms had sales of less than $10,000 a year. Roughly 85 percent of farms had less than $100,000 in sales.~{ USDA National Agriculture Statistics Survey (2004), http://www.usda.gov/nass/aggraphs/fncht3.htm. }~ These farms account for only 42 percent of the farmland. By comparison, 3.4 percent of farms have sales of more than $500,000 a year, and account for more than 21 percent of land. In the aggregate, the 7.5 percent of farms with sales over $250,000 account for 37 percent of land cultivated. Of all principal owners of farms in the United States in 2002, 42.5 percent reported something other than farming as their principal occupation, and many reported spending two hundred or ,{[pg 334]}, more days off-farm, or even no work days at all on the farm. The growth of large-scale "agribusiness," that is, mechanized, rationalized industrial-scale production of agricultural products, and more important, of agricultural inputs, is seen as replacing the family farm and the small-scale, self-sufficient farm, and bringing farm labor into the capitalist mode of production. As scientific development of seeds and chemical applications increases, the seed as input becomes separated from the grain as output, making farmers dependent on the purchase of industrially produced seed. This further removes farmwork from traditional modes of self-sufficiency and craftlike production to an industrial mode. This basic dynamic is repeated in the critique of the Green Revolution, with the added overlay that the industrial producers of seed are seen to be multinational corporations, and the industrialization of agriculture is seen as creating dependencies in the periphery on the industrial-scientific core of the global economy. The social-economic critique has been enmeshed, as a political matter, with environmental, health, and consumer-oriented critiques as well. The environmental critiques focus on describing the products of science as monocultures, which, lacking the genetic diversity of locally used varieties, are more susceptible to catastrophic failure. Critics also fear contamination of existing varieties, unpredictable interactions with pests, and negative effects on indigenous species. The health effects concern focused initially on how breeding for yield may have decreased nutritional content, and in the more recent GM food debates, the concern that genetically altered foods will have some unanticipated negative health reactions that would only become apparent many years from now. The consumer concerns have to do with quality and an aesthetic attraction to artisan-mode agricultural products and aversion to eating industrial outputs. These social-economic and environmental-health-consumer concerns tend also to be aligned with protectionist lobbies, not only for economic purposes, but also reflecting a strong cultural attachment to the farming landscape and human ecology, particularly in Europe. ={ environmental criticism of GM foods +1 ; @@ -4627,7 +4627,7 @@ It was not long before a very different set of claims emerged about the Internet loneliness +2 } -Another strand of criticism focused less on the thinness, not to say vacuity, of online relations, and more on sheer time. According to this argument, the time and effort spent on the Net came at the expense of time spent with family and friends. 
Prominent and oft cited in this vein were two early studies. The first, entitled Internet Paradox, was led by Robert Kraut.~{ Robert Kraut et al., "Internet Paradox, A Social Technology that Reduces Social Involvement and Psychological Well Being," American Psychologist 53 (1998): 1017? 1031. }~ It was the first longitudinal study of a substantial number of users--169 users in the first year or two of their Internet use. Kraut and his collaborators found a slight, but statistically significant, correlation between increases in Internet use and (a) decreases in family communication, (b) decreases in the size of social circle, both near and far, and (c) an increase in depression and loneliness. The researchers hypothesized that use of the Internet replaces strong ties with weak ties. They ideal-typed these communications as exchanging knitting tips with participants in a knitting Listserv, or jokes with someone you would meet on a tourist information site. These trivialities, they thought, came to fill time that, in the absence of the Internet, would be spent with people with whom one has stronger ties. From a communications theory perspective, this causal explanation was more sophisticated than the more widely claimed assimilation of the Internet and television--that a computer monitor is simply one more screen to take away from the time one has to talk to real human beings.~{ A fairly typical statement of this view, quoted in a study commissioned by the Kellogg Foundation, was: "TV or other media, such as computers, are no longer a kind of `electronic hearth,' where a family will gather around and make decisions or have discussions. My position, based on our most recent studies, is that most media in the home are working against bringing families together." Christopher Lee et al., "Evaluating Information and Communications Technology: Perspective for a Balanced Approach," Report to the Kellogg Foundation (December 17, 2001), http:// www.si.umich.edu/pne/kellogg/013.html. }~ It recognized that using the Internet is fundamentally different from watching TV. It allows users to communicate with each other, rather than, like television, encouraging passive reception in a kind of "parallel play." Using a distinction between strong ties and weak ties, introduced by Mark Granovetter in what later became the social capital literature, these researchers suggested that the kind of human contact that was built around online interactions was thinner and less meaningful, so that the time spent on these relationships, on balance, weakened one's stock of social relations. +Another strand of criticism focused less on the thinness, not to say vacuity, of online relations, and more on sheer time. According to this argument, the time and effort spent on the Net came at the expense of time spent with family and friends. Prominent and oft cited in this vein were two early studies. The first, entitled Internet Paradox, was led by Robert Kraut.~{ Robert Kraut et al., "Internet Paradox, A Social Technology that Reduces Social Involvement and Psychological Well Being," American Psychologist 53 (1998): 1017? 1031. }~ It was the first longitudinal study of a substantial number of users--169 users in the first year or two of their Internet use. Kraut and his collaborators found a slight, but statistically significant, correlation between increases in Internet use and (a) decreases in family communication, (b) decreases in the size of social circle, both near and far, and (c) an increase in depression and loneliness. 
The researchers hypothesized that use of the Internet replaces strong ties with weak ties. They ideal-typed these communications as exchanging knitting tips with participants in a knitting Listserv, or jokes with someone you would meet on a tourist information site. These trivialities, they thought, came to fill time that, in the absence of the Internet, would be spent with people with whom one has stronger ties. From a communications theory perspective, this causal explanation was more sophisticated than the more widely claimed assimilation of the Internet and television--that a computer monitor is simply one more screen to take away from the time one has to talk to real human beings.~{ A fairly typical statement of this view, quoted in a study commissioned by the Kellogg Foundation, was: "TV or other media, such as computers, are no longer a kind of `electronic hearth,' where a family will gather around and make decisions or have discussions. My position, based on our most recent studies, is that most media in the home are working against bringing families together." Christopher Lee et al., "Evaluating Information and Communications Technology: Perspective for a Balanced Approach," Report to the Kellogg Foundation (December 17, 2001), http://www.si.umich.edu/pne/kellogg/013.html. }~ It recognized that using the Internet is fundamentally different from watching TV. It allows users to communicate with each other, rather than, like television, encouraging passive reception in a kind of "parallel play." Using a distinction between strong ties and weak ties, introduced by Mark Granovetter in what later became the social capital literature, these researchers suggested that the kind of human contact that was built around online interactions was thinner and less meaningful, so that the time spent on these relationships, on balance, weakened one's stock of social relations. ={ Granovetter, Mark ; Kraut, Robert ; contact, online vs. physical +1 ; @@ -4642,7 +4642,7 @@ Another strand of criticism focused less on the thinness, not to say vacuity, of social capital +1 } -A second, more sensationalist release of a study followed two years later. In 2000, the Stanford Institute for the Quantitative Study of Society's "preliminary report" on Internet and society, more of a press release than a report, emphasized the finding that "the more hours people use the Internet, the less time they spend with real human beings."~{ Norman H. Nie and Lutz Ebring, "Internet and Society, A Preliminary Report," Stanford Institute for the Quantitative Study of Society, February 17, 2000, 15 (Press Release), http://www.pkp.ubc.ca/bctf/Stanford_Report.pdf. }~ The actual results were somewhat less stark than the widely reported press release. As among all Internet users, only slightly more than 8 percent reported spending less time with family; 6 percent reported spending more time with family, and 86 percent spent about the same amount of time. Similarly, 9 percent reported spending less time with friends, 4 percent spent more time, and 87 percent spent the ,{[pg 361]}, same amount of time.~{ Ibid., 42-43, tables CH-WFAM, CH-WFRN. }~ The press release probably should not have read, "social isolation increases," but instead, "Internet seems to have indeterminate, but in any event small, effects on our interaction with family and friends"--hardly the stuff of front-page news coverage.~{ See John Markoff and A. Newer, "Lonelier Crowd Emerges in Internet Study," New York Times, February 16, 2000, section A, page 1, column 1. 
}~ The strongest result supporting the "isolation" thesis in that study was that 27 percent of respondents who were heavy Internet users reported spending less time on the phone with friends and family. The study did not ask whether they used email instead of the phone to keep in touch with these family and friends, and whether they thought they had more or less of a connection with these friends and family as a result. Instead, as the author reported in his press release, "E-mail is a way to stay in touch, but you can't share coffee or beer with somebody on e-mail, or give them a hug" (as opposed, one supposes, to the common practice of phone hugs).~{ Nie and Ebring, "Internet and Society," 19. }~ As Amitai Etzioni noted in his biting critique of that study, the truly significant findings were that Internet users spent less time watching television and shopping. Forty-seven percent of those surveyed said that they watched less television than they used to, and that number reached 65 percent for heavy users and 27 percent for light users. Only 3 percent of those surveyed said they watched more TV. Nineteen percent of all respondents and 25 percent of those who used the Internet more than five hours a week said they shopped less in stores, while only 3 percent said they shopped more in stores. The study did not explore how people were using the time they freed by watching less television and shopping less in physical stores. It did not ask whether they used any of this newfound time to increase and strengthen their social and kin ties.~{ Amitai Etzioni, "Debating the Societal Effects of the Internet: Connecting with the World," Public Perspective 11 (May/June 2000): 42, also available at http:// www.gwu.edu/ ccps/etzioni/A273.html. }~ +A second, more sensationalist release of a study followed two years later. In 2000, the Stanford Institute for the Quantitative Study of Society's "preliminary report" on Internet and society, more of a press release than a report, emphasized the finding that "the more hours people use the Internet, the less time they spend with real human beings."~{ Norman H. Nie and Lutz Ebring, "Internet and Society, A Preliminary Report," Stanford Institute for the Quantitative Study of Society, February 17, 2000, 15 (Press Release), http://www.pkp.ubc.ca/bctf/Stanford_Report.pdf. }~ The actual results were somewhat less stark than the widely reported press release. As among all Internet users, only slightly more than 8 percent reported spending less time with family; 6 percent reported spending more time with family, and 86 percent spent about the same amount of time. Similarly, 9 percent reported spending less time with friends, 4 percent spent more time, and 87 percent spent the ,{[pg 361]}, same amount of time.~{ Ibid., 42-43, tables CH-WFAM, CH-WFRN. }~ The press release probably should not have read, "social isolation increases," but instead, "Internet seems to have indeterminate, but in any event small, effects on our interaction with family and friends"--hardly the stuff of front-page news coverage.~{ See John Markoff and A. Newer, "Lonelier Crowd Emerges in Internet Study," New York Times, February 16, 2000, section A, page 1, column 1. }~ The strongest result supporting the "isolation" thesis in that study was that 27 percent of respondents who were heavy Internet users reported spending less time on the phone with friends and family. 
The study did not ask whether they used email instead of the phone to keep in touch with these family and friends, and whether they thought they had more or less of a connection with these friends and family as a result. Instead, as the author reported in his press release, "E-mail is a way to stay in touch, but you can't share coffee or beer with somebody on e-mail, or give them a hug" (as opposed, one supposes, to the common practice of phone hugs).~{ Nie and Ebring, "Internet and Society," 19. }~ As Amitai Etzioni noted in his biting critique of that study, the truly significant findings were that Internet users spent less time watching television and shopping. Forty-seven percent of those surveyed said that they watched less television than they used to, and that number reached 65 percent for heavy users and 27 percent for light users. Only 3 percent of those surveyed said they watched more TV. Nineteen percent of all respondents and 25 percent of those who used the Internet more than five hours a week said they shopped less in stores, while only 3 percent said they shopped more in stores. The study did not explore how people were using the time they freed by watching less television and shopping less in physical stores. It did not ask whether they used any of this newfound time to increase and strengthen their social and kin ties.~{ Amitai Etzioni, "Debating the Societal Effects of the Internet: Connecting with the World," Public Perspective 11 (May/June 2000): 42, also available at http://www.gwu.edu/ccps/etzioni/A273.html. }~ 2~ A MORE POSITIVE PICTURE EMERGES OVER TIME ={ social capital +13 } @@ -4692,12 +4692,12 @@ The most basic response to the concerns over the decline of community and its im wired neighbors began to sit on their front porches, instead of in their backyard, thereby providing live social reinforcement of community through daily brief greetings, as well as creating a socially enforced community policing mechanism. -We now have quite a bit of social science research on the side of a number of factual propositions.~{ Useful surveys include: Paul DiMaggio et al., "Social Implications of the Internet," Annual Review of Sociology 27 (2001): 307-336; Robyn B. Driskell and Larry Lyon, "Are Virtual Communities True Communities? Examining the Environments and Elements of Community," City & Community 1, no. 4 (December 2002): 349; James E. Katz and Ronald E. Rice, Social Consequences of Internet Use: Access, Involvement, Interaction (Cambridge, MA: MIT Press, 2002). }~ Human beings, whether connected to the Internet or not, continue to communicate preferentially with people who are geographically proximate than with those who are distant.~{ Barry Wellman, "Computer Networks as Social Networks," Science 293, issue 5537 (September 2001): 2031. }~ Nevertheless, people who are connected to the Internet communicate more with people who are geographically distant without decreasing the number of local connections. While the total number of connections continues to be greatest with proximate family members, friends, coworkers, and neighbors, the Internet's greatest effect is in improving the ability of individuals to add to these proximate relationships new and better-connected relationships with people who are geographically distant. This includes keeping more in touch with friends and relatives who live far away, and creating new weak-tie relationships around communities of interest and practice. 
To the extent that survey data are reliable, the most comprehensive and updated surveys support these observations. It now seems clear that Internet users "buy" their time to use the Internet by watching less television, and that the more Internet experience they have, the less they watch TV. People who use the Internet claim to have increased the number of people they stay in touch with, while mostly reporting no effect on time they spend with their family.~{ Jeffery I. Cole et al., "The UCLA Internet Report: Surveying the Digital Future, Year Three" (UCLA Center for Communication Policy, January 2003), 33, 55, 62, http:// www.ccp.ucla.edu/pdf/UCLA-Internet-Report-Year-Three.pdf. }~ +We now have quite a bit of social science research on the side of a number of factual propositions.~{ Useful surveys include: Paul DiMaggio et al., "Social Implications of the Internet," Annual Review of Sociology 27 (2001): 307-336; Robyn B. Driskell and Larry Lyon, "Are Virtual Communities True Communities? Examining the Environments and Elements of Community," City & Community 1, no. 4 (December 2002): 349; James E. Katz and Ronald E. Rice, Social Consequences of Internet Use: Access, Involvement, Interaction (Cambridge, MA: MIT Press, 2002). }~ Human beings, whether connected to the Internet or not, continue to communicate preferentially with people who are geographically proximate than with those who are distant.~{ Barry Wellman, "Computer Networks as Social Networks," Science 293, issue 5537 (September 2001): 2031. }~ Nevertheless, people who are connected to the Internet communicate more with people who are geographically distant without decreasing the number of local connections. While the total number of connections continues to be greatest with proximate family members, friends, coworkers, and neighbors, the Internet's greatest effect is in improving the ability of individuals to add to these proximate relationships new and better-connected relationships with people who are geographically distant. This includes keeping more in touch with friends and relatives who live far away, and creating new weak-tie relationships around communities of interest and practice. To the extent that survey data are reliable, the most comprehensive and updated surveys support these observations. It now seems clear that Internet users "buy" their time to use the Internet by watching less television, and that the more Internet experience they have, the less they watch TV. People who use the Internet claim to have increased the number of people they stay in touch with, while mostly reporting no effect on time they spend with their family.~{ Jeffery I. Cole et al., "The UCLA Internet Report: Surveying the Digital Future, Year Three" (UCLA Center for Communication Policy, January 2003), 33, 55, 62, http://www.ccp.ucla.edu/pdf/UCLA-Internet-Report-Year-Three.pdf. }~ ={ television : Internet use vs. } -Connections with family and friends seemed to be thickened by the new channels of communication, rather than supplanted by them. Emblematic of this were recent results of a survey conducted by the Pew project on "Internet and American Life" on Holidays Online. Almost half of respondents surveyed reported using e-mail to organize holiday activities with family (48 percent) and friends (46 percent), 27 percent reported sending or receiving holiday greetings, and while a third described themselves as shopping online in order to save money, 51 percent said they went online to find an unusual or hard-to-find gift. 
In other words, half of those who used the Internet for holiday shopping did so in order to personalize their gift further, rather than simply to take advantage of the most obvious use of e-commerce--price comparison and time savings. Further support for this position is offered in another Pew study, entitled "Internet and Daily Life." In that survey, the two most common uses--both of which respondents claimed they did more of because of the Net than they otherwise would have--were connecting ,{[pg 365]}, with family and friends and looking up information.~{ Pew Internet and Daily Life Project (August 11, 2004), report available at http:// www.pewinternet.org/PPF/r/131/report_display.asp. }~ Further evidence that the Internet is used to strengthen and service preexisting relations, rather than create new ones, is the fact that 79 percent of those who use the Internet at all do so to communicate with friends and family, while only 26 percent use the Internet to meet new people or to arrange dates. Another point of evidence is the use of instant messaging (IM). IM is a synchronous communications medium that requires its users to set time aside to respond and provides information to those who wish to communicate with an individual about whether that person is or is not available at any given moment. Because it is so demanding, IM is preferentially useful for communicating with individuals with whom one already has a preexisting relationship. This preferential use for strengthening preexisting relations is also indicated by the fact that two-thirds of IM users report using IM with no more than five others, while only one in ten users reports instant messaging with more than ten people. A recent Pew study of instant messaging shows that 53 million adults--42 percent of Internet users in the United States--trade IM messages. Forty percent use IM to contact coworkers, one-third family, and 21 percent use it to communicate equally with both. Men and women IM in equal proportions, but women IM more than men do, averaging 433 minutes per month as compared to 366 minutes, respectively, and households with children IM more than households without children. +Connections with family and friends seemed to be thickened by the new channels of communication, rather than supplanted by them. Emblematic of this were recent results of a survey conducted by the Pew project on "Internet and American Life" on Holidays Online. Almost half of respondents surveyed reported using e-mail to organize holiday activities with family (48 percent) and friends (46 percent), 27 percent reported sending or receiving holiday greetings, and while a third described themselves as shopping online in order to save money, 51 percent said they went online to find an unusual or hard-to-find gift. In other words, half of those who used the Internet for holiday shopping did so in order to personalize their gift further, rather than simply to take advantage of the most obvious use of e-commerce--price comparison and time savings. Further support for this position is offered in another Pew study, entitled "Internet and Daily Life." In that survey, the two most common uses--both of which respondents claimed they did more of because of the Net than they otherwise would have--were connecting ,{[pg 365]}, with family and friends and looking up information.~{ Pew Internet and Daily Life Project (August 11, 2004), report available at http://www.pewinternet.org/PPF/r/131/report_display.asp. 
}~ Further evidence that the Internet is used to strengthen and service preexisting relations, rather than create new ones, is the fact that 79 percent of those who use the Internet at all do so to communicate with friends and family, while only 26 percent use the Internet to meet new people or to arrange dates. Another point of evidence is the use of instant messaging (IM). IM is a synchronous communications medium that requires its users to set time aside to respond and provides information to those who wish to communicate with an individual about whether that person is or is not available at any given moment. Because it is so demanding, IM is preferentially useful for communicating with individuals with whom one already has a preexisting relationship. This preferential use for strengthening preexisting relations is also indicated by the fact that two-thirds of IM users report using IM with no more than five others, while only one in ten users reports instant messaging with more than ten people. A recent Pew study of instant messaging shows that 53 million adults--42 percent of Internet users in the United States--trade IM messages. Forty percent use IM to contact coworkers, one-third family, and 21 percent use it to communicate equally with both. Men and women IM in equal proportions, but women IM more than men do, averaging 433 minutes per month as compared to 366 minutes, respectively, and households with children IM more than households without children. ={ Pew studies ; instant messaging ; text messaging @@ -5090,7 +5090,7 @@ The physical layer encompasses both transmission channels and devices for produc 3~ Transport: Wires and Wireless ={ transport channel policy +16 } -Recall the Cisco white paper quoted in chapter 5. In it, Cisco touted the value of its then new router, which would allow a broadband provider to differentiate streams of information going to and from the home at the packet level. If the packet came from a competitor, or someone the user wanted to see or hear but the owner preferred that the user did not, the packet could be slowed down or dropped. If it came from the owner or an affiliate, it could be speeded up. The purpose of the router was not to enable evil control over users. It was to provide better-functioning networks. America Online (AOL), for example, has been reported as blocking its users from reaching Web sites that have been advertised in spam e-mails. The theory is that if spammers know their Web site will be inaccessible to AOL customers, they will stop.~{ Jonathan Krim, "AOL Blocks Spammers' Web Sites," Washington Post, March 20, 2004, p. A01; also available at http://www.washingtonpost.com/ac2/wp-dyn?page name article&contentId A9449-2004Mar19¬Found true. }~ The ability of service providers to block sites or packets from ,{[pg 398]}, certain senders and promote packets from others may indeed be used to improve the network. However, whether this ability will in fact be used to improve service depends on the extent to which the interests of all users, and particularly those concerned with productive uses of the network, are aligned with the interests of the service providers. 
Clearly, when in 2005 Telus, Canada's second largest telecommunications company, blocked access to the Web site of the Telecommunications Workers Union for all of its own clients and those of internet service providers that relied on its backbone network, it was not seeking to improve service for those customers' benefit, but to control a conversation in which it had an intense interest. When there is a misalignment, the question is what, if anything, disciplines the service providers' use of the technological capabilities they possess? One source of discipline would be a genuinely competitive market. The transition to broadband has, however, severely constrained the degree of competition in Internet access services. Another would be regulation: requiring owners to treat all packets equally. This solution, while simple to describe, remains highly controversial in the policy world. It has strong supporters and strong opposition from the incumbent broadband providers, and has, as a practical matter, been rejected for the time being by the FCC. The third type of solution would be both more radical and less "interventionist" from the perspective of regulation. It would involve eliminating contemporary regulatory barriers to the emergence of a user-owned wireless infrastructure. It would allow users to deploy their own equipment, share their wireless capacity, and create a "last mile" owned by all users in common, and controlled by none. This would, in effect, put equipment manufacturers in competition to construct the "last mile" of broadband networks, and thereby open up the market in "middle-mile" Internet connection services.
+Recall the Cisco white paper quoted in chapter 5. In it, Cisco touted the value of its then new router, which would allow a broadband provider to differentiate streams of information going to and from the home at the packet level. If the packet came from a competitor, or someone the user wanted to see or hear but the owner preferred that the user did not, the packet could be slowed down or dropped. If it came from the owner or an affiliate, it could be speeded up. The purpose of the router was not to enable evil control over users. It was to provide better-functioning networks. America Online (AOL), for example, has been reported as blocking its users from reaching Web sites that have been advertised in spam e-mails. The theory is that if spammers know their Web site will be inaccessible to AOL customers, they will stop.~{ Jonathan Krim, "AOL Blocks Spammers' Web Sites," Washington Post, March 20, 2004, p. A01; also available at http://www.washingtonpost.com/ac2/wp-dyn?pagename=article&contentId=A9449-2004Mar19&notFound=true. }~ The ability of service providers to block sites or packets from ,{[pg 398]}, certain senders and promote packets from others may indeed be used to improve the network. However, whether this ability will in fact be used to improve service depends on the extent to which the interests of all users, and particularly those concerned with productive uses of the network, are aligned with the interests of the service providers. Clearly, when in 2005 Telus, Canada's second largest telecommunications company, blocked access to the Web site of the Telecommunications Workers Union for all of its own clients and those of internet service providers that relied on its backbone network, it was not seeking to improve service for those customers' benefit, but to control a conversation in which it had an intense interest. 
When there is a misalignment, the question is what, if anything, disciplines the service providers' use of the technological capabilities they possess? One source of discipline would be a genuinely competitive market. The transition to broadband has, however, severely constrained the degree of competition in Internet access services. Another would be regulation: requiring owners to treat all packets equally. This solution, while simple to describe, remains highly controversial in the policy world. It has strong supporters and strong opposition from the incumbent broadband providers, and has, as a practical matter, been rejected for the time being by the FCC. The third type of solution would be both more radical and less "interventionist" from the perspective of regulation. It would involve eliminating contemporary regulatory barriers to the emergence of a user-owned wireless infrastructure. It would allow users to deploy their own equipment, share their wireless capacity, and create a "last mile" owned by all users in common, and controlled by none. This would, in effect, put equipment manufacturers in competition to construct the "last mile" of broadband networks, and thereby open up the market in "middle-mile" Internet connection services. ={ access : systematically blocked by policy routers ; blocked access : @@ -5330,7 +5330,7 @@ How important more generally are these legal battles to the organization of cult MP3.com was the first major music distribution site shut down by litigation. From the industry's perspective, it should have represented an entirely unthreatening business model. Users paid a subscription fee, in exchange for which they were allowed to download music. There were various quirks and kinks in this model that made it unattractive to the music industry at the time: the industry did not control this major site, and therefore had to share the rents from the music, and more important, there was no effective control over the music files once downloaded. However, from the perspective of 2005, MP3.com was a vastly more manageable technology for the sound recording business model than a free software file-sharing client. MP3.com was a single site, with a corporate owner that could be (and was) held responsible. It controlled which user had access to what files--by requiring each user to insert a CD into the computer to prove that he or she had bought the CD--so that usage could in principle be monitored and, if ,{[pg 423]}, desired, compensation could be tied to usage. It did not fundamentally change the social practice of choosing music. It provided something that was more like a music-on-demand jukebox than a point of music sharing. As a legal matter, MP3.com's infringement was centered on the fact that it stored and delivered the music from this central server instead of from the licensed individual copies. In response to the shutdown of MP3.com, Napster redesigned the role of the centralized mode, and left storage in the hands of users, keeping only the directory and search functions centralized. When Napster was shut down, Gnutella and later FastTrack further decentralized the system, offering a fully decentralized, ad hoc reconfigurable cataloging and search function. Because these algorithms represent architecture and a protocol-based network, not a particular program, they are usable in many different implementations. 
This includes free software programs like MLDonkey--which is a nascent file-sharing system that is aimed to run simultaneously across most of the popular file-sharing networks, including FastTrack, BitTorrent, and Overnet, the eDonkey network. These programs are now written by, and available from, many different jurisdictions. There is no central point of control over their distribution. There is no central point through which to measure and charge for their use. They are, from a technical perspective, much more resilient to litigation attack, and much less friendly to various possible models of charging for downloads or usage. From a technological perspective, then, the litigation backfired. It created a network that is less susceptible to integration into an industrial model of music distribution based on royalty payments per user or use. ={ MP3.com } -It is harder to gauge, however, whether the litigation was a success or a failure from a social-practice point of view. There have been conflicting reports on the effects of file sharing and the litigation on CD sales. The recording industry claimed that CD sales were down because of file sharing, but more independent academic studies suggested that CD sales were not independently affected by file sharing, as opposed to the general economic downturn.~{ See Felix Oberholzer and Koleman Strumpf, "The Effect of File Sharing on Record Sales" (working paper), http://www.unc.edu/cigar/papers/FileSharing_March2004.pdf. }~ The Pew project on Internet and American Life user survey data suggests that the litigation strategy against individual users has dampened the use of file sharing, though file sharing is still substantially more common among users than paying for files from the newly emerging payper-download authorized services. In mid-2003, the Pew study found that 29 percent of Internet users surveyed said they had downloaded music files, identical to the percentage of users who had downloaded music in the first quarter of 2001, the heyday of Napster. Twenty-one percent responded that ,{[pg 424]}, they allow others to download from their computer.~{ Mary Madden and Amanda Lenhart, "Music Downloading, File-Sharing, and Copyright" (Pew, July 2003), http://www.pewinternet.org/pdfs/PIP_Copyright_Memo.pdf/. }~ This meant that somewhere between twenty-six and thirty-five million adults in the United States alone were sharing music files in mid-2003, when the recording industry began to sue individual users. Of these, fully two-thirds expressly stated that they did not care whether the files they downloaded were or were not copyrighted. By the end of 2003, five months after the industry began to sue individuals, the number of respondents who admitted to downloading music dropped by half. During the next few months, these numbers increased slightly to twenty-three million adults, remaining below the mid-2003 numbers in absolute terms and more so in terms of percentage of Internet users. Of those who had at one point downloaded, but had stopped, roughly a third said that the threat of suit was the reason they had stopped file sharing.~{ Lee Rainie and Mary Madden, "The State of Music Downloading and File-Sharing Online" (Pew, April 2004), http://www.pewinternet.org/pdfs/PIP_Filesharing_April_ 04.pdf. }~ During this same period, use of pay online music download services, like iTunes, rose to about 7 percent of Internet users. Sharing of all kinds of media files--music, movies, and games--was at 23 percent of adult Internet users. 
These numbers do indeed suggest that, in the aggregate, music downloading is reported somewhat less often than it was in the past. It is hard to tell how much of this reduction is due to actual behavioral change as compared to an unwillingness to self-report on behavior that could subject one to litigation. It is impossible to tell how much of an effect the litigation has had specifically on sharing by younger people--teenagers and college students--who make up a large portion of both CD buyers and file sharers. Nonetheless, the reduction in the total number of self-reported users and the relatively steady percentage of total Internet users who share files of various kinds suggest that the litigation does seem to have had a moderating effect on file sharing as a social practice. It has not, however, prevented file sharing from continuing to be a major behavioral pattern among one-fifth to one-quarter of Internet users, and likely a much higher proportion in the most relevant populations from the perspective of the music and movie industries--teenagers and young adults. +It is harder to gauge, however, whether the litigation was a success or a failure from a social-practice point of view. There have been conflicting reports on the effects of file sharing and the litigation on CD sales. The recording industry claimed that CD sales were down because of file sharing, but more independent academic studies suggested that CD sales were not independently affected by file sharing, as opposed to the general economic downturn.~{ See Felix Oberholzer and Koleman Strumpf, "The Effect of File Sharing on Record Sales" (working paper), http://www.unc.edu/cigar/papers/FileSharing_March2004.pdf. }~ The Pew project on Internet and American Life user survey data suggests that the litigation strategy against individual users has dampened the use of file sharing, though file sharing is still substantially more common among users than paying for files from the newly emerging payper-download authorized services. In mid-2003, the Pew study found that 29 percent of Internet users surveyed said they had downloaded music files, identical to the percentage of users who had downloaded music in the first quarter of 2001, the heyday of Napster. Twenty-one percent responded that ,{[pg 424]}, they allow others to download from their computer.~{ Mary Madden and Amanda Lenhart, "Music Downloading, File-Sharing, and Copyright" (Pew, July 2003), http://www.pewinternet.org/pdfs/PIP_Copyright_Memo.pdf/. }~ This meant that somewhere between twenty-six and thirty-five million adults in the United States alone were sharing music files in mid-2003, when the recording industry began to sue individual users. Of these, fully two-thirds expressly stated that they did not care whether the files they downloaded were or were not copyrighted. By the end of 2003, five months after the industry began to sue individuals, the number of respondents who admitted to downloading music dropped by half. During the next few months, these numbers increased slightly to twenty-three million adults, remaining below the mid-2003 numbers in absolute terms and more so in terms of percentage of Internet users. Of those who had at one point downloaded, but had stopped, roughly a third said that the threat of suit was the reason they had stopped file sharing.~{ Lee Rainie and Mary Madden, "The State of Music Downloading and File-Sharing Online" (Pew, April 2004), http://www.pewinternet.org/pdfs/PIP_Filesharing_April_04.pdf. 
}~ During this same period, use of pay online music download services, like iTunes, rose to about 7 percent of Internet users. Sharing of all kinds of media files--music, movies, and games--was at 23 percent of adult Internet users. These numbers do indeed suggest that, in the aggregate, music downloading is reported somewhat less often than it was in the past. It is hard to tell how much of this reduction is due to actual behavioral change as compared to an unwillingness to self-report on behavior that could subject one to litigation. It is impossible to tell how much of an effect the litigation has had specifically on sharing by younger people--teenagers and college students--who make up a large portion of both CD buyers and file sharers. Nonetheless, the reduction in the total number of self-reported users and the relatively steady percentage of total Internet users who share files of various kinds suggest that the litigation does seem to have had a moderating effect on file sharing as a social practice. It has not, however, prevented file sharing from continuing to be a major behavioral pattern among one-fifth to one-quarter of Internet users, and likely a much higher proportion in the most relevant populations from the perspective of the music and movie industries--teenagers and young adults. ={ Pew studies }
From the perspective of understanding the effects of institutional ecology, then, the still-raging battle over peer-to-peer networks presents an ambiguous picture. One can speculate with some degree of confidence that, had Napster not been stopped by litigation, file sharing would have been a much wider social practice than it is today. The application was extremely easy to use; it offered a single network for all file-sharing users, thereby offering an extremely diverse and universal content distribution network; and for a brief period, it was a cultural icon and a seemingly acceptable social practice. The ,{[pg 425]}, period of regrouping that followed its closure; the imperfect interfaces of early Gnutella clients; the relative fragmentation of file sharing into a number of networks, each with a smaller coverage of content than was present; and the fear of personal litigation risk are likely to have limited adoption. On the other hand, in the longer run, the technological developments have created platforms that are less compatible with the industrial model, and which would be harder to integrate into a stable settlement for music distribution in the digital environment.
@@ -5346,7 +5346,7 @@ Recorded music began with the phonograph--a packaged good intended primarily for
Musicians and songwriters seem to be relatively insulated from the effects of p2p networks, and on balance, are probably affected positively. The most comprehensive survey data available, from mid-2004, shows that 35 percent of musicians and songwriters said that free downloads have helped their careers. Only 5 percent said it has hurt them. Thirty percent said it increased attendance at concerts, 21 percent that it helped them sell CDs and other merchandise, and 19 percent that it helped them gain radio playing time. These results are consistent with what one would expect given the revenue structure of the industry, although the study did not separate answers out based on whether the respondent was able to live entirely or primarily on their music, which represented only 16 percent of the respondents to the survey. 
In all, it appears that much of the actual flow of revenue to artists--from performances and other sources--is stable. This is likely to remain true even if the CD market were entirely displaced by peer-to-peer distribution. Musicians will still be able to play for their dinner, at least not significantly less so than they can today. Perhaps there will be fewer millionaires. Perhaps fewer mediocre musicians with attractive physiques will be sold as "geniuses," and more talented musicians will be heard than otherwise would have, and will as a result be able to get paying gigs instead of waiting tables or "getting a job." But it would be silly to think that music, a cultural form without which no human society has existed, will cease to be in our world if we abandon the industrial form it took for the blink of a historical eye that was the twentieth century. Music was not born with the phonograph, nor will it die with the peer-to-peer network. The terms of the debate, then, are about cultural policy; perhaps about industrial policy. Will we get the kind of music we want in this system, whoever "we" are? Will American recording companies continue to get the export revenue streams they do? Will artists be able to live from making music? Some of these arguments are serious. Some are but a tempest in a monopoly-rent teapot. It is clear that a technological change has rendered obsolete a particular mode of distributing ,{[pg 427]}, information and culture. Distribution, once the sole domain of market-based firms, now can be produced by decentralized networks of users, sharing instantiations of music they deem attractive with others, using equipment they own and generic network connections. This distribution network, in turn, allows a much more diverse range of musicians to reach much more finely grained audiences than were optimal for industrial production and distribution of mechanical instantiations of music in vinyl or CD formats. The legal battles reflect an effort by an incumbent industry to preserve its very lucrative business model. The industry has, to this point, delayed the transition to peer-based distribution, but it is unclear for how long or to what extent it will be successful in preventing the gradual transition to user-based distribution.
-The movie industry has a different industrial structure and likely a different trajectory in its relations to p2p networks. First and foremost, movies began as a relatively high capital cost experience good. Making a movie, as opposed to writing a song, was something that required a studio and a large workforce. It could not be done by a musician with a guitar or a piano. Furthermore, movies were, throughout most of their history, collective experience goods. They were a medium for public performance experienced outside of the home, in a social context. With the introduction of television, it was easy to adapt movie revenue structure by delaying release of films to television viewing until after demand for the movie at the theater declined, as well as to develop their capabilities into a new line of business--television production. However, theatrical release continued to be the major source of revenue. When video came along, the movie industry cried murder in the Sony Betamax case, but actually found it quite easy to work videocassettes into yet another release window, like television, and another medium, the made-for-video movie. Digital distribution affects the distribution of cultural artifacts as packaged goods for home consumption. 
It does not affect the social experience of going out to the movies. At most, it could affect the consumption of the twenty-year-old mode of movie distribution: videos and DVDs. As recently as the year 2000, when the Hollywood studios were litigating the DeCSS case, they represented to the court that home video sales were roughly 40 percent of revenue, a number consistent with other reports.~{ See 111 F.Supp.2d at 310, fns. 69-70; PBS Frontline report, http://www.pbs.org/ wgbh/pages/frontline/shows/hollywood/business/windows.html. }~ The remainder, composed of theatrical release revenues and various television releases, remains reasonably unthreatened as a set of modes of revenue capture to sustain the high-production value, high-cost movies that typify Hollywood. Forty percent is undoubtedly a large chunk, but unlike ,{[pg 428]}, the recording industry, which began with individually owned recordings, the movie industry preexisted videocassettes and DVDs, and is likely to outlive them even if p2p networks were to eliminate that market entirely, which is doubtful. +The movie industry has a different industrial structure and likely a different trajectory in its relations to p2p networks. First and foremost, movies began as a relatively high capital cost experience good. Making a movie, as opposed to writing a song, was something that required a studio and a large workforce. It could not be done by a musician with a guitar or a piano. Furthermore, movies were, throughout most of their history, collective experience goods. They were a medium for public performance experienced outside of the home, in a social context. With the introduction of television, it was easy to adapt movie revenue structure by delaying release of films to television viewing until after demand for the movie at the theater declined, as well as to develop their capabilities into a new line of business--television production. However, theatrical release continued to be the major source of revenue. When video came along, the movie industry cried murder in the Sony Betamax case, but actually found it quite easy to work videocassettes into yet another release window, like television, and another medium, the made-for-video movie. Digital distribution affects the distribution of cultural artifacts as packaged goods for home consumption. It does not affect the social experience of going out to the movies. At most, it could affect the consumption of the twenty-year-old mode of movie distribution: videos and DVDs. As recently as the year 2000, when the Hollywood studios were litigating the DeCSS case, they represented to the court that home video sales were roughly 40 percent of revenue, a number consistent with other reports.~{ See 111 F.Supp.2d at 310, fns. 69-70; PBS Frontline report, http://www.pbs.org/wgbh/pages/frontline/shows/hollywood/business/windows.html. }~ The remainder, composed of theatrical release revenues and various television releases, remains reasonably unthreatened as a set of modes of revenue capture to sustain the high-production value, high-cost movies that typify Hollywood. Forty percent is undoubtedly a large chunk, but unlike ,{[pg 428]}, the recording industry, which began with individually owned recordings, the movie industry preexisted videocassettes and DVDs, and is likely to outlive them even if p2p networks were to eliminate that market entirely, which is doubtful. 
The harder and more interesting question is whether cheap high-quality digital video-capture and editing technologies combined with p2p networks for efficient distribution could make film a more diverse medium than it is now. The potential hypothetical promise of p2p networks like BitTorrent is that they could offer very robust and efficient distribution networks for films outside the mainstream industry. Unlike garage bands and small-scale music productions, however, this promise is as yet speculative. We do not invest in public education for film creation, as we do in the teaching of writing. Most of the raw materials out of which a culture of digital capture and amateur editing could develop are themselves under copyright, a subject we return to when considering the content layer. There are some early efforts, like atomfilms.com, at short movie distribution. The technological capabilities are there. It is possible that if films older than thirty or even fifty years were released into the public domain, they would form the raw material out of which a new cultural production practice would form. If it did, p2p networks would likely play an important role in their distribution. However, for now, although the sound recording and movie industries stand shoulder to shoulder in the lobbying efforts, their circumstances and likely trajectory in relation to file sharing are likely quite different.
@@ -5376,7 +5376,7 @@ Not all battles over the role of property-like arrangements at the logical layer
domain names +4 }
-None of this institutional edifice could be built without the U.S. government. In early 1998, the administration responded to this ferment with a green paper, seeking the creation of a private, nonprofit corporation registered in the United States to take on management of the domain name issue. By its own terms, the green paper responded to concerns of the domain name registration monopoly and of trademark issues in domain names, first and foremost, and to some extent to increasing clamor from abroad for a voice in Internet governance. Despite a cool response from the European Union, the U.S. government proceeded to finalize a white paper and authorize the creation of its preferred model--the private, nonprofit corporation. Thus was born the Internet Corporation for Assigned Names and Numbers (ICANN) as a private, nonprofit California corporation. Over time, it succeeded in large measure in loosening NSI's monopoly on domain name registration. Its efforts on the trademark side effectively created a global preemptive property right. Following an invitation in the U.S. government's white paper for ICANN to study the proper approach to trademark enforcement in the domain name space, ICANN and WIPO initiated a process ,{[pg 432]}, that began in July 1998 and ended in April 1999. As Froomkin describes his experience as a public-interest expert in this process, the process feigned transparency and open discourse, but was in actuality an opaque staff-driven drafting effort.~{ A. M. Froomkin, "Semi-Private International Rulemaking: Lessons Learned from the WIPO Domain Name Process," http://www.personal.law.miami.edu/froomkin/ articles/TPRC99.pdf. }~ The result was a very strong global property right available to trademark owners in the alphanumeric strings that make up domain names. This was supported by binding arbitration. Because it controlled the root server, ICANN could enforce its arbitration decisions worldwide. 
If ICANN decides that, say, the McDonald's fast-food corporation and not a hypothetical farmer named Old McDonald owned www.mcdonalds.com, all computers in the world would be referred to the corporate site, not the personal one. Not entirely satisfied with the degree to which the ICANN-WIPO process protected their trademarks, some of the major trademark owners lobbied the U.S. Congress to pass an even stricter law. This law would make it easier for the owners of commercial brand names to obtain domain names that include their brand, whether or not there was any probability that users would actually confuse sites like the hypothetical Old McDonald's with that of the fast-food chain.
+None of this institutional edifice could be built without the U.S. government. In early 1998, the administration responded to this ferment with a green paper, seeking the creation of a private, nonprofit corporation registered in the United States to take on management of the domain name issue. By its own terms, the green paper responded to concerns of the domain name registration monopoly and of trademark issues in domain names, first and foremost, and to some extent to increasing clamor from abroad for a voice in Internet governance. Despite a cool response from the European Union, the U.S. government proceeded to finalize a white paper and authorize the creation of its preferred model--the private, nonprofit corporation. Thus was born the Internet Corporation for Assigned Names and Numbers (ICANN) as a private, nonprofit California corporation. Over time, it succeeded in large measure in loosening NSI's monopoly on domain name registration. Its efforts on the trademark side effectively created a global preemptive property right. Following an invitation in the U.S. government's white paper for ICANN to study the proper approach to trademark enforcement in the domain name space, ICANN and WIPO initiated a process ,{[pg 432]}, that began in July 1998 and ended in April 1999. As Froomkin describes his experience as a public-interest expert in this process, the process feigned transparency and open discourse, but was in actuality an opaque staff-driven drafting effort.~{ A. M. Froomkin, "Semi-Private International Rulemaking: Lessons Learned from the WIPO Domain Name Process," http://www.personal.law.miami.edu/froomkin/articles/TPRC99.pdf. }~ The result was a very strong global property right available to trademark owners in the alphanumeric strings that make up domain names. This was supported by binding arbitration. Because it controlled the root server, ICANN could enforce its arbitration decisions worldwide. If ICANN decides that, say, the McDonald's fast-food corporation and not a hypothetical farmer named Old McDonald owned www.mcdonalds.com, all computers in the world would be referred to the corporate site, not the personal one. Not entirely satisfied with the degree to which the ICANN-WIPO process protected their trademarks, some of the major trademark owners lobbied the U.S. Congress to pass an even stricter law. This law would make it easier for the owners of commercial brand names to obtain domain names that include their brand, whether or not there was any probability that users would actually confuse sites like the hypothetical Old McDonald's with that of the fast-food chain. ={ Froomkin, Michael ; ICANN (Internet Corporation for Assigned Names and Numbers) }