Geeks talk a lot. They don’t talk about recursive publics. They don’t often talk about imaginations, infrastructures, moral or technical orders. But they do talk a lot. A great deal of time and typing is necessary to create software and networks: learning and talking, teaching and arguing, telling stories and reading polemics, reflecting on the world in and about the infrastructure one inhabits. In this chapter I linger on the stories geeks tell, and especially on stories and reflections that mark out contemporary problems of knowledge and power—stories about grand issues like progress, enlightenment, liberty, and freedom.
Issues of enlightenment, progress, and freedom are quite obviously still part of a “social imaginary,” especially imaginations of the relationship of knowledge and enlightenment to freedom and autonomy so clearly at stake in the notion of a public or public sphere. And while the example of Free Software illuminates how issues of enlightenment, progress, and freedom are proposed, contested, and implemented in and through software and networks, this chapter contains stories that are better understood as “usable pasts”—less technical and more accessible narratives that make sense of the contemporary world by reflecting on the past and its difference from today.
Usable pasts is a more charitable term for what might be called modern myths among geeks: stories that the tellers know to be a combination of fact and fiction. They are told not in order to remember the past, but in order to make sense of the present and of the future. They make sense of practices that are not questioned in the doing, but which are not easily understood in available intellectual or colloquial terms. The first set of stories I relate concerns the Protestant Reformation: allegories that make use of Catholic and Protestant churches, laity, clergy, high priests, and reformation-era images of control and liberation. It might be surprising that geeks turn to the past (and especially to religious allegory) in order to make sense of the present, but the reason is quite simple: there are no “ready-to-narrate” stories that make sense of the practices of geeks today. Precisely because geeks are “figuring out” things that are not clear or obvious, they are of necessity bereft of effective ways of talking about them. The Protestant Reformation makes for good allegory because it separates power from control; it draws on stories of catechism and ritual, alphabets, pamphlets and liturgies, indulgences and self-help in order to give geeks a way to make sense of the distinction between power and control, and how it relates to the technical and political economy they occupy. The contemporary relationship among states, corporations, small businesses, and geeks is not captured by familiar oppositions like commercial/noncommercial, for/against private property, or capitalist/socialist—it is a relationship of reform and conversion, not revolution or overthrow.
Usable pasts are stories, but they are stories that reflect specific attitudes and specific ways of thinking about the relationship between past, present, and future. Geeks think and talk a lot about time, progress, and change, but their conclusions and attitudes are by no means uniform. Some geeks are much more aware of the specific historical circumstances and contexts in which they operate, others less so. In this chapter I pose a question via Michel Foucault’s famous short piece “What Is Enlightenment?” Namely, are geeks modern? For Foucault, rereading Kant’s eponymous piece from 1784, the problem of being modern (or of an age being “enlightened”) is not one of a period or epoch that people live through; rather, it involves a subjective relationship, an attitude. Kant’s explanation of enlightenment does not suggest that it is itself a universal, but that it occurs through a form of reflection on what difference the changes of one’s immediate historical past make to one’s understanding of the supposed universals of a much longer history—that is, one must ask why it is necessary to think the way one does today about problems that have been confronted in ages past. For Foucault, such reflections must be rooted in the “historically unique forms in which the generalities of our relations . . . have been problematized.”
The attitudes that geeks take in responding to these questions fall along a spectrum that I have identified as ranging from “polymaths” to “transhumanists.” These monikers are drawn from real discussions with geeks, but they don’t designate a kind of person. They are “subroutines,” perhaps, called from within a larger program of moral and technical imaginations of order. It is possible for the same person to be a polymath at work and a transhumanist at home, but generally speaking they are conflicting and opposite mantles. In polymath routines, technology is an intervention into a complicated, historically unique field of people, customs, organizations, other technologies, and laws; in transhumanist routines, technology is seen as an inevitable force—a product of human action, but not of human design—that is impossible to control or resist through legal or customary means.
Geeks love allegories about the Protestant Reformation; they relish stories of Luther and Calvin, of property and iconoclasm, of reformation over revolution. Allegories of Protestant revolt allow geeks to make sense of the relationship among the state (the monarchy), large corporations (the Catholic Church), the small start-ups, individual programmers, and adepts among whom they spend most of their time (Protestant reformers), and the laity (known as “lusers” and “sheeple”). They give geeks a way to assert that they prefer reformation (to save capitalism from the capitalists) over revolution. Obviously, not all geeks tell stories of “religious wars” and the Protestant Reformation, but these images reappear often enough in conversations that most geeks will more or less instantly recognize them as a way of making sense of modern corporate, state, and political power in the arena of information technology: the figures of the Pope, the Catholic Church, the Vatican, the monarchs of various nations, the laity, the rebel adepts like Luther and Calvin, as well as models of sectarianism, iconoclasm (“In the beginning was the Command Line”), politicoreligious power, and arcane theological argumentation.
At the first level are allegories of “religious war” or “holy war” (and increasingly, of “jihads”). Such stories reveal a certain cynicism: they describe a technical war of details between two pieces of software that accomplish the same thing through different means, so devotion to one or the other is seen as a kind of arbitrary theological commitment, at once reliant on a pure rationality and requiring aesthetic or political judgment. Such stories imply that two technologies are equally good and equally bad and that one’s choice of sect is thus an entirely nonrational one based in the vicissitudes of background and belief. Some people are zealous proselytizers of a technology, some are not. As one Usenet message explains: “Religious ‘wars’ have tended to occur over theological and doctrinal technicalities of one sort or another. The parallels between that and the computing technicalities that result in ‘computing wars’ are pretty strong.”
Perhaps the most familiar and famous of these wars is that between Apple and Microsoft (formerly between Apple and IBM), a conflict that is often played out in dramatic and broad strokes that imply fundamental differences, when in fact the differences are extremely slight.
Often the language of the Reformation creeps playfully into otherwise serious attempts to make aesthetic judgments about technology, as in this analysis of the programming language tcl/tk:
It’s also not clear that the primary design criterion in tcl, perl, or Visual BASIC was visual beauty—nor, probably, should it have been. Ousterhout said people will vote with their feet. This is important. While the High Priests in their Ivory Towers design pristine languages of stark beauty and balanced perfection for their own appreciation, the rest of the mundane world will in blind and contented ignorance go plodding along using nasty little languages like those enumerated above. These poor sots will be getting a great deal of work done, putting bread on the table for their kids, and getting home at night to share it with them. The difference is that the priests will shake their fingers at the laity, and the laity won’t care, because they’ll be in bed asleep.
In this instance, the “religious war” concerns the difference between academic programming languages and the languages of regular programmers, a distinction made equivalent to that between the insularity of the Catholic Church and the self-help of a Protestant laity: the heroes (such as tcl/tk, perl, and python—all Free Software) are the “nasty little languages” of the laity; the High Priests design (presumably) Algol, LISP, and other “academic” languages.
At a second level, however, the allegory makes precise use of Protestant Reformation details. For example, in a discussion about the various fights over the Gnu C Compiler (gcc), a central component of the various UNIX operating systems, Christopher Browne posted this counter-reformation allegory to a Usenet group.
The EGCS project was started around two years ago when G++ (and GCC) development got pretty “stuck.” EGCS sought to integrate together a number of the groups of patches that people were making to the GCC “family.” In effect, there had been a “Protestant Reformation,” with split-offs of:
a) The GNU FORTRAN Denomination;
b) The Pentium Tuning Sect;
c) The IBM Haifa Instruction Scheduler Denomination;
d) The C++ Standard Acolytes.
These groups had been unable to integrate their efforts (for various reasons) with the Catholic Version, GCC 2.8. The Ecumenical GNU Compiler Society sought to draw these groups back into the Catholic flock. The project was fairly successful; GCC 2.8 was succeeded by GCC 2.9, which was not a direct upgrade from 2.8, but rather the results of the EGCS project. EGCS is now GCC.
In addition to the obvious pleasure with which they deploy the sectarian aspects of the Protestant Reformation, geeks also allow themselves to see their struggles as those of Luther-like adepts, confronted by powerful worldly institutions that are distinct but intertwined: the Catholic Church and absolutist monarchs. Sometimes these comparisons are meant to mock theological argument; sometimes they are more straightforwardly hagiographic. For instance, a 1998 article in Salon compares Martin Luther and Linus Torvalds (originator of the Linux kernel).
In Luther’s Day, the Roman Catholic Church had a near-monopoly on the cultural, intellectual and spiritual life of Europe. But the principal source text informing that life—the Bible—was off limits to ordinary people. . . . Linus Torvalds is an information-age reformer cut from the same cloth. Like Luther, his journey began while studying for ordination into the modern priesthood of computer scientists at the University of Helsinki—far from the seats of power in Redmond and Silicon Valley. Also like Luther, he had a divine, slightly nutty idea to remove the intervening bureaucracies and put ordinary folks in a direct relationship to a higher power—in this case, their computers. Dissolving the programmer-user distinction, he encouraged ordinary people to participate in the development of their computing environment. And just as Luther sought to make the entire sacramental shebang—the wine, the bread and the translated Word—available to the hoi polloi, Linus seeks to revoke the developer’s proprietary access to the OS, insisting that the full operating system source code be delivered—without cost—to every ordinary Joe at the desktop.
Adepts with strong convictions—monks and priests whose initiation and mastery are evident—make the allegory work. Other uses of Christian iconography are less, so to speak, faithful to the sources. Another prominent personality, Richard Stallman, of the Free Software Foundation, is prone to dressing as his alter-ego, St. IGNUcius, patron saint of the church of EMACS—a church with no god, but intense devotion to a baroque text-processing program of undeniable, nigh-miraculous power.
Often the appeal of Reformation-era rhetoric comes from a kind of indictment of the present: despite all this high tech, super-fabulous computronic wonderfulness, we are no less feudal, no less violent, no less arbitrary and undemocratic; which is to say, geeks have progressed, have seen the light and the way, but the rest of society—and especially management and marketing—have not. In this sense, Reformation allegories are stories of how “things never change.”
But the most compelling use of the Protestant Reformation as usable past comes in the more detailed understandings geeks have of the political economy of information technology. The allegorization of the Catholic Church with Microsoft, for instance, is a frequent component, as in this brief message regarding start-up key combinations in the Be operating system: “These secret handshakes are intended to reinforce a cabalistic high priesthood and should not have been disclosed to the laity. Forget you ever saw this post and go by [sic] something from Microsoft.”
More generally, large corporations like IBM, Oracle, or Microsoft are made to stand in for Catholicism, while bureaucratic congresses and parliaments with their lobbyists take on the role of absolutist monarchs and their cronies. Geeks can then see themselves as fighting to uphold Christianity (true capitalism) against the church (corporations) and as reforming a way of life that is corrupted by church and monarchs, instead of overthrowing through revolution a system they believe to be flawed. There is a historically and technically specific component of this political economy in which it is in the interest of corporations like IBM and Microsoft to keep users “locked as securely to Big Blue as a manacled wretch in a medieval dungeon.”
Such stories appeal because they bypass the language of modern American politics (liberal, conservative, Democrat, Republican) in which there are only two sides to any issue. They also bypass an argument between capitalism and socialism, in which if you are not pro-capitalism you must be a communist. They are stories that allow the more pragmatic of the geeks to engage in intervention and reformation, rather than revolution. Though I’ve rarely heard it articulated so bluntly, the allegory often implies that one must “save capitalism from the capitalists,” a sentiment that implies at least some kind of human control over capitalism.
In fact, the allegorical use of the Reformation and the church generates all kinds of clever comparisons. A typical description of such comparisons might go like this: the Catholic Church stands in for large, publicly traded corporations, especially those controlling large amounts of intellectual property (the granting of which might roughly be equated with the ceremonies of communion and confession) for which they depend on the assistance and support of national governments. Naturally, it is the storied excesses of the church—indulgences, liturgical complexity, ritualistic ceremony, and corruption—which make for easy allegory. Modern corporations can be figured as a small, elite papal body with theologians (executives and their lawyers, boards of directors and their lawyers), who command a much larger clergy (employees), who serve a laity (consumers) largely imagined to be sinful (underspending on music and movies—indeed, even “stealing” them) and thus in need of elaborate and ritualistic cleansing (advertising and lawsuits) by the church. Access to grace (the American Dream) is mediated only by the church and is given form through the holy acts of shopping and home improvement. The executives preach messages of damnation to the government, messages most government officials are all too willing to hear: do not tamper with our market share, do not affect our pricing, do not limit our ability to expand these markets. The executives also offer unaccountable promises of salvation in the guise of deregulation and the American version of “reform”—the demolition of state and national social services. Government officials in turn have developed their own “divine right of kings,” which justifies certain forms of manipulation (once called “elections”) of succession. Indulgences are sold left and right by lobbyists or industry associations, and the decrees of the papacy evidence little but full disconnection from the miserable everyday existence of the flock.
In fact, it is remarkable how easy such comparisons become the more details of the political economy of information one learns. But allegories of the Reformation and clerical power can lead easily to cynicism, which should perhaps be read in this instance as evidence of political disenfranchisement, rather than a lapse in faith. And yet the usable pasts of these reformation-minded modern monks and priests crop up regularly not only because they provide relief from technical chatter but also because they explain a political, technical, legal situation that does not have ready-to-narrate stories. Geeks live in a world finely controlled by corporate organizations, mass media, marketing departments, and lobbyists, yet they share a profound distrust of government regulation—they need another set of just-so stories to make sense of it. The standard unusable pasts of the freeing of markets, the inevitability of capitalism and democracy, or more lately, the necessity of security don’t do justice to their experience.
Allegories of Reformation are stories that make sense of the political economy of information. But they also have a more precise use: to make sense of the distinction between power and control. Because geeks are “closer to the machine” than the rest of the laity, one might reasonably expect them to be the ones in power. This is clearly not the case, however, and it is the frustrations and mysteries by which states, corporations, and individuals manipulate technical details in order to shift power that often earn the deepest ire of geeks. Control, therefore, includes the detailed methods and actual practices by which corporations, government agencies, or individuals attempt to manipulate people (or enroll them to manipulate themselves and others) into making technical choices that serve power, rather than rationality, liberty, elegance, or any other geekly concern.
Consider the subject of evil. During my conversations with Sean Doyle in the late 1990s, as well as with a number of other geeks, the term evil was regularly used to refer to some kind of design or technical problem. I asked Sean what he meant.
SD: [Evil is] just a term I use to say that something’s wrong, but usually it means something is wrong on purpose, there was agency behind it. I can’t remember [the example you gave] but I think it may have been some GE equipment, where it has this default where it likes to send things in its own private format rather than in DICOM [the radiology industry standard for digital images], if you give it a choice. I don’t know why they would have done something like that, it doesn’t solve any backward compatibility problem, it’s really just an exclusionary sort of thing. So I guess there’s Evil like that. . . .
CK: One of the other examples that you had . . . was something with Internet Explorer 3.0?
SD: Yes, oh yes, there are so many things with IE3 that are completely Evil. Like here’s one of them: in the http protocol there’s a thing called the “user agent field” where a browser announces to the server who it is. If you look at IE, it announces that it is Mozilla, which is the [code-name for] Netscape. Why did they do this? Well because a lot of the web servers were sending out certain code that said, if it were Mozilla they would serve the stuff down, [if not] they would send out something very simple or stupid that would look very ugly. But it turned out that [IE3, or maybe IE2] didn’t support things when it first came out. Like, I don’t think they supported tables, and later on, their versions of Javascript were so different that there was no way it was compatible—it just added tremendous complexity. It was just a way of pissing on the Internet and saying there’s no law that says we have to follow these Internet standards. We can do as we damn well please, and we’re so big that you can’t stop us. So I view it as Evil in that way. I mean they obviously have the talent to do it. They obviously have the resources to do it. They’ve obviously done the work, it’s just that they’ll have this little twitch where they won’t support a certain MIME type or they’ll support some things differently than others.
CK: But these kinds of incompatibility issues can happen as a result of a lack of communication or coordination, which might involve agency at some level, right?
SD: Well, I think of that more as Stupidity than Evil [laughter]. No, Evil is when there is an opportunity to do something, and an understanding that there is an opportunity to, and resources and all that—and then you do something just to spite the other person. You know I’m sure it’s like in messy divorces, where you would rather sell the property at half its value rather than have it go to the other person.
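The user-agent trick Sean describes is concrete enough to sketch. Servers of the era branched on whether the browser announced itself as “Mozilla” (Netscape’s code name) in the http User-Agent header, so Internet Explorer adopted a “Mozilla/… (compatible; MSIE …)” string to pass the same check. The sketch below is illustrative only: the function name and page strings are my own invention, not any real server’s code, though the header values follow the historical pattern.

```python
# A minimal sketch of 1990s-era "user-agent sniffing." The server decides
# which version of a page to serve based solely on what the browser claims
# to be in the User-Agent header of the HTTP request.

def choose_page(user_agent: str) -> str:
    """Serve the 'rich' page only to browsers announcing themselves as Mozilla."""
    if user_agent.startswith("Mozilla/"):
        return "rich page with tables and JavaScript"
    return "plain fallback page"

# Netscape announced itself with the Mozilla token...
print(choose_page("Mozilla/3.0 (Win95; I)"))
# ...and IE claimed "Mozilla ... (compatible; MSIE ...)" to pass the same test,
# which is exactly the maneuver Sean calls "Evil."
print(choose_page("Mozilla/2.0 (compatible; MSIE 3.02; Windows 95)"))
# A browser without the token got the degraded page.
print(choose_page("Lynx/2.8.5rel.1"))
```

The point of the sketch is that the check is purely nominal: nothing verifies that the browser actually supports tables or Javascript, which is why a mislabeled User-Agent string was enough to subvert it.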
Sean relates control to power by casting the decisions of a large corporation in a moral light. Although the specific allegory of the Protestant Reformation does not operate here, the details do. Microsoft’s decision to manipulate Internet Explorer’s behavior stems not from a lack of technical sophistication, nor is it an “accident” of complexity, according to Sean, but is a deliberate assertion of economic and political power to corrupt the very details by which software has been created and standardized and is expected to function. The clear goal of this activity is conversion, the expansion of Microsoft’s flock through a detailed control of the beliefs and practices (browsers and functionality) of computer users. Calling Microsoft “Evil” in this way has much the same meaning as questioning the Catholic Church’s use of ritual, ceremony, literacy, and history—the details of the “implementation” of religion, so to speak.
Or, in the terms of the Protestant Reformation itself, the practices of conversion as well as those of liberation, learning, and self-help are central to the story. It is not an accident that many historians of the Reformation themselves draw attention to the promises of liberation through reformation “information technologies.”
✠ ©
One final way to demonstrate the effectiveness of these allegories—their ability to work on the minds of geeks—is to demonstrate how they have started to work on me, to demonstrate how much of a geek I have become—a form of participant allegorization, so to speak. The longer one considers the problems that make up the contemporary political economy of information technology that geeks inhabit, the more likely it is that these allegories will start to present themselves almost automatically—as, for instance, when I read The Story of A, a delightful book having nothing to do with geeks, a book about literacy in early America. The author, Patricia Crain, explains that the Christ’s cross (see above) was often used in the creation of hornbooks or battledores, small leather-backed paddles inscribed with the Lord’s Prayer and the alphabet, which were used to teach children their ABCs from as early as the fifteenth century until as late as the nineteenth: “In its early print manifestations, the pedagogical alphabet is headed not by the letter A but by the ‘Christ’s Cross’: ✠. . . . Because the alphabet is associated with Catholic Iconography, as if the two sets of signs were really part of one semiological system, one of the struggles of the Reformation would be to wrest the alphabet away from the Catholic Church.”
Here, allegorically, the Catholic Church’s control of the alphabet (like Microsoft’s programming of Internet Explorer to blur public standards for the Internet) is not simply ideological; it is not just a fantasy of origin or ownership planted in the fallow mental soil of believers, but in fact a very specific, very nonsubjective, and very media-specific normative tool of control. Crain explains further: “Today ✠ represents the imprimatur of the Catholic Church on copyright pages. In its connection to the early modern alphabet as well, this cross carries an imprimatur or licensing effect. This ‘let it be printed,’ however, is directed not to the artisan printer but to the mind and memory of the young scholar. . . . Like modern copyright, the cross authorizes the existence of the alphabet and associates the letters with sacred authorship, especially since another long-lived function of ✠ in liturgical missals is to mark gospel passages. The symbol both conveys information and generates ritual behavior.”
The © today carries as much if not more power, both ideologically and legally, as the cross of the Catholic church. It is the very symbol of authorship, even though in origin and in function it governs only ownership and rights. Magical thinking about copyright abounds, but one important function of the symbol ©, if not its legal implications, is to achieve the same thing as the Christ’s cross: to associate in the mind of the reader the ownership of a particular text (or in this case, piece of software) with a particular organization or person. Furthermore, even though the symbol is an artifact of national and international law, it creates an association not between a text and the state or government, but between a text and particular corporations, publishers, printers, or authors.
Like the Christ’s cross, the copyright symbol carries both a licensing effect (exclusive, limited or nonexclusive) and an imprimatur on the minds of people: “let it be imprinted in memory” that this is the work of such and such an author and that this is the property of such and such a corporation.
Without the allegory of the Protestant Reformation, the only available narrative for such evil—whether it be the behavior of Microsoft or of some other corporation—is that corporations are “competing in the marketplace according to the rules of capitalism” and thus when geeks decry such behavior, it’s just sour grapes. If corporations are not breaking any laws, why shouldn’t they be allowed to achieve control in this manner? In this narrative there is no room for a moral evaluation of competition—anything goes, it would seem. To claim that Microsoft is simply playing by the rules of capitalism puts everyone else into either the competitor box or the noncompetitor box (the state and other noncompetitive organizations). Using the allegory of the Protestant Reformation, on the other hand, gives geeks a way to make sense of an unequal distribution among competing powers—between large and small corporations, and between market power and the details of control. It provides an alternate imagination against which to judge the technically and legally specific actions that corporations and individuals take, and to imagine forms of justified action in return.
Without such an allegory, geeks who oppose Microsoft are generally forced into the position of being anticapitalist or are forced to adopt the stance that all standards should be publicly generated and controlled, a position few wish to take. Indeed, many geeks would prefer a different kind of imaginary altogether—a recursive public, perhaps. Instead of an infrastructure subject to unequal distributions of power and shot through with “evil” distortions of technical control, there is, as geeks see it, the possibility of a self-leveling “level playing field,” an autotelic system of rules, both technical and legal, by which all participants are expected to compete equally. Even if it remains an imaginary, the allegory of the Protestant Reformation makes sense of (gives order to) the political economy of the contemporary information-technology world and allows geeks to conceive of their interests and actions according to a narrative of reformation, rather than one of revolution or submission. In the Reformation the interpretation or truth of Christian teaching was not primarily in question: it was not a doctrinal revolution, but a bureaucratic one. Likewise, geeks do not question the rightness of networks, software, or protocols and standards, nor are they against capitalism or intellectual property, but they do wish to maintain a space for critique and the moral evaluation of contemporary capitalism and competition.
Usable pasts articulate the conjunction of “operating systems and social systems,” giving narrative form to imaginations of moral and technical order. To say that there are no ready-to-narrate stories about contemporary political economy means only that the standard colloquial explanations of the state of the modern world do not do justice to the kinds of moral and technical imaginations of order that geeks possess by virtue of their practices. Geeks live in, and build, one kind of world—a world of software, networks, and infrastructures—but they are often confronted with stories and explanations that simply don’t match up with their experience, whether in newspapers and on television, or among nongeek friends. To many geeks, proselytization seems an obvious route: why not help friends and neighbors to understand the hidden world of networks and software, since, they are quite certain, it will come to structure their lives as well?
Geeks gather through the Internet and, like a self-governing people, possess nascent ideas of independence, contract, and constitution by which they wish to govern themselves and resist governance by others.
Geeks live in specific ways in time and space. They are not just users of technology, or a “network society,” or a “virtual community,” but embodied and imagining actors whose affinity for one another is enabled in new ways by the tools and technologies they have such deep affective connections to. They live in this-network-here, a historically unique form grounded in particular social, moral, national, and historical specificities which nonetheless relates to generalities such as progress, technology, infrastructure, and liberty. Geeks are by no means of one mind about such generalities though, and they often have highly developed means of thinking about them.
Foucault’s article “What Is Enlightenment?” captures part of this problematic. For Foucault, Kant’s understanding of modernity was an attempt to rethink the relationship between the passage of historical time and the subjective relationship that individuals have toward it.
Thinking back on Kant’s text, I wonder whether we may not envisage modernity as an attitude rather than as a period of history. And by “attitude,” I mean a mode of relating to contemporary reality; a voluntary choice made by certain people; in the end, a way of thinking and feeling; a way, too, of acting and behaving that at one and the same time marks a relation of belonging and presents itself as a task. No doubt a bit like what the Greeks called an ethos. And consequently, rather than seeking to distinguish the “modern era” from the “premodern” or “postmodern,” I think it would be more useful to try to find out how the attitude of modernity, ever since its formation, has found itself struggling with attitudes of “countermodernity.”
In thinking through how geeks understand the present, the past, and the future, I pose the question of whether they are “modern” in this sense. Foucault makes use of Baudelaire as his foil for explaining in what the attitude of modernity consists: “For [Baudelaire,] being modern . . . consists in recapturing something eternal that is not beyond the present, or behind it, but within it.”
The questions I raise here are also those of politics in a classical sense: Are the geeks I discuss bound by an attitude toward the present that concerns such things as the relationship of the public to the private and the social (à la Hannah Arendt), the relationship [pg 79] of economics to liberty (à la John Stuart Mill and John Dewey), or the possibilities for rational organization of society through the application of scientific knowledge (à la Friedrich Hayek or Foucault)? Are geeks “enlightened”? Are they Enlightenment rationalists? What might this mean so long after the Enlightenment and its vigorous, wide-ranging critiques? How is their enlightenment related to the technical and infrastructural commitments they have made? Or, to put it differently, what makes enlightenment newly necessary now, in the milieu of the Internet, Free Software, and recursive publics? What kinds of relationships become apparent when one asks how these geeks relate their own conscious appreciation of the history and politics of their time to their everyday practices and commitments? Do geeks despise the present?
Polymaths and transhumanists speak differently about concepts like technology, infrastructure, networks, and software, and they have different ideas about their temporality and relationship to progress and liberty. Some geeks see technology as one kind of intervention into a constituted field of organizations, money, politics, and people. Some see it as an autonomous force made up of humans and impersonal forces of evolution and complexity. Different geeks speak about the role of technology and its relationship to the present and future in different ways, and how they understand this relationship reflects their own rich sense of the complex technical and political environment they live and work in.
Polymaths

Polymathy is “avowed dilettantism,” not extreme intelligence. It results from a curiosity that seems to grip a remarkable number of people who spend their time on the Internet and from the basic necessity of being able to evaluate and incorporate sometimes quite disparate fields of knowledge in order to build workable software. Polymathy inevitably emerges in the context of large software and networking projects; it is a creature of constraints, a process bootstrapped by the complex sediment of technologies, businesses, people, money, and plans. It might also be posed in the negative: bad software design is often the result of not enough avowed dilettantism. Polymaths must know a very large and wide range of things in order to intervene in an existing distribution of machines, people, practices, and places. They must have a detailed sense of the present, and the project of the present, in order to imagine how the future might be different.
My favorite polymath is Sean Doyle. Sean built the first versions of a piece of software that forms the centerpiece of the radiological-image-management company Amicas. In order to build it Sean learned the following: Java, to program it; the mathematics of wavelets, to encode the images; the workflow of hospital radiologists and the manner in which they make diagnoses from images, to make the interface usable; several incompatible databases and the SQL database language, to build the archive and repository; and manual after manual of technical standards, the largest and most frightening of which was the Digital Imaging and Communications in Medicine (DICOM) standard for radiological images. Sean also read Science and Nature regularly, looking for inspiration about interface design; he read books and articles about imaging very small things (mosquito knees), very large things (galaxies and interstellar dust), very old things (fossils), and very pretty things (butterfly-wing patterns as a function of developmental pathways). Sean also introduced me to Tibetan food, to Jan Švankmajer films, to Open Source Software, to cladistics and paleoherpetology, to Disney’s scorched-earth policy with respect to culture, and to many other awesome things.
Sean is clearly an unusual character, but not that unusual. Over the years I have met many people with a similar range and depth of knowledge (though rarely with Sean’s humility, which does set him apart). Polymathy is an occupational hazard for geeks. There is no sense in which a good programmer, software architect, or information architect simply specializes in code. Specialization is seen not as an end in itself, but rather as a kind of technical prerequisite before other work—the real work—can be accomplished. The real work is the design, the process of inserting usable software into a completely unfamiliar amalgamation of people, organizations, machines, and practices. Design is hard work, whereas the technical stuff—like choosing the right language or adhering to a standard or finding a ready-made piece of code to plug in somewhere—is not.
It is possible for Internet geeks and software architects to think this way in part because so many of the technical issues they face are both extremely well defined and very easy to address with a quick search and download. It is easy to be an avowed dilettante in the age of mailing lists, newsgroups, and online scientific publishing. I myself have learned whole swaths of technical practices in this manner, but I have designed no technology of note. [pg 81]
Sean’s partner in Amicas, Adrian Gropper, also fits the bill of polymath, though he is not a programmer. Adrian, a physician and a graduate of MIT’s engineering program, might be called a “high-functioning polymath.” He scans the horizon of technical and scientific accomplishments, looking for ways to incorporate them into his vision of medical technology qua intervention. Sean mockingly calls these “delusions,” but both agree that Amicas would be nowhere without them. Adrian and Sean exemplify how polymaths understand technology, intervention, design, and infrastructure: as a particular form of pragmatic intervention, a progress achieved through deliberate, piecemeal re-formation of existing systems. As Adrian comments:
I firmly believe that in the long run the only way you can save money and improve healthcare is to add technology. I believe that more strongly than I believe, for instance, that if people invent better pesticides they’ll be able to grow more rice, and it’s for the universal good of the world to be able to support more people. I have some doubt as to whether I support people doing genetic engineering of crops and pesticides as being “to the good.” But I do, however, believe that healthcare is different in that in the long run you can impact both the cost and quality of healthcare by adding technology. And you can call that a religious belief if you want, it’s not rational. But I guess what I’m willing to say is that traditional healthcare that’s not technology-based has pretty much run out of steam.
In this conversation, the “technological” is restricted to the novel things that can make healthcare less costly (i.e., cost-reducing, not cost-cutting), ease suffering, or extend life. Certain kinds of technological intervention are either superfluous or even pointless, and Adrian can’t quite identify this “class”—it isn’t “technology” in general, but it includes some kinds of things that are technological. What is more important is that technology does not solve anything by itself; it does not obviate the political problems of healthcare rationing: “Now, however, you get this other problem, which is that the way that healthcare is rationed is through the fear of pain, financial pain to some extent, but physical pain; so if you have a technology that, for instance, makes it relatively painless to fix . . . I guess, bluntly put, it’s cheaper to let people die in most cases, and that’s just undeniable. So what I find interesting in all of this, is that most people who are dealing with the politics of healthcare [pg 82] resource management don’t want to have this discussion, nobody wants to talk about this, the doctors don’t want to talk about it, because it’s too depressing to talk about the value of. . . . And they don’t really have a mandate to talk about technology.”
Adrian’s self-defined role in this arena is as a nonpracticing physician who is also an engineer and an entrepreneur—hence, his polymathy has emerged from his attempts to translate between doctors, engineers, and businesspeople. His goal is twofold: first, to create technologies that save money and improve the allocation of healthcare (and the great dream of telemedicine concerns precisely this goal: the reallocation of the most valuable asset, individuals and their expertise); second, to raise the level of discussion in the business-cum-medical world about the role of technology in managing healthcare resources. Polymathy is essential, since Adrian’s twofold mission requires understanding the language and lives of at least three distinct groups who work elbow to elbow in healthcare: engineers and software architects; doctors and nurses; and businessmen.
Technology has two different meanings according to Adrian’s two goals: in the first case technology refers to the intervention by means of new technologies (from software, to materials, to electronics, to pharmaceuticals) in specific healthcare situations wherein high costs or limited access to care can be affected. Sometimes technology is allocated, sometimes it does the allocating. Adrian’s goal is to match his knowledge of state-of-the-art technology—in particular, Internet technology—with a specific healthcare situation and thereby effect a reorganization of practices, people, tools, and information. The tool Amicas created was distinguished by its clever use of compression, Internet standards, and cheap storage media to compete with much larger, more expensive, much more entrenched “legacy” and “turnkey” systems. Whether Amicas invented something “new” is less interesting than the nature of this intervention into an existing milieu. This intervention is what Adrian calls “technology.” For Amicas, the relevant technology—the important intervention—was the Internet, which Amicas conceived as a tool for changing the nature of the way healthcare was organized. Their goal was to replace the infrastructure of the hospital radiology department (and potentially the other departments as well) with the Internet. Amicas was able to confront and reform the practices of powerful, entrenched entities, from the administration of large [pg 83] hospitals to their corporate bedfellows, like HBOC, Agfa, Siemens, and GE.
With regard to raising the level of discussion, however, technology refers to a kind of political-rhetorical argument: technology does not save the world (nor does it destroy it); it only saves lives—and it does this only when one makes particular decisions about its allocation. Or, put differently, the means is technology, but the ends are still where the action is at. Thus, the hype surrounding information technology in healthcare is horrifying to Adrian: promises precede technologies, and the promises suggest that the means can replace the ends. Large corporations that promise “technology” but offer no real, hard interventions (Adrian’s first meaning of technology) that can be concretely demonstrated to reduce costs or improve allocation are simply a waste of resources. Such companies are doubly frustrating because they use “technology” as a blinder that allows people to not think about the hard problems (the ends) of allocation, equity, management, and organization; that is, they treat “technology” (the means) as if it were a solution as such.
Adrian routinely analyzes the rhetorical and practical uses of technology in healthcare with this kind of subtlety; clearly, such subtlety of thought is rare, and it sets Adrian apart as someone who understands that intervention into, and reform of, modern organizations and styles of thought has to happen through reformation—through the clever use of technology by people who understand it intimately—not through revolution. Reformation through technical innovation is opposed here to control through the consolidation of money and power.
In my observations, Adrian always made a point of making the technology—the software tools and picture-archiving system—easily accessible, easily demonstrable to customers. When talking to hospital purchasers, he often said something like “I can show you the software, and I can tell you the price, and I can demonstrate the problem it will solve.” In contrast, however, an array of enormous corporations with salesmen and women (usually called consultants) were probably saying something more like “Your hospital needs more technology, our corporation is big and stable—give us this much money and we will solve your problem.” For Adrian, the decision to “hold hands,” as he put it, with the comfortably large corporation was irrational if the hospital could instead purchase a specific technology that did a specific thing, for a real price. [pg 84]
Adrian’s reflections on technology are also reflections on the nature of progress. Progress is limited intervention structured by goals that are not set by the technology itself, even if entrepreneurial activity is specifically focused on finding new uses and new ideas for new technologies. But discussions about healthcare allocation—which Adrian sees as a problem amenable to certain kinds of technical solutions—are instead structured as if technology did not matter to the nature of the ends. It is a point Adrian resists: “I firmly believe that in the long run the only way you can save money and improve healthcare is to add technology.”
Sean is similarly frustrated by the homogenization of the concept of technology, especially when it is used to suggest, for instance, that hospitals “lag behind” other industries with regard to computerization, a complaint usually made either to instigate investment or to explain failures. Sean first objects to such a homogeneous notion of the “technological.”
I actually have no idea what that means, that it’s lagging behind. Because certainly in many ways in terms of image processing or some very high-tech things it’s probably way ahead. And if that means what’s on people’s desktops, ever since 19-maybe-84 or so when I arrived at MGH [Massachusetts General Hospital] there’s been a computer on pretty much everyone’s desktop. . . . It seems like most hospitals that I have been to seem to have a serious commitment to networks and automation, etcetera. . . . I don’t know about a lot of manufacturing industries—they might have computer consoles there, but it’s a different sort of animal. Farms probably lag really far behind, I won’t even talk about amusement parks. In some sense, hospitals are very complicated little communities, and so to say that this thing as a whole is lagging behind doesn’t make much sense.
He also objects to the notion that such a lag results in failures caused by technology, rather than by something like incompetence or bad management. In fact, it might be fair to say that, for the polymath, sometimes technology actually dissolves. Its boundaries are not easily drawn, nor are its uses, nor are its purported “unintended consequences.” On one side there are rules, regulations, protocols, standards, norms, and forms of behavior; on the other there are organizational structures, business plans and logic, human skills, and other machines. This complex milieu requires reform from within: it cannot be replaced wholesale; it cannot leapfrog [pg 85] other industries in terms of computerization, as intervention is always local and strategic; and it involves a more complex relationship to the project of the present than simply “lagging behind” or “leaping ahead.”
Polymathy—inasmuch as it is a polymathy of the lived experience of the necessity for multiple expertise to suit a situation—turns people into pragmatists. Technology is never simply a solution to a problem, but always part of a series of factors. The polymath, unlike the technophobe, can see when technology matters and when it doesn’t. The polymath has a very this-worldly approach to technology: there is neither mystery nor promise, only human ingenuity and error. In this manner, polymaths might better be described as Feyerabendians than as pragmatists (and, indeed, Sean turned out to be an avid reader of Feyerabend). The polymath feels there is no single method by which technology works its magic: it is highly dependent on rules, on patterned actions, and on the observation of contingent and contextual factors. Intervention into this already instituted field of people, machines, tools, desires, and beliefs requires a kind of scientific-technical genius, but it is hardly single, or even autonomous. This version of pragmatism is, as Feyerabend sometimes refers to it, simply a kind of awareness: of standards, of rules, of history, of possibility.
Sean and Adrian are avowedly scientific and technical people; like Feyerabend, they assume that their interlocutors believe in good science and the benefits of progress. They have little patience for Luddites, for new-agers, for religious intolerance, or for any other non-Enlightenment-derived attitude. They do not despise the present, because they have a well-developed sense of how provisional the conventions of modern technology and business are. Very little is sacred, and rules, when they exist, are fragile. Breaking them pointlessly is immodest, but innovation is often itself seen as a way of transforming a set of accepted rules or practices to other ends. Progress is limited intervention.
How ironic, and troubling, then, to realize that Sean’s and Adrian’s company would eventually become the kind of thing they started Amicas in order to reform. Outside of the limited intervention, certain kinds of momentum seem irresistible: the demand for investment and funding rounds, the need for “professional management,” [pg 86] and the inertia of already streamlined and highly conservative purchasing practices in healthcare. For Sean and Adrian, Amicas became a failure in its success. Nonetheless, they remain resolutely modern polymaths: they do not despise the present. As described in Kant’s “What Is Enlightenment?” the duty of the citizen is broken into public and private: on the one hand, a duty to carry out the responsibilities of an office; on the other, a duty to offer criticism where criticism is due, as a “scholar” in a reading public. Sean’s and Adrian’s endeavor, in the form of a private start-up company, might well be understood as the expression of the scholar’s duty to offer criticism, through the creation of a particular kind of technical critique of an existing (and, by their assessment, ethically suspect) healthcare system. The mixture of private capital, public institutions, citizenship, and technology, however, is something Kant could not have known—and Sean’s and Adrian’s technical pursuits must be understood as something more: a kind of modern civic duty, in the service of liberty and responding to the particularities of contemporary technical life.
Transhumanists

Polymathy is born of practical and pragmatic engagement with specific situations, and in some ways is demanded by such exigencies. Opposite polymathy, however, and leaning more toward a concern with the whole, with totality and the universal, are attitudes that I refer to by the label transhumanism, which concerns the mode of belief in the Timeline of Technical Progress.
Transhumanism, the movement and the philosophy, focuses on the power of technology to transcend the limitations of the human body as currently evolved. Subscribers believe—but already this is the wrong word—in the possibility of downloading consciousness onto silicon, of cryobiological suspension, of the near emergence of strong artificial intelligence and of various other forms of technical augmentation of the human body for the purposes of achieving immortality—or at least, much more life.
Various groups could be reasonably included under this label. There are the most ardent purveyors of the vision, the Extropians; there is a broad class of people who call themselves transhumanists; there is a French-Canadian subclass, the Raelians, who are more an alien-worshiping cult than a strictly scientific one and are bitterly denounced by the first two; there are also the variety of cosmologists and engineers who do not formally consider themselves [pg 87] transhumanist, but whose beliefs participate in some way or another: Stephen Hawking, Frank Tipler and John Barrow (famous for their anthropic cosmological principle), Hans Moravec, Ray Kurzweil, Danny Hillis, and down the line through those who embrace the cognitive sciences, the philosophy of artificial intelligence, the philosophy of mind, the philosophy of science, and so forth.
Historically speaking, the line of descent is diffuse. Teilhard de Chardin is broadly influential, sometimes acknowledged, sometimes not (depending on the amount of mysticism allowed). A more generally recognized starting point is Julian Huxley’s article “Transhumanism” in New Bottles for New Wine.
For many observers, transhumanists are a lunatic fringe, bounded on either side by alien abductees and Ayn Rand-spouting objectivists. However, like so much of the fringe, it merely represents in crystalline form attitudes that seem to permeate discussions more broadly, whether as beliefs professed or as beliefs attributed. Transhumanism, while probably anathema to most people, actually reveals a very specific attitude toward technical innovation, technical intervention, and political life that is widespread among technically adept individuals. It is a belief that also has everything to do with the timeline of progress and the role of technology in it.
The transhumanist understanding of technological progress can best be understood through the sometimes serious and sometimes playful concept of the “singularity,” popularized by the science-fiction writer and mathematician Vernor Vinge.
[Figure 1. Illustration © 2005 Ray Kurzweil. Modifications © 2007 by C. Kelty. Original work licensed under a Creative Commons Attribution License: http://en.wikipedia.org/wiki/Image:PPTCountdowntoSingularityLog.jpg]
In figure 1, on the left hand of the timeline, there is history, or rather, there is a string of technological inventions (by which is implied that previous inventions set the stage for later ones) spaced such that they produce a logarithmic curve that can look very much like the doomsday population curves that started to appear in the 1960s. Each invention is associated with a name or sometimes a nation. Beyond the edge of the graph to the right side is the future: history changes here from a series of inventions to an autonomous self-inventing technology associated not with individual inventors but with a complex system of evolutionary adaptation that includes technological as well as biological forms. It is a future in which “humans” are no longer necessary to the progress of science and technology: technology-as-extension-of-humans on the left, a Borg-like autonomous technical intelligence on the right. The fundamental [pg 89] operation in constructing the “singularity” is the “reasoned extrapolation” familiar to the “hard science fiction” writer or the futurist. One takes present technology as the initial condition for future possibilities and extrapolates based on the (haphazardly handled) evidence of past technical speed-up and change.
The position of the observer is always a bit uncertain, since he or she is naturally projected at the highest (or lowest, depending on your orientation) point of this curve, but one implication is clear: that the function or necessity of human reflection on the present will disappear at the same time that humans do, rendering enlightenment a quaint, but necessary, step on the route to superrational, transhuman immortality.
Strangely, the notion that technical progress accelerates seems to precede any sense of what the velocity of progress might mean in the first place; technology is presumed to exist in absolute time—from the Big Bang to the heat death of the universe—and not in any relationship with human life or consciousness. The singularity is always described from the point of view of a god who is not God. The fact of technological speed-up is generally treated as the most obvious thing in the world, reinforced by the constant refrain in the media of the incredible pace of change in contemporary society.
Why is the singularity important? Because it always implies that the absolute fact of technical acceleration—this knowing glance into the future—should order the kinds of interventions that occur in the present. It is not mute waiting or eschatological certainty that governs this attitude; rather, it is a mode of historical consciousness that privileges the inevitability of technological progress over the inevitability of human power. Only by looking into the future can one manipulate the present in a way that will be widely meaningful, an attitude that could be expressed as something like “Those who do not learn from the future are condemned to suffer in it.” Since it is a philosophy based on the success of human rationality and ingenuity, rationality and ingenuity are still clearly essential in the future. They lead, however, to a kind of posthuman state of constant technological becoming which is inconceivable to the individual human mind—and can only be comprehended by a transcendental intelligence that is not God.
Such is a fair description of some strands of transhumanism, and the reason I highlight them is to characterize the kinds of attitudes [pg 90] toward technology-as-intervention and the ideas of moral and technical order that geeks can evince. On the far side of polymathy, geeks are too close to the machine to see a big picture or to think about imponderable philosophical issues; on the transhuman side, by contrast, one is constantly reassessing the arcane details of everyday technical change with respect to a vision of the whole—a vision of the evolution of technology and its relationship to the humans that (for the time being) must create and attempt to channel it.
My favorite transhumanist is Eugen Leitl (who is, in fact, an authentic transhumanist and has been vice-chair of the World Transhumanist Association). Eugen is Russian-born, lives in Munich, and once worked in a cryobiology research lab. He is well versed in chemistry, nanotechnology, artificial-intelligence (AI) research, computational- and network-complexity research, artificial organs, cryobiology, materials engineering, and science fiction. He writes, for example,
If you consider AI handcoded by humans, yes. However, given considerable computational resources (~cubic meter of computronium), and using suitable start population, you can coevolve machine intelligence on a time scale of much less than a year. After it achieves about a human level, it is potentially capable of entering an autofeedback loop. Given that even autoassembly-grade computronium is capable of running a human-grade intellect in a volume ranging from a sugar cube to an orange at a speed ranging from 10^4 . . . 10^6 it is easy to see that the autofeedback loop has explosive dynamics.
(I hope above is intelligible, I’ve been exposed to weird memes for far too long).
Eugen is also a polymath (and an autodidact to boot), but in the conventional sense. Eugen’s polymathy is an avocational necessity: transhumanists need to keep up with all advances in technology and science in order to better assess what kinds of human-augmenting or human-obsolescing technologies are out there. It is not for work in this world that the transhumanist expands his or her knowledge, nor quite for the next, but for a “this world” yet to arrive.
Eugen and I were introduced during the Napster debates of 2001, which seemed at the time to be a knock-down, drag-out conflagration, but Eugen has been involved in so many online flame wars that he probably experienced it as a mere blip in an otherwise constant struggle with less-evolved intelligences like mine. Nonetheless, [pg 91] it was one of the more clarifying examples of how geeks think, and think differently, about technology, infrastructure, networks, and software. Transhumanism has no truck with old-fashioned humanism.
> From: Ramu Narayan . . .
> I don’t like the notion of technology as an unstoppable force with a will of its own that has nothing to do with the needs of real people.
[Eugen Leitl:] Emergent large-scale behaviour is nothing new. How do you intend to control individual behaviour of a large population of only partially rational agents? They don’t come with too many convenient behaviour-modifying hooks (pheromones as in social insects, but notice menarche-synch in females sharing quarters), and for a good reason. The few hooks we have (mob, war, politics, religion) have been notoriously abused, already. Analogous to apoptosis, metaindividuals may function using processes deletorious[sic] to its components (us).
Eugen’s understanding of what “technological progress” means is sufficiently complex to confound most of his interlocutors. For one thing, surprisingly, it is not exactly inevitable. The manner in which Leitl argues with people is usually a kind of machine-gun prattle of coevolutionary, game-theoretic, cryptographic sorites. Eugen piles on the scientific and transhumanist reasoning, and his interlocutors slowly peel away from the discussion. But it isn’t craziness, hype, or half-digested popular science—Eugen generally knows his stuff—it just fits together in a way that almost no one else can quite grasp. Eugen sees the large-scale adoption and proliferation of technologies (particularly self-replicating molecular devices and evolutionary software algorithms) as a danger that transcends all possibility of control at the individual or state level. Billions of individual decisions do not “average” into one will, but instead produce complex dynamics and hang perilously on initial conditions. In discussing the possibility of the singularity, Eugen suggests, “It could literally be a science-fair project [that causes the singularity].” If Francis Bacon’s understanding of the relation between Man and Nature was that of master and possessor, Eugen’s is its radicalization: Man is a powerful but ultimately arbitrary force in the progress of Life-Intelligence. Man is fully incorporated into Nature in this story, [pg 92] so much so that he dissolves into it. Eugen writes, when “life crosses over into this petri dish which is getting readied, things will become a lot more lively. . . . I hope we’ll make it.”
For Eugen, the arguments about technology that the polymaths involve themselves in couldn’t be more parochial. They are important only insofar as they will set the “initial conditions” for the grand coevolutionary adventure of technology ahead of us. For the transhumanist, technology does not dissolve. Instead, it is the solution within which humans are dissolved. Suffering, allocation, decision making—all these are inessential to the ultimate outcome of technological progress; they are worldly affairs, even if they concern life and death, and as such, they can be either denounced or supported, but only with respect to fine-tuning the acceleration toward the singularity. For the transhumanist, one can’t fight the inevitability of technical evolution, but one certainly can contribute to it. Technical progress is thus both law-like and subject to intelligent manipulation; technical progress is inevitable, but only because of the power of massively parallel human curiosity.
Considered as one of the modes of thought present in this-worldly political discussion, the transhumanist (like the polymath) turns technology into a rhetorical argument. Technology is the more powerful political argument because “it works.” It is pointless to argue “about” technology, but not pointless to argue through and with it. It is pointless to talk about whether stopping technology is good or bad, because someone will simply build a technology that will invalidate your argument.
There is still a role for technical invention, but it is strongly distinguished from political, legal, cultural, or social interventions. For most transhumanists, there is no rhetoric here, no sophistry, just the pure truth of “it works”: the pure, undeniable, unstoppable, and undeconstructable reality of technology. In the transhumanist attitude, “working code” possesses a reality that other assertions about the world do not. Extreme transhumanism replaces the life-world with the world of the computer, where bad (ethically bad) ideas won’t compile. Less-staunch versions of transhumanism simply allow the confusion to operate opportunistically: the progress of technology is unquestionable (omniscient), and only its effects on humans are worth investigating.
The pure transhumanist, then, is a countermodern. The transhumanist despises the present for its intolerably slow descent into the [pg 93] future of immortality and superhuman self-improvement, and fears destruction because of too much turbulent (and ignorant) human resistance. One need have no individual conception of the present, no reflection on or synthetic understanding of it. One need only contribute to it correctly. One might even go so far as to suggest that forms of reflection on the present that do not contribute to technical progress endanger the very future of life-intelligence. Curiosity and technical innovation are not historical features of Western science, but natural features of a human animal that has created its own conditions for development. Thus, the transhumanists’ historical consciousness consists largely of a timeline that makes ordered sense of our place on the progress toward the singularity.
The moral of the story is not just that technology determines history, however. Transhumanism is a radically antihumanist position in which human agency or will—if it even exists—is not ontologically distinct from the agency of machines and animals and life itself. Even if it is necessary to organize, do things, make choices, participate, build, hack, innovate, this does not amount to a belief in the ability of humans to control their destiny, individually or collectively. In the end, the transhumanist cannot quite pinpoint exactly what part of this story is inevitable—except perhaps the story itself. Technology does not develop without millions of distributed humans contributing to it; humans cannot evolve without the explicit human adoption of life-altering and identity-altering technologies; evolution cannot become inevitable without the manipulation of environments and struggles for fitness. As in the dilemma of Calvinism (wherein one cannot know if one is saved by one’s good works), the transhumanist must still create technology according to the particular and parochial demands of the day, but this by no means determines the eventual outcome of technological progress. It is a sentiment well articulated by Adam Ferguson and highlighted repeatedly by Friedrich Hayek with respect to human society: “the result of human action, but not the execution of any human design.”
To many observers, geeks exhibit a perhaps bewildering mix of liberalism, libertarianism, anarchism, idealism, and pragmatism, [pg 94] yet observers tend to place them firmly into one or another constituted political category (liberal, conservative, socialist, capitalist, neoliberal, etc.). By showing how geeks make use of the Protestant Reformation as a usable past and how they occupy a spectrum of beliefs about progress, liberty, and intervention, I hope to resist this urge to classify. Geeks are an interesting case precisely because they are involved in the creation of new things that change the meaning of our constituted political categories. Their politics are mixed up and combined with the technical details of the Internet, Free Software, and the various and sundry organizations, laws, people, and practices that they deal with on a regular basis: operating systems and social systems. But such mixing does not make geeks merely technoliberals or technoconservatives. Rather, it reveals how they think through the specific, historically unique situation of the Internet to the general problems of knowledge and power, liberty and enlightenment, progress and intervention.
Geeks are not a kind of person: geeks are geeks only insofar as they come together in new, technically mediated forms of their own creation and in ways that are not easy to identify (not language, not culture, not markets, not nations, not telephone books or databases). While their affinity is very clearly constituted through the Internet, the Internet is not the only reason for that affinity. It is this collective affinity that I refer to as a recursive public. Because it is impossible to understand this affinity by trying to identify particular types of people, it is necessary to turn to historically specific sets of practices that form the substance of their affinity. Free Software is an exemplary case—perhaps the exemplar—of a recursive public. To understand Free Software through its changing practices not only gives better access to the life-world of the geek but also reveals how the structure of a recursive public comes into being and manages to persist and transform, how it can become a powerful form of life that extends its affinities beyond technophile geeks into the realms of ordinary life.
Copyright: 2008 Duke University Press
Printed in the United States of America on acid-free paper ∞
Designed by C. H. Westmoreland
Typeset in Charis (an Open Source font) by Achorn International
Library of Congress Cataloging-in-Publication data and republication acknowledgments appear on the last printed pages of this book.
License: Licensed under the Creative Commons Attribution-NonCommercial-Share Alike License, available at https://creativecommons.org/licenses/by-nc-sa/3.0/ or by mail from Creative Commons, 559 Nathan Abbott Way, Stanford, Calif. 94305, U.S.A. "NonCommercial" as defined in this license specifically excludes any sale of this work or any portion thereof for money, even if sale does not result in a profit by the seller or if the sale is by a 501(c)(3) nonprofit or NGO.
Duke University Press gratefully acknowledges the support of HASTAC (Humanities, Arts, Science, and Technology Advanced Collaboratory), which provided funds to help support the electronic interface of this book.
Two Bits is accessible on the Web at twobits.net.