Problems in Christopher Kelty’s Two Bits: The Cultural Significance of Free Software

Christopher M. Kelty examines the cultural impact of the free software and open source movements in his book Two Bits: The Cultural Significance of Free Software (Duke University Press, 2008), which is freely available online as a PDF and as zipped HTML. Kelty is an anthropologist, which makes his analysis distinctive and interesting, but I spotted a number of problems in his text that are worth pointing out. In my opinion, Kelty does a poor job of analyzing the motivations that drive people like me to promote the use of free software.

Below are some of the problematic passages from his book, each followed by my comments:

Free Software, as its ambiguous moniker suggests, is both free from constraints and free of charge. Such characteristics seem to violate economic logic and the principles of private ownership and individual autonomy, yet there are tens of millions of people creating this software and hundreds of millions more using it. Why? Why now? And most important: how? (p. 1-2)

First of all, Free Software is not necessarily free of charge: in the case of Red Hat, SuSE, and Oracle Linux it costs roughly as much as Microsoft’s operating system, and IBM’s services based on Free Software are quite expensive. It can be free of charge for many end users, but it isn’t for most companies that use it in mission-critical applications.

Second, the number of developers of free software is actually quite small, probably fewer than a couple hundred thousand. There may be millions of software developers who use free software languages and tools such as PHP, Python, Ruby, GCC, Apache, and Firefox, but they do not release their own code under a free software license.

Likewise, geeks do not question the rightness of networks, software, or protocols and standards, nor are they against capitalism or intellectual property, but they do wish to maintain a space for critique and the moral evaluation of contemporary capitalism and competition. (p. 76)

Advocates of free software (whom Kelty calls “geeks”) do question standards and protocols, because they demand that these be free and open. Likewise, they do question software and networks when they aren’t free and open.

Furthermore, free software advocates have a problem with the term “intellectual property,” since it lumps together copyright, patent, trade secret, and trademark law, which are very different things. A fundamental questioning of patent law is at the heart of the free software movement. Moreover, free software uses copyright law but turns it around to do the exact opposite of what it was intended to do, so advocates are subverting copyright at the same time that they use it.
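As a concrete illustration of this inversion (my own generic sketch, paraphrasing the FSF’s recommended per-file notice rather than anything in Kelty’s text), a developer asserts copyright at the top of each source file precisely in order to grant the freedoms that copyright law would otherwise reserve to the author:

```java
/*
 * Copyright (C) <year> <name of author>
 *
 * This program is free software: you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 */
```

Without the copyright claim the license would have no legal force; with it, the same law that normally restricts copying becomes the mechanism that guarantees the freedom to copy, modify, and share.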

Rather than define what makes Free Software free or Open Source open, Two Bits treats the five practices as parts of a collective technical experimental system: each component has its own history, development, and temporality, but they come together as a package and emerge as a recognizable thing around 1998–99. (p. 98)

It was in 1998–99 that geeks came to recognize that they were all doing the same thing and, almost immediately, to argue about why. (p. 98)

Free/Libre/Open Source software reached wider public consciousness in 1998–99, but it would be wrong to say that it emerged “as a recognizable thing” at that point. For the people involved in it, it had clear, recognizable practices and was considered a movement long before the open sourcing of Netscape Navigator. I remember attending a talk in January 1996 at Grinnell College, a small liberal arts college in Iowa, where it was discussed as a movement.

It would be more accurate to say that 1998–99 was when the rest of the world started taking notice and when free software began to influence areas of culture outside of software. Put another way, 1998–99 is the period when free software for hackers became “open source” for “geeks”.

Prior to 1998, Free Software referred either to the Free Software Foundation (and the watchful, micromanaging eye of Stallman) or to one of thousands of different commercial, avocational, or university-research projects, processes, licenses, and ideologies that had a variety of names: sourceware, freeware, shareware, open software, public domain software, and so on. The term Open Source, by contrast, sought to encompass them all in one movement. (p. 99)

Open Source has nothing to do with freeware and shareware, and it doesn’t encompass most forms of sourceware or open software (for which I have never seen a clear definition). The Open Source Definition does encompass a few more types of software than the FSF’s definition of free software, but it still excludes freeware, shareware, and three of the five licenses which Microsoft calls “Shared Source”. Eric S. Raymond clearly states that “Open Source” was essentially a rebranding effort, rather than an attempt to fundamentally change the definition of “free software”.

The Java-based Navigator (called Javagator, of course) created a problem, however, with respect to the practice of keeping source code secret. Whenever a program in Java was run, it created a set of “bytecodes” that were easy to reverse-engineer because they had to be transmitted from the server to the machine that ran the program and were thus visible to anyone who might know how and where to look. (p. 101)

The Java bytecode for the JavaGator is installed on the local machine; it isn’t transmitted from the server to the local machine the way the HTML content of a web page is. Still, the larger point is correct: programmers could reverse-engineer the JavaGator by examining its bytecode.
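For readers unfamiliar with Java: compiled .class files retain class, method, and field names, which is why bytecode is so easy to reverse-engineer. A minimal sketch (my own hypothetical example, not from the book):

```java
// Greeter.java -- compiling this with javac produces Greeter.class,
// a bytecode file that ships with the application itself.
public class Greeter {
    public String greet(String name) {
        return "Hello, " + name;
    }
}
```

Running javap -c Greeter on the compiled class prints human-readable bytecode instructions, and decompilers of that era (Mocha, for example) could reconstruct near-original Java source from such files, so distributing a Java application in bytecode form was close to distributing its source.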

If two radically opposed ideologies can support people engaged in identical practices, then it seems obvious that the real space of politics and contestation is at the level of these practices and their emergence. These practices emerge as a response to a reorientation of power and knowledge, a reorientation somewhat impervious to conventional narratives of freedom and liberty, or to pragmatic claims of methodological necessity or market-driven innovation. (p. 116)

The claim that the practices of free/open source developers are the “real space of politics and contestation” is where Kelty needs to give concrete examples. What kind of “politics and contestation” is Kelty talking about? From what I have observed, both the free software advocates and the open source advocates generally agree on practices, so there is little being contested at that level. Eric S. Raymond and Richard M. Stallman collaborated for years on the development of Emacs despite having differing philosophies. The one major point of “contestation” was the use of BitKeeper to develop the Linux kernel, and eventually Linus Torvalds developed Git, which resolved the conflict.

I would argue that the free software movement arose less out of a “reorientation of power and knowledge” and more as a reaction to power and knowledge being taken away from computer programmers with the advent of proprietary binary executables in the late 70s and early 80s. Before then, programmers had access to the source code and could borrow and share it at will, and it was the restriction of these rights that led to the GNU project and the GPL license. Look at how Stallman developed the ideas behind the GPL after James Gosling sold the rights to Gosling Emacs to a commercial software company, breaking his earlier promise that its code could be used in Stallman’s Emacs. Likewise, it was the increasing restrictions and rising fees that AT&T put on the UNIX code that led to the creation of BSD and its license.

OSI would be a “vendor-neutral” standard: vendors would create their own, secret implementations that could be validated by OSI and thereby be expected to interoperate with other OSI-validated systems. By stark contrast, the TCP/IP protocols were not published (in any conventional sense), nor were the implementations validated by a legitimate international-standards organization; instead, the protocols are themselves represented by implementations that allow connection to the network itself (where the TCP/IP protocols and implementations are themselves made available). The fact that one can only join the network if one possesses or makes an implementation of the protocol is generally seen as the ultimate in validation: it works. In this sense, the struggle between TCP/IP and OSI is indicative of a very familiar twentieth-century struggle over the role and extent of government planning and regulation (versus entrepreneurial activity and individual freedom), perhaps best represented by the twin figures of Friedrich Hayek and Maynard Keynes. In this story, it is Hayek’s aversion to planning and the subsequent privileging of spontaneous order that eventually triumphs, not Keynes’s paternalistic view of the government as a neutral body that absorbs or encourages the swings of the market. (p. 168)

First of all, the TCP/IP protocols are published as RFCs on the internet, which anyone can download free of charge, whereas the OSI protocols were only available as expensive printed standards from the ISO, so if anything TCP/IP was more widely published than OSI.

The TCP/IP protocols were developed by the US government and largely implemented either by government contractors or by state universities receiving government funding. Robert Kahn was working at DARPA’s Information Processing Techniques Office, and Vinton Cerf was doing DARPA-funded research at Stanford before joining DARPA himself, when the two designed TCP/IP between 1973 and 1978. TCP/IP was first implemented by a DARPA contractor, Bolt, Beranek and Newman, and that code was reworked by Berkeley’s Computer Systems Research Group, which was also funded by DARPA. Berkeley’s TCP/IP stack has been reused by most commercial UNIX variants, MS Windows, and Mac OS X.

When OSI was first discussed in 1978, the US government did not initially support it, but companies like IBM, Digital Equipment Corp., Honeywell, and GM did, along with the telecoms, which successfully lobbied the US government to adopt it in 1985.

The struggle between TCP/IP and OSI was, at the beginning, a struggle between a government program used by academics at universities and a plan backed by big companies. Both TCP/IP and OSI were planned, and neither was created by a “free market”; OSI was chosen by the big commercial businesses partly because TCP/IP was designed for academic use and commercial use of the network was prohibited before 1982. What doomed OSI was that it tried to open up its planning to many actors who couldn’t come to agreement, whereas TCP/IP was already running and its planning sessions were attended principally by academics who had no commercial interests to defend, so it was easier for them to agree.

The planning of TCP/IP was less hierarchical and less centralized than that of OSI, but in no way was TCP/IP a validation of the individualized market freedom advocated by Hayek, Friedman, etc. TCP/IP has succeeded over the years because people like Vint Cerf have successfully argued that access to it should be free and open to everyone, that it should not be dominated by large telecoms that want to undo net neutrality, and that it should not be stifled by government regulation; the real struggle has been to prevent powerful entities from controlling it. Turning the TCP/IP vs. OSI conflict into a tale of the individualized free market vs. centralized government planning is bizarre and doesn’t fit the history.

The failure of open systems reveals the centrality of the moral and technical order of intellectual property—to both technology and markets—and shows how a reliance on this imagination of order literally renders impossible the standardization of singular market infrastructure.

The failure of open systems among UNIX competitors in the late 1980s was caused by the fact that many of the competitors didn’t truly want a single standard; they wanted to exclude their rivals from the standard. That is very different from the way open standards are shared among free software projects today. The OpenDocument text format (ODT), for example, is used by half a dozen different programs, and many groups were invited to participate in its definition.

(3) Free Software is not an ethical stance, but a practical response to the revelation of these older problems; and (4) the best way to understand this response is to see it as a kind of public sphere, a recursive public that is specific to the technical and moral imaginations of order in the contemporary world of geeks. (p. 306)

I am not sure what Kelty means by “ethical” here, but I assume he is trying to say that free software wasn’t based on unchanging ethical principles, but was rather a flexible and practical response to a specific historical context. I can accept the argument that what we today know as the “hacker ethic” was actually an experimental response to the problems that confronted hackers in the late 70s and early 80s, but Steven Levy’s account of the hackers at MIT and Stanford suggests that many of the hackers of the 60s and 70s held abiding ethical principles that software should be freely shared and modified in a collaborative way. One could argue that the ethics did not change, but rather the practical ways that hackers defended those ethics changed, becoming codified in the GPL and the rhetoric of the FSF. I actually think the truth lies somewhere between the two arguments. The famous “hacker ethic” may have existed as a common practice in the 60s and 70s, but it was poorly understood and poorly defined by its practitioners. It was largely an unconscious norm.

The events which Kelty outlines in the book forced it to become a conscious norm, with the moral and legal justifications by which it is known today. The response to those events also turned the “hacker ethic” into something which was proselytized and purposely promoted to a wider audience, moving from an odd cultural practice in a few elite schools to a genuine movement.

As an ardent promoter of free software and its ideals, I had a visceral reaction to reading that free software isn’t “ethical,” since I understand the movement as being fundamentally an ethical stance first and foremost. Kelty’s characterization of free software as a “practical response” may be true in some cases, but it doesn’t fit what I see in the free software communities in which I have participated. I find it demeaning to call my life’s work just a “practical response” rather than an ethical decision. Oh well, it isn’t the first time that a “native” being studied has objected to the analytic terms employed by an anthropologist.

Kelty appears to have interviewed very few actual participants in the free software community and instead relies mostly on his analytical framework to draw his conclusions. This lack of engagement with the subjects he studies weakens his conclusions.
