Das theoretische Potential (OSP)


First Monday: Open Source — 3 October 2005

Rishab Aiyer Ghosh: Cooking Pot Markets
"Linus Torvalds did not release Linux source code free of charge to the world as a lark, or because he was naive, but because it was a "natural decision within the community that [he] felt [he] wanted to be a part of" [8] Any economic logic of this community - the Internet - has to be found somewhere in that "natural decision". It is found in whatever it was that motivated Torvalds, like so many others on the Net, to act as he did and produce without direct monetary payment.
Of course, it is the motivation behind people's patterns of consumption and, what is more relevant in the case of Linux, production that forms the marrow of economics. Such motivation is usually expressed in terms of curves of supply and demand, measured by costs and prices in dollars and cents. Figuring out what motivates, leave alone measuring it, is much tougher when price tags don't exist. It is simpler to just assume that motivations only exist when prices are attached, and not attempt to find economic reason in actions motivated by things other than money; simpler, therefore, just to assume as we often do that the Internet has no economic logic at all.
This is wrong. The best portions of our lives usually do come without price tags on them; that they're the best parts imply that they have value to us, even if they don't cost money. The pricelessness here doesn't matter much, not unless you're trying to build an economic model for love, friendship and fresh air. But you don't need to be an economist to know that all of these things do involve motives, and perhaps also the matching of (ordinal) demand and supply, even if demand curves are not easily measured without price tags. Economics may not often need to be used in an environment where valuables are free, but that doesn't necessarily mean it can't be so used. And any economic logic of the Internet has to have come to terms with the difficulty of measuring such value."

...

"Even those who have never studied economics have an idea of its basic principles: that prices rise with scarcity and fall in a glut, that they are settled when what consumers will pay matches what producers can charge. These principles obviously work, as can be seen in day-to-day life. But that's the "real world" of things you can drop on your toe. Will they work in a knowledge economy? After all, this is where you frequently don't really know what the "thing" is that you're buying or selling, or clearly when it is that you're doing it, or, as in the case of my column, even whether you're buying - or selling. Contrary to what many doomsayers and hype-mongers suggest, it always seemed to me that the basic principles of economics would work in an economy of knowledge, information and expertise. They are, after all, not only logical on the surface but also practically proven over centuries - a powerful combination. Even if the Internet appeared to behave strangely in how it handled value, there was no reason to believe that, if it had an economic model of its own, this would contradict the economic principles that have generally worked.
However, if Paul Samuelson's textbook definition of economics as the "study of how societies use scarce resources to produce valuable commodities and distribute them among different people" [12] remains as valid now as ever, almost all the terms in there need reexamination. This is because of the same peculiar economic behaviour of the Net that suggests it has developed its own model, the economic model of the information age."

...

"Unlike the markets of the "real world", where trade is denominated in some form of money, on the Net every trade of ideas and reputations is a direct, equal exchange, in forms derivative of barter. This means that not only are there two sides to every trade as far as the transaction of exchanging one thing for another goes - which also applies to trades involving money - there are also two points of view in any exchange, two conceptions of where the value lies. (In a monetary transaction, by definition, both parties see the value as fixed by the price.)
As the poster of notes on tomcats, the value of your posting something is in throwing your note into the cooking-pot of participatory discussion that is rec.pets.cats and seeing what comes out. As the author of a page on cats, what you value in exchange for your words and photographs is the visits and comments of others. On the other hand, as a participant on rec.pets.cats I value your post for its humour and what it tells me to expect when my kitten grows up; as a visitor to your Web page I learn about cats and enjoy pretty pictures.
When I buy your book about cats, it's clear that I am the consumer, you the producer. On the Net, this clear black-and-white distinction disappears; any exchange can be seen as two simultaneous transactions, with interchanging roles for producer and consumer. In one transaction, you are buying feedback to your ideas about cats; in the other, I am buying those ideas. In the "real world" this would happen in a very roundabout manner, through at least two exchanges: in one, I pay for your book in cash; in the next, you send me a cheque for my response. This does not happen very often! (The exception is in the academic world, where neither of us would get money from the Journal of Cat Studies for our contributions; instead our employers would pay us to think about cats.)
As soon as you see that every message posted and every Web site visited is an act of trade - as is the reading or publishing of a paper in an academic journal - any pretense at an inherent value of economic goods through a price-tag is lost.
In a barter exchange the value of nothing is absolute. Both parties to a barter have to provide something of value to the other; this something is not a universally or even widely accepted intermediary such as money. There can be no formal price-tags, as an evaluation must take place on the spot at the time of exchange. When you barter you are, in general, not likely to exchange your produce for another's in order to make a further exchange with that. Unlike the money you receive when you sell something - which you value only in its ability to be exchanged for yet another thing - in a barter transaction you normally yourself use, and obviously value, what you receive."
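
A minimal sketch, assuming the "two simultaneous transactions" picture quoted above can be reduced to recording one valuation per participant instead of a shared price. The class, the names and the numbers below are invented for illustration and are not part of Ghosh's text.

<pre>
# Toy model of the barter-like Net exchange described above: every trade carries
# two independent valuations, one per participant, and no common price tag.
# Purely illustrative; all figures are made up for this sketch.

from dataclasses import dataclass

@dataclass
class NetExchange:
    giver: str                 # who posts or publishes
    receiver: str              # who reads or responds
    value_to_giver: float      # e.g. feedback, reputation, visits
    value_to_receiver: float   # e.g. information, enjoyment

    def happens(self) -> bool:
        # The exchange takes place only if both sides see value in it;
        # nothing fixes a single value that holds for both parties.
        return self.value_to_giver > 0 and self.value_to_receiver > 0

if __name__ == "__main__":
    post = NetExchange("author of a cat page", "visitor",
                       value_to_giver=2.0, value_to_receiver=5.0)
    print(post.happens())                                  # True: both sides gain
    print(post.value_to_giver == post.value_to_receiver)   # False: no shared price
</pre>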

First Monday: Cyberinfrastructure, June 2007

First Monday: Public Knowledge Project, July 2007

Ajit Pyati: A Critical Theory of Open Access
"As we have seen from the previous section, a shift in the economic environment of scholarly publishing has forced academic libraries to take an advocacy stance and re–define the nature of scholarly publishing. This expansion of library roles in the publication process is made possible by the transformation of the technological environment and the advancement of the Internet. Without the Internet, electronic publishing would not exist, and consequently related developments such as the open access movement and institutional repositories could not be a reality. A process is underway in which libraries are taking advantage of Internet technologies to advance an agenda imbued with their professional value of access to information.
With libraries becoming important actors in technological innovation, it thus becomes important to interrogate libraries’ relationship to technology. As discussed earlier, a neo–liberal information society environment of increasing commodification has created daunting challenges for the library profession. In addressing issues of digital copyright and scholarly publishing, libraries are in for a long, uphill struggle, but gains have certainly been made. For instance, the development of SPARC and its advocacy has brought attention to the scholarly publication crisis, and certain major corporate publishers of academic journals have reacted to this kind of pressure by widening authors’ rights and allowing the publication of articles in e–print archives (Willinsky, 2006).
I have argued in this article for critical theory as a useful construct to view emerging forms of library advocacy and activism against the encroachment of techno–capitalist logics, with the open access movement as an example. Critical theory consciously links open access advocacy in libraries to other movements which challenge restrictions on access to information. Most importantly, critical theory opens up a discursive space for libraries in the democratization of technological discourses in society. Technology, rather than being part of a determinist discourse that will lead to the “demise” or “irrelevance” of libraries, in fact can be a realm for increased democratic participation of libraries. Critical theory creates a wider space for a progressive re–envisioning of the roles of libraries in promoting enhanced and more democratic forms of information access.
Thus, libraries can be envisioned as active shapers of technology for democratic and progressive ends. In examining the roles of academic libraries in mobilizing and building partnerships with scholars to challenge the traditional scholarly publication process, we are seeing librarians injecting their values into this debate. With libraries taking a more active role in the publication of materials, we are witnessing a shift in the realm of technological expertise into the arena of libraries."


Ökonux, Texts

Henrik Ingo: Open Life. The Philosophy of Open Source

Milton Mueller: Info-Communism?

"Here we are forced to acknowledge the appropriation of communist symbols, including symbolism drawn from Marxist–Leninist and even Maoist movements of the past, sometimes ironically and sometimes not, by certain elements of the information left. Why does this happen? Because communism affords them a readily available repertoire of symbols and historical connotations. The image is one of a mass movement challenging the powerful and wealthy and overturning the economic status quo. While recognizing that this appropriation of communist symbolism apparently is irresistible to some on the informational left (Hunter’s essay fell for it hook, line and sinker), we must also acknowledge that it is troublesome and actively contested by others. What the people who reject this framing realize, perhaps more clearly than the others, is that frames and labels can become self–fulfilling prophecies. Symbols can re–shape social movements in their own image. A movement that uses images of Che Guevara as a banner is going to attract different constituencies and follow a different path than one that uses other symbols."

...

"There is little doubt that the moral and political impetus that led Richard Stallman to create the Free Software Foundation was based on concepts very close to anarcho–communism. Based in a university research institute in the 1970s and early 1980s (MIT artificial intelligence labs) Stallman, like many other hackers, became acculturated to an ethic of total sharing of work product and almost complete freedom from organizational hierarchies. In the early 1980s, as the software developed in these research labs became valuable business assets, it began to be protected and enclosed in various ways; e.g., by withholding the source code from publication, binding programmers with non–disclosure agreements, and copyright protection. Stallman was deeply angered and felt excluded and “victimized” by his initial encounters with the propertization of software [6]. He also actively resisted the use of exclusive identities and passwords on computer systems. Significantly, he viewed the refusal to share code not in practical or policy terms but as a moral issue, a violation of the basic ethical command to “do unto others as you would have them do unto you.” Stallman’s rationale, insofar as it is rooted in a sharing ethic, is truly communalist.
But the “communist” label is belied by Stallman’s strategy of institutionalization. The free software movement pioneered a new economic institution, the software licensing concept embodied by the GNU General Public License (GPL). The GPL is based, ironically, on copyright law. It grants users the right to run, copy, redistribute, study, change, and improve the underlying source code of a program. The license is designed to prevent anyone from acquiring exclusive, proprietary rights to software developed by the F/OSS community; as Stallman puts it, “instead of a means of privatizing software, [the license] becomes a means of keeping software free.” [7] That does not, however, prevent developers from selling copies of the software for profit or from commercializing services associated with it. The economy around that software can presumably remain capitalist, though this is an ambiguity we will explore later in the paper. Also, open source software advocates would later self–consciously pioneer new methods of virtual organization and collaboration, dovetailing with anarcho–syndicalist concepts of a “gift economy” wherein the people who actually produce the product interact with each other directly avoiding managerial hierarchies (Raymond, 1999; Benkler 2006)."

...

"There is only one difference between Stallman’s definition of free software and the OSI’s definition of open source software. Free software requires reciprocity; that is, those who incorporate open source code into a derivative product must license the product as free software. Open source, on the other hand, does not require reciprocity; point 3 of the open source definition allows it but doesn’t require it. Thus open source licensed code can be incorporated into proprietary software. This seemingly small distinction has great political significance. Although both approaches are contractually based, the GPL is designed to be a one–way valve into the commons. Its intention is to cumulatively push all software into it through viral replication. Open source, on the other hand, lets users pick the license that suits them best in a more utilitarian calculation, and is agnostic about the overall economic direction of the software industry. In effect, it envisions a mixed economy, a co–existence of proprietary and open information."

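A hedged sketch of the licensing asymmetry described in the excerpt above, under the simplifying assumption that license interaction can be reduced to a single reciprocal-vs-permissive distinction. The license lists and the combination rule are illustrative only; they are not a statement of what the GPL or the Open Source Definition legally require.

<pre>
# Coarse illustration of the "one-way valve": a reciprocal (GPL-style) component
# pulls the whole derivative work into the commons, while purely permissive
# components leave the choice, including a proprietary one, to the integrator.
# Not legal advice and not a real license-compatibility checker.

RECIPROCAL = {"GPL-2.0", "GPL-3.0"}                  # copyleft: require reciprocity
PERMISSIVE = {"MIT", "BSD-3-Clause", "Apache-2.0"}   # allow proprietary reuse

def derivative_license_options(component_licenses):
    """Licenses a derivative work may carry under this toy rule."""
    if any(lic in RECIPROCAL for lic in component_licenses):
        # Once copyleft code is incorporated, the result must stay free.
        return set(RECIPROCAL)
    # Only permissive inputs: the derivative may stay open or go proprietary.
    return PERMISSIVE | RECIPROCAL | {"proprietary"}

if __name__ == "__main__":
    print(derivative_license_options({"MIT", "GPL-3.0"}))      # forced reciprocity
    print(derivative_license_options({"MIT", "Apache-2.0"}))   # mixed economy: free choice
</pre>
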
...

"Stallman refers repeatedly to the “the moral unacceptability of non–free software.” What is it that makes owned software morally unacceptable? The argument takes two distinct forms.
One is a simple appeal to the moral obligation to cooperate and share. Software ownership is wrong because we have a duty to let others use resources we have. “If your friend asks to make a copy [of software],” Stallman claims, “it would be wrong to refuse. Cooperation is more important than copyright.” This is a deontological claim; i.e., it holds that moral worth is an intrinsic feature of certain actions, and makes no reference to the practical consequences that the actions happen to have.
A second, clearly distinguishable aspect of the moral case for free software is that attempts to institutionalize proprietary information leads to unacceptable restrictions on the freedoms of end users. It is a “system of subjugation” and cannot be enforced without eliminating the transparency of source code and thereby impairing users’ ability to modify, copy and redistribute the program. It extends the owner’s control beyond the first sale into a set of ongoing restrictions on human action. This is a consequentialist ethical claim. It focuses more on the concrete effects of instituting proprietary software on end users and society.
Of these two prongs of thinking, I believe that the first is invalid and leads to the dead end of communism. The second is a far more important and substantive claim, but has not, I think, been consistently thought out. The clash between principled and pragmatist advocacy reflects this imperfection in the ideology. It reveals a widespread lack of clarity regarding which of these two claims is the basis for advocacy."

...

"The Internet seems to be based on an unusually successful combination of private market and commons. TCP/IP internetworking is based on global, open and non–proprietary standards. The networking protocols can be freely adopted by anyone. They are published openly and can be used by anyone without paying a fee. At the same time, the Internet is a decentralized network of networks, the constituent parts of which are privately owned and administered by autonomous organizations: the private networks of households, small businesses, large enterprises and non–profit organizations as well as the (usually privately owned) public data networks, both large and small, of Internet service providers and telecommunication companies. This aspect of the Internet leads to privatization and decentralization of network operations and policies. By facilitating interoperability, Internet leads to privatization and decentralization of software applications and information content as well. At the endpoints of the Internet, the free market and privatization rule; at the core standards level, a commons is in place. The end–to–end principle has in the past ensured that commons and market complement each other. The market in applications, content and networking requires neutral coordinating mechanisms that enable interoperation. With end–to–end, the sharing and coordinating mechanisms are deliberately minimized to provide maximum scope for private initiative and innovation. There is a clear separation between the parts of the system that are subject to private initiative and control, and the parts that are subject to global coordination and non–exclusive access. In short, it is the combination of the two, private and common, that works."

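A small, self-contained illustration of the point about open, non-proprietary protocol standards: a TCP endpoint can be implemented by anyone using only a language's standard library, with no fee or permission involved. The localhost address and the echo behaviour are arbitrary choices made for this sketch.

<pre>
# Minimal TCP echo over the loopback interface: the open TCP/IP standards are
# used here with nothing but the Python standard library, while the endpoint
# itself (address, port, behaviour) remains a private decision.

import socket
import threading

def echo_once(server_sock):
    conn, _ = server_sock.accept()
    with conn:
        conn.sendall(conn.recv(1024))    # echo back whatever arrived

if __name__ == "__main__":
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))        # any free port on this private endpoint
    server.listen(1)
    port = server.getsockname()[1]
    threading.Thread(target=echo_once, args=(server,), daemon=True).start()

    with socket.create_connection(("127.0.0.1", port)) as client:
        client.sendall(b"open standards at the core, private endpoints at the edges")
        print(client.recv(1024).decode())
    server.close()
</pre>
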
Michael Goldhaber: The Value of Openness in an Attention Economy

Johan Söderberg: Copyleft vs. Copyright

Alexander Knorr: Die Deutungsoffenheit der Quellen (in: Open Source Jahrbuch 2007)

"Knowledge and insight are not content to eke out an existence in the form of linguistically coded information; they take on manifest shape and become materialized culture (Spittler 1993, p. 180), become artefacts, for example machines or software. Artefact is a very neutral term and merely denotes a thing that was created artificially, is not given by nature, and was brought into the world by a thinking being capable of action. In common parlance, however, machines and software are rarely called artefacts; they are called products. Just like artefact, the term product refers to the fact of having been manufactured, but it carries further implications, namely the distinction and separation of two spheres: that of the producers and that of the customers. The latter are commonly referred to as (end) consumers, and in the context of computers and software often as users. These designations in turn imply the passive consumption or use of an artefact in the sense intended by its producers. Yet such attributions are far removed from empirical reality, for once production is complete, the history of an artificially made thing is by no means over (Kopytoff 1986).
The most striking, perhaps even the essential characteristic of human beings, the one that distinguishes them from the other animals, is their ability to bring innovations into the world in dizzyingly rapid succession. This creativity draws not only on raw materials, however, but also on artefacts. Raw materials are maximally malleable; essentially, only imagination and material properties set limits to this malleability. But artefacts, too, are surrounded by an interpretive flexibility: they are open to interpretation (Beck 2001, p. 67)."

...

"The spectrum of what is understood as appropriation in the more recent ethnological sense ranges from taking possession, through reinterpretation and repurposing, to reworking. These processes neither float in empty space nor are they simply carried by decisions that are rational, pragmatic or opportunistic by our standards; rather, they are closely tied to social circumstances and cultural conceptions. Cultural conceptions inform appropriation, social conditions mark out the scope of possibilities, and the process itself and the resulting artefacts in turn act back on the appropriating culture and society and change them. Sociocultural appropriation is thus a dynamic process that harbours nested feedback loops."

...

"Reinterpretation: Once an artefact has arrived in a milieu, it may well be used in accordance with the intentions of its producers. Very often, however, it is invested with new meanings and embedded in the cultural context. Against the background of the conditions of their habitat, it seems perfectly reasonable to us that the Kel Ewey frequently wear sunglasses. Yet they do not wear the glasses in the desert at all, but at festivities, especially after sunset. For the Kel Ewey, mirrored sunglasses complete the veiling of the men and thus perfect the realization of a cultural notion of dress (Spittler 2002, p. 18). I chose this example because a pair of sunglasses at first seems to have one unambiguous function firmly assigned to it, and yet it undergoes reinterpretations that one would hardly think of oneself. Something comparable happens with software that was written specifically for unambiguous purposes."

...

"Appropriation means not only symbolic reinterpretation but also culturally guided action (Spittler 1993), actively taking things in hand and actually modifying artefacts (Beck 2001, 2003, 2004). With respect to machines and technology, the degrees of freedom of appropriation are limited to a certain extent if one wants the machine to keep working after the appropriation (Beck 2003). This circumstance frequently leads to the fallacy that the meaning inscribed into a machine by its engineers is dominant and prevents any far-reaching appropriation beyond the superficially visual, the symbolic-aesthetic. By now we have become familiar with the images of the ornate buses and trucks, overloaded with decoration, from India and Pakistan, for example. The images of gods and saints piled onto them, the calligraphy and the exotic-looking patterns show us that these machines have been subjected to the aesthetics of a culture that is at first foreign to us. One marvels and smiles, much as one does about the long-since proverbial tuned Opel Manta. The dazzle of this bizarre beauty obscures the view of the far more extensive appropriation processes that matter for understanding the relations between technology and culture."

...

"Of course the university officials had definite ideas about how and for what the still very expensive machines were to be used. The PDP-1, for instance, which in 1961 still cost around 120,000 US dollars, was clearly intended as a tool for scientific purposes. Yet a group of MIT students reinterpreted it as a game machine and wrote Spacewar, one of the very first computer games (Graetz 1981). Objects and concepts also change their meaning with the use that different actors make of them (Lévy 2002, pp. 921-922). No one could have foreseen that this reinterpretation would give rise to a major industry which today constitutes the essential socio-economic driving force behind the further development of computer hardware (Montfort 2002). A necessary precondition for the invention of the computer game was the interpretive openness of the computer as a system, which was not constrained by social or technical restrictions. The universities granted free access to the hardware; software was free anyway at the time, since the computer market was concerned exclusively with hardware (Grassmuck 2000, p. 5). Only in the 1980s did the commodification of software set in: the source code of so-called proprietary software was kept inaccessible, and the latter lost a decisive amount of its interpretive openness. But the values of the hacker culture, which had by then existed for twenty years, broke new ground and manifested themselves in the countermovement GNU/Linux, out of which what we know today as the open source movement emerged."

...

"The analysis, likewise undertaken in the introduction, of the implications of terms such as product and consumer is not a purely semantic, belletristic interpretation; it points to attitudes existing in certain milieus which find expression in corresponding, empirically observable practices. The restrictive handling of copyright, the sealing of machine components, the imposition of special tools and the withholding of source code reduce the interpretive openness of artefacts and thereby, to a certain degree, choke off creativity and innovation. One cannot, however, simply pass the blame to one side here, for what confront each other are quite simply different cultural conceptions. Lawrence Lessig points to this as well when he writes that the industry is not evil, but that society as a whole bears the responsibility (Lessig 2004, p. 260). Yet this responsibility can only be exercised once the cultural aspects and backgrounds of the various groups involved are understood. Cultural and social sciences such as sociology, but also ethnology, possess the appropriate methods, concepts and models for this, as the theory of the gift economy, fruitfully applied to an understanding of the open source phenomenon, has already shown."

[http://e-conomy.berkeley.edu/publications/wp/wp140.pdf Steven Weber: The Political Economy of Open Source Software]

[[Andre Gorz: Wissen, Wert und Kapital]]



back to Open Source Philosophie (Vorlesung Hrachovec, Winter 2008)