Categories
publications

CommonSENSE at the 8th International Conference on Intelligent Environments

Our paper “CommonSENSE: A Participatory Design Toolkit for Shaping Physical Space through Real-Time Data”, written in collaboration with our great colleagues Elena Antonopoulou, Eirini Vouliouri and Christos Chondros, was presented at the 8th International Conference on Intelligent Environments.

Categories
blogging

Open [Source (Code)] Architecture: Codes and Coding

The last few posts are parts of a thought experiment which aims to problematize “Open Architecture” by looking at the different faces of the phrase itself. One of my central hypotheses in this blog has been that “Open Source” operates as a “boundary space”. This idea borrows from Star and Griesemer’s discussion of “boundary objects” as objects which have “different meanings in different social worlds but their structure is common enough to more than one world to make them recognizable, a means of translation” [1]. Along these lines, “Open Source” is analyzed as a space which various disciplines use to rethink their models of production, distribution and use of knowledge and artifacts. This process draws simultaneously from their disciplinary concerns and history, as well as from metaphors and free associations with the worlds of Software and Hardware, where the ideas of Open Source have been predominantly developed.
In order to understand how the various discussions on Open Architecture(s) use metaphors from “Open Source” I propose a strategy which begins by isolating different word combinations contained in the phrase “Open [Source (Code)] Architecture” and then seeks to expose potential questions that they raise and tensions that they contain. These complementary framings aim to problematize the Open Source metaphor in architecture from as many perspectives as possible.

The word-systems that I have so far analyzed are [Open, Open Source] and [Open, Source, Architecture]. I first examined the history of Open Source so as to expose the ideological tensions between “Open” and “Free”. This aimed to challenge the assumption that “Open” has an inherently democratizing intention, although its results may provide this potential. I then discussed how the idea of access to the “Source”, which is central to the OSS definition, translates into Architecture, where the ambiguity between information and end result is vast.
This post continues this word game by proposing the system [Open, Code, Architecture]. The question of what “Code” is in Architecture, and therefore what “Open Code” would be, is far beyond the scope and length of this post. My purpose is to propose a series of essentially elliptical framings of architectural Code, which can be useful in informing the discussion on Open Architecture.
A fascinating aspect of this discussion is that Code, as a set of instructions which lead to an end product, can be viewed from a double perspective in architecture: building code (building regulations) and computational code (building representation and generative tools). Each one of these areas contains, in turn, multiple meanings and interpretations. Starting from the idea of building code, as a regulatory system developed around the processes and objects of building, it is interesting to trace these different meanings.

On Code
A commonly shared belief is that the first building code was the Babylonian Code of Hammurabi (ca. 1780 BC). This code regulates the social relations around the act of building and assigns responsibilities in case of accidents or failures. Apart from the surprising severity of the punishments it prescribes, what is interesting to observe is that it is very much structured around the idea of building safety. However, instead of codifying the design requirements to achieve it, it provides some strong “incentives” to ensure it. A telling example is: “If a builder builds a house for someone, and does not construct it properly, and the house which he built falls in and kills its owner, then that builder shall be put to death”.
Current building codes have, of course, little to do with that. Carrying the same principle of safety and extending it into a discourse on public health and welfare, they mandate a set of rules and requirements with which buildings have to comply in order to be built. As far as the legislative part is concerned, codes are either laws of the state, proposed by committees of experts and tailored to local requirements through governmental decrees, or they can be specified by local authorities based on “model codes”. Indicative examples are the Eurocode, which applies to all the countries of the EU, replacing their former national codes, and the International Building Code, which has been adopted by US states such as New York.

All professional architects are required to build “to code”.
In fact, this is the most common source of disillusionment for beginning professionals who, nurtured in the freedom of academia, find themselves operating within a highly structured and rigid framework. Code creates an entire ecosystem of code-knowers, code-interpreters and code-policers, which to a great extent defines the landscape of architectural practice.

When it comes to the actual content of the code, the discussion again has multiple dimensions. A first set of rules is related to the building itself and its resilience to accidents and natural disasters such as earthquakes, fires, floods, etc. In many cases, the history of code revisions narrates a parallel history of natural catastrophes. An indicative example is the seismic code in Greece, oscillating between the politics of profit of the building industry (inexpensive concrete, fast construction) and the frequent seismic activity, which has resulted in multi-victim failures of large parts of the built fabric. The history of the form of the Greek polykatoikia (popular multi-storey building) can be seen, to a certain extent, as a history of corrective moves following large disasters.
Apart from ensuring the safety of the residents, building codes are also meant to ensure their “wellbeing”. One aspect of this is expressed through the “medicalization” of building and the emergence of requirements evolving around the idea of “health” (from the size and number of window openings to forbidden hazardous materials), parallel to the modernization of cities. Another interesting aspect that can be discussed by looking at building codes is the standardization of human measurements. How high can the rise of a stair be? How wide should its steps be? What is the minimum height of a space? How wide does a corridor have to be? John Harwood’s “The Interface: Ergonomics and the Aesthetics of Survival” [2] offers very interesting observations on the technologies of wellbeing and their relation to the concepts of the “normal”, the “machine” and the “environment”.

However, code requirements do not just stem from the building itself; they articulate a relationship between the building and the larger ensembles in which it participates (the neighborhood, the city). Within this context, code defines the relationship between the local and the global. The idea of safety now becomes public safety, the idea of health public health, and so on. The first building codes, which coincided with the forces of modernization in Europe (17th-18th century), were prophylactic measures taken after, again, catastrophic events (plagues, fires, etc.). The distances between buildings, the materials of their facades, or their geometric characteristics, so as to allow for proper light and air in the public space, were the types of requirements which progressively became codified. Code, as a new form of top-down “policing” defining the way bodies are arranged in space as well as the characteristics of that space, cannot be viewed cut off from its political connotations.
However, apart from stemming from, and exercising, biopolitical power, building codes are also often deployed to preserve a specific “character of place”, which is judged as important for the preservation of community cohesion or national identity. Most practicing architects have stories to share about their experiences of building within “traditional” communities, where code requirements extend beyond the general geometric characteristics of buildings and mandate formal elements of the design (size and shape of windows, geometry of roofs, colors, materials). These formal characteristics become the trademark of the community. Very telling examples of this are gated communities and artificial cities around the world, which come with a very specific set of building regulations to construct a very specific “spirit of place”. The regulations of Disney’s Celebration are a fascinating example in this direction.
Last but not least, codes specify land use. These specifications are based on prospective models of urban activity and have been an issue of vast controversy throughout the 20th century. From Modernist zoning, exemplified by the Athens Charter (1943), to the postmodern apotheosis of urban complexity, the functional mapping of cities has been a central debate in architectural and urban discourse.

A summary of these diverse observations is that building code is a top-down framework of instructions which gets locally interpreted so as to produce buildings and building complexes with specific qualitative characteristics (“safety”, “well-being”, “beauty”, “character”).
The question is then, what are the implications of the Code being Open?

Opening the Code
I propose that this idea of openness can be interpreted through three different prisms.
The first is a more decentralized, participatory mode of building regulation, which allows inhabitants and communities to have a say in what the underlying principles of their environment are. An example of this is the DPZ SmartCode, a “template intended for local calibration to your town or neighborhood”.
The second prism through which the idea of “Open Code” can be viewed is a set of rules which, instead of subordinating the local to the global, assert the whole as emerging from local rules and relations. This bottom-up model has long captured the architectural imaginary. Drawing its references from vernacular architecture, where the only restriction in building is the approval of one’s neighbors, this model was framed by many architects as the pathway to better, more sustainable and more “democratic” designs. Of particular interest are the cases in which these negotiations have been formalized into written rules. Throughout my studies in Athens, I was fascinated by my professor Dimitris Papalexopoulos’s descriptions of the vernacular law of Syros, which explicitly described the local, neighboring rules of urban growth.

Yona Friedman's pictograms from Negroponte's "Computer Aided Participatory Design" in "Soft Architecture Machines"

A third attitude towards “Open Code” is the interplay of local conditions with global constraints. This has been the conceptual basis of the early architectural techno-utopias from which I often draw my references. Friedman’s FLATWRITER, for example, proposed a “softer” kind of code (iso-effort lines) and a resilient infrastructure within which every local desire would be negotiated and accommodated.
This opens the discussion to another kind of Code, computational this time, which was initially vested with the vision of coupling “the very large and the very small”. In his 1970 book “The Architecture Machine” [3], Nicholas Negroponte framed this vision as a new Humanism enabled by machines. According to his analysis, the main problem with architects was that they were accustomed to the middle scale of buildings and therefore proved incompetent to handle the complexities of the general (the urban) or the specificities of the very small, perpetuating a gap between the scale of the mass and the scale of the individual. In the new machine “humanism” that he envisions, intelligent machines combine the adaptability of humans with computational specificity in order to recognize general shifts in context as well as particular changes in need and desire. Negroponte envisions an architect-machine partnership, where the machine exhibits alternatives, suggestions and incompatibilities, and oversees the urban rights of individuals. In his later book “Soft Architecture Machines” [4] (1975), Negroponte develops this vision into an anti-architect discourse. As Alexander would later do in “A Pattern Language: Towns, Buildings, Construction” [5], Negroponte characterized the existence of local conditions within unifying global forces as the alphabet of the language of the vernacular. Following this concept of a global “objective” system that allows for local intuitive solutions, he proposed a framework of a resilient building and information technology and introduced a new type of personalized architecture machine, a “design Amplifier”, which constitutes the interface between the infrastructure and the user’s ever-changing needs.

On (another kind of) Code
In the last part of my post, I will further examine the relationship between computational code and building code through current examples. I will draw my examples from an edition of one of the most widely circulated architectural periodicals today, the Architectural Design (AD) magazine. The 2009 volume, entitled “Digital Cities”, addressed precisely this question of the interplay of computational code with the production or simulation of complex building assemblages: cities. The different tendencies towards code which emerge in this volume are very useful in problematizing the role of computational code in the democratization of the building code.
The first article that I will comment on is Patrik Schumacher’s controversial “Parametricism: A New Global Style for Architecture and Urban Design”. Patrik Schumacher is a powerful actor in the current architectural scene, with a central role in the Architectural Association in London and as a close collaborator of the renowned “digital” architect Zaha Hadid. Taking the concept of parametricism as a common axis, Schumacher sketches a vision for urban design “to integrate the building morphology – all the way to the detailed tectonic articulation and the interior organisation”. He calls this idea “deep relationality”, denoting the integration of all urban subsystems in a unified global, designed system. This system may allow for infinite differentiation, but it always relates the local to the global, and vice versa, under the vision of a rationalized complexity. The top-down control of the urban environment through the computational description of a global system does not need the straight line to be characterized as deeply Modernist. The hegemony of the computational code brings Schumacher’s visions into very close convergence with the modernist ideals of rationality and order, beneath the mere appearance of visual complexity and differentiation offered by digital tools.

Zaha Hadid Architects - Kartal Pendik in Istanbul

On the opposite side, Steven Johnson, quoted by Neil Leach in his article “Swarm Urbanism”[6] describes the city as an emergent motif in time. According to Johnson, the city has all the characteristics of a dynamic, adaptive system which evolves based on neighboring relations, feedbacks and indirect control. For him the city should demonstrate a bottom up collective intelligence, not unlike a population of cooperating monads (swarm).
For Neil Leach, the concept of emergence, which is becoming increasingly popular in architectural discourses, can be traced both as a characteristic of urban systems and as a computational model. Based on this observation, Leach raises the question of to what extent the latter (computational tools) can be deployed to model and design the former (urban form). Leach refers to the architectural group Kokkugia, which does not attempt to simulate the movement of agents in the city so as to produce an optimal solution, but aims to develop an adaptive and flexible system based on a collective and self-organizing intelligence. This transition from the master-plan to the master-algorithm views urban planning as a set of micro- or local decisions which create a complex urban system.

Neil Leach - "Swarm Urbanism"

The computational metaphor is very powerful here. The central assumption is that computational systems with bottom-up structures (Cellular Automata, L-Systems) can lead to bottom-up, emergent urban complexes, which are not designed per se, but result from the interactions of the material mass of the city and the ever-changing needs of its inhabitants. An example of this approach is the Kokkugia project entitled “Behavioral Urbanism”, which uses cellular automata to envision a growing and shrinking city, not very different from the 1960s megastructural visions discussed in the “Tracing lines of thought” section of the blog. The code here refers only to local rules, offering agency to the users, whose decisions and actions control their activation. The lack of centralized, global control may “Open” the system to local perturbations, but it does not grant the user direct access to the “code” which guides them.
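To make this bottom-up logic concrete, here is a toy cellular automaton in the spirit described above. It is a minimal illustration of the general technique only, not Kokkugia’s actual code: the grid size, the build/demolish thresholds and the seeding density are all arbitrary assumptions of mine.

```python
import random

SIZE, STEPS = 20, 30  # arbitrary grid size and number of generations

def neighbors(grid, x, y):
    """Count built-up cells in the 8-cell neighborhood (toroidal wrap)."""
    return sum(
        grid[(x + dx) % SIZE][(y + dy) % SIZE]
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )

def step(grid):
    """Apply a purely local rule everywhere: a cell 'builds' when exactly
    3 neighbors are built, and 'demolishes' when crowded (>5) or isolated (<2).
    No global plan exists; the city-like pattern, if any, emerges."""
    new = [[0] * SIZE for _ in range(SIZE)]
    for x in range(SIZE):
        for y in range(SIZE):
            n = neighbors(grid, x, y)
            if grid[x][y]:
                new[x][y] = 1 if 2 <= n <= 5 else 0
            else:
                new[x][y] = 1 if n == 3 else 0
    return new

random.seed(0)  # seed the "city" with a random 30% built-up fabric
grid = [[random.random() < 0.3 for _ in range(SIZE)] for _ in range(SIZE)]
for _ in range(STEPS):
    grid = step(grid)
```

The point of the sketch is exactly the tension noted above: the user of such a system can perturb the seed or the cells, but the rule table inside `step` is the real “code”, and opening the system means opening that.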

The project which, in my opinion, offers interesting insight into what would constitute a truly “Open” urban and architectural system is the science-fictional fantasy of the provocative French group R&Sie(n).
Their “I’ve heard about…” project is presented as a vision for an unpredictable organic urbanism, which resists the conventional desire to control urban systems. The regulations for this urban system are not a building code, but a series of “neighborhood protocols”. These protocols do make explicit stylistic references to traditional building codes: short phrases, numbering, axioms, paragraphs, articles and chapters build up the “I’ve heard about…” constitution.

R&Sie(n) - I've Heard About...

In R&Sie(n)’s project, the city is an inhabitable organism, a biostructure in constant evolution, materialized in real time by a fleet of robots controlled by open source algorithms and interpreting an amalgam of “internal” and “external” data. Human desires are communicated either verbally, through an electronic participation system, or through the chemical excretions of the body, traced through inhaled 24-hour micro-transmitters. This constantly destabilizes the system, leading to unpredictable results. The structure and the code of the system are themselves in constant negotiation and reprogramming. The only thing that remains stable is a set of general rules, axioms which, oxymoronically, ensure that the system remains unstable and open to change.
In R&Sie(n)’s project, the “I’ve heard about…” bio-citizens participate in collectively shaping the space that they inhabit. Not unlike Nicholas Negroponte’s interconnected Design Amplifiers, the fleet of building robots becomes the space where the desires of individuals and a set of “objective” building constraints (an idiosyncratic computationally-coded building code) come together. What is very different in R&Sie(n)’s discussion is that, unlike the tradition of the early computational theories, which accommodated the unpredictable by allowing for local action within a resilient infrastructure (computational or physical), the Protocol, or the Code, of the “I’ve heard about…” project is not a naturalized black box determined by the “designer”, but a modifiable framework, accessed and controlled by the collective.

How one passes from this science-fictional exploration to its actualization in physical space remains a very hard, almost unanswerable question. However, R&Sie(n)’s fantasy seems to propose a mental model which learns from the criticisms of the techno-utopias of the 1960s and 1970s and builds on the cyber-cultural potential for real-time communication and collaboration. Using these as raw materials, it envisions a truly Open Source space: indeterminate, unpredictable, participatory and co-designed. Has that not always been the role of utopia: to create experimental laboratories in which alternate, not-so-unreal worlds can emerge?

_._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._
Notes:
[1] Star, S., Griesemer, J. Institutional Ecology, ‘Translations’ and Boundary Objects: Amateurs and Professionals in Berkeley’s Museum of Vertebrate Zoology, 1907-39. Social Studies of Science 19 (3), 1989
[2] Harwood J., The Interface: Ergonomics and the Aesthetics of Survival. in Aggregate, Governing by Design: Architecture, Economy, and Politics in the Twentieth Century (Pittsburgh, PA: University of Pittsburgh Press, forthcoming)
[3] Negroponte, N. The Architecture Machine. Cambridge, MIT Press, 1970
[4] Negroponte, N. Soft Architecture Machines. Cambridge, MIT Press, 1975
[5] Alexander, Ch., Ishikawa, S., Silverstein, M., with Jacobson, M., Fiksdahl-King, I., Angel, S. A Pattern Language: Towns, Buildings, Construction. New York: Oxford University Press, 1977
[6] Leach N., ‘Swarm Urbanism’, in Neil Leach (ed.), Digital Cities, London: Wiley, 2009, pp. 56-63

Categories
blogging

The “Source” in Open Source

The growing use of the term “Open Source” has attributed to it an almost protean meaning, tailored according to the goals and preoccupations of its user(s). The common denominator in all these cases is an interesting tension between literal and metaphoric uses of the term. On the one hand, Open Source denotes a series of principles and practices which were conceived within the context of software production and distribution, and which can only be seen accurately in relation to the history and particularities of the field. On the other hand, the same term can be (and is) used to express a broader attitude towards the design, distribution and use of immaterial and material products.
The spread of Open Source culture beyond the realm of software gives rise to an ever-growing number of translations and appropriations of the term in other fields of human activity, which utilize it as a conceptual tool to rethink their own practices.

In my previous post entitled “Free and open: on, in and in-between” I argued for the importance of developing critical frameworks addressing this migration of the term to a multitude of other fields. Through this argument I do not intend to reject the creative potential of this translational looseness. I believe, however, that this process can only benefit from being conscious of the pitfalls of the trans-disciplinary translation of the term “Open Source”.
When it comes to identifying the dangers of the translation and appropriation of the term, one can first refer to the abstract use of “Open Source” as denoting an intention of more “democratic design”, with little reference to the practices which make it possible. This approach renders the history, internal tensions and controversies of Open Source invisible, and often leads to the reductionist assumption that “democratizing” is synonymous with “open sourcing”. A second danger is the direct translation of the term to other fields without first taking the time to reflect on its translatability. Defining Open Source in other areas, especially when one leaves the realm of the immaterial to talk about the material world, becomes a complex task and requires careful consideration of the particularities of this transfer.

In one of my previous posts I commented on the ambiguity of the closest one has today to an Open Source Architecture definition. Carlo Ratti’s comment that “Open source architecture draws from references as diverse as open-source culture, avant-garde architectural theory, science fiction, language theory, and others” is indicative on the one hand of the need for a disambiguation of the term so that it goes beyond being just a discursive medium. On the other hand, this observation is very telling of the signification of the term “Open Source” within the architectural imaginary.
When it comes to Architecture, user control of design tools and decisions challenges fundamental assumptions about the structures of the discipline. The assertion of the design of space as an open, collaborative project, along with the vision of the unmediated expression of the individual’s needs and desires, resonates with the much broader discussion on space and power, from Foucauldian accounts asserting architecture as the ordering of bodies in space, to phenomenological discussions on space, perception and self. The precedent of the architectural visions of the 1960s and 1970s, rich with visions of self-planning and self-design, loads “Open Source Architecture” with a series of unrealized utopias.

In my attempt to provide potential schemes for the disambiguation of this term within architectural discourse, I previously discussed the terms “open”, “open source” and “free” within the context in which they emerged. I attempted to expose their ideological and practical disparities in order to better map the space in which other fields seek their references. In this post I will focus on the “Source (code)” part of the term.
According to the Linux Information Project,
Source code (also referred to as source or code) is the version of software as it is originally written (i.e., typed into a computer) by a human in plain text (i.e., human readable alphanumeric characters).
This definition can be broadened to include non textual representations: “‘Source code’ is taken to mean any fully executable description of a software system. It is therefore so construed as to include machine code, very high level languages and executable graphical representations of systems” [1]

The Open Source definition starts with the declaration that, in order for a program to be characterized as “Open Source”, its source code must not only be accessible but must also comply with certain criteria. When it comes to “Source Code”, the second item in the list of these criteria, the following requirements must be met:
The program must include source code, and must allow distribution in source code as well as compiled form […] The source code must be the preferred form in which a programmer would modify the program. Deliberately obfuscated source code is not allowed. Intermediate forms such as the output of a preprocessor or translator are not allowed.
The rationale behind these requirements is that the evolution of a program requires its modification, which in turn is contingent upon easily modifiable source code: “Since our purpose is to make evolution easy, we require that modification be made easy.”

The idea of un-obfuscated code is here discussed in terms of efficiency and productivity, which brings to mind Richard Stallman’s criticism that “open source misses the point of free software”, which is about offering users control over the technologies that they are using, through the freedom to modify them according to their needs and desires. Although, as I have previously argued, these differences of principle should be taken into account when discussing Open Source, I would like to leave them aside for a moment and dwell more on the implicit assumptions of the “Source Code” section of the Open Source Software definition.

First, an assumption contained in the term “Source Code” itself is a direct link between the source code and the outcome of its execution. In other words, the source code contains all the information necessary to produce identical copies of a product, in an unambiguous manner. Within this context, access to the source code offers full mastery and control of the product (software) itself.
Second, the writer of the code should ensure that this direct link from code to product is easily legible, and that the only requirement for appropriating and re-authoring it is for the new author to “speak” the language in which the program was written. This creates an entire ethic around authorship and the distribution of knowledge. Extensive commenting and documentation are common practices which ensure the transparency of the source code and anticipate its future appropriations and modifications by other users.
This leads to the third and perhaps most important observation, which puts the weight on the user of the code, who is also expected to modify it. If one were to introduce Richard Stallman’s vision of “freedom” into the equation, then a necessary requirement would be not just to make the code as clear as possible, but also to use a language which makes it accessible to as large an audience as possible. There is a growing number of community-developed projects, such as Processing, Scratch, etc., which undertake the challenge of producing low-floor, powerful development environments, functioning both as entry points to programming and as spaces for the creation, sharing and distribution of products.
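The first of these assumptions, that the source determines the product unambiguously, can be caricatured in a few lines of code. This is a toy sketch of mine, not a claim about any real build system; the `area` function and its source string are invented for illustration.

```python
import hashlib

# Hypothetical "source code" of a tiny product, as plain human-readable text.
source = "def area(width, depth):\n    return width * depth\n"

# "Build" the product twice from the same source: the results behave identically.
ns1, ns2 = {}, {}
exec(source, ns1)
exec(source, ns2)
assert ns1["area"](3, 4) == ns2["area"](3, 4)

# Because the source is unambiguous text, a hash of it pins down
# the product exactly: whoever holds the source holds the product.
fingerprint = hashlib.sha256(source.encode()).hexdigest()
```

It is precisely this determinism, trivial for software, that dissolves when the “source” becomes a drawing and the “compiler” a building site, as discussed below.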
The post “Design for Empowerment for Design: environments, subjects and toolkits” discussed the historical perspective of strategies and programs allowing non-experts to become their own architects: from Yona Friedman’s charmingly naive pictograms describing design and construction processes, to Frazer’s design toolkits, to Nicholas Negroponte’s Design Amplifiers, operating as self-reflexive learning machines offering users a trip to “Designland”.
A common characteristic of all these proposals was the shared assumption that a more intuitive interface, allowing users to visualize their desires, was a necessary but not sufficient condition for true unmediated design participation, which seems to be the vision of current conceptions of Open Architecture. All of these projects incorporated the idea that, through this modeling, users would be educated in the non-trivial task of expressing their desires through spatial decisions. Given the growing accessibility of design software, accompanied by a growing software literacy which allows users to experiment with 3D visualization software (e.g. SketchUp), the revisiting of these considerations becomes a very productive field of inquiry.

Going back to the main discussion, the mapping of these fundamental assumptions, contained in the original definition of “Open Source”, raises the question of if, and under which terms, they can be translated into architecture.
Defining what would be the “Source Code” itself when it comes to Architecture is a hard problem in its own right. Within the context of this post I will refer to the scale of the building, adopting the perhaps simplistic but rather natural hypothesis that the source code of the building is its representation, its models and drawings. This hypothesis is shared amongst architectural practices, such as the Open Architecture Network, where the free distribution and modification of computer drawings and models is a fundamental principle of operation. In my next post I will discuss the idea of Open Source Code in the scale of the city.
As I previously mentioned, when it comes to software there is a linear procession from the source code (information), to the compiler (mediator), to the final outcome (the software product). Within this general context of free analogizing with architecture, this scheme would translate as a procession from some sort of encoding of building information (drawings or models), to the mediator (the contractor and the builder), to the final outcome (the building). This analogy reveals an inherent tension between its parts.
In the case of software, access to the source code guarantees access to the final product. Nothing unpredictable is expected to happen during the interpretation or compiling process. On the contrary, when it comes to the production of buildings, every step of this procedure is vested with ambiguity.

In his essay “Mapping the unmappable” [2] Stan Allen discusses the notational nature of drawings, characterizing them as “abstract machines” operating by means of transposition rather than translation. Not unlike a musical composition, the score (drawing) offers instructions on how the piece will be performed but is unable to determine the outcome, as this is always dependent on the players themselves.
The counterargument to the abstract, notational nature that traditionally characterized architectural drawing is that the growing digitization of architecture, both in the way it is designed and the way it is fabricated, takes away a large part of this ambiguity.

Building Information Modeling (BIM), currently gaining ground in architectural practice, makes it possible to concentrate and manage building data in one parametric, hierarchical model. This model simultaneously contains information about the spatial and geometric attributes of the building and specifications about its components, including cost analysis, parts ordering and so on. At the same time, it collapses all the different systems of the building (electromechanical, structural) into the same representation. This abundance of information often invites the assumption that having the BIM model is like having the building itself. This transition from notational, reductive architectural drawings to a virtual representation of the building, offering everything from assembly instructions to lifecycle management data, seemingly removes a large part of the ambiguity and makes the vision of shareable building information, and of a streamlined path from design to construction, appear more realizable.
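To make the “one parametric, hierarchical model” idea concrete, here is a minimal sketch of my own invention (not any actual BIM schema such as IFC): a tree of components in which discipline tags and costs coexist in a single queryable structure.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    """One node of a hypothetical BIM-like model: the discipline
    ('structural', 'electromechanical', ...), cost and sub-components
    all live in a single hierarchical representation."""
    name: str
    system: str
    unit_cost: float = 0.0
    children: list["Component"] = field(default_factory=list)

    def total_cost(self) -> float:
        # Cost analysis falls out of the same model that holds the geometry.
        return self.unit_cost + sum(c.total_cost() for c in self.children)

    def by_system(self, system: str) -> list[str]:
        # Every building system is queryable from the one representation.
        found = [self.name] if self.system == system else []
        for c in self.children:
            found += c.by_system(system)
        return found

building = Component("building", "spatial", children=[
    Component("slab", "structural", unit_cost=12000.0),
    Component("hvac", "electromechanical", unit_cost=8000.0,
              children=[Component("duct run", "electromechanical", 1500.0)]),
])
assert building.total_cost() == 21500.0
assert building.by_system("electromechanical") == ["hvac", "duct run"]
```

What the sketch cannot hold, of course, is exactly what the rest of this post is about: the interpretive work that turns this data into a building.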

However, the assumption that more information increases the constructibility of a design has not gone unchallenged. These critiques focus on the builders and contractors, arguing that any human mediation in the process of materializing building information is in essence an act of interpretation and reconstruction.
An example of such discourses is Joshua Lobel’s thesis “Building Information: Means and methods of communication in design and construction” [3], at the MIT Department of Architecture. Through a series of field studies in the professional world and analysis of informational models, Lobel argues that the demand for effective communication between architects and contractors, which is crucial for the constructability of a design, requires a different mental model than the standards-based approaches adopted in the development and use of current computer-aided design tools.
He demonstrates that the perceived complexity of a design is a measure of the difficulty of translating design information into construction information, strongly related not to the quantity of this information but to its interpretation. The current standardized approaches to design communication, which share the intention of disambiguating information through a fixed data model, can result in wasteful repetition in design, in the loss of non-standardizable expert knowledge, and in the rigidification and denaturation of meaningful acts of communication incorporated in the design process.
Unless one imagines the deployment of full-scale 3D printers, reproducing the three-dimensional specification of the building in the physical world, the process of going from building information to building cannot follow the linear fashion in which a-contextual and a-metaphoric software algebras are interpreted and compiled.
The crucial question raised here is how one can encode and share what comes after the BIM model: the builder’s solutions invented on the fly, the local conditions and building habits, the meta-design of the building by its users. When it comes to Open Source Architecture, the “sharing” and “distribution” of building information is in essence always elliptical and resorts to the level of a design solution, a notation again, no matter how elaborate, interpreted according to the particularities of its locus of implementation.

This is not to claim that “Open Source Architecture” is a futile goal, but to point out the necessity of a different mental model when it comes to specifying how information is distributed and accessed. Having excluded the possibility of creating a reproducible form of the artifact itself, the question comes down to what the essence, the “source”, of a building is. Stan Allen, using Nelson Goodman’s distinction between autographic and allographic arts, offers a suggestive view in this area. Using again the analogy between architecture and music, he claims that “The guarantee of authenticity is not the contact with the original author but the internal structure of the work as set down in the score”.
This suggests an approach in which the essence of an architectural solution is abstracted from the information model or the outcome and is traced in those elements which allow for its re-authoring and “performance” under different conditions. This idea, which was the basic operational mode of vernacular architecture (recipes rather than specifications for buildings), also brings to mind Christopher Alexander’s visions, which find their current implementation in the practices and goals of Peer 2 Peer Urbanism.
In his seminal 1977 book “A Pattern Language: Towns, Buildings, Construction” [4], Alexander created an architectural language of 253 patterns which correlate problems and solutions. His objective is summarized in the sentence “at the core… is the idea that people should design for themselves their own houses, streets and communities. This idea… comes simply from the observation that most of the wonderful places of the world were not made by architects but by the people”, which has interesting conceptual affinities to Richard Stallman’s discourse.
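One way to see how close a pattern language comes to an architectural “source code” is to encode a couple of patterns as data. The sketch below is my own illustration; the pattern names and numbers are from the book, but the problem and solution texts are paraphrased rather than quoted.

```python
from dataclasses import dataclass, field

@dataclass
class Pattern:
    """A hypothetical encoding of one of Alexander's patterns: a named
    problem/solution pair, linked to the larger-scale patterns it helps
    complete."""
    number: int
    name: str
    problem: str
    solution: str
    completes: list[int] = field(default_factory=list)

# Two (paraphrased) entries from "A Pattern Language".
language = {
    159: Pattern(159, "Light on Two Sides of Every Room",
                 "Rooms lit from one side only feel gloomy.",
                 "Give each room daylight from at least two directions.",
                 completes=[107]),
    107: Pattern(107, "Wings of Light",
                 "Deep buildings cut people off from daylight.",
                 "Shape buildings as thin wings so every room reaches light."),
}

def trace(n: int) -> list[str]:
    """Follow a pattern upward through the patterns it completes."""
    chain = []
    while n in language:
        chain.append(language[n].name)
        nxt = language[n].completes
        n = nxt[0] if nxt else -1
    return chain

assert trace(159) == ["Light on Two Sides of Every Room", "Wings of Light"]
```

Each pattern is a recipe, not a specification: the “code” names a recurring problem and the structure of its solution, while every performance of it remains local.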
Christopher Alexander’s pattern language, coupled with my recent interview of Nikos Salingaros, founder of P2P Urbanism, calls for extensive analysis and requires, at a minimum, a separate post. Within the context of this discussion, what is perhaps the most salient idea is the principle of a structural rather than ontological analysis of an architectural solution, as a reinterpretation of what a useful architectural “source code” could be. In my next post, I will take a step back and examine the notion of code at the scale of the city, where the idea of Open Source acquires much broader social and political connotations.

_._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._.._._._._._._._._.

Notes:
[1] Hartman, M. 2010. “Why Source Analysis and Manipulation Will Always Be Important” in Source Code Analysis and Manipulation (SCAM), 2010 10th IEEE Working Conference.
[2] Allen, S. 2000. “Mapping the Unmappable: On Notation” in Stan Allen and Diana Agrest, Practice: Architecture, Technique and Representation. London: Routledge.
[3] Lobel, Joshua M. 2008. “Building Information: Means and Methods of Communication in Design and Construction”. SMArchS Thesis, MIT Department of Architecture.
[4] Alexander, Ch. 1977. “A Pattern Language: Towns, Buildings, Construction” / Christopher Alexander, Sara Ishikawa, Murray Silverstein, with Max Jacobson, Ingrid Fiksdahl-King, Shlomo Angel. New York: Oxford University Press.

Categories
blogging

Free +[?] Open: P2P Urbanism

Peer-to-peer Urbanism follows the tendency, also discussed in “Free and Open: On, In and In-Between”, of reloading latent architectural visions in the light of the new cultural constructs of the networked paradigm.

The “Brief History of P2P-Urbanism”, written by Nikos A. Salingaros and Federico Mena-Quintero, is an essay/manifesto explaining the origins of the P2P Urbanism concept, the ideas it fuses and the strategies it deploys in order to “define space for people’s use” through “creative and cooperative practices”. The text was published in October 2010 and is marked as version 4.0 (a reference to the versioning conventions of the definitions and op-ed articles of affine communities?).

This text succeeds the P2P Urbanism Definition, which was published a month earlier and which I plan to discuss in detail in an upcoming post, along with similar definitions moving along the same lines. Apart from offering the opportunity to look at the P2P Urbanism Movement as a “flavor” of “Open” architecture, this text is also interesting for the methods it uses to orchestrate this intersection of past architectural discourse with the culture of open source, in order to produce a new diagram of space and city building.

The first chapter, titled “Recent History of Urbanism”, starts with a polemic against top-down, large-scale city planning combined with the pseudo-novel “starchitect” buildings, with “notorious visual characteristics”, which marked (notice the past tense) the landscape in the 20th and the beginning of the 21st century. These architectures are criticized for sacrificing centuries of collective building wisdom for the sake of innovation.
US New Urbanism (1993), or what in Europe was simply referred to as Traditional Urbanism, is situated as a critique of post-WWII international modernism and its discontents: the city-as-machine, high-rise spatial economics meeting cookie-cutter housing, car-centric development. Oriented towards a more human environment, it places the needs of the users at the center of its discourse and reclaims the city for its inhabitants through the production of human-scale spaces (“well-proportioned” buildings are explicitly mentioned) for everyday life and social interaction.
This approach is accompanied by a “willingness” of the planners to engage the community in their decisions, contrary to what is characterized as a “hit-and-run” model of development, where spaces are designed in absentia of their true stakeholders – the inhabitants. However, New/Traditional Urbanists still tend to resort to top-down, central models, primed by the predominant financial models favoring large-scale development.

The fundamental goal of this text is to frame P2P Urbanism as an alternative to the paternalist practices of Modernist thought, which are perpetuated in thinking about space and the city through construction-industry inertias, financial models, architectural education, the monumentalization of the modernist past, etc. The desire to transcend the boundaries of “top down and energy-wasteful modernism” is discussed as a point of intersection between different lines of thought around the world, ranging from political movements actively participating in urban renewal to a growing mass of urbanists and architects.

The vision of P2P Urbanism is explicit: “We wish to give everyone the tools to design and even construct their own physical space.”

This is where open-source software and p2p concepts plug into the discussion. The reference to open source does not escape some of the usual assumptions that Richard Stallman points out in his article “Why Open Source Misses the Point of Free Software”. Open source is discussed by Salingaros and Mena-Quintero as the new name for free software (quotes like “Nowadays this is commonly called open-source software” and “Free or open-source” are indicative). The imprecision of this account, however, should not come as a surprise. Its purpose is to set the ground for a broad metaphor, based on the ideas of free access, modification according to the user’s needs and redistribution of information, using the affordances of blogs, wikis, mailing lists and shared documents for communication and collaboration. The formation of online communities of diverse individuals, carrying different skills and interests which they share for a common cause, is discussed as the basis of the concept of P2P, which originated in the economy and in the development and distribution of technology (material and immaterial).
The advocates of P2P Urbanism, a concept still in the making, are portrayed as a heterogeneous group only just realizing their common intentions: from “followers of Christopher Alexander” (!) and “urban activists” to potential candidates such as permaculturists, “advocates of vernacular and low-energy construction, and various independent or resilient communities that wish to sustain themselves ‘from the ground up’”.

What is fascinating in this text is the explicitness with which it orchestrates parallels between unanswered demands stemming from a disciplinary history of architecture and conceptually affine paradigms emerging in other areas, which is one of my central research hypotheses in this blog.

“P2P-Urbanism is all about letting people design and build their own environments, using information and techniques that are shared freely.”

The first point of focus for the realization of this declaration is an Open Source city code, receptive to local conditions and the needs of individuals while ensuring the sustainability of the whole (for example DPZ’s “Smart Code”). This rings many bells when one thinks of Yona Friedman’s or Nicholas Negroponte’s proposals, which drew their inspiration from vernacular planning codes, where the urban fabric is produced bottom-up through a set of neighboring rules. The main conceptual displacement here is that the code itself is actually accessible to the community and can be modified and distributed. In other words, the paradigm of a platform which allows for the expression of local needs and desires is replaced by a new scheme where the platform itself is modifiable.
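A toy sketch of what such a modifiable, bottom-up code could look like (the rule below is entirely my own invention, not drawn from any real Smart Code): the fabric grows plot by plot under a local neighboring rule, and because the rule is itself just shared text, a community can fork it.

```python
def default_rule(heights, i):
    """A plot may rise at most one storey above its tallest neighbor."""
    neighbors = heights[max(0, i - 1):i + 2]
    return max(neighbors) + 1

def grow(n_plots, steps, rule):
    """Grow the fabric bottom-up: each plot, visited in turn, adds at
    most one storey, capped by whatever the shared rule permits."""
    heights = [0] * n_plots
    for step in range(steps):
        i = step % n_plots
        heights[i] = min(rule(heights, i), heights[i] + 1)
    return heights

fabric = grow(5, 10, default_rule)

# Forking the code: a flatter variant of the same neighboring rule.
def flat_rule(heights, i):
    return max(heights[max(0, i - 1):i + 2])  # never exceed the neighborhood

assert max(grow(5, 10, flat_rule)) <= max(fabric)
```

The displacement the text describes is precisely that `default_rule` is not locked inside the planner's office: it is the distributable part of the platform.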
However, what P2P Urbanism acknowledges as its conceptual predecessors are J. F. C. Turner’s work on self-built housing in South America and Christopher Alexander’s “A Pattern Language” and “The Nature of Order”. I was surprised by the lack of reference to the participatory megastructural visions of the 1960s and 1970s, mainly because of the multiplicity of parallels in the way they articulate their goals (the priming of user needs, the rejection of modernism). One can only speculate about the reasons for this absence, ranging from the authors’ personal histories and influences to an implicit rejection of technological fantasies and an orientation towards more pragmatic and realizable proposals.

A very suggestive but not sufficiently developed part of this account is the authors’ hint towards the common through the work of Agatino Rizzo in Italy. Rizzo’s idea of “Cityleft” or “Open Source Urbanism”, as well as the possibility of the emergence of the common as a sphere between the private and the public, as Negri and Hardt discuss it, is a fascinating field of inquiry which calls for its own space in this blog (coming soon!).

What is of particular interest in this text is P2P Urbanism’s action plan, explained through an example incorporating an architect (carrying assumptions about space), a builder (carrying accumulated knowledge) and a user (carrying needs and desires). The model which is proposed can be condensed to the following:
“P2P-Urbanism is like an informally scientific way of building: take someone’s published knowledge, improve it, and publish it again so that other people can do the same. Evidence-based design relies upon a growing stock of scientific experiments that document and interpret the positive or negative effects the built environment has on human psychology and wellbeing.”
The practice of the “charrette”, as a systematic request for user input prior to the realization of the process, is discussed as a means for the negotiation of conflicts and interests and for the maturing of the acceptance of a design amongst the circles of a community.

P2P Urbanism is not a humanitarian act; it is the persistent architectural vision of the expression of spontaneous local desires within sustainable wholes, the interplay of local and global rules: “A P2P process will have to somehow channel and amalgamate pure individualist, spontaneous preferences and cravings within a practical common goal”.
The target areas of this new paradigm for Urbanism are both large-scale cities (“Western”, I assume), in the tradition of most of the past proposals I have discussed so far in this blog, and small-scale, self-built settlements in the developing world, allowing people to “take care of their own problems”.

With architectural corporatism, the deceptions of the top-down models of landscape urbanism, the knowledge priesthoods, geometrical fundamentalisms and spectacular architectures as its main detractors, P2P Urbanism proposes a free, community-based educational and informational process, inaugurating a participatory and collaborative production of space so that it reflects its true essence: the deep socio-cultural processes which fuel it.

Categories
blogging

Nicholas Negroponte: an interview

Last Thursday I had the exciting opportunity to meet with Professor Nicholas Negroponte and talk with him about the Architecture Machine Group.

Nicholas Negroponte’s early work has long been central in my research. His 1975 book, Soft Architecture Machines, was the first explicit reference to the idea, means and methods of computer aided participatory design. I have always been fascinated by the way this text fused the technological imaginaries of the time (Artificial Intelligence), with radical programmatic architectural diagrams (Yona Friedman), principles from vernacular architecture and ongoing research at MIT (conversation theory, technologies for learning, sketch recognition and representation) to problematize the role of computation in design and to construct a new coherent diagram of the design process, actively destabilizing the roles of its actors.

URBAN5’s overlay and the IBM 2250 Model 1 cathode-ray tube used for URBAN5

Taking indigenous architecture as a point of departure, Negroponte identifies the existence of local conditions within unifying global forces as the alphabet of the vernacular language. Following this concept of a global “objective” system that allows for local intuitive solutions, Negroponte proposes the framework of a resilient building and information technology and introduces a new type of personalized architecture machine, a “design amplifier”, that constitutes the interface between the infrastructure and the user’s ever-changing needs.
The visionary character of his proposal brings to mind Marvin Minsky’s comment: “and Nicholas was incredibly productive at… at the beginning of computer science, of… thinking of things that no one else imagined computers doing.”
I interviewed Professor Negroponte about the importance of participatory design in his research, his relation with Yona Friedman, his thoughts on prospective thinking and architecture and his attitude about the revisiting of these ideas under the light of current technological advances and the spread of open source culture.

Nicholas Negroponte’s first reaction to my question on his early studies in computer-aided participatory design was that these ideas never really went anywhere and were soon abandoned. Interestingly, Professor Negroponte noted that the closest the concepts discussed in Soft Architecture Machines came to realization was the SEEK project, shown at the Jewish Museum in New York City in 1970. The SEEK project, also known as Blockworld, was a provocative part of the exhibition “SOFTWARE. Information technology: its new meaning for art”, curated by Jack Burnham.
The exhibit consisted of a small group of Mongolian desert gerbils, which were placed in an environment of plexiglass blocks constantly rearranged by a robotic arm. The basic concept was that the mechanism would observe the interaction of the gerbils with their habitat (the blocks), and would gradually “learn” their “living preferences” by observing their behavior. This machine was conceived as both a “cybernetic world model” and a “behaviourist laboratory for observation and experimentation”.


Nicholas Negroponte's SEEK project

I found the placement of SEEK and Soft Architecture Machines in such close proximity very intriguing. At first it is rather counterintuitive to link the unmediated expression of user desires via a soft, “informed” machine with the derivation of behavior rules for rodents rearranging their environment. However, a closer reading of this comment leads to the realization that these two projects intersect when it comes to the role of the machine: the fascination with the prospect of implementing a system which can observe, learn and anticipate behavior, acting as the individual’s surrogate, their problem-worrying partner, able to predict their behavior and reconcile the local with the global.
What is unique in the case of Soft Architecture Machines is that they are the result of a cross-fertilization of the visions of the golden age of Artificial Intelligence at MIT and the vibrant ideas of Cybernetics with European influences, namely the work of the “eccentric” Yona Friedman and of Gordon Pask.

Professor Negroponte spoke about his close acquaintance with Yona Friedman and his wife, whom he met during one of their stays in the United States. At that time Yona Friedman had already written “Toward a Scientific Architecture” and had set out the idea of the FLATWRITER, a self-planning tool which combined user participation with behavioral self-reflection. To use Negroponte’s words, the diagrams that Yona Friedman was trying at the time “were sufficiently computational” as to provide a basis for implementation. It seems that at the time Friedman’s systematic thinking and his discussion of the democratization of architecture through technology met one of the fundamental principles which have consistently characterized Nicholas Negroponte’s work from its early days to the current era.

It was refreshing to hear Professor Negroponte’s enduring technological optimism. Although his later work at the Media Lab and his devotion to the One Laptop Per Child campaign made the Architecture Machine seem a fascinating episode of the cybernetic era, clearly demarcated in the past, Nicholas Negroponte confessed that it was at that time that he wrote the best line he had ever written. This line was the subtitle of the 1970 Architecture Machine book, a vision and a manifesto at the same time: “The Architecture Machine: Toward a more Human Environment”.
Negroponte’s technological humanism, his deep belief that machines make us more human and his lifelong devotion to the development of visual and technological literacy made him very receptive to ideas stemming from political and social movements around Europe, which advocated for the collective ownership of information and information processing and for the participation of users in decision-making processes.

Nicholas Negroponte’s early work, as a visionary fusion of affine elements drawn from different areas and disciplines under the overarching principle of technological optimism, produced a very powerful construct: a new diagram of the role of the user, the architect and the technological platform in handling the tension between the individual and the collective, the local and the global, and the way in which design is conceived and realized. In that sense, although the idea of computer-aided participatory design was left on the shelf, it produced multiple offspring, from intelligent environments to contemporary paradigms of user-driven design and current conceptions of technology-mediated participatory design. Establishing these links and tracing affinities and disparities in their fundamental principles, their assumptions and the tools that they employ is a very suggestive uncharted territory.

Categories
blogging

Free and Open: on, in and in-between

Leaving aside the controversy around its originality, the alleged Linus Torvalds quote “The future is open source everything” captures a tendency which is gradually gaining ground in the collective imaginary. The constantly growing wave of translations and interpretations of the tools, practices and concepts of open source in almost all domains of human activity makes this quote shift from a provocative speculation to a plausible observation – not to say a realizable “project”.

In most of these cases, ranging from hardware and robotics to governance and religion (!), “Open” drifts away from its initial meaning and becomes a space for rethinking the fundamental assumptions of the way knowledge, space and artifacts (material and immaterial) are produced and used.
I particularly enjoy the understanding of words as combinators, spaces where ideas are brought together – often through misuse – and allowed to mix concepts which would not be directly comparable if one were to be strict about them. In that sense the term “Open Source” is a laboratory for the production of new diagrams and for re-conceiving how we create and use things within the context of the digital paradigm. It is characteristic that, in many cases, practices based on the free sharing and modification of the information leading to an “end product” existed long before they were named “open”: from cooking to popular medicine and vernacular architecture.

However, this situation of semantic flux poses the danger of labeling as “open” practices which differ widely in their philosophy and operational modes. This can result either in a misleading sanctioning of “open source” practices as inherently “free” or “democratic” or, conversely, in the reduction of practices which intend to be “free” and “democratic” to merely being “open”.
It is therefore important to disambiguate the term by placing it in its historical perspective, so as to identify its inherent tensions, questions and challenges, which can serve as essential frameworks of critique in the various acts of the term’s adoption and translation beyond its initial use in software.

An indicative example of the loose, merely discursive use of “Open Source” is the recent op-ed article on Open Source Architecture, initiated by Carlo Ratti et al. Quoting from the current Wikipedia version: “Open Source Architecture (OSArc) is an emerging paradigm describing new procedures for the design, construction and operation of buildings, infrastructure and spaces. Drawing from references as diverse as open-source culture, avant-garde architectural theory, science fiction, language theory, and others, it describes an inclusive approach to spatial design, a collaborative use of design software and the transparent operation throughout the course of a building and city’s life cycle”.
What is interesting about this quote, which is then supported by a series of more specific observations ranging from the roles of the non-expert and the professional to funding models, is that it shows precisely the intersection of a preexisting architectural imaginary, which I discussed in my previous posts, with the growing open source culture, producing a new ambiguous construct which conveys the general atmosphere of a thing more than the goals and practices of the thing itself.
This, of course, cannot be seen as different from a long tradition in architectural thought, which borrowed from new technological paradigms (in this case the network paradigm and the open sharing of information) to produce the spatial imaginaries of its time.

However, if we are to truly rethink participatory practices in architecture in the light of the digital paradigm and open source culture, and not revert to the repetition of the same schemes with a false sense of innovation, it becomes important to move both vertically and horizontally in time. To investigate, in other words, both the relations of current conceptions of “Open Architecture” with the precedents of technology-mediated participatory design, and its relations with “Open Source”. The purpose of this post is to do the latter, by looking at the term “open source” in the context of software and exposing its internal conflicts and potential discontents.

The Open Source Initiative is the prodigal child of the Free Software Movement. It was founded as a California public benefit corporation in 1998 by some members of the FSM who, to use Richard Stallman’s words, “splintered off and began campaigning in the name of ‘open source’” [1]. A commonly shared explanation for this is that, contrary to “Open Source”, the term “Free” was menacing for corporations and investors, who found the practice too precarious or politically loaded for their taste. Given this sequence of events, and the fact that “Open Source” was initially defined in response to another term condensing a set of concepts and practices, it is impossible to grasp the essence of “Open Source” without first discussing “Free”.

the GNU project icon

The Free Software Movement was initiated in 1985 by Richard Stallman, who was at the time a hacker and programmer at the MIT Artificial Intelligence Laboratory. In 1984 Stallman launched GNU, a free operating system, as a response to the growing proprietarization of software and its bundling with specific hardware. This, along with the establishment of software copyright laws in 1980, subverted a common practice of the first years of software use and development, which was based on the sharing and modification of software according to the personal needs of its users.

The Free Software Movement advocated for “free software as a matter of freedom”. Quoting from the GNU project “philosophy” webpage: “people should be free to use software in all the ways that are socially useful. Software differs from material objects—such as chairs, sandwiches, and gasoline—in that it can be copied and changed much more easily. These possibilities make software as useful as it is; we believe software users should be able to make use of them” [2].
The reference to immaterial production as a par excellence field where “free” can function, not having to deal with the complexities of the material world, is a statement which set the boundaries of “free” practices until recently, when groups and communities undertook the challenge of “open sourcing” the physical world. The “Open Source Hardware Definition”, “open design” and, at a next level, the nascent idea of “Open Architecture” are now moving along these lines, developing layered definitions to address the material and immaterial aspects of the issue.
The most powerful implication, however, of Stallman’s statement is that “socially useful” applications can only emerge if users are in control of the technologies that they are using. Stallman advocates for the unmediated expression of the needs and desires of the users, by allowing them to be simultaneously producers and users of the technologies that they employ. The famous motto “Free as in Speech, not as in Beer” exemplifies the fundamental principle of the Free Software Movement, which revolves around four essential freedoms, teasingly numbered from 0 to 3.
■ The freedom to run the program, for any purpose (freedom 0).
■ The freedom to study how the program works, and change it so it does your computing as you wish (freedom 1). Access to the source code is a precondition for this.
■ The freedom to redistribute copies so you can help your neighbor (freedom 2).
■ The freedom to distribute copies of your modified versions to others (freedom 3). By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this. [3]

If one were to take a step back and abstract these principles into a more generic design process of a given immaterial “thing”, then what Stallman seems to be suggesting is freedom of use, freedom of modification/redesign, sharing and common-ing. The objective of a technology for the users by the users, as a way to true freedom and social change (sharing and collaboration), along with the rupture of the FSM principles with traditional centralized economic models structured around intellectual property, expert priesthoods and proprietarization, made the idea of “free” spread through the realms of immaterial production.
Free cultural works, whose open editing phase was initiated on May 1, 2006 by Erik Möller, with the support of Richard Stallman, Lawrence Lessig, Angela Beesley, and Benjamin Mako Hill, exemplify such an act of creative translation in the world of cultural production, under the vision of culture as a participatory, collectively driven process, constantly enriched by the contributions of its receptors.

Along the same lines of the abstraction of the principles of the Free Software Movement into any design process, the idea of user control and the priming of the collective and the social, both as the “subject” who produces and uses the “goods” in question, has strong conceptual affinities with Yona Friedmanʼs reference to an architecture which moves beyond the architectʼs (authorʼs) hypotheses, allowing the users to create their own hypotheses. This brings back to the surface, from a different area this time, the discourse on the roles of the expert and the non-expert in the process of decision making. In a world that becomes increasingly digitized, Richard Stallman sees software freedom as a precondition of social freedom and a ground for solidarity, collaboration and sharing. He states: “In this freedom, it is the user’s purpose that matters, not the developer’s purpose; you as a user are free to run the program for your purposes, and if you distribute it to someone else, she is then free to run it for her purposes, but you are not entitled to impose your purposes on her.” [4] The principle of transferring control to the user, however, does not rely only on the act of giving them access to the source code, but implies that the code itself is actually accessible (i.e. legible and not obfuscated). The issue of accessibility as a property which can be designed into the actual “free” product, rather than being taken for granted, opens the door to a two-way discussion, referring both to the design decisions which ensure this accessibility of a “product” by others, and to the “literacy” of the users. Within that context, education and technologies for learning could very well fit into this discussion, opening tangents such as the work of Seymour Papert at MIT and the architectural language studies (symbols, sketch recognition etc.) which had been conducted both by the Architecture Machine Group and in various places in Europe (Friedman, Habraken).
When it comes to the case of architecture, or more generally to art, which is an inherently ambiguous area, the definition of what a non-obfuscated code would be is a hard intellectual exercise, as one has to balance between overly legible -to the extent of being populist- conceptions of art and architecture and their eclectic counterpart, which privileges certain groups of literati through “difficult”, unfamiliar languages. That said, it seems that knowledge and education are central issues in the discussion of (software) freedom. Although the different licenses and distribution models, beyond the core concept of copylefting, are essential to the Free Software Definition, their thorough description would be beyond the scope and objective of my post. What is interesting, however, is that this is the space where one can locate subtle distinctions between “Free” and “Open”.

Open Source started as a means to transcend the alleged confusion around what the word “free” denotes. Although it is difficult to identify blatant differences between the two definitions, and the Open Source advocates seem to consider themselves intellectual offspring of the Free Software movement, Richard Stallman argues that the Open Source movement has very little to do with the philosophy and objectives of “Free”.

His article titled “Why Open Source misses the point of Free Software” can serve as a very suggestive frame of critique, exposing a series of questions around the idea of “Open”. In the following paragraphs I will attempt to isolate the main points of his discussion which go beyond software per se and relate to the ideological and conceptual underpinnings of the term “Open Source”.
The first point that Stallman makes is the priming of practicality over freedom. In their attempt to appeal to the business world, the Open Source supporters adopted a vocabulary of efficiency and convenience related to having fast, cheap and reliable software. In that sense, the “Open Source” movement adopted a vocabulary which made it a convincing business model, stripping it of its political and social objectives and connotations. “Open source is a development methodology; free software is a social movement.” [5] This distinction is very valuable within the broader discussion on technology democratization, as it places technological efficiency and user freedom in opposing camps, if not in terms of practice, then definitely in terms of value systems, the one leading to capitalist competition and “better” products and the other to true social change through processes of sharing, collaboration and communication. As Stallman characteristically notes: “A pure open source enthusiast, one that is not at all influenced by the ideals of free software, will say, “I am surprised you were able to make the program work so well without using our development model, but you did. How can I get a copy?” This attitude will reward schemes that take away our freedom, leading to its loss.” [6]

If linguistic ambiguity is framed as a danger when it comes to “free”, then the single, overly literal meaning of “open source” is even more problematic, because it cultivates the mistaken common assumption that access to the source code suffices for a project to be characterized as open. This point brings us back to where we started, to the need for a disambiguation of the term so that it does not act as a trojan horse for practices which not only are not open, but conflict with the fundamental principles of freedom and openness. Stallman rejects the opportunistic use of the word and its translation into other disciplines and areas of human practice, and claims that the principles of free software are specific to software.
“The term “open source” has been further stretched by its application to other activities, such as government, education, and science, where there is no such thing as source code, and where criteria for software licensing are simply not pertinent. The only thing these activities have in common is that they somehow invite people to participate. They stretch the term so far that it only means “participatory”.” [7]

Notwithstanding Stallmanʼs call for caution, the imaginary of “Open Source” is forming a new culture of production and use, with constantly growing echoes in spatial theory and practice. Instead of disrupting this imaginary, we should perhaps embrace it while exposing the history and internal tensions of the words themselves. Along these lines, mapping flavors of openness in the spectrum from free to open source, from principle to practicality and from a vehicle for social change to a design methodology, can be an invaluable tool for reframing past practices and seeking new diagrams and interpretations.

_._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._.__._._._._._._._._._._._._._._._._._._._._
Notes
[1] http://www.gnu.org/philosophy/open-source-misses-the-point.html
[2], [3], [4] http://www.gnu.org/philosophy/free-sw.html
[5], [6], [7] op. cit. [1]


Actualities [one]: on an “Open” architecture and a merger

A few days ago I came across the news of a merger of two TED-winning organizations. Architecture for Humanity acquired Worldchanging, an American non-profit online magazine and blog about sustainability and social innovation, and set off to develop a “robust center for applied innovation”.
I am copying below part of the press report in ArchDaily. You can find the entire report here.
“In November of 2010, Worldchanging announced it was taking steps to close its doors and dissolve as a 501(c)3 nonprofit organization. […] The Board (of Directors) committed to finding a new home for all the valuable essays, stories, and learning accumulated through the seven years of Worldchanging’s efforts so they could continue to be a catalyst for future discussions and information sharing on how we can build a bright green future. The Worldchanging Board of Directors embarked on a formal Request for Proposal (RFP) selection process and found Architecture for Humanity to be the ideal match in terms of mission, vision, impact, and importantly, the organizational and technological capacity to run a program such as Worldchanging.”
Apart from the financial background of the process, which I am in little position to analyze, I found this news quite intriguing. Companies and firms, of course, are started, dissolved or merged all the time. However, I think this event is of particular interest for the new type of coalition it produces: a large and growing architectural firm finds its press alter ego through which to develop its humanitarian polemic. Worldchanging, renowned for its “solutions-based journalism”, adds to the communicative aspects of Architecture for Humanity through its prospectiveness and environmental conscience, while at the same time it serves as a solutions archive, a source of concentrated knowledge on all sorts of small and large sustainability-oriented innovations around the globe. Are we witnessing the rise of a humanitarian colossus? And what is its role in the discussion of the democratization of design?

My first close encounter with Architecture for Humanity was this past June in Athens. The organization’s CEO, Cameron Sinclair, gave a forty-minute talk at the Against All Odds (AAO) Project: Ethics/Aesthetics Conference, explaining the goals and modes of operation of his firm through a show-and-tell of around-the-globe projects.
Prior to this conference, the AAO project had organized a similar event at the Athens School of Fine Arts along with a series of workshops and interactive works around Athens. The main premise of the project was to “explore the moral values related to territorial practices and to communicate the conclusions to the public”. The democratization of decision making processes and the active inclusion of social groups in the design of the spaces they inhabit was an axis that traversed the project, reopening the discussion on the processes and frames of critique of participatory spatial practices. The key thematics designated were “design as action of relief”, “design as building up equality” and “design as inspiration”.
The financial situation in Greece, suffering from deep recession and measures of increasing austerity, as well as the role of urban space in the manifestation of protest and political participation, had brought the politics of space back to the surface, making the AAO project temporally and locally relevant. The AAO project was well funded and well attended and Cameron Sinclair was the person to open it.


Architecture for Humanity is a non-profit design services firm, founded in 1999. It functions as a global network of design, development and construction professionals brought together by the common cause of providing design and construction advice to those in need. The firm currently has 73 chapters in 25 countries with more than 4,650 volunteer design professionals, and it is affiliated with more than 500 design professionals worldwide.
After being awarded the 2006 TED prize, Sinclair used the money to initiate the Open Architecture Network, an online open source design sharing and project management community. Apart from TED, the OAN is sponsored by large corporations such as Autodesk, Sun Microsystems, AMD, Blue Gecko, Hot Studio and Creative Commons, which provides a “some rights reserved” license. Recently, the OAN launched an iPad app.

Sinclair’s lecture was almost a manifesto, very much like his website. The firm motto “Design like you give a damn”, proclamations like “Le Corbusier had it wrong” and “let the revolution begin”, feverish accounts of design successes in communities in crisis and videos of children in underdeveloped areas, gratefully smiling towards the camera, were the repertoire of his presentation. I was particularly struck by one of his opening comments, about not being able to keep track of all the crises and disasters happening worldwide, for which he gets real-time notifications through his iPhone. The analogies of this description with superhero story conventions which excite the popular imagination, like the Batman sign or the Captain Planet flashing ring notifying them when there is someone in need, are quite amusing. One could indeed claim that Architecture for Humanity’s corporate identity is to a great extent defined by a narrative of philanthropy, and can be critiqued as carefully concealed post-colonialism using the notion of participation as a discursive medium to account for its practices.

However, maybe I am being overly critical. The practices of Architecture for Humanity bring back to the discussion the social role of the designer and the need for user empowerment through education and participation in decision making processes. After more than two decades where the academic and professional scene of the discipline was dominated by stArchitects and stArchitectures, one sees the reemergence of a discourse on the democratization of design and the self definition of the architect as a social actor.
Perhaps thinking along the lines of this paradigm, one can start developing a multi-layered critical discourse about its goals, means and methods by asking once more the fundamental question “Empowering Whom?”. In this way one can perhaps surpass the dilemma of accepting or rejecting such practices altogether and start to recognize their internal intricacies and their essentially conflictual nature.


Design for empowerment for Design: environments, partners and toolkits

In my last post I examined a shift in architectural discourse which emerged from the cycles of International Modernism and subverted its most fundamental assumptions from the inside. This discourse, initiated at CIAM 10 in Dubrovnik, questioned the practices of the architect-expert as paternalistic and by definition reductionist, unable to account for the complexities of modern life, and advocated for systems which allow for the expression of the “relational needs of man in society” [1]. The vision was to orchestrate architectures which transcend the “pure and inhuman technique of modernist functionalism”[2] and are receptive to unpredictable, ever changing personal needs; or to use Yona Friedman’s words, “personal hypotheses”. Within this context the megastructural topologies of spatial urbanism are born as the “return to the science fictional attributes of the Modern Movement, ideal and magical, detached from the real where they think they adhere”[3].
Of course, one cannot dissociate this climate of architectural prospectiveness from the broader spatial culture in France. In his text “The Urban Utopia in France, 1960-1970”[4] Larry Busbea argues that the unbuilt projects of spatial urbanism were in fact depictions, representations of this spatial culture. He proposes to read them as a mixture of the flux space of the postwar “trente glorieuses” in France, the thriving of structuralist thought and the emergence of a discourse amongst intellectuals allowing for the conception of a technological humanism (Van Lier, Simondon, Moles).
The situation of spatial urbanism in its historical context, or in other words its reading as a cultural product made at a specific place and time is inarguably valuable and perhaps more valid from a methodological standpoint. However, what I am interested in doing in this post is to look at spatial urbanism as a diagram of technology mediated design participation; to detach it from its context and examine its structures and operational modes.
I have already framed the hypothesis that there are interesting diagrammatic affinities between the way the roles of the architect, the user and the technological platform are conceptualized in spatial urbanism and in early computer aided participatory design (1970-1975). These affinities are often overshadowed by a clear difference in the general climate in which all these proposals were formed. On the one hand, the engagement of the users in the design process through the help of computers is seen as an episode of American technocratic pragmatism; on the other, the spatial culture in France stems from a network of philosophical and political referents. Busbea’s observation that “The French engagement with these networks and systems was fundamentally different than that of the Americans, whose energies were clearly focused on economic superiority and political supremacy”[5] is indicative of such differences which exclude the possibility of parallel readings of the French and the Anglo-Saxon scene.

In this post I will talk about Yona Friedman’s Spatial City [6] along with the space allocation and design program that accompanied it (the Flatwriter), Nicholas Negroponte’s Soft Architecture Machines [6] and John Frazer’s Evolutionary Architecture [7], to propose three intersecting diagrams of technology mediated participatory design, with emphasis on the way the technological tool is figured / conceptualized in the process.
I claim that these proposals can be read through the prism of three main schemes: technology-as-environment, technology-as-subject and technology-as-toolkit, respectively. This schematization is a first attempt towards a taxonomy which can be used as a conceptual frame to discuss current practices of technologies for the democratization of design, or -as the title denotes- design for empowerment for design.

1. Urban climates: designing the new Umwelt
The megastructure is initiated as a spatial urbanism of the unpredictable fostering the relational needs (besoins relationnels) of its inhabitants. It is detached from the ground and extends in three-dimensional space offering a new technological substrate, which rejects the stable and the permanent and creates the ground for a multiplicity of personal hypotheses. The megastructure is a global man-made environment, a new artificial nature where even climatic conditions can be adjusted (what Dominique Rouillard refers to as urban climatization). This mega-scale climate conditioning puts an end to the human fight against a hostile nature, which was considered one of the fundamental raisons dʼêtre of architecture in the paradigm of modernist functionalism.

Yona Friedman's Collage of the Spatial City extending over the Place de la Concorde in Paris - Source: http://www.megastructure-reloaded.org/yona-friedman/

Larry Busbea interestingly points out that the spatial urbanism stems from a psychological drive to design and therefore control the networked, ultra-technological, undecipherable universe of flows which constituted the new landscape of human interactions in the postwar metropolis; Paris more specifically. If technology had become an Umwelt to use Van Lier’s words, then this Umwelt can be the new object of design, can be architecturalized and inhabited.
In the heavy, systematized structures of spatial urbanism rises the vision of a continuous immaterial world of flows. A world beyond spatial segmentations; a garden of Eden reclaimed through technology. As one can clearly see from the drawings, the ultimate dream of the megastructure’s creators (Friedman, Otto, Fielitz, Zenetos) is for it to disappear, to become an invisible environment through its absolute ubiquity. This all-pervasive technology contains all the constraints that ensure the viability of the system. What is designed here is a framework which allows for a multiplicity of subjectivities and gives space to free play without ever allowing the system to fail.
In the context of spatial urbanism, architecture is liberated from the constant intervention of the experts by having all the fundamental constraints that ensure its viability detached from it and transposed to an outer layer, which conversely is an object condensing high technical knowledge and innovative expertise. Within it, architecture as we know it, at the scale of the building, is left to the taste and preferences of the individual. Within the spatial city, anything goes.

2. Architecture without architects: the FLATWRITER
However, after coming into contact with the proto-computational experiments, Yona Friedman starts envisioning a system for design empowerment: a tool which allows for self-planning within his all-containing infrastructure while empowering individuals to create their own designs, without the mediation of the architect.
In his book Towards a Scientific Architecture, Friedman proposed a model where the desires of the users are expressed within a “repertoire” of computer generated possibilities situated within a containing infrastructure that carries all the necessary utilities. I discuss the details of this program in my paper Architecture-by-yourself: Early Studies in Computer Aided Participatory Design. Friedman claims that the use of the FLATWRITER establishes a new informational process reconciling the future user and the object that they use and allowing for a proclaimed limitless individual choice combined with the immediate chance to correct errors without the oversight of a professional paternalism.
In the second loop of the FLATWRITER, Friedman sets out a programmatic outline of a process of self-planning based on a very general idea of loose zoning (efficiency lines). It is quite revealing that the countries which carried the legacy of the megastructure gave self-planning a central role in their early computational visions. France and Italy (even after the wave of the megastructural dystopias) are the two countries that exemplify this observation. Apart from Friedman, the Turin Center for the Study of Environmental Cybernetics was conducting similar research in the 70s, while advocating for the collective ownership of information and information processing.
However, what I also find of particular interest in Friedman’s program is the first loop: The users first create an associative graph of spaces by explicitly stating their desires. These desires are then evaluated in relation to “real-life” data, which have to do with a mapping of the user’s everyday habits (how many times one enters a room, what the user’s most usual circulation patterns are). In the case of conflicts between the user’s desires and the life patterns that result from this mapping, the program states the conflict and invites the user to reconsider their choices. The constitution of “objective” user behavior models and the establishment of occupancy patterns brings to mind current research on sensor-enhanced environments for behavior monitoring. One does not need to go far to think of projects such as the Newcastle University Culture Lab Ambient Kitchen project, incorporating the dimension of reflection on the way one actually inhabits space. With a hint of irony one could propose that this monitoring, which becomes subtler and subtler as sensor technologies progress, almost proposes a model of inverse phenomenology (!); making design decisions based on the room’s experience of the user. This is not very far from Cedric Price’s Generator project, of a space which rearranges itself based on the user’s behaviors and makes more informed choices based on the users’ reactions to these changes (more about this in the section about Frazer). Also, this is certainly not far from Negroponte’s initial visions on the interaction of user and environment as they had been hinted at in his thesis and elaborated on at the Architecture Machine. The FLATWRITER exemplifies the meeting point of the French spatial topologies and the cybernetic visions of the US. However, this contact did not leave the other side unaffected.
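To make the logic of this first loop concrete, here is a minimal sketch in Python of how such a desire-versus-habit conflict check could work. The function names, data shapes and the trip threshold are my own illustrative assumptions, not a reconstruction of Friedman’s actual notation.

```python
# Hypothetical sketch of the FLATWRITER's first loop: the user states
# desired adjacencies between rooms; these are checked against observed
# circulation data; conflicts are reported back for reconsideration.

def find_conflicts(desired_adjacencies, observed_trips, threshold=5):
    """Flag desired adjacencies that the recorded habits contradict.

    desired_adjacencies: list of (room_a, room_b) pairs the user wants adjacent.
    observed_trips: dict mapping frozenset({room_a, room_b}) to a trip count.
    threshold: minimum trips for an adjacency desire to be considered supported.
    """
    conflicts = []
    for room_a, room_b in desired_adjacencies:
        trips = observed_trips.get(frozenset((room_a, room_b)), 0)
        if trips < threshold:
            conflicts.append((room_a, room_b, trips))
    return conflicts

desires = [("kitchen", "dining"), ("bedroom", "study")]
trips = {frozenset(("kitchen", "dining")): 14,
         frozenset(("bedroom", "study")): 1}

for a, b, n in find_conflicts(desires, trips):
    print(f"Conflict: you want {a} next to {b}, but moved between them only {n} times.")
```

The point of the sketch is the dialogue, not the data structure: the program does not overrule the user, it merely states the conflict and hands the choice back.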

The FLATWRITER's repertoire of choices

3. Architecture Machine(s): Your Surrogate You
Yona Friedman has used a mathematical scaffolding to support philosophical positions in a manner which affords the reader the opportunity to disagree with his utopian posture, but still benefit from his techniques. […] If you are a student you will find the paradoxical intersection of two academic streams – participatory design and scientific methods – too frequently held apart by the circumstances of our practice [10]
In The Architecture Machine [11], Negroponte had already outlined paradigms of human-machine interaction within the design process, attributing to the computer the role of a problem-worrying partner: a medium which, through its computational power, can allow architects to reach a new humanism, addressing at the same time the very big and the very small and thus surpassing the reductionist averaging models used in prior architectural paradigms. This approach was primarily based on the assumption that the functions of communication, inference, understanding of context and self-improvement will raise machines to the level of valuable collaborators; not problem-solving artifacts, but problem-worrying partners of the designer. These would be improved versions of the architect’s self, allowing him to manage inconceivable complexities and stand critically in front of his own work, with beneficial results both for him and the user.

By 1975 Negroponte had already moved away from the vision of empowering the architect and had started formulating the vision of computation as a tool for user empowerment in design, influenced by the vision of an “architecture without architects”. This attitude is manifested in every possible way: in his introduction to Friedman’s Towards a Scientific Architecture, in the introduction of Reflections on Computer Aids to Design and Architecture [12], where he explicitly mentions Friedman’s ideas as sources of inspiration for how to use computers in architecture, and of course in his chapter “Computer aided participatory design” in Soft Architecture Machines.
In Negroponte’s visions the machine acquires connotations of a subject, in conversation with the user-designer. In his Soft Architecture Machines, the ecology of interconnected amplifiers combined with the discourse of a “surrogate you” brings back to the surface the previously discussed scheme of a technological ubiquity. However, this time the technological subject-object is figured not as an environment but as a collective body, with social and political connotations. The design amplifier is the representation of the individual in the collective system. The idea of a network of surrogate individuals, which nevertheless maintains the appearance of a one-to-one, personal relationship between the self and its own design amplifier, proposes a scheme where the machine becomes the space where the individual and the collective are unified.
It seems that the design amplifiers have a twofold function: they establish a ubiquitous network in scale which once more transcends the individual, while at the same time they become recognizable subjects; artifacts with which one can engage in conversation, can empathise with, can learn to know and be known by. The process of thinking about thinking through engaging in conversation with one’s amplifier makes it also a tool for self-reflection and education. The technological platform/object is portrayed as a subject with the knowledge of an architect (expert knowledge) but no will to power. On the contrary, the machine establishes an empathic relationship with the user; it “learns” the user and starts injecting them with its embedded expertise, making them the architect.
Unlike Alexander, who saw computers as an army of uncreative clerks, Negroponte sees them as conversational machines democratizing design by engaging non-experts in an educational and self-reflective process; offering them a trip to “Designland”. John Frazer, on the other hand, sees computers as slaves of infinite power and patience and distances himself from Negroponte’s approach, which “placed high expectations on software and hardware none of which delivered really any answers”.

4. Evolutionary Architecture: Kits of changing parts
One could perhaps claim that Frazer’s Evolutionary Architecture does not quite fit the time frame or the thematics of this discussion. Indeed, the book was published in 1995, two decades after the time from which this discussion draws its references and at what one could call a time of, at least, greater computational maturity. However, the Evolutionary Architecture is a condenser of thirty years of work, much of which draws its references from cybernetics and more specifically Gordon Pask, who also prefaces the book. In his foreword Pask asserts architecture as a “living, evolving thing”[13], proactive and culturally expressive at the same time, a condenser of the life of those who inhabit it. His vision is to use the computer not as an aid to design but as an evolutionary accelerator and a generative force: “A new form of designed artifact interacting and evolving in harmony with natural forces, including those of society”[14].
Frazer seldom explicitly discusses design democratization and user empowerment. However, the personalization of design through the interaction with an environment genetically programmed to be responsive to the movements of its inhabitants is a central theme in his book.
When it comes to stating a vision about architecture, Frazer outlines an intelligent environment, not stable like the crystalline structures of the spatial city, but responsive to the ever changing life it fosters. When, on the other hand, it comes to the technological tool, the computer is conceptualized as an electronic muse, a “genii in a bottle which can compress evolutionary space and time so that complexity and emergent architectural form are able to develop” [15]. More so than his architectural manifesto, which can be seen as deeply influenced by the imaginary of Artificial Life, perhaps as a counterpoint to Negroponte’s Artificial Intelligence, what is interesting in Frazer’s book is his presentation of a repertoire of toolkits for design.
In most cases, these toolkits -cubes with embedded sensors- are used as an intuitive way of extracting fully developed architectural drawings and perspectives by playing in the physical world. Frazer characteristically refers to Cedric Price’s Generator project, for which he had been asked to be the computational consultant. The Generator is a kit of parts allowing for spatial reconfigurations according to the desires of its users, where even the building itself would be able to register its own boredom and initiate processes of space rearrangement.
The implications of design participation and the discourse on the non expert are brought into the discussion only through the reference to the Walter Segal model, allowing people who knew nothing about architecture to build simple models based on surfaces and sticks through a physical toolkit and have the computer calculate the entire structure for them.

Working electronic model of the Generator project: John and Julia Frazer, Cedric Price Computer Consultants, 1980

This prototyping approach, taking as input sketchy, hands-on models from the user and translating them into expert views, is the dominant paradigm in current tools designed for the democratization of engineering, offering the possibility of low-floor input and high-ceiling output. An indicative example, but not the only one, in this direction is Fritzing, an electronics prototyping software with views requiring increasing technical expertise.
Frazer, however, notes the limiting character of the kit-of-parts approach and advocates for a kit without the parts, or more specifically, a kit with parts which can themselves be subjects of evolution.
These toolkits are portrayed as form-finding tools, providing the designer with constantly unpredictable stimuli and allowing for simulations of the phenotypes of the design’s genetic code under different conditions. The approach of a slave of infinite power and patience, evoking responses from the designer through its endless generative potential and its possibility for simulation, is very close to current tangents of computational visions.

In the summer of 2011, at the International Conference "Rethinking the Human in Technology Driven Architecture", Kostas Terzidis referred to the computer as a new author in ways very similar to Frazer, while Manuel DeLanda proposed the combination of genetic programming, neural nets and multi-agent systems as the ultimate form-finding tool for designers. The vision is quite simple: a tool which can with certainty expose all the possible solutions meeting certain constraints, which is capable of learning one's taste through every iteration and designer decision, and which is finally able to reconcile the local with the global by simulating different behavior patterns under specific general rulesets. The only parts of the process which would still demand a certain degree of expertise are the specification of the design requirements prior to the specification of the genetic code, and the evaluation of the designs proposed by the patient computer-slave.
The designer plays the role of the evaluator of the creative accidents of what is, at bottom, an unimaginative tool. Technology, again, becomes what we expect to surprise us.
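The designer-as-evaluator loop described above can be sketched as an interactive genetic algorithm. The following is a minimal illustrative sketch, not any system that Frazer, Terzidis or DeLanda actually built: candidate "genetic codes" are stood in for by toy bit-strings, and the designer's judgement is stood in for by a scoring function (here, simply preferring more 1s), since a truly interactive version would pause for human input at that step. All names and parameters are assumptions made for illustration.

```python
import random

random.seed(0)  # reproducible toy run

GENES = 16        # length of each candidate's "genetic code"
POP_SIZE = 8      # candidates proposed per generation
GENERATIONS = 30

def random_design():
    # The machine proposes a candidate: a random genetic code.
    return [random.randint(0, 1) for _ in range(GENES)]

def designer_score(design):
    # Stand-in for the designer's evaluation -- the only step that,
    # in the vision described above, still requires human expertise.
    # Here the "taste" is simply a preference for more 1s.
    return sum(design)

def mutate(design, rate=0.1):
    # The generative slave varies the preferred designs.
    return [1 - g if random.random() < rate else g for g in design]

def evolve():
    population = [random_design() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Designer evaluates; machine breeds the preferred half.
        population.sort(key=designer_score, reverse=True)
        parents = population[: POP_SIZE // 2]
        population = parents + [mutate(random.choice(parents)) for _ in parents]
    return max(population, key=designer_score)

best = evolve()
```

Under this reading, the human never designs directly: they only rank the machine's "creative accidents", and the selection pressure of those rankings is what steers the unimaginative generator.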

~~~~

In this post I diagrammed three different approaches to design for empowerment and commented on fragments of their overlaps, disparities and conceptual continuities with kindred contemporary discourses. Whether figured as an environment, subject or toolkit, the technological artifact (be it machine, space-structure or sensor-enhanced physical modeling kit) is designed so as to encompass the necessary means of control ensuring the stability and adequacy of the system, be it through structural, statistical or performative constraints. Ironically, however, all these proposals are architectural proposals, designed by architects so as to participate in the disciplinary discussion which they try to subvert. Nonetheless, I believe that the dissection and reshuffling of these figurations provide fertile diagrams for the re-conception of technology-mediated design participation today, within the broader context of the democratization of/through technology.
For instance, as a thought experiment: when the environment is dematerialized/virtualized and can be changed in real time, when even the platform can be collectively designed and destabilized, can one not conceive a scheme where these two models interact and retroact, contain and are contained within each other, in a loop where collectivities design the environments of their interaction?



The emergence of participatory techno-utopias: GEAM, GIAP and Yona Friedman

This post is part of my attempt to construct an archaeology of Open Architecture(s), with emphasis on works where the figuration of technology plays a central, although not always explicitly stated, role. In this entry I provide fragments of the general climate which nourished the radical architectural proposals of the 1960s-1970s in France and subsequently focus on the life and work of Yona Friedman. This approach by no means claims to be a complete account of the emergence of the participatory project; in fact it is deliberately selective. As many of you may argue, participatory design did not only inhabit the space of unbuilt utopian projects, nor can the radical utopias of the time be viewed under the unified interpretational prism of technology-mediated participation, as this was not always what was mainly at stake; a reading of Constant's proposals solely from this viewpoint would be deficient.

I choose to focus on Yona Friedman and his Mobile Architecture for a series of reasons. First, because his biography can be seen as an exemplar of the historical transformations which interest me in this exploration: from modernist paternalism, to the technopolitical utopia, and then to computation. Second, because his work "played" simultaneously in the European and American scenes, acting as a precursor not only of the sixties' architectural radicalism (Archigram, the Japanese Metabolists) but also of the Architecture Machine Group at MIT. In that sense, through the work of Friedman one can establish connections between this radicalism and the first computational experiments in architecture.

My claim is that the destabilization of the architect-expert and the emergence of the demand for the "democratization" of architecture through the mediation of technology can benefit from an investigation of the way technology was conceptualized in pre-computational examples.

Eckhard Schulze Fielitz's metaeder structures for a Spatial Urbanism

Yona Friedman is part of a generation, situated from the late 1950s to the early 1970s, which radically modified the boundaries and definition of Architecture. The rejection of the Modern Movement's paternalist practices and the quest for an Other Architecture cannot be seen in isolation from a broader change of paradigm, which not only created new conditions and subsequently new issues to be addressed, but decisively influenced all fields of architectural thinking and praxis. Being very conscious of the fact that the relation of the socio-economic shifts of the post-war era to the multiple paradigmatic shifts in architecture is far beyond the scope of a blog entry, I will resort to a single observation: the social and economic shifts at the end of the '50s formed the demand for a new functionality and led to a re-conception of architecture beyond the built secular object, as an environment, a spatial field for the expression of the relations and processes of an increasingly complex world.
According to Andrea Branzi[1], the culture of classical modernism could not accept the idea of chaos as a product of the emerging international market and insisted on the belief that the observed chaotic phenomena were nothing but the result of a temporary decadence, curable by the modernist endeavor. However, this climate of crisis also created the opportunity for cultural change, through the questioning of the traditional definition of the "projet" and the assertion of a discursive and experimental nature of architectural production.
Referring to this phenomenon, Marie-Ange Brayer[2] identifies the inauguration of an architecture which constantly questions its own practice and is at the same time demiurgic and critical. This architecture posits as the common denominator of its explorations the question of mobility, "the utopia of an architecture without inscription". It is around this demand that a new experimental aesthetic arises in the circles of architects and artists who move beyond rationalist order and push architecture to its conceptual limits through constant inquiry and experimentation.

These radical groups seem to almost unanimously attempt to orchestrate a liberation from Architecture, which until then was perceived as a discipline oriented towards construction. I find very interesting Andrea Branzi's observation about a disintegration of the total work of international modernism into its constituent parts. He observes: "All the activities around the conception of the work, which were until then interconnected in multiple scales, gradually become cultures and logics in their own right, calling for a central position and an exclusive strategy". City without architecture, architecture without city, objects without city or architecture, architects without work are nothing but products of these schisms of the parts of the former total "projet"[3].
The groups emerging from this disciplinary turmoil, to which I will refer in more detail below, were by no means formed around a body of common beliefs, principles or practices, but rather through participation in a general reformative era. In fact it was often the case that some of these groups were formed externally and did not become conscious of their existence until the moment that an external observer provided them with a name and identity.
A characteristic example of such a construction is GIAP (Groupe Internationale d'Architecture Prospective), founded in 1965 on the initiative of the historian and art critic Michel Ragon. In his interview with Marie-Ange Brayer and Frederick Migayrou[4], Ragon accepts that the idea of the synthesis of the arts and the social role of the artist were two of his main preoccupations at that time. It was this "romanticism of the avant-garde" that led him to publish articles on Yona Friedman and, after they approached him, on Maymont, Ruhnau and Frei Otto.
Having all these files at hand, Michel Ragon admits that he started realizing affinities, not so much in terms of content, but mostly in the realization of the necessity of decisive change. "It was obvious that Le Corbusier's time had finished and now something new was coming […] I thought that a synthesis was necessary so I wrote "Où vivrons-nous demain?" […] There was indeed an international condition of creation, but nonetheless many of its participants did not know each other. For example Friedman with Maymont. They met later, but did not really appreciate each other."
One can hardly question the inaugural character of Ragon's first collection, before which there was scarcely any literature on prospective architecture. This construction attracted the interest of Yona Friedman, Maymont and Schoffer, who thought it important to invest it with manifests and exhibitions, which was also the initiation of GIAP. This concept of "prospective architecture" was the central theme of the weekly meetings of the group, held in a room offered to them by the Musee des Arts Decoratifs, as well as the content of the publications and exhibitions which followed.
The presence of Michel Ragon, taking into account his vision of a synthesis of the arts and his contribution to the theorizing of the relation between architecture and sculpture in the '60s, perhaps led to a totalization of architecture, which in turn contributed to the subversion of its stiff disciplinary boundaries. It signified what Friedman defines as the existence of a link between architecture and the fundamental fields of human culture, like the sciences (physics and biology), social organization (economy, group construction) and the arts (individuation in all possible forms). This does not mean that architecture is more or less important than any of these sectors, but that it is part of the same whole.

But let’s take a few steps back.

Yona Friedman in his atelier in Paris (source: Blueprint magazine)

The 10th CIAM, which marked the end of the International Style, brings Yona Friedman to Dubrovnik from Israel, where he was working as an architect. Friedman has already completed his studies at the Technion in Haifa and, earlier, at the Polytechnic University of his birth-town, Budapest; he has come into contact with Konrad Wachsmann's studies on prefabrication techniques and three-dimensional structures, and has already made an unsuccessful attempt at participatory residential design with inhabitants of Haifa.
The central theme of the Dubrovnik CIAM was the constitution of a Charter of Habitat, discussed by four different subgroups. Through the subgroup "Growth and Change", Yona Friedman has the chance to present for the first time the principles of an architecture which allows for social mobility through habitats and urban configurations that are composed and recomposed according to the intentions of their inhabitants.
Friedman’s proposals on mobility found the support of the journalist G. Kuhne, who published an article authored by Yona Friedman in the German magazine “Bauwelt”. This publication became the reason that Friedman left Haifa, moving first to Germany, where he met Frei Otto and Gunschel, then to the Netherlands, and finally to Paris, where he settled in 1957 after receiving a collaboration proposal from Jean Prouve.
Paris was the founding place of GEAM (Groupe d’Etudes d’Architecture Mobile), a group formed around the demand for an architecture adapted to the fast-paced changes of modern life, clearly influenced, like many other groups constituted around the same time, by the general climate of inquiry and experimentation which followed the 10th CIAM inside and outside its circles.
In 1958, having already published the “Manifest for Mobile Architecture”, Friedman outlines the fundamental principles of its most renowned application: the Spatial City. According to him, the pedagogy of Architecture itself had led architects to dismiss the importance of the user, whom they substituted with the non-existent entity of the “Average Man”, a being whose invented needs were increasingly at odds with the needs of the real user. In his manifest, Friedman advocated for an architecture where “the habitat is decided by the user within the framework of an infrastructure which is neither determined nor determinant” and where the buildings “should touch the ground as little as possible and can be disassembled and moved, can be altered according to the desire of each inhabitant”.
Based on the model of the “Spatial City”, a trihedral space-structure with inhabitable voids, supported by columns and spanning over inhabited and uninhabited areas, Friedman produced in 1958 his proposals for a Spatial Tunis and a Spatial Paris, and in 1959 the Venice Monegasque. Friedman retained his interest in three-dimensional space structures over the following years through the study of bridge-towns. The most well-known example from this era is the Bridge over the English Channel, which he designed in collaboration with Eckhard Schulze-Fielitz.

Collage from Friedman's drawings of the "Spatial City" (1958) and the "Bridge over the English Channel" (1963)

According to the FRAC Centre‘s biographical note, Friedman’s space structures were a point of inspiration for the largest part of the radical architecture of his time, namely Archigram and the Japanese Metabolist movement. However, despite his highly influential work, Friedman never came into the spotlight of his era: “Besides the ways that the work of his contemporaries has been incorporated in the vocabulary of mainstream architecture, Friedman’s ideas are since his time a footnote in the architectural history of Europe, especially from its British perspective”[5].
As Friedman narrates in his Blueprint magazine interview, “Cedric Price’s Fun Palace was influenced by La Ville Spatiale, which was how I got to know him. Building is not an object, it’s a process. Cedric liked this statement a lot.” Nonetheless, Friedman particularly emphasizes the point on which his approach differed from Price’s, who took the collective as the structural unit and basis of every human effort: “No individual, whether in particle physics or sociology, behaves according to abstract laws: call it the ‘principle of individuality’.” In top-down total design Friedman saw nothing but pseudo-theories, observations which only reflected the preferences of their beholders. He contends that a theory must be general and valid for anybody: “Everyone has their hypotheses. The general theory that I am trying to propound underpins all individual hypotheses.”
At first reading one could frame Friedman’s individual-centric approach as a response to the modernist reductionist and despotic generalizations. Although one can hardly deny the expression of resistance reflexes against the narcissistic subordination of the complexities of reality to rigid rules invented by experts, Friedman himself offers an alternative interpretational framework for the origins of his position. When he was 18 years old he had attended a lecture by Werner Heisenberg in Budapest. The uncertainty principle, which for many shook the foundations of scientific objectivity, was a deeply formative experience. It would perhaps be legitimate to draw links between this adolescent fascination and his suspicion of the 20th century’s Grand Theories, or his adoption of indeterminacy as a fundamental principle in his work.

Pictogram from Friedman's "Utopies realizables" representing his model of the "Societe de faible communication"

A parallel interest, present from Friedman’s first steps and accentuated through time, was the self-planning of collectivities in space and the construction of a vocabulary capable of making a “scientific architecture” approachable to non-experts. Although his “African Studies” and his interest in developing countries dated from the ’60s, the systematization of this thread of research came through his book “Toward a Scientific Architecture”. UNESCO, for whom he had worked in India during the ’80s, gave him the opportunity to test and develop these ideas through a commission, asking him to create an illustrated manual for untrained workers, so that they could successfully produce structures based on simple materials and techniques. The result of this exploration of self-construction was the Simple Technology Museum in Madras, India.
Through “Toward a Scientific Architecture” Friedman contributed to the systematization of the philosophical implications of user empowerment, turning the concept of “auto-planification” into a consistent and multifaceted theory, while at the same time laying the foundations for viewing computation as a facilitator of the user empowerment he envisioned. What is particularly interesting here is that the foreword of the book’s English translation is written by Nicholas Negroponte, who admits the significant influence that his encounter with Friedman’s ideas had on the transformation of his research agenda.
Friedman’s main objective in this book is to provide the techniques to “democratize” design: to free the user from the “patronage” of the architect and to enable “non-experts” to make their own designs, as they are the ones who best know their needs and desires and, most importantly, bear the risk of failure.

Self-planning, the rejection of the expert, the invention of languages for learning, as well as his computational endeavors which strongly influenced the Architecture Machine Group at MIT, all central threads running throughout Friedman’s multivalent work, bring him very close to the central theme of openArchitecture(s). In my upcoming entry I will attempt to conceptualize Friedman’s figuration of technologically mediated participation within a larger framework of practices in Europe and the US.

Notes
[1] Branzi A., Le mouvement radical, in Brayer M.-A., Migayrou F., Architectures Experimentales, 1950-2000, Collection du FRAC Centre, Orleans (France): Editions HYX, 2005, pp. 33-38
[2] Brayer M.-A., Le FRAC Centre, une collection experimentale, op. cit. [1], pp. 7-10
[3] op. cit. [1]
[4] Ragon M., Entretien avec M.-A. Brayer et F. Migayrou, op. cit. [1], pp. 45-50
[5] http://www.blueprintmagazine.co.uk/index.php/everything-else/interview-yona-friedman/


Prolegomena

“Open design is now finding its place inside the collective imagination […] there are no more isolated projects but a whole ecosystem is emerging through the weaving of collaborative networks”
— Massimo Menichinelli, founder of openp2pdesign.org

In the world of immaterial production (information, ideas, code etc.), Open Source has granted individuals and collectivities immediate access to the technologies they use by actively re-diagramming the process of design/distribution and redefining the roles of its actors. Apart from the impact of this shift from a political-economy standpoint, the spread of the ideas and practices of Open Source has reloaded the discussion on technology democratization and user empowerment, not only in the immaterial but also in the material sphere.
The challenge of “open sourcing” the physical world is, however, considerably more complex; when it comes to actual objects, the question of what it means for something to be “open” has to simultaneously account for its material (manufacturing) and immaterial (design, code) aspects. The multifaceted nature of this “openness” creates the need for layered conceptualizations (e.g. Sterling’s six-layer burrito) addressing the theoretical, practical and legislative intricacies of “open design”; the Open Source Hardware (OSHW) definition is an example of community-driven efforts in this direction.

The cultivation of these ideas revives a latent architectural vision, expressed in enthusiastic techno-utopias about four decades ago and abandoned in the mid-seventies, leaving a sense of disillusionment and unfulfilled potential. This vision of technology-mediated participatory design evolved around the use of computer-aided design and (information) technology as a means to encourage user participation and to empower “non-experts” to directly express their needs and desires beyond or without the mediation of the architect.
The combination of this pre-computational historical precedent, which has left a heritage of concepts, diagrams and science-fictional representations (e.g. megastructures, design amplifiers), with the growing discourse on open source and the affordances offered by information technology creates the potential for a re-problematization of participatory design in the light of this new paradigm.

The recent initiation of the discourse on “Open Source Architecture (OSArc)” indicates this possibility of rethinking Architecture as a peer-to-peer, collectively driven process, where individuals and collectivities design and change the spaces they inhabit. Initiatives like p2p urbanism or Architecture for Humanity have already started discussing or actively implementing versions of this concept. However, these movements remain isolated and lean heavily towards either manifesto-like declarations with sparse case studies, or immediate implementations of a vernacular type of OSArc (emergency habitat, the developing world). This breach between theory and practice creates the need to initiate a discussion on the definition of Open Source Architecture, bringing together its current pioneers and engaging communities of architects and users.

Especially when it comes to architecture, the problem of defining “openness” becomes even more complex: first, because it has to account for the multiple actors/stakeholders who participate in the design, construction and operation of buildings; and second, because it has to take a position on the persistent question of what a truly “open” architecture is and through which procedures and tools it can be achieved. Given that open access to blueprints and BIM models may be a necessary but is by no means a sufficient condition for this openness, we need to consider how “architectural knowledge” can be made accessible to non-experts and to conceive platforms which allow for horizontal decision-making processes.

Open Source Architecture is not just Architecture going “Open”; it requires a rethinking of the “discipline’s(?)” theory and practice, a re-diagramming of its processes and of the roles of the subjects involved in them. Therefore, in order to establish a systematic substrate for the definition of Open Source Architecture, “Open” needs to be viewed from the perspective of two parallel discourses: from an “architecture” point of view (re-diagramming the design process) and from an “open design” point of view (analogies, disparities, extensions of current definitions/problematics etc.).

This is what this blog envisions to do: to create a repository of ideas and attitudes discussing the notion of “openness” in Architecture, through intersections of the past participatory techno-utopias with the conceptual and technical underpinnings of Open Source/Open Design.