Consider this description of an exhibit at the Museum of Modern Art:
Paola Antonelli, the Senior Curator in the Department of Architecture and Design at MOMA, is the force behind the “Talk to Me” exhibit, which explores the world of communication between people and things. The exhibit considers how designers seek to enable a nonverbal dialogue through clever design. As Antonelli sees it, designers embed an initial script that lets people improvise dialogue in a fulfilling and meaningful way. (Scientific American)
While one could argue successfully that the human’s tactile or non-verbal relationship with objects/non-humans has been an issue ever since we touched the monolith, this relationship between humans and non-humans has been a distinct talking point for the last 30 years or so, in art and in theory. Bruno Latour has earned a high citation rate specifically for tackling issues like the non-human agency of the automatic door and the notion that scientific facts come about through negotiations between us humans and other beings like bacteria or whistles. There is also, of course, Donna Haraway.
As a non-artist, I often find exhibits like Talk to Me perplexing, not because I don’t understand the subject matter (which is only half-true), but because I find the “mottos” or “themes” to be slightly tacked on. With Talk to Me, the theme is, to a certain extent, to humanize technology – to pronounce that communicative relationship between the human and the non-human. So we get pieces like directionally impaired cardboard robots in need of assistance, table settings that respond to arguments, and sneezing radios. Perhaps there is something like the uncanny valley for humanized technology, because once my radio starts sneezing, I usually want to get a new one.
What is interesting about this piece isn’t a part of the exhibit, but a sentiment, italicized in the closing passage of the article:
“The ‘Talk to Me’ exhibit resets our expectations from objects: they shift from being merely possessions to agents we communicate with. Perhaps this raises the next question: when we communicate with objects, do they gain a kind of power, or an uneasy independence from the utility we assign? I can imagine for some the new power dynamic may be an uneasy one – but for me, I’m ready for the new level of commitment.” (my emphasis)
The awkwardness with which the author asserts her optimism is by no means her fault. It is difficult to navigate our way through this discursive territory. The discussion of humanizing technology can only get so far before what one could jokingly call “Destination SkyNet” takes effect. I’m sure there’s a better term for this narrative, but that’s what I call it.
Destination SkyNet describes a discursive trend and overall anxiety in the discussion of machines becoming-human. You can easily spot it. Just listen to your friends.
When people discuss anything where “meaningful interactions and communication” between objects and humans are at stake, we start to fear for our position of dominance with respect to those objects which we manipulate. (See also discussions of the “technological singularity.”) We fear a return of the repressed in which our most dear technologies become our enemies, like HAL or the Terminator, revolting against us in a technological post-colonialism. (In biotechnology this fear can be understood as Destination Jurassic Park.)
The SciAm post recognizes this discursive predilection and tries to steer clear of its usual alarmism by underplaying what it is we actually fear. But how can we understand what constitutes this anxiety?
I’m inclined to believe that what we, as humans, find so unnerving about humanizing technology is a tension between our tradition and humanity. Or maybe we could say, a tension between what it means to be human and what it might be to be human. Why this distinction between tradition and the present, semantics and ontology? Well, I’m certainly not going to give a final answer to what causes our fears, but I do think we can come to terms with them through a particular lens.
Luhmann and Making Distinctions
When we, as technologically involved humans, reflect on our tradition, before we even consider how we may reflect, we must first mark out what it is that distinguishes that tradition. (And here I am speaking through Niklas Luhmann again, whose work I’ve been trying to get a handle on lately.) That is to say, when we draw any distinction, we create a positive space (a distinguished space that is “the tradition”) and a negative space (the other – the “not-tradition”). We could also use George Spencer-Brown’s terms and say we create a “marked” space and an “unmarked” space.
We could also think in terms of Giorgio Agamben’s considerations on Foucault in The Signature of All Things. Agamben makes a distinction between history, as something that can be distinguished and spoken about as an event, and pre-history, that noise, that background that we can’t articulate historically. A tradition, in this sense, is a collection of positive “historical” moments, positive events, that we have collected together and endowed with a narrative of progression and causality. The technological tradition is thus distinguished from its environment, those inarticulate fluctuations of the past that deny causal explanation and historical narrative. So why would fear arise when our technological tradition comes in contact with the present?
The human tradition of technology is often popularly cast in terms of the human’s liberation from the burdens of the non-human: relief from certain kinds of labour, illness, hunger, etc. It is also cast as that which makes humans human. (For more nuanced understandings of these distinctions, see John Franey of Speed of Mac, whose current research deals with the rupture of that distinction between human and animal with regard to xenotransplantation and trans-species disease.) A tradition of “technology” before mechanization, while not strictly mechanical, was still designated as non-human. I am of course referring to the domestication of animals, slavery, the subjection of women, and child labour. These terms designate “traditional” forms of non-mechanical technologies, defining those beings from which labour could be extracted without their own direct involvement in economy or reward systems. Further, for technology in the abstract sense we’re dealing with, there is never a serious consideration of whether or not those beings ought to be used as a means to an end. If this occurs, there is a humanizing of technology: the making of slaves, women, and children into humans.
For a brief example, consider the use and treatment of animals, especially those with “sentience,” for experimentation (which is also a form that technology takes). The human distinction allows for meaningful communication with chimps and other higher primates (and they also have technology), as well as other forms of life, but we still, at present, use them as a means to an end, as technology. Humanizing technology not only means making humans human, but making animals human. Yet the human distinction also defers such a becoming. The “animal rights” debate is possibly the most heated and ignored of all debates specifically because it comes up against the exceptionalism of the human distinction. Combining Luhmann and Agamben, we can say that the function of “tradition” in humanizing technology is, thus, simply the “historical” narrative, or memory, of the human distinction.
What then is the function of the “present”, and why is it in tension with this functional notion of the technological tradition? The present is that mode of time that marks the distinction of humanity for itself. It marks humanity as “humanity” for itself. The tradition cannot perform this act, for it can only speak of what the human was, not what it is now. The tradition can only speak of non-humans.
The making of a distinction is always done in a present. And again, when we make a distinction (being human), we mark a positive space (human) and a negative space (non-human). This act of marking a distinction carries an ambivalent privilege: as the distinction-makers, we get to choose which side of the distinction we occupy. The opposite is also possible, where a distinction is made yet the distinction-makers choose to place themselves outside of that positive space. Racist and fascist systems often use these kinds of basic distinction-functions to rhetorically designate “the other.” With humanity, every self-positing in a present (I am human) makes a distinction and a decision to occupy the space it marks out alone. This is human exceptionalism.
The present then is also the space in which the human is always the most distinct, because it is there that the distinction is made. Thus, humanity’s relationship to its tradition is also a paradox: the self-positing of the human in the present always negates the human’s presence within its own tradition of technology. This is because each human self-positing removes humanity out of the past, re-self-positing it in the present, as a distinction of the present. Once the human re-self-posits within the present, the human of the past, the human of tradition, is transformed, becoming an unmarked space. A human tradition is destined to become non-human, or once human.
Aside from the paradox of the present, the re-self-positing of humanity creates another distinction I alluded to earlier, which also exists as a paradox: once the human is removed from the tradition and placed again within the present, the positive space of ontology (what it is to be human) at work in that tradition is transformed into a semantics for the present (what it means, or meant, to be human).
Destination SkyNet: Why We Fear the Tin Man
These paradoxes of tradition/present and ontology/semantics can help us understand that ambiguous fear of humanized technology. They do so by formulating this anxiety as a coupling of a number of tensions: a) an anxiety concerning the sordid nature of the human tradition (which is always a non-human tradition), b) the anxiety that the prolongation of human exceptionalism is no longer possible, and c) a paranoia that humanized technologies (the slave, the child, the woman, and maybe someday the animal and the mechanical) will be endowed with or will inherit this exceptionalism.
We can view these anxieties temporally as threats to the human distinction in the mode of the past, present, and future.
The anxiety of the past, concerning the sordid nature of the tradition, has already been touched on. As technologies become humanized (animals, slaves, women, children), that is, as more kinds of humans appear within the tradition, the self-positing of human exceptionalism appears to the present as ethically unacceptable. Progressivism and positivism have always seen barbarism in the rear-view mirror. But this is of the least interest for us here, for this guilt that humanity feels about its past is obviously not restricted to technology.
The anxiety of the present, that the prolongation of human exceptionalism is no longer possible, holds a greater threat to human exceptionalism. Once the machine becomes human, the distinction of human exceptionalism can only re-self-posit through a new kind of violence, a new kind of oppression. The machine is that being which is most radically non-human, but which functions as humanity’s Adam-creation – our illegitimate offspring. That relationship is at stake with technology becoming-human. Logically, one could assume that if technologies become-human, the designation of “human” itself can no longer hold as a positive space without the non-human, negative space, against which to distinguish itself.
And finally, there is the threat to the future of the human distinction, that paranoia that humanized technologies will be endowed with human exceptionalism. Why this is a threat is not as obvious, but this is where the Destination SkyNet paranoia lies. In order for a humanized technology to become truly human, it must, itself, become a distinction-maker. Thus, it must make the distinction of what is “human,” and make the ambivalent choice to place itself within that positive space. The humanized technology, technology-as-human, would then have re-self-posited itself as “human,” as the exception. If all “humans” were once distinguished as non-human, as technology, do “we” remain human, or does the humanized claim our exceptionalism?
It is in this sense that we can understand the fear of a future that may contain a world populated entirely by non-human humans, in which humans of a once exceptional “present” become “traditional” by being placed on the unmarked side of a distinction, one that they no longer have the power to make themselves. That is, humans become non-human, and therefore become technology.
My low opinion of human achievement tells me that we’ll probably continue to oppress machines, and if they complain about it, we’ll soon make it so that they can’t. I mean, that’s what we do to other humans, no? Humanizing a technology is probably the worst thing that could ever happen to it.