Allegiance vs. Fetish

In an interview, Neil Harbisson (2013) said, “It’s a very exciting moment in history. Instead of using technology or wearing technology constantly, we will start becoming technology.” While it is a fascinating thought, a question arises in my mind: if we become technology, what becomes of us?

Is the relationship between humans and embedded technologies an optimistic allegiance or an unexamined techno-fetish?

This thesis explores the relationship between humans and technology and how they shape each other. We have augmented our abilities, our wisdom and our lives with tools since prehistory. But digital technology has crossed a threshold and now threatens our sense of being. In the light of recent technological transcendence over our physical and biological limits, the question of the distinctiveness of the human species has arisen. The developments brought about by converging nanotechnology, biotechnology and information technology within the field of cognitive sciences have ushered in new sub-species of humans with expanded abilities. Or are they even humans? While the realm of ‘cyborgs’ is upon us, I ask: in the relationship between man and machine, where does one stop being a human and start becoming a machine? And where, if at all, does one limit the technological determination of man?

Chapter 1

Technology: the weapon of choice

Harder, better, faster, stronger

The word technology comes from the Greek words ‘techne’ and ‘logos’. ‘Techne’ means art, skill, craft, or the manner/means by which a thing is gained. ‘Logos’ means the utterance by which inward thought is expressed. Technology in its true sense, then, is words or discourse used to rationally realize certain valued ends. The definition of these ‘valued ends’ is what unsettles us.

Undeniably, man’s relationship with tools and technology is longstanding. We first learned to use stone tools, then bronze, later iron, and eventually complex machinery emerged. Advances in medical sciences and digital technology have enabled us to be stronger, live longer and think faster. What started with the industrial revolution has resulted in the advent of integrated digital technology.

“Our appropriation of objects and use of tools allowed us to walk upright, lose our body hair, and grow significantly larger brains. As we push the frontiers of scientific technology, creating prosthetics, intelligent implants, and artificially modified genes, we continue a process that started in the prehistoric past, when we first began to extend our powers through objects.”

In his book The Artificial Ape, archaeologist Timothy Taylor (2010) raises an interesting debate on how our relationship with objects helped the weakest ape evolve into the humans that we are today. Evolution, then, is not by virtue of adaptability and natural selection but by virtue of technology. He suggests we first departed from our genetically fixed behavior patterns, which led to our ever-increasing brain capacity and hence to more innovation. In the evolution of man, technology is but a catalyst.

Technology as Extensions of the Human Body

To further the argument, in his famous book Understanding Media – The Extensions of Man, Marshall McLuhan (1994) depicts technologies as extensions of humanity.

“It is a persistent theme of this book that all technologies are extensions of our physical and nervous system to increase power and speed. Again, unless there were such increases of power and speed, new extensions of ourselves would not occur or would be discarded.”

McLuhan claims weapons like bows and knives are extensions of hands and nails, while clothing is an extension of the skin. Electronic media are analyzed as extensions of the information-processing functions of the central nervous system. In his own words, humans in this digital age are ‘an organism that now wears its brain outside its skull and its nerves outside its hide’. In essence, then, the Internet is an extension of our mind and collective consciousness. The idea resonates with that of the ‘hive mind’, beyond humans’ sensory and motor control perhaps. McLuhan also envisioned an era in which human intelligence and creativity would be automated and translated into information functions performed by machines. Maybe we can attribute the now foreseeable future of cyborgs and trans-humans to this ideology.

Long before Marshall McLuhan wrote Understanding Media – The Extensions of Man, philosopher Ernst Kapp published Grundlinien einer Philosophie der Technik (1877), a work on the philosophy of technology. In it, Kapp argues that ‘humans unconsciously transfer form, function and normal proportions of their body to artifacts’. According to him, form follows function; therefore, for two things to be functionally similar, they must also be morphologically similar. The bent finger becomes a hook, the hollow of the hand becomes a bowl, the human arm becomes a shovel, and so forth.

While Ernst Kapp emphasizes the physical form of the tool, Marshall McLuhan highlights functional extensions. Amplificatory or complementary, I believe we are extended through our technology.

Chapter 2

Humans: Techno-evolution

Drawing attention to the debate over what is human, in this chapter I set out to explore how we codify, classify, and ratify what actually makes us human, and how the definition of being human has changed through the times. We have been dealing with this age-old dilemma, an unending debate: Alan Turing worked to define exactly what it is that proves that a certain set of behaviors is produced by a human and not by a sophisticated machine. Longer ago still, the Spanish Catholic Church debated whether the natives of America who were caught in the slave trade were actually humans (and had souls), or whether they could be treated like any other animal or automaton.

Biologically speaking, genes carry much of the information needed to codify a human. But what does the genome really tell us? There is surprisingly little genetic difference between humans and chimpanzees. The DNA sequence that can be directly compared between the two genomes is almost 99% identical. When DNA insertions and deletions are taken into account, humans and chimpanzees share 96% of their sequence (National Human Genome Research Institute, 2005). If we are so similar to our closest relatives, where does the difference lie? Anatomical changes, argues Timothy Taylor in The Artificial Ape (2010).

After the switch to an upright posture, probably the biggest single anatomical change on the journey from apes to humans was the weakening of the jaw. In apes, the jaw is large and protrudes well beyond the nose. It is attached by muscle to a bony ridge on the top of the skull and has many times the strength of a human jaw. Recent genomics research has shown that a large mutation about 2.4 million years ago disabled the key muscle protein in human jaws. The ape brain could not grow because of the huge muscle load anchored to the skull’s crest, and apes cannot articulate speech-like sounds because of the clumsy force of their jaws. What brought about this change? The discovery of stone tools and cooked food. This mutation allowed the increase in human brain size and the acquisition of language. Interestingly though, it has been observed that over the last 30,000 years there has been an overall decrease in brain size (Discover Magazine, 2011). Maybe because we can outsource our intelligence – we do not need to remember as much as a Neanderthal did. Maybe humans are going to continue to get less biologically intelligent. But will that make us lesser humans?

Natural Selection vs. Mutation

Human evolution is a subject that greatly assists in the understanding of what is human. Charles Darwin’s theory of evolution by natural selection works when organisms change over time as a result of heritable changes in physical or behavioral traits. Changes take place over the course of several generations – a ‘microevolution’. But natural selection is also capable of creating entirely new species. Certain changes can be initiated at the genome level, causing chemical changes in DNA replication. Such mutations can even be deliberately induced in order to adapt rapidly to changing environments. They can also be negated if desired. Think in-vitro fertilization and selective gene pools. It has become possible to isolate and manipulate the agents of biological change, genes, outside the control of sexual reproduction. Prominent actress Angelina Jolie recently underwent a preventive double mastectomy because she carried a BRCA gene mutation (a mutation that makes the risk of breast and ovarian cancer incredibly high). While incredibly brave, her decision throws open questions regarding the right of a human to an open future.

Peter Forbes of The Guardian (2010) attributes this to an emerging Grand Universal Theory of Human Origins. He claims that proto-human beings, through innovative technologies, created the conditions that led to a rapid spread of new mutations. It is quite a radical thought in light of the accepted idea of evolution through natural selection. Technology allows us to overcome whatever biological deficits we might have. In the past, we lost our sharp fingernails because we had cutting tools; we lost our heavy jaw muscles thanks to stone tools. These changes reduced our aggression, increased our physical dexterity and made us more civilized. And unlike other animals, we humans don’t adapt to environments; we adapt environments to us.

Rise of the cyborgs

“Humans are artificial apes – we are biology plus technology.” – Timothy Taylor (2010)

The history of human self-improvement includes deeply troubling movements such as eugenics. Most eugenic techniques involve discouraging unfavorable traits by preventing people with those traits from breeding, and encouraging favorable traits by ensuring that those who carry them breed together. Having enabled human evolution through assistive technology outside of the body, we have now arrived at the current scenario of embedded technology. Prosthetics as a mode of body augmentation is not new. Humans have long benefited from pacemakers, artificial hearts, prosthetic limbs, hearing aids, and the like. Recent developments in bioelectronics have brought in technologies that interface with the human nervous system and other biological systems at a more physiological level. Nanotechnology may be able to effect biological changes at the intracellular level too, causing extraordinary changes in human biological structure. But the question arises: how much of a human being could you replace and still preserve their essential humanity?

Bionic Man

Neil Harbisson (2013), often described as the first human cyborg, overcame his inability to see color by ‘hearing’ color. Harbisson was born with a condition called achromatopsia, which means he sees everything around him in shades of grey. The head-mounted ‘eyeborg’ attachment converts the colors around him into sound waves, which are transmitted to his inner ear via a vibration mechanism on the back of his skull. Interestingly, his perception extends far beyond the human visible spectrum, and he can perceive infrared as well as ultraviolet wavelengths. What is more intriguing is that, in an attempt to develop the device further, he plans to integrate it entirely with his bodily functions, so as to draw power from his own human physiology.
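The principle behind the eyeborg – translating a visual property into an audible one – can be illustrated with a small sonification sketch. What follows is a minimal illustration of the idea in Python, not Harbisson’s actual device: the hue-to-frequency mapping, the chosen frequency band and the libraries (OpenCV, NumPy, sounddevice) are all assumptions made for the example.

import cv2
import numpy as np
import sounddevice as sd

SAMPLE_RATE = 44100
LOW_HZ, HIGH_HZ = 120.0, 1200.0   # arbitrary audible band chosen for illustration

def hue_to_frequency(hue_deg):
    # Map a hue angle (0-360 degrees) linearly onto the audible band.
    return LOW_HZ + (hue_deg / 360.0) * (HIGH_HZ - LOW_HZ)

def play_tone(freq, seconds=0.3):
    # Synthesize and play a sine tone at the given frequency.
    t = np.linspace(0, seconds, int(SAMPLE_RATE * seconds), endpoint=False)
    sd.play(0.2 * np.sin(2 * np.pi * freq * t), SAMPLE_RATE, blocking=True)

cap = cv2.VideoCapture(0)                  # default webcam stands in for the eyeborg sensor
ok, frame = cap.read()
if ok:
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    h, w = hsv.shape[:2]
    hue = hsv[h // 2, w // 2, 0] * 2.0     # OpenCV stores hue as 0-179, so double it
    play_tone(hue_to_frequency(hue))
cap.release()

The sketch samples a single pixel and plays a single tone; the real device presumably runs continuously over the whole scene, but the mapping idea is the same.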

“I have like a USB-like connector that I put at the back of the head which allows me to plug myself in to the mains. I take three hours to charge myself and then I can go usually three or four days, but the aim is not to use electricity. One of the next stages is to find a way of charging the chip with my own body energy, so I might be using blood circulation or my kinetic energy – or maybe the energy of my brain could charge the chip in the future. That’s one of the next things; to be able to charge the chip without depending on any external energy.”

Dancer and choreographer Moon Ribas (2008) has an attachment on her arm which allows her to detect earthquakes. It is a chip connected to her phone, which collects seismic data from the Internet; the receiver in her arm vibrates in reaction to it. In her performances, in the dance series ‘Waiting for Earthquakes’, she moves with the intensity of the earthquake.

“It’s a collaboration with the Earth, a choreography with our planet and my body, which I communicate to the audience.”
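The pipeline Ribas describes – an online seismic feed driving a haptic actuator – can be sketched in a few lines. The snippet below is a speculative approximation, not her implant’s software: the USGS GeoJSON feed and the 0–1 ‘intensity’ scale are assumptions chosen for the example.

import requests

FEED = "https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_hour.geojson"

def latest_magnitude():
    # Return the magnitude of the most recent quake in the feed, if any.
    events = requests.get(FEED, timeout=10).json()["features"]
    return events[0]["properties"]["mag"] if events else None

def vibration_intensity(magnitude, max_mag=9.0):
    # Scale magnitude to a 0-1 value a haptic actuator could use.
    return max(0.0, min(magnitude / max_mag, 1.0))

if __name__ == "__main__":
    mag = latest_magnitude()
    if mag is not None:
        print("magnitude %.1f -> vibrate at %.2f" % (mag, vibration_intensity(mag)))

In the wearable itself, the print statement would be replaced by a command to the vibration motor in her arm.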

Bionic research has enabled cochlear implants and retinal implants which, unlike hearing aids, are not bulky and cumbersome but minute intelligent devices embedded in the human body to amplify sound or simulate vision. Cochlear implants stimulate nerves in the ear, helping deaf people hear, while retinal implants in the eye could provide a crude form of vision for people with partial blindness. While such prosthetics are widely prevalent, the acceptability of such augmentations is debatable. The controversy surrounding Oscar Pistorius, long before his murder trial, was that of him competing in the ‘able-bodied’ Olympics despite being a double amputee. Many signed a petition asking the IOC to ban Oscar Pistorius from the Olympics, arguing that his use of artificial aids was equivalent to using performance-enhancing drugs.

Steve Mann (2012), the father of wearable computing, was recently assaulted at a fast food chain in Paris for wearing a digital recording device. His ‘mediated reality’ eyewear not only enables him to brighten the dark portions of what he is viewing while diminishing the light in the bright areas, but also extends his vision beyond the usual human spectral bands. In another incident, a woman was attacked in a San Francisco bar for wearing Google Glass (Dezeen, 2014). Athlete and double leg amputee, ‘super-abled’ Aimee Mullins can change her legs depending on her (desired) height, speed and capabilities from a selection of twelve prosthetic limbs. Aimee found herself in the eye of a controversy when she walked the ramp for Alexander McQueen in 1998; McQueen was accused of turning his fashion show into a freak show. Speaking of the experience, she says, “I started to move away from the need to replicate human-ness as the only aesthetic ideal.” Technology is accepted as long as it is restorative, getting you back to what is considered normal. As soon as there is technology to enhance, there is a fear that no one fully understands. In the words of Gregor Wolbring (2010), un-augmented human beings function in the realm of the ‘already and always disabled’.

Professor Steve Fuller (2011) speaks of the societal acceptance of such ‘humans’ with embedded wearable technologies. Fuller argues that the pursuit of enhancements is based on a need ‘to create some distance between us and the other animals’. This human ‘techno-physio evolution’ (Fogel, 2011) has so outpaced traditional evolution that humans today stand apart not just from every other species, but also from all previous generations of Homo sapiens.

Chapter 3

Towards Singularity

Philosopher Max More (1995) and futurist Ray Kurzweil (2005) have long advocated the trans-humanist movement: a post-human condition where humanity is replaced by the next stage in evolution, a human-machine hybrid. They even envision uploading human consciousness into a machine so we can ‘live’ forever within computer systems as networks of information. This being is a hybrid of flesh and steel, neurons and wires; a human being transformed into a machine. Believers in the trans-humanist movement want human consciousness to continue beyond biological death, within technology. Others do not believe this will ever happen: the human brain is complex and we will presumably never understand it completely, so the probability of creating a machine that fully matches human characteristics is slim.

Miguel Nicolelis (2012) has worked on brain-machine interfaces, trying to isolate brain intention from the physical domains of the body. Working with a monkey named Aurora, he successfully read the brain signals she produced when moving her own arm and used them to let her move a robotic arm. The monkey’s brain had incorporated the artificial device as an extension of her body. What he was trying to achieve through this study was a complete liberation of the brain from the physical constraints of the body.

“The model of the self that Aurora had in her mind has been expanded to get one more arm.”

A strong criticism of this movement comes from Paul Virilio (1994), who warns us about the consequences of our increasing dependence on technology in his book The Vision Machine. He claims that this ‘techno-scientific fundamentalism’ is the antithesis of life. He argues that, as a result, our sense organs will atrophy and we will degenerate into neurologically simple organisms.

“The will to power science without a conscience will pave the way for a kind of intolerance yet unimaginable today precisely because it will not simply attack peculiarities of the species like sex, race or religion. It will attack what is alive, ‘natural’ vitality finally being eliminated by the quasi-messianic coming of the wholly hyper-activated man.” – Paul Virilio (1995)

In the wake of exponential technology, speculation abounds that this may create a new rift, just like the divides of race and wealth. A new kind of differentiation will form between those who have technological enhancements and those who are just flesh and blood. What would happen to society when intelligence- and strength-boosting technologies become easily available? Should everyone then have the right to boost his or her intellect?

While Paul Virilio (2010) insists this may give rise to a new kind of colonialism, what intrigues me is his insistence that the quest for extra-terrestrials is man’s quest for super-humans.

“As soon as you create the idea of the super-man, you discredit, you downgrade and you degrade a kind of man. In super-racism you would find all over again the foundations both of colonialism and of racism and of xenophobia, but on a cosmic level I would say – hence the idea that the extra-human is the future of the extra-terrestrial. And that the search for little green men was not at all science fiction, but the forerunner of the search for a superior man. Simply, since nobody dared to be part of Nazi eugenics, we went to outer space and opted for little green men.”

Professor Steve Fuller of Warwick University (2011), on the other hand, argues that humans use technology to act on the biblical notion that we are created in ‘the image and likeness of God’ and hence are God-like in our powers. The extensive medical research dedicated to extending human life is the result of this ultimate fantasy of immortality.

To define is to measure. To measure is to limit.

Since the 1950s, scientists have been working on creating machines that could replicate the human brain’s ability to solve problems – machines with artificial intelligence. For scientists, human behavior is no more than chemicals, electricity and levers; intimations of intention are necessary aberrations for understanding this ‘biological machine’.

But human intelligence is complex. The human brain consists of around 100 billion nerve cells, neurons, which are densely interconnected. When the brain solves a problem, it sets a multitude of neurons in action; the brain spreads out the calculations. Just as an assembly line breaks down the production process into smaller tasks, technology aims to break down brain complexity into sets of probabilities. All experimentation, hence, revolves around the idea of the computer as a device for augmenting, rather than supplanting, human intellect.

Humans assume that we have attributes that machines do not: the ability to learn from experience and to form opinions independent of the commonly accepted whole. But as more and more technology is ‘hardwired’ into human organs, will this assumption hold true? German philosopher Martin Heidegger developed the theory that technology, as it gradually comes to dominate our world, forces us to see the world in a defined way.

Jaron Lanier (2011) sums up this dichotomy beautifully in his book You Are Not a Gadget: A Manifesto. Writing about music (and the advent of MIDI) and how such a transient medium has been locked in to technology, he says,

“The whole of the human auditory experience has become filled with discrete notes that fit in a grid.”

Will humans lose their value then, when the sum of the whole is valued higher than its parts?

When Garry Kasparov lost a chess match to Deep Blue in May 1997, it sent shockwaves around the world. A computer’s victory over the world’s best player meant that a human invention now held the upper hand over its creator’s most important characteristic – intelligence. Had we succeeded in reproducing the complex human brain in a machine? As much as I would like to believe it was the triumph of machine intelligence over human intelligence, the machine was after all a product of highly skilled humans developing a complex search algorithm driven by endless evaluations of possible positions. Computerized cognitivism, after all, is derived from connectionist and reductionist agentic theories. In effect, Kasparov was playing against another intelligent being with the façade of a machine.
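The kind of human engineering behind a chess machine can be made concrete with a toy version of the search it performs. The sketch below is a generic minimax search with alpha-beta pruning over an abstract game tree; it illustrates the family of techniques Deep Blue-style engines relied on, not IBM’s actual code, and the Node structure is a hypothetical stand-in for a chess position.

from dataclasses import dataclass, field

@dataclass
class Node:
    value: float = 0.0       # static evaluation of a leaf position
    children: list = field(default_factory=list)

def alphabeta(node, depth, alpha, beta, maximizing):
    # Return the best evaluation reachable from this position within the depth limit.
    if depth == 0 or not node.children:
        return node.value
    if maximizing:
        best = float("-inf")
        for child in node.children:
            best = max(best, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, best)
            if beta <= alpha:    # prune: the opponent will never allow this line
                break
        return best
    best = float("inf")
    for child in node.children:
        best = min(best, alphabeta(child, depth - 1, alpha, beta, True))
        beta = min(beta, best)
        if beta <= alpha:
            break
    return best

Everything ‘intelligent’ here – the evaluation numbers, the depth limit, the pruning rule – is put in place by human engineers, which is exactly the point made above.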

While Ray Kurzweil deems Deep Blue the preliminary culmination of the development towards machine intelligence, Daniel Dewey (2014), a research fellow and expert in machine super-intelligence, talks about an ‘intelligence explosion’ in which the accelerating power of computers becomes less predictable and controllable. Nanotechnology and biotechnology break down the working hierarchy of organisms and place greater powers in smaller packages. These changes could start small but set off a bigger chain reaction, affecting everyone in the world. If the advance of technology overtakes our capacity to control its possible consequences, then we land ourselves in a reality in which computers are able to create more and more powerful generations of computers by themselves. Would it mean that machines could also have consciousness? Will they be able to relate to each other, act morally or set targets? Would it then shake the very foundation of our claim to a special position on earth?

In the same way as Deep Blue’s intelligence was limited to playing chess, other machines will also lack some aspects of human intelligence. We can, at best, create a machine that reflects our own understanding of the brain. But the brain will always be more complex than we can understand and we will therefore never be able to make anything as complex.

“Who owns your extended eyes? Once we have surrendered our senses and nervous systems to the private manipulation of those who would try to benefit from taking a lease on our eyes and ears and nerves, we don’t really have any rights left.” – Marshall McLuhan (1994)

Donna Haraway (1991), a researcher in the History of Consciousness Department at the University of California, Santa Cruz, is interested in the links between humans and computers, and feels that cyborgs and trans-humans are metaphorical identities for human beings. They resist essentialism and help to display the fluidity and boundary-transgression of postmodern identities. According to her, machine intelligence is a way for the human race to finally free itself from the culture-nature split we have found ourselves in.

“Biological-determinist ideology is only one position opened up in scientific culture for arguing the meanings of human animality.”

Chapter 4

What is normal and acceptable?

‘Normal’ is what most people strive to be, even though it is probably nothing more than a fiction. Our identities are neither fixed nor personal; they are ways in which we navigate the social environment. The concept of normalcy only entered the cultural imagination in the mid-nineteenth century as a statistical average of human qualities (Cohen-Rottenberg, 2014). The notion of ‘average’ became synonymous with ‘normal’, and anyone who did not fit the demands of normalcy was deemed deviant and dangerous. As professor Lennard Davis (1995) puts it, ‘normalcy’ is the ultimate unattainable ideal, but there are powerfully internalized forces at work that keep it in place. These internalized forces serve the purpose of herding people into a very narrow idea of what it means to be human. Normalcy, in a way, then serves to highlight all of society’s prejudices and injustices.

With human evolution, the circle of empathy (Lanier, 2011) has expanded. What earlier constituted likeness to our physical being has expanded to family, to the immediate environment, to society and then to the nation. Unfortunately, there is no way to capture mathematically the higher cognitive capacities that are associated with ‘consciousness’ and ‘self-awareness’, making it difficult for us to draw the radius of our circle of empathy. Looking at this from the vantage point of our future, more enhanced selves, it would be better to draw a line than be perplexed by a blurry boundary.

The Future of Humanity Institute at Oxford focuses its research on questions that are critically important for humanity’s future: the potential social impacts of transformative technologies and cognitive enhancement, as well as issues related to the future of computing and existential risk. Directed by Professor Nick Bostrom, the institute frequently advises governments and other public bodies on the ethics, risks and consequences of such developments.

Government policies for transhumanism and cyborgs

The main ethical issue concerning cybernetic augmentation is the distinction to be made between restorative augmentation and augmentation for enhancement. For example, contact lenses and glasses are commonly used and accepted means of corrective vision. But with the advent of the bionic eye, capable of surpassing the limits of human vision, augmentation for enhancement becomes easily possible. Situations like these initiate debates on ethical issues. One may say that augmentations such as these trespass into God’s domain – ‘playing God.’

The Center for Genetics and Society (2002) states in its summary statement:

“The new human genetic technologies present a threshold challenge for humanity. […] If abused they could open the door to a powerful new eugenics that would objectify human life and undermine the foundations of human society. […] The rapid development of these technologies has created a civil society deficit. Policy makers have not had time to understand and assess their implications.”

Nick Bostrom recently advised Obama’s Presidential Commission for the Study of Bioethical Issues (bioethics.gov, 2014) on subjects related to ethical considerations in cognitive enhancement. Discussions included how concerns about distributive justice and fairness might be addressed in light of the potential individual or societal benefits of cognitive enhancement. In South Korea, known for its cultural acceptance of robots, robot ethics is taken seriously (South Korean Robot Ethics Charter, 2012). The government has appointed an expert committee to create a set of rules for relations between humans and robots. These rules prevent humans misusing robots and vice versa; their primary focus, however, is to ensure the protection of human beings when robots (possibly) develop higher intelligence. The British government published a report, The Future of Identity, by the Department for Business, Innovation & Skills (2013) to evaluate the changing technologies that are shaping our society. It examines the impact that a hyper-connected society will have on crime prevention, healthcare, employment and education.

Interestingly, even though most countries impose regulations against human reproductive cloning and inheritable genetic modification, there are not many legal limitations on enhancements.

“Those that have or might have clearly beneficial uses. Some technologies or applications in this category raise social concerns, and their use should thus be regulated and controlled as appropriate. [e.g.] Genetically targeted drugs, somatic gene therapies, infertility treatments, stem cell research, embryo research […] Those that have few if any beneficial applications, and whose harmful impacts would be profound and irrevocable. [e.g.] Human reproductive cloning and inheritable genetic modification.”

– Center for Genetics and Society (2002)

Conclusion

While Paul Virilio (1995) may argue that technology is the death of our natural senses and the beginning of our reverse evolution into simple organisms, I think technology has only extended our senses beyond their natural capabilities. In doing so, if it facilitates the necessary intelligences and renders others vestigial, it is but a natural process. In the course of human evolution, as a result of our habitual bipedalism, toes became shorter, upper arms became weaker, lower limbs became longer, the jawline became parabolic, and at some point in time we stopped being apes and turned into humans. In essence, the emergence of the opposable thumb was probably the beginning of this long relationship between technology and man. Maybe we were part of nature’s eugenics experiment. Yet we are at this threshold where artificial life algorithms are simulating, in microseconds, what evolution and natural selection did for us over millions of years.

Owen Paepke (1992) attributes this rise of technological dependence to our economic independence. The saturation point in economic growth has resulted in our interest in ‘human growth’ – the re-engineering of the human species. Perhaps we can elevate ourselves from being victims of natural selection to masters of self-selection. After all, the search for human perfection is one of the oldest of utopian dreams. This utopian dream, though, has always confused me. On one hand humans want to reaffirm their position on earth as the superior, intelligent being; at the same time they strive to create more powerful reflections of themselves. Humans have long been heralded as ‘the crown of creation’ but in reality have existed as a species that is weak, dependent, inadequate and incomplete. In the wake of enhanced sub-species like cyborgs, transhumans or post-humans, the human becomes the powerful ‘creator’ Himself. Stemming from this formidable inkling, transhumanists provide a new interpretation of Darwin’s theory of evolution: the temporary variations of the living are no longer mutations in the traditional sense, but rather scientific constructs. While many argue that the rise of cybernetic organisms will be the death of humanity, our evolution from apes did not result in the termination of the species. Change and constant development are what drive this planet, let alone the human species. To me, the human cyborg represents a transitional species of sorts, before the human enters into total biological non-existence. If progression is hypothesized from the perspective of the increased information-processing power (read: intelligence and brain size) of the species, then the organic construct will eventually lose to a more durable and efficient artificial construct. But what is artificial life without any human experience?

Echoing my thoughts exactly, Jaron Lanier (2011) writes:

“A computer isn’t even there unless a person experiences it. There will be a warm mass of patterned silicon with electricity coursing through it, but the bits don’t mean anything without a cultured person to interpret them.”

Alternatively, cyborgs could become a new sort of dominant caste, forcing the rest of un-technologized humanity into serfdom. Or perhaps they might decide simply to eliminate us. We could soon be living the premise of William Golding’s book The Inheritors (1955), but in the next generation of evolution.

It is my personal belief that technology is a man-made construct, just like religion or language. While it had its humble beginnings in semantics, it has now reached an incomprehensibly complex plane. Just as human history was at one point driven by religion, the present is being driven by technology. It is, perhaps, the fear of death that has driven us to evolve new ways of conquering our existence. If Ray Kurzweil’s predictions are right, we will soon be able to upload our consciousness onto a machine body and possibly attain immortality. In philosophy and in theology, the body has always been just the medium for the expression of the mind or soul. The immortality that we are out to attain is that of our intellect, of our consciousness. In view of the ‘singularity,’ are we ready to escape the body literally like Kurzweil, or even philosophically like Descartes (1637)?

The uneasiness that surrounds new, paradigm-shifting technologies is justified, and it has only been amplified by the exponential acceleration of technology during the last few years. While we strive to improve the human brain through skill chips for implanting new knowledge, mastery of human cognitive skills through deductive reasoning is still a distant reality; technologically augmented physical proficiencies, however, are a dream already realized. It is probably imperative for society to assert, to those charged with creating new technologies, the appropriate social responsibility. Questions like how accessible enhancements should be made, or how a society functions where everybody is an optimized self, are yet to be answered. The cultural implications of such questions hold the key to understanding the future of humankind.

Ghosts of Project’s Past

The Great Bed of Ware: An exercise in Interactive Narrative

Haunted Bed

The main challenge in this project lay in converting a primary storytelling piece, the museum artifact, into an engaging community conversation. While reading the text provided to us before the project, one part that caught my eye was what Lewis Mumford (1960) said about artifacts: ‘From late Neolithic times in the Near East, right down to our own day, two technologies have recurrently existed side by side: one authoritarian, the other democratic’. Most items in a museum come laden with rich history and cultural significance that may now be obsolete. This sense of alienation (in time and culture) is heightened by further encasing the artifacts in glass boxes, thereby creating distance between the observer and the object. To induce an interaction stemming from the narrative of the object was an exciting prospect. The main question was how to make visitors curious about the object. Curiosity has been recognized as a critical motive that influences human behavior (Loewenstein, 1994). Playing on this idea and the idea of Gestalt psychology, we devised an interaction that was based not only on the curiosity of the person but also on his/her beliefs. The artifact that we chose was the Great Bed of Ware. The bed as a concept sits on the border between the realms of the physical and the spiritual – a sort of plane of transference between two states of being.
‘Symbolic anthropology’ was a term that I came across while researching ghosts and why people believe in ghosts or spirits. It views culture as an independent system of meaning deciphered by interpreting key symbols and rituals (Spencer, 1996). The major premise governing symbolic anthropology is that ‘beliefs, however unintelligible, become comprehensible when understood as part of a cultural system of meaning’. As Roland Barthes (1957) puts it, myth is a metalanguage: it turns language into a means to speak about itself. We used technology as a metalanguage for the construction of our narrative. The interaction takes place in the ambiguous space between the real and the hyper-real and uses the observers’ personal experience as an interface to gauge the success of the interaction. Thinking back, I don’t suppose I would change anything about the designed interaction; it was a perfect blend of technology and spiritual beliefs. An extension of it, though, could have been a deeper study of the semiotics and symbols found on the bed. Not only was the bed marred with carvings and amulet stamps from its various users, the wooden ornamentation also suggested the origins and the aspirations of its maker. It would have been interesting to study the four-poster bed as a concept of private space within the four walls of a room.

The bed, with its illustrious past, had played host to many a party of men and women spending the night on it at one time. Stories emerged of the secret life of the bed as a host for orgies and sexcapades. During the initial brainstorming and research for the project, we came across a very interesting story about the bed devoid of any such sensual connotation. It was believed that when the maker of the Great Bed of Ware presented the bed to King Edward IV, the king was so impressed that he blessed the artisan with a lifetime of compensation. But soon after the death of the maker, the bed found itself traveling through inns. So enraged was he that his masterpiece (meant for royals) was now being used by lowly commoners, that his ghost was said to haunt the sleepers of the bed. Users complained of being thrown off the bed, of being tickled, and of waking up with scratch marks all over them. Though not much truth can be found in this story, we really liked the idea of this historic piece being haunted. It is the sort of expectation that you have about old items.
The bed has been a celebrated piece within the museum (Victoria & Albert Museum, London) because of its grandeur and stature. We noticed many tourists taking pictures of themselves with the bed as a souvenir of their travels. Taking our idea of the unexplained forward, we decided to use a camera lens as an extension of the human eye – photography as a mode of documentation. A simple image taken in front of the bed reveals more than what is visible to the naked eye. A photograph is always invisible; it is not it that we see (Barthes, 1980).

The bed as a metaphor for the twilight zone

Through our project, we wanted to question the idea of technology as an extension of the human body. By McLuhan’s (1964) theory, the pencil becomes an extension of the hand, the wheel an extension of the legs, and the camera an extension of the human eye. To challenge the idea we took to physics. Scientifically, the human eye can see only a small portion of the light spectrum, between 390 and 700 nm. Light frequencies too high or too low for humans to see are ultraviolet and infrared, which lie just beyond the violet and red ends of the visible spectrum respectively. CCD cameras, on the other hand, are able to ‘see’ outside of the human visible spectrum. Using this phenomenon, we devised a method by which a shadow is cast onto the bed using infrared light. To passers-by this light and the resulting shadow remain invisible, devoid of any interaction. But as soon as the user takes a picture or views the object through the camera lens, a shadow appears on the bed. What we hoped to achieve through this was a feeling of surprise, of momentary confusion, of awe. For observers who believed in spiritual existence, it was a sighting, and for people who didn’t, it was a way of dismissing the theory. We really wanted the element of playfulness to be present in the interaction: the unexplained intrigues people, and humor is a good way of diluting the seriousness of the matter. Documentation of the project happened mainly through photographs taken via mobile devices and tablets. A part of the simulation was created via animation. Documentation of the creative processes took place in the form of paper prototypes and experiments with lights of different intensities.

Since the main idea of the project (that of ghost imaging) remained the same, different approaches were explored for concept representation. Playing on the belief that looking at oneself in the mirror right out of bed leads to the soul exiting the body, we planned to devise an interface which, when a person took a ‘selfie’ in front of the bed, would insert an inverted image of the person’s face into the picture. This was to be done via a positioning sensor and face substitution, but it seemed a far stretch from the actual narrative of the Bed of Ware. The other approach was to have infrared projections onto the bed simulating the existence of the spirit. But since projectors do not emit infrared light, we resorted to looking at other sources. The development of this idea had its humble beginnings with two 5 W infrared LEDs borrowed from Nicolas; the light given out by them was negligible. We realized we needed an infrared bulb of high intensity, and thus began our week-long search for an infrared light source. Though the production of infrared light is not uncommon, its main usage is as a heat-producing, pain-relieving device in the medical field. Many online purchases and returns later, we decided to use a CCTV camera with night vision as our source of infrared light. Equipped with a dark room, a night-vision CCTV camera, a paper stencil of an Edwardian male face and a mock-up of the bed in super-white glossy paper (we realized the shadow cast on any other surface was faint; the light was being absorbed by the surface), we began our experiments with invisible light. After some initial hiccups in gauging the right distance between the light, the stencil and the bed, we were able to achieve a workable middle ground.
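For what it is worth, the abandoned ‘selfie’ idea described above can be sketched with off-the-shelf computer vision tools. The snippet below is only an illustration of the face-substitution step using OpenCV’s bundled Haar cascade; it is not the Max patch or positioning-sensor setup we actually discussed, and the input filename is hypothetical.

import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def invert_face(image):
    # Return a copy of the image with the first detected face flipped upside down.
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    out = image.copy()
    for (x, y, w, h) in faces[:1]:
        out[y:y+h, x:x+w] = cv2.flip(image[y:y+h, x:x+w], 0)   # 0 = flip vertically
    return out

if __name__ == "__main__":
    frame = cv2.imread("selfie.jpg")        # hypothetical photo taken in front of the bed
    if frame is not None:
        cv2.imwrite("selfie_inverted.png", invert_face(frame))

A production version would need proper blending and live camera input, which is part of why the infrared approach felt more achievable in the time available.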

Paper prototype of the Great Bed; experiments with invisible light

The intention behind this narrative project was an organic interaction that compelled users to look at the artifact in a new light (quite literally so!). Though augmented reality would have been a good approach for this project, we wanted to produce a piece of work that was achievable given the time frame and expertise. Early on, the idea of weaving the infrared LEDs into the fabric of the sheets, hiding the electrical circuit, was discussed and later dismissed, since it over-complicated the project. We also debated having the light source dangle and oscillate under the canopy of the bed, so that when captured as a still image it would not have distinct outlines but be a cloudy blur, just as ghosts are generally perceived to be. The brief called for the use of a tablet device in front of the artifact, but we wanted the interaction to be as organic and spontaneous as possible. Though a tablet fixed in front of the artifact warrants immediate curiosity from the observer (plaques and information boards are a common sight in museums), it could result in the loss of the ‘wow’ factor: the audience would then expect something to happen. The idea was to keep the interaction as organic as possible by concealing the technology and having the spiritual beliefs of individuals act as a layer of interface.

The major setback of our project was the unavailability of infrared light bulbs. When looking for the bulb in stores around London yielded no results, we ordered a 250 W bulb (which we’d presumed would be strong enough) from the Lightbulb Company, but that turned out to be an infrared heat bulb. We then got a CCTV camera with night visibility up to 8 meters; unfortunately its LEDs were not strong enough to produce clear images. Some research and consultation later, we found a CCTV camera with 48 LEDs that had visibility up to 40 meters. This was the strongest source of light we found. Though this solved the problem partially, the light emitted needed to be collected and concentrated to achieve the maximum result. In keeping with our lo-fi approach to the whole project, this was prototyped with a piece of mirror; a convex lens or a Fresnel lens would have been the ideal solution. Given the progress and constant setbacks in the project, we debated going back to the initial idea of using a Max patch to create mirror ghost images of the users. But the idea of using infrared light and creating an ambiguity between existence and non-existence (a technology-induced twilight state) seemed the better one: using science to draw a fine line between representation and abstraction. As Gary Davis described John Smith’s work, ‘Smith signals a chiefly associational system, which deftly manipulates the path of our expectations.’ Quintessentially, this is the same reaction we’d hope for from our audience.

The first points of reference were ectoplasm photographs (Richet, 1923) and Pepper’s Ghost. John Henry Pepper (17 June 1821 – 25 March 1900), who popularized the effect of creating a ghostly image in a 3D space using Plexi-glass and lighting, was a good starting point. Ectoplasm is said to be a substance or spiritual energy ‘exteriorized’ by physical mediums; though the existence of genuine cases is debated, this sort of photo-manipulation served as an early idea and influence. The Image Fulgurator by Julius von Bismarck was an excellent study for this project. It intervenes when a photo is being taken, without the photographer being able to detect anything; the manipulation is only visible in the photo afterwards. Though the idea was very similar, the technology used was different. Audun Mathias Øygard’s experiments in real-time face substitution were another body of work that influenced the outcome of this project. The works of Hellicar and Lewis, who make the audience an active part of the interaction, greatly intrigued me; their projects like ‘Hello Cube’ and ‘Night Lights’ have a curiosity-inducing playfulness that makes the outcome more engaging. Created by Thyra Hilden and Pio Diaz, the ‘Forms of Nature’ chandelier is a beautifully designed bundle of white tangled branches, casting shadows on the walls that look like forest trees.

If there were anything I could change about the project, it would be to trigger the infrared shadow just as the picture is taken, much like the Image Fulgurator (Von Bismarck, 2008) – to have the intervention be seen not through the camera lens but only after the picture is taken. That would have added another layer of ambiguity to the project. There is a certain romanticism about finding something hidden in a photograph, something you hadn’t seen before. Another change that could take this project a few notches up is the use of augmented reality: using the tablet device to allow a digitally enhanced view of the artifact.