In an interview, Neil Harbisson (2013) said, “It’s a very exciting moment in history. Instead of using technology or wearing technology constantly, we will start becoming technology.” While it is a fascinating thought, a question arises in my mind: if we become technology, what then becomes of us?
Is the relationship between humans and embedded technologies an optimistic allegiance or an unexamined techno-fetish?
This thesis explores the relationship between humans and technology and how they shape each other. We have augmented our abilities, our wisdom and our lives with tools since prehistory. But digital technology has crossed a threshold and now threatens our very sense of being. In light of the recent technological transcendence of our physical and biological limits, the question of the distinctiveness of the human species has arisen. The developments brought about by converging nanotechnology, biotechnology and information technology within the field of cognitive sciences have ushered in new sub-species of humans with expanded abilities. Or are they even humans? While the realm of ‘cyborgs’ is upon us, I ask: in the relationship between man and machine, where does one stop being a human and start becoming a machine? And where, if at all, does one limit the technological determination of a man?
Technology: the weapon of choice
Harder, better, faster, stronger
The word technology comes from the Greek words ‘techne’ and ‘logos’. ‘Techne’ means art, skill, craft, or the manner/means by which a thing is gained. ‘Logos’ means the utterance by which inward thought is expressed. Technology in its true sense, then, is words or discourse to rationally realize certain valued ends. The definition of these ‘valued ends’ is what unsettles us.
Undeniably, man’s relationship with tools and technology has been longstanding. We first learned to use stone tools, then bronze and later iron, until eventually complicated machinery emerged. Advances in medical sciences and digital technology have enabled us to be stronger, live longer and think faster. What started with the industrial revolution has resulted in the advent of integrated digital technology.
“Our appropriation of objects and use of tools allowed us to walk upright, lose our body hair, and grow significantly larger brains. As we push the frontiers of scientific technology, creating prosthetics, intelligent implants, and artificially modified genes, we continue a process that started in the prehistoric past, when we first began to extend our powers through objects.”
In his book The Artificial Ape, archaeologist Timothy Taylor (2010) raises an interesting debate on how our relationship with objects helped the weakest ape evolve into the humans that we are today. Evolution, then, is not by virtue of adaptability and natural selection but by virtue of technology. He suggests we first departed from our genetically fixed behavior patterns, which led to our ever-increasing brain capacity and hence more innovation. In the evolution of man, technology is but a catalyst.
Technology as Extensions of the Human Body
To further the argument, in his famous book Understanding Media – The Extensions of Man, Marshall McLuhan (1994) depicts technologies as extensions of humanity.
“It is a persistent theme of this book that all technologies are extensions of our physical and nervous system to increase power and speed. Again, unless there were such increases of power and speed, new extensions of ourselves would not occur or would be discarded.”
McLuhan claims weapons like bows and knives are extensions of hands and nails, while clothing is an extension of the skin. Electronic media are analyzed as extensions of the information-processing functions of the central nervous system. In his own words, the human in this digital age is ‘an organism that now wears its brain outside its skull and its nerves outside its hide’. In essence, then, the Internet is an extension of our mind and collective consciousness. The idea resonates with that of the ‘hive mind’, perhaps beyond the sensory and motor control of humans. McLuhan also envisioned an era in which human intelligence and creativity would be automated and translated into information functions performed by machines. Perhaps we can attribute the now foreseeable future of cyborgs and trans-humans to this ideology.
Long before Marshall McLuhan wrote his book Understanding Media – The Extensions of Man, philosopher Ernst Kapp published Grundlinien einer Philosophie der Technik (1877), a work on the philosophy of technology. In it, Kapp argues that ‘humans unconsciously transfer form, function and normal proportions of their body to artifacts’. According to him, form follows function; therefore, for two things to be functionally similar, they must also be morphologically similar. The bent finger becomes a hook, the hollow of the hand becomes a bowl, the human arm becomes a shovel, and so forth.
While Ernst Kapp emphasizes the physical form of the tool, Marshall McLuhan highlights functional extensions. Amplificatory or complementary, I believe we are extended through our technology.
Drawing attention to the debate of what is human, in this chapter I set out to explore how we codify, classify, and ratify what actually makes us human. How has the definition of being human changed through the ages? We have been dealing with this age-old dilemma, an unending debate: Alan Turing worked to define exactly what proves that a certain set of behaviors is produced by a human and not by a sophisticated machine. Longer ago still, the Spanish Catholic Church debated whether the natives of America who were caught in the slave trade were actually humans (and had souls), or whether they could be treated like any other animal or automaton.
Biologically speaking, genes carry much of the information needed to codify a human. But what does the genome really tell us? There is surprisingly little genetic difference between humans and chimpanzees. The DNA sequence that can be directly compared between the two genomes is almost 99% identical. When DNA insertions and deletions are taken into account, humans and chimpanzees still share 96% of their sequence (National Human Genome Research Institute, 2005). If we are so similar to our closest relatives, where does the difference lie? In anatomical changes, argues Timothy Taylor in The Artificial Ape (2010). After the switch to an upright posture, probably the biggest single anatomical change on the journey from apes to humans was the weakening of the jaw. In apes, the jaw is large and protrudes well beyond the nose. It is attached by muscle to a bony ridge on the top of the skull and is many times stronger than a human jaw. Recent genomics research has shown that a large mutation about 2.4 million years ago disabled the key muscle protein in human jaws. The ape brain could not grow because of the huge muscle load anchored to the skull’s crest, and apes cannot articulate speech-like sounds because of the clumsy force of their jaws. What brought about this change? The discovery of stone tools and cooked food. This mutation allowed the increase in human brain size and the acquisition of language. Interestingly, though, it has been observed that over the last 30,000 years there has been an overall decrease in brain size (Discover Magazine, 2011). Maybe this is because we can outsource our intelligence – we do not need to remember as much as a Neanderthal. Maybe humans are going to continue to get less biologically intelligent. But will that make us lesser humans?
Natural Selection vs. Mutation
Human evolution is a subject that greatly assists in the understanding of what is human. Charles Darwin’s theory of evolution by natural selection works when organisms change over time as a result of inherited changes in physical or behavioral traits. Changes take place over the course of several generations – a ‘microevolution’. But natural selection is also capable of creating entirely new species. Certain changes can be initiated at the genome level, causing chemical changes in DNA replication. Such mutations can even be deliberately induced in order to adapt rapidly to changing environments. They can also be easily negated if desired: think of in-vitro fertilization and selective gene pools. It has become possible to isolate and manipulate the units of biological change, genes, outside the control of sexual reproduction. Prominent actress Angelina Jolie recently underwent a preventive double mastectomy because she had a BRCA gene mutation (a BRCA gene mutation makes the risk of breast and ovarian cancer incredibly high). While incredibly brave, her decision throws open questions regarding the right of a human to an open future.
Peter Forbes of The Guardian (2010) attributes this to an emerging Grand Universal Theory of Human Origins. He claims that proto-human beings, through innovative technologies, created the conditions that led to a rapid spread of new mutations. It is quite the radical thought in light of the accepted idea of evolution through natural selection. Technology allows us to overcome any biological deficits we might have. In the past, we lost our sharp fingernails because we had cutting tools; we lost our heavy jaw muscles thanks to stone tools. These changes reduced our aggression, increased our physical dexterity and made us more civilized. And unlike other animals, we humans don’t adapt to environments; we adapt environments to us.
Rise of the cyborgs
“Humans are artificial apes – we are biology plus technology.” – Timothy Taylor (2010)
The history of human self-improvement includes deeply troubling movements such as eugenics. Most eugenic techniques involve discouraging unfavorable traits by preventing people who carry them from breeding, and encouraging favorable traits by ensuring that those who carry them breed together. After enabling human evolution through assistive technology outside the human body, the time is ripe for the current scenario of embedded technology. Prosthetics as a mode of body augmentation is not new. Humans have long benefited from pacemakers, artificial hearts, prosthetic limbs, hearing aids, and the like. Recent developments in bioelectronics have brought in technologies that interface with the human nervous and other biological systems at a more physiological level. Nanotechnology may be able to effect biological changes at the intracellular level too, causing extraordinary changes in human biological structure. But the question arises: how much of a human being could you replace and still preserve their essential humanity?
Neil Harbisson (2013), the first officially recognized human cyborg, overcame his inability to see color by ‘hearing’ color. Harbisson was born with a condition called achromatopsia, which means he sees everything around him in shades of grey. His head-mounted ‘eyeborg’ attachment converts the colors around him into sound waves, which are transmitted to his inner ear via a vibration mechanism on the back of his skull. Interestingly, his visual prowess extends far beyond the human visible spectrum, and he can perceive infrared as well as ultraviolet wavelengths. What is more intriguing is that, in an attempt to develop the device further, he plans to integrate it entirely with his bodily functions, so as to draw power from his own human physiology.
“I have like a USB-like connector that I put at the back of the head which allows me to plug myself in to the mains. I take three hours to charge myself and then I can go usually three or four days, but the aim is not to use electricity. One of the next stages is to find a way of charging the chip with my own body energy, so I might be using blood circulation or my kinetic energy – or maybe the energy of my brain could charge the chip in the future. That’s one of the next things; to be able to charge the chip without depending on any external energy.”
Dancer and choreographer Moon Ribas (2008) has an attachment on her arm which allows her to detect earthquakes. It is a chip connected to her phone, which collects seismic data from the Internet; the receiver in her arm vibrates in reaction to it. In her performances, the dance series ‘Waiting for Earthquakes’, she moves with the intensity of the earthquake.
“It’s a collaboration with the Earth, a choreography with our planet and my body, which I communicate to the audience.”
Bionic research has enabled cochlear implants and retinal implants, which, unlike hearing aids, are not bulky and cumbersome but minute intelligent devices embedded in the human body to amplify sound or simulate vision. Cochlear implants stimulate nerves in the ear, helping deaf people hear, while retinal implants in the eye could provide a crude form of vision for people with certain forms of blindness. While such prosthetics are widely prevalent, the acceptability of such augmentations is debatable. The controversy surrounding Oscar Pistorius, long before his murder trial, was that of him competing in the ‘able-bodied’ Olympics despite being a double amputee. Many signed a petition asking the IOC to ban Oscar Pistorius from the Olympics, arguing that his use of artificial aids was equivalent to using performance-enhancing drugs.
Steve Mann (2012), the father of wearable computing, was recently assaulted at a fast food chain in Paris for wearing a digital recording device. His ‘mediated reality’ eyewear not only enables him to augment the dark portions of what he is viewing while diminishing the amount of light in the bright areas, but also extends his vision beyond the usual human spectral bands. In another incident, a woman was attacked in a San Francisco bar for wearing Google Glass (Dezeen, 2014). Athlete and double leg amputee, the ‘super-abled’ Aimee Mullins can change her legs depending on her (desired) height, speed and capabilities from a selection of twelve prosthetic limbs. Aimee found herself in the eye of a controversy when she walked the ramp for Alexander McQueen in 1998; McQueen was accused of turning his fashion show into a freak show. Speaking of the experience, she says, “I started to move away from the need to replicate human-ness as the only aesthetic ideal.” Technology is accepted as long as it is restorative, getting you back to what is considered normal. As soon as there is technology to enhance, there is a fear that no one fully understands. In the words of Gregor Wolbring (2010), un-augmented human beings function in the realm of the ‘already and always disabled’.
Professor Steve Fuller (2011) speaks of the societal acceptance of such ‘humans’ with embedded wearable technologies. Fuller argues that the pursuit of enhancement is based on a need ‘to create some distance between us and the other animals’. This human ‘techno-physio evolution’ (Fogel, 2011) has so outpaced traditional evolution that humans today stand apart not just from every other species, but from all previous generations of Homo sapiens as well.
Philosopher Max More (1995) and futurist Ray Kurzweil (2005) have long advocated the trans-humanist movement: a post-human condition in which humanity is replaced by the next stage in evolution, a human-machine hybrid. They even envision uploading human consciousness into a machine so we can ‘live’ forever within computer systems as networks of information. This being is a hybrid of flesh and steel, neurons and wires; a human being transformed into a machine. Believers in the trans-humanist movement want human consciousness to continue beyond biological death, within technology. Others do not believe that this will ever happen: the human brain is complex and we will presumably never understand it completely, so the probability of creating a machine that fully matches human characteristics is slim.
Miguel Nicolelis (2012) has worked on brain-machine interfaces, trying to isolate brain intention from the physical domains of the body. Working with a monkey named Aurora, he was able to read the brain waves that enabled the monkey to move a robotic arm, by recreating the same brain signals she would produce while moving her own arm. The monkey’s brain had incorporated the artificial device as an extension of her body. What he was trying to achieve through this study was a complete liberation of the brain from the physical constraints of the body.
“The model of the self that Aurora had in her mind has been expanded to get one more arm.”
A strong criticism of this movement comes from Paul Virilio (1994), who warns us about the consequences of our increasing dependence on technology in his book, The Vision Machine. He claims that this ‘techno-scientific fundamentalism’ is an antithesis to life. He argues that as a result, our sense organs will atrophy and we will degenerate into neurologically simple organisms.
“The will to power science without a conscience will pave the way for a kind of intolerance yet unimaginable today precisely because it will not simply attack peculiarities of the species like sex, race or religion. It will attack what is alive, ‘natural’ vitality finally being eliminated by the quasi-messianic coming of the wholly hyper-activated man.” – Paul Virilio (1995)
In the wake of exponential technology, speculations abound that this may create a new rift, just like the race and wealth divides. A new kind of differentiation will form between those who have technological enhancements and those who are just flesh and blood. What would happen to society when intelligence- and strength-boosting technologies become easily available? Should everyone then have the right to boost his or her intellect?
While Paul Virilio (2010) insists this may give rise to a new kind of colonialism, what intrigues me is his insistence that the quest for extra-terrestrials is man’s quest for super-humans.
“As soon as you create the idea of the super-man, you discredit, you downgrade and you degrade a kind of man. In super-racism you would find all over again the foundations both of colonialism and of racism and of xenophobia, but on a cosmic level I would say – hence the idea that the extra-human is the future of the extra-terrestrial. And that the search for little green men was not at all science fiction, but the forerunner of the search for a superior man. Simply, since nobody dared to be part of Nazi eugenics, we went to outer space and opted for little green men.”
Professor Steve Fuller of Warwick University (2011), on the other hand, argues that the human use of technology resonates with the biblical notion that we are created in ‘the image and likeness of God’ and hence are God-like in our powers. The extensive medical research dedicated to extending human life is the result of this ultimate fantasy of immortality.
To define is to measure. To measure is to limit.
Since the 1950s, scientists have been working on creating machines that could replicate the human brain’s ability to solve problems – machines with artificial intelligence. For such scientists, human behavior is no more than chemicals, electricity and levers; intimations of intention are mere aberrations in the understanding of this ‘biological machine’.
But human intelligence is complex. The human brain consists of tens of billions of nerve cells, neurons, which are interconnected in myriad directions. When the brain solves a problem, it sets a multitude of neurons in action, spreading out the calculations. Just as an assembly line breaks down the production process into smaller tasks, technology aims to break down the brain’s complexity into sets of probabilities. All experimentation, hence, revolves around the idea of the computer as a device for augmenting rather than supplanting human intellect.
We humans assume that we have attributes that machines do not: the ability to learn from experience and to form opinions independent of the commonly accepted whole. But as more and more technology is ‘hardwired’ into human organs, will this assumption hold true? German philosopher Martin Heidegger developed the theory that technology, as it gradually comes to dominate our world, forces us to see the world in a defined way.
Jaron Lanier (2011) sums up this dichotomy beautifully in his book You Are Not a Gadget: A Manifesto. Writing about music (and the advent of MIDI) and how such a transient medium has been locked in to technology, he says,
“The whole of the human auditory experience has become filled with discrete notes that fit in a grid.”
Will humans lose their value then, when the sum of the whole is valued higher than its parts?
When Garry Kasparov played his May 1997 rematch against Deep Blue and lost, it sent shockwaves around the world. A computer’s victory over the world’s best player meant that a human invention now held the upper hand over its creator’s most important characteristic – intelligence. Had we succeeded in reproducing the complex human brain in a machine? As much as I would like to believe it was the triumph of machine intelligence over human intelligence, the machine was, after all, a product of highly skilled humans developing a complex algorithm driven by endless probabilities. Computerized cognitivism, after all, is derived from connectionist and reductionist agentic theories. In effect, Kasparov was playing against another intelligent being wearing the façade of a machine.
While Ray Kurzweil deems Deep Blue the preliminary culmination of the development moving towards machine intelligence, Daniel Dewey (2014), a research fellow and expert in machine super-intelligence, talks about an ‘intelligence explosion’ in which the accelerating power of computers becomes less predictable and controllable. Nanotechnology and biotechnology break down the working hierarchy of organisms and place greater powers in smaller packages. These changes could start small but set off a chain reaction affecting everyone in the world. If the advance of technology overtakes our capacity to control its possible consequences, then we land ourselves in a reality in which computers are able to create ever more powerful generations of computers by themselves. Would that mean machines could also have consciousness? Would they be able to relate to each other, act morally, or set their own targets? Would it then shake the very foundation of our claim to a special position on earth?
In the same way that Deep Blue’s intelligence was limited to playing chess, other machines will also lack some aspects of human intelligence. We can, at best, create a machine that reflects our own understanding of the brain. But the brain will always be more complex than we can understand, and we will therefore never be able to make anything as complex.
“Who owns your extended eyes? Once we have surrendered our senses and nervous systems to the private manipulation of those who would try to benefit from taking a lease on our eyes and ears and nerves, we don’t really have any rights left.” – Marshall McLuhan (1994)
Donna Haraway (1991), a researcher in the History of Consciousness Department at the University of California, Santa Cruz, is interested in the links between humans and computers, and feels that cyborgs and trans-humans are metaphorical identities for human beings. They resist essentialism and help to display the fluidity and boundary-transgression of postmodern identities. According to her, machine intelligence is a way for the human race to finally free itself from the culture-nature split we have found ourselves in.
“Biological-determinist ideology is only one position opened up in scientific culture for arguing the meanings of human animality.”
What is normal and acceptable?
‘Normal’ is what most people strive to be, even though it is probably nothing more than a fiction. Our identities are neither fixed nor personal; they are ways in which we navigate the social environment. The concept of normalcy only entered the cultural imagination in the mid-nineteenth century, as a statistical average of human qualities (Cohen-Rottenberg, 2014). The notion of ‘average’ became synonymous with ‘normal’, and anyone who did not fit the demands of normalcy was deemed deviant and dangerous. As professor Lennard Davis (1995) puts it, ‘normalcy’ is the ultimate unattainable ideal, but there are powerfully internalized forces at work that keep it in place. These internalized forces serve to herd people into a very narrow idea of what it means to be human. Normalcy, in a way, then serves to highlight all of society’s prejudices and injustices.
With human evolution, the circle of empathy (Lanier, 2011) has expanded. What earlier constituted likeness to our physical being has expanded to family, to the immediate environment, to society and then to the nation. Unfortunately, there is no way to capture mathematically the higher cognitive capacities that are associated with ‘consciousness’ and ‘self-awareness’. This makes it difficult for us to draw the radius of our circle of empathy. Looking at this from the vantage point of our future, more enhanced selves, it would be better to draw a line than be perplexed by a blurry boundary.
The Future of Humanity Institute at Oxford focuses its research on questions that are critically important to humanity’s future: the potential social impacts of transformative technologies and cognitive enhancements, as well as issues related to the future of computing and existential risk. Directed by Professor Nick Bostrom, the institute frequently advises governments and other public bodies on the ethics, risks and consequences of such developments.
Government policies for transhumanism and cyborgs
The main ethical issue that concerns cybernetic augmentation is the distinction to be made between restorative augmentation and augmentation for enhancement. For example, contact lenses and glasses are commonly used and accepted means of corrective vision. But with the advent of the bionic eye, capable of surpassing the limits of human vision, augmentation for enhancement becomes easily possible. Situations like these initiate debates on ethical issues. One may say that augmentations such as these trespass into God’s domain – ‘playing God.’
The Center for Genetics and Society (2002) states in its summary statement:
“The new human genetic technologies present a threshold challenge for humanity. […] If abused they could open the door to a powerful new eugenics that would objectify human life and undermine the foundations of human society. […] The rapid development of these technologies has created a civil society deficit. Policy makers have not had time to understand and assess their implications.”
Nick Bostrom recently advised Obama’s Presidential Commission for the Study of Bioethical Issues (bioethics.gov, 2014) on subjects related to ethical considerations in cognitive enhancement. Discussions included how concerns about distributive justice and fairness might be addressed in light of the potential individual or societal benefits of cognitive enhancement. In South Korea, known for its cultural acceptance of robots, robot ethics (South Korean Robot Ethics Charter, 2012) is taken seriously. The government has appointed an expert committee to create a set of rules for relations between humans and robots. These rules prevent humans from misusing robots and vice versa; their primary focus, however, is to ensure the protection of human beings when robots (possibly) develop higher intelligence. The British government published a report, The Future of Identity, by the Department for Business, Innovation & Skills (2013) to evaluate the changing technologies that are shaping our society. It examines the impact that a hyper-connected society will have on crime prevention, healthcare, employment and education.
Interestingly, even though most countries levy regulations against human reproductive cloning and inheritable genetic modification, there are not many legal limitations on enhancements.
“Those that have or might have clearly beneficial uses. Some technologies or applications in this category raise social concerns, and their use should thus be regulated and controlled as appropriate. [e.g.] Genetically targeted drugs, somatic gene therapies, infertility treatments, stem cell research, embryo research […] Those that have few if any beneficial applications, and whose harmful impacts would be profound and irrevocable. [e.g.] Human reproductive cloning and inheritable genetic modification.”
– Center for Genetics and Society (2002)
While Paul Virilio (1995) may argue that technology is the death of our natural senses and the beginning of our reverse evolution into simple organisms, I think technology has only extended our senses beyond their natural capabilities. In doing so, if it facilitates the necessary intelligences and renders others vestigial, that is but a natural process. In the course of human evolution, as a result of our habitual bipedalism, toes became shorter, upper arms became weaker, lower limbs became longer and the jawline became parabolic, until at some point in time we stopped being apes and turned into humans. In essence, the emergence of the thumb was probably the beginning of this long relationship between technology and man. Maybe we were part of nature’s eugenics experiment. Yet here we are at this threshold, where artificial life algorithms simulate, in microseconds, what evolution and natural selection did for us over millions of years.
Owen Paepke (1992) attributes this rise in technological dependence to our economic independence. The saturation point in economic growth has resulted in our interest in ‘human growth’ – the re-engineering of the human species. Perhaps we can elevate ourselves from being victims of natural selection to masters of self-selection. After all, the search for human perfection is one of the oldest of utopian dreams. This utopian dream, though, has always confused me. On one hand humans want to reaffirm their position on earth as the superior, intelligent being, but at the same time they strive to create more powerful reflections of themselves. Humans have long been heralded as ‘the crown of creation’ but in reality have existed as a species that is weak, dependent, inadequate and incomplete. In the wake of its enhanced sub-species, such as cyborgs, transhumans or post-humans, the human becomes the powerful ‘creator’ Himself. Stemming from this formidable inkling, transhumanists provide a new interpretation of Darwin’s theory of evolution: the temporary variations of the living are no longer mutations in the traditional sense, but rather scientific constructs. While many argue that the rise of cybernetic organisms will be the death of humanity, our evolution from apes did not result in the termination of the species. Change and constant development are what drive this planet, let alone the human species. To me, the human cyborg represents a transitional species of sorts, before the human enters into total biological non-existence. If progression is hypothesized from the perspective of the increased information-processing power (read intelligence and brain size) of the species, then the organic construct will eventually lose to a more durable and efficient artificial construct. But what is artificial life without any human experience?
Echoing my thoughts exactly, Jaron Lanier (2011) writes:
“A computer isn’t even there unless a person experiences it. There will be a warm mass of patterned silicon with electricity coursing through it, but the bits don’t mean anything without a cultured person to interpret them.”
Alternatively, cyborgs could become a new dominant caste, forcing the rest of un-technologized humanity into serfdom. Or perhaps they might decide simply to eliminate us. We could soon be living the premise of William Golding’s book The Inheritors (1955), but in the next generation of evolution.
It is my personal belief that technology is a man-made construct, just like religion or language. While it had its humble beginnings in semantics, it has now reached an incomprehensibly complex plane. Just as at one point in time human history was driven by religion, the present is being driven by technology. It is, perhaps, the fear of death that has driven us to evolve new ways of conquering our existence. If Ray Kurzweil’s predictions are right, we will soon be able to upload our consciousness onto a machine body and possibly attain immortality. In philosophy and in theology, the body has always been just the medium for the expression of the mind or soul; the immortality that we are out to attain is that of our intellect, of our consciousness. In light of the ‘singularity’, are we ready to escape the body literally, like Kurzweil, or even philosophically, like Descartes (1637)?
The uneasiness that surrounds new, paradigm-shifting technologies is justified, and it has only been amplified by the exponential acceleration of technology over the last few years. While we strive to improve the human brain through the use of skill chips for implanting new knowledge, the mastery of human cognitive skills through deductive reasoning is by far a distant reality. Technologically augmented physical proficiencies, however, are a dream realized. It is probably imperative for society to assert, to those charged with creating new technologies, the appropriate social responsibility. Questions like how accessible enhancements should be made, or how a society functions where everybody is an optimized self, are yet to be answered. The cultural implications of such questions hold the key to understanding the future of humankind.