Artificial Intelligence and the Concept of the Self
Gary Kolb, PhD
August 15, 2016
Technology is advancing and developing at an astonishing rate, accompanied by changes in our individual and social structures. Chaos is an appropriate word for what seems to be happening in the world today politically, spiritually, and economically. There has also been a shift in the sense of security on both individual and national levels. Technology has infiltrated many aspects of daily life. It has changed everything from how a person develops a self-concept to how we communicate with others, and it has increased the potential for addiction. The development of technology that could support Artificial Intelligence (AI) has fueled debate among scientists, theologians, philosophers, and others, focusing on the possibility that computers (machines) will develop consciousness, which up to this point has been an exclusively human characteristic.
In the earliest stages of the study of Artificial Intelligence, very few people had more than the sketchiest idea of what was involved in the process (Boden, 1977, p. 3). In the beginning, most people were skeptical of the value of AI, and the lay person may have feared that machine models of the mind would alienate us from our proper humanity (Boden, 1977, p. 425). Now, philosophers, theologians, scientists, futurists, and inventors are all involved in the discussion. Much of the information in this article comes from the Gilder-Forbes Telecosm conference held in 2002. The participants included Ray Kurzweil, inventor and futurist; John Searle, philosopher; Michael Denton, biologist; Tom Ray, zoologist and theorist; and William Dembski, philosopher and mathematician.
Kurzweil was an advocate of AI and believed that the intelligence of non-biological entities (machines) will exceed human intelligence, including musical and artistic aptitude, creativity, physical movement through the world, and response to emotion, within the next century (2002, p. 12). He predicted that “by 2019, a $1000 computer will match the processing power of the human brain” (2002, p. 12). Kurzweil envisioned computers with the ability to understand languages and build models of the knowledge contained in written documents. He wrote that, ultimately, machines will be able to gather knowledge on their own from the physical world, collecting data from many sources and sharing it with one another (Kurzweil, 2002, p. 12). They will be able to read on their own, modeling and understanding what they have read.
According to Kurzweil, mastering the software of intelligence is a matter of “tapping into the blueprint of human intelligence by reverse engineering, or copying, the design of the human brain” (2002, p. 32). Scanning the brain involves mapping the locations, interconnections, and contents of all the somas, axons, dendrites, pre-synaptic receptacles, neurotransmitter concentrations, and other neural components, which can be accomplished with high-resolution magnetic resonance imaging, optical imaging, near-infrared scanning, and other non-invasive techniques (Kurzweil, 2002, p. 32). Researchers have already developed integrated circuits that precisely match the digital and analog information processing of neurons, and have “built a variety of integrated circuits that emulate the digital-analog characteristics of mammalian neural circuits” (Kurzweil, 2002, p. 36). Additional research has demonstrated “the potential for electronic neurons to precisely emulate biological ones” (Kurzweil, 2002, p. 36). This research started with functionally equivalent recreations of single neurons, which then evolved into clusters of neurons.
Qualities of human thinking can be combined with certain advantages of machine intelligence. “As we combine the brain’s pattern recognition methods derived from high-resolution brain scans and reverse engineering efforts with the knowledge-sharing, speed, and memory accuracy advantages of non-biological intelligence, the combination will be formidable” (Kurzweil, 2002, p. 39). Kurzweil explained that “within several decades information-based technologies will encompass all human knowledge and proficiency, ultimately including the pattern-recognition powers, problem-solving skills, and emotional and moral intelligence of the human brain itself” (2005, p. 8).
Kurzweil used the term Singularity to describe “a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed” (2005, p. 7). After the Singularity, people will be able to transcend the limitations of biological bodies and brains. By the end of this century, “the non-biological parts of intelligence will be trillions and trillions of times more powerful than unaided human intelligence” (Kurzweil, 2005, p. 9). The Singularity will represent the culmination of the merger of our biological thinking and existence with our technology, resulting in a world that is still human but that transcends our biological roots. Post-Singularity, there will be no distinction between human and machine or between physical and virtual reality (Kurzweil, 2005, p. 9).
Clark, who identified himself as a cognitive scientist, wrote that “the more I have learned about the brain and the mind, the more convinced I have become that the everyday notions of ‘minds’ and ‘persons’ are open-ended plastic systems fully capable of including non-biological props and aids quite literally as parts of themselves” (2003, p. 8). He claimed that as technology becomes more portable, pervasive, flexible, reliable, and increasingly personalized, our tools will become more and more a part of who and what we are (Clark, 2003, p. 10).
If technology develops in the ways that Kurzweil predicted, will machines also become conscious? Consciousness in machines will be a critically important issue in the development of machine intelligence in the twenty-first century. If consciousness is just a certain type of intelligent skill, for example, the ability to reflect on one’s own self and situation, then the issue is not difficult, because any skill, capability, or form of intelligence that one cares to define will be replicated in non-biological entities within a few decades (Kurzweil, 2002, p. 44). Any computational process sufficiently capable of altering or organizing itself can produce consciousness. However, the essence of consciousness is subjective experience, not the objective correlates of that experience (Kurzweil, 2002, p. 44). Will future machines be capable of having spiritual experiences? How can consciousness be measured? There is no objective test that can absolutely measure consciousness; fundamentally, it is not possible to penetrate the subjective experience of another entity with direct objective measurement. Only the correlates of subjective experience, such as behavior, are measurable (Kurzweil, 2002, p. 45).
Dembski, a philosopher and mathematician, pointed out that the debate about whether or not humans are machines has been going on for the last 200 years (2002, p. 98). The French Materialists of the Enlightenment believed humans were machines; La Mettrie, one of them, authored a book titled Man the Machine (Dembski, 2002, p. 98). Modern materialists hold the view that the motions and modifications of matter account for human mentality. Materialism has its faults, but it is a predictable philosophy: “If matter is all there is, then the mind must, in some fashion, reduce to matter” (Dembski, 2002, p. 99). “While Enlightenment philosophers may have thought of humans in terms of gear mechanisms and fluid flow, contemporary materialists think of humans in terms of neurological systems and computational devices” (Dembski, 2002, p. 99). Humans aspire to freedom, immortality, and the beatific vision. If, in order to find ourselves, we need to transcend ourselves, then a spiritual materialism is possible, since human aspirations are spiritual (Dembski, 2002, p. 100). Dembski stated that “human beings need to be transcended, not by going beyond matter, but by reinstating themselves in more efficient forms of matter, to wit, the computer” (2002, p. 100).
Dembski contended that we have come to view ourselves as machines, so it is no accident that our society looks for salvation in technologies such as behavior modification, psychotropic drugs, cognitive programming, and genetic engineering (2002, p. 103). However, the problem with machines is that they are incapable of sustaining what philosophers call substantial forms. “A substantial form is a principle of unity that holds things together and maintains its identity over time” (2002, p. 103). A machine configured in one way could just as easily be reconfigured in other ways. A machine is subject to constant tinkering and need not bear any semblance to past incarnations. What a machine is now and what it might be in the future are entirely open and discontinuous, whereas substantial forms maintain identity over time (Dembski, 2002, p. 104). “A machine is entirely characterized in terms of the constitution, dynamics, and interrelationships of its physical parts; ‘spiritual’ cannot refer to some non-physical aspects of the machine” (Dembski, 2002, p. 130). In addition, attributing spirituality to machines on the basis of future actions is equally problematic, since the only access to a machine’s future is through its present. Machines break and malfunction, and it is impossible to predict the full range of stresses that a machine may encounter and that may cause it to break or malfunction (Dembski, 2002, p. 105). Dembski proposed that “contemporary spirituality places a premium on religious experience and neglects the more traditional aspects of spirituality such as revelation, virtue, tradition, morality, and above all communion with a non-physical God who transcends our physical being” (Dembski, 2002, p. 106). Within traditional spirituality, we are aware of God’s presence because God has freely chosen to make his presence known to us. God cannot make his presence known to a machine by acting on it and thereby changing its state.
If a machine comes to awareness of God’s presence, it must be self-induced. “Machine spirituality is the spirituality of Self-Realization, not the spirituality of an active God who freely gives himself in self-revelation and thereby transforms the being with which he is in communion” (Dembski, 2002, p.106).
Searle, a philosopher, emphasized that actual human brains cause consciousness by a series of specific neurobiological processes in the brain, while a computer succeeds by manipulating formal symbols. The computer is not designed to be conscious or to duplicate the actual causal powers of the brain (Searle, 2002, p. 67). Searle pointed out that all of the advances made in technology are due to the ingenuity of programmers and engineers (2002, p. 64). “Increased computational power in a machine gives no reason whatever to suppose that the machine is duplicating the specific neurobiological powers of the brain to create consciousness” (Searle, 2002, p. 76). Technology enables us to build tools to do things we cannot do, or cannot do as well or as fast, without the tools (Searle, 2002, p. 96).
Denton, a biologist, proposed that if living organisms are analogous in all important respects to artificial mechanical systems, there are no serious grounds for doubting the possibility of “spiritual” machines. However, if living things are not machine-like in their basic design, if they differ in certain critical ways from machines, then none of the characteristics of living organisms, including intelligence, are likely to be incorporated in non-living mechanical systems (Denton, 2002, p. 80). He firmly believed that living things are more than the sum of their parts, even though most modern biologists view living things as analogous to machines, with the parts determining the properties of the whole: the mechanistic/reductionist approach. Denton wrote that there is more to reality than the material world and that biological organisms transcend physics and chemistry, agreeing with the vitalist perspective. While biological organisms may depend on lower levels, they cannot be reduced to them (Denton, 2002, p. 5).
Many biological phenomena can be reduced to mechanical explanations and, despite the fact that machines have become more life-like as technology has advanced, it is undeniable that living things possess abilities that are still without any significant analogue in any machine that has yet been constructed (Denton, 2002, p. 84). Living things have the remarkable abilities to replicate themselves and to change their forms and structures without any external guidance or control (Denton, 2002, p. 87). Organic systems have a unique order that is missing from mechanical systems: there is a reciprocal influence of all parts of the organic whole on each other and on the whole in which they function (Denton, 2002, p. 88). “The parts of machines do not undergo complex reciprocal self-formative interactions, but have essentially the same properties and form in isolation as they do in the mechanical whole that makes their design possible” (Denton, 2002, p. 94). The capacity for conscious self-reflection, together with morphing, self-regulation, self-assembly, and self-organization, represents a higher plateau of functioning that makes humans different from machines (Denton, 2002, p. 97).
Ray, a zoologist and evolutionary algorithm theorist, disagreed with Kurzweil’s theory that the entire organization of the human brain could be re-created on a neural computer of sufficient capacity (2002, p. 121). It is unfeasible to ‘copy’ a complex organic organ into silicon without losing its function (Ray, 2002, p. 121). Additionally, Ray argued that “the structure and function of the brain or its components cannot be separated” (2002, p. 122). The brain is a chemical organ with a broad spectrum of chemical communication that has evolved in exquisitely subtle and complex functionality based on the properties of these chemical systems. The brain is composed of a variety of relatively distinct, but densely inter-communicating subsystems.
“A metallic computation system operates on fundamentally different dynamic properties and could never precisely and exactly ‘copy’ the function of a brain, since the materials of which computers are constructed have fundamentally different physical, chemical, and electrical properties than the materials from which the brain is constructed” (Ray, 2002, p. 123). Ray accepted that the level of computing power needed to map a brain is likely to be reached in the future, but “no amount of computer power will be intelligent in a relevant sense unless it is properly organized and organization is a software problem, not a hardware problem” (2002, p. 124). He expected that “intelligences which emerge from the digital and organic media will be as different as their respective media, even if they have comparable computing performance” (Ray, 2002, p. 125).
Effects of technological advancement on the concept of self
Some of the predictions for the future of computers may be hard to imagine, but when cyberspace came into existence during the 1960s, our world changed forever. The internet has made new social experiences possible. Before the internet, self-concept was developed in reality: through face-to-face interactions with others, people learned about themselves. The interpersonal self results from the direct perception of the relation between the self and another person (Stern in Neisser, 1993). It forms from social interactions with other people, which provide objective information that is directly available to each. Technological innovations of the cyber age have altered fundamental processes of social interaction. Technology has altered perception, experience, and the sense of self (Goren, 2003, p. 487). People can gain a sense of who and what they are not just through experiences in reality but also through online experiences. Developing a self-concept is no longer solely based in reality; self-concept also develops in the fantasy of online encounters and relationships. Technology has provided the means for a person to present a fantasy version of himself or herself to others. Clark believed that technology can allow us to learn more about what really matters in the on-going construction of our sense of place and of personhood (2003, p. 115).
Kourosh stated that new technology provides greater opportunities for both growth and regression (2008, p. 105). The question arises: how might this adaptation change or modify an individual’s beliefs about who he or she desires to be? Is Artificial Intelligence the next step in modifying self-perception?
Is this concern valid, or could we become trapped in a transition into fantasy? Is this a new form of creativity, or a process of mesmerizing one into acceptance of a mysterious and unpredictable venture? Is this transition a natural evolutionary process of advancement, or a restrictive process of conformity? What is gained, and what is lost, in major adjustments to the perception of ourselves? Most importantly, are we addressing these questions today? Are the questions appropriate to future projections of research in technology?
Fantasy can become an addiction. The fascination with a mechanical device is intriguing and may instill thoughts of power or control in one’s life. Egan believed that the infiltration of fantasy has the capability to preoccupy the mind and could lead to an unhealthy venture into our reality (2008, p. 381). According to Weiss and Schneider, many technology-based activities have addictive potential, since they evoke feelings of extreme pleasure and satisfaction while serving as a source of profound, although temporary, distraction (2014, p. 135). They noted that the problem of addiction has always been driven by human technological advances. “Technology in all of its forms delivers an increasingly wider array of powerful experiences that are, for some, emotionally, psychologically, and/or physically unmanageable” (Weiss & Schneider, 2014, p. 135). Technology offers highly distracting, emotionally involving behaviors. The contribution of fantasy may very well change our perception of ourselves through repetitive and reinforced patterns of behavior over time. Might artificial intelligence, with its capacity to simulate human processing, its influence on thought and emotional patterns, and ultimately its ability to direct behavioral decisions, become a method for population control?
What is the future of Artificial Intelligence? Will there be fully non-biological brains that are copies of human brains? Will nanobot technology provide fully immersive, totally convincing virtual reality in which virtual people interact with humans? Will brain implants expand our memories and vastly improve all of our sensory, pattern-recognition, and cognitive abilities, as Kurzweil suggested (2002, p. 49)?
Are we becoming more intelligent, with new information available sooner, or are we becoming more stressed? Are we becoming more efficient and smarter with newly accumulated knowledge, or are we falling into a funnel of compliance for survival?
According to Plato, ignorance or error about reality is among the worst disasters that can befall us. From ignorance or error, many other pains and disasters follow, and fantasy contributes to that worst disaster (Egan, 2008, p. 321). Perhaps what Plato was advising was caution in our movement into the unknown and attention to its net effect on us.
References

Boden, M. (1977). Artificial intelligence and natural man. New York: Basic Books.

Clark, A. (2003). Natural-born cyborgs: Minds, technologies, and the future of human intelligence. New York: Oxford University Press.

Dembski, W. (2002). Kurzweil’s impoverished spirituality. In J. W. Richards (Ed.), Are we spiritual machines? Seattle, WA: Discovery Institute.

Denton, M. (2002). Organism and machine: The flawed analogy. In J. W. Richards (Ed.), Are we spiritual machines? Seattle, WA: Discovery Institute.

Egan, K. (2008, July). Conference: Achieving educational goals with imagination. Tract: d:27 Paper: d:776.

Goren, W. (2003). America’s love affair with technology: The transformation of sexuality and the self over the twentieth century. Psychoanalytic Psychology, 20, 487-508.

Kourosh, D. (2008). Video games: Play and addiction, a guide for parents. New York: Universal, Inc.

Kurzweil, R. (2002). The evolution of mind in the twenty-first century. In J. W. Richards (Ed.), Are we spiritual machines? Seattle, WA: Discovery Institute.

Kurzweil, R. (2005). The singularity is near: When humans transcend biology. New York: Penguin Books.

Nusselder, A. (2009). Interface fantasy: A Lacanian cyborg ontology. Cambridge, MA: The MIT Press.

Ray, T. (2002). Kurzweil’s Turing fallacy. In J. W. Richards (Ed.), Are we spiritual machines? Seattle, WA: Discovery Institute.

Searle, J. (2002). I married a computer. In J. W. Richards (Ed.), Are we spiritual machines? Seattle, WA: Discovery Institute.

Weiss, R., & Schneider, J. (2014). Closer together, further apart: The effect of technology and the internet on parenting, work, and relationships. Carefree, AZ: Gentle Path Press.