Abstract
Free will has been debated for centuries, yet for much of that time its scientific study was largely neglected for lack of concrete empirical evidence. Philosophers have been less reserved on the subject; there are countless examples of ideas put forth by some of history’s greatest minds, from Spinoza to Einstein and Alan Turing. Psychologists, by contrast, have historically avoided the intangible parts of the mind that cannot be studied easily (the concern of structuralism) and focused instead on the mind’s functional side (functionalism). Science is finally beginning to catch up, and new fields of study such as neuroscience are emerging. Psychologists and scientists are finding new ways of explaining how the mind works, using new technologies and taking approaches that were previously dismissed as too metaphysical or illusory. Computers and the digital revolution are also offering new insights, not just into the mind but into many other fields. The study of the mind has long been impeded by its taboo nature, but that is beginning to change, and the implications are phenomenal.
Keywords: Free will, Determinism, Will Wright, John Conway, Theology, Games and Numbers, Alan Turing, Turing test, Artificial Intelligence, Robotics
Do humans really have “free will”?
Since the dawn of time man has wondered about what he is, where he came from, and where he is going. In this quest he has created religion, sacrificial ceremonies, tombs and relics such as the Pyramids of Giza, and has put himself on other celestial bodies in search of the answer. It is a philosophic question going back as far as the proverbial chicken and egg. Information processing and technology have been developing at exponential rates since their inception, taking off rapidly at the turn of the last century. Everywhere you go somebody has a smartphone, laptop, or other device that can reach just about any piece of information or any person in seconds, from the Library of Congress and the fall of the Roman Empire to World War II and the Apollo moon missions. Just as agriculture gave birth to the exponential growth of the human population, and the industrial revolution gave birth to machines, the digital revolution gave birth to knowledge and information. This phenomenon was observed by Gordon E. Moore, co-founder of Intel, whose eponymous Moore’s law describes the exponential growth of computing power that results from the doubling of transistor density on an integrated circuit roughly every two years. It therefore seems inevitable that computers will surpass the processing power of the human mind, which has a wide array of implications. This possibility was first explored by Alan Turing, the “father of computer science,” who laid the groundwork for the first computers, several of which he worked on directly. Turing speculated that artificial intelligence would eventually advance so far that humans could hold conversations with computers without being aware of whether they were talking to another human. For this he created his formal Turing test, which measures the ability of a machine to mimic human behavior. Never in our history has man been so close to finding out whether he really controls his own fate, or whether everything is predetermined.
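As a rough back-of-the-envelope illustration of that doubling, here is a minimal sketch in Python. The two-year doubling period is the commonly cited figure, and the 1971 Intel 4004 with its roughly 2,300 transistors is my own choice of baseline, not something taken from Moore’s original paper:

```python
# Moore's law as a simple exponential: transistor counts double
# about every two years from some baseline chip.
BASELINE_YEAR = 1971      # Intel 4004
BASELINE_COUNT = 2_300    # transistors on the 4004
DOUBLING_PERIOD = 2       # years per doubling

def projected_transistors(year):
    """Projected transistor count for a given year under Moore's law."""
    return BASELINE_COUNT * 2 ** ((year - BASELINE_YEAR) / DOUBLING_PERIOD)

for year in (1971, 1991, 2011):
    print(year, f"{projected_transistors(year):,.0f}")
# 1971 -> 2,300; 1991 -> ~2.4 million; 2011 -> ~2.4 billion,
# which is in the right ballpark for real chips of each era.
```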
People consistently turn to churches, religious institutions, and other ecclesiastical sources of spirituality to find answers about the meaning and purpose of life. Consciousness, or vijñāna, is a central theme in Buddhism, whose monks meditate to reconcile with the ego. The Bible of Western Christianity teaches that our fate is not predetermined and that the outcome is ultimately our choice (King James Bible, Deut. 30.19). In contrast, many of the world’s religions teach of an omniscient, omnipotent, and omnipresent deity or divine being. A follower of Buddhism will walk the many paths to enlightenment, eventually reaching the conclusion that we are all one consciousness.
Einstein famously proclaimed that “God does not play dice with the universe,” to which a colleague responded by telling Einstein to stop telling God what to do with his dice. Einstein did not believe in a personal god, but he believed in the order of the universe, in the manner of Spinoza’s God, who reveals himself in the orderly harmony of what exists. Benedict de Spinoza was a Dutch philosopher of the 1600s who wrote many controversial works, including his magnum opus Ethics; he was considered a rationalist and is credited with helping to propagate the Enlightenment. In Ethics, Spinoza argues, among many other proofs, that those who attribute free will to God are mistaken and that the two notions contradict each other (Spinoza & White, 1923). Free will has been debated by philosophers for centuries, from the grandfather paradox to the philosophical zombie.
Researchers at Cornell University and other institutions have been studying consciousness using technologies such as magnetic resonance imaging, or MRI scans (Owen et al., 2009). MRI and related technologies are helping us better understand which portions of the brain are responsible for which tasks and exactly how it operates, something we still know relatively little about. In one study, individuals were asked to raise either their left or right hand, at random, at the sound of a beep; activation of specific regions of the brain indicated that the brain was committing to a decision before the participants consciously chose which hand to raise, as if they were not really making the decision at all and their sense of choosing were merely an illusion. Such findings consistently show that the brain can arrive at a decision before we are aware of choosing, which raises the question: which part of the brain is doing the choosing? Most scientists and neurologists agree that the cerebral cortex gives rise to human intelligence, setting us apart from other species. After all, man can build complex machines and societal structures, and has an advanced capacity for emotion and thought that other species on the planet possess only in primitive form. It is also believed that the cortex may be the center of our consciousness, the active, aware part of our brains.
Alan Turing was a famous computer scientist whose work in the 1930s and 1940s effectively created the field of computer science, laying out foundational concepts such as algorithm and computation. His theoretical machine, now known as the Turing machine, is considered the basis of the modern theory of computation and the conceptual ancestor of the digital computers we have today; Turing later worked on several real machines himself. During World War II he significantly improved the ability of British and Allied intelligence to decrypt Nazi communications by automating much of the process, replacing human cryptanalysts with machines (Muggleton, 2014). After the war Turing began designing a logical computer, which would have become the first digital computer, but he was rebuffed by his colleagues. Turing then went on to Manchester University, where he helped lay the foundation for the field of artificial intelligence.
John Conway, Professor of Mathematics at Princeton University, is known to computer scientists and mathematicians alike for his work in game theory. Conway essentially solved a problem posed by John von Neumann in the 1940s by creating a self-replicating digital cellular automaton called the Game of Life, published in the October 1970 issue of Scientific American (Gardner, 1970). The game is implemented as a computer simulation. Starting with a rectangular grid, the player marks each cell as one of two states, alive or dead. The simulator then applies a few very basic rules at every discrete step, simulating overcrowding, underpopulation, and reproduction. The result was astonishing: a computationally cheap and efficient evolutionary model that could be run on a standard desktop. Because the rules are deterministic, the simulation can be paused, saved, resumed, and restarted while yielding the same results every time. I propose a hypothetical: what if it were possible to build a computational model that simulated our universe down to the exact superposition of every atom? While that may seem beyond the capabilities of current computing power, quantum mechanics sets the stage for revolutionary advances in computing efficiency.
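To make those rules concrete, here is a minimal Python sketch of one generation of the Game of Life on a bounded grid. The `step` function name, the grid size, and the starting pattern are my own illustrative choices; only the birth and survival rules encoded in the comments come from Conway’s game:

```python
def step(grid):
    """Compute one generation of Conway's Game of Life on a bounded grid."""
    rows, cols = len(grid), len(grid[0])

    def live_neighbors(r, c):
        # Count the live cells among the (up to) eight surrounding cells.
        return sum(
            grid[r + dr][c + dc]
            for dr in (-1, 0, 1)
            for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0) and 0 <= r + dr < rows and 0 <= c + dc < cols
        )

    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            n = live_neighbors(r, c)
            # Birth: a dead cell with exactly 3 live neighbors comes alive.
            # Survival: a live cell with 2 or 3 live neighbors stays alive.
            # Everything else dies of underpopulation or overcrowding.
            new[r][c] = 1 if n == 3 or (grid[r][c] == 1 and n == 2) else 0
    return new


# A "glider," the classic pattern that propels itself across the grid.
grid = [[0, 1, 0, 0, 0, 0],
        [0, 0, 1, 0, 0, 0],
        [1, 1, 1, 0, 0, 0],
        [0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0]]

for generation in range(4):
    grid = step(grid)
    print("\n".join("".join("#" if cell else "." for cell in row) for row in grid), "\n")
```

Because `step` is a pure function of the current grid, rerunning it from the same starting pattern always reproduces the same history, which is exactly the determinism noted above.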
In classical computing everything breaks down into binary: the JPEG image you save from your phone or digital camera on a family vacation, the popular song a teenager downloads from iTunes, the ASCII or Unicode you enter into a text file or word document, even the buttons you press on your keyboard. All of it reduces to binary that is transmitted and interpreted by the various components of your hardware. Binary is one of the most basic ways of representing information digitally. A single bit is either a 0 or a 1 and is handled by a semiconducting transistor; think of a light switch, where the light is either on or off. More transistors on a chip thus mean more memory and processing power. In quantum computing, however, a quantum bit, or qubit, can be a 0, a 1, or a superposition of both: we do not know the definite state of a quantum system until we observe it and collapse its wave function. This has profound implications for cryptography, concurrency, robotics, and artificial intelligence. The media has been reporting on cars that can drive for you using global positioning systems, and when computer-controlled enemies, or “bots,” attack you in video games, they usually perform pathfinding to get from point A to point B. For a computer to find its way out of a maze it must systematically explore the possible routes, backing out of occasional dead ends, until it finds a way through, and it may have to keep searching if it needs the shortest possible path rather than just the first one it finds (see the sketch below). These tasks get even more fundamentally complicated in problems such as the traveling salesman. Because of quantum superposition, scientists believe it is theoretically possible to develop computer chips that effectively explore all paths at the same time and solve such complex problems dramatically faster.
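Here is a minimal sketch of that kind of maze search in Python, using breadth-first search, which explores routes level by level and therefore returns a shortest path the first time it reaches the goal. The maze layout, the function name, and the “#”-for-wall encoding are my own illustrative choices:

```python
from collections import deque

def shortest_path_length(maze, start, goal):
    """Breadth-first search over a grid maze.

    maze:  list of strings; '#' marks a wall, anything else is open.
    start, goal: (row, col) tuples.
    Returns the number of steps on a shortest path, or None if unreachable.
    """
    rows, cols = len(maze), len(maze[0])
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist  # BFS guarantees this is a shortest route
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and maze[nr][nc] != '#' and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return None  # every route ends in a dead end: no way through

maze = ["#.####",
        "#....#",
        "#.##.#",
        "#..#.#",
        "##...#"]
print(shortest_path_length(maze, (0, 1), (4, 4)))  # -> 7
```

A classical machine must visit these cells one at a time; the hope for quantum hardware is to examine many such routes in superposition at once.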
Will Wright is a game programmer and computer scientist who founded Maxis, now a subsidiary of Electronic Arts. Wright designed and programmed his trademark simulation games: SimCity, The Sims, SimAnt, and others. While initially thought to be practical only for government and educational purposes, many of his creations have gone on to become world-renowned classics of gaming and entertainment. The Sims lets the player control digital people in a simulation of everyday life, choosing what they wear, who they befriend, and even what careers they pursue. The game also offers settings that govern how much autonomy the Sims have over their lives and whether they can make their own decisions. Wright later created a more ambitious automaton of life called Spore, in which you follow the evolutionary path of an organism from its single-celled primordial origins to advanced multicellular, intergalactic societies. Several of his creations are personal favorites of mine; his games never get old. Wright himself believes it is possible that we are living in a simulation and do not even know it (“Do We Have Free Will?”, 2013).
The free will debate has always been controversial and is likely to remain so for quite some time. Many aspects of the subject are contradictory and conflict greatly, making it hard to reach a standard theory or model, let alone a scientific consensus. It is also possible that the truth is simply beyond our comprehension or our ability to perceive logically. Religion and science have both been searching for the answer and are likely to continue doing so. Technology has contributed a great deal to our understanding of the nature of the mind and may one day solve the hard problem of consciousness. Life is the universe’s most natural personification, yet it forever traps us in ephemeral bodies; as Carl Sagan said, “we are a way for the cosmos to know itself.” Whether humans will ever transcend their physical limitations and gain a complete understanding of their existence remains a matter of debate. It will probably be a long while before the human capacity for understanding reaches that point, but I think it is inevitable.
References
Baumeister, R. F. (2008). Free Will in Scientific Psychology. Perspectives on Psychological Science, 3, 14-19. Retrieved June 22, 2014, from http://academic.udayton.edu/jackbauer/Readings%20595/Baumeister%2008%20Free%20Will.pdf.
Do we have free will? [Television series episode]. (2013). In Through the Wormhole. Discovery Channel.
English, E. S. (1948). Holy Bible, containing the Old and New Testaments, authorized King James version; with notes especially adapted for young Christians (Pilgrim ed.). New York: Oxford University Press.
Gardner, M. (1970). Mathematical Games: On Cellular Automata, Self-Reproduction, the Garden of Eden, and the Game “Life”. Scientific American.
Muggleton, S. (2014). Alan Turing and the Development of Artificial Intelligence. AI Communications, 27, 3-10. Retrieved June 30, 2014, from http://www.doc.ic.ac.uk/~shm/Papers/AIC579.pdf.
Owen, A., Schiff, N., & Laureys, S. (2009). A New Era of Coma and Consciousness Science. Progress in Brain Research, 177, 399-411. Retrieved July 27, 2014, from http://www-users.med.cornell.edu/~jdvicto/pdfs/owscly09.pdf.
Prakashananda, S. (1921). The Inner Consciousness, How to Awaken and Direct It. San Francisco, CA: Vedanta Society of S. F.
Misra, J., & Saha, I. (2010). Artificial Neural Networks in Hardware: A Survey of Two Decades of Progress. Neurocomputing, 74, 239-255. Retrieved June 22, 2014, from http://eprints.fri.uni-lj.si/1752/4/Applicabilityofapproximatemultipliersinhardwareneuralnetworks.pdf.
Saracho, O. N. Theory of Mind: Understanding Young Children’s Pretence and Mental States. Early Child Development and Care. Retrieved May 30, 2014, from http://www2.psy.uq.edu.au/~nielsen/pretence_and_theory_of_mind.pdf.
Schopenhauer, A., & Payne, E. F. J. (1966). The World as Will and Representation. New York: Dover Publications.
Spinoza, B. d., & White, W. H. (1923). Ethics, Demonstrated in Geometrical Order (4th ed.). London: Oxford University Press.