Monthly Archives: October 2014

Do humans really have “free will”?


Free will is a topic that has been debated for centuries, yet for most of that time its formal study was largely ignored for lack of concrete empirical evidence. Philosophers have not been as reserved on the subject; there are countless examples of ideas put forth by some of history's greatest minds, from Spinoza to Einstein and Alan Turing. Psychologists, in contrast, have historically avoided the parts of the mind that are intangible and hard to study, the concern of structuralism, and focused instead on the functional side of the mind, functionalism. Science is finally beginning to catch up, and new fields of study such as neuroscience are emerging. Psychologists and scientists are finding new ways of explaining how the mind works, using new technologies and taking approaches that were previously dismissed as too metaphysical or illusory. Computers and the digital revolution are also offering many new insights, not just into the mind but into many fields. The study of the mind has long been impeded by its taboo nature, but that is beginning to change, and the implications are phenomenal.

Keywords: Free will, Determinism, Will Wright, John Conway, Theology, Games and Numbers, Alan Turing, Turing test, Artificial Intelligence, Robotics

Do humans really have “free will”?

Since the dawn of time man has wondered what he is, where he came from, and where he is going. In that quest he has created religions, sacrificial ceremonies, tombs and relics such as the Pyramids of Giza, and put himself on other celestial bodies in search of the answer. It is a philosophic question going back as far as the proverbial chicken and egg. Information processing and technology have been developing at exponential rates since their inception, taking off rapidly at the turn of the last century. Everywhere you go somebody has a smartphone, laptop, or other device that can reach just about any fact or any person in seconds, from the Library of Congress to the fall of the Roman Empire, World War II, and the Apollo moon missions. Just as agriculture gave birth to the exponential growth of the human population, and the industrial revolution gave birth to machines, the digital revolution gave birth to knowledge and information. This phenomenon was observed by Gordon E. Moore, co-founder of Intel, whose Moore's law describes the exponential growth of computing power as transistor density on an integrated circuit doubles roughly every two years. This makes it seem inevitable that computers will surpass the processing power of the human mind, which carries a wide array of implications. The possibility was first explored by Alan Turing, the "father of computer science," who laid the groundwork for the first computers, several of which he worked on directly. Turing speculated that artificial intelligence would eventually advance so far that humans could converse with computers without knowing whether they were talking to another human; for this he created his formal Turing test, which measures the ability of a machine to mimic human behavior. Never in our history has man been so close to finding out whether he really controls his own fate, or if everything is truly a matter of personal destiny.
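Moore's law as described above is just a doubling formula. This is a minimal sketch, assuming the commonly quoted two-year doubling period and using the Intel 4004's roughly 2,300 transistors in 1971 as an illustrative baseline (both figures are assumptions for the example, not claims from this essay):

```python
def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Project a transistor count under Moore's law:
    the count doubles once every `doubling_years` years."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

# Ten years is five doublings, i.e. a 32-fold increase.
for year in (1971, 1981, 1991, 2001):
    print(year, round(transistors(year)))
```

The point of the exponent is that growth compounds: each decade multiplies the count by 32, which is why the curve so quickly dwarfs any fixed biological benchmark.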

People consistently turn to churches, religious institutions, and other ecclesiastical sources of spirituality for answers about the meaning and purpose of life. Consciousness, or Vijñāna, is a central theme in Buddhism, whose monks meditate to reconcile with the ego. The Bible of Western Christianity teaches that our fate is not predetermined and that the outcome is our ultimate choice (King James Bible, Deut. 30.19). In contrast, many of the world's religions teach of an omniscient, omnipotent, and omnipresent deity or divine being. A follower of Buddhism follows the many paths to enlightenment, eventually reaching the conclusion that we are all one consciousness.

Einstein famously proclaimed that "God does not play dice with the universe," to which a colleague reportedly responded by telling Einstein to stop telling God what to do with his dice. Einstein did not believe in a personal god, but he did believe in the order of the universe, in the manner of Spinoza's god of orderliness. Benedict de Spinoza was a Dutch philosopher of the 1600s who wrote many controversial works, including his magnum opus, Ethics; he was considered a rationalist and is credited with helping propagate the Enlightenment. In Ethics Spinoza argues that those who attribute free will to God are mistaken and that the two notions contradict each other, offering many other proofs besides (Spinoza & White, 1923). Free will has been debated by philosophers for centuries, from the grandfather paradox to the philosophical zombie.

Researchers at Cornell University and other institutions have been studying consciousness using technologies such as Magnetic Resonance Imaging, or MRI (Owen, 2009). MRI and related technology are helping us understand which portions of the brain are responsible for which tasks and exactly how it operates, something we still know relatively little about. In one such study, individuals were asked to randomly raise either their left or right hand at the sound of a beep, yet activation of specific regions of the brain indicated that a decision had been made before the participants consciously chose which hand to raise, as if they were not really making the decision at all and the choice was merely an illusion. Such findings consistently show that the brain can commit to a decision before we are aware of choosing, which raises the question: which part of the brain is doing the choosing? Most scientists and neurologists agree that the cerebral cortex gives rise to human intelligence, setting us apart from other species. After all, man can build complex machines and societal structures, and he has an advanced capacity for emotion and thought that other species on the planet possess only in primitive form. The cortex is also believed to be a possible center of our consciousness, the active, aware part of our brains.

Alan Turing was a famous computer scientist whose work in the 1930s and 1940s created the field of computer science, formalizing concepts such as the algorithm and computation. His theoretical machine of the 1930s, now known as the Turing machine, is considered the basis for the modern theory of computation and led to the digital computers we have today; Turing worked on several of the early machines himself. During World War II he significantly improved the ability of British and Allied intelligence to decrypt Nazi communications by automating much of the process, replacing human cryptanalysts with machines (Muggleton, 2014). After the war Turing began designing a logical computer, which would have been among the first digital computers, but he was rebuffed by his colleagues. Turing then went on to Manchester University, where he helped lay the foundation for the field of artificial intelligence.

John Conway, Professor of Mathematics at Princeton University, is known to computer scientists and mathematicians alike for his work in game theory and cellular automata. Conway essentially solved a problem posed by John von Neumann in the 1940s by creating a self-replicating digital cellular automaton, called the Game of Life, published in the October 1970 issue of Scientific American (Gardner, 1970). The game is implemented as a computer simulation. Typically starting with a rectangular grid, the player marks each cell as one of two states, alive or dead. The simulator then applies a few very basic rules at each step, modeling overcrowding, underpopulation, and reproduction. The result was astonishing: a computationally cheap and efficient evolutionary model that could be run on a standard desktop. Because the rules are deterministic, the simulation can be paused, saved, resumed, and restarted, always yielding the same results. I propose a hypothetical: what if it were possible to build a computational model that simulated our universe down to the exact superposition of every atom? While that may seem beyond the capabilities of current computing power, quantum mechanics sets the stage for revolutionary advances in computing efficiency.
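The rules above are simple enough that a working Life stepper fits in a dozen lines. This sketch (the function name and the sparse set-of-cells representation are my own choices, not Conway's formulation) applies the standard birth-on-3, survive-on-2-or-3 rules:

```python
from collections import Counter

def step(live):
    """Advance one generation; `live` is a set of (x, y) live cells."""
    # Count the live neighbours of every cell adjacent to a live cell.
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker", three cells in a row, oscillates with period two,
# so stepping twice restores the original pattern.
blinker = {(0, 1), (1, 1), (2, 1)}
print(step(blinker))        # the vertical form of the blinker
print(step(step(blinker)))  # back to the horizontal form
```

Because `step` is a pure function of the current state, running it again from a saved state always reproduces the same future, which is exactly the determinism the paragraph above describes.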

In classical computing everything breaks down into binary: the JPEG image you save from your phone or digital camera on a family vacation, the song a teenager downloads from iTunes, the ASCII or Unicode text you enter into a document, even the buttons you press on your keyboard are all reduced to binary that is transmitted and interpreted by the various components of your hardware. Binary is one of the most basic ways of representing information digitally. A single bit is either a 0 or a 1 and is handled by a semiconducting transistor; think of a light switch that is either on or off. More transistors on a chip therefore mean more memory and processing power. A quantum bit, or qubit, however, exploits superposition: until it is measured and its wave function collapses, it can effectively be 0 and 1 at the same time. This has profound implications for cryptography, concurrency, robotics, and artificial intelligence. The media report on cars that can drive for you using a global positioning system, and when computer-controlled enemies or "bots" attack you in video games, they usually perform pathfinding to get from point A to point B. For a classical computer to find its way out of a maze it must explore possible routes, reaching occasional dead ends, until it finds a way out, and it may have to keep searching if it needs the shortest possible path rather than just the first one it finds. Such tasks get fundamentally harder still with problems like the traveling salesman. Because of quantum superposition, scientists believe it is theoretically possible to develop chips that effectively explore many paths at once and solve complex problems far more efficiently.
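The maze search described above can be sketched as a classical breadth-first search, which finds a shortest route because it explores every path of length n before any of length n + 1 (the maze layout and function name here are illustrative):

```python
from collections import deque

def shortest_path(maze, start, goal):
    """Length of a shortest route through `maze` (a list of strings,
    '#' marking walls), or None if the goal is unreachable."""
    rows, cols = len(maze), len(maze[0])
    frontier = deque([(start, 0)])   # (cell, distance travelled)
    seen = {start}
    while frontier:
        (r, c), dist = frontier.popleft()
        if (r, c) == goal:
            return dist
        # Try the four neighbouring cells, skipping walls and repeats.
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and maze[nr][nc] != '#' and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append(((nr, nc), dist + 1))
    return None  # every route dead-ends

maze = ["..#.",
        ".#..",
        "...."]
print(shortest_path(maze, (0, 0), (0, 3)))  # 7
```

Note how the search must fan out over the whole reachable region in the worst case; the quantum speculation above amounts to hoping that superposition lets hardware evaluate many of these branches at once instead of one queue entry at a time.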

Will Wright is a game designer and programmer who founded Maxis, now a subsidiary of Electronic Arts. Wright is responsible for designing and programming his trademark simulation games: SimCity, The Sims, SimAnt, and others. While initially thought to be practical only for government and educational purposes, many of his creations have become world-renowned classics of gaming and entertainment. The Sims lets the player control digital people in a simulation of everyday life, choosing what they wear, who their friends are, and even what careers they have. The game also provides settings governing how much autonomy the Sims have over their lives and whether they can make their own decisions. Wright later created a more ambitious automaton of life called Spore, in which you follow the evolutionary path of an organism from single-celled primordial origins to advanced multicellular, intergalactic societies. Several of his creations are personal favorites of mine; his games will never get old to me. Wright believes it is possible that we are living in a simulation ourselves and do not even know it ("Do We Have Free Will?", 2013).

The free will debate has always been controversial and is likely to remain so for quite some time. Many aspects of the subject are contradictory and conflict greatly, making it hard to reach a standard theory or model, let alone a scientific consensus. It is also possible that the truth is simply beyond our comprehension or our ability to perceive logically. Religion and science have both been searching for the answer and are likely to continue doing so. Technology has contributed a great deal to understanding the nature of the mind and may one day solve the hard problem of consciousness. Life is the Universe's most natural personification, yet it forever traps us in ephemeral bodies; as Carl Sagan said, "we are a way for the cosmos to know itself." Whether humans will ever transcend their physical limitations and gain a complete understanding of their existence is a matter of debate. It will probably be a while before the human capacity for understanding reaches that point, but I think it is inevitable.


Baumeister, R. F. Free Will in Scientific Psychology. Perspectives on Psychological Science 3, 14-19. Retrieved June 22, 2014, from

(2013). Do we have free will? [Television series episode]. In Through The Wormhole. Discovery.

English, E. S. (1948). Holy Bible, containing the Old and New Testaments, authorized King James version; with notes especially adapted for young Christians. (Pilgrim ed.). New York: Oxford Univ. Press.

Gardner, M. (1970). Mathematical Games: On Cellular Automata, Self-Reproduction, the Garden of Eden, and the Game 'Life'. Scientific American.

Muggleton, S. (2014). Alan Turing and the Development of Artificial Intelligence. AI Communications, 27, 3-10. Retrieved June 30, 2014, from

Owen, A., Schiff, N., & Laureys, S. (2009). A New Era of Coma and Consciousness Science. Progress in Brain Research 177, 399-411. Retrieved July 27, 2014, from

Prakashananda, S. (1921). The Inner Consciousness, How to Awaken and Direct It. San Francisco, CA: Vedanta Society of S. F.

Saha, I. Artificial Neural Networks in Hardware: A Survey of Two Decades of Progress. Neurocomputing 74, 239-255. Retrieved June 22, 2014, from

Saracho, O. N. Theory of Mind: Understanding Young Children’s Pretence and Mental States. Early Child Development and Care. Retrieved May 30, 2014, from

Schopenhauer, A, & Payne, E. F. J. (1966). The World as Will and Representation. New York: Dover Publications.

Spinoza, B. d., & White, W. H. (1923). Ethics, Demonstrated in Geometrical Order (4th ed.). London: Oxford University Press.

Security of Electronic Communications

One of the biggest problems facing software and technology companies, as well as all major financial institutions, today is the security and authenticity of electronically transmitted communications and data. When evidence of phone hacking mounted around Piers Morgan in early 2014, it was revealed that access to victims' voicemail recordings was gained easily because the victims simply never changed the password (Spark, 2014). Why, then, do people so often neglect and underestimate the importance of securing their communications, and what are some ways to address this? These are important cognitive, biometric, and psychological questions that must be answered in order to improve the security of databases, emails, networks, and data transmission. Doing so requires not only that engineers innovate and improve the encryption methods and techniques these systems use, but also that people, including the ordinary layman, change how they perceive and appraise the problem.

During World War II the Germans used the Enigma machine to encrypt nearly all communications, until Alan Turing and his colleagues at Bletchley Park built machines to automate much of the decryption process. Along the way Turing laid the foundations of computer science and artificial intelligence, positing the noteworthy Turing test as a measure of a machine's intelligence. Every time you make a purchase at Amazon or send a message on Facebook or Twitter, your information is bounced between several servers, stored in databases on remote computers, and sometimes intercepted, even by the National Security Agency, in offices and buildings that may not be in the same country as you. Merely opening an email attachment can compromise all of the data on your computer, since attachments are easily infected with Trojans and other viruses that can take over your machine, control system processes, or scan for files containing credit card numbers and upload them back to the intruder. Even if you consider yourself a modern Luddite of sorts, there is very little hope of escaping the all-encompassing technology of the digital age, unless of course you don't mind never holding a driver's license or taking out a loan for a house, car, or education.

Heartbleed was a major security vulnerability in OpenSSL, a popular open source secure-socket library, which could be used to bypass the software's authenticity and security measures and was exploited in isolated instances. A Pew Research poll indicated that only about 60% of adult internet users had heard of Heartbleed and, even worse, that only 39% took additional steps to secure their online accounts (Rainie, 2014). With the warnings of Heartbleed going largely unheeded, Shellshock, a vulnerability in Bash, the command shell used on Mac and Linux systems, was then discovered, with early estimates of 500 million affected computers (Lee, 2014). The implications of data security for our jobs, our lives, and basically our very way of life are thus evident. But what can be done to address these issues? Examples such as OpenSSL may actually be part of the solution and not just the problem. Open source software grants users special privileges, including the ability to read the source code without extensive reverse engineering, and sometimes even the right to redistribute that code with certain caveats. For this reason not only the hackers were aware of the bug; so were other users of the software, allowing the issue to be addressed much more quickly. With proprietary software this may not be the case; by the time the developers are informed, it could be too late. We can also look to companies like Oracle, which is making a point of improving security in Java-based applications. Its approach lately has been to promote widespread adoption of new Java versions, which, thanks to new features, the community has largely embraced, with Java 8 adoption up nearly 20% from previous releases (Oracle, 2014). Not only is Oracle correcting the issues, it is giving users incentives to install and adopt the more secure versions. The now obsolete Windows XP operating system is the epitome of where this methodology could be applied, as it is still used in many ATMs today (Pagliery, 2014). Such machines have proven extremely susceptible to fraudulent attacks, even vulnerable to hacking from a cellphone!

New advances in physics are creating promising solutions to encryption problems as well as to classical computing problems. Quantum cryptography promises keys that cannot be intercepted without detection, though recent research has shown that even these schemes can be attacked with lasers (Berridge, 2010). The optimizations in parallel processing that quantum mechanics is postulated to allow could also mean near-instantaneous decryption of encrypted keys. This means that any person, organization, or corporation that develops the first real quantum computer could decrypt messages protected by today's encryption standards almost at will. It is no surprise, then, that the National Security Agency, NASA, Google, Microsoft, and many other tech titans are clamoring to build these machines.

Another important aspect of this problem is the psychological weight that security and privacy carry for individuals. The most obvious issue is that for passwords to be secure they also have to be somewhat hard to remember, and most people maintain not one but often ten or more networked accounts: email, a student account, online banking, social media, and much more. Acronyms and anagrams can be applied as mnemonic devices for remembering passwords, but users generally prefer convenience. The most commonly used password of 2013 was, consistent with popular belief, you guessed it, "password" (Ngak, 2014). Many companies have begun to address this part of the problem in new ways, such as Apple, whose iOS devices can be locked using facial recognition (Whitney, 2013). Beyond facial recognition, research is bringing solutions such as fingerprint scanning, retina scanning, DNA tests, and other forms of biometric identification and authentication, some old and some new. One of the most common methods of preventing bots and spam on websites, for example, has been the use of optical character recognition challenges, or CAPTCHAs.
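The tension above, memorability versus strength, is often addressed with wordlist passphrases plus a blocklist of the worst choices. A minimal sketch follows; the tiny wordlist and common-password set are illustrative samples, not real standards:

```python
import secrets

# Illustrative samples only: real deployments use published wordlists
# (thousands of words) and breach-derived blocklists (millions of entries).
COMMON = {"password", "123456", "qwerty", "abc123", "letmein"}
WORDS = ["orbit", "velvet", "cactus", "ember", "harbor", "quartz"]

def is_too_common(candidate):
    """Reject a password that appears on the known-common list."""
    return candidate.lower() in COMMON

def passphrase(n_words=4):
    """Join randomly chosen words; `secrets` gives a CSPRNG, unlike `random`."""
    return "-".join(secrets.choice(WORDS) for _ in range(n_words))

print(is_too_common("password"))  # True
print(passphrase())               # e.g. "ember-quartz-orbit-cactus"
```

The design point is that each extra word multiplies the search space by the wordlist size, so a four-word phrase from a few thousand words is both strong and far easier to recall than eight random symbols.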

The problems of privacy and security are perennial and arise in many contexts, not in technology alone. Technology and science are not only expanding the issues but actively providing new and innovative solutions. The development of quantum computers draws many parallels to Alan Turing's codebreaking machines, with similarly grave implications. Millions of people are left vulnerable by security flaws and subject to attack, fraud, and other harm every single day. Privacy and security are thus important matters that are too often underestimated and that must be taken more seriously and researched more thoroughly.


Berridge, E. (2010, September 1). Quantum encryption defeated by lasers. Retrieved October 17, 2014, from

Lee, D. (2014, September 25). Shellshock: ‘Deadly serious’ new vulnerability found. Retrieved October 17, 2014, from

Ngak, C. (2014, January 21). The 25 most common passwords of 2013. Retrieved October 19, 2014, from

Oracle Highlights Continued Java SE Momentum and Innovation at JavaOne 2014. (2014, September 29). Retrieved October 17, 2014, from

Pagliery, J. (2014, March 4). 95% of bank ATMs face end of security support. Retrieved October 17, 2014, from

Rainie, L., & Duggan, M. (2014, April 30). Heartbleed’s Impact. Retrieved October 17, 2014, from

Spark, L. (2014, February 14). CNN host Piers Morgan questioned in UK hacking investigation. Retrieved October 10, 2014, from

Whitney, L. (2013, December 20). How to use facial recognition on your iPhone. Retrieved October 19, 2014, from