Author Archives: Robert B. Colton

Robotic probes on Mars

The Mars Science Laboratory (MSL) is a robotic probe on a mission to study Mars with international support. It has been achieving its goals by finding signs of habitable conditions that could once have supported biological life, improving our understanding of the kind of environment Mars once had. It has faced delays and setbacks, but it will provide necessary experience for developing future manned missions to the planet, and it will likely continue its journey across the Martian surface for some time. MSL is extending the depths of our understanding of the solar system and sits at the forefront of space exploration.

The path that Curiosity has followed since landing in 2012 (Grush, 2016).

The high-level goals of MSL set by NASA are to study the soil on the Martian surface for signs of biological life. NASA hopes to achieve these goals by equipping the rover with instruments that can perform chemical analyses on soil samples collected from the Martian surface. The mission also has the auxiliary goal of testing new landing techniques with more automation and autonomy on the part of the robotic probe, which proved successful with the landing of its single rover, Curiosity, on August 6th, 2012 (Amos, 2012). MSL is achieving the goals NASA set for it by finding things like silica deposits, which are associated with environments conducive to microbial life (Chappell, 2015). It is interesting to consider that silica is a compound of oxygen and silicon (Si), the semiconducting material used in transistors and integrated circuits. An open question is whether biological life forms could be based on an element other than carbon, and one such element considered by academics is silicon because of its similarities to carbon (“Could silicon be the basis for alien life forms, just as carbon is on Earth?”, n.d.). The Curiosity rover itself almost certainly has some silicon parts too. Curiosity also recently found boron, a positive sign because of its water solubility (Grush, 2016). The boron is indicative of a lake that may have once existed in the Gale Crater.

The journey of the MSL robotic probe to Mars has not been without problems and critical moments of difficulty. It has, however, been a relatively calm journey overall in comparison to other unmanned missions. During its development, the rover’s launch had to be delayed multiple times because of engineering challenges and design problems with, among other things, its heat shield (Chang, n.d.). NASA did not want to compromise the mission and decided to extend the rover’s development for two more years of further testing (Matson, 2008). The delays and the resulting changes to the rover’s equipment also increased the mission’s costs. Curiosity was designed with shutdown features that protect the probe in certain hazardous situations that could destroy it. On several occasions since landing, Curiosity has put itself into a safe mode before scientists rebooted the rover. These incidents were caused mostly by software problems, including memory management (Lumb, 2016). If NASA had not been able to recover the rover, the mission would clearly have failed, and its retirement could not have been delayed any further.

The future of MSL has mixed projections, some positive and some not as positive. Curiosity has already outlived its original expected retirement date, and even after it is officially retired, its legacy of discoveries will leave a lasting impact on our future manned missions to Mars. However, Curiosity has already shown signs of aging as A.I. and robotics have continued their exponential advance since 2012. Even worse, the rover hasn’t really found any direct evidence that biological life once existed on the planet, one of its primary goals. This has led astrobiologists to consider other novel ways of finding life on the red planet. They suggest that deeper subsurface analysis would be better than our current method of simply finding where water reservoirs and lakes were, because we could find fossils as well as microbes in groundwater (David, 2017). It is also possible that these bodies of water moved around as a result of shifting tectonic plates, a process that may still be occurring.

In conclusion, I feel the Mars Science Laboratory mission has gone very well despite its shortcomings. The Curiosity rover has found some positive signs, such as silica deposits and the element boron, which could indicate an environment that would have supported life. It may also be beneficial to incorporate other strategies for finding life on the red planet. Curiosity, like its predecessors, is still only the beginning of our journey to Mars.


Amos, J. (2012, August 06). NASA’s Curiosity rover successfully lands on Mars. Retrieved June 23, 2017, from

Chang, A. (n.d.). Mars Science Laboratory faces technical problems. Retrieved June 23, 2017, from

Chappell, B. (2015, December 18). Curiouser And Curiouser: NASA’s Curiosity Rover Finds Piles Of Silica On Mars. Retrieved June 23, 2017, from

Could silicon be the basis for alien life forms, just as carbon is on Earth? (n.d.). Retrieved June 23, 2017, from

David, L. (2017, May 09). The Search for Life on Mars Is about to Get Weird. Retrieved June 23, 2017, from

Grush, L. (2016, December 14). NASA’s Curiosity rover finds more evidence that Mars was once habitable. Retrieved June 23, 2017, from

Lumb, D. (2016, July 14). NASA’s Curiosity rover took a ‘safe mode’ nap this weekend. Retrieved June 23, 2017, from

Matson, J. (2008, December 4). Mars Science Laboratory rover delayed two-plus years. Retrieved June 23, 2017, from

Landsat Remote Sensing Data

The Landsat project captures a wide variety of heterogeneous data about the surface of the Earth approximately every 16 days. Most of its satellites, including Landsat 7 and 8, are placed in sun-synchronous, near-polar orbits to accomplish this. Landsat imagery has been used to document the deforestation of the rain forests, which has been crucial to understanding why it occurs. The imagery will also prove beneficial in reforestation efforts following wildfires in Oklahoma, Kansas, and the Okefenokee National Wildlife Refuge. Mapping glacier loss additionally helps us monitor climate change, even in remote areas where statistical anomalies defy global trends. The Landsat project offers a detailed, long-running view of Earth’s environment.

Deforested areas (light green and pink) of the Amazon rain forest captured by Landsat between November 13th, 1986 (left) and October 30th, 2016 (“Monitoring Deforestation in the Amazon,” 2017).

One application of the remote sensing data collected by the Landsat project is monitoring the deforestation of the Amazon rain forest. The imagery collected by Landsat documents the deforestation that has occurred over the past several decades, and it is used to determine the causes of that deforestation, primarily agricultural farming and cattle grazing. Deforestation anywhere is particularly problematic because, during photosynthesis, plants and trees convert CO2 into carbohydrates and produce oxygen as a byproduct (“The Carbon Cycle and Earth’s Climate,” n.d.). Destruction of these forests means there are fewer plants offsetting global carbon emissions in the atmosphere from industrialized nations, even as these areas become industrialized themselves. This is why world leaders have been taking steps, with agreements like the Paris accord, to set international targets for emission reductions and mitigation. Interestingly, the agreements are considered so important that even countries like North Korea have praised them (Finnegan, 2017). Landsat tells a similar story of forest loss in Cambodia, where a surge in land concessions has contributed to an accelerated rate of deforestation. The Okomu Forest has also been subject to the same industrial demands of large-scale rubber and oil palm plantations.

Another tangible application of the data is the monitoring of drought and wildfire conditions. Landsat 8’s data will be used in planning the reforestation of the Okefenokee National Wildlife Refuge once a wildfire that was ignited by lightning is contained. Infrared band measurements of vegetation by Landsat will also assist recovery efforts after wildfires in Kansas and Oklahoma, where 780,000 acres of farm and ranch land have been scorched. Understanding where these wildfires spread is necessary when assessing the damage caused to both wildlife ecosystems and human property. Without such geospatial data, strategizing containment efforts and planning evacuation routes would be far more difficult.

The growth of Pio XI Glacier in the Southern Patagonia Icefield between October 4th, 1986 (left) and October 22nd, 2016 (“As Glaciers Worldwide Are Retreating, One Defies the Trend,” 2017).

What I felt was the most important application of the data was the monitoring of changing climates and ecosystems. It’s important to consider that humans have only been around for a fraction of the Earth’s existence and that the planet wasn’t always as habitable to biological life as it is today. The Landsat imagery has captured a wide array of changes to our planet’s climate over the decades, changes which could have a profound impact on its habitability. Certain anomalies have been recorded, such as the growth of the Pio XI Glacier, which is defying the global trend of shrinking glacial ice mass. However, supplemental data, like the depth of the lakes the glacier flows into, may be necessary to determine why it shows abnormal growth. This glacier may be important in understanding why other glaciers are melting. When glaciers melt they release trapped methane and other greenhouse gases that contribute to global warming; methane is as much as 86 times more potent a greenhouse gas than carbon dioxide (Vaidyanathan, 2015). It is believed that as global warming progresses, more glaciers will melt, and the process becomes a self-reinforcing cycle.

In conclusion, the Landsat project collects data that is essential to a lot of our most critical science research. The imagery collected tells us many stories about the changes occurring in our global environment from the deforestation of rain forests to wildfires, natural disasters, and the growth and shrinking of glaciers. It is for these reasons that I believe the USGS’s Landsat project has contributed positively to many applications of the scientific community.


As Glaciers Worldwide Are Retreating, One Defies the Trend [Online image]. (2017, May 18). Retrieved June 08, 2017, from

Finnegan, C. (2017, June 7). North Korea blasts Trump as ‘silly,’ ‘ignorant’ over Paris Accord withdrawal. Retrieved June 08, 2017, from

Monitoring Deforestation in the Amazon [Online image]. (2017, May 18). Retrieved June 08, 2017, from

The Carbon Cycle and Earth’s Climate. (n.d.). Retrieved June 08, 2017, from

Vaidyanathan, G. (2015, December 22). How Bad of a Greenhouse Gas Is Methane? Retrieved June 08, 2017, from

Java 8 Nashorn Script Engine

For a side project I am developing in Java, I needed a good JavaScript parser, but the publicly documented Nashorn interface is all about compiling and executing when I only needed an intermediate representation. As of JDK 8u40, it is possible to use the parser to get the AST as a JSON-encoded string, either from a JS file being executed by the ScriptEngine or from within Java using the non-public Nashorn API.

The below image should convey how a simple assignment statement is broken into a stream of tokens, from which an AST is generated as JSON on the right. Symbols like *, +, and - are called binary operators because they take two operands; ! is the logical negation and a unary operator because it takes only one operand. The operands can themselves be expressions, because expressions are defined recursively: an expression can be a literal, a term, or another expression. This is how we end up with trees which, when coupled with additional semantic information such as keywords, types, and identifiers, help us do code generation. This enables you to take in one language, say GML, and spit out a completely different one like C++, which, if you don’t already know, is exactly what ENIGMA’s compiler does.
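To make that recursive definition concrete, here is a minimal sketch in Java of how binary and unary operators become tree nodes whose operands are themselves expressions. The class names (Expr, Literal, Binary, Unary) are my own illustration, not Nashorn’s actual node types:

```java
// An expression evaluates to a value; operands are expressions themselves,
// so the structure is recursive and naturally forms a tree.
interface Expr {
    double eval();
}

class Literal implements Expr {
    final double value;
    Literal(double value) { this.value = value; }
    public double eval() { return value; }
}

// A binary operator takes two operands, e.g. * + -
class Binary implements Expr {
    final char op;
    final Expr left, right;
    Binary(char op, Expr left, Expr right) { this.op = op; this.left = left; this.right = right; }
    public double eval() {
        switch (op) {
            case '+': return left.eval() + right.eval();
            case '-': return left.eval() - right.eval();
            case '*': return left.eval() * right.eval();
            default: throw new IllegalArgumentException("unknown op: " + op);
        }
    }
}

// A unary operator takes only one operand; here, arithmetic negation.
class Unary implements Expr {
    final Expr operand;
    Unary(Expr operand) { this.operand = operand; }
    public double eval() { return -operand.eval(); }
}

public class AstDemo {
    public static void main(String[] args) {
        // represents -(2 + 3) * 4 as a tree of nested expressions
        Expr tree = new Binary('*',
                new Unary(new Binary('+', new Literal(2), new Literal(3))),
                new Literal(4));
        System.out.println(tree.eval()); // prints -20.0
    }
}
```

A code generator does the same walk over the tree, but instead of computing a value it emits source text or machine code at each node.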

Abstract Syntax Tree

An abstract syntax tree is produced by a parser after the lexer phase breaks code into a stream of tokens.

Wikipedia has additional information on abstract syntax trees if you would like to know more.
The following StackOverflow post provides clarification between an AST and a parse tree.

This example shows you how to get the AST as JSON from Java. This was my own discovery from studying the Nashorn source code.

// Imports for the non-public Nashorn API (JDK 8u40+); these packages are
// internal and subject to change.
import jdk.nashorn.api.scripting.ScriptUtils;
import jdk.nashorn.internal.runtime.Context;
import jdk.nashorn.internal.runtime.ErrorManager;
import jdk.nashorn.internal.runtime.options.Options;

String code = "function a() { var b = 5; } function c() { }";

// configure the engine to parse only, without compiling
Options options = new Options("nashorn");
options.set("anon.functions", true);
options.set("parse.only", true);
options.set("scripting", true);

// create a context and install its global, which ScriptUtils.parse expects
ErrorManager errors = new ErrorManager();
Context context = new Context(options, errors, Thread.currentThread().getContextClassLoader());
Context.setGlobal(context.createGlobal());

// parse the code and return the AST as a JSON-encoded string
String json = ScriptUtils.parse(code, "<unknown>", false);

This example should give the following JSON-encoded AST when executed on Java 8u51. The JSON encoding provided by Nashorn is compliant with the community-standard JavaScript JSON AST model popularized by Mozilla.


It is important to note, however, that this interface may change because it is not well documented and is new to the platform. Additionally, the OpenJDK project is developing a public interface for Java 9 that allows AST traversal in a more standard and user-friendly way.

Limited documentation for the existing public Nashorn classes in Java 8 can be found below.

The following link provides a list of all of the parser and compiler options that I set above. It is important to note, however, that the syntax is different when setting the options inside Java, where the dash (-) in an option name is replaced with a period.

The Nashorn source code can be found on GitHub and also on BitBucket. I prefer the BitBucket version as the GitHub version seems to be missing some classes.

The Art of Animation

When creating and testing new graphics rendering engines, or learning to create three-dimensional models, one of the first things usually produced is the famous Utah teapot, a reference model created at the University of Utah. Notable alumni of the University of Utah include Ed Catmull, who is widely cited as having created the first computer animation while a student. Catmull went on to cofound Pixar with Steve Jobs and John Lasseter.

Computer graphics and computer-generated imagery are two fields that have been deeply intertwined with the history and development of modern computers. Computer graphics have played an important role in technology, helping turn human achievements that would otherwise not be possible, such as the Apollo Moon landings, into reality. The graphical user interface turned ordinary people’s perception of computers, and what they were capable of, on its head. CGI completely revolutionized the film industry, with hit movies by Pixar like Toy Story and A Bug’s Life changing the entire way Walt Disney did animation. When we turn on our computers or cellphones or tablets, we expect to watch multimedia and play games with a rich, high-performance end-user experience. But the technology that makes this possible did not just happen overnight; it is the product of years of research.

Walt Disney’s The Lion King, the highest-grossing traditionally animated film of all time, has many important lessons for our journey on the endless round. One of the most important you will ever learn is that every person you will ever meet is no more than roughly your 15th cousin. Going back 32 generations, each and every one of us has 2 to the power of 32, or 4,294,967,296, ancestors on paper, which is larger than the world population was 32 generations ago. Our family trees must therefore overlap, a phenomenon known as pedigree collapse, and so we are all connected in the great circle of life (Dawkins, 1995).

The development of computer graphics and computer-generated imagery is intricately linked to the history of traditional animation. We begin with the development of the video game adaptation of The Lion King for computers and Windows 3.1, released in time for Christmas of 1994 (Disney Lion King Disaster, 2013). In those days Windows was a largely underdeveloped operating system with very poor driver-level abstraction, and game development for the platform was abysmal as a result of inefficient and buggy graphics drivers. The result was Microsoft’s backing of the DirectX API, which would become an industry standard for Windows game development and graphics, inspire the name of the Xbox console, and surpass OpenGL’s market share for years to come.

Steve Jobs co-founded Apple with colleague Steve Wozniak in 1976 in Cupertino, California, at the advent of the personal computer revolution. It is lesser known that shortly before founding Apple Computer, Jobs was an employee at Atari designing video games. Jobs often found it difficult to work with other people, and conflicts arose so often that his boss set up a night shift for him to work. While at Atari, Jobs and Wozniak designed the game Breakout (Hanson, 2013), though it is believed that Jobs purposefully misled Wozniak about the payment received for completing the project in order to buy into a farm in Oregon. Jobs was later ousted from Apple Computer in 1985 after disagreements with CEO John Sculley, who would himself later be ousted from Apple. Jobs then founded a computer company, NeXT, which worked to pioneer networking and inter-connectivity, and whose computers were used by Tim Berners-Lee to create HTML and the World Wide Web, later forming the backbone of the internet (A Short History of NeXT, 2015). The NeXTcube workstation was also used by John Carmack when designing and coding the classic Doom and Wolfenstein 3D computer games (Antoniades, 2009). The same year Jobs left Apple, the animator behind The Brave Little Toaster, John Lasseter, left Disney for Lucasfilm’s Industrial Light and Magic, which George Lucas had created to render the special effects in Star Wars. Lucasfilm then sold its Graphics Group to Steve Jobs, who would rename the company Pixar Animation Studios, giving Lasseter the freedom to direct and produce content, initially for television commercials (John Lasseter Biography, The New York Times).

Walt Disney Storyboard

A few other innovations made computer animated films cheaper to produce including Walt Disney’s pioneering use of the storyboard in the earlier days of animation.

Pixar and the growing field of computer-generated imagery initially faced many difficulties and technical limitations. Traditional animators were largely afraid of computer animation, fearing it would replace them entirely; those at Disney feared that hand-drawn animation was soon to be a thing of the past and a dying art form. Computer animation was also extremely expensive, requiring very advanced computers for the time, but eventually Moore’s Law, which describes the exponential growth of transistor density on integrated circuits, made it cheap enough for Jobs and Pixar to finally pull off a feature-length computer-animated film. A couple of earlier innovations, including Walt Disney’s pioneering of the storyboard, also made computer-animated films cheaper to produce. In contrast to live-action movies, which are filmed first and later edited into a final cut, animation was extremely expensive for even short clips because of the time and prowess needed to render and produce the scenes. Storyboarding helped eliminate this inefficiency by ensuring that only the scenes that would make it into the final movie were rendered.

This November will mark 20 years since Steve Jobs debuted Toy Story at SIGGRAPH in 1995, a film that would go on to become a box office success. With its plastic, doll-like anthropomorphic rendering, Toy Story brought the first feature-length computer-animated characters to life on the big screen and set a new precedent for animation. Pixar’s internal RenderMan engine, used to render the actual scenes, has a number of similarities with OpenGL; both APIs take the form of a stack-based state machine.

Despite the ambivalent feelings at the time, Pixar would not be a one-hit wonder and would go on to work on its next hit, A Bug’s Life, which would also be the highest-grossing animated film for its year of release, 1998, raking in $33 million on its opening weekend. The movie broke new technical barriers, including the number of animated characters possible on screen at any given time. Pixar would go on to release hit after hit, including Monsters, Inc., Cars, Up, and Brave.

Pixar has been an immense success, spawning a following of millions of moviegoers worldwide. One product of this success places Pixar at the center of a prominent fan theory known as The Pixar Theory, which postulates that all Pixar movies are related and can be used as the basis to predict future events in the Pixar timeline. John Lasseter is currently on board to direct Toy Story 4, and a century of animation and film will likely not be coming to a halt any time soon.

The End!


Antoniades, Alexander. “The Game Developer Archives: ‘Monsters From the Id: The Making of Doom'” Gamasutra. UBM Tech, 15 Jan. 2009. Web. 12 Jan. 2015. <>.

“A Short History of NeXT.” A Short History of NeXT. Web. 10 Mar. 2015. <>.

Dawkins, Richard. “All Africa and Her Progenies.” River out of Eden: A Darwinian View of Life. New York, NY: Basic, 1995. Print.

“Disney Lion King Disaster.” The Saint. 4 Jan. 2013. Web. 10 Mar. 2015. <>.

Hanson, Ben. “How Steve Wozniak’s Breakout Defined Apple’s Future.” 27 June 2013. Web. 7 June 2015. <>.

“John Lasseter Biography.” Movies & TV. The New York Times. Web. 10 Mar. 2015. <>.

Do humans really have “free will”?


Free will is a topic that has been debated for centuries, yet for the longest time its study was largely ignored due to the lack of concrete empirical evidence. Philosophers have not been as reserved on the subject; there are countless examples of ideas put forth by some of history’s greatest minds, from Spinoza to Einstein and Alan Turing. In contrast, psychologists have historically avoided addressing the parts of the mind that are not tangible and cannot be studied easily (structuralism), focusing instead on the functional side of the mind (functionalism). Science is finally beginning to catch up, and new fields of study such as neuroscience are being created. Psychologists and scientists are finding new ways of explaining how the mind works, using new technologies and taking approaches that were previously overlooked or disregarded as too metaphysical or illusory. Computers and the digital revolution are also offering many new insights, not just into the mind but into many fields. The study of the mind has generally been impeded because of its taboo nature, but that is beginning to change, and the implications are phenomenal.

Keywords: Free will, Determinism, Will Wright, John Conway, Theology, Games and Numbers, Alan Turing, Turing test, Artificial Intelligence, Robotics

Do humans really have “free will”?

Since the dawn of time man has wondered about what he is, where he came from, and where he is going. In this quest he has created religion, sacrificial ceremonies, and tombs and relics such as the Pyramids of Giza, and has put himself on other celestial bodies in the search for the answer. It is a philosophic question going back as far as the proverbial chicken and egg. Information processing and technology have been developing at exponential rates since inception, taking off rapidly at the turn of the last century. Everywhere you go somebody has a smartphone, laptop, or other device that can access just about anything, or any person, in seconds, from the Library of Congress and the fall of the Roman Empire to World War II and the Apollo moon missions. Just as agriculture gave birth to exponential growth of the human population, and the industrial revolution gave birth to machines, the digital revolution gave birth to knowledge and information. This phenomenon was also observed by Gordon E. Moore, co-founder of Intel, whose Moore’s law describes the exponential growth of computing power as a result of transistor density on an integrated circuit. This makes it seem evident that computers will inevitably surpass the processing power of the human mind, which has a wide array of implications. This was first theorized by Alan Turing, the “father of computer science,” who laid the groundwork for what became the first computers, several of which he worked on directly. Turing speculated that artificial intelligence would eventually advance so much that humans would be able to have conversations with computers without being aware of whether they were talking to another human. For this he created his formal Turing Test, which measures the ability of a machine to mimic human behavior. Never in our history has man been so close to finding out whether he really controls his own fate, or if everything is truly a matter of predetermined destiny.

People consistently turn to churches, religious institutions, and other ecclesiastical sources of spirituality to find answers to the meaning and purpose of life. Consciousness, or Vijñāna, is a central theme in Buddhism, with monks meditating and reconciling with their egos. The Bible of Western Christianity teaches us that our fate is not predetermined and that the outcome is our ultimate choice (King James Bible, Deut. 30.19). In contrast, many of the world’s religions teach of an omniscient, omnipotent, and omnipresent deity or divine being. A follower of Buddhism will follow the many paths to enlightenment, eventually reaching the conclusion that we are all one consciousness.

Einstein famously proclaimed that “God does not play dice with the universe,” to which his colleague Niels Bohr reportedly responded by telling Einstein to stop telling God what to do with his dice. Einstein did not believe in a personal god, but believed in the order of the universe, such as Spinoza’s god of orderliness. Benedict de Spinoza was a Dutch philosopher of the 1600s who wrote many controversial works, including his magnum opus Ethics; he was considered a rationalist and is credited with helping propagate the Enlightenment era. Spinoza asserts in his book that those who believe free will is in the nature of God are wrong and that the two contradict each other, while also providing many other proofs (Spinoza & White, 1973). Free will has been a topic debated by philosophers for centuries, from the grandfather paradox to the philosophical zombie.

Researchers at Cornell University and other institutions have been studying consciousness using technologies such as Magnetic Resonance Imaging, or MRI scans (Owen, 2009). MRI and other technologies are helping us gain a better understanding of which portions of our brain are responsible for which tasks and exactly how it operates, something we still don’t know a lot about. In one study, individuals were asked to randomly raise either their left or right hand at the sound of a beep, but activation of specific regions of the brain indicated that the brain was making the decision before the participants consciously chose the hand they wanted to raise, as if they weren’t really making the decision and it was merely an illusion. These findings consistently show that the brain can commit to a decision before we are aware of choosing, which raises the question: which part of the brain is doing the choosing? Most scientists and neurologists agree that the cerebral cortex gives rise to human intelligence, setting us apart from other species. After all, man can build complex machines and societal structures, and has an advanced capacity for emotion and thought that other species on the planet have only a primitive capacity for. It is also believed that this may be the center of our consciousness, the active, aware part of our brains.

Alan Turing was a famous computer scientist who worked in the 1930s and 1940s to create the field of computer science, formalizing concepts such as the algorithm and computation. In the 1930s he theorized an abstract machine, now known as the Turing machine and considered the basis for the modern theory of computation, which laid the foundation for the digital computers we have today; Turing worked on several of the early physical machines himself. During World War II he significantly improved the capability of British and Allied intelligence operations to decrypt Nazi communications by automating much of the process, replacing human cryptanalysts with machines (Muggleton, 2014). After the war Turing began designing a logical computer, which could have become the first digital computer, but he was rebuffed by his colleagues. Turing then went on to Manchester University, where he helped lay the foundation for the field of artificial intelligence.

John Conway, Professor of Mathematics at Princeton University, is known by computer scientists and mathematicians alike for his work in game theory. Conway addressed a problem of self-replication postulated by John von Neumann in the 1940s by creating a digital cellular automaton called the Game of Life, published in the October 1970 issue of Scientific American (Gardner, 1970). The game is implemented as a computer simulation. Typically starting with a rectangular grid, the player marks cells as one of two states, either alive or dead. The simulator then applies a few very basic rules at each step, simulating overpopulation, underpopulation, and reproduction. The result was astonishing: a computationally cheap and efficient evolutionary model that could be run on a standard desktop. Because the rules are deterministic, the simulation can be paused, saved, resumed, and restarted, always yielding the same results. I propose a hypothetical: what if it were possible to build a computational model that simulated our universe down to the exact superposition of every atom? While that may seem beyond the capabilities of current computing power, quantum mechanics sets the stage for revolutionary advances in computing efficiency.
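The rules Conway published are simple enough to sketch in a few lines of Java; this is an illustrative implementation of the standard rules, not Gardner’s or Conway’s original code. A live cell survives with two or three live neighbors, dying otherwise of underpopulation or overpopulation, and a dead cell with exactly three live neighbors comes to life:

```java
public class GameOfLife {
    // Applies Conway's rules to produce the next generation of a grid
    // where true = alive, false = dead (cells outside the grid are dead).
    static boolean[][] step(boolean[][] grid) {
        int rows = grid.length, cols = grid[0].length;
        boolean[][] next = new boolean[rows][cols];
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                int neighbors = 0;
                for (int dr = -1; dr <= 1; dr++)
                    for (int dc = -1; dc <= 1; dc++)
                        if ((dr != 0 || dc != 0)
                                && r + dr >= 0 && r + dr < rows
                                && c + dc >= 0 && c + dc < cols
                                && grid[r + dr][c + dc])
                            neighbors++;
                // underpopulation (<2) and overpopulation (>3) kill a live cell;
                // reproduction brings a dead cell to life with exactly 3 neighbors
                next[r][c] = grid[r][c] ? (neighbors == 2 || neighbors == 3)
                                        : neighbors == 3;
            }
        }
        return next;
    }

    public static void main(String[] args) {
        // a "blinker": three live cells in a row oscillate between
        // horizontal and vertical orientations on every step
        boolean[][] grid = new boolean[5][5];
        grid[2][1] = grid[2][2] = grid[2][3] = true;
        grid = step(grid);
        System.out.println(grid[1][2] && grid[2][2] && grid[3][2]); // prints true
    }
}
```

Because step is a pure function of the previous grid, pausing, saving, and resuming the simulation always yields identical results, exactly the determinism described above.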

In classical computing everything breaks down into binary: the JPEG image you save from your phone or digital camera on a family vacation, a popular song a young teenager downloads from iTunes, the ASCII or Unicode text you enter into a word document, even the buttons you press on your keyboard all break down into binary that is transmitted and interpreted by the various components of your hardware. Binary is one of the most basic ways of representing information digitally. A single bit is either a 0 or a 1 and is handled by a semiconducting transistor; think of a light switch that is either on or off. More transistors on a computer chip thus mean more memory and processing power. A quantum bit, or qubit, however, can be a 0, a 1, or a superposition of both! Basically, we do not know the state of such a system until we observe it, collapsing the wave function. This has profound implications for cryptography, concurrency, robotics, and artificial intelligence. The media has been reporting on cars which can drive for you through the use of a global positioning system, and when computer-controlled enemies or “bots” attack you in video games, they usually perform pathfinding to get from point A to point B. For a computer to find its way out of a maze it must explore every possible route, reaching occasional dead ends, until it finds the correct one, and it may continue searching if it needs the shortest possible path rather than just the first one it finds. These tasks can get even more fundamentally complicated with problems such as the traveling salesman. Because of superposition, scientists believe it is theoretically possible to develop computer chips that effectively explore many paths at the same time and find solutions to complex problems far more efficiently.
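On the classical side of that comparison, the maze search described above is typically implemented with breadth-first search, which explores routes level by level and is therefore guaranteed to reach the exit first via a shortest path. Here is a small illustrative sketch in Java; the maze format and method name are my own, not from any particular game engine:

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Queue;

public class MazeSolver {
    // Breadth-first search over a grid maze: '#' = wall, '.' = open.
    // Returns the number of steps on a shortest path, or -1 if unreachable.
    static int shortestPath(char[][] maze, int sr, int sc, int er, int ec) {
        int rows = maze.length, cols = maze[0].length;
        int[][] dist = new int[rows][cols];
        for (int[] row : dist) Arrays.fill(row, -1); // -1 marks unvisited cells
        Queue<int[]> queue = new ArrayDeque<>();
        dist[sr][sc] = 0;
        queue.add(new int[]{sr, sc});
        int[][] moves = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
        while (!queue.isEmpty()) {
            int[] cur = queue.poll();
            if (cur[0] == er && cur[1] == ec) return dist[er][ec];
            for (int[] m : moves) {
                int nr = cur[0] + m[0], nc = cur[1] + m[1];
                if (nr >= 0 && nr < rows && nc >= 0 && nc < cols
                        && maze[nr][nc] == '.' && dist[nr][nc] == -1) {
                    dist[nr][nc] = dist[cur[0]][cur[1]] + 1;
                    queue.add(new int[]{nr, nc});
                }
            }
        }
        return -1; // every route was explored and none reached the exit
    }

    public static void main(String[] args) {
        char[][] maze = {
            "....".toCharArray(),
            ".##.".toCharArray(),
            "....".toCharArray(),
        };
        // from the top-left (0,0) to the bottom-right (2,3)
        System.out.println(shortestPath(maze, 0, 0, 2, 3)); // prints 5
    }
}
```

Note that this classical search visits cells one at a time; the quantum speculation in the text is about exploring many such routes in superposition rather than sequentially.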

Will Wright is a game designer and programmer who founded Maxis, now a subsidiary of Electronic Arts. Wright designed and programmed his trademark simulation games: SimCity, The Sims, SimAnt, and others. While initially thought to be practical only for government and educational purposes, many of his creations have gone on to become world-renowned classics in gaming and entertainment. The Sims lets the player control digital people in a simulation of everyday life, choosing what they wear, who their friends are, and even what careers they have. The game also lets you adjust how much autonomy the Sims have over their lives and whether or not they can make their own decisions. Wright later created a more ambitious simulation of life called Spore, in which you follow the evolutionary path of an organism from single-celled primordial origins to advanced, multicellular, intergalactic societies. Several of his creations are personal favorites of mine; his games will never get old to me. Wright believes it is possible that we are living in a simulation ourselves and do not even know it ("Do We Have Free Will?", 2013).

The free will debate has always been controversial and is likely to remain so for quite some time. Many aspects of the subject conflict with one another, making it hard to reach a standard theory or model, let alone a scientific consensus. It is also possible that the truth is simply beyond our comprehension or our ability to perceive logically. Religion and science have both been searching for the answer and are likely to continue doing so. Technology has contributed a great deal to our understanding of the nature of the mind and may one day solve the hard problem of consciousness. Life is the universe's most natural personification, yet it forever traps us in ephemeral bodies; as Carl Sagan said, "we are a way for the cosmos to know itself." Whether humans will ever transcend their physical limitations and gain a complete understanding of their own existence is a matter of debate. It will probably be a long while before the human capacity for understanding reaches that point, but I think it is inevitable.


Baumeister, R. F. Free Will in Scientific Psychology. Perspectives on Psychological Science, 3, 14-19. Retrieved June 22, 2014.

Do we have free will? (2013). [Television series episode]. In Through the Wormhole. Discovery.

English, E. S. (1948). Holy Bible, containing the Old and New Testaments, authorized King James version; with notes especially adapted for young Christians (Pilgrim ed.). New York: Oxford University Press.

Gardner, M. (1970). Mathematical Games: On Cellular Automata, Self-Reproduction, the Garden of Eden, and the Game "Life". Scientific American.

Muggleton, S. (2014). Alan Turing and the Development of Artificial Intelligence. AI Communications, 27, 3-10. Retrieved June 30, 2014.

Owen, A., Schiff, N., & Laureys, S. (2009). A New Era of Coma and Consciousness Science. Progress in Brain Research, 177, 399-411. Retrieved July 27, 2014.

Prakashananda, S. (1921). The Inner Consciousness, How to Awaken and Direct It. San Francisco, CA: Vedanta Society of S. F.

Saha, I. Artificial Neural Networks in Hardware: A Survey of Two Decades of Progress. Neurocomputing, 74, 239-255. Retrieved June 22, 2014.

Saracho, O. N. Theory of Mind: Understanding Young Children's Pretence and Mental States. Early Child Development and Care. Retrieved May 30, 2014.

Schopenhauer, A., & Payne, E. F. J. (1966). The World as Will and Representation. New York: Dover Publications.

Spinoza, B. d., & White, W. H. (1923). Ethics, Demonstrated in Geometrical Order (4th ed.). London: Oxford University Press.

Security of Electronic Communications

One of the biggest problems facing software and technology companies, as well as all major financial institutions, today is the security and authenticity of electronically transmitted communications and data. When evidence of phone hacking mounted around Piers Morgan in early 2014, it was revealed that access to victims' voicemail recordings was easily gained because the victims had simply never changed the default password (Spark, 2014). Why, then, do people so often neglect and undermine the importance of securing their communications, and what are some ways to address this? These are important technical, cognitive, and psychological questions that must be answered in order to improve the security of databases, email, networks, and data transmission. Doing so requires not only that engineers innovate and improve the encryption methods and techniques used in these systems, but also that people, including the ordinary layman, change how they perceive and appraise the problem.

During World War II the Germans used the Enigma machine to encrypt nearly all military communications, until Alan Turing and his colleagues at Bletchley Park built electromechanical machines, the Bombes, to automate much of the decryption process. In doing so Turing laid foundations of computer science and artificial intelligence, later positing the noteworthy Turing test as a measure of a machine's intelligence. Every time you make a purchase at Amazon or send a message on Facebook or Twitter, your information is bounced between several servers, stored in databases on remote computers, and sometimes intercepted, even by the National Security Agency, in offices and buildings occasionally not even in the same country as you. Merely opening an email attachment can compromise all of the data on your computer, as attachments can easily carry Trojans and other malware that take over your machine, control system processes, or scan for files containing credit card numbers and upload them back to the intruder. Even if you consider yourself a modern Luddite of sorts, there is very little hope of escaping the all-encompassing technology of the digital age, unless of course you don't mind never holding a driver's license or taking out a loan for a house, a car, or an education.
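To give a flavor of how Enigma scrambled text, here is a toy single-rotor cipher in Python. This is my own illustrative sketch: it borrows the wiring of the historical Enigma rotor I, but omits the plugboard, reflector, and multiple stepping rotors that gave the real machine its strength, so it is far weaker than actual Enigma:

```python
import string

ALPHABET = string.ascii_uppercase
# Substitution wiring of the historical Enigma rotor I.
ROTOR = "EKMFLGDQVZNTOWYHXUSPAIBRCJ"

def toy_rotor_encrypt(plaintext):
    """Encrypt by passing each letter through a rotor that steps once per character."""
    out = []
    for offset, ch in enumerate(plaintext.upper()):
        if ch not in ALPHABET:
            out.append(ch)          # pass punctuation/spaces through unchanged
            continue
        shifted = (ALPHABET.index(ch) + offset) % 26  # rotor advances each step
        out.append(ROTOR[shifted])
    return "".join(out)

def toy_rotor_decrypt(ciphertext):
    """Invert the rotor substitution, then undo the per-character rotation."""
    out = []
    for offset, ch in enumerate(ciphertext.upper()):
        if ch not in ALPHABET:
            out.append(ch)
            continue
        shifted = ROTOR.index(ch)
        out.append(ALPHABET[(shifted - offset) % 26])
    return "".join(out)
```

Because the rotor steps between letters, repeated plaintext letters encrypt differently each time; it was that polyalphabetic property, multiplied across three or four rotors, that made Enigma so hard to break by hand.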

Heartbleed was a major security vulnerability in OpenSSL, a popular open source secure-socket library, which could be used to bypass the software's authenticity and security measures and was, in isolated instances, actively exploited. A Pew Research poll indicated that only about 60% of adult internet users had heard of Heartbleed, and, even worse, that only 39% took additional steps to secure their online accounts (Rainie, 2014). With warnings about Heartbleed going largely unheeded, Shellshock, a vulnerability in Bash, a command shell used on Mac and Linux systems, was then discovered, with early estimates of 500 million affected computers (Lee, 2014). The implications of data security for our jobs, our lives, and basically our very way of life are evident. But what can be done to address these issues? Examples such as OpenSSL may actually be part of the solution and not just the problem. Open source software grants users special privileges, including the ability to read the source code without extensive reverse engineering, and sometimes even the right to redistribute that code, with certain caveats. For this reason the bug was known not only to attackers but also to other users of the software, allowing the issue to be addressed much more quickly. With proprietary software this may not be the case; by the time the developers are informed, it could be too late. We can also look at companies like Oracle, which are making a point of improving security in Java-based applications. Their approach lately has been to promote widespread adoption of new Java versions, which, thanks to new features, has been largely embraced by the community, with Java 8 adoption up nearly 20% over previous releases (Oracle, 2014). Not only are they correcting the issues, they are giving users incentives to install and adopt these more secure versions. The now-obsolete Windows XP operating system is the epitome of where this methodology could be applied, as it is still used in many ATMs today (Pagliery, 2014). These machines have proven extremely susceptible to fraudulent attacks, some vulnerable even to hacking from a cellphone.

New advances in physics are also creating promising solutions to encryption as well as to classical computing problems. Recent research has explored quantum optical encryption schemes, though lasers have also been shown capable of defeating some of them (Berridge, 2010). The massive parallelism that quantum mechanics is postulated to allow could also mean near-instantaneous recovery of encryption keys. Any person, organization, or corporation that develops the first practical quantum computer could, in principle, decrypt messages protected by today's encryption standards at will. It is no surprise, then, that the National Security Agency, NASA, Google, Microsoft, and many other tech titans are clamoring to build these machines.
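The asymmetric schemes at risk can be illustrated with textbook RSA. The sketch below is a deliberately toy-sized, insecure example of my own (real deployments use keys of thousands of bits plus padding); its security rests on the difficulty of factoring the modulus n, which is exactly what Shor's algorithm on a quantum computer would make easy:

```python
# Textbook RSA with toy primes -- insecure, for illustration only.
p, q = 61, 53
n = p * q                  # public modulus (3233); factoring n breaks the key
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # private exponent via modular inverse (Python 3.8+)

def encrypt(m):
    """Anyone holding the public key (n, e) can encrypt a message m < n."""
    return pow(m, e, n)

def decrypt(c):
    """Only the holder of the private exponent d can recover m."""
    return pow(c, d, n)
```

A classical attacker must factor n to derive d; with these toy primes that is instant, but for a 2048-bit n it is infeasible classically, while a large fault-tolerant quantum computer could do it in polynomial time.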

Another important aspect of this problem is the psychological weight that individuals give to security and privacy. The most obvious issue is that for passwords to be secure they also have to be somewhat hard to remember, and people usually maintain not one but ten or more networked accounts on the internet: email, a student account, online banking, social accounts, and much more. Acronyms and anagrams can be applied as mnemonic devices for remembering passwords, but users generally prefer convenience. Consistent with popular belief, "password" sat near the top of the list of the most common passwords of 2013, edged out of first place only by "123456" (Ngak, 2014). Many companies have begun to address this part of the problem in new ways, such as Apple, whose iOS devices can be locked with facial recognition (Whitney, 2013). Beyond facial recognition, research is bringing solutions such as fingerprint scanning, retina scanning, DNA tests, and other forms of biometric identification and authentication, some old and some new. And one of the most widely used methods of keeping bots and spam off websites has been the CAPTCHA, distorted text designed to defeat optical character recognition.
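On the engineering side, the standard mitigation is never to store passwords in plain text: reject the most common choices, salt each password, and run it through a slow key-derivation function. Here is a minimal sketch using only Python's standard library (the tiny deny-list and the iteration count are illustrative choices of mine, not a mandated standard):

```python
import hashlib
import os
import secrets

# A tiny sample of a real common-password deny-list.
COMMON = {"password", "123456", "qwerty"}

def hash_password(password):
    """Prepare a password for storage: per-user salt plus key stretching."""
    if password.lower() in COMMON:
        raise ValueError("password is too common")
    salt = os.urandom(16)  # unique salt defeats precomputed rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, digest):
    """Re-derive the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return secrets.compare_digest(candidate, digest)
```

The 200,000 PBKDF2 iterations make each guess expensive for an attacker who steals the database, while costing a legitimate login only a fraction of a second.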

The problems of privacy and security have been perennial and persistent in many contexts, not technology alone. Technology and science are not only expanding these issues but actively providing new and innovative solutions. The development of quantum computers draws many parallels to Alan Turing's codebreaking machines, with similarly grave implications. Millions of people are left vulnerable by security flaws and subject to attack, fraud, and other harm every single day. The problems of privacy and security are thus important matters that are frequently underestimated, and they must be taken more seriously and researched more thoroughly.


Berridge, E. (2010, September 1). Quantum encryption defeated by lasers. Retrieved October 17, 2014.

Lee, D. (2014, September 25). Shellshock: 'Deadly serious' new vulnerability found. Retrieved October 17, 2014.

Ngak, C. (2014, January 21). The 25 most common passwords of 2013. Retrieved October 19, 2014.

Oracle Highlights Continued Java SE Momentum and Innovation at JavaOne 2014. (2014, September 29). Retrieved October 17, 2014.

Pagliery, J. (2014, March 4). 95% of bank ATMs face end of security support. Retrieved October 17, 2014.

Rainie, L., & Duggan, M. (2014, April 30). Heartbleed's Impact. Retrieved October 17, 2014.

Spark, L. (2014, February 14). CNN host Piers Morgan questioned in UK hacking investigation. Retrieved October 10, 2014.

Whitney, L. (2013, December 20). How to use facial recognition on your iPhone. Retrieved October 19, 2014.