My Favorite Mathematicians

To culminate the mathematics series, it is only appropriate to examine the lives and contributions of those who molded the discipline into what it is today. Below are details regarding some of the most influential, and my personal favorite, mathematicians throughout history.

Archimedes of Syracuse (c. 287 BC – 212 BC)

Image Courtesy of The 100 Greatest Scientists

There exists a number referred to as Archimedes’ Constant, and you certainly know of it: pi. Archimedes, a Greek, was the first to rigorously approximate the famous irrational number that we know as the ratio of a circle’s circumference to its diameter. He achieved this by inscribing polygons within a circle and circumscribing polygons around it, with ever-increasing numbers of sides, and observing that the ratio of each polygon’s perimeter to the circle’s diameter approached 3.14159…. Indeed, a circle can be regarded as the limit of a polygon with infinitely many sides. Archimedes also made several contributions to fluid mechanics, inventing such devices as the Archimedes screw and developing his eponymous buoyancy principle. Little is known about his personal life, but according to legend, he was killed by a Roman soldier who was enraged that the mathematician remained absorbed in his work with an ongoing war in his midst.
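The doubling idea can be sketched in a few lines of Python. Starting from a hexagon inscribed in a unit circle (side length exactly 1) and applying the standard side-doubling identity s_2n = sqrt(2 − sqrt(4 − s_n^2)), the perimeter-to-diameter ratio climbs toward pi from below. This is a sketch of the method’s spirit, not Archimedes’ actual computation, which also used circumscribed polygons to bound pi from above:

```python
from math import sqrt

# Inscribed regular polygons in a unit circle: start with a hexagon
# (side length 1) and double the number of sides ten times.
n, s = 6, 1.0
for _ in range(10):                        # 6 -> 12 -> ... -> 6144 sides
    n, s = 2 * n, sqrt(2 - sqrt(4 - s * s))
print(n, n * s / 2)                        # perimeter / diameter -> 3.14159...
```

After ten doublings, the 6144-gon already agrees with pi to about seven decimal places.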

Fibonacci (1170 – c. 1250)

Image Courtesy of Famous Mathematicians

Relatively little mathematical progress occurred in Europe during the Middle Ages, but the famous Fibonacci Sequence was popularized during this time by the Italian mathematician of the same name. As you may know, each term of the Fibonacci Sequence is the sum of the two preceding terms:

1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, …

What is remarkable about the Fibonacci Sequence lies in the ratios between consecutive terms (2/1, 3/2, 5/3, 8/5, 13/8, etc.). As the terms increase, these ratios approach the Golden Ratio, denoted by the Greek letter phi and equal to half the sum of 1 and the square root of 5 (approximately 1.618). Listing every occurrence of the Golden Ratio in nature would be extensive, so I link to a site that describes a few here. You will undoubtedly find it intriguing how one quantity is so ingrained in botany, anatomy, cosmology, biochemistry, etc. It is even said that the Golden Ratio is one of the foundational building blocks of life itself.
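The convergence is quick to verify in Python:

```python
# Ratios of consecutive Fibonacci terms converge to phi = (1 + sqrt(5)) / 2.
a, b = 1, 1
for _ in range(20):
    a, b = b, a + b
print(b / a, (1 + 5 ** 0.5) / 2)   # both ~ 1.61803...
```

After only twenty terms, the ratio already matches phi to about eight decimal places.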

Leonhard Euler (1707 – 1783)

Image Courtesy of Wikipedia

Arguably the greatest mathematician of all time, Euler, a Swiss, made numerous contributions to the fields of physics, astronomy, logic, and topology, developing infinitesimal calculus, graph theory, and several other paradigms. He is most famous for two formulae, both of which are popularly dubbed “Euler’s Identity”:

  • V – E + F = 2
    • For a 3D solid, the number of vertices minus the number of edges plus the number of faces equals 2 (e.g., in a cube, V = 8, E = 12, F = 6). The resulting value of 2 is unique to objects that are topologically equivalent to the sphere. For other objects, in three and in higher dimensions, the value can vary.
  • e^(j*theta) = cos(theta) + j*sin(theta)
    • Note: I use the electrical engineer’s convention for the imaginary number (instead of i, which already signifies current).
    • The more famous of the two “Euler Identities” constitutes what is regarded as the most “beautiful” formula in mathematics: e^(j*pi) + 1 = 0. Having a role in every discipline from differential equations to Fourier analysis, this formula alone bestows Euler with his renown.
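Both sides of the second identity are easy to compare numerically with Python’s cmath module (which uses j for the imaginary unit, matching the electrical engineer’s convention):

```python
import cmath

# Check e^(j*theta) = cos(theta) + j*sin(theta) at an arbitrary angle,
# then evaluate the "most beautiful formula," e^(j*pi) + 1.
theta = 0.75
lhs = cmath.exp(1j * theta)
rhs = cmath.cos(theta) + 1j * cmath.sin(theta)
print(abs(lhs - rhs))                  # ~ 0
print(cmath.exp(1j * cmath.pi) + 1)    # ~ 0 (tiny floating-point residue)
```

The residue in the last line is on the order of 1e-16, the limit of double-precision arithmetic.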

Alan Turing (1912-1954)

Image Courtesy of Biography

Turing, a British mathematician, had his life depicted in the 2014 film The Imitation Game. As the film illustrates, Turing is most famous for working as a cryptanalyst at Bletchley Park during World War II, where he played a key role in cracking the Nazi Enigma code (initially regarded as “unbreakable”) and greatly shortening what would most likely have been a much longer war. In his 1950 paper “Computing Machinery and Intelligence,” Turing conceived what is now called the “Turing Test,” the standard for measuring artificial intelligence. Far more impactful, however, was his 1936 paper “On Computable Numbers, with an Application to the Entscheidungsproblem,” regarded as the founding publication of computer science. Turing introduced the notion of a “universal machine,” laying the foundation for the modern digital computer; moreover, he demonstrated that there is no general method for determining whether a given machine will ever halt, proving the unsolvability of the Halting Problem and, with it, of the Entscheidungsproblem (“decision problem”) that David Hilbert had posed eight years prior. Turing’s life was tragically cut short when he committed suicide on June 7, 1954, following the humiliation of mandatory hormonal injections to “treat” his homosexuality, which was illegal at the time in Great Britain.

Boolean Algebra, Part 2

In last week’s post, we addressed the basic operations of Boolean algebra—AND, OR, XOR, NOT—that lay the foundation for modern digital logic. In this post, we will extend this repertoire to include more obscure operations that are nevertheless ubiquitous in everyday conversation and thought.

De Morgan’s Laws

Before these operations are reviewed, let us consider a pair of identities involving the basic operations, formulated by British mathematician Augustus De Morgan (1806-1871). Supposing there exist conditions A and B, the following statements hold true:

  • NOT (A OR B) = NOT A AND NOT B
  • NOT (A AND B) = NOT A OR NOT B

In the context of language, De Morgan’s Laws inherently make sense. If it is not raining or snowing, then it is both not raining and not snowing. Likewise, if it is not raining and snowing simultaneously, we can conclude that either it is not raining or it is not snowing (or both).
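Because there are only four possible truth assignments, both laws can be checked exhaustively in Python:

```python
from itertools import product

# Verify both De Morgan's Laws over all four truth assignments of (A, B).
for A, B in product([False, True], repeat=2):
    assert (not (A or B)) == ((not A) and (not B))
    assert (not (A and B)) == ((not A) or (not B))
print("De Morgan's Laws hold in all four cases")
```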

Implication (->)

The Boolean implication operation takes the form of an “if-then” statement, much like a scientific hypothesis. To understand the nature of the implication, let us put aside the harmlessness of liquid precipitation and assume that an umbrella is a necessity to go outside during a rainstorm. Consider the following four statements:

  • IF it is raining THEN I will take an umbrella.
  • IF it is NOT raining THEN I will NOT take an umbrella.
  • IF it is NOT raining THEN I will take an umbrella.
  • IF it is raining THEN I will NOT take an umbrella.

These four statements comprise all of the possible situations in which I could be: it is either raining or it is not, and I will either take an umbrella or I will not. The first two statements are manifestly sensical and can thus be considered TRUE; I need an umbrella if it is raining and do not need one otherwise.

The third and fourth statements are less trivial. If it is not raining, then although I do not need an umbrella, there is no harm in taking one anyway. On the other hand, if it is raining and I neglect taking an umbrella, I will be (hypothetically) harmed as a result. Thus, while the third statement is TRUE, the final statement is FALSE.

Image Courtesy of Richard Coyne

The results convey that the implication relates a specific case to a general one. Consider the general case B and the specific case A, as illustrated in the diagram above. If A is true (1), then B must be true (1) as a result; it is impossible for the specific case to hold without the general case. If A is false (0), then B may be either true (1) or false (0). In the example, the presence of rain constitutes the specific case while my taking an umbrella constitutes the general case, for I am able to take my umbrella during any type of weather. The only case that does not make sense is when the presence of rain (1) implies that I do not take my umbrella (0). In symbols:

  • 0 -> 0 = 1
  • 0 -> 1 = 1
  • 1 -> 0 = 0
  • 1 -> 1 = 1

Interestingly, if A -> B, the implication can be expressed as NOT A OR B, for the only false statement occurs when A is true (1) and B is false (0), consistent with the behavior of the OR and NOT operations.

Biconditional (<=>)

A stricter operation than the implication is the biconditional, characterized as an “if and only if” statement. If I say, “I will take my umbrella if and only if it is raining,” the generality of the umbrella condition is annihilated. That is, if it is not raining, I do not take my umbrella, and if it is raining, I take it; no other cases can be regarded as true. Symbolically, the biconditional deduces the following:

  • 0 <=> 0 = 1
  • 0 <=> 1 = 0
  • 1 <=> 0 = 0
  • 1 <=> 1 = 1

If A <=> B, then A must imply B and B must imply A. The biconditional can thus be written as (A -> B) AND (B -> A). One condition no longer supersedes the other in generality; there is a one-to-one correspondence between event A and event B. If one condition is met, the other must also be met; conversely, failure to meet one condition directly implies failure to meet the other. Thus, the statement, “I will take my umbrella if and only if it is not raining,” no longer makes sense and is FALSE.
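Both operations reduce to the basics, as described above: the implication is NOT A OR B, and the biconditional is the conjunction of the two implications. A short sketch (the helper names implies and iff are my own):

```python
from itertools import product

def implies(a, b):
    # A -> B is equivalent to (NOT A) OR B
    return (not a) or b

def iff(a, b):
    # A <=> B is (A -> B) AND (B -> A)
    return implies(a, b) and implies(b, a)

# Print both truth tables side by side.
for A, B in product([0, 1], repeat=2):
    print(A, B, int(implies(A, B)), int(iff(A, B)))
```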


Boolean Algebra

Arithmetic, algebra, geometry, calculus, and statistics are the five main pillars of mathematics taught in curricula from grade school through postsecondary education. One of the more under-appreciated domains of mathematics is therefore logic, whose formal syntax is rarely examined even though its semantics are ubiquitous in every aspect of society. Humans have reasoned logically since the dawn of humanity itself, but the foundations for modern digital logic were laid by English mathematician George Boole (1815-1864) in his 1854 work The Laws of Thought. An understanding of Boolean logic, and by extension Boolean algebra, is essential in understanding the workings of the modern digital technology that is widely taken for granted.

Image Courtesy of Britannica

To maintain quantitative structure, True and False, the basic units of logic, are assigned values of 1 and 0, respectively. We will now examine the fundamental operations of Boolean algebra.

AND [*]

By definition, a statement containing the conjunction “and” is true only when both conditions in question are true. If I say, “I am at home AND I am in my dorm,” then the entire statement is false, even though the first condition (my being at home) is true. If the truth or falsity of each condition is denoted by 1 or 0, respectively, the following holds:

  • 0 AND 0 = 0
  • 0 AND 1 = 0
  • 1 AND 0 = 0
  • 1 AND 1 = 1

It is clear that the Boolean AND operation can be modeled as a traditional multiplication (*), for any false condition is said to annihilate the entire statement to 0 (False).

Inclusive OR [+]

In contrast with the definition of AND, an inclusive OR statement is true when either of the conditions in question is true. “Inclusive” means that the statement is still regarded as true if both conditions are true. This deviates from the everyday English usage of the word “or,” which is typically understood in the exclusive sense. As expressed below, an inclusive OR statement is only false when both conditions are false:

  • 0 OR 0 = 0
  • 0 OR 1 = 1
  • 1 OR 0 = 1
  • 1 OR 1 = 1

Boole identifies the inclusive OR operation with the arithmetic operation of addition (+). This differs from the traditional definition of addition only in the last statement (1 + 1 = 1).

Exclusive Or (XOR)

As mentioned in the previous section, the exclusive OR operation excludes from truth the statement where both conditions are true:

  • 0 XOR 0 = 0
  • 0 XOR 1 = 1
  • 1 XOR 0 = 1
  • 1 XOR 1 = 0

NOT [~]

Trivially, the NOT operation simply reverses the truth value of a singular (non-combinational) statement. Indeed, something that is “not false” is “true,” and something that is “not true” is “false.”

  • ~0 = 1
  • ~1 = 0

It is important to note that there exists an order of operations in Boolean algebra. Just as traditional algebraic expressions with several terms are evaluated in this order–

  1. Parentheses/Brackets/Grouping Symbols
  2. Exponentiation
  3. Multiplication and Division
  4. Addition and Subtraction–

the Boolean operations are evaluated in this order–

  1. Parenthetical Terms/NOT (corresponds to grouping of terms)
  2. AND (similar to multiplication)
  3. OR/XOR (similar to addition)

For example, the expression 1 AND (0 XOR 1) OR ~1 OR (1 AND ~0) AND ~~~1 is evaluated by the following procedure:

  1. Separately evaluate expressions in parentheses and simplify negated (NOT) terms: 1 AND 1 OR 0 OR 1 AND 0
  2. Evaluate AND expressions: 1 OR 0 OR 0
  3. Evaluate OR expressions (although there is a triple instead of a pair of conditions, it remains that any true condition yields a true statement): 1
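The hand evaluation above can be reproduced with Python’s bitwise operators (& for AND, | for OR, ^ for XOR), with a small helper standing in for NOT so every value stays in {0, 1}:

```python
def NOT(x):
    # Boolean negation on 0/1 values
    return 1 - x

# 1 AND (0 XOR 1) OR ~1 OR (1 AND ~0) AND ~~~1, fully parenthesized
# to mirror the Boolean order of operations described above.
expr = (1 & (0 ^ 1)) | NOT(1) | ((1 & NOT(0)) & NOT(NOT(NOT(1))))
print(expr)   # 1
```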

In next week’s post, I will detail more involved and intriguing operations as well as the innumerable (indeed, uncountably infinite) applications of Boolean logic that permeate modern life.

To Infinity…And Beyond?

With the COVID-19 pandemic severely disrupting my typical research habits, I shall no longer tailor subsequent posts around the Smart Dust project and the mathematics it entails. While I remain very much engaged with the project, it is only appropriate to revert to the original intention of this blog: the conveyance of my lifelong passion for teaching theoretical mathematics. Today, I shall write about infinity and its several variations.

Ever since the birth of numbers themselves, humans have struggled to explain the nature of infinity. What does it mean for something to “go on forever?” Does the universe extend infinitely in all directions? Can one break an object into infinitely small (infinitesimal) pieces? While physicists generally answer the second question in the negative (matter is not infinitely divisible) and the first remains an open cosmological question, the theoretical nature of numbers legitimizes the concept in counting. Indeed, our decimal number system is constructed so that one is able to begin counting “1, 2, 3, 4, 5, 6,…” and continue indefinitely. Because each number can be assigned an ordinal position (1 is the “1st number,” 2 is the “2nd number,” 3 is the “3rd number,” etc.), we say that the natural numbers—or whole numbers if 0 is included—are countably infinite. That is, although it would take an eternity, one is hypothetically able to enumerate the entire set of natural (or whole) numbers in a list.

This raises the question, “Which is larger: the set of natural numbers or the set of even numbers?” One may be quick to assert that because the even numbers comprise every other natural number (2, 4, 6, 8,…), there are half as many even numbers as natural numbers. However, notice how each even number can be assigned an ordinal position just like the natural numbers:

  • 2 is the “1st number”
  • 4 is the “2nd number”
  • 6 is the “3rd number”
  • 8 is the “4th number,” etc.
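The assignment above is exactly the pairing n -> 2n, which matches every natural number with exactly one even number:

```python
# Pair each natural number n with the even number 2n; no naturals and
# no evens are left over, so the two sets have the same cardinality.
pairs = [(n, 2 * n) for n in range(1, 6)]
print(pairs)   # [(1, 2), (2, 4), (3, 6), (4, 8), (5, 10)]
```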

Since the even numbers are also countably infinite, it is only reasonable to conclude that the even numbers and the natural numbers have the same number of elements (technically, that they are equivalent in cardinality). Assigning ordinal positions in more complex ways, one can deduce that the following sets have the same cardinality as the natural numbers—i.e., are equal in size—and thus are also countably infinite:

  • Integers (natural numbers plus 0 and negatives)
  • Rational numbers (positive and negative ratios of natural numbers)
  • Algebraic numbers (numbers that are solutions to algebraic equations, e.g., the square root of 2 is a solution to x^2 – 2 = 0)

The above observations are attributed to the German mathematician Georg Cantor (1845-1918).

Image Courtesy of Wikipedia

In determining whether the real numbers (all rational and irrational numbers) are also countably infinite, Cantor established his diagonal method, which takes the form of a reductio ad absurdum proof (proof by contradiction) and is illustrated below:

Cantor’s Diagonal Method

Suppose that one has compiled a comprehensive enumeration of all real numbers and has listed some below (recall that irrational numbers are non-terminating, non-repeating decimals):

  • 0.738972491301701703724… is the “1st number”
  • 0.218390218432041809123… is the “2nd number”
  • 0.182043928417132908121… is the “3rd number”
  • 0.213897412912010901010… is the “4th number”
  • 0.631913630329878119937… is the “5th number,” etc.

Now, we form a number by taking the nth decimal digit of the nth number. Thus, the first digit of the number is the first decimal digit of the first enumerated number; the second digit of the new number is the second decimal digit of the second enumerated number; and so on. There is a “diagonal” of sorts that is constituted:

  • 0.738972491301701703724…
  • 0.218390218432041809123…
  • 0.182043928417132908121…
  • 0.213897412912010901010…
  • 0.631913630329878119937…

Using this process, we deduce the decimal number 0.71281…. Let us now form a new number by adding 1 to each decimal digit of this diagonal number (wrapping a 9 around to 0): we obtain 0.82392….

Can 0.82392… be in our list? It certainly cannot be the first number, for the 8 in the tenths place differs from the 7 in the first number in the list. It cannot be the second number; the 2 in the hundredths place differs from the 1 in the second number. In fact, the nth digit in the newly formed number differs from the nth digit in the nth enumerated number. Therefore, our seemingly comprehensive list of all real numbers is incomplete, for a number missing from it can always be produced by the method above. As a result, the real numbers exhibit an entirely different level of infinity—i.e., they are uncountably infinite.
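The construction is mechanical enough to script. Using the five sample entries above (the mod-10 wrap handles a diagonal digit of 9, a detail the informal description glosses over):

```python
# A finite sample of the enumeration; Cantor's method builds a number
# guaranteed to differ from the nth entry in its nth decimal digit.
listed = [
    "738972491301701703724",
    "218390218432041809123",
    "182043928417132908121",
    "213897412912010901010",
    "631913630329878119937",
]
diagonal = [int(s[i]) for i, s in enumerate(listed)]   # 7, 1, 2, 8, 1
new_digits = [(d + 1) % 10 for d in diagonal]          # 8, 2, 3, 9, 2
new_number = "0." + "".join(map(str, new_digits))
print(new_number)   # 0.82392
```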

Such “levels” of infinity are quantified in the transfinite numbers symbolized by the Hebrew letter aleph. Aleph-null signifies the countable infinity of the natural numbers (and of additional sets like the rational and algebraic numbers). The uncountable infinity of the reals has cardinality 2 raised to the power aleph-null, which equals aleph-1 if one assumes the continuum hypothesis. In fact, there are infinitely many transfinite numbers (aleph-2, aleph-3, etc.); there exist infinities beyond that of the real numbers, but they encompass a higher mathematics much beyond the scope and demands of this blog.

Time: The Essence (and Nuisance) of Existence

It was my original intention to thoroughly read up on Smart Dust in the past week to prepare for my research and to gain a solid mathematical footing for this post. However, due to several exams and assessments with which I have been occupied, the “time got away from me,” and I was only able to read broad introductory pieces into the history and basic terminology associated with the technology. Thus, it is only fitting to freshly resume my endeavor after Spring Break—and to go into the break discussing time, particularly the calendar anomalies associated with this time of year.

1. Leap Day

This past Saturday was February 29, a day supposedly annexed to the shortest month every four years to account for the inexactness of a 365-day year. I say supposedly because it is popularly believed that any year divisible by 4—alternatively, every presidential election year—is a leap year. However, the algorithm has one caveat that has not been of interest recently but may be of interest to us should we live long enough to experience it:

A. Any year divisible by 4 and not by 100 is a leap year

B. If a year is divisible by 100, it must also be divisible by 400 to be a leap year.
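The two rules combine into a one-line test:

```python
def is_leap(year):
    # Divisible by 4, except century years, which must also be divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print([y for y in (1600, 1800, 1900, 2000, 2100, 2400) if is_leap(y)])
# [1600, 2000, 2400]
```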

Image Courtesy of General Blue

The second provision dictates that while years such as 1600, 2000, and 2400 are leap years like any other, century years not divisible by 400 (e.g., 1800, 1900, 2100, 2200) are not leap years. This was precisely the adjustment made by Pope Gregory XIII in 1582, when the dates of October 5-14 were eliminated to correct the error of the Julian Calendar (the assumption that a leap year should unconditionally occur every four years), which had let the calendar drift 10 days askew. To this day, the Gregorian Calendar forces leap-day babies who live through century years such as 1700, 1800, 1900, etc. to wait 8 years between birthdays instead of the already-intriguing 4. For the historian’s appeal, Thomas Jefferson and William McKinley are the only two U.S. presidents to have been elected in common years (non-leap years), and whoever is sworn in on January 20, 2101 will be the third.

2. Daylight Saving Time

This Sunday, March 8 (unless you are traveling to Hawaii or the non-Navajo part of Arizona for Spring Break), you will experience a 23-hour day, with 1:59 AM changing to 3:00 AM. The agricultural and statistical reasons for Daylight Saving Time (DST) notwithstanding, there is a growing movement to either abolish it (“fall back forever”) or stay in DST for 12 months of the year instead of the traditional 8. Keep in mind that the switch to and from Daylight Saving Time shifts the daily sunrise and sunset times by one hour; one can analyze this in the context of the longest and shortest days of the year (June 21 and December 21, respectively, with the opposite true in the Southern Hemisphere):

Currently, in State College (about 40 degrees N latitude), the Sun rises and sets at approximately 5:30 AM and 9:00 PM, respectively, on the summer solstice (6/21), which is in DST. Without DST, these times would be 4:30 AM and 8:00 PM.

On the winter solstice (12/21), which is in standard time, the Sun rises and sets at approximately 7:30 AM and 5:00 PM. If DST were year-round, these times would be 8:30 AM and 6:00 PM.

Image Courtesy of Business Insider

Image Courtesy of Blue Delaware

Based on this information, what is your opinion of Daylight Saving Time? Would you abolish it in favor of a super-early sunrise and a more modest sunset time in the summer, or would you abolish standard time to obtain an extra hour of daylight in evenings while waking up to darkness in the winter?

3. When Is Easter?

As a Catholic, I regard Easter Sunday as the holiest day of the liturgical year. While it is not until April 12 this spring, the theme of this final pre-Smart Dust post affords the ideal opportunity to discuss the little-known formula by which the date is determined. In 2019, Easter fell on April 21; in 2018, April 1; in 2017, April 16. Next year, it will fall on April 4, 2021. The date seems to constitute a randomly chosen Sunday in March or April without rhyme or reason. However, it is actually derived as follows:

Easter falls on the first Sunday after the first full moon after the Vernal (Spring) Equinox.

For example, this year, the Vernal Equinox—“first day of spring”—is on Thursday, March 19. The next full moon after the equinox occurs on Tuesday, April 7. Easter is thus celebrated on the following Sunday, April 12. If the so-called paschal full moon happens to occur on a Sunday, Easter is placed the following Sunday rather than on that same day.

Due to the quasi-periodic nature of the moon phases, there exists an algorithm to mathematically determine the date of Easter Sunday for any given year. I will not regurgitate the algorithm in this post, but I link to it here and highly encourage you to read about it yourself. As an exercise, use the algorithm to determine Easter Sunday in your birth year (probably 2000 or 2001) and compare your result to a Google search.

Image Courtesy of Active Christianity


Smart Dust for a Quasi-Smart Mathematician

Throughout the next few posts, I will shift away from my grandfather’s anomaly and its applications to ASCII toward a research project that I have just initiated in conjunction with my honors electrical engineering advisor, Dr. Julio Urbina. As you shall notice, this project will be one of significant mathematical rigor and quite a challenge for someone not even halfway through the foundational electrical engineering course, EE 210. However, I decided to take on the project (partly through my research seminar course EE 396) to explore a potentially groundbreaking application of my major (EE) and my lifelong passion (mathematics).

What is Smart Dust?

Smart Dust is the collective name given to microscopic drones that function as both satellites and computers, with the ability to communicate with each other to autonomously accomplish a certain task. In a group, the drones resemble a levitating pile of dust (hence the name), and each device may also act as a single spacecraft. Through programming, one is able to synchronize the drones to move, orient themselves, and utilize their sensors in a desired manner. The technology is not widely recognized (I only became familiar with it this semester), and research on it has not been as extensive as in other EE applications such as quantum computing, radar, electronic circuit design, etc.

Image Courtesy of Medium

At this stage, I have only amassed research and review papers on the topic and have yet to actually read them in detail. Thus, my decision to shift the focus of this blog (while still emphasizing the purely mathematical aspect) was motivated primarily by my desire to engage with the topic every time I have a chance. While my writing may simply comprise a regurgitation of what I have learned (in terms of the mathematical nature of Smart Dust), I shall treat it as an opportunity to teach a broader audience from my experience, and after all, “to teach is to learn twice.”

Mathematical Aspects?

To convey the nature of the beast that awaits me, I have taken excerpts of equations from a Smart Dust review paper (“A review of Smart Dust architecture, dynamics, and mission applications” by Niccolai, et al.) that I plan to read and analyze from now until Spring Break.

Images Courtesy of Niccolai, et al.

From a mere glance, I recognize applications of linear algebra and vector calculus, covered in courses in which I am currently enrolled (MATH 220 and 230H, respectively). However, applying largely theoretical knowledge of these concepts and operations (a result of homework and exam practice light on applications) to a technology of which I know very little will be daunting. Despite this, it is my hope that my passion for mathematics, combined with my pursuit of an all-important application of it (EE), will facilitate my learning of, and eventual experimentation with (per Dr. Urbina’s request), devices that may become ubiquitous yet are currently unheard of among the general public.

Encoding with ASCII (and Extended ASCII)

By its very definition, ASCII is a code system used to express English characters in binary or hexadecimal form. What if one sought an Enigma-like system in which alphanumerics and punctuation are signified by other characters? Could ASCII be used for such a purpose? It is helpful first to re-introduce the ASCII table:

Image Courtesy of Simple Wikipedia

For the following mathematical code systems, it is necessary to take note of the decimal representations of the ASCII characters (e.g., A = 65, d = 100), which are not often practically employed. Indeed, it is usually easier and more useful to learn the hexadecimal (and consequently, binary) representations of the ASCII characters. However, since the aforementioned mathematical quirk arising from the “DeMarchis Riddle” is applicable only in base 10, the base-10 ASCII values should be considered.

Recall the necessary steps to realize the anomaly:

  1. Take any whole number x and subtract the sum of its digits from it [x – digitSum(x)].
  2. Divide that difference by 9 to yield a whole-number quotient [(x – digitSum(x)) / 9].
  3. Add that quotient to the original number to obtain y [x + (x – digitSum(x)) / 9 = y].

For each value of x, there is a corresponding and unique value of y obtained by the following formula:

y(x) = [10x – digitSum(x)]/9
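The formula is straightforward to sketch in Python (digit_sum and y are my own hypothetical helper names):

```python
def digit_sum(x):
    return sum(int(d) for d in str(x))

def y(x):
    # y(x) = (10x - digitSum(x)) / 9; the division always comes out whole,
    # since x - digitSum(x) is a multiple of 9.
    return (10 * x - digit_sum(x)) // 9

print(y(65), chr(y(65)))   # 71 G
```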

To encode a particular ASCII character, simply assign the decimal representation of the character to x and rewrite the character as the one corresponding to the decimal value y. For example, the capital A is signified by 65. Inserting 65 into the y(x) function yields 71, which corresponds in ASCII to the capital G. Using this system, I can encode my full first name as:

Gz?{zt{

In attempting to encode the “t,” we encounter a problem. The traditional ASCII table only consists of decimal codes up to 127, and the 116 code signifying the lowercase t is manipulated to 128, which does not appear in the table above. Thus, to encode characters near the end of the table, it is necessary to employ the extended ASCII table, which encodes 128 additional characters with hexadecimal values 80 through FF. Although the extended table is not uniform among the different operating systems, I shall focus on the Apple (Mac OS) version of the table:

Image Courtesy of ASCII-Table

In the table above, a character’s hex code is signified by the digit of its column followed by the digit of its row. Thus, the recognizable Apple logo is encoded as F0. One disadvantage is that the table requires an intermediate conversion between hexadecimal and decimal, certainly not a trivial process. With the decimal number 128 equal to the hex value 80, I can now express my complete first name with ASCII and extended ASCII characters:

GzÄ{zt{

What is your name expressed in this new code? Of course, any attempted representations of high-value extended ASCII characters (e.g., the degree symbol) are futile, as there is no additional ASCII table for hex values larger than FF; this would require more than one byte per character. I am currently working on a Python program that will output encoded letters for a given input (e.g., a word). Although this particular code system is not the most secure or most difficult to grasp, it illustrates a working combination of both the ASCII system and the intriguing arithmetical anomaly discovered (to our knowledge) by my grandfather little more than five years ago.

0100000101010011010000110100100101001001

It would be no surprise if I had decided to write this entire post in binary digits. In fact, I have been infatuated with binary for nearly six years. In June of 2014 (at a measly 13 years old), I read Code by Charles Petzold (2000), consisting of a hypothetical step-by-step guide to constructing a computer from the logic gates to the operating system. Impressed by Petzold’s simplification of an overwhelmingly complex device and his compelling narrative-like style, I could never seem to put the book down, having read it cover-to-cover nearly ten times as of now. It is what sparked my interest in electrical engineering, and I referenced it multiple times in my application essays for Schreyer. In essence, I am enrolled in this very course—and this very university—partly due to Petzold and his work.

Image Courtesy of Goodreads

Once I read Code, I was driven to learn more about the base-2 number system (binary), the quintessential facet of digital computing. I began by practicing binary-to-decimal conversion, the process of rendering a familiar base-10 number (e.g., 25) into its binary form (11001 in the case of 25). In the base-10 decimal system—taken for granted to such an extent that alternative systems seem almost inconceivable—each digit in a number (0-9) represents a multiple of a power of 10. This is illustrated below for the number 3581:

3581 = 3 x 10^3 + 5 x 10^2 + 8 x 10^1 + 1 x 10^0

In the binary system, with only two digits, each digit represents a power of two. Conceptually, each position can be regarded as being turned “on” (multiplied by 1) or “off” (multiplied by 0, or annihilated):

1001101 = 1 x 2^6 + 0 x 2^5 + 0 x 2^4 + 1 x 2^3 + 1 x 2^2 + 0 x 2^1 + 1 x 2^0
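Python’s built-ins make the two conversions, and the positional expansion itself, easy to check:

```python
# 25 to binary, and 1001101 back to decimal, matching the expansions above.
print(bin(25)[2:])           # 11001
print(int("1001101", 2))     # 77

# The positional expansion spelled out digit by digit.
digits = [1, 0, 0, 1, 1, 0, 1]
print(sum(d * 2 ** i for i, d in enumerate(reversed(digits))))   # 77
```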

However, base-conversion arithmetic wound up becoming tedious and uninteresting, with little potential application beyond fundamental mathematical skill. I thus turned to pursuing a system touched on by Petzold but imperative to the operation of machines everywhere: text representation via the American Standard Code for Information Interchange (ASCII).

Before the ASCII table is presented, it is helpful to have knowledge of the hexadecimal (base-16) number system, consisting of the digits 0-9 and six additional “digits” signified by the letters A-F (representing 10 through 15, respectively).

Image Courtesy of String Functions

In a two-digit hexadecimal number, each digit corresponds to a nibble: a string of exactly four binary digits. Zero (0) is signified by 0000, F is signified by 1111, and the intermediate conversions are displayed below:

Image Courtesy of Geeks for Geeks

The ASCII system consists of common English characters (e.g., alphanumerics, $, &, !, [space]), each represented by a two-digit hexadecimal—and consequently, 8-bit binary—code. Since 8 bits constitute one byte, ASCII represents each character as a single byte. That is, a word or expression made of n characters occupies n bytes of storage or memory. The complete ASCII table is shown below:

Image Courtesy of Wikimedia

One may immediately note that the table only displays the first half of the hexadecimal codes and excludes 80 through FF. These secondary codes comprise the extended ASCII system, which consists of more obscure characters and varies by operating system (Windows or Mac). Moreover, the hexadecimal codes 00 through 1F are control codes that can essentially be ignored here (except the whitespace characters 09 through 0D, such as tab and line feed), as they were designed primarily for teletype machines, the main writing devices at the time of ASCII’s inception. Using ASCII, my complete first name can be written as follows:

Hexadecimal:  416E746F6E696F

Binary: 01000001011011100111010001101111011011100110100101101111
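To check a conversion like the one above (or to encode your own name), Python’s built-in ord function gives each character’s ASCII code, and the format specifiers "02X" and "08b" render it as two hexadecimal digits or eight bits; a short sketch (function names are mine):

```python
def to_ascii_hex(text):
    """Concatenate each character's two-digit hexadecimal ASCII code."""
    return "".join(format(ord(c), "02X") for c in text)

def to_ascii_binary(text):
    """Concatenate each character's 8-bit binary ASCII code."""
    return "".join(format(ord(c), "08b") for c in text)

print(to_ascii_hex("Antonio"))     # 416E746F6E696F
print(to_ascii_binary("Antonio"))  # matches the 56-bit string above
```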

In the comments section, please try expressing your name in hexadecimal or binary using the ASCII table! Additionally, there are myriad online “text to binary converters” at your disposal. Many programs (and, if you noticed above, Petzold on the front cover of Code) yield only 7 binary digits per character, as 8 are necessary only if extended ASCII is incorporated. However, to maintain the “one byte per character” standard, the convention is to prepend a leading zero and represent each character with 8 bits.

In the next blog post, I will detail my endeavors to mathematically apply ASCII to formulate a code system, particularly in relation to my grandfather’s riddle and the supplementary curiosity with the number 9.

How So Quickly?

This week, I showcase my effort to prove an intriguing mathematical curiosity in the DeMarchis Riddle written by my grandfather. Such a proof, as will be demonstrated, has been as elusive as the quirk itself has been fascinating for nearly four years now.

Quirk in the Riddle

As you may recall, the woman’s house number was 9489. Now take that number, and perform the following operation:

From the current number, subtract the largest number made up of a single repeated digit (a repdigit, with at most as many digits as the current number) such that the difference remains positive.

Surely an example is necessary to convey such convoluted language. Given 9489, what is the largest four-digit number consisting of a single repeated digit (1111, 2222, 3333, …) that we can subtract while keeping the result positive? Clearly, it is 8888.

9489 – 8888 = 601

Now, perform the same operation on the resultant difference. To maintain positivity, one must subtract 555 from 601:

601 – 555 = 46

We keep subtracting in this manner until we are left with a single-digit number, upon which we subtract it from itself to yield 0:

46 – 44 = 2

2 – 2 = 0

What do you notice about the strings of digits we utilized (8888, 555, 44, 2)? In fact, each repeated digit constitutes the age of one of the children (i.e., the answer to last week’s problem: 8, 5, 4, and 2)!

That is, if you take any positive integer, subtract its digit sum from it, divide that difference by 9, and add the resultant quotient to the original number, you obtain an integer from which the original number may be recovered via the operations described above. This is precisely how the salesman from the problem was able to deduce the children’s ages so quickly, as noted in the description.
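The procedure is mechanical enough to sketch in a few lines of Python (the function names are my own): one function applies the transform, and the other recovers the digits by repeated repdigit subtraction.

```python
def transform(x):
    """Map x to y = (x - digitSum(x)) / 9 + x."""
    digit_sum = sum(int(d) for d in str(x))
    return (x - digit_sum) // 9 + x  # x - digitSum(x) is always divisible by 9

def recover_ages(y):
    """Repeatedly subtract the largest repdigit not exceeding y."""
    ages = []
    while y > 0:
        n = len(str(y))
        rep = max(int(str(d) * length)        # repdigits like 8888, 555, 44, 2
                  for d in range(1, 10)
                  for length in range(1, n + 1)
                  if int(str(d) * length) <= y)
        ages.append(int(str(rep)[0]))
        y -= rep
    return ages

print(transform(8542))       # 9489
print(recover_ages(9489))    # [8, 5, 4, 2]
```

Feeding the output of transform back into recover_ages reproduces the digits of the original number, exactly as in the worked example.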

Attempt at a Proof

I begin by writing out the operation that transforms a given positive integer x into a related integer y.

(x – digitSum(x))/9 + x = y

The function digitSum is mathematically defined as an expression of modulo terms and exponential summations, which will be abbreviated by its eponymous name for now.

Multiplying the equation above by 9 to clear the denominator, I obtain the following linear equation:

10x – digitSum(x) = 9y

The main question is: Why can the output y be decomposed as a sum of repdigit strings, where the string built from each digit of x has length equal to that digit’s position in x (counted from the right)? That is, how can it be that, when one subtracts the digit sum from x, divides by 9, and adds the result back to x, the output decomposes into such strings of the digits of x?

Surely, the answer must lie with the number 9. A less significant though interesting quirk is the fact that x – digitSum(x) is always divisible by 9. Additionally, 9 (along with its divisor 3) is the largest integer such that divisibility of the digit sum of a given number implies the divisibility of the number itself. For example, 441 is divisible by 9 because 4 + 4 + 1 = 9 is also divisible by 9.
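That first quirk, at least, admits a quick justification. Writing x in its decimal expansion (the symbols below are just labels for the digits of x), the digit sum subtracts off cleanly:

```latex
x = \sum_{i=0}^{n} d_i \cdot 10^i
\qquad\Longrightarrow\qquad
x - \mathrm{digitSum}(x) = \sum_{i=0}^{n} d_i \cdot (10^i - 1)
```

Each factor 10^i – 1 is a string of i nines (9, 99, 999, …), so every term on the right is divisible by 9, and hence so is x – digitSum(x).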

Admittedly, I have made little headway on the proof my grandfather originally requested. What I have developed, however, is a code system incorporating this very curiosity and my two favorite aspects of computer science: binary and ASCII, the subjects of next week’s post.

The Travelling Salesman Problem (The DeMarchis Version)

While all mathematicians are aware of the classic problem regarding the most efficient route for a salesman on a cross-country tour, they probably do not know the version that my now-82-year-old grandfather assigned to me four years ago. Since the problem has a particular quirk that he, I, and others cannot seem to find published in mathematical literature, he coined it the DeMarchis Riddle. It reads as follows:

A salesman knocked on the door of a house, and a woman answered. He gave his pitch and asked the woman if she would like to purchase his product. The woman agreed on the condition that the salesman correctly guess the ages of her four children. She gave the following clues: 1. The children are all under the age of 10, and they each have different ages. 2. If you take my children’s ages in descending order, express them as a four-digit number, subtract the sum of the ages from that number, divide the resultant difference by 9, and add that quotient to the original number, you obtain my house number. The salesman looked at the woman’s mailbox and read 9489. In less than 30 seconds, he provided the woman with all four correct ages. What are they?

The run-on sentence of arithmetic operations may seem daunting, but they are, after all, only arithmetic operations, so there is no need for any obscure geometric modeling. Taken step-by-step, with variables wisely assigned, the problem is very straightforward.

Let us begin organizing the problem through the following variable assignments:

x – the ages in descending order, expressed as a four-digit number

y – the sum of the ages

The first operation mentioned by the woman is the subtraction of the sum of the ages from the age number. Clearly, that is x – y.

We can express the division of that difference by 9 as (x – y) / 9.

That quantity is then added to the original number to yield 9489: (x – y) / 9 + x = 9489

First, we make a common denominator of 9 to give (10x – y) / 9 = 9489.

Multiplying both sides by 9 gives 10x – y = 85401.

In the end, we are interested in obtaining x, so we divide the equation above by 10: x – (y/10) = 8540.1

Now, a decimal on the right-hand side throws a curveball into an otherwise trivial problem. Keep in mind that x must be a whole number, so if x = 8540.1 + y/10, then y/10 must supply the missing 0.9; that is, y must end in the digit 9. Here are the possibilities:

  1. x = 8540.1 + 9/10 = 8541
  2. x = 8540.1 + 19/10 = 8542
  3. x = 8540.1 + 29/10 = 8543
  4. etc.

While one could perform trial-and-error by carrying out the relevant operations on each possible x value, he or she could also recognize that y is simply the sum of the digits of x. In the first possibility, the ages are 8, 5, 4, and 1, which sum to 18; this does not match the required y value of 9. In the second possibility, y is 19, which is indeed the sum of the digits of 8542. Conducting test operations confirms that the woman’s children are 8, 5, 4, and 2 years old.
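Rather than stepping through the candidates by hand, the same trial-and-error can be sketched in Python (a brute-force pass over every four-digit x, keeping the one whose transform yields the house number; the function name is mine):

```python
def solve(house_number):
    """Find the four-digit x such that (x - digitSum(x)) / 9 + x == house_number."""
    for x in range(1000, 10000):
        digit_sum = sum(int(d) for d in str(x))
        # x - digitSum(x) is always a multiple of 9, so the division is exact
        if (x - digit_sum) // 9 + x == house_number:
            return [int(d) for d in str(x)]  # the digits of x are the ages

print(solve(9489))  # [8, 5, 4, 2]
```

The loop confirms that 8542 is the only four-digit number mapping to 9489, so the ages 8, 5, 4, and 2 are unique.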

Such a basic algebra problem may seem straight from a sixth- or seventh-grade textbook, but there is a special anomaly that allows one to solve it far faster. Perusing the various steps of the problem and applying your knowledge of the number 9 in particular, can you deduce such a shortcut?