Is A Hot Dog A Sandwich?

After three more historical disputes, I decided that today it might be better to cover another contemporary trivial dispute, one that should be familiar to many of you reading: Is a hot dog a sandwich?

I don’t really think of a hot dog as a sandwich; when I think of a sandwich, I picture something between two slices of bread, probably because that’s what I’d get in my lunch box when I was younger. A longer sandwich like a sub/hoagie/whatever (I don’t really want to make that dialectal difference the subject of an article, though my mind could change) doesn’t fit that mental picture as neatly. However, personal conceptions aren’t really a good benchmark for determining what counts as a sandwich, as I would never think of a hamburger as a sandwich even though it falls quite clearly into the sandwich camp.

The issue has become a sort of Internet meme, one that seems to have first surged in 2015. That year, the National Hot Dog and Sausage Council (yes, such an organization exists and is 100% serious) weighed in to declare that hot dogs should not be considered sandwiches. Perhaps because it does not overtly present itself as an internet meme, the debate has persisted even after other contemporary memes have worn out their welcome, and many sources have weighed in with their thoughts in the ensuing years.

Amusingly, there has been a court case predicated on the definition of a sandwich, White City Shopping Center vs. PR Restaurants, LLC, which was decided in 2006. PR Restaurants, a New England Panera Bread franchisee, had, as part of its agreement to lease space in White City Shopping Center in Worcester, Massachusetts, become the exclusive provider of sandwiches within the shopping center. When PR found out that another company was to open a Qdoba restaurant within the same shopping center, it cried foul, arguing that because Qdoba sold tacos, burritos, and quesadillas, it should be considered an additional sandwich provider. White City Shopping Center decided to resolve the dispute in court, and the Worcester County Superior Court sided with the shopping center, finding that tacos, burritos, and quesadillas are not commonly understood to be sandwiches and that the contract therefore had not been violated. However, this legal definition says nothing about hot dogs.

The original sandwiches, at least the ones consumed by John Montagu, 4th Earl of Sandwich, in the 18th century, were meat between pieces of bread, although the dish had probably existed for many centuries before his lifetime. Other kinds of sandwich involving longer bread rolls came into existence in the 19th and early 20th centuries. By that general definition, a hot dog should probably be considered a sandwich, as it does consist of food inside bread. Sausage sandwiches do exist, after all. The main hangup for those who remain unconvinced seems to be that a hot dog bun is a single hinged piece of bread, rather than the two separate pieces that supposedly constitute a sandwich. Ultimately, though, public perception hinges on how the question is worded, which is why it is ambiguous to begin with.

(Also, the term “hot dog” can refer to the sausage with or without the bun, perhaps adding further confusion.)

The Straw Hat Riots

As it was two weeks ago, the subject of this week’s installment of Trivial Things is a riot with a seemingly mundane cause. And just like the Astor Place Riots mentioned two weeks ago, this riot happened in New York City, although this time nobody died. In the Astor Place Riots, though, there were at least issues of national pride at stake. This 1922 riot was over styles of men’s hats. It wasn’t even a reflection of bigger social issues, as far as I can tell. It was just some pranksters who wished to take advantage of customs regarding the seasonality of hats.

Before the middle of the 20th century, most men (and often women too) would rarely leave home without a hat. While the kind of hat varied depending on what you could afford, some kind of head covering was a necessity. Silk or felt hats were likely the most fashionable, but they would get rather hot during the summer, so many men wore straw hats instead. While these hats, often called “boaters” for their use during summer boating activities, were initially worn only for informal events, around the turn of the 20th century they became acceptable wear for all summer activities.

An image of the 1916 Republican National Convention, in which the ubiquity of hats is easily apparent. Since this was in June, many people wore straw hats.

However, a convention soon developed that straw hats were to be worn only during the summer, without exception. Like the currently better-known tradition of not wearing white after Labor Day, it likely originated as a way of forcing people to be fashionable except when it was far too inconvenient. Because hats were easily removable, though, the tradition could be more readily enforced: hooligans would make a game of knocking the hat off of anyone caught wearing a straw hat after the cut-off date (which varied from September 1st to the 15th). Some mischief makers would get impatient and start knocking off hats before the generally accepted end of straw hat season, though this would typically get them in more trouble. Personally, I wonder if these events were part of a general tradition in which young mischief makers were given certain days to go wild, such as the traditional “Mischief Night” or “Devil’s Night” before Halloween and (to a lesser extent) April Fool’s Day.

For whatever reason, in 1922 the straw hat hooligans were especially intense. It seems to have started when one group decided to stomp on the hats of some dock workers two days before straw hat season was supposed to end. The dock workers did not take this well and started a brawl that apparently blocked traffic on the Manhattan Bridge. The next day the rioting only got worse, as gangs prowled the streets with boards studded with nails to use for stripping people of their hats. Streets in affected areas were said to be littered with straw hats. The police eventually had to get involved, especially after plainclothes officers started to get their hats yanked as well.

New York Times headline on the rioting. (PDF of full article)

While this chaos did not kill the tradition of straw hat season outright, the tradition did not last much longer. In 1925 President Calvin Coolidge wore a straw hat after September 15th, and in the years afterwards the custom was enforced less and less. Boater hats also fell out of fashion, especially after the Great Depression. By the 1950s and ’60s, men’s hats in general had become more of a novelty, although the reasons why are likely quite complicated.

(For further information on the straw hat riots, and on New York City history in general, the Bowery Boys podcast on NYC history released an episode on the riots recently, from which I derived much of the information in this post.)

Theater troubles

Before media such as film, recorded music, and television became fully developed, live performance was the principal form of entertainment available to the masses. Throw in the passions that can be stirred up among large crowds of people, and it is unsurprising that mass uproar has broken out during many performances throughout history. For today’s entry, I will describe three notable examples.

 

Perhaps the most catastrophic disturbance stirred up by a play happened in 1849, when renowned English actor William Macready arrived in New York to perform Macbeth at the Astor Opera House. People from all walks of life would regularly attend Shakespeare performances, and thus had strong opinions on which actor was the best. Macready had quite a few critics in New York, who preferred American actor Edwin Forrest. Much of this criticism came from Macready’s Englishness, as anti-English sentiment was quite common among members of the lower classes, especially the Irish immigrants who had been arriving in New York in large numbers. 

 

Macready’s first performance was booed, and he would’ve left New York afterwards if his fans hadn’t begged him to stay. However, the elites of New York prepared for the worst, buying up all the seats in the house and placing hundreds of policemen and soldiers outside. And the worst did indeed come to pass, as a violent mob about 10,000 strong surrounded the theater and started attacking the structure and those who were guarding it. In the rioting that followed, about 22 people died in what was surely one of the worst-received performances in history.

 

However, it was far from the only performance in history to inspire some sort of chaos. In 1907, Irish nationalist sentiments would again stir up an uproar around a play, though this time both the play’s audience and its writer were from Ireland itself. Agitation for Irish independence was growing at the time, so any prominent Irish writer was expected to be a defining figure of the country’s culture. Thus, when John Millington Synge’s play “The Playboy of the Western World” was performed at the Abbey Theatre in Dublin, audiences were appalled at the immorality of its title character, and by a reference to women in shifts (a sort of nightgown-like undergarment). During the play’s second performance, the audience stormed the stage, and the ensuing riot would overshadow the play itself, although things eventually calmed down.

 

Finally, a calmer backlash (at least relative to the other two entries on this list) occurred at the premiere of Igor Stravinsky’s groundbreaking ballet “The Rite of Spring” in Paris in 1913. Stravinsky’s jarring music and choreographer Vaslav Nijinsky’s equally jarring dances shocked the audience, which erupted in uproar, reportedly driving Stravinsky from the theater. However, the show still went on, and despite the later characterization of the event as a riot, it is unclear whether any actual violence occurred.

The beef on calculus

Today’s entry in the list of Trivial Things shows that even history’s greatest minds weren’t immune to largely pointless feuds. 

 

While Isaac Newton’s numerous discoveries and theories had a tremendous impact on various scientific fields, he was far from the only scientific mind at work; all of Europe seemed busy making scientific discoveries after Galileo and Kepler made the case that the Earth orbited the Sun. Newton himself worked with many colleagues, such as Edmond Halley and Christopher Wren, and continental Europe was home to many prominent scientists, including Christiaan Huygens, Johann and Jakob Bernoulli, Blaise Pascal, and (most importantly for this discussion) Gottfried Leibniz, who made major contributions to mathematics and philosophy.

 

From 1665 to 1667, Newton worked on many of his greatest achievements in a situation that has now become familiar to most of you: at home, while Cambridge University was closed due to infectious disease, in this case the plague. However, Newton was reluctant to publish his work, as he did not want to deal with potential criticism.

 

In order to formulate the equations behind what is now his best-known achievement, the law of universal gravitation, Newton devised a whole new form of mathematics. This is what would become known as calculus, but Newton referred to it as the “method of fluxions.” So why, then, is it referred to today as calculus? The answer is that Leibniz was much less reluctant to publish his findings: after arriving at similar principles independently in the 1670s, he published his discoveries in 1684, under the name “calculus.” As Newton’s renown grew, he started to share his work on calculus with his colleagues, but still didn’t publish a complete summary of what he had found. Initially, Leibniz and Newton greatly admired each other as mathematicians, although each sincerely believed he deserved credit for the discovery of the methods of calculus.
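For readers curious what the two men were actually arguing over, here is a small, modernized illustration of their notations (my own sketch, not drawn from either man’s original writings). Newton expressed rates of change as “fluxions,” written with a dot over the variable, while Leibniz wrote derivatives as ratios of infinitesimal differences and introduced the elongated S for integration, the notation still taught today:

\[
\dot{y} = 2x\,\dot{x} \quad \text{(Newton, for } y = x^2\text{)} \qquad\qquad \frac{dy}{dx} = 2x, \quad \int 2x\,dx = x^2 \quad \text{(Leibniz)}
\]

Both statements express the same underlying fact about the same curve; the fight was over who got there first and whose symbols would be used.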

 

However, controversy started brewing when, in 1695, English mathematician John Wallis seemed to accuse Leibniz of getting his ideas from Newton. This was quite in character for Wallis, who was fond of attacking non-English mathematicians and defending English ones at any opportunity. Leibniz was justifiably offended by this and sought to prove the superiority of his methods.

 

In 1697, Newton anonymously solved a problem posed by Leibniz’s colleague Johann Bernoulli. However, it was obvious to Bernoulli and Leibniz who was responsible, and Leibniz stated that all those who solved the problem must’ve understood his calculus quite well. This offended one of Newton’s allies, who brought the controversy further into the public eye by directly accusing Leibniz of plagiarizing Newton. However, Newton and Leibniz still held each other in too high regard to publicly butt heads.

 

This changed somewhat, however, once Newton’s methods began to be formally published in 1704 and Leibniz could see the intricacies of what Newton had developed, realizing that Newton had arrived at most of the same principles he himself had found. Perhaps to dismiss the old accusations against Leibniz, an anonymous review implicitly accusing Newton of relying on Leibniz’s superior methods was published in Leibniz’s journal. While Newton likely did not see this review, the mathematician John Keill did, and in 1708 he decided to defend Newton and attack Leibniz. This brought the two greats of calculus directly against each other, as Leibniz wrote to Newton’s Royal Society to complain about Keill. In 1712, the Royal Society responded with a document compiling evidence that Leibniz had gotten his ideas from Newton’s unpublished work, evidence which was mostly inaccurate. Leibniz never publicly responded before his death in 1716 took him out of the controversy, and the reverence English scientists had for Newton meant that it took them a very long time to acknowledge Leibniz’s role in calculus’s development.

 

Today, this controversy seems rather trifling for two of the greatest scientists of the age to have gotten into. Indeed, it seems that Newton and Leibniz thought it was trifling as well, as they only really got involved once they felt they had to. But once the dispute had started, both men were too prideful to let it stop, and the result was one of the great stories in the history of mathematics.

GIFs

After taking a break from tech-related debates in my post about food feuds, I decided to go back into the technical sphere after remembering one of the best-known controversies in digital media: how to pronounce the acronym for the Graphics Interchange Format.

GIF was first introduced in 1987 to provide a method for transmitting color images across the internet. Initially, its popularity stemmed from its compression: the format compresses image data losslessly (using the LZW algorithm), which kept files small and made GIF images easy to share even over slow connections.

While the format is fairly primitive by today’s standards (it can only display 256 colors at once unless some clever trickery is used), the fact that multiple images could be stored in one file inspired the addition of support for animation in the 1989 version of GIF. This made GIF the go-to format for sending small, short animations, and thanks to its support in most web browsers it is still widely used for this purpose. As with many file formats, GIF images are typically just referred to as GIFs, and since the initials are theoretically pronounceable, “GIF” is almost always said as a one-syllable word rather than spelled out as “gee-i-eff.”
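As a rough illustration of those two properties (a palette of at most 256 colors per frame and multiple frames stored in one file), here is a minimal sketch using the Python imaging library Pillow, which supports writing animated GIFs; the file name and frame colors are placeholders I made up:

from PIL import Image

# Build three solid-color frames; when saving as GIF, Pillow quantizes
# each frame down to a palette of at most 256 colors.
frames = [
    Image.new("RGB", (120, 90), color)
    for color in [(255, 0, 0), (0, 255, 0), (0, 0, 255)]
]

# save_all=True writes every frame into one file, which is the
# GIF89a feature that makes animated GIFs possible.
frames[0].save(
    "demo.gif",
    save_all=True,
    append_images=frames[1:],
    duration=300,  # milliseconds each frame is shown
    loop=0,        # 0 means loop forever
)

Opening the resulting demo.gif in a browser should show the three colors cycling, all from one small file.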

However, in English the letter G can be pronounced in two different ways, reflecting how G’s pronunciation shifted before E and I in many Romance languages. In words derived from French, a G before E or I is thus almost invariably pronounced like “j,” but many native Germanic words, along with words borrowed from other non-Romance languages, keep a “hard” G sound even when the G appears before E or I. This makes it quite difficult to know how to pronounce an unfamiliar word containing “ge” or “gi,” especially one coined as an acronym, like GIF. Also, since most people first encounter the term GIF in writing on the internet, without a pronunciation guide, they develop their own pronunciation of the word independently of how others pronounce it. (Perhaps this is another argument for spelling reform.)

Unlike most of the debates on this blog, one side can actually claim to have the “right” answer. At CompuServe, where the format was created, the pronunciation was almost certainly “jif.” A large compilation of statements to this effect can be found on this rather amusing site, including a statement by GIF developer Steve Wilhite in which he insists that the OED is wrong for including the hard-G pronunciation alongside the soft one.

However, this does not mean the debate is over. Dictionaries today aim to be descriptive rather than prescriptive, meaning they should reflect how words are actually used in speech and writing rather than how some authority dictates they should be used. By that standard, the hard-G pronunciation definitely belongs in the dictionary. Also, much as with tabs vs. spaces, Stack Overflow’s 2017 developer survey included a question on how respondents pronounce “GIF.” The results show that, despite not being the original pronunciation, the hard G was used by 65.6% of respondents. This likely means that both pronunciations will remain in use for quite some time.

(By the way, I’ve always used a hard G to pronounce GIF.)

Food feuds

One of the most important aspects of a culture tends to be its food. Given that different cultures often come to blows over some very trivial things, it is no surprise that there are many, many food-related disputes between different culinary regions. In this blog post, I’m going to cover some of the best-known ones, particularly those from the area around Pennsylvania. However, I am going to skip Wawa vs. Sheetz (for now, at least).

Four slices of Taylor pork roll. (Austin Murphy, Wikimedia Commons/CC)

Pork roll was first widely sold by John Taylor of Trenton, New Jersey, in 1856, and it has been a statewide staple ever since. The term refers to a roll of pork (from various parts of the pig) which has been spiced and smoked, and traditionally wrapped in cloth. However, when Taylor first sold his product, it was known instead as “Taylor’s Prepared Ham.” It was only in 1906 that Taylor was forced to change the name of his product to “Taylor Pork Roll.” This change allowed a slew of imitators to make their own pork rolls, since Taylor was unable to trademark the term “pork roll” by itself. Nevertheless, many still referred to the product as “Taylor ham,” and for reasons I can’t seem to ascertain “Taylor ham” remains the preferred term in northern New Jersey, but not elsewhere. The term has become a source of North Jersey regional pride, which is curious given that pork roll itself is particular mostly to New Jersey as a whole. Barack Obama referenced the dispute in 2016 during a speech at Rutgers University in New Jersey, saying “There’s not much I’m afraid to take on in my final year of office, but I know better than to get in the middle of that debate.”

However, in spite of the controversy over its name, at least the inventor of the modern-day pork roll is not in dispute. The same cannot be said for many classic American foods, which have multiple stories explaining their origin. Since most foods do not see widespread sale until many years after they are first created, and so go largely undocumented at first, this is not much of a surprise.

One dish whose origin is disputed between American cities is the ice cream sundae, which appeared somewhere in the United States in the latter half of the 19th century. Stories of the invention of the ice cream sundae also have to explain the dessert’s unusual name. The two cities with the best-known claims are Two Rivers, Wisconsin and Ithaca, New York.

The Two Rivers story claims that Ed Berners, of a namesake ice cream parlor, served plain ice cream topped with chocolate syrup instead of an ice cream soda (which was quite popular at the time, and was the main use for chocolate syrup) because it was a Sunday and, in the story’s telling, ice cream sodas were not considered appropriate for the day. Traditionally, this incident is dated to 1881, but that date is considered doubtful, as Berners was only 18 at the time.

The Two Rivers claim is contested by Ithaca, which holds that the sundae was invented there in 1892; Ithaca even wrote a resolution rebuking the city of Two Rivers in 2006. In Ithaca’s story, a minister wanted ice cream after his Sunday services, and the shop owner, a friend of the minister, topped two scoops of vanilla ice cream with cherry syrup and a cherry. The dish was thus called a “sunday” after the day on which it was made. Ithaca’s story has much more documentation behind it, as ads from 1892 mentioning the “sunday” and referring to it as new have been found. Nevertheless, the Two Rivers claim persists, and both towns still claim to be the true birthplace of the ice cream sundae.

Tabs vs. Spaces – The Great Coding Debate

In last week’s blog, about whether or not “internet” should be capitalized, I covered a computing-related issue for the first time. However, this is far from the only trivial dispute to emerge in the world of computing. A case in point is the dispute over whether it is better to use spaces or tabs when writing code. 

I first found out about this point of contention in an episode of Silicon Valley on HBO. In the episode, the show’s main character breaks up with a girl he just met after finding out she uses spaces to write her code instead of tabs. Although this reaction is probably an exaggeration, the debate is very real, and can cause heated arguments to erupt between coders. Even Bill Gates has weighed in on the debate, saying in response to a question on Reddit that he uses tabs.

In order to understand why this debate exists in the first place, you must understand a few things about writing code. (Disclaimer: I don’t code myself, so some of this explanation may be incomplete.) In order to make it easier to understand what is going on, I will use an example. Say you wanted to subtract 1 from numbers less than 2, and add 1 to numbers greater than or equal to 2. The code would look something like this (this is not a real programming language, just “pseudocode”):

if n<2

subtract 1 from n

if n≥2 

add 1 to n

However, if this were part of a much more complex program, it might be hard to tell at first glance which commands belong to which conditions. To make the code easier to read, subordinate operations are usually indented. So our pseudocode would look like this:

if n<2 

   subtract 1 from n

if n≥2

   add 1 to n
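For anyone curious what this looks like in an actual language, here is the same logic written as a small Python function (my own sketch; the pseudocode above is not tied to any real language). In Python the indentation is not just cosmetic but part of the syntax, which is one reason the tabs-versus-spaces choice gets taken so seriously there:

def adjust(n):
    # The indentation (four spaces here) marks which lines belong to each branch.
    if n < 2:
        n = n - 1  # subtract 1 from numbers less than 2
    else:          # n >= 2
        n = n + 1  # add 1 to numbers greater than or equal to 2
    return n

print(adjust(1))  # prints 0
print(adjust(3))  # prints 4

(Python will even refuse to run code whose indentation mixes tabs and spaces inconsistently, so coders there really do have to pick a side.)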

However, when making the indents, both spaces and tabs can be used, which is where the debate comes from. There are arguments for both options. Coders who use tabs justify their preference by noting that a substantial indent can be created with just one press of the Tab key, while using the space bar requires multiple presses for a single indent. Tab characters also take up slightly less storage than runs of spaces. Coders who use spaces counter that spaces allow for more precise alignment than tabs do, and that because tabs often render wider than a few spaces, tab-indented code can become quite wide on screen.

As for which format is actually used more, a 2016 study of repositories on the popular code-sharing site GitHub found that spaces are more commonly used in almost all programming languages. However, coders who use tabs still insisted that tabs were superior and that the majority must be wrong. Another study related to the debate, conducted in 2017 using data from a survey of users of the coding Q&A website Stack Overflow, found that developers who used spaces made more money than those who used tabs, a finding which held across experience levels, countries, and many other groupings, though the original publishers of these results stopped short of explaining why. Thus, it seems that even if studies rule in favor of spaces, the debate will persist for many more years.

The internet… or should it be the Internet?

While I have dealt with spelling-related issues before in my spelling reform post from last semester, I’ve never devoted an entire article to how to write a single word. However, given that the (I/i)nternet is something we pretty much all use now, it is worth deciding just how the word should be written. This is definitely something that has tripped me up when writing in the past, and just this week I had to correct an assignment to make sure the word was written correctly.

The term was originally just a shortening of “internetwork,” a generic noun that could refer to any kind of link from one network to another. Given that this was a general term and not a reference to a specific entity, “internet” in this sense was never capitalized. One of the first definitive uses of “internet” in reference to what would be called the Internet today was a 1974 technical document, Request for Comments (RFC) 675, entitled “Specification Of Internet Transmission Control Program,” though the longer form “internetwork” also appears in its text. At the time, the network being referred to was ARPANET, one of the first projects to link computers together over long distances, which had been established in 1969 by ARPA, an agency of the U.S. Department of Defense.

Map of computers connected to ARPANET in 1974.

As more networks were created, many of which engaged in “inter-network” communication with ARPANET, it became clear that the future lay in a global, decentralized network, which came to be referred to as “the ARPA-Internet” and then later “the Internet,” with the capital letter marking its status as the most important internetwork. The former term was still being used in RFC 959 in 1985, but by 1987, when RFC 1034 specified the modern domain name system, “the Internet” had become the standard phrasing.

As the global (i/I)nternet has for the most part removed the need for smaller long-distance connections between computer networks, there is less need for a distinction between an internet and the Internet, since the former sense is hardly used anymore. You might expect this to have led to the capitalized “Internet” becoming the predominant form, but the opposite seems to have happened, as the internet is now viewed as a medium rather than as an entity. As Susan Herring noted in her 2015 article for Wired, Apple’s decision to abbreviate internet with a lowercase “i” in the names of products like the iMac and iPod may have helped sway public usage. She also notes that in the Oxford English Corpus, “Internet” was slowly losing ground to “internet,” though as of 2015 it had not yet been overtaken in the data.

In order to reflect this change in popular usage, many publications which once insisted upon capitalizing “Internet” now consider “internet” to be the default spelling. In 2016, both the Chicago Manual of Style and the Associated Press switched to “internet.” The reasoning given for the latter decision was that it “reflects a growing trend toward lowercasing” the word, which has become “a generic term.”

Capitalization may be one of the more trivial aspects of writing standards, given that the rules for it are typically clear-cut. However, because the phrase “the internet” appears so often, especially in journalism, this is an issue that has to be settled somehow yet has no obviously correct answer. That likely explains why style guides keep revisiting the word’s capitalization, and why those changes get the coverage they do.

Place name disputes 4 (?): Mount Rainier vs Tacoma

While all of the previously mentioned place name disputes are still ongoing, or at the very least were active until recently, disputes over place names are nothing new at all. A case in point is the dispute over the name of the highest peak in Washington state. Much like the Denali/Mount McKinley dispute, the Rainier/Tacoma controversy revolved around whether to use the mountain’s native name or a name given to it by later explorers, although in this case the controversy was not driven entirely by a perceived need to properly honor the local tribes. Rather, it came from the fact that residents of the city of Tacoma wanted their city, rather than Seattle, to be associated with the mountain. The fact that the Tacoma name was older was, however, a point in its proponents’ favor.

When proposals came to turn the area around the mountain into a national park, one of the chief concerns was the naming of the mountain within it. The park was created in 1899; while it was originally to be called Washington National Park, the name was ultimately changed to Mount Rainier National Park, though this did not dissuade Tacoma residents from calling the peak Mount Tacoma. They also sought to raise awareness of the issue through various means, including forming a “Justice-to-the-Mountain” committee. Seattle residents did not seem to care nearly as much, perhaps because there were no proposals to call it Mount Seattle.

The Board of Geographic Names agreed to hear the issue in 1917 and ruled in favor of Mount Rainier on the grounds that it was the more established name. This did not stop efforts to change the name, however. Some of the more ludicrous arguments against the name Mount Rainier were that it was named after Rainier beer (which was in fact named after the mountain) and that a British admiral who had fought against the Americans did not deserve to have a mountain named after him. In 1924, a Congressional resolution was proposed to change the mountain’s name, but despite initial promise it ultimately died in committee. Since then, the name Rainier has become fairly entrenched, but the renaming of Denali has sparked renewed discussion.

My main source was this article. 

Place name disputes 3: (North) Macedonia

It’s been a while since I’ve had to make one of these posts, so I couldn’t quite think of another trivial dispute off the top of my head. However, I somehow missed this one in my previous place name posts, despite it being one of the biggest place name disputes out there.

Flag of North Macedonia (originally known as just Macedonia)

The ancient kingdom of Macedonia (or Macedon), famous as the original domain of Alexander the Great, was located mostly in what is now Greece and had mostly Greek-speaking inhabitants. Macedonia continued to be regarded as a region under the Roman Empire, though Slavic tribes settled in the area during the middle of the 1st millennium AD. As the strength of the Slavs in the Balkans (where Macedonia is located) increased, parts of traditional Macedonia began to fall under Slavic control.

Map of the ancient Greek kingdom of Macedon (Marsyas, Kordas, MinisterForBadTimes – Wikimedia Commons-CC)

In the Middle Ages, Slavic states such as the First Bulgarian Empire and the Serbian Empire eventually managed to conquer the northern regions of Macedonia from the Byzantine Empire. When the Ottoman Empire conquered Macedonia in the 14th century, Macedonia ceased to be defined as its own region, but Greek and Slavic populations remained in the area.

However, the Ottoman Empire started to collapse in the 19th century, and new Slavic nations such as Serbia and later Bulgaria emerged in the Balkans. Bulgaria, Serbia, and Greece all believed they deserved to govern some of the lands within the historical region of Macedonia. Although the Macedonian Slavs mostly spoke a language more similar to Bulgarian than to Serbian, an early plan to give Bulgaria most of the land in Macedonia was scrapped due to concerns about Bulgaria becoming too powerful. Once the land was taken from the Ottomans in 1913, most of the northern part of the region went to Serbia (and so became part of Yugoslavia after WWI), with the southern portions going to Greece.

Map of the division of Macedonia after the Second Balkan War.

Many Slavic speakers remained in the Greek portions of Macedonia, where they often faced cultural stigmatization from the Greeks. Yugoslavia, for its part, tried to treat the Macedonians as Serbs rather than as a distinct nationality, a policy which led to resistance from terror groups such as the Internal Macedonian Revolutionary Organization, which assassinated King Alexander I of Yugoslavia.

During WWII, Bulgaria occupied much of eastern Greece, including parts of Macedonia, a brutal occupation which only served to further inflame divisions between the Greeks and their Slavic neighbors. Bulgaria also occupied much of Yugoslav Macedonia during the war. However, Yugoslavia, under its new Communist leader Marshal Josip Broz Tito, successfully reclaimed Macedonia. While Tito was initially optimistic about a union between Yugoslavia and Bulgaria, his split with Stalin’s communist bloc led him to call for a repudiation of any claims to Bulgarian heritage within Macedonia, thus cementing the Macedonians as their own ethnic group. The civil war in Greece between the monarchists and the communists made Greece suspicious of the Macedonian nationalist cause, as many Slavic Macedonians in Greece supported the communists and wished to unite with Yugoslavia.

The issue of Macedonia’s name reached its modern level of relevance once Macedonia became an independent country during the breakup of Yugoslavia, a period which was marred by bloody ethnic conflict. While Macedonia was spared the worst of these conflicts, its independence did raise concern among Greeks that Greek sovereignty over their part of Macedonia might be threatened, along with worries that the name Macedonia would no longer be associated with Greece.

For their part, the Macedonians did do several things which could reasonably be seen as appropriating the Greek history of Macedon. For one, their original flag featured the Vergina Sun, a symbol of the ancient kingdom of Macedon; after Greek objections, the flag was changed in 1995 to a different stylized sun. A monument to Alexander the Great was also erected in Macedonia’s capital, despite the fact that he spent most of his formative years in what is now Greece.

A statue of Alexander the Great in Macedonia’s capital Skopje. (Gonzosft – CC)

Greece did agree to recognize the country, but only under the provisional name “former Yugoslav Republic of Macedonia” (FYROM). Greece continued to refuse to call the country Macedonia without qualifiers, even blocking Macedonia’s bids to join NATO and the EU over the issue. Talks between the two countries to reach a compromise went on for more than a decade, until Macedonia agreed to change its name to the “Republic of North Macedonia” in 2018. Hopefully, this compromise will prevent the naming dispute from returning.