Economic Health Care

At the beginning of this semester, I went on a Louie’s binge. At least two or three times a week, I would head over to the little store in Redifer dining commons and buy overpriced bags of popcorn or chips and ice cream with my meal points. And despite a fairly hefty eating plan for a 120 lb female, a few weeks ago I almost ran out of meal points. Since then, I have been adding money to my account in small increments and shaving off my meal costs wherever I can. I eat breakfast in my dorm, with milk and cereal bought from the regularly priced grocery store downtown, and stuff my lunch takeout box at the buffet, which I eat for both lunch and dinner. That comes down to $3.45 per day, thanks to the student discount, plus around $3 per week for the milk and cereal. Not bad, given my previous $20-a-day habits.
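
For the curious, here is a quick back-of-the-envelope check of those numbers (the figures are just my own rough estimates from above, rounded for convenience):

```python
# Rough weekly food-cost comparison using the approximate figures above.
buffet_per_day = 3.45      # discounted buffet takeout, eaten as both lunch and dinner
groceries_per_week = 3.00  # milk and cereal from the grocery store downtown
old_habit_per_day = 20.00  # my earlier Louie's-fueled spending, roughly

new_weekly = buffet_per_day * 7 + groceries_per_week
old_weekly = old_habit_per_day * 7

print(f"New habit: ${new_weekly:.2f} per week")              # ~ $27.15
print(f"Old habit: ${old_weekly:.2f} per week")              # ~ $140.00
print(f"Savings:   ${old_weekly - new_weekly:.2f} per week") # ~ $112.85
```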

But what if my college meal plan worked differently? What if everyone paid the same amount per semester, that money were pooled into a university-wide fund, and everyone could eat as much food as they needed? I mean, for every jock or heavier-set boy, there would be lighter eaters and dieting college girls to offset the consumption. Food is a necessity, and that way, nobody would ever have to worry about running out of meal points.

It doesn’t take an economist to tell you that this system could never work. Many people would adopt my attitude from the beginning of the semester: “I have an unlimited meal plan, so I can splurge on whatever food I want.” And what about the more expensive HUB dining? I know I would love to visit Panda Sushi five times a week, or that wonderful soup and salad place by Burger King. As for the milk and cereal, which I now buy for a lower price from McLanahan’s, I would probably spare myself the longer walk and get it overpriced from Louie’s. In the end, students would bankrupt the system halfway through a semester. And if university officials wanted to continue it, they would have to raise the price for everybody.

This type of thinking is something economists call moral hazard: people take greater risks when they do not bear the costs of those risks. In other words, I will spend more money on food if everyone pays for meal points. But unfortunately, that is exactly how modern health care and insurance work.

As John Stossel explains in “Insurance Makes Healthcare Far More Expensive,” insurance was designed to protect people against the costs of serious injuries or illnesses. A large group of people each pay a small monthly amount, which is pooled together, and if someone from that group gets cancer, for example, the insurer will cover the costs of chemotherapy with the funds collected from everyone. And while healthy people ultimately end up losing money, people with life-threatening conditions can get the help they need without worrying about costs.

But insurance has expanded far beyond the scope of covering emergency care, and according to Stossel, this is what is driving up the cost of health care. Insurance now covers even small things like flu shots and physicals, and consequently, people don’t “shop around” to find the lowest prices. Because people have no incentive to look for lower prices, suppliers (i.e., doctors and hospitals) have no incentive to lower the costs of these procedures.

The best example of what would probably happen if non-emergency care weren’t covered is Lasik eye surgery. According to Alex Tabarrok, “In 1998 the average price of laser eye surgery was about $2200 per eye. Today the average price is $1350, that’s a decline of 38 percent in nominal terms and slightly more than that after taking into account inflation.” Why has the cost declined? Because insurance doesn’t cover it! In the Stossel video, Dr. Brian Bonanni says that even $100 can make a difference in which doctor people choose for Lasik. When was the last time you or your family “shopped around” for the cheapest doctor’s visit?
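
Tabarrok’s arithmetic checks out; here is a quick sanity check using only the two prices he cites:

```python
# Sanity check on the quoted Lasik figures (average price per eye, nominal dollars).
price_1998 = 2200
price_today = 1350

decline = (price_1998 - price_today) / price_1998
print(f"Nominal decline: {decline:.1%}")  # ~38.6%, matching the quoted 38 percent
```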

Furthermore, there is an incentive for unhealthy people to lie about their health problems. We all know that factors such as smoking, obesity, or heart disease can drive up the monthly cost of insurance. Naturally, nobody wants to make these conditions known to providers, and many people hide them. Yet these people are more likely to buy health insurance than healthy people, since they know there is a large chance they will need health care in the future, a phenomenon known as adverse selection. When these people begin consuming health care for their conditions, the pool of insurance money is quickly depleted. Providers often combat adverse selection by doing three things: raising the cost of insurance for everyone, requiring physicals and other tests to screen potential customers, and refusing to cover people with serious conditions or terminal illnesses who actually need the insurance.

Costs are further raised by new technology. This seems counterintuitive at first, since technology is usually meant to make things cheaper and more effective. But with each advance in medical technology, and the blanket of insurance to cover the costs, people end up demanding the “best” (a.k.a. most expensive) treatments available for their problems.

From a strictly economic point of view, health care would be most efficient if there were no insurance at all. But basic insurance is a good thing: it gives people a safety net in case something awful happens and they are sick or injured. I simply believe that insurance should not have such wide coverage, because it causes people to take unnecessary risks. If we truly want health care to be more affordable and readily available, then the only way to bring costs down is to cut coverage, forcing consumers to “shop around” and suppliers to lower prices in order to remain competitive. I also believe that high-deductible insurance is the best approach to health care: healthy people end up paying much less, while sick or injured people are still covered when they need to be.
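
To show why, here is a toy comparison of the two kinds of plans. Every dollar figure below (premiums, deductible, medical bills) is a made-up round number chosen only to illustrate the shape of the trade-off, not an actual quote:

```python
# Toy comparison of a full-coverage plan vs. a high-deductible plan.
# All dollar amounts are hypothetical, chosen purely for illustration.
full_premium_per_month = 400   # hypothetical full-coverage premium
hd_premium_per_month = 150     # hypothetical high-deductible premium
hd_deductible = 5000           # hypothetical annual deductible

def annual_cost(premium_per_month, deductible, medical_bills):
    """Total paid for the year: premiums plus bills up to the deductible
    (simplification: the insurer covers everything above the deductible)."""
    return premium_per_month * 12 + min(medical_bills, deductible)

for bills in (0, 1500, 50000):  # a healthy year, a few doctor visits, a serious illness
    full = annual_cost(full_premium_per_month, 0, bills)
    hd = annual_cost(hd_premium_per_month, hd_deductible, bills)
    print(f"${bills:>6} in medical bills -> full coverage: ${full}, high-deductible: ${hd}")
```

In a healthy year the high-deductible plan costs far less, and even in a catastrophic year the most you pay is the premiums plus the deductible, not the full bill.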


“Marriage” and the Law

When it comes to same-sex marriage, individual opinions are often predetermined by personal beliefs, morals, and yes, even religion. But the issue is not so much a moral or religious one as a legal one. Nevertheless, I am going to preface this post by stating that I am in favor of same-sex unions having the same rights as heterosexual ones.

I use the term “unions” because I believe the term “marriage” has no place in our legal or political systems. Marriage is, in fact, a religious term, which explains why religious communities are so often less receptive to the word being applied to non-heterosexual couples.

It’s not prejudice, it’s logic. Marriage is defined by Judaism, Christianity, and Islam (I will focus on these three religions since they make up the majority of the American populace) as a covenant, or agreement, before God between a man and a woman, declared during a wedding or other public ceremony in the presence of some sort of representative of God (i.e., a preacher), and then consummated as the final step (Source: Definition of Marriage: What Does the Bible Say?). Though there is some debate about the order, the actual ceremony, and the role of consummation, these elements are the same across all three religions, and the concept of an exclusively heterosexual marriage is uncontested. So of course, many religious people take offense at that term being applied to other unions that do not fit those sacred characteristics. Which raises the question: Why is this religious term in our secular government?

The legal definition of marriage in the United States is:

The legal status, condition, or relationship that results from a contract by which one man and one woman, who have the capacity to enter into such an agreement, mutually promise to live together in the relationship of Husband and Wife in law for life, or until the legal termination of the relationship. (Source: Marriage: Legal Definition)

What’s more, with this official term come official responsibilities and rights: rights that range from the ability to care for a child, to family visitation rights in hospitals, to qualifications for insurance coverage. And in order for the LGBTA community to have access to those rights, the legal definition of marriage must somehow be made to apply to them. Consequently, this group has been pushing, protest by protest and state by state, to change the legal definition of the term.

However, I believe that this term has no place in the law and should be replaced entirely by the term “union.” Though the legal definition of marriage does not speak of a contract in the eyes of God, or even of the role of consummation, it still stems from a religious concept. That being said, I also believe that the term “union,” if it ever does replace “marriage,” should not specify the two parties as male and female, allowing the LGBTA community the same rights as heterosexuals.


Unclear Promises and Empty Rhetoric: Obama’s State of the Union Address

“Here’s a reality about a second-term presidency: You have a narrow window — at the beginning of the term — to persuade Congress to do something big. For Ronald Reagan, it was tax reform (which he achieved); for Bill Clinton, it was education reform (which failed); and for George W. Bush, it was Social Security reform (which crashed and burned). And this is perhaps the best way to view President Obama’s State of the Union address at 9:00 pm ET tonight. It is essentially his last chance to lay the groundwork for domestic achievements.”

“First Thoughts: Obama’s Last Chance to Go Big”

I turned on CNN tonight expecting these big domestic policy pushes. I watched President Obama make his way to the podium, expecting to see a freshly re-inaugurated executive spearhead reforms for Congress, with concrete answers to the two most important questions in public policy: “Why?” and “How?” I expected rhetoric more riveting and inspiring than anything campaign-trail Obama ever produced. I expected to see the most powerful leader of the free world use his people’s mandate to institute positive change.

To be perfectly honest, none of my expectations were met.

The President began his speech with a John F. Kennedy quote about the purpose of the State of the Union Address. To me, this set a bad tone from the get-go: quoting a popular former president struck me as piggybacking on another man’s popularity, as opposed to inspiring a divided Congress with original, and perhaps more adequately tailored, rhetoric.

As Obama transitioned from his introduction to his main policy points, he threw in some crowd-pleasers such as “supporting the middle class” and “equal opportunity for each child.” And while this brought many Democrats to their feet, there were no phrases that truly stuck with me. I kept waiting for a climax, a purpose, something passionate enough to inspire. But it never happened.

Moving on to his policy proposals, I was once again disappointed. I heard little mention of concrete solutions. There were kernels of good ideas in his deficit reduction and green energy programs, but I did not hear any solid legislative proposals or methodology. He also spent very little time on each issue, trying instead to cover as many domestic policy areas as possible, and even dipping into foreign policy and nuclear disarmament. Though he spoke at a steady, even pace, I felt myself being rushed through issue after issue, as if Obama didn’t want me to dwell on his proposals long enough to find parts to question. The one concrete idea he proposed was raising the minimum wage to $9, which made my inner economist cringe (because a minimum wage is a price floor, and a price floor set above the equilibrium wage creates a surplus of workers, i.e., unemployment).
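
For anyone curious, here is the textbook logic behind that cringe, sketched with a made-up linear labor market (the supply and demand curves below are purely illustrative, not estimates of the actual labor market):

```python
# Toy linear labor market: a binding price floor (minimum wage) creates a surplus.
# The coefficients are made up solely for illustration.
def labor_supplied(wage):
    return 100 * wage          # workers willing to work at this wage

def labor_demanded(wage):
    return 1400 - 100 * wage   # jobs employers are willing to fill at this wage

# Equilibrium: 100w = 1400 - 100w  =>  w = $7.00, with 700 jobs filled.
for wage in (7.00, 9.00):      # equilibrium wage vs. a $9 floor
    s, d = labor_supplied(wage), labor_demanded(wage)
    print(f"wage ${wage:.2f}: supplied {s:.0f}, demanded {d:.0f}, surplus {max(s - d, 0):.0f}")
```

In this toy market, a $9 floor leaves 900 people wanting work but only 500 jobs on offer: a surplus of 400 workers who would have been employed at the lower equilibrium wage.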

Furthermore, the President had emphasized at the beginning of his speech that none of his policy suggestions would increase the deficit by a single penny. But right after this sweeping statement, he began going through policy after policy, suggesting the creation of new government programs and the expansion of existing ones. By definition, this means increased government spending, which, absent offsetting revenues or cuts, increases the deficit. I was expecting some sort of explanation after each proposal about how the President would counter these expenses with revenues or cuts elsewhere (in other words, I was waiting for him to back up his earlier statement), but Obama made no such mention. Marco Rubio was quick to criticize this afterwards in the Republican rebuttal to the President’s speech.

“Instead of striking a conciliatory tone and proposing compromises, as he did throughout much of his first term, Mr Obama laid out an unashamedly partisan agenda. He reiterated past calls not just for higher taxes on the rich, but also for more restrictive gun laws and for concerted action to slow climate change—all ideas which Republicans abhor, and which will therefore struggle to make headway in the House of Representatives, which is under Republican control.”

“Obama asks for more” – The Economist

The President’s speech ended with a sweeping presentation of gun control reform. He addressed special guests in the crowd, directly referencing three people associated with gun violence or heroism, and he preceded every policy proposal with a tragic story of gun violence, leaning heavily on emotional appeal. I found these tactics staged, inappropriate, and, quite frankly, slimeball moves for a person in his position. Don’t get me wrong, what happened in Newtown was horrific, and there are certainly many lessons to be learned from it; but emotional stories should not serve as the basis for public policy.

I created this blog for two linked purposes: to better educate myself about political issues, and to try to inspire others to do the same. Seeing the President of the United States deliver such a speech before Congress, devoid of inspiration, originality, or concrete specificity, truly sickens me. How can Americans make informed decisions when our politicians don’t believe us capable of understanding more than empty promises and emotional stories?


The Lessons of 2008

The stock market and housing bubble fiasco of 2008 left many Americans wondering if we were going to experience another depression. Politicians, Wall Street bankers, and even economists were dumbstruck that the goldmine of credit backed by real estate had run dry, sending both the domestic and the global economy reeling. But why did no one foresee this? What caused so many institutions to make such bad choices? And what does this mean for future economic policy?

“Aside from signaling the end of an era for Lehman Brothers and Merrill Lynch, this weekend’s activity definitively drew a line at the end of another historical era: the Age of Glass-Steagall.

Recent events on Wall Street—the failure or sale of three of the five largest independent investment banks—have effectively turned back the clock to the 1920s, when investment banks and commercial banks cohabited under the same corporate umbrella.”

“Shattering the Glass-Steagall” – Sept. 15, 2008

In order to understand what happened in 2008, we need to look back to what caused the banking failures of 1929 and, by extension, the Great Depression. Beginning in the early 1900s, commercial banks started underwriting* securities at great risk for enormous profits in the booming stock market. That is, until 1929, when Black Tuesday set off the fuse on the growing risk bombs, causing the failure of some 5,000 commercial banks (4,000 of which fell within four days of October 29th).

*In underwriting, a bank guarantees to furnish a definite sum of money by a definite date to a business or government entity in return for an issue of bonds or stock. (Source: New York Times: Glass-Steagall Act (1933))

In response, FDR signed one of my favorite pieces of legislation, the Glass-Steagall Act, into law in 1933. Originally part of the New Deal program, the act prohibited commercial banks from engaging in the investment business and created the FDIC, which insures bank deposits with money appropriated from banks (typically up to $100,000 per bank customer). It successfully restored public confidence in banking practices during the Great Depression and became a permanent measure in 1945, when World War II began turning the broken wheels of the economy. And for over half a century, until 1999, Glass-Steagall helped prevent large-scale banking failures.

With the advent of the Internet, the late ’90s were characterized by an economic boom the likes of which had not been seen since World War II or the Roaring ’20s. The stock market soared, as did the standard of living and the value of real estate. And commercial banks wanted a piece. Combined with a revival of classical economic theory (which argues for less government interference), that appetite meant Glass-Steagall stood no chance against repeal. While the FDIC remained, the Gramm-Leach-Bliley Act of 1999 repealed Glass-Steagall’s restrictions on affiliations between banks and securities firms. Free to invest in securities once more, banks did just that.

Here’s a video that describes just what happened in the early 2000’s, and how that directly contributed to the recent recession:

The Crisis of Credit Visualized (YouTube)

As the video shows, the constant passing of “credit bombs” had a snowball effect, steadily increasing the risk of each investment. And the resulting bank failures plunged the U.S. into a recession comparable to the 1930s (though it should be noted that this recession is still not nearly as bad as the Great Depression).

So what does this mean for economic theory and U.S. economic policy? It means that our current models, Keynesian and classical, are outdated, and that FDR’s Glass-Steagall should have marked the incorporation of a new factor into those models: the banking industry. Because when it comes to the economy, government action cannot be limited to fiscal and monetary policy alone; it must also include a banking policy.

So far, that policy has amounted to little more than bailouts and upholding the dangerous Gramm-Leach-Bliley Act. But this is not enough. Glass-Steagall needs to be reinstated, and further measures need to be researched and implemented. If that doesn’t happen, the wheel of the business cycle will undoubtedly turn south once more, perhaps in 20 or 30 years, and 2008 will repeat itself all over again.


From Smith to Keynes: A History of Economic Policy

The economic history of the past hundred years can be divided into three periods, each guided by one of two different economic theories: classical and Keynesian economics.

Economic Policy Through the Lens of History

At the turn of the 20th century, classical economic theory was dominant. There was fairly universal consensus that when it came to markets, the government had no place. This of course doesn’t mean that the economy at the time was in a constant boom; on the contrary, it repeatedly fluctuated between panics and periods of growth. Economists and politicians accepted this pattern, known as the business cycle, and were perfectly content to wait out the bad times with the knowledge that good times were just around the corner. Nevertheless, in response to a particularly nasty recession, the Federal Reserve Act was passed in 1913*, creating a lender of last resort for banks in the hopes that it would prevent, or at least mitigate, future panics.

Of course, October 24, 1929* changed things a bit. As bank after bank closed and the unemployment rate steadily rose, it became clear that this economic panic would be bigger and more devastating than any before it. In response, FDR, Congress, and the Fed began experimenting with new ways of stimulating the economy. The Fed used its newly established power of monetary policy and, unfortunately, made some misguided decisions. In an effort to curb inflation and discourage banks from making poor investments, the Fed raised interest rates, decreasing the money supply. This action not only discouraged investment in general (which decreased total output and further increased the unemployment rate), but also prevented banks from borrowing money to stay in business, causing more of them to fail. Congress, on the other hand, made things slightly better. FDR’s famed alphabet-soup programs increased government spending, creating jobs for many unemployed workers and benefits for ordinary citizens. And with these programs, fiscal policy was born.

We don’t know just how successful FDR’s fiscal policy was, because World War II came after only nine years of a plateauing depression and created a sudden demand for jobs, businesses, and investment. There is little doubt among economists that the war which claimed so many lives was also largely responsible for restarting the economy. But the lesson of the Depression had been learned, and beginning in 1946*, the government turned to the ideas of a British economist named John Maynard Keynes.

*Source: US Economic Timeline

Keynesian economics is fairly straightforward: it claims that the government is responsible for controlling the economy, creating jobs for the unemployed, and countering the effects of the business cycle. In times of economic expansion, the government should keep the economy from growing too fast by raising taxes and cutting government spending. In times of economic recession, the government should stimulate the economy by cutting taxes and raising government spending. Monetary policy works similarly, with high interest rates in times of expansion (to discourage investment) and low interest rates in times of recession (to encourage investment). (Here’s a useful link that helps explain fiscal and monetary policy.)
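
The countercyclical rule of thumb is simple enough to write down. Here is a deliberately over-simplified sketch of it (it ignores lags, magnitudes, and everything else that makes real policy hard):

```python
# A deliberately over-simplified sketch of the Keynesian countercyclical rule of thumb.
def keynesian_prescription(output_gap_pct):
    """output_gap_pct > 0 means the economy is overheating; < 0 means recession."""
    if output_gap_pct > 0:      # expansion: cool the economy down
        return "raise taxes, cut government spending, raise interest rates"
    elif output_gap_pct < 0:    # recession: stimulate the economy
        return "cut taxes, raise government spending, lower interest rates"
    return "hold steady"

for gap in (+3.0, -4.0):
    print(f"Output gap {gap:+.1f}%: {keynesian_prescription(gap)}")
```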

The Great Depression had taught politicians that the economy cannot go entirely unregulated; however, a new debate had sprung up over how much regulation was needed. In the 1990s in particular, many so-called “classicalist” economists insisted that there was too much regulation, and successfully pushed for less. Furthermore, while contractionary fiscal policy may be good for the economy, it does not sit well with the public or politicians. (Because how many people can get elected campaigning for more taxes and less government spending?)

Another issue with Keynesian economics is that it focuses largely on the demand side of the economy. Until the mid-1970s, economists looked largely at output (fiscal policy) and the demand for money (monetary policy), but the oil shocks of 1973 and 1979 changed that: they were supply shocks, producing high inflation and high unemployment at the same time, a combination that demand-side tools could not fix. The attempt to keep oil prices low only decreased supply further and caused greater shortages.

In case you haven’t picked up on it, most economic models were developed as a result of major crises. With each panic, recession, or depression, fiscal and monetary experiments allowed economists and politicians alike to learn how to stroke the economy’s fur in the right direction.

 

 
