Pharmaceuticals and Patent Life

When a pharmaceutical company announces a new drug, it has typically already patented it, making it the only one legally allowed to manufacture that drug or any other drug deemed similar enough. While a drug is under patent, the company often keeps its price high, allowing it to make a profit off of the drug. Many criticize pharmaceutical companies for this, since profiting off the wellbeing of others is something most people agree isn't ethical. Pharmaceutical companies, on the other hand, argue that they have to price drugs high while they still hold the patent in order to cover all the costs of making the drug in the first place.

Drug patents last for 20 years from their filing date. While that may seem like a long time, most drugs get patented very early in development. Drug discovery companies want to reserve that chemical space to prevent other companies from poaching their work and patenting it themselves. However, they also have to balance that against patenting too early: you don't want your patent to run out before you can release the drug to market. When one considers that most drugs take 12-15 years to reach the market after their initial filing date, the effective patent life is typically only around 5 years.
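
To make the arithmetic concrete, here's a minimal sketch in Python (the development timelines are illustrative assumptions, not figures for any particular drug) of how the effective exclusivity window shrinks as development drags on:

```python
# Effective patent life = patent term minus development time after filing.
# The development timelines below are illustrative assumptions.

PATENT_TERM_YEARS = 20

for development_years in (10, 12, 15, 18):
    effective_exclusivity = PATENT_TERM_YEARS - development_years
    print(f"{development_years} years of development -> "
          f"{effective_exclusivity} years of market exclusivity")

# With the typical 12-15 year path to market, only about 5-8 years
# of exclusivity remain to recoup the investment.
```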

Pharmaceuticals can take years to make and can be inaccessible for the first few years they’re on the market because of high pricing.

In those 5 years, pharmaceutical companies need to recover the roughly $3.5 billion spent to develop the drug, and then some, in order to ensure that their investors have an incentive to invest with them again in the future. So while it is problematic that pharmaceutical companies try to make as much profit as they can in those 5 years, if they didn't, no investor would give them their money, which would mean no drug gets developed. Considering that cancer, genetic conditions, and other illnesses aren't going anywhere, we will continue to rely on these pharmaceuticals, and unless the government steps in to sponsor development and provide incentives, this system isn't going to change any time soon.

In order to ensure the accessibility of medicines, many believe that there needs to be serious patent reform.

Another perspective in this debate comes from those who feel that the 20-year patent life is too short, especially when copyrights on art last much longer. They argue that pharmaceutical advancements are far more impactful than entertainment, so why should there be a smaller monetary incentive for people to pursue their development? However, many feel that there should be a tighter limit on pharmaceutical patents precisely because of their importance to the broader community: people should be able to purchase and use these medicines without incurring significant damage to their financial wellbeing.

Obviously this is a complex issue that won't be settled overnight; rather, it will be gradually shaped over decades by the changing sentiments of investors, the government and its role in pharmaceutical discovery, and the general populace.

The Rise of Antibiotic Resistance

WWII was one of the most influential events of the previous century. If any of its outcomes had been different, the world would be drastically different than it is today. Many factors contributed to the Allies winning the war, but one of the most influential was not the development of the atomic bomb or the industrial mobilization of America; rather, it was the development of penicillin.

Penicillin was the first antibiotic ever developed, and it allowed the Allies to fight off infections in their soldiers and save more troops. In WWI, more troops had died from infection than from wounds inflicted in battle. Because the Allies were able to keep more of their troops on the ground, they were able to overpower the Axis powers in Europe.

Alexander Fleming, a Scottish bacteriologist, discovered penicillin in 1928.

Following the discovery of penicillin, there was a rush to discover more antibiotics to combat diseases caused by bacterial infections. However, as more and more antibiotics were used, scientists noticed that the effectiveness of antibiotic treatment was rapidly declining: bacteria had mutated in ways that made them resistant to antibiotics.

Nowadays, many of the companies and corporations that started out developing antibiotics dedicate their time and effort to developing small-molecule therapeutics that target genetic illnesses or oncology treatments. No corporation or investor is willing to spend billions of dollars developing an antibiotic that will lose its effectiveness within a few years of being introduced to the public, or alternatively, be indefinitely shelved by the FDA as a "break in case of emergency" antibiotic.

Graph illustrates the trend away from antibiotic development over the past few decades.

However, antibiotics remain the first line of defense when it comes to staving off an infection. Antibiotic resistance is therefore still on the rise, and it's an increasingly pressing threat to public health.

One of the main issues preventing meaningful moves to combat antibiotic resistance is the spread of misinformation by those in power. A particular case occurred during the anthrax panic, when politicians and other influential individuals were being sent anthrax in their mail. Anthrax is a very dangerous infection of the lungs caused by B. anthracis that is almost always fatal.

During this scare, politicians urged people to take antibiotics as a "preemptive measure" against infection. Clearly, they had no idea how antibiotics work: they kill bacteria that are already there.

By applying an antibiotic “preemptively”, you inadvertently confer an advantage on those one or two bacterial cells that — by random chance — happen to have a mutation that makes them resistant to the antibiotic. These bacterial cells will quickly take over the population, and just like that, the antibiotic is no longer effective.
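
To see why "preemptive" use backfires, here's a toy simulation in Python (all parameters are illustrative assumptions, not a model of any real pathogen): the antibiotic wipes out most susceptible cells each generation, while the rare resistant cells grow unhindered and quickly dominate the population.

```python
# Toy model of selection under antibiotic pressure.
# All starting counts and rates below are illustrative assumptions.

susceptible = 1_000_000   # cells the antibiotic kills effectively
resistant = 2             # cells carrying a resistance mutation

GROWTH = 2.0              # both types double each generation
KILL_RATE = 0.99          # fraction of susceptible cells killed per generation

for generation in range(1, 11):
    susceptible = susceptible * GROWTH * (1 - KILL_RATE)
    resistant = resistant * GROWTH
    total = susceptible + resistant
    print(f"gen {generation:2d}: resistant fraction = {resistant / total:.4f}")

# Within a few generations the resistant lineage dominates the population,
# and the antibiotic no longer clears the infection.
```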

Clearly there is a need for policies that educate the public about antibiotics and their importance, especially as antibiotic resistance is likely to become a major public health issue in the 21st century. However, given the nation's response to the Covid-19 pandemic, such attempts do not look promising.

Direct to Consumer Advertisements

If you've ever watched TV in the last 10 years, then you've definitely seen your fair share of pharmaceutical advertisements. These direct-to-consumer ads illustrate the benefits of a product and the symptoms it treats, and urge viewers to ask their doctor if that medication is right for them. While these direct-to-consumer ads now seem integral to our experience of watching television, the United States is one of only two countries that allow pharmaceutical companies to advertise their products in this way.

An example of a DTC advertisement for foot pain

Before the rise of such advertisements, pharmaceutical companies focused primarily on outreach to medical professionals and doctors. Back then, it was expected that medical professionals would evaluate the medicine and determine whether or not to prescribe it to their patients. However, in tandem with a rising sentiment that individuals should be involved in making their own healthcare decisions, pharmaceutical companies began to spend money on direct-to-consumer (DTC) advertisements. In 1996, pharmaceutical companies spent $550 million on drug ads. This increased more than 10-fold by 2020, reaching $6.58 billion annually.

Like all such developments, there are many different views on how beneficial or detrimental DTC advertising is, especially as user data becomes increasingly accessible and personalized advertisements become more pervasive.

On one side, people view these drug ads as serving a public good: they educate consumers, increase engagement with beneficial medications that might otherwise fly under the radar in favor of a less effective alternative, and encourage individuals to play an active role in their health instead of taking a back seat and potentially being mistreated or misdiagnosed.

However, there is concern regarding the boom in DTC advertisements over the past two decades. Many worry that they may push people toward more expensive branded medications rather than equally effective, cheaper generic options, and that people will automatically turn to prescriptions before attempting lifestyle changes or adopting healthier behaviors. Individuals may also become anxious that they have a condition they don't, which may lead to unnecessary prescription of drugs that may actually be detrimental to their health.

Additional concerns arise when one takes into account the rise in demand for personalized healthcare. On one hand, obtaining personalized treatment would benefit patients. On the other, using personal data to advertise prescriptions that an algorithm believes you need raises serious privacy issues.

An illustration of how personalized medicine is catered to the individual.

It is unlikely that DTC advertisements will go away any time soon. However, as media changes over time, the advertisements may shift to better capture public attention, swapping TV channel spots for shorts on TikTok and Reels on Instagram.

Clinical Trials and Fatal Conditions

In my previous blog post, I touched upon some of the ethical concerns around clinical trials, mostly in regard to how they are typically run and the ethics of running (or not running) them for rare diseases. I briefly touched upon the dilemmas surrounding running a clinical trial when the condition being treated is fatal, but we're going to delve a bit deeper into that now.

As I previously highlighted, there are many steps and checkpoints in the clinical trial process. Phase I tests the drug in healthy individuals to check for side effects, Phase II tests for the effectiveness of the drug against the disease, and Phase III confirms the Phase II findings in a larger population.

In some cases, however, there are exceptions to this pipeline. One example is patients who have tried all other standard options against a (usually terminal) disease; typically these are cancer patients who have exhausted all of their alternatives. In these circumstances, most have deemed it ethical to go straight to cancer patients in Phase I rather than healthy individuals: because those patients may have only a few more months left to live, many of the health considerations enforced for treatments of commonplace conditions like diabetes are waived.

Pfizer recently sponsored an advertisement showcasing all the research that has gone into combatting cancer and the work that they continue to do.

While there are more ethical dilemmas to unpack here, one that immediately comes to mind is whether it is ethical to run the standard double-blind study with these patients, given that such a study is generally considered a requirement for FDA approval.

A double-blind study involves the use of a sugar "placebo" pill in order to control for the placebo effect, where a patient gets better not because the drug is actually doing anything but because they're told that it will. To prove the efficacy of a drug, one needs to confirm that the placebo effect is not the cause of the improvement. To get around this, researchers developed the double-blind trial, in which half of the patients are administered the actual drug and the other half are given a placebo. To ensure that the impact of the placebo effect is minimal, none of the patients know which group they belong to, and neither do the doctors directly administering the trial.

An infographic illustrating how a double-blind study works.
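
As a rough illustration of the mechanics (a simplified sketch with made-up patient IDs, not an actual trial protocol), the blinding comes down to a randomized assignment table that neither patients nor treating doctors can see until the study is unblinded:

```python
# Simplified sketch of double-blind assignment.
# Patient IDs and the 1:1 drug/placebo split are illustrative assumptions.
import random

patients = [f"patient_{i:03d}" for i in range(1, 21)]
random.shuffle(patients)

half = len(patients) // 2
assignments = {p: "drug" for p in patients[:half]}
assignments.update({p: "placebo" for p in patients[half:]})

# This table is held by an independent coordinator; patients and the
# doctors administering treatment never see it during the trial.
def unblind(patient_id):
    return assignments[patient_id]

# Only after the trial ends are outcomes matched to groups, e.g.:
print(unblind("patient_001"))
```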

While a double-blind design is necessary from a research perspective, and ensures in the long term that a drug is effective, an ethical dilemma arises when it comes to purposefully withholding medications from patients who could benefit from them. This issue is still being debated within the oncology community, as that is the most contentious case. A research paper from 2015 proposed that a double-blind study in cancer patients can be considered ethical, based on its findings regarding the survival rates of patients in cancer clinical trials. However, considering it's been 9 years since that study was conducted, perhaps the field of clinical oncology has shifted since then.

Thus the debate continues: Is it worth it to prove the absolute efficacy of a drug if it means putting individuals at risk? Or is it more dangerous in the long run to utilize a therapeutic that isn’t fully proven to work?

Running Clinical Trials

The most costly and time-demanding part of the drug discovery timeline is clinical trials. In fact, everything that happens before this stage is designed to get the drug to "quit early" so that pharmaceutical companies can avoid the expense of running clinical trials for a drug that is going to fail out in the first phase. Beyond the costs of clinical trials, there are a number of ethical and civic issues surrounding them; some have partial solutions, while others have been left untouched.

Before we delve a bit deeper into these issues, let’s lay the groundwork for how clinical trials are usually carried out. For pharmaceuticals targeted at everyday ailments and conditions, diabetes for example, there is a very well defined procedure for conducting clinical trials. For conditions like cancer, there are some exceptions.

Phase I tests the medication in healthy individuals to establish that it is safe and to identify side effects. This phase also helps researchers determine the proper dose level and the lifetime of the drug in the human body. Up until this point, studies have been conducted in animals and other model systems. While researchers can predict what the dosage should be from those studies, it is impossible to know for sure without seeing it happen in humans. Once a safe dosage range has been determined, the study moves on to Phase II of clinical trials.

Infographic illustrating the stages of clinical trials.

During Phase II, researchers work to determine the efficacy of their drug: does it actually work? Once this phase is over and the results look promising, the company responsible for developing the drug moves on to a much larger and longer Phase III study.

Phase III aims to confirm the efficacy observed during Phase II on a larger scale and with a much more diverse patient population. If a drug passes Phase III, the company responsible submits a New Drug Application and waits for FDA approval to begin selling the new prescription.

This entire process has requirements beyond just time and money. It requires a “suitable” patient population. To run clinical trials, you need to have enough patients to do so, and not every condition can provide this.

In the case of rare diseases, it is difficult to develop new medicines because the patient populations are so small. In some cases, attempting to run a proper Phase I stage of clinical trials would involve, and potentially cure, the entire patient population.

Infographic illustrating how few rare diseases have therapeutic solutions directed at them.

While on paper this seems like a great thing, in reality it means that few pharmaceutical companies get anywhere close to rare diseases. Most drug development companies work by raising money for their disease targets, and it would be near impossible to raise anywhere near the required amount for a disease for which investors were all but guaranteed not to get their money back (especially considering that success rates are already so low for normal drug targets).

Beyond rare diseases, there are additional ethical concerns when it comes to running a clinical trial: What do you do for pressing and fatal diseases? How do you decide who can join a clinical trial? How much, if at all, do you compensate the patients?

This blog post has already gotten quite long, so I'll continue it in the next one.

Properly Pricing Pharmaceuticals

At some point or another, we've all used over-the-counter drugs. Whether for a headache, runny nose, flu, or cold, most of us have made that trip to the pharmacy to find some remedy for our ailment. In most cases, though, these are one-off purchases that happen once every few months, or even once a year.

CVS Pharmacy and other pharmacies are where most people obtain necessary medications and remedies to common ailments.

In a few cases, however, some prescriptions are required at a regular frequency: every day, once a week, and so on. Such prescriptions are necessary for an individual's ability to improve their quality of life, or even just to live. When a prescription is necessary for the life and good health of an individual, how to price it becomes an ethical conundrum for the pharmaceutical industry. Healthcare is something we all need at one point or another, so putting a price on medicine that can save people's lives or improve them significantly becomes a seriously controversial issue.

In the words of the World Health Organization, "healthcare is a fundamental human right," which means that "everyone should have access to the health services they need, when and where they need them, without suffering financial hardship." The key part of this statement is "without suffering financial hardship." In most cases, the treatment an individual needs exists and is technically available, but due to financial limitations and the often high pricing of newer medicines, people aren't able to receive the treatment they need to survive. An individual's insurance may be able to cover most of the costs, but as medicines become increasingly expensive, the burden on the individual increases as well.

When people learn about this disparity, their first instinct is often to blame the drug discovery and pharmaceutical companies for putting profit over the wellbeing of individuals. This, however, is not the complete narrative. What most people don't realize is that the problem does not lie with pharmaceutical companies themselves but with the challenges involved in developing medicines that are effective, non-toxic, and consumable.

The statistic changes slightly depending on who you ask, but roughly 4% of proposed drugs ever make it to market. That means that roughly 96% of drug candidates FAIL before they make it to the shelf. Not only that, but the cost of developing a drug goes up exponentially the further into the process you go.

Graph showing the estimated cumulative cost of drug development across the timeline (speculative, since some drugs take significantly more and some take less).

Costs can run into the billions of dollars, only for a candidate to fail before or during clinical trials (the final test a drug must pass before it can be approved by the FDA).
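
One way to see how the numbers compound is to weight each stage's cost by the chance that a candidate even reaches that stage, then divide by the overall success rate. The stage costs and pass rates below are illustrative assumptions, not published figures, but they show how a roughly 4% success rate turns modest per-stage spending into billions per approved drug:

```python
# Back-of-the-envelope cost per *approved* drug.
# Stage costs (in $ millions) and pass rates are illustrative assumptions.

stages = [
    # (stage name,  cost if reached, probability of passing)
    ("discovery",     50,  0.45),
    ("preclinical",   75,  0.60),
    ("phase I",      100,  0.65),
    ("phase II",     250,  0.40),
    ("phase III",    600,  0.60),
    ("FDA review",    50,  0.90),
]

prob_reached = 1.0
expected_cost_per_candidate = 0.0
for name, cost, pass_rate in stages:
    expected_cost_per_candidate += prob_reached * cost
    prob_reached *= pass_rate

overall_success = prob_reached
cost_per_approval = expected_cost_per_candidate / overall_success

print(f"overall success rate: {overall_success:.1%}")   # roughly 4%
print(f"expected spend per candidate: ${expected_cost_per_candidate:.0f}M")
print(f"effective cost per approved drug: ${cost_per_approval:.0f}M")
```

Under these assumptions, the many failures have to be paid for by the few drugs that succeed, which is why the cost per approved drug lands in the billions even though no single candidate's budget looks that large.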

So what is the solution? How do we ensure that patients get the care and medicines they need, while also ensuring that companies can keep creating medicines for currently incurable diseases? It is an issue that stretches far beyond the realm of pharmaceuticals and will require properly informed legislation to be passed in order for any meaningful impact to be made.