Mini Review - (2024) Volume 15, Issue 5
Received: 20-Jan-2021, Manuscript No. JCRB-24-8020; Editor assigned: 25-Jan-2021, Pre QC No. JCRB-24-8020 (PQ); Reviewed: 08-Feb-2021, QC No. JCRB-24-8020; Revised: 16-Aug-2024, Manuscript No. JCRB-24-8020 (R); Published: 13-Sep-2024, DOI: 10.35248/2155-9627.24.15.501
As with everything involving high costs in U.S. healthcare, the high cost of pharmaceuticals did not happen overnight, nor is there a single reason for where we are now. Unsurprisingly, several factors contributed to the current situation.
Barrueta was correct with his timeline, in that the U.S. began experiencing higher pharmaceutical costs due to factors set in motion in the late 1980s. However, while the late 1980s and 1990s are considered the period during which pharmaceutical prices really began taking off, it is also important to understand the history of pharmaceuticals, how they shaped the U.S. healthcare system, and how and why they ended up becoming as expensive as they are now.
Keywords: Pharmaceuticals; Healthcare system; History; Expensive
Going back to the very early days of “drogs,” many of the first pharmaceuticals were “natural,” in that they came from plants, herbs and shrubs, as well as insects and reptiles. Pharmacologically active substances coming from plants include opium (from the poppy), nicotine (tobacco plants), cannabinoids (cannabis leaves), cardiac glycosides (from the foxglove) and quinine from the cinchona tree [1]. It was not unusual for cave dwellers and then early agrarian societies to experiment with different plants and insects, in an effort to prevent or mitigate disease. However, many of those early societies also attributed disease and illness to spiritual reasons, as opposed to changes in body chemistry.
It was not really until the Age of Reason, when doctors came to be seen as more than individuals who drove out evil spirits, that it was recognized that some diseases could be treated without resorting to prayer. The use of plants and herbs for their medicinal properties continued throughout the 17th century, with isolating and characterizing the active principles in these plants proving a major challenge for chemists of the time. Indeed, it was the scientific revolution of the 17th century, with its spread of rationalism and experimentation, that allowed the industry as we know it today to take off [2].
From apothecaries to research and development
While in the early days people would gather their own leaves and insects with which to make medicines, alchemists (who later became apothecaries) can be considered the forerunners of both the modern pharmaceutical manufacturer and the pharmacist. These alchemist-apothecaries, with their skills in herbology and toxicology, knew how to gather ingredients for, and then make, all types of herbal remedies in hopes of finding cures for medical complaints. It probably goes without saying that treating a patient involved a great deal of trial and error [3].
Friedrich Sertürner is considered the first to have succeeded in separating a beneficial healing chemical from a plant, isolating a plant alkaloid in its pure state; the substance eventually came to be known as morphine. He did this by isolating meconic acid from raw opium; when the base of this compound was administered to a dog, the animal fell into a deep sleep. Not long afterward, other alkaloids were separated and isolated from opium, one of which was codeine. By the mid-1800s, German scientists had begun to dominate the field of analytical and organic chemistry. The major focus of these chemists and early toxicologists was to develop methods for identifying plant alkaloids in blood and human viscera, more to detect poisoning than to advance healing or disease treatment. The first synthetic drugs, again from German organic chemists, were discovered and modified in the 1800s, with Justus von Liebig discovering chloroform, which later became important as a general anesthetic in the hands of the Scottish physician James Young Simpson.
Then came a way for these chemicals to be distributed to the general public. The roots of American pharmacies lay in English shops, wholesalers and general stores [4]. Although almost all medicines were initially imported from England, the Revolutionary War led to the development of domestic sources of medicine. During the 17th century, these drugs could be found in general stores, as part of a multi-purpose dispensary.
At the time, apothecaries were, in a sense, physicians. They would diagnose ailments and diseases, then prepare and/or compound medical products, while the pharmacist or druggist owned the dispensary. Most of these pharmacists relied on the materia medica, a collection of the therapeutic properties of medicines, a body of knowledge later known as pharmacology. Prescriptions were not needed in those days, simply because the medical profession of the time was vastly different from the one we know today [5].
The world wars and their aftermath
Between World War I and World War II, one major competitive strategy among pharmaceutical companies was research. Before the wars, the focus of these companies (many of which started life as chemical manufacturers) was development and distribution. During the interwar years, however, many pharmaceutical companies established in-house laboratories while forging collaborative relationships with academic biomedical, chemical and clinical researchers through grants-in-aid and fellowships.
For example, in 1935, I.G. Farbenindustrie of Germany discovered sulfanilamide, an anti-infective agent, while screening dyes for antimicrobial activity. Following this discovery, industrial and academic researchers began screening both chemical and natural compounds for antimicrobial activity, leading to the isolation of hundreds of different antibiotic agents, including penicillin in 1940.
The development of these drugs launched what was considered a “therapeutic revolution”: for the first time, physicians had drugs that could cure patients of infections rather than simply relieving symptoms. Penicillin had originally been discovered by Alexander Fleming in the late 1920s; however, a government-supported collaboration among Merck, Pfizer and Squibb enabled the drug to be mass-produced during World War II [6].
Also developed during this period was insulin, for the treatment of diabetes. Frederick Banting was able to isolate material to treat insulin deficiency, which leads to problems with high blood sugar. But it was only in collaboration with scientists at Eli Lilly that he and his colleagues were able both to purify the extract and to produce and distribute it as an effective medicine.
Also at this time, a shift in production took place. More of the old-line pharmaceutical firms began to produce their own fine chemicals in-house, directly competing with Pfizer and Merck. Meanwhile, the fine chemical manufacturers were themselves producing innovative pharmaceutical compounds, but because they lacked the capacity to market the drugs themselves, they still sold the active ingredients to the pharmaceutical companies, which in turn packaged and marketed them under their own names. Accordingly, during the 1950s, Pfizer built up its marketing organization, while Merck merged with the pharmaceutical company Sharp & Dohme [7].
Price fixing and other problems
With more resources committed to research and development, more drugs were introduced, such as those to control hypertension. The 1940s and 1950s are accordingly considered decades of intense innovation by the American drug industry. This was also the period during which the United States replaced Germany as the world's leading pharmaceutical innovator.
Adding to this was assistance from generous government funding. For instance, the National Institutes of Health saw its federal funding increase to nearly $100 million by 1956, an investment that helped fuel the development of new drugs across the growing industry.
However, as the industry became wealthier, concerns unsurprisingly began to arise about the ethical conflict inherent in making money from selling healthcare products. George Merck addressed the issue in 1950, pointing out that “We try never to forget that medicine is for the people. It is not for the profits [8].”
Getting over ulcers
By 1960, 20 pharmaceutical firms accounted for 80% of all U.S. sales. However, the majority of pharmaceutical firms, aside from major players such as Merck and Lilly, did little research or promotion, focusing instead on the packaging and distribution of unpatented or off-patent generic drugs; these firms were typically referred to as generic drug manufacturers. While research-based drug firms operated nationally and internationally, generic manufacturers were geographically dispersed, often operating on a state or regional level.
The drugs produced by these companies sold reliably but did not carry huge profit margins. This all changed when it came to answering the question of how to treat peptic ulcers. Peptic ulcers these days are considered more of a nuisance than the life-threatening disease they once were, thanks in part to medications on the market, as well as to nutritional and other lifestyle changes.
Things were very different in the mid-20th century, however. Peptic ulcers were attributed to the release of excess stomach acid, which eroded the lining of the stomach or upper intestine. The most common treatment involved antacids, rest and bland diets; in extreme cases, surgery could be performed. Left untreated, such ulcers could lead to severe bleeding or even death. At the very least, the disease was unpleasant and had a definite impact on quality of life. Patients and the market were ready for something that could take care of it [9].
The 1990s and price increases
According to a study published in Health Affairs, U.S. spending on pharmaceuticals took off in the late 1990s, tripling between 1997 and 2007. The 1990s are generally recognized as the turning point for sharply escalating drug prices in the United States, mainly because a record number of new drugs were released during that decade. In particular, high-cost blood pressure medications and cancer drugs were released, a result of “the scientific explosion of the 1970s and 1980s, that allowed us to isolate the genetic basis of certain diseases,” which, in turn, helped open “a lot of therapeutic new areas for new drugs,” Harvard Medical School associate professor Aaron Kesselheim told The New York Times.
Also during that period, regulations on television drug advertising were relaxed. More advertising, combined with an increase in FDA approvals fueled by new fees collected from pharmaceutical manufacturers, helped drive the sudden increase in drugs coming to market, as well as their markedly higher prices. Various studies have examined pharmaceutical advertising and its relationship to increases in drug pricing. One study, focused on brands in five therapeutic classes, found that advertising increased demand for those drugs, thereby also increasing sales in those therapeutic classes. In addition to increasing demand, increases in operating costs due to higher promotional spending are generally shifted to consumers, leading to higher prices [10].
Drug price increases did slow down in the 2000s, mainly attributed to a boost in generic drugs along with fewer FDA approvals of blockbuster drugs. Then, in 2014, drug prices began spiking again, driven largely by expensive specialty drugs for diseases such as hepatitis C and cystic fibrosis. Additionally, many of the new drugs are based on recent advances in science, such as the completion of the Human Genome Project. Because these are biologics, there is little competition, which means, in turn, that these newer drugs command relatively higher prices.
Biologics differ from conventional pharmaceuticals in that the former are derived from biological methods (which may involve living cells, requiring additional testing and clinical trials), while pharmaceuticals are chemically synthesized.
Biologic or pharmaceutical, the sense that drugs were overpriced came to a head with the Daraprim episode and Martin Shkreli. The question, however, is whether that spotlight has done anything to meaningfully affect drug prices.
Citation: Boudreau RG (2024) Building Blocks of High Drug Prices. J Clin Res Bioeth. 15:501.
Copyright: © 2024 Boudreau RG. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.