The Rise and Fall of Antibiotics: What Would a Post-Antibiotic World Look Like?

These days, we don’t think much about being able to access a course of antibiotics to head off an infection. But that wasn’t always the case – antibiotics have been available for less than a century.

Before that, patients would die when relatively trivial infections became more serious. Some serious infections, such as those involving the heart valves, were inevitably fatal.

Other serious infections, such as tuberculosis, weren’t always fatal. Up to half of those with the most severe forms died within a year, but some people recovered without treatment, and the remainder were left with chronic infection that slowly ate away at the body over many years.

Once we had antibiotics, the outcomes for these infections were much better.

Life (and death) before antibiotics

You’ve probably heard of Alexander Fleming’s accidental discovery of penicillin in 1928, when fungal spores landed on a plate of bacteria that had been left out over a long weekend.

But the experience of the first patient to receive penicillin is an instructive example of the impact of treatment.
In 1941, Constable Albert Alexander had a scratch on his face that had become infected.

He was hospitalised, but despite various treatments the infection progressed to involve his head, and one of his eyes had to be removed.

In 1941, Albert Alexander was hospitalised with a severe infection. Photo: Jonathan Borba/Pexels

Howard Florey, the Australian pharmacologist then working in Oxford, was concerned penicillin could be toxic in humans. Therefore, he felt it was only ethical to give this new drug to a patient in a desperate condition.

Constable Alexander was given the available dose of penicillin. Within the first day, his condition had started to improve.

But back then, penicillin was difficult to produce. One way of extending the limited supply was to “recycle” penicillin that was excreted in the patient’s urine. Despite this, supplies ran out by the fifth day of Alexander’s treatment.

Without further treatment, the infection again took hold. Constable Alexander eventually died a month later.

We now face a world where we are potentially running out of antibiotics – not because of difficulties manufacturing them, but because they’re losing their effectiveness.

What do we use antibiotics for?

We currently use antibiotics in humans and animals for a variety of reasons. Antibiotics reduce the duration of illness and the chance of death from infection. They also prevent infections in people who are at high risk, such as patients undergoing surgery and those with weakened immune systems.

But antibiotics aren’t always used appropriately. Studies consistently show a dose or two will adequately prevent infections after surgery, but antibiotics are often continued for several days unnecessarily. And sometimes we use the wrong type of antibiotic.

Surveys have found 22% of antimicrobial use in hospitals is inappropriate.

Antibiotics are used for longer than needed and sometimes the wrong type is used. Photo: National Cancer Institute/Unsplash

In some situations, this is understandable. Infections in different body sites are usually due to different types of bacteria. When the diagnosis isn’t certain, we often err on the side of caution by giving broad spectrum antibiotics to make sure we have active treatments for all possible infections, until further information becomes available.

In other situations, there is a degree of inertia. If the patient is improving, doctors tend to simply continue the same treatment, rather than change to a more appropriate choice.

In general practice, the issues of diagnostic uncertainty and therapeutic inertia are often magnified. Patients who recover after starting antibiotics don’t usually require tests or come back for review, so there is no easy way of knowing whether the antibiotic was actually required.

Antibiotic prescribing can be more complex again if patients are expecting “a pill for every ill”. While doctors are generally good at educating patients when antibiotics are not likely to work (for example, for viral infections), without confirmatory tests there can always be a lingering doubt in the minds of both doctors and patients. Or sometimes the patient goes elsewhere to find a prescription.

For other infections, resistance can develop if treatments aren’t given for long enough. This is particularly the case for tuberculosis, caused by a slow growing bacterium that requires a particularly long course of antibiotics to cure.

As in humans, antibiotics are also used to prevent and treat infections in animals. However, a proportion of antibiotics is used for growth promotion. In Australia, an estimated 60% of all antibiotics were used in animals between 2005 and 2010, despite growth promotion being phased out.

Why is overuse a problem?

Bacteria become resistant to the effect of antibiotics through natural selection – those that survive exposure to antibiotics are the strains that have a mechanism to evade their effects.

For example, antibiotics are sometimes given to prevent recurrent urinary tract infections, but as a consequence, any infection that does develop tends to involve resistant bacteria.

When resistance to the commonly used first-line antibiotics occurs, we often need to reach deeper into the bag to find other effective treatments.

Some of these last-line antibiotics are those that had been superseded because they had serious side effects or couldn’t be given conveniently as tablets.

New drugs for some bacteria have been developed, but many are much more expensive than older ones.

Treating antibiotics as a valuable resource

Viewing antibiotics as a valuable resource has led to the concept of “antimicrobial stewardship”, with programs to promote the responsible use of antibiotics. It’s a similar idea to environmental stewardship to prevent climate change and environmental degradation.

Antibiotics are a rare class of medication where treatment of one patient can potentially affect the outcome of other patients, through the transmission of antibiotic resistant bacteria. Therefore, like efforts to combat climate change, antibiotic stewardship relies on changing individual actions to benefit the broader community.

Like climate change, antibiotic resistance is a complex problem when seen in a broader context. Studies have linked resistance to the values and priorities of governments, such as levels of corruption, and to infrastructure, including the availability of electricity and public services. This highlights that there are broader “causes of the causes”, such as public spending on sanitation and health care.

Other studies have suggested individuals need to be considered within the broader social and institutional influences on prescribing behaviour. Like all human behaviour, antibiotic prescribing is complicated, and factors such as what doctors feel is “normal” prescribing, whether junior staff feel they can challenge senior doctors, and even doctors’ political views may be important.

There are also issues with the economic model for developing new antibiotics. When a new antibiotic is first approved for use, prescribers’ first reaction is often not to use it, whether to preserve its effectiveness or because it is very expensive.

However, this doesn’t encourage the development of new antibiotics, particularly when pharmaceutical research and development budgets can easily be diverted to drugs for chronic conditions, which patients take for years rather than days.

The slow-moving pandemic of resistance

“If we fail to act, we are looking at an almost unthinkable scenario where antibiotics no longer work and we are cast back into the dark ages of medicine,” said David Cameron, former UK Prime Minister.

Antibiotic resistance is already a problem. Almost all infectious diseases physicians have had the dreaded call about patients with infections that were essentially untreatable, or where they had to scramble to find supplies of long-forgotten last-line antibiotics.

There are already hospitals in some parts of the world that have had to carefully consider whether it’s still viable to treat cancers, because of the high risk of infections with antibiotic-resistant bacteria.

A global study estimated that in 2019, almost 5 million deaths occurred in people with an infection involving antibiotic-resistant bacteria. Some 1.3 million of these deaths would not have occurred if the bacteria had not been resistant.

The UK’s 2014 O’Neill report predicted that, based on trends at the time, deaths from antimicrobial resistance could rise to 10 million each year by 2050, at a cost of 2-3.5% of global GDP.

What can we do about it?

There is a lot we can do to prevent antibiotic resistance. We can:

  • raise awareness that many infections will get better by themselves, and don’t necessarily need antibiotics
  • use the antibiotics we have more appropriately and for as short a time as possible, supported by co-ordinated clinical and public policy, and national oversight
  • monitor for infections due to resistant bacteria to inform control policies
  • reduce the inappropriate use of antibiotics in animals, such as for growth promotion
  • reduce cross-transmission of resistant organisms in hospitals and in the community
  • prevent infections by other means, such as clean water, sanitation, hygiene and vaccines
  • continue developing new antibiotics and alternatives to antibiotics and ensure the right incentives are in place to encourage a continuous pipeline of new drugs.

Allen Cheng, Professor of Infectious Diseases, Monash University.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Most People Don’t Benefit From Vaccination, But We Still Need It To Prevent Infections

Stating that a majority of people won’t benefit from a vaccine ignores the purpose of immunisation programs.

A recent article in The Conversation questioned whether we should all get flu vaccinations, given 99 people would need to be vaccinated to prevent just one case of flu.
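The “99 people” figure is what epidemiologists call the number needed to vaccinate (NNV): the inverse of the absolute risk reduction (ARR). As a rough illustrative sketch (the attack rate and vaccine effectiveness below are assumed round numbers, not figures from that article), if about 2% of unvaccinated people catch flu in a season and vaccination halves that risk:

\[
\mathrm{NNV} = \frac{1}{\mathrm{ARR}} = \frac{1}{0.02 - 0.01} = 100
\]

which is close to the quoted figure of 99.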

But this position ignores the purpose of immunisation programs: whole populations of people need to take part for just a small number to benefit. So how do we decide what’s worth it and what’s not?

Decision-making in public health

When we consider a treatment for a patient, such as antibiotics for an infection, we first consider the evidence on the benefits and potential harms of treatment. Ideally, this is based on clinical trials, where we assume the proportion of people in the trial who respond represents the chance an individual patient will respond to treatment.

This evidence is then weighed up with the individual patient. What are the treatment options? What do they prefer? Are there factors that might make this patient more likely to respond or have side effects? Is there a treatment alternative they would be more likely to take?

In public health, the framework is the same but the “patient” is different – we are delivering an intervention for a whole population or group rather than a single individual.

We first consider the efficacy of the intervention as demonstrated in clinical trials or other types of studies. We then look at which groups in the population might benefit the most (such as the zoster vaccine, given routinely to adults over 70 years as this group has a high rate of shingles), and for whom the harms will be the least (such as the rotavirus vaccine, which is given before the age of six months to reduce the risk of intussusception, a serious bowel complication).

Compared to many other public health programs, immunisation is a targeted intervention, and clinical trials tell us vaccines work. But programs still need to target broad groups, defined by age or other broad risk factors, such as chronic medical conditions or pregnancy.

Risks and benefits of interventions

When considering vaccination programs, safety is very important, as a vaccine is being given to a generally healthy population to prevent a disease that may be uncommon, even if serious.

For example, the lifetime risk of cervical cancer is one in 166, meaning one woman in 166 will be diagnosed with this cancer in her lifetime. So even if the human papillomavirus (HPV) vaccine were completely effective at preventing cancer, 165 of every 166 women vaccinated would not benefit. Clearly, if we could work out which woman would get cancer, we could just vaccinate her, but unfortunately we can’t.
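Framed as a number needed to vaccinate, and keeping the simplifying assumption of a completely effective vaccine, the arithmetic is:

\[
\mathrm{NNV} = \frac{1}{\text{lifetime risk}} = \frac{1}{1/166} = 166
\]

so 166 women must be vaccinated to prevent one cancer.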

It’s only acceptable to vaccinate large groups if clinically important side effects are low. For the HPV vaccine, anaphylaxis (a serious allergic reaction) has been reported, but occurs at a rate of approximately one in 380,000 doses.

An even more extreme case is meningococcal vaccination. Before vaccination, the incidence of meningococcal serogroup C (a particular type of this bacterium) infection in children aged one to four was around 2.5 per 100,000 children per year, or 7.5 cases per 100,000 children over three years.

Vaccination has almost eliminated infection with this strain (although other serotypes still cause meningococcal disease). But this means 13,332 of 13,333 children didn’t benefit from vaccination. Again, this is only acceptable if the rate of important side effects is low. Studies in the US have not found any significant side effects following routine use of meningococcal vaccines.
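The 13,333 figure follows directly from the incidence quoted above, assuming (for illustration) a completely effective vaccine:

\[
\frac{7.5}{100{,}000} \approx \frac{1}{13{,}333}
\]

That is, about one child in 13,333 would have developed the infection over the three years, so that one child is the only one who directly benefits from being vaccinated.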

This is not to say there are no side effects from vaccines, but that the potential side effects of vaccines need to be weighed up against the benefit.

For example, Guillain-Barré syndrome is a serious neurological complication of influenza vaccination, as well as of a number of different infections.

But studies have estimated the risk of this complication at around one per million vaccine doses, which is much smaller than the risk of Guillain-Barré syndrome following influenza infection (roughly one in 60,000 infections). And that’s before taking into account the benefit of preventing other complications of influenza.
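Putting those two risks on the same scale (a rough comparison using only the figures quoted above):

\[
\frac{1/60{,}000 \text{ per infection}}{1/1{,}000{,}000 \text{ per dose}} \approx 17
\]

In other words, the risk of Guillain-Barré syndrome after influenza infection is roughly 17 times the risk after vaccination, even before counting the vaccine’s other benefits.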

Immunisation programs are easier to run at high schools than at primary schools, as high schools are larger. Credit: www.shutterstock.com

What other factors need to be considered?

We also need to consider access, uptake and how a health intervention will be delivered, whether through general practices, council programs, pharmacies or school-based programs.

Equity issues must also be kept in mind: will this close the gap in health for Indigenous or other disadvantaged populations? Will immunisation benefit more than the individual? What is the likely future incidence (the “epidemic curve”) of the infection in the absence of vaccination?

A current example is meningococcal W disease, caused by a strain of this bacterium that is new to Australia. Although it currently affects individuals in all age groups, many state governments have implemented vaccination programs in adolescents.

This is because young adults in their late teens and early 20s carry the bacteria more than any other group, so vaccinating them will reduce transmission of this strain more generally.

But it’s difficult to get large cohorts of this age group together to deliver the vaccine. It’s much easier if the program targets slightly younger children who are still at school (who, of course, will soon enter the higher risk age group).

In rolling out this vaccine program, even factors such as the size of schools (it is easier to vaccinate children at high schools than at primary schools, as high schools are larger), the timing of exams, holidays and religious considerations (such as Ramadan) are taken into account.

For government, cost effectiveness is an important consideration when making decisions on the use of taxpayer dollars. This has been an issue with the meningococcal B vaccine: as it is relatively expensive, the Pharmaceutical Benefits Advisory Committee has found it not to be cost effective.

This is not to say that meningococcal B disease isn’t serious, or that the vaccine isn’t effective. It’s simply that the cost of the vaccine is so high, it’s felt there are better uses for the funding that could save lives elsewhere.

While this might seem to be a rather hard-headed decision, this approach frees up funding for other interventions such as expensive cancer treatments, primary care programs or other public health interventions.

Why is this important?

When we treat a disease, we expect most people will benefit from the treatment. As an example, without antibiotics the death rate from pneumonia was more than 80%; with antibiotics, it is less than 20%.

However, vaccination programs aim to prevent disease in whole populations. So even if it seems as though many people are having to take part to prevent disease in a small proportion, this small proportion may represent hundreds or thousands of cases of disease in the community.

Allen Cheng, Professor in Infectious Diseases Epidemiology, Monash University

This article was originally published on The Conversation. Read the original article.