Are we prescribing antibiotics wrong?
The message has always been clear: make sure to finish your full course of antibiotics. Or else? You risk the development of antibiotic resistance - a one-way street with no way back, which could render one of the most life-saving interventions in modern medicine virtually obsolete. It has long been thought that one of the best ways to prevent resistance is to prescribe long courses of antibiotics, and to ensure that patients do not stop taking them any sooner than their doctors instruct. However, recent discussion in medical circles suggests the case may not be as clear-cut as it initially appears…
How do bacteria develop resistance to antibiotics?
Bacteria develop antibiotic resistance according to a simple principle: because of their cellular properties, some bacteria are more easily eradicated by a particular antibiotic than others - these bacteria are termed ‘susceptible’, while those able to survive the antibiotic are ‘resistant’. When an antibiotic is used to treat an infection, the susceptible bacteria are killed and the resistant ones survive; in this sense, antibiotic treatment ‘selects for’ resistant bacteria. Bacteria can also acquire resistance during a course of antibiotics, by accumulating mutations in their genetic material - ‘genes for resistance’ - which confer the ability to survive.
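This selection process can be illustrated with a toy simulation. The population sizes, kill probabilities, and growth rule below are illustrative assumptions chosen to make the effect visible, not measured values:

```python
import random

random.seed(1)  # fixed seed so the toy run is reproducible

# A population with a mostly susceptible majority and a small
# resistant minority (all numbers are illustrative).
population = ["susceptible"] * 990 + ["resistant"] * 10

def antibiotic_round(pop, kill_susceptible=0.9, kill_resistant=0.05):
    """One round of treatment: susceptible cells are killed with high
    probability, resistant cells with low probability; survivors double."""
    survivors = [
        b for b in pop
        if random.random() > (kill_susceptible if b == "susceptible" else kill_resistant)
    ]
    return survivors * 2  # surviving bacteria reproduce

for _ in range(5):  # five rounds of antibiotic exposure
    population = antibiotic_round(population)

resistant_fraction = population.count("resistant") / len(population)
print(f"Resistant fraction after treatment: {resistant_fraction:.2f}")
```

Even though resistant bacteria start out as only 1% of the population, a few rounds of treatment are enough for them to dominate - the antibiotic does the selecting.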
These effects occur whenever an antibiotic is used, and can be observed in two distinct populations of bacteria. In the first instance, selection for resistance happens in the bacterial population that is causing the infection. This is known as ‘target selection’. However, antibiotics do not only affect the bacteria they are intended to treat. They also affect populations of bacteria which coexist harmlessly with the host. These ‘commensals’, as they are known, are often found on the skin or in the gut, for instance. Bacteria in these populations that are sensitive to the antibiotic are eradicated alongside the pathogen, whereas those that are resistant survive. This is called ‘collateral selection’.
So why is this distinction important? Essentially, in order to prevent the development of resistance effectively, we need to address each unique mechanism. Unfortunately, the preventative strategies which work most effectively for each type of resistance are almost complete opposites of each other:
In terms of target resistance, it was noticed in early studies of infections such as TB (tuberculosis) that if the infection was not treated for long enough, and antibiotics were stopped too early, the infection was able to re-emerge; only this time, it was caused by bacteria resistant to the original antibiotic. The resistant bacteria retained the original infection’s ability to spread from person to person, meaning that they were able to cause resistant infection in a new host. The best way to prevent such resistant bacteria from developing and spreading was to ensure that antibiotics were used for long enough to overcome the infection as effectively as possible. This is what gave rise to the advice about finishing the antibiotic course. For some organisms, even a prolonged course of a single antibiotic isn’t sufficient: for TB, for example, it is now standard practice to use a combination of four antibiotics. The rationale for this is that, even if a bacterium is quite likely to develop resistance to any single antibiotic, it is still highly unlikely to develop resistance to more than one antibiotic at a time. Therefore, by using a combination of antibiotics, the chances of eradicating the infection are maximised and the risk of resistance developing is minimised.
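The arithmetic behind combination therapy can be sketched with back-of-the-envelope numbers. The mutation rate and bacterial load below are assumptions chosen purely for illustration, not clinical data:

```python
# Illustrative figures (assumptions, not measured values):
per_drug_resistance_rate = 1e-8   # chance a given bacterium resists one drug
bacterial_load = 1e9              # bacteria in a hypothetical serious infection

# Expected number of mutants resistant to a single drug:
single_drug = bacterial_load * per_drug_resistance_rate
print(f"Mutants resistant to one drug: ~{single_drug:.0f}")

# Resistance to two independent drugs requires both mutations at once,
# so the per-bacterium probability is squared:
two_drugs = bacterial_load * per_drug_resistance_rate ** 2
print(f"Mutants resistant to two drugs: ~{two_drugs:.0e}")
```

With these made-up numbers, a single drug would be expected to leave a handful of resistant survivors, whereas simultaneous resistance to even two independent drugs is vanishingly unlikely - which is why multi-drug regimens are so effective against target resistance.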
On the other hand, the development of resistance by collateral selection increases with antibiotic use. This is because, with longer courses of treatment, the selective pressure on the commensal bacteria is stronger, making them more likely to acquire mutations that confer resistance. Moreover, commensal bacteria can develop resistance to any antibiotic they are exposed to, so using multiple antibiotics simply means that commensals may develop resistance to a larger proportion of our antibiotic repertoire.
Given that commensals are harmless, why is it a problem if they develop antibiotic resistance? There are several reasons for this: first, commensals can become pathogenic, which means that they acquire the ability to cause disease in their host organism. This can happen when a commensal which usually ‘lives in’ one part of the body is found in another - for example, gut commensals can cause infection if they invade the urinary tract. Commensals can also acquire mutations that allow them to become pathogenic in their usual environment, or they can become pathogenic if the host organism has a compromised immune system, for instance due to chemotherapy. Perhaps more worryingly, commensals can pass between hosts, leading to colonies of resistant bacteria spreading throughout populations. Furthermore, antibiotic-resistant commensals can pass their genes for resistance to pathogenic bacteria of the same species, and they can even trade genes with entirely different strains and species of bacteria, including those that do cause significant disease. In fact, some of the most notorious resistant bacteria, such as MRSA (methicillin-resistant Staphylococcus aureus), are thought to have acquired their resistance in this way.
‘Finish the course’: is the advice wrong?
So where does this leave us in terms of prescribing and taking antibiotics safely and effectively? Overall, it has been shown very clearly that the development of antibiotic resistance is linked directly to antibiotic use, with increasing use leading to greater levels of resistance, largely mediated through the development of collateral resistance. It follows that if we want to halt resistance in its tracks, we should be making an effort to limit our use of antibiotics. Indeed, the consensus in the medical community is slowly moving in that direction. Specifically, a lot of attention has been focused on the fact that much of the advice regarding the length of an antibiotic course isn’t particularly evidence-based: historically, long antibiotic courses were favoured in order to ensure that an infection was not under-treated, because under-treatment could allow an infection to re-emerge, or resistance to develop in the causative bacteria. But research is now beginning to show that, for many common infections, our antibiotic courses are much too long - the infection can be treated effectively, without recurrence, with a considerably shorter antibiotic course than the ones we routinely prescribe. It even looks as though shorter courses of treatment reduce the amount of collateral resistance developing in an individual patient, without increasing the amount of target resistance. In the long run, this effect should mean a reduction in the development of resistance at a population level. As such, some researchers are beginning to suggest that the medical establishment should change its advice to the public regarding antibiotic courses, encouraging doctors and patients to be more flexible in their approach, and more open to individualising treatment. One example of how this might work is a model whereby patients contact their doctors when their symptoms resolve, so that treatment decisions can be re-evaluated more dynamically.
This would be in line with the way in which antibiotic treatment in hospital settings is changing: clinicians are increasingly using clinical signs, such as temperature, along with biochemical markers of infection on blood tests, to monitor how a patient is responding to treatment, and using this information - rather than an arbitrary date on the calendar - to decide whether or not to stop antibiotics.
Not everyone agrees, though: other research groups point out that, since it is not possible to monitor response to treatment in the community in the same way as in a hospital, the instruction to ‘complete the course’ of antibiotics is still a useful tool: it is simple and easy to follow, and ensures that patients do not put themselves at risk of under-treatment, particularly as it has been shown that, for some infections, shorter antibiotic courses than those currently prescribed are not adequate for successful treatment.
Both sides do, however, agree on one thing: more research is needed to establish the optimum treatment durations for common infections - this gap in our knowledge is currently the biggest barrier to reducing antibiotic prescribing. The question of how soon such findings should translate into a change in clinical practice may have to remain a matter of debate.
With thanks to Dr Nick Brown at the Department of Microbiology, Addenbrooke’s Hospital for his helpful contributions to this article.