
Please kill fewer people….

Bioethics - Image Courtesy: Johns Hopkins Medicine

It’s rare that academic philosophy can have such a direct impact on people’s lives. But how we apply bioethics really can save or kill thousands, just by changing the speed with which we approve drugs.

By Nazarul Islam

In April of this year, the US Centers for Disease Control and Prevention (CDC) paused the use of the Johnson & Johnson vaccine. They had noticed that, among the 6.8 million people who had been given the J&J injection, six had suffered a rare blood clot known as a “cerebral venous sinus thrombosis” (CVST). They said in a statement that they were recommending that healthcare providers stop using the vaccine until the FDA had reviewed the evidence, and that they were doing so “out of an abundance of caution”.

Caution is a strange word to use. On the day they released the statement, about 70,000 new cases of Covid were confirmed in the United States, and about 1,000 people died of it. Is it “cautious” to stop using a vaccine which would almost certainly reduce those numbers, because of an uncertain chance that it might have negative effects in a tiny cohort?
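To put those two numbers side by side, here is a rough back-of-envelope sketch using only the figures quoted above; it is purely illustrative, not a formal risk–benefit analysis.

```python
# Back-of-envelope comparison, using only the figures quoted above.
# Purely illustrative; not a formal risk-benefit analysis.

cvst_cases = 6              # rare clots reported after J&J vaccination
doses_given = 6_800_000     # J&J doses administered at the time of the pause

clot_rate_per_million = cvst_cases / doses_given * 1_000_000
print(f"Observed clot rate: ~{clot_rate_per_million:.1f} per million doses")
# -> roughly 0.9 per million doses

us_covid_deaths_that_day = 1_000   # confirmed US Covid deaths on the day of the statement
print(f"US Covid deaths that same day: ~{us_covid_deaths_that_day:,}")
```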

But caution, in this sense, has been rife during the pandemic. The governments of various European countries stopped the use of the AstraZeneca vaccine over similar concerns.

Germany was the first country to refuse to allow people over the age of 65 to have the AstraZeneca vaccine, citing the absence of evidence of how well it worked in older people: a more cautious approach than most.

Over the last two years, again and again, the fears of some possible risk caused by something we might do have outweighed the fears of a thoroughly real, utterly obvious risk which was killing people at the time. And it has, I think, been a failure of the field, or at least the practice, of bioethics.

What is bioethics, really? To me, it is the application of ethical philosophy to medical and biological research and practice. As a former philosophy student, I have always found the idea very strange – after years spent arguing over the wildly different implications of minor-seeming differences in ethical frameworks, it was amazing to me that you could corral ethicists into things called “ethical review boards” or “independent ethics committees” which would then give you an answer to the question “so is this ethical or not?”

Drugs continue to be licensed and trials continue to be carried out, so I suppose not every ethics committee gets bogged down for three thousand years in discussions of what we mean by “the good”. And perhaps in normal times, the system works reasonably well, although you do hear some horror stories.

But during the pandemic, something has gone terribly wrong, and I think that the system of ethical approval in medicine has probably cost tens of thousands of lives, at a conservative estimate.

There are interesting questions, sure: if a self-driving car kills someone, who is responsible, the driver or the manufacturer? If you’re designing a safety system for the self-driving car, should it value the life of the occupants more highly than those of people outside?

But they’re quite niche questions. If, in the future, it turns out that a self-driving car will, on average, kill fewer people than a human-driven car, then it would be the ethical thing to do to get a self-driving car. Even if, every so often, the self-driving car does something really weird, like mistake a cyclist for a road marking, or a lorry for the sky, fewer people will be killed in a world where humans don’t drive and robots do. And that is good, so – all else being equal – we should say that self-driving cars are the ethical choice.

You can construct clever scenarios in which that isn’t true, but they have to be clever. As a starting point, as a reasonable first draft, “Do the thing that kills fewer people” is hard to beat.

In bioethics, though, we’ve overcomplicated things. For instance, early in the pandemic, people were campaigning for “human challenge trials” into Covid vaccines. In normal vaccine trials, people are given the vaccine (or a placebo or other control), and then the researchers observe how many people get the disease naturally. If it’s significantly fewer in the vaccine group, then we say that the vaccine works.

But it can take months for enough people to catch the disease naturally. When I was on the AstraZeneca trial in summer 2020, prevalence was low – there was real concern that it would take many months to get enough data.

With human challenge trials, participants agree to be given not only the vaccine but also the disease itself. That lets you use far fewer participants, and get your results far more quickly, than a traditional vaccine trial.

A promising vaccine candidate, the Moderna mRNA vaccine, was ready in a lab in January 2020. The hundred or so doses that would have been required to get very solid evidence of effectiveness could have been made at lab scale in a few days. We could have known by February, or March at the latest, whether the vaccines worked. Yes, we’d still have had to scale up production, and that would have taken months. But the whole process would have started earlier. Instead, Moderna’s vaccine was not given emergency use approval in the US until December.

As the philosopher Richard Yetter Chappell points out, there are obvious-seeming objections to human challenge trials. You have to give people a potentially dangerous disease: what if it kills one of them?

But one plausible estimate is that roughly 18 million people have died of Covid during the pandemic – that’s an average of about 28,000 a day. Bringing the end of the pandemic forward by even a single day could easily save thousands of lives. A small risk to a small number of young, healthy volunteers was hugely outweighed by a very likely large reduction in risk to many thousands of old, vulnerable people.
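To see why the trade-off is so lopsided, here is a rough expected-value sketch. The daily death figure is the estimate quoted above; the trial size and the per-volunteer fatality risk are placeholder assumptions for illustration only.

```python
# Rough expected-value sketch. The daily death figure is the estimate quoted above;
# the trial size and per-volunteer risk are placeholder assumptions for illustration.

deaths_per_day = 28_000      # average global Covid deaths per day (estimate quoted above)
days_brought_forward = 1     # suppose challenge trials end the pandemic one day sooner

expected_lives_saved = deaths_per_day * days_brought_forward

volunteers = 100                        # assumed size of a challenge trial
assumed_fatality_risk = 1 / 10_000      # hypothetical risk for a young, healthy volunteer
expected_volunteer_deaths = volunteers * assumed_fatality_risk

print(f"Expected lives saved:      ~{expected_lives_saved:,}")         # ~28,000
print(f"Expected volunteer deaths: ~{expected_volunteer_deaths:.2f}")  # ~0.01
# Even with a far more pessimistic per-volunteer risk, the comparison barely changes.
```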

Philosophy fans might think that this is a classic utilitarianism problem: is it OK to sacrifice one to save many; can I torture the terrorist in order to find the bomb? But as Chappell notes, in fact it is not. There are willing volunteers offering a (small but real) sacrifice for the greater good. It is an act of altruism, or even heroism, not coercion. “In what other context would the default assumption be to ban heroic acts of immense social value?”, asks Chappell.

There have been other failures. Recently, the drug Paxlovid was shown to be highly effective against severe disease. So effective, in fact, that the trial was stopped midway, because it was deemed unethical to give half of the participants a placebo when it was clear the real drug worked.

But the drug is still not approved in the US. So as Zvi Mowshowitz points out, “It is illegal to give this drug to any patients, because it hasn’t been proven safe and effective,” but also, “It is illegal to continue a trial to study the drug, because it has been proven so safe and effective that it isn’t ethical to not give the drug to half the patients.”

This is not the only such case. A trial of a drug designed to protect against HIV was stopped last year, because the drug was so effective that it was unethical to give placebo. But the drug was not actually approved by the FDA until … Monday!

There are two important points to make, here. First, while I’m talking about “bioethics”, it’s not clear that it’s actually bioethicists who are the problem. For instance, Peter Singer of Princeton, probably the world’s most famous bioethicist, is on the board of 1DaySooner, the human challenge advocacy group, as is his fellow bioethicist Nir Eyal, of Harvard. Leah Pierson, a Harvard bioethicist who is writing a book about the failings of bioethics during the pandemic, stresses that when the CDC paused the use of the J&J vaccine, lots of the bioethicists she knows were appalled at the decision.

But the practice of bioethics as actually carried out in major institutions, such as the FDA and CDC, often leads to these bad decisions. Matt Yglesias makes a good case here that public health agencies tend to follow somewhat rigid rules, rather than the best available science, leading to bad outcomes like a delay in approving fluvoxamine: perhaps I should complain about “institutionalized public health” rather than “bioethics” per se.

Second, as Pierson points out, “When these systems work well you probably don’t hear about it.” No doubt there are lots of drugs that get approved relatively smoothly, and trials which go ahead without much fuss. I don’t know how representative these problems are.

But the problems do exist, and they seem to have been exacerbated by the pandemic. Chappell thinks that the main problem is one of status quo bias: that is, that changing things feels like the “risky” option, while keeping things the same feels “safer”.

And, he admits, that may (or may not) be true in non-pandemic times. But in the pandemic, the status quo is visibly very dangerous. Throwing some low-but-not-zero-risk options into the mix, like early approval of vaccines or human challenge trials, is almost certainly lower-risk, in terms of the likeliest expected outcomes, than sticking with the status quo.

There is also the issue that humans instinctively think there’s a difference between bad outcomes caused by our actions and bad outcomes caused by our inaction. It’s hard to make a good philosophical case for this, or even to pin down what the distinction between an “act” and an “omission” really is (Jonathan Bennett had a go), but it’s how we feel. Killing one person by giving them a faulty vaccine feels worse, somehow, than letting a thousand die because we let the vaccines sit in a warehouse for another 24 hours.

And it’s easy to come up with reasons why we need to put more hoops in place for researchers and clinicians to jump through, because the one guy who dies in a botched human challenge trial is very obvious, whereas the thousands of people who would have died if the trial never took place are completely invisible.

But whatever the reason is that the hoops are in place, they are in place, and people have to jump through them to get things done. It took until December for the UK to decide to vaccinate the under-12s, despite it being well established that schoolchildren were driving the pandemic, because bioethicists could only take into account direct risk to the patient at the time – not the likelihood that prevalence would go up, or whether the children would rather not put their own relatives at risk. And it’s amazing, with hindsight, to read this approving piece from October last year about how a doctor prevented Donald Trump from forcing through early approval of a Covid vaccine.

It’s rare that academic philosophy can have such a direct impact on people’s lives. But how we apply bioethics really can save or kill thousands, just by changing the speed with which we approve drugs. In peacetime, perhaps, it’s OK to argue the toss and act with caution.

But in a pandemic, perhaps we really ought to apply the standard of “Do the thing that kills fewer people”.

The Bengal-born writer Nazarul Islam is a senior educationist based in the USA. He writes for Sindh Courier and newspapers in Bangladesh, India and America. He is the author of a recently published book, ‘Chasing Hope’, a compilation of his 119 articles.