
The Sputnik vaccine case study (Part Four): What goes into the scientific hamburger of a Covid vaccine? An awful lot of trust

What goes into the scientific hamburger of a Covid vaccine? A lot of trust. (Photo: bhekisisa.org / Wikipedia)

In the past year, Covid vaccines have gone from trials to mass roll-outs around the world. But this unprecedented rate of scientific advancement requires a lot of trust — from faith in the integrity of researchers all the way through to regulators. Here’s why Sputnik V undermines this trust.

This is the fourth of a four-part series published by Daily Maverick. Part One reveals that Sputnik V is plagued by red flags and question marks surrounding its clinical trials and results. Part Two explains how checkpoints are built into the scientific system to ensure that research can be sped up in a safe way, and how the accelerated timeline for pushing out research during the Covid pandemic has exposed gaps in the process. Part Three explores the role of regulatory bodies and why Sputnik V has not been endorsed. Today’s final instalment reveals why Sputnik V has broken that trust.

In June, Bhekisisa started researching how well Russia’s Sputnik V vaccine works and whether it would be suitable for use in South Africa’s Covid-19 vaccine roll-out. Soon enough, it became clear that Sputnik V had flouted the rules of every part of the system that produces scientific research.

Consider the hamburger. The perfect union of otherwise ordinary ingredients — bread, ground beef, basting and lettuce. Hamburger toppings abound (cheese, avo, onions), but you’ve got to get the basics right or the whole thing becomes a bit hard to swallow. 

Scientific research is a lot like a hamburger. Bear with us… 

The system that produces scientific research has lots of components. We’ve come to think of these parts as if they were the ingredients of a hamburger.  

The anatomy of a burger, as we see it: Burger ingredients function well on their own, but when they join forces for the good of the burger, each one’s role is amplified.  

Ground beef is the star of the traditional burger. Its scientific equal, in our view, is the raw data that scientists collect during their research. 

But you can’t just slap a bit of cooked mince on to a burger; you need some sort of binder, and a bit of seasoning, at least, before you can call it a patty.  

Just the same, the raw data on its own is not good enough.

You need a well-designed study protocol, which is the guidebook researchers use to plan the trial. That’s the binder you use so that the patty will keep its shape.

Next, you need all of this to culminate in a well-run clinical trial that produces trustworthy results — the seasoning of the patty.

Upgrade your burger with some basting for extra flavour and moisture. Similarly, the peer review process enhances the credibility of your clinical trial, doubly so if the paper is published in a prestigious academic journal. 

Nobody likes a soggy burger bun, though. So, reach for a crispy lettuce leaf to help the burger keep its structural integrity and to keep the basting under control. 

Enter medicines regulators (alias: lettuce), who review the raw data, peer-reviewed findings and clinical trial results submitted to them by manufacturers to determine whether a medicine is safe for public use. They also keep tabs on safety standards at the manufacturing plants where the medicines are made.

Burger buns, soft and perishable as they are, represent the public’s safety and their trust in medicines.

If the burger is not assembled correctly or is missing an ingredient, you risk the whole thing falling to pieces in your hands. Likewise, every part of the system is essential to the trustworthiness of research and of medical interventions like vaccines.

There are, of course, still some things you can add to your burger to give it that little something extra. Pickles, for instance, represent transparency at every step of the process, and bonus garnishes like cheese are the public scrutiny and questions from other scientists in the form of correspondence in medical journals.

Et voilà, a burger. Or, the scientific research system. 

In the previous three instalments of this series, we’ve interrogated the shortfalls in raw data (ground beef), clinical trials (binder and seasoning), peer review (basting) and medicines regulators (lettuce).

The bottom line

All of these ingredients of the burger rely on the integrity of individual researchers. If you use bad meat to make your patty, then the entire burger is compromised. Wilted lettuce doesn’t do the burger any favours either.

Finally, public trust is to medicines what the bun is to hamburgers. Without the public trusting medicines (or vaccines) enough to take them, scientific research becomes a bit, well, impractical.

In this final part, we take a closer look at how the scientific system was designed — and why it leaves space for bad science to lurk in the shadows, which ultimately erodes public trust in medicines. 

How do you know if a vaccine is good?

A year and a half into the pandemic, several brands of Covid vaccines are being rolled out around the world. But getting the jabs into people’s arms is only the final step in a long scientific journey.

The lifecycle of a vaccine begins with testing an idea — first in a lab and on rodents to see if it’s safe, before moving on to testing the jab in people.

Then the results are shared in a paper — during Covid this is usually first done in a preprint before being submitted to a journal for peer review, where other scientists assess the quality of the study.

At the same time, manufacturers can also submit their data to various medicine regulators in different countries for approval. Once a regulator has reviewed the available information and made a decision on whether a vaccine is safe and effective, the jab can be rolled out to the public in that country (if the government decides to procure it).

At each level of this process, there is an inherent trust that underlies the entire system.

A lot of faith is placed in the honesty of researchers and in how they report the findings of a study. In reality, they’re not necessarily always honest, or accurate.

When you’re trying to answer the question of whether a vaccine is good, you first need to address two other questions: How good is the research, and how well has the information been reported (either in a medical journal or by a regulatory body)?

Here’s how Sputnik V measured up

  • The research was poorly done, from phase one to phase three of the trials. These issues persisted despite the paper undergoing peer review prior to being published in The Lancet. Read more about this in the first two parts of our Sputnik series. 
  • The lack of good reporting is also highlighted by the fact that to date no stringent regulatory body has approved the jab.

With these two factors missing, it’s impossible to clearly and unequivocally answer the question, “Is Sputnik V a good choice of vaccine?”

This creates a tricky situation for regulators and anyone trying to assess the jab, explains health consumer advocate Hilda Bastian.

“This is just a horrific situation… if this is a really good vaccine for people, which it might be, then it’s awful if people can’t get it,” she says. “But if they [the regulators] can’t actually be sure that the vaccines are safe, and if something goes terribly wrong where a lot of people receive a faulty vaccine, then that is disastrous in itself.”

Transparency and trust

The result is that trust in the entire system and process of ensuring that jabs are safe is eroded.

The larger question to answer is: How much transparency is needed for shared scientific data to be considered credible? 

Scientists tend to guard their work closely. But during Covid, there has been a move for more teamwork and open sharing of information. This has been driven by the scientific community collaborating to find solutions and medical interventions as fast as possible.

Enter Sputnik V: A common complaint when assessing the jab is that the scientists who developed the vaccine are reluctant to share their data and provide regulators with additional information that they may require.

For instance, the European Medicines Agency has had to postpone its decision on authorising the jab as it awaits further submissions from the manufacturer. South Africa’s own regulatory body is also reserving judgment until more details can be provided on the jab.

South Africa’s regulator is waiting for additional data on the risk the carrier virus in the second shot poses for HIV acquisition. Additionally, Sputnik researchers have not made the study design publicly available.

The less evasive researchers are about their work, the better people (the public and other scientists) can understand how the process works and therefore trust the findings.

Public trust in scientific developments is crucial because there is no sense in creating an intervention, for instance a vaccine, if no one trusts the product enough to be willing to take it.

With Sputnik V, for example, Russian people themselves are not keen on the jab — an independent survey published in May 2021 found more than 60% of Russian people are unwilling to get the vaccine. 

Open Science

Why is it so crucial for scientists to be transparent about their data? Information sharing, particularly by vaccine manufacturers, is important on many levels.

A joint statement by the World Health Organisation (WHO) and the International Coalition of Medicines Regulatory Authorities (ICMRA) notes: “The Covid-19 pandemic has revealed how essential to public trust access to data is.” This is because “not all data is of high quality”, the statement explains.

The need for transparency extends beyond the public’s trust in medicines to aiding further developments in research, helping to expedite the regulatory process and making it easier for healthcare workers to choose the best treatment options, the WHO and ICMRA say.

A 2020 survey from the Pew Research Center found that only about a third of people across 20 countries placed “a lot of trust” in scientists, although the profession is generally highly regarded.

One movement working to increase trust in scientists’ work is Open Science, which aims to create a more accessible research environment. 

The pandemic has put the concept of “Open Science” back at centre stage — and sped up the process. For example, the entire genome (the genetic make-up) of the SARS-CoV-2 virus, which causes Covid, was published a month after the virus was identified in China. The same process took around five months during the previous SARS outbreak in 2003. Sharing this unravelled genetic code helps provide a roadmap for scientists to better understand how the virus works and where the epidemic could potentially be heading. 

There has also been a move among researchers to share their findings in preprint studies prior to publication in peer-reviewed journals. This circumvents the process of peer review (during which established scientists first assess the methodology and results of studies before they’re published), which can delay getting the intel out there — but it’s in the interest of getting scientific information out faster so that it can inform the global Covid response.

But although there are benefits to this model of rapid information sharing, it does come with a caveat — preprints are unregulated.

Richard Lessells, an infectious disease expert at the KwaZulu-Natal Research Innovation and Sequencing Platform, says that this way of sharing information means that “there’s a lot of bad science that gets through, but a lot of bad science gets through the formal peer review process as well”.

But because preprints are so accessible and so many researchers read them as a result, “bad science” is generally picked up fast. 

“The more open and transparent we are, the more people are scrutinising things, the more chance you have of seeing what the good stuff is and what the bad stuff is,” explains Lessells.

A March paper in The Lancet Planetary Health says this new approach places extra responsibility on authors “to ensure that their preprint research is rigorous and presented objectively”.

Can you trust the data?

A February paper in Data Intelligence explains that before even assessing the data, we first “need to know that the information we have found is really what it purports to be, and that the authors are really the people they claim to be”.

An interesting example of verifying authors arose during the release and subsequent retraction of two papers on possible Covid treatments.

The first was a paper published in May 2020 in The Lancet claiming that the antimalarial drugs hydroxychloroquine and chloroquine could lead to heart damage and endanger hospitalised Covid patients. It was retracted two weeks later.

The second was a study published in the New England Journal of Medicine in the same month claiming that the use of a type of heart medication called ACE inhibitors, which lowers blood pressure, could reduce the risk of death for Covid patients. The paper was retracted by the authors just over a month later.

Both of these papers relied on data provided by Surgisphere, a US-based company founded in 2007 that pivoted from medical textbooks into hospital data. The company’s website has since been deleted.

In a Nature podcast from June last year, Richard van Noorden explains that after the papers were published, the underlying data (which came from Surgisphere) was called into question — with hospitals supposedly in Surgisphere’s database saying they had never heard of the company.

Amid these questions, The Lancet published an expression of concern saying “serious scientific questions have been brought to our attention” and that “an independent audit” of the data was under way.

Following this, “Surgisphere said that they couldn’t provide any of this data or even any of the agreements with the hospital to third party auditors for confidentiality reasons”, according to Van Noorden.

With the dataset now considered compromised, the researchers chose to retract their papers from the journals.

In these two cases, it was surprisingly easy for false information to get through the checks and balances put in place.

“There’s only so much that peer review can find and it usually can’t find fraud because people just assume that what they’re given is the truth,” explains Bastian. “It’s really quite hard to pick up things that simply aren’t true, like if the numbers are actually faked, unless it’s completely implausible because all you can do is look at what you’ve got.”

Frauds and fakes — exploiting the system

Surgisphere is not alone in this practice; there is a history of scientists who have chosen to exploit the faith placed in their work.

One of the most infamous examples is a 1998 study published in The Lancet falsely claiming that the MMR (measles, mumps and rubella) vaccine could cause autism in children. The research was fundamentally flawed and the journal retracted the study in 2010, but the myth has caused ongoing damage to measles immunisation campaigns. 

The false links between the MMR jab and autism made global headlines, as did the fall from grace of the study’s author, Andrew Wakefield. 

The phenomenon of bad trial reporting has, however, continued even in prestigious journals that profess their commitment to evidence-based publishing standards, according to a study published in the journal Trials in 2019.

The researchers, led by Ben Goldacre, the author of the book Bad Science, monitored all the trials reported in five journals over a six-week period. 

The journals (the New England Journal of Medicine, The Lancet, the Journal of the American Medical Association, the British Medical Journal and the Annals of Internal Medicine) all endorse guidelines, known as Consort, designed to combat bad reporting on randomised clinical trials. Consort is an acronym for Consolidated Standards of Reporting Trials. 

Using the Consort guidelines, the researchers assessed the published journal reports against their respective study protocols and entries on trial registries. 

They were looking for misreported trials — and they found plenty.  

Not only did all five journals flout Consort’s rules, they also mostly rejected the correction letters sent by Goldacre and his colleagues pointing out the errors. 

Nearly 90% of the trials assessed required a correction letter, but fewer than half of the letters sent were published by the journals. The corrections that were published only saw the light of day about 99 days later. 

So why is it so hard for peer-reviewers to spot frauds? 

Being able to detect discrepancies in the data or inaccuracies “comes back a little bit to how much information is shared by the researchers and by the journals that publish it”, says Lessells.

With Sputnik V, because the researchers didn’t share the trial protocol, it becomes difficult to pick up these kinds of issues, he explains.

Lessells concludes: “All you’ve got is the reported results [for Sputnik V], you don’t know if they might have changed how the study was done or what it was measuring and these are the kinds of things we want to be completely open and transparent [about] so that we can interpret the results.”

The final layer: Politics and public trust

There are concerns that some governments may be undercounting or skewing their Covid data — particularly when it comes to national death tolls.

Countries that are more transparent tend to report a higher number of Covid deaths, according to a June paper in SSM Population Health, indicating possible data manipulation in autocratic countries.

In the United Kingdom, a parliamentary inquiry was held to ensure data transparency around Covid information. An issue identified in the report is that there was no accountability within the government structure “for the data underpinning decisions on Covid-19”.

The flaw of having no clear information underlying decisions made in the interest of public safety, such as lockdowns, is that “a sense of confusion and mistrust developed”, the report says.

South Africa’s own Covid response was also meant to be evidence-based. The national health department formed a ministerial advisory committee in March last year (which was restructured in September).

But as its name suggests, the group could only offer advice: it was then up to the department and the National Command Council to determine how — and if — the committee’s advice would be taken on board and translated into regulations. For example, in May last year, there was great confusion over restrictions on the sale of open-toed shoes as the underlying motivation for this decision was unclear.

If the end goal is to garner public trust in a country’s Covid response, it becomes important to ensure transparency in what is informing decisions made at the top — and this openness begins with scientists providing the data and research behind their work.

Building trust in vaccines can be even harder than building a really clever vaccine candidate, says Mitchell Warren. 

“We think the science of vaccines is hard, and it is. But the delivery, the politics, the human parts of vaccine delivery, actually make the research and development look easy.

“We’ve got a long way to go.” DM/MC

This story was produced by the Bhekisisa Centre for Health Journalism.
