Novel Probiotic Shows Promise in Treating Type-2 Diabetes

As these GEITP pages have often stated, any trait (phenotype) reflects contributions of: [a] genetics (differences in DNA sequence); [b] epigenetic factors (chromosomal events independent of DNA sequence; the four accepted subdivisions are DNA methylation, RNA interference, histone modifications, and chromatin remodeling); [c] environmental effects (e.g., smoking, diet, lifestyle); [d] endogenous influences (e.g., kidney or cardiopulmonary disorders); and [e] each person’s microbiome. The topic today fits well with the theme of gene-environment interactions. One “signal” is dietary sugar, together with endogenous influences such as obesity. The genes that “respond to this signal” make the patient more susceptible to type-2 diabetes (T2D).

The fascinating aspect of this article [below] is that it includes important advances in our understanding of the microbiome. Here, another “signal” comprises metabolites generated by specific anaerobic bacterial species; a deficiency of these metabolites is associated with a higher risk of T2D (the “response”), because the species that produce them have been found to be underrepresented in the gut of T2D patients (having A1c levels of 6.5% or higher) and prediabetic patients (having A1c levels of 5.7% to 6.4%). This company’s particular preparation thus includes the oligosaccharide-consuming Akkermansia muciniphila and Bifidobacterium infantis, and the butyrate producers Anaerobutyricum hallii, Clostridium beijerinckii, and Clostridium butyricum — along with the “prebiotic” dietary fiber inulin.
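The A1c cutoffs cited above amount to a simple classification rule. As a minimal sketch (the function name and labels are illustrative, not from the article):

```python
# Hedged sketch: classify glycemic status from an HbA1c value (%),
# using the thresholds cited in the text (>= 6.5% = T2D range;
# 5.7% to 6.4% = prediabetes range). Not medical advice.
def classify_a1c(a1c_percent: float) -> str:
    """Return a glycemic-status label for an HbA1c value given in percent."""
    if a1c_percent >= 6.5:
        return "type-2 diabetes range"
    if a1c_percent >= 5.7:
        return "prediabetes range"
    return "normal range"
```

So, for example, an A1c of 6.0% falls in the prediabetes range, and 6.5% is the lower bound of the T2D range.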

For those who have followed the explosion in our understanding of the gut microbiome, it wasn’t that long ago (perhaps 2005?) that we knew almost nothing about the bacteria in our intestine (and elsewhere on the body) or what they do. Now we realize that, if one grinds up an entire mammal (or human) and isolates total DNA, ~92% of that DNA represents our bacteria (!!). Moreover, the “brain-gut-microbiome axis” influences innumerable functions and can cause or modify changes in health, disease, and even our mood and behavior (all elicited by gut bacterial metabolites). To our knowledge, this probiotic preparation* (Pendulum Glucose Control) — containing gut bacterial strains that are deficient in people with pre-T2D or T2D — is the first example of Big Pharma utilizing knowledge gained by studying our gut microbiome to, hopefully, clinically improve (or prevent) an undesirable and very serious disease. 😊😊

*Even though it is a rather expensive daily medication for the average patient ☹…


COMMENT: There is a bit of a disconnect between your comments on fecal microbiota transplantation (FMT) and the Medscape article that GEITP had featured [pasted furthest below]. Many clinical centers are randomly trying FMT on virtually every human disorder — “to see if it works” (i.e., whether it helps, or prevents, X-Y-Z). This is all well and good.

In contrast, this pharmaceutical firm took advantage of the discovery that specific strains of anaerobic bacteria are absent (or present at very low levels) in patients with type-2 diabetes (T2D), or in those showing signs of pre-T2D, compared with patients having neither. The company then developed a preparation that includes [a] two oligosaccharide-consuming bacterial strains and [b] three butyrate-producing strains, combined with [c] the “prebiotic” dietary fiber inulin. Coming up with a specific commercial preparation … is where the creativity resides.

Imagine a fly on the wall 30 ft away, which you’d like to eliminate. 😊 The former approach is like a shotgun blast, hoping to hit the fly. The latter approach is like designing a laser gun to hit the fly specifically. 😊 The latter is therefore an example of “precision medicine.”


COMMENT: I agree that fecal microbiota transplantation (FMT) is amazing in its potential. To my knowledge, FMT has been most remarkably and reproducibly useful for treatment of persistent C. difficile infections. Here is the Cell Metabolism reference (and another in Gut, plus a news piece), and here is the quote from the book where I first learned of this:

“Research is underway examining how certain probiotics might be able to reverse type-2 diabetes and the neurological challenges that can follow. At Harvard’s 2014 symposium on the microbiome, I was floored by the work of Dr. M. Nieuwdorp from the University of Amsterdam, who has done some incredible research related to obesity and type-2 diabetes. He has successfully improved the blood sugar mayhem found in type-2 diabetes in more than 250 people using fecal transplantation. He’s also used this procedure to improve insulin sensitivity.

These two achievements are virtually unheard of in traditional medicine. We have no medication available to reverse diabetes or significantly improve insulin sensitivity. Dr. Nieuwdorp had the room riveted, practically silenced, by his presentation. In this experiment, he transplanted fecal material from a healthy, lean, nondiabetic into a diabetic. What he did to control his experiment was quite clever: He simply transplanted the participants’ own microbiome back into their colons, so they didn’t know whether they were being “treated” or not. For those of us who see the far-reaching effects of diabetes in patients on a daily basis, outcomes like Dr. Nieuwdorp’s are a beacon of hope. […]”

Excerpt From: David Perlmutter. “Brain Maker: The Power of Gut Microbes to Heal and Protect Your Brain for Life.” iBooks.

Posted in Center for Environmental Genetics | Comments Off on Novel Probiotic Shows Promise in Treating Type-2 Diabetes

Novel Probiotic Shows Promise in Treating Type-2 Diabetes

Miriam E. Tucker

August 03, 2020

A novel probiotic product (Pendulum Glucose Control) containing gut bacteria strains that are deficient in people with type 2 diabetes (T2D) modestly improves blood glucose levels, new research suggests.

The findings were published online July 27 in BMJ Open Diabetes Research & Care by Fanny Perraudeau, PhD, and colleagues, all employees of Pendulum Therapeutics.

The product, classified as a medical food, is currently available for purchase on the company’s website without a prescription.

It contains the oligosaccharide-consuming Akkermansia muciniphila and Bifidobacterium infantis, the butyrate producers Anaerobutyricum hallii, Clostridium beijerinckii, and Clostridium butyricum, along with the “prebiotic” dietary fiber inulin.

In a 12-week trial of people with type-2 diabetes who were already taking metformin, with or without a sulfonylurea, 23 were randomized to the product and 26 received placebo capsules.

Participants in the active treatment arm had significantly reduced glucose levels after a 3-hour standard meal-tolerance test, by 36.1 mg/dL (P = .05), and average A1c reduction of 0.6 percentage points (P = .054) compared with those taking placebo. There were no major safety or tolerability issues, only transient gastrointestinal symptoms (nausea, diarrhea) lasting 3-5 days. No changes were seen in body weight, insulin sensitivity, or fasting blood glucose.

Asked to comment on the findings, Nanette I. Steinle, MD, an endocrinologist with expertise in nutrition who was not involved in the research, told Medscape Medical News: “To me it looks like the research was designed well and they didn’t overstate the results…I would say for folks with mild to modest blood glucose elevations, it could be helpful to augment a healthy lifestyle.”

However, the product is not cheap, so cost could be a limiting factor for some patients, said Steinle, who is associate professor of medicine at the University of Maryland School of Medicine, Baltimore, and chief of the endocrine section, Maryland VA Health Care System.

Product Could Augment Lifestyle Intervention in Early Type 2 Diabetes

Lead author Orville Kolterman, MD, chief medical officer at Pendulum, told Medscape Medical News that the formulation’s specificity distinguishes it from most commercially available probiotics.

“The ones sold in stores are reconfigurations of food probiotics, which are primarily aerobic organisms, whereas the abnormalities in the microbiome associated with type-2 diabetes reside in anaerobic organisms, which are more difficult to manufacture,” he explained.

The fiber component, inulin, is important as well, he said.

“This product may make the dietary management of type-2 diabetes more effective, in that you need both the fiber and the microbes to ferment the fiber and produce short-chain fatty acids that appear to be very important for many reasons.”

The blood glucose-lowering effect is related in part to the three organisms’ production of butyrate, which binds to receptors on epithelial cells in the gut, stimulating secretion of glucagon-like peptide-1 (GLP-1) and leading to inhibition of glucagon secretion, among other actions.

And Akkermansia muciniphila protects the gut epithelium and has shown some evidence of improving insulin sensitivity and other beneficial metabolic effects in humans.

Kolterman, who was with Amylin Pharmaceuticals prior to moving to Pendulum, commented: “After doing this for 30 years or so, I’ve come to the strong appreciation that whenever you can do something to move back toward what Mother Nature set up, you’re doing a good thing.”
Clinically, Kolterman said, “I think perhaps the ideal place to try this would be shortly after diagnosis of type-2 diabetes, before patients go on to pharmacologic therapy.”

However, for practical reasons the study was done in patients who were already taking metformin. “The results we have are that it’s beneficial — above and beyond metformin, since [these] patients were not well controlled with metformin.”

He also noted that it might benefit patients who can’t tolerate metformin or who have prediabetes; there’s an ongoing investigator-initiated study of the latter.

Steinle, the endocrinologist with expertise in nutrition, also endorsed the possibility that the product may benefit people with prediabetes. “I would suspect this could be very helpful to augment attempts to prevent diabetes…The group with prediabetes is huge.”

However, she cautioned, “if the blood glucose is over 200 [mg/dL], I wouldn’t think a probiotic would get them where they need to go.”
Cost Could Be an Issue

Moreover, Steinle pointed out that cost might be a problem, given it is not covered by health insurance.

The product’s website lists several options: a “no commitment” one-time 30-day supply for $198; a “3-month starter package” including two free A1c tests for $180/month; and a “membership” including free A1c tests every 90 days, free dietician consultations, and “additional exclusive membership benefits” for $165/month.

“There’s a very large market out there of people who don’t find traditional allopathic medicine to be where they want to go for their healthcare,” Steinle observed.

“If they have reasonable means and want to try the ‘natural’ approach, they’ll probably see results but they’ll pay a lot for it,” she said.

Overall, she pointed out that targeting the microbiome is a very active and potentially important field of medical research, and that it has received support from the US National Institutes of Health (NIH).

“I do think we’ll see more of these types of products and use of the microbiome in various forms to treat a number of conditions.”

“I think we’re in the early stages of understanding how what grows in us, and on us, impacts our health and how we may be able to use these organisms to our benefit. I would expect we’ll see more of these probiotics being marketed in various forms.”

Kolterman is an employee of Pendulum. Steinle has reported receiving funding from the NIH, and she is conducting a study funded by Kowa through the VA.

BMJ Open Diabetes Res Care. Published online July 27, 2020: Full text


Molnupiravir, a ribonucleoside-analog antiviral, appears to be the best treatment against the SARS-CoV-2 virus

Because GEITP was very active in comparing remdesivir (RDV) with hydroxychloroquine (HCQ) last spring and summer, GEITP feels obliged to report on this latest antiviral drug — which does appear to have great potential in lessening the symptomatology of COVID-19. Notice that this is a very preliminary study: it reaches statistical significance (P < 0.05) between a molnupiravir-treated group (N = ~50) and the control group (N = ~150), but these groups are too small to conclude anything unequivocally. Further studies with much larger sample sizes are, of course, indicated next. 😊 DwN

Five-Day Course of Oral Antiviral Appears to Stop SARS-CoV-2 in Its Tracks

Heather Boerner

March 08, 2021

A single pill of the investigational drug molnupiravir, taken twice a day for 5 days, eliminated SARS-CoV-2 from the nasopharynx of 49 participants. That finding led Carlos del Rio, MD, distinguished professor of medicine at Emory University in Atlanta, Georgia, to suggest a future in which a drug like molnupiravir could be taken in the first few days of symptoms to prevent severe disease, similar to Tamiflu for influenza.

"I think it's critically important," he told Medscape Medical News of the data. Emory University was involved in the trial of molnupiravir, but del Rio was not part of that team. "This drug offers the first antiviral oral drug that then could be used in an outpatient setting."

Still, del Rio said it's too soon to call this particular drug the breakthrough that clinicians need to keep people out of the ICU. "It has the potential to be practice-changing; it's not practice-changing at the moment."

Wendy Painter, MD, of Ridgeback Biotherapeutics, who presented the data at the virtual Conference on Retroviruses and Opportunistic Infections, agreed. While the data are promising, "we will need to see if people get better from actual illness" to assess the real value of the drug in clinical care.

"That's a phase 3 objective we'll need to prove," she told Medscape Medical News. Phase 2/3 efficacy and safety studies of the drug are now underway in hospitalized and nonhospitalized patients.

In a brief pre-recorded presentation of the data, Painter laid out what researchers know so far: preclinical studies suggest that molnupiravir is effective against a number of viruses, including coronaviruses and specifically SARS-CoV-2. It prevents a virus from replicating by inducing viral error catastrophe — essentially overloading the virus with replication and mutation until it burns itself out and can't produce replicable copies.

In this phase 2a randomized, double-blind, controlled trial, researchers recruited 202 adults who were treated at an outpatient clinic for fever or other symptoms of a respiratory virus and had confirmed SARS-CoV-2 infection by day 4. Participants were randomly assigned to one of three doses: 200 mg, 400 mg, or 800 mg of molnupiravir. The 200-mg arm was matched one-to-one with a placebo group, and the other two arms had three participants in the active group for every one control. Participants took the pills twice daily for 5 days and were then followed for a total of 28 days to monitor for complications or adverse events. At days 3, 5, 7, 14, and 28, researchers also took nasopharyngeal swabs for PCR tests, to sequence the virus, and to grow cultures of SARS-CoV-2 to see whether the virus present was actually capable of infecting others.

Notably, the pills do not have to be refrigerated at any point in the process, alleviating the cold-chain challenges that have plagued vaccines. "There's an urgent need for an easily produced, transported, stored, and administered antiviral drug against SARS-CoV-2," Painter said.

Of the 202 people recruited, 182 had swabs that could be evaluated, of which 78 showed infection at baseline. The results are based on the labs of those 78 participants.

By day 3, 28% of patients in the placebo arm had SARS-CoV-2 in their nasopharynx, compared with 20.4% of patients receiving any dose of molnupiravir. By day 5, none of the participants receiving the active drug had evidence of SARS-CoV-2 in their nasopharynx; in comparison, 24% of people in the placebo arm still had detectable virus.

Halfway through the treatment course, differences in the presence of infectious virus were already evident. By day 3 of the 5-day course, 36.4% of participants in the 200-mg group had detectable virus in the nasopharynx, compared with 21% in the 400-mg group and just 12.5% in the 800-mg group. And although the reduction in SARS-CoV-2 was noticeable in the 200-mg and 400-mg arms, it was statistically significant only in the 800-mg arm. In contrast, by the end of the 5 days, infectious virus in the placebo groups varied from 18.2% in the 200-mg placebo group to 30% in the 800-mg group. This points out the variability of the disease course of SARS-CoV-2. "You just don't know" which infections will lead to serious disease, Painter told Medscape Medical News. "And don't you wish we did?"

Seven participants discontinued treatment, though only four experienced adverse events; three of those discontinued the trial because of adverse events. The study is still blinded, so it's unclear what those events were, but Painter said they were not thought to be related to the study drug.

The bottom line, said Painter, was that people treated with molnupiravir had starkly different outcomes in lab measures during the study. "An average of 10 days after symptom onset, 24% of placebo patients remained culture positive" for SARS-CoV-2 — meaning there wasn't just virus in the nasopharynx, but virus capable of replicating, Painter said. "In contrast, no infectious virus could be recovered at study day 5 in any molnupiravir-treated patients."

Conference on Retroviruses and Opportunistic Infections 2021: Abstract SS777. Presented March 6, 2021.
Heather Boerner is a science and medical reporter based in Pittsburgh, PA, and can be found on Twitter at @HeatherBoerner.


Policy Making in the Post-Truth World

The (l-o-n-g) article [below, just posted] seems (to some of us) to hit the nail on the head. “Basic science,” and the lay public’s respect for basic scientists, has clearly eroded over the past two or so decades. This has been paralleled by the “rise of inappropriate experts” (i.e., colleagues who think they ‘know’ more about some topic than anyone can possibly ‘know’). In the past year, this trend has been especially driven home with regard to the COVID-19 hysteria.

For example, various “self-proclaimed experts” have frequently gone on national television and made unequivocal pronouncements as if they had “unambiguous facts.” As a result of being presented as “an expert on SARS-CoV-2,” Dr. Anthony Fauci, for example, has unfortunately been attacked on all sides, because he is continuously changing his “assertions.” ☹

In fact, the SARS-CoV-2 virus — and subsequent genetic differences in severity of response (mortality, degree of COVID-19 morbidity) — were unprecedented. The future was (and still is) unknown. An honest statement by a scientist should therefore be: “Nobody knows with certainty, but this is my best guess; I could be wrong.” And, one month later, if the data suggest something different, why not say, “Sorry, but I was wrong”…??

In the Post-Truth World, we have “inappropriate experts” creating policy mandates on many issues [e.g., facial coverings (in buildings, on tennis courts, even on deserted beaches), business and school lockdowns, preventing restaurants and bars from opening, “social distancing” of 6 feet (why not 8 ft, or 5 ft?), local and international travel, etc.] — to the point of absurdity, because of so many uncertainty factors. It therefore comes as no surprise that the lay public’s respect for basic scientists has clearly diminished.

This article below is an excellent summary of this recent transition from 20th-century expert science to the rise of inappropriate expertise that we’ve seen during the past several decades — on numerous policy issues. ☹
Policy Making in the Post-Truth World

On the Limits of Science and the Rise of Inappropriate Expertise



Steve Rayner

Steve Rayner was the James Martin Professor of Science and Civilization at the University of Oxford, where he was the Founding Director of the Institute for Science, Innovation and Society.


Daniel Sarewitz

Dan Sarewitz is Co-Director, Consortium for Science, Policy & Outcomes, and Professor of Science and Society at Arizona State University.

1 March 2021

Steve Rayner died a couple of months after he and I finished up a baggy first draft of this essay and circulated it to a few colleagues. The essay itself was to be the first of several that we had been discussing for years about science, technology, politics, and society. So, I had a thick folder — full of notes that I could draw on while making revisions, to assure myself that the end product was one that Steve would have fully approved of, even if it was not nearly as good as we could have achieved together.

Had Steve not died shortly before the onset of the COVID-19 pandemic, we would certainly have given it a central role in this revised version. But I never had the benefit of Steve’s insights on the strange unfolding of this disaster, and so, except in a couple of places where the extrapolation seems too obvious to not mention, the virus does not appear in what follows. Nonetheless, I know that Steve would have relished an obvious irony related to “expertise” and the societal response to COVID-19: some “experts” proclaimed a welcome reawakening of public respect for “experts” triggered by the pandemic, even as other “experts” were insisting that the course of the disease marked a decisive repudiation of the legitimacy of “experts” in modern societies. Which seems as good an entry point as any into our exploration of the troubled state of “expertise” in today’s troubled world.


Writing of his days as a riverboat pilot in Life on the Mississippi, Mark Twain described how he mastered his craft: “The face of the water, in time, became a wonderful book — a book that was a dead language to the uneducated passenger, but which told its mind to me without reserve, delivering its most cherished secrets as clearly as if it uttered them with a voice.”[1]

The “wonderful book” to which Twain refers, of course, can nowhere be written down. The riverboat pilot’s expertise derives not from formal education but from constant feedback from his surroundings, which allows him to continually hone and test his skill and knowledge, expanding its applicability to a broadening set of contexts and contingencies. “It was not a book to be read once and thrown aside, for it had a new story to tell every day,” Twain continued. “Throughout the long twelve hundred miles there was never a page that was void of interest, never one that you could leave unread without loss.”

Expertise, in this way, necessarily involves the ability to make causal inferences (drawn, say, from the pattern of ripples on the surface of a river) that guide understanding and action to achieve better outcomes than could be accomplished without such guidance. Such special knowledge allows the expert to reliably deliver a desired outcome that cannot be assured by the non-expert.

Expertise of this sort may also require lengthy formal training in sophisticated technical areas. But the expertise of the surgeon, or the airline pilot, is never just a matter of book learning; it requires the development of tacit knowledge, judgment, and skills that come only from long practical experience and the feedback that such experience delivers. Expert practitioners demonstrate their expertise by successfully performing tasks that society values and expects from them, reliably and with predictable results. They navigate the riverboat through turbulent narrows; they repair the damaged heart valve; they land the aircraft that has lost power to its engines.

Yet every day it seems we hear that neither politicians nor the public are paying sufficient heed to expertise. The claim has become a staple of scholarly assertion, media coverage, and political argument. Commentators raise alarm at our present “post-truth” condition,[2] made possible by rampant science illiteracy among the public, the rise of populist politics in many nations, and the proliferation of unverifiable information via the Internet and social media, exacerbated by mischievous actors such as Russia and extreme political views. This condition is said to result in a Balkanization of allegedly authoritative sources of information that in turn underlies distrust of mainstream experts and reinforces growing political division.

And still, despite this apparent turn away from science and expertise, few doubt the pilot or the surgeon. Or, for that matter, the plumber or the electrician. Clearly, what is contested is not all science, all knowledge, and all expertise, but particular kinds of science and claims to expertise, applied to particular types of problems.

Does population-wide mammography improve women’s health? It’s a simple question, still bitterly argued despite 50 years of mounting evidence. Is nuclear energy necessary to decarbonize global energy systems? Will missile defense systems work? Does Roundup cause cancer? What’s the most healthful type of diet? Or the best way to teach reading or math? For all of these questions, the answer depends on which expert you ask. Should face masks be worn outdoors in public places during the pandemic? Despite its relevance to the COVID-19 outbreak, this question has been scientifically debated for at least a century.[3] If the purpose of expertise applied to these sorts of questions is to help resolve them so that actions that are widely seen as effective can be pursued, then it would seem that the experts are failing. Indeed, these sorts of controversies have both proliferated and become ever more contested. Apparently, the type of expertise being deployed in these debates is different from the expertise of the riverboat pilot in the wheelhouse, or the surgeon in the operating room.

Practitioners like river pilots and surgeons can be judged and held accountable based on the outcomes of their decisions. Such a straightforward line of performance assessment can rarely be applied to experts who would advise policy makers on the scientific and technical dimensions of complex policy and political problems. Advisory experts of this sort are not acting directly on the problems they advise about. Even if their advice is taken, feedbacks on performance are often not only slow, but also typically incomplete, inconclusive, and ambiguous. Such experts are challenged to deliver anything resembling what we expect — and usually get — from our pilots, surgeons, and plumbers: predictable, reliable, intended, obvious, and desired outcomes.


Nobody worries whether laypeople trust astrophysicists who study the origins of stars or biologists who study anaerobic bacteria that cluster around deep-sea vents. The wrangling among scientists who are debating, say, the reasons dinosaurs went extinct or whether string theory tells us anything real about the structure of the universe can be acrimonious and protracted, but it bears little import for anyone’s day-to-day life beyond that of the scientists conducting the relevant research. But the past half-century or so has seen a gradual and profound expansion of science carried out in the name of directly informing human decisions and resolving disputes related to an expanding range of problems for democratic societies involving technology, the economy, and the environment.[4]

If it can be said that there is a crisis of science and expertise and that we have entered a post-truth era, it is with regard to these sorts of problems, and to the claims science and scientific experts would make upon how we live and how we are governed.

Writing about the limits of science for resolving political disagreements about issues such as the risks of nuclear energy, the physicist Alvin Weinberg argued in an influential 1972 article that the inherent uncertainties surrounding such complex and socially divisive problems lead to questions being asked of science that science simply cannot answer.[5]

He coined the term “trans-science” to describe scientific efforts to answer questions that actually transcend science.

Two decades later, the philosophers Silvio Funtowicz and Jerome Ravetz more fully elucidated the political difficulties raised by trans-science as those of “post-normal” science, in which decisions are urgent, uncertainties and stakes are high, and values are in dispute. Their term defined a “new type of science” aimed at addressing the “challenges of policy issues of risk and the environment.”[6] (Funtowicz and Ravetz used the term “post-normal” to contrast with the day-to-day puzzle-solving business of mature sciences that Thomas Kuhn dubbed “normal science” in his famous 1962 book, The Structure of Scientific Revolutions.[7])

What Funtowicz and Ravetz stressed was the need to recognize that science carried out under such conditions could not — in theory or practice — be insulated from other social activities, especially politics.

Demands on science to resolve social disputes accelerated as the political landscape in the 1960s and 70s began to shift from a primary focus on the opposition between capital and labor toward one that pitted industrial society against the need to protect human health and the environment, a shift that intensified with the collapse of the Soviet Union in 1991. Public concerns about air and water pollution, nuclear energy, low levels of chemical contamination and pesticide residues, food additives, and genetically modified foods translated into public debates among experts about the magnitude of the problems and the type of policy responses, if any, that were needed. It is thus no coincidence that the 1980s and 90s saw “risk” emerge as the explicit field of competing claims of rationality.[8]

As with the previous era of conflict between capital and labor, these disputes often mapped onto political divisions, with industrial interests typically aligning with conservative politics to assert low levels of risk and excessive costs of action, and interests advocating environmental protection aligning with regimes for which the proper role of government included regulation of industry to reduce risks, even uncertain ones, to public health and well-being.

As such conflicts proliferated, it was not much of a step to think that the well-earned authority of science to establish cause-effect relations about the physical and biological world might be applied to resolve these new political disputes. In much the same logical process that leads us to rely on the expertise of the riverboat pilot and cardiac surgeon, scientists with relevant expertise have been called upon to guide policy makers in devising optimal policies for managing complex problems and risks at the intersection of technology, the environment, health, and the economy.

But this logic has not borne out. Instead, starting in the 1970s, there has been a rapid expansion in health and environmental disputes, not-in-my-backyard protests, and concerns about environmental justice, invariably accompanied by dueling experts, usually backed by competing scientific assessments of potential versus actual damage to individuals and communities. These types of disputes constitute an important dimension of today’s divisive national politics.


Why has scientific expertise failed to meet the dual expectations created by the rise of scientific knowledge in the modern age and the impressive performance record of experts acting in other domains of technological society? The difficulties begin with nature itself.

The distinguished anthropologist Mary Douglas was wont to observe that nature is the trump card that can be played to win an argument even when time, God, and money have failed.[9] The resort to nature as ultimate arbiter of disagreement is a central characteristic of the modern Western world.[10] The debate between Burke and Paine in the 18th century over the origins of democratic legitimacy drew its energy from fundamentally conflicting claims about nature.[11] A century later, J. S. Mill observed that “the word unnatural has not ceased to be one of the most vituperative epithets in the language.”[12] This remains the case today. When we assert that something is only natural, we draw a line in the sand. We declare that it is simply the way things are and that no further argumentation can change that.

How does nature derive its voice in the political realm? In the modern world, nature speaks through science. Most people do not apprehend nature directly; they apprehend it via those experts who can speak and translate its language. Translated to the political realm, scientists who would advise policy-making draw their legitimacy principally from the claim that they speak for nature. That expertise is ostensibly wielded to help policy makers distinguish that which is correct about the world from that which is incorrect, causal claims that are true from those that are false, and ultimately, policies that are better from those that are worse.

Yet when it comes to the complicated interface of technology, environment, human development, and the economy, political combatants have their own sciences and experts advocating on behalf of their own scientifically mediated version of nature. What is produced under such circumstances, Herbert Simon observed in 1983, is not ever more reliable knowledge, but rather “experts for the affirmative and experts for the negative.”[13] Under these all-too-familiar conditions, science clearly must be doing something other than simply reporting upon well-established cause-and-effect inferences observed in nature. What, then, is it doing?

A key insight was provided in the work of ecologist C. S. Holling, who revealed the breadth and variety of scientists’ assumptions about how nature works by describing the seemingly contradictory ecological management strategies adopted by foresters to address problems such as insect infestations or wildfires.[14]

If foresters were conventionally rational, they would all do the same thing when given access to the same relevant scientific information. However, in the diverse forest management approaches that were actually implemented, Holling and colleagues detected “differences among the worldviews or myths of nature that people hold,” leading in turn to different scientific “explanations of how nature works and the [different] implication of those assumptions on subsequent policies and actions.”[15]

One view of nature understands the environment to be favorable toward humankind. In this world, a benign nature re-establishes its natural order regardless of what humans do to their environment. This version of rationality encourages institutions and individuals to take a trial-and-error approach in the face of uncertainty. It is a view that requires strong proof of significant environmental damage to justify intervention that restricts economic development.

From another perspective, nature is in a precarious and delicate balance. Even a slight perturbation can result in an irreversible change in the state of the system. This view encourages institutions to take a precautionary approach to managing an ephemeral nature. The burden of proof, in this worldview, rests with those who would act upon nature.

A third view of nature centers around the uncertainties regarding causes and effects themselves. From this perspective, uncertainty is inherent, and the objective of scientific management is not to avoid any perturbation but to limit disorder via indicators, audits, and the construction of elaborate technical assessments to ensure that no perturbation is too great.[16]

The point is not that any of these perspectives is entirely right or entirely wrong. Social scientists Schwarz and Thompson noted that “each of these views of nature appears irrational from the perspective of any other,” reflecting what they term “contradictory certainties.”[17] There can be no single, unified view of nature that can be expressed through a coherent body of science. In the post-normal context, when science is applied to policy making and decisions with potentially momentous consequences, scientists and decision-makers are always interpreting observations and data through a variety of pre-existing worldviews and frameworks that create coherence and meaning. Different myths of nature thus become associated with different institutional biases toward action.[18]

Consider claims that we are collectively on the brink of overstepping “planetary boundaries” that will render civilization unsustainable. In the scientific journal Nature, Johan Rockström and his colleagues at the Stockholm Resilience Centre argue that “human actions have become the main driver of global environmental change,” and that this “could see human activities push the Earth system outside the stable environmental state of the Holocene, with consequences that are detrimental or even catastrophic for large parts of the world.”[19]

A review by Nordhaus et al. contests these claims, challenging the idea that these planetary boundaries constitute “non-negotiable” thresholds, interpreting them instead as rather arbitrary claims that for the most part don’t even operate at planetary scale.[20] Similarly, Brook et al. conclude that “ecological science is unable to define any single point along most planetary continua where the effects of global change will cause abrupt shifts or transitions to qualitatively different regimes across the whole planet.”[21] Strunz et al. argue that civilizational “collapse” narratives are themselves subject to interpretation and that the supposed alternatives of “sustainability or collapse” mischaracterize not only the nature of environmental challenges, but the types of policy responses available to societies.[22]

These various expert perspectives beautifully display the competing rationalities mapped out by Holling a generation before. They suggest that rather than non-negotiables, humanity faces a system of trade-offs — not only economic, but moral and aesthetic as well. Deciding how to balance these trade-offs is a matter of political contestation, not scientific fact.[23]

What counts as “unacceptable environmental change” involves judgments concerning the value of the things to be affected by the potential changes.

Seldom do scientists or laypeople consciously reflect on the underlying assumptions about the nature of nature that inform their arguments. Even when such assumptions can be made explicit, as Holling discussed in the case of forest ecosystem management,[24] it is not possible to say which provides the best foundation for policy making. This is the case given both that the science is concerned with the future states of open, complex, indeterminate natural and social systems, and that people may reasonably disagree about the details of a desirable future as well as the best pathways of getting there.

Amidst such multi-level uncertainty and disagreement (which may last for decades, or forever), it is impossible to test causal inferences at large enough temporal and spatial scales to draw conclusions about which experts were right and which were wrong with regard to questions related to something like overall earth-system behavior. Experts participating in such debates thus need never worry that they will be held accountable for the consequences of acting on their advice. They wield their expertise with impunity.


The most powerful ammunition that experts can deploy is numbers. Indeed, we might say that if nature is a political trump card, numbers are what give that card its status and authority. Pretty much any accounting of science will put quantification at the center of science’s power to portray the phenomena that it seeks to understand and explain. As Lord Kelvin said in 1883: “When you can measure what you are speaking about and express it in numbers, you know something about it: but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind.”[25]

To describe something with a number is to make a sharp claim about the correspondence between the idea being conveyed and the material world out there. Galileo said that mathematics was the language of the universe. The use of numbers to make arguments on behalf of nature thus amounts to an assertion of superior — that is, expert — authority over other ways to make claims about knowledge, truth, and reality.

When we look at the kinds of numbers that often play a role in publicly contested science, however, we see something surprising. Many numbers that appear to be important for informing policy discussions and political debates describe made-up things, not actual things in nature. They are, to be sure, abstractions about, surrogates for, or simulations of what scientists believe is happening — or will happen — in nature. But they are numbers whose correspondence to something real in nature cannot be tested, even in theory.

Yet even when representing abstractions or poorly understood phenomena, numbers still appear to communicate the superior sort of knowledge that Lord Kelvin claimed for them, giving rise to what mathematician and philosopher Alfred North Whitehead in 1929 termed “the Fallacy of Misplaced Concreteness,” in which abstractions are taken as concrete facts.[26] This fallacy is particularly seductive in the political context, when complicated matters (for example, the costs versus benefits of a particular policy decision) can be condensed into easily communicated numbers that justify a particular policy decision, such as whether or not to build a dam[27] or protect an ecological site.[28]
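As a toy illustration of this condensation (the project and all figures below are invented, not drawn from the sources cited above), consider how a benefit-cost ratio folds a contested assumption, here the discount rate, into a single authoritative-looking number:

```python
def npv(cashflows, rate):
    """Net present value of yearly cashflows at a given discount rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical dam: $100M cost now, $6M/yr in benefits for 50 years.
costs = [-100.0] + [0.0] * 50
benefits = [0.0] + [6.0] * 50

for rate in (0.03, 0.07):
    ratio = npv(benefits, rate) / -npv(costs, rate)
    print(f"discount rate {rate:.0%}: benefit-cost ratio = {ratio:.2f}")
```

The same hypothetical dam clears the benefit-cost threshold of 1.0 at a 3 percent discount rate and fails it at 7 percent; the seemingly concrete ratio merely relays a value-laden choice made upstream.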

Consider efforts to quantify the risks of high-level nuclear waste disposal in the United States and other countries. The behavior of buried nuclear waste is determined in part by the amount of water that can reach the disposal site and thus eventually corrode the containment vessel and transport radioactive isotopes into the surrounding environment.

One way to characterize groundwater flow is by calculating a variable called percolation flux, a measure of how fast water moves through rock, expressed in distance per unit of time. The techniques used to assign numbers to percolation flux depend on hydrogeological models, which are always incomplete representations of real conditions,[29] and laboratory tests on small rock samples, which cannot adequately represent the actual heterogeneity of the disposal site. Based on these calculations, assessments of site behavior then adopt a value of percolation flux for the entire site for purposes of evaluating long-term risk.

Problems arise, though, because water will behave differently in different places and times depending on conditions (such as the varying density of fractures in the rocks, or connectedness between pores, or temperature). At Yucca Mountain, Nevada, the site chosen by Congress in 1987 to serve as the US high-level civilian nuclear waste repository, estimates of percolation flux made over a period of 30 years have varied from as low as 0.1 mm/yr to as much as 40 mm/yr.[30]

This difference of nearly three orders of magnitude has momentous implications for site behavior, with the low end helping to assure decision-makers that the site will remain dry for thousands of years and the high end indicating a level of water flow that, depending on site design, could introduce considerable risk of environmental contamination over shorter time periods.[31]

To reduce uncertainties about the hydrogeology at Yucca Mountain, scientists proposed to test rocks from near the planned burial site, 300 meters underground, for chlorine-36 (36Cl). This radioisotope is rare in nature but is created during nuclear blasts and exists in higher abundance in areas where nuclear weapons have been tested, such as the Nevada Test Site near Yucca Mountain. If excess 36Cl could be found at the depth of the planned repository, it would mean that water had travelled from the surface to the repository depth in the 60 or so years since weapons tests were conducted, requiring a much higher percolation flux estimate than if no 36Cl was present.[32]
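A back-of-the-envelope calculation (a sketch, not the project's actual hydrogeological analysis) shows why a confirmed detection would have been so consequential. Treating the bomb-pulse travel distance and time naively, and ignoring porosity, fracture networks, and the distinction between a tracer's travel rate and a volumetric flux, the implied fast-path rate dwarfs even the highest site-wide estimate:

```python
# Crude implication of a confirmed 36Cl detection: bomb-pulse chloride
# reaching repository depth within ~60 years means some water followed
# fast paths far quicker than site-wide percolation estimates suggest.

depth_mm = 300 * 1000               # 300 m repository depth, in mm
years = 60                          # approximate time since weapons tests
fast_path_rate = depth_mm / years   # mm/yr along the fast path

site_estimates = (0.1, 40.0)        # published percolation-flux range, mm/yr
print(f"implied fast-path travel rate: {fast_path_rate:.0f} mm/yr")
print(f"ratio to high-end site estimate: {fast_path_rate / site_estimates[1]:.0f}x")
```

Even this crude arithmetic shows why the contested status of the 36Cl measurements left the "wet versus dry" question wide open.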

But confirming the presence of excess 36Cl hinged on the ability to detect it at concentrations of parts per 10 billion, a level of analytical sensitivity that turned out to introduce even more uncertainties to the percolation flux calculation. Indeed, contradictory results from scientists working in three different laboratories made it impossible to confirm whether or not the isotope was present in the sampled rocks.[33]

This research, performed to reduce uncertainty, actually increased it: the range of possible percolation flux values now spanned both “wet” and “dry” characterizations of the site, and could be marshaled either in support of or in opposition to the burial of nuclear waste at Yucca Mountain.

And even if scientists were to agree on some “correct” value of percolation flux for the repository site, it is only one variable among innumerable others that will influence site behavior over the thousands of years during which the site must safely operate. Percolation flux thus turns out not to be a number that tells us something true about nature. Rather, it is an abstraction that allows scientists to demonstrate their expertise and relevance and allows policy makers to claim that they are making decisions based upon the best available science, even if that science is contradictory and can justify any decision.

Such numbers proliferate in post-normal science and include, for example, many types of cost-benefit ratio[34]; rates and percentages of species extinction[35]; population-wide mortality reduction from dietary changes[36]; ecosystem services valuation[37] and, as we will later discuss, volumes of hydrocarbon reserves. Consider a number called “climate sensitivity.” As with percolation flux, the number itself — often (but not always) defined as the average atmospheric temperature increase that would occur with a doubling of atmospheric carbon dioxide — corresponds to nothing real, in the sense that the idea of a global average temperature is itself a numerical abstraction that collapses a great diversity of temperature conditions (across oceans, continents, and all four seasons) into a single value.

The number has no knowable relationship to “reality” because it is an abstraction, and one applied to the future, no less — the very opposite of what Lord Kelvin had in mind in extolling the importance of quantification. Yet it has come to represent a scientific proxy for the seriousness of climate change, with lower sensitivity meaning less serious or more manageable consequences and higher values signaling a greater potential for catastrophic effects. Narrowing the uncertainty range around climate sensitivity has thus been viewed by scientists as crucially important for informing climate change policies.
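In its simplest textbook usage (an idealization; actual assessments are far more elaborate), the sensitivity number S converts a CO2 concentration ratio into an equilibrium warming through a logarithmic relation, which is what lets a single value stand in for the whole problem:

```python
import math

def equilibrium_warming(sensitivity_c, c_ratio):
    """Idealized equilibrium warming for a CO2 concentration ratio,
    given a climate sensitivity defined per doubling of CO2."""
    return sensitivity_c * math.log2(c_ratio)

# The canonical 1.5-4.5 C range, applied to a hypothetical rise
# from 280 ppm (preindustrial) to 560 ppm (a doubling):
for s in (1.5, 3.0, 4.5):
    print(f"S = {s} C/doubling -> {equilibrium_warming(s, 560 / 280):.1f} C")
```

For a doubling, the logarithm equals exactly one, so the computed warming simply echoes S itself; that is the sense in which this one number carries the entire abstraction.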

Weirdly, though, the numerical representation of climate sensitivity remained constant over four decades — usually as a temperature range of 1.5°C to 4.5°C — even as the amount of science pertaining to the problem expanded enormously. Starting out as a back-of-the-envelope representation of the range of results produced by different climate models from which no probabilistic inferences could be drawn,[38] climate sensitivity gradually came to represent a probability range.

Most recently, for example, an article by Brown and Caldeira reported an equilibrium climate sensitivity value of 3.7°C with a 50 percent confidence range of 3.0°C to 4.2°C, while a study by Cox et al. reported a mean value of 2.8°C with a 66 percent confidence range of 2.2°C to 3.4°C, and an assessment by Sherwood and a team of 24 other scientists reported a 66 percent probability range of 2.6°C to 3.9°C.[39]

The 2020 Sherwood team study characterized the initial 1.5°C to 4.5°C range, first published in a 1979 National Research Council report, as “prescient” and “based on very limited information.” In that case, one might reasonably wonder about the significance of four decades of enormous subsequent scientific effort (the Sherwood paper cites more than 500 relevant studies) leading to, perhaps, a slightly more precise characterization of the probability range of a number that is an abstraction in the first place.

The legacy of research on climate sensitivity is thus remarkably similar to that of percolation flux: decades of research and increasingly sophisticated science dedicated to better characterizing a numerical abstraction that does not actually describe observable phenomena, with little or no change in uncertainty.


Ultimately, most political and policy disputes involve the future — what it should look like and how to achieve that desired state. Scientific expertise is thus often called upon to extrapolate from current conditions to future ones. To do so, researchers construct numerical representations of the relevant phenomena that can be used to make such extrapolations. These representations are called models.

Pretty much everyone is familiar with how numerical models can be used to inform decision-making through everyday experience with weather forecasting. Weather forecasting models are able to produce accurate forecasts up to about a week in advance. In part, this accuracy can be achieved because, for the short periods involved, weather systems can be treated as relatively closed, and the results of predictions can be evaluated rigorously. Weather forecasts have gotten progressively more accurate over decades because forecasters make millions of forecasts each year that they test against reality, allowing improved model performance due to continual learning from successes and mistakes and precise measurement of predictive skill.
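That "precise measurement of predictive skill" has standard machinery behind it; the Brier score, for example, is a widely used verification metric for probability forecasts (the forecast data below are invented for illustration):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error of probability forecasts against 0/1 outcomes;
    0 is perfect, and always forecasting 50% scores 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical rain forecasts (probabilities) and what happened (1 = rain):
probs = [0.9, 0.8, 0.1, 0.3, 0.6]
rained = [1, 1, 0, 0, 1]

print(f"forecaster's Brier score: {brier_score(probs, rained):.3f}")
print(f"climatology (always 0.5): {brier_score([0.5] * 5, rained):.3f}")
```

Because forecasters accumulate enormous numbers of such scored forecast-outcome pairs, any change to a model can be checked against reality almost immediately; this is precisely the tight feedback loop that the post-normal settings discussed below lack.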

But that’s not all. A sophisticated and diverse enterprise has developed to communicate weather predictions and uncertainties for a variety of users. Organizations that communicate weather information understand both the strengths and weaknesses of the predictions, as well as the needs of those who depend on the information. Examples range from NOAA’s Digital Marine Weather Dissemination System for maritime users to hourly forecasts published online by commercial weather services.

Meanwhile, people and institutions have innumerable opportunities to apply what they learn from such sources directly to decisions and to see the outcomes of their decisions — in contexts ranging from planning a picnic to scheduling airline traffic. Because people and institutions are continually acting on the basis of weather forecasts, they develop tacit knowledge that allows them to interpret information, accommodate uncertainties, and develop trust based on a shared understanding of benefits. Picnickers, airline executives, and fishers alike learn how far in advance they should trust forecasts of severe weather in order to make decisions whose stakes range from the relatively trivial to the truly consequential.

Even though the modeling outputs often remain inexact and fundamentally uncertain (consider the typical “50 percent chance of rain this afternoon” forecast) and specific short-term forecasts often turn out to be in error, people who might question the accuracy or utility of a given weather forecast are not accused of “weather science denial.” This is because the overall value of weather information is well integrated into the institutions that use the predictions to achieve desired benefits.

The attributes of successful weather forecasting are not, and cannot be, built into the kinds of environmental and economic models used to determine causal relations and predict future conditions in complex natural, technological, and social systems. Such models construct parallel alternative worlds whose correspondence to the real world often cannot be tested. The models drift away from the direct connection between science and nature, while giving meaning to quantified abstractions like percolation flux and climate sensitivity, which exist to meet the needs of modeled worlds but not the real one.

For example, the Chesapeake Bay Program (CBP), established in 1983, launched an extensive ecosystem modelling program to support its goal of undoing the negative effects of excessive nutrient loading in the bay from industrial activity, agricultural runoff, and economic development near the shoreline. A distributed suite of linked models was developed so scientists could predict the ecosystem impact of potential management actions, including improving sewage treatment, controlling urban sprawl, and reducing fertilizer or manure application on agricultural lands.[40]

While the CBP model includes data acquired from direct measurements in nature, the model itself is an imaginary world that stands between science and nature. The difference between a modelled prediction of, say, decreased nitrogen load in the Chesapeake Bay and an observation of such a decrease is that the achievement of that outcome in the model occurred by tweaking model inputs, parameters, and algorithms, whereas in nature the outcome was created by human decisions and actions.

And indeed, based on models that simulated the results of policy interventions, CBP claimed that it had achieved a steady improvement in the water quality of the main stem of the Bay. Yet interviews conducted in 1999 with program employees revealed that actual field testing did not demonstrate a trend of improved water quality.[41]

The computer model, designed to inform management of a real-world phenomenon, in fact became the object of management.

A similar phenomenon of displacing reality with a simulation occurs in modelling for climate policy when the impacts of nonexistent technologies — such as bioenergy with carbon capture and storage or solar radiation management interventions — are quantified and introduced into models as if they existed. Their introduction allows models to be tweaked to simulate reductions in future greenhouse warming, which are then supposed to become targets for policy making.[42]

As with the Chesapeake Bay model, these integrated assessment models depend on hybrid and constructed numbers to generate concrete predictions. To do so, they must assume future atmospheric composition, land cover, sea surface temperature, insolation, and albedo, not to mention the future of economic change, demographics, energy use, agriculture, and technological innovation. Many of the inputs themselves are derived from still other types of models, which are in turn based on still other sets of assumptions.

Based on these models, some scientists claim that solar radiation management techniques will contribute to global equity; others claim the opposite. In fact, the models upon which both sets of claims depend provide no verifiable knowledge about the actual world and ignore all of the scientific, engineering, economic, institutional, and social complexities that will determine real outcomes associated with whatever it is that human societies choose to do or not do.

The contrast between weather and climate forecasting could not be clearer. Weather forecasts are both reliable and useful because they predict outcomes in relatively closed systems for short periods with immediate feedback that can be rapidly incorporated to improve future forecasts, even as users (picnickers, ship captains) have innumerable opportunities to gain direct experience with the strengths and limits of the forecasts.

Using mathematical models to predict the future global climate over the course of a century of rapid sociotechnical change is quite another matter. While the effects of different development pathways on future atmospheric greenhouse gas concentrations can be modeled using scenarios, there is no basis beyond conjecture for assigning relative probabilities to these alternative futures. There are also no mechanisms for improving conjectured probabilities because the time frames are too long to provide necessary feedback for learning. What’s being forecast exists only in an artificial world, constituted by numbers that correspond not to direct observations and measurements of phenomena in nature, but to an assumption-laden numerical representation of that artificial world.

The problem is not by any means limited to climate models. Anyone who has followed how differing interpretations of epidemiological models have been used to justify radically different policy choices for responding to the COVID-19 pandemic will recognize the challenges of extrapolating from assumption-laden models to real-world outcomes. Similar difficulties have been documented in policy problems related to shoreline engineering, mine waste cleanup, water and fisheries management, toxic chemical policy, nuclear waste storage (as discussed), land use decisions, and many others.[43]

And yet, because such models are built and used by scientists for research that is still called science and produce crisp numbers about the artificial worlds they simulate, they are often subject to Whitehead’s fallacy of misplaced concreteness and treated as if they represent real futures.[44] Their results are used by scientists and other political actors to make claims about how the world works and, therefore, what should be done to intervene in the world.[45]

In this sense, the models serve a role similar to goat entrails and other prescientific tools of prophecy. They separate the prophecy itself, laden with inferences and values, from the prophet, who merely reports upon what is being foretold. The models become political tools, not scientific ones.


When decisions are urgent, uncertainties and stakes are high, and values are in dispute — the post-normal conditions of Funtowicz and Ravetz — it turns out that science’s claim to speak for nature, using the unique precision of numbers and the future-predicting promise of models, is an infinitely contestable basis for expertise and its authority in the political realm.

And yet, science undoubtedly does offer an incomparably powerful foundation not only for understanding our world but also for reliably acting in it.

That foundation depends upon three interrelated conditions that allow us to authoritatively establish causal relationships that can guide understanding and effective action — conditions very different from those we have been describing, and with very different consequences in the world.

First is control: the creation or exploitation of closed systems, so that important phenomena and variables involved in the system can be isolated and studied. Second is fast learning: the availability of tight feedback loops, which allow mistakes to be identified and learning to occur because causal inferences can be repeatedly tested through observations and experiments in the controlled or well-specified conditions of a more or less closed system. Third is clear goals: the shared recognition or stipulation of sharply defined endpoints toward which scientific progress can be both defined and assessed, meaning that feedback and learning can occur relative to progress toward agreed-upon outcomes that confirm the validity of what is being learned.

Technology plays a dual role in the fulfillment of these three conditions. Inventions that observe or measure matter, such as scales, telescopes, microscopes, and mass spectrometers, translate inputs from nature into interpretable signals (measurements, images, waveforms, and so on) that allow scientists to observe and often quantify components and phenomena of nature that would otherwise be inaccessible. At the same time, the development and use of practical technologies such as steam engines, electric generators, airfoils (wings), cathode ray tubes, and semiconductors continually raise questions for scientists to explore about the natural phenomena that the technologies embody (e.g., the transformation of heated water into pressurized steam; the flow of fluids or electrons around or through various media) and, in turn, derive generalizable relationships.

Under these three technologically mediated conditions, the practical consequences of scientific advances have helped to create the technological infrastructure of modernity. Technology, it turns out, is what makes science real for us. The light goes on, the jet flies, the child becomes immune. From such outcomes, people reliably infer that the scientific account of phenomena must be true and that the causal inferences derived from them must be correct. Otherwise, the technologies would not work.

Thus, our sense of science’s reliability is significantly created by our experience with technology. Moreover, technological performance shares this essential characteristic with practitioner expertise: nonexperts can easily see whether this process of translation is actually taking place and doing what’s expected. Indeed, expert practice typically involves the use of technology (a riverboat, a plumber’s torch, a surgical laser) to achieve its goal.

The problem for efforts to apply scientific expertise to complex social problems is that the three conditions mostly do not pertain. The systems being studied — the climate-energy system, fluids in the earth’s crust, population-wide human health — are open, complex, and unpredictable. Controlled experiments are often impossible, so feedback that can allow learning is typically slow, ambiguous, and debatable. Perhaps most importantly, endpoints often cannot be sharply defined in a way that allows progress to be clearly assessed; they are often related to identifying and reducing risk, and risk is an inherently subjective concept, always involving values and worldviews.

In the case of weather forecasts, vaccines, and surgical procedures, experts can assure us of how a given action will lead to a particular consequence, and, crucially, we can see for ourselves if they are right. In the case of science advisory expertise, the outcomes of any particular decision or policy are likely to be unknown and unknowable. No one can be held to account. Assumptions about the future can be modified and modeled to suit competing and conflicting interests, values, and beliefs about how the future ought to unfold. Science advisory experts can thus make claims with political content while appearing only to be speaking the language of science.

The exercise of expert authority under such circumstances might be termed “inappropriate expertise.” Its origins are essentially epistemological: climate models, or the statistical characterization of a particular chemical’s cancer-causing potential, manifest a different type of knowledge than weather forecasts, jet aircraft and vaccines. Claims to expertise based upon the former achieve legitimacy by borrowing the well-earned authority of the latter. In stark contrast, the legitimacy of expert-practitioners derives directly from proven performance in the real world.


When we apply the authority of normal science to post-normal conditions, a mélange of science, expertise, and politics is the usual result. Neither more research nor more impassioned pleas to listen to and trust an undifferentiated “science” will improve the situation because it is precisely the proliferation of post-normal science and its confusion with normal science that are the cause of the problem.

Yet, in the face of controversies regarding risk, technology, and the environment, the usual remedy is to turn things over to expert organizations like the National Academy of Sciences or the UK Royal Society. But doing so typically obscures the normative questions that lie at the heart of the conflicts in question. Why would anyone think that another 1,000 studies of climate sensitivity would change the mind of a conservative who opposes global governance regimes? Or that another decade of research on percolation flux might convince an opponent of nuclear power that nuclear waste can be safely stored for 10,000 years? Disagreements persist. More science is poured into the mix. Conflicts and controversies persist indefinitely.

There is an alternative. Decision-makers tasked with responding to controversial problems of risk and society would be better served to pursue solutions through institutions that can tease out the legitimate conflicts over values and rationality that are implicated in the problems. They should focus on designing institutional approaches that make this cognitive pluralism explicit, and they should support activities to identify political and policy options that have a chance of attracting a diverse constituency of supporters.

Three examples from different domains at the intersection of science, technology, and policy can help illuminate this alternative way of proceeding. Consider first efforts to mitigate the public health consequences of toxic chemical use and exposure. Such efforts, in particular via the federal Toxic Substances Control Act (TSCA) of 1976, have historically attempted to insulate the scientific assessment of human health risks of exposures to chemicals from the policy decisions that would regulate their use. But from TSCA’s inception, it has been clear that there is no obvious boundary that separates the science of risk from the politics of risk. The result — consistent with our discussion so far — has been endless legal action aimed at proving or disproving that scientific knowledge generated by the EPA in support of TSCA was sufficient to allow regulation.[46]

Starting in the late 1980s, the state of Massachusetts adopted an alternative approach. Rather than attempting to use scientific risk assessment to ban harmful chemicals that are valued for their functionality and economic benefit, Massachusetts’ Toxics Use Reduction Act (TURA) of 1989 focused on finding replacements that perform the same functions. The aim was to satisfy the concerns of both those aiming to eliminate chemicals in the name of environmental health and those using them to produce economic and societal value.[47]

TURA turned the standard adversarial process into a collaborative one.[48] State researchers tested substitutes for effectiveness and developed cost–benefit estimates; they worked with firms to understand barriers to adoption and cooperated with state agencies and professional organizations to demonstrate the alternatives. Rather than fighting endless scientific and regulatory battles, firms that use toxic chemicals became constituents for safer chemicals.

Between 1990 and 2005, Massachusetts firms subject to TURA requirements reduced toxic chemical use by 40 percent and their on-site releases by 91 percent.[49] Massachusetts succeeded not by trying to reduce scientific uncertainty about the health consequences of toxic chemicals in an effort to compel regulatory compliance, but by searching for solutions that satisfied the beliefs and interests of competing rationalities about risk.

A second example draws from ongoing efforts to assess hydrocarbon reserves. In the 1970s and 1980s, coincident with national and global concerns about energy shortages, the US Geological Survey (USGS) began conducting regular assessments of the size of US hydrocarbon (oil and gas) reserves.[50] As with percolation flux and climate sensitivity, quantified estimates of the volume of natural gas or oil stored in a particular area of the Earth’s crust have no demonstrable correspondence to anything real in the world. The number cannot be directly measured, and it depends on other variables that change with time, such as the state of extraction technologies, the state of geological knowledge, the cost of various energy sources, and so on.

USGS assessments that reserves were declining over time were largely noncontroversial until 1988, when the natural gas industry began lobbying the government to deregulate natural gas markets. When the USGS assessment released that year predicted a continued sharp decline in natural gas reserves, the gas industry vociferously disagreed.[51]

According to the American Gas Association (AGA), “The U.S. Geological Survey’s erroneous and unduly pessimistic preliminary estimates of the amount of natural gas that remains to be discovered in the United States . . . is highly inaccurate and clearly incomplete . . . the direct result of questionable methodology and assumptions.”[52]

In the standard ritual, dueling numbers were invoked, with the USGS report estimating recoverable natural gas reserves at 254 trillion cubic feet and the AGA at 400 trillion.[53]

The customary prescription for resolving such disputes, of course, would be to do more research to better characterize the numbers. But in this case, the USGS adopted a different approach. It expanded the institutional diversity of the scientists involved in the resource assessment exercises, adding industry and academic experts to a procedure that had previously been conducted by government scientists alone.

The collective judgment of this more institutionally diverse group resulted in significantly changed assumptions about, definitions of, and criteria for estimating hydrocarbon reserves. By 1995, the official government estimate for US natural gas reserves went up more than fourfold, to 1,075 trillion cubic feet.[54]

Agreement was created not by insulating the assessment process from stakeholders with various interests in the outcome, but by bringing them into the process and pursuing a more pluralistic approach to science. Importantly, the new assessment numbers could be said to be more scientifically sound only insofar as they were no longer contested. Their accuracy was still unknowable. But agreement on the numbers helped to create the institutional and technological contexts in which recovering significantly more oil and gas in the United States became economically feasible.

Finally, consider the role of complex macroeconomic models in national fiscal policy decisions. Economists differ on their value, with some arguing that they are essential to the formulation of monetary policy and others arguing that they are useless. Among the latter, the Nobel Prize-winning economist Joseph Stiglitz asserts: “The standard macroeconomic models have failed, by all the most important tests of scientific theory.”[55]

In the end, it doesn’t appear to matter much. In the United States, the models are indeed used by the Federal Reserve to support policy making. Yet the results appear not to be a very important part of the system’s decision processes, which depend instead on informed judgement about the state of the economy and debate among its governors.[56]

Whatever role the models might play in the Federal Reserve decision process, it is entirely subservient to a deliberative process that amalgamates different sources of information and insight into narratives that help make sense of complex and uncertain phenomena.

Indeed, the result of the Federal Reserve’s deliberative process is typically a decision either to do nothing or to tweak the rate at which the government loans money to banks up or down by a quarter of a percent. The incremental nature of the decision allows for feedback and learning, assessed against an endpoint mandated by Congress: maximum employment and price stability. The role of the models in this process seems mostly to be totemic. Managing the national economy is something that experts do, and using complicated numerical models is a symbol of that expertise, inspiring confidence like the stethoscope around a doctor’s neck.

Each of these examples offers a corrective to the ways in which science advice typically worsens sociotechnical controversies.

The Federal Reserve crunches economic data through the most advanced models to test the implications of various policies for future economic performance. And then its members, representing different regions and perspectives, gather to argue about whether to take some very limited actions to intervene in a complex system — the national economy — whose behavior so often evades even short-term prediction.

When the US Geological Survey found itself in the middle of a firestorm of controversy around a synthetic number representing nothing observable in the natural world, it did not embark upon a decades-long research program to more precisely characterize the number. It instead invited scientists from other institutions, encompassing other values, interests, and worldviews, into the assessment process. This more cognitively diverse group of scientists agreed to new assumptions and definitions for assessing reserves and arrived at new numbers that would have seemed absurd to the earlier, more homogeneous group of experts, but that met the information needs of a greater diversity of users and interests.[57]

Toxic chemical regulation in the United States has foundered on the impossibility of providing evidence of harm sufficiently convincing to withstand legal opposition.[58] More research and more experts have helped to enrich lawyers and expert witnesses, but they failed to restrict the use of chemicals that are plausibly but not incontrovertibly dangerous. The state of Massachusetts pursued a different approach, working within the uncertainties, to find complementarities between the interests and risk perspectives of environmentalists and industry in the search for safer alternatives to chemicals that were plausibly harmful.[59]

Truth, it turns out, often comes with big error bars, and that allows space for managing cognitive pluralism to build institutional trust. The Federal Reserve maintains trust through transparency and an incremental, iterative approach to decision-making. The USGS restored trust by expanding the institutional and cognitive diversity of experts involved in its assessment process. Massachusetts created trust by taking seriously the competing interests and rationalities of constituencies traditionally at each other’s throats.

Institutions are what people use to manage their understanding of the world and determine what information can be trusted and who is both honest and reliable. Appropriate expertise emerges from institutions that ground their legitimacy not on claims of expert privilege and the authority of an undifferentiated “science,” but on institutional arrangements for managing the competing values, beliefs, worldviews, and facts arrayed around such incredibly complex problems as climate change or toxic chemical regulation or nuclear waste storage. Appropriate expertise is vested and manifested not in credentialed individuals, but in institutions that earn and maintain the trust of the polity. And the institutional strategies available for managing risk-related controversies of modern technological societies may be as diverse as the controversies themselves.


We do not view it as coincidental that concerns among scientists, advocates, and others about post-truth, science denial, and so on have arisen amidst the expenditure of tens of billions of dollars over several decades by governments and philanthropic foundations to produce research on risk-related political and policy challenges. These resources, which in turn incentivized the creation of many thousands of experts through formal academic training in relevant fields, have created a powerful political constituency for a particular view of how society should understand and manage its technological, environmental, health, and other risks: with more science, conveyed to policy makers by science advocacy experts, to compel rational action.

Yet the experience of unrelenting and expanding political controversies around the risks of modernity is precisely the opposite outcome of what has been promised. Entangling the sciences in political disputes in which differing views of nature, society, and government are implicated has not resolved or narrowed those disputes, but has cast doubt upon the trustworthiness and reliability of the sciences and experts who presume to advise on these matters. People still listen to their dentists and auto mechanics. But many do not believe the scientists who tell them that nuclear power is safe, or that vaccines work, or that climate change has been occurring since the planet was formed.

We don’t think that’s a perverse or provocative view, but an empirically grounded perspective on why things haven’t played out as promised. When risks and dilemmas of modern technological society become subject to political and policy action, doing more research to narrow uncertainties and turning to experts to characterize what’s going on as the foundation for taking action might seem like the only rational way to go. But under post-normal conditions, in which decisions are urgent, uncertainties and stakes are high, and values are in dispute, science and expertise are, at best, only directly relevant to one of those four variables — uncertainty — and even there, the capacity for making a difference is often, as we’ve shown, modest at best.

The conditions for failure are thus established. Advocates and experts urgently proclaim that the science related to this or that controversy is sufficiently settled to allow a particular political or policy prescription — the one favored by certain advocates and experts — to be implemented. Left out of the formula are the high stakes and disputed values. Who loses out because of the prescribed actions? Whose views of how the world works or should work are neglected and offended?

Successfully navigating the divisive politics that arise at the intersections of technology, environment, health, and economy depends not on more and better science, nor louder exhortations to trust science, nor stronger condemnations of “science denial.” Instead, the focus must be on the design of institutional arrangements that bring the strengths and limits of our always uncertain knowledge of the world’s complexities into better alignment with the cognitive and political pluralism that is the foundation for democratic governance — and the life’s blood of any democratic society.

Acknowledgments: Heather Katz assured me that this is what Steve would have wanted me to do, and Jerry Ravetz assured me that this is what Steve would have wanted us to say. Ted Nordhaus helped me figure out how best to say it.

Mark Twain, Life on the Mississippi (1883; rpt. New York: Harper and Brothers, 1917), 77.
Matthew d’Ancona, Post-Truth: The New War on Truth and How to Fight Back (London: Ebury Publishing, 2017).
For the 100-year range, see, for example, Richard Stutt et al., “A Modelling Framework to Assess the Likely Effectiveness of Facemasks in Combination with ‘Lock-Down’ in Managing the COVID-19 Pandemic,” Proceedings of the Royal Society A 476, no. 2238 (2020): 20200376; and W. H. Kellogg and G. MacMillan, “An Experimental Study of the Efficacy of Gauze Face Masks,” American Journal of Public Health 10, no. 1 (1920): 34–42.
Yaron Ezrahi, The Descent of Icarus: Science and the Transformation of Contemporary Democracy (Cambridge, MA: Harvard University Press, 1990).
Alvin M. Weinberg, “Science and Trans-Science,” Minerva 10, no. 2 (1972): 209–222.
Silvio O. Funtowicz and Jerome R. Ravetz, “Science for the Post-Normal Age,” Futures 25, no. 7 (1993): 739.
Thomas S. Kuhn, The Structure of Scientific Revolutions (Chicago: University of Chicago Press, 1962).
See, for example, Mary Douglas and Aaron Wildavsky, Risk and Culture: An Essay on the Selection of Technological and Environmental Dangers (Berkeley: University of California Press, 1982); and Ulrich Beck, Risk Society: Towards a New Modernity, trans. Mark Ritter (Los Angeles: SAGE Publications, 1992).
Mary Douglas, Implicit Meanings: Selected Essays in Anthropology (1975; rpt. London: Routledge, 2003), 209.
Carl L. Becker, The Heavenly City of the Eighteenth-Century Philosophers (New Haven, CT: Yale University Press, 1932).
Yuval Levin, The Great Debate: Edmund Burke, Thomas Paine, and the Birth of Right and Left (New York: Basic Books, 2013).
John Stuart Mill, “Nature,” in Nature, the Utility of Religion, and Theism (1874; rpt. London: Watts & Co., 1904), 10.
Herbert A. Simon, Reason in Human Affairs (Stanford, CA: Stanford University Press, 1983), 97.
C. S. Holling, “The Resilience of Terrestrial Ecosystems: Local Surprise and Global Change,” in Sustainable Development of the Biosphere, ed. W. C. Clark and R. E. Munn (Cambridge, UK: Cambridge University Press, 1986), 292–317.
C. S. Holling, Lance Gunderson, and Donald Ludwig, “In Quest of a Theory of Adaptive Change,” in Panarchy: Understanding Transformations in Human and Natural Systems, ed. Lance H. Gunderson and C. S. Holling (Washington, DC: Island Press, 2002), 10.
Steve Rayner, “Democracy in the Age of Assessment: Reflections on the Roles of Expertise and Democracy in Public-Sector Decision Making,” Science and Public Policy 30, no. 3 (2003): 163–70.
Michiel Schwarz and Michael Thompson, Divided We Stand: Redefining Politics, Technology and Social Choice (Philadelphia, PA: University of Pennsylvania Press, 1991), 3–5.
Michael Thompson and Steve Rayner, “Cultural Discourses,” in Human Choice and Climate Change, ed. Steve Rayner and Elizabeth Malone (Columbus, OH: Battelle Press, 1998), 1:265–343.
Johan Rockström et al., “A Safe Operating Space for Humanity,” Nature 461 (September 24, 2009): 472.
Ted Nordhaus, Michael Shellenberger, and Linus Blomqvist, The Planetary Boundaries Hypothesis: A Review of the Evidence (Oakland, CA: Breakthrough Institute, 2012), 37.
Barry W. Brook et al., “Does the Terrestrial Biosphere Have Planetary Tipping Points?,” Trends in Ecology & Evolution 28, no. 7 (2013): 401.
Sebastian Strunz, Melissa Marselle, and Matthias Schröter, “Leaving the ‘Sustainability or Collapse’ Narrative Behind,” Sustainability Science 14, no. 3 (2019): 1717–28.
Nordhaus, Schellenberger, and Blomqvist, Planetary Boundaries Hypothesis, 37.
Holling, “Resilience of Terrestrial Ecosystems.”
Susan Ratcliffe, ed., Oxford Essential Quotations (Oxford, UK: Oxford University Press, 2016).
Alfred North Whitehead, Science and the Modern World (Cambridge, UK: Cambridge University Press, 1929), 64.
As explored in Theodore M. Porter, Trust in Numbers: The Pursuit of Objectivity in Science and Public Life (Princeton, NJ: Princeton University Press, 1996).
Mark Sagoff, “The Quantification and Valuation of Ecosystem Services,” Ecological Economics 70, no. 3 (2011): 497–502.
See, for example, Naomi Oreskes, Kristin Shrader-Frechette, and Kenneth Belitz, “Verification, Validation, and Confirmation of Numerical Models in the Earth Sciences,” Science 263, no. 5147 (1994): 641–46.
Daniel Metlay, “From Tin Roof to Torn Wet Blanket: Predicting and Observing Groundwater Movement at a Proposed Nuclear Waste Site,” in Prediction: Science, Decision Making, and the Future of Nature, ed. Daniel Sarewitz, Roger A. Pielke Jr., and Radford Byerly Jr. (Covelo, CA: Island Press, 2000), 199–228. See also Stuart A. Stothoff and Gary R. Walter, “Average Infiltration at Yucca Mountain over the Next Million Years,” Water Resources Research 49, no. 11 (2013): 7528–45,
Metlay, “From Tin Roof.”
Metlay, “From Tin Roof.”
James Cizdziel and Amy J. Smiecinski, Bomb-Pulse Chlorine-36 at the Proposed Yucca Mountain Repository Horizon: An Investigation of Previous Conflicting Results and Collection of New Data (2006; Nevada System of Higher Education),
Theodore M. Porter, Trust in Numbers: The Pursuit of Objectivity in Science and Public Life (Princeton, NJ: Princeton University Press, 1996).
Jeroen P. van der Sluijs, “Numbers Running Wild,” in The Rightful Place of Science: Science on the Verge (Tempe, AZ: Consortium for Science, Policy & Outcomes, Arizona State University, 2016), 151–87.
Daniel Sarewitz, “Animals and Beggars,” in Science, Philosophy and Sustainability: The End of the Cartesian Dream (2015), 135.
Mark Sagoff, “The Quantification and Valuation of Ecosystem Services,” Ecological Economics 70, no. 3 (2011): 497–502.
Jeroen Van der Sluijs et al., “Anchoring Devices in Science for Policy: The Case of the Consensus around Climate Sensitivity,” Social Studies of Science 28, no. 2 (1998): 291–323.
Patrick T. Brown and Ken Caldeira, “Greater Future Global Warming Inferred from Earth’s Recent Energy Budget,” Nature 552, no. 7683 (2017): 45–50; Peter M. Cox, Chris Huntingford, and Mark S. Williamson, “Emergent Constraint on Equilibrium Climate Sensitivity from Global Temperature Variability,” Nature 553 (January 18, 2018): 319–22; and S. C. Sherwood et al., “An Assessment of Earth’s Climate Sensitivity Using Multiple Lines of Evidence,” Reviews of Geophysics 58, no. 4 (2020).
Steve Rayner, “Uncomfortable Knowledge in Science and Environmental Policy Discourses,” Economy and Society 41, no. 1 (2012): 120–21.
Rayner, “Uncomfortable Knowledge,” 121.
See, for example, Duncan McLaren and Nils Markusson, “The Co-Evolution of Technological Promises, Modelling, Policies and Climate Change Targets,” Nature Climate Change 10 (May 2020): 392–97; Roger Pielke Jr., “Opening Up the Climate Policy Envelope,” Issues in Science and Technology 34, no. 4 (Summer 2018); and Jane A. Flegal, “The Evidentiary Politics of the Geoengineering Imaginary” (PhD diss., University of California, Berkeley, 2018).
See, for example, Andrea Saltelli et al., “Five Ways to Ensure that Models Serve Society: A Manifesto,” Nature 582 (June 2020): 482-484.
See, for example, Myanna Lahsen, “Seductive Simulations? Uncertainty Distribution Around Climate Models,” Social Studies of Science 35, no. 6 (2005): 895–922.
For examples, see Juan B. Moreno-Cruz, Katherine L. Ricke, and David W. Keith, “A Simple Model to Account for Regional Inequalities in the Effectiveness of Solar Radiation Management,” Climatic Change 110, nos. 3–4 (2012): 649–68; and Colin J. Carlson and Christopher H. Trisos, “Climate Engineering Needs a Clean Bill of Health,” Nature Climate Change 8, no. 10 (2018): 843–45. For a critique, see Jane A. Flegal and Aarti Gupta, “Evoking Equity as a Rationale for Solar Geoengineering Research? Scrutinizing Emerging Expert Visions of Equity,” International Environmental Agreements: Politics, Law and Economics 18, no. 1 (2018): 45–61.
See, for example, David Goldston, “Not ’Til the Fat Lady Sings: TSCA’s Next Act,” Issues in Science and Technology 33, no. 1 (Fall 2016): 73–76.
“MassDEP Toxics Use Reduction Program,” Massachusetts Department of Environmental Protection, accessed February 16, 2020,
See, for example, Toxics Use Reduction Institute, Five Chemicals Alternatives Assessment Study: Executive Summary (Lowell: University of Massachusetts, Lowell, June 2006),…; and Pamela Eliason and Gregory Morose, “Safer Alternatives Assessment: The Massachusetts Process as a Model for State Governments,” Journal of Cleaner Production 19, no. 5 (March 2011): 517–26.
Rachel L. Massey, “Program Assessment at the 20 Year Mark: Experiences of Massachusetts Companies and Communities with the Toxics Use Reduction Act (TURA) Program,” Journal of Cleaner Production 19, no. 5 (2011): 517–26.
Donald L. Gautier, “Oil and Gas Resource Appraisal: Diminishing Reserves, Increasing Supplies,” in Prediction: Science, Decision Making, and the Future of Nature, ed. Daniel Sarewitz, Roger A. Pielke Jr., and Radford Byerly Jr. (Covelo, CA: Island Press, 2000), 231–49.
Gautier, “Oil and Gas Resource,” 244.
Quoted in Gautier, “Oil and Gas Resource,” 244.
Cass Peterson, “U.S. Estimates of Undiscovered Oil and Gas Are Cut 40 Percent,” Washington Post, March 10, 1988, A3.
Gautier, “Oil and Gas Resource,” 246.
Joseph E. Stiglitz, “Rethinking Macroeconomics: What Failed, and How to Repair It,” Journal of the European Economic Association 9, no. 4 (2011): 591.
Jerome H. Powell, “America’s Central Bank: The History and Structure of the Federal Reserve” (speech, West Virginia University College of Business and Economics Distinguished Speaker Series, Morgantown, WV, March 28, 2017),…; and Stanley Fischer, “Committee Decisions and Monetary Policy Rules” (speech, Hoover Institution Monetary Policy Conference, Stanford University, Stanford, CA, May 5, 2017),
Gautier, “Oil and Gas Resource.”
David Kriebel and Daniel Sarewitz, “Democratic and Expert Authority in Public and Environmental Health Policy,” in Policy Legitimacy, Science, and Political Authority, ed. Michael Heazle and John Kane, Earthscan Science in Society Series (London: Routledge, 2015), 123–40.
Massey, “Program Assessment.”

Posted in Center for Environmental Genetics | Comments Off on Policy Making in the Post-Truth World

For those interested, below is the HUGO-sponsored Gene Nomenclature Committee (HGNC) winter newsletter. —DwN

HGNC Winter Newsletter 2021

Beta version of new search released

Earlier this month, we released a beta version of an improved search for genenames.org. The main improvements compared to our current search are as follows:

One search works for all – there is no longer a need to select between searching, for example, for genes or gene groups
No more need to enter wildcards (*)
A new auto-suggest feature that works for several of our search categories, including approved symbols, previous symbols, aliases, gene names and group names
The ability to search with all major variant spellings between UK and American English, such as ‘signalling’ and ‘signaling’
The ability to download search results in TXT or JSON format
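
The tolerance for UK/US spelling variants mentioned above can be illustrated with a small normalization step before matching. This is a hypothetical sketch only, not the HGNC's actual implementation; the `UK_US` mapping and both function names are invented for illustration.

```python
# Hypothetical illustration of spelling-variant-tolerant search matching.
# This is NOT the HGNC implementation; the mapping below is illustrative only.

UK_US = {
    "signalling": "signaling",
    "tumour": "tumor",
    "haemoglobin": "hemoglobin",
}

def normalize(term: str) -> str:
    """Lower-case a query and map known UK spellings to their US forms."""
    return " ".join(UK_US.get(w, w) for w in term.lower().split())

def matches(query: str, indexed_name: str) -> bool:
    """Two terms match if they normalize to the same string."""
    return normalize(query) == normalize(indexed_name)
```

With such a step, a user searching for ‘signalling’ and one searching for ‘signaling’ would retrieve the same records.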

For more information, you can read our recent blog post, which outlines these improvements in full. Please test this new search on our beta genenames website and send us your feedback, either via our email address or our feedback form.
Coming soon – a new curator post

The HGNC will soon be advertising for a new curator post. Please look out for details of this in the coming weeks, and notify anyone you know who you think would be interested!
Updates to placeholder symbols

HGNC curators continue to update placeholder symbols wherever possible. The following examples are all genes with placeholder symbols that have been updated with more informative symbols based on published information:

C9orf16 -> BBLN, bublin coiled coil protein
C16orf71 -> DNAAF8, dynein axonemal assembly factor 8
C20orf194 -> DNAAF9, dynein axonemal assembly factor 9
C12orf66 -> KICS2, KICSTOR subunit 2
C16orf70 -> PHAF1, phagosome assembly factor 1
CXorf56 -> STEEP1, STING1 ER exit protein 1
C11orf95 -> ZFTA, zinc-finger, translocation associated
FAM160A1 -> FHIP1A, FHF complex subunit HOOK interacting protein 1A
FAM160A2 -> FHIP1B, FHF complex subunit HOOK interacting protein 1B
FAM160B1 -> FHIP2A, FHF complex subunit HOOK interacting protein 2A
FAM160B2 -> FHIP2B, FHF complex subunit HOOK interacting protein 2B

The following examples were updated based on a change in locus type from ‘gene with protein product’ to ‘RNA, long non-coding’:

C6orf99 -> LINC02901, long intergenic non-protein coding RNA 2901
C5orf66 -> PITX1-AS1, PITX1 antisense RNA 1
C17orf102 -> TMEM132E-DT, TMEM132E divergent transcript
C22orf34 -> MIR3667HG, MIR3667 host gene

New gene groups

We have released the following new gene groups in the past few months:

CREC family
Mitochondrial translation release factor family (MTRF)
Axonemal radial spoke subunits (RSPH)

Gene Symbols in the News

A variety of recent news reports have linked human genes to COVID-19, using approved gene symbols: Work on patients in intensive care with a severe form of the disease found that the following specific variants of five immune system-related genes were common in these patients: IFNAR2, TYK2, OAS1, DPP9, and CCR2. Another study found that either a particular HLA-E allele, or the absence of the KLRC2 gene, was associated with patients who were hospitalised with COVID-19, while a variant of the DPP4 gene that is believed to be inherited from Neanderthals is also associated with patients in hospital with severe disease. A different study found that memory T cells in the lungs of patients with severe COVID-19 express lower levels of the CXCR6 gene product, compared to patients with moderate disease. Finally, a study looking for reasons as to why older patients are more at risk for more severe disease found an association between overexpression of five genes, FASLG, CTSW, CTSE, VCAM1, and BAG3 in patients aged 50-79 compared to younger patients.

In non-COVID-related news, variants in the following three genes have been associated with skin colour and vitamin D deficiency in African American people: SLC24A5, SLC45A2 and OCA2, while the TBX15 gene has been associated with face shape, and a particular variant of this gene found in Asian populations appears to be inherited from Denisovans.

A recent study has linked the HAND2 gene to gestational length in humans, with levels of the expression of this gene decreasing as labour draws near. There is hope that this finding might be relevant for further studies on preterm birth.


Progress of the COVID-19 epidemic in Sweden: an update

Dear Dan

The latest update by Nic Lewis is very interesting, thanks!

Below is a recent map of the excess deaths in Europe for 2020 (and therefore presumed to be COVID-19-related). As you can see from the colour code, Finland, Norway, Iceland, and Latvia had even better “excess death” statistics (<3%) than Sweden (7.3%); Denmark (3–6%) ranks between Sweden and these other four countries. The remaining EU countries, plus the United Kingdom, fared much worse. The Swedish approach/strategy therefore appears to have been successful, and it has been less damaging to our overall economy.

In Sweden there has never been any rule mandating that we wear masks. There have been some voluntary decisions for partial lockdowns of some small businesses. Very recently, we now have a regulation that does not permit restaurants to remain open after 20:30 (8:30 pm). Overall, I am very pleased that we have remained an open society. As for myself, during all of 2020 I have worked as usual (100%) and have never worn a mask, not even for one minute. In my lab building, masks are voluntarily worn mostly by our Asian colleagues, but not by Swedes. 😊 We now have a governmental recommendation that masks be worn on buses and trains; about 30–40% comply with this recommendation. As far as I have seen, there is no absolutely clear-cut truth, i.e., solid scientific data, showing that masks reduce viral transmission rates or severity of the disease.

Best wishes,
Magnus
Karolinska Institutet, Stockholm

From: Nebert, Daniel (nebertdw)
Sent: Saturday, February 20, 2021 5:08 PM

For those interested, this is a follow-up article by epidemiologist Nic Lewis who was proposing “herd immunity” in Sweden (and being severely criticized by many other colleagues). Because these GEITP pages shared several articles on this topic by this author last year, we feel obliged to share this comprehensive follow-up assessment of Sweden vs the U.K. 😊


Progress of the COVID-19 epidemic in Sweden: an update

Posted on February 18, 2021 by niclewis

By Nic Lewis

I thought it was time for an update of my original analysis of 28 June 2020. As I wrote then, the course of the COVID-19 pandemic in Sweden is of great interest, as it is one of very few advanced nations where no lockdown order that heavily restricted people’s movements and other basic freedoms was imposed.

Unfortunately, some of the commentary on how the COVID-19 epidemic has developed in Sweden has been ill-informed. Indeed, a shadowy group of academics, opinion leaders, researchers and others who are upset about Sweden's strategy and are actively seeking to influence it has been unmasked. They have been coordinating efforts to criticize media coverage of Sweden's strategy and to damage both the image of Sweden abroad and the reputation of individuals who work in this field.

I present here updated plots of weekly new cases and deaths, with accompanying comments.[1]

Key Points

§ Despite Swedish Covid cases falling to low levels in the summer, they resurged in the autumn

§ This second wave, which was very likely a seasonal effect, now appears to be past its peak

§ Excess deaths in East Sweden were high in the first wave and low in the second; for South Sweden the opposite is true. This suggests that population immunity and/or the remaining number of frail old people are key factors in the severity of the second wave.

§ Excess deaths in Sweden to end 2020 were modest, particularly for 2019 (when deaths were abnormally low) and 2020 combined. They appear to be much lower relative to the population than in England, despite far harsher restrictions being imposed there.

§ Only 3% of recorded 2020 COVID-19 deaths in Sweden were of people aged under 60, compared with 6% in England & Wales.


Overall development of the epidemic

Figure 1 shows the overall picture for confirmed weekly total new COVID-19 cases, intensive care admissions and deaths in Sweden, up to data released on 9 February 2021. The criteria for testing were widened during the early months, so case numbers up to June 2020 are not comparable with subsequent ones. Weekly new cases have been divided by 50 in order to make their scale comparable to that for ICU admissions and deaths. Death numbers for the two final weeks will be noticeably understated due to delays in death registrations.

Fig. 1 Total weekly COVID-19 confirmed cases, intensive care admissions and deaths in Sweden

In late summer 2020 it looked as if the epidemic had burnt itself out; however, a strong second wave developed during September to December. Although the start of the school and university term, along with more relaxed behaviour, may have set the second wave off, over the period as a whole the primary driver was almost certainly a seasonal increase in the virus's transmission and hence reproduction number. Studies that indicated a lack of substantial seasonality in transmission[2][3] have been proven wrong.

Analysis by age group

The changing age composition of new cases over time is shown in Figure 2. Case numbers before and after June 2020 are not comparable because of the major widening of testing during June 2020. However, it is clear that the second wave has been dominated by infections of people aged 10 to 59 years.

Fig. 2 Weekly COVID-19 confirmed cases by age group in Sweden

After falling to very low levels in late July 2020, weekly COVID-19 recorded deaths rose strongly from late October on, across all age groups (Figure 3). The data show the number of people with confirmed COVID-19 who died, regardless of the cause of death. In total, deaths were split roughly evenly, about 50% each, between the period up to 30 September 2020 (the first wave) and the period after it (the second wave, which is not yet over). During the second wave, a slightly higher proportion of deaths have been of people aged 80+ years than in the first wave.

Fig. 3 Weekly COVID-19 recorded deaths by age group in Sweden

Regional analysis

I turn now to regional analysis. Figure 4 shows weekly confirmed new cases for each of the 21 regions in Sweden. Although widening of testing (mainly in the second quarter of 2020) varied between regions, it is evident that Stockholm and Västra Götaland, which dominated cases during the first wave, were also two of the three regions dominating the second wave, with Stockholm region leading both waves. However, while Skåne had relatively few first wave cases, it broadly matched Stockholm in the second wave, albeit with a delay. Cases have fallen quite sharply in almost all regions since the turn of the year.

Fig. 4 Weekly COVID-19 confirmed cases by region in Sweden

Regions have varying populations, so confirmed cases per 100,000 head of population give a better picture of relative disease incidence (Figure 5). There is negligible correlation between the regions that had the highest incidence of COVID-19 cases during the first wave (including or excluding June to August) and during the post-September 2020 period. Absent any effect of growing population immunity, one might expect that the regions in which the virus spread most easily before September 2020 (by which time it was well ensconced in all regions), for instance because of greater urbanisation, would also be those in which it spread most easily in the second wave, other factors being unchanged. The lack of correlation between cases in the first and second waves is consistent with greater population immunity in the regions harder hit in the first wave counteracting, during the second wave, the greater ease with which infections had spread there when population immunity was low.

Fig. 5 Weekly COVID-19 confirmed cases per 100,000 head of population by region in Sweden

As with cases, it is difficult to discern an obvious relationship across regions between COVID-19 deaths per 100,000 people in the first and second waves, and the correlation between them is negligible. The non-identity between recorded COVID-19 deaths and deaths actually caused by the disease may be one reason for this. A somewhat clearer picture comes from examining weekly excess deaths in geographical super-regions, as shown in a recent Swedish report.[4]


Fig. 6 Weekly deaths (purple line) up to week 3 of 2021 in Sweden compared with the expected normal death toll (solid green line) and its 95% confidence interval (dashed green line)

Figure 6 shows the position for Sweden as a whole. Data go up to week 3 of 2021; data for more recent weeks are incomplete. Peak excess deaths were higher in the first wave than in the second wave, the opposite of the relationship for recorded COVID-19 deaths. While this likely partly reflects some undercounting of deaths caused by COVID-19 at the peak of the first wave, it appears to be due more to a considerably larger overcounting of COVID-19 deaths throughout the second wave. While the second wave is not over yet, it does appear that excess deaths have peaked.

Figure 7 shows deaths for East Sweden, the population of which is dominated by Stockholm region. Excess deaths in the first COVID-19 wave were further above normal than for Sweden as a whole, but excess deaths during the second wave peaked at a level not much above that in the 2017/18 flu and pneumonia season, and fell back within the 95% confidence interval by the end of 2020 (and to close to normal for Stockholm region alone).

Fig. 7 As Figure 6 but for East Sweden (Stockholm, Uppsala, Södermanland, Östergötland, Örebro, Västmanland)

However, in Southern Sweden, the picture is quite different (Figure 8), with the second wave of COVID-19 excess deaths being considerably larger than the first, which was smaller than in the 2017/18 flu season.

Fig. 8 As Figure 6 but for South Sweden (Jönköping, Kronoberg, Kalmar, Gotland, Blekinge, Skåne, Halland, Västra Götaland)

The population of South Sweden is dominated by that of Västra Götaland in the north-west and Skåne in the south, which contain respectively Sweden's second and third largest cities (Gothenburg and Malmö). In southernmost Sweden, the first wave barely breached the upper bound of the 95% confidence interval for normal deaths, while peak excess deaths in the second wave were three times that level (Figure 9). In the remainder of South Sweden, excess deaths in the second wave peaked at a broadly similar level to that in the first wave. The same is true for North Sweden, which is relatively sparsely populated and has few sizeable cities.

Fig. 9 As Figure 8 but for southernmost Sweden (Blekinge, Skåne) alone

In my view, the pattern of excess deaths in waves one and two in Stockholm-dominated East Sweden, compared to that in other parts of Sweden, suggests that much of the pool of people in East Sweden vulnerable to dying from COVID-19 had already succumbed by the end of wave one. On the other hand, although the level of previous infections, and hence population immunity, in Stockholm region at the end of wave one was more than adequate to inhibit large-scale spread of COVID-19 during the summer, at the level of population mixing occurring then, with hindsight it was clearly insufficient to provide herd immunity in the winter, when transmission is higher, raising both the virus's reproduction number (R0) and the herd immunity threshold.[5]

Although it is too early to be certain, at present it appears that population immunity in both Stockholm region and Sweden as a whole is now adequate to prevent large-scale COVID-19 epidemic growth even in winter, at least at the current level of population mixing. However, there is a caveat: the B.1.1.7 (UK-discovered) variant, which is estimated to be about one-third more transmissible[6], and hence faster growing, only became apparent in Sweden during December. While present in 35% of all Swedish sequenced genomes during the first three weeks of 2021, it is not yet dominant, so transmission can be expected to rise as it achieves dominance over the next two or three months.

Total Swedish deaths due to COVID-19

Sweden had 10,082 deaths with confirmed COVID-19 infection for the 53 reporting weeks of 2020, ending 3 January 2021, including those reported subsequently.[7] On another measure[8], there were 9,432 deaths. Only 0.9% of deaths were of people under 50 years old, and only 3% were of under 60-year-olds (which compares with 1.0% and 6% respectively for England and Wales). People over 70 years old accounted for over 91% of COVID-19 deaths.

The definition of COVID-19 deaths imposed by the WHO is likely to overcount deaths caused by COVID-19: where multiple causes contribute to a death clinically compatible with COVID-19 (normally respiratory failure or acute respiratory distress syndrome), it will be recorded as a death due to COVID-19 if SARS-CoV-2 infection is confirmed or suspected, even if COVID-19 is not considered the main cause of death.[9] Moreover, some countries have adopted even broader definitions of COVID-19 deaths, while others have likely undercounted COVID-19 deaths (Figure 10). And in many countries, some deaths caused by COVID-19 at the start of the epidemic were probably not recognised as such. Excess mortality over a normal level is therefore usually regarded as the best measure of deaths due to COVID-19.

Excess mortality is primarily affected by the severity of respiratory disease (mainly influenza) winter seasons. A severe flu season, which may be caused by a new influenza virus strain, results in many more very frail unhealthy old people dying than a mild flu season. Severe flu seasons may occur in pairs in adjoining years, for instance due to widespread vulnerability to a new strain.

It follows that, other things being equal, fewer deaths will tend to occur in a flu season that follows a severe flu season, even more so where that is the second of a pair of severe flu seasons, as there will be fewer than normal very old and frail people alive. Correspondingly, more deaths than usual will tend to occur in a season following one or more mild flu seasons. This is known as the “dry tinder” effect. It has been shown, for example, that across 32 European countries there is a significant negative correlation (–0.63) between flu intensity in winter 2018/19 and 2019/20 combined and the COVID-19 mortality rate in the first wave (Figure 10).[10]
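The "dry tinder" statistic quoted above is an ordinary Pearson correlation computed across countries. A minimal sketch of how such a figure is produced is shown below; the numbers are hypothetical stand-ins, not the actual 32-country flu-intensity dataset from the cited working paper.

```python
# Illustration of the "dry tinder" correlation check: Pearson correlation
# between combined two-season flu intensity and first-wave COVID-19 mortality.
# The data below are HYPOTHETICAL stand-ins, not the real 32-country dataset.
from math import sqrt

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical pattern: lower recent flu intensity, higher COVID-19 death rate
flu_intensity = [1.0, 2.5, 3.0, 4.5, 5.0, 6.5, 7.0, 8.5]
covid_deaths_per_100k = [80, 70, 75, 40, 45, 20, 25, 10]

r = pearson_r(flu_intensity, covid_deaths_per_100k)
r_squared = r * r  # for r = -0.63, R^2 would be ~0.396, as in Figure 10
print(f"r = {r:.2f}, R^2 = {r_squared:.2f}")
```

A negative r, as here, is what the "dry tinder" hypothesis predicts: mild recent flu seasons leave a larger pool of frail people, and hence more first-wave COVID-19 deaths.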

Sweden had unusually low mortality in 2019, which is largely a reflection of mild late 2018/19 and early 2019/20 flu seasons (the early and late part of each flu season falling in different calendar years). It thus had higher than usual “dry tinder” when the COVID-19 epidemic started.

A detailed analysis by a Danish researcher of the influence of “dry tinder” in Sweden, published by a US economic research institute, concluded that it accounted for many COVID-19 deaths.[11]

Similarly, an analysis by an economics researcher at a US university[12], which looked at 15 factors apart from severity of government interventions that might explain the higher COVID-19 deaths in Sweden than in other Nordic countries, concluded that the “dry tinder” factor was the most significant one. That paper also considered it plausible that Sweden’s “lighter government interventions” accounted for only a small part of Sweden’s higher Covid death rate than in other Nordic countries.

Fig.10 Death rate from COVID-19 up to 10 June 2020 by total 2-year flu intensity for 32 countries. The R2 of 0.396 (r=−0.63) is significant at the 1% level. A reproduction of Figure 1 in reference 10.

A fair estimate of excess deaths in Sweden caused by COVID-19 in 2020 should reflect the unusually large number of very old and frail people who survived 2019. That can be done by comparing actual and predicted deaths for 2019 and 2020 combined.[13]

I calculated excess mortality in Sweden for each year from 2000 on, by 5-year age group and sex, as the difference between actual mortality and normal mortality predicted by a regression fit to actual mortality rates over either 2000–2018 or 2009–2018, and then used population data to derive the expected number of deaths in a normal year for 2019 and 2020.[14] Mortality rates have been declining since 2000 in all age groups, although more slowly over the last decade. However, the effect on overall mortality of declining mortality rates at each age is partially counteracted by the increasing average age of the population.
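The calculation described in this paragraph (and detailed in footnote [14]) can be sketched for a single age/sex stratum as follows. All numbers are hypothetical illustrations, not actual Swedish data.

```python
# Minimal sketch of the excess-deaths method for ONE illustrative age/sex
# stratum, with HYPOTHETICAL numbers. Steps: (1) OLS-fit log(mortality rate)
# against year over a baseline period, (2) extrapolate the trend to 2020,
# (3) expected deaths = predicted rate * population, (4) excess = actual - expected.
from math import log, exp

def ols_fit(xs, ys):
    """Ordinary least squares intercept and slope for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b  # (intercept, slope)

# Hypothetical baseline: mortality per 1,000 in one stratum, 2009-2018,
# declining roughly 1% per year with flu-driven noise.
years = list(range(2009, 2019))
rates = [12.0, 11.9, 11.7, 11.7, 11.5, 11.3, 11.3, 11.1, 11.0, 10.9]

a, b = ols_fit(years, [log(r) for r in rates])

population = 500_000          # hypothetical stratum population in 2020
actual_deaths_2020 = 5_900    # hypothetical observed deaths in 2020

expected_rate_2020 = exp(a + b * 2020)                  # per 1,000
expected_deaths_2020 = expected_rate_2020 / 1000 * population
excess_2020 = actual_deaths_2020 - expected_deaths_2020
print(f"expected {expected_deaths_2020:.0f}, excess {excess_2020:.0f}")
```

In the actual analysis this is repeated for every 5-year age band and sex, on either the 2000–2018 or the 2009–2018 baseline, and the stratum-level excesses are summed.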

When estimating normal mortality from trends in mortality over, alternatively, 2000–2018 or 2009–2018, the excess combined 2019 and 2020 deaths were respectively 4,500 or 2,100, representing 0.043% or 0.020% of the mid-2020 Swedish population. For 2020 on its own, calculated excess deaths are 6,900 or 5,600 for the two regression bases (0.066% or 0.054% of the mid-2020 Swedish population).

Excess deaths for 2019 and 2020 combined were largely of men aged 65–79 and (to a lesser extent) aged 80–89 and 90+. Excess deaths of women were under 30% of those of men, based on mortality predicted by regressing over 2000–2018, and were actually negative based on regressing mortality over 2009–2018. On both regression bases and for both sexes, combined 2019 and 2020 deaths of people under 65 were lower than predicted. And average overall mortality for 2019 and 2020 combined was lower than for any previous year this century (and very probably ever).

A more detailed analysis of Swedish mortality in 2020, but which used incomplete deaths data, was published a month ago by the blogger swdevperestroika; it is well worth reading.[15] That article made similar points, and reached similar conclusions, to my own analysis.

Comparison of Swedish and English excess deaths

I applied a similar analysis method to derive excess deaths in England for 2019 and 2020. The data published in England are rather less satisfactory than in Sweden, so the derived estimates should be regarded as approximate. I used data from Table 1 of the UK Office for National Statistics (ONS) monthly mortality analysis for December 2020, which spans 2001 to 2020.[16] Doing so gives best estimates for combined 2019 and 2020 excess deaths of 113,000 (0.20% of the population) when predicting normal deaths by regressing age-standardised annual mortality rates over 2001–2018, or 44,000 (0.08% of the population) when regressing over 2009–2018. The estimated excess deaths for 2020 alone were respectively 95,000 and 58,000. Other data published by the ONS suggest 2020 excess deaths in England were modestly below the average of these two estimates, and represented some 0.13% of the population.[17]


Whichever of the longer or shorter regression periods provides the better estimate of normal mortality in 2019 and 2020, it seems clear that excess deaths, as a proportion of the population, were much higher in England than in Sweden. Excess deaths in England per 100,000 population were about four times those in Sweden for 2019 and 2020 combined, and about double those in Sweden for 2020 alone.
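As a quick arithmetic check, the rough four-fold and two-fold ratios follow directly from the percentage-of-population figures quoted in the text (England: 0.20% or 0.08% combined, ~0.13% for 2020 alone; Sweden: 0.043% or 0.020% combined, 0.066% or 0.054% for 2020 alone):

```python
# Verifying the England-vs-Sweden ratios from the excess-death percentages
# quoted in the text (long-baseline estimate first, short-baseline second).
england_combined = (0.20, 0.08)    # 2019+2020 excess deaths, % of population
sweden_combined = (0.043, 0.020)
england_2020 = 0.13                # ONS-consistent central figure for 2020 alone
sweden_2020 = (0.066, 0.054)

ratios_combined = [e / s for e, s in zip(england_combined, sweden_combined)]
ratio_2020 = england_2020 / (sum(sweden_2020) / 2)  # vs Sweden's mid-estimate
print(ratios_combined, ratio_2020)
```

Both baselines give a combined-years ratio of roughly four, and the 2020-only ratio comes out at roughly two, matching the statement above.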

Nicholas Lewis 18 February 2021

Originally posted here, where a pdf copy is also available

Update 19 February: Comparative percentage of Covid deaths aged under 60 in England and Wales added.

[1] The data are largely from daily Excel workbooks; I use versions published from 2 April 2020 (the earliest I could obtain) to 9 February 2021. During that period, data ceased to be published at weekends and then also on Mondays. Data are presented as 7-day totals to a day of the week for which there is no missing cumulative dataset. Save for regional breakdowns of cases, I use the data as originally reported on each date, not the final adjusted daily figures (which do not provide the required breakdowns). There is a one-day lag in reporting. Death numbers continue to be revised for up to several weeks owing to reporting delays.

[2] “All pharmaceutical and non-pharmaceutical interventions are currently believed to have a stronger impact on transmission over space and time than any environmental driver.” Carlson CJ, Gomez AC, Bansal S, Ryan SJ. Misconceptions about weather and seasonality must not misguide COVID-19 response. Nature Communications. 2020 Aug 27;11(1):1-4.

[3] Engelbrecht FA, Scholes RJ. Test for Covid-19 seasonality and the risk of second waves. One Health. 2020 Nov 29:100202.




[7] Downloaded 12 February 2021; available via the "Ladda ner data" link.


[9] International guidelines for certification and classification (coding) of COVID-19 as cause of death. World Health Organization 20 April 2020

[10] Hope, C. COVID-19 death rate is higher in European countries with a low flu intensity since 2018. Cambridge Judge Business School Working Paper No. 03/2020, September 2020.


[12] Klein, DB. 16 Possible Factors for Sweden’s High Covid Death Rate among the Nordics. George Mason University, Department of Economics Working Paper No. 20-27, August 2020.

[13] The 2020 death data, stratified by broad age bands, are currently available for reporting week numbers 1 to 53, plus some unallocated deaths. Since weeks 1 and 53 extend into 2019 and 2021 respectively, I estimated deaths for the 2020 calendar year by deducting (53 * 7 – 365.25) times the average number of daily deaths in weeks 1 and 53 combined. I also adjusted the actual 2019 deaths up by 365.25/365 to give those for an average-length year, which is what the regression fit estimates.
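Assuming the intended average-year length is 365.25 days (consistent with the 365.25/365 scaling used in the same footnote), the calendar adjustment works out as follows; the weekly death counts here are hypothetical:

```python
# Sketch of the calendar-length adjustment in this footnote. 53 reporting
# weeks cover 53*7 = 371 days, but an average calendar year is 365.25 days,
# so 5.75 days' worth of deaths (spread across weeks 1 and 53) are deducted.
# All death counts are HYPOTHETICAL.
WEEKS = 53
AVG_YEAR_DAYS = 365.25

deaths_week1, deaths_week53 = 2_100, 1_750   # hypothetical weekly totals
avg_daily = (deaths_week1 + deaths_week53) / 14

overlap_days = WEEKS * 7 - AVG_YEAR_DAYS     # = 5.75 days
deaths_53_weeks = 95_000                     # hypothetical 53-week total
deaths_2020_estimate = deaths_53_weeks - overlap_days * avg_daily

# The 2019 adjustment scales a 365-day year up to an average-length year:
deaths_2019 = 88_000                         # hypothetical
deaths_2019_adjusted = deaths_2019 * 365.25 / 365
print(deaths_2020_estimate, deaths_2019_adjusted)
```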

[14] I undertook linear ordinary least squares regression of log(mortality), since mortality is more likely to improve by a certain fraction each year than by a fixed absolute amount. Results are almost identical if absolute mortality is regressed instead. Regressing over a longer period reduces the influence of fluctuating flu intensity, but is less reflective of any change over time in the rate of improvement in mortality. Regressing over the shorter, ten-year period (2009–18) estimates a slightly slower fall in mortality over time than regressing over 2000–18. I downloaded annual mortality data for the whole of Sweden by sex and 5-year age bands, and likewise end-1999 to end-2019 population data by sex and 1-year age bands; 2020 population estimates were taken from the preliminary population statistics for 2020. Deaths in 2020 were downloaded on 8 February 2021.



[17] The ONS has published provisional estimates of weekly excess deaths for 2020, but the data for the final two weeks appear to be incomplete. A crude adjustment for incompleteness implies that there were approaching 75,000 total excess deaths in 2020. That is in line with the ONS figure for recorded COVID-19 deaths in 2020. Public Health England estimates that there were approximately 70,000 excess deaths in 2020 from when COVID-19 deaths started occurring.

At least 100 comments and discussions (some informative, spot-on scientific) can be found at the bottom of this URL:

The progress of the COVID-19 epidemic in Sweden: an update

Posted in Center for Environmental Genetics

There is a lot of news out there, and a lot of confusing data (on all the various vaccines under development) are being discussed. However, this recent review in Medscape is excellent, as far as being up-to-date and clearly presented. DwN

Author: David J Cennimo, MD, FAAP, FACP, AAHIVS


The genetic sequence of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) was published on January 11, 2020, and the rapid emergence of research and collaboration among scientists and biopharmaceutical manufacturers followed. Various methods are used for vaccine discovery and manufacturing. As of December 17, 2020, The New York Times Coronavirus Vaccine Tracker listed 63 vaccines in human trials, and at least 85 preclinical vaccines were under investigation in animals. [1] A number of antiviral medications and immunotherapies are also under investigation for coronavirus disease 2019 (COVID-19).

On December 28, 2020, the National Institutes of Health announced the fifth phase 3 trial for COVID-19 vaccines in the United States has begun enrolling adult volunteers. Results from the phase 1 clinical trial for the NVX-CoV2373 vaccine were published online December 10, 2020 in The New England Journal of Medicine.

The Advisory Committee on Immunization Practices has published guidelines on the ethical principles for the initial allocation for this scarce resource. [2]

According to recommendations of the Centers for Disease Control and Prevention's (CDC's) Advisory Committee on Immunization Practices, the first 2 groups to get the vaccines will be healthcare workers and residents of long-term care facilities (Phase 1a). [3]

Young children are likely to be assigned lower priority for vaccines because it is young adults who are the main drivers of transmission in the United States. [4]

The next 2 priority groups will be front-line essential workers and adults 75 years and older (Phase 1b); and adults 65-74 years, individuals 16-64 years with high-risk medical conditions, and essential workers who did not qualify for inclusion in Phase 1b (Phase 1c).

In addition to the complexity of finding the most effective vaccine candidates, the production process is also important for manufacturing the vaccine to the scale needed globally. Other variables that increase complexity of distribution include storage requirements (eg, frozen vs refrigerated) and whether more than a single injection is required for optimal immunity. Several technological methods (eg, DNA, RNA, inactivated, viral vector, protein subunit) are available for vaccine development. Vaccine attributes (eg, number of doses, speed of development, scalability) depend on the type of technological method employed. [5, 6, 7]

Some methods have been used in the development of previous vaccines, whereas others are newly developed. For example, mRNA vaccines for influenza, rabies, and Zika virus have been previously tested in animals. [8]

Examples of advantages and disadvantages of the various vaccine technologies are included in Table 1. [6, 7, 8]

Table 1. Vaccine Platform Characteristics

Platform | Characteristics | Doses | Vaccine Candidate (Manufacturer)
mRNA | Fast development speed; low-to-medium manufacturing scale | 2 | BNT-162b2 (Pfizer, BioNTech); mRNA-1273 (Moderna)
DNA | Fast development speed; medium manufacturing scale | 2 | INO-4800 (Inovio)
Viral vector | Medium development speed; high manufacturing scale | 1 or 2 | AZD-1222 (AstraZeneca; Oxford University); Ad26.COV2.S (Johnson & Johnson)
Protein subunit | Medium-to-fast development speed; high manufacturing scale | 2 | NVX-CoV2373 (Novavax)

Vaccines in Late-Stage Development

The following vaccines are in, or have completed, phase 3 clinical trials in the United States.

On December 18, 2020, the US Food and Drug Administration (FDA) granted Emergency Use Authorization (EUA) for the mRNA-1273 SARS-CoV-2 vaccine in individuals 18 years and older, after its Vaccines and Related Biological Products Advisory Committee (VRBPAC) voted to recommend (20 yes, 0 no, 1 abstention) the EUA on December 17.

On December 11, 2020, the FDA granted EUA for the BNT-162b2 SARS-CoV-2 vaccine in patients 16 years and older, after its VRBPAC voted to recommend (17 yes, 4 no, 1 abstention) the EUA on December 10.

Table 2. Vaccines in Late-Stage Development

BNT-162b2 (2 injections)

Clinical trials:

· Phase 3 trial ongoing in individuals 16 y and older; in mid-October 2020, company allowed by FDA to expand phase 3 trial to adolescents 12 y and older

Primary efficacy analysis:

· 95% effective against clinically evident COVID-19 infection 28 d after 1st dose across all subgroups [9]

· Well tolerated across all populations [9]

· 170 confirmed cases (placebo group, 162; vaccine group, 8); 10 severe cases after 1st dose (placebo group, 9; vaccine group, 1) [9]

· Efficacy consistent across age, sex, race, and ethnicity [9]

· Not evaluated for asymptomatic infection/carriage [9]

Regulatory status:

· First approved in United Kingdom on December 2, 2020; approved in early December 2020 by Bahrain and Canada; emergency use authorized by FDA on December 11, 2020

mRNA-1273 (2 injections)

Clinical trials:

· US phase 3 trial (COVE) ongoing; phase 2/3 trial began in adolescents 12-17 y in December 2020

Primary efficacy analysis:

· Efficacy rate 94.1% [10]

· 196 confirmed cases (placebo group, 185; vaccine group, 11)

· Only severe illness (30 cases) was in placebo group, including 1 death [13]

· 90 d after 2nd dose (30 participants): high levels of binding and neutralizing antibodies that fell but remained elevated

· Well tolerated [10]

Regulatory status:

· Emergency use authorized by FDA on December 18, 2020

AZD-1222 (2 injections)

Clinical trials:

· Phase 3 trials resumed on October 23, 2020 after being paused globally on September 6 (a participant in the United Kingdom was diagnosed with transverse myelitis, triggering a temporary hold)

Interim analysis of phase 3 clinical trial in United Kingdom, Brazil, and South Africa:

· Efficacy up to 90%, depending on dosage; average efficacy of 70.4% in combined analysis of 2 dosing regimens

· 131 COVID-19 cases from 21 d after 1st dose; 10 hospitalizations, all in placebo group (2 classified as severe; 1 death)

Regulatory status:

· Approved in United Kingdom December 29, 2020; phase 3 ongoing in United States

Ad26.COV2.S (1 injection)

Clinical trials:

· Phase 3 trial (ENSEMBLE) ongoing; second phase 3 trial (ENSEMBLE 2) announced November 15, 2020, to study effects of 2 doses

· Phase 1/2a study: antibodies to SARS-CoV-2 observed after a single injection; 99% were positive for neutralizing antibodies against SARS-CoV-2 at day 29, and strong T-cell responses and a TH1 response were also noted [11]

Regulatory status:

· Rolling biologics license application submitted in Canada and Europe on December 1, 2020

NVX-CoV2373 (2 injections)

Clinical trials:

· Phase 3 trial in United Kingdom concluded enrollment at end of November 2020; US and Mexico phase 3 trial began December 2020

· Phase 1 data showed the adjuvanted vaccine induced neutralization titers in healthy volunteers that exceeded responses in convalescent serum from mostly symptomatic patients with COVID-19 [12]

Regulatory status:

· Phase 3



BNT-162b2 Overview

· Genetic-code vaccine

· Storage and shipping requirements: Frozen; ultra-cold storage of -70°C

· Requires reconstitution

· Once thawed, stable while refrigerated for up to 5 days

· Room temperature stability: 2 hours

· Dose: 2 intramuscular injections in deltoid muscle 21 days apart


BNT-162b2 (Pfizer) is a nucleoside-modified messenger RNA (modRNA) vaccine that encodes an optimized SARS-CoV-2 receptor-binding domain (RBD) antigen.

The ongoing multinational phase 3 trial included 43,548 participants 16 years and older who were randomly assigned to receive vaccine or placebo by injection; 43,448 participants received vaccine or placebo (vaccine group, 21,720; placebo group, 21,728). Approximately 42% of global participants and 30% of US participants were of racially and ethnically diverse backgrounds, and 41% of global and 45% of US participants were 56-85 years of age.

Vaccine efficacy was 95%, and no serious safety concerns were observed. The only grade 3 adverse event with a frequency of greater than 2% was fatigue at 3.8%; headache occurred in 2% of participants. Short-term mild-to-moderate pain at the injection site was the most commonly reported reaction, and severe pain occurred in less than 1% of participants across all age groups. [9]
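The headline 95% figure follows from the case split quoted above via the standard formula VE = 1 − (attack rate in vaccine arm) / (attack rate in placebo arm). A minimal sketch is below; the equal Moderna arm sizes are an assumption, since the text gives only total COVE enrollment (with near-equal arms, the case ratio alone gives almost the same answer).

```python
# Vaccine efficacy from a two-arm trial: VE = 1 - AR_vaccine / AR_placebo.
# Case splits and Pfizer arm sizes are those quoted in the text; the equal
# Moderna arm split (~15,000 each) is an ASSUMPTION for illustration.
def vaccine_efficacy(cases_vax, n_vax, cases_placebo, n_placebo):
    """Point estimate of vaccine efficacy from attack rates in each arm."""
    ar_vax = cases_vax / n_vax
    ar_placebo = cases_placebo / n_placebo
    return 1 - ar_vax / ar_placebo

# BNT-162b2: 170 cases total (vaccine 8, placebo 162); 21,720 vs 21,728 dosed
ve_pfizer = vaccine_efficacy(8, 21_720, 162, 21_728)

# mRNA-1273: 196 cases total (vaccine 11, placebo 185); assumed equal arms
ve_moderna = vaccine_efficacy(11, 15_000, 185, 15_000)

print(f"BNT-162b2 ~{ve_pfizer:.1%}, mRNA-1273 ~{ve_moderna:.1%}")
```

These reproduce the 95% and 94.1% efficacy figures reported for the two mRNA vaccines.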



mRNA-1273 Overview

· Genetic-code vaccine

· Dose: 2 injections 28 days apart

· No dilution required

· Shipping and long-term storage: Frozen (-20°C) for 6 months

· After thawing: Standard refrigerator temperatures (2-8°C) for 30 days

· Room temperature: Up to 12 hours


The mRNA-1273 vaccine (Moderna) encodes the S-2P antigen. The US phase 3 trial (COVE) launched on July 27, 2020. The trial was conducted in cooperation with the National Institute of Allergy and Infectious Diseases and included more than 30,000 participants who received two 100-µg doses or matched placebo on days 1 and 29. The primary efficacy analysis was released November 30, 2020.

The COVE study (n = 30,420) included Americans 65 years and older (24.8%), younger individuals with high-risk chronic diseases (16.7%), individuals who identify as Hispanic or Latinx (20.5%), and individuals who identify as Black or African American (10.4%).

Immunogenicity data at 90 days after the second vaccination were evaluated in 34 participants in the phase 3 trial. [10] A phase 2/3 trial in adolescents 12-17 years, begun in December 2020, is expected to enroll 3,000 participants.



AZD-1222 Overview

· Viral vector vaccine

· Phase 3 trial was temporarily put on hold globally on September 6, 2020 after a study participant in the United Kingdom was diagnosed with transverse myelitis. After FDA review in the United States, [14] phase 3 trials resumed there on October 23, 2020.

· Storage: Refrigeration

· Dose: 2 injections 28 days apart


AZD-1222 (ChAdOx1 nCoV-19; AstraZeneca) is a replication-deficient chimpanzee adenoviral vector vaccine containing the surface glycoprotein antigen (spike protein) gene. This vaccine primes the immune system by eliciting antibodies to attack the SARS-CoV-2 virus if it later infects the body. Owing to the testing of a different coronavirus vaccine last year, development for AZD-1222 was faster than that of other viral vector vaccines.

Results of an interim analysis of the phase 3 clinical trial in the United Kingdom, Brazil, and South Africa are as follows:

One dosing regimen (n = 2741) showed vaccine efficacy of 90% when given as a half dose, followed by a full dose at least 1 month later. Another dosing regimen (n = 8895) showed 62% efficacy when given as 2 full doses at least 1 month apart. The combined analysis from both dosing regimens (N = 11,636) resulted in an average efficacy of 70.4%. All results were statistically significant (p < .0001). [15] The phase 3 efficacy trial in the United States is ongoing. Concerns about the clinical trial implementation and data analysis have emerged, because the half-dose regimen was not in the approved study design. [16, 17] These concerns will be addressed by regulatory agencies and await publication of the trial data.

Ad26.COV2.S Overview
· Viral vector vaccine
· Storage: refrigeration
· Dose: 1 injection

The phase 3 trial (ENSEMBLE) for the adenovirus serotype 26 (Ad26) recombinant vector-based vaccine (JNJ-78436735; Johnson & Johnson) was launched in September 2020 with a goal of 60,000 participants in the United States, South Africa, and South America. In December 2020, the goal was revised to 40,000 participants; because of the high prevalence of virus in the United States, researchers will be able to reach conclusions with a smaller trial. The vaccine uses Janssen’s AdVac technology, which enhances vaccine stability (ie, 2 years at -20ºC and at least 3 months at 2-8ºC). This makes the vaccine candidate compatible with standard vaccine distribution channels; new infrastructure would not be required for distribution to people who need it. [18] A second phase 3 trial (ENSEMBLE 2), to observe effects of 2 doses of the vaccine in up to 30,000 participants worldwide, was announced on November 15, 2020.

NVX-CoV2373 Overview
· Subunit vaccine
· Dose: 2 injections, 21 days apart

NVX-CoV2373 (Novavax) is engineered using recombinant nanoparticle technology from the SARS-CoV-2 genetic sequence to generate an antigen derived from the coronavirus spike protein. This is combined with an adjuvant (Matrix-M). Results of preclinical studies showed that it binds efficiently with the human receptors targeted by the virus. Phase 1/2 trials were initiated in May 2020. Phase 1 data in healthy adults showed that the adjuvanted vaccine induced neutralization titers that exceeded responses in convalescent serum from mostly symptomatic patients with COVID-19. [12] The phase 3 trial in the United Kingdom has completed enrollment of 15,000 participants, including more than 25% who were older than 65 years. Researchers conducting the US and Mexico phase 3 trial, which started in December 2020, plan to enroll up to 30,000 participants.

Other Investigational Vaccines

Additional vaccine candidates are in various stages of development and clinical testing. Examples of these vaccines are provided in Table 3.

Table 3. Other Investigational Vaccines

INO-4800 (Inovio Pharmaceuticals) [19]: DNA-based, 2-dose vaccine. Stable at room temperature for more than 1 year; frozen shipment not needed. Interim results from a phase 1 human trial (n = 40) showed favorable safety and immunogenicity; the trial was expanded to include older participants. [46] Phase 2/3 trial (INNOVATE) ongoing; phase 2 will evaluate a 2-dose regimen (1 mg or 2 mg) vs placebo in 400 participants. A grant from the Bill and Melinda Gates Foundation will speed testing and scale up a smart device (Cellectra 3PSP) for large-scale intradermal vaccine delivery; the company has also received funds from the US Department of Defense.

CVnCoV (CureVac) [20]: mRNA, 2-dose vaccine. Preliminary data from a phase 1 dose-escalating trial showed that the 12-µg dose provided IgG antibody levels similar to convalescent plasma. Phase 2b/3 trial enrollment (goal, 35,000 in Europe and Latin America) is ongoing.

Vaccine candidates V590 and V591 (Merck) [21]: V591 is based on a modified measles virus that delivers portions of the SARS-CoV-2 virus; phase 1 trial ongoing. V590 uses Merck’s Ebola vaccine technology; human trials ongoing.

COVID-19 S-Trimer (GlaxoSmithKline [GSK]) [22]: GSK is partnering with multiple companies using its adjuvants (compounds that enhance vaccine efficacy).

CpG 1018 adjuvant vaccine (Dynavax) [23]: Under development with Sanofi’s S-protein COVID-19 antigen and GSK’s adjuvant technology that stimulates the immune system; phase 1/2 trial ongoing.

UB-612 multitope peptide-based vaccine (COVAXX [division of United Biomedical, Inc]) [24]: Comprises SARS-CoV-2 amino acid sequences of the receptor-binding domain, further formulated with designer Th and CTL epitope peptides derived from the S2 subunit, membrane, and nucleoprotein regions of SARS-CoV-2 structural proteins, for induction of memory recall, T-cell activation, and effector functions against SARS-CoV-2. The company is partnering with the University of Nebraska Medical Center in the United States; a phase 1, open-label, dose-escalation study is ongoing in Taiwan.

HaloVax (Hoth Therapeutics; Voltron Therapeutics) [25]: Collaboration with the Vaccine and Immunotherapy Center at Massachusetts General Hospital; use of the VaxCelerate self-assembling vaccine platform offers 1 fixed immune adjuvant and 1 variable immune target to allow rapid development.

Nanoparticle SARS-CoV-2 vaccine (Ufovax) [26]: Vaccine prototype development utilizing the self-assembling protein nanoparticle (1c-SapNP) vaccine platform technology.

PDA0203 (PDS Biotechnology Corp) [27]: Utilizes the Versamune T-cell-activating platform for vaccine development.

CoVLP recombinant coronavirus virus-like particles (Medicago and GlaxoSmithKline) [28]: Combines Medicago’s recombinant coronavirus virus-like particles (rCoVLP) with GSK’s adjuvant system; phase 2/3 trial ongoing.

AS03-adjuvanted SCB-2019 (Clover Pharmaceuticals) [44]: Subunit vaccine containing the SARS-CoV-2 spike (S) protein. Phase 1 trial results reported in December 2020 showed a high level of antibodies. A phase 2/3 trial using the GSK adjuvant, with a goal of 34,000 volunteers, is launching by the end of 2020.

Covaxin (Bharat Biotech and Ocugen) [45]: Whole-virion inactivated vaccine, developed and manufactured in Bharat Biotech’s bio-safety level 3 biocontainment facility; co-development with Ocugen announced for the US market. Elicited strong IgG responses against the spike (S1) protein, receptor-binding domain (RBD), and nucleocapsid (N) protein of SARS-CoV-2, along with strong cellular responses, in phase 1 and 2 clinical trials (n ~1000). A phase 3 trial involving 26,000 volunteers is in progress in India.

Recombinant adenovirus type-5-vectored vaccine (Ad5-vectored vaccine; Sinopharm [China]) [29]: Approved in China and Saudi Arabia. Preliminary data show 86% efficacy. In the phase 2 trial, seroconversion of neutralizing antibodies was seen in 59% and 47% of those in the 2-dose groups; seroconversion of binding antibody was seen in 96-97% of participants; positive specific T-cell responses were seen in 88-90% of participants.

CoronaVac (inactivated vaccine; Sinovac [China]) [47]: Limited use in China. Interim phase 3 efficacy reports vary widely across trials: a trial in Brazil reports efficacy of 50-90%, whereas a Turkish trial reports 91.25% efficacy (n = 7371; data analysis based on 1322 participants: 752 vaccine and 570 placebo).

rAd26 (frozen) and rAd5 vector-based (lyophilized) formulations (Sputnik V; Moscow Gamaleya Institute) [30]: Phase 1/2 trials complete; approved in Russia. Both formulations were safe and well tolerated, with mostly mild adverse events and no serious adverse events; all participants produced anti-spike-protein and neutralizing antibodies after the second dose and generated CD4+ and CD8+ responses.

hAd5-COVID-19 (ImmunityBio) [31]: Phase 1 trial ongoing. The vaccine targets the inner nucleocapsid (N) and outer spike (S) proteins, which have been engineered to activate T cells and antibodies against SARS-CoV-2, respectively. These dual constructs offer the possibility for the vaccine candidate to provide durable, long-term cell-mediated immunity, with potent antibody stimulation, against both the S and N proteins.

MRT5500 (Sanofi and Translate Bio) [32]: mRNA-based vaccine candidate. Preclinical evaluation demonstrated a favorable ability to elicit neutralizing antibodies using a 2-dose schedule administered 3 weeks apart; a phase 1/2 trial is anticipated to start in Q4 2020.

AG0302-COVID19 (AnGes and Brickell Biotech) [33]: Adjuvanted DNA vaccine in a phase 1/2 study in Japan; data readouts expected in Q1 2021, with intent to follow with phase 3 trials in the United States and South America.
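For readers who like to see the arithmetic behind the efficacy percentages quoted in these trial summaries: vaccine efficacy is one minus the risk ratio (the attack rate in the vaccinated arm divided by the attack rate in the placebo arm). A minimal sketch, using made-up counts rather than data from any trial above:

```python
def vaccine_efficacy(risk_vaccinated: float, risk_placebo: float) -> float:
    """Vaccine efficacy = 1 - relative risk (attack-rate ratio)."""
    return 1.0 - (risk_vaccinated / risk_placebo)

# Hypothetical illustration only (NOT data from any trial discussed here):
# 10 cases among 10,000 vaccinated vs 100 cases among 10,000 placebo
# recipients gives a risk ratio of 0.10, i.e., 90% efficacy.
ve = vaccine_efficacy(10 / 10_000, 100 / 10_000)
print(f"efficacy = {ve:.0%}")
```

The same formula explains why a combined analysis of two dosing regimens yields an efficacy between the two individual figures.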

Posted in Center for Environmental Genetics | Comments Off on There is a lot of news out there, and a lot of confusing data (on all the various vaccines under development) are being discussed. However, this recent review in Medscape is excellent — as far as being up-to-date and clearly presented. DwN

An mRNA Vaccine against SARS-CoV-2: Phase I findings

For those interested, this article (see attached) just appeared in N Engl J Med. It describes a Phase 1, dose-escalation, open-label trial — including 45 healthy adults, 18 to 55 years of age, who received two vaccinations, 28 days apart, with mRNA-1273 in a dose of 25 μg, 100 μg, or 250 μg. Adverse reactions were reported.
There were 15 participants in each dose group. I’m not an expert on Phase 1 trials, but these findings apparently rendered the vaccine sufficiently safe to proceed with the much larger Phase 2 clinical trials.  😊

An mRNA Vaccine against SARS-CoV-2 — Preliminary Report. L.A. Jackson, E.J. Anderson, N.G. Rouphael, P.C. Roberts, M. Makhene, R.N. Coler, M.P. McCullough, J.D. Chappell, M.R. Denison, L.J. Stevens, A.J. Pruijssers, A. McDermott, B. Flach, N.A. Doria-Rose, K.S. Corbett, K.M. Morabito, S. O’Dell, S.D. Schmidt, P.A. Swanson II, M. Padilla, J.R. Mascola, K.M. Neuzil, H. Bennett, W. Sun, E. Peters, M. Makowski, J. Albert, K. Cross, W. Buchanan, R. Pikaart-Tautges, J.E. Ledgerwood, B.S. Graham, and J.H. Beigel, for the mRNA-1273 Study Group*

Posted in Center for Environmental Genetics | Comments Off on An mRNA Vaccine against SARS-CoV-2: Phase I findings

Some of you might be interested in the latest news from the HUGO Gene Nomenclature Committee (HGNC). DwN

Coming soon – an improved search

We are excited to announce that we are finalising a new version of the search engine for our website. The look and feel will not significantly change, but the search will feature an autosuggest function which will let the user know which category of our website the suggested result is in, e.g., ‘Gene symbol’, ‘Gene name’, ‘Group’, or ‘Page’. Up to 5 results will be suggested per category, but the complete number will be shown in parentheses. The new search also works without the addition of wildcards for gene symbols. After some testing by ourselves and selected colleagues, we expect to release a beta version for all of our users to test in mid-January. Please watch out for a tweet to notify you of this!
Updates to placeholder symbols

As explained in our recent guidelines paper, the HGNC updates placeholder symbols with informative nomenclature where applicable. In the last few months, we have updated the following two C#orf# symbols, based on published information:

C19orf57 -> BRME1, break repair meiotic recombinase recruitment factor 1

The approved gene name is slightly different to that first published with the BRME1 symbol and is the result of a discussion with all of the research groups that we are aware of who work on this gene. We are pleased that the most recent publication quotes both the approved gene symbol and approved gene name. Relevant publications: PMID: 32460033, PMID: 32345962, PMID: 32845237

CXorf21 -> TASL, TLR adaptor interacting with endolysosomal SLC15A4

The approved gene name has also been slightly modified compared to the name first published with the TASL symbol (PMID: 32433612). Again, this was discussed with the research group involved.

We would like to remind all researchers to contact the HGNC ahead of publication so that we can avoid requesting any post-publication changes to either gene symbols or gene names.

We were also able to update the nomenclature of three paralogous genes named with the placeholder FAM122 root symbol based on the functional characterisation (see PMID: 33108758) of one of these genes:

FAM122A -> PABIR1, PP2A Aα (PPP2R1A) and B55α (PPP2R2A) interacting phosphatase regulator 1
FAM122B -> PABIR2, PABIR family member 2
FAM122C -> PABIR3, PABIR family member 3

Note that the paralogs PABIR2 and PABIR3 contain ‘family member’ in their gene names, in place of the functional information shown for PABIR1, to reflect that the functions of the PABIR2 and PABIR3 genes are not yet determined.
New gene groups

New gene groups that we have released in the past few months include:

IQ motif containing GTPase activating protein family
Procollagen-lysine,2-oxoglutarate 5-dioxygenase family
3-hydroxyacyl-CoA dehydratase family
Dynein 1 complex subunits
Dynein 2 complex subunits
Dyneins, axonemal outer arm complex subunits
Dyneins, axonemal inner arm I1/f complex subunits
ADP-ribosyltransferase family
dishevelled binding antagonist of beta catenin family
DNA cross-link repair family
Radical S-adenosylmethionine domain containing

Gene Symbols in the News

The following news articles have connected specific human genes to COVID-19:

The expression of the NRP1-encoded protein in cultured lung cells has been shown to increase the rate of SARS-CoV-2 infection. RAB7A has been linked to SARS-CoV-2 infectivity via its regulation of the ACE2-encoded protein, which is known to be hijacked as a receptor for the virus. Variation in genes involved in the type I interferon pathway (IRF7, IFNAR1, and TLR3) has been associated with an increased risk of developing life-threatening COVID-19 pneumonia.

In dementia-related news: A new, rare form of dementia has been identified that is caused by mutation of the VCP gene. This mutation results in the buildup of MAPT-encoded tau protein in the brain. A study reported on the mechanism by which a previously identified dementia risk variant might cause the disease: GGA3 variants lead to the buildup of BACE1-encoded protein. Although expression of the mouse ortholog of the RBM3 gene has previously been shown to postpone the onset of dementia, expression of this cold-induced gene was initially not found in human blood, making its connection to prevention of dementia in humans seem unlikely. However, in line with other recent studies on hypothermic babies and stroke victims, a recent study found that expression of RBM3 is induced in humans following exposure to low temperatures – this study used hardy volunteer swimmers at an open-air London swimming pool!

Finally, a recent study on how genes vs. environment affect the likelihood of developing post-traumatic stress disorder (PTSD) suggested that the ability to make secure attachments to others helped to neutralise the risk of PTSD for carriers of a PTSD-associated IGSF11 gene variant.

Posted in Center for Environmental Genetics | Comments Off on Some of you might be interested in the latest news from the HUGO Gene Nomenclature Committee (HGNC). DwN

Advances in COVID-19 vaccine development, described for the lay person

I cannot find the source (and author) of this article — but it is an excellent lay summary of why these vaccines are being developed so rapidly and successfully, despite no vaccine against any coronavirus having been successful prior to 2018. My lab was involved in the early 1980s with this concept of “adding unprotected mRNA to a cell or an organism, and having the encoded protein produced from that RNA.” It could not be done, or any effects were undetectable, because of the fragility of mRNA (VERY quickly destroyed by RNases, the enzymes that break it down). The present snippet of mRNA, coding just for the spike protein (which binds the ACE2 receptor), is delivered in a lipid particle that protects it from these RNases.


The Beginning of the End

Last email, I said the endgame for COVID would begin with the announcement that a vaccine works in a large randomized trial. Three days later, Pfizer reached that milestone. One week later, Moderna joined in. It’s queen to bishop 7 and checkmate for COVID-19.

The Pfizer vaccine, known as BNT162b2 among the cognoscenti, was initially developed by BioNTech, a German biotech company that has mostly focused on cancer vaccines. BioNTech needed the added expertise of a big pharma outfit like Pfizer to help get their COVID-19 vaccine through the necessary clinical trials and past the regulatory pathways. BioNTech is headquartered outside of Frankfurt but also has a footprint near Boston. Some folks are concerned about high unemployment right now, but I’d point out that BioNTech is hiring, and so if you’ve been laid off during COVID and happen to be a PhD immunologist with expertise in flow cytometry, well, we’re talking a matching 401(k), free parking, and pet insurance!

A New Sheriff is in Town

To understand why a German outfit that is developing cancer vaccines is first in line for ending the COVID pandemic, we’ll need an introduction to the newest and most exciting development in vaccine research in the past hundred years: vaccines made from genetic code.

Until recently, all vaccines were a dead or weakened version of the bug we want to protect against, or a piece of that bug, and manufactured in a big vat of viruses, bacteria, yeast, or chicken embryos. But what if we cut out the middleman? What if we just inject ourselves with the instructions on how to make our own vaccine? Such an approach could have some big advantages in speed and safety.

Our genetic code is DNA of course, and that DNA sits balled up in most every cell of our bodies. When a cell wants to make a protein, such as the enzyme alcohol dehydrogenase that our liver uses to break down all the booze we’re drinking right now, or for the collagen in the tendon needed to push the remote control on your TV, the master copy of DNA is unrolled and a working copy is produced, using a slightly different molecule, RNA, that looks mostly like DNA but is better suited as a template for manufacturing proteins. DNA is like the file stored on an architect’s computer, where RNA is a printed copy of blueprints sent to the construction site. It is going to get used, trashed, and thrown out, but while it lasts, it instructs the contractor on how to build the house. There are several different types of RNA that have different functions, but for our purposes, we’re interested in messenger RNA, abbreviated mRNA. That mRNA has the job of taking the instructions on DNA inside the nucleus of the cell, making copies in the nucleo-meister, and traveling out to little blobs in the cell called ribosomes where we make a protein.

Imagine that we just put a little snippet of custom-designed RNA directly into the cell. Instead of making the usual proteins that the cell produces, the cell will suddenly start making the protein specified in that strip of RNA. RNA is a very fragile molecule, and it won’t be long before enzymes called RNases chew it up, but for about 10 days, our own cell is going to be a miniature vaccine-producing factory. Most cells are not too picky and will readily make whatever protein we give them via RNA.
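The codon-by-codon reading described above can be pictured as a toy program: the ribosome reads the mRNA three bases at a time and appends the matching amino acid until it hits a stop codon. The four-codon table below is a genuine subset of the standard genetic code, but the mRNA string is invented purely for illustration:

```python
# Tiny subset of the standard genetic code, just enough for this example.
CODONS = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP"}

def translate(mrna: str) -> list[str]:
    """Read an mRNA string three bases (one codon) at a time."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODONS[mrna[i:i + 3]]
        if amino_acid == "STOP":   # stop codon: release the finished chain
            break
        protein.append(amino_acid)
    return protein

print(translate("AUGUUUGGCUAA"))  # ['Met', 'Phe', 'Gly']
```

A real vaccine mRNA works the same way, just with ~1300 codons encoding the full spike protein instead of three.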

In the case of COVID, picking what protein to make is pretty easy. Those big spike proteins on the coronavirus are like having a “shoot here” sign on a video game. The Pfizer and Moderna vaccines are a strip of mRNA that produces a pile of the COVID spike protein, exactly as the protein looks when it is sitting on the virus floating around the body and before it has attached to a cell. This is where and when we want to attack the virus. In practice, there are challenges to getting these vaccines to work, but they offer many advantages, and some disadvantages, over current vaccine technology.

Advantages of mRNA Vaccines Over Those Clunky 1950s Models

One advantage is the speed at which an mRNA vaccine can be produced. Since these vaccines are made in a chemistry lab and require no living organisms to generate the product, we can manufacture vaccines in a rapid, standardized, and controlled fashion. It’s the difference between making vodka and wine. If you walked into a manufacturing facility for Pfizer’s BNT162b2 or Moderna’s mRNA-1273, you would not find big vats full of microorganisms making our pandemic-ending vaccine. Instead, you’d find an industrial chemistry lab and a team of managers working 90 hours/week having a nervous breakdown. Yeah, no pressure guys, but a hundred people did just die of COVID while you were on your coffee break.

Another advantage is safety. When you take a huge pile of polio virus and weaken it before injecting it into children, there is always the risk that your weakening process did not work right or that the virus mutated back to a more severe version. This can’t happen with an mRNA vaccine. Not only is RNA a fragile, easily-destroyed molecule, but it cannot make changes to our DNA. The copying is in one direction only. You can mark up your household Bible in Revelation 16:16 (that would be the part about Armageddon), but it won’t change the master copy at the printers, no matter how many exclamation points you’ve put in there since March. Similarly, mRNA vaccines cannot produce permanent changes in the body’s genes.

Another advantage is the strong immune response. If the body sees some dead hepatitis protein floating around, it might make antibodies, and it might not. Inert protein floating around is not highly motivating to the immune system. With an mRNA vaccine, a cell is now producing “viral” proteins, and to the immune system, it looks like the cell was infected with a virus. The better we can trick the body into thinking a vaccine is an actual viral infection, the stronger the immune response. In addition, just the presence of RNA outside of a cell causes the immune system to ramp up, and this increases the response to vaccination. RNA floating around is a signature of a viral infection.

Disadvantages of mRNA Vaccines

These mRNA vaccines do have some disadvantages. First, they have the same safety risks common to all vaccines. Rarely, a vaccine makes a patient worse if he or she gets the disease. Also, in order for the vaccine to work well, there is the chance of a sore arm, fever, or feeling unwell for a day or two. When people get sick with a virus, often the main cause of symptoms such as body aches, fever, or fatigue, is our own immune system’s response. The same is true for some vaccines that make us feel unwell. In research trials, vaccines for hepatitis and the flu don’t seem to make people any sicker than a placebo shot, but other vaccines, like the one for shingles, do sometimes make people sick.

Although the fragility of mRNA is a safety advantage, it is also a disadvantage, as we need mRNA vaccines to get inside cells in order to work. The body destroys RNA if it ever sees it outside a cell. The Pfizer and Moderna vaccines are enclosed in a very tiny ball of fat, called a lipid nanoparticle. This hides the mRNA and makes the vaccine look like a chocolate truffle to the cells in the body. In fact, it’s a chocolate truffle with sprinkles, because proteins stick to that ball of fat, what chemists call a protein corona, even though the nation collectively twitches a bit every time we hear the word “corona.” This ball of fat with proteins is as irresistible to many types of cells as a sprinkle-covered chocolate truffle is to most of us.

A third disadvantage is storage. Because mRNA is such a fragile molecule, the Pfizer vaccine is kept at -70 C, or for those of us still on the Fahrenheit scale, 94 degrees below zero. Brrrr. Moderna’s vaccine is stable in a standard medical freezer (-15 C or 5 F) for six months, giving it an advantage in logistics, but right now, we’ll take any vaccine we can get. It’s likely that the Pfizer vaccine will last at least a few days in a standard freezer so that frequent, smaller shipments will allow us to vaccinate patients with that product. In the long term, having a vaccine stable for a year in a standard refrigerator would be the ideal goal. That, or a freeze-dried powder we can mix up as needed. Some very smart chemists are working on the storage problem for us, and I don’t mean lightweight biochemists who watch The Big Bang Theory. I’m talking real, heavy duty chemists, the kind who work where chemistry becomes physics, the kind who wear T-shirts with math puns on them, the kind who obsessed on The Lord of the Rings when they were nine.
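For anyone checking the storage temperatures quoted above, the Celsius-to-Fahrenheit figures follow from the standard conversion formula F = C × 9/5 + 32:

```python
def c_to_f(celsius: float) -> float:
    """Convert a Celsius temperature to Fahrenheit: F = C * 9/5 + 32."""
    return celsius * 9 / 5 + 32

print(c_to_f(-70))  # -94.0  (Pfizer ultra-cold storage)
print(c_to_f(-15))  # 5.0    (Moderna, standard medical freezer)
```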

Other Uses for mRNA Vaccines

Currently, mRNA vaccines for influenza, Zika virus, HIV, and rabies are being tested in humans and animals, but the 60,000 or so subjects who have gotten mRNA vaccines specifically for COVID represent the largest group that has received this vaccine technology. Preventing infections is not the only reason to have an mRNA vaccine. BioNTech is actually developing most of its vaccines to treat cancer. The approach is to take the proteins that sit on top of cancer cells and make a snippet of RNA that will produce those proteins and inject it as a cancer vaccine. This stimulates our immune system to respond to the cancer. In a recent study of patients with very advanced melanoma, an mRNA vaccine was helpful, but not a cure.

Not only can you develop an mRNA vaccine against a type of cancer, but you can actually take a single patient and produce a customized vaccine for that person’s cancer cells. This approach is obviously expensive, but as the technology improves, we might see more personalized cancer vaccines in use. Research is underway for personalized mRNA cancer vaccines for brain tumors, leukemia, melanoma, and breast cancer. Just as NASA’s moon landing brought us huge advances in science that benefited everyday Americans with essential life-enhancing products like Tang, freeze-dried ice cream, and space blankets, the mRNA vaccines for COVID are going to accelerate the development of these vaccines in many areas of medicine.

More Vaccines are Coming

Novartis has an mRNA vaccine in development that cures baldness, produces 12-hour erections, and makes people thin no matter how much they eat. Ok, just kidding about that. But it is highly likely we’ll see reports on other vaccines made with other vaccine technology, such as the Oxford vaccine, generating positive results in the near future. The pharmaceutical industry takes a lot of heat in the press for high prices and aggressive marketing, but we should not forget that these large, for-profit, multinational corporations are about to save millions of people all over the planet, and that we’d be far worse off without them. Not to denigrate the frontline health care workers (my staff and me) who are risking our lives and the lives of our families to care for patients, but the real heroes here are the scientists working in a mad frenzy to develop and push these COVID vaccines over the finish line. Sure, we’re the tip of the spear, but a tip is worthless without all those feathers at the back sending the arrow where it needs to go.

A Gut-Check Safety Moment

A patient asked me if I was nervous about such a new vaccine. The fact is, for shots against tetanus, hepatitis, polio, and measles, we’ve given more than a billion doses. For COVID, the Pfizer and Moderna products will have been tested in fewer than 100,000 people before being unleashed on the public. After 20 million doses are administered, it’s quite possible we’re going to find uncommon, but nasty, side effects. However, the odds of having a life-threatening illness or dying from COVID, at any age, are far higher than the risk of the vaccine. I will be getting this vaccine if it is approved, as will everyone in my family, as soon as we are permitted to receive it.

Monoclonal Antibodies for COVID

The two key pillars for control of COVID are public health (getting people to wear masks and quit having parties) and vaccination. However, treatments for patients with COVID are still important, especially over the next six months as vaccination gets underway. We do have medications for sick, hospitalized patients with COVID, but what about relatively healthy people seen in a doctor’s office? For folks not sick enough to be hospitalized, but at risk for serious illness, the most promising approach is the infusion of antibodies against the SARS-CoV-2 spike protein. The FDA has recently approved an antibody treatment called bamlanivimab (bam’-la-NIV-a-mab, to rhyme with “Pam-la give a gab”) from Eli Lilly. This drug makes sick, hospitalized patients worse, but it appears to be effective for not-so-sick people who are not in the hospital but have COVID.

The FDA is allowing the drug to be used in these settings:

Age 65 or older.

Age 55 or older with high blood pressure, heart disease, or lung disease.

Age 12 or older with BMI over 35 (severe obesity), diabetes, kidney disease, or immunosuppressed.

Age 12-17 with certain other chronic medical problems.

The research trial, called BLAZE-1, showed that in people 65 or older, or those with a BMI of 35 or greater, the risk of hospitalization was 14.6% in the placebo group, and 4.2% in those who got the antibody infusion. If we treat 10 people, we’ll save one person a hospitalization. Good news, but not perfect, and this strategy has a few drawbacks.
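The “treat 10 people to save one hospitalization” figure is the number needed to treat (NNT): the reciprocal of the absolute risk reduction, computed here directly from the BLAZE-1 percentages quoted above:

```python
# Hospitalization risks from the BLAZE-1 subgroup quoted above.
risk_placebo = 0.146   # placebo group
risk_treated = 0.042   # bamlanivimab group

arr = risk_placebo - risk_treated   # absolute risk reduction
nnt = 1 / arr                       # number needed to treat
print(f"ARR = {arr:.1%}, NNT = {nnt:.1f}")  # ARR = 10.4%, NNT = 9.6
```

Rounding the NNT of 9.6 up gives the "treat 10, save one" rule of thumb in the text.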

The first problem is unleashing this complex medicine on millions of people based on a clinical trial involving only a few hundred folks. As we commonly see with new treatments, a second, larger trial with thousands of patients might reveal a host of issues that were not apparent here. Another problem is that of the 600,000 people getting COVID every day worldwide right now, probably 200,000 would be eligible for this drug, which has to be given as a one-hour intravenous infusion. We would exhaust the total supply in 36 hours. Lilly hopes to have a million doses by the end of the year, but again, not nearly enough to treat all eligible patients. In addition, the hospital infrastructure to give all these intravenous infusions would overwhelm many healthcare systems that now can barely keep up with the hospitalized COVID cases. It is not optimal to launch this undertaking full tilt for millions of patients without a bigger study. In the meantime, the best use for this drug is probably for the highest-risk (age 80 or 85 and older) patients. In the future, after vaccination, there will be a few patients who still get COVID anyway, and then, with much lower numbers of cases, bamlanivimab will be a potential option. In the meantime, if you’re in a higher-risk group, lordy, don’t rely on this treatment. DON’T GET COVID.

Antidepressants for COVID?

Another approach for mildly ill patients with COVID is to put them on the antidepressant fluvoxamine, sold under the brand name Luvox. In theory, one could also use Prozac, Zoloft, or Lexapro. All these drugs raise serotonin, of course, but they also stimulate a receptor called sigma-1 that is involved in many cellular functions, including damping down the immune system. A study out of Washington University in St. Louis randomized 152 adult patients with mild to moderate COVID to get fluvoxamine (which they quickly dialed up to the maximum dose) or placebo, and found a possible benefit to the drug. This approach would need to be tested in a larger trial to know if it works. It is extremely challenging to run studies like this in the middle of a pandemic, but it is even worse not to run them. Early, small studies of hydroxychloroquine suggested a benefit; later, bigger studies showed it was worthless, if not harmful. Remdesivir appeared to speed recovery by a few days in a 1000-patient study published in The New England Journal of Medicine, but a larger study of remdesivir, with almost triple the number of patients, did not really show much benefit. That study, sponsored by the World Health Organization, tested four different treatments for COVID and found all were worthless. Once again, the best treatment is not to get COVID in the first place.

The Tsunami Has Arrived

We’re right in the midst of what is likely the final wave of COVID. As the vaccines are being mass produced and distribution plans are underway, now it is more important than ever not to get this disease. I urge my higher-risk patients to be extra careful. It will be extremely embarrassing to die of COVID in December, only to have vaccine available in January. Now is the time to postpone holiday plans until next year. Christmas in July and Thanksgiving in June are great ideas. I urge everyone to be extra careful during this peak in cases. We’re not dragging folks in for a routine annual exam right now, but of course, if you need care or if you’re way overdue, then a visit might be in order.

Posted in Center for Environmental Genetics | Comments Off on Advances in COVID-19 vaccine development, described for the lay person