Origin of SARS-CoV-2 Virus — Following the Clues

For those interested in “the origin of the Wuhan virus,” these data — just sent to me today — are interesting and very relevant.

From: Anonymous
Sent: Thursday, June 24, 2021 5:35 AM

The article by Nicholas Wade [below] is not accurate when he states that “CGG for Arg (common codon usage in the human genome) has never been seen in any coronavirus genome.” In an article published last year [10.1016/j.virusres.2020.197976], it was shown that CGG is used by the SARS genome (frequency=0.09) and the MERS genome (0.43); in fact, the frequency is much higher in MERS than it is in the SARS-CoV-2 (0.2) genome, which has caused the COVID-19 pandemic [see Table 1 of 10.1016/j.virusres.2020.197976].
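To make concrete what a per-amino-acid codon-usage frequency like those quoted above means, the following is a minimal sketch of how such a value is tallied from a coding sequence: count the arginine codons and ask what fraction of them are CGG. The sequence here is a made-up toy example, not a real viral genome.

```python
# Arginine is encoded by six codons (DNA alphabet shown here).
ARG_CODONS = {"CGT", "CGC", "CGA", "CGG", "AGA", "AGG"}

def cgg_arg_frequency(cds: str) -> float:
    """Fraction of arginine codons in a coding sequence that are CGG."""
    # Split the sequence into consecutive triplets, dropping any trailing remainder.
    codons = [cds[i:i + 3] for i in range(0, len(cds) - len(cds) % 3, 3)]
    arg = [c for c in codons if c in ARG_CODONS]
    if not arg:
        return 0.0
    return arg.count("CGG") / len(arg)

# Toy sequence: ATG CGG AGA AGA CGT TAA -> four Arg codons, one of them CGG.
toy_cds = "ATGCGGAGAAGACGTTAA"
print(cgg_arg_frequency(toy_cds))  # 0.25
```

Applied to full genomes, tallies of this kind produce the Table 1 frequencies cited above (0.09 for SARS, 0.43 for MERS, 0.2 for SARS-CoV-2).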

Codon-usage differences like these (half or double in frequency) are very commonly seen — even in closely related bacterial or viral strains. The “double-CGG” feature also is present in the MERS genome. I did not find the polybasic insertion (Pro-Arg-Arg-Ala-Arg) in the COVID-19 reference genome. There will always be some unique genomic features that one might find in any organism. These features can always be interpreted by “conspiracy theorists” as “genetically-engineered,” but they are just as likely to be found in a Martian genome or in some genome from another galaxy.

[You can use this web site (http://hive.biochemistry.gwu.edu/review/codon) to find codon usage tables for different organisms, but this does not include virus genomes.]

Sent: Sunday, May 9, 2021 3:38 PM

For those interested in the possible origin of the coronavirus that causes COVID-19 — Nicholas Wade offers the best analysis yet (without any political agenda). Some of you might recognize the name; Wade has written scientific fact and opinion articles for Nature and Science, and many for the New York Times, over the years. He writes very well, and this is an EXCELLENT summary.

Origin of SARS-CoV-2 — Following the Clues

Did people, or Mother Nature, open Pandora’s box at Wuhan?

Nicholas Wade

The Covid-19 pandemic has disrupted lives the world over for more than a year. Its death toll will soon reach three million people. Yet the origin of the pandemic remains uncertain: the political agendas of governments and scientists have generated thick clouds of obfuscation, which the mainstream press seems helpless to dispel.

In what follows I will sort through the available scientific facts, which hold many clues as to what happened, and provide readers with the evidence to make their own judgments. I will then try to assess the complex issue of blame, which starts with, but extends far beyond, the government of China.

By the end of this article, you may have learned a lot about the molecular biology of viruses. I will try to keep this process as painless as possible. But the science cannot be avoided because for now, and probably for a long time hence, it offers the only sure thread through the maze.

The virus that caused the pandemic is known officially as SARS-CoV-2, but can be called SARS2 for short. As many people know, there are two main theories about its origin. One is that it jumped naturally from wildlife to people. The other is that the virus was under study in a lab, from which it escaped. It matters a great deal which is the case if we hope to prevent a second such occurrence.

I’ll describe the two theories, explain why each is plausible, and then ask which provides the better explanation of the available facts. It’s important to note that so far there is no direct evidence for either theory. Each depends on a set of reasonable conjectures but so far lacks proof. So, I have only clues, not conclusions, to offer. But those clues point in a specific direction. And having inferred that direction, I’m going to delineate some of the strands in this tangled skein of disaster.

A Tale of Two Theories

After the pandemic first broke out in December 2019, Chinese authorities reported that many cases had occurred in the wet market — a place selling wild animals for meat — in Wuhan. This reminded experts of the SARS1 epidemic of 2002, in which a bat virus had spread first to civets, an animal sold in wet markets, and from civets to people. A similar bat virus caused a second epidemic, known as MERS, in 2012. This time the intermediary host animals were camels.

The decoding of the virus’s genome showed it belonged to a viral family known as beta-coronaviruses, to which the SARS1 and MERS viruses also belong. The relationship supported the idea that, like them, it was a natural virus that had managed to jump from bats, via another animal host, to people. The wet market connection, the only other point of similarity with the SARS1 and MERS epidemics, was soon broken: Chinese researchers found earlier cases in Wuhan with no link to the wet market. But that seemed not to matter when so much further evidence in support of natural emergence was expected shortly.

Wuhan, however, is home to the Wuhan Institute of Virology, a leading world center for research on coronaviruses. So the possibility that the SARS2 virus had escaped from the lab could not be ruled out. Two reasonable scenarios of origin were on the table.

From early on, public and media perceptions were shaped in favor of the natural emergence scenario by strong statements from two scientific groups. These statements were not at first examined as critically as they should have been.

“We stand together to strongly condemn conspiracy theories suggesting that COVID-19 does not have a natural origin,” a group of virologists and others wrote in the Lancet on February 19, 2020, when it was really far too soon for anyone to be sure what had happened. Scientists “overwhelmingly conclude that this coronavirus originated in wildlife,” they said, with a stirring rallying call for readers to stand with Chinese colleagues on the frontline of fighting the disease.

Contrary to the letter writers’ assertion, the idea that the virus might have escaped from a lab invoked accident, not conspiracy. It surely needed to be explored, not rejected out of hand. A defining mark of good scientists is that they go to great pains to distinguish between what they know and what they don’t know. By this criterion, the signatories of the Lancet letter were behaving as poor scientists: they were assuring the public of facts they could not know for sure were true.

It later turned out that the Lancet letter had been organized and drafted by Peter Daszak, president of the EcoHealth Alliance of New York. Dr. Daszak’s organization funded coronavirus research at the Wuhan Institute of Virology. If the SARS2 virus had indeed escaped from research he funded, Dr. Daszak would be potentially culpable. This acute conflict of interest was not declared to the Lancet’s readers. To the contrary, the letter concluded, “We declare no competing interests.”

Peter Daszak, president of the EcoHealth Alliance

Virologists like Dr. Daszak had much at stake in the assigning of blame for the pandemic. For 20 years, mostly beneath the public’s attention, they had been playing a dangerous game. In their laboratories they routinely created viruses more dangerous than those that exist in nature. They argued they could do so safely, and that by getting ahead of nature they could predict and prevent natural “spillovers,” the cross-over of viruses from an animal host to people. If SARS2 had indeed escaped from such a laboratory experiment, a savage blowback could be expected, and the storm of public indignation would affect virologists everywhere, not just in China. “It would shatter the scientific edifice top to bottom,” an MIT Technology Review editor, Antonio Regalado, said in March 2020.

A second statement which had enormous influence in shaping public attitudes was a letter (in other words an opinion piece, not a scientific article) published on 17 March 2020 in the journal Nature Medicine. Its authors were a group of virologists led by Kristian G. Andersen of the Scripps Research Institute. “Our analyses clearly show that SARS-CoV-2 is not a laboratory construct or a purposefully manipulated virus,” the five virologists declared in the second paragraph of their letter.

Kristian G. Andersen, Scripps Research

Unfortunately, this was another case of poor science, in the sense defined above. True, some older methods of cutting and pasting viral genomes retain tell-tale signs of manipulation. But newer methods, called “no-see-um” or “seamless” approaches, leave no defining marks. Nor do other methods for manipulating viruses such as serial passage, the repeated transfer of viruses from one culture of cells to another. If a virus has been manipulated, whether with a seamless method or by serial passage, there is no way of knowing that this is the case. Dr. Andersen and his colleagues were assuring their readers of something they could not know.

The discussion section of their letter begins, “It is improbable that SARS-CoV-2 emerged through laboratory manipulation of a related SARS-CoV-like coronavirus”. But wait, didn’t the lead say the virus had clearly not been manipulated? The authors’ degree of certainty seemed to slip several notches when it came to laying out their reasoning.

The reason for the slippage is clear once the technical language has been penetrated. The two reasons the authors give for supposing manipulation to be improbable are decidedly inconclusive.

First, they say that the spike protein of SARS2 binds very well to its target, the human ACE2 receptor, but does so in a different way from that which physical calculations suggest would be the best fit. Therefore, the virus must have arisen by natural selection, not manipulation.

If this argument seems hard to grasp, it’s because it’s so strained. The authors’ basic assumption, not spelt out, is that anyone trying to make a bat virus bind to human cells could do so in only one way. First, they would calculate the strongest possible fit between the human ACE2 receptor and the spike protein with which the virus latches onto it. They would then design the spike protein accordingly (by selecting the right string of amino acid units that compose it). But since the SARS2 spike protein is not of this calculated best design, the Andersen paper says, therefore it can’t have been manipulated.

But this ignores the way that virologists do in fact get spike proteins to bind to chosen targets, which is not by calculation but by splicing in spike protein genes from other viruses or by serial passage. With serial passage, each time the virus’s progeny are transferred to new cell cultures or animals, the more successful are selected until one emerges that makes a really tight bind to human cells. Natural selection has done all the heavy lifting. The Andersen paper’s speculation about designing a viral spike protein through calculation has no bearing on whether or not the virus was manipulated by one of the other two methods.

The authors’ second argument against manipulation is even more contrived. Although most living things use DNA as their hereditary material, a number of viruses use RNA, DNA’s close chemical cousin. But RNA is difficult to manipulate, so researchers working on coronaviruses, which are RNA-based, will first convert the RNA genome to DNA. They manipulate the DNA version, whether by adding or altering genes, and then arrange for the manipulated DNA genome to be converted back into infectious RNA.

Only a certain number of these DNA backbones have been described in the scientific literature. Anyone manipulating the SARS2 virus “would probably” have used one of these known backbones, the Andersen group writes, and since SARS2 is not derived from any of them, therefore it was not manipulated. But the argument is conspicuously inconclusive. DNA backbones are quite easy to make, so it’s obviously possible that SARS2 was manipulated, using an unpublished DNA backbone.

And that’s it. These are the two arguments made by the Andersen group in support of their declaration that the SARS2 virus was clearly not manipulated. And this conclusion, grounded in nothing but two inconclusive speculations, convinced the world’s press that SARS2 could not have escaped from a lab. A technical critique of the Andersen letter takes it down in harsher words.

Science is supposedly a self-correcting community of experts who constantly check each other’s work. So, why didn’t other virologists point out that the Andersen group’s argument was full of absurdly large holes? Perhaps because in today’s universities, speech can be very costly. Careers can be destroyed for stepping out of line. Any virologist who challenges the community’s declared view risks having his next grant application turned down by the panel of fellow virologists that advises the government grant distribution agency.

The Daszak and Andersen letters were really political statements, not scientific ones, yet they were amazingly effective. Articles in the mainstream press repeatedly stated that a consensus of experts had ruled lab escape out of the question or extremely unlikely. Their authors relied for the most part on the Daszak and Andersen letters, failing to understand the yawning gaps in their arguments. Mainstream newspapers all have science journalists on their staff, as do the major networks, and these specialist reporters are supposed to be able to question scientists and check their assertions. But the Daszak and Andersen assertions went largely unchallenged.

Doubts about natural emergence

Natural emergence was the media’s preferred theory until around February 2021 and the visit by a World Health Organization commission to China. The commission’s composition and access were heavily controlled by the Chinese authorities. Its members, who included the ubiquitous Dr. Daszak, kept asserting before, during and after their visit that lab escape was extremely unlikely. But this was not quite the propaganda victory the Chinese authorities may have been hoping for. What became clear was that the Chinese had no evidence to offer the commission in support of the natural emergence theory.

This was surprising because both the SARS1 and MERS viruses had left copious traces in the environment. The intermediary host species of SARS1 was identified within four months of the epidemic’s outbreak, and the host of MERS within nine months. Yet some 15 months after the SARS2 pandemic began, and a presumably intensive search, Chinese researchers had failed to find either the original bat population, or the intermediate species to which SARS2 might have jumped, or any serological evidence that any Chinese population, including that of Wuhan, had ever been exposed to the virus prior to December 2019. Natural emergence remained a conjecture which, however plausible to begin with, had gained not a shred of supporting evidence in over a year.

And as long as that remains the case, it’s logical to pay serious attention to the alternative conjecture, i.e., that SARS2 escaped from a lab.

Why would anyone want to create a novel virus capable of causing a pandemic? Ever since virologists gained the tools for manipulating a virus’s genes, they have argued they could get ahead of a potential pandemic by exploring how close a given animal virus might be to making the jump to humans. And that justified lab experiments in enhancing the ability of dangerous animal viruses to infect people, virologists asserted.

With this rationale, they have recreated the 1918 flu virus, shown how the almost extinct polio virus can be synthesized from its published DNA sequence, and introduced a smallpox gene into a related virus.

These enhancements of viral capabilities are known blandly as gain-of-function experiments. With coronaviruses, there was particular interest in the spike proteins, which jut out all around the spherical surface of the virus and pretty much determine which species of animal it will target. In 2000 Dutch researchers, for instance, earned the gratitude of rodents everywhere by genetically engineering the spike protein of a mouse coronavirus so that it would attack only cats.

The spike proteins on the coronavirus’s surface determine which animal it can infect. CDC.gov

Virologists started studying bat coronaviruses in earnest, after these turned out to be the source of both the SARS1 and MERS epidemics. In particular, researchers wanted to understand what changes needed to occur in a bat virus’s spike proteins before it could infect people.

Researchers at the Wuhan Institute of Virology, led by China’s leading expert on bat viruses, Dr. Shi Zheng-li or “Bat Lady”, mounted frequent expeditions to the bat-infested caves of Yunnan in southern China and collected around a hundred different bat coronaviruses.

Dr. Shi then teamed up with Ralph S. Baric, an eminent coronavirus researcher at the University of North Carolina. Their work focused on enhancing the ability of bat viruses to attack humans so as to “examine the emergence potential (that is, the potential to infect humans) of circulating bat CoVs [coronaviruses].” In pursuit of this aim, in November 2015 they created a novel virus by taking the backbone of the SARS1 virus and replacing its spike protein with one from a bat virus (known as SHC014-CoV). This manufactured virus was able to infect the cells of the human airway, at least when tested against a lab culture of such cells.

The SHC014-CoV/SARS1 virus is known as a chimera because its genome contains genetic material from two strains of virus. If the SARS2 virus were to have been cooked up in Dr. Shi’s lab, then its direct prototype would have been the SHC014-CoV/SARS1 chimera, the potential danger of which concerned many observers and prompted intense discussion.

“If the virus escaped, nobody could predict the trajectory,” said Simon Wain-Hobson, a virologist at the Pasteur Institute in Paris.

Dr. Baric and Dr. Shi referred to the obvious risks in their paper but argued they should be weighed against the benefit of foreshadowing future spillovers. Scientific review panels, they wrote, “may deem similar studies building chimeric viruses based on circulating strains too risky to pursue.” Given various restrictions being placed on gain-of-function (GOF) research, matters had arrived in their view at “a crossroads of GOF research concerns; the potential to prepare for and mitigate future outbreaks must be weighed against the risk of creating more dangerous pathogens. In developing policies moving forward, it is important to consider the value of the data generated by these studies and whether these types of chimeric virus studies warrant further investigation versus the inherent risks involved.”

That statement was made in 2015. From the hindsight of 2021, one can say that the value of gain-of-function studies in preventing the SARS2 epidemic was zero. The risk was catastrophic, if indeed the SARS2 virus was generated in a gain-of-function experiment.

Inside the Wuhan Institute of Virology

Dr. Baric had developed, and taught Dr. Shi, a general method for engineering bat coronaviruses to attack other species. The specific targets were human cells grown in cultures and humanized mice. These laboratory mice, a cheap and ethical stand-in for human subjects, are genetically engineered to carry the human version of a protein called ACE2 that studs the surface of cells that line the airways.

Dr. Shi returned to her lab at the Wuhan Institute of Virology and resumed the work she had started on genetically engineering coronaviruses to attack human cells.

Dr. Zheng-li Shi in a high safety (level BSL4) lab. Her coronavirus research was done in the much lower safety levels of BSL2 and BSL3 labs.

How can we be so sure?

Because, by a strange twist in the story, her work was funded by the National Institute of Allergy and Infectious Diseases (NIAID), a part of the U.S. National Institutes of Health (NIH). And the grant proposals that funded her work, which are a matter of public record, specify exactly what she planned to do with the money.

The grants were assigned to the prime contractor, Dr. Daszak of the EcoHealth Alliance, who subcontracted them to Dr. Shi. Here are extracts from the grants for fiscal years 2018 and 2019. “CoV” stands for coronavirus and “S protein” refers to the virus’s spike protein.

“Test predictions of CoV inter-species transmission. Predictive models of host range (i.e. emergence potential) will be tested experimentally using reverse genetics, pseudovirus and receptor binding assays, and virus infection experiments across a range of cell cultures from different species and humanized mice.”

“We will use S protein sequence data, infectious clone technology, in vitro and in vivo infection experiments and analysis of receptor binding to test the hypothesis that % divergence thresholds in S protein sequences predict spillover potential.”

What this means, in non-technical language, is that Dr. Shi set out to create novel coronaviruses with the highest possible infectivity for human cells. Her plan was to take genes that coded for spike proteins possessing a variety of measured affinities for human cells, ranging from high to low. She would insert these spike genes one by one into the backbone of a number of viral genomes (“reverse genetics” and “infectious clone technology”), creating a series of chimeric viruses. These chimeric viruses would then be tested for their ability to attack human cell cultures (“in vitro”) and humanized mice (“in vivo”). And this information would help predict the likelihood of “spillover,” the jump of a coronavirus from bats to people.

The methodical approach was designed to find the best combination of coronavirus backbone and spike protein for infecting human cells. The approach could have generated SARS2-like viruses, and indeed may have created the SARS2 virus itself with the right combination of virus backbone and spike protein.

It cannot yet be stated that Dr. Shi did or did not generate SARS2 in her lab because her records have been sealed, but it seems she was certainly on the right track to have done so. “It is clear that the Wuhan Institute of Virology was systematically constructing novel chimeric coronaviruses and was assessing their ability to infect human cells and human-ACE2-expressing mice,” says Richard H. Ebright, a molecular biologist at Rutgers University and leading expert on biosafety.

“It is also clear,” Dr. Ebright said, “that, depending on the constant genomic contexts chosen for analysis, this work could have produced SARS-CoV-2 or a proximal progenitor of SARS-CoV-2.” “Genomic context” refers to the particular viral backbone used as the testbed for the spike protein.

The lab escape scenario for the origin of the SARS2 virus, as should by now be evident, is not mere hand-waving in the direction of the Wuhan Institute of Virology. It is a detailed proposal, based on the specific project being funded there by the NIAID.

Even if the grant required the work plan described above, how can we be sure that the plan was in fact carried out? For that we can rely on the word of Dr. Daszak, who has been much protesting for the last 15 months that lab escape was a ludicrous conspiracy theory invented by China-bashers.

On 9 December 2019, before the outbreak of the pandemic became generally known, Dr. Daszak gave an interview in which he talked in glowing terms of how researchers at the Wuhan Institute of Virology had been reprogramming the spike protein and generating chimeric coronaviruses capable of infecting humanized mice.

“And we have now found, you know, after 6 or 7 years of doing this, over 100 new sars-related coronaviruses, very close to SARS,” Dr. Daszak says around minute 28 of the interview. “Some of them get into human cells in the lab, some of them can cause SARS disease in humanized mice models and are untreatable with therapeutic monoclonals and you can’t vaccinate against them with a vaccine. So, these are a clear and present danger….

“Interviewer: You say these are diverse coronaviruses and you can’t vaccinate against them, and no anti-virals — so what do we do?

“Daszak: Well, I think…coronaviruses — you can manipulate them in the lab pretty easily. Spike protein drives a lot of what happen with coronavirus, in zoonotic risk. So, you can get the sequence, you can build the protein, and we work a lot with Ralph Baric at UNC to do this. Insert into the backbone of another virus and do some work in the lab. So you can get more predictive when you find a sequence. You’ve got this diversity. Now the logical progression for vaccines is, if you are going to develop a vaccine for SARS, people are going to use pandemic SARS, but let’s insert some of these other things and get a better vaccine.” The insertions he referred to perhaps included an element called the furin cleavage site, discussed below, which greatly increases viral infectivity for human cells.

In disjointed style, Dr. Daszak is referring to the fact that once you have generated a novel coronavirus that can attack human cells, you can take the spike protein and make it the basis for a vaccine.

One can only imagine Dr. Daszak’s reaction when he heard of the outbreak of the epidemic in Wuhan a few days later. He would have known better than anyone the Wuhan Institute’s goal of making bat coronaviruses infectious to humans, as well as the weaknesses in the institute’s defense against their own researchers becoming infected.

But instead of providing public health authorities with the plentiful information at his disposal, he immediately launched a public relations campaign to persuade the world that the epidemic couldn’t possibly have been caused by one of the institute’s souped-up viruses. “The idea that this virus escaped from a lab is just pure baloney. It’s simply not true,” he declared in an April 2020 interview.

The Safety Arrangements at the Wuhan Institute of Virology

Dr. Daszak was possibly unaware of, or perhaps he knew all too well, the long history of viruses escaping from even the best run laboratories. The smallpox virus escaped three times from labs in England in the 1960s and 1970s, causing 80 cases and 3 deaths. Dangerous viruses have leaked out of labs almost every year since. Coming to more recent times, the SARS1 virus has proved a true escape artist, leaking from laboratories in Singapore, Taiwan, and no less than four times from the Chinese National Institute of Virology in Beijing.

One reason for SARS1 being so hard to handle is that there were no vaccines available to protect laboratory workers. As Dr. Daszak mentioned in his 9 December 2019 interview quoted above, the Wuhan researchers too had been unable to develop vaccines against the coronaviruses they had designed to infect human cells. They would have been as defenseless against the SARS2 virus, if it were generated in their lab, as their Beijing colleagues were against SARS1.

A second reason for the severe danger of novel coronaviruses has to do with the required levels of lab safety. There are four degrees of safety, designated BSL1 to BSL4, with BSL4 being the most restrictive and designed for deadly pathogens like the Ebola virus.

The Wuhan Institute of Virology had a new BSL4 lab, but its state-of-readiness considerably alarmed the State Department inspectors who visited it from the Beijing embassy in 2018. “The new lab has a serious shortage of appropriately trained technicians and investigators needed to safely operate this high-containment laboratory,” the inspectors wrote in a cable of 19 January 2018.

The real problem, however, was not the unsafe state of the Wuhan BSL4 lab but the fact that virologists worldwide don’t like working in BSL4 conditions. You have to wear a space suit, do operations in closed cabinets and accept that everything will take twice as long. So, the rules assigning each kind of virus to a given safety level were laxer than some might think was prudent.

Before 2020, the rules followed by virologists in China and elsewhere required that experiments with the SARS1 and MERS viruses be conducted in BSL3 conditions. But all other bat coronaviruses could be studied in BSL2, the next level down. BSL2 requires taking fairly minimal safety precautions, such as wearing lab coats and gloves, not sucking up liquids in a pipette, and putting up biohazard warning signs. Yet a gain-of-function experiment conducted in BSL2 might produce an agent more infectious than either SARS1 or MERS. And if it did, then lab workers would stand a high chance of infection, especially if unvaccinated.

Much of Dr. Shi’s work on gain-of-function in coronaviruses was performed at the BSL2 safety level, as is stated in her publications and other documents. She has said in an interview with Science magazine that “The coronavirus research in our laboratory is conducted in BSL-2 or BSL-3 laboratories.”

“It is clear that some or all of this work was being performed using a biosafety standard — biosafety level 2, the biosafety level of a standard US dentist’s office — that would pose an unacceptably high risk of infection of laboratory staff upon contact with a virus having the transmission properties of SARS-CoV-2,” says Dr. Ebright.

“It also is clear,” he adds, “that this work never should have been funded and never should have been performed.”

This is a view he holds — regardless of whether or not the SARS2 virus ever saw the inside of a lab.

Concern about safety conditions at the Wuhan lab was not, it seems, misplaced. According to a fact sheet issued by the State Department on January 15, 2021, “The U.S. government has reason to believe that several researchers inside the WIV became sick in autumn 2019, before the first identified case of the outbreak, with symptoms consistent with both COVID-19 and common seasonal illnesses.”

David Asher, a fellow of the Hudson Institute and former consultant to the State Department, provided more detail about the incident at a seminar. Knowledge of the incident came from a mix of public information and “some high-end information collected by our intelligence community,” he said. Three people working at a BSL3 lab at the institute fell sick within a week of each other with severe symptoms that required hospitalization. This was “the first known cluster that we’re aware of, of victims of what we believe to be COVID-19.” Influenza could not completely be ruled out but seemed unlikely in the circumstances, he said.

Comparing the Rival Scenarios of SARS2 Origin

The evidence above adds up to a serious case that the SARS2 virus could have been created in a lab, from which it then escaped. But the case, however substantial, falls short of proof. Proof would consist of evidence from the Wuhan Institute of Virology, or related labs in Wuhan, that SARS2 or a predecessor virus was under development there. For lack of access to such records, another approach is to take certain salient facts about the SARS2 virus and ask how well each is explained by the two rival scenarios of origin, those of natural emergence and lab escape. Here are four tests of the two hypotheses. A couple have some technical detail, but these are among the most persuasive for those who may care to follow the argument.

1) The place of origin.

Start with geography. The two closest known relatives of the SARS2 virus were collected from bats living in caves in Yunnan, a province of southern China. If the SARS2 virus had first infected people living around the Yunnan caves, that would strongly support the idea that the virus had spilled over to people naturally. But this isn’t what happened. The pandemic broke out 1,500 kilometers away, in Wuhan.

Beta-coronaviruses, the family of bat viruses to which SARS2 belongs, infect the horseshoe bat Rhinolophus affinis, which ranges across southern China. The bats’ range is 50 kilometers, so it’s unlikely that any made it to Wuhan. In any case, the first cases of the Covid-19 pandemic probably occurred in September, when temperatures in Hubei province are already cold enough to send bats into hibernation.

What if the bat viruses infected some intermediate host first? You would need a longstanding population of bats in frequent proximity with an intermediate host, which in turn must often cross paths with people. All these exchanges of virus must take place somewhere outside Wuhan, a busy metropolis which so far as is known is not a natural habitat of Rhinolophus bat colonies. The infected person (or animal) carrying this highly transmissible virus must have traveled to Wuhan without infecting anyone else. No one in his or her family got sick. If the person jumped on a train to Wuhan, no fellow passengers fell ill.

It’s a stretch, in other words, to get the pandemic to break out naturally outside Wuhan and then, without leaving any trace, to make its first appearance there.

For the lab escape scenario, a Wuhan origin for the virus is a no-brainer. Wuhan is home to China’s leading center of coronavirus research where, as noted above, researchers were genetically engineering bat coronaviruses to attack human cells. They were doing so under the minimal safety conditions of a BSL2 lab. If a virus with the unexpected infectiousness of SARS2 had been generated there, its escape would be no surprise.

2) Natural history and evolution

The initial location of the pandemic is a small part of a larger problem, that of its natural history. Viruses don’t just make one-time jumps from one species to another. A coronavirus whose spike protein is adapted to attack bat cells needs repeated jumps into another species, most of which fail, before a lucky mutation arises. A mutation — a change in one of the virus’s RNA units — causes a different amino acid unit to be incorporated into the spike protein and may make the spike better able to attack the cells of some other species.

Through several more such mutation-driven adjustments, the virus adapts to its new host, say some animal with which bats are in frequent contact. The whole process then resumes as the virus moves from this intermediate host to people.

In the case of SARS1, researchers have documented the successive changes in its spike protein as the virus evolved step by step into a dangerous pathogen. After it had gotten from bats into civets, there were six further changes in its spike protein before it became a mild pathogen in people. After a further 14 changes, the virus was much better adapted to humans, and with a further four the epidemic took off.

But when you look for the fingerprints of a similar transition in SARS2, a strange surprise awaits. The virus has changed hardly at all, at least until recently. From its very first appearance, it was well adapted to human cells. Researchers led by Alina Chan of the Broad Institute compared SARS2 with late stage SARS1, which by then was well adapted to human cells, and found that the two viruses were similarly well adapted. “By the time SARS-CoV-2 was first detected in late 2019, it was already pre-adapted to human transmission to an extent similar to late epidemic SARS-CoV,” they wrote.

Even those who think lab origin unlikely agree that SARS2 genomes are remarkably uniform. Dr. Baric writes that “early strains identified in Wuhan, China, showed limited genetic diversity, which suggests that the virus may have been introduced from a single source.”

A single source would of course be compatible with lab escape, less so with the massive variation and selection which is evolution’s hallmark way of doing business.

The uniform structure of SARS2 genomes gives no hint of any passage through an intermediate animal host, and no such host has been identified in nature.

Proponents of natural emergence suggest that SARS2 incubated in a yet-to-be found human population before gaining its special properties. Or that it jumped to a host animal outside China.

All these conjectures are possible, but strained. Proponents of lab leak have a simpler explanation. SARS2 was adapted to human cells from the start because it was grown in humanized mice or in lab cultures of human cells, just as described in Dr. Daszak’s grant proposal. Its genome shows little diversity because the hallmark of lab cultures is uniformity.

Proponents of laboratory escape joke that of course the SARS2 virus infected an intermediary host species before spreading to people, and that they have identified it — a humanized mouse from the Wuhan Institute of Virology.

3) The furin cleavage site.

The furin cleavage site is a minute part of the virus’s anatomy but one that exerts great influence on its infectivity. It sits in the middle of the SARS2 spike protein. It also lies at the heart of the puzzle of where the virus came from.

The spike protein has two sub-units with different roles. The first, called S1, recognizes the virus’s target, a protein called angiotensin converting enzyme-2 (or ACE2) which studs the surface of cells lining the human airways. The second, S2, helps the virus, once anchored to the cell, to fuse with the cell’s membrane. After the virus’s outer membrane has coalesced with that of the stricken cell, the viral genome is injected into the cell, hijacks its protein-making machinery and forces it to generate new viruses.

But this invasion cannot begin until the S1 and S2 subunits have been cut apart. And there, right at the S1/S2 junction, is the furin cleavage site that ensures the spike protein will be cleaved in exactly the right place.

The virus, a model of economic design, does not carry its own cleaver. It relies on the cell to do the cleaving for it. Human cells have a protein cutting tool on their surface known as furin. Furin will cut any protein chain that carries its signature target cutting site. This is the sequence of amino acid units, proline-arginine-arginine-alanine, or PRRA in the code that refers to each amino acid by a letter of the alphabet. PRRA is the amino acid sequence at the core of SARS2’s furin cleavage site.

Viruses have all kinds of clever tricks, so why does the furin cleavage site stand out? Because of all known SARS-related beta-coronaviruses, only SARS2 possesses a furin cleavage site. All the other viruses have their S2 unit cleaved at a different site and by a different mechanism.

How then did SARS2 acquire its furin cleavage site? Either the site evolved naturally, or it was inserted by researchers at the S1/S2 junction in a gain-of-function experiment.

Consider natural origin first. Two ways viruses evolve are by mutation and by recombination. Mutation is the process of random change in DNA (or RNA for coronaviruses) that usually results in one amino acid in a protein chain being switched for another. Many of these changes harm the virus but natural selection retains the few that do something useful. Mutation is the process by which the SARS1 spike protein gradually switched its preferred target cells from those of bats to civets, and then to humans.

Mutation seems a less likely way for SARS2’s furin cleavage site to be generated, even though it can’t completely be ruled out. The site’s four amino acid units are all together, and all at just the right place in the S1/S2 junction. Mutation is a random process triggered by copying errors (when new viral genomes are being generated) or by chemical decay of genomic units. So it typically affects single amino acids at different spots in a protein chain. A string of amino acids like that of the furin cleavage site is much more likely to be acquired all together through a quite different process known as recombination.

Recombination is an inadvertent swapping of genomic material that occurs when two viruses happen to invade the same cell, and their progeny are assembled with bits and pieces of RNA belonging to the other. Beta-coronaviruses will only combine with other beta-coronaviruses but can acquire, by recombination, almost any genetic element present in the collective genomic pool. What they cannot acquire is an element the pool does not possess. And no known SARS-related beta-coronavirus, the class to which SARS2 belongs, possesses a furin cleavage site.

Proponents of natural emergence say SARS2 could have picked up the site from some as yet unknown beta-coronavirus. But bat SARS-related beta-coronaviruses evidently don’t need a furin cleavage site to infect bat cells, so there’s no great likelihood that any in fact possesses one, and indeed none has been found so far.

The proponents’ next argument is that SARS2 acquired its furin cleavage site from people. A predecessor of SARS2 could have been circulating in the human population for months or years until at some point it acquired a furin cleavage site from human cells. It would then have been ready to break out as a pandemic.

If this is what happened, there should be traces in hospital surveillance records of the people infected by the slowly evolving virus. But none has so far come to light. According to the WHO report on the origins of the virus, the sentinel hospitals in Hubei province, home of Wuhan, routinely monitor influenza-like illnesses and “no evidence to suggest substantial SARS-CoV-2 transmission in the months preceding the outbreak in December was observed.”

So — it’s hard to explain how the SARS2 virus picked up its furin cleavage site naturally, whether by mutation or recombination.

That leaves a gain-of-function experiment. For those who think SARS2 may have escaped from a lab, explaining the furin cleavage site is no problem at all. “Since 1992 the virology community has known that the one sure way to make a virus deadlier is to give it a furin cleavage site at the S1/S2 junction in the laboratory,” writes Dr. Steven Quay, a biotech entrepreneur interested in the origins of SARS2. “At least eleven gain-of-function experiments, adding a furin site to make a virus more infective, are published in the open literature, including [by] Dr. Zhengli Shi, head of coronavirus research at the Wuhan Institute of Virology.”

4) A Question of Codons

There’s another aspect of the furin cleavage site that narrows the path for a natural emergence origin even further.

As everyone knows (or may at least recall from high school), the genetic code uses three units of DNA to specify each amino acid unit of a protein chain. When read in groups of three, the four different kinds of DNA unit can specify 4 x 4 x 4, or 64, different triplets, or codons as they are called. Since there are only 20 kinds of amino acid, there are more than enough codons to go around, allowing some amino acids to be specified by more than one codon. The amino acid arginine, for instance, can be designated by any of the six codons CGU, CGC, CGA, CGG, AGA or AGG, where A, U, G and C stand for the four different kinds of unit in RNA. (RNA uses U where DNA uses T, so the same codons appear as CGT, CGC and so on when written in DNA letters.)
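The codon arithmetic above is easy to verify. Here is a minimal illustrative sketch in Python (the set of arginine codons is simply the six listed in the text, not a full codon table):

```python
from itertools import product

# Four RNA bases read in groups of three give 4**3 = 64 possible codons.
bases = "ACGU"
codons = ["".join(triplet) for triplet in product(bases, repeat=3)]
print(len(codons))  # 64

# With only 20 amino acids to encode, the code is redundant:
# arginine alone is specified by six different codons.
arginine_codons = {"CGU", "CGC", "CGA", "CGG", "AGA", "AGG"}
print(len(arginine_codons))  # 6
```

This redundancy is what makes codon preference possible: two genomes can encode the identical protein while using very different codons to do it.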

Here’s where it gets interesting. Different organisms have different codon preferences. Human cells like to designate arginine with the codons CGT, CGC or CGG. But CGG is coronavirus’s least popular codon for arginine. Keep that in mind when looking at how the amino acids in the furin cleavage site are encoded in the SARS2 genome.

Now the functional reason why SARS2 has a furin cleavage site, and its cousin viruses don’t, can be seen by lining up (in a computer) the string of nearly 30,000 nucleotides in its genome with those of its cousin coronaviruses, of which the closest so far known is one called RaTG13. Compared with RaTG13, SARS2 has a 12-nucleotide insert right at the S1/S2 junction. The insert is the sequence T-CCT-CGG-CGG-GC. The CCT codes for proline, the two CGG’s for two arginines, and the GC is the beginning of a GCA codon that codes for alanine.
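Reading off the insert’s codons can be sketched in a few lines of Python. This is illustrative only: the tiny codon table below contains just the three codons needed, and, as the text describes, the leading T completes the codon before the insert while the trailing GC is completed to the alanine codon GCA by the next nucleotide of the genome.

```python
# Minimal codon table: only the codons relevant to the PRRA insert.
codon_table = {"CCT": "P", "CGG": "R", "GCA": "A"}

insert = "TCCTCGGCGGGC"  # the 12-nucleotide insert, in DNA letters
# Drop the leading T (it completes the preceding codon) and append the
# next genomic base to complete the final alanine codon GCA.
frame = insert[1:] + "A"
codons = [frame[i:i + 3] for i in range(0, len(frame), 3)]
print(codons)                                   # ['CCT', 'CGG', 'CGG', 'GCA']
print("".join(codon_table[c] for c in codons))  # PRRA
```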

There are several curious features about this insert, but the oddest is the two side-by-side CGG codons. Only 5% of SARS2’s arginine codons are CGG, and the double codon CGG-CGG has not been found in any other beta-coronavirus. So how did SARS2 acquire a pair of arginine codons that are favored by human cells but not by coronaviruses?

Proponents of natural emergence have an uphill task to explain all the features of SARS2’s furin cleavage site. They have to postulate a recombination event at a site on the virus’s genome where recombinations are rare, and the insertion of a 12-nucleotide sequence with a double arginine codon unknown in the beta-coronavirus repertoire, at the only site in the genome that would significantly expand the virus’s infectivity.

“Yes, but your wording makes this sound unlikely — viruses are specialists at unusual events,” is the riposte of David L. Robertson, a virologist at the University of Glasgow who regards lab escape as a conspiracy theory. “Recombination is naturally very, very frequent in these viruses, there are recombination breakpoints in the spike protein and these codons appear unusual exactly because we’ve not sampled enough.”

Dr. Robertson is correct that evolution is always producing results that may seem unlikely but in fact are not. Viruses can generate untold numbers of variants, but we see only the one-in-a-billion that natural selection picks for survival. But this argument could be pushed too far. For instance, any result of a gain-of-function experiment could be explained as one that evolution would have arrived at in time. And the numbers game can be played the other way. For the furin cleavage site to arise naturally in SARS2, a chain of events has to happen, each of which is quite unlikely for the reasons given above. A long chain with several improbable steps is unlikely to ever be completed.
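The “long chain” point is simple probability multiplication. A toy calculation (the step probabilities here are invented for illustration, not estimates of any real event):

```python
from fractions import Fraction

# Suppose each of five independent steps has a 1-in-10 chance of
# occurring. The chance that the whole chain completes is the product
# of the individual probabilities.
step = Fraction(1, 10)
chain = Fraction(1)
for _ in range(5):
    chain *= step
print(chain)  # 1/100000
```

The general point: even moderately unlikely steps, if several must all occur in sequence, multiply out to a very small joint probability.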

For the lab escape scenario, the double CGG codon is no surprise. The human-preferred codon is routinely used in labs. So, anyone who wanted to insert a furin cleavage site into the virus’s genome would synthesize the PRRA-making sequence in the lab and would be likely to use CGG codons to do so.

“When I first saw the furin cleavage site in the viral sequence, with its arginine codons, I said to my wife it was the smoking gun for the origin of the virus,” said David Baltimore, an eminent virologist and former president of Caltech. “These features make a powerful challenge to the idea of a natural origin for SARS2,” he said.

A Third Scenario of Origin

There’s a variation on the natural emergence scenario that’s worth considering. This is the idea that SARS2 jumped directly from bats to humans, without going through an intermediate host as SARS1 and MERS did. A leading advocate is the virologist David Robertson who notes that SARS2 can attack several other species besides humans. He believes the virus evolved a generalist capability while still in bats. Because the bats that it infects are widely distributed in southern and central China, the virus had ample opportunity to jump to people, even though it seems to have done so on only one known occasion. Dr. Robertson’s thesis explains why no one has so far found a trace of SARS2 in any intermediate host or in human populations surveilled before December 2019. It would also explain the puzzling fact that SARS2 has not changed since it first appeared in humans — it didn’t need to because it could already attack human cells efficiently.

One problem with this idea, though, is that if SARS2 jumped from bats to people in a single leap and hasn’t changed much since, it should still be good at infecting bats. And it seems it isn’t.

“Tested bat species are poorly infected by SARS-CoV-2 and they are therefore unlikely to be the direct source for human infection,” write a scientific group skeptical of natural emergence.

Still, Dr. Robertson may be onto something. The bat coronaviruses of the Yunnan caves can infect people directly. In April 2012 six miners clearing bat guano from the Mojiang mine contracted severe pneumonia with Covid-19-like symptoms and three eventually died. A virus isolated from the Mojiang mine, called RaTG13, is still the closest known relative of SARS2. Much mystery surrounds the origin, reporting and strangely low affinity of RaTG13 for bat cells, as well as the nature of 8 similar viruses that Dr. Shi reports she collected at the same time but has not yet published despite their great relevance to the ancestry of SARS2. But all that is a story for another time. The point here is that bat viruses can infect people directly, though only in special conditions.

So who else, besides miners excavating bat guano, comes into particularly close contact with bat coronaviruses? Well, coronavirus researchers do. Dr. Shi says she and her group collected more than 1,300 bat samples during some 8 visits to the Mojiang cave between 2012 and 2015, and there were doubtless many expeditions to other Yunnan caves.

Imagine the researchers making frequent trips from Wuhan to Yunnan and back, stirring up bat guano in dark caves and mines, and now you begin to see a possible missing link between the two places. Researchers could have gotten infected during their collecting trips, or while working with the new viruses at the Wuhan Institute of Virology. The virus that escaped from the lab would have been a natural virus, not one cooked up by gain of function.

The direct-from-bats thesis is a chimera between the natural emergence and lab escape scenarios. It’s a possibility that can’t be dismissed. But against it are the facts that 1) both SARS2 and RaTG13 seem to have only feeble affinity for bat cells, so one can’t be fully confident that either ever saw the inside of a bat; and 2) the theory is no better than the natural emergence scenario at explaining how SARS2 gained its furin cleavage site, or why the furin cleavage site is determined by human-preferred arginine codons instead of by the bat-preferred codons.

Where We Are So Far

Neither the natural emergence nor the lab escape hypothesis can yet be ruled out. There is still no direct evidence for either. So, no definitive conclusion can be reached.

That said, the available evidence leans more strongly in one direction than the other. Readers will form their own opinion. But it seems to me that proponents of lab escape can explain all the available facts about SARS2 considerably more easily than can those who favor natural emergence.

It’s documented that researchers at the Wuhan Institute of Virology were doing gain-of-function experiments designed to make coronaviruses infect human cells and humanized mice. This is exactly the kind of experiment from which a SARS2-like virus could have emerged. The researchers were not vaccinated against the viruses under study, and they were working in the minimal safety conditions of a BSL2 laboratory. So, escape of a virus would not be at all surprising. In all of China, the pandemic broke out on the doorstep of the Wuhan institute. The virus was already well adapted to humans, as expected for a virus grown in humanized mice. It possessed an unusual enhancement, a furin cleavage site, which is not possessed by any other known SARS-related beta-coronavirus, and this site included a double arginine codon also unknown among beta-coronaviruses. What more evidence could you want, aside from the presently unobtainable lab records documenting SARS2’s creation?

Proponents of natural emergence have a rather harder story to tell. The plausibility of their case rests on a single surmise, the expected parallel between the emergence of SARS2 and that of SARS1 and MERS. But none of the evidence expected in support of such a parallel history has yet emerged. No one has found the bat population that was the source of SARS2, if indeed it ever infected bats. No intermediate host has presented itself, despite an intensive search by Chinese authorities that included the testing of 80,000 animals. There is no evidence of the virus making multiple independent jumps from its intermediate host to people, as both the SARS1 and MERS viruses did. There is no evidence from hospital surveillance records of the epidemic gathering strength in the population as the virus evolved. There is no explanation of why a natural epidemic should break out in Wuhan and nowhere else. There is no good explanation of how the virus acquired its furin cleavage site, which no other SARS-related beta-coronavirus possesses, nor why the site is composed of human-preferred codons. The natural emergence theory battles a bristling array of implausibilities.

The records of the Wuhan Institute of Virology certainly hold much relevant information. But Chinese authorities seem unlikely to release them — given the substantial chance that they incriminate the regime in the creation of the pandemic. Absent the efforts of some courageous Chinese whistle-blower, we may already have at hand just about all of the relevant information we are likely to get for a while.

So, it’s worth trying to assess responsibility for the pandemic, at least in a provisional way, because the paramount goal remains to prevent another one. Even those who aren’t persuaded that lab escape is the more likely origin of the SARS2 virus may see reason for concern about the present state of regulation governing gain-of-function research. There are two obvious levels of responsibility: the first, for allowing virologists to perform gain-of-function experiments, offering minimal gain and vast risk; the second, if indeed SARS2 was generated in a lab, for allowing the virus to escape and unleash a world-wide pandemic. Here are the players who seem most likely to deserve blame:

1. Chinese virologists

First and foremost, Chinese virologists are to blame for performing gain-of-function experiments in mostly BSL2-level safety conditions which were far too lax to contain a virus of unexpected infectiousness like SARS2. If the virus did indeed escape from their lab, they deserve the world’s censure for a foreseeable accident that has already caused the deaths of 3 million people.

True, Dr. Shi was trained by French virologists, worked closely with American virologists and was following international rules for the containment of coronaviruses. But she could and should have made her own assessment of the risks she was running. She and her colleagues bear the responsibility for their actions.

I have been using the Wuhan Institute of Virology as a shorthand for all virological activities in Wuhan. It’s possible that SARS2 was generated in some other Wuhan lab, perhaps in an attempt to make a vaccine that worked against all coronaviruses. But until the role of other Chinese virologists is clarified, Dr. Shi is the public face of Chinese work on coronaviruses, and provisionally she and her colleagues will stand first in line for opprobrium.

2. Chinese authorities

China’s central authorities did not generate SARS2 but they sure did their utmost to conceal the nature of the tragedy and China’s responsibility for it. They suppressed all records at the Wuhan Institute of Virology and closed down its virus databases. They released a trickle of information, much of which may have been outright false or designed to misdirect and mislead. They did their best to manipulate the WHO’s inquiry into the virus’s origins, and led the commission’s members on a fruitless run-around. So far they have proved far more interested in deflecting blame than in taking the steps necessary to prevent a second pandemic.

3. The worldwide community of virologists

Virologists around the world are a loose-knit professional community. They write articles in the same journals. They attend the same conferences. They have common interests in seeking funds from governments and in not being overburdened with safety regulations.

Virologists knew better than anyone the dangers of gain-of-function research. But the power to create new viruses, and the research funding obtainable by doing so, was too tempting. They pushed ahead with gain-of-function experiments. They lobbied against the moratorium imposed on Federal funding for gain-of-function research in 2014, and it was lifted in 2017.

The benefits of the research in preventing future epidemics have so far been nil, the risks vast. If research on the SARS1 and MERS viruses could only be done at the BSL3 safety level, it was surely illogical to allow any work with novel coronaviruses at the lesser level of BSL2. Whether or not SARS2 escaped from a lab, virologists around the world have been playing with fire.

Their behavior has long alarmed other biologists. In 2014 scientists calling themselves the Cambridge Working Group urged caution on creating new viruses. In prescient words, they specified the risk of creating a SARS2-like virus. “Accident risks with newly created ‘potential pandemic pathogens’ raise grave new concerns,” they wrote. “Laboratory creation of highly transmissible, novel strains of dangerous viruses, especially but not limited to influenza, poses substantially increased risks. An accidental infection in such a setting could trigger outbreaks that would be difficult or impossible to control.”

When molecular biologists discovered a technique for moving genes from one organism to another, they held a public conference at Asilomar in 1975 to discuss the possible risks. Despite much internal opposition, they drew up a list of stringent safety measures that could be relaxed in future — and duly were — when the possible hazards had been better assessed.

When the CRISPR technique for editing genes was invented, biologists convened a joint report by the U.S., UK and Chinese national academies of science to urge restraint on making heritable changes to the human genome. Biologists who invented gene drives have also been open about the dangers of their work and have sought to involve the public.

You might think the SARS2 pandemic would spur virologists to re-evaluate the benefits of gain-of-function research, even to engage the public in their deliberations. But no. Many virologists deride lab escape as a conspiracy theory and others say nothing. They have barricaded themselves behind a Chinese wall of silence — which so far is working well to allay, or at least postpone, journalists’ curiosity and the public’s wrath. Professions that cannot regulate themselves deserve to get regulated by others, and this would seem to be the future that virologists are choosing for themselves.

4. The US Role in Funding the Wuhan Institute of Virology

From June 2014 to May 2019 Dr. Daszak’s EcoHealth Alliance had a grant from the National Institute of Allergy and Infectious Diseases (NIAID), part of the National Institutes of Health, to do gain-of-function research with coronaviruses at the Wuhan Institute of Virology. Whether or not SARS2 is the product of that research, it seems a questionable policy to farm out high-risk research to unsafe foreign labs using minimal safety precautions. And if the SARS2 virus did indeed escape from the Wuhan institute, then the NIH will find itself in the terrible position of having funded a disastrous experiment that led to the deaths of more than 3 million people worldwide, including more than half a million Americans.

The responsibility of the NIAID and NIH is even more acute because for the first three years of the grant to EcoHealth Alliance there was a moratorium on funding gain-of-function research. Why, then, didn’t the two agencies halt the Federal funding, as the law apparently required them to? Because someone wrote a loophole into the moratorium.

The moratorium specifically barred funding any gain-of-function research that increased the pathogenicity of the flu, MERS or SARS viruses. But then a footnote on p.2 of the moratorium document states that “An exception from the research pause may be obtained if the head of the USG funding agency determines that the research is urgently necessary to protect the public health or national security.”

This seems to mean that either the director of the NIAID, Dr. Anthony Fauci, or the director of the NIH, Dr. Francis Collins, or maybe both, would have invoked the footnote in order to keep the money flowing to Dr. Shi’s gain-of-function research.

“Unfortunately, the NIAID Director and the NIH Director exploited this loophole to issue exemptions to projects subject to the Pause — preposterously asserting the exempted research was ‘urgently necessary to protect public health or national security’ — thereby nullifying the Pause,” Dr. Richard Ebright said in an interview with Independent Science News.

When the moratorium was ended in 2017 it didn’t just vanish but was replaced by a reporting system, the Potential Pandemic Pathogens Control and Oversight (P3CO) Framework, which required agencies to report for review any dangerous gain-of-function work they wished to fund.

According to Dr. Ebright, both Dr. Collins and Dr. Fauci “have declined to flag and forward proposals for risk-benefit review, thereby nullifying the P3CO Framework.”

In his view, the two officials, in dealing with the moratorium and the ensuing reporting system, “have systematically thwarted efforts by the White House, the Congress, scientists, and science policy specialists to regulate GoF [gain-of-function] research of concern.”

Possibly the two officials had to take into account matters not evident in the public record, such as issues of national security. Perhaps funding the Wuhan Institute of Virology, which is believed to have ties with Chinese military virologists, provided a window into Chinese biowarfare research. But whatever other considerations may have been involved, the bottom line is that the National Institutes of Health was supporting gain-of-function research, of a kind that could have generated the SARS2 virus, in an unsupervised foreign lab that was doing work in BSL2 biosafety conditions. The prudence of this decision can be questioned, whether or not SARS2 and the deaths of 3 million people were the result of it.

In Conclusion

If the case that SARS2 originated in a lab is so substantial, why isn’t this more widely known? As may now be obvious, there are many people who have reason not to talk about it. The list is led, of course, by the Chinese authorities. But virologists in the United States and Europe have no great interest in igniting a public debate about the gain-of-function experiments that their community has been pursuing for years.

Nor have other scientists stepped forward to raise the issue. Government research funds are distributed on the advice of committees of scientific experts drawn from universities. Anyone who rocks the boat by raising awkward political issues runs the risk that their grant will not be renewed and their research career will be ended. Maybe good behavior is rewarded with the many perks that slosh around the distribution system. And if you thought that Dr. Andersen and Dr. Daszak might have blotted their reputation for scientific objectivity after their partisan attacks on the lab escape scenario, look at the 2nd and 3rd names on this list of recipients of an $82 million grant announced by the National Institute of Allergy and Infectious Diseases in August 2020.

The US government shares a strange common interest with the Chinese authorities: neither is keen on drawing attention to the fact that Dr. Shi’s coronavirus work was funded by the US National Institutes of Health. One can imagine the behind-the-scenes conversation in which the Chinese government says “If this research was so dangerous, why did you fund it, and on our territory too?” To which the US side might reply, “Looks like it was you who let it escape. But do we really need to have this discussion in public?”

Dr. Fauci is a longtime public servant who served with integrity under President Trump and has resumed leadership in the Biden Administration in handling the Covid epidemic. Congress, no doubt understandably, may have little appetite for hauling him over the coals for the apparent lapse of judgment in funding gain-of-function research in Wuhan.

To these serried walls of silence must be added that of the mainstream media. To my knowledge, no major newspaper or television network has yet provided readers with an in-depth news story of the lab escape scenario, such as the one you have just read, although some have run brief editorials or opinion pieces. One might think that any plausible origin of a virus that has killed three million people would merit a serious investigation. Or that the wisdom of continuing gain-of-function research, regardless of the virus’s origin, would be worth some probing. Or that the funding of gain-of-function research by the NIH and NIAID during a moratorium on such research would bear investigation. What accounts for the media’s apparent lack of curiosity?

The virologists’ omertà is one reason. Science reporters, unlike political reporters, have little innate skepticism of their sources’ motives; most see their role largely as purveying the wisdom of scientists to the unwashed masses. So, when their sources won’t help, these journalists are at a loss.

Another reason, perhaps, is the migration of much of the media toward the left of the political spectrum. Because President Trump said the virus had escaped from a Wuhan lab, editors gave the idea little credence. They joined the virologists in regarding lab escape as a dismissible conspiracy theory. During the Trump Administration, they had no trouble in rejecting the position of the intelligence services that lab escape could not be ruled out. But when Avril Haines, President Biden’s director of National Intelligence, said the same thing, she too was largely ignored. This is not to argue that editors should have endorsed the lab escape scenario, merely that they should have explored the possibility fully and fairly.

People round the world who have been pretty much confined to their homes for the last year might like a better answer than their media are giving them. Perhaps one will emerge in time. After all, the more months pass without the natural emergence theory gaining a shred of supporting evidence, the less plausible it may seem. Perhaps the international community of virologists will come to be seen as a false and self-interested guide. The common-sense perception that a pandemic breaking out in Wuhan might have something to do with a Wuhan lab cooking up novel viruses of maximal danger in unsafe conditions could eventually displace the ideological insistence that whatever Trump said can’t be true.

And then let the reckoning begin.

Nicholas Wade

April 30, 2021


The first person to take a serious look at the origins of the SARS2 virus was Yuri Deigin, a biotech entrepreneur in Russia and Canada. In a long and brilliant essay, he dissected the molecular biology of the SARS2 virus and raised, without endorsing, the possibility that it had been manipulated. The essay, published on April 22, 2020, provided a roadmap for anyone seeking to understand the virus’s origins. Deigin packed so much information and analysis into his essay that some have doubted it could be the work of a single individual and suggested some intelligence agency must have authored it. But the essay is written with greater lightness and humor than I suspect are ever found in CIA or KGB reports, and I see no reason to doubt that Dr. Deigin is its very capable sole author.

In Deigin’s wake have followed several other skeptics of the virologists’ orthodoxy. Nikolai Petrovsky calculated how tightly the SARS2 virus binds to the ACE2 receptors of various species and found to his surprise that it seemed optimized for the human receptor, leading him to infer the virus might have been generated in a laboratory. Alina Chan published a paper showing that SARS2 from its first appearance was very well adapted to human cells.

One of the very few establishment scientists to have questioned the virologists’ absolute rejection of lab escape is Richard Ebright, who has long warned against the dangers of gain-of-function research. Another is David A. Relman of Stanford University. “Even though strong opinions abound, none of these scenarios can be confidently ruled in or ruled out with currently available facts,” he wrote. Kudos too to Robert Redfield, former director of the Centers for Disease Control and Prevention, who told CNN on March 26, 2021 that the “most likely” cause of the epidemic was “from a laboratory,” because he doubted that a bat virus could become an extreme human pathogen overnight, without taking time to evolve, as seemed to be the case with SARS2.

Steven Quay, a physician-researcher, has applied statistical and bioinformatic tools to ingenious explorations of the virus’s origin, showing for instance how the hospitals receiving the early patients are clustered along the Wuhan №2 subway line, which connects the Institute of Virology at one end with the international airport at the other — the perfect conveyor belt for distributing the virus from lab to globe.

In June 2020 Milton Leitenberg published an early survey of the evidence favoring lab escape from gain-of-function research at the Wuhan Institute of Virology.

Many others have contributed significant pieces of the puzzle. “Truth is the daughter,” said Francis Bacon, “not of authority but time.” The efforts of people such as those named above are what make it so.


Posted in Center for Environmental Genetics | Comments Off on Origin of SARS-CoV-2 Virus — Following the Clues

HGNC 2021 Spring Newsletter

New HGNC search application is live!

Readers of the Winter newsletter will remember being asked to test the beta version of our improved search. Thanks so much to all who did this and provided us with feedback. On April 1st we switched over to this new search on genenames.org. We have been enjoying the improved search ever since and we hope you have too! We are always happy to receive your feedback, both positive and negative…
MANE transcripts now on genenames.org

You can now find MANE (Matched Annotation from NCBI and EMBL-EBI) Select transcripts on our Symbol Reports and in our REST service. The MANE project aims to provide a set of standard transcripts for human protein-coding genes annotated by both RefSeq and Havana-Ensembl, for which there has been agreement between annotators from both teams on the entire sequence, including the 5’ UTR, coding region and 3’ UTR. Please read our recent guest blog post, ‘Transcripts are the MANE attraction’ by Jane Loveland of the Havana-Ensembl team, to learn more. For an example, see the MANE Select transcript shown in the ‘Nucleotide Resources’ section of our MTOR Symbol Report.

Note that MANE transcripts have both RefSeq and Ensembl IDs, and these are versioned, i.e. the MTOR gene report shows both the RefSeq ID NM_004958.4 and the Ensembl ID ENST00000361445.9. These IDs link through to transcript pages in NCBI Gene and Ensembl, respectively.
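Splitting such a versioned ID into its stable accession and version number is a common first step when cross-referencing annotation sets. The following is an illustrative sketch only — the helper name is ours, not part of any HGNC or MANE tool:

```python
def split_versioned_id(versioned_id):
    """Split a versioned transcript ID (e.g. 'NM_004958.4') into
    (accession, version). RefSeq and Ensembl transcript IDs both use
    a trailing '.' to separate the stable accession from its version."""
    accession, _, version = versioned_id.rpartition(".")
    if not accession or not version.isdigit():
        raise ValueError(f"not a versioned ID: {versioned_id!r}")
    return accession, int(version)

# The MANE Select transcript IDs quoted above for MTOR:
print(split_versioned_id("NM_004958.4"))        # ('NM_004958', 4)
print(split_versioned_id("ENST00000361445.9"))  # ('ENST00000361445', 9)
```

Comparing only the accession parts lets you match transcripts across releases, while the version numbers tell you whether the underlying sequence has been revised.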
All about HCOP

We are delighted to announce that we recently published a paper in Briefings in Bioinformatics, ‘Updates to HCOP: the HGNC comparison of orthology predictions tool’, describing the current version of our HCOP tool (for the full citation, please see the ‘Publications’ section of this newsletter).

The original version of HCOP (HGNC Comparison of Orthology Predictions) was created nearly 20 years ago and initially collated orthology calls between human and mouse from a number of orthology prediction resources, so that the HGNC and the Mouse Genomic Nomenclature Committee could identify cases where gene nomenclature could be aligned between the two species. Two decades later, the current tool presents orthology predictions between human and the following 19 species: chimp, rhesus macaque, mouse, rat, dog, cat, horse, cow, pig, opossum, platypus, chicken, anole lizard, xenopus, zebrafish, Caenorhabditis elegans, fruit fly, Saccharomyces cerevisiae and Schizosaccharomyces pombe, using 14 separate orthology prediction resources (please see Table 1 in the new paper for the full resource list). As many of our readers will know, we still use HCOP to align nomenclature; we now use the tool to auto-approve appropriate gene nomenclature for each VGNC full species gene set (chimp, rhesus macaque, dog, cat, cow, horse and pig). We have a software pipeline that searches the HCOP data to identify high-confidence ortholog sets between human and each VGNC species, as predicted by Ensembl, NCBI Gene, OMA and PANTHER. These VGNC genes are then auto-assigned the same gene symbol as their human ortholog, provided the symbols pass rules devised by curators to ensure that the human nomenclature is suitable for transfer across species. Genes not identified in this pipeline are not given auto-approved symbols and need to be assigned nomenclature manually by a curator – a huge task which we are still working towards for each core VGNC species!
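The unanimity requirement at the heart of that pipeline can be sketched in a few lines. This is an illustrative sketch, not HGNC’s actual pipeline code; the resource names come from the paragraph above, but the data layout and function name are our assumptions:

```python
# Auto-approve a VGNC symbol only when Ensembl, NCBI Gene, OMA and
# PANTHER all agree on the same human ortholog, mirroring the
# "high confidence" rule described above.
REQUIRED_RESOURCES = {"Ensembl", "NCBI Gene", "OMA", "PANTHER"}

def high_confidence_ortholog(predictions):
    """predictions maps a resource name to its predicted human ortholog
    symbol (or None). Returns the agreed human symbol if the four
    required resources are unanimous, otherwise None."""
    calls = {predictions.get(r) for r in REQUIRED_RESOURCES}
    if len(calls) == 1:
        (symbol,) = calls
        return symbol  # may itself be None if no resource made a call
    return None

# A dog gene whose four required calls all point to human LOXHD1:
dog_gene = {"Ensembl": "LOXHD1", "NCBI Gene": "LOXHD1",
            "OMA": "LOXHD1", "PANTHER": "LOXHD1"}
print(high_confidence_ortholog(dog_gene))  # LOXHD1

# Any disagreement (or missing call) blocks auto-approval:
ambiguous = {"Ensembl": "GENE1", "NCBI Gene": "GENE2",
             "OMA": "GENE1", "PANTHER": "GENE1"}
print(high_confidence_ortholog(ambiguous))  # None
```

Genes that fail this check fall through to manual curation, which is why the unanimity rule can be strict: it only has to be right when it fires.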

In addition to orthology calls, HCOP displays approved nomenclature from HGNC, VGNC, MGNC (mouse gene nomenclature from MGI), Rat Genome Database (RGD), Chicken Gene Nomenclature Consortium (CGNC), Xenbase, ZFIN, WormBase, Saccharomyces Genome Database (SGD) and PomBase.

One of the biggest strengths of HCOP is that it is updated daily, so the data are always as current as possible. For example, it includes the latest data from Ensembl, OMA and PANTHER, which have all released new ortholog sets in the last few weeks. Please read the full paper to learn more!
Updates to placeholder symbols

The HGNC continues to update placeholder symbols whenever new data becomes available. In the past few months we have updated the following genes based on discussions between an HGNC curator and researchers working on the gene:

C8orf37 -> CFAP418, cilia and flagella associated protein 418
FAM155A -> NALF1, NALCN channel auxiliary factor 1
FAM155B -> NALF2, NALCN channel auxiliary factor 2

The following genes have been renamed following updates to their annotation models, resulting in a change in locus type from protein coding to long non-coding RNA:

C17orf77 -> CD300LD-AS1, CD300LD antisense RNA 1
C9orf147 -> HSDL2-AS1, HSDL2 antisense RNA 1
C9orf106 -> LINC02913, long intergenic non-protein coding RNA 2913
C14orf177 -> LINC02914, long intergenic non-protein coding RNA 2914
C15orf54 -> LINC02915, long intergenic non-protein coding RNA 2915
C11orf72 -> NDUFV1-DT, NDUFV1 divergent transcript
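Researchers holding older annotation files may want to map these retired placeholder symbols to the new approved ones. A hypothetical helper (the mapping below is taken from the lists above; the function itself is ours):

```python
# Retired placeholder symbol -> current approved HGNC symbol,
# as listed in the updates above.
RENAMES = {
    "C8orf37": "CFAP418", "FAM155A": "NALF1", "FAM155B": "NALF2",
    "C17orf77": "CD300LD-AS1", "C9orf147": "HSDL2-AS1",
    "C9orf106": "LINC02913", "C14orf177": "LINC02914",
    "C15orf54": "LINC02915", "C11orf72": "NDUFV1-DT",
}

def modernize(symbols):
    """Replace any retired placeholder symbol with its current approved
    symbol; symbols not in the rename table pass through unchanged."""
    return [RENAMES.get(s, s) for s in symbols]

print(modernize(["FAM155A", "TP53", "C11orf72"]))
# ['NALF1', 'TP53', 'NDUFV1-DT']
```

For production use, the authoritative source is of course the current HGNC download files rather than a hand-maintained table like this one.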

New gene groups

Here are some examples of new gene groups that we have made within the last couple of months:

Methylcrotonyl-CoA carboxylase subunits (MCCC)
Transcription factor AP-2 family (TFAP2)
Tet methylcytosine dioxygenase family (TET)
Adducin family (ADD)
TNRC6 adaptor family (TNRC6)
PARN exonuclease family
TLDc domain containing (TLDC)

Gene Symbols in the News

Babies born with spinal muscular atrophy in the UK can now benefit from gene therapy with an active copy of the gene SMN1. This therapy has been heralded as the ‘most expensive’ drug treatment ever approved and has already been available in other countries, such as the USA. It requires just a single treatment early on in life.

In cancer-related news, the cell surface-expressed gene LRRN4CL has been identified as a biomarker for melanoma following a CRISPR activation screen. There is hope that a drug could be developed in the future to target the LRRN4CL protein in melanoma patients, particularly as it is a cell surface protein and could therefore be reached by extracellular drugs.

Most humans carry a pseudogenised copy of the SIGLEC12 gene, but about 30% of people have a protein coding version. The protein version has been suggested to partially explain why humans have high rates of carcinoma compared to other great apes, as the protein appears to be involved in aberrant cell signalling and its expression has been associated with poor prognosis in colorectal cancer patients.

In COVID-19 news, variants in the following genes have been newly associated with an increased risk of contracting the disease: ERMP1, FCER1G and CA11. The same study corroborated previously-reported variants in the ABO and SLC6A20 genes. Additionally, the study identified variants in the IL10RB, IFNAR2 and OAS1 genes that are linked to patients suffering from a more severe form of the disease.

Finally, we bring you news about a dog gene! A study from Finland has identified the causal gene for nonsyndromic early-onset hereditary hearing loss in Rottweilers as LOXHD1. We have already approved this gene symbol for the dog gene in VGNC via the pipeline described in the HCOP section above. The human ortholog has also been associated with deafness.


Posted in Center for Environmental Genetics | Comments Off on HGNC 2021 Spring Newsletter

What has happened to American education?

This description by a defector from communism is not to be taken lightly. This is very scary. ☹

North Korean defector says ‘even North Korea was not this crazy’ after attending Ivy League school
Yeonmi Park escaped the oppressive regime in 2007 at the age of 13

By Teny Sahakian

You can click on the underlined title to listen to her description (in excellent English):
‘America’s future is as bleak as North Korea’ says defector after attending Columbia

Yeonmi Park was shocked by the oppressive culture within the university, reminding her of the country she fled.

As American educational institutions continue to be called into question, a North Korean defector fears the United States’ future “is as bleak as North Korea” after she attended one of the country’s most prestigious universities.

Yeonmi Park has experienced plenty of struggle and hardship, but she does not call herself a victim.

One of several hundred North Korean defectors settled in the United States, Park, 27, transferred to Columbia University from a South Korean university in 2016 and was deeply disturbed by what she found.

“I expected that I was paying this fortune, all this time and energy, to learn how to think. But they are forcing you to think the way they want you to think,” Park said in an interview with Fox News. “I realized, wow, this is insane. I thought America was different but I saw so many similarities to what I saw in North Korea that I started worrying.”

Those similarities include anti-Western sentiment, collective guilt and suffocating political correctness.

Yeonmi saw red flags immediately upon arriving at the school.

During orientation, she was scolded by a university staff member for admitting she enjoyed classic literature such as Jane Austen.

“I said ‘I love those books.’ I thought it was a good thing,” recalled Park.

“Then she said, ‘Did you know those writers had a colonial mindset? They were racists and bigots and are subconsciously brainwashing you.’”

It only got worse from there as Yeonmi realized that every one of her classes at the Ivy League school was infected with what she saw as anti-American propaganda, reminiscent of the sort she had grown up with.

“American bastard” was one term for Americans that Park was taught growing up in North Korea.

“The math problems would say ‘there are four American bastards, you kill two of them, how many American bastards are left to kill?'”

She was also shocked and confused by issues surrounding gender and language, with every class asking students to announce their preferred pronouns.

“English is my third language. I learned it as an adult. I sometimes still say ‘he’ or ‘she’ by mistake and now they are going to ask me to call them ‘they’? How the heck do I incorporate that into my sentences?”

“It was chaos,” said Yeonmi. “It felt like the regression in civilization.”

“Even North Korea is not this nuts,” she admitted. “North Korea was pretty crazy, but not this crazy.”

After getting into a number of arguments with professors and students, eventually Yeonmi “learned how to just shut up” in order to maintain a good GPA and graduate.

In North Korea, Yeonmi Park did not know of concepts like love or liberty.

“Because I have seen oppression, I know what it looks like,” said Yeonmi, who by the age of 13 had witnessed people drop dead of starvation right before her eyes.

“These kids keep saying how they’re oppressed, how much injustice they’ve experienced. They don’t know how hard it is to be free,” she admonished.

“I literally crossed through the middle of the Gobi Desert to be free. But what I did was nothing, so many people fought harder than me and didn’t make it.”

Park and her mother first fled the oppressive North Korean regime in 2007, when Yeonmi was 13 years old.

After crossing into China over the frozen Yalu River, they fell into the hands of human traffickers who sold them into slavery: Yeonmi for less than $300 and her mother for roughly $100.

With the help of Christian missionaries, the pair managed to flee to Mongolia, walking across the Gobi Desert to eventually find refuge in South Korea.

In 2015 she published her memoir “In Order to Live,” where she described what it took to survive in one of the world’s most brutal dictatorships and the harrowing journey to freedom.

“The people here are just dying to give their rights and power to the government. That is what scares me the most,” the human rights activist said.

She accused American higher education institutions of stripping people’s ability to think critically.

“In North Korea I literally believed that my Dear Leader [Kim Jong-un] was starving,” she recalled. “He’s the fattest guy – how can anyone believe that? And then somebody showed me a photo and said ‘Look at him, he’s the fattest guy. Other people are all thin.’ And I was like, ‘Oh my God, why did I not notice that he was fat?’ Because I never learned how to think critically.”

“That is what is happening in America,” she continued. “People see things but they’ve just completely lost the ability to think critically.”

Witnessing the depth of Americans’ ignorance up close has made Yeonmi question everything about humanity.

“North Koreans, we don’t have Internet, we don’t have access to any of these great thinkers, we don’t know anything. But here, while having everything, people choose to be brainwashed. And they deny it.”

Having come to America with high hopes and expectations, Yeonmi expressed her disappointment.

“You guys have lost common sense to [a] degree that I as a North Korean cannot even comprehend,” she said.

“Where are we going from here?” she wondered. “There’s no rule of law, no morality, nothing is good or bad anymore, it’s complete chaos.”

“I guess that’s what they want, to destroy every single thing and rebuild into a Communist paradise.”

Posted in Center for Environmental Genetics | Comments Off on What has happened to American education?


NORMAL SOLAR MINIMUM vs the MAUNDER MINIMUM

Right now, in the field of atmospheric science, there is an interesting debate going on. We’ve been in a “solar minimum” for more than a year now; the usual thing that happens next (in the normal 11-year solar cycle) is that sunspots (solar activity) return to the surface of the sun. However, some scientists are speculating that we might instead be entering a “Maunder Minimum” — a prolonged, severely cold time on the planet.

Our last Maunder Minimum occurred between 1645 and 1715 AD (part of the Little Ice Age). We’ve had some evidence of “more extreme cold than usual” during this past year, which is a bit scary. We should know within the next ~2 years which way the sun is planning to go. ☹ 😊


Solar physicists at the ultra-warmist Potsdam Institute for Climate Impact Research are warning that Europe may be facing a ‘mini ice age’ due to a possible protracted solar minimum. For an institute that over the past 20 years has steadfastly insisted that man has been almost the sole factor in climate change over the past century, and that the sun no longer plays a role, this is quite remarkable, reports Pierre Gosselin. [1]

Little noticed by the mainstream media in their obsession with global warming is an exceptionally chilly 2020-2021 winter in the Northern Hemisphere and an unusually early start to the Southern Hemisphere winter. Low temperature and snowfall records are tumbling all over the globe. [2]


Whether the present frigid and snowy conditions in much of the world are merely a result of La Nina, or the start of a longer cooling trend, we won’t know for several years. Climate, after all, is a long-term average of the weather over an extended period of time, up to decades.

Nonetheless, there’s ample evidence that the current cold snap is not about to let up. While the UK experienced its lowest average minimum temperature for April since 1922, and both Switzerland and Slovenia suffered record low temperatures for the month, bone-chilling cold also struck Australia, New Zealand and even normally shivery Antarctica in the Southern Hemisphere. The 2021 sea-ice extent around Antarctica is above or close to the 30-year average for 1981 to 2010. [2]

Flashback to 2014, when The New York Times, a charter member of the climate crisis cabal, ran a terrifying article that predicted ‘the end of snow.’ That dire forecast was preceded 14 years earlier by a 2000 article in the UK Independent, another charter member of the climate crisis cabal, which reported that “Snowfalls are a thing of the past. Children just aren’t going to know what snow is.” [3]

For those who claimed children would not know what snow looked like — global snowfall rates have risen 3% since 1980. [4]

Presently, snow records have continued to be broken around the world. Belgrade, the capital of Serbia, registered its all-time high snowfall for April, in record books dating back to 1888; during April both Finland and Russia reported their heaviest snow in decades; and the UK, Spain and several countries in the Middle East saw rare spring snowfalls from March to May. On the other side of the globe, up to 22 cm (9 inches) of snow fell on southeastern Australian mountain peaks a full two months before the start of the 2021 ski season; and southern Africa was also blanketed in early-season snow. [2]


In early January 2018, a powerful blizzard caused severe disruptions along the East Coast of the United States, as snow and bitter cold weather set new records, with Erie, PA shattering its all-time snowfall record. The next month, Chicago tied a longstanding record with nine straight days of snow after chalking up its most frigid New Year’s Day in history. The Plains, Midwest, and Northeast were hit with record setting frigid temperatures, and the Deep South was gripped by sub-zero freezing that dumped snow, even in Florida. [3]

Mass deaths of reindeer have been reported across the Yamal Peninsula, Russia. The animals’ forage was locked under unusually thick ice this year. Members of a scientific expedition have called for urgent new ideas to rescue herding in the region, due to an increase in periodic glaciation. [5]

Twenty-one people, including two of China’s top marathon athletes, died after freezing rain and high winds struck a 62-mile mountain race in Gansu Province, northwestern China, on Saturday, May 22, 2021.

Hours into the event the weather suddenly deteriorated as the runners were climbing to 6,500 feet above sea level near the 12-mile mark. Runners dressed in shorts and T-shirts were suddenly facing freezing rain, hail and high winds. [6]

One model has projected that we are entering another ‘grand minimum,’ which will overtake the sun beginning in 2020 and will last through the 2050s, resulting in diminished magnetism, infrequent sunspot production, and less ultraviolet (UV) radiation reaching Earth. This all means we are facing a global cooling period that may span 31 to 43 years. The last grand minimum event produced the mini-ice age in the mid-17th century. Known as the Maunder Minimum, it occurred between 1645 and 1715, during a longer span of time when parts of the world became so cold that the period was called the Little Ice Age, which lasted from about 1300 to 1850 AD. [7]


1. Pierre Gosselin, “U-turn! Scientists at the PIK Potsdam Institute now warning of a mini ice age,” notrickszone.com, June 29, 2016
2. Ralph B. Alexander, “Is recent record cold just La Nina, or the onset of global cooling?”, scienceunderattack.com, May 17, 2021
3. John Eidson, “Al Gore, our melting planet and the blizzard of 2018,” Canada Free Press, May 16, 2021
4. P. Gosselin, “Surprising results: global snowfall rate increases 3% over the past 40 years,” notrickszone.com, May 5, 2021
5. “Unusually thick ice kills thousands of Russian reindeer,” principia-scientific.com, May 13, 2021
6. Alexandra Stevenson and Cao Li, “21 runners dead after extreme weather hits Chinese ultramarathon,” The New York Times, May 23, 2021
7. “Are we headed into another ice age?”, principia-scientific.com, May 3, 2021

Comment from an astrophysicist: My theory holds that reversals of the earth’s poles do not result from solar minima. Just the opposite. The earth’s negative pole is at the north axis because the earth orbits in a predominantly negative solar wind and rotates in the right direction to put the negative pole (south pole) of earth’s magnet at the north axis. We call it the north pole because it attracts the north pole of a compass.

Once every few hundred thousand years, the sun gets super-active. Then the more positive charges in the inner solar atmosphere come all the way out to the earth’s orbit. The positively charged region has greater charge density, because positive particles are about 2,000 times heavier than electrons. When that happens — the earth’s poles flip, because the rotation of the now-positively-charged earth puts the earth’s negative pole at the south axis.

However, instead of the earth getting hotter with a hotter sun, the opposite happens. Much more water is evaporated from the oceans via Compton Effect evaporation (different from thermal evaporation). Clouds rise higher and freezing occurs; ice falls and cools the lower atmosphere. The excess water precipitates as ice, rather than rain, and a new ice age builds rapidly. A very hot sun causes an ice age. Even on Venus the temperature in the upper atmosphere will freeze water. Earth is no different. When clouds rise to great heights, ice falls to the ground. Falling ice depletes the heat in the lower atmosphere.

Joseph Tomlinson, Physicist

Posted in Center for Environmental Genetics | Comments Off on NORMAL SOLAR MINIMUM vs the MAUNDER MINIMUM

The decline of mathematics and creative science these past two decades

Attached is the 98-page report by the National Association of Scholars (NAS) — warning the U.S. that our current trend of encouraging mediocrity in science will have disastrous long-term consequences. In fact, it already has. Comparing scientific thought processes and creative science in the 1950s, 60s, 70s and 80s to what we’ve seen during the past two decades, I’ve often concluded to my close colleagues that this is nothing short of terrifying.
“Climbing Down: How the Next Generation Science Standards Diminish Scientific Literacy”

Clearly, this obsession with “diversity, equity, social justice, inclusivity, environmental justice, race-baiting, and gender” nonsense — is seriously destroying the true meaning and focus of the quantitative fields of mathematics and science.

If “education in the fields of math and science” is not corrected and turned around VERY quickly, this nation is headed toward disaster. U.S. students already rank outside the top 20 countries worldwide in math and science.

Below are pasted the Summary & Conclusions and the Recommendation from the attached 98-page report. I encourage everyone to read the entire report. This could be the most important email of my last 12-13 years of GEITP (2008-2021). ☹



The Next Generation Science Standards for K-12 teaching (NGSS) are the latest iteration in top-down, untested, and disastrous education reform — touted by progressive activists, bureaucrats, and philanthropists. The botched rollout of the Common Core State Standards (CCSS) generally illustrates the bad track record of such imposed reforms [135]. National Assessment of Educational Progress (NAEP) math scores show the same percentage of 8th-graders scoring proficient or better in 2017 as in the year before CCSS implementation in 2010. This suggests that the similarly unvetted CCSS mathematics curriculum’s negative effects entirely undid what should have been a decade of improvement in mathematics education [136]. America’s experience with failed education reforms suggests it should expect little from the NGSS standards.

135 Peter Wood, ed., Drilling through the Core: Why Common Core is Bad for American Education (Pioneer Institute, 2015).

136 National Center for Education Statistics, NAEP Mathematics Report Card, U.S. Department of Education, https://www.nationsreportcard.gov/math_2017/nation/scores?grade=8.

The NGSS actually do possess some good features. The addition of engineering standards — which introduces students to another field of science — is valuable. While we raise concerns about project-based education standards, we too recognize that inquiry-based learning can be beneficial, if used as a pedagogical approach in moderation. Raising questions and encouraging curiosity is good. It appeals to the natural inclination of children to question everything in the world around them, and the naturally curious child may take a keen interest in science as a possible career pursuit. Children enjoy the process of discovery. In fact, some of the most valuable scientific discoveries are the result of curiosity and the inclination to ask questions. It is the imbalance of this approach that raises concerns, since overreliance on inquiry-based projects may not contribute to long-term memory of what is learned. After all, we’re told that students can “just Google it.”

The poor track record of education standards and outcomes — at the hands of progressive education reformers — should, of course, give us all pause when we consider the merit of any new set of education standards. For decades, America has put its trust in education bureaucrats, not to mention well-meaning but misguided philanthropists like Bill Gates, to decide what is best for American schoolchildren. This has left us with unfulfilled promises of better educational outcomes, frustration for parents and their children, a de facto national curriculum in the form of CCSS, and consequent flat NAEP score growth since its implementation.

Adopting the new science standards nationwide may offer nothing better. It should come as no surprise that, given the previous failures of constructivist math (“the new math”) in the mid-20th century, America has not fared any better with the constructivist mathematics of CCSS. The assurances of superior education — resulting from math standards that were never piloted or vetted prior to implementation — were simply hollow. How can the NGSS, without pilot testing or vetting, promise any better? We will not know the outcome of the NGSS until a generation of school children has completed its K-12 education. The potential cost of this educational gamble is much too high. The NGSS are an uncontrolled experiment in how to ruin science education in the name of reform.

Students should be able to engage in thoughtful analysis, sort through evidence, systematically analyze it, and then build arguments based on findings. Moreover, science education should be about discovering truth, not just assembling and regurgitating facts. Unfortunately, the NGSS abandon both. The NGSS severely neglect content instruction, politicize much of the content that remains — largely in the service of a nonsensical diversity-and-equity political agenda — and abandon instruction of the scientific method. The NGSS will leave students unable to use the scientific method as a way to approach the truth. Furthermore, content knowledge is replaced with group projects, and (it appears, anyway) consensus answers to scientific questions, rather than verifiable evidence, are accepted without challenge. This is not real science, and it will most likely lead to more widespread issues of politicized groupthink and irreproducible science — as described by David Randall and Christopher Welser in their National Association of Scholars (NAS) report, The Irreproducibility Crisis of Modern Science [137].

137 Randall and Welser, The Irreproducibility Crisis of Modern Science.

The NGSS fail to prepare students for undergraduate science coursework and to provide the basic scientific competency that all Americans should have when they graduate from high school, regardless of whether they proceed to an interdisciplinary Science, Technology, Engineering & Mathematics (STEM) career. NGSS proponents presume that college professors will compensate for the resulting deficits in K-12 science education. If they do, this will reduce undergraduate science courses to remedial classes. If they don’t, a large number of unprepared college students, ill-served by the NGSS, will fail out of introductory science classes. Either way, the NGSS will do terrible damage both to college students and to colleges.

The most fundamental flaw of the NGSS is the missing essential science content. The “Framework for K-12 Science Education,” which was the foundation for the NGSS, summarizes the intended goal of the standards:

The overarching goal of our framework for K-12 science education is to ensure that by the end of the 12th grade, all students have some appreciation of the beauty and wonder of science; possess sufficient knowledge of science and engineering to engage in public discussions on related issues; are careful consumers of scientific and technological information related to their everyday lives; are able to continue to learn about science outside school; and have the skills to enter careers of their choice, including (but not limited to) careers in science, engineering, and technology [138]. [Underlining for emphasis added]

138 Framework, p. 1.

This “overarching goal” makes it quite clear that the NGSS function as a set of what Ze’ev Wurman (former senior policy advisor at the U.S. Department of Education and outspoken critic of the NGSS) so aptly calls science appreciation standards rather than rigorous educational standards [139].

139 Ze’ev Wurman, “Education to Raise Technology Consumers Instead of Technology Creators,” Monolithic 3D, August 4, 2011, http://www.monolithic3d.com/blog/education-to-raise-technology-consumers-instead-of-technology-creators.

State education departments and boards of education should avoid adopting the NGSS — and, if they already have adopted them, immediately replace them with superior standards. The price of continuing with this educational folly is far too high.

The content errors, numerous omissions, imbalance in content, feasibility concerns with the implementation of integrated standards, obvious political dogma, and major shift in pedagogy should all give decision-makers pause. To adopt an entirely new set of standards — without any evidence of success through pilot testing — is a dangerous educational experiment that is a disservice to all high school students, regardless of whether they plan to pursue STEM careers, but especially so for those who do.

Blanket adoption of the NGSS without careful comparison to other existing science standards — those rated higher than the NGSS by Fordham — is not beneficial. This should never happen, although many states have done so. It is time to engage in careful appraisal and ask questions about the science, or lack thereof, being taught in our schools.

We offer the following recommendations to States and school districts:

1. If a State has not adopted new science standards and wishes to update and improve its existing standards, it should use the science standards graded as ‘A’ by the Fordham Review as a template. It should compare them with the NGSS and incorporate any helpful additions, such as the engineering standards that will introduce students to a new discipline, but with the understanding that students will likely not have the prerequisite mathematics preparation for true engineering standards in the upper grades.

2. States that have already adopted the NGSS should compare them with the other State science standards graded as ‘A’ by Fordham and make changes, additions, and deletions as needed.

3. Chemistry and physics standards should be supplemented with previous existing standards to provide solid, complete high-school level courses for students who plan to pursue STEM in college.

4. States should strongly consider replacing CCSS mathematics with higher-level standards, such as the excellent and highly rated pre-CCSS California mathematics standards, to allow students to begin algebra in 8th rather than 9th grade. This will better prepare STEM-bound students as they enter college-level work.

5. States which choose to incorporate engineering in K-12 science education should adopt rigorous standards that require substantial amounts of mathematics.

6. States should allow, encourage, or require students to begin algebra in 8th grade rather than 9th, so that they may be prepared for rigorous high-school science classes.

7. School districts using the NGSS should encourage science teachers to use pedagogies that emphasize knowledge retention rather than project learning.

8. States should ensure that science instruction focuses its case studies on individual effort, scientific dissent, and paradigm shifts, selected from the most important episodes in the history of science, without reference to the race or gender of the scientists in question — but rather with preference for outstanding representatives of the American scientific and engineering tradition, e.g., Benjamin Franklin, Samuel Morse, Alexander Graham Bell, Othniel Charles Marsh, Josiah Willard Gibbs, Thomas Edison, Edwin Armstrong, Edwin Hubble, Thomas Hunt Morgan, Claude Shannon, William Shockley, Linus Pauling, Richard Feynman, Robert Jarvik, and James Watson.

9. States should remove all political commitments from science education, especially those to diversity, environmentalism, and activism.

10. States should ensure that science standards steer students toward the full range of scientific careers and highlight how science and engineering can and should serve the American national interest.

11. States should ensure that science standards emphasize that devotion to science and engineering is its own reward, without reference to any “societal need,” and that all research and design can and should aim, above all, for truth and beauty.


The warning issued in 1983 by Dr. Glenn Seaborg and his colleagues in the opening paragraphs of A Nation at Risk could have been a critique of the NGSS:

…the educational foundations of our society are presently being eroded by a rising tide of mediocrity that threatens our very future as a Nation and a people…

If an unfriendly foreign power had attempted to impose on America the mediocre educational performance that exists today, we might well have viewed it as an act of war. As it stands, we have allowed this to happen to ourselves … We have, in effect, been committing an act of unthinking, unilateral educational disarmament [140].

140 A Nation at Risk, p. 7.

Posted in Center for Environmental Genetics | Comments Off on The decline of mathematics and creative science these past two decades

Role of Environmental Genetics in Preventive Medicine. An Interview with Daniel W. Nebert

The medical students of Yale University first published The Yale Journal of Biology and Medicine (YJBM), Volume 1, No. 1, in October 1928. Since then, YJBM has been the only internationally recognized medical journal edited and published by students.

In its first year of publication, YJBM presented papers on infectious diseases, embryology, nutritional deficiency diseases, community health, trauma, medical history, and cancer. Today, YJBM continues to cover the breadth of medical science, including the publication of proceedings of international symposia and festschrifts honoring notable Yale medical faculty.

Milton C. Winternitz, dean of the Yale School of Medicine from 1920 to 1935, founded YJBM for the educational and professional betterment of his students. The son of a Czechoslovakian Jewish immigrant doctor, Winternitz was a Baltimore native who obtained his MD degree from Johns Hopkins in 1907.

Winternitz’s educational mission continues — as student members of the Board of Editors participate directly in the peer review process. Every manuscript reviewed at YJBM is analyzed and scrutinized by a faculty-student team. The faculty is recruited from across the University, and other institutions as needed, on a case-by-case basis to provide an opportunity to work with the best available expert in the field of the manuscript under review. Hypotheses, methods, results, and conclusions are reviewed, questioned, and examined for validity and logical soundness. Each student reports the results of this joint review to the Board of Editors, and thus gains experience in the scientific process from peer review, while maturing into a more confident physician-scientist through his or her interaction with distinguished faculty.

YJBM is, and has been, an internationally distributed journal with a long history of landmark articles. Our list of contributors also features notable philosophers, statesmen, scientists, and physicians, including Ernst Cassirer, Harvey Cushing, René Dubos, Edward Kennedy, Donald Seldin, and Jack Strominger. Our Editorial Board consists of students and faculty members from the Yale School of Medicine and the Yale University Graduate School of Arts & Sciences.

YJBM has been divided into a number of sections over time — so as to cover a wide range of functions and topics — which include original articles, medical reviews, scientific reviews, book and software reviews, case reports, and a section for focus on Yale medicine. Yale M.D. thesis abstracts continue to be published annually, documenting student research in the medical literature. Each year, an average of 15 students joins the Board of Editors. A permanent board of medical faculty joins the students in an advisory capacity. The Board meets monthly throughout the year to consider manuscript reviews and other matters of academic publishing.

YJBM has been indexed at the National Library of Medicine (NLM) since 1949 and has been available on PubMed (and its precursors) for 56 years. YJBM is currently an open-access publication through PubMed Central. Yours truly was honored by an invitation last fall to be interviewed by a graduate student, Brian Thompson. Just published in the first quarter issue of 2021, the interview is attached, for your bedtime reading pleasure. 😊


Posted in Center for Environmental Genetics | Comments Off on Role of Environmental Genetics in Preventive Medicine. An Interview with Daniel W. Nebert

Novel Probiotic Shows Promise in Treating Type-2 Diabetes

As these GEITP pages have often stated, any trait (phenotype) reflects contributions of: [a] genetics (differences in DNA sequence); [b] epigenetic factors (chromosomal events independent of DNA sequence; the four accepted subdivisions are DNA methylation, RNA interference, histone modifications, and chromatin remodeling); [c] environmental effects (e.g., smoking, diet, lifestyle); [d] endogenous influences (e.g., kidney or cardiopulmonary disorders); and [e] each person’s microbiome. The topic today fits well with the theme of gene-environment interactions. One “signal” is dietary sugar, together with endogenous influences such as obesity. The genes that “respond to this signal” make the patient more susceptible to type-2 diabetes (T2D).

The fascinating aspect of the article [below] is the important advance it represents in our understanding of the microbiome. Another “signal” comprises metabolites generated by specific anaerobic bacterial species; these metabolites have been discovered to be underrepresented in the gut of T2D patients (having A1c levels of 6.5% or higher) and prediabetic patients (having A1c levels of 5.7% to 6.4%), and their deficiency is associated with a higher risk of T2D (the “response”). The company’s preparation accordingly includes the oligosaccharide-consuming Akkermansia muciniphila and Bifidobacterium infantis, and the butyrate producers Anaerobutyricum hallii, Clostridium beijerinckii, and Clostridium butyricum — along with the “prebiotic” dietary fiber inulin.
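As a quick illustration of the A1c cutoffs quoted above, here is a minimal sketch in Python (the function name `classify_a1c` is ours; the 5.7% and 6.5% thresholds are the ones cited in the text):

```python
# Classify glycated-hemoglobin (A1c) values using the thresholds cited above:
# prediabetes = 5.7-6.4%, type-2 diabetes = 6.5% or higher.

def classify_a1c(a1c_percent: float) -> str:
    """Return a diagnostic category for an A1c value (in percent)."""
    if a1c_percent >= 6.5:
        return "type-2 diabetes"
    elif a1c_percent >= 5.7:
        return "prediabetes"
    else:
        return "normal"

for value in (5.2, 5.9, 7.1):
    print(f"A1c {value}% -> {classify_a1c(value)}")
```

This simply makes explicit the ranges the article uses to define the prediabetic and diabetic study populations.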

For those who have followed the explosion in our understanding of the gut microbiome, it wasn’t that long ago (perhaps 2005?) when we knew almost nothing about the bacteria in our intestine (and elsewhere on the body) and what they do. Now we realize that, if one grinds up an entire mammal (or human) and isolates total DNA, ~92% of this DNA represents our bacteria (!!). Moreover, the “brain-gut-microbiome axis” governs innumerable functions that can cause or influence changes in health, disease, and even our mood and behavior (all elicited by gut bacterial metabolites). To our knowledge, this probiotic preparation* (Pendulum Glucose Control) — containing gut bacterial strains that are deficient in people with pre-T2D or T2D — is the first example of Big Pharma utilizing knowledge gained by studying our gut microbiome to (hopefully) clinically improve, or prevent, an undesirable and very serious disease. 😊😊

*Even though it is a rather expensive daily medication for the average patient ☹…


COMMENT: — there is a bit of a disconnect between your comments on fecal microbiota transplantation (FMT) and the Medscape article that GEITP had featured [pasted furthest below]. Many clinical centers are randomly trying FMT on virtually every human disorder — “to see if it works” (i.e., if it helps, or prevents, X-Y-Z). This is all well and good.

In contrast, this pharmaceutical firm took advantage of the discovery of the absence (or very low levels) of specific strains of anaerobic bacteria in patients with type-2 diabetes (T2D) or those showing signs of pre-T2D, compared with patients having no T2D or pre-T2D. The company then developed a preparation that includes [a] two oligosaccharide-consuming bacterial strains, [b] three butyrate-producing strains — combined with [c] the “prebiotic” dietary fiber inulin. Coming up with a specific commercial preparation … is wherein the creativity resides.

Imagine a fly on the wall 30 ft away, which you’d like to eliminate. 😊 The former approach is like a shotgun blast, hoping to hit the fly. The latter approach is like designing a laser gun to hit the fly specifically. 😊 The latter is therefore an example of “precision medicine.”


COMMENT: I agree that fecal microbiota transplantation (FMT) is amazing in its potential. To my knowledge, FMT has been most remarkably and reproducibly useful for treatment of persistent C. difficile infections. Here is the Cell Metabolism reference (and another in Gut and a news piece), and here is the quote from the book where I first learned of this:

“Research is underway examining how certain probiotics might be able to reverse type-2 diabetes and the neurological challenges that can follow. At Harvard’s 2014 symposium on the microbiome, I was floored by the work of Dr. M. Nieuwdorp from the University of Amsterdam, who has done some incredible research related to obesity and type-2 diabetes. He has successfully improved the blood sugar mayhem found in type-2 diabetes in more than 250 people using fecal transplantation. He’s also used this procedure to improve insulin sensitivity.

These two achievements are virtually unheard of in traditional medicine. We have no medication available to reverse diabetes or significantly improve insulin sensitivity. Dr. Nieuwdorp had the room riveted, practically silenced, by his presentation. In this experiment, he transplanted fecal material from a healthy, lean, nondiabetic into a diabetic. What he did to control his experiment was quite clever: He simply transplanted the participants’ own microbiome back into their colons, so they didn’t know whether they were being “treated” or not. For those of us who see the far-reaching effects of diabetes in patients on a daily basis, outcomes like Dr. Nieuwdorp’s are a beacon of hope. […]”

Excerpt From: David Perlmutter. “Brain Maker: The Power of Gut Microbes to Heal and Protect Your Brain for Life.” iBooks.

Posted in Center for Environmental Genetics | Comments Off on Novel Probiotic Shows Promise in Treating Type-2 Diabetes

Novel Probiotic Shows Promise in Treating Type-2 Diabetes

Miriam E. Tucker

August 03, 2020

A novel probiotic product (Pendulum Glucose Control) containing gut bacteria strains that are deficient in people with type 2 diabetes (T2D) modestly improves blood glucose levels, new research suggests.

The findings were published online July 27 in BMJ Open Diabetes Research & Care by Fanny Perraudeau, PhD, and colleagues, all employees of Pendulum Therapeutics.

The product, classified as a medical food, is currently available for purchase on the company’s website without a prescription.

It contains the oligosaccharide-consuming Akkermansia muciniphila and Bifidobacterium infantis, the butyrate producers Anaerobutyricum hallii, Clostridium beijerinckii, and Clostridium butyricum, along with the “prebiotic” dietary fiber inulin.

In a 12-week trial of people with type-2 diabetes who were already taking metformin, with or without a sulfonylurea, 23 were randomized to the product and 26 received placebo capsules.

Participants in the active treatment arm had significantly reduced glucose levels after a 3-hour standard meal-tolerance test, by 36.1 mg/dL (P = .05), and an average A1c reduction of 0.6 percentage points (P = .054), compared with those taking placebo. There were no major safety or tolerability issues; the only adverse effects were transient gastrointestinal symptoms (nausea, diarrhea) lasting 3-5 days. No changes were seen in body weight, insulin sensitivity, or fasting blood glucose.

Asked to comment on the findings, Nanette I. Steinle, MD, an endocrinologist with expertise in nutrition who was not involved in the research, told Medscape Medical News: “To me it looks like the research was designed well and they didn’t overstate the results…I would say for folks with mild to modest blood glucose elevations, it could be helpful to augment a healthy lifestyle.”

However, the product is not cheap, so cost could be a limiting factor for some patients, said Steinle, who is associate professor of medicine at the University of Maryland School of Medicine, Baltimore, and chief of the endocrine section, Maryland VA Health Care System.

Product Could Augment Lifestyle Intervention in Early Type 2 Diabetes

Lead author Orville Kolterman, MD, chief medical officer at Pendulum, told Medscape Medical News that the formulation’s specificity distinguishes it from most commercially available probiotics.

“The ones sold in stores are reconfigurations of food probiotics, which are primarily aerobic organisms, whereas the abnormalities in the microbiome associated with type-2 diabetes reside in anaerobic organisms, which are more difficult to manufacture,” he explained.

The fiber component, inulin, is important as well, he said.

“This product may make the dietary management of type-2 diabetes more effective, in that you need both the fiber and the microbes to ferment the fiber and produce short-chain fatty acids that appear to be very important for many reasons.”

The blood glucose-lowering effect is related in part to the three organisms’ production of butyrate, which binds to epithelial cells in the gut, stimulating them to secrete glucagon-like peptide-1 (GLP-1), leading to inhibition of glucagon secretion, among other actions.

And Akkermansia muciniphila protects the gut epithelium and has shown some evidence of improving insulin sensitivity and other beneficial metabolic effects in humans.

Kolterman, who was with Amylin Pharmaceuticals prior to moving to Pendulum, commented: “After doing this for 30 years or so, I’ve come to the strong appreciation that whenever you can do something to move back toward what Mother Nature set up, you’re doing a good thing.”

Clinically, Kolterman said, “I think perhaps the ideal place to try this would be shortly after diagnosis of type-2 diabetes, before patients go on to pharmacologic therapy.”

However, for practical reasons the study was done in patients who were already taking metformin. “The results we have are that it’s beneficial — above and beyond metformin, since [these] patients were not well controlled with metformin.”

He also noted that it might benefit patients who can’t tolerate metformin or who have prediabetes; there’s an ongoing investigator-initiated study of the latter.

Steinle, the endocrinologist with expertise in nutrition, also endorsed the possibility that the product may benefit people with prediabetes. “I would suspect this could be very helpful to augment attempts to prevent diabetes…The group with prediabetes is huge.”

However, she cautioned, “if the blood glucose is over 200 [mg/dL], I wouldn’t think a probiotic would get them where they need to go.”

Cost Could Be an Issue

Moreover, Steinle pointed out that cost might be a problem, given it is not covered by health insurance.

The product’s website lists several options: a “no commitment” one-time 30-day supply for $198; a “3-month starter package” including two free A1c tests for $180/month; and a “membership” including free A1c tests every 90 days, free dietician consultations, and “additional exclusive membership benefits” for $165/month.

“There’s a very large market out there of people who don’t find traditional allopathic medicine to be where they want to go for their healthcare,” Steinle observed.

“If they have reasonable means and want to try the ‘natural’ approach, they’ll probably see results but they’ll pay a lot for it,” she said.

Overall, she pointed out that targeting the microbiome is a very active and potentially important field of medical research, and that it has received support from the US National Institutes of Health (NIH).

“I do think we’ll see more of these types of products and use of the microbiome in various forms to treat a number of conditions.”

“I think we’re in the early stages of understanding how what grows in us, and on us, impacts our health and how we may be able to use these organisms to our benefit. I would expect we’ll see more of these probiotics being marketed in various forms.”

Kolterman is an employee of Pendulum. Steinle has reported receiving funding from the NIH, and she is conducting a study funded by Kowa through the VA.

BMJ Open Diabetes Res Care. Published online July 27, 2020: Full text

Posted in Center for Environmental Genetics | Comments Off on Novel Probiotic Shows Promise in Treating Type-2 Diabetes

Molnupiravir, a ribonucleoside-analog antiviral, appears to be the best treatment against SARS-CoV-2 virus

Because GEITP was very active in comparing remdesivir (RDV) with hydroxychloroquine (HCQ) — last spring and summer — GEITP feels obliged to report on this latest antiviral drug, which does appear to have great potential in lessening the symptomatology of COVID-19. Note that this is a very preliminary study: although it reaches statistical significance (P < 0.05) between a MOLNUPIRAVIR-treated group (N = ~50) and the control group (N = ~150), these groups are too small to conclude anything unequivocally. Further studies with much larger sample sizes are, of course, indicated next. 😊

DwN

Five-Day Course of Oral Antiviral Appears to Stop SARS-CoV-2 in Its Tracks

Heather Boerner
March 08, 2021

A single pill of the investigational drug molnupiravir, taken twice a day for 5 days, eliminated SARS-CoV-2 from the nasopharynx of 49 participants. That finding led Carlos del Rio, MD, distinguished professor of medicine at Emory University in Atlanta, Georgia, to suggest a future in which a drug like molnupiravir could be taken in the first few days of symptoms to prevent severe disease, similar to Tamiflu for influenza.

"I think it's critically important," he told Medscape Medical News of the data. Emory University was involved in the trial of molnupiravir, but del Rio was not part of that team. "This drug offers the first antiviral oral drug that then could be used in an outpatient setting."

Still, del Rio said it's too soon to call this particular drug the breakthrough that clinicians need to keep people out of the ICU. "It has the potential to be practice-changing; it's not practice-changing at the moment."

Wendy Painter, MD, of Ridgeback Biotherapeutics, who presented the data at the virtual Conference on Retroviruses and Opportunistic Infections, agreed. While the data are promising, "We will need to see if people get better from actual illness" to assess the real value of the drug in clinical care.
"That's a phase 3 objective we'll need to prove," she told Medscape Medical News. Phase 2/3 efficacy and safety studies of the drug are now underway in hospitalized and nonhospitalized patients.

In a brief pre-recorded presentation of the data, Painter laid out what researchers know so far: preclinical studies suggest that molnupiravir is effective against a number of viruses, including coronaviruses and specifically SARS-CoV-2. It prevents a virus from replicating by inducing viral error catastrophe — essentially overloading the virus with replication and mutation until the virus burns itself out and can't produce replicable copies.

In this phase 2a, randomized, double-blind, controlled trial, researchers recruited 202 adults who were treated at an outpatient clinic with fever or other symptoms of a respiratory virus and confirmed SARS-CoV-2 infection by day 4. Participants were randomly assigned to three different groups: 200 mg, 400 mg, or 800 mg of molnupiravir. The 200-mg arm was matched one-to-one with a placebo-controlled group, and the other two groups had three participants in the active group for every one control.

Participants took the pills twice daily for 5 days, and then were followed for a total of 28 days to monitor for complications or adverse events. At days 3, 5, 7, 14, and 28, researchers also took nasopharyngeal swabs for PCR tests, to sequence the virus, and to grow cultures of SARS-CoV-2 to see if the virus that's present is actually capable of infecting others.

Notably, the pills do not have to be refrigerated at any point in the process, alleviating the cold-chain challenges that have plagued vaccines. "There's an urgent need for an easily produced, transported, stored, and administered antiviral drug against SARS-CoV-2," Painter said.

Of the 202 people recruited, 182 had swabs that could be evaluated, of which 78 showed infection at baseline. The results are based on labs of those 78 participants.
By day 3, 28% of patients in the placebo arm had SARS-CoV-2 in their nasopharynx, compared to 20.4% of patients receiving any dose of molnupiravir. But by day 5, none of the participants receiving the active drug had evidence of SARS-CoV-2 in their nasopharynx. In comparison, 24% of people in the placebo arm still had detectable virus.

Halfway through the treatment course, differences in the presence of infectious virus were already evident. By day 3 of the 5-day course, 36.4% of participants in the 200-mg group had detectable virus in the nasopharynx, compared with 21% in the 400-mg group and just 12.5% in the 800-mg group. And although the reduction in SARS-CoV-2 was noticeable in the 200-mg and the 400-mg arms, it was only statistically significant in the 800-mg arm.

In contrast, by the end of the 5 days in the placebo groups, infectious virus varied from 18.2% in the 200-mg placebo group to 30% in the 800-mg group. This points out the variability of the disease course of SARS-CoV-2. "You just don't know" which infections will lead to serious disease, Painter told Medscape Medical News. "And don't you wish we did?"

Seven participants discontinued treatment, though only four experienced adverse events. Three of those discontinued the trial due to adverse events. The study is still blinded, so it's unclear what those events were, but Painter said that they were not thought to be related to the study drug.

The bottom line, said Painter, was that people treated with molnupiravir had starkly different outcomes in lab measures during the study. "An average of 10 days after symptom onset, 24% of placebo patients remained culture positive" for SARS-CoV-2 — meaning there wasn't just virus in the nasopharynx, but it was capable of replicating, Painter said. "In contrast, no infectious virus could be recovered at study day 5 in any molnupiravir-treated patients."

Conference on Retroviruses and Opportunistic Infections 2021: Abstract SS777. Presented March 6, 2021.
Heather Boerner is a science and medical reporter based in Pittsburgh, PA and can be found on Twitter at @HeatherBoerner.
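To make GEITP's small-sample caveat above concrete, here is a minimal two-proportion z-test sketch in Python (standard library only). The function name and the counts are ours: hypothetical numbers chosen merely to echo the ~50-treated vs ~150-control group sizes mentioned in the note, not values taken from the trial.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test for H0: p1 == p2, using the pooled proportion."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return z, p_value

# Hypothetical counts: 0/50 treated vs 36/150 controls still culture-positive.
z, p = two_proportion_z(0, 50, 36, 150)
print(f"z = {z:.2f}, two-sided P = {p:.1g}")
```

With groups this small, shifting only a few patients between categories moves the P-value substantially, which is exactly why larger confirmatory trials are needed before drawing firm conclusions.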

Posted in Center for Environmental Genetics | Comments Off on Molnupiravir, a ribonucleoside-analog antiviral, appears to be the best treatment against SARS-CoV-2 virus

Policy Making in the Post-Truth World

The (l-o-n-g) article — [below, just posted] — seems (to some of us) to hit the nail on the head. “Basic science,” and the lay public’s respect for basic scientists, has clearly eroded over the past two or so decades. This is paralleled by the “rise of inappropriate experts” (i.e., colleagues who think they ‘know’ more about some topic than anyone can possibly ‘know’). In the past year, this trend has especially been driven home with regard to the COVID-19 hysteria.

For example, various “self-proclaimed experts” have frequently gone on national television and made unequivocal pronouncements as if they had “unambiguous facts.” As a result of pretending to be “an expert on SARS-CoV-2,” Dr. Anthony Fauci, for example, has unfortunately been attacked on all sides, because he is continuously changing his “assertions.” ☹

In fact, the SARS-CoV-2 virus — and the subsequent genetic differences in severity of response (mortality, degree of COVID-19 morbidity) — were unprecedented. The future was (and still is) unknown. An honest statement by a scientist should therefore be: “Nobody knows with certainty, but this is my best guess; I could be wrong.” And, one month later, if the data suggest something different, why not say, “Sorry, but I was wrong”…??

In the Post-Truth World, we have “inappropriate experts” creating policy mandates about many issues [e.g., facial coverings (in buildings, tennis courts, even on deserted beaches), business and school lockdowns, prevention of restaurants and bars from opening, “social distancing” of 6 feet (not 8 ft or 5 ft?), local and international travel, etc.] — to the point of absurdity, because of so many uncertainty factors. It therefore comes as no surprise that the lay public’s respect for basic scientists has clearly diminished.

This article below is an excellent summary of this recent transition from 20th-century expert science to the rise of inappropriate expertise that we’ve seen during the past several decades — on numerous policy issues. ☹
Policy Making in the Post-Truth World

On the Limits of Science and the Rise of Inappropriate Expertise



Steve Rayner

Steve Rayner was the James Martin Professor of Science and Civilization at the University of Oxford, where he was the Founding Director of the Institute for Science, Innovation and Society.


Daniel Sarewitz

Dan Sarewitz is Co-Director, Consortium for Science, Policy & Outcomes, and Professor of Science and Society at Arizona State University.

1 March 2021

Steve Rayner died a couple of months after he and I finished up a baggy first draft of this essay and circulated it to a few colleagues. The essay itself was to be the first of several that we had been discussing for years about science, technology, politics, and society. So, I had a thick folder — full of notes that I could draw on while making revisions, to assure myself that the end product was one that Steve would have fully approved of, even if it was not nearly as good as we could have achieved together.

Had Steve not died shortly before the onset of the COVID-19 pandemic, we would certainly have given it a central role in this revised version. But I never had the benefit of Steve’s insights on the strange unfolding of this disaster, and so, except in a couple of places where the extrapolation seems too obvious to not mention, the virus does not appear in what follows. Nonetheless, I know that Steve would have relished an obvious irony related to “expertise” and the societal response to COVID-19: some “experts” proclaimed a welcome reawakening of public respect for “experts” triggered by the pandemic, even as other “experts” were insisting that the course of the disease marked a decisive repudiation of the legitimacy of “experts” in modern societies. Which seems as good an entry point as any into our exploration of the troubled state of “expertise” in today’s troubled world.


Writing of his days as a riverboat pilot in Life on the Mississippi, Mark Twain described how he mastered his craft: “The face of the water, in time, became a wonderful book — a book that was a dead language to the uneducated passenger, but which told its mind to me without reserve, delivering its most cherished secrets as clearly as if it uttered them with a voice.”[1]

The “wonderful book” to which Twain refers, of course, can nowhere be written down. The riverboat pilot’s expertise derives not from formal education but from constant feedback from his surroundings, which allows him to continually hone and test his skill and knowledge, expanding its applicability to a broadening set of contexts and contingencies. “It was not a book to be read once and thrown aside, for it had a new story to tell every day,” Twain continued. “Throughout the long twelve hundred miles there was never a page that was void of interest, never one that you could leave unread without loss.”

Expertise, in this way, necessarily involves the ability to make causal inferences (drawn, say, from the pattern of ripples on the surface of a river) that guide understanding and action to achieve better outcomes than could be accomplished without such guidance. Such special knowledge allows the expert to reliably deliver a desired outcome that cannot be assured by the non-expert.

Expertise of this sort may also require lengthy formal training in sophisticated technical areas. But the expertise of the surgeon, or the airline pilot, is never just a matter of book learning; it requires the development of tacit knowledge, judgment, and skills that come only from long practical experience and the feedback that such experience delivers. Expert practitioners demonstrate their expertise by successfully performing tasks that society values and expects from them, reliably and with predictable results. They navigate the riverboat through turbulent narrows; they repair the damaged heart valve; they land the aircraft that has lost power to its engines.

Yet every day it seems we hear that neither politicians nor the public are paying sufficient heed to expertise. The claim has become a staple of scholarly assertion, media coverage, and political argument. Commentators raise alarm at our present “post-truth” condition,[2] made possible by rampant science illiteracy among the public, the rise of populist politics in many nations, and the proliferation of unverifiable information via the Internet and social media, exacerbated by mischievous actors such as Russia and extreme political views. This condition is said to result in a Balkanization of allegedly authoritative sources of information that in turn underlies distrust of mainstream experts and reinforces growing political division.

And still, despite this apparent turn away from science and expertise, few doubt the pilot or the surgeon. Or, for that matter, the plumber or the electrician. Clearly, what is contested is not all science, all knowledge, and all expertise, but particular kinds of science and claims to expertise, applied to particular types of problems.

Does population-wide mammography improve women’s health? It’s a simple question, still bitterly argued despite 50 years of mounting evidence. Is nuclear energy necessary to decarbonize global energy systems? Will missile defense systems work? Does Roundup cause cancer? What’s the most healthful type of diet? Or the best way to teach reading or math? For all of these questions, the answer depends on which expert you ask. Should face masks be worn outdoors in public places during the pandemic? Despite its relevance to the COVID-19 outbreak, this question has been scientifically debated for at least a century.[3] If the purpose of expertise applied to these sorts of questions is to help resolve them so that actions that are widely seen as effective can be pursued, then it would seem that the experts are failing. Indeed, these sorts of controversies have both proliferated and become ever more contested. Apparently, the type of expertise being deployed in these debates is different from the expertise of the riverboat pilot in the wheelhouse, or the surgeon in the operating room.

Practitioners like river pilots and surgeons can be judged and held accountable based on the outcomes of their decisions. Such a straightforward line of performance assessment can rarely be applied to experts who would advise policy makers on the scientific and technical dimensions of complex policy and political problems. Advisory experts of this sort are not acting directly on the problems they advise about. Even if their advice is taken, feedbacks on performance are often not only slow, but also typically incomplete, inconclusive, and ambiguous. Such experts are challenged to deliver anything resembling what we expect — and usually get — from our pilots, surgeons, and plumbers: predictable, reliable, intended, obvious, and desired outcomes.


Nobody worries whether laypeople trust astrophysicists who study the origins of stars or biologists who study anaerobic bacteria that cluster around deep-sea vents. The wrangling among scientists who are debating, say, the reasons dinosaurs went extinct or whether string theory tells us anything real about the structure of the universe can be acrimonious and protracted, but it bears little import for anyone’s day-to-day life beyond that of the scientists conducting the relevant research. But the past half-century or so has seen a gradual and profound expansion of science carried out in the name of directly informing human decisions and resolving disputes related to an expanding range of problems for democratic societies involving technology, the economy, and the environment.[4]

If it can be said that there is a crisis of science and expertise and that we have entered a post-truth era, it is with regard to these sorts of problems, and to the claims science and scientific experts would make upon how we live and how we are governed.

Writing about the limits of science for resolving political disagreements about issues such as the risks of nuclear energy, the physicist Alvin Weinberg argued in an influential 1972 article that the inherent uncertainties surrounding such complex and socially divisive problems lead to questions being asked of science that science simply cannot answer.[5]

He coined the term “trans-science” to describe scientific efforts to answer questions that actually transcend science.

Two decades later, the philosophers Silvio Funtowicz and Jerome Ravetz more fully elucidated the political difficulties raised by trans-science as those of “post-normal” science, in which decisions are urgent, uncertainties and stakes are high, and values are in dispute. Their term defined a “new type of science” aimed at addressing the “challenges of policy issues of risk and the environment.”[6] (Funtowicz and Ravetz used the term “post-normal” to contrast with the day-to-day puzzle-solving business of mature sciences that Thomas Kuhn dubbed “normal science” in his famous 1962 book, The Structure of Scientific Revolutions.[7])

What Funtowicz and Ravetz stressed was the need to recognize that science carried out under such conditions could not — in theory or practice — be insulated from other social activities, especially politics.

Demands on science to resolve social disputes accelerated as the political landscape in the 1960s and 70s began to shift from a primary focus on the opposition between capital and labor toward one that pitted industrial society against the need to protect human health and the environment, a shift that intensified with the collapse of the Soviet Union in 1991. Public concerns about air and water pollution, nuclear energy, low levels of chemical contamination and pesticide residues, food additives, and genetically modified foods translated into public debates among experts about the magnitude of the problems and the type of policy responses, if any, that were needed. It is thus no coincidence that the 1980s and 90s saw “risk” emerge as the explicit field of competing claims of rationality.[8]

As with the previous era of conflict between capital and labor, these disputes often mapped onto political divisions, with industrial interests typically aligning with conservative politics to assert low levels of risk and excessive costs of action, and interests advocating environmental protection aligning with regimes for which the proper role of government included regulation of industry to reduce risks, even uncertain ones, to public health and well-being.

As such conflicts proliferated, it was not much of a step to think that the well-earned authority of science to establish cause-effect relations about the physical and biological world might be applied to resolve these new political disputes. In much the same logical process that leads us to rely on the expertise of the riverboat pilot and cardiac surgeon, scientists with relevant expertise have been called upon to guide policy makers in devising optimal policies for managing complex problems and risks at the intersection of technology, the environment, health, and the economy.

But this logic has not borne out. Instead, starting in the 1970s, there has been a rapid expansion in health and environmental disputes, not-in-my-backyard protests, and concerns about environmental justice, invariably accompanied by dueling experts, usually backed by competing scientific assessments of potential versus actual damage to individuals and communities. These types of disputes constitute an important dimension of today’s divisive national politics.


Why has scientific expertise failed to meet the dual expectations created by the rise of scientific knowledge in the modern age and the impressive performance record of experts acting in other domains of technological society? The difficulties begin with nature itself.

The distinguished anthropologist Mary Douglas was wont to observe that nature is the trump card that can be played to win an argument even when time, God, and money have failed.[9] The resort to nature as ultimate arbiter of disagreement is a central characteristic of the modern Western world.[10] The debate between Burke and Paine in the 18th century over the origins of democratic legitimacy drew its energy from fundamentally conflicting claims about nature.[11] A century later, J. S. Mill observed that “the word unnatural has not ceased to be one of the most vituperative epithets in the language.”[12] This remains the case today. When we assert that something is only natural, we draw a line in the sand. We declare that it is simply the way things are and that no further argumentation can change that.

How does nature derive its voice in the political realm? In the modern world, nature speaks through science. Most people do not apprehend nature directly; they apprehend it via those experts who can speak and translate its language. Translated to the political realm, scientists who would advise policy-making draw their legitimacy principally from the claim that they speak for nature. That expertise is ostensibly wielded to help policy makers distinguish that which is correct about the world from that which is incorrect, causal claims that are true from those that are false, and ultimately, policies that are better from those that are worse.

Yet when it comes to the complicated interface of technology, environment, human development, and the economy, political combatants have their own sciences and experts advocating on behalf of their own scientifically mediated version of nature. What is produced under such circumstances, Herbert Simon observed in 1983, is not ever more reliable knowledge, but rather “experts for the affirmative and experts for the negative.”[13] Under these all-too-familiar conditions, science clearly must be doing something other than simply reporting upon well-established cause-and-effect inferences observed in nature. What, then, is it doing?

A key insight was provided in the work of ecologist C. S. Holling, who revealed the breadth and variety of scientists’ assumptions about how nature works by describing the seemingly contradictory ecological management strategies adopted by foresters to address problems such as insect infestations or wildfires.[14]

If foresters were conventionally rational, they would all do the same thing when given access to the same relevant scientific information. However, in the diverse forest management approaches that were actually implemented, Holling and colleagues detected “differences among the worldviews or myths of nature that people hold,” leading in turn to different scientific “explanations of how nature works and the [different] implication of those assumptions on subsequent policies and actions.”[15]

One view of nature understands the environment to be favorable toward humankind. In this world, a benign nature re-establishes its natural order regardless of what humans do to their environment. This version of rationality requires strong proof of significant environmental damage to justify intervention that restricts economic development, and it encourages institutions and individuals to take a trial-and-error approach in the face of uncertainty.

From another perspective, nature is in a precarious and delicate balance. Even a slight perturbation can result in an irreversible change in the state of the system. This view encourages institutions to take a precautionary approach to managing an ephemeral nature. The burden of proof, in this worldview, rests with those who would act upon nature.

A third view of nature centers around the uncertainties regarding causes and effects themselves. From this perspective, uncertainty is inherent, and the objective of scientific management is not to avoid any perturbation but to limit disorder via indicators, audits, and the construction of elaborate technical assessments to ensure that no perturbation is too great.[16]

The point is not that any of these perspectives is entirely right or entirely wrong. Social scientists Schwarz and Thompson noted that “each of these views of nature appears irrational from the perspective of any other,” reflecting what they term “contradictory certainties.”[17] There can be no single, unified view of nature that can be expressed through a coherent body of science. In the post-normal context, when science is applied to policy making and decisions with potentially momentous consequences, scientists and decision-makers are always interpreting observations and data through a variety of pre-existing worldviews and frameworks that create coherence and meaning. Different myths of nature thus become associated with different institutional biases toward action.[18]

Consider claims that we are collectively on the brink of overstepping “planetary boundaries” that will render civilization unsustainable. In the scientific journal Nature, Johan Rockström and his colleagues at the Stockholm Resilience Centre argue that “human actions have become the main driver of global environmental change,” which “could see human activities push the Earth system outside the stable environmental state of the Holocene, with consequences that are detrimental or even catastrophic for large parts of the world.”[19]

A review by Nordhaus et al. contests these claims, challenging the idea that these planetary boundaries constitute “non-negotiable” thresholds, interpreting them instead as rather arbitrary claims that for the most part don’t even operate at planetary scale.[20] Similarly, Brook et al. conclude that “ecological science is unable to define any single point along most planetary continua where the effects of global change will cause abrupt shifts or transitions to qualitatively different regimes across the whole planet.”[21] Strunz et al. argue that civilizational “collapse” narratives are themselves subject to interpretation and that the supposed alternatives of “sustainability or collapse” mischaracterize not only the nature of environmental challenges, but the types of policy responses available to societies.[22]

These various expert perspectives beautifully display the competing rationalities mapped out by Holling a generation before. They suggest that rather than non-negotiables, humanity faces a system of trade-offs — not only economic, but moral and aesthetic as well. Deciding how to balance these trade-offs is a matter of political contestation, not scientific fact.[23]

What counts as “unacceptable environmental change” involves judgments concerning the value of the things to be affected by the potential changes.

Seldom do scientists or laypeople consciously reflect on the underlying assumptions about the nature of nature that inform their arguments. Even when such assumptions can be made explicit, as Holling discussed in the case of forest ecosystem management,[24] it is not possible to say which provides the best foundation for policy making. This is the case given both that the science is concerned with the future states of open, complex, nondeterminate natural and social systems, and that people may reasonably disagree about the details of a desirable future as well as the best pathways of getting there.

Amidst such multi-level uncertainty and disagreement (which may last for decades, or forever), it is impossible to test causal inferences at large enough temporal and spatial scales to draw conclusions about which experts were right and which were wrong with regard to questions related to something like overall earth-system behavior. Experts participating in such debates thus need never worry that they will be held accountable for the consequences of acting on their advice. They wield their expertise with impunity.


The most powerful ammunition that experts can deploy are numbers. Indeed, we might say that if nature is a political trump card, numbers are what give that card its status and authority. Pretty much any accounting of science will put quantification at the center of science’s power to portray the phenomena that it seeks to understand and explain. As Lord Kelvin said in 1883: “When you can measure what you are speaking about and express it in numbers, you know something about it: but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind.”[25]

To describe something with a number is to make a sharp claim about the correspondence between the idea being conveyed and the material world out there. Galileo said that mathematics was the language of the universe. The use of numbers to make arguments on behalf of nature thus amounts to an assertion of superior — that is, expert — authority over other ways to make claims about knowledge, truth, and reality.

When we look at the kinds of numbers that often play a role in publicly contested science, however, we see something surprising. Many numbers that appear to be important for informing policy discussions and political debates describe made-up things, not actual things in nature. They are, to be sure, abstractions about, surrogates for, or simulations of what scientists believe is happening — or will happen — in nature. But they are numbers whose correspondence to something real in nature cannot be tested, even in theory.

Yet even when representing abstractions or poorly understood phenomena, numbers still appear to communicate the superior sort of knowledge that Lord Kelvin claimed for them, giving rise to what mathematician and philosopher Alfred North Whitehead in 1929 termed “the Fallacy of Misplaced Concreteness,” in which abstractions are taken as concrete facts.[26] This fallacy is particularly seductive in the political context, when complicated matters (for example, the costs versus benefits of a particular policy decision) can be condensed into easily communicated numbers that justify a particular policy decision, such as whether or not to build a dam[27] or protect an ecological site.[28]

Consider efforts to quantify the risks of high-level nuclear waste disposal in the United States and other countries. The behavior of buried nuclear waste is determined in part by the amount of water that can reach the disposal site and thus eventually corrode the containment vessel and transport radioactive isotopes into the surrounding environment.

One way to characterize groundwater flow is by calculating a variable called percolation flux, a measure of how fast water moves through rock, expressed in distance per unit of time. The techniques used to assign numbers to percolation flux depend on hydrogeological models, which are always incomplete representations of real conditions,[29] and laboratory tests on small rock samples, which cannot well represent the actual heterogeneity of the disposal site. Based on these calculations, assessments of site behavior then adopt a value of percolation flux for the entire site for purposes of evaluating long-term risk.

Problems arise, though, because water will behave differently in different places and times depending on conditions (such as the varying density of fractures in the rocks, or connectedness between pores, or temperature). At Yucca Mountain, Nevada, the site chosen by Congress in 1987 to serve as the US high-level civilian nuclear waste repository, estimates of percolation flux made over a period of 30 years have varied from as low as 0.1 mm/yr to as much as 40 mm/yr.[30]

This difference of nearly three orders of magnitude has momentous implications for site behavior, with the low end helping to assure decision-makers that the site will remain dry for thousands of years and the high end indicating a level of water flow that, depending on site design, could introduce considerable risk of environmental contamination over shorter time periods.[31]
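To make the stakes of that range concrete, here is a back-of-the-envelope sketch (my own illustration, not drawn from the cited studies) that treats percolation flux as a uniform downward travel rate and asks how long surface water would take to reach a repository 300 meters underground, the burial depth discussed for Yucca Mountain:

```python
# Illustrative only: treat percolation flux as a uniform downward travel
# rate, ignoring the heterogeneity (fractures, pore connectivity,
# temperature) that the real scientific dispute turns on.

DEPTH_MM = 300 * 1000  # 300 meters, in millimeters

for flux_mm_per_yr in (0.1, 40.0):  # the low and high published estimates
    years = DEPTH_MM / flux_mm_per_yr
    print(f"flux = {flux_mm_per_yr:>4} mm/yr -> ~{years:,.0f} years to reach 300 m")
```

Under these deliberately crude assumptions, the low estimate implies millions of years of travel time and the high estimate only a few thousand, which is precisely the gap between a site that stays “dry” on regulatory timescales and one that does not.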

To reduce uncertainties about the hydrogeology at Yucca Mountain, scientists proposed to test rocks from near the planned burial site, 300 meters underground, for chlorine 36 (36Cl). This radioisotope is rare in nature but is created during nuclear blasts and exists in higher abundance in areas where nuclear weapons have been tested, such as the Nevada Test Site near Yucca Mountain. If excess 36Cl could be found at the depth of the planned repository, it would mean that water had travelled from the surface to the repository depth in the 60 or so years since weapons tests were conducted, requiring a much higher percolation flux estimate than if no 36Cl was present.[32]

But confirming the presence of excess 36Cl hinged on the ability to detect it at concentrations of parts per 10 billion, a level of precision that turned out to introduce even more uncertainties to the percolation flux calculation. Indeed, contradictory results from scientists working in three different laboratories made it impossible to confirm whether or not the isotope was present in the sampled rocks.[33]

This research, performed to reduce uncertainty, actually increased it, so that the range of possible percolation flux values left open the question of whether the site was “wet” or “dry,” and permitted positions both for and against the burial of nuclear waste at Yucca Mountain.

And even if scientists were to agree on some “correct” value of percolation flux for the repository site, it is only one variable among innumerable others that will influence site behavior over the thousands of years during which the site must safely operate. Percolation flux thus turns out not to be a number that tells us something true about nature. Rather, it is an abstraction that allows scientists to demonstrate their expertise and relevance and allows policy makers to claim that they are making decisions based upon the best available science, even if that science is contradictory and can justify any decision.

Such numbers proliferate in post-normal science and include, for example, many types of cost-benefit ratio[34]; rates and percentages of species extinction[35]; population-wide mortality reduction from dietary changes[36]; ecosystem services valuation[37] and, as we will later discuss, volumes of hydrocarbon reserves. Consider a number called “climate sensitivity.” As with percolation flux, the number itself — often (but not always) defined as the average atmospheric temperature increase that would occur with a doubling of atmospheric carbon dioxide — corresponds to nothing real, in the sense that the idea of a global average temperature is itself a numerical abstraction that collapses a great diversity of temperature conditions (across oceans, continents, and all four seasons) into a single value.

The number has no knowable relationship to “reality” because it is an abstraction and one applied to the future no less — the very opposite of what Lord Kelvin had in mind in extolling the importance of quantification. Yet it has come to represent a scientific proxy for the seriousness of climate change, with lower sensitivity meaning less serious or more manageable consequences and higher values signaling a greater potential for catastrophic effects. Narrowing the uncertainty range around climate sensitivity has thus been viewed by scientists as crucially important for informing climate change policies.
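For readers who want the arithmetic behind the term: climate sensitivity is conventionally embedded in a logarithmic forcing relation, warming = S × log2(C/C0). The sketch below uses that textbook relation (an assumption of this illustration; the essay itself cites no formula) with the customary 280 ppm preindustrial baseline:

```python
import math

def equilibrium_warming(sensitivity_c, co2_ppm, baseline_ppm=280.0):
    """Idealized textbook relation: warming = S * log2(C / C0).

    Not a climate model -- just the arithmetic that gives the
    'per doubling' definition of climate sensitivity its meaning.
    """
    return sensitivity_c * math.log2(co2_ppm / baseline_ppm)

# A doubled-CO2 world (560 ppm) under the low and high ends of the
# long-standing 1.5-4.5 degree C sensitivity range:
for s in (1.5, 4.5):
    print(f"S = {s} C per doubling -> {equilibrium_warming(s, 560):.1f} C of warming")
```

At exactly doubled CO2 the projected warming equals the sensitivity by definition, which is why narrowing the sensitivity range has seemed so consequential: the same emissions pathway maps onto very different projected futures.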

Weirdly, though, the numerical representation of climate sensitivity remained constant over four decades — usually as a temperature range of 1.5°C to 4.5°C — even as the amount of science pertaining to the problem expanded enormously. Starting out as a back-of-the-envelope representation of the range of results produced by different climate models from which no probabilistic inferences could be drawn,[38] climate sensitivity gradually came to represent a probability range.

Most recently, for example, an article by Brown and Caldeira reported an equilibrium climate sensitivity value of 3.7°C with a 50 percent confidence range of 3.0°C to 4.2°C, while a study by Cox et al. reported a mean value of 2.8°C with a 66 percent confidence range of 2.2°C to 3.4°C, and an assessment by Sherwood and a team of 24 other scientists reported a 66 percent probability range of 2.6°C to 3.9°C.[39]

The 2020 Sherwood team study characterized the initial 1.5°C to 4.5°C range, first published in a 1979 National Research Council report, as “prescient” and “based on very limited information.” In that case, one might reasonably wonder about the significance of four decades of enormous subsequent scientific effort (the Sherwood paper cites more than 500 relevant studies) leading to, perhaps, a slightly more precise characterization of the probability range of a number that is an abstraction in the first place.

The legacy of research on climate sensitivity is thus remarkably similar to that of percolation flux: decades of research and increasingly sophisticated science dedicated to better characterizing a numerical abstraction that does not actually describe observable phenomena, with little or no change in uncertainty.


Ultimately, most political and policy disputes involve the future — what it should look like and how to achieve that desired state. Scientific expertise is thus often called upon to extrapolate from current conditions to future ones. To do so, researchers construct numerical representations of the relevant phenomena. These representations are called models.

Pretty much everyone is familiar with how numerical models can be used to inform decision-making through everyday experience with weather forecasting. Weather forecasting models are able to produce accurate forecasts up to about a week in advance. In part, this accuracy can be achieved because, for the short periods involved, weather systems can be treated as relatively closed, and the results of predictions can be evaluated rigorously. Weather forecasts have gotten progressively more accurate over decades because forecasters make millions of forecasts each year and test them against reality, continually learning from successes and mistakes and precisely measuring predictive skill.
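The precise measurement of predictive skill mentioned above can be made concrete with the Brier score, a standard verification metric for probabilistic forecasts (the metric is my choice of example; the essay does not name one):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between forecast probabilities and observed
    outcomes (1 = event occurred, 0 = it did not). 0 is a perfect score;
    a forecaster who always says 50% scores 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A week of hypothetical "chance of rain" forecasts, verified against
# what actually happened:
probs  = [0.9, 0.7, 0.2, 0.1, 0.5, 0.8, 0.0]
rained = [1,   1,   0,   0,   1,   1,   0]
print(f"Brier score: {brier_score(probs, rained):.3f}")
```

Because forecasters accumulate millions of such verified forecasts, scores like this give them the tight feedback loop the essay contrasts with policy-facing models, whose predictions often cannot be checked at all.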

But that’s not all. A sophisticated and diverse enterprise has developed to communicate weather predictions and uncertainties for a variety of users. Organizations that communicate weather information understand both the strengths and weaknesses of the predictions, as well as the needs of those who depend on the information. Examples range from NOAA’s Digital Marine Weather Dissemination System for maritime users, to the online hourly forecasts at Weather.com.

Meanwhile, people and institutions have innumerable opportunities to apply what they learn from such sources directly to decisions and to see the outcomes of their decisions — in contexts ranging from planning a picnic to scheduling airline traffic. Because people and institutions are continually acting on the basis of weather forecasts, they develop tacit knowledge that allows them to interpret information, accommodate uncertainties, and develop trust based on a shared understanding of benefits. Picnickers, airline executives, and fishers alike learn how far in advance they should trust forecasts of severe weather in order to make decisions whose stakes range from the relatively trivial to the truly consequential.

Even though the modeling outputs often remain inexact and fundamentally uncertain (consider the typical “50 percent chance of rain this afternoon” forecast) and specific short-term forecasts often turn out to be in error, people who might question the accuracy or utility of a given weather forecast are not accused of “weather science denial.” This is because the overall value of weather information is well integrated into the institutions that use the predictions to achieve desired benefits.

The attributes of successful weather forecasting are not, and cannot be, built into the kinds of environmental and economic models used to determine causal relations and predict future conditions in complex natural, technological, and social systems. Such models construct parallel alternative worlds whose correspondence to the real world often cannot be tested. The models drift away from the direct connection between science and nature, while giving meaning to quantified abstractions like percolation flux and climate sensitivity, which exist to meet the needs of modeled worlds but not the real one.

For example, the Chesapeake Bay Program (CBP), established in 1983, launched an extensive ecosystem modelling program to support its goal of undoing the negative effects of excessive nutrient loading in the bay from industrial activity, agricultural runoff, and economic development near the shoreline. A distributed suite of linked models was developed so scientists could predict the ecosystem impact of potential management actions, including improving sewage treatment, controlling urban sprawl, and reducing fertilizer or manure application on agricultural lands.[40]

While the CBP model includes data acquired from direct measurements in nature, the model itself is an imaginary world that stands between science and nature. The difference between a modelled prediction of, say, decreased nitrogen load in the Chesapeake Bay and an observation of such a decrease is that the achievement of that outcome in the model occurred by tweaking model inputs, parameters, and algorithms, whereas in nature the outcome was created by human decisions and actions.

And indeed, based on models that simulated the results of policy interventions, the CBP claimed that it had achieved a steady improvement in the water quality of the main stem of the Bay. Yet interviews conducted in 1999 with program employees revealed that actual field testing did not demonstrate a trend of improved water quality.[41]

The computer model, designed to inform management of a real-world phenomenon, in fact became the object of management.

A similar phenomenon of displacing reality with a simulation occurs in modeling for climate policy when the impacts of nonexistent technologies — such as bioenergy with carbon capture and storage or solar radiation management interventions — are quantified and introduced into models as if they existed. Their introduction allows models to be tweaked to simulate reductions in future greenhouse warming, which are then supposed to become targets for policy making.[42]

As with the Chesapeake Bay model, these integrated assessment models depend on hybrid and constructed numbers to generate concrete predictions. To do so, they must assume future atmospheric composition, land cover, sea surface temperature, insolation, and albedo, not to mention the future of economic change, demographics, energy use, agriculture, and technological innovation. Many of the inputs themselves are derived from still other types of models, which are in turn based on still other sets of assumptions.
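The way such a projection chains assumption onto assumption can be made concrete with a toy calculation. Every number and relationship below is invented for illustration (the growth rates, the carbon-cycle shorthand, the sensitivity values); this is a sketch of the general structure, not any actual integrated assessment model:

```python
# Toy illustration: a "projected warming" number produced entirely by assumed
# inputs, each of which could itself come from yet another model.
# All values are invented for illustration, not drawn from any real model.

import math

def projected_warming(gdp_growth, energy_intensity_decline,
                      carbon_intensity_decline, climate_sensitivity,
                      years=80, co2_0=410.0, emissions_0=40.0):
    """Chain assumptions: economy -> energy -> emissions -> CO2 -> warming."""
    co2 = co2_0              # assumed starting concentration, ppm
    emissions = emissions_0  # assumed starting emissions, GtCO2/yr
    for _ in range(years):
        # Emissions grow with the assumed economy and shrink with assumed
        # efficiency and decarbonization gains.
        emissions *= (1 + gdp_growth) * (1 - energy_intensity_decline) \
                     * (1 - carbon_intensity_decline)
        co2 += emissions * 0.06  # assumed airborne ppm per GtCO2 emitted
    # Logarithmic concentration-to-warming relation, scaled by an assumed
    # climate sensitivity (warming per doubling of CO2).
    return climate_sensitivity * math.log(co2 / co2_0) / math.log(2)

# Two defensible-looking sets of assumptions, two very different "futures":
low = projected_warming(0.02, 0.015, 0.02, climate_sensitivity=2.0)
high = projected_warming(0.03, 0.005, 0.005, climate_sensitivity=4.5)
```

Nothing in the sketch is observed; every difference between `low` and `high` traces back to an assumed parameter, and each parameter is exactly the kind of input that, in real exercises, arrives from still other models built on still other assumptions.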

Based on these models, some scientists claim that solar radiation management techniques will contribute to global equity; others claim the opposite. In fact, the models upon which both sets of claims depend provide no verifiable knowledge about the actual world and ignore all of the scientific, engineering, economic, institutional, and social complexities that will determine real outcomes associated with whatever it is that human societies choose to do or not do.

The contrast between weather and climate forecasting could not be clearer. Weather forecasts are both reliable and useful because they predict outcomes in relatively closed systems for short periods with immediate feedback that can be rapidly incorporated to improve future forecasts, even as users (picnickers, ship captains) have innumerable opportunities to gain direct experience with the strengths and limits of the forecasts.

Using mathematical models to predict the future global climate over the course of a century of rapid sociotechnical change is quite another matter. While the effects of different development pathways on future atmospheric greenhouse gas concentrations can be modeled using scenarios, there is no basis beyond conjecture for assigning relative probabilities to these alternative futures. There are also no mechanisms for improving conjectured probabilities because the time frames are too long to provide necessary feedback for learning. What’s being forecast exists only in an artificial world, constituted by numbers that correspond not to direct observations and measurements of phenomena in nature, but to an assumption-laden numerical representation of that artificial world.

The problem is not by any means limited to climate models. Anyone who has followed how differing interpretations of epidemiological models have been used to justify radically different policy choices for responding to the COVID-19 pandemic will recognize the challenges of extrapolating from assumption-laden models to real-world outcomes. Similar difficulties have been documented in policy problems related to shoreline engineering, mine waste cleanup, water and fisheries management, toxic chemical policy, nuclear waste storage (as discussed), land use decisions, and many others.[43]

And yet, because such models are built and used by scientists for research that is still called science and produce crisp numbers about the artificial worlds they simulate, they are often subject to Whitehead’s fallacy of misplaced concreteness and treated as if they represent real futures.[44] Their results are used by scientists and other political actors to make claims about how the world works and, therefore, what should be done to intervene in the world.[45]

In this sense, the models serve a role similar to goat entrails and other prescientific tools of prophecy. They separate the prophecy itself, laden with inferences and values, from the prophet, who merely reports upon what is being foretold. The models become political tools, not scientific ones.


When decisions are urgent, uncertainties and stakes are high, and values are in dispute — the post-normal conditions of Funtowicz and Ravetz — it turns out that science’s claim to speak for nature, using the unique precision of numbers and the future-predicting promise of models, is an infinitely contestable basis for expertise and its authority in the political realm.

And yet, science undoubtedly does offer an incomparably powerful foundation not only for understanding our world but also for reliably acting in it.

That foundation depends upon three interrelated conditions that allow us to authoritatively establish causal relationships that can guide understanding and effective action — conditions very different from those we have been describing, and with very different consequences in the world.

First is control: the creation or exploitation of closed systems, so that important phenomena and variables involved in the system can be isolated and studied. Second is fast learning: the availability of tight feedback loops, which allow mistakes to be identified and learning to occur because causal inferences can be repeatedly tested through observations and experiments in the controlled or well-specified conditions of a more or less closed system. Third is clear goals: the shared recognition or stipulation of sharply defined endpoints toward which scientific progress can be both defined and assessed, meaning that feedback and learning can occur relative to progress toward agreed-upon outcomes that confirm the validity of what is being learned.

Technology plays a dual role in the fulfillment of these three conditions. Inventions that observe or measure matter, such as scales, telescopes, microscopes, and mass spectrometers, translate inputs from nature into interpretable signals (measurements, images, waveforms, and so on) that allow scientists to observe and often quantify components and phenomena of nature that would otherwise be inaccessible. At the same time, the development and use of practical technologies such as steam engines, electric generators, airfoils (wings), cathode ray tubes, and semiconductors continually raise questions for scientists to explore about the natural phenomena that the technologies embody (e.g., the transformation of heated water into pressurized steam; the flow of fluids or electrons around or through various media) and, in turn, derive generalizable relationships.

Under these three technologically mediated conditions, the practical consequences of scientific advances have helped to create the technological infrastructure of modernity. Technology, it turns out, is what makes science real for us. The light goes on, the jet flies, the child becomes immune. From such outcomes, people reliably infer that the scientific account of phenomena must be true and that the causal inferences derived from them must be correct. Otherwise, the technologies would not work.

Thus, our sense of science’s reliability is significantly created by our experience with technology. Moreover, technological performance shares this essential characteristic with practitioner expertise: nonexperts can easily see whether this process of translation is actually taking place and doing what’s expected. Indeed, expert practice typically involves the use of technology (a riverboat, a plumber’s torch, a surgical laser) to achieve its goal.

The problem for efforts to apply scientific expertise to complex social problems is that the three conditions mostly do not pertain. The systems being studied — the climate-energy system, fluids in the earth’s crust, population-wide human health — are open, complex, and unpredictable. Controlled experiments are often impossible, so feedback that can allow learning is typically slow, ambiguous, and debatable. Perhaps most importantly, endpoints often cannot be sharply defined in a way that allows progress to be clearly assessed; they are often related to identifying and reducing risk, and risk is an inherently subjective concept, always involving values and worldviews.

In the case of weather forecasts, vaccines, and surgical procedures, experts can assure us of how a given action will lead to a particular consequence, and, crucially, we can see for ourselves if they are right. In the case of science advisory expertise, the outcomes of any particular decision or policy are likely to be unknown and unknowable. No one can be held to account. Assumptions about the future can be modified and modeled to suit competing and conflicting interests, values, and beliefs about how the future ought to unfold. Science advisory experts can thus make claims with political content while appearing only to be speaking the language of science.

The exercise of expert authority under such circumstances might be termed “inappropriate expertise.” Its origins are essentially epistemological: climate models, or the statistical characterization of a particular chemical’s cancer-causing potential, manifest a different type of knowledge than weather forecasts, jet aircraft, and vaccines. Claims to expertise based upon the former achieve legitimacy by borrowing the well-earned authority of the latter. In stark contrast, the legitimacy of expert-practitioners derives directly from proven performance in the real world.


When we apply the authority of normal science to post-normal conditions, a mélange of science, expertise, and politics is the usual result. Neither more research nor more impassioned pleas to listen to and trust an undifferentiated “science” will improve the situation because it is precisely the proliferation of post-normal science and its confusion with normal science that are the cause of the problem.

Yet, in the face of controversies regarding risk, technology, and the environment, the usual remedy is to turn things over to expert organizations like the National Academy of Sciences or the UK Royal Society. But doing so typically obscures the normative questions that lie at the heart of the conflicts in question. Why would anyone think that another 1,000 studies of climate sensitivity would change the mind of a conservative who opposes global governance regimes? Or that another decade of research on percolation flux might convince an opponent of nuclear power that nuclear waste can be safely stored for 10,000 years? Disagreements persist. More science is poured into the mix. Conflicts and controversies persist indefinitely.

There is an alternative. Decision-makers tasked with responding to controversial problems of risk and society would be better served to pursue solutions through institutions that can tease out the legitimate conflicts over values and rationality that are implicated in the problems. They should focus on designing institutional approaches that make this cognitive pluralism explicit, and they should support activities to identify political and policy options that have a chance of attracting a diverse constituency of supporters.

Three examples from different domains at the intersection of science, technology, and policy can help illuminate this alternative way of proceeding. Consider first efforts to mitigate the public health consequences of toxic chemical use and exposure. Such efforts, in particular via the federal Toxic Substances Control Act (TSCA) of 1976, have historically attempted to insulate the scientific assessment of human health risks of exposures to chemicals from the policy decisions that would regulate their use. But from TSCA’s inception, it has been clear that there is no obvious boundary that separates the science of risk from the politics of risk. The result — consistent with our discussion so far — has been endless legal action aimed at proving or disproving that scientific knowledge generated by EPA in support of TSCA was sufficient to allow regulation.[46]

Starting in the late 1980s, the state of Massachusetts adopted an alternative approach. Rather than attempting to use scientific risk assessment to ban harmful chemicals that are valued for their functionality and economic benefit, Massachusetts’ Toxics Use Reduction Act (TURA) of 1989 focused on finding replacements that perform the same functions. The aim was to satisfy the concerns of both those aiming to eliminate chemicals in the name of environmental health and those using them to produce economic and societal value.[47]

TURA turned the standard adversarial process into a collaborative one.[48] State researchers tested substitutes for effectiveness and developed cost–benefit estimates; they worked with firms to understand barriers to adoption and cooperated with state agencies and professional organizations to demonstrate the alternatives. Rather than fighting endless scientific and regulatory battles, firms that use toxic chemicals became constituents for safer chemicals.

Between 1990 and 2005, Massachusetts firms subject to TURA requirements reduced toxic chemical use by 40 percent and their on-site releases by 91 percent.[49] Massachusetts succeeded not by trying to reduce scientific uncertainty about the health consequences of toxic chemicals in an effort to compel regulatory compliance, but by searching for solutions that satisfied the beliefs and interests of competing rationalities about risk.

A second example draws from ongoing efforts to assess hydrocarbon reserves. In the 1970s and 1980s, coincident with national and global concerns about energy shortages, the US Geological Survey (USGS) began conducting regular assessments of the size of US hydrocarbon (oil and gas) reserves.[50] As with percolation flux and climate sensitivity, quantified estimates of the volume of natural gas or oil stored in a particular area of the Earth’s crust have no demonstrable correspondence to anything real in the world. The number cannot be directly measured, and it depends on other variables that change with time, such as the state of extraction technologies, the state of geological knowledge, the cost of various energy sources, and so on.
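The point that a reserves figure is not a fixed physical quantity can be sketched in a few lines. The deposits, prices, and recovery factors below are all invented; the sketch shows only that the same rocks yield different “reserves” under different assumed economics and technology:

```python
# Toy illustration of why "recoverable reserves" is not a fixed physical
# quantity: a deposit counts as recoverable only relative to an assumed
# price and an assumed state of extraction technology. All figures invented.

def recoverable_reserves(deposits, price, recovery_factor):
    """Sum the volumes whose effective extraction cost, given the assumed
    recovery technology, falls at or below the assumed market price."""
    return sum(volume * recovery_factor
               for volume, cost_per_unit in deposits
               if cost_per_unit / recovery_factor <= price)

# (volume in trillion cubic feet, extraction cost per unit) -- invented data
deposits = [(120, 2.0), (200, 4.5), (350, 8.0), (400, 15.0)]

# Same deposits, different assumptions, very different "reserves":
conservative = recoverable_reserves(deposits, price=5.0, recovery_factor=0.6)
optimistic = recoverable_reserves(deposits, price=7.0, recovery_factor=0.9)
```

Under the first set of assumptions only the cheapest deposit qualifies; under the second, a modestly higher price and better recovery technology quadruple the total. No new gas was discovered between the two calculations, which is the sense in which the number measures the assumptions as much as the resource.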

USGS assessments that reserves were declining over time were largely noncontroversial until 1988, when the natural gas industry began lobbying the government to deregulate natural gas markets. When the USGS assessment released that year predicted a continued sharp decline in natural gas reserves, the gas industry vociferously disagreed.[51]

According to the American Gas Association (AGA), “The U.S. Geological Survey’s erroneous and unduly pessimistic preliminary estimates of the amount of natural gas that remains to be discovered in the United States . . . is highly inaccurate and clearly incomplete . . . the direct result of questionable methodology and assumptions.”[52]

In the standard ritual, dueling numbers were invoked, with the USGS report estimating recoverable natural gas reserves at 254 trillion cubic feet and the AGA at 400 trillion.[53]

The customary prescription for resolving such disputes, of course, would be to do more research to better characterize the numbers. But in this case, the USGS adopted a different approach. It expanded the institutional diversity of the scientists involved in the resource assessment exercises, adding industry and academic experts to a procedure that had previously been conducted by government scientists alone.

The collective judgment of this more institutionally diverse group resulted in significantly changed assumptions about, definitions of, and criteria for estimating hydrocarbon reserves. By 1995, the official government estimate for US natural gas reserves went up more than fourfold, to 1,075 trillion cubic feet.[54]

Agreement was created not by insulating the assessment process from stakeholders with various interests in the outcome, but by bringing them into the process and pursuing a more pluralistic approach to science. Importantly, the new assessment numbers could be said to be more scientifically sound only insofar as they were no longer contested. Their accuracy was still unknowable. But agreement on the numbers helped to create the institutional and technological contexts in which recovering significantly more oil and gas in the United States became economically feasible.

Finally, consider the role of complex macroeconomic models in national fiscal policy decisions. Economists differ on their value, with some arguing that they are essential to the formulation of monetary policy and others arguing that they are useless. Among the latter, the Nobel Prize-winning economist Joseph Stiglitz asserts: “The standard macroeconomic models have failed, by all the most important tests of scientific theory.”[55]

In the end, it doesn’t appear to matter much. In the United States, the models are indeed used by the Federal Reserve to support policy making. Yet the results appear not to be a very important part of the system’s decision processes, which depend instead on informed judgement about the state of the economy and debate among its governors.[56]

Whatever role the models might play in the Federal Reserve decision process, it is entirely subservient to a deliberative process that amalgamates different sources of information and insight into narratives that help make sense of complex and uncertain phenomena.

Indeed, the result of the Federal Reserve’s deliberative process is typically a decision either to do nothing or to tweak the rate at which the government loans money to banks up or down by a quarter of a percent. The incremental nature of the decision allows for feedback and learning, assessed against an endpoint mandated by Congress: maximum employment and price stability. The role of the models in this process seems mostly to be totemic. Managing the national economy is something that experts do, and using complicated numerical models is a symbol of that expertise, inspiring confidence like the stethoscope around a doctor’s neck.
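The structure of that incremental process (small move, observe, correct, repeat) can be caricatured in code. The response function and every number here are invented; this is a sketch of feedback-driven adjustment under a mandated target, not a description of the Federal Reserve’s actual procedure or of any macroeconomic model:

```python
# Toy sketch of incremental decision-making with feedback: nudge a single
# instrument by a quarter point, observe the response, adjust again.
# The "economy" below is a one-line invented caricature.

def respond(rate):
    """Invented stand-in for the economy: inflation falls as rates rise."""
    return 6.0 - 1.6 * rate  # observed inflation, percent

def steer(rate, target=2.0, steps=12, increment=0.25):
    """Move toward the target in quarter-point increments, one per meeting."""
    for _ in range(steps):
        inflation = respond(rate)           # feedback arrives quickly...
        if inflation > target + 0.1:
            rate += increment               # ...so each small move can be
        elif inflation < target - 0.1:      # checked and corrected at the
            rate -= increment               # next meeting
    return rate

final_rate = steer(rate=1.0)
```

The virtue of the loop is not that any single move is optimal but that no move is large enough to be catastrophic before feedback can correct it, which is exactly the property that century-scale model-driven forecasting lacks.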

Each of these examples offers a corrective to the ways in which science advice typically worsens sociotechnical controversies.

The Federal Reserve crunches economic data through the most advanced models to test the implications of various policies for future economic performance. And then its members, representing different regions and perspectives, gather to argue about whether to take some very limited actions to intervene in a complex system — the national economy — whose behavior so often evades even short-term prediction.

When the US Geological Survey found itself in the middle of a firestorm of controversy around a synthetic number representing nothing observable in the natural world, it did not embark upon a decades-long research program to more precisely characterize the number. It instead invited scientists from other institutions, encompassing other values, interests, and worldviews, into the assessment process. This more cognitively diverse group of scientists agreed to new assumptions and definitions for assessing reserves and arrived at new numbers that would have seemed absurd to the earlier, more homogeneous group of experts, but that met the information needs of a greater diversity of users and interests.[57]

Toxic chemical regulation in the United States has foundered on the impossibility of providing evidence of harm sufficiently convincing to withstand legal opposition.[58] More research and more experts have helped to enrich lawyers and expert witnesses, but they failed to restrict the use of chemicals that are plausibly but not incontrovertibly dangerous. The state of Massachusetts pursued a different approach, working within the uncertainties, to find complementarities between the interests and risk perspectives of environmentalists and industry in the search for safer alternatives to chemicals that were plausibly harmful.[59]

Truth, it turns out, often comes with big error bars, and that allows space for managing cognitive pluralism to build institutional trust. The Federal Reserve maintains trust through transparency and an incremental, iterative approach to decision-making. The USGS restored trust by expanding the institutional and cognitive diversity of experts involved in its assessment process. Massachusetts created trust by taking seriously the competing interests and rationalities of constituencies traditionally at each other’s throats.

Institutions are what people use to manage their understanding of the world and determine what information can be trusted and who is both honest and reliable. Appropriate expertise emerges from institutions that ground their legitimacy not on claims of expert privilege and the authority of an undifferentiated “science,” but on institutional arrangements for managing the competing values, beliefs, worldviews, and facts arrayed around such incredibly complex problems as climate change or toxic chemical regulation or nuclear waste storage. Appropriate expertise is vested and manifested not in credentialed individuals, but in institutions that earn and maintain the trust of the polity. And the institutional strategies available for managing risk-related controversies of modern technological societies may be as diverse as the controversies themselves.


We do not view it as coincidental that concerns among scientists, advocates, and others about post-truth, science denial, and so on have arisen amidst the expenditure of tens of billions of dollars over several decades by governments and philanthropic foundations to produce research on risk-related political and policy challenges. These resources, which in turn incentivized the creation of many thousands of experts through formal academic training in relevant fields, have created a powerful political constituency for a particular view of how society should understand and manage its technological, environmental, health, and other risks: with more science, conveyed to policy makers by science advocacy experts, to compel rational action.

Yet the experience of unrelenting and expanding political controversies around the risks of modernity is precisely the opposite outcome of what has been promised. Entangling the sciences in political disputes in which differing views of nature, society, and government are implicated has not resolved or narrowed those disputes, but has cast doubt upon the trustworthiness and reliability of the sciences and experts who presume to advise on these matters. People still listen to their dentists and auto mechanics. But many do not believe the scientists who tell them that nuclear power is safe, or that vaccines work, or that climate change has been occurring since the planet was formed.

We don’t think that’s a perverse or provocative view, but an empirically grounded perspective on why things haven’t played out as promised. When risks and dilemmas of modern technological society become subject to political and policy action, doing more research to narrow uncertainties and turning to experts to characterize what’s going on as the foundation for taking action might seem like the only rational way to go. But under post-normal conditions, in which decisions are urgent, uncertainties and stakes are high, and values are in dispute, science and expertise are, at best, only directly relevant to one of those four variables — uncertainty — and even there, the capacity for making a difference is often, as we’ve shown, modest at best.

The conditions for failure are thus established. Advocates and experts urgently proclaim that the science related to this or that controversy is sufficiently settled to allow a particular political or policy prescription — the one favored by certain advocates and experts — to be implemented. Left out of the formula are the high stakes and disputed values. Who loses out because of the prescribed actions? Whose views of how the world works or should work are neglected and offended?

Successfully navigating the divisive politics that arise at the intersections of technology, environment, health, and economy depends not on more and better science, nor louder exhortations to trust science, nor stronger condemnations of “science denial.” Instead, the focus must be on the design of institutional arrangements that bring the strengths and limits of our always uncertain knowledge of the world’s complexities into better alignment with the cognitive and political pluralism that is the foundation for democratic governance — and the life’s blood of any democratic society.

Acknowledgments: Heather Katz assured me that this is what Steve would have wanted me to do, and Jerry Ravetz assured me that this is what Steve would have wanted us to say. Ted Nordhaus helped me figure out how best to say it.

Mark Twain, Life on the Mississippi (1883; rpt. New York: Harper and Brothers, 1917), 77.
Matthew d’Ancona, Post-Truth: The New War on Truth and How to Fight Back (London: Ebury Publishing, 2017).
For the 100-year range, see, for example, Richard Stutt et al., “A Modelling Framework to Assess the Likely Effectiveness of Facemasks in Combination with ‘Lock-Down’ in Managing the COVID-19 Pandemic,” Proceedings of the Royal Society A 476, no. 2238 (2020): 20200376; and W. H. Kellogg and G. MacMillan, “An Experimental Study of the Efficacy of Gauze Face Masks,” American Journal of Public Health 10, no. 1 (1920): 34–42.
Yaron Ezrahi, The Descent of Icarus: Science and the Transformation of Contemporary Democracy (Cambridge, MA: Harvard University Press, 1990).
Alvin M. Weinberg, “Science and Trans-Science,” Minerva 10, no. 2 (1972): 209–222.
Silvio O. Funtowicz and Jerome R. Ravetz, “Science for the Post-Normal Age,” Futures 25, no. 7 (1993): 739.
Thomas S. Kuhn, The Structure of Scientific Revolutions (Chicago: University of Chicago Press, 1962).
See, for example, Mary Douglas and Aaron Wildavsky, Risk and Culture: An Essay on the Selection of Technological and Environmental Dangers (Berkeley: University of California Press, 1982); and Ulrich Beck, Risk Society: Towards a New Modernity, trans. Mark Ritter (Los Angeles: SAGE Publications, 1992).
Mary Douglas, Implicit Meanings: Selected Essays in Anthropology (1975; rpt. London: Routledge, 2003), 209.
Carl L. Becker, The Heavenly City of the Eighteenth-Century Philosophers (New Haven, CT: Yale University Press, 1932).
Yuval Levin, The Great Debate: Edmund Burke, Thomas Paine, and the Birth of Right and Left (New York: Basic Books, 2013).
John Stuart Mill, “Nature,” in Nature, the Utility of Religion, and Theism (1874; rpt. London: Watts & Co., 1904), 10.
Herbert A. Simon, Reason in Human Affairs (Stanford, CA: Stanford University Press, 1983), 97.
C. S. Holling, “The Resilience of Terrestrial Ecosystems: Local Surprise and Global Change,” in Sustainable Development of the Biosphere, ed. W. C. Clark and R. E. Munn (Cambridge, UK: Cambridge University Press, 1986), 292–317.
C. S. Holling, Lance Gunderson, and Donald Ludwig, “In Quest of a Theory of Adaptive Change,” in Panarchy: Understanding Transformations in Human and Natural Systems, ed. C. S. Holling and L. Gunderson (Washington, DC: Island Press, 2002), 10.
Steve Rayner, “Democracy in the Age of Assessment: Reflections on the Roles of Expertise and Democracy in Public-Sector Decision Making,” Science and Public Policy 30, no. 3 (2003): 163–70.
Michiel Schwarz and Michael Thompson, Divided We Stand: Redefining Politics, Technology and Social Choice (Philadelphia, PA: University of Pennsylvania Press, 1991), 3–5.
Michael Thompson and Steve Rayner, “Cultural Discourses,” in Human Choice and Climate Change, ed. Steve Rayner and Elizabeth Malone (Columbus, OH: Battelle Press, 1998), 1:265–343.
Johan Rockström et al., “A Safe Operating Space for Humanity,” Nature 461 (September 24, 2009): 472.
Ted Nordhaus, Michael Shellenberger, and Linus Blomqvist, The Planetary Boundaries Hypothesis: A Review of the Evidence (Oakland, CA: Breakthrough Institute, 2012), 37.
Barry W. Brook et al., “Does the Terrestrial Biosphere Have Planetary Tipping Points?,” Trends in Ecology & Evolution 28, no. 7 (2013): 401.
Sebastian Strunz, Melissa Marselle, and Matthias Schröter, “Leaving the ‘Sustainability or Collapse’ Narrative Behind,” Sustainability Science 14, no. 3 (2019): 1717–28.
Nordhaus, Shellenberger, and Blomqvist, Planetary Boundaries Hypothesis, 37.
Holling, “Resilience of Terrestrial Ecosystems.”
Susan Ratcliffe, ed., Oxford Essential Quotations (Oxford, UK: Oxford University Press, 2016), eISBN 9780191826719, https://www.oxfordreference.com/view/10.1093/acref/9780191826719.001.0001/q-oro-ed4-00006236.
Alfred North Whitehead, Science and the Modern World (Cambridge, UK: Cambridge University Press, 1929), 64.
As explored in Theodore M. Porter, Trust in Numbers: The Pursuit of Objectivity in Science and Public Life (Princeton, NJ: Princeton University Press, 1996).
Mark Sagoff, “The Quantification and Valuation of Ecosystem Services,” Ecological Economics 70, no. 3 (2011): 497–502.
See, for example, Naomi Oreskes, Kristin Shrader-Frechette, and Kenneth Belitz, “Verification, Validation, and Confirmation of Numerical Models in the Earth Sciences,” Science 263, no. 5147 (1994): 641–46.
Daniel Metlay, “From Tin Roof to Torn Wet Blanket: Predicting and Observing Groundwater Movement at a Proposed Nuclear Waste Site,” in Prediction: Science, Decision Making, and the Future of Nature, ed. Daniel Sarewitz, Roger A. Pielke Jr., and Radford Byerly Jr. (Covelo, CA: Island Press, 2000), 199–228. See also Stuart A. Stothoff and Gary R. Walter, “Average Infiltration at Yucca Mountain over the Next Million Years,” Water Resources Research 49, no. 11 (2013): 7528–45, https://doi.org/10.1002/2013WR014122.
Metlay, “From Tin Roof.”
Metlay, “From Tin Roof.”
James Cizdziel and Amy J. Smiecinski, Bomb-Pulse Chlorine-36 at the Proposed Yucca Mountain Repository Horizon: An Investigation of Previous Conflicting Results and Collection of New Data (2006; Nevada System of Higher Education), https://digitalscholarship.unlv.edu/yucca_mtn_pubs/67.
Porter, Trust in Numbers.
Jeroen P. van der Sluijs, “Numbers Running Wild,” in The Rightful Place of Science: Science on the Verge (Tempe, AZ: Consortium for Science, Policy & Outcomes, Arizona State University, 2016), 151–87.
Daniel Sarewitz, “Animals and Beggars,” in Science, Philosophy and Sustainability: The End of the Cartesian Dream (2015), 135.
Sagoff, “Quantification and Valuation of Ecosystem Services.”
Jeroen van der Sluijs et al., “Anchoring Devices in Science for Policy: The Case of the Consensus around Climate Sensitivity,” Social Studies of Science 28, no. 2 (1998): 291–323.
Patrick T. Brown and Ken Caldeira, “Greater Future Global Warming Inferred from Earth’s Recent Energy Budget,” Nature 552, no. 7683 (2017): 45–50; Peter M. Cox, Chris Huntingford, and Mark S. Williamson, “Emergent Constraint on Equilibrium Climate Sensitivity from Global Temperature Variability,” Nature 553 (January 18, 2018): 319–22; and S. C. Sherwood et al., “An Assessment of Earth’s Climate Sensitivity Using Multiple Lines of Evidence,” Reviews of Geophysics 58, no. 4 (2020).
Steve Rayner, “Uncomfortable Knowledge in Science and Environmental Policy Discourses,” Economy and Society 41, no. 1 (2012): 120–21.
Rayner, “Uncomfortable Knowledge,” 121.
See, for example, Duncan McLaren and Nils Markusson, “The Co-Evolution of Technological Promises, Modelling, Policies and Climate Change Targets,” Nature Climate Change 10 (May 2020): 392–97; Roger Pielke Jr., “Opening Up the Climate Policy Envelope,” Issues in Science and Technology 34, no. 4 (Summer 2018); and Jane A. Flegal, “The Evidentiary Politics of the Geoengineering Imaginary” (PhD diss., University of California, Berkeley, 2018).
See, for example, Andrea Saltelli et al., “Five Ways to Ensure that Models Serve Society: A Manifesto,” Nature 582 (June 2020): 482–84.
See, for example, Myanna Lahsen, “Seductive Simulations? Uncertainty Distribution Around Climate Models,” Social Studies of Science 35, no. 6 (2005): 895–922.
For examples, see Juan B. Moreno-Cruz, Katherine L. Ricke, and David W. Keith, “A Simple Model to Account for Regional Inequalities in the Effectiveness of Solar Radiation Management,” Climatic Change 110, nos. 3–4 (2012): 649–68; and Colin J. Carlson and Christopher H. Trisos, “Climate Engineering Needs a Clean Bill of Health,” Nature Climate Change 8, no. 10 (2018): 843–45. For a critique, see Jane A. Flegal and Aarti Gupta, “Evoking Equity as a Rationale for Solar Geoengineering Research? Scrutinizing Emerging Expert Visions of Equity,” International Environmental Agreements: Politics, Law and Economics 18, no. 1 (2018): 45–61.
See, for example, David Goldston, “Not ’Til the Fat Lady Sings: TSCA’s Next Act,” Issues in Science and Technology 33, no. 1 (Fall 2016): 73–76.
“MassDEP Toxics Use Reduction Program,” Massachusetts Department of Environmental Protection, accessed February 16, 2020, https://www.mass.gov/guides/massdep-toxics-use-reduction-program.
See, for example, Toxics Use Reduction Institute, Five Chemicals Alternatives Assessment Study: Executive Summary (Lowell: University of Massachusetts, Lowell, June 2006), https://www.turi.org/content/d…; and Pamela Eliason and Gregory Morose, “Safer Alternatives Assessment: The Massachusetts Process as a Model for State Governments,” Journal of Cleaner Production 19, no. 5 (March 2011): 517–26.
Rachel L. Massey, “Program Assessment at the 20 Year Mark: Experiences of Massachusetts Companies and Communities with the Toxics Use Reduction Act (TURA) Program,” Journal of Cleaner Production 19, no. 5 (2011): 517–26.
Donald L. Gautier, “Oil and Gas Resource Appraisal: Diminishing Reserves, Increasing Supplies,” in Prediction: Science, Decision Making, and the Future of Nature, ed. Daniel Sarewitz, Roger A. Pielke Jr., and Radford Byerly Jr. (Covelo, CA: Island Press, 2000), 231–49.
Gautier, “Oil and Gas Resource,” 244.
Quoted in Gautier, “Oil and Gas Resource,” 244.
Cass Peterson, “U.S. Estimates of Undiscovered Oil and Gas Are Cut 40 Percent,” Washington Post, March 10, 1988, A3.
Gautier, “Oil and Gas Resource,” 246.
Joseph E. Stiglitz, “Rethinking Macroeconomics: What Failed, and How to Repair It,” Journal of the European Economic Association 9, no. 4 (2011): 591.
Jerome H. Powell, “America’s Central Bank: The History and Structure of the Federal Reserve” (speech, West Virginia University College of Business and Economics Distinguished Speaker Series, Morgantown, WV, March 28, 2017), https://www.federalreserve.gov…; and Stanley Fischer, “Committee Decisions and Monetary Policy Rules” (speech, Hoover Institution Monetary Policy Conference, Stanford University, Stanford, CA, May 5, 2017).
Gautier, “Oil and Gas Resource.”
David Kriebel and Daniel Sarewitz, “Democratic and Expert Authority in Public and Environmental Health Policy,” in Policy Legitimacy, Science, and Political Authority, ed. Michael Heazle and John Kane, Earthscan Science in Society Series (London: Routledge, 2015), 123–40.
Massey, “Program Assessment.”


Posted in Center for Environmental Genetics | Comments Off on Policy Making in the Post-Truth World