New class of precision medicine strips cancer of its DNA defences

These GEITP pages do not usually focus on cancer drug therapy, but this article presents a new approach to the problem of attacking cancer cells. A new precision medicine — targeting the cancer cell’s ability to repair its own DNA — has shown promising results in the first clinical trial of this drug class. The new study, designed to test the drug’s safety, found that half the patients given the new drug — either alone or with platinum chemotherapy — saw their cancer stop growing, and two patients saw their tumors shrink or disappear completely. DNA damage (mutations, genomic instability) in a cell is the root cause of the initiation and growth of cancers; however, it is also a fundamental weakness in tumors, because cancer cells can be killed by further damaging their DNA — or by attacking their ability to repair it.

The new phase I trial tested the first in a new family of drugs that block a key DNA repair protein called ATR (ATR serine/threonine kinase). Phase I trials are designed to assess the safety of new treatments, and it is unusual to see a clinical response at this early stage. A team at the Institute of Cancer Research (ICR), London, and The Royal Marsden NHS Foundation Trust led a trial — conducted in hospitals around the world — of 40 patients with very advanced tumors, to test the possible benefit of an ATR inhibitor called berzosertib (M6620), either on its own or in combination with chemotherapy. The researchers established the doses at which the drug was safe for use in further clinical trials, and were pleased to find that berzosertib alone caused only mild side-effects. Surprisingly for a phase I trial, the authors [see attached article] found that berzosertib stopped tumor growth in 20 of 38 patients whose treatment response could be measured.

The drug’s benefit in blocking DNA repair was even more striking in patients also given a DNA-damaging drug such as carboplatin chemotherapy. Among these patients, 15 of 21 (71%) saw their disease stabilize — indicating that chemotherapy had boosted sensitivity to berzosertib. One male with advanced bowel cancer (whose tumor contained defects in key DNA repair genes, including CHEK1 and ARID1A) responded remarkably well to berzosertib on its own, seeing his tumors disappear and remaining cancer-free for more than 2 years. One woman with advanced ovarian cancer — whose disease had returned after treatment with a drug that blocks PARP (poly(ADP-ribose) polymerase-1, another key DNA repair protein) — received the combination treatment and saw her cancer masses shrink; this patient’s response suggests that berzosertib could be explored as a strategy to overcome resistance to the PARP-inhibitor family of targeted treatments.

The drug is now moving forward into phase II clinical trials, and the hope is that it can be developed into a new targeted treatment for patients — therapy that might also help overcome resistance to other precision medicines, such as the PARP inhibitors, that target DNA repair. The trial was funded by Merck KGaA, Darmstadt, Germany, manufacturer of the drug. The Institute of Cancer Research (ICR), a charity and research institute, will be focusing on how to overcome cancer evolution and drug resistance in its new Centre for Cancer Drug Discovery, for which it still needs to raise a final £2 million to complete these clinical trials. The authors concluded that, to their knowledge, this report is the first of its kind for an ATR inhibitor — as monotherapy and combined with carboplatin; M6620 was well tolerated, with impressive “target engagement” and preliminary anti-tumor responses observed. 😊

DwN


J Clin Oncol Jun 2020; ePub, DOI https://doi.org/10.1200/JCO.19.02404

Posted in Center for Environmental Genetics | Comments Off on New class of precision medicine strips cancer of its DNA defences

Genetic evidence of widespread variation in ethanol metabolism among mammals

This is an excellent topic for gene-environment interactions. The environmental signal is “alcohol” (ethanol; EtOH), and the response is enjoyment of drinking vs aversion to drinking. Response to EtOH shows great variability among individuals in the same species, as well as differences between species; these differences in response to this signal reflect alterations of genes in the genome. Figure 1 [see attached article] shows protein variations in alcohol dehydrogenase-7 (ADH7), combined with evolutionary relationships and diets of the ~90 species included in this publication’s analysis. It has been hypothesized that our enjoyment of drinking EtOH can be traced evolutionarily to fruit-eating ancestors that were exposed to naturally occurring alcohol in ripening fruits. In fact, chimpanzees and other primates are exceptionally sensitive to odors of aliphatic alcohols, including EtOH, and some prefer drinking EtOH-containing solutions over water.

Humans, chimpanzees, bonobos and gorillas share the same mutation in the ADH7 gene (Ala294Val in the protein), which confers the ability to metabolize EtOH ~40 times more rapidly. The evolutionary time of appearance of this mutation (~10 million years ago) coincides with increased terrestriality (i.e. living on the ground, rather than in trees or water) in our lineage — which likely led to more frequent exposure to fermenting fruit on the forest floor. Other animals that seek out EtOH for consumption include aye-ayes (a long-fingered lemur living in Madagascar), tree shrews, elk and other ungulates (hoofed mammals), many bird species (e.g. robins and blue jays will soon be robbing us of most of our blueberries), and fruit-eating bats — and there are many anecdotal stories of intoxication in all these species…!!

Authors [see attached article] conducted a comparative genetic analysis of the ADH7 gene across ~90 mammalian species, to provide insight into their evolutionary history with EtOH. Authors demonstrate genetic variation in ADH7, including several independent events that lead to a pseudogene [i.e. one means of inactivating the enzyme is to render the gene unable to produce a functional protein; another is to produce a protein that is (still) unable to metabolize EtOH] — indicating that the ability to metabolize EtOH varies among species. Table 1 [of attached article] provides a list of the genetic alterations that lead to an inability to break down EtOH in various mammalian species.

Authors suggest that ADH enzymes are evolutionarily plastic, revealing the dietary adaptation of each species. It is a fallacy to assume that other animals share our metabolic adaptations; each species’ unique physiology must be taken into consideration. A topic not covered in this overview is that different inbred strains of the same species (e.g. mice, rats) — as well as different ethnic groups among humans — exhibit EtOH preference vs EtOH aversion. 😊 ☹

DwN

Biol Lett Apr 2020; 16: 20200070

Posted in Center for Environmental Genetics | Comments Off on Genetic evidence of widespread variation in ethanol metabolism among mammals

BOOK: Global Warming Skepticism for Busy People [Kindle Edition]

For anyone interested in being enlightened about the facts and truth — concerning global warming and climate science — I highly recommend this book. Professor Roy Spencer (author of this book, available on Amazon.com for purchase, as well as on Kindle) is certainly among the ten most esteemed and knowledgeable scientists in this field [and, because he is among the most eminent in his field, he is among the most viciously attacked by global-warming alarmists for political reasons]. Dr. Spencer maintains a website http://www.drroyspencer.com/latest-global-temperatures/ illustrating the latest global average tropospheric temperatures since 1979 — when NOAA satellites first began to carry out measurements of the natural microwave thermal emissions from oxygen in the atmosphere.

The intensity of the signals that these microwave radiometers measure — at different microwave frequencies — is directly proportional to the temperature of different deep layers of the atmosphere. Every month, John Christy and Roy Spencer (at the University of Alabama in Huntsville; UAH) update global temperature datasets that represent the piecing together of temperature data from a total of fifteen instruments flying on different satellites over the years, covering close to 100% of Earth’s surface at all times. The latest dataset is always discussed on that website.
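To illustrate that proportionality: in the Rayleigh-Jeans regime (valid at these microwave frequencies), the measured spectral radiance is linear in the temperature of the emitting layer, so radiometer readings map directly onto a “brightness temperature.” The sketch below is purely illustrative; it is not the actual UAH retrieval algorithm, and the 57.3-GHz oxygen-band channel is an assumed example value.

```python
# Illustrative only (not the UAH retrieval): invert the Rayleigh-Jeans
# relation  I = 2 * k_B * T * nu**2 / c**2  to get a brightness
# temperature from a measured microwave spectral radiance.
import scipy.constants as const

def brightness_temperature(intensity, freq_hz):
    """Brightness temperature (K) from spectral radiance I (W m^-2 Hz^-1 sr^-1)."""
    return intensity * const.c**2 / (2 * const.k * freq_hz**2)

# Hypothetical O2-band channel near 57.3 GHz, "measuring" a 250-K layer:
nu = 57.3e9
I = 2 * const.k * 250.0 * nu**2 / const.c**2   # forward model of the radiance
print(brightness_temperature(I, nu))            # -> 250.0 (K), linear in T
```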

Global Warming Skepticism for Busy People

By Roy Spencer

Format: Kindle Edition

4.9 out of 5 stars 168 ratings

This book draws on decades of climate research to explain why the threat of anthropogenic climate change has been grossly exaggerated. Global warming and associated climate change exist — but the role of humans in that change is entirely debatable. A little-known aspect of modern climate science is that the warming of the global atmosphere-ocean system over the last 100 years, even if entirely human-caused, has progressed at a rate that reduces the threat of future warming by 50% compared to climate-model projections. To the extent warming is partly natural (a possibility even the IPCC acknowledges), the future threat is reduced even further. This, by itself, should be part of the debate over energy policy — but it isn’t. Why?

The news media, politicians, bureaucrats, rent-seekers, government funding agencies, and a “scientific-technological elite” (as President Eisenhower called it) have collaborated to spread what amounts to fake climate news. Exaggerated climate claims appear on a daily basis, sucking the air out of more reasoned discussions of the scientific evidence which are too boring for a populace increasingly addicted to climate change porn. Upon close examination it is found that the “97% of climate scientists agree” meme is inaccurate, misleading, and useless for decision-making; human causation of warming is simply assumed by the vast majority of scientists who actually know little, if anything, about climatology.

In contrast to what many have been taught, there have been no obvious changes in severe weather — including hurricanes, tornadoes, droughts or floods. Despite an active 2018 wildfire season, there has actually been a long-term decrease in wildfire activity, although that will change if forest-management practices are not implemented. Proxy evidence of past temperature and Arctic sea-ice changes suggests that the warming and sea-ice decline over the last 50 years or so are not out of the ordinary, and are partly or even mostly natural. The Antarctic ice sheet isn’t collapsing, but remains stable. The human component of sea-level rise is shown to be, at most, only 1 inch per 30 years (25% of the observed rate of rise); and the latest evidence is that more CO2 dissolved in ocean water will be good for marine life, not harmful.

Admittedly, continued emissions of CO2 from fossil-fuel burning can be expected to cause (and probably have caused) some of our recent warming. But the Paris Agreement, even if extended through the end of the 21st century, will have no measurable effect on global temperatures, because the governments of the world realize humanity will depend upon fossil fuels for decades to come (until humans perhaps accept atomic energy — or develop fusion energy in an inexpensive form). Despite news reports and politicians’ proclamations, international agreements to reduce CO2 emissions are all economic pain for no observable climate gain. What government-mandated reliance on expensive and impractical energy sources will do is simply increase energy poverty — and poverty kills.

This downside to illusory efforts to “Save the Earth” is already being experienced in the U.K. and elsewhere. If people are genuinely concerned about humanity thriving, they must reject global warming alarmism. In terms of environmental regulation, the end result of the U.S. EPA’s Endangerment Finding will be reduced prosperity for all, and climate gain for none. The good news is that there is no global warming crisis, and this book will inform citizens and help guide governments toward decisions which benefit the most people while doing the least harm.

Posted in Center for Environmental Genetics | Comments Off on BOOK: Global Warming Skepticism for Busy People [Kindle Edition]

Many People Lack Protective Antibodies After COVID-19 Infection

This article appeared this week on Medscape. Author F. Perry Wilson interprets the technical data in a very breezy, non-intimidating, lay-term manner — so that those of us who are not specialists in immunology can understand it.

DwN

F. Perry Wilson, MD, MSCE
June 24, 2020
Welcome to Impact Factor, your weekly dose of commentary on a new medical study. I’m Dr. F. Perry Wilson.

In what seems like 10 years ago, but was actually just 6 weeks ago, on this very website, I said this:

“This is the COVID that allows us to open up more quickly, assuming that antibodies are protective, which — let’s be honest — if they aren’t, we’re sort of screwed no matter what.”

Cut to a couple of days ago, when I came across this article in Nature — the first deep dive, attempting to answer the question of just how protective those coronavirus antibodies are.

And, at first blush at least, the news isn’t great.

Researchers recruited patients who had recovered from COVID-19 from the Rockefeller University Hospital in New York. The 111 individuals enrolled had to have been asymptomatic for at least 14 days. They also recruited 46 asymptomatic household contacts and some controls who had never had COVID-19.

Now, a brief refresher on antibodies. There are several different types, but we broadly think about immunoglobulin M (IgM) as the short-term antibody, generated in the throes of the illness, and immunoglobulin G (IgG) as the long-term antibody. But here’s the thing: The mere presence of antibodies does not mean that those antibodies are protective. The researchers tease this apart for us.


They zeroed in on two types of anti-coronavirus antibodies: a group that binds to the spike protein (that’s the crown part of the corona), and more specifically, antibodies that bind to the receptor binding domain of the spike protein. This is the key, if you will, that opens the door of your cells (a receptor called ACE2) to infection. It’s a good bet that if there is an antibody that will shut down the virus, it’s one that will block the receptor binding domain.

Should we start with the good news?

[Figures in this piece: Robbiani DF, et al. Nature. Epub 18 June 2020.]

Compared with controls, IgG and IgM levels were higher among those who had recovered from COVID-19. As expected in this convalescent group, a bigger difference was seen in IgG (the long-term antibody) compared with IgM. You can see in this graph that IgM levels seem to go down a bit over time.


And, I’ll note, about 20%-30% of people didn’t have antibody titers significantly above controls. But broadly, okay — the majority of people made antibodies.

But that’s not the key thing here. Were these neutralizing antibodies? Do they stop viral replication?

To figure this out, the researchers genetically engineered a SARS-CoV-2 pseudovirus which expressed the spike protein and let it run amok infecting ACE2-expressing cells in culture.


They then added varying dilutions of patient plasma to the petri dishes to determine how much plasma you would need to shut the virus down by 50%, the so-called “neutralizing titer” 50 (NT50).
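To make the NT50 computation concrete, here is a minimal sketch that assumes a four-parameter logistic dose-response fit (a common choice for neutralization curves, not necessarily this study’s exact pipeline); the dilution series and infection readings are hypothetical.

```python
# Minimal sketch: estimate NT50 (the reciprocal plasma dilution that cuts
# infection by 50%) from a neutralization dilution series, via a
# 4-parameter logistic (4PL) fit. All numbers are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def percent_infection(log_dil, bottom, top, log_nt50, slope):
    """4PL curve: infection rises toward 'top' as plasma becomes more dilute."""
    return bottom + (top - bottom) / (1 + 10 ** (slope * (log_nt50 - log_dil)))

dilutions = np.array([30, 90, 270, 810, 2430, 7290])   # reciprocal dilutions
infection = np.array([5, 12, 30, 55, 80, 95])          # % of no-plasma control

popt, _ = curve_fit(percent_infection, np.log10(dilutions), infection,
                    p0=[0, 100, np.log10(500), 1.5])
print("NT50 ~", round(10 ** popt[2]))   # dilution giving 50% infection
```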

The results here were not so encouraging.

Thirty-three percent of the individuals tested had an NT50 of less than 50, which implies essentially no immunity to repeat infection; 79% had an NT50 less than 1000 — they may have partial immunity. Only two people tested had an NT50 greater than 5000.

Higher overall antibody titers were associated with neutralizing ability, as might be expected.


Individuals who had been hospitalized for COVID-19 were more likely to have neutralizing antibodies than those who hadn’t been hospitalized, suggesting that those with more severe illness are more likely to be immune in the future.

Overall, this is fairly concerning. Without neutralizing antibodies, an end to coronavirus transmission seems unlikely. But let’s also remember the empirical data: We don’t yet have any significant numbers of individuals who have been documented to have cleared COVID-19 and then become re-infected. And even without high levels of neutralizing antibodies, a second infection is likely not to be as bad as the first.

There’s another nugget of hope in this study. The researchers didn’t stop by simply measuring how many people had neutralizing antibodies. They actually sequenced 89 different anti-COVID antibodies to determine which specific antibodies were highly neutralizing. They identified 52 that had neutralizing ability and several that had potent neutralizing ability, targeted to specific amino acids on the receptor binding domain.

And here’s the thing: Most of the people in the study had those highly neutralizing antibodies; they just weren’t the main antibodies they were producing. Why is this good news? Because it suggests a pathway for a successful vaccine. We can make these potent neutralizing antibodies; it’s just that many of us don’t. But a vaccine designed to promote that particular antibody response could be highly successful.

All in all, this was a study that suggested that the tunnel we are in now may be a bit longer than we had hoped, but it also shows perhaps a light at the end of the tunnel.

F. Perry Wilson, MD, MSCE, is an associate professor of medicine and director of Yale’s Program of Applied Translational Research. His science communication work can be found in the Huffington Post, on NPR, and here on Medscape.

COMMENT: I have a problem with the conclusions in this article — first and foremost, the conclusion that “without neutralizing antibodies, an end to coronavirus transmission seems unlikely.” Pouring dilutions of serum containing neutralizing antibodies onto COVID-infected cell cultures in a Petri dish is not an adequate model for the many ways by which a human is able to fight viral infections.

In the case of a respiratory virus, local mucosal IgA appears first — then leukocytes and cytokines become available in the area, and recruitment of cytotoxic T cells, natural killer cells, helper T cells, and antiviral macrophages all play a role in eliminating virus-infected cells. So, to imply that “the lack of sufficient neutralizing serum antibodies means you will get the infection again,” or that “vaccinations work only by that mechanism,” seems misleading to me.

COMMENT: Olga, Judy: In the field of pharmacology — where some political groups have pushed (for decades) to stop using animals in drug studies (“because everything can be studied just as easily in a dish of cells in culture, or in silico”) — these GEITP pages consider that approach naïve. When a drug is administered, it undergoes absorption, distribution, metabolism, and excretion (ADME). These are all such complex processes (involving many different organs, tissues and cell types) that there is simply no way to duplicate, in cell culture or even in embryoid bodies, what happens in the intact animal.

In the field of immunology and viral inflammation, these processes must be at least as complicated. Studying a phenomenon in a dish of cells or in vitro (i.e. in a flask or test tube) usually has little relevance to the intact animal or clinically to the patient.

Hi Dan — I agree with Olga that this article on Medscape is “beyond misleading” — when the following is considered:

DwN

COMMENT: The Medscape article, and the research it summarizes, are not “wrong” — but rather simply naïve in examining/explaining only “the antibody function,” as a stand-alone description of the immune response against a virus. For example, T cells also develop memory to the proteins presented by infected cells, and T cells can kill cells directly, without antibody. However, these cytotoxic T cells are notoriously difficult to study/assess — without knowing the specific peptide presented. Furthermore, the study was definitely not high-throughput with regard to searching for a wide range of antibody responses. —Michael

Posted in Center for Environmental Genetics | Comments Off on Many People Lack Protective Antibodies After COVID-19 Infection

Statin therapy is associated with increased prevalence of healthy gut microbes

As these GEITP pages keep reiterating, virtually any phenotype (trait) reflects the contribution of genetics, epigenetic effects, environmental factors, endogenous influences, and each individual’s microbiome. The topic (phenotype) being discussed herein is obesity, and the contribution of the gut microbiome is being examined.

Indications that changes in the fecal microbiome are linked to development of obesity have resulted in intense research efforts since the early days of metagenomics. However, understanding an “obesity-associated microbiota constellation” has proved to be challenging. Obesity and obesity-related co-morbidities have clearly been associated with alterations in gut microbiota (including lowered fecal-community richness and decreased abundance of butyrate-producing bacteria). “Microbiome community-typing” analyses have recently identified the Bacteroides2 (Bact2) type of “intestinal microbiota configuration” (i.e. enterotype) — which is associated with systemic inflammation and leads to loose stools in many humans. The Bact2 enterotype is characterized by a high proportion of Bacteroides, a low proportion of Faecalibacterium, and low microbial cell densities; prevalence of the Bact2 enterotype varies from 13% in a general-population cohort to as high as 78% in patients with inflammatory bowel disease (IBD).

Reported changes in stool consistency and inflammation status during progression towards obesity and metabolic co-morbidities led authors [see attached article & editorial] to propose that these developments might similarly be correlated with increased prevalence of the potentially dysbiotic (pathological) Bact2 enterotype. By exploring obesity-associated microbiota alterations in quantitative fecal metagenomes of the MetaCardis Body Mass Index Spectrum cohort (n = 888), authors identified statin therapy as a key covariate of microbiome diversification. By focusing on a subcohort of participants not medicated with statins, authors found that prevalence of the Bact2 enterotype was correlated with body mass index (BMI) — increasing from 3.9% in lean participants to 17.73% in severely obese participants.

Systemic inflammation levels in Bact2-enterotyped individuals are higher than predicted — on the basis of their obesity status — indicating an association of the Bact2 enterotype with a dysbiotic microbiome constellation. Authors also discovered that obesity-associated microbiota dysbiosis is negatively associated with statin treatment (resulting in a lower Bact2 prevalence of 5.88% in statin-medicated obese participants). This was validated in both the accompanying MetaCardis cardiovascular disease dataset (n = 282) and an independent Flemish Gut Flora Project population cohort (n = 2,345). Potential benefits of statins in this context will require further evaluation in a prospective, randomized clinical trial — to ascertain whether the effect is reproducible, before considering the application of statins as “microbiota-modulating therapeutics”. 😊

DwN

Nature 21 May 2020; 581: 310-315 + editorial pp 263-264

Posted in Center for Environmental Genetics | Comments Off on Statin therapy is associated with increased prevalence of healthy gut microbes

One coronavirus mutation has taken over the world

This article appeared in The Washington Post two days ago — and was referred to me by a fellow GEITP-er. Note that the research described herein has not yet been published, nor undergone peer review. Some of it is posted on Fred Guengerich’s favorite website, bioRxiv 😉, where scientists can post “preprint” research before peer review.

DwN

One coronavirus mutation has taken over the world.
Scientists are trying to understand why.


By Sarah Kaplan and Joel Achenbach

June 29

When the first coronavirus cases in Chicago appeared in January, they bore the same genetic signatures as a germ that emerged in China weeks before. But as Egon Ozer, an infectious-disease specialist at the Northwestern University Feinberg School of Medicine, examined the genetic structure of virus samples from local patients, he noticed something different.

A change in the virus was appearing — again and again. This mutation, associated with outbreaks in Europe and New York, eventually took over the city. By May, it was found in 95 percent of all the genomes Ozer sequenced.

At a glance, the mutation seemed trivial. About 1,300 amino acids serve as building blocks for a protein on the surface of the virus. In the mutant virus, the genetic instructions for just one of those amino acids — number 614 — switched from a “D” (shorthand for aspartic acid) to a “G” (short for glycine).

But the location was significant, because the switch occurred in the part of the genome that codes for the all-important “spike protein” — the protruding structure that gives the coronavirus its crownlike profile and allows it to enter human cells the way a burglar picks a lock.

And its ubiquity is undeniable. Of the approximately 50,000 sequenced genomes of the new virus that researchers worldwide have uploaded to a shared database, about 70 percent carry the mutation, officially designated D614G but known more familiarly to scientists as “G.”

[Graphic: “The tiny mutation found in the dominant coronavirus variant.” Like all coronaviruses, SARS-CoV-2 has a series of characteristic spikes surrounding its core; these spikes are what allow the virus to attach to human cells. A mutation affecting the virus’s spike protein changed amino acid 614 from “D” (aspartic acid) to “G” (glycine). Research suggests that this small change — which affects three identical amino acid chains — might make the spike protein more effective, enhancing the virus’s infectiousness. Source: GISAID, Post reporting; graphic by Aaron Steckelberg/The Washington Post.]
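To make the D614G notation concrete, here is a minimal sketch of how one might tally the variant across a batch of spike-protein sequences. It assumes the sequences are already aligned to the Wuhan reference, so that position 614 is the same column in every string (a real pipeline would align first); the function names are illustrative, not GISAID’s.

```python
# Minimal sketch: classify spike sequences at the D614G site and compute
# the fraction carrying the mutant "G". Assumes pre-aligned sequences.

def classify_d614g(spike_seq: str) -> str:
    """Return the residue at spike position 614 (1-based): 'D', 'G', or 'other'."""
    residue = spike_seq[613]              # index 613 == 1-based position 614
    return residue if residue in ("D", "G") else "other"

def g_variant_fraction(sequences) -> float:
    """Fraction of sequences carrying the G (mutant) residue at position 614."""
    calls = [classify_d614g(s) for s in sequences]
    return calls.count("G") / len(calls)
```

Tallied over the ~50,000 shared genomes mentioned above, a count like this is what yields the ~70% figure.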

“G” hasn’t just dominated the outbreak in Chicago — it has taken over the world. Now scientists are racing to figure out what it means.

At least four laboratory experiments suggest that the mutation makes the virus more infectious, although none of that work has been peer-reviewed. Another unpublished study led by scientists at Los Alamos National Laboratory asserts that patients with the G variant actually have more virus in their bodies, making them more likely to spread it to others.

The mutation doesn’t appear to make people sicker, but a growing number of scientists worry that it has made the virus more contagious.

“The epidemiological study and our data together really explain why the [G variant’s] spread in Europe and the U.S. was really fast,” said Hyeryun Choe, a virologist at Scripps Research and lead author of an unpublished study on the G variant’s enhanced infectiousness in laboratory cell cultures. “This is not just accidental.”

But there may be other explanations for the G variant’s dominance: biases in where genetic data are being collected, quirks of timing that gave the mutated virus an early foothold in susceptible populations.

“The bottom line is, we haven’t seen anything definitive yet,” said Jeremy Luban, a virologist at the University of Massachusetts Medical School.

The scramble to unravel this mutation mystery embodies the challenges of science during the coronavirus pandemic. With millions of people infected and thousands dying every day around the world, researchers must strike a high-stakes balance between getting information out quickly and making sure that it’s right.

[Chart: “Spike protein mutation takes over.” A mutation in the spike protein of the SARS-CoV-2 virus changes just one amino acid in a chain of about 1,300, but it might make a difference in how the virus attacks human cells. The mutation (called D614G), which first appeared in January, is found in what has become the dominant variant of the coronavirus. In Nextstrain’s global subsample, the proportion of new weekly samples carrying D614G rose from near 0% in January 2020 to nearly 100% by June; data include 3,006 samples acquired June 24. Source: Nextstrain, GISAID; chart by Joe Fox/The Washington Post.]
A better lock pick

SARS-CoV-2, the novel coronavirus that causes the disease covid-19, can be thought of as an extremely destructive burglar. Unable to live or reproduce on its own, it breaks into human cells and co-opts their biological machinery to make thousands of copies of itself. That leaves a trail of damaged tissue and triggers an immune system response in the host that, for some people, can be disastrous.

This replication process is messy. Even though it has a “proofreading” mechanism for copying its genome, the coronavirus frequently makes mistakes, or mutations. The vast majority of mutations have no effect on the behavior of the virus.

But since the virus’s genome was first sequenced in January, scientists have been on the lookout for changes that are meaningful. And few genetic mutations could be more significant than ones that affect the spike protein — the virus’s most powerful tool.

This protein attaches to a receptor on respiratory cells called ACE2, which opens the cell and lets the virus slip inside. The more effective the spike protein, the more easily the virus can break into the bodies of its hosts. Even when the original variant of the virus emerged in Wuhan, China, it was obvious that the spike protein on SARS-CoV-2 was already quite effective.

[Diagram: (1) SARS-CoV-2 uses its spike to bind to the ACE2 receptor, allowing access into the cell. (2) The virus’s RNA is released into the cell; the cell reads the RNA and makes viral proteins. (3) The proteins are assembled into new copies of the virus, which then go on to infect more cells.]

But it could have been even better, said Choe, who has studied spike proteins and the way they bind to the ACE2 receptor since the severe acute respiratory syndrome (SARS) outbreak in 2003.

The spike protein for SARS-CoV-2 has two parts that don’t always hold together well. In the version of the virus that arose in China, Choe said, the outer part — which the virus needs to attach to a human receptor — frequently broke off. Equipped with this faulty lock pick, the virus had a harder time invading host cells.

“I think this mutation happened to compensate,” Choe said.

Studying both versions of the gene using a proxy virus in a petri dish of human cells, Choe and her colleagues found that viruses with the G variant had more spike proteins, and the outer parts of those proteins were less likely to break off. This made the virus approximately 10 times more infectious in the lab experiment.

The mutation does not seem to lead to worse outcomes in patients. Nor did it alter the virus’s response to antibodies from patients who had the D variant, Choe said — suggesting that vaccines being developed based on the original version of the virus will be effective against the new strain.

Choe has uploaded a manuscript describing this study to the website bioRxiv, where scientists can post “preprint” research that has not yet been peer reviewed. She has also submitted the paper to an academic journal, which has not yet accepted it.

The distinctive infectiousness of the G strain is so strong that scientists have been drawn to the mutation even when they weren’t looking for it.

Neville Sanjana, a geneticist at the New York Genome Center and New York University, was trying to figure out which genes enable SARS-CoV-2 to infiltrate human cells. But in experiments based on a gene sequence taken from an early case of the virus in Wuhan, he struggled to get that form of the virus to infect cells. Then the team switched to a model virus based on the G variant.

“We were shocked,” Sanjana said. “Voilà! It was just this huge increase in viral transduction.” They repeated the experiment in many types of cells, and every time the variant was many times more infectious.

Their findings, published as a preprint on bioRxiv, generally matched what Choe and other laboratory scientists were seeing.

But the New York team offers a different explanation as to why the variant is so infectious. Whereas Choe’s study proposes that the mutation made the spike protein more stable, Sanjana said experiments in the past two weeks, not yet made public, suggest that the improvement is actually in the infection process. He hypothesized that the G variant is more efficient at beginning the process of invading the human cell and taking over its reproductive machinery.

Luban, who has also been experimenting with the D614G mutation, has been drawn to a third possibility: His experiments suggest that the mutation allows the spike protein to change shape as it attaches to the ACE2 receptor, improving its ability to fuse to the host cell.

Different approaches to making their model virus might explain these discrepancies, Luban said. “But it’s quite clear that something is going on.”
Unanswered questions

Although these experiments are compelling, they’re not conclusive, said Kristian Andersen, a Scripps virologist not involved in any of the studies. The scientists need to figure out why they’ve identified different mechanisms for the same effect. All the studies still have to pass peer review, and they have to be reproduced using the real version of the virus.

Even then, Andersen said, it will be too soon to say that the G variant transmits faster among people.

Cell culture experiments have been wrong before, noted Anderson Brito, a computational biologist at Yale University. Early experiments with hydroxychloroquine, a malaria drug, hinted that it was effective at fighting the coronavirus in a petri dish. The Food and Drug Administration authorized it for emergency use in hospitalized COVID-19 patients. But that authorization was withdrawn this month after evidence showed that the drug was “unlikely to be effective” against the virus and posed potential safety risks.

So far, the biggest study of transmission has come from Bette Korber, a computational biologist at Los Alamos National Laboratory who helped build one of the world’s biggest viral genome databases for tracking HIV. In late April, she and colleagues at Duke University and the University of Sheffield in Britain released a draft of their work arguing that the mutation boosts transmission of the virus.

Analyzing sequences from more than two dozen regions across the world, they found that most places where the original virus was dominant before March were eventually taken over by the mutated version. This switch was especially apparent in the United States: Ninety-six percent of early sequences here belonged to the D variant, but by the end of March, almost 70 percent of sequences carried the G amino acid instead.

The British researchers also found evidence that people with the G variant had more viral particles in their bodies. Although this higher viral load didn’t seem to make people sicker, it might explain the G variant’s rapid spread, the scientists wrote. People with more virus to shed are more likely to infect others.

The Los Alamos draft drew intense scrutiny when it was released in the spring, and many researchers remain skeptical of its conclusions.

“There are so many biases in the data set here that you can’t control for and you might not know exist,” Andersen said. In a time when as many as 90 percent of U.S. infections are still undetected and countries with limited public health infrastructure are struggling to keep up with surging cases, a shortage of data means “we can’t answer all the questions we want to answer.”

Pardis Sabeti, a computational biologist at Harvard University and the Broad Institute, noted that the vast majority of sequenced genomes come from Europe, where the G variant first emerged, and the United States, where infections thought to have been introduced by travelers from Europe spread undetected for weeks before the country shut down. This could at least partly explain why it appears so dominant.

The mutation’s success might also be a “founder effect,” she said. Arriving in a place like Northern Italy — where the vast majority of sequenced infections are caused by the G variant — it found easy purchase in an older and largely unprepared population, which then unwittingly spread it far and wide.

Scientists may be able to rule out these alternative explanations with more rigorous statistical analyses or a controlled experiment in an animal population. And as studies on the D614G mutation accumulate, researchers are starting to be convinced of its significance.

“I think that slowly we’re beginning to come to a consensus,” said Judd Hultquist, a virologist at Northwestern University.

Solving the mystery of the D614G mutation won’t make much of a difference in the short term, Andersen said. “We were unable to deal with D,” he said. “If G transmits even better, we’re going to be unable to deal with that one.”

But it’s still essential to understand how the genome influences the behavior of the virus, scientists say. Identifying emerging mutations allows researchers to track their spread. Knowing what genes affect how the virus transmits enables public health officials to tailor their efforts to contain it. Once therapeutics and vaccines are distributed on a large scale, having a baseline understanding of the genome will help pinpoint when drug resistance starts to evolve.

“Understanding how transmissions are happening won’t be a magic bullet, but it will help us respond better,” Sabeti said. “This is a race against time.”


https://www.washingtonpost.com/science/2020/06/29/coronavirus-mutation-science/?arc404=true

Posted in Center for Environmental Genetics | Comments Off on One coronavirus mutation has taken over the world

The “online competition” between pro- and anti-vaccination views

This GEITP topic might be appropriate — following on the recent concerns (expressed by Drs. Kerkvliet, Eaton & Tanguay) that risk-assessment toxicology is becoming increasingly polluted by pseudoscience and political views. Another big example is climate science: Michael Shellenberger (recipient of numerous environmentalist prizes) wrote a 1700-word article in Forbes Magazine within the past week, which included the statement “On behalf of environmentalists everywhere, I would like to formally apologise for the climate scare we created over the past 30 years” — and which was immediately not only attacked, but removed from the internet within a few hours. This is not an action that scientists would ever take, but it is an action that politically-motivated pseudoscientists now take very often. Directly or indirectly, we can assume that the internet has greatly enhanced this route of misinformation. Social media companies are strongly involved in controlling online health disinformation and misinformation — e.g. during the last five months of the COVID-19 pandemic, all the “frenetic news” that “vaccines against the SARS-CoV-2 virus are imminent” (with no need for 1-2 years of clinical trials), or that “medicine X, Y or Z is beneficial, or not beneficial” (based on a small quick study of N = 12 or 35 or 80 individuals).

Authors [see attached article] provided a system-level analysis of the multi-sided ecology of nearly 100 million individuals expressing views regarding vaccination — views emerging from the ~3 billion users of Facebook across many countries, continents and languages (e.g. Figs. 1 & 2 show clusters of pro- and anti-vaccine opinions; each red node is a cluster of fans of a page with anti-vaccination content, and each blue node is a cluster that supports vaccination). Authors ask “Why have negative views about vaccination become so robust — despite a considerable number of news stories that supported vaccination, and were against anti-vaccination views, during the measles outbreak of 2019?” Seven observations and possibilities are discussed [see attached article].

[1] Although anti-vaccination clusters are numerically smaller, they have become central in terms of their positioning within the network; this means that pro-vaccination clusters in the smaller network patches may remain ignorant of the main conflict and have the wrong impression that “they are winning.”

[2] Instead of the undecided population being passively persuaded by the anti- or pro-vaccination populations, undecided individuals are highly active; these findings challenge our current thinking that undecided individuals are passive in the battle for “hearts and minds.”

[3] Anti-vaccination individuals form more than twice as many clusters as pro-vaccination individuals, because their average cluster size is much smaller. This means that the anti-vaccination population provides a larger number of sites for engagement than the pro-vaccination population (see the sketch after this list).

[4] Authors’ qualitative analysis of cluster content shows that anti-vaccination clusters offer a wide range of potentially attractive narratives that blend topics such as safety concerns, conspiracy theories and alternative health and medicine, [and also now ‘the cause and cure’ of COVID-19]. In contrast, pro-vaccination views are far more monothematic.

[5] Anti-vaccination clusters showed the highest growth during the measles outbreak of 2019, pro-vaccination clusters the lowest growth; again, this is consistent with the anti-vaccination population being able to attract more undecided individuals by offering many different types of negative narratives.

[6] Medium-sized anti-vaccination clusters have grown most; this finding challenges a broader theoretical notion of population dynamics that claims that “groups grow through preferential attachment” (i.e. a larger size attracts more recruits).

[7] Geography is a favorable factor for the anti-vaccination population; anti-vaccination clusters are either self-located within cities, states or countries — or remain global.
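For readers who want to experiment with these network measures, below is a minimal sketch using networkx on a stand-in graph with hypothetical pro/anti labels (the authors’ actual Facebook data are not public); it computes the per-side cluster counts and sizes from observation [3], and the centrality notion from observation [1].

```python
# Minimal sketch (hypothetical labels on a stand-in graph, NOT the
# authors' Facebook data): per-side cluster counts/sizes and average
# betweenness centrality.
import networkx as nx

G = nx.karate_club_graph()                                 # stand-in network
side = {n: ("anti" if n % 3 else "pro") for n in G.nodes}  # hypothetical labels

for s in ("anti", "pro"):
    # "Clusters" here = connected components of one side's induced subgraph.
    sub = G.subgraph(n for n in G.nodes if side[n] == s)
    sizes = [len(c) for c in nx.connected_components(sub)]
    print(s, "clusters:", len(sizes), "mean size:", sum(sizes) / len(sizes))

# A numerically smaller side can still occupy central network positions:
bc = nx.betweenness_centrality(G)
for s in ("anti", "pro"):
    vals = [bc[n] for n in G.nodes if side[n] == s]
    print(s, "mean betweenness:", sum(vals) / len(vals))
```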

Distrust in scientific expertise (and trust in internet ‘chatter’) is dangerous — but, sadly, growing among the misinformed. The authors’ theoretical framework reproduces the recent explosive growth in anti-vaccination views — predicting that these views might dominate in a decade. Perhaps new insights provided by this framework can be informative in providing new policies and approaches to interrupt this shift to negative views? This study: [a] challenges conventional thinking about undecided individuals in issues of contention surrounding health, [b] sheds light on other issues of contention such as climate change, and [c] highlights the key role of network cluster dynamics in multi-species ecologies.

DwN

Nature 11 June 2020; 582: 230-233

Posted in Center for Environmental Genetics | Comments Off on The “online competition” between pro- and anti-vaccination views

Placental imprinting: Emerging mechanisms and functions

To reiterate what these GEITP pages continue to emphasize — any trait (phenotype) reflects the contribution of gene differences in DNA sequence, epigenetic factors, environmental effects, endogenous influences, and each individual’s microbiome. Epigenetic factors include DNA methylation, RNA interference, histone modifications, and chromatin remodeling. The process of “imprinting” (i.e. modification of a gene’s expression via epigenetic mechanisms, which will affect that gene’s expression in offspring; the change can be behavioral as well as biochemical) in the placenta is the topic of this fantastic, thorough, well-written review [see attached]. Although the review focuses on DNA methylation, the other forms of epigenetic factors (listed above) should also be remembered as (likely) contributors to imprinting.

The placenta is essential for healthy pregnancy, because it supports growth of the baby, helps the mother’s body adapt, and provides a connection between mother and the developing baby. Studying gene regulation and the early steps in placental development is challenging in human pregnancy; thus, mouse models have been key in building our understanding of these processes. In particular, these studies have identified a subset of genes that are essential for placentation (i.e. formation or arrangement of a placenta in a female animal’s uterus), which are called “imprinted genes”. Imprinted genes are those that are expressed from only one copy — depending on whether they are inherited from the mother or the father. It has now become apparent that regulation of imprinted genes in placenta is often unique from that in other tissues, and there are species-specific mechanisms, allowing evolution of new imprinted genes specifically in the placenta.

Cells that make up the placenta perform many diverse functions during pregnancy — including invasion into the maternal uterus, remodeling maternal vasculature, mediating nutrient and waste exchange between mother and fetus, producing pregnancy-supporting hormones, and modulating the maternal immune system to tolerate and support pregnancy. Although the placenta primarily comprises cell types arising from the conceptus, maternal immune and endometrial cells also contribute to its development and differentiation. Signals from the placenta modulate fetal growth and development, as well as maternal physiology. Through these selective pressures, genomic imprinting is thought to have evolved in placental mammals — with parental alleles in the fetal genome competing to influence maternal resource allocation…!!

Imprinted genes are those that are expressed mono-allelically (i.e. from only one copy of the gene) — based on parent of origin — and >100 imprinted genes have been identified to date in mice and humans; many have been shown to be essential for fetal growth, placentation, and/or neurological function. Furthermore, the tissues that show imprinted expression of the highest number of genes are placenta and brain. Given the developmental importance of, and evolutionary link between, genomic imprinting and the placenta, a growing body of work has centered on investigating regulatory mechanisms and function of imprinted genes in placenta. In particular, genetic tools — plus the ability to access early embryonic stages in mouse models, and recent advances in low-input sequencing methodologies — have led to several exciting new discoveries. The author [see attached fantastic review] discusses recent work that has revealed unique mechanisms of imprinting in mouse placenta, the role these genes play in pregnancy, and aspects that we still do not understand — including how many analogous mechanisms exist in human placenta. Far below, we have tantalized the Reader by pasting Figure 1 of this review, which shows chromosomal locations of imprinted genes in mouse. 😊 😉
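To illustrate what “mono-allelic” expression means in practice, here is a minimal sketch (not the review’s method; counts and thresholds are hypothetical) of the kind of binomial test often used to call allele-specific expression from RNA-seq read counts at a heterozygous SNV, against the 50:50 ratio expected for bi-allelic expression.

```python
# Minimal sketch: call imprinted-like (mono-allelic) expression at one
# heterozygous SNV from RNA-seq allelic read counts. Hypothetical numbers.
from scipy.stats import binomtest

def allelic_call(maternal_reads: int, paternal_reads: int,
                 alpha: float = 0.01, min_ratio: float = 0.9) -> str:
    """Classify expression as 'maternal', 'paternal', or 'bi-allelic'."""
    total = maternal_reads + paternal_reads
    p_value = binomtest(maternal_reads, total, p=0.5).pvalue
    major_fraction = max(maternal_reads, paternal_reads) / total
    if p_value < alpha and major_fraction >= min_ratio:
        return "maternal" if maternal_reads > paternal_reads else "paternal"
    return "bi-allelic"

print(allelic_call(95, 5))    # -> 'maternal'  (imprinted-like)
print(allelic_call(48, 52))   # -> 'bi-allelic'
```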

As emphasized, the role of placental-specific imprinting in human development and placentation is unclear and remains challenging to study — because of the inaccessibility of early embryonic development and the species-specific nature of placental imprinting. A recent study identified an association between aberrant imprinting at a number of placental-specific imprinted loci and intrauterine growth restriction — suggesting these genes may regulate fetal growth; however, determining whether these changes are the cause, or consequence, of poor growth will require further study. New technologies in culturing human trophoblast stem cells and trophoblast organoids offer new avenues into the study of placental-specific imprinting in trophoblast differentiation and function. Furthermore, recent advances in epi-CRISPR will specifically allow modulation of epigenetic states and transcriptional levels at imprinted loci in trophoblast cells in culture, to study the cellular phenotypic consequences of gene dosage.

Widespread placental-specific imprinted DNA methylation has been observed in humans, with >100 maternal germline differentially methylated regions (gDMRs) identified in placental trophoblasts — a feature NOT conserved in mouse. In addition, human placental imprinting is uniquely polymorphic between individuals — which may, in part, be attributable to differences in ZFP57 and ZFP445 (zinc-finger protein) activity in human vs mouse embryogenesis. These findings suggest that the human placenta is exceptionally permissive to inherited DNA methylation from the oocyte; however, it remains to be shown whether any imprinted loci in the human genome are regulated noncanonically (i.e. via pathways that are presently unknown, or that deviate from any canonical paradigm).

Recent studies in human preimplantation embryos suggest that H3K27me3 is reprogrammed much more rapidly than in mouse; yet, early allelic data suggest that at least a few loci may maintain a maternal bias in H3K27me3 to the morula stage (the solid ball of cells from which the blastula is formed). Nevertheless, demonstrating whether noncanonical imprinting exists in the human presents further challenges for future study — because heterozygous single-nucleotide variants (SNVs) carrying the important parental allelic information are far rarer than in mouse hybrids; this drastically limits the number of loci that can be evaluated in any one sample. Future work will reveal the full extent of imprinting in human placenta and help elucidate whether placental-specific imprinting has a role in regulating fetal growth and maternal adaptations to pregnancy — which will be essential for our understanding of pregnancy-related pathologies.

For the young investigator just starting out in his/her career, there is enough information in this fantastic review [see attached] to last a lifetime of grant proposals. 😊 😊 [And see Figure 1 below]

DwN

PLoS Genet Apr 2020; 16: e1008709

Fig 1. Extraembryonic-specific imprinted genes in the mouse genome. Genes reported to show imprinted gene expression almost exclusively in placenta and/or visceral endoderm [5,33,35–40]. Red genes are maternally expressed, whereas blue genes are paternally expressed. Genes that are noncanonically imprinted are underlined. Asterisks mark genes that are imprinted by an alternative mechanism in somatic tissues.

Posted in Center for Environmental Genetics | Comments Off on Placental imprinting: Emerging mechanisms and functions

Open letter to the KNAW: Do not exclude scientists with alternative views

These GEITP pages often speak out against fraud and corruption in science; such is the topic herein. This letter — posted on clintel.org by Professor Guus Berkhout — is to the new incoming president of the Royal Netherlands Academy of Arts and Sciences (KNAW). In the US, the same letter should be sent to the presidents of the National Academy of Sciences, National Academy of Engineering, and the National Academy of Medicine (and also to at least a dozen scientific professional societies — as well as the Nature Publishing Group and other publishing houses).

Every viewpoint and proposed theory must be allowed to be expressed and openly considered, and discussion must be encouraged. When scientists are suppressed, condemned, excluded, terrified to speak up, taken to court with lawsuits, or even stripped of their tenured positions — because their opinion differs from “the consensus” (groupthink) — that is no longer science; that is politics, plain and simple. ☹

DwN

Open letter to the KNAW: don’t exclude scientists with alternative views

Prof. Dr. Ineke Sluiter, President
Royal Netherlands Academy of Arts and Sciences (KNAW)
Trippenhuis
Kloveniersburgwal 29
1011 JV AMSTERDAM

The Hague, 10 June 2020

Dear Professor Sluiter,

As a dedicated KNAW member, I wrote to past-President José van Dijck more than 2 years ago and past-President Wim van Saarloos more than a year ago to express concern that climate science is being abused for political purposes. I wrote that climate policy was being made under the pretext that “the science is settled”.

Both presidents’ answers were far from reassuring: “The question has been carefully considered. We have full confidence in the IPCC. There is no reason for the KNAW to take further action.”

Why don’t we hear any alarm bells?
I am addressing you, in your capacity as the new President of the KNAW, because the climate issue is escalating. The IPCC and the associated activist climate movement have become highly politicised. Sceptical scientists are being silenced. As an IPCC expert reviewer, I critically looked at the latest draft climate report. My conclusion is that there is little evidence of any intent to discover the objective scientific truth.

Though IPCC’s doomsday scenarios are far from representative of reality, they play an important role in government climate policy. Only courageous individuals dare to point out that the predictions of the IPCC’s computer models of climate have not come to pass — contemporary measurements contradict them. IPCC’s confidence in its own models does not match the real-world outturn. In the past, scientific societies such as ours would have sounded the alarm.

In your interview with Elsevier Weekblad (6 June 2020) you say: “Dutch science should be proud of itself” and, a little later, “A hallmark of high-quality research must be a wide variety of viewpoints – fewer dogmas, more viewpoints.” I agree. Unfortunately, your observations do not seem to apply to climate science. There, diversity is suppressed and the Anthropogenic Global Warming (AGW) dogma is promoted. That is why I am writing to you.

Faith in models is faith in modellers
Models’ outputs are not magically correct, even if those models are run on supercomputers. After all, models are the work of fallible humans. What models tell us depends entirely on what the modellers have put in: hypotheses, relationships, parameters, simplifications, boundary conditions, and so on. Unfortunately, there is little discussion about the validity of these crucial inputs. All that is discussed is the output.

As a result, tuning of models has come to be falsely equated with validation. The famous mathematician John von Neumann said: “The near-perfect match between your model and your data doesn’t tell you much about how good your model is. With four parameters I can fit an elephant. With five I can wiggle his trunk.” With enough tunable parameters relative to the amount of data, a model can replicate any dataset. This is exactly what happens when tuning climate models.
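Von Neumann’s point is easy to demonstrate numerically. The sketch below (synthetic data, purely illustrative) tunes a five-parameter polynomial to five past “observations”: the hindcast is essentially perfect, yet the forecast outside the tuning window departs from the underlying trend the data were drawn from.

```python
# Illustrative only: a 5-parameter model "tuned" to 5 data points fits
# the past perfectly but predicts the future poorly.
import numpy as np

rng = np.random.default_rng(0)
x_past = np.linspace(0.0, 1.0, 5)
y_past = 2.0 * x_past + rng.normal(0.0, 0.1, size=5)   # noisy linear "record"

coeffs = np.polyfit(x_past, y_past, deg=4)             # 5 free parameters
hindcast_error = np.max(np.abs(np.polyval(coeffs, x_past) - y_past))
print("hindcast error:", hindcast_error)               # ~0 (perfect "tuning")

x_future = np.array([1.5, 2.0])
print("forecast:", np.polyval(coeffs, x_future))       # diverges from ...
print("underlying trend:", 2.0 * x_future)             # ... y = 2x
```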

The real test of models is not how well they have been tuned to fit the past but how well they predict the future. Seen in the light of this test, climate models have failed. They cannot yet make reliable predictions. Therefore, they are unsuitable for long-term policymaking, particularly where, as here, the policies that IPCC and others advocate on the basis of these failed predictions have costly consequences for us all.

In the name of science
What concerns me most about this embarrassing state of affairs is that science is being misused to provide spurious justification for wishful climate policy and that the scientific establishment is looking the other way.

Why do scientific institutions not warn society that all these climate-change gloom-and-doom scenarios have little or no scientific justification? I know that there are many, many scientists around the world who doubt or disagree with the IPCC’s claims. I also know, from my own experience and from correspondence with colleagues, that there is much pressure on researchers to conform to what we are told is the climate “consensus.” But the history of science shows, time and time again, that new insights do not come from followers — but from critical thinkers. For valid new insights, measurements trump models.

The KNAW, as the guardian of science, must surely take action now. The more governments invest in expensive climate policies in the name of climate science, the more difficult it becomes to point out that climate science, in its present state, falls a long way short of providing any justification for such policies. There are more and more indications that things are not right. If the scientific community waits for the dam to burst, the damage to science will be enormous. Society will then rightly ask itself: why were the Academies of Sciences silent? Surely there has been enough warning from scientific critics of the official position?

The KNAW must, of course, stay clear of politics and focus on excellence in finding the truth. But I repeat that the KNAW is also the guardian of science. In climate policy in particular, science is abused on a global scale. How can one plausibly state, on such a highly complex subject as the Earth’s climate, that “the science is settled”? That is not excellence: that is stupidity.

There has been no clear warning from the European Academies (EASAC) or the InterAcademy Partnership (IAP) that the climate sciences, for all the work done, are still a long way from reaching definitive conclusions. I consider such a warning to be a moral scientific duty. After all, built on the IPCC’s myth of catastrophe, politicians are turning society upside down and, in the name of science, imposing upon us an extremely expensive climate policy.

Worried citizens, who no longer have any trust in science and want to know what is really going on, now approach me. I feel partly responsible for the lack of criticism from my colleagues. I try to explain the true state of affairs.

My explanation to troubled citizens
Earth’s climate is highly complex. Science is only at the beginning of a fascinating voyage of discovery. Those who maintain that their models’ outputs are correct are telling a political story, not a scientific one. The geological record – I am a geophysicist – shows that climate changes on all timescales. There were large temperature fluctuations — long before humans walked the Earth. Certainly, anthropogenic CO2 might have a small warming effect, but there is clearly a lot more going on. The climate movement focuses far too much on what is happening today. In doing so, it is looking through a keyhole at long-term climate processes.

Few sceptical scientists deny that CO2 has some warming effect. However, we do not know how substantial the effect of CO2 is compared with the contribution from natural factors. Measurements and research in recent years show that our contribution seems modest (of order 1 °C per century). Accordingly, climate catastrophism, whether concerning warming itself or its consequences such as sea-level rise, has no scientific basis. Scientific institutions are failing in their duty to warn society that research results are being abused. Indeed, climate activists claim the support of the wider scientific community when launching their extreme CO2-reduction proposals. But those proposals are entirely unfeasible and unaffordable.

The IPCC was supposed to be a scientific initiative – I had been a strong supporter of it myself. However, it has emerged as a political organisation that abuses science. It spreads doomsday scenarios about global warming with the same arrogance as the Club of Rome 50 years ago.

Now it is even more embarrassing. As I have said, climatologists persist in the scientific error of confusing model tuning with validation. Worst of all, the IPCC has proven to be totalitarian. It does not tolerate criticism. Critical input is invariably rejected or ignored. This is a mortal sin in science, isn’t it?

The decay of climate science
In your inaugural speech you said scientists make errors. I agree. We err all the time when building models. More importantly, when measurements indicate that those models are wrong, we should be willing to acknowledge that our assumptions are wrong. That is a matter of fundamental scientific integrity.

You say in Elsevier Weekblad: “As an academic, one should be protected against government interference.” As a former Senate member of my alma mater, I am sad to see how many university Senates seem willing to make science subservient to the will of government. This defect has escalated in government-guided climate research programs and related energy transition research. Scientists who take a sceptical stance are sidelined, excluded, or even dismissed. Yet, as you say yourself, being critical is part of the scientific process.

A characteristic example is Prof. Peter Ridd, a reef expert who opposed the doomsday scenario that anthropogenic climate change is causing the Great Barrier Reef to die off on a large scale. He publicly denounced shortcomings in the alarming science about the reef and was fired by his university after decades of service. He fought his dismissal and was vindicated by the judge on all fronts. But that was not enough for the university, which appealed with the most expensive lawyers. This shameless lawsuit is still ongoing. It is not only a serious violation of, and threat to, academic freedom; it also sends completely the wrong signal to young scientists: don’t you dare go against the IPCC dogma, because this is what awaits you. And unfortunately, Professor Ridd is not the only one. In the climate world, contrarians are harshly punished.

Conclusion
Today the KNAW can no longer rely upon the imagined credibility of the IPCC. Tenured professors are terrified of being excluded; those who dissent find they are no longer permitted to participate.

There is an enormous fear of using new concepts to take climate insights further. In the last 30 years, we have hardly seen any new concepts in the IPCC community. It is all about reinforcing the CO2 hypothesis, right or wrong.

In astronomy there was a time when errors in calculating planetary orbits were fixed by piling epicycles upon epicycles. The innovative proposal by Copernicus (1473-1543) to improve the method was severely suppressed. And of course, we also know what happened to Galileo (1564-1642) when he announced his revolutionary discoveries. Are we back in the time of Copernicus and Galileo?

It is not models — but data — that are definitive. Think of the spectacular developments in the telescope and microscope. Recently the Large Hadron Collider confirmed the existence of the Higgs boson. The new Dutch LOFAR antenna network has discovered some 300,000 new galaxies. The more complex the systems we investigate, the more important it becomes to invest in better measurement systems, so as to refine and validate our theoretical models. That is no less true for climate research than for any other field of scientific endeavour.

Scientific progress always comes from those who dare to go against established opinion. The Paris Climate Accord (2015) is based on the lie that the science is settled. How sad it is that scientists who oppose it are condemned. Scientific progress springs from disagreement and discussion. We have taken a direction in climate research that is unworthy of science. History will blame those responsible. Evil is not done by those who initiate it so much as by those who facilitate it.

Proposal
Authoritative researchers, university boards and umbrella scientific organisations should at least speak out against:

The claim that “the science is settled”;
The uncritical use of today’s climate models;
The exclusion of scientists who hold different views.

In addition, an open scientific debate should be organised on at least the following themes:

Validation of IPCC’s climate models (today, there is not even a protocol!);
Variations in solar irradiance and their contribution to climate change, including the role of clouds;
Variations in ocean currents such as the Gulf Stream, and in modes such as the North Atlantic Oscillation, and their influence on climate change;
Influence of increasing atmospheric CO2 on global warming;
Reality checks of the alarming IPCC scenarios;
The sustainability of biofuels, versus wind farms and solar fields;
Nuclear energy as the energy source of the future.

I propose to organise an international open blue-team/red-team meeting together with the KNAW, in which both teams can present their scientific views†. These discussions could be the start of a new era in climate science. Audiatur et altera pars [“Let the other side be heard”].

I am sending an English version of this letter to Professor Christina Moberg, President of EASAC, and Professor Volker ter Meulen, President of the IAP.

I wish you every success and satisfaction in your new role as President of the KNAW and look forward to your response.

Yours sincerely,

Dr. A. J. (Guus) Berkhout
Emeritus Professor of Geophysics
Member of the Domain Natural and Technical Sciences

† Organisations that regularly verify the effectiveness of their strategy may use an intensive “blue-team/red-team” exercise, in which two teams with opposing viewpoints – for example, “all’s well” (blue team) versus “change is essential” (red team) – debate with the aim of increasing the resilience of the organisation. One team is the attackers; the other, the defenders.

In my proposal, the blue team represents the position of the IPCC and the red team represents the position of critical climate scientists. I am in regular contact with Professor Will Happer, emeritus professor of physics at Princeton University and, until recently, scientific advisor to the President of the United States. Professor Happer has proposed such an approach in the US. He is one of my international advisors on climate science and policy.

By Guus Berkhout

24 June 2020

COMMENT: Incredibly to the point! I wholeheartedly support a similar examination of the current long-standing assumptions underlying health risk assessments related to chemical exposure.

COMMENT: I couldn’t agree more with you on that one! Risk-averse assumptions and bad science are starting to dominate our field, which is unfortunate, to say the least. I do not understand how toxicology has deviated so far from the well-established principles of pharmacology — especially the fundamental concept of dose-response and an appreciation of receptor-mediated responses.

COMMENT: Agreed, Dave, Nancy, and Dan. The field is increasingly dominated by “advocacy” rather than sound science. As professionals in the field, we need to help combat this trend. The impact of social media did not start this trend, but it sure has added fuel to the fire.

Posted in Center for Environmental Genetics | Comments Off on Open letter to the KNAW: Do not exclude scientists with alternative views

Origins of Life: A plausible route to the first genetic alphabet

These GEITP pages include the topic of evolution, which of course includes the origins of life on this planet (which might have occurred as early as 4.2 to 4.0 billion years ago). The genetic polymers RNA and DNA are central to information storage in all biological systems and, as such, form the core of most hypotheses about the origin of life. The most prominent of these theories is the “RNA world” hypothesis, which postulates that RNA was once both the central information-carrier and the catalyst for biochemical reactions on Earth before the emergence of life. Studies in the past few years, however, have suggested that the first genetic systems might have been based on nucleic-acid molecules that contain both RNA and DNA nucleotides — which then gradually self-separated into today’s RNA and DNA.

Authors [see attached article & editorial] offer fascinating experimental support for an initial mixed RNA–DNA world. Primordial geochemical processes are thought to have led to formation of the building blocks of nucleic acids: nucleotides and nucleosides (nucleotides that lack a phosphate group). Under suitable conditions, these building blocks polymerized, and the resulting strands would eventually have had to be replicated, without assistance from modern protein enzymes. How could this have happened? These authors previously had identified a network of reactions (promoted by ultraviolet light) that resulted in synthesis of two of the standard nucleosides found in RNA: uridine (U) and cytidine (C), known collectively as pyrimidines. These reactions started from hydrogen cyanide (HCN) and derivatives thereof — simple molecules thought to have been readily available on many planets, comets and meteorites throughout the universe, including early Earth.

Further studies and development of this reaction network raised the intriguing possibility that protein and lipid precursors could have arisen simultaneously alongside nucleosides — thereby providing three of the main types of molecules needed to make cells. However, a complementary route for formation of the other two standard RNA nucleosides [adenosine (A) and guanosine (G), i.e. the purines], using the same HCN-based chemistry, has remained elusive.

In the present work [see attached], authors revisited compounds produced as intermediates in the previously established reaction network that synthesizes U and C; they identified a pathway in which a key intermediate of pyrimidine-nucleoside synthesis, ribo-aminooxazoline (RAO; see the fantastic diagram of the chemical pathways in the editorial), can also be converted into two purine DNA nucleosides: deoxyadenosine (dA) and deoxyinosine (dI, which is not one of the standard nucleosides found in modern DNA). Intriguingly, these DNA nucleosides can form base pairs with U and C.

The four nucleosides — U, C, dA and dI — therefore constitute a complete “alphabet” that could have encoded genetic information in nucleic acids in a prebiotic RNA–DNA world. Importantly, the synthesis of dA and dI can occur in parallel with that of U and C, producing mixtures of the four products — in yields and ratios suitable for the construction of a genetic system. This mutual compatibility of the two synthetic pathways increases plausibility of the reaction network as a prebiotic system: if the two syntheses were incompatible, then geological scenarios would need to be contrived to explain how they could have been separated into different pools to enable the chemistry to occur, and then combined to enable the formation of hybrid RNA–DNA molecules. Notably, under certain reaction conditions, U and C can survive only in the presence of the thio-anhydro-purine compounds that act as direct precursors of dA and dI.
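As a toy illustration of this four-letter hybrid alphabet (the pairing rules, dA with U and dI with C, follow the article, but the code, names, and example sequence below are invented for illustration), one can model complementary-strand formation in a few lines of Python:

    # Toy model of the proposed hybrid RNA-DNA alphabet (illustrative only).
    # Pairing rules per the article: dA pairs with U, and dI pairs with C.
    COMPLEMENT = {"dA": "U", "U": "dA", "dI": "C", "C": "dI"}

    def complement_strand(strand):
        """Return the base-pairing complement of a hybrid strand, read in
        reverse to reflect the antiparallel geometry of a duplex."""
        return [COMPLEMENT[base] for base in reversed(strand)]

    template = ["dA", "C", "dI", "U", "U"]
    print(complement_strand(template))   # ['dA', 'dA', 'C', 'dI', 'U']

Any strand written in this alphabet therefore has a unique complement, which is the minimal requirement for template-based copying of genetic information.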

In summary, authors have demonstrated a high-yielding, completely stereo-, regio- and furanosyl-selective prebiotic synthesis of the purine deoxyribonucleosides: dA and dI. Their synthesis uses key intermediates found in the prebiotic synthesis of the canonical pyrimidine ribonucleosides (C and U). Authors show that, once generated, the pyrimidines persist throughout the synthesis of the purine deoxyribonucleosides — leading to a mixture of dA, dI, C and U. These results would support the notion that purine deoxyribonucleosides and pyrimidine ribonucleosides may have co-existed before the emergence of life on this planet. 😊
DwN

Nature 4 June 2020; 582: 60-66 + editorial pp 33-34.

COMMENT: The publication offered by Prof. Lancet is attached as the third item above (Life 2019, 9: 77; doi:10.3390/life9040077).

DwN

Dear Dan, Good to see yet another Origins-Of-Life (OOL) paper among the many papers you recommend, made understandable by your excellent summaries!

The present OOL story no doubt helps establish the RNA–DNA world as a plausible path for life’s origin. But in stark conceptual contrast stands a more general, chemically unbiased scheme, in which life — which certainly began as a highly complex chemical mixture — emerged via an early reproduction mechanism much more compatible with environmental compound diversity. This scenario is different from, and independent of, mutually templating polynucleotides as the first reproducers; it becomes possible under the concept of a lipid world (see recent paper attached and references therein).

In this conceptual framework, “lipid” stands for ANY amphiphile [any organic compound (e.g., detergent, bile salt, phospholipid, or surfactant) comprising both a water-loving and a lipid-loving portion] — including those bearing a nucleobase, amino acid, or sugar as head-group, as well as any of thousands upon thousands of (sometimes “nameless”) prebiotically plausible small molecules. Reproduction happens when micelles made of such lipids grow and split, transmitting compositional information from one generation to the next. This, of course, requires that lipid micelles display catalytic capacities, as evidenced by a broad literature we are now summarizing in an in-preparation review.
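As a minimal toy sketch of such compositional reproduction (illustrative Python only; this is not Prof. Lancet’s actual model, and the molecule types, weights, and sizes are invented), a “micelle” can be treated as a bag of molecule types whose growth is biased by its current composition and whose fission hands that composition on to its daughters:

    import random

    # Toy sketch of compositional reproduction (illustrative only).
    # A micelle is a multiset of molecule types; growth is biased by the
    # current composition (a crude stand-in for catalysis), and splitting
    # transmits compositional information to the daughter micelles.
    MOLECULE_TYPES = ["A", "B", "C", "D"]   # stand-ins for diverse amphiphiles

    def grow(micelle, target_size):
        """Add molecules until target_size; types already abundant in the
        micelle are more likely to join (composition-dependent uptake)."""
        while len(micelle) < target_size:
            weights = [1 + micelle.count(m) for m in MOLECULE_TYPES]
            micelle.append(random.choices(MOLECULE_TYPES, weights=weights)[0])
        return micelle

    def split(micelle):
        """Random fission into two daughters of roughly equal size."""
        random.shuffle(micelle)
        half = len(micelle) // 2
        return micelle[:half], micelle[half:]

    # One generation: grow a parent micelle, split it, compare compositions.
    parent = grow(["A", "A", "B"], target_size=40)
    d1, d2 = split(parent)
    for name, m in [("parent", parent), ("daughter 1", d1), ("daughter 2", d2)]:
        print(name, {t: m.count(t) for t in MOLECULE_TYPES})

Run over repeated generations, lineages whose compositions favour their own growth persist; in that statistical sense the composition, rather than any sequence, carries the heritable information.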

Such a path for life’s origin unfolds in a way that does not necessarily depend on the orthodoxy of nucleotide base-pairing: any event of molecular complementarity will do! The highly specific roles of RNA and DNA as sequence-based information carriers and protein-encoding entities are then proposed to be a much later emergent evolutionary phenomenon. Importantly, our scenario is free of the setting in which a full-fledged system of replicating polynucleotides had to miraculously become protein-encoding. The “lipid-world compositional reproduction” setup then allows co-evolution of the polynucleotide encoders and the encoded polypeptides — in very small, more plausible steps.

The paper you have reviewed for us is highly relevant — as a key monomer-supply scenario. But the alternative lipid-world scenario, if further validated, recommends applying similarly eye-opening chemical scrutiny to the emergence of alternative molecular alphabets, including a diversity of functionalized lipids, to round out our overall understanding of life’s origins.

Posted in Center for Environmental Genetics | Comments Off on Origins of Life: A plausible route to the first genetic alphabet