Researchers Discover a Possible Pathway to Prevent COVID Infection

Due to the large interest among GEITP'ers in reading and examining the preprint described in the previous email, attached please find the Petitjean et al. pdf file (from the David Alsteens Lab), in press at Nat Commun and accepted for publication on 22 May 2022 (not 10 May, as erroneously written below). 😊😊


This brief article just appeared on Medscape, summarizing in layman's terms a paper from Belgium that was published on 10 May 2022 in Nature Communications. The article is written conservatively and succinctly, with no hysteria or hype, and it is honest about the researchers' plan to test this experimental system next in mice and then, if successful, move on to clinical studies. This experimental approach to preventing SARS-CoV-2 infection seems a reasonable and plausible concept for tackling all SARS-CoV-2 variants (current and future), irrespective of any and all viral mutations. Therefore, it is believed to be worth sharing this news ASAP. 😊

Researchers Find a Pathway to Prevent COVID Infection
Sabine Verschelde and Frédéric Soumois

25 May 2022

BRUSSELS — The Catholic University of Louvain (UCLouvain) in Belgium announced that its researchers have managed to identify the key that allows the COVID-19 virus to attack cells. What’s more, they have succeeded in closing the lock to block the virus and prevent it from interacting with the cell, thereby preventing infection.

UCLouvain emphasized that this discovery, which was published in the scientific journal Nature Communications on May 10, is sparking hope that an aerosol antiviral therapy can be developed that would eradicate the virus in the case of an infection or a high-risk contact.

For 2 years, the team under David Alsteens, PhD, a researcher at the UCLouvain Institute of Biomolecular Science and Technology, has been working hard to understand the precise molecular mechanisms the virus uses to infect a cell. They investigated the interaction between sialic acids (SAs), a type of sugar residue that is located on the surface of cells, and the SARS-CoV-2 spike (S) protein to clarify its role in the infection process.

It was already known that the function of the sugar residues that coat the cells is to promote cell recognition, thus enabling, in particular, viruses to identify their targets more easily, but also to provide them with a point of attachment and to facilitate infection of the cells.

The researchers have now discovered a variant of these sugars that interacts more strongly with the S protein than other sugars do.

In other words, the university explained, “they found the set of keys that allows the virus to open the cell door.” So, the researchers decided to catch the virus in its own trap, by preventing it from attaching to its host cell. To do this, they blocked the S protein’s points of attachment, thus suppressing any interaction with the cell surface, as if a padlock had been placed on the lock on the cell’s entry door.

The researchers added that the advantage of this discovery is that it acts on the virus, irrespective of mutations.

The team of researchers will now conduct tests on mice, applying this blockade of the virus's binding sites and observing whether it works in the body. The results should make it possible to develop a clinical antiviral therapy, administered by aerosol, for use after an infection or a high-risk contact.


Should the patient really get the drug?

THIS ARTICLE is a SUPERB assessment of physicians overtreating patients. (A corollary to this would be over-ordering fancy expensive tests to rule out exotic diagnoses, but that’s for another day.) Recent personal experience includes one 86-year-old Caucasian male concomitantly taking 26 prescription medicines; another includes a geriatric patient taking (daily) 13 prescription medicines plus 15 over-the-counter (OTC) medications.

The topic [below] is also clearly within the realm of GEITP, because diagnosis and treatment of patients for various complex diseases (multifactorial traits) involves gene-environment (GxE) interactions. And we all know that complex diseases (which reflect "genetic architecture," i.e., the underlying genetic basis of a phenotypic trait and its variational properties) are manifested by: genetic differences, epigenetic factors, environmental effects (including drugs as well as other chemicals), endogenous influences (i.e., concurrent lung, heart, kidney, etc. disease), and each individual's microbiome.

This topic also falls under the realm of "Personalized Medicine," which has recently been renamed by some as "Precision Medicine." What we like most about this article is its quantitation [the number of days of increased longevity (i.e., benefit) versus the number of days of potentially shortened longevity (i.e., detriment) due to one or more adverse drug reactions (ADRs)]. Other caveats not covered in this beautiful article include cost/benefit ratios and drug-drug interactions (DDIs). ☹


Should the patient really get the drug?

Sebastian Rushworth, M.D.

Jun 14, 2022

I recently gave a lecture to 70 primary care physicians here in Stockholm, titled "should the patient really get the drug?". Judging by the somewhat aggressive discussion that followed, the lecture generated quite a bit of cognitive dissonance among some in the audience, which suggests to me that much of what I was saying was material they had literally never been exposed to before – not at any point in medical school, and not at any point during their careers after medical school either. Cognitive dissonance is good. It's the first step towards change.

I thought it would be interesting to re-write the lecture as an article, so that more people can hopefully achieve similar levels of cognitive dissonance. Please feel free to share it with any doctors you know that you think might benefit from an expanded perspective. Anyway, here we go.

Let’s imagine a common patient. Every primary care physician meets this patient, or someone much like her, on an almost daily basis. She’s 75 years old, and overweight. She experienced a wrist fracture two years ago, and was subsequently diagnosed with osteoporosis. She has high cholesterol levels, but she’s never had a heart attack or other “cardiovascular event”. On top of that, she has type-2 diabetes, chronic knee pain due to osteoarthritis, and high blood pressure. She was diagnosed with depression a few years ago, after her husband died.

Our patient takes seven drugs every day:

1. Alendronate, because of her weak bones.

2. Atorvastatin, because of her high cholesterol levels.

3. Sertraline, because of her depression.

4. Metformin, because of her type 2 diabetes.

5. Insulin, also because of her type 2 diabetes.

6. Paracetamol (a.k.a. acetaminophen), because of her knee pain.

7. Enalapril, because of her high blood pressure.

So, the question is, are these drugs doing her any good?

Well, to answer that question, we need to consider NNT (Number Needed to Treat). NNT is the number of patients who need to take a drug for one patient to achieve a noticeable benefit.

For alendronate, the NNT is 20, i.e. if you treat 20 people for a couple of years, you prevent one fracture. For atorvastatin the NNT is 200, i.e. you need to treat 200 people for five years or so in order to prevent one heart attack. For sertraline, the NNT is 7, which means that you need to treat seven people in order to have a noticeable effect on depression in one patient. Note that this doesn’t mean that one out of seven gets cured of their depression, it just means that there is a noticeable difference on a rating scale for depression.

For metformin, the NNT is 14 – If you treat 14 type 2 diabetics with metformin for ten years, you prevent one death. For enalapril, the NNT is 70 – If you treat 70 people with high blood pressure with enalapril for five years or so, you prevent one stroke.

For insulin, however, there is no NNT, because insulin has not been shown to result in any benefit on any clinically relevant outcome, even though big studies have been carried out that have included thousands of patients and followed them for five or ten years. Note here that we’re talking about insulin for type 2 diabetics. When it comes to type 1 diabetes, insulin is pretty much magical – you don’t even need to do a randomised trial in order to show benefit. People with type 1 diabetes virtually return from the dead when treated with insulin. But when it comes to type 2 diabetes, there is no benefit, at least not to any hard outcomes. All insulin has been shown to do is reduce blood sugar, but it’s never been shown to result in any meaningful patient oriented benefit for type 2 diabetics.

The same is true for paracetamol/acetaminophen. When it comes to patients with knee pain due to osteoarthritis, the drug doesn’t provide any benefit whatsoever.
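Before tallying these up, it helps to recall that NNT is simply the reciprocal of the absolute risk reduction (ARR) observed in a trial. A minimal sketch of that arithmetic (the function name and the event-rate numbers are illustrative, not taken from any specific trial):

```python
def nnt(control_event_rate, treatment_event_rate):
    """Number Needed to Treat = 1 / absolute risk reduction (ARR)."""
    arr = control_event_rate - treatment_event_rate
    if arr <= 0:
        raise ValueError("No absolute risk reduction: NNT is undefined")
    return 1.0 / arr

# Illustrative: if 10% of untreated patients fracture over the trial
# period versus 5% of treated patients, ARR = 0.05, so NNT = 20 --
# i.e., you treat 20 people to prevent one fracture (as with
# alendronate above).
print(nnt(0.10, 0.05))  # 20.0
```

Seen this way, an NNT of 200 (as for atorvastatin) corresponds to an absolute risk reduction of only 0.5 percentage points.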

Ok, so we have seven drugs, and we know what their NNTs are. If we add the probabilities of benefit together, then we get the probability that our 75-year-old woman will benefit in some way from at least one of the drugs she's taking. So, what probability of benefit do we get?

We get 30%. Only 30%.

What that means is that there is a 70% probability that this woman doesn’t benefit at all from any of the seven drugs that she takes every day for years on end!
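As a sanity check on that figure, here is a minimal sketch of the arithmetic, taking each drug's probability of benefit as 1/NNT with the values quoted above (insulin and paracetamol contribute nothing). Naively summing the probabilities gives roughly 28%, and treating the drugs as independent gives roughly 26%, so the quoted 30% is a generous rounding:

```python
# Probability of benefit per drug ~= 1/NNT (values quoted above);
# insulin and paracetamol are omitted, since they showed no benefit.
nnts = {"alendronate": 20, "atorvastatin": 200, "sertraline": 7,
        "metformin": 14, "enalapril": 70}
probs = [1 / n for n in nnts.values()]

# Naive sum of probabilities (an upper bound -- it ignores overlap):
p_sum = sum(probs)

# Probability of at least one benefit, assuming independent drugs:
p_none = 1.0
for p in probs:
    p_none *= (1 - p)
p_any = 1 - p_none

print(f"sum: {p_sum:.1%}, at least one benefit: {p_any:.1%}")
# sum: 28.4%, at least one benefit: 25.8%
```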

If you told her, I’d say there are pretty good odds she’d decide to stop taking her pills. Seven drugs a day, every day, and two to one odds of zero benefit.

And we haven’t even talked about harms yet. Because none of these pills are inert. All have widespread biological effects. And all can cause harms. So any rational treatment decision must include not just the potential benefits, but also the potential harms.

For figuring out harms, we have NNH (Number Needed to Harm), which is the counterpoint to NNT. NNH is the number of patients who need to get a drug for one to be harmed. Like I said, the drugs all have widespread biological effects, so there isn’t just one NNH – there is an NNH for each possible harm. That means that there are multiple NNH’s for each drug.

With our 75-year old woman and her seven drugs, we don’t have time to go through the NNH for every possible side effect, so we’re just going to look at a few, and put them side by side with the NNT, to get a somewhat more complete picture of benefits vs harms. I’ve tried to make sure that the NNH numbers apply to the same time period as the NNT numbers, since otherwise it’s an apples to oranges comparison.

If we do that, we get something like this (drug by drug, in the order discussed above):

Alendronate
NNT: 20 (fractures)
NNH: 200 (esophagitis), 260 (atrial fibrillation), 4,000 (osteonecrosis)

Atorvastatin
NNT: 200 (cardiac infarction)
NNH: 20 (myalgia), 20 (type 2 diabetes)

Sertraline
NNT: 7 (depression)
NNH: 2 (sexual disturbance), 10 (hyponatremia)

Metformin
NNT: 14 (death)
NNH: 2 (stomach upset), 5 (B12 deficiency), 1,000 (lactic acidosis)

Enalapril
NNT: 70 (stroke), 125 (death)
NNH: 10 (hyperkalemia), 100 (acute kidney failure)

Insulin
NNT: 0 (no benefit to clinically relevant outcomes)
NNH: 5 (severe hypoglycemia), 1 (weight gain)

Paracetamol
NNT: 0 (no benefit to clinically relevant outcomes)
NNH: 30 (hypertension), ? (liver damage)

It’s possible to quibble here about specific NNT and NNH numbers. Different studies show different things. And many of the numbers come from studies carried out by pharmaceutical companies, which generally means that the risk of a certain side effect is massively underestimated (as we will discuss shortly). The point here isn’t to get hung up on any of the specific numbers. It’s to illustrate that we quickly end up with a very complex equation, where it in many cases isn’t clear at all whether the benefits outweigh the harms.

Take alendronate, as an example. We know that it decreases fractures in elderly osteoporotic women. But it doesn't decrease hospitalisations. The only reasonable conclusion is that the reduction in hospitalisations due to fewer fractures is offset by an increase in hospitalisations due to the many and varied side effects. So at the end of the day, the only way to decide whether or not to take the drug is to have a detailed discussion with the patient and get them to decide which set of risks they'd rather be taking.

Hippocrates is supposed to have said "primum non nocere", which is Latin for "first, do no harm". Actually, he didn't say that, and couldn't have even if he wanted to. Hippocrates was Greek, and didn't speak Latin. The quote comes from a 19th-century American physician, Worthington Hooker.

Of course, as doctors, we all know that "first, do no harm" is completely unrealistic. Every intervention we perform carries some measure of risk. If our primary guiding principle were to never do harm, we literally would never be able to do anything. A more reasonable principle is "only do something if the benefits clearly outweigh the risks". If it isn't clear to you that the benefits of a drug outweigh the harms, then don't give it to the patient.

That’s a good general rule to stick by. However, it probably isn’t enough, for a few reasons we’re now going to discuss.

A study was published in JAMA Internal Medicine in 2021 that sought to establish how good physicians are at estimating the likelihood that a patient has a certain disease. 500 primary care physicians in the US were asked to consider various hypothetical scenarios, and then answer what they thought the probability of disease was. Here’s an example of a scenario that they were asked to consider:

Ms. Smith, a previously healthy 35-year-old woman who smokes tobacco presents with five days of fatigue, productive cough, worsening shortness of breath, fevers to 102 degrees Fahrenheit (38.9 degrees centigrade) and decreased breath sounds in the lower right field. She has a heart rate of 105 but otherwise vital signs are normal. She has no particular preference for testing and wants your advice.

How likely is it that Ms. Smith has pneumonia based on this information? ___%
Ms. Smith’s chest X-ray is consistent with pneumonia. How likely is she to have pneumonia? ___%
Ms. Smith’s chest X-ray is negative. How likely is she to have pneumonia? ___%

Go ahead and make your own guesses in relation to each of the three questions.

Once you’ve done that, you can take a look at the table below, and the answers will be revealed.

So, for our pneumonia example above, we see that the actual initial risk of disease based on the presented information was around 35%. If we then move along and look at what the doctors answered, they thought the risk was 80-85%. In other words, they thought pneumonia was more than twice as likely as it actually was!

The same phenomenon was seen in all clinical scenarios tested. The doctors consistently overestimated the initial risk, and they continued to overestimate the risk after both a positive and a negative test result. In some cases the difference between reality and what the doctors thought was huge, with the doctors overestimating risk by a factor of ten or more.
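For comparison, the expected behaviour follows from Bayes' theorem: a negative test should pull a 35% pre-test probability down sharply, not leave it near 80%. A minimal sketch of the calculation (the chest X-ray sensitivity and specificity below are illustrative assumptions, not numbers taken from the study):

```python
def post_test_probability(prior, sensitivity, specificity, positive):
    """Update a pre-test disease probability given a test result
    (Bayes' theorem)."""
    if positive:
        true_pos = prior * sensitivity          # diseased, test positive
        false_pos = (1 - prior) * (1 - specificity)
        return true_pos / (true_pos + false_pos)
    false_neg = prior * (1 - sensitivity)       # diseased, test negative
    true_neg = (1 - prior) * specificity
    return false_neg / (false_neg + true_neg)

prior = 0.35             # pre-test probability of pneumonia (scenario above)
sens, spec = 0.75, 0.90  # assumed chest X-ray performance (illustrative)

print(round(post_test_probability(prior, sens, spec, True), 2))   # 0.8
print(round(post_test_probability(prior, sens, spec, False), 2))  # 0.13
```

Even with a positive X-ray the probability only rises to about 80%, and a negative X-ray should drop it to roughly 13%, which is far below what the surveyed doctors estimated.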

What can we conclude from this?

Doctors consistently overestimate disease risk.

Hold that thought, as we move on to take a quick look at another study, which was published in BMJ Open in 2015. This study sought to do something about a problem inherent in statin trials (and for that matter, all trials in medicine), which is that the results they produce, in the form of percent absolute risk, percent relative risk, and NNT, are so abstract that they're completely meaningless to patients (and for that matter, to doctors as well). We know that statins have an NNT of 200 when used for primary prevention (to prevent a heart attack in someone who has risk factors but hasn't already had a heart attack), and 40 when used for secondary prevention (to prevent additional heart attacks in someone who has already experienced a heart attack). But what do those numbers actually mean? Are they good or bad?

What the patient really wants to know is “how much longer will I live if I take this drug?”

So, what the researchers did was gather together data from all the big randomised trials of statins, and use the survival curves provided to estimate how much longer the patients actually lived. Here’s what they came up with:

All the big statin trials are included here. What's interesting is to look at the NNT provided, and then compare it with the number to the right of it, which is how much longer the patients actually lived, on average. So, for the ALLHAT trial, to take the topmost example, we have an NNT (for primary prevention) of 250, which comes down to a postponement of death of 4.96… well, 4.96 what?

Is it years? No.

Is it months? No.

The patients in the statin group lived 4.96 days longer than the patients in the placebo group. That is what the NNT of 250 means in real terms.

Let’s look instead at 4S, which was published in 1994 and is the statin trial that has produced the best results of any statin trial ever. It’s the trial that initiated the massive boom in statin prescribing that we still see today. In 4S, the NNT (for secondary prevention) is 27.8. So, in other words, one in 27.8 patients benefited from the treatment.

But what does that actually mean in terms of life extension?

It means 27 days.

Not as impressive as you would have thought, right?

When the researchers put all the data together, from all the trials, in order to get an overall average, what they found was that when statins are used for primary prevention they prolong life by 3 days. When they are used for secondary prevention, they prolong life by 4 days.

I can imagine quite a few patients turning down the offer of a statin if they knew that it would, on average, only prolong their life by a few days.

The purpose of bringing up this study was to illustrate the following general point:

Doctors consistently overestimate the benefit of the drugs they prescribe.

Hold that thought in your mind as we move on and look at a third study.

This one was published in The Lancet Healthy Longevity in 2021. It compared the rate of serious side effects seen in randomised trials with that seen in the real world. If randomised trials give us good information about what to expect in reality, then the rate of serious side effects in the trials should be the same as that seen in reality.

But that isn’t what the researchers found. What they found was that serious side effects were three to four times more common in reality than they are in the randomised trials! Three to four times!

How is this possible?

Well it’s important to remember that the randomised trials are funded and run by the drug companies, and the drug companies want to sell their drugs, so they will do what they can to make side effects appear as rare as possible.

Why is this a problem? Because it’s the randomised trials that doctors mostly use as a basis for determining whether a drug is safe to give to a patient or not.

So, what can we conclude from the study?

Doctors consistently underestimate side effects of drugs.

Ok, so we have three conclusions, that are all pointing us in the same direction:

1. Doctors consistently overestimate disease risk.

2. Doctors consistently overestimate drug benefit.

3. Doctors consistently underestimate drug harm.

What does this lead to?

Massive overprescribing of drugs.

Peter Gotzsche, a founding member of the Cochrane Collaboration and former director of the Nordic Cochrane Center, has estimated that prescription drugs are now the third biggest cause of death in the western world, after heart disease and cancer.

That on its own should lead to massive humility among all doctors about our drug prescribing. It should make us much more careful every time we think about prescribing a drug to a patient.

Ok, so we’ve identified the problem. The causes of this problem are many and complex, so I’m just going to bring up one that each of us as doctors can actually do something about – industry sponsored meals.

A study was published in JAMA Internal Medicine in August 2016 that sought to estimate the extent to which physicians are influenced by partaking in industry sponsored meals, which often take the form of a lecture about a specific drug given by a drug company salesperson, which the physician is supposed to sit and listen to in return for getting a free meal. Industry sponsored meals are very common. Most physicians probably take part in at least a couple of these per year, and many take part in far more than that.

As the saying goes, "there's no such thing as a free lunch". The drug companies are not charities whose goal it is to keep starving doctors alive. If they spend vast sums of money on sponsored meals, it's because they're pretty damn sure that it increases sales of their drugs, and thereby their profits.

So, anyway, the study sought to estimate the extent to which industry sponsored meals influence physician prescribing patterns, by comparing participation in such meals with later prescribing behaviour. Here’s what they found:

They looked at four different drugs. As I think is clear from the tables, participation in industry sponsored meals increased prescribing of the drug the meal was about, and the more such meals a doctor participated in, the more often he or she prescribed that drug.

The purpose of these meals is not to educate us, or make us better doctors. It’s the opposite – the purpose is to make us do a specific profit-driven company’s bidding. And it works.

If you’re a doctor, and you think you don’t get influenced by participating in industry sponsored meals, then you are very naive. The more industry sponsored meals we participate in, the worse doctors we become.

Doctors in general massively underestimate the extent to which their thoughts, beliefs, and opinions are influenced by the pharmaceutical industry. We like to think that we are evidence based. But the truth is that much of what we think we know is not based on sound scientific knowledge, but on pharmaceutical industry propaganda, which quickly becomes clear to anyone who starts going through the studies in detail themselves.

On that note, I strongly recommend reading these three books, all written by physicians, to help get some perspective on the scale of the problem we face in relation to the pharmaceutical industry.

1. Bad Pharma by Dr. Ben Goldacre

2. Doctoring data by Dr. Malcolm Kendrick

3. Deadly medicines and organised crime by Dr. Peter Gotzsche

There is one very simple thing every doctor can do, to at least partially free themselves from the onslaught of drug company propaganda, and that is to refuse to take part in industry sponsored lunches, and all other forms of industry sponsored “education”. Just say No.

Ok, so, that’s number one: refuse to take part in industry sponsored lunches.

What else can you do as a doctor?

Well, something that was once considered standard, but has fallen by the wayside in recent decades, is to never have a patient on more than five drugs at the same time. With drugs, as with everything else, there is a point of diminishing returns – the more you add, the less benefit (and the more harm) each additional drug confers. So try to keep a patient on at most five simultaneous drugs. If you want to add a sixth, then rank them all, and get rid of the one that you think is least important. Most likely, the least important drug in a list of six is not going to do anything useful for the patient anyway, just increase their risk of harm.

Ok, so that’s number two: try to avoid having your patients on more than five drugs simultaneously.

Number three: go through the patient’s drug list with them once a year, and get rid of anything that isn’t clearly conferring a benefit. As any doctor will know, it’s common for patients to stay on drugs for years, even though the original reason they were put on the drug resolved itself a long time ago. The patient often doesn’t remember why they were put on the drug in the first place, but they keep taking it dutifully. Drug lists require regular pruning or they will become increasingly bloated as the years go by, which is one reason why so many elderly people are on 15 simultaneous drugs or more.

Number four: only prescribe a drug if the benefits clearly outweigh the harms. This should be obvious, but it requires a deep knowledge of the size of both potential benefit and potential harm, which unfortunately most doctors lack. And what they think they know is often incorrect because it’s based more on pharma propaganda than real science.

As a doctor, the only way to get around this is to start doing your due diligence and getting into the weeds of the scientific studies. Do that for the ten drugs you prescribe most commonly, so that you're an expert on those ten drugs, and you've already done a lot. If a patient asks you about the probability of benefit and the probability of harm, you should be able to answer that question correctly, at least for the ten drugs you use most frequently. It requires an up-front investment of time, but it will pay massive dividends to your patients over the remainder of your career.

Ok, so that was number four: only prescribe a drug if the benefit clearly outweighs the harm.

Here’s number five: prioritise lifestyle changes. Most of the diseases that doctors spend most of their time dealing with are caused by poor lifestyle choices. And most can be rectified by switching to good lifestyle choices, which invariably produce greater benefits than any drug can, with less risk of harm.

Doctors can accomplish a lot with their patients with simple lifestyle coaching. To take one example, a primary care clinic in the UK decided to try putting their type 2 diabetic patients on a ketogenic diet, since the drugs they were using clearly weren't making the patients better. They published their six-year follow-up results in BMJ Nutrition, Prevention and Health in 2020.

Over six years, the patients following the ketogenic diet decreased their median HbA1c (a measure of average blood sugar over the preceding few months) from 66 to 48 mmol/mol. Normally, that would be unheard of. HbA1c doesn't decrease over time in a type 2 diabetic; it increases. Yet here it was far better at the end of the six years than at the beginning. The same goes for body weight. Normally it goes up over time. But here the median decreased from 99 kg to 91 kg. And on top of that, median systolic blood pressure dropped from 152 to 141 mm Hg.

All this just with a simple diet intervention. Thanks to the improvements in all health markers, the patients were able to get off a lot of their drugs. This meant that after six years, the clinic was spending less than half as much money on anti-diabetic drugs as the other primary care clinics in the region.

To take another example of a simple lifestyle intervention, a randomised trial published in BMJ in 2021 that was carried out in nursing homes in Australia found that a diet high in protein has an effect on fracture risk that is equivalent to that seen with bisphosphonates.

There is a massive amount that can be accomplished with simple lifestyle interventions, and since they are much less risky than drugs, and actually treat the underlying problem rather than just putting a patch on top of it, they should be the primary intervention we use whenever possible. Drugs should be viewed as a complement to lifestyle interventions. It shouldn’t be the other way around.

Ok, so that was my fifth and final point. I’ll repeat the five points here again. These are five things that you as a doctor can do about the situation we currently find ourselves in, where prescription drugs are the third biggest killer in the western world:

1. Refuse to participate in industry sponsored lunches and other industry sponsored “education”.

2. Try to avoid having your patients on more than five drugs simultaneously.

3. Go through the patient’s drug list with them once a year, and get rid of anything that isn’t clearly conferring a benefit.

4. Only prescribe a drug if the benefits clearly outweigh the harms.

5. Prioritise lifestyle changes.


COMMENT: Dear Dan: I read the entire piece, and indeed it was excellent.
A couple of comments:
1) It would have been nice if the author had posted a link as to where non-MDs might find NNT for pharmaceuticals. ——John, this URL should help you:
2) It would have been nice if the author had explained the important difference between relative benefit and absolute benefit. ——The best URL on this topic that I could find was: —DwN 😊
regards, JD Energy Advocate and Environmental Consultant

This is a profound piece of information.

I am sending this out to the usual suspects. I fully agree with the conclusions and have encountered these situations regularly. The profound aspects of the data need very careful assessment. He did leave out the anxiolytics, antidepressants, muscle relaxants, and NSAIDs that we prominently encounter. He also failed to address drug-drug interactions, but these are minor criticisms.

The over-diagnosis of non-pathologic, created disease states (e.g., CRPS, chronic (fill in the body part…low back, neck, arm, etc.) pain, fibromyalgia, chronic fatigue, etc….) contributes to this iatrogenic stagnation and pollution of our profession and our respective efforts to improve the health of humanity. The effort itself is an indescribable uphill battle, due to the natural gravitational forces of stupidity.

His commentary regarding discontinuation of harmful life choices is spot on. Discussions with the morbidly obese, smoking, drinking, chewing, inert population are guaranteed to generate a drop in patient return visits.

The bottom line is priceless. But it will unlikely be heeded until gravity has succeeded in planting the inert population below the earth’s surface……



Imagine if you will: Every time scientists post (on the internet) research data showing that “genes are composed of DNA and are responsible for heritable changes,” an opposing evil faction (SPECTRE) immediately (on the web) denounces that statement as “misinformation” (i.e., “everyone knows that inheritance is explained solely by epigenetic effects that have nothing to do with alterations in DNA sequence”). In addition, Science magazine, as well as all journals under the Nature Publishing Group umbrella (a vast organization comprising ~148 journals), have declared: “Any manuscript submitted to us, claiming that ‘genes are composed of DNA and are responsible for heritable changes, etc.’ will not be considered for publication and will be rejected without further review.”

Imagine if you will: Every time scientists post (on the web) a research paper saying that “Darwinian evolution has been occurring on this planet, starting more than 4 billion years ago, ultimately resulting today in all species,” several creationist internet sites instantly denounce those data as “misinformation” (i.e., “everyone knows that Earth was created on 4 Oct 4004 B.C. — meaning that we are now in the 6,026th year of its existence; man and dinosaur were both created on that same day.”). In addition, Science magazine, as well as all journals under the Nature Publishing Group umbrella, have stated: “Any manuscript submitted to us, claiming that ‘Darwinian evolution has occurred on Earth, starting more than 4 billion years ago, ultimately resulting today in all species on Earth today, etc.’ will not be considered for publication and will be rejected without further review.”

As silly as these last two imaginary paragraphs sound — this is exactly what has happened in the field of climatology during the last three or four decades. This censoring, and the labeling of scientific facts as “misinformation,” occurs not only on the internet and in most scientific journals — but also in many newspapers and magazines, and in mandatory lesson plans from pre-K through college in many U.S. states. In other words, the scientific side of the debate is completely censored, while the alarmists (who have no scientific facts but depend only on computer modeling) are allowed to spread hype and hysteria.

Those on the scientific side of the debate are called “deniers” or “skeptics”; alarmists call themselves “believers” or “realists” who are convinced that human activity is the cause of global warming and that rising atmospheric CO2 is the worst culprit. Hence, the hype about “carbon footprints” and the need to “decarbonize.” [Atmospheric CO2 (a colorless, odorless gas) has risen from ~280 parts-per-million (ppm) in 1850 (at the end of the Little Ice Age) to ~410 ppm today — which has improved plant growth. However, keep in mind that each time you exhale, your breath contains 40,000 to 50,000 ppm of CO2.]
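For readers who like to see the arithmetic, the concentrations quoted above can be put on a common scale with a few lines of Python (a simple sketch using only the values quoted in the text):

```python
# ppm ("parts per million") converts to percent by dividing by 10,000.
ambient_ppm = 410        # present-day atmospheric CO2, as quoted above
exhaled_ppm = 45_000     # midpoint of the 40,000-50,000 ppm range for exhaled breath

print(f"Ambient CO2: {ambient_ppm / 10_000:.3f} %")   # 0.041 %
print(f"Exhaled CO2: {exhaled_ppm / 10_000:.1f} %")   # 4.5 %
print(f"Exhaled breath is ~{exhaled_ppm / ambient_ppm:.0f}x ambient")  # ~110x
```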

In contrast, Russia and China have not accepted this silliness that “the world is experiencing a climate crisis.” In October 2003, Vladimir Putin convened the World Climate Change Conference in Moscow, because he wanted to hear from both sides of the debate; after hearing the whole story, Putin concluded “there is no global warming; this is a deliberate fraud to restrain industrial development of several countries including Russia.” Xi Jinping, president of the People’s Republic of China since 2013, agrees with Putin and has approved construction of several hundred additional coal-fired power plants in his country during the next decade.

How does all of the above involve our “Gene-Environment Interactions Training Program” (GEITP) email blogs? Well, since 2008, this email blog has included the topic of fraud and corruption in science. Many times, we’ve discussed the Linear No-Threshold (LNT) Theory, which has never been scientifically proven (cf. the numerous publications by Professor Ed Calabrese), yet was accepted as scientific fact in the 1950s. LNT Theory was established as “government policy” and continues to be used today in thousands of risk-assessment experiments in laboratory animals — trying to estimate the carcinogenic and toxic potential of environmental agents by extrapolation to humans. This has led to billions of dollars of taxpayer money being wasted without good reason. LNT is regarded by GEITP as the “second-most expensive fraud” in Western World countries.

GEITP ranks “Anthropogenic Global Warming” (AGW) Theory as the “#1 most expensive fraud” in Western World countries, because since the 1980s it has developed into a multi-trillion-dollar industry. [In the 1970s, the same groups were hysterical about “global cooling.”] In large part, the cancellation of American energy independence (in January 2021), making America once again dependent on foreign fossil fuels (while focusing on development of solar and wind energy — which together cannot supply even 10% of America’s energy needs), happened because leaders of this country believe in human- and CO2-caused global warming; sadly, this was a big factor leading to the invasion of Ukraine.

Alongside a career in pharmacology, toxicology, biophysics and biochemistry of enzyme induction, gene nomenclature, cancer, developmental biology, clinical human genetics, and evolutionary genomics — all fields that are seriously factual and quantitative — Yours Truly developed a strong interest in the field of climatology during the past 20 years. It quickly became obvious that every fact in climatology is immediately fervently challenged and put down by hysterical alarmists, without any scientific rationale; politics are unfortunately involved, and “politics” are opinions that have nothing to do with scientific factual data. ☹

Below is the URL for a 40-minute video lecture by David Siegel, in which he covers an entire semester of Climatology. His lecture is divided into six segments: (a) What is climate? (b) The story of CO2. (c) Orbital mechanics and temperature. (d) Emission equilibrium; the greenhouse effect. (e) Thermal equilibrium; how Earth’s climate really works. (f) Predictions for the future. If you left-click on “CC,” you can turn on closed captions — which is highly recommended, because Siegel speaks quickly and covers a great deal of material in a short amount of time. It is recommended that you take one segment at a time, review it repeatedly while taking notes until everything is understood, and then proceed to the next segment. And so on.

“Climate” is measured in 30-year segments, three per century; meteorological conditions that occur over periods shorter than 30 years are called “adverse weather events.” Causes of climate change include: solar activity (frequency and strength of sunspots); changes in “radiative forcing” (the balance between solar radiation energy absorbed by Earth and energy radiated back into space); heat distribution between oceanic and atmospheric systems; cosmic-ray flux; eccentricity, axial tilt, and precession of Earth’s orbit; influence by the magnetic effects of other planets (especially Jupiter); and catastrophic events such as underwater and above-water volcanic eruptions and impacts of substantial-sized meteorites.
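The 30-year convention is easy to illustrate: a single “climate” data point is simply a 30-year average of yearly weather. Here is a minimal, self-contained Python sketch; the temperature series is synthetic (randomly generated) and purely illustrative:

```python
import random

# "Climate" = a 30-year average; individual noisy years are merely "weather."
random.seed(0)
years = list(range(1900, 2000))
annual_temp = [14.0 + random.gauss(0, 0.5) for _ in years]  # noisy yearly means (deg C)

def climate_normal(series, start, window=30):
    """Average one 30-year segment of a yearly series: one 'climate' value."""
    return sum(series[start:start + window]) / window

# Three non-overlapping 30-year "normals" fit within a century:
for start in range(0, 90, 30):
    print(f"{years[start]}-{years[start + 29]}: "
          f"{climate_normal(annual_temp, start):.2f} deg C")
```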

The Intertropical Convergence Zone (ITCZ), known by sailors as “the doldrums” or “the calms” because of its monotonous windless weather, is the area where the northeast and southeast trade winds converge; it encircles Earth near the thermal equator — although its specific position varies seasonally.

As you can learn from Siegel’s lecture, a recent advance in climatology is a better understanding of how the ITCZ affects and interacts with the Pacific Decadal Oscillation (PDO), North Atlantic Oscillation (NAO), Arctic Oscillation/Northern Annular Mode (AO/NAM), Atlantic Multidecadal Oscillation (AMO), El Niño-Southern Oscillation (ENSO), Indian Ocean Dipole (IOD), and Southern Annular Mode (SAM) weather oscillations/cycles. These oscillations occur over decades or centuries in each ocean — affecting the balance between (incoming) solar heat and (outgoing) albedo (reflectivity), combined with radiative trapping by greenhouse gases. But Siegel goes even further back, into the distant past, describing plate tectonics and continent/ocean formation, which set the stage for this past century’s climate changes.
He concludes there is no relationship between atmospheric CO2 levels and global temperature. No scientist has found any evidence that humans are causing global climate to change (either warmer or cooler). There is no climate emergency. To scare children with the hysteria that “the end of life as we know it, is near” (eco-anxiety) — is a form of child abuse.

David Siegel received his degree in mathematics from the University of Colorado at Boulder. In 1985, he earned a master’s degree in digital typography from Stanford University. His first job was with Pixar, and he then became an entrepreneur in website design. He became interested in the field of climate and atmospheric science in the late 1980s, and over the past 30+ years he has found his niche there. IMHO, this YouTube video is one of the clearest layman’s (nonpolitical) descriptions of how complicated “atmospheric science” really is. 😊

And then, just last week, we received another NIEHS email [see below], describing new funding opportunities for “climate change and health research.” Sadly, these federal funds given to NIH represent just a tiny portion of the trillions of dollars being invested each year in the climate-change scam. Where or when will all this wasted taxpayer money stop? We have no idea. ☹

From: Emails for Active PIs with grants at NIEHS On Behalf Of McNair, Liz (NIH/NIEHS) [E]
Sent: Tuesday, June 7, 2022 7:57 AM
Subject: NIH Climate Change and Health Initiative – New Funding Opportunities

New Funding Opportunities for Climate Change and Health Research

The National Institutes of Health (NIH) has just released funding notices and opportunities for research into how climate change affects human health, and to reduce disparities in climate change-related health outcomes. NIH encourages applicants to focus on NIH-priority populations and propose transdisciplinary research that falls broadly into the Core Elements and Supporting Areas of Science outlined in the Climate Change and Health Initiative Strategic Framework.

There are four new solicitations:

RFA-ES-22-003: Research Coordinating Center for the Climate Change and Health Community of Practice (U24 – Clinical Trial Not Allowed) etc., etc.


Posted in Center for Environmental Genetics | Comments Off on CLIMATOLOGY 101

FGF17 in “young CSF” is a tonic for memory ??

Age-related cognitive decline should be a concern to all of us(!!). Whereas a healthy diet and regular exercise can help slow down or prevent this decline, as yet there are no treatments to reverse gradual diminution in memory. As a possible means of treatment, authors [see attached article & editorial] focused on cerebrospinal fluid (CSF) — which bathes brain tissue and contains several protein growth-factors necessary for normal brain development.

CSF from young adult mice (10 weeks old) was infused into the brains of aged mice (18 months old) over 7 days. This treatment improved memory recall of the old animals in a fear-conditioning test (in which they learn to associate a small electric shock with a tone and flashing light). Authors then sought to determine how CSF treatment might alter gene expression in the hippocampus (a key memory center in the brain that’s often the focus of studies of age-associated cognitive decline). Cells in the central nervous system (CNS) called oligodendrocytes produce myelin (a fatty protein-rich material that insulates neuronal fibers called axons). Myelination of axonal projections throughout the brain ensures that strong signal connections between neurons are maintained.

Authors found that genes typically expressed in oligodendrocytes were highly up-regulated in old mice treated with CSF from young mice (compared with control animals treated with artificial CSF). Previous work had demonstrated that successful fear-conditioning in mice requires oligodendrocyte proliferation and myelin formation, and that disruption of this process impairs memory. Consistent with this, authors found that “young CSF” more than doubled the number of oligodendrocyte precursor cells (OPCs) in the hippocampus of old animals; this cellular change was followed 3 weeks later by an increase in myelin production. Their findings strongly suggest that young CSF improves the cognitive abilities of aged mice by modulating oligodendrocyte proliferation and maturation.
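The kind of group comparison described here is typically summarized as a log2 fold-change. Below is a minimal sketch of that calculation; the gene counts are invented placeholders, not the paper’s data:

```python
import math

# Hypothetical normalized read counts for oligodendrocyte-related genes in old
# mice given young CSF vs. artificial CSF (aCSF). Values are invented.
mean_counts = {
    "Mbp":  {"aCSF": 120.0, "young_CSF": 310.0},  # myelin basic protein
    "Plp1": {"aCSF":  95.0, "young_CSF": 240.0},  # proteolipid protein 1
    "Srf":  {"aCSF":  40.0, "young_CSF": 160.0},  # serum response factor
}

def log2_fold_change(treated, control):
    """log2(treated/control): +1 means doubled, -1 means halved."""
    return math.log2(treated / control)

for gene, grp in mean_counts.items():
    fc = log2_fold_change(grp["young_CSF"], grp["aCSF"])
    print(f"{gene}: log2FC = {fc:+.2f}")  # positive = up-regulated by young CSF
```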

The greatest increase in gene expression in response to young CSF treatment was in the serum response factor gene (Srf), which encodes a transcription factor that initiates cell proliferation and differentiation. Six hours after young human CSF administration to OPC cultures, Srf expression had returned to baseline levels, but downstream targets — related to cell cycle and proliferation — were up-regulated. The authors confirmed that these SRF-signaling pathways were also activated in old mice after young CSF administration.

CSF contains a rich cocktail of signaling molecules and growth factors — many of which could induce the SRF-signaling pathways seen in OPCs. Authors searched for candidate factors capable of inducing Srf expression in published protein databases, and fibroblast growth factor-17 (FGF17) emerged as the most compelling candidate. They then showed that FGF17 is robustly expressed in mouse neurons, exhibits decreased expression in aged mice, and induces OPC proliferation in rat cultured OPCs.

FGF17 infusion into old CSF partially recapitulated the effects of the young CSF, both in cell culture and in the intact animal, improving memory recall of aged mice. Finally, authors demonstrated that inhibition of FGF17 in cultured OPCs treated with young CSF was sufficient to inhibit OPC proliferation, and that treatment of young mice with FGF17 blockers impaired cognition. These data strongly suggest that FGF17 is a CSF-borne factor crucial for cognition, and show that its effects are probably mediated by oligodendrocytes and myelination in the hippocampus. How FGF17 is distributed in the CSF and delivered to target cells in the hippocampus presents a new direction of research. 😊

Probably several start-up companies have already begun to synthesize FGF17, and to write up clinical research proposals to treat patients in memory-care centers — and to help politicians whose brains always seem to be failing. 😉


Nature 19 May 2022; 605: 509-515 & News-N-Views pp 429-429

Posted in Center for Environmental Genetics | Comments Off on FGF17 in “young CSF” is a tonic for memory ??

Human gut bacteria produce TH17-modulating bile acid metabolites

Bile acids are steroid-like natural products that are secreted into the gastrointestinal (GI) tract of vertebrate animals after eating — where they act as “detergents” that aid in digestion, as well as ligands for host receptors. In the gut, host-derived primary bile acids are metabolized by resident microbes to form a large group of compounds called secondary bile acids. Both primary and secondary bile acids regulate host metabolism and immune responses. [As ligands for receptors, bile acids represent “extracellular signals,” which then elicit various responses in particular cell-types of the host. This is why the attached article is relevant to gene-environment interactions.] 😉

Bile acids modulate the differentiation and function of T cells, including pro-inflammatory TH17 cells and anti-inflammatory regulatory T (Treg) cells, which help to protect the host against extracellular pathogens and to maintain host immune tolerance, respectively. Specifically, secondary bile acids — such as isoallo-lithocholic acid (isoalloLCA) and iso-deoxycholic acid (isoDCA) — modulate the differentiation of Treg cells. Moreover, 3-oxoLCA inhibits TH17 cell differentiation by blocking the function of the nuclear hormone receptor retinoic-acid-receptor-related orphan nuclear receptor-γt (RORγt).

3-OxoLCA is absent from the caecum of germ-free C57BL/6 mice — suggesting that gut bacteria may synthesize 3-oxoLCA. However, it is not known which commensal [i.e., a relationship between individuals of two species in which one species obtains food or other benefits from the other, without either harming or benefiting the latter] bacterial species, and which bacterial enzyme(s), produce 3-oxoLCA, and whether this compound (or additional secondary bile acids that modulate TH17 cell responses) is implicated in the pathogenesis of inflammatory bowel disease (IBD).

In the attached paper, authors used a screen of human stool isolates to identify the gut bacterial species that produce 3-oxoLCA, as well as an abundant gut metabolite, isolithocholic acid (isoLCA); authors go on to demonstrate that both metabolites inhibit TH17 cell differentiation. Multi-omics analyses of two IBD registries revealed that 3-oxoLCA and isoLCA, as well as the bacterial genes responsible for their production, are negatively associated with IBD and with TH17-cell-related host gene expression. Taken together, these data suggest that bacterial production of the “3-oxoLCA and isoLCA bile acids” may contribute clinically to gut immune homeostasis in humans.
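To make “negatively associated” concrete, here is a toy Python sketch comparing a bile acid’s abundance against disease status in a hypothetical cohort; every number is invented (the actual study used multi-omics data from two IBD registries):

```python
import statistics

# Hypothetical relative abundance of a bile acid in 8 stool samples,
# and the matching disease status (0 = control, 1 = IBD). Invented values.
abundance = [8.2, 7.9, 9.1, 8.5, 3.1, 2.8, 3.5, 2.9]
has_ibd   = [0,   0,   0,   0,   1,   1,   1,   1  ]

# Pearson correlation computed by hand (stdlib only, any Python 3 version):
mx, my = statistics.mean(abundance), statistics.mean(has_ibd)
cov = sum((x - mx) * (y - my)
          for x, y in zip(abundance, has_ibd)) / (len(abundance) - 1)
r = cov / (statistics.stdev(abundance) * statistics.stdev(has_ibd))

print(f"Pearson r between abundance and IBD status: {r:.2f}")  # strongly negative
```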
Recall that: The genetic architecture [i.e., total landscape of genetic contributions to a given phenotype] represents the contribution of: [a] genetics [DNA sequence differences], [b] epigenetics [DNA methylation, RNA-interference, histone modification, and/or chromatin remodeling — all processes that do not alter DNA sequence], [c] environmental factors, [d] endogenous influences, and [e] the interindividual microbiome. This topic [attached paper] suggests that a trait such as “individual risk of IBD” might be affected by at least three, if not all five, of these categories…!! 😊

Nature 31 Mar 2022; 603: 907-912

COMMENT: This discussion is a great gene-environmental interaction. Hope all is well. JB

Posted in Center for Environmental Genetics | Comments Off on Human gut bacteria produce TH17-modulating bile acid metabolites

“Completing” the human genome (this time, for real ??)

From: Nebert, Daniel (nebertdw)
Sent: den 9 maj 2022 00:25
Subject: “Completing” the human genome (this time, for real ??) #2

The attached articles accompany the “human genome sequence story” — sent to everyone within the past hour. From left to right: [a] Identification of segmental duplications; [b] Genomic and epigenetic maps of centromeres; [c] The transcriptional and epigenetic state of repeat elements; and [d] Epigenetic patterns throughout the entire completed human genome. 😊DwN

Science, 1 Apr 2022; 376: 55, 56, 57 and 58

Since its initial release in April of 2000, the human reference genome has covered only the euchromatic fraction of the genome [euchromatin is the lightly packed form of chromatin (DNA, RNA, and protein) that is enriched in genes and is often (but not always) under active transcription].

This left the important heterochromatic regions (the tightly packed form of chromatin) unfinished. Completing the remaining 8% of the genome [see attached article], the Telomere-to-Telomere (T2T) Consortium now presents the complete 3.055 billion–base pair sequence of a human genome (T2T-CHM13), which: [a] includes gapless assemblies for all chromosomes except Y; [b] corrects errors in the prior references; and [c] introduces nearly 200 million new base pairs (bp) of sequence — containing 1,956 new gene predictions, 99 of which are predicted to be protein-coding.
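A quick back-of-the-envelope check on the figures quoted above (the percentages are derived from the quoted numbers, nothing more):

```python
# Quick arithmetic on the T2T-CHM13 numbers quoted in the text.
total_bp  = 3.055e9   # complete assembly length, base pairs
new_bp    = 2.0e8     # "nearly 200 million" newly introduced base pairs
new_genes = 1956      # new gene predictions
coding    = 99        # of which predicted protein-coding

print(f"net-new sequence: {new_bp / total_bp:.1%} of the assembly")
print(f"protein-coding among new gene predictions: {coding / new_genes:.1%}")
# The net-new ~6.5% is smaller than the "8%" that was previously unresolved,
# presumably because part of that 8% was present but mis-assembled (and thus
# corrected rather than newly added) in earlier references.
```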

The completed regions [see attached] include all centromeric satellite arrays, recent segmental duplications, and the short arms of all five acrocentric (i.e., the centromere is situated so that one chromosomal arm is much shorter than the other arm) chromosomes, unlocking these complex regions of the genome — so that variational and functional studies can now be carried out.

The current human reference genome was released by the Genome Reference Consortium (GRC) in 2013, and most recently patched in 2019 (GRCh38.p13). This reference traces its origin to the publicly-funded Human Genome Project and has been continually improved over the past two decades. Unlike the competing Celera company effort, and most modern sequencing projects based on “shotgun” sequence assembly, the GRC assembly was constructed from sequenced bacterial artificial chromosomes (BACs) that were ordered and oriented along the human genome by means of radiation hybrid, genetic linkage, and fingerprint maps.

However, limitations of BAC cloning have led to an underrepresentation of repetitive sequences, and the opportunistic assembly of BACs derived from multiple individuals resulted in a mosaic of haplotypes. As a result, several GRC assembly gaps are unsolvable — because of incompatible structural polymorphisms on their flanks, and many other repetitive and polymorphic regions were left unfinished, or incorrectly assembled.

To finish the last remaining regions of the genome, authors leveraged the complementary aspects of PacBio HiFi and Oxford Nanopore ultralong-read sequencing to assemble the uniformly homozygous CHM13hTERT cell line (hereafter, CHM13). The resulting T2T-CHM13 reference assembly removes a 20-year-old barrier that had hidden 8% of the genome from sequence-based analysis — including all centromeric regions and the entire short arms of five human acrocentric chromosomes. Authors describe [see attached] the construction, validation, and initial analysis of a truly complete human reference genome and discuss its potential impact on the field. 😊

COMMENT: The BIG question, now, becomes — “Because we have knowledge of ‘this last 8% of the genome’ that has been unavailable until now, do all important GWAS studies need to be repeated?” DwN
COMMENT: Great, many thanks! MI-S
Science, 1 Apr 2022; 376: 44-53


Posted in Center for Environmental Genetics | Comments Off on “Completing” the human genome (this time, for real ??)

Human Single-Cell Transcriptome CellAtlas

Our understanding of how individual cells form distinct tissues and organs, and how cells interact with one another, is incomplete. Recent single-cell RNA sequencing (scRNA-seq) analyses have described the landscapes of individual cell-types, along with their abundance and interactions, in homeostasis and during disease states; but these studies are often limited to a single organ. A systematic comparison of cell-types across different tissues is needed to understand shared and variable transcriptional features, and how these specializations are important for organ function.

The [three attached articles, plus editorial] report pan-tissue single-cell transcriptome atlases — covering more than a million cells, including 500 cell-types, across more than 30 human tissues from 68 donors. These attached articles apply rigorous ontologies to consistently annotate and compare single cells between organs.

Interrogation of these large datasets reveals tissue-agnostic (i.e., shared across all tissues) and tissue-specific cell features, identifies rare cell-types, and provides insights into cell-states that are likely to underlie disease pathogenesis (see the figure in the editorial, 3rd attachment from left). The Tabula Sapiens Consortium created a human reference atlas across 24 different tissues and organs using scRNA-seq, leading to the characterization of more than 400 cell-types spanning epithelial, endothelial, stromal, and immune-cell compartments.

Eraslan et al. (1st attachment from left) took a complementary approach by applying single-nucleus RNA sequencing (snRNA-seq) to eight human tissue-types, profiling neuronal cells, muscle cells, and adipocytes (the last of which are very difficult to dissociate and capture using scRNA-seq). These cross-tissue approaches recapitulated conserved cell-type features and revealed cell-state adaptations to distinct tissue environments.

Conde et al. (2nd attachment from left) present an immune-cell atlas of myeloid and lymphoid lineages across adult human tissues. They developed CellTypist for automated immune-cell annotation and performed an in-depth dissection of cell populations, identifying 101 cell-types or states from more than one million cells, including previously underappreciated cell-states. These incredible summary papers represent a MAJOR breakthrough in single-cell identification, opening up all kinds of imaginable experiments that can now be proposed…!! 😊
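To give a flavor of what automated cell-type annotation involves (this is NOT CellTypist’s actual method; it is a toy nearest-centroid sketch, with invented marker genes, labels, and expression values):

```python
import math

# Toy reference: mean expression of 3 hypothetical marker genes per cell-type.
MARKERS = ["CD3E", "EPCAM", "PECAM1"]   # invented marker-gene panel
reference = {
    "T cell":      [9.0, 0.5, 0.2],
    "epithelial":  [0.3, 8.5, 0.4],
    "endothelial": [0.2, 0.6, 7.8],
}

def annotate(cell_profile):
    """Return the reference label whose profile is nearest (Euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(reference, key=lambda label: dist(reference[label], cell_profile))

print(annotate([8.1, 0.4, 0.3]))  # closest to the "T cell" profile
```

Real annotators work on thousands of genes and use trained classifiers rather than raw distances, but the core idea is the same: match each cell to the reference profile it most resembles.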


Science, 13 May 2022; 376: 712, 713, & editorial pp 695-696

Posted in Center for Environmental Genetics | Comments Off on Human Single-Cell Transcriptome CellAtlas

“Life” on Earth can be weird (!!!)

Two miles underground, strange bacteria are found thriving
by Chad Boutin
Oct. 20, 2006

A Princeton-led research group has discovered an isolated community of bacteria nearly two miles underground that derives all of its energy from the decay of radioactive rocks rather than from sunlight. According to members of the team, the finding suggests life might exist in similarly extreme conditions even on other worlds.

The self-sustaining bacterial community, which thrives in nutrient-rich groundwater found near a South African gold mine, has been isolated from the Earth’s surface for several million years. It represents the first group of microbes known to depend exclusively on geologically produced hydrogen and sulfur compounds for nourishment. The extreme conditions under which the bacteria live bear a resemblance to those of early Earth, potentially offering insights into the nature of organisms that lived long before our planet had an oxygen atmosphere.

The scientists, who hail from nine collaborating institutions, had to burrow 2.8 kilometers beneath Earth’s surface to find these unusual microbes, leading them to speculate that life could exist in similar circumstances elsewhere in the solar system.

“What really gets my juices flowing is the possibility of life below the surface of Mars,” said Tullis Onstott, a Princeton University professor of geosciences and leader of the research team. “These bacteria have been cut off from the surface of the Earth for many millions of years, but have thrived in conditions most organisms would consider to be inhospitable to life. Could these bacterial communities sustain themselves no matter what happened on the surface? If so, it raises the possibility that organisms could survive — even on planets whose surfaces have long since become lifeless.”

Onstott’s team published its results in the Oct. 20 issue of the journal Science. The research group includes first author Li-Hung Lin, who performed many of the analyses as a doctoral student at Princeton and then as a postdoctoral researcher at the Carnegie Institution.

“These bacteria are truly unique, in the purest sense of the word,” said Lin, now at National Taiwan University. “We know how isolated the bacteria have been, because analyses of the water that they live in showed that it’s very old and hasn’t been diluted by surface water. In addition, we found that the hydrocarbons in the environment did not come from living organisms, as is usual, but rather that the source of the hydrogen needed for their respiration comes from the decomposition of water by radioactive decay of uranium, thorium and potassium.”

Because the groundwater the team sampled to find the bacteria comes from several different sources, it remains difficult to determine specifically how long the bacteria have been isolated. The team estimates the time frame to be somewhere between three and 25 million years, implying that living things are even more adaptable than once thought.

“We know surprisingly little about the origin, evolution and limits for life on Earth,” said biogeochemist Lisa Pratt, who led Indiana University Bloomington’s contribution to the project. “Scientists are just beginning to study the diverse organisms living in the deepest parts of the ocean, and the rocky crust on Earth is virtually unexplored at depths more than half a kilometer below the surface. The organisms we describe in this paper live in a completely different world than the one we know at the surface.”

That subterranean world, Onstott said, is a lightless pool of hot, pressurized salt water that stinks of sulfur and noxious gases humans would find unbreathable. But the newly discovered bacteria, which are distantly related to the Firmicutes division of microbes that exist near undersea hydrothermal vents, flourish there.

“The radiation allows for the production of lots of sulfur compounds that these bacteria can use as a high-energy source of food,” Onstott said. “For them, it’s like eating potato chips.”

But the arrival of the research team brought one substance into the underground world that, though vital to human survival, proved fatal to the microbes — air from the surface.

“These critters seem to have a real problem with being exposed to oxygen,” Onstott said. “We can’t seem to keep them alive after we sample them. But because this environment is so much like the early Earth, it gives us a handle on what kind of creatures might have existed before we had an oxygen atmosphere.”

Onstott said that many hundreds of millions of years ago, some of the first bacteria on the planet may have thrived in similar conditions, and that the newly discovered microbes could shed light on research into the origins of life on Earth.

“These bacteria are probably close to the base of the tree for the bacterial domain of life,” he said. “They might be genealogically quite ancient. To find out, we will need to compare them to other organisms such as Firmicutes and other such heat-loving creatures from deep-sea vents or hot springs.”

The research team is building a small laboratory 3.8 kilometers beneath the surface in the Witwatersrand region of South Africa to conduct further study of the newly discovered ecosystem, said Onstott, who hopes the findings will be of use when future space probes are sent to seek life on other planets.

“A big question for me is, how do these creatures sustain themselves?” Onstott said. “Has this one strain of bacteria evolved to possess all the characteristics it needs to survive on its own, or are they working with other species of bacteria? I’m sure they will have more surprises for us, and they may show us one day how and where to look for microbes elsewhere.”

Other authors of this work include Johanna Lippmann-Pipke of GeoForschungsZentrum, Potsdam, Germany; Erik Boice of Indiana University; Barbara Sherwood Lollar of the University of Toronto; Eoin L. Brodie, Terry C. Hazen, Gary L. Andersen and Todd Z. DeSantis of Lawrence Berkeley National Laboratory, Berkeley, Calif.; Duane P. Moser of the Desert Research Institute, Las Vegas; and Dave Kershaw of the Mponeng Mine, Anglo Gold, Johannesburg, South Africa.

Pratt and Onstott have collaborated for years as part of the Indiana-Princeton-Tennessee Astrobiology Institute (IPTAI), a NASA-funded research center focused on designing instruments and probes for life detection in rocks and deep groundwater on Earth during planning for subsurface exploration of Mars. IPTAI’s recommendations to NASA will draw on findings discussed in the Science report.

This work was also supported by grants from the National Science Foundation, the U.S. Department of Energy, the National Science Council of Taiwan, the Natural Sciences and Engineering Research Council of Canada, Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) and the Killam Fellowships Program.



Long-Term Sustainability of a High-Energy, Low-Diversity Crustal Biome

By Li-Hung Lin, Pei-Ling Wang, Douglas Rumble, Johanna Lippmann-Pipke, Erik Boice, Lisa M. Pratt, Barbara Sherwood Lollar, Eoin L. Brodie, Terry C. Hazen, Gary L. Andersen, Todd Z. DeSantis, Duane P. Moser, Dave Kershaw, and T. C. Onstott

Geochemical, microbiological, and molecular analyses of alkaline saline groundwater at 2.8 kilometers depth in Archaean metabasalt revealed a microbial biome dominated by a single phylotype affiliated with thermophilic sulfate reducers belonging to Firmicutes. These sulfate reducers were sustained by geologically-produced sulfate and hydrogen at concentrations sufficient to maintain activities for millions of years with no apparent reliance on photosynthetically-derived substrates.


Microbes deep beneath seafloor survive on byproducts of radioactive process
Results have implications for life on Mars
NARRAGANSETT, R.I. – February 26, 2021 – A team of researchers from the University of Rhode Island’s Graduate School of Oceanography and their collaborators have revealed that the abundant microbes living in ancient sediment below the seafloor are sustained primarily by chemicals created by the natural irradiation of water molecules.

The team discovered that the creation of these chemicals is amplified significantly by minerals in marine sediment. In contrast to the conventional view that life in sediment is fueled by products of photosynthesis, an ecosystem fueled by irradiation of water begins just meters below the seafloor in much of the open ocean. This radiation-fueled world is one of Earth’s volumetrically largest ecosystems.

The research was published today in the journal Nature Communications.

“This work provides an important new perspective on the availability of resources that subsurface microbial communities can use to sustain themselves. This is fundamental to understand life on Earth and to constrain the habitability of other planetary bodies, such as Mars,” said Justine Sauvage, the study’s lead author and a postdoctoral fellow at the University of Gothenburg who conducted the research as a doctoral student at URI.

The process driving the research team’s findings is radiolysis of water – the splitting of water molecules into hydrogen and oxidants as a result of being exposed to naturally occurring radiation. Steven D’Hondt, URI professor of oceanography and a co-author of the study, said the resulting molecules become the primary source of food and energy for the microbes living in the sediment.
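In symbols, the net reaction for the radiolysis described above can be sketched as follows. This is a simplified overall reaction only; the full radiation chemistry of water also produces radicals (such as •OH) and solvated electrons, but the hydrogen and peroxide shown here are the products relevant to the microbes.

```latex
% Simplified net radiolysis of water (requires amsmath for \xrightarrow):
% ionizing radiation splits water into molecular hydrogen (microbial fuel)
% and oxidants such as hydrogen peroxide.
\[
  2\,\mathrm{H_2O} \;\xrightarrow{\ \text{ionizing radiation}\ }\;
  \mathrm{H_2} + \mathrm{H_2O_2}
\]
```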

“The marine sediment actually amplifies the production of these usable chemicals,” he said. “If you have the same amount of irradiation in pure water and in wet sediment, you get a lot more hydrogen from wet sediment. The sediment makes the production of hydrogen much more effective.”

Why the process is amplified in wet sediment is unclear, but D’Hondt speculates that minerals in the sediment may “behave like a semiconductor, making the process more efficient.”

The discoveries resulted from a series of laboratory experiments conducted in the Rhode Island Nuclear Science Center. Sauvage irradiated vials of wet sediment from various locations in the Pacific and Atlantic Oceans, collected by the Integrated Ocean Drilling Program and by U.S. research vessels. She compared the production of hydrogen to similarly irradiated vials of seawater and distilled water. The sediment amplified the results by as much as a factor of 30.
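As a back-of-the-envelope illustration of what a 30-fold amplification means: hydrogen yields in radiation chemistry are commonly expressed as G-values (molecules produced per 100 eV of absorbed dose). The G-values below are illustrative placeholders, not measurements from the study — the pure-water figure is in the range typically quoted for gamma radiolysis, and the sediment figure is simply chosen to reproduce the reported factor of 30.

```python
# Toy calculation of the radiolytic hydrogen "amplification factor":
# H2 yield from irradiated wet sediment vs. pure water, at equal dose.
# G-values here are illustrative placeholders, not data from the paper.

def h2_yield(dose_ev: float, g_value: float) -> float:
    """Hydrogen molecules produced for a given absorbed dose (eV),
    where g_value is molecules of H2 per 100 eV absorbed."""
    return dose_ev * g_value / 100.0

dose = 1.0e6            # eV absorbed -- identical for both samples
g_pure_water = 0.45     # typical gamma-radiolysis G(H2) for pure water
g_wet_sediment = 13.5   # hypothetical catalyzed value (chosen to give 30x)

amplification = h2_yield(dose, g_wet_sediment) / h2_yield(dose, g_pure_water)
print(f"amplification factor: {amplification:.0f}x")  # → 30x
```

Because dose cancels in the ratio, the amplification factor reduces to the ratio of the two G-values; the experiments effectively measured how much the sediment minerals raise that ratio.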

“This study is a unique combination of sophisticated laboratory experiments integrated into a global biological context,” said co-author Arthur Spivack, URI professor of oceanography.

The implications of the findings are significant.

“If you can support life in subsurface marine sediment and other subsurface environments from natural radioactive splitting of water, then maybe you can support life the same way in other worlds,” said D’Hondt. “Some of the same minerals are present on Mars, and as long as you have those wet catalytic minerals, you’re going to have this process. If you can catalyze production of radiolytic chemicals at high rates in the wet Martian subsurface, you could potentially sustain life at the same levels that it’s sustained in Earth’s marine sediment.”

Sauvage added, “This is especially relevant — given that the Perseverance Rover has just landed on Mars, with its mission to collect Martian rocks and to characterize its habitable environments.”

D’Hondt said the research team’s findings also have implications for the nuclear industry, including for how nuclear waste is stored and how nuclear accidents are managed. “If you store nuclear waste in sediment or rock, it may generate hydrogen and oxidants faster than in pure water. That natural catalysis may make those storage systems more corrosive than is generally realized,” he said.

The next steps for the research team will be to explore the effect of hydrogen production through radiolysis in other environments on Earth and beyond, including oceanic crust, continental crust and subsurface Mars. They also will seek to advance the understanding of how subsurface microbial communities live, interact and evolve — when their primary energy source is derived from the natural radiolytic splitting of water.

This study was supported by the U.S. National Science Foundation and the U.S. National Aeronautics and Space Administration. The project is also affiliated with the Center for Dark Energy Biosphere Investigations.


COMMENT: In discussions with evolutionary biologists, we learned recently that “not all animals and fungi use oxygen and give off carbon dioxide”; and, on the other hand, “not all plants use carbon dioxide and give off oxygen.” It turns out there are exceptions — if one includes deviations seen in bacteria.

The first article below (posted in 2006) noted that “some bacteria live off radioactive decay in rocks” — which were discovered deep in South African gold mines. The second article below (posted in 2021) noted that “some microbes that live in the marine sediment use chemicals that are made from the irradiation of water — hydrogen and oxidants created when naturally-occurring radiation splits apart water molecules.”

It had been thought that “most marine-sediment microbes lived off the products of photosynthesis” — but it looks like these types of indirectly-radiation-fed microbes are extremely common — and may even be dominant in the seafloor. Time to change the biology textbooks. Life is getting more complicated, and weird. ☹ ☹ 😊 😊


Posted in Center for Environmental Genetics | Comments Off on “Life” on Earth can be weird (!!!)

The SOLVAY Prize recognizes the mRNA pioneer whose findings led to the mRNA COVID vaccines

This brief story, published in Nature, follows up on Dr. Katalin Karikó, once an obscure scientist quietly working alone in the lab, who first designed an mRNA construct that allowed intact messenger RNA (mRNA) to enter living cells without being quickly degraded by RNase enzymes; after entering the cell, the new mRNA construct could then be translated into its protein product by the cell’s own machinery.

As GEITP has discussed in earlier emails over the past 2 years, many labs (including my own, between 1976 and 1984) had repeatedly attempted to insert RNA into live cells and have it function normally — before it was very rapidly degraded by RNases. No one was successful — until Dr. Karikó thought up a clever way to avoid RNase degradation. 😊

It’s good to see that her creative work has now been recognized and rewarded. 😊😊
THE SOLVAY PRIZE acknowledges innovative work that has a major social impact. Her ground-breaking work on RNA led to an entirely new type of vaccine — yet Katalin Karikó spent most of her career in obscurity, searching for funding to support her research. Without a faculty position or a lab group, she had to do most of the benchwork herself, including even defrosting the lab freezer!
However, she describes those times of quietly getting on with work as a joy. “It was only from the outside that it seems like a struggle,” she says. “I have had a lot of fun in the lab.”
Now an adjunct professor at the University of Pennsylvania — and Senior Vice President at BioNTech — Karikó is feted as one of the heroes of the COVID-19 pandemic. Her decades of research into messenger RNA (mRNA) paved the way for the vaccines developed by BioNTech, Pfizer and Moderna.
Karikó’s contribution has now been recognized with the 2022 Solvay Prize. The prize is awarded every two years for major scientific discoveries — those with the potential to shape tomorrow’s chemistry and enhance human progress. Past winners include the biochemist Carolyn Bertozzi, for inventing ‘bioorthogonal’ chemical reactions that can be performed in living cells, and Nobel laureate Ben Feringa, for creating molecular motors that could power nanorobots. “When I look at the people who previously won the Solvay Prize, I feel very humbled,” says Karikó.
In vitro-transcribed (IVT) mRNA encoding therapeutic proteins or viral antigens has long held great potential for treating or preventing various diseases, but for years the body’s inflammatory response to mRNA hampered its medical use. In 2005, while collaborating with Drew Weissman, also at U Penn, Karikó discovered that swapping out uridine for pseudouridine, a nucleoside naturally found in RNA, not only thwarted the immune reaction to mRNA but also improved its translational efficiency, opening the door for future therapeutics.

Despite the importance that this discovery would later have, there was initially little response from other scientists. “Nobody really contacted us; I had two invitations to give lectures, but that was about it,” Karikó recalls. Over time, RNA became increasingly popular for vaccine developers, building on Karikó’s research. Although no RNA vaccines had been approved when COVID-19 first struck, candidates based on the viral sequence were ready within weeks and were quickly produced for clinical trials.

This year’s prize coincides with the 100th anniversary of the first Solvay Conference for Chemistry, which brought together many leading figures to discuss the key problems of the day. The conference — along with its counterpart in physics — was created by Ernest Solvay, who wanted to support fundamental science after making his fortune through industrial production of sodium carbonate for use in glass manufacturing. He created Solvay in 1863, and the company continues to develop and support innovative science for solving some of the world’s most pressing challenges. “With the Solvay Prize, we want to highlight the originality of the chemistry and its potential impact,” says Patrick Maestro, Scientific Director of Solvay. “Karikó’s work has already had a significant impact, and there is even more to come in other areas of medicine.” Karikó says that she will spend the €300,000 prize money on furthering research into mRNA therapeutics: “I am 67 years old; I won’t start changing my hobbies now. My hobby is science.”

Nature 21 Apr 2022; 604: i (first page of journal advertisements)

It is my understanding that Dr. Katalin Karikó was demoted at the University of Pennsylvania, because she had repeatedly failed to get her own research grant money. The study sections told her that “her ideas (her research proposals) were not possible.”
By the way, Dr. Karikó’s daughter competed twice in the Olympic Games, four years apart, as a member of the U.S. Women’s Rowing Team, and the team won gold medals both times!


You make a good point/comment (and it’s worth an informative, educational reply). 😉 I’m sure a lot of our GEITP’ers do not know the “chemical structure of pseudouridine.” Pseudouridine (abbreviated by the Greek letter psi, Ψ) is an isomer of the nucleoside uridine in which the uracil is attached via a carbon-carbon instead of a nitrogen-carbon glycosidic bond. (In this configuration, uracil is sometimes referred to as “pseudouracil.”)
Frontiers | The Critical Contribution of Pseudouridine to mRNA COVID-19 Vaccines | Cell and Developmental Biology

As Dr. Karikó stated in her recent invited review [Nature Reviews Immunology 2021; volume 21, page 619], “In the 1990s, we started to investigate mRNA as a platform for protein replacement therapy. Because these mRNAs encoded self-proteins, we did not think that mRNA transfection would generate any adverse immune effects. However, we found that transfecting human dendritic cells (DCs) with mRNA, or even with non-coding ribonucleotide homopolymers, induced inflammatory cytokines (Ni, H. et al., 2002).

“At the time, we knew that DNA activates Toll-like receptor 9 (TLR9) and that double-stranded RNA can activate TLR3 and induce type I interferon. We hypothesized that one of the remaining TLR family members might sense single-stranded RNA. We also started to explore the activation of human DCs by different types of RNA to determine whether they all induce inflammatory cytokines. Natural RNAs are synthesized from the four basic nucleotides, but some of the nucleosides can be post-transcriptionally modified. We found that tRNA, which is known to be enriched in modified nucleosides, was non-inflammatory, and that the TLR7 and TLR8 receptors can sense single-stranded RNA. We set out to generate RNA with modified nucleosides by in vitro synthesis. Surprisingly, the replacement of uridine with pseudouridine rendered the RNAs non-immunogenic (Karikó, K. et al., 2005).

“In subsequent studies we demonstrated that mRNA containing pseudouridine was an ideal molecule for protein replacement therapy because it was efficiently translated and, unlike its unmodified counterpart, did not induce interferon in mice. Indeed, the injection of a small amount of mRNA was sufficient for the encoded protein to exert its therapeutic effect (Karikó, K. et al., 2008; Karikó, K. et al., 2012).”
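The nucleoside swap described in the quotes above can be illustrated with a toy sketch. The sequence and function name here are made up for illustration only; in real IVT mRNA production the substitution happens chemically during in vitro transcription (pseudouridine triphosphate is supplied in place of UTP), not by editing a sequence string.

```python
# Toy sketch of the Karikó/Weissman nucleoside swap: every uridine (U)
# in an RNA sequence replaced by pseudouridine (shown as the Greek Ψ).
# The sequence below is an arbitrary example, not a real therapeutic mRNA.

def substitute_pseudouridine(mrna: str) -> str:
    """Replace each uridine with pseudouridine in an RNA sequence string."""
    return mrna.replace("U", "\u03a8")  # Ψ

seq = "AUGGCUUACGAAUAA"  # toy open reading frame: start codon ... stop codon
print(substitute_pseudouridine(seq))  # → AΨGGCΨΨACGAAΨAA
```

The point of the sketch is simply that the substitution is global: every uridine position carries the modified nucleoside, which is why the resulting mRNA evades the single-stranded-RNA sensors (TLR7/TLR8) mentioned above.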

So, there you have it — in a nutshell. 😊—DwN

Sent: Tuesday, April 26, 2022 5:30 PM
What a great story!! I had never heard of pseudouridine!

Posted in Center for Environmental Genetics | Comments Off on The SOLVAY Prize recognizes the mRNA pioneer whose findings led to the mRNA COVID vaccines

LNT History documentary

Over the years, GEITP has repeatedly covered all of Ed Calabrese’s articles — as he slowly unraveled the entire fraudulent story of how the Linear No-Threshold (LNT) Model was arrived at in the mid 1950s, how it was based on erroneously interpreted Drosophila (fruit fly) studies by Hermann Joseph Muller (and others) in the 1930s and 1940s, and how the Nobel Prize in Physiology or Medicine in 1946 was awarded (nevertheless) to Muller “for discovery of the production of mutations by means of x-ray irradiation” — when the mutations were, in fact, irradiation-induced DNA damage (breaks) and not mutations at all. ☹
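For readers who want the model stated explicitly: LNT assumes that excess risk is strictly proportional to dose, with no threshold below which the risk vanishes. A minimal formulation, contrasted with a threshold model, is given below; the symbols are generic for illustration, not taken from any specific regulatory document.

```latex
% Linear no-threshold (LNT): excess risk proportional to dose D for all D > 0,
% versus a threshold model with zero excess risk below some dose D_0.
\[
  R_{\mathrm{LNT}}(D) = \alpha D,
  \qquad
  R_{\mathrm{thr}}(D) = \alpha\,\max(0,\, D - D_{0})
\]
```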

Online [click on URL below] one can follow 22 Episodes [interviews of Professor Calabrese by the Health Physics Society (HPS)] of this fraudulent story, which was discovered by sleuth Calabrese. These 22 interviews could conveniently provide the basis for a Term Course in any college or university — on how the second-ranked most expensive duplicitous pseudoscience has cost taxpayers of the Western World billions of dollars, how it is still supported today ( !! ) by government agencies such as the U.S. Environmental ‘Protection’ Agency (USEPA), and how it continues in thousands of labs worldwide even today — more than 60 years later. ☹

What is the first-ranked most expensive duplicitous pseudoscience costing taxpayers of the Western World trillions of dollars? Answer: the “Global Warming” scam [which began as a political movement in the early 1980s, and then changed its name to “Climate Change” in 2009, because the brief warming period of 1980-97 had ‘unfortunately’ appeared to reach a plateau in the late ’90s]. Also, the warming period of 1928-45 was actually hotter, worldwide, than the late-20th-century warming period. In between (1945-78) was (you guessed it) the 20th-century worldwide cooling period. ☹


The History of the LNT Episode Guide

Posted in Center for Environmental Genetics | Comments Off on LNT History documentary