MORE FRAUD: Hundreds of cancer papers mention cell lines that don’t seem to exist (!!!)

Just when you think that any further audacious FRAUD in publishing scientific papers cannot be possible (because all possible forms of corruption have surely already been invented) — along comes something new and even more dishonest. ☹

Research integrity investigators (16 authors from several institutions in Australia, the United States, and France; see attached pdf) have apparently found a new red flag for identifying fraudulent papers, at least in cancer research: human cell lines that apparently do not exist. That’s the conclusion of a recent study investigating eight cell lines that are consistently misspelled across 420 papers published from 2004 to 2023, including some in highly ranked cancer research journals. Some of the misspellings may have been inadvertent errors, but a subset of 235 papers provided details involving seven of the eight lines that “indicate the reported experiments were not actually conducted,” the sleuths say.

“Unfortunately, this looks like a massive invention of data and experiments that probably never happened,” said study lead author Jennifer Byrne, a cancer researcher and data detective at the University of Sydney. Some of the nonexistent cell lines have already been cited in literature reviews and could confuse and mislead other scientists conducting similar studies, she adds. “It’s a hell of a mess.”

Chao Shen, a cell biologist at Wuhan University and deputy director of the China Center for Type Culture Collection, a suppository of human cell lines, hopes the findings gain attention. “These revelations underline the urgent need for concerted efforts to address the challenges posed by these cell lines to research integrity and reproducibility,” such as standardized reporting of cell lines, he says.

Problems with human cell culture lines had already drawn attention in recent years, because scientists discovered that many have been contaminated by other, more robust cell lines that corrupted the results. But the new study reveals a different kind of flaw, stemming not from the misidentification of a known cell line, but from possible fabrication. ☹

The survey study, published recently in the International Journal of Cancer, began as an examination of misspelled cell lines in papers about cancer research — to determine whether the lines were contaminated or misidentified. Some of the misspellings may have begun as mistakes by inexperienced authors, Byrne said. But the team became suspicious about a subset of these papers that, in various ways, referred to the same seven cell lines as if they were not the same as similarly spelled, known cell lines. For example, some reported separate, different results from experiments that used the incorrectly and correctly spelled names of the same cell line.

These papers also had other red flags: They lacked a description of how the suspicious cell line was derived and did not provide its unique genetic fingerprint commonly used by researchers, which is based on short specific DNA sequences known as short tandem repeats (STRs). What’s more, several papers identified three repositories from which researchers could purchase many of the seven cell lines, including the largest such resource, the American Type Culture Collection. But when the research team searched for the lines in their directories, none appeared.
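
In principle, the repository-lookup step described above can be partially automated. Below is a minimal, hypothetical Python sketch (not the authors’ actual pipeline) of how one might flag reported identifiers that closely resemble, but do not exactly match, officially indexed cell-line names; the short KNOWN_LINES list merely stands in for a real catalog such as Cellosaurus or the ATCC directory, and the 0.8 similarity cutoff is an arbitrary choice.

    import difflib

    # Stand-in for a real catalog of indexed cell-line names (e.g., Cellosaurus)
    KNOWN_LINES = {"BGC-823", "MGC-803", "SGC-7901", "HGC-27", "HeLa", "MCF-7"}

    def flag_suspect(identifier, known=KNOWN_LINES, cutoff=0.8):
        """Return close-but-inexact catalog matches: a possible misspelling red flag."""
        if identifier in known:
            return []  # exact match to an indexed line; nothing suspicious
        return difflib.get_close_matches(identifier, list(known), n=3, cutoff=cutoff)

    for name in ["BGC-803", "HGC-7901", "MGC-823"]:
        hits = flag_suspect(name)
        if hits:
            print(f"{name!r} is not indexed but resembles {hits}: verify source and STR profile")

A flagged name is only a starting point, of course; as the paper makes clear, the decisive checks are the absence of a derivation history, an STR profile, and a repository catalog entry.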

The team eventually identified 235 such papers in 150 journals, including high-impact publications such as Cancer Letters and Oncogene. Most of the suspect publications list authors in China who are affiliated with hospitals — a group previously identified as a source of customers for paper mills (businesses that sell authorship on papers that are often fake or shoddy), because such authors may lack research experience and face publish-or-perish pressure to gain professional promotion (this topic has been previously discussed in our GEITP email blog).

“It’s difficult to prove unequivocally that something doesn’t exist,” Byrne stated, “but, from our analyses, we’re pretty confident.”

She speculated that paper mill writers may have copied the misspelled names from otherwise legitimate papers, unaware of the correct spellings, instead of fabricating new names, because those might have attracted greater scrutiny from knowledgeable readers. The suspect characteristics are “really very unlikely to be made by a genuine researcher,” she said. “I worked with numerous cell lines as long as 30 years ago, and I can still recite their names correctly.”

And these papers may be the tip of the iceberg, Byrne warned. Since January 2023, more than 50,000 scholarly articles about human cancer cell lines have been published. Byrne’s team identified a total of 23 misspelled lines, but limited its analysis to eight mentioned in 420 papers to keep the workload manageable. Her team plans further examinations, and she hopes others will as well. “We’re a small group, and going through these papers is very tedious.”

Obviously, reproducible laboratory research relies on correctly identified reagents, which can be used by any future laboratory to confirm or extend reported findings. Many of the previously described gene research papers with wrongly identified nucleotide sequence(s) involved experiments studying the microRNA miR-145. Authors [see attached pdf file] manually verified the reagent identities in 36 “recent miR-145 papers” and found that 56% and 17% of papers described misidentified nucleotide sequences and cell lines, respectively. Authors also found five cell-line identifiers in miR-145 papers having misidentified nucleotide sequences and cell lines, plus 18 cell-line identifiers published elsewhere, that could not be found in any list of officially indexed human cell lines. These 23 identifiers were described as non-verifiable (NV), because their identities were uncertain.

Furthermore, studying 420 papers that mentioned eight NV identifiers, authors discovered 235 papers (56%) that referred to seven identifiers (BGC-803, BSG-803, BSG-823, GSE-1, HGC-7901, HGC-803, and MGC-823) as independent cell lines. Authors [see attached] could not find any publications or other information as to how these cell lines were established. Six cell lines were described as “sourced from cell line repositories with externally accessible online catalogs,” but these cell lines could not be found there, as claimed. Some papers also stated that STR profiles had been generated for three cell lines, yet no STR profiles could be identified. In summary, because non-verifiable (NV) cell lines represent new challenges to research integrity, reproducibility, and the success of future studies with human cancer cell lines — further investigations are required to clarify their status and identities. ☹

DWN

COMMENTS:
Kirsi,

Over the past six decades of reviewing thousands of manuscripts — I, too, have ALWAYS asked the authors to verify the name, the origin, and the accuracy of spelling of every cell line involved in every publication. ☹
Thank you, Olavi, for forwarding this important and very interesting paper to me! I am sure you won’t mind if I forward it to several colleagues.

I am so glad someone is willing to take the time and effort to go through reagents, backgrounds, financing of research, etc., to reveal some of the difficulties in research — but also to uncover intentional, unethical misinformation and disinformation of researchers.

As to cell lines in papers, I think that the original source of the cell line, how and from whom it was purchased/received, and its main characteristics should always be mentioned in all manuscripts submitted for publication — and not just a reference (which may or may not be correct, as pointed out in this paper). When I review a paper for any journal, these are the things I ask the authors to include. Naturally, testing that the cell line is still in its original condition and has the claimed characteristics would be even better, but is there anyone who actually does that?
Best regards, Kirsi

COMMENTS:
Really no surprise here: liars lie, and cheaters cheat. But this sentence caught my eye, and it will be enshrined in my phantom archive of great malapropisms.
Chao Shen, a cell biologist at Wuhan University and deputy director of the China Center for Type Culture Collection, a suppository of human cell lines, hopes the findings gain attention.
From your comments, many of these papers and cell lines, and perhaps the Chinese repository, were in need of a suppository.


The amazing phenomenon of de novo gene birth — how does a new gene evolve?

In these GEITP email blogs, we have often discussed genes that are members of this or that “gene family,” and we know that members of a gene family generally arise from the original ancestral gene. But, going back further in evolution, how did Mother Nature create the original gene in the first place — which then ultimately led to a family of homologous genes in that species? [Recall that an “ortholog” is a gene that has evolved from a common ancestral gene by speciation (and usually has retained a similar function in different species). “Paralogs” are genes related by duplication within the genome (and often they have acquired a new function). “Analogs” are distinct (not usually related) genes that share the same or similar function, but which have arisen by convergent evolution. “Homologs” (a generic term) are genes that share the same origin. Thus, orthologs and paralogs are homologs, whereas analogs are not.]

One of our favorite examples of convergent evolution is the creation of a much-needed antifreeze glycoprotein (AFGP) gene, when it became a necessity for survival of a species. The Antarctic notothenioid fish Dissostichus mawsoni AFGP gene arose from a pancreas-derived (trypsinogen) gene and evolved into a protein segment with 41 triplet repeats. This gene arose between 5 and 14 million years ago (MYA), correlating with the estimated time (~10-14 MYA) when the Antarctic Ocean froze over; thus, evolutionary environmental pressure (freezing water conditions) resulted in formation of an antifreeze protein that preserved survival of this fish species.

In the Arctic Ocean, glaciation is estimated to have occurred ~2.5 MYA. Using cloning tricks, scientists found an AFGP gene in the Arctic cod Boreogadus saida (by using the Antarctic AFGP repeat “Thr-Ala-Ala” as a probe). This AFGP protein was found to be derived from a totally different gene, and it originated around the time (~2.5 MYA) at which the Arctic Ocean is estimated to have frozen over…!! So, here we have two different fish species, each living at its own pole of the planet — each needing such a protein (at different evolutionary times) in response to the emergence of a cold environment. This is a beautiful example of convergent evolution.

The phenomenon of de novo gene birth — i.e., emergence of genes from non-genic sequences — has received considerable attention, due to the widespread occurrence of genes that are unique to particular species or genomes. Most instances of de novo gene birth have been recognized through comparative analyses of genome sequences in eukaryotes (organisms whose cells contain a nucleus), despite the abundance of novel, lineage-specific genes in bacteria (prokaryotes, which lack a nucleus) and the relative ease with which bacteria can be studied in an experimental context.

The Long-Term Evolution Experiment (LTEE) with Escherichia coli (E. coli) is a microbial evolution study that began in 1988 and continues today. The experiment was initiated by Richard Lenski (for most of its history at Michigan State University) and, since 2022, has been overseen by Jeffrey E. Barrick at the University of Texas at Austin. The study involves 12 populations of E. coli — descendants of the same ancestral strain — that have been evolving in a controlled environment for more than 75,000 generations (in human generation-times, this would require more than 1.5 million years).

Authors [see attached pdf] explored the genetic record of the E. coli long-term evolution experiment (LTEE) for changes indicative of “proto-genic” phases of new gene birth, in which non-genic sequences evolve stable transcription and/or translation. Over the time span of the LTEE, non-genic regions were found to be frequently transcribed, translated, and differentially expressed, with levels of transcription across low-expressed regions increasing in later generations of the experiment.

Proto-genes that formed downstream of new mutations resulted either from insertion-element activity, or from chromosomal translocations that fused preexisting regulatory sequences to regions that were not expressed in the LTEE ancestor. In addition, authors identified instances of proto-gene emergence in which a previously unexpressed sequence became transcribed after de novo formation of an upstream promoter (although such cases were rare, compared with those caused by recruitment of preexisting promoters). Tracing the origin of the causative mutations, authors discovered that most occurred early in the history of the LTEE, often within the first 20,000 generations, and became fixed soon after emergence. These data show that [a] proto-genes emerge frequently within evolving populations, [b] can persist stably, and [c] can serve as potential substrates for new gene formation.
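
The core classification step in such an analysis is conceptually simple. Here is an illustrative toy sketch (my own, not the authors’ pipeline) of how one might flag candidate “proto-genic” windows — non-genic regions effectively silent in the ancestor but stably expressed in an evolved clone. The 1.0-TPM cutoff and the coverage numbers are invented assumptions, not values from the paper.

    import numpy as np

    EXPRESSED = 1.0  # assumed expression cutoff, in transcripts per million (TPM)

    def protogenic_windows(ancestor_tpm, evolved_tpm):
        """Indices of non-genic windows silent in the ancestor but expressed in an evolved clone."""
        ancestor_tpm = np.asarray(ancestor_tpm)
        evolved_tpm = np.asarray(evolved_tpm)
        return np.where((ancestor_tpm < EXPRESSED) & (evolved_tpm >= EXPRESSED))[0]

    # Toy coverage over five non-genic windows: generation 0 versus generation 50,000
    ancestor = [0.0, 0.2, 0.0, 0.9, 0.1]
    evolved  = [0.1, 5.3, 0.0, 2.4, 0.1]
    print(protogenic_windows(ancestor, evolved))  # -> [1 3]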

DwN

PLoS Biol May 2024; 22: e3002418

COMMENTS:
Dan,
Really nice article! But let’s remember that there are no pressures on the genome by need. New genes are born by random changes in the genome — which happen to lead to survival of the fittest…
Reply,
EXACTLY. Random changes in genomes are constantly occurring over millions-of-years — until the stochastic process leads to an advantageous event that in some way improves survival of the species…

This is a very difficult concept for most people (including many scientists) to comprehend. When time is virtually infinite, a beneficial series of mutations eventually happens — leading to a new gene whose gene product helps the organism evolve more successfully (improvement in finding food, avoiding predators, and/or reproduction). Charles Darwin had it right.

—DwN
Dan,
So, apparently random changes that do not benefit the organism lead to extinction. Right?

Reply,
Ray,

No, it’s not that simple. First, the vast majority of nucleotide changes (mutations, variants) in an organism’s genome have no effect: either they’re in a noncoding region of the gene, or the change does not elicit any amino acid change, or the new variant that changes an amino acid is neutral (i.e., neither a benefit nor a detriment to survival). These “no-effect” mutations will eventually be lost over millions of years and many generations. “Extinction” of a species is not directly related to these no-effect or detrimental-effect nucleotide changes; rather, extinction of a species is more closely related to environmental pressures (e.g., geographical changes in cold or warmth as a function of time) that leave all members of the species unable to survive and/or reproduce.

A related question might be: “How many de novo mutations exist in an offspring that are not seen in either parent?” Obviously, this number will vary widely among species, depending on genome size, generation time, etc. For humans, at birth, children typically have ~70 (range 10 to 300) new genetic variants, compared with their parents — out of the 6 billion letters that make up both parental copies of DNA sequence. An important 2019 study published in eLife shows that the number varies dramatically, with some newborns having twice as many mutations as others; characteristically, this runs in families.
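
For readers who like to check such numbers, the implied per-nucleotide mutation rate follows directly from the round figures just quoted (a back-of-the-envelope calculation, nothing more):

    # Rough per-site mutation rate implied by ~70 de novo variants per child,
    # against the ~6 billion letters of both parental genome copies combined
    de_novo_variants = 70
    diploid_genome_size = 6e9
    rate = de_novo_variants / diploid_genome_size
    print(f"~{rate:.1e} de novo variants per site per generation")  # ~1.2e-08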

These new insights were found by performing whole-genome sequencing (WGS) and genetic analysis on 603 individuals from 33 three-generation families from Utah, the largest study of its kind. The families were part of the Centre d’Etude du Polymorphisme Humain (CEPH) consortium, which is central to many key investigations that have formed a modern understanding of human genetics. The large size of the Utah CEPH families, which had as many as 16 children over a span of 27 years, made them well-suited for this new investigation.

This difference in new genetic variants is based largely on two influences. One is the age of a child’s parents. A child born to a father who is 35 years old will likely have more mutations than a sibling born to the same father at 25.

“The number of mutations we pass on to the next generation increases with parental age,” said Thomas Sasani, lead author of the study and a graduate student in human genetics at University of Utah Health. Previous studies have demonstrated this phenomenon, which was also confirmed by the present study.

The second influence on the number of new genetic variants is genetics itself. The effects of parental age on mutation rates differ dramatically among families — much more than had previously been appreciated. Within one family, a child may have only two additional mutations, compared with a sibling born when the parents were ten years younger; in another family, two siblings born ten years apart to a different set of parents may differ by more than 30 mutations.

“This shows that we as parents are not all equal in this regard,” said Aaron Quinlan, PhD, senior author of the study, and Professor of Human Genetics at U of Utah Health, and Associate Director of the Utah Center for Genetic Discovery. “Some of us pass on more mutations than others and this is an important source of genetic novelty and genetic disease.”

Impacts of new mutations depend on where they land in our DNA, and on the passage of time. On rare occasions, the genetic changes cause serious disease, but the majority occur in parts of our genetic code that don’t have obvious effects on human health.

And even though these new changes make up a small fraction of the overall DNA sequence, they add up with each subsequent generation. Increasing the so-called mutation load could potentially make individuals more susceptible to illness, said Sasani. It remains to be determined whether factors that impact the mutation rate increase the likelihood for certain diseases.

Although the majority of new mutations originally arise in fathers’ sperm, not all do: about one in five comes from mothers’ eggs, and increasing age does not drive as many new mutations in mothers as it does in fathers. Furthermore, a fraction of the new mutations seen in children comes from neither parent’s germ cells; instead, these arise anew in the embryo (during embryogenesis), soon after fertilization.

DwN

COMMENT
Is there a relationship between the taxonomic and the genetic uses of the term “family”?
Reply,
Alvaro — The short answer is “no.” The longer answer is: what has been called a “gene superfamily” since the 1980s is a term now frowned upon; “gene family” is now preferred, referring to all homologous genes unequivocally derived from one ancestral gene. For example, if a family member exists in viruses or bacteria, as well as in humans [e.g., the cytochrome P450 (CYP) gene family], we can conclude that the ancestral gene existed more than ~3.8 billion years ago.

If the first homeobox (Hox) gene evolutionarily first appeared in Cnidaria (e.g., jellyfish) before the earliest Bilateria (this would make them pre-Paleozoic), we can assume that members of that gene family will exist in all descendants of the cnidarian lineage, which is estimated to have arisen evolutionarily ~1,000 million years ago.

If a gene evolutionarily first appeared in yeast [e.g., Nat Commun 12, 604], we can safely assume that members of that gene family will exist in virtually all descendants of the yeast lineage, which arose far earlier in evolution (fungal lineages are estimated to extend back several hundred million to roughly a billion years) — unless the gene was unimportant in some downstream species and therefore was lost.

In conclusion, there is no relationship between the taxonomic and genetic terms “family.”


When Rebellion Becomes Virtue: How the Scientific Method Came to Be

This is for your evening bedtime reading enjoyment. The long essay below — and some further contributions from various colleagues — are the result of someone initially asking the questions: WHEN was the Scientific Method first established? And WHO should be credited with establishing it?

DwN

Sir Francis Bacon (1561-1626), English philosopher and statesman who served as Lord Chancellor under King James I, has been called the “Father of Empiricism.” The Baconian method is the investigative method developed by Bacon, one of the founders of modern science, and thus a first formulation of a modern scientific method. This method was put forth in Bacon’s book “Novum Organum” (1620) — literally “New Instrument” — and was proposed to replace the methods put forward in Aristotle’s “Organon.” The Baconian method involves six steps: [1] making observations, [2] asking a question, [3] making a hypothesis, [4] experimenting, [5] analyzing data, and [6] drawing conclusions. The method is essentially empirical and was formulated as a scientific substitute for the prevailing systems of thought.

When Rebellion Becomes Virtue: How the Scientific Method Came to Be
· Carlo Rovelli on the Ancient Origins of Modern Inquiry

By Carlo Rovelli

February 28, 2023

Thales is traditionally considered one of the Seven Sages of ancient Greece—more or less historical figures recognized and honored by the Greeks as founders of their thought and institutions. Another of the Seven Sages is Solon, contemporary of Thales and Anaximander, the writer of the first democratic constitution of Athens. According to the traditionally accepted dates, Anaximander is only eleven years younger than his illustrious fellow citizen Thales.

We know nothing of the relationship Thales and Anaximander may have had. We don’t even know whether the speculations of thinkers such as Anaximander and Thales were private, or whether there was a school in Miletus along the lines of Plato’s Academy and Aristotle’s Lyceum. These schools brought together teachers and young students, and their activities included public discussions, lessons, and lectures. Texts from the fifth century BCE tell of public debates among philosophers. Did such debates already exist in Miletus of the sixth century BCE?

As I will discuss later, the sixth century BCE in Greece was the first time in human history that the ability to read and write moved beyond a limited circle of professional scribes and became widespread in large sectors of the general population—essentially the entire aristocracy. Any elementary school pupil today knows that learning to read and write isn’t easy. It must have been even more challenging during the first centuries of the use of phonetic alphabets, when writing was far less widespread than today.

Someone needed to teach young Greeks how to write and read. I therefore think it is fair to believe that teachers, tutors, and schools must have existed in the major Greek cities of the time, though I have found no confirmation of this point. The combination of teaching and intellectual research that characterizes the philosophical schools of ancient Athens as well as the universities of today might very well have already been established in the sixth century BCE. In other words, I think it is reasonable to suppose that a true school existed in Miletus.

Whether or not such a school existed, it is nonetheless clear that Anaximander’s great theoretical speculations were born of, and based on, Thales’s work. The questions they ponder are identical: the search for the arche, the form of the cosmos, naturalistic explanations for earthquakes and other phenomena, and so forth. Thales’s influence is clear even in smaller details. Anaximander’s Earth is a disk, just like Thales’s. The intellectual relationship between Thales and Anaximander is very close. Thales’s reflections nourish and give rise to Anaximander’s theories. Thales is Anaximander’s teacher—figuratively and probably also literally.

Truth is veiled but may be approached by means of a sustained, almost devotional practice of observation, discussion, and reasoning, where mistakes are always possible.

It is important to consider in depth this close relationship of intellectual paternity between Thales and Anaximander, because it represents, in my view, perhaps the most important keystone of Anaximander’s contribution to the history of culture.

The ancient world teemed with masters and their great disciples: Confucius and Mencius, Moses and Joshua and the prophets, Jesus and Paul of Tarsus, the Buddha and Kaundinya. But the relationship between Thales and Anaximander was profoundly different from these. Mencius enriched and studied in depth Confucius’s thought but took care never to cast doubt upon his master’s affirmations. Paul established the theoretical basis for Christianity, far beyond what is in the Gospels, but never criticized or openly questioned the sayings of Jesus. The prophets deepened the description of Yahweh and of the relationship between him and his people, but they most certainly did not start from an analysis of Moses’s errors.

Anaximander did something profoundly new. He immersed himself in Thales’s problems, and he embraced Thales’s finest insights, way of thinking, and intellectual conquests. But at the same time, he undertook a frontal critique of the master’s assertions. Thales says the world is made of water. Not true, says Anaximander. Thales says the Earth is floating on water. Not so, says Anaximander.

Thales says earthquakes are attributable to the oscillation of the Earth’s disk in the ocean upon which it floats. Not so, says Anaximander: they are due to the Earth’s splitting open. And he goes on from there. A still‑perplexed Cicero, centuries later, remarks, “Thales holds that all things are made of water. But of this he did not persuade Anaximander, though he was his countryman and companion.”

Criticism was not absent from the ancient world—far from it. Take the Bible, for example, in which the religious thought of Babylonia is harshly criticized: Marduk is a “false god,” his “diabolical” priests are to be stabbed, and so forth. But between criticism and adherence to a master’s teaching—there was no middle ground. Even in the generations following Anaximander, the great Pythagorean school—decidedly more archaic than Anaximander in this regard—flourished in reverence to Pythagoras’s ideas, which could not be questioned. Ipse dixit (“He himself said”) is an expression that referred originally to Pythagoras, meaning that if Pythagoras had made an assertion, it must be true.

Halfway between the absolute reverence of the Pythagoreans for Pythagoras, of Mencius for Confucius, of Paul for Jesus, and the rejection of those who hold different views, Anaximander discovered a third way.

Anaximander’s reverence for Thales is manifest, and it is clear that Anaximander leans heavily upon Thales’s intellectual accomplishments. Still, he does not hesitate to say that Thales is mistaken about this or that matter, or that it is possible to do better. Neither Mencius nor Paul nor the Pythagoreans understood that this narrow third way is the most extraordinary key for the development of knowledge.

In my view, modern science in its entirety is the result of the discovery of this third way. The very possibility of conceiving it can come only from a sophisticated theory of knowledge, according to which truth is accessible, but only gradually, by means of successive refinements. Truth is veiled but may be approached by means of a sustained, almost devotional practice of observation, discussion, and reasoning, where mistakes are always possible. The practice of Plato’s Academy is obviously based on this idea. The same is true for Aristotle and his Lyceum. All of Alexandrian astronomy grows out of the continuous questioning of the assumptions made by earlier masters.

Anaximander was the first to pursue this third way. He was the first thinker able to conceive and put into practice what is now the fundamental methodological credo of modern scientists: make a thorough study of the masters, come to understand their intellectual achievements, and make these achievements their own. Then, on the basis of the knowledge so acquired, identify the errors and shortcomings in the masters’ thinking, correct them, and in so doing—improve our understanding of the world.

Consider the great scientists of the modern era. Isn’t this precisely what they did? Copernicus did not simply awaken one fine day and proclaim that the Sun was at the center of our planetary system. He did not declare that the Ptolemaic system was an illustrious bit of nonsense. If he had, he would never have been able to construct a new, effective mathematical representation of the solar system. No one would have believed him, and the Copernican revolution never would have occurred.

Instead, Copernicus was thunderstruck by the beauty of the knowledge reached by Alexandrian astronomy and summarized in Ptolemy’s Almagest, and he immersed himself fully in its study. He appropriated Ptolemy’s methods and recognized their efficacy; it was in this way, by exploring the nooks and crannies of Ptolemy’s work, that he came to recognize its limits and find ways to radically improve it. Copernicus is very much a son of Ptolemy: his treatise De revolutionibus orbium cœlestium is extremely similar in form and language to Ptolemy’s Almagest, so much so that one can almost call De revolutionibus a revised edition of the Almagest.

Ptolemy is unquestionably Copernicus’s master, from whom he learns everything that he knows and is useful to him. But to move forward, Copernicus must declare that Ptolemy is mistaken—not just in some detail, but in the most fundamental and seemingly best‑argued assumptions of the Almagest. It is simply not true that, as Ptolemy maintains in an ample and convincing discussion at the beginning of the Almagest, the Earth is immobile and at the center of the universe. Einstein and Newton have precisely the same relationship.

It is the triumphant beginning of scientific thought, the earliest exploration of the possible forms that thinking about the world can take.

And this is not just true for the great scientists: countless scientific articles of today have precisely the same relations to the works preceding them. The strength of scientific thinking derives from the continuous questioning of the hypotheses and results obtained in the past — a questioning that, just the same, takes as its point of departure a profound recognition of the knowledge value contained in these past results.

This is a delicate balance to strike, one that is anything but obvious and natural. In fact, as far as I can see, it is unknown in all of the human speculation that has come down to us from the first millennia of recorded history. This delicate process—following and developing the master’s path by criticizing the master—has a precise beginning in the history of human thought: the position that Anaximander assumed vis‑à‑vis his master Thales.

Anaximander’s novel approach immediately inspired others. Anaximenes, a few years younger than Anaximander, picked up on the idea and, as we have seen, proposed a modified and much richer theory of the arche, or first cause. Once the path of criticism was open, it could not be closed. Heraclitus, Anaxagoras, Empedocles, Leucippus, Democritus: not one of these thinkers hesitated to speak his mind on the nature of worldly things.

Only to an inattentive observer can this variety of viewpoints and crescendo of reciprocal criticisms seem a growing cacophony. Instead, it is the triumphant beginning of scientific thought, the earliest exploration of the possible forms that thinking about the world can take. It is the start of the road that has given us everything, or nearly everything, that we study and know about the world today.

According to a classic thesis, a scientific revolution comparable to the one in the West did not take place in Chinese civilization—despite the fact that for centuries Chinese civilization was in many ways broadly superior to the West—precisely because the master in Chinese culture was never criticized or questioned. Chinese thinking grew by elaborating on and developing established knowledge, never by questioning it.

This seems to be a reasonable thesis, because one can otherwise barely fathom the fact that Chinese civilization, so overwhelmingly great, never managed to understand that the Earth was round until the Jesuits arrived and told the Chinese so. In China, it seems likely that there was never an Anaximander.
Hugh Crowther –

My go-to guy for the history of science is Carlo Rovelli; this article (spanning 400 – 4,000 years) has the background … https://lithub.com/when-rebellion-becomes-virtue-how-the-scientific-method-came-to-be/

… some of which should be part of children’s education, especially the bit where half of the Greeks got it right (Anaximander, Democritus) and the other half not so much (Pythagoras and Aristotle), and then the correct bits got lost for 1,500 years, until Ptolemy, Copernicus, Galileo et al. dragged them back into the light. But the reason Galileo should probably get the credit for the scientific method is his use of experimentation to support theory.

Andy May –

The modern scientific method is the result of hundreds of years of evolution. Although using controlled experiments to disprove conjectures (hypotheses) has been around since Roman times, I don’t think it had a big impact on science until Newton (1700s), Galileo (1600s), and Copernicus (1500s). But using experiments to disprove conjectures isn’t the full scientific method; it is only part of it.

The scientific method wasn’t formally accepted and expressed until the late 19th century. Probably Charles Sanders Peirce is the philosopher who first formalized the idea around the turn of the century. Karl Popper wrote a lot about the idea of falsifiable hypotheses later in the twentieth century and did a lot to popularize and explain the concept.

Francis Bacon was also important in the development of the idea, which originally was often called “Baconism.” This was in the 1830s, but Bacon also didn’t get all the way there.

It’s hard to pick a point when the “scientific method” was discovered and formalized, but 1877 kind of stands out. This was when C. S. Peirce published his The Fixation of Belief (see attached from Wikisource; it is in the public domain). Peirce is not a good writer, and his ideas are better presented in Karl Popper’s 1962 Conjectures and Refutations. See page 37 in Popper for a succinct explanation (Popper is attached). But Peirce was the first to fully develop the idea of using falsifiable hypotheses to remove the doubt from facts and reach a secure belief.

Andy
Popper_1962_Conjectures and Refutations.pdf Peirce_1877_The Fixation of Belief.docx
Koen Vogel –
Can I summarize the above by saying: it depends on what you mean by the scientific method? All of the comments above make valid points. In its simplest form, the scientific method is what humankind uses to transform “observations” through a lens of “skepticism” into “knowledge” — three more terms whose definitions often are punted to a “hard to define, but I’ll know it when I see it” standard. The inhabitants of Göbekli Tepe (11.5 ka) oriented their monuments to the summer appearance of Orion, so they had figured out the precession of the equinoxes. The Natufians (13 ka) figured out how to brew beer. You could probably go back to the advent of fire. All of this used some sort of empirical scientific method. Various deductive, inductive, and abductive methods have been proposed in the past and still have their uses today. But if you want my opinion on whose method is most in use today, I’d go with Karl Popper’s (deductive) empirical falsifiability, whereby one states a null hypothesis (“my preferred theory is untrue”) and then shows that the observed data would be highly improbable if the null hypothesis were true. Social and “mainstream” climate scientists seem obsessed with attacking falsifiability, because it invalidates their theories; they therefore propose the post-normal “scientific method” (consensus science) in its stead. For example, this odious Scientific American article, which argues (mainly) in favor of consensus climate science, demonstrates that the “Scientific” should be taken with a huge grain of salt:

https://www.scientificamerican.com/article/the-idea-that-a-scientific-theory-can-be-falsified-is-a-myth/
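
To make that falsification recipe concrete, here is a toy null-hypothesis test in Python (the coin-flip numbers are invented purely for illustration): we state H0 (“the coin is fair”) and then ask how improbable the observed data would be if H0 were true.

    from scipy.stats import binomtest

    # H0: the coin is fair (p = 0.5). Observe 62 heads in 100 tosses (toy data).
    result = binomtest(k=62, n=100, p=0.5, alternative="two-sided")
    print(f"p-value = {result.pvalue:.3f}")
    if result.pvalue < 0.05:
        print("Data this extreme would be unlikely under H0 -- reject the fair-coin hypothesis")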

COMMENT:
Your comments on the Scientific Method are excellent, very straightforward, easy to understand, and completely correct.

MORE COMMENTS:
The essay you shared [far below] is highly interesting!

Origin-of-life research is an interesting extreme deviation from the Scientific Method, despite showing a clear attempt to follow the six steps that you listed. In my opinion, the error is in step 2: people ask the wrong question. The central question over the last 70 years has been based on the Miller-Urey syntheses: “how to make, from simpler compounds (preferably gases), the organic molecular components of ‘life as we know it.’”

First, this is unnecessary, because meteorites and comets have, for 4.54 billion years, rained down on us hundreds of thousands of small- and medium-sized organic molecules, 99.9% of which do not appear in life today. The challenge is to delineate a path — most likely based on very early Darwinian evolution — that could narrow the size of the participating repertoire, en route to full-fledged life. An example is life’s use of only 20 amino acids out of an estimated 2,000 possible ones, all of them occurring only in the L-enantiomer form.

Second, this path leads to a dead end because — even if one succeeds in making the entire present-day repertoire of life from gases [by methods that are mostly incompatible with early Earth’s (high-entropy) chaos] — the next step, describing how these molecules can spontaneously form self-reproducing supramolecular ensembles (protocells), becomes problematic.

A sub-error is the focus on syntheses of long linear biopolymers (RNA, proteins). Their synthesis in the primordial soup is extremely improbable, and even if, say, a long linear RNA appears, it has nothing to do, as long as it cannot be translated into proteins without ribosomes and tRNAs being around. The necessity of neatly folded ribozymes is also totally unconvincing, because much simpler molecules (e.g., a tripeptide) can be equally good catalysts. The sex appeal of RNA is only that “it can copy itself as well as catalyze,” but this can happen only with help from molecular friends, mostly polypeptides, within mutually catalytic networks. The Miller-Urey and Cech-Altman disasters (the latter pair are Nobel Laureates for discovering ribozymes) have brought the origin-of-life research field to a grinding halt.

Our “graded autocatalysis replication domain” (GARD) model may show a way toward proper utilization of the Scientific Method. We pose only one question for step 2, taking into account all the relevant facts: “What is the simplest self-reproducing supramolecular entity forming spontaneously in the primordial soup?” From there on, the load of synthesizing molecules, complexifying them, leading them to form catalytic networks, and narrowing the repertoire used — is in Darwin’s hands! We have solid evidence that *amphiphile micelles* can do the job, as described in our papers over the last three decades, as well as in two semi-popular recent articles and an abstract [see our three attachments].

REPLY:
Your comments relating the Scientific Method to “Origin of Life” research are extremely fascinating. And your studies [see the three attachments] over the past three decades continue to look increasingly plausible in explaining how we got from the “primordial soup” to the “last universal common ancestor” (LUCA), which you propose was an amphiphile micelle. I find fascinating your estimate that — “Considering the ocean’s surface area and the minuscule size of micelles, Earth could have been inhabited by as many as 10^30 micelles, each with a unique molecular composition.” [10^30 is a one followed by 30 zeroes: 1,000,000,000,000,000,000,000,000,000,000 micelles]…(!!!)
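
That figure is easy to sanity-check with a back-of-the-envelope calculation; the input values below are rough assumptions chosen only to test the order of magnitude, not numbers taken from the attached papers.

    # Order-of-magnitude check of the ~10^30 micelle estimate
    ocean_area_m2 = 3.6e14         # Earth's ocean surface area, ~3.6 x 10^14 m^2
    micelle_diameter_m = 5e-9      # a typical micelle is a few nanometers across
    footprint_m2 = micelle_diameter_m ** 2   # crude square-packing footprint

    monolayer_count = ocean_area_m2 / footprint_m2
    print(f"~{monolayer_count:.0e} micelles in a single surface monolayer")
    # ~1e+31 -- so 10^30 is plausible even before counting the water column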

The (at least) six steps of the Scientific Method include [1] making observations, [2] asking a question, [3] creating a hypothesis, [4] performing experiments, [5] analyzing data, and [6] drawing conclusions. Your point — that asking the best question is important/critical — is well taken.

WHY has there been this recent surge of interest in the Scientific Method? Answer: the surge is related to the National Science Teachers Association (NSTA) of roughly 40,000 members, which since the 1940s has prepared plans for future science studies in U.S. K-through-12 public schools. In its recent Next Generation Science Standards (NGSS) report (https://www.nextgenscience.org/), it has adopted a progressive set of science standards (Fixing_Education.pdf (c19science.info)) that basically disavows such principles as the Scientific Method (Scientific_Method.pdf (c19science.info)) and Critical Thinking (https://www.coursera.org/articles/critical-thinking-skills).

The progressive alternative is basically “groupthink” (i.e., a group of any size holds a meeting and discusses opinions, after which it arrives at a consensus view). Without much, or any, discussion, 48 or 49 states have proceeded to adopt these latest NGSS guidelines/recommendations — thereby rejecting the roughly 4,000 years during which the Scientific Method has been developed…!!

Not surprisingly, this recent 180-degree turn in science education is generating a lot of serious discussion. The conflict of “science” versus the “political science” of this topic is beyond the scope of this GEITP email blog. ☹


Photoreceptor Evolution–from Water Animals to Land Animals

This topic is among the most clear-cut examples of “gene-environment interactions”…!! During evolution, when vertebrates first left the water and ventured onto land, they encountered a visual world that was radically different from that of their aquatic ancestors. In order to survive, these “new” land species were required to adapt visually — in terms of being able to find food, avoid predators, and reproduce sexually. “The need to see acutely” represents the environmental pressure; the necessary (and evolutionarily relatively “quick”) changes in the visual networks of the eye (and/or brain) represent the response by the genes…

Fish exploit the strong wavelength-dependent interactions of light with water by differentially routing visual image signals — from as many as five spectral photoreceptor types — into distinct behavioral programs. However, out of the water the same spectral rules do not apply, and evolutionary adaptation required rapid changes in response to this new environmental pressure.

Early tetrapods [e.g., Kenichthys from China, ~395 million years ago (MYA); Gogonasus and Panderichthys, ~380 MYA; and then salamanders with four legs] soon evolved the double cone — a still poorly understood pair of new photoreceptors that increased the ancestral aquatic complement of five photoreceptor types to the “ancestral terrestrial” complement of seven. Subsequent non-mammalian lineages differentially adapted this highly parallelized retinal input strategy for their diverse visual ecologies. In contrast, mammals (first appearing ~225 MYA) shed most of their ancestral photoreceptors and converged on an input strategy that is extraordinarily general. In eutherian mammals (i.e., animals born via a placenta), including humans, parallelization emerged gradually during evolution, as the visual signal began to traverse the layers of the retina and onwards into the brain. [“Parallelization” can be defined as “parallel genotypic adaptation,” i.e., the independent evolution of homologous loci (similar genes) to fulfill the same function (trait) in two or more lineages. N.B. — these changes need not be identical, just functionally equivalent.]

Vertebrate vision first evolved in the water, where (for >50 million years) it was consistently based on visual signals from five anatomically and molecularly distinct types of photoreceptor neurons: rods, as well as ancestral red, green, blue, and UV cones (expressing RH, LWS, RH2, SWS2, and SWS1 opsins, respectively). In the water, these five input streams are probably best thought of as parallel feature channels that deliver distinct types of information to distinct downstream circuits. This is because water absorbs and scatters light in a wavelength-dependent manner (see Fig 1A in attached pdf file), which means that “beyond color,” different spectral photoreceptor channels inherently deliver different types of visual information.

Aquatic visual systems have recently been proposed to evolutionarily reach “answers” that exploit these differences. In this view, photoreceptors represent parallel channels that are differentially wired to drive and/or regulate distinct behavioral programs (see Fig 1B in attached pdf file): First, rods and ancestral red cones are the eyes’ primary brightness sensors; they are used for general-purpose vision and to drive circuits for body stabilization and navigation. Second, ancestral UV cones are used as a specialized foreground system, primarily wired into circuits related to predator–prey interactions and general threat detection. Third, ancestral green and blue cones probably represent an auxiliary system, tasked with regulating, rather than driving, the primary red/rod and UV circuits.

This ancestral strategy exploits the specific peculiarities of aquatic visual worlds; however, in air the same rules do not necessarily apply. For example, in water, object vision can be a relatively easy task, because background structure tends to be heavily obscured by an approximately homogeneous aquatic backdrop. At short wavelengths, including in the UV range, this effect can be so extreme that no background is visible at all. Many small fish exploit this fact of physics to find their food. Out of water, this and many other “ancestral visual tricks” no longer work, because in air, contrast tends to be largely independent of viewing distance: everything is visible at high contrast. Accordingly, when early would-be tetrapods started to crawl out of the water, strong selection pressures would have favored a functional reorganization of some of these inherited aquatic circuits; nowhere is this more evident than at the level of the photoreceptors themselves.

One of the earliest and perhaps most important retinal circuit changes was the emergence of the double cone, which took the “aquatic ancestral” photoreceptor complement of five to a “terrestrial ancestral” complement of seven (see Fig 1 in attached pdf file). The visual systems of all extant tetrapods, including humans, directly descend from this early “terrestrialized” retinal blueprint. However, from there, different descendant lineages have taken this highly parallelized retinal input strategy and embarked upon radically different visual paths. Most lineages, including those that led to modern-day amphibians, reptiles, and birds — have retained the terrestrialized ancestral blueprint, modifying upon it to suit their unique visual ecologies.

Mammals, however, have ended up on a very different path. Their early synapsid ancestors gradually shifted some of their visual systems’ “heavy lifting” out of the eye and into the brain. Along this path — whether as cause or consequence — descendant lineages gradually decreased their photoreceptor complements from seven types to six, then five, and eventually to the mere three that we see in eutherians today (see Fig 1C in attached pdf file): Rods (RH), as well as ancestral red (LWS) and UV cones (SWS1).

Primates, including humans, have then taken this eutherian strategy to the extreme: >99.9% of all photoreceptors in our eyes are either rods or ancestral red cones (including both “red-” and “green-shifted LWS variants”), the ancestral “general purpose” system of the eye. The remaining 0.1% is what is left of the ancestral UV system, today expressing a blue-shifted variant of the SWS1 opsin (hence, often called “blue cones,” not to be confused with ancestral blue cones that express SWS2). In concert, the “three” cone variants drive achromatic vision (although with limited contribution from ancestral UV cones), while in opposition they serve color vision.

However, this “textbook strategy” is far removed from the original aquatic circuit design and probably quite unique to our own lineage. Accordingly, for understanding vision in a general sense, and to understand our own visual heritage, it will be critical to respect the vertebrates’ shared evolutionary past. Here, vision is built on a retinal circuit design that begins with major parallelization — right from the original evolutionary input.

DwN
PLoS Biol Jan 2024; 22: e3002422

COMMENTS:
By coincidence, I happen to be re-re-reading the Biology of Spiders by Rainer F. Foelix. It is an incredible book, full of pictures, including electron-microscope pictures of tissue preparations (I like books that have lots of pictures). In terms of vision, diverse animals share the structures of cones and rods, corresponding to select frequencies of the electromagnetic spectrum. This confirms in my mind, once again, that the inherent conditions for life (and reproduction) are fundamental manifestations of energy and matter. That spiders’ eyes have frequency-tuned structures that report the world outside the spider to the world inside the spider’s brain — boggles my mind. … Every species seems directed intentionally not just to survival, but to survival that carries life further in that species.
Hence, … again … the one thing I can conclude with certainty about the God of our Universe is that “oblivion” (or “randomness,” or “chaos,” or “infinite entropy”) is not an option.

I regret that I missed this January publication, when I was teaching Evolutionary Neurobiology this spring! The topic of “Vision” has always been a great opportunity to talk about selective environmental pressures, convergent versus divergent evolutionary trends, etc.
Thanks for sharing!

Reply:
Chris, while working on summarizing this paper, I came very close to discussing “divergent” versus “convergent” evolution (but I felt my summary was already too long). ☹

“Divergent evolution” is when individuals of one species, or closely related species, acquire enough variations in their traits (due to novel, constant environmental pressures) that it leads to two distinct new species (each of which can usually survive better in two distinctly different niches). A simple example would be a water-living fish evolving into a land-living salamander.
On the other hand, “convergent evolution” occurs when two unrelated species develop similar traits because they experience similar environments; moreover, the DNA responsible for these traits can either be homologous genes in the same gene family, or a completely different genetic network. An example would be “synthesis of antifreeze protein” by fish in Antarctica versus fish in the Arctic Ocean. Authors [see attached article] used the term “parallelization” instead of “convergent evolution.” Clearly, over the past 4 billion years, divergent evolution has occurred far, far more commonly than convergent evolution…
DwN

This is a terrific review, which I have downloaded for my files!
From the engineering point of view, the striking difference between underwater vision and vision through air is the much greater “clutter” in air vision. An interesting example of this is America’s first missile early-warning satellites, which intentionally looked for 2.7-micron infrared (IR) radiation from burning rocket plumes that had emerged from the cloud deck; because of the heavy absorption by water vapor at 2.7 microns, much of the cloud clutter and clear-sky ground clutter gets suppressed. As described in this excellent essay, fish use short-range, low-clutter ultraviolet (UV) light for catching nearby prey — for reasons similar to those that shaped the design of those early-warning satellites.
Best wishes,

MORE COMMENTS:
Attached is the “Accelerated Article Preview” — which has been peer-reviewed and will appear soon in the complete published form. However, the journal felt these findings were sufficiently worthy of rapid distribution to scientists who might find these data extremely interesting.


Epigenome-wide association study of total nicotine equivalents

For gene-environment interactions, the environmental “signal” in this story is the urinary “total nicotine equivalents” (TNE), a quantitative measure of cigarettes actually smoked (which is more reliable than the patient’s own estimate of smoking history). The genetic “response” in this story involves DNA methylation (i.e., epigenetic marks in or near genes that might be relevant to cancer or toxicity).

The impact of tobacco exposure on health varies by race and ethnicity and is closely linked to internal nicotine dose (a marker of carcinogen uptake). [Recall that genetic differences reflect alterations in DNA sequence, whereas epigenetic differences represent chromosomal changes independent of DNA-sequence variations. “Epigenetics” has been subdivided into DNA methylation, RNA interference, histone modifications, and chromatin remodeling.] The present study [see attached] focuses on changes in DNA methylation that might be attributed to the quantifiable amount of cigarette-smoke exposure.

DNA methylation is strongly responsive to smoking status and may mediate health effects, but studies of associations with internal dose are limited. Authors carried out a white-blood-cell epigenome-wide association study (EWAS) of DNA methylation [measured using the MethylationEPIC v1.0 BeadChip (EPIC) assay] and urinary total nicotine equivalents (TNEs; a measure of nicotine uptake) in six racial and ethnic groups across three cohorts: the Multiethnic Cohort Study (MEC), the Singapore Chinese Health Study (SCHS), and the Southern Community Cohort Study (SCCS).

The GEITP email blog topic on 9 May 2024 was about transcriptome-wide association studies (TWASs) across 49 tissues combined. This week’s GEITP email blog topic is a white-blood-cell epigenome-wide association study (EWAS) of DNA methylation.

Worldwide, cigarette smoking is a strong risk factor for lung cancer. Epidemiologic studies have shown that risk for smoking-related lung cancer differs by race and ethnicity, even after accounting for self-reported smoking history. This interindividual variation in risk may be attributed, in part, to racial and ethnic differences in smokers’ uptake of nicotine and carcinogens from each cigarette. It had previously been shown that urinary TNE levels per cigarette were higher among African-Americans and lower among Japanese-Americans, compared with European-Americans — consistent with the correspondingly higher and lower smoking-related lung cancer risks of these groups. However, authors found that TNE-uptake-levels-per-cigarette for Native-Hawaiians and Latinos (which are similar to those of European-Americans) were not consistent with their respectively higher and lower smoking-related lung cancer risks.

Several previous EWASs of smoking have involved self-reported smoking status — with the largest study, across 16 epidemiologic cohorts, reporting differential methylation (between current and never smokers) for >2,600 cytosine-phosphate-guanine (CpG) sites in 1,405 genes, including AHRR (aryl hydrocarbon receptor repressor), F2RL3 (F2R-like thrombin or trypsin receptor-3), and GPR15 (G-protein-coupled receptor-15). Although most of these studies were conducted in populations of mainly European ancestry, more recent EWASs in racial and ethnic minorities have replicated many of the differentially methylated sites associated with smoking status in American Indians, African Americans, and Latinos. However, the authors feel that the generalizability of these studies is still limited, due to the lack of data for many other underrepresented populations.

Furthermore, self-reported smoking status may not fully capture the effect of smoking dose on DNA methylation.

To assess the effect of smoking dose, epigenetic studies have used other measures, including biomarkers of smoking (e.g., cotinine and cadmium). However, these measurements do not correlate entirely with disease risks in the same direction as self-reported cigarettes per day, nor do they account for interindividual differences in nicotine uptake related to smoking behavior and nicotine metabolism (e.g., cotinine). Importantly, internal smoking dose, as measured by urinary TNEs [the sum of the major metabolites nicotine, cotinine, trans-3’-hydroxycotinine (3-HCOT), and their glucuronides, plus nicotine N-oxide], captures an estimated 80%–90% of nicotine uptake and, thus, is the more reliable measure of dose.

In a previous EWAS of internal smoking dose in three racial and ethnic groups (Japanese Americans, Native Hawaiians, and European Americans; N = 612 individuals), it was found that increasing urinary nicotine equivalents (sum of nicotine, cotinine, 3-HCOT, and their glucuronides) were associated with higher DNA methylation at eight CpG sites in Native Hawaiians, but NOT in European Americans or Japanese Americans. Because this provided evidence of potential variation across race and ethnicity, authors felt that further investigation was needed to better explain the transethnic effects of smoking dose on the epigenome.

In the Multiethnic Cohort Study (N = 1,994), TNEs were associated with differential methylation at 408 CpG sites across >250 genomic regions (P <9.0 x 10–8). The top significant sites were associated with the genes AHRR (aryl hydrocarbon receptor repressor), F2RL3 (F2R-like thrombin or trypsin receptor-3), RARA (retinoic acid receptor-alpha), GPR15 (G-protein-coupled receptor-15), PRSS23 (serine protease-23), and the 2q37.1 region (nearest to NIDDM1; non-insulin-dependent diabetes mellitus, type 2) — all of which had decreasing methylation associated with increasing TNEs. Authors identified 45 novel CpG sites, of which 42 were unique to the EPIC array and eight annotated to genes not previously linked with smoking-related DNA methylation. The most significant signal in a novel gene was cg03748458 in MIR383;SGCZ (microRNA-383). Fifty-one of the 408 discovery sites were validated in the Singapore Chinese Health Study (N = 340) and the Southern Community Cohort Study (N = 394) (Bonferroni-corrected P <1.2 x 10–4). Significant heterogeneity by race and ethnicity was detected for CpG sites in MYO1G (myosin-1G, located at Chr 7p13) and CYTH1 (cytohesin-1). In or near these two genes, TNEs significantly mediated the association between cigarettes per day and DNA methylation at 15 sites. Authors believe that their multiethnic study [see attached] highlights the trans-ethnic and ethnic-specific methylation associations with internal nicotine dose — a strong predictor of smoking-related morbidities.
DwN
Am J Hum Genet 7 Mar 2024; 111: 456-472
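[To make the statistics above concrete, here is a minimal Python sketch of the per-CpG regression underlying an EWAS. All column names and covariates are hypothetical, and the actual MEC/SCHS/SCCS analyses adjusted for additional technical and cell-composition covariates. Note that 0.05/408 ≈ 1.2 x 10–4, which is where the Bonferroni validation threshold quoted above comes from.]

import pandas as pd
import statsmodels.formula.api as smf

def run_ewas(df, cpg_cols, exposure="log_TNE", covariates=("age", "sex", "bmi")):
    # Regress each CpG site's methylation level on the exposure, adjusting for covariates.
    rhs = " + ".join((exposure,) + tuple(covariates))
    rows = []
    for cpg in cpg_cols:
        fit = smf.ols(f"{cpg} ~ {rhs}", data=df).fit()
        rows.append((cpg, fit.params[exposure], fit.pvalues[exposure]))
    out = pd.DataFrame(rows, columns=["cpg", "beta", "p"])
    out["epigenome_wide_sig"] = out["p"] < 9.0e-8   # discovery threshold quoted above
    out["bonferroni_sig"] = out["p"] < 0.05 / 408   # ~1.2e-4 for the 408 discovery sites
    return out.sort_values("p")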

Posted in Center for Environmental Genetics | Comments Off on Epigenome-wide association study of total nicotine equivalents

Transcriptome-wide association study (TWAS) of the plasma proteome

As described many times in these GEITP email blogs — to discover a gene or genetic locus that is correlated with a simple or complex trait (phenotype) of your choosing, genome-wide association studies (GWASs) have been commonly used as a means to search the haploid genomes of large cohorts (human or other populations). GWASs have become increasingly popular over the past two decades of genetic studies. In these studies, one looks for one or more single-nucleotide variants (SNVs) that are statistically significantly associated (e.g., P <5.0 x 10–8, also written as <5.0e–08) with a phenotype (e.g., heart disease, schizophrenia, autism spectrum disorder, type-2 diabetes, adverse drug effect, increased risk of cancer, etc.).

Today’s topic is about transcriptome-wide association studies (TWASs), which offer a whole new level of complexity. Whereas GWASs search for SNVs among every DNA nucleotide in one’s haploid genome, TWASs explore differences in transcription of only those genes that are being actively transcribed. Also, whereas each cell of the human body (except for red cells, which have no nuclei) carries a copy of the same DNA, transcription (gene expression) varies widely, depending on cell type; the human body has between 208 and 212 distinct cell types, which can exhibit altered transcription — depending on age, circadian rhythm, metabolism, and various critical life functions.

Regulation of gene expression and protein abundance are important mechanisms through which many noncoding SNVs affect traits. Expression quantitative trait locus (eQTL) mapping in multiple human tissues has discovered a variety of both distal (trans) and proximal (cis) variants associated with gene expression. Whereas cis-eQTLs tend to have larger effect-sizes than trans-eQTLs — studies have shown that trans-eQTLs account for a larger portion of the heritability of gene expression. One study of twins found that, on average, trans-eQTLs explained more than three times the variance in gene expression than cis-eQTLs did. Furthermore, trans-eQTLs tend to be more tissue-specific, suggesting that they might play a role in cell-type differentiation.

Despite the importance of trans-eQTLs in regulating gene expression, QTL-mapping studies have been limited in their ability to detect trans-acting effects — due in part to the high multiple-testing burden, as well as to their comparatively low effect-sizes. Methods that minimize the multiple-testing burden (by prioritizing subsets of variants, or by grouping trans-genes) have proven more successful at identifying trans-eQTLs. For example, one study that tested the cis-component of gene expression for association with the observed expression of distant genes identified more replicable trans-acting genes than a comparable trans-eQTL study. This is because many trans-eQTLs co-localize with cis-eQTLs and affect expression of distant genes through cis-mediators, such as a nearby transcription factor (TF) gene.

The advent of advanced assay technologies that capture and measure protein abundances has enabled protein quantitative trait locus (pQTL)-mapping studies to identify variants associated with the abundance of proteins. Trans-pQTLs, like trans-eQTLs, tend to have lower effect-sizes and to be tissue-specific. Paralleling methods for detecting trans-eQTLs, the authors [see attached] hypothesized that cis-prioritization would improve detection of trans-pQTLs.
Herein, authors applied a transcriptome-wide association study (TWAS) framework to proteomic data, testing the genetically-predicted expression of genes for any associations with the observed abundance of plasma proteins. [Q: Which tissue(s) did authors study? A: 49 tissues combined.] Figure 1A [see attached file] shows an overview of TWAS analysis. Genotype data from both the INTERVAL and TOPMed MESA cohorts were used to impute genetically-regulated expression levels (GReX) in 49 different GTEx tissues. GReX was tested for associations with measured plasma protein levels for all proteins tested in both studies. Authors concluded that TWAS for protein levels is an effective method for identifying replicable trans-acting associations between predicted transcripts and proteins. Authors also found a high expected proportion of true positives for associations between the predicted transcripts and protein products of the same underlying gene. Lastly, using RNA-sequencing (RNA-Seq) data, authors also show that predicted gene expression correlates better with protein levels than with the observed gene expression. DwN Am J Hum Genet 7 Mar 2024; 111: 445-455
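[For readers who want the mechanics, below is a toy two-step “TWAS-for-proteins” sketch in Python. Every name and number is invented for illustration; the study itself imputed GReX with pretrained prediction models across 49 GTEx tissues, not with this toy code.]

import numpy as np
from scipy import stats

def predict_grex(genotypes, weights):
    # Step 1: impute genetically regulated expression (GReX) for one gene,
    # as a weighted sum of cis-SNP dosages (weights come from a reference panel).
    return genotypes @ weights

def twas_test(grex, protein_levels):
    # Step 2: test the imputed expression against measured plasma protein abundance.
    result = stats.linregress(grex, protein_levels)
    return result.slope, result.pvalue

# Toy usage with simulated data:
rng = np.random.default_rng(0)
G = rng.integers(0, 3, size=(500, 20)).astype(float)   # 500 people x 20 cis-SNP dosages
w = rng.normal(0.0, 0.2, size=20)                      # hypothetical pretrained eQTL weights
grex = predict_grex(G, w)
protein = 0.5 * grex + rng.normal(0.0, 1.0, size=500)  # protein partly driven by GReX
print(twas_test(grex, protein))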

Posted in Center for Environmental Genetics | Comments Off on Transcriptome-wide association study (TWAS) of the plasma proteome

Found: the dial in the brain that controls the immune system (??)

This is a layman’s description of a very recent paper published in Nature. And, as such articles usually do, it provides too few scientific details for scientists to judge the work for themselves… But these brief overviews always sound very exciting, as if a major breakthrough has been accomplished. ☹
—DwN

01 May 2024
Found: the dial in the brain that controls the immune system

Scientists have identified the brain cells that regulate inflammation, and have pinpointed how they keep tabs on the immune response.

By Giorgia Guglielmi

[Image: coloured magnetic resonance imaging (MRI) scan of a sagittal section through a patient’s head, showing a healthy human brain and brain stem.]

A population of neurons in the brain stem, the stalk-like structure that connects the bulk of the brain to the spinal cord, acts as the master dial for the immune system.

Scientists have long known that the brain plays a part in the immune system — but how it does so has been a mystery. Now, scientists have identified cells in the brainstem that sense immune cues from the periphery of the body and act as master regulators of the body’s inflammatory response.

The results, published on 1 May in Nature, suggest that the brain maintains a delicate balance between the molecular signals that promote inflammation and those that dampen it — a finding that could lead to treatments for autoimmune diseases and other conditions caused by an excessive immune response.

The discovery is akin to a black-swan event — unexpected, but making perfect sense once revealed, says Ruslan Medzhitov, an immunologist at Yale University in New Haven, Connecticut. Scientists have known that the brainstem has many functions, such as controlling basic processes like breathing. However, he adds, the study “shows that there is a whole layer of biology that we haven’t even anticipated”.
The brain is watching

After sensing an intruder, the immune system unleashes a flood of immune cells and compounds that promote inflammation. This inflammatory response must be controlled with exquisite precision: if it’s too weak, the body is at greater risk of becoming infected; if it’s too strong, it can damage the body’s own tissues and organs.

Previous work has shown that the vagus nerve, a large network of nerve fibres that links the body with the brain, influences immune responses. However, the specific brain neurons that are activated by immune stimuli remained elusive, says Hao Jin, a neuroimmunologist at the US National Institute of Allergy and Infectious Diseases in Bethesda, Maryland, who led the work.

To investigate how the brain controls the body’s immune response, Jin and his colleagues monitored the activity of brain cells after injecting the abdomen of mice with bacterial compounds that trigger inflammation.

The researchers identified neurons in the brainstem that switched on in response to the immune triggers. Activating these neurons with a drug decreased the levels of inflammatory molecules in the mice’s blood. Silencing the neurons led to an uncontrolled immune response, with the number of inflammatory molecules increasing by 300% compared with the levels observed in mice with functional brainstem neurons. These nerve cells act as “a rheostat in the brain that ensures that an inflammatory response is maintained within the appropriate levels”, says study co-author Charles Zuker, a neuroscientist at Columbia University in New York City.

Further experiments revealed two discrete groups of neurons in the vagus nerve: one that responds to pro-inflammatory immune molecules and another that responds to anti-inflammatory molecules. These neurons relay their signals to the brain, allowing it to monitor the immune response as it unfolds. In mice with conditions characterized by an excessive immune response, artificially activating the vagal neurons that carry anti-inflammatory signals diminished inflammation.
Dampening autoimmune symptoms

Finding ways to control this newly discovered body–brain network would offer an approach to fixing broken immune responses in various conditions such as autoimmune diseases and even long COVID, a debilitating syndrome that can persist for years after a SARS-CoV-2 infection, Jin says.

There is evidence that therapies targeting the vagus nerve can treat diseases such as multiple sclerosis and rheumatoid arthritis, suggesting that targeting the specific vagal neurons that carry immune signals might work in people, Zuker says. But, he cautions, “it’s a lot of work to go from here to there”.

Besides the neuronal network identified in the study, there might be other routes through which the body transmits immune signals to the brain, says Stephen Liberles, a neuroscientist at Harvard Medical School in Boston, Massachusetts. What’s more, the mechanisms by which the brain sends signals back to the immune system to regulate inflammation remain unclear. “We’re just scratching the surface,” he says. “We need to understand the rule book of how the brain and the immune system interact.”

doi: https://doi.org/10.1038/d41586-024-01259-2

Posted in Center for Environmental Genetics | Comments Off on Found: the dial in the brain that controls the immune system (??)

The LNT Model

[I always open “CC” (closed captions), if it’s available, to help me understand what speakers are saying.]

To refresh everyone’s memory about the “LNT Model,” recall that GEITP has discussed this topic at least a half-dozen times. Ed Calabrese, emeritus professor in environmental health sciences at University of Massachusetts Amherst, has authored and coauthored more than two dozen papers — starting about 2011 — questioning the scientific conclusions of the fruit fly experiments leading up to the infamous Linear Non-Threshold (LNT) Model for risk assessment (originally) for ionizing radiation exposure. The LNT Model arose from the 1946 Nobel Prize awarded to Hermann J. Muller — and was adopted by the U.S. Environmental Protection Agency (EPA) and many other agencies. And, by extrapolation from ionizing radiation, it was proposed that this same model likely holds true for most (if not all) chemical toxicants and carcinogens.

In his Nobel Prize Lecture on December 12, 1946, Hermann J. Muller argued that the dose-response for ionizing radiation-induced germ-cell mutations was linear and that there was “no escape from the conclusion that there is no threshold.” However, a commentary by Robert L. Brent (discovered by Calabrese in 2015) indicated that Curt Stern, after reading a draft of part of Muller’s Nobel Prize Lecture, telephoned Muller — strongly advising him to remove reference to the flawed, LNT-supportive Ray-Chaudhuri findings, and strongly encouraging him to be guided by the threshold-supportive data of Ernst Caspari [who, incidentally, was DwN’s genetics mentor in college]. Brent wrote that Muller refused to follow Stern’s advice, thereby proclaiming (during his Nobel Prize Lecture) support for the LNT dose-response while withholding evidence to the contrary. This finding is of historical importance, because Muller’s Nobel Prize Lecture gained considerable international attention in 1946 and was a turning point in the acceptance of the linearity model for radiation and chemical hereditary and carcinogen risk assessment.

Ed asserts that the science used — to support the LNT model adopted by the NAS’s 1956 Biological Effects of Atomic Radiation (BEAR) I Genetics Panel — was also tainted by its leaders, who he says deliberately refused to include evidence from NAS’s own Atomic Bomb Casualty Commission (ABCC) human genetic study, also known as the Neel and Schull 1956a report.

Calabrese says Neel and Schull showed “an absence of genetic damage in offspring of atomic bomb survivors, which supports a threshold model (i.e. not the LNT model),” but this was not considered for evaluation by the genetics panel, “therefore, those data could not figure into its decision to recommend the erroneous LNT dose-response model for risk assessment.”

Calabrese suggests that the panel’s work was undermined by Hermann J. Muller and BEAR I chairman Warren Weaver, who “feared that human genetic studies would expose the limitations of extrapolating from animal, especially the fruit fly, Drosophila, to human responses, and admitting this would strongly shift research investments/academic grants from animal to human studies.”

Calabrese adds, “The country expects its scientists to be honest and to follow real data. The BEAR 1 Genetics Panel failed on both counts, being loyal only to their ideology, and then hiding it. They were hailed by many media outlets as the Genetics Dream Team — giving them ‘further cover’ so that their deceptions would never be known. They have gotten away with it, so far, for 68 years.” How much longer will we believe that “inaccurate scientific conclusions do not lead to bad government policy” — a practice that continues today…??

“This history should represent a profound embarrassment to the U.S. NAS, regulatory agencies worldwide — and especially to the U.S. EPA and the risk-assessment community — whose founding principles were so ideologically determined and accepted with little, if any, critical reflection.”

DwN

Number of the Week: “100,000 times greater (than background).” According to Ed Calabrese, Hermann Muller knew of Caspari’s University of Rochester study — before he gave his Nobel talk in December 1946.

“That [fruit fly] study had shown that at the ‘low’ chronic radiation dose rate (i.e., yet still about 100,000 times greater than background), no radiation-induced mutation effects were found. The Caspari study supported the threshold model, and not the LNT model.”
—DwN

COMMENT:

This 4-minute YouTube video [https://www.youtube.com/watch?v=T3IwCjWytiA] is excellent. Very clear, lucid, crisp, and brief — just what the busy scientist wants. The bottom line is that the Linear Non-Threshold (LNT) Model is not founded on scientific facts. Instead, wherever possible, we should base all governmental policies on scientific, evolutionary-based data.

Let’s hope that powerful leaders in the U.S. EPA, and other regulatory agencies are listening. ☹
—DwN

Dan,
Please watch this 4-minute video summary by John Cardarelli (President, Health Physics Society) on the LNT story. It is impressive and should be shared widely. Most people are more prone to watch a short (but powerful) conceptual summary than delving into many dozens or hundreds of pages of text, describing this fraud…
Ed

MORE COMMENTS:
The Radiation Exposure Compensation Act (“the Act” or “RECA”; first passed in Oct 1990)* offers an apology and monetary compensation to individuals who contracted certain cancers and other serious diseases as a result of their exposure to radiation released during above-ground nuclear weapons tests, or as a result of their occupational exposure.

This program involves scientists at CDC who attempt to quantify exposures. These estimated exposures are then incorporated within a LNT risk assessment framework to provide a “scientific” basis for “how much compensation may be received”. This same concept is said to possibly occur for people exposed to the Manhattan Project radiation in the recent controversies in St. Louis, MO — involving a school and related areas. Again, this would be driven by the LNT assumption.

There is the built-in assumption of dose-response causality in the low-dose zone.

Even though there is a 25-30% chance of any of us developing cancer over a lifetime within our society, the issue of background radiation appears to be ignored in these cases. The decision to pay the compensation is a political, rather than a scientific one. In this case, the decision to compensate cancer patients — rests with the US Congress.

It may prove to be interesting to provide a listing of how the LNT concept is being applied in many areas within the US and elsewhere. And then to calculate the (excessive and nonscientific estimated) costs to society.

Ed Calabrese

*By a 69-30 vote, the Senate passed legislation just last week that would expand the types of atmospheric nuclear testing and nuclear material exposure covered by the Radiation Exposure Compensation Act as well as the time window for damage claims. Specifically, people exposed to atmospheric nuclear tests in Nevada, and in the Pacific from 1958 to 1962 would now be eligible for damage claims.

The current law limits claims relating to atmospheric testing to parts of Arizona, Nevada, and Utah; the bill would expand that area to include all of those states as well as Colorado, Idaho, Montana, New Mexico, Utah, and Guam. The bill also expands the list of uranium mining-related jobs that qualify for compensation, notably adding workers involved in remediation efforts at uranium mines and mills.

It extends the coverage to include mines that operated through 1990, instead of through 1971. In addition, the bill creates a new claim eligibility for people affected by Manhattan Project waste in Missouri, Tennessee, Alaska, and Kentucky.

The compensation fund itself was set to expire in June, following a previous extension that Congress passed in 2022. The new legislation would extend the fund to 2030. The bill is sponsored by Sen. Josh Hawley (R-MO), who has frequently drawn attention to the enduring effects of radiological contamination in his state. The Biden administration has endorsed the legislation.

MORE COMMENTS:
Just a reminder that we generated some data with respect to chemical carcinogenesis where “the LNT model does not hold”. ☹

Assessment of human cancer risk from animal carcinogen studies is severely limited by inadequate experimental data at environmentally relevant exposures and by procedures requiring modeled extrapolations that are many orders of magnitude below observable data. We used rainbow trout, an animal model well-suited to ultralow-dose carcinogenesis research, to explore dose-response down to a targeted 10 excess liver tumors per 10,000 animals (ED001). [“The ED50,” e.g., is the dose/exposure of anything that produces a specific effect in 50% of the population that has been administered that dose/exposure; by analogy, the ED001 is the dose producing the effect in 0.1% of the population.]

A total of 40,800 trout were fed zero to 225 ppm dibenzo[a,l]pyrene (DBP) for 4 weeks, sampled for biomarker analyses, and then returned to a control diet for 9 months prior to gross and histologic examination. Suspect tumors were confirmed by pathology, and resulting incidences were modeled and compared to the default EPA LED10 linear extrapolation method. The study provided observed incidence data down to two above-background liver tumors per 10,000 animals at the lowest dose (i.e., an unmodeled ED0002 measurement). [The linear extrapolation model uses a linear equation to predict outcomes; this method is best suited for predictions close to the given data. One simply draws a tangent line from the last data point and extends it beyond the last known value.]

Among nine statistical models explored, three were determined to fit the liver data well — linear probit, quadratic logit, and Ryzin-Rai. None of these fitted models is compatible with the LED10 default assumption, and all fell increasingly below the default extrapolation with decreasing DBP dose. Low-dose tumor response was also not predictable from hepatic DBP-DNA adduct biomarkers, which accumulated as a power function of dose (adducts = 100 x DBP^1.31).

Two-order extrapolations below the modeled tumor data predicted DBP doses producing one excess cancer per million individuals (ED10–6) that were 500- to 1,500-fold higher than those predicted by the five-order LED10 extrapolation. These results are considered specific to the animal model, carcinogen, and protocol used. However, they provide the first experimental estimation, in any model, of the degree of conservatism that may exist in the EPA default linear assumption for a genotoxic carcinogen.
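[A toy numerical sketch in Python (invented numbers, not the trout data), showing why a sublinear dose-response fit can diverge by hundreds-fold from the default linear extrapolation at a one-in-a-million risk level:]

# Hypothetical LED10: the dose associated with a 10% excess tumor risk.
led10 = 100.0       # ppm (made-up value)
target = 1e-6       # one excess cancer per million individuals

# Default linear extrapolation: risk = (0.10 / led10) * dose,
# so the target dose sits five orders of magnitude below the LED10.
linear_dose = led10 * target / 0.10

# A sublinear alternative: risk = 0.10 * (dose / led10)**k with k > 1,
# so risk falls off faster than linearly at low doses.
k = 2.0
sublinear_dose = led10 * (target / 0.10) ** (1.0 / k)

print(f"linear ED1e-6:    {linear_dose:.6g} ppm")
print(f"sublinear ED1e-6: {sublinear_dose:.6g} ppm")
print(f"fold difference:  {sublinear_dose / linear_dose:.0f}x")   # ~316x here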
Reply: Thanks, Dave. I remember this study well — largely because of the huge number of animals (40,800 trout) that George Bailey and coworkers had used.

MORE COMMENTS:
Dan, Regarding the EPA and “hot showers,” this will be a long story, but it is my history, so please bear with me!

I have always believed that there is a threshold for any toxicant, be it physical or chemical. The dose makes the poison.

When I was a graduate student at NYU, we used “whole-body” radiation counters. I used one to study mucociliary clearance, in which subjects would inhale technetium-labeled monodispersed ferric oxide particles. (We had to evaluate the risk of such an experiment and compare it to the risk of chest X-rays. So, this was a benchmark or “threshold” that was considered a minimal risk. We also considered the dose someone would receive if they were living in Denver, or during an airplane flight.)

We had a very good whole-body chamber to block out background radiation, so we could use the lowest dose possible. It was a gun turret from a pre-World War II battleship: the steel had been produced before above-ground nuclear testing (and so was free of fallout radionuclides) and was several inches thick.

In NYU’s Sterling Forest Laboratory, they built a new chamber to study the disposition of radiation in baboons, and to measure radiation exposure in humans. They improved the chamber to have extremely low background.

So — to get started, they wanted to test as many control subjects as possible, to get a baseline for studies of people with known exposure. In those days, everyone wanted to help, and we all volunteered to sit in the chamber, which took several hours. The graduate students volunteered too, doing their homework and reading manuscripts during the counting. Well, guess what. The graduate students who lived down the road from the lab in student housing had very high levels of radiation. Levels were higher in the morning tests than in the afternoon tests. This was weird: were the students exposed by drinking beer or water, or by smoking pot? It turned out that none of these could explain the difference.

After careful investigation, it was determined that it was because they were exposed to water during their morning shower, which came from local wells. The region was on the Ramapo Fault, which leaked radon into the well water. This led to several follow-up studies to measure radon in basements with our newly developed environmental radon detectors. This was a part of what led to home radon mitigation, which is standard practice today.

MORE COMMENTS:
Since you are sometimes an expert on risk assessment, here is a slightly related question:
Do you know of any economic study calculating the EXTRA COSTS of clean-up and toxicology prevention, if one accepts the LNT Model, compared with that of a Threshold Model…??

Dan, I’m not aware of any economic analysis review article, but I’m sure the difference (between LNT and Threshold models) would be huge.
I can think of one good example where economics/common sense exceeded LNT: in 2000-01, I served on an NAS panel that evaluated the 50-ppb (parts per billion) drinking water standard for arsenic that had been in place for decades. To make a long story short, 50 ppb was clearly not protective of public health, because there were many studies in arsenic-endemic areas of the world where levels of >200 ppb were clearly associated with substantial increased bladder and lung cancer risks (in people, not rats). If one used the LNT, it is likely that the 1 in 100,000 risk level would have been way less than 1 ppb.

In the end, the EPA adopted a new standard of 10 ppb, because only a small fraction of public water supplies was above 10 ppb. If they had chosen 1 ppb (which still had a theoretical risk of >1 in 10,000), thousands of public water supplies would have required expensive treatment systems. I had the pleasure of briefing Christine Todd Whitman herself, who was then the EPA Administrator, in Washington DC at 5 pm on Sept. 10, 2001. You recall what happened the next morning in Washington DC.

Needless to say, the NAS arsenic report didn’t get a whole lot of publicity the next day ☹. The briefing team was just starting to brief a Congressional Committee in the Old Executive Office Building next to the White House, when someone ran into the room screaming that everyone needed to evacuate the building immediately. A few minutes later, a plane crashed into the Pentagon — but people in the know were understandably concerned that it was headed for the White House, not the Pentagon.

MORE COMMENTS:
Dan,

First: My thoughts on the US EPA are generally positive. I think one must be cautious in lumping everything the EPA does today with the faulty reasoning that was used in the past, when some risk assessments used the LNT model. My experience of 40+ years has led me to believe the air pollution standards are based on sound science. These standards have been updated several times based on new findings, especially epidemiological studies. I think the air pollution standards have had a great public health impact on the quality of life for our country and the world.

Second: Your question about my history. I was admitted to New York University (NYU) Department of Environmental Medicine’s graduate program in 1976.

Roy Albert, MD, had started the lab — which my mentor Morton Lippmann, PhD, took over before I arrived. Roy started his studies on measuring particle clearance by generating an aerosol in a food blender and personally inhaling radioactive material through a straw. (At least that was the rumor told to me when I joined the lab.)

Mort was trained in Engineering at Cooper Union (BS) and Industrial Hygiene at Harvard (MS); then he worked at the Bureau of Occupational Safety and Health (here in Cincinnati), and in Environmental Health at NYU (PhD) with Roy.

Mort developed a method to generate monodispersed particles (a spinning-top generator) of varying sizes, so that regional deposition could be quantified. Another scientist in Germany, Willie Stahlhofen, also made many measurements on particle deposition. This is why we know that particle size determines where particles land in the nasal-oral, tracheobronchial, or alveolar region, and that clearance times vary (nasal-oral: minutes to 0.5 h; tracheobronchial: 3–24 h; alveolar: 24 h to weeks).

Before and when I arrived at NYU, there was a US Department of Energy Environmental Measurements Laboratory (376 Hudson Street) in lower Manhattan that I visited. The main radon measurement group there was led by Andreas C. George.

I remember meeting David Sinclair there. He is, by many accounts, the “father” of aerosol science. You might want to look up Mort, Andreas, and David.

George Leikauf, Professor DEPHS, Univ Cincinnati

FYI: This is what AI says about LNT assessments.

“Not all risk assessments use a linear no-threshold (LNT) model. The LNT model assumes that any level of exposure to a toxin carries some degree of risk, with risk increasing linearly as exposure increases. However, this model has been increasingly questioned, particularly in the field of toxicology, due to emerging evidence suggesting that low doses of certain stressors may not follow a linear dose-response relationship and may even have beneficial effects (hormesis).

In many cases, risk assessments aim to consider various factors, including the dose-response relationship, exposure pathways, biological mechanisms, and uncertainty factors, to estimate the potential risks associated with exposure to environmental pollutants and chemicals. Common practices in risk assessment today often involve using more sophisticated models and approaches that take into account dose-response curves other than linear, as well as factors such as variability and uncertainty.

Some common practices in contemporary risk assessment include:

Threshold Models: These models suggest that there may be a threshold below which exposure to a toxin does not pose a significant risk. Below this threshold, the body’s defense mechanisms can effectively handle the stressor without adverse effects.
Non-Linear Models: These models acknowledge that the dose-response relationship may not be linear and that the relationship between dose and response may vary depending on factors such as exposure duration, timing, and individual susceptibility.
Mode of Action Analysis: This approach focuses on understanding the biological mechanisms through which a toxin exerts its effects and considers how these mechanisms may influence the dose-response relationship. It can help identify critical events and inform more accurate risk assessments.
Use of Epidemiological Data: Epidemiological studies provide valuable data on the health effects of environmental exposures in human populations. Incorporating epidemiological evidence into risk assessment can help validate dose-response relationships and improve risk estimates.

Overall, contemporary risk assessment practices aim to integrate multiple lines of evidence, consider uncertainties, and use the most relevant and up-to-date scientific knowledge to inform decision-making regarding public health and environmental protection.”
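[As a toy numerical illustration of how the first two model families above differ (made-up slope and threshold values):]

def lnt_risk(dose, slope=1e-3):
    # Linear no-threshold: any dose, however small, carries proportional excess risk.
    return slope * dose

def threshold_risk(dose, threshold=50.0, slope=1e-3):
    # Threshold model: zero excess risk below the threshold; linear above it.
    return max(0.0, slope * (dose - threshold))

for dose in (1, 10, 50, 100, 500):
    print(f"dose {dose:>4}: LNT {lnt_risk(dose):.4f}   threshold {threshold_risk(dose):.4f}")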

Posted in Center for Environmental Genetics | Comments Off on The LNT Model

Genome evolution and divergence in cis-regulatory architecture is associated with condition-responsive development in horned dung beetles

This topic is “a first for GEITP” (!!): studying developmental evolution in the dung beetle. In our 16 years of existence, GEITP is certain that we’ve never discussed gene regulation in the dung beetle before…!! ☹

[Image: a horned dung beetle. These beetles are reputedly the strongest insects in the world, able to lift 200–1,141 times their own body weight.]

But, even the genomes of beetles can “sense” environmental pressures (signals such as nutrition from food), and then, downstream, genetic networks are altered in order to produce morphological changes (e.g., size and shape of head horns) that will make the animal more likely to survive (e.g., success in fighting off other males before breeding) in its changing environment.

Recall that epigenetic regulation of genes includes DNA methylation, RNA interference, histone modifications, and chromatin remodeling. This study involves histone marking and chromatin remodeling — inferred from DNA-sequence differences in regions of the genome responsible for the horn-morphology phenotype of these beetles. ☹

Phenotypic plasticity is thought to be an important driver of diversification and adaptation to environmental variation, yet the genomic mechanisms mediating plastic trait development and evolution remain poorly understood. The Scarabaeinae, or true dung beetles, are a species-rich clade of insects recognized for their highly diversified nutrition-responsive development — including that of cephalic horns (horns on the head) — which are evolutionarily novel, secondary sexual weapons that exhibit remarkable intra- and inter-specific variation. Authors [see attached] investigated the evolutionary “basis for horns,” as well as other key dung-beetle traits, via comparative genomic and developmental assays.

Phenotypic plasticity is defined as “the capacity of a single genotype to produce multiple phenotypes in response to environmental variation” and constitutes a ubiquitous property of multicellular life. Plasticity is thought to be an important driver of adaptation — allowing organisms to maintain high fitness in the face of environmental adversity and variability, as well as of diversification via evolutionary changes in the genetic architectures underlying plastic trait formation.

The ecological and evolutionary significance of phenotypic plasticity has received much attention, and diverse genes and signal-transduction pathways have been identified as important mediators of plastic development across biological systems. In addition to coding sequence, epigenetic modifications such as histone marking are predicted to provide important mechanisms of plastic gene-expression regulation. Furthermore, recent quantitative trait locus (QTL) analysis, combined with genome editing by CRISPR-Cas9, has begun to establish the first causal connections between several cis-regulatory elements and the plastic development of nematode feeding structures. Yet, despite these advances, the genomic basis underlying developmental plasticity and its evolution, and in particular the role of the non-coding genome and chromatin architecture in regulating conditional responses in trait formation, remain largely unknown.

One group of animals that exhibit an extreme degree of phenotypic plasticity are the true dung beetles (Scarabaeinae) (yes, it’s spelled correctly), a very diverse clade (>6,000 extant species) found on every continent except Antarctica. The extraordinary evolutionary success of this group is attributable, at least in part, to their ability to exploit an abundant resource inaccessible to most other insects — i.e., dung. For nearly every species, the acquisition and utilization of dung is essential to each aspect of these beetles’ life history. This includes not only consuming dung as a food source (coprophagy), but also using it as a resource for larval food provisioning and nest construction, thereby enabling a single offspring to complete development from egg to adult within the confines of an underground brood-ball.

One key adaptation aiding in this strategy is a highly diversified degree of nutrition-responsive (plastic) development. In the case of dung beetles, nutrition-responsive development is a flexible developmental response to variable and limited larval food quality and quantity — resulting in a wide range of adult body sizes, which in turn has fueled the evolution of alternative, body size-dependent morphological, physiological, and behavioral phenotypes. Accordingly, phenotypic plasticity is predicted to be an evolutionary driver for many dung beetle adaptations.

Furthermore — due to their diversity, abundance, pronounced environment-sensitive development, and unique feeding and reproductive traits — dung beetles have thus long served as important models for behavioral (e.g., status-dependent selection and sperm competition models), developmental (e.g., mechanisms of plasticity), evolutionary (e.g., the origins of evolutionary novelties), and ecological studies (e.g., meta-population theory, nutrient recycling, soil aeration). However, despite the significance of dung beetles in both basic and applied science, a reference-quality genomic resource for any member of this insect group has so far been lacking.

Among the most conspicuous morphological traits of dung beetles are head horns — novel, highly diversified secondary sexual weapons used in reproductive competition. Horns vary tremendously in shape, size, and number across and within species, mediate widespread sexual dimorphisms, and exhibit a high degree of nutrition-responsive development among conspecific males (i.e., animals within the same species). Most commonly, horn development is limited to, and often exaggerated in, males, whereas females are (usually) hornless.

Intriguingly, head horns lack homology to any other appendage or body part among Insecta, and as such qualify as an evolutionary novelty even by the strictest of definitions — yet gains, losses, and modifications to horn structure are common among even closely related species. Thus, beetle horns exhibit a high degree of evolutionary lability and represent a powerful natural system for understanding how complex traits originate and diversify.

This study began by presenting chromosome-level genome assemblies of three dung beetle species in the tribe Onthophagini (>2,500 extant species) — including Onthophagus taurus, O. sagittarius, and Digitonthophagus gazella. Comparing these assemblies (to those of seven other species across the order Coleoptera) identified evolutionary changes in coding sequence associated with metabolic regulation of plasticity and metamorphosis. Authors then contrasted chromatin accessibility in developing head-horn tissues of high- and low-nutrition O. taurus males and females, and identified distinct cis-regulatory architectures underlying nutrition-responsive, as compared with sex-responsive, development — including a large proportion of recently evolved regulatory elements sensitive to horn-morphology determination.

Binding motifs of known, and new candidate, transcription factors were identified and are enriched in these nutrition-responsive open chromatin regions. This study highlights the importance of chromatin-state regulation in mediating the development and evolution of plastic traits, demonstrates that gene networks are highly evolvable transducers of environmental and genetic signals, and provides new reference-quality genomes for three species that will strengthen future developmental, ecological, and evolutionary studies of this insect group.
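[One simple way such motif enrichment can be tested is a 2x2 contingency-table comparison, sketched below in Python with invented counts; the paper’s actual motif analysis presumably relied on dedicated genomics tools rather than this toy calculation.]

from scipy.stats import fisher_exact

# 2x2 table: motif present/absent x nutrition-responsive vs background open-chromatin regions
responsive_with_motif, responsive_without = 120, 380     # made-up counts
background_with_motif, background_without = 900, 8600    # made-up counts

odds_ratio, p_value = fisher_exact(
    [[responsive_with_motif, responsive_without],
     [background_with_motif, background_without]],
    alternative="greater",   # test for enrichment, not depletion
)
print(f"odds ratio = {odds_ratio:.2f}, one-sided P = {p_value:.2e}")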
DwN
PLoS Genet March 2024; 20: e1011165
COMMENT:
Christine has sent GEITP a joke about this topic of the dung beetle. But — how many GEITPers will “understand” this joke…?? How many of us have read the fictional novels written by Franz Kafka (1883-1924)…??

“I feel a metamorphosis coming on…”
F. Kafka ☹

Answer to my question above: Franz Kafka’s message in his short novel, The Metamorphosis, deals with modernist themes — such as isolation and the absurdity of life. In the story, the main character, Gregor, has devoted himself to his family. And the absurd situation of becoming an insect has left Gregor alienated from other humans…

DwN

Posted in Center for Environmental Genetics | Comments Off on Genome evolution and divergence in cis-regulatory architecture is associated with condition-responsive development in horned dung beetles

The NIH Sacrifices Scientific Rigor for DEI

Pasted below is an excellent brief article/letter — on a topic that is mostly forbidden to discuss (i.e., that the government “is sacrificing its highest standards of scientific creativity” in order to fully embrace DEI). I’ve had experiences, and have heard of other stories, in which “scientific rigor” has simply been muzzled, or discouraged, in order to promote this “Woke” ideology of “Diversity, Equity, and Inclusion” (DEI). Recently — some states, universities, and companies in the U.S. have begun to rein in this political nonsense; so, perhaps this fad is on its way out. [Let’s hope “mandatory usage of pronouns” is the next issue to get phased out.]

For any GEITP’er(s) not aware of this partisan ideology, congratulations for remaining naïve. DEI refers to organizational frameworks that seek to promote “fair treatment and full participation of all people,” particularly groups “who historically have been underrepresented or subject to discrimination — on the basis of identity or disability.” ESG stands for environmental, social, and governance: the three main topic areas on which companies and academia are expected to report. DEI is regarded as a pillar of ESG frameworks, comprising the central “S” in “ESG.” It is proposed that “there can be no successful ESG without a sharpened focus on DEI.”

DwN

The NIH Sacrifices Scientific Rigor for DEI
“Diversity, equity, and inclusion” (DEI) in medical programs? It is not surprising, nor is it the first time grant money has been allocated for the purpose of furthering this discriminatory ideology.

John D. Sailer of the National Association of Scholars acquired thousands of pages of documents, revealing “how the [National Institutes of Health (NIH)] enforces an ideological agenda, prompting universities and medical schools to vet potential biomedical scientists for ‘wrongthink,’ regarding diversity.” The NIH has long been known to support DEI and to require the schools it funds to hire for diversity. Cornell University is just the latest to take the grant money from the NIH and join the ranks of the DEI-cluster-hiring cadre.

Sailer has reported on NIH grants before. The NIH’s Faculty Institutional Recruitment for Sustainable Transformation (FIRST) program, “which funds diversity-focused faculty hiring in the biomedical sciences,” has already funded institutions like the University of South Carolina and the University of New Mexico. By accepting FIRST grant money, colleges, universities, and medical schools must use diversity statements for all grant faculty hires. If candidates do not demonstrate sufficient devotion to DEI based on rubrics, they receive a low score—and typically will not be hired, Sailer explains,

That rubric penalizes job candidates for espousing colorblind equality and gives low scores to those who say they intend to ‘treat everyone the same.’ It likewise docks candidates who express skepticism about the practice of dividing students and faculty into racially segregated ‘affinity groups.’

Additionally, the FIRST rubrics often value DEI commitment on par with merit and academic excellence. The consideration given to DEI may vary depending on the rubric — like the Florida State University faculty-hiring rubric, which weights DEI commitment at 28 percent — but it is a key factor regardless. Requiring a quota of diversity hires without respect to merit, especially in the medical field, is an alarming trend — not only because schools will be churning out students who could have a deficiency of proper training, but also because of the ideological discrimination against faculty. Sailer’s records requests show the truth behind DEI hiring,

The records underscore that scientists simply can’t get hired in the program without an outstanding DEI score. Northwestern’s grant progress report describes an evaluation rubric that equally weighs a ‘commitment to diversity’ and research potential—a remarkable value judgment for a program that is supposed to be focused on cancer, cardiovascular health, and neuroscience.

The FIRST grant program sets a dangerous precedent. Even some colleges and universities that have not received FIRST grants are utilizing DEI rubrics in hiring decisions — their motivations for doing so are not always disclosed, but it usually comes down to status, seeking future funding, or sheer commitment to the DEI ideology.

From administrative bloat to DEI-cluster hiring and merit-blind admissions quotas, when will this madness end? States like Texas, Utah, and Florida are already pushing back against DEI initiatives. However, with funding pouring into both red and blue states from institutions like the NIH, defeating the DEI hydra remains challenging. Texas, for instance, has banned DEI statements and offices at state universities, yet two universities have secured NIH FIRST grants and vow to promote DEI. The ensuing legal and legislative fights will be intriguing to watch.

As concerns and tensions around DEI continue to mount, spurred by revealing documents, investigations, and increasing pressure on colleges and universities to be accountable for their actions—or lack thereof—what will it take to vanquish the DEI beast? While we await that decisive moment, we remain steadfast in our commitment to uphold truth, excellence, and integrity in higher education.

Until next week.
Kali Jerrard
Communications Associate
National Association of Scholars

COMMENTS:
Dear Dr. Nebert,
While I understand there can be different views on some of these issues, to imply in the title that NIH plans to sacrifice scientific rigor for DEI is simply “over the top.” It is not an “either or”; the FIRST program promotes “both and.” In the end, without successful role models that reflect our growing diverse population as doctors, research scientists, engineers, and other biomedical professionals — we simply continue to propagate the myth that only white men are worthy of such professions. I, for one, would like to break down some of those barriers, and if it means incentivize some opportunities, then I am all for it.
Sincerely, Ken Greis, PhD
Professor of Cancer Biology; Univ Cincinnati College of Medicine

Hi Dan:
“Incentivize some opportunities,” says Ken Greis. I guess lower standards for admission, and eliminating or reducing metrics for completion of training (PhD, MD, commercial pilot license, or whatever) — are various means “to incentivize.” The elimination of meritocracy is a bad idea, and we will suffer the consequences for generations. We already have in place programs to assist individuals from underrepresented groups. If we are really serious about enhancing the chances for these individuals to achieve their goals, the solution is not “lowering the bar” but rather “raising the ground on which they stand,” so that the bar can be more easily cleared.

I’ve been a standing member of three NIH study sections, as well as serving ad hoc on over 40 others. In addition, I served as PI for a T35 training grant from NIEHS offering summer internships for minority undergraduates. We recruited African-American students from Xavier in New Orleans. I also directed a T32 training grant for a dozen years, and we actively recruited persons of color — which was difficult, given the demographics in Oregon. In my 35 years in academia, I saw many programs designed to enhance opportunities for minority students and never saw any barriers. I find the statement “…continue to propagate the myth that only white men are worthy of such professions” very offensive.

Dr. Greis is obviously well meaning, and I have no doubt he sincerely wants to increase the representation of underserved populations in these professions. As is typical though, rather than an honest discussion of your “alternative facts,” he simply requests that you cut off further communication.

Keep up the good work, Dan.

David E. Williams
Extinguished Professor Emeritus
Linus Pauling Institute and Environmental and Molecular Toxicology; Oregon State University

P.S. And, of course, Victor Davis Hanson captures what I was trying to say — in a much more elegant fashion: ☹☹

“If one does not qualify for a position or slot by accepted standards, then a series of further remedial interventions are needed to sustain the woke project — by providing exceptions and exemptions, changing rules and requirements, and misleading the nation that a more “diverse” math, or more “inclusive” engineering, or more “equity” in chemistry can supplant mastery of critical knowledge that transcends gender, race, or ideology…”

March 27, 2024

So, if I understand Ken Greis correctly, “justification” comes from the need to create role models. I am not sure that role models need to be of the same race or sex. As a youth, I came from a lower-middle-class Italian-American Catholic family. It was my impression that important professions were dominated by white Anglo-Saxon Protestants. I didn’t have a problem with that. In fact, I embraced them as role models.
Ray
Professor Emeritus, Procter & Gamble; University of Massachusetts at Amherst


Hi Dan,

I am a member of National Association of Scholars (NAS). It will be interesting to see if you get any pushback on this NAS Op-Ed article.
I hope you are well.
Nancy

Professor Emerita, Oregon State University

No wonder that in Latin, “DEI” is the plural of DEUS, god: “The gods.”

Alvaro

Professor Emeritus, Univ Cincinnati College of Medicine

From: Anonymous
Sent: Friday, March 22, 2024 5:36 PM
Dan,
I saw this in a Letter-to-the-Editor, Wall Street Journal:
What “DEI” really means:
“D” for Divisiveness

“E” for Entitlement

“I” for Intimidation



Posted in Center for Environmental Genetics | Comments Off on The NIH Sacrifices Scientific Rigor for DEI