Functional interpretation, cataloging, and analysis of 1,341 glucose-6-phosphate dehydrogenase variants

During the 1980s, there was a time when we wondered if any drug-metabolizing-enzyme (DME) gene might have more than one or two mutant variants. Then, in the late 1980s came the first paper in which five variant alleles were cloned and characterized for the CYP2D6 pharmacogene (Nature, 4 Feb 1988; 331: 442-446).


In the 1990s, several publications began recommending nomenclature for clinical DME genes, some of which comprised dozens, and even more than 100, alleles. The first website for the P450 alleles was cypalleles.ki.se, which included only P450 variants. Today, an all-inclusive repository for DME variants can be found at pharmvar.org; its archives index includes 13 “Useful Links” around the world. For example, check out the PharmGKB (pharmacogenomics knowledge base) useful link, https://www.pharmgkb.org/vips (“vips” = very important pharmacogene summaries). This site lists 34 genes that have “substantial evidence supporting their importance in clinical pharmacogenomics” (Tier 1; G6PD is #21), 25 genes that have “limited evidence supporting their importance in clinical pharmacogenomics” (Tier 2), and nine (“cancer genome”) genes that are “important in tumor pharmacogenomics.”


The attached article describes G6PD (glucose-6-phosphate dehydrogenase) as “the winning pharmacogene” for having the largest number of identified variants: 1,341 alleles(!!). Interpreting the effects of sequence variation in G6PD makes it possible to predict which individuals are at risk for adverse drug reactions (ADRs). By analyzing data from publications and databases, the authors provided interpretations for 186 additional G6PD variants of uncertain significance, bringing the total number of interpreted (“mechanistically understood”) variants to 400 (the remaining 941 variants are still not “mechanistically understood”).


Why would a gene exhibit so many variant alleles? Several reasons include: [a] the length of the gene (kilobases of coding region, plus 5’ and 3’ regulatory regions in the genome); [b] the poorly understood “high rate of mutability” in some regions of the genome; and [c] how thoroughly, and in how many individuals, the gene has been characterized and sequenced.


Glucose-6-phosphate dehydrogenase (G6PD) deficiency is the most common genetic defect, worldwide, that presents as a missing or defective enzyme — affecting more than 500 million individuals. G6PD is important in red blood cells, because it is the sole source of NADPH (needed for detoxication of reactive oxygen species).


Individuals with G6PD deficiency have variants with decreased activity, which can lead to three main clinical manifestations: [a] neonatal jaundice, [b] chronic non-spherocytic hemolytic anemia (CNSHA), and [c] acute hemolytic anemia (AHA) — in response to stressors such as certain foods, antibiotics, antimalarial drugs, and infections that elevate reactive oxygen species. The underlying G6PD deficiency reveals great genetic diversity, with 1,341 (currently identified) variant alleles, mostly missense variants in the coding region.


Interpreting the function and clinical effects of G6PD variants is critical to prevent adverse drug reactions, which can be avoided by prescribing alternative drugs, and to promote neonatal health by prompting increased monitoring. This is the reason why G6PD qualifies as a “pharmacogene.”  😊


DwN


Am J Hum Genet, 2 Feb 2023; 110: 228-239

COMMENT:
Good point, Alvaro. In Croton, southern Italy, the Greek philosopher/mathematician Pythagoras (570-490 BC) is believed to be the first to describe the “dangers of eating fava beans,” because many individuals in that area who ate fava beans often experienced painful red blood cell hemolysis (i.e., developed hemolytic anemia). Now we know that this disease (favism) is associated with G6PD deficiency, and the incidence of G6PD deficiency in Sardinia and parts of southern Italy is as high as one in three.

One story (about how the G6PD polymorphism was discovered) concerns a World War II observation that certain soldiers — especially African-Americans — taking the antimalarial drug primaquine and flying in troop transport aircraft at more than 8,000 feet altitude (lowered pO2) were reported to develop painful acute hemolytic crises. This led to the (1956) discovery of low red blood cell G6PD activity and decreased GSH concentrations in affected individuals. Subsequently, it was found that this enzyme is extremely polymorphic, that almost one in ten African-Americans has the A-type of G6PD deficiency, that more than two dozen commonly prescribed drugs in addition to primaquine cause hemolytic anemia in G6PD-deficient patients, and that G6PD deficiency is inherited as an X-linked recessive trait and currently affects more than 500 million people worldwide.

This is an example of an enzyme polymorphism having an indirect effect on drug toxicity. G6PD is an enzyme in the hexose monophosphate shunt, one of the principal sources of NADPH generation (which restores oxidized glutathione, GS-SG, to its reduced form, GSH) in normal red cells and many other tissues. Many drugs and their metabolites can put a burden on GSH levels, and this can lead to GSH deficiency in G6PD-deficient patients, who have little GSH reserve to spare. GSH deficiency in the red cell leads to membrane fragility and hemolysis — hence, hemolytic anemia. The G6PD gene is located on the X chromosome, which is consistent with G6PD deficiency being transmitted as an X-linked recessive trait; this means that a “carrier” mother and a healthy father will have children displaying one of four (equally likely) possibilities: a healthy female, a carrier female, a healthy male, or an afflicted male. Interestingly, there is a more than 100-fold difference in the incidence of G6PD deficiency between Ashkenazic (0.4%) and Sephardic (53%) Jewish males (Ashkenazic Jews live mainly further north in Europe, whereas Sephardic Jews live around the Mediterranean Sea).
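To spell out that X-linked cross, here is a minimal illustrative sketch in Python (my own, not from the article; the labels “XA” for the normal allele and “Xa” for a deficiency allele are placeholders used only for illustration):

```python
from itertools import product

# X-linked recessive cross: carrier mother (XA Xa) x unaffected father (XA Y).
# "XA" = X chromosome carrying the normal G6PD allele; "Xa" = X carrying a deficiency allele.
mother_gametes = ["XA", "Xa"]
father_gametes = ["XA", "Y"]

def classify(pair):
    """Classify an offspring from its pair of inherited sex chromosomes."""
    if "Y" in pair:                      # male: hemizygous for whichever X he inherited
        return "affected male" if "Xa" in pair else "healthy male"
    if pair.count("Xa") == 2:            # cannot occur in this particular cross
        return "affected female"
    return "carrier female" if "Xa" in pair else "healthy (non-carrier) female"

for egg, sperm in product(mother_gametes, father_gametes):
    print(f"{egg} + {sperm} -> {classify([egg, sperm])}")

# Expected output (the four equally likely outcomes described above):
#   XA + XA -> healthy (non-carrier) female
#   XA + Y -> healthy male
#   Xa + XA -> carrier female
#   Xa + Y -> affected male
```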

Excellent point, Doron. The frequencies of low-activity alleles of G6PD in humans are highly correlated with the prevalence of malaria in Africa and Southeast Asia. These deficiency alleles are thought to provide decreased risk for infection by the Plasmodium parasite and are maintained at high frequency — despite the illnesses that they cause. The “high frequencies of low-activity G6PD alleles” in malaria-infested areas worldwide are an example of selective advantage in a human population (i.e., those resistant to malaria are more likely to have offspring in each subsequent generation). 😊😊

DwN

From: Doron Lancet
Sent: Saturday, May 6, 2023 12:04 AM

Highly interesting! Another explanation of the large number of genetic variants may be an “evolutionary benefit,” similar to the case of MHC genes. It would be interesting to fathom the analog of “at least some members in the population will likely survive a viral attack”.

Best, Doron.

Prof. Doron Lancet

Dept. Molecular Genetics, Weizmann Institute of Science, Rehovot, Israel

From: Puga, Alvaro
Sent: Saturday, May 6, 2023 2:16 PM

Who was the Greek or Italian philosopher in ancient times who warned the citizens of some village not to eat fava beans? Wasn’t the G6PD polymorphism the reason for those genetic differences in response to toxicity of fava beans, which is often regarded as “the earliest example of pharmacogenetics”?


Posted in Center for Environmental Genetics | Comments Off on Functional interpretation, cataloging, and analysis of 1,341 glucose-6-phosphate dehydrogenase variants

ZOONOMIA 240 Mammalian Genomes sequenced and compared (!!!)

The attached pdf file is a follow-up of our GEITP blog sent out on 29 Apr 2023, which was a summary in Nature of the collection of papers published in the 28 Apr 2023 issue of Science. The first two articles (Perspectives) and the first Research Article are included in the attached pdf file. For those interested, the starting page number is listed for the remaining ten Research Articles. 😊

PERSPECTIVES

Genomics expands the mammalverse p. 358

Seeing humans through an evolutionary lens p. 360

RESEARCH ARTICLES

Mammalian evolution of human cis-regulatory elements and transcription factor-binding sites p. 362

Comparative genomics of Balto, a famous historic dog, captures lost diversity of 1920s sled dogs p. 363

Relating enhancer genetic variation across mammals to complex phenotypes using machine learning p. 364

A genomic timescale for placental mammal evolution p. 365

Evolutionary constraint and innovation across hundreds of placental mammals p. 366

Leveraging base-pair mammalian constraint to understand genetic variation and human disease p. 367

Integrating gene annotation with orthology inference at scale p. 368

The functional and evolutionary impacts of human-specific deletions in conserved elements p. 369

Three-dimensional genome rewiring in loci with human accelerated regions p. 370

Insights into mammalian transposable element (TE) diversity through the curation of 248 mammalian genome assemblies p. 371

The contribution of historical processes to contemporary extinction risk in placental mammals p. 372

DwN

Science 28 Apr 2023; 380: pp 356-372

From: Nebert, Daniel (nebertdw)
Sent: Saturday, April 29, 2023 6:08 PM
Subject: Comparing the genomic sequences of 240 mammals (!!!)

Science magazine has 11 papers in its 28 Apr 2023 issue on this topic of comparing the genomic sequences of 240 mammals. Perhaps the most mind-boggling finding is that “at least 10.7% of the human genome is identical to that of almost all the species that the researchers have studied.” Most of these ‘conserved’ areas are so-called regulatory genes, which modulate how and when other genes are transcribed and ultimately translated into proteins; the function of about half of these conserved genes had previously been unknown.

Long-term plans (in collaboration with other consortia) are to compare the genomes of all ~71,000 living vertebrate species, which include mammals, reptiles, fish, birds, and amphibians. 😊

DwN

Huge cache of mammal genomes offers fresh insights on human evolution

The Zoonomia Project is helping to pinpoint genes responsible for animal-brain size and for human disease.

Max Kozlov


The Amazon River dolphin is one of 52 endangered species studied by the Zoonomia Project.

When they were first published in the early 2000s, the complete genomes of the mouse, human, rat and chimpanzee opened the door for geneticists to compare their sequences and learn more about how mammals evolved.

Now, about two decades later, researchers have amassed and compared the genomes of 240 mammals, showing how far the field has come. From this trove of data — the largest collection of mammalian genetic sequences yet — they have learnt more about why some mammals can smell particularly well, why others hibernate and why some have developed larger brains. The effort, called the Zoonomia Project, reported these and other findings in a series of 11 papers published in Science on 27 April [1].

The data highlight not only which areas of the genomes are similar, but also when, on the scale of millions of years, their genetic sequences diverged. “This really wasn’t possible without this scale of data set before,” says Katie Pollard, a data scientist at the University of California, San Francisco, who is part of the project.

Sequencing so many mammalian species is an enormous accomplishment, says David Haussler, scientific director of the Genomics Institute at the University of California, Santa Cruz. “We were always dreaming about that.” Haussler helped to sequence the first human genome in the early 2000s.
Mining the data

Those first mammalian genomes published long ago were a good start, says Kerstin Lindblad-Toh, a geneticist at Uppsala University in Sweden who is one of the leaders of Zoonomia. But she and her colleagues realized that they would need more than 200 genomes to offer a statistically significant glimpse at how mammalian species had changed over time — especially if they wanted to zoom in on genetic changes at the level of single DNA base pairs.


The Zoonomia Project has revealed new details about how animals, such as this hazel dormouse, evolved to hibernate.

The Zoonomia consortium, which includes more than 150 scientists and 30 research teams from around the world, made its 240 genomes available to the public for the first time in 2020 [2]. Since then, the researchers have looked for similarities among them. They hypothesized that if certain segments of the genomes were similar — and remained so over tens of millions of years across species — those segments must serve an important function for these animals. In one analysis, a team used this concept to estimate that at least 10.7% of the human genome is identical to those of almost all of the species the researchers studied [3]. Most of these ‘conserved’ areas are so-called regulatory genes, which modulate how and when other genes are transcribed and ultimately translated into proteins. The function of about half of these conserved genes was previously unknown.

Other analyses looked at how the genomes differ, highlighting the way that certain traits such as the sense of smell evolved, but also pointing researchers to which genes contribute to disease. Genome-wide association studies (GWAS) have already compared thousands of human genomes to identify variants that are linked with disease [4]. But finding the precise genes that aren’t just linked to, but cause, a disease has proved difficult, especially for conditions that have millions of associated variants. Seeing how those genes have evolved over time in all mammals can help to narrow the search “by an order of magnitude”, Lindblad-Toh says.

Using Zoonomia’s data, researchers have also constructed a phylogenetic tree that estimates when each mammalian species diverged from its ancestors [5]. This analysis lends support to the hypothesis that mammals had already started evolutionarily diverging before Earth was struck by the asteroid that killed the dinosaurs about 65 million years ago — but that they diverged much more rapidly afterwards.
Only the beginning

The Zoonomia Project is just one of dozens of efforts to sequence animal genomes. Another large effort is the Vertebrate Genomes Project (VGP), which aims to generate genomes for roughly all 71,000 living vertebrate species, which include mammals, reptiles, fish, birds and amphibians. Although the two projects are independent of one another, many researchers are a part of both, says Haussler, who is a trustee of the VGP.

Having so many mammalian genomes is a feat, says Walter Jetz, an ecologist at Yale University in New Haven, Connecticut, but Zoonomia’s database so far has a bias towards species with large bodies and those that are not from tropical regions. Sequencing a greater diversity of mammals will allow researchers to draw more authoritative conclusions about mammalian evolution, he says. Lindblad-Toh says the project aimed to select a wide range of species to sample, but that the more mammals added to this data set, the more powerful it will be. “We’re entering an exponential phase of genome sequencing with mammals and other groups,” says Nathan Upham, an evolutionary biologist at Arizona State University in Tempe who was not involved with the research.

Elinor Karlsson, a geneticist at the University of Massachusetts Chan Medical School in Worcester, who is one of the leaders of Zoonomia, points out that the data are publicly available on the project’s website. “We’re really hoping more people are going to start figuring out all the questions that could be asked with these data sets,” she says.

Zoonomia isn’t the culmination of research in mammal genomics — it’s only the beginning, Upham says. The past 20 years were about learning how to properly sequence genomes, he adds. “Now we’re just starting to really dive deep into the genomes.”

doi: https://doi.org/10.1038/d41586-023-01446-7
References

1. Vignieri, S. Science 380, 356–357 (2023).

2. Zoonomia Consortium. Nature 587, 240–245 (2020).

3. Christmas, M. et al. Science 380, eabn3943 (2023).

4. Uffelmann, E. et al. Nature Rev. Methods Primers 1, 59 (2021).

5. Foley, N. et al. Science 380, eabl8189 (2023).

Posted in Center for Environmental Genetics | Comments Off on ZOONOMIA 240 Mammalian Genomes sequenced and compared (!!!)

Lifespan: Why We Age—and Why We Don’t Have To

I’m 27% of the way through this long book. The writing is exceptionally good at translating complex molecular biology into layman’s language — so that non-scientists (and EVEN PHYSICIANS) can understand it. Author Sinclair is an Australian who has a lab at Harvard University and a start-up company called Elysium that sells a product (NMN; nicotinamide mononucleotide) guaranteed to make you live 20 years longer with no signs of aging.  😊😉

—D

Here is an excerpt which makes one humble:

Turns out, there is no new law required to explain life. At the nanoscale, it is merely an ordered set of chemical reactions, concentrating and assembling atoms that would normally never assemble, or breaking apart molecules that would normally never disintegrate. Life does this using proteinaceous Pac-Men called enzymes, made up of coils and layered mats of amino acid chains.

Enzymes make Life possible by taking advantage of fortuitous molecular movements. Every second you are alive, thousands of glucose molecules are captured within each of your trillions of cells by an enzyme called glucokinase, which fuses glucose molecules to phosphorus atoms, tagging them for energy production. Most of the energy created is used by a multicomponent RNA and protein complex called a ribosome, whose primary job is to capture amino acids and fuse them with other amino acids to make fresh proteins.

Think of an enzyme (e.g., catalase) which has the largest turnover frequency, with values up to 4 × 10⁷ sec⁻¹ having been reported. This boggles the mind (a turnover of 4 × 10⁷ per second corresponds to roughly one catalytic event every 25 nanoseconds, going on continuously in each cell). This is creating ORDER from CHAOS. And only Gibbs Free Energy (and God) are capable of this phenomenon…
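Just to spell out the arithmetic behind that turnover number (my own back-of-the-envelope check, not a figure from the book):

\[
\frac{1}{4 \times 10^{7}\ \mathrm{s^{-1}}} = 2.5 \times 10^{-8}\ \mathrm{s} = 25\ \mathrm{ns\ per\ catalytic\ event}
\]

In other words, a single catalase molecule completes on the order of forty million reactions every second, roughly one every 25 nanoseconds.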


Lifespan: Why We Age—and Why We Don’t Have To

By David A. Sinclair, Matthew D. LaPlante

Posted in Center for Environmental Genetics | Comments Off on Lifespan: Why We Age—and Why We Don’t Have To

Study of Hospitalizations in Canada Quantifies Benefit of COVID-19 Vaccine to Reduce Death, ICU Admissions

Pasted below is a reasonable study (summarized in Medscape) that shows us how tremendously dramatic the effects of the COVID-19 vaccine were in terms of saving lives during the pandemic. One caveat not mentioned, however, is the (relatively small, but substantial) number of serious side-effects (including several deaths) that were experienced by some adults and some children immediately after receiving one or another of these various vaccines and boosters. ☹

DwN

Study in Canada Quantifies Benefit of COVID-19 Vaccine to Reduce Death, ICU Admissions

Richard Mark Kirkner

May 08, 2023

A cohort study of more than 1.5 million hospital admissions in Canada through the first 2 years of the COVID-19 pandemic has quantified the benefit of vaccinations. Unvaccinated patients were found to be up to 15 times more likely to die from COVID-19 than fully vaccinated patients.

Investigators analyzed 1.513 million admissions at 155 hospitals across Canada from March 15, 2020, to May 28, 2022. The study included 51,679 adult admissions and 4035 pediatric admissions for COVID-19. Although the share of COVID-19 admissions increased in the fifth and sixth waves (December 26, 2021, to March 19, 2022 — after the full vaccine rollout) to 7.73%, from 2.47% in the previous four waves, the proportion of adults admitted to the intensive care unit (ICU) was significantly lower, at 8.7% vs 21.8% (odds ratio [OR], 0.35; 95% confidence interval [CI], 0.32 – 0.36).
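As a rough plausibility check on those numbers (my own back-of-the-envelope sketch, not from the paper; it treats the quoted percentages as simple crude proportions, whereas the study's published odds ratios come from the patient-level data):

```python
def crude_odds_ratio(p1: float, p2: float) -> float:
    """Crude odds ratio for group 1 relative to group 2, given two proportions."""
    return (p1 / (1.0 - p1)) / (p2 / (1.0 - p2))

# Adult ICU admissions: waves 5-6 (8.7%) vs waves 1-4 (21.8%), as quoted above.
print(round(crude_odds_ratio(0.087, 0.218), 2))   # -> 0.34, close to the reported OR of 0.35

# Pediatric ICU admissions quoted later in the article: 9.4% vs 18.1%.
print(round(crude_odds_ratio(0.094, 0.181), 2))   # -> 0.47, matching the reported OR of 0.47
```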


“The good thing about waves five and six was we were able to show the COVID cases tended to be less severe, but on the other hand, because the disease in the community was so much higher, the demands on the healthcare system were much higher than the previous waves,” study author Charles Frenette, MD, director of infection prevention and control at McGill University Health Center in Montreal and chair of the study’s adult subgroup, told Medscape Medical News. “But here we were able to show the benefit of vaccinations, particularly the boosting dose, in protecting against those severe outcomes.”

The study, published April 20 in JAMA Network Open, used the Canadian Nosocomial Infection Surveillance Program (CNISP) database, which collects hospital data across Canada. It was activated in March 2020 to collect details on all COVID-19 admissions, co-author Nisha Thampi, MD, chair of the study’s pediatric subgroup, told Medscape.

“We’re now over 3 years into the pandemic, and CNISP continues to monitor COVID-19 as well as other pathogens in near real time,” said Thampi, an associate professor and infectious disease specialist at Children’s Hospital of Eastern Ontario in Ottawa.

“That’s a particular strength of this surveillance program as well. We would see these data on a biweekly basis, and that allows us to implement timely protection and action.”

Tracing Trends Over Six Waves

The study tracked COVID-19 hospitalizations during six waves. The first lasted from March 15 to August 31, 2020, and the second lasted from September 1, 2020, to February 28, 2021. The wild-type variant was dominant during both waves. The third wave lasted from March 1 to June 30, 2021, and was marked by the mixed Alpha, Beta, and Gamma variants. The fourth wave lasted from July 1 to December 25, 2021, when the Delta variant was dominant. The Omicron variant dominated during waves five (December 26, 2021, to March 19, 2022) and six (March 20 to May 28, 2022).

Hospitalizations reached a peak of 14,461 in wave five. ICU admissions, however, peaked at 2164 during wave four, and all-cause deaths peaked at 1663 during wave two.

The investigators also analyzed how unvaccinated patients fared in comparison with the fully vaccinated and the fully vaccinated-plus (that is, patients with one or more additional doses). During waves five and six, unvaccinated patients were 4.3 times more likely to end up in the ICU than fully vaccinated patients and were 12.2 times more likely than fully vaccinated-plus patients. Likewise, the rate for all-cause in-hospital death for unvaccinated patients was 3.9 times greater than that for fully vaccinated patients and 15.1 times greater than that for fully vaccinated-plus patients.

The effect of vaccines emerged in waves three and four, said Frenette. “We started to see really, really significant protection and benefit from the vaccine, not only in incidence of admission but also in the incidence of complications of ICU care, ventilation, and mortality.”

Results for pediatric patients were similar to those for adults, Thampi noted. During waves five and six, overall admissions peaked, but the share of ICU admissions decreased to 9.4% from 18.1%, which was the rate during the previous four waves (OR, 0.47).


“What’s important is how pediatric hospitalizations changed over the course of the various waves,” said Thampi.

“Where we saw the highest admissions during the early Omicron dominance, we actually had the lowest numbers of hospitalizations with death and admissions into ICUs.”

Commenting on the study for Medscape, David Fisman, MD, MPH, a professor of epidemiology at the University of Toronto, said, “This is a study that shows us how tremendously dramatic the effects of the COVID-19 vaccine were in terms of saving lives during the pandemic.” Fisman was not involved in the study.


But CNISP, which receives funding from Public Health Agency of Canada, could do more with the data it collects to better protect the public from COVID-19 and other nosocomial infections, Fisman said.

“The first problematic thing about this paper is that Canadians are paying for a surveillance system that looks at risks of acquiring infections, including COVID-19 infections, in the hospital, but that data are not fed back to the people paying for its production,” he said.

“So, Canadians don’t have the ability to really understand in real time how much risk they’re experiencing via going to the hospital for some other reason.”

The study was independently supported. Frenette and Thampi report no relevant financial relationships. Fisman has disclosed financial relationships with Pfizer, AstraZeneca, Sanofi, Seqirus, Merck, the Ontario Nurses Association and the Elementary Teachers’ Federation of Ontario.

JAMA Netw Open. Published April 20, 2023. Full text

Richard Mark Kirkner is a medical journalist based in the Philadelphia area.

COMMENT:
Dan:
The proper question is whether vaccination improves outcomes versus taking an alternate science-based therapy (such as Ivermectin). Comparing vaccinated people to those who do nothing — is stacking the deck.

In addition, honestly comparing outcomes for those who got injected versus those who were already infected — would be very telling (e.g., https://www.foxnews.com/health/prior-covid-infection-protection-vaccines-new-study; 20 Feb 2023).

Ignoring the fact that there were extraordinary safety shortcuts taken to approve these injections — which would partly explain the results (e.g., https://www.floridahealth.gov/newsroom/2023/02/20230215-updated-health-alert.pr.html; 15 Feb 2023) is scientific malpractice, in my opinion. Supporting a political system that is advocating an unscientific solution to a widespread medical matter is scientific incompetence.

Ignoring the downside of these injections (e.g., https://thepricklypear.org/serious-harms-of-the-covid-19-vaccine-a-systematic-review/?doing_wp_cron=1684108382.8401489257812500000000; 10 Apr 2023) is an example of promoting politics, not science.

The bottom line is that multiple scientific protocols were aborted or abused regarding the COVID protocols, and no scientist should say that any of this is acceptable.

Regards, John
Executive Director, Alliance for Wise Energy Decisions (AWED), Morehead, NC

Posted in Center for Environmental Genetics | Comments Off on Study of Hospitalizations in Canada Quantifies Benefit of COVID-19 Vaccine to Reduce Death, ICU Admissions

Neurons that connect without synapses

Evolutionarily, which animal came first? Sponges or comb jellies? Concerning the evolution of animal nervous systems, it had been quite well accepted that all neurons connect to each other with synapses, and that the nervous system arose only once in evolutionary history and was never lost. But this “consensus” opinion has now been challenged [see attached article & editorial]. A lay description of this study was posted in GEITP on 24 Apr 2023; attached is the scientific article.

Authors provide new information on the structure of the nervous system of ctenophores — marine invertebrates commonly known as comb jellies. All living animals belong to one of five groups: Porifera (sponges) and Placozoa (small, disc-shaped animals distributed in warm ocean water) lack neurons; Ctenophora (comb jellies) and Cnidaria (corals, medusa jellyfish, siphonophores, and others) have “nerve nets” (nervous systems with neurons arranged into diffuse networks); and Bilateria (which contains most animal species, including humans and other vertebrates, arthropods, and many other invertebrates; these animals have left-right, head-tail, and dorsal-ventral body axes) includes some animals with a nerve net, but most have a central nervous system.

The consensus explanation for this nervous system diversity is that these organisms represent ancestral steps in the increase of nervous system complexity (i.e., sponges diverged first from other animals, before the origin of the nervous system, and nervous system complexity increased in a ratchet-like manner in other animals). Then, surprisingly, the first sequences of Porifera and Placozoa genomes were found to contain genes that were previously thought to be specific to nervous system function. A closer look in placozoans found that they have gland cells that secrete neurosecretory components.

More recently, single-cell expression analyses revealed that some sponge cells communicate through structures that resemble synapses; this made it clear that neuron morphology and neuron signaling molecules have different distributions across animals. Consequently, traditional hypotheses about the earliest relationships in the animal phylogeny have been challenged. Some phylogenomic analyses support Porifera as the sister group to all other animals; but there is growing evidence that Ctenophora is the sister group to all other animals — indicating that some nervous system features arose independently in ctenophores or that some nervous system components were lost in sponges. This evidence further challenges the historically accepted consensus of stepwise increments in nervous system complexity through the course of animal evolution.

The current study [attached] goes right to the heart of these questions. Authors report that the ctenophore nerve net is unlike the nervous systems of other animals. Authors used serial block-face scanning electron microscopy to make 3-dimensional ultrastructural reconstructions of a ctenophore subepithelial nerve net; they observed that this nerve net is not formed by neurons connecting to each other with synapses. Instead, the processes of the neurons are directly fused to each other, forming a syncytial continuum. There are synapses elsewhere, including where the nerve net connects to effector cells, but the subepithelial nerve net itself is not formed with synaptic connections. The question, then, is no longer whether all animal nervous systems conform to the neuron doctrine or to the reticulate theory — but rather which animals conform to which theory.   😊

DwN

Science, 21 Apr 2023; 380: 293-297 & editorial pp 241-242

Posted in Center for Environmental Genetics | Tagged , | Comments Off on Neurons that connect without synapses

A.I. language models open a potential Pandora’s box of medical research fraud

This recent online article (about ChatGPT; summarized from an article published in the open-access journal Patterns) — furthers the cause for concern about this AI-based program. Specifically, if Drug A is maliciously “claimed” or “shown” to be better than Drug B for Disease XYZ, when this is clearly not true, it raises more than “medical research concern.” It raises the potential for serious adverse drug reactions (ADRs), including death (caused by a physician unknowingly acting on the fabricated claim while treating a patient), due to misuse of this AI program. ☹

DwN

MARCH 14, 2023

A.I. language models open a potential Pandora’s box of medical research fraud

by Justin Jackson , Medical Xpress


Medical student and researcher Faisal Elali of the State University of New York Downstate Health Sciences University and medical scribe and researcher Leena Rachid from the New York-Presbyterian/Weill Cornell Medical Center wanted to see if artificial intelligence could write a fabricated research paper and then investigate how best to detect it.

Artificial intelligence is an increasingly valuable and vital part of scientific research. It is used as a tool to analyze complicated data sets, but it is never used to generate the actual paper for publication. AI-generated research papers, on the other hand, can look convincing — even when based on an entirely fabricated study. But exactly how convincing?

In a paper published in the open-access journal Patterns, the research duo demonstrated the feasibility of fabricating a research paper using ChatGPT, an AI-based language model. Simply by asking, they were able to have ChatGPT produce a number of well-written, entirely made-up abstracts. A hypothetical fraudster could then submit these fake abstracts to multiple journals seeking publication. If accepted, the same process could then be used to write an entire study with false data, nonexistent participants and meaningless results. However, it could appear legitimate, especially if the subject is particularly abstract or not screened by an expert in the specific field.

In a previous experiment cited in the current paper, humans were given both human-created and AI-generated abstracts to consider. In that experiment, humans incorrectly identified 32% of the AI-generated research abstracts as real and 14% of the human-written abstracts as fake.

The current research team decided to test their ChatGPT fabricated study against three online AI detectors. The texts were overwhelmingly identified as AI-generated, suggesting the adoption of AI detection tools by journals could be a successful diverter of fraudulent applications. However, when they took the same text and ran it through a free, online, AI-powered rephrasing tool first — the consensus unanimously flipped to “likely human,” suggesting we need better AI detection tools.

Actual science is hard work, and communicating the details of that work is a crucial aspect of science requiring substantial effort. But any mostly hairless ape can string sensible sounding words together given enough time and coffee — as the writer of this article can firmly attest. Creating a fake study with enough detail to seem credible would take tremendous effort, requiring hours of researching how best to sound believable, and might be too tedious a task for someone interested in malicious mischief. With AI completing the task in minutes, that mischief could become an entirely achievable objective. As the researchers point out in their paper, that mischief could have terrible consequences.

They give an example of a legitimate study that supports the use of drug A over drug B for treating a medical condition. Now, suppose a fabricated study makes the opposite claim and is not detected (as a side note, even if it is detected, clawing back citations and reprints of retracted studies is notoriously difficult). It could impact subsequent meta-analyses and systematic reviews of these studies — studies that guide health care policies, standards of care and clinical recommendations.

Beyond the simple-mischief motive, the authors of the paper point to the pressure on medical professionals to quickly produce a high volume of publications to gain research funding or entry into higher career positions. In part, they point out that the United States Medical Licensing Examination recently switched from a graded exam to a pass/fail model, meaning ambitious students rely more heavily on published research to distinguish them from the pack. This raises the stakes for a trustworthy AI detection system — to remove potentially fraudulent medical research that could pollute the publishing environment, or, worse still, to keep practitioners who submit fraudulent papers from practicing on patients.

The goal of AI language models has long been to produce texts that are indistinguishable from human text. That we need AI that can detect when a human is using AI to produce fraudulent work indistinguishable from reality — should not come as a surprise. What might be surprising is just that we may need it sooner than later!

Faisal R. Elali et al, AI-generated research paper fabrication and plagiarism in the scientific community, Patterns (2023).

DOI: 10.1016/j.patter.2023.100706

Posted in Center for Environmental Genetics | Comments Off on A.I. language models open a potential Pandora’s box of medical research fraud

18 Spectacularly Wrong Predictions Made around the First Earth Day, 1970

By the way, HAPPY EARTH DAY. This article is from 2 years ago — but nothing has changed. Nonscientific hysteria is running as high as ever. ☹☹☹

DwN

18 Spectacularly Wrong Predictions Made around the Time of the First Earth Day in 1970; Expect More This Year

By Mark J. Perry

April 21, 2021

Tomorrow is Earth Day 2021 and marks the 51st anniversary of Earth Day, so it’s time for my annual CD post on the spectacularly wrong predictions that were made around the time of the first Earth Day in 1970…..

In the May 2000 issue of Reason Magazine, award-winning science correspondent Ronald Bailey wrote an excellent article titled “Earth Day, Then and Now: The planet’s future has never looked better. Here’s why” to provide some historical perspective on the 30th anniversary of Earth Day. In that article, Bailey noted that around the time of the first Earth Day in 1970, and in the years following, there was a “torrent of apocalyptic predictions” and many of those predictions were featured in his Reason article. Well, it’s now the 51st anniversary of Earth Day, and a good time to ask the question again that Bailey asked 21 years ago: How accurate were the predictions made around the time of the first Earth Day in 1970? The answer: “The prophets of doom were not simply wrong, but spectacularly wrong,” according to Bailey. Here are 18 examples of the spectacularly wrong predictions made around 1970 when the “green holy day” (aka Earth Day) started:

1. Harvard biologist George Wald estimated that “civilization will end within 15 or 30 years [by 1985 or 2000] unless immediate action is taken against problems facing mankind.”

2. “We are in an environmental crisis that threatens the survival of this nation, and of the world as a suitable place of human habitation,” wrote Washington University biologist Barry Commoner in the Earth Day issue of the scholarly journal Environment.

3. The day after the first Earth Day, the New York Times editorial page warned, “Man must stop pollution and conserve his resources, not merely to enhance existence but to save the race from intolerable deterioration and possible extinction.”

4. “Population will inevitably and completely outstrip whatever small increases in food supplies we make,” Paul Ehrlich confidently declared in the April 1970 issue of Mademoiselle. “The death rate will increase until at least 100-200 million people per year will be starving to death during the next ten years [by 1980].”

5. “Most of the people who are going to die in the greatest cataclysm in the history of man have already been born,” wrote Paul Ehrlich in a 1969 essay titled “Eco-Catastrophe!” “By…[1975] some experts feel that food shortages will have escalated the present level of world hunger and starvation into famines of unbelievable proportions. Other experts, more optimistic, think the ultimate food-population collision will not occur until the decade of the 1980s.”

6. Ehrlich sketched out his most alarmist scenario for the 1970 Earth Day issue of The Progressive, assuring readers that between 1980 and 1989, some 4 billion people, including 65 million Americans, would perish in the “Great Die-Off.”

7. “It is already too late to avoid mass starvation,” declared Denis Hayes, the chief organizer for Earth Day, in the Spring 1970 issue of The Living Wilderness.

8. Peter Gunter, a North Texas State University professor, wrote in 1970, “Demographers agree almost unanimously on the following grim timetable: by 1975 widespread famines will begin in India; these will spread by 1990 to include all of India, Pakistan, China, and the Near East, Africa. By the year 2000, or conceivably sooner, South and Central America will exist under famine conditions….By the year 2000, thirty years from now, the entire world, with the exception of Western Europe, North America, and Australia, will be in famine.”

Note: The prediction of famine in South America is partly true, but only in Venezuela and only because of socialism, not for environmental reasons.

9. In January 1970, Life reported, “Scientists have solid experimental and theoretical evidence to support…the following predictions: In a decade, urban dwellers will have to wear gas masks to survive air pollution…by 1985 air pollution will have reduced the amount of sunlight reaching earth by one half….”

10. Ecologist Kenneth Watt told Time that, “At the present rate of nitrogen buildup, it’s only a matter of time before light will be filtered out of the atmosphere and none of our land will be usable.”

11. Barry Commoner predicted that decaying organic pollutants would use up all of the oxygen in America’s rivers, causing freshwater fish to suffocate.

12. Paul Ehrlich chimed in, predicting in 1970 that “air pollution…is certainly going to take hundreds of thousands of lives in the next few years alone.” Ehrlich sketched a scenario in which 200,000 Americans would die in 1973 during “smog disasters” in New York and Los Angeles.

13. Paul Ehrlich warned in the May 1970 issue of Audubon that DDT and other chlorinated hydrocarbons “may have substantially reduced the life expectancy of people born since 1945.” Ehrlich warned that Americans born since 1946…now had a life expectancy of only 49 years, and he predicted that if current patterns continued this expectancy would reach 42 years by 1980 when it might level out. (Note: According to the most recent CDC report, life expectancy in the US is 78.6 years).

14. Ecologist Kenneth Watt declared, “By the year 2000 if present trends continue, we will be using up crude oil at such a rate…that there won’t be any more crude oil. You’ll drive up to the pump and say, `Fill ‘er up, buddy,’ and he’ll say,`I am very sorry, there isn’t any.’”

Note: Global oil production last year at about 95M barrels per day (bpd) was double the global oil output of 48M bpd around the time of the first Earth Day in 1970.

15. Harrison Brown, a scientist at the National Academy of Sciences, published a chart in Scientific American that looked at metal reserves and estimated that humanity would totally run out of copper shortly after 2000. Lead, zinc, tin, gold, and silver would be gone before 1990.

16. Sen. Gaylord Nelson wrote in Look that, “Dr. S. Dillon Ripley, secretary of the Smithsonian Institute, believes that in 25 years, somewhere between 75 and 80 percent of all the species of living animals will be extinct.”

17. In 1975, Paul Ehrlich predicted that “since more than nine-tenths of the original tropical rainforests will be removed in most areas within the next 30 years or so [by 2005], it is expected that half of the organisms in these areas will vanish with it.”

18. Kenneth Watt warned about a pending Ice Age in a speech. “The world has been chilling sharply for about twenty years,” he declared. “If present trends continue, the world will be about four degrees colder for the global mean temperature in 1990, but eleven degrees colder by the year 2000. This is about twice what it would take to put us into an Ice Age.”

MP: Let’s keep those spectacularly wrong predictions from the first Earth Day 1970 in mind when we’re bombarded again this year with dire predictions of “gloom and doom” and “existential threats” due to climate change. And let’s think about the question posed by Ronald Bailey in 2000: What will Earth look like when Earth Day 60 rolls around in 2030? Bailey predicts a much cleaner, and much richer future world, with less hunger and malnutrition, less poverty, and longer life expectancy, and with lower mineral and metal prices. But he makes one final prediction about Earth Day 2030: “There will be a disproportionately influential group of doomsters predicting that the future – and the present – never looked so bleak.” In other words, the hype, hysteria, and spectacularly wrong apocalyptic predictions will continue, promoted by virtue-signaling “environmental grievance hustlers” like AOC, who said four years ago that we have “only 12 years left to stop the worst impacts of climate change.”

COMMENT:
Hi Dan:
Since you bring up the topic of Earth Day and the outlandish predictions around 1970 about the future — please see my commentary [click on the URL] — even though my commentary was published 22 April 2014. EC. “We Need a New Earth Day | Cato Institute”

Posted in Center for Environmental Genetics | Comments Off on 18 Spectacularly Wrong Predictions Made around the First Earth Day, 1970

Is glyphosate toxic or carcinogenic? This Latest Review, full of sound and fury, seems to signify nothing

GEITP has had previous discussions about the herbicide, glyphosate. The attached review is the latest summary of where this controversial chemical stands. Glyphosate is the most applied agricultural chemical worldwide and has become nearly ubiquitous throughout the environment. Glyphosate is an effective herbicide because it disrupts the shikimate pathway, which is responsible for the synthesis of essential amino acids in fungi, plants, and microorganisms.

Given that there is no known target for glyphosate in higher organisms, including vertebrates, its toxicity to humans and other animals is heavily debated — especially after the (likely erroneous) 2015 IARC ruling that “glyphosate is carcinogenic.” Today, a growing body of literature — cell culture, intact animal, and epidemiological — shows no conclusive evidence for any toxicity of glyphosate across animal species. With the application of glyphosate increasing globally, the authors felt it was important to discuss these reports, to enable a broader conversation on glyphosate toxicity and its impact on human and environmental health. The authors (from the University of Calgary, Canada) summarize the recent glyphosate literature and discuss its implications [see attached].

[Figure: the chemical structure of glyphosate]

The chemical structure of N-(phosphonomethyl)glycine (glyphosate; Figure) is incredibly simple. Glyphosate, the active ingredient in glyphosate-based herbicides (GBHs), is the most used herbicide globally. Today, ~280 million pounds are applied annually in the United States alone. First discovered by a Monsanto scientist, glyphosate was initially under patent as “RoundUp” (Monsanto), a broad-spectrum GBH containing glyphosate as the active ingredient — in a mixture with water and inert adjuvants. GBHs were quickly adopted into agricultural practices due to their perceived low toxicity to animals. Specifically, glyphosate inhibits 5-enolpyruvylshikimate-3-phosphate synthase (EPSPS), an enzyme unique to fungi, plants, and microorganisms.

Because higher organisms lack this enzyme, it was assumed that this chemical would be safe in animals (including humans). EPSPS is critical for the synthesis of the essential aromatic amino acids — tryptophan, phenylalanine, and tyrosine — as well as downstream secondary metabolites; this is what makes glyphosate an especially effective herbicide.

Chemist Dr. Henri Martin invented glyphosate in 1950, but its herbicidal importance was not fully realized until the 1970s, when Monsanto patented the glyphosate mixture RoundUp. Monsanto monopolized the glyphosate market until 2000, when their original patent expired. Since 2000, other companies have capitalized on glyphosate’s effectiveness; today, more than 750 GBHs are sold in the U.S. alone. These mixtures contain glyphosate as the active ingredient — most often in isopropyl ammonium salt form — constituting 40%–60% of most GBH products. “Inert” ingredients such as water, heavy metals (e.g., arsenic, cobalt), and a proprietary mixture of surfactants, commonly of the polyoxyethylene amine family, comprise the remaining volume.

Authors summarize the cell culture, laboratory animal, and clinical data — and they find nothing conclusive. The International Agency for Research on Cancer (IARC) suggests glyphosate is “a probable carcinogen,” whereas both the European Food Safety Authority (EFSA) and the U.S. Environmental Protection Agency (EPA) disagree, suggesting glyphosate is not a concern to human and animal health. 😊😊
DwN
Toxicol Sci Mar 2023; 192: 131–140

COMMENT:
Dan,
Thanks for the review; this topic is very familiar to me, because as a Panel member of the EFSA pesticide panel (PPR Panel), I was involved in first-tier glyphosate evaluation last year. The final EFSA assessment should be out in July this year. The case is now in the hands of higher-level scientific bodies at EFSA, and the “verdict” is eagerly expected.
Personally, or as a PPR Panel member, I think that the huge body of research has not been able to identify any really significant toxicity in humans or animals, let alone something more serious such as cancer. It was somewhat depressing to go through hundreds of articles on glyphosate and see the questionable quality of most articles.
Best wishes, OP

Posted in Center for Environmental Genetics | Comments Off on Is glyphosate toxic or carcinogenic? This Latest Review, full of sound and fury, seems to signify nothing

Call for Papers & Thematic Issues—Current Microwave Chemistry

With more than 25,000 “open-access, rapid-publication, fake-reviewer, predatory journals” having been created since about 2012 — and the number continues to explode — you can imagine the difficulty in trying to come up with a new name (and initials that have not already been used) for your newly planned journal. 😊😉

Why so many new journals? The best answer we can give is the example (from about seven years ago) of a family of four living in a small flat in Turkey: in one year — by offering “reasonable” “page charges” to post your manuscript (on any topic) online — this “publisher-company” family had “accepted” and published enough papers to clear more than $1 million in profits, just working from laptops in their kitchen…!! (i.e., the best answer is … VERY L-U-C-R-A-T-I-V-E…!!)

So, … the names of new journals can be as similar as Journal of Biological and Immunological Chemistry (JBIC), International Journal of Immunological and Biological Chemistry (IJIBC), World Journal of Biological Immunology (WJBI), World Journal of Biological Immunology and Chemistry (WJBIC), Global Journal of Biological Immunological Chemistry (GJBIC), Journal of Molecular Biology and Chemical Immunology (JMBCI), Zelenskyy Journal of Biology and Immunology in Wartime (ZJBIW) … and … and … [you can catch my drift] — just as long as the title and initials differ from all 25,000 others.

Well, this week Professor Vasiliou received an email request [see below] to “publish his next paper (on any topic)” in the journal of Current Microwave Chemistry (CMC).

We checked and the journal name “Trends in Microwave and Toaster Oven Chemistry” (TMTOC) has already been taken. ☹ Therefore, as a follow-up we are considering the title of a novel journal, “Frontiers in Egg-Beaters, Potato-Peelers and the Kitchen Sink Chemistry” (FEBPPKSC). After checking carefully, the International Committee on Fake Predatory Journals (ICFPJ) has accepted this new journal name as “one not yet taken.” 😉😉😉

DwN

Comments continue to pour in — on the topic of the many thousands of predatory open-access (questionably peer-reviewed) journals that have (explosively) proliferated during this past decade. ☹ Dr. Randolph describes his point of view from the clinical practice of medicine. Fred Guengerich describes his consulting with clients in court, and those judges and lawyers who sometimes rely on “peer-reviewed” publications. Olavi Pelkonen contrasts all these new and untested journals with the older (more reliable/trustworthy) established journals, while John Reichard confirms that the Case Rep Obstetr Gynecol Reprod (CROGR) journal is clearly an example of a predatory journal, whereas Curr Microwave Chem lies in the grey zone. ☹ 😊

DwN
From: Reichard, John
Sent: Saturday, January 21, 2023 4:20 PM
To: Nebert, Daniel
Subject: RE: Call for Papers & Thematic Issues—Current Microwave Chemistry

Dan, I certainly agree that “Case Rep Obstetr Gynecol Reprod (crogr)” is without a doubt a predatory journal. (Reproductive … what??) I wonder about the poor schmucks who are listed on the editorial board. I wonder if they know their names are listed, or if they get paid for “serving” on the board. I’ve heard of a few instances in which Editorial Board members never even knew they were listed for a particular predatory journal!

From: Olavi Pelkonen
Sent: Sunday, January 22, 2023 12:14 AM

Dan, I have not counted (the number I’ve received of) these suspicious requests to publish, because I delete them outright, but it must be >10 per day. Then there are publishers like Frontiers and MDPI (both having headquarters in Switzerland), which publish generally valuable material, but sometimes pretty poor science or biased material (e.g., on endocrine disruptors) gets through and is published. One of the problems of these open-access journals is the proliferation of articles that barely pass the limit of “the least amount of publishable information” and just add to the CV of the authors, which may of course be significant to them, because it will advance their careers in their own countries.
In essence, from the point of view of an individual researcher, the “old respectable” publishing houses and journals are the mainstream sources of useful information — even if they are usually painstakingly slow and methodical in their reviewing routines. ☹

Olavi

From: Guengerich, Frederick P < Sent: Saturday, January 21, 2023 4:09 PM Dan, there is another issue that you may or may not be aware of. I have done some consulting work on legal cases, mostly patents (one tort), and journal articles get brought in, as “evidence”….. One measure of their reliability (to the judge & other laymen) is whether they are peer-reviewed. Some of these journals at least claim to be, so there is room for trickery here by nasty lawyers. The other side (i.e., me) has to explain why this is a bad journal and can’t be trusted — if possible. Also, the new eLife reviewing plan is really going to create a can of worms, if it catches on. Surely you have heard about this — essentially, this plan eliminates peer review. At least lawyers & judges understand that data (accepted for publication but has not yet been peer-reviewed) in bioRXve is not a real journal — the other side tried this trick in a case I was helping with. F. Peter Guengerich, Ph. D. From: Dave Sent: Saturday, January 21, 2023 3:59 PM Dan, As a practicing physician, I am on the receiving end of “scientific evidence” in journal publications. I have to pull and review articles pertaining to a unique clinical condition, and then assess the validity of the study and conclusions. Recently I had a case of non-Hodgkin lymphoma (NHL) of the parotid gland which the treating oncologist from a very prestigious western university med school declared “was likely work related,” due to a possible benzene exposure. I reviewed the article. First, any substantial benzene exposure was unprovable. Second, there are over 60 types of NHL. Third, the publication addressed only eight of the 64 types — and did not include the rare parotid-associated NHL. Fourth, the patient’s father died of parotid-associated NHL (strongly suggesting a heritable disorder). All “peer-reviewed” and published articles must be cautiously examined. What is in the abstract may not be supported by the actual data. Statistical significance remains important. Everyone wants to pull a fast one, but you know this much better than I do. David Randolph, MD From: Nebert, Daniel Sent: Saturday, January 21, 2023 3:19 PM John Reichard [below] points out that this appears to be a “legitimate” journal, but there are questions about how honest and straightforward their policies are. Doron Lancet [below] laments that “these emails are copious and annoying and these might seem humorous on the surfake, but we should be weeping, rather than doing nothing — because predatory journals are helping drag “high-quality” science into the sewer. I probably receive between 20 and 40 emails PER DAY (most of them in my junk email folder) — ranging in journal topics from mathematics, physics and astronomy to clinical medicine, nursing student education and social sciences. In fact, the latest one [see below] just arrived while I was working on this email response. Note that this CASE REP OBSTET GYNECOL REP (crogr) “invitation to submit a manuscript came on 19 Jan 2023, with a request to submit the paper “on or before 27 Jan 2023,” i.e., eight days(!!) to prepare a manuscript. [This very short turnaround time is one of the factors that determine a predatory journal from a normal scientific journal. 
Other factors include: [a] shady names of the email server used to send the message; [b] incorrect English grammar in the email; [c] dubious descriptions of your "expertise" (in the email below, am I an expert in case reports in obstetrics & gynecology?); [d] the sending address itself, which is sometimes simply a "gmail" or "yahoo" account; and [e] pleading along the lines of "We are short by one paper before we can publish the next issue; please help us out…"]. ☹☹

DwN

From: CASE REP OBSTET GYNECOL REP
Sent: Thursday, January 19, 2023 5:31 PM
To: Nebert, Daniel
Subject: Issue your expertise work and research

Dear Dr. Daniel W Nebert

We would be extremely grateful if you would submit your article to Case Reports in Obstetrics, Gynecology & Reproductive. Note that "Eliminated Open Access Charge" will be assured if the article is submitted on or before 27th January 2023.

articles@sciencerepository.net

Please submit your files to the mentioned email. Do contact us for further information.

Thanks and Regards,
Albert Petrov
Editorial Assistant

From: Doron Lancet
Sent: Thursday, January 19, 2023 4:12 PM

I laugh, but in fact I should weep. Can't the Artificial Intelligence (AI) giants find a way to filter all these predatory journals out? ---Doron

From: Reichard, John
Sent: Thursday, January 19, 2023 11:10 AM

Hi Dan: It's a hokey name for sure, but I think Current Microwave Chemistry is a proper journal, and microwave chemistry is a genuine field of chemistry (e.g., https://en.wikipedia.org/wiki/Microwave_chemistry). The journal falls under the umbrella of Bentham Science publishing. Under the Guidelines for Authors link, the journal states: "PAGE CHARGES: No page charges will be levied to authors for the publication of their article. However, the authors may decide for some paid-for editorial services such as open access publication and/or a faster overall publication for their article(s)." [i.e., it's like Frontier Airlines, you can pay for upgrades. 🙂] Also, this journal is not listed in the most recent list of predatory journals, although it is possible that predatory journal names change as fast as the spoof calls that I get on my cell phone. They also have pretty extensive misconduct and fabrication policies. The impact factor is 3rd quartile per "Web of Science" — not great, but not as bad as I would expect if it were a predatory journal.

Speaking only for myself, I usually know where I'm going to publish when I start writing, or at least I do some investigation to check impact factors, etc., as I am writing. So, my question is: who is actually responding to emails like the one you forwarded, and who just so happens to have a spare manuscript lying around to submit? What are the clear telltale signs of a predatory journal, aside from unreasonable publishing costs, a short publishing history, and a low impact factor?

John
John F. Reichard,

From: Nebert, Daniel
Sent: Wednesday, January 18, 2023 3:58 PM

From: Vasiliou, Vasilis
Sent: Tuesday, January 17, 2023 6:09 PM
Subject: Fwd: Call for Papers & Thematic Issues---Current Microwave Chemistry

🤣🤣🤣🤣🤣🤣

Sent from my iPhone

Begin forwarded message:

From: Current Microwave Chemistry
Date: January 17, 2023 at 9:07:05 PM EST
Subject: Call for Papers & Thematic Issues—Current Microwave Chemistry

(Indexed in Emerging Sources Citation Index by Clarivate-ESCI)

Dear Dr. BLABLA

This is a call for papers & thematic issues for the journal Current Microwave Chemistry (CMIC). The journal is an international peer-reviewed journal that publishes important contributions describing advances in the use of the microwave in the fields of chemistry, biology, medicine, biomedical science, and engineering.
Current Microwave Chemistry is indexed by Emerging Sources Citation Index (ESCI), Chemical Abstracts Service/SciFinder, ChemWeb, Google Scholar, J-Gate, CNKI Scholar, Suweco CZ, EBSCO and Ulrich’s Periodicals Directory.
Submission of an article or abstract may be undertaken through our online system or via email. All submitted manuscripts are peer-reviewed prior to a possible decision on acceptance for publication.
If you are interested in guest editing a thematic issue, then please submit your proposal by return email. Thematic issue proposals should contain title, aims and scope of the theme issue, with the list of contributors (with corresponding authors having preferable h-index 10 or above) and tentative manuscript titles.
Author’s/Guest Editors’ Benefits:

There are no Article Processing Charges.
Articles submitted by March 31st, 2023, will be published as Full Text, without any additional fee.
Quick processing and publication of the submitted papers. Articles will be published online within 40 days of final acceptance.
Guest Editors will receive an honorarium of US$600 per thematic issue organized by them.
Guest Editors will receive a free online access to the contents of the journal for the volume in which their thematic issue will publish.
The corresponding authors will receive a complimentary one-year online subscription to the journal’s volume in which their article is published.
30% discount on the single-issue cost to authors on the purchase of issue(s) in which their article is published.
Multiple issue copies at discounted rates.
In case of your interest in any capacity, please provide us your consent via return email, so that we may guide you further accordingly.
We look forward to receiving your interest in article or thematic issue publication.
Nuzhat Gul
Sr. Editorial Manager
Current Microwave Chemistry


Machine learning and artificial intelligence in physiologically based pharmacokinetic (PBPK) modeling

Artificial Intelligence (AI) and Machine Learning (ML) are two rapidly advancing fields of research that will clearly play important roles in the study of gene-environment interactions (GxE), as well as in almost every other area of scientific inquiry. Hence, GEITP is introducing this topic here.

AI is a subfield of computer science that seeks to develop machines or computational approaches able to perform various cognitive tasks — at a level similar to (or even exceeding) human intelligence (and by far exceeding the intelligence of politicians).

ML (a subset of AI) applies mathematical or computational algorithms to perform complex tasks by automatically learning from past data or knowledge. Three main types of ML methods are: [a] supervised learning (train a model on known input and output data, with the goal of predicting new outputs from new inputs); [b] unsupervised learning (allow the model to cluster unlabeled data in meaningful ways, identifying intrinsic patterns or structures without known input-output relationships); and [c] reinforcement learning (a feedback-based approach in which optimal actions within an environment are learned so as to maximize a reward).
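
As an informal illustration of the first two categories, here is a minimal Python sketch using scikit-learn on synthetic data; the descriptor matrix, outcome variable, and all numerical values are hypothetical and are not taken from the attached review.

# Minimal sketch of supervised vs. unsupervised learning on synthetic data (hypothetical example).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.cluster import KMeans
rng = np.random.default_rng(0)
# Supervised learning: known inputs (X) and known outputs (y); then predict outputs for new inputs.
X = rng.normal(size=(200, 5))                    # e.g., five hypothetical molecular descriptors
y = 2.0 * X[:, 0] - 1.0 * X[:, 3] + rng.normal(scale=0.1, size=200)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print("Predicted outputs for new inputs:", model.predict(rng.normal(size=(3, 5))))
# Unsupervised learning: no outputs; let the algorithm find structure (clusters) in the inputs alone.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("Cluster labels for the first ten samples:", clusters[:10])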

A newer class of ML, called deep learning, enables one to create more complex models with a layered, neural-network architecture loosely modeled on the human brain. ML and deep-learning algorithms constitute the essential building blocks of AI systems; these algorithms provide a data-driven approach to the evaluation of chemical/drug ADME (absorption, distribution, metabolism, and excretion) and toxicity properties.
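
As a concrete (and purely illustrative) sketch of that idea, the following Python snippet trains a small feed-forward neural network, scikit-learn's MLPRegressor, on synthetic molecular descriptors to predict a hypothetical clearance value; a real application would require measured descriptors, far larger datasets, and careful validation.

# Sketch: a small neural network predicting a hypothetical ADME property (clearance, L/h)
# from synthetic molecular descriptors. All values are invented; illustrative only.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 8))                    # eight hypothetical descriptors per compound
clearance = 5.0 + 1.5 * X[:, 0] - 0.8 * X[:, 2] + rng.normal(scale=0.3, size=500)
X_train, X_test, y_train, y_test = train_test_split(X, clearance, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
net.fit(X_train, y_train)
print("Held-out R^2:", round(net.score(X_test, y_test), 3))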

Physiologically-based pharmacokinetic (PBPK) models are useful tools in drug development and in risk assessment of environmental chemicals. PBPK model development requires collecting species-specific physiological parameters and chemical-specific ADME parameters, which can be very time-consuming and expensive. This raises the need for computational models capable of predicting input parameter values for PBPK models (especially for new compounds). In this review [see attached], the authors summarize an emerging paradigm for integrating PBPK modeling with AI- and ML-based computational methods. This paradigm includes three steps: [a] extract time-concentration PK data and/or ADME parameters from publicly available databases; [b] develop AI/ML-based approaches to predict ADME parameters; and [c] incorporate the AI/ML models into PBPK models to predict PK summary statistics (e.g., areas under the curve and maximum plasma concentrations).
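
The sketch below illustrates step [c] only, under strong simplifying assumptions: a one-compartment oral-dosing model stands in for a full PBPK model, and the "ML-predicted" absorption rate constant, clearance, and volume of distribution are hypothetical values rather than outputs of the authors' workflow.

# Sketch of step [c]: feed (hypothetical) ML-predicted ADME parameters into a simple
# one-compartment oral PK model and compute AUC and Cmax. A real PBPK model would
# include many tissue compartments; this is only an illustration.
import numpy as np
def one_compartment_oral(dose_mg, ka_per_h, cl_l_per_h, vd_l, t_h):
    """Plasma concentration (mg/L) vs. time for first-order absorption and elimination."""
    ke = cl_l_per_h / vd_l                       # elimination rate constant (1/h)
    return (dose_mg * ka_per_h) / (vd_l * (ka_per_h - ke)) * (
        np.exp(-ke * t_h) - np.exp(-ka_per_h * t_h))
predicted = {"ka_per_h": 1.2, "cl_l_per_h": 5.0, "vd_l": 40.0}   # as if predicted by an ML model
t = np.linspace(0, 48, 1000)                     # hours after a 100-mg oral dose
c = one_compartment_oral(dose_mg=100.0, t_h=t, **predicted)
auc = np.sum((c[1:] + c[:-1]) * np.diff(t)) / 2.0   # trapezoidal area under the curve (mg·h/L)
print(f"AUC = {auc:.1f} mg·h/L; Cmax = {c.max():.2f} mg/L at t = {t[np.argmax(c)]:.1f} h")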

Other areas in which AI/ML methodology is starting to be used are population genomics and genome-wide association studies (GWAS). For example, much attention has been paid to the utility of polygenic risk scores (PRS) — which represent the genetic burden of a given trait; the long-term (highly optimistic) plan is to develop strategies for risk-based intervention through lifestyle modification, screening, and drug therapy. A PRS for a given trait is typically defined as "a weighted sum of a set of germline SNVs, in which the weight for each SNV corresponds to an estimate of the strength of association between the SNV and the trait." But these topics will be covered in future email blogs. 😊
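
As a toy illustration of that definition, the short Python snippet below computes a PRS for one individual as a weighted sum of effect-allele counts; the rsIDs, effect weights, and genotypes are invented for the example.

# Toy polygenic risk score: a weighted sum of germline SNV genotypes.
# The SNV IDs, weights, and genotypes below are hypothetical, not taken from any real GWAS.
weights = {"rs0000001": 0.12, "rs0000002": -0.05, "rs0000003": 0.30}   # per-SNV effect-size estimates
genotypes = {"rs0000001": 2, "rs0000002": 1, "rs0000003": 0}           # counts of effect alleles (0/1/2)
prs = sum(weights[snv] * genotypes[snv] for snv in weights)            # PRS = sum over SNVs of weight x count
print(f"Polygenic risk score = {prs:.3f}")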

DwN

Toxicol Sci Jan 2023; 191: 1-14
