Sun spotless for 33 days straight – airline travelers getting dosed with up to 70 times more radiation [than at sea level]

This information does not bode well for predictions of further “global warming.”

In fact, the lowered solar activity could cause “global cooling” again. Ugh. Colder, snowier winters? ☹
Sun spotless for 33 days straight – airline travelers getting dosed with up to 70 times more radiation [than at sea level]

Anthony Watts / June 21, 2019

Are we in a solar grand minimum? We’ve seen this before, but now predictions are for an extremely weak solar cycle ahead.

Today is the summer solstice in the northern hemisphere. The sun has now been without a single observable sunspot for over a month (33 days, according to NOAA and SIDC data). One space-weather report says:

“This is a sign of Solar Minimum, a phase of the solar cycle that brings extra cosmic rays, long-lasting holes in the sun’s atmosphere, and a possible surplus of noctilucent clouds.”

Solar Dynamics Observatory HMI Continuum image for June 21, 2019. More at WUWT’s solar page.

There have been sightings of the electric-blue noctilucent clouds as far south as Joshua Tree, near Los Angeles, among many other locations. But one of the most interesting effects is that, because the Sun’s magnetic field has weakened, more cosmic rays are now bombarding Earth, and some airline flights are seeing radiation doses up to 73 times those we would see at ground level.

For example, a flight from Chicago, IL to Teterboro, NJ that flies at 45,000 feet gets 73.3 times the radiation dosage that a traveler would experience at ground level. A typical commercial flight across the United States gives you about 40x exposure, about the same amount of radiation as a typical dental x-ray. The Chicago-Teterboro flight is almost double that. Frequent air travelers during a solar minimum like the current one receive an even more elevated dose of cosmic rays. One group is monitoring passenger flights:
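To put those multipliers in concrete terms, here is a minimal back-of-the-envelope sketch (not from the article): it assumes a round-number sea-level cosmic-ray dose rate of about 0.04 µSv/h and a two-hour flight duration, both purely illustrative values.

```python
# Illustrative only: convert a "times sea level" dose-rate multiplier
# into an approximate total dose for one flight. The sea-level cosmic-ray
# dose rate below is an assumed round figure, not a value from the article.
SEA_LEVEL_RATE_USV_PER_HR = 0.04  # assumed cosmic-ray dose rate at sea level (uSv/h)

def flight_dose_usv(multiplier, hours):
    """Approximate cosmic-ray dose for a flight, in microsieverts."""
    return multiplier * SEA_LEVEL_RATE_USV_PER_HR * hours

# Chicago -> Teterboro example from the article (73.3x), assuming ~2 hours aloft
dose = flight_dose_usv(73.3, 2.0)
print(f"~{dose:.1f} uSv")  # prints "~5.9 uSv"
```

Even at 73x the sea-level rate, a single short flight delivers only microsieverts; the concern raised in the article is for frequent flyers and crews, for whom such doses accumulate.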

We are constantly flying radiation sensors onboard airplanes over the US and around the world, so far collecting more than 22,000 GPS-tagged radiation measurements. Using this unique dataset, we can predict the dosage on any flight over the USA with an error no worse than 15%.

E-RAD lets us do something new: Every day we monitor approximately 1400 flights criss-crossing the 10 busiest routes in the continental USA. Typically, this includes more than 80,000 passengers per day. E-RAD calculates the radiation exposure for every single flight.

The Hot Flights Table is a daily summary of these calculations. It shows the 5 charter flights with the highest dose rates; the 5 commercial flights with the highest dose rates; 5 commercial flights with near-average dose rates; and the 5 commercial flights with the lowest dose rates. Passengers typically experience dose rates that are 20 to 70 times higher than natural radiation at sea level.

Here is a table of recent “hot flights” arranged by radiation dosage level:

Column definitions: (1) The flight number; (2) The maximum dose rate during the flight, expressed in units of natural radiation at sea level; (3) The maximum altitude of the plane in feet above sea level; (4) Departure city; (5) Arrival city; (6) Duration of the flight.

There is now a dedicated website set up for monitoring this.

Meanwhile, the sun seems to be in a deep slumber, PerspectaWeather reports:

The sun continues to be very quiet and it has been without sunspots this year 62% of the time as we approach what is likely to be one of the deepest solar minimums in a long, long time. In fact, all indications are that the upcoming solar minimum may be even quieter than the last one, which was the deepest in nearly a century.

Daily observations of the number of sunspots since 1 January 1977, according to the Solar Influences Data Analysis Center (SIDC). The thin blue line indicates the daily sunspot number, while the dark blue line indicates the running annual average. The recent low sunspot activity is clearly reflected in the recent low values for the total solar irradiance. Compare also with the geomagnetic Ap-index. Data source: WDC-SILSO, Royal Observatory of Belgium, Brussels. Last day shown: 31 May 2019. Last diagram update: 1 June 2019.

In addition, there are now forecasts that the next solar cycle, #25, will be the weakest in more than 200 years. The current solar cycle, #24, has been the weakest, with the fewest sunspots, since solar cycle 14 peaked in February 1906. Solar cycle 24 continues a recent trend of weakening solar cycles that began with solar cycle 21, which peaked around 1980; if the latest forecasts are correct, that trend will continue for at least another decade or so.

Posted in Center for Environmental Genetics | Comments Off on Sun spotless for 33 days straight – airline travelers getting dosed with up to 70 times more radiation [than at sea level]

Transgenic Metarhizium rapidly kills mosquitoes in a malaria-endemic region

Transgenic Metarhizium rapidly kills mosquitoes in a malaria-endemic region
Nebert, Daniel (nebertdw)
Thu 6/20, 2:07 PM

The topic of today’s GEITP article is using a genetically-modified organism (GMO) to control a pest; this is an interesting “gene-environment interactions” type of story. Burkina Faso is a landlocked country in West Africa — surrounded by six countries that include Ivory Coast to the southwest. Mosquito-borne malaria is a serious health hazard there, and insecticide-treated bed nets had been working. However, as mosquitoes develop resistance to widely used insecticides, the nets have become less effective. Researchers have now chosen a new line of attack: testing a genetically modified fungus that kills malaria-carrying mosquitoes.

Authors [see attached article & editorial] built a 600-square-meter structure (called the MosquitoSphere), shaped like a greenhouse but with mosquito netting instead of glass. The sphere comprised six compartments — four of which contained WHO-style experimental huts for West Africa, sugar sources (plants) for adult mosquitoes, and breeding sites (plastic sheets buried in a layer of soil to allow water to be collected) — enclosed in a greenhouse frame with walls of mosquito netting, to allow exposure to ambient climate conditions and simulate a natural mosquito habitat. Insecticide-resistant Anopheles coluzzii mosquitoes for these experiments were collected as larvae from local breeding sites and reared to adulthood in one compartment of the sphere. The sphere was purposely built to compare the efficacy of Mp-Hybrid, a strain expressing an insect-selective toxin together with green fluorescent protein (GFP), with that of Mp-RFP, a strain with wild-type virulence expressing red fluorescent protein (RFP), against A. coluzzii mosquitoes.

The fungus Metarhizium pingshaense provides an effective, mosquito-specific delivery system for potent insect-selective toxins. Based on previous studies, authors had found that suspending the Metarhizium in locally produced sesame oil, and spreading the suspension on black cotton sheets, achieves a long-term effect in the sphere; also, these sheets provide a resting area for mosquitoes that have taken blood meals from calves in the huts. In initial experiments, authors released 100 female insecticide-resistant A. coluzzii mosquitoes into each hut at dusk, allowed them to blood-feed from a calf, and then collected them the next morning to monitor fungal infections. In seven replicates — with cloths rotated between each compartment — authors individually collected a total of 2,402 mosquitoes and recorded their feeding status and location of capture. Of the mosquitoes recaptured the next morning, 93.2% were blood-fed. Throughout all the experiments, none of the mosquitoes that had been collected from the compartments containing control sheets were infected with fungus.

The GMO fungus eliminated 99% of the mosquitoes within a month. The fungus is a long way from real-world use: because it is genetically modified to make it more lethal, it could face steep regulatory obstacles. However, the fungus also has clear advantages because it spares insects other than mosquitoes, and because it doesn’t survive long in sunlight, it’s unlikely to spread outside the building interiors where it would be applied. Authors conclude that, because Hybrid-expressing M. pingshaense is effective at very low spore doses, its efficacy lasted longer than that of the unmodified Metarhizium. Deployment of transgenic Metarhizium against mosquitoes (subject to appropriate registration) could be rapid, with products that could synergistically integrate with existing chemical control strategies to avoid insecticide resistance. 😊


Science 31 May 2019; 364: 894-897 & editorial (p. 817)

Posted in Center for Environmental Genetics | Comments Off on Transgenic Metarhizium rapidly kills mosquitoes in a malaria-endemic region

New Method Proposed to Screen Chemicals Quickly for Cancer Risk

This article is a summary (Research Brief 294) of a recent project funded by the NIEHS Superfund Research Program (SRP). I contacted Dr. Monti, and our email correspondence is located below.


New Method Proposed to Screen Chemicals Quickly for Cancer Risk

Boston University (BU) researchers, in collaboration with researchers at the National Toxicology Program (NTP) and the Broad Institute, have developed and evaluated a new approach to assess whether exposure to a chemical increases a person’s long-term cancer risk. The fast, cost-effective method uses gene expression profiling, which measures the activity of a thousand or more genes to capture what is happening in a cell. Based on gene expression profiling data, the researchers were able to infer specific biological changes at the cellular level and predict potential carcinogenicity of chemicals, or the ability of chemicals to cause cancer.

Of the tens of thousands of chemicals in commercial use, less than two percent have been thoroughly tested for their potential carcinogenicity, in part because the current chemical screening process is costly and time-consuming. Led by Stefano Monti, Ph.D., of the BU Superfund Research Program, the team set out to develop a short-term cell-based screening process to predict long-term cancer risk. According to the authors, their method provides a promising solution that could be used to prioritize chemicals for further cancer testing.

Simplified graphic illustrating how many different compounds can be tested in cells to collect gene expression profiles in a short amount of time.

Developing Computer Models to Find Gene Expression Patterns

The researchers exposed human liver cell lines to hundreds of individual chemicals — known to be carcinogens and noncarcinogens — and measured changes in gene expression within the cells using the Luminex L1000 platform. L1000 measures the expression of nearly 1,000 landmark genes, those determined to best predict the expression levels of the remaining genes in the transcriptome, or all RNA molecules in that cell.

Then, they fed the resulting data to a computer model that used machine-learning techniques to find patterns within the gene expression profiles that would (or might) distinguish carcinogenic from non-carcinogenic chemicals.

These patterns then formed the basis of the model that researchers used to predict the long-term carcinogenicity of a variety of different chemicals. The model was evaluated by testing its accuracy on chemicals already known to induce or not induce cancer, using data available from DrugMatrix, Connectivity Map (CMAP), and Toxicology in the 21st Century (Tox21).
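As a rough illustration of the general idea (train on labeled expression profiles, then classify a new chemical by similarity to each class), here is a toy nearest-centroid sketch in Python. The four-gene profiles and class labels are invented for illustration; this is not the authors' actual machine-learning pipeline, which used far larger profiles and more sophisticated models.

```python
# A minimal toy sketch of the general approach (not the authors' actual
# pipeline): learn per-class average gene-expression profiles from labeled
# training chemicals, then classify a new chemical by whichever class
# centroid its profile lies closer to (nearest-centroid classification).
import math

def centroid(profiles):
    """Element-wise mean of a list of equal-length expression vectors."""
    n = len(profiles)
    return [sum(vals) / n for vals in zip(*profiles)]

# Hypothetical 4-gene expression profiles (arbitrary numbers)
carcinogen_profiles = [[2.1, 0.3, 1.8, 0.2], [1.9, 0.4, 2.2, 0.1]]
noncarcinogen_profiles = [[0.2, 1.1, 0.3, 1.4], [0.1, 0.9, 0.4, 1.6]]

carc_centroid = centroid(carcinogen_profiles)
nonc_centroid = centroid(noncarcinogen_profiles)

def predict(profile):
    """Label a new chemical's profile by its nearer class centroid."""
    d_carc = math.dist(profile, carc_centroid)  # Euclidean distance
    d_nonc = math.dist(profile, nonc_centroid)
    return "carcinogen-like" if d_carc < d_nonc else "noncarcinogen-like"

print(predict([2.0, 0.5, 2.0, 0.3]))  # prints "carcinogen-like"
```

The real study replaced this toy distance rule with machine-learning models trained on profiles of hundreds of chemicals, but the logic (labeled training profiles in, class prediction out) is the same.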

Across the entire dataset, the model did not perform well. However, when they narrowed it down to chemicals with high biological activity levels, focusing on the 63 chemicals with the highest levels of bioactivity, they found that their model accurately predicted carcinogenicity, as well as genotoxicity, or damage to the cell’s genetic material.

They found that many of the expression profiles in their study involved genes implicated in DNA damage and repair processes, as well as immune-related pathways, metabolism pathways, and cell communication. Comparing these gene expression profile differences between carcinogens and noncarcinogens may inform further research on how different compounds can lead cells to become cancerous.

The research team made their data, including 6,000 gene expression profiles of more than 300 liver carcinogens and noncarcinogens, available to other researchers. They also created a portal for the public to search and visualize the results.

Alternative Methods for Safety Testing

The gold standard of carcinogenicity testing, the two-year rodent bioassay, is time-consuming and can require up to $4 million and more than 800 animals for every compound evaluated. The new approach represents a step forward for Tox21, a federal program focused on developing new ways to rapidly test whether chemicals may affect humans and the environment.

Although more work needs to be done to optimize and validate this approach before it can be applied in regulatory and clinical settings, it has the potential to provide a fast and cost-effective means to screen chemicals for further testing, according to the authors. Additionally, this approach could be extended to evaluate other negative effects of exposure, such as endocrine and metabolic disruption.

For more information, contact:

Stefano Monti, Ph.D.
Boston University
75 E. Newton St.

Posted in Center for Environmental Genetics | Comments Off on New Method Proposed to Screen Chemicals Quickly for Cancer Risk

The Polycomb-Dependent Epigenome Controls Beta-Cell Dysfunction, Dedifferentiation, and Diabetes

These GEITP pages have often discussed the fact that complex diseases almost always represent multifactorial phenotypes (traits) — e.g. hypertension, schizophrenia, major depressive disorder, cancer, autoimmunity, obesity, and diabetes — which reflect contributions from genetics (alterations in DNA sequence), epigenetic factors (chromosomal events other than DNA variants), environmental effects (diet, lifestyle, smoking, prescription drugs, occupational hazards), endogenous influences (heart, kidney or lung disease), and our constantly changing microbiome (bacteria living in our intestine and in every orifice). Epigenetics is classically divided into: DNA methylation, RNA-interference (silencing miRNAs), histone modifications, and chromatin remodeling. The first two are so well understood that we now have commercially available assays to test for these, even genome-wide. The latter two processes are more elusive and are still under intense investigation.

The topic [of the attached article] is type-2 diabetes (T2D), commonly associated with obesity, inflammation, and insulin resistance. Insulin is produced in the beta-cells of the pancreas. Ultimately, diabetes results from insufficient numbers of insulin-secreting beta-cells — caused by impaired function, increased cell death, and/or loss of cell identity. Beta-cells are now known to be highly adaptable, and pioneering studies over the last decade have discovered networks of transcriptional and chromatin regulators that drive development of the beta-cell lineage. These networks provide barriers against trans-differentiation (i.e. loss of beta-cell identity). How these transcriptional programs remain stable over the long cellular lifespans seen in the intact human or mouse pancreas is not well understood. Important studies, both in wild-type and in genetic models, have pointed out that loss of beta-cell identity (i.e. ‘dedifferentiation’ or ‘trans-differentiation’) is associated with increased metabolic stress (which is commonly seen with obesity and inflammation).

Authors [see attached article] combined deep epigenome-wide mapping analysis with single-cell transcriptomics to search for evidence of chromatin dysregulation in T2D. They found two chromatin-state “signatures” that are well correlated with beta-cell dysfunction (both in mice and in humans): [a] ectopic activation of bivalent Polycomb-silenced domains, and [b] loss of expression at an epigenomically unique class of lineage-defining genes. Authors determined that beta-cell-specific loss of function of Polycomb repressive complex-2 (PRC2) gene expression in mice triggers diabetes-mimicking transcriptional chromatin-state signatures — combined with highly penetrant (penetrance, in medical genetics, being the proportion of individuals carrying the mutant allele who actually exhibit clinical symptoms), hyperglycemia (high blood sugar)-independent dedifferentiation. [Polycomb group (PcG) proteins are epigenetic regulators of transcription — essential for stem-cell identity, differentiation, and disease states — functioning within multiprotein complexes, called polycomb-repressive complexes (PRCs); PRCs modify histones (and other proteins) and silence target genes.] These intriguing data indicate that PRC2 dysregulation contributes to the complex disease T2D.

These breakthrough experiments provide a new direction for exploring beta-cell transcriptional regulation and identify PRC2 as a necessary polycomb-repressive complex for long-term maintenance of beta-cell identity. It’s a great example of genetics plus epigenetics contributing to a complex disease. Most fundamentally — this study suggests a “two-hit model” (i.e. chromatin, plus hyperglycemia) for explaining the loss of beta-cell identity in diabetes [and these data likely apply to both the autoimmune type-1 diabetes, as well as type-2 diabetes]. 😊

Cell Metabolism 5 June 2019; 27: 1294-1308

Posted in Center for Environmental Genetics | Comments Off on The Polycomb-Dependent Epigenome Controls Beta-Cell Dysfunction, Dedifferentiation, and Diabetes

Adaptive introgression enables evolutionary rescue from extreme environmental pollution

So that everyone is “on the same page,” let’s consider the classical example of horse-donkey breeding (i.e. two different species). Usually, the male donkey (jack) mates with a female horse (mare) to produce a mule (mules can be either sex). Less commonly, a male horse (stallion) mates with a female donkey (jenny) to produce a hinny (again, either sex). Whereas humans have 23 chromosomal pairs — horses have 32 chromosomal pairs, and donkeys have 31. During meiosis in the hybrid offspring, therefore, the odd chromosome cannot pair up. During the divergence of horse and donkey, not only has the donkey lost one chromosome, but chromosomal rearrangements and inversions have occurred over evolutionary time.

For the mule or hinny, each cell ends up with 63 chromosomes; because of these dissimilarities, the end result is usually (but not always) sterile offspring. There are no recorded cases of fertile mule stallions, but there have been a few dozen cases of mule mares giving birth after mating with a horse or donkey. 😊 According to a British Broadcasting Corporation (BBC) report, “Only 60 cases of mules giving birth were recorded from 1527 to 2002 (spanning nearly 500 years).” In recent times, a mule produced a filly in China in 2001, and mules produced colts in Morocco (2002) and Colorado (2007). According to the American Donkey and Mule Society, however, only one hinny mare has ever been known to give birth (in China in 1981). On the other hand, male mules breeding with hinny mares apparently cannot produce offspring. ☹
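The chromosome arithmetic above can be checked in a few lines:

```python
# Quick arithmetic check of the chromosome counts discussed above.
horse_chromosomes = 32 * 2    # 32 pairs -> 64 chromosomes per cell
donkey_chromosomes = 31 * 2   # 31 pairs -> 62 chromosomes per cell

# Each parent contributes one half-set (a gamete) to the hybrid:
mule_chromosomes = horse_chromosomes // 2 + donkey_chromosomes // 2
print(mule_chromosomes)  # prints 63
# 63 is odd, so the chromosomes cannot all pair up at meiosis,
# which is why mules and hinnies are usually sterile.
```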

To answer your question: successful reproduction becomes increasingly difficult, as more chromosomal rearrangements and inversions have occurred over evolutionary time. Having an extra (unpaired) chromosome makes things even more difficult — but not impossible — in this stochastic world. There needs to be a better definition (but perhaps that is impossible?) for “what represents a ‘species’ vs what represents a ‘subline’ capable of generating offspring.” This is the problem with Mother Nature; we always seem to see GRADIENTS. I do not think that Fundulus grandis and Fundulus heteroclitus should be defined as “distinct species.”


Posted in Center for Environmental Genetics | Comments Off on Adaptive introgression enables evolutionary rescue from extreme environmental pollution

New historical perspectives as to how/why EPA adopted the Linear-Non-Threshold (LNT) Model

This article [see attached] is the latest historical analysis — in a long series by Ed Calabrese — on details surrounding the acceptance of the Linear Non-Threshold (LNT) Model in 1975 by the U.S. Environmental Protection Agency (EPA). In 1956, the US National Academy of Sciences (NAS) Biological Effects of Atomic Radiation (BEAR) I Genetics Panel recommended that risk assessment for ionizing radiation (for germ-cell mutation) be switched from a threshold model to a linear dose-response model. This was a pivotal moment in risk-assessment history that was long anticipated and highly publicized (e.g. it became a prominent paper in the journal Science and the subject of front-page stories in the New York Times, the Washington Post, and other major outlets, and the report was sent to all public libraries in the United States). The NAS BEAR I Genetics Panel was considered the “1950s equivalent of a genetics Dream Team,” having great influence and standing within the scientific, legislative, regulatory, news-media, and general-public communities.

Their report was followed by Congressional hearings in 1957, and it influenced advisory groups — such as the National Council on Radiation Protection and Measurements (NCRPM) — to adopt a linear dose response for cancer risk assessment in December 1958. This NCRPM position was based on the assumption that radiation-induced genetic damage is “completely cumulative and that the effect is independent of the rate at which the radiation is delivered” (this statement embraced the geneticists’ mantra of the 1950s and the position of the 1956 NAS BEAR Genetics Panel Report). However, in contrast to the scientific beliefs of the 1956 BEAR Genetics Panel, the 1960 NCRPM statement did not accept these assumptions and conclusions as established facts; the NCRPM specifically stated that they “were adopted as prudent public health policy” — reflecting, in effect, a Precautionary Principle.
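The two competing dose-response models at the heart of this debate are easy to state. The sketch below contrasts them; the slope and threshold values are arbitrary illustrative numbers, not regulatory figures.

```python
# Illustrative sketch of the two competing dose-response models.
# The slope and threshold values are arbitrary, for illustration only.

def risk_lnt(dose, slope=0.01):
    """Linear no-threshold (LNT): any dose > 0 carries proportional excess risk."""
    return slope * dose

def risk_threshold(dose, slope=0.01, threshold=10.0):
    """Threshold model: no excess risk below the threshold dose."""
    return slope * max(0.0, dose - threshold)

for d in (0, 5, 10, 50):
    print(d, risk_lnt(d), risk_threshold(d))
# The models diverge most in the low-dose zone: below the threshold,
# LNT still predicts nonzero excess risk while the threshold model
# predicts none. That low-dose zone is exactly where the regulatory
# disagreement described in this article played out.
```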

The timing of the 1958 NCRPM policy meeting (Dec. 29/30, 1958) was remarkable — coming about two weeks after the seminal publication of Russell et al. in the journal Science on December 19, 1958 — which demonstrated that radiation-induced mutation frequency was explained by dose rate, NOT total dose, and that such mutations could be readily repaired. No record has yet been obtained that clarifies why the decision to adopt a linear dose-response policy by the NCRPM was not affected by the Russell et al. findings, because James F. Crow and Edwin B. Lewis (both prominent figures in the radiation-genetics community) were members of the NCRPM Committee. In other words, why didn’t NCRPM delay its decision, pending a review of the Russell findings?

Of particular relevance to the LNT issue is that the 1956 BEAR Medical/Pathology Panel, which met concurrently with the BEAR Genetics Panel, offered a different perspective/evaluation of ionizing radiation-induced mutation and its relationship to cancer than did the Genetics Panel. The Medical/Pathology Panel downplayed, and even questioned, the role of somatic mutation in cancer. These developments had the potential to suggest that a threshold model may be biologically more plausible than the LNT. Thus, while the statements of the 1960 BEAR Panels seemed to provide a type of quiet cover (or support) for the NCRPM policy statement, this veiled support in the form of uncertainty was no longer sufficient (i.e. not convincing enough). It is suggested here that the new Congressionally mandated environmental agency, the EPA, created in 1970, needed something better than regulation by uncertainty. The 1972 BEIR Report provided EPA a basis (i.e. male-mouse mutagenicity data) to support a linearity approach — with the Russell data providing the Gold Standard — and this approach could be promoted in public as being based on studies using nearly two million mice.

The 1972 BEIR Report had the requisite caveats of some uncertainty — erring to some extent on the side of protection. However, the critical point was that there was now sufficient information for a science-supported EPA linearity risk-assessment policy that would soon have the apparent exacting precision of biostatistical model estimates of cancer risks in the low-dose zone via the LNT approach. This historical evaluation thus suggests that the EPA did not want to have its hands tied, or its scientific image affected, by statements from the 1960 BEAR Genetics and Medicine Panels that “there was too much uncertainty to estimate cancer risk in the low-dose zone.” Their comments were simply ignored and swept under the regulatory rug. ☹


Chem-Biol Interactions May 2019; 308: 110–112

Posted in Center for Environmental Genetics | Comments Off on New historical perspectives as to how/why EPA adopted the Linear-Non-Threshold (LNT) Model

Roosevelt’s D-Day prayer still resonates – All Americans should read it this week

By Van Hipp

As we commemorate the 75th anniversary of D-Day, all generations should read and listen to President Franklin D. Roosevelt’s powerful D-Day prayer.

He didn’t call for a special “Day of Prayer.” He called for continued prayer. He knew that by God’s Grace, and the righteousness of our cause, that our sons would triumph. And he knew that with God’s blessing, we would prevail over the unholy forces of our enemy.

Roosevelt was a great communicator. And he knew how to use the power of radio to bypass the media and go straight to the American people.

In World War II, the very survival of western civilization was at stake. In freedom’s darkest hour, America’s Commander-in-Chief turned to the Almighty and understood the power of prayer. FDR had a letter printed in the Soldier’s Pocket Bible, given to soldiers leaving for war, in which he took pleasure “in commending the reading of the Bible to all who serve.” On the evening of D-Day, June 6, 1944, President Roosevelt went on the radio to address the nation, but he did so by asking the American people to join him in prayer.

I am concerned that many of our younger generation don’t understand what it really means to be an American and don’t have an appreciation of what our men and women in uniform have done over the years to give us the freedom we enjoy today. Unfortunately, we don’t emphasize American history and civics in our schools like we used to. A recent study found that 22 percent of millennials weren’t sure if they knew what the Holocaust was. And 67 percent had not heard of Auschwitz, the Nazi death camp where more than 1 million Jews and others were murdered.

The great Anglican evangelist, Dr. John Guest, says that there is a “battle going on for the soul of America.” Guest, who spent many a night in a London air raid shelter as a boy, wonders what those who gave their lives on the beaches of Normandy would say if they saw what was going on today.

As we observe the 75th anniversary of D-Day, it was great to see President Trump read excerpts of FDR’s D-Day prayer in Portsmouth, England on Wednesday. Following congressional action a few years ago, there is currently an effort underway to have President Roosevelt’s D-Day prayer added to the National World War II Memorial. It should be. And it should be read and studied in our high schools and on our college campuses.

President Roosevelt’s D-Day prayer reminds us of what the “Greatest Generation” did to save the free world. And it reminds us, too, of the strength and humility of a U.S. president who understood the power of prayer and who asked the American people to join him in praying to God during freedom’s darkest hour.

“My fellow Americans:

Last night, when I spoke with you about the fall of Rome, I knew at that moment that troops of the United States and our allies were crossing the Channel in another and greater operation. It has come to pass with success thus far.

And so, in this poignant hour, I ask you to join with me in prayer:

Almighty God: Our sons, pride of our Nation, this day have set upon a mighty endeavor, a struggle to preserve our Republic, our religion, and our civilization, and to set free a suffering humanity.

Lead them straight and true; give strength to their arms, stoutness to their hearts, steadfastness in their faith.

They will need Thy blessings. Their road will be long and hard. For the enemy is strong. He may hurl back our forces. Success may not come with rushing speed, but we shall return again and again; and we know that by Thy grace, and by the righteousness of our cause, our sons will triumph.

They will be sorely tried, by night and by day, without rest — until the victory is won. The darkness will be rent by noise and flame. Men’s souls will be shaken with the violences of war.

For these men are lately drawn from the ways of peace. They fight not for the lust of conquest. They fight to end conquest. They fight to liberate. They fight to let justice arise, and tolerance and good will among all Thy people. They yearn but for the end of battle, for their return to the haven of home.

Some will never return. Embrace these, Father, and receive them, Thy heroic servants, into Thy kingdom.

And for us at home — fathers, mothers, children, wives, sisters, and brothers of brave men overseas — whose thoughts and prayers are ever with them — help us, Almighty God, to rededicate ourselves in renewed faith in Thee in this hour of great sacrifice.

Many people have urged that I call the Nation into a single day of special prayer. But because the road is long and the desire is great, I ask that our people devote themselves in a continuance of prayer. As we rise to each new day, and again when each day is spent, let words of prayer be on our lips, invoking Thy help to our efforts.

Give us strength, too — strength in our daily tasks, to redouble the contributions we make in the physical and the material support of our armed forces.

And let our hearts be stout, to wait out the long travail, to bear sorrows that may come, to impart our courage unto our sons wheresoever they may be.

And, O Lord, give us Faith. Give us Faith in Thee; Faith in our sons; Faith in each other; Faith in our united crusade. Let not the keenness of our spirit ever be dulled. Let not the impacts of temporary events, of temporal matters of but fleeting moment — let not these deter us in our unconquerable purpose.

With Thy blessing, we shall prevail over the unholy forces of our enemy. Help us to conquer the apostles of greed and racial arrogancies. Lead us to the saving of our country, and with our sister Nations into a world unity that will spell a sure peace — a peace invulnerable to the schemings of unworthy men. And a peace that will let all men live in freedom, reaping the just rewards of their honest toil.

Thy will be done, Almighty God.


Posted in Center for Environmental Genetics | Comments Off on Roosevelt’s D-Day prayer still resonates – All Americans should read it this week

Late Middle Pleistocene Denisovan mandible from the Tibetan Plateau

Late Middle Pleistocene Denisovan mandible from the Tibetan Plateau
Nebert, Daniel (nebertdw)
Wed 6/12, 12:53 PM

Denisovans are an extinct hominin (human-like) group related most closely to Neanderthals (another extinct hominin); Denisovans are known only from fossil fragments that were identified at Denisova Cave (in Altai, Russia). However, their genomic legacy (i.e. distinct DNA sequence haplotypes, ‘patterns’) has been detected in several Asian, Australian and Melanesian populations — suggesting they once might have been widespread.

“Introgression” [also known in genetics as ‘introgressive hybridization’; the transfer of genetic information from one species to another through hybridization and repeated backcrossing] was a term used in a recent GEITP email describing killifish in the Gulf of Mexico. Denisovan introgression into present-day Tibetans, Sherpas and neighboring populations includes positive selection for the Denisovan allele of the endothelial PAS domain-containing protein-1 gene (EPAS1); this allele confers high-altitude adaptation to hypoxia (i.e. low O2) in present-day humans inhabiting the Tibetan Plateau. This Denisovan-derived adaptation has been difficult to reconcile with the low altitude of Denisova Cave (~700 m) and with the earliest evidence of a human presence on the Tibetan Plateau dating to only ~30,000–40,000 years before present.

Furthermore, the relationships of various Middle Pleistocene (781,000 to 126,000 years ago) and Late Pleistocene (126,000 to ~11,700 years ago) hominin fossils in East Asia to Denisovans are difficult to resolve, owing to limited morphological information on Denisovans and a lack of paleogenetic data on Middle Pleistocene hominin fossils from East Asia and tropical Oceania. Authors [see attached article] describe a Denisovan mandible (jaw bone), identified by ancient protein analysis, found in a cave on the Tibetan Plateau at Xiahe (Gansu, China); they determined the mandible to be at least 160,000 years old by uranium-series (radioisotope) dating. This bone provides direct evidence of Denisovans living outside the Altai Mountains, and the specimen offers unique insights into Denisovan mandibular and dental morphology.

These findings thus confirm that archaic hominins occupied the Tibetan Plateau in the Middle Pleistocene Epoch, and that the genome of these hominins had adapted successfully to high-altitude hypoxic environments — lo-o-o-o-ong before the arrival of modern Homo sapiens in that region. 😊

Posted in Center for Environmental Genetics | Comments Off on Late Middle Pleistocene Denisovan mandible from the Tibetan Plateau

A missense variant in the FTCD gene (other than the AS3MT gene) affects arsenic metabolism and toxicity phenotypes in Bangladesh

This topic is central to gene-environment interactions (i.e. if everyone in a population is exposed to the same undesirable levels of arsenic, why is the toxic response not the same?). Exposure to inorganic arsenic (iAs) in contaminated drinking water is a major global health problem; more than 130 million individuals worldwide are exposed at levels >10 μg/L, including ~50 million in Bangladesh, where contamination of ground water is a well-known public health issue. Arsenic is a human carcinogen, and chronic exposure to iAs through drinking water at levels exceeding 50–100 μg/L is associated with various types of cancer in many populations, including the United States. Arsenic exposure has also been linked to increased risk of type-2 diabetes, cardiovascular disease, non-malignant lung disease, and decreased longevity.

Arsenic-induced skin lesions are an early sign of arsenic exposure and toxicity, and they indicate risk for subsequent cancer. Once absorbed into the blood stream, iAs can be converted to mono-methylated (MMA) and then di-methylated (DMA) forms of arsenic; methylation facilitates excretion of arsenic in the urine (this metabolism is believed to occur primarily in the liver). The relative abundance of these forms in urine (percent iAs vs percent MMA vs percent DMA) varies across individuals and reflects the efficiency with which an individual metabolizes arsenic.
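As a minimal sketch of how those urinary percentages are derived, the snippet below computes %iAs, %MMA and %DMA from measured species concentrations; the function name and the example concentrations are hypothetical illustrations, not values from the attached article.

```python
def arsenic_species_percentages(ias, mma, dma):
    """Return (%iAs, %MMA, %DMA) of total urinary arsenic metabolites.

    ias, mma, dma: urinary concentrations of each species (same units,
    e.g. ug/L). The percentages always sum to 100.
    """
    total = ias + mma + dma
    return tuple(round(100.0 * x / total, 1) for x in (ias, mma, dma))

# Hypothetical person: 15 ug/L iAs, 10 ug/L MMA, 75 ug/L DMA
print(arsenic_species_percentages(15.0, 10.0, 75.0))  # → (15.0, 10.0, 75.0)
```

A high %DMA (and low %iAs and %MMA) corresponds to efficient methylation, the phenotype the genetic associations below are measured against.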

Arsenic metabolism is influenced by lifestyle (e.g. occupation, cigarette smoking) and demographic (e.g. ethnicity, climate) factors, as well as by genetic variation. Prior genome-wide association (GWAS), linkage, and candidate-gene studies have shown that variation in the chromosome 10q24.32 region near the AS3MT gene (arsenite methyltransferase) influences arsenic metabolism efficiency; two independent association signals have been observed in the AS3MT gene region among exposed Bangladeshi individuals. These metabolism-related single-nucleotide variants (SNVs) appear to affect DMA production (not the conversion of iAs to MMA), and “DMA-increasing alleles” have also been shown to be associated with reduced risk of arsenic-induced skin lesions via an SNV-by-arsenic (i.e. gene-environment, GxE) interaction.

Other than the AS3MT gene region at chromosome 10q24.32, no other region of the human genome had been known (until now) to contain variants showing robust, replicable evidence of association with arsenic metabolism efficiency, although heritability studies suggest that additional variants are likely to exist. To identify additional genetic variants that influence arsenic metabolism efficiency, authors [see attached article] conducted a whole-exome sequencing (WES) study of 1,660 Bangladeshi individuals participating in the Health Effects of Arsenic Longitudinal Study (HEALS). Among almost 20,000 SNVs analyzed exome-wide, the minor allele (A, adenine) in exon 3 of the FTCD gene (formiminotransferase cyclodeaminase), which changes amino acid 101 from valine to methionine, was associated with increased urinary iAs levels (P = 8 x 10^-13), increased MMA levels (P = 2 x 10^-16), and decreased DMA levels (P = 6 x 10^-23).

Among 2,401 individuals with arsenic-induced skin lesions (an indicator of toxicity and cancer risk), compared with 2,472 controls, carriers of the low-metabolism allele [minor allele frequency (MAF) = 7%] had increased skin-lesion risk (odds ratio = 1.35; P = 1 x 10^-5). The FTCD enzyme is critical for breakdown of histidine, a process that feeds the one-carbon/folate cycle, which ultimately provides the methyl groups for arsenic methylation. In this study population, the FTCD and AS3MT SNVs together explain ~10% of the heritable variation in DMA metabolite levels, supporting a causal effect of arsenic metabolism efficiency on arsenic toxicity (i.e. skin lesions). In summary, these data identify a coding variant in FTCD associated with arsenic metabolism efficiency, providing new evidence of a link between one-carbon/folate metabolism and arsenic toxicity. 😊
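To illustrate what an odds ratio of this kind means, the sketch below computes an unadjusted odds ratio from a 2x2 table of allele carriers vs non-carriers among cases and controls. The counts are hypothetical (chosen only to land near an OR of 1.35); the study’s reported estimate came from adjusted regression models, not this simple calculation.

```python
def odds_ratio(case_carriers, case_noncarriers,
               ctrl_carriers, ctrl_noncarriers):
    """Unadjusted odds ratio from a 2x2 case-control table:
    (odds of carrying the allele among cases) /
    (odds of carrying the allele among controls)."""
    return (case_carriers * ctrl_noncarriers) / (case_noncarriers * ctrl_carriers)

# Hypothetical counts, NOT the HEALS study data:
# 270 of 2,401 cases carry the allele; 210 of 2,472 controls do.
print(round(odds_ratio(270, 2131, 210, 2262), 2))  # → 1.36
```

An OR above 1 means allele carriers are over-represented among skin-lesion cases, consistent with the low-metabolism allele raising risk.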


PLoS Genet Mar 2019; 15: e1007984

Posted in Center for Environmental Genetics | Comments Off on A missense variant in the FTCD gene (other than the AS3MT gene) affects arsenic metabolism and toxicity phenotypes in Bangladesh