Large genome-wide association study identifies 65 new breast cancer risk loci !!!

Breast cancer risk is a perfect example of a multifactorial trait –– something these GEITP pages have continued to scrutinize and underscore. I still recall discovery of the BRCA1 gene (1994), when it was declared “THE breast cancer risk gene.” Within the year, the BRCA2 gene was identified. And some of us insisted “things might be much more complicated than what one sees on the surface.” And they are. Now virtually everyone agrees that breast cancer risk is affected by rare coding single-nucleotide variants (SNVs) in susceptibility genes (e.g. BRCA1 and BRCA2), plus hundreds if not thousands of common, mostly noncoding SNVs. Yet most of the genetic contribution to breast cancer risk remains unknown.

In the attached report, authors describe results of a genome-wide association study (GWAS) of breast cancer in 122,977 cases and 105,974 controls of European ancestry, plus 14,068 cases and 13,104 controls of East Asian ancestry. They identified 65 new loci that are associated with overall breast cancer risk at P < 5.0 x 10–8. The majority of credible risk single-nucleotide variants in these loci fall within distal regulatory elements; by integrating in silico data to predict target genes in breast cells at each locus, authors showed a strong overlap between candidate target genes and somatic driver genes ("somatic" = DNA in the body other than sperm or ovum DNA) in breast tumors. They also found that the heritability (proportion of total variation between individuals in a given population that reflects genetic variation) of breast cancer due to all SNVs in regulatory modules was enriched 2- to 5-fold, relative to the genome-wide average, with strong enrichment for particular transcription-factor-binding sites. These data provide further insight into genetic susceptibility to breast cancer risk and (although emphasizing the "increasing complexity" every time another larger cohort is studied by GWAS), perhaps these studies will improve the use of genetic risk scores for individualized screening and prevention of breast cancer. Nature 2 Nov 2o17; 551: 92–94
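Since genetic risk scores come up here: the arithmetic behind a polygenic risk score is simply a weighted sum of risk-allele counts –– each variant's effect size times the number of risk alleles a person carries. A minimal sketch (the SNV identifiers and effect sizes below are hypothetical, purely for illustration):

```python
# Minimal polygenic-risk-score sketch: score = sum(beta_i * dosage_i).
# SNV identifiers and effect sizes (log odds ratios) are hypothetical.
effect_sizes = {"rs_A": 0.08, "rs_B": 0.05, "rs_C": 0.11}

def polygenic_risk_score(dosages):
    """dosages: dict mapping SNV id -> risk-allele count (0, 1, or 2)."""
    return sum(beta * dosages.get(snv, 0) for snv, beta in effect_sizes.items())

person = {"rs_A": 2, "rs_B": 0, "rs_C": 1}
score = polygenic_risk_score(person)  # 0.08*2 + 0.05*0 + 0.11*1 = 0.27
```

A real breast cancer score would use published effect sizes for the hundreds of loci now known, and would typically be standardized against a reference population before being used for screening decisions.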

Posted in Center for Environmental Genetics | Comments Off on Large genome-wide association study identifies 65 new breast cancer risk loci !!!

How evolution of modern-day humans has influenced mental illness

As discussed often in these GEITP pages, psychiatric disorders represent multifactorial traits (i.e. phenotypes caused by the contribution of hundreds if not thousands of gene mutations, plus epigenetic effects, plus environmental adversities that can accumulate over decades of time). Other multifactorial traits include obesity, type-2 diabetes, various types of cancers, and drug efficacy as well as risk of toxicity. Hence, whenever possible, GEITP likes to examine the gene-environment interactions.

The [attached] Nov 2o17 editorial describes how “genetic predisposition (risk) of mental illness” might have evolved over hundreds of generations during the past 300K to 600K years. Just as with the previous note we sent earlier today (about influence of Neanderthal variants on modern humans), the attached editorial summarizes several other presentations at the Am. Soc. of Human Genetics annual meeting in late October. One project found that evolution selected for DNA variants that are thought to be protective against schizophrenia [see attached earlier report from Nov 2o16]. Authors used the singleton density score (SDS), a method to infer very recent changes in allele frequencies from contemporary genome sequences. Applied to data from the UK10K Project, SDS measured allele-frequency changes in ancestors of modern Britons during the past ~2,000 to 3,000 years (this even predates Brexit). They found strong signals of selection at the lactase gene (involved in digestion of milk) and the major histocompatibility complex (involved in the immune system), and in favor of blond hair and blue eyes. For polygenic adaptation, recent selection for increased height was discovered to have driven allele-frequency shifts across most of the genome.

Despite selection for protection against schizophrenia, this disorder has persisted and perhaps become even more prevalent –– but reasons for this are not known. Many of schizophrenia’s symptoms (e.g. experiencing auditory hallucinations, jumbling sentences) involve brain regions tied to speech. Over the course of hominid evolution, the benefits of “ability to speak” might have outweighed the small risk that genes (involved in language) could malfunction and result in schizophrenia in a small percentage of the population.

Another project described the dissection of environmental factors, mental illnesses, and behavioral traits. Looking at 2,455 DNA samples from individuals at 23 geographical sites across Europe, authors quantified each person’s overall genetic likelihood of conditions (such as autism) and personality traits (such as being extraverted rather than introverted). The scientists then calculated whether that risk was associated with certain environmental factors –– such as amount of rainfall, extremely cold winter temperatures, or prevalence of infectious diseases –– exploring the idea that these factors might have been involved in selecting for such human traits. Persons living in parts of Europe with relatively lower winter temperatures were found to be slightly more genetically prone to schizophrenia (or is it just that long, dark nights induce people to drink more alcohol?). It was suggested that, if the genes that helped people tolerate cold lay close to variants that enhanced risk of schizophrenia, then schizophrenia-related genes could have been inadvertently carried along during evolution as “fellow travelers” from one generation to the next.

Nature 2 Nov 2o17; 551: 15–16 [editorial] and Science 2o16; 354: 760-764 [full article]

Posted in Center for Environmental Genetics | Comments Off on How evolution of modern-day humans has influenced mental illness

More findings of Neanderthal DNA contributions to modern humans (Homo sapiens)

As these GEITP pages have often described, there is no simple diagram to illustrate migrations of modern humans (Homo sapiens) out of southeast Africa during the past 1-2 million years. Both Neanderthals (Homo neanderthalensis) and modern humans are likely to have evolved from Homo erectus, an ancestor that left Africa ~1.8 million years ago (MYA) –– most likely reaching (what today is) Georgia, on the eastern edge of the Black Sea, by way of the Levant, as well as China ~1.7 MYA and Iberia (now Spain) ~1.4 MYA. Numerous “molecular clock” genetic studies place the divergence time of the Neanderthal and modern human lineages between 800K and 400K years ago. Other scholars believe Neanderthals descended via Homo heidelbergensis (another distinct Homo erectus migration out of Africa that also occurred during this time-frame). Neanderthal traits are also seen in Homo heidelbergensis specimens beginning between 600K and 350K years ago. An additional subspecies (not to be covered further today) is the Denisovan (Homo denisova), which contributed DNA to present-day South and Southeast Asian populations and lived around the same time-frame as the Neanderthal. By convention, European hominins younger than ~250K years are called Neanderthals. The genome of modern humans contains bits and pieces of both the Neanderthal and Denisovan genomes.

When Neanderthals mated with modern humans ~250K to 30K years ago, they “gave back” thousands of ancient African gene variants that Eurasians had lost when their ancestors migrated out of Africa in small tribes, perhaps 80K to 60K years ago. This diversity might have been a “genetic gift” to Eurasian ancestors as they spread around the world. Today, however, some of these ancient single-nucleotide variants (SNVs) are a burden: they appear to boost the risk of becoming addicted to nicotine, to alter pigmentation, and to widen waistlines. (??) At the recent Annual Mtg of the ASHG [see brief summary in attached 1-page article], researchers reported that some “Neanderthal” SNVs inherited by modern humans outside of Africa are “not peculiarly Neanderthal genes,” but represent the ancestral human condition; this finding highlights just how much diversity can be lost when people pass through a genetic bottleneck as they move out of Africa.

When researchers examined closely the genomes of >20,000 people in the 1000 Genomes Project and Vanderbilt’s BioVU data bank of electronic health records, they noticed that distinct stretches of chromosomes inherited from Neanderthals also carried ancient alleles, or SNVs, found in all Africans studied (including the Yoruba, Esan, and Mende peoples). Researchers found 47,261 such SNVs across the genomes of Europeans and 56,497 in Asians. Most intriguingly, in Eurasians these alleles are found only next to Neanderthal genes, suggesting “this entire chunk of DNA was acquired at the same time,” when ancestors of today’s Eurasians mated with Neanderthals ~50K years ago. The most parsimonious explanation is that these alleles represent the ancestral human condition –– inherited by both Neanderthals and Homo sapiens in Africa from their common ancestor.

Geneticists at the meeting also focused on archaic DNA “deserts,” where modern humans have inherited no DNA from Neanderthals or other archaic sublines. One of these regions includes the FOXP2 “language” gene. Absence of archaic DNA in these deserts suggests that, in our ancestors, natural selection flushed out the Neanderthal version of this gene. Neanderthal versions of FOXP2 would have produced much less of its protein than the amount expressed in modern human brains. In fact, a rare mutation –– that causes members of a family to produce half the usual amount of FOXP2 protein –– is known to trigger severe speech defects. It has been suggested by several researchers that enhancing FOXP2 expression may have been key to modern human language. 🙂

Science 27 Oct 2o17; 358: 431

Posted in Center for Environmental Genetics | Comments Off on More findings of Neanderthal DNA contributions to modern humans (Homo sapiens)

Enhancer modules (affecting gene up- or down-regulation) are often not closest to the genes they regulate :(

The earliest genome-wide association studies (GWAS) immediately raised the question of what genes are the targets of the identified disease risk variants (single-nucleotide variants; SNVs), located sometimes inside a gene, but more often, located some distance upstream (or downstream) of the transcribed gene. In other words, a single-nucleotide alteration (SNV) some distance from the nearest gene MIGHT be influencing expression of THAT gene, or it might be affecting expression of some gene much further away from the SNV. GWAS have mapped thousands of variants associated with a range of phenotypes, from biometric traits to complex immune diseases. Despite these “successes”, it has been a major challenge to translate the associated SNVs into molecular mechanisms. Because the vast majority of disease-associated variants fall outside of the protein-coding sequence –– something conceptually as simple as assigning disease variants to their target genes has been a major challenge for geneticists.

To overcome this problem, different approaches have been taken: tentative identification of a candidate gene on the basis of functional relevance to disease biology; reporting of the nearest gene to the variant; or claiming a gene for which the same variant affects gene expression [i.e. an expression quantitative trait locus (eQTL)]. All of these approaches, however, lack a direct link between the associated SNV and its target gene. Authors [see attached full-length paper] generated a high-resolution map of enhancer–promoter interactions in rare disease-relevant cell types –– thus mapping physical interactions between regulatory elements containing variants associated with autoimmune and cardiovascular diseases and their target genes.

Gene expression programs are intimately linked to the hierarchical organization of the genome. In mammalian cells, each chromosome is organized into hundreds of megabase-sized topologically associating domains (TADs), which are conserved from early stem cells to differentiated cell types. Within this invariant TAD scaffold, cell-type-specific enhancer–promoter interactions establish regulatory gene expression programs. Standard methods require tens of millions of cells to obtain high-resolution interaction maps and to confidently assign enhancer–promoter contacts. Hence, the principles that govern enhancer–promoter conformation in disease-relevant patient samples are not well understood. This gap in understanding is particularly problematic for interpreting molecular functions of inherited risk factors for common human diseases, which reside in intergenic enhancers or other noncoding DNA features in (as many as) 90% of cases.

Such disease-relevant enhancers may not influence expression of the nearest gene (often reported as the default target in the GWAS literature) and may instead act in a cell-type-specific manner on distant target genes residing up to hundreds of kilobases away. Recently, systematic perturbations of regulatory elements in select gene loci have shown that effects of individual regulatory elements on gene activity can be predicted from the combination of [a] enhancer activity [marked by histone H3 lysine 27 acetylation (H3K27ac) levels] and [b] enhancer–target looping. In the attached report, authors leverage this insight to capture the combination of these two types of information across the genome in a single assay –– mapping the enhancer connectome in disease-relevant primary human cells.
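The prediction rule just described –– an enhancer's effect on a gene scored from enhancer activity (H3K27ac signal) combined with enhancer–target looping –– can be caricatured in a few lines of code. This is only an illustrative sketch of that logic, not the authors' actual pipeline, and all numbers are hypothetical:

```python
# Score each enhancer's contribution to a promoter as activity x contact,
# normalized over all enhancers contacting that promoter. Values hypothetical.
enhancers = {
    "enh_nearest": {"h3k27ac": 1.0, "contact": 2.0},  # closest gene's neighbor
    "enh_distal":  {"h3k27ac": 4.0, "contact": 3.0},  # ~300 kb away
}

def enhancer_scores(enhs):
    """Return each enhancer's fractional contribution to the promoter."""
    raw = {name: e["h3k27ac"] * e["contact"] for name, e in enhs.items()}
    total = sum(raw.values())
    return {name: v / total for name, v in raw.items()}

scores = enhancer_scores(enhancers)
# Here the distal enhancer dominates (12/14 of the total score), despite
# not being the nearest element in the linear genome.
```

The point of the toy numbers is the one the paper makes: a strong, well-looped distal enhancer can outweigh a weak nearby one, so "nearest gene" is a poor default target assignment.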

Authors [see attached] show that H3K27ac HiChIP [a protein-centric chromatin conformation method, which improves the yield of conformation-informative reads by more than 10-fold and lowers the input requirement more than 100-fold relative to other ChIP methods] generates high-resolution contact maps of active enhancers and target genes in rare primary human T-cell subtypes and coronary artery smooth muscle cells. Differentiation of naive T cells into T-helper-17 cells or regulatory T cells creates subtype-specific enhancer–promoter interactions –– specifically at regions of shared DNA accessibility. These findings provide a principled means of assigning molecular functions to autoimmune and cardiovascular disease risk variants, linking hundreds of noncoding single-nucleotide variants (SNVs) to putative gene targets. Target genes identified with HiChIP are further supported by CRISPR interference and activation at linked enhancers, by the presence of expression quantitative trait loci (eQTLs), and by allele-specific enhancer loops in patient-derived primary cells. The majority of disease-associated enhancers contact genes beyond the nearest gene in the linear genome, leading to a 4-fold increase in number of potential target genes for autoimmune and cardiovascular diseases.

Nat Genet Nov 2o17; 49: 1602–1612 [full article] + pp. 1564-5 [News’N’Views editorial]

Posted in Center for Environmental Genetics | Comments Off on Enhancer modules (affecting gene up- or down-regulation) are often not closest to the genes they regulate :(

Transplanting human cancers into mice –> (surprise, surprise) genotype and phenotype tend to change rather quickly

Use of patient-derived models, in which part of a human tumor is transplanted into mice (also known as patient-derived xenografts, or PDXs), is gaining traction as a method to investigate tumor behavior such as “response to therapies”. Human-derived tumor models are becoming popular in the context of “personalized medicine”, but [consistent with what we had previously warned, Regul Toxicol Pharmacol 2o16; 75: 1–4] a new study [see attached full article + editorial] shows that these models could be less representative of primary tumors than previously thought –– particularly when using late passages of the cells in culture.

Many studies agree that PDXs overcome several limitations of more common and established models such as human cell lines –– including their homogeneity and lack of a human stromal microenvironment. To develop PDXs, tumors must be grown in immunodeficient mice, and then transplanted (sequentially) over several generations. Each of these transplantations is commonly referred to as “a passage” and provides the advantage of amplifying the amount of tissue available from a single patient biopsy. In the attached full-length paper, however, Beroukhim, Golub and colleagues questioned the assumption that PDX models remain representative of the original human tumor during passaging. They systematically analyzed previously published data on more than a thousand samples. By tracking DNA copy-number alterations (CNAs) through different passages, they report that PDXs start diverging relatively early from the primary tumor; in addition, they find data supporting mouse-specific positive selection of pre-existing tumor clones. In other words, clones with minor representation in the human tissue graft can gain a fitness advantage during PDX passaging. Authors argue that this is due to the different evolutionary constraints posed by the mouse environment on human cells. An alternative hypothesis would entail random, non-adaptive genetic drift –– caused by a series of population bottlenecks and expansions at each stage of transplantation.

Cancer research relies on interrogating model systems that mirror the biology of human tumors. Cell lines cultured from human tumors have been the workhorse of cancer research, but marked differences between the cell culture environment and the in vivo tumor environment raise concerns that these lines may not be representative of human tumors. Recently, there have been increasing efforts to use PDXs as models to study drug response; these in vivo models are assumed to capture the cellular and molecular characteristics of human cancer better than simpler cell-line-based models.

Authors [see attached] monitored the dynamics of CNAs in 1,110 PDX samples across 24 cancer types. They observed rapid accumulation of CNAs during PDX passaging, often due to selection of preexisting minor clones. CNA acquisition in PDXs was correlated with tissue-specific levels of aneuploidy and genetic heterogeneity observed in primary tumors. However, the particular CNAs acquired during PDX passaging differed from those acquired during tumor evolution in patients. Several CNAs recurrently observed in primary tumors gradually disappeared in PDXs –– indicating that events undergoing positive selection in humans can become dispensable during propagation in the mouse. Notably, the genomic stability of PDXs was associated with their response to chemotherapy and targeted drugs. These findings have major implications for PDX-based modeling of human cancers of various types.

Nat Genet Nov 2o17; 49: 1557–1575 [full article] + pp. 1565-6 [News’N’Views editorial]

Posted in Center for Environmental Genetics | Comments Off on Transplanting human cancers into mice –> (surprise, surprise) genotype and phenotype tend to change rather quickly

Analysis Commons, a team approach to discovery — in a big-data environment for genetic epidemiology

The “Analysis Commons” –– which relies on a new team-science model for genetic epidemiology –– integrates multi-omic data and rich phenotypic and clinical information from diverse population studies into a single shared analytic platform that leverages the resources of a cloud-computing environment and allows for distributed access. The number of whole-genome sequencing (WGS) studies with large sample sizes is rapidly expanding. Projects such as the NHLBI TOPMed Program, the Cohorts for Heart and Aging Research in Genomic Epidemiology (CHARGE) Consortium, and the Centers for Common Disease Genomics (CCDG), among others, have already conducted WGS in more than 100,000 individuals, and the Precision Medicine Initiative promises (soon) whole-genome sequencing in over a million samples. These programs span a diverse set of studies and institutions, many of which lack the computational infrastructure to store and compute on this scale of data. Genomic, epigenomic, metabolic and proteomic data derived from expensive assays often do not exist in large numbers in any single study, but represent a powerful discovery resource when they are combined across studies and integrated with phenotypic data.

Altogether, many population-based studies have now collected data on tens of thousands of variables over several decades, and addition of WGS data to cohorts with long-term prospective follow-up provides a powerful resource for immediate discovery. Analysis of WGS data for large samples presents formidable computational and administrative challenges. Evaluation of rare genetic variants in WGS data requires manipulation of data sets that are tens to hundreds of terabytes in size and are prohibitively large for exchange between analysis sites. In contrast, pooled data sets –– which include genotype and phenotype data from all participants in the contributing individual studies –– provide for practical and efficient WGS analysis. Creation of such large pooled data sets containing harmonized multi-omic, phenotype and clinical data with appropriate meta-data (e.g. parent-study information and use permissions) is difficult and time consuming.

The cloud-based Analysis Commons [see attached article] brings together genotype and phenotype data from multiple studies in a setting that is accessible by multiple investigators. This framework addresses many of the challenges of multi-center WGS analyses –– including data-sharing mechanisms, phenotype harmonization, integrated multi-omics analyses, annotation, and computational flexibility. In this setting, the computational pipeline facilitates a sequence-to-discovery analysis workflow, illustrated [see attached] by an analysis of plasma fibrinogen levels in almost 4,000 individuals from the National Heart, Lung, and Blood Institute (NHLBI)’s Trans-Omics for Precision Medicine (TOPMed) WGS program. The Analysis Commons represents a novel model for translating WGS resources from a massive quantity of phenotypic and genomic data into knowledge of the determinants of health and disease risk in diverse human populations!

Nat Genet Nov 2o17; 49: 1560–1563

Posted in Center for Environmental Genetics | Comments Off on Analysis Commons, a team approach to discovery — in a big-data environment for genetic epidemiology

Manipulation of meteorological data by NOAA and NASA over the past 30-40 years – comments

First, I want to emphasize that there is a scientific field of climatology, which needs to be kept distinct from the political arena of non-science-based opinions and emotions. I believe that I can answer your email because (as you say, you’ve “never invested the required 10k hours”) I, on the other hand, HAVE invested that amount of time trying to understand climatology vs meteorology. And what these GEITP pages are trying to do is to “separate the wheat from the chaff” –– i.e. let’s deal with science and scientific facts, identify fraud and corruption when it occurs, and leave political opinions and subjective emotions to the politicians.

Second, had I been able to attend your seminar several weeks ago at Cincinnati Children’s Hospital, and if I had found that your data differed from my opinion, then –– as one reputable scientist to another –– I would have approached you (alone, or publicly in a question at the podium). I would NOT have checked online to find out your “Pants on fire” rating index. THIS is the difference between scientific facts and political hysteria.

Third, if government agencies (or various scientific journals and professional societies) receive federal funding to “perform research that is consistent with, and cannot be inconsistent with, their political agenda,” too many times money speaks louder than truthful data.

Fourth, what NOAA and NASA have been doing for decades is called “cooking the books” (they call it “adjustment” or “homogenization of data”). If Y (on the ordinate) is plotted as a function of X (on the abscissa), and Y = global atmospheric temperature and X = time –– then every scientist knows that, if one removes a few “high Y values” early in time, plus removing a few “low Y values” later in time, the “adjusted data” will have a positive slope. Or one can find a “recent rise in temperature” and omit earlier data that would refute global warming. Here is an example of Greenland ice core temperatures over the past 600 years:

[Graph #1: Greenland ice core temperatures –– last 600± years]

Compare that to Greenland ice core temperatures over the past 1200 years:

[Graph #2: Greenland ice core temperatures –– last 1,200± years]

And compare that to Greenland ice core temperatures over the past 5,000 years:

[Graph #3: Greenland ice core temperatures –– last 5,000 years]

To quote a few words from Tim Ball’s 2o14 book Deliberate Corruption of Climate Science:
Current weather is normal; that is, it is well within the range of all previous weather and climate variations. There are no dramatic increases in temperature, precipitation, hurricanes, tornadoes, or any other severe weather. The climate is changing, just as it always has, and always will, and the rate of change is perfectly normal. Of course, that is not what the government, environmentalists, or the media promote and, as a result, what most of the public believe. The misconception is deliberate and central to the exploitation of global warming and climate change as the vehicle for a political agenda.

One phenomenon that creates the illusion that weather is abnormal is the attention given by the media. We all experience being introduced to some person, then seeing them pop up every time we turn around. It’s the same thing with cars: after you buy one, you see them everywhere. In both cases they were always there, but not part of your awareness. Weather and climate catastrophic events seem to occur every day, but that is because they became a media story. They have always occurred. Now the story appears, and is amplified by the sensationalism of the media with their “Extreme Weather Reports.”

The entire objective of those pursuing the political agenda was to create the illusion that current weather is abnormal and therefore unnatural. They wanted to show that all this occurred in the last 100 years as a result of human industrial activity. The objective was to create false science, which was easy because few people know about weather and climate, a fact confirmed by a Yale University study that created a High School exam. Fig. 1 shows the raw results with 52 percent getting an F and 25 percent a D for a total failure of 77 percent.
Fig. 1

Promoters of the false story also knew that people know even less about climate. Indeed, most don’t even know the difference between weather and climate. “Weather” is the atmospheric conditions you experience at any point in time. “Climate” is the average of those conditions in a region or over decades of time.

A few years ago I wrote, but didn’t submit, a story for the Globe and Mail with the headline, “An Area of Arctic Ice Twice the Size of Vancouver Island Melted Today.” My planned story then revealed that this was a normal amount of melt. Imagine my surprise –– when recently this story appeared in reality! The headline I tongue-in-cheek considered writing was in a national newspaper: “Melting in the Arctic reached an all-time high in June: Ice has been disappearing at a rate of 29,000 square miles a day.”

This is near the average daily rate of melt in the brief Arctic summer, but few people know this is natural. Approximately 10 million km2 of ice melts every summer in approximately 145 days, which is a melt rate of 68,965 km2 (26,627 square miles) per day. The amount mentioned is well within the wide variation in melt from year to year.
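The arithmetic behind that melt-rate figure is easy to verify:

```python
# Back-of-envelope check of the quoted Arctic summer melt rate.
total_melt_km2 = 10_000_000  # ~10 million km2 of sea ice melts each summer
season_days = 145            # approximate length of the Arctic melt season
KM2_PER_SQMI = 2.589988      # 1 square mile = 2.589988 km2

rate_km2 = total_melt_km2 / season_days  # ~68,966 km2 per day
rate_sqmi = rate_km2 / KM2_PER_SQMI      # ~26,628 square miles per day
```

This reproduces the ~68,965 km2 (~26,627 square miles) per day quoted in the text, to within rounding.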

Fig. 2 provides a brief context to show the wider natural range of temperature over the last 10,000 years. It shows the temperature of the Northern Hemisphere derived from Greenland ice cores.

Fig. 2

The current temperature is on the right (red line). Some salient points that expose the lies and distortions:

* The world was warmer than today for 97 percent of the last 10,000 years, a period known variously as the Climatic Optimum, or more recently the Holocene Optimum. We have known about this warmer period for at least 75 years.

* The world was 2°C warmer than today 1000 years ago during the medieval warming. Remember, you are told that the world is going to warm by 2°C, and that is catastrophic.

* The world was 4°C warmer than today during the Minoan warming.

* We are told the amount and rate of temperature increase in the last 100 years (shown in red) is abnormal. Compare the slope with any of the previous increases.

* The green line indicates the larger trend and shows that Earth has cooled for approximately the last 7000 years.

CO2 does actually change over this period, but those changes follow the temperature. The global-warming proponents tell the public it is the opposite. As in all temperature changes, there is a logical explanation that does not include CO2. In this case, the longer trend fits what is called the Milankovitch Effect (ME). These are the collective changes caused by Sun/Earth relationships, including orbit, tilt, and precession of the equinox (Fig. 3).

Fig. 3

The Intergovernmental Panel on Climate Change (IPCC) does not include the ME in their computer models that are the source of predictions about future climate. No wonder their models are always wrong.

The existence of the ME explains, beyond lack of knowledge, why the public is susceptible to the natural/unnatural ploy. Most people think the Earth’s orbit round the Sun is a small, unchanging ellipse. Science knew this was incorrect years ago. Joseph Adhémar (1797-1862) proposed that the likely cause of climate change lay in the earth’s solar orbit.

James Croll expanded the idea and calculated orbital eccentricity effects on solar radiation for different latitudes over 3 million years, publishing the results in 1867. The primary cause of the orbital change is the gravitational pull of the planet Jupiter. It is a significant change (Fig. 4).

Fig. 4

The cycle is 100,000 years, but that is from minimum to maximum ellipse and back to the minimum. The solar energy currently received when the Earth is closest to the Sun (perihelion) varies from +3.5% to -3.5%. When furthest away (aphelion) 20,000 years ago, the difference was +8.5% and -8.5%.
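As a rough sanity check on those percentages: assuming only that solar flux falls off with the inverse square of distance, and taking the present-day orbital eccentricity as e ≈ 0.0167, the flux received at perihelion and aphelion differs from the mean-distance value by roughly +3.4% and -3.3% –– close to the ±3.5% figure quoted above:

```python
# Flux scales as 1/r^2; perihelion distance = a(1 - e), aphelion = a(1 + e).
e = 0.0167  # present-day orbital eccentricity of Earth

flux_perihelion = 1.0 / (1.0 - e) ** 2  # relative to flux at mean distance a
flux_aphelion = 1.0 / (1.0 + e) ** 2

pct_perihelion = (flux_perihelion - 1.0) * 100  # ~ +3.4%
pct_aphelion = (flux_aphelion - 1.0) * 100      # ~ -3.3%
```

A larger past eccentricity would widen this spread in the same inverse-square way, which is the mechanism behind the larger percentages claimed for 20,000 years ago.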

Today, 149 years later, this is little known to most people. The main reason is that it contradicts the philosophical basis of Western science, uniformitarianism. This is the idea that change is gradual over long periods of time. A quick look at the geologic, or any other natural, record shows it is false. However, it means people are easily persuaded that a change, especially sudden change, is unnatural.

People were vulnerable and therefore easily fooled. Worse, the deceivers have deliberately changed the record to enhance their deception. They created what is natural or normal. Watch the video by Tony Heller (aka Steve Goddard) in which he demonstrates the changes made to the instrumental temperature record, all deliberately designed to enhance warming. These are the people who brand those who question the science as deniers and criminals. This is why it is the greatest deception, but worse, a deliberate deception.

Fifth and last, what global-warming alarmists do (because they have no scientific facts to support their premise) is attack the climatologist and/or his or her credentials. This has recently even included lawsuits to try to prevent scientists from presenting real facts. Tim Ball, and his wife and children, for example, have received numerous death threats. Does this ever happen in your field of bioinformatics, Mario? I think not. THAT is the difference between the science of climatology and the field of political science and hysteria.

Posted in Center for Environmental Genetics | Comments Off on Manipulation of meteorological data by NOAA and NASA over the past 30-40 years – comments

Register now: For the 2nd SRP webinar on Adverse Outcome Pathways (AOPs), November 29, 1 pm EST

For anyone interested, the NIEHS Superfund Research Program is inviting you to join them for the second session of the “Risk e-Learning” webinar series on Adverse Outcome Pathways (AOPs). For anyone who missed the first session (“Introduction to the Adverse Outcome Pathway Framework”) and wishes to “catch up” before the second webinar on Nov 29th, you can watch this webinar here: EPA CLU-IN website. That first webinar [and a research example (attached), published by Pelkonen et al. –– on behalf of the EFSA WG EPI1 and its other members] were discussed on these GEITP pages Sept 19th and 22nd [see far below].

The webinars are free and open to the public.

From: SRP Risk e Learning [mailto:SRP-RISKELEARNING@LIST.NIH.GOV] On Behalf Of Carlin, Danielle (NIH/NIEHS) [E]
Sent: Thursday, November 09, 2017 9:56 AM
Subject: Register now: 2nd SRP webinar on Adverse Outcome Pathways (AOPs), November 29, 1 pm EST

Dear Colleagues,

The National Institute of Environmental Health Sciences (NIEHS) Superfund Research Program (SRP) invites you to join us for the second session of the Risk e-Learning webinar series on Adverse Outcome Pathways, which are structured ways to represent biological events leading to adverse health effects. The webinars are hosted on the U.S. Environmental Protection Agency (EPA) Contaminated Site Clean-Up Information (CLU-IN) website.

Session II – Assembling and Assessing Adverse Outcome Pathway (AOP) Information will be held Wednesday, November 29, 1:00 – 3:00 pm EST. To register, visit EPA’s CLU-IN Training & Events webpage. In the second session, presenters will discuss the development of AOPs and how they may be used to support hazard and risk assessment. I will moderate the session.

Carole Yauk, Ph.D., head of the Genomics Laboratory in the Environmental Health Science and Research Bureau at Health Canada, will briefly review common AOP development principles and will present a case study to walk through the development of one AOP using the AOP wiki.
Ed Perkins, Ph.D., senior research scientist at the U.S. Army Corps of Engineers Research and Development Center, will discuss efforts to merge the AOP’s simple framework for linking effects to a regulated outcome with more biological pathways and measurements, such as -omics, to support hazard and risk assessment.
Justin Teeguarden, Ph.D., chief exposure scientist at the Pacific Northwest National Laboratory and leader of the Oregon State SRP Center Research Translation Core and Texas A&M SRP Center Exposure Science Core, will introduce similar frameworks for organizing exposure information (like the aggregate exposure pathway) and discuss how they can provide critical information about the magnitude of stress and key information about how environmental concentrations can be related to human exposures.

Posted in Center for Environmental Genetics | Comments Off on Register now: For the 2nd SRP webinar on Adverse Outcome Pathways (AOPs), November 29, 1 pm EST

The unrelenting search for historical truth

The attached fascinating article summarizes the journey that Ed Calabrese has undertaken over most of the past decade, leading to discovery of the discrepancies in the Linear No-Threshold (LNT) Model –– for which Hermann Müller was awarded the 1946 Nobel Prize in Physiology or Medicine –– even though it was clear that, before the Nobel lecture, Müller had known of strong data that seriously challenged the LNT Model. The story tells how Curt Stern, after alerting Müller in September to expect it, sent him on 6 Nov 1946 the manuscript that he and Ernst Caspari had written presenting their challenging data. Caspari and Stern had found that irradiation of living organisms exhibited a threshold effect on mutations, not an LNT effect. Müller acknowledged receipt of the Caspari manuscript and offered preliminary comments on it in a 12 Nov 1946 letter to Stern; in that letter, Müller acknowledged that these findings seriously challenged the LNT model, that the study needed to be replicated, that Stern needed to obtain the funds to do so, that Caspari was a very competent researcher, and that Müller could not dismiss the study on grounds of inexperience or any other reason.

This information was troubling. If any one of us were in Müller’s shoes (about to present the Nobel Lecture), would we ever admit –– after seeing the Caspari study findings –– that there was no possibility the LNT Model was biologically plausible? Would we acknowledge (in our Nobel lecture) that the shape of the dose-response curve in the low-dose range remained a viable research question that still needed to be resolved? Yet, although Müller acted like a scientist in his communications with Stern, in his public demeanor he was deceitful and highly ideological –– everything a reputable scientist should not be. To act this way during the most significant moment of his professional life revealed important character traits in Müller, including “those of dishonesty, risk-taking, manipulation, and arrogance.”

Did Müller ever have any new data or insights that would explain his rejection of Caspari’s threshold conclusion? To the contrary, a detailed 7-page letter from Müller to Stern (dated 14 Jan 1947) re-affirmed the 12 Nov 1946 letter. With this now in hand, Calabrese came to the firm, but unsettling, conclusion that Müller was deliberately deceptive in his Nobel Lecture and used this opportunity to achieve a long-dreamed-of goal of having LNT adopted as the default model for cancer risk assessment.

This was his chance and, apparently, the ends justified the means –– again, a rationalization that scientists should never accept. The historical record shows to what lengths Stern and Müller, and others under their influence (or spell), would go to twist the truth to advance their emotional, subjective ideology. As these GEITP pages have stated before, use of the LNT Model for the past 60+ years in lab animal cancer studies is the second biggest case of fraud resulting in wasted time, effort and billions of US dollars in research money. The largest case of fraud (resulting in wasted time, effort and trillions of US dollars in research money) is the global warming hypothesis since the early 1980s –– perpetuated by the United Nations’ Intergovernmental Panel on Climate Change (IPCC) and supported by NASA and NOAA employees who continue to “adjust” the raw data to fit their hypothesis. 🙁

Acad Quest 2o17; doi 10.1007/s12129-017-9660-6

Posted in Center for Environmental Genetics | Comments Off on The unrelenting search for historical truth

The dynamics of molecular evolution during 60,000 generations of the Escherichia coli (E. coli) bacterium

After the recent sharing of a GEITP report about a bird’s beak “evolving” to become longer –– to accommodate decades of birdfeeders in the gardens of England –– here is another example of “Evolution in Action.” The Escherichia coli long-term evolution experiment (LTEE) is the longest-running bacterial evolution experiment, comprising 12 replicate populations of E. coli serially propagated for more than 60,000 generations. Michael Desai, Richard Lenski et al. now report whole-genome sequencing (WGS) at 500-generation intervals over the course of the 60,000 generations from the LTEE. Their analyses reveal a complex and dynamic evolutionary process of long-term bacterial adaptation in this controlled environment, and include findings on clonal interference, “genetic drift,” and shifting targets of selection.

Evolutionary adaptation is driven by the accumulation of mutations, but the temporal dynamics of this process are difficult to observe directly. Recently, time-resolved sequencing of microbial evolution experiments, viral and bacterial infections, and cancers has begun to illuminate this process. These studies reveal complex dynamics, characterized by rapid adaptation, competition between beneficial mutations, diminishing-returns epistasis (gene-gene interactions), and extensive genetic parallelism. These forces can alter patterns of polymorphism and influence which mutations ultimately become fixed. However, it is unclear whether these dynamics are general, or, instead, reflect the short time-scales and novel environmental conditions of previous studies.

To address this question, authors [see attached report] turned to an experiment with the longest frozen ‘fossil record’: the E. coli LTEE. The twelve LTEE populations have been serially propagated in the same type of medium for more than 60,000 generations, with samples preserved every 500 generations. Previous work has shown that the competitive fitness of each population continues to increase through 60,000 generations, despite a decline in the rate of improvement. The genome sequences of evolved clones have shown that these fitness gains are accompanied by steady accumulation of mutations. Parallel genetic changes across replicate populations suggest that there is a common pool of adaptive mutations that has yet to be exhausted in any single population.

The outcomes of evolution are determined by a stochastic (randomly determined) dynamical process that governs how mutations arise and how they spread through a population.
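The flavor of such a stochastic process can be conveyed with a minimal Wright–Fisher sketch (an illustration of the general technique only, not the authors’ model): a fixed-size asexual population in which beneficial mutations arise at rate mu per individual per generation, each adding a selection coefficient s. When N·mu is not tiny, several beneficial lineages segregate at once and compete (clonal interference), while drift can still eliminate beneficial mutants before they spread. All parameter values below are assumptions chosen for illustration.

```python
# Minimal Wright-Fisher simulation with beneficial mutations (illustrative sketch).
import random

def wright_fisher(n=1000, mu=1e-3, s=0.05, generations=500, seed=1):
    """Return final population as {number of beneficial mutations: count}."""
    random.seed(seed)
    pop = {0: n}  # everyone starts with zero beneficial mutations
    for _ in range(generations):
        # next generation: fitness-weighted multinomial resampling of parents
        classes = list(pop)
        weights = [pop[k] * (1 + s) ** k for k in classes]
        offspring = random.choices(classes, weights=weights, k=n)
        new_pop = {}
        for k in offspring:
            # each offspring gains one new beneficial mutation with probability mu
            if random.random() < mu:
                k += 1
            new_pop[k] = new_pop.get(k, 0) + 1
        pop = new_pop
    return pop

final = wright_fisher()
mean_muts = sum(k * c for k, c in final.items()) / sum(final.values())
print(f"mean beneficial mutations per individual after 500 generations: {mean_muts:.2f}")
```

Because multiple mutation classes coexist in `final` at any one time, the sketch reproduces, in miniature, the competition between simultaneously segregating beneficial variants that the LTEE sequencing data reveal at far larger scale.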

Although the rate of fitness gain –– during these 60,000 generations –– declines over time, molecular evolution is characterized by signatures of rapid adaptation throughout the duration of the experiment, with multiple beneficial variants simultaneously competing for dominance in each population. Interactions between ecological and evolutionary processes play an important role, as long-term quasi-stable coexistence arises spontaneously in most populations, and evolution continues within each clade. Authors also provide evidence that the targets of natural selection change over time, as epistasis and historical contingency alter the strength of selection on different genes. Together, these incredible results show that long-term adaptation to a constant environment can be a more complex and dynamic process than is often assumed!!

Nature 2 Nov 2o17; 551: 45–50

Posted in Center for Environmental Genetics | Comments Off on The dynamics of molecular evolution during 60,000 generations of the Escherichia coli (E. coli) bacterium