
Dynamic interplay visualized — between enhancer/promoter regions and gene activity

Transcriptional enhancers are short DNA segments [~5 base pairs (bp) to 25–30 bp] that control gene expression. Enhancers can be nearby –– “upstream” or “downstream” of the gene; inside the gene (in an intron, which is transcribed but spliced out of the final messenger RNA before that RNA is translated into protein); distant (many thousands of bp of DNA away); or even on a different chromosome from the gene being controlled. It has been at least 35 years since the initial discovery of enhancer modules, and it has always amazed me that the idea of “looping” nearby (cis) or distal (trans) enhancer segments –– so as to bring them into contact and interaction with the promoter (which sits very near the upstream end of the gene) –– could be so extensively inferred without direct visualization.

A long-standing question in the field remains: how does the physical interaction between enhancers and promoters impact gene expression? Authors [see attached article & editorial] describe an imaging approach by which long-range interactions between promoters and distal enhancers, and their consequences on transcriptional output, can be monitored directly. Transcriptional enhancers in multicellular eukaryotes greatly outnumber genes. In the case of humans, estimates for the total number of enhancers exceed a million elements, compared with the estimated ~21,000 genes. Thus, each gene and its promoter is, on average, under the control of more than ten regulatory elements.

Authors combined genome-editing and multi-color live imaging to visualize simultaneously (and physically) enhancer–promoter interaction and transcription at the single-cell level in Drosophila (fly) embryos. By examining transcriptional activation of a reporter mini-gene by the endogenous even-skipped enhancers –– which are located 150,000 bp away –– authors could identify three distinct topological conformational states and could measure their transition kinetics. They demonstrated that sustained proximity of the enhancer to its target is required for activation. Transcription, in turn, affects the three-dimensional topology –– it enhances the temporal stability of the proximal conformation and is then associated with further spatial compaction. Moreover, the facilitated long-range activation results in transcriptional competition at the genetic locus, causing corresponding developmental defects. This novel approach offers quantitative insight into the spatial and temporal determinants of long-range gene regulation and their implications for cellular fates. Awesome. Simply awesome.


Nat Genet Oct 2018; 50: 1296–1303 [article] & pp 1205–1206 [News’N’Views]

Posted in Center for Environmental Genetics | Comments Off on Dynamic interplay visualized — between enhancer/promoter regions and gene activity

Genome-wide polygenic scores for common diseases identify individuals with risk equivalent to monogenic mutations

Genome-wide association studies (GWAS) began more than 12 years ago, with the hope that clinical geneticists would soon be able to [a] predict genetic risk of complex diseases, and [b] find new pathways for which new drugs might be developed (to prevent or treat that complex disease). In these past 12 years, GWAS findings have certainly identified new pathways for drug development. However, unexpectedly (for many investigators), prediction of risk has been found to be virtually impossible.

Hundreds of discovered single-nucleotide variant (SNV) associations have been found to contribute only 10% to 25% of the heritability for any particular complex disease. Looking at the “simple” quantitative trait of height, for example, a 2014 study used genome-wide data from 253,000 individuals and identified 697 variants having genome-wide statistical significance –– yet all SNVs together could explain only ~20% of the heritability for adult height. It has been suggested that, even if we studied the entire world population of 7.7 billion as a cohort for height, all significant SNVs combined would still not total 100% of the heritability for that trait.

These GEITP pages have recently discussed (once on Sept 30th, another on Nov 5th) a new approach to genetic risk of a complex phenotype, which is called “genome-wide polygenic score” (GPS). Authors [see attached article & editorial] demonstrate further that GWAS-informed whole-genome profiling can quantify individual disease risks in clinically significant ways –– potentially leading to “the long-awaited use of genetic profiling” in routine healthcare practices.

The [attached] study calculated GPSs for five complex diseases [coronary artery disease (CAD); atrial fibrillation (AFIB); type-2 diabetes (T2D); inflammatory bowel disease (IBD); and breast cancer (BrCA)] in >300,000 individuals. The GPSs were created using sophisticated algorithms that integrate per-variant estimates of disease risk gleaned from GWAS. Disease-classification accuracy was carefully validated across two cohorts. Surprisingly, authors found that, for many individuals, GPS-associated risks were as high as those conferred by SNVs underlying rare monogenic forms of disease that are already routinely considered in clinical settings.

Authors further suggest that preventive interventions (e.g. lifestyle changes, and statin use) could be deployed, or suggested, to high-risk individuals, right now, on the basis of their GPSs. In addition, they found that ~20% of all individuals studied had >3-fold risk for at least one of the five diseases, and that the absolute number of individuals deemed at high risk from the GPS for CAD was 20-fold greater than the number expected to be identified via established monogenic variant screens. Authors proposed that it is time to contemplate the inclusion of polygenic risk prediction in clinical care, and to talk about relevant issues pertaining to GPS for many complex genetic diseases.
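The score construction described above –– integrating per-variant GWAS risk estimates into a single number per individual, then flagging the upper tail of the distribution –– can be sketched minimally in Python. All numbers below are synthetic illustrations, not the paper’s data; the 1,000-variant panel and the 95th-percentile cutoff are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic cohort -- NOT from the paper: 10,000 individuals, 1,000 variants.
n_people, n_variants = 10_000, 1_000
genotypes = rng.binomial(2, 0.3, size=(n_people, n_variants))  # 0/1/2 risk-allele counts
betas = rng.normal(0, 0.05, size=n_variants)                   # per-variant weights from GWAS

# A genome-wide polygenic score is, at its core, a weighted sum of risk-allele counts.
gps = genotypes @ betas

# Flag the top tail of the score distribution as "high polygenic risk,"
# analogous to the extreme-percentile comparisons in the study.
cutoff = np.percentile(gps, 95)
high_risk = gps >= cutoff
print(f"{high_risk.mean():.1%} of individuals flagged as high-risk")  # ≈5%
```

In the actual study the weighting algorithms are far more sophisticated (accounting for linkage disequilibrium among variants, for example), but the final score per person remains a weighted sum of this kind.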

Nat Genet Oct 2018; 50: 1219–1224 [article] & pp 1210–1211 [News’N’Views]

COMMENT: Dan, just as a comment on this GEITP email (which I continue to enjoy receiving): long before GWAS, we knew we could not predict complex-trait risks explicitly from genetics, because monozygotic (identical) twin pairs typically are concordant (i.e. both members of the pair showing the same trait) only 50% of the time, or less. In our own area (cleft lip and palate), concordance is about 50%, compared with ~5% for dizygotic (fraternal) twins. So, we have always known that it is “not all genetics” –– nor, indeed, all environmental effects –– unless the environment in the uterus is so variable that stochastic (randomly determined) effects must be important.

The genome-wide polygenic score (GPS) method is nice, because it provides some degree of “risk assessment,” and we need to provide our patients with enough of a “mathematical appreciation” that they might understand what their GPS means for any particular disease. However, exact predictions are fictional –– which still does not diminish the use of GWAS, etc., for druggable-target discovery, new biology, and even some degree of risk determination.

COMMENT: Dear Dr. Nebert, I would say that this publication is an “important article.” It has been highlighted in many news reports and scientific conferences –– including last month’s American Society of Human Genetics meeting in San Diego. I think one of the major reasons is that this study showed (some) translational value (‘translational’ = multi-disciplinary, highly collaborative, ‘bench-to-bedside’ approach) and clinical usefulness of large-scale genomic studies. The research community and the general public “need” this type of “good news,” to fuel their hopes of “personalized medicine.”

I agree that the paper showed some value of genomic prediction using polygenic scores; however, it is mainly a “numbers game.” If you look at the overall performance –– evaluated by the area-under-the-curve (AUC) method –– the best score was an AUC of ~0.80, which barely reaches the minimum threshold for a test to be (statistically) useful in clinical predictive assessment. This is why the authors emphasized only the extreme cases (i.e. those with very high genetic risk) when they compared those predictive values with those of individuals having “monogenic risk.”

In my humble opinion, this is a study that has had a large impact (on the field of clinical genomics) –– much more so than any “real value.” For example, if you take a close look at Supplementary Table 9 [pasted below], their data on five complex diseases suggest that this GPS Model can explain only a very small fraction of the total variance.
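The commenter’s AUC point can be made concrete with a small sketch (synthetic scores, not the paper’s data): AUC is the probability that a randomly chosen case scores higher than a randomly chosen control, so ~0.50 is chance, and even ~0.80 still leaves substantial overlap between cases and controls. The 1.2-standard-deviation shift below is chosen only so that the resulting AUC lands near 0.80:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic polygenic scores: cases shifted upward relative to controls.
controls = rng.normal(0.0, 1.0, 5_000)
cases = rng.normal(1.2, 1.0, 5_000)

# AUC = probability that a random case outscores a random control,
# estimated directly over all case-control pairs (equivalent to Mann-Whitney U / (n1*n2)).
auc = (cases[:, None] > controls[None, :]).mean()
print(f"AUC ~ {auc:.2f}")  # close to 0.80
```

Even at this AUC, the bulk of the two score distributions overlaps –– which is why discrimination is convincing only in the extreme tails of the score, exactly as the commenter notes.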

Posted in Center for Environmental Genetics | Comments Off on Genome-wide polygenic scores for common diseases identify individuals with risk equivalent to monogenic mutations

The American Journal of Human Genetics: November 1, 2018 (Volume 103, Issue 5)

Although everything is a gradient, complex genetic diseases (e.g. obesity, type-2 diabetes, schizophrenia, and cancer) and quantitative traits (e.g. height, body mass index, and I.Q.) usually differ from monogenic traits and diseases (e.g. phenylketonuria, sickle-cell anemia, and tyrosinemia). Traits (phenotypes) such as drug efficacy and response to environmental toxicants likewise exhibit a gradient; many are similar to complex diseases, whereas some appear more like monogenic (caused by single gene) or oligogenic (caused by a small number of genes) traits. Moreover, all these phenotypes are influenced/modified by variations in multiple other genes and environmental factors –– explaining why (for example) “onset” and “severity” –– in two patients having the same single-nucleotide variant (SNV) –– can be so different.

Genome-wide association studies (GWAS) since ~2006 have successfully identified thousands of genomic regions associated with hundreds of complex traits and diseases. GWAS publications typically report “association results” as a list of loci –– distinguished from one another for counting purposes, and labeled with an SNV and one or more (nearest-neighbor) gene names as signposts. The gene names make referring to loci easier than using genome positions or variant labels, although the genes named in GWAS reports have varying amounts of evidence supporting any role or function contributing to that trait or disease. Early GWAS were performed with less densely spaced sets of SNVs, so the reported variant might not have been the strongest associated variant at a locus. More recent GWAS, and GWAS meta-analyses, are much larger, with sample sizes now approaching one million for some traits; and, although GWAS have often been performed in a single population, a growing number of trans-ancestry studies are able to combine data across populations. For most identified loci, however, the molecular and biological mechanisms remain to be determined.

Authors [see attached excellent review] analyze thoroughly the emerging complexities of molecular mechanisms at GWAS loci. Authors ask three major questions: [a] How many association signals exist at a locus? [b] What are the candidate causal variant(s)? [c] What are the target gene(s)? In each section, they provide historical context to the question, methods available for addressing it, and evidence and observations from examples of GWAS loci that have been mechanistically characterized to date. The complexity of mechanisms at GWAS loci — (including multiple signals, multiple variants, and/or multiple genes) — is discussed. Identifying mechanisms responsible for GWAS loci requires an accumulation of consistent evidence for the genes and variants that influence the trait or disease in humans. Authors conclude with future directions for researchers to consider in experimental design and interpretation of GWAS locus mechanisms.

Am J Hum Genet Nov 2018; 103: 637–653

Posted in Center for Environmental Genetics | Comments Off on The American Journal of Human Genetics: November 1, 2018 (Volume 103, Issue 5)

Last universal common ancestor (LUCA) between ancient Earth chemistry and the onset of genetics

There was a time when there was no life on Earth (i.e. only the environment). And there was a time when there were DNA-inheriting cells (i.e. there were gene-environment interactions). Transitioning from the former to the latter is difficult to imagine. Earth is 4.54 billion years old. By 4.2 to 4.3 billion years ago, Earth had cooled sufficiently so that there was liquid water; subsequently, hydrothermal convection currents started sequestering water to the primordial crust and mantle. First signs of life appear as carbon isotope signatures in rocks 3.95 billion years ago. Thus –– somewhere on the ocean-covered early Earth, and in a narrow window of time of “only” ~200 million years –– the first cells came into existence.

Because the genetic code and amino acid chirality (asymmetrical mirror images, of a chiral molecule or ion, are called enantiomers or optical isomers, and cannot be superimposed on one another) are universal, all modern life forms ultimately trace back to that phase of evolution –– which was the time during which the last universal common ancestor (LUCA) of all cells lived. [LUCA is a theoretical construct, which might or might not have been something we today would call an organism; but it helps to bridge the conceptual gap between rocks and water on the early Earth and ideas about the nature of the first cells.] Opinions about LUCA have spanned many decades. These concepts are traditionally linked to our ideas about the overall tree of life and where “its root” might lie. However, phylogenetic trees are temporary and of course undergo change as new data and new methods of phylogenetic inference emerge.

The familiar three-domain tree of life based on ribosomal RNA [Carl Woese, 1990] depicted LUCA as the last common ancestor of archaea, bacteria, and eukaryotes. But we were confronted with two recurrent and fundamental problems: 1) How are the three domains related to one another (so that gene-presence patterns would truly trace genes to LUCA, rather than to another, evolutionarily more derived, branch)? 2) Does presence of a gene in two (or three) domains indicate that it was present in the common ancestor of those domains, or could it have reached its current distribution via a later origin in one domain, followed by horizontal gene transfer (movement of genetic material between unicellular and/or multicellular organisms other than by transmission of DNA from parent to offspring) from one domain to another?

With the availability now of so many genome sequences, authors [see attached paper] ask what genes are ancient –– by virtue of their phylogeny –– rather than by virtue of being universal. This approach, undertaken recently, leads to a different view of LUCA than we have had in the past. In this insightful review, authors argue that this approach (ancient genes as a consequence of their phylogeny) fits better with the harsh geochemical setting of early Earth and resembles more the biology of prokaryotes that today inhabit Earth’s crust.

Did the origin of genetics hinge upon hydrothermal chemical conditions that gave rise to the first biochemical pathways that, in turn, gave rise to the first cells? Genes that trace to LUCA, ancient biochemical pathways, and aqueous reactions of CO2 with iron and water –– all seem to converge on similar sets of simple, exergonic (energy-releasing) chemical reactions –– such as those that occur spontaneously in hydrothermal vents. From the standpoint of genes, physiology, laboratory chemistry, and geochemistry, it is therefore looking more and more as if LUCA was rooted in rocks and hydrothermal vents –– i.e. life was based on CO2, iron, and water.


PLoS Genet Aug 2018; 14: e1007518

Posted in Center for Environmental Genetics | Comments Off on Last universal common ancestor (LUCA) between ancient Earth chemistry and the onset of genetics

OTX2 restricts the number of primordial germ cells “allowed to enter” into the mouse germline

Primordial germ cells (i.e. the precursors of eggs and sperm) are established very early in the development of many multicellular organisms; the reasons for this are unknown. This process of establishing the germline involves both: [a] preventing a nonreproductive-cell (somatic-cell) fate; and [b] activating a cellular state known as pluripotency –– the ability to give rise to the many different cell types in the body. Understanding germline formation has focused mainly on identifying proteins that specify germline fate, but comparatively little is known about why somatic cells do not acquire such a fate.

In many animals, specification of the germline is inherited: passage of molecular components in the cytoplasm determines which cells will form primordial germ cells. However, some animals –– including salamanders, crickets, mice, and perhaps humans –– take a different approach. In the early mouse embryo, designation of the germline occurs as a result of cells “simply being in the right place at the right time,” rather than by inheritance of cytoplasmic determinants. In this inductive fate-determination scenario, cells of a cylindrically shaped region among the pluripotent cells, known as the epiblast, are coaxed into adopting a primordial-germ-cell fate, which is regulated by signals derived from supporting extra-embryonic tissues (cells next to, but not part of, the embryo).

In the mouse, the induction of primordial germ cells from the post-implantation epiblast requires BMP4-signaling to occur in “prospective” primordial germ cells, combined with the intrinsic action of primordial-germ-cell transcription factors. Authors [see attached article] show that the formation of primordial germ cells in mice can be blocked by a protein called OTX2 (orthodenticle homeobox-2). Down-regulation of the mouse Otx2 gene precedes initiation of the primordial-germ-cell program –– both in cultured cells as well as in the intact animal. Deletion of Otx2 in cultured cells markedly increases the efficiency of primordial-germ-cell-like cell differentiation and prolongs the period of primordial-germ-cell competence.

In the absence of Otx2 activity, differentiation of primordial-germ-like cells becomes independent of the (otherwise essential) cytokine signals; in this case, germline entry is initiated, even in the absence of the primordial-germ-cell transcription factor BLIMP1. Deletion of the Otx2 gene in the animal increases primordial-germ-cell numbers. These findings indicate that OTX2 functions as a repressor –– upstream of primordial-germ-cell transcription factors, acting as a roadblock to limit “entry of epiblast cells to the germline” –– to a small window in space and time, thereby ensuring that only a small number of germline cells separate from the somatic cells of the early embryo.


Nature 25 Oct 2018; 562: 595–599 [article] and 497–498 [editorial]

Posted in Center for Environmental Genetics | Comments Off on OTX2 restricts the number of primordial germ cells “allowed to enter” into the mouse germline

Vitamin K2 Steps into the Spotlight for Bone and Heart Health

This article recently appeared online –– and I believe many will be interested in the advances being made in our understanding of vitamin K nutrition and bone and cardiac health.


Vitamin K2 Steps into the Spotlight for Bone and Heart Health
John Watson; Reviewed by: Anya Romanowski, MS, RD

October 10, 2018


Since its discovery nearly 90 years ago, vitamin K has enjoyed the uncomplicated status of an essential nutrient, respected but somewhat overlooked. Guidelines advised that we get our daily recommended intake of vitamin K (120 µg for men and 90 µg for women[1]), but most likely made no mention that it exists in two variants, K1 and K2.

Beginning in the 21st century, however, researchers started closely scrutinizing the structural differences between K1 and K2, which before had been considered largely irrelevant.[2] Their work has indicated that K2 may deserve special consideration as a treatment for osteoporosis and cardiovascular disease.

How Do Vitamins K1 and K2 Differ?
The umbrella term “vitamin K” actually describes a family of fat-soluble compounds. The body has limited ability to store the vitamin and amounts are rapidly depleted without regular dietary intake.

Vitamin K1, also known as phylloquinone, is primarily found in green leafy vegetables. K1 accounts for approximately 90% of daily vitamin K consumption in the United States.[3]

Vitamin K2, also known as menaquinone (MK), is primarily bacterial in origin. K2 is mostly encountered in fermented foods, meats, and dairy products.[4] It is further subgrouped based on the length of its side chains, from MK-4 to MK-13.[4,5] For example, meat products typically include the MK-4 variants, whereas the traditional Japanese vegetarian dish natto, made from fermented soybeans, contains MK-7, which provides the highest known vitamin K activity. K2 can also be produced by the human gut’s microbiome, though the absorption and transport of K2 produced in this manner is less understood.[1]

K2 comes to us primarily through products derived from animals, which can synthesize it from the K1 they ingest by eating grass. As agricultural practices have shifted animals away from grassy pastures toward grains, K2 levels have decreased.[2] Because K2 is usually present in only modest amounts, and even less so in low-fat and lean animal products, many Western diets are inadequate providers of a nutrient that researchers consider increasingly important.

What Are K2’s Proposed Benefits?
Although K2’s effect has been studied across a variety of conditions, including cancer and arthritis, to date the strongest evidence exists to support its use in osteoporosis and cardiovascular health.[5]
Bone Health
Vitamin K’s bone-building reputation is well earned, as it is necessary for activating proteins secreted by osteoblasts.[2] K2 draws calcium into the bone matrix and can inhibit bone resorption when administered with vitamin D3.[2] The MK-7 form of vitamin K2 has proven particularly adept in this process.[6]

Supplemental K2 has been associated with significant reductions (approximately 25%-80%) in fracture risk when used alone or combined with vitamin D and calcium,[7,8] as well as with maintenance of bone density in osteoporotic patients.[9,10] K1 supplementation has shown comparatively less benefit for such outcomes.[11]

A 2017 systematic literature review recommended considering K2 alongside vitamin D and calcium as an adjunct osteoporosis treatment “rivaling bisphosphonate therapy without toxicity.”[11]
Cardiovascular Disease
K2 activates matrix Gla protein (MGP), which keeps calcium deposits from forming on vessel walls. Research has shown that adequate K2 intake generally frees up calcium for its more beneficial roles, whereas K2 deficiencies will lead to a buildup of calcifications.[5]

This simple cause-and-effect relationship was on display in the 2004 prospective population-based Rotterdam Study, which included 4,807 individuals with no history of myocardial infarction.[12] After following the cohort for up to 7 years, researchers reported significant risk reductions in coronary heart disease, all-cause mortality, and severe aortic calcification among those with the highest K2 intake, compared with those with the lowest. In comparison, K1 intake had no discernible protective benefits.

A cohort study of over 16,000 women free of cardiovascular disease also reported a strong correlation between increased K2 intake and reduced coronary events, but not for K1.[13]

Remaining Questions
Supplemental K2 is now standard care for treating osteoporosis in Japan, and has been gaining attention in Western cultures as well.[11] Although the promising results described above merit enthusiasm, important questions remain regarding its use.

Studies comparing relatively lower doses of MK-7 supplementation with placebo in early menopausal[14] and post-menopausal women[15] produced conflicting results, with the former experiencing no differences in bone loss at 1 year but the latter seeing less age-related decline in bone content and density at the femoral neck and lumbar spine at 3 years. This raises questions regarding the optimal dose range of K2 for various populations, the duration of follow-up needed to determine its effect, and whether supplements can provide nutrient levels as adequate as dietary intake.

There have been several reports of elevated risk for cardiovascular disease among older adults and postmenopausal women taking calcium supplements.[16,17,18,19,20,21] However, this link has been questioned by other recent studies,[22,23] with clinical guidelines[23,24] suggesting that any risk can be mitigated if calcium supplements are taken within tolerable ranges (eg, not above the range of 2000-2500 mg/d). As this link continues to be investigated, the possible role of K2 supplements in counterbalancing any such risk is highly worthy of a robust clinical analysis.

Certain varieties of K2 supplements, such as MK-7, have also been shown to interfere with anticoagulation therapy, whereas others like MK-4 carry no risk for hyper-coagulation even at relatively high doses.[11] Physician awareness of the different properties of various K2 supplements is therefore crucial in patients taking anticoagulation therapy.

Although the evidence on K2 is preliminary and sometimes contradictory, there is nonetheless valid reason to be excited about the potential of this modest intervention. If nothing else, this collection of studies indicates that, although vitamin K remains essential, it is in no way monolithic.


Karen Ahmad | Registered Nurse (RN) 1 hour ago
Several years ago I read that the “new” digital mammography actually did a decent job of visualizing calcium deposits in the aorta and coronary arteries. I made some effort to see if someone (cardiologist? radiologist?) might be able to look at my digital mammography…but no one seemed to know much about it. Now that some time has passed and more of these digital studies are being done, it would be nice to see a study on the use of digital mammography to screen for vascular calcium deposits.

Dr. William Blanchet | Internal Medicine 23 minutes ago
The presence of arterial calcium on mammography is associated with a significant increased risk of coronary events. The presence of calcification on mammography should be an indication for doing coronary calcium imaging.
Caroline Levy | Medical Student 1 hour ago
As a health consultant I see the benefit in both, and both should be in our diet. Now, at 80, I have no osteoporosis, no heart problems, very good teeth, good muscles, etc. I had a full physical 3 months ago and everything was great. Variety is also essential to good health, but cutting back on wheat and sugary products, and boxed cereals, is essential in the USA, as nobody’s body is made to survive on poor but sickening foods. Fermented foods are very good, but most people today do not have the time to make them at home, and locating these foods is somewhat difficult. We are made physically to eat a variety of foods for good health. I have noted some doctors do not allow K2 at all for some heart patients, but I believe they also do not know much about real nutrition, as it is not one of their studies.
Barbara Banfield | Registered Nurse (RN) 2 hours ago
I have been taking a high-dose K2 supplement (5 mg MK-4) along with vitamin D3 for about 5 years now. My fingernails became much stronger and my tooth enamel became superslick and shiny. “Watch spots” of weakening enamel on my teeth also cleared up, and I have had no cavities since starting the K2. Occasionally I run out of the K2 and do not buy more for a few weeks. During these times, I quickly notice my fingernails becoming softer.
As a post-menopausal woman on prednisone for an autoimmune condition, with a higher risk for osteoporosis, my bones are slightly osteopenic but holding steady per DEXA scans. After caring for patients who had mandibulectomies for osteonecrosis, I will never take a bisphosphonate, so this is a godsend.
Dr. Justin Baldwin | Internal Medicine 2 hours ago
Nice to see this article. I hope Medscape will increase its coverage of nutritional and other research oriented toward prevention.

Dr. Luis Todd | Internal Medicine 3 hours ago
If you are taking clopidogrel, is it still indicated?
Dr. William Blanchet | Internal Medicine 21 minutes ago
If you are taking anti-platelet drugs, you probably have vascular disease and should be on K2 — unless you have a coagulopathy.
Dr. Paolo Bini | Rheumatology 5 hours ago
In my practice I was always struck by chest X-rays from older patients with osteoporosis showing translucent spine bones and very radiopaque, diffuse calcifications of the aorta and other vessels; I did wonder whether calcium had moved from bone to vessels. This paper on vitamin K2 physiology may explain the reason: owing to a nutritional deficit of vitamin K2, there is loss of function in preventing parietal calcium deposition in vessels, and loss of function in promoting matrix calcium deposition in bones. It would be interesting to ascertain whether such a radiologic pattern could substitute for blood measurement in diagnosing vitamin K2 deficiency (although clearly at a late stage of disease).

Kerre Willsher | Registered Nurse (RN) 7 hours ago
Have any studies been done on bone health to compare populations since Vitamin K2 commenced to be routinely given to neonates?
Vit K2 has been routinely given to neonates in Australia since the 1960s. I was a midwife.
––Kerre Willsher, Whyalla, South Australia
Bao-Anh Nguyen-Khoa | Pharmacist Oct 16, 2018
Please note that population-based studies, such as Rotterdam, cannot demonstrate a cause-effect relationship, as stated by the author.

George MacDonald RN | Registered Nurse (RN) 1 hour ago
True, but neither can watching people jump out of airplanes without parachutes.
Mark Seaman | Other Healthcare Provider Oct 14, 2018
The article is generally excellent. It does mention that certain varieties of K2 supplements, such as MK-7, have been shown to interfere with anticoagulation therapy. I believe that is a positive. I’m on warfarin and use a daily K2 MK-7 pill to stabilise my INR. The K2 overwhelms the variation in Vit K intake in my diet and consequently I can remain in the right INR range (2-3) about 99% of the time.

Mary Beth Horst | Nurse Practitioner (NP) 4 hours ago
@Mark Seaman Thank you for sharing this, I have believed for years that we should not restrict patients from foods that contain either K1 or K2, but rather have them eat or supplement these consistently and adjust warfarin dose accordingly. How crazy is it that we tell patients they can’t eat greens?
Lisa Abermoske | Pharmacist 2 hours ago
@Mary Beth Horst — where I practice (Madison, WI) the common practice is always to counsel patients to eat consistent amounts of leafy dark greens, cranberry juice, or other forms of Vit K – never to avoid. That would be extreme I would think.
Dr. Catherine Rice | Anesthesiology 2 hours ago
I test my INR daily and adjust my dose of Coumadin accordingly. Buy an INR test device and eat what you want.
Dr. JOSÉ RAYMOND HERRERA | Endocrinology, Metabolism Oct 12, 2018
It has to be stressed that vitamin K1 and vitamin K2 are completely different animals, and studies should not mix them up. Also, it looks like the «French paradox» is more related to fermented, strong cheeses (rich in MK-9) than to the traditional Mediterranean diet (olive oil, tomatoes, red wine) –– which is not bad in itself, but apparently works even better alongside those cheeses.
Dr. Joseph Turcillo | Internal Medicine Oct 12, 2018
Excellent information! Keep it coming! Thank you. J Turcillo MD FACP

Posted in Center for Environmental Genetics | Comments Off on Vitamin K2 Steps into the Spotlight for Bone and Heart Health

The Next Big Thing in Health is Your Exposome and KEEPING IN MIND AVOGADRO’S NUMBER

The article [below] just appeared online. Many different feelings surfaced when I read it. First, “exposome” is yet another catchy buzzword for what many environmental toxicologists have been doing for decades (without having this cute name). But ever since “genome, genomics” began ~1990, everything must rhyme (e.g. at this web site, you can see a list of 44 such ‘research fields’).

Second, as stated by Michael Snyder in this article, “A limitation of our study is we don’t know the absolute amounts of exposures; we only know relative amounts.” This concept reminds us of the Linear No-Threshold (LNT) Model that has adorned these GEITP pages for much of the last decade. Paracelsus (1493-1541) summed it up best: “Alle Dinge sind Gift, und nichts ist ohne Gift; allein die Dosis macht’s, daß ein Ding kein Gift sei” –– roughly translated, “All things are poison, and nothing is without poison; the dose alone makes a thing not a poison.”

In 1956, the National Academy of Sciences-National Research Council (NAS-NRC) established the Committee on the Biological Effects of Atomic Radiation (the BEAR Committee) –– the forerunner of the subsequent NAS-NRC committees on the Biological Effects of Ionizing Radiation (BEIR committees). Periodically since the 1950s, the BEIR Committee has published updates, the latest being BEIR VII, Phase 2 (2006): “Health Risks from Exposure to Low Levels of Ionizing Radiation, Phase 2.” For the many previous discussions about the BEIR Committees and the LNT Model, please refer to these GEITP pages and search for ‘LNT Model’.


The Next Big Thing in Health is Your Exposome

From sewer sludge to mosquito repellant, one scientist is exploring how daily exposures determine our health

Veronique Greenwood

Our health is a combination of genetics and environment. Maybe someone’s genes make them vulnerable to high blood pressure, for example, but by watching what they eat — in effect, controlling their body’s environment — they can keep their numbers within normal levels.

Right now, we know a lot about the genetics side of this combination, as an explosion of research has yielded incredible detail about people’s genetic profiles. We also have insight into how our internal bacterial environments — the microbiome in our gut — impact our health. But the environmental piece of the puzzle is still fuzzy. We don’t measure all the chemicals we encounter each day, from the microscopic fungi on a walk to the car exhaust on a highway.

That is, most people don’t.

Michael Snyder, a Stanford biologist and pioneer in genomics, does. For the past several years, Snyder has been wearing a device he invented that measures the environment around him. It’s part of his quest to learn how the environment impacts our health by studying what he calls people’s “exposomes,” or the various air particles, pollutants, viruses, and more that we come into contact with each day.

In a recent paper in the journal Cell, Snyder and his colleagues describe what they’ve learned from outfitting 15 people with these air-monitoring devices for up to 890 days. Each device is about the size of a big matchbox and contains filters that trap particulates, chemicals, and microbes from the air around it. Medium talked to Snyder about the study, the exposome, and his own self-monitoring discoveries.
This is not your first foray into detailed self-monitoring. A few years ago you were monitoring your own blood over the course of 14 months, and you detected the onset of your own diabetes, right?

Michael Snyder: Yes. The monitoring started when I was doing genome sequencing and other profiles — like gene expression — on myself, 8 years ago. I was using myself as a test subject to get the technology going — I didn’t know I’d be interesting! Then I got diabetes. My genome predicted it, and I got the disease over the course of this profiling. These types of measurements helped me catch the onset of the disease and gave a much more detailed picture of people’s health.

With these techniques — like regular testing of blood for changes in gene expression and other readouts — we’re following a group of 109 folks now, many of them for four-and-a-half years or longer. We added wearables about five years ago to continuously monitor physiology, things like heart rate and blood oxygen levels. It helped me figure out when I got Lyme disease, actually. I was with my brother in rural Massachusetts putting up fences, and two weeks later I flew to Norway. When you fly, your blood oxygen levels will drop, but they usually recover after you land. Mine didn’t. And my heart rate was abnormally high. I got a low-grade fever and went to a doctor, who told me I had a bacterial infection. I told him I thought it was Lyme. He recommended penicillin, but I said I think I need doxycycline, which is what you take for Lyme. I measured myself when I got home, and sure enough, I was Lyme positive. It was a perfectly controlled experiment, because I’d given blood before I left and I was negative then.

We want to bring big data into health; that’s the motivation. I think the way we do medicine now is very primitive, compared to what it could be. Everybody is focused on disease, but we want to focus on health and transitions to disease. We’ve written algorithms based on the data we’ve collected, and we now think we can tell when you get sick before you realize it, because your heart rate goes up. We’ve shown that on myself and three other people. We’re now trying to set up a 1,000-person study to learn more.
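The idea Snyder describes can be sketched as a simple personal-baseline anomaly check. This is only a hypothetical illustration of the concept: the function name, data, and 3-sigma threshold below are invented for the sketch, not taken from the Stanford algorithms.

```python
import statistics

# Hypothetical sketch: flag a possible illness onset when resting heart
# rate runs well above an individual's own historical baseline. The
# 3-sigma threshold is an arbitrary choice for illustration only.
def elevated_heart_rate(baseline, recent, n_sigmas=3.0):
    """Return True if the mean of recent readings exceeds the personal
    baseline mean by more than n_sigmas baseline standard deviations."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return statistics.mean(recent) > mu + n_sigmas * sigma

# A resting baseline near 60 bpm; a recent stretch near 75 bpm stands out.
baseline_bpm = [58, 60, 61, 59, 62, 60, 61, 59]
print(elevated_heart_rate(baseline_bpm, [74, 76, 75]))  # True
print(elevated_heart_rate(baseline_bpm, [60, 61, 59]))  # False
```

A real system would of course use continuous wearable streams and clinically validated thresholds; the point is simply that a personal baseline, rather than a population norm, is what makes the early signal detectable.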
How does your new research — which focuses on what you call the “exposome,” meaning the sum total of everything you are exposed to — fit into this picture?

That area has been a big hole. We know you’re affected by your genes and your environment, but nobody is capturing the environment individually. Nobody carries around something on their sleeve to monitor their exposure.

We took a standard high-end air monitor and re-engineered it. The monitor has a pump that sucks up about one-fifteenth of the air you breathe. We put a submicron filter on the monitor to collect all the particulates in the air. Under that we have a cartridge with a chemical absorbent. We take that filter and elute off the particulates, and sequence, incredibly deeply, the DNA and RNA that’s there. Then we match it up against a custom database with 40,000 species of microorganisms, viruses, plants, and animals. We can see exactly what you’re getting exposed to from the biological side. Then from the chemical absorbent, we elute that off and run it through a mass spectrometer. We see all the chemical structures.

The study has just over two years of data on the biologicals; the chemicals we did for only a few months, but nonetheless we learned a ton.
What did you find?

The first thing we learned is the exposome is vast. There were more than 2,000 species, from bacteria to my pet guinea pig, registered during my own two years of profiling. Even the guy or gal who wore it for three months for the study was exposed to over 1,000 species. There were close to 3,000 chemical features detected in the whole study.

Second is that the exposome is dynamic. It varies a lot. How much of the variation is regional or seasonal? For the part we could figure out, location is the number one factor, especially for the chemicals. The time of year is another important factor. We sampled four people living in the Bay Area — me, and people in Sunnyvale, Redwood City, and San Francisco. We profiled them over the same month, and everybody’s different. The person in San Francisco had sewer sludge bacteria in their samples; there are definitely parts of San Francisco that don’t smell so good. Every time I go to Monterey, I get a fungal exposure. Location really matters.

DEET (N,N-diethyl-meta-toluamide, the most common active ingredient in insect repellents) is everywhere, which surprised me; it was in all the samples. There are a few carcinogens, like the solvent diethylene glycol. A limitation of our study is we don’t know the absolute amounts of exposures; we only know relative amounts. That’s something we are working to pin down. This was really just a survey to see generally what we are exposed to.
You noticed that whenever pyridine — a chemical used in paint — appeared in samples, fungi numbers were low. You hypothesize that unbeknownst to us, pyridine is an antifungal?

Right, that was interesting. Pyridine is a nasty chemical. But you can argue this in different ways. If you are very allergic to fungi, which some people are, maybe for them it’s good to have pyridine around –– although that is itself a different, potentially detrimental exposure.
Were exposures connected with people’s health?

This is not in the paper, but there’s some correlation with health. We’re still trying to sort this all out.

Your eosinophils — a type of white blood cell — are actually a measure of allergic response. We can correlate my eosinophils with what exposures are out there. I thought I was probably most allergic to pine, but the correlation was actually better with eucalyptus. One in five Americans has allergies or asthma. It’s useful to know what triggers this.

In California, I’m in eucalyptus heaven! I’m not going to cut down a eucalyptus tree, although if a tree ever had to go in my yard that would be the first one to go. Chemical exposures you could try to track and get rid of.
What’s next for this field of research?

We’re going to get these devices on more people — we will try to get inexpensive devices — just to get this out there so lots of people can do this. Ultimately, we have to take samples and analyze them offline. That will be true for a while. But once we figure out what might be most impactful on people’s health, then we’ll try to set up real-time personal monitors for those things.

I would argue this is the first map of the human exposome, like the first genome map. We see what’s there, and then we try to understand how it affects your health. As we get more devices out, we will be able to make more associations between allergies and exposures. It would take long-term monitoring to understand the effects of toxins, as well as toxicants. But I do think we need that data.
COMMENT: Interesting. For about the last 20 years, I’ve been on an extramural advisory committee of the “Science Media Centre,” based in London. Every week there are one or more “releases” to the press of new papers about to be published. And we (20? 100? 500? I don’t know how many are on this scientific board) are asked to comment on an article still under embargo. I’ve done so several times –– my quotes appearing in newspapers and magazines (more often in the UK and EU than in the US). In fact, some of these articles (after the embargo has been lifted) have appeared in these GEITP pages.

Almost always my remarks are along the lines of Paracelsus’ famous quote (see below), i.e. just because we can DETECT some deleterious chemical in our environment (food, water, air) does NOT mean that it is liable to cause cancer or toxicity at some ridiculously low dose.

Moreover, chemists always keep in mind Avogadro’s Number –– defined as “the number of units in one mole of any substance (its molecular weight in grams), equal to 6.022 x 10^23 (also written 6.022e23). The ‘units’ might be electrons, ions, atoms or molecules, depending on the nature of the substance.”

Methods of detection of any substance are usually in the range of 10^–12, 10^–14, or perhaps even 10^–18 moles per liter, and our detection methods continue to improve and become increasingly sensitive. However, if something cannot be detected at, say, 10^–14 molar, we should always keep in mind that “not detected” is a much more accurate statement than saying “zero.” In other words, “undetectable at 10^–14 molar” does not rule out that as many as ~6 x 10^9 (6,000,000,000) units per liter –– might still exist in your “sample.” 🙂
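The arithmetic behind this point is easy to verify. As a minimal sketch (not part of the original comment), a few lines of Python convert a molar detection limit into “units per liter” via Avogadro’s number:

```python
# Back-of-the-envelope check of "not detected" vs. "zero":
# how many molecules per liter could still be present at a given
# molar detection limit? Avogadro's number = 6.022e23 units per mole.
AVOGADRO = 6.022e23

def units_per_liter(molar_concentration):
    """Convert a concentration in mol/L to units (molecules, ions, atoms) per liter."""
    return molar_concentration * AVOGADRO

for limit in (1e-12, 1e-14, 1e-18):
    print(f"detection limit {limit:.0e} M -> up to {units_per_liter(limit):.1e} units/L")

# "Undetectable at 1e-14 molar" still allows ~6e9 (six billion) units per liter.
```

Even at an extreme 10^–18 molar limit, roughly 6 x 10^5 molecules per liter could remain below detection.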

COMMENT: Dan, I have extensive experience in performing analyses of many different kinds of air samples, using GC/MS and LC/MS. What Michael Snyder and coworkers are doing is only qualitative, at best. Trying to get some idea of the dose of any particular compound (exposure) that a person is getting –– by using the methodology described –– is virtually impossible.

The problem we have these days is that very sensitive instrumentation can now be placed in any lab, to be used by anyone; and many of those using such instrumentation have no idea of how to really measure things properly. Determining an accurate concentration of any atom or chemical compound collected on their filters –– is very difficult –– let alone trying to determine the exposure over some particular period of time. In my humble opinion, this “exposome” is a lot of fanfare over a fairly worthless qualitative survey.

Posted in Center for Environmental Genetics | Comments Off on The Next Big Thing in Health is Your Exposome and KEEPING IN MIND AVOGADRO’S NUMBER

HGNC Newsletter Autumn 2018

For those interested, below is pasted the autumn 2018 newsletter of the HUGO Gene Nomenclature Committee (HGNC).

The website not only includes standardized nomenclature for all human genes and genetic loci to date –– but also does the same for genomes of primates, mouse and many other mammals, lower vertebrates, invertebrates, and even some plants.


(One small clarifying footnote: even though ‘41,555 approved human gene symbols’ now exist, the actual number of human protein-coding genes remains in the 21,000-22,000 range. All other ‘gene symbols’ refer to genetic loci/DNA segments having various identities and putative functions.)

HGNC Newsletter Autumn 2018

Posted in Center for Environmental Genetics | Comments Off on HGNC Newsletter Autumn 2018

An archaebacterium that expresses actin protein — a trait thought to be eukaryote-specific

From time to time, these GEITP pages consider topics on Evolution. In biology, there are six Kingdoms of Life: Archaebacteria, Eubacteria, Protists, Fungi, Plants and Animals. The first two are prokaryotes (microscopic single-celled organisms having unpaired chromosomes and neither a membrane-bound nucleus nor other specialized organelles), while the latter four are eukaryotes (organisms consisting of a cell or cells, having paired chromosomes contained within a distinct membrane-bound nucleus, plus many different organelles). Eukaryotes are thought to have evolved from a merger between an archaebacterial host cell and a bacterium — from which the eukaryotic organelles called mitochondria emerged.

Some insights into the biological properties of the host have been gleaned from the closest known archaeal relatives of eukaryotes, the Asgard superphylum. Genomes of organisms in the Asgard superphylum encode a suite of proteins typically involved in functions, or processes, thought to be eukaryote-specific. The functions of these ‘eukaryotic genes’ in Asgard archaea have been elusive. However, the authors [see attached article] provide evidence that some Asgard archaea encode proteins that are structurally and functionally similar to their eukaryotic counterparts.

In addition to their nucleus and energy-producing mitochondria, eukaryotic cells are characterized by a complex internal system of membrane-bound compartments (the endomembrane system), and by a dynamic network of proteins, such as actin, forming the cytoskeleton. The cytoskeleton gives cells their shape and structure, but it is also involved in a variety of cellular processes specific to eukaryotes; these features are thought to have been present in the last common ancestor (LCA) of all eukaryotes, which lived ~1.8 billion years ago. Nevertheless, no life forms have been found that represent an intermediate between eukaryotes and their bacterial and archaeal ancestors. The seemingly sudden emergence of cellular complexity in the eukaryotic lineage has remained a mystery for evolutionary biologists.

The authors [attached article] show that Asgard archaea encode functional profilins (actin-binding proteins involved in the dynamic turnover and restructuring of the actin cytoskeleton) — thereby establishing that this archaeal superphylum has a regulated actin cytoskeleton. Loki profilin-1, Loki profilin-2 and Odin profilin adopt the typical profilin fold and are able to interact with rabbit actin (i.e. an interaction involving proteins from species that diverged more than 1.2 billion years ago). These data suggest that Asgard archaea possess a primordial polar profilin-regulated actin system, which might be localized to membranes (considering the sensitivity of Asgard profilins to phospholipids). Asgard archaea are also predicted to encode potential eukaryotic-like genes involved in membrane-trafficking and endocytosis (uptake of matter by a living cell, by invagination of its membrane to form a vacuole). Imaging will therefore now be necessary to elucidate whether these organisms are capable of generating eukaryotic-like, actin-regulated membrane dynamics — such as those observed in eukaryotic cell movement, podosome formation (podosomes are dynamic actin-rich cellular protrusions that degrade the extracellular matrix through local protein degradation and are involved in the motile function of normal cells, e.g. osteoclasts and certain immune cells), and endocytosis.


Nature 18 Oct 2018; 562: 439–443 [article] and 352–353 (editorial)

Posted in Evolution and genetics | Comments Off on An archaebacterium that expresses actin protein — a trait thought to be eukaryote-specific