Single-molecule regulatory architectures — as captured by chromatin-fiber sequencing

Any trait (phenotype) reflects the contribution of genes (i.e. DNA sequence), epigenetic factors (chromosomal events other than DNA sequence: DNA methylation, RNA interference, histone modifications, chromatin remodeling), environmental effects (diet, lifestyle), endogenous influences (e.g. cardiopulmonary disorders, kidney disease), and each individual’s microbiome. Today’s topic is chromatin — the portion of a chromosome comprising nucleosome arrays, punctuated by short regulatory regions bound by transcription factors (TFs) and other nonhistone proteins.

Chromatin is fundamental to genome function, yet remains undefined at the level of individual chromatin fibers (i.e. the fundamental units of gene regulation). For example, how is regulatory DNA activated on individual chromatin fibers? To what degree are nearby regulatory regions coordinately stimulated on the same chromatin fiber? How does regulatory DNA triggering affect nucleosome positioning on individual chromatin fibers? How does TF occupancy modulate regulatory DNA actuation and function on single templates? Addressing these questions requires nucleotide-resolution analysis of individual multikilobase (thousands of DNA base-pairs) chromatin fibers, which is not obtainable using current single-cell or bulk-profiling approaches.

Authors [see attached article] wished to develop a method for measuring the primary architecture of chromatin onto its underlying DNA template at single-nucleotide resolution — thereby enabling simultaneous identification of genetic and epigenetic features along multikilobase segments of the genome. Current approaches to mapping chromatin and regulatory architectures require a researcher to sample large populations of chromatin fibers and rely on dissolution of chromatin, using nucleases such as deoxyribonuclease (DNase I), micrococcal nuclease, restriction enzymes, transposases, or mechanical shearing.

CpG and GpC methyltransferases (enzymes that methylate DNA at CpG or GpC sites) are capable of marking accessible cytosines in a dinucleotide context without digesting DNA, and approaches using GpC methyltransferases have been extended to single-pass nanopore sequencing. However, the usefulness of these approaches for gaining insights into the biology of individual chromatin fibers is limited because of: [a] the sporadic occurrence and linear clustering of CpG and GpC dinucleotides in animal genomes as a result of mutation and selection; [b] the confounding influence of endogenous cytosine-methylation machinery; [c] the marked DNA degradation induced by bisulfite conversion; and [d] the intrinsically limited ability of nanopore sequencing to accurately identify modified bases on a per-molecule basis.

Using nonspecific DNA N6-adenine methyltransferases, authors [see attached article] found that single-molecule long-read sequencing of chromatin stencils enabled nucleotide-resolution readout of the primary architecture of multikilobase chromatin fibers (a method abbreviated ‘Fiber-seq’). Fiber-seq exposes widespread plasticity in the linear organization of individual chromatin fibers, illuminates factors that guide regulatory DNA activation, and enables study of the coordinated stimulation of neighboring regulatory elements, single-molecule nucleosome positioning, and single-molecule transcription-factor occupancy. Authors believe that this approach — and the data presented herein — open new vistas on the fundamentals of gene regulation.

DwN

Science 26 Jun 2020; 368: 1449-1454

Posted in Center for Environmental Genetics | Comments Off on Single-molecule regulatory architectures — as captured by chromatin-fiber sequencing

Genomic scans for adaptive introgression, using a new method that authors call “VolcanoFinder”

These GEITP pages have often discussed the topic of evolution of humans, including “adaptive introgression” (i.e. the process by which beneficial alleles are introduced into a species from a closely-related species). [Recall that an ‘allele’ is one copy of a gene; of each chromosome pair, one allele comes from the mother and the other, on the other chromosome, from the father.] Authors [see attached article] present an analytically manageable model for studying the effects of adaptive introgression on non-adaptive genetic variation in the genomic region surrounding the beneficial allele. The result described is a characteristic volcano-shaped pattern of increased variability that arises around the positively-selected site; thus, authors present an open-source method, VolcanoFinder, to detect this signal in genomic data. Notably, VolcanoFinder is a population-genetic likelihood-based approach, rather than a comparative-genomic approach; therefore, one can probe genomic variation data from a single population for footprints of adaptive introgression — even from an unknown and possibly extinct donor species.

Whereas classic species concepts imply genetic isolation, research over the last three decades shows that hybridization between closely related species (or diverged subspecies) is widespread. For adaptation research, this offers the intriguing prospect of an exchange of key adaptations between related species, with potentially important implications for our view of the adaptive process. Recent studies have brought solid evidence of cross-species introgression of advantageous alleles; well-documented examples cover a wide range of taxa — including transfer of wing-pattern mimicry genes in Heliconius butterflies, herbivore-resistance and abiotic (i.e. non-living parts of an ecosystem that shape its environment, e.g. temperature, light, and water) tolerance genes in wild sunflowers, pesticide resistance in mice and mosquitoes, and new mating and vegetative incompatibility types in an invasive fungus. Such adaptive introgressions have also occurred in modern humans: [a] local adaptation to hypoxia (low atmospheric oxygen levels) at high altitude was shown to be associated with selection for a Denisovan-related haplotype at the EPAS1 (hypoxia pathway) gene in Tibetan populations; [b] positive selection has been characterized for three archaic haplotypes, independently introgressed from Denisovans or Neanderthals, in a cluster of genes involved in the innate immune response; and [c] immunity-related genes show evidence of selection for Neanderthal and Denisovan haplotypes (these GEITP pages have previously discussed these papers; recall that ‘haplotype’ refers to a segment of genes on the same chromosome — coming from one parent).

In all examples above, evidence of adaptive introgression rests on a comparative analysis of DNA from both donor and recipient species. In particular, human studies often rely on maps of introgressed Neanderthal or Denisovan fragments in the modern human genome. The tell-tale signature of adaptive introgression is a segment of mutations from the donor population that is present in strong linkage disequilibrium (LD; non-random association of alleles at different loci in a given population) and at high frequency in the recipient population. Unfortunately, solid data from a potential donor species may not always be available — especially in the case of an extinct donor. In the absence of a donor, introgression can sometimes be inferred from haplotype statistics in the recipient species, the most recent methods making use of machine-learning algorithms based on several statistical methods.

However, there is currently no framework for a joint inference of admixture and selection (e.g. adaptive introgression), and selection is usually inferred from an unexpectedly high frequency of introgressed haplotypes. A recent article on adaptive introgression in plants identified four different types of studies in this field, focusing on [a] introgression, [b] genomic signatures of selection, [c] adaptively relevant phenotypic variations, and [d] fitness. The attached paper attempts to bridge the gap between classes [a] and [b] — and to detect specific genomic signatures of an introgression sweep.

Using coalescent theory, authors [see attached article] derived analytical predictions for these patterns. Based on these results, authors developed a composite-likelihood test to detect signatures of adaptive introgression, relative to the genomic background. Simulation results showed that VolcanoFinder has high statistical power to detect these signatures, even for older sweeps and for soft sweeps initiated by multiple migrant haplotypes. Lastly, authors applied VolcanoFinder to detect archaic introgression in European and sub-Saharan African human populations; they uncovered interesting candidates in both populations — such as TSHR (thyroid-stimulating hormone receptor) in Europeans and TCHH–RPTN, in the epidermal differentiation complex (EDC) on chromosome 1, in Africans. These biological implications provide guidelines for identifying and circumventing artifactual signals during empirical applications of VolcanoFinder. 😊

DwN

PLoS Genet Jun 2020; 16: e1008867

Posted in Center for Environmental Genetics | Comments Off on Genomic scans for adaptive introgression, using a new method that authors call “VolcanoFinder”

THE HIGH COST OF COLLEGE: IS IT WORTH IT?

Because of what the COVID-19 pandemic has done to U.S. education — from preschool through college — it seems like a good time to examine closely where education has gone and what might happen in the future. WHY is this thorough (nonpartisan) analysis by L.E. Wood III (and a group of experts who are his friends) worth sharing on these GEITP pages? Because GEITP-ers represent a very heterogeneous group (students, postdocs, junior faculty, professors, chairs of departments or heads of institutes, emeritus faculty, lawyers, judges, real estate agents, nurses, technicians, other lay persons), many of us have children or grandchildren about to enter college, work in academia, and/or currently conduct or administer federally-funded research.

For those GEITP-ers outside the U.S. — perhaps this analysis is not as relevant. Or perhaps you-all will also find this far-reaching and very insightful essay very intriguing. 😊

DwN

THE HIGH COST OF COLLEGE:

IS IT WORTH IT?

By: Leonard E. “Scotty” Wood, III

2019-2020

INTRODUCTION

RESEARCH AND FINDINGS OF THIS COMPENDIUM BEGAN IN THE FALL OF 2019, SEVERAL MONTHS BEFORE THE COVID-19 PANDEMIC. WITHOUT WARNING, THE PANDEMIC POSED IMMEDIATE, MAJOR STRATEGIC AND FINANCIAL CHALLENGES TO THE ENTIRE COLLEGE AND UNIVERSITY SYSTEM. COLLEGES AND UNIVERSITIES ARE RESPONDING WITH URGENCY TO THE MANY CHALLENGES — SEVERAL OF WHICH WERE IDENTIFIED HEREIN PRIOR TO COVID-19, AND OTHERS RESULTING FROM THE PANDEMIC ITSELF.

SECTIONS I. – XII. REPORT ON THE RESEARCH AND FINDINGS CONDUCTED PRIOR TO COVID-19.

THE REMAINING SECTIONS, XIII. – XVII., CONSIDER THE IMPACT OF COVID-19 AND THE COMING TRANSFORMATION OF THE WHOLE SYSTEM. THEY OFFER APPROACHES — SOME OF WHICH WILL SPARK CONTROVERSY — WITH THE PURPOSE OF STIMULATING DISCUSSION AND DEBATE, AND OF IMPROVING THE COLLEGE AND UNIVERSITY SYSTEM FOR THE PEOPLE.

I. THE HIGH COST OF COLLEGE – IS IT WORTH IT?

The short answer is – yes, it’s worth it for some, but not for others. This paper is a compendium of scholarly papers, editorials, magazine articles and similar legitimate source material. Its purpose is to engage critical thinkers on the subject, share its information widely across the public, and propose solutions to make the college and university system more effective for everyone involved.

II. IS COLLEGE COSTLY?

Yes. According to the Consumer Price Index (CPI), between 1980 and 2014 the cost of a college education increased by 260%, compared with 120% for the CPI itself — i.e., more than twice as fast. Student debt stands at $1.5 trillion among 45 million current and former students. More than 5 million borrowers are delinquent, and a larger number have loan deferments.
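The “more than twice as fast” comparison follows directly from the two percentages quoted above; here is a sketch of that arithmetic (illustrative only, using the figures cited in the text):

```python
# Illustrative arithmetic only, using the 1980-2014 percentages quoted above.
college_increase_pct = 260  # % increase in the cost of a college education
cpi_increase_pct = 120      # % increase in the CPI over the same period

ratio = college_increase_pct / cpi_increase_pct
print(f"College costs rose {ratio:.2f}x as fast as the CPI")  # about 2.17x
```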

There is another high cost involved. It’s not economic — it’s societal. As Dr. Richard Vedder, Ohio University Emeritus Distinguished Professor of Economics, notes, “many colleges and universities admit students who are academically unqualified because an incentive to do so exists in the form of student loans and government assistance programs. Many of these students fail to graduate, leading to bitter disappointment, financial hardship, and the stigma of being academic failures.”

Colleges face no consequences for this practice, and many argue that they need “skin in the game” — i.e., should be incentivized to admit fewer academically unqualified students. Alex J. Pollock, former President of the Federal Home Loan Bank of Chicago, suggests that colleges should pay 20% of the debt obligations of former students facing loan delinquency or default.

III. JUST HOW COSTLY IS COLLEGE?

America spends about $30,000 per student per year. The spending is exorbitant, and it has virtually no relationship to the value that many students get in exchange, according to Amanda Ripley, writing in The Atlantic Monthly magazine, September 2018.

IV. HOW MUCH DOES COLLEGE COST IN THE REST OF THE WORLD?

The $30,000 annual cost per student in America is roughly twice the cost per student in other developed countries, according to the Organisation for Economic Co-operation and Development’s (OECD) “2018 Education At A Glance Report”.

V. HOW WELL ARE PUBLIC HIGH SCHOOL STUDENTS BEING PREPARED FOR COLLEGE?

The National Center for Education Statistics May 2019 Report entitled Public High School Graduation Rates shows that 85% of students in the United States graduated on time, the highest rate ever recorded.

Meanwhile, the publication The Nation’s Report Card reports that only 37% of 12th grade students performed at or above the 12th grade Proficiency Standard in reading. The math scores were slightly lower.

Thus, our country is faced with a striking disparity between public high school graduation rates and academic proficiency. It suggests that most high schools do not base graduation decisions on student academic achievement as measured by reading and math proficiency metrics that are nationally recognized.

This apparent overemphasis on graduation rate at the expense of academic achievement may help explain why so many students fail to finish and graduate from college.

VI. HOW DO AMERICAN STUDENTS PERFORM ACADEMICALLY, COMPARED WITH THOSE IN OTHER DEVELOPED NATIONS?

Sadly, the OECD’s “International Assessment of Adult Competencies” reports that Americans under age 35 with a bachelor’s degree performed below their similarly educated peers in 14 other countries on practical math skills, below 6 other countries in reading skills, and below 13 other countries in their ability to solve problems using digital technology.

VII. WHY ARE AMERICA’S COLLEGE COSTS SO HIGH?

Many reasons — some preventable, some systemic, and some demographic. Of the $30,000 annual cost per student, college staff and faculty consume $23,000 of the total, which is more than twice what Finland, Sweden, and Germany spend on these “core” services. In addition, we spend just under $3,400 per student per year on “amenities,” which include non-academic activities and athletics — more than three times the average in the developed world.

The competition for top students and for revenue has created a “spending arms race.” Many colleges now compete for out-of-state and foreign students, who pay much more than in-state tuition. One highly regarded midwestern university reduced its in-state enrollment by 4,300 students over the last decade, while adding 5,300 out-of-state and foreign students who pay triple the tuition.

Global rankings of colleges and universities heavily weight the amount of research published by faculty — a metric that has no relationship to whether students are learning, as The Atlantic Monthly article noted.

US colleges employ armies of fundraisers, athletic staff, lawyers, admissions and financial aid officers, diversity and inclusion managers, building operations and maintenance staff, security personnel, transportation workers, and food-service workers. One Ivy League university, for example, has a full-time diversity staff of 150.

Many state legislatures have been spending less and less per student on higher education for the past three decades. The cuts were particularly stark after the 2008 recession. Colleges and universities did not adequately respond with cost reductions but rather burdened students with higher tuition costs.

VIII. HOW AFFORDABLE IS COLLEGE IN AMERICA COMPARED TO ELSEWHERE?

Alex Usher of Higher Education Strategy Associates reports on the affordability of college — based on the cost of tuition, books, and living expenses, divided by the median income in a given country. By this metric, the US does very poorly, ranking third from the bottom with only Mexico and Japan doing worse.
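The affordability metric described above is simple to express directly. The following is a minimal sketch; the dollar figures are hypothetical placeholders for illustration, NOT Usher’s actual data:

```python
# A minimal sketch of the affordability metric described above.
# The dollar figures below are hypothetical placeholders, NOT Usher's data.
def affordability(tuition, books, living, median_income):
    """Annual cost of attendance divided by median income (higher = less affordable)."""
    return (tuition + books + living) / median_income

ratio = affordability(tuition=10_000, books=1_200, living=12_000, median_income=63_000)
print(f"Cost of attendance is {ratio:.2f} of median income")  # about 0.37 with these inputs
```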

IX. WHAT HAS HAPPENED TO AMERICA’S COLLEGE ENROLLMENT?

In short, huge growth. In 1980, 53% of eligible Americans were enrolled in some form of collegiate education; by 2012, this had grown to 69%. The principal driver of this growth is the societal belief that “you must go to college”.

X. HAS THE GROWTH OF COLLEGE ENROLLMENT PRODUCED A GROWTH IN INCOME?

NO — when adjusted for inflation. According to the National Association of Colleges and Employers (NACE), the average starting salary in 1980, when adjusted for inflation, was $51,047 — compared with $47,823 in 2014, a 6% reduction in income over a 34-year period.
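The 6% figure can be checked directly from the two NACE salary figures quoted above (illustrative arithmetic only):

```python
# Illustrative check of the NACE figures quoted above (inflation-adjusted dollars).
salary_1980_adj = 51_047  # average starting salary in 1980, adjusted for inflation
salary_2014 = 47_823      # average starting salary in 2014

pct_change = (salary_2014 - salary_1980_adj) / salary_1980_adj * 100
print(f"Real change in starting salary: {pct_change:.1f}%")  # about -6.3%
```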

XI. DOES EVERYONE NEED TO GO TO COLLEGE?

NO. Consider any of 100 ultra-successful people who did not get a college degree — names like Bill Gates, Paul Allen, Mark Zuckerberg, Larry Page, Richard Branson, Steven Spielberg, and Steve Jobs. Then there are others from earlier times — George Eastman, Henry Ford, Thomas Edison, Ray Kroc, Colonel Sanders, John D. Rockefeller, George Washington, and Abraham Lincoln.

There are millions of people who just do not benefit from, or need, a college education — so why should they be saddled with the debt or the stigma? Their skills simply lie outside the academic life of a college or university. Why should society shun them for not attending college?

XII. WHY NOT SIMPLY DO WHAT OTHER COUNTRIES HAVE DONE AND HAVE GOVERNMENT TAKE OVER THE FUNDING OF COLLEGE EDUCATION?

There are significant tax consequences to such a move. Currently, according to the US Census, 16.8 million Americans are enrolled in colleges and universities. At $30,000 a head, the “bill” for nationally funded college education would exceed $500 billion annually — about 11% of the current gross expenditures of the federal government. Accordingly, unless top tax rates were dramatically increased — to a point where many high earners would shelter money offshore — all taxpayers would face a significant tax increase, including those who have no reason to attend college.
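The “$500 billion” figure follows directly from the enrollment and per-student cost quoted above; here is that back-of-the-envelope check (illustrative only):

```python
# Back-of-the-envelope check of the "$500 billion" figure quoted above.
students = 16_800_000      # US college/university enrollment (US Census figure cited)
cost_per_student = 30_000  # annual cost per student

total = students * cost_per_student
print(f"Total annual bill: ${total / 1e9:.0f} billion")  # $504 billion
```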

Then there is the matter of the current high cost of US college attendance relative to other developed countries. As a policy question, should American taxpayers pay for what is a comparatively inefficient system? The OECD notes that “universities extract money from students because they can. It’s the inevitable outcome of an unregulated fee structure.” In many developed nations, the government limits how much colleges can extract by capping tuition.

XIII. ENTER COVID-19, WHICH INITIALLY SHUT THE ENTIRE SYSTEM DOWN. IS IT THE GREATEST THREAT TO THE SYSTEM IN ITS HISTORY? IS IT ARMAGEDDON?

There seems to be little question that the COVID-19 pandemic is THE defining moment in the history of the college and university system, as well as of the smaller communities (e.g., college towns) that are highly dependent upon their local college or university. All sorts of retail establishments in college towns are directly affected by COVID-19, and sadly, in some cases these businesses will not survive.

Without warning, all institutions were thrust into crisis management mode and in some cases emergency management mode. Complicating the dilemma is that very few institutions have any meaningful experience with crisis management and expense control, both of which are vital components in an institution’s successful response to the extreme risks (safety, revenue, expense, etc.) created by the crisis.

Colleges and Universities have prided themselves on providing advanced instruction in good business practices, critical thinking and thoughtful decision making. Unfortunately, many have not “practiced what they preach”.

Institutions have relied on revenue sources that are now threatened. Enrollment levels are in jeopardy as millions of parents have lost jobs or businesses and can’t afford to send a child to the college of their choice. Tuitions are in jeopardy as instructional transformation has already occurred with the movement to embrace virtual learning (payers are already litigating for tuition reductions to reflect the lower costs of virtual learning). Every other source of revenue is threatened, some with extinction — to name a few: revenues from on-campus room and board (fewer boarding students), state government funding (states losing tax revenues), foreign students (travel restrictions), and athletics (near-empty venues). Endowments have sustained major losses as the investment markets declined.

In addition to the revenue challenges, there are new expenses that must be incurred in order to function safely: investments in facilities to maintain social distancing, provision and stockpiling of PPE (personal protective equipment), increased medical expenses to test students and staff, increased maintenance costs for necessary sanitizing, and training of students and staff in safe living practices. Young people are notorious for believing that medical issues just don’t happen to them — and that is a threat to everyone’s health in a pandemic environment.

XIV. WHAT CAN INSTITUTIONS IMMEDIATELY DO?

1. Consider strategic alternatives and mission — ranging from continuing as is, to downsizing, to specialization, to actual closure.

2. Implement immediate expense reductions and controls to reduce the cash burn rate. Salary reductions for top staff and coaches, hiring freezes, furloughs, benefit reductions, organizational streamlining, etc.

3. Reduce tuitions for those courses employing increased virtual learning — as a means of preserving enrollment levels.

4. Reduce or ELIMINATE “feel-good” programs — such as large diversity staffs and courses with questionable academic benefit.

5. Reduce or eliminate entire academic departments and/or extracurricular activities including selected athletics. Only a handful of institutions can be “all things to all people”.

6. Cancel/defer all discretionary capital expenditures (CAPEX). Facility enhancements are nice to have, but unaffordable in the crisis environment.

7. Repurpose facilities no longer in use by renting, leasing, or selling them to outside parties.

XV. HOW WILL THE COLLEGE AND UNIVERSITY LANDSCAPE CHANGE IN THE NEXT FEW YEARS?

Uncertainty and threats to institutions’ very existence will see Darwin’s law at work. The strong will survive (and even prosper), the marginal will struggle, and the weakest will close. How many fall in each category is the great mystery; it depends upon many factors, but chiefly enrollment levels and the effectiveness of college and university leadership in responding to the pandemic and setting the new stage for operations going forward.

COVID-19 temporarily forced movement of instruction to a virtual format across all educational platforms, from kindergarten to graduate school. Virtual instruction has been present for years, but largely subordinated to the classroom.

A transformation is underway that will significantly change the entire college and university system in different ways for different institutions. New instructional models are emerging, most of them using virtual and related technology, and the college and university landscape will change dramatically over time.

XVI. ASSUMING THAT THE PANDEMIC GETS CONTROLLED, WHAT THEN?

Medical experts suggest that this pandemic, and others to come, could affect our lives permanently. Social distancing and face coverings in public places could be with us for years to come, creating a “new normal” society. College- and university-age children may stay at home longer than their parents might prefer — causing a delayed empty nest and perhaps less preparation of the children for productive, independent living.

The long-term answers are uncertain because of this societal transformation — and difficult because the college and university paradigm is so big, so ingrained in our society, and now so threatened. But creative and critical thinking can produce action steps, including:

1. Focus on the basic mission of academic achievement, emphasize applied knowledge majors, de-emphasize soft majors. De-emphasize wasteful prestige games among elite schools, gold plating of amenities for students, out-of-control college athletics, and a lax workplace culture that breeds both inefficiency and a stiff resistance to innovation. Budgets should be reduced in non-academic areas, particularly college sports.

2. Grow virtual instruction and, where indicated, blend it with classroom instruction (call it the hybrid model). It can provide significant benefit to students’ academic achievement. Rather than students attending small classes conducted mostly by teaching assistants, they can attend a virtual course conducted by a world-renowned academic who can reach thousands of students simultaneously at a fraction of the per-student cost. In some models, a course conducted twice a week could have one session virtual and the other in class, to assure student debate and interaction where desirable.

There is a potentially significant role for technology companies like Microsoft, Apple, Google and others to partner with institutions by creating new approaches to learning, communication and virtual interaction among students and faculty. However, the danger with this is that even more than now, High Tech will indoctrinate ideology rather than teach facts. It is probable that artificial intelligence (AI) technology will be central to such a new dynamic.

Institutions large and small should consider partnerships — where one institution offers certain courses, while the other offers different courses — but students could blend the two and graduate on time with a joint degree.

3. Adopt the goal of keeping overall bottom line costs at or below CPI with penalties for institutions that fail to meet the goal. Colleges and universities should not increase costs faster than payers’ ability to pay.

4. Increase productivity and facility utilization by moving to year-round schooling — 180 or fewer annual days of learning is poor utilization of both human and physical resources. Armies and businesses don’t train that way — why should colleges and universities?

5. Offer alternative approaches to learning, such as “learn on your own time” and “pay as you go.” Learning on your own time requires discipline, but it allows the student to work or be an at-home parent while attending college. This is currently done ad hoc, and it could be more efficient and effective if formalized as an alternate learning model.

6. Improve the current federal student-loan program by requiring that colleges and universities — using their endowments as a funding source — pay a minimum of 20% of the debt obligations of students who fail to graduate. Adopt ceilings on federal student-loan debt so that a few don’t benefit at the expense of many.

7. Require that all incoming students take a course in Debt Management, so that they clearly learn and understand the advantages, disadvantages, and responsibilities of debt — be it personal, commercial, or public debt.

8. Adopt minimum ACT scores for college entry. Require that students pass an academic achievement test in order to graduate from college or university, not just those pursuing professional designations.

9. Base college accreditation on academic achievement, as measured by graduation rates, graduate achievement-test scores, career-placement results, and graduate earnings vis-à-vis norms established for each area of endeavor.

10. Convert endowment-development marketing from an emphasis on brick-and-mortar naming rights to course-naming rights (e.g. the “John Doe Thermodynamics Lecture Series”). The Public Broadcasting Service (PBS) has used this approach successfully for years.

11. Develop a process enabling colleges to determine the “return on investment” of tenured faculty. Develop a faculty code of conduct that rewards unbiased instruction and prescribes sanctions for biased instruction.

12. Publish annual report cards to the general public reporting on tuition history, graduation rates, graduation test results, and other achievement metrics.

13. Adopt tax-free thresholds for college endowments, and tax endowments in excess of those thresholds. Restrict use of the excess-endowment tax revenue to supporting the federal student-loan program. Unlimited tax-free status is not universal among nonprofit organizations; why should college and university endowments be totally tax-free?

14. Replace secondary education college placement programs with career counseling that may or may not lead to college enrollment.

15. Educate the public about the alternatives to college education, and the benefits thereof — in an effort to reduce the stigma of academic failure and the burden of unnecessary student debt.

XVII. WHAT CAN YOU DO TO HELP COLLEGES AND UNIVERSITIES BETTER SERVE THE PEOPLE?

Like any other endeavor in life, helping means getting involved. Stimulate the conversation by forwarding this compendium to people of all ages – family, friends, students, educators, business people, theologians and government leaders at all levels.

You need not drop everything and devote lots of time to this unless you want to. Rather, give it serious thought and engage others in the subject. If colleges and universities are to effectively serve students and payers, they need to know what the people want and need, not what sounds good or feels good.

“Should colleges and universities focus more on specialization, as a means of delivering higher quality instruction at lower cost?” — is an example of what can and should be debated. “Should state universities that have multiple campuses increase concentration of specific subjects at specific campuses rather than offering all subjects at all campuses?”

But what about the uncertainty of whether the “new normal” will see more students staying at home and/or using virtual learning, making concentration less important? Will technology companies partner with elite universities and dominate higher learning?

We must find the answers to these questions without “breaking the bank” by minimizing “trial and error”.

The college and university system needs the benefit of critical thinking more than ever, and you can contribute. Get involved. Start by forwarding this to many others.

ACKNOWLEDGEMENTS:

The author wishes to thank a small group of critical thinkers who reviewed various iterations of this compendium, and provided valuable input. Only the author knows each of the individual contributors, and yet each shares many, but not necessarily all, of the items expressed herein. Without their assistance, this could not have been completed.

Who are they?

A distinguished professor of finance at a major state university.

A senior officer of a major health system with significant fund-raising expertise.

A trustee of a small private college.

A former executive of a major insurance company.

A former engineer and business executive.

A woman non-college graduate who broke the glass ceiling at a major company.

Thanks also goes to those individuals and organizations who are mentioned in the body of the compendium.

Who is the author?

He is a retired business executive who spent his career in insurance and employee benefits. He has served on nine Boards of Directors, traveled to all 50 states and 67 countries, and remains active in community affairs as a Trustee of a major health system and Chairman of the Planning & Zoning Board in his home city. He has three adult children, all college graduates — two from state universities and one from a private university. His four grandchildren haven’t made choices about higher education — yet.

This compendium is protected by U.S. copyright law but may be shared with that understanding.

Posted in Center for Environmental Genetics | Comments Off on THE HIGH COST OF COLLEGE: IS IT WORTH IT?

Evolutionary origins of flowering plants and their pollinators

The topic in these GEITP pages today is “the rise and early diversity of flowering plants” (angiosperms). Darwin described the seemingly explosive diversification of angiosperms — as an “abominable mystery.” Today, debates continue about the origin, and the processes that drove angiosperm speciation. Dating the origin of angiosperms was traditionally the prerogative of paleobotanists who study fossil records of plants; however, with DNA-sequencing becoming increasingly sophisticated, molecular dating methods have become popular. The oldest angiosperm fossils date to the Early Cretaceous (~135 million years ago; MYA), which led paleobotanists to reason that angiosperms originated during that era.

It is now increasingly recognized that angiosperms are probably older than the oldest fossils — but how much older remains controversial. When angiosperms originated is key to understanding the origin and evolution of pollinators (e.g. bees, butterflies, moths, and flies). Recent reports highlight the discrepancy of molecular vs paleontological time scales and draw conflicting conclusions about the timing of angiosperm diversification (see figure in attached editorial). On the basis of gene sequences from 2,881 chloroplast genomes — belonging to species from 85% of living flowering-plant families, and time-calibrated using 62 fossils — one study dated the origin of angiosperms to the Late Triassic (>200 MYA; this is ~70 MY before the earliest documented angiosperm fossils). This study further suggested that major radiations (i.e. species diversification) occurred in the Late Jurassic and Early Cretaceous (~165 to 100 MYA).

Although the idea that angiosperms arose before the beginning of the Cretaceous may seem hard to reconcile with the rapid increase in morphological diversity observed during that interval — it is not impossible — if the Cretaceous radiation occurred rapidly. Both paleontological records and molecular analyses have their strengths and weaknesses. The strength of fossils is that they can provide information on past form, function, and clade richness, and indirectly provide information on speciation and extinction; fossils are particularly useful when they harbor intermediate structures or combinations of characters that no longer exist, which can provide insightful examples that help to reconstruct the course of evolutionary events. Yet, interpretation of fossils can be subjective and controversial, because important features of these plants may not be preserved and often must be inferred from two-dimensional compressed remains.

In summary, “absence of evidence is not evidence of absence,” and it is known that the fossil record can be incomplete or biased, because some taxa may be less likely to fossilize. Fossil data and molecular evidence therefore lead to conflicting conclusions about the timing of the origin of flowering plants. Fossil evidence suggests that flowering plants arose near the beginning of the Cretaceous (~145 MYA), but molecular analyses now date the origin much earlier, in the Late Triassic (~215 MYA). Of course, these GEITP pages will put their money on the latter hypothesis. 😊

DwN

Science 19 Jun 2020; 368: 1306-1308

Posted in Center for Environmental Genetics | Comments Off on Evolutionary origins of flowering plants and their pollinators

Using information theory to decode network co-evolution of insect-plant ecosystems

This topic is central to the GEITP theme of gene-environment interactions. The “environmental signal” is a chemical(s) emitted (by a plant or an insect), and “genes” in the genome (of the insect or plant) respond to that signal. “Attracting” or “repelling” the signal is regarded as one of the main forces that shape plant-herbivore interaction networks. For example, insects have a large number of olfactory receptors with high sensitivity for chemical signals — which play essential roles in their fundamental activities including foraging (obtaining food) and oviposition (depositing eggs). Chemically-mediated species interactions are based on information transfer, which can be defined simply as “communication,” regardless of the benefit to the emitter or the receiver.

Understanding chemical communication between plants and herbivores (animals that eat plants) has been an active field of research; most work has focused on how a specific plant (or genus of plants) defends against herbivores — directly by using chemical repellents or by producing stronger chemical defenses, or indirectly by emitting signals to attract herbivores’ enemies (e.g. predators or parasites). Little is known about how information transfer shapes these species interactions (e.g. it remains unclear how plants code chemical information to deal with a diversity of potential herbivores; and how herbivores decode such olfactory signals to distinguish among plants and identify potential hosts). Even less is known about whether there is any general information structure of plant-herbivore chemical communication and how such a structure can be maintained during an ongoing chemical “arms race” between plants and herbivores.

Information theory has been discussed before in these GEITP pages; it provides a quantitative and scalable means to measure information transfer, and has already brought key insights about the structure and emergence of human language. Authors [see attached article & editorial] extended information theory to increase our understanding about the “chemical language” in ecological communities. Previously, however, such attempts had been used to study chemical communication of only a single plant species with its interacting insects. In the present study [see attached] authors used information theory to study plant volatile organic compounds (VOCs) as a communication channel, forming plant-insect interaction networks. From an information perspective, the relationship between sender and receiver (i.e. ‘speaker’ and ‘listener’) determines the nature of the signal transmission and evolution of the information structure shaping the communication pattern between individuals (i.e. individuals can provide either clear information that can be decoded easily, or spurious information that can be difficult to decode).

[Classical information theory considers a sender and a receiver trying to communicate using a given code sent through a noisy channel (Morse code in the old telegraph system would be one example). For each signal sent, there is a probability that it will be misunderstood by the receiver. Not surprisingly, a great deal of standard information theory deals with finding ways of optimizing communication efficiency. However, in an ecological system in which interactions are highly asymmetrical (i.e. one species eats the other), the needs at the ends of the (coevolving) communication channel are in clear conflict: insects need to faithfully identify which leaves are edible, and plants need to avoid being identified as edible.]
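To make the bracketed summary concrete, here is a minimal Python sketch (illustrative only, not taken from the attached article) of the simplest noisy channel: a binary symmetric channel, in which each signal is misunderstood with a fixed probability. Channel capacity quantifies how much information per signal survives the noise.

```python
import math

def binary_entropy(p):
    """Shannon entropy (in bits) of a binary event with probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(error_prob):
    """Capacity (bits per signal) of a binary symmetric channel in which
    each signal is flipped (i.e. misunderstood) with probability error_prob."""
    return 1.0 - binary_entropy(error_prob)

# A noiseless channel carries 1 bit per signal; a channel whose signals
# are misread half the time carries no information at all.
print(bsc_capacity(0.0))              # 1.0
print(bsc_capacity(0.5))              # 0.0
print(round(bsc_capacity(0.1), 3))    # 0.531
```

Note the conflict described above: a plant “sender” benefits from pushing the herbivore’s effective error probability toward 0.5 (spurious, hard-to-decode signals), while the insect “receiver” is selected to keep it near zero.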

By integrating information theory with ecological and evolutionary theories, authors [see attached article & editorial] therefore demonstrated that a stable information structure of plant VOCs is able to emerge from a conflicting information process between plants and herbivores. Authors corroborated this information “arms race” theory with field data — recording plant-VOC associations and plant-herbivore interactions in a tropical dry forest. Authors discovered that plant VOC redundancy and herbivore specialization can be explained by a conflicting information transfer. Authors conclude that information-based communication approaches should be able to increase our understanding of species interactions across levels of feeding and nutrition.

DwN

Science 19 Jun 2020; 368: 1337-1381 & editorial pp 1315-1316

Posted in Center for Environmental Genetics | Comments Off on Using information theory to decode network co-evolution of insect-plant ecosystems

Why the herd immunity threshold to COVID-19 is reached much earlier than thought – update

EARLIER POST HERE:

Thank you, M. The time-line for number of COVID-19-related deaths [see figure below] is also very intriguing — in a country that never imposed any restrictions — no face masks, children stayed in school, and the economy was not affected:

Did Sweden’s coronavirus strategy succeed or fail? – BBC News

DwN

From: MI-S
Sent: Wednesday, July 29, 2020 2:52 AM

Thank you for that update on the likelihood of herd immunity. It has been amazing how the infection rate has decreased in Sweden. Now we can see it also on the level of new confirmed COVID-19 cases:

The reason behind this is really not understood. The Swedish people have not changed their habits at all. One can imagine, as mentioned, that cell-mediated immunity could contribute; in Stockholm, it is estimated that 20% have T-cell immunity and another 20% carry antibodies. Therefore, perhaps 40% immunity in our population is sufficient to provide herd immunity.

How stable is the epigenome of the virus? One could imagine changes (in the viral epigenome), relative to the influence of the virus environment in the Swedish population.

Best, M
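For context on the numbers in this exchange: in the simplest epidemic models, the herd-immunity threshold for a homogeneously mixing population is 1 - 1/R0. A minimal sketch (the R0 values below are illustrative assumptions, not taken from the correspondence; heterogeneity in contacts and susceptibility is the usual argument for a lower effective threshold):

```python
def herd_immunity_threshold(r0):
    """Classical herd-immunity threshold for a homogeneously mixing
    population: spread declines once a fraction 1 - 1/R0 is immune."""
    if r0 <= 1.0:
        return 0.0  # an outbreak with R0 <= 1 fades out on its own
    return 1.0 - 1.0 / r0

# With R0 ~ 2.5 (a value often quoted early in the pandemic), the naive
# threshold is 60%; the ~40% figure discussed above would correspond to
# a homogeneous-model R0 of about 1.7.
print(herd_immunity_threshold(2.5))            # 0.6
print(round(herd_immunity_threshold(1.7), 2))  # 0.41
```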

Posted in Center for Environmental Genetics | Comments Off on Why the herd immunity threshold to COVID-19 is reached much earlier than thought – update

Persistent warm Mediterranean surface waters during the Roman period

For those GEITP-ers who remain enthusiastic about “global warming”, here is a semi-lay summary of an article that recently appeared in Sci Reports [Nature]. This is evidence of global warming — except it occurred 1,500 to 2,000 years ago.


DwN

Date: 24/07/20

Mediterranean Sea was 3.6°F (2°C) hotter during the Roman Empire than temperatures since

Comparison of the Sicily Channel records studied in this work (thick dark blue line) with other records: Alboran Sea (thick light blue line), Minorca Basin (thick red line), and Aegean Sea (thick dark and light green lines). These records support the claim that the Roman period saw a 3.6°F (2°C) rise in temperatures in this Mediterranean region.


The Empire coincided with a 500-year period, from AD 1 to AD 500, that was the warmest period of the last 2,000 years in the almost completely land-locked sea.

The climate later progressed towards colder and arid conditions that coincided with the historical fall of the Empire, scientists claim.

Spanish and Italian researchers recorded magnesium-to-calcium (Mg/Ca) ratios in the calcite shells of foraminifera (single-celled, amoeba-like plankton) preserved in marine sediments of the Sicily Channel, an established indicator of sea-water temperature.

The study identifies the Roman period (AD 1–500) as the warmest period of the last 2,000 years. Map A shows the central-western Mediterranean Sea; the red triangle shows the location of the sample studied, while the red circles are previously-reported marine records used for the comparison. Map B shows the Sicily Channel, featuring surface oceanographic circulation and the sample location; black lines follow the path of surface-water circulation.


They say the warmer period may also have coincided with the shift from the Roman Republic to the great Empire founded by Octavian (Augustus) in 27 BC.

The study offers ‘critical information’ to identify past interactions between climate changes and evolution of human societies and ‘their adaptive strategies’.

It meets requests from the Intergovernmental Panel on Climate Change (IPCC) to assess the impact of historically warmer conditions between 2.7°F and 3.6°F (1.5°C to 2°C).
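The paired Fahrenheit/Celsius figures used throughout this piece (2.7°F–3.6°F for 1.5°C–2°C) are temperature differences, which convert with the 9/5 scale factor alone; a quick check in Python:

```python
def c_to_f_anomaly(delta_c):
    """Convert a temperature *difference* (anomaly) from Celsius to
    Fahrenheit. Unlike absolute temperatures, differences use only the
    9/5 scale factor -- there is no +32 offset."""
    return delta_c * 9.0 / 5.0

print(c_to_f_anomaly(2.0))  # 3.6
print(c_to_f_anomaly(1.5))  # 2.7
```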

Spanish and Italian researchers recorded magnesium-to-calcium (Mg/Ca) ratios in foraminiferal calcite from marine sediments, an indicator of sea-water temperature. The skeletonised shells of G. ruber were sampled from a depth of 1,500 feet (475 m) in the northwestern part of the Sicily Channel.
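The Mg/Ca palaeothermometer works because magnesium substitutes into foraminiferal calcite more readily in warmer water, roughly exponentially with temperature. A hedged sketch of how such a calibration is inverted (the constants A = 0.09 and B = 0.38 are illustrative values of the kind published for G. ruber; the calibration actually used in the paper is not given here):

```python
import math

# Exponential calibration of the form Mg/Ca (mmol/mol) = B * exp(A * SST),
# with illustrative constants; the paper's own calibration may differ.
A, B = 0.09, 0.38

def mg_ca_from_sst(sst_c):
    """Forward model: shell Mg/Ca ratio expected at a given SST (deg C)."""
    return B * math.exp(A * sst_c)

def sst_from_mg_ca(mg_ca_mmol_mol):
    """Invert the calibration: recover sea-surface temperature (deg C)
    from a measured shell Mg/Ca ratio."""
    return math.log(mg_ca_mmol_mol / B) / A

# A 2 deg C warming raises Mg/Ca by a factor exp(0.09 * 2) ~ 1.20,
# i.e. about 20% -- large enough to resolve in sediment records.
ratio = mg_ca_from_sst(20.0)
print(round(sst_from_mg_ca(ratio), 6))  # 20.0
```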


‘For the first time, we can state with certainty that the Roman period was the warmest period of time of the last 2,000 years, and these conditions lasted for 500 years,’ said Professor Isabel Cacho at the Department of Earth and Ocean Dynamics, University of Barcelona.

The Mediterranean is a semi-closed sea, meaning it is surrounded by land and almost only connected to oceans by a narrow outlet, and is a climate change ‘hot spot’ according to a previous paper.

Situated between North Africa and European climates, the sea occupies a ‘transitional zone’, combining the arid zone of the subtropical high and humid northwesterly air flows.

This makes it extremely vulnerable to modern and past climate changes, such as changes in precipitation and average surface air temperature, and is of ‘particular interest’ to researchers.

Home to many civilisations over the years, the Med, or Mare Nostrum as it was known by ancient Roman civilisations, has become a model to study the periods of climate variation.

Reconstructing previous millennia of sea surface temperatures and how they evolved is challenging, owing to the difficulty of retrieving good-resolution marine records.

However, the study of the fossil archives remains the only valid tool to reconstruct past environmental and climatic changes as far back as 2,000 years ago, they say.

Compared to the subsequent period of the Roman Empire, the Mediterranean was characterised by a colder phase from around 500 BC to 200 BC.

This corresponds with the beginning of the so-called ‘sub-Atlantic phase’ characterised by a cool climate and rainy winters which was favourable for Greek and Roman civilisations to grow crops.

The cool and humid climate of the sub-Atlantic phase lasted until around 100 BC and covered the entire period of the monarchy in Rome.

However, around 400 BC, cultural changes were synchronised across the Mediterranean region, and more ‘homogeneous’ temperature conditions were established.

A distinct warming phase, running from AD 1 to AD 500, then coincided with the Roman Period and covered the whole Roman Empire archaeological period.

‘This pronounced warming during the Roman Period is also consistent with other marine records from the Atlantic Ocean,’ the team say in their research paper, published in Scientific Reports [Nature] (26 Jun 2020; 10: 10431).


This climate phase corresponds to what is known as the ‘Roman Climatic Optimum’, a phase of warm, stable temperatures across much of the Mediterranean heartland that covers the whole period of origin and expansion of the Roman Empire and gave warmth and sunlight to crops.

The greatest era of the Roman Empire thus coincided with the warmest period of the last 2,000 years in the Mediterranean.

After the Roman Period, a general cooling trend developed in the region with several minor oscillations in temperature.

The climate then transitioned from wet to arid conditions and this could have marked the decline of the golden period of the Roman Empire after AD 500.

These new records correlated with data from other areas of the Mediterranean – the Alboran Sea, Menorca basin and Aegean Sea.

‘We hypothesise the potential link between this Roman Climatic Optimum and the expansion and subsequent decline of the Roman Empire.’

The study provides high resolution and precision data on how the temperatures evolved over the last 2,000 years in the Mediterranean area.

It also identifies a distinct warming phase during the Roman Empire, within a reconstruction of Mediterranean sea-surface temperature spanning the last 5,000 years.

‘Our study highlights the relevance of the Roman Empire to better understand the behaviour of the Mediterranean climate – specifically, the hydrological cycle – in warm conditions compared to the ones in the current climate change scene,’ said Professor Cacho.

Posted in Center for Environmental Genetics | Comments Off on Persistent warm Mediterranean surface waters during the Roman period

Toward development of the proteome landscape for the kingdoms of life

Early on in genetic studies, the concept (learned in grade school, these days 😊) was that “DNA is transcribed into RNA, which is translated into protein.” Then, it became established that “DNA —> primary transcript, which then results in messenger-RNA (mRNA) via splicing; then mRNA —> protein.” The early belief was the “one gene = one gene product (protein)” concept. Now, we know the primary transcript can exhibit dozens of start-sites and termination sites, leading to numerous mRNA variants often resulting in different proteins (many revealing distinct functions); also, by way of posttranslational modification, a protein can be modified via addition of various groups (e.g. glycosylation, phosphorylation, ubiquitination, acetylation, methylation, ADP-ribosylation, farnesylation, sumoylation, glutathionylation), resulting in dozens or hundreds of final protein products — from a single gene…!! The complete set of proteins of an organism is called “the proteome,” which is obviously more complex and much larger than “the genome”.
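The multiplicative logic of that paragraph (mRNA variants times combinations of posttranslational modifications) can be made explicit with a toy upper-bound calculation; the numbers below are illustrative, not from the article:

```python
from math import prod

def proteoform_upper_bound(n_mrna_variants, ptm_states_per_site):
    """Upper bound on distinct final protein products ('proteoforms')
    from one gene: each mRNA variant times every combination of site
    modifications. ptm_states_per_site lists, for each modifiable
    residue, how many states it can adopt (unmodified counts as one)."""
    return n_mrna_variants * prod(ptm_states_per_site)

# Illustrative numbers: 3 splice variants and 4 sites that are either
# modified or not (2 states each) already allow 3 * 2**4 = 48 proteoforms.
print(proteoform_upper_bound(3, [2, 2, 2, 2]))  # 48
```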

Authors [see attached article] chose to begin characterizing proteomes from a diverse set of representative organisms across the Tree of Life. Including common model organisms for comparison, this collation resulted in 19 archaebacteria, 49 eubacteria, and 32 eukaryotes [bacteria (prokaryotes) have single chromosomes; eukaryotes have paired chromosomes] — creating a total of 100 different species. Authors also added 14 viruses. Authors incorporated the latest technological advances into their workflow for high-resolution bottom-up proteomics, and implemented a recently developed chip-based method. For all prokaryotes, authors performed single-run mass spectrometry (MS) analyses, whereas for the more complex eukaryotic samples, authors used a loss-less prefractionator. They reasoned the chip-based chromatographic method — combined with the very large data set of more than two million unique peptides — should be well suited to “deep-learning algorithms” (which have recently been shown to be applicable to MS-based proteomics).
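In bottom-up proteomics, proteins are first digested, typically with trypsin, into the peptides that the mass spectrometer actually measures. A minimal in-silico digest illustrating the standard cleavage rule (cut after K or R, but not before P); this is a sketch, not the authors' pipeline:

```python
import re

def tryptic_peptides(protein_seq, min_len=6, max_len=30):
    """In-silico tryptic digest: trypsin cleaves C-terminal to K or R,
    except when the next residue is proline (P). Bottom-up proteomics
    typically measures peptides in roughly this length range."""
    # Zero-width split: after any K/R that is not followed by P.
    pieces = re.split(r'(?<=[KR])(?!P)', protein_seq)
    return [p for p in pieces if min_len <= len(p) <= max_len]

seq = "MKWVTFISLLLLFSSAYSRGVFRRDTHK"
print(tryptic_peptides(seq, min_len=3))
# → ['WVTFISLLLLFSSAYSR', 'GVFR', 'DTHK']
```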

With ~2 million peptide (following protein digestion, the smaller pieces of amino-acid sequence are called ‘peptides’) and 340,000 stringent protein identifications, obtained in a standardized manner, authors doubled the number of proteins with solid experimental evidence known to the scientific community. These data [see attached article] provide a large-scale case study for sequence-based machine learning — as authors demonstrate by experimentally confirming the predicted properties of peptides from a simple bacterium, Bacteroides uniformis. These results offer a comparative view of the functional organization of organisms across the entire evolutionary range.

A remarkably high fraction of the total proteome mass in all kingdoms was found to be dedicated to protein homeostasis and folding; this highlights the biological challenge of maintaining protein structure in all branches of Life. Likewise, a universally high fraction is involved in supplying energy resources — although these pathways range from photosynthesis through iron-sulfur metabolism to carbohydrate metabolism. Generally, however, proteins and proteomes are remarkably diverse between organisms. They can readily be explored and functionally compared at www.proteomesoflife.org.

DwN

Nature 25 Jun 2020; 583: 592-596

Posted in Center for Environmental Genetics | Comments Off on Toward development of the proteome landscape for the kingdoms of life

Mapping and characterization of structural variants (SVs) in 17,795 human genomes

As the previous GEITP pages email described, in the early days of genetic studies, the concept (learned in grade school 😊) was that “DNA is transcribed into RNA, which is translated into protein.” Then, it was established that “DNA —> primary transcript, which then results in messenger-RNA (mRNA) via splicing; then mRNA (from DNA coding region) —> protein.” An early belief was the “one gene = one gene product (protein)” concept. Now, we know the primary transcript can exhibit dozens of start-sites and termination sites, leading to numerous mRNA splice variants often resulting in different proteins (many exhibiting distinct functions).


Studies of human genetics use whole-genome sequencing (WGS) to enable comprehensive trait-mapping analyses across the full diversity of genome variation — including SVs [defined as 50 base pairs (bp) or greater — such as insertions/deletions (indels), duplications, inversions, and other rearrangements]. SVs appear to have a disproportionately large role (relative to their abundance) in the biology of rare diseases, and in shaping heritable differences in gene expression in human populations. Rare and de novo SVs have been implicated in the genetics of autism and of schizophrenia; few other complex trait association studies have directly assessed SVs. One challenge for interpretation of SVs in WGS-based studies is the lack of high-quality publicly-available variant maps from large populations.
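The ≥50-bp size convention mentioned above is what separates SVs from small indels in most pipelines. A toy illustration (hypothetical helper, not the authors' scalable pipeline; real SV callers also use read-pair and split-read signals to detect inversions, duplications, and balanced rearrangements, which cannot be read off ref/alt lengths alone):

```python
def classify_variant(ref, alt):
    """Classify a variant by the length difference between its reference
    and alternate alleles, using the >= 50 bp structural-variant (SV)
    size convention described in the text."""
    size = abs(len(alt) - len(ref))
    if size == 0:
        return "SNV/MNV", size   # same length: substitution only
    if size < 50:
        return "indel", size     # small insertion/deletion
    label = "SV (deletion)" if len(ref) > len(alt) else "SV (insertion)"
    return label, size

print(classify_variant("A", "T"))        # ('SNV/MNV', 0)
print(classify_variant("A" * 120, "A"))  # ('SV (deletion)', 119)
```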


Authors [see attached article] used a scalable pipeline to map and characterize structural variants in 17,795 deeply-sequenced human genomes, and they publicly released these site-frequency data to create the largest WGS-based SV resource to date. On average, each individual genome contains ~2.9 rare SVs that alter coding regions; these SVs affect the dosage or structure of ~4.2 genes and account for 4.0–11.2% of rare high-impact coding alleles…!! Using a computational model, authors estimated that SVs account for 17.2% of rare alleles genome-wide — with predicted deleterious effects equivalent to those of loss-of-function (LoF) coding alleles; ~90% of such SVs are noncoding deletions (with a mean of 19.1 per genome).


Authors reported 158,991 ultra-rare structural variants (u-rSVs) and showed that ~2% of individuals carry megabase-scale u-rSVs, nearly half of which are balanced, or complex, rearrangements. Lastly, authors inferred the dosage sensitivity of genes and noncoding elements, and identified trends that relate to element class and conservation. This study will help guide the future analysis and interpretation of SVs in the era of WGS.  😊


DwN


Nature 2 Jul 2020; 583: 83-89

Posted in Center for Environmental Genetics | Comments Off on Mapping and characterization of structural variants (SVs) in 17,795 human genomes

Osteocalcin promotes bone mineralization but is not a hormone

Today’s topic in these GEITP pages is an excellent example of controversial conclusions (over more than two decades) able to be overturned by additional experiments; this is an example of how rigorous science is supposed to correct itself. Osteocalcin (OCN) is a 46-amino-acid protein, synthesized and secreted almost exclusively by osteoblasts (terminally-differentiated cells responsible for synthesis and mineralization of bone matrix during development of the skeleton — and its periodic regeneration throughout life). Osteoblasts originate from mesenchymal progenitors and are short-lived cells that are constantly replaced — depending on demand for bone formation. OCN secreted by osteoblasts contains three γ-carboxyglutamic acid residues that confer high affinity to the bone hydroxyapatite matrix. However, when bone is resorbed by osteoclasts (a macrophage-derived cell-type), the carboxyl groups on OCN are removed, and decarboxylated OCN is released into the circulation. Circulating levels of decarboxylated OCN are therefore dependent on the rate of bone turnover, also known as bone-remodeling.

Originally thought to function exclusively in bone, a more expansive view of decarboxylated OCN as an endocrine hormone has evolved over the past two decades; this all began with characterization of “an OCN-knockout mouse.” OCN had been proposed to act on multiple end organs and tissues — including pancreas, liver, fat cells, muscle, male gonads, and brain — to regulate functions ranging from bone mass accumulation to body weight, adiposity, glucose and energy metabolism, male fertility, brain development, and cognition. Because of such a wide range of purported target cell-types, OCN has been considered a hormone (any signaling-molecule, produced by glands in multicellular organisms, that is transported by the circulatory system to target distant organs to regulate physiology and behavior).

However, many puzzling differences between mouse and human OCN function(s) have been reported. One possible explanation (for differences between results in mice and in humans) is that OCN genetics and function differ between humans and mice; humans have a single OCN gene named BGLAP (bone gamma-carboxyglutamate protein), whereas mice have two adjacent genes, Bglap and Bglap2. The view that OCN is an endocrine hormone with pleiotropic effects is widely cited in textbooks and reviews and has provided the rationale for numerous clinical studies on the relationship between OCN and diabetes or obesity. This view has now been seriously challenged by two studies of independent OCN-knockout mouse models: PLoS Genet Jun 2020; 16: e1008586 and PLoS Genet Jun 2020; 16: e1008361.

In the first paper, authors replaced DNA encoding BGLAP and BGLAP2 proteins with a neo cassette in embryonic stem cells, and then investigated the role of OCN on bone formation and mineralization, as well as glucose metabolism, testosterone production, and muscle mass. In contrast to results reported more than two decades ago, OCN was found NOT to participate in bone formation (or resorption) and bone mass in either estrogen-sufficient or estrogen-deficient states. Instead, OCN was shown to be indispensable for alignment of biological apatite crystallites parallel to collagen fibers (see Figure in attached review). Loss of OCN function had NO EFFECT on collagen orientation. However, bone strength was decreased in the OCN-deficient mice — indicating that alignment of crystallites with collagen fibers is one of the elusive determinants of bone quality that (together with bone mass) determines the ability of bone to resist fractures. In addition, OCN was demonstrated TO PLAY NO ROLE in exercise-induced bone formation, glucose metabolism, improvement of glucose metabolism caused by exercise, testosterone synthesis, spermatogenesis, or muscle mass. ☹

In the second paper, authors used CRISPR/Cas9-mediated gene editing to delete most of the Bglap and Bglap2 protein-coding regions. Those gene-edited mice have no circulating OCN — but revealed normal bone mass as well as normal blood glucose and normal male fertility. Also, RNA-seq transcriptomics of cortical bone samples from the OCN-deficient mice showed minimal differences from non-gene-edited control mice. The gene-edited mice DO exhibit increased bone crystal size and maturation of hydroxyapatite, consistent with that found in the first paper, as well as earlier evidence by many other groups, and the general consensus that OCN plays a role in mineralization.

How can we explain the apparent discrepancies between these two recent articles and other conclusions made for more than two decades? Genetic background of the mouse strain, modifier genes, and differences in the molecular genetics of the knockout alleles — remain possible explanations. However, both groups explicitly state that these latest mouse lines will be publicly available so that their findings can be confirmed and extended by other interested investigators; indeed, the importance of resource-sharing is one of the most valuable take-home messages of this story. 😊

DwN

PLoS Genet Jun 2020; 16: e1008714

COMMENT:

Well, the take-home message of that OCN story was that there can be discrepancies among knockout mouse models — due to issues such as genetic background of the mouse line, modifier genes, and differences in how the knockout alleles had been created. From the time of the first OCN-knockout mouse study, on the basis of which numerous textbooks and review articles presented what they believed was 100% correct, it took 24 years before two new knockout mouse lines proved that original study to be inaccurate.

A second take-home message of that OCN story is simply to demonstrate that “science is never settled.” New approaches and new experiments can disprove a previously-established “fact” based on what had been done at that time.

DwN

From: W T. P

What about vitamin K2 variability — genetic differences in response, varying levels of dietary intake? Vitamin K2 is understood to be required for OCN function as well, correct? It would be most interesting to know/examine whether Vitamin K2-deficient mice have normal bone mass, assuming lethality under conditions of Vitamin K2 deficiency does not preclude such experiments (while maintaining adequate levels of Vitamin K1, of course).

Because of its practical significance to human health, vitamin K2 status may be an even more significant variable than genetic differences among OCN-knockout mouse lines. Thank you for an interesting topic.

Posted in Center for Environmental Genetics | Comments Off on Osteocalcin promotes bone mineralization but is not a hormone