Questions from a Korean magazine and answers (by PROFESSOR RICHARD LINDZEN)-1

Various sources give slightly different dates for “The Little Ice Age,” but a good estimate is that “The Little Ice Age spanned from about 1550 to about 1850” –– which would make it mid-16th Century to mid-19th Century. In the middle was a period of seriously low solar activity, known as the Maunder Minimum (1645–1715). [During this time, the Thames River sometimes froze over. Due to substantially decreased solar activity over the past 15 years, many climate scientists today fear that Earth might be approaching another Maunder Minimum.]

I agree that the beginning and ending dates of the Little Ice Age are a bit fuzzy (e.g. one could say “it spanned from ~1570 to ~1820”). However, Professor Lindzen’s statement “that the Little Ice Age ended about 200 years ago” seems reasonable.

The most important take-home messages from Lindzen’s answers are that: [1] “the vast majority of the population is scientifically illiterate”, and one could extrapolate to say that “probably at least 99% of all scientists are illiterate with regard to climate science”; and [2] “Weather” reflects the extremes of temperature we see from day to day, month to month, and year to year –– whereas “Climate” is measured in 30-year segments (three per century) and embodies “centuries of warming or cooling periods.” The graph below plots global-temperature estimates from the end of the last Glacial Period (~11,000 years ago) to the present. One can see the basis for Professor Lindzen’s figure of an approximately 1.0 °C rise in temperature since the Little Ice Age.
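
For readers who want to see what the 30-year convention means in practice, here is a minimal sketch in Python, using entirely hypothetical temperatures (not any real record, and certainly not Professor Lindzen’s analysis): year-to-year “weather” noise largely averages out when one computes the classical 30-year “climate normals”.

import random

random.seed(1)
years = list(range(1901, 2001))
# hypothetical annual mean temperatures: a tiny warming trend plus "weather" noise
annual = [14.0 + 0.005 * (y - 1901) + random.gauss(0, 0.3) for y in years]

# classical 30-year "climate normals" (1901-1930, 1931-1960, 1961-1990)
for start in (1901, 1931, 1961):
    block = [t for y, t in zip(years, annual) if start <= y < start + 30]
    print(f"{start}-{start + 29} climate normal: {sum(block) / len(block):.2f} degrees C")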

Posted in Center for Environmental Genetics | Comments Off on Questions from a Korean magazine and answers (by PROFESSOR RICHARD LINDZEN)-1

Honey bees apparently can “understand” that “zero” is different from “5” and also different from “6”

The number “zero” is central to contemporary mathematics, as well as to our scientifically and technologically advanced culture. Yet it is a difficult number to truly “understand”. Children grasp the symbolic number “zero” long after they start to understand, at ~4 years of age, that “nothing” can be a numerical quantity (i.e. ‘the empty set’), a quantity smaller than “one”. Therefore, scientists had assumed that the concept of “nothing” as a numerical quantity would be beyond the comprehension of any nonhuman animal.

Recent studies on cognitively-advanced vertebrates (i.e. animals having a spinal column/backbone) have challenged this view, however. Monkeys and birds can not only distinguish numerical quantities, but can also grasp “the empty set” as the smallest quantity on the mental-number line. Authors [see attached article & editorial] demonstrate that the honey bee –– a small insect on a branch very remote from humans on the animal tree-of-life –– also belongs to “the elite club” of animals that understand “the empty set” as the conceptual precursor of the number “one”.

Authors trained individual honey bees on the numerical concepts of “greater than” versus “fewer than” –– using stimuli containing from one to six elemental features. Through this training, honey bees proved able to extrapolate the concept of “less than” and to place (in their little bee brains) “zero numerosity” at the lowest end of the numerical continuum (i.e. bees could distinguish that “zero” was less than “one”, less than “two”, less than “three”, less than “four”, less than “five”, and less than “six”). Bees therefore exhibit an understanding that parallels that of vertebrate animals –– including the African grey parrot, chickens, nonhuman primates, and even preschool children.

Science 8 June 2018; 360: 1124–1126 & pp 1069–1070 [Editorial]

Posted in Center for Environmental Genetics | Comments Off on Honey bees apparently can “understand” that “zero” is different from “5” and also different from “6”

A reappraisal of the additive-to-background assumption in cancer risk assessment

A series of 15+ papers published by Ed Calabrese over more than 10 years has detailed the history behind the 1956 recommendations of the U.S. National Academy of Sciences (NAS) Biological Effects of Atomic Radiation (BEAR) I Committee (Genetics Panel) to switch from a threshold to a linear non-threshold (LNT) dose–response model. Adoption of the LNT Model has been the most significant risk-assessment policy change ever made, and it was rapidly taken up by highly influential national and international advisory committees. This policy has also determined the setting of “exposure standards” for ionizing radiation and chemical carcinogens in the Western world.

The LNT model, whether for ionizing radiation or chemicals, assumes that even a “single hit” of irradiation, or a “single molecule” of a chemical, is dangerous. This same genetic-risk prediction would apply to chemotherapeutic agents and other drugs. The threshold model, on the other hand, says that a certain amount of irradiation, or of a chemical, can be “absorbed” (i.e. “tolerated”) by the living cell or organism. This brings to mind “oxidative stress”, nitric oxide, and carbon monoxide –– all of which can be toxic at sufficiently high doses, yet at lower doses act as second-messenger “signals” important in many genetic-network pathways and critical life processes, including cell division/migration, physiological functions, and embryogenesis. Moreover, in pharmacology it is well known that a drug administered to a patient can lead to therapeutic failure, efficacy, or toxicity (i.e. dosage too low to see any effect, dosage “just right” that benefits the patient, or dosage too high, respectively).
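
To make the contrast concrete, each model can be written in a single line. The sketch below is purely illustrative, with made-up slope and threshold values (no regulatory numbers are implied):

def lnt_risk(dose, slope=0.01):
    # linear non-threshold: any dose above zero carries a proportional excess risk
    return slope * dose

def threshold_risk(dose, slope=0.01, threshold=10.0):
    # threshold model: doses below the threshold are "absorbed"/tolerated, with no excess risk
    return slope * max(0.0, dose - threshold)

for dose in (0, 1, 10, 100):
    print(f"dose {dose}: LNT risk {lnt_risk(dose):.2f}, threshold-model risk {threshold_risk(dose):.2f}")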

In any carcinogen dose–response study, the “control” group of animals virtually always shows “spontaneous” tumors, which are regarded as “the background”; these are routinely “subtracted” from the number of tumors seen in the experimental group. However, an important, but often overlooked, area of cancer-risk assessment (i.e. cancer dose–response assessment) is the “additive-to-background” assumption. This hypothesis essentially assumes the LNT Model (i.e. any agent causes cancer at any dose, and by the same mechanism as spontaneous tumors). The assumption was proposed in the mid-1970s and was incorporated into governmental risk-assessment policy and practices a decade later. It was not possible in the 1970s to assess its validity scientifically, because the explosion in molecular-biology studies and the discovery of oncogenes had only just begun. Today it is possible to evaluate that validity. The attached publication shows that the additive-to-background assumption (i.e. that spontaneous and induced tumors occur via the same mechanisms) is not compatible with the extensive cancer-research literature now available.
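
As a toy numerical example of this background subtraction (the tumor counts below are invented and are not taken from the attached paper):

control_tumors, control_n = 5, 50    # "spontaneous" (background) tumors in controls
treated_tumors, treated_n = 12, 50   # tumors in the carcinogen-exposed group

background_rate = control_tumors / control_n   # 0.10
treated_rate = treated_tumors / treated_n      # 0.24
excess_rate = treated_rate - background_rate   # 0.14 attributed to the carcinogen

# The additive-to-background assumption treats this excess as arising by the SAME
# mechanism as the spontaneous tumors; the attached paper argues that it generally does not.
print(f"background {background_rate:.2f}, treated {treated_rate:.2f}, excess {excess_rate:.2f}")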

Authors [attached] evaluated the additive-to-background assumption by using the findings of modern molecular toxicology –– including oncogene activation/mutation, gene regulation, and molecular-pathway analyses. Based on published studies of 45 carcinogens in 13 diverse mammalian models, and across a broad range of tumor types, compelling evidence indicates that carcinogen-induced tumors are mediated, in general, via mechanisms that are not the same as those causing spontaneous tumors in appropriate control groups. This conclusion therefore challenges a fundamental hypothesis of the additive-to-background concept. Such findings should prompt new consideration by those involved in cancer-risk-assessment policy and regulatory-agency practice, as well as a re-examination of fundamental concepts of cancer biology.

Environ Res 2018; 166: 175–204

Posted in Center for Environmental Genetics | Comments Off on A reappraisal of the additive-to-background assumption in cancer risk assessment

Adaptive radiation is seen when the stickleback fish is exposed to a new niche that has no predators

Gene-environment interactions play an extremely important role in the adaptation of any species to a new or (rapidly) changing environment. For any living organism to colonize a new habitat or niche, it often must rapidly adapt to multiple environmental challenges at once (so-called ‘multifarious’ divergent selection). This is most dramatic in adaptive radiations, in which rapid successions of niche and habitat shifts take place within a lineage. However, most adaptive radiations started thousands of generations ago; thus, no one knows whether major phenotypic and genomic adaptation occurs within the first few generations of colonizing a new habitat, or over much longer time scales.

An excellent example of an “evolutionary bloom” is the secretoglobin (SCGB) gene family [Hum Genom 2011; 5: 691–702]. If one compares the human and mouse genomes, the human genome has 11 SCGB genes and five pseudogenes, whereas the mouse genome contains 68 Scgb genes –– four of which are highly orthologous (similar in DNA/protein sequence) to human SCGB genes; the remaining 64 genes represent an ‘evolutionary bloom’ and make up a large gene family having similarities to only six of the human SCGB counterparts. Such a “bloom” occurs by (rapid?) gene-duplication events that create many genes in tandem, each encoded protein then diverging to “handle” some new adverse environmental signal. This “bloom” is evidence of a dramatic adaptive radiation in the mouse ancestor (during the past 70 million years), in response to some (unknown) adverse environmental challenge(s) that the human ancestor did not face..!!

Authors [see attached] decided to test whether rapid adaptation is possible –– in the adaptive radiation of threespine stickleback fish on the Haida Gwaii archipelago (in Western Canada). In a selection experiment that ONLY took 19 years, authors allowed stickleback fish from a large blackwater lake to evolve in a small clear-water pond that had no apparent vertebrate predators. Authors then compared 56 whole genomes from the experimental group and 26 whole genomes from natural populations. Authors found that adaptive genomic change was rapid in many small genomic regions and encompassed 75% of the change between 12,000-year-old ecotypes (which are distinct forms, or races, of a plant or animal species, occupying a particular habitat). Genomic change was as fast as phenotypic change in defense and trophic morphology, and both were largely parallel between the short-term selection experiment and long-term natural adaptive radiation. (Yes, even “living without predators” can represent a “new, shocking environmental adverse event”!) This really cool exciting experiment therefore shows that (functionally relevant) standing genetic variation can persist in derived radiation members –– thereby allowing adaptive radiations to unfold (evolutionarily) very rapidly..!!
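
One simple way to picture the “75% of the change” result is to express the 19-year allele-frequency change at a locus as a fraction of the long-term difference between ecotypes. The numbers below are invented for illustration and are not the authors’ data:

# invented allele frequencies at one hypothetical locus (not the authors' data)
freq_source_lake = 0.10   # ancestral blackwater-lake population
freq_old_ecotype = 0.90   # 12,000-year-old clear-water ecotype
freq_after_19yr = 0.70    # experimental pond population after 19 years

fraction = (freq_after_19yr - freq_source_lake) / (freq_old_ecotype - freq_source_lake)
print(f"fraction of the long-term divergence achieved in 19 years: {fraction:.0%}")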

Nature Ecol Evol July 2018; 2: 1128–1138

Posted in Center for Environmental Genetics | Comments Off on Adaptive radiation is seen when the stickleback fish is exposed to a new niche that has no predators

The gut microbiome is responsible for ketogenic diet-mediated protection against epileptic seizures

As often mentioned in these GEITP pages –– the microbiome (gut bacteria, which actually comprise more than 90% of all the DNA in us) is increasingly recognized to play an important role in gene-environment interactions. In today’s topic, the TRAIT (phenotype) is ketogenic diet-mediated protection against epileptic seizures (a seizure occurs when the brain has a sudden burst of electrical activity). The genotype of specific patients unfortunately makes them highly susceptible to seizures. The environmental factor is this particular diet, and the response (or lack of response, i.e. whether the patient is “cured”) reflects the gene-environment interaction.

The ketogenic diet is well known as a successful treatment for many who have refractory epilepsy (i.e. when various medicines are tried, and they do not bring seizures under control; sometimes called drug-resistant epilepsy), but the mechanisms underlying the neuroprotective effects of the ketogenic diet remain unclear. Authors [see attached article] show that the gut microbiome is transformed by the ketogenic diet and the microbiome is required for protection –– in two mouse models –– against sudden electrically-induced seizures. Mice, when treated with antibiotics or reared in a germ-free facility, are resistant to the ketogenic diet-mediated seizure protection.

Enrichment of the microbiome with two ketogenic diet-associated bacterial species restores seizure protection. Furthermore, transplantation of either of these two ketogenic diet-associated gut bacterial species confers seizure protection to mice fed a control diet. Alterations in the metabolic profile –– seen in the lumen of the colon, blood serum, and the hippocampal region of the brain –– all correlate with seizure protection, including decreases in systemic γ-glutamylated amino acids and elevated hippocampal γ-aminobutyric acid (GABA)/glutamate levels. Bacterial cross-feeding decreases γ-glutamyltranspeptidase (GGT) activity, and inhibiting this γ-glutamylation promotes seizure protection in the intact animal. This fascinating study reveals that the gut microbiome can modulate the host’s protection against intractable epilepsy.

Cell June 2018; 173: 1728–1741

COMMENT:
I thought of one more worthwhile point to add: The “ketogenic diet” is, in fact, a form of treatment, although it is not specifically a DRUG. However, I would regard this as “being within the realm of gene-drug interactions”. I predict that, in the near future, we will see a flurry of publications on “gene responses” to a drug (in humans as well as mice) –– showing that alterations in an individual’s microbiome can lead to serious consequences for the drug response, i.e. the phenotype.

DwN

This is funny—I just recently listened to a podcast on the origins of the ketogenic diet. I had no idea that it helped some people with seizures..!!
http://maximumfun.org/sawbones/sawbones-ketogenic-diet

I also have a new faculty member who is interested in the gut-lung interaction. He has sent me some research projects—but everything is exploratory and correlational. Frank McCormack wants mechanisms in proposals, and perhaps we don’t know enough yet, about the gut microbiome, to propose mechanisms. I have a “gut feeling” (pun intended) that the immune system is the intermediary. (EK)

Posted in Center for Environmental Genetics | Comments Off on The gut microbiome is responsible for ketogenic diet-mediated protection against epileptic seizures

Thinking of a Fluoroquinolone? Think Again

On these GEITP pages, we deal with gene-environment interactions, and a major subset of this topic is gene-drug interactions: efficacy, therapeutic failure, dose-dependent adverse drug reactions (ADRs), and dose-independent ADRs. All of these categories constitute the TRAIT (phenotype). The patient’s response to each drug can be affected by his genotype (DNA sequence differences), epigenetic effects (DNA-methylation, RNA interference, histone modifications & chromatin remodeling), endogenous influences (e.g. renal function, cardiovascular status, etc.), environmental factors (e.g. cigarette smoking, drug-drug interactions, diet, occupationally hazardous chemicals, etc.), and (still largely unappreciated) possible contributions from each patient’s microbiome (gut bacteria, which actually comprises ~90% of all the DNA in us).

This article just appeared today on Medscape and is worth sharing immediately, for two reasons. First, fluoroquinolones have become a very popular choice of antibiotic for all kinds of infections. Second, the (dose-INDEPENDENT) ADRs of fluoroquinolones are now realized to be, in some cases, extremely serious –– although the percentage of patients in whom such drug toxicity occurs is still being determined.

DwN

Thinking of a Fluoroquinolone? Think Again

Sarah Kabbani, MD, MSc

July 16, 2018

Medscape Editor’s Note: Since this commentary was prepared, the US Food and Drug Administration (FDA) has strengthened its black box warning for fluoroquinolones to require a separate warning about the drug’s potential mental side effects (disturbances in attention, disorientation, agitation, nervousness, memory impairment, and delirium), and to add a warning about the risk for coma with hypoglycemia. They reiterate their position that because the risk for serious side effects generally outweighs the benefits –– for patients with acute bacterial sinusitis, acute bacterial exacerbation of chronic bronchitis, and uncomplicated urinary tract infections –– fluoroquinolones should be reserved for use in patients with these conditions who have no alternative treatment options.

Hello. I am Sarah Kabbani, a medical officer with the Division of Healthcare Quality Promotion at the Centers for Disease Control and Prevention. Over the next few minutes, I will provide you with important information on fluoroquinolone-prescribing-and-use data, and why appropriate fluoroquinolone prescribing is an important patient safety issue.

Fluoroquinolones are the third most commonly prescribed outpatient antibiotic class in the United States in adults, with an estimated 115 prescriptions per 1000 persons annually.[1] In 2016, the FDA issued a black box warning (its strongest warning) to stress serious and disabling adverse events associated with systemic fluoroquinolone use, including damage to tendons, muscles, joints, nerves, and the central nervous system.[2]

A new study published in Clinical Infectious Diseases[3] reports that fluoroquinolones are commonly prescribed for conditions when antibiotics are not needed at all, or when fluoroquinolones are not the recommended first-line therapy. In medical offices and emergency departments, about 5% of all fluoroquinolones prescribed for adults are completely unnecessary, and about 20% of all fluoroquinolone prescriptions do not adhere to recommendations about the use of fluoroquinolones as a first-line therapy.

Fluoroquinolones are not recommended for such conditions as uncomplicated urinary tract infections and respiratory conditions, including viral upper respiratory tract infections, acute sinusitis, and acute bronchitis. According to another study, published in JAMA Internal Medicine,[4] only 52% of patients received the recommended first-line antibiotic therapy for three common infections — otitis media, sinusitis, and pharyngitis.

CDC recognizes that the majority of healthcare providers are familiar with antibiotic prescribing guidelines, but many providers admit that they or their colleagues often choose an antibiotic that is not the recommended first-line therapy.[5] Specifically, fluoroquinolones, with their well-documented efficacy, a broad spectrum of activity covering many common pathogens, and favorable pharmacokinetics, are perceived to be safer than other antibiotics, despite the serious adverse events associated with systemic fluoroquinolone use. These characteristics of fluoroquinolones may have led to overprescribing.[6] Additionally, healthcare providers often cite patient satisfaction as a reason for prescribing an antibiotic when no antibiotic is recommended.

Improving antibiotic prescribing is important for preventing serious adverse events and potentially life-threatening Clostridium difficile infections. Based on the FDA’s warning, fluoroquinolones should be used only in patients with acute bacterial sinusitis, acute bacterial exacerbation of chronic bronchitis, or uncomplicated urinary tract infections when no other treatment options are available.[2] It’s important to follow clinical guidelines when prescribing any antibiotic due to the serious potential adverse events.

Prescribing the right antibiotic, at the right dose, for the right duration, and at the right time helps optimize patient care and fight antibiotic resistance. CDC encourages healthcare providers, health systems, and regulators to view appropriate antibiotic prescribing as an important patient safety issue. For more information on antibiotic prescribing and use, please visit CDC’s Antibiotic Prescribing and Use.

Posted in Center for Environmental Genetics | Comments Off on Thinking of a Fluoroquinolone? Think Again

Frequent Technology Use Linked to ADHD Symptoms

For years, I’ve been suggesting this –– as a probable causal factor for ADHD. And maybe ASD as well.

Of course, it’s gene-environment interactions.

The blinking/flashing lights are the environment. The person’s genetic make-up leads to his being “more or less susceptible” to the stimulus/stimuli.

Frequent Technology Use Linked to ADHD Symptoms in Teens, Study Finds

Wall Street Journal—July 17, 2018

The more teens use social-media networking sites, video games, fast-action movies, and streaming services, the higher their risk of developing symptoms of attention-deficit hyperactivity disorder, or ADHD, a new study found. The study, published Tuesday in the Journal of the American Medical Association, tracked 2,500 teens over two years and monitored their digital-media usage and ADHD symptoms.

Posted in Center for Environmental Genetics | Comments Off on Frequent Technology Use Linked to ADHD Symptoms

Daughter’s Genome Comprises Almost Entirely Father’s Genes

10 JUL 18

Frances Shaw

Rare Case of Daughter’s Genome Made Up Almost Entirely of Father’s Genes

Usually, we inherit genes from each of our parents in fairly equal measure. However, there are now approximately 20 reported cases of children who inherited almost all of their genes from a single parent. Interestingly, it seems that only females have been found to inherit nearly all of their genes from their father.

There are three ways in which both copies of a particular gene or chromosome can come from the same parent; all are errors arising in the early stages of embryogenesis:

1. trisomy rescue, in which there is a mitotic loss of one of the three copies of the trisomic chromosome

2. monosomy duplication, in which the lone copy of a chromosome pair is duplicated via non-disjunction

3. gamete complementation, in which a gamete (sperm or egg) that is missing one chromosome unites with a gamete containing two copies of that chromosome by chance

When these mechanisms occur across all 23 chromosomes, a child is born with a genome that is made up almost entirely of a single parent’s genes; a phenomenon termed uniparental diploidy.
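
As a rough sketch of how uniparental inheritance can be flagged from family (trio) genotype data: at informative SNPs, the child shows no allele that must have come from the mother. The genotypes below are hypothetical and are not from the reported case:

# hypothetical (mother, father, child) genotypes at informative SNPs (not the reported case)
trio_genotypes = [
    ("AA", "GG", "GG"),
    ("CC", "TT", "TT"),
    ("GG", "AA", "AA"),
]

def lacks_maternal_allele(mother, child):
    # True if none of the mother's alleles appears in the child's genotype
    return not any(allele in child for allele in set(mother))

flagged = [snp for snp in trio_genotypes if lacks_maternal_allele(snp[0], snp[2])]
print(f"{len(flagged)} of {len(trio_genotypes)} informative SNPs show no obligate maternal allele")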

However, having two identical copies of the same gene isn’t always ideal, especially in the case of recessive disorders. This explains why every case reported, except one, has been associated with a very high risk of cancer.

The exceptional case was reported in the Journal of Human Genetics: an 11-year-old girl, a genetic “mosaic” whose cells carry almost exclusively her father’s DNA, suffers from deafness but shows no signs of cancer (so far). Although no cancer has been found, the girl has an extremely high risk of malignancy and is being carefully monitored through a dedicated outpatient pediatric oncology program.

The study also documented the differences between the percentages of maternal and paternal genes in various tissues. Only about 7% of her blood cells, for example, showed any maternal genes. And 74% of the cells in her saliva held only paternal genes.

Posted in Center for Environmental Genetics | Comments Off on Daughter’s Genome Comprises Almost Entirely Father’s Genes

How did we modern humans (Homo sapiens) evolve?

These GEITP pages have often examined the latest advances in our understanding of how we modern humans (Homo sapiens) evolved. During the last three decades, our understanding has advanced greatly. Most research has supported the theory that Homo sapiens originated in Africa “no more than ~200,000 years ago (Ya)”, but the latest discoveries suggest that the events were more complex than previously thought (this is common in virtually every field of science). The data confirm interbreeding between Homo sapiens and other hominin species. The data also provide evidence for Homo sapiens in Morocco as early as 300,000 Ya, and they further indicate incremental changes in the shape of the Homo sapiens cranium (skull). Although cumulative evidence still suggests that all modern humans descended from African Homo sapiens populations –– which replaced local populations of earlier archaic hominins –– models of modern human origins must now include the “substantial interactions” with those populations (Homo neanderthalensis & Homo denisova) that occurred before they became extinct [see the tree diagram on p 1297 of the attached editorial].

Although today’s humans vary in traits such as body size, shape, facial structure, and skin color, we clearly belong to a single species, Homo sapiens –– which is characterized by shared features such as a narrow pelvis, a large brain housed in a globular braincase, and decreased size of the teeth and surrounding skeletal architecture. These traits (phenotypes) distinguish modern humans from other (now-extinct) members of the genus Homo (e.g. the Neanderthals in western Eurasia and the Denisovans in eastern Eurasia). This excellent editorial [attached] describes chronologically how a 1987 study, using mitochondrial DNA from modern humans, indicated a recent and exclusively African origin for modern humans. In the following years, fossil and genetic data, combined, further supported this recent African origin (RAO) of our species.

The RAO Model postulates that, by 60,000 Ya, the shared features of modern humans had evolved in southeast Africa and, via population dispersals (i.e. the Great Human Diaspora), began to spread from there across the world. Some have opposed this “single-origin” view, as well as the narrow definition of Homo sapiens that excludes fossil humans such as the Neanderthals. In recent years, however, new fossil discoveries, the growth of ancient-DNA technology, and improved radiocarbon-dating techniques (applied to material associated with the fossils) have raised questions about whether “the RAO theory of Homo sapiens evolution” needs to be completely revised.

We now know that ~6% of the modern human genome comprises Neanderthal alleles (two or more alternative forms of a gene that arise by mutation and are found at the same place on a chromosome), and some populations (e.g. Oceanians) have genomes carrying substantial amounts of Denisovan alleles. So, there is growing evidence for a longer-term “coexistence” (i.e. admixture, messing-around, hanky-panky, etc.) of Homo sapiens and other lineages outside of Africa –– consistent with the Assimilation Model. The accumulated evidence still points to the evolution of the shared anatomical features of Homo sapiens as an African phenomenon. How the ancestral populations interacted within Africa currently remains unclear; genomes have not been successfully reconstructed from African fossils older than about 15,000 Ya. However, if such data become available, they will hopefully clarify many of the remaining uncertainties. 🙂

Science 22 June 2018; 360: 1296–1298

Posted in Center for Environmental Genetics | Comments Off on How did we modern humans (Homo sapiens) evolve?

Frequency of Amyotrophic Lateral Sclerosis (ALS) associated with statin usage

This publication –– concerning drug efficacy vs possible association with long-term undesirable drug toxicity –– has been ruminating in my mind for some weeks, so I asked for input on the statistical analysis. Statin cholesterol-lowering drugs are among the most widely prescribed drugs in the world today. The benefits (lowering cholesterol and therefore reducing risk of coronary heart disease) seem to be so fantastic that some have actually proposed adding statins to our drinking water.

Like all drugs, statins have the potential to produce adverse drug reactions (ADRs). Particular focus has been on muscle effects (i.e. pain, weakness, and increased fatigue), but occasionally more serious effects occur, such as rhabdomyolysis (destruction of striated muscle), necrotizing autoimmune myopathy (abnormalities of skeletal-muscle structure & metabolism), and the triggering (or ‘unmasking’) of mitochondrial myopathy. Concerns have also been raised about possible increases in the occurrence of amyotrophic lateral sclerosis (ALS)-like muscle-wasting conditions associated with statin use. ALS is a fatal neurodegenerative disease –– affecting muscle-controlling neurons –– that characteristically leads to rapidly progressive paralysis, ending in death, usually from respiratory failure or aspiration of food material.

In the attached study, authors examined US FDA Adverse Event Reporting System (FAERS) data –– to compare reporting odds ratios (RORs) of ALS and ALS-like conditions in patients taking the various statins. They looked at disproportionate rates of reported ALS and ALS-related conditions for each statin agent separately. RORs ranged from 9.09 (range 6.57–12.6) and 16.2 (range 9.56–27.5) for rosuvastatin and pravastatin (hydrophilic, i.e. relatively water-soluble) to 17.0 (range 14.1–20.4), 23.0 (range 18.3–29.1), and 107 (range 68.5–167) for atorvastatin, simvastatin, and lovastatin (lipophilic, i.e. relatively fat-soluble), respectively. These data extend previous evidence suggesting that the degree of increased ALS risk depends on which statin is studied.
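
For readers unfamiliar with the metric, a reporting odds ratio is computed from a simple two-by-two table of spontaneous-report counts. The sketch below, in Python, uses invented counts (not actual FAERS numbers) to show the calculation and the usual 95% confidence interval:

import math

# invented report counts for illustration (not actual FAERS numbers)
a = 30         # reports of ALS with the statin of interest
b = 10000      # reports of all other events with that statin
c = 300        # reports of ALS with all other drugs
d = 1000000    # reports of all other events with all other drugs

ror = (a / b) / (c / d)
se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)
ci_low = math.exp(math.log(ror) - 1.96 * se_log)
ci_high = math.exp(math.log(ror) + 1.96 * se_log)
print(f"ROR = {ror:.1f} (95% CI {ci_low:.1f} to {ci_high:.1f})")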

Critique: The association of ALS and ALS-like events with statin usage appears to be real, and this association might be causal, because it is supported by other studies, including studies in mice. However, the present result is not new –– although this study had a larger sample size than previous studies. Authors concluded that hydrophilic statins carry lower risk than lipophilic statins [e.g. the two extremes were rosuvastatin (ROR of 9) vs lovastatin (ROR of 107)]. This observation could be influenced by many confounding factors (i.e. caveats) not explicitly considered in this study, especially the duration of statin usage and the age of each patient. Obviously, the “older” statins would be expected to show higher apparent risks, because they have been used longer and in older patients (and probably in patients of lower socioeconomic status, because the older statins are cheaper than the newer statins). All of these issues might contribute to the apparent risk of ALS.

Drug Safety Apr 2018; 41: 403–413

Posted in Center for Environmental Genetics | Comments Off on Frequency of Amyotrophic Lateral Sclerosis (ALS) associated with statin usage