I cannot claim to be an expert on artificial intelligence (AI) or machine learning, but I would say that the essence of this approach is as follows:
Many things in science (more so in biology than perhaps in chemistry, and even less so in physics or mathematics) appear, to the human mind, as “extremely complicated patterns” — which humans are unable to fathom, interpret, or explain in any objective, unbiased, quantitative way. To me, this touches on “Chaos Theory” (that branch of mathematics which involves complex systems whose behavior is highly sensitive to slight changes in conditions, such that small alterations can give rise to strikingly different consequences).
Thus, what AI or machine learning does is attempt to minimize that bias: to examine these patterns (the greater the number N of observations, the better) and to quantify the data into the least-random (or highest-likelihood) dataset or explanation. Moreover, the candidate explanations can be ranked along a gradient — from highest likelihood to lowest likelihood. This type of analysis therefore minimizes human bias in the experiment. Testing the Müllerian mimicry theory in Heliconius butterflies [see attached article, which was described in yesterday’s email pasted below] represents an excellent example of an extremely complicated pattern that is much better quantified and analyzed by AI/machine learning than by any test using our human (biased) minds. 😊
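To make the idea of "ranking explanations from highest to lowest likelihood" concrete, here is a minimal sketch in Python. The observations and the three candidate Gaussian "explanations" are invented purely for illustration — real machine-learning pipelines fit far richer models, but the ranking principle is the same:

```python
import math
from statistics import NormalDist

# Hypothetical observations (e.g. a measured trait across N samples).
observations = [4.8, 5.1, 5.3, 4.9, 5.0, 5.2, 4.7, 5.1]

# Candidate "explanations": simple Gaussian models with assumed parameters.
candidates = {
    "model A (mean 5.0)": NormalDist(mu=5.0, sigma=0.2),
    "model B (mean 4.0)": NormalDist(mu=4.0, sigma=0.2),
    "model C (mean 6.0)": NormalDist(mu=6.0, sigma=0.2),
}

def log_likelihood(model, data):
    """Sum of log-densities: how well this model explains the data."""
    return sum(math.log(model.pdf(x)) for x in data)

# Rank candidate explanations from highest to lowest likelihood.
ranked = sorted(candidates.items(),
                key=lambda kv: log_likelihood(kv[1], observations),
                reverse=True)

for name, model in ranked:
    print(f"{name}: log-likelihood = {log_likelihood(model, observations):.1f}")
```

The model whose parameters best match the data ends up at the top of the ranking; the score itself says nothing about who proposed the model, which is the sense in which the procedure sidesteps human bias.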
Other examples of extremely complicated patterns obviously might include: predictions of meteorological and climate patterns; human complex diseases (e.g. autism spectrum disorder, mental depressive disorder, hypertension); phenotypic heterogeneity seen in the response to a drug; phenotypic heterogeneity to a complex mixture of environmental toxicants (e.g. substantial exposure to a toxic waste dump site).
DwN
Sci Adv 14 Aug 2019; 5: eaaw4967