This is a follow-up editorial by the same author; it offers some further pearls of wisdom. Scientific research is not always “a process of rolling, powerfully and inevitably forward”, but rather “oddly fragile”. Once we have a reasonably accurate picture of how things work, testing and validation can proceed in a well-organized, efficient manner. But in the early stages, when we really don’t understand the system and the possibilities seem endless, it is easy to go astray. “Is that 1.5-fold change meaningful? Maybe, if conditions are optimized, it will become a 5-fold change. Or maybe it’s just a blip and doesn’t mean a thing.” The question often becomes how much effort we should expend on working out the bugs and obtaining a robust result.
The author describes being fooled by a nice result that “happened twice, but then stopped; only after spending a long time trying to repeat it, and then exploring the system using other approaches, did I realize that it was not correct.” Alternatively, “we can get results that look nice once or twice, then stop working, but that, after extensive optimization, turn out to be correct and important.” The author also had a weak, hard-to-reproduce result that he abandoned, only to see it published a few years later in a very nice paper from another lab. In short, one must continuously be on guard for both false positives and false negatives.
Nor does statistical analysis solve the essential problem. No result is statistically significant at the start; statistical analysis comes later, only after the system has been optimized and the experiment repeated a number of times. In the early stages of a project, when the picture is just emerging, it is hard to know which results to trust, and intuition is a major factor. When you really know your system, sometimes a preliminary result just feels right, or makes so much sense that you are willing to follow a feeble lead. However, a major confounding variable in this process is the human tendency to want our hypotheses to be correct. [‘If my hypothesis is correct, it means I’m smart, I’m close to writing the paper, and then I have a good shot at landing the job or getting tenure.’] Our desire to be correct makes it harder actually to be correct.
There is a state of mind that facilitates clear thinking; in the title, the author jokingly called it “disinterest” (perhaps a better term is ‘open-mindedness’ or ‘not being judgmental/opinionated’). To be more accurate, the author says he might have called it “passionate disinterest”. Buddhists call it non-attachment. We all have hopes, desires and ambitions; non-attachment means acknowledging them and accepting them, but then not inserting them into a process that, at some level, has nothing to do with you.
Molecular biology and genetics occupy a middle ground between two opposite forms of exploration. The arts explore, in free-form fashion, every aspect of individual, subjective human experience. At the opposite pole, mathematics elucidates a kind of universal language that is true for all time in all places, independent of its creators. Molecular biology and genetics lie in between: scientists aim to discover universal laws, yet we do so through subjective experiences that we call “experiments”. Making non-attachment a central part of science education would be far more important than ethics classes or discussions of regulations about the “use of Photoshop in preparing figures”. 😊
J Cell Sci 2015 Aug; 128: 2745-2746
This (intriguing) one-page editorial is not directly related to gene-environment interactions, but has to do with teaching graduate students in the sciences. It was offered to me recently by a colleague and GEITP reader. The author of the article is still at the University of Virginia Charlottesville College of Medicine. Basically, he suggests that Ph.D. programs often do students a disservice in two ways.
First, students are not made to understand how hard it is to do research, and how very, very hard it is to do creative research. “It’s a lot harder than taking very demanding courses.” What makes it difficult is that research is immersion in the unknown: we simply do not know what we are doing, and we cannot be sure whether we are asking the right question, or doing the right experiment, until we get the answer or the result. [Admittedly, science is made harder by competition for grants and for space in top journals. But, apart from all that, doing significant research is intrinsically hard, and changing departmental, institutional, or national policies will not lessen its intrinsic difficulty.]
Second, mentors do not do a sufficient job of teaching grad students how to be “productively stupid” (i.e. ‘if we don’t feel stupid, then it means we’re not really trying’). The author is not talking about “relative stupidity, in which the other students in the class actually read the material, think about it, and ace the exam, whereas you don’t.” Nor is he talking about bright people who might be working in dull research areas that don’t match their talents. Science involves confronting our “absolute stupidity”: “that kind of stupidity is an existential fact, inherent in our efforts to push our way into the unknown.” [Perhaps ‘humility’ and ‘endless curiosity’ might be more appropriate terms than ‘stupidity’?]
Preliminary exams and thesis exams have the right idea: the faculty committee pushes until the student starts getting the answers wrong, or gives up and says, “I don’t know”. [The point of the exam is not to see whether the student gets all the answers right; if they do, it is the faculty who have failed the exam.] The point is to identify the student’s weaknesses, partly to see where they need to invest some effort, and partly to see whether the student’s knowledge fails at a sufficiently high level “that they are now ready to take on a creative research project.”
Productive stupidity (or ‘humility’, or ‘endless curiosity’?) means being ignorant by choice. Focusing on important questions puts us in the awkward position of being ignorant. One of the beautiful things about science is that it allows us to bumble along, getting it wrong time after time, and feel perfectly fine, as long as we learn something each time. No doubt this can be difficult for students who are accustomed to getting all the answers right. [Reasonable levels of confidence and emotional resilience certainly help.] However, the author thinks that science education could do more to ease what is a very big transition: from learning what other people once discovered to making your own discoveries. “The more comfortable we become with being stupid, the deeper we will wade into the unknown”, and the more likely we are to make important (i.e. creative) discoveries. In fact, most likely there is a genetic component to this trait, as well. 😊
J Cell Sci 2008 Jun; 121: 1771
COMMENT: Here are two quick responses from molecular toxicologists, both of whom have spent many decades in the trenches doing laboratory research. Dave, I agree that “ignorance” would have been a much more appropriate, and less provocative, word to use instead of “stupidity”. And both of you point out the urgent need for students to re-learn the null hypothesis and to remember The Scientific Method: (1) Make an observation. (2) Ask a question. (3) Form a hypothesis, i.e. a testable explanation. (4) Make a prediction based on the hypothesis. (5) Test the prediction, by attempting to reject the null hypothesis. (6) Repeat several times, before using the solid data to make new hypotheses or predictions. 😊
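Steps 5 and 6 above can be illustrated with a minimal sketch of null-hypothesis testing. This is only an illustration, not anything from the editorials themselves: the measurement values, the function name, and the random seed below are all hypothetical, chosen to mimic the “is that 1.5-fold change meaningful?” question raised earlier. A permutation test asks how often group labels shuffled at random would produce a difference as large as the one observed; if that happens rarely, the null hypothesis (no real difference) can be rejected.

```python
import random
import statistics

def permutation_test(control, treated, n_perm=10_000, seed=0):
    """Two-sample permutation test of the null hypothesis that
    'control' and 'treated' were drawn from the same distribution.
    Returns an approximate two-sided p-value."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    observed = statistics.mean(treated) - statistics.mean(control)
    pooled = list(control) + list(treated)
    n = len(treated)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # relabel the measurements at random
        diff = statistics.mean(pooled[:n]) - statistics.mean(pooled[n:])
        if abs(diff) >= abs(observed):
            hits += 1
    # add-one correction so the estimated p-value is never exactly 0
    return (hits + 1) / (n_perm + 1)

# Hypothetical expression measurements (arbitrary units),
# repeated six times per condition, as step 6 demands:
control = [1.00, 1.10, 0.90, 1.05, 0.95, 1.00]
treated = [1.50, 1.60, 1.40, 1.55, 1.45, 1.50]  # ~1.5-fold change

p = permutation_test(control, treated)
```

With six clean replicates per group, the shuffled labels almost never reproduce a 1.5-fold separation, so `p` comes out well below 0.05; with only one or two noisy replicates it would not, which is the quantitative version of “no result is statistically significant at the start”.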