
Anomalies and Falsification in Science

In science, an anomaly refers to a new observation or experimental result that seems to fly in the face of conventional, well-established scientific wisdom.

Anomalies are an important part of the scientific discovery process. Sometimes a particular anomaly or a collection of anomalies can precipitate a revolution in scientific thinking – dethroning an old way of thinking and bringing in a new one. A canonical example of this is the Michelson-Morley experiment, which started the ball rolling towards the Theory of Relativity.

Anomalies are important, but they are also frequently misunderstood and frequently misused.

Peddlers of pseudo-science love anomalies. Anomalies are the raison d’être of crank scientists the world over. They give them a justification for their “new paradigm”. When asked what the scientific basis is for their claims, they answer “because anomalies”. And there is an easy bait and switch going on:

“These anomalies falsify the scientific orthodoxy! So you might as well buy my theory instead, <wink> <wink>.”

The whole thing boils down to a confusion about falsifiability in science. Falsifiability is one of the key hallmarks of science. The philosopher Karl Popper defined science by the very idea that it makes claims that are testable and can therefore be proven wrong (falsified) by contradictory observations. Science, in the Popperian sense, is always tentative. But, Richard Feynman says it better than I can:

(His whole segment on the scientific method can be found here. It’s great.)

Scientists take falsifiability very seriously. We take anomalies very seriously. But, the relationship between the two is where some people get confused. A good number of people are under the impression that theories are falsified in a dramatic moment of unmasking, as at the end of a “Scooby Doo” episode. But, science rarely happens this way. It is not so simple.

What we often refer to as a theory in science is not a singular thing. A theory is a collection of many ideas and explanations that systematically explain observations. Any one prediction may be falsified, but it’s hard to falsify an entire theory in one fell swoop. In fact, if we were to throw away conventional scientific wisdom at the first sight of any anomaly, science would fundamentally lack stability.

A good scientist has the discipline to abandon an idea when it’s wrong. But, equally important is the discipline not to immediately give up on an otherwise good theory. A well-established or mature scientific framework is built on decades of observations, empirically established first principles, and countless successful predictions. To dismiss mature science on the basis of a few anomalies would be premature, and in many cases wrongheaded.

Scientists are constantly looking for new ways to push the envelope. We like anomalies because they’re way more interesting than confirming what we already know. We seek them out. We develop new instrumentation to give ourselves new sensitivity. We design new experiments to look in places we never looked before. And, the thing about venturing to new places is: you end up with a lot of false starts – mistakes, misunderstandings, experimental artifacts.

Anomalies are thus inevitable. So the right attitude is “trust but verify”, bearing in mind that verification takes time and work.

Sometimes anomalies completely dethrone the current paradigm. Far more often….they. just. don’t.

Here are some of the common fates of anomalies, from most likely to least:

1) It turns out to be an experimental artifact, a mistake. In my own field, the observation of neutrinos traveling faster than light turned out to be a consequence of a loose cable.

2) It is a real effect, but incomplete. A missing piece of the observation, once found, restores consistency with the theory. Feynman gives a good example of this – superconductivity. At first, the discovery of superconductivity seemed to completely contradict the known understanding of atomic physics. Eventually it was realized that a very subtle quantum mechanical phenomenon explained the effect. Once this was taken into account, atomic theory was again fully consistent.

3) It represents a real problem with the theory, but the essence of the theory survives with some modification (ranging anywhere from minor to major).

4) By itself or in combination with other anomalies, it dethrones the prevailing wisdom.

If you’re a person who rejects broadly accepted science, you’re inclined to see any anomaly as your own Scooby Doo ending! It’s the moment you’ve been waiting for, when orthodoxy collapses and you are vindicated! It is also too easy to use these anomalies to draw inexperienced skeptics away from established science and into the rabbit hole of quack science. Which brings me to one last essential point:

Old theories are almost never dethroned by anomalies, UNTIL there is a superior alternative that explains both the old observations and the new anomalies. The really frustrating part of the pseudo-scientific bait-and-switch is the innuendo:

“Well, if the mainstream science is wrong, then anyone’s guess is as good as anyone else’s. So, you might as well choose mine.”

This is simply not true. In the face of deep-rooted anomalies, one would be naive to blindly hold on to the established model. On the other hand, it is naive and unskeptical to settle for the first new theory to pass your way. The old paradigm survived a serious shaking, and any new theory should do the same…and then some. Only hindsight is 20/20, and few people in history can claim to have successfully “picked winners” early in a scientific crisis.

So when people without any scientific credentials drive up in tinted vans, offering you the sweet indulgence of “anomalies”, just say NO and run to someone you trust :).

But seriously: I may sound like a defender of the orthodoxy. My colleagues and I take anomalies very seriously. They are the constant talk of the lunch table. The faster-than-light neutrino measurement, ridiculous as it seemed, prompted a serious and genuine discussion of theories that could violate relativity. A lot of work went into understanding the measurement. The measurement was redone by two other experiments and neutrinos went back to being slower-than-light. And, in its own Scooby Doo-style ending, the culprit turned out to be a loose wire.

An important, if not THE most important, value of science is the willingness to revise or replace wrong ideas. But, as with anything, this is not a carte blanche principle. When anomalies appear, we should welcome them, but we must also exercise care and patience.


On the importance of being blind. On the importance of being a priori.

In the popular imagination, blindness is often used to denote a deficiency. In science, it is the greatest strength. My dissertation research was the product of 6 years’ work involving around 15 people. The most thrilling aspect of it was that the measurement was “double blinded”. Our calibrations were performed on a separate data set from the one used in our measurement. And, the code we used to fit our data was programmed to add an unknown number to offset the answer. We could look at all of our cross-check plots to evaluate the quality of our result. But, we were not allowed to peek and know the answer. We could not know the final answer until we were able to convince ourselves and others that our methods were sound. This prevented us from tuning the analysis to give us what we “expected to see”. The final unblinding came after a month of grueling peer analysis. It was done publicly and it was very scary. In the end, it was worth it. The process very much added to the quality of the work.
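To give a flavor of the mechanics, here is a minimal sketch of a blinding offset (hypothetical and far simpler than our actual analysis code; the class and names are made up for illustration):

import secrets

class BlindedFit:
    """Toy blinding scheme: a hidden random offset masks the fit result."""

    def __init__(self):
        # Hidden offset, generated once and never inspected;
        # it is only subtracted at the agreed-upon unblinding step.
        self._offset = secrets.randbelow(10_000) / 100.0

    def blinded(self, true_fit_value):
        # Everyone works with this shifted number. Cross-checks such as
        # residuals and goodness-of-fit are unaffected by a constant shift.
        return true_fit_value + self._offset

    def unblind(self, blinded_value):
        # Done exactly once, publicly, after the methodology is frozen.
        return blinded_value - self._offset

The specific mechanics matter less than the principle: the offset is fixed before anyone sees the result, so there is no way to tune the analysis toward a preferred answer.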

It is too easy to cherry-pick, to ignore inconvenient data, and to “move the goal post” when our observations of the world do not match what we want to believe. This type of behavior is called a posteriori (meaning after-the-fact) or post hoc reasoning. The best protection against this type of bias is to set the terms of the research before we know the outcome, and to be somewhat hard-nosed about sticking to the outcome. Setting your standards “in the first place” is what I mean by being a priori.

Let me give a concrete example of how easily an “unblinded” mind can fool itself. I recently attended a seminar hosted by Bob Inglis, a former congressman from South Carolina and one of the most interesting thinkers on the subject of climate change and policy[1]. His guest speaker, Yale Professor Dan Kahan[2], spoke about how party polarization affects people’s perceptions of scientific issues, and he presented the results of a really interesting study. In it, a group of people were tested for their math abilities and then given a subtle problem where they were asked to draw a conclusion from data on the “effectiveness of a drug trial”. The problem was designed to be counterintuitive, so most people got it wrong. But the top 10% in math abilities got it right. Similar groups of people were given the test, but the data was presented as “gun control” related. Unsurprisingly, liberals tended to favor the liberal answer, even when wrong. Conservatives tended to favor the conservative answer, even when wrong. But here’s the crazy part: on the politically framed version, the top 10% in math abilities were more likely to get it wrong than those with poor math skills[3]. This drives home how insidious confirmation bias is. The smarter a person is, the better they are at selectively filtering data to fit their prejudices!


Figure (from Ref [3]). Upper plot: groups were given two scenarios, one where the drug is ineffective and one where it is effective. The high-numeracy group is equally likely to get it right, and the low-numeracy group is equally likely to get it wrong, regardless of political affiliation. Lower plot: when the same data are presented as related to concealed carry, the more numerate liberals and conservatives are, the more likely they are to get the answer wrong in a way that is consistent with their ideology. For more details and explanation, see Ref [3] below.

Blinding and a priori reasoning are the antidote. When math wizards were given a dispassionate problem, they used dispassionate reasoning. When given a partisan problem they used partisan reasoning. If we can perform the reasoning without knowing the outcome ahead of time or if we can set our standards before we engage in research, then we are more likely to handle the data objectively.

Let’s now hash this out into a concrete prescription. Let’s say you are investigating the state of science on a very polarizing subject. Here’s an outline on how to proceed:

1) Choose your experts blindly! Most people gravitate to the one or two experts that tell them what they want to hear. In a large field with many tens of thousands of scientists worldwide, it is often possible to find fringe experts to corroborate any view. This will fundamentally misrepresent the state of the science. You can counteract this tendency by choosing a random sample. Make a list of 20 or so of the top scientific institutions you can think of. Randomly select 5 from a hat (see the sketch after this list). Look up the experts: people who actively research and publish on the subject. Engage them and ask your questions.

2) Write your questions ahead of time, before you are in a position to ask them.

3) Hash your questions out, before asking them. Run them by somebody else, especially someone with a different outlook. Peer collaboration is an important way to check yourself against bias. Have your colleague look at your questions and challenge you on them. Why did you pick those questions? Are you asking question X as a leading question or out of genuine curiosity? Do these questions cover the relevant issues? Revise your questions ahead of time.

4) Be a priori in thinking about outcomes. What sorts of answers might you get to the questions? Are the questions well defined enough to elicit precise responses? How might you fall into the trap of hearing what you want to hear? Are there ways of framing the questions to prevent this?

5) Go out and talk to the experts.

6) Listen, really listen.

7) Repeat as needed.
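Here is the sketch promised in step 1 (Python, with placeholder institution names; the list itself is yours to write). The only point is that the draw is committed to before you hear anyone’s answers:

import random

# Hypothetical list: write down your ~20 institutions *before* looking up
# anyone's views on the contested question.
institutions = [f"Institution {i:02d}" for i in range(1, 21)]

# Draw 5 at random: the "hat". Recording the seed makes the draw auditable,
# so you can show that you did not re-roll until you liked the sample.
rng = random.Random(20130901)
chosen = rng.sample(institutions, 5)
print(chosen)

From there, steps 2 through 7 apply to whatever names the draw gives you, whether or not you like them.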

By pre-defining and constraining your research method before engaging in the research itself, you keep yourself from steering the research to suit your bias. Instead, you steer it on the basis of what you see as good research methods.

This may seem like common sense, but I am always amazed at how few people even make an effort to design their research process to guard against bias. What I love about science is the constant push to implement protections against these effects.

One takeaway of the Kahan study: being biased is not restricted to the ignorant. It is equally (if not more so) a challenge to those with literacy and sophistication. Working against our confirmation bias is a universal struggle and it requires constant diligence. Fortunately, there are tools for mitigating it. And, while it is advisable not to accept things “on blind faith”, you should definitely be inclined to accept things “on blind science”.

References

[1] Inglis is the founder of the Energy and Enterprise Initiative, a policy think tank dedicated to finding market solutions to climate change: http://energyandenterprise.com. He has also given an address to the Duke School of Business.

[2] Kahan’s blog can be found here. His group at Yale can be found here. He has also given a nice talk on his area of focus.

[3] D. Kahan, E. Peters, E. C. Dawson, and P. Slovic, “Motivated Numeracy and Enlightened Self-Government”, preprint available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2319992

It Starts with Observation

[Image: Galileo’s telescope]

One of the key pillars of the Scientific Method is the ascendancy of observation and measurement in the pursuit of knowledge. Science is an empirical endeavor, and observation is the first and most important step in the scientific process. It is the emphasis on observation over “pure thought” that separates modern science from early and medieval science. Science’s grounding in evidence-based methods also explains its great success over the last few centuries.

Approaching “Pure Reason” With Caution

For a very large part of human history, people believed that it was possible to derive the fundamental truths of the Universe from abstract reason alone. Chief among these thinkers was Plato, but the thread was strong among the Greeks and carried through to a lot of medieval thinkers [1].

There is a certain attraction to the purest, most abstract forms of reason, like mathematical logic. However, one should be careful to distinguish between the inevitability of mathematical conclusions within mathematical systems and the ability of those mathematical systems to draw absolute conclusions about the “outside world”.

For one thing, there are many possible mathematical systems one can construct by choosing different sets of starting assumptions (axioms). Each system can lead to conclusions which are inevitable within that particular system. Yet, the inevitable conclusions drawn from one set of axioms can contradict the logical conclusions of a slightly different set of axioms! For example, in Euclidean geometry the angles of a triangle have to add up to 180 degrees. But, there are equally logical, non-Euclidean formulations of geometry where this does not have to be the case [2]. As it turns out, some physical phenomena are well described by Euclidean geometry. Others are best described by non-Euclidean models. The necessary/inevitable conclusions of each system do not apply equally well to all cases [3].
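To make the contrast concrete, here is the standard worked example (Girard’s theorem, a classical result of spherical geometry). In the Euclidean plane,

\alpha + \beta + \gamma = \pi

while on a sphere of radius R, a triangle of area A satisfies

\alpha + \beta + \gamma = \pi + \frac{A}{R^2}

Take the triangle bounded by the equator and two meridians separated by 90 degrees of longitude: all three of its angles are right angles, so the sum is 270 degrees, which is impossible in the plane. And the formula checks out: that triangle covers one eighth of the sphere, so A = \pi R^2/2 and \pi + A/R^2 = 3\pi/2, i.e. 270 degrees. Both geometries are internally consistent; they simply start from different axioms.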

To scientists, arguments made without grounding in externally observable phenomena are suspect. Observation “keeps it real”. Observation ties us to something external, outside of ourselves. As such, the combination of observation with sound research methodologies can place a check against our cognitive biases. I would argue that the scientific approach is one of realism compared with the idealism often characteristic of proponents of pure thought. To a scientist, “proofs” only exist in the very circumscribed world of math. Outside of math, proofs do not carry weight. We value ideas based on their ability to predict and explain things which can be observed, not on whether we find them to adhere to our limited sense of what is or is not logical.

Hiding Behind A Proof of Smoke

The above thoughts may seem self-evident, but there are many people for whom this isn’t so clear. I often encounter individuals who claim to have logical, purely deductive “proofs” that support their beliefs. I would like to highlight two ways in which these “proofs” fail to achieve the rigor or certitude that they are sold as having: (1) they shift the burden of the proof to axioms which are not sufficiently demonstrated, and (2) they artificially restrict the allowable outcomes of the system in a way that leads to the desired conclusion but does not reflect the full richness of reality. Let me elaborate:

Shifting the Burden of Proof to the Axioms:

You can prove anything if you start with the right axioms. But, the important question is: are your axioms right? One trick used in phony proofs is to shift some of the burden of proof into the axioms. The presenter hopes that his audience will view the axioms as untouchable, or at least he hopes they will be less skeptical of the axioms. But, it is a mistake to think that the axioms must be accepted without question.

Mathematics is the only field where you take an axiom to be true “just because…”. When it comes to statements about the real world, scientists insist that our “first principles” be rigorously tested. So, how do you validate a postulate?

1) Often, it is possible to test your assumptions directly. For example, the Theory of Relativity takes as a postulate that nothing can go faster than the speed of light. That’s just how the Universe seems to be. But, we don’t have to accept it blindly or even because it “seems to make sense”. We can test it. And, indeed, the speed of light has been tested and observed unfathomably many times, with no exceptions observed. It is a well-supported postulate of the theory.

2) Even if the postulate cannot be directly tested, the implications of the postulate can always be tested. If the consequences of an assumption cannot be tested, we say that the theory is not well defined. The point of a theory is to start with a minimal set of postulates and to work out the consequences of the postulates. If the consequences of those postulates agree with observation and if none of the consequences of those postulates contradict observation, then we feel that the postulates are good at explaining reality. Only then do we place trust and weight on new predictions made by that theory. The value of axioms in science lies only in how well they describe reality. But, as soon as the conclusions of an axiom contradict observations, that axiom is questioned or even thrown away. Unlike the “proofs” offered by opinionated and belief-driven people, the assumptions of science are not sacred or immutable.

A scientist will typically say something like:

Axioms A and B lead to hundreds of inevitable conclusions that are all confirmed by external observation. Therefore I consider proposition C (which hasn’t yet been observed) to be highly likely, since it follows from the same assumptions.

On the contrary, belief-driven people tend to say things like:

Let us start with assumptions A and B which make sense and therefore we accept as true. Conclusion C is inevitable, therefore C absolutely must be true!

Note that their “purely deductive” proof is no different from the scientific statement, except that the scientific statement is based on axioms which are rigorously tested and shown to agree with observation.

To summarize: don’t take the axioms of a so-called “proof” for granted. Apply scientific skepticism. Can they be rigorously tested, either directly or systematically through their theoretical consequences? If not, one should be very skeptical of the grounds upon which the proof is made. Finally, one should be aware that a proof based on “pure” deductive reason is not really so “pure” if the axioms are claims about the real world. If the axioms make claims which should be testable, then those claims need to be tested. And if the axioms are untestable, then the proof is incomplete.

Restricting the Freedom of the System:

So-called “logical proofs” are often constructed in a way that limits the number of possibilities available to the system. A skilled rhetorician will present a series of yes-no questions that narrow the possible conclusions until the audience is forced to his desired conclusion. But, nature often operates on a continuum, not necessarily a finite number of yes-no questions. Let us say that I have a glass marble. I present its color in a way that makes it seem as if the marble can be either red or blue. If I can demonstrate that the marble is not red, then my audience is led to the inevitable conclusion that the marble must be blue. In reality, the marble could well have been green or pink. My argument was logically correct. And, if the audience is focused on my logic, they will miss the bigger picture: that I have limited the framework in which that logic exists. My logic is correct and the conclusion follows from the premises, but the logical model I’ve constructed does not completely describe the full richness of reality. A complete system would need to include the possibility of green marbles.
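A toy sketch (hypothetical, purely to make the structure of the trick visible in code): the same valid elimination step forces a unique conclusion only because the universe of possibilities was artificially restricted up front.

# The rhetorician's declared universe of marble colors:
claimed_colors = {"red", "blue"}

# The actual universe of marble colors:
actual_colors = {"red", "blue", "green", "pink"}

def eliminate(possibilities, ruled_out):
    # Remove every possibility that has been demonstrated false.
    return possibilities - ruled_out

# "The marble is not red, therefore it must be blue" is forced only
# within the artificially restricted universe:
print(eliminate(claimed_colors, {"red"}))  # {'blue'}
print(eliminate(actual_colors, {"red"}))   # {'blue', 'green', 'pink'} (order may vary)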

Pure thought has value. Just don’t oversell it.

I am not saying that the inner dictates of our pure minds or intuitions are necessarily wrong. My issue is with calling them proofs. They are arguments that cannot be corroborated experimentally, but rather appeal to our inner senses. As a religious scientist, I believe that these inner voices do indeed have value.

But, it is important for us to realize that what may seem so compelling to our inner voices cannot be conveyed in the form of “proofs”. We must not oversell how “obvious” our intuited or derived views of the world are. And, we need to balance our world of pure thought with experience gained from real-life interaction. I do not buy into scientism: the view that the only meaningful things we can say about the world are those that can be determined through science. But, I urge my readers to challenge themselves to take empirical findings seriously. As a matter of realism and pragmatism, it is important to check our beliefs against what we can observe and see.

From my experiences in the religious world, I am extremely frustrated by people who try to sell their beliefs by arguments that I term “it’s obvious, stupid!”. I’ve been to religious talks where very skilled and educated rhetoricians resort to intellectually aggressive tactics that cross a serious line for me.

Science “starts” with observation…but it does not end there

Earlier in this post I suggested that observations can place a check on our cognitive biases. But, I must be careful. Observations do not (at all) guarantee objectivity or correctness, as I discussed in my post on “jumping to conclusions”. Even very dogmatic and opinionated people can identify factual observations that support their views. In fact, one of the most insidious forms of bias is confirmation bias, wherein people selectively identify only those facts which support the conclusions they already choose to believe.

Science starts with observation, but it does not end there. It is the first mile marker on a very long road. Science does not just ask for evidence; it demands that said evidence be placed in the context of a careful methodology. This road is fraught with peril. Even the best research can fall short of attaining this ideal. And there is plenty of shoddy work in science that fails on a more rudimentary level.

In my next post I will talk about some of the methodological steps that scientists take in order to avoid the effects of bias. I will also take the opportunity to indulge in describing some of my own research. Stay tuned!

Notes/Bibliography

[1] Greek and medieval background:

http://www.betsymccall.net/edu/CLAM/prerenaissance.pdf

Aristotelian “physics” is different from what we mean today by this word, not only to the extent that it belongs to antiquity whereas the modern physical sciences belong to modernity, rather above all it is different by virtue of the fact that Aristotle’s “physics” is philosophy, whereas modern physics is a positive science that presupposes a philosophy…. This book determines the warp and woof of the whole of Western thinking, even at that place where it, as modern thinking, appears to think at odds with ancient thinking. But opposition is invariably comprised of a decisive, and often even perilous, dependence. Without Aristotle’s Physics there would have been no Galileo.

Martin Heidegger, The Principle of Reason, trans. Reginald Lilly (Indiana University Press, 1991), 62-63, by way of Wikipedia:

http://en.wikipedia.org/wiki/History_of_scientific_method

http://aether.lbl.gov/www/classes/p10/aristotle-physics.html

http://galileoandeinstein.physics.virginia.edu/lectures/aristot2.html

[2] https://en.wikipedia.org/wiki/Non-Euclidean_geometry

[3] Applicability of non-Euclidean geometry: http://www.pbs.org/wgbh/nova/physics/blog/tag/non-euclidean/

The Danger of Catapulting To Conclusions and The Power of Alternative Hypotheses


“So what’s going to happen next?” The room was full of physics grad students and nobody wanted to answer. You could hear a pin drop….

Dr. Richard Berg ran the physics lecture demonstration group at the University of Maryland. He designed and provided demos that professors and TAs could use to make their classes more engaging. My personal favorite was the electromagnetic can smasher (for obvious reasons). Every year, first-year graduate students in physics were required to attend a seminar where the various research groups presented their work. It was an opportunity to learn about what was going on in the department, to help us shop for future advisors. But, traditionally, the first presentation was given by Dick Berg. He would present simple table-top experiments that inevitably defied intuition; so much so that a room full of kids with 4 years of physics study under their belts would squirm in their seats. He’d egg us on, “You tell me what’s supposed to happen next. We’ve got a room full of future physics professors here.”

One of the things you learn as a career scientist is that the world rarely conforms to common sense, and it certainly doesn’t care what we want it to be. Demonstrations like Dr. Berg’s are not exceptions. They are the rule. There is a huge chasm between a finite collection of observations and a systematic conclusion drawn from them. I often find myself floored by the way that many non-scientists (and sometimes even scientists) hurl themselves to conclusions that cannot be drawn from the very limited number of facts they have available. For many people, it is enough that a conclusion “just makes sense”, especially if that conclusion reaffirms what they want to believe. But, “making sense” can be an illusion and a dangerous one at that. In popular culture there is a concept of not “jumping to conclusions” prematurely. I think this is an understatement. When we talk about complex subject matters (whether they be the dynamics of supernovas or curing cancer or tax policy or solving urban poverty), opinionated people can often be seen, not merely “jumping” to premature conclusions, but catapulting to them.

Cognitive Illusions

The human brain is an excellent pattern-finding machine. In fact, it is too good at it! Many people are familiar with optical illusions. But, few people are aware of cognitive illusions: the brain’s ability to see patterns in things that aren’t really there. Given a few facts, our brains quickly connect the dots. I hope to discuss many of these in future posts. In the meantime, Wikipedia provides an excellent list of cognitive biases here. Take the time and look at some of these.

It’s More Complicated Than I Think

The most important lesson I wish more people would take away is that in complicated, multifactorial problems it is much harder to come to a solid conclusion or understanding than you may think it is. It’s a humbling and liberating realization.

Above all, I wish that people would realize that this principle applies to all aspects of our human experience: not just science. I would like to live in a world where even our political lives were informed with the awareness that problems are unlikely to fit into our tidy, ideological boxes.

I would suggest that the realization of the world’s complexity offers a “third way”, a middle path in approaching truth. Rather than accepting the false choice between the relativist approach to truth (There is no absolute truth; it’s all relative) and a sort of chauvinistic approach (Of course there is an absolute truth and it just so happens to agree with my beliefs. What a coincidence!) it is possible to take on the realist view that there is an objective truth to the matter, but it is difficult if not impossible to fully understand it. Just remind yourself, “It’s probably more complicated than I think“. One needs to be disciplined to resist falling for intellectual mirages.

The Power of “The Alternative Hypothesis”

In my own experience as a scientist, no skill is more exciting or more important, and yet more overwhelming, than the ability to generate alternative hypotheses.

People make educated guesses all the time, to fill in the gaps in our understanding. These guesses can make a lot of sense. They can be really clever. And, if our intuition is well-trained, they might just be right. But, a major trick that the brain can pull on us is to underestimate either the complexity or the range of freedom available to the system we’re observing. One way we can shake ourselves out of this complacency is to force ourselves to think of alternative explanations, beyond the first hypothesis to pop into our heads. In fact, it is best for us to come up with as many alternative hypotheses as possible. With practice and experience, one will get increasingly better at this exercise. Often, you learn it the hard way: testing your hypotheses and finding out they were wrong (again and again and again)… By seeing a fuller scope and range of possible explanations that all make reasonable sense but imply very different conclusions, you can better figure out how to design tests to narrow down the possibilities, and you can open your mind enough to accept facts and observations that run counter to what you actually expect.

Once you’ve gone through the exercise of listing as many possible hypotheses as you can think of, feel free to pick a pet hypothesis. The important thing is not letting yourself too hastily come to the belief that your hypothesis must be right or that it provides “the only possible explanation”.

Another important realization: even after ranking which guesses “make the most sense”, observation can surprise you. There is no rule that nature has to conform to what makes sense to us. And, the very idea of something “making sense” can easily become suspect. Hypotheses are starting points, not ends unto themselves.

Mixing Alternative Hypotheses and Politics: Be Careful

We scientists don’t just think up alternative hypotheses for ourselves. It is a part of our culture to offer alternative explanations to our colleagues. This is sort of like being a friendly devil’s advocate, and it is so important to the early process of developing a new theory or experiment. Because of this habit, we often tend to offer alternative hypotheses to non-scientists when we hear them making really strong first guesses about an observation. In political contexts, I often find myself pushing back really hard as a devil’s advocate. The result is that non-scientists interpret my questioning as disagreement. My liberal friends end up convincing themselves that I’m an arch conservative, and my conservative friends will think I’m a socialist. If offered patiently and politely, such thoughts can be very helpful. Just be careful; it can end poorly.

Modesty And Curiosity

Through the course of this post I presented a strong case for why we can’t easily draw conclusions from our observations of complicated systems. But, I do not mean for this to imply futility or nihilism. On the contrary, the beauty of the scientific enterprise is that it shows us the extent to which the world can be known. Appreciating how little we know opens us to new ideas, to seeing the world through a different lens. It opens us up to our most important intellectual driver: curiosity. You cannot be curious if you’ve fooled yourself into thinking that the matter is closed and you hold all the answers. But, if you allow yourself to embrace that curiosity, if you approach problems with intellectual modesty and a thirst for new knowledge, then you will be inspired to do the necessary work to systematically understand the given problem. Ultimately, the pursuit itself will be far more interesting and rewarding than whether or not you ever reach a fully complete or satisfying conclusion.