On the importance of being blind. On the importance of being a priori.

In the popular imagination, blindness is often used to denote a deficiency. In science, it is the greatest strength. My dissertation research was the product of six years of work involving around 15 people. The most thrilling aspect of it was that the measurement was “double blinded”: our calibrations were performed on a separate data set from the one used in our measurement, and the code we used to fit our data was programmed to add an unknown number that offset the answer. We could look at all of our cross-check plots to evaluate the quality of our result, but we were not allowed to peek and learn the answer. We could not know the final answer until we were able to convince ourselves and others that our methods were sound. This prevented us from tuning the analysis to give us what we “expected to see”. The final unblinding came after a month of grueling review by our peers. It was done publicly, and it was very scary. In the end, it was worth it. The process very much added to the quality of the work.
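To make the blinding idea concrete, here is a minimal sketch in Python of how a hidden offset can be folded into a fit result. This is not our actual analysis code; the function names, the seed handling, and the size of the offset are all assumptions chosen for illustration.

```python
import numpy as np

# The offset is drawn once from a seed the analysts never see; in practice the
# seed (or the offset itself) would be held by someone outside the analysis.
_rng = np.random.default_rng(seed=20131104)   # placeholder seed, illustration only
_HIDDEN_OFFSET = _rng.uniform(-1.0, 1.0)      # offset size chosen arbitrarily here

def blind(measured_value):
    """Shift the measurement by the hidden offset before anyone looks at it."""
    return measured_value + _HIDDEN_OFFSET

def unblind(blinded_value):
    """Remove the offset -- run only after the analysis is frozen and reviewed."""
    return blinded_value - _HIDDEN_OFFSET

# All cross-checks and quality plots are made with the blinded number.
fit_result = 0.231                            # stand-in for the output of the real fit
print("blinded result:", blind(fit_result))
```

The point is that the whole machinery of the analysis can be exercised end to end while the answer itself stays hidden until the moment of unblinding.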

It is too easy to cherry-pick, to ignore inconvenient data, and to “move the goalposts” when our observations of the world do not match what we want to believe. This type of behavior is called a posteriori (meaning after-the-fact) or post hoc reasoning. The best protection against this type of bias is to set the terms of the research before we know the outcome, and to be somewhat hard-nosed about sticking to them. Setting your standards “in the first place” is what I mean by being a priori.

Let me give a concrete example of how easily an “unblinded” mind can fool itself. I recently attended a seminar hosted by Bob Inglis, a former congressman from South Carolina and one of the most interesting thinkers on the subject of climate change and policy[1]. His guest speaker, Yale professor Dan Kahan[2], spoke about how party polarization affects people’s perceptions of scientific issues, and he presented the results of a really interesting study. In it, a group of people were tested for their math abilities and then given a subtle problem where they were asked to draw a conclusion from data on the “effectiveness of a drug trial”. The problem was designed to be counterintuitive, so most people got it wrong, but the top 10% in math ability got it right. Similar groups of people were given the test, except that the data were presented as “gun control” related. Unsurprisingly, liberals tended to favor the liberal answer, even when wrong. Conservatives tended to favor the conservative answer, even when wrong. But here’s the crazy part: when the correct answer cut against their politics, the top 10% in math ability were more likely to get it wrong than those with poor math skills[3]. This drives home how insidious confirmation bias is. The smarter a person is, the better they are at selectively filtering data to fit their prejudices!

[Figure: plots from the study in Ref [3]]

Upper plot: groups were given two scenarios, one where the drug is ineffective and one where it is effective. The high-numeracy group is equally likely to get it right, and the low-numeracy group is equally likely to get it wrong, regardless of political affiliation. Lower plot: when the same data are presented as related to concealed carry, the more numerate liberals and conservatives are, the more likely they are to get the answer wrong in a way that is consistent with their ideology. For more details and explanation, see Ref [3] below.

Blinding and a priori reasoning are the antidote. When the math wizards were given a dispassionate problem, they used dispassionate reasoning. When given a partisan problem, they used partisan reasoning. If we can perform the reasoning without knowing the outcome ahead of time, or if we can set our standards before we engage in research, then we are more likely to handle the data objectively.

Let’s now hash this out into a concrete prescription. Say you are investigating the state of the science on a very polarizing subject. Here’s an outline of how to proceed:

1) Choose your experts blindly! Most people gravitate to the one or two experts who tell them what they want to hear. In a large field with many tens of thousands of scientists worldwide, it is often possible to find fringe experts to corroborate almost any view, and doing so will fundamentally misrepresent the state of the science. You can counteract this tendency by choosing a random sample. Make a list of 20 or so of the top scientific institutions you can think of. Randomly select 5 from a hat (a simple way to do the draw is sketched after this list). Look up the experts: people who actively research and publish on the subject. Engage them and ask your questions.

2) Write your questions ahead of time, before you are in a position to ask them.

3) Hash your questions out, before asking them. Run them by somebody else, especially someone with a different outlook. Peer collaboration is an important way to check yourself against bias. Have your colleague look at your questions and challenge you on them. Why did you pick those questions? Are you asking question X as a leading question or out of genuine curiosity? Do these questions cover the relevant issues? Revise your questions ahead of time.

4) Be a priori in thinking about outcomes. What sorts of answers might you get to the questions? Are the questions well defined enough to elicit precise responses? How might you fall into the trap of hearing what you want to hear? Are there ways of framing the questions to prevent this?

5) Go out and talk to the experts.

6) Listen, really listen.

7) Repeat as needed.
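As promised in step 1, here is a tiny sketch of the random draw. The list of institutions is a hypothetical placeholder; the only point is that the selection is made by a random number generator rather than by your own preferences.

```python
import random

# Hypothetical placeholder list -- fill in ~20 institutions *before* looking at
# what anyone there has said on the subject.
institutions = [
    "Institution A", "Institution B", "Institution C", "Institution D",
    "Institution E", "Institution F", "Institution G", "Institution H",
    "Institution I", "Institution J",
]

# The electronic version of drawing five names from a hat.
sample = random.sample(institutions, k=5)
print(sample)
```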

By pre-defining and constraining your research method before you engage in the research, you keep yourself from steering it to suit your bias. Instead, you steer it on the basis of what you see as good research methods.

This may seem like common sense, but I am always amazed at how few people even make an effort to design their research process to guard against bias. What I love about science is the constant push to implement protections against these effects.

One takeaway of the Kahan study: being biased is not restricted to the ignorant. It is equally (if not more so) a challenge for those with literacy and sophistication. Working against our confirmation bias is a universal struggle, and it requires constant diligence. Fortunately, there are tools for mitigating it. And, while it is advisable not to accept things “on blind faith”, you should definitely be inclined to accept things “on blind science”.

References

[1] Inglis is the founder of the Energy and Enterprise Initiative, a policy think tank dedicated to finding market solutions to climate change: http://energyandenterprise.com.

[2] Kahan’s blog can be found here; his group at Yale can be found here.

[3] D. Kahan, E. Peters, E. C. Dawson, and P. Slovic, “Motivated Numeracy and Enlightened Self-Government”, preprint available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2319992
