3 of My Favorite Talks on Climate Science

There has been a lot of noise about the Paris Climate Agreement and what it means for the future. I’m not going to comment on the policy, but I would like to focus on the science. Folks on all sides of the political spectrum feel compelled to opine on what the science says, but many cannot back their assertions up with actual scientific literacy. Some people are genuinely curious about the subject, and a lot of folks just don’t know where to look for good scientific sources.

Unfortunately, good science sources are not always easy to come by. Most mainstream media outlets have pretty bad records on science reporting – not just climate, but any topic. Political media and tabloids are obviously worse.

I wanted to share a few videos that:

1) Are delivered by people with genuine subject expertise.
2) Represent the mainstream of the science.
3) Don’t shy away from the uncertainties and complexity of the subject matter.
4) Are really frigging interesting.

It’s a lot of material, but well worth your time.

1. Kerry Emanuel (atmospheric physicist at MIT): “What We Know About Climate Change”

If you watch no other video on the subject, this is the place to start. I think Kerry Emanuel’s talk is the best bird’s-eye view of the subject.

 

2. Richard Alley (Paleoclimatologist, Penn): 4.6 Billion Years of Earth’s Climate History

One common trope about the earth’s climate is that “the climate has always been changing”.  True though it is, this is not a scientific sentiment. Science is more than just the observation of things that happen. Science seeks to understand those things. A big part of the scientific understanding of man-made climate change is informed by a growing body of knowledge explaining what caused major climatic shifts in times past. This is the best overview on the subject.

3. Stephen Schneider (Climate Physicist, Stanford)
Global Warming: Is the Science Settled Enough for Policy

Stephen Schneider was one of climatology’s great philosopher-statesmen. Much of his work focused on the intersection of science and risk management. This talk is the best on the subject of uncertainty and risk.


Parenting Advice From Richard Feynman


This is going to be the first of two blog posts on a subject near and dear to my heart: early science education.

A popular trend in parenting these days is educational videos and toys aimed at unlocking your baby’s/toddler’s inner genius. They’re prolific. We certainly have a few “Baby Einstein” toys that we got as gifts – and they’re good toys.

The existence of this market points to a demand: parents want to raise smart, inquisitive children. But, the task is an overwhelming one. I would argue that, as a country, we’re having a hard time succeeding at it.

I wasn’t good at math

The number one reaction I get from people (by far) when they discover I’m a physicist is, “I always thought that was so interesting, but I was never good at the math”.

This makes me tremendously sad, because it speaks to a deep-rooted self-doubt and defeatism that gets foisted on people at a very young age. And, it’s fundamentally wrong! I will discuss this more in my next post, but it’s important to say now, because we need to know what we have to work against when we’re raising the next generation of great scientists.

Thoughts From Richard Feynman

Richard Feynman was one of the great geniuses of 20th century physics. What makes Feynman unique is that he was also arguably among the best physics communicators of the 20th century. The Feynman lectures are now available online, here. If you are really ambitious and want to conquer the basics of physics, I suggest starting from the beginning and working through these. They’re unparalleled.

In this article, I’ve posted some segments from the documentary “Richard Feynman: The Pleasure of Finding Things Out”, available in full, here. In these segments, Feynman reflects on his father’s parenting methods that led Feynman on the road to becoming a great physicist. I’ll come back to this, but Feynman’s dad was not a physicist or scientist of any sort – he was a military uniform salesman.

“Translating”

Naming things is not the same as understanding them



Math and science cannot be done by rote

I remember in my own elementary school education being told that there is a “right way” to solve certain problems, and when I found a better or different method, being told I couldn’t do it that way. Math is hard enough without manufactured constraints. Adding an unnecessary distinction of “right” and “wrong” methods is just wrong. Here’s Feynman reflecting on the silliness of math being treated as formulaic:

You don’t need to be skilled in math or science to raise a child who is…

…But you need to be genuine

While I think the Baby Einstein/Mozart products are perfectly respectable children’s toys, I do take issue with one thing about them:

I’m worried at the thought that many parents buy Baby Mozart and play it in their kids’ rooms as a substitute for listening to Mozart themselves. It sort of goes without saying, but good parenting is about living the values you want your children to have. Intellectual exploration with your kids needs to be a joint effort.

Feynman’s dad was not necessarily scientifically or mathematically gifted. And yet, he was profound and sincere in how he raised young Richard. That’s what should shine through from these video clips. Being scientific means being inquisitive and being willing to struggle with really understanding things. One of the fun things about young children is that they get this naturally. In attempting to raise a “Baby Einstein” or “Baby Feynman” you only have to encourage what is already there. And, by being genuinely interested yourself, raising a child is a way to relearn things that you may have missed the first time around, and have fun in the process. I’ve certainly found this to be the case.

In reflecting on my own upbringing, my parents used to joke that, given the trouble they had with high school math, they didn’t know where I came from. But, as far back as I can remember, they always supported and prioritized my scientific curiosity. My scientific explorations were never alone. I would not be where I am in my career without them. In addition, while mathematics was not their cup of tea, they nonetheless had a wide range of intellectual interests that they passionately pursued. In so doing, they gave me a model for how to pursue my own passions (h/t Mom and Dad).

Anomalies and Falsification in Science

[Image: Bluestone the Great, unmasked]

In science, an anomaly refers to a new observation or experimental result that seems to fly in the face of conventional, well-established scientific wisdom.

Anomalies are an important part of the scientific discovery process. Sometimes a particular anomaly or a collection of anomalies can precipitate a revolution in scientific thinking – dethroning an old way of thinking and bringing in a new one. A canonical example of this is the Michelson–Morley experiment, which started the ball rolling towards the Theory of Relativity.

Anomalies are important, but they are also frequently misunderstood and frequently misused.

Peddlers of pseudo-science love anomalies. Anomalies are the raison d’être of crank scientists the world over. They give them a justification for their “new paradigm”. When asked what the scientific basis for their claims is, they answer “because anomalies”. And there is an easy bait and switch going on:

“These anomalies falsify the scientific orthodoxy! So you might as well buy my theory instead, <wink> <wink>.”

The whole thing boils down to a confusion about falsifiability in science. Falsifiability is one of the key hallmarks of science. Philosopher Karl Popper defined science by the very idea that it makes claims that are testable and can therefore be proven wrong (falsified) by contradicting observations. Science, in the Popperian sense, is always tentative. But, Richard Feynman says it better than I can:

(His whole segment on the scientific method can be found here. It’s great.)

Scientists take falsifiability very seriously. We take anomalies very seriously. But, the relationship between the two is where some people get confused. A good number of people are under the impression that theories are falsified with a dramatic moment of unmasking, as in “Scooby Doo, et al” (pictured above). But, science rarely happens this way. It is not so simple.

What we often refer to as a theory in science is not a singular thing. A theory is a collection of many ideas and explanations that systematically explain observations. Perhaps any one observation may be falsified, but it’s hard to falsify an entire theory in one fell swoop. In fact, if we were to throw away conventional scientific wisdom at the first sight of any anomalies, science would fundamentally lack stability.

A good scientist has the discipline to abandon an idea when it’s wrong. But, equally important is the discipline not to immediately give up on an otherwise good theory. A well-established or mature scientific framework is built on decades of observations, empirically established first principles, and countless successful predictions. To dismiss mature science on the basis of a few anomalies would be premature, and in many cases wrongheaded.

Scientists are constantly looking for new ways to push the envelope. We like anomalies because they’re way more interesting than confirming what we already know. We seek them out. We develop new instrumentation to give ourselves new sensitivity. We design new experiments to look in places we never looked before. And, the thing about venturing to new places is: you end up with a lot of false starts – mistakes, misunderstandings, experimental artifacts.

Anomalies are thus inevitable. So the right attitude is “trust but verify”, bearing in mind that verification takes time and work.

Sometimes anomalies completely dethrone the current paradigm. Far more often….they. just. don’t.

Here are some of the common fates of anomalies, from most likely to least:

1) It turns out to be an experimental artifact, a mistake. In my own field, the observation of neutrinos traveling faster than light turned out to be a consequence of a loose cable.

2) It is a real effect, but incomplete. A missing piece of the observation, once found, restores consistency with the theory. Feynman gives a good example of this – superconductivity. At first, the discovery of superconductivity seemed to completely contradict the known understanding of atomic physics. Eventually it was realized that a very subtle quantum mechanical phenomenon explained the effect. Once this is taken into account, atomic theory is again fully consistent.

3) It represents a real problem with the theory, but the essence of the theory survives with some modification (ranging anywhere from minor to major).

4) By itself or in combination with other anomalies, it dethrones the prevailing wisdom.

If you’re a person who rejects broadly accepted science, you’re inclined to see any anomaly as your own Scooby Doo ending! It’s the moment you’ve been waiting for, when orthodoxy collapses and you are vindicated! It is also too easy to use these anomalies to draw inexperienced skeptics away from established science and into the rabbit hole of quack science. Which brings me to one last essential point:

Old theories are almost never dethroned by anomalies, UNTIL there is a superior alternative that explains both the old observations and the new anomalies. The really frustrating part of the pseudo-scientific bait-and-switch is the innuendo:

“Well if the mainstream science is wrong, then anyone’s guess is as good as anyone else’s. So, you might as well choose mine.”

This is simply not true. In the face of deep-rooted anomalies, one would be naive to blindly hold on to the established model. On the other hand, it is naive and unskeptical to settle for the first new theory to pass your way. The old paradigm survived a serious shaking and any new theory should do the same…and then some. Only hindsight is 20/20, and few people in history can claim to have successfully “picked winners” early in a scientific crisis.

So when people without any scientific credentials drive up in tinted vans, offering you the sweet indulgence of “anomalies”, just say NO and run to someone you trust :).

But seriously: I may sound like a defender of the orthodoxy. My colleagues and I take anomalies very seriously. They are the constant talk of the lunch table. The faster-than-light neutrino measurement, ridiculous as it seemed, prompted a serious and genuine discussion of theories that could violate relativity. A lot of work went into understanding the measurement. The measurement was redone by two other experiments and neutrinos went back to being slower than light. And, in its own Scooby Doo-style ending, the culprit turned out to be a loose cable.

An important, if not THE important, value of science is the willingness to revise or replace wrong ideas. But, as with anything, this is not a carte blanche principle. When anomalies appear, we should welcome them, but we must also exercise care and patience.

Four of my favorite contemporary science communicators


After a long hiatus, I’m attempting a return to my blog. One thing I would like to do in this and future writings is to share some favorite science communicators. Here are four of them: Ken Miller, Eugenie Scott, Stephen Schneider, and Kerry Emanuel.

All four of these speakers address topics in science that carry political baggage (not so much among scientists, but in the public): evolution and climate change. These subjects are interesting because they present some of the greatest challenges to science communication. These speakers are excellent for the very reason that they are so effective at meeting those challenges: rising above ideological murk, and getting at the heart of the science. I hope you enjoy.

Ken Miller:

Biologists – especially those who defend evolution in the classroom – are often the subject of straw man attacks, accused of everything from communism to fanatical atheism. Ken Miller is a great counterexample. Miller has been an outspoken advocate for the teaching of evolutionary biology, and thus a formidable opponent of Intelligent Design/Creationism in school curricula. But, contrary to the stereotype of the “atheist scientist”, Miller is a devout and outspoken Catholic who talks openly and frankly about issues of science and religion. His Templeton essay on science and religion is among the most eloquent pieces I’ve read on the subject.

A key theme of my blog is the importance of scientific thinking and the values of science. It is not literacy in any one particular subject of science, but literacy in scientific thinking itself that is most missing from public discourse. And, the values of science are frequently under assault. At the heart of the assaults on science is a fundamental confusion about what science is and how it works. As a religious scientist myself, I take common cause with Miller in saying that “Intelligent Design” is fundamentally unscientific and in fact anti-scientific, because it blurs and confuses distinctions between science and belief. Miller is one of the most articulate and credible speakers on the subject. I highly encourage checking out his webpage. Here is one of my favorite excerpts from a talk he gave about the Dover, PA textbook trial. I’ve edited the segment for brevity, but the whole talk can be found here. Here’s the teaser:

Eugenie Scott:

In understanding the distinction between science and religion, another excellent speaker is Eugenie Scott, the recently retired director of the National Center for Science Education (NCSE). Unlike Miller, Scott considers herself to be a strong agnostic, if not an atheist. Nonetheless, she draws a sharp distinction between the value-neutrality of science and the incorrect notion that science is anti-religious. A big pet peeve of mine is when people accuse science of being dogmatically “materialist” – subscribing to a philosophy that the material world is “all there is”. In this clip, Scott hits it out of the ballpark. Science does not assume a purely materialistic world; it simply restricts itself to answering materialistic questions – this is a very important distinction.

Stephen Schneider:

As this century continues to unfold, our success and survival will increasingly depend on our ability to discuss complex and sometimes yet-to-be-fully-resolved science topics. To me nothing underscores this challenge like the subject of climate change, which requires us to balance abstract long-term thinking against tangible short-term problems; which still has many open questions; and which has become politicized to the point of being toxic. The issue requires a level of scientific savvy that I fear is missing from our political leadership and much of the general public at large, and that needs to change, fast. I worry very deeply about the implications of a warming planet, but I worry more about what our failure to reasonably discuss this problem says about our ability to deal with future problems of the same magnitude – and there will be more problems of this magnitude in the near future.

Unlike my own field of pure physics, the study of climate falls under the umbrella of “complex systems science” – the study of systems that are not amenable to reductionism, where many moving parts interact on different scales and across different fields of science.

Among the figures to speak on the topic of climate science, one of my favorites is the late Stephen Schneider. Schneider was as much a philosopher of science as he was a key figure in the development of modern climatology. He spoke frequently about our “ability to survive complexity”, and cut through the difficulty of the scientific and political problems of climate change with razor precision. He gave a great series of lectures on “Climate Change: Is the Science Settled Enough For Policy?“. The whole lecture is very much worth watching. Here is a clip where he reminds us that any subject in science is not an all-or-nothing proposition. Scientific theories have many different components, with different levels of certitude and knowledge. A critical first step to understanding science is being able to sort it into its various components and reason out each topic on its own merits:

Another key skill in managing the intersection of science and politics is the ability to separate the two. Especially when the science is inherently uncertain, policy boils down to risk evaluation, which is in the realm of value judgments. Science gives us probabilities and confidences, but it’s up to us to determine what to do about them. And, we have to be extremely careful not to confuse value judgments with scientific judgments. Here, Schneider lays this out:

Kerry Emanuel:

This last speaker is another climatologist, and he covers many of the same themes as Schneider. I highly recommend Emanuel’s rebuttal to a controversial article by political scientist Roger Pielke Jr on natural disasters and climate change. His article is both terse and profound, and can be found here.

One of the key problems of climate change is that politicians and the public see it as a debate between two opposite scenarios: doomsday and nothing-to-worry-about. These are what Schneider calls “the two least likely outcomes”, and they present a false choice. The best available science lays out a continuum, a spectrum of possibilities, ranging from mostly benign to really bad. We don’t need to “choose which outcome is right”. We need to look at the spectrum and make policy decisions according to our best estimates of their likelihoods. The following is a clip from Emanuel’s excellent summary talk, “What We Know About Climate Change”. The whole lecture provides one of the best overviews on the topic. Here is Emanuel laying out the basic idea and talking about the concept of tail risks:

Conclusion

A common tactic of dogmatic people who attack scientific theories is to accuse scientists themselves of dogma. No doubt, all scientists have their biases, and some can be quite dogmatic. But, the scientific community is largely populated by people who are turned off by ideology and prefer the complexity and nuances of the world as it is to the naive simplicity of the world as we want it to be. At our best, we leverage our skeptical community and our methodology to place a strong check on bias and dogma. I picked these examples because they run counter to the accusations of “atheism”, “alarmism”, or “socialism” that get so cheaply tossed around by partisan hacks. These are people who embody the voice of reason, who value being reasonable and being accurate above all else, and I hope that shines through. How we think about the world has very real implications for how we ultimately act in it. It is here where science has many important things to say about the future of our society.

“The 97 Percent”

I am not a fan of the term “denier” or “denial” when tossed around on the subject of climate change. In most cases it does not serve a constructive purpose. However, in describing a recent op-ed in the WSJ by Joe Bast and Prof Roy Spencer, it is difficult for me not to use the word.

Anyone who has first-hand experience with the field knows that the number of published, active researchers in the field who challenge the main findings on climate change[1] is minuscule. It simply can’t be more than a few percent. And one is hard pressed to find a single professional scientific organization that doesn’t make a clear statement on the subject.

Even guys like Roy Spencer, when pushed hard enough, will admit that they are a minority voice. This is why I find articles like the WSJ piece completely baffling.  In the op-ed, Spencer and Bast challenge the claim that 97% of the climate scientists agree on the main points of global warming. But here’s their bait-and-switch: they don’t really offer an alternative number and they avoid saying exactly how wrong they think that figure is, leaving much to innuendo.

First a little history

The claim that “97% of all climate scientists agree…” comes from a series of studies. Naomi Oreskes performed a literature search, looking at abstracts from 928 papers matching the keywords “climate change”. Oreskes found no papers contradicting the consensus findings, as described by the Intergovernmental Panel on Climate Change (IPCC). Doran & Zimmerman (2009) polled 10,000 earth scientists and found a broad consensus among all scientists, with 97% agreement among those actively publishing climate research. Anderegg et al. (2010) reviewed publicly signed declarations supporting or rejecting human-caused global warming, and again found high consensus among climate experts. Cook et al. (2013) performed a literature search, in a similar vein to Oreskes, and found that among papers addressing the question of anthropogenic climate change, 97% affirmed the consensus position. Two other surveys of note find high levels of agreement (>85%), especially among experts, though not the 97% number: a survey by the American Meteorological Society (AMS) and one by Bray and von Storch. We will discuss these shortly.

None of these surveys is perfect. All of them have certain strengths and weaknesses, but they are useful for illustrating to the public what those in the scientific community know: that people who outright deny a warming trend, who challenge the notion of an anthropogenic cause, or who consider the effects of the warming to be harmless represent a marginal view among the broad community of experts.

Picking Nits

Surveys and literature searches are inherently imperfect, so it’s easy to raise methodological objections. This is fine. Climate change is a complex subject and not easily reduced to a yes or no question. On this basis, I am inclined to agree that one should take the exact figure 97% with a grain of salt. But, questioning whether the number is 90% or 99% is not the same as doubting that there ultimately is broad agreement. And, this is where I find the op-ed to be deceptive and obfuscating. Not only do Spencer and Bast fail to demonstrate a lack of consensus on harmful, man-made global warming, but most of their own sources contradict them. So what have they got?

Seriously, “the Oregon Petition”?

The thing that I find most shocking about the WSJ op-ed is the retreading of the infamous “Global Warming Petition Project” (aka the Oregon Petition). It’s bizarre that Spencer and Bast spend half of their article nitpicking the methodologies of surveys based on standard practices, and then turn around to hang their hat on a petition that doesn’t follow any practices. If you are unfamiliar, here are a few key problems with the petition:

(1) It’s a petition, not a survey. No attempt is made at selecting a representative sample and no effort is made to determine the ratio of scientists who challenge global warming to those who don’t.

(2) Their only standard for defining a “scientist” is the dubious requirement that one simply have a bachelor’s in science or engineering. Would you accept legal advice from a pre-law student or have surgery performed by a guy with only an undergraduate degree? Then you shouldn’t take an undergrad in physics to be a serious authority on atmospheric physics. To wit, 31,000 people with a bachelor’s or more in science is less than 0.3% of the ~10 million (Americans alone) who have a bachelor’s in science. Even the 9,000 PhDs the petition boasts is small when you consider that ~30,000 new PhDs in science are awarded every year.

(3) Most of the PhDs who signed the petition are in fields that have nothing to do with climate science. The number of self-identified climatologists who signed the petition is 39, with maybe one or two thousand in related fields (if you want to be generous). I don’t know how anyone can say with a straight face, “31,000 scientists support our petition. Of those, 39 actually study the subject matter relevant to the petition.” Peter Hadfield has a great video on YouTube explaining what it means to be “an expert”.

We won’t even get into issues with the tactics of the petition or the fact that it was at one point signed by Perry Mason, the doctors from MASH, and even the Spice Girls.

The AMS Survey

The op-ed goes on to quote an American Meteorological Society survey (the paper can be found here). Say Bast and Spencer, “only 39.8%…said man-made global warming is dangerous.” They must be hoping that their readers don’t actually bother to read the survey. In response to how harmful or beneficial global warming would be over the next 150 years, 38% of respondents answered “very harmful”, but they neglect to mention that an additional 38% answered “somewhat harmful”. That’s 76% who believe the consequences of warming will range from “somewhat harmful” to “very harmful”. Even this understates the level of agreement, because the 24% who believe that the consequences will be benign include many AMS members who do not qualify as scientists or climate experts. Of the respondents, 38% do not have a PhD. Only around 23% of the respondents claim to publish primarily on the topic of climate science. In fact, the main finding of the survey is that climate consensus is much higher among those who actually have the relevant expertise.

Bray and von Storch

Bast and Spencer go on to cite a survey by Bray and von Storch. This is one of my favorite surveys on the topic. I happen to think that it paints a pretty accurate picture of the state of the field. It covers a wide range of topics. Rather than a simple “yes” or “no” choice, the questions are answered on a scale from 1 to 7. And, they have a good sample size and composition.

Ironically for Bast and Spencer, the level of consensus expressed in the Bray and von Storch survey happens to be pretty strong, and it agrees well with the levels of confidence expressed by the IPCC. Point by point, the topics where there is high agreement in the survey correspond to findings that the IPCC attributes with “high confidence”. The points where there is little agreement in the survey correspond to findings that the IPCC cites as having “low confidence”.

Throughout their editorial, Bast and Spencer have been focusing on the question of how many climate experts believe in “harmful, man-made global warming”. For some reason they neglect to report the results of this survey on those very questions (gee, I wonder why?). Let’s look at the results:

[Survey result charts: whether climate change is happening, whether it is man-made, and how serious a threat it poses]

Why I Don’t Like the “97% number”
Polls inevitably oversimplify and understate the complexity of the issues. The 97% number gives the impression of a monolithic “climate orthodoxy” that isn’t there. In reality, climate scientists hold a spectrum of views, and many subjects are still hotly debated. It is difficult to reduce everything to simple yes-or-no questions, and I am skeptical of the precision implied by quoting the number to two significant digits. That said, there are certain key findings of climate science supported by so much data and so many lines of evidence that everyone in the community has moved on. Everyone, that is, but guys like Prof. Spencer (although even Spencer concedes some amount of greenhouse warming).

But you don’t have to take my word for it

To any nonscientists who doubt the level of agreement among climate scientists: I challenge you to make a list of scientific institutions, pick a random sample, and see for yourself how hard it is to find an active climatologist who does not believe the earth is warming (a few percent), who doesn’t think humans are responsible for a good chunk of the last half-century’s warming (<10%), or who thinks that the consequences will be benign (probably <10%). Pick a few journals and regularly read the articles (if you need help, recruit a scientist friend). You will quickly see just how marginal that outlook is.

Can We Move On Now?

Policy makers and the public need to know that essentially no one in the climate science community questions the premise that the world has warmed over the last century. Policy makers and the public need to know that the vast majority of the climate science community are convinced that more than half of the warming since the 1950s is driven by man-made causes. Policy makers and the public need to know that the vast majority of climate scientists feel that the implications of this man-made warming are likely to range from somewhat harmful to very harmful. Instead of peddling doubt and innuendo, Spencer and Bast should actually help to make constructive improvements to the process of polling the community. Ultimately, if they want to nitpick over the exact percentage that constitutes a “vast majority” of climate scientists, more power to them. But, if they’re trying to suggest that there isn’t any majority among climate scientists on these three key points, then they’re just being counterfactual.

Footnotes:

1. For the entirety of this article, I will define “the main findings” of science on climate change as being: (1) the earth is warming; (2) most of the warming since the mid-20th century is and will continue to be driven by man-made causes (primarily greenhouse gases); and (3) the consequences of continued warming will incur economic and human costs.

On the importance of being blind. On the importance of being a priori.

In the popular imagination, blindness is often used to denote a deficiency. In science, it is the greatest strength. My dissertation research was the product of six years of work involving around 15 people. The most thrilling aspect of it was that the measurement was “double blinded”. Our calibrations were performed on a separate data set from the one used in our measurement. And, the code we used to fit our data was programmed to add an unknown number to offset the answer. We could look at all of our cross-check plots to evaluate the quality of our result, but we were not allowed to peek at the answer. We could not know the final answer until we were able to convince ourselves and others that our methods were sound. This prevented us from tuning the analysis to give us what we “expected to see”. The final unblinding came after a month of grueling peer analysis. It was done publicly and it was very scary. In the end, it was worth it. The process very much added to the quality of the work.
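To make the technique concrete, here is a minimal sketch of blinding by a hidden offset. It is purely illustrative (Python, with a made-up data set and a stand-in “fit”), not our actual analysis code; the point is only that the result shown to the analyzers carries a secret shift that is removed only at unblinding.

import numpy as np

# Hypothetical sketch: blind a measurement by adding a secret random offset.
rng = np.random.default_rng(seed=20090417)   # seed held by a designated "blinder"
BLIND_OFFSET = rng.uniform(-1.0, 1.0)        # kept hidden from the analyzers

def fit_mean(data):
    # Stand-in for the real fit: the sample mean and its statistical error.
    return data.mean(), data.std(ddof=1) / np.sqrt(len(data))

def blinded_fit(data):
    # The value reported during the analysis is shifted by the secret offset,
    # so cross-checks and error estimates can be studied without knowing the answer.
    value, error = fit_mean(data)
    return value + BLIND_OFFSET, error

data = np.random.default_rng(1).normal(loc=0.3, scale=1.0, size=10_000)
value, error = blinded_fit(data)
print(f"blinded result: {value:.4f} +/- {error:.4f}")
# Only after the analysis is frozen and reviewed is BLIND_OFFSET subtracted ("unblinding").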

It is too easy to cherry-pick, to ignore inconvenient data, and to “move the goal post” when our observations of the world do not match what we want to believe. This type of behavior is called a posteriori (meaning after-the-fact) or post hoc reasoning. The best protection against this type of bias is to set the terms of the research before we know the outcome, and to be somewhat hard-nosed about sticking to those terms. Setting your standards “in the first place” is what I mean by being a priori.

Let me give a concrete example of how easily an “unblinded” mind can fool itself. I recently attended a seminar hosted by Bob Inglis, a former congressman from South Carolina and one of the most interesting thinkers on the subject of climate change and policy[1]. His guest speaker, Yale Professor Dan Kahan[2], spoke about how party polarization affects people’s perceptions of scientific issues, and he presented the results of a really interesting study. In it, a group of people were tested for their math abilities and then given a subtle problem where they were asked to draw a conclusion from data on the “effectiveness of a drug trial”. The problem was designed to be counterintuitive, so most people got it wrong, but the top 10% in math ability got it right. Similar groups of people were given the test, but with the data presented as “gun control” related. Unsurprisingly, liberals tended to favor the liberal answer, even when wrong, and conservatives tended to favor the conservative answer, even when wrong. But here’s the crazy part: when the correct answer cut against their politics, the top 10% in math ability were more likely to get it wrong than those with poor math skills[3]. This drives home how insidious confirmation bias is. The smarter a person is, the better they are at selectively filtering data to fit their prejudices!

[Figure: results from the Kahan et al. study]

Upper plot: groups were given two scenarios, one where the drug is ineffective and one where it is effective. The high numeracy group is equally likely to get it right. And the low numeracy group is equally likely to get it wrong, regardless of political affiliation. Lower plot: When the same data are presented as related to concealed carry, the more numerate liberals and conservatives are, the more likely they are to get the answer wrong in a way that is consistent with their ideology. For more details and explanation,  see Ref [3] below.

Blinding and a priori reasoning are the antidote. When math wizards were given a dispassionate problem, they used dispassionate reasoning. When given a partisan problem they used partisan reasoning. If we can perform the reasoning without knowing the outcome ahead of time or if we can set our standards before we engage in research, then we are more likely to handle the data objectively.

Let’s now hash this out into a concrete prescription. Let’s say you are investigating the state of science on a very polarizing subject. Here’s an outline of how to proceed:

1) Choose your experts blindly! Most people gravitate to the one or two experts who tell them what they want to hear. In a large field with many tens of thousands of scientists worldwide, it is often possible to find fringe experts to corroborate any view. This will fundamentally misrepresent the state of the science. You can counteract this tendency by choosing a random sample. Make a list of 20 or so of the top scientific institutions you can think of. Randomly select 5 from a hat (a small sketch of this random draw follows this list). Look up the experts: people who actively research and publish on the subject. Engage them and ask your questions.

2) Write your questions ahead of time, before you are in a position to ask them.

3) Hash your questions out, before asking them. Run them by somebody else, especially someone with a different outlook. Peer collaboration is an important way to check yourself against bias. Have your colleague look at your questions and challenge you on them. Why did you pick those questions? Are you asking question X as a leading question or out of genuine curiosity? Do these questions cover the relevant issues? Revise your questions ahead of time.

4) Be a priori in thinking about outcomes. What sorts of answers might you get to the questions? Are the questions well defined enough to elicit precise responses? How might you fall into the trap of hearing what you want to hear? Are there ways of framing the questions to prevent this?

5) Go out and talk to the experts.

6) Listen, really listen.

7) Repeat as needed.
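As a small, hypothetical illustration of the “names in a hat” draw in step 1 above (the institution names below are placeholders, not recommendations):

import random

# Toy sketch: pick institutions at random so the choice of experts isn't
# steered toward people you expect to agree with you.
institutions = [
    "University A", "University B", "University C", "University D",
    "University E", "National Lab F", "Institute G", "University H",
    "University I", "University J",
    # ...extend to ~20 institutions, listed before you know who works where
]

sample = random.sample(institutions, k=min(5, len(institutions)))
print("Look up active, publishing researchers at:", sample)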

By pre-defining and constraining your research method before engaging in the research, you separate yourself from steering the research to suit your bias. Instead, you steer the research on the basis of what you see as good research methods.

This may seem like common sense, but I am always amazed at how few people even make an effort to design their research process to guard against bias. What I love about science is the constant push to implement protections against these effects.

One takeaway of the Kahan study: being biased is not restricted to the ignorant. It is equally (if not more so) a challenge to those with literacy and sophistication. Working against our confirmation bias is a universal struggle and it requires constant diligence. Fortunately, there are tools for mitigating it. And, while it is advisable not to accept things “on blind faith”, you should definitely be inclined to accept things “on blind science”.

References

[1] Inglis is the founder of the Energy and Enterprise Initiative, a policy think tank dedicated to finding market solutions to climate change. http://energyandenterprise.com. Here’s his address to the Duke School of Business:

[2] Kahan’s blog can be found here. His group at Yale can be found here.  Here is a nice talk on his area of focus:

[3] D. Kahan, E. Peters, E. C. Dawson, P. Slovic, “Motivated Numeracy and Enlightened Self-Government”, preprint available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2319992

How to research research [pt1 – intro]

Scientific issues are playing an ever greater role in our society: from the innovations that generate so much of our wealth to the very preservation of human life. Technological innovations produced by science have a huge impact on our lives: sometimes for better and sometimes for worse.

People are right to be skeptical of new scientific findings. People are right not to take scientific posturing on authority alone. People are especially right to ask questions. I think that most scientists love nothing more than tough questions asked out of genuine curiosity and concern.

Unfortunately, true skepticism is very difficult to separate from its lesser (and more irrational) cousin: doubt. When I encounter websites or media sources that present phony skepticism, or when I see misguided skepticism from opinionated non-scientists, it makes me sad: both because it muddies the topic in question and because it confuses people about how the scientific process works. I find it disheartening to see how much bad information and – in some cases – disinformation is out there. Scientific issues are complex. Scientific language is couched in an open and careful admission of uncertainties. Both of these things make it incredibly easy for partisans to spin and distort our work. An informed scientific reader should always be aware of this.

Skepticism is an essential part of the scientific enterprise. But, many people seem to think that skepticism means being skeptical of others. One of the most important points in the scientific method is self-skepticism. Being hyper-aware of our own biases and predispositions is the first step in overcoming them. The purpose of all of the controls in the scientific method isn’t to guard against the biases of others; we researchers put these controls in place to protect ourselves against our own biases. I truly believe that if more people operated with self-skepticism, we as a society would be better able to handle the complex problems that increasingly loom over us.

The skeptical approaches of the scientific method do not have to apply only to scientists doing original research. A non-scientist trying to understand a scientific issue can use these methods to keep their inquiry as objective and open-minded as possible. In a past article, I gave an outline for how to fact-check web rumors. In the next series of posts, I would like to provide some thoughts on how to dig for the science buried in the clutter of these rumors. My goal is to provide instruction on how to research research.

At the end of the day, the important question is not about how much research you did, but how you did your research. What is your research process? Do you have one? Ideologues typically research like lawyers: actively seeking out the facts that support their position, and disregarding or minimizing the facts that weaken their cases. Given the complexity and breadth of scientific issues, it is always possible to find experts and data to support any proposition. A person is capable of convincing themselves of just about anything. And, for anything they hear, they can probably find a rebuttal. But, is that data and are those experts representative of the whole of the knowledge on a subject?  To properly get the pulse of a field, you need to think less like a lawyer and more like a scientist: you need to prefer seeing a thing for how it is, and not how you want it to be. This is the essence of what we will hash out in the following articles.

This post is just an introduction to what will hopefully be a series. The next two posts are more or less finished, so I’m going to try to post at least one a week, with the first follow-up coming on Monday. Also, I’ve been experimenting with sound and video, so who knows what crazy things I may try.