How to be a good scientific skeptic

On the importance of being blind. On the importance of being a priori.

In the popular imagination, blindness is often used to denote a deficiency. In science, it is the greatest strength. My dissertation research was the product of six years of work involving around 15 people. The most thrilling aspect of it was that the measurement was “double blinded”. Our calibrations were performed on a separate data set from the one used in our measurement. And the code we used to fit our data was programmed to add an unknown number to offset the answer. We could look at all of our cross-check plots to evaluate the quality of our result, but we were not allowed to peek and know the answer. We could not know the final answer until we were able to convince ourselves and others that our methods were sound. This prevented us from tuning the analysis to give us what we “expected to see”. The final unblinding came after a month of grueling peer review. It was done publicly and it was very scary. In the end, it was worth it. The process very much added to the quality of the work.
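Our actual blinding machinery was far more elaborate, but here is a minimal sketch of the idea in Python (all names and numbers are invented for illustration, not taken from our analysis code):

```python
import random

class BlindFit:
    """Toy blinded measurement: the analyzers never see the true result."""

    def __init__(self, seed):
        # The hidden offset is fixed once, up front, and never printed.
        self._offset = random.Random(seed).uniform(-10.0, 10.0)

    def fit(self, data):
        # Stand-in for the real fit; here, just the sample mean.
        true_result = sum(data) / len(data)
        residuals = [x - true_result for x in data]  # cross-checks use these
        # Analyzers see only the offset (blinded) result.
        return true_result + self._offset, residuals

    def unblind(self, blinded_result):
        # Called exactly once, in public, after the methods are frozen.
        return blinded_result - self._offset

blind = BlindFit(seed=12345)
blinded_result, residuals = blind.fit([4.9, 5.1, 5.0, 5.2, 4.8])
# ...months of scrutinizing residuals and other cross-check plots...
print(blind.unblind(blinded_result))  # the scary part
```

The key design point is that the quality checks (the residuals) are independent of the hidden offset, so the analysis can be vetted fully without anyone being able to tune it toward an expected answer.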

It is too easy to cherry-pick, to ignore inconvenient data, and to “move the goalposts” when our observations of the world do not match what we want to believe. This type of behavior is called a posteriori (meaning after-the-fact) or post hoc reasoning. The best protection against this type of bias is to set the terms of the research before we know the outcome, and to be somewhat hard-nosed about sticking to those terms. Setting your standards “in the first place” is what I mean by being a priori.

Let me give a concrete example of how easily an “unblinded” mind can fool itself. I recently attended a seminar hosted by Bob Inglis, a former congressman from South Carolina and one of the most interesting thinkers on the subject of climate change and policy [1]. His guest speaker, Yale Professor Dan Kahan [2], spoke about how party polarization affects people’s perceptions of scientific issues, and he presented the results of a really interesting study. In it, a group of people were tested for their math abilities and then given a subtle problem where they were asked to draw a conclusion from data on the “effectiveness of a drug trial”. The problem was designed to be counterintuitive, so most people got it wrong, but the top 10% in math ability got it right. Similar groups of people were given the same test, but with the data presented as “gun control” related. Unsurprisingly, liberals tended to favor the liberal answer, even when wrong, and conservatives tended to favor the conservative answer, even when wrong. But here’s the crazy part: the top 10% in math ability were more likely to get it wrong than those with poor math skills [3]. This drives home how insidious confirmation bias is. The smarter a person is, the better they are at selectively filtering data to fit their prejudices!


Upper plot: groups were given two scenarios, one where the drug is ineffective and one where it is effective. The high-numeracy group is equally likely to get it right, and the low-numeracy group is equally likely to get it wrong, regardless of political affiliation. Lower plot: when the same data are presented as related to concealed carry, the more numerate liberals and conservatives are, the more likely they are to get the answer wrong in a way that is consistent with their ideology. For more details and explanation, see Ref. [3] below.

Blinding and a priori reasoning are the antidote. When the math wizards were given a dispassionate problem, they used dispassionate reasoning. When given a partisan problem, they used partisan reasoning. If we can perform the reasoning without knowing the outcome ahead of time, or if we can set our standards before we engage in research, then we are more likely to handle the data objectively.

Let’s now hash this out into a concrete prescription. Let’s say you are investigating the state of science on a very polarizing subject. Here’s an outline of how to proceed:

1) Choose your experts blindly! Most people gravitate to the one or two experts who tell them what they want to hear. In a large field with many tens of thousands of scientists worldwide, it is often possible to find fringe experts to corroborate almost any view, and doing so will fundamentally misrepresent the state of the science. You can counteract this tendency by choosing a random sample (see the sketch after this list). Make a list of 20 or so of the top scientific institutions you can think of. Randomly select 5 from a hat. Look up the experts: people who actively research and publish on the subject. Engage them and ask your questions.

2) Write your questions ahead of time, before you are in a position to ask them.

3) Hash your questions out, before asking them. Run them by somebody else, especially someone with a different outlook. Peer collaboration is an important way to check yourself against bias. Have your colleague look at your questions and challenge you on them. Why did you pick those questions? Are you asking question X as a leading question or out of genuine curiosity? Do these questions cover the relevant issues? Revise your questions ahead of time.

4) Be a priori in thinking about outcomes. What sorts of answers might you get to the questions? Are the questions well defined enough to elicit precise responses? How might you fall into the trap of hearing what you want to hear? Are there ways of framing the questions to prevent this?

5) Go out and talk to the experts.

6) Listen, really listen.

7) Repeat as needed.
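For step 1, here is a minimal sketch of the “draw from a hat” in Python (the institution names are placeholders; the point is that the selection procedure is committed to before you know whom it will pick):

```python
import random

# Write down your ~20 institutions *before* sampling (placeholders here).
institutions = [f"Institution {i}" for i in range(1, 21)]

# A fixed, pre-committed seed makes the draw reproducible,
# so you cannot quietly re-roll until you like the result.
picks = random.Random(2013).sample(institutions, 5)
print(picks)
```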

By pre-defining and constraining your research method before engaging in the research, you prevent yourself from steering the research to suit your bias. Instead, you steer it on the basis of what you see as good research methods.

This may seem like common sense, but I am always amazed at how few people even make an effort to design their research process to guard against bias. What I love about science is the constant push to implement protections against these effects.

One takeaway of the Kahan study: being biased is not restricted to the ignorant. It is equally (if not more so) a challenge to those with literacy and sophistication. Working against our confirmation bias is a universal struggle, and it requires constant diligence. Fortunately, there are tools for mitigating it. And, while it is advisable not to accept things “on blind faith”, you should definitely be inclined to accept things “on blind science”.

References

[1] Inglis is the founder of the Energy and Enterprise Initiative, a policy think tank dedicated to finding market solutions to climate change: http://energyandenterprise.com. See also his address to the Duke School of Business.

[2] Kahan’s blog can be found here. His group at Yale can be found here. Here is also a nice talk on his area of focus.

[3] D. Kahan, E. Peters, E.C. Dawson, P. Slovic, “Motivated Numeracy and Enlightened Self-Government”, preprint available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2319992


It Starts with Observation

[Image: Galileo’s telescope]

One of the key pillars of the Scientific Method is the ascendancy of observation and measurement in the pursuit of knowledge. Science is an empirical endeavor, and observation is the first and most important step in the scientific process. It is the emphasis on observation over “pure thought” that separates modern science from early and medieval science. Science’s grounding in evidence-based methods also explains its great success over the last few centuries.

Approaching “Pure Reason” With Caution

For a very large part of human history, people believed that it was possible to derive the fundamental truths of the Universe from abstract reason alone. Chief among these thinkers was Plato, but the thread was strong among the Greeks and carried through to a lot of medieval thinkers [1].

There is a certain attraction to the purest, most abstract forms of reason, like mathematical logic. However, one should be careful to distinguish between the inevitability of mathematical conclusions within mathematical systems and the ability of those systems to draw absolute conclusions about the “outside world”.

For one thing, there are many possible mathematical systems one can construct by choosing different sets of starting assumptions (axioms). Each system can lead to conclusions which are inevitable within that particular system. Yet the inevitable conclusions drawn from one set of axioms can contradict the logical conclusions of a slightly different set of axioms! For example, in Euclidean geometry the angles of a triangle have to add up to 180 degrees. But there are equally logical, “non-Euclidean” formulations of geometry where this does not have to be the case [2]. As it turns out, some physical phenomena are well described by Euclidean geometry. Other systems are best described by non-Euclidean models. The necessary/inevitable conclusions of each system do not apply equally well to all cases [3].
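As a concrete illustration (standard textbook material, not from the original post): on a sphere of radius R, the angles of a triangle exceed 180 degrees by the “spherical excess”,

```latex
\alpha + \beta + \gamma = \pi + \frac{A}{R^{2}},
```

where A is the triangle’s area. A triangle with one vertex at the north pole and two on the equator, a quarter turn apart, has three 90-degree angles: an angle sum of 270 degrees, matching an area of one eighth of the sphere.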

To scientists, arguments made without grounding in externally observable phenomena are suspect. Observation “keeps it real”. Observation ties us to something external, outside of ourselves. As such, the combination of observation with sound research methodologies can place a check against our cognitive biases. I would argue that the scientific approach is one of realism compared with the idealism often characteristic of proponents of pure thought. To a scientist, “proofs” only exist in the very circumscribed world of math. Outside of math, proofs do not carry weight. We value ideas based on their ability to predict and explain things which can be observed, not on whether we find them to adhere to our limited sense of what is or is not logical.

Hiding Behind A Proof of Smoke

The above thoughts may seem self-evident, but for many people this isn’t so clear. I often encounter individuals who claim to have logical, purely deductive “proofs” that support their beliefs. I would like to highlight two ways in which these “proofs” fail to achieve the rigor or certitude they are sold as having: (1) they shift the burden of proof onto axioms which are not sufficiently demonstrated, and (2) they artificially restrict the allowable outcomes of the system in a way that leads to the desired conclusion but does not reflect the full richness of reality. Let me elaborate:

Shifting the Burden of Proof to the Axioms:

You can prove anything if you start with the right axioms. But, the important question is: are your axioms right? One trick used in phony proofs is to shift some of the burden of proof into the axioms. The presenter hopes that his audience will view the axioms as untouchable, or at least he hopes they will be less skeptical of the axioms. But, it is a mistake to think that the axioms must be accepted without question.

Mathematics is the only field where you may take an axiom to be true “just because…”. For statements about the real world, scientists insist that even our “first principles” be rigorously tested. So, how do you validate a postulate?

1) Often, it is possible to test your assumptions directly. For example, the Theory of Relativity takes as a postulate that nothing can travel faster than the speed of light. That’s just how the Universe seems to be. But we don’t have to accept it blindly, or even because it “seems to make sense”. We can test it. And, indeed, this speed limit has been tested and observed unfathomably many times, with no exceptions found. It is a well-supported postulate of the theory.

2) Even if a postulate cannot be directly tested, its implications can be. If the consequences of an assumption cannot be tested at all, we say that the theory is not well defined. The point of a theory is to start with a minimal set of postulates and to work out their consequences. If those consequences agree with observation, and none of them contradict observation, then we conclude that the postulates are good at explaining reality. Only then do we place trust and weight on new predictions made by that theory. The value of axioms in science lies only in how well they describe reality. As soon as the conclusions of an axiom contradict observations, that axiom is questioned or even thrown away. Unlike the “proofs” offered by opinionated and belief-driven people, the assumptions of science are not sacred or immutable.

A scientist will typically say something like:

Axioms A and B lead to hundreds of inevitable conclusions that are all confirmed by external observation. Therefore I consider proposition C (which hasn’t yet been observed) to be highly likely, since it follows from the same assumptions.

By contrast, belief-driven people tend to say things like:

Let us start with assumptions A and B which make sense and therefore we accept as true. Conclusion C is inevitable, therefore C absolutely must be true!

Note that their “purely deductive” proof is no different from the scientific statement, except that the scientific statement is based on axioms which are rigorously tested and shown to agree with observation.

To summarize: don’t take the axioms of a so-called “proof” for granted. Apply scientific skepticism. Can they be rigorously tested, either directly or systematically through their theoretical consequences? If not, one should be very skeptical of the grounds upon which the proof is made. Finally, one should be aware that a proof based on “pure” deductive reason is not really so “pure” if its axioms are observations about the real world. If the axioms make claims which should be testable, then those claims need to be tested. And if the axioms are untestable, then the proof is incomplete.

Restricting the Freedom of the System:

So-called “logical proofs” are often constructed in a way that limits the number of possibilities available to the system. A skilled rhetorician will present a series of yes-no questions that narrow the possible conclusions and force a person toward the desired endpoint. But nature often operates on a continuum, not necessarily through a finite number of yes-no questions. Let us say that I have a glass marble. I present its color in a way that makes it seem as if the marble can be either red or blue. If I can demonstrate that the marble is not red, then my audience is led to the inevitable conclusion that the marble must be blue. In reality, the marble could well have been green or pink. My argument was logically correct, and if the audience is focused on my logic, they will miss the bigger picture: that I have limited the framework in which that logic exists. My logic is correct and the conclusion follows from the premises, but the logical model I’ve constructed does not describe the full richness of reality. A complete system would need to include the possibility of green marbles.

Pure thought has value. Just don’t oversell it.

I am not saying that the inner dictates of our pure minds or intuitions are necessarily wrong. My issue is with calling them proofs. They are arguments that cannot be corroborated experimentally, but rather appeal to our inner senses. As a religious scientist, I believe that these inner voices do indeed have value.

But it is important for us to realize that what may seem so compelling to our inner voices cannot be conveyed in the form of “proofs”. We should not oversell how “obvious” our intuited or derived views of the world are. And we need to balance our world of pure thought with experience gained from real-life interaction. I do not buy into scientism: the view that the only meaningful things we can say about the world are those that can be determined through science. But I urge my readers to challenge themselves to take empirical findings seriously. As a matter of realism and pragmatism, it is important to check our beliefs against what we can observe and see.

From my experiences in the religious world, I am extremely frustrated by people who try to sell their beliefs with what I term “it’s obvious, stupid!” arguments. I’ve been to religious talks where very skilled and educated rhetoricians resort to intellectually aggressive tactics that cross a serious line for me.

Science “starts” with observation…but it does not end there

Earlier in this post I suggested that observations can place a check on our cognitive biases. But, I must be careful. Observations do not (at all) guarantee objectivity or correctness, as I discussed in my post on “jumping to conclusions”. Even very dogmatic and opinionated people can identify factual observations that support their views. In fact, one of the most insidious forms of bias is confirmation bias, wherein people selectively identify only those facts which support the conclusions they already choose to believe.

Science starts with observation, but it does not end there. It is the first mile marker on a very long road. Science does not just ask for evidence; it demands that the evidence be placed in the context of a careful methodology. This road is fraught with peril. Even the best research falls short of attaining this ideal. And there is plenty of shoddy work in science that fails on a more rudimentary level.

In my next post I will talk about some of the methodological steps that scientists take in order to avoid the effects of bias. I will also take the opportunity to indulge in describing some of my own research. Stay tuned!

Notes/Bibliography

[1] Greek and medieval background:

http://www.betsymccall.net/edu/CLAM/prerenaissance.pdf

Aristotelian “physics” is different from what we mean today by this word, not only to the extent that it belongs to antiquity whereas the modern physical sciences belong to modernity, rather above all it is different by virtue of the fact that Aristotle’s “physics” is philosophy, whereas modern physics is a positive science that presupposes a philosophy…. This book determines the warp and woof of the whole of Western thinking, even at that place where it, as modern thinking, appears to think at odds with ancient thinking. But opposition is invariably comprised of a decisive, and often even perilous, dependence. Without Aristotle’s Physics there would have been no Galileo.

Martin Heidegger, The Principle of Reason, trans. Reginald Lilly (Indiana University Press, 1991), 62-63, by way of Wikipedia:

http://en.wikipedia.org/wiki/History_of_scientific_method

http://aether.lbl.gov/www/classes/p10/aristotle-physics.html

http://galileoandeinstein.physics.virginia.edu/lectures/aristot2.html

[2] https://en.wikipedia.org/wiki/Non-Euclidean_geometry

[3] Applicability of non-Euclidean geometry: http://www.pbs.org/wgbh/nova/physics/blog/tag/non-euclidean/

The Danger of Catapulting To Conclusions and The Power of Alternative Hypotheses


“So what’s going to happen next?” The room was full of physics grad students and nobody wanted to answer. You could hear a pin drop….

Dr. Richard Berg ran the physics lecture demonstration group at the University of Maryland. He designed and provided demos that professors and TAs could use to make their classes more engaging. My personal favorite was the electromagnetic can smasher (for obvious reasons). Every year, first-year graduate students in physics were required to attend a seminar where the various research groups presented their work. It was an opportunity to learn about what was going on in the department and to help us shop for future advisors. But, traditionally, the first presentation was given by Dick Berg. He would present simple table-top experiments that inevitably defied intuition, so much so that a room full of kids with four years of physics study under their belts would squirm in their seats. He’d egg us on: “You tell me what’s supposed to happen next. We’ve got a room full of future physics professors here.”

One of the things you learn as a career scientist is that the world rarely conforms to common sense, and it certainly doesn’t care what we want it to be. Demonstrations like Dr. Berg’s are not exceptions. They are the rule. There is a huge chasm between a finite collection of observations and the systematic conclusions we try to draw from them. I often find myself floored by the way that many non-scientists (and sometimes even scientists) hurl themselves to conclusions that cannot be drawn from the very limited number of facts they have available. For many people, it is enough that a conclusion “just makes sense”, especially if that conclusion reaffirms what they want to believe. But “making sense” can be an illusion, and a dangerous one at that. In popular culture there is a notion of not “jumping to conclusions” prematurely. I think this is an understatement. When we talk about complex subject matters (whether they be the dynamics of supernovas, curing cancer, tax policy, or solving urban poverty), opinionated people can often be seen, not merely “jumping” to premature conclusions, but catapulting to them.

Cognitive Illusions

The human brain is an excellent pattern-finding machine. In fact, it is too good at it! Many people are familiar with optical illusions. But few people are aware of cognitive illusions: the brain’s tendency to see patterns in things that aren’t really there. Given a few facts, our brains quickly connect the dots. I hope to discuss many of these in future posts. In the meantime, Wikipedia provides an excellent list of cognitive biases here. Take the time to look at some of these.

It’s More Complicated Than I Think

The most important lesson I wish more people would take away is that, in complicated, multifactorial problems, it is much harder to come to a solid conclusion or understanding than you may think. It’s a humbling and liberating realization.

Above all, I wish that people would realize that this principle applies to all aspects of our human experience: not just science. I would like to live in a world where even our political lives were informed with the awareness that problems are unlikely to fit into our tidy, ideological boxes.

I would suggest that the realization of the world’s complexity offers a “third way”, a middle path in approaching truth. Rather than accepting the false choice between the relativist approach to truth (there is no absolute truth; it’s all relative) and a sort of chauvinistic approach (of course there is an absolute truth, and it just so happens to agree with my beliefs; what a coincidence!), it is possible to take the realist view that there is an objective truth to the matter, but that it is difficult, if not impossible, to understand it fully. Just remind yourself: “It’s probably more complicated than I think.” One needs to be disciplined to resist falling for intellectual mirages.

The Power of “The Alternative Hypothesis”

In my own experience as a scientist, no skill is more exciting, more important, and yet more overwhelming than the ability to generate alternative hypotheses.

People make educated guesses all the time to fill in the gaps in our understanding. These guesses can make a lot of sense. They can be really clever. And, if our intuition is well trained, they might just be right. But a major trick the brain can pull on us is to underestimate either the complexity or the range of freedom available to the system we’re observing. One way we can shake ourselves out of this complacency is to force ourselves to think of alternative explanations, beyond the first hypothesis to pop into our heads. In fact, it is best to come up with as many alternative hypotheses as possible. With practice and experience, one gets better at this exercise. Often, you learn it the hard way: testing your hypotheses and finding out they were wrong (again and again and again). By seeing a fuller scope and range of possible explanations that all make reasonable sense but imply very different conclusions, you can better figure out how to design tests to narrow down the possibilities, and you can open your mind enough to accept facts and observations that run counter to what you expect. Once you’ve gone through the exercise of listing as many possible hypotheses as you can think of, feel free to pick a pet hypothesis. The important thing is not letting yourself too hastily come to the belief that your hypothesis must be right or that it provides “the only possible explanation”.

Another important realization: even after ranking your guesses by which “make the most sense”, observation can surprise you. There is no rule that nature has to conform to what makes sense to us. And the very idea of something “making sense” can easily become suspect. Hypotheses are starting points, not ends unto themselves.

Mixing Alternative Hypotheses and Politics: Be Careful

We scientists don’t just think up alternative hypotheses for ourselves. It is a part of our culture to offer alternative explanations to our colleagues. This is sort of like being a friendly devil’s advocate, and it is very important to the early process of developing a new theory or experiment. Because of this habit, we often tend to offer alternative hypotheses to non-scientists when we hear them making really strong first guesses about an observation. In political contexts, I often find myself pushing back really hard as a devil’s advocate. The result is that non-scientists interpret my questioning as disagreement. My liberal friends end up convincing themselves that I’m an arch conservative, and my conservative friends think I’m a socialist. If offered patiently and politely, such thoughts can be very helpful. Just be careful; it can end poorly.

Modesty And Curiosity

Through the course of this post I presented a strong case for why we can’t easily draw conclusions from our observations of complicated systems. But I do not mean for this to imply futility or nihilism. On the contrary, the beauty of the scientific enterprise is that it shows us the extent to which the world can be known. Appreciating how little we know opens us to new ideas, to seeing the world through a different lens. It opens us up to our most important intellectual driver: curiosity. You cannot be curious if you’ve fooled yourself into thinking that the matter is closed and you hold all the answers. But if you allow yourself to embrace that curiosity, if you approach problems with intellectual modesty and a thirst for new knowledge, then you will be inspired to do the work necessary to systematically understand the given problem. Ultimately, the pursuit itself will be far more interesting and rewarding than whether or not you ever reach a fully complete or satisfying conclusion.

Coherence between many lines of evidence: Part I

An important hallmark of good science is coherence across many different, independent measurements. It is simply a fact of living in the real world that no single way of measuring something works in all cases. For example, a bathroom scale might be a good way to measure my own mass. But, if I want to determine the mass of a spoonful of flour, I will need to use a different kind of scale.

Different measurements rely on different assumptions. If the starting assumptions are wrong, then a measurement will produce nonsensical results. If I want to measure my mass on the Earth, I can use a standard scale. But if I were on the moon, this scale would give me an incorrect result, since the scale is calibrated assuming Earth’s gravity.
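As a toy illustration of that calibration failure (the gravity values are standard; the scale logic is invented):

```python
G_EARTH = 9.81  # m/s^2, the gravity the scale's calibration assumes
G_MOON = 1.62   # m/s^2, the gravity where it is actually being used

true_mass = 80.0                           # kg
force_on_scale = true_mass * G_MOON        # newtons the spring really feels
displayed_mass = force_on_scale / G_EARTH  # the scale divides by the wrong g
print(f"{displayed_mass:.1f} kg")          # ~13.2 kg: wrong assumption, wrong answer
```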

Let’s examine this issue in the context of one particular measurement problem: how to calculate the age of an old sample using carbon dating. First, a brief introduction/review of how carbon dating works. There are several “isotopes” of carbon. Isotopes are different versions of the same element with different numbers of neutrons (neutral particles) in the nucleus. Neutrons have no effect on the chemical properties of an element, but they can affect how stable it is. Unstable isotopes will decay over time. Carbon-14 (C14) is an unstable isotope of carbon with a half-life of ~5730 years. This means that if I have a sample of C14, after 5730 years I will have half as much. Even though C14 decays away, new C14 is produced in the atmosphere by cosmic rays bombarding nitrogen. Living plants breathe in the C14 (as carbon dioxide) and are eaten by animals. The carbon-14 becomes a part of these living things and continues to replenish as long as they are alive. But as soon as they die, no new C14 enters the specimen, and the fraction of C14 steadily declines. Knowing the atmospheric concentration of C14, we can figure out how long ago a plant or animal died from how much of its C14 has decayed away [1].
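The arithmetic behind this is just the exponential decay law. Here is a minimal sketch, assuming we can measure the surviving fraction of the original C14:

```python
import math

HALF_LIFE = 5730.0  # years, for C14

def c14_age(fraction_remaining):
    """Years since death, given the surviving fraction of the original C14."""
    # N(t) = N0 * 2**(-t / HALF_LIFE), solved for t.
    return -HALF_LIFE * math.log2(fraction_remaining)

print(c14_age(0.5))   # 5730.0 years: one half-life
print(c14_age(0.25))  # 11460.0 years: two half-lives
```

The two assumptions discussed next enter right here: the method needs the starting concentration (to know what “fraction remaining” means) and a constant half-life (to convert that fraction into years).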

Let’s think about some of the challenges involved in carbon dating. Then we’ll peruse the literature and see how these challenges are dealt with.

1) What determines the initial concentration of C14, and is it constant?

The fraction of carbon-14 in the atmosphere is determined by the rate at which it is produced and the rate at which it decays. New C14 is produced by cosmic rays bombarding the atmosphere. In order for carbon dating to work, this concentration should be stable and well known; if it is not, the method would not work.

2) Is the decay rate constant? Are there factors that could change it?

Another potential challenge to the accuracy of radiometric dating is the decay rate of C14 itself. If this decay rate were not constant, we could not use it as a reliable clock.

We don’t need to rely on physical assumptions, even well-motivated assumptions, to verify these underlying premises of carbon dating. We can actually cross-check these assumptions *directly* against other sources of evidence. Carbon dating is not the sole technique we have available for determining age. It exists alongside other, categorically different dating methods [2]. The more independent measurement methods we have, the more confidence we can place in the broad conclusions.

So what are some of the complementary measurement techniques for determining ages?

As most people know, trees grow in seasonal cycles, and their trunks show a pattern of concentric rings corresponding to periods of growth and rest. There is one ring per year. Moreover, the sizes of tree rings vary with fluctuations in local and global climate. These unique, yearly patterns allow us to line up the rings of younger trees with those of older trees and work our way backward. Using the overlap between successive generations of trees, we can patch together a timeline extending back more than ten thousand years. This lining-up process has its challenges and pitfalls, but the people who do the work use precision measurement techniques. In any case, this technique does not depend on carbon-14 levels in the atmosphere. Nor does it depend on the constancy of radioactive decay rates. [3]
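The lining-up step can be pictured as a pattern-matching problem: slide a sample’s ring-width sequence along a master chronology and keep the offset where the two series correlate best. Here is a toy Python version (real crossdating software is far more careful about detrending, significance testing, and sample replication; the ring widths below are invented):

```python
def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def best_overlap(master, sample, min_overlap=4):
    """Toy crossdating: the offset in `master` where `sample` fits best."""
    best_offset, best_r = None, -2.0
    for offset in range(len(master) - min_overlap + 1):
        overlap = min(len(sample), len(master) - offset)
        if overlap < min_overlap:
            continue
        r = pearson(master[offset:offset + overlap], sample[:overlap])
        if r > best_r:
            best_offset, best_r = offset, r
    return best_offset, best_r

# Invented ring widths (mm); the sample lines up at index 5 of the master.
master = [1.2, 0.8, 1.5, 0.6, 0.9, 1.4, 0.7, 1.1, 1.0, 0.5]
sample = [1.4, 0.7, 1.1, 1.0]
print(best_overlap(master, sample))  # (5, ~1.0)
```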

Tree rings fall into a class of dating methods called “incremental dating techniques” because they exhibit a pattern of countable bands, each band corresponding to one yearly cycle. Two other incremental methods that I would like to discuss are varves and ice cores. Varves are seasonal sediment layers that build up in certain lakebeds [4]. The varve record goes much further back than the tree-ring record: close to 50,000 years of banding patterns. Several particular sets of varve records play critical roles as calibration samples for carbon dating. Ice cores are vertical columns of ice, carefully drilled out of large glaciers. Like tree rings and varves, the ice forms a banding pattern due to seasonal thaw-and-refreeze cycles [5]. The ice core record goes back even further: several hundred thousand years. There are many other relative and absolute dating methods that we can use to compare and cross-check carbon dating. I hope to address them in the future. But for this article, let’s stop here.

Now, let’s take a look at our carbon dating method. We can take trees that died in particular years of the tree-ring record and compare their tree-ring age with the fraction of carbon-14 remaining [6]. Likewise, we can take samples found in varves and compare the C14 age with the year of the sediment layer the sample was buried in. Here is a composite of varve data taken from Steel Lake in Minnesota and Lake Suigetsu in Japan, various tree-ring data, and the corresponding carbon-14 concentrations (source: Davidson and Wolgemuth) [7]:

[Figure: tree-ring and varve ages plotted against C14 concentration (Davidson and Wolgemuth)]

Over a span of ~10,000 years of tree rings and 50,000 years of varves, we see a smooth, mostly linear relationship between age and the logarithm of C14 concentration: the older the tree, the less the C14. In other words, the ages given by trees and varves are quite consistent with the C14 concentration. And here is the key point: if either of our two assumptions about carbon dating (known atmospheric concentration and constant decay rate) were significantly wrong, the C14 concentrations would diverge from the other methods. Instead of a straight line, the graph above would fluctuate wildly, with no rhyme or reason. This is clearly not the case.

What makes these methods nice is their complementarity. There are many factors that can bias varve and tree data, but those factors have little to do with the nuclear physics governing Carbon-14 decays. Likewise, even if carbon-14 decay rates were completely different in the past, one would be hard pressed to explain why this effect would simultaneously increase (or decrease) the number of tree rings or sediment layers in the record.

At this point, an astute reader might notice that the graph of C14 concentration versus varves is not a perfectly straight line. There are small fluctuations. And, more importantly, in the older samples (>20,000 years) the yellow points start to bend slightly downward, away from a straight line. Don’t worry, I won’t sweep this under the carpet. As you will recall, we never expected carbon dating to work perfectly. All measurement techniques are inherently imperfect. What scientists and informed scientific readers ask is: (1) how imperfect are these methods, and (2) can we understand those imperfections? In short, are they good enough? With any good scientific method, we should be able to quantify how good it is compared with the accuracy we need. And this takes us to the most exciting point:

Using a large number of complementary measurement techniques can even help us to understand the imperfections and limitations of each individual technique. Coherence does not just reinforce our confidence in a measurement; it helps us to systematically understand it.

First, we need to understand which factors are varying over time. If the plot above jitters a bit, what is causing the jitter? Is it the carbon dating, the tree rings, the varves, or the ice cores? Fortunately, we have at least four different measurement techniques. If two of them agree with each other but not with a third, then we have good reason to believe that the third is the “odd man out”, so to speak. We can plot deviations of C14 age from tree-ring age, for example, and we get the black curve on the plot below. The y-axis (on the left side) shows the percent difference between the measured and expected concentrations of C14 for a sample of a given age. Sometimes there is more C14 than there should be (and we would underestimate the age). Sometimes there is less (and we would overestimate the age). But note the magnitude of the fluctuations: the majority are within 10% of the correct age, with a few deviations as large as around 20%. If we don’t trust these particular tree-ring based measurements, we can go to a completely different part of the world and compare the C14 concentration of air bubbles in an ice core with the age of the ice core, based on counting thaw-and-freeze layers in the ice. We get the same result (shown in red on the same plot)! The carbon-dating age fluctuates with respect to the tree-ring age, and it fluctuates the same way compared to ice cores [8]. This tells us that carbon dating is probably the odd man out.

[Figure: percent deviation of C14 concentration from expectation versus sample age, measured against tree rings (black) and ice-core layers (red)]
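The “odd man out” logic itself is simple enough to sketch in code. With made-up ages from three methods, we can score each method by how much it disagrees with the consensus of the others:

```python
# Invented ages (years) for the same three samples, from three methods.
ages = {
    "tree_rings": [1000, 5000, 9000],
    "ice_cores":  [1010, 4980, 9030],
    "carbon14":   [1100, 4600, 9700],  # drifts relative to the others
}

def mean_disagreement(method):
    """Average |difference| between `method` and the mean of the other methods."""
    others = [m for m in ages if m != method]
    diffs = []
    for i in range(len(ages[method])):
        consensus = sum(ages[m][i] for m in others) / len(others)
        diffs.append(abs(ages[method][i] - consensus))
    return sum(diffs) / len(diffs)

for method in ages:
    print(method, round(mean_disagreement(method)))
# carbon14 shows the largest disagreement: it is the "odd man out".
```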

We have now shown that carbon dating is not perfect, as we expected. However, we can quantify how imperfect it is: mostly within 10%. And we can do this by directly comparing it against three completely different and complementary techniques (tree growth, ice formation, and sediment layers in lake beds). Now we have one more question: do we know why the C14 date varies like this? Can we determine the cause?

Even here, we have independent data and methods we can use to test the possible causes. Beryllium-10 is a radioactive isotope with a MUCH longer half-life than carbon-14 (1.4 million years). This means that however much Be-10 there was in a sample a few thousand years ago, pretty much the same amount remains today (because hardly any of it has decayed). If rates of cosmic rays varied over time, they would change the concentration of C14 and cause these deviations in the C14 age. They would also change the Be-10 concentrations in the atmosphere [9]. Since Be-10 decays on a very different and much slower timescale, it provides a nice complementary measurement to compare with C14. In the plot below, deviations between the C14 age and tree rings are compared with deviations in Be-10 concentrations in layers of corresponding ice cores. Here we are essentially comparing two different radiometric techniques (C14 and Be-10 concentrations) against two independent chronologies from different parts of the world (trees and glaciers), and we see similar global cycles in the concentrations of radioisotopes.

[Figure: deviations between C14 age and tree-ring age compared with Be-10 concentrations in ice cores]
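As a sanity check on the “hardly any of it decayed” claim (my arithmetic, using the half-life quoted above): after even 50,000 years, the surviving fraction of Be-10 is

```latex
2^{-t/t_{1/2}} = 2^{-50000/1400000} \approx 0.976,
```

so less than 3% of the Be-10 is lost over the entire varve record, while C14 over the same span passes through nearly nine half-lives.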

It gets even better: we know that the earth’s magnetic field can change polarity (the North pole flips with the South pole). We know that this has happened in the past from looking at the magnetic polarity of layers of rock in the geological column. During the transition period from one polarity to the other, we expect low magnetic fields and therefore high rates of cosmic rays (since the earth’s magnetic field deflects cosmic rays). If this were the case, we would predict a large deviation between carbon dating and varves, and a large change in Be-10 concentrations in ice cores. Indeed, we see just this. The most recent such reversal is called the “Laschamp geomagnetic excursion”, and it occurred around 41,000 years ago. Below are two plots from two different papers showing changes in radioisotope concentrations during this event. First, from a paper specifically discussing the Laschamp excursion [10]: in this pair of plots, the bottom plot shows the orientation of the earth’s magnetic field and the top plot shows Be-10 concentrations measured in Greenland ice cores. We see spikes in the Be-10 concentrations following the “transition” periods when the orientation of the earth’s magnetic field flipped sign and cosmic rays would be at their highest rates.

[Figure: Be-10 concentrations in Greenland ice cores (top) and magnetic field orientation (bottom) across the Laschamp excursion]

Similarly, in a recent paper using varves taken from Lake Suigetsu, a large deviation between the varve age and the carbon-dating age is observed a little over 40,000 years before the present [11].

[Figure: deviation between varve age and carbon-dating age in the Lake Suigetsu record near the Laschamp excursion]

Coherence is a much stronger proposition than mere repeatability. Repeatability of the same experiment or measurement is very important in science as a means of catching mistakes (or phony results). However, scientists are professional skeptics, and we do not like to hang our hats on a proposition that relies on a single measurement technique. We demand coherence across a wide spectrum of different methods, each with non-overlapping assumptions. In this post we saw in carbon dating some beautiful examples of how coherence among many methods can be used to quantify and shine light on the limitations of each individual method. This is only the tip of a much deeper iceberg. Geological dating techniques, and radiometric dating in particular, are approaching a century old. There are literally hundreds of techniques and thousands of papers spanning the decades. These results have been reproduced again and again, under the careful scrutiny of thousands of scientists who gave these problems their undivided attention. I do want to come back to this subject. If I get good questions about this or other related concepts, I’m liable to write a follow-up post.

References

[1] For further reading, see: (a) Carbon Dating, (b) C14dating.com, (c) Radiocarbon Calibration.

[2] Paula J. Reimer, “Refining the Radiocarbon Time Scale”, Science 338, 337 (2012).

[3] For further reading, see: (a) About tree rings, (b) Principles of Dendrochronology, (c) NOAA slides on tree-rings.

[4] For further reading, see: (a) Varves as natural calendars, (b) Radiocarbon Dating of Varve Chronologies.

[5] For further reading, see: (a) Ice Core 101, (b) Stratigraphic dating of ice cores, (c) NOAA slideshow, (d) Data from the Vostok ice core.

[6] Dendrochronology and Radiocarbon Dating

[7] Lake Suigetsu and the 60,000 year varve chronology

[8] C-14 in ice cores and tree-rings

[9] Y. Stozhkov, “Influence of Atmospheric Processes on Be-10 atom concentrations”.

[10] N.R. Nowaczyk, H.W. Arz, U. Frank, J. Kind, B. Plessen, “Dynamics of the Laschamp geomagnetic excursion from Black Sea sediments”, Earth and Planetary Science Letters 351-352 (2012) 54-69.

[11] Christopher Bronk Ramsey et al., “A Complete Terrestrial Radiocarbon Record for 11.2 to 52.8 kyr B.P.”, Science 338, 370 (2012).