Bursting the “Filter Bubble”

View gaps

I’ve mentioned many times that it is our nature to preferentially seek out information that tells us what we want to hear, while ignoring those facts which are inconvenient to our beliefs. It is an insidious problem.

The problem of this “selection bias” is bad enough by itself. The scary thing is that the web makes matters worse: services like Google and Facebook have built-in “predictive filtering”. Information is presented to us according to predictions of what we would like to see (based on our past online behavior).

Selective filtering by search engines and web pages is a nice idea on paper. In practice, it serves to further insulate us from information that would challenge our beliefs. We end up living in an information bubble where we receive only the information that we want to hear. Not only do we choose to ignore challenging facts, but Google (and others) already make that choice for us. It becomes all the easier to get entrenched in our oversimplified ideological worldviews. Perhaps this media trend is part of the reason for the growing political polarization in America.

In 2011, Eli Pariser gave a great TED Talk on the subject. I highly recommend it.

Just yesterday, my sister-in-law Andrea sent me an interesting article in the MIT Technology Review about a group that is working to solve this problem:

http://www.technologyreview.com/view/522111/how-to-burst-the-filter-bubble-that-protects-us-from-opposing-views/

Another interesting solution I’ve seen is a browser plugin that can connect readers of a particular website to webpages that rebut the arguments on that site:

http://blog.rbutr.com/about-us/

In both cases, I’m glad to see people working on solving this problem.

My own two cents

I think this is an excellent idea. However, I would caution both groups against presenting the world as a dialectic, a two-sided “point-counterpoint”. People tend to sort themselves into two opposing camps on any issue, but the real world does not work that way. The real world shows much richer complexity than dualisms allow. On any issue, there is a wide range of possible views and available facts.

I think it would be interesting, not only to expose people to “opposing” viewpoints, but to also expose them to atypical “hybrid” viewpoints: like animal rights activists who belong to the NRA, or social conservatives who want to legalize pot.

People are more receptive to contrary evidence from those closer to their worldview than from those at its polar opposite. You have to ease someone into new ideas. And the best way to shake people out of their box is to show them that the world has many more than just two possibilities available to it.


Can science be applied to questions about the past?


A friend of mine recently wrote in a blog post that “we can’t apply the scientific method to the past” [1].

I admit to sometimes being a little knee-jerk about such comments: I often encounter them in religious circles, used as a way to dismiss particular scientific findings that appear to contradict religious doctrine. By saying “science cannot go there,” it becomes all too easy to throw out contradictory evidence with a flick of the wrist. In the process, one throws out a large number of topics that have long been considered science: geology, paleontology, cosmology, etc.

In any case, knee-jerk reactions are never worthwhile. The question is very interesting, and it offers a good teaching opportunity to review what science is and to discuss why these subjects are indeed legitimately scientific. So, here we go.

Nobody Was Around Back Then

One argument often put forward is that we cannot make scientific statements about events that happened before anyone was around to witness them. This is a slippery slope: why stop with the past? It is true that nobody was around to witness the dinosaurs, but then nobody has ever seen a proton either! Yet, I’ve never heard anyone question the existence of protons.

The vast majority of contemporary science happens far outside the realm of direct human experience (which covers an incredibly narrow band of the Universe). Most of science relies on instrumentation and techniques that extend our reach beyond that narrow world of limited direct experience. These techniques have worked to great success, allowing us to see many orders of magnitude smaller than the wavelength of visible light, many orders of magnitude beyond our planet, and many orders of magnitude into the past. What makes any of these studies scientific is not whether people were around to witness the phenomena, but whether they correctly follow the methods and protocols of science.

So What Are the Protocols of Science?

It is not my intention in this article to provide a full review of the scientific method. I happen to really like Peter Hadfield’s YouTube video on the subject. Let’s just focus on some of the key points:

Is there data? Of course. We have fossils, mountains, volcanic rocks, erosion, radiometric abundances, ice cores, seasonal sediment deposits, tree rings, etc.

Are there quantifiable, testable theories that explain the above phenomena? Of course: the theory of plate tectonics, the theory of evolution, nuclear theory, etc.

A key point in science is the idea of falsification. Scientific theories must be capable of tests that can falsify them if they are incorrect. Is this the case for “historical” science? Of course. For example, there used to be many competing hypotheses regarding the extinction of the dinosaurs. As evidence piled in, the majority of these were falsified, and the discussion now focuses on a very limited number of hypotheses (for example, see The K-T Extinction, The Chicxulub Debate, and Deccan Volcanism, which express several competing but much more circumscribed theories on dinosaur extinction) [2].

More importantly, have these theories made novel predictions a priori (without knowing the outcome) that were later tested and validated? Yes! Scientists have made countless thousands of predictions regarding evidence of past events that should be discoverable in the geological and fossil records. These predicted observations have since been independently and repeatedly confirmed.

A hypothetical example: a geologist discovers the site of a huge volcanic eruption. The date of the eruption is estimated from the location of the site in the geological column. Multiple samples are sent to several radiometry labs. Without knowing the origin of the samples, these independent groups consistently date the sample to a period in time that agrees with the geologist’s original estimate. Researchers look in the ice core record to find a dust layer consistent with aerosol particles from a volcano. Knowing when the eruption happened, they count back the layers of seasonal ice deposits to the range of years predicted by the radiometric dating. They find a large eruption in exactly the right spot. Similar observations are made in other geological records. Isotopic analysis shows all of these layers, in multiple ice cores around the globe and in various sediment deposits, to match the composition of material found at the site of the eruption. A real-life example of this hypothetical is described in more depth in this article on The Toba Super Eruption and Polar Ice Cores and this one on The Lake Malawi Sediment Chronometer and the Toba Super Eruption [3]. How is this not science?
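
To make the logic of “independent methods agreeing” concrete, here is a minimal sketch of the kind of consistency check involved. The numbers are invented (loosely inspired by the roughly 74,000-year-old Toba eruption discussed in Ref [3]), and real analyses are far more careful, but the idea is the same: combine independent estimates weighted by their uncertainties and ask whether the scatter is consistent with those uncertainties.

```python
# Toy consistency check across independent dating methods (invented numbers).
estimates = {                      # method: (age in years, 1-sigma uncertainty)
    "radiometry lab A": (74_000, 2_000),
    "radiometry lab B": (73_400, 2_500),
    "ice core layers":  (75_100, 1_500),
    "lake sediments":   (74_300, 3_000),
}

# Inverse-variance weighted mean: more precise methods count for more.
weights = {name: 1.0 / sigma**2 for name, (_, sigma) in estimates.items()}
mean_age = sum(w * estimates[n][0] for n, w in weights.items()) / sum(weights.values())

# A chi-square per degree of freedom near 1 means the methods agree within errors.
chi2 = sum(((age - mean_age) / sigma) ** 2 for age, sigma in estimates.values())
print(f"combined age ~ {mean_age:,.0f} yr, chi2/dof = {chi2 / (len(estimates) - 1):.2f}")
```

When many methods with unrelated failure modes land on the same answer within their stated errors, that agreement itself is a tested, quantified result, not an article of faith.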

But you cannot conduct controlled experiments?

There is a common misconception that to do science, you must be able to conduct laboratory experiments. By such an argument, one could say that “mountains exist” is not a scientific proposition, simply because it is impossible to create and observe a mountain in the laboratory. On the contrary, some sciences are based almost exclusively on field observation. This, of course, does present a number of unique challenges. Performing a “control run” is impossible with historical events, but it is still possible to implement “controls” against researcher bias. However, the unique challenges of historical science are understood and addressed by the experts in these fields. For an excellent in-depth discussion, read this article [4].

One interesting point made in Ref [4] is that establishing whether an event happened is an easier task than deriving a generalizable causal model for a phenomenon. Saying “B happened” is much easier than saying “phenomenon A generally causes phenomenon B through mechanism X”. The many trials and control runs in laboratory science are necessary in order to disentangle the possible causes of an effect. They are unnecessary for establishing the particulars of a single historical event. A criminologist must study the profiles of many serial killers and many normal individuals in order to understand what makes a serial killer. A forensic detective need only establish that a pattern of murders is consistent with a single serial killer. Both of them are unquestionably studying the same phenomenon. The difference is that one looks forward in time (the criminologist) and the other looks backward (the forensic detective).

What if the laws of nature have changed in time?

As I discussed in my post on coherence between many lines of evidence, scientists don’t just look at a single measurement or a single technique. We compare different measurements to each other. The more numerous and varied these measurements are, the more constrained is the system we are trying to understand. To go back to that previous example: if the laws of radioactive decay had changed significantly over the last hundred thousand years, we would see them diverge from the tree ring or the ice core record. One would be hard pressed to come up with a scenario where completely independent physical processes, such as seasonal melting patterns in ice, seasonal sediment deposits in lake beds, chemical processes, etc., would vary perfectly in concert so as to render the change in radioactive decay rates undetectable. Rather, if all of these independently corroborate the story told by a radiometric measurement, then I can rest assured that the decay rates must have been pretty constant with respect to a wide variety of other physical processes. These sorts of calibrations (including volcanic eruptions) have been performed many thousands of times by many thousands of scientists for nearly a century. The majority of evidence from our past works out to be very consistent with science as we know it, across all major fields of science.
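
To illustrate why such a change could not hide, here is a minimal sketch with a made-up 10% shift in the carbon-14 decay rate (the shift is purely for illustration). A past change in the half-life would show up as a mismatch between radiocarbon ages and the calendar ages counted directly from tree rings, and the mismatch would grow with age rather than concealing itself:

```python
# Toy sketch: if C14 had decayed 10% more slowly in the past (invented figure),
# radiocarbon ages would diverge from tree-ring calendar ages, and the
# discrepancy would grow with age instead of staying hidden.
import math

ASSUMED_HALF_LIFE = 5730.0          # years, the value used by the dating formula
PAST_HALF_LIFE = 5730.0 * 1.1       # pretend decay was 10% slower back then

for calendar_age in (1_000, 5_000, 10_000):
    fraction_left = 0.5 ** (calendar_age / PAST_HALF_LIFE)   # what we'd measure
    apparent_age = ASSUMED_HALF_LIFE * math.log2(1.0 / fraction_left)
    print(f"tree rings: {calendar_age:>6} yr -> C14 would report {apparent_age:>6.0f} yr")
```

At 10,000 tree-ring years the radiocarbon clock would read roughly 9,100 years, a discrepancy far larger than the precision of either record.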

But You Are Taking on Faith That Different Scientific Principles Should Behave Coherently?

It is not necessary to take the coherence of physical law on faith. The proof is in the pudding. Science is an empirical endeavor. We judge a model on two questions: does this model explain the data? Can this model effectively elucidate new observations that we hadn’t thought to look for? If the model works, we have a good theory.

The coherence of natural principles in the past is demonstrated by the fact that our scientific theories are able to make coherent statements about them. If natural law were incoherent in the past, or even inconsistent with modern science, then our theories would not be able to make sense of the observations: something as simple as comparing carbon dating with tree rings would just break down and give divergent or nonsensical answers.

At the end of the day, it is always true that there are certain implicit assumptions built into a scientific analysis. This is as true for the science of present phenomena as it is for past phenomena. But the goals are:

1) Minimalism in assumptions
2) Hyper-sensitive awareness of those assumptions and their possible consequences.
3) Constantly testing those assumptions: either through direct measurement or by working out their consequences. Never taking those givens for granted. Not accepting them on faith or just because they “feel right”.

Few non-scientists will ever appreciate the degree to which scientists take these principles seriously. We spend every day thoroughly and systematically applying them. I wish more people could experience the work it takes to perform a precision measurement in science.

Maybe the laws themselves all changed coherently?

Imagine a society with a gold-based currency. Now imagine I told you that the value of gold per gram has doubled. Now imagine further that the prices in this economy simultaneously doubled across the board. The price of an apple would be twice as high, but the value of a gold coin would also have doubled. If an apple cost 3 gold coins before, it would still cost 3 gold coins. Since all of the relationships between money and price remained the same, a person would be unable to detect any meaningful change.

As in the above metaphor, science studies the interrelationships between various natural phenomena. If all of these phenomena changed in a way that preserves those interrelationships, then at some point it becomes meaningless to say that anything was different. At the end of the day, if the science of today can make predictive statements about evidence from the past, then one can say that the science of today is perfectly good at describing said evidence. One can say that science is still valid, and it is merely a matter of philosophical speculation about “the nature of reality” to delve any further. After all, our brains could be in “The Matrix”, in which case all of our science (past and present) is purely fictional. Such speculations may be valuable in themselves, but they go beyond science and into the realm of metaphysics.

A person can always invoke “unknown unknowns”, or speculate as to the existence of some mechanism for fooling the science. Such claims are often made without doing any of the hard work of providing a mechanism or demonstrating it through evidence-based inquiry. As a scientist, I always leave room for the “unknown unknowns”. But I have to be pragmatic, and I have to live in a world where some things are better known than others. We understand many events of the past better than we understand many happenings in the world today.

Conclusion

The fact that a topic is based on field observation rather than laboratory experimentation does not mean it isn’t science. Moreover, direct eyewitness human contact is not a precondition for science. Science is about building precise, numerical theories that explain what is known and make verifiable predictions of phenomena that were previously unknown. The latter criterion is essential. The ability of a theory to predict phenomena that nobody previously thought to look for is the key to making sure that said theory is not merely being tweaked or fudged to match the known data alone.

Most scientific frameworks for explaining past events satisfy these criteria in spades. While the cosmology of the early Universe is still subject to big unknowns, natural history going back a few hundred million years is well known, not only in broad strokes but even in exact details: periods of glaciation at precisely the calculable times of wobbles in the Earth’s orbit, mass extinctions, asteroid impacts, and volcanic eruptions.

Scientific paradigms in biology, astronomy, physics, geology, chemistry, and climatology are able to precisely and consistently describe phenomena in the geological record on multiple levels of complexity, ranging from the fundamental (e.g., nuclear decays, thermoluminescence, or rates of fossilization) to the macroscopic (e.g., tree growth, statistical changes in fossil morphology, glacial growth, or solar evolution). Having read some of the primary literature (though I don’t claim to be an expert), I find it impossible to overstate how vast the evidence is. Anyone who tells you otherwise is either lying to you or deceiving themselves. I urge my readers not to take my word for it. Learn about it. It’s really cool stuff! And I’m glad to help you out with references or explanations on technical matters (to the best of my ability), and to refer you to experts when I’m in over my head.

Further Reading

[1] Patently Jewish – Folly of Faith, Folly of Reason

[2] R. Cowen – The K-T Extinction; G. Keller – The Chicxulub Debate, Deccan Volcanism

[3] The Toba Super Eruption and Polar Ice Cores and The Lake Malawi Sediment Chronometer and the Toba Super Eruption. Check out the primary sources for further reading.

[4] C.E. Cleland, “Historical Science, Experimental Science and the Scientific Method”, Geology (Nov 2001), v. 29, no. 11, pp. 987–990. Click here to read the full article.

It Starts with Observation


One of the key pillars of the Scientific Method is the ascendancy of observation and measurement in the pursuit of knowledge. Science is an empirical endeavor, and observation is the first and most important step in the scientific process. It is the emphasis on observation over “pure thought” that separates modern science from early and medieval science. Science’s grounding in evidence-based methods also explains its great success over the last few centuries.

Approaching “Pure Reason” With Caution

For a very large part of human history, people believed that it was possible to derive the fundamental truths of the Universe from abstract reason alone. Chief among these thinkers was Plato, but the thread was strong among the Greeks and carried through to a lot of medieval thinkers [1].

There is a certain attraction to the purest, most abstract forms of reason, like mathematical logic. However, one should be careful to distinguish between the inevitability of mathematical conclusions within mathematical systems and the ability of those systems to draw absolute conclusions about the “outside world”.

For one thing, there are many possible mathematical systems one can construct by choosing different sets of starting assumptions (axioms). Each system can lead to conclusions which are inevitable within that particular system. Yet, the inevitable conclusions drawn from one set of axioms can contradict the logical conclusions of a slightly different set of axioms! For example, in Euclidean geometry the sum of the angles of a triangle has to add up to 180 degrees. But there are equally logical, “non-Euclidean” formulations of geometry where this does not have to be the case [2]. As it turns out, some physical phenomena are well described by Euclidean geometry. Other systems are best described by non-Euclidean models. The necessary, inevitable conclusions of each system do not apply equally well to all cases [3].
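
A concrete instance (a standard textbook result, added here for illustration): on the surface of a sphere of radius $R$, Girard’s theorem gives the angle sum of a triangle of area $A$ as

$$\alpha + \beta + \gamma = \pi + \frac{A}{R^2}$$

Walk a triangle from the north pole down to the equator, a quarter of the way around the equator, and back up to the pole: it covers one octant of the sphere ($A = \pi R^2/2$), has three right angles, and its angles sum to 270 degrees, not 180. The “inevitable” Euclidean conclusion simply does not hold on a curved surface.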

To scientists, arguments made without grounding in externally observable phenomena are suspect. Observation “keeps it real”. Observation ties us to something external, outside of ourselves. As such, the combination of observation with sound research methodologies can place a check against our cognitive biases. I would argue that the scientific approach is one of realism compared with the idealism often characteristic of proponents of pure thought. To a scientist, “proofs” only exist in the very circumscribed world of math. Outside of math, proofs do not carry weight. We value ideas based on their ability to predict and explain things which can be observed, not on whether we find them to adhere to our limited sense of what is or is not logical.

Hiding Behind A Proof of Smoke

The above thoughts may seem self-evident, but there are many people for whom this isn’t so clear. I often encounter individuals who claim to have logical, purely deductive “proofs” that support their beliefs. I would like to highlight two ways in which these “proofs” fail to achieve the rigor or certitude that they are sold as having: (1) they shift the burden of proof to axioms which are not sufficiently demonstrated, and (2) they artificially restrict the allowable outcomes of the system in a way that leads to the desired conclusion but does not reflect the full richness of reality. Let me elaborate:

Shifting the Burden of Proof to the Axioms:

You can prove anything if you start with the right axioms. But, the important question is: are your axioms right? One trick used in phony proofs is to shift some of the burden of proof into the axioms. The presenter hopes that his audience will view the axioms as untouchable, or at least he hopes they will be less skeptical of the axioms. But, it is a mistake to think that the axioms must be accepted without question.

Mathematics is the only field where you take an axiom to be true “just because…”. When it comes to statements about the real world, scientists insist that our “first principles” be rigorously tested. So, how do you validate a postulate?

1) Often, it is possible to test your assumptions directly. For example, the Theory of Relativity takes as a postulate that nothing can go faster than the speed of light. That’s just how the Universe seems to be. But, we don’t have to accept it blindly or even because it “seems to make sense”. We can test it. And, indeed, this speed limit has been tested unfathomably many times, with no exceptions observed. It is a well-supported postulate of the theory.

2) Even if the postulate cannot be directly tested, the implications of the postulate can always be tested. If the consequences of an assumption cannot be tested, we say that the theory is not well defined. The point of a theory is to start with a minimal set of postulates and to work out the consequences of the postulates. If the consequences of those postulates agree with observation and if none of the consequences of those postulates contradict observation, then we feel that the postulates are good at explaining reality. Only then do we place trust and weight on new predictions made by that theory. The value of axioms in science lies only in how well they describe reality. But, as soon as the conclusions of an axiom contradict observations, that axiom is questioned or even thrown away. Unlike the “proofs” offered by opinionated and belief-driven people, the assumptions of science are not sacred or immutable.

A scientist will typically say something like:

Axioms A and B lead to hundreds of inevitable conclusions that are all confirmed by external observation. Therefore I consider proposition C (which hasn’t yet been observed) to be highly likely, since it follows from the same assumptions.

On the contrary, belief-driven people tend to say things like:

Let us start with assumptions A and B, which make sense and which we therefore accept as true. Conclusion C is inevitable, therefore C absolutely must be true!

Note that their “purely deductive” proof is no different in form from the scientific statement, except that the scientific statement is based on axioms which have been rigorously tested and shown to agree with observation.

To summarize: don’t take the axioms of a so-called “proof” for granted. Apply scientific skepticism. Can they be rigorously tested, either directly or systematically through their theoretical consequences? If not, one should be very skeptical of the grounds upon which the proof is made. Finally, one should be aware that a proof based on “pure” deductive reason is not really so “pure” if the axioms are observations about the real world. If the axioms make claims which should be testable, then those claims need to be tested. And if the axioms are untestable, then the proof is incomplete.

Restricting the Freedom of the System:

So-called “logical proofs” are often constructed in a way that limits the number of possibilities available to the system. A skilled rhetorician will present a series of yes-no questions that narrow the possible conclusions in a way that herds a person toward the desired conclusion. But nature often operates on a continuum, not on a finite number of yes-no questions. Let us say that I have a glass marble. I present its color in a way that makes it seem as if the marble can be either red or blue. If I can demonstrate that the marble is not red, then my audience is led to the inevitable conclusion that the marble must be blue. In reality, the marble could well have been green or pink. My argument was logically correct. And, if the audience is focused on my logic, they will miss the bigger picture: that I have limited the framework in which that logic exists. My logic is correct and the conclusion follows from the premises, but the logical model I’ve constructed does not completely describe the full richness of reality. A complete system would need to include the possibility of green marbles.

Pure thought has value. Just don’t oversell it.

I am not saying that the inner dictates of our pure minds or intuitions are necessarily wrong. My issue is with calling them proofs. They are arguments that cannot be corroborated experimentally, but rather appeal to our inner senses. As a religious scientist, I believe that these inner voices do indeed have value.

But, it is important for us to realize that what seems so compelling to our inner voices cannot be conveyed in the form of “proofs”. We should not oversell how “obvious” our intuited or derived views of the world are. And, we need to balance our world of pure thought with experience gained from real-life interaction. I do not buy into scientism: the view that the only meaningful things we can say about the world are those that can be determined through science. But, I urge my readers to challenge themselves to take empirical findings seriously. As a matter of realism and pragmatism, it is important to check our beliefs against what we can observe.

From my experiences in the religious world, I am extremely frustrated by people who try to sell their beliefs with arguments I would term “it’s obvious, stupid!”. I’ve been to religious talks where very skilled and educated rhetoricians resort to intellectually aggressive tactics that cross a serious line for me.

Science “starts” with observation…but it does not end there

Earlier in this post I suggested that observations can place a check on our cognitive biases. But, I must be careful. Observations do not (at all) guarantee objectivity or correctness, as I discussed in my post on “jumping to conclusions”. Even very dogmatic and opinionated people can identify factual observations that support their views. In fact, one of the most insidious forms of bias is confirmation bias, wherein people selectively identify only those facts which support the conclusions they already choose to believe.

Science starts with observation but it does not end there. It is the first mile marker on a very long road. Science does not just ask for evidence; it demands that said evidence be placed in the context of a careful methodology. This road is fraught with peril. Even the best research falls short of attaining this ideal. And there is plenty of shoddy work in science that fails on a more rudimentary level.

In my next post I will talk about some of the methodological steps that scientists take in order to avoid the effects of bias. I will also take the opportunity to indulge in describing some of my own research. Stay tuned!

Notes/Bibliography

[1] Greek/medieval sources:

http://www.betsymccall.net/edu/CLAM/prerenaissance.pdf

Aristotelian “physics” is different from what we mean today by this word, not only to the extent that it belongs to antiquity whereas the modern physical sciences belong to modernity, rather above all it is different by virtue of the fact that Aristotle’s “physics” is philosophy, whereas modern physics is a positive science that presupposes a philosophy…. This book determines the warp and woof of the whole of Western thinking, even at that place where it, as modern thinking, appears to think at odds with ancient thinking. But opposition is invariably comprised of a decisive, and often even perilous, dependence. Without Aristotle’s Physics there would have been no Galileo.

Martin Heidegger, The Principle of Reason, trans. Reginald Lilly (Indiana University Press, 1991), pp. 62–63, by way of Wikipedia:

http://en.wikipedia.org/wiki/History_of_scientific_method

http://aether.lbl.gov/www/classes/p10/aristotle-physics.html

http://galileoandeinstein.physics.virginia.edu/lectures/aristot2.html

[2] https://en.wikipedia.org/wiki/Non-Euclidean_geometry

[3] Applicability of non-Euclidean geometry: http://www.pbs.org/wgbh/nova/physics/blog/tag/non-euclidean/

Anatomy of an Internet Rumor


A friend of mine alerted me to the linked article [1] and wondered if there is any weight to the story. I thought this would be a fun chance to work my blogging chops and sort of live-blog how I would go about dissecting and validating this story.

The article describes the “independent verification” of a potential “cold” nuclear process developed by Italian industrial tycoon Andrea Rossi. Cold fusion is the name given to the idea that one can produce sustained nuclear fusion reactions at low temperatures. Conventional fusion reactions only occur at very high energies. Cold fusion has a thoroughly discredited history, starting with its claimed observation by Pons and Fleischmann, followed by a very public and embarrassing discrediting by the larger scientific community. The story provides an excellent cautionary tale for what can happen when researchers fail to adhere to proper standards of scientific method (for example, see Ref [2]). Since then, cold fusion has essentially been viewed as quack science by most physicists and the public at large. But, as is often the case, it has attained a certain cult status among a small but very passionate community of conspiracy theorists and pseudo-scientists.

What I did not know prior to researching this article is that there is a new, scientifically more credible idea [3] bouncing around for what are called Low Energy Nuclear Reactions (LENR). The reactions are not the same as cold fusion, but do potentially amount to low-temperature, energy-generating nuclear mechanisms. It is on the basis of this new idea about LENR that Andrea Rossi comes in.

Andrea Rossi has, for several years now, claimed to have developed an LENR technique he calls “ECat” that yields energies far beyond chemical processes such as the burning of fossil fuels. He has been extremely secretive and cryptic, but now people are claiming that his technique has been independently verified by a group of scientists who have posted an article on arXiv [4], a commonly used repository for physics pre-prints (unpublished drafts of papers). It is worth noting that arXiv allows anyone to post articles, and the documents on it are not necessarily peer-reviewed. This pre-print, in particular, is not yet reviewed or published. Frankly, it doesn’t even look ready to be submitted for review in a real journal, as we shall discuss. Nonetheless, this event has sufficed to set the blogosphere abuzz. It has even attracted the attention of some big-name outlets such as Forbes [5]. But is there any punch behind this story?

Step 1: Is this source reliable?

One of the first questions one should always ask is whether the source of the report is trustworthy. This is not a simple question. Some sources are reliable on some subjects, but not others. Moreover, just because a source is a major media outlet does not automatically mean it’s reliable. It is worth noting that most formal news sources are pretty unreliable on scientific subjects.

No magazine, paper, news show, or blog should get a free pass. Everything should be fact-checked. If you haven’t ever fact checked your news sources, I highly recommend it. And, once you’ve tested a source enough times, it is possible for said source to earn your trust. But, make it earn your trust.

In this case, I would say that I have no experience with ExtremeTech.com, so I will approach the article with appropriate skepticism.

Step 2: Make note of “red flags”

The first thing any skeptic will do is look for red flags: potential indications that the claim is bogus. The presence of red flags does not necessarily mean that the claim is bogus. They simply bring attention to the possibility that it could be. Here are a few things that jump out at me:

The cold fusion device being tested has…1,000 times the power density of gasoline. Even allowing for a massively conservative margin of error, the scientists say that the cold fusion device they tested is 10 times more powerful than gasoline….

So the power is between 10 and 1,000 times greater than gasoline? This is a pretty huge margin of error. When I read a scientific finding, I generally expect the authors to get the order of magnitude right. The above quote is equivalent to saying: “The nearest gas station is several kilometers away. Even allowing for a conservative margin of error, the gas station is 10 meters away!” If an experiment cannot distinguish between a factor of 10 and a factor of 1,000, maybe it isn’t time to break out the bottle of champagne.

Then there are these quotes:

Rossi, it would seem, has discovered a secret sauce that significantly reduces the amount of energy required to start the reaction. As for what the secret sauce is, no one knows — in the research paper, the independent scientists simply refer to it as “unknown additives.”

While Rossi hasn’t provided much in the way of details — he’s a very secretive man, it seems…”

One need not be a scientist to be skeptical of claims involving a “secret sauce”. And, of course, there is Rossi himself. Rossi’s business history [6] is very much a cause for red flags. Should I trust a man who has a history of fraud charges? Well, let’s not offhandedly dismiss him, but let’s not give him a free ride either.

Step 3: Find the “linchpins” of the article

What are the key points on which the article hangs its credibility? Every article or blog post will identify one or two key points that the author thinks will legitimize his claims. Even questionable claims can have a grain of truth to them. Our goal is to identify these key points and establish: (1) are they true, and (2) if they are true, do they actually support the full thesis of the article?

This article is about Rossi’s great discovery. Due, in part, to his “secretive” nature, the author cannot really legitimize Rossi’s claims on their own terms. Instead, he needs to look to independent sources. Who are these sources? Are they legit? And do they validate Rossi’s claims?

It seems the credibility of Rossi’s claims hangs on two points:

(1) the idea that the claimed mechanisms for the “ECat” device resemble some LENR research by scientists at NASA [7],[8]

(2) the notion that his device has now been “independently verified” in a “paper” on the arXiv [4].

Let’s examine these claims in more depth.

Step 4: Dig Deep, aka Google is our friend

I started by googling the name of the key NASA researcher, Joe Zawodny, along with keywords like “Ecat” and “Rossi”. I found this blog posting. Here’s what he has to say:

There have been many attempts to twist the release of this video into NASA’s support for LENR or as proof that Rossi’s e-cat really works. Many extraordinary claims have been made in 2010. In my scientific opinion, extraordinary claims require extraordinary evidence. I find a distinct absence of the latter. So let me be very clear here. While I personally find sufficient demonstration that LENR effects warrant further investigation, I remain skeptical. Furthermore, I am unaware of any clear and convincing demonstrations of any viable commercial device producing useful amounts of net energy. [9]

So attempts to tie Rossi’s work to Zawodny are tenuous at best. Zawodny, as an expert in the field, is skeptical of Rossi’s claims, and even of the possibility of his own research leading to any near-term applications.

Next, I looked at the arXiv article itself. My first impression was shock at how amateurish the pre-print looked. In terms of formatting and clarity of plots, it did not look anywhere near publishable. Not a good sign, but not a substantive problem either. Then I began to read the paper. Even the content seemed a little sketchy. A big red flag for me is that the output of the reaction was primarily measured using an infrared (IR) camera. It seems an odd way to measure the heat output. And, I was not convinced by the text that even the researchers had a solid handle on their technique. The biggest problem for me is the massively obvious thing they didn’t do: they didn’t put up any radiation sensors, as far as I can tell. This is a huge no-brainer to me, far more important than the heat output. If one is going to claim a nuclear reaction, then one would expect to see radiation. But, I’m cheating and relying on my physics experience to evaluate this paper. What can a person do if they don’t have such knowledge?
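
As an aside on why an IR camera makes me nervous as a calorimeter, here is my own back-of-envelope illustration (all numbers invented): the power inferred from a thermal image scales linearly with the emissivity one assumes for the surface, via the Stefan–Boltzmann law.

```python
# Back-of-envelope (invented numbers): power radiated by a hot surface goes as
# P = eps * sigma * A * T^4 (Stefan-Boltzmann), so the power inferred from an
# IR image is only as trustworthy as the emissivity eps assumed for the surface.
SIGMA = 5.67e-8           # Stefan-Boltzmann constant, W / (m^2 K^4)
area, temp = 0.05, 700.0  # 0.05 m^2 of surface at ~700 K (made up)

for eps in (0.4, 0.7, 1.0):
    power = eps * SIGMA * area * temp**4
    print(f"assumed emissivity {eps}: inferred power ~ {power:,.0f} W")
```

That is a factor-of-2.5 swing in the answer from the emissivity assumption alone; small wonder the reported margins are so wide.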

Even if one does not have the scientific background to smell something fishy from a direct read of the pre-print, there is still much one can do by searching around. One can certainly try to find science blogs that deal with the topic. For example, I found an excellent technical discussion of the arXiv article on the blog Starts With A Bang [10]. As with everything, one should be skeptical of even science blogs. However, posts like these can at least help one to get a sense of some of the scientific objections.

This article in the New Energy Times even calls into question the “independent” nature of the study on arXiv [11]. It seems that the researchers were invited in at Rossi’s request, the setup was built by Rossi, and the researchers were not allowed much access to the setup. They certainly could not see inside. If these claims are true, the whole matter seems suspect. More importantly, it weakens the very point of all the buzz: that Rossi’s device had been independently tested! Personally, I feel this needs further follow-up, maybe even by other researchers. I am fine with Rossi keeping aspects secret, but the researchers need more complete access to the device. Check out the letters-to-the-editor beneath the main article on New Energy Times. They are informative and even outline some of the ways that Rossi could have pulled some smoke-and-mirrors stunts. There are a lot of questions that must be addressed before I would consider this testing to be credible.

Step 5: Where matters are unresolved, wait and see…but don’t hold your breath

My experience and intuition make me extremely skeptical of the claims in this article. I would be willing to wager a good amount of money that it is nonsense. That said, unfortunately, we do not have a “Scooby-Doo” ending, where fraud is completely and unambiguously unmasked.

Don’t let this bother you. The purpose of skepticism is to filter good knowledge out from the noise. Some people mistake open-mindedness for liberality in accepting new ideas. But unfiltered acceptance of new claims just leads to clutter. True open-mindedness, the skeptical approach, is to entertain the possibility of new ideas while only accepting them if they are well supported. I would say that, though there is no reason to absolutely reject these claims, there is no reason to accept them either. If Rossi is sitting on the real deal and he’s just waiting to secure his patent, then the world will radically change in the next 10 years and we’ll know it. I, for one, would be glad to be wrong. Otherwise (and more likely) we’ll never hear of this again. If you are really interested, check back in a year and see if anything has changed. I bet it won’t have.

Step 6: So what’s the point?

This all may seem, at first glance, to be an exercise in futility. An article that sounds “too good to be true” often is too good to be true. Is it a waste of time to find this out? I offer an emphatic no! It isn’t a waste of time to debunk a claim or take it “down a notch”. On the contrary, the process of skeptically investigating claims is a tremendously useful learning exercise. The more you do it, the better and faster you will get at it. And, you will be surprised to discover that even your most trusted blogs and news outlets are not as reliable as you believed them to be.

More than just learning how to be skeptical (which is valuable in itself), the exercise is also a fun way of encountering new science. As I mentioned, every rumor has a grain of truth. This grain of truth can be interesting and educational, even if everything else in the rumor is wrong. In researching this article, I first wanted to dismiss the whole thing simply because I automatically associated “cold fusion” with quackery. I was intrigued to discover the work of scientists like Joseph Zawodny and theorists Widom and Larsen, which provides LENR with a theoretical and experimental basis for renewed (albeit tentative) credibility in the scientific community. I’ve also learned about an interesting new news source: the New Energy Times, which, I am impressed to see, does an excellent job of promoting this new scientific direction without allowing itself to fall prey to charlatans.

Conclusion

To be fair to ExtremeTech: the article in question is somewhat cautious, and it ends with an acknowledgement that we’ll have to “stay tuned”. My main nitpick is that I think they could have been more skeptical and more cautious. In the end, there is much worse pseudo-scientific hearsay on the web. It abounds on Facebook. So be warned: don’t take claims for granted. This is especially true if those claims reinforce what you want to hear. A repeated theme in this blog will be the urge for self-skepticism. The easiest bias to fall prey to is “confirmation bias”: researching until you find something that affirms your belief and then just stopping. Real skepticism and real growth come when one starts to fact-check one’s own favorite sources [12].

For another fun discussion about fact-checking and chasing down rumors, I highly recommend this video by Peter Hadfield [13].

Bibliography

[1] ExtremeTech – Cold fusion reactor independently verified, has 10,000 times the energy density of gas

[2] Berkeley, Understanding Science – Cold Fusion: A Case Study for Scientific Behavior

[3] Widom and Larsen – Eur. Phys. J. C 46, 107 (2006); arXiv: cond-mat/050502

[4] Levi, Foschi, et al. – arXiv preprint: Indication of anomalous heat energy production in a reactor device

[5] Forbes – Finally, independent testing of Rossi’s Ecat cold fusion device

[6] New Energy Times – Rossi’s profitable career in science

[7] ExtremeTech – NASA’s cold fusion tech could put a reactor in every basement

[8] NASA, Climate Change News – The nuclear reactor in your basement

[9] Blog of NASA researcher Joseph Zawodny

[10] Starts With a Bang – The Ecat is back and people are still falling for it

See also: Wired – Cold fusion gets hot and aims for EU

[11] New Energy Times – Rossi Manipulates Academics to Create Illusion of Independent Test

[12] Saturday Morning Breakfast Cereal – All of your biases have at some point been confirmed by anecdote

[13] Peter Hadfield – Why the media screw up science, Part I

The Danger of Catapulting To Conclusions and The Power of Alternative Hypotheses


“So what’s going to happen next?” The room was full of physics grad students and nobody wanted to answer. You could hear a pin drop….

Dr. Richard Berg ran the physics lecture demonstration group at the University of Maryland. He designed and provided demos that professors and TAs could use to make their classes more engaging. My personal favorite was the electromagnetic can smasher (for obvious reasons). Every year, first-year graduate students in physics were required to attend a seminar where the various research groups presented their work. It was an opportunity to learn about what was going on in the department and to help us shop for future advisors. But, traditionally, the first presentation was given by Dick Berg. He would present simple table-top experiments that inevitably defied intuition, so much so that a room full of kids with four years of physics study under their belts would squirm in their seats. He’d egg us on: “You tell me what’s supposed to happen next. We’ve got a room full of future physics professors here.”

One of the things you learn as a career scientist is that the world rarely conforms to common sense, and it certainly doesn’t care what we want it to be. Demonstrations like Dr. Berg’s are not exceptions. They are the rule. There is a huge chasm between a finite collection of observations and the systematic conclusions we draw from them. I often find myself floored by the way that many non-scientists (and sometimes even scientists) hurl themselves to conclusions that cannot be drawn from the very limited number of facts they have available. For many people, it is enough that a conclusion “just makes sense”, especially if that conclusion reaffirms what they want to believe. But “making sense” can be an illusion, and a dangerous one at that. In popular culture there is a notion of not “jumping to conclusions” prematurely. I think this is an understatement. When we talk about complex subject matters (whether they be the dynamics of supernovas, curing cancer, tax policy, or solving urban poverty), opinionated people can often be seen not merely “jumping” to premature conclusions but catapulting to them.

Cognitive Illusions

The human brain is an excellent pattern-finding machine. In fact, it is too good at it! Many people are familiar with optical illusions. But few people are aware of cognitive illusions: the brain’s ability to see patterns in things that aren’t really there. Given a few facts, our brains quickly connect the dots. I hope to discuss many of these in future posts. In the meantime, Wikipedia provides an excellent list of cognitive biases here. Take the time and look at some of these.

It’s More Complicated Than I Think

The most important lesson I wish more people would take away is that in complicated, multifactorial problems it is much harder to come to a solid conclusion or understanding than you may think it is. It’s a humbling and liberating realization.

Above all, I wish that people would realize that this principle applies to all aspects of our human experience: not just science. I would like to live in a world where even our political lives were informed with the awareness that problems are unlikely to fit into our tidy, ideological boxes.

I would suggest that the realization of the world’s complexity offers a “third way”, a middle path in approaching truth. Rather than accepting the false choice between the relativist approach to truth (“There is no absolute truth; it’s all relative”) and a sort of chauvinistic approach (“Of course there is an absolute truth, and it just so happens to agree with my beliefs. What a coincidence!”), it is possible to take the realist view that there is an objective truth to the matter, but that it is difficult, if not impossible, to fully understand it. Just remind yourself: “It’s probably more complicated than I think.” One needs discipline to resist falling for intellectual mirages.

The Power of “The Alternative Hypothesis”

In my own experience as a scientist, no skill is more exciting, more important, and yet more overwhelming than the ability to generate alternative hypotheses.

People make educated guesses all the time to fill in the gaps in our understanding. These guesses can make a lot of sense. They can be really clever. And, if our intuition is well trained, they might just be right. But a major trick that the brain can pull on us is to underestimate either the complexity or the range of freedom available to the system we’re observing. One way we can shake ourselves out of this complacency is to force ourselves to think of alternative explanations beyond the first hypothesis to pop into our heads. In fact, it is best to come up with as many alternative hypotheses as possible. With practice and experience, one gets increasingly better at this exercise. Often, you learn it the hard way: testing your hypotheses and finding out they were wrong (again and again and again). By seeing a fuller scope and range of possible explanations that all make reasonable sense but imply very different conclusions, you can better figure out how to design tests to narrow down the possibilities, and you can open your mind enough to accept facts and observations that run counter to what you actually expect. Once you’ve gone through the exercise of listing as many possible hypotheses as you can think of, feel free to pick a pet hypothesis. The important thing is not letting yourself too hastily come to the belief that your hypothesis must be right or that it provides “the only possible explanation”.

Another important realization: even after ranking which guesses “make the most sense”, observation can surprise you. There is no rule that nature has to conform to what makes sense to us. And the very idea of something “making sense” can easily become suspect. Hypotheses are starting points, not ends unto themselves.

Mixing Alternative Hypotheses and Politics: Be Careful

We scientists don’t just think up alternative hypotheses for ourselves. It is part of our culture to offer suggested alternative explanations to our colleagues. This is sort of like being a friendly devil’s advocate. It is essential to the early process of developing a new theory or experiment. Because of this habit, we often tend to offer alternative hypotheses to non-scientists when we hear them making really strong first guesses about an observation. In political contexts, I often find myself pushing back really hard as a devil’s advocate. The result is that non-scientists interpret my questioning as disagreement. My liberal friends end up convincing themselves that I’m an arch-conservative, and my conservative friends will think I’m a socialist. If offered patiently and politely, such thoughts can be very helpful. Just be careful; it can end poorly.

Modesty And Curiosity

Through the course of this post I presented a strong case for why we can’t easily draw conclusions from our observations of complicated systems. But I do not mean for this to imply futility or nihilism. On the contrary, the beauty of the scientific enterprise is that it shows us the extent to which the world can be known. Appreciating how little we know opens us to new ideas, to seeing the world through a different lens. It opens us up to our most important intellectual driver: curiosity. You cannot be curious if you’ve fooled yourself into thinking that the matter is closed and you hold all the answers. But if you allow yourself to embrace that curiosity, if you approach problems with intellectual modesty and a thirst for new knowledge, then you will be inspired to do the necessary work to systematically understand the given problem. Ultimately, the pursuit itself will be far more interesting and rewarding than whether or not you ever reach a fully complete or satisfying conclusion.

Coherence between many lines of evidence: Part II

In the last post we discussed one of the key hallmarks of good scientific methodology: reproducible results across many different and complementary measurement techniques. In this post, we will discuss the ways in which phony scientific skeptics can spin these coherent results in order to make a particular measurement or conclusion look weak.

100 methods, 100 problems (Argument by Anecdote)

The more different methods corroborate a result, the more confidence we can place in that result, especially if the strengths of each method can cover for the others’ weaknesses. Ironically, in the eye of a phony skeptic, the more techniques there are, the easier it is to make a given finding look weak. His equation is simple: more methods = more weaknesses.

No measurement technique is perfect. For a finding supported by many different lines of evidence, a phony skeptic can produce a laundry list of the weaknesses of each method without ever putting them together to form a bigger, coherent picture. If a hundred different methods show the same conclusion, he will ignore or gloss over the common conclusion and simply discuss all of the particular failings of each technique, taken by itself. This sort of laundry list is what I call “argument by anecdote”. For such a faux critic, numbers are all that matter: the more doubts he can throw your way, the more likely he hopes one of them will stick. The toy calculation below shows what this laundry list conveniently leaves out.
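
Here is a minimal sketch of why the laundry list misleads. The numbers are invented, and real methods are never perfectly independent, but the direction of the effect is the point: imperfect methods that fail independently are very unlikely to all fail in the same way at once.

```python
# Toy calculation (invented numbers): suppose each method, on its own, has a
# 10% chance of being flat-out wrong. The chance that N *independent* methods
# are all wrong at once shrinks geometrically -- and even this overstates the
# risk, since they would also have to be wrong in the same direction.
p_wrong = 0.10

for n_methods in (1, 3, 5, 10):
    p_all_wrong = p_wrong ** n_methods
    print(f"{n_methods:2d} agreeing methods: P(all wrong) <= {p_all_wrong:.0e}")
```

A list of ten weaknesses sounds damning, but ten independently weak methods agreeing is far stronger evidence than one strong method alone.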

The thing that is inevitably missing is context. And, this is the distinguishing characteristic of true scientific skepticism. When a scientist tries to make a judgement on the strength of scientific evidence, context is her top priority. Those who wish to contradict science tend to value lists over context.

Asking Rhetorical Questions

Nothing is more frustrating to a scientist than an insincere rhetorical question. We work hard at what we do. We love tough questions, but we want them to be asked out of genuine curiosity. Unfortunately, rhetorical questions are an effective technique for casting doubt on scientific findings.

A common approach is the use of rhetorical questions to imply oversight on the part of scientists. People ask questions with a tone as if the scientific community did not even think to ask said questions. More often than not, these questions have been both asked and answered throughout the scientific literature. But, the kind of person who asks these questions typically has no real interest in finding out. Better to leave the question open-ended. Better to rely on innuendo.

My doctoral dissertation was based on research that I spent six years working on (along with 10 other colleagues). I will talk more about it in future posts. Suffice it to say, I spent all of every day thinking about, quantifying, and testing all of the ways my measurement could go wrong. Way more time was spent on the error analysis than on the measurement itself. This is typical in my field. It is certainly true that sometimes obvious questions are overlooked, even by the experts. But, more often than not, if a question about a particular scientific finding seems pretty important, the odds are high that the people who did the work already thought about it. I don’t expect or even want you to take that on faith. Next time someone asks an open-ended and accusatory question about a scientific finding, check for yourself. Email an expert. Search the literature. You will see that the people doing the research are thoughtful, thorough, and skeptical (it’s what we’re hired to do), and you will see that the people asking the accusatory questions often have a transparent agenda.

Purposely testing a technique under circumstances where you know it won’t work

In the previous post I presented the example of a bathroom scale that works perfectly well in earth’s gravity. Given the assumption of earth gravity, the scale would give incorrect results if it were used on the moon. Now imagine someone purposely used the scale on the moon in order to discredit the reliability of the scale. Pretty dishonest, huh? Yet, this sort of approach is common place among those who have an axe to grind with certain scientific ideas.

Take the case of carbon dating. Carbon dating is based on the premise that a living organism incorporates C14 from the atmosphere into its body (through plants breathing CO2 and animals eating the plants, for example). When that organism dies, it stops taking in new C14, and the C14 it contains decays away. How much of it has already decayed tells us how long ago it died. There are two significant preconditions for carbon dating to work: (1) the sample needs to contain carbon, and (2) it needs to have been primarily in direct contact with the atmosphere. Aquatic life, for example, is in contact with “old carbon” stored deep in the sea. Thus, it is known to produce anomalous carbon dating results. Many people who spin doubt regarding radiometric dating will come up with endless lists of stories wherein samples that shouldn’t be carbon-dated, for known reasons, are dated and give nonsense results. This approach combines the argument by anecdote (lists of doubts) with the asking of rhetorical questions (since the person listing these anecdotes does not mention that these samples were of a pedigree known to fail carbon dating).

In this article we focused on some aspects of false skepticism. As promised in my introduction to this blog, I hope to also provide a rubric for how to exercise legitimate scientific skepticism. Looking forward to your thoughts and comments!

 

Coherence between many lines of evidence: Part I

An important hallmark of good science is coherence across many different, independent measurements. It is simply a fact of living in the real world that no single way of measuring something works in all cases. For example, a bathroom scale might be a good way to measure my own mass. But, if I want to determine the mass of a spoonful of flour, I will need to use a different kind of scale.

Different measurements rely on different assumptions. If the starting assumptions are wrong, then a measurement will produce nonsensical results. If I want to measure my mass on the earth, I can use a standard scale. But if I were on the moon, this scale would give me an incorrect result, since it is calibrated assuming earth gravity.

Let’s examine this issue in the context of one particular measurement problem: how to determine the age of an old sample using carbon dating. First, a brief introduction/review of how carbon dating works: There are several “isotopes” of carbon. Isotopes are different versions of the same element with different numbers of neutrons (neutral particles) in the nucleus. Neutrons have no effect on the chemical properties of an element, but they can affect how stable it is. Unstable isotopes decay over time. Carbon-14 (C14) is an unstable isotope of carbon with a half-life of ~5730 years. This means that if I have a sample of C14, after 5730 years I will have half as much. Even though C14 decays away, new C14 is produced in the upper atmosphere by cosmic rays, whose secondary neutrons convert nitrogen-14 atoms into C14. Living plants take in the C14 (as part of the CO2 they breathe) and are eaten by animals. The C14 becomes a part of these living things and is continually replenished as long as they are alive. But as soon as they die, no new C14 enters the specimen and the fraction of C14 declines. Knowing the atmospheric concentration of C14, we can figure out how long ago the plant or animal died from how much of that C14 has decayed away [1].
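As a rough illustration of the arithmetic (a naive sketch using a pure exponential, not the calibrated procedure used in practice), here is how a measured C14 fraction would translate into an age under those assumptions:

```python
import math

HALF_LIFE_C14 = 5730.0  # years

def c14_age(fraction_remaining):
    """Years since death, given the fraction of the original C14 left.

    Assumes a known initial atmospheric concentration and a constant
    decay rate -- exactly the two premises examined below.
    """
    decay_const = math.log(2) / HALF_LIFE_C14
    return -math.log(fraction_remaining) / decay_const

print(round(c14_age(0.5)))   # 5730 years -- one half-life
print(round(c14_age(0.25)))  # 11460 years -- two half-lives
print(round(c14_age(0.05)))  # ~24765 years -- about 4.3 half-lives
```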

Let’s think about some of the challenges involved in carbon dating. Then we’ll peruse the literature and see how these challenges are dealt with.

1) What determines the initial concentration of C14, and is it constant?

The fraction of C14 in the atmosphere is determined by the rate at which it is produced and the rate at which it decays. New C14 is produced by cosmic rays bombarding the atmosphere. In order for carbon dating to work, this concentration needs to be reasonably stable and well-known; if it varied wildly or unpredictably, the method would not work.

2) Is the decay rate constant? Are there factors that could change it?

Another potential challenge to the accuracy of radiometric dating is the decay rate of C14 itself. If this decay rate were not constant, we could not use it as a reliable clock.

We don’t need to rely on physical assumptions, even well-motivated assumptions, to verify these underlying premises of carbon dating. We can cross-check these assumptions *directly* against other sources of evidence. Carbon dating is not the sole technique we have available for determining age. It exists alongside other, categorically different dating methods [2]. The more independent measurement methods we have, the more confidence we can place in the broad conclusions.

So what are some of the complementary measurement techniques for determining ages?

As most people know, trees grow in seasonal cycles and their trunks show a pattern of concentric rings corresponding to periods of growth and rest. There is one ring per year. Moreover, the sizes of tree rings vary with fluctuations in local and global climate. These unique, yearly patterns allow us to line up the rings of younger trees with those of older trees and work our way backward. Using the overlap between successive generations of trees, we can patch together a timeline extending back more than ten thousand years. This lining-up process (known as cross-dating) has its challenges and pitfalls, but the people who do this work use careful statistical matching techniques, as sketched below. In any case, this technique does not depend on carbon-14 levels in the atmosphere. Nor does it depend on the constancy of radioactive decay rates. [3]
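Here is a toy sketch of the cross-dating idea, using invented ring widths: slide the younger tree’s ring-width sequence along the older tree’s, and keep the offset where the two patterns correlate best.

```python
# Toy cross-dating: find the offset where a younger tree's ring-width
# pattern best matches a master sequence from an older tree.
from statistics import correlation  # Python 3.10+

older = [1.2, 0.8, 1.5, 0.9, 1.1, 0.7, 1.4, 1.0, 0.6, 1.3]  # master sequence
younger = [0.7, 1.4, 1.0, 0.6, 1.3, 1.1, 0.9]  # overlaps the master's last rings

def best_offset(master, sample, min_overlap=4):
    scores = {}
    for lag in range(len(master)):
        overlap = min(len(sample), len(master) - lag)
        if overlap < min_overlap:
            continue
        scores[lag] = correlation(master[lag:lag + overlap], sample[:overlap])
    return max(scores, key=scores.get)

print(best_offset(older, younger))  # 5: the younger tree sprouted 5 rings in
```

Real dendrochronology uses long master chronologies built from many trees and far more careful statistics, but the principle is the same: match the pattern, not just the count.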

Tree rings fall into a class of dating methods called “incremental dating techniques” because they exhibit a pattern of countable bands, each band corresponding to one yearly cycle. Two other incremental methods that I would like to discuss are varves and ice cores. Varves are seasonal sediment layers that build up in certain lakebeds [4]. The varve record goes much further back than the tree-ring record: close to 50,000 years of banding patterns. Several particular sets of varve records play critical roles as calibration samples for carbon dating. Ice cores are vertical columns of ice, carefully drilled out of large glaciers. Like tree rings and varves, the ice forms a banding pattern due to seasonal thaw-and-refreeze cycles [5]. The ice core record goes back even further: several hundred thousand years. There are many other relative and absolute dating methods that we can use to compare and cross-check carbon dating, and I hope to address some of them later. But, for this article, let’s stop here.

Now let’s take a look at our carbon dating method. We can take trees that died in particular years of the tree-ring record and compare their tree-ring age with the fraction of carbon-14 remaining [6]. Likewise, we can take samples found in varves and compare the C14 age with the year of the sediment layer the sample was buried in. Here is a composite of varve data taken from Steel Lake in Minnesota and Lake Suigetsu in Japan, various tree-ring data, and their corresponding carbon-14 concentrations (source: Davidson and Wolgemuth) [7]:

[Figure: composite chronology of tree-ring and varve ages versus C14 concentration]

Over a span of ~10,000 years of tree rings and 50,000 years of varves, we see a smooth, mostly linear relationship with the logarithm of C14 concentration: the older the tree, the less the C14. In other words, the ages given by trees and varves are quite consistent with the C14 concentration. And here is the key point: if either of our two assumptions about carbon dating (known atmospheric concentration and constant decay rate) were significantly wrong, the C14 concentrations would diverge from the other methods. Instead of a straight line, the graph above would fluctuate wildly, with no rhyme or reason. This is clearly not the case.
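The straight line is exactly what exponential decay predicts: under the two assumptions, the concentration follows N(t) = N0 * e^(-lambda*t), so its logarithm falls linearly with age. A quick sketch of the expected slope:

```python
import math

lam = math.log(2) / 5730  # C14 decay constant, per year

# Under constant production and constant decay, the log of the
# concentration (relative to its initial value N0) drops linearly:
for age in (10_000, 20_000, 30_000, 40_000, 50_000):
    print(age, round(lam * age, 2))  # how far ln(N/N0) has fallen
# Every 10,000 years subtracts the same ~1.21 from the log-concentration:
# a straight line, which is just what the tree-ring and varve data trace.
```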

What makes these methods nice is their complementarity. There are many factors that can bias varve and tree-ring data, but those factors have little to do with the nuclear physics governing carbon-14 decay. Likewise, even if carbon-14 decay rates were completely different in the past, one would be hard-pressed to explain why this effect would simultaneously increase (or decrease) the number of tree rings or sediment layers in the record.

At this point, an astute reader might notice that the graph of C14 concentration versus varves is not a perfectly straight line. There are small fluctuations. And, more importantly, in the older samples (>20,000 years) the yellow points begin to bend slightly downward away from a straight line. Don’t worry, I won’t sweep this under the rug. As you will recall, we never expected carbon dating to work perfectly. All measurement techniques are inherently imperfect. What scientists and informed scientific readers ask is: (1) how imperfect are these methods, and (2) can we understand those imperfections? In short, are they good enough? With any good scientific method we should be able to quantify its accuracy against the accuracy we need. And this takes us to the most exciting point:

Using a large number of complementary measurement techniques can even help us to understand the imperfections and limitations of each individual technique. Coherence does not just reinforce our confidence in a measurement; it helps us to systematically understand it.

First, we need to understand which factors are varying over time. If the plot above jitters a bit, what is causing the jitter? Is it the carbon dating, the tree rings, the varves, or the ice cores? Fortunately, we have at least four different measurement techniques. If two of them agree with each other but not with the third, then we have good reason to believe that the third is the “odd man out,” so to speak. We can plot deviations of C14 age from tree-ring age, for example, and we get the black curve on the plot below. The y-axis (on the left side) shows the percent difference between the measured and expected concentrations of C14 for a sample of a given age. Sometimes there is more C14 than there should be (and we would underestimate the age). Sometimes there is less than there should be (and we would overestimate the age). But note the magnitude of the fluctuations: the majority of the variations are within 10% of the correct age, with a few deviations as large as around 20%. If we don’t trust these particular tree-ring based measurements, we can go to a completely different part of the world and compare the C14 concentration of air bubbles in an ice core with the age of the ice core, based on counting thaw-and-freeze layers in the ice. We get the same result (shown in red on the same plot)! We can see that the carbon-dating age fluctuates with respect to the tree-ring age, and it fluctuates the same way compared to ice cores [8]. This tells us that carbon dating is probably the odd man out.

[Figure: percent deviation of measured C14 concentration from expectation, plotted against tree-ring age (black) and ice-core age (red)]
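The “odd man out” logic is simple enough to sketch with made-up numbers: when two independent chronologies agree on a sample’s age and a third disagrees, suspicion falls on the third.

```python
# Toy odd-man-out check with invented ages (years). The method whose
# estimate sits farthest from the median of the three is flagged.
samples = {
    # sample: (tree_ring_age, ice_core_age, c14_age)
    "A": (3000, 3010, 3150),
    "B": (7500, 7480, 7290),
    "C": (9900, 9920, 10300),
}
labels = ("tree rings", "ice cores", "C14")

for name, ages in samples.items():
    median = sorted(ages)[1]
    outlier = max(range(3), key=lambda i: abs(ages[i] - median))
    print(f"{name}: odd man out is {labels[outlier]}")
# In every row the two incremental chronologies agree with each other,
# so the C14 age is the one carrying the systematic wiggle.
```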

We have now shown that carbon dating is not perfect, just as we expected. However, we can quantify how imperfect it is: mostly within 10%. And we can do this by directly comparing it against three completely different, complementary techniques (tree growth, ice formation, and sediment layers in lake beds). Now we have one more question: do we know why the C14 date varies like this? Can we determine the cause?

Even here, we have independent data and methods we can use to test the various causes. Beryllium-10 is a radioactive isotope with a MUCH longer half-life than C14: 1.4 million years. This means that however much Be-10 there was in a sample a few thousand years ago, there is pretty much the same amount now (because hardly any of it has decayed). If rates of cosmic rays varied over time, they would change the concentration of C14 and cause these deviations in the C14 age. They would also change the Be-10 concentrations in the atmosphere [9]. Since Be-10 has a very different and much longer half-life, it provides a nice complementary measurement to compare with C14. In the plot below, deviations between the C14 age and tree rings are compared with deviations in Be-10 concentrations in layers of corresponding ice cores. Here we are essentially comparing two different radiometric techniques (C14 and Be-10 concentrations) against two independent chronologies from different parts of the world (trees and glaciers), and we see similar global cycles in the concentrations of radioisotopes.

[Figure: deviations in C14 age (relative to tree rings) compared with Be-10 concentrations in ice cores]
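The reason Be-10 makes such a clean cross-check is that half-life. A quick back-of-the-envelope calculation shows how differently the two isotopes behave over the time spans in question:

```python
def fraction_left(age_years, half_life_years):
    """Fraction of a radioisotope remaining after a given time."""
    return 0.5 ** (age_years / half_life_years)

# Over 50,000 years:
print(fraction_left(50_000, 5_730))  # ~0.0024 -- C14 is almost entirely gone
print(fraction_left(50_000, 1.4e6))  # ~0.976  -- Be-10 has barely decayed
```

So variations in the Be-10 concentration of old ice reflect variations in production (i.e., in cosmic-ray rates), not in decay.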

It gets even better: We know that the earth’s magnetic field can change polarity (the North pole flips with the South pole). We know that this has happened in the past from looking at the magnetic polarity of layers of rock in the geological column. During the transition period from one polarity to the other, we expect low magnetic fields and therefore high rates of cosmic rays (since the earth’s magnetic field deflects cosmic rays). We would predict that, if this were the case, we would see a large deviation between carbon dating and varves, or a large change in Be-10 concentrations in ice cores. Indeed, we see just this. The most recent such reversal is called the “Laschamp geomagnetic excursion,” and it occurred around 41,000 years ago. Below are two plots from two different papers showing changes in radioisotope concentrations during this event. First, a paper specifically discussing the Laschamp excursion [10]: in this pair of plots, the bottom plot shows the orientation of the earth’s magnetic field and the top plot shows Be-10 concentrations measured in Greenland ice cores. We see spikes in the Be-10 concentrations following the “transition” periods when the orientation of the earth’s magnetic field flipped sign and cosmic rays would be at their highest rates.

[Figure: Be-10 concentrations in Greenland ice cores (top) and the orientation of the earth’s magnetic field (bottom) across the Laschamp excursion]

Similarly, in a recent paper using varves taken from Lake Suigetsu, a large deviation between the varve age and the carbon-dating age is observed at a little over 40,000 years before present [11].

[Figure: deviation between varve age and carbon-dating age near the Laschamp excursion, from the Lake Suigetsu record]

Coherence is a much stronger proposition than mere repeatability. Repeatability of the same experiment or measurement is very important in science as a means of catching mistakes (or phony results). However, scientists are professional skeptics, and we do not like to hang our hats on a proposition that relies on a single measurement technique. We demand coherence across a wide spectrum of different methods, each with non-overlapping assumptions. In this post we saw in carbon dating some beautiful examples of how this coherence among many methods can be used to quantify and shine light on the limitations of each individual method. This is only the tip of a much deeper iceberg. Geological and radiometric dating techniques are approaching a century old. There are literally hundreds of techniques and thousands of papers spanning the decades. These results have been reproduced again and again, under the careful scrutiny of thousands of scientists who gave these problems their undivided attention. I do want to come back to this subject. If I get good questions about this or other related concepts, I’m liable to write a follow-up post.

References

[1] For further reading, see: (a) Carbon Dating (b) C14dating.com (c) Radiocarbon Calibration

[2] Paula J. Reimer, “Refining the Radiocarbon Time Scale,” Science 338, 337 (2012).

[3] For further reading, see: (a) About tree rings (b) Principles of Dendrochronology (c) NOAA slides on tree-rings

[4] For further reading, see: (a) Varves as natural calendars (b) Radiocarbon Dating of Varve Chronologies

[5] For further reading, see: (a) Ice Core 101 (b) Stratigraphic dating of ice cores (c) NOAA slideshow (d) Data from the Vostok ice core

[6] Dendrochronology and Radiocarbon Dating

[7] Lake Suigetsu and the 60,000 year varve chronology

[8] C-14 in ice cores and tree-rings

[9] Y. Stozhkov, “Influence of Atmospheric Processes on Be-10 Atom Concentrations.”

[10] N.R. Nowaczyk, H.W. Arz, U. Frank, J. Kind, and B. Plessen, “Dynamics of the Laschamp geomagnetic excursion from Black Sea sediments,” Earth and Planetary Science Letters 351–352 (2012), 54–69.

[11] Christopher Bronk Ramsey et al., “A Complete Terrestrial Radiocarbon Record for 11.2 to 52.8 kyr B.P.,” Science 338, 370 (2012).