I wish people wouldn’t say this; it’s usually followed by some lame reason why we should trust their anecdotal experience over empirical data. Sure, the word skeptic (or sceptic, if you prefer) has a certain colloquial definition, and to a large extent words are defined by the way they are used; after all, no-one uses the word “gay” to refer to being happy anymore.
Even so, this usage is getting on my nerves. When I use the word “skeptic” to refer to myself, I mean someone who evaluates the available evidence and comes to a reasonable conclusion. Implicit in my definition is also an understanding of human foibles with regard to cognitive biases, and a deep-seated inability to view our own experiences impartially. Refer to my previous post for more in this vein.
On a whim I thought I’d look up what on-line dictionaries had to say about the word; I found some variation of the following to be popular:
“1. One who instinctively or habitually doubts, questions, or disagrees with assertions or generally accepted conclusions.”
That doesn’t seem any better to me. So, what’s my problem?
Well, for a start, those alluded to in the title of this post are not applying skepticism; they are merely doubtful. And when evaluating claims they are not using the methods of science; they are using that unreliable guide, personal experience. Thus, while their protestations of skepticism and subsequent conversion sound impressive, they are (to my ears) merely the hollow echo of true inquiry.
Harsh enough for you? Well, perhaps. I don’t expect all who use the word skeptic to apply to it the same definition that I do, but it still chafes.
The dictionary definition given above is also lacking in nuance; it seems better suited to defining a contrarian than a skeptic, exhibiting what my favourite skeptical interviewer DJ Grothe refers to as “knee-jerk skepticism”. A skeptic isn’t someone who just says “no”; a skeptic is someone who asks “how do we know?”.
The reason my hypothetical skeptical convert gets on my wick so much is that, when answering the “how do we know?” question, they assume they can draw general conclusions from an informal experiment where n=1. This ties into the “don’t knock it ’til you’ve tried it” line of argument. NO. Trying it myself is not the way to determine the validity of a claim. It falls under the category of anecdote, and anecdotes are not good quality evidence. At best they should be the start of an investigation – not the end.
When evaluating a claim we should look at two things in particular. Yes, we should determine the direct evidence for the claim, i.e. is there evidence to show that it acts as claimed? But we should also attempt to see how the specific claim fits into the wider scientific ecosystem – the prior probability, if you will.
Often in day-to-day claims this is of little practical importance, and so it tends to be overlooked when it is relevant. A new gadget or medication is often based on previous iterations of the same technology or medical practice, and represents an incremental improvement or merely an additional option in the sphere of possibilities. However, some claims are sufficiently far from mainstream understanding that we should take a step back and consider the likelihood that the claim is even possible, irrespective of the evidence presented for the claim itself.
In the case of, say, homeopathy or Power Balance bands, our current understanding of the science should make us extremely wary of efficacy even before the specific claims are considered. To be clear though, plausibility should be used as only part of the process; there are many things that work without us knowing how they work. But the further outside current knowledge something is, the stronger the evidence we should require before we accept it. Certainly for many “alternative” therapies that strong evidence simply does not exist. As I presented for amber teething beads, there is no reason to think they should work from a physical or medical point of view, so our standard of evidence should be higher than the earnest assurances of people in mothering forums – or even our own experience, as noted above.
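The role of prior probability here can be sketched with Bayes’ theorem. The numbers below are purely illustrative assumptions of my own, not measured values; the point is only the shape of the result: the same favourable-looking evidence that makes a plausible claim likely barely budges an implausible one.

```python
# Sketch of Bayes' theorem: how prior plausibility tempers evidence.
# All probabilities below are made-up illustrative numbers.

def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """P(claim is true | evidence observed), via Bayes' theorem."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# A mundane claim (an incremental gadget improvement): generous prior.
print(posterior(0.5, 0.8, 0.3))    # roughly 0.73 - quite believable

# An extraordinary claim (homeopathy-like): tiny prior.
# The identical "it seemed to work" evidence leaves it very unlikely.
print(posterior(0.001, 0.8, 0.3))  # roughly 0.003 - still implausible
```

Same evidence in both calls; only the prior differs. That asymmetry is why extraordinary claims need much stronger evidence than “it worked for me”.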
But this is exactly the sort of pseudo-evidence that we are wired to find most convincing. Throughout most of our history the ability to evaluate randomised trials, statistics and p-values would not have aided our survival one whit. Therefore it’s not surprising that most of us are bad at it.*
Yes, it’s hard. Yes, it requires work, and yes you will probably get it wrong most of the time.** But it’s worth it. So give it a try – be skeptical, like you mean it.
* Arguably all of us; it requires practice, and even the “experts” can get it wrong.
** I certainly do.