This video describes a study I conducted between January 2013 and June 2013.
In execution this experiment worked perfectly. However, the data were not supportive of the hypothesis. What’s a scientist to do?
I’ve had a lot of interesting feedback from my peers, colleagues, and Facebook friends on this video. First off, almost everyone agrees this is a positive thing. Making science open (here’s my data, scales, and syntax), as well as making science accessible to those who are interested, is a good thing. A thing we ought to aspire to. Obviously this is a thing I believe in. Strongly.
But I also received a certain degree of criticism. I’ll address the most common points below.
Who’s the video for?
On a superficial level this is a question that makes a lot of sense. Who do I want to reach? Who am I communicating to? But I think this is missing the point. I’m not necessarily trying to reach anyone with this video*; I’m not inviting people into the party, I’m merely leaving the door open. Some people suggested it’s too long for the layperson, or doesn’t have enough detail for the scientist. I’m not too concerned by either. I’m assuming anyone who watches this video is already interested in the topic and is willing to invest the 8 minutes it takes to watch it. Scientists will get enough detail (and have access to all the raw and cleaned material), and the non-scientist gets a full, unpolished run-down of the experiment. Naturally, future videos will become increasingly slick and practiced, but this is as it is, without omission.
*Though reaching out is important, just look at my science comms projects.
You’re ‘special pleading’ // The data suggest you’re wrong
The data do not support the hypothesis. This is true. As explained in the video, however, I suggest the experiment was a poor test of the hypothesis. It didn’t ‘tap’ the thing I was hoping to investigate. I fully accept that there is a possibility that I am wrong, and that this is a futile line of research. I am not convinced that this is the case, just yet. Future research is to be done.
But this is how science typically moves forward. Scientists believe a thing (based on research) and pursue it. Sometimes they’re wrong and they waste a lot of time pursuing it before finally letting their ideas die. Sometimes they’re right, and their first study is wrong. Sometimes they’re wrong and their first study incorrectly yields positive results. (And sometimes they’re right and nail it the first time!). This is of the category ‘We-might-be-right-but-it-was-a-bad-study’. This was – to the best of my knowledge – a novel investigation of self-deception within this domain. More importantly, this was a description of the study and the process of the study. We acknowledge we could be wrong, but we’re currently assuming we’re right and trying our hardest to shoot the idea down. So far the score is 0 – 1 against us. We’ll try again, zoom in on the interesting parts, and report back. There’s no special pleading – this is how most research progresses.