Often the most frustrating questions as a tutor are the ones that are never asked. Maybe people are too shy, or the question seems silly, but the worst of all are the questions you don't even know are there, the ones everyone ignores until they quietly creep up and wallop you between the shoulder blades.
The following is a short survival guide I wrote a while ago for an undergraduate class on one such question. On the surface it seems like a non-question: "in experimental science, what do you do when you get no result?" The trouble is in the premise of the question, that there is such a thing as a 'no result'. Which is false.
Dealing with 'no results' is something everyone in science has to come to terms with. Contrary to popular opinion, few experiments end with the researcher discovering some grand epiphany and scarpering down the street naked yelling 'Eureka!'
More often than not, some poor fool who has been sitting in front of a computer for weeks suddenly starts banging their head against the monitor and swearing at the data. Things seldom go smoothly or as planned, and even the most meticulously designed experiment can leave you wondering what the outcome actually is, and why something is not as expected. The worst of these is when your data appear to be telling you, well, nothing.
Here is a short survival guide to navigating “no result”:
There are generally three types of "No Result":
• Lack of data
• Failed experimental design (really a subset of 'Lack of Data')
• Data that doesn't show a conclusive result one way or the other (non-significant results)
Lack of Data
This is likely to happen when time or money are short, when you are reporting on trial or pilot experiments, or when you are presenting data from an experiment that is still underway or in its early stages. There is no shame in these results. Small-scale or pilot studies are the basis from which more elaborate, grand-scale experiments are born. Nobody expects outstandingly significant results, or even enough data to analyse properly; instead you are looking for trends in the data, general directions it is heading that might be worth following up, or that suggest your hypotheses are correct and worth pursuing more thoroughly.
Failed Experimental Design
It sucks, but it happens. Sometimes it is for embarrassing reasons you didn't think of beforehand, but even so it is very rare for scientists to sink the boot into a genuinely failed experiment: first, because sometimes it just happens (that's what makes biology so unpredictably interesting), and second, because it has probably 'just happened' to them too at some point.
So don't focus on the negatives. The audience doesn't want to hear all the whys and what-could-have-beens, and it just makes everyone feel awkward. Honestly, they probably don't care (unless it failed for some really exciting reason that raises more questions). Admit what went wrong, but move on to 1. how you will fix it in the future, and 2. what you DID find. Seldom is an experiment such a complete bust that you can't pull some general trends, or at least observations, out of your time. What did you see while setting up or running it that was interesting? Did you see things that will change your hypotheses, or that created new questions? Go through what your hypotheses still are, how you arrived at them, and how they are supported by the evidence of other studies (see 'Reporting' below for more).
Non-Significant Results
There are several flavours of non-significant results. The first, vanilla flavour is the one from above: lack of data. Again, get the best out of what you have; stress the trends and what you are seeing against what you expected. Emphasise your hypotheses, why you stated them, and how they are supported by other data.
The second flavour of non-significant results is much nuttier: genuinely middle-of-the-road data. This can be one of the most irritating situations a scientist can find themselves in. A great idea, months of work, and when all the data are in, they appear to say nothing. No significant yes, no significant no. Just noise. How do you write up a non-result?
Often such data are treated as a 'failure', but this is not true. You have probably found 'no result' for a reason; it is telling you something about your system. It's not what you expected, but it may be no less interesting. The trick is to spin the finding to make it interesting to an audience:
1. Look for trends. OK, they aren't significant, but you can build a story out of them: as above, what is the trend, and how does it fit with other work and hypotheses?
2. Find other ways, or other questions, with which to look at the data you already have. This can hurt your brain and take a lot of time, especially if you start blindly prodding your data hoping something falls out. Try to go back to square one and construct new, focused hypotheses; then you can follow the right paths for analysis and reach a (hopefully) clear answer.
3. Ask what 'no result' actually MEANS. As mentioned above, a 'no result' has probably occurred for a reason. For example, say all the sequences for the gene I was looking at across hundreds of humans are the same. There is no variation. My hypothesis that I could look at differences in this gene between continents is gone, but it raises an interesting question: WHY is there no variation in this gene? Maybe it is a very important gene and can't afford to change... maybe it is a recent viral insertion... etc. So instead of boring the audience with why I saw nothing, (scientifically) suggest some stories (again, learn to love the word 'hypotheses') about why this might be the case, and how you could test each possibility (a quick sketch of what such a check looks like follows below). Your audience will give you much greater credit for showing you can think (especially in difficult situations) and for your initiative, than for rolling out how bad the methods were and why it was bound to fail for x, y and z reasons.
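To make that concrete, here is a minimal sketch of what checking for 'no variation' might look like. It is illustrative only: the sequences are invented, and `count_variable_sites` is a hypothetical helper, not part of any real pipeline.

```python
# Sketch: does an alignment of gene sequences vary at all?
# The sequences below are made up for illustration.

def count_variable_sites(sequences):
    """Count alignment columns containing more than one base."""
    variable = 0
    for column in zip(*sequences):  # walk the alignment column by column
        if len(set(column)) > 1:
            variable += 1
    return variable

# Pretend these came from hundreds of people (here, just four).
aligned = [
    "ATGGCCTTA",
    "ATGGCCTTA",
    "ATGGCCTTA",
    "ATGGCCTTA",
]

sites = count_variable_sites(aligned)
if sites == 0:
    # This IS a result: total conservation raises its own question,
    # namely why this gene can't afford to change.
    print("No variable sites: the gene is completely conserved.")
else:
    print(f"{sites} variable sites found.")
```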
**Adding more data will not necessarily give you a significant result!**
A common fall-back error when presenting non-significant results is to say "I must not have enough data, so getting more will surely find a significant result." Well, no. If there is truly no effect to find (i.e. non-significance IS the answer), more data will not conjure one up; your estimates will just converge ever more tightly on zero, and any p-value that does dip below 0.05 along the way is exactly the false positive your threshold allows. The little simulation below makes the point.
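Here is a small sketch of that, assuming a true effect of exactly zero: two groups drawn from the same distribution, compared with a t-test at ever larger sample sizes. (The numbers and seed are arbitrary; it needs numpy and scipy.)

```python
# Sketch: "just collect more data" won't rescue a true null.
# Two groups drawn from the SAME distribution, tested at growing n.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)  # arbitrary seed for reproducibility

for n in [20, 100, 500, 2500, 10000]:
    a = rng.normal(loc=0.0, scale=1.0, size=n)  # group A
    b = rng.normal(loc=0.0, scale=1.0, size=n)  # group B, same distribution
    t, p = stats.ttest_ind(a, b)
    print(f"n = {n:5d}  mean difference = {a.mean() - b.mean():+.4f}  p = {p:.3f}")

# The mean difference shrinks toward zero as n grows, but the p-value
# does NOT march toward significance; any run below 0.05 is a false
# positive, at exactly the rate your alpha allows.
```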
REPORTING
Each type of no result has its own avenues for finding something to write about, as touched on above. But broadly, across all of them, there are some general themes. In every case you can be left stranded with a good idea that you are frustratingly unable to prove. But rather than throwing the baby out with the bathwater and depressing the audience with what didn't go right, concentrate on the positives.
What were your hypotheses? Why did you make them, and what did you *expect* your data to show that would prove them? The greatest credit people will give you is for the interesting, novel or clever idea you had in the first place. If it is a truly sound idea and you can present evidence for why you expect the hypotheses you propose, collecting the data is *almost* irrelevant. If there has been past work on a similar theme, or work you are building upon, convince the audience that the results you didn't quite get were all but inevitable.
This is a valuable skill in science, and it is used time and again when convincing people to give you money for experiments you have not yet done.
And don't forget to:
- reference your work. Not only does this show that you have actually read about what you are doing, it naturally lets you bring in new ideas and other results, and gets you using the language and thinking needed to present scientific data.
- state your hypotheses, what you expect to find and WHY.
- for papers AND talks: read, re-read, proofread, give-to-your-housemates-to-read, then read and read again. Not even the worst data looks as bad as sloppy writing.