Tuesday, September 6, 2011

No Such Thing As Nothing In Science


Often the most frustrating questions as a tutor are those that are not asked, maybe because people are too shy, or because the question seems silly. But the most frustrating of all are the questions you don't even know are there, which means everyone ignores them until they quietly creep up and wallop you between the shoulder blades.

The following is a short survival guide I wrote a while ago for an undergraduate class on one such question. On the surface it seems like a non-question: "in experimental science, what do you do when you get no result?" The trouble lies in the premise of the question, that there is such a thing as a 'no result'. Which is false.

Dealing with 'no results' is something everyone in science has to come to terms with. Contrary to popular opinion, few experiments end with the researcher discovering some grand epiphany and scarpering down the street naked yelling 'Eureka!'
More often than not, some poor fool who has been sitting in front of a computer for weeks suddenly starts banging their head against the monitor and swearing at the data. Things seldom go smoothly, or as planned, and often the results of even the most meticulously designed experiment leave you wondering what the outcome actually is, and why something is not as expected. The worst of these is when your data appear to be telling you, well, nothing.
Here is a short survival guide to navigating “no result”:

There are generally three types of "No Result":

• Lack of data
• Failed experimental design (really a subset of 'Lack of Data')
• Data doesn't show conclusive results one way or the other (non-significant results)

Lack of Data
This is likely to happen when time or money are short, when you are reporting on trial or pilot experiments, or when you are presenting data from an experiment that is still underway or in its early stages. There is no shame in these results. Small-scale or pilot studies are the basis from which more elaborate, grand-scale experiments are born. Outstanding significant results, or even enough data to analyse properly, are not expected; instead you are looking for trends in the data, general directions that might be worth following up, or that suggest your hypotheses are correct and worth pursuing more thoroughly.

Failed Experimental Design
It sucks, but it happens. Sometimes it may be for embarrassing reasons you didn't think of beforehand, but even so it is very rare for scientists to sink the boot into a genuinely failed experiment: 1. because sometimes it just happens, and that unpredictability is part of what makes biology so interesting, and 2. because it has probably 'just happened' to them too at some time.
So don't focus on the negatives. The audience doesn't want to hear all the whys and what-could-have-beens, and it just makes everyone feel awkward. Honestly, they probably don't care (unless it failed for some really exciting reason that presents more questions). Admit what went wrong, but move on to 1. how you will fix it in the future, and 2. what you DID find. Seldom is an experiment such a complete bust that you can't pull at least some general trends or observations out of your time. What did you see while setting up or running it that was interesting? Were there things you saw that will change your hypotheses, or that created new questions? Go through what your hypotheses still are, how you arrived at them, and how they are supported by the evidence of other studies (see 'Reporting' below for more).

Non-Significant Results
There are several flavours of non-significant results. The first, vanilla flavour is from above: lack of data. Again, get the best out of what you have, stress trends and what you are seeing against what you expected. Emphasise your hypotheses, why you stated them, and how they are supported by other data.
The second flavour of non-significant results is much nuttier: genuinely middle-of-the-road data. This can be one of the most irritating situations a scientist can find themselves in. A great idea, months of work, and when all the data are in, they appear to say nothing. No significant yes, no significant no. Just noise. How do you write up a non-result?
Often such data are treated as a 'failure', but this is not true. You have probably found 'no result' for a reason; it is telling you something about your system. It's not what you expected, but it may be no less interesting. The trick is to spin the finding to make it interesting to an audience:

1. Look for trends. OK, they aren't significant, but you can build a story out of them - as above, what is the trend, and how does it fit with other work and your hypotheses?
2. Find other ways, or other questions, with which to look at the data you already have. This can hurt your brain and take a lot of time, especially if you start blindly prodding your data hoping something falls out. Try to go back to square one and construct new, focused hypotheses; then you can follow the right paths for analysis and a (hopefully) clear answer.
3. Ask what 'no result' actually MEANS. As mentioned above, a 'no result' has probably occurred for a reason. For example, say all the sequences for the gene I was looking at across hundreds of humans are the same. There is no variation. My hypothesis that I could look at differences in this gene between continents is gone, but it raises the interesting question: WHY is there no variation in this gene? Maybe it is a very important gene and can't afford to change... maybe it is a recent viral insertion... etc. So instead of boring the audience with why I saw nothing, (scientifically) suggest some stories (again, learn to love the word 'hypotheses') about why this might be the case, and how you could test each possibility (see the sketch just below this list). Your audience will give you much greater credit for showing you can think (especially in difficult situations) and for your initiative, than for rolling out how bad the methods were and why it was bound to fail for x, y and z reasons.
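
To make that last point concrete, here is a minimal sketch, purely illustrative and not from the original post, of how you might check whether a set of aligned sequences really shows no variation. The function name, the toy sequences and the choice of Python are all my own assumptions.

```python
# A hypothetical helper (name and toy data invented for illustration) that
# finds the aligned positions showing any variation at all across a set of
# sequences for one gene.
def segregating_sites(sequences):
    """Return the aligned positions at which the sequences differ."""
    length = len(sequences[0])
    assert all(len(s) == length for s in sequences), "sequences must be aligned"
    return [i for i in range(length) if len({s[i] for s in sequences}) > 1]

# Toy stand-in for 'hundreds of human sequences' of the same gene.
seqs = ["ATGCCGTA", "ATGCCGTA", "ATGCCGTA"]
variable = segregating_sites(seqs)
print(f"{len(variable)} variable sites out of {len(seqs[0])}")
```

If that prints zero, the write-up shifts from 'no result' to the far more interesting question of why there is no variation (strong constraint? a recent viral insertion?), and each of those stories suggests its own follow-up test.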

**Adding more data will not necessarily give you a significant result!**
A common fall-back error when presenting non-significant results is to say "I must not have enough data, so collecting more will surely find a significant result." Well, no. If there is truly no effect there (i.e. non-significance IS the answer), more data will not conjure one up; it will simply give you a tighter and tighter estimate of the same null answer.
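
As a purely illustrative sketch (Python with numpy and scipy, none of which appear in the original post), here is one way to convince yourself of that: simulate two groups drawn from the same population, so the true answer is 'no difference', and watch what happens as the sample size grows.

```python
# Minimal simulation of a true null: both groups come from the same
# population, so any apparent difference between them is pure noise.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

for n in (20, 200, 2000, 20000):
    control = rng.normal(loc=10.0, scale=2.0, size=n)
    treatment = rng.normal(loc=10.0, scale=2.0, size=n)
    t_stat, p_value = stats.ttest_ind(control, treatment)
    diff = treatment.mean() - control.mean()
    print(f"n = {n:>5}   estimated difference = {diff:+.3f}   p = {p_value:.3f}")
```

The estimated difference shrinks towards zero as n grows, but the p-value just wanders about; under a true null it is roughly uniformly distributed at any sample size, so piling on more data will not drag it below 0.05.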


REPORTING
Each type of no result has particular avenues to finding something to write about, as touched on above. But broadly, across all of these, there are some general themes. In all cases you can be left stranded with a good idea that you are frustratingly unable to prove. But rather than throwing the baby out with the bath-water and depressing the audience with what didn't go right, concentrate on the positives.
What were your hypotheses? Why did you make them, and what did you *expect* your data to show that would prove them? The greatest credit people will give you is for the interesting, novel or clever idea you had in the first place. If it is a truly sound idea and you can present evidence as to why you expect the hypotheses you propose, collecting the data is *almost* irrelevant. If there has been past work on a similar theme, or work that you are building upon, convince the audience that the results you didn't quite get were all but inevitable.
This is a valuable skill in science, and one used time and again when convincing people to give you money for experiments you have not yet done.

And don't forget to:

- reference your work. Not only does this show that you have actually read about what you are doing, it also lets you bring in new ideas and other results, and begin to use the language and thinking needed to present scientific data.

- state your hypotheses, what you expect to find and WHY.

- for papers AND talks: read, re-read, proofread, give-to-your-housemates-to-read, read and read again. Not even bad data looks as bad as sloppy writing.

Friday, July 29, 2011

A Building full of Botanists...


The 18th International Botanical Congress, Melbourne, July 24-30.

So, what happens when over 2000 of the world's botanists converge on Melbourne for a week-long plant love-in...?
Here's a brief day-by-day playbook from a typical conference. More specific tales from IBC will follow:

Conference Day-1: Registration, Welcome and Mixer.
Follow the huddles of twitchy-looking types to the Melbourne Convention Centre. Botanists can be identified by typically 'smart-casual-field' attire, looking like overgrown university students (which is what we typically are): chinos or jeans, blue business shirt or slightly worn/botany-related T-shirt under a polar-fleece, glasses, beard (generally the men) and hiking boots. Bewildered look semi-optional.
Stand around in small groups, generally of similar geographic origin, until someone makes a move to disperse. Wander around awkwardly looking for familiar faces while pretending to survey the finger-food. Avoid people whose faces you know but whose names you can't for the life of you remember, until you can look them up in the program or steal a surreptitious look at their name badge. Note to organisers: this year's name badges do not include institution, leading to much confusion and awkward, roundabout attempts to remember where this person works...
Sit through opening session with tenuous cultural references. Applaud politely. Retreat to hotel at first available opportunity.
----------------------------------------------

Conference Day 1.
Energetic enthusiasm abounds as the crowds mill in expectant conversation or push their way in large numbers to the coffee stands, where conversations converge and divide, threading around the snakes of people, excited at the week on offer. Postgraduate students and postdoctoral workers dash around trying to find the famous and important people they want to make a good impression on, with the truly noteworthy assembling small rugby scrums around them. Conversations are animated, and waves and hellos ring out all around.
The first sessions start and the crowds pile in to hear whatever is going. People dash between sessions, up and down stairs, in their excitement to make the talks of interest. Too late for any precautions, the lab with 'the conference cold' is identified. Tally of people who have waved to me so far whom I swear I have never met: 4.
----------------------------------------------

Conference Day 2.

The frenetic chaos of the previous day has slowed a little, and there is the opportunity to have some in-depth conversations on matters of interest with the relevant people. There is still a flurry of activity between sessions, and the coffee lines may even be a little longer, with people looking in need of a pick-me-up. Still, one pretends to be in a hurry, especially when passing someone you really don't want to get stuck talking to. The importers of the conference cold look heroically cheerful and manfully maintain that the worst is over, so it must have been non-contagious when they arrived.
----------------------------------------------

Conference Day 3.
Promises of collaboration have been made and data, papers and invitations have been traded, with small knots of people inhabiting the reception hall during sessions: earnest conversations, and the unlucky ones still to talk hunched over laptops. Sessions with particularly controversial or broadly appealing titles fill out, but otherwise people move little between sessions. The skill of pretending not to notice that same person you have already said 'hello' to 4 times when scanning the room becomes important if awkward conversation is to be avoided.
Just in time for the mid-conference hump, the conference dinner provides the opportunity for everyone to let their hair down, crack out their best 1990s-era suit and drink enough to become unhealthily sociable. The uneasy feeling that the conference cold was rather more contagious than first claimed is dawning on many people, though it is hard to differentiate between that and a surplus of alcohol...
----------------------------------------------

Conference Day 4.
Early-morning halls are conspicuously empty. The queues for coffee have slowed, but the number of people nursing a hangover over a cup is certainly up. Stragglers wander in late and collapse into chairs rather than seeking out a session to break into. Social niceties are abandoned, and feigning blindness on seeing 'that-person-who-you-have-nothing-left-to-say-to' gives way to a curt nod. Lectures are sparsely populated, with more and more people sitting vacantly in the halls instead of picking a random talk to attend. A large portion of the population is reported to be absent due to cold-related miasma.
----------------------------------------------

Conference Day 5.
By now even the most resilient have usually cracked, if not to hangover or the conference cold, then to the absence of any remaining constructive conversation. Halls feel empty, but the occasional persistent knot of people may be found around cafe tables or away from the convention centre. The poor souls remaining to give presentations slave ghoulishly over talks they hope will prise some last glimmer of life from their audience. Awkward conversations can be avoided with farewells, promises to catch up and looking forward to next time. Anecdotes on the week are traded through stuffy noses, and you struggle to pretend you have retained any of the proceedings beyond a bewildering fog of graphs, slides and take-home messages.
----------------------------------------------

Conference +1.
Delegates return to normal life slightly shell-shocked. Greetings of "how was your holiday?" and "good to get back to work" land unhelpfully, and a week on the beach looks inviting.
----------------------------------------------

Conference +2.
Some form of assimilation appears to have miraculously happened overnight, and there is a fresh enthusiasm to attack your work and investigate or incorporate the new ideas and methods from the week before. You find yourself fondly looking forward to next time...

Tuesday, May 31, 2011

ERA loses its ABC


Several years ago the Australian academic community was told that it needed metrics and direction to become the lean tool of academic excellence expected (presumably bitter competition within the community is insufficient) by whoever cares about these things. This was to be provided under the banner of Excellence in Research for Australia (ERA), by the Australian Research Council (ARC). A major innovation was to be the replacement of the generally universal system for ranking and estimating the relative profile of journals with our own home-grown scale. In all its wisdom, the Australian government (how many PhDs between them??) was going to boldly break out and devise its own system. Needless to say, the rest of the world didn't give a hoot, resulting in journals or papers being more or less important depending upon which side of Australia's border you happened to be on.

After the expenditure of no doubt not inconsiderable sums of money, much huffing and puffing, and a revision when serious flaws arose (journals ranked way out of their league, or missing entirely), a list of journals and ranks was released. Of particular note is that rankings were partly determined by lobbying from the industry, and decided on by a 'panel of (not impartial) experts'. Journals were assigned a value within an apparently arbitrary tier system from A* (highest) through A and B to C (lowest): the Sesame Street System. After several years things had settled down, and people for the most part ignored the new system, unless scraping for a few boosted publication rankings in local ARC grant proposals.
Fine.
Until today.
It has been announced that the rankings are already to be discontinued. The media statement is available here:

http://minister.innovation.gov.au/Carr/MediaReleases/Pages/IMPROVEMENTSTOEXCELLENCEINRESEARCHFORAUSTRALIA.aspx

Blimey, what a lot of waffle!!!

So, in summary:

"These improvements are:
- The refinement of the journal quality indicator to remove the prescriptive A*, A, B and C ranks"


Refinement?? So they are getting rid of the Sesame Street System (A, B, C), but are still providing some 'quality indicator'?? How is this going to be any different? And what was the original intention of a 'prescriptive rank', if not to provide a guide for researchers as to the perceived 'quality' of a journal??

The one stated problem with the system is "...the setting of targets for publication in A and A* journals by institutional research managers."

Well... ummm... duh! Who would be naive enough not to see the blindingly obvious extension that higher-rated journals would be targeted!?!? As is already the case with Impact Factors?!? Or can I read between the lines and guess that, because personal submissions and cases were made for journal ranks, they discovered people were inflating the ranks of their own journals (or of the journals they already published in) and skewing the field...?

"- The introduction of a journal quality profile, showing the most frequently published journals for each unit of evaluation;"

What?!?! Who cares? Unless I'm odd, the rate of issue is not high on my priority list, and quantity certainly isn't any measure of quality...

"As with some other aspects of ERA, the rankings themselves were inherited from the discontinued Research Quality Framework (RQF) process of the previous government..." under the advice of the ARC, while he later goes on to say: "I have made the decision to remove the rankings, based on the ARC’s expert advice."

So the same body that was inept under the last government is now the expert...?? Someone hand me the sick bag...