Thursday, August 28, 2014

Lunch


Seems I was not the only one feeling peckish around noon today.


A slightly nervous Red-tailed Hawk (Buteo jamaicensis) demolishing a rabbit on the lawns beside the campus post office building. I have seen many of these birds patrolling the highways and byways hereabouts in the hope of an easy meal, but never anything quite so bold.

Thursday, August 14, 2014

Still the forgotten people

Just a passing note on how the more things change the more they stay the same.

    The Forgotten People - a speech by Robert Menzies on 22 May, 1942.
    "One of the great blots on our modern living is the cult of false values, a repeated application of the test of money, notoriety, applause. A world in which a comedian or a beautiful half-wit on the screen can be paid fabulous sums, whilst scientific researchers and discoverers can suffer neglect and starvation, is a world which needs to have its sense of values violently set right."

*sigh*
Actually, on second thoughts, although the issues remain the same, I can't imagine a politician of today's 'modern'* world even admitting to such sentiments. This speech was quoted in an interview of Michael Fullilove (great surname) by Richard Fidler on ABC radio on August 7, 2014 (http://www.abc.net.au/local/sites/conversations/).

*I read an interesting article (citation needed) the other day on the misuse of the word "modern", in particular its application to art and design. Apparently "Modern" refers to a specific period of time (early-to-mid 20th century), so it is technically more correct, when talking about new architecture, to say "of a contemporary style"...

Thursday, August 7, 2014

For the Love of the... Rant


For the Love of the Job Rant


As much as I find *some* of Slate's articles to be above-averagely interesting and thoughtful, I seldom manage to motivate myself to dredge through their increasingly link-baity pages (oh so much worse since they renovated their site), so it's nice when I am saved the trouble by someone sending me a link. And this one is right in my wheelhouse:

"So You're A Science Ph.D. How Good Are Your Job Prospects Really?"


http://www.slate.com/blogs/moneybox/2014/08/05/science_ph_d_job_market_how_bad_is_it_really.html

I had two initial reactions to the title alone: a sarcastic snort, and the kind of stomach roll you used to get when you remembered you had a maths test the next day. However, despite the foreboding that this would be a glib piece about how "those scientists aren't as badly off as they say", I was pleasantly surprised to discover that it was rather the opposite. It's always nice to get confirmation of one's own views...

In fact it is a response to an article that I suspect was written as a (possibly smug) contrarian jab at the doom and gloom surrounding the state of science funding and employment anywhere people wear lab coats or jockey pipettes. The author (Jordan Weissmann) has a background in business and economics, so it is rather touching to see him going in to bat for scientists, who can feel increasingly alienated from the conversations going on in the rest of the room. Browsing the articles he wrote in his previous post at The Atlantic (http://www.theatlantic.com/jordan-weissmann), the topic is well within his well-worn stomping ground, but no less worthy for that (naturally, from my perspective).


It should also be no surprise, given my age and stage of career, that the current climate for research jobs is a topic I am familiar with. A waking-up-in-cold-sweats kind of familiarity.


The parlous state of research and especially research funding (described by my current supervisor as "academic triage: trying to keep as many researchers barely alive as possible") is not an entirely novel topic to turn up hidden away in the mainstream media. However, one particular, very important point is often papered over: scientists continue to surrender blood, sweat, and tears, ignoring all rational warning signs and against all odds, in order to pursue an incredibly challenging and seldom lucrative career because they love what they do.

With a brand of enthusiasm, energy and naivety that is institutionally exploited.
So much so that it has become ingrained in the culture. A vast amount of the work and data in the current system is generated by volunteer undergraduate students, below-poverty-wage graduate students, or untenured researchers who are underemployed, unable to secure sufficient (or any) funding, or, in a growing number of anecdotal cases, working for free so as to stay in contact with the field in the hope that legitimate work eventually arises.
Even when gainfully employed, scientists are expected to work long, exhausting hours, battling lack of funds, rabid competition, and the inevitable stress of defending ideas under peer scrutiny. Supervising students, attending conferences, writing and reviewing papers, after-hours lab work, reading, writing and reading again, and seldom more than 2-3 years away from project collapse if the next round of funding doesn't eventuate.

Granted, this is not a landscape entirely unique to professional research; there are many parallels between a career in science and a career as an artist, living commission to commission and hoping one day for a patron to pick you up and secure your future. What scientists don't have, however, is any degree of sympathy or support from the general public (indeed, cuts to science funding are generally ignored by the world at large). Yet everyone is familiar with the trope of the struggling artist, suffering for their work. And I suspect that in many instances it is this recognition that establishes private support for the arts and individual artists to a degree that science has not seen for over one hundred years. There is a further non-parallel between science and art that is frustrating: scientists are almost never able to sit on the sidewalk with a sign saying "will extract DNA and run protein assays for cash", or take over the local cafe or converted loft space to showcase their pipetting skills.


Before I am pelted with ballet shoes or stabbed with paintbrushes, let me make it clear: I am not advocating that private donors should be making a decision between supporting a starving artist or a starving scientist. For two reasons: 1. small grants are (unfortunately) rarely going to be enough to establish a scientific career with its associated research costs, and 2. (as noted in the Slate article in question) the level of education of most career-attempt scientists does provide a degree of backup should the worst come to the worst and the individual has to convert their lab coat into a parachute and bail. Unless large amounts of money are involved (as they are for the Gates and Packard foundations), it is almost not worth the time. What is needed is recognition from government, corporations, or maybe groups of investors that science is not a directly self-funding exercise (although the dollars generated through science far exceed the input), that blue-sky/non-cure-for-cancer research is essentially the base for much of the world's progress, that many of the world's sharpest minds are being lost to more reliable (but ultimately less productive) jobs, and ultimately, that the current system is broken and exploiting a valuable resource in an unsustainable way.

Tuesday, September 6, 2011

No Such Thing As Nothing In Science


Often the most frustrating questions as a tutor are those that are not asked. Maybe because people are too shy, or because it seems like a silly question; but the most frustrating of all are the questions that nobody even knows are there. Which means that everyone ignores them until they quietly creep up and wallop you between the shoulder blades.

The following is a short survival guide I wrote a while ago for an undergraduate class on one such question. On the surface of it, it seems like a non-question: "in experimental science, what do you do when you get no result?" The problem is in the premise of the question: that there is such a thing as a 'no result'. Which is false.

Dealing with 'no results' is something everyone in science has to come to terms with. Contrary to popular opinion, few experiments end with the researcher discovering some grand epiphany and scarpering down the street naked yelling 'Eureka!'
More often than not, some poor fool who has been sitting in front of a computer for weeks suddenly starts banging their head against the monitor and swearing at the data. Things seldom go smoothly, or as planned, and often the results of even the most meticulously designed experiment leave you wondering what the outcome actually is, and why something is not as expected. The worst of these is when your data appear to be telling you, well, nothing.
Here is a short survival guide to navigating “no result”:

There are generally three types of "No Result":

• Lack of data
• Failed experimental design (really a subset of 'Lack of Data’)
• Data doesn't show conclusive results one way or the other (non-significant results)

Lack of Data
Likely to happen especially when time, money, etc. are short, when you are reporting on trial/pilot experiments, or when you are presenting data from an experiment that is still underway or in its early stages. There is no shame in these results. Small-scale or pilot studies are the basis from which more elaborate and grand-scale experiments are born. Outstandingly significant results, or even enough data to analyse properly, are not expected; instead you are looking for trends in the data, general directions it is heading that might be worth following up, or that suggest your hypotheses are correct and worth pursuing more thoroughly.

Failed Experimental Design
It sucks, but it happens. Sometimes it may be for embarrassing reasons you didn't think of beforehand, but even so it is very rare for scientists to sink the boot into a genuinely failed experiment: 1. because sometimes it just happens (that's what makes biology so unpredictably interesting), and 2. because it has probably 'just happened' to them too at some time.
So don't focus on the negatives. The audience doesn't want to hear all the whys and what-could-have-beens, and it just makes everyone feel awkward. Honestly, they probably don't care (unless it was for some really exciting reason that presents more questions). Admit what went wrong, but move on to 1. how you will fix it in the future, and 2. what you DID find. Seldom is an experiment such a complete bust that you can't even pull some general trends or observations out of your time. What did you see while setting up or running it that was interesting? Were there things that you saw that will change your hypotheses, or that created new questions? Go through what your hypotheses still are, how you arrived at them, and how they are supported by the evidence of other studies (see 'Reporting' below for more).

Non-Significant Results
There are several flavours of non-significant results. The first, vanilla flavour is from above: lack of data. Again, get the best out of what you have, stress trends and what you are seeing against what you expected. Emphasise your hypotheses, why you stated them, and how they are supported by other data.
The second flavour of non-significant results is much nuttier: genuinely middle-of-the-road data. This can be one of the most irritating situations a scientist can find themselves in. A great idea, months of work, and when all the data are in, they appear to say nothing. No significant yes, no significant no. Just noise. How do you write up a non-result?
Often such data are treated as a 'failure', but this is not true. You have probably found 'no result' for a reason; it is telling you something about your system. It's not what you expected, but it may be no less interesting. The trick is to spin the finding to make it interesting to an audience:

1. Look for trends. OK, they aren't significant, but you can build a story out of them - as above, what is it, and how does it fit with other work and hypotheses?
2. Find other ways, or other questions, to look at the data you already have. This can hurt your brain and take a lot of time, especially if you start blindly prodding your data hoping something falls out. Try to go back to square one and construct new, focused hypotheses; then you can follow the right paths for analysis and a (hopefully) clear answer.
3. What does 'no result' actually MEAN? As mentioned above, a 'no result' has probably occurred for a reason. For example, say all the sequences for the gene I was looking at across hundreds of humans are the same. There is no variation (see the short sketch after this list). My hypothesis that I could look at differences in this gene between continents is gone, but it raises the interesting question: WHY is there no variation in this gene? Maybe it is a very important gene and can't afford to change... maybe it is a recent viral insertion... etc. So instead of boring the audience with why I saw nothing, (scientifically) suggest some stories (again, learn to love the word 'hypotheses') about why this might be the case, and how you could test each possibility. Your audience will give you much greater credit for showing you can think (especially in difficult situations) and for your initiative, than for rolling out how bad the methods were and why it was bound to fail for x, y and z reasons.
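
To make the 'no variation' example in point 3 concrete, here is a minimal sketch (the sequences and names are invented for illustration, not data from any real study) of how you might confirm that an alignment really does contain zero variable sites:

```python
# Toy illustration (invented sequences): checking whether an aligned set of
# sequences contains any variable sites, i.e. whether 'no variation' really is
# the result rather than an analysis slip.
sequences = [
    "ATGGCCTTA",   # hypothetical aligned sequences, one per individual
    "ATGGCCTTA",
    "ATGGCCTTA",
]

# A site (column) is variable if more than one base is observed in it.
variable_sites = [
    position for position, column in enumerate(zip(*sequences))
    if len(set(column)) > 1
]

if not variable_sites:
    print("No variable sites found: the lack of variation IS the finding.")
else:
    print(f"Variable sites at positions: {variable_sites}")
```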

**Adding more data will not necessarily give you a significant result!!!!**
A common fall-back error when presenting non-significant results is to say "I must not have enough data, so getting more will surely find a significant result." Well, no. If there is truly no effect to be found (i.e. non-significance IS the answer), more data will only firm up that same answer; it will not conjure significance out of noise!
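
To see why, here is a toy simulation (my own illustration, not part of the original class notes): two groups drawn from exactly the same distribution, so the true answer is 'no difference', tested at increasing sample sizes.

```python
# Toy simulation: when the null hypothesis is actually true, adding more data
# does not drift towards significance -- the p-value just keeps (correctly)
# reporting 'nothing to see here', while the estimate of 'no effect' gets tighter.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

for n in (20, 200, 2_000, 20_000):
    group_a = rng.normal(loc=0.0, scale=1.0, size=n)  # same distribution...
    group_b = rng.normal(loc=0.0, scale=1.0, size=n)  # ...for both groups
    t_stat, p_value = stats.ttest_ind(group_a, group_b)
    print(f"n per group = {n:>6}: p = {p_value:.3f}")
```

Individual runs will bounce around (roughly one run in twenty dips below 0.05 by pure chance, at any sample size), but there is no drift towards significance as n grows; the data just describe 'no difference' with ever greater precision.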


REPORTING
Each type of no result has particular avenues to finding something to write about, as touched on above. But broadly across all of these there are some general themes. In all cases you can be left stranded with a good idea that you are frustratingly unable to prove. But rather than throw the baby out with the bathwater and depress the audience with what didn't go right, concentrate on the positives.
What were your hypotheses?? Why did you make them, and what did you *expect* your data to show that would prove them? The greatest credit people will give you is for the interesting, novel or clever idea you had in the first place. If it is a truly sound idea and you can present evidence as to why you expect the hypotheses you propose, collecting the data is *almost* irrelevant. If there has been past work done on a similar theme, or work that you are building upon, convince the audience that the results you didn't quite get were all but inevitable.
This is a quality skill in science, and is used time and again when convincing people to give you money for experiments you have not yet done.

And don't forget to:

- reference your work. Not only does this show that you have actually read about what you are doing, but it will automatically allow you to include new ideas, other results and begin to use the language and thinking needed to present scientific data.

- state your hypotheses, what you expect to find and WHY.

- for papers AND talks: read, re-read, proofread, give-to-your-housemates-to-read, read and read again. Nothing makes bad data look worse than sloppy writing.

Friday, July 29, 2011

A Building full of Botanists...


The 18th International Botanical Congress, Melbourne, July 24-30.

So, what happens when over 2000 of the world's botanists converge on Melbourne for a week-long plant love-in...?
Here's a brief day-by-day playbook from a typical conference. More specific tales from IBC will follow:

Conference Day-1: Registration, Welcome and Mixer.
Follow the huddles of twitchy-looking types to the Melbourne Convention Centre. Botanists can be identified by their typical 'smart-casual-field' attire, looking like overgrown university students (which is what we typically are): chinos or jeans, blue business shirt or slightly worn/botany-related T-shirt under a polar fleece, glasses, beard (generally the men) and hiking boots. Bewildered look semi-optional.
Stand around in small groups, generally of similar geographic origin, until someone makes a move to disperse. Wander around awkwardly looking for familiar faces while pretending to survey the finger food. Avoid people whose faces you know but whose names you can't for the life of you remember, until you can look them up in the program or steal a surreptitious look at their name badge. Note to organisers: this year's name badges do not include institution, leading to much confusion and awkward roundabout ways of trying to remember where this person works...
Sit through opening session with tenuous cultural references. Applaud politely. Retreat to hotel at first available opportunity.
----------------------------------------------

Conference Day 1.
Energetic enthusiasm abounds as the crowds mill in expectant conversation or push their way in large numbers to the coffee stands, where conversations converge and divide, threading around the snakes of people excited at the week on offer. Postgraduate students and postdoctoral workers dash around trying to find the famous and important people they want to make a good impression on, with the truly noteworthy assembling small rugby scrums around them. Conversations are animated and waves and hellos ring out all around.
The first sessions start and the crowds pile in to hear whatever is on. People dash between sessions, up and down stairs, in their excitement to make the talks of interest. Too late for taking any precautions, the lab with 'the conference cold' is identified. Tally of people who have waved to me so far whom I swear I have never met: 4.
----------------------------------------------

Conference Day 2.

The frenetic chaos of the previous day has slowed a little, and there is the opportunity to have some in-depth conversations on matters of interest with the relevant people. There is still a flurry of activity between sessions, and the coffee lines may even be a little longer, with people looking in need of a pick-me-up. Still, one pretends to be in a hurry, especially when passing someone you really don't want to get stuck talking to. The importers of the conference cold look heroically cheerful and manfully maintain that the worst is over, so it must have been non-contagious when they arrived.
----------------------------------------------

Conference Day 3.
Promises of collaboration have been made and data, papers and invitations have been traded, with small knots of people inhabiting the reception hall during sessions, deep in earnest conversation, while the unlucky ones still to talk hunch over their laptops. Sessions with particularly controversial or broadly appealing titles fill out, but people move little between sessions. The skill of pretending not to notice that same person you have already said 'hello' to four times when scanning the room becomes important if awkward conversation is to be avoided.
Just in time for the mid-conference hump, the conference dinner provides the opportunity for everyone to let their hair down, crack out their best 1990s-era suit and drink enough to become unhealthily sociable. The uneasy feeling that the conference cold was not as non-contagious as first claimed is dawning on many people; however, it is hard to differentiate between that and a surplus of alcohol...
----------------------------------------------

Conference Day 4.
Early-morning halls are conspicuously empty. The queues for coffee have slowed, but the number of people nursing a hangover over a cup is certainly up. Stragglers wander in late and collapse into chairs rather than seeking out a session to break into. Social niceties are abandoned, and feigning blindness on seeing 'that-person-who-you-have-nothing-left-to-say-to' gives way to a curt nod. Lectures are sparsely populated, with more and more people sitting vacantly in the halls instead of picking a random talk to attend. A large portion of the population is reported to be absent due to cold-related miasma.
----------------------------------------------

Conference Day 5.
Usually even the most resilient have cracked by now, if not to hangover or the conference cold, then for want of any remaining constructive conversation. Halls feel empty, but the occasional persistent knot of people may be found around cafe tables or away from the convention centre. The poor souls remaining to give presentations slave ghoulishly over talks they hope will prise some last glimmer of life from their audience. Awkward conversations can be avoided with farewells, promises to catch up, and looking forward to next time. Anecdotes on the week are traded through stuffy noses, and you struggle to pretend you have retained any of the proceedings beyond a bewildering fog of graphs, slides and take-home messages.
----------------------------------------------

Conference +1.
Delegates return to normal life slightly shell-shocked. Greetings of "how was your holiday?" and "good to get back to work" land unhelpfully, and a week on the beach looks inviting.
----------------------------------------------

Conference +2.
Some form of assimilation appears to have miraculously happened overnight, and there is a fresh enthusiasm to attack your work and investigate/incorporate the fresh ideas and methods from the week before. You find yourself fondly looking forward to next year...

Tuesday, May 31, 2011

ERA loses its ABC


Several years ago the Australian academic community was told that it needed metrics and direction to become the lean tool of academic excellence expected (presumably bitter competition within the community is insufficient) by whoever cares about these things. This was to be provided under the banner of Excellence in Research for Australia (ERA), by the Australian Research Council (ARC). A major innovation was to be the replacement of the generally universal system for ranking and estimating the relative profile of journals with our own home-grown scale. In all its wisdom, the Australian government (how many PhDs between them??) was going to boldly break out and devise its own system. Needless to say, the rest of the world didn't give a hoot, resulting in journals or papers being more or less important depending upon which side of Australia's border you happened to be on.

After no doubt not inconsiderable sums of money, much huffing and puffing, and a revision when serious flaws arose (journals ranked way out of their league, or missing entirely), a list of journals and ranks was released. Of particular note is that rankings were partly determined by lobbying from the industry, and decided on by a 'panel of (not impartial) experts'. Journals were assigned a value within an apparently arbitrary tier system from A* (highest) through A and B to C (lowest). The Sesame Street System. After several years, things had settled down, and people for the most part ignored the new system, unless scraping for a few boosted publication rankings in local ARC grant proposals.
Fine.
Until today.
It has been announced that the rankings are already to be discontinued. The media statement is available here:

http://minister.innovation.gov.au/Carr/MediaReleases/Pages/IMPROVEMENTSTOEXCELLENCEINRESEARCHFORAUSTRALIA.aspx

Blimey, what a lot of waffle!!!

So, in summary:

"These improvements are:
- The refinement of the journal quality indicator to remove the prescriptive A*, A, B and C ranks"


Refinement?? So they are getting rid of the Sesame Street System (A, B, C), but are still providing some 'quality indicator'?? How is this going to be any different? And what was the original intention of a 'prescriptive rank' if not to provide a guide for researchers as to the perceived 'quality' of a journal??

The one problem with the system stated is "...the setting of targets for publication in A and A* journals by institutional research managers."

Well... ummm... duh! Who would be naive enough not to see the blindingly obvious extension that higher-rated journals would be targeted!?!? As is already the case with Impact Factors?!? Or can I read between the lines and guess that, because personal submissions/cases were made for journal ranks, they discovered people were inflating the ranks of their own journals (or the journals they already published in) and skewing the field...?

"- The introduction of a journal quality profile, showing the most frequently published journals for each unit of evaluation"

What?!?! Who cares? Unless I'm odd, the rate of issue is not high on my priority list, and quantity certainly isn't any measure of quality...

"As with some other aspects of ERA, the rankings themselves were inherited from the discontinued Research Quality Framework (RQF) process of the previous government..." under the advice of the ARC, while he later goes on to say: "I have made the decision to remove the rankings, based on the ARC’s expert advice."

So the same body that was inept under the last government is now excellent...?? Someone hand me the sick bag...

Monday, September 13, 2010

Talking to the Great-Beyond



At the risk of dating myself (in the non-romantic sense), I can remember a world sans internet, with the first twinkles of information crawling down the not-so-super-highway (message boards), albeit in a very second-hand way.
Now 'to Google' something is a verb in its own right, and something most of us do regularly. New advances in data acquisition and management are passé, and we expect more and more in return for simple, inexplicable, incomprehensible, inappropriate and downright useless queries.
So it is not often that I become amused and engaged with a new way to query the great cloud-in-the-ether. Yet today I spent (ok, maybe this is more the procrastination talking) a fun while 'interacting' with a search engine. Well, not actually a search engine, but a data analysis and access engine. And I began asking it questions. Questions about itself. I was trying to converse with an algorithm...

'It' is WolframAlpha (www.wolframalpha.com). If you ask Wolfram what it is, it replies that it is 'a computational knowledge engine.' This gives a sense of Wolfram's state of mind: adequate and to the point, but not terribly enlightening. Those behind Wolfram (i.e. his 'father', Stephen Wolfram) go a little further:

"Wolfram|Alpha's long-term goal is to make all systematic knowledge immediately computable and accessible to everyone. We aim to collect and curate all objective data; implement every known model, method, and algorithm; and make it possible to compute whatever can be computed about anything. Our goal is to build on the achievements of science and other systematizations of knowledge to provide a single source that can be relied on by everyone for definitive answers to factual queries."

So, the ideal is that WolframAlpha, and his siblings (Beta is in gestation), are able not only to provide matches to key phrases, but to summarise, compute, interpret and display data. A suggested example of this is to query two stocks or companies (e.g. enter "Apple Microsoft") or countries (e.g. "Australia USA") and be presented with a summary comparison of the two.
Where this is supposed to become really powerful is in the analysis of more integrated data that is freely available online but little accessed (census data, accounts, stocks, shares, etc.); however, that is well and truly outside my area of exploration (if someone out there is more insightful than me, let me know!). So instead I settled down to plying the poor thing with demeaning questions (I imagine this is exactly how Marvin first started complaining about the pain in all the diodes down his left side). But in fact WolframAlpha is so designed that asking sensible (or less so) questions is actually intuitive, in that the response is generally some sort of coherent, or at least ballpark, answer. Thus my questions grew from a tentative "who are you" to the more subtle and tongue-in-cheek "What's the speed of an unladen swallow?" and eventually to trying to catch him out: "is there a god."
Wolfram's ability to answer direct questions varies with the precision and extrapolation required by the query, and while some questions are answered in a glib fashion suggesting anticipation by the programmers, others are reduced to component parts and return less than specific results. Still others return the interesting response "Philosophy Ideas/Human Discourse: Development of this topic is under investigation..."
So could we be seeing an attempt to pass the Turing Test??

------------------------------------
Background: the Turing Test was proposed in 1950 by Alan Turing as a way to determine whether a machine's intelligence has reached that of human cognition. A human user (the judge) is blindly engaged in natural-language conversation with, in turn, a human and a computer companion. If the user is unable to determine which is the human and which is the computer, the computer is judged to have passed the test.
Although not an ideal or ultimate test of AI, the test has been backed with a $100,000 prize for the first success, which has enhanced its appeal and credibility, such that it is still the ultimate focus of many AI developments. As yet (2010), it remains unawarded.
------------------------------------

As far as I (in my limited capacity) know, this would not be the first time the idea of using the internet cloud to pass the test has been attempted (something tells me Google may have tried), and it seems the most likely path to success, with the ability to consult, analyse, and model responses upon billions of online examples far superior to any attempt to rely on a library of words and rules pre-determined by programmers.

However, this is not really taking advantage of WolframAlpha's abilities, and obviously many of the answers have been carefully provided for geeks such as myself, rather than being the result of natural-language responses from Wolfram himself. But it is, nonetheless, amusing.
It also got me thinking: what is it that makes us want to squeeze some sentient-like behaviour out of a non-physical being? Why ask stupid questions, or questions I already know (or think I know) the answer to, of a line of code...?
Hmmmmm.... Anyone for religion...?


Things to ask WolframAlpha:

Are you alive?
When were you born?
Are you my friend?
What is your favourite colour?
What is the weather like?
What’s the speed of an unladen swallow?

What is the meaning of life?
Who is Luke's father?
Who is your daddy and what does he do?
How many chucks can a wood chuck chuck?
How many roads must a man walk down?
Is Elvis alive?
Is there a god?
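
(For the terminally curious: you can also ply Wolfram|Alpha with questions from a script. Below is a minimal sketch only, assuming you have registered for an app ID and that the 'Short Answers' endpoint is offered in this form; check the current API documentation rather than trusting my memory.)

```python
# A sketch only: scripting the question list against Wolfram|Alpha's public API.
# The endpoint and parameters are assumptions based on the documented
# "Short Answers" API -- verify against current docs before use.
import requests

APP_ID = "YOUR-APP-ID"  # placeholder: obtain a real one from developer.wolframalpha.com
ENDPOINT = "https://api.wolframalpha.com/v1/result"

QUESTIONS = [
    "Are you alive?",
    "When were you born?",
    "What is your favourite colour?",
    "What is the speed of an unladen swallow?",
    "What is the meaning of life?",
    "Is there a god?",
]

for question in QUESTIONS:
    # The Short Answers API returns a single line of plain text,
    # or an error status if it cannot produce one.
    response = requests.get(ENDPOINT, params={"appid": APP_ID, "i": question})
    answer = response.text if response.ok else "(no short answer)"
    print(f"Q: {question}\nA: {answer}")
```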