The Department of Health and Ageing has just published its annual report for 2008-09. It is not fascinating reading but then, to be fair, few annual reports are.
But in one respect it is fascinating: its worshipping at the altar of performance indicators (PIs). What is concerning is not so much whether the department has met its various PIs; it is the seeming obsession with such indicators and the nature of them. They are many, so let me focus on just a few.
For each of the 15 “outcomes” (read: programs) there is a PI: “Quality, relevant and timely advice for Australian government decision-making, measured by ministerial satisfaction”. The so-called reference point/target is “ministerial satisfaction”.
In all instances, the judgment is that “ministers were satisfied”. This, then, counts as successful performance. But I wonder … is this indicator used in every program in every department? Has there ever been a time when it was reported that the minister was not satisfied? And do all ministers religiously go over these reports and judge whether or not they are satisfied?
For all outcomes there is a PI requiring the actual spend to be kept within 0.5% of the forecast. So often, sadly, DOHA fails … The logic here is that an underspend counts as failure just as an overspend does. This leads to odd situations. In program 5.2 (Primary Care Financing, Quality and Access), for example, all indicators are said to be met or substantially met, yet the program is (naughtily) more than 0.5% underspent: it was cheaper than forecast. That sounds good to this economist but, no, it is deemed bad because it is more than 0.5% below the target. If more had been spent, so that the underspend was 0.5% or less, that would have counted as success.
There are some other peculiarities. The only indicator for “primary care educational training” is an “increased number of non-vocationally recognised medical practitioners undertaking professional development”. That’s it for all forms of primary care educational training, and, oh dear, that indicator was not met. By implication, then, the whole primary care education and training program failed. This is PI management-think gone bizarre.
Again looking at the indigenous health program, one indicator is “demonstrated access to culturally appropriate social and emotional wellbeing and mental health services” (emphasis added). Success is claimed because the number of relevant client contacts in Aboriginal Community Controlled Health Organisations went up.
But there is no consideration of the fact that not all of these would have been “culturally appropriate”. In fact, there is no consideration of “culturally appropriate” in the discussion of the PIs. Few if any ACCHOs are funded adequately to allow them to provide culturally appropriate services. Did the number that were “culturally appropriate” go up? We do not know and it seems neither does the department.
To some extent one sympathises with those poor souls in DOHA who have to design and fill out this sort of reporting.
But what does it mean? And what does it cost to design and fill it out? Is it worth the paper it is written on?
On that particular PI, I think the “indicator is not met”; I suspect not even to “ministerial satisfaction”! This is not Don Watsonian management speak but its equivalent in management thinking.