6. Check and evaluate

Processes in the check and evaluate phase are valuable techniques to measure the success, or otherwise, of innovations. They provide the basis on which judgments can be made about efficiency, effectiveness and appropriateness of a new process, product, service or method of delivery.

[Figure: The check and evaluate phase of the innovation process]

Elements of the check and evaluate phase of the innovation process are outlined in this section of the Guide.

An assessment of whether an innovation is meeting the objectives and expectations of government, citizens, clients and other stakeholders requires relevant performance information. Stakeholder expectations can be high, and risks include a lack of data on progress to date, on future uptake, and on the ongoing appropriateness of the initiative. Effective ways to mitigate these risks include: the use of appropriately targeted performance information; ongoing engagement with citizens, clients and other stakeholders; and, where appropriate, longer-run evaluations.

6.1 Prepare an evaluation strategy

Performance information and its availability to agency managers, politicians and Australian citizens contribute to learning, innovation and improvement. An appropriately tailored evaluation strategy includes the collection and analysis of performance information that provides:

  • an early indication of policy/delivery effectiveness; and
  • longer-term evaluation of outcomes.

Know what to measure and how to do it

Early indicators can be used to detect any significant problems and enable corrective action to be taken. Longer-term evaluation can be used to better understand the details of the impact of policy, service delivery and regulatory changes.

Manage innovation risks: appropriately targeted performance indicators

It is better to focus resources on fewer, well-specified and robust performance indicators than on numerous partial indicators. In some cases, where direct indicators are not available, proxy indicators may need to be considered.
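To illustrate what 'well-specified' can mean in practice, the following sketch (in Python) records the elements a single indicator definition might capture: the objective it evidences, its data source, its collection frequency, whether it is a proxy, and its target level. All field names and example values are hypothetical and are not drawn from this Guide.

    from dataclasses import dataclass

    @dataclass
    class PerformanceIndicator:
        """Illustrative record of a single, well-specified indicator."""
        name: str          # what is being measured
        objective: str     # the program objective the indicator evidences
        data_source: str   # where the underlying data comes from
        frequency: str     # how often it is collected, e.g. "monthly"
        is_proxy: bool     # True where no direct measure is available
        target: float      # the level of performance being aimed for

    # Hypothetical proxy indicator, used because the direct outcome
    # (average processing time) cannot yet be measured.
    indicator = PerformanceIndicator(
        name="Online lodgement rate",
        objective="Reduce average claim processing time",
        data_source="Agency transaction records",
        frequency="monthly",
        is_proxy=True,
        target=0.60,  # 60 per cent of claims lodged online
    )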

Most performance indicators describe performance for a previous period of time and, therefore, are lagging indicators. Lead indicators are valuable because they provide early information on likely future performance. Lead indicators are especially useful where there is a considerable time period between the implementation of a policy innovation and the outcome. From an internal management perspective, lead indicators assist agencies to determine whether new approaches are working and to modify them as required. From an external perspective, lead indicators enhance accountability and transparency as to whether an initiative is likely to produce the desired outcomes in the future.

 

Lead Indicators

Lead indicators provide information on likely future performance. For example, in relation to immunisation and cervical cancer, the number of women being vaccinated with GARDASIL® is a lead indicator of the expected beneficial outcome some years into the future.
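By way of illustration only, the sketch below shows how such a lead indicator can be read: current vaccination coverage is used to project an outcome (cases averted) that will not be observable for years. The cohort size, uptake counts, incidence and efficacy figures are invented for the example and do not reflect actual immunisation data.

    # All figures below are invented for illustration; they are not
    # actual immunisation or cervical cancer data.
    cohort_size = 130_000                           # eligible women per year
    vaccinated_per_year = [55_000, 78_000, 95_000]  # hypothetical uptake counts
    baseline_incidence = 0.0007                     # assumed incidence without vaccination
    efficacy = 0.70                                 # assumed risk reduction when vaccinated

    for year, vaccinated in enumerate(vaccinated_per_year, start=1):
        coverage = vaccinated / cohort_size
        # Projected future cases averted in this cohort -- available years
        # before outcome data could confirm it, which is what makes
        # coverage a lead indicator.
        cases_averted = vaccinated * baseline_incidence * efficacy
        print(f"Year {year}: coverage {coverage:.0%}, "
              f"projected cases averted {cases_averted:.0f}")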

 

6.2 Monitor short-run uptake and impact

Know how and where you are going

Early reviews, which can be undertaken before the data needed to sustain a full evaluation is available, can be useful in confirming whether or not the initiative is being taken up by the target population. Used appropriately, lead indicators can provide useful information on early results and likely future performance. Such indicators can also identify areas that may require closer scrutiny.

Obtaining client and stakeholder feedback, including through web-based mechanisms, can provide valuable information and insights on the uptake and impact of initiatives and add credibility to any adjustments that may need to be made.
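A minimal sketch of such short-run monitoring follows, assuming cumulative uptake figures are compared against a planned trajectory each month. All numbers, and the ten per cent tolerance, are hypothetical.

    # Hypothetical cumulative figures: actual sign-ups for a new service
    # versus the trajectory planned to reach the target population.
    planned = [1_000, 2_500, 4_500, 7_000, 10_000]
    observed = [950, 2_300, 3_800, 5_200, 6_900]

    TOLERANCE = 0.10  # flag months more than 10 per cent below plan

    for month, (plan, actual) in enumerate(zip(planned, observed), start=1):
        shortfall = (plan - actual) / plan
        status = "REVIEW" if shortfall > TOLERANCE else "on track"
        print(f"Month {month}: {actual:,}/{plan:,} "
              f"({shortfall:.0%} below plan) -> {status}")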

Manage innovation risks: engage with stakeholders, clients and citizens

Maintaining engagement with the minister, program managers, clients, citizens and interest groups will ensure that early reviews and later evaluations are informed by the views and experience of a range of stakeholders. Not only are they able to offer practical insights and different perspectives but they can be sources of new ideas for improvement. A sound consultation strategy will add weight to the credibility of any review or evaluation process.

 

Monitoring impact

The switchover from analogue to digital television broadcasting in Australia will be completed by December 2013, when all viewers will need digital receiving equipment to receive free-to-air broadcast television. The regulatory body, the Australian Communications and Media Authority, has undertaken a transmission and reception study and, using consulting firms, a series of surveys of digital television uptake in Australian households as well as more in-depth research in 120 homes. This research is providing valuable information on digital television uptake overall and on issues consumers may face in switching to digital television.

6.3 Evaluate longer-run outcomes

Review and evaluation methodologies and timeframes need to be determined to best fit the circumstances of the particular initiative. There is no one-size-fits-all methodology as different considerations apply across the policy, program, service delivery and regulatory spectrum. Appropriate methodologies and timeframes also depend on the nature and scale of the innovation.

Evaluation shows whether, and to what extent, objectives are being met

Full evaluations necessarily require a longer timeframe and more comprehensive data than do early reviews. Where considered appropriate, formalised evaluation processes can assist managers and other decision-makers to: assess the continued relevance and priority of an innovation in the light of current circumstances, including government policy changes; test whether the innovation is targeting the desired population; and ascertain whether there are more cost-effective ways of assisting the target group. Evaluations also have the capacity to establish causal links and to enable lessons to be learnt and appropriate adjustments made as early as practicable.

The usefulness of an evaluation depends upon the quality of the evidence on which it is based. It is generally more expensive to gather necessary evidence after the event as part of an evaluation process than to build data collection into the initial initiative design.
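As a simple illustration of how an evaluation can attempt to establish causal links, the sketch below applies a difference-in-differences comparison, one common approach, to invented before-and-after figures for participants and a comparison group. It is a sketch under those assumptions, not a prescribed methodology.

    # Invented average outcome scores, before and after the initiative.
    participants = {"before": 42.0, "after": 55.0}
    comparison = {"before": 41.0, "after": 46.0}

    change_participants = participants["after"] - participants["before"]
    change_comparison = comparison["after"] - comparison["before"]

    # The comparison group's change approximates what would have happened
    # anyway; the remainder is attributed, with caution, to the initiative.
    estimated_effect = change_participants - change_comparison
    print(f"Estimated effect of the initiative: {estimated_effect:+.1f} points")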

Manage innovation risks: involve external participants

Involving external participants in evaluations can provide valuable additional insights and add credibility to the outcomes. Establishment of a stakeholder reference group, bound by confidentiality agreements if needed, can be a useful mechanism to obtain external input. This can extend to sharing of draft findings and recommendations in order to road test future directions.

 

External review

Within three years of establishing the initial six National Research Flagships in 2002–03, the CSIRO commissioned a review of the initiative, chaired by the former Chief Scientist, Dr Robin Batterham. The subsequent positive report not only endorsed and confirmed the value of Flagships but was influential in the Government’s decision to fund a further three Flagships in 2007–08. [See Appendix A.6 for more detail.]

 

6.4 Key lessons

 

Checking and evaluating the efficiency, effectiveness and appropriateness of initiatives is fundamental to successful innovation, and valuable lessons can be learnt from ‘failures’ as well as ‘successes’. Key steps to consider, depending on the circumstances, for the check and evaluate phase are:

  • prepare a tailored evaluation strategy which includes the collection and analysis of appropriately targeted information;
  • monitor short-run uptake and impacts to obtain early indications of the effectiveness of the initiative and whether adjustments are required;
  • ensure data and information are being collected and early trends analysed, including through citizen, client and stakeholder feedback; and
  • evaluate longer-run outcomes based on a sound methodology.

 

