Rutter, Jill (2012) Evidence and evaluation in policy making: A problem of supply or demand? London: Institute for Government.
This report summarises the outputs from a series of four seminars held at the Institute for Government between February and May 2012, organised in collaboration with the Alliance for Useful Evidence and the National Institute of Economic and Social Research. The starting point was the finding in the Institute's 2011 report that, despite years of attempting to make policy more evidence-based, this was still seen as an area of weakness by both ministers and civil servants. The aim of the seminars was to explore the changing nature of the evidence landscape and to examine the barriers, on both the supply and demand sides, to better use of evidence and evaluation in policy making.
Speakers pointed to the changing evidence possibilities. Rigorous experimental techniques, such as randomisation, were now being applied successfully to test insights across a range of policies. There were also opportunities to learn from 'natural experiments' in other places and from past attempts at reform. The opening up of government data created new possibilities for non-government actors to analyse and scrutinise government policy, and the internet was enabling low-cost citizen feedback on services as well as a more rapid means of holding government to account.
Nonetheless, a range of supply and demand barriers were identified as standing in the way of more systematic use of evidence and evaluation. Past reports had focused on the supply side, assuming that this was where the principal blockage lay. On the supply side, the significant remaining barriers were:
• Research was not timely enough in providing answers to relevant policy questions, and some academics found it difficult to engage effectively with the policy process despite their expertise and potential contribution.
• Many of the issues with which government deals are not suited to the most rigorous testing; but even where they were, policies were often designed in a way that did not allow for proper evaluation.
• There was a lack of good, usable data to provide a basis for research both within and outside government. There was also a risk that new forms of feedback might bias policy making compared with more rigorous data, due in part to differential access to feedback mechanisms.
But most of the participants thought that in practice the demand barriers were more significant, and that these affected ministers, civil servants and other public service providers. Underlying this was the view that both the incentives and the culture of these key groups militated against more rigorous use of evidence and evaluation. The key demand barriers identified were:
• Problems with the timeliness and helpfulness of evidence, a mismatch between political timetables and the timelines of evidence producers, and ethical reservations about experimentation.
• The fact that many political decisions were driven by values rather than outcomes, and that sometimes the 'evidence-driven' answer brought significant political risk.
• The lack of culture and skills for using rigorous evidence in the civil service.
• A need to create openness to feedback among other service providers.
Some speakers thought that the Treasury had a potential role to play in changing incentives by linking spending decisions more explicitly to evidence; external scrutineers and local commissioners were other potential new sources of demand pressure. But a number of speakers highlighted the role that external 'evidence institutions' had played in addressing the 'time inconsistencies' policy makers often face. At our last session we heard from the heads of three such institutions: the Dutch Central Planning Bureau, set up in the Netherlands shortly after the Second World War with a remit to evaluate both government and opposition party policies, and the Office for Budget Responsibility and the Education Endowment Foundation, both established by the coalition. Drawing on these examples, a number of design principles for evidence institutions emerged:
• Institutions need independence and credibility to perform this function. One way to establish independence and credibility is through longevity.
• Transparency is a critical part of that reputation building.
• Resourcing models also need to underline that independence.
• They need to be able to access both internal government information and draw on – or create – a robust evidence base.
• They need to be clearly linked into the policy process.
The seminars did not attempt to reach a conclusion. But the discussions suggest there are changes that can be made to the incentives of the players in the system to increase the use of evidence and evaluation, including the creation of external evidence institutions. Real change, however, will come when politicians see evidence and evaluation as ways of helping them entrench policies.