
3) How are evaluations used in evaluation systems? (addressed in articles 2, 3 and 4)

6.2 ANSWERING THE RESEARCH QUESTION


the evaluation management (steering committee) in the Commission. This is because the EU evaluation system is primarily designed to feed information into the EU decision-making procedure every seven years, before the beginning of a new programme cycle. Due to the institutionalisation of evaluation practices fitted to the Commission's activity-based management system, evaluations are used mainly just before important decision points. The limited process use is a consequence of deliberate choices to secure and improve findings use for decision-making, in particular at decision points before programme renegotiation. The loss of process use is thus a direct consequence of systemic factors related to policy-making practices in the EU political system, which influence the Commission's decisions on the implementation of evaluation practices.


management system in the Commission. Hence, the system achieved permanence and all the attributes required for it to meet the definition of an evaluation system.

By analysing the EU evaluation system with a particular focus on the European Commission, the thesis demonstrates how formal structures are introduced to increase other organisations' oversight of the Commission and how evaluation is used to increase accountability within the Commission. Article 2 finds that evaluation is, in fact, institutionalised in the Commission primarily for accountability purposes. The evaluation system is thus set up with the main aim of securing the Commission's legitimacy through accountability. Nevertheless, articles 2, 3 and 4 all show that, despite this aim, there is still room for evaluation use within the framework of the evaluation system's rules and standards.

The three main effects of the evaluation system on evaluation use are: 1) the 'sacrifice' of process use for findings use and accountability at decision points (explained above); 2) a very narrow scope for evaluation use, due to the formal institutionalisation of evaluation; and 3) a de-politicisation of evaluation.

First, and as a consequence of the policy cycle, the possibility of evaluation use during the evaluation process is decreased by the tightly managed and standardised evaluation process and by the stress on evaluator independence, which ultimately secures the legitimacy of the evaluation output and of the Commission.

Process use is also sacrificed as a logical consequence of the fact that programme changes are usually attainable only in the design phase of a new programme (and not during its implementation), at which time the Commission needs credible, trustworthy and independent evaluations to increase its own legitimacy as well as that of the new proposal. In other words, process use as envisaged by, for example, Michael Q. Patton, where a highly interactive and engaging evaluation process brings about significant programme changes already during the evaluation itself, is unlikely to yield more programme changes in the EU context because of the rigidity of the decision-making procedure and the legal constraints under which the Commission works.

Second, evaluation recommendations tend to suggest small procedural programme changes rather than large-scale programme changes that only the EP and Council could decide upon. Evaluation findings and recommendations for programme change are usually neither surprising nor innovative because political and legal structures limit the implementation of recommendations. For example, the terms of reference, the evaluation questions and the Commission staff guide evaluators towards recommendations that are feasible at a managerial level, that do not require a political decision-making procedure and that do not go against the legal structures of the Commission.

Third, the two previous findings imply a de facto de-politicisation of programme evaluations in the EU evaluation system, where evaluation information conforms to the administrative context of programme management in the Commission rather than to the political context of policy-makers. Evaluation does not challenge the policies, because evaluation is institutionalised, formally and informally, in a way that confines its content and outputs to programme-specific recommendations.

As an example, of the three case evaluations in article 4, only one recommendation could not be implemented by the Commission on its own. That recommendation was to increase the budget of the programme in question, which might be why the Commission did not object to it during the implementation of the evaluation.

We have now seen how the evaluation system has several effects on evaluation use, both inside and outside the Commission. The evaluation system limits potential evaluation use, but it also enables or improves some uses at specific times.

In other words, the formal and informal structures of the system create a playing field on which only a certain range of outcomes is possible. At the same time, however, the evaluation system's standards and procedures, as well as staff commitment to and experience with evaluation, also enable evaluation use by securing a usable evaluation product. Evaluations are highly complex and potentially very political. The Commission's standards and procedures for evaluation implementation limit the possible evaluation outcomes, as explained above, but at the same time they increase the chance of a legitimate, sound and relevant evaluation product that is more likely to be used by the Commission.

6.2.2 OTHER FINDINGS

A number of other findings from the four articles are indirectly linked to the research question. First, the articles together show the importance of analysing phenomena such as evaluation and evaluation use in their systemic organisational context. When trying to explain evaluation use, the evaluation literature has focused far more on the evaluation itself than on its context. The main contribution of this thesis is to introduce empirically tested assumptions of organisational institutionalism to the evaluation literature, thereby illustrating that a theory of organisation is better at explaining evaluation uses than evaluation theory. The purpose of the evaluation system is to secure the Commission's accountability. Justificatory use is therefore the most important type of use for the Commission, and this raison d'être can explain why process uses are not made possible in the evaluation system and why findings uses are largely limited to small-scale programme changes.

An important finding of this thesis is that the concept of an 'evaluation system' needs more theoretical depth. If an 'evaluation system' is defined only in terms of its boundedness, units and institutionalisation, then we fail to understand how accountability and organisational effectiveness affect evaluation practices and evaluation use. This thesis shows very clearly how organisational accountability plays an important role in determining how evaluations are used.