
CERTAIN EVALUATION METHODS DEVELOP THEIR OWN LIVES AND ACHIEVE SUCH A DEGREE OF STATUS IN CERTAIN COUNTRIES THAT THEIR APPROPRIATE APPLICATION TO A GIVEN EDUCATIONAL SECTOR IS NOT QUESTIONED.

The rationale here can be found in the concept of path-dependency. This means that organizations and societies sometimes opt for a route that becomes so immersed in the economy, lines of production, societal belief systems and ‘objective knowledge’ that hardly anybody questions the (practical) validity of the ‘path’, even when there are more effective and efficient technologies available. It can be argued that, within the field of evaluation methods, similar path dependence can be found. Power (1999) and Barzelay (1996) have indicated that this has also happened in the field of auditing.

Let us, therefore, turn to the evidence available in the country papers. In contrast to the previous three hypotheses, for which no hard evidence became available, this time we will hopefully find some corroborative evidence.

What is the evidence? I will list two elements.

The first deals with the belief (to be found in most of the organizations) that evaluating self-evaluations of institutions and schools is an adequate route to follow.

The second deals with the belief that carrying out evaluations in which stakeholders play an important role is also an adequate route to follow. The point I would like to make is that, following the theory of institutional isomorphism, educational evaluators, as described in the papers I have used as source material, have started to become more similar because of organizational imitation (DiMaggio and Powell, 1983).

The evidence, part 1: evaluating self-evaluations

All educational evaluators in the seven countries are becoming more and more involved in assessing or evaluating school self-evaluation reports in one way or another. All educational evaluators are also getting involved in auditing the mechanisms schools and universities use in order to assess the quality, efficiency and effectiveness of their own programs and/or institutions. This is clearly an example of isomorphism (Power, 1999). Crucial is that the evaluator first and foremost starts with the evaluative knowledge the organization itself has produced. However, as the criteria the evaluators work with are well known (through their websites, reports, symposia, feedback to schools, laws, debates etc.), these criteria act as incentives to behave according to these criteria and standards. There is serious doubt as to whether, by following this line, the goals of the evaluators will indeed be realized. The main reason for this concern lies in the fact that isomorphism leads to imitating not only the good sides of an approach, but also the unintended and even negative side effects (van Thiel and Leeuw, 2002).

The evidence, part 2: stakeholder involvement

All evaluation agents believe that involving stakeholders in their work is beneficial to the evaluation process, the outcomes and the utilization of the reports, but not all evaluation agents believe in the same intensity and level of activism with regard to stakeholder involvement. Partly, this is a confirmation of hypothesis #4. Partly, though, it is not, because the differences in the practice of involving stakeholders are large. Some inform stakeholders and communicate with them, others involve them in practical work, and some make (large) parts of their norms, methods and criteria partly dependent upon the attitudes and responses of stakeholders. One organization even refers to a ‘shared evaluation responsibility’. One can, therefore, detect the following continuum:

information and communication → involvement → coordination → agreement with norms, criteria & methods → partnerial evaluations.

In line with this continuum, the different countries can be plotted.

In New Zealand, it is indicated that evaluations, therefore, are directed primarily at the institution’s management, academic staff and students, and the evaluation processes are designed to involve those sectors as well as graduates, the professions, business and industry.

In Northern Ireland, evaluations are also provided for individual organisations via, for example, spoken reports to subject departments, senior management teams, and school governors; and to groups such as the employing authorities, parents and the general public by way of written reports of inspections and follow-up inspections.

The same is true for Canada where the issue of ‘stakeholders’ is also addressed.

However, in the field of higher education organizations, such as HAC in Hungary and the VSNU in the Netherlands, the intensity of involving stakeholders is increasing. Several of the stakeholders, for example, participate through delegated members in the work of HAC and VSNU.

In France, the role of stakeholders is conceived in terms of ‘consultation’: ‘Whenever a specific type of evaluation is decided at the national level, the object to be assessed is always very clearly identified and defined after consultation with the relevant bodies interested in the results of the work’. And, with regard to the role CNE plays in evaluating higher education programs, again there is a more active approach to stakeholders: ‘Evaluation is a concerted approach. When evaluations are carried out, there are many exchanges between the establishments and the National Evaluation Committee – concerted reflection on the evaluation methodology and the questionnaire for internal evaluation; discussion on the themes of expertise chosen for evaluation, on-site visits by members of the French CNE and the general secretariat and the sending of experts. The draft report itself is submitted to those in charge of the institution under review, as they are also responsible for validating the data published in the report.

The head of the establishment has the last word; his/her response is published at the end of the evaluation report.’


With regard to the situation in the Netherlands, it should be noted that the many stakeholders play a crucial role in the norms, criteria and methods the evaluators use, based on the frameworks for inspections. These frameworks have to have as much commitment as possible from those who are concerned with the work of the Inspectorate. For this purpose the Inspectorate has to consult with representatives of the educational field and other stakeholders and to take their opinions very seriously. But the Inspectorate remains responsible for the decisions about its own frameworks for inspection. Parliament has created the arrangement that the Senior Chief Inspector has to make decisions regarding the framework, and, following this step, the minister has to approve the proposal and to send it to parliament. This is in order to enable a parliamentary debate between the minister – who is ultimately responsible for the functioning of the Inspectorate – and parliament. This possibility was explicitly opened so that the Inspectorate could not fix criteria, norms and methods without the social approval of a broad representation of interested parties connected with potential evaluation objects. The first frameworks were dealt with according to this procedure in November and December 2002 and are now valid for three years. They contain details about the working methods of the school inspections (see below) and they provide the indicators and criteria for the aspects of quality.

Finally, and closest to the picture of complete stakeholder involvement, is EVA, Denmark. According to the paper: ‘Reflecting the fact that EVA’s evaluation activities cover the whole public education system, the institute has a very large and diverse group of stakeholders at all levels in society. Amongst others, several ministries are involved, educational councils, labour organizations, employer organizations, local and regional governmental organizations, teacher organizations, etc. Prior to each evaluation, the institute conducts a preliminary study that typically involves a stakeholder analysis and dialogue. Some stakeholders are, therefore, involved on a case-to-case basis. Other stakeholders, like the Ministry of Education and the education councils, are permanently involved in the activities of the institute through the institute’s board, as required by law.

The committee of representatives, which comments on EVA’s annual report and the priorities of planned activities, comprises 27 members. The members are appointed by the board on recommendations of organisations from the following sectors: school proprietors, school associations, school boards and employers; rectors’ conferences and school managers; management and labour organisations; teacher organisations and student and pupil organisations’.

EVA also points to the phenomenon now known as partnerial evaluations (Pollitt, 1999): ‘The evaluation model applied by EVA implies a shared evaluator responsibility in the sense that the activities of the area under evaluation are partly evaluated through self-evaluation and partly through analysis of the documentation material by the external evaluation group’.

CONCLUSIONS

Educational evaluations are carried out within the boundaries of societal restrictions and opportunities.

One such boundary concerns the control and ownership relationships within the educational sectors that are the subjects of evaluation. Based on the material in this book, these relationships appear not to differ fundamentally between the seven countries. It was also found that these relationships do not have a large impact on the values within the educational sectors or on the ways in which evaluations are done. Culture was also hypothesized as an important factor, but again, based on the material presented to us in this book, we did not find much evidence that this factor is crucial in determining values and methods. A third hypothesis stems from neo-institutional theory: does the institutional foundation of the educational evaluation institute determine the values and methods? Here we found that there are indeed differences between the institutional positions and the autonomy of the evaluation agencies in the seven countries. To some extent the organizations can even be ranked in terms of their autonomy.


Additionally, if one compares the different institutions vis-à-vis their respective foundations in society, there is, again, no clear pattern to the values attached to evaluations or to the methods used.

The only hypothesis that seems to be corroborated is number 4: certain evaluation methods develop their own lives and achieve such a degree of status in certain countries that their appropriate application to a given educational sector is not questioned. Indeed, there is evidence that a certain ‘methodological’ isomorphism and path-dependency is taking place within this evaluation community. Dialogue-driven or stakeholder evaluations are to be found almost everywhere, unannounced studies almost nowhere; evaluating self-evaluations and auditing quality control mechanisms are increasingly becoming the ‘talk of the day’; and, finally, the predictions of where the offices will be ‘five years from today’ do not present us with a broad spectrum of unexpected or new foci and approaches. It can be argued that educational evaluations, as described by the authors in the country reports, have become ‘institutionalized’. That, however, does not have only positive effects. Sociologists and economists familiar with neo-institutionalism direct attention to unexpected and even undesirable side effects of this development, such as ‘predictability’ of the behaviour of counterparts and evaluands, ‘teaching to the test’ behaviour, tunnel vision and even the performance paradox (Smith, 1995; van Thiel & Leeuw, 2002). To prevent these phenomena from occurring, educational evaluators should invest in ‘reflective practitioners’.

REFERENCES

Barzelay, Michael (1996). Performance auditing and the New Public Management: changing roles and strategies of central audit institutions. In: OECD-PUMA, Performance Auditing and the Modernisation of Government, Paris, pp. 15-57.

David, Paul (1985). Clio and the economics of QWERTY. American Economic Review, 75: 332-337.

DiMaggio, Paul J. and Walter W. Powell (1983). The iron cage revisited: institutional isomorphism and collective rationality in organizational fields. American Sociological Review, 48: 147-160.

Meyer, M.W. and Gupta, V. (1994). The performance paradox. Research in Organizational Behaviour, 16: 309-369.

Power, M. (1999). The Audit Society. Oxford: Oxford University Press.

Shaw, I. et al. (2003). Do Ofsted inspections of secondary schools make a difference to GCSE results? British Educational Research Journal, 29 (1).

Thiel, Sandra van and Frans L. Leeuw (2002). The performance paradox in the public sector. Public Productivity and Management Review, 25 (3): 267-281.

Scott, W.R. (2001). Institutions and Organizations. Thousand Oaks: SAGE.

Smelser, N.J. and R. Swedberg (eds.) (1994). The Handbook of Economic Sociology. Princeton, N.J.: Princeton University Press.

Smith, P. (1995). On the unintended consequences of publishing performance data in the public sector. International Journal of Public Administration, 18: 277-310.
