
Dimensions of and factors affecting implementation

Figure 2.1 Conceptual framework of the knowledge transfer process (Ward et al., 2009, p. 163)

and conditions, (3) developing an implementation plan, (4) receiving training and technical assistance, (5) instituting practitioner–developer collaboration, and (6) evaluating the effectiveness of the programme/activity. The last component in particular comprises seven action steps that are of interest when considering implementation quality:

• Measuring fidelity of implementation (i.e. adherence, integrity)

• Measuring dosage of the innovation

• Measuring quality of delivery of the innovation

• Measuring participant responsiveness

• Measuring degree of programme differentiation

• Measuring programme reach

• Documenting all adaptations that are made to the innovation.

Table 2.1 Quality implementation framework

Phase one: initial considerations regarding the host setting

Assessment strategies
1. Conducting a needs and resources assessment
2. Conducting a fit assessment
3. Conducting a capacity/readiness assessment

Decisions about adaptation
4. Possibility for adaptation

Capacity-building strategies
5. Obtaining explicit buy-in from critical stakeholders and fostering a supportive community/organisational climate
6. Building general/organisational capacity
7. Staff recruitment/maintenance
8. Effective pre-innovation staff training

Phase two: creating a structure for implementation

Structural features for implementation
9. Creating implementation teams
10. Developing an implementation plan

Phase three: ongoing structure once implementation begins

Ongoing implementation support strategies
11. Technical assistance/coaching/supervision
12. Process evaluation
13. Supportive feedback mechanism

Phase four: improving future application
14. Learning from experience

Meyers et al. (2012a), p. 468

Phase one in the quality implementation framework (Meyers et al., 2012a) involves various assessment strategies regarding organisational needs, innovation–organisational fit, and capacity or readiness. Its primary focus is thus on the ecological fit between the host setting and the innovation. There are eight critical steps in this phase, covering the initial steps in implementing evidence-based programmes or activities. Management and leadership have a crucial role in all eight steps. It is in this phase that a supportive climate for implementation and secure buy-in from key leaders and frontline staff should be established.

Phase two focuses on creating a structure for implementation. The critical steps here are ensuring a precise implementation plan and a team of professionals with the qualifications to take responsibility for the actual implementation. Phases one and two are the preliminary preparation for the actual implementation of the programme/activity.

Phase three covers the actual implementation process and consists of three important tasks: the provision of ongoing assistance to frontline professionals, the monitoring of ongoing implementation, and the creation of feedback mechanisms such that involved parties can follow the progression of the process.

Phase four consists of only one critical step – learning from experience. It is at this stage that the implementation process can be modified based on experiences with ineffective and effective strategies and critical self-reflections about one’s own efforts, mistakes and successes. These reflections can improve the quality of the implementation of the programme/activity and in this way ensure sustainability.

In phases three and four it can be wise to include the action steps outlined in the sixth component of the quality implementation tool for evaluating the effectiveness of the programme/activity (Meyers et al., 2012b).

Humphrey et al. (2016) state in their handbook that while implementation is a multidimensional construct, there is general agreement that eight dimensions can be identified within it (ibid., p. 6):

Table 2.2 Dimensions of implementation

Fidelity/adherence: The extent to which implementers adhere to the intended treatment model

Dosage: How much of the intended intervention has been delivered and/or received

Quality: How well different components of an intervention are delivered

Reach: The rate and scope of participation

Responsiveness: The degree to which participants engage in the intervention

Programme differentiation: The extent to which intervention activities can be distinguished from other existing practices

Monitoring of control/comparison groups: Determination of the ‘counterfactual’, i.e. what is taking place in the absence of the intervention

Adaptation: The nature and extent of changes made to the intervention

The handbook also describes five factors that are believed to affect implementation (ibid., p. 7):

Table 2.3 Factors in implementation

Preplanning and foundations: What is the level of need, readiness and capacity for changing the setting where the intervention takes place?

Implementation support system: What strategies and practices are used to support quality implementation?

Implementation environment: What are the influential contextual and compositional characteristics in the setting where the intervention takes place?

Implementer factors: What is the profile of professional characteristics, intervention perceptions and attitudes, and psychological characteristics among implementers?

Intervention characteristics: What form does the intervention take?

Searches in the international databases yielded 10,077 references. After identification of duplicates and screening for relevance, 73 references remained for assessment of weight of evidence, leaving 34 studies for use in the narrative synthesis. The procedures for search, screening, and assessment are described in appendices 2 and 8.

With the assessment complete, it is possible to compare the studies included in the assessment (appendix 4) with those selected for the narrative synthesis (appendix 5). The main difference here is that, out of a total of 24 case studies, only six were assessed as having sufficient weight of evidence to be included in the synthesis, owing to three main deficiencies in the evidence base. First, teachers’ own perceptions of their work were disproportionately frequent as outcome variables; second, the sample sizes were small and often self-selected; and third, a theoretical and empirical foundation was lacking. Many of the excluded studies cover conceptual aspects of knowledge use, which are more difficult to operationalise than knowledge used in clearly defined instrumental programmes or activities.

Of the 34 studies in the narrative synthesis, nineteen are from the United States, five from the Nordic countries (one from Finland and four from Norway), and three from England. One study each from Canada, Ireland, Scotland, New Zealand, and the Netherlands is included.

Finally, two studies cannot be attributed to one specific country, either because they collect data from more than one country or because the study is a systematic review (see appendix 5 for a full characterisation of the studies).