Data extraction
In this phase of the study, we extracted data on the research design (e.g., sample, data collection, and analysis) as well as the characteristics and context of our 57 studies (i.e., year of publication, geographic location, and school level). We also identified descriptions of research use in relation to RPP interventions. For instance, Askins and Schwisow (1988) mentioned cognitive instruction as the basis for a course intended to improve teachers’ teaching, thereby connecting the research source to the work done in the RPP. Data regarding research use are presented in the Results section. A brief description of the context of the 57 studies shows that just over 10% were located in countries other than the USA, namely England (n = 2), Japan (n = 1), and countries in the South Pacific (n = 2). Moreover, about 72% of the studies were published in the last decade (2010–2019), while the oldest were published in the late 1980s. Regarding research design, the majority of the included studies were conducted as case studies with a limited number of teachers and researchers, although there were also a few large-scale studies (Jesson & Spratt, 2017). Data were collected in various forms, for example observations, surveys, and documents, but the majority of studies were based on interview data. The analyses were in most cases qualitative, although some studies included statistical analysis (Wilcox et al., 2017).
Analysis
In our analysis, we chose to focus on research use relating to the intervention of the partnership. The intervention is the part of an RPP that is closest to influencing practice and, optimally, presents opportunities for research use to practitioners. The intervention of the partnership can be described as the structure that is meant to facilitate school improvement, for example the design of a PD programme or a research project meant to inform local practice. In contrast, board meetings and similar structures focused on planning the overall organisation of an RPP are not seen as part of the intervention itself. The analysis of data was conducted in three parts.
First, when we began to analyse what kind of research is used to inform RPP interventions (RQ1), it became apparent that research could inform either the content of the intervention or the methods used to facilitate participants’ interaction with that content. For instance, theories on effective teacher mentoring could be used to inform how the work in a PD intervention is organised (intervention method; Betlem et al., 2019), while strategies of cognitive instruction could inform the intervention content (Askins & Schwisow, 1988). A similar observation has been made in studies of other educational interventions. For example, in a review of the research literature on teacher PD programmes, Kennedy (2016) characterised programmes according to both their content and their methods for facilitating teachers’ engagement with that content.
Second, we analysed what kind of research is used to inform RPP interventions (RQ1) in relation to method and content. The process was highly inductive and followed the structure of open coding, which is frequently used in configurative reviews (Gough et al., 2017). For a study like ours, with few predefined concepts, open coding is useful because it focuses on constructing new theory based on data. In line with open coding processes, our analysis followed an iterative pattern, cycling between reviewing the literature and homing in on RQ1 (Gough et al., 2017). This iterative process served, among other things, to enhance the reliability of the study in terms of stability (cf. Weber, 1990): the analysis generated categories which, after several iterations, were invariant. The iterative process began with creating preliminary categories of research based on the identified passages. These categories were then tested against the identified passages to determine whether they were stable, and this was repeated until the categories were invariant. Further, the reliability of the study was strengthened in terms of reproducibility (i.e., the extent to which different coders produce the same results when coding; Weber, 1990), as the coding done by the first author was continuously reviewed and discussed with the other authors between each cycle of coding. Although the process was inductive, we sometimes borrowed terms (descriptive, explanatory, predictive, and prescriptive) from the framework on research theories (McKenney & Reeves, 2012) to describe our results.
The third part of the analysis focused on what opportunities for research use were presented to practitioners through interventions informed by research (RQ2). In contrast to the earlier parts, the analysis in this part relied heavily on the previously mentioned framework for research use (Weiss & Bucuvalas, 1980), which has been applied in earlier studies of RPPs (e.g., Farrell et al., 2018). However, as we remained open to finding instances of research use that did not fit this framework, our analysis can be described as abductive. The framework (Weiss & Bucuvalas, 1980) divides research use into four categories: instrumental use, conceptual use, symbolic/political use, and process use. We used these categories to classify the opportunities for research use that were presented to practitioners through the RPP intervention. Further, in cases where an RPP presented opportunities for more than one kind of research use, these were classified as primary and secondary opportunities.