Review article

Evidence use in higher education decision-making and policy: a scoping review of empirical studies from 2010 to 2022

Authors
  • Christoph Thiedig (Department Research System and Science Dynamics, German Centre for Higher Education Research and Science Studies (DZHW), Berlin, Germany)
  • Antje Wegner (Department Research System and Science Dynamics, German Centre for Higher Education Research and Science Studies (DZHW), Berlin, Germany)

Abstract

In this article, we review empirical studies on the use of evidence in higher education decision-making and policy from 2010 to 2022. In our scoping review, we identify 77 studies in English or German, of which 69 meet minimum quality standards. We map the current state of knowledge on the use of data and research evidence in higher education and research institutions, and higher education and research policy, using qualitative coding. The results depict a growing, US-dominated research field characterised by a large variety of methodological approaches, influenced by heterogeneous sets of paradigms and shaped by professional publications. We compare studies on evidence use in higher education and research institutions, and higher education and research policy, and find conceptual and empirical differences regarding the studied dimensions of evidence use, the types of evidence taken into account, and factors influencing evidence use. Our review highlights the need for comparative organisational research on evidence use, further research on mechanisms and pathways of evidence use, and a closer linkage between concepts and empirical findings.

Keywords: evidence use, evaluation use, evidence utilisation, decision-making, higher education, higher education policy, scoping review

How to Cite: Thiedig, C. and Wegner, A. (2024) ‘Evidence use in higher education decision-making and policy: a scoping review of empirical studies from 2010 to 2022’. London Review of Education, 22 (1), 36. DOI: https://doi.org/10.14324/LRE.22.1.36.

Rights: 2024, Christoph Thiedig and Antje Wegner.

Published on 06 Nov 2024. Peer reviewed.

Introduction

The factors that promote or hinder the use of evidence in decision-making have long been studied in fields such as public policy, education and healthcare. These efforts have resulted in a substantial body of empirical studies and reviews, as well as conceptual models and frameworks (Johnson et al., 2009; King and Alkin, 2019; Langer et al., 2016; Nutley et al., 2003; Rickinson et al., 2021). Although the studies and frameworks are based on different paradigms, such as evaluation use, evidence-informed practices and policymaking, and data-driven decision-making, they show remarkable consensus on the enablers of and barriers to the transfer and use of evidence. These include, for example, the accessibility, relevance and credibility of evidence, the quality and nature of dissemination, and the resources of evidence producers and recipients (Isett and Hicks, 2020; Johnson et al., 2009; Oliver et al., 2014; Rickinson et al., 2021).

Despite these commonalities, the literature also consistently points to sector-specific peculiarities in practices of evidence use (Nutley et al., 2007; Oliver et al., 2014; Rickinson et al., 2021). A prominent example is the relationship between formal evidence and non-formal evidence, such as professional experience, which is relevant for evidence use in education and other practice-based sectors (Oliver et al., 2014; Rickinson et al., 2021). Contexts of evidence use also differ tremendously, ranging from individual student achievements and learning outcomes to changes in organisational structures and professional practices or research uptake in policy design. In addition, evidence use is to varying degrees embedded in formal and informal relationships between stakeholders and within institutionalised support or evidence ecosystems.

In this article, we contribute to this literature by presenting a scoping review of empirical studies in English and German on factors influencing evidence use in higher education and research institutions (HERI) and higher education and research policy (HERP). To the best of our knowledge, this is the first review of empirical studies on evidence use for these two sectors. We understand evidence to be any kind of systematically generated, more or less objective, and explicit information (Malin et al., 2020), including, but not limited to, survey data, administrative and statistical data, and research results. We account for different notions of evidence use, such as instrumental, conceptual or symbolic use (Weiss, 1979), as well as different stages of use (Knott and Wildavsky, 1980) and influence of use (Kirkhart, 2000).

We distinguish between studies on HERI and HERP because we expect evidence use in these two sectors to follow distinct logics of action. Despite many commonalities between evidence use in policy and practice, such as complex decision-making, the need to translate knowledge between groups with distinct agendas and timelines (Isett and Hicks, 2020; Nutley et al., 2007) and general difficulties in getting evidence used (Boaz et al., 2019), notable differences between both sectors remain. In policy, decision-making processes take place in highly politicised contexts in which decision-makers need to balance the use of (external) evidence with constituent and stakeholder interests in specific policy situations (Nutley et al., 2007; Rickinson et al., 2021). Timely access to persuasive evidence, individual values, beliefs and ideologies, and collaborations with knowledge brokers are important factors that influence evidence use in this sector (Cairney, 2019; Isett and Hicks, 2020; Nutley et al., 2007). In contrast, evidence use in HERI commonly concerns internal organisational matters, relies more often on evidence gathered in-house and needs to reconcile administrative and managerial interests with academic self-governance.

We suggest that this latter characteristic of dual governance and other organisational peculiarities limit the transferability of findings on evidence use from other fields to HERI. In contrast to schools or companies, for example, HERI are considered specific organisations consisting of loosely coupled and autonomous professionals with specific professional standards, often unclear strategic goals and decentralised decision-making (Musselin, 2007; for a detailed discussion, see also Kleimann, 2019). Due to their engagement in both teaching and research, contexts of evidence use in HERI are very diverse. They include, for example, the assessment of teaching quality, the evaluation of research performance, organisational development or more complex governance processes. Hence, lessons learned from studies about evidence-based practices in (school) education might be relevant for facilitating evidence use in teaching, but less so for understanding evidence use in research evaluations. Furthermore, both academics and ‘third space’ professionals (Whitchurch, 2013) in quality management, institutional research and other organisational units are likely to have expertise in producing, collecting and using evidence, as recent studies on competency profiles suggest (Krempkow et al., 2023; Schelske and Thiedig, 2022). Consequently, in HERI, the generation and use of evidence are more often internal to the organisation, handled by the same person or organisational unit, whereas fields such as (school) education or public policy commonly rely on evidence produced by external providers.

In our scoping review, we aim to provide an overview of the type and volume of the existing empirical literature on evidence use in HERI and HERP. The review follows a concept-led, exploratory approach with comparatively broad research questions in a heterogeneous, emerging research area. We map empirical studies and seek to identify focal points and research gaps in the understanding of evidence use to be addressed by future research. We answer the following questions, which we subsume under three topics:

  • Comprehensiveness of literature:

    • RQ1. What is the type and quantity of studies undertaken on evidence use in the higher education and research system?

    • RQ2. How comprehensive and robust are the empirical findings?

  • Content of literature:

    • RQ3. What dimensions and outcomes of evidence use are studied?

    • RQ4. Which types of evidence are considered in the included studies?

    • RQ5. Which factors affect evidence use?

  • Sector comparison:

    • RQ6. How do these factors differ between evidence use in HERI and HERP?

Our study complements existing reviews of evidence use (for example, Isett and Hicks, 2020; Rickinson et al., 2021) in two ways. First, we explicitly focus on factors influencing evidence use in HERI. This sector has not yet been systematically considered in other reviews, and it is frequently subsumed under the umbrella term of education. Second, we consider and map contributions originating from a heterogeneous set of paradigms and strands of literature, ranging from the use of evaluation results, evidence-informed practices and evidence-informed policy to data-driven decision-making. Covering research results across these research paradigms is especially relevant, since recent mappings of national and international higher education research and science studies point to a growing specialisation that could lead to a disintegration of the body of knowledge (Daenekindt and Huisman, 2020). As a consequence, it seems increasingly likely that scholars in this heterogeneous scientific community apply different concepts and methodological approaches to very similar questions and topics, but the results are not mutually acknowledged. In this way, the review can help to (re-)relate concepts and empirical results more closely to each other, and thus avoid inefficiencies and possibly redundant research. Last but not least, it can also help to inform practical efforts to improve evidence use in higher education by providing an overview of factors that enable or constrain this use in the two sectors and identifying critical entry points for interventions.

Methodology

Choice of review design

We used a scoping review design to provide an overview of empirical studies on evidence use in the higher education and research system. This design best suited our aim to systematically assess the type and volume of the available literature for comparatively broad research questions in a heterogeneous and emerging research area. In contrast to more traditional systematic reviews, we are not interested in the effect of a specific intervention using only high-quality studies; rather, we intend to identify all accessible literature, its research focuses and gaps, and subsequently to inform future research and practice. In conducting and reporting the review, we followed the guidance of the PRISMA extension for scoping reviews (Peters et al., 2020a, 2020b). A review team of four members systematically assessed the studies.

Inclusion criteria

We aimed to include all empirical studies on factors influencing evidence use in HERI and HERP published in English or German between 1 January 2010 and 31 December 2022. We chose 2010 as our starting year because we were interested in mapping recent research rather than tracing conceptual and empirical developments over time, and initial database searches suggested a substantial increase in studies in the mid-2010s. We included German-language studies to ensure that we had the largest number of studies accessible to us in our review, and because the scoping review also aimed to inform empirical case studies as part of a research project on evidence use in German higher education institutions. We considered all types of literature, including articles, book chapters, systematic reviews, dissertations and reports, irrespective of peer-review status. Research on evidence use in education or education policy was included only if the study also covered higher education, or if the results were deemed directly applicable to the higher education system by the authors of the study. Studies on evidence-based practices, learning analytics or other advancements in teacher education were included if they analysed the use of evidence in an organisational context. Table 1 summarises our inclusion and exclusion criteria.

Table 1

Inclusion and exclusion criteria 

Criterion Inclusion Exclusion
Publication year 2010–2022 Other year
Language English, German Other language
Country of study All -
Publication type All -
Peer-review status All (reviewed, not reviewed) -
Evidence base Empirical study or review of empirical studies Does not report empirical findings
Sector
  Inclusion (any of the following):
  • Higher education and research
  • Higher education and research policy
  • Education or education policy, if the relevance of findings for higher education is made explicit and evidence is used in an organisational context, that is, beyond the individual level
  Exclusion (any of the following):
  • Education, without relevance for higher education
  • Education, if evidence use is restricted to individuals
  • Further education
  • Policy, if the study does not refer to higher education policy
  • Other sectors
Study content
  Inclusion (all of the following):
  • Study is on evidence use as defined for the review
  • Study reports on (forms of) evidence use or impact
  • Study reports on factors influencing use or impact
  Exclusion (any of the following):
  • Study does not address (factors of) evidence use
  • Evidence use is only a marginal topic in the study

Search strategy

We searched the electronic databases of Scopus and ERIC (Education Resources Information Center). After conducting pilot searches, we rejected two further candidate search engines: Google Scholar lacked features for systematic queries (see also Gusenbauer and Haddaway, 2020), and Bielefeld Academic Search Engine (BASE) offered limited search features, displayed a substantial number of duplicates and lacked relevant sources. By combining an interdisciplinary (Scopus) and a subject-specific (ERIC) database, we ensured broad coverage of the eligible literature. We conducted systematic database queries using a controlled vocabulary consisting of keywords for the relevant research strands, combined with search terms specifying the sector(s):

  • ‘research use’ OR ‘research utilisation’ OR ‘evidence use’ OR ‘evidence utilisation’ OR ‘evidence-informed practice’ OR ‘evidence-based practice’ OR ‘evidence-based policy’ OR ‘evidence-informed policy’ OR ‘evidence in policy’ OR ‘evaluation use’ OR ‘evidence-based decision-making’ OR ‘data-based decision-making’ OR ‘data-informed decision-making’ OR ‘data-driven decision-making’ OR ‘data use’ OR ‘data utilisation’ AND

  • ‘higher education’ OR ‘science policy’ OR ‘research policy’ OR ‘higher education policy’ OR ‘research evaluation’ OR ‘research organisation’ OR ‘research management’ OR ‘university management’ OR ‘university’.

We adapted these search terms for each database. In Scopus, we used wildcards to account for British and American English spelling variants, as well as singular and plural items. In ERIC, we included all these variants in our search terms. Prior to conducting the final database queries, the review strategy was discussed with a bibliometrics expert external to the project team. We conducted the final searches on 31 January 2023. The final queries are published in Wegner and Thiedig (2023).
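As an illustration of how such a two-block query can be assembled, the following sketch joins each term set with OR and combines the two blocks with AND into a single Scopus-style search string. This is a hypothetical reconstruction: the TITLE-ABS-KEY field code, the wildcard placement ('?' for a single character, '*' for several) and the exact phrasing are our assumptions, not the authors' published queries, which are documented in Wegner and Thiedig (2023).

```python
# Hypothetical reconstruction of the two-block boolean query, for illustration
# only. Field code, wildcard placement and phrasing are assumptions, not the
# authors' published queries (see Wegner and Thiedig, 2023).

concept_terms = [
    '"research use"', '"research utili?ation"', '"evidence use"',
    '"evidence utili?ation"', '"evidence-informed practice*"',
    '"evidence-based practice*"', '"evidence-based polic*"',
    '"evidence-informed polic*"', '"evidence in policy"', '"evaluation use"',
    '"evidence-based decision-making"', '"data-based decision-making"',
    '"data-informed decision-making"', '"data-driven decision-making"',
    '"data use"', '"data utili?ation"',
]

sector_terms = [
    '"higher education"', '"science policy"', '"research policy"',
    '"higher education policy"', '"research evaluation"',
    '"research organi?ation*"', '"research management"',
    '"university management"', 'university',
]


def build_query(concepts, sectors):
    """Join each term set with OR and require one hit from each set."""
    concept_block = " OR ".join(concepts)
    sector_block = " OR ".join(sectors)
    return f"TITLE-ABS-KEY(({concept_block}) AND ({sector_block}))"


if __name__ == "__main__":
    print(build_query(concept_terms, sector_terms))
```

For ERIC, which was queried without these wildcards, the spelling and plural variants would instead be listed as separate OR terms, as described above.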

Since German-language publications are under-represented in these bibliographic databases, we conducted an additional manual screening of abstracts in six German-language higher education journals: Beiträge zur Hochschulforschung (https://www.bzh.bayern.de), Das Hochschulwesen (http://www.hochschulwesen.info), die hochschule (http://www.die-hochschule.de), Hochschulmanagement (https://www.universitaetsverlagwebler.de/hm), Qualität in der Wissenschaft (https://www.universitaetsverlagwebler.de/qiw) and Zeitschrift für Hochschulentwicklung (https://www.zfhe.at/). We also added eligible publications (in German and English) known to the review team to the literature corpus.

Screening

To increase consistency, screening of the abstracts was piloted within the review team using a random sample of 30 articles retrieved from the Scopus query. During the actual screening process, studies were assigned to single reviewers due to restricted resources. If inclusion or exclusion was difficult to determine, studies were discussed within the review team. If the abstract alone was not sufficient for determining inclusion or exclusion, the full text was consulted. Some studies were excluded later, during the data extraction phase, on the basis of additional information retrieved from the full texts.

We screened 3,325 records and assessed 124 full texts for eligibility. In total, 77 studies fulfilled the eligibility criteria and were included in the review. The study selection process is summarised in Figure 1.

Figure 1

Flow diagram of the literature search, screening and inclusion 

Data extraction

Data extraction was carried out by coding the full texts of all eligible studies in MAXQDA, a software program for qualitative data analysis, based on a standardised guideline and coding scheme. For the development of coding categories for outcomes of evidence use, as well as barriers and facilitators, we built on core concepts from the literature on evaluation use, evidence-informed practices and evidence-informed policy. Key code categories are described below.

For the purpose of this review, and in line with other studies, evidence includes administrative and survey data, as well as evidence and data from academic research already processed in the form of publications, reports and databases. We distinguish three sources of evidence: survey data; administrative and statistical data; and other types of evidence such as academic research, focus groups or consultations. We further differentiate between internal and external evidence – research suggests that decision-makers often prefer internal data sources, even though the quality might not be as high as that of external research (Hollands and Escueta, 2020; Widany and Gerhards, 2022). Whether the evidence was considered internal or external depended on the study design: for example, administrative data collected in a university would be considered internal by local management staff, but external when used by policymakers.

For coding the dimensions of evidence use that the studies focused on (hereafter termed study outcomes), we accounted for different notions of evidence use that are commonly applied in the literature (Nutley et al., 2007). We identified whether studies focus on one or more of the following dimensions of evidence use:

  • The study outcome was coded as ‘concepts of evidence use’ if the study applied any kind of typology or heuristic addressing qualitative dimensions of evidence use, such as Weiss’s (1979) notion of instrumental, conceptual or symbolic use.

  • The outcome was coded as ‘evidence use’ if the focus was on investigating whether evidence was used at all, regardless of its impact. Following the notion of stage models of evidence use (Gándara, 2019; Nutley et al., 2007), this represents the passive end of the use continuum.

  • We coded the study outcome as ‘influence of evidence use’ if the study focused on the effects resulting from evidence use. Those effects might be intended, unidirectional and instrumental, as well as multidirectional, incremental, unintended and non-instrumental (Alkin and Taut, 2003; Kirkhart, 2000).

The code categories for barriers and facilitators of evidence use build on a synthesis of the findings in Johnson et al. (2009), Isett and Hicks (2020) and Rickinson et al. (2022). We differentiated between five main groups of factors on the levels of evidence (such as relevance, credibility and timing), evidence users (skills, mindset and motivation, relationships), evidence producers, organisation (leadership and organisational culture, infrastructure and resources) and system. These factors were coded as ‘facilitator’, ‘barrier’ or ‘no effect’, depending on the statements and results presented in the studies. Since whether a factor is understood as facilitating or hindering strongly depends on its framing in the specific study context, we only report whether the studies make statements about the influencing factor based on empirical data. In a pilot coding phase, the review team summarised and supplemented the coding system with additional categories. The final coding scheme included codes on article characteristics (such as publication year, language, publication type), study characteristics (among them, research design, sector, country and type of organisation studied) and factors influencing evidence use. Reviewers independently coded the full text based on the coding guideline, discussed the results and further specified the coding guideline where necessary. Uncertain cases were initially discussed with the whole review team. Later, a second, and occasionally a third, reviewer provided commentary.
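As a rough illustration of the structure that results from this scheme, the sketch below represents the five factor groups and the coding of a single statement as simple Python objects. The group and sub-category labels paraphrase the description above, and the record layout and example entry are our own assumptions; this is not the review team's actual MAXQDA codebook.

```python
# Minimal sketch of the coding scheme described above, for illustration only.
# Labels paraphrase the text; the structure is an assumption, not the actual
# MAXQDA codebook used by the review team.
from dataclasses import dataclass
from typing import Literal

FACTOR_GROUPS: dict[str, list[str]] = {
    "evidence": ["relevance", "credibility", "timing", "availability", "communication quality"],
    "evidence users": ["skills", "mindset and motivation", "relationships"],
    "evidence producers": ["capacity", "willingness to disseminate"],
    "organisation": ["leadership and organisational culture", "infrastructure and resources"],
    "system": ["regulatory frameworks", "funding policies", "accreditation requirements"],
}

Direction = Literal["facilitator", "barrier", "no effect"]


@dataclass
class CodedFactor:
    """One coded statement linking a factor to evidence use in a study."""
    study_id: str
    group: str       # one of FACTOR_GROUPS
    factor: str      # sub-category within the group
    direction: Direction


# Hypothetical example of a single coded statement:
example = CodedFactor(
    study_id="Hollands_Escueta_2020",
    group="evidence",
    factor="availability",
    direction="facilitator",
)
```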

Critical appraisal

Following Langer et al. (2016), we conducted a critical appraisal of the individual studies. We assessed studies on two dimensions: the relevance of the study to the review goals (indicated by the analytical breadth and depth of the studied factors influencing evidence use) and the methodological quality of the study (indicated by study design, empirical basis and relationship between data and reported results). For both dimensions, reviewers assessed the studies using a four-point scale of ‘high’, ‘moderate’, ‘low’ and ‘unclear’. Articles coded as ‘unclear’ were discussed within the review team and subsequently re-coded.

Results

Our scoping review includes 77 studies that empirically examine factors of evidence use between 2010 and 2022 (see Table 2).

Table 2

The 77 studies included in the scoping review, in alphabetical order 

Author(s) Date Title
AACRAO* 2017 Use of and Access to Data: Opinions on institutional data practices. Results of the AACRAO-ACE November 2017 60-Second Survey
Armstrong 2016 Data Driven Decision-Making in South Dakota: Effective use of state data systems
Armstrong and Whitfield 2016 Strong Foundations 2016: The state of state postsecondary data systems
Arnold et al. 2019 Informing Improvement: Recommendations for enhancing accreditor data-use to promote student success and equity
Baumann et al.* 2020 ‘Digitalisierung an Hochschulen: Eine Multifallstudie aus Campus Management Perspektive’
Benson and Trower 2012 ‘Data, leadership, and catalyzing culture change’
Bolhuis et al. 2016 ‘Data-based decision-making in teams: Enablers and barriers’
Borch et al. 2022 ‘Student course evaluation documents: Constituting evaluation practice’
Chen 2020 ‘Data-driven decision-making literacy among rural community college leaders in Iowa: The role of leadership competencies’
Chow 2017 ‘How can student learning data at institutional level support decision-making for educational improvement for academic programme? A case study in a Hong Kong university’
Coughlin 2014 Engaging Evidence: How independent colleges and universities use data to improve student learning
Cox et al. 2017 ‘Lip service or actionable insights? Linking student experiences to institutional assessment and data-driven decision-making in higher education’
Dalal 2019 Don’t Stop Improving: Supporting data-driven continuous improvement in college student outcomes
Deom et al.* 2021 ‘Data-driven decisions: Using network analysis to guide campus course offerings’
Dukes 2021 Evidence to Action: A policy perspective from three states
Dunbar et al. 2014 ‘Connecting analytics and curriculum design: Process and outcomes of building a tool to browse data relevant to course designers’
Forbes et al. 2022 ‘Course enhancement conversations: A holistic and collaborative evaluation approach to quality improvement in higher education’
Gándara 2019 ‘Does evidence matter? An analysis of evidence use in performance-funding policy design’
Gläser and Von Stuckrad 2014 ‘Von inaktiv bis kreativ. Der Umgang von Universitäten mit Forschungsevaluationen als Herausforderung für die Organisationssoziologie’
Gläser et al. 2010 ‘Informed authority? The limited use of research evaluation systems for managerial control in universities’
Hammarfelt and Rushforth 2017 ‘Indicators as judgment devices: An empirical study of citizen bibliometrics in research evaluation’
Hartikayanti et al. 2018 ‘Financial management information system: An empirical evidence’
Hartman et al. 2020 ‘University first-time-in-college students’ mathematics placement and outcomes: Leadership response to local data’
Hillebrandt 2020 ‘Keeping one’s shiny Mercedes in the garage: Why higher education quantification never really took off in Germany’
Hollands and Escueta 2020 ‘How research informs educational technology decision-making in higher education: The role of external research versus internal research’
Hora et al. 2014 ‘Exploring data-driven decision-making in the field: How faculty use data and other forms of information to guide instructional decision-making’
Hordósy 2017 ‘How do different stakeholders utilise the same data? The case of school leavers’ and graduates’ information systems in three European countries’
Ion et al. 2019a ‘How does the context of research influence the use of educational research in policy-making and practice?’
Ion et al. 2019b ‘How can researchers facilitate the utilisation of research by policy-makers and practitioners in education?’
Jankowski 2012 ‘Mapping the topography of the evidence use terrain in assessment of U.S. higher education: A multiple case study approach’
Janson 2015 ‘Die Bedeutung von Absolventenstudien für die Hochschulentwicklung. Zusammenfassung einer empirischen Studie’
Kabuye and Basheka 2017 ‘Institutional design and utilisation of evaluation results in Uganda’s public universities: Empirical findings from Kyambogo University’
Kerrigan and Jenkins 2013 ‘A growing culture of evidence? Findings from a survey on data use at achieving the dream colleges in Washington State’
Kezar 2021 ‘Understanding the relationship between organizational identity and capacities for scaled change within higher education intermediary organizations’
Leiber 2017 ‘University governance and rankings: The ambivalent role of rankings for autonomy, accountability and competition’
Madsen 2022 Governing by Numbers and Human Capital in Education Policy Beyond Neoliberalism: Social democratic governance practices in public higher education
Mahroeian and Daniel 2021 ‘Is New Zealand’s higher education sector ready to employ analytics initiatives to enhance its decision-making process?’
Masango et al.* 2020 ‘Design and implementation of a student biographical questionnaire (BQ) online platform for effective student success’
McCarthy et al. 2017 ‘Changing leadership behaviours: A journey towards a data driven culture’
McCaul 2015 ‘Closing the loop: A study of how the National Survey of Student Engagement (NSSE) is used for decision-making and planning in student affairs’
McCoy and Rosenbaum 2019 ‘Uncovering unintended and shadow practices of users of decision support system dashboards in higher education institutions’
Milzow et al. 2019 ‘Understanding the use and usability of research evaluation studies’
Mokher et al. 2020 ‘Exploring institutional change in the context of a statewide developmental education reform in Florida’
Morgan et al.* 2022 ‘Quality assurance, meet quality appreciation: Using appreciative inquiry to define faculty quality standards’
Mountford-Zimdars and Moore 2020 ‘Identifying merit and potential beyond grades: Opportunities and challenges in using contextual data in undergraduate admissions at nine highly selective English universities’
Mukhtar et al.* 2020 ‘The information system development based on knowledge management in higher education institution’
Natow 2020 ‘Research utilization in higher education rulemaking: A multi-case study of research prevalence, sources, and barriers’
Natow 2022 ‘Research use and politics in the federal higher education rulemaking process’
O’Connor 2022 ‘Evidence based education policy in Ireland: Insights from educational researchers’
Osborne 2012 ‘Transforming data into knowledge within higher education’
Parnell et al.* 2018 Institutions’ Use of Data and Analytics for Student Success: Results from a national landscape analysis
Peck and McDonald 2013 ‘Creating “cultures of evidence” in teacher education: Context, policy, and practice in three high-data-use programs’
Peck and McDonald 2014 ‘What is a culture of evidence? How do you get one? And … should you want one?’
Rabovsky 2014 ‘Using data to manage for performance at public universities’
Rickinson and Edwards 2021 ‘The relational features of evidence use’
Rorison and Voight 2016 Leading with Data: How senior institution and system leaders use postsecondary data to promote student success
Rosa et al. 2022 ‘Learning analytics and data ethics in performance data management: A benchlearning exercise involving six European universities’
Sá and Hamlin 2015 ‘Research use capacity in provincial governments’
Schroeder 2012 ‘An exploration of the use of data, analysis and research among college admission professionals in the context of data-driven decision-making’
Selwyn et al. 2018 ‘“You need a system”: Exploring the role of data in the administration of university students and courses’
Senter et al. 2021 ‘Sociology departments and program review: Chair perspectives on process and outcomes’
Seyfried and Pohlenz 2014 ‘Studienverlaufsstatistik als Berichtsinstrument. Eine empirische Betrachtung von Ursachen, Umsetzung und Implementationshindernissen’
Seyfried and Pohlenz 2018 ‘Assessing quality assurance in higher education: Quality managers’ perceptions of effectiveness’
Sima 2017 ‘Evidence in Czech research evaluation policy: Measured and contested’
Sirat and Azman 2014 ‘Malaysia’s National Higher Education Research Institute (IPPTN): Narrowing the research–policy gap in a dynamic higher education system’
Sloan 2015 ‘Data and learning that affords program improvement: A response to the U.S. accountability movement in teacher education’
Smith et al. 2021 ‘Aligning the times: Exploring the convergence of researchers, policy makers and research evidence in higher education policy making’
Taylor 2020 ‘Neoliberal consequence: data-driven decision-making and the subversion of student success efforts’
Terkla et al. 2014 ‘Using data to inform institutional decision-making at Tufts University’
Thorpe and Howlett 2020 ‘Applied and conceptual approaches to evidence-based practice in research and academic libraries’
Torii and Okada 2017 ‘Achieving evidence-based improvement and transparency in higher education: The current status and challenges regarding data utilization and disclosure in Japan’
Tsai et al. 2022 ‘Charting design needs and strategic approaches for academic analytics systems through co-design’
Van Zyl et al. 2020 ‘Effective institutional intervention where it makes the biggest difference to student success: The University of Johannesburg (UJ) Integrated Student Success Initiative (ISSI)’
Wassmer and Probst 2019 ‘Competitive and collaborative aspects of graduate survey use in Switzerland’
Weaver et al.* 2020 ‘Establishing a better approach for evaluating teaching: The TEval Project’
Whitfield et al. 2019 Strong Foundations 2018: The state of state postsecondary data systems
Wolf et al. 2021 ‘Best practices for research analytics & business intelligence within the research domain’
  • Note: Studies marked with an asterisk (*) (n = 8) were excluded from the analysis of factors of evidence use due to their low thematic relevance and limited methodological quality after the critical appraisal.

Type and quantity of studies (RQ1)

The number of studies has grown in recent years: we found 27 studies published between 2020 and 2022, compared to only 7 studies published between 2010 and 2013. Most studies included in the review are academic studies (75 per cent, n = 58). Professional studies, that is, reports and articles written by and for the target audience of higher education practitioners, make up a quarter of the literature (n = 19), often reporting on practical matters such as data-informed quality improvements in HERI or the state of data systems and their use in a specific area. We found no prior review of empirical studies. Most studies are on evidence use in HERI (80 per cent, n = 62). Although they primarily address intra-organisational evidence use, application contexts vary widely, among them faculty development, quality management in teaching and research, research evaluation and change management. Evidence use in higher education policy (on national or state/provincial levels) is studied in 15 per cent of studies (n = 12). Three studies address both sectors. Studies using qualitative methods make up half of the sample (n = 38). A quarter of studies apply quantitative methods exclusively (25 per cent, n = 19). Some use a combination of both (14 per cent, n = 11), occasionally employing an explicit mixed-method design. Empirical studies beyond a single case make up around 80 per cent of the literature (n = 61), while the remainder are single case studies (n = 16).

The results depict a strongly US-dominated research field, with 50 per cent of studies, including almost all professional/practice studies, stemming from the US context (n = 39). Despite our inclusion of German-language studies, only a few studies on evidence use in Germany were ultimately included in our sample (13 per cent of studies, n = 10). Evidence use in Australia, the UK and other countries, such as the Netherlands, Romania and New Zealand, is studied far less often than in the US. Only six studies consider more than one country or are explicitly comparative in nature. Table 3 provides an overview of these study characteristics for all 77 studies included in the review.

Table 3

Characteristics of studies included in scoping review 

Publication year Frequency Percentage
2010–2013 7 9.1
2014–2016 19 24.7
2017–2019 24 31.2
2020–2022 27 35.1
Publication type
Academic 58 75.3
Professional/practice 19 24.7
Sector
HERI 62 80.5
HERP 12 15.6
Both sectors 3 3.9
Methodology
Qualitative 38 49.4
Quantitative 19 24.7
Qualitative and quantitative 11 14.3
Not evident/description of practical case 9 11.7
Evidence level
Empirical study (except single case studies) 61 79.2
Single case study 16 20.8
Review of empirical studies 0 0.0
Countries of study
USA 39 50.6
Germany 10 13.0
Australia 6 7.8
UK 5 6.5
Other countries¹ 26 33.8
  • Notes: n = 77 studies. Multiple countries per study are possible. ¹ Armenia, Austria, Belgium, Bolivia, Canada, Costa Rica, Côte d’Ivoire, Czechia, Denmark, Egypt, Estonia, Finland, France, Hong Kong, Indonesia, Ireland, Italy, Jamaica, Japan, Lebanon, Malaysia, Mexico, the Netherlands, Netherlands Antilles, New Zealand, Norway, Poland, Portugal, Qatar, Romania, Saudi Arabia, Switzerland, Singapore, South Africa, Sweden, Uganda, United Arab Emirates.

Critical appraisal (RQ2)

Table 4 shows the results of the critical appraisal. Overall, 27 per cent of studies (n = 21) were considered to be of high relevance to the review, with 6 of them having high methodological quality as well. Around half of the studies were deemed moderately relevant (n = 37), and a quarter had low relevance to the review (n = 19). Of the latter, 8 studies were also deemed to have low methodological quality. A critical appraisal is optional in a scoping review (Peters et al., 2020a; Tricco et al., 2018). While we aimed to include all relevant studies in our review to get an overview of the volume of the existing literature, we also intended to assess the reliability of the empirical knowledge presented in it. Critical appraisal varied between academic and professional studies. While we assessed both types of contributions to be of similar relevance to the review, we found that professional studies frequently had markedly lower methodological quality than academic ones. For example, they did not provide sufficient information on the research design, the empirical basis, or the reasons for selecting cases and interview partners. Six of the eight studies with low relevance and low methodological quality are professional studies (see Table 4). So as not to overstate the amount and quality of the existing findings, we limit further analyses of study characteristics and factors influencing evidence use in this article to the 69 studies that were rated at least ‘moderate’ on one of the two dimensions.

Table 4

Critical appraisal of studies included in the review 

Methodological quality of study Relevance of study
High Moderate Low
Frequency Percentage Frequency Percentage Frequency Percentage
High 6 28.6 8 21.6 5 26.3
Moderate 9 42.9 22 (5) 59.5 6 31.6
Low 6 (4) 28.6 7 (4) 18.9 8 (6) 42.1
Total 21 (4) 100.1 37 (9) 100.0 19 (6) 100
  • Note: n = 77 studies. The number of professional studies for each cell is indicated in parentheses. Percentages above 100 are due to rounding.
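The retention rule described above (keep a study if it was rated at least ‘moderate’ on relevance or on methodological quality) can be expressed as a simple filter, sketched below. The study records and field names are hypothetical; only the rule itself is taken from the text.

```python
# Sketch of the retention rule: keep a study rated at least 'moderate'
# on relevance OR on methodological quality. Records are hypothetical.
RANK = {"high": 2, "moderate": 1, "low": 0}

appraisals = [
    {"study": "A", "relevance": "high", "quality": "low"},      # kept
    {"study": "B", "relevance": "low", "quality": "moderate"},  # kept
    {"study": "C", "relevance": "low", "quality": "low"},       # excluded
]

retained = [
    s for s in appraisals
    if RANK[s["relevance"]] >= RANK["moderate"] or RANK[s["quality"]] >= RANK["moderate"]
]
print([s["study"] for s in retained])  # ['A', 'B']
```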

Dimensions and outcomes of evidence use (RQ3)

More than half of the studies investigated the outcome ‘evidence use’, thus focusing primarily on the question of whether evidence is used at all and to what extent (54 per cent, n = 37). Around 40 per cent of the contributions (n = 28) focused on the ‘influence’ of evidence use. However, studies rarely made explicit how exactly use or influence was defined and measured. ‘Concepts of use’ were studied the least (see Table 5). Here, studies either referred to established scales and concepts, such as the distinction between conceptual, instrumental and symbolic use, or developed and applied ad hoc classifications. Altogether, the studies cover all outcome types defined in the methodology section (although 10 studies did not contain any precise statement on outcomes). Nevertheless, it is striking that studies – with two exceptions – focus either only on use or only on influence. No studies that look at both outcome dimensions equally were identified.

Table 5

Outcomes of evidence use studied 

Outcome Frequency Percentage
Concepts of use 9 13.0
Use 37 53.6
Influence 28 40.6
Not evident 10 14.5
  • Note: n = 69 studies. Multiple outcomes per publication are possible.

Types of evidence studied (RQ4)

Internal sources of evidence, especially administrative and statistical data as well as survey data, are mentioned most often in the reviewed literature. Internal data almost always relate to students and the improvement of education quality within universities. Only a few studies focus on other types of evidence internal to universities, such as administrative data related to research performance or process data used for improving library services (see Table 6). External administrative and statistical data are addressed in a quarter of studies (n = 18). External survey data barely play a role – only 10 per cent of studies address this type of evidence (n = 7). Other types of external evidence, such as research literature, are mentioned more frequently, especially in studies on HERP.

Table 6

Types of evidence addressed 

Type of evidence Frequency Percentage
Internal survey data 19 27.5
Internal administrative data/statistics 34 49.3
Other internal evidence 11 15.9
External survey data 7 10.1
External administrative data/statistics 18 26.1
Other external evidence 18 26.1
Not evident 9 13.0
  • Note: n = 69 studies. Multiple types of evidence per publication are possible.

Factors influencing evidence use (RQ5)

Figure 2 provides an overview of the share of studies which provide empirical information about factors influencing evidence use.

Figure 2

Percentage of studies which provide empirical information about factors influencing evidence use (n = 69 studies) 

Evidence characteristics are found to influence evidence use in most studies (81 per cent, n = 56). The factors commonly identified in the literature – availability, credibility, communication quality, relevance and timing – also influence evidence use in our sample. These factors often relate to the use of data as a source of evidence, and less often to other forms of evidence, such as academic research or reports. Examples of such factors include access to data as a prerequisite for its use; the role of data quality in establishing credibility; the influence of transparency, accessible presentation and evidence-based storytelling on communication quality; and the importance of timeliness and relevance for evidence use by decision-makers.

Characteristics of evidence producers, often related to the capacity or willingness of actors to produce or disseminate evidence, rarely play a role in the included studies. This is probably because the vast majority of studies deal with the use of internal data sources at HERI.

Characteristics of evidence users, especially skills, mindset and motivation, are found to influence evidence use in 60 per cent of studies (n = 42). Data literacy skills, deliberate use of evidence for answering strategic questions and individual interests of policymakers, university leadership or institutional research staff are examples of these factors. Individual relationships, collaborations and networks are less often studied as enabling or hindering factors, although several studies comment on the involvement of stakeholders in preparation or dissemination of data inquiries at HERI.

Organisational factors also influence evidence use in the majority of studies (73 per cent, n = 50). Both leadership and organisational culture, as well as infrastructure and resources, are often found to be relevant factors. Examples are the role of leadership in fostering an organisational culture of evidence use, and the availability of organisational resources to develop analytical capacities.

System-level factors, such as regulatory frameworks, funding policies or accreditation requirements, are found to influence evidence use in just over a third of the included studies (n = 24).

Sector differences (RQ6)

The literature on evidence use in HERI and in HERP differs regarding the outcome(s) studied and the types of evidence used, as well as regarding factors influencing evidence use. ‘Evidence use’ is the most studied outcome across both sectors. However, studies in HERI investigate the influence of evidence use much more frequently. In these cases, influence often relates to precise organisational goals, such as increasing graduation rates and teacher performance or restructuring departments or programmes, and less frequently to indirect effects in the form of organisational learning. Concepts and qualitative dimensions of use are studied the least, and predominantly in the policy sector, often with explicit reference to the well-established typologies of Weiss (1979) and Knott and Wildavsky (1980). In the higher education literature, these typologies are found mostly in studies on evaluation use, whereas studies of data-driven decision-making in HERI sometimes explicitly resist these conceptions (for example, Jankowski, 2012). Overall, studies on HERI appear to be less explicit and precise in their definition of outcomes, as demonstrated by often weakly elaborated conceptual linkages and the higher number of studies that did not allow us to extract a specific outcome.

Evidence is mostly internal in HERI and mostly external in HERP. Administrative and survey data are addressed more frequently in studies on HERI. Other types of evidence are more frequently mentioned in studies on HERP. In HERP, these predominantly comprise research evidence, but also include information provided by legislative services and the use of examples. In HERI, other types of evidence include research, external rankings and external evaluations (Table 7).

Table 7

Differences in outcomes and types of evidence studied between sectors 

HERI HERP
Outcome Frequency Percentage Frequency Percentage
Concepts of use 4 7.4 4 33.3
Use 25 46.3 9 75.0
Influence 25 46.3 2 16.7
Not evident 9 16.7 1 8.3
Types of evidence
Internal survey data 19 35.2 0 0
Internal administrative data/statistics 31 57.4 2 16.7
Other internal evidence 8 14.8 3 25.0
External survey data 4 7.4 0 0
External administrative data/statistics 12 22.2 3 25.0
Other external evidence 10 18.5 7 58.3
Not evident 7 13.0 2 16.7
  • Notes: n = 54 HERI studies, n = 12 HERP studies. Studies considering both sectors (n = 3) are excluded. Multiple outcomes and types of evidence per study are possible.

Evidence use in HERI and in HERP is influenced by different factors to different extents, pointing to inherent differences in sector logics, processes and timelines. According to our findings, (perceived) credibility and access are evidence characteristics that more commonly influence evidence use in HERI. In HERP, communication quality and especially timing are key. Characteristics of evidence producers, such as (self-) perceptions of academics and their research, are mostly relevant in the policy domain. For evidence users, (data) skills in particular influence evidence use in HERI, whereas the users’ mindsets, motivation and individual relationships play a somewhat larger role in the policy sector. Organisational factors more commonly influence evidence use in HERI. Leadership and organisational culture are particularly important here. Infrastructure and resources, including staff capacity, affect evidence use in both sectors (see Figure 3).

Figure 3

Percentage of studies providing empirical findings about factors of evidence use by sector (n = 66). Percentages are based on the studies for each sector (n = 54 HERI, n = 12 HERP). Studies on both sectors (n = 3) are excluded from the analysis because sectors were determined at document level and factors were not attributed to different sectors within studies.

Summary and conclusion

Evidence use in higher education and research is under-studied. In our scoping review, we aimed to identify and analyse studies of all types in English or German that empirically studied factors influencing evidence use in HERI and HERP between 2010 and 2022. Our aim was to get an overview of the type and volume of the existing literature and to assess which types of evidence are investigated, which kinds of uses and outcomes are studied, and which factors influence these outcomes. In addition, we were interested in the robustness of these findings in order to base our implications for research and practice on the most reliable evidence available.

State of the literature

Our findings, and the substantial number of screened studies, highlight the growing interest in understanding evidence use in higher education and the corresponding need for practical guidance and research. The results depict a US-dominated research field that is considerably shaped by professional (practice) publications. Including the latter in our review has contributed to this dominance: in the US, various professional associations, foundations, institutes and institutional researchers are engaged in understanding, monitoring and improving data use in particular. While findings from US studies on the influence of individual, organisational and evidence characteristics can also inform research on and practices of evidence use in other countries, the specific institutional landscape of the US should be kept in mind when discussing (the transferability of) lessons learned and practical implications.

Overall, the literature is characterised by a variety of conceptual and empirical approaches employed in very heterogeneous use contexts. The most common case of evidence use found in our sample is the use of internal data for improving teaching and learning in HERI. Despite a growing number of studies that describe such data use (for example, in the context of predictive analytics applications), only a small number of studies focus on factors (that is, barriers and facilitators) of evidence use as an actual core subject. While the literature offers a broad overview of potential influence factors on the levels of evidence, users, the organisation and the system, it leaves key questions unanswered. These include how strong the influence of these factors is (in relation to other factors) and what kinds of use or outcome they affect in particular – the reception of evidence, the implementation of decisions and measures based on it, or both. These findings are similar to findings in other fields, where scholars have long pointed out that the type of use is rarely explicitly defined and empirically operationalised (Nutley et al., 2007), and that mechanisms are rarely investigated (Johnson et al., 2009).

Sector differences

We find substantial differences between factors influencing evidence use in HERI and HERP. In HERI, the availability and credibility of evidence, individual (data) skills, and organisational leadership and culture seem to be particularly important factors influencing evidence use. Similar findings are reported elsewhere for school education (Schildkamp, 2019), where data literacy and leadership also play important roles in facilitating data-based decision-making. In HERP, timing and communication quality, the reputation of evidence producers and users’ motivation and relationships are much more prevalent influence factors. In addition, some conceptualisations of evidence use seem to remain largely sector-specific. The notion of concepts of use routinely employed in policy studies, for example, might not be as easily applicable to data-informed decision-making in HERI due to the multifaceted and collective ways in which data is made sense of in organisational decision-making (Jankowski, 2012).

Research gaps

Our findings point to several research gaps. The use of external survey data, a common output of higher education research, and arguably one of particular relevance for practitioners and policymakers, is barely studied in our sample. To what extent competing evidence or the actual content of the evidence influences its use is also rarely addressed. The same is true for the influence of individual relationships and networks. The latter, in particular, is a surprising finding, given that the role of relationships and networks has been studied intensively in recent years across various sectors, and seems to be an important predictor of evidence use (Johnson et al., 2009; Nelson and Campbell, 2019; Nutley et al., 2007).

Limitations

Our review has limitations. First, while our search strategy encompasses various strands of literature, it may not fully represent the study landscape – especially for the policy sector. As Rickinson et al. (2021) point out, evidence use in policy is associated with a broader set of concepts, such as knowledge translation, transfer and exchange. These were not taken into account, as they were deemed irrelevant for the conceptual background of our specific project. Second, studies on evidence use in Germany are likely over-represented in our sample. All of these studies are academic articles, go beyond single case studies and focus on HERI. However, a comparison of the coding results of these studies with the rest of the sample (not reported here) indicated no substantial differences. Thus, their inclusion does not influence our interpretation of the overall review findings. Third, due to resource constraints, full-text studies were coded by only one member of the review team, with a second opinion sought in cases of doubt and a third opinion sought in cases of dissent. While we conducted a test screening of 30 studies and a test coding of two full articles prior to coding the full texts, a more systematic reliability assessment would have been desirable. In addition, coding the influence factors sometimes proved challenging: due to heterogeneous terminology, non-transparent operationalisation and frequently weak links between results and empirical data, it was occasionally difficult to extract results and avoid our own interpretation when coding studies. Furthermore, some studies reported potential influencing factors based on empirical data but did not relate these findings to the degree of use or non-use. In these cases, we refrained from coding factors. For some studies, it remained unclear whether evidence use was not linked to potential factors for methodological or empirical reasons, or whether this link was simply not reported in the publication. Thus, a difference between the quality of reporting and the methodological quality might exist in some cases. Finally, due to our selection of eligible publication years, our review misses earlier work from the long-standing discourse on evaluation use in particular (for a discussion, see Dahler-Larsen, 1998).

Implications for future research

Our review provides several implications for future research. Studies would benefit from making more explicit whether they intend to study the extent of evidence use, qualitative aspects of evidence use, or its influence/impact, and how this is defined and measured. These concepts should ideally be directly linked to the empirical evidence. Addressing these issues is a crucial step towards empirically scrutinising causal mechanisms and pathways in the evidence use chain, from the most passive use to impact, and towards answering research questions raised for almost two decades (Johnson et al., 2009; Mark and Henry, 2004). Further research is also needed on specific influence factors: compared to studies in educational research, the effects of individual relationships and networks in HERI, as well as the cooperation between stakeholders, are rarely examined in detail. Given the ongoing professionalisation in research management, and the growing importance of institutional research, these factors should be addressed more strongly. In addition, which sources of evidence are subject to investigation often appears to be driven by case selection, rather than by a deliberate decision during study design. As an example, the apparent non-use of external survey data in both HERI and HERP merits further study. A more systematic and comparative investigation of sources of evidence would make it possible to derive conclusions about how the choice of sources might affect evidence use, and whether this is determined by the (un)availability of evidence or rather by the explicit comparison and assessment of different sources.

Methodologically, qualitative and especially case study research is highly relevant to this field of study due to its depth and richness of detail. Here, studies should move beyond a reflective, accompanying role in the introduction of organisational measures and programme evaluations. More deliberate and independent comparative studies including a larger number of organisations (or organisational units) are needed to prevent bias resulting from the selection of good practice examples as empirical cases. In doing so, the specific organisational characteristics of HERI, and their effects on the use context, should be reflected and, ideally, varied in a systematic manner. Country-specific factors, such as the regulatory environment or incentive and funding structures, should be noted, and their implications for the generalisability of the findings discussed. At the same time, the current predominance of local or national studies leaves the potential for international comparative analyses and collaboration largely untapped.

Practical implications

For both higher education researchers and practitioners, understanding the interplay between systemic, organisational and situational contexts will benefit their efforts to improve evidence use in higher education. For HERI in particular, this emphasises the role of evidence users and their competencies, resources and networks, as well as the embedding of users and use processes in (inter-)organisational contexts. The latter are also relevant for establishing long-term stakeholder relationships and for developing interventions and cooperation models that involve evidence users and producers and which, in conjunction with more traditional dissemination measures, can help to increase the relevance of evidence in higher education decision-making.

Funding

Funding for this research was provided by the German Federal Ministry of Education and Research (BMBF) under grant numbers 16WIT008A and 16WIT008B.

Acknowledgements

We would like to acknowledge the work of our two review team members, Kerstin Janson and René Krempkow, who contributed by screening and coding studies for the scoping review. Our student assistant Stefanie Deckelmann helped to compile the review results. We are very grateful for the constructive feedback and suggestions from two anonymous reviewers and the editors.

Declarations and conflicts of interest

Research ethics statement

Not applicable to this article.

Consent for publication statement

Not applicable to this article.

Conflicts of interest statement

The authors declare no conflicts of interest with this work. All efforts to sufficiently anonymise the authors during peer review of this article have been made. The authors declare no further conflicts of interest with this article.

References

AACRAO (American Association of Collegiate Registrars and Admissions Officers). (2017).  Use of and Access to Data: Opinions on institutional data practices. Results of the AACRAO-ACE November 2017 60-Second Survey, https://www.aacrao.org/docs/default-source/research-docs/use-of-and-access-to-data-opinions-on-institutional-data-practices---november-60-second-surveyf152e81dab544e4997e846b060be844a.pdf?sfvrsn=b11db7e0_4. Accessed 11 April 2023

Alkin, M.C; Taut, S.M. (2003).  ‘Unbundling evaluation use’.  Studies in Educational Evaluation 29 (1) : 1–12, DOI: http://dx.doi.org/10.1016/S0191-491X(03)90001-0

Armstrong, J. (2016).  Data Driven Decision-Making in South Dakota: Effective use of state data systems, https://sheeo.org/wp-content/uploads/2019/04/SHEEO_Gates_SouthDakota.pdf. Accessed 11 April 2023

Armstrong, J; Whitfield, C. (2016).  Strong Foundations 2016: The state of state postsecondary data systems, https://postsecondarydata.sheeo.org/wp-content/uploads/2018/05/SHEEO_StrongFoundations2016_FINAL.pdf. Accessed 11 April 2023

Arnold, N; Voight, M; Morales, J; Kim, D; Coleman, A. (2019).  Informing Improvement: Recommendations for enhancing accreditor data-use to promote student success and equity, https://www.ihep.org/publication/informing-improvement-recommendations-for-enhancing-accreditor-data-use-to-promote-student-success-and-equity/. Accessed 11 April 2023

Baumann, A; Gottlieb, M; Leitel, J; Jeschke, M. (2020).  ‘Digitalisierung an Hochschulen: Eine Multifallstudie aus Campus Management Perspektive’.  WI2020 Community Tracks: 2–16, DOI: http://dx.doi.org/10.30844/wi_2020_s1-baumann

Benson, R.T; Trower, C.A. (2012).  ‘Data, leadership, and catalyzing culture change’.  Change: The magazine of higher learning 44 (4) : 27–34, DOI: http://dx.doi.org/10.1080/00091383.2012.691862

Boaz, A; Davies, H.T.O; Fraser, A; Nutley, S.M (eds.). (2019).  What Works Now? Evidence-informed policy and practice. Bristol: Policy Press.

Bolhuis, E; Schildkamp, K; Voogt, J. (2016).  ‘Data-based decision-making in teams: Enablers and barriers’.  Educational Research and Evaluation 22 (3–4) : 213–233, DOI: http://dx.doi.org/10.1080/13803611.2016.1247728

Borch, I; Sandvoll, R; Risør, T. (2022).  ‘Student course evaluation documents: Constituting evaluation practice’.  Assessment & Evaluation in Higher Education 47 (2) : 169–182, DOI: http://dx.doi.org/10.1080/02602938.2021.1899130

Cairney, P. (2019).  ‘Evidence and policy making’.  What Works Now? Evidence-informed policy and practice. Boaz, A; Davies, H; Fraser, A; Nutley, S.M (eds.).  Bristol: Policy Press, pp. 21–40, DOI: http://dx.doi.org/10.51952/9781447345527.ch002

Chen, Y. (2020).  ‘Data-driven decision-making literacy among rural community college leaders in Iowa: The role of leadership competencies’.  Community College Journal of Research and Practice 44 (5) : 347–362, DOI: http://dx.doi.org/10.1080/10668926.2019.1592032

Chow, J. (2017).  ‘How can student learning data at institutional level support decision-making for educational improvement for academic programme? A case study in a Hong Kong university’.  Research in Educational Administration and Leadership 2 (2) : 144–169, DOI: http://dx.doi.org/10.30828/real/2017.2.2

Coughlin, M.A. (2014).  Engaging Evidence: How independent colleges and universities use data to improve student learning. (ED561079). ERIC. https://files.eric.ed.gov/fulltext/ED561079.pdf.

Cox, B.E; Reason, R.D; Tobolowsky, B.F; Brower, R.L; Patterson, S; Luczyk, S; Roberts, K. (2017).  ‘Lip service or actionable insights? Linking student experiences to institutional assessment and data-driven decision-making in higher education’.  Journal of Higher Education 88 (6) : 835–862, DOI: http://dx.doi.org/10.1080/00221546.2016.1272320

Daenekindt, S; Huisman, J. (2020).  ‘Mapping the scattered field of research on higher education: A correlated topic model of 17,000 articles, 1991–2018’.  Higher Education 80 (3) : 571–587, DOI: http://dx.doi.org/10.1007/s10734-020-00500-x

Dahler-Larsen, P. (1998).  ‘Beyond non-utilization of evaluations: An institutional perspective’.  Knowledge, Technology & Policy 11 (1–2) : 64–90, DOI: http://dx.doi.org/10.1007/s12130-998-1011-z

Dalal, N. (2019).  Don’t Stop Improving: Supporting data-driven continuous improvement in college student outcomes. (ED598244). ERIC. https://files.eric.ed.gov/fulltext/ED598244.pdf.

Deom, G; Fiorini, S; McConahay, M; Shepard, L; Teague, J. (2021).  ‘Data-driven decisions: Using network analysis to guide campus course offerings’.  College and University 96 (4) : 2–13.

Dukes, D. (2021).  Evidence to Action: A policy perspective from three states. (ED615978). ERIC. https://files.eric.ed.gov/fulltext/ED615978.pdf.

Dunbar, R.L; Dingel, M.J; Prat-Resina, X. (2014).  ‘Connecting analytics and curriculum design: Process and outcomes of building a tool to browse data relevant to course designers’.  Journal of Learning Analytics 1 (3) : 223–243, DOI: http://dx.doi.org/10.18608/jla.2014.13.26

Forbes, M; Murphy, A; Alderman, L. (2022).  ‘Course enhancement conversations: A holistic and collaborative evaluation approach to quality improvement in higher education’.  Evaluation Journal of Australasia 22 (4) : 221–236, DOI: http://dx.doi.org/10.1177/1035719X221120295

Gándara, D. (2019).  ‘Does evidence matter? An analysis of evidence use in performance-funding policy design’.  The Review of Higher Education 42 (3) : 991–1022, DOI: http://dx.doi.org/10.1353/rhe.2019.0027

Gläser, J; Von Stuckrad, T. (2014).  ‘Von inaktiv bis kreativ. Der Umgang von Universitäten mit Forschungsevaluationen als Herausforderung für die Organisationssoziologie’.  Wissensregulierung und Regulierungswissen. Bora, A; Henkel, A; Reinhardt, C (eds.).  Weilerswist: Velbrück Wissenschaft, pp. 41–64.

Gläser, J; Lange, S; Laudel, G; Schimank, U. (2010).  ‘Informed authority? The limited use of research evaluation systems for managerial control in universities’.  Reconfiguring Knowledge Production: Changing authority relationships in the sciences and their consequences for intellectual innovation. Whitley, R; Gläser, J; Engwall, L (eds.).  Oxford: Oxford University Press, pp. 149–183.

Gusenbauer, M; Haddaway, N.R. (2020).  ‘Which academic search systems are suitable for systematic reviews or meta-analyses? Evaluating retrieval qualities of Google Scholar, PubMed, and 26 other resources’.  Research Synthesis Methods 11 (2) : 181–217, DOI: http://dx.doi.org/10.1002/jrsm.1378

Hammarfelt, B; Rushforth, A.D. (2017).  ‘Indicators as judgment devices: An empirical study of citizen bibliometrics in research evaluation’.  Research Evaluation 26 (3) : 169–180, DOI: http://dx.doi.org/10.1093/reseval/rvx018

Hartikayanti, H.N; Bramanti, F.L; Gunardi, A. (2018).  ‘Financial management information system: An empirical evidence’.  European Research Studies Journal 21 (2) : 463–475, DOI: http://dx.doi.org/10.35808/ersj/1015

Hartman, J.J; Janssens, R; Hensberry, K.K.R. (2020).  ‘University first-time-in-college students’ mathematics placement and outcomes: Leadership response to local data’.  Education Leadership Review 21 (1) : 14–27.

Hillebrandt, M. (2020).  ‘Keeping one’s shiny Mercedes in the garage: Why higher education quantification never really took off in Germany’.  Politics and Governance 8 (2) : 48–57, DOI: http://dx.doi.org/10.17645/pag.v8i2.2584

Hollands, F; Escueta, M. (2020).  ‘How research informs educational technology decision-making in higher education: The role of external research versus internal research’.  Educational Technology Research and Development 68 (1) : 163–180, DOI: http://dx.doi.org/10.1007/s11423-019-09678-z

Hora, M.T; Bouwma-Gearhart, J; Park, H.J. (2014).  ‘Exploring data-driven decision-making in the field: How faculty use data and other forms of information to guide instructional decision-making’.  WCER Working Paper No. 2014-3. https://wcer.wisc.edu/docs/working-papers/Working_Paper_No_2014_03.pdf. Accessed 12 August 2024

Hordósy, R. (2017).  ‘How do different stakeholders utilise the same data? The case of school leavers’ and graduates’ information systems in three European countries’.  International Journal of Research and Method in Education 40 (4) : 403–420, DOI: http://dx.doi.org/10.1080/1743727X.2016.1144740

Ion, G; Marin, E; Proteasa, C. (2019a).  ‘How does the context of research influence the use of educational research in policy-making and practice?’.  Educational Research for Policy and Practice 18 (2) : 119–139, DOI: http://dx.doi.org/10.1007/s10671-018-9236-4

Ion, G; Stîngu, M; Marin, E. (2019b).  ‘How can researchers facilitate the utilisation of research by policy-makers and practitioners in education?’.  Research Papers in Education 34 (4) : 483–498, DOI: http://dx.doi.org/10.1080/02671522.2018.1452965

Isett, K.R; Hicks, D. (2020).  ‘Pathways from research into public decision-making: Intermediaries as the third community’.  Perspectives on Public Management and Governance 3 (1) : 45–58, DOI: http://dx.doi.org/10.1093/ppmgov/gvz020

Jankowski, N. (2012).  ‘Mapping the topography of the evidence use terrain in assessment of U.S. higher education: A multiple case study approach’.  PhD thesis. Urbana-Champaign, USA: University of Illinois. https://hdl.handle.net/2142/42316. Accessed 13 September 2023

Janson, K. (2015).  ‘Die Bedeutung von Absolventenstudien für die Hochschulentwicklung. Zusammenfassung einer empirischen Studie’.  Generation Hochschulabschluss: vielfältige Perspektiven auf Studium und Berufseinstieg. Analysen aus der Absolventenforschung. Flöther, C; Krücken, G (eds.).  Münster: Waxmann, pp. 131–150.

Johnson, K; Greenseid, L.O; Toal, S.A; King, J.A; Lawrenz, F; Volkov, B. (2009).  ‘Research on evaluation use: A review of the empirical literature from 1986 to 2005’.  American Journal of Evaluation 30 (3) : 377–410, DOI: http://dx.doi.org/10.1177/1098214009341660

Kabuye, J; Basheka, B.C. (2017).  ‘Institutional design and utilisation of evaluation results in Uganda’s public universities: Empirical findings from Kyambogo University’.  African Evaluation Journal 5 (1) DOI: http://dx.doi.org/10.4102/aej.v5i1.190

Kerrigan, M.R; Jenkins, D. (2013).  ‘A growing culture of evidence? Findings from a survey on data use at Achieving the Dream colleges in Washington State’.  Community College Research Center, Teachers College, Columbia University. https://ccrc.tc.columbia.edu/publications/growing-culture-of-evidence.html. Accessed 11 April 2023

Kezar, A. (2021).  ‘Understanding the relationship between organizational identity and capacities for scaled change within higher education intermediary organizations’.  Review of Higher Education 45 (1) : 31–59, DOI: http://dx.doi.org/10.1353/rhe.2021.0013

King, J.A; Alkin, M.C. (2019).  ‘The centrality of use: Theories of evaluation use and influence and thoughts on the first 50 years of use research’.  American Journal of Evaluation 40 (3) : 431–458, DOI: http://dx.doi.org/10.1177/1098214018796328

Kirkhart, K. (2000).  ‘Reconceptualizing evaluation use: An integrated theory of influence’.  New Directions for Evaluation 88 : 5–23, DOI: http://dx.doi.org/10.1002/ev.1188

Kleimann, B. (2019).  ‘(German) universities as multiple hybrid organizations’.  Higher Education 77 (6) : 1085–1102, DOI: http://dx.doi.org/10.1007/s10734-018-0321-7

Knott, J; Wildavsky, A. (1980).  ‘If dissemination is the solution, what is the problem?’.  Knowledge 1 (4) : 537–578, DOI: http://dx.doi.org/10.1177/107554708000100404

Krempkow, R; Harris-Huemmert, S; Janson, K; Höhle, E; Rathke, J; Hölscher, M (eds.). (2023).  Berufsfeld Wissenschaftsmanagement. Bielefeld: UniversitätsVerlagWebler, DOI: http://dx.doi.org/10.53183/9783946017301

Langer, L; Tripney, J; Gough, D. (2016).  The Science of Using Science: Researching the use of research evidence in decision-making. London: EPPI-Centre, Social Science Research Unit, UCL Institute of Education, University College London. https://eppi.ioe.ac.uk/cms/Portals/0/PDF%20reviews%20and%20summaries/Science%20Technical%20report%202016%20Langer.pdf?ver=2016-04-18-142648-770. Accessed 13 September 2023

Leiber, T. (2017).  ‘University governance and rankings: The ambivalent role of rankings for autonomy, accountability and competition’.  Beiträge zur Hochschulforschung 39 (3–4) : 30–51. https://www.bzh.bayern.de/fileadmin/news_import/3-4-2017-Leiber.pdf. Accessed 6 October 2024

Madsen, M. (2022).  Governing by Numbers and Human Capital in Education Policy Beyond Neoliberalism: Social democratic governance practices in public higher education. Cham: Springer Science and Business Media, DOI: http://dx.doi.org/10.1007/978-3-031-09996-0_5

Mahroeian, H; Daniel, B. (2021).  ‘Is New Zealand’s higher education sector ready to employ analytics initiatives to enhance its decision-making process?’.  International Journal of Artificial Intelligence in Education 31 (4) : 940–979, DOI: http://dx.doi.org/10.1007/s40593-020-00234-y

Malin, J.R; Brown, C; Ion, G; Van Ackeren, I; Bremm, N; Luzmore, R; Flood, J; Rind, G.M. (2020).  ‘World-wide barriers and enablers to achieving evidence-informed practice in education: What can be learnt from Spain, England, the United States, and Germany?’.  Humanities and Social Sciences Communications 7 (1) : 99. DOI: http://dx.doi.org/10.1057/s41599-020-00587-8

Mark, M.M; Henry, G.T. (2004).  ‘The mechanisms and outcomes of evaluation influence’.  Evaluation 10 (1) : 35–57, DOI: http://dx.doi.org/10.1177/1356389004042326

Masango, M; Muloiwa, T; Wagner, F; Pinheiro, G. (2020).  ‘Design and implementation of a student biographical questionnaire (BQ) online platform for effective student success’.  Journal of Student Affairs in Africa 8 (1) : 93–110, DOI: http://dx.doi.org/10.24085/jsaa.v8i1.4184

McCarthy, J; Sammon, D; Murphy, C. (2017).  ‘Changing leadership behaviours: A journey towards a data driven culture’.  Proceedings of the 25th European Conference on Information Systems (ECIS), Guimarães, Portugal, 5–10 June: 2625–2634. https://aisel.aisnet.org/ecis2017_rip/15.

McCaul, J.L. (2015).  ‘Closing the Loop: A study of how the National Survey of Student Engagement (NSSE) is used for decision-making and planning in student affairs’.  PhD thesis. Kalamazoo, USA: Western Michigan University. https://scholarworks.wmich.edu/dissertations/1185/. Accessed 11 April 2023

McCoy, C; Rosenbaum, H. (2019).  ‘Uncovering unintended and shadow practices of users of decision support system dashboards in higher education institutions’.  Journal of the Association for Information Science and Technology 70 (4) : 370–384, DOI: http://dx.doi.org/10.1002/asi.24131

Milzow, K; Reinhardt, A; Söderberg, S; Zinöcker, K. (2019).  ‘Understanding the use and usability of research evaluation studies’.  Research Evaluation 28 (1) : 94–107, DOI: http://dx.doi.org/10.1093/reseval/rvy040

Mokher, C.G; Spencer, H; Park, T.J; Hu, S. (2020).  ‘Exploring institutional change in the context of a statewide developmental education reform in Florida’.  Community College Journal of Research and Practice 44 (5) : 377–390, DOI: http://dx.doi.org/10.1080/10668926.2019.1610672

Morgan, A.M; Jobe, R.L; Konopa, J.K; Downs, L.D. (2022).  ‘Quality assurance, meet quality appreciation: Using appreciative inquiry to define faculty quality standards’.  Higher Learning Research Communications 12 (1) : 98–111, DOI: http://dx.doi.org/10.18870/hlrc.v12i1.1301

Mountford-Zimdars, A; Moore, J. (2020).  ‘Identifying merit and potential beyond grades: Opportunities and challenges in using contextual data in undergraduate admissions at nine highly selective English universities’.  Oxford Review of Education 46 (6) : 752–769, DOI: http://dx.doi.org/10.1080/03054985.2020.1785413

Mukhtar, M; Sudarmi, S; Wahyudi, M; Burmansah, B. (2020).  ‘The information system development based on knowledge management in higher education institution’.  International Journal of Higher Education 9 (3) : 98–108, DOI: http://dx.doi.org/10.5430/ijhe.v9n3p98

Musselin, C. (2007).  ‘Are universities specific organisations?’.  Towards a Multiversity? Universities between global trends and national traditions. Krücken, G; Kosmützky, A; Torka, M (eds.).  Bielefeld: Transcript, pp. 63–84.

Natow, R.S. (2020).  ‘Research utilization in higher education rulemaking: A multi-case study of research prevalence, sources, and barriers’.  Education Policy Analysis Archives 28 (95) DOI: http://dx.doi.org/10.14507/epaa.28.5048

Natow, R.S. (2022).  ‘Research use and politics in the federal higher education rulemaking process’.  Educational Policy 36 (3) : 689–716, DOI: http://dx.doi.org/10.1177/0895904820917363

Nelson, J; Campbell, C. (2019).  ‘Using evidence in education’.  What Works Now? Evidence-informed policy and practice. Boaz, A; Davies, H; Fraser, A; Nutley, S (eds.).  Bristol: Policy Press, pp. 131–150, DOI: http://dx.doi.org/10.51952/9781447345527.ch007

Nutley, S.M; Walter, I; Davies, H.T.O. (2003).  ‘From knowing to doing: A framework for understanding the evidence-into-practice agenda’.  Evaluation 9 (2) : 125–148, DOI: http://dx.doi.org/10.1177/1356389003009002002

Nutley, S.M; Walter, I; Davies, H.T.O. (2007).  Using Evidence: How research can inform public services. Bristol: Policy Press.

O’Connor, J. (2022).  ‘Evidence based education policy in Ireland: Insights from educational researchers’.  Irish Educational Studies 43 (1) : 21–45, DOI: http://dx.doi.org/10.1080/03323315.2021.2021101

Oliver, K; Innvar, S; Lorenc, T; Woodman, J; Thomas, J. (2014).  ‘A systematic review of barriers to and facilitators of the use of evidence by policymakers’.  BMC Health Services Research 14 (1) : 2. DOI: http://dx.doi.org/10.1186/1472-6963-14-2

Osborne, M.E. (2012).  ‘Transforming data into knowledge within higher education’.  EdD Dissertation. Ypsilanti, USA: Eastern Michigan University. https://commons.emich.edu/theses/445/. Accessed 11 April 2023

Parnell, A; Jones, D; Wesaw, A; Brooks, D.C. (2018).  Institutions’ Use of Data and Analytics for Student Success: Results from a national landscape analysis, Educause Research Report. https://library.educause.edu/resources/2018/4/institutions-use-of-data-and-analytics-for-student-success. Accessed 11 April 2023

Peck, C.A; McDonald, M. (2013).  ‘Creating “cultures of evidence” in teacher education: Context, policy, and practice in three high-data-use programs’.  New Educator 9 (1) : 12–28, DOI: http://dx.doi.org/10.1080/1547688X.2013.751312

Peck, C.A; McDonald, M. (2014).  ‘What is a culture of evidence? How do you get one? And … should you want one?’.  Teachers College Record 116 (3) : 1–22, DOI: http://dx.doi.org/10.1177/016146811411600307

Peters, M.D.J; Marnie, C; Tricco, A.C; Pollock, D; Munn, Z; Alexander, L; McInerney, P; Godfrey, C.M; Khalil, H. (2020a).  ‘Updated methodological guidance for the conduct of scoping reviews’.  JBI Evidence Synthesis 18 (10) : 2119–2126, DOI: http://dx.doi.org/10.11124/JBIES-20-00167

Peters, M.D.J; McInerney, P; Munn, Z; Tricco, A.C; Khalil, H. (2020b).  ‘Chapter 11: Scoping reviews’.  JBI Manual for Evidence Synthesis, DOI: http://dx.doi.org/10.46658/JBIMES-20-12

Rabovsky, T.M. (2014).  ‘Using data to manage for performance at public universities’.  Public Administration Review 74 (2) : 260–272, DOI: http://dx.doi.org/10.1111/puar.12185

Rickinson, M; Edwards, A. (2021).  ‘The relational features of evidence use’.  Cambridge Journal of Education 51 (4) : 509–526, DOI: http://dx.doi.org/10.1080/0305764X.2020.1865877

Rickinson, M; Cirkony, C; Walsh, L; Gleeson, J; Salisbury, M; Boaz, A. (2021).  ‘Insights from a cross-sector review on how to conceptualise the quality of use of research evidence’.  Humanities and Social Sciences Communications 8 (141) : 1–12, DOI: http://dx.doi.org/10.1057/s41599-021-00821-x

Rickinson, M; Cirkony, C; Walsh, L; Gleeson, J; Cutler, B; Salisbury, M. (2022).  ‘A framework for understanding the quality of evidence use in education’.  Educational Research 64 (2) : 133–158, DOI: http://dx.doi.org/10.1080/00131881.2022.2054452

Rorison, J; Voight, M. (2016).  Leading with Data: How senior institution and system leaders use postsecondary data to promote student success. (ED570101). ERIC. https://files.eric.ed.gov/fulltext/ED570101.pdf.

Rosa, M.J; Williams, J; Claeys, J; Kane, D; Bruckmann, S; Costa, D; Rafael, J.A. (2022).  ‘Learning analytics and data ethics in performance data management: A benchlearning exercise involving six European universities’.  Quality in Higher Education 28 (1) : 65–81, DOI: http://dx.doi.org/10.1080/13538322.2021.1951455

Sá, C; Hamlin, D. (2015).  ‘Research use capacity in provincial governments’.  Canadian Public Administration 58 (3) : 468–486, DOI: http://dx.doi.org/10.1111/capa.12125

Schelske, S; Thiedig, C. (2022).  ‘Competency requirements in research information management and reporting: Evidence from a national survey in Germany’.  Procedia Computer Science 211 : 141–150, DOI: http://dx.doi.org/10.1016/j.procs.2022.10.186

Schildkamp, K. (2019).  ‘Data-based decision-making for school improvement: Research insights and gaps’.  Educational Research 61 (3) : 257–273, DOI: http://dx.doi.org/10.1080/00131881.2019.1625716

Schroeder, K.A.C. (2012).  ‘An exploration of the use of data, analysis and research among college admission professionals in the context of data-driven decision-making’.  PhD thesis. Lexington, USA: University of Kentucky. https://uknowledge.uky.edu/epe_etds/2/. Accessed 11 April 2023

Selwyn, N; Henderson, M; Chao, S.-H. (2018).  ‘“You need a system”: Exploring the role of data in the administration of university students and courses’.  Journal of Further and Higher Education 42 (1) : 46–56, DOI: http://dx.doi.org/10.1080/0309877X.2016.1206852

Senter, M.S; Ciabattari, T; Amaya, N.V. (2021).  ‘Sociology departments and program review: Chair perspectives on process and outcomes’.  Teaching Sociology 49 (1) : 1–16, DOI: http://dx.doi.org/10.1177/0092055X20970268

Seyfried, M; Pohlenz, P. (2014).  ‘Studienverlaufsstatistik als Berichtsinstrument. Eine empirische Betrachtung von Ursachen, Umsetzung und Implementationshindernissen’.  Beiträge zur Hochschulforschung 36 (3) : 34–51. https://www.bzh.bayern.de/fileadmin/news_import/3-2014-Seyfried-Pohlenz.pdf. Accessed 6 October 2024

Seyfried, M; Pohlenz, P. (2018).  ‘Assessing quality assurance in higher education: Quality managers’ perceptions of effectiveness’.  European Journal of Higher Education 8 (3) : 258–271, DOI: http://dx.doi.org/10.1080/21568235.2018.1474777

Sima, K. (2017).  ‘Evidence in Czech research evaluation policy: Measured and contested’.  Evidence & Policy: A journal of research, debate and practice 13 (1) : 81–95, DOI: http://dx.doi.org/10.1332/174426415X14467432784664

Sirat, M; Azman, N. (2014).  ‘Malaysia’s National Higher Education Research Institute (IPPTN): Narrowing the research–policy gap in a dynamic higher education system’.  Studies in Higher Education 39 (8) : 1451–1462, DOI: http://dx.doi.org/10.1080/03075079.2014.949532

Sloan, T.F. (2015).  ‘Data and learning that affords program improvement: A response to the U.S. accountability movement in teacher education’.  Educational Research for Policy and Practice 14 (3) : 259–271, DOI: http://dx.doi.org/10.1007/s10671-015-9179-y

Smith, K; Fernie, S; Pilcher, N. (2021).  ‘Aligning the times: Exploring the convergence of researchers, policy makers and research evidence in higher education policy making’.  Research in Education 110 (1) : 38–57, DOI: http://dx.doi.org/10.1177/0034523720920677

Taylor, L.D. (2020).  ‘Neoliberal consequence: Data-driven decision-making and the subversion of student success efforts’.  Review of Higher Education 43 (4) : 1069–1097, DOI: http://dx.doi.org/10.1353/rhe.2020.0031

Terkla, D.G; Sharkness, J; Conoscenti, L.M; Butler, C. (2014).  ‘Using data to inform institutional decision-making at Tufts University’.  Using Data to Improve Higher Education: Global perspectives on higher education. Menon, M.E; Terkla, D.G; Gibbs, P (eds.).  Rotterdam: SensePublishers, pp. 39–63, DOI: http://dx.doi.org/10.1007/978-94-6209-794-0_4

Thorpe, C; Howlett, A. (2020).  ‘Applied and conceptual approaches to evidence-based practice in research and academic libraries’.  LIBER Quarterly 30 (1) : 1–17, DOI: http://dx.doi.org/10.18352/lq.10320

Torii, T; Okada, Y. (2017).  ‘Achieving evidence-based improvement and transparency in higher education: The current status and challenges regarding data utilization and disclosure in Japan’.  Higher Education Forum 14 : 35–49, DOI: http://dx.doi.org/10.15027/42951

Tricco, A.C; Lillie, E; Zarin, W; O’Brien, K; Colquhoun, H; Levac, D; Moher, D; Peters, M.D.J; Horsley, T; Weeks, L; Hempel, S; Akl, E.A; Chang, C; McGowan, J; Stewart, L; Hartling, L; Aldcroft, A; Wilson, M.G; Garritty, C; Straus, S. (2018).  ‘PRISMA extension for Scoping Reviews (PRISMA-ScR): Checklist and explanation’.  Annals of Internal Medicine 169 (7) DOI: http://dx.doi.org/10.7326/M18-0850

Tsai, Y.-S; Singh, S; Rakovic, M; Lim, L.-A; Roychoudhury, A; Gasevic, D. (2022).  ‘Charting design needs and strategic approaches for academic analytics systems through co-design’.  LAK22: 12th International Learning Analytics and Knowledge Conference (LAK22). New York: Association for Computing Machinery, pp. 381–391, DOI: http://dx.doi.org/10.1145/3506860.3506939

Van Zyl, A; Dampier, G; Ngwenya, N. (2020).  ‘Effective institutional intervention where it makes the biggest difference to student success: The University of Johannesburg (UJ) Integrated Student Success Initiative (ISSI)’.  Journal of Student Affairs in Africa 8 (2) : 59–71, DOI: http://dx.doi.org/10.24085/jsaa.v8i2.4448

Wassmer, C; Probst, C. (2019).  ‘Competitive and collaborative aspects of graduate survey use in Switzerland’.  Zeitschrift für Hochschulentwicklung 14 (4) : 171–190, DOI: http://dx.doi.org/10.3217/ZFHE-14-04/11

Weaver, G.C; Austin, A.E; Greenhoot, A.F; Finkelstein, N.D. (2020).  ‘Establishing a better approach for evaluating teaching: The TEval Project’.  Change: The magazine of higher learning 52 (3) : 25–31, DOI: http://dx.doi.org/10.1080/00091383.2020.1745575

Wegner, A; Thiedig, C. (2023).  ‘Evidenznutzung an Hochschulen/Forschungseinrichtungen und in der Hochschul- und Forschungspolitik – Protokoll eines Scoping Reviews’.  Zenodo, DOI: http://dx.doi.org/10.5281/ZENODO.10034215

Weiss, C.H. (1979).  ‘The many meanings of research utilization’.  Public Administration Review 39 (5) : 426–431, DOI: http://dx.doi.org/10.2307/3109916

Whitchurch, C. (2013).  Reconstructing Identities in Higher Education: The rise of ‘third space’ professionals. London: Routledge.

Whitfield, C; Armstrong, J; Weeden, D. (2019).  Strong Foundations 2018: The state of state postsecondary data systems. (ED598660). ERIC. https://files.eric.ed.gov/fulltext/ED598660.pdf.

Widany, S; Gerhards, P. (2022).  ‘“Who needs those apples and oranges?” Perception and use of data of continuing education statistics by stakeholders in continuing education policy’.  Zeitschrift für Bildungsforschung 12 (1) : 145–163, DOI: http://dx.doi.org/10.1007/s35834-022-00339-5

Wolf, B; Hall, T; Robershaw, K. (2021).  ‘Best practices for research analytics & business intelligence within the research domain’.  Research Management Review 25 (1) : 1–37.