Editorial

AI and the human in education: Editorial

Authors
  • Sandra Leaton Gray (UCL Institute of Education, UK)
  • Natalia Kucirkova (University of Stavanger, Norway)

How to Cite: Leaton Gray, S., & Kucirkova, N. (2021). AI and the human in education: Editorial. London Review of Education, 19(1). https://doi.org/10.14324/lre.19.1.10

Rights: Copyright © 2021 Leaton Gray and Kucirkova.

Published on 17 Mar 2021

Throughout history, human civilizations have experienced significant changes in how knowledge and understanding are organized, manifested in a number of ways. These have included the spread and democratization of education and literacy through innovations such as the printing press and universal schooling, as well as the introduction of long-distance communication systems ranging from semaphore to today’s mass media. There have also been different ways of curating knowledge in collections and libraries, and of communicating it through differential forms of access. The advent of the computer and the World Wide Web follows in this vein, but these technologies have introduced unprecedented and intensified change in the ways in which we process, produce and share information globally. The current era is frequently described as the Fourth Industrial Revolution, a term popularized by the World Economic Forum to describe the so-called ‘infosphere’ of intense digital information exchange. While philosophical works about the infosphere centre on the shift from computer ethics to information ethics (Floridi, 2014), there is little general awareness of what this information sphere means for everyday life and education. One salient feature of the Fourth Industrial Revolution is the intensive use of artificially intelligent (AI) systems.

The AI systems proliferating in these global information developments are based on adaptive algorithms that make decisions using data provided by individuals and managed by corporate and government organizations. The systems follow arbitrary rules for assigning degrees of probability according to the purpose of each interaction and transaction with an individual user. These adaptation and decision-making processes can be perceived as an opportunity to solve the largest problems faced by humankind in the twenty-first century, or as a threat that could exacerbate climate change, anti-democratic communication or concentrations of media power. In the case of meteorological prediction, for example, AI processes can be revolutionary, allowing humans to reach new heights of environmental awareness. AI systems can also assess risk at a scale impossible for humans, or cut delivery costs in business and public services. They can help investors achieve a competitive edge by increasing production capability (Barton and Thomas, 2009), or improve healthcare practices by providing earlier and more accurate diagnoses. It is this predictive power of AI systems that is so seductive in the field of education.

A particularly attractive prospect for educational researchers is the possibility of predicting the thinking processes of individual learners, and thus identifying their learning difficulties and providing learning solutions that go far beyond the scope of what individual educators might be able to offer. For this reason, commercial organizations have scrambled to develop AI products and services, fuelling the growth of a new market sector. However, there is a danger that such AI is neither truly artificial nor truly intelligent. Indeed, current AI systems rely heavily on the collective insights of numerous human beings to steer computer-based decision making towards a given productive end, and this is very difficult to do without introducing the biases and imperfections of the very human beings who create and run the system.

This forms a central paradox when it comes to the use of AI in education. On the one hand, as humans, we seek a sense of control and agency when deploying autonomous systems; on the other hand, human beings are themselves an example of a highly intelligent assessment tool, capable of a degree of nuance in decision making that is probably impossible to replicate in anything approaching a realistic form using technology.

Indeed, it is hard to imagine a situation in which machine technology could reliably range across even the most straightforward human decision-making frameworks in any depth. For example, the simple act of encountering another human being calls for all five senses in a complex process of probability sampling to assess the other human’s background, mood, health and risk or benefit to our well-being. This takes place in a split second, and it has a long evolutionary history which has allowed for the survival and prospering of our species over millions of years. A teacher will routinely engage in such encounters hundreds of times during the course of a working day. It is this acute ability to make rapid decisions and adjust our conduct and thinking accordingly that provides the basis for a large proportion of current educational practice. At root, it is less about knowledge dissemination or knowledge transfer, and more about promoting human relationships and human endeavour. A concern for humanity, then, needs to lie at the core of the quest for worthwhile AI systems within modern society.

In this special feature, we aim to stimulate discussion and reflection on these complex issues. We were delighted that some of the most prominent thinkers in this area accepted our invitation to contribute to the issue and to share their reflections on the human and ethical questions surrounding the use of AI systems in education and their personalization power.

Ben Williamson starts the thematic exploration by discussing the role of research assemblages of machines and scientific expertise, and the way struggles for power and authority dominate contemporary discourse surrounding ‘precision education’, a term borrowed from medicine. In contextualizing this, he offers a vision of the role of ‘sensory power’ (Isin and Ruppert, 2019) within commercial learning developments, mediated by the new discipline of data science. Through his account, we gain important insights into the ways AI systems in education can co-opt the body for their own digital ends, and he shows how this relates to instances of wider social control, here located within a framework of performativity.

On the theme of personalization, Natalia Kucirkova and Margaret Mackey complement Williamson’s theorization by exploring the role of digital literacies in children’s personalized books, assessing the impact of AI on children’s developing sense of self. Some digital personalized books are designed to adjust the reading level automatically to the needs of the child, and to promote engagement with the reading process. However, as with many untested innovations, there are frequently unintended consequences, as Kucirkova and Mackey show in their analysis. The authors demonstrate how personalization can lead to confusion in the developing sense of self and an associated reduction in personal agency. They also show how AI systems can generate a confusion of conceptual chronotopes, perpetually locating the child socially at the centre of all interaction, but with an identity not of their own choosing, and one which potentially inhibits them from achieving a broader perspective as readers.

Ken Saltman brings a policy perspective to the special theme and explores the cultural politics of introducing AI into publicly funded school systems. He frames such developments in terms of a neo-liberal restructuring of the public domain, linking commercialization with digitization agendas. Through careful analysis of the relationship between different forms of educational ownership and control, he maps where vested interests are able to gain new forms of power within an allegedly neutral system. In this way, he lays bare the ideological roots of education reform initiatives grounded in technological development and their essential anti-intellectualism.

This theme is also explored in Sandra Leaton Gray’s article, which sets out the state of the art in the adoption of AI in schools, locating these developments in a social context, as well as in relation to the commercial monopolization of forms of knowledge. Leaton Gray explores this in relation to children’s data privacy rights, demonstrating that achieving a balance between fairness and the effective use of data is genuinely problematic, and is not usefully supported by current legislation. She proposes a different model, grounded in a form of cooperative democracy that involves stakeholders more broadly in the development of new AI models and structures.

Dimitris Parapadakis’s article highlights the difficulties that follow when key stakeholders misinterpret the outcomes of AI systems because they do not sufficiently understand how those systems work. As outlined earlier in this editorial, one argument frequently put forward in favour of adopting AI systems is that such decision-making models allow for a greater degree of discrimination among the data, giving providers a more nuanced ability to adapt their products and services to the needs of users. However, using the example of the National Student Survey in UK higher education institutions over a decade (2008–17), Parapadakis demonstrates how useful patterns within data sets can be ignored in favour of those that are simpler to understand, and how changing variables such as the name of an institution can skew its student satisfaction ratings in the institution’s favour. He attributes this tendency to a number of causes: a lack of domain knowledge; overfitting (fitting a model so closely to one particular data set that it fails to generalize beyond it); a lack of transparency about the algorithms used to analyse the data; and apophenia (seeing connections between unrelated items, born of the human desire for holism).

In his opinion piece, Michael Reiss presents a more positive view of AI, arguing that it has the potential to enrich student learning by bringing home and school modes of learning closer together. He argues that if care is taken to ensure that learning remains a social act, rather than an isolated one, then it will be possible for AI to promote human flourishing. Importantly, he warns of the dangers of deploying high levels of surveillance in attempting to achieve this aim. Reiss also argues that, while at its best AI presents real opportunities for children (especially those with special educational needs) to be catered for at a high level, there is also a danger of fragmentation in provision, resulting in a widening social gap. This needs to be carefully guarded against.

Mary Richardson and Rose Clesham discuss the impact that such fragmentation might have on issues relating to high-stakes assessment, using the case of A level results in 2020 as an example of where human operators should be involved to ensure fairness and high-quality outcomes. The authors also explore the way in which English as a second language examinations are increasingly deploying AI systems internationally as a mechanism for systematizing diversity, for example by allowing varied forms of English usage in oral assessments. They conclude by arguing for increased assessment literacy to ensure an appropriate future for high-stakes systems.

In the last article of the special feature, Virginia Dignum offers an analysis of what responsible and trustworthy AI might look like and how it might affect education. She starts by laying out four ways in which AI has been viewed: as a computational technology, as a step in a process of digital transformation, as a field of scientific research and as a mysterious autonomous entity with seemingly magical powers. Dignum then explains the importance of understanding AI as a socio-technical system that should be underpinned by the principles of ART (accountability, responsibility and transparency) and supported by effective governance and regulation. She proposes a future for digital education that goes beyond the rhetoric of revolution and instead promotes trust and enlightened engagement with the process, engendering true creativity and collaboration. This approach has the added benefit of avoiding many of the problems described by Parapadakis elsewhere in this feature.

The articles in this special feature provide diverse perspectives on issues related to adjusting to a new order, in which big data are used to determine and direct educational paths, triaging learners accordingly. Some of the articles focus on the more dangerous aspects of this endeavour, while others emphasize the significant societal benefits that are available if we approach the incorporation of AI into educational systems, and the subsequent re-engineering of those systems, with sufficient caution and rigour. What is imperative here is the need to use the time and resources saved by automation to enrich and develop what it means to be truly human. This must include complex and rewarding mechanisms for expressing our most essential human qualities of altruism, vocation and collective social endeavour. Collectively, the authors in this special feature make it clear that without this respect for the human condition, any efforts to use AI to make education ‘better’ will have been in vain.

Notes on the editors

Sandra Leaton Gray is Associate Professor at UCL Institute of Education, UK. She is a former teacher and sociology of education specialist, with a special interest in social and ethical issues surrounding contemporary identity, biometrics, AI and algorithms. Sandra is a member of the Privacy Expert Group of the Biometrics Institute, a senior member of Wolfson College, University of Cambridge, and a member of the advisory board of digitaldefendme, a children’s privacy rights organization.

Natalia Kucirkova is Professor of Early Childhood Education and Development at the University of Stavanger, Norway, and Professor of Reading and Children’s Development at the Open University, UK. Natalia’s research concerns innovative ways of supporting children’s book reading and digital literacy, and exploring the role of personalization in the early years. She co-edits the Bloomsbury book series Children’s Reading and Writing on Screen and the journal Literacy, published by Wiley.

References

Barton, R; Thomas, A. (2009). Implementation of intelligent systems, enabling integration of SMEs to high-value supply chain networks. Engineering Applications of Artificial Intelligence, 22(6): 929–38. DOI: https://doi.org/10.1016/j.engappai.2008.10.016

Floridi, L. (2014). The Fourth Revolution: How the infosphere is reshaping human reality. Oxford: Oxford University Press.

Isin, E; Ruppert, E. (2019). Data’s empires: Postcolonial data politics. In: Bigo, D; Isin, E; Ruppert, E (eds.), Data Politics: Worlds, subjects, rights. London: Routledge, pp. 207–27.