Neil Ballantyne, Open Polytechnic of New Zealand
Abstract
Over the last decade, a growing number of scholarly contributions have discussed the implications of the rise of artificial intelligence (AI) for social work. Several offer critical reflections on the introduction of predictive risk modelling in fields such as child protection. More recently, scholars have started to grapple with the ways in which generative AI might be adopted by social workers. Professional associations are beginning to shape ethical and practice guidance for the use of AI. Technology vendors are promoting applications that promise to automate assessment, report writing and policy analysis. These developments are unfolding amid growing public awareness of AI tools, swift advances in their capabilities and a proliferation of hyperbolic claims about their potential. Drawing on the interdisciplinary field of critical data studies, this entry advances a careful, considered and critical perspective on AI and social work, one that is sensitive to its implications for social justice, climate change, labour exploitation and changes to the nature of social work practice.
Introduction
One of the key differences between the current AI boom and previous technological revolutions is the shape-shifting nature of the phenomenon. While the term artificial intelligence is widely used, it lacks a single, universally accepted definition. This ambiguity arises because AI is an abstract term encompassing a dynamic field of computer science marked by rapidly evolving theories, models, and methods for designing and deploying machines that exhibit human-like cognition across diverse (and sometimes overlapping) application domains, including predictive AI, generative AI, recognition AI, and social robotics. All of these domains have relevance to social work tasks, but this entry will focus on the most discussed applications: predictive and generative AI.
Adopting a critical perspective
Floridi (2024) argued that the current hype around AI “does not negate the profoundly transformative potential of AI technology, but it calls for caution and critical thinking” (p. 128). In the context of social work, critical social work theory offers a relevant corrective to the predominant AI hype. Critical social work draws on critical theory, anti-capitalist analyses of class and labour, feminist critiques of gendered power, and postcolonial perspectives on race and empire. In responding to technological innovations, a critical social work perspective channels the insights of critical data studies: an interdisciplinary sociological sub-field that examines the power relations, social implications, and epistemological assumptions underlying datafication in contemporary society. A critical approach to the application of AI in social work calls for a value-driven inquiry into who stands to benefit, or to be harmed, by such developments, and through what mechanisms of power. These questions can be posed in relation to AI at a societal level or directed at specific implementations within agencies. The following section explores the macro-level technopolitical context for AI developments; the subsequent one will reflect on actual and emerging applications of predictive and generative AI in social work agencies.
AI governance and the sociotechnical imaginary
The concept of sociotechnical imaginaries is a key analytical tool used to explain the ideological dimensions of technological innovation. Sociotechnical imaginaries are visions of desirable futures that are actively disseminated by corporate, governmental and media actors to shape societal expectations (Jasanoff & Kim, 2015). These imaginaries currently present AI as inevitable, transformational and inherently progressive. Sociotechnical imaginaries function to naturalise and depoliticise technological progress. In doing so they conceal the profit-seeking interests of corporations and the austerity-driven policy agendas of governments—both hallmark features of four decades of neoliberal governance.
The current AI boom is occurring in the context of an unprecedented concentration of corporate power with five “big tech” companies dominating the economy and actively lobbying governments for political influence. At the same time, neoliberal governments—engaged in divestment from public services and presiding over deepening social inequalities—embrace AI solutions to social problems that blur the boundaries between corporate and state power. The promotion of predictive AI tools in child protection social work across three national contexts has been directly linked to austerity-driven policy goals (Jørgensen et al., 2022).
The views of powerful, political elites in each nation-state make a material difference to the development of national policies for the promotion, adoption and regulation of AI. States have markedly different approaches. The European Union’s AI Act (2021) adopted a precautionary, risk-based approach, restricting or banning some AI applications such as real-time biometric surveillance in public spaces. In contrast, the current US administration under Trump has a market-driven stance, dismantling previous regulatory efforts and promoting unregulated private-sector innovation. These contrasting approaches are shaped by differing AI imaginaries, with the EU emphasising ethical safeguards and the rights of citizens, while the US prioritises the unregulated use of AI to achieve economic growth, military dominance and technological supremacy.
As calls for AI applications advance, it is imperative that critical social workers, and professional associations, subject the emerging affordances of AI technologies to core social work values, including the principles expressed in the People’s Charter for an Eco-Social World, which support ecological sustainability and oppose labour exploitation (International Federation of Social Workers, 2022). With these commitments in mind, a critical perspective highlights the negative externalities associated with the development of AI. These include the accelerating global demand for data centres, mounting evidence of the potential climate impact of generative AI, and reports on the exploitation of refugees and workers in the global south to label data and moderate toxic content: “One…worker tasked with reading and labeling text for OpenAI…suffered from recurring visions after reading a graphic description of a man having sex with a dog in the presence of a young child” (Perrigo, 2023). In addition, as insistence on AI adoption in social work agencies increases, critical social workers and their trade unions must attend to the ways in which AI might be applied by employers to alter working practices, automate decision-making, shift accountabilities, and routinise or displace jobs.
Actual and emerging AI applications in social work
Since the 2010s, social work agencies have experimented with forms of predictive risk assessment using machine learning on historical datasets to create models that predict the likelihood of negative outcomes such as child maltreatment, teenage pregnancy, and truancy (Keddell, 2019). There is strong evidence that the outputs from these models can amplify biases inherent in the training data and that the opacity of their processes leads to unaccountable and incontestable decisions. Concerted attempts to remedy design issues with technical fixes, such as debiasing the datasets on which the algorithms are trained, are themselves highly problematic (Keddell, 2019). Many such systems were cancelled either before or after their deployment (Redden et al., 2020). The problems with predictive optimisation are so profound that their legitimacy is questioned by leading computer scientists. Yet, and despite the advisability of the precautionary principle in high-stakes settings, advocacy for their use persists. Indeed, predictive optimisation has proliferated “so quickly that it appears to have become a part of the new social order, and hence normalized, to the point where challenging the entire category seems almost unthinkable” (Wang et al., 2024, p. 9:18).
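The bias-amplification mechanism described above can be illustrated with a minimal simulation. In this purely hypothetical sketch, two groups have identical underlying risk, but one is surveilled more heavily, so its cases are reported (and labelled) more often; a model fitted to those labels then predicts markedly higher risk for the more-surveilled group. All group labels, rates and function names are invented for illustration, and the "model" is deliberately reduced to observed positive rates per group.

```python
import random

random.seed(0)

# Hypothetical setup: both groups face the same true risk (10%), but group
# "A" is surveilled twice as heavily as group "B", so A's cases are more
# likely to be reported and hence labelled positive in the training data.
TRUE_RISK = 0.10
REPORT_RATE = {"A": 0.8, "B": 0.4}  # surveillance bias baked into the labels

def make_records(n=20000):
    """Generate (group, reported_label) pairs; labels reflect reporting, not reality."""
    records = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        at_risk = random.random() < TRUE_RISK
        # The label records *reported* maltreatment, filtered by surveillance.
        label = at_risk and (random.random() < REPORT_RATE[group])
        records.append((group, label))
    return records

def fit_rate_model(records):
    """A minimal 'model': predicted risk = observed positive rate per group."""
    counts, positives = {}, {}
    for group, label in records:
        counts[group] = counts.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + int(label)
    return {g: positives[g] / counts[g] for g in counts}

model = fit_rate_model(make_records())
print(model)  # group A's predicted risk is roughly double group B's
```

Although both groups were constructed with identical true risk, the fitted rates diverge by about a factor of two, because the model can only learn the reporting pattern it was given. This is why "debiasing" is so fraught: the bias lives in how the labels were produced, not in any step a downstream technical fix can easily see.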
While the debate surrounding predictive risk modelling continues, the more recent push to advance AI applications in public services was triggered by the public release of OpenAI’s ChatGPT large language model (LLM) in November 2022. Since then, public awareness of the capabilities of LLMs has grown rapidly, sparking strong interest in their potential as professional tools for a wide range of natural language tasks, including audio transcription, case note generation, letter writing, assessment, and policy analysis. Early reports suggest that some applications significantly reduce time spent on administrative tasks (Social Work Today, 2025). However, in the context of austerity-driven public services, such efficiencies can result in reduced staffing levels or accelerated work processes. In addition, it is well documented that LLMs can produce outputs that, while plausible and articulate, may be biased and stereotypical (reflecting the biases inherent in the training data), or grounded in false facts or erroneous inferences (commonly referred to as hallucinations). Although LLM designers have intervened to moderate bias and minimise their tendency to hallucinate, such flaws have not been, and may never be, fully eliminated.
That fact has not prevented system vendors from promoting the use of generative AI systems in high-stakes social work settings. The inherent risk is mitigated by advising that outputs should be reviewed, edited and signed off by a human-in-the-loop. This shifts the risk to social work staff, who may fall foul of automation bias: a well-known human–computer interaction phenomenon in which human operators “fall asleep at the wheel”. Social work professionals can become the “moral crumple zone” (Elish, 2019) for flawed systems, held accountable for system failures despite their limited control over system behaviour. These concerns point to the need for a cautious, critically informed and precautionary perspective on the introduction of AI tools into social work practice, one grounded in an awareness of their limitations and risks and coupled with a willingness to resist their introduction or to abolish operational systems.
Conclusion
Technological progress in the field of AI is advancing rapidly, but it is acutely difficult to distinguish the real capabilities of this emerging technology from the hyperbolic claims of interested parties. There are widely diverging opinions—even amongst industry experts—about the potential, risks and likely social impacts of AI. A critical social work perspective is alert to the sociotechnical imaginaries advanced by powerful corporate and governmental elites. Critical social work theory channels critical data studies, raising awareness of the environmental harms and exploitative labour practices of the AI industry, the disproportionate impact of AI technologies on historically marginalised social groups, and the need for precautionary regulation. Finally, critical social workers should interrogate proposed AI applications in social work agencies, paying close attention to how they alter social work practices and might disrupt the relational nature of social work.
This entry is currently in pre-publication review. The final version will be available later this year in the Elgar Encyclopedia of Social Work, edited by C. Fouche and L. Beddoe, published in 2025 by Edward Elgar Publishing Ltd.
References
Elish, M. C. (2019). Moral crumple zones: Cautionary tales in human-robot interaction. Engaging Science, Technology and Society, 5, 40–60. https://doi.org/10.2139/ssrn.2757236
Floridi, L. (2024). Why the AI hype is another tech bubble. Philosophy & Technology, 37(4), 128. https://doi.org/10.1007/s13347-024-00817-w
International Federation of Social Workers. (2022, May 24). The role of social workers in advancing a new eco-social world. https://www.ifsw.org/the-role-of-social-workers-in-advancing-a-new-eco-social-world/
Jasanoff, S., & Kim, S. H. (2015). Dreamscapes of modernity: Sociotechnical imaginaries and the fabrication of power. University of Chicago Press.
Jørgensen, A. M., Webb, C., Keddell, E., & Ballantyne, N. (2022). Three roads to Rome? Comparative policy analysis of predictive tools in child protection services in Aotearoa New Zealand, England and Denmark. Nordic Social Work Research, 12(3), 379–391. https://doi.org/10.1080/2156857X.2021.1999846
Keddell, E. (2019). Algorithmic justice in child protection: Statistical fairness, social justice and the implications for practice. Social Sciences, 8(10), 281. https://doi.org/10.3390/socsci8100281
Perrigo, B. (2023, January 18). The $2 per hour workers who made ChatGPT safer. TIME. https://time.com/6247678/openai-chatgpt-kenya-workers/
Redden, J., Brand, J., Sander, I., & Warne, H. (2020). Automating public services: Learning from cancelled systems. Carnegie UK Trust, the Data Justice Lab and Western FIMS. https://carnegieuktrust.org.uk/publications/automating-public-services-learning-from-cancelled-systems/
Social Work Today. (2025, February 11). Regulator commissions AI in social work research as tool claims to halve paperwork. https://www.socialworktoday.co.uk/News/regulator-commissions-ai-in-social-work-research-as-tool-claims-to-halve-paperwork
Wang, A., Kapoor, S., Barocas, S., & Narayanan, A. (2024). Against predictive optimization: On the legitimacy of decision-making algorithms that optimize predictive accuracy. ACM Journal of Responsible Computing, 1(1), 9:1–9:45. https://doi.org/10.1145/3636509
Xiang, C. (2023, January 18). OpenAI used Kenyan workers making $2 an hour to filter traumatic content from ChatGPT. VICE. https://www.vice.com/en/article/openai-used-kenyan-workers-making-dollar2-an-hour-to-filter-traumatic-content-from-chatgpt/
