Social work and the value of human relations in an age of AI



Submission to The Social Lens: A Social Work Action Blog by John Richmond, SOWK Alumnus.

Could robots take better care of people than we do?

While my feminist, socialist grandmother was languishing in an upscale private, for-profit nursing home in West Vancouver many years ago, I was discovering the fascinating world of artificial intelligence, philosophy of mind, metaphysics and epistemology at UBC. It wasn’t long before I began to wonder if we would eventually have robot caregivers – and now we do.

The nursing home where my grandmother lived looked nice from the outside – don’t they all? The long-term care home – now public, thanks to the BC government – had all the trappings of a facility catering to upscale, demanding residents. But the reality behind the scenes was something else entirely, according to Grandma.

My grandmother, Helen Armour, was a complex person – aren’t we all? A mother, a teacher and the daughter of Frank McKenna, a carpenter who migrated from Manchester to Canada at the end of the 19th century. My grandmother’s view of the world was heavily influenced by “Brother” Frank, a socialist and trade union activist who worked his way across the country, ending up in BC working on the railways. Brother Frank was eventually fired for protesting the poor working conditions and the exploitation of Chinese migrant labour.

My grandmother had a sophisticated critique of what was wrong with her nursing home, starting with the owners’ for-profit motive and extending to the way the staff (mis)treated the residents, particularly those who didn’t have someone to speak up for them.

My grandmother’s experiences in her long-term care facility motivated me – in part – to go to social work school once I had completed my philosophy degree. But my philosophy education remained with me well into my social work career.

I often wondered: what if caregiving robots could be made to look and sound human? What if caregiving robots had endless energy and – at least apparent – empathy? How would LTC residents, or people living with mental health challenges or neurocognitive differences, respond to this kind of help? These are no longer hypothetical questions.

Knowing what I know from my work in elder care, I strongly suspect my grandmother (and her sister) suffered from a mild vascular dementia at the end of their long lives (secondary to smoking). I frequently wonder whether AI might be better suited to providing the quiet, pleasant, low-stimulus environments we know benefit not just those with dementia but those with delirium as well. When I raise this idea with my health care colleagues, I encounter two types of responses: younger people are open and enthusiastic, while older folks tend to react with horror. I don’t think this is a question of which generation is more likely to have seen 2001: A Space Odyssey or Terminator.

As is often the case, younger people seem more excited about and more welcoming of the prospect of AI helpers – along with my clients in rehab medicine, who look forward to the kinds of AI exoskeletons being developed at UBC and GF Strong, and who already have an improved quality of life thanks to Alexa and Siri.

Several of my clients tell me they would prefer a friendly robot helper to the kinds of “care”givers who come to their homes to help with meal prep, blood sugar checks and light housekeeping.

Many people are already benefiting from mindfulness apps and chatbots, available at any hour of the night when you can’t sleep and are feeling anxious.*

The future is here and at least some people seem to be OK with seeking solace from a machine.

In case you’ve missed it, AI has evolved from the days of HAL in 2001: A Space Odyssey. Now when we say “AI” we are often not referring to the Terminator, but rather to the less threatening concept of “machine learning” – training computers on huge data sets. When I was in philosophy at UBC, AI referred to conscious, aware machines. But even back then, in the 1990s, I could see that the development of neural networks and parallel distributed processing was not necessarily going to lead to what we now call “strong AI” – machine consciousness – for reasons I won’t go into here, because this is a social work blog.
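
For readers who have never seen “training” up close, here is a toy sketch in Python of what it means for a machine to learn from data rather than follow hand-written rules. It is purely illustrative (the numbers are invented); real systems do the same thing with billions of parameters and enormous data sets.

```python
# Toy illustration of "machine learning": fit the line y = w*x + b to a few
# examples by repeatedly nudging w and b to shrink the prediction error.
# (Purely illustrative; the data points below are made up.)
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1)]  # (input, output) pairs

w, b = 0.0, 0.0          # the machine's "knowledge" starts out empty
learning_rate = 0.01

for epoch in range(2000):
    for x, y in data:
        error = (w * x + b) - y          # how wrong is the current guess?
        w -= learning_rate * error * x   # nudge the parameters to be less wrong
        b -= learning_rate * error

print(f"learned: y is roughly {w:.2f} * x + {b:.2f}")  # close to y = 2x
```

The program is never told the rule “double the input”; it arrives at it by repeatedly shrinking its own errors. Scaled up enormously, that is all “training on huge data sets” means.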

But a quick overview of the field is important for social workers, social work students and faculty to understand where we might be headed – and how social work as a field can prepare for the future.

A quick tour of AI and philosophy of mind

Ada Lovelace was a brilliant female mathematician conveniently “forgotten” by male historians. She may well have been the first computer scientist. More than 100 years before the first computer was built, Lovelace suggested that mathematical calculations could be automated using an algorithm and that this automation could theoretically take the form of a machine.

In the 1930s, Alan Turing built on this idea to propose a computational model of machine processing – the automated manipulation of symbols and inputs to produce outputs: for example, solving mathematical formulas or, during WWII, deciphering encrypted messages used by the German military.

Turing and others wondered what would happen if a machine could manipulate inputs to produce the same outputs – behaviour – as the brain does. This idea came to be called functionalism: the input “What is the best Taylor Swift album?” produces the output “Midnights.”

If a machine produced the same or similar results as a human, would people think the machine was human? Would the machine be conscious? Was that the definition of artificial intelligence? I don’t know about you, but I do think Midnights might qualify as Swift’s best work.

Philosopher John Searle of UC Berkeley dealt a stunning blow to AI early on, arguing in his 1980 paper “Minds, Brains, and Programs” that the manipulation of symbols to produce intelligible outputs does not demonstrate understanding, awareness or consciousness. So you might think a chatbot understands you when it says “Tell me more about your mother – it sounds like you really miss her” at 2am, but really the computer is just manipulating inputs using an algorithm to produce a reliable output. Today, what we call AI is mostly generative language models such as ChatGPT – machines that can “learn,” that is, be trained on massive amounts of data, such as therapy transcripts, to behave, reliably, like an empathetic social worker – or, as in my own case, to help diagnose a dissected artery and blood clot in my brain.
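
To make Searle’s point concrete, here is a minimal sketch of an ELIZA-style responder in Python (the rules and wording are my own invention, not any real product’s). It produces that reassuring 2am reply by pattern-matching alone, understanding nothing:

```python
import re

def swap_pronouns(text: str) -> str:
    """Crudely swap first person for second person, as the 1966 ELIZA did."""
    swaps = {"my": "your", "i": "you", "me": "you", "am": "are"}
    return " ".join(swaps.get(word.lower(), word) for word in text.split())

# Each rule is a regex pattern plus a canned response template.
RULES = [
    (re.compile(r"\bI miss (.+)", re.IGNORECASE),
     "Tell me more about {x}. It sounds like you really miss {x}."),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE),
     "Why do you think you feel {x}?"),
]

def respond(message: str) -> str:
    """Return an 'empathetic' reply by shuffling the user's own words into
    a template -- pure symbol manipulation, with no understanding."""
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(x=swap_pronouns(match.group(1).rstrip(".!?")))
    return "Please, go on."  # default when no pattern matches

print(respond("I miss my mother."))
# -> Tell me more about your mother. It sounds like you really miss your mother.
```

On Searle’s view, scaling this up – even to a model as fluent as ChatGPT – improves the quality of the mimicry without adding any understanding.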

From a social work perspective, Turing might have been right after all. Famously, a Google engineer, Blake Lemoine, recently claimed that LaMDA, a large language model he was interacting with, was a “sentient” being – you can decide for yourself: here is the transcript.

AI language models will soon be used everywhere, and if you want to use rather than be used by them, you will require a whole new skill set called prompting, or prompt engineering. I’m taking a course in it and I encourage everyone reading this to do the same. (It’s fair to mention the dissenters in AI who believe something else will come along to replace prompt engineering, but for anyone with an interest in philosophy of language, prompting is fascinating, enjoyable stuff.) AI is already a part of our daily lives, suggesting to me which new Knowledge Network program I might like to watch, but also helping with surgery and planning health care system treatment and discharge algorithms. Never mind de-tasking the police; we are de-tasking health care professionals without much of a public or professional debate.
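
To give a flavour of what the skill involves, here is a hypothetical before-and-after in Python. The client scenario, the constraints and the model name are invented for illustration; the openai package shown is one common way to call such a model, not an endorsement:

```python
from openai import OpenAI  # pip install openai; assumes OPENAI_API_KEY is set

client = OpenAI()

# A naive prompt leaves everything to the model:
naive_prompt = "Help my client."

# An engineered prompt supplies role, context, task, constraints and format.
# (The scenario and wording here are invented, purely for illustration.)
engineered_prompt = (
    "You are assisting a registered social worker. "
    "Context: an older adult in long-term care with mild vascular dementia. "
    "Task: suggest three low-stimulus evening activities. "
    "Constraints: no medical advice; flag anything needing a human professional. "
    "Format: a numbered list, one sentence of rationale per item."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever model you have access to
    messages=[{"role": "user", "content": engineered_prompt}],
)
print(response.choices[0].message.content)
```

The difference between the two prompts – role, context, constraints, format – is most of what a prompting course teaches.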

What is the social work response to AI?

Having followed the field closely for years, I believe that, given the power and nature of quantum computing, conscious machines might not be far off – thanks to the role of indeterminacy and the part that knowledge and the acquisition of belief play in intentional behaviour. But we are not there yet. As will be the case when we meet our first alien species, I’m not sure we will recognize non-organic consciousness when we first encounter it. We are, however, at the stage of having generative language models that can mimic intelligence.

AI expert and author of the always-interesting Algorithmic Bridge blog, Alberto Romero, argues, “Most people haven’t yet conceptualized what this all means.” Romero continues, “ChatGPT has been the first truly global breakthrough and an unprecedented success story.” Elsewhere, I am proud to say, Romero quotes AI ethicist Amanda Askell as saying that “training in Philosophy is extremely useful” in working with these new generative models.

[Image generated by DALL-E 2 using the terms “social work,” “face” and “artificial intelligence.”]

Even as OpenAI’s DALL-E 2 makes huge inroads into AI visual art, and ChatGPT looks and sounds more and more human (and portends the end of the essay as we know it), social work and the helping professions in general seem to have been caught unprepared for the huge potential consequences of AI helpers. While my clients might be understandably excited, we have reason to be worried.

We run the risk of being overtaken by history: rather than having a technology that serves us, we will serve the technology – you may already be experiencing this if you use drop-down “treatment” or “intervention” menus in health and social services. Have you ever wondered what happens to all the data you generate at work – notes, applications, forms and so on? That metadata is being used to create AI social work.

Mathematician Norbert Wiener warned of a dystopian AI future in 1950 in The Human Use of Human Beings, a book we should have been talking about long before now but which is well covered in the eminently readable Possible Minds: Twenty-Five Ways of Looking at AI (Penguin Press, 2019). Not only do the contributors to the book look at the future of AI, they also examine its dangers and pitfalls. Some of the deepest thinkers on AI, like philosopher Daniel Dennett, are also the most progressive.

I think, were my sensible Grandmother still around, she would welcome AI in caregiving, but she would want to see the ownership of these technologies socialized and the technologies themselves used in partnership with humans to make our lives richer and more fulfilling. Will AI help to expose the inefficient and inequitable nature of private LTC and private health care? I would like to think so.

On a personal note, AI imaging helped my stroke team in the emergency department decide on the “probable” best course of action in dealing with my ischemic stroke, stenosis and arterial occlusion. I might not be here were it not for AI. But it was a human being who performed the surgery and saved my life and (most of) my brain.

In a world of catfishing and online imposters, people looking for help are going to place a premium on help from real people, and this will be the social work advantage.

Most certainly, AI will continue to help people feel better in the middle of the night, help prevent people from taking their own lives, and help clinicians make informed treatment decisions. As the online metaverse takes off, taking more and more global citizens with it, many people are going to be logging off, searching for authentic human experiences. Hopefully social work will be there to help meet this need.

I was recently asked by a psychiatrist to see a young MVA trauma survivor. Buried deep in the referral was the statement, “He has a online support bot but he doesn’t use it.” It turned out he wasn’t using his online support group either. The patient told me, “I prefer talking to real people.” His day job? Computer programmer.

People in need of a skilled therapeutic relationship, and people seeking social justice and advocacy broadly speaking, will, I suspect, still seek out human helpers, and we will have to rethink how we provide social work services to remain relevant and helpful. It’s time social work schools – with some help from philosophy – and the profession itself started doing the heavy lifting of thinking about this, before AI does it for us.

*Wearing both my philosophy and social work hats, I don’t want to minimize the ethical issues around AI and the helping professions. For a good, hard dig into the reasons we might want to embrace the many benefits of AI while addressing its shortcomings, check out AI ethicist Amanda Askell’s blog.

THE SOCIAL LENS: A SOCIAL WORK ACTION BLOG - The views and opinions expressed in this blog are solely those of the original author(s) and do not express the views of the UBC School of Social Work and/or the other contributors to the blog. The blog aims to uphold the School's values and mission.