Social Work and AI a year later. Are you ready for a brave new world? Are our clients?
John Richmond, BSW, MSW, RSW (ON, BC)
Branch Rep, Vancouver Sea to Sky BCASW
I’m back a year later with an update on AI and social work. I’m writing this on the ferry – sitting next to a man who tells me that although he is homeless, he uses his phone to access virtual care. “Robot doctors and nurses will help you with no questions asked. The future is going to be awesome, Bro.” It’s become a truism that AI is going to radically change the world; the question for us is what does this mean for social workers and our clients?
I’m going to do a bit of a deep dive here to help social workers understand the latest developments and how those developments will affect our practice. If I lose you, don’t worry – I’ll recap at the end in simple language and give you a list of resources, like the best and latest YouTube videos on AI (I’ve watched a million of them so you don’t have to).
But first let’s review the past year – and what a year it’s been: self-driving car crashes (Apple recently announced it is abandoning its self-driving car project), asymmetric low-tech warfare versus high-tech drones in the Middle East, the steady advance of quantum computing, LLMs (large language models) and now LAMs (large action models), and allegations of racism in LLMs. The now-famous case of Gemini briefly producing graphics with no white people in them raises the question: did Gemini “learn” to make the mistake, or did someone insert a routine designed to counter the vast amounts of racist bias found on the net and used to train systems like Gemini?
Or how about this fake news produced by AI: the mouse with giant testicles (thanks to Alberto Romero).
The by-now world-famous mouse was the subject of a “peer-reviewed” publication. Except the mouse does not exist; it is an AI “hallucination,” created by drawing on data and then adding made-up material to match a prompt. (Prompt engineering is how you, or anyone interested, with a few hours of free training, can program AI.)
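To make “programming with prompts” concrete, here is a minimal sketch using OpenAI’s Python library. The model name, the instructions, and the question are my own illustrative choices, not anything from the article or from Rabbit – the point is simply that the “system” prompt is where the programming happens:

```python
# A minimal sketch of "programming with prompts" via OpenAI's Python library.
# Assumes `pip install openai` and an OPENAI_API_KEY environment variable;
# the model name and both prompts are illustrative choices only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        # The "system" message is the program: it constrains how the model behaves.
        {"role": "system", "content": "You are a plain-language assistant for a "
            "social worker. Say 'I don't know' rather than guessing."},
        # The "user" message is the input the program runs on.
        {"role": "user", "content": "Explain what an advance care plan is, "
            "in one short paragraph."},
    ],
)
print(response.choices[0].message.content)
```

Change a few words in the system message and you have, in effect, written a different program – which is why a few hours of free training really is enough to get started.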
Meanwhile, computer engineer Jesse Lyu exploded onto the AI scene at the end of 2023 from his home in LA with the Rabbit R1 (wait, don’t order yet, there’s more!), and OpenAI’s board blew up, revealing that AI might be needed to improve the corporate governance model at AI companies themselves.
Unless you work in acute care fields like surgery, trauma, and emergency medicine, however, you may not have seen any (obvious) practical AI advances in your social work practice. Although if you are a social work student, you might already be using ChatGPT (I’m not condoning it, but everyone seems to be doing it).
While the Rabbit R1 might be a fun gadget fad that disappears as quickly as it appeared, the tech behind it is here to stay and will change social work practice forever. It will also help – and is already helping – our clients achieve new levels of independence and ability.
Large Action Models like Rabbit take the foundational architecture behind LLMs and make it more useful – hence the word ACTION. For example, I could theoretically combine all my charting, resource references, websites, referral portals, and practice management tools, “show” or teach Rabbit how I use them, and voila! I can sit back, spend more quality time with my client, and dictate a note and instructions to Rabbit on what I want done (which resources to draw on, which referrals to make, and so on). LAMs simplify our use of apps and the net in general. A popular use of Rabbit is to teach it what you like in a vacation and then say “I want to go to Greece for under $5,000 Canadian” – and the LAM will do all the work for you, provided you give it unrestricted access to your computer and passwords.
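Nobody outside the company knows exactly how Rabbit’s LAM is engineered (more on that below), but the underlying idea – a language model routing natural-language requests to tools it has been “taught” – can be sketched in a few lines. Everything here (the tool names, the dispatch logic) is my own hypothetical illustration, not Rabbit’s actual design:

```python
# A hypothetical sketch of the "Large Action Model" idea: route a parsed
# request to a tool the system has been taught. Not Rabbit's real
# engineering, which has not been published.

def book_travel(destination: str, budget_cad: int) -> str:
    # Placeholder for a real booking-portal integration.
    return f"Searching trips to {destination} under ${budget_cad} CAD..."

def make_referral(client_name: str, service: str) -> str:
    # Placeholder for a real referral-portal integration.
    return f"Referral drafted: {client_name} -> {service}"

# The "teaching" step: a registry mapping intents to actions.
TOOLS = {
    "travel": book_travel,
    "referral": make_referral,
}

def act(intent: str, **kwargs) -> str:
    """Dispatch a parsed intent to the matching tool.

    In a real LAM, an LLM would parse the spoken request and choose
    the intent and arguments; here we hand them in directly."""
    if intent not in TOOLS:
        return "I don't know how to do that yet - teach me."
    return TOOLS[intent](**kwargs)

print(act("travel", destination="Greece", budget_cad=5000))
print(act("referral", client_name="J. Doe", service="home care"))
```

The hard part, of course, is the step this sketch skips: getting a language model to reliably turn “I want to go to Greece for under $5,000 Canadian” into the right intent and arguments – which is exactly where the unrestricted access to your accounts and passwords becomes the worry.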
I prompted DALL-E for an image of a rabbit using a Rabbit R1. This is a rabbit. This is not a Rabbit R1. Unfortunately for the tech bros at OpenAI, LLMs make things up.
Ethical Issues
AI itself says the following about social work and AI (lifted word for word, with no credit or source cited, from the International Journal of Social Work Values and Ethics):
Artificial intelligence (AI) is becoming increasingly prevalent in social work. AI is being used to conduct risk assessments, assist people in crisis, strengthen prevention efforts, identify systemic biases in the delivery of social services, provide social work education, and predict social worker burnout and service outcomes, among other uses. (Vol 20, 2023)
Whatever the advantages of AI, the ethical issues are huge:
– We now know universities were, and are, using AI to monitor remote learning and testing (and in some cases students have been accused of academic misconduct because the AI could not identify a face).
– What if Rabbit runs rampant over the net pretending to be me, ordering flights and hotels?
– Tech has long been the domain of folks like me – white, privileged, cisgendered males. The digital gap is growing between those of us with easy, affordable access to the net and some of the folks I see every day who don’t even have a phone. Are we headed for a world where a small number of wired social workers working from home are “helping” larger numbers of people who cannot access the benefits of a device like Rabbit?
Yet the advantages of Large Action Models for people with disabilities are significant. After I had a stroke in 2021 and was then hit by a truck while on my bike in 2023 (no bike lane), I gave this subject considerable, personal thought. Had I not recovered, a device like Rabbit would have helped me pay my bills, file my taxes, manage my medical appointments and homecare, and – wait for it – manage my social work appointments. I could afford Rabbit and the data/Wi-Fi needed to run it because of the benefits I have through my employer. But what about my clients? One of them is an architect who could still be working post-MVA – but he can’t afford the tech and has no one except me to help him set it up.
All of this raises important questions I want to tackle here and in a presentation I will be giving for Social Work Week 2024.
What is AI anyway?
But first, for those interested (and frightened): the definition of AI is a moving target. In popular culture, AI refers to the types of apps and technologies we are all using to organize our schoolwork, meetings, and busy social work practices, or to manage our finances.
In the world of computer science, philosophy, and cognitive science, AI refers to several different models. The first – called hard AI – does not yet exist (trust me on this). It is true sentient, conscious, synthetic/non-organic machine intelligence: a machine, app, or robot with its own mind and, more importantly, a theory of mind – a machine that is aware of itself and aware of others (organic and non-organic) as sentient creatures much like itself. This is how we navigate the world ourselves – ascribing beliefs and intentionality to others and sometimes even to our pets and our Bissell vacuum cleaner (hey, stop that, get away from my spilt Ju-jubes). Theory-of-mind AI is being called AGI in the media and is the explicit goal of folks like Sam Altman at OpenAI and Elon Musk. Mr. Altman has said he needs $7 trillion to achieve his goal.
The other types of AI include the most common ones we see, hear, and read about every day – generative AI, or Large Language Models like ChatGPT – trained on massive amounts of data and used in a variety of settings. For example, I use OpenAI LLMs to produce Advance Care Plans for and with my clients. LLMs require massive amounts of computing power in the cloud, not to mention massive amounts of mined rare elements and electricity.
Meanwhile, over at the LLMs… this kind of AI is NOT thinking, reasoning, sentient intelligence. It just manipulates data. LLMs make stuff up – they confabulate – but if enough data is used, the answers can look impressive. (From Alberto Romero’s blog.)
LLMs account for the lion’s share of capital raised in the business of AI, due in part to what is assumed to be huge market potential in the near future. It is already becoming apparent that the huge costs of LLMs are starting to give pause to the corporations that would be using products built with generative AI. Investors in Sam Altman’s venture may never see their money back, but that won’t stop progress in AI, which, no pun intended, has taken on a life of its own and become an investment bandwagon everyone wants to be on. AI critic Alberto Romero says, “Don’t bet against AI.”
AI also refers to pre-programmed machines with limited memory and no ability to learn novel tasks or navigate unique environments. Think of the computers that co-pilot planes or the robots welding cars. Generally speaking, pre-programmed AI remains the best investment for many companies and is more sustainable, using far less energy and requiring far less computing power. Simple AI is what many social workers still use at work due to limited budgets (we just added a tech budget at a non-profit where I sit on the board).
The AI project remains plagued by problems – who can forget the self-driving car in San Francisco that ran over a woman and dragged her 25 meters to her death?
The solution may well lie with multi-modal tokens. Tokens cost money, and while OpenAI will give you a small number of tokens with your membership, they are quickly used up, requiring you to purchase more to keep working. Each token represents a quantity of data – the commodification of information. Multi-modal input – text-to-video, for example – means prompt engineering with text, audio, and visual inputs; a million multi-modal tokens will produce some spectacular images that are hard to distinguish from reality.
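If you want to see what a “token” actually is, you can count them yourself with OpenAI’s open-source tiktoken library. A small sketch (the encoding name is the one used by recent OpenAI models; the sentence is just an example):

```python
# A small sketch of what "tokens" are, using OpenAI's open-source
# tiktoken library (pip install tiktoken). Each token is a chunk of
# text the model reads - and bills you for.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by recent OpenAI models

text = "Social workers should understand what they are paying for."
tokens = enc.encode(text)

print(len(tokens))          # the number of tokens this sentence costs
print(enc.decode(tokens))   # round-trips back to the original text
```

A short English sentence runs roughly one token per word or word-fragment; images, audio, and video are carved into tokens the same way, which is why multi-modal work burns through a token allowance so quickly.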
The setbacks, aside from cost and environmental impact, include the obvious problems AI continues to display with problem solving – a most interesting example being the issue of object permanence: AI clearly does not “understand” that chairs don’t float. It was not trained on data suggesting chairs can float – it simply does not grasp the concept of object permanence.
AI doesn’t “get it” – chairs cannot float. They are permanent objects in our world. But for an AI system that is not actually thinking – using reason and experience – the sky is the limit, apparently. (Taken from Alberto Romero’s subreddit.)
Why is object permanence important? A quick philosophical side note – feel free to skip:
I know that a small forest sits nestled at the end of an S curve near my house, but if I were to ride my bike to the library in Xwesam tomorrow and CanFor had cut down the little forest, I could adjust accordingly in a split second. If the S curve were gone and the road straightened, I would just keep going, hardly giving it a thought (aside from plans to send an angry letter to my local councillor). But LLM models of AI seem unable to adapt to new inputs as quickly as a human being. Enter the Large Action Model and Rabbit – little is known about the engineering behind Rabbit, but it certainly looks promising.
In the meantime, AI keeps churning out “hallucinations” like the floating chair, seemingly oblivious to object permanence. Of course nothing is permanent – not even time and space – but we successfully navigate the physical world by using a relatively good internal mental representation of which aspects of the external world tend to be permanent and which do not. (The little forest down the road from my house might be gone tomorrow AND the road may have pylons on it – I could quickly adjust to both, knowing forests are relatively permanent while pylons are not.)
Which leads to the last philosophical issue before we get to the implications for social work. The smartest minds in the room are divided on the (admittedly scary) prospects for sentient, conscious, intelligent AGI (artificial general intelligence) – with some believing it doesn’t matter, because if it acts like a duck and quacks like a duck, people will think it is a duck. As I mentioned last year, some folks are already being helped by bots like Woebot, which philosophers understand is NOT AGI. It may well be that conscious, intelligent behaviour is an emergent property of brains (much like ice as we experience it is an emergent property of water) – and may emerge from natural language models executed on quantum computers. We may not even recognize self-awareness when it shows up, but many think a conscious machine is right around the corner, given the speed with which we are making progress with deep neural networks imitating neural activation patterns in the brain.
Back to social work:
Have you used or tried Notion or Motion? They are effective AI organizing tools that speed up your ability to manage cases, meetings, and tasks – with the practical impact of speeding up the work day and making SW a more efficient process for both clients and funders. A front-facing, efficient system for managing client interactions is something the public is going to expect to encounter when seeking SW services – increasingly, folks will not tolerate waiting days or weeks to hear back from a SW with an appointment a further three or four weeks away. Social work employers are going to have to invest in front-facing technologies that help clients easily and quickly obtain SW assistance. And when we interact with clients, we will have to be prepared to work with clients who are using AI to research their own psychosocial issues and treatments.
AI can already outperform even the official answers on the social work licensing exam in the US. How long before the public figures out that ChatGPT can answer their advance care planning questions faster than a real, live social worker? The solution lies in working WITH AI to improve service quality in social work.
As someone who lives with a disability, I am resolutely optimistic that AI will improve the lives of people living with disabilities – but not unless they have access to the same tech many of us take for granted. I’ve had three advocacy cases this year of patients looking to use Google Home in hospital who were actively prevented from doing so by having the Wi-Fi blocked, “because those devices will record the Doctors and Nurses,” as one hospital manager informed me.
Social work ought to embrace the potential of AI for our clients. AI will, for example, convert some folks with mild cognitive disorder or mental illness currently deemed “financially incapable” to “capable” overnight. If you are aware enough to know that you have trouble tracking your income, paying bills on time, and catching fraud before it happens, and you appreciate that you need an AI app to help you manage your finances, then I would argue you are capable (I look forward to fighting a case like this before the Consent and Capacity Board).
The many ethical issues AI raises for social work are merely a subset of the many ethical issues capitalism has created for social work. For example, one objective government, police, and the criminal justice system have for AI is greater monitoring and control of people – the ability to harshly criminalize behaviours we used to manage with humanitarian interventions. Will we move away from safe supply as an evidence-based response to the toxic drug supply and instead ramp up robocops with face-recognition AI and a new war on drugs? The massive amounts being spent on AI mean its main applications will likely be military and policing unless we demand otherwise. As social workers we can and must be vocal advocates for our clients, insisting that tax breaks for Big Tech or subsidies to universities be spent on technology that will benefit our clients, and that our clients have the income needed to participate in this brave, and somewhat scary, new world.
RECAP:
– AI comes in different forms and models
– Large Action Models (LAMs) use Large Language Models and natural language, plus the internet, to problem-solve for us and our clients.
– LAMs will allow us to build integrated models of client care.
– Clients are going to get faster, more reliable responses from SW and AI working together.
– LAMs and LLMs are NOT thinking, intelligent machines
– The digital divide is widening and will get worse unless we advocate strongly for our clients to have access to AI
– AI will be used primarily for purposes of control and marketing unless we push government to ensure AI is used for progressive purposes.
Where to go to learn more about AI: Gary Marcus, a world-renowned cognitive scientist and AI critic, can be found on Substack. Marcus recently relocated from the US and now calls Vancouver home. Gary has an active and entertaining Twitter account, @garymarcus.
The Algorithmic Bridge from Alberto Romero on substack is my daily go-to for AI news and critical analysis. Absolutely unbeatable.
More technical types will enjoy the TLDR Tech newsletter. I’m usually able to impress employers and IT departments with the latest info I’ve picked up from TLDR; however, it’s not for the faint of heart.
Learning prompt engineering has never been easier, as systems evolve and improve. An investment of under 500 hours at freecodecamp.org will make you almost an expert and enable you to create your own apps.