In the Catch22Minutes podcast, we delve into some of today’s major social challenges. We speak to frontline experts, industry leaders and young people, in pursuit of ideas for reforming public services.
With the recent release of our manifesto: 22 ways to build resilience and aspiration in people and communities, our fourth season focuses on some of our key policy asks. It is presented by Catch22’s Head of Policy and Campaigns, Stella Tsantekidou.
In today’s episode we will be focusing on how AI (artificial intelligence) is used in public services and the hiring process; the challenges, the dangers, and of course, the opportunities.
Joining Stella to discuss is Rachel Coldicutt, Executive Director of Careful Trouble, and Magid El-Amin, Director of Evidence and Insight at Catch22.
Hello. I’m Stella Tsantekidou. I’m Head of Policy and Campaigns for Catch22 and we are having another edition of our Policy Series podcast. Today we will be focusing on AI, particularly using AI to improve public services and the hiring process, the challenges, the dangers, and of course, the opportunities.
The topic is as relevant as ever. A few weeks ago, the British Prime Minister, Rishi Sunak, gathered government representatives, AI companies, and experts to discuss how this technology can be developed safely. And in the summer, Labour announced it would use artificial intelligence to help those looking for work prepare their CVs, find jobs, and receive payments faster.
Since then, Keir Starmer, the Labour leader, has regularly mentioned AI use in other policy areas. At Catch22, we want to see AI being used to drive equity in the job-hunting market, and we also want to see it improve public services.
With me today to discuss all that is Rachel Coldicutt, Executive Director of Careful Trouble, and Magid El-Amin, Director of Evidence and Insight at Catch22. Rachel first and then Magid, would you like to introduce yourselves?
Yeah. Hi, I’m Rachel. I’ve been working in technology for nearly 30 years now, both in R&D and policy making, and I run a research and foresight studio, and we do lots of work around AI governance and helping people and organisations to understand how best to either use or not use AI.
Amazing, and Magid?
Hi, yes, I’m Magid, I’m the Director of Evidence and Insight at Catch22. I’ve been in the third sector for just over 12 years, and my job is all about how we can use the latest, best-in-class techniques and tools to provide the best kind of services for our service users.
Amazing. Thank you both. So, to set the scene a bit: could you tell me how AI is currently used to make automated decisions within public service delivery? Obviously, people know that AI is becoming more and more ubiquitous. We see it used in the private sector a lot. A lot of people have started using ChatGPT, and we hear of many people who use it for their day-to-day job. But often when we think about public services, we think that they haven’t developed as fast, that they are lagging behind the private sector. So, Rachel, would you like to start and give me a few examples of where we see it being used and how we might use it further?
Firstly, it’s important to think about what AI is. It tends to be used as a term that captures lots of different kinds of technologies, everything from an automated decision about you to, for instance, facial recognition used to identify you. And so, I think it’s important to think about AI as being made up of data, decisions, and then the ways it’s rolled out.
And in terms of the public services we have now, a lot of the time it is used to try and save money, very often around the more complicated things that happen. So, I always think that you’re really better off automating the easy things that you really understand. But what tends to happen is you get automation popping up in the justice system, for instance, to help judges decide whether or not people ought to be eligible for probation. You see it used at borders to help assess people’s entry claims. It’s used in the Department for Work and Pensions to look at the bank accounts of people who are beneficiaries.
And so actually there’s a whole load of other ways it can be used that it isn’t always used for. Probably the most well-known is in healthcare, where there are extraordinary leaps ahead in research around scans and imaging and that kind of thing. So, I think we can see that it’s very broad and it means lots of different things in different settings.
And in healthcare, you’re saying we haven’t done that yet. There is a lot of scope to use AI, but we haven’t yet started using it.
There’s a lot of scope to use AI, but I think this is a difficult question, right? A lot of the ways it is used are maybe not the best ways. That can be because incentives are set up to use it to save money rather than to deliver better services.
So, what would be a good protocol for establishing which decisions can or should be automated? And do you have any suggestions for how we change the incentives? Does this come down to political motivation, which might not be there?
I think a lot of the time, technologies are introduced into public services as possible ways of saving money. A maybe more useful way to think about it is how to create more capacity and more capability in the staff. So, for instance, can you automate things like journey planning to make sure that people are moving around in more efficient ways, as opposed to automating who is or isn’t considered for social care?
So, I would say the important thing is to automate the easy things that people understand, that can be picked apart really easily and interrogated, so that when things go wrong, everyone can see. Because what often happens is that the ways things can go wrong in an individual’s life are not always apparent to the system as a whole.
I have a question, and Magid, I will bring you in in just a moment, but I have a specific question on what Rachel has just been talking about, which is the difference between using AI to make a service better as opposed to using AI to make a service cheaper. Doesn’t making a service better also save money in the long term?
Well, it ought to. For instance, I did a piece of work in 2016 for the Secretary of State for Health, looking at how technology might be brought into end-of-life care. And what we saw there is that a number of things cause people to spend lots of time. For instance, they’re repeating their medical histories again and again to everyone they encounter in the system. So, automating that kind of thing is extremely hard to do in a way that looks after people’s privacy, that is accurate, that meets everyone’s needs.
But the idea is if you’re able to automate those kinds of things, you get more space in the system for clinical staff to be listening and acting and caring for people. There’s more time for the people who are receiving care to be looking after themselves and not constantly administering their appointments.
But the problem is that those kinds of projects are long and expensive, right? And so, what tends to happen is people look at the budget within a year. They want to make a saving, or they have a CAPEX, and they say, right, we have X amount of money to invest now, and we want it to deliver a return in the next eight or nine months, not in the next year or two or three. So really, it’s about thinking through the long-term opportunities of automating rather than always looking to speed everything up here and now.
Yeah. Which is a recurring problem in politics, because obviously elections run in four-year cycles. So, there is a question about motivating politicians.
Magid, if I could bring you in, so you work for Catch22, we are a big government service provider. We’re obviously looking at AI to improve our services. What are your thoughts on using AI to improve public services?
Well, I think from our perspective, there’s less of a money pressure, because our portfolio of services is a mix of contracts that are delivered on behalf of a commissioner, be that a local authority or central government, and some that are delivered purely from a charitable perspective. So, there’s less of a money pressure, but there is a need to demonstrate efficiency, and I think there’s a need to demonstrate quality.
So, for example, one of the use cases we’re considering: one of the services we run is a food bank. What we’d like to understand quite quickly is the level of dependency on that food bank among the clientele who use it. And that’s not with the aim of reducing that dependency. It’s actually with the aim of providing support that goes beyond the food bank. So, the basic question we’re positing to ourselves is: can we understand quite quickly who is using the food bank and what other services they might want to use, say, financial advice, housing advice, migration advice? Maybe they’re awaiting an asylum application. We’ll then try to intervene appropriately at the right time, using the food bank as the entry point, and then provide them access to other services beyond it. And if we can recognise that early, it means we recognise dependency on that service much, much earlier. So that’s much less about money. It’s much more about supporting those we engage.
Another one of our use cases might be within the employability sphere. So, what is it that leads to someone sustaining a job beyond six months, beyond 12 months? Is it just the quality of the role? Or is it more likely that other areas of their life are much more stable: their housing is stable, their family situation is stable, their education and training are stable? And actually, if we can affect those other things, maybe their chances of sustaining a career that we place them into are much, much stronger, right?
So, yeah, okay, we might turn around to a commissioner and say we have performed this well on these contracts, and that’s down to some of the work we’ve done. Absolutely. But for us, fundamentally, it’s about how good a service we can provide the end user. So, for us, the pressure is much less about money.
And could you say a bit more about using AI in the hiring process? Obviously, we are focusing on public services today, but could you tell me a bit more about how you think we can use AI to improve the hiring process for service users, for the people who usually come to our services, but also what the dangers are?
Yeah, just to echo what Rachel has said, a lot of AI use, whether private or public, will be about automating, about saving money, ostensibly anyway. And when it comes to the hiring process, the classic “pain point” is the number of applications. You want to sift through the applications quite quickly. And there can sometimes be quite a laissez-faire attitude to that: okay, you might miss a few good people, but if you reduce the bulk down to a smaller number of good candidates, then that’s okay to do.
Now, that would always affect our particular cohort of service users: people who might have had a particularly extended break from work, who might need flexible working because of family or caring commitments, who might be first-time entrants into the job market, or who might not have gone to established or well-known schools or universities. Just those groups of people will be unfairly disadvantaged when they come up against an automated decision-making algorithm in a hiring process.
So, for example, one or two of the big job aggregators will sift through your CV, and if you’ve got a particularly long break in your career, however you define “long”, that’s flagged as a negative thing. Obviously, you could have had a break in your career because you went travelling, or because you decided to live out of the country, or you had a kid, or whatever it might be. But the algorithm banishes and removes all context and meaning and just says: a long break is bad, a short break is good, and no break at all is even better.
So that’s where some of the negative points are. There is some work within our employability hub with the large companies who use these algorithms, to get them to understand what some of the barriers, intended or otherwise, might be. And some of our work is with service users, to help them understand what algorithms might look for and how to appropriately address some of the issues.
And what about you Rachel, have you seen any dangers in the way AI is used or in the way that we’re planning to use AI for public services?
Yeah. So, I think the main thing to be thinking about is whether the data we have about people is fit for purpose and whether it’s contextual. One of the things we definitely know is that a lot of the time, when data is collected, it only tells a tiny part of the story about a person or a situation or the context. We also know that the kinds of tools you buy off the shelf, whether they’re from Microsoft, OpenAI, Google or whoever, tend to draw on reserves of data and language that are not representative of everyone’s lives.
So, we know, for instance, that GPT-3, I think, up-weighted data in its training set that came from Wikipedia and from things that were linked from Reddit, and it’s biased towards the English language and people for whom English is their first language. We know that Wikipedia, for instance, is mostly written and edited by white men, so the language and the constructs they’re using are not representative of many people at all.
What happens is that the power and the perspectives of a very tiny number of people are generalised out as the norm, and I think this is very problematic when we’re making important decisions about people that may change their lives. Thinking about what music might be recommended to a person, or what they might watch on Netflix, is completely different from making a decision about their insurance, their energy bills, their benefits, their access to work or to healthcare.
And so, I would say we need to think of AI as a set of tools that we use, not as things that are ready to replace us. And if, when we’re using those tools, we don’t have confidence in them, then we ought to be able to say no, or query them, and move on. The other thing, which you asked about earlier and I didn’t get to: when we’re thinking about automating a service, as well as thinking about what might go wrong, I think it’s really important to think about how we might turn it off, right? And what might happen when it’s used in ways we didn’t anticipate.
And I think a lot of the time, when you’re automating a service or you’re thinking about, for instance, using AI in hiring, what you’re thinking about is the issues and the problems from your own or your organisation’s perspective, but you’re not really able to see the impacts it will have more broadly. I think we really need to start contextualising those things and understanding the experience of the people who are likely to be the most marginalised and the most vulnerable, rather than only looking at it from the perspective of the most powerful.
Thank you, Rachel. That’s all we have time for today, but I want to finish with a quote from your writing actually. I think it really summarises a lot of the things we talked about today.
You said, “Computer science is a complex discipline and those who excel at it are rightly lauded, but so is understanding and critiquing power and holding it to account.”
I really like this because it says so much about some incredibly smart, incredibly well-educated people who want to make decisions for the rest of us. So, thank you very much both, thank you, Magid, thank you, Rachel. This has been very, very illuminating for me, certainly.
Thanks a lot.