In 2017 there was a dramatic change to the way that language-generating models were built. It was known simply as the “transformer”. Today it sits behind large language models, or LLMs, like ChatGPT and Bard, and it’s one of the things that the teams at IBM have been following very closely too.
The world is finally starting to understand what so many technologists and engineers have been working on over the years. ChatGPT may have taken the world by storm when it first came out, but in June it saw a 10 per cent drop in users for the first time, prompting commentators to blame the false information that was appearing in stories online. Some companies have banned employees from using the platform too.
We spoke to Michael Conway, Data and AI Transformation Leader at IBM Consulting, to learn more about how these challenges are affecting financial services.
From Technique to ChatGPT
Michael explained how, in his role at IBM, he has moved over the years from warehousing data to exploiting data, which led us to discuss his team’s mantra: “Using GenAI before it was cool”.
“Before it was just a technique,” Michael shared. “And all of a sudden the technique has captured the imagination,” he explained in relation to the large language models that he and his team work with. Importantly, ChatGPT is just another LLM, but one that there is a big lesson to learn from, Michael added.
“What is amazing about ChatGPT, and what is a lesson to everyone, is that it’s all about the UI. About the distribution and the ease of use.
“This is just the technique we’ve been using. But as soon as it becomes easily understandable for anyone to use, that’s when it blows up,” Michael added. He also shared a tweet that he thought explained the importance behind distribution in the context of ChatGPT.
The Exponential Industry that Generative AI Represents Today
Just like Michael and his team were using GenAI before it was cool, we then discussed what he is working on that he can share, and how his role works.
“I think we, as consultants, in an industry, and especially in an exponential industry like this one, have a duty to ourselves and to our clients to be on the next page.”
The next page is important, of course, but Michael still thinks that LLMs have a long way to run. They are definitely not old news. He thinks that where people are struggling today is making use of LLMs.
“Deploying LLMs is harder in practice than it is in theory, and that’s where we expect a lot of work to come from in the future,” Michael shared, alluding to the tweet he shared earlier about distribution.
Clients are already working with IBM Consulting to address this. Michael explained how businesses, especially highly regulated ones, need help understanding a myriad of questions. Where to put AI in the workflow? Where can it be trusted? How can I govern and audit the outcome? This is where Michael and his team are there to help, especially with complex challenges like the ones that banks face.
One of those clients is Lloyds Banking Group, whose project with IBM Consulting has been shortlisted for the MCA Awards 2023.
Getting Tech to the Users. And Making it Stick.
One of the things that Michael believes is needed to succeed is the ability to always be looking for the next shiny thing, or technological advancement. He added that there is a balance to strike between deliverability and ensuring the technology is actually used.
“You can’t just be a forever conveyor belt moving to the next shiny thing,” Michael explained. When it comes to banks, they produce large amounts of governance and checklists. The amount of work might be frustrating. But the work is getting technology into the hands of real users. Getting stuff into production. And making it stick.
“You can have the shiniest, slickest model, but if no one’s using it, it’s irrelevant,” Michael highlighted. “That’s where we think our work is really important. It’s the intersection of really amazing tech in the hands of users making a difference.”
First steps into LLMs for financial services
Working with banks means a lot of different stakeholders can be involved in the collaboration. One role that recurs frequently when working with banks is that of the risk team. And, when working with people from the risk team, as well as others, it’s important not to go in with a big bang. Some LLMs can be run on local devices, Michael explained, and, he added, it’s often better to start the conversation there, on a smaller scale.
“Every conversation I am having with clients right now is not about the what,” Michael shared. “It’s about the how.”
One client that doesn’t need to start at the beginning is Lloyds Banking Group. IBM Consulting has been working with Lloyds for years now. The bank has the right foundations in place to facilitate LLMs, Michael believes. This means they can do more, faster. Together. Other clients are also able to benefit from this type of outcome. Especially with all the know-how that Michael and his team have accumulated in the space over the years.
Prompts are Powerful
We discussed a specific case that Michael and his team had worked on with a bank. It was a simple example using the search capability on the bank’s site. The system was asked, for example, what phone number to call for a lost card. It responded with a ‘sorry you’ve lost your card’, which it hadn’t been trained to do, and added a telephone number. The trouble with telephone numbers, Michael explained, is that to a language model they are inherently random. Instead of giving the phone number available on the site, the system generated a random number. Michael phoned it. It didn’t work.
Working with prompts means that you can make rules. Michael added that the prompt used when interacting with the system could be: ‘if you find a phone number that looks like this: 020…, if you find it in the search of the data on the site, give it to me verbatim. If not, say you can’t find it.’
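The rule Michael describes can be sketched as a grounding check around the model’s answer. The snippet below is a minimal illustration, not IBM’s implementation; the function name and the regex for “a phone number that looks like 020…” are assumptions made for the example.

```python
import re

# UK-style numbers beginning "020", as in Michael's example (illustrative pattern).
PHONE_PATTERN = re.compile(r"\b020[\s\d]{8,12}\b")

def grounded_phone_answer(search_results: str) -> str:
    """Return a phone number only if it appears verbatim in the retrieved
    site content; otherwise say we can't find one, mirroring the prompt
    rule: 'give it to me verbatim. If not, say you can't find it.'"""
    match = PHONE_PATTERN.search(search_results)
    if match:
        return match.group(0).strip()
    return "Sorry, I can't find a phone number for that."

# The retrieved page really contains the number, so it is quoted verbatim...
print(grounded_phone_answer("Lost card? Call 020 7626 1500."))
# ...and when it doesn't, the system refuses instead of inventing one.
print(grounded_phone_answer("Report a lost card via the mobile app."))
```

The point of the design is that the number is extracted from retrieved data rather than generated, so it cannot be a hallucinated random string.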
Michael sees how prompts and business rules overlap. With the help of prompt libraries, it can become less complicated to work with AI, because the way that you structure these prompts is phenomenally powerful.
“We are still in nascent territories,” Michael added. “It’ll be interesting how that matures over time.”
Sharing the journey
Michael touched upon how, unlike in typical software delivery, the relationship with the teams representing banks needs to be based on trust. Not just because his team brings expertise in new technologies, but because the teams should be part of the journey together.
“Risk can’t be experts in every new technology,” Michael explained, adding how “we encourage the banks that we work with to include risk as part of the sprint team.”
This allows the risk teams to see how decisions are made. To see that Michael’s team is thoughtful and sensitive to the issues that a bank might face. “At the end they’re representing the team and saying we have set out these guardrails and this is how we keep the bank safe,” Michael added about the role of the risk team at a bank.
Michael shared how he has seen a rise in roles such as data ethics and AI ethics officers. The best summary of these job titles that he’s heard is “just because we can, doesn’t mean we should.” These new jobs are the ones guarding the ‘can we versus should we’ debate, and he sees this as a positive step.
Michael shared how the teams at IBM are advocating the idea of precision regulation.
“Don’t be too blanket about it. You’ve got to regulate at the point that the rubber hits the road for that particular use case,” Michael explained. He sees how in financial services things like the FCA’s consumer duty and GDPR are already main parts of the discussion.
“How do you evidence that you are being fair to your customers under consumer duty? Part of that is fair and transparent outcomes,” Michael highlighted. “You have to prove that your AI is fair and transparent.” It is, in effect, a second-order demand flowing from the first order of regulation.
The next Generation
It’s not just that the new generation are going to expect advanced AI as a standard part of their online lives. It’s also how AI is helping those who are vulnerable, consumers who have hearing problems, for instance. In his own case, Michael has been fortunate enough to have AI help him in his personal life with his Type 1 Diabetes.
We discussed how Michael’s daughter, who is four, is already interacting with AI. In her case, the interaction is between her and Michael’s watch or Siri.
“I think it’s incumbent on us as responsible stewards of this type of work (LLMs). Banks are very responsible in this case, and we help to make sure that we are catering for all sorts of diverse needs,” Michael summarized.
Author: Andy Samu
#Transformer #ArtificialIntelligence #GenerativeAI #Regulation #Risk #Prompt #LLMs