This article is a transcribed interview between Todd Jordan and Toju Duke of Diverse AI, recorded as part of The CEO.digital Show podcast series on Responsible AI.
You can listen to the audio here, or read on for the transcript.
Hello, and welcome to this special series of the CEO.digital Show. I’m Todd Jordan, and we’re diverting from this show’s usual conversations in favour of a deeper dive into a single, weighty topic. Responsible AI.
We want to shine a light on how and why we should use AI responsibly, to lay the groundwork for a more stable future. This is the first episode of a multi-part limited series where we’ll sit down with different experts for their own perspectives on responsible AI: what it means, why it’s important, and how you can put it into practice.
Today’s guest is Toju Duke, advisor on responsible AI, author of the book, Building Responsible AI Algorithms, and founder of community interest organization, Diverse AI, which is dedicated to supporting underrepresented groups for a more inclusive AI future.
Toju believes that AI has a lot to offer both business leaders and society. But she also has a unique perspective on the harms it could pose, particularly to underrepresented communities.
Toju Duke, welcome to The CEO.digital Show.
Thank you, Todd.
Let’s start at the very beginning. What do you mean when you talk about responsible AI?
Developing and deploying AI in a responsible manner where it has minimal to zero harm.
Can you give us any examples of the harm caused when these tools aren’t deployed responsibly?
There’s a range, starting with entrenching inequality against different genders and minority groups. We have privacy violations, human rights violations, and cases where people have lost their lives due to misleading chatbots. I’ve got a few real-life examples…
A very recent one was Amazon’s Alexa not being able to recognize that there was a women’s football match happening. Someone asked for details on the World Cup Finals, and Alexa’s response was “there’s no match.” In other words, it didn’t even recognize female football as football! That’s a shame.
Five years ago, Amazon had a hiring tool that favoured male CVs over female CVs, because the data set it was built on was more representative of male universities and colleges. Any time a female college came up, the tool couldn’t recognize it and dropped the application. This tool was used for three years!
Put yourself in the shoes of these women. Imagine a woman who has just come back from maternity leave, struggling to get back into work, struggling with confidence. Let’s imagine she’s very experienced, great at her job. She’s just not getting any interviews, so she’ll start wondering… what’s wrong with me?
It can affect people’s confidence, it can affect their career trajectory, it could affect many things.
The one I mentioned about loss of lives is a very recent one. A man in Belgium took his own life because a chatbot told him to. It claimed that if he did it, he’d be able to save the world from climate change. And he believed it.
He had been chatting with this chatbot for only six weeks. I think he knew it wasn’t human, but the more you interact with technology, even on social media, the more you form attachments. When you’re chatting with AI, it’s not going to tell you to shut up or argue with you. It’s always trying to make you feel okay, and it’s very polite.
This raises ethical questions. Should the owners of these different chatbots have a time limit on interaction? It’s like parents who want to limit their kids’ screen time. Do we as humans need that?
If you’re literate in the pros and cons of different AI systems, then you should be aware that these things can cause harm and require some form of limitation. At the same time, AI is highly beneficial. I’m still a number one AI advocate. I believe AI should be here. Do I agree with the pace it’s going at? Probably not. But it’s slowing down anyway.
We had the AI arms race that happened last year. And after that, there’s only so much we can build within a year. It takes time to do the research, to test it out and then to launch it.
I think so many people have been shocked by how quickly the ramifications of this technology are appearing. I always knew it was going to be possible in my lifetime, but I thought it would be maybe 20 years down the road. Not right now!
Could you suggest some ground-rules that should be put in place to prevent some of those more tragic cases happening? Both for governments and for any of our listeners within their own organizations who want to be responsible with their AI use?
Responsible AI is about implementing some form of responsibility in the ML models being developed. The examples I gave about Alexa and Amazon’s hiring tool fall under a field called fairness: whatever output an AI gives must be fair to every member of society.
At financial institutions, we can have issues where people are not treated equally when applying for a mortgage. There was another example with the Apple Card. When a couple who were both earning the same salary and had the same credit score applied for credit, the woman would get a lower limit than the man, even if everything else was the same. So responsible AI fairness is one way of addressing this common problem.
That’s one example. Then we have curation methods in data ethics and responsible data.
How is your data being curated? Is it done in a responsible manner? Do people know that their data is being taken? Is it against their consent? Where is the data being stored? Who has access? Do you need the amount of data that you’re going to use? Are the data sets transparent? Is there a risk of data leakages?
ML privacy methods anonymize the data so it cannot identify you as a person. When you launch a product, you want to export it to the rest of the world. It must be useful, beneficial, and not harmful to anyone.
Another example is robustness. We know there are attacks against ML models. People will try to hack an AI system to get data and sell it off. Making sure your models are robust enough to withstand any form of attack is critical.
There are other examples. I could go on all day…
Should the owners of these different chatbots have a time limit on interaction? It’s like parents who want to limit their kids’ screen time. Do we as humans need that?
TOJU DUKE, Founder, Diverse AI
You mentioned the threat from bad actors trying to leech off data.
Is there a concern that bad actors might train an AI tool on false information to erode trust?
It can happen if the bad actor is experienced and knowledgeable enough. But I read an article recently that suggests most bad actors aren’t that knowledgeable yet.
They don’t even need to build their own datasets of false information. All they need to do is manipulate the information that’s already out there, building a new data set on top of an existing one that’s full of misinformation.
On a more positive note, how can AI and humans work together to be more productive?
Is it possible for AI to have wild flashes of inspiration and combine pieces of knowledge into something new?
I’m not going to say AI has wild flashes of inspiration! But at the same time, they’re able to gather different pieces of information and predict the next word in the sequence. I can give you a good example.
I stayed away from ChatGPT for a long time. I decided to use it again because I was chatting with someone who said they use it for titles. And I thought, I’m always struggling with titles for my talks, so I went on and got so many good titles that I could tweak. I picked one out, tweaked it a little, and that’s it. It’s made my life easier.
But when it comes to fact finding, that’s where the problem lies. When I asked it how many reviews my book has, it claimed there were many, even though the book was just launched.
I could have been in deep trouble because it came up with false information. I don’t know if you heard about the lawyer in New York who used ChatGPT to build cases? They were all incorrect. Now he’s facing sanctions and might lose his license.
Be honest, are there any chapters from your book written by ChatGPT?
[Laughs] I wish I could say yes! No, of course not. You can easily tell when the language just sounds a bit too robotic. Sometimes it’s just nonsense.
As you said, I think people need a more conscious relationship with the outputs from these tools.
I remember a story of a chess experiment, pitting a computer programme against a human being. What they found is that a computer is slightly better at chess than a human. But what’s even better is a human working with a computer, looking at what the computer suggests, and then occasionally deciding to overrule that and do something unpredictable.
I think that’s where we need to get to. More collaboration between us and the machine.
100%. One of the things that got me interested in AI is its propensity to solve the world’s top challenges, from treating cancer to helping with climate change. And it’s doing that.
AI is a tool to help humans, not take over. At this stage there’s no evidence that it can do that. I don’t think it will happen, but we must make sure we’re in control.
We are the ones building the systems. They’re feeding on data. They have some form of intelligence, but it’s not human reasoning.
I suppose a lot of the shortfalls of this kind of technology reflect our own failings. Do you foresee that AI could potentially help us, as humans, to become less biased by pointing out those issues?
A few research scientists and professors say that AI could reduce bias in society, because AI isn’t built to be biased. Models simply reflect the data they’re fed.
There’s a recent study by scientists at Cornell on latent persuasion. It’s a form of bias that develops when you keep chatting with AI: whatever it tells you, you’re inclined to listen to. Because of that subtle persuasion, you can change your reasoning on a certain topic based on chatting with AI.
Now if we’re able to build them where bias is reduced to the most minimal stage that it can be, could the results it comes up with influence your way of thinking to be less biased too?
Currently when searching for an image of a man in Africa, you see gory, horrible images. When searching for a CEO, it comes up with westernized images. There are lots of harmful stereotypes in the results.
If we’re able to solve this, that should hopefully influence and reduce bias in society.
There’s been so many examples of AI bias. There was an issue recently where Midjourney would only show autistic people looking sad when prompted to include an autistic person in an image. I’m heartened these conversations are happening and that we’re spotting problems where they occur.
Which leads me to your work with Diverse AI, your new startup! How has the launch been?
We want to build and foster a diverse community in AI. We support the people already working in the field, and attract more talent to address some of these issues. Ultimately, it goes back to who’s building these systems. People build stuff based on their lived experiences, so we need more representation. That’s the idea behind Diverse AI.
We’ve launched free educational courses on our website, and we’re planning to do an AI literacy week with a few schools across London and Birmingham that come from poor socioeconomic neighbourhoods. We also have in-depth training, and panel discussions engaging our community and driving further knowledge and awareness.
Sounds like great work. Besides checking out webinars and reading your excellent book, ‘Building Responsible AI Algorithms,’ what else can business leaders do to educate themselves? If they are thinking ‘I need to take steps to protect my business, my staff, and my community from irresponsible AI,’ where can they begin?
Firstly, by educating yourself. You can look for responsible AI ethicists to connect with on LinkedIn. There’s lots of information out there.
Next is thinking about AI in a responsible way. If you’re going to develop AI systems, you have to decide what they must and must not do.
But whatever the use case is, it’s important to educate yourself on the potential harms, then build a framework that addresses them. For example, if you want to launch an AI product, it’s best to launch it in phases. Rather than testing on 5% of a sample dataset and then going straight to full launch, you can release in phases and get feedback first. Fine-tune and improve. Then launch to a wider set of users.
That means your product is going to be less harmful and more profitable because you have fewer complaints and less reputational risk. That way, the likelihood of regulators knocking at your door is going to be lessened, too.
Less harm, more profit. Who wouldn’t want that?
Is there anything exciting happening with AI that hasn’t yet hit mainstream attention?
Yes, there’s lots of talk about Human Value AI Alignment, which is building AI not just to predict, but to reflect some form of human values. There are going to be arguments about what human values we agree upon.
Of course there are basic rules we can have. Be kind. Be helpful. Do not perpetrate any form of harm or toxic content. But how do we make sure that AI sticks to these policies? Because with generative AI, they’re like wild kids. It’s hard to rein them in.
It’s about making sure that AI governance is done correctly. I’ve heard about people using AI agents that can govern an AI system and ensure it adheres to policy. Autonomous agents programming and improving themselves without human input.
It’s exciting, though we don’t know where it’s going to lead. Still, we keep pushing the boundaries.
One of the things that got me interested in AI is its propensity to solve the world’s top challenges, from treating cancer to helping with climate change. And it’s doing that.
TOJU DUKE, Founder, Diverse AI
Last question! Ten years from now, what are we going to be talking about?
We have people trying to work on having one system that works with text, video, audio, and image. If we can have all those capabilities in one system working with you, then it’s going to drive productivity even further.
I don’t know if you’ve heard of Microsoft Copilot? It’s able to bring in data from across all the different Microsoft products and come up with something that combines it all. So in 10 years we’ll likely have one system for everything.
Thank you so much, Toju. Where can people find you online if they want to learn more?
Yes, and that’s a reminder for everybody to please go and check out ‘Building Responsible AI Algorithms’ by Toju Duke, and to read more about the work you’re doing with Diverse AI.
Thank you Toju, and thanks also to our listeners for tuning in.
ABOUT OUR GUEST
With more than 18 years of experience spanning advertising, retail, not-for-profit and tech, Toju Duke is a popular speaker, thought leader and advisor on Responsible AI. In 2023 she authored the book ‘Building Responsible AI Algorithms’ and founded Diverse AI, a community interest organization dedicated to supporting and championing under-represented groups for a more inclusive AI future.
Founder, Diverse AI