Transcript: Responsible AI episode three with AI Ethicist, Uthman Ali

This article is a transcribed interview between Todd Jordan and AI ethicist Uthman Ali, recorded as part of The CEO.digital Show podcast series on Responsible AI.

Todd Jordan:

Hello and welcome back to the CEO Digital Show. I’m Todd Jordan, and this is the third episode in our deep dive into responsible AI. Today’s guest is AI ethicist Uthman Ali. Beginning his career as an undergraduate in law, Uthman quickly found himself drawn to the philosophical aspects of robotics and ethics.

After completing a master’s in AI ethics and taking on roles in FinTech and engineering, Uthman developed a core fascination with transhumanism: examining what it means to be human in the age of machines, balancing imagination, excitement, and caution about technology.

Uthman, thank you for joining us.

Uthman Ali:

Thanks so much for having me.

Todd Jordan:

You’ve got an interesting background, from your career and general interests to your studies. I’m curious about the combination of law and philosophy. Has that given you a unique view on technology use?

Uthman Ali:

Digital ethics is a new field; you’ll find most people starting these teams have legal backgrounds. In law, you have to be an all-rounder: you understand risk, you have business acumen, and you put it all together. Digital ethics is about the fundamental preservation of human rights. Things like the right to privacy, to family life, to life itself. All of these have been codified in human rights law. But it’s about ensuring they’re appropriately applied when using digital technologies.

Todd Jordan:

It’s a very interesting area. I think the EU already has the legal right to be “forgotten”, and I can imagine that becoming more sought after as technology developments continue apace.

Uthman Ali:

Yes, people want to preserve their right to privacy. That’s a large part of digital ethics, predicting future legislation. There’s this misconception that AI is the wild west and there’s no regulation when it comes to digital tech.

Lots of laws exist around privacy, copyright infringement, consumer protection, and human rights. But it’s about making that connection so people can see how AI relates to these things. Because if you mention the word AI, they think of existential risks and movies like I, Robot.

When you explain how bias can infiltrate training data and create discriminatory outcomes, they start to see the connections. It might seem abstract at first, but when you really understand the risks of AI, you realise what this field is about.

Todd Jordan:

You just touched on what irresponsible AI might look like. Is there a demographic or business area that is most at risk? When we talk about being responsible, who are we trying to protect?

Uthman Ali:

It’s about protecting all of society. If you look at a big AI risk like misinformation, it doesn’t matter which end of the political spectrum you’re on, what your net worth is. Everyone can be negatively impacted if misinformation spreads and impacts elections, for example.

This is a large-scale issue, so when you look at the risks we face, you might find that certain people, often those who are more marginalised, are at greater risk.

Todd Jordan:

I remember there was a day when all of Twitter was fooled by an AI-generated picture of the Pope in a big puffy jacket. It was pretty frivolous. But it doesn’t seem unimaginable that something genuinely dangerous could get out there that people interpret as real.

Uthman Ali:

This is something I’m concerned about. If deepfakes or misinformation spread, it’s the erosion of truth and trust. What do you believe? How do you know if something is true?

And it sparks a bigger conversation about what it means to be authentic. For example, me speaking on this podcast – I could get to a point where I can use AI to generate my voice and predict how I would have answered these questions based on my historical data.

How would people listening feel if I didn’t put time and effort into doing this and just outsourced my thinking to AI?

Todd Jordan:

For business leaders, trust and authority are the basis for everything. So as exciting as this technology is, we need people to start thinking about those kinds of threats.

What would you say to somebody who owns an SME, who doesn’t think they need to hire an AI ethicist, or doesn’t think they are big enough yet to be worrying about this?

Uthman Ali:

Everyone’s entitled to their opinion. But there’s commercial benefit in taking AI ethics seriously. For example, if you understand AI performance risks such as data privacy issues, copyright infringement, biased or discriminatory outcomes, and the black-box problem around explainability, then you understand the products you’re getting and their performance limitations, so you can make an informed strategic decision about your AI use.

You’ll also know what you’re buying and its applicability to your goals. And if you’re in a risk-averse industry and a technology consumer looking to procure AI, then you should think about where you’re buying from.

Ask yourself: Where am I registering it? Do I have a purchase inventory? Do my employees and staff want this? Do I know how to use it? Do I know how to write effective prompts? Am I concerned about job displacement? How can I improve confidence so we get maximum value from this?

This is how you reduce transformation anxiety with AI. And that is the key. Because ethics, if done correctly, is an enabler for digital transformation.

Todd Jordan:

You make a great point. It’s about transparency of when and how AI is being used. For employers, consumers, staff, and for everyone in between.

And once you start unpicking that as an issue, all of these other issues start appearing.

Uthman Ali:

For commercial value, you get faster adoption of the technology if you consider ethics by design at the early stages of building products. You build more inclusive and accessible products that reach a larger market.

You also build products that are more robust. It’s just good product development to consider ethics throughout the lifecycle.

Todd Jordan:

Taking it back to responsibility, who is accountable if something goes wrong? Is it the people who develop the tools? Is it those who deploy them? Is it lawmakers?

Uthman Ali:

When you’re looking at legal liabilities, it often depends on the specific AI failure, the use case, and the facts of the case.

But it could be any of those you mentioned. If I build an AI hiring tool, give it to my customer to do with as they want, and my customer uses it for use cases it was not intended for, leading to discriminatory outcomes, then that’s liability on the customer’s side. They didn’t follow the guidelines and were warned about foreseeable harm. They ignored guidance, which led to a contractual breach.

But even with emerging AI regulation, a big talking point is foreseeable harm. Because with AI ethics, the ethical failure often results in unintended consequences. Few people are bad or discriminatory on purpose. It happens because you never thought about it throughout the technology lifecycle.

Foreseeable harm is about what issues could arise when using the tool. How it could go wrong. Was it foreseeable? Could this have been prevented and was it negligent?

Todd Jordan:

It reminds me of the Milgram experiment, where people thought they were being asked to give dangerous electric shocks, and because a man in a white coat had told them to, they did it.

So when do we overrule it? When do we step in and say no, I’m not going to give that shock or no I will hire this person even though AI thinks they’re not right for the job?

Uthman Ali:

This is the peculiar thing about AI. It is autonomous decision-making. It’s making predictions. It might predict the next word in a sentence or how you should answer a question. But you must have your own judgment and your own expertise.

With generative AI, the output could be accurate or inaccurate. Does it need to be fact-checked? Who do I ask to fact-check it? You must think these things through, and that’s where human ingenuity and creativity come in. Fact-checking and reviewing will become increasingly important skills as more content is generated by AI.

Generative AI tools produce things so brilliantly that it’s easy to be hoodwinked. And as humans, we’re quite lazy. We want the fastest solution to something. So, when you see something looking good and it checks out, you’re tempted to repost it online or put it in a report.

You need the discipline to fact-check or peer review things, because there could be bias or misinformation. You need a human in the loop for meaningful review. We have automation bias: when we see things produced by machines, we perceive them as objective or trustworthy. A machine is mathematical; it’s just dealing with facts and figures. We assume it will be correct, even when we know that’s not always the case.

Todd Jordan:

The crucial distinction seems to be the data we train AI on. It’s built by humans and trained on human data.

Do you think AI tools can ever be truly impartial in ways that humanity can’t be?

Uthman Ali:

Philosophically, we can get into a debate about whether it’s possible to be impartial. But today’s AI will reflect its training data.

Historical patterns will be reproduced. I get asked about the difference between a biased human and biased AI. The distinction is that as human beings, our biases are shaped by our cultural differences and stay relatively local. AI can aggregate across the world.

If a biased AI chatbot is used across the world, then that bias will reach the whole world at once. The amplification of harm is huge compared to a human bias which only impacts people within your immediate sphere.

Todd Jordan:

Whichever major generative AI becomes the most popular in the coming years could be used everywhere, applied to problems that might look very different in, for example, China as opposed to Kenya or the UK.

Uthman Ali:

The impact of misinformation or discriminatory AI outputs can worsen depending on the political climate.

For the manufacturer, it’s not foreseeable from within an office. It’s a concern that, if scaled across the world, will impact certain groups more than others.

Todd Jordan:

As more people understand generative AI, do you see it changing the way that we interact with each other? Is it going to influence our behaviour?

Uthman Ali:

How AI will change human behaviour is an interesting one. Usually, if I get anxious travelling, I just speak to a mate to double check if I’ve got everything or if I’m overthinking. But last time I used an AI chatbot and it gave me a detailed list of all the things I should be considering.

And it was amazing. But what if I do this for everything else in my personal life? What if everyone does, and we speak to each other less and less? How would that influence human behaviour?

Imagine you’re in a relationship and things aren’t going well. Instead of speaking to a counsellor or friend, you ask what AI thinks about an argument you had. Then you get infiltration of AI within everyday life, and it becomes personal instead of about utility. It could be really good, because it’s more conversational, people feel more confident discussing their feelings. But where’s the data going? And how is your personal information being used for training?

Todd Jordan:

Are there people already trying to build an AI CEO? How far away are we from that?

Uthman Ali:

There’s a show called Westworld which did a great job of depicting how an algorithm could be used in the boardroom. There’s a lot of hype around this. Can you have an AI CEO that makes strategic decisions? We’re a long way off that. You can ask it to simulate different board members and that sort of thing. Again, it will only be as good as the data it’s trained on.

Over time, it will be interesting to see what authority AI will have over decision making. If you don’t have meaningful human reviews and people who are confident in their own expertise over an algorithm, it could lead to decisions in your company being made by software without human oversight.

Todd Jordan:

That keeps coming back as a core area, that we need someone in charge to ensure it’s working correctly. Otherwise, it strays into the realms of science fiction.

Uthman Ali:

For digital ethics, science fiction is amazing. If you look at Black Mirror, it does a fantastic job of showing these unintended consequences. How things spiral when you just don’t consider digital ethics.

Todd Jordan:

Now for an optimistic point of view. What can small to medium enterprises do to ensure they’re protecting their business, clients, workforce, and reputation?

Uthman Ali:

Upskill on AI. How it works, when to use it, why it’s being used so much, what use cases are appropriate. Any learning and development program you give your employees should include responsible AI, AI ethics, trustworthy AI.

You have to understand the performance limitations and risks of the tools before you encourage your employees to use them.

Todd Jordan:

I don’t know how much people understand about how AI is trained on our data and how much information we’re giving away.

Is there risk involved with trying to seek advice on important business decisions? Could that lead to a data leak?

Uthman Ali:

Chatbots will use your data for retraining. If you paste a spreadsheet, you’re giving away confidential information and who knows where it will be used. Your ideas could potentially be given away. As a company you’ve got to understand the risks of using these tools. It comes down to the importance of training.

Todd Jordan:

I read that ChatGPT is getting dumber as time goes on, because it’s being trained on things that were produced by ChatGPT. Is there truth in that?

Uthman Ali:

There is some truth to that. It’s a term your listeners might not like, but I’ve heard it called digital incest.

Generative AI tools produce content which people cut and paste online. Then the AI tools reuse that for training data. And it keeps suggesting the same to others, which could include misinformation and inaccuracies. It’s a feedback loop; almost like you’re tying a knot, and it gets tighter and tighter because people keep reusing the same rehashed stuff.

But I also think we’re getting smarter. When ChatGPT first came out, the revolution was the interface and design: it was like a search engine that seemingly knew everything. Now people have become more educated. We’re wary of inaccuracies and misinformation. I’d argue that we’re smarter as a society, aware that we need to be vigilant.

Todd Jordan:

I think you’re right. This is liberating technology that can help us spend more time on important problems and hand over laborious thinking. That allows us to oversee things and focus on where to break from tradition and take leaps of inspiration. We’re here for the fun stuff, right?

Uthman Ali:

It’s funny you mentioned that because this was always the sales pitch for AI. That it would do the dull, dirty, dangerous jobs. Now generative AI is producing movie scripts and images and music. I know creatives who have said to me, “you said this would do boring things, but now it does the fun things!”

These creativity tools are great for everyone. You can find remarkable ways to use generative AI. And there’s this new subculture called HustleGPT, which is essentially teenagers finding unique ways to use AI tools for small business ideas. It’s empowering. You feel like you’re starting from a blank slate, and then some teenager says, “I’ve got a really good idea of how I can use this.” They might know little about AI, but they come up with cool ideas using that democratisation of access.

Todd Jordan:

Is there anything exciting happening with AI that hasn’t yet hit the mainstream consciousness? Any scoops you’d like to break for us?

Uthman Ali:

A few come to mind. One is the ethics of brain computer interfaces and neurotechnologies. It’s interesting from an ethical perspective because of how it can transform society. For example, you could have BCIs, which are brain implants that could potentially end certain forms of paralysis.

But then who gets access to this first? And you have to ask: Why stop with just ending paralysis? Why not use technologies for enhancement?

There’s one company that makes robotic limbs for amputees. It’s brilliant what they’ve done, because the robotic arms have given touch sensitivity back to amputees. And they’re making them affordable, because they want everyone to have access to this technology. But where do you draw the line if you have a robotic arm that’s super robust, like something out of a comic book movie? Where does this go next? And who has the right to tell you to stop augmenting yourself? If you look at technologies like Neuralink, they’re looking at augmentation: the idea that you could have telepathy or superhuman-like abilities.

With transhumanist technologies, when do you have to start asking what it means to be human if you augment yourself that much? Going back to human rights law, a lot of legislation was based on this universal understanding of what it means to be human. But if that definition changes, what else needs to change?

It goes back to the Ship of Theseus thought experiment: you have a ship that’s sailing, and over time you keep swapping out one plank of wood for another until eventually every bit of the ship has been replaced… Is it still Theseus’s ship in the end? Or do we need to call it something else?

With these new technologies we must have these conversations early. These are the trends we’re going to see over the next five to ten years.

Todd Jordan:

Thanks, Uthman. Before we go, do you have any recommendations for further reading, any books, websites, documentaries?

Uthman Ali:

I would love to recommend books, but we’re in the TikTok generation where our attention span can’t go beyond six seconds. As a caveat to that, I’ll offer documentaries which I think everyone would love.

The Social Dilemma is an amazing documentary looking at social media companies. Another absolutely amazing one is Coded Bias, featuring Dr. Joy Buolamwini, looking at the issues around bias and technology. If you watch those two, you’re off to a good start.

Todd Jordan:

I’ll just get ChatGPT to summarize it for me. Is there anything else that you’d like to add?

Uthman Ali:

AI ethics is an interesting and confusing field, and a business owner might wonder where to start. If you’re looking at this commercially, start with what you can control: what your company is actually doing with AI. What’s your strategy? Is responsible AI within that strategy? Think it through and narrow down your risks. Create guidelines, have policies and procedures, and proactively look at emerging regulation.

Todd Jordan:

I love knowing there are people like you out there thinking about this.

Uthman Ali:

And we need more of us, which is why if you could start upskilling your staff, that would be great.

Todd Jordan:

We’ll start right now. Thank you so much, Uthman. It’s been wonderful having you on the show.

Uthman Ali:

Thanks for having me. I really enjoyed this.

Todd Jordan:

Thanks again to Uthman Ali for his time as we explore these ethical AI conundrums. If you enjoyed listening, like and subscribe wherever you get your podcasts and keep an eye out for more.
Thanks for listening.

ABOUT OUR GUEST

Uthman Ali is an experienced AI Ethicist, passionate about guiding ethical and responsible AI adoption to benefit humanity. He’s an expert in AI ethics, digital policy, human rights law, and strategic partnership development to drive transformative change. He has worked for household names, advises the World Economic Forum and AI startups, and advocates for creating an AI-aware society.

Uthman Ali
AI Ethicist