Transcript: Responsible AI episode two with Cambridge Consultants’ Head of AI Capabilities, Dr. Maya Dillon

This article is a transcribed interview between Todd Jordan and Dr Maya Dillon of Cambridge Consultants, recorded as part of The Show podcast series on Responsible AI.

You can listen to the audio here or read on for the transcript.

Todd Jordan

Welcome back to the CEO.digital show. I’m Todd Jordan, and this is the next in our series of episodes about responsible AI. Today’s guest is Dr Maya Dillon, Cambridge Consultants’ Head of AI Capabilities. Maya has a PhD in astrophysics from the University of Warwick, but left academia to explore the relationship between people and technology in data science. Maya now advises businesses on the intersection between human intelligence and artificial intelligence. Now, Maya is here to help us understand AI.

Dr Maya, welcome to the CEO.digital show.

Dr Maya Dillon

It’s a pleasure to be here.

Todd Jordan

Let’s start by getting to know more about you. Your biography is staggering, taking on both astrophysics and now artificial intelligence.

Why make that shift?

Dr Maya Dillon

Thanks for reminding me about my staggering biography. It’s as Steve Jobs said: you can only join the dots when you look back on your life.

I wanted to do something that was impactful and supported societal needs. To be bigger than myself. So, moving from astrophysics was a mixture of necessity and opportunity.

The recession hit in 2008, when I got my PhD, and it was difficult to keep finding opportunities in academia. It was a defining moment for a lot of careers, not just in academia.

AI was something I’d been interested in for a long time. I was either going to be a neurosurgeon, a palaeontologist or an AI expert. Some years after the recession, in the era of big data, I could suddenly do something in AI. I could take the skillsets I’d learned and have another go in a new direction. It was a mixture of forces outside of me and passions from within.

Todd Jordan

And look at where it’s led you.

You said what an unusual time that was and why that was the moment to make a leap. And here we are in 2023 talking about responsible AI. Now is the time to be having this conversation.

What do you think makes this moment so important to talk about, not just AI, but responsible AI?

Dr Maya Dillon

I’ll begin by going through my own short evolution. I started in 2013, did all the data munging and taught myself Python, a familiar journey for many data scientists and AI specialists. Before that, around 2011, this convergence of technologies happened. The Harvard Business Review was saying that 90 percent of data had been created in the previous two years. That’s data created by humanity, and it has increased exponentially.

It’s also machine-generated data. Ten years ago, we talked about the convergence of computational power, data volumes and our algorithmic capability. The idea around that time was about seeing what we could do. Let’s invest money in data science. Let’s understand the difference between machine learning and AI and how it affects business. The conversations were about it being trendy.

But that conversation has moved on. There’s a maturity that exists across industries, and a need for people to collaborate. It isn’t just about AI for the sake of it. People are seeing it as a mature play that will elevate an entire ecosystem of their business. And when you have something that’s transformative, impacting technology, data operations and end users, you think: how do we make this effective and safe and private and responsible for everyone?

Conversations around the responsible development of AI started a while ago. But I think we turned a corner with ChatGPT, and with Geoffrey Hinton, a father of AI, saying he has concerns about the way AI is being developed and that it needs guardrails and safety.

It’s a coming-of-age story for all of us. We’re realizing the amazing things we can do. We’re at this steep part of the exponential curve where innovation is rapid, and we should ask how and why we’re doing stuff as opposed to if we can.

AI is proliferating, we now have regulatory momentum, and the technical maturity is already here for us to push the envelope of what AI can do in the next decade.

Todd Jordan

Many listeners may be looking at internal governance. Where should they begin creating internal governance to regulate AI-powered innovation and how it’s used, making sure it’s done responsibly?

Dr Maya Dillon

There are several angles you can take. You want to bring stakeholders together to collaborate. Everybody interested in the development of the business needs to sit down and discuss what AI means to their organisation.

But it shouldn’t end with aspirations of what we could and should do, because invariably the output of those conversations is challenges they can’t answer or address. And yes, this is an appropriate opportunity to plug what I do at Cambridge Consultants, right?

We’ve been at the cutting edge of innovation for 60 years. One of the things I’m seeing is that we’re always questioning how we develop technologies and implement them, not just what we could do next.

In that process, we’ve developed an AI assurance framework that looks at safety, ethics, responsibility and technical robustness. It’s now become a blueprint, and we’re able to be slightly ahead of the curve. I hope we can maintain our momentum.

We’ve had some bumps in the road asking next-level questions, like: when is something fully assured? Talking to an organisation that’s had that experience is something everyone should be able to leverage.

You’re not alone on this journey. And now we’re much more receptive to hearing what others are doing. Share and learn. Everyone is concerned about it.

For internal processes, start by referencing all the governance materials and guidelines that are out there. There’s the EU AI Act, there’s the Blueprint for an AI Bill of Rights in the States, and the UK government has a pro-innovation plan for AI.

Look at this information and ask, how does this apply to my organization? And how can I leverage this? That’s a good starting point. Then you start figuring out guidelines and policies internally.

A key part of all of this is transparency. Communicating constantly: this is what we’re doing, why we’re doing it, why it matters. And upskilling people as you move along.

Finally, when it comes to putting these processes in place, make sure that there’s a feedback loop. There’s no point in just barging ahead. It’s a very complex topic. No one’s perfected it.

Best practices are found if we iterate through feedback and, where possible, do pilot testing and monitoring. Putting money, time and opportunity into exploring and playing with these things will be beneficial.

Todd Jordan

Great places to start. It can feel like a maze. But once you start to get momentum, if you can see where first steps are, then hopefully the maze falls away, and the pathway becomes clear.

Dr Maya Dillon

It begins with those first questions. What does AI mean for us? How can we leverage it? And then, how do we innovate responsibly? How do we accelerate our time to market, but make sure that what we create is right for our organisation and its end users?


Todd Jordan

Yeah, great questions. The tech world has always been about moving fast, breaking things. We ask for forgiveness, not permission. Bold moves are rewarded. It’s an unusual point now where we’re saying, maybe we should go a tiny bit slower so that we can go further?

Do you think that’s because we understand that there are greater risks involved with this technology?

Dr Maya Dillon

Definitely. I think AI is an industry that’s screaming for regulation. I’ve only been in it a few years, but going back a few generations, you’ll struggle to find a technology that’s demanded so much attention around pacing and managing expectations.

I think for most people, this is completely unheard of. The reason is the convergence of technologies, algorithms, maturity and understanding of how AI could be applied.

About five years ago we were talking about responsible and democratic development of AI. Now we’re stakeholders in it. And we’re aware of the impact that AI can have. Businesses are now accountable much earlier in the process.

Human nature wants us to create things for the benefit of everybody. But we are driven by two forces: the need to enjoy things and find pleasure, and the desire to avoid pain and risk. And we’re suddenly realizing that if we don’t get a handle on this, it could end up with a lot of negative side effects.

Todd Jordan

That’s the thing. It’s not about stopping progress; it’s about moving forward securely. The organizations that act mindfully now are going to last a long time. They’re the ones that are going to exciting new places with AI.

The ones who don’t take the time to put seatbelts in their cars will move quickly but then suffer issues: trust, workforce problems, maybe a data breach taking down a giant. It seems like anything could happen with the pace of change.

Do you think it’s even possible for us to make predictions about where things are going? Is any of what I just said likely to happen?

Dr Maya Dillon

I think so. I’m already seeing the desire to do things correctly. I say that the future of business and AI is not about those who do it first, but those who do it right: focusing on things like assurance, the responsible development of AI, security and privacy by design, and making AI human-centric.

A lot of organizations think: why should we do it if the big players don’t? Sadly, given the way consequences and legislation are enforced, the big players can get away with a fine.

If you’re a startup or a mid-sized company and your margins are small, repercussions impact the way you do business. So, it may not be beneficial to cut corners to progress quickly.

Take a step back and ask: what does the future hold? Take into account business and user input, and apply things like agile methodology. You create solutions that are robust, resilient and fit for purpose, solutions that can support organizations in times of uncertainty. Things are happening much faster than anyone expected, so we need the ability to pull things forward when timelines change.

There’s a lot happening now. We have to dig deep. We have to understand the different avenues and the complexities. We may not get it right the first time, but by putting transparency and accountability into the process, we can look back and say we should do it this way or that way. We can mitigate risks and limit the impact of negative outcomes.

Todd Jordan

That’s such a good point, especially about the sudden jump forward in advancements that seemed such a long way away. My experience of technology has mostly been, well, where’s my jetpack, where’s my flying car? All of these things should be here by now, and they’re not. And then all of a sudden, it’s happening quicker than expected. You mentioned a term: AI assurance. Could you explain what that means and maybe point me in the right direction to learn more about it?

Dr Maya Dillon

When we talk about AI assurance, we’re referring to a set of practices, frameworks and strategies that an organization implements through its operational processes, creating auditable trails so it can develop AI safely, ethically and reliably. AI assurance takes those concerns around privacy, security, fairness, transparency, bias and accountability into account.

If you address all of those things, you have something quite magical: trust in AI. The future of AI and humanity is deeper collaboration and integration of technologies. It’s going to support us in augmenting our innovation and taking it forward, but first we need to understand the technology.

In short, AI assurance is the set of processes, frameworks and regulations that build trust in an AI system, covering things like security and privacy.

Todd Jordan

AI assurance, as you just described it, should be part of the conversations that business leaders are having sooner rather than later.

Am I correct in thinking that you attended the Global AI Summit in London earlier this year?

Dr Maya Dillon

We had the AI Summit in London in June, where I had the pleasure of delivering a presentation about generation-after-next AI. It was a fun opportunity because we talked about what AI might look like in five or 10 years’ time. How will it impact our lives? What are the ways we would interact with it?

It’s a very exciting time to be involved in AI.


Todd Jordan

When you get together with other AI experts at these events, is there anything that’s gotten you all excited that hasn’t yet hit the mainstream?

Dr Maya Dillon

Those who may find themselves stalking me on LinkedIn or anything like that will know I’m incredibly passionate about three things: people, planet, profit.

I do believe that the development of AI can benefit everybody. Touching on people and planet, one thing that isn’t making waves yet, but will next year, is AI providing assistive help.

We’re already seeing multiple projects and organizations looking at how AI can support individuals with learning disabilities, cognitive impairments, and physiological impairments. I’m excited to see how those systems will understand, empathetically, the state of a human to provide the appropriate support. That’s something as well that we’re very passionate about at CC.

We have a team we call HMU, Human Machine Understanding. And our work there is all about developing AI that’s human centric, that supports critical decision making in individuals and entire teams.

And the other side of things is how AI can support predictions around climate change, conservation of species and the like. We are now fully aware of the impact that adverse weather events are having on everything from agriculture to infrastructure. We have now, sadly, moved to the realm of looking at early warning systems, not just mitigation.

So, how do we reduce the effects of climate change and how do we get ahead of it? I think AI will support us when it comes to earth observation, warning and monitoring systems for adverse weather, and enhancing food production whilst managing environmental sustainability.

Not the sexiest of topics, but one that is needed and something that we must address fast. It’s a vital area of research and practice.

Todd Jordan

Would you like to talk a bit about how Cambridge Consultants can help business leaders find out more about using AI responsibly?

Dr Maya Dillon

Our organization has divisions focused on everything from aerospace to med tech, energy, agriculture, and telco. You name it, we’ve had experience in it over about 60 years.

We thrive when our engineers look at innovative problems. We love pushing the envelope. And we have the assurance framework in mind when we’re doing it. But the problems we look at are not iterations or slight improvements of existing solutions. We’re looking at things that aren’t on the horizon for the next year, but for two, three or four years’ time.

We do offer advisory services, we’re a consultancy, but one of the reasons I joined CC is because I love that we build things. My inner nerd was super excited when I could walk around and see our real products of invention. On the website, you can watch my talk from June at the AI Summit. We have papers on there, like the AI assurance briefing paper. That’s a good stepping stone to an initial conversation about what AI means to your organization and how you can assure it.

We delve into how we break down complexity and what it means for different markets. There are also some news articles in multiple areas worth exploring. Listeners can even reach out to myself and the organization directly or meet us at conferences. We’re more than happy to have conversations with people.

Todd Jordan

Fantastic. I’m sure there’ll be lots of listeners who’ll be very keen to do that. I suggest everyone watches the video of your talk from the AI Summit. It’s great stuff.

Before we say goodbye, any final words on responsible AI for people listening?

Dr Maya Dillon

I want to remind people that AI can’t develop or progress without our input. It’s not running away from us. We’re innovating at an accelerated pace, but we’re putting the right frameworks in place. We’re having the right conversations. My final thought would be: none of this means anything unless we collaborate across disciplines, across sectors, and across organizations so we can learn from each other. The greatest benefit for all of us is if we communicate and collaborate.

Todd Jordan

Everybody listening – don’t forget to stay curious, collaborate, and ask questions. Thanks again to Dr Maya Dillon for sharing her knowledge on responsible AI. If you enjoyed this episode, like and subscribe wherever you get your podcasts.

And don’t forget to watch out for the next episode coming very soon, and visit CEO.digital for articles and more on responsible AI and other subjects.

Until then, thanks for listening.


Dr Maya Dillon is an AI expert with a love for science, technology and its application in augmenting human ingenuity to empower the world. She has had a varied career, from astrophysics to running her own business as an executive coach to working as a data scientist, before finding her platform at Cambridge Consultants as Head of AI Capabilities.

Dr Maya Dillon
Head of AI Capabilities, Cambridge Consultants