Depending on who you ask, AI either presents a dystopian future or a utopia of unlimited creative potential. Origin Hope CEO Blaise Hope offers this advice for executives looking to strike the right balance between human craft and artificial intelligence.

The rapid advancement of AI signals a revolution in content and in how it is created. But it's quintessentially human skills that will drive what happens next.

The past 12 months have rocketed AI into the mainstream, as ChatGPT's PR coup captured the global imagination. But major ethical questions remain, and many have been poorly articulated in public discourse. So for all the CEOs striving to make sense of it all, where do you begin?

Don't be confused about what AI is

With the advent of AI-powered processes, the risks to creativity are not as dire as some people seem to think. The world hasn't changed overnight, and inflating the issue only causes more confusion. Within many of our lifetimes we have already weathered an even bigger change than AI: the sudden onset of social media and the shift in how we consume content. More importantly, the technology behind now-famous AI tools is already ancient by professional standards; the public has simply become more aware of it.

What matters is this: criminals, bad actors, rogue states, and scam artists have been using these technologies for years to push out sub-standard content. So we must all learn to be better judges of the content we see, and of how and why it was created.

Human input is pivotal

AI tools learn from human input, so their output reflects it. Using them responsibly means accounting for biases, context, cultural sensitivities, and political considerations, and that is something ChatGPT cannot, yet, do for itself. More broadly, AI tools trained on datasets containing inherent organisational biases have discriminated against historically disadvantaged groups.

There have been many recent cases of AI tools being used irresponsibly, whether through ignorance, negligence, or deliberate malice. As AI becomes more integrated into our information landscape, there is a growing need for balance. AI-generated content should be complemented with human input, such as quotes from individuals, opportunities for personal reflection, or insights from experts, all filtered through human judgement. This balance helps ensure that information is spread ethically and responsibly.

It’s more important than ever for us all to analyse content with a critical, empathetic eye, to make sure nothing purely artificial gets out into the wild without first being checked for potential harm.

Accountability is key

On accountability, I hear two main arguments. The first is that responsibility for these issues should fall on the developers, or on those using open-source technologies to create AI tools. The second, and my preference, is that AI users must exercise diligence and take accountability for the tools they use, the content they generate, and what they share.

Crucially, ethical considerations should be woven into every aspect of AI, from development to deployment, rather than bolted on as an afterthought. Accountability must be a constant throughout the AI lifecycle to ensure a positive impact and minimise potential harm to individuals and to your business.

ABOUT THE AUTHOR

Blaise is a journalism and media professional with two decades of experience, beginning his career in 2003 as an intern at GQ and Tatler. Since then, he has held leadership positions in Jakarta, Illinois, and London before founding Origin Hope Media in 2019, with the aim of using the internet to help the editorial business evolve, revolutionising editorial content and helping companies reduce costs.

Blaise Hope
CEO,
Origin Hope