AI and large language models are becoming commonplace due to their promised productivity gains. But Hywel Carver, co-founder and CEO of Skiller Whale, says that there are ethical, legal and technical considerations to keep in mind

After the UK’s recent AI Safety Summit, Rishi Sunak’s interview with Elon Musk dominated headlines. The event marked a turning point in the fight for responsible AI. During the week, 28 countries, including China and the US, signed the Bletchley Declaration, the first formal international agreement committing signatories to design, develop, deploy and use AI in a manner that is “trustworthy, human-centric and responsible”. But how can this sober vision be balanced against the seemingly breakneck speed of innovation?

Artificial intelligence is evolving at a record pace. One of the best known large language models (LLMs), ChatGPT, reached 100 million users just two months after launching in November 2022. Such tools are making AI widely available, bringing the technology within easy reach of businesses and individuals all over the world.

The benefits are significant. For tech leaders and software engineers, AI can be used to automate, augment and accelerate work. It allows developers to write code faster, fix bugs more quickly and collaborate more effectively. Research by GitHub found that 92% of US developers are using AI coding tools both in and outside of work. More than half (57%) of those surveyed feel AI tools are helping them improve their coding language skills, and 41% believe such tools can help prevent burnout.

Balancing benefits with risk

There are still significant risks behind the opportunities highlighted above. LLMs can reproduce bias and negative stereotypes, spread misinformation, and make up false facts, known as hallucinations. Even Sam Altman, CEO of OpenAI (at time of writing), has admitted to being “a little bit scared” of AI because of the potential for bad actors to spread “large-scale disinformation”.

Even among software engineers, early evidence indicates that AI tools actually make junior developers 7% slower at producing code. This is presumably due to the extra mental effort required to review code that may be difficult to understand.

There are privacy concerns too. A growing list of organisations such as Samsung, Amazon, Apple, JPMorgan Chase, and Microsoft have already banned employees from using ChatGPT internally because of fears that it will leak confidential information.

And there are examples of black box algorithms already leading to mass miscarriages of justice. Tax authorities in the Netherlands used a self-learning algorithm to spot benefit fraud, which marked dual nationality and low income as risk indicators. Tens of thousands of families were erroneously notified of exorbitant debts to the state and more than a thousand children were taken into foster care. When the scandal came to light, the government resigned, and the Dutch Data Protection Authority fined the tax administration €2.75m under the General Data Protection Regulation (GDPR).

Black box algorithms are not only a product of AI. The British Post Office scandal did not involve any artificial intelligence, only an overreliance on flawed software. But AI does exacerbate the phenomenon, because it is harder to interrogate an artificial intelligence’s reasoning, and developers can easily put too much trust in a statistically generated result.

Software engineers have a pivotal role to play in preventing AI from doing harm. They need to balance value creation against risk and understand that everyday decisions in software design can have severe consequences for society.

HYWEL CARVER, Co-Founder, CEO, Skiller Whale

Evolving legislation

As proven by the Dutch case, legislative measures such as GDPR do provide some protection for individuals. The UK’s data protection watchdog, the Information Commissioner’s Office (ICO), has already warned developers against rushing to adopt powerful AI technology without being mindful of privacy risks.

Under UK GDPR law (which largely mirrors the EU legislation), AI systems cannot discriminate on the basis of race, gender, age or other protected characteristics, and should not be used to make solely automated decisions that significantly affect people’s lives. Where AI has been involved, there must be transparency about the process, and individuals have the right to challenge automated decisions.

And more legislation is coming. The EU’s AI Act, the world’s first proposed legislation in response to the rise of generative AI, is in the final stages of being agreed. It aims to ensure AI systems in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly. The framework will also give citizens and companies the right to sue for financial damages if they’re harmed by an AI system, and hold developers legally accountable for their AI models. Companies that aren’t on top of this face the prospect of hefty fines and reputational damage.

Targeted learning for the future

AI is developing faster than the regulation needed to keep such models in check. It’s therefore down to technology leaders and their engineers to become the first line of defence.

Developers can make great use of the opportunities provided by AI and LLMs, but they need to learn how to do so responsibly. Too many are already using those tools without that knowledge. It’s only a matter of time before something goes wrong.

Developers need to know about the ethical considerations and the rapidly evolving legal landscape surrounding the technology. They need to understand which tasks these tools are good at, and which are best left to human judgement. As developers work in partnership with AI, the role of the software engineer will evolve from creation to curation and become much more strategic. When teams understand how LLMs work, they will be much better equipped to predict, evaluate and innovate with the technology.

That won’t happen overnight, nor is it likely to be covered by the way most companies approach learning and development. Just allocating a training budget isn’t enough. We need to fundamentally shift how we think about team skills and workforce planning.

One answer is live team coaching, where the focus is on advancing the capabilities of an entire team in line with the company’s strategic goals.

The leadership task lies in mapping the team’s skills profile to identify knowledge gaps and prioritise the areas that would most benefit the business. Sessions are short, take place during the working day, and are led by a technical expert in real time. Teams can develop a deep understanding of a new skill in weeks rather than years. And they gain real insight into the strengths and pitfalls of LLMs, how to write effective prompts, and how to achieve better productivity.


Tech teams need acceleration to keep pace

Developers are often the ones at the cutting edge of technology, navigating the new tools, processes and responsibilities that progress brings. But they need to learn how to use AI effectively, conscious of its limitations and in a way that’s safe and secure. They need to be able to spot mistakes, minimise risks, and keep social impact front of mind. And they need to coordinate with the rest of the team and the wider business.

AI is accelerating the pace of change in a way that’s almost impossible to predict. Developers need a different sort of learning experience to appreciate the opportunities it brings, and the vigilance to prevent mistakes that could have grave consequences.

ABOUT THE AUTHOR

Hywel Carver wrote his first program in C aged 9. After graduating with an MEng from Cambridge, he dropped out of his PhD to co-found his first start-up. Today, Hywel is Skiller Whale’s Co-Founder and CEO. He runs a dinner club for CTOs, hosts a podcast for tech leaders called ‘Primarily Context-Based’, and is designing and building his own 8-bit computer for playing Pong. He has spent his 12+ year career building and scaling start-ups, and with Skiller Whale he is drawing on that experience to solve the biggest problem he faced as a CTO: learning for engineering teams.

Hywel Carver
Co-Founder, CEO, Skiller Whale