In this captivating interview, Nell Watson shares insights from her new book Taming the Machine, which delves into the profound advancements and ethical considerations of artificial intelligence. Watson discusses the transformative impact of deep machine learning on technology and humanity, emphasizing the necessity for ethical standards and safety measures in AI development. Her book provides a comprehensive overview of AI’s biggest challenges and practical advice on navigating its complexities responsibly.

Hello Nell Watson, why did you write this book… now?

Nell Watson: I have a background in Machine Vision, that’s teaching computers to make sense of things in visual form, such as pictures and video. I have patents in that area, and founded a company enabling body measurement from photos, which is still going strong. These technologies were strongly enabled by the deep learning wave from around 2012 or so. Problems which were intractable to solve through hand-written code suddenly became trivial when deep learning was mixed in. For example, we had tremendous difficulty in ensuring that our body measurement system was precisely measuring people and not the background behind them. Our system worked well, but only about 15% of the time, which was infuriating. We had a very skilled team working to improve our custom image segmentation algorithm. However, just as we would get the hips to fit right, the crotch would break again, or the arms and legs, and it was an exercise in futility. I joined a summer program at Singularity University in 2013 and learned about the power of deep machine vision. Soon our team was able to apply semantic segmentation techniques to several hundred examples we made in photo editing software. Armed with this dataset, our deep-learning-driven image segmentation process worked near perfectly for the first time.

It was then that I realized that we had made a huge leap forward, one that would compound and grow much further. Having gained a lot of experience with the growing power of AI, I became an evangelist for these incredibly cool new techniques. However, I also grew increasingly concerned, and felt a sense of responsibility to help steer humanity in a better direction, especially since I had contributed to these developments in my own small way.

For the past ten years I have worked to develop a series of standards and certifications for ethical AI, working with organizations such as the IEEE. Our work has taken AI ethics beyond mere principles and into directly applicable rules.

These tangible criteria enable AI systems – and the organizations behind them – to be benchmarked in a granular manner. This means that performance can be monitored, incentivized, and compared between respective solutions, greatly empowering consumers and regulators, and enabling best practices.

I have often been asked by others to write a book, to distill the learning I have gained in the past decade. Publisher Kogan Page reached out and gave me the structure I needed to do so. Taming the Machine provides a broad overview of the biggest issues in AI, along with direct, practical insights into how to make things better.

I am also proud to cover both AI Ethics (how we use AI responsibly) and AI Safety (how we keep systems aligned with human wellbeing), which are deeply interlinked concepts, yet which are rarely spoken of in the same breath. I wanted to cover the gamut, to ensure that the reader’s understanding could be fully comprehensive.

Which emerging trends do you believe in the most?

N. W.: We are about to see a huge leap forward with agentic AI systems which can adapt and achieve complex goals independently. These systems integrate with large language models to provide tools for innovation, logistics, and risk management.

This ability to manage by objectives while doggedly pursuing assigned goals makes AI an attractive option for high-level decision-making roles. However, the independent goal-achieving capabilities of agentic AI pose unique ethical and safety challenges compared to other AI systems, necessitating careful alignment of AI goals with human values to prevent unintended consequences.

For example, such systems may not understand when to stop pursuing a goal, even in the face of force majeure. They may misinterpret users’ intentions, instead following commands literally. They may take shortcuts to achieve goals quickly, though unsafely, and they may fail to understand the cultural and situational contexts of their environment of operation, or to account for the boundaries of their users and third parties.

Everyday people will soon need to grapple with these problems. In much the same way as we have licenses for cars and private planes, we may need some kind of license for working with sophisticated machine intelligences.

If you had to give one piece of advice to a reader of this article, what would it be?

N. W.: With AI, it’s not a case of ‘can we’, but ‘should we’. AI technologies can be incredibly powerful, but that in itself can be seductive. We can trust systems too much when they seem to work well 95% of the time, but we forget that 1 time in 20 things can go very wrong. As algorithmic systems are deeply enmeshed in our personal and professional lives, as well as our hospitals, courts, and militaries, things are increasingly going very wrong.

To benefit from AI, we need to use it in a careful and cautious manner, ensuring that there is a foundation of transparency, so we can understand what a system is doing, in what manner, and for whose benefit.

This gives us insight into how the system may be interpreting reality in an incorrect or unfair manner. It allows us to establish accountability, so that when things go wrong, we know it, and can prevent it in future.

Only with these elements in place can we sustainably enjoy the benefits from AI.

In a nutshell, what are the next topics that you will be passionate about?

N. W.: My book, Taming the Machine, has an associated animation which provides a précis of the topics. This also serves as a pilot for an animated series, which I’m working to get greenlit. This will provide another channel through which to proliferate the ideas within.

Beyond this, I’m working on a web/PC/mobile game which will provide practical hands-on experience of how to deal with AI ethics and safety issues within simulated professional workflows. This should equip the public with virtual experience for how to practically deal with these issues as they arise.

I’m also working on another book, which explores how the properties of physics can be applied to create a new system of ethics, finally uniting science and spirituality. That will take a couple of years to complete, however!

Thank you Nell Watson.

Thank you Bertrand Jouvenot.

The book: Taming the Machine, Nell Watson, Kogan Page, publication date not provided.