
AI Ethics expert Olivia Gambelin on why you really need a Responsible AI strategy

November 4, 2024

When it comes to technological innovation, AI Ethicist Olivia Gambelin wants to know ‘why’ rather than ‘how’.

“I grew up outside of San Francisco in the Silicon Valley bubble,” she says. “In that area, you’re a guinea pig for the local startups and the different emerging tech and tools. But I was always frustrated with the cool tech that served no purpose. I could see what these products and solutions did – but what’s the why?”

Rather than heading straight into the field of technology, Olivia studied philosophy at Texas’ Baylor University, which eventually led her into the field of ethics: “Whether we’re cognisant of it or not, we live by our values, and being able to pick apart and understand those almost hidden forces that are driving different motivations is fascinating for me.”

She says the “cheesy, lightbulb” moment came while she was freelancing at the start of her career. While researching data privacy in Brussels at the point when GDPR was being introduced, she came across the term ‘Data Ethics’ for the first time. “It combined the industry I grew up in with my love of ethics – and I knew there was something practical I could do with it.”

After completing a Master’s in AI Ethics at the University of Edinburgh, she founded Ethical Intelligence – a professional network of Responsible AI and Ethics consultants. Today, she works with business leaders and product teams, both directly and through Ethical Intelligence, on the responsible development of AI that is more closely aligned with human values.


Responsible AI development

Olivia’s first book, ‘Responsible AI: Implement an Ethical Approach in Your Organization’, is a guide for business leaders who want to implement a Responsible AI strategy. But what if such a strategy is not on your company’s radar?

“If you don’t understand why you need a Responsible AI strategy at this point then you’re too far behind,” Olivia suggests.

“I know that sounds harsh, but Responsible AI has been proven over and over again to reduce AI risk by 28-30%, which is unheard of when it comes to reducing any type of risk in AI. It’s been proven to improve product quality and employee satisfaction. There are so many benefits that when you quantify it, you wonder why everybody isn’t doing this. It’s just good business practice at the end of the day.”

Olivia approaches Responsible AI in two ways. With Ethical Intelligence, she focuses predominantly on AI governance, risk mitigation, and other more ‘traditional’ ethics work. Beyond the company, her personal research centers on ethics-by-design and how human values are used as drivers of innovation.

AI ethics should not be viewed as a blocker to innovation, she says, pointing out that technology and ambition needn’t be reined in just because you’re taking ethics into account. “If the technology isn’t reflective of what people actually need, then it’s not innovation; it’s almost useless. I think the definition of good technology is technology that actually meets a person’s needs, and you have to have the value alignment to do this. AI ethics is a guide to how to make technical innovation actually useful.”

Practical implementation of AI Ethics

Having a plan to ensure an organization’s products reflect these values is one thing, but putting it into practice quite another. “Ethics can be very abstract and high level,” Olivia admits, “and it’s easy to lose sight of when you’re in the trenches of the day-to-day of building something.”

There are practical steps that business leaders and AI practitioners can take to ensure that AI ethics is applied at every stage of a product’s lifecycle. To that end, Olivia has developed a novel Responsible AI management tool called the Values Canvas. It uses a matrix of nine points of impact where an organization’s ethical values can be developed and documented, grouped under three pillars: people, process, and technology.
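Purely as an illustration of that structure, here is a minimal Python sketch of a Values-Canvas-style record: nine cells formed by the three pillars crossed with three entry types. The entry names and methods are placeholder assumptions for this sketch, not the actual categories of Olivia’s canvas.

```python
from dataclasses import dataclass, field

PILLARS = ("people", "process", "technology")
ENTRIES = ("define", "embed", "measure")  # hypothetical column names

@dataclass
class ValuesCanvas:
    """Illustrative record of nine points of impact for one value."""
    value: str                      # the value being operationalized
    cells: dict = field(default_factory=dict)

    def document(self, pillar: str, entry: str, note: str) -> None:
        """Record how the value is addressed at one point of impact."""
        if pillar not in PILLARS or entry not in ENTRIES:
            raise ValueError(f"unknown cell: {pillar}/{entry}")
        self.cells[(pillar, entry)] = note

    def gaps(self) -> list:
        """List the points of impact with no documentation yet."""
        return [(p, e) for p in PILLARS for e in ENTRIES
                if (p, e) not in self.cells]

canvas = ValuesCanvas("fairness")
canvas.document("people", "define",
                "Train reviewers on the shared definition of fairness.")
print(canvas.gaps())  # eight cells still to fill in
```

The point of the structure is simply that no pillar can be skipped: an undocumented cell shows up as an explicit gap rather than an unexamined assumption.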

“The theoretical side of practicing ethics is that you are using these values as a decision-making tool. So, when it comes down to choosing A or B, you assess which option is better aligned with the organization’s end goal and with the values it wants to embed into the system.

“What that looks like – the practical side of practicing ethics – can be things such as formal training that teaches the skill set for that kind of decision making. You can also build out frameworks, or adapt frameworks that are already in place. Let’s say a company has a procurement framework, for example; you can add questions in there specifically targeted at value alignment. Beyond that, there are different software tools that allow you to scale that kind of monitoring, and you can put decision-guiding policies in place so that teams know what to align with. There are lots of different ways to bring this to life.”
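As a concrete, entirely hypothetical example of the procurement idea above, the sketch below folds value-alignment questions into a vendor review gate. The questions and the pass rule are illustrative assumptions, not an established framework.

```python
VALUE_ALIGNMENT_QUESTIONS = [
    "Does the vendor document how training data was sourced?",
    "Can the tool's decisions be explained to affected users?",
    "Does the vendor report bias testing against protected attributes?",
]

def review_vendor(answers: dict) -> bool:
    """Pass only if every value-alignment question is answered 'yes'.

    `answers` maps question -> bool; unanswered questions count as
    failures, so gaps surface during review rather than after rollout.
    """
    failures = [q for q in VALUE_ALIGNMENT_QUESTIONS
                if not answers.get(q, False)]
    for q in failures:
        print(f"NEEDS FOLLOW-UP: {q}")
    return not failures

# Example: one unanswered question blocks sign-off until resolved.
ok = review_vendor({VALUE_ALIGNMENT_QUESTIONS[0]: True,
                    VALUE_ALIGNMENT_QUESTIONS[1]: True})
print("approved" if ok else "escalate to review board")
```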

Cross-organizational collaboration is essential when building out these frameworks. “A lot of the work that an AI Ethicist does is helping to break down communication silos,” Olivia reveals. “For example, I’ve gone into companies where data science teams are all working on different definitions of fairness. When it’s a larger enterprise, a lot is lost when communication between departments or disciplines isn’t aligned, and that causes its own problems.

“With regard to the technology itself, it’s the quality of your documentation, it’s your data management, it’s getting feedback loops in place with your end users.” But Olivia suggests that nine times out of ten, the problem is in the people or the process rather than the technology: “AI is great at doing what you’ve told it to do, but if you don’t know what you’ve told it to do then you’re starting to have some problems there.”

AI opportunities, threats and irresponsible design

To understand the value of Responsible AI, it’s necessary to understand what irresponsible AI looks like. Olivia defines this as setting out to develop AI without any clear practices or protocols in place. “Irresponsible AI is profit at all costs – a ‘we’re going to innovate and figure out later what the impact is’ mentality. That works in the short term. For a very small amount of time you may be able to make a big profit, but it doesn’t last.”

There are real-world ramifications of irresponsible AI design, including security risks, reputational damage, privacy violations and algorithmic bias. The latter raises issues around the use of AI in content moderation to safeguard users, but ethical AI initiatives can help to mitigate these problems. “The first step is at the educational level – understanding that AI introduces bias,” Olivia suggests, “and that using AI for content moderation does not mean the moderation is bias-free, just that you’re automating the bias that already exists in it.

“When it comes to applying AI ethics in that situation, it’s about being able to pinpoint what that bias is. Is it the bias we actually want, or is it detrimental to our goals? And checking whether there’s anything particularly skewed towards protected attributes. Then it would require a plan for the point at which a human needs to be brought in to check the work and guard against a different kind of bias,” she explains. “As I understand WebPurify’s offering, this is something you do in addition to AI and human content moderation. That is, your human team assists companies with prompt engineering and content vetting for training balanced models, which is great.”
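To make that skew check concrete, here is a minimal sketch of the kind of monitoring Olivia describes: compare automated flag rates across a protected attribute and escalate to human review when the gap crosses a threshold. The record format, field names and ten-point threshold are assumptions for illustration, not WebPurify’s actual pipeline.

```python
from collections import defaultdict

def flag_rates(decisions):
    """Per-group rate at which the model flagged content."""
    flagged, total = defaultdict(int), defaultdict(int)
    for d in decisions:
        total[d["group"]] += 1
        flagged[d["group"]] += int(d["flagged"])
    return {g: flagged[g] / total[g] for g in total}

def needs_human_review(decisions, max_gap=0.10):
    """True if flag rates across groups differ by more than max_gap."""
    rates = flag_rates(decisions)
    return max(rates.values()) - min(rates.values()) > max_gap

decisions = [
    {"group": "A", "flagged": True}, {"group": "A", "flagged": False},
    {"group": "B", "flagged": True}, {"group": "B", "flagged": True},
]
print(flag_rates(decisions))          # {'A': 0.5, 'B': 1.0}
print(needs_human_review(decisions))  # True: gap of 0.5 exceeds 0.10
```

A gap on its own doesn’t prove harmful bias – the underlying base rates may genuinely differ – which is exactly why the plan Olivia describes routes these cases to a human rather than auto-correcting them.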

We’re at a pivotal point in the evolution of AI, with both the positive and negative impacts of the technology now part of the public discussion. When it comes to the opportunities available, Olivia is keen to explore how AI can free up time for the aspects that make us uniquely human, and how to create an environment where those aspects thrive: “How do we design our AI in a way that enables us to be more creative, to have more curiosity, to be more compassionate towards each other? Not as a way to automate or enhance that, but just to free up the time-and-mind space to encourage it.”

Without ethics being part of the conversation, though, AI is not going to be aligned with the purpose and values of what we need as people. “AI is a human story,” she says. “These tools impact people. Every Silicon Valley startup sets out to ‘change the world’. But if you’re sitting there going, ‘I don’t really like how the world’s changing’, well, that’s because you’re not actually aligning with the needs underneath. So I would say the threat is more one of the world continuing to change in a direction we’re beholden to, without any control over it.”

Anyone who says with confidence that they know what the shape of the AI landscape will be in five years’ time “doesn’t know what they’re doing”, Olivia suggests: “The generative AI hype came out of nowhere, for example. But generative AI existed before ChatGPT, and now there’s a lot of pushback happening against it because it was launched without a clear business use case.

“I think the only thing that I can predict with some certainty is we’ll see companies move towards the direction of being AI-enabled organizations and we’ll start to get a better understanding of what that means at scale. Beyond that, it’s too unpredictable.”

One thing is certain: Responsible AI requires a collaborative approach from developers, business leaders, policymakers and society at large. The future of AI is not just about innovation; it’s about creating a world where technology and ethics go hand in hand, empowering individuals and communities in a responsible and equitable manner.

Failure to prioritize transparency and inclusivity, or build robust frameworks that address the ‘why’ as well as the ‘how’, will only serve to increase ethical debt and decrease trust. “If you are either unconsciously making decisions or using patchwork solutions with a view to fixing things down the line, then you’re incurring that ethical debt over and over again,” Olivia says. “That becomes an entire bonfire waiting to happen, where one disgruntled employee or one poor user experience can light it all up.”
