Josh Swords

Responsible AI as a differentiator

The AI market is crowded and it’s hard to stand out. But there’s an opportunity that could benefit us all: leading the way in ethical and responsible AI use.

The problem

Broadly speaking, we can break the AI stack into three layers: infrastructure, models, and applications.

The application layer now has incredibly low barriers to entry. It’s almost too easy to launch something new, which raises the probability that harmful AI applications reach the market, sometimes by accident, sometimes on purpose.

As AI spreads through our lives, we should expect new, unforeseen failure modes. Deepfakes are one example: they went from a curiosity to a serious problem in only a few years, simply because the underlying models got better. That trend will continue.

The opportunity

If you’re proactive, this messy mix of uncertainty, competition, and half-finished rules is actually an opportunity.

Set the standard now and people will notice. If you do it conspicuously, you become known for it. You stand out from the crowd, and more importantly, people trust you.

McKinsey says "to capture the full potential value of AI, organisations need to build trust. Trust, in fact, is the foundation for adoption of AI-powered products and services". I like to phrase it more concisely: if you build trust, you build usage.

Research from MIT found that 87% of managers believe Responsible AI is critical, but only 15% feel ready to put it into practice. That gap between belief and practice is the opportunity.

Ethical and responsible AI practice in the generative age is still nascent. Many firms don’t fully understand the challenges that lie ahead, and best practices haven’t been clearly defined. There’s a first-mover advantage, perhaps within each industry.

This window of opportunity won’t last forever though. Once regulations and norms catch up, ethical and responsible AI practices will be expected. At that point it won’t differentiate you any more than having a privacy policy does now.

Proceed with caution though. Heavy-handed attempts can backfire. Google found this out when Gemini overcorrected for bias, rewriting history and trivialising real-world risks. The system meant to build trust ended up eroding it.

But done right, ethical and responsible AI is an opportunity to do good: to build better products, stronger brands, and more resilient businesses.

How Responsible AI Protects the Bottom Line (Harvard Business Review)

Implementing Responsible AI in the Generative Age (MIT Technology Review)

Responsible AI Is About More Than Avoiding Risk (BCG)

Building AI Trust: The Key Role of Explainability (McKinsey)

Why Google's 'Woke' AI Problem Won't Be an Easy Fix (BBC)

#AI