people, ideas, machines

Responsible AI as a differentiator

The AI market is crowded, and it’s hard to stand out. But there’s an opportunity that could benefit both firms and society at large: leading the way in ethical and responsible AI use.

The problem

Broadly speaking, we can break down the AI stack into three layers:

  1. Infrastructure → Compute, storage, networking. Dominated by AWS, NVIDIA, etc.

  2. Models → The foundation models that power AI applications, from firms like OpenAI, Anthropic, DeepMind, etc.

  3. Applications → The products and services built on top of the infrastructure and models. This is where most firms operate and compete.

The application layer now has incredibly low barriers to entry. It’s never been easier or faster to build and deploy applications.

This low barrier to entry raises the probability that unethical or irresponsible AI applications reach the market, and therefore users. This could be accidental (lack of understanding or awareness) or intentional (for competitive advantage).

As applications proliferate, as AI becomes embedded in more of our lives in ever more novel ways, and as the scope for AI autonomy grows, the possible failure modes will multiply beyond what we’ve seen before. Deepfakes, for example, have become a genuine problem in the generative AI era because their quality has improved so dramatically. Novel and unforeseen failure modes should be expected.

The opportunity

This uncertain future and crowded market, along with the still-maturing legal and regulatory landscape, present an opportunity for individuals and firms with foresight to be proactive rather than reactive: to set a standard. Done conspicuously, this associates a firm with ethical and responsible AI, differentiating it from the masses, building trust, and encouraging usage of its tooling. To quote McKinsey, "to capture the full potential value of AI, organisations need to build trust. Trust, in fact, is the foundation for adoption of AI-powered products and services". I like to phrase it more concisely: if you build trust, you build usage.

A 2025 MIT Technology Review survey found that 87% of managers believe responsible AI is critical, but only 15% feel well prepared to adopt responsible AI practices. There is still a gap between talk and action, and that gap is the opportunity. Research by BCG, for instance, shows that leaders in responsible AI are already seeing gains in brand differentiation. In an environment where trust will increasingly shape adoption, responsible AI can be a competitive advantage.

AI ethics and responsible AI in the generative AI age are in their nascency. Many firms don’t fully understand the implications and challenges that lie ahead, and best practices haven’t been clearly defined. There could be something of a first-mover advantage, possibly by industry. This window of opportunity is likely finite: as awareness grows and regulations solidify, AI ethics and responsible AI practices will eventually transition from an advantage to an expectation. But for now, the opportunity exists.

Yet even as regulations mature, AI’s rapid advancement will continue to outpace oversight, creating ongoing and evolving challenges. The Red Queen problem applies: “It takes all the running you can do, to keep in the same place.” Firms need to take an active approach, as there’s no finish line.

Proceed with caution, though. Applying your own standards in a heavy-handed way can lead to unintended consequences. Google got this wrong with Gemini: it imposed overly politically correct views, and responses overcorrected so drastically that they rewrote basic historical facts and trivialised real-world risks, eroding the very trust the approach was meant to build and discouraging usage.

If done right, ethical and responsible AI are opportunities to do good and to build better products, stronger brands, and more resilient businesses.

[Note: Ethical AI broadly refers to value-aligned design; Responsible AI focuses on governance, deployment, and accountability, but I won’t distinguish between them here given how closely they interact in practice.]

How Responsible AI Protects the Bottom Line (Harvard Business Review)

Implementing Responsible AI in the Generative Age (MIT Technology Review)

Responsible AI Is About More Than Avoiding Risk (BCG)

Building AI Trust: The Key Role of Explainability (McKinsey)

Why Google's 'Woke' AI Problem Won't Be an Easy Fix (BBC)

#AI