people, ideas, machines

Ethical AI is harder than you think

Everyone agrees AI should be ethical. But that agreement hides how hard it is. And when something sounds simple, people assume it will take care of itself. But ethics isn’t the default. It takes work, and if no one does the work, you don’t get it.

What is ethics anyway?

Ethics is about what’s right and wrong. People have argued about it for thousands of years, and they still don’t agree. There are frameworks like consequentialism, deontology, and virtue ethics which try to help, but in practice choosing between them is largely a matter of taste. What’s right or wrong is often subjective.

The law can be a good guide too, but it’s not enough. What’s legal can still be unethical, and the law tends to arrive late.

Ethical AI

Add AI to this mess and you get something even more slippery.

One of the main reasons is that AI is a general-purpose technology. It shows up everywhere, which makes agreement hard. What counts as ethical in one place can be harmful in another, even inside a single system that serves cultures with different norms.

And then there are the incentives. Speed to market and competitive advantage usually win. This tension underlies much of the criticism of the current “AI arms race” narrative.

So what does ethical AI look like in practice? The EU’s Ethics Guidelines for Trustworthy AI give four principles: respect for human autonomy, prevention of harm, fairness, and explicability.

Those principles all sound reasonable and should be pursued. But the devil is in the detail, and the hard part is applying them.

AI is useful because it can do things at scales we can’t. But that’s also what makes it tricky. Once it gets to that level, it’s harder for us to check its work, and harder to stay in control.

Then there’s the general-purpose nature of AI we spoke about. You can’t easily prevent harm from a tool that can be used in any domain for virtually any purpose.

Explainability is also easy to want but hard to get. It’s all but impossible for systems with hundreds of billions of parameters. Mechanistic interpretability research is promising but still early.

And it turns out that fairness isn’t one idea, but a family of conflicting ones. You can’t satisfy them all at once.

A note on fairness

Fairness deserves some more attention because it feels obvious and straightforward but isn't. It needs real technical skill. You need to test models across subgroups, design robust test harnesses, even audit training data. Most firms can’t do that.
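As a minimal sketch of what “testing models across subgroups” means in practice, the snippet below computes per-group accuracy from labelled predictions. The data, group names, and tuple format are all illustrative assumptions, not a real evaluation harness.

```python
# Illustrative sketch: per-subgroup accuracy from (group, y_true, y_pred) records.
from collections import defaultdict

def subgroup_accuracy(records):
    """Return {group: accuracy} for a list of (group, y_true, y_pred) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, y_true, y_pred in records:
        total[group] += 1
        correct[group] += int(y_true == y_pred)
    return {g: correct[g] / total[g] for g in total}

# Hypothetical evaluation records for two subgroups, A and B.
records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 1),
    ("B", 1, 0), ("B", 0, 0), ("B", 1, 0), ("B", 0, 0),
]
print(subgroup_accuracy(records))  # A: 0.75, B: 0.5 — a gap worth investigating
```

An aggregate accuracy of 62.5% would hide that gap entirely, which is why subgroup breakdowns are the starting point of any fairness audit.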

There are even different types of fairness. Group fairness. Individual fairness. Equal opportunity. Demographic parity. Choosing is a value judgment, as they point in different directions. Impossibility theorems show that, outside of trivial cases, optimising for one metric compromises another. Perfect fairness, in the sense of satisfying every definition at once, is mathematically impossible.
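To make the conflict concrete, here is a toy example, with entirely made-up data, where the same predictions satisfy demographic parity (equal selection rates across groups) while badly violating equal opportunity (equal true-positive rates).

```python
# Illustrative sketch: two fairness metrics disagreeing on the same predictions.
# Each row is a (y_true, y_pred) pair; the data is a made-up assumption.

def selection_rate(rows):
    """P(pred = 1): the quantity demographic parity compares across groups."""
    return sum(pred for _, pred in rows) / len(rows)

def true_positive_rate(rows):
    """P(pred = 1 | y = 1): the quantity equal opportunity compares."""
    positives = [pred for y, pred in rows if y == 1]
    return sum(positives) / len(positives)

group_a = [(1, 1), (1, 1), (0, 0), (0, 0)]
group_b = [(1, 0), (0, 1), (0, 1), (0, 0)]

# Demographic parity holds: both groups are selected at the same rate...
print(selection_rate(group_a), selection_rate(group_b))        # 0.5 0.5
# ...but equal opportunity fails: qualified members of B are never selected.
print(true_positive_rate(group_a), true_positive_rate(group_b))  # 1.0 0.0
```

Fixing the equal-opportunity gap here would mean selecting B’s qualified members, which changes the selection rates and can break parity. That is the impossibility result in miniature.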

Another rub is that improving fairness can reduce model performance. That creates friction. Unless fairness aligns with regulation or reputation, profit pressures will probably come out on top. That makes fairness a strange mix of philosophy, politics, and engineering all at once.

Why this matters

AI already makes decisions in hiring, lending, healthcare, and criminal justice. The distance between what we want and what we can deliver creates real risks. Good intentions won't close that gap.

What can close it is treating ethical AI with rigour. You have to choose which principles matter most. You have to acknowledge trade-offs openly. You need processes to surface new issues as they appear, and you have to match ambitions to what’s technically possible.

There aren't any universal answers, so start with questions. What do you want for your organisation? For your users? For the systems you build?

Success doesn’t mean getting everything right. It means being able to fail in the right way, by documenting choices, confronting trade-offs, and adapting as things change.

Ethical AI is not the default. It takes effort. It requires active decisions. The first step is realising that ethical AI is harder than you think.

I have also written about responsible AI and regulatory arbitrage.

Algorithmic Decision Making and the Cost of Fairness

#AI #ethics #regulation