

Turning things right


One of my favourite quotes (of which there are many) from the Harry Potter series comes from the first book, when Voldemort tells Harry, "There is no good and evil, there is only power and those too weak to seek it."

In a sense, Voldemort understands this, and puts it, perfectly.

There has never been an objective right and wrong. 
It used to be right to enslave people and treat them as beneath their owners. It isn't anymore.
It used to be right to deny women the right to own property or vote in elections. It isn't anymore.
It used to be right to kill in the name of religion and to start territory-grabbing wars. It isn't anymore.
It used to be right to persecute homosexuals. It isn't anymore.

There are some things that can be scientifically proven through theory and observation. But none of these fall under that category. In fact, nothing we talk about when we talk about good and evil or right and wrong (morally, not mathematically) falls under that category.

In that sense, there is no objective good and evil. And there is only power: the power to define what is good and evil, and what is right and wrong.

Today, and for the foreseeable future (in most parts of the world), this power comes through democracy. Not democracy in terms of exercising a vote to elect officials, but democracy in its true essence. Because whatever the majority of us believes is good is good, and whatever the majority of us believes is evil is evil.

With the advent of AI, there have been concerns about how to teach machines to act ethically. Take the case where a self-driving car has to decide between going straight and hitting five people who have just jumped in front of it, veering left and hitting one person on the sidewalk, or veering right and ramming into a tree, killing the person inside the car.

If there were an objective right and wrong, it would be very simple to teach algorithms to follow those principles. But sadly, there isn't.

And it is a good thing there isn't. Otherwise, all those examples I started with would never have been overturned.

Turning something from wrong to right, or from evil to good, takes a shift in popular opinion. It takes years of effort and campaigning to convince people, one by one, to change the way they think, until a tipping point is reached where the power lies with the other side, making their claims right and those of their opponents wrong.

With AI, there is no convincing each machine, robot, and computer separately, one by one. They all run on a single algorithm, which means the algorithm always has the power and is always right.

But these algorithms are designed to learn continuously from human interactions (and machine-to-machine interactions). Which means, if a human tells them something they did was bad, they listen, objectively, and learn immediately. And if enough people tell them they did badly, pushing them over the tipping point, they change.
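To make that concrete, here is a toy sketch in Python (purely hypothetical, not how any real system works): a "judgement" that is nothing more than the balance of human feedback it has received, flipping once objections cross a tipping point.

# Toy sketch (hypothetical, not any real system): a judgement that is
# nothing more than the balance of human feedback it has received.
from dataclasses import dataclass

@dataclass
class CrowdTaughtJudgement:
    """Labels an action right or wrong based only on accumulated feedback."""
    approvals: int = 0
    objections: int = 0
    tipping_point: float = 0.5  # flip once a majority objects

    def record_feedback(self, approved: bool) -> None:
        # Every piece of human feedback is counted, one by one.
        if approved:
            self.approvals += 1
        else:
            self.objections += 1

    def verdict(self) -> str:
        # The verdict is whatever the current balance of opinion says it is.
        total = self.approvals + self.objections
        if total == 0:
            return "undecided"
        return "wrong" if self.objections / total > self.tipping_point else "right"

# The verdict shifts only when enough people push back.
judgement = CrowdTaughtJudgement()
for approved in [True, True, False, False, False]:
    judgement.record_feedback(approved)
print(judgement.verdict())  # prints "wrong": objections now outweigh approvals

In this toy, right and wrong are not stored anywhere as facts; they are simply whatever the accumulated feedback says they are at the moment you ask.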

So, the real concern is not whether AI will act ethically or not. It will, even if it takes some time to learn right from wrong.

The real concern is whether AI will influence humans to change their own beliefs of what is right and wrong, by bringing out, and catering to, their deep biases, as in the Trump election.

And that is a problem worth solving.

