Who Gets To Decide What’s Right?
Jacky Fitt, FRSA
Prompted by the recent BBC article ‘Can We Teach Robots Ethics?’ by David Edmonds, and another, ‘Office of the Future: Artificial Intelligence Powered by Trust’ by Brian Fanzo, I was intrigued to find that both writers conclude with the same idea: that in a world of artificial intelligence (AI), we humans could ultimately end up learning from the robots.
While Edmonds investigates the ethical decisions needed for driverless cars and ‘care bots’, Fanzo discusses trust in workplace technology and how it will be essential to greater productivity and innovation – ‘trust’ stands out as the key to both articles. The issue with trust, however, is that it’s an emotional response to a myriad of interpersonal signals. Let’s just digest for a moment what’s being considered here: a machine-led, consistent approach to right and wrong that humans can learn from to better themselves… But who is teaching the machine? Who’s the role model?
“The robot may turn out to be better at some ethical decisions than we are. It may even make us better people.”
“It’s not about automation and AI replacing humans. Rather, it’s about humans leveraging automation and learning from AI to become better humans.”
I agree with Professor Roger Steare’s comment on Edmonds’s article that we could only teach robots ethics, and thereby engender trust, “…if we could program consciousness and emotions such as compassion, fear and shame. So that’s not going to happen any time soon. Instead, any robot morality will be a cold calculus of winners and losers.” And perhaps therein lies the clue to my uneasiness: “a cold calculus of winners and losers.” Who decides what’s right and wrong?
Earlier in his article, Edmonds underlines his concern about how we ensure that machine learning doesn’t “absorb and compound our prejudices”. And, in terms of prejudice, bias and trust, we have an ongoing issue with a highly masculine-dominated sector. Given that men can display feminine traits and women masculine ones, looking at the overarching picture, feminine energy in tech is currently woefully under-represented, with gender bias, unequal pay and a lack of mentors all featuring in the top five reasons preventing its growth*.

Masculine traits are, in general, reductionist: attracted to the calculation of odds and the best outcome for the many – a utilitarian view of how the world works. It’s about trying to fix stuff, and if you can take the emotion out of it, even better; it gets done quicker with less mess. By this thinking, any minority who stands out or does something different is against the majority and is first in line to be sacrificed for the greater good. It’s a clean and rational approach to life. Feminine energy, on the other hand, offers us a more nurturing approach: emotion plays more of a role in decision-making, and trust and care play a much larger part too. These key trends are borne out by the worldwide MoralDNA studies undertaken by Professor Steare. Women, should it need to be pointed out, are also 50% of the consumers of tech, yet the biases of innovation and marketing are predominantly and consistently male or masculine focused.
Asimov’s zeroth law: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”
In the acceleration of AI development we need more feminine energy; diverse and equally respected, it has an important part to play in the debate around new innovation, ethics and the automated ‘decision maker’ that could potentially hold the balance of our lives in its binary grasp. We can’t be lazy about delegating ethical choices to machines, and we must beware of sleepwalking into a situation where difference and diversity are summarily ignored by a one-size-fits-all approach deemed ‘appropriate’ by a masculine-minded majority. It may get complicated; it could get messy; it needs to be emotional. In whom would you put your trust? We certainly all need to trust each other more in order to make the right choices for everyone.
“Well, if droids could think, there’d be none of us here, would there?”
Obi-Wan Kenobi – Star Wars
And by humanity Asimov means all of us. We all have a stake in the development of AI; we all need to be part of the debate.
‘Can We Teach Robots Ethics?’ by David Edmonds
‘Office of the Future: Artificial Intelligence Powered by Trust’ by Brian Fanzo
Take the MoralDNA Test – a free tool to identify your moral values and part of a worldwide research programme.