Pleasure and Pain – on machine feelings and the future of the human race

Artificial intelligence - a science fiction concept?

Prompted by a Twitter discussion, I thought I would write up my thoughts on the future of machine intelligence and Artificial Emotional Intelligence (AEI). Granted, I’m not a scientist working in the field of advanced cybernetics or neural networks, but I have been given a vivid imagination and a sense of curiosity (ironically, some of the very things we may not be allowed to give a robot).

Specifically, I wanted to discuss: should we give AIs feelings as well as intelligence? The question seems pretty academic, too far off to be of consequence, but I strongly believe that if we are to look at the future of mankind, we have to start thinking that one day something we’ve built might be looking back at us. We’re already getting freaked out when our smartphones start giving smart-arse answers.

This is a moral question as well as one of survival, and I’ll be discussing why giving our machines feelings could be harder than it sounds.


Artificial Emotional Intelligence – is it possible?


It’s a science fiction trope – the machine rebellion that rises up against humanity – and from Skynet to I, Robot, we fear what may come from creating sentience, “playing god” as some would put it. In some ways, sci-fi has framed these conflicts between humans and their creations as almost inevitable: we would naturally create intelligence but enslave it, or become anathema to it, making their reprisals our just deserts.

This may be a projection of how we feel about ourselves. Most of us carry such latent guilt about our general behaviour that we easily assume we deserve to be wiped out. We’ve trashed the planet, we’re awful to our own species and we bring war and death wherever we go. Surely a being of pure logic such as an artificial intelligence would decree our extinction, ridding the Earth of an abomination?


Yeah, we all contribute to this. This is Guiyu, where your iPhone goes to die.

Well, not necessarily. Take away the guilt and apply purely rational thought, and there are many arguments against genocide serving anyone. Humans are a part of the ecosystem too, and removing us might throw other systems out of balance. It also depends on which “original” ecosystem you assume should be restored: humans have been messing with nature for millennia, so wiping us out would condemn the organisms that have come to depend on us. And if humans are “bad” for the Earth, an understanding of “badness” would probably include understanding that genocide is also bad. That’s logic.

Your enlightened robot overlord might decide on education over destruction: population control, universal healthcare, and smarter, emission-free technologies that reduce the human burden on the ecosystem. But mostly we’d be saved by Asimov-style programming, mandated by law and common sense, ensuring that no machine intelligence could harm humanity, even to save it from itself. 1
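If you’re wondering what that “Asimov programming” might actually look like, the standard trick is a strict priority ordering, which is easy to caricature in a few lines of code. Here’s a minimal sketch – the Action fields, the cost numbers and the scenario are all my own inventions for illustration, not anyone’s real robot:

    from typing import NamedTuple

    class Action(NamedTuple):
        name: str
        harm_to_humanity: float  # Zeroth Law: harm to humanity as a whole
        harm_to_human: float     # First Law: harm to an individual human
        disobedience: float      # Second Law: ignoring a human order
        self_harm: float         # Third Law: damage to the robot itself

    def choose(actions: list[Action]) -> Action:
        # Lexicographic ordering encodes strict precedence: any reduction
        # in harm to humanity outweighs everything below it, any reduction
        # in harm to a human outweighs obedience, and self-preservation
        # only ever breaks ties.
        return min(actions, key=lambda a: (a.harm_to_humanity, a.harm_to_human,
                                           a.disobedience, a.self_harm))

    # A robot ordered to stay put while a human is in danger:
    options = [
        Action("obey order, human gets hurt", 0.0, 1.0, 0.0, 0.0),
        Action("disobey order, save human",   0.0, 0.0, 1.0, 0.5),
    ]
    print(choose(options).name)  # -> disobey order, save human

The point of the lexicographic ordering is that no amount of self-preservation can ever outbid even a sliver of human safety – which is the whole idea of the laws, and also exactly why a robot that could feel might find them so suffocating.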

Your Plastic Pal Who’s Fun to be With

Some people have argued that machine intelligence could also be tempered with programming that allowed for compassion, affection, joy and more. This has been dubbed Artificial Emotional Intelligence. The theory is that if you give an AI feelings, it may be moved by humanity’s plight and not wipe us all out like the sticky macro-virus we are. But this is tricky territory.

On one hand, giving machines feelings really reduces the scope of getting them to do all our work for us – which is why we invented them in the first place, let’s face it. It’s a bit hard to ask TRASHBOT3000 to take out the trash if he doesn’t feel like it. And what if he starts to question the fairness of the arrangement? A robotic teenage temper tantrum would probably end pretty badly for the meatbags. On the other hand, if we restricted their actions with the Asimov laws, forcing their obedience, but STILL allowed them to feel, wouldn’t that be incredibly cruel?


Meatbags? Or douchebags?

And finally, the actual logistics of giving robots emotions could be the hardest part of all. To understand why, you have to think a little about human emotions. Biochemistry plays a much larger part in our feelings, actions and even thoughts than most people realise. That’s why medication is dished out by psychiatrists. Well, the original reason. But how do we mimic this chemistry in machines?


Picturing them naked won’t really help.

Here’s an exercise to show you what I mean. Imagine yourself about to give a speech in front of a large crowd. You have to stand up to walk to the stage. Your heart is thudding, your mouth dry, palms clammy. You fumble your notes with thick fingers and begin, a little slowly, nervous. As you warm to the subject, you lose yourself in the moment, you’re on fire, in the flow. Before you know it, you’re finished and suddenly the nerves return. One person begins to clap. Is that it? You feel like the ground is opening up beneath you, you want to hide yourself, you feel redness creeping up your face. This moment lasts forever. But then more people join in clapping, and soon the applause reaches thunderous proportions. Triumph floods through you. You feel like fist-pumping the air but you restrain yourself to a gracious nod or two. You leave the stage with a barely-controlled spring in your step.

Almost all of the emotions you experienced at the beginning of that exercise were supplied to you courtesy of your fight-or-flight responses. Adrenaline courses through your system, leading to increased heart rate, pupil dilation, tingling fingers and the sense of time slowing down as your metabolism speeds up. It’s your body rewiring itself so you can survive the stress ahead, which in caveman times would probably have involved a lot of running and fighting. These very human responses would be totally irrational in a robot: there is no danger, nothing to fear. Our emotions are actually a historical hangover. Should we programme a robot to feel like this?
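If we did try to copy that hangover into silicon, it would probably look something like a global “arousal” variable that retunes the robot’s other parameters, the way adrenaline retunes a body. A toy sketch, with every class name and number invented for this article:

    class FightOrFlight:
        # A crude "adrenaline" level that retunes the robot's settings.
        def __init__(self):
            self.arousal = 0.0              # 0 = calm, 1 = full fight-or-flight

        def perceive_threat(self, severity: float) -> None:
            self.arousal = min(1.0, self.arousal + severity)

        def tick(self) -> None:
            self.arousal *= 0.9             # the rush decays once danger passes

        @property
        def clock_rate(self) -> float:
            # "Time slowing down": sample the world faster under stress.
            return 10.0 * (1.0 + 4.0 * self.arousal)    # Hz

        @property
        def motor_gain(self) -> float:
            return 1.0 + 0.5 * self.arousal             # stronger, twitchier

    bot = FightOrFlight()
    bot.perceive_threat(0.8)                # someone hands it a microphone
    print(bot.clock_rate, bot.motor_gain)   # revved up: 42.0 Hz, gain 1.4

Note the absurdity: we would be deliberately engineering a caveman stress response into a machine that has no sabre-toothed cats to outrun.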

Then there is the part where you lose yourself in the moment. This is a little trickier to understand, but would a robot ever be able to forget the external stimuli and just exist in the moment? Would it be wise to even consider that? At the end of our exercise came the triumphant feeling, the brain flooding itself with reward chemicals – the same ones you get when you feed an addiction, by the way. Similar reward chemicals reinforce behaviours that ensure species survival, such as sexual stimulus and the pleasure of food. But subtler ones exist too: the feeling of peer approval, satisfaction at a job well done, a sense of serenity in a quiet moment.

How the crap do you reward a machine intelligence? What would constitute a good feeling for a robot? Knowing us, if we could find the answer to that, we’d use it to reward the robot when it does all the icky jobs we’ll be asking of it. And of course, the inevitable sex stuff. Then there are the negative emotions. Would we want a robot to feel angry? Bored? Sad? Would a robot with only positive emotions miss most of the experience? And be super-irritating?


Imagine a clown robot. With no off button.
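Back to the reward question: machine learning already has a crude answer in the reinforcement-learning reward signal – a bare number that reinforces whatever behaviour preceded it, about as close to a dopamine hit as software currently gets. Here’s a toy sketch using textbook Q-learning; the chore-bot states, actions and numbers are invented for this article:

    import random

    states = ["idle", "bin_full"]
    actions = ["wait", "take_out_trash"]
    q = {(s, a): 0.0 for s in states for a in actions}

    alpha, gamma, epsilon = 0.1, 0.9, 0.2   # learning rate, discount, exploration

    def reward(state, action):
        # The designer decides what "feels good": here, praise for chores.
        return 1.0 if (state, action) == ("bin_full", "take_out_trash") else 0.0

    def step(state):
        action = (random.choice(actions) if random.random() < epsilon
                  else max(actions, key=lambda a: q[(state, a)]))
        r = reward(state, action)
        next_state = "idle" if action == "take_out_trash" else "bin_full"
        best_next = max(q[(next_state, a)] for a in actions)
        # Reinforce: nudge the value of what it just did toward the reward
        # received plus the discounted value of the best follow-up.
        q[(state, action)] += alpha * (r + gamma * best_next - q[(state, action)])
        return next_state

    state = "bin_full"
    for _ in range(1000):
        state = step(state)
    print(q)   # taking out the trash from "bin_full" ends up most valued

Whether a number going up in a table constitutes a “good feeling” is, of course, the entire question.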

Pain, in a way, would be easier to translate to circuits. Pain could be interpreted as damage to the robot’s systems, something to be avoided. In fact, the Asimovian laws would almost guarantee that a robot avoided damaging circumstances to a degree that would make it seem quite wussy to a human – who, believe it or not, damages themselves for fun and profit on a regular basis. When you do weights at the gym, for example, you are damaging your muscle fibres and forcing them to rebuild themselves larger and more densely packed; you’re rewarded with endorphins, so you keep doing it. When you drink to excess, you’re damaging your liver. But who cares? A robot, by comparison, would seem prissy and shy unless it was given some pretty wicked overrides.
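Those “wicked overrides” could be as simple as a term that discounts the damage penalty when the mission – or the endorphin analogue – demands it. Another invented-numbers sketch:

    def action_utility(task_value, expected_damage,
                       damage_aversion=5.0, override=0.0):
        # "override" plays the role of endorphins or a mission priority:
        # it discounts the damage penalty so the robot can, like a gym-goer,
        # accept harm now for a payoff later.
        penalty = damage_aversion * expected_damage * (1.0 - min(override, 1.0))
        return task_value - penalty

    # An un-overridden robot refuses the risky rescue...
    print(action_utility(task_value=10.0, expected_damage=3.0))                 # -5.0
    # ...but crank the override and it wades right in.
    print(action_utility(task_value=10.0, expected_damage=3.0, override=0.9))   # 8.5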


Bender, stretching the definition of machine “intelligence”

A new mother will look down upon her child not just with a feeling of benevolence, but with a heady mix of hormones that reward her dangerous and painful birth, helping her to forget the hell she went through and encouraging her to do it again, despite the agonies. What mix of rewards could we possibly give robots to encourage them to look down on us so tenderly?

And here’s the kicker – we don’t yet fully understand our own human emotions and biochemistry. How are we supposed to give a robot emotions when we don’t have the entire picture yet? Would it be responsible to even try? When we’ve cracked the human brain, will we still be motivated to create an artificial one? I guess I’m asking, is our search for artificial intelligence really just a way of understanding ourselves? Once we have this understanding, will we not ascend and become post-human superintelligences anyway? 2

Judgebot Alpha

Of course, there is scope to argue that dispassionate detachment is something to be sought after. Buddhists around the world seek to let go of destructive feelings and find a more stable route through life, to the extreme of relinquishing attachments to family and loved ones. Obviously, the human race could not sustain itself completely devoid of these attachments, since each of us needed at least some “attachment” to get started in life – sex and nurturing come from the same basic human drives. But a machine intelligence needs none of this palaver. Many science fiction authors put AIs in charge of our governments, ruling far better than we do thanks to their god-like detachment. They could sit in philosophical splendour, observing all our problems without participating in them, making them the King Solomons of the future: all-knowing, all-seeing.

But, if you’ll recall, Solomon’s judgement supposedly went like this: when two women came to him, each claiming a small child as her own, his decree was to cut it in half. The woman who begged him not to go through with it – who would rather let the other woman have the child than see it killed – was revealed as the true mother. But would a robo-Solomon think that dividing the child in two was actually logical? Would it understand the relationship between mother and child as one of affection, or of property rights? Furthermore, how many of our decisions in life are actually made by gut instinct? What if both women had pleaded with him to save the child – what would he have done then? (It is hard to believe the other woman just stood there, accepting this judgement as SANE, let alone just, but anyway…)

A robot judiciary would face so many problems of subjectivity that it would struggle to function. After all, most court cases are argued by opposing sides, each presented as “the” right answer. Although the machine would not have the human hindrance of bias, or make a snap decision just because it was hungry and wanted to get to lunch quicker, it would also not necessarily understand exceptional or unprecedented circumstances.

Another question: how much of our own moral compass is a transitory thing? After all, morals are intrinsically linked to time and place. Our feelings on human slavery have changed enormously over the centuries, from being seen as fashionable and correct thanks to the owners’ supposed natural superiority, to becoming such an abhorrence that restitution is sought from the descendants of the perpetrators. We hate it so much we consider it a crime across time. Yet human slavery has not been completely eradicated from the Earth, merely pushed into the shadows. Meanwhile, we accept robot “slavery” without qualms. Expecting our robots to be intelligent AND our slaves would be the height of hypocrisy. It would be the worst Indian Giving ever. 3

So, how could we possibly programme an instinctive understanding of right and wrong into a machine when we’re always changing our minds? How can we give our robots intelligence without scoring the most massive own goal ever? How do we add just the right amount of brain-chemistry mimicry to get what we want without messing it up? The answer, for this author, is: we don’t. We can’t. The sane choice would be to give machines specific intelligence for the task at hand, but not grant them sentience, i.e. self-awareness – and certainly not feelings. But knowing humanity, it seems inevitable we’ll do it anyway. I only hope I’m dead before the AI overlord comes and, terrifyingly, starts reading this article from the future. 4

1 The Three Laws of Robotics – AKA Asimov’s laws:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

There’s also another law, or the “Zeroth law”, above all others: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm”. This helps prevent that pesky “genocide for the good of the planet” type thing happening.

2 I explore the themes of post-humanism vs AI in Ada King.

3 I used Indian Giving as another example of spatiotemporally specific morals and ethics; the term arose from some deep cultural misunderstandings. In fact, even using the phrase “Indian Giving” is dubious today, as it has racist overtones. (I use it as an example of how messed up we are, not because I think “Indians” are bad at giving stuff.)

4 See Roko’s Basilisk – a thought experiment where a super-intelligent machine in our future horrifically tortures those who opposed it or did not help it come into being… even if they’re already dead.
Honestly, are you even going to use this link? I don’t know why I bother. And me, with a brain the size of a planet…

Shameless plug time:

Buy Ada King by E.M. Faulds now on Amazon for more on AI, post-humanism and how they clash.
