Wednesday, 10 December 2014

Forget AI: We Should Really Worry About Dumb People Talking To Even Dumber Machines

There was a significant moment this past week when Professor Stephen Hawking warned the world that our species faces real dangers from the advances in artificial intelligence. He wrote:
There are no fundamental limits to what can be achieved: there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains.

The comments were significant in a number of ways, not least of which was that they managed to slip some soft science into the news agenda. When Hawking speaks, people tend to listen. Why they listen is a moot point. In a way, Hawking is our version of Albert Einstein: a non-scientist's idea of a scientist. Even if people don't understand why he's brilliant, they can recognise something about him which is obviously a mark of genius. Hawking also shares with Einstein a talent for using the media. That makes it hard to separate the tragedy of his illness, and the heroic struggle to overcome those enormous difficulties, from the hard science he's actually achieved in his lifetime. I have no doubt that his reputation as a theoretical physicist is well earned, but I can't help but feel that there's an element of the TV scientist about some of his public comments. 'Danger, Danger, Will Robinson!' is always going to be far more exciting to hear than 'All is well, Will Robinson...' and Hawking is bright enough to know that.

Hawking's contribution to the debate about artificial intelligence is an interesting one but not, as far as I can tell, based on any particularly great insight into the field of thinking machines. He quotes '[r]ecent landmarks such as self-driving cars, a computer winning at Jeopardy! and the digital personal assistants Siri, Google Now and Cortana' as examples of the rapid rise of AI. However, all these developments are (also as far as I can tell) extensions of relatively simple advances in pattern recognition, which have come about through the miniaturisation of chips. Moore's Law famously states that the number of transistors that can be squeezed onto a chip doubles roughly every two years, which means, crudely, that every two years the chips that drive our computers become twice as powerful. Moore's Law held for over half a century before starting to slow in recent years, with the doubling now occurring every three years instead of two. Yet even if machines continue to increase in power at the rate predicted by Gordon E. Moore, there's still some way to go before anything could be built approaching genuine artificial intelligence. The problems aren't ones that can be solved simply by throwing more memory and processing cores at them. As one of my old computing professors used to phrase it: a cow doesn't gestate its young more quickly because it's standing in a field with a dozen other cows. In other words, some problems can't be solved by cranking up the dial. Indeed, it might even be argued that, should such a thing ever happen, artificial intelligence won't be achieved using the relatively crude chip technology we use today.
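To put rough numbers on what that slowdown actually costs, here's a back-of-the-envelope sketch in Python. It's my own illustration of the arithmetic, not anything from Hawking's piece:

# A back-of-the-envelope look at what the doubling rate buys you:
# relative chip density after a given number of years, assuming a
# clean doubling every `period` years.
def relative_density(years, period):
    return 2 ** (years / period)

for period in (2, 3):
    print("Doubling every %d years: %dx density after 20 years"
          % (period, relative_density(20, period)))
# Doubling every 2 years: 1024x density after 20 years
# Doubling every 3 years: 101x density after 20 years

An order of magnitude lost over twenty years, just from stretching the cycle by a single year. And yet, as the cow in the field suggests, even the thousandfold version may be the wrong dial to turn.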

The scale of the AI challenge is enormous, and populist pieces such as Hawking's merely serve the public's appetite for sensational science. There was a story a few months ago about a computer that had apparently passed the Turing Test. The media ran the story with bold headlines and, when I saw one such headline, I actually raised an eyebrow. Had a computer really tricked a person into thinking they were having a conversation with another human? I should have known better, and once I'd read the article I was left wondering how anybody with even half an idea about the Turing Test could have failed to see that the claims were far too bold. The computer hadn't come anywhere close to passing the test, and the result was barely more impressive than those produced by the old ELIZA script of the 1960s, which used to play the psychotherapist to the user's inputs.
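To give a sense of how little machinery that kind of chat actually requires, here is a minimal ELIZA-style sketch. The rules are my own invented examples rather than Weizenbaum's original script, but the keyword-and-template trick is the same:

import re

# A minimal ELIZA-style responder: find a keyword pattern in the input,
# echo back a canned template. The rules are invented for illustration;
# Weizenbaum's 1966 script used the same trick on a much larger scale.
RULES = [
    (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\b(?:mother|father)\b", re.I), "Tell me more about your family."),
]

def respond(text):
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return "Please, go on."  # the fallback when nothing matches

print(respond("I feel nobody reads my blog"))
# Why do you feel nobody reads my blog?

There is no understanding anywhere in that loop, only reflection. Half a century on, a chatbot grabbing headlines for 'passing' the Turing Test was doing little more of substance.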

True artificial intelligence is still the stuff of science fiction and, I suspect, will remain so unless there's one of those genuine leaps of technology that come along so rarely, the last one probably being the invention of the silicon chip, with everything that has come since being merely an evolution of that.

However, the debate around AI systems came to mind this evening as I was contemplating the data gathered by my blog over the course of the last week. I'm fascinated to the point of distraction by visitors. Not so much the numbers, though catch me at a weak moment and I'll admit that, yes, I am addicted to page views. What interests me is establishing who, or what, is visiting the site. I know at times this sounds like a desperate need for affirmation, but I sometimes wonder how this blog is received, perceived, and even if it's perceived at all. And I think I have good reason to be sceptical about the latter. One of the rarely expressed truths about the current internet (or, at least, I don't think I've ever read it written elsewhere, though it's so obvious that it undoubtedly has been) is the extent to which so much of what passes for social media is simply people talking to computers.
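For what it's worth, the crudest way to tell the machines from the people in a server log is a user-agent check. A sketch, with an invented watchlist, and an obvious catch: only the honest machines announce themselves.

# A naive pass over a blog's visitor log: split hits into self-declared
# crawlers and everything else by user-agent substring. The watchlist is
# my own invention; the catch is that only honest bots identify
# themselves, so 'probably human' proves nothing at all.
BOT_TOKENS = ("bot", "crawler", "spider", "slurp", "feedfetcher")

def classify(user_agent):
    ua = user_agent.lower()
    return "machine" if any(token in ua for token in BOT_TOKENS) else "probably human"

print(classify("Googlebot/2.1 (+http://www.google.com/bot.html)"))  # machine
print(classify("Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36"))  # probably human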

For example, tonight I posted a tweet. I hadn't done one in a while but I keep getting these urges to be social. So, onto Twitter I went and wrote the following:
Hmm... Who'll succeed Alan Rusbridger at The Guardian? My guess is a multigender Eco warrior privacy smurf into S&M and Coldplay.

It might not be the greatest tweet penned by man but I was quite proud of the result of about ten seconds of thought and typing. And within about another ten seconds, a message came back to me. Some Chris Martin fan account had favourited my tweet. For a moment I smiled. That was really nice of them. It was nice to know that my wit is appreciated and... and...

Hmm...

Then I realised that there was very little chance that the Chris Martin fan account was actually being manned by a Chris Martin fan. A human being -- even a Chris Martin fan -- would surely have spotted that my reference to Coldplay was actually scathing and not worth marking as a favourite. It was obvious that a computer had merely picked out the word 'Coldplay' and automatically given it the virtual thumbs up.
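My guess at the mechanics, sketched below, is about as dumb as software gets: scan each tweet for a watchword and favourite anything that matches, sarcasm included. The fetch and favourite functions here are hypothetical stand-ins, not calls from any real Twitter API.

# My guess at the mechanics of a keyword-favouriting bot: scan each new
# tweet for a watchword and mark it, with no notion of tone or sarcasm.
# fetch_recent_tweets and mark_as_favourite are hypothetical stand-ins,
# not calls from any real Twitter client library.
WATCHWORDS = ("coldplay", "chris martin")

def should_favourite(tweet_text):
    text = tweet_text.lower()
    return any(word in text for word in WATCHWORDS)

def run_bot(fetch_recent_tweets, mark_as_favourite):
    for tweet in fetch_recent_tweets():
        if should_favourite(tweet["text"]):
            mark_as_favourite(tweet["id"])  # a scathing mention scores the same as praise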

Now this, in a small sense, was a victory for the computers, which had fooled me into thinking that I was dealing with a human. Yet the sad truth is that so many of my daily interactions are probably with computers. It's one of the reasons I rarely use social media. Look beyond the likes, the upvotes, the Google+ scores and the follower counts, and you see just one enormous machine whirring away. A human puts input in and automated systems produce the required response. They like you, they follow you, they vote you up, and some even send you messages asking you to like them back. Yet none of it is real. None of it means as much as even the simplest smile.

I do occasional work for a company who believe strongly in all of this social media. They love their follower counts and work hard to increase them. I merely look at their numbers and wonder what it all means. Do those thousands of votes actually mean that people like the company? The answer, of course, is no. Those numbers really represent how long they've been present on the web. The follower counts really mark their own need for affirmation and the urgency with which they play the social media game. The real people are lost in all of this. You, the person out there reading this... You are the person I'm writing this for. I'm not asking for anything other than a connection of our minds: a shared humanity contained within these words, written as I sit here at my desk at 11.53 at night, scratching three days' growth of beard. And that's all that ultimately matters. How my blog feed might be digested by the machines, the media crawlers, the influence registers... they really don't interest me. Yet I also fear that mine is a lone voice in a day and age when people prefer to be read by a million computers rather than understood by a single human brain.

The galling part is knowing that these words will be read by thousands of machines and, if I'm lucky, perhaps three or four humans. Or perhaps they will be read by thousands of people and only a few machines. The problem is that I simply cannot tell. And in this limited sense, I think Hawking is more right than his media-friendly comments probably warrant. There might come a time when AI becomes self-aware and capable of taking away our freedoms. In the meantime, however, it's the dumb systems we already have that are doing that to us, right this very moment.
