Wednesday, March 16, 2016

Ave, Charles!

Charles posts some thoughts about the recent five-game battle between 9th-dan Korean baduk (go) champion Lee Sedol and Google DeepMind's AlphaGo, which resulted in a 4-1 victory for the machine. Charles concludes his meditation by appreciating Lee's acumen and observing that we're still a long way from being taken over by our own technology.

Whether you find the Terminator/Matrix scenario to be an urgent matter may depend on how confident you are about the progress currently being made in AI. The popular long-essay website Wait But Why recently published—what else?—a long, two-part essay on why it would be rational to fear the advent of conscious AI; it's worth your time to read it. In his essay, Tim Urban, one of the two main writers at WBW, maps out human attitudes toward AI by demarcating two major precincts: Confident Corner (with optimistic strong-AI proponents like Ray Kurzweil) and the much larger Anxious Avenue, where all the worrywarts (like Stephen Hawking and, apparently, most Hollywood screenwriters) reside.

Personally, what struck me about the Lee/AlphaGo match was that, at the beginning, there was a lot of prideful rhetoric floating around about how the ancient Chinese game was so complex that excellence at it required intuition. What AlphaGo has arguably shown (and, really, the jury is still out on this, as there are other baduk masters out there) is that brute-force calculation may be all that's necessary for excellence at the game.
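To make the "brute-force calculation" idea concrete, here's a minimal sketch, in Python, of exhaustive minimax search over a made-up toy game (not baduk, and not anything from AlphaGo itself, which DeepMind describes as tree search guided by neural networks rather than pure enumeration). The point is just to show what raw calculation with no intuition looks like in code.

    # A toy illustration of pure "brute-force calculation": exhaustive minimax
    # search over a game tree. Every line of play is enumerated to the very end
    # and scored; no shortcuts, nothing resembling intuition. The game here is a
    # trivial made-up subtraction game (take 1 or 2 stones; whoever takes the
    # last stone wins), chosen only so the whole tree is small enough to search.

    def minimax(pile, maximizing):
        # Score a position from the maximizing player's point of view:
        # +1 if that player can force taking the last stone, -1 otherwise.
        if pile == 0:
            # No stones left: the previous player took the last stone and won.
            return -1 if maximizing else +1
        outcomes = [minimax(pile - take, not maximizing)
                    for take in (1, 2) if take <= pile]
        return max(outcomes) if maximizing else min(outcomes)

    if __name__ == "__main__":
        for pile in range(1, 10):
            verdict = "win" if minimax(pile, maximizing=True) == 1 else "loss"
            print(f"Pile of {pile}: {verdict} for the player who moves first.")

Run it and it reports, for each starting pile, whether the first player can force a win: the kind of certainty exhaustive search buys when the game is small enough, and the kind baduk's enormous game tree was supposed to put forever out of reach.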

If this is our line of thinking (and it may not be yours), at least two possible conclusions present themselves: (1) intuition is an illusion that intelligent machines are slowly deconstructing: you don't really need intuition to be great at baduk; (2) intuition is an emergent phenomenon that can be manifested in machines, i.e., AlphaGo—especially when it astonished the commentators with some of its moves—did indeed play intuitively.

The implication of the first conclusion is that intuition reduces to brute-force calculation, and if intuition can be so easily simulated here, then other aspects of human behavior, awareness, and cognition can also eventually be simulated (thus putting us on the road to creating Cylons... or to making machines that bootstrap themselves to the point where they can create their own Cylons). This takes us down a well-worn philosophical path: if everything about us can be exactly simulated, then are robots that minutely simulate us conscious? This is the old problem of the "philosophical zombie": a being with no interiority that can nevertheless pass for a being with interiority. Can we, in fact, verify that the being has no interiority? Can we, in fact, declare that there was nothing intuitive about how AlphaGo played? What are the standards of proof?

The implication of the second conclusion might be thought of as the flip side of the zombie problem. Here, we assume the existence of intuition, which automatically implies that intuition need not be unique to humans (if we have built intuition into AlphaGo, and AlphaGo isn't human, then non-humans can possess intuition). In this line of thinking, if we build a robot that acts conscious in every way, then we can rest assured that it is conscious. The Star Trek: The Next Generation episode "The Measure of a Man" dealt with this very problem and resoundingly concluded that an android like Data was a conscious being entitled to the rights and freedoms of all sapient beings—not a mere machine, not a zombie.

For myself, I agree with Charles that we've probably got a long way to go before the machines are forcing us to the ground and taking our lunch money. At the same time, Tim Urban's essay on the topic of AI is disturbing enough to make me wonder just how long we have before we—as Morpheus put it in "The Matrix"—give birth to AI.

ADDENDUM:

1. Seen on Twitter: AlphaGo's Domination Has South Korea Freaking Out About Artificial Intelligence, with a special focus on intuition.

2. Article/video seen on Drudge: We Should Be More Afraid of Computers Than We Are.

3. This article makes the point that Charles made in his post: intelligence and consciousness are two completely different animals.



2 comments:

Charles said...

Good food for thought. Glad I could inspire you to share your insights!

I tried to keep my post fairly light, because I knew if I started delving into all the implications I would get mired down and never post it. I was indeed thinking about this stuff, though.

I've always thought of intuition as a shortcut. I guess I have been influenced somewhat by the writing of Malcolm Gladwell, in particular his book Blink, which talks about how our split-second decisions, made without conscious thought, can turn out to be the best. The important point that Gladwell makes, though, is that you have to have a deep base of knowledge, experience, and/or training to draw on if you want your split-second decision to mean anything. Without those things, your split-second decisions are likely to be disastrous.

So intuition is basically the culmination of a lot of knowledge and experience, and it just seems, well, intuitive because we don't see everything that is going on beneath the surface, everything that led up to that one moment of "intuition." Computer programs may not have intuition in the human sense, but that's only because we can actually see the process by which they arrive at their conclusion.

And I definitely did not want to get into the question of what consciousness is. I must admit that that sort of thing is a little beyond me, and it makes my brain hurt to think about it.

Kevin Kim said...

"... it makes my brain hurt to think about it."

Join the club.