Sunday, February 17, 2008

that nut

Ray Kurzweil, quite possibly the most famous representative of the "strong AI" school of thought, is the subject of this BBC article, in which he claims that "we will have both the hardware and the software to achieve human level artificial intelligence with the broad suppleness of human intelligence including our emotional intelligence by 2029." I think machine consciousness is not only possible but probable, but predicting its arrival by 2029 is more than a little optimistic. The hurdles to progress in AI are enormous, after all: machines are still generally unreliable when it comes to basic cognitive tasks like pattern recognition.* Machines will need exponentially more processing power and sophistication in their software before they even come close to simulating human thought.

The article spends some time on the question of nanobots. I look forward to this phenomenon with mixed delight and trepidation. A while back, I blogged about creating a cancer-killing "nanolotion" that, when topically applied, would release nanobots to travel into the body, spot instances of cancer, eradicate them, then resurface through the skin, perhaps as a lotiony film or as a dry dust that one simply wipes off. The idea sounds great in principle, but what if something were to go wrong with the nanobotic programming, such that the nanobots decided, say, that lung cells were all cancer cells? That's why I have no intention of buying versions 1 through 5 of the nanolotion: let the eggheads work out all the kinks first. I don't want any bots eating my lungs. Or my brain. Or my family jewels.

*Read about the "frame problem" here. The frame problem relates to hierarchies of importance (i.e., degrees of relevance): what, exactly, do you need to know to solve a given problem? How do you know you have enough data? How do you know, in examining a problem, which information is essential and which can be safely excluded? The relationship of the frame problem to problems of pattern recognition should be obvious.
