It may be too early to say, but I think I was right to side with Ray Kurzweil and his strong-AI functionalism. Basically, the idea is that the material parts of a brain don't matter as much as how those parts connect and relate to each other. If we could make a brain out of artificial materials such that the parts within that artificial brain related to each other in exactly the same way that the parts in a real, organic brain do, we'd essentially have a functionally equivalent brain. And if it's true that the mind is what the brain does, then we'd also have a functionally equivalent mind.
And here's Kyle presenting the first major baby step in that direction. Not only was the fruit-fly brain mapped out, but it was also fed information to see whether it functioned like a fruit-fly mind. It did. Weird, scary, and exciting. If this technology exponentiates, we might see truly intelligent AI in our lifetimes. Of course, if the will is part of the mind, we might also get a glimpse of willful AI. And that, friends, might not be a good thing. You don't have to have human-level intelligence to be willful. Ask any balking animal—a dog, a horse, whatever. I guess, too, that we're going to learn the hard way whether emotions are baked into consciousness. Let's hope not (SF writer Larry Niven associates emotions with glands), but then again, is a cold, calculating artificial intellect better or worse than an emotive one?