Tuesday, April 25, 2023

AI's fatal flaw?

In the video below, Kyle Hill talks about something that didn't blow up as news but should have: the world's most sophisticated Go algorithm was defeated 14 out of 15 times by an amateur exploiting a flaw in the way the algorithm "thought." (These AI algorithms aren't conscious, so they're not really "thinking" in the required sense.) How was this possible?

What's interesting is that, before the amateur could defeat the algorithm, another algorithm had to figure out the Go algorithm's flaw. So this was an AI-assisted "victory."
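To make that idea concrete, here's a minimal sketch of how a search algorithm can probe a frozen opponent for an exploitable blind spot. Everything in it is invented for illustration: the toy game (the "game of 21"), the victim's flaw, and all the function names. The real attack used adversarial reinforcement learning against the Go engine and was far more involved, but the logic is the same: treat the frozen opponent as part of the environment and search for lines of play it mishandles.

from functools import lru_cache

# Toy stand-in for the Go exploit. Game of 21: players alternate taking
# 1-3 stones; whoever takes the last stone wins. Perfect play leaves the
# opponent a multiple of 4.

def victim_move(n, flawed=True):
    """A near-perfect fixed policy. Hypothetical flaw: at n == 13 it
    takes 2 (leaving 11) instead of the correct 1 (leaving 12)."""
    if flawed and n == 13:
        return 2              # the blind spot
    return n % 4 or 1         # otherwise perfect (1 is a losing stall)

@lru_cache(maxsize=None)
def adversary_wins(n, flawed=True):
    """Can the adversary force a win with n stones left, adversary to
    move? The victim is deterministic, so 'probing' it reduces to
    dynamic programming over its responses."""
    for take in (1, 2, 3):
        if take > n:
            break
        if take == n:                      # we take the last stone
            return True
        m = n - take                       # victim's turn
        v = victim_move(m, flawed)
        if v < m and adversary_wins(m - v, flawed):
            return True                    # found a winning line
    return False

# 20 stones is a lost position against perfect play, but the search
# discovers a forced win that routes the game through the flaw at 13.
print(adversary_wins(20, flawed=True))    # True
print(adversary_wins(20, flawed=False))   # False

Once the search has mapped the weakness, a human can memorize the winning line and play it out by hand, which is essentially what happened in the Go games.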

Kyle's video widens the discussion to consider how AI still doesn't "understand" anything and is a long way from sentience. Also: with an AI like ChatGPT, researchers don't actually know how the algorithm does what it does, a fuzziness that is inherent in how these systems are built rather than something deliberately engineered. But taking a "we shall know it by its fruits" approach could be dangerous depending on what tasks you're asking an AI to do. With the AI's central processing a mystery, what if it "concludes" it should hurt or kill someone without recourse to any moral considerations (to the extent a non-sentient AI might "consider" anything)? An AI that's supposedly "trained" to perform a certain task could still conceivably glitch out and do something utterly random and nonsensical, and the mystery of the action will only be compounded by the fact that the AI's inner workings are so hard to inspect. How do you root out an AI problem when you don't even understand, or can't even analyze, the AI?
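The worry about judging an opaque system purely by its outputs can also be made concrete with a toy example. This is a hedged sketch (the "model" and its blind spot are invented for illustration): a black box that behaves correctly on virtually the whole input space can sail through massive random testing while still harboring a catastrophic failure region, and without access to its internals you have no idea where to aim a more targeted probe.

import random

# A hypothetical black box we can query but not inspect. Hidden inside
# is a vanishingly narrow failure region, standing in for the kind of
# glitch an opaque AI might harbor.
def opaque_model(x):
    if 0.73121234 < x < 0.73121235:   # the hidden blind spot (~1e-8 wide)
        return -1.0                   # wildly wrong output
    return 2.0 * x                    # correct everywhere else

# "We shall know it by its fruits": hammer it with random inputs.
random.seed(0)
failures = 0
for _ in range(1_000_000):
    x = random.random()
    if opaque_model(x) != 2.0 * x:
        failures += 1
print(f"{failures} failures in 1,000,000 random trials")   # almost surely 0

A million clean trials here is evidence of almost nothing: the blind spot occupies roughly a hundred-millionth of the input space, so random behavioral testing is overwhelmingly likely to miss it. Finding it requires either luck or the kind of inside-the-box analysis that, per the point above, these systems don't readily permit.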

There does seem to be something very ass-backward about all of this. We still have no universally accepted definitions of mind, consciousness, sentience, intelligence, and the like, and yet here we are, trying to build entities that function as if they had minds, consciousness, and the rest. There's an undercurrent of ethical carelessness to all of this. Frank Herbert's injunction, voiced in Dune, that "Thou shalt not make a machine in the likeness of a human mind" seems more and more relevant the deeper down this dangerous rabbit hole we go.


