Tuesday, November 21, 2023

mind-reading AI?

When I met my buddy Charles for dinner Monday night, he mentioned that he watches a YouTube channel called ColdFusion. I wasn't familiar with the channel, and Charles remarked that one recent ColdFusion video had a very clickbait-y title that made the prospect of watching it unpalatable. I decided, when I got home, to give the video a whirl. It's about the use of noninvasive neural tech and AI to read people's brain states and render their thoughts into coherent sentences (and images):

I have a few thoughts as a Kurzweilian strong-AI proponent. First, the idea that we'll eventually have machines that do indeed "read" our brain states and render them via technology as text, speech, or images is something I see as nearly inevitable. The close linkage between the mind and the brain is, in my opinion, undeniable and obvious unless you're a substance dualist who sees mind as entirely non-material. I watched cancer eat away at my mother's brain, and with that deterioration came a concomitant decline in Mom's faculties—her utterances became shorter; she was unable to form logical connections between events and sequences of ideas; her ability to express something as basic as emotion became more and more truncated over time. There's no doubt in my mind that consciousness depends for its existence on the physical brain. No brain, no mind.

Second, what the video shows is a bit of sleight of hand, although to its credit, the video does reveal its hand. Basically, the technology shown in the video involves the use of several layers of intervening software (and hardware) to "reconstruct" a person's thoughts. This software is predictive in nature, hunting along the most probable avenues to "guess" at what a person is trying to express. The final output might indeed be the very sentence that a person was thinking, but the machine has in no way "read" the sentence directly from the person's mind. Instead, it guessed its way along probability trees to what its algorithm saw as the most plausible result, i.e., the most likely thing the person was thinking or trying to say. This is less mind reading and more akin to the art of mentalism, in which a trained person seems to know your inner thoughts but is actually building a profile of you as you unconsciously offer the mentalist verbal, gestural, and postural c(l)ues.
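
To make that concrete, here's a toy sketch (in Python, with entirely made-up words and probabilities, nothing taken from the actual system in the video) of what "guessing along probability trees" looks like in its simplest, greediest form: at each step, take whatever candidate word the decoder thinks is most probable and string the winners together.

from math import log

# Hypothetical per-step candidate words with invented probabilities,
# standing in for whatever a real decoder might infer from brain signals.
step_candidates = [
    {"I": 0.6, "we": 0.3, "they": 0.1},
    {"want": 0.5, "need": 0.4, "hate": 0.1},
    {"coffee": 0.7, "tea": 0.2, "sleep": 0.1},
]

def greedy_decode(steps):
    """Pick the single most probable word at each step; sum log-probabilities."""
    sentence, score = [], 0.0
    for dist in steps:
        word, p = max(dist.items(), key=lambda kv: kv[1])
        sentence.append(word)
        score += log(p)
    return " ".join(sentence), score

guess, log_prob = greedy_decode(step_candidates)
print(guess)     # "I want coffee": the most plausible sentence,
                 # not a sentence literally read out of anyone's head
print(log_prob)

The real systems are vastly more sophisticated, of course, but the principle is the same: the output is the decoder's best bet about what you were thinking, not a transcript pulled straight from your mind.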

Third, the technology shown in the video seems predicated on the idea that people think both in clear sentences and in clear images (software guessing at what people are visualizing is also shown—things like horses, trains, etc.). Actual consciousness is a confusing, noisy, multilayered, ever-shifting and ever-morphing jumble (maybe less so if you're, say, a disciplined Buddhist monk), so I don't think the current technology can allow people to "think out" entire paragraphs that can be rendered on a screen quite yet. I see nothing to indicate that the current tech can screen out mental "noise" to find whatever counts as the main or principal thought occupying a person's mind.

The video does a better job than I can here of exploring the pros and cons of this technology. On the pro side, it might conceivably be used to help people who seem clinically unresponsive but who in fact have a very active mental life. On the con side, it might be used to record and exploit aspects of your consciousness by gaining access to information you would rather keep private.

The video offers other issues to chew over, but all in all, Charles was right to see the video's title as clickbait. Whatever this technology is, it's not at the level of what we traditionally consider "mind-reading" to be. But I do think that, someday, we'll reach a point where our brain states—as layered and chaotic as they may be—will be something like an open book (if that metaphor even applies). Not for a long, long time, though.



2 comments:

Charles said...

Thanks for taking one for the team. I've heard about aspects of this technology before, so I'm not too surprised to see it coming together like this.

John Mac said...

That "mental noise" you mention has been keeping me awake at night lately. I used to be able to shut it down and go to my "happy place" to facilitate sleep. Not lately, though.

As to the potential for AI to read my mind, all I can say is, bring it on, bitch! I agree with your assessment; it isn't likely to be accurate enough to take seriously.

The day we have to surrender our humanity to AI may be coming, but it ain't here yet.