Saturday, May 24, 2025

because we all freak out about the same things at the same time now

People are screaming in delight, horror, horrified delight, and delighted horror about Google's new Veo 3, an AI video-generation platform that, while far from flawless, produces dramatically smoother-looking, smoother-sounding videos than previous platforms (see below). We're definitely in the uncanny valley. And now, I have an idea for how my friends and brothers can interact with "me" after I die. Ha!

I could cynically pronounce myself unimpressed with Veo 3 for the same reason I wasn't impressed by that AI conversation I'd written about, but what does impress me is how much AI quality has improved over such a short span of time. We all knew this was going to happen, and as with so many catastrophic events we can see coming from a mile away once we know the signs (nuclear proliferation, virus weaponization, etc.), we realize full well that we're running headlong into a wall of spikes, but if we don't build the thing, someone else will.

On one hand, the advent of AI will force us regular humans to seek innovative pathways to creativity in order to distinguish ourselves as human. We will have to push the envelope when it comes to our ideas of story, entertainment, food, etc., so as to always stay one step ahead of the AI. But how long can we realistically sustain that? AI is improving at a massively accelerated rate, maybe faster than Moore's Law would predict. When I'm finally dying in a hospital bed, will the attending doc even be human? In East Asia, people are much more open to the idea of non-human doctors, teachers, etc.

Arguably more disturbing than the speed of AI's improvement is the increase in mystery that arises with complexity. AI programs are now so sophisticated that they contain dark corners where no one is quite sure what is happening or will happen. In cinema, the shorthand for this is that the computer or program reaches a saturation point and—boom—becomes conscious or sentient or self-aware. Even way back during the AlphaGo era (you'll recall the AI that consistently beat the Korean baduk master Lee Se-dol in 2016), experts were often mystified by how AlphaGo came up with its creative solutions. No programmer could say, "Yes, I'm the one who programmed AlphaGo to do that." The AI was doing genuinely original things. While I would never say something spooky like "true creativity is lurking inside the machine," you have to admit that, if creativity is at least partially a combinatorial process, then machines that can explore quadrillions of possibilities in a millisecond will arrive at those creative solutions a hell of a lot faster than human beings—with their slow, biological brains—ever could.

AI-driven robots have apparently reached a point where they can teach themselves and each other self-defense. Will we soon see the advent of a perfect martial-arts system, discovered and honed by AI, one that can be adjusted to different human body types? Will these martial robots function in place of soldiers or police or security guards? Will they be otherwise weaponized (I'd say yes, inevitably so)? Humanity is very close to that wall of spikes, I think, and a whole lot of significant trends in AI/robotics and genetic engineering are all converging in what could either be a glorious apotheosis for the human race or, more likely, the global equivalent of a gigantic nuclear explosion. Meanwhile, look at these:

[AI-generated images appeared here.]

For the moment, at least, we can rest assured that the images you saw above involved, on some level, a human, creative element—human decisions driving the AI process forward. But what happens when the AI hardware and software can self-teach, self-repair, and self-replicate? Does it then say, "Thanks—we'll take it from here"? We may be about to witness the death of truth and trust, given our heavily mediatized lives (well, médiatisé is a French word; it's also found in English, but it means something different, so maybe I should say media-saturated). Assuming truth and trust aren't dead already.


5 comments:

  1. The above is cute compared to what's down the pipe. Sure you have heard of AI 2027, the report warning of SAI starting to run things within the next couple of years. https://ai-2027.com/ As scary as the bots taking over and collusively whittling us down to 500,000 cooperative slaves is the Sino-US AI arms race that must be feverishly underway as we sleep. When, say, China is on the brink of being able to drone-swarm all US defensive and offensive weapons and disable nuclear capabilities, an entire cold war will be played out in days. Because the US will have to act or effectively surrender primacy. You and I and everyone else will be locked in place, staring down the barrel. Good convo with one of the authors here: https://youtu.be/wNJJ9QUabkA?si=tbaq0ojd0goMIXsX

    1. Swarm intelligence! Have you ever read Michael Crichton's Prey (2002)? It has a ridiculous, stupid, cinematic ending, but the first 95% of the novel is unsettling.

    2. I watched the Douthat/Kokotajlo conversation and feel a little bit vindicated. Kokotajlo says some things that sound a lot like what I said in this post. I'll have a lot more to say about the interview soon. Thanks for the link.

  2. I have not. Is it about AI taking out his dinosaurs?

