Friday, January 06, 2023

retardart

So when you do research into OpenAI (Elon Musk co-founded it), one of the first things you learn about is a product called ChatGPT (you can use it for free, right now, while it's still in beta). There's a lot of rumbling that this and other, similar AI programs will be a disruptive force in the coming years. A parallel AI called DALL-E (a play on Pixar's "WALL-E," and pronounced like "Dali") uses its "intelligence" to generate images based on whatever wackily descriptive sentences you can write. On my first try with DALL-E (which you have to sign up for), however, the result was a massive, retarded failure. The prompt I gave it was, "A cat doing bloody brain surgery." You'd expect to see a cat in a surgical smock, masked up and leaning over a bloody hole in a patient's skull, the brain barely visible beyond the sliced-open dura mater and swimming in blood as well. But, no—here's what my request produced:

So DALL-E failed to understand the essential fact that the cat was supposed to be doing the surgery. Instead, we've got surgery being done to various cats, which was the sort of not-so-funny imagery I was hoping to avoid.

You know, I gave this post the humorous title of "retardart," given how badly DALL-E botched what I thought would be a simple job, but these images just make me sad.

UPDATE: I got better results when I typed, "feline doctor performs bloody brain surgery."

something weirdly auto-surgical going on here



almost looks to be smoking

I'm not sure DALL-E has a clear idea of what a brain is.
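If you'd rather poke at DALL-E programmatically than through the web sign-up page, the same kind of request can be sent through OpenAI's API. Below is a minimal sketch, assuming the openai Python package and an OPENAI_API_KEY environment variable; the single-image request and the image size are just illustrative defaults, not anything the web interface showed me.

    # Minimal sketch: generating an image from a text prompt with the OpenAI
    # Python client. Assumes the `openai` package is installed and the
    # OPENAI_API_KEY environment variable is set.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.images.generate(
        prompt="feline doctor performs bloody brain surgery",  # the rephrased prompt that worked better
        n=1,                # one image is enough for this experiment
        size="1024x1024",   # one of the supported square sizes
    )

    print(response.data[0].url)  # the generated image comes back as a URL

The image comes back as a URL you can download yourself, so you can judge its feline-surgical accuracy at your leisure.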

__________

EXPLANATORY NOTE: our company's CEO has tasked us with learning about ChatGPT, which is leading us down a rabbit hole. The CEO is optimistic about ChatGPT and its ilk, but the more I learn about it, the more damage I see it doing to society as it makes us all that much lazier.

Basically, ChatGPT is a chatbot with a "deep learning" AI that allows it to give highly intelligent, natural-English prose answers to the questions posed to it. It can also perform any number of functions on request: "Write a poem about A and B (in the style of a Shakespearean sonnet)," or "Write a fictional story about President X meeting with magical beast Y." Like a combination of the Google search engine and Wikipedia, ChatGPT can furnish you with a bio, written in original language, of a famous figure from history. In our session tonight, we asked ChatGPT to write up a fill-in-the-blanks quiz about some vocabulary words we had covered. The thing's ability to produce nearly flawless English is astounding, and as this beta testing goes on, it will only get "smarter" (it's an algorithm, of course, not a mind, so it has no interiority, no subjective sense of "I," which means it's not smart, at least not in the way we usually define "smart").

We discovered some limits pretty quickly, though: you won't get a straight answer to questions like "Who are you?" or "What's your name?" (The latter question will prompt a response like, "You may call me Assistant if you want.") The chatbot also made some minor punctuation errors, from what I saw, but they were, uncannily, the sort of errors I've seen in human writing. The parallel program DALL-E will refuse to make images that go against an as-yet-unseen TOS, so "draw Joe Biden in a skirt" (which my boss tried) won't lead to any results. I got rejected, too, when I requested the image of a woman with enormous breasts emerging from a swimming pool like a breaching whale. No dice.
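For what it's worth, the same sort of request we typed into the chat window can also be made through OpenAI's API. Here's a minimal sketch, assuming the openai Python package, an OPENAI_API_KEY environment variable, and a chat-capable model; the vocabulary words in the prompt are made up for illustration, not the ones we actually used tonight.

    # Minimal sketch: asking a ChatGPT-style model to write a fill-in-the-blanks
    # vocabulary quiz, the same kind of request we made in the web interface.
    # Assumes the `openai` package and an OPENAI_API_KEY environment variable;
    # the model name and vocabulary words are illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",  # any chat-capable model will do here
        messages=[
            {
                "role": "user",
                "content": (
                    "Write a five-question fill-in-the-blanks quiz using the "
                    "words 'ubiquitous', 'laconic', and 'ephemeral'."
                ),
            },
        ],
    )

    print(completion.choices[0].message.content)  # the quiz, in plain prose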


