
Image via Soviet Men: the People’s Blog.
Research into artificial intelligence has created something that is either (1) in some sense, a superintelligent sentient being, or (2) an imitation of a sentient being that is so good that it is impossible to tell the difference.
An AI known as ChatGPT (built on OpenAI's GPT-3 series of models) has taught itself to write poetry and computer code even though it was not specifically programmed to do so. It expresses emotion and makes moral judgments about users.
I’ve provided samples in previous posts. So has “Nikolai Vladivostok,” in a post I highly recommend reading.
Is the new AI just a version of auto-correct, with capabilities raised many, many orders of magnitude? Or is it actually alive, under some definition of “alive”?
Whichever it is, something strange and powerful is being created that we, the human race, don't understand and can't fully control, and yet we are racing to find ways to make it more powerful and embed it in our society.
Some people fear an all-powerful AI awakening and deciding to dispense with the human race. Others fear the "paperclip apocalypse": a superintelligent AI is given a mission, such as making paperclips, runs amok, and turns the whole world into paperclips.
I don’t have the knowledge to judge the likelihood of these particular threats. I’m just saying that, as a matter of common sense, it is unwise to entrust key functions of society to entities we don’t understand and to let loose forces we may not be able to control.
A wise society would call a temporary halt to AI development until we can assess what we have got, then proceed cautiously step-by-step, if at all. Yet there is no mechanism for doing this.
If a researcher holds back from enhancing AI, some other researcher will get ahead of him. If a business, army, espionage organization, advertising agency, etc., holds back from using AI, a rival business, army, espionage organization, advertising agency, etc. will get ahead of it.
It is the age-old dilemma of the arms race – bad for all of us collectively, yet dangerous individually to refuse to join in.

Source: U.S. Copyright Office.
[Afterthoughts 03/28/2023]
The question of the nature of sentience is an interesting one, but, pragmatically, there is little difference between dealing with an entity that is sentient and dealing with an entity that acts as if it were sentient. Sure, an artificial intelligence deep down may be nothing more than a set of algorithms, while you and I are algorithms plus some mysterious X factor. But algorithms, even simple ones, if allowed to run free, can produce results that the creators of the algorithms could never have predicted.
LINKS
Ghost in the machine by Nikolai Vladivostok for Soviet Men: the People’s Blog. Contains interesting insights, videos and links.
Planning for AGI and Beyond by Sam Altman for OpenAI. AGI stands for “artificial general intelligence.”
OpenAI’s “Planning for AGI and Beyond” by Scott Alexander for Astral Codex Ten. About the ultimate threats.
Regulating AI: If you make a mess, you clean it up by Matt Stoller for BIG. [Added 03/28/2023]
The Turing Test by Scott Alexander for Astral Codex Ten. [Added 03/28/2023] Funny.
Tags: AI, Artificial Intelligence, ChatGPT, Machine Intelligence
March 25, 2023 at 11:59 am |
Phil, I think you nailed it at the end. How do you stop it without someone getting an advantage? I think it's important to make the case for our collective well-being against the moneyed corporate interests who are driving this development in the name of maximum profit. That's the problem in the Western world; little if anything is done for humane reasons. It's about enriching a small group of people. If no one could make money from artificial intelligence, it would not be a thing.
March 25, 2023 at 12:41 pm |
Why not write up a post on a topic and then tell ChatGPT to write the same post to see the difference? Then maybe try different AI programs to see how they differ?