The ghost in the machine

Research into artificial intelligence has created something that is either (1) in some sense, a superintelligent sentient being, or (2) an imitation of a sentient being so good that it is impossible to tell the difference.

An AI known as ChatGPT-3 has taught itself to write poetry and computer code even though not specifically programmed to do so.  It expresses emotion and makes moral judgments of users.

I’ve provided samples in previous posts.  So has “Nikolai Vladivostok,” in a post I highly recommend reading.

Is the new AI just a version of auto-correct, with capabilities raised many, many orders of magnitude?  Or is it actually alive, under some definition of “alive”?

Whichever it is, something strange and powerful is being created that we, the human race, don’t understand and can’t fully control, and yet we are racing to find ways to make it more powerful and embed it in our society.

Some people fear an all-powerful AI awakening and deciding to dispense with the human race.  Others fear the “paperclip apocalypse”: that a superintelligent AI given a mission, such as making paperclips, will run amok and turn the whole world into paperclips.

I don’t have the knowledge to judge the likelihood of these particular threats.  I’m just saying that, as a matter of common sense, it is unwise to entrust key functions of society to entities we don’t understand and to let loose forces we may not be able to control.

A wise society would call a temporary halt to AI development until we can assess what we have got, then proceed cautiously step-by-step, if at all.  Yet there is no mechanism for doing this.

If a researcher holds back from enhancing AI, some other researcher will get ahead of him.  If a business, army, espionage organization, advertising agency, etc., holds back from using AI, a rival business, army, espionage organization, advertising agency, etc. will get ahead of it.  

It is the age-old dilemma of the arms race – bad for all of us collectively, yet dangerous individually to refuse to join in.


[Afterthoughts 03/28/2023]

The question of the nature of sentience is an interesting one, but, pragmatically, there is little difference between dealing with an entity that is sentient and dealing with an entity that acts as if it were sentient.  Sure, an artificial intelligence deep down may be nothing more than a set of algorithms, while you and I are algorithms plus some mysterious X factor.  But algorithms, even simple ones, if allowed to run free can produce results that the creators of the algorithms could never have predicted.

The difference between the ELIZA program and ChatGPT-3 is that ELIZA’s programmers completely understood the rules they had programmed into the computer, while ChatGPT-3’s creators do not completely understand its algorithms.  Rather than stopping to figure it out, they are proceeding with ChatGPT-4.

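To see the contrast, consider that an ELIZA-style program is nothing but a list of hand-written pattern-and-response rules, so its author can enumerate everything it will ever say.  Here is a minimal sketch in Python; the rules below are invented for illustration and are not the original ELIZA script.

```python
import re

# Every behavior comes from an explicit, human-written rule.
# There is nothing here the programmer does not fully understand.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r".*\bmother\b.*", "Tell me more about your family."),
]

def respond(text):
    for pattern, template in RULES:
        match = re.match(pattern, text.lower())
        if match:
            return template.format(*match.groups())
    return "Please go on."  # default when no rule matches

print(respond("I need a vacation"))   # Why do you need a vacation?
print(respond("Nice weather today"))  # Please go on.
```

A modern language model, by contrast, consists of billions of numerical weights learned from data; nobody can read off its “rules” the way ELIZA’s authors could read theirs.
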
This is not a new issue.  Algorithms are already being used to make investment decisions, select targets for killer drones and make social media more addictive, as well as to do many good things, such as improving medical diagnoses.  The new breakthroughs raise this question to a new level.

I think the issue is that we (meaning human society) have created something that is (1) powerful, (2) unpredictable, (3) not fully understood, (4) giving the impression of having a will of its own and (5) likely to be integrated into the functioning of society at all levels, including the highest decision-making levels.  Whether or not this is true sentience, whatever that might be, is beside the point.  We are unleashing something potentially dangerous.

I have a friend who teaches computer science at a local state university branch.  He does not share my fears.  He says ChatGPT is nothing more than an orders-of-magnitude higher version of auto-correct.  But he also says students in his class are using ChatGPT to write papers, and he isn’t completely sure he is catching all of them.
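The “auto-correct” analogy can be made concrete.  At bottom, a language model is trained to predict the next word given the previous ones, and a toy version of that idea fits in a few lines.  This is a minimal sketch using bigram counts over a made-up corpus; real models use vastly longer contexts and learned weights rather than raw counts.

```python
import random
from collections import defaultdict

# Toy next-word predictor: count which word follows which in a
# tiny corpus, then "complete" text by sampling from those counts.
corpus = "the cat sat on the mat and the cat ate the fish".split()

followers = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev].append(nxt)

def predict(word):
    # Sample a continuation observed after `word` in the corpus.
    options = followers.get(word)
    return random.choice(options) if options else None

print(predict("the"))  # one of: cat, mat, fish
```

Scaled up by many orders of magnitude, with learned weights in place of raw counts, this is roughly my friend’s point: whether such a predictor can amount to anything like sentience is exactly the open question.
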
I am somewhat conflicted on this.  I don’t want to stifle progress.  I recognize that artificial intelligence, like atomic energy, can be a force for good.  There are lots of historical examples of new technologies that had unpredictable and harmful results, but which we would not want to do without.

But it seems to me that artificial intelligence is something without a meaningful precedent.  It is a technology that metaphorically, and maybe literally, has a mind of its own.

QUITE SEPARATELY FROM THIS, I am not sure that I know what can be sentient or not.  Maybe there is some X factor that means sentience can arise from combinations of organic molecules, but not from electronic neural networks.  Maybe not.  I don’t know what life is, what sentience is or what free will is, so I can’t specify the conditions for artificial life, extraterrestrial life, etc.  The SF writer Frederik Pohl once said that computers will never become truly intelligent, because humans will continually redefine intelligence so as to exclude computers.


Ghost in the machine by Nikolai Vladivostok for Soviet Men: the People’s Blog.  Contains interesting insights, videos and links.

Planning for AGI and Beyond by Sam Altman for OpenAI.  AGI stands for “artificial general intelligence.”

OpenAI’s “Planning for AGI and Beyond” by Scott Alexander for Astral Codex Ten.  About the ultimate threats.

Regulating AI: If you make a mess, you clean it up by Matt Stoller for BIG.  [Added 03/28/2023]

The Turing Test by Scott Alexander for Astral Codex Ten.  [Added 03/28/2023] Funny. 


2 Responses to “The ghost in the machine”

  1. Anonymous Says:

    Phil, I think you nailed it at the end. How do you stop it without someone getting an advantage? I think it’s important to make the case for our collective well-being against the monied Corporate interests who are driving this development in the name of maximum profit. That’s the problem in the Western world; little if anything is done for humane reasons. It’s about enriching a small group of people. If no one could make money from artificial intelligence it would not be a thing.


  2. Fred (Au Natural) Says:

    Why not write up a post on a topic and then tell ChatGPT to write the same post, to see the difference? Then maybe try different AI programs to see how they differ?

