Posts Tagged ‘Computers’

The coming of the super-intelligent computer

September 13, 2023

THE MASTER ALGORITHM: How the Quest for the Ultimate Learning Machine Will Remake Our World by Pedro Domingos (2015)

HUMAN COMPATIBLE: Artificial Intelligence and the Problem of Control by Stuart Russell (2019)

NEW LAWS OF ROBOTICS: Defending Human Expertise in the Age of AI by Frank Pasquale (2020)

∞∞∞

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

==Isaac Asimov’s Three Laws of Robotics.

The future is already here.  It’s just not evenly distributed.

==William Gibson

∞∞∞

Artificial intelligence presents us, the human race, with a problem:  How do we control an entity that is more intelligent than we are, that we don’t fully understand, that’s not fully under our control, and that can enhance its own powers?

Computers were once logic made manifest.  They could perform calculations with a speed, accuracy and complexity beyond the power of any unaided human operator, based on the ability of their circuitry – the AND, OR, NOR and NAND gates – to duplicate the work of logicians and mathematicians.  

Computer programs were purely mechanical and deterministic, their behavior fixed by the circuitry, and completely understandable in principle if you delved deeply enough.
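The point about circuitry duplicating the work of logicians can be made concrete with a small sketch. Every gate named above can be built out of NAND alone, which is one reason early computers really were "logic made manifest":

```python
# A minimal sketch: the AND, OR, NOR and NAND gates mentioned above,
# with everything derived from NAND (a universal gate).
def NAND(a: bool, b: bool) -> bool:
    return not (a and b)

def NOT(a: bool) -> bool:
    # NAND with both inputs tied together inverts the signal.
    return NAND(a, a)

def AND(a: bool, b: bool) -> bool:
    return NOT(NAND(a, b))

def OR(a: bool, b: bool) -> bool:
    return NAND(NOT(a), NOT(b))

def NOR(a: bool, b: bool) -> bool:
    return NOT(OR(a, b))
```

Trace any input pair by hand and the deterministic, fully inspectable behavior the paragraph describes is exactly what you see.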

Today’s most advanced artificial intelligence programs are far beyond that.  They can reason empirically and not just logically.  They can learn on their own without human input.  They can reprogram themselves and develop capabilities their human masters did not plan on.

Computer expert friends of mine say that the ever-evolving, ever-changing AIs are more like organisms or ecological systems than they are like machines.  

But they are not sentient.  They don’t think their own thoughts.  They don’t have desires and emotions as we do—at least not insofar as we humans can tell. 

AI is so embedded in our society that few of us would want to shut it down altogether, or even know how to do it if we wanted to.

If you’re an urban, middle-class American, AIs are involved in almost every aspect of your life. 

AIs determine the placement of products on supermarket shelves.  AIs correct your grammar when you use word processors.  AIs diagnose illnesses.  AIs help prospecting companies find oil, gas and mineral deposits.  AIs make social media and on-line games more engaging and addictive.

AIs help marketers plan advertising campaigns, politicians plan political campaigns, stockbrokers plan investment strategies and generals and admirals plan military strategy.  They can beat grand masters at chess and Go.  They confer so many competitive advantages that it is hard to imagine them being rolled back.

This may be just the beginning.

The goal of top AI researchers is artificial general intelligence (AGI), or superintelligence.  This would be an AI that can reason as humans do and perceive the world as humans do, in terms of sights and sounds, but a million times more powerfully, and able to do it not for specialized purposes, as current AIs do, but for any human purpose.

Such an AI would not necessarily be a conscious, living being, but it most likely would be a convincing imitation of one, and not all computer scientists rule out the possibility of actual sentience.  

If biological life and consciousness somehow emerged by themselves in a mysterious way from complex organic molecules, maybe another form of life and consciousness—not necessarily one we could recognize—could emerge from complex electronic processes.

Be that as it may, a powerful force would be unleashed into the human environment, a force with huge potential for both good and evil, which humans would not fully understand and could not fully control. 

What we would need to worry about is not a real-life version of Skynet: computers deciding to replace human beings.  AIs are altruists.  They don't have goals or drives save those that are programmed into them.

The danger would be unintended consequences, the story of the Sorcerer's Apprentice writ large.  Whether that is an immediate danger, a long-range danger or an imaginary danger, I do not know.


Software rot, not cyber-terrorism, is the threat

July 12, 2015

The computer systems serving United Airlines, the New York Stock Exchange and the Wall Street Journal web page all crashed on the same day.

The cause almost certainly was not cyber-terrorism.  It was software rot.

The software of most big institutions is built as layer upon layer of modifications to older, obsolete programs.  There are so many layers of software that nobody fully understands them.

A writer named Zeynep Tufekci explained—

In the nineties, I paid for parts of my college education by making such old software work on newer machines.  Sometimes, I was handed a database, and some executable (compiled) code that nobody had the source code for.  The mystery code did some things to the database.  Now more things needed to be done.

The sane solution would have been to port the whole system to newer machines, fully, with new source code.  But the company neither had the money nor the time to fix it like that, once and for all.

So I wrote more code that intervened between the old programs and the old database, and added some options that the management wanted.  It was a lousy fix. 

It wouldn’t work for the next thing that needed to be done, either, but they would probably hire one more person to write another layer of connecting code. But it was cheap (for them). And it worked (for the moment).

via Medium.
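The layering Tufekci describes, glue code wrapped around opaque old code, can be sketched in miniature. Everything here is hypothetical (the names and data are invented for illustration), but the shape is the point: nobody can change the legacy routine, so each new requirement gets its own wrapper instead of a rewrite:

```python
# Hypothetical miniature of the shim pattern described in the quote above.
def legacy_lookup(record_id):
    # Stand-in for the opaque old program (imagine compiled code with
    # no surviving source): returns raw, oddly formatted data.
    return {"ID": record_id, "NAME": "  SMITH,JOHN  "}

def shim_v1(record_id):
    # First layer of glue: clean up the legacy output for a newer system.
    raw = legacy_lookup(record_id)
    name = raw["NAME"].strip()
    last, first = name.split(",")
    return {"id": raw["ID"], "first": first, "last": last}

def shim_v2(record_id):
    # Next requirement, next layer: management wants a display name too.
    rec = shim_v1(record_id)
    rec["display"] = rec["first"] + " " + rec["last"]
    return rec
```

Each layer is cheap and works for the moment, and each layer makes the whole stack harder to understand, which is software rot in a nutshell.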

Another aspect of the problem is that most software is written in a hurry to meet tight deadlines.  Remember the engineers' proverb?

Price.  Time.  Quality.

Pick any two.

All this is part of a larger societal problem—the refusal of managers of big institutions to spend money on maintenance.

Our dominant operating systems, our way of working, and our common approach to developing, auditing and debugging software, and spending (or not) money on its maintenance, has not yet reached the requirements of the 21st century.  [snip]

From our infrastructure to our privacy, our software suffers from “software sucks” syndrome which doesn’t sound as important as a Big Mean Attack of Cyberterrorists. But it is probably worse in the danger it poses.

Via Why the Great Glitch of July 8 Should Scare You by Zeynep Tufekci for Medium.

Rise of the machines

June 9, 2014

Alan Turing, the great World War Two codebreaker and computer pioneer, devised what he called the Turing Test to determine whether a computer is truly intelligent or not.

The test consists of exchanging blind messages with a hidden entity, and trying to decide correctly whether you are communicating with a human or a machine.  The test has already been passed at least once, by a program devised by a team of Russians posing as a 13-year-old boy in Ukraine.

The science fiction writer Charles Stross, in his novel Rule 34, predicted the rise of autonomous artificial intelligence through the co-evolution of spam and spam filters.  After all, what is spam but a Turing Test—that is, an attempt to convince you that a computer-generated message is a genuine human communication?  I greatly enjoyed the novel, but I'm not worried that this is a real possibility.

What we should be worried about is the delegation of human decision-making to computers as if the computers really were autonomous intelligences and not machines responding to highly complex rules (algorithms).

I’ve read that European airlines are much more inclined than American airlines to let planes fly on automatic pilot.  The computer is by definition not prone to human error, so it probably would provide a smoother ride.  But what happens in an emergency that the computer is not programmed to deal with?  The human pilot is less able to deal with it.

Much stock trading is done automatically, by computers responding instantaneously to market data as it comes in.  This is harmless if done by some small trading company with an algorithm its partners think is better than anybody else’s.  But when there are a lot of traders using the same algorithm, the automatic process can crash the market, and it has.
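The same-algorithm danger can be shown with a toy simulation (this is an invented illustration, not a model of any real market). Suppose every trader follows the identical rule, "sell if the price falls 5 percent below my entry price," and every sale itself pushes the price down. One small dip then triggers everyone at once:

```python
# Toy illustration of a same-algorithm cascade, not a real market model.
def simulate(num_traders, entry_price, initial_dip, impact_per_sale):
    price = entry_price - initial_dip
    sold = set()
    changed = True
    while changed:
        changed = False
        for t in range(num_traders):
            # The identical stop-loss rule every trader runs:
            if t not in sold and price < entry_price * 0.95:
                sold.add(t)
                price -= impact_per_sale  # each sale depresses the price
                changed = True
    return price, len(sold)

# A 6% dip below a 5% threshold: every one of 100 traders sells,
# driving the price far below where the dip alone would have left it.
final_price, sellers = simulate(num_traders=100, entry_price=100.0,
                                initial_dip=6.0, impact_per_sale=0.5)
```

With diverse strategies some traders would buy the dip and cushion the fall; with one shared algorithm, the rule amplifies the very move it was meant to protect against.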

American drone warfare is conducted partly by computer algorithm.  Amazon and Barnes & Noble analyze your book-buying habits so as to guess what books you’d probably like.  The same kind of software is used to analyze the behavior of people in the tribal areas of Afghanistan, Pakistan and Yemen and guess who is likely to be an insurgent fighter.

The technology is not the problem.  The problem is human beings using technology as a way to avoid responsibility for their judgments.

LINKS

Turing test breakthrough as super-computer becomes first to convince us it’s human by Andrew Griffin of The Independent.

A Venture Capital Firm Just Named An Algorithm To Its Board of Directors by Rob Wile for Business Insider.

From teledildonics to interactive porn: the future of sex in a digital age by Sam Leith for The Guardian.

P.S. [6/11/14]  Now that I’ve seen samples of the AI program, I don’t think I would have been deceived by it.  Click on Fake Victory for Artificial Intelligence by Leonid Bershidsky for Bloomberg View.


The passing scene: Links & comments 11/12/13

November 12, 2013

Mondragon and the System Problem by Gar Alperovitz and Thomas M. Hanna for Truthout.

The Mondragon Corporation, based in Spain’s Basque country, is a federation of worker-owned cooperatives employing 80,000 people, which is often held up as an example of a successful alternative to the investor-owned corporation.

But recently one of its biggest units, Fagor Electrodomesticos, a manufacturer of dishwashers, cookers and other appliances, had to file for protection from creditors under Spain’s bankruptcy laws.  Alperovitz and Hanna say that this is no reflection on Mondragon’s effective internal model, but that this model does not shield it from a bad Spanish and world economy.

Socialism in One Village by Belen Fernandez for Jacobin magazine.

The village of Marinaleda in Andalusia calls itself a “utopia towards peace.”  It has full employment, affordable housing, no crime and free Wi-Fi, thanks to a local economy based on a worker-owned farm cooperative.

Fernandez said it is not really a utopia.  It has not escaped the effects of Spain’s recession, and its politics are dominated by its charismatic mayor and his clique.  But it sets an example to the rest of Spain and the world of what is possible.

All Can Be Lost: The Risk of Putting Our Knowledge in the Hands of Machines by Nicholas Carr for The Atlantic Monthly.

Computers on average are more reliable than human judgment, so we rely on them to fly airplanes, diagnose illness, design buildings and a whole lot of other things.  The problem is that for any human capacity, you lose it if you don’t use it, and that creates big problems when computers fail.

How Republicans Rig the Game by Tim Dickinson for Rolling Stone.

The Republicans are becoming a minority party, but they hold on to power by means of gerrymandering, voter suppression and abuse of the filibuster.  Why don’t the Democrats make an issue of this?

The unemployment rate for veterans remains incredibly high by Brad Plumer for the Washington Post’s Wonkblog.

The job market is tough for everybody, but tougher for veterans because of service-connected disabilities, lack of civilian work experience, and employers’ failure to recognize relevant military work experience.