Posts Tagged ‘Artificial Intelligence’

How artificial intelligence elected Trump

February 28, 2017


Hedge fund billionaire Robert Mercer bailed out the Trump campaign last summer when it hit its low point, but that was not the most important thing he did.

The most important thing was to teach Steve Bannon, Jared Kushner and Jason Miller how to use computer algorithms, artificial intelligence and cyber-bots to target individual voters and shape public opinion.

The Guardian reported that Mercer’s company, Cambridge Analytica, claims to have psychological profiles on 220 million American voters based on 5,000 separate pieces of data.  [Correction: The actual claim was 220 million Americans, not American voters.]

Michal Kosinski, lead scientist for Cambridge University’s Psychometrics Centre in England, said that with 150 of a person’s Facebook likes, he can model that person’s personality better than their spouse can; with 300 likes, better than the person knows themselves.
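To make the idea concrete, here is a minimal sketch of how prediction from likes can work.  This is not Kosinski’s actual model; the pages and per-page weights below are invented for illustration.  The principle is simply that each page a person likes carries a small learned weight for a trait, and the more likes you have, the steadier the estimate becomes.

```python
# Toy illustration only: invented pages and invented weights for one trait.
TRAIT_WEIGHTS = {          # hypothetical per-page weights for "openness"
    "TED Talks": 0.8,
    "Salvador Dali": 0.9,
    "Meditation": 0.7,
    "Monster Trucks": -0.4,
    "NASCAR": -0.6,
}

def predict_openness(likes):
    """Average the weights of the liked pages; pages we know nothing about count as 0."""
    if not likes:
        return 0.0
    return sum(TRAIT_WEIGHTS.get(page, 0.0) for page in likes) / len(likes)

print(predict_openness(["TED Talks", "Salvador Dali", "Meditation"]))  # high score
print(predict_openness(["Monster Trucks", "NASCAR"]))                  # low score
```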

Advertisers have long used information from social media to target individuals with messages that push their psychological buttons.

I suppose I shouldn’t be shocked or surprised that political campaigners are doing the same thing.

Bloomberg reported how the Trump campaign targeted idealistic liberals, young women and African-Americans in key states, identified through social media, and fed them negative information about Hillary Clinton in order to persuade them to stay home.

This probably was what gave Trump his narrow margin of victory in Wisconsin, Michigan and Pennsylvania.

The other way artificial intelligence was used to elect Trump was the creation of robotic Twitter accounts that automatically linked to Breitbart News and other right-wing news sites.

This gave them a high ranking on Google and created the illusion—or maybe self-fulfilling prophecy—that they represented a consensus.


Theo Jansen and his Strandbeests

June 18, 2016

Theo Jansen, a Dutch physicist turned artist, creates self-propelled kinetic sculptures he calls Strandbeests (Dutch for “beach animals”) out of yellow plastic tubing and other materials that can be bought at a hardware store.

They are powered by the wind.  His more advanced creations store up compressed air for when the wind dies down.  They automatically turn away from water.  And they automatically anchor themselves in the sand when the wind gets too fierce.

He said he thinks of them as a new form of life.  He envisions herds of his creations, roaming the Dutch seashore years after he is gone.  I think it is fair to call them at least a new form of artificial intelligence.

The video above shows Strandbeests in action.  The two below show something of how they work.


There’s a line between humanoid and human

January 18, 2016
Hiroshi Ishiguro with Erica, his latest humanoid robot

The following is from The Guardian:

Erica enjoys the theatre and animated films, would like to visit south-east Asia, and believes her ideal partner is a man with whom she can chat easily.

She is less forthcoming, however, when asked her age. “That’s a slightly rude question … I’d rather not say,” comes the answer.

As her embarrassed questioner shifts sideways and struggles to put the conversation on a friendlier footing, Erica turns her head, her eyes following his every move. It is all rather disconcerting, but if Japan’s new generation of intelligent robots are ever going to rival humans as conversation partners, perhaps that is as it should be.

Erica, who, it turns out, is 23, is the most advanced humanoid to have come out of a collaborative effort between Osaka and Kyoto universities, and the Advanced Telecommunications Research Institute International (ATR).

At its heart is the group’s leader, Hiroshi Ishiguro, a professor at Osaka University’s Intelligent Robotics Laboratory, perhaps best known for creating Geminoid HI-1, an android in his likeness, right down to his trademark black leather jacket and a Beatles mop-top made with his own hair.  [snip]


The real reason robots are replacing human labor

May 12, 2015

The great danger of so-called artificial intelligence is not that computers will become sentient beings, but that decision-makers will treat them as if they are.

Machines are tools.  They are a means to multiply human strength and to duplicate repetitive human tasks.  They are highly useful.  But they are not a substitute for human skill and judgment.

The use of automatic pilots in airplanes is a good example.  An automatic pilot will make fewer errors than a human pilot, especially if airline management has pushed the human pilot to the point of exhaustion.  But excessive use of automatic pilots means that the human pilot’s skills wither, and the human is less able to respond in an emergency that doesn’t fit the computer algorithm.

Another example is the use of the Internet and automatic answering machines for customer service.  I don’t think anybody who has ever had to deal with one of these things thinks that they provide improved customer service.  Their purpose is to create a barrier between the organization and the public in order to save money, but also in order to free the managers from the inconvenience of having to deal with actual human beings.

Machines don’t talk back.  Not even self-directed machines talk back.  Neither do they exercise judgment or think of ways to do the work better.

But from the standpoint of a bureaucrat whose goal is the seamless exercise of power, the latter consideration is unimportant.   It is much more convenient to program machines than to deal with employees or deal with the public.


The passing scene: Links & comments 11/19/14

November 19, 2014

The Myth of AI: a conversation with Jaron Lanier for Edge.

Jaron Lanier, a computer scientist, social critic and pioneer virtual reality researcher, said a computer algorithm is no more a form of life, and artificial intelligence is no more a form of intelligence, than a computer is a type of person.

The great danger is not that intelligent computers will take over, but that human beings will abdicate their decision-making to computer algorithms.  This is especially true, Lanier noted, as corporate managers increasingly make decisions based on computer algorithms.

Lanier warned against “premature mystery reduction”—the assumption that when we learn interesting and important new things, these are the key to understanding everything.

The Scheduled Crisis by Jeannette Cooperman for St. Louis magazine.

William Harmening, who was an Illinois state investigator for 34 years and now teaches forensic psychology, criminology and crisis intervention at Washington University in St. Louis, gave a wide-ranging interview on what to expect when a grand jury decides whether to indict Ferguson, Missouri, police officer Darren Wilson in the killing of Michael Brown.

Harmening spoke of the process of “deindividuation” in which people in a crowd are so caught up by anger that they lose the capacity for thought and self-control and become caught up in something that seems like a group mind.

There is an opposite process, he said, in which people are so caught up by fear that they lose any sense of being a part of organized society and do whatever they think will make them safe, at whatever cost.

High Tide in Republicanland by John Pennington.

John Pennington collected photographs for his blog showing water in the streets of American coastal cities at high tide.  He said these photos weren’t taken in the aftermath of storms or anything like that, just after regular high tide.

This is something that will only get worse.  How much worse depends on what Americans and others do to reduce greenhouse gas emissions, which are making the climate change and the ocean rise.

Rise of the machines: Links & comments 8/19/14

August 19, 2014

The Internet’s Original Sin by Ethan Zuckerman for The Atlantic.

The basic problem with the commercial Internet, according to this writer, is the use of advertising to finance Internet services.

Because an individual advertisement on the Internet has little impact, the value of advertising is based on the advertiser’s ability to target individuals who are interested in a particular product.  And the only way to do this is to gather data and use it to profile individuals.
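As a rough sketch of what that profiling amounts to in code (the categories, browsing events and ads below are invented, and real ad systems are far more elaborate), the site logs what you look at, tallies your interests, and serves the ad that best matches the tally.

```python
from collections import Counter

def build_profile(events):
    """Tally how often a user's logged page views fall into each interest category."""
    return Counter(category for _url, category in events)

def pick_ad(profile, ads_by_category):
    """Serve the ad for the user's most frequent interest; fall back to a generic ad."""
    if not profile:
        return "generic ad"
    top_category, _count = profile.most_common(1)[0]
    return ads_by_category.get(top_category, "generic ad")

# Invented browsing history and ad inventory:
events = [("/shop/hiking-boots", "outdoors"),
          ("/shop/tents", "outdoors"),
          ("/shop/laptops", "electronics")]
ads = {"outdoors": "camping-gear ad", "electronics": "laptop ad"}
print(pick_ad(build_profile(events), ads))   # -> "camping-gear ad"
```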

Invasion of privacy is not a bug.  It is a necessary feature.  The reason it is necessary is that most people would rather give up their privacy than pay for Internet services.

Zuckerman thinks this is the reason that NSA surveillance is no big deal for most Americans.  We’re already accustomed to giving up our privacy.

He doesn’t have a good answer as to what to do about all this, and neither do I.

How We Imprison the Poor for Crimes That Haven’t Happened Yet by Hamilton Nolan for Gawker.

The science-fiction movie Minority Report imagined a world in which it was possible to predict when people would commit crimes and to arrest them before the crime occurred.  A predictive science of human behavior does not exist, but that does not stop people in authority from acting as if it did.

American courts are increasingly using what’s called “evidence-based sentencing,” in which the severity of the sentence is based on a computer algorithm’s determination of the likelihood that the person will commit another crime.

In practice, what this means is that a poor youth who grew up in a family without a father will get a worse sentence than a middle-class youth with access to psychiatrists and good job opportunities.

This is contrary to the basic principle of equal justice under law.   If you commit a crime, you should be punished for what you did, not for what somebody thinks you may do.
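To see how the disparity arises, here is a deliberately simplified, made-up version of the kind of risk score such sentencing tools use.  The factors and weights are invented, but the structure is the point: when the score rewards employment, schooling and a stable family background, two people convicted of the same offense get different recommended punishments.

```python
def risk_score(has_job, finished_school, two_parent_household, prior_arrests):
    """Invented weights; higher score = 'higher risk' = harsher recommended sentence."""
    score = 0
    score += 0 if has_job else 2
    score += 0 if finished_school else 1
    score += 0 if two_parent_household else 1
    score += 2 * prior_arrests
    return score

# Same offense, same prior record, different circumstances:
poor_youth = risk_score(has_job=False, finished_school=False,
                        two_parent_household=False, prior_arrests=1)
middle_class_youth = risk_score(has_job=True, finished_school=True,
                                two_parent_household=True, prior_arrests=1)
print(poor_youth, middle_class_youth)   # 6 versus 2: the score tracks circumstances, not the act
```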


Rise of the machines

June 9, 2014

Alan Turing, the great World War Two codebreaker and computer pioneer, devised what he called the Turing Test to determine whether a computer is truly intelligent or not.

The test consists of exchanging blind messages with a hidden entity, and trying to decide correctly whether you are communicating with a human or a machine.  The test has already been passed at least once, by a program devised by a team of Russians posing as a 13-year-old boy in Ukraine.

The science fiction writer Charles Stross, in his novel Rule 34, predicted the rise of autonomous artificial intelligence through the co-evolution of spam and spam filters.  After all, what is spam but a Turing Test—that is, an attempt to convince you that a computer-generated message is a genuine human communication?  I greatly enjoyed the novel, but I’m not worried that this is a real possibility.
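For what it’s worth, the filters in that arms race are mostly humble statistics, not minds.  Here is a toy spam filter in the naive-Bayes family, with invented word probabilities (a real filter would learn them from labelled mail):

```python
import math

# Invented per-word spam probabilities; a real filter learns these from training mail.
SPAM_PROB = {"viagra": 0.95, "winner": 0.90, "meeting": 0.10, "lunch": 0.05}

def spam_score(message, prior=0.5):
    """Combine per-word spam probabilities into one overall probability (log-odds form)."""
    log_odds = math.log(prior / (1 - prior))
    for word in message.lower().split():
        p = SPAM_PROB.get(word)
        if p is not None:
            log_odds += math.log(p / (1 - p))
    return 1 / (1 + math.exp(-log_odds))

print(spam_score("You are a winner"))        # close to 1: looks like spam
print(spam_score("Lunch meeting tomorrow"))  # close to 0: looks like a human wrote it
```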

What we should be worried about is the delegation of human decision-making to computers as if the computers really were autonomous intelligences and not machines responding to highly complex rules (algorithms).

I’ve read that European airlines are much more inclined than American airlines to let planes fly on automatic pilot.  The computer is by definition not prone to human error, so it probably would provide a smoother ride.  But what happens in an emergency that the computer is not programmed to deal with?  The human pilot is less able to deal with it.

Much stock trading is done automatically, by computers responding instantaneously to market data as it comes in.  This is harmless if done by some small trading company with an algorithm its partners think is better than anybody else’s.  But when there are a lot of traders using the same algorithm, then the automatic process can crash the market, and it has.
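A toy simulation, with invented numbers, shows why sameness is the danger: if every trader runs the same “sell after a two percent drop” rule, one modest dip triggers the first sales, each sale pushes the price lower, and that keeps the rule triggered for everyone else.

```python
def simulate(traders=100, reference=100.0, price=97.5,
             drop_trigger=0.02, impact=0.003):
    """Every trader runs the identical rule: sell once the price is drop_trigger
    below the reference.  Each sale pushes the price down a further `impact`
    fraction, which keeps the rule triggered for the next trader."""
    sold = 0
    while sold < traders and price < reference * (1 - drop_trigger):
        price *= (1 - impact)
        sold += 1
    return sold, price

print(simulate())            # a 2.5% dip cascades: all 100 sell, price falls to about 72
print(simulate(price=99.0))  # a 1% dip triggers nobody: (0, 99.0)
```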

American drone warfare is conducted partly by computer algorithm.  Amazon and Barnes & Noble analyze your book-buying habits so as to guess what books you’d probably like.  The same kind of software is used to analyze the behavior of people in the tribal areas of Afghanistan, Pakistan and Yemen and guess who is likely to be an insurgent fighter.

The technology is not the problem.  The problem is human beings using technology as a way to avoid responsibility for their judgments.

LINKS

Turing test breakthrough as super-computer becomes first to convince us it’s human by Andrew Griffin of The Independent.

A Venture Capital Firm Just Named An Algorithm To Its Board of Directors by Rob Wile for Business Insider.

From teledildonics to interactive porn: the future of sex in a digital age by Sam Leith for The Guardian.

P.S. [6/11/14]  Now that I’ve seen samples of the AI program, I don’t think I would have been deceived by it.  Click on Fake Victory for Artificial Intelligence by Leonid Bershidsky for Bloomberg View.

http://www.bloombergview.com/articles/2014-06-09/fake-victory-for-artificial-intelligence?alcmpid=view