Archive for the ‘Technology’ Category

Jaron Lanier on addictive social media

September 21, 2018


These are notes for a presentation to the drop-in discussion group at First Universalist Church of Rochester, 150 S. Clinton Ave., Rochester, N.Y. at 9:15 a.m. on Sunday, Sept. 23.

Free market capitalism + technological change = addictiveness.

Free market capitalism + technological change + artificial intelligence + behavioral psychology + advertising-based social media = maximum addictiveness.

In 2010, a venture capitalist named Paul Graham wrote an essay entitled “The Acceleration of Addictiveness.”  He said that in a free market, the most addictive products would be the most successful, and technological progress would accelerate addictiveness.

He didn’t have a good answer for this, because he didn’t want to give up the benefits of either the free market or technology; the best he could offer was for individuals to understand the process and shield themselves from it.

This has happened in social media. Addiction is a business model.  Research centers, such as the Stanford University Persuasive Technology Laboratory, perfected ways to use technology to modify behavior. Companies use behavioral psychology—positive and negative reinforcement—to make video games and social networks compulsive. 

Jaron Lanier in Ten Arguments for Deleting Your Social Media Accounts Right Now explains that Internet addiction is a real thing.  It is by design.

A vast amount of data is collected about you, moment to moment, including your facial expressions, the way your body moves, who you know, what you read, where you go, what you eat, and your likely susceptibility to assorted attempts at persuasion.  This data is then used by algorithms to create feeds of stimuli – both paid ads and unpaid posts – that are designed to boost your “engagement” and increase the effectiveness of “advertisements.”  (The honest terms would be “addiction” and “behavior modification stimuli.” Indeed, Facebook executives have written that they deliberately incorporated addictive techniques into their service.)

Advertising was previously a mostly one-way street; the advertiser sent forth the ad and hoped for the best.  But now you are closely monitored to measure the effect of what is called an ad so that a personalized stream of stimuli can be incrementally adjusted until the person’s behavior is finally altered.  Most of you are now living in automated virtual Skinner Boxes.

Everyone is susceptible to being influenced at the biochemical level by positive and negative stimuli.

On social media, positive stimuli conveyed might include being retweeted, friended, or made momentarily viral.  Negative stimuli include the familiar occurrences of being made to feel unappreciated, unnoticed, or ridiculed.  Unfortunately, positive and negative online stimuli are pitted against each other in an unfair fight. 

Positive and negative emotions have comparable ultimate power over us, but they exhibit crucially different timing.  Positive emotions typically take longer to build and can be lost quickly, while negative ones can come on faster and dissipate more slowly.  It takes longer to build trust than to lose it.  One can become scared or angry quickly, but it takes longer for the feelings to fade. 

Those who use social media to exert influence – whether human or algorithm – are a little like high frequency traders, constantly watching results and adjusting.  The feedback loop is tight and fast. 

The sour and lousy consequence, which no one foresaw, is that the negative emotions are the most often emphasized, because positive ones take too long to show up in the feedback loop that influences how paying customers and dark actors use these services to manipulate ordinary users and society.
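Lanier’s feedback-loop argument can be sketched as a toy simulation.  This is purely illustrative: the epsilon-greedy rule, the two content categories, and the engagement rates below are my assumptions, not any platform’s actual system.  It shows how a blind optimizer that only watches engagement drifts toward whatever pays off fastest.

```python
import random

# Toy sketch of an engagement-maximizing feedback loop (NOT any real
# platform's algorithm): an epsilon-greedy bandit picks between two
# kinds of content. "Negative" content is assumed to pay off more
# often, mirroring the point that negative emotions show up sooner
# in the feedback loop.
random.seed(42)

arms = {"positive": 0.3, "negative": 0.6}   # assumed engagement rates
counts = {a: 0 for a in arms}
rewards = {a: 0.0 for a in arms}

def choose(epsilon=0.1):
    # explore occasionally, or until both arms have been tried once
    if random.random() < epsilon or not all(counts.values()):
        return random.choice(list(arms))
    # exploit: serve the content with the best observed engagement
    return max(arms, key=lambda a: rewards[a] / counts[a])

for _ in range(5000):
    arm = choose()
    counts[arm] += 1
    if random.random() < arms[arm]:          # simulated "engagement"
        rewards[arm] += 1

# The loop ends up serving mostly the faster-paying negative content.
print(counts["negative"] > counts["positive"])
```

No one has to intend the outcome; the optimizer finds it on its own, which is Lanier’s point.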

Whatever divisions exist in society are likely to be widened by social media.  The Internet can be a means of bringing people together, but anger, paranoia, xenophobia and conspiracy theories are more engaging.


A mock Bugatti Chiron built of Lego parts

September 8, 2018

Lego engineers built a driveable duplicate of a $2.6 million Bugatti Chiron sports car, using more than 1 million Lego parts.

It uses real Bugatti wheels and tires, a steel frame and batteries for power, but more than 90 percent of the car is Lego parts, including 2,300 Lego Technic Power Function motors and 4,632 Lego Technic gear wheels.

It has a fully functional steering wheel, brakes (but no accelerator), headlights, tail lights, speedometer and doors that open and close.  No glue was used in putting the parts together.

A real Bugatti Chiron is made of about 1,800 parts.  It has a 1,500 horsepower motor and a top speed of 261 miles per hour.  The Lego version has a 5.3 HP motor and a theoretical top speed of 18 miles per hour.

But it works!  I bet it was a lot of fun to work on.


Electronic voting machines are easy to hack

August 21, 2018

Source: XKCD

Electronic voting systems have long been vulnerable to tampering and hacking.  This has been known for more than 10 years, but little or nothing has been done about it.

Having a vulnerable system is like leaving your door unlocked or the keys in the ignition of your car.  Eventually somebody is going to take advantage of you.

If we Americans want to protect our voting systems from tampering, it doesn’t matter if the suspected tamperers are Russians, Republicans or somebody else.  We need written ballots that are hand-counted in public.

That’s only one of many problems.   Our election system is increasingly rigged to favor Republicans by discouraging or disqualifying voting by poor people, young people and people of color.

LINKS

How They Could Steal the Election This Time by Ronnie Dugger for The Nation (2004)

It’s Magic! by Greg Palast (2012)

How They’re Stealing Ohio: Vote machines audit function turned off—and worse by Greg Palast  (2016)

Justice Department Warns It Might Not Be Able to Prosecute Voting Machine Hackers by Kim Zetter for Motherboard [Added 9/31/2018]

How to Fight Voter Suppression in 2018 by Edward Burmila for Dissent Magazine.  Twelve ways in which voting procedures are rigged against poor people, young people and people of color, and what to do about them.

Addiction as a successful business model

August 2, 2018

The problem is not just pornography.   Promoting addictiveness is a widespread business model.

A venture capitalist named Paul Graham, writing in 2010, said it is the nature of free market capitalism to make products addictive.

He wasn’t speaking of pornography in particular, but of everything from tobacco to gambling to compulsive viewing of the Internet.

The logic of the marketplace is that the person who makes the most addictive product wins the largest market share.

More recently, Jaron Lanier, a famous virtual reality pioneer, wrote a book, Ten Arguments for Deleting Your Social Media Accounts Right Now, about addictive social media companies.  The business model for companies such as Facebook is behavior modification, he wrote; they cannot give that model up and stay in business.

Their artificial intelligence systems use personal information, social science information and psychology to create “engagement” — which laymen would call “addiction” — by means of advertising and propaganda.  The systems are constantly at work to increase the power of their algorithms.

Stanford University has a Persuasive Technology Laboratory, which learns how to design interactive technology to alter human thoughts and behavior in the interests of advertisers and politicians, not the individuals targeted.

Richard Freed wrote about B.J. Fogg, the head of the laboratory, and how psychological research is used not to liberate people from addictive and compulsive behavior, but the opposite.


The “Fogg Behavior Model” is a well-tested method to change behavior and, in its simplified form, involves three primary factors: motivation, ability, and triggers.

Describing how his formula is effective at getting people to use a social network, the psychologist says in an academic paper that a key motivator is users’ desire for “social acceptance,” although he says an even more powerful motivator is the desire “to avoid being socially rejected.”

Regarding ability, Fogg suggests that digital products should be made so that users don’t have to “think hard.”  Hence, social networks are designed for ease of use.

Finally, Fogg says that potential users need to be triggered to use a site.  This is accomplished by a myriad of digital tricks, including the sending of incessant notifications urging users to view friends’ pictures, telling them they are missing out while not on the social network, or suggesting that they check — yet again — to see if anyone liked their post or photo.
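The three factors described above can be expressed as a rough sketch in code.  The 0-to-1 scales, the multiplicative combination, and the threshold value are illustrative assumptions for the sketch, not values from Fogg’s published work.

```python
from dataclasses import dataclass

# Rough sketch of the three-factor model described above: behavior
# occurs when motivation and ability are high enough at the moment
# a trigger fires. Scales and threshold are illustrative assumptions.

@dataclass
class Stimulus:
    motivation: float   # 0-1, e.g. fear of being socially rejected
    ability: float      # 0-1, how little the user has to "think hard"
    trigger: bool       # e.g. a notification just arrived

def behavior_occurs(s: Stimulus, threshold: float = 0.25) -> bool:
    return s.trigger and (s.motivation * s.ability) > threshold

# High motivation, easy action, plus a notification: the user checks.
print(behavior_occurs(Stimulus(0.9, 0.8, True)))    # True
# The same urge with no trigger produces nothing.
print(behavior_occurs(Stimulus(0.9, 0.8, False)))   # False
```

Note how the trigger is the gating factor: that is why the notifications never stop.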


Douglas Rushkoff on survival of the richest

July 9, 2018

Douglas Rushkoff

Futurist Douglas Rushkoff was offered half a year’s salary to give a talk on the future of technology.  To his surprise, he found his audience consisted of five persons from “the upper echelon of the hedge fund world.”  Their real interest was in Rushkoff’s thoughts on how to survive the coming collapse of civilization.

The CEO of a brokerage house explained that he had nearly completed building his own underground bunker system and asked, “How do I maintain authority over my security force after the event?”

For all their wealth and power, they don’t believe they can affect the future. The Event.  That was their euphemism for the environmental collapse, social unrest, nuclear explosion, unstoppable virus, or Mr. Robot hack that takes everything down.

This single question occupied us for the rest of the hour. They knew armed guards would be required to protect their compounds from the angry mobs. But how would they pay the guards once money was worthless? What would stop the guards from choosing their own leader?

The billionaires considered using special combination locks on the food supply that only they knew.  Or making guards wear disciplinary collars of some kind in return for their survival.  Or maybe building robots to serve as guards and workers — if that technology could be developed in time.


How social media try to manipulate your mind

June 28, 2018


Any time you log on to Google, Facebook, Twitter or other “free” social media, information on every keystroke is being fed into powerful computers somewhere.

Algorithms in these computers correlate this data.  They compare you with other people with similar profiles.  The algorithms—“intelligent,” but blind—experiment with ways to use this information to modify your behavior so you will do what they want.

What they usually want is for you to respond to an ad for a particular product or service.  But they may also be trying to influence you to vote—or not to vote.

Jaron Lanier, a scientist and entrepreneur who pioneered virtual reality, wrote about this in his new book, TEN ARGUMENTS FOR DELETING YOUR SOCIAL MEDIA ACCOUNTS RIGHT NOW (2018).

He thinks this is sinister.  Your social media may not be influencing you a lot, but it is almost certain to have some influence, and that influence is operating on you below your level of awareness.

Social media feeds you stuff that is intended to stimulate your emotion, and it is easier to stimulate feelings of anger, fear and resentment than it is feelings of joy, affection and security.

I know this from my newspaper experience.  Back in the 1990s, my old newspaper made a big effort to discover what kind of news our readers wanted.  In surveys and focus groups, they said they wanted positive news—articles about people accomplishing good things.  But the article they remembered best was a horrible story about a dead baby being found in a Dumpster.

The people who answered the survey weren’t hypocrites.  Not at all.  It is just that we human beings react in ways we don’t choose, and this leaves us open to manipulation.

Another effect of feedback from social media is to reinforce whatever it is you happen to be—liberal, conservative, pro-gun, anti-war—and to diminish your ability to understand people who think differently from you.

I was shocked when I read about Cambridge Analytica, the campaign consultant that worked for the Trump presidential campaign, and its claim that it could manipulate voter behavior on an individual basis.  But I later came to realize that this was the standard Facebook service, and could have been available to the Clinton campaign.

Lanier takes the charges of Vladimir Putin’s interference in the campaign more seriously than I did.  The Russian ads seemed amateurish to me (unless they were decoys to divert attention from the real influence campaign) and most of them were posted after election day.

But the effectiveness of the 2016 ads is beside the point.  If the combination of Big Data, artificial intelligence and behavior modification algorithms can influence voting behavior, Putin is sure to use it, and if he doesn’t, some other foreign government or institution will.  Not to mention our own NSA and CIA.


How to unsubscribe from Amazon

April 13, 2018

Did leaked Facebook data swing the 2016 vote?

March 18, 2018

[Last updated 3/22/2018]

Video added 3/19/2018

The Guardian published an article about how a company called Cambridge Analytica used unauthorized data obtained from Facebook to help swing the 2016 election to Donald Trump.

The Facebook “likes” and other data were used to draw psychological profiles of individual voters, who were then targeted with messages based on those profiles.

A year or so ago, I made a post, based on an earlier article in The Guardian and an expose by the Real News Network, about how Steve Bannon and the Trump campaign used Cambridge Analytica to identify idealistic liberals, young women and African-Americans in key states, and feed them information to discourage them from voting for Hillary Clinton.

Many people question whether such manipulation was possible on a significant scale.  I am not qualified to say.

The thing is, targeted messages don’t have to work every time, or even most of the time—just enough times to tip the balance.   And the technology is being constantly improved, so even if they didn’t make a difference in 2016, they may affect the next election and the one after that.

I don’t have good ideas as to what to do about this.   It is not unethical to send accurate information to someone you think will respond to it.  Does it become unethical when the information and its target are chosen by an artificial intelligence program?  At the very least, we the people ought to be able to know where the messages come from.

Afterthought [3/20/2018]

After thinking this over for a couple of days,  I’m of two minds about Cambridge Analytica and similar companies.


Apollo 17 – I hope it was the latest, not the last

December 27, 2017

The story of the Apollo 17 mission of 45 years ago should not be forgotten.   It is a story of heroism and competence, two qualities we Americans as a nation can’t afford to lose.

At the present time, we as a nation need to give priority to the basics—long-term survival goals more than aspirational goals.  But I hope Apollo 17 was the latest, and not the last, American venture to the moon and beyond.

Why the FCC proposes to eliminate Net Neutrality

November 27, 2017

Doug Muder wrote an excellent post on today’s The Weekly Sift about how the Federal Communications Commission’s proposal to end Net Neutrality will enable monopolists to dominate the Internet.

Long story short: Net Neutrality means that Internet service providers operate like telephone companies.  Anybody connected to the system can call anybody else, and every ISP charges its customers the same rates.  The end of Net Neutrality means that ISPs would operate like cable TV companies.  You would have to accept whatever restrictions they chose to impose.

Muder shows how the end of Net Neutrality ties in with the growth of business monopoly and how this ties in with the growth of economic inequality.

I strongly recommend reading Muder’s article, but I have a couple of graphics below that also explain the issue, although not in as great a depth.

LINK

The Looming End of Net Neutrality (and why you should care) by Doug Muder for The Weekly Sift.


Robots will not (necessarily) replace us

November 15, 2017

You Will Lose Your Job to a Robot—And Sooner Than You Think, argues Kevin Drum in Mother Jones.

His argument is simple.  Historically, computing power doubles every couple of years.   There is no reason to think this will stop anytime soon.   So at some point the capability of artificial intelligence will exceed the capability of human intelligence.  Machines will be able to do any kind of job, including physician, artist or chief executive officer, better than a human being can.

This will happen gradually and then, over the last few doublings of AI capability, suddenly.
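The “gradually, then suddenly” point is simple arithmetic about doubling.  A quick sketch (the 1,000x starting gap between machine and human capability is an arbitrary assumption for illustration):

```python
# Illustrative arithmetic for the "gradually, then suddenly" point:
# if machine capability doubles every two years, most of the growth
# arrives in the last few doublings. The 1,000x starting gap between
# machines and humans is an arbitrary assumption for this sketch.
capability = 1.0      # machine capability today (arbitrary units)
human_level = 1000.0  # assumed human-level capability
years = 0
while capability < human_level:
    capability *= 2   # one doubling...
    years += 2        # ...every two years

print(years)       # 20: ten doublings to close a 1,000x gap
print(capability)  # 1024.0: two years earlier, machines were at only 512
```

Two years before the finish, machines are still at only half the human level, which is why the end feels sudden.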

When that happens, humanity will be divided into a vast majority who serve no economic function, and a tiny group of capitalists who own the means of production.   Rejection of automation is not an option, according to Drum.   It only means that your nation will be unable to compete with nations that embrace it.

The only question, according to Drum, is whether the wealthy capitalists will have enough vision to give the rest of us enough of an income to survive and to create a market for the products of automation.

I have long believed that automation is driven as much by administrators’ desire for command and control as it is by the drive for economic efficiency.   An automated customer service hotline does not provide better service, but it eliminates the need to deal with pesky and contentious human beings.

I also believe that, in the short run, the danger is not that computer algorithms will surpass human intelligence, but that people in authority will treat them as if they do.

Drum presents interesting information, new to me, about the amazing progress of machine intelligence in just the past few years.   But that’s not necessary to his argument.

His argument is based on continuation of exponential growth and (unstated) continuation of the current economic system, which works for the benefit of high-level executives and administrators and of holders of financial assets at the expense of the rest of us.

There’s no law of physics that says development of technology has to result in higher unemployment.  Under a different system of incentives and ownership, technology could be used to expand the capability of workers and to make work more pleasant and fulfilling.

To the extent that automation eliminated boring and routine jobs, it could free up people to work in human services—in schools, hospitals, nursing homes—and in the arts and sciences.

Technology does not make this impossible.   Our current economic structure does.   Our current economic structure was created by human decisions, and can be changed by human decisions.  Technological determinism blinds us to this reality.

Monopoly power on the feudal Internet

June 21, 2017

Maciej Ceglowski, a writer and software entrepreneur in San Francisco, spoke at a conference in Berlin last May about monopoly power on the Internet:

There are five Internet companies—Apple, Google, Microsoft, Amazon and Facebook.  Together they have a market capitalization just under 3 trillion dollars.

Bruce Schneier has called this arrangement the feudal Internet.  Part of this concentration is due to network effects, but a lot of it is driven by the problem of security.  If you want to work online with any measure of convenience and safety, you must choose a feudal lord who is big enough to protect you.

Maciej Ceglowski

These five companies compete and coexist in complex ways.

Apple and Google have a duopoly in smartphone operating systems.  Android has 82% of the handset market, iOS has 18%.

Google and Facebook are on their way to a duopoly in online advertising.  Over half of the revenue in that lucrative ($70B+) industry goes to them, and the two companies between them are capturing all of the growth (16% a year).

Apple and Microsoft have a duopoly in desktop operating systems.  The balance is something like nine to one in favor of Windows, not counting the three or four people who use Linux on the desktop, all of whom are probably at this conference.

Three companies, Amazon, Microsoft and Google, dominate cloud computing.  AWS has 57% adoption, Azure has 34% and Google has 15%.

Outside of China and Russia, Facebook and LinkedIn are the only social networks at scale.  LinkedIn has been able to survive by selling itself to Microsoft.

And outside of Russia and China, Google is the world’s search engine.

That is the state of the feudal Internet, leaving aside the court jester, Twitter, who plays an important but ancillary role as a kind of worldwide chat room.  [1]

There is a difference between the giant Silicon Valley companies and Goldman Sachs, Citicorp and the big Wall Street banks.   The Silicon Valley companies have created value.  The Wall Street banks, by and large, have destroyed wealth.

I depend on Google; I found Ceglowski’s talk through Google Search.   I use Apple products; I’m typing this post on my iMac.  I don’t use Facebook or Windows, but many of my friends do.  I try to avoid ordering books through Amazon, because I disapprove of the way Jeff Bezos treats Amazon employees and small book publishers, but I subscribe to Amazon Prime.

I don’t deny the achievements of the founders of these companies, nor begrudge them wealth and honor.  But I do not think that they or their successors have the right to rule over me, and that’s what their monopoly power gives them.


The case against the Internet

March 29, 2017

Source: Visual Capitalist.

Andrew Keen’s book, The Internet Is Not The Answer (2015), which I recently finished reading, is a good antidote to cyber-utopians such as Kevin Kelly.

Keen says the Internet is shaping society in ways we the people don’t understand.  Some of them are good, some of them are bad, but all are out of control.

Like Kelly, he writes about technology as if it were an autonomous force, shaped by its own internal dynamic rather than by human decisions.  Unlike Kelly, he thinks this is a bad thing, not a good thing.

He does not, of course, deny that the Internet has made life easier in many ways, especially for writers.   But that is not the whole story.  He claims that—

  • The Internet is a job-destroyer.
  • The Internet enables business monopoly.
  • The Internet enables surveillance and invasion of privacy.
  • The Internet enables anonymous harassment and bullying.
  • The Internet enables intellectual property theft.

Keen quotes Marshall McLuhan’s maxim, “We shape our tools, then our tools shape us.”

What he doesn’t quite understand is that the “we” who shape the tools is not the same as the “us” who are shaped by them.

Or to use Marxist lingo, what matters is who owns the means of production.

Technology serves the needs and desires of those who own it.  Technological advances generally serve the needs and desires of those who fund them.

Advances in technology that benefit the elite often serve the general good as well, but there is no economic or social law that guarantees this.   This is as true of the Internet as it is of everything else.

Let me look at his claims one by one—


Your life on the Internet is an open book

March 28, 2017


How Google Tracks You—And What You Can Do About It by Jeff Desjardins for Visual Capitalist.


A brief history of cyber-scares

March 22, 2017

From Russia, With Panic: Cozy bears, unsourced hacks—and a Silicon Valley shakedown by Yasha Levine for The Baffler.   It’s a bit long, but well worth reading in its entirety.

How artificial intelligence elected Trump

February 28, 2017


Hedge fund billionaire Robert Mercer bailed out the Trump campaign last summer when it hit its low point, but that was not the most important thing he did.

The most important thing was to teach Steve Bannon, Jared Kushner and Jason Miller how to use computer algorithms, artificial intelligence and cyber-bots to target individual voters and shape public opinion.

The Guardian reported that Mercer’s company, Cambridge Analytica, claims to have psychological profiles on 220 million American voters based on 5,000 separate pieces of data.  [Correction: The actual claim was 220 million Americans, not American voters.]

Michal Kosinski, lead scientist for Cambridge University’s Psychometric Centre in England, said that with 150 of a person’s Facebook likes, he can know their personality better than their spouse does; with 300 likes, better than the person knows themselves.

Advertisers have long used information from social media to target individuals with messages that push their psychological buttons.

I suppose I shouldn’t be shocked or surprised that political campaigners are doing the same thing.

Bloomberg reported how the Trump campaign targeted idealistic liberals, young women and African-Americans in key states, identified through social media, and fed them negative information about Hillary Clinton in order to persuade them to stay home.

This probably was what gave Trump his narrow margin of victory in Wisconsin, Michigan and Pennsylvania.

The other way artificial intelligence was used to elect Trump was the creation of robotic Twitter accounts that automatically linked to Breitbart News and other right-wing news sites.

This gave them a high ranking on Google and created the illusion—or maybe the self-fulfilling prophecy—that they represented a consensus.


The real threat of vote-rigging

October 31, 2016

Donald Trump’s supporters say the integrity of the coming U.S. election is threatened by illegal voting.  Hillary Clinton’s supporters say it is impossible to rig the U.S. election.  They’re both dead wrong.

The real problem is the vulnerability of electronic voting machines to hacking and the lack of transparency in vote counting.

LINKS

We Will Never Know If Electronic Voting Compromises Elections; Democrats Should Worry About This by Mike the Mad Biologist.

DHS Seeks to Protect U.S. Election Infrastructure – But Is That Even Possible? by Brad Friedman for The BRAD BLOG.

How to Hack an Election in Seven Minutes by Ben Wofford for POLITICO

America’s Electronic Voting Machines Are Scarily Easy Targets by Brian Barrett for WIRED.

Democracy’s Gold Standard: Hand-Marked, Hand-Counted Paper Ballots, Publicly Tabulated at Every Polling Place in America by Brad Friedman for The BRAD BLOG.


Russia accused of war by using weaponized truth

October 18, 2016


Russian intelligence services are accused of waging cyber-warfare by releasing embarrassing Hillary Clinton e-mails through Wikileaks.

There is no direct evidence of where Wikileaks got the Clinton e-mails, but the Russians have the capability and the motive to hack her system.

Would this be an act of war?  I for one would welcome war by means of weaponized truth.

If revealing accurate information about your geopolitical enemy is a form of warfare, I think escalation of this kind of warfare would be a good thing and not a bad thing.

I think the NSA and the CIA should retaliate by arranging the release of damaging secret information about Vladimir Putin—maybe through Wikileaks as a form of poetic justice.

In fact, there are those who think they already have done so, through the Panama Papers leak.


17 things that come out of a barrel of crude

October 3, 2016


Hat tips to Barry Ritholtz, Visual Capitalist and JWN Energy

I think that we the human race have to learn to stop burning oil for fuel because we’re at risk of overheating the planet.  But another reason is that petroleum is such an amazing and versatile substance that it seems a waste to just burn it.

The computer era and productivity statistics

September 28, 2016

Charts: U.S. total factor business productivity and U.S. labour productivity.

If factory automation, artificial intelligence and data tracking are doing all that much to improve productivity, why don’t we see it in the productivity statistics?

It’s true that productivity is growing, and the continual growth in productivity should not be taken for granted.  Maybe productivity would be less or even fall if not for the computer revolution.

Maybe the computer revolution hasn’t gone far enough as yet.  Maybe, as the linked articles suggest, it is confined to just a few industries—electronics, communications and finance.  Maybe it is offset by disinvestment in American industry, as CEOs spend profits on stock buybacks rather than productivity improvements.

The fact remains that productivity was increasing at a faster rate in the United States before the 1980s, which is when Wired magazine proclaimed a new economy.

LINKS

Technology Isn’t Working by The Economist.

Robots, Growth and Inequality by Andrew Berg, Edward F. Buffie and Luis-Felipe Zanna for the International Monetary Fund.

Kevin Kelly’s technological determinism

September 22, 2016

Kevin Kelly is a smart and influential thinker who has good insight into the potential of advanced technologies such as artificial intelligence, virtual reality and data tracking.

He has written popular books on technology with titles such as Out of Control, What Technology Wants and his latest, The Inevitable.  I haven’t read them; they’re no doubt worth reading.  I quarrel with the assumptions reflected in the titles of the books.

His mistake, in my opinion, is in treating technology as an autonomous force to which human beings must adapt, whether they like it or not.

Technology is not out of control.  The fact that we the public don’t control it doesn’t mean that nobody does.   Technology didn’t develop itself.  It developed the way it did because it served the needs of corporations, governments and other institutions.

Technology doesn’t want anything because it isn’t sentient.   Only human beings want things.   Technology ought to serve the wants and needs of people.   People do not exist in order to serve the requirements of technology.

There is nothing inevitable about the path of technological change.   Which technologies are developed is a matter of choice—by somebody.   Devices such as the steam engine existed for centuries before they were put into use.

Ned Ludd would not have destroyed weaving machines if the weavers had owned the machines.  As a Marxist would say, it all depends on who owns the means of production.  Technology works to the benefit of those who own it.


Murray Bookchin: the social matrix of technology

August 21, 2016

This is part of a chapter-by-chapter review of THE ECOLOGY OF FREEDOM: The emergence and dissolution of hierarchy by Murray Bookchin (1982, 1991, 2005).   I’m interested in Bookchin’s work because he provides a deeper, broader and longer-range perspective than the false alternatives in current politics.

chapter ten – the social matrix of technology

In this chapter, Murray Bookchin debunked the idea that the level of technology determines the level of social organization.  Rather, social organization itself is the most important technology.

Human beings do not have to adapt to the requirements of technology.  The machine was made for man, not man for the machine.

The Pyramids of Egypt and the great temples of Assyria and Babylonia did not depend on a high level of technology, he wrote; they were built with primitive tools.

What the great empires of ancient Egypt and the Fertile Crescent discovered was how to organize and mobilize huge numbers of people against their will, and to squeeze the maximum amount of labor out of them.   So long as they had slaves, they had no need to invent labor-saving machinery.

The same was true of the New World, Bookchin wrote.   The democratic Iroquois and the totalitarian Inca used the same types of tools.  It was their social organization that was different.

Neither did geography determine social organization.  The Inca empire and Greek democracy both arose in mountainous regions.

Rather, hierarchy arose, as Bookchin noted in previous chapters, when non-productive old people reinvented themselves as priests and the young men gave their loyalty to warrior bands rather than the village clans.   This happened in many different times and settings.  It set in motion an evolution ending with supposedly sacred despots supported by priests, warriors and tax collectors.

When despotic societies arose, Bookchin wrote, organic matricentric societies had to militarize themselves or else either be conquered or driven from their lands.  What’s remarkable, he wrote, is not the spread of despotism, but how much of the world’s people remained free.


The coming of the robots

July 8, 2016

This video from Boston Dynamics shows the capability of robots to do human labor—not that they would necessarily be in humanoid form as in the video.

In theory, the use of robots could enable human beings to live lives of voluntary, meaningful, higher-level activity.  In practice, the results probably would be more like Kurt Vonnegut’s dystopian novel, Player Piano, with an elite of engineers and a mass of unemployed or under-employed former workers.

If robots do everything, there will be no high-wage, full-employment economy.  There would be no mass consumer market.  Economic activity would be mainly devoted to serving the needs of the owners of the robots and the engineers and technicians who keep the robots running.

A guaranteed annual income would not be a solution.  Human beings degenerate if they have nothing useful to do.

Maybe a new economy would arise—a robot economy serving the elite and a parallel human economy serving the majority of humanity.

Or maybe—in some way I can’t foresee—robotic technology would come under democratic control, and there would be a public debate as to how robotics could be used to benefit everyone and not just a few.

LINKS

New Rossum’s Universal Robots: Toward a Most Minimal Wage by Fred Reed for Fred on Everything.  Lots of interesting links.

Toyota in talks to acquire Boston Dynamics from Google by Danielle Muoio for Tech Insider.

When the Robots Rise by Lee Drutman and Yascha  Mounk for The National Interest [added 7/11/2016]

If Sir Isaac Newton had a Smartphone

April 21, 2016


Via Nusaireyat.

A Fukushima on the Hudson?

April 4, 2016


Ellen Cantarow and Alison Rose Levy wrote an alarming and plausible article for TomDispatch about the likelihood of a Fukushima-type accident at the Indian Point nuclear power plant outside New York City.

The Indian Point plant has a terrible safety record, even by industry standards.  There is an ongoing leak of radioactive tritium-contaminated water, whose source has not been identified, into local groundwater and the Hudson River.  There is a known danger of flooding, which could cause a meltdown of the reactor core, but the management of Entergy, the owner of Indian Point, has declined to install a $200,000 flood detector.

Now a high-pressure natural gas pipeline, planned by an energy company called Spectra, would carry fracked gas within 150 feet of Indian Point.  Accidents in gas pipelines are on the rise, according to a study by the National Transportation Safety Board, because gas companies are cutting corners on safety.

How much risk should the nearly 20 million people who live in the vicinity of Indian Point assume?
