Looking backwards from the year 2096

The Nobel economist Paul Krugman was a science fiction fan.  He once said he was inspired to become an economist by the example of Isaac Asimov’s Foundation stories, in which the fictional Hari Seldon created a predictive science of history by which his followers saved galactic civilization.

In 1996, Krugman was invited by the New York Times Magazine to try his hand at science fiction.  To celebrate its centennial, the magazine invited contributors to write as if they were 100 years in the future, looking back on the year 1996.  Here is Krugman’s contribution.


When looking backward, you must always be prepared to make allowances: it is unfair to blame late-20th-century observers for their failure to foresee everything about the century to come.  Long-term social forecasting is an inexact science even now, and in 1996 the founders of modern nonlinear socioeconomics were obscure graduate students.  Still, many people understood that the major forces driving economic change would be the continuing advance of digital technology and the spread of economic development throughout the world; in that sense, there were no big surprises. The puzzle is why the pundits of the time completely misjudged the consequences of those changes.

Paul Krugman

Perhaps the best way to describe the flawed vision of fin de siècle futurists is to say that, with few exceptions, they expected the coming of an “immaculate” economy — one in which people would be largely emancipated from any grubby involvement with the physical world.  The future, everyone insisted, would bring an “information economy” that would mainly produce intangibles.  The good jobs would go to “symbolic analysts,” who would push icons around on computer screens; knowledge, rather than traditional resources like oil or land, would become the primary source of wealth and power.

But even in 1996 it should have been obvious that this was silly. First, for all the talk about information, ultimately an economy must serve consumers — and consumers want tangible goods.  The billions of third-world families that finally began to have some purchasing power when the 20th century ended did not want to watch pretty graphics on the Internet.  They wanted to live in nice houses, drive cars and eat meat.

Second, the Information Revolution of the late 20th century was a spectacular but only partial success.  Simple information processing became faster and cheaper than anyone had imagined, but the once-confident artificial intelligence movement went from defeat to defeat.  As Marvin Minsky, one of the movement’s founders, despairingly remarked, “What people vaguely call common sense is actually more intricate than most of the technical expertise we admire.”  And it takes common sense to deal with the physical world — which is why, even at the end of the 21st century, there are still no robot plumbers.

Most important of all, the long-ago prophets of the information age seemed to have forgotten basic economics.  When something becomes abundant, it also becomes cheap.  A world awash in information is one in which information has very little market value.  In general, when the economy becomes extremely good at doing something, that activity becomes less, rather than more, important.  Late-20th-century America was supremely efficient at growing food; that was why it had hardly any farmers.  Late-21st-century America is supremely efficient at processing routine information; that is why traditional white-collar workers have virtually disappeared.

These, then, were the underlying misconceptions of late-20th-century futurists. Their flawed analysis led, in turn, to the five great economic trends that observers in 1996 should have expected but didn’t.

Soaring Resource Prices

The first half of the 1990’s was an era of extraordinarily low prices for raw materials.  In retrospect, it is hard to see why anyone thought that situation would last.  When two billion Asians began to aspire to Western levels of consumption, it was inevitable that they would set off a scramble for limited supplies of minerals, fossil fuels and even food.

In fact, there were danger signs as early as 1996.  A surge in gasoline prices during the spring of that year was prompted by an unusually cold winter and miscalculations about Middle East oil supplies.  Although prices soon subsided, the episode should have reminded people that industrial nations were once again vulnerable to disruptions of oil supplies.  But the warning was ignored.

Quite soon, however, it became clear that natural resources, far from becoming irrelevant, had become more crucial.  In the 19th century, great fortunes were made in heavy industry; in the late 20th, they were made in technology; today’s super-rich are, more frequently, those who own prime land or mineral rights.

The Environment as Property

In the 20th century, people used some quaint expressions — “free as air,” “spending money like water” — as if the supplies of air and water were unlimited.  But in a world where billions of people can afford cars, vacations and food in plastic packages, the limited carrying capacity of the environment has become perhaps the single most important constraint on the standard of living.

By 1996, it was obvious that one way to cope with environmental limits was to use market mechanisms.  In the early 1990’s, the Government began to allow electric utilities to buy and sell rights to emit certain kinds of pollution; the principle was extended in 1995 when the Government began auctioning rights to the electromagnetic spectrum.  Today, of course, practically every environmentally harmful activity carries a hefty price tag.  It is hard to believe that as late as 1995, an ordinary family could fill up a Winnebago with $1-a-gallon gasoline, then pay only $5 for admission to Yosemite.  Today, that trip would cost about 15 times as much, even after adjusting for inflation.

Once governments got serious about making people pay for pollution and congestion, income from environmental licenses soared.  License fees now account for more than 30 percent of the gross domestic product, and have become the main source of Government revenue; after repeated reductions, the Federal income tax was finally abolished in 2043.

The Rebirth of the Big City

During the second half of the 20th century, the densely populated, high-rise city seemed to be in unstoppable decline.  Modern telecommunications eliminated much of the need for physical proximity in routine office work, leading more and more companies to shift back-office operations to suburban office parks.  It seemed as if cities would vanish and be replaced with a low-rise sprawl punctuated by an occasional cluster of 10-story office towers.

But this proved transitory.  For one thing, high gasoline prices and large fees for environmental licenses made a one-person, one-car commuting pattern impractical.  Today, the roads belong mostly to hordes of share-a-ride minivans efficiently routed by computers.  Moreover, the jobs that had temporarily flourished in the suburbs — mainly office work — were eliminated in vast numbers beginning in the mid-90’s.  Some white-collar jobs migrated to low-wage countries; others were taken over by computers.  The jobs that could not be shipped abroad or be handled by machines were those that required a human touch — face-to-face interaction between people working directly with physical materials.  In short, they were jobs done best in dense urban areas, places served by what is still the most effective mass-transit system yet devised: the elevator.

Here again, there were straws in the wind.  At the beginning of the 1990’s, there was speculation about which region would become the center of the ballooning multimedia industry.  Would it be Silicon Valley? Los Angeles? By 1996, the answer was clear. The winner was — Manhattan, whose urban density favored personal interaction, which turned out to be essential.  Today, of course, Manhattan boasts almost as many 200-story buildings as St. Petersburg or Bangalore.

The Devaluation of Higher Education

In the 1990’s, everyone believed that education was the key to economic success.  A college degree, even a postgraduate degree, was essential for anyone who wanted a good job as one of those “symbolic analysts.”

But computers are proficient at analyzing symbols; it is the messiness of the real world that they have trouble with.  Furthermore, symbols can be transmitted easily to Asmara or La Paz and analyzed there for a fraction of the cost in Boston.  Therefore, many of the jobs that once required a college degree have been eliminated.  The others can be done by any intelligent person, whether or not she has studied world literature.

This trend should have been obvious in 1996.  Even then, America’s richest man was Bill Gates, a college dropout who did not need a lot of formal education to build the world’s most powerful information technology company.

Or consider the panic over “downsizing” that gripped America in 1996.  As economists quickly pointed out, the rate at which Americans were losing jobs in the 90’s was not especially high by historical standards.  Downsizing suddenly became news because, for the first time, white-collar, college-educated workers were being fired in large numbers, even while skilled machinists and other blue-collar workers were in demand.  This should have signaled that the days of ever-rising wage premiums for people with higher education were over.  Somehow, nobody noticed.

Eventually, the eroding payoff of higher education created a crisis in education itself.  Why should a student put herself through four years of college and several years of postgraduate work to acquire academic credentials with little monetary value?  These days, jobs that require only 6 or 12 months of vocational training — paranursing, carpentry, household maintenance and so on — pay nearly as much as a job requiring a master’s degree, and more than one requiring a Ph.D.

So enrollment in colleges and universities has dropped almost two-thirds since its peak at the turn of the century.  The prestigious universities coped by reverting to an older role.  Today a place like Harvard is, as it was in the 19th century, more of a social institution than a scholarly one — a place for children of the wealthy to refine their social graces and befriend others of their class.

The Celebrity Economy

The last of this century’s great trends was noted by acute observers in 1996, yet somehow most people failed to appreciate it.  Although business gurus were proclaiming the predominance of creativity and innovation over mere routine production, in fact the growing ease with which information could be transmitted and reproduced was making it ever harder for creators to profit from their creations.  Today, if you develop a marvelous piece of software, by tomorrow everyone will have downloaded a free copy from the Net.  If you record a magnificent concert, next week bootleg CDs will be selling in Shanghai.  If you produce a wonderful film, next month high-quality videos will be available in Mexico City.

How, then, can creativity be made to pay?  The answer was already becoming apparent a century ago: creations must make money indirectly, by promoting sales of something else.  Just as auto companies used to sponsor Grand Prix racers to spice up the image of their cars, computer manufacturers now sponsor hotshot software designers to build brand recognition for their hardware. And the same is true for individuals.  The royalties the Four Sopranos earn from their recordings are surprisingly small; mainly the recordings serve as advertisements for their arena concerts.  The fans, of course, go to these concerts not to appreciate the music (they can do that far better at home) but for the experience of seeing their idols in person. Technology forecaster Esther Dyson got it precisely right in 1996: “Free copies of content are going to be what you use to establish your fame. Then you go out and milk it.”  In short, instead of becoming a Knowledge Economy we have become a Celebrity Economy.

Luckily, the same technology that has made it impossible to capitalize directly on knowledge has also created many more opportunities for celebrity.  The 500-channel world is a place of many subcultures, each with its own culture heroes; there are people who will pay for the thrill of live encounters not only with divas but with journalists, poets, mathematicians, and even economists. When Andy Warhol predicted a world in which everyone would be famous for 15 minutes, he was wrong: if there are indeed an astonishing number of people who have experienced celebrity, it is not because fame is fleeting but because there are many ways to be famous in a society that has become incredibly diverse.

Still, the celebrity economy has been hard on some people — especially those of us with a scholarly bent.  A century ago it was actually possible to make a living as a more or less pure scholar: someone like myself would probably have earned a pretty good salary as a college professor, and been able to supplement that income with textbook royalties.

Today, however, teaching jobs are hard to find and pay a pittance in any case; and nobody makes money by selling books.  If you want to devote yourself to scholarship, there are now only three options (the same options that were available in the 19th century, before the rise of institutionalized academic research).  Like Charles Darwin, you can be born rich, and live off your inheritance.  Like Alfred Wallace, the less fortunate co-discoverer of evolution, you can make your living doing something else, and pursue research as a hobby.  Or, like many 19th-century scientists, you can try to cash in on scholarly reputation by going on the paid lecture circuit.

But celebrity, though more common than ever before, still does not come easily.  And that is why writing this article is such an opportunity.  I actually don’t mind my day job in the veterinary clinic, but I have always wanted to be a full-time economist; an article like this might be just what I need to make my dream come true.

Source: New York Times.

Click on The Unofficial Paul Krugman Web Page for some of Krugman’s more recent writings.

Click on Krugman is from Trantor; Gingrich ain’t for comment on the Making Light web log about Paul Krugman vs. Newt Gingrich as a contemporary Hari Seldon.

One Response to “Looking backwards from the year 2096”

  1. Hugh MacDougall Says:

    I’ve always liked Paul Krugman, and I’ve always been fascinated with Bellamy’s Looking Backward.  This makes a great combination.

