If Only He Can

Can Obama Do for the Grid What Eisenhower Did for Highways?
By ANDREW C. REVKIN
12:21 p.m. | Updated |
Building on earlier discussions of President Obama’s options on energy and the environment, here’s a “Your Dot” contribution from Lee C. Harrison, a frequent comment contributor who’s a senior research associate at the Atmospheric Sciences Research Center of the University at Albany:

NREL
A map of the solar energy potential in the United States (the colors darken with more kilowatt-hours per square meter per day). More background here.
The United States is one of the most favored nations for combined wind and solar resources. Few people are surprised that our solar energy availability is highest in the southwest deserts, but many do not appreciate the degree to which wind energy is strongest at sea along the coasts, and in a stripe down the central plains states. Detractors of renewable energy often claim that “the intermittency problem” prevents wind and solar power from ever providing anything beyond a small fraction of our nation’s energy. This claim is simply untrue. A recent study [footnote 1] shows that even the Atlantic region of the United States can achieve electric energy self-sufficiency using a combination of wind and solar, aided by storage that is feasible today. However, it will be both more advantageous and lower in cost if the United States commits to the construction of a true national power distribution grid.

NREL
A map of the potential wind resource over the United States (at an elevation of 80 meters aboveground). Click here for more maps and larger versions.
The contiguous lower 48 states span three time zones and usually have two to three frontal weather systems moving across them. Both wind and solar availability are affected by these moving systems; averaging across the United States makes the fractional variability much smaller. To achieve this we need a far more capable distribution grid, turning the 48 states (and likely much of Canada and northern Mexico) into a single unified power pool capable of moving large amounts of power. Studies demonstrate this is practical today and would yield economies even without renewable energy, because it would reduce the number of power plants that must be built and maintained, and would create a bidding pool far more resistant to price manipulation. It also strongly improves the case for wind and solar, since power can be shipped from areas of strong resource to markets far away. [footnote 2]
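Harrison’s geographic-averaging argument can be checked with a toy simulation (the site count and output distribution below are illustrative assumptions, not figures from the study he cites): when weakly correlated sites are pooled, the fractional variability of the combined output falls roughly as the square root of the number of sites.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: hourly output at a dozen sites, each driven by weather
# that is (for simplicity) independent from region to region.
n_hours, n_sites = 24 * 365, 12
site_output = rng.gamma(shape=2.0, scale=0.5, size=(n_hours, n_sites))

def coefficient_of_variation(x):
    """Standard deviation relative to the mean: fractional variability."""
    return x.std() / x.mean()

single_site_cv = np.mean(
    [coefficient_of_variation(site_output[:, i]) for i in range(n_sites)]
)
pooled_cv = coefficient_of_variation(site_output.mean(axis=1))

# Pooling independent sites shrinks fractional variability by
# roughly a factor of sqrt(n_sites).
print(f"average single-site CV: {single_site_cv:.2f}")
print(f"pooled (grid-wide) CV:  {pooled_cv:.2f}")
```

Real regional outputs are partially correlated, so the reduction on an actual continental grid would be smaller than this independent-sites sketch suggests, but the direction of the effect is the same.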

Improvements in how we use energy are economic “low-hanging fruit” and are already underway. We need improvement in building efficiency and particularly in air-conditioning, which is the largest driver of peak electrical loads.

Peak loads correlate very well with solar availability (for obvious reasons), but locally the peak lasts later than the sun does. Improved building and AC performance trim this; AC systems that “store coolth” are straightforward and increasingly in use today. Solar power shipped from the American southwest can handle east-coast loads (the majority of U.S. electrical consumption) well after sunset.
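The time-zone arithmetic behind that last claim is simple to sketch (the specific hours are assumptions for illustration): the desert Southwest runs two hours behind the East Coast, so a plant still generating at 7 p.m. Mountain time is serving a 9 p.m. Eastern load.

```python
from datetime import datetime, timedelta, timezone

# Illustrative offsets: Arizona (Mountain Standard Time, UTC-7)
# versus New York (Eastern Standard Time, UTC-5).
MST = timezone(timedelta(hours=-7), "MST")
EST = timezone(timedelta(hours=-5), "EST")

# Suppose a Southwest solar plant is still producing at 7 p.m. MST.
still_generating = datetime(2013, 1, 29, 19, 0, tzinfo=MST)

# The same moment on an East Coast clock -- well after local sunset.
east_coast_clock = still_generating.astimezone(EST)
print(east_coast_clock.strftime("%H:%M %Z"))  # 21:00 EST
```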

Power management can also be done on the “demand side” by implementing near-real-time pricing for end users. Some storage or backup/peaking power generation will still be needed.

This transition need not be difficult or costly given a national grid. An obvious example is “the Portuguese model”: the Portuguese simply use their existing fossil power plants as backup. Fossil-fueled peaking/backup plants produce little CO2 as long as their duty cycle is small, and their other pollutants become more tolerable too. As battery cars and plug-in hybrids become more common, the storage capacity and load-shifting (for charging) of the transportation fleet become very favorable in terms of load management — ditto the potential to use the fleet of parked batteries to assist during peak loads.

Nothing stops us now other than the lack of national political will to overcome inertia and the entrenched regional and state regulatory oligopolies. We need an “Eisenhower Interstate” program for a national electrical grid. It can then make fossil-powered generation intermittent!
12:21 P.M. Update
I accidentally left out Harrison’s footnotes. Here they are (with annotations added in text above):

1. Journal of Power Sources, 2012. DOI: 10.1016/j.jpowsour.2012.09.054, available by free download from here.

2. Green Power Superhighways, Building a Path to America’s Clean Energy Future.

Also:

Archer, C. L.; Jacobson, M. Z. (2007). “Supplying Baseload Power and Reducing Transmission Requirements by Interconnecting Wind Farms.” Journal of Applied Meteorology and Climatology 46 (11): 1701–1717. Bibcode 2007JApMC..46.1701A.
doi:10.1175/2007JAMC1538.1

Czisch, Gregor; Gregor Giebel. “Realisable Scenarios for a Future Electricity Supply based 100% on Renewable Energies.” Institute for Electrical Engineering – Efficient Energy Conversion. University of Kassel, Germany and Risø National Laboratory, Technical University of Denmark.
Back to original post: For more, explore the Energy Department’s Smartgrid.gov Web site and read Ken Silverstein’s recent Forbes post, “Smart Grid May be Shortest Route to Obama’s Green Energy Goals.”

Here’s a useful Scientific American “smart grid” explainer:


New Book About Richard Nixon


A new biography of an American president.

WAG THE DOG The making of Richard Nixon.
BY THOMAS MALLON
FEBRUARY 4, 2013

Richard and Pat Nixon, two essentially shy people who would now both be a hundred years old, first met onstage. Each had a role in the Whittier Community Players’ 1938 production of “The Dark Tower,” by George S. Kaufman and Alexander Woollcott. Pat Ryan, a pretty twenty-five-year-old teacher at Whittier High, came to “The Dark Tower” with a smidgen of theatrical experience. Born in a Nevada mining-town shack and toughened by a hardworking childhood on a farm in Artesia, she had helped put herself through the University of Southern California with occasional jobs as a movie extra. But it wasn’t any real enthusiasm for the stage that brought her to the Community Players. As her daughter Julie explains in a biography of her mother, she went only because the assistant superintendent at Whittier High asked her to, and she “found it difficult to say no to a school administrator.” Nixon took to the whole business and several months later was back for more. At the urging of the Players’ director, he went on to appear in “Night of January 16th,” a melodrama by Ayn Rand in which the text itself chewed the scenery.

Pat Nixon, in later years, gave three memorably painful on-camera performances opposite Richard Nixon. In each of them, she was without lines of her own, but her mute, stricken countenance became an important part of the historical impression being created and preserved. On the last of these occasions, standing in the East Room of the White House on August 9, 1974, as her husband said farewell to his staff, she managed to avoid the tears that had flooded her eyes during a previous broadcast agony, her husband’s tentative concession to John F. Kennedy on Election Night, 1960.

But Pat Nixon’s presence and expression were most critical at the first of these televised displays, the one that took place at the El Capitan Theatre, in Los Angeles, on September 23, 1952. The surviving film of her husband’s “Checkers” speech shows her on-camera, her jaw supportively set, for only seconds each time, as Nixon rebuts the accusation imperilling his campaign for the Vice-Presidency on a ticket headed by General Dwight D. Eisenhower. “secret rich men’s trust fund keeps Nixon in style far beyond his salary” was the New York Post’s headline a few days before. Not so, Nixon now argued, and more or less proved to the television audience, by laying out everything he and his wife owned and owed: “I have no life insurance whatever on Pat. . . . I owe 4,500 dollars to the Riggs Bank in Washington, D. C., with interest four and a half per cent. I owe 3,500 dollars to my parents. . . .”

Though he was nowhere near the theatre, Checkers, a canine present from a supporter, stole the show. “Regardless of what they say about it,” Nixon insisted of the dog, “we’re gonna keep it.” Checkers joined forces with Pat’s cloth coat (“I always tell her that she’d look good in anything”) to insure the candidate’s continued place on the ticket. Heartwarming or revolting—take your pick—the speech was indisputably effective, and it might never have been given at all had Pat Nixon not overridden her husband’s last-minute attack of stage fright. “I just don’t think I can go through with this one,” he told her three minutes before the camera’s red light went on. “Of course you can,” she replied, thereby extending his political life for more than two decades. The speech was a grand slam—Nixon celebrated its anniversary every year, even after Watergate—but Pat Nixon loathed politics from that televised moment on.

The origins of the Checkers episode can probably be traced to Nixon’s run-in with Earl Warren, the governor of California, who four years earlier had been Thomas E. Dewey’s running mate and sixteen years later swore in Richard Nixon as the thirty-seventh President of the United States. In the summer of 1952, Warren had positioned himself as a favorite-son candidate for President, but his control of the California delegation was threatened by Senator Nixon’s attempts to maneuver it into the Eisenhower camp. Two months after the Republican Convention, a still disgruntled Warren supporter may have leaked the story of Nixon’s “secret fund” to the Post.

Soon the much tonier New York Herald Tribune was chiming in, and causing Nixon’s biggest problems. Having long clamored for an Eisenhower nomination, this editorial avatar of liberal Republicanism now called for the General’s Vice-Presidential pick to get off the ticket. Advisers close to Eisenhower, ones much deeper inside the Party and financial establishments than the young, mortgaged, and stridently anti-Communist Senator, urged the same course. Eisenhower stayed largely silent on the matter for days, not even telephoning his running mate, though he did drop a quotable remark that Nixon would need to prove himself to be as “clean as a hound’s tooth” if he wanted to remain on the ballot. When the General finally did call, a thoroughly agitated Nixon told the architect of the Normandy landings that it was time to “shit or get off the pot.” By the conversation’s end, however, the General still wanted the Senator to go on television to explain the whole matter.

Eisenhower scarcely understood the power of the weaponry that he was inviting Nixon to bring onto the field. But a couple of Nixon’s allies, his bruising tactician Murray Chotiner and a political P.R. man named Robert Humphreys, instinctively grasped that television was about to alter politics as thoroughly as the nuclear option had recently changed military strategy. With money from various Republican campaign committees, they secured Nixon thirty minutes of airtime following Milton Berle on the Tuesday-night TV lineup. For the next couple of days, Nixon mostly secluded himself—already his customary crisis mode—and prepared. Then, shortly before the broadcast, a phone call from Governor Dewey, who, for all his establishment credentials, had been a real Nixon supporter, threw the candidate into a tailspin. Dewey regretted to tell him that Eisenhower’s closest aides believed the TV speech should conclude with Nixon’s resignation from the ticket. The candidate, as furious as he was shaken, hung up after instructing Dewey to tell everyone around Eisenhower that “I know something about politics, too!”

PHOTOGRAPH: AP

Read more: http://www.newyorker.com/arts/critics/atlarge/2013/02/04/130204crat_atlarge_mallon

A Different Perspective on Energy

Exploring the Energy Debate in an Artistic Manner

By Kurt Cobb | Tue, 29 January 2013 22:59


It is hard to imagine a more unlikely vehicle for advancing energy literacy than a finely crafted large-format picture book. Energy, after all, is invisible. We see its effects, but never the thing itself. And yet, Energy: Overdevelopment and the Delusion of Endless Growth succeeds, and succeeds profoundly, for it puts on display those effects so compellingly that the reader cannot help but turn the pages to see more.

Taken with the eye of the fine art photographer, the book’s images project a disturbing beauty. They seduce the viewer with their attention to composition, color, light, and perspective. This impels us to enter into these images and contemplate rather than merely visually consume an exploding offshore oil platform; a desolated landscape strewn with derelict drilling rigs; a decapitated mountain; a pelican coated with oil; a coal strip mine seen from its bottom; and a tar sands mine seen from the sky. Once drawn in, the viewer cannot help but feel the immensity and drama of the energy issues we now face. And, once drawn in, the viewer wants more images that will somehow explain this immense drama and its significance for each of us.

Leafing through the pages, you will be astonished at each successive image. Eventually, you will reach a substantial block of text. By then you will be more than ready for some explanation to put into words what all these images taken together might mean.

The essays that follow are penned by noted writers such as poet, novelist and farmer Wendell Berry, climate change activist Bill McKibben, and peak oil author Richard Heinberg; by scientists such as climate scientist James Hansen and sustainable agriculture researcher Wes Jackson; and by big-picture pragmatists such as Plan B author Lester Brown and energy efficiency guru Amory Lovins.

These and others take readers through the entire controversy surrounding energy starting with key energy concepts and then moving onto the waning of the fossil fuel age, alternative energy, climate change, energy conservation, and energy efficiency. Opposite these essays, many full-page images continually remind us of the colossal nature of the subject.


I feel comfortable revealing the overall conclusion of the book without issuing a spoiler alert, for in order to understand this conclusion, you must understand everything that precedes it. Here it is: We must drastically reduce our use of energy over the coming decades if we expect human civilization, and perhaps even humans, to survive.

This conclusion runs so contrary to the conventional wisdom that those new to energy issues may conclude that it cannot be so. But I urge you to keep reading and contemplating the images. The book’s second section correctly characterizes our energy situation as a predicament. The dictionary defines predicament as “a difficult, perplexing, or trying situation.” Frequently, it means a situation for which there is no response that restores the status quo ante. Problems have solutions; predicaments require coping mechanisms.

To disabuse readers of the solutions currently on offer—such as new unconventional sources of oil and natural gas, “clean” coal, nuclear power, massive hydropower dams, biofuels, and geoengineering of the climate—an entire section of essays explains why these are false solutions if by solutions people mean that we can go on with business-as-usual after implementing them.

As image upon image builds in your mind, you will begin to see that there are deeper concerns at stake, and the essayists help elucidate these: the rights and survival of other species; the voracious human appetite unleashed by modern global capitalism and its creed of perpetual growth; the nature of human happiness; and the importance of beauty. (The book, in fact, treats us to some images of unspoiled landscapes to remind us of the beauty we are losing.)


Economic growth has limits on a spherical planet. Those limits are already in evidence. What Energy does in a profound way is demonstrate that human beings are limited creatures, both in their understanding and their powers. We humans have already amply demonstrated that by not anticipating and then not addressing the myriad critical environmental and resource problems we face today.

What this book seems to ask, then, is whether accepting limits, ours and the Earth’s, can make us and our posterity better humans—or whether it is simply inevitable that our hubris, born of nothing other than a brief period of cheap, plentiful but finite energy, will lead us to ruin.

History and science tell us that this era must end. How it will end is partly in our hands. Energy does make a few suggestions: conservation, distributed renewable energy, reinvigorated local economies, family planning, and a renewed emphasis on the intangible rewards of being human, including the fellowship of others and our encounters with beauty.

Will we continue to accept the religion of unlimited economic growth—which must also be accompanied by unlimited growth in the production of energy and other resources—until a remorseless nature enforces its limits upon us?

Or will we accept those limits—now so painstakingly outlined to us by our own science—and seek out the happiness and beauty that come from working in concert with others to transform our society into one that can sustain us—and sustain all the living things which make our lives possible and which have a claim on the biosphere that we can no longer afford to ignore?

By Kurt Cobb

Developments in Stem Cell Research

Embryonic stem cells

Looking up

Stem-cell research is now bearing fruit

FOURTEEN years ago James Thomson of the University of Wisconsin isolated stem cells from human embryos. It was an exciting moment. The ability of such cells to morph into any other sort of cell suggested that worn-out or damaged tissues might be repaired, and diseases thus treated—a technique that has come to be known as regenerative medicine. Since then progress has been erratic and (because of the cells’ origins) controversial. But, as two new papers prove, progress there has indeed been.

This week’s Lancet published results from a clinical trial that used embryonic stem cells in people. It follows much disappointment. In November, for example, a company in California cancelled what had been the first trial of human embryonic stem cells, in those with spinal injuries. Steven Schwartz of the University of California, Los Angeles, however, claims some success in treating a different problem: blindness. His research, sponsored by Advanced Cell Technology, a company based in Massachusetts, involved two patients. One has age-related macular degeneration, the main cause of blindness in rich countries. The other suffers from Stargardt’s macular dystrophy, its main cause in children. Dr Schwartz and his team coaxed embryonic stem cells to become retinal pigment epithelium—tissue which supports the rod and cone cells that actually respond to light—then injected 50,000 of them into one eye of each patient, with the hope that they would bolster the natural supply of these cells.

The result was a qualified success. First and foremost, neither patient had an adverse reaction to the transplant—always a risk when foreign tissue is put into someone’s body. Second, though neither had vision restored to any huge degree, each was able, four months after the transplant, to distinguish more letters of the alphabet than they could beforehand.

Whether Dr Schwartz’s technique will prove truly useful remains to be seen. Experimental treatments fail far more often than they succeed. But the second paper, published in Nature by Lawrence Goldstein of the University of California, San Diego, and his colleagues, shows how stem cells can be of use even if they do not lead directly to treatment.

Since 2006 researchers have been able to reprogram adult cells into an embryonic state, using proteins called transcription factors. Though these reprogrammed cells, known as induced pluripotent stem (iPS) cells, might one day be used for treatment, their immediate value is that they are also an excellent way to understand illness. Using them, it is possible to make pure cultures of types of cells that have gone wrong in a body. Crucially, the cultured cells are genetically identical to the diseased ones in the patient.

Dr Goldstein is therefore using iPS cells to try to understand Alzheimer’s disease. The brains of those with advanced Alzheimer’s are characterised by deposits, known as plaques, of a protein-fragment called beta-amyloid, and by tangles of a second protein, called tau. But how these plaques and tangles are related remains unclear. To learn more, Dr Goldstein took tissue from six people: two with familial Alzheimer’s, a rare form caused by a known genetic mutation; two with sporadic Alzheimer’s, whose direct cause is unknown; and two unaffected individuals who acted as controls. He reprogrammed the cells collected into iPS cells, then nudged them to become nerve cells.

In three of the four Alzheimer’s patients these lab-made nerve cells did, indeed, show higher levels of beta-amyloid and tau—and also of another characteristic of the disease, an enzyme called active GSK3-beta. Since he now had the cells in culture, Dr Goldstein could investigate the relationship between the three.

To do so he treated the cultured cells with drugs. He found that a drug which attacked beta-amyloid directly did not lead to lower levels of tau or active GSK3-beta; but a drug which attacked one of beta-amyloid’s precursor molecules did have that effect. That is useful information, for it suggests where a pharmacological assault on the disease might best be directed.

In the short term, at least, iPS-based studies of this sort are likely to yield more scientific value than clinical experiments of the type conducted by Dr Schwartz, even though they are not treatments in themselves. That will, though, require many more pluripotent cells. And at least one firm is selling a way to make billions of iPS cells for just that purpose. Its founder, appropriately, is Dr Thomson.

From the print edition: Science and technology

The New Gulag – North Korea

North Korea on Google Maps: Monuments, nuclear complex, gulags

By Jethro Mullen, CNN
updated 6:03 AM EST, Tue January 29, 2013
A map of Camp 22 shows previously unidentified structures, such as guard compounds and the office of the director.

STORY HIGHLIGHTS
  • Google publishes detailed maps of North Korea for the first time
  • It says “citizen cartographers” used map making software to add the data
  • The maps show the reclusive regime’s main nuclear complex and gulags
  • Google’s executive chairman, Eric Schmidt, visited North Korea this month

(CNN) — Ever wondered how to drive from the center of Pyongyang, the showcase capital of North Korea, to Yongbyon, the location of the secretive regime’s main nuclear complex?

Well, a recent update to Google Maps has the answer for you.

It has filled in the big, largely blank space that previously lay north of the well-mapped South Korea with streets, towns and landmarks.

Users curious to virtually explore one of the world’s most reclusive states can zoom into the heart of Pyongyang and pull up photographs of the Kumsusan Memorial Palace, which houses the bodies of the revered former leaders Kim Il Sung and Kim Jong Il.

The availability of photos quickly thins as users scroll into the North Korean countryside and dries up almost entirely around more controversial areas marked on the map, like the Yongbyon nuclear complex and what Google labels the Yodok and Hwasong gulags.

Map: Yongbyon Nuclear Center

Map: Bukchang Gulag


Human rights groups say as many as 200,000 people may be being held in North Korea’s network of political prison camps.

The Punggye-ri Nuclear Test Facility, where the regime may be about to carry out a new nuclear test in defiance of international pressure, doesn’t appear to be featured on the map at the moment.

‘A community of citizen cartographers’

In a blog post Monday announcing the update, Google said that North Korea had been one of the largest places with limited map data in the world.

Unsurprisingly, the details added to the map didn’t come from the regime of the young North Korean leader Kim Jong Un.

Google said “a community of citizen cartographers” used the Internet search giant’s Google Map Maker software over a period of years to pinpoint road and place names. Google Map Maker works in a similar way to Wikipedia, allowing users to add, edit and review information.

The company encouraged people to keep working on the maps, saying, “Creating maps is a crucial first step towards helping people access more information about parts of the world that are unfamiliar to them.”

It said the North Korean maps could be particularly useful to South Korean citizens, “who have ancestral connections or still have family living there.”

Restrictions inside

But people inside North Korea, where the Internet is extremely restricted, are unlikely to be able to see the mapping information Google is making available.

The company’s executive chairman, Eric Schmidt, visited North Korea earlier this month along with former New Mexico Gov. Bill Richardson in a trip that left many observers puzzled.

Schmidt, who has in the past written at length about the Web’s ability to empower citizens oppressed by autocratic governments, urged North Korea to embrace the Internet or face further decline in its impoverished economy.

Schmidt’s daughter Sophie, who accompanied him on the trip, said in a blog post about the visit that they had been able to take a look at North Korea’s national intranet, which she described as “a walled garden of scrubbed content taken from the real Internet.”


Major Support for Fuel Cells

Ford, Daimler, and Nissan Commit to Fuel Cells

The partnership to jointly develop fuel cell vehicles by 2017 signals renewed interest in hydrogen-powered cars and the need to collaborate in the auto industry.

Hyundai, which leases its fuel cell vehicles, is among the automakers investing in fuel cell technology.

A long-running joke in the auto industry is that fuel cell vehicles are the technology of the future—and always will be. But that may not ring true a few years from now.

Ford, Renault-Nissan, and Daimler today said they will jointly develop technology to make “affordable, mass-market” fuel cell vehicles by 2017, investing equal amounts in the effort. This partnership follows a similar joint development deal between BMW and Toyota announced last week and commitments to fuel cell vehicles by Hyundai and Honda last year. (See Hydrogen Cars: A Dream That Won’t Die.)

By collaborating on the fuel cell stack and other system components, Ford, Daimler and Renault-Nissan hope to improve the technology and produce it at large scale. With a higher production volume, these automakers will reach economies of scale and offer more affordable cars, says Daimler board member Thomas Weber.

Fuel cell vehicles, which convert stored hydrogen into electricity on board, are getting more serious commitments from automakers because of improvements in the cost and reliability of fuel cell stacks.

Sales of battery-electric vehicles, such as the Nissan Leaf, are gated by the limited range they offer and the relatively high purchase price. By contrast, fuel cell vehicles can offer a longer range and fuel cell powertrains can be used on larger vehicles, Nissan board executive Mitsuhiko Yamashita says.

Fuel efficiency standards and carbon emissions limits set by governments around the world have prodded automakers to invest in new technologies. (See Stringent CAFE Standards Push Automakers.) To lower the cost of development, automakers are creating a number of shared-development plans. Ford and Toyota, for example, have a partnership to develop hybrid powertrains for larger cars sold in North America.

Another factor helping advance fuel cell vehicles is the low cost of natural gas in the United States. Natural gas can be reformed into hydrogen, which can be dispensed at hydrogen stations in about the same amount of time as gasoline. A company called Nuvera is developing these fueling ports for forklifts in anticipation of the larger passenger car market forming in a few years.

But the lack of hydrogen fueling infrastructure means that fuel cell vehicles will likely be targeted at a few niche markets, such as fleet vehicle operators or environmentally minded consumers in cities equipped with a few hydrogen fueling stations.

And even though carmakers are showing more interest in fuel cells, the costs are substantially higher than those of conventional vehicles. If global collaborations can reduce the cost substantially over the next several years, the appeal of fuel cell vehicles would grow. Regardless of how quickly each approach is adopted, it looks like the auto industry’s route to electrification will ride on both battery-electric and fuel-cell vehicles.

Artificial Intelligence

THE STONE | January 27, 2013, 5:00 p.m.
Cambridge, Cabs and Copenhagen: My Route to Existential Risk
By HUW PRICE

The Stone is a forum for contemporary philosophers on issues both timely and timeless.
In Copenhagen the summer before last, I shared a taxi with a man who thought his chance of dying in an artificial intelligence-related accident was as high as that of heart disease or cancer. No surprise if he’d been the driver, perhaps (never tell a taxi driver that you’re a philosopher!), but this was a man who has spent his career with computers.

Indeed, he’s so talented in that field that he is one of the team who made this century so, well, 21st – who got us talking to one another on video screens, the way we knew we’d be doing in the 21st century, back when I was a boy, half a century ago. For this was Jaan Tallinn, one of the team who gave us Skype. (Since then, taking him to dinner in Trinity College here in Cambridge, I’ve had colleagues queuing up to shake his hand, thanking him for keeping them in touch with distant grandchildren.)

There could be trouble when intelligence escapes the constraints of biology.
I knew of the suggestion that A.I. might be dangerous, of course. I had heard of the “singularity,” or “intelligence explosion” – roughly, the idea, originally due to the statistician I. J. Good (a Cambridge-trained former colleague of Alan Turing’s), that once machine intelligence reaches a certain point, it could take over its own process of improvement, perhaps exponentially, so that we humans would soon be left far behind. But I’d never met anyone who regarded it as such a pressing cause for concern – let alone anyone with their feet so firmly on the ground in the software business.

I was intrigued, and also impressed, by Tallinn’s commitment to doing something about it. The topic came up because I’d asked what he worked on these days. The answer, in part, is that he spends a lot of his time trying to improve the odds, in one way or another (talking to philosophers in Danish taxis, for example).

I was heading for Cambridge at the time, to take up my new job as Bertrand Russell professor of philosophy – a chair named after a man who spent the last years of his life trying to protect humanity from another kind of technological risk, that of nuclear war. And one of the people I already knew in Cambridge was the distinguished cosmologist Martin Rees – then master of Trinity College, and former president of the Royal Society. Lord Rees is another outspoken proponent of the view that we humans should pay more attention to the ways in which our own technology might threaten our survival. (Biotechnology gets most attention, in his work.)

So it occurred to me that there might be a useful, interesting and appropriate role for me, as a kind of catalyst between these two activists, and their respective circles. And that, to fast forward a little, is how I came to be taking Jaan Tallinn to dinner in Trinity College; and how he, Martin Rees and I now come to be working together, to establish here in Cambridge the Centre for the Study of Existential Risk (C.S.E.R.).

By “existential risks” (E.R.) we mean, roughly, catastrophic risks to our species that are “our fault,” in the sense that they arise from human technologies. These are not the only catastrophic risks we humans face, of course: asteroid impacts and extreme volcanic events could wipe us out, for example. But in comparison with possible technological risks, these natural risks are comparatively well studied and, arguably, comparatively minor (the major source of uncertainty being on the technological side). So the greatest need, in our view, is to pay a lot more attention to these technological risks. That’s why we chose to make them the explicit focus of our center.

I have now met many fascinating scholars – scientists, philosophers and others – who think that these issues are profoundly important, and seriously understudied. Strikingly, though, they differ about where they think the most pressing risks lie. A Cambridge zoologist I met recently is most worried about deadly designer bacteria, produced – whether by error or by terror, as Rees puts it – in a nearby future in which there’s almost an app for such things. To him, A.I. risk seemed comparatively far-fetched – though he confessed that he was no expert (and added that the evidence is that even experts do little better than chance, in many areas).

Where do I stand on the A.I. case, the one that got me into this business? I don’t claim any great expertise on the matter (perhaps wisely, in the light of the evidence just mentioned). For what it’s worth, however, my view goes like this. On the one hand, I haven’t yet seen a strong case for being quite as pessimistic as Jaan Tallinn was in the taxi that day. (To be fair, he himself says that he’s not always that pessimistic.) On the other hand, I do think that there are strong reasons to think that we humans are nearing one of the most significant moments in our entire history: the point at which intelligence escapes the constraints of biology. And I see no compelling grounds for confidence that if that does happen, we will survive the transition in reasonable shape. Without such grounds, I think we have cause for concern.

My case for these conclusions relies on three main observations. The first is that our own intelligence is an evolved biological solution to a kind of optimization problem, operating under very tight constraints of time, energy, raw materials, historical starting point and no doubt many other factors. The hardware needs to fit through a mammalian birth canal, to be reasonably protected for a mobile life in a hazardous environment, to consume something like 1,000 calories per day and so on – not to mention being achievable by mutation and selection over a time scale of some tens of millions of years, starting from what existed back then!

Second, this biological endowment, such as it is, has been essentially constant, for many thousands of years. It is a kind of fixed point in the landscape, a mountain peak on which we have all lived for hundreds of generations. Think of it as Mount Fuji, for example. We are creatures of this volcano. The fact that it towers above the surrounding landscape enables us to dominate our environment and accounts for our extraordinary success, compared with most other species on the planet. (Some species benefit from our success, of course: cockroaches and rats, perhaps, and the many distinctive bacteria that inhabit our guts.) And the distinctive shape of the peak – also constant, or nearly so, for all these generations – is very deeply entangled with our sense of what it is to be us. We are not just creatures of any volcano; we are creatures of this one.

Both the height and the shape of the mountain are products of our biological history, in the main. (The qualification is needed because cultural inheritance may well play a role too.) Our great success in the biological landscape, in turn, is mainly because of the fact that the distinctive intelligence that the height and shape represent has enabled us to control and modify the surrounding environment. We’ve been exercising such control for a very long time of course, but we’ve recently got much better at it. Modern science and technology give us new and extraordinarily powerful ways to modify the natural world, and the creatures of the ancient volcano are more dominant than ever before.

This is all old news, of course, as is the observation that this success may ultimately be our undoing. (Remember Malthus.) But the new concern, linked to speculation about the future of A.I., is that we may soon be in a position to do something entirely new: to unleash a kind of artificial vulcanism that may change the shape and height of our own mountain, or build new ones, perhaps even higher, and perhaps of shapes we cannot presently imagine. In other words – and this is my third observation – we face the prospect that designed nonbiological technologies, operating under entirely different constraints in many respects, may soon do the kinds of things that our brain does, but very much faster, and very much better, in whatever dimensions of improvement may turn out to be available.

The claim that we face this prospect may seem contestable. Is it really plausible that technology will reach this stage (ever, let alone soon)? I’ll come back to this. For the moment, the point I want to make is simply that if we do suppose that we are going to reach such a stage – a point at which technology reshapes our human Mount Fuji, or builds other peaks elsewhere – then it’s not going to be business as usual, as far as we are concerned. Technology will have modified the one thing, more than anything else, that has made it “business as usual” so long as we have been human.

Indeed, it’s not really clear who “we” would be, in those circumstances. Would we be humans surviving (or not) in an environment in which superior machine intelligences had taken the reins, so to speak? Would we be human intelligences somehow extended by nonbiological means? Would we be in some sense entirely posthuman (though thinking of ourselves perhaps as descendants of humans)? I don’t claim that these are the only options, or even that these options are particularly well formulated – they’re not! My point is simply that if technology does get to this stage, the most important fixed point in our landscape is no longer fixed – on the contrary, it might be moving, rapidly, in directions we creatures of the volcano are not well equipped to understand, let alone predict. That seems to me a cause for concern.

These are my reasons for thinking that at some point over the horizon, there’s a major tipping point awaiting us, when intelligence escapes its biological constraints; and that it is far from clear that that’s good news, from our point of view. To sum it up briefly, the argument rests on three propositions: (i) the level and general shape of human intelligence is highly contingent, a product of biological constraints and accidents; (ii) despite its contingency in the big scheme of things, it is essential to us – it is who we are, more or less, and it accounts for our success; (iii) technology is likely to give us the means to bypass the biological constraints, either altering our own minds or constructing machines with comparable capabilities, and thereby reforming the landscape.

But how far away might this tipping point be, and will it ever happen at all? This brings me back to the most contested claim of these three – the assertion that nonbiological machines are likely, at some point, to be as intelligent or more intelligent than the “biological machines” we have in our skulls.

Objections to this claim come from several directions. Some contest it based on the (claimed) poor record of A.I. so far; others on the basis of some claimed fundamental difference between human minds and computers; yet others, perhaps, on the grounds that the claim is simply unclear – it isn’t clear what intelligence is, for example.

To arguments of the last kind, I’m inclined to give a pragmatist’s answer: Don’t think about what intelligence is, think about what it does. Putting it rather crudely, the distinctive thing about our peak in the present biological landscape is that we tend to be much better at controlling our environment than any other species. In these terms, the question is then whether machines might at some point do an even better job (perhaps a vastly better job). If so, then all the above concerns seem to be back on the table, even though we haven’t mentioned the word “intelligence,” let alone tried to say what it means. (You might try to resurrect the objection by focusing on the word “control,” but here I think you’d be on thin ice: it’s clear that machines already control things, in some sense – they drive cars, for example.)

Much the same point can be made against attempts to take comfort in the idea that there is something fundamentally different between human minds and computers. Suppose there is, and that that means that computers will never do some of the things that we do – write philosophy, appreciate the sublime, or whatever. What’s the case for thinking that without these gifts, the machines cannot control the terrestrial environment a lot more effectively than we do?

People who worry about these things often say that the main threat may come from accidents involving “dumb optimizers” – machines with rather simple goals (producing IKEA furniture, say) that figure out that they can improve their output astronomically by taking control of various resources on which we depend for our survival. Nobody expects an automated furniture factory to do philosophy. Does that make it less dangerous? (Would you bet your grandchildren’s lives on the matter?)

But there’s a more direct answer, too, to this attempt to take comfort in any supposed difference between human minds and computers. It also cuts against attempts to take refuge in the failure of A.I. to live up to some of its own hype. It’s an answer in two parts. The first part – let me call it, a little aggressively, the blow to the head – points out that however biology got us onto this exalted peak in the landscape, the tricks are all there for our inspection: most of it is done with the glop inside our skulls. Understand that, and you understand how to do it artificially, at least in principle. Sure, it could turn out that there’s then no way to improve things – that biology, despite all the constraints, really has hit some sort of fundamental maximum. Or it could turn out that the task of figuring out how biology did it is just beyond us, at least for the foreseeable future (even the remotely foreseeable future). But again, are you going to bet your grandchildren on that possibility?

The second part of the argument – the blow from below – asks these opponents just how far up the intelligence mountain they think that A.I. could get us. To the level of our fishy ancestors? Our early mammalian ancestors? (Keep in mind that the important question is the pragmatic one: Could a machine do what these creatures do?) Wherever they claim to draw the line, the objection challenges them to say what biology does next, that no nonbiological machine could possibly do. Perhaps someone has a plausible answer to this question, but for my part, I have no idea what it could be.


At present, then, I see no good reason to believe that intelligence is never going to escape from the head, or that it won’t do so in time scales we could reasonably care about. Hence it seems to me eminently sensible to think about what happens if and when it does so, and whether there’s something we can do to favor good outcomes over bad, in that case. That’s how I see what Rees, Tallinn and I want to do in Cambridge (about this kind of technological risk, as about others): we’re trying to assemble an organization that will use the combined intellectual power of a lot of gifted people to shift some probability from the bad side to the good.

Tallinn compares this to wearing a seat belt. Most of us agree that that makes sense, even if the risk of an accident is low, and even though we can’t be certain that it would be beneficial, if we were to have an accident. (Occasionally, seat belts make things worse.) The analogy is apt in another way, too. It is easy to turn a blind eye to the case for wearing a seat belt. Many of us don’t wear them in taxis, for example. Something – perhaps optimism, a sense that caution isn’t cool, or (if you’re sufficiently English!) a misplaced concern about hurting the driver’s feelings – just gets in the way of the simple choice to put the thing on. Usually it makes no difference, of course, but sometimes people get needlessly hurt.

Worrying about catastrophic risk may have similar image problems. We tend to be optimists, and it might be easier, and perhaps in some sense cooler, not to bother. So I finish with two recommendations. First, keep in mind that in this case our fate is in the hands, if that’s the word, of what might charitably be called a very large and poorly organized committee – collectively shortsighted, if not actually reckless, but responsible for guiding our fast-moving vehicle through some hazardous and yet completely unfamiliar terrain. Second, remember that all the children – all of them – are in the back. We thrill-seeking grandparents may have little to lose, but shouldn’t we be encouraging the kids to buckle up?

Huw Price is Bertrand Russell professor of philosophy at the University of Cambridge. With Martin Rees and Jaan Tallinn, he is a co-founder of the project to establish the Centre for the Study of Existential Risk.

Iran’s Space Feat

This would have been more noteworthy had they sent Mahmoud Ahmadinejad into outer space, not bothering to bring him back alive.

Iran (Says It) Sent a Monkey Into Space and Brought It Back Alive


Posted Monday, Jan. 28, 2013, at 10:22 AM ET


Screenshot from Iranian state-run television announcing news of the mission

Iran said today that it has successfully sent a monkey into orbit and brought it back alive, an announcement that if true would represent a major scientific accomplishment for the Islamic republic and mark the latest step in the nation’s quest to put a man in space by the end of the decade.

It should be noted, however, that the news came via state-run media and has not been independently confirmed. The initial report gave only vague details and provided no info on the timing or location of the launch or the landing. Still, the government offered at least a few pieces of photographic evidence to back up its story. The Associated Press explains:

Still images broadcast on state TV showed a small, gray-tufted monkey presumably being prepared for the flight, including wearing a type of body protection and being strapped tightly into a pod that resembled an infant’s car seat.

The photos draw historical links to the earliest years of the space race in the 1950s when both the U.S. and Soviet Union tested the boundaries of rocket flight with animals on board, including American capsules carrying monkeys and Moscow’s crafts holding dogs.

Iran had previously sent smaller animals into space, including a rat and several turtles, and had successfully launched three satellites over the past four years, according to AFP. A previous attempt in 2011 to put a monkey in space failed, although no reason was ever given.

The latest mission would appear to be the biggest breakthrough yet for the Iranian space program. For comparison, the United States was the first nation to successfully put a live monkey into space way back in 1949—although it would be another decade before we would bring one back to Earth alive. (Our first attempt, in 1948, failed after the monkey apparently suffocated inside the capsule while it was still on the launching pad.)

While Iran has long denied its space program—like its nuclear work—is directly tied to its military ambitions, it hasn’t gone unnoticed in the Western world that the same technology used to launch a rocket into space can also be used in ballistic missiles. [Update 10:31 a.m.: In case that link wasn’t clear enough, an Iranian commander told state reporters in a separate announcement today that the nation plans to unveil new “long, intermediate and short-range missiles” sometime early next month.]

Follow @JoshVoorhees and the rest of the @slatest team on Twitter.

Outsourcing – The Herd Instinct

Herd instinct

Companies need to think more carefully about how they offshore and outsource

 

What, and how much, of its production to offshore to other countries is one of the most important choices a company can make. France’s two big carmakers illustrate the point. PSA Peugeot Citroën, the younger of the two, has tried over time to find cheaper places than around Paris to make its cars; in the 1950s and 60s Citroën opened a factory in Brittany and started manufacturing in Spain and Portugal, the China and Vietnam of their time for offshoring. Nowadays it makes cars cheaply in Slovakia and the Czech Republic. But two-fifths of its global production is still in France, where it has seven expensive factories. One reason is that the company is family-owned, and families tend to be particularly loyal to their countries of origin.

Renault, on the other hand, has determinedly pursued a low-cost strategy, setting up factories in Morocco, Slovenia, Turkey and Romania, and now makes only a quarter of its cars at home. Unsurprisingly, it is Peugeot that is now in dire financial straits. Last autumn, amidst a fierce political storm, the company announced plans to stop car production at one of its biggest French factories, at Aulnay-sous-Bois, just outside Paris. But that may be too little, too late.

 

Yet there are also examples of highly successful companies that choose not to offshore to any great extent, even in labour-intensive industries. Zara, the main clothing brand of Inditex, a Spanish textile firm, is famous for making its high-fashion clothes in Spain itself and in nearby Portugal and Morocco. This costs more than it would in China, but a short, flexible supply chain allows the firm to respond quickly to changes in customer tastes. It sells the vast majority of its outfits at full price rather than at a discount. Its decision to stay close to home has become its main source of competitive advantage.

The practice of outsourcing is as old as business itself. A 19th-century manufacturing company might have had its own machines but not its own fleet of horse-drawn drays to distribute its wares. The fashion for what to subcontract and what to keep inside the firm has ebbed back and forth over time. At one time the conglomerate, owning everything it could, was all the rage, but for the past few decades firms have been outsourcing ever more of their operations, in the belief that as long as they kept the “core” of their business in-house, the rest could safely be sent anywhere in the world.

That belief has not always turned out to be justified. After Boeing, an aeroplane-maker, outsourced 70% of the development and production work on its new 787 Dreamliner to around 50 suppliers, it suffered huge delays because its outsourcing partners failed to produce parts on time. In 2005 Deloitte Consulting looked at 25 big companies that had outsourced operations and found that a quarter of them soon brought them back “in-house” because they could do the work themselves better and cheaper.

But most companies outsource to save money, so doing more of it has increasingly meant sending work to cheaper countries. In 2003, according to TPI, a company that advises on outsourcing, about 40% of all outsourcing contracts entered into by American and European firms involved offshore workers; that figure has since risen to 67%. In turn, companies that decide to offshore production often have little choice but to outsource as well. Local firms are often in a better position to operate in a particular environment, and they may control supply chains. Most of America’s and Europe’s textile industry, for instance, subcontracts work to outside firms in China, Vietnam and Bangladesh. Production of consumer electronics is largely outsourced to huge contract manufacturers such as Taiwan’s Foxconn and Quanta. This report concentrates on work that is done overseas, either inside the firm but in an offshore location or outsourced to foreign contractors, because this part of corporate globalisation has caused the most controversy.

Most firms do not give enough thought to choosing where to produce. To an alarming degree, says McKinsey, “companies continue to indulge in herd behaviour” when deciding where to base their operations and how to arrange their supply chains. Many of them, says the consulting firm, simply follow each other around to low-cost countries or allow themselves to be drawn in by governments waving wads of cash and other incentives.

David Arkless, head of government and corporate affairs for Manpower, which advises large companies on their locations, recalls the story of two rival technology firms from Idaho some years ago. One of them moved its production to the state of Penang in Malaysia. The other, having seen its foe reduce its labour costs by half and slash prices by 15%, pursued it to exactly the same place, he says. The pair quickly started competing for labour with each other and local wages soared. Mr Arkless has seen whole clusters of industries move to Shenzhen in tandem. “Within a year or so the labour costs go up to near the level of the original place,” he says. Manpower advises Western firms that if labour makes up 15% or less of their product’s total cost, they would do better not to offshore. And even if the share is higher, there is usually scope for improvement at home. “Going somewhere else for the sake of cheaper labour is usually a quick fix and avoids the real problems,” says Mr Arkless.

“Moving production a long way off and separating it from research and development risks harming a firm’s long-term ability to innovate”

Companies rarely analyse past location decisions to see whether they have proved right, note Michael Porter and Jan Rivkin of Harvard Business School in a paper, “Choosing the United States”, published last year. One reason why companies rush into offshoring may be that they are looking for a quick solution to existing troubles. According to “The Handbook of Global Outsourcing and Offshoring”, by Leslie Willcocks, Julia Kotlarsky and Ilan Oshri, companies are most likely to consider offshoring their operations when their profits are already falling.

Two sets of strategic problems can arise from offshoring production to another part of the world, especially if it is poorly thought out. The first of these concerns the logistics of supply. The more that firms spread their operations around the globe, the more vulnerable they become to disruption from unexpected events such as natural disasters or political unrest. The second strikes at the heart of what companies try to do: sell more and better widgets to customers than their rivals down the road. Often, the more a firm offshores and outsources, the worse it will be at responding to customers quickly.

Ideas factory

Over the past few decades it became conventional wisdom that factory jobs could be done cheaply in some far-flung corner of the world but more important innovation work should stay in-house in high-cost countries. Manufacturing was seen as just a cost centre, so it was often offshored. Now many companies reckon that production makes a big contribution to the success of research and development, and that innovation is more likely to happen when R&D and manufacturing are in the same place, so increasingly they want to bring manufacturing back in-house.

Foreign suppliers of parts not infrequently turn into competitors, and for many companies the risk of losing intellectual property either through theft or imitation in China and elsewhere remains high. Indeed, says Richard Dobbs of the McKinsey Global Institute in Seoul, big South Korean groups reckon that American and European companies are making a mistake in outsourcing as much manufacturing as they do, because this allows other firms a great deal of insight into their processes. They should know: Samsung, an electronics giant, was once an outsourcing partner for several Japanese firms but now dwarfs its former customers. South Korean firms offshore production to their own factories overseas, but they seldom outsource.

 

Many companies are now rethinking the outsourcing of ever more important functions. Lenovo wants to own more of its capacity in China and elsewhere; it gets better results from its own facilities than from its outside contractors, says Gerry Smith, the firm’s global head of supply chain. That often means taking the work back home.

The most prominent current example of the opportunities and risks of offshoring is the relationship between Apple and Foxconn. From a strategic point of view the partnership could not be more successful. In 2010 Foxconn took a huge chance by investing billions of dollars in building enough capacity in China to manufacture Apple’s iPhone on the scale required. It built a uniquely flexible and responsive supply chain for the American firm. On one recent occasion, according to a report in the New York Times, Apple redesigned the iPhone’s screen at the last minute and Foxconn woke up its workers in the middle of the night to get the job done in time. “The reason Apple is what it is today is Foxconn,” says a consultant in Taipei who prefers not to be named. The two companies, he says, are inextricably bound to each other.

But Apple may be wishing it was not quite so dependent on Foxconn. After a spate of reports of poor working conditions for the firm’s employees (including excessive hours), Apple’s chief executive, Tim Cook, ordered an investigation, and Foxconn is making a number of changes. Even so, the bad news has not stopped. In September Foxconn had to close a factory for a while when a brawl among employees turned into a full-scale riot. In October the firm admitted that it had employed “interns” as young as 14 in its factories. In December Mr Cook announced that Apple would bring some production of Mac computers back from China to America. He said the main aim was to create jobs in America, but the move may also appease critics of Apple’s partnership with Foxconn. The Taiwanese firm said that it, too, would expand its operations in America, explaining that important customers wanted more work done there.

 

From the print edition: Special report

Crumbling Technology Empires

Why the Facebook and Apple empires are bound to fall
History should teach us that for today’s technology industry titans, the only way is down. Just ask Microsoft

John Naughton
The Observer, Saturday 26 January 2013

What goes up must come down: Mark Zuckerberg launches Facebook’s Graph Search. Photograph: Stephen Lam/Getty Images
Nothing lasts forever: if history has any lesson for us, it is this. It’s a thought that comes from rereading Paul Kennedy’s magisterial tome, The Rise and Fall of the Great Powers, in which he shows that none of the great nation-states or empires of history – Rome; imperial Spain in 1600; France in either its Bourbon or Bonapartist manifestations; the Dutch republic in 1700; Britain in its imperial glory – succeeded in maintaining its global ascendancy for long.

What has this got to do with technology? Well, it provides us with a useful way of thinking about two of the tech world’s great powers. The first is Apple. The past week saw a veritable torrent of hysterical reaction to its quarterly results, coupled with fevered speculation about its future. The globe has been hypnotised for years by Apple’s metamorphosis from a failing computer manufacturer into a corporate giant that, on some days, is now the most valuable company in the world, with bigger cash reserves than the annual GDP of some countries. But as with all inexorable growth curves, the question on every commentator’s lips is: has Apple peaked?

If you think “hysterical” is a bit harsh, then ponder this. Although Apple did not sell the 50m iPhones that had been forecast for the quarter (it “only” shifted 47.8m) and sales of its Mac computers were down somewhat, nevertheless the quarterly results mean that in 2012 Apple earned more in the year than any other corporation, ever. And even the quarter’s supposedly disappointing earnings of $13.1bn were the fourth largest of all time, according to the same metric. And the reaction of the stock market to this news? The share price dropped 10% in after-hours trading.

Then there’s the social network Facebook with its billion users, which is likewise the focus of much hyperventilating comment. Recently, the Mark Zuckerberg empire launched its latest deadly weapon with the catchy name of Graph Search – as in “social graph”. Facebook’s new tool is just an algorithm that finds information from within one’s network of friends and supplements the results with hits from Microsoft’s Bing search engine, but to read some of the commentary on it you’d think that Zuckerberg & co had invented either a perpetual motion machine or a through-ticket to hell.

“Facebook’s new search engine attempts to build walls around the internet and keep its horde within its gates,” wrote the webmaster of a respected online magazine. “It’s a nightmare and it will probably work.”

Actually, it’s Facebook’s latest attempt to become the AOL de nos jours. And, in the end, it will fail for the same reason that AOL’s attempt to corral users within its walled garden failed: the wider internet is just too diverse, innovative and interesting. But because Facebook looms so large in the public consciousness at the moment, it’s difficult to keep it in perspective. Which is why Kennedy’s book makes such salutary reading.

So what we need to remember as we wade through the current overheated commentary on Apple and Facebook is that nothing lasts forever. I have been in this racket long enough to remember a time when Microsoft was at least as dominant and scary as these two companies are now. Spool forward a couple of decades and Microsoft is still around, but actually it’s an ailing giant – profitable but no longer innovative, trying (and so far failing) to get a foothold in the post-PC, mobile, cloud-based world.

Although the eclipsing of Apple and Facebook is inevitable, the timing and causes of their eventual declines will differ. Apple’s current strength is that it actually makes things that people are desperate to buy and on which the company makes huge margins. The inexorable logic of the hardware business is that those margins will decline as the competition increases, so Apple will become less profitable over the longer term. What will determine its future is whether it can come up with new, market-creating products such as the iPod, iPhone and iPad.

Facebook, on the other hand, makes nothing. It just provides an online service that, for the moment, people seem to value. But in order to make money out of those users and satisfy the denizens of Wall Street, it has to become ever more intrusive and manipulative. It’s condemned, in other words, to intrusive overstretch. Which is why, in the end, it will become a footnote in the history of the internet. Just like Microsoft, in fact. Sic transit gloria.
