Image of the Bixby Bridge on the Pacific Coast Highway, north of Big Sur, one of the most beautiful landscapes in America
WTI closed November just shy of $89 a barrel on hopes of an improving economy. I think there is an argument for an improving economy in 2013, but it is too early to tell how things will come together, given all the ramifications of what has been a broadly anti-business political and social agenda over the last four years.
I am not making a political statement that there aren't benefits to be gained from such governmental policies; rather, with much of the legislative and fiscal policy of the last four years, we have yet to fully understand the costs and unintended consequences, such as higher taxes, increased healthcare costs, and heavier regulatory burdens on business.
So it remains to be seen how all this plays out in the economy. Just last year many analysts were putting increased odds on a 2013 recession; we shall see.
But even in the best-case scenario of a stronger 2013 US economy, this isn't going to change the actual demand picture for petroleum products to any significant degree in a mature market like the US.
In addition, with the latest employment numbers coming out of Europe and the slowdown finally hitting Germany, it appears Europe is not going to have a robust 2013. Asia, namely China, appears to be coming off the bottom, but the days of 12% economic growth are over for the emerging markets; in short, they have finally “emerged”!
China has infrastructure and inflation constraints that hamper growth levels higher than 7% going forward, and real growth may be much less than reported. This economic reality is priced into the Chinese stock market, which has had an abysmal year.
As for Japan, it is about to have another leadership change, the seventh in as many years, and its economy has been in a state of perpetual deflationary decline for the last twenty years. So despite the new leadership, Japan has major demographic and anti-competitive business constraints that augur more of the same for 2013.
So the US, a mature market, may actually have the best economy of the major petroleum-consuming economies in 2013, and that's not saying much from the demand side of the equation.

Source: EIA
Now we get to the supply side of the equation, and here is the problem for Oil bulls, and part of the reason so many funds got killed in 2012 trying to aggressively invest in Oil through futures, ETFs and the like. No established trends could take hold because supply levels, globally and domestically, are well above the five-year average, and at the height of that range. So we have had drops in WTI from $109 to $78 ($31 a barrel) and from $100 to $84 ($16 a barrel), which is not good if you're a fund manager investing long-term.
The most noteworthy trend in the Oil markets for 2012 is the increased role of US and Canadian production, and it is only going to get stronger for 2013 and into the future. The trend is definitely not a fluke; we have had a nice run of over a decade of high prices which has spurred a lot of economic investment into new technologies and an increase in smaller, independent operators searching for opportunities to make money by producing oil in North America.

Source: EIA
Well, we are finally starting to see the results of the increased capital investment, and just as with shale natural gas, these projects, once they get going, stay online even as prices drop substantially. Expect the same for Oil; frankly, there just aren't many areas where you can make the kind of margins attainable in the Oil market. It is a good business to be in versus many other industries vying for capital resources.

Source: EIA
Just to give the reader an idea of this dynamic change in US production numbers alone, the United States is on track for a 7% increase in oil production this year, to an average of 10.9 million barrels per day. Furthermore, the U.S. Department of Energy forecasts that U.S. production of crude and other liquid hydrocarbons will average 11.4 million barrels per day in 2013. For comparison's sake, Saudi Arabia's output is approximately 11.6 million barrels per day.
The only reason Oil isn't much lower currently is that a great deal of newly built storage capacity in China and the US has been filling up. For example, Cushing, Oklahoma, the delivery point on which the WTI futures contract is based, had a storage capacity of approximately 47 million barrels in March 2011; with capacity upgrades, it stood just above 60 million barrels as of March 2012.

Source: EIA
But here is the kicker: on September 30, 2011, Cushing had just under 30 million barrels in storage; as of last week it had 46 million barrels, an increase of 16 million barrels in one year. If we have a repeat in 2013, and all signs point that way as the trend is getting stronger, not weaker, then Cushing will run out of its working storage capacity of just over 60 million barrels. I am sure Cushing is building more working storage as we speak, but at some level, what is the point? 2013 is when WTI starts really pricing in the supply glut that comes from increased US and Canadian production.
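A back-of-envelope projection makes the storage arithmetic concrete. The figures below come from the text; the assumption that the 2012 build rate simply repeats as a constant linear trend is mine:

```python
# Rough projection of when Cushing's working storage would fill,
# assuming the past year's inventory build simply continues.
CAPACITY = 60.0        # million barrels of working storage (approx.)
STOCK_NOW = 46.0       # million barrels stored as of late 2012
BUILD_PER_YEAR = 16.0  # million barrels added Sept 2011 -> late 2012

def years_until_full(stock, capacity, build_rate):
    """Years until storage hits capacity at a constant build rate."""
    return (capacity - stock) / build_rate

print(round(years_until_full(STOCK_NOW, CAPACITY, BUILD_PER_YEAR), 2))
# prints 0.88 -- i.e. well inside 2013 under this crude assumption
```

The point of the sketch is only that, at anything like the recent fill rate, the remaining 14 million barrels of headroom disappears in under a year.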
The supply glut isn't just in WTI; it is felt in total US inventory levels, which, to quote the EIA report directly: “At 374.1 million barrels, U.S. crude oil inventories are well above the upper limit of the average range for this time of year.” The US inventory level will probably bust through the 400 million barrel mark in 2013 for the first time in history.

Source: EIA
Prices have also been supported by numerous international supply disruptions over the past two years, but slowly and surely this oil is coming back online, so expect more output internationally.
Libyan oil is now back up to full speed. Iraqi output has been greater than expected and is the real wildcard: the country is just now starting to hit its stride and has the potential for much more upside if all those new projects start producing, not only in 2013 but for the next decade plus. It all depends on political stability in the country.
But here is the thing to remember about international Oil: everybody needs money. Whether a regime is extremist, Islamic fundamentalist, or democratic, whoever is in charge is going to need to monetize the country's resources, and these countries have few competitive options other than oil for generating revenue. One way or another, this Oil finds its way to the market.
As it happens, many oil-rich countries need to generate Oil revenues because a large portion of their populations is subsidized through Oil exports. This trend will continue, as it has for the last 30 years, without major supply disruptions.
In fact, I expect OPEC will need to start entertaining supply cutbacks in 2013 to address swelling inventory levels globally. But here is the irony: OPEC may talk up the market with production cuts, yet the incentive to cheat on quotas only grows as the price falls, because member governments still have budgets built on the same level of revenue, and the only way to collect the same revenue at lower prices is to pump more oil.
Moreover, since a lot more capacity is capable of coming online globally in 2013, and all these governments from Iraq to Sudan need money, expect output to push closer to full capacity, which is bearish for Brent prices as well; i.e., expect Brent to test the $85 level sometime in 2013.
The last decade has been characterized by higher energy prices, and with them came increased capex, refined new technologies, and a rebirth of energy activity in North America. It all started with natural gas, and now we are starting to experience the same sea change in the Oil market.
Prices in 2012 started addressing this dynamic change, but the real effects of this trend will start taking shape in 2013 as storage constraints start kicking in, demanding a re-pricing of the commodity.
After all, no matter how much the Fed devalues the dollar, you can only store so much crude Oil, unlike gold or silver. With roughly 700 million barrels in the Strategic Petroleum Reserve and nearly 400 million more in US commercial inventories, how much do we really need to store in an increasingly energy-independent North America?
Economics dictates that you can only build so much storage to avert the price drop from continual oversupply. Right now the world produces more Oil than it consumes each day, as it has for the past 16 months, and this trend will only get worse in 2013. So expect prices to finally start addressing this oversupply in the Oil markets in 2013.
By Dian Chu
Everybody has a different pattern of veins in the whites of their eyes. New security software makes use of that.
EyeVerify’s software identifies you by your “eyeprints,” the pattern of veins in the whites of your eyes. Everybody has four eyeprints, two in each eye on either side of the iris. The company claims that its method is as accurate as a fingerprint or iris scan, without requiring any special hardware.
The Kansas City, Kansas-based company plans to roll out its software in the first half of next year. CEO and founder Toby Rush envisions a range of uses for it, including authenticating people who want to use smartphones to access their online medical records or bank accounts. Rush says phone manufacturers are interested in embedding the software into handsets so that many applications can use it for authenticating people, though he declined to name any prospective partners.
The technology behind EyeVerify comes from Reza Derakhshani, associate professor of computer science and electrical engineering at the University of Missouri, Kansas City. Derakhshani, the company’s chief scientist, was a co-recipient of a patent for the eye-vein biometrics behind EyeVerify in 2008.
On the user’s end, EyeVerify seems pretty simple (though somewhat awkward in its prototype stage). To access data on a smartphone that’s locked with EyeVerify, you would look to the right or the left, enabling EyeVerify to capture eyeprints from each of your eyes with the camera on the back of the smartphone. (Eventually, EyeVerify expects to take advantage of a smartphone’s front-facing camera, but for now the resolution is not high enough on most of these cameras, Rush says.) EyeVerify’s software processes the images, maps the veins in your eye, and matches that against an eyeprint stored on the phone.
Rush says the software can tell the difference between a real person and an image of a person. It randomly challenges the smartphone’s camera to adjust settings such as focus, exposure, and white balance and checks whether it receives an appropriate response from the object it’s focused on.
The look of the veins in your eyes changes over time, and you might burst a blood vessel one day. But Rush says long-term changes would be slow enough that EyeVerify could “age” its template to adjust. And the software only needs one proper eyeprint to authenticate you, so unless you bloody up both eyes, you should be able to use EyeVerify after a bar fight.
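The verification flow described above can be sketched in Python. EyeVerify's actual feature extraction and matching are proprietary, so every name, data representation, and threshold here is a stand-in: an "eyeprint" is modeled as a simple set of vein-feature descriptors, and matching is a plain overlap score.

```python
# Hypothetical sketch of eyeprint verification as described in the
# article. Real biometric matchers use richer descriptors; this toy
# version treats a template as a set of vein-feature identifiers.

def match_score(candidate, template):
    """Fraction of template features recovered in the captured print."""
    if not template:
        return 0.0
    return len(candidate & template) / len(template)

def verify(captured_prints, stored_prints, threshold=0.8):
    """Authenticate if ANY captured eyeprint matches its template.
    Per the article, one good eyeprint suffices, so an injured eye
    does not lock the user out."""
    return any(
        match_score(cand, tmpl) >= threshold
        for cand, tmpl in zip(captured_prints, stored_prints)
    )

# Toy demo with made-up feature sets: right eye unreadable, left fine.
stored = [{"v1", "v2", "v3", "v4"}, {"w1", "w2", "w3", "w4"}]
captured = [{"v1", "v2", "v3", "v4"}, set()]
print(verify(captured, stored))  # prints True
```

The `any(...)` structure is the interesting design point: requiring only one of the four eyeprints to match trades a little security for the robustness against injury that Rush describes.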
Kevin Bowyer, chair of the University of Notre Dame’s computer science and engineering department—whose research includes biometrics of the iris of the eye—says he thinks the technology has promise, but he’s skeptical that it’s as accurate as fingerprint scanning.
Indeed, EyeVerify still needs to do more to prove that. Rush says that in tests of 96 people, the eyeprint system was 99.97 percent accurate. The company is working with Purdue University researchers to judge the accuracy of its software on 250 subjects, or 500 eyes in all.
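Bowyer's skepticism has a statistical basis that a quick calculation illustrates. A Wilson score interval shows how wide the uncertainty on an accuracy estimate remains at small sample sizes; the reading of "96 people" as 96 independent trials is my simplifying assumption, since the real protocol likely involved many match attempts per person:

```python
# Back-of-envelope check on what a small test can establish about a
# claimed accuracy rate, using the standard Wilson score interval.
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

lo, hi = wilson_interval(96, 96)  # even a flawless run of 96 trials...
print(round(lo, 3))               # ...only bounds accuracy above ~0.96
```

In other words, under this one-trial-per-subject reading, 96 subjects cannot distinguish 99.97 percent accuracy from, say, 97 percent, which is why the larger Purdue study matters.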
OP-ED COLUMNIST
Class Wars of 2012
By PAUL KRUGMAN
Published: November 29, 2012
On Election Day, The Boston Globe reported, Logan International Airport in Boston was running short of parking spaces. Not for cars — for private jets. Big donors were flooding into the city to attend Mitt Romney’s victory party.
They were, it turned out, misinformed about political reality. But the disappointed plutocrats weren’t wrong about who was on their side. This was very much an election pitting the interests of the very rich against those of the middle class and the poor.
And the Obama campaign won largely by disregarding the warnings of squeamish “centrists” and embracing that reality, stressing the class-war aspect of the confrontation. This ensured not only that President Obama won by huge margins among lower-income voters, but that those voters turned out in large numbers, sealing his victory.
The important thing to understand now is that while the election is over, the class war isn’t. The same people who bet big on Mr. Romney, and lost, are now trying to win by stealth — in the name of fiscal responsibility — the ground they failed to gain in an open election.
Before I get there, a word about the actual vote. Obviously, narrow economic self-interest doesn’t explain everything about how individuals, or even broad demographic groups, cast their ballots. Asian-Americans are a relatively affluent group, yet they went for President Obama by 3 to 1. Whites in Mississippi, on the other hand, aren’t especially well off, yet Mr. Obama received only 10 percent of their votes.
These anomalies, however, weren’t enough to change the overall pattern. Meanwhile, Democrats seem to have neutralized the traditional G.O.P. advantage on social issues, so that the election really was a referendum on economic policy. And what voters said, clearly, was no to tax cuts for the rich, no to benefit cuts for the middle class and the poor. So what’s a top-down class warrior to do?
The answer, as I have already suggested, is to rely on stealth — to smuggle in plutocrat-friendly policies under the pretense that they’re just sensible responses to the budget deficit.
Consider, as a prime example, the push to raise the retirement age, the age of eligibility for Medicare, or both. This is only reasonable, we’re told — after all, life expectancy has risen, so shouldn’t we all retire later? In reality, however, it would be a hugely regressive policy change, imposing severe burdens on lower- and middle-income Americans while barely affecting the wealthy. Why? First of all, the increase in life expectancy is concentrated among the affluent; why should janitors have to retire later because lawyers are living longer? Second, both Social Security and Medicare are much more important, relative to income, to less-affluent Americans, so delaying their availability would be a far more severe hit to ordinary families than to the top 1 percent.
Or take a subtler example, the insistence that any revenue increases should come from limiting deductions rather than from higher tax rates. The key thing to realize here is that the math just doesn’t work; there is, in fact, no way limits on deductions can raise as much revenue from the wealthy as you can get simply by letting the relevant parts of the Bush-era tax cuts expire. So any proposal to avoid a rate increase is, whatever its proponents may say, a proposal that we let the 1 percent off the hook and shift the burden, one way or another, to the middle class or the poor.
The point is that the class war is still on, this time with an added dose of deception. And this, in turn, means that you need to look very closely at any proposals coming from the usual suspects, even — or rather especially — if the proposal is being represented as a bipartisan, common-sense solution. In particular, whenever some deficit-scold group talks about “shared sacrifice,” you need to ask, sacrifice relative to what?
As regular readers may know, I'm not a fan of the Bowles-Simpson report on deficit reduction, a poorly designed plan that for some reason has achieved near-sacred status among the Beltway elite. Still, at least you can say this for Bowles-Simpson: When it talked about shared sacrifice, it started from a “baseline” that already assumed the end of the high-end Bush tax cuts. At this point, however, just about all the deficit scolds seem to want us to count the expiration of those cuts — which were sold on false pretenses, and were never affordable — as some kind of big giveback by the rich. It isn’t.
So keep your eyes open as the fiscal game of chicken continues. It’s an uncomfortable but real truth that we are not all in this together; America’s top-down class warriors lost big in the election, but now they’re trying to use the pretense of concern about the deficit to snatch victory from the jaws of defeat. Let’s not let them pull it off.
Why is it impossible to stop thinking, to render the mind a complete blank?
—John Hendrickson, via email
Barry Gordon, professor of neurology and cognitive science at the Johns Hopkins University School of Medicine, replies:
Forgive your mind this minor annoyance because it has worked to save your life—or more accurately, the lives of your ancestors. Most likely you have not needed to worry whether the rustling in the underbrush is a rabbit or a leopard, or had to identify the best escape route on a walk by the lake, or to wonder whether the funny pattern in the grass is a snake or dead branch. Yet these were life-or-death decisions to our ancestors. Optimal moment-to-moment readiness requires a brain that is working constantly, an effort that takes a great deal of energy. (To put this in context, the modern human brain is only 2 percent of our body weight, but it uses 20 percent of our resting energy.) Such an energy-hungry brain, one that is constantly seeking clues, connections and mechanisms, is only possible with a mammalian metabolism tuned to a constant high rate.
Constant thinking is what propelled us from being a favorite food on the savanna—and a species that nearly went extinct—to becoming the most accomplished life-form on this planet. Even in the modern world, our mind always churns to find hazards and opportunities in the data we derive from our surroundings, somewhat like a search engine server. Our brain goes one step further, however, by also thinking proactively, a task that takes even more mental processing.
So even though most of us no longer worry about leopards in the grass, we do encounter new dangers and opportunities: employment, interest rates, “70 percent off” sales and swindlers offering $20 million for just a small investment on our part. Our primate heritage brought us another benefit: the ability to navigate a social system. As social animals, we must keep track of who’s on top and who’s not and who might help us and who might hurt us. To learn and understand this information, our mind is constantly calculating “what if?” scenarios. What do I have to do to advance in the workplace or social or financial hierarchy? What is the danger here? The opportunity?
For these reasons, we benefit from having a brain that works around the clock, even if it means dealing with intrusive thoughts from time to time.
This article was originally published with the title Ask the Brain
Megastorms Could Drown Massive Portions of California
Huge flows of vapor in the atmosphere, dubbed “atmospheric rivers,” have unleashed massive floods every 200 years, and climate change could bring more of them
By Michael D. Dettinger and B. Lynn Ingram
DROWNED: A 43-day atmospheric-river storm in 1861 turned California’s Central Valley region into an inland sea, simulated here on a current-day map.
Image: Don Foley
In Brief
Geologic evidence shows that truly massive floods, caused by rainfall alone, have occurred in California about every 200 years. The most recent was in 1861, and it bankrupted the state.
Such floods were most likely caused by atmospheric rivers: narrow bands of water vapor about a mile above the ocean that extend for thousands of miles. Much smaller forms of these rivers regularly hit California, as well as the western coasts of other countries.
Scientists who created a simulated megastorm, called ARkStorm, that was patterned after the 1861 flood but was less severe, found that such a torrent could force more than a million people to evacuate and cause $400 billion in losses if it happened in California today.
Forecasters are getting better at predicting the arrival of atmospheric rivers, which will improve warnings about flooding from the common storms and about the potential for catastrophe from a megastorm.
Editor’s note (11/30/12): The article will appear in the January 2013 issue of Scientific American. We are making it freely available now because of the flooding underway in California.
The intense rainstorms sweeping in from the Pacific Ocean began to pound central California on Christmas Eve in 1861 and continued virtually unabated for 43 days. The deluges quickly transformed rivers running down from the Sierra Nevada mountains along the state’s eastern border into raging torrents that swept away entire communities and mining settlements. The rivers and rains poured into the state’s vast Central Valley, turning it into an inland sea 300 miles long and 20 miles wide. Thousands of people died, and one quarter of the state’s estimated 800,000 cattle drowned. Downtown Sacramento was submerged under 10 feet of brown water filled with debris from countless mudslides on the region’s steep slopes. California’s legislature, unable to function, moved to San Francisco until Sacramento dried out—six months later. By then, the state was bankrupt.
A comparable episode today would be far more devastating. The Central Valley is home to more than six million people, 1.4 million of them in Sacramento. The land produces about $20 billion in crops annually, including 70 percent of the world’s almonds—and portions of it have dropped 30 feet in elevation because of extensive groundwater pumping, making those areas even more prone to flooding. Scientists who recently modeled a similarly relentless storm that lasted only 23 days concluded that this smaller visitation would cause $400 billion in property damage and agricultural losses. Thousands of people could die unless preparations and evacuations worked very well indeed.
Was the 1861–62 flood a freak event? It appears not. New studies of sediment deposits in widespread locations indicate that cataclysmic floods of this magnitude have inundated California every two centuries or so for at least the past two millennia. The 1861–62 storms also pummeled the coastline from northern Mexico and southern California up to British Columbia, creating the worst floods in recorded history. Climate scientists now hypothesize that these floods, and others like them in several regions of the world, were caused by atmospheric rivers, a phenomenon you may have never heard of. And they think California, at least, is overdue for another one.
Ten Mississippi Rivers, One Mile High
Atmospheric rivers are long streams of water vapor that form at about one mile up in the atmosphere. They are only 250 miles across but extend for thousands of miles—sometimes across an entire ocean basin such as the Pacific. These conveyor belts of vapor carry as much water as 10 to 15 Mississippi Rivers from the tropics and across the middle latitudes. When one reaches the U.S. West Coast and hits inland mountain ranges, such as the Sierra Nevada, it is forced up, cools off and condenses into vast quantities of precipitation.
People on the West Coast of North America have long known about storms called “pineapple expresses,” which pour in from the tropics near Hawaii and dump heavy rain and snow for three to five days. It turns out that they are just one configuration of an atmospheric river. As many as nine atmospheric rivers hit California every year, according to recent investigations. Few of them end up being strong enough to yield true megafloods, but even the “normal” storms are about as intense as rainstorms get in the rest of the U.S., so they challenge emergency personnel as well as flood-control authorities and water managers.
Atmospheric rivers also bring rains to the west coasts of other continents and can occasionally form in unlikely places. For example, the catastrophic flooding in and around Nashville in May 2010—which caused some 30 deaths and more than $2 billion in damages—was fed by an unusual atmospheric river that brought heavy rain for two relentless days up into Tennessee from the Gulf of Mexico. In 2009 substantial flooding in southern England and in various parts of Spain was also caused by atmospheric rivers. But the phenomenon is best understood along the Pacific Coast, and the latest studies suggest that these rivers of vapor may become even larger in the future as the climate warms.
Certainly the folks at Gazprom are having a good snicker, reveling in the mockery that has been made of what should have been a landmark Ukraine-Spain gas deal that would have loosened Russia’s gas grip on Kiev.
Everyone wondered how Russia would respond to Ukraine’s attempt at gas independence. But this is what happens when you mess with Gazprom.
It was a horrible moment for Ukraine on Monday, all the more horrible because the whole event was televised, when the historic $1.1 billion deal it was about to sign with Spain’s Gas Natural Fenosa turned out to be fake.
Why was the deal historic? It would have secured $1.1 billion in investment for the construction of Ukraine’s first liquefied natural gas (LNG) terminal on the Black Sea and a pipeline connecting the country’s vast gas network to the terminal.
More to the point, this would have enabled Ukraine to import by tanker up to 10 billion cubic meters of European gas at a price 20% cheaper than Gazprom’s. Even more to the point, it would have been a major first step toward reducing Ukraine’s dependence on Russia.
The deal was that investors had apparently signed agreements through a newly formed consortium for the construction of the $1.1 billion LNG terminal.
Here’s how the ill-fated signing ceremony went down:
While Ukrainian Prime Minister Mykola Azarov and Energy Minister Yuriy Boyko were cutting the ribbon on the construction of the terminal in a live televised ceremony, the country’s investment chief, Vladislav Kaskiv, was attending the official investment signing ceremony elsewhere, also via live video feed. This is where the walls caved in, very suddenly.
Signing on behalf of Fenosa was one Jordi Sarda Bonvehi. At the 11th hour, Fenosa let it be known that it had no idea who Bonvehi was and that he certainly did not represent the company in any way. Fenosa apparently had no idea it was signing a landmark agreement with Ukraine.
Kiev was understandably taken aback, and Bonvehi remained conveniently silent at the signing ceremony once the news broke.
Of course, what no one knows is how Ukrainian authorities were led to believe—during multiple rounds of negotiations—that Bonvehi was a Fenosa representative.
The story now being bandied about by authorities in Kiev is that Bonvehi was under the impression that Fenosa would sign the deal with Ukraine and that he would be given the authority to sign retroactively.
But Fenosa denies it has ever considered such a deal and continues to deny any relationship at all with Bonvehi.
So where does that leave us? It leaves Ukraine in the lurch. There is no way it can fund this terminal on its own, despite its claims to the contrary. We probably don’t have to look much further than Gazprom and the Ukrainian oligarchy to find where this beautifully crafted charade was hatched.
In the meantime, Bonvehi, if a person of that name even exists, remains elusive. No one knows who he really is or whom he really works for.
More than anything, it’s an advertisement for due diligence.
By Jen Alic of Oilprice.com
The argument in the floor
Evidence is mounting that moderate minimum wages can do more good than harm
Nov 24th 2012 | from the print edition
MINIMUM-WAGE laws have a long history and enduring political appeal. New Zealand pioneered the first national pay floor in 1894. America’s federal minimum wage dates from 1938. Most countries now have a statutory pay floor—and the ranks are still swelling. Even Germany, one of the few big countries without, may at last introduce a national one. And in an era of budget austerity and widening inequality, the political temptation to prop up wages at the bottom by fiat may well grow.
Economists have tended to oppose minimum wages on the grounds that they reduce employment, hurting many of those they are supposed to help. Milton Friedman called them a form of discrimination against low-skilled workers. In standard models of competitive markets, anything that artificially raises the price of labour will curb demand for it, and the first to lose their jobs will be the least-skilled workers.
Yet economic theory allows for the possibility that wage floors can boost both employment and pay. If employers have monopsony power as buyers of labour and are able to set wages, for instance, they can keep pay below its competitive rate. Academic supporters of wage floors, mainly economists on the left, appealed to this logic. But most of their colleagues disagreed; and until about 1990, most empirical studies found that higher minimum wages cost jobs, particularly among young workers.
Then a pioneering case study by two noted labour economists, David Card and Alan Krueger, examined the response of fast-food restaurants to a rise in New Jersey’s state minimum wage. It found that this had actually increased employment. The paper spawned a flood of similar “case-study” research, a flurry of revisionist thinking and a heated academic debate. The most prominent critics of the new research were David Neumark of the University of California at Irvine and William Wascher of the Federal Reserve. They disputed Messrs Card and Krueger’s findings for New Jersey and argued that a comparison of different states over time showed that higher minimum wages hurt jobs.
Almost two decades later, the minimum-wage debate has matured, not least because policy changes have brought heaps of new evidence to analyse. Britain introduced a national minimum wage in 1999. America’s states saw numerous adjustments in their minimum wages, and the federal floor was raised by 40% between 2007 and 2009.
America’s academics still do not agree on the employment effects. But both sides have honed their methods and, in some ways, the gap between them has shrunk. Messrs Card and Krueger moved on to other work, but Arindrajit Dube at the University of Massachusetts-Amherst and Michael Reich of the University of California at Berkeley have generalised the case-study approach, comparing restaurant employment across all contiguous counties with different minimum-wage levels between 1990 and 2006. They found no adverse effects on employment from a higher minimum wage. They also argue that if research showed such effects, these mostly reflected other differences between American states and had nothing to do with the minimum wage.
Messrs Neumark and Wascher still demur. They have published stacks of studies (and a book) purporting to show that minimum wages hit jobs. In a forthcoming paper they defend their methods and argue that the evidence still favours their view. But even they are no longer blanket opponents. In a 2011 paper they pointed out that a higher minimum wage along with the Earned Income Tax Credit (which tops up income for poor workers in America) boosted both employment and earnings for single women with children (though it cost less-skilled, minority men jobs).
Britain’s experience offers another set of insights. The country’s national minimum wage was introduced at 46% of the median wage, slightly higher than America’s. A lower floor applied to young people. Both are adjusted annually on the advice of the Low Pay Commission. Before the law took effect, worries about potential damage to employment were widespread. Yet today the consensus is that Britain’s minimum wage has done little or no harm.
The most striking impact of Britain’s minimum wage has been on the spread of wages. Not only has it pushed up pay for the bottom 5% of workers, but it also seems to have boosted earnings further up the income scale—and thus reduced wage inequality. Wage gaps in the bottom half of Britain’s pay scale have shrunk sharply since the late 1990s. A new study by a trio of British labour-market economists (including one at the Low Pay Commission) attributes much of that contraction to the minimum wage. Wage inequality fell more for women (a higher proportion of whom are on the minimum wage) than for men and the effect was most pronounced in low-wage parts of Britain.
The British way versus the American way
This new evidence leaves economists with lots of unanswered questions. What exactly is going on in labour markets if minimum wages do not hurt employment but reduce wage gaps? Are firms cutting costs by squeezing wages elsewhere? Are they improving the productivity of the lowest-wage workers? Some of the newest studies suggest firms employ a variety of strategies to deal with a higher minimum wage, from modestly raising prices to saving money from lower turnover.
Policymakers face practical issues. Bastions of orthodoxy, such as the OECD, a rich-country think-tank, and the International Monetary Fund, now assert that a moderate minimum wage probably does not do much harm and may do some good. Their definition of moderate is 30-40% of the median wage. Britain’s experience suggests it might even be a bit higher. The success of the Low Pay Commission points to the importance of technocrats rather than politicians setting wage floors. Britain’s small, regular changes may be easier for firms to absorb than America’s infrequent but hefty minimum-wage increases. Whatever their flaws, minimum wages are here to stay.