Showing posts with label irrationalism. Show all posts

Friday, August 24, 2012

Regulators Captured - WSJ Editorial about the SEC and money-market funds

Regulators Captured
The Wall Street Journal, August 24, 2012, on page A10
http://online.wsj.com/article/SB10000872396390444812704577607421541441692.html


Economist George Stigler described the process of "regulatory capture," in which government agencies end up serving the industries they are supposed to regulate. This week lobbyists for money-market mutual funds provided still more evidence that Stigler deserved his Nobel. At the Securities and Exchange Commission, three of the five commissioners blocked a critical reform to help prevent a taxpayer bailout like the one the industry received in 2008.


SEC rules have long allowed money-fund operators to employ an accounting fiction that makes their funds appear safer than they are. Instead of share prices that fluctuate, like other kinds of securities, money funds are allowed to report to customers a fixed net asset value (NAV) of $1 per share—even if that's not exactly true.

As long as the value of a fund's underlying assets doesn't stray too far from that magical figure, fund sponsors can present a picture of stability to customers. Money funds are often seen as competitors to bank accounts and now hold $1.6 trillion in assets.

But during times of crisis, as in 2008, investors are reminded how different money funds are from insured deposits. When one fund "broke the buck"—its asset value fell below $1 per share—it triggered an institutional run on all money funds. The Treasury responded by slapping a taxpayer guarantee on the whole industry.

SEC Chairman Mary Schapiro has been trying to eliminate this systemic risk by taking away the accounting fiction that was created when previous generations of lobbyists captured the SEC. She made the sensible case that money-fund prices should float like the securities they are.

But industry lobbyists are still holding hostages. Commissioners Luis Aguilar, Dan Gallagher and Troy Paredes refused to support reform, so taxpayers can expect someday a replay of 2008. True to the Stigler thesis, the debate has focused on how to maintain the current money-fund business model while preventing customers from leaving in a crisis. The SEC goal should be to craft rules so that when customers leave a fund, it is a problem for fund managers, not taxpayers.

The industry shrewdly lobbied Beltway conservatives, who bought the line that this was a defense against costly regulation, even though regulation more or less created the money-fund industry. Free-market think tanks have been taken for a ride, some of them all too willingly.

The big winners include dodgy European banks, which can continue to attract U.S. money funds chasing higher yields knowing the American taxpayer continues to offer an implicit guarantee.

The industry shouldn't celebrate too much, though, because regulation may now be imposed by the new Financial Stability Oversight Council. Federal Reserve and Treasury officials want to do something, and their preference will probably be more supervision and capital positions that will raise costs that the industry can pass along to consumers. By protecting the $1 fixed NAV, free-marketeers may have guaranteed more of the Dodd-Frank-style regulation they claim to abhor.

The losers include the efficiency and fairness of the U.S. economy, as another financial industry gets government to guarantee its business model. Congratulations.

Saturday, June 30, 2012

Jonathan Haidt's The Righteous Mind: Why Good People Are Divided by Politics and Religion

Jonathan Haidt: He Knows Why We Fight. By Holman W. Jenkins, Jr.
Conservative or liberal, our moral instincts are shaped by evolution to strengthen 'us' against 'them.'
The Wall Street Journal, June 30, 2012, page A13
http://online.wsj.com/article/SB10001424052702303830204577446512522582648.html

Anyone who engages in political argument, and who isn't a moron, has had to recognize that decent, honest, intelligent people can come to opposite conclusions on public issues.

Jonathan Haidt, in an eye-opening and deceptively ambitious best seller, tells us why. The reason is evolution. Political attitudes are an extension of our moral reasoning; however much we like to tell ourselves otherwise, our moral responses are basically instinctual, despite attempts to gussy them up with ex-post rationalizations.

Our constellation of moral instincts arose because it helped us to cooperate. It helped us, in unprecedented speed and fashion, to dominate our planet. Yet the same moral reaction also means we exist in a state of perpetual, nasty political disagreement, talking past each other, calling each other names.

So Mr. Haidt explains in "The Righteous Mind: Why Good People Are Divided by Politics and Religion," undoubtedly one of the most talked-about books of the year. "The Righteous Mind" spent weeks on the hardcover best-seller list. Mr. Haidt considers himself mostly a liberal, but his book has been especially popular in the conservative blogosphere. Some right-leaning intellectuals are even calling it the most important book of the year.

It's full of ammunition that conservatives will love to throw out at cocktail parties. His research shows that conservatives are much better at understanding and anticipating liberal attitudes than liberals are at appreciating where conservatives are coming from. Case in point: Conservatives know that liberals are repelled by cruelty to animals, but liberals don't think (or prefer not to believe) that conservatives are repelled too.

Mr. Haidt, until recently a professor of moral psychology at the University of Virginia, says the surveys conducted by his research team show that liberals are strong on evolved values he defines as caring and fairness. Conservatives value caring and fairness too but tend to emphasize the more tribal values like loyalty, authority and sanctity.

Conservatives, Mr. Haidt says, have been more successful politically because they play to the full spectrum of sensibilities, and because the full spectrum is necessary for a healthy society. An admiring review in the New York Times sums up this element of his argument: "Liberals dissolve moral capital too recklessly. Welfare programs that substitute public aid for spousal and parental support undermine the ecology of the family. Education policies that let students sue teachers erode classroom authority. Multicultural education weakens the cultural glue of assimilation."

Such a book is bound to run into the charge of scientism—claiming scientific authority for a mix of common sense, exhortation or the author's own preferences. Let it be said that Mr. Haidt is sensitive to this complaint. If he erred, he says, it was on the side of being accessible, readable and, he hopes, influential.

As we sit in his new office at New York University, he professes an immodest aim: He wants liberals and conservatives to listen to each other more, hate each other less, and to understand that their differences are largely rooted in psychology, not open-minded consideration of the facts. "My big issue, the one I'm somewhat evangelical about, is civil disagreement," he says.

A shorthand he uses is "follow the sacred"—and not in a good way. "Follow the sacred and there you will find a circle of motivated ignorance." Today's political parties are most hysterical, he says, on the issues they "sacralize." For the right, it's taxes. For the left, the sacred issues were race and gender but are becoming global warming and gay marriage.

Yet between the lines of his book is an even more dramatic claim: The same moral psychology that makes our politics so nasty also underlies the amazing triumph of the human species. "We shouldn't be here at all," he tells me. "When I think about life on earth, there should not be a species like us. And if there was, we should be out in the jungle killing each other in small groups. That's what you should expect. The fact that we're here [in politics] arguing viciously and nastily with each other, and no guns, that itself is a miracle. And I think we can make [our politics] a little better. That's my favorite theme."

Who is Jon Haidt? A nice Jewish boy from central casting, he grew up in Scarsdale, N.Y. His father was a corporate lawyer. "When the economy opened out in the '50s and '60s and Jews could go everywhere, he was part of that generation. He and all his buddies from Brooklyn did very well."

His family was liberal in the FDR tradition. At Yale he studied philosophy and, in standard liberal fashion, "emerged pretty convinced that I was right about everything." It took a while for him to discover the limits of that stance. "I wouldn't say I was mugged by reality. I would say I was gradually introduced to it academically," he says today.

In India, where he performed field studies early in his professional career, he encountered a society in some ways patriarchal, sexist and illiberal. Yet it worked and the people were lovely. In Brazil, he paid attention to the experiences of street children and discovered the "most dangerous person in the world is mom's boyfriend. When women have a succession of men coming through, their daughters will get raped," he says. "The right is right to be sounding the alarm about the decline of marriage, and the left is wrong to say, 'Oh, any kind of family is OK.' It's not OK."

At age 41, he decided to try to understand what conservatives think. The quest was part of his effort to apply his understanding of moral psychology to politics. He especially sings the praises of Thomas Sowell's "Conflict of Visions," which he calls "an incredible book, a brilliant portrayal" of the argument between conservatives and liberals about the nature of man. "Again, as a moral psychologist, I had to say the constrained vision [of human nature] is correct."

That is, our moral instincts are tribal, adaptive, intuitive and shaped by evolution to strengthen "us" against "them." He notes that, in the 1970s, the left tended to be categorically hostile to evolutionary explanations of human behavior. Yet Mr. Haidt, the liberal and self-professed atheist, says he now finds the conservative vision speaks more insightfully to our evolved nature in ways that it would be self-defeating to discount.

"This is what I'm trying to argue for, and this is what I feel I've discovered from reading a lot of the sociology," he continues. "You need loyalty, authority and sanctity"—values that liberals are often suspicious of—"to run a decent society."

Mr. Haidt, a less chunky, lower-T version of Adam Sandler, has just landed a new position at the Stern School of Business at NYU. He arrived with his two children and wife, Jane, after a successful and happy 16-year run at the University of Virginia. An introvert by his own account, and never happier than when laboring in solitude, he nevertheless sought out the world's media capital to give wider currency to the ideas in "The Righteous Mind."

Mr. Haidt's book, as he's the first to notice, has given comfort to conservatives. Its aim is to help liberals. Though he calls himself a centrist, he remains a strongly committed Democrat. He voted for one Republican in his life—in 2000 crossing party lines to cast a ballot for John McCain in the Virginia primary. "I wasn't trying to mess with the Republican primary," he adds. "I really liked McCain."

His disappointment with President Obama is quietly evident. Ronald Reagan understood that "politics is more like religion than like shopping," he says. Democrats, after a long string of candidates who flogged policy initiatives like items in a Wal-Mart circular, finally found one who could speak to higher values than self-interest. "Obama surely had a chance to remake the Democratic Party. But once he got in office, I think, he was consumed with the difficulty of governing within the Beltway."

The president has reverted to the formula of his party—bound up in what Mr. Haidt considers obsolete interest groups, battles and "sacred" issues about which Democrats cultivate an immunity to compromise.

Mr. Haidt lately has been speaking to Democratic groups and urging attachment to a new moral vision, albeit one borrowed from the Andrew Jackson campaign of 1828: "Equal opportunity for all, special privileges for none."

Racial quotas and reflexive support for public-sector unions would be out. His is a reformed vision of a class-based politics of affirmative opportunity for the economically disadvantaged. "I spoke to some Democrats about things in the book and they asked, how can we weaponize this? My message to them was: You're not ready. You don't know what you stand for yet. You don't have a clear moral vision."

Like many historians of modern conservatism, he cites the 1971 Powell Memo—written by the future Supreme Court Justice Lewis Powell Jr.—which rallied Republicans to the defense of free enterprise and limited government. Democrats need their own version of the Powell Memo today to give the party a new and coherent moral vision of activist government in the good society. "The moral rot a [traditional] liberal welfare state creates over generations—I mean, the right is right about that," says Mr. Haidt, "and the left can't see it."

Yet one challenge becomes apparent in talking to Mr. Haidt: He's read his book and cheerfully acknowledges that he avoids criticizing too plainly the "sacralized" issues of his liberal friends.

In his book, for instance, is passing reference to Western Europe's creation of the world's "first atheistic societies," also "the least efficient societies ever known at turning resources (of which they have a lot) into offspring (of which they have very few)."

What does he actually mean? He means Islam: "Demographic curves are very hard to bend," he says. "Unless something changes in Europe in the next century, it will eventually be a Muslim continent. Let me say it diplomatically: Most religions are tribal to some degree. Islam, in its holy books, seems more so. Christianity has undergone a reformation and gotten some distance from its holy books to allow many different lives to flourish in Christian societies, and this has not happened in Islam."

Mr. Haidt is similarly tentative in spelling out his thoughts on global warming. The threat is real, he suspects, and perhaps serious. "But the left is now embracing this as their sacred issue, which guarantees that there will be frequent exaggerations and minor—I don't want to call it fudging of data—but there will be frequent mini-scandals. Because it's a moral crusade, the left is going to have difficulty thinking clearly about what to do."

Mr. Haidt, I observe, is noticeably less delicate when stepping on the right's toes. He reviles George W. Bush, whom he blames for running up America's debt and running down its reputation. He blames Newt Gingrich for perhaps understanding his book's arguments too well and importing an uncompromising moralistic language into the partisan politics of the 1990s.

Mr. Haidt also considers today's Republican Party a curse upon the land, even as he admires conservative ideas. He says its defense of lower taxes on capital income—mostly reported by the rich—is indefensible. He dismisses Mitt Romney as a "moral menial," a politician so cynical about the necessary cynicism of politics that he doesn't bother to hide his cynicism. (Some might call that a virtue.) He finds it all too typical that Republicans abandoned their support of the individual health-care mandate the moment Mr. Obama picked it up (though he also finds Chief Justice John Roberts's bend-over-backwards effort to preserve conservative constitutional principle while upholding ObamaCare "refreshing").

Why is his language so much less hedged when discussing Republicans? "Liberals are my friends, my colleagues, my social world," he concedes. Liberals also are the audience he hopes most to influence, helping Democrats to recalibrate their political appeal and their attachment to a faulty welfare state.

To which a visitor can only say, Godspeed. Even with his parsing out of deep psychological differences between conservatives and liberals, American politics still seems capable of a useful fluidity. To make progress we need both parties, and right now we could use some progress on taxes, incentives, growth and entitlement reform.

Mr. Jenkins writes the Journal's Business World column.

Tuesday, May 15, 2012

Changes in U.S. water use and implications for the future

It is interesting to see some data in Water Reuse: Expanding the Nation's Water Supply Through Reuse of Municipal Wastewater (http://www.nap.edu/catalog.php?record_id=13303), a National Research Council publication.

See for example figure 1-6, p 17, changes in U.S. water use and implications for the future:



Thursday, February 23, 2012

Can Institutional Reform Reduce Job Destruction and Unemployment Duration?

Can Institutional Reform Reduce Job Destruction and Unemployment Duration? Yes It Can. By Esther Perez & Yao Yao
IMF Working Paper No. 12/54
February 2012
http://www.imf.org/external/pubs/cat/longres.aspx?sk=25738.0

Summary: We read search theory’s unemployment equilibrium condition as an Iso-Unemployment Curve (IUC). The IUC is the locus of job destruction rates and expected unemployment durations rendering the same unemployment level. A country’s position along the curve reveals its preferences over the destruction-duration mix, while its distance from the origin indicates the unemployment level at which such preferences are satisfied. Using a panel of 20 OECD countries over 1985-2008, we find employment protection legislation to have opposing effects on destructions and durations, while the effects of the remaining key institutional factors on both variables tend to reinforce each other. Implementing the right reforms could reduce job destruction rates by about 0.05 to 0.25 percentage points and shorten unemployment spells by around 10 to 60 days. Consistent with this, unemployment rates would decline by between 0.75 and 5.5 percentage points, depending on a country’s starting position.


Introduction

This paper investigates how labor market policies affect the unemployment rate through its two defining factors, the duration of unemployment spells and job destruction rates. To this end, we look at search theory’s unemployment equilibrium condition as an Iso-Unemployment Curve (IUC). The IUC represents the locus of job destruction rates and expected unemployment durations rendering the same unemployment level. A country’s position along the curve reveals its preferences over the destruction-duration mix, while its distance from the origin indicates the unemployment level at which such preferences are satisfied. We next provide micro-foundations for the link between destructions, durations and policy variables. This allows us to explore the relevance of institutional features using a sample of 20 OECD countries over the period 1985-2008.

The empirical literature investigating the influence of labor market institutions on the overall unemployment rate is sizable (see, for instance, Blanchard and Wolfers, 1999, and Nickell and others, 2002). Equally numerous are the studies splitting unemployment into job creation and job destruction flows (see, for example, Blanchard, 1998, Shimer, 2007, and Elsby and others, 2008). This work connects these two strands of the literature by investigating how labor market policies shape both job separations and unemployment spells, which together determine the overall unemployment rate in the economy. The IUC schedule used in our analysis is novel and is motivated by the need to understand the nature of unemployment, as essentially coming from destructions, durations or a combination of both factors. This can help clarify whether policy makers should focus primarily on speeding up workers’ reallocation across job positions rather than protecting them in the workplace.

One fundamental question raised in this context is whether countries with dynamic labor markets significantly outperform countries with more stagnant markets. By dynamic (stagnant) we mean labor markets displaying high (low) levels of workers’ turnover in and out of unemployment. Is it the case that countries featuring high job destruction rates but brief unemployment spells tend to display lower unemployment rates than labor markets characterized by limited job destruction but longer unemployment durations?  And how do institutional features shape destructions and durations?
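The trade-off behind the IUC can be made concrete with the standard search-theory steady-state flow condition, u = s / (s + f), where s is the job destruction (separation) rate and the job-finding rate f is the reciprocal of expected unemployment duration D. A minimal sketch of how two very different destruction-duration mixes can land on the same curve (the specific rates below are illustrative assumptions, not estimates from the paper):

```python
def steady_state_unemployment(s, D):
    """Steady-state unemployment rate from the flow condition u = s / (s + f),
    where the job-finding rate f is the reciprocal of expected duration D.
    s: monthly job destruction rate; D: expected unemployment spell in months."""
    f = 1.0 / D
    return s / (s + f)

# Two stylized labor markets on the same Iso-Unemployment Curve:
# "dynamic"  -- high destruction, short spells (s * D = 0.1)
# "stagnant" -- low destruction, long spells   (s * D = 0.1)
u_dynamic = steady_state_unemployment(s=0.02, D=5.0)
u_stagnant = steady_state_unemployment(s=0.005, D=20.0)
# Both give u = 0.1 / 1.1, roughly 9.1 percent.
```

Since u = sD / (sD + 1), every (s, D) pair with the same product sD yields the same unemployment rate, which is exactly why the paper's question of "dynamic versus stagnant" markets is not settled by the unemployment level alone.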


Conclusions

This paper reads the basic unemployment equilibrium condition postulated by search theory as an Iso-Unemployment Curve (IUC). The IUC is the locus of job destruction rates and expected unemployment durations that render the same unemployment level. We use this schedule to classify countries according to their preferences over the job destruction-unemployment duration trade-off. The upshot of this analysis is that labor markets characterized by high levels of job destruction but brief unemployment spells do not necessarily outperform countries characterized by the opposite behavior. But the IUC construct makes it clear that high unemployment rates result from extreme values in either durations or destructions, or intermediate-to-high levels in both.

Looking at unemployment through the lens of the IUC schedule focuses attention on each economy’s revealed social preferences over the destruction-duration mix. Policy packages fighting unemployment should take such preferences into consideration. Some countries seem to tolerate relatively high destruction rates as long as unemployment duration is short. Others are biased towards job security and do not mind financing longer job search spells. A few unfortunate countries are trapped in a high inflow-high duration combination, seemingly condemned to long periods of high unemployment.

An optimistic message arising from this study, especially for countries located on higher IUCs, is that an ambitious structural reform program tackling high labor tax wedges, activating unemployment benefits and removing barriers to competition in key services can effectively contain job losses, limit the duration of unemployment spells and yield a substantial reduction in unemployment.

Tuesday, January 31, 2012

Macroeconomic and Welfare Costs of U.S. Fiscal Imbalances

Macroeconomic and Welfare Costs of U.S. Fiscal Imbalances. By Bertrand Gruss and Jose L. Torres
IMF Working Paper No. 12/38
http://www.imf.org/external/pubs/cat/longres.aspx?sk=25691.0

Summary: In this paper we use a general equilibrium model with heterogeneous agents to assess the macroeconomic and welfare consequences in the United States of alternative fiscal policies over the medium-term. We find that failing to address the fiscal imbalances associated with current federal fiscal policies for a prolonged period would result in a significant crowding-out of private investment and a severe drag on growth. Compared to adopting a reform that gradually reduces federal debt to its pre-crisis level, postponing debt stabilization for two decades would entail a permanent output loss of about 17 percent and a welfare loss of almost 7 percent of lifetime consumption. Moreover, the long-run welfare gains from the adjustment would more than compensate for the initial losses associated with the consolidation period.

The authors start the paper this way:

“History makes clear that failure to put our fiscal house in order will erode the vitality of our
economy, reduce the standard of living in the United States, and increase the risk of economic and financial instability.”

Ben S. Bernanke, 2011 Annual Conference of the Committee for a Responsible Federal Budget


Excerpts
Introduction
One of the main legacies of the Great Recession has been the sharp deterioration of public finances in most advanced economies. In the U.S., the federal debt held by the public surged from 36 percent of GDP in 2007 to around 70 percent in 2011. This rise in debt, however impressive, gets dwarfed when compared to the medium-term fiscal imbalances associated with entitlement programs and revenue-constraining measures. For example, the non-partisan Congressional Budget Office (CBO) foresees the debt held by the public to exceed 150 percent of GDP by 2030 (see Figure 1). Similarly, Batini et al. (2011) estimate that closing the federal “fiscal gap” associated with current fiscal policies would require a permanent fiscal adjustment of about 15 percent of GDP.

While the crisis brought the need to address the U.S. medium-term fiscal imbalances to the center of the policy debate, the costs they entail are not necessarily well understood. Most of the long-term fiscal projections regularly produced in the U.S. and used to guide policy discussions are derived from debt accounting exercises. A shortcoming of such an approach is that relative prices and economic activity are unaffected by different fiscal policies, and that it cannot be used for welfare analysis. To overcome those limitations and contribute to the debate, in this paper we use a rational expectations general equilibrium framework to assess the medium-term macroeconomic and welfare consequences of alternative fiscal policies in the U.S. We find that failing to address the federal fiscal imbalances for a prolonged period would result in a significant crowding-out of private investment and drag on growth, entailing a permanent output loss of about 17 percent and a welfare loss of almost 7 percent of lifetime consumption. Moreover, we find that the long-run welfare gains from stabilizing the federal debt at a low level more than compensate for the welfare losses associated with the consolidation period. Our results also suggest that the crowding-out effects of public debt are an order of magnitude bigger than the policy mix effects: Promptly reducing the level of public debt is significantly more important for activity and welfare than differences in the size of government or the design of the tax reform.
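The debt accounting exercises the authors contrast with their model amount to a mechanical recursion for the debt-to-GDP ratio, b_{t+1} = b_t (1+r)/(1+g) + d_t, with the interest rate r, growth rate g, and primary deficits d held fixed regardless of policy — precisely the limitation (prices and activity unaffected) that the general equilibrium model relaxes. A minimal sketch with illustrative numbers (assumptions for this example, not CBO figures):

```python
def project_debt_ratio(b0, r, g, primary_deficits):
    """Mechanical debt accounting: the debt-to-GDP ratio evolves as
    b_{t+1} = b_t * (1 + r) / (1 + g) + d_t, with the interest rate r and
    growth rate g held fixed regardless of the debt level -- the shortcoming
    of such projections noted in the text."""
    b = b0
    path = [b]
    for d in primary_deficits:
        b = b * (1.0 + r) / (1.0 + g) + d
        path.append(b)
    return path

# Illustrative: start at 70 percent of GDP with persistent 4 percent
# primary deficits over roughly two decades.
path = project_debt_ratio(b0=0.70, r=0.03, g=0.02, primary_deficits=[0.04] * 19)
```

With persistent deficits and r above g, the ratio grows without bound — the accounting identity shows the unsustainability, but only a model in which r responds to debt can price the crowding-out the paper measures.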

The focus of this study is on the costs and benefits of fiscal consolidation for the U.S. over the medium-term to long-term. In this sense, we explicitly leave aside some questions on fiscal consolidation that, while very relevant for the short-run, cannot be appropriately tackled in this framework. One example is assessing the effects of back-loading the pace of consolidation in the near term—while announcing a credible medium-run adjustment—in the current context of growth below potential and nominal interest rates close to zero. A related relevant question is what mix of fiscal instruments in the near term would make fiscal consolidation less costly in such context. While interesting, these questions are beyond the scope of this paper.

The quantitative framework we use is a dynamic stochastic general equilibrium model with heterogeneous agents, and endogenous occupational choice and labor supply. In the model, ex-ante identical agents face idiosyncratic entrepreneurial ability and labor productivity shocks, and choose their occupation. Agents can become either entrepreneurs and hire other workers, or they can become workers and decide what fraction of their time to work for other entrepreneurs. In order to make a realistic analysis of the policy options, we assume that the government does not have access to lump sum taxation. Instead, the government raises distortionary taxes on labor, consumption, and income, and issues one period non-contingent bonds to finance lump sum transfers to all agents, other noninterest spending, and service its debt. Given that the core issue threatening debt sustainability in the U.S. is the explosive path of spending on entitlement programs, the heterogeneous agents assumption is crucial: Our model allows for a meaningful tradeoff between distortionary taxation and government transfers, as the latter insure households from attaining very low levels of consumption. The complexity this introduces forces us to sacrifice on some dimension: Agents in our model face individual uncertainty but have perfect foresight about future paths of fiscal instruments and prices. Allowing for uncertainty about the timing and composition of the adjustment would be interesting, but would severely increase the computational cost.

We compare model simulations from four alternative fiscal scenarios. The benchmark scenario maintains current fiscal policies for about twenty years. More precisely, in this scenario we feed the model with the spending (noninterest mandatory and discretionary) and revenue projections from CBO’s Alternative Fiscal Scenario (CBO 2011)—allowing all other variables to adjust endogenously—until about 2030, when we assume that the government increases all taxes to stabilize the debt at its prevailing level. Three alternative scenarios assume, instead, the immediate adoption of a fiscal reform aimed at gradually reducing the federal debt to its pre-crisis level. There are of course many possible parameterizations for such a reform reflecting, among other things, different views about the desired size of the public sector and the design of the tax system. We first consider an adjustment scenario assuming the same size of government and tax structure as the benchmark one in order to disentangle the sole effect of delaying fiscal adjustment—and stabilizing the debt ratio at a high level. We then explore the effect of alternative designs for the consolidation plan by considering two alternative adjustment scenarios that incorporate spending and revenue measures proposed by the bipartisan December 2010 Bowles-Simpson Commission.

This paper is related to different strands of the macro literature on fiscal issues. First, it is related to studies using general equilibrium models to analyze the implications of fiscal consolidations. Forni et al. (2010) use perfect-foresight simulations from a two-country dynamic model to compute the macroeconomic consequences of reducing the debt to GDP ratio in Italy. Coenen et al. (2008) analyze the effects of a permanent reduction in public debt in the Euro Area using the ECB NAWM model. Clinton et al. (2010) use the IMF GIMF model to examine the macroeconomic effects of permanently reducing government fiscal deficits in several regions of the world at the same time. Davig et al. (2010) study the effects of uncertainty about when and how policy will adjust to resolve the exponential growth in entitlement spending in the U.S.

The main difference with our paper is that these works rely on representative agent models that cannot adequately capture the redistributive and insurance effects of fiscal policy. As a result, such models have by construction a positive bias towards fiscal reforms that lower transfers, reduce the debt, and eventually lower the distortions by lowering tax rates. Another unappealing feature of the representative agent models for analyzing the merits of a fiscal consolidation is that, in steady state, the equilibrium real interest rate is independent of the debt level, whereas in our model the equilibrium real interest rate is endogenously affected by the level of government debt, which is consistent with the empirical literature.

Second, the paper is related to previous work using general equilibrium models with infinitely lived heterogeneous agents, occupational choice, and borrowing constraints to analyze fiscal reforms, such as Li (2002), Meh (2005) and Kitao (2008). Differently from these papers, which impose a balanced budget every period, we focus on the effects of debt dynamics and fiscal consolidation reforms. Also, since we focus on reforms over an extended period of time, we augment our model to include growth. Moreover, as in Kitao (2008), we explicitly compute the transitional dynamics after the reforms and analyze the welfare costs associated with the transition.

Results: The long-run effects


What is the effect of delaying fiscal consolidation on...?
Capital and Labor. The high interest rates in the delay scenario imply that, for entrepreneurs without enough internal funding, the cost of borrowing sufficient capital is too high for entrepreneurial income to beat the outside option of wage income. As a result, the share of entrepreneurs in the delay scenario is roughly one half of the share under the passive adjust scenario, and the aggregate capital stock is about 17 percent lower. The higher share of workers in the delay scenario implies a higher labor supply. Together with lower labor demand (due to a lower capital stock), this leads to a real wage that is more than 19 percent lower. Total hours worked are similar in the two steady states, as lower individual hours offset the higher share of workers.

Output and Consumption. The crowding-out effect of fiscal policy under the delay scenario leads to large permanent losses in output and consumption. The level of GDP is about 16 percent lower in the delay than in the passive adjust scenario and aggregate consumption is 3.5 percent lower. Moreover and as depicted in Figure 4, the wealth distribution is significantly more concentrated under the delay scenario.

Welfare. The combination of lower aggregate consumption and a more concentrated wealth distribution under the delay scenario implies that welfare is significantly lower than in the passive adjust scenario. Using a consumption-equivalent welfare metric, we find that the average difference in steady-state welfare across scenarios would be equivalent to permanently increasing the consumption of each agent in the delay scenario economy by 6 percent while leaving their amount of leisure unchanged. We interpret this differential as the permanent welfare gain from stabilizing public debt at its pre-crisis level. A breakdown of the welfare comparison of steady states by wealth deciles, shown in Figure 5, suggests that all agents up to the 7th decile of the wealth distribution would be better off under fiscal consolidation.
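
In generic form (a standard formulation; the paper's exact utility function and state distribution are not shown here), the consumption-equivalent gain can be written as the value of lambda that solves:

```latex
% \lambda is the uniform, permanent consumption increase (leisure held
% fixed) that makes the average agent in the delay economy as well off
% as in the adjust economy; u is period utility, \beta the discount factor.
\mathbb{E} \sum_{t=0}^{\infty} \beta^{t}\,
  u\!\left((1+\lambda)\, c_{t}^{\mathrm{delay}},\; l_{t}^{\mathrm{delay}}\right)
\;=\;
\mathbb{E} \sum_{t=0}^{\infty} \beta^{t}\,
  u\!\left(c_{t}^{\mathrm{adjust}},\; l_{t}^{\mathrm{adjust}}\right)
```

The 6 percent figure above corresponds to \lambda = 0.06.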


What are the effects of alternative fiscal consolidation plans?

Capital and Output. The smaller size of government in the two active adjust scenarios relative to the passive one translates into higher capital stocks and higher output, increasing the gap with the delay scenario. Regarding the tax reform, the comparison between the two active adjust scenarios reveals that distributing the higher tax pressure across all taxes, including consumption taxes, lowers distortions and results in a higher capital stock and in a growth-friendlier consolidation: The difference in the output level between the delay and active (1) adjust scenario stands at 17.7 percent—while this difference is 17.1 and 15.7 percent for the active (2) adjust and passive adjust scenarios respectively.

Consumption and Welfare. While all adjust scenarios reveal a significant difference in long-run per-capita consumption and welfare with respect to postponing fiscal consolidation, the relative performance among them also favors a smaller size of government and a balanced tax reform. The difference in per-capita consumption with the delay scenario is 3.5, 5.8 and 5.4 percent respectively for the passive, active (1) and active (2) adjustment scenarios. The policy mix under the active (1) adjust scenario also ranks the best in terms of welfare, with the welfare differential with respect to the delay scenario being more than 7 percent of lifetime consumption.

Overall Welfare Cost of Delaying Fiscal Consolidation

In the long-run the average welfare in the adjust scenario is higher than in the delay scenario by 6.7 percent of lifetime consumption. However, along the transition to the new steady state the adjust scenario is characterized by a costly fiscal adjustment that entails a lower path for per capita consumption, so it is not necessarily true that an adjustment is optimal.

To assess the overall welfare ranking of the alternative fiscal paths, we extend the analysis of section III.A. by computing, for the delay and adjust scenarios, the average expected discounted lifetime utility starting in 2011. We find that even taking into account the costs along the transition, the adjust scenario entails an average welfare gain for the economy. The infinite horizon welfare comparison suggests that consumption under the delay scenario should be raised by 0.8 percent for all agents in the economy in all periods to attain the same average utility as under the adjust scenario (while leaving leisure unchanged). A breakdown of this result by wealth deciles (see Figure 9) suggests that, as in the long-run comparison, the wealthiest decile of the population is worse off under the adjust scenario. Differently from the steady state comparison, however, the first four deciles also face welfare losses in the adjust scenario.

A few elements suggest that the average welfare gain reported (0.8 percent in consumption-equivalent terms) can be considered a lower bound. First, the calibrated subjective discount factor from the model used to compute the present value of the utility paths entails a yearly discount rate of about 9.9 percent.20 With such a high discount rate, the long-run benefits of the adjust scenario are heavily discounted. Using a discount rate of 3 percent, the one used by the CBO for calculating the present value of future streams of revenues and outlays of the government’s trust funds, would imply a consumption-equivalent welfare gain of 5.9 percent (instead of 0.8 percent). Second, the model we are using has infinitely lived agents, so we are not explicitly accounting for the distribution of costs and benefits across generations.
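
The sensitivity to the discount rate is mechanical, and a toy calculation makes it concrete. The utility paths below are hypothetical illustrative numbers, not the paper's series; only their shape (early cost of adjusting, permanent later gain) matters:

```python
# Illustrative only: hypothetical utility paths, not the paper's model.
# 'adjust' gives up consumption early for a permanently higher level;
# 'delay' is higher early and permanently lower afterwards.

def present_value(flows, rate):
    """Discounted sum of a per-period stream of flows."""
    return sum(f / (1.0 + rate) ** t for t, f in enumerate(flows))

adjust = [0.95] * 10 + [1.07] * 90
delay = [1.00] * 10 + [0.93] * 90

# At a 9.9 percent rate the early periods dominate the comparison ...
gain_high_rate = present_value(adjust, 0.099) - present_value(delay, 0.099)
# ... while at 3 percent the long run carries far more weight.
gain_low_rate = present_value(adjust, 0.03) - present_value(delay, 0.03)
```

In this sketch the gain from adjusting is positive under both rates but several times larger at 3 percent, which is the direction of the 0.8-versus-5.9 percent result reported above.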

Conclusions
We compare the macroeconomic and welfare effects of failing to address the fiscal imbalances in the U.S. for an extended period with those of reducing federal debt to its pre-crisis level and find that the stakes are quite high. Our model simulations suggest that the continuous rise in federal debt implied by current policies would have sizeable effects on the economy, even under certainty that the federal debt will be fully repaid. The model predicts that the mounting debt ratio would increase the cost of borrowing and crowd out private capital from productive activities, acting as a significant drag on growth. Compared to stabilizing federal debt at its pre-crisis level, continuation of current policies for two decades would entail a permanent output loss of around 17 percent. The associated drop in per-capita consumption, combined with the worsening of wealth concentration that the model suggests, would cause a large average welfare loss in the long-run, equivalent to about 7 percent of lifetime consumption. Our results also suggest that promptly reducing the level of public debt is significantly more important for activity and welfare than differences in the size of government or the design of the tax reform. Accordingly, even under consensus on the desirability of increasing primary spending in the medium-run, it would be preferable to first put the fiscal house in order.

The model adequately captures the fact that the fiscal consolidation needed to reduce federal debt to its pre-crisis level would be very costly. Still, extending the welfare comparison to also include the transition period suggests that a fiscal consolidation would, on average, be beneficial. After taking into account the short-term costs, the average welfare gain from fiscal consolidation stands at 0.8 percent of lifetime consumption.

We argue that our welfare results can be interpreted as a lower bound. This is because, first, we abstract from default so our simulations ignore the potential effect of higher public debt on the risk premium. However, as the debt crisis in Europe has revealed, interest rates can soar quickly if investors lose confidence in the ability of a government to manage its fiscal policy. Considering this effect would have magnified the long-run welfare costs of stabilizing the debt ratio at a higher level. Second, the high discount rate we use in the computation of the present value of utility exacerbates the short-term costs. If we recomputed the overall welfare effects in our scenarios using a discount rate of 3 percent, the welfare gain from a consolidation would be 5.9 percent of lifetime utility, instead of 0.8 percent. An argument for considering a lower rate to compute the present value of welfare is that by assuming infinitely lived agents we are not attaching any weight to unborn agents that would be affected by the permanent costs of delaying the resolution of fiscal imbalances and do not enjoy the expansionary effects of the unsustainable policy along the transitional dynamics.

The results in this paper are not exempt from the perils inherent to any model-dependent analysis. In order to address features that we believe are crucial for the issue at hand, we needed to simplify the model on other dimensions. For example, given the current reliance of the U.S. on foreign financing, the closed economy assumption used in this paper may be questionable. However, we believe that it would also be problematic to assume that the world interest rate will remain unaffected if the U.S. continues to considerably increase its financing needs. Moreover and as mentioned before, the model ignores the effect of higher debt on the perceived probability of default, which would likely counteract the effect in our results from failing to incorporate the government’s access to foreign borrowing. The model also abstracts from nominal issues and real and nominal rigidities typically introduced in the new Keynesian models commonly used for policy analysis. However, we believe that while these features are particularly relevant for short-term cyclical considerations, they matter much less for the longer-term issues addressed in this paper.

Friday, October 21, 2011

The Case Against Global-Warming Skepticism

The Case Against Global-Warming Skepticism. By Richard A Muller
There were good reasons for doubt, until now.
http://online.wsj.com/article/SB10001424052970204422404576594872796327348.html
WSJ, Oct 21, 2011

Are you a global warming skeptic? There are plenty of good reasons why you might be.

As many as 757 stations in the United States recorded net surface-temperature cooling over the past century. Many are concentrated in the southeast, where some people attribute tornadoes and hurricanes to warming.

The temperature-station quality is largely awful. The most important stations in the U.S. are included in the Department of Energy's Historical Climatology Network. A careful survey of these stations by a team led by meteorologist Anthony Watts showed that 70% of these stations have such poor siting that, by the U.S. government's own measure, they result in temperature uncertainties of between two and five degrees Celsius or more. We do not know how much worse the stations in the developing world are.

Using data from all these poor stations, the U.N.'s Intergovernmental Panel on Climate Change estimates an average global 0.64ºC temperature rise in the past 50 years, "most" of which the IPCC says is due to humans. Yet the margin of error for the stations is at least three times larger than the estimated warming.

We know that cities show anomalous warming, caused by energy use and building materials; asphalt, for instance, absorbs more sunlight than do trees. Tokyo's temperature rose about 2ºC in the last 50 years. Could that rise, and increases in other urban areas, have been unreasonably included in the global estimates? That warming may be real, but it has nothing to do with the greenhouse effect and can't be addressed by carbon dioxide reduction.

Moreover, the three major temperature analysis groups (the U.S.'s NASA and National Oceanic and Atmospheric Administration, and the U.K.'s Met Office and Climatic Research Unit) analyze only a small fraction of the available data, primarily from stations that have long records. There's a logic to that practice, but it could lead to selection bias. For instance, older stations were often built outside of cities but today are surrounded by buildings. These groups today use data from about 2,000 stations, down from roughly 6,000 in 1970, raising even more questions about their selections.

On top of that, stations have moved, instruments have changed and local environments have evolved. Analysis groups try to compensate for all this by homogenizing the data, though there are plenty of arguments to be had over how best to homogenize long-running data taken from around the world in varying conditions. These adjustments often result in corrections of several tenths of one degree Celsius, significant fractions of the warming attributed to humans.

And that's just the surface-temperature record. What about the rest? The number of named hurricanes has been on the rise for years, but that's in part a result of better detection technologies (satellites and buoys) that find storms in remote regions. The number of hurricanes hitting the U.S., even more intense Category 4 and 5 storms, has been gradually decreasing since 1850. The number of detected tornadoes has been increasing, possibly because radar technology has improved, but the number that touch down and cause damage has been decreasing. Meanwhile, the short-term variability in U.S. surface temperatures has been decreasing since 1800, suggesting a more stable climate.

Without good answers to all these complaints, global-warming skepticism seems sensible. But now let me explain why you should not be a skeptic, at least not any longer.

Over the last two years, the Berkeley Earth Surface Temperature Project has looked deeply at all the issues raised above. I chaired our group, which just submitted four detailed papers on our results to peer-reviewed journals. We have now posted these papers online at www.BerkeleyEarth.org to solicit even more scrutiny.

Our work covers only land temperature—not the oceans—but that's where warming appears to be the greatest. Robert Rohde, our chief scientist, obtained more than 1.6 billion measurements from more than 39,000 temperature stations around the world. Many of the records were short in duration, and to use them Mr. Rohde and a team of esteemed scientists and statisticians developed a new analytical approach that let us incorporate fragments of records. By using data from virtually all the available stations, we avoided data-selection bias. Rather than try to correct for the discontinuities in the records, we simply sliced the records where the data cut off, thereby creating two records from one.
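
The "two records from one" slicing idea can be sketched in a few lines. This is a toy illustration only, assuming a break shows up as a gap in the year index; Berkeley Earth's actual break detection and record weighting are far more elaborate:

```python
# Toy sketch of record slicing: instead of adjusting a series around a
# discontinuity (station move, instrument change), cut it at the break
# and treat the pieces as independent records.

def slice_at_breaks(years, temps, max_gap=1):
    """Split a (year, temp) series into separate records wherever the
    year index jumps by more than max_gap (a stand-in for a detected
    discontinuity)."""
    records = [[(years[0], temps[0])]]
    for i in range(1, len(years)):
        if years[i] - years[i - 1] > max_gap:
            records.append([])          # start a new record at the break
        records[-1].append((years[i], temps[i]))
    return records
```

For example, a station with readings for 1900–1902 and 1910–1911 would yield two separate records, each internally consistent, rather than one adjusted series spanning the gap.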

We discovered that about one-third of the world's temperature stations have recorded cooling temperatures, and about two-thirds have recorded warming. The two-to-one ratio reflects global warming. The changes at the locations that showed warming were typically between 1ºC and 2ºC, much greater than the IPCC's average of 0.64ºC.

To study urban-heating bias in temperature records, we used satellite determinations that subdivided the world into urban and rural areas. We then conducted a temperature analysis based solely on "very rural" locations, distant from urban ones. The result showed a temperature increase similar to that found by other groups. Only 0.5% of the globe is urbanized, so it makes sense that even a 2ºC rise in urban regions would contribute negligibly to the global average.

What about poor station quality? Again, our statistical methods allowed us to analyze the U.S. temperature record separately for stations with good or acceptable rankings, and those with poor rankings (the U.S. is the only place in the world that ranks its temperature stations). Remarkably, the poorly ranked stations showed no greater temperature increases than the better ones. The most likely explanation is that while low-quality stations may give incorrect absolute temperatures, they still accurately track temperature changes.

When we began our study, we felt that skeptics had raised legitimate issues, and we didn't know what we'd find. Our results turned out to be close to those published by prior groups. We think that means that those groups had truly been very careful in their work, despite their inability to convince some skeptics of that. They managed to avoid bias in their data selection, homogenization and other corrections.

Global warming is real. Perhaps our results will help cool this portion of the climate debate. How much of the warming is due to humans and what will be the likely effects? We made no independent assessment of that.

Mr. Muller is a professor of physics at the University of California, Berkeley, and the author of "Physics for Future Presidents" (W.W. Norton & Co., 2008).

Sunday, January 23, 2011

Four of every 10 rows of U.S. corn now go for fuel, not food



Please see commentary at TradeFlow21.com


Amber Waves of Ethanol. WSJ Editorial
Four of every 10 rows of U.S. corn now go for fuel, not food.
WSJ, Jan 22, 2011
http://online.wsj.com/article/SB10001424052748703396604576088010481315914.html

The global economy is getting back on its feet, but so too is an old enemy: food inflation. The United Nations benchmark index hit a record high last month, raising fears of shortages and higher prices that will hit poor countries hardest. So why is the United States, one of the world's biggest agricultural exporters, devoting more and more of its corn crop to . . . ethanol?

The nearby chart, based on data from the Department of Agriculture, shows the remarkable trend over a decade. In 2001, only 7% of U.S. corn went for ethanol, or about 707 million bushels. By 2010, the ethanol share was 39.4%, or nearly five billion bushels out of total U.S. production of 12.45 billion bushels. Four of every 10 rows of corn now go to produce fuel for American cars or trucks, not food or feed.
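
The editorial's figures are simple to verify with back-of-the-envelope arithmetic (2010 values as quoted in the text, in billions of bushels):

```python
# Check of the USDA figures quoted above: 39.4% of 12.45 billion
# bushels should come to "nearly five billion bushels".
total_corn = 12.45          # total 2010 U.S. corn production
ethanol_share = 0.394       # share going to ethanol
ethanol_corn = total_corn * ethanol_share   # about 4.9 billion bushels
```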

This trend is the deliberate result of policies designed to subsidize ethanol. Note the surge in the middle of the last decade when Congress began to legislate renewable fuel mandates and many states banned MTBE, which had competed with ethanol but ran afoul of the green and corn lobbies.

This carve-out of nearly half of the U.S. corn crop for fuel is increasing even as global food supply is struggling to meet rising demand. U.S. farmers account for about 39% of global corn production and about 16% of that crop is exported, so U.S. corn stocks can influence the world price. Chicago Board of Trade corn March futures recently hit 30-month highs of $6.67 a bushel, up from $4 a bushel a year ago.

Demand from developing nations like China is also playing a role in rising prices, and in our view so is the loose monetary policy of the U.S. Federal Reserve that has increased the price of nearly all commodities traded in dollars.

But reduced corn food supply undoubtedly matters. About 40% of U.S. corn production is used to produce feed for animals. As corn prices rise, beef, poultry and other prices rise, too. The price squeeze has already contributed to the bankruptcy of companies like Texas-based Pilgrim's Pride Corp. and Delaware-based poultry maker Townsends Inc. over the past few years.

This damage coincides with a growing consensus that ethanol achieves none of its alleged policy goals. Ethanol supporters claim the biofuel reduces U.S. dependence on foreign oil and provides a cleaner source of energy. But Cornell University scientist David Pimentel calculates that if the entire U.S. corn crop were devoted to ethanol production, it would satisfy only 4% of U.S. oil consumption.

The Environmental Protection Agency has found that ethanol production has a minimal to negative impact on the environment. Even Al Gore, once an ethanol evangelist, now says his support had more to do with Presidential politics in Iowa and admits the fuel provides little or no environmental gain.

Not that this has changed the politics of ethanol. When consumers didn't buy enough gas last year to meet previous ethanol mandates, the Obama Administration lifted the cap on how much ethanol may be mixed into gasoline to 15% from 10%. Presto! More ethanol "demand." On Friday the EPA greatly expanded the number of cars approved to use the 15% blend. Last month, Congressmen whose constituents benefit from this largesse tucked into the tax bill an extension of the $5 billion tax credit for blending ethanol into gasoline.

At a time when the world will need more corn and grains, it makes no sense to devote scarce farmland to make a fuel that exists only because of taxpayer subsidies and mandates. If food supplies tighten and prices keep rising, such a policy will soon become immoral.

Sunday, January 16, 2011

Can We Boost Demand for Rainfall Insurance in Developing Countries?

Can We Boost Demand for Rainfall Insurance in Developing Countries?
World Bank, Jan 05, 2011
http://blogs.worldbank.org/allaboutfinance/node/634

Ask small farmers in semiarid areas of Africa or India about the most important risk they face and they will tell you that it is drought. In 2003 an Indian insurance company and World Bank experts designed a potential hedging instrument for this type of risk—an insurance contract that pays off on the basis of the rainfall recorded at a local weather station.

The idea of using an index (in this case rainfall) to proxy for losses is not new. In the 1940s Harold Halcrow, then a PhD student at the University of Chicago, wrote his thesis on the use of area yield to insure against crop yield losses. In the past two decades the market to hedge against weather risk has grown, especially in developed economies: citrus farmers can insure against frost, gas companies against warm winters, ski resorts against lack of snow, and couples against rain on their wedding day.

Rainfall insurance in developing countries is typically sold commercially before the start of the growing season in unit sizes as small as $1. To qualify for a payout, there is no need to file a claim: policyholders automatically qualify if the accumulated rainfall by a certain date is below a certain threshold. Figure 1 shows an example of a payout schedule for an insurance policy against drought, with accumulated rainfall on the x-axis and payouts on the y-axis. If rainfall is above the first trigger, the crop has received enough rain; if it is between the first and second triggers, the policyholder receives a payout, the size of which increases with the deficit in rainfall; and if it is below the second trigger, which corresponds to crop failure, the policyholder gets the maximum payout. This product has inspired development agencies around the world, and today at least 36 pilot projects are introducing index insurance in developing countries.
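
The payout schedule just described maps directly into a small piecewise-linear function. The trigger levels and maximum payout below are made-up illustrative values, not those of any actual contract:

```python
# Sketch of an index-insurance payout rule: no claim filing needed,
# the payout is a function of recorded accumulated rainfall alone.
# Triggers and cap are hypothetical illustrative values.

def payout(rainfall_mm, first_trigger=100.0, second_trigger=40.0,
           max_payout=1000.0):
    """Payout rises linearly as accumulated rainfall falls below the
    first trigger, and is capped at max_payout once rainfall reaches
    the second trigger (crop failure)."""
    if rainfall_mm >= first_trigger:
        return 0.0                      # enough rain: no payout
    if rainfall_mm <= second_trigger:
        return max_payout               # crop failure: maximum payout
    # Linear interpolation between the two triggers.
    deficit = first_trigger - rainfall_mm
    return max_payout * deficit / (first_trigger - second_trigger)
```

With these illustrative triggers, 120 mm of rain pays nothing, 70 mm pays half the maximum, and 30 mm pays the full amount.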



Figure 1. Example of a Payout Schedule for an Insurance Policy against Drought


Yet despite the potentially large welfare benefits, take-up of the product has been disappointingly low. Explanations for this low demand abound. The first and obvious reason is that the product is too expensive relative to the risk coping strategies now used by the farmers. After all, when it is not heavily subsidized (as it is in several states in India), average payouts, which are based on historical rainfall data, amount to about 30–40 percent of the premiums. In a recent paper several coauthors and I estimate that if insurance could be offered with payout ratios similar to those of U.S. insurance contracts, demand would increase by 25–50 percent. But even if prices were close to actuarially fair, demand would not come close to universal participation. So the price cannot be the whole story.

Another explanation is based on liquidity constraints: farmers purchase insurance at the start of the growing season, when there are many competing uses for the limited cash available. In the same paper we randomly assign certain households enough cash to buy one policy and find that this increases take-up by 150 percent of the baseline take-up rate. This effect is several times as large as the effect of cutting the price of the product by half and is concentrated among poor households, which are likely to have less access to the financial system.

In addition, potential buyers may not fully trust the product. Unlike credit, which requires that the lender trust the borrower to repay the loan, insurance requires that the client trust the provider to honor its promise in case of a payout. We measure the importance of trust by varying whether or not the insurance educator visiting households is endorsed by a trusted local agent during the visit. Demand is 36 percent higher when the insurance is offered by a source the household trusts. Trust may be particularly important because many households have only limited numeracy and financial literacy, which is likely to reduce their ability to independently evaluate the insurance.

These results point to several possible improvements in contract design. For example, the trust issue might be overcome by designing a product that pays often initially, since it is easier to sell insurance where a past payout has occurred. Liquidity constraints might be eased by ensuring that payouts are disbursed quickly or by offering loans to pay the premium. Finally, agricultural loans could be bundled with insurance, creating what is in effect a contingent loan, with the amount to be repaid depending on the amount of rainfall. This product was tested in a pilot in Malawi, and to our surprise demand for the bundled loan (17.6 percent uptake) was lower than that for a regular loan (33 percent). The reason may have been that the lender’s inability to penalize defaulting borrowers (in part, because of lack of collateral) was already providing implicit insurance and so farmers did not value the insurance policy.

What is remarkable about the Malawi experience is that after the pilot the lenders decided to bundle all agricultural loans with insurance. In their view, rainfall insurance had proved to be an attractive way to reduce the risk of credit default and had the potential to increase access to agricultural credit at lower prices.

The insurance covers only the loans. But informal discussions with borrowers suggest that they remain largely unaware that the loans are insured. Banks, however, may have reason not to tell borrowers about the insurance: if they did, borrowers would need to know the exact amount of the payout (if any) to compute what they need to repay to the bank. In other words, uncertainty about the payout can undermine the culture of repayment. This happened in the Malawi pilot. One region of the pilot experienced a mild drought that triggered only a small payout. But because farmers were told that there had been a payout, they assumed that it covered the entire repayment amount and thus defaulted on their loans.

This example suggests that where financial literacy and understanding of the product are limited, insurance policies could instead be targeted to a group—such as an entire village, a producer group, or a cooperative—rather than to individuals. The decision to purchase insurance would be made by the group’s managers, who are likely to be more educated and more familiar with financial products than other group members and may also be less financially constrained. The group could then decide ahead of time how best to allocate funds among its members in case of a payout.


Further reading
Giné, X., R. M. Townsend, and J. Vickery. 2007. “Statistical Analysis of Rainfall Insurance Payouts in Southern India.” American Journal of Agricultural Economics 89 (5): 1248–54.
Giné, X., R. Townsend, and J. Vickery. 2008. “Patterns of Rainfall Insurance Participation in Rural India.” World Bank Economic Review 22 (3): 539–66.
Giné, X., and D. Yang. 2009. “Insurance, Credit, and Technology Adoption: Field Experimental Evidence from Malawi.” Journal of Development Economics 89 (1): 1–11.
Cole, S., X. Giné, J. Tobacman, P. Topalova, R. Townsend, and J. Vickery. 2010. “Barriers to Household Risk Management: Evidence from India.” Policy Research Working Paper 5504, World Bank, Washington, DC.

Thursday, December 30, 2010

Macro-prudential regulation and the false promise of Basel III

Financial regulation goes global - Risks for the world economy
Legatum Institute
http://www.li.com/attachments/20101228_LegatumInstitute_FinancialRegulationGoesGlobal.pdf
Dec 29, 2010

Excerpts with footnotes:

4. How internationalised regulation can lead to a new crisis

We are witnessing a movement towards tighter regulation of world financial markets and also towards regulation that is more closely harmonised across the leading industrial economies. That is no accident, as the G20 communiqué pledged that:
“We each agree to ensure our domestic regulatory systems are strong. But we also agree to establish the much greater consistency and systematic cooperation between countries, and the framework of internationally agreed high standards, that a global financial system requires.”
Policymakers seem to believe that insufficient regulation, not just ineffective regulation, is to blame for the financial crisis. Moreover, they also want regulations to be more consistent across different countries and intend to further internationalise financial regulation.

However, there are a number of weaknesses, in principle and practice, with the regulations that have been proposed, that might mean they exacerbate future periods of boom and bust.

4.1 Global regulations create global crises

The central argument in favour of supranational regulation is the possibility of financial contagion. Policymakers do not want their own financial systems put at risk by regulatory failures elsewhere. However, with the present crisis emerging in major developed economies, it is hard to justify the sudden focus on the possibility of contagion. Many countries, such as Canada, did maintain stable financial systems despite collapses elsewhere. The contagion from the subprime crisis in the United States was a serious problem only because financial sectors in other major economies had made similar mistakes and become very vulnerable.

To be sure, an economy will suffer if its trading partners get into trouble. There will be a smaller market for their exports, imports might become more expensive or more difficult to get hold of, and supply chains can be disrupted. But that can happen for a range of reasons: a bad harvest, war, internal political strife, a recession not driven by a financial crisis. The financial sector is not unique in that regard.

There is also concern about a “race to the bottom”. As Stephen G. Cecchetti – Economic Adviser and Head of Monetary and Economic Department at the Bank for International Settlements – wrote, it is felt to be necessary to “make sure national authorities are confident that they will not be punished for their openness”.18 Concerns that countries will be punished for proper regulation are overblown. There are powerful network effects in financial services that mean many institutions are located in places like New York, London and Frankfurt despite those locations having high costs. While smaller institutions like hedge funds may move more lightly, big banks and other systemically important institutions need to be located in a major financial centre. At the same time, they do attach some importance to a reliable financial system. Countries are more likely to be punished for bad policy – e.g. the new 50 percent top tax rate in the United Kingdom – than for measures genuinely necessary to ensure financial stability.

At the same time, the coordination of regulatory policies creates new risks and exacerbates crises. Common capital adequacy rules, while increasing transparency, also encourage homogeneity in investment strategy and risk-taking, leading to a high concentration of risk. That means that global regulations can be dangerous because they increase the amplitude of global credit cycles. If every country is in phase, systemic risk is higher than in situations where there are offsetting, out-of-phase credit booms and busts in individual countries. The situation is akin to a monoculture: a lack of diversity makes the whole crop more vulnerable.

The Basel rules apply a similar risk assessment framework across a broad range of institutions, which encourages them to hold similar assets and to respond in similar ways in a crisis.19 Consequently, instead of increasing the diversification of assets and minimising risk, herd behaviour is amplified.20

The recession that followed the financial crisis was undoubtedly sharper because it was global. That meant countries were hit simultaneously by their own crisis and a fall in global demand hurting export industries. There were also more simultaneous pressures on global financial institutions. Global regulations, reducing diversity in investment decisions and behaviour in a crisis, will tend to produce global crises when they go wrong. As a result, internationalising regulations increases the danger to the world economy.

The objective should be to strike a proper balance between standardisation and diversity in regulation. Unfortunately, politicians have incentives to go too far in standardising. Those in countries with burdensome regulations are tempted to force others to adopt equally burdensome measures, in order to prevent yardstick competition and limit the ability of firms and individuals to vote with their feet. A well-known example is the attempt to curb tax competition by organisations such as the OECD and the European Union. Finally, for some politicians, international summits are simply more comfortable than messy, democratic domestic politics.

4.2 Macro-prudential regulation and the false promise of Basel III

The economics profession’s understanding of the role of financial regulation is shifting from an insistence on micro-prudential regulation towards measures that take into account the systemic risks involved in finance. The new paradigm suggests that a policy approach that tries to make the system safe by making each individual financial institution safe is doomed to fail, because of the endogenous nature of risk and the interactions between different financial institutions.21

Many of the proposed regulatory changes seem to be inspired – at least in part – by the idea that macro-prudential regulation will require a move away from a regulatory regime that does not take into account the endogenous nature of risk. Unfortunately, the form that the international harmonisation of regimes of financial regulation is taking fails to mitigate excessive leverage in good economic times.

A related question is whether the endogenous nature of risk allows this new regulatory paradigm to succeed at all. Most importantly, caring about systemic risk requires the regulator to identify – explicitly or implicitly – those financial institutions that are systemically important, either individually or in “herds”. If this information can be discovered by the banks, or becomes common knowledge, systemically important institutions will know that they will not be allowed to fail. This creates a large moral hazard problem and could represent a key structural flaw that compromises the whole idea of macro-prudential financial regulation.

At the same time, there might be no need to shift regulation in the macro-prudential direction at all, especially if the crisis was the result of regulatory and policy failure, as set out in Section 1. Policymakers would simply need to abstain from policies similar to those that fuelled the boom leading to this crisis. Of course, a greater need for macro-prudential policy and the avoidance of specific regulatory and policy failures are not mutually exclusive. It is easy to imagine a regulatory environment that combines greater attention to the macroeconomic dimension of financial markets, a more prudent monetary policy that turns contractionary during periods of rapid economic expansion, and sectoral policies that do not encourage asset bubbles.22

However, the regulation of financial markets is taking a path that could exacerbate future booms and busts – in sharp contrast both to the declared intentions of policymakers and to the underlying idea of macro-prudential regulation.

Our criticism of the Basel rules and of the harmonisation of financial regulation needs to be distinguished sharply from the concerns raised by the banking community, which usually centre on the costs of raising capital adequacy standards. The Institute of International Finance, for instance, has conducted a study of the effects of likely regulatory reform on the broader economy.23 The models used in the study follow a relatively simple logic: higher capital ratios require banks to raise more capital, putting upward pressure on the cost of capital. This in turn increases lending rates and reduces the aggregate supply of credit to the economy, lowering aggregate employment and GDP.

On that basis, the study estimates the cost of adopting the full regulatory reform at an average of about 0.6 percentage points of GDP over the period 2011-2015, and an average of about 0.3 percentage points of GDP over the ten-year period 2011-2020. With a different set of assumptions, the Basel Committee estimates the costs to be much smaller. But whether this is a cost worth bearing depends on what the regulatory reform would achieve. If the output loss is the price to pay for an adequate reduction in the likelihood of future crises – and in the amplitude of business cycles – then it might be worth paying. Unfortunately, the regulatory reform we are likely to get is unlikely to achieve that.

Firstly, in spite of claims to the contrary, much of the re-regulation simply increases the procyclicality that was characteristic of banking regulation under Basel II. Basel III raises the minimum requirement for Tier 1 capital to 6 percent and the required share of common equity to a total of 7.0 percent, and on top of that it introduces a countercyclical buffer of 0-2.5 percent. Yet that buffer cannot offset the procyclical effect of the increased capital requirements.

We should stress that the problem with the Basel III rules is not the absolute size of the capital adequacy requirements but the fact that they are based on borrowers’ default risk. Riskier assets must be backed by a larger capital buffer than less risky ones. In times of crisis, the measured riskiness of lending increases, and banks therefore have an incentive to increase the amount of capital they hold relative to the total size of their risk-weighted assets. An extreme reaction to an economic downturn would be to dump the riskier assets on the financial market in the hope of restoring the required capital adequacy ratio, exacerbating the downturn and possibly triggering a credit crunch. Conversely, in good economic times, when the measured riskiness of individual loans has decreased, banks will be tempted to hold less capital relative to their other assets and thus to fuel a potential lending boom.
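A stylised sketch of this mechanism, with hypothetical risk weights and a hypothetical 6 percent minimum rather than the actual Basel calibration:

```python
# Hypothetical sketch of risk-weighted procyclicality. The loan book,
# the risk weights and the 6% minimum are illustrative numbers only.

def required_capital(assets, risk_weights, min_ratio=0.06):
    """Minimum capital: min_ratio times risk-weighted assets."""
    rwa = sum(a * w for a, w in zip(assets, risk_weights))
    return min_ratio * rwa

loan_book = [100.0, 50.0, 30.0]   # three asset classes, in billions

# Boom: measured risk is low, so little capital is required.
boom_req = required_capital(loan_book, [0.2, 0.5, 1.0])   # ~4.5

# Bust: the very same loans are re-rated as riskier, so the requirement
# nearly doubles even though the bank has bought nothing new. Shedding
# the high-weight assets is the quickest route back to compliance.
bust_req = required_capital(loan_book, [0.5, 1.0, 1.5])   # ~8.7
```

The jump in required capital comes entirely from re-measured risk weights, which is why the incentive to sell risky assets is strongest exactly when markets are weakest.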

A related issue is that current measures of risk – which form the basis of the risk-weighted capital adequacy rules – are highly imperfect. In a nutshell, highly-rated assets can be leveraged much more heavily than riskier assets, which is a problem if those ratings are not accurate. Lending to triple-A-rated sovereigns still carries a risk weight of zero, yet as the present fiscal crisis in Europe suggests, exposure to triple-A-rated debt is certainly not risk free. Basel III complements the capital adequacy rules with simple – not risk-weighted – leverage ratio limits. However, looking at past data, there is little reason to believe that these will be effective in preventing future crises: both risk-adjusted and simple balance-sheet leverage ratios showed stable bank leverage until the onset of the crisis.24
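The gap between the two measures is easy to illustrate with a hypothetical balance sheet (all figures invented for the example):

```python
# Hypothetical balance sheet, in billions. With a zero risk weight on
# triple-A sovereign debt, the risk-weighted ratio looks comfortable
# while the simple leverage ratio exposes a thin equity cushion.

capital = 3.0
sovereign_debt = 180.0    # risk weight 0.0 (triple-A sovereigns)
corporate_loans = 20.0    # risk weight 1.0

rwa = sovereign_debt * 0.0 + corporate_loans * 1.0
total_assets = sovereign_debt + corporate_loans

risk_weighted_ratio = capital / rwa             # 0.15 -> a "safe" 15%
simple_leverage_ratio = capital / total_assets  # 0.015 -> leverage of ~67x
```

A bank loaded with zero-weight assets can thus report a strong risk-weighted ratio while remaining extremely leveraged in the plain balance-sheet sense, which is precisely the blind spot the simple leverage limit is meant to close.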

Similarly, mark-to-market valuation practices are very problematic for assets where markets have become illiquid, and yield valuations that are both very low and uncertain. In times of crisis, this can give rise to serious consequences for companies that report mark-to-market valuations on their balance sheets. For that reason, mark-to-market valuations can exacerbate the effects of economic downturns.

Furthermore, Basel III contains new, stricter definitions of common equity, Tier 1 capital and capital at large. In principle, there is nothing wrong with being pickier in selecting the capital to use as a buffer when running a bank. It might indeed be prudent to count only common stock, and not the preferred stock and/or debt-equity hybrids permissible under Basel II. However, imposing a common notion of capital on banks and financial institutions worldwide is likely to make their portfolios more similar, and will therefore increase the co-movement of their liquidity – or lack thereof – at any given point in time.

A common definition of capital and a similar composition of bank capital across the world will also create incentives for regulators to synchronise their monitoring. Such moves are already under way within the EU – especially in light of the establishment of common institutions for financial regulation – despite the fact that business cycles in different parts of Europe are not synchronised.

Finally, we should recognise that tighter financial regulation has unintended consequences. In the past, we have witnessed companies moving complex, highly leveraged instruments off their balance sheets. Much financial activity migrated – both geographically and across sectors – to areas that were less heavily regulated: away from the banking industry into, say, hedge funds, and towards jurisdictions friendlier to the financial industry. According to the Financial Times25, in the past two years almost 1,000 hedge fund employees have moved from the UK to Swiss cantons, seeking regulatory and fiscal predictability. Insofar as the move towards harmonised financial regulation is imperfect – and so long as there remain jurisdictions and areas of finance that are regulated less heavily – financial activity will relocate towards those jurisdictions and areas. The corollary is that overly tight regulation can create a situation in which much actual financial activity takes place outside the government supervision intended to curb its alleged excesses.

4.3 Crisis as alibi, symbolic politics

Many of the measures on the G20 agenda are completely irrelevant to any ambition to mitigate systemic risks in the world economy. For instance, the idea that “tax havens” and banking secrecy contributed to the financial crisis is completely unfounded. If anything, tax competition could curb some of the excesses of big, fiscally irresponsible welfare states by making it difficult for governments to impose overly onerous fiscal burdens on mobile tax bases. It is thus clear that for politicians in high-tax countries, the present crisis has served as an alibi to push through a variety of measures which they have long had an interest in implementing but for which they lacked a plausible justification.26

In many respects, the regulation of short-selling is similar. Short-selling cannot be blamed for the financial crisis, just as it cannot be blamed for the Greek debt crisis earlier this year. Indeed, short-selling is critical to incorporating new – often pessimistic – information about an asset into its market price. Enabling European regulators to prohibit short-selling in specific situations – presumably when doubts arise about the ability of a European country to repay its debt obligations – will do nothing to address the underlying problem of fiscal irresponsibility. It is an illustration of a mentality that treats shooting the messenger as an appropriate response to the fiscal problems of the Eurozone. The direct cost of this policy is that it will introduce noise into the functioning of financial markets and make them process new information less efficiently.

Besides taxation and short-selling, there have been coordinated moves to regulate hedge funds, both in the United States and in Europe. While this might make sense from a macro-prudential perspective, particularly if some hedge funds are of systemic importance, we should recognise that hedge funds were the victims, not the perpetrators, of the recent crisis. More broadly, a series of measures that governments have long been eager to take, and for which the crisis provided a convenient ad hoc justification, are now part of the coordinated re-regulation of financial markets in the United States and in Europe. This includes, for instance, the creation of systemic risk boards – as if the creation of such institutions were in itself an improvement over the present situation. Creating a new bureau does not endow regulators with a superior model of the economy, and it certainly does not mean that they will make better forecasts than the regulators of the past.

Likewise, the creation of consumer protection boards is unlikely to have a significant effect beyond creating a false sense of security among the general public. After all, the crisis was not caused by uninformed consumers falling prey to, say, credit card companies. While instances of individuals making bad decisions about their indebtedness certainly exist, these were in most cases a rational response to the wider institutional environment in which they were operating – one which made it worthwhile, for instance, to use one’s house as a piggy bank. Furthermore, there is evidence that some measures aimed at protecting consumers can in fact exacerbate moral hazard and strengthen the incentives for irresponsible behaviour.27

Finally, the issue of executive pay is high on the list of priorities for policymakers across the globe, again without a credible explanation of how regulating it would help prevent future crises. Major proponents of macro-prudential regulation – such as the authors of the Geneva report – argue that there is very little reason for regulators to get involved in private firms’ decisions over executive compensation. Rather, as Charles Wyplosz puts it, “macro-prudential regulation will push banks to develop incentive packages that are more encouraging of longer-term behaviour.”28



Footnotes:

18 Cecchetti, S. G. “Financial reform: a progress report.” Remarks prepared for the Westminster Economic Forum, National Institute of Economic and Social Research, 4 October 2010.
19 Eatwell, J. The New International Financial Architecture: Promise or Threat? Cambridge Endowment for Research in Finance, 22 May 2002.
20 Daníelsson, J. & J.-P. Zigrand. What Happens when You Regulate Risk? Evidence from a Simple Equilibrium Model. April 2003.
21 For an exposition of the ideas behind this approach to financial regulation see Hanson, Kashyap and Stein (2010): “A Macroprudential Approach to Financial Regulation.” Journal of Economic Perspectives, forthcoming.
22 In this endeavour, targeting nominal GDP instead of inflation might be instrumental, as Scott Sumner, David Beckworth, George Selgin and others have argued.
23 IIF (2010). Interim Report on the Cumulative Impact on the Global Economy of Proposed Changes in the Banking Regulatory Framework. http://www.ebf-fbe.eu/uploads/10-Interim%20NCI_June2010_Web.pdf
24 See Joint FSF-CGFS Working Group (2009). The role of valuation and leverage in procyclicality. http://www.bis.org/publ/cgfs34.htm
25 FT. “Hedge funds managers seek predictability.” October 1, 2010. Available at: http://www.ft.com/cms/s/0/557f55d4-cd93-11df-9c82-00144feab49a.html
26 Indeed, the OECD has been running its program on harmful tax practices since 1998.
27 We discuss the specific case of the CARD Act in the United States in Rohac, D. (2010). “The high costs of consumer protection.” The Washington Times, September 3, 2010.
28 Wyplosz, C. (2009). “The ICMB-CEPR Geneva Report: ‘The future of financial regulation.’” VoxEU, January 27, 2009. http://www.voxeu.org/index.php?q=node/2872

Monday, November 29, 2010

New derivatives rules could punish firms that pose no systemic risk

The Hangover, Part II. WSJ Editorial
New derivatives rules could punish firms that pose no systemic risk.
WSJ, Nov 29, 2010
http://online.wsj.com/article/SB10001424052748704104104575622583155296368.html

Not even Mel Gibson would want a role in this political sequel. Readers will recall the true story of Congressman Barney Frank and Senator Chris Dodd, two pals who stayed up all night rewriting derivatives legislation.

The plot centered on the comedy premise that two Beltway buddies would quickly restructure multi-trillion-dollar markets to present their friend, President Barack Obama, with an apparent achievement before a G-8 meeting. As in the movies, the slapstick duo finished rewriting their bill just in time for the big meeting in Toronto last June.

But after the pair completed their mad-cap all-nighter, no hilarity ensued. That's because Main Street companies that had nothing to do with the financial crisis woke up to find billions of dollars in potential new costs. The threat was new authority for regulators to require higher margins on various financial contracts, even for small companies that nobody considers a systemic risk. The new rules could apply to companies that aren't speculating but are simply trying to protect against business risks, such as a sudden price hike in a critical raw material.

Businesses with good credit that have never had trouble off-loading such risks might have to put up additional cash at the whim of Washington bureaucrats, or simply hold on to the risks, making their businesses less competitive. Companies that make machine tools, for example, want to focus on making machine tools, not on the fluctuations of interest rates or the value of a foreign customer's local currency. So companies pay someone else to manage these risks. But Washington threatens to make that process much more costly.

Messrs. Frank and Dodd responded to the uproar first by suggesting that the problem could be fixed later in a "corrections" bill and then by denying the problem existed. Both proclaimed that their bill did not saddle commercial companies with new margin rules. But as we noted last summer, comments from the bill's authors cannot trump the language of the law.

Flash forward to today, and the Commodity Futures Trading Commission (CFTC) is drafting its new rules for swaps, the common derivatives contracts in which two parties exchange risks, such as trading fixed for floating interest rates. We're told that CFTC Chairman Gary Gensler has said privately that his agency now has the power to hit Main Street with new margin requirements, not just Wall Street.

Main Street companies that use these contracts are known as end-users. When we asked the CFTC if Mr. Gensler believes regulators can require swap dealers to demand margin from all end-users, a spokesman said, "It would be premature to say that a rule would include such a requirement or that the Chairman supports such a requirement."

It may only be premature until next month, when the CFTC is expected to issue its draft rules. While the commission doesn't have jurisdiction over the entire swaps market, other financial regulators are expected to follow its lead. Mr. Gensler, a Clinton Administration and Goldman Sachs alum, may not understand the impact of his actions outside of Washington and Wall Street.

In a sequel to the Dodd-Frank all-nighter, the law requires regulators to remake financial markets in a rush. CFTC Commissioner Michael Dunn said recently that to comply with Dodd-Frank, the commission may need to write 100 new regulations by next July.

"In my opinion it takes about three years to really promulgate a rule," he said, according to Bloomberg News. Congress instructed us to "forget what's physically possible," he added. The commission can't really use this impossible schedule as an excuse because Mr. Gensler had as much impact as anyone in crafting the derivatives provisions in Dodd-Frank. No surprise, the bill vastly expands his agency's regulatory turf.

And if anyone can pull off a complete overhaul of multi-trillion-dollar markets in a mere eight months, it must be the CFTC.

Just kidding. An internal CFTC report says that communication problems between the CFTC's enforcement and market oversight divisions "impede the overall effectiveness of the commission's efforts to not only detect and prevent, but in certain circumstances, to take enforcement action against market manipulation." The report adds that the commission's two primary surveillance programs use incompatible software. Speaking generally and not in response to the report, Mr. Gensler says that the agency is "trying to move more toward the use of 21st century computers," though he warns that "it's a multiyear process." No doubt.

The CFTC report also noted that "the staff has no standard protocol for documenting their work." If we were tasked with restructuring a complex trading market to conform to the vision of Chris Dodd and Barney Frank, we wouldn't want our fingerprints on it either.

The report was completed in 2009 but only became public this month thanks to a Freedom of Information Act request from our colleagues at Dow Jones Newswires. Would Messrs. Dodd and Frank have responded differently to Mr. Gensler's power grab if they had realized how much trouble the CFTC was having fulfilling its traditional mission? We doubt it, but it certainly would have made their reform a tougher sell, even to the Washington press corps.

Congress should scrutinize this process that is all but guaranteed to result in ill-considered, poorly crafted regulation. In January, legislators should start acting, not like buddies pulling all-nighters, but like adults who understand it's their job to make the tough calls, rather than kicking them over to the bureaucracy with an arbitrary deadline.

Saturday, October 30, 2010

Utopia, With Tears - A review of Fruitlands, by Richard Francis

Utopia, With Tears. By ALEXANDRA MULLEN
No meat, no wool, no coffee or candles to read by, but plenty of high aspirations—and trouble.
A review of Fruitlands, by Richard Francis (Yale University Press, 321 pages, $30)

WSJ, Friday, October 29, 2010
http://online.wsj.com/article/SB10001424052702304173704575578761068904960.html


In 1843, in the quiet middle of Massachusetts, a group of high-minded people set out to create a new Eden they called Fruitlands. The embryonic community miscarried, lasting only seven months, from June to January. Fruitlands now has a new chronicler in Richard Francis, a historian of 19th-century America. "This is the story," he writes, "of one of history's most unsuccessful utopias ever—but also one of the most dramatic and significant." As we learn in his thorough and occasionally hilarious account, the claim is about half right.

The utopian community of Fruitlands had two progenitors: the American idealist Bronson Alcott and the English socialist Charles Lane. Alcott was a farm boy from Connecticut who had turned from the plough to philosophy. According to Ralph Waldo Emerson, his friend, Alcott could not chat about anything "less than A New Solar System & the prospective Education in the nebulae." Airy as his thoughts were, Alcott could be a mesmerizing speaker. Indeed, his words partly inspired an experimental community in England, where he met Lane.

Lane has often been considered the junior partner in the Fruitlands story, merely the guy who put up the money (for roughly 100 acres, only 11 of which were arable). But Mr. Francis fleshes him out, showing him to be a tidier and more bitter thinker than Alcott, with a practical streak that could be overrun by his hopes for humanity.

As Mr. Francis notes, Alcott and Lane shared a "tendency to take moderation to excess," pushing their first principles as far as they could go. One such principle was that you should do no harm to living things, including plants. As Mr. Francis explains: "If you cut a cabbage or lift a potato you kill the plant itself, just as you kill an animal in order to eat its meat. But pluck an apple, and you leave the tree intact and healthy."

The Fruitlands community never numbered more than 14 souls, five of them children. The members included a nudist, a former inmate of an insane asylum, and a man who had once gotten into a knife fight to defend his right to wear a beard. Then there was the fellow who thought swearing elevated the spirit. He would greet the Alcott girls: "Good morning, damn you." Lane thought the members should be celibate; Alcott's wife, Abigail, the mother of his four daughters and the sole permanent woman resident, was a living reproach to this view.

All of the Fruitlands members, however, agreed to certain restrictions: no meat or fish; in fact, nothing that came from animals, so no eggs and no milk. No leather or wool, and no whale oil for lamps or candles made from tallow (rendered animal fat). No stimulants such as coffee or tea, and no alcohol. Because the Fruitlanders were Abolitionists, cane sugar and cotton were forbidden (slave labor produced both). The members of the community wore linen clothes and canvas shoes. The library was stocked with a thousand books, but no one could read them after dark.

And how did the whole experiment go? Well, most of the men at Fruitlands had little farming experience. Alcott, who did, impressed Lane with his ability to plow a straight furrow; but Alcott was always a better talker than worker. The community rejected animal labor—and even manure, a serious disadvantage if you want to produce enough food to be self-sufficient. The farming side of Fruitlands was a dud.

But the experiment was indeed, as Mr. Francis claims, "dramatic." The drama came from a common revolutionary trajectory in which "a group of idealists ends by trying to destroy each other." "Of spiritual ties she knows nothing," Lane wrote of Abigail. "All Mr. Lane's efforts have been to disunite us," she confided to a friend, referring to her relations with Bronson. Even the usually serene Bronson agonized: "Can a man act continually for the universal end," he asked Lane, "while he cohabits with a wife?" By Christmas, which he spent in Boston, Bronson seemed on the verge of dissolving his family. In the new year he returned to Fruitlands, but he had a breakdown. This was no way to run a utopia, and the experiment ended.

Was Fruitlands "significant"? In Mr. Francis's reading, the community "intuited the interconnectedness of all living things." That intuition, he believes, underlies our notions of the evils of pollution and the imminence of environmental catastrophe, as well as our concerns about industrialized farming. The Fruitlanders' understanding of the world, he argues, helped create a parallel universe—an alternative to scientific empiricism—that is still humming along in the current day.

Perhaps so. Certainly many New Age and holistic notions, in their fuzzy and well-meaning romanticism, share a common ancestor with the Fruitlands outlook. But the result is not always benign. It was the Fruitlanders' belief, for instance, that "all disease originates in the soul." One descendant of this idea is the current loathsome view that cancer is caused by bad thoughts.

Though obviously sympathetic to the Fruitlands experiment, Mr. Francis gives us enough facts to let us draw our own conclusions. He records Bronson and Abigail's acts of charity, already familiar to us from their daughter Louisa's novel "Little Women" (1868). But he also retells less admiring stories, of their petty vindictiveness and casual callousness. Along the way he adumbrates the ways in which idealism can slide into megalomania.

Mr. Francis reports a conversation that Alcott once had with Henry James Sr., the father of the novelist Henry and the philosopher William. Alcott let it drop that he, like Jesus and Pythagoras before him, had never sinned. James asked whether Alcott had ever said, "I am the Resurrection and the Life." "Yes, often," Alcott replied. Unfortunately, Mr. Francis fails to record James's rejoinder: "And has anyone ever believed you?"

Ms. Mullen writes for the Barnes & Noble Review.

Friday, October 22, 2010

High costs of making batteries stall affordability of electric cars

High costs of making batteries stall affordability of electric cars. By Mike Ramsey
The Wall Street Journal Europe, page 22, Oct 19, 2010
http://online.wsj.com/article/SB40001424052748703735804575536242934528502.html

The push to get electric cars on the road is backed by governments and auto makers around the world, but they face a hurdle that may be tough to overcome: the stubbornly high cost of the giant battery packs that power the vehicles.

Both the industry and government are betting that a quick takeoff in electric-car sales will drive down the price of the battery packs, which can account for more than half the cost of an electric vehicle.

But a number of scientists and automotive engineers believe cost reductions will be hard to come by. Unlike with tires or toasters, battery packs aren't likely to enjoy traditional economies of scale as their makers ramp up production.

Some experts say that increased production of batteries means the price of the key metals used in their manufacture will remain steady—or maybe even rise—at least in the short term.

These experts also say the price of the electronic parts used in battery packs as well as the enclosures that house the batteries aren't likely to decline appreciably.

The U.S. Department of Energy has set a goal of cutting car-battery costs by 2014 to 70% below last year's price, which it estimated at $1,000 per kilowatt hour of battery capacity.

Jay Whitacre, a battery researcher and technology policy analyst at Carnegie Mellon University, is skeptical. The government's goals "are aggressive and worth striving for, but they are not attainable in the next three to five years," he said in an interview. "It will be a decade at least" before that price reduction is reached.
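The arithmetic behind the DOE goal, using the cited $1,000-per-kilowatt-hour baseline (the 24 kWh pack size is an illustrative, roughly Leaf-class figure, not from the article):

```python
# DOE goal: cut costs 70% from the ~$1,000/kWh estimate by 2014.
baseline = 1000.0            # $/kWh, last year's estimated price
target = baseline * 0.30     # a 70% cut leaves 30% of the baseline
pack_kwh = 24                # illustrative pack size

print(target)                # 300.0 $/kWh
print(target * pack_kwh)     # 7200.0 -> a ~$7,200 pack
```

At that target, a pack of this size would cost roughly $7,200, versus the roughly $15,600 industry estimate for the Leaf's pack cited below.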

The high cost of batteries is evident in the prices set for early electric cars. Nissan Motor Co.'s Leaf, due in the U.S. in December, is priced at $33,000. Current industry estimates say its battery pack alone costs Nissan about $15,600.

That cost will make it difficult for the Leaf to turn a profit. And it also may make the Leaf a tough sell, since even with government tax breaks the car will cost more than twice the $13,520 starting price of the similar-size Nissan Versa hatchback.

Nissan won't comment on the price of the battery packs, other than to say that the first versions of the Leaf won't make money. Only later, when the company begins mass-producing the battery units in 2013, will the car be profitable, according to Nissan.

The Japanese company believes it can cut battery costs through manufacturing scale. It is building a plant in Smyrna, Tenn., that will have the capacity to assemble up to 200,000 packs a year.

Other proponents of electric vehicles agree that battery costs will fall as production ramps up. "They will come down by a factor of two, if not more, in the next five years," said David Vieau, chief executive officer of A123 Systems, a start-up that recently opened a battery plant in Plymouth, Mich.

Alex Molinaroli, president of Johnson Controls Inc.'s battery division, is confident it can reduce the cost of making batteries by 50% in the next five years, though the company won't say what today's cost is. The cost reduction by one of the world's biggest car-battery makers will mostly come from efficient factory management, cutting waste and other management-related costs, not from fundamental improvement of battery technology, he said.

But researchers such as Mr. Whitacre, the National Academies of Science and even some car makers aren't convinced, mainly because more than 30% of the cost of the batteries comes from metals such as nickel, manganese and cobalt. (Lithium makes up only a small portion of the metals in the batteries.)

Prices for these metals, which are set on commodities markets, aren't expected to fall with increasing battery production—and may even rise as demand grows, according to a study by the Academies of Science released earlier this year and engineers familiar with battery production.

Lithium-ion battery cells already are mass-produced for computers and cellphones, and the costs of those batteries fell 35% from 2000 through 2008—but they haven't come down much in recent years, according to the Academies of Science study.

The Academies and Toyota Motor Corp. have publicly said they don't think the Department of Energy goals are achievable and that cost reductions are likely to be far smaller. It likely will be 20 years before costs fall by 50%—not the three or so years the DOE projects—according to an Academies council studying battery costs. The council was made up of nearly a dozen researchers in the battery field.

"Economies of scale are often cited as a factor that can drive down costs, but hundreds of millions to billions of ... [battery] cells already are being produced in optimized factories. Building more factories is unlikely to have a great impact on costs," the Academies report said.

The report added that the cost of the battery-pack enclosure that holds the cells is a major portion of the total battery-pack cost, and isn't likely to come down much. In addition, battery packs include electronic sensors and controls that regulate the voltage moving through and the heat being generated by the cells. Since those electronics already are mass-produced commodities, their prices may not fall much with higher production, the study said.

Lastly, the labor involved in assembling battery packs is expensive: employees work in a high-voltage environment and so need more training than traditional factory staff. That means labor costs are unlikely to drop, said a senior executive at one battery manufacturer.

When car makers began using nickel-metal hydride batteries, an older technology, in their early hybrid vehicles, the cost of the packs fell only 11% from 2000 to 2006 and has seen little change since, according to the Academies study.

Toyota executives, including Takeshi Uchiyamada, global chief of engineering, say their experience with nickel-metal hydride batteries makes them skeptical that lithium-ion battery-pack prices will fall substantially.

"The cost reductions aren't attainable even in the next 10 years," said Menahem Anderman, principal of Total Battery Consulting Inc., a California-based battery research firm. "We still don't know how much it will cost to make sure the batteries meet reliability, safety and durability standards. And now we are trying to reduce costs, which automatically affect those first three things."