
Tuesday, January 31, 2012

Macroeconomic and Welfare Costs of U.S. Fiscal Imbalances

Macroeconomic and Welfare Costs of U.S. Fiscal Imbalances. By Bertrand Gruss and Jose L. Torres
IMF Working Paper No. 12/38
http://www.imf.org/external/pubs/cat/longres.aspx?sk=25691.0

Summary: In this paper we use a general equilibrium model with heterogeneous agents to assess the macroeconomic and welfare consequences in the United States of alternative fiscal policies over the medium term. We find that failing to address the fiscal imbalances associated with current federal fiscal policies for a prolonged period would result in a significant crowding-out of private investment and a severe drag on growth. Compared to adopting a reform that gradually reduces federal debt to its pre-crisis level, postponing debt stabilization for two decades would entail a permanent output loss of about 17 percent and a welfare loss of almost 7 percent of lifetime consumption. Moreover, the long-run welfare gains from the adjustment would more than compensate for the initial losses associated with the consolidation period.

The authors start the paper this way:

“History makes clear that failure to put our fiscal house in order will erode the vitality of our economy, reduce the standard of living in the United States, and increase the risk of economic and financial instability.”

Ben S. Bernanke, 2011 Annual Conference of the Committee for a Responsible Federal Budget


Excerpts
Introduction
One of the main legacies of the Great Recession has been the sharp deterioration of public finances in most advanced economies. In the U.S., the federal debt held by the public surged from 36 percent of GDP in 2007 to around 70 percent in 2011. This rise in debt, however striking, is dwarfed by the medium-term fiscal imbalances associated with entitlement programs and revenue-constraining measures. For example, the non-partisan Congressional Budget Office (CBO) projects that debt held by the public will exceed 150 percent of GDP by 2030 (see Figure 1). Similarly, Batini et al. (2011) estimate that closing the federal “fiscal gap” associated with current fiscal policies would require a permanent fiscal adjustment of about 15 percent of GDP.

While the crisis brought the need to address the U.S. medium-term fiscal imbalances to the center of the policy debate, the costs they entail are not necessarily well understood. Most of the long-term fiscal projections regularly produced in the U.S. and used to guide policy discussions are derived from debt accounting exercises. A shortcoming of such an approach is that it holds relative prices and economic activity fixed across fiscal policies, and that it cannot be used for welfare analysis. To overcome those limitations and contribute to the debate, in this paper we use a rational expectations general equilibrium framework to assess the medium-term macroeconomic and welfare consequences of alternative fiscal policies in the U.S. We find that failing to address the federal fiscal imbalances for a prolonged period would result in a significant crowding-out of private investment and drag on growth, entailing a permanent output loss of about 17 percent and a welfare loss of almost 7 percent of lifetime consumption. Moreover, we find that the long-run welfare gains from stabilizing the federal debt at a low level more than compensate for the welfare losses associated with the consolidation period. Our results also suggest that the crowding-out effects of public debt are an order of magnitude bigger than the policy mix effects: promptly reducing the level of public debt is significantly more important for activity and welfare than differences in the size of government or the design of the tax reform.

The focus of this study is on the costs and benefits of fiscal consolidation for the U.S. over the medium-term to long-term. In this sense, we explicitly leave aside some questions on fiscal consolidation that, while very relevant for the short-run, cannot be appropriately tackled in this framework. One example is assessing the effects of back-loading the pace of consolidation in the near term—while announcing a credible medium-run adjustment—in the current context of growth below potential and nominal interest rates close to zero. A related relevant question is what mix of fiscal instruments in the near term would make fiscal consolidation less costly in such context. While interesting, these questions are beyond the scope of this paper.

The quantitative framework we use is a dynamic stochastic general equilibrium model with heterogeneous agents, and endogenous occupational choice and labor supply. In the model, ex-ante identical agents face idiosyncratic entrepreneurial ability and labor productivity shocks, and choose their occupation. Agents can either become entrepreneurs and hire other workers, or become workers and decide what fraction of their time to work for entrepreneurs. In order to make a realistic analysis of the policy options, we assume that the government does not have access to lump-sum taxation. Instead, the government raises distortionary taxes on labor, consumption, and income, and issues one-period non-contingent bonds to finance lump-sum transfers to all agents, other noninterest spending, and debt service. Given that the core issue threatening debt sustainability in the U.S. is the explosive path of spending on entitlement programs, the heterogeneous agents assumption is crucial: our model allows for a meaningful tradeoff between distortionary taxation and government transfers, as the latter insure households against very low levels of consumption. The complexity this introduces forces us to sacrifice on some dimension: agents in our model face individual uncertainty but have perfect foresight about the future paths of fiscal instruments and prices. Allowing for uncertainty about the timing and composition of the adjustment would be interesting, but would severely increase the computational cost.
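
In stylized form, the government's period budget constraint implied by this description can be written as follows (a sketch in our own notation, not taken from the paper):

    \[
    G_t + Tr_t + (1 + r_t) B_t \;=\; \tau_c C_t + \tau_l w_t L_t + \tau_y Y_t + B_{t+1}
    \]

where \(\tau_c\), \(\tau_l\) and \(\tau_y\) denote the consumption, labor and income tax rates, \(G_t\) is other noninterest spending, \(Tr_t\) denotes lump-sum transfers, \(r_t B_t\) is debt service, and \(B_{t+1}\) is newly issued one-period non-contingent debt.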

We compare model simulations from four alternative fiscal scenarios. The benchmark scenario maintains current fiscal policies for about twenty years. More precisely, in this scenario we feed the model with the spending (noninterest mandatory and discretionary) and revenue projections from CBO’s Alternative Fiscal Scenario (CBO 2011)—allowing all other variables to adjust endogenously—until about 2030, when we assume that the government increases all taxes to stabilize the debt at its prevailing level. Three alternative scenarios assume, instead, the immediate adoption of a fiscal reform aimed at gradually reducing the federal debt to its pre-crisis level. There are of course many possible parameterizations for such a reform reflecting, among other things, different views about the desired size of the public sector and the design of the tax system. We first consider an adjustment scenario assuming the same size of government and tax structure as the benchmark, in order to disentangle the sole effect of delaying fiscal adjustment—and stabilizing the debt ratio at a high level. We then explore the effect of alternative designs for the consolidation plan by considering two alternative adjustment scenarios that incorporate spending and revenue measures proposed by the bipartisan December 2010 Bowles-Simpson Commission.

This paper is related to different strands of the macro literature on fiscal issues. First, it is related to studies using general equilibrium models to analyze the implications of fiscal consolidations. Forni et al. (2010) use perfect-foresight simulations from a two-country dynamic model to compute the macroeconomic consequences of reducing the debt to GDP ratio in Italy. Coenen et al. (2008) analyze the effects of a permanent reduction in public debt in the Euro Area using the ECB NAWM model. Clinton et al. (2010) use the IMF GIMF model to examine the macroeconomic effects of permanently reducing government fiscal deficits in several regions of the world at the same time. Davig et al. (2010) study the effects of uncertainty about when and how policy will adjust to resolve the exponential growth in entitlement spending in the U.S.

The main difference with our paper is that these works rely on representative agent models that cannot adequately capture the redistributive and insurance effects of fiscal policy. As a result, such models have by construction a positive bias towards fiscal reforms that lower transfers, reduce the debt, and eventually lower the distortions by lowering tax rates. Another unappealing feature of the representative agent models for analyzing the merits of a fiscal consolidation is that, in steady state, the equilibrium real interest rate is independent of the debt level, whereas in our model the equilibrium real interest rate is endogenously affected by the level of government debt, which is consistent with the empirical literature.

Second, the paper is related to previous work using general equilibrium models with infinitely lived heterogeneous agents, occupational choice, and borrowing constraints to analyze fiscal reforms, such as Li (2002), Meh (2005) and Kitao (2008). Differently from these papers, which impose a balanced budget every period, we focus on the effects of debt dynamics and fiscal consolidation reforms. Also, since we focus on reforms over an extended period of time, we augment our model to include growth. Moreover, as in Kitao (2008), we explicitly compute the transitional dynamics after the reforms and analyze the welfare costs associated with the transition.

Results: The long-run effects


What is the effect of delaying fiscal consolidation on...?
Capital and Labor. The high interest rates in the delay scenario imply that, for entrepreneurs without enough internal funding, the cost of borrowing sufficient capital is too high to make up for the income they would earn under the outside option (i.e., wage income). As a result, the share of entrepreneurs in the delay scenario is roughly half the share under the passive adjust scenario and the aggregate capital stock is about 17 percent lower. The higher share of workers in the delay scenario implies a higher labor supply. Together with lower labor demand (due to a lower capital stock), this leads to a real wage that is more than 19 percent lower. Total hours worked are similar in the two steady states, as lower individual hours offset the higher share of workers.

Output and Consumption. The crowding-out effect of fiscal policy under the delay scenario leads to large permanent losses in output and consumption. The level of GDP is about 16 percent lower in the delay than in the passive adjust scenario and aggregate consumption is 3.5 percent lower. Moreover and as depicted in Figure 4, the wealth distribution is significantly more concentrated under the delay scenario.

Welfare. The combination of lower aggregate consumption and a more concentrated wealth distribution under the delay scenario implies that welfare is significantly lower than in the passive adjust scenario. Using a consumption-equivalent welfare metric, we find that the average difference in steady-state welfare across scenarios would be equivalent to permanently increasing consumption for each agent in the delay scenario economy by 6 percent while leaving their amount of leisure unchanged. We interpret this differential as the permanent welfare gain from stabilizing public debt at its pre-crisis level. A breakdown of the steady-state welfare comparison by wealth decile, shown in Figure 5, suggests that all agents up to the 7th decile of the wealth distribution would be better off under fiscal consolidation.
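
The consumption-equivalent metric has a standard definition, sketched here in our own notation rather than the paper's: the welfare differential is the \(\lambda\) that solves

    \[
    \mathbb{E} \sum_{t=0}^{\infty} \beta^t \, u\big((1+\lambda)\, c_t^{\mathrm{delay}},\, l_t^{\mathrm{delay}}\big)
    \;=\;
    \mathbb{E} \sum_{t=0}^{\infty} \beta^t \, u\big(c_t^{\mathrm{adjust}},\, l_t^{\mathrm{adjust}}\big)
    \]

so \(\lambda \approx 0.06\) means consumption in the delay economy would have to rise by about 6 percent in every period, with leisure unchanged, to match average welfare under adjustment.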


What are the effects of alternative fiscal consolidation plans?

Capital and Output. The smaller size of government in the two active adjust scenarios relative to the passive one translates into higher capital stocks and higher output, increasing the gap with the delay scenario. Regarding the tax reform, the comparison between the two active adjust scenarios reveals that distributing the higher tax pressure across all taxes, including consumption taxes, lowers distortions and results in a higher capital stock and a growth-friendlier consolidation: the difference in the output level between the delay and active (1) adjust scenarios stands at 17.7 percent—while this difference is 17.1 and 15.7 percent for the active (2) adjust and passive adjust scenarios respectively.

Consumption and Welfare. While all adjust scenarios reveal a significant difference in long-run per-capita consumption and welfare with respect to postponing fiscal consolidation, the relative performance among them also favors a smaller size of government and a balanced tax reform. The difference in per-capita consumption with the delay scenario is 3.5, 5.8 and 5.4 percent respectively for the passive, active (1) and active (2) adjustment scenarios. The policy mix under the active (1) adjust scenario also ranks the best in terms of welfare, with the welfare differential with respect to the delay scenario being more than 7 percent of lifetime consumption.

Overall Welfare Cost of Delaying Fiscal Consolidation

In the long run, average welfare in the adjust scenario is higher than in the delay scenario by 6.7 percent of lifetime consumption. However, along the transition to the new steady state the adjust scenario is characterized by a costly fiscal adjustment that entails a lower path for per capita consumption, so it is not necessarily true that adjustment is optimal once the transition is taken into account.

To assess the overall welfare ranking of the alternative fiscal paths, we extend the analysis of section III.A. by computing, for the delay and adjust scenarios, the average expected discounted lifetime utility starting in 2011. We find that even taking into account the costs along the transition, the adjust scenario entails an average welfare gain for the economy. The infinite-horizon welfare comparison suggests that consumption under the delay scenario would have to be raised by 0.8 percent for all agents in the economy in all periods to attain the same average utility as under the adjust scenario (while leaving leisure unchanged). A breakdown of this result by wealth decile (see Figure 9) suggests that, as in the long-run comparison, the wealthiest decile of the population is worse off under the adjust scenario. Differently from the steady-state comparison, however, the first four deciles also face welfare losses in the adjust scenario.

A few elements suggest that the average welfare gain reported (0.8 percent in consumption-equivalent terms) can be considered a lower bound. First, the calibrated subjective discount factor from the model used to compute the present value of the utility paths entails a yearly discount rate of about 9.9 percent. With such a high discount rate, the long-run benefits from the adjustment are heavily discounted. Using a discount rate of 3 percent, the one used by the CBO for calculating the present value of future streams of revenues and outlays of the government’s trust funds, would imply a consumption-equivalent welfare gain of 5.9 percent (instead of 0.8 percent). Second, the model we are using has infinitely lived agents, so we are not explicitly accounting for the distribution of costs and benefits across generations.
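
The role of the discount rate is easy to see with a back-of-the-envelope calculation. A minimal sketch, in which the cost and gain path is invented purely for illustration (the paper's actual consumption paths differ):

    # Illustrative only: a consolidation that costs 2 percent of consumption
    # for ten years and then yields a 6.7 percent gain thereafter, discounted
    # at 9.9 percent (the model's calibrated rate) versus 3 percent (the CBO
    # rate mentioned above). The path itself is hypothetical.
    def present_value(path, rate):
        return sum(x / (1.0 + rate) ** t for t, x in enumerate(path))

    horizon = 200  # years; long enough to approximate an infinite horizon
    path = [-2.0] * 10 + [6.7] * (horizon - 10)

    for rate in (0.099, 0.03):
        print(f"discount rate {rate:.1%}: PV = {present_value(path, rate):7.1f}")
    # The far-future gains are almost fully discounted away at 9.9 percent,
    # so a high discount rate mechanically shrinks the measured welfare gain.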

Conclusions
We compare the macroeconomic and welfare effects of failing to address the fiscal imbalances in the U.S. for an extended period with those of reducing federal debt to its pre-crisis level and find that the stakes are quite high. Our model simulations suggest that the continuous rise in federal debt implied by current policies would have sizeable effects on the economy, even under certainty that the federal debt will be fully repaid. The model predicts that the mounting debt ratio would increase the cost of borrowing and crowd out private capital from productive activities, acting as a significant drag on growth. Compared to stabilizing federal debt at its pre-crisis level, continuation of current policies for two decades would entail a permanent output loss of around 17 percent. The associated drop in per-capita consumption, combined with the worsening of wealth concentration that the model suggests, would cause a large average welfare loss in the long run, equivalent to about 7 percent of lifetime consumption. Our results also suggest that promptly reducing the level of public debt is significantly more important for activity and welfare than differences in the size of government or the design of the tax reform. Accordingly, even under consensus on the desirability of increasing primary spending in the medium run, it would be preferable to start from a fiscal house in order.

The model adequately captures that the fiscal consolidation needed to reduce federal debt to its pre-crisis level would be very costly. Still, extending the welfare comparison to include also the transition period suggests that a fiscal consolidation would be on average beneficial.  After taking into account the short-term costs, the average welfare gain from fiscal consolidation stands at 0.8 percent of lifetime consumption.

We argue that our welfare results can be interpreted as a lower bound. This is because, first, we abstract from default so our simulations ignore the potential effect of higher public debt on the risk premium. However, as the debt crisis in Europe has revealed, interest rates can soar quickly if investors lose confidence in the ability of a government to manage its fiscal policy. Considering this effect would have magnified the long-run welfare costs of stabilizing the debt ratio at a higher level. Second, the high discount rate we use in the computation of the present value of utility exacerbates the short-term costs. If we recomputed the overall welfare effects in our scenarios using a discount rate of 3 percent, the welfare gain from a consolidation would be 5.9 percent of lifetime utility, instead of 0.8 percent. An argument for considering a lower rate to compute the present value of welfare is that by assuming infinitely lived agents we are not attaching any weight to unborn agents that would be affected by the permanent costs of delaying the resolution of fiscal imbalances and do not enjoy the expansionary effects of the unsustainable policy along the transitional dynamics.

The results in this paper are not exempt from the perils inherent in any model-dependent analysis. In order to address features that we believe are crucial for the issue at hand, we needed to simplify the model on other dimensions. For example, given the current reliance of the U.S. on foreign financing, the closed economy assumption used in this paper may be questionable. However, we believe that it would also be problematic to assume that the world interest rate will remain unaffected if the U.S. continues to considerably increase its financing needs. Moreover, and as mentioned before, the model ignores the effect of higher debt on the perceived probability of default, which would likely counteract the effect on our results of failing to incorporate the government’s access to foreign borrowing. The model also abstracts from nominal issues and the real and nominal rigidities typically introduced in the new Keynesian models commonly used for policy analysis. However, we believe that while these features are particularly relevant for short-term cyclical considerations, they matter much less for the longer-term issues addressed in this paper.

Friday, October 21, 2011

The Case Against Global-Warming Skepticism

The Case Against Global-Warming Skepticism. By Richard A Muller
There were good reasons for doubt, until now.
http://online.wsj.com/article/SB10001424052970204422404576594872796327348.html
WSJ, Oct 21, 2011

Are you a global warming skeptic? There are plenty of good reasons why you might be.

As many as 757 stations in the United States recorded net surface-temperature cooling over the past century. Many are concentrated in the southeast, where some people attribute tornadoes and hurricanes to warming.

The temperature-station quality is largely awful. The most important stations in the U.S. are included in the Department of Energy's Historical Climatology Network. A careful survey of these stations by a team led by meteorologist Anthony Watts showed that 70% of these stations have such poor siting that, by the U.S. government's own measure, they result in temperature uncertainties of between two and five degrees Celsius or more. We do not know how much worse the stations in the developing world are.

Using data from all these poor stations, the U.N.'s Intergovernmental Panel on Climate Change estimates an average global 0.64ºC temperature rise in the past 50 years, "most" of which the IPCC says is due to humans. Yet the margin of error for the stations is at least three times larger than the estimated warming.

We know that cities show anomalous warming, caused by energy use and building materials; asphalt, for instance, absorbs more sunlight than do trees. Tokyo's temperature rose about 2ºC in the last 50 years. Could that rise, and increases in other urban areas, have been unreasonably included in the global estimates? That warming may be real, but it has nothing to do with the greenhouse effect and can't be addressed by carbon dioxide reduction.

Moreover, the three major temperature analysis groups (the U.S.'s NASA and National Oceanic and Atmospheric Administration, and the U.K.'s Met Office and Climatic Research Unit) analyze only a small fraction of the available data, primarily from stations that have long records. There's a logic to that practice, but it could lead to selection bias. For instance, older stations were often built outside of cities but today are surrounded by buildings. These groups today use data from about 2,000 stations, down from roughly 6,000 in 1970, raising even more questions about their selections.

On top of that, stations have moved, instruments have changed and local environments have evolved. Analysis groups try to compensate for all this by homogenizing the data, though there are plenty of arguments to be had over how best to homogenize long-running data taken from around the world in varying conditions. These adjustments often result in corrections of several tenths of one degree Celsius, significant fractions of the warming attributed to humans.

And that's just the surface-temperature record. What about the rest? The number of named hurricanes has been on the rise for years, but that's in part a result of better detection technologies (satellites and buoys) that find storms in remote regions. The number of hurricanes hitting the U.S., even more intense Category 4 and 5 storms, has been gradually decreasing since 1850. The number of detected tornadoes has been increasing, possibly because radar technology has improved, but the number that touch down and cause damage has been decreasing. Meanwhile, the short-term variability in U.S. surface temperatures has been decreasing since 1800, suggesting a more stable climate.

Without good answers to all these complaints, global-warming skepticism seems sensible. But now let me explain why you should not be a skeptic, at least not any longer.

Over the last two years, the Berkeley Earth Surface Temperature Project has looked deeply at all the issues raised above. I chaired our group, which just submitted four detailed papers on our results to peer-reviewed journals. We have now posted these papers online at www.BerkeleyEarth.org to solicit even more scrutiny.

Our work covers only land temperature—not the oceans—but that's where warming appears to be the greatest. Robert Rohde, our chief scientist, obtained more than 1.6 billion measurements from more than 39,000 temperature stations around the world. Many of the records were short in duration, and to use them Mr. Rohde and a team of esteemed scientists and statisticians developed a new analytical approach that let us incorporate fragments of records. By using data from virtually all the available stations, we avoided data-selection bias. Rather than try to correct for the discontinuities in the records, we simply sliced the records where the data cut off, thereby creating two records from one.
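
The slicing step is simple to illustrate. A minimal sketch of the idea as described, not Berkeley Earth's actual code: split a station record wherever a reporting gap occurs, and treat each segment as a separate record with its own baseline.

    # Illustrative sketch: instead of "correcting" across a discontinuity,
    # slice the record at the gap and treat each piece independently.
    def slice_record(years, temps, max_gap=1):
        segments, current = [], [(years[0], temps[0])]
        for prev_year, year, temp in zip(years, years[1:], temps[1:]):
            if year - prev_year > max_gap:   # reporting gap: start a new segment
                segments.append(current)
                current = []
            current.append((year, temp))
        segments.append(current)
        return segments

    # Hypothetical station with a five-year gap (e.g., a station move):
    years = [1950, 1951, 1952, 1958, 1959, 1960]
    temps = [14.1, 14.0, 14.2, 15.1, 15.0, 15.2]
    print(slice_record(years, temps))
    # Two segments: the 0.9 degree jump across the gap never enters a trend estimate.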

We discovered that about one-third of the world's temperature stations have recorded cooling temperatures, and about two-thirds have recorded warming. The two-to-one ratio reflects global warming. The changes at the locations that showed warming were typically between 1ºC and 2ºC, much greater than the IPCC's average of 0.64ºC.

To study urban-heating bias in temperature records, we used satellite determinations that subdivided the world into urban and rural areas. We then conducted a temperature analysis based solely on "very rural" locations, distant from urban ones. The result showed a temperature increase similar to that found by other groups. Only 0.5% of the globe is urbanized, so it makes sense that even a 2ºC rise in urban regions would contribute negligibly to the global average.

What about poor station quality? Again, our statistical methods allowed us to analyze the U.S. temperature record separately for stations with good or acceptable rankings, and those with poor rankings (the U.S. is the only place in the world that ranks its temperature stations). Remarkably, the poorly ranked stations showed no greater temperature increases than the better ones. The most likely explanation is that while low-quality stations may give incorrect absolute temperatures, they still accurately track temperature changes.

When we began our study, we felt that skeptics had raised legitimate issues, and we didn't know what we'd find. Our results turned out to be close to those published by prior groups. We think that means that those groups had truly been very careful in their work, despite their inability to convince some skeptics of that. They managed to avoid bias in their data selection, homogenization and other corrections.

Global warming is real. Perhaps our results will help cool this portion of the climate debate. How much of the warming is due to humans and what will be the likely effects? We made no independent assessment of that.

Mr. Muller is a professor of physics at the University of California, Berkeley, and the author of "Physics for Future Presidents" (W.W. Norton & Co., 2008).

Sunday, January 23, 2011

Four of every 10 rows of U.S. corn now go for fuel, not food



Please see commentary at TradeFlow21.com


Amber Waves of Ethanol. WSJ Editorial
Four of every 10 rows of U.S. corn now go for fuel, not food.
WSJ, Jan 22, 2011
http://online.wsj.com/article/SB10001424052748703396604576088010481315914.html

The global economy is getting back on its feet, but so too is an old enemy: food inflation. The United Nations benchmark index hit a record high last month, raising fears of shortages and higher prices that will hit poor countries hardest. So why is the United States, one of the world's biggest agricultural exporters, devoting more and more of its corn crop to . . . ethanol?

The nearby chart, based on data from the Department of Agriculture, shows the remarkable trend over a decade. In 2001, only 7% of U.S. corn went for ethanol, or about 707 million bushels. By 2010, the ethanol share was 39.4%, or nearly five billion bushels out of total U.S. production of 12.45 billion bushels. Four of every 10 rows of corn now go to produce fuel for American cars or trucks, not food or feed.
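
Those shares are internally consistent, as a quick check using only the figures quoted above confirms:

    # Quick arithmetic check of the quoted shares (no data beyond the text)
    print(4.9e9 / 12.45e9)   # 2010 ethanol share of the crop: ~0.394, i.e. 39.4%
    print(707e6 / 0.07)      # implied total 2001 crop: ~10.1 billion bushels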

This trend is the deliberate result of policies designed to subsidize ethanol. Note the surge in the middle of the last decade when Congress began to legislate renewable fuel mandates and many states banned MTBE, which had competed with ethanol but ran afoul of the green and corn lobbies.

This carve-out of nearly half of the U.S. corn crop for fuel is increasing even as global food supply struggles to meet rising demand. U.S. farmers account for about 39% of global corn production and about 16% of that crop is exported, so U.S. corn stocks can influence the world price. Chicago Board of Trade corn March futures recently hit 30-month highs of $6.67 a bushel, up from $4 a bushel a year ago.

Demand from developing nations like China is also playing a role in rising prices, and in our view so is the loose monetary policy of the U.S. Federal Reserve that has increased the price of nearly all commodities traded in dollars.

But reduced corn food supply undoubtedly matters. About 40% of U.S. corn production is used to produce feed for animals. As corn prices rise, beef, poultry and other prices rise, too. The price squeeze has already contributed to the bankruptcy of companies like Texas-based Pilgrim's Pride Corp. and Delaware-based poultry maker Townsends Inc. over the past few years.

This damage coincides with a growing consensus that ethanol achieves none of its alleged policy goals. Ethanol supporters claim the biofuel reduces U.S. dependence on foreign oil and provides a cleaner source of energy. But Cornell University scientist David Pimentel calculates that if the entire U.S. corn crop were devoted to ethanol production, it would satisfy only 4% of U.S. oil consumption.

The Environmental Protection Agency has found that ethanol production has a minimal to negative impact on the environment. Even Al Gore, once an ethanol evangelist, now says his support had more to do with Presidential politics in Iowa and admits the fuel provides little or no environmental gain.

Not that this has changed the politics of ethanol. When consumers didn't buy enough gas last year to meet previous ethanol mandates, the Obama Administration lifted the cap on how much ethanol may be mixed into gasoline to 15% from 10%. Presto! More ethanol "demand." On Friday the EPA greatly expanded the number of cars approved to use the 15% blend. Last month, Congressmen whose constituents benefit from this largesse tucked into the tax bill an extension of the $5 billion tax credit for blending ethanol into gasoline.

At a time when the world will need more corn and grains, it makes no sense to devote scarce farmland to make a fuel that exists only because of taxpayer subsidies and mandates. If food supplies tighten and prices keep rising, such a policy will soon become immoral.

Sunday, January 16, 2011

Can We Boost Demand for Rainfall Insurance in Developing Countries?

Can We Boost Demand for Rainfall Insurance in Developing Countries?
World Bank, Jan 05, 2011
http://blogs.worldbank.org/allaboutfinance/node/634

Ask small farmers in semiarid areas of Africa or India about the most important risk they face and they will tell you that it is drought. In 2003 an Indian insurance company and World Bank experts designed a potential hedging instrument for this type of risk—an insurance contract that pays off on the basis of the rainfall recorded at a local weather station.

The idea of using an index (in this case rainfall) to proxy for losses is not new. In the 1940s Harold Halcrow, then a PhD student at the University of Chicago, wrote his thesis on the use of area yield to insure against crop yield losses. In the past two decades the market to hedge against weather risk has grown, especially in developed economies: citrus farmers can insure against frost, gas companies against warm winters, ski resorts against lack of snow, and couples against rain on their wedding day.

Rainfall insurance in developing countries is typically sold commercially before the start of the growing season in unit sizes as small as $1. To qualify for a payout, there is no need to file a claim: policyholders automatically qualify if the accumulated rainfall by a certain date is below a certain threshold. Figure 1 shows an example of a payout schedule for an insurance policy against drought, with accumulated rainfall on the x-axis and payouts on the y-axis. If rainfall is above the first trigger, the crop has received enough rain; if it is between the first and second triggers, the policyholder receives a payout, the size of which increases with the deficit in rainfall; and if it is below the second trigger, which corresponds to crop failure, the policyholder gets the maximum payout. This product has inspired development agencies around the world, and today at least 36 pilot projects are introducing index insurance in developing countries.



Figure 1. Example of a Payout Schedule for an Insurance Policy against Drought
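
The payout rule in Figure 1 is just a piecewise-linear function of accumulated rainfall, and is easy to sketch in code. A minimal illustration (the trigger levels and maximum payout below are hypothetical, chosen for the example only):

    # Sketch of the index-insurance payout rule described above: zero above
    # the first trigger, maximal below the crop-failure trigger, and linear
    # in the rainfall deficit in between. No claim filing is needed.
    def payout(rainfall_mm, trigger=100.0, crop_failure=50.0, max_payout=1000.0):
        if rainfall_mm >= trigger:
            return 0.0                      # enough rain: no payout
        if rainfall_mm <= crop_failure:
            return max_payout               # crop failure: maximum payout
        deficit = trigger - rainfall_mm     # payout grows with the deficit
        return max_payout * deficit / (trigger - crop_failure)

    for mm in (120, 90, 70, 40):
        print(mm, payout(mm))               # 0.0, 200.0, 600.0, 1000.0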


Yet despite the potentially large welfare benefits, take-up of the product has been disappointingly low. Explanations for this low demand abound. The first and obvious reason is that the product is too expensive relative to the risk-coping strategies now used by the farmers. After all, when it is not heavily subsidized (as it is in several states in India), average payouts, which are based on historical rainfall data, amount to about 30–40 percent of the premiums. In a recent paper several coauthors and I estimate that if insurance could be offered with payout ratios similar to those of U.S. insurance contracts, demand would increase by 25–50 percent. But even if prices were close to actuarially fair, demand would not come close to universal participation. So the price cannot be the whole story.

Another explanation is based on liquidity constraints: farmers purchase insurance at the start of the growing season, when there are many competing uses for the limited cash available. In the same paper we randomly assign certain households enough cash to buy one policy and find that this increases take-up by 150 percent of the baseline take-up rate. This effect is several times as large as the effect of cutting the price of the product by half and is concentrated among poor households, which are likely to have less access to the financial system.

In addition, potential buyers may not fully trust the product. Unlike credit, which requires that the lender trust the borrower to repay the loan, insurance requires that the client trust the provider to honor its promise in case of a payout. We measure the importance of trust by varying whether or not the insurance educator visiting households is endorsed by a trusted local agent during the visit. Demand is 36 percent higher when the insurance is offered by a source the household trusts. Trust may be particularly important because many households have only limited numeracy and financial literacy, which is likely to reduce their ability to independently evaluate the insurance.

These results point to several possible improvements in contract design. For example, the trust issue might be overcome by designing a product that pays often initially, since it is easier to sell insurance where a past payout has occurred. Liquidity constraints might be eased by ensuring that payouts are disbursed quickly or by offering loans to pay the premium. Finally, agricultural loans could be bundled with insurance, creating what is in effect a contingent loan, with the amount to be repaid depending on the amount of rainfall. This product was tested in a pilot in Malawi, and to our surprise demand for the bundled loan (17.6 percent uptake) was lower than that for a regular loan (33 percent). The reason may have been that the lender’s inability to penalize defaulting borrowers (in part, because of lack of collateral) was already providing implicit insurance and so farmers did not value the insurance policy.

What is remarkable about the Malawi experience is that after the pilot the lenders decided to bundle all agricultural loans with insurance. In their view, rainfall insurance had proved to be an attractive way to reduce the risk of credit default and had the potential to increase access to agricultural credit at lower prices.

The insurance covers only the loans, and informal discussions with borrowers suggest that they remain largely unaware that the loans are insured. Banks may be deliberately not telling borrowers about the insurance: if they did, borrowers would need to know the exact amount of the payout (if any) to compute what they need to repay to the bank. In other words, uncertainty about the payout can undermine the culture of repayment. This happened in the Malawi pilot. One region of the pilot experienced a mild drought that triggered only a small payout. But because farmers were told that there had been a payout, they assumed that it covered the entire repayment amount and thus defaulted on their loans.

This example suggests that where financial literacy and understanding of the product are limited, insurance policies could instead be targeted to a group—such as an entire village, a producer group, or a cooperative—rather than to individuals. The decision to purchase insurance would be made by the group’s managers, who are likely to be more educated and more familiar with financial products than other group members and may also be less financially constrained. The group could then decide ahead of time how best to allocate funds among its members in case of a payout.


Further reading
Giné, X., R. M. Townsend, and J. Vickery. 2007. “Statistical Analysis of Rainfall Insurance Payouts in Southern India.” American Journal of Agricultural Economics 89 (5): 1248–54.
Giné, X., R. Townsend, and J. Vickery. 2008. “Patterns of Rainfall Insurance Participation in Rural India.” World Bank Economic Review 22 (3): 539–66.
Giné, X., and D. Yang. 2009. “Insurance, Credit, and Technology Adoption: Field Experimental Evidence from Malawi.” Journal of Development Economics 89 (1): 1–11.
Cole, S., X. Giné, J. Tobacman, P. Topalova, R. Townsend, and J. Vickery. 2010. “Barriers to Household Risk Management: Evidence from India.” Policy Research Working Paper 5504, World Bank, Washington, DC.

Thursday, December 30, 2010

Macro-prudential regulation and the false promise of Basel III

Financial regulation goes global - Risks for the world economy
Legatum Institute
http://www.li.com/attachments/20101228_LegatumInstitute_FinancialRegulationGoesGlobal.pdf
Dec 29, 2010

Excerpts with footnotes:

4. How internationalised regulation can lead to a new crisis

We are witnessing a movement towards tighter regulation of world financial markets and also towards regulation that is more closely harmonised across the leading industrial economies. That is no accident, as the G20 communiqué pledged that:
“We each agree to ensure our domestic regulatory systems are strong. But we also agree to establish the much greater consistency and systematic cooperation between countries, and the framework of internationally agreed high standards, that a global financial system requires.”
Policymakers seem to believe that insufficient regulation, not just ineffective regulation, is to blame for the financial crisis. Moreover, they also want regulations to be more consistent across different countries and intend to further internationalise financial regulation.

However, there are a number of weaknesses, in principle and in practice, in the regulations that have been proposed, which may mean they exacerbate future periods of boom and bust.

4.1 Global regulations create global crises

The central argument in favour of supranational regulation is the possibility of financial contagion. Policymakers do not want their own financial systems put at risk by regulatory failures elsewhere. However, with the present crisis emerging in major developed economies, it is hard to justify the sudden focus on the possibility of contagion. Many countries, such as Canada, did maintain stable financial systems despite collapses elsewhere. The contagion from the subprime crisis in the United States was a serious problem only because financial sectors in other major economies had made similar mistakes and become very vulnerable.

To be sure, an economy will suffer if its trading partners get into trouble. There will be a smaller market for their exports, imports might become more expensive or more difficult to get hold of, and supply chains can be disrupted. But that can happen for a range of reasons: a bad harvest, war, internal political strife, a recession not driven by a financial crisis. The financial sector is not unique in that regard.

There is also concern about a “race to the bottom”. As Stephen G. Cecchetti – Economic Adviser and Head of the Monetary and Economic Department at the Bank for International Settlements – wrote, it is felt to be necessary to “make sure national authorities are confident that they will not be punished for their openness”.18 Concerns that countries will be punished for proper regulation are overblown. There are powerful network effects in financial services that mean many institutions are located in places like New York, London and Frankfurt despite those locations having high costs. While smaller institutions like hedge funds may relocate more easily, big banks and other systemically important institutions need to be located in a major financial centre. At the same time, they do attach some importance to a reliable financial system. Countries are more likely to be punished for bad policy – e.g. the new 50 percent top tax rate in the United Kingdom – than for measures genuinely necessary to ensure financial stability.

At the same time, the coordination of regulatory policies creates new risks and exacerbates crises. Common capital adequacy rules, while increasing transparency, also encourage homogeneity in investment strategy and risk-taking, leading to a high concentration of risk. That means that global regulations can be dangerous because they increase the amplitude of global credit cycles. If every country is in phase, systemic risk is higher than in situations where there are offsetting, out-of-phase credit booms and busts in individual countries. The situation is akin to a monoculture: a lack of diversity makes the whole crop more vulnerable.

The Basel rules use a similar risk assessment framework across a broad range of institutions which encourages them to hold similar assets and respond in similar ways in a crisis.19 Consequently, instead of increasing diversification of assets and minimising risk, herd behaviour is amplified.20

The recession that followed the financial crisis was undoubtedly sharper because it was global. That meant countries were hit simultaneously by their own crisis and a fall in global demand hurting export industries. There were also more simultaneous pressures on global financial institutions. Global regulations, reducing diversity in investment decisions and behaviour in a crisis, will tend to produce global crises when they go wrong. As a result, internationalising regulations increases the danger to the world economy.

The objective should be to strike a proper balance between standardisation and diversity in regulations. Unfortunately, there are reasons why politicians might go too far in standardising regulations. Politicians in countries with burdensome regulations are tempted to force others into adopting equally burdensome measures, in order to prevent yardstick competition and limit the ability of firms and individuals to vote with their feet. A well-known example of this is attempts to curb tax competition by organisations such as the OECD and the European Union. Finally, for some, international summits are more comfortable than messy, democratic domestic politics.

4.2 Macro-prudential regulation and the false promise of Basel III

The economics profession’s understanding of the role of financial regulation is shifting from an insistence on micro-prudential regulation to measures which take into account the systemic risks involved in finance. The new paradigm suggests that a policy approach that tries to make the system safe by making each individual financial institution safe is doomed to fail because of the endogenous nature of risk and because of the interactions between different financial institutions.21

Many of the proposed regulatory changes seem to be inspired – at least in part – by the idea that macro-prudential regulation will require a move away from a regulatory regime that does not take into account the endogenous nature of risk. Unfortunately, the form that the international harmonisation of regimes of financial regulation is taking fails to mitigate excessive leverage in good economic times.

A related question is whether the endogenous nature of risk enables this new regulatory paradigm to succeed at all. Most importantly, caring about systemic risk requires the regulator to identify – explicitly or implicitly – those financial institutions that are systemically important – either individually or in “herds”. Provided that this information can be discovered by the banks or becomes common knowledge, systemically important institutions will know that they will not be allowed to fail.  This would create a large moral hazard problem and could represent a key structural flaw that compromises the whole idea of macro-prudential financial regulation.

At the same time, there might be no need for shifting regulations in the macro-prudential direction, especially if the crisis is the result of regulatory and policy failure as set out in Section 1. Policymakers would just need to abstain from policies similar to those that fuelled the boom leading to this crisis. Of course, a greater need for macro-prudential policy and avoiding specific regulatory and policy failures are not mutually exclusive. It is easy to imagine a regulatory environment that combines more attention to the macroeconomic dimension of financial markets; a more prudent monetary policy that becomes contractionary during periods of rapid economic expansion; and sectoral policies that do not encourage asset bubbles.22

However, the regulation of financial markets is taking a path that could exacerbate future booms and busts – in sharp contrast both to the declared intentions of policymakers and to the underlying idea of macro-prudential regulation.

Our criticism of the Basel rules and of the harmonisation of financial regulation needs to be distinguished sharply from the concerns raised by the banking community, which usually point out the costs that would be involved in raising capital adequacy standards. The Institute of International Finance, for instance, has conducted a study of the effects of likely regulatory reform on the broader economy.23 The models used by the study are based on a relatively simple logic. Higher capital ratios require banks to raise more capital, putting an upward pressure on the cost of capital. In turn, this increases lending rates and reduces the aggregate supply of credit to the economy, lowering aggregate employment and GDP.

On that basis, the paper estimates the costs of adopting a full regulatory reform at an average of about 0.6 percentage points of GDP over the period 2011-2015 and an average of about 0.3 percentage points of GDP for the ten year period, 2011-2020. With a different set of assumptions, the Basel Committee estimates the costs to be much smaller. But whether this is a cost worth bearing depends on what the regulatory reform would achieve. If the output gap is a price to pay for an adequate reduction in the likelihood of future crises – and a reduction in the amplitude of business cycles – then it might be worth paying. Unfortunately, the regulatory reform which we are likely to get is unlikely to achieve that.

Firstly, in spite of claims to the contrary, much of the re-regulation simply increases the procyclicality which was characteristic of banking regulation under Basel II. Indeed, Basel III increases the requirement for Tier 1 capital to a minimum of 6 percent and the share of common equity to a total of 7.0 percent. And on top of that it introduces a countercyclical buffer of 0-2.5 percent. Yet that buffer cannot offset the procyclical effect of the increased capital requirements.

We should stress that the problem with Basel III rules is not the absolute size of capital adequacy requirements but the fact that they are based on the borrower’s default risk. Hence, riskier assets need to be backed by a larger capital buffer than less risky ones. During times of crisis, the overall riskiness of extending loans increases and banks will therefore have an incentive to increase the amount of capital which they are holding relative to the total size of their risk-weighted assets. An extreme reaction to economic downturn would thus consist of dumping the riskier assets on the financial market, in the hope of restoring the required capital adequacy ratio, exacerbating the economic downturn and possibly triggering a credit crunch. Conversely, in good economic times, when the measured riskiness of individual loans has decreased, banks will be tempted to hold less capital relative to their other assets and will thus be tempted to fuel a potential lending boom.
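
The procyclical mechanism can be sketched in a few lines. A minimal illustration, in which the risk weights and the 8 percent minimum are hypothetical rather than the actual Basel III calibration:

    # Illustrative sketch: when measured riskiness rises in a downturn,
    # risk-weighted assets (RWA) grow and the capital ratio falls, forcing
    # asset sales or new capital at the worst possible moment.
    def capital_ratio(capital, exposures, risk_weights):
        rwa = sum(e * w for e, w in zip(exposures, risk_weights))
        return capital / rwa

    capital, exposures = 2.0, [60.0, 40.0]                  # (safe, risky) book
    print(capital_ratio(capital, exposures, [0.0, 0.5]))    # good times: 2/20 = 10%
    print(capital_ratio(capital, exposures, [0.0, 1.0]))    # downturn:   2/40 =  5%

    # With a hypothetical 8 percent minimum, the downturn re-measurement forces
    # the bank to cut RWA to 2 / 0.08 = 25, i.e. to dump 15 of its 40 units of
    # risky assets into a falling market -- the fire-sale dynamic described above.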

A related issue is that current measures of risk – which are used as the basis for the risk-weighted capital adequacy rules – are highly imperfect. In a nutshell, highly-rated assets can be leveraged much more heavily than riskier assets, which is a problem if those ratings are not accurate. Lending to triple-A-rated sovereigns still carries a risk-weight of zero. As the present fiscal crisis in Europe suggests, exposure to triple-A-rated debt is certainly not risk free. Basel III complements the capital adequacy rules with simple – not risk-weighted – leverage ratio limits. However, looking at past data, there is little reason to believe that these will be effective in preventing future crises. In fact, both risk-adjusted and simple balance sheet leverage ratios show stable bank leverage until the onset of the crisis.24

Similarly, mark-to-market valuation practices are very problematic for assets where markets have become illiquid, and yield valuations that are both very low and uncertain. In times of crisis, this can give rise to serious consequences for companies that report mark-to-market valuations on their balance sheets. For that reason, mark-to-market valuations can exacerbate the effects of economic downturns.

Furthermore, Basel III will contain new, stricter definitions of common equity, Tier 1 capital and capital at large. In principle, there is nothing wrong with being pickier when selecting the capital assets to use as a buffer when running a bank. It might indeed be prudent to use only common stock and not preferred stock and/or debt-equity hybrids that are permissible under Basel II. However, imposing a common notion of capital on banks and financial institutions worldwide is more likely to make their portfolios similar and will therefore increase the co-movement existing between their liquidity – or lack thereof – at any given point in time.

A common definition of capital and a similar composition of bank capital across the world will also create incentives for regulators to synchronise monitoring. Such moves are already under way within the EU – especially in the light of the establishment of common institutions for financial regulation – in spite of the fact that the business cycles in different parts of Europe are not synchronised.

Finally, we should recognise that tighter financial regulation has its unintended consequences. In the past, we have witnessed companies moving complex, highly leveraged instruments off their balance sheets. Much of the financial activity moved – both geographically and sector-wise – to areas which were less heavily regulated. This included moving activities away from the banking industry into, say, hedge funds. And it also includes moving financial activities to jurisdictions that are friendlier to the financial industry. According to the Financial Times,25 in the past two years almost 1,000 hedge fund employees have moved from the UK to Swiss cantons, seeking regulatory and fiscal predictability. Insofar as the move towards harmonised financial regulation is imperfect – and so long as there remain jurisdictions and areas of finance that are regulated less heavily – there will be a relocation of financial activities towards those jurisdictions and areas of activity. The corollary is that overly tight regulation can create a situation in which much of the actual financial activity takes place outside of the government supervision intended to curb its alleged excesses.

4.3 Crisis as alibi, symbolic politics

Many of the measures that are part of the G20 agenda are completely irrelevant to any ambition one could possibly have to mitigate systemic risks in the world economy. For instance, the idea that “tax havens” and banking secrecy are among the issues that contributed to the financial crisis is completely unfounded. If anything, tax competition could curb some of the excesses of the big, fiscally irresponsible welfare states by making it difficult for governments to impose overly onerous fiscal burdens on mobile tax bases. It is thus clear that for politicians in high-tax countries, the present crisis has served as an alibi to push forward a variety of measures in which they have demonstrated an interest but for which they lacked a plausible justification.26

In many respects, regulating short-selling is similar. Short-selling cannot be blamed for the financial crisis, just as it cannot be blamed for the Greek debt crisis that occurred earlier this year. Indeed, short-selling is critical in reflecting new, often pessimistic, information about the asset in question into a market price. Enabling European regulators to prohibit short-selling in specific situations – presumably in situations when doubts arise about the ability of a European country to repay its debt obligations – will do nothing to address the underlying problems of fiscal irresponsibility. It is just an illustration of a mentality that pretends that shooting the messenger is an appropriate response to the fiscal problems of the Eurozone. The direct cost of this policy is that it will introduce noise into the functioning of financial markets and will make them process new information less efficiently.

Besides taxation and short-selling, there have been coordinated moves to regulate hedge funds, both in the United States and in Europe. While this might make sense from a macro-prudential perspective, particularly if some hedge funds are of systemic importance, we should recognise that hedge funds were the victim, not the perpetrator, in the recent crisis.

There has also been a series of measures that governments have been eager to take for a long time and for which the crisis provided a convenient ad hoc justification, and that are now part of the coordinated re-regulation of financial markets in the United States and in Europe. This includes, for instance, the creation of systemic risk boards – as if the creation of such institutions were in itself an improvement over the present situation. Creating a new bureau does not endow the regulators with a superior model of the economy, and it certainly does not mean that they will be able to produce better forecasts than the regulators of the past.

Likewise, the creation of consumer protection boards is unlikely to have a significant effect, besides creating a false sense of security among the general public. After all, the crisis was not caused by uninformed consumers’ falling prey to – say – credit card companies. While instances of individuals making bad decisions regarding their indebtedness certainly exist, they were in most cases a rational response to the wider institutional environment in which they were operating, and which made it worthwhile, for instance, to use one’s house as a piggybank. Furthermore, there is evidence that some of the measures aiming at protecting consumers can in fact exacerbate moral hazard and strengthen the incentives for irresponsible behaviour.27

Finally, the issue of executive pay is high on the list of priorities for policymakers across the globe, again without a credible explanation of how that would contribute to the prevention of future crises. Major proponents of macroprudential regulation – such as the authors of the Geneva report – argue that there is very little reason for regulators to get involved in the decisions of private firms over executive compensation. Rather, as Charles Wyplosz says, “macro-prudential regulation will push banks to develop incentive packages that are more encouraging of longer-term behaviour.”28



Footnotes:

18 Cecchetti, S. G. “Financial reform: a progress report.” Remarks prepared for the Westminster Economic Forum, National Institute of Economic and Social Research, 4 October 2010.
19 Eatwell, J. The New International Financial Architecture: Promise or Threat? Cambridge Endowment for Research in Finance, 22 May 2002.
20 Daníelsson, J. & J.-P. Zigrand. What Happens when You Regulate Risk? Evidence from a Simple Equilibrium Model. April 2003.
21 For an exposition of the ideas behind this approach to financial regulation see Hanson, Kashyap and Stein (2010): “A Macroprudential Approach to Financial Regulation.” Journal of Economic Perspectives, forthcoming.
22 In this endeavour, targeting nominal GDP instead of inflation might be instrumental, as Scott Sumner, David Beckworth, George Selgin and others have argued.
23 IIF (2010). Interim Report on the Cumulative Impact on the Global Economy of Proposed Changes in the Banking Regulatory Framework. http://www.ebf-fbe.eu/uploads/10-Interim%20NCI_June2010_Web.pdf
24 See Joint FSF-CGFS Working Group (2009). The role of valuation and leverage in procyclicality. http://www.bis.org/publ/cgfs34.htm
25 FT. "Hedge fund managers seek predictability." October 1, 2010. Available at: http://www.ft.com/cms/s/0/557f55d4-cd93-11df-9c82-00144feab49a.html
26 Indeed, the OECD has been running its program on harmful tax practices since 1998.
27 We discuss the specific case of the CARD Act in the United States in Rohac, D. (2010). “The high costs of consumer protection.” The Washington Times, September 3, 2010.
28 Wyplosz, C. (2009). “The ICMB-CEPR Geneva Report: ‘The future of financial regulation.’” VoxEU, January 27, 2009. http://www.voxeu.org/index.php?q=node/2872

Monday, November 29, 2010

New derivatives rules could punish firms that pose no systemic risk

The Hangover, Part II. WSJ Editorial
New derivatives rules could punish firms that pose no systemic risk.
WSJ, Nov 29, 2010
http://online.wsj.com/article/SB10001424052748704104104575622583155296368.html

Not even Mel Gibson would want a role in this political sequel. Readers will recall the true story of Congressman Barney Frank and Senator Chris Dodd, two pals who stayed up all night rewriting derivatives legislation.

The plot centered on the comic premise that two Beltway buddies would quickly restructure multi-trillion-dollar markets to present their friend, President Barack Obama, with an apparent achievement before a G-8 meeting. As in the movies, the slapstick duo finished rewriting their bill just in time for the big meeting in Toronto last June.

But after the pair completed their madcap all-nighter, no hilarity ensued. That's because Main Street companies that had nothing to do with the financial crisis woke up to find billions of dollars in potential new costs. The threat was new authority for regulators to require higher margins on various financial contracts, even for small companies that nobody considers a systemic risk. The new rules could apply to companies that aren't speculating but are simply trying to protect against business risks, such as a sudden price hike in a critical raw material.

Businesses with good credit that have never had trouble off-loading such risks might have to put up additional cash at the whim of Washington bureaucrats, or simply hold on to the risks, making their businesses less competitive. Companies that make machine tools, for example, want to focus on making machine tools, not on the fluctuations of interest rates or the value of a foreign customer's local currency. So companies pay someone else to manage these risks. But Washington threatens to make that process much more costly.

Messrs. Frank and Dodd responded to the uproar first by suggesting that the problem could be fixed later in a "corrections" bill and then by denying the problem existed. Both proclaimed that their bill did not saddle commercial companies with new margin rules. But as we noted last summer, comments from the bill's authors cannot trump the language of the law.

Flash forward to today, and the Commodity Futures Trading Commission (CFTC) is drafting its new rules for swaps, the common derivatives contracts in which two parties exchange risks, such as trading fixed for floating interest rates. We're told that CFTC Chairman Gary Gensler has said privately that his agency now has the power to hit Main Street with new margin requirements, not just Wall Street.

Main Street companies that use these contracts are known as end-users. When we asked the CFTC if Mr. Gensler believes regulators can require swap dealers to demand margin from all end-users, a spokesman said, "It would be premature to say that a rule would include such a requirement or that the Chairman supports such a requirement."

It may only be premature until next month, when the CFTC is expected to issue its draft rules. While the commission doesn't have jurisdiction over the entire swaps market, other financial regulators are expected to follow its lead. Mr. Gensler, a Clinton Administration and Goldman Sachs alum, may not understand the impact of his actions outside of Washington and Wall Street.

In a sequel to the Dodd-Frank all-nighter, the law requires regulators to remake financial markets in a rush. CFTC Commissioner Michael Dunn said recently that to comply with Dodd-Frank, the commission may need to write 100 new regulations by next July.

"In my opinion it takes about three years to really promulgate a rule," he said, according to Bloomberg News. Congress instructed us to "forget what's physically possible," he added. The commission can't really use this impossible schedule as an excuse because Mr. Gensler had as much impact as anyone in crafting the derivatives provisions in Dodd-Frank. No surprise, the bill vastly expands his agency's regulatory turf.

And if anyone can pull off a complete overhaul of multi-trillion-dollar markets in a mere eight months, it must be the CFTC.

Just kidding. An internal CFTC report says that communication problems between the CFTC's enforcement and market oversight divisions "impede the overall effectiveness of the commission's efforts to not only detect and prevent, but in certain circumstances, to take enforcement action against market manipulation." The report adds that the commission's two primary surveillance programs use incompatible software. Speaking generally and not in response to the report, Mr. Gensler says that the agency is "trying to move more toward the use of 21st century computers," though he warns that "it's a multiyear process." No doubt.

The CFTC report also noted that "the staff has no standard protocol for documenting their work." If we were tasked with restructuring a complex trading market to conform to the vision of Chris Dodd and Barney Frank, we wouldn't want our fingerprints on it either.

The report was completed in 2009 but only became public this month thanks to a Freedom of Information Act request from our colleagues at Dow Jones Newswires. Would Messrs. Dodd and Frank have responded differently to Mr. Gensler's power grab if they had realized how much trouble the CFTC was having fulfilling its traditional mission? We doubt it, but it certainly would have made their reform a tougher sell, even to the Washington press corps.

Congress should scrutinize this process that is all but guaranteed to result in ill-considered, poorly crafted regulation. In January, legislators should start acting, not like buddies pulling all-nighters, but like adults who understand it's their job to make the tough calls, rather than kicking them over to the bureaucracy with an arbitrary deadline.

Saturday, October 30, 2010

Utopia, With Tears - A review of Fruitlands, by Richard Francis

Utopia, With Tears. By ALEXANDRA MULLEN
No meat, no wool, no coffee or candles to read by, but plenty of high aspirations—and trouble. A review of Fruitlands, by Richard Francis (Yale University Press, 321 pages, $30)

WSJ, Friday, October 29, 2010
http://online.wsj.com/article/SB10001424052702304173704575578761068904960.html


In 1843, in the quiet middle of Massachusetts, a group of high-minded people set out to create a new Eden they called Fruitlands. The embryonic community miscarried, lasting only seven months, from June to January. Fruitlands now has a new chronicler in Richard Francis, a historian of 19th-century America. "This is the story," he writes, "of one of history's most unsuccessful utopias ever—but also one of the most dramatic and significant." As we learn in his thorough and occasionally hilarious account, the claim is about half right.

The utopian community of Fruitlands had two progenitors: the American idealist Bronson Alcott and the English socialist Charles Lane. Alcott was a farm boy from Connecticut who had turned from the plough to philosophy. According to Ralph Waldo Emerson, his friend, Alcott could not chat about anything "less than A New Solar System & the prospective Education in the nebulae." Airy as his thoughts were, Alcott could be a mesmerizing speaker. Indeed, his words partly inspired an experimental community in England, where he met Lane.

Lane has often been considered the junior partner in the Fruitlands story, merely the guy who put up the money (for roughly 100 acres, only 11 of which were arable). But Mr. Francis fleshes him out, showing him to be a tidier and more bitter thinker than Alcott, with a practical streak that could be overrun by his hopes for humanity.

As Mr. Francis notes, Alcott and Lane shared a "tendency to take moderation to excess," pushing their first principles as far as they could go. One such principle was that you should do no harm to living things, including plants. As Mr. Francis explains: "If you cut a cabbage or lift a potato you kill the plant itself, just as you kill an animal in order to eat its meat. But pluck an apple, and you leave the tree intact and healthy."

The Fruitlands community never numbered more than 14 souls, five of them children. The members included a nudist, a former inmate of an insane asylum, and a man who had once gotten into a knife fight to defend his right to wear a beard. Then there was the fellow who thought swearing elevated the spirit. He would greet the Alcott girls: "Good morning, damn you." Lane thought the members should be celibate; Alcott's wife, Abigail, the mother of his four daughters and the sole permanent woman resident, was a living reproach to this view.

All of Fruitlands' members, however, agreed to certain restrictions: No meat or fish; in fact nothing that came from animals, so no eggs and no milk. No leather or wool, and no whale oil for lamps or candles made from tallow (rendered animal fat). No stimulants such as coffee or tea, and no alcohol. Because the Fruitlanders were Abolitionists, cane sugar and cotton were forbidden (slave labor produced both). The members of the community wore linen clothes and canvas shoes. The library was stocked with a thousand books, but no one could read them after dark.

And how did the whole experiment go? Well, most of the men at Fruitlands had little farming experience. Alcott, who did, impressed Lane with his ability to plow a straight furrow; but Alcott was always a better talker than worker. The community rejected animal labor—and even manure, a serious disadvantage if you want to produce enough food to be self-sufficient. The farming side of Fruitlands was a dud.

But the experiment was indeed, as Mr. Francis claims, "dramatic." The drama came from a common revolutionary trajectory in which "a group of idealists ends by trying to destroy each other." "Of spiritual ties she knows nothing," Lane wrote of Abigail. "All Mr. Lane's efforts have been to disunite us," she confided to a friend, referring to her relations with Bronson. Even the usually serene Bronson agonized: "Can a man act continually for the universal end," he asked Lane, "while he cohabits with a wife?" By Christmas, which he spent in Boston, Bronson seemed on the verge of dissolving his family. In the new year he returned to Fruitlands, but he had a breakdown. This was no way to run a utopia, and the experiment ended.

Was Fruitlands "significant"? In Mr. Francis's reading, the community "intuited the interconnectedness of all living things." That intuition, he believes, underlies our notions of the evils of pollution and the imminence of environmental catastrophe, as well as our concerns about industrialized farming. The Fruitlanders' understanding of the world, he argues, helped create a parallel universe—an alternative to scientific empiricism—that is still humming along in the current day.

Perhaps so. Certainly many New Age and holistic notions, in their fuzzy and well-meaning romanticism, share a common ancestor with the Fruitlands outlook. But the result is not always benign. It was the Fruitlanders' belief, for instance, that "all disease originates in the soul." One descendant of this idea is the current loathsome view that cancer is caused by bad thoughts.

Though obviously sympathetic to the Fruitlands experiment, Mr. Francis gives us enough facts to let us draw our own conclusions. He records Bronson and Abigail's acts of charity, already familiar to us from their daughter Louisa's novel "Little Women" (1868). But he also retells less admiring stories, of their petty vindictiveness and casual callousness. Along the way he adumbrates the ways in which idealism can slide into megalomania.

Mr. Francis reports a conversation that Alcott once had with Henry James Sr., the father of the novelist Henry and the philosopher William. Alcott let it drop that he, like Jesus and Pythagoras before him, had never sinned. James asked whether Alcott had ever said, "I am the Resurrection and the Life." "Yes, often," Alcott replied. Unfortunately, Mr. Francis fails to record James's rejoinder: "And has anyone ever believed you?"

Ms. Mullen writes for the Barnes & Noble Review.

Friday, October 22, 2010

High costs of making batteries stall affordability of electric cars

High costs of making batteries stall affordability of electric cars. By Mike Ramsey
The Wall Street Journal Europe, page 22, Oct 19, 2010
http://online.wsj.com/article/SB40001424052748703735804575536242934528502.html

The push to get electric cars on the road is backed by governments and auto makers around the world, but they face a hurdle that may be tough to overcome: the stubbornly high cost of the giant battery packs that power the vehicles.

Both the industry and government are betting that a quick takeoff in electric-car sales will drive down the price of the battery packs, which can account for more than half the cost of an electric vehicle.

But a number of scientists and automotive engineers believe cost reductions will be hard to come by. Unlike with tires or toasters, battery packs aren't likely to enjoy traditional economies of scale as their makers ramp up production.

Some experts say that increased production of batteries means the price of the key metals used in their manufacture will remain steady—or maybe even rise—at least in the short term.

These experts also say the price of the electronic parts used in battery packs as well as the enclosures that house the batteries aren't likely to decline appreciably.

The U.S. Department of Energy has set a goal of bringing down car-battery costs by 2014 to 70% below last year's price, which it estimated at $1,000 per kilowatt-hour of battery capacity.

Jay Whitacre, a battery researcher and technology policy analyst at Carnegie Mellon University, is skeptical. The government's goals "are aggressive and worth striving for, but they are not attainable in the next three to five years," he said in an interview. "It will be a decade at least" before that price reduction is reached.

The high cost of batteries is evident in the prices set for early electric cars. Nissan Motor Co.'s Leaf, due in the U.S. in December, is priced at $33,000. Current industry estimates say its battery pack alone costs Nissan about $15,600.

That cost will make it difficult for the Leaf to turn a profit. And it also may make the Leaf a tough sell, since even with government tax breaks the car will cost more than twice the $13,520 starting price of the similar-size Nissan Versa hatchback.
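As a rough cross-check of these figures, consider the implied cost per kilowatt-hour. The sketch below is illustrative only: the article does not state the Leaf's pack capacity, so the 24 kWh figure is an assumption, not a reported number.

# Back-of-the-envelope check of the battery-cost figures above.
# ASSUMPTION (not from the article): the Leaf's pack holds roughly 24 kWh.
PACK_COST_USD = 15_600     # industry estimate of the Leaf's pack cost
PACK_KWH = 24              # assumed pack capacity
DOE_BASELINE = 1_000       # DOE's estimate of last year's cost, $ per kWh
DOE_CUT = 0.70             # DOE's targeted 70% reduction by 2014

implied_cost = PACK_COST_USD / PACK_KWH      # about $650 per kWh
doe_target = DOE_BASELINE * (1 - DOE_CUT)    # $300 per kWh
print(f"Implied Leaf pack cost: ${implied_cost:.0f} per kWh")
print(f"DOE 2014 target:        ${doe_target:.0f} per kWh")

Under that assumption the Leaf's pack comes out to roughly $650 per kilowatt-hour, more than double the DOE's $300 target, which is consistent with the skepticism reported below.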

Nissan won't comment on the price of the battery packs, other than to say that the first versions of the Leaf won't make money. Only later, when the company begins mass-producing the battery units in 2013, will the car be profitable, according to Nissan.

The Japanese company believes it can cut battery costs through manufacturing scale. It is building a plant in Smyrna, Tenn., that will have the capacity to assemble up to 200,000 packs a year.

Other proponents of electric vehicles agree that battery costs will fall as production ramps up. "They will come down by a factor of two, if not more, in the next five years," said David Vieau, chief executive officer of A123 Systems, a start-up that recently opened a battery plant in Plymouth, Mich.

Alex Molinaroli, president of Johnson Controls Inc.'s battery division, is confident it can reduce the cost of making batteries by 50% in the next five years, though the company won't say what today's cost is. The cost reduction by one of the world's biggest car-battery makers will mostly come from efficient factory management, cutting waste and other management-related costs, not from fundamental improvement of battery technology, he said.

But researchers such as Mr. Whitacre, the National Academies of Science and even some car makers aren't convinced, mainly because more than 30% of the cost of the batteries comes from metals such as nickel, manganese and cobalt. (Lithium makes up only a small portion of the metals in the batteries.)

Prices for these metals, which are set on commodities markets, aren't expected to fall with increasing battery production—and may even rise as demand grows, according to a study by the Academies of Science released earlier this year and engineers familiar with battery production.

Lithium-ion battery cells already are mass produced for computers and cellphones and the costs of the batteries fell 35% from 2000 through 2008—but they haven't gone down much more in recent years, according to the Academies of Science study.

The Academies and Toyota Motor Corp. have publicly said they don't think the Department of Energy goals are achievable and that cost reductions are likely to be far lower. It likely will be 20 years before costs fall by 50%—not the three or so years the DOE projects—according to an Academy council studying battery costs. The council was made up of nearly a dozen researchers in the battery field.

"Economies of scale are often cited as a factor that can drive down costs, but hundreds of millions to billions of ... [battery] cells already are being produced in optimized factories. Building more factories is unlikely to have a great impact on costs," the Academies report said.

The report added that the cost of the battery-pack enclosure that holds the cells is a major portion of the total battery-pack cost, and isn't likely to come down much. In addition, battery packs include electronic sensors and controls that regulate the voltage moving through and the heat being generated by the cells. Since those electronics already are mass-produced commodities, their prices may not fall much with higher production, the study said.

Lastly, the labor involved in assembling battery packs is expensive: employees need to be more highly trained than traditional factory staff because they work in a high-voltage environment. That means labor costs are unlikely to drop, said a senior executive at one battery manufacturer.

When car makers began using nickel-metal hydride batteries, an older technology, in their early hybrid vehicles, the cost of the packs fell only 11% from 2000 to 2006 and has seen little change since, according to the Academies study.

Toyota executives, including Takeshi Uchiyamada, global chief of engineering, say their experience with nickel-metal hydride batteries makes them skeptical that lithium-ion battery-pack prices will fall substantially.

"The cost reductions aren't attainable even in the next 10 years," said Menahem Anderman, principal of Total Battery Consulting Inc., a California-based battery research firm. "We still don't know how much it will cost to make sure the batteries meet reliability, safety and durability standards. And now we are trying to reduce costs, which automatically affect those first three things."

Tuesday, June 8, 2010

Self-identified liberals and Democrats do badly on questions of basic economics

Are You Smarter Than a Fifth Grader? By DANIEL B. KLEIN
Self-identified liberals and Democrats do badly on questions of basic economics.
WSJ, Jun 08, 2010

Who is better informed about the policy choices facing the country—liberals, conservatives or libertarians? According to a Zogby International survey that I write about in the May issue of Econ Journal Watch, the answer is unequivocal: The left flunks Econ 101.

Zogby researcher Zeljka Buturovic and I considered the answers of the 4,835 respondents (all American adults) to eight survey questions about basic economics. We also asked the respondents about their political leanings: progressive/very liberal; liberal; moderate; conservative; very conservative; and libertarian.

Rather than focusing on whether respondents answered a question correctly, we instead looked at whether they answered incorrectly. A response was counted as incorrect only if it was flatly unenlightened.

Consider one of the economic propositions in the December 2008 poll: "Restrictions on housing development make housing less affordable." People were asked if they: 1) strongly agree; 2) somewhat agree; 3) somewhat disagree; 4) strongly disagree; 5) are not sure.

Basic economics acknowledges that whatever redeeming features a restriction may have, it increases the cost of production and exchange, making goods and services less affordable. There may be exceptions to the general case, but they would be atypical.

Therefore, we counted as incorrect responses of "somewhat disagree" and "strongly disagree." This treatment gives leeway for those who think the question is ambiguous or half right and half wrong. They would likely answer "not sure," which we do not count as incorrect.
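In code, the counting rule for this proposition might look like the following minimal sketch. The response categories and the rule come from the article; the sample responses below are hypothetical, and the polarity would flip for propositions whose unenlightened answer is "agree."

# Scoring rule for a proposition whose unenlightened answer is "disagree":
# only outright disagreement counts as incorrect; "not sure" does not.
INCORRECT = {"somewhat disagree", "strongly disagree"}

def count_incorrect(responses):
    """Count the responses treated as unenlightened for this proposition."""
    return sum(1 for r in responses if r in INCORRECT)

sample = ["strongly agree", "not sure", "somewhat disagree",
          "strongly disagree", "somewhat agree"]
print(count_incorrect(sample))  # prints 2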

In this case, the percentage of conservatives answering incorrectly was 22.3%, of the very conservative 17.6%, and of libertarians 15.7%. But the percentage of progressive/very liberals answering incorrectly was 67.6%, and of liberals 60.1%. The pattern was not an anomaly.

The other questions were: 1) Mandatory licensing of professional services increases the prices of those services (unenlightened answer: disagree). 2) Overall, the standard of living is higher today than it was 30 years ago (unenlightened answer: disagree). 3) Rent control leads to housing shortages (unenlightened answer: disagree). 4) A company with the largest market share is a monopoly (unenlightened answer: agree). 5) Third World workers working for American companies overseas are being exploited (unenlightened answer: agree). 6) Free trade leads to unemployment (unenlightened answer: agree). 7) Minimum wage laws raise unemployment (unenlightened answer: disagree).

How did the six ideological groups do overall? Here they are, best to worst, by average number of incorrect responses (out of eight): Very conservative, 1.30; Libertarian, 1.38; Conservative, 1.67; Moderate, 3.67; Liberal, 4.69; Progressive/very liberal, 5.26.

Americans in the first three categories do reasonably well. But the left has trouble squaring economic thinking with its political psychology, morals and aesthetics.

To be sure, none of the eight questions specifically challenges the political sensibilities of conservatives and libertarians. But not all of the eight questions are tied directly to left-wing concerns about inequality and redistribution, either. In particular, the questions about mandatory licensing, the standard of living, the definition of monopoly, and free trade do not specifically challenge leftist sensibilities.

Yet on every question the left did much worse. On the monopoly question, the portion of progressive/very liberals answering incorrectly (31%) was more than twice that of conservatives (13%) and more than four times that of libertarians (7%). On the question about living standards, the portion of progressive/very liberals answering incorrectly (61%) was more than four times that of conservatives (13%) and almost three times that of libertarians (21%).

The survey also asked about party affiliation. Those responding Democratic averaged 4.59 incorrect answers. Republicans averaged 1.61 incorrect, and Libertarians 1.26 incorrect.

Adam Smith described political economy as "a branch of the science of a statesman or legislator." Governmental power joined with wrongheadedness is something terrible, but all too common. Realizing that many of our leaders and their constituents are economically unenlightened sheds light on the troubles that surround us.

Mr. Klein is a professor of economics at George Mason University. This op-ed is based on an article published in the May 2010 issue of the journal he edits, Econ Journal Watch, a project sponsored by the American Institute for Economic Research.

Thursday, May 20, 2010

The Madness of Cotton - The feds want U.S. taxpayers to subsidize Brazilian farmers

The Madness of Cotton. WSJ Editorial
The feds want U.S. taxpayers to subsidize Brazilian farmers
WSJ, May 21, 2010

U.S. cotton farmers took in almost $2.3 billion in government subsidies in 2009, and the top 10% of the recipients got 70% of the cash. Now Uncle Sam is getting ready to ask taxpayers to foot the bill for another $147.3 million a year for a new round of cotton payments, this time to Brazilian growers.

We realize that in today's Washington this is a rounding error. But the reason for the new payments to foreign farmers deserves attention. If it becomes a habit, it is unlikely to end with cotton.

Here's the problem: The World Trade Organization has ruled that subsidies to American cotton growers under the 2008 farm bill are a violation of U.S. trading commitments. The U.S. lost its final appeal in the case in August 2009 and the WTO gave Brazil the right to retaliate.

Brazil responded by drafting a retaliation list threatening tariffs on more than 100 U.S. exports, including autos, pharmaceuticals, medical equipment, electronics, textiles, wheat, fruits, nuts and cotton. The exports are valued at about $1 billion a year, and the tariffs would go as high as 100%. Brazil is also considering sanctions against U.S. intellectual property, including compulsory licensing in pharmaceuticals, music and software.

The Obama Administration appreciates the damage this retaliation would cause, so in April it sent Deputy U.S. Trade Representative Miriam Sapiro to negotiate. She came back with a promise from Brazil to postpone the sanctions for 60 days while it considers a U.S. offer to—get this—let American taxpayers subsidize Brazilian cotton growers.

That's right. Rather than reduce the U.S. subsidies to American cotton farmers that are the cause of the trade fight, the Administration is proposing that U.S. taxpayers also compensate Brazilian cotton farmers for the harm done by those subsidies. Thus the absurd U.S. cotton program would dip into the Commodity Credit Corporation to pay what amounts to a bribe to Brazil so it won't retaliate.

Talk about taxpayer double jeopardy. As Senator Richard Lugar (R., Ind.) said recently, the commodity credit program was established to assist U.S. agriculture, "not to pay restitution to foreign farmers who won a trade complaint against a U.S. farm subsidy program."

Mr. Lugar wants the subsidies to U.S. farmers cut by the amount that will have to be sent to Brazil. He adds that a better option would be to take on the trade-distortions of the cotton program. "I am prepared to introduce legislation to achieve these immediate reforms," he wrote in an April 30 letter to President Obama.

This is probably tilting at political windmills, since Mr. Obama has shown no appetite for trade promotion, much less confronting a cotton lobby supported by such Democrats as Arkansas Senator Blanche Lincoln. But we're glad to see that at least Mr. Lugar is willing to call out the absurdity of U.S. taxpayers subsidizing foreign farmers to satisfy the greed of a few American cotton growers.

Thursday, May 13, 2010

The Case for the New START Treaty, by Secretary Gates

The Case for the New START Treaty. By ROBERT M. GATES
The treaty has the unanimous support of America's military leadership.
WSJ, May 13, 2010

I first began working on strategic arms control with the Russians in 1970, an effort that led to the first Strategic Arms Limitation Agreement with Moscow two years later.

The key question then and in the decades since has always been the same: Is the United States better off with an agreement or without it? The answer for each successive president has always been "with an agreement." The U.S. Senate has always agreed, approving each treaty by lopsided, bipartisan margins.

The same answer holds true for the New START agreement: The U.S. is far better off with this treaty than without it. It strengthens the security of the U.S. and our allies and promotes strategic stability between the world's two major nuclear powers. The treaty accomplishes these goals in several ways.

First, it significantly limits U.S. and Russian strategic nuclear arsenals and establishes an extensive verification regime to ensure that Russia is complying with its treaty obligations. These include short-notice inspections of both deployed and nondeployed systems, verification of the numbers of warheads actually carried on Russian strategic missiles, and unique identifiers that will help us track—for the very first time—all accountable strategic nuclear delivery systems.

Since the expiration of the old START Treaty in December 2009, the U.S. has had none of these safeguards. The new treaty will put them back in place, strengthen many of them, and create a verification regime that will provide for greater transparency and predictability between our two countries, to include substantial visibility into the development of Russian nuclear forces.

Second, the treaty preserves the U.S. nuclear arsenal as a vital pillar of our nation's and our allies' security posture. Under this treaty, the U.S. will maintain our powerful nuclear triad—ICBMs, submarine-launched ballistic missiles (SLBMs) and bombers—and we will retain the ability to change our force mix as we see fit. Based on recommendations of the Joint Chiefs of Staff, we plan to meet the treaty's limits by retaining a triad of up to 420 ICBMs, 14 submarines carrying up to 240 SLBMs, and up to 60 nuclear-capable heavy bombers.

Third, and related, the treaty is buttressed by credible modernization plans and long-term funding for the U.S. nuclear weapons stockpile and the infrastructure that supports it. This administration is proposing to spend $80 billion over the next decade to rebuild and sustain America's aging nuclear infrastructure—especially our national weapons labs, and our science, technology and engineering base. This week the president is providing a report to the Congress on investments planned over the next 10 years to sustain and modernize our nuclear weapons, their delivery systems, and supporting infrastructure.

Fourth, the treaty will not constrain the U.S. from developing and deploying defenses against ballistic missiles, as we have made clear to the Russian government. The U.S. will continue to deploy and improve the interceptors that defend our homeland—those based in California and Alaska. We are also moving forward with plans to field missile defense systems to protect our troops and partners in Europe, the Middle East, and Northeast Asia against the dangerous threats posed by rogue nations like North Korea and Iran.

Finally, the treaty will not restrict America's ability to develop and deploy conventional prompt global strike capabilities—that is, the ability to hit targets anywhere in the world in less than an hour using conventional explosive warheads fitted to long-range missiles.

These delivery systems—be they land- or sea-based—would count against the new treaty limits, but if we deploy them it would be in very limited numbers. We are currently assessing other kinds of long-range strike systems that would not count under the treaty.

The New START Treaty has the unanimous support of America's military leadership—to include the chairman of the Joint Chiefs of Staff, all of the service chiefs, and the commander of the U.S. Strategic Command, the organization responsible for our strategic nuclear deterrent. For nearly 40 years, treaties to limit or reduce nuclear weapons have been approved by the U.S. Senate by strong bipartisan majorities. This treaty deserves a similar reception and result—on account of the dangerous weapons it reduces, the critical defense capabilities it preserves, the strategic stability it maintains, and, above all, the security it provides to the American people.

Mr. Gates is secretary of defense.

Wednesday, May 12, 2010

The Price of Wind - The 'clean energy revolution' is expensive

The Price of Wind. WSJ Editorial
The 'clean energy revolution' is expensive
WSJ, May 12, 2010

The ferocious opposition from Massachusetts liberals to the Cape Wind project has provided a useful education in green energy politics. And now that the Nantucket Sound wind farm has won federal approval, this decade-long saga may prove edifying in green energy economics too: namely, the price of electricity from wind is more than twice what consumers now pay.

On Monday, Cape Wind asked state regulators to approve a 15-year purchasing contract with the utility company National Grid at 20.7 cents per kilowatt hour, starting in 2013 and rising at 3.5% annually thereafter. Consumers pay around nine cents for conventional power today. The companies expect average electric bills to jump by only about $1.59 a month, because electricity is electricity no matter how it is generated, and Cape Wind's 130 turbines will generate so little of it in the scheme of the overall New England market.

Still, that works out to roughly $443 million in new energy costs, and that doesn't count the federal subsidies that Cape Wind will receive from national taxpayers. It does, however, include the extra 6.1 cents per kilowatt hour that Massachusetts utilities are mandated to pay for wind, solar and the like under a 2008 state law called the Green Communities Act. Also under that law, at least 15% of power company portfolios must come from renewable sources by 2020.
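To see how the 3.5% escalator compounds over the contract's life, here is a short illustrative sketch using only the terms reported above; the contracted volume is not given, so no dollar total is computed.

# Price path of the proposed 15-year contract: 20.7 cents/kWh in 2013,
# rising 3.5% a year, against roughly 9 cents/kWh for conventional power.
START_PRICE = 0.207    # $ per kWh in 2013
ESCALATOR = 0.035      # 3.5% annual escalation
CONVENTIONAL = 0.09    # approximate conventional rate, $ per kWh

for year in (2013, 2020, 2027):
    price = START_PRICE * (1 + ESCALATOR) ** (year - 2013)
    print(f"{year}: {price * 100:.1f} cents/kWh "
          f"({price / CONVENTIONAL:.1f}x the conventional rate)")

By the contract's final year the rate approaches 34 cents per kilowatt-hour, nearly four times today's conventional price.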

Two weeks ago, U.S. Interior Secretary Ken Salazar approved Cape Wind, placing it in the vanguard of "a clean energy revolution." A slew of environmental and political outfits have since filed multiple lawsuits for violations of the Endangered Species Act, the National Environmental Policy Act, the Outer Continental Shelf Lands Act, certain tribal-protection laws, the Clean Water Act, the Migratory Bird Treaty Act and the Rivers and Harbors Act.

There's comic irony in this clean energy revolution getting devoured by the archaic regulations of previous clean energy revolutions. But given that taxpayers will be required to pay to build Cape Wind and then required to buy its product at prices twice normal rates, opponents might have more success if they simply pointed out what a lousy deal it is.

Saturday, May 1, 2010

India's Government By Quota - The affirmative-action plan to eliminate caste discrimination was supposed to last 10 years. Instead it has become a permanent, and divisive, fact of life

India's Government By Quota. By SHIKHA DALMIA
The affirmative-action plan to eliminate caste discrimination was supposed to last 10 years. Instead it has become a permanent, and divisive, fact of life.
WSJ, May 01, 2010

For nearly half a century, group or racial preferences have been America's prescribed remedy for racism and other -isms standing in the way of social equality. But anyone wishing to study the unintended side-effects of this medicine on the body politic need only look at India. There, reactionary groups are trying to co-opt a women's quota bill, not to create an egalitarian utopia, but its opposite.

India's ruling secular Congress party has joined hands with Hindu nationalist parties on a bill to guarantee 33% of the seats in the parliament and state legislatures to women. This is on top of a similar quota that women enjoy at the local or panchayat level. The bill sailed through the upper house but has met stiff resistance from India's lower-caste parties. Why? Because it threatens their monopoly on the country's quota regime.

Just as racism is the bane of America, caste is the bane of India; its rigid strictures for centuries sustained a stratified society where birth is destiny. Although caste has declined in India's large, cosmopolitan cities, elsewhere this system still restricts social mobility for the country's 100 million dalits (untouchables). They are not only consigned to demeaning jobs but are not even allowed to pray in the same temples as upper castes.

But the scheme that India's founders devised to eradicate the caste system has actually deepened the country's caste divide, and created several more. The women's quota bill is only the latest development in the competition for victimhood status that has pitted every group with any grievance, real or imagined, against every other.

India's founders began on the right track, constitutionally banning untouchability in 1950 and, just as in America, guaranteeing equal treatment under the law for everyone regardless of caste, sex, religion or race. But then came the fatal leap. They created a list or "schedule" of all the dalit sub-castes deserving preferential treatment and handed them 17.5% of the seats in the parliament and state legislatures. They also gave them 22.5% of all public-sector jobs and guaranteed spots in public or publicly funded universities.

The scheme was supposed to last 10 years. Instead it assumed a life of its own, making scheduled-caste status a bigger driver of success than individual merit (at least before liberalization opened opportunities in the private sector).

The tipping point came in the late 1980s with the government's Mandal Commission. This body, charged with examining the plight of the poor and disenfranchised, concluded in its final report that the original list of scheduled castes was too short. It recommended a new, catch-all category called Other Backward Classes, covering over half the population, and called for reserving 49.5% of government jobs and university seats for these groups.

The report caused an uproar. Hindu students from nonscheduled castes, particularly those from modest backgrounds, exploded into riots. Already rubbed raw by the existing quota regime, which allowed academically inferior, scheduled-caste candidates to breeze into the best universities and land secure government jobs while they struggled, they took to the streets. A few immolated themselves, one big reason why the government collapsed in November 1990. But the quota system survived, and post-riot governments have slowly expanded it.

Quotas have become a fact of life in India because they are the major currency with which Indian politicians buy votes. In a few states with their own quotas, almost 70% of government jobs and university seats go to the reserved castes.

The major political resistance to the quota regime during the Mandal riots came from Hindu nationalist parties—but that was before they found a way to make it work for them. In some states like Rajasthan they have actually instituted quotas for the poor "forward castes"—code for upper-caste Hindus.

And these parties wholeheartedly back the latest women's quota bill because it will simultaneously allow them to: establish their progressive bona fides; once again stick it to Muslims, arguably the only genuinely disenfranchised minority without its own legislative quota; and consolidate their power base in parliament since the women elected are likely to be relatively well-off Hindus.

A tragicomic note in this drama is Raj Thackeray, an ultra-nativist Hindu politician from Mumbai who wants to chase all out-of-state residents out of his city. He is warning the lower-caste leaders to show respect for women by supporting this bill or else "they will be given a lesson on it."

Protests have broken out across the country, with Muslim and lower-caste women opposing the bill as currently written and urbane city feminists demanding its immediate passage. But the lower-caste parties' only objection is that the quota bill doesn't contain a sub-quota for lower-caste women. In other words, the debate in India is no longer about using quotas to redistribute opportunity—it is about redistributing the quotas themselves. No politician or party is opposing this bill on principle.

It would be tempting to blame the abuse of quotas on the degraded state of Indian politics. But, in reality, India is demonstrating the reductio ad absurdum logic of quotas.

Progressives in India—as in America—believe that equal protection of individual rights is insufficient to create equality because it does nothing to address private discrimination. Protecting the property rights of persecuted castes is hardly enough if they can't get jobs in the first place. Hence, in their view, government has to give persecuted groups a leg up to equalize opportunity.

But this turns the system into a zero-sum game, triggering a race for the spoils in which powerful groups can seize the advantage. Because quotas or preferences don't originally apply to them, they become the new aggrieved—victims of "reverse discrimination." And it is easy for them to mobilize this sentiment into a political movement precisely because they are powerful.

India's lesson is that abrogating individual rights through group preferences or quotas institutionalizes the very divisions that these policies are supposed to erase. Human prejudice can't be legislated away. That requires social activism to coax, cajole and shame people out of their intolerance. There are no short cuts.

Ms. Dalmia is a senior analyst at the Reason Foundation and a Forbes columnist.