Tuesday, July 31, 2012

Measuring Systemic Liquidity Risk and the Cost of Liquidity Insurance. By Tiago Severo

Measuring Systemic Liquidity Risk and the Cost of Liquidity Insurance. By Tiago Severo
IMF Working Paper No. 12/194
Jul 31, 2012
http://www.imfbookstore.org/ProdDetails.asp?ID=WPIEA2012194

Summary: I construct a systemic liquidity risk index (SLRI) from data on violations of arbitrage relationships across several asset classes between 2004 and 2010. Then I test whether the equity returns of 53 global banks were exposed to this liquidity risk factor. Results show that the level of bank returns is not directly affected by the SLRI, but their volatility increases when liquidity conditions deteriorate. I do not find a strong association between bank size and exposure to the SLRI - measured as the sensitivity of volatility to the index. Surprisingly, exposure to systemic liquidity risk is positively associated with the Net Stable Funding Ratio (NSFR). The link between equity volatility and the SLRI allows me to calculate the cost that would be borne by public authorities for providing liquidity support to the financial sector. I use this information to estimate a liquidity insurance premium that could be paid by individual banks in order to cover that social cost.

Excerpts:

Introduction

Liquidity risk has become a central topic for investors, regulators and academics in the aftermath of the global financial crisis. The sharp decline of real estate prices in the U.S. and parts of Europe and the consequent collapse in the values of related securities created widespread concerns about the solvency of banks and other financial intermediaries. The resulting increase in counterparty risk induced investors to shy away from risky short-term funding markets [Gorton and Metrick (2010)] and to store funds in safe and liquid assets, especially U.S. government debt. The dry-up in funding markets hit hard the levered financial intermediaries involved in maturity and liquidity transformation [Brunnermeier (2009)], propagating the initial shock through global markets.

Central bankers in major countries responded to the contraction in liquidity by pumping an unprecedented amount of funds into securities and interbank markets, and by creating and extending liquidity backstop lines to rescue troubled financial intermediaries. Such measures have exposed public finances, and ultimately taxpayers, to the risk of substantial losses. Understanding the origins of systemic liquidity risk in financial markets is therefore invaluable for policymakers seeking to reduce the chance of facing these very same challenges again in the future. In particular, if public support in periods of widespread distress cannot be prevented—due to commitment problems—supervisors and regulators should ensure that financial intermediaries are properly monitored and charged to reflect the contingent benefits they enjoy.

The present paper makes three contributions to the topic of systemic liquidity risk:

1) It produces a systemic liquidity risk index (SLRI) calculated from violations of “arbitrage” relationships in various securities markets (one possible construction is sketched just after this list).

2) It estimates the exposure of 53 global banks to this aggregate risk factor.

3) It uses the information in 2) to devise an insurance system where banks pre-pay for the costs faced by public authorities for providing contingent liquidity support.
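
The excerpts do not spell out how the individual arbitrage violations are aggregated into the SLRI. As a minimal sketch, assuming the index is extracted as the common component of standardized deviation series (the input series and the principal-component aggregation are illustrative assumptions, not the paper's stated method):

import numpy as np

def slri(deviations):
    # deviations: (T, N) array; each column is one arbitrage-violation
    # series, e.g. a CIP basis or a CDS-bond basis in basis points
    # (hypothetical inputs).
    z = (deviations - deviations.mean(axis=0)) / deviations.std(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(z, rowvar=False))
    w = eigvecs[:, -1]          # loadings on the largest eigenvalue
    w = w * np.sign(w.sum())    # sign convention: higher index = more stress
    return z @ w                # one index value per observation date

An index built this way spikes when many unrelated arbitrage relationships break down at once, which is the signature of a systemic liquidity event.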

Results indicate that systemic illiquidity became a widespread problem in the aftermath of Lehman’s bankruptcy and receded only after several months. Systemic liquidity risk spiked again during the Greek sovereign crisis in the second quarter of 2010, albeit at much more moderate levels. Yet the renewed concerns regarding sovereign default in peripheral Europe observed in the last quarter of 2010 did not induce global liquidity shortfalls.

In terms of the exposures of individual institutions, I find that, in general, systemic liquidity risk does not affect the level of bank stock returns in a systematic fashion. However, liquidity risk is strongly correlated with the volatility of bank stocks: system-wide illiquidity is associated with riskier banks. Estimates also show that U.S. and U.K. banks were relatively more exposed to liquidity conditions compared to Japanese institutions, with continental European banks lying in the middle of the distribution. More specifically, the results indicate that U.S. and U.K. banks’ stocks became much more volatile relative to their Asian peers when liquidity evaporated. This likely reflects the higher degree of maturity transformation and the reliance on very short-term funding by Anglo-Saxon banks. A natural question is whether bank-specific characteristics beyond geographic location explain the different degrees of exposure to liquidity risk.

I start the quest for those bank characteristics by looking at the importance of bank size for liquidity risk exposure. Market participants, policymakers and academics have highlighted the role of size and interconnectedness as a source of systemic risk. To verify this claim, I form quintile portfolios of banks based on market capitalization and test whether there are significant differences in the sensitivity of their return volatility to the SLRI. The estimates suggest that size has implications for liquidity risk, but the relationship is highly non-linear. The association between size and sensitivity to liquidity conditions is only relevant for the very large banks, and it becomes pronounced only when liquidity conditions deteriorate substantially.
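
As a rough sketch of this portfolio exercise, assuming equal-weighted quintile portfolios and a simple covariance-based sensitivity of realized volatility to the index (the paper's actual volatility specification may differ):

import pandas as pd

def quintile_vol_sensitivity(returns, mcap, slri, window=22):
    # returns: DataFrame (days x banks); mcap: Series of market caps
    # indexed by bank; slri: Series of index values on the same dates.
    quintile = pd.qcut(mcap, 5, labels=False)   # 0 = smallest ... 4 = largest
    sensitivity = {}
    for g in range(5):
        banks = quintile[quintile == g].index
        port = returns[banks].mean(axis=1)      # equal-weighted portfolio return
        vol = port.rolling(window).std()        # rolling realized volatility
        df = pd.concat({"vol": vol, "slri": slri}, axis=1).dropna()
        sensitivity[g] = df["vol"].cov(df["slri"]) / df["slri"].var()
    return sensitivity

A strongly non-linear size effect would show up as a sensitivity that is flat across the first four quintiles and jumps only for the largest banks.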

Recently, the Basel Committee on Banking Supervision produced, for the first time, a framework (based on balance sheet information) to regulate banks’ liquidity risk. In particular, it proposed two liquidity ratios to be monitored by supervisors: the Liquidity Coverage Ratio (LCR), which indicates banks’ ability to withstand a short-term liquidity crisis, and the Net Stable Funding Ratio (NSFR), which measures the long-term, structural funding mismatches in a bank. Forming quintile portfolios based on banks' NSFR, I find that, if anything, the regulatory ratio is positively associated with exposure to the SLRI. In other words, banks with a high NSFR (the ones deemed to be structurally more liquid) are in fact slightly more sensitive to liquidity conditions. This counterintuitive result needs to be qualified. As noted later, the SLRI captures short-term liquidity stresses, whereas the NSFR is designed as a medium- to long-term indicator of liquidity conditions. Certainly, it would be more appropriate to test the performance of the LCR instead. However, the data necessary for its computation are not readily available.

The link between bank stock volatility and the SLRI allows me to calculate the cost faced by public authorities for providing liquidity support for banks. Relying on the contingent claims approach (CCA), I use observable information on a bank’s equity and the book value of its liabilities to back out the unobserved level and volatility of its assets. I then estimate by how much the level and volatility of implied assets change as liquidity conditions deteriorate, and how such changes affect the price of a hypothetical put option on the bank’s assets. Because the price of this put indicates the cost of public support to banks, variations in the put due to fluctuations in the SLRI provide a benchmark for charging banks according to their exposure to systemic liquidity risk, a goal that has been advocated by many experts on bank regulation. 
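
A minimal sketch of that contingent claims step, assuming the standard Merton mapping from observed equity to unobserved assets (the numeric parameters and the one-year horizon are illustrative assumptions):

import numpy as np
from scipy.optimize import fsolve
from scipy.stats import norm

def support_put(E, sigma_E, D, r=0.02, T=1.0):
    # Back out implied asset value A and asset volatility sA from equity
    # value E, equity volatility sigma_E, and debt barrier D; then price
    # the put option that proxies the cost of public support.
    def eqs(x):
        A, sA = x
        d1 = (np.log(A / D) + (r + 0.5 * sA ** 2) * T) / (sA * np.sqrt(T))
        d2 = d1 - sA * np.sqrt(T)
        return (A * norm.cdf(d1) - np.exp(-r * T) * D * norm.cdf(d2) - E,
                A * sA * norm.cdf(d1) - E * sigma_E)
    A, sA = fsolve(eqs, x0=(E + D, sigma_E * E / (E + D)))
    d1 = (np.log(A / D) + (r + 0.5 * sA ** 2) * T) / (sA * np.sqrt(T))
    d2 = d1 - sA * np.sqrt(T)
    return np.exp(-r * T) * D * norm.cdf(-d2) - A * norm.cdf(-d1)

Because equity volatility rises with the SLRI, repricing this put at the SLRI-dependent level and volatility of implied assets traces out how the implicit support subsidy grows as systemic liquidity deteriorates, which is the basis for the insurance premium.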


http://www.imf.org/external/pubs/cat/longres.aspx?sk=26131.0

Sunday, July 29, 2012

Austerity Debate a Matter of Degree -- In Europe, Opinions Differ on Depth, Timing of Cuts; International Monetary Fund Has Change of Heart

Austerity Debate a Matter of Degree. By Stephen Fidler
In Europe, Opinions Differ on Depth, Timing of Cuts; International Monetary Fund Has Change of Heart
Wall Street Journal, February 17, 2012
http://online.wsj.com/article/SB10001424052970204792404577227273553955752.html

Excerpts

In the U.S., the debate about whether the government should start cutting its budget deficit opens up a deep ideological divide. Many countries in Europe don't have that luxury.

True, there may be questions about how hard to cut budgets and how best to time the cuts, but with government-bond investors going on strike, policy makers either don't have a choice or feel they don't. Budget austerity is also a recipe favored by Germany and other euro-zone governments that hold the Continent's purse strings.

Once upon a time, the International Monetary Fund, which also provides bailout funds and lends its crisis management expertise to euro-zone governments, would have been right there with the Germans: It never handled a financial crisis for which tough austerity wasn't the prescribed medicine. In Greece, however, officials say the IMF supported spreading the budget pain over a number of years rather than concentrating it at the front end.

That is partly because overpromising the undeliverable hurts government credibility, which is essential to overcoming the crisis. But it is also because the IMF's view has shifted.

"Over its history, the IMF has become less dogmatic about fiscal austerity being always the right response to a crisis," said Laurence Ball, economics professor at Johns Hopkins University, and a part-time consultant to the IMF.

These days, the fund worries more than it did about the negative impact that cutting budgets has on short-term growth prospects—a traditional concern of Keynesian economists.

"Fiscal consolidation typically has a contractionary effect on output. A fiscal consolidation equal to 1% of [gross domestic product] typically reduces GDP by about 0.5% within two years and raises the unemployment rate by about 0.3 percentage point," the IMF said in its 2010 World Economic Outlook:

But that isn't the full story. In the first place, the IMF agrees that reducing government debt—which is what austerity should eventually achieve—has long-term economic benefits. For example, in a growing economy close to full employment, reduced competition for savings should lower the cost of capital for private entrepreneurs.

That suggests that, where bond markets give governments the choice, there is a legitimate debate to be had about the timing of austerity. The IMF's economic models suggest it will be five years before the "break-even" point when the benefits to growth of cutting debt start to exceed the "Keynesian" effects of austerity.

There is an alternative hypothesis that has a lot of support in Germany, and among the region's central bankers. This is the notion that budget cutbacks stimulate growth in the short term, often referred to as the "expansionary fiscal contraction" hypothesis.

Manfred Neumann, professor emeritus of economics at the Institute for Economic Policy at the University of Bonn, said the view is also called the "German hypothesis" since it emerged from a round of German budget cutting in the early 1980s.

"The positive effect of austerity is much stronger than most people believe," he said. The explanation for the beneficial impact is that cutting government debt generates an improvement in confidence among households and entrepreneurs, he said.

The IMF concedes there may be something in this for countries where people are worried about the risk that the government might default—but only up to a point. It concedes that fiscal retrenchment in such countries "tends to be less contractionary" than in countries not facing market pressures—but doesn't conclude that budget cutting in such circumstances is actually expansionary.

Each side of the debate invokes its own favored study. Support for the "German hypothesis" comes from two Harvard economists with un-German names—Alberto Alesina and Silvia Ardagna. But their critics, who include Mr. Ball, say their sample includes many irrelevant episodes for which their model fails to correct—including, for example, the U.S. "fiscal correction" that was born out of the U.S. economic boom of the late 1990s.

Mr. Alesina didn't respond to an email asking for comment, but Mr. Neumann said he isn't confident that studies, such as the IMF's, that appear to refute the hypothesis manage to isolate the effects of the austerity policy from other effects of a financial crisis.

Some of the IMF's conclusions, however, bode ill for the euro zone's budget cutters.

The first is that the contractionary effects of fiscal retrenchment are often partly offset by an increase in exports—but less so in countries where the exchange rate is fixed. Second, the pain is greater if central banks can't offset the fiscal austerity through a stimulus in monetary policy. With interest rates close to zero in the euro zone, such a stimulus is hard to achieve. Third, when many countries are cutting budgets at the same time, the effect on economic activity in each is magnified.

If you are a government in budget-cutting mode, there are, however, better and worse ways of doing it. The IMF says spending cuts tend to have less negative impact on the economy than tax increases. However, that is partly because central banks tend to cut interest rates more aggressively when they see spending cuts.

Mr. Neumann sees an austerity hierarchy. It is better to cut government consumption and transfers, including staff costs, than government investment—though it may be harder politically. If you are raising taxes, better to raise those with no impact on incentives—such as inheritance or wealth taxes—than those that hurt incentives, such as income or payroll taxes.

Raising sales or value-added taxes may have less impact on incentives—but have other undesirable effects, such as increasing inflation, that could deter central banks from easing policy.

Saturday, July 28, 2012

The Statistical Definition of Public Sector Debt. An Overview of the Coverage of Public Sector Debt for 61 Countries

What Lies Beneath: The Statistical Definition of Public Sector Debt. An Overview of the Coverage of Public Sector Debt for 61 Countries. By Robert Dippelsman, Claudia Dziobek, and Carlos A. Gutiérrez Mangas
IMF Staff Discussion Note
http://www.imf.org/external/pubs/cat/longres.aspx?sk=26101.0

Excerpts

Executive Summary

While key macroeconomic indicators such as Gross Domestic Product (GDP) or the Consumer Price Index (CPI) are based on internationally accepted methodologies, indicators related to the debt of the public sector often do not follow international standards and can have several different definitions. As this paper shows, the absence of a standard nomenclature can lead to major misunderstandings in the fiscal policy debate. The authors present examples that show that debt-to-GDP ratios for a country at any given time can range from 40 to over 100 percent depending on the definition used. Debt statistics, for example, may include or exclude state and local governments and may cover all debt instruments or just a subset. The authors suggest that gross debt of the general government (“gross debt”) should be globally adopted as the headline indicator, supplemented by other measures of government debt for risk-based assessments of the fiscal position. Broader measures, including net debt and detailed information on contingent liabilities and derivatives, could be considered. The standard nomenclature of government and of debt instruments helps users understand the concepts in line with the Public Sector Debt Statistics Guide. Use of more standard definitions of government debt would improve data comparability, would benefit IMF surveillance, programs, and debt sustainability analysis, and would help country authorities specify and monitor fiscal rules. Data disaggregated by government subsector and debt instrument for 61 countries from the IMF’s Government Finance Statistics Yearbook (GFSY) database are presented to illustrate the importance and viability of adopting this approach.

Most key macroeconomic indicators such as GDP, the consumer price index (CPI), data on monetary aggregates or balance of payments follow internationally accepted definitions. In contrast, countries often do not follow international guidelines for public debt data. As this paper shows, failure to apply global standards can lead to important misunderstandings because of the potentially large magnitudes involved. International guidelines on the compilation of public sector debt are well established and are summarized in the recently published Public Sector Debt Statistics Guide (Debt Guide). The Debt Guide also describes applications of these guidelines for the analysis of debt sustainability, fiscal risk, and vulnerability.

The authors seek in this paper to provide a more intuitive application of the various concepts and definitions found in the Debt Guide, and propose that global standard definitions of “gross debt” referring to the “general government” be adopted as a headline measure. As with other headline indicators, a variety of narrower and wider indicators remain valuable and useful for different purposes. The notion of gross debt will be familiar to macroeconomic statisticians, but, as a practical matter, the adoption of global standard statistical definitions of debt will require some development efforts in terms of source data availability and training for compilers of debt statistics. A particular challenge is complete coverage of all relevant institutions and financial instruments. Detailed information on contingent liabilities and derivatives should also be considered. Coordination across agencies that work with debt-related data is also critical, as with other complex datasets such as GDP.

Many users are not aware of the extent to which differences in concepts and methods matter. Box 1 below highlights the four key dimensions of public sector debt. Countries publish data, for example, either including or excluding state and local governments, pension funds, and public corporations. Also, while much of the policy debate centers on government liabilities, some countries have begun to publish and focus policy analysis on net debt (gross debt minus the corresponding financial assets). Debt data frequently include only two of the six debt instruments available: debt securities and loans. Debt instruments such as other accounts payable or insurance technical reserves are often not taken into account. In many cases the method of valuation is not explicitly mentioned even though market versus nominal valuation can be significantly different. Consolidation, which refers to the process of netting out intra-governmental obligations, is another important factor rarely specified in published data. And finally, debt data may be compiled using cash data and excluding non-cash items such as arrears, or using accrual (or partial accrual) methods to reflect important non-cash obligations.
Box 1. Key Dimensions to Measure Government Gross Debt
Institutional Coverage of Government
Instrument Coverage of Debt
Valuation of Debt Instruments (market and nominal)
Consolidation of Intra-Government Holdings

Source: Public Sector Debt Statistics Guide.
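
A stylized numerical sketch of how much these dimensions matter (all magnitudes are hypothetical, not drawn from the GFSY data, and the instrument buckets are simplified):

# One country, one year: the debt ratio under a narrow and a broad definition.
gdp = 1000.0
central_gov_securities_loans = 400.0   # narrow: central government, D1-style
state_local_debt = 150.0               # widening institutional coverage to GL3
other_accounts_payable = 80.0          # widening instrument coverage...
insurance_pension_reserves = 420.0     # ...toward a D4-style measure

narrow = central_gov_securities_loans
broad = (central_gov_securities_loans + state_local_debt
         + other_accounts_payable + insurance_pension_reserves)
print(f"narrow definition: {narrow / gdp:.0%} of GDP")   # 40% of GDP
print(f"GL3/D4-style:      {broad / gdp:.0%} of GDP")    # 105% of GDP

The same country can thus report a debt ratio of 40 percent or over 100 percent of GDP depending solely on coverage, which is the range the authors document.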

Conclusions

The headline indicator for government debt should be defined as “gross debt of the general government,” or GL3/D4 in this paper’s nomenclature. The authors suggest that countries should aspire to publish timely data on the broader concept of gross debt.

Data on the institutional level of the general government (GL3) would be consistent with a broad range of data uses and with the data requirements of other macroeconomic datasets, notably the national accounts. Including the full range of debt instruments is desirable particularly because some of these may expand in times of financial distress and could thus serve as valuable indicators of distress. Clarity of what the debt data cover would help build understanding of the data and their comparability across countries.

A global standard would facilitate communication on the main concepts in public sector debt statistics and it would bring greater precision to research on fiscal issues, and lead to improved cross-country comparison. This framework uses a nomenclature inspired by the approach in monetary data where M1 through M4 (monetary aggregates) reflect institutional and instrument coverage as well.

The methodological framework of government debt presented here is widely accepted among statisticians. The relevant definitions, concepts, classification, and guidance of compilation are summarized in GFSM 2001 and the Debt Guide. These standards are fully consistent with the overarching statistical methodology of the 2008 SNA and other international macroeconomic methodologies such as the Sixth Edition of the Balance of Payments and International Investment Position Manual (BPM6) and broadly consistent with the European System of Accounts (ESA) manual and the more specialized manuals of deficit and debt that govern the Excessive Deficit Procedure.

However, the methodology is not always well defined in the policy debate. An international convention to view GL3/D4 as the desirable headline indicator of government debt, consistent with the international standards, would go a long way toward creating more transparency and better comparability of international data.

Our contribution is to provide a presentational framework and nomenclature that highlights the importance of different instruments, institutional coverage, and valuation and consolidation as key indicators of debt. Indeed, we have noted that other, more narrowly defined concepts can meaningfully supplement the comprehensive measure of debt. These narrower measures may be important for a risk-based assessment of the fiscal position, but they are not substitutes for a global indicator.

Further extensions of this work include the development of statistical reporting of broader measures, for example net debt of the general government, and the presentation of information on derivatives and contingent liabilities.

The new debt database launched by the IMF and World Bank in 2010 is structured along government levels, debt instruments, consolidation and valuation as discussed in this paper. However, some countries report data only on the GL2 level and cover mostly D1. Developing data on the broader statistics will take some time, although Australia, Canada, and some other countries already publish or plan to publish GL3/D4 data or publish components that would allow the calculation of GL3/D4.

Debt statistics for various levels of government and instruments were shown for 61 countries, and these data highlight some interesting patterns that merit further analysis, such as the degree of fiscal autonomy of state and local governments to issue debt and the degree of development of markets for government debt securities. The authors conclude that further research would be worthwhile on the advantages of a global standard of government debt for such topics as data comparability, IMF surveillance, programs, debt sustainability analysis, and the analysis of fiscal rules.

Thursday, July 26, 2012

BIS - Capital requirements for bank exposures to central counterparties + Basel III counterparty credit risk FAQs + other doc

Basel III counterparty credit risk - Frequently asked questions (update of FAQs published in November 2011) (25.07.2012 12:10)
http://www.bis.org/publ/bcbs228.htm

Regulatory treatment of valuation adjustments to derivative liabilities: final rule issued by the Basel Committee (25.07.2012 12:05)
http://www.bis.org/press/p120725b.htm

Capital requirements for bank exposures to central counterparties (25.07.2012 12:00)
http://www.bis.org/publ/bcbs227.htm

Wednesday, July 18, 2012

On Graen's "Unwritten Rules for Your Career: 15 Secrets for Fast-track Success"

Miner (2005) says (ch. 14), citing Graen (1989), that those interested in achieving their personal ends would need to focus on:
things a person should do to achieve fast-track status in management, what unwritten rules exist in organizations, and how to become an insider who understands these rules and follows them to move up the hierarchy. These unwritten rules are part of the informal organization and constitute the secrets of organizational politics.


There are fifteen such secrets of the fast track:

1. Find the hidden strategies of your organization and use them to achieve your objectives.  (This involves forming working relationships—networks—with people who have access to special resources, skills, and abilities to do important work.)

2. Do your homework in order to pass the tests. (These tests can range from sample questions to command performances; you should test others, as well, to evaluate sources of information.)

3. Accept calculated risks by using appropriate contingency plans. (Thus, learn to improve your decision average by taking calculated career risks.)

4. Recognize that apparently complete and final plans are merely flexible guidelines to the actions necessary for implementation. (Thus, make your plans broad and open-ended so that you can adapt them as they are implemented.)

5. Expect to be financially undercompensated for the first half of your career and to be overcompensated for the second half. (People on the fast track inevitably grow out of their job descriptions and take on extra duties beyond what they are paid to do.)

6. Work to make your boss successful. (This is at the heart of the exchange between the two of you and involves a process of reciprocal promotion.)

7. Work to get your boss to promote your career. (This is the other side of the coin and involves grooming your replacement as well.)

8. Use reciprocal relationships to build supportive networks. (It is important that these be competence networks involving effective working relationships and competent people.)

9. Do not let your areas of competence become too narrowly specialized. (Avoid the specialist's trap by continually taking on new challenges.)

10. Try to act with foresight more often than with hindsight. (Be proactive by identifying the right potential problem, choosing the right solution, and choosing the best implementation process.)

11. Develop cordial relationships with your competitors: Be courteous, considerate, and polite in all relationships. (You need not like all these people, but making unnecessary enemies is an expensive luxury.)

12. Seek out key expert insiders and learn from them. (Have numerous mentors and preserve these relationships as part of your reciprocal network.)

13. Make sure to acknowledge everyone’s contribution. (Giving credit can be used as a tool to develop a network of working relationships.)

14. Prefer equivalent exchanges between peers instead of rewards and punishments between unequal partners. (Equivalent exchanges are those in which a resource, service, or behavior is given with the understanding that something of equivalent value will eventually be returned; this requires mutual trust.)

15. Never take unfair advantage of anyone, and avoid letting anyone take unfair advantage of you. (Networks cannot be maintained without a reputation for trustworthiness.)


More recently, in another book, Graen (2003) has revisited this topic and set forth another partially overlapping list of thirteen actions that distinguish key players from others [...]. These guidelines [...] for how to play the hierarchy and gain fast-track status are as follows:

1. Demonstrate initiative to get things done (i.e., engage in organizational citizenship behaviors).

2. Exercise leadership to make the unit more effective (i.e., become an informal group leader).

3. Show a willingness to take risks to accomplish assignments (i.e., go against group pressures in order to surface problems if necessary).

4. Strive to add value to the assignments (i.e., enrich your own job by making it more challenging and meaningful).

5. Actively seek out new job assignments for self-improvement (i.e., seek out opportunities for growth).

6. Persist on a valuable project after others give up (and learn not to make the same mistake twice).

7. Build networks to extend capability, especially among those responsible for getting work done.

8. Influence others by doing something extra (i.e., this means building credibility and adjusting your interpersonal style to match others).

9. Resolve ambiguity by dealing with it constructively (i.e., gather as much information as possible and obtain frequent feedback).

10. Seek wider exposure to managers outside the home division, which helps in gathering information.

11. Build on existing skills. Apply technical training on the job and build on that training to develop broader expertise; be sure not to allow obsolescence to creep in.

12. Develop a good working relationship with your boss. Work to build and maintain a close working relationship with the immediate supervisor (strive to build a high-quality leader-member exchange, or LMX, and devote energy to this goal; see Maslyn and Uhl-Bien, 2001).

13. Promote your boss. Work to get the immediate supervisor promoted (i.e., try to make that person look good; as your boss goes up, so may you).


Bibliography
Graen, George (1989). Unwritten Rules for Your Career: 15 Secrets for Fast-track Success. New York: John Wiley.
Graen, George (2003). Dealing with Diversity. Greenwich, CT: Information Age Publishing.
Miner, John B. (2005). Organizational Behavior 1: Essential Theories of Motivation and Leadership. Armonk, NY: M. E. Sharpe.

Tuesday, July 10, 2012

Quality of Government and Living Standards: Adjusting for the Efficiency of Public Spending

Quality of Government and Living Standards: Adjusting for the Efficiency of Public Spending. By Grigoli, Francesco; Ley, Eduardo
IMF Working Paper No. 12/182
Jul 2012
http://www.imf.org/external/pubs/cat/longres.aspx?sk=26052.0

Summary: It is generally acknowledged that the government’s output is difficult to define and its value is hard to measure. The practical solution, adopted by national accounts systems, is to equate output to input costs. However, several studies estimate significant inefficiencies in government activities (i.e., the same output could be achieved with fewer inputs), implying that inputs are not a good approximation for outputs. If taken seriously, the next logical step is to purge from GDP the fraction of government inputs that is wasted. As differences in the quality of the public sector have a direct impact on citizens’ effective consumption of public and private goods and services, we must take them into account when computing a measure of living standards. We illustrate such a correction by computing corrected per capita GDPs on the basis of two studies that estimate efficiency scores for several dimensions of government activities. We show that the correction could be significant, and rankings of living standards could be re-ordered as a result.

Excerpts:

Despite its acknowledged shortcomings, GDP per capita is still the most commonly used summary indicator of living standards. Much of the policy advice provided by international organizations is based on macroeconomic magnitudes as shares of GDP, and framed on cross-country comparisons of per capita GDP. However, what GDP does actually measure may differ significantly across countries for several reasons. We focus here on a particular source for this heterogeneity: the quality of public spending. Broadly speaking, the ‘quality of public spending’ refers to the government’s effectiveness in transforming resources into socially valuable outputs. The opening quote highlights the disconnect between spending and value when the discipline of market transactions is missing.

Everywhere around the world, non-market government accounts for a big share of GDP and yet it is poorly measured—namely the value to users is assumed to equal the producer’s cost.  Such a framework is deficient because it does not allow for changes in the amount of output produced per unit of input, that is, changes in productivity (for a recent review of this issue, see Atkinson and others, 2005). It also assumes that these inputs are fully used. To put it another way, standard national accounting assumes that government activities are on the best practice frontier. When this is not the case, there is an overstatement of national production.  This, in turn, could result in misleading conclusions, particularly in cross-country comparisons, given that the size, scope, and performance of public sectors vary so widely.

Moreover, in the national accounts, this attributed non-market (government and non-profit sectors) “value added” is further allocated to the household sector as “actual consumption.” As Deaton and Heston (2008) put it: “[...] there are many countries around the world where government-provided health and education is inefficient, sometimes involving mass absenteeism by teachers and health workers [...] so that such ‘actual’ consumption is anything but actual. To count the salaries of AWOL government employees as ‘actual’ benefits to consumers adds statistical insult to original injury.” This “statistical insult” logically follows from the United Nations System of National Accounts (SNA) framework once ‘waste’ is classified as income—since national income must be either consumed or saved. Absent teachers and health care workers are all too common in many low-income countries (Chaudhury and Hammer, 2004; Kremer and others, 2005; Chaudhury and others, 2006; and World Bank, 2004). Beyond straight absenteeism, which is an extreme case, generally there are significant cross-country differences in the quality of public sector services. World Bank (2011) reports that in India, even though most children of primary-school age are enrolled in school, 35 percent of them cannot read a simple paragraph and 41 percent cannot do a simple subtraction.

It must be acknowledged, nonetheless, that for many of government’s non-market services, the output is difficult to define, and without market prices the value of output is hard to measure. It is because of this that the practical solution adopted in the SNA is to equate output to input costs. This choice may be more adequate when using GDP to measure economic activity or factor employment than when using GDP to measure living standards.

Moving beyond this state of affairs, there are two alternative approaches. One is to try to find indicators for both output quantities and prices for direct measurement of some public outputs, as recommended in SNA 93 (but yet to be broadly implemented). The other is to correct the input costs to account for productive inefficiency, namely to purge from GDP the fraction of these inputs that is wasted. We focus here on the nature of this correction. As the differences in the quality of the public sector have a direct impact on citizens’ effective consumption of public and private goods and services, it seems natural to take them into account when computing a measure of living standards.

To illustrate, in a recent study, Afonso and others (2010) compute public sector efficiency scores for a group of countries and conclude that “[...] the highest-ranking country uses one-third of the inputs as the bottom-ranking one to attain a certain public sector performance score. The average input scores suggest that countries could use around 45 per cent less resources to attain the same outcomes if they were fully efficient.” In this paper, we take such a statement to its logical conclusion. Once we acknowledge that the same output could be achieved with fewer inputs, output value cannot be equated to input costs. In other words, waste should not belong in the living-standards indicator—it still remains a cost of government but it must be purged from the value of government services. As noted, this adjustment is especially relevant for cross-country comparisons.

...

In this context, as noted, the standard practice is to equate the value of government outputs to its cost, notwithstanding the SNA 93 proposal to estimate government outputs directly. The value added that, say, public education contributes to GDP is based on the wage bill and other costs of providing education, such as outlays for utilities and school supplies. Similarly for public health, the wage bill of doctors, nurses, and other medical staff, plus medical supplies, largely comprises its value added. Thus, in the (pre-93) SNA used almost everywhere, non-market output, by definition, equals total costs. Yet the same costs support widely different levels of public output, depending on the quality of the public sector.

Note that value added is defined as payments to factors (labor and capital) and profits. Profits are assumed to be zero in the non-commercial public sector. As for the return to capital, in the current SNA used by most countries, public capital is attributed a net return of zero—i.e., the return from public capital is equated to its depreciation rate. This lack of a net return measure in the SNA is not due to a belief that the net return is actually zero, but to the difficulties of estimating the return.

Atkinson and others (2005, page 12) state some of the reasons behind current SNA practice: “Wide use of the convention that (output = input) reflects the difficulties in making alternative estimates. Simply stated, there are two major problems: (a) in the case of collective services such as defense or public administration, it is hard to identify the exact nature of the output, and (b) in the case of services supplied to individuals, such as health or education, it is hard to place a value on these services, as there is no market transaction.”

Murray (2010) also observes that studies of the government’s production activities, and their implications for the measurement of living standards, have long been ignored. He writes: “Looking back it is depressing that progress in understanding the production of public services has been so slow. In the market sector there is a long tradition of studying production functions, demand for inputs, average and marginal cost functions, elasticities of supply, productivity, and technical progress. The non-market sector has gone largely unnoticed. In part this can be explained by general difficulties in measuring the output of services, whether public or private. But in part it must be explained by a completely different perspective on public and private services. Resource use for the production of public services has not been regarded as inputs into a production process, but as an end in itself, in the form of public consumption. Consequently, the production activity in the government sector has not been recognized.” (Our italics.)

The simple point that we make in this paper is that once it is recognized that the effectiveness of the government’s ‘production function’ varies significantly across countries, the simple convention of equating output value to input cost must be revisited. Thus, if we learn that the same output could be achieved with fewer inputs, it is more appropriate to credit GDP or GNI with the required inputs rather than with the actual inputs that include waste. While perceptions of government effectiveness vary widely among countries, as, e.g., the World Bank’s Governance indicators attest (Kaufmann and others 2009), getting reliable measures of governments’ actual effectiveness is a challenging task, as we shall discuss below.

In physics, efficiency is defined as the ratio of useful work done to total energy expended, and the same general idea is associated with the term when discussing production. Economists simply replace ‘useful work’ by ‘outputs’ and ‘energy’ by ‘inputs.’ Technical efficiency means the adequate use of the available resources in order to obtain the maximum product. Why focus on technical efficiency and not other concepts of efficiency, such as price or allocative efficiency? Do we have enough evidence on public sector inefficiency to make the appropriate corrections?

The reason why we focus on technical efficiency in this preliminary inquiry is twofold. First, it corresponds to the concept of waste. Productive inefficiency implies that some inputs are wasted as more could have been produced with available inputs. In the case of allocative inefficiency, there could be a different allocation of resources that would make everyone better off but we cannot say that necessarily some resources are unused—although they are certainly not aligned with social preferences. Second, measuring technical inefficiency is easier and less controversial than measuring allocative inefficiency. To measure technical inefficiency, there are parametric and non-parametric methods allowing for construction of a best practice frontier. Inefficiency is then measured by the distance between this frontier and the actual input-output combination being assessed.

Indicators (or rather ranges of indicators) of inefficiency exist for the overall public sector and for specific activities such as education, healthcare, transportation, and other sectors. However, they are far from being uncontroversial. Sources of controversy include: omission of inputs and/or outputs, temporal lags needed to observe variations in the output indicators, choice of measures of outputs, and mixing outputs with outcomes. For example, many social and macroeconomic indicators impact health status beyond government spending (Spinks and Hollingsworth, 2009, and Joumard and others, 2010) and they should be taken into account. Most of the available output indicators show autocorrelation, and changes in inputs typically take time to materialize into variations in outputs. Also, there is a trend towards using outcome rather than output indicators for measuring the performance of the public sector. In health and education, efficiency studies have moved away from outputs (e.g., number of prenatal interventions) to outcomes (e.g., infant mortality rates). When cross-country analyses are involved, however, it must be acknowledged that differences in outcomes are explained not only by differences in public sector outputs but also by differences in other environmental factors outside the public sector (e.g., culture, nutrition habits).

Empirical efficiency measurement methods first construct a reference technology based on observed input-output combinations, using econometric or linear programming methods. Next, they assess the distance of actual input-output combinations from the best-practice frontier. These distances, properly scaled, are called efficiency measures or scores. An input-based efficiency measure tells us the extent to which it is possible to reduce the amount of the inputs without reducing the level of output. Thus, an efficiency score of, say, 0.8 means that using best practices observed elsewhere, 80 percent of the inputs would suffice to produce the same output.
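
A minimal sketch of the resulting correction to GDP, using the education and health waste rates reported below (32.6 and 65.0 percent of inputs) and hypothetical spending shares chosen so the totals line up with the paper's averages:

# Purge from GDP the wasted fraction of public inputs in each sector.
gdp_per_capita = 30000.0                                  # hypothetical level
spending_share = {"education": 0.046, "health": 0.040}    # fractions of GDP
efficiency = {"education": 0.674, "health": 0.350}        # input-based scores

waste = sum(s * (1.0 - efficiency[k]) for k, s in spending_share.items())
corrected = gdp_per_capita * (1.0 - waste)
print(f"waste: {waste:.1%} of GDP; corrected GDP pc: {corrected:,.0f}")
# waste: 4.1% of GDP, matching the average correction reported below
# for the 24-country sample

Because countries differ in both spending shares and efficiency scores, the size of this correction varies enough across countries to reshuffle GDP-per-capita rankings.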

We base our corrections to GDP on the efficiency scores estimated in two papers: Afonso and others (2010) for several indicators referred to a set of 24 countries, and Evans and others (2000) focusing on health, for 191 countries based on WHO data. These studies employ techniques similar to those used in other studies, such as Gupta and Verhoeven (2001), Clements (2002), Carcillo and others (2007), and Joumard and others (2010).

- Afonso and others (2010) compute public sector performance and efficiency indicators (as performance weighted by the relevant expenditure needed to achieve it) for 24 EU and emerging economies. Using data envelopment analysis (DEA), they conclude that on average countries could use 45 percent less resources to attain the same outcomes, and deliver an additional third of the fully efficient output if they were on the efficiency frontier. The study included an analysis of the efficiency of education and health spending that we use here.

- Evans and others (2000) estimate health efficiency scores for the 1993–1997 period for 191 countries, based on WHO data, using stochastic frontier methods. Two health outcome measures are identified: the disability adjusted life expectancy (DALE) and a composite index of DALE, dispersion of child survival rate, responsiveness of the health care system, inequities in responsiveness, and fairness of financial contribution. The input measures are health expenditure and years of schooling, with the addition of country fixed effects. Because of its large country coverage, this study is useful for illustrating the impact of the type of correction that we are discussing here.

We must note that ideally, we would like to base our corrections on input-based technical efficiency studies that deal exclusively with inputs and outputs, and do not bring outcomes into the analysis. The reason is that public sector outputs interact with other factors to produce outcomes, and here cross-country heterogeneity can play an important role in driving cross-country differences in outcomes. Unfortunately, we have found no technical-efficiency studies covering a broad sample of countries that restrict themselves to input-output analysis. In particular, these two studies deal with a mix of outputs and outcomes. The results reported here should thus be seen as illustrative. Furthermore, it should be underscored that the level of “waste” that is identified for each particular country varies significantly across studies, which implies that any associated measures of GDP adjusting for this waste will also differ.

...

We have argued here that the current practice of estimating the value of the government’s non-market output by its input costs is not only unsatisfactory but also misleading in cross-country comparisons of living standards. Since differences in the quality of the public sector have an impact on the population’s effective consumption and welfare, they must be taken into account in comparisons of living standards. We have performed illustrative corrections of the input costs to account for productive inefficiency, thus purging from GDP the fraction of these inputs that is wasted.

Our results suggest that the magnitude of the correction could be significant. When correcting for inefficiencies in the health and education sectors, the average loss for a set of 24 EU member states and emerging economies amounts to 4.1 percentage points of GDP.  Sector-specific averages for education and health are 1.5 and 2.6 percentage points of GDP, implying that 32.6 and 65.0 percent of the inputs are wasted in the respective sectors. These corrections are reflected in the GDP-per-capita ranking, which gets reshuffled in 9 cases out of 24. In a hypothetical scenario where the inefficiency of the health sector is assumed to be representative of the public sector as a whole, the rank reordering would affect about 50 percent of the 93 countries in the sample, with 70 percent of it happening in the lower half of the original ranking. These results, however, should be interpreted with caution, as the purpose of this paper is to call attention to the issue, rather than to provide fine-tuned waste estimates.

A natural way forward involves finding indicators for both output quantities and prices for direct measurement of some public outputs. This is recommended in SNA 93 but has yet to be implemented in most countries. Moreover, in recent times there has been an increased interest in outcomes-based performance monitoring and evaluation of government activities (see Stiglitz and others, 2010). As argued also in Atkinson (2005), it will be important to measure not only public sector outputs but also outcomes, as the latter are what ultimately affect welfare. A step in this direction is suggested by Abraham and Mackie (2006) for the US, with the creation of “satellite” accounts in specific areas such as education and health. These extend the accounting of the nation’s productive inputs and outputs, thereby taking into account specific aspects of non-market activities.

Monday, July 9, 2012

Macro-prudential Policy in a Fisherian Model of Financial Innovation

Macro-prudential Policy in a Fisherian Model of Financial Innovation. By Bianchi, Javier; Boz, Emine; Mendoza, Enrique G.
IMF Working Paper No. 12/181
Jul 2012
http://www.imf.org/external/pubs/cat/longres.aspx?sk=26051.0

Summary: The interaction between credit frictions, financial innovation, and a switch from optimistic to pessimistic beliefs played a central role in the 2008 financial crisis. This paper develops a quantitative general equilibrium framework in which this interaction drives the financial amplification mechanism to study the effects of macro-prudential policy. Financial innovation enhances the ability of agents to collateralize assets into debt, but the riskiness of this new regime can only be learned over time. Beliefs about transition probabilities across states with high and low ability to borrow change as agents learn from observed realizations of financial conditions. At the same time, the collateral constraint introduces a pecuniary externality, because agents fail to internalize the effect of their borrowing decisions on asset prices. Quantitative analysis shows that the effectiveness of macro-prudential policy in this environment depends on the government's information set, the tightness of credit constraints and the pace at which optimism surges in the early stages of financial innovation. The policy is least effective when the government is as uninformed as private agents, credit constraints are tight, and optimism builds quickly.

Excerpts:

Policymakers have responded to the lapses in financial regulation in the years before the 2008 global financial crisis and the unprecedented systemic nature of the crisis itself with a strong push to revamp financial regulation following a "macro-prudential" approach. This approach aims to focus on the macro (i.e. systemic) implications that follow from the actions of credit market participants, and to implement policies that influence behavior in "good times" in order to make financial crises less severe and less frequent. The design of macro-prudential policy is hampered, however, by the need to develop models that are reasonably good at explaining the macro dynamics of financial crises and at capturing the complex dynamic interconnections between potential macro-prudential policy instruments and the actions of agents in credit markets.

The task of developing these models is particularly challenging because of the fast pace of financial development. Indeed, the decade before the 2008 crash was a period of significant financial innovation, which included both the introduction of a large set of complex financial instruments, such as collateralized debt obligations, mortgage-backed securities and credit default swaps, and the enactment of major financial reforms of a magnitude and scope unseen since the end of the Great Depression. Thus, models of macro-prudential regulation have to take into account the changing nature of the financial environment, and hence deal with the fact that credit market participants, as well as policymakers, may be making decisions lacking perfect information about the true riskiness of a changing financial regime.

This paper proposes a dynamic stochastic general equilibrium model in which the interaction between financial innovation, credit frictions and imperfect information is at the core of the financial transmission mechanism, and uses it to study its quantitative implications for the design and effectiveness of macro-prudential policy. In the model, a collateral constraint limits the agents' ability to borrow to a fraction of the market value of the assets they can offer as collateral. Financial innovation enhances the ability of agents to "collateralize," but also introduces risk because of the possibility of fluctuations in collateral requirements or loan-to-value ratios.  We take literally the definition of financial innovation to be the introduction of a truly new financial regime. This forces us to deviate from the standard assumption that agents formulate rational expectations with full information about the stochastic process driving fluctuations in credit conditions. In particular, we assume that agents learn (in Bayesian fashion) about the transition probabilities of financial regimes only as they observe regimes with high and low ability to borrow over time. In the long run, and in the absence of new waves of financial innovation, they learn the true transition probabilities and form standard rational expectations, but in the short run agents' beliefs display waves of optimism and pessimism depending on their initial priors and on the market conditions they observe. These changing beliefs influence agents' borrowing decisions and equilibrium asset prices, and together with the collateral constraint they form a financial amplification feedback mechanism: optimistic (pessimistic) expectations lead to over-borrowing (under-borrowing) and increased (reduced) asset prices, and as asset prices change the ability to borrow changes as well.

Our analysis focuses in particular on a learning scenario in which the arrival of financial innovation starts an "optimistic phase," in which a few observations of enhanced borrowing ability lead agents to believe that the financial environment is stable and risky assets are not "very risky." Hence, they borrow more and bid up the price of risky assets more than in a full-information rational expectations equilibrium. The higher value of assets in turn relaxes the credit constraint. Thus, the initial increase in debt due to optimism is amplified by the interaction with the collateral constraint via optimistic asset prices. Conversely, when the first realization of the low-borrowing-ability regime is observed, a "pessimistic phase" starts in which agents overstate the probability of continuing in poor financial regimes and overstate the riskiness of assets. This results in lower debt levels and lower asset prices, and the collateral constraint amplifies this downturn.
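
A minimal sketch of the learning mechanism, assuming Beta-Bernoulli updating of the regime-persistence probabilities of a two-state Markov chain (my simplification of the paper's Bayesian learning setup; all parameters are illustrative):

import numpy as np

rng = np.random.default_rng(0)
true_stay = {1: 0.90, 0: 0.70}            # true P(remain): 1 = high ability
counts = {1: [1.0, 1.0], 0: [1.0, 1.0]}   # Beta prior counts: [stay, switch]

regime = 1
for t in range(200):
    nxt = regime if rng.random() < true_stay[regime] else 1 - regime
    counts[regime][0 if nxt == regime else 1] += 1.0   # Bayesian update
    regime = nxt

for s in (1, 0):
    a, b = counts[s]
    print(f"belief P(stay | regime {s}) = {a / (a + b):.2f}")

Early in the sample, a lucky run of high-ability regimes inflates the believed persistence of good times above truth (the optimistic phase), while the first switch to the low-ability regime, observed with few data points, produces the opposite overreaction (the pessimistic phase).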

Macro-prudential policy action is desirable in this environment because the collateral constraint introduces a pecuniary externality in credit markets that leads to more debt and financial crises that are more severe and frequent than in the absence of this externality. The externality exists because individual agents fail to internalize the effect of their borrowing decisions on asset prices, particularly future asset prices in states of financial distress (in which the feedback loop via the collateral constraint triggers a financial crash).

There are several studies in the growing literature on macro-prudential regulation that have examined the implications of this externality, but typically under the assumption that agents form rational expectations with full information (e.g. Lorenzoni (2008), Stein (2011), Bianchi (2011), Bianchi and Mendoza (2010), Korinek (2010), Jeanne and Korinek (2010), Benigno, Chen, Otrok, Rebucci, and Young (2010)). In contrast, the novel contribution of this paper is that we study the effects of macro-prudential policy in an environment in which the pecuniary externality is influenced by the interaction of the credit constraint with learning about the riskiness of a new financial regime. The analysis of Boz and Mendoza (2010) suggests that taking this interaction into account can be important, because they found that the credit constraint in a learning setup produces significantly larger effects on debt and asset prices than in a full-information environment with the same credit constraint. Their study, however, focused only on quantifying the properties of the decentralized competitive equilibrium and abstracted from normative issues and policy analysis. The policy analysis of this paper considers a social planner under two different informational assumptions. First, an uninformed planner who has to learn about the true riskiness of the new financial environment, and faces the set of feasible credit positions supported by the collateral values of the competitive equilibrium with learning. We start with a baseline scenario in which private agents and the planner have the same initial priors and thus form the same sequence of beliefs, and study later on scenarios in which private agents and the uninformed planner form different beliefs. Second, an informed planner with full information, who therefore knows the true transition probabilities across financial regimes, and faces a set of feasible credit positions consistent with the collateral values of the full-information, rational expectations competitive equilibrium.

We compute the decentralized competitive equilibrium of the model with learning (DEL) and contrast this case with the above social planner equilibria. We then compare the main features of these equilibria, in terms of the behavior of macroeconomic aggregates and asset pricing indicators, and examine the characteristics of macro-prudential policies that support the allocations of the planning problems as competitive equilibria. This analysis emphasizes the potential limitations of macro-prudential policy in the presence of significant financial innovation, and highlights the relevance of taking into account informational frictions in evaluating the effectiveness of macro-prudential policy.

The quantitative analysis indicates that the interaction of the collateral constraint with optimistic beliefs in the DEL equilibrium can strengthen the case for introducing macro-prudential regulation compared with the decentralized equilibrium under full information (DEF). This is because, as Boz and Mendoza (2010) showed, the interaction of these elements produces larger amplification both of the credit boom in the optimistic phase and of the financial crash when the economy switches to the bad financial regime. The results also show, however, that the effectiveness of macro-prudential policy varies widely with the assumptions about the information set and collateral pricing function used by the social planner. Moreover, for the uninformed planner, the effectiveness of macro-prudential policy also depends on the tightness of the borrowing constraint and the pace at which optimism builds in the early stages of financial innovation.

Consider first the uninformed planner. For this planner, the undervaluation of risk weakens the incentives to build precautionary savings against states of nature with low-borrowing-ability regimes over the long run, because this planner underestimates the probability of landing in and remaining in those states. In contrast, the informed planner assesses the correct probabilities of landing in and remaining in states with good and bad credit regimes, so its incentives to build precautionary savings are stronger. In fact, the informed planner's optimal macro-prudential policy features a precautionary component that lowers borrowing levels at given asset prices, and a component that influences the portfolio choice between debt and assets to address the effect of the agents' mispricing of risk on collateral prices.

It is important to note that even the uninformed planner has an incentive to use macro-prudential policy to tackle the pecuniary externality and alter debt and asset pricing dynamics. In our baseline calibration, however, the borrowing constraint becomes tightly binding in the early stages of financial innovation as optimism builds quickly, and as a result macro-prudential policy is not very effective (i.e. debt positions and asset prices differ little between the DEL economy and the uninformed planner's allocation). Intuitively, since a binding credit constraint implies that debt equals the high-credit-regime fraction of the value of collateral, debt levels for the uninformed social planner and the decentralized equilibrium are similar once the constraint becomes binding for the planner. But this is not a general result. Variations in the information structure in which optimism builds more gradually produce outcomes in which macro-prudential policy is effective even when the planner has access to the same information set as private agents. On the other hand, it is generally true that the uninformed planner allows larger debt positions than the informed planner because of its weaker precautionary savings incentives.

We also analyze the welfare losses that arise from the pecuniary externality and the optimism embedded in agents' subjective beliefs. The losses arising from their combined effect are large, reaching up to 7 percent in terms of a compensating variation in permanent consumption that equalizes the welfare of the informed planner with that of the DEL economy. The welfare losses attributable to the pecuniary externality alone are relatively small, in line with the findings reported by Bianchi (2011) and Bianchi and Mendoza (2010), and they fall significantly at the peak of optimism.
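
For readers unfamiliar with the metric: under time-separable preferences, the compensating variation $\gamma$ reported above solves (in our notation, as an illustration of the standard construction)

$E_0 \sum_{t=0}^{\infty} \beta^{t} \, u\big((1+\gamma)\, c_t^{DEL}\big) \;=\; V_0^{SP},$

i.e. it is the uniform percentage increase in the DEL economy's consumption in all dates and states needed to reach the welfare level $V_0^{SP}$ attained under the informed planner.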

Our model follows a long tradition of models of financial crises in which credit frictions and imperfect information interact. This notion dates back to the classic work of Fisher (1933), in which he described his debt-deflation financial amplification mechanism as the result of a feedback loop between agents' beliefs and credit frictions (particularly those that force fire sales of assets and goods by distressed borrowers). Minsky (1992) is in a similar vein. More recently, macroeconomic models of financial accelerators (e.g. Bernanke, Gertler, and Gilchrist (1999), Kiyotaki and Moore (1997), Aiyagari and Gertler (1999)) have focused on modeling financial amplification, but typically under rational expectations with full information about the stochastic processes of exogenous shocks.

The particular specification of imperfect information and learning that we use follows closely that of Boz and Mendoza (2010) and Cogley and Sargent (2008a), in which agents observe regime realizations of a Markov-switching process without noise but need to learn its transition probability matrix. The imperfect information assumption is based on the premise that the U.S. financial system went through significant changes beginning in the mid-90s as a result of financial innovation and deregulation that took place at a rapid pace. As in Boz and Mendoza (2010), agents go through a learning process in order to "discover" the true riskiness of the new financial environment as they observe realizations of regimes with high or low borrowing ability.
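
A minimal sketch of this learning mechanism may help (our own illustration, with a two-regime chain, uniform Beta priors and a hypothetical regime history; the paper's calibration differs):

import numpy as np

# Two financial regimes: 0 = high borrowing ability, 1 = low.
# Agents observe the regime without noise but do not know the
# transition matrix; they hold Beta priors over the probability
# of staying in each regime and update them on observed switches.

class RegimeLearner:
    def __init__(self, a0=1.0, b0=1.0):
        # counts[i] = [alpha, beta] of the Beta posterior for
        # P(stay in regime i)
        self.counts = {0: [a0, b0], 1: [a0, b0]}

    def update(self, prev_regime, new_regime):
        stay = int(new_regime == prev_regime)
        self.counts[prev_regime][0] += stay
        self.counts[prev_regime][1] += 1 - stay

    def beliefs(self):
        # Posterior-mean estimate of the 2x2 transition matrix
        P = np.zeros((2, 2))
        for i in (0, 1):
            a, b = self.counts[i]
            P[i, i] = a / (a + b)        # estimated persistence
            P[i, 1 - i] = b / (a + b)    # estimated switching prob.
        return P

learner = RegimeLearner()
history = [0, 0, 0, 0, 0, 1]             # hypothetical regime path
for prev, new in zip(history, history[1:]):
    learner.update(prev, new)
print(learner.beliefs())

A short run of good-regime observations raises the estimated persistence of the good regime (optimism), while the first observed switch sharply raises the estimated probability of leaving it, seeding the pessimistic phase described above.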

Our quantitative analysis is related to the quantitative study of macro-prudential policy in Bianchi and Mendoza (2010). They examined an asset pricing model with a similar collateral constraint and used comparisons of the competitive equilibria vis-a-vis a social planner to show that optimal macro-prudential policy curbs credit growth in good times and reduces the frequency and severity of financial crises. The government can accomplish this by using Pigouvian taxes on debt and dividends to induce agents to internalize the model's pecuniary externality. Bianchi and Mendoza's framework does not capture, however, the role of informational frictions interacting with frictions in financial markets, and is thus silent about the implications of differences in the information sets of policy-makers and private agents.

Our paper is also related to Gennaioli, Shleifer, and Vishny (2010), who study financial innovation in an environment in which "local thinking" leads agents to neglect low-probability adverse events (see also Gennaioli and Shleifer (2010)). As in our model, the informational friction distorts decision rules and asset prices, but the informational frictions in the two setups differ. Moreover, the welfare analysis of Gennaioli, Shleifer, and Vishny (2010) focuses on the effect of financial innovation under local thinking, while we emphasize the interaction between a fire-sale externality and informational frictions.

Finally, our work is also related to the argument developed by Stein (2011) to favor a cap and trade system to address a pecuniary externality that leads banks to issue excessive short-term debt in the presence of private information. Our analysis differs in that we study the implications of a form of model uncertainty (i.e. uncertainty about the transition probabilities across financial regimes) for macro-prudential regulation, instead of private information, and we focus on Pigouvian taxes as a policy instrument to address the pecuniary externality.

Sunday, July 8, 2012

Margin requirements for non-centrally-cleared derivatives - BIS consultative document

Margin requirements for non-centrally-cleared derivatives - consultative document
BIS, July 2012
http://www.bis.org/publ/bcbs226.htm

The G20 Leaders agreed in 2011 to add margin requirements on non-centrally-cleared derivatives to the reform programme for over-the-counter (OTC) derivatives markets. Margin requirements can further mitigate systemic risk in the derivatives markets. In addition, they can encourage standardisation and promote central clearing of derivatives by reflecting the generally higher risk of non-centrally-cleared derivatives. The consultative paper published today lays out a set of high-level principles on margining practices and treatment of collateral, and proposes margin requirements for non-centrally-cleared derivatives.

These policy proposals are articulated through a set of key principles that primarily seek to ensure that appropriate margining practices will be established for all non-centrally-cleared OTC derivative transactions. These principles will apply to all transactions that involve either financial firms or systemically important non-financial entities.

The Basel Committee and IOSCO would like to solicit feedback from the public on questions related to the scope, feasibility and impact of the margin requirements. Responses to the public consultation, together with the QIS results, will be considered in formulating a final joint proposal on margin requirements on non-centrally-cleared derivatives by year-end.


Excerpts:

Objectives of margin requirements for non-centrally-cleared derivatives
Margin requirements for non-centrally-cleared derivatives have two main benefits:

Reduction of systemic risk. Only standardised derivatives are suitable for central clearing. A substantial fraction of derivatives are not standardised and will not be able to be centrally cleared. These non-centrally-cleared derivatives, which total hundreds of trillions of dollars in notional amounts, will pose the same type of systemic contagion and spillover risks that materialised in the recent financial crisis. Margin requirements for non-centrally-cleared derivatives would be expected to reduce contagion and spillover effects by ensuring that collateral is available to offset losses caused by the default of a derivatives counterparty. Margin requirements can also have broader macroprudential benefits, by reducing the financial system’s vulnerability to potentially de-stabilising procyclicality and limiting the build-up of uncollateralised exposures within the financial system.

Promotion of central clearing. In many jurisdictions central clearing will be mandatory for most standardised derivatives. But clearing imposes costs, in part because CCPs require margin to be posted. Margin requirements on non-centrally-cleared derivatives, by reflecting the generally higher risk associated with these derivatives, will promote central clearing, making the G20’s original 2009 reform program more effective. This could, in turn, contribute to the reduction of systemic risk.

The effectiveness of margin requirements could be undermined if the requirements were not consistent internationally. Activity could move to locations with lower margin requirements, raising two concerns:
  The effectiveness of the margin requirements could be undermined (ie regulatory arbitrage).
  Financial institutions that operate in the low-margin locations could gain a competitive advantage (ie unlevel playing field).


Margin and capital

Both capital and margin perform important risk mitigation functions but are distinct in a number of ways. First, margin is “defaulter-pay”: in the event of a counterparty default, margin protects the surviving party by absorbing losses using the collateral provided by the defaulting entity. In contrast, capital adds loss absorbency to the system but is “survivor-pay”: using capital to meet such losses consumes the surviving entity’s own financial resources. Second, margin is more “targeted” and dynamic, with each portfolio having its own designated margin for absorbing the potential losses in relation to that particular portfolio, and with such margin being adjusted over time to reflect changes in the risk of that portfolio. In contrast, capital is shared collectively by all the entity’s activities, may thus be more easily depleted at a time of stress, and is difficult to adjust rapidly to reflect changing risk exposures. Capital requirements against each exposure are not designed to cover the loss on the default of the counterparty but rather the probability-weighted loss given such default. For these reasons, margin can be seen as offering enhanced protection against counterparty credit risk where it is effectively implemented. For margin to act as an effective risk mitigant, it must be (i) accessible at the time of need and (ii) in a form that can be liquidated rapidly in a period of financial stress at a predictable price.
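
The distinction can be made concrete with a stylised calculation (entirely illustrative, with made-up numbers and a hypothetical loss distribution): margin is sized to a high quantile of the potential loss on one portfolio, while a capital charge for the same exposure reflects the probability-weighted loss given default.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 10-day P&L of a single derivatives portfolio
# (a normal distribution is used purely for illustration).
pnl = rng.normal(loc=0.0, scale=10.0, size=100_000)
losses = np.maximum(-pnl, 0.0)

# Margin: "defaulter-pay", sized to a high quantile of the loss on
# this particular portfolio, so the survivor is covered if the
# counterparty defaults.
initial_margin = np.quantile(-pnl, 0.99)

# Capital: "survivor-pay", sized to the probability-weighted loss
# (PD x LGD x expected exposure here) and shared across all of the
# entity's activities rather than earmarked to this portfolio.
pd_, lgd = 0.01, 0.6
capital_charge = pd_ * lgd * losses.mean()

print(f"99th-percentile initial margin: {initial_margin:.2f}")
print(f"expected-loss capital charge:   {capital_charge:.2f}")

The orders of magnitude differ by design: margin pre-funds a tail loss on one portfolio, while capital spreads a much smaller probability-weighted amount across the whole balance sheet.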

The interaction between capital and margin, however, is complex and is an area in which the full range of interactions needs further careful consideration. When calibrating the application of capital and margin, consideration must be given to factors such as: (i) differences in capital requirements across different types of entities; (ii) the effect certain margin requirements may have on the capital calculations of different types of regulated entities subject to differing capital requirements; and (iii) the current asymmetrical treatment of collateral in many regulatory capital frameworks where benefit is given for collateral received, but no cost is incurred for the (encumbrance) risks of collateral posted.


Impact of margin requirements on liquidity

The potential benefits of margin requirements must be weighed against the liquidity impact that would result from derivative counterparties’ need to provide liquid, high-quality collateral to meet those requirements, including potential changes to market functioning as a result of an increasing aggregate demand for such collateral. Financial institutions may need to obtain and deploy additional liquidity resources to meet margin requirements that exceed current practices. Moreover, the liquidity impact of margin requirements cannot be considered in isolation. Rather, it is important to recognise ongoing and parallel regulatory initiatives that will also have significant liquidity impacts; examples include the BCBS’s Liquidity Coverage Ratio (LCR), Net Stable Funding Ratio (NSFR) and global mandates for central clearing of standardised derivatives.

The US SEC has pointed out that the proposed margin requirements could have a much greater impact on securities firms regulated under net capital rules. Under such rules, securities firms are required to maintain at all times a minimum level of “net capital” (meaning highly liquid capital) in excess of all subordinated liabilities. When calculating net capital, the firm must deduct all assets that cannot be readily converted into cash and adjust the value of liquid assets by appropriate haircuts. As such, in computing net capital, assets that are delivered by the firm to another party as margin collateral are treated as unsecured receivables from the party holding the collateral and are thus deducted in full.
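
A stylised reading of that computation (our sketch, with illustrative numbers and haircuts; the actual SEC rule is far more detailed):

# Stylised "net capital" computation under the treatment described
# above. Only haircut-adjusted liquid assets count; illiquid assets
# and margin posted to others (an unsecured receivable) get no credit.

liquid_assets = {                  # name: (market value, haircut)
    "treasuries": (100.0, 0.02),
    "equities": (50.0, 0.15),
}
illiquid_assets = 30.0             # deducted in full
margin_posted = 20.0               # deducted in full once posted
liabilities = 90.0

net_capital = sum(v * (1.0 - h) for v, h in liquid_assets.values()) - liabilities
print(f"net capital: {net_capital:.2f}")

# Posting an extra unit of cash as margin moves it out of liquid
# assets, so net capital falls one-for-one -- which is why the SEC
# expects a larger impact on firms under these rules.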

As discussed in Part C of this consultative paper, the BCBS and IOSCO plan to conduct a quantitative impact study (QIS) in order to gauge the impact of the margin proposals. In particular, the QIS will assess the amount of margin required on non-centrally-cleared derivatives as well as the amount of available collateral that could be used to satisfy these requirements. The QIS will be conducted during the consultation period, and its results will inform the BCBS’s and IOSCO’s joint final proposal.


Macroprudential considerations

The BCBS and IOSCO also note that national supervisors may wish to establish margin requirements for non-centrally-cleared derivatives that, in addition to achieving the two principal benefits noted above, also create other desirable macroprudential outcomes. Further work by the relevant authorities is likely required to consider the details of how such outcomes might be identified and operationalised. The BCBS and IOSCO encourage further consideration of other potential macroprudential benefits of margin requirements for non-centrally-cleared derivatives and of the need for international coordination that may arise in this respect.

Friday, July 6, 2012

The (Other) Deleveraging. By Manmohan Singh

The (Other) Deleveraging. By Manmohan Singh
IMF Working Paper No. 12/179
July 2012
http://www.imfbookstore.org/IMFORG/WPIEA2012179

Summary: Deleveraging has two components--shrinking of balance sheets due to increased haircuts/shedding of assets, and the reduction in the interconnectedness of the financial system. We focus on the second aspect and show that post-Lehman there has been a significant decline in the interconnectedness in the pledged collateral market between banks and nonbanks. We find that neither the collateral nor its associated velocity is rebounding as of end-2011; the volume of pledged collateral remains about $4-5 trillion below the end-2007 peak of $10 trillion. This paper updates Singh (2011); we use these data to compare with the monetary aggregates (largely due to QE efforts in the US, Euro area and UK), and discuss the overall financial lubrication that likely impacts the conduct of global monetary policy.


Excerpts:

Deleveraging from shrinking of bank balance sheets is not (yet) taking place; however, we still find the financial system imploding.

The reduction in debt (or deleveraging) has two components. The first (and more familiar) involves the shrinking of balance sheets. The other is a reduction in the interconnectedness of the financial system (Figure 1). Most recent research has focused on the impact of smaller balance sheets, overlooking this ‘other’ deleveraging resulting from reduced interconnectedness. Yet, as the current crisis unfolds, key actors in the global financial system seem to be “ring-fencing” themselves owing to heightened counterparty risk. While “rational” from an individual perspective, this behavior may have unintended consequences for the financial markets.

The nexus of interconnections has become considerably more complex over the past two decades. The interconnectedness of the financial system may be viewed through the lens of collateral chains. Typically, collateral from hedge funds, pension funds, insurers, central banks, etc., is intermediated by the large global banks. For example, a Hong Kong hedge fund may get financing from UBS secured by its collateral. This collateral may include, say, Indonesian bonds, which will be pledged to UBS (U.K.) for re-use. There may be demand for such bonds from, for instance, a pension fund in Chile that may have Santander as its global bank. However, due to heightened counterparty risk, UBS may not want to onward pledge to Santander, despite the demand for the collateral held at UBS. Fewer trusted counterparties in the market, owing to elevated counterparty risk, lead to stranded liquidity pools, incomplete markets, idle collateral, shorter collateral chains, missed trades and deleveraging. In volume terms, over the past decade this collateral use has grown to be on a par with monetary aggregates like M2.
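
Singh summarises this interconnectedness with the “velocity” of collateral: the ratio of total pledged collateral received by the large dealers (which counts each re-use along a chain) to the collateral originally supplied by primary sources such as hedge funds and securities lenders. A back-of-the-envelope version, with magnitudes rounded to the orders reported in the paper:

# Velocity of collateral: the ratio approximates the average length
# of re-pledging chains. Figures are rounded, in USD trillions.

def collateral_velocity(pledged_received, primary_sources):
    return pledged_received / primary_sources

print(collateral_velocity(10.0, 3.3))   # circa end-2007: about 3
print(collateral_velocity(6.0, 2.5))    # circa end-2011: about 2.4

Both the stock of source collateral and its velocity have fallen, which compounds the decline in overall financial lubrication.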

The balance sheet shrinking due to ‘price decline’ (i.e., increased haircuts) has been studied extensively [...]. But the balance sheet shrinkage is being postponed—Euro area bank balance sheets may have increased by up to €500bn since end-November 2011, helped by the liquidity injection from the ECB’s 3-year Longer-Term Refinancing Operations (LTROs), net of reduced Main Refinancing Operations (MROs).

However, the deleveraging of the financial system due to the shortening of ‘re-pledging chains’ has not (yet) received attention. This deleveraging is taking place despite the recent official sector support, and this second component of deleveraging is contributing to the higher cost of credit to the real economy. In fact, relative to 2006, the primary indices that measure aggregate borrowing cost are well over 2.5 times their earlier level in the U.S. and 4 times in the Eurozone (see Figure 2). This is after adjusting for the central bank rate cuts, which have lowered the total cost of borrowing for similar corporates (e.g., in the U.S., from about 6% in 2006 to about 4% at present). Figure 3 shows that for the past three decades the cost of borrowing for financials has been below that of non-financials; this has changed post-Lehman. Since much of the real economy (aside from the large industrials) resorts to banks to borrow, the higher borrowing cost for banks is then passed on to the real economy.

As the “other” deleveraging continues, the financial system remains short of high-grade collateral that can be re-pledged. Recent official sector efforts, such as the ECB’s “flexibility” (and the ELA programs of national central banks in the Eurozone) in accepting “bad” collateral, attempt to keep the good/bad collateral ratio in the market higher than otherwise. The ECB’s acceptance of good and bad collateral at non-market prices brings Gresham's law into play. But if such moves become part of the central banker’s standard toolkit, the fiscal aspects and risks associated with such policies cannot be ignored. In so doing, central banks have interposed themselves as risk-taking intermediaries, with the potential for significant unintended consequences.

Thursday, July 5, 2012

Paths to Eurobonds

Paths to Eurobonds. By Stijn Claessens, Ashoka Mody, and Shahin Vallée
July, 2012
IMF Working Paper No. 12/172
http://www.imfbookstore.org/ProdDetails.asp?ID=WPIEA2012172

Summary: This paper discusses proposals for common euro area sovereign securities. Such instruments can potentially serve two functions: in the short-term, stabilize financial markets and banks and, in the medium-term, help improve the euro area economic governance framework through enhanced fiscal discipline and risk-sharing. Many questions remain on whether financial instruments can ever accomplish such goals without bold institutional and political decisions, and, whether, in the absence of such decisions, they can create new distortions. The proposals discussed are also not necessarily competing substitutes; rather, they can be complements to be sequenced along alternative paths that possibly culminate in a fully-fledged Eurobond. The specific path chosen by policymakers should allow for learning and secure the necessary evolution of institutional infrastructures and political safeguards.

Excerpts:

The European Monetary Union was purposefully designed as a monetary union without a fiscal union. History has not been kind to such arrangements, as Bordo et al. (2011) argue and as several critics had warned before the eurozone came into being (for a review of that earlier literature, see Bornhorst, Mody, and Ohnsorge, forthcoming). The ongoing crisis appears to have validated these concerns. The absence of formal pooling of resources has required the construction of additional arrangements for inter-governmental fiscal support to respond to countries in crisis. These arrangements include the European Financial Stability Facility (EFSF) and the European Stability Mechanism (ESM). And as the crisis has evolved, the European Central Bank (the ECB) has needed to play an important role in supporting banks and, indirectly, sovereigns in need.

In this context, the common issuance of debt in the euro area has been increasingly evoked— including most recently by the European Parliament and the European Council—both as an immediate response to the financial crisis and as a structural feature of the monetary union.

This paper is a review of various proposals for common debt issuance. Clearly, common instruments are not the only or necessarily the primary way to reduce financial instability or improve economic, financial and fiscal governance in the euro area. Indeed, common debt issuance is inextricably linked to the shape and form of a future fiscal union. Because a fiscal (and banking) union is likely a longer-term project, a discussion of common instruments today can help sharpen the discussion of the choices underlying a fiscal union and possibly initiate more limited forms of risk-sharing and pooling that create a valuable learning process.

In undertaking this review, we are motivated by the following questions:
* How does the proposal change the incentives of governments (debtors) and creditors? Does it offer clarity on how average and marginal costs of borrowing would be affected, and how default would be treated?

* What is the nature of the insurance that is being offered? Would the new instrument help reduce risk and improve liquidity? Who will want to hold those instruments?

* Would the (currently perverse) sovereign-bank linkages be reduced? What are the effects on the current (ill)functioning of financial markets?

* What are the phasing-in, transitional, legal, and institutional issues?

* And, are there paths along which the different proposed instruments may be combined?

Conclusions

Common debt could bring reprieve from the current financial instability. Specifically, the creation of a large safe asset can reduce flight to safety from one sovereign to another and weaken the currently destabilizing links between banks and their respective sovereigns. Common debt issuance could also be a structural stabilizing feature of the euro area by helping to create deeper and more liquid financial markets, allowing the monetary union to capture the liquidity gains of a broader sovereign debt market. Importantly, these initiatives can serve to focus attention on the need for fiscal federalism, including macroeconomic stabilization and risk-sharing mechanisms but also fiscal discipline.

But there clearly are risks associated with such common instruments. In terms of fiscal discipline, the pricing approaches, in which countries’ own debt is ranked junior and hence carries a higher borrowing cost, are intriguing. But the tranching creates new challenges, not least if the junior tranches replicate the instability that we are currently witnessing. Similarly, to the extent that funds are earmarked to repay the common debt, greater pro-cyclicality may ensue, as earmarked resources are less available to deal with adverse shocks.
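
To see the pricing logic of tranching, consider a deliberately simple two-tranche example (our illustration, not the calibration of any specific proposal): the common tranche is senior in a default waterfall, so expected losses, and hence the default premium, concentrate in the junior national tranche.

# Stylised senior/junior split of a sovereign's debt (illustrative
# numbers throughout; spreads are expected losses per unit of face
# value under risk-neutrality, with no risk premium).
debt = 100.0
senior = 60.0                      # e.g. mutualised up to a ceiling
junior = debt - senior

p_default = 0.05                   # hypothetical default probability
recovery = 70.0                    # hypothetical recovery in default

# Waterfall: recovery pays the senior tranche first.
senior_loss = max(senior - min(recovery, senior), 0.0)
junior_loss = max(junior - max(recovery - senior, 0.0), 0.0)

print(f"senior spread: {p_default * senior_loss / senior:.4f}")   # 0.0000
print(f"junior spread: {p_default * junior_loss / junior:.4f}")   # 0.0375

The marginal euro of borrowing above the ceiling is financed at the junior rate, which is the disciplining channel; but the same concentration of risk means the junior tranche can reproduce the instability the design is meant to contain.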

Ideally then, common debt should follow from a fundamental discussion of the long-term shape of a fiscal, financial and monetary union. The absence of a debate on fiscal union reflects in part historical concerns that one group of countries may become dependent on another group on a permanent basis. But short of addressing these fundamental issues completely, common debt issuance can initiate a political process towards this goal. If, for the moment, there is only appetite for limited and bounded fiscal risk-sharing, then the Eurobills can start a learning process. These could be scaled up if proven successful and evolve towards more ambitious structures. If the assessment is that a key task today is to bring debt-to-GDP ratios down before further progress can be made, then the Redemption Pact is the right first step. But this would take 20-25 years and delay the creation of a permanent mechanism to complete the monetary union.

Thus, addressing both the current debt overhang problem and insuring against loss of market access likely requires combining several proposals. And while a gradual phase-in provides some advantages, in particular as it can foster a political discussion about fiscal risk-sharing and transfers, the current financial crisis might call for more rapid introduction. Regardless, steps towards common debt issuance require an open political discussion given the importance of accountability and legitimacy dimensions associated with the embryonic creation of a fiscal union. Federations are not static political constructs and common debt issuance can both contribute to effective economic management and act as a catalyst for political change. In that sense, the proposals put forward are a constructive feature of the ongoing discussion, forcing a critical and focused rethinking of the EMU architecture.

Tuesday, July 3, 2012

Monitoring indicators for intraday liquidity management - consultative document

Monitoring indicators for intraday liquidity management - consultative document

Basel Committee, July 2012
http://www.bis.org/publ/bcbs225.htm

Intraday liquidity can be defined as funds that are accessible during the business day, usually to enable financial institutions to make payments in real time. The Basel Committee's proposed Monitoring indicators for intraday liquidity management are intended to allow banking supervisors to monitor a bank's intraday liquidity risk management. Over time, the indicators will also help supervisors gain a better understanding of banks' payment and settlement behaviour and their management of intraday liquidity risk.
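
One natural indicator of this kind (our sketch; the consultative document defines its own indicator set precisely) is the day's largest net cumulative outflow, i.e. how much intraday liquidity the timing of a bank's payments requires:

import itertools

# Time-stamped intraday flows (+ inflows, - outflows), hypothetical.
payments = [-40.0, 25.0, -30.0, 50.0, -10.0]

running = list(itertools.accumulate(payments))   # net cumulative position
peak_usage = -min(min(running), 0.0)             # deepest intraday trough
print(f"peak intraday liquidity usage: {peak_usage:.1f}")   # 45.0
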
The Basel Committee welcomes comments on this consultative document. Comments should be submitted by Friday 14 September 2012 by e-mail to: baselcommittee@bis.org. Alternatively, comments may be sent by post to the Secretariat of the Basel Committee on Banking Supervision, Bank for International Settlements, CH-4002 Basel, Switzerland. All comments may be published on the website of the Bank for International Settlements unless a comment contributor specifically requests confidential treatment. 
