Saturday, April 27, 2013

South Korea: Give Nukes a Chance. By Denny Roy

Asia Pacific Bulletin, no. 204
Washington, D.C.: East-West Center
March 27, 2013
http://www.eastwestcenter.org/publications/south-korea-give-nukes-chance

Excerpts:

It is only a matter of time before North Korea fields an actual nuclear-tipped missile that works. With the persistent security threat from North Korea seemingly worsening, recent public opinion surveys show that a majority of South Koreans favor getting their own nuclear weapons. There is no doubt that South Korea is capable of making its own nuclear weapons, probably within a year. Indeed, the Republic of Korea (ROK) has explored this possibility occasionally since the 1970s, each time backing off under outside pressure.

There are some good reasons why, in principle, the world is better off with a smaller, rather than larger, number of nuclear weapon states. Nevertheless, there are two additional principles that apply here. First, nuclear weapons are a powerful deterrent; they are the main reason why the Cold War remained cold. Second, there may be a specific circumstance in which the introduction of a new nuclear weapons capability has a constructive influence on international security—call it the exception to the general nonproliferation rule.

Given the ROK’s present circumstances, Washington and Seoul should seriously consider the following policy change. Seoul gives the required 90 days’ notice to withdraw from the Nuclear Nonproliferation Treaty, which permits withdrawal in the case of “extraordinary events” that threaten national security. The ROK announces its intention to begin working toward a nuclear weapons capability, with the following conditions: (1) the South Korean program will match North Korea’s progress step-by-step towards deploying a reliable nuclear-armed missile; and (2) Seoul will commit to halting and shelving its program if North Korea does the same. For its part, Washington announces that US nonproliferation policy is compelled to tolerate an exception when a law-abiding state is threatened by a rogue state—in this case North Korea—that has both acquired nuclear weapons and threatened to use them aggressively. Pyongyang has repeatedly spoken of using its nuclear weapons to devastate both the ROK and the United States.

This policy change is necessary because US, ROK and (half-hearted) Chinese efforts to get North Korea to denuclearize are not working. [...]

An ROK nuclear weapons capability would impose a meaningful penalty on the DPRK for its nuclear weapons program. Aside from the sanctions ordered by the United Nations Security Council, which have proved no more than a nuisance and are amply compensated for by the growing economic relationship with China, Pyongyang has suffered no significant negative consequences for acquiring nuclear weapons. A South Korean nuclear capability would change that. The North Koreans would understand that their act brought about an outcome they very much do not want [...].

ROK nukes, furthermore, will help deter North Korean provocations. A capacity to attack a neighbor with nuclear weapons provides North Korea with cover for limited conventional attacks. Pyongyang has established a pattern of using quick, sharp jabs against South Korea. The goal is to rattle Seoul into accommodating North Korean economic and political demands. Seoul insists that future North Korean attacks will result in military retaliation by South Korean forces. Since South Korea has not hit back after previous incidents, it is uncertain whether this pledge will deter Pyongyang from trying this tactic again. A DPRK nuclear weapons capability worsens this already dangerous situation. North Korean planners might conclude that Seoul would not dare retaliate against a DPRK strike out of fear that the next step would be a nuclear attack on the ROK. A South Korean nuclear capability, however, would redress this imbalance. If ROK conventional military capabilities are superior to the DPRK's and equal or superior at the nuclear level, deterrence against a North Korean attack is stronger.

South Korean nukes would close the credibility gap in the US-ROK alliance. The “umbrella” of America’s nuclear arsenal covers South Korea and theoretically negates the DPRK nuclear threat. However, South Koreans have always questioned the reliability of this commitment, which potentially puts a US city at risk in order to protect a South Korean city. The doubts are growing more acute now that a North Korean capability is apparently close to realization. An ROK nuclear arsenal would remove this strain on the alliance and give the South Koreans a sense of greater control over their own destiny.

Pyongyang would not be the only target audience for Seoul’s announcement of intent to deploy nuclear weapons. Like the North Koreans, the People’s Republic of China (PRC) is deeply opposed to an ROK nuclear capability. The announcement would also signal to Beijing that the cost of failing to discipline their client state is rising dramatically. The Chinese are already debating whether the status quo of a rogue DPRK has become so adverse to Chinese interests that China must pressure Pyongyang more heavily even at the risk of causing regime collapse. South Korea’s imminent—and reversible—acquisition of nuclear weapons would strengthen the argument that the PRC must get tougher with the DPRK.

To be sure, this policy change would create its own problems. An ROK nuclear capability would pressure Japan to follow suit. A US-friendly, stable, law-abiding, liberal democratic country getting nukes is not necessarily a bad thing. But if Japanese nuclearization is deemed undesirable, the solution is for Washington and Seoul to emphasize that South Korea’s nuclear capability would be temporary and contingent, so Tokyo can remain non-nuclear. Thankfully, there are precedents for middle-sized states giving up their nuclear weapons.

South Korea’s security situation is deteriorating and for the ROK’s leadership, national security is job number one. It is now time to get past the visceral opposition to proliferation and recognize that in this case, a conditional change of South Korea’s status to nuclear-weapon state can help manage the dangers created by a heightened North Korean threat.

Sunday, April 21, 2013

Generalized linear modeling with highly dimensional data

Question from a student, University of Missouri-Kansas City:

Hi guys,
I have a project in my Regression class, and we have to use R to do it, but so far I haven't found appropriate code for this project, and I don't know which method I should use.

I have to analyze a high-dimensional dataset. The data has a total of 500 features.

we have no knowledge as to which of the features are useful and which not. Thus we want to apply model selection techniques to obtain a subset of useful features. What we have to do is the following:

a) There are 2000 observations in total in the data. Use the first 1000 to train or fit your model, and the other 1000 for prediction.

b) You will report the number of features you select and the percentage of responses you correctly predict. Your project is considered valid only if the obtained percentage exceeds 54%.

Please help me as much as you can.
Your help would be appreciated..
Thank you!


-------------------
Answer

Well, working in batches of 30 variables, I found 88 of the 500 that minimize AIC within their batch:

t1 = read.csv("qw.csv", header=FALSE)
nrow(t1)  # should be 2000

# not a good solution -- better to draw the 1000 training records at random
# (e.g. with sample()), but this is enough for now:
train_data = t1[1:1000, ]
test_data  = t1[1001:2000, ]

library(bestglm)
x = train_data[, 2:31]  # first batch of 30 candidate predictors
y = train_data[, 1]     # the response is in column 1
xy = as.data.frame(cbind(x, y))  # bestglm expects the response in the last column
(bestAIC = bestglm(xy, IC="AIC"))

, and so on, going from x = train_data[, 2:31] to x = train_data[, 32:61], etc. Each run gives you the subset of variables that minimizes AIC within that batch (I chose AIC, but any other criterion would do).

If I try to process more than 30 (or 31) columns at once, bestglm takes far too long, because for larger problems it has to hand off to other underlying routines whose search strategy is different... and clearly inefficient at this size.
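The batch runs can be automated rather than typed one by one. Below is a minimal sketch of the idea on simulated stand-in data (deliberately smaller than the real problem so it runs quickly; the guard simply skips the fits if the bestglm package is not installed):

```r
set.seed(1)
# stand-in data: response y driven by 2 of 50 predictors, response in column 1
n = 200; p = 50
X = matrix(rnorm(n * p), n, p)
dat = data.frame(y = X[, 3] - 2 * X[, 17] + rnorm(n), X)

# split the predictor columns 2..(p+1) into batches of 10
batch_size = 10
batches = split(2:(p + 1), ceiling(seq_along(2:(p + 1)) / batch_size))

selected = character(0)
if (requireNamespace("bestglm", quietly = TRUE)) {
  for (cols in batches) {
    xy = data.frame(dat[, cols], y = dat$y)  # bestglm wants y last
    fit = bestglm::bestglm(xy, IC = "AIC")
    # collect the variables kept in this batch's best model (drop the intercept)
    selected = c(selected, names(coef(fit$BestModel))[-1])
  }
}
length(batches)  # 5 batches of 10 predictors each
```

With the real data the same loop would run over columns 2:501 in chunks of 30, accumulating the per-batch winners into `selected`.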

Now the problem is reduced to using fewer than 90 variables instead of the original 500. Not the real solution, since I am doing this on a piecemeal basis, but maybe close to what we are looking for, which is to predict at least 54pct of the observed values correctly.

Using other methods I got even fewer candidate variables, but let's keep the ones we found above.

Then I tried this: after finding the best candidates, I put the response (V1) and the selected predictors into a data frame:

dat = train_data[, c("V1", "V50", "V66", "V325", "V426", "V28", "V44", "V75", "V111",
                     "V128", "V149", "V152", "V154", "V179", "V181", "V189", "V203",
                     "V210", "V213", "V216", "V218", "V234", "V243", "V309", "V311",
                     "V323", "V338", "V382", "V384", "V405", "V412", "V415", "V417",
                     "V424", "V425", "V434", "V483")]


then I fitted a neural network (train() comes from the caret package):

library(caret)
model = train(V1 ~ .,
              data = dat,
              method = 'nnet',
              linout = TRUE,
              trace = FALSE)
ps = predict(model, dat)


If you check the result, ps, you find that most values are the same:

606 are -0.2158001115381
346 are 0.364988437287819

The rest of the 1000 values are very close to these two; the full distribution is:

just 1 is -0.10
   1 is -0.14
   1 is -0.17
   1 is -0.18
   3 are -0.20
 617 are -0.21
   1 is  0.195
   1 is  0.359
   1 is  0.360
   1 is  0.362
   2 are  0.363
 370 are  0.364

, so I just collapsed all negative values to -1 and all positive values to 1 (let's say it is the propensity not to buy or to buy). I then found that 380 rows were predicted negative when the original value was -1 (there are 499 such rows), that is, a success rate of 76pct.

Only 257 values were predicted positive when the original values were positive (success rate of 257/501 = 51.3pct).

The combined success rate in predicting the response variable is (380 + 257)/1000 = 63.7pct, comfortably above the 54pct we aimed for.
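The sign-thresholding and the per-class success rates can be computed mechanically. Here is a small self-contained sketch (the `truth` and `ps` vectors below are made up for illustration, not the actual data):

```r
# made-up stand-ins for the observed -1/+1 labels and the raw nnet outputs
truth = c(-1, -1, -1, 1, 1, 1, 1)
ps    = c(-0.21, -0.21, 0.36, 0.36, 0.36, -0.21, 0.36)

pred = ifelse(ps < 0, -1, 1)  # collapse every output to -1 or +1 by its sign

neg_rate = mean(pred[truth == -1] == -1)  # success on the true negatives
pos_rate = mean(pred[truth ==  1] ==  1)  # success on the true positives
combined = mean(pred == truth)            # overall success rate
c(neg_rate, pos_rate, combined)
```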

---
now, I tried with the second data set, test_data (the second 1000 rows)

negative values when original response value was negative too:
          success rate is 453/501 = .90419

Impressive? Now see how disappointing this part is:

positive values when original response value was positive too:
          success rate is 123/499 = .24649

the combined success rate is (453 + 123)/1000 = 57.6pct, which is barely above the mark
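As an aside, a cleaner out-of-sample check is to fit the model once on the first 1000 rows and only score the second 1000 with it. A sketch of that pattern on simulated stand-in data, with a plain logistic regression (base R's glm) standing in for the nnet model:

```r
set.seed(42)
# simulated stand-in: 2000 rows, a -1/+1 response driven by two of five predictors
n = 2000
X = data.frame(matrix(rnorm(n * 5), n))
truth = ifelse(X$X1 - X$X2 + rnorm(n) > 0, 1, -1)

train = 1:1000
test  = 1001:2000

# fit ONCE on the training rows...
fit = glm((truth[train] == 1) ~ ., data = X[train, ], family = binomial)

# ...and score ONLY the held-out rows with that fitted model
p_test = predict(fit, newdata = X[test, ], type = "response")
pred   = ifelse(p_test > 0.5, 1, -1)
mean(pred == truth[test])  # honest out-of-sample success rate
```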

---
Do I trust my own method?

Of course not. I would get all the previous consumer surveys (buy / not buy) my company had on file and then check whether I could reach a success rate at or above 57pct (which to me is too low, to say nothing of 54pct).

For the time and effort I spent, maybe I should have tossed an electronic coin; with a bit of luck you can get slightly above 50pct success     : - )

Maybe that is why they chose 54pct as the mark: over 1000 predictions a coin toss will land very close to 50pct, so 54pct is just far enough to rule out pure luck.
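That hunch about the 54pct mark can be made precise: for 1000 independent coin-flip guesses, the chance of reaching 540 or more correct by luck alone is tiny.

```r
# probability that a fair coin gets at least 540 of 1000 predictions right
p_luck = pbinom(539, size = 1000, prob = 0.5, lower.tail = FALSE)
p_luck  # well under 1 percent
```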

---
refinement, or "If we had all the time in the world..."

Since I got some free time, I tried the same selected variables with every predictor log-transformed:

vars = c("V50", "V66", "V325", "V426", "V28", "V44", "V75", "V111", "V128", "V149",
         "V152", "V154", "V179", "V181", "V189", "V203", "V210", "V213", "V216",
         "V218", "V234", "V243", "V309", "V311", "V323", "V338", "V382", "V384",
         "V405", "V412", "V415", "V417", "V424", "V425", "V434", "V483")
log_dat = data.frame(V1 = train_data$V1, log(train_data[, vars]))

model = train(V1 ~ .,
              data = log_dat,
              method = 'nnet',
              linout = TRUE,
              trace = FALSE)
ps = predict(model, log_dat)

negative values when the original response value was negative too: .70

positive values when the original response value was positive too: .69

combined success rate: 69.4pct

# now we try with the other 1000 values
[same variables, but using test_data instead of train_data -- note that this refits the model on the test rows rather than scoring them with the model fitted above]

log_test = data.frame(V1 = test_data$V1, log(test_data[, vars]))

model = train(V1 ~ .,
              data = log_test,
              method = 'nnet',
              linout = TRUE,
              trace = FALSE)
ps = predict(model, log_test)


negative values when original response value was negative too:
          success rate is 322/499 = .645

positive values when original response value was positive too:
          success rate is 307/501 = .612

combined success rate: 62.9pct

Other things I tried failed -- if we had all the time in the world we could try other possibilities and get better results... or not.

Let me know whether you can reproduce the results, which are clearly above the 54pct mark.

Wednesday, April 17, 2013

CPSS: Implementation monitoring of standards

Implementation monitoring of standards
CPSS, Apr 17, 2013
http://www.bis.org/cpss/cpssinfo2_5.htm

The Committee on Payment and Settlement Systems (CPSS) and the International Organization of Securities Commissions (IOSCO) have started the process of monitoring implementation of the Principles for financial market infrastructures (the PFMIs). The PFMIs are international standards for payment, clearing and settlement systems, including central counterparties, and trade repositories. They are designed to ensure that the infrastructure supporting global financial markets is robust and well placed to withstand financial shocks. The PFMIs were issued by CPSS-IOSCO in April 2012 and jurisdictions around the world are currently in the process of implementing them into their regulatory frameworks to foster the safety, efficiency and resilience of their financial market infrastructures (FMIs).

Full, timely and consistent implementation of the PFMIs is fundamental to ensuring the safety, soundness and efficiency of key FMIs and for supporting the resilience of the global financial system. In addition, the PFMIs play an important part in the G20's mandate that all standardised over-the-counter (OTC) derivatives should be centrally cleared. Global central clearing requirements reinforce the importance of strong safeguards and consistent oversight of derivatives central counterparties (CCPs) in particular. CPSS and IOSCO members are committed to adopt the principles and responsibilities contained in the PFMIs in line with the G20 and Financial Stability Board (FSB) expectations.

Scope of the assessments

The implementation monitoring will cover the implementation of the principles contained in the PFMIs as well as responsibilities A to E. Reviews will be carried out in stages, assessing first whether a jurisdiction has completed the process of adopting the legislation and other policies that will enable it to implement the principles and responsibilities and subsequently whether these changes are complete and consistent with the principles and responsibilities. Assessments will also examine consistency in the outcomes of implementation of the principles by FMIs and implementation of the responsibilities by authorities. The results of the assessments will be published on both CPSS and IOSCO websites.

Jurisdictional coverage - The assessments will cover the following jurisdictions: Argentina, Australia, Belgium, Brazil, Canada, Chile, China, European Union, France, Germany, Hong Kong SAR, Indonesia, India, Italy, Japan, Korea, Mexico, Netherlands, Russia, Saudi Arabia, Singapore, South Africa, Spain, Sweden, Switzerland, Turkey, United Kingdom and United States. The jurisdictional coverage reflects, among other factors, the importance of the PFMIs to the G20 mandate for central clearing of OTC derivatives and the need to ensure robust risk management by CCPs.

Types of FMI - In many jurisdictions, the framework for regulation, supervision and oversight is different for each type of financial market infrastructure (FMI). Whilst initial overall assessments will cover the regulatory changes necessary for all types of FMIs, further thematic assessments (assessing the consistency of implementation) are likely to focus on OTC derivatives CCPs and trade repositories (TRs), given their importance for the successful completion of the G20 commitments regarding central clearing and transparency for derivatives products. Prioritising OTC derivatives CCPs and TRs will help ensure timely initial reporting, given that most jurisdictions have made the most progress in implementing reforms for these sectors.


Timing

A first assessment is currently underway examining whether jurisdictions have made regulatory changes that reflect the principles and responsibilities in the PFMIs. Results of this assessment are due to be published in the third quarter of 2013.

Monday, April 15, 2013

For a Sick Friend: First, Do No Harm. By Letty Cottin Pogrebin

Conversing with the ill can be awkward, but keeping a few simple commandments makes a huge difference
The Wall Street Journal, April 13, 2013, on page C3
http://online.wsj.com/article/SB10001424127887324240804578416574019136696.html


"A closed mouth gathers no feet." It's a charming axiom, but silence isn't always an option when we're dealing with a friend who's sick or in despair. The natural human reaction is to feel awkward and upset in the face of illness, but unless we control those feelings and come up with an appropriate response, there's a good chance that we'll blurt out some cringe-worthy cliché, craven remark or blunt question that, in retrospect, we'll regret.

Take this real-life exchange. If ever the tone deaf needed a poster child, Fred is their man.

"How'd it go?" he asked his friend, Pete, who'd just had cancer surgery.

"Great!" said Pete. "They got it all."

"Really?" said Fred. "How do they know?"


Later, when Pete told him how demoralizing his remark had been, Fred's excuse was, "I was nervous. I just said what popped into my head."

We're all nervous around illness and mortality, but whatever pops into our heads should not necessarily plop out of our mouths. Yet, in my own experience as a breast-cancer patient, and for many of the people I have interviewed, friends do make hurtful remarks. Marion Fontana, who was diagnosed with breast cancer eight years after her husband, a New York City firefighter, died in the collapse of the World Trade Center, was told that she must have really bad karma to attract so much bad luck. In another case, upon hearing a man's leukemia diagnosis, his friend shrieked, "Wow! A girl in my office just died of that!"

You can't make this stuff up.

If we're not unwittingly insulting our sick friends, we're spouting clichés like "Everything happens for a reason." Though our intent is to comfort the patient, we also say such things to comfort ourselves and tamp down our own feelings of vulnerability. From now on, rather than sound like a Hallmark card, you might want to heed the following 10 Commandments for Conversing With a Sick Friend.

1. Rejoice at their good news. Don't minimize their bad news. A guy tells you that the doctors got it all, say "Hallelujah!" A man with advanced bladder cancer says that he's taking his kids to Disneyland next summer, don't bite your lip and mutter, "We'll see." Tell him it's a great idea. (What harm can it do?) Which doesn't mean that you should slap a happy face on a friend's grim diagnosis by saying something like, "Don't worry! Nowadays breast cancer is like having a cold!"

The best response in any encounter with a sick friend is to say, "Tell me what I can do to make things easier for you—I really want to help."

2. Treat your sick friends as you always did—but never forget their changed circumstance. However contradictory that may sound, I promise you can learn to live within the paradox if you keep your friend's illness and its constraints in mind but don't treat them as if their illness is who they are. Speak to them as you always did (tease them, kid around with them, get mad at them) but indulge their occasional blue moods or hissy-fits. Most important, start conversations about other things (sports, politics, food, movies) as soon as possible and you'll help speed their journey from the morass of illness to the miracle of the ordinary.

3. Avoid self-referential comments. A friend with a hacking cough doesn't need to hear, "You think that's bad? I had double pneumonia." Don't tell someone with brain cancer that you know how painful it must be because you get migraines. Don't complain about your colicky baby to the mother of a child with spina bifida. I'm not saying sick people have lost their capacity to empathize with others, just that solipsism is unhelpful and rude. The truest thing you can say to a sick or suffering friend is, "I can only try to imagine what you're going through."

4. Don't assume, verify. Several friends of Michele, a Canadian writer, reacted to her cancer diagnosis with, "Well, at least you caught it early, so you'll be all right!" In fact, she did not catch it early, and never said or hinted otherwise. So when someone said, "You caught it early," she thought, "No, I didn't, therefore I'm going to die." Repeat after me: "Assume nothing."

5. Get the facts straight before you open your mouth. Did your friend have a heart or liver transplant? Chemo or radiation? Don't just ask, "How are you?" Ask questions specific to your friend's health. "How's your rotator cuff these days?" "Did the blood test show Lyme disease?" "Are your new meds working?" If you need help remembering who has shingles and who has lupus, or the date of a friend's operation, enter a health note under the person's name in your contacts list or stick a Post-it by the phone and update the information as needed.

6. Help your sick friend feel useful. Zero in on one of their skills and lead to it. Assuming they're up to the task, ask a cybersmart patient to set up a Web page for you; ask a bridge or chess maven to give you pointers on the game; ask a retired teacher to guide your teenager through the college application process. In most cases, your request won't be seen as an imposition but a vote of confidence in your friend's talent and worth.

7. Don't infantilize the patient. Never speak to a grown-up the way you'd talk to a child. Objectionable sentences include, "How are we today, dearie?" "That's a good boy." "I bet you could swallow this teeny-tiny pill if you really tried." And the most wince-worthy, "Are we ready to go wee-wee?" Protect your friend's dignity at all costs.

8. Think twice before giving advice. Don't forward medical alerts, newspaper clippings or your Aunt Sadie's cure for gout. Your idea of a health bulletin that's useful or revelatory may mislead, upset, confuse or agitate your friend. Sick people have doctors to tell them what to do. Your job is simply to be their friend.

9. Let patients who are terminally ill set the conversational agenda. If they're unaware that they're dying, don't be the one to tell them. If they know they're at the end of life and want to talk about it, don't contradict or interrupt them; let them vent or weep or curse the Fates. Hand them a tissue and cry with them. If they want to confide their last wish, or trust you with a long-kept secret, thank them for the honor and listen hard. Someday you'll want to remember every word they say.

10. Don't pressure them to practice 'positive thinking.' The implication is that they caused their illness in the first place by negative thinking—by feeling discouraged, depressed or not having the "right attitude." Positive thinking can't cure Huntington's disease, ALS or inoperable brain cancer. Telling a terminal patient to keep up the fight isn't just futile, it's cruel. Insisting that they see the glass as half full may deny them the truth of what they know and the chance to tie up life's loose ends while there's still time. As one hospice patient put it, "All I want from my friends right now is the freedom to sulk and say goodbye."

Though most of us feel dis-eased around disease, colloquial English proffers a sparse vocabulary for the expression of embarrassment, fear, anxiety, grief or sorrow. These 10 commandments should help you relate to your sick friends with greater empathy, warmth and grace.

—Ms. Pogrebin is the author of 10 books and a founding editor of Ms. magazine. Her latest book is "How to Be a Friend to a Friend Who's Sick," from which this essay is adapted.

Saturday, April 13, 2013

BCBS: Monitoring tools for intraday liquidity management - final document

April 2013
http://www.bis.org/publ/bcbs248.htm

This document is the final version of the Committee's Monitoring tools for intraday liquidity management. It was developed in consultation with the Committee on Payment and Settlement Systems to enable banking supervisors to better monitor a bank's management of intraday liquidity risk and its ability to meet payment and settlement obligations on a timely basis. Over time, the tools will also provide supervisors with a better understanding of banks' payment and settlement behaviour.

The framework includes:
  • the detailed design of the monitoring tools for a bank's intraday liquidity risk;
  • stress scenarios;
  • key application issues; and
  • the reporting regime.
Management of intraday liquidity risk forms a key element of a bank's overall liquidity risk management framework. As such, the set of seven quantitative monitoring tools will complement the qualitative guidance on intraday liquidity management set out in the Basel Committee's 2008 Principles for Sound Liquidity Risk Management and Supervision. It is important to note that the tools are being introduced for monitoring purposes only and that internationally active banks will be required to apply them. National supervisors will determine the extent to which the tools apply to non-internationally active banks within their jurisdictions.

Basel III: The Liquidity Coverage Ratio and liquidity risk monitoring tools (January 2013), which sets out one of the Committee's key reforms to strengthen global liquidity regulations, does not include intraday liquidity within its calibration. The reporting of the monitoring tools will commence on a monthly basis from 1 January 2015 to coincide with the implementation of the LCR reporting requirements.

An earlier version of the framework of monitoring tools was issued for consultation in July 2012. The Committee wishes to thank those who provided feedback and comments, as these were instrumental in revising and finalising the monitoring tools.

Authorities' access to trade repository data - consultative report

CPSS: Authorities' access to trade repository data - consultative report
April 2013
www.bis.org/publ/cpss108.htm

The consultative report Authorities' access to trade repository data was published for public comment on 11 April 2013. 

Trade repositories (TRs) are entities that maintain a centralised electronic record of over-the-counter (OTC) derivatives transaction data. TRs will play a key role in increasing transparency in the OTC derivatives markets by improving the availability of data to authorities and the public in a manner that supports the proper handling and use of the data. For a broad range of authorities and official international financial institutions, it is essential to be able to access the data needed to fulfil their respective mandates while maintaining the confidentiality of the data pursuant to the laws of relevant jurisdictions.

The purpose of the report is to provide guidance to TRs and authorities on the principles that should guide authorities' access to data held in TRs for typical and non-typical data requests. The report also sets out possible approaches to addressing confidentiality concerns and access constraints. Accompanying the report is a cover note that lists the specific related issues for comment.

Comments should be sent by 10 May 2013 to both the CPSS secretariat (cpss@bis.org) and the IOSCO secretariat (accessdata@iosco.org). The comments will be published on the websites of the BIS and IOSCO unless commentators have requested otherwise.

Thursday, April 11, 2013

Market-Based Structural Top-Down Stress Tests of the Banking System. By Jorge Chan-Lau

IMF Working Paper No. 13/88
April 10, 2013
http://www.imf.org/external/pubs/cat/longres.aspx?sk=40468.0

Summary: Despite increased need for top-down stress tests of financial institutions, performing them is challenging owing to the absence of granular information on banks’ trading and loan portfolios. To deal with these data shortcomings, this paper presents a market-based structural top-down stress testing methodology that relies on market-based measures of a bank's probability of default and structural models of default risk to infer the capital losses banks could experience in stress scenarios. As an illustration, the methodology is applied to a set of banks in an advanced emerging market economy.

Tuesday, April 2, 2013

Regulators Let Big Banks Look Safer Than They Are. By Sheila Bair

The Wall Street Journal, April 2, 2013, on page A13
http://online.wsj.com/article/SB10001424127887323415304578370703145206368.html

The recent Senate report on the J.P. Morgan Chase "London Whale" trading debacle revealed emails, telephone conversations and other evidence of how Chase managers manipulated their internal risk models to boost the bank's regulatory capital ratios. Risk models are common and certainly not illegal. Nevertheless, their use in bolstering a bank's capital ratios can give the public a false sense of security about the stability of the nation's largest financial institutions.

Capital ratios (also called capital adequacy ratios) reflect the percentage of a bank's assets that are funded with equity and are a key barometer of the institution's financial strength—they measure the bank's ability to absorb losses and still remain solvent. This should be a simple measure, but it isn't. That's because regulators allow banks to use a process called "risk weighting," which allows them to raise their capital ratios by characterizing the assets they hold as "low risk."

For instance, as part of the Federal Reserve's recent stress test, Bank of America reported that its capital ratio is 11.4%. But that was a measure of the bank's common equity as a percentage of the assets it holds as weighted by their risk—which is much less than the value of these assets according to accounting rules. Take out the risk-weighting adjustment, and its capital ratio falls to 7.8%.

On average, the three big universal banking companies (J.P. Morgan Chase, Bank of America and Citigroup) risk-weight their assets at only 55% of their total assets. For every trillion dollars in accounting assets, these megabanks calculate their capital ratio as if the assets represented only $550 billion of risk.
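The arithmetic behind these two ratios is simple enough to sketch in a few lines of code. The figures below are hypothetical, chosen only to mirror the magnitudes cited above (a $1 trillion balance sheet, a 55% average risk weight):

```python
def capital_ratio(equity, total_assets, avg_risk_weight=1.0):
    """Common equity as a share of assets; with avg_risk_weight < 1,
    the denominator shrinks and the reported ratio rises."""
    return equity / (total_assets * avg_risk_weight)

equity = 55.0    # $55 billion of common equity (hypothetical)
assets = 1000.0  # $1 trillion in accounting assets

leverage_ratio = capital_ratio(equity, assets)        # 55 / 1000  = 5.5%
risk_based     = capital_ratio(equity, assets, 0.55)  # 55 / 550   = 10.0%
```

The same equity against the same balance sheet, yet the risk-based figure is nearly twice the unweighted one—which is exactly the gap between the headline stress-test numbers and the plain ratios quoted above.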

As we learned during the 2008 financial crisis, financial models can be unreliable. Their assumptions about the risk of steep declines in housing prices were fatally flawed, causing catastrophic drops in the value of mortgage-backed securities. And now the London Whale episode has shown how capital regulations create incentives for even legitimate models to be manipulated.

According to the evidence compiled by the Senate Permanent Subcommittee on Investigations, the Chase staff was able to magically cut the risks of the Whale's trades in half. Of course, they also camouflaged the true dangers in those trades.

The ease with which models can be manipulated results in wildly divergent risk-weightings among banks with similar portfolios. Ironically, the government permits a bank to use its own internal models to help determine the riskiness of assets, such as securities and derivatives, which are held for trading—but not to determine the riskiness of good old-fashioned loans. The risk weights of loans are determined by regulation and generally subject to tougher capital treatment. As a result, financial institutions with large trading books can have less capital and still report higher capital ratios than traditional banks whose portfolios consist primarily of loans.

Compare, for instance, the risk-based ratios of Morgan Stanley, an investment bank that has struggled since the crisis, and U.S. Bancorp, a traditional commercial lender that has been one of the industry's best performers. According to the Fed's latest stress test, Morgan Stanley reported a risk-based capital ratio of nearly 14%; take out the risk weighting and its ratio drops to 7%. USB has a risk-based ratio of about 9%, virtually the same as its ratio on a non-risk weighted basis.

In the U.S. and most other countries, banks can also load up on their own country's government-backed debt and treat it as having zero risk. Many banks in distressed European nations have aggressively purchased their country's government debt to enhance their risk-based capital ratios.

In addition, if a bank buys the debt of another bank, it only needs to include 20% of the accounting value of those holdings for determining its capital requirements—but it must include 100% of the value of bonds of a commercial issuer. The rules governing capital ratios treat Citibank's debt as having one-fifth the risk of IBM's. In a financial system that is already far too interconnected, it defies reason that regulators give banks such strong capital incentives to invest in each other.
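The capital arithmetic behind that one-fifth figure can be sketched directly; this assumes the standard 8% Basel minimum capital requirement, and the dollar amounts are hypothetical:

```python
BASEL_MINIMUM = 0.08  # standard 8% minimum capital requirement

def required_capital(exposure, risk_weight):
    """Capital a bank must hold against an exposure under the
    standardized risk-weighting rules described in the text."""
    return exposure * risk_weight * BASEL_MINIMUM

# $100 million bond, two different issuers:
bank_bond      = required_capital(100e6, 0.20)  # debt of another bank
corporate_bond = required_capital(100e6, 1.00)  # debt of a commercial issuer
# bank_bond requires one-fifth the capital of corporate_bond
```

So $1.6 million of capital suffices against a $100 million bank bond, versus $8 million against the same exposure to a commercial issuer—the incentive toward interconnectedness the article describes.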

Regulators need to use a simple, effective ratio as the main determinant of a bank's capital strength and go back to the drawing board on risk-weighting assets. It does make sense to look at the riskiness of banks' assets in determining the adequacy of their capital. But the current rules are upside down, providing more generous treatment of derivatives trading than of fully collateralized small-business lending.

The main argument megabanks advance against a tough capital ratio is that it would force them to raise more capital and hurt the economic recovery. But the megabanks aren't doing much new lending. Since the crisis, they have piled up excess reserves and expanded their securities and derivatives positions—where they get a capital break—while loans, which are subject to tougher capital rules, have remained nearly flat.

Though all banks have struggled to lend in the current environment, midsize banks, with their higher capital levels, have the strongest loan growth, and community banks do the lion's share of small-business lending. A strong capital ratio will reduce megabanks' incentives to trade instead of making loans. Over the long term, it will make these banks a more stable source of credit for the real economy and give them greater capacity to absorb unexpected losses. Bet on it, there will be future London Whale surprises, and the next one might not be so easy to harpoon.

Ms. Bair, the chairman of the Federal Deposit Insurance Corporation from 2006 to 2011, is the author of "Bull by the Horns: Fighting to Save Main Street From Wall Street and Wall Street From Itself" (Free Press, 2012).

Monday, April 1, 2013

China's Demography and its Implications

China's Demography and its Implications. By Il Houng Lee, Qingjun Xu, and Murtaza Syed
IMF Working Paper No. 13/82
Mar 28, 2013
http://www.imf.org/external/pubs/cat/longres.aspx?sk=40446.0

Summary: In coming decades, China will undergo a notable demographic transformation, with its old-age dependency ratio doubling to 24 percent by 2030 and rising even more precipitously thereafter. This paper uses the permanent income hypothesis to reassess national savings behavior, with greater prominence and more careful consideration given to the role played by changing demography. We use a forward-looking and dynamic approach that considers the entire population distribution. We find that this not only holds up well empirically but may also be superior to the static dependency ratios typically employed in the literature. Going further, we simulate global savings behavior based on our framework and find that China’s demographics should have induced a negative current account in the 2000s and a positive one in the 2010s given the rising share of prime savers, only turning negative around 2045. The opposite is true for the United States and Western Europe. The observed divergence in current account outcomes from the simulated path appears to have been partly policy induced. Over the next couple of decades, individual countries’ convergence toward the simulated savings pattern will be influenced by their past divergences and future policy choices. Other implications arising from China’s demography, including the growth model, the pension system, the labor market, and the public finances are also briefly reviewed.

China’s Path to Consumer-Based Growth: Reorienting Investment and Enhancing Efficiency

China’s Path to Consumer-Based Growth: Reorienting Investment and Enhancing Efficiency. By Il Houng Lee, Murtaza Syed, and Liu Xueyan
IMF Working Paper No. 13/83
March 29, 2013

http://www.imf.org/external/pubs/cat/longres.aspx?sk=40446.0

Summary: This paper proposes a possible framework for identifying excessive investment. Based on this method, it finds evidence that some types of investment are becoming excessive in China, particularly in inland provinces. In these regions, private consumption has on average become more dependent on investment (rather than vice versa) and the impact is relatively short-lived, necessitating ever higher levels of investment to maintain economic activity. By contrast, private consumption has become more self-sustaining in coastal provinces, in large part because investment here tends to benefit household incomes more than corporates. If existing trends continue, valuable resources could be wasted at a time when China’s ability to finance investment is facing increasing constraints due to dwindling land, labor, and government resources and becoming more reliant on liquidity expansion, with attendant risks of financial instability and asset bubbles. Thus, investment should not be indiscriminately directed toward urbanization or industrialization of Western regions but shifted toward sectors with greater and more lasting spillovers to household income and consumption. In this context, investment in agriculture and services is found to be superior to that in manufacturing and real estate. Financial reform would facilitate such a reorientation, helping China to enhance capital efficiency and keep growth buoyant even as aggregate investment is lowered to sustainable levels.

Friday, March 29, 2013

America's Voluntary Standards System: A 'Best Practice' Model for Asian Innovation Policies? By Dieter Ernst

America's Voluntary Standards System: A 'Best Practice' Model for Asian Innovation Policies? By Dieter Ernst
East-West Center, Policy Studies, No. 66, March 2013
ISBN: 978-0-309-26204-5 (print); 978-0-86638-205-2 (electronic)
Pages: xvi, 66
http://www.eastwestcenter.org/publications/americas-voluntary-standards-system-best-practice-model-asian-innovation-policies


Summary

Across Asia there is a keen interest in the potential advantages of America's market-led system of voluntary standards and its contribution to US innovation leadership in complex technologies.

For its proponents, the US tradition of bottom-up, decentralized, informal, market-led standardization is a "best practice" model for innovation policy. Observers in Asia are, however, concerned about possible drawbacks of a standards system largely driven by the private sector.

This study reviews the historical roots of the American system, examines its defining characteristics, and highlights its strengths and weaknesses. A tradition of decentralized local self-government has given voice to diverse stakeholders in innovation. However, a lack of effective coordination of multiple stakeholder strategies constrains effective and open standardization processes.

Asian countries seeking to improve their standards systems should study the strengths and weaknesses of the American system. Attempts to replicate the US standards system will face clear limitations--persistent differences in Asia's economic institutions, levels of development, and growth models are bound to limit convergence to a US-style market-led voluntary standards system.

Thursday, March 28, 2013

Too Cold, Too Hot, Or Just Right? Assessing Financial Sector Development Across the Globe

Too Cold, Too Hot, Or Just Right? Assessing Financial Sector Development Across the Globe. By A. Barajas et al.
IMF Working Paper No. 13/81
March 28, 2013
http://www.imf.org/external/pubs/cat/longres.aspx?sk=40441.0

Summary: This paper introduces the concept of the financial possibility frontier as a constrained optimum level of financial development to gauge the relative performance of financial systems across the globe. This frontier takes into account structural country characteristics, institutional, and macroeconomic factors that impact financial system deepening. We operationalize this framework using a benchmarking exercise, which relates the difference between the actual level of financial development and the level predicted by structural characteristics, to an array of policy variables. We also show that an overshooting of the financial system significantly beyond levels predicted by its structural fundamentals is associated with credit booms and busts.


Excerpts:

Ample empirical evidence has shown a positive, albeit non-linear, relationship between financial system depth, economic growth, and macroeconomic volatility. At the same time, rapid expansion in credit has been associated with higher bank fragility and a greater likelihood of a systemic banking crisis. This seemingly conflicting evidence is actually consistent with theory. The same mechanisms through which finance helps growth also make it susceptible to shocks and, ultimately, fragility. Specifically, the maturity and liquidity transformation from short-term savings and deposit facilities into long-term investments is at the core of the positive impact of finance on the real economy, but it can also render the system susceptible to shocks. The information asymmetries and ensuing agency problems between savers and entrepreneurs that banks help to alleviate can also turn into a source of fragility, given agency conflicts between depositors/creditors and banks.

The importance of the financial sector for the overall economy raises the question of the “optimal” or “Goldilocks” level of financial depth and the requisite policies to reach this optimum. Given the dual nature of financial deepening, contributing to growth while often resulting in boom-bust cycles, and the identification of non-linear relationships between growth, volatility, and financial depth, it is apparent that additional deepening is not always desirable. Further, there is increasing evidence for a critical role of the financial system in defining policy space and the transmission of fiscal, monetary, and exchange rate policies (IMF, 2012). Both shallow and over-extended financial systems can severely reduce the available policy space and hamper transmission channels.

The conceptual and empirical frameworks offered in this paper are relevant for the academic and policy debate on financial sector deepening, particularly in developing countries. We introduce the concept of a financial possibility frontier as a constrained optimum level of financial development to gauge the relative performance of financial systems around the globe. Specifically, this concept allows us to assess the performance of countries’ financial systems over time relative to structural country characteristics and other state variables (e.g., macroeconomic and institutional variables). Depending on the position of a country’s financial system relative to the frontier, policy options can be prioritized to address deficiencies.

Three different sets of policies can be delineated depending on a country’s standing relative to the frontier. Market-developing policies, related to macroeconomic stability, long-term institution building, and other measures to overcome constraints imposed by a small size or volatile economic structure, can help push out the frontier. Market-enabling policies, which address deficiencies such as regulatory barriers and lack of competition, can help a financial system move toward the frontier. Finally, market-harnessing policies help prevent a financial system from moving beyond the frontier (the long-term sustainable equilibrium), and include regulatory oversight and short-term macroeconomic management.

We also operationalize this conceptual framework by presenting a benchmark model that predicts countries’ level of financial development based on structural characteristics (e.g., income, size, and demographic characteristics) and other fundamental factors. The most straightforward approach for assessing a country’s progress in financial deepening is to benchmark its financial system against peers or regional averages. Such comparisons, while useful, do not allow for a systematic unbundling of structural and policy factors that have a bearing on financial deepening. Using regression analysis, we relate gaps between predicted and actual levels of financial development to an array of macroeconomic, regulatory, and institutional variables. We also provide preliminary evidence that overshooting the predicted level of financial development is associated with credit boom-bust episodes, underlining the importance of optimizing rather than maximizing financial development.
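The two-step logic of such a benchmarking exercise, regressing financial depth on structural characteristics and then relating the residual "gap" to policy variables, can be sketched with synthetic data. All variable names, coefficients, and data here are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # country-year observations (synthetic)

# Step 1: structural benchmark -- predict financial depth (e.g. credit/GDP)
# from structural characteristics (constant, income, size, demography).
X = np.column_stack([
    np.ones(n),                  # intercept
    rng.normal(9.0, 1.0, n),     # log GDP per capita
    rng.normal(16.0, 2.0, n),    # log population
    rng.normal(0.5, 0.1, n),     # old-age dependency ratio
])
true_beta = np.array([-1.0, 0.15, 0.02, -0.3])  # data-generating coefficients
depth = X @ true_beta + rng.normal(0, 0.05, n)

beta_hat, *_ = np.linalg.lstsq(X, depth, rcond=None)
gap = depth - X @ beta_hat  # actual minus structurally predicted depth

# Step 2: relate the gap to a policy variable (e.g. an institutions index).
policy = rng.normal(0, 1, n)
Z = np.column_stack([np.ones(n), policy])
gamma_hat, *_ = np.linalg.lstsq(Z, gap, rcond=None)
```

A large positive gap flags a financial system deeper than its structural fundamentals predict, which, as the paper argues, is the overshooting associated with credit boom-bust episodes.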

This paper is related to several literatures. First, it is directly related to an earlier exercise to derive an access possibilities frontier as a conceptual tool to assess the optimal level of sustainable outreach of the financial system (Beck and de la Torre, 2007). While Beck and de la Torre (2007) focus on the microeconomics of access to and use of financial services, this paper provides a macroeconomic perspective on financial sector development. Second, our paper is related to the empirical literature on benchmarking. Based on Beck et al. (2008) and Al Hussainy et al. (2011), we derive a benchmarking model that relates a country’s level of financial development over time to a statistical benchmark, obtained from a large panel regression.

In a broader sense, the paper is also related to the literature on the finance-growth nexus, financial crises, and studies identifying policies needed for sound and effective financial systems. The finance and growth literature, as surveyed by Levine (2005), among others, has found a positive relationship between financial deepening and growth. More recent work, however, has uncovered non-linearities in this relationship. There is evidence that the effect of financial development is strongest among middle-income countries (Barajas et al., 2012), whereas other work finds a declining effect of finance on growth as countries grow richer. More recently, Arcand et al. (2012) find that the finance-growth relationship becomes negative as private credit reaches 110 percent of GDP, while Dabla-Norris and Srivisal (2013) document a positive relationship between financial depth and macroeconomic volatility at very high levels.

Our paper is also related to a growing literature exploring the anatomy of financial crises. This literature has pointed to the role of macroeconomic, bank-level and regulatory factors in driving and exacerbating financial fragility. Finally, our paper is related to a diverse literature exploring macroeconomic and institutional determinants of sound and efficient financial deepening.

Cyprus: Some Early Lessons. By Thorsten Beck

Cyprus: Some Early Lessons. By Thorsten Beck
World Bank Blogs, Mar 28, 2013

The crisis in Cyprus is still unfolding and the final resolution might still have some way to go, but the events in Nicosia and Brussels already offer some first lessons. And these lessons certainly look familiar to those who have studied previous crises. Bets are that Cyprus will not be the Troika’s last patient, with one Southern European finance minister already dreading the moment when he might find himself in a situation like that of his Cypriot colleague. All the more important, then, to analyze the ongoing Cyprus crisis resolution for insights into where the resolution of the Eurozone crisis might be headed and what needs to be done.

1. A deposit insurance scheme is only as good as the sovereign backing it

One of the main objectives of deposit insurance is to prevent bank runs. That was also the idea behind the increase of deposit insurance limits across the Eurozone to 100,000 Euros after the Global Financial Crisis. However, deposit insurance is typically designed for idiosyncratic bank failures, not for systemic crises. In the latter case, it is important that public backstop funding is available. Obviously, the credibility of such a backstop depends on a solvent sovereign. As Cyprus has shown, if the solvency of the sovereign is itself in question, this will undermine the confidence of depositors in a deposit insurance scheme. In the case of Cyprus, this confidence has been further undermined by the initial idea of imposing a tax on insured deposits, effectively an insurance co-payment, contradicting maybe not the letter but definitely the spirit of the promise of deposit insurance of up to 100,000 Euros. The confidence that has been destroyed by the protracted resolution process and the back-and-forth over loss distribution will be hard to re-establish. A banking system without the necessary trust, in turn, will be hard pressed to fulfill its basic functions of facilitating payment services and intermediating savings. Ultimately, this lack of confidence can only be overcome by a Eurozone-wide deposit insurance scheme with public backstop funding by the ESM and a regulatory and supervisory framework that depositors can trust.

2. A large financial system is not necessarily growth enhancing

An extensive literature has documented the positive relationship between financial deepening and economic growth, even though the recent crisis has cast doubt on this relationship (Levine, 2005; Beck, 2012). However, both the theoretical and empirical literature focus on the intermediation function of the financial system, not on the size of the financial system per se. Very different from this financial facilitator view is the financial center view, which sees the financial sector as an export sector, i.e. one that seeks to build a nationally centered financial stronghold based on relative comparative advantages such as the skill base, favorable regulatory and tax policies, (financial safety net) subsidies, etc. Economic benefits of such a financial center might also include important spin-offs from professional services (legal, accounting, consulting, etc.) that tend to cluster around the financial sector.

In recent work with Hans Degryse and Christiane Kneer (2013), using pre-2007 data, we have shown that a large financial system might stimulate growth in the short term, but comes at the expense of higher volatility. It is the financial intermediation function of finance that helps improve growth prospects, not a large financial center, a lesson that Cyprus could have learned from Iceland.

3. Crisis resolution as political distribution fight

Resolution processes are basically distributional fights about who has to bear losses. The week-long negotiations about loss allocation in Cyprus are telling in this respect. While it was initially the Eurozone authorities that were blamed for imposing losses on insured depositors, an increasingly clear picture has emerged that it may have been the Cypriot government itself that pushed for such a solution, in order to avoid imposing losses on large (and thus most likely richer and better connected) depositors.

While the Cypriot case might be the most egregious recent example of the entanglement of politics and crisis resolution, the recent crises offer ample examples of how politically sensitive the financial system is. Just two more examples here: First, even during and after the Global Financial Crisis of 2008 and 2009, there was still open political pressure across Europe to maintain or build up national champions in the respective banking systems, even at the risk of creating more too-big-to-fail banks. Second, the push by the German government to exempt small German savings and cooperative banks from ECB supervision, and thus from the banking union, can be explained only on political grounds and not in economic terms, as the "too-many-to-fail" problem is as serious as the "too-big-to-fail" problem.

4. Plus ça change, plus c'est la même chose

European authorities and many observers have pointed to the special character of each of the patients of the Eurozone crisis and their special circumstances. Ireland and Spain suffered from housing booms and subsequent busts, Portugal from high current account deficits stemming from lack of competitiveness and mis-allocation of capital inflows, Greece from high government deficit and debt and now Cyprus from an oversized banking system. So, seemingly different causes, which call for different solutions!
But there is one common thread across all crisis countries, and that is the close tie between government and bank solvency. In the case of Ireland, this tie was established when the ECB pushed the Irish authorities to assume the liabilities of several failed Irish banks. In the case of Greece, it was the other way around, with Greek banks having to be recapitalized once sovereign debt was restructured. In all crisis countries, this link deepens as their economies go into recession, worsening the government’s fiscal balance and thus increasing sovereign risk, which in turn puts pressure on the balance sheets of banks that hold these bonds but also depend on the same government for possible recapitalization. This tie is exacerbated by the tendency of banks to invest heavily in their home country’s sovereign bonds, a tendency even stronger in the Eurozone’s periphery (Acharya, Drechsler and Schnabl, 2012). Zero capital requirements for government bond holdings under the Basel regime, based on the illusion that such bonds in OECD countries are safe from default, have not helped either.

5. If you kick the can down the road, you will run out of road eventually 

The multiple rounds of support packages for Greece by the Troika, built on assumptions and data that were often outdated by the time agreements were signed, have clearly shown that you can delay the day of reckoning only so long. By kicking the can down the road, however, you risk deteriorating the situation even further. In the case of Greece, that led eventually to the restructuring of sovereign debt. Delaying the crisis resolution of Cyprus for months if not years has most likely also increased losses in the banking system. A lesson familiar from many emerging market crises (World Bank, 2001)! At first glance, the Troika seemed eager to avoid this mistake in the case of Cyprus, forcing recognition and allocation of losses in the banking system early on without overburdening the sovereign debt position. However, the recession if not depression that is sure to follow in the next few years in Cyprus will certainly increase the already high debt-to-GDP ratio and might ultimately lead to the need for sovereign debt restructuring.

6. The Eurozone crisis — a tragedy of the commons

The protracted resolution process of Cyprus has shown yet again, that in addition to a banking, sovereign, macroeconomic and currency crisis, the Eurozone faces a governance crisis. Decisions are taken jointly by national authorities who each represent the interest of their respective country (and taxpayers), without taking into account the externalities of national decisions arising on the Eurozone level. It is in the interest of every member government with fragile banks to "share the burden" with the other members, be it through the ECB’s liquidity support or the Target 2 payment system. Rather than coming up with crisis resolution on the political level, the ECB and the Eurosystem are being used to apply short-term (liquidity) palliatives that deepen distributional problems and make the crisis resolution more difficult. What is ultimately missing is a democratically legitimized authority that represents Eurozone interests.

7. Learning from the Vikings

In 2008, Iceland took a very different approach from the Eurozone when faced with the failure of its oversized banking system. It allowed its banks to fail, transferred domestic deposits into good banks, and left foreign deposits, other claims, and bad assets in the original banks, to be resolved over time. While the banking crisis and its resolution have been a traumatic experience for the Icelandic economy and society, with repercussions even for diplomatic relations between Iceland and several European countries, the country avoided transferring losses, and thus insolvency, from the banking sector to the sovereign. Iceland's government has kept its investment-grade rating throughout the crisis. And while mistakes may have been made in the resolution process (Danielsson, 2011), Iceland’s banking sector no longer drags down Iceland’s growth and might eventually even make a positive contribution.

The resolution approach in Cyprus seems to follow the Icelandic approach. While the Cypriot case might be a special one (as part of the losses fall outside the Eurozone and Cypriot banks are less connected with the rest of the Eurozone than in previous crisis cases), there are suggestions that future resolution cases might impose losses not just on junior and maybe senior creditors of banks, but even on depositors, to reduce pressure on governments’ balance sheets. A move towards market discipline, for certain; whether this is due to learning from experience, tighter government budgets across Europe, or political reasons remains to be seen.

8. Banking union with just supervision does not work

The move towards a Single Supervisory Mechanism has been hailed as major progress towards a banking union and stronger currency union.  As the case of Cyprus shows, this is certainly not enough.  The holes in the balance sheets of Cypriot banks became obvious in 2011 when Greek sovereign debt was restructured, but given political circumstances, the absence of a bank resolution framework in Cyprus and — most importantly — the absence of resources to undertake such a restructuring, the problems have not been addressed until now.  Even once the ECB has supervisory power over the Eurozone banking system, without a Eurozone-wide resolution authority with the necessary powers and resources, it will find itself forced to inject more and more liquidity and keep the zombies alive, if national authorities are unwilling to resolve a failing bank.

9. A banking union is needed for the Eurozone, but won't help for the current crisis!

While the Eurozone will not be sustainable as a currency union without a banking union, a banking union cannot help solve the current crisis. First, building up the necessary structures for a Eurozone or European regulatory and bank resolution framework cannot be done overnight, while the crisis needs immediate attention. Second, the current discussion on banking union is overshadowed by distributional concerns, as bank fragility is heavily concentrated in the peripheral countries, and using a Eurozone-wide deposit insurance and supervision mechanism to solve legacy problems is like introducing insurance after the insured event has occurred. The current crisis has to be solved before a banking union is in place. Ideally, this would be done through the establishment of an asset management company or European Recapitalization Agency, which would sort out fragile banks across Europe and could also take equity stakes in restructured banks to benefit from possible upsides (Beck, Gros and Schoenmaker, 2012). This would help disentangle the government-bank ties discussed above, and might make for a more expedient and less politicized resolution process than one conducted at the national level.

10. A currency union with capital controls?

The protracted resolution process of the Cypriot banking crisis has increased the likelihood of a systemic bank run in Cyprus once the banks reopen; even had the current solution been arrived at on the first attempt, little confidence in Cypriot banks might have remained. As in other crises (Argentina and Iceland), that prospect has led the authorities to impose capital controls, an unprecedented step within the Eurozone. Effectively, however, this implies that a Cypriot Euro is not the same as a German or Dutch Euro, as they cannot be freely exchanged via the banking system, a contradiction of the idea of a common currency (Wolff, 2013).

However, these controls only formalize and legalize what has been developing over the past few years: a rapidly disintegrating Eurozone capital market. National supervisors increasingly focus on safeguarding their home financial systems, trying to keep capital and liquidity within their home countries (Gros, 2012). Anecdotal evidence suggests that this affects not only the inter-bank market but even intra-group transactions between, say, Italian parent banks and their Austrian and German subsidiaries. Another example of the tragedy of the commons, discussed above.

11. Finally, there is no free lunch

This might sound like a broken record, but the Global Financial Crisis and the subsequent Eurozone crisis have offered multiple reminders that you cannot have your cake and eat it too. This applies as much to Dutch savers attracted by high interest rates at Icesave and then disappointed by Iceland's failure to assume the obligations of its banks, as to Cypriot banks piling into Greek government bonds promising high returns even in 2010, when it had become all but obvious that Greece would require sovereign debt restructuring. On a broader level, the idea that a joint currency brings only advantages for everyone involved, with no additional responsibilities in terms of reduced sovereignty, burden-sharing, and insurance arrangements, also resembles a free lunch.

On a positive note, the Cyprus bail-out has shown that Eurozone authorities have learnt from previous failures by forcing an early recognition of losses in Cyprus and by moving towards a banking union, even if very slowly. As discussed above, however, there are still considerable political constraints and barriers to overcome, so that it is ultimately left to each observer to decide whether the glass is half full or half empty.


References:

Acharya, Viral, Itamar Drechsler and Philipp Schnabl. 2012. A tale of two overhangs: the nexus of financial sector and sovereign credit risks. Vox, 15 April 2012.
Beck, Thorsten. 2012. Finance and growth: lessons from the literature and the recent crisis. Paper prepared for the LSE Growth Commission.
Beck, Thorsten, Hans Degryse and Christiane Kneer. 2012. Is more finance better? Disentangling intermediation and size effects of financial systems. Journal of Financial Stability, forthcoming.
Beck, Thorsten, Daniel Gros and Dirk Schoenmaker. 2012. Banking union instead of Eurobonds — disentangling sovereign and banking crises. Vox, 24 June 2012.
Danielsson, Jon. 2011. How not to resolve a banking crisis: learning from Iceland's mistakes. Vox, 26 November 2011.
Gros, Daniel. 2012. The Single European Market in banking in decline — ECB to the rescue? Vox, 16 October 2012.
Levine, Ross. 2005. Finance and growth: theory and evidence. In Handbook of Economic Growth, ed. Philippe Aghion and Steven N. Durlauf, 865–934. Amsterdam: Elsevier.
Wolff, Guntram. 2013. Capital controls are a grave risk to the eurozone. Financial Times, 26 March 2013.
World Bank. 2001. Finance for Growth: Policy Choices in a Volatile World. Policy Research Report. Washington, DC: World Bank.


Full article:
http://blogs.worldbank.org/allaboutfinance/cyprus-some-early-lessons


Wednesday, March 27, 2013

How Effective are Macroprudential Policies in China? By Bin Wang and Tao Sun

How Effective are Macroprudential Policies in China? By Bin Wang and Tao Sun
IMF Working Paper No. 13/75
March 27, 2013
http://www.imf.org/external/pubs/cat/longres.aspx?sk=40425.0

Summary: This paper investigates macroprudential policies and their role in containing systemic risk in China. It shows that China faces systemic risk in both the time (procyclicality) and cross-sectional (contagion) dimensions. The former is reflected as credit and asset price risks, while the latter is reflected as the links between the banking sector and informal financing and local government financing platforms. Empirical analysis based on 171 banks shows that some macroprudential policy tools (e.g., the reserve requirement ratio and house-related policies) are useful, but they cannot guarantee protection against systemic risk in the current economic and financial environment. Nevertheless, better-targeted macroprudential policies have greater potential to contain systemic risk pertaining to the different sizes of the banks and their location in regions with different levels of economic development. Complementing macroprudential policies with further reforms, including further commercialization of large banks, would help improve the effectiveness of those policies in containing systemic risk in China.


ISBN/ISSN: 9781484355886 / 2227-8885

Supervisory framework for measuring and controlling large exposures

Supervisory framework for measuring and controlling large exposures
BCBS, Mar 2013
http://www.bis.org/publ/bcbs246.htm

The Basel Committee on Banking Supervision has today published a proposed supervisory framework for measuring and controlling large exposures.

 One of the key lessons from the financial crisis is that banks did not always consistently measure, aggregate and control exposures to single counterparties across their books and operations. And throughout history there have been instances of banks failing due to concentrated exposures to individual counterparties (eg Johnson Matthey Bankers in the UK in 1984, the Korean banking crisis in the late 1990s). Large exposures regulation has arisen as a tool for containing the maximum loss a bank could face in the event of a sudden counterparty failure to a level that does not endanger the bank's solvency.
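The containment logic amounts to an aggregation-and-limit check, which can be sketched in a few lines. The 25% threshold, the capital figure and the function below are illustrative assumptions, not parameters from the Committee's proposal:

```python
# Hypothetical large-exposure check (the 25% limit is an assumption):
# aggregate all exposures to each group of connected counterparties
# across books and operations, then flag any group whose total exceeds
# a fixed share of the bank's capital base.

def large_exposure_breaches(exposures, capital, limit=0.25):
    """Return {group: total exposure} for every counterparty group
    whose aggregated exposure exceeds limit * capital."""
    threshold = limit * capital
    totals = {group: sum(amounts) for group, amounts in exposures.items()}
    return {g: t for g, t in totals.items() if t > threshold}

# Example: exposures (e.g. loans plus derivatives) to two counterparty
# groups, against a capital base of 200.
exposures = {"Group A": [40.0, 30.0], "Group B": [10.0, 5.0]}
print(large_exposure_breaches(exposures, capital=200.0))
# → {'Group A': 70.0}  (70 exceeds 0.25 * 200 = 50)
```

The point of aggregating across books first is exactly the lesson drawn above: a limit applied desk by desk can be satisfied even while the consolidated exposure to one counterparty endangers solvency.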

A separate key lesson from the crisis is that material losses in one systemically important financial institution (SIFI) can trigger concerns about the solvency of other SIFIs, with potentially catastrophic consequences for global financial stability. The Committee is of the view that the large exposures framework is a tool that could be used to mitigate the risk of contagion between global systemically important banks, thus underpinning financial stability.

Finally, the consultation paper presents proposals to strengthen the oversight and regulation of the shadow banking system in relation to large exposures.  In particular, the proposals include policy measures designed to capture bank-like activities conducted by non-banks that are of concern to supervisors.

The proposed new standard aims to ensure greater consistency in the way banks and supervisors measure, aggregate and control exposures to single counterparties. Acting as a backstop to risk-based capital requirements, the standard would supplement the existing risk-based capital framework by protecting banks from substantive losses caused by the sudden default of a counterparty or group of connected counterparties. The consultative paper would replace the Basel Committee's 1991 guidance Measuring and controlling large credit exposures.

Tuesday, March 26, 2013

Issues with the Bayes estimator of a conjugate normal hierarchy model

Someone asks about an instability issue in R's integrate function:
Hello everyone,

I am supposed to calculate the Bayes estimator of a conjugate normal hierarchy model. However, the Bayes estimator does not have a closed form.

The book "Theory of Point Estimation" claims that the numerical evaluation of this estimator is simple. But both of my attempts below failed.

1. I tried directly using the integration routine in R on the numerator and denominator separately. Maybe because of the infinite domain, the results are occasionally far from reasonable.

2. I tried two changes of variables so that the resulting domain is finite. I let


But the resulting estimates are very similar to direct integration of the original integrand. More often than it should happen, we obtain very large values of the Bayes estimator, up to magnitude 10^6.

I wonder if there is any other numerical integration trick which can lead to a more accurate evaluation.

I appreciate any suggestion.


-------------------------------------------
xxx
Some State University
-------------------------------------------

Well, what happens here? Her program has a part that says:

[Bayes(nu,p,sigma,xbar) is the ratio of two integrals, "f" and "g": f is the numerator and g the denominator, so Bayes = f/g]

Now, executing Bayes(2,10,1,9.3) fails:

> Bayes(2,10,1,9.3)
[1] 1477.394

which is much greater than the expected value of approximately 8.


I tried the same routine, integrate, on a simpler case (dnorm is the standard normal density):

> integrate(dnorm,0,1)
0.3413447 with absolute error < 3.8e-15
> integrate(dnorm,0,10)
0.5 with absolute error < 3.7e-05
> integrate(dnorm,0,100)
0.5 with absolute error < 1.6e-07
> integrate(dnorm,0,1000)
0.5 with absolute error < 4.4e-06
> integrate(dnorm,0,10000000000)
0 with absolute error < 0



As we can see, the last attempt, with a very large upper limit, fails miserably: the value is 0 (instead of 0.5), and the reported absolute error bound of 0 gives false confidence in a wrong answer.

The integrate function uses QUADPACK code, advertised as "a Subroutine Package for Automatic Integration", but an adaptive routine cannot anticipate everything. Over an enormous interval, the initial sample points all land where the density is essentially zero, so the routine concludes the integral is zero and never subdivides: an instability it cannot even detect.
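The same failure mode can be reproduced in Python, whose scipy.integrate.quad wraps the same QUADPACK routines; a minimal sketch:

```python
from scipy.integrate import quad
from scipy.stats import norm

# Over a moderate interval the adaptive rule works fine.
val_ok, _ = quad(norm.pdf, 0, 10)             # ~0.5

# Over an enormous finite interval, every initial sample point lands
# where the density is essentially zero, so the routine returns ~0
# with a tiny error estimate and never subdivides.
val_bad, _ = quad(norm.pdf, 0, 1e10)          # ~0, silently wrong

# An infinite limit triggers QUADPACK's built-in change of variables,
# which maps the tail onto a finite interval before sampling.
val_inf, _ = quad(norm.pdf, 0, float("inf"))  # ~0.5
```

The same cure exists in R: integrate(dnorm, 0, Inf) succeeds where integrate(dnorm, 0, 1e10) fails, because the infinite-limit code path transforms the domain before sampling.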

My suggestion was to use integrate(f,0,1) and integrate(g,0,1) by default, and only when the results fall outside what is reasonable to switch to integrate(f,0,.999) and integrate(g,0,.999), with as many nines as the routine tolerates (I already ran into problems with .9999, which is why I wrote .999 there).

Of course, you can always try a different method. Since this function is well-behaved, any simple method could be good enough.
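To make the change-of-variables idea concrete (sketched in Python; the particular map x = u/(1-u) is an assumption for illustration, not necessarily the substitution the questioner tried), an integral over (0, ∞) becomes one over the finite interval (0, 1):

```python
import math
from scipy.integrate import quad

def phi(x):
    """Standard normal density, standing in for the numerator or
    denominator integrand of the Bayes estimator."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def transformed(u):
    # Substituting x = u / (1 - u) maps (0, inf) onto (0, 1),
    # with Jacobian dx = du / (1 - u)**2.
    x = u / (1.0 - u)
    return phi(x) / (1.0 - u) ** 2

# For a light-tailed integrand the density decays faster than the
# Jacobian grows, so the transformed integrand is well behaved at u = 1.
val, err = quad(transformed, 0.0, 1.0)   # ~0.5, the exact tail mass
```

For heavy-tailed integrands the transformed function can still concentrate its mass near u = 1, which is one way a truncation of the upper limit to .999 can discard real mass.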

Saturday, March 23, 2013

Basel: Consultative document on recognising the cost of credit protection purchased

Basel Committee issues consultative document on recognising the cost of credit protection purchased
March 22, 2013
http://www.bis.org/press/p130322.htm

The Basel Committee on Banking Supervision has today published a proposal that would strengthen capital requirements when banks engage in certain high-cost credit protection transactions.

The Committee has previously expressed concerns about potential regulatory capital arbitrage related to certain credit protection transactions. At that time it noted that it would continue to monitor developments with respect to such transactions and would consider imposing a globally harmonised minimum capital Pillar 1 requirement if necessary. After further consideration, the Committee decided to move forward with a more comprehensive Pillar 1 proposal.

While the Committee recognises that the purchase of credit protection can be an effective risk management tool, the proposed changes are intended to ensure that the costs, and not just the benefits, of purchased credit protection are appropriately recognised in regulatory capital. The proposal does this by requiring banks, under certain circumstances, to calculate the present value of premia paid for credit protection; that value would be treated as an exposure of the protection-purchasing bank and assigned a 1,250% risk weight.
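In capital terms, a 1,250% risk weight combined with the 8% minimum ratio amounts to a dollar-for-dollar charge against the present value of the premia. The premium schedule and discount rate below are assumptions for illustration, not figures from the proposal:

```python
# Sketch: the present value of future credit-protection premia is
# treated as an exposure and risk-weighted at 1,250%; under an 8%
# minimum capital ratio this charges one unit of capital per unit
# of exposure (12.5 * 0.08 = 1).

def pv_premia(premia, rate):
    """Present value of annual premium payments made at the end of
    years 1..n, discounted at a flat annual rate (an assumption)."""
    return sum(p / (1.0 + rate) ** t for t, p in enumerate(premia, start=1))

premia = [10.0, 10.0, 10.0]             # three annual premium payments
exposure = pv_premia(premia, rate=0.03)  # ~28.29
rwa = exposure * 12.50                   # 1,250% risk weight
capital_charge = rwa * 0.08              # 8% minimum ratio -> equals exposure
```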


---
Recognising the cost of credit protection purchased

---
Full text of the consultative doc: http://www.bis.org/publ/bcbs245.pdf

Friday, March 22, 2013

Basel Committee: supervisory guidance on external audits of banks (consultation)

Basel Committee publishes for consultation supervisory guidance on external audits of banks
March 21, 2013
http://www.bis.org/press/p130321.htm

The Basel Committee on Banking Supervision has today published supervisory guidance on External audits of banks for consultation along with a letter to the International Auditing and Assurance Standards Board (IAASB).

The consultative paper aims to enhance and supersede the existing guidance that was published by the Basel Committee in 2002 on the relationship between banking supervisors and banks' external auditors and in 2008 on external audit quality and banking supervision. The evolution of bank practices and the introduction of new standards and regulations over the last 10 years warranted a thorough revision of the Committee's supervisory guidance. In addition, the recent financial crisis has highlighted the need to improve the quality of external audits of banks. The proposed enhanced guidance sets out supervisory expectations of how:
  • external auditors can discharge their responsibilities more effectively;
  • audit committees can contribute to audit quality in their oversight of the external audit;
  • an effective relationship between external auditors and supervisors can lead to regular communication of mutually useful information; and
  • regular and effective dialogue between the banking supervisory authorities and relevant audit oversight bodies can enhance the quality of bank audits.
The Committee's letter to the IAASB calls for enhancing the International Standards on Auditing (ISAs) to include more authoritative guidance relating to the audit of banks. It sets out specific areas where the Committee believes the ISAs should be improved.

Commenting on today's publications, Stefan Ingves, Chairman of the Basel Committee and Governor of Sveriges Riksbank, said, "The Committee has developed guidance that builds on recent experience and will help raise the bar with regard to what supervisors expect of banks' external auditors and audit committees. We also recognise the great importance of audit standards and are keen to support the IAASB in enhancing audit quality."

Comments on the proposals should be submitted by Friday 21 June 2013 by e-mail to: baselcommittee@bis.org. Alternatively, comments may be sent by post to: Secretariat of the Basel Committee on Banking Supervision, Bank for International Settlements, CH-4002 Basel, Switzerland. All comments may be published on the website of the Bank for International Settlements unless a comment contributor specifically requests confidential treatment.

Monday, March 18, 2013

Tracking Global Demand for Advanced Economy Sovereign Debt

Tracking Global Demand for Advanced Economy Sovereign Debt. Prepared by Serkan Arslanalp and Takahiro Tsuda
IMF Working Paper No. 12/284
December 2012
http://www.imf.org/external/pubs/cat/longres.aspx?sk=40135.0

Recent events have shown that sovereigns, just like banks, can be subject to runs, highlighting the importance of the investor base for their liabilities. This paper proposes a methodology for compiling internationally comparable estimates of investor holdings of sovereign debt. Based on this methodology, it introduces a dataset for 24 major advanced economies that can be used to track US$42 trillion of sovereign debt holdings on a quarterly basis over 2004-11. While recent outflows from euro periphery countries have received wide attention, most sovereign borrowers have continued to increase reliance on foreign investors. This may have helped reduce borrowing costs, but it can imply higher refinancing risks going forward. Meanwhile, advanced economy banks’ exposure to their own government debt has begun to increase across the board after the global financial crisis, strengthening sovereign-bank linkages. In light of these risks, the paper proposes a framework—sovereign funding shock scenarios (FSS)—to conduct forward-looking analysis to assess sovereigns’ vulnerability to sudden investor outflows, which can be used along with standard debt sustainability analyses (DSA). It also introduces two risk indices—investor base risk index (IRI) and foreign investor position index (FIPI)—to assess sovereigns’ vulnerability to shifts in investor behavior.

In service of the country: Ted van Dyk

My Unrecognizable Democratic Party. By Ted van Dyk
The stakes are too high, please get serious about governing before it's too late.
http://online.wsj.com/article/SB10001424127887324128504578344611522010132.html 
The Wall Street Journal, March 18, 2013, on page A13

As a lifelong Democrat, I have a mental picture these days of my president, smiling broadly, at the wheel of a speeding convertible. His passengers are Democratic elected officials and candidates. Ahead of them, concealed by a bend in the road, is a concrete barrier.

They didn't have to take that route. Other Democratic presidents have won bipartisan support for proposals as liberal in their time as some of Mr. Obama's are now. Why does this administration seem so determined to head toward a potential crash and burn?

Even after the embarrassing playout of the Obama-invented Great Sequester Game, after the fiasco of the president's Fiscal Cliff Game, conventional wisdom among Democrats holds that disunited Republicans will be routed in the 2014 midterm elections, leaving an open field for the president's agenda in the final two years of his term. Yet modern political history indicates that big midterm Democratic gains are unlikely, and presidential second terms are notably unproductive, most of all in their waning months. Since 2012 there has been nothing about the Obama presidency to justify the confidence that Democrats now exhibit.

Mr. Obama was elected in 2008 on the basis of his persona and his pledge to end political and ideological polarization. His apparent everyone-in-it-together idealism was exactly what the country wanted and needed. On taking office, however, the president adopted a my-way-or-the-highway style of governance. He pursued his stimulus and health-care proposals on a congressional-Democrats-only basis. He rejected proposals of his own bipartisan Simpson-Bowles commission, which would have provided long-term deficit reduction and stabilized rapidly growing entitlement programs. He opted instead to demonize Republicans for their supposed hostility to Social Security, Medicare and Medicaid.

No serious attempt—for instance, by offering tort reform or allowing the sale of health-insurance products across state lines—was made to enlist GOP congressional support for the health bill. It passed, but the constituents of moderate Democrats punished them: 63 lost their seats in 2010 and Republicans took control of the House.

Faced with a similar situation in 1995, following another GOP House takeover, President Bill Clinton shifted to bipartisan governance. Mr. Obama did not, then blamed Republicans for their "obstructionism" in not yielding to him.

Defying the odds, Mr. Obama did become the first president since Franklin Roosevelt to be re-elected with an election-year unemployment rate above 7.8%. Yet his victory wasn't based on public affirmation of his agenda. Instead, it was based on a four-year mobilization—executed with unprecedented skill—of core Democratic constituencies, and on fear campaigns in which Mitt Romney and the Republicans were painted as waging a "war on women," being servants of the wealthy, and of being hostile toward Latinos, African Americans, gays and the middle class. I couldn't have imagined any one of the Democratic presidents or presidential candidates I served from 1960-92 using such down-on-all-fours tactics.

The unifier of 2008 became the calculated divider of 2012. Yes, it worked, but only narrowly, as the president's vote total fell off sharply from 2008.

Other modern Democratic presidents have had much more success with very different governing strategies. In 1961-62, John Kennedy won Republican congressional and public support with the proposals of his Keynesian Council of Economic Advisers chairman, Walter Heller, to cut personal and business taxes "to get America moving again," and for the global free movement of goods, services, capital and people.

In 1965, Lyndon Johnson had Democratic congressional majorities sufficient to pass any legislation he wanted. But he sought and received GOP congressional support for Medicare, Medicaid, civil rights, education and other Great Society legislation. He knew that in order to last, these initiatives needed consensus support. He did not want them re-debated later, as ObamaCare is being re-debated now.

Johnson got bipartisan backing for deficit reduction in 1967, when he learned that the deficit had reached an unthinkable $28 billion. Faced with today's annual deficits of $1 trillion and federal debt between $16.7 and $31 trillion, depending on whether you count off-budget obligations, LBJ no doubt would appoint a bipartisan Simpson-Bowles commission and use it to get a tax, spending and entitlements fix so that he could move on to the rest of his agenda. Bill Clinton took the same practical approach and got to a balanced federal budget as soon as he could, at the beginning of his second term.

These former Democratic presidents would also know today that no Democratic or liberal agenda can go forward if debt service is eating available resources. Nor can successful governance take place if presidential and Democratic Party rhetoric consistently portrays loyal-opposition leaders as having devious or extremist motives. We really are, as Mr. Obama pointed out in 2008, in it together.

It's not too late for the president to take a cue from his predecessors and enter good-faith budget negotiations with congressional Republicans. A few posturing meetings with GOP congressional leaders will not suffice. President Obama's hype about the horrors of fiscal-cliff and sequestration cuts, and his placing of blame on Republicans, have been correctly viewed as low politics. His approval ratings have plunged since the end of the sequestration exercise.

But time is running out for Democrats to get serious about governance. That concrete barrier—in the form of the 2014 midterm—lies just ahead on the highway, and they're joy riding straight toward it.

Mr. Van Dyk served in Democratic national administrations and campaigns over several decades. His memoir of public life, "Heroes, Hacks and Fools," was first published by University of Washington Press in 2007.

Wednesday, March 13, 2013

A Framework for Macroprudential Bank Solvency Stress Testing: Application to S-25 and Other G-20 Country FSAPs

A Framework for Macroprudential Bank Solvency Stress Testing: Application to S-25 and Other G-20 Country FSAPs. By Andreas A Jobst, Li Ong, and Christian Schmieder

IMF Working Paper No. 13/68
March 13, 2013
http://www.imf.org/external/pubs/cat/longres.aspx?sk=40390.0

Summary: The global financial crisis has placed the spotlight squarely on bank stress tests. Stress tests conducted in the lead-up to the crisis, including those by IMF staff, were not always able to identify the right risks and vulnerabilities. Since then, IMF staff has developed more robust stress testing methods and models and adopted a more coherent and consistent approach. This paper articulates the solvency stress testing framework that is being applied in the IMF’s surveillance of member countries’ banking systems, and discusses examples of its actual implementation in FSAPs to 18 countries which are in the group comprising the 25 most systemically important financial systems (“S-25”) plus other G-20 countries. In doing so, the paper also offers useful guidance for readers seeking to develop their own stress testing frameworks and country authorities preparing for FSAPs. A detailed Stress Test Matrix (STeM) comparing the stress test parameters applied in each of these major country FSAPs is provided, together with our stress test output templates.

Saturday, March 9, 2013

The Real Women's Issue: Time. By Jody Greenstone Miller

The Real Women's Issue: Time. By Jody Greenstone Miller
Never mind 'leaning in.' To get more working women into senior roles, companies need to rethink the clock
The Wall Street Journal, March 9, 2013, on page C3
http://online.wsj.com/article/SB10001424127887324678604578342641640982224.html


Why aren't more women running things in America? It isn't for lack of ambition or life skills or credentials. The real barrier to getting more women to the top is the unsexy but immensely difficult issue of time commitment: Today's top jobs in major organizations demand 60-plus hours of work a week.

In her much-discussed new book, Facebook Chief Operating Officer Sheryl Sandberg tells women with high aspirations that they need to "lean in" at work—that is, assert themselves more. It's fine advice, but it misdiagnoses the problem. It isn't any shortage of drive that leads those phalanxes of female Harvard Business School grads to opt out. It's the assumption that senior roles have to consume their every waking moment. More great women don't "lean in" because they don't like the world they're being asked to lean into.

It doesn't have to be this way. A little organizational imagination bolstered by a commitment from the C-suite can point the path to a saner, more satisfying blend of the things that ambitious women want from work and life. It's time that we put the clock at the heart of this debate.

I know this is doable because I run a growing startup company in which more than half the professionals work fewer than 40 hours a week by choice. They are alumnae of top schools and firms like General Electric and McKinsey, and they are mostly women. The key is that we design jobs to enable people to contribute at varying levels of time commitment while still meeting our overall goals for the company.

This isn't advanced physics, but it does mean thinking through the math of how work in a company adds up. It's also an iterative process; we hardly get it right every time. But for businesses and reformers serious about cracking the real glass ceiling for women—and making their firms magnets for the huge swath of American talent now sitting on the sidelines—here are four ways to start going about it.

Rethink time. Break away from the arbitrary notion that high-level work can be done only by people who work 10 or more hours a day, five or more days a week, 12 months a year. Why not just three days a week, or six hours a day, or 10 months a year?

It sounds simple, but the only thing that matters is quantifying the work that needs to get done and having the right set of resources in place to do it. Senior roles should actually be easier to reimagine in this way because highly paid people have the ability and, often, the desire to give up some income in order to work less. Flexibility and working from home can soften the blow, of course, but they don't solve the overall time problem.


Break work into projects. Once work is quantified, it must be broken up into discrete parts to allow for varying time commitments. Instead of thinking in terms of broad functions like the head of marketing, finance, corporate development or sales, a firm needs to define key roles in terms of specific, measurable tasks.

Once you think of work as a series of projects, it's easy to see how people can tailor how much to take on. The growth of consulting and outsourcing came precisely when firms realized they could carve work into projects that could be done more effectively outside. The next step is to design internal roles in smaller bites, too. An experienced marketer for a pharma company could lead one major drug launch, for example, without having to oversee all drug launches. Instead of managing a portfolio with 10 products, a senior person could manage five. If a client-service executive working five days a week has a quota of 10 deals a month, then one who chooses to work three days a week has a quota of only six. Lower the quota but not the quality of the work or the executive's seniority.

One reason this doesn't happen more is managerial laziness: It's easier to find a "superwoman" to lead marketing (someone who will work as long as humanly possible) than it is to design work around discrete projects. But even superwoman has a limit, and when she hits it, organizations adjust by breaking up jobs and adding staff. Why not do this before people hit the wall?

Availability matters. It's important to differentiate between availability and absolute time commitments. Many professional women would happily agree to check email even seven days a week and jump in, if necessary, for intense project stints—so long as over the course of a year, the time devoted to work is more limited. Managers need to be clear about what's needed: 24/7 availability is not the same thing as a 24/7 workload.


Quality is the goal, not quantity. Leaders need to create a culture in which talented people are judged not by the quantity of their work, but by the quality of their contributions. This can't be hollow blather. Someone who works 20 hours a week and who delivers exceptional results on a pro rata basis should be eligible for promotions and viewed as a top performer. American corporations need to get rid of the notion that wanting to work less makes someone a "B player."

Promoting this kind of innovation, where companies start to look more like puzzles than pyramids, has to become part of feminism's new agenda. It's the only way to give millions of capable women the ability to recalibrate the time that they devote to work at different stages of their lives.

We have been putting smart women on the couch for 40 years, since psychologist Matina Horner published her famous studies on "fear of success." But the portion of top jobs that go to women is still shockingly low. That's the irony of Ms. Sandberg's cheerleading for women to stay ambitious: She fails to see that her own agenda isn't nearly ambitious enough.

"Leaning in" may help the relative handful of talented women who can live with the way that top jobs are structured today—and if that's their choice, more power to them. But only a small percentage of women will choose this route. Until the rest of us get serious about altering the way work gets done in American corporations, we're destined to howl at the moon over the injustice of it all while changing almost nothing.

—Ms. Greenstone Miller is co-founder and chief executive officer of Business Talent Group.

Friday, March 8, 2013

Rules, Discretion, and Macro-Prudential Policy. By Itai Agur and Sunil Sharma

Rules, Discretion, and Macro-Prudential Policy. By Itai Agur and Sunil Sharma
March 08, 2013
IMF Working Paper No. 13/65
http://www.imf.org/external/pubs/cat/longres.aspx?sk=40379.0

Summary: The paper examines the implementation of macro-prudential policy. Given the coordination, flow of information, analysis, and communication required, macro-prudential frameworks will have weaknesses that make it hard to implement policy. And dealing with the political economy is also likely to be challenging. But limiting discretion through the formulation of macro-prudential rules is complicated by the difficulties in detecting and measuring systemic risk. The paper suggests that oversight is best served by having a strong baseline regulatory regime on which a time-varying macro-prudential policy can be added as conditions warrant and permit.