Saturday, April 27, 2013

South Korea: Give Nukes a Chance. By Denny Roy

South Korea: Give Nukes a Chance. By Denny Roy
Asia Pacific Bulletin, no. 204
Washington, D.C.: East-West Center
March 27, 2013
http://www.eastwestcenter.org/publications/south-korea-give-nukes-chance

Excerpts:

It is only a matter of time before North Korea fields an actual nuclear-tipped missile that works. With the persistent security threat from North Korea seemingly worsening, recent public opinion surveys show that a majority of South Koreans favor getting their own nuclear weapons. There is no doubt that South Korea is capable of making its own nuclear weapons, probably within a year. Indeed, the Republic of Korea (ROK) has explored this possibility occasionally since the 1970s, each time backing off under outside pressure.

There are some good reasons why, in principle, the world is better off with a smaller, rather than larger, number of nuclear weapon states. Nevertheless, there are two additional principles that apply here. First, nuclear weapons are a powerful deterrent; they are the main reason why the Cold War remained cold. Second, there may be a specific circumstance in which the introduction of a new nuclear weapons capability has a constructive influence on international security—call it the exception to the general nonproliferation rule.

Given the ROK’s present circumstances, Washington and Seoul should seriously consider the following policy change. Seoul gives the 90 days’ notice required to withdraw from the Nuclear Nonproliferation Treaty, which allows for withdrawal in the case of “extraordinary events” that threaten national security. The ROK announces its intention to begin working toward a nuclear weapons capability, with the following conditions: (1) the South Korean program will match North Korea’s progress step-by-step towards deploying a reliable nuclear-armed missile; and (2) Seoul will commit to halting and shelving its program if North Korea does the same. For its part, Washington announces that US nonproliferation policy is compelled to tolerate an exception when a law-abiding state is threatened by a rogue state—in this case North Korea—that has both acquired nuclear weapons and threatened to use them aggressively. Pyongyang has repeatedly spoken of using its nuclear weapons to devastate both the ROK and the United States.

This policy change is necessary because US, ROK and (half-hearted) Chinese efforts to get North Korea to denuclearize are not working. [...]

An ROK nuclear weapons capability would impose a meaningful penalty on the DPRK for its nuclear weapons program. Aside from the sanctions ordered by the United Nations Security Council, which have proved no more than a nuisance and are amply compensated for by the growing economic relationship with China, Pyongyang has suffered no significant negative consequences for acquiring nuclear weapons. A South Korean nuclear capability would change that. The North Koreans would understand that their act brought about an outcome they very much do not want [...].

ROK nukes, furthermore, will help deter North Korean provocations. A capacity to attack a neighbor with nuclear weapons provides North Korea with cover for limited conventional attacks. Pyongyang has established a pattern of using quick, sharp jabs against South Korea. The goal is to rattle Seoul into accommodating North Korean economic and political demands. Seoul insists that future North Korean attacks will result in military retaliation by South Korean forces. Since South Korea has not hit back after previous incidents, it is uncertain whether this pledge will deter Pyongyang from trying this tactic again. A DPRK nuclear weapons capability worsens this already dangerous situation. North Korean planners might conclude that Seoul would not dare retaliate against a DPRK strike out of fear that the next step would be a nuclear attack on the ROK. A South Korean nuclear capability, however, would redress this imbalance. If ROK conventional military capabilities are superior to the DPRK’s and equal or superior at the nuclear level, deterrence against a North Korean attack is stronger.

South Korean nukes would close the credibility gap in the US-ROK alliance. The “umbrella” of America’s nuclear arsenal covers South Korea and theoretically negates the DPRK nuclear threat. However, South Koreans have always questioned the reliability of this commitment which potentially puts a US city at risk in order to protect a South Korean city. The doubts are growing more acute now that a North Korean capability is apparently close to realization. An ROK nuclear arsenal would remove this strain on the alliance and give the South Koreans a sense of greater control over their own destiny.

Pyongyang would not be the only target audience for Seoul’s announcement of intent to deploy nuclear weapons. Like the North Koreans, the People’s Republic of China (PRC) is deeply opposed to an ROK nuclear capability. The announcement would also signal to Beijing that the cost of failing to discipline their client state is rising dramatically. The Chinese are already debating whether the status quo of a rogue DPRK has become so adverse to Chinese interests that China must pressure Pyongyang more heavily even at the risk of causing regime collapse. South Korea’s imminent—and reversible—acquisition of nuclear weapons would strengthen the argument that the PRC must get tougher with the DPRK.

To be sure, this policy change would create its own problems. An ROK nuclear capability would pressure Japan to follow suit. A US-friendly, stable, law-abiding, liberal democratic country getting nukes is not necessarily a bad thing. But if a nuclear Japan is deemed undesirable, the solution is for Washington and Seoul to emphasize that South Korea’s nuclear capability would be temporary and contingent, so Tokyo can remain non-nuclear. Thankfully, there are precedents for middle-sized states giving up their nuclear weapons.

South Korea’s security situation is deteriorating and for the ROK’s leadership, national security is job number one. It is now time to get past the visceral opposition to proliferation and recognize that in this case, a conditional change of South Korea’s status to nuclear-weapon state can help manage the dangers created by a heightened North Korean threat.

Sunday, April 21, 2013

Generalized linear modeling with high-dimensional data

Question from a student, University of Missouri-Kansas City:

Hi guys,
I have a project in my Regression class, and we have to use R to do it, but so far I haven't found appropriate code for this project, and I don't know which method I should use.

I have to analyze a high-dimensional dataset. The data has a total of 500 features.

We have no knowledge as to which of the features are useful and which are not. Thus we want to apply model selection techniques to obtain a subset of useful features. What we have to do is the following:

a) There are 2000 observations in total in the data. Use the first 1000 to train or fit your model, and the other 1000 for prediction.

b) You will report the number of features you select and the percentage of responses you correctly predict. Your project is considered valid only if the obtained percentage exceeds 54%.

Please help me as much as you can.
Your help would be appreciated.
Thank you!


-------------------
Answer

well, working in batches of 30 variables, I came across 88 of the 500 that minimize AIC within their respective batches:

t1 = read.csv("qw.csv", header=FALSE)   # 2000 rows x 501 columns: V1 is the response, V2..V501 the 500 features
nrow(t1)
# not a good solution -- better to take the 1000 training records at random, but this is enough for now:
train_data = t1[1:1000,]
test_data  = t1[1001:2000,]
library(bestglm)                        # best-subset selection
x = train_data[, 2:31]                  # first batch of 30 candidate features
y = train_data[, 1]
xy = as.data.frame(cbind(x, y))         # bestglm expects the response as the last column, named y
(bestAIC = bestglm(xy, IC="AIC"))

, and so on, going from x=train_data[,2:31] to x=train_data[,32:61], etc. Each run gives you the list of variables that minimize AIC within that batch (I chose AIC, but it can be any other criterion).

If I try to process more than 30 (or 31) columns at a time, bestglm takes too long: the number of candidate subsets grows exponentially with the number of variables, so larger batches quickly become infeasible.
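
To avoid doing the batches by hand, you can loop over them; a minimal sketch, assuming the train_data layout above (selected, cols, fit and keep are names I introduce here):

selected = character(0)
for (s in seq(2, 501, by = 30)) {
  cols = s:min(s + 29, 501)                                            # batch of up to 30 feature columns
  xy = as.data.frame(cbind(train_data[, cols], y = train_data[, 1]))   # response last, as bestglm expects
  fit = bestglm(xy, IC = "AIC")
  keep = setdiff(names(coef(fit$BestModel)), "(Intercept)")            # variables the best model keeps
  selected = union(selected, keep)
}
length(selected)   # candidates kept across all batches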

now, the problem seems reduced to working with fewer than 90 variables instead of the original 500. Not the real solution, since I am doing this piecemeal, but maybe close to what we are looking for, which is to correctly predict at least 54pct of the observed values.

using other methods I got even fewer candidate variables, but let's keep the ones we found before

then I tried this: after finding the best candidates I created this object, a data frame:

# keep the response (V1) plus the 36 selected predictors, with their original column names
vars = c("V1", "V50", "V66", "V325", "V426", "V28", "V44", "V75", "V111",
         "V128", "V149", "V152", "V154", "V179", "V181", "V189", "V203",
         "V210", "V213", "V216", "V218", "V234", "V243", "V309", "V311",
         "V323", "V338", "V382", "V384", "V405", "V412", "V415", "V417",
         "V424", "V425", "V434", "V483")
dat = train_data[, vars]


then, I invoked this:

library(caret)                       # provides train(); method='nnet' also requires the nnet package
model = train(V1 ~ ., data = dat,    # regress V1 on all 36 selected predictors
              method = 'nnet',
              linout = TRUE,         # linear output unit, so the -1/1 response is fit as numeric
              trace = FALSE)
ps = predict(model, dat)             # in-sample predictions on the training rows


if you check the result, ps, you find that most values are the same:

606 are -0.2158001115381
346 are 0.364988437287819

the rest of the 1000 values are very close to these two; the full tally is:

   1 is  -0.10
   1 is  -0.14
   1 is  -0.17
   1 is  -0.18
   3 are -0.20
 617 are -0.21
   1 is   0.195
   1 is   0.359
   1 is   0.360
   1 is   0.362
   2 are  0.363
 370 are  0.364

, so I just mapped all negative values to -1 and all positive values to 1 (let's say it is the propensity not to buy or to buy), and then I found that 380 rows were predicted negative when the original value was -1 (499 such rows), that is, a success rate of 76pct

only 257 values were positive when the original values were positive (success rate of 257/501 = 51.3pct)

the combined success rate in predicting the response variable values is a bit above 63%, which is above the value we aimed at, 54pct
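
that bookkeeping takes only a couple of lines, if you want to reproduce it (pred_sign is a name I introduce here):

pred_sign = ifelse(ps < 0, -1, 1)               # collapse predictions to the two classes
table(predicted = pred_sign, actual = dat$V1)   # confusion counts
mean(pred_sign == dat$V1)                       # combined success rate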

---
now, I tried with the second data set, test_data (the second 1000 rows)

negative values when original response value was negative too:
          success rate is 453/501 = .90419

Impressive? Now see how disappointing this is:

positive values when original response value was positive too:
          success rate is 123/499 = .24649

the combined success rate is about 57pct, which is barely above the mark

---
do I trust my own method?

of course not. I would get all the previous consumer surveys (buy/not buy) my company had on file, and then check whether I could consistently hit a success rate at or above 57pct (which to me is too low, to say nothing of 54pct)

for the time and effort I spent, maybe I should have tossed an electronic coin instead; with a bit of luck you can get a bit above 50pct success     : - )

maybe that is why they chose 54pct as the mark: over 1000 predictions a coin toss will almost always land within a couple of percentage points of 50pct, so 54pct is hard to reach by luck alone
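
a quick sanity check of that intuition in R:

# probability that a fair coin gets at least 540 of 1000 predictions right
1 - pbinom(539, size = 1000, prob = 0.5)   # about 0.006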

---
refinement, or "If we had all the time in the world..."

since I had some free time, I tried this (same dat data frame):

# same 36 predictors, but log-transformed (this assumes every predictor is strictly
# positive; otherwise log() would produce NaNs):
log_dat = dat
log_dat[-1] = lapply(log_dat[-1], log)   # column 1 is the response V1, left untouched
model = train(V1 ~ ., data = log_dat,
              method = 'nnet',
              linout = TRUE,
              trace = FALSE)
ps = predict(model, log_dat)

negative values when original response value was negative too: .7

positive values when original response value was positive too: .69

combined success rate: 69.4pct

# now we try with the other 1000 values (test_data), selecting the same columns:

test_dat = test_data[, vars]
log_test = test_dat
log_test[-1] = lapply(log_test[-1], log)
# note: this refits the network on the test rows, mirroring the original exercise;
# the stricter check -- predicting the test rows with the model fitted on the
# training rows -- is sketched at the end of this post
model2 = train(V1 ~ ., data = log_test,
               method = 'nnet',
               linout = TRUE,
               trace = FALSE)
ps2 = predict(model2, log_test)


negative values when original response value was negative too:
          success rate is 322/499 = .645

positive values when original response value was positive too:
          success rate is 307/501 = .612

combined success rate: 62.9pct

other things I tried failed -- if we had all the time in the world we could try other possibilities and get better results... or not

you'll tell me if you can reproduce the results, which are clearly above the 54pct mark
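
for completeness, here is the stricter out-of-sample check mentioned above, as a sketch: fit once on the training rows (log_dat) and only predict the test rows (log_test):

fit = train(V1 ~ ., data = log_dat,
            method = 'nnet',
            linout = TRUE,
            trace = FALSE)
ps_oos = predict(fit, newdata = log_test)
mean(ifelse(ps_oos < 0, -1, 1) == log_test$V1)   # out-of-sample success rate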

Wednesday, April 17, 2013

CPSS: Implementation monitoring of standards

Implementation monitoring of standards
CPSS, Apr 17, 2013
http://www.bis.org/cpss/cpssinfo2_5.htm

The Committee on Payment and Settlement Systems (CPSS) and the International Organization of Securities Commissions (IOSCO) have started the process of monitoring implementation of the Principles for financial market infrastructures (the PFMIs). The PFMIs are international standards for payment, clearing and settlement systems, including central counterparties, and trade repositories (TRs). They are designed to ensure that the infrastructure supporting global financial markets is robust and well placed to withstand financial shocks. The PFMIs were issued by CPSS-IOSCO in April 2012 and jurisdictions around the world are currently in the process of implementing them into their regulatory frameworks to foster the safety, efficiency and resilience of their financial market infrastructures (FMIs).

Full, timely and consistent implementation of the PFMIs is fundamental to ensuring the safety, soundness and efficiency of key FMIs and for supporting the resilience of the global financial system. In addition, the PFMIs play an important part in the G20's mandate that all standardised over-the-counter (OTC) derivatives should be centrally cleared. Global central clearing requirements reinforce the importance of strong safeguards and consistent oversight of derivatives central counterparties (CCPs) in particular. CPSS and IOSCO members are committed to adopt the principles and responsibilities contained in the PFMIs in line with the G20 and Financial Stability Board (FSB) expectations.

Scope of the assessments

The implementation monitoring will cover the implementation of the principles contained in the PFMIs as well as responsibilities A to E. Reviews will be carried out in stages, assessing first whether a jurisdiction has completed the process of adopting the legislation and other policies that will enable it to implement the principles and responsibilities and subsequently whether these changes are complete and consistent with the principles and responsibilities. Assessments will also examine consistency in the outcomes of implementation of the principles by FMIs and implementation of the responsibilities by authorities. The results of the assessments will be published on both CPSS and IOSCO websites.

Jurisdictional coverage - The assessments will cover the following jurisdictions: Argentina, Australia, Belgium, Brazil, Canada, Chile, China, European Union, France, Germany, Hong Kong SAR, Indonesia, India, Italy, Japan, Korea, Mexico, Netherlands, Russia, Saudi Arabia, Singapore, South Africa, Spain, Sweden, Switzerland, Turkey, United Kingdom and United States. The jurisdictional coverage reflects, among other factors, the importance of the PFMIs to the G20 mandate for central clearing of OTC derivatives and the need to ensure robust risk management by CCPs.

Types of FMI - In many jurisdictions, the framework for regulation, supervision and oversight is different for each type of FMI. Whilst initial overall assessments will cover the regulatory changes necessary for all types of FMIs, further thematic assessments (assessing the consistency of implementation) are likely to focus on OTC derivatives CCPs and TRs, given their importance for the successful completion of the G20 commitments regarding central clearing and transparency for derivatives products. Prioritising OTC derivatives CCPs and TRs will help ensure timely initial reporting, given that most jurisdictions have made the most progress in implementing reforms for these sectors.


Timing

A first assessment is currently underway examining whether jurisdictions have made regulatory changes that reflect the principles and responsibilities in the PFMIs. Results of this assessment are due to be published in the third quarter of 2013.

Monday, April 15, 2013

For a Sick Friend: First, Do No Harm. By Letty Cottin Pogrebin

For a Sick Friend: First, Do No Harm. By Letty Cottin Pogrebin
Conversing with the ill can be awkward, but keeping a few simple commandments makes a huge difference
The Wall Street Journal, April 13, 2013, on page C3
http://online.wsj.com/article/SB10001424127887324240804578416574019136696.html


"A closed mouth gathers no feet." It's a charming axiom, but silence isn't always an option when we're dealing with a friend who's sick or in despair. The natural human reaction is to feel awkward and upset in the face of illness, but unless we control those feelings and come up with an appropriate response, there's a good chance that we'll blurt out some cringe-worthy cliché, craven remark or blunt question that, in retrospect, we'll regret.

Take this real-life exchange. If ever the tone deaf needed a poster child, Fred is their man.

"How'd it go?" he asked his friend, Pete, who'd just had cancer surgery.

"Great!" said Pete. "They got it all."

"Really?" said Fred. "How do they know?"


Later, when Pete told him how demoralizing his remark had been, Fred's excuse was, "I was nervous. I just said what popped into my head."

We're all nervous around illness and mortality, but whatever pops into our heads should not necessarily plop out of our mouths. Yet, in my own experience as a breast-cancer patient, and for many of the people I have interviewed, friends do make hurtful remarks. Marion Fontana, who was diagnosed with breast cancer eight years after her husband, a New York City firefighter, died in the collapse of the World Trade Center, was told that she must have really bad karma to attract so much bad luck. In another case, upon hearing a man's leukemia diagnosis, his friend shrieked, "Wow! A girl in my office just died of that!"

You can't make this stuff up.

If we're not unwittingly insulting our sick friends, we're spouting clichés like "Everything happens for a reason." Though our intent is to comfort the patient, we also say such things to comfort ourselves and tamp down our own feelings of vulnerability. From now on, rather than sound like a Hallmark card, you might want to heed the following 10 Commandments for Conversing With a Sick Friend.

1. Rejoice at their good news. Don't minimize their bad news. A guy tells you that the doctors got it all, say "Hallelujah!" A man with advanced bladder cancer says that he's taking his kids to Disneyland next summer, don't bite your lip and mutter, "We'll see." Tell him it's a great idea. (What harm can it do?) Which doesn't mean that you should slap a happy face on a friend's grim diagnosis by saying something like, "Don't worry! Nowadays breast cancer is like having a cold!"

The best response in any encounter with a sick friend is to say, "Tell me what I can do to make things easier for you—I really want to help."

2. Treat your sick friends as you always did—but never forget their changed circumstance. However contradictory that may sound, I promise you can learn to live within the paradox if you keep your friend's illness and its constraints in mind but don't treat them as if their illness is who they are. Speak to them as you always did (tease them, kid around with them, get mad at them) but indulge their occasional blue moods or hissy-fits. Most important, start conversations about other things (sports, politics, food, movies) as soon as possible and you'll help speed their journey from the morass of illness to the miracle of the ordinary.

3. Avoid self-referential comments. A friend with a hacking cough doesn't need to hear, "You think that's bad? I had double pneumonia." Don't tell someone with brain cancer that you know how painful it must be because you get migraines. Don't complain about your colicky baby to the mother of a child with spina bifida. I'm not saying sick people have lost their capacity to empathize with others, just that solipsism is unhelpful and rude. The truest thing you can say to a sick or suffering friend is, "I can only try to imagine what you're going through."

4. Don't assume, verify. Several friends of Michele, a Canadian writer, reacted to her cancer diagnosis with, "Well, at least you caught it early, so you'll be all right!" In fact, she did not catch it early, and never said or hinted otherwise. So when someone said, "You caught it early," she thought, "No, I didn't, therefore I'm going to die." Repeat after me: "Assume nothing."

5. Get the facts straight before you open your mouth. Did your friend have a heart or liver transplant? Chemo or radiation? Don't just ask, "How are you?" Ask questions specific to your friend's health. "How's your rotator cuff these days?" "Did the blood test show Lyme disease?" "Are your new meds working?" If you need help remembering who has shingles and who has lupus, or the date of a friend's operation, enter a health note under the person's name in your contacts list or stick a Post-it by the phone and update the information as needed.

6. Help your sick friend feel useful. Zero in on one of their skills and lead to it. Assuming they're up to the task, ask a cybersmart patient to set up a Web page for you; ask a bridge or chess maven to give you pointers on the game; ask a retired teacher to guide your teenager through the college application process. In most cases, your request won't be seen as an imposition but a vote of confidence in your friend's talent and worth.

7. Don't infantilize the patient. Never speak to a grown-up the way you'd talk to a child. Objectionable sentences include, "How are we today, dearie?" "That's a good boy." "I bet you could swallow this teeny-tiny pill if you really tried." And the most wince-worthy, "Are we ready to go wee-wee?" Protect your friend's dignity at all costs.

8. Think twice before giving advice. Don't forward medical alerts, newspaper clippings or your Aunt Sadie's cure for gout. Your idea of a health bulletin that's useful or revelatory may mislead, upset, confuse or agitate your friend. Sick people have doctors to tell them what to do. Your job is simply to be their friend.

9. Let patients who are terminally ill set the conversational agenda. If they're unaware that they're dying, don't be the one to tell them. If they know they're at the end of life and want to talk about it, don't contradict or interrupt them; let them vent or weep or curse the Fates. Hand them a tissue and cry with them. If they want to confide their last wish, or trust you with a long-kept secret, thank them for the honor and listen hard. Someday you'll want to remember every word they say.

10. Don't pressure them to practice 'positive thinking.' The implication is that they caused their illness in the first place by negative thinking—by feeling discouraged, depressed or not having the "right attitude." Positive thinking can't cure Huntington's disease, ALS or inoperable brain cancer. Telling a terminal patient to keep up the fight isn't just futile, it's cruel. Insisting that they see the glass as half full may deny them the truth of what they know and the chance to tie up life's loose ends while there's still time. As one hospice patient put it, "All I want from my friends right now is the freedom to sulk and say goodbye."

Though most of us feel dis-eased around disease, colloquial English proffers a sparse vocabulary for the expression of embarrassment, fear, anxiety, grief or sorrow. These 10 commandments should help you relate to your sick friends with greater empathy, warmth and grace.

—Ms. Pogrebin is the author of 10 books and a founding editor of Ms. magazine. Her latest book is "How to Be a Friend to a Friend Who's Sick," from which this essay is adapted.

Saturday, April 13, 2013

BCBS: Monitoring tools for intraday liquidity management - final document

BCBS: Monitoring tools for intraday liquidity management - final document
April 2013
http://www.bis.org/publ/bcbs248.htm

This document is the final version of the Committee's Monitoring tools for intraday liquidity management. It was developed in consultation with the Committee on Payment and Settlement Systems to enable banking supervisors to better monitor a bank's management of intraday liquidity risk and its ability to meet payment and settlement obligations on a timely basis. Over time, the tools will also provide supervisors with a better understanding of banks' payment and settlement behaviour.

The framework includes:
  • the detailed design of the monitoring tools for a bank's intraday liquidity risk;
  • stress scenarios;
  • key application issues; and
  • the reporting regime.
Management of intraday liquidity risk forms a key element of a bank's overall liquidity risk management framework. As such, the set of seven quantitative monitoring tools will complement the qualitative guidance on intraday liquidity management set out in the Basel Committee's 2008 Principles for Sound Liquidity Risk Management and Supervision. It is important to note that the tools are being introduced for monitoring purposes only and that internationally active banks will be required to apply them. National supervisors will determine the extent to which the tools apply to non-internationally active banks within their jurisdictions.

Basel III: The Liquidity Coverage Ratio and liquidity risk monitoring tools (January 2013), which sets out one of the Committee's key reforms to strengthen global liquidity regulations, does not include intraday liquidity within its calibration. The reporting of the monitoring tools will commence on a monthly basis from 1 January 2015 to coincide with the implementation of the LCR reporting requirements.

An earlier version of the framework of monitoring tools was issued for consultation in July 2012. The Committee wishes to thank those who provided feedback and comments, as these were instrumental in revising and finalising the monitoring tools.

Authorities' access to trade repository data - consultative report

CPSS: Authorities' access to trade repository data - consultative report
April 2013
www.bis.org/publ/cpss108.htm

The consultative report Authorities' access to trade repository data was published for public comment on 11 April 2013. 

Trade repositories (TRs) are entities that maintain a centralised electronic record of over-the-counter (OTC) derivatives transaction data. TRs will play a key role in increasing transparency in the OTC derivatives markets by improving the availability of data to authorities and the public in a manner that supports the proper handling and use of the data. For a broad range of authorities and official international financial institutions, it is essential to be able to access the data needed to fulfil their respective mandates while maintaining the confidentiality of the data pursuant to the laws of relevant jurisdictions.

The purpose of the report is to provide guidance to TRs and authorities on the principles that should guide authorities' access to data held in TRs for typical and non-typical data requests. The report also sets out possible approaches to addressing confidentiality concerns and access constraints. Accompanying the report is a cover note that lists the specific related issues for comment.

Comments should be sent by 10 May 2013 to both the CPSS secretariat (cpss@bis.org) and the IOSCO secretariat (accessdata@iosco.org). The comments will be published on the websites of the BIS and IOSCO unless commentators have requested otherwise.

Thursday, April 11, 2013

Market-Based Structural Top-Down Stress Tests of the Banking System. By Jorge Chan-Lau

Market-Based Structural Top-Down Stress Tests of the Banking System. By Jorge Chan-Lau
IMF Working Paper No. 13/88
April 10, 2013
http://www.imf.org/external/pubs/cat/longres.aspx?sk=40468.0

Summary: Despite increased need for top-down stress tests of financial institutions, performing them is challenging owing to the absence of granular information on banks' trading and loan portfolios. To deal with these data shortcomings, this paper presents a market-based structural top-down stress testing methodology that relies on market-based measures of a bank's probability of default and on structural models of default risk to infer the capital losses banks could experience in stress scenarios. As an illustration, the methodology is applied to a set of banks in an advanced emerging market economy.

Tuesday, April 2, 2013

Regulators Let Big Banks Look Safer Than They Are. By Sheila Bair

Regulators Let Big Banks Look Safer Than They Are. By Sheila Bair
The Wall Street Journal, April 2, 2013, on page A13
http://online.wsj.com/article/SB10001424127887323415304578370703145206368.html

The recent Senate report on the J.P. Morgan Chase "London Whale" trading debacle revealed emails, telephone conversations and other evidence of how Chase managers manipulated their internal risk models to boost the bank's regulatory capital ratios. Risk models are common and certainly not illegal. Nevertheless, their use in bolstering a bank's capital ratios can give the public a false sense of security about the stability of the nation's largest financial institutions.

Capital ratios (also called capital adequacy ratios) reflect the percentage of a bank's assets that are funded with equity and are a key barometer of the institution's financial strength—they measure the bank's ability to absorb losses and still remain solvent. This should be a simple measure, but it isn't. That's because regulators allow banks to use a process called "risk weighting," which allows them to raise their capital ratios by characterizing the assets they hold as "low risk."

For instance, as part of the Federal Reserve's recent stress test, the Bank of America reported to the Federal Reserve that its capital ratio is 11.4%. But that was a measure of the bank's common equity as a percentage of the assets it holds as weighted by their risk—which is much less than the value of these assets according to accounting rules. Take out the risk-weighting adjustment, and its capital ratio falls to 7.8%.

On average, the three big universal banking companies (J.P. Morgan Chase, Bank of America and Citigroup) risk-weight their assets at only 55% of their total assets. For every trillion dollars in accounting assets, these megabanks calculate their capital ratio as if the assets represented only $550 billion of risk.
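
To make the arithmetic concrete, a toy calculation in R (all figures hypothetical except the 55% average risk weight cited above):

assets = 1000               # accounting assets, in billions of dollars
rwa    = 0.55 * assets      # risk-weighted assets at the 55% average
equity = 55                 # hypothetical common equity, in billions
equity / rwa                # risk-based capital ratio: 10%
equity / assets             # unweighted ratio: 5.5%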

As we learned during the 2008 financial crisis, financial models can be unreliable. Their assumptions about the risk of steep declines in housing prices were fatally flawed, causing catastrophic drops in the value of mortgage-backed securities. And now the London Whale episode has shown how capital regulations create incentives for even legitimate models to be manipulated.

According to the evidence compiled by the Senate Permanent Subcommittee on Investigations, the Chase staff was able to magically cut the risks of the Whale's trades in half. Of course, they also camouflaged the true dangers in those trades.

The ease with which models can be manipulated results in wildly divergent risk-weightings among banks with similar portfolios. Ironically, the government permits a bank to use its own internal models to help determine the riskiness of assets, such as securities and derivatives, which are held for trading—but not to determine the riskiness of good old-fashioned loans. The risk weights of loans are determined by regulation and generally subject to tougher capital treatment. As a result, financial institutions with large trading books can have less capital and still report higher capital ratios than traditional banks whose portfolios consist primarily of loans.

Compare, for instance, the risk-based ratios of Morgan Stanley, an investment bank that has struggled since the crisis, and U.S. Bancorp, a traditional commercial lender that has been one of the industry's best performers. According to the Fed's latest stress test, Morgan Stanley reported a risk-based capital ratio of nearly 14%; take out the risk weighting and its ratio drops to 7%. USB has a risk-based ratio of about 9%, virtually the same as its ratio on a non-risk weighted basis.

In the U.S. and most other countries, banks can also load up on their own country's government-backed debt and treat it as having zero risk. Many banks in distressed European nations have aggressively purchased their country's government debt to enhance their risk-based capital ratios.

In addition, if a bank buys the debt of another bank, it only needs to include 20% of the accounting value of those holdings for determining its capital requirements—but it must include 100% of the value of bonds of a commercial issuer. The rules governing capital ratios treat Citibank's debt as having one-fifth the risk of IBM's. In a financial system that is already far too interconnected, it defies reason that regulators give banks such strong capital incentives to invest in each other.

Regulators need to use a simple, effective ratio as the main determinant of a bank's capital strength and go back to the drawing board on risk-weighting assets. It does make sense to look at the riskiness of banks' assets in determining the adequacy of their capital. But the current rules are upside down, providing more generous treatment of derivatives trading than fully collateralized small-business lending.

The main argument megabanks advance against a tough capital ratio is that it would force them to raise more capital and hurt the economic recovery. But the megabanks aren't doing much new lending. Since the crisis, they have piled up excess reserves and expanded their securities and derivatives positions—where they get a capital break—while loans, which are subject to tougher capital rules, have remained nearly flat.

Though all banks have struggled to lend in the current environment, midsize banks, with their higher capital levels, have the strongest loan growth, and community banks do the lion's share of small-business lending. A strong capital ratio will reduce megabanks' incentives to trade instead of making loans. Over the long term, it will make these banks a more stable source of credit for the real economy and give them greater capacity to absorb unexpected losses. Bet on it, there will be future London Whale surprises, and the next one might not be so easy to harpoon.

Ms. Bair, the chairman of the Federal Deposit Insurance Corporation from 2006 to 2011, is the author of "Bull by the Horns: Fighting to Save Main Street From Wall Street and Wall Street From Itself" (Free Press, 2012).

Monday, April 1, 2013

China's Demography and its Implications

China's Demography and its Implications. By Il Houng Lee, Qingjun Xu, and Murtaza Syed
IMF Working Paper No. 13/82
Mar 28, 2013
http://www.imf.org/external/pubs/cat/longres.aspx?sk=40446.0

Summary: In coming decades, China will undergo a notable demographic transformation, with its old-age dependency ratio doubling to 24 percent by 2030 and rising even more precipitously thereafter. This paper uses the permanent income hypothesis to reassess national savings behavior, with greater prominence and more careful consideration given to the role played by changing demography. We use a forward-looking and dynamic approach that considers the entire population distribution. We find that this not only holds up well empirically but may also be superior to the static dependency ratios typically employed in the literature. Going further, we simulate global savings behavior based on our framework and find that China’s demographics should have induced a negative current account in the 2000s and a positive one in the 2010s given the rising share of prime savers, only turning negative around 2045. The opposite is true for the United States and Western Europe. The observed divergence in current account outcomes from the simulated path appears to have been partly policy induced. Over the next couple of decades, individual countries’ convergence toward the simulated savings pattern will be influenced by their past divergences and future policy choices. Other implications arising from China’s demography, including the growth model, the pension system, the labor market, and the public finances are also briefly reviewed.