Sunday, March 11, 2012

How To Be Creative

How To Be Creative. By Jonah Lehrer
The image of the 'creative type' is a myth. Jonah Lehrer on why anyone can innovate—and why a hot shower, a cold beer or a trip to your colleague's desk might be the key to your next big idea. The Wall Street Journal, Mar 10, 2012, on page C1

http://online.wsj.com/article/SB10001424052970203370604577265632205015846.html
 
Creativity can seem like magic. We look at people like Steve Jobs and Bob Dylan, and we conclude that they must possess supernatural powers denied to mere mortals like us, gifts that allow them to imagine what has never existed before. They're "creative types." We're not.

But creativity is not magic, and there's no such thing as a creative type. Creativity is not a trait that we inherit in our genes or a blessing bestowed by the angels. It's a skill. Anyone can learn to be creative and to get better at it. New research is shedding light on what allows people to develop world-changing products and to solve the toughest problems. A surprisingly concrete set of lessons has emerged about what creativity is and how to spark it in ourselves and our work.

The science of creativity is relatively new. Until the Enlightenment, acts of imagination were always equated with higher powers. Being creative meant channeling the muses, giving voice to the gods. ("Inspiration" literally means "breathed upon.") Even in modern times, scientists have paid little attention to the sources of creativity.

But over the past decade, that has begun to change. Imagination was once thought to be a single thing, separate from other kinds of cognition. The latest research suggests that this assumption is false. It turns out that we use "creativity" as a catchall term for a variety of cognitive tools, each of which applies to particular sorts of problems and is coaxed to action in a particular way.

Does the challenge that we're facing require a moment of insight, a sudden leap in consciousness? Or can it be solved gradually, one piece at a time? The answer often determines whether we should drink a beer to relax or hop ourselves up on Red Bull, whether we take a long shower or stay late at the office.

The new research also suggests how best to approach the thorniest problems. We tend to assume that experts are the creative geniuses in their own fields. But big breakthroughs often depend on the naive daring of outsiders. For prompting creativity, few things are as important as time devoted to cross-pollination with fields outside our areas of expertise.

Let's start with the hardest problems, those challenges that at first blush seem impossible. Such problems are typically solved (if they are solved at all) in a moment of insight.

Consider the case of Arthur Fry, an engineer at 3M in the paper products division. In the winter of 1974, Mr. Fry attended a presentation by Spencer Silver, an engineer working on adhesives. Mr. Silver had developed an extremely weak glue, a paste so feeble it could barely hold two pieces of paper together. Like everyone else in the room, Mr. Fry patiently listened to the presentation and then failed to come up with any practical applications for the compound. What good, after all, is a glue that doesn't stick?

On a frigid Sunday morning, however, the paste would re-enter Mr. Fry's thoughts, albeit in a rather unlikely context. He sang in the church choir and liked to put little pieces of paper in the hymnal to mark the songs he was supposed to sing. Unfortunately, the little pieces of paper often fell out, forcing Mr. Fry to spend the service frantically thumbing through the book, looking for the right page. It seemed like an unfixable problem, one of those ordinary hassles that we're forced to live with.

But then, during a particularly tedious sermon, Mr. Fry had an epiphany. He suddenly realized how he might make use of that weak glue: It could be applied to paper to create a reusable bookmark! Because the adhesive was barely sticky, it would adhere to the page but wouldn't tear it when removed. That revelation in the church would eventually result in one of the most widely used office products in the world: the Post-it Note.

Mr. Fry's invention was a classic moment of insight. Though such events seem to spring from nowhere, as if the cortex is surprising us with a breakthrough, scientists have begun studying how they occur. They do this by giving people "insight" puzzles, like the one that follows, and watching what happens in the brain:

   A man has married 20 women in a small town. All of the women are still alive, and none of them is divorced. The man has broken no laws. Who is the man?

If you solved the question, the solution probably came to you in an incandescent flash: The man is a priest. Research led by Mark Beeman and John Kounios has identified where that flash probably came from. In the seconds before the insight appears, a brain area called the anterior superior temporal gyrus (aSTG) exhibits a sharp spike in activity. This region, located on the surface of the right hemisphere, excels at drawing together distantly related information, which is precisely what's needed when working on a hard creative problem.

Interestingly, Mr. Beeman and his colleagues have found that certain factors make people much more likely to have an insight, better able to detect the answers generated by the aSTG. For instance, exposing subjects to a short, humorous video—the scientists use a clip of Robin Williams doing stand-up—boosts the average success rate by about 20%.

Alcohol also works. Earlier this year, researchers at the University of Illinois at Chicago compared performance on insight puzzles between sober and intoxicated students. The scientists gave the subjects a battery of word problems known as remote associates, in which people have to find one additional word that goes with a triad of words. Here's a sample problem:

   Pine Crab Sauce

In this case, the answer is "apple." (The compound words are pineapple, crab apple and apple sauce.) Drunk students solved nearly 30% more of these word problems than their sober peers.

What explains the creative benefits of relaxation and booze? The answer involves the surprising advantage of not paying attention. Although we live in an age that worships focus—we are always forcing ourselves to concentrate, chugging caffeine—this approach can inhibit the imagination. We might be focused, but we're probably focused on the wrong answer.

And this is why relaxation helps: It isn't until we're soothed in the shower or distracted by the stand-up comic that we're able to turn the spotlight of attention inward, eavesdropping on all those random associations unfolding in the far reaches of the brain's right hemisphere. When we need an insight, those associations are often the source of the answer.

This research also explains why so many major breakthroughs happen in the unlikeliest of places, whether it's Archimedes in the bathtub or the physicist Richard Feynman scribbling equations in a strip club, as he was known to do. It reveals the wisdom of Google putting ping-pong tables in the lobby and confirms the practical benefits of daydreaming. As Einstein once declared, "Creativity is the residue of time wasted."

Of course, not every creative challenge requires an epiphany; a relaxing shower won't solve every problem. Sometimes, we just need to keep on working, resisting the temptation of a beer-fueled nap.

There is nothing fun about this kind of creativity, which consists mostly of sweat and failure. It's the red pen on the page and the discarded sketch, the trashed prototype and the failed first draft. Nietzsche referred to this as the "rejecting process," noting that while creators like to brag about their big epiphanies, their everyday reality was much less romantic. "All great artists and thinkers are great workers," he wrote.

This relentless form of creativity is nicely exemplified by the legendary graphic designer Milton Glaser, who engraved the slogan "Art is Work" above his office door. Mr. Glaser's most famous design is a tribute to this work ethic. In 1975, he accepted an intimidating assignment: to create a new ad campaign that would rehabilitate the image of New York City, which at the time was falling apart.

Mr. Glaser began by experimenting with fonts, laying out the tourist slogan in a variety of friendly typefaces. After a few weeks of work, he settled on a charming design, with "I Love New York" in cursive, set against a plain white background. His proposal was quickly approved. "Everybody liked it," Mr. Glaser says. "And if I were a normal person, I'd stop thinking about the project. But I can't. Something about it just doesn't feel right."

So Mr. Glaser continued to ruminate on the design, devoting hours to a project that was supposedly finished. And then, after another few days of work, he was sitting in a taxi, stuck in midtown traffic. "I often carry spare pieces of paper in my pocket, and so I get the paper out and I start to draw," he remembers. "And I'm thinking and drawing and then I get it. I see the whole design in my head. I see the typeface and the big round red heart smack dab in the middle. I know that this is how it should go."

The logo that Mr. Glaser imagined in traffic has since become one of the most widely imitated works of graphic art in the world. And he only discovered the design because he refused to stop thinking about it.

But this raises an obvious question: If different kinds of creative problems benefit from different kinds of creative thinking, how can we ensure that we're thinking in the right way at the right time? When should we daydream and go for a relaxing stroll, and when should we keep on sketching and toying with possibilities?

The good news is that the human mind has a surprising natural ability to assess the kind of creativity we need. Researchers call these intuitions "feelings of knowing," and they occur when we suspect that we can find the answer, if only we keep on thinking. Numerous studies have demonstrated that, when it comes to problems that don't require insights, the mind is remarkably adept at assessing the likelihood that a problem can be solved—knowing whether we're getting "warmer" or not, without knowing the solution.

This ability to calculate progress is an important part of the creative process. When we don't feel that we're getting closer to the answer—we've hit the wall, so to speak—we probably need an insight. If there is no feeling of knowing, the most productive thing we can do is forget about work for a while. But when those feelings of knowing are telling us that we're getting close, we need to keep on struggling.

Of course, both moment-of-insight problems and nose-to-the-grindstone problems assume that we have the answers to the creative problems we're trying to solve somewhere in our heads. They're both just a matter of getting those answers out. Another kind of creative problem, though, is when you don't have the right kind of raw material kicking around in your head. If you're trying to be more creative, one of the most important things you can do is increase the volume and diversity of the information to which you are exposed.

Steve Jobs famously declared that "creativity is just connecting things." Although we think of inventors as dreaming up breakthroughs out of thin air, Mr. Jobs was pointing out that even the most far-fetched concepts are usually just new combinations of stuff that already exists. Under Mr. Jobs's leadership, for instance, Apple didn't invent MP3 players or tablet computers—the company just made them better, adding design features that were new to the product category.

And it isn't just Apple. The history of innovation bears out Mr. Jobs's theory. The Wright Brothers transferred their background as bicycle manufacturers to the invention of the airplane; their first flying craft was, in many respects, just a bicycle with wings. Johannes Gutenberg transformed his knowledge of wine presses into a printing machine capable of mass-producing words. Or look at Google: Larry Page and Sergey Brin came up with their famous search algorithm by applying the ranking method used for academic articles (more citations equals more influence) to the sprawl of the Internet.

How can people get better at making these kinds of connections? Mr. Jobs argued that the best inventors seek out "diverse experiences," collecting lots of dots that they later link together. Instead of developing a narrow specialization, they study, say, calligraphy (as Mr. Jobs famously did) or hang out with friends in different fields. Because they don't know where the answer will come from, they are willing to look for the answer everywhere.

Recent research confirms Mr. Jobs's wisdom. The sociologist Martin Ruef, for instance, analyzed the social and business relationships of 766 graduates of the Stanford Business School, all of whom had gone on to start their own companies. He found that those entrepreneurs with the most diverse friendships scored three times higher on a metric of innovation. Instead of getting stuck in the rut of conformity, they were able to translate their expansive social circle into profitable new concepts.

Many of the most innovative companies encourage their employees to develop these sorts of diverse networks, interacting with colleagues in totally unrelated fields. Google hosts an internal conference called Crazy Search Ideas—a sort of grown-up science fair with hundreds of posters from every conceivable field. At 3M, engineers are typically rotated to a new division every few years. Sometimes, these rotations bring big payoffs, such as when 3M realized that the problem of laptop battery life was really a problem of energy used up too quickly for illuminating the screen. 3M researchers applied their knowledge of see-through adhesives to create an optical film that focuses light outward, producing a screen that was 40% more efficient.

Such solutions are known as "mental restructurings," since the problem is only solved after someone asks a completely new kind of question. What's interesting is that expertise can inhibit such restructurings, making it harder to find the breakthrough. That's why it's important not just to bring new ideas back to your own field, but to actually try to solve problems in other fields—where your status as an outsider, and ability to ask naive questions, can be a tremendous advantage.

This principle is at work daily on InnoCentive, a crowdsourcing website for difficult scientific questions. The structure of the site is simple: Companies post their hardest R&D problems, attaching a monetary reward to each "challenge." The site features problems from hundreds of organizations in eight different scientific categories, from agricultural science to mathematics. The challenges on the site are incredibly varied and include everything from a multinational food company looking for a "Reduced Fat Chocolate-Flavored Compound Coating" to an electronics firm trying to design a solar-powered computer.

The most impressive thing about InnoCentive, however, is its effectiveness. In 2007, Karim Lakhani, a professor at the Harvard Business School, began analyzing hundreds of challenges posted on the site. According to Mr. Lakhani's data, nearly 30% of the difficult problems posted on InnoCentive were solved within six months. Sometimes, the problems were solved within days of being posted online. The secret was outsider thinking: The problem solvers on InnoCentive were most effective at the margins of their own fields. Chemists didn't solve chemistry problems; they solved molecular biology problems. And vice versa. While these people were close enough to understand the challenge, they weren't so close that their knowledge held them back, causing them to run into the same stumbling blocks that held back their more expert peers.

It's this ability to attack problems as a beginner, to let go of all preconceptions and fear of failure, that's the key to creativity.

The composer Bruce Adolphe first met Yo-Yo Ma at the Juilliard School in New York City in 1970. Mr. Ma was just 15 years old at the time (though he'd already played for J.F.K. at the White House). Mr. Adolphe had just written his first cello piece. "Unfortunately, I had no idea what I was doing," Mr. Adolphe remembers. "I'd never written for the instrument before."

Mr. Adolphe had shown a draft of his composition to a Juilliard instructor, who informed him that the piece featured a chord that was impossible to play. Before Mr. Adolphe could correct the music, however, Mr. Ma decided to rehearse the composition in his dorm room. "Yo-Yo played through my piece, sight-reading the whole thing," Mr. Adolphe says. "And when that impossible chord came, he somehow found a way to play it."

Mr. Adolphe told Mr. Ma what the professor had said and asked how he had managed to play the impossible chord. They went through the piece again, and when Mr. Ma came to the impossible chord, Mr. Adolphe yelled "Stop!" They looked at Mr. Ma's left hand—it was contorted on the fingerboard, in a position that was nearly impossible to hold. "You're right," said Mr. Ma, "you really can't play that!" Yet, somehow, he did.

When Mr. Ma plays today, he still strives for that state of the beginner. "One needs to constantly remind oneself to play with the abandon of the child who is just learning the cello," Mr. Ma says. "Because why is that kid playing? He is playing for pleasure."

Creativity is a spark. It can be excruciating when we're rubbing two rocks together and getting nothing. And it can be intensely satisfying when the flame catches and a new idea sweeps around the world.

For the first time in human history, it's becoming possible to see how to throw off more sparks and how to make sure that more of them catch fire. And yet, we must also be honest: The creative process will never be easy, no matter how much we learn about it. Our inventions will always be shadowed by uncertainty, by the serendipity of brain cells making a new connection.

Every creative story is different. And yet every creative story is the same: There was nothing, now there is something. It's almost like magic.

—Adapted from "Imagine: How Creativity Works" by Jonah Lehrer, to be published by Houghton Mifflin Harcourt on March 19. Copyright © 2012 by Jonah Lehrer.

---
10 Quick Creativity Hacks

1. Color Me Blue

A 2009 study found that subjects solved twice as many insight puzzles when surrounded by the color blue, since it leads to more relaxed and associative thinking. Red, on the other hand, makes people more alert and aware, so it is a better backdrop for solving analytic problems.

2. Get Groggy

According to a study published last month, people at their least alert time of day—think of a night person early in the morning—performed far better on various creative puzzles, sometimes improving their success rate by 50%. Grogginess has creative perks.

3. Daydream Away

Research led by Jonathan Schooler at the University of California, Santa Barbara, has found that people who daydream more score higher on various tests of creativity.

4. Think Like A Child

When subjects are told to imagine themselves as 7-year-olds, they score significantly higher on tests of divergent thinking, such as trying to invent alternative uses for an old car tire.

5. Laugh It Up

When people are exposed to a short video of stand-up comedy, they solve about 20% more insight puzzles.

6. Imagine That You Are Far Away

Research conducted at Indiana University found that people were much better at solving insight puzzles when they were told that the puzzles came from Greece or California, and not from a local lab.

7. Keep It Generic

One way to increase problem-solving ability is to change the verbs used to describe the problem. When the verbs are extremely specific, people think in narrow terms. In contrast, the use of more generic verbs—say, "moving" instead of "driving"—can lead to dramatic increases in the number of problems solved.

8. Work Outside the Box

According to a new study, volunteers performed significantly better on a standard test of creativity when they were seated outside a 5-foot-square workspace, perhaps because they internalized the metaphor of thinking outside the box. The lesson? Your cubicle is holding you back.
 
9. See the World

According to research led by Adam Galinsky, students who have lived abroad were much more likely to solve a classic insight puzzle. Their experience of another culture endowed them with a valuable open-mindedness. This effect also applies to professionals: Fashion-house directors who have lived in many countries produce clothing that their peers rate as far more creative.
 
10. Move to a Metropolis

Physicists at the Santa Fe Institute have found that moving from a small city to one that is twice as large leads inventors to produce, on average, about 15% more patents.

—Jonah Lehrer

Tuesday, February 28, 2012

Systemic Real and Financial Risks: Measurement, Forecasting, and Stress Testing

Systemic Real and Financial Risks: Measurement, Forecasting, and Stress Testing. By Gianni de Nicolo & Marcella Lucchetta
IMF Working Paper No. 12/58
Feb 2012
http://www.imf.org/external/pubs/cat/longres.aspx?sk=25745.0

Summary: This paper formulates a novel modeling framework that delivers: (a) forecasts of indicators of systemic real risk and systemic financial risk based on density forecasts of indicators of real activity and financial health; (b) stress-tests as measures of the dynamics of responses of systemic risk indicators to structural shocks identified by standard macroeconomic and banking theory. Using a large number of quarterly time series of the G-7 economies in 1980Q1-2010Q2, we show that the model exhibits significant out-of-sample forecasting power for tail real and financial risk realizations, and that stress testing provides useful early warnings on the build-up of real and financial vulnerabilities.

Excerpts

Introduction

The 2007-2009 financial crisis has spurred renewed efforts in systemic risk modeling. Bisias et al. (2012) provide an extensive survey of the models currently available to measure and track indicators of systemic financial risk. However, three limitations of current modeling emerge from this survey. First, almost all proposed measures focus on (segments of) the financial sector, with developments in the real economy either absent, or just part of the conditioning variables embedded in financial risk measures. Second, there is yet no systematic assessment of the out-of-sample forecasting power of the measures proposed, which makes it difficult to gauge their usefulness as early warning tools. Third, stress testing procedures are in most cases sensitivity analyses, with no structural identification of the assumed shocks.

Building on our previous effort (De Nicolò and Lucchetta, 2011), this paper contributes to overcoming these limitations by developing a novel tractable model that can be used as a real-time systemic risk monitoring system. Our model combines dynamic factor VARs and quantile regression techniques to construct forecasts of systemic risk indicators based on density forecasts, and employs stress testing as the measurement of the sensitivity of responses of systemic risk indicators to configurations of structural shocks.

This model can be viewed as a complementary tool to applications of DSGE models for risk monitoring analysis. As detailed in Schorfheide (2010), work on DSGE modeling is advancing significantly, but several challenges to the use of these models for risk monitoring purposes remain. In this regard, the development of DSGE models is still in its infancy in at least two dimensions: the incorporation of financial intermediation and forecasting. In their insightful review of recent progress in developments of DSGE models with financial intermediation, Gertler and Kiyotaki (2010) outline important research directions still unexplored, such as the linkages between disruptions of financial intermediation and real activity. Moreover, as noted in Herbst and Schorfheide (2010), there is still a lack of conclusive evidence of the superiority of the forecasting performance of DSGE models relative to sophisticated data-driven models. In addition, these models do not typically focus on tail risks. Thus, available modeling technologies providing systemic risk monitoring tools based on explicit linkages between financial and real sectors are still underdeveloped. Helping to fill this void is a key objective of this paper.

Three features characterize our model. First, we make a distinction between systemic real risk and systemic financial risk, based on the notion that real effects with potential adverse welfare consequences are what ultimately concerns policymakers, consistent with the definition of systemic risk introduced in Group of Ten (2001). Distinguishing systemic financial risk from systemic real risk also allows us to assess the extent to which a realization of a financial (real) shock is just amplifying a shock in the real (financial) sector, or originates in the financial (real) sector. Second, the model produces real-time density forecasts of indicators of real activity and financial health, and uses them to construct forecasts of indicators of systemic real and financial risks. To obtain these forecasts, we use a dynamic factor model (DFM) with many predictors combined with quantile regression techniques. The choice of the DFM with many predictors is motivated by its superior forecasting performance over both univariate time series specifications and standard VAR-type models (see Watson, 2006). Third, our design of stress tests can be flexibly linked to selected implications of DSGE models and other theoretical constructs. Structural identification provides economic content of these tests, and imposes discipline in designing stress test scenarios. In essence, our model is designed to exploit, and make operational, the forecasting power of DFM models and structural identification based on explicit theoretical constructs, such as DSGE models.

Our model delivers density forecasts of any set of time series. Thus, it is extremely flexible, as it can incorporate multiple measures of real or financial risk, both at aggregate and disaggregate levels, including many indicators reviewed in Bisias et al. (2012). In this paper we focus on two simple indicators of real and financial activity: real GDP growth, and an indicator of health of the financial system, called FS. Following Campbell, Lo and MacKinlay (1997), the FS indicator is given by the return of a portfolio of a set of systemically important financial firms less the return on the market. This indicator is germane to other indicators of systemic financial risk used in recent studies (see e.g. Acharya et al., 2010 or Brownlees and Engle, 2010).
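
To make the FS construction above concrete, here is a minimal Python sketch of one way to compute such an indicator, assuming an equally weighted portfolio of the systemic banks and aligned quarterly return series; the column layout, the weighting scheme and the function name are illustrative assumptions, not the paper's exact implementation.

import pandas as pd

def fs_indicator(bank_returns: pd.DataFrame, market_return: pd.Series) -> pd.Series:
    """bank_returns: quarterly returns of systemically important banks (one column
    per bank); market_return: return on the broad market index, same date index."""
    portfolio_return = bank_returns.mean(axis=1)  # equal-weight portfolio return
    return portfolio_return - market_return       # FS = portfolio return less market return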

The joint dynamics of GDP growth and the FS indicator are modeled through a dynamic factor model, following the methodology detailed in Stock and Watson (2005). Density forecasts of GDP growth and the FS indicator are obtained by estimating sets of quantile autoregressions, using forecasts of factors derived from the companion factor VAR as predictors. The use of quantile autoregressions is advantageous, since it allows us to avoid making specific assumptions about the shape of the underlying distribution of GDP growth and the FS indicator. The blending of a dynamic factor model with quantile autoregressions is a novel feature of our modeling framework.
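
The quantile-autoregression step can be sketched roughly as follows, taking the factors as already estimated and forecast from the factor VAR; the single lag, the variable names and the chosen quantiles are illustrative assumptions rather than the paper's specification.

import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

def quantile_ar_forecast(gdp_growth, factors, quantiles=(0.05, 0.25, 0.5, 0.75, 0.95)):
    """gdp_growth: 1-D array of quarterly GDP growth; factors: (T, k) array of
    estimated common factors aligned with gdp_growth. Returns one-step-ahead
    quantile forecasts based on the most recent observation."""
    y = gdp_growth[1:]
    X = sm.add_constant(np.column_stack([gdp_growth[:-1], factors[:-1]]))
    x_last = sm.add_constant(
        np.column_stack([gdp_growth[-1:], factors[-1:]]), has_constant="add")
    # one quantile regression per probability level traces out the forecast density
    return {q: float(QuantReg(y, X).fit(q=q).predict(x_last)[0]) for q in quantiles}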

Our measurement of systemic risks follows a risk management approach. We measure systemic real risk with GDP-Expected Shortfall (GDPES), given by the expected loss in GDP growth conditional on a given level of GDP-at-Risk (GDPaR), with GDPaR being defined as the worst predicted realization of quarterly growth in real GDP at a given (low) probability. Systemic financial risk is measured by FS-Expected Shortfall (FSES), given by the expected loss in FS conditional on a given level of FS-at-Risk (FSaR), with FSaR being defined as the worst predicted realization of the FS indicator at a given (low) probability level.
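
Given draws from such a forecast density, the two risk measures reduce to a tail quantile and a tail mean. A minimal sketch, with the 5 percent probability level as an illustrative choice (the same logic applies to the FS indicator):

import numpy as np

def gdp_at_risk_and_es(density_draws, prob=0.05):
    """density_draws: array of simulated one-step-ahead GDP growth outcomes.
    Returns (GDPaR, GDPES): the prob-quantile and the mean of outcomes at or
    below that quantile (the expected shortfall)."""
    draws = np.asarray(density_draws, dtype=float)
    gdp_ar = np.quantile(draws, prob)            # GDP-at-Risk
    gdp_es = draws[draws <= gdp_ar].mean()       # GDP-Expected Shortfall
    return gdp_ar, gdp_es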

Stress-tests of systemic risk indicators are implemented by gauging how impulse responses of systemic risk indicators vary through time in response to structural shocks. The identification of structural shocks is accomplished with an augmented version of the sign restriction methodology introduced by Canova and De Nicolò (2002), where aggregate shocks are extracted based on standard macroeconomic and banking theory. Our approach to stress testing differs markedly from, and we believe significantly improves on, most implementations of stress testing currently used in central banks and international organizations. In these implementations, shock scenarios are imposed on sets of observable variables, and their effects are traced through "behavioral" equations of certain variables of interest. Yet, the "shocked" observable variables are typically endogenous: thus, it is unclear whether we are shocking the symptoms and not the causes. As a result, it is difficult to assess both the qualitative and quantitative implications of the stress test results.
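
A generic sign-restriction step of this kind can be sketched as follows: candidate structural impact matrices are obtained by rotating the Cholesky factor of the VAR residual covariance with random orthogonal matrices, keeping only the rotations whose impact responses carry the signs implied by theory. This is a textbook sketch in the spirit of Canova and De Nicolò (2002), not the authors' augmented implementation; check_signs is a placeholder for the restrictions one would take from macroeconomic and banking theory.

import numpy as np

def sign_restricted_impacts(resid_cov, check_signs, n_draws=1000, seed=0):
    """resid_cov: (n, n) covariance of VAR residuals; check_signs: callable that
    returns True if a candidate impact matrix satisfies the sign restrictions."""
    rng = np.random.default_rng(seed)
    chol = np.linalg.cholesky(resid_cov)
    accepted = []
    for _ in range(n_draws):
        q, r = np.linalg.qr(rng.standard_normal(resid_cov.shape))
        q = q @ np.diag(np.sign(np.diag(r)))   # normalize so the rotation is uniform
        impact = chol @ q                      # candidate structural impact matrix
        if check_signs(impact):
            accepted.append(impact)
    return accepted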

We implement our model using a large set of quarterly time series of the G-7 economies during the 1980Q1-2010Q1 period, and obtain two main results. First, our model provides significant evidence of out-of-sample forecasting power for tail real and financial risk realizations for all countries. Second, stress tests based on this structural identification provide early warnings of vulnerabilities in the real and financial sectors.

Monday, February 27, 2012

Economic crisis: Views from Greece

I asked some Greek professionals about the crisis in their country on behalf of Hanna Intelligence's CEO, Mr. Jose Navio:
dear sir, I got some questions for you, if you have the time:

1  could you please mention effects on the citizenry, such as more children being abandoned in hospices because their families cannot support them?
2  do you know of shortages of food/medicines, or of lower quality in either?
3  is it better in your opinion to leave the Euro and return to the old drachma (or any other new currency)?
4  is it better in your opinion to default and to reject the troika bail-outs?

thank you very much in advance,

xxx

The answer of one of those professionals:

Date: 2/27/2012
Subject: RE: Greece and the economic crisis
Dear Mr xxx,

thank you for asking about my country's present situation; my comments will focus on two issues:

The first one refers to the huge "brain drain" that is in progress during this period in Greece, even greater than in the period after WWII, which was the greatest emigration period in Greek history. People of all ages and professions are migrating to foreign countries around the world, seeking jobs and better living conditions in financial, communal and governance/infrastructural terms.

The second one refers to the sharp rise in the number of people who are homeless or unable to sustain their families' everyday living, dignity and income, due to the unprecedented unemployment rates, wage cuts and increases in the prices of almost all commodities. In cooperation with the church and under the coordination of various entities and NGOs, citizens are gathering food and clothing to assist all those who suffer the "human insecurity" that prevails nowadays in Greece.

I can't say what could have been better for Greece in economic terms, since it's out of my area of expertise, and I don't want to follow the example of all those who suddenly became experts in economic strategies, options, terms and conspiracy theories. I can confirm, though, that this situation is the result of bad Greek governance over the last thirty years, and that although Greece didn't lose sovereignty through wars in its modern history, it did through economic procedures and EU norms. In any case, Greeks are experiencing a very harsh austerity policy and humiliation from various (mostly European) governments and states, and most importantly, instead of facing a hopeful future and prospects, they see things getting worse every day, even after all this inhuman treatment.

I don't know what the plan or EU's "Grand Strategy" might be for Greece, but the proud and cultured Greeks definitely don't deserve what they are experiencing during these years, nor what is yet to come. Civil society is a "boiling pot" due to the decline in everyday living standards, the unpunished and "untouchable" politicians responsible for this situation, explicit inequalities and non-existent options for the future generations. Let's hope at least that we will not also experience bloodshed or Egypt-like uprisings.

I hope I have given you a brief and indicative picture of contemporary Greece, and have been of some help with your questions.

Best regards,

xxx

Thursday, February 23, 2012

Can Institutional Reform Reduce Job Destruction and Unemployment Duration?

Can Institutional Reform Reduce Job Destruction and Unemployment Duration? Yes It Can. By Esther Perez & Yao Yao
IMF Working Paper No. 12/54
February 2012
http://www.imf.org/external/pubs/cat/longres.aspx?sk=25738.0

Summary: We read search theory's unemployment equilibrium condition as an Iso-Unemployment Curve (IUC). The IUC is the locus of job destruction rates and expected unemployment durations rendering the same unemployment level. A country's position along the curve reveals its preferences over the destruction-duration mix, while its distance from the origin indicates the unemployment level at which such preferences are satisfied. Using a panel of 20 OECD countries over 1985-2008, we find employment protection legislation to have opposing effects on destructions and durations, while the effects of the remaining key institutional factors on both variables tend to reinforce each other. Implementing the right reforms could reduce job destruction rates by about 0.05 to 0.25 percentage points and shorten unemployment spells by around 10 to 60 days. Consistent with this, unemployment rates would decline by between 0.75 and 5.5 percentage points, depending on a country's starting position.


Introduction

This paper investigates how labor market policies affect the unemployment rate through its two defining factors, the duration of unemployment spells and job destruction rates. To this end, we read search theory's unemployment equilibrium condition as an Iso-Unemployment Curve (IUC). The IUC represents the locus of job destruction rates and expected unemployment durations rendering the same unemployment level. A country's position along the curve reveals its preferences over the destruction-duration mix, while its distance from the origin indicates the unemployment level at which such preferences are satisfied. We next provide micro-foundations for the link between destructions, durations and policy variables. This allows us to explore the relevance of institutional features using a sample of 20 OECD countries over the period 1985-2008.
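
The excerpt does not spell out the equilibrium condition, but in textbook search theory the steady state equates inflows into unemployment, s(1-u), with outflows, u/D, where s is the job destruction rate and D the expected unemployment duration; this gives u = sD/(1+sD), and an IUC is the set of (s, D) pairs delivering the same u. A minimal sketch under that standard assumption (the paper's exact specification may differ):

def steady_state_unemployment(destruction_rate, expected_duration):
    """destruction_rate: per-period job destruction rate; expected_duration:
    expected unemployment duration in the same time units."""
    sd = destruction_rate * expected_duration
    return sd / (1.0 + sd)

# A "dynamic" and a "stagnant" labor market can sit on the same IUC:
print(steady_state_unemployment(0.08, 1.0))   # high destruction, short spells: about 7.4%
print(steady_state_unemployment(0.02, 4.0))   # low destruction, long spells: also about 7.4%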

The empirical literature investigating the influence of labor market institutions on overall unemployment rate is sizable (see, for instance, Blanchard and Wolfers, 1999, and Nickell and others, 2002). Equally numerous are the studies splitting unemployment into job creation and job destruction flows (see, for example, Blanchard, 1998, Shimer, 2007, and Elsby and others, 2008). This work connects these two strands of the literature by investigating how labor market policies shape both job separations and unemployment spells, which together determine the overall unemployment rate in the economy. The IUC schedule used in our analysis is novel and is motivated by the need to understand the nature of unemployment, as essentially coming from destructions, durations or a combination of both these factors. This can help clarify whether policy makers should focus primarily on speeding up workers’ reallocation across job positions rather than protecting them in the workplace.

One fundamental question raised in this context is whether countries with dynamic labor markets significantly outperform countries with more stagnant markets. By dynamic (stagnant) we mean labor markets displaying high (low) levels of workers’ turnover in and out of unemployment. Is it the case that countries featuring high job destruction rates but brief unemployment spells tend to display lower unemployment rates than labor markets characterized by limited job destruction but longer unemployment durations?  And how do institutional features shape destructions and durations?


Conclusions

This paper reads the basic unemployment equilibrium condition postulated by search theory as an Iso-Unemployment Curve (IUC). The IUC is the locus of job destruction rates and expected unemployment durations that render the same unemployment level. We use this schedule to classify countries according to their preferences over the job destruction-unemployment duration trade-off. The upshot of this analysis is that labor markets characterized by high levels of job destruction but brief unemployment spells do not necessarily outperform countries characterized by the opposite behavior. But the IUC construct makes it clear that high unemployment rates result from extreme values in either durations or destructions, or intermediate-to-high levels in both.

Looking at unemployment through the lens of the IUC schedule focuses attention on each economy's revealed social preferences over the destruction-duration mix. Policy packages fighting unemployment should take into consideration such preferences. Some countries seem to tolerate relatively high destruction rates as long as unemployment duration is short. Others are biased towards job security and do not mind financing longer job search spells. A few unfortunate countries are trapped in a high inflow-high duration combination, seemingly condemned to long periods of high unemployment.

An optimistic message arising from this study, especially for countries located on higher IUCs, is that an ambitious structural reform program tackling high labor tax wedges, activating unemployment benefits and removing barriers to competition in key services can effectively contain job losses, limit the duration of unemployment spells and yield a substantial reduction in unemployment.

Thursday, February 16, 2012

Intra-group support measures in times of stress or unexpected loss by financial groups in the banking, insurance and securities sectors

The Joint Forum: Report on intra-group support measures
Feb 2012
http://www.bis.org/publ/joint28.htm

The Joint Forum (BCBS, IOSCO, IAIS) just published a report to assist national supervisors in gaining a better understanding of the use of intra-group support measures in times of stress or unexpected loss by financial groups across the banking, insurance and securities sectors. The report provides an important overview of intra-group support measures used in practice at a time when authorities are increasingly focused on ways to ensure banks and other financial entities can be wound down in an orderly manner during periods of distress.

The Joint Forum was established in 1996 under the aegis of the Basel Committee on Banking Supervision (BCBS), the International Organization of Securities Commissions (IOSCO) and the International Association of Insurance Supervisors (IAIS) to deal with issues common to the banking, securities and insurance sectors, including the regulation of financial conglomerates.

Excerpts

Executive Summary
The objective of this report prepared by the Joint Forum is to assist national supervisors in gaining a better understanding of the use of intra-group support measures in times of stress or unexpected loss by financial groups across the banking, insurance and securities sectors.  The report provides an important overview of the use of intra-group support at a time when authorities are increasingly focused on ways to ensure banks and other financial entities can be wound down in an orderly manner during periods of distress. The report may also assist the thematic work contemplated by the Financial Stability Board (FSB) on deposit insurance schemes and feed into the ongoing policy development in relation to recovery and resolution plans.

The report is based on the findings of a high-level stock-take which examined the use of intra-group support measures available to banks, insurers and securities firms. The stock-take was conducted through a survey by the Joint Forum Working Group on Risk Assessment and Capital (JFRAC) that was completed by 31 financial institutions headquartered in ten jurisdictions on three continents: Europe, North America and Asia. Participants were drawn from the banking, insurance and securities sectors and from many of the jurisdictions represented by Joint Forum members. Many participating firms were large global financial institutions.

The report provides an overview and analysis of the types and frequency of intra-group support measures used in practice. It is based only on information provided by participants in the survey. Responses were verified by supervisors only in certain instances.

The survey’s main findings are as follows:

1. Intra-group support measures can vary from institution to institution, driven by the regulatory, legal and tax environment; the management style of the particular institution; and the cross-border nature of the business. Authorities should be mindful of the complicating effect of these measures on resolution regimes and the recovery process in the event of failure.

2. The majority of respondents surveyed indicated centralised capital and liquidity management systems were in place. According to proponents, this approach promotes the efficient management of a group's overall capital level and helps maximise liquidity while reducing the cost of funds. However, the respondents that favoured a "self-sufficiency" approach pointed out that centralised management potentially has the effect of increasing contagion risk within a group in the event of distress at any subsidiary. The use of these systems impacts the nature and design of intra-group support measures, with some firms indicating that the way they managed capital and liquidity within the group was a key driver in their decisions about the intra-group transactions and support measures they used.

3. Committed facilities, subordinated loans and guarantees were the most widely used measures. This was evident across all sectors and participating jurisdictions.

4. Internal support measures generally were provided on a one-way basis (eg downstream from a parent to a subsidiary). Loans and borrowings, however, were provided in some groups on a reciprocal basis. As groups surveyed generally operated across borders, most indicated support measures were provided both domestically and internationally. Support measures were also in place between both regulated and unregulated entities and between entities in different sectors.

5. The study found no evidence of intra-group support measures either a) being implemented on anything other than an arm’s length basis, or b) resulting in the inappropriate transfer of capital, income or assets from regulated entities or in a way which generated capital resources within a group. However, this does not necessarily mean that supervisory scrutiny of intra-group support measures is unwarranted. As this report is based on industry responses, further in-depth analysis by national supervisors may provide a more complete picture of the risks potentially posed by intra-group support measures.

6. While the existing regulatory frameworks for intra-group support measures are somewhat limited, firms do have certain internal policies and procedures to manage and restrict internal transactions. Respondents pointed out that the regulatory and legal framework can make it difficult for some forms of intra-group support to come into force while supervisors aim to ensure that both regulated entities and stakeholders are protected from risks arising from the use of support measures. For instance, upstream transfers of liquidity and capital are monitored and large exposure rules can limit the extent of intra-group interaction for risk control purposes. Jurisdictional differences in regulatory settings can also pose a challenge for firms operating across borders.

7. Based on the survey and independent of remaining concerns and information gaps, single sector supervisors should be aware of the risks that intra-group support measures may pose and should fully understand the measures used by an institution, including its motivations for using certain measures over others. In order to obtain further insight into the intra-group support measures put in place by financial institutions within their jurisdiction, national supervisors should, where appropriate, conduct further analysis in this area. A high-level model questionnaire is provided in Annex II with the aim of assisting national supervisors with ongoing work relating to intra-group support measures.

Thursday, February 9, 2012

Short-term Wholesale Funding and Systemic Risk: A Global CoVaR Approach

Short-term Wholesale Funding and Systemic Risk: A Global CoVaR Approach. By German Lopez-Espinosa, Antonio Moreno, Antonio Rubia, and Laura Valderrama
IMF Working Paper No. 12/46
Feb 2012
http://www.imf.org/external/pubs/cat/longres.aspx?sk=25720.0

Summary: In this paper we identify some of the main factors behind systemic risk in a set of international large-scale complex banks using the novel CoVaR approach. We find that short-term wholesale funding is a key determinant in triggering systemic risk episodes. In contrast, we find no evidence that a larger size increases systemic risk within the class of large global banks. We also show that the sensitivity of system-wide risk to an individual bank is asymmetric across episodes of positive and negative asset returns. Since short-term wholesale funding emerges as the most relevant systemic factor, our results support the Basel Committee’s proposal to introduce a net stable funding ratio, penalizing excessive exposure to liquidity risk.

Excerpts

Introduction
That financial markets move more closely together during times of crisis is a well-documented fact. Conditional correlations among assets are much higher when market returns are low in periods of financial stress; see, among others, King and Wadhwani (1990) and Ang, Chen and Xing (2006). Co-movements typically arise from common exposures to shocks, but also from the propagation of distress associated with a decline in the market value of assets held by individual institutions, a phenomenon we dub balance sheet contraction and which is of particular concern in the financial industry. The recent crisis has shown how the failure of large individual credit institutions can have dramatic effects on the overall financial system and, eventually, spread to the real economy. As a result, international financial policy institutions are currently designing a new regulatory framework for the so-called systemically important financial institutions in order to ensure global financial stability and prevent, or at least mitigate, future episodes of systemic contagion.

In this paper, building on a global system of international financial institutions that comprises the largest banks in a sample of 18 countries, we analyze the main determinants of systemic contagion from an individual institution to the international financial system, i.e., the empirical drivers of tail-risk interdependence. We restrict our attention to a set of large-scale, complex institutions that are the target of current regulation efforts and that would likely be considered too-big-to-fail by central banks. These firms are characterized by their large capitalization, global activity, cross-border exposures and/or representative size in the local industry. Using data spanning the 2001-2009 period, we explicitly measure the contribution of the balance-sheet contraction of these institutions to international financial distress. As regulators seek meaningful measures of interconnectedness (Walter 2011), this paper contributes to the current debate on prudential regulatory requirements by showing formal evidence that short-term wholesale funding is a major driver of systemic risk in global banking.

Financial institutions use wholesale funding to supplement retail deposits and expand their balance sheets. These funds are typically raised on a short-term rollover basis with instruments such as large-denomination certificates of deposit, brokered deposits, central bank funds, commercial paper and repurchase agreements. Whereas it is agreed that wholesale funding provides certain managerial advantages (see Huang and Ratnovski, 2011, for a discussion), the effects on systemic risk of an overreliance on these liabilities were under-recognized prior to the recent financial crisis. Banks with excessive short-term funding ratios are typically more interconnected to other banks, exposed to a large degree of maturity mismatch and more vulnerable to market conditions and liquidity risk. These features can critically increase the vulnerability of interbank markets and money market mutual funds which act as wholesale providers of liquidity and, eventually, of the whole financial system. The empirical analysis in this paper provides clear evidence of the major role played by short-term wholesale funding in spreading systemic risk in global markets.

Additionally, we explore the possibility that the contribution to systemic risk may be asymmetric, i.e. that it depends on whether the market value of a bank’s balance sheet is increasing or decreasing. Because a distressed institution is likely to generate larger externalities on the rest of the financial system when its balance sheet is contracting, an empirical analysis of tail risk-dependence within a financial system should distinguish between episodes of expanding and contracting balance sheets. We deal with this previously unaddressed but key issue, finding strong evidence supporting the existence of asymmetric patterns. Finally, we also analyze the effects of the 2008-2009 global financial crisis on systemic risk and assess the impact of public recapitalizations directly targeted at individual banks.

Our study builds on the novel procedure put forward by Adrian and Brunnermeier (2009), the so-called CoVaR methodology, and generalizes it in several ways in order to deal with the characteristics of a sample of international banks and to address the asymmetric patterns that may underlie tail dependence. The main empirical findings of our analysis can be summarized as follows. First, we find that short-term wholesale funding is the most significant balance sheet determinant of individual contributions to global systemic risk. An increase of one percentage point in this variable leads to an increase in the contribution to systemic risk of 40 basis points of quarterly asset returns. These results support regulatory initiatives aimed at increasing bank liquidity buffers to lessen asset-liability maturity mismatches as a mechanism to mitigate individual liquidity risk, such as the liquidity coverage ratio standard recently laid out by the Basel Committee on Banking Supervision under the new Basel III regulatory framework. This paper shows that these provisions may also help to reduce the likelihood of systemic contagion. By contrast, we find little evidence that, within the class of large-scale banks, either relative size or leverage is helpful in predicting future systemic risk after accounting for short-term wholesale funding.
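
The basic CoVaR step can be sketched as a low-quantile regression of system returns on an individual bank's returns, with Delta-CoVaR read off as the predicted shift in the system's tail quantile when the bank moves from its median to its own VaR. The sketch below follows that textbook logic only; the paper's specification additionally includes state variables, balance-sheet characteristics and the asymmetry terms discussed above, which are omitted here, and the variable names are illustrative.

import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

def delta_covar(system_returns, bank_returns, q=0.05):
    """Quantile regression of system returns on one bank's returns at level q."""
    system = np.asarray(system_returns, dtype=float)
    bank = np.asarray(bank_returns, dtype=float)
    fit = QuantReg(system, sm.add_constant(bank)).fit(q=q)
    var_q = np.quantile(bank, q)        # the bank's own VaR (q-quantile of its returns)
    var_med = np.quantile(bank, 0.5)    # the bank's median return
    beta = fit.params[1]                # sensitivity of the system's tail to the bank
    return beta * (var_q - var_med)     # Delta-CoVaR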

Second, our analysis shows that individual balance sheet contraction produces a significant negative spillover on the Value-at-Risk (VaR) threshold of the global index. Whereas the sensitivity of left tail global returns to a shock in an institution’s market valued asset returns is on average about 0.3, the elasticity conditional on an institution having a shrinking balance sheet is almost three times larger. This result reveals a strong degree of asymmetric response that has not been discussed in the extant literature and which turns out to be larger the more systemic the bank is when its balance sheet is contracting. Therefore, controlling for balance sheet contraction is crucial to rank financial institutions by their contribution to systemic risk.

Third, restricting attention to balance sheet contraction episodes, the credit crisis added 0.1 percentage points to the co-movement between individual and global asset returns, while recapitalization during the crisis period dampened co-movement by 0.2 percentage points. Furthermore, the timing of recapitalization also matters for systemic risk. Banks that received prompt recapitalization in Q4 2008 proved able to improve their relative position during the crisis period, whereas banks that were rescued by public authorities later in Q4 2009 became relatively more systemic during the crisis period. Finally, the marginal contribution of an individual bank to overall systemic risk increases from 0.76 quarterly percent returns in an average quarter to 0.92 in a quarter characterized by money market turbulence. These results highlight the relevance of crisis episodes in measuring systemic risk and of policy actions in controlling it.


Concluding remarks and policy recommendations
In this paper we examine some of the main factors driving systemic risk in a global framework. We focus on a set of large-scale, international complex institutions which would in principle be deemed too-big-to-fail by national regulators and which are therefore of major interest for policy makers. For this class of firms, the evidence based on the CoVaR methodology suggests that short-term wholesale funding, a variable strongly related to interconnectedness and liquidity risk exposure, is positively and significantly related to systemic risk, whereas other features of the firm, such as leverage or relative size, do not seem to provide incremental information over wholesale funding. This suggests that this latter variable subsumes to a large extent most of the relevant information on systemic risk conveyed by other firm characteristics. We also uncover the relevant role played by asymmetric responses when assessing the impact of individual institutions on system-wide risk, as we find that the sensitivity of system returns to individual bank returns is much higher in periods of balance sheet deleveraging.

Regulators are currently developing a methodological framework within the context of Basel III that attempts to embody the main factors of systemic importance; see Walter (2011). These factors are categorized as size, interconnectedness, substitutability, global activity and complexity, and will serve as a major reference to determine the amount of additional capital requirements and funding ratios for systemically important financial institutions. Our analysis provides formal empirical support to the Basel Committee’s proposal to penalize excessive exposures to liquidity risk by showing that short-term wholesale funding, a variable capturing interconnectedness, largely contributes to systemic risk. Furthermore, since our findings suggest that some factors are much more important than others in determining systemic risk contributions, an optimal capital buffer structure on systemic banks could in principle be designed by suitably weighting the different driving factors as a function of their relative importance. This is an interesting topic for further research. Similarly, the evidence in this paper also offers empirical support to justify the theoretical models that acknowledge the premise that wholesale funding can generate large systemic risk externalities; see, for instance, Perotti and Suarez (2011) for a recent analysis and references therein.

Given the relevance of liquidity strains as a contributing factor to systemic risk, the regulation of systemic risk could be strengthened by giving incentives to disclose contingent short-term liabilities, in particular those related to possible margin calls under credit default swap contracts and repo funding. Our study also points to the role of large trading books as a source of systemic risk for those banks which were recapitalized during the crisis. As a result, the 2010 revamp of the Basel II capital framework to cover market risk associated with banks' trading book positions will not only decrease individual risk but will also contribute to mitigating systemic risk.

Wednesday, February 8, 2012

The Global Macroeconomic Costs of Raising Bank Capital Adequacy Requirements

The Global Macroeconomic Costs of Raising Bank Capital Adequacy Requirements. By Scott Roger & Francis Vitek
IMF Working Paper No. 12/44
http://www.imf.org/external/pubs/cat/longres.aspx?sk=25716.0

Summary: This paper examines the transitional macroeconomic costs of a synchronized global increase in bank capital adequacy requirements under Basel III, as well as a capital increase covering globally systemically important banks. The analysis, using an estimated multi-country model, contributed to the work of the Macroeconomic Assessment Group, especially in estimating the potential international spillovers associated with a global increase in capital requirements. The magnitude of the effects found in this analysis is relatively modest, especially if monetary policies have scope to ease in response to a widening of interest rate spreads by banks.

Excerpts:

Introduction

1. This paper analyzes the transitional macroeconomic costs of strengthening bank capital adequacy requirements, including a general increase in capital requirements as well as an increase specifically for globally systemically important banks (GSIBs). In addition to estimating the impact of introducing higher capital requirements in each of 15 major economies, the analysis also includes estimates of the international spillover effects associated with the simultaneous introduction of higher capital requirements by all 15 countries. The simulations are generated within the framework of an extended and refined version of the multi-country macroeconometric model of the world economy developed and estimated by Vitek (2009).

2. This analysis contributed to the work of the Macroeconomic Assessment Group (MAG), chaired by the Bank for International Settlements (BIS), and the Long-term Economic Impact (LEI) group of the Basel Committee on Banking Supervision (BCBS). The MAG participants, including the IMF, used a variety of models to estimate the medium-term macroeconomic costs of strengthening capital and liquidity requirements. The analysis presented in this paper, reflecting the MAG mandate, focuses solely on the short-term to medium-term output costs of the proposed new regulatory measures. Estimates of the net benefits of these regulatory measures can be found in the LEI report (BCBS 2010).

3. The macroeconomic effects of an increase in capital adequacy requirements are assumed in this analysis to be transmitted exclusively via increases in the spread between commercial bank lending rates and the central bank policy rate. We estimate that, in the absence of any monetary policy response, a permanent synchronized global increase in capital requirements for all banks by 1 percentage point would cause a peak reduction in GDP of around 0.5 percentage points, of which around 0.1 percentage points would result from international spillovers. Losses in emerging market economies are found to be somewhat higher than in advanced economies. If monetary policy is able to respond, however, the adverse impact of higher capital requirements could be largely offset.

4. With regard to strengthening capital requirements specifically for GSIBs, we estimate that a 1 percentage point increase in capital requirements for the top 30 GSIBs would cause a median peak reduction in GDP of around 0.17 percentage points, of which 0.04 percentage points, or 25 percent, results from international spillovers. The aggregate figures conceal a wide range of outcomes, however, and for some countries, international spillovers would be the main source of macroeconomic effects.

5. It is important to bear in mind the limitations of the model and assumptions used in the analysis. In particular, the analysis does not take account of other possible responses by banks or other financial institutions to changes in capital requirements, or non-linearities in the response of financial systems, monetary policy, or the real economy. Nor does the model allow for changes in the macroeconomic steady state associated with very persistent widening of lending spreads. Additionally, the analysis does not take account of the different initial starting points of different countries in raising capital requirements, or differences in the speed of implementation. 


Concluding comments and caveats

28. The multi-country macroeconomic model used in this analysis contributed importantly to the MAG assessments of the potential impact over the medium term of a global increase in capital requirements, both for all banks and for a smaller group of GSIBs. The results of the multi-country analysis indicate that international spillovers associated with coordinated policy measures are important—our analysis suggests that spillovers typically account for 20-25 percent of the total impact on output. Moreover, in the case of an increase in capital requirements for GSIBs, international spillovers may be the primary source of macroeconomic effects.

29. At the same time, it is important to recognize the limitations associated both with the model and with the exercise in which it was used. With regard to the model, the main limitations to emphasize are that:
* As discussed earlier, the model is not geared to dealing with changes in the steady state associated with permanent or very persistent shocks. Although the quantitative significance of this does not appear to be large in the context of this exercise, it suggests that the estimated effects of a permanent increase in interest rate spreads should be interpreted with caution, particularly at long horizons.

* The model has only one avenue for the increase in capital requirements to affect the real economy: through a widening of bank lending spreads over the policy rate. As discussed in the MAG reports, there are several ways in which banks can respond to higher capital requirements, and some could have much more significant effects on output, while others would be more benign.


30. The exercises themselves have some important limitations that should be borne in mind in assessing the quantitative results and risks surrounding them. These include:
* The implementation of the higher capital requirements is assumed to be linear over the alternative implementation periods. In practice, the speed of implementation is quite likely to be non-linear; indeed, markets may be forcing a front-loading of adjustment.
* The scope for monetary policy responses may well vary over time and differ from one country to another. Not all countries are close to the zero lower bound for interest rates, and even those that are may not remain so over the entire implementation period.  Consequently, macroeconomic outcomes and spillovers are bound to differ from those suggested by the model analysis. The analysis should be thought of as showing bounds for potential outcomes associated with different monetary policies.

* The analyses only consider standardized increases in capital requirements of 1 percentage point. However, the effects of increases in requirements may well be nonlinear, so that the effect of increasing requirements by 2 percentage points may not be simply twice that of a 1 percentage point increase, and the degree of non-linearity may not be the same across time or countries. The zero lower bound constraint is one such nonlinearity, but there are likely to be others.

* The analysis of the global increase in capital requirements assumed an identical increase in capital requirements in all countries. In reality, banks in some countries will have much further to go in meeting higher capital requirements than banks in other countries. As a consequence, the pace of increases in interest rate spreads will vary across countries. As seen in the exercise with GSIBs, where spreads increased by different amounts in different countries, this would significantly modify the pattern of macroeconomic effects and their spillovers between countries.

Tuesday, February 7, 2012

U.S.-China Competition in Asia: Legacies Help America

U.S.-China Competition in Asia: Legacies Help America. BY ROBERT SUTTER
East-West Center
Feb 2012
http://www.eastwestcenter.org/sites/default/files/private/apb147.pdf

As Sino-American competition for influence enters a new stage with the Obama administration’s re-engagement with Asia, each power’s legacies in the region add to economic, military and diplomatic factors determining which power will be more successful in the competition. How the United States and China deal with their respective histories in regional affairs and the role of their non-government relations with the Asia-Pacific represent important legacies that on balance favor the United States.


The Role of History
From the perspective of many regional government officials and observers, the United States and the People’s Republic of China both have historically very mixed records, often resorting to highly disruptive and violent measures to preserve their interests. The record of the United States in the Cold War and later included major wars in Korea and Vietnam and constant military friction along Asia’s rim as it sought to preserve military balance and deter perceived aggression. Many in Asia benefited from America’s resolve and major sacrifices. Most today see the United States as a mature power well aware of the pros and cons of past behavior as it crafts a regional strategy to avoid a potentially dangerous withdrawal and to preserve stability amid U.S. economic and budget constraints.

In contrast, rising China shows little awareness of the implications of its record in the region. Chinese officials and citizens remain deeply influenced by an officially encouraged erroneous claim that China has always been benign and never expansionist. The highly disruptive policies and practices of the People’s Republic of China under the revolutionary leadership of Mao Zedong and the more pragmatic leadership of Deng Xiaoping are not discussed. Well-educated audiences at foreign policy forums at universities and related venues show little awareness of such legacies as consistent Chinese support for the Khmer Rouge as a means to preserve Chinese interests in Southeast Asia. China’s military invasion of Vietnam and Chinese-directed insurgencies against major governments in Southeast Asia, both Western-aligned states and the strictly neutral government of Burma, seem widely unknown.

Chinese officials who should know better also refuse or are unable to deal honestly with the recent past. Speaking last year to a group of Asia-Pacific officials and scholars, including Vietnamese, American and Chinese participants, deliberating over recent trends in Asia, a Chinese foreign affairs official emphasized in prepared remarks that China “has always been a source of stability in Asia.” After watching the Vietnamese participants squirm in their seats, others raised objections to such gross inaccuracy.

China’s lacuna regarding how it has been perceived by its neighbors encumbers its efforts to gain influence in the region. China has a lot to live down. Regional governments need steady reassurance that China will not employ its growing power to return to the domineering and disruptive practices that marked forty of the sixty years of the People’s Republic of China. Educated Chinese citizens and at least some responsible officials appear insensitive to this need because of ignorance. They see no requirement to compensate for the past, and many criticize Chinese government actions that try to accommodate concerns of regional neighbors. Nationalistic rhetoric coming from China portrays neighbors as overly sensitive to Chinese assertions and coercive measures on territorial, trade and other issues, reviving regional wariness that the antagonistic China of the recent past may be reemerging with greater power in the current period.


Non-government Relations

As in many countries, China’s interaction with its neighbors relies heavily on the Chinese government and other official organizations. Even areas such as trade, investment, media, education and other interchange are heavily influenced by administrative support and guidance. An exception is the large number of ethnic Chinese living for generations in neighboring countries, especially in Southeast Asia, who represent a source of non-government influence for China. On balance, the influence of these groups is positive for China, although suspicions about them remain in some countries.

By contrast, for much of its history, the United States exerted influence in Asia and the Pacific much more through business, religious, media, foundation, educational and other interchange than through channels dependent on government leadership and support. Active American non-government interaction with the region continues today, putting the United States in a unique position in which the American non-government sector has a strong and usually positive impact on the influence the United States exerts in the region. Meanwhile, almost 50 years of generally color-blind U.S. immigration policy since the ending of discriminatory U.S. restrictions on Asian immigration in 1965 has resulted in the influx of millions of Asia-Pacific migrants who call America home and who interact with their countries of origin in ways that undergird and reflect well on the U.S. position in the region. No other country, with the exception of Canada, has such an active and powerfully positive channel of influence in the Asia-Pacific.


Outlook: Advantage U.S.

The primary concerns in the Asia-Pacific with stability and development mean that U.S.-Chinese competition for influence probably will focus more on persuasion than coercion. The strong American foundation of webs of positive non-government regional interchange and the Obama administration’s widely welcomed re-engagement with the region contrast with rising China’s poor awareness of its historical impact on the region and limited non-government connections.

Friday, February 3, 2012

Why did the U.S. recover faster from the Panic of 1907 than from the 2008 recession and the Great Depression?

Why did the U.S. recover faster from the Panic of 1907 than from the 2008 recession and the Great Depression?
By PHIL GRAMM AND MIKE SOLON
WSJ, Feb 02, 2012
http://online.wsj.com/article/SB10001424052970204740904577193382505500756.html

Commerce Department data released last Friday show that four years after the recession began, real gross domestic product per person is down $1,112, while 5.8 million fewer Americans are working than when the recession started.

Never before in postwar America has either real per capita GDP or employment still been lower four years after a recession began. If in this "recovery" our economy had grown and generated jobs at the average rate achieved following the 10 previous postwar recessions, GDP per person would be $4,528 higher and 13.7 million more Americans would be working today.

Behind the startling statistics of lost income and jobs are the real and painful stories of American families falling further behind: record high poverty levels, record low teenage employment, record high long-term unemployment, shrinking birthrates, exploding welfare benefits, and a crippled middle class.

As the recovery faltered, President Obama first claimed the weakness of the recovery was due to the depth of the recession, saying that it was "going to take a while for us to get out of this. I think even I did not realize the magnitude . . . of the recession until fairly far into it."

But, in fact, the 1981-82 recession was deeper and unemployment was higher. Moreover, the 1982 recovery was constrained by a contractionary monetary policy that pushed interest rates above 21%, a tough but necessary step to break inflation. It was also a recovery that required a painful restructuring of American businesses to become more competitive in the increasingly globalized economy. By way of comparison, our current recovery has benefited from the most expansionary monetary policy in U.S. history and a rapid return to profitability by corporate America.

Despite the significant disadvantages the economy faced in 1982, President Ronald Reagan's policies ignited a recovery so powerful that if it were being repeated today, real per capita GDP would be $5,694 higher than it is now—an extra $22,776 for a family of four. Some 16.9 million more Americans would have jobs.

The most recent excuse for the failed recovery is that financial crises, by their very nature, result in slower, more difficult recoveries. Yet the 1981-82 recession was at least in part financially induced by inflation, record interest rates and the dislocations they generated. The high interest rates wreaked havoc on long-term lenders like S&Ls, whose net worth turned negative in mid-1982. But even if we ignore the financial roots of the 1981-82 recession, the financial crisis rationalization of the current, weak recovery does not stand up to scrutiny.

The largest economic crisis of the 20th century was the Great Depression, but the second most significant economic upheaval was the panic of 1907. It was from beginning to end a banking and financial crisis. With the failure of the Knickerbocker Trust Company, the stock market collapsed, loan supply vanished and a scramble for liquidity ensued. Banks defaulted on their obligations to redeem deposits in currency or gold.

Milton Friedman and Anna Schwartz, in their classic "A Monetary History of the United States," found "much similarity in its early phases" between the Panic of 1907 and the Great Depression. So traumatic was the crisis that it gave rise to the National Monetary Commission and the recommendations that led to the creation of the Federal Reserve. The May panic triggered a massive recession that saw real gross national product shrink in the second half of 1907 and plummet by an extraordinary 8.2% in 1908. Yet the economy came roaring back and, in two short years, was 7% bigger than when the panic started.

It is certainly true that the economy languished in the Great Depression as it has over the past four years. But today's malaise is similar to that of the Depression not because of the financial events that triggered the disease but because of the virtually identical and equally absurd policy prescriptions of the doctors.

Under President Franklin Roosevelt, federal spending jumped by 3.6% of GDP from 1932 to 1936, an unprecedented spending spree, as the New Deal was implemented. Under President Obama, spending exploded by 4.6% of GDP from 2008 to 2011. The federal debt by the end of 1938 was almost 150% above the 1929 level. Publicly held debt is projected to be double the 2008 level by the end of 2012. The regulatory burden mushroomed under Roosevelt, as it has under Mr. Obama.

Tax policy then and now was equally destructive. The top individual income tax rate rose from 24% to 63% and then to 79% during the Hoover and Roosevelt administrations. Corporate rates were increased by 36%. Under Mr. Obama, capital gains taxes are set to rise by one third, the top effective tax rate on dividends will more than triple, and the highest marginal tax rate will effectively rise by 21.4%.

Moreover, the Obama administration's populist tirades against private business are hauntingly similar to the Roosevelt administration's tirades. FDR's demagoguery against "the privileged few" and "economic royalists" has evolved into Mr. Obama's "the richest 1%" and America's "millionaires and billionaires."

Yet, in his signature style, Mr. Obama now claims our weak recovery is not because a Democratic Congress said yes to his policy prescriptions in 2009-10 but because a Republican House said no in 2011. The sad truth is this president sowed his policies and America is reaping the results.

Faced with the failed results of his own governing strategy of tax, spend and control, the president will have no choice but to follow an election strategy of blame, vilify and divide. But come Nov. 6, American voters need only ask themselves the question Reagan asked in 1980: "Are you better off than you were four years ago?"

Sadly, with their income reduced by thousands, the number of U.S. jobs down by millions, and the nation trillions deeper in debt, the answer will be a resounding "No."

Mr. Gramm, a former U.S. senator from Texas, is the senior partner at U.S. Policy Metrics, where Mr. Solon, a former senior budget staffer in both houses of Congress, is also a partner.

Tuesday, January 31, 2012

Macroeconomic and Welfare Costs of U.S. Fiscal Imbalances

Macroeconomic and Welfare Costs of U.S. Fiscal Imbalances. By Bertrand Gruss and Jose L. Torres
IMF Working Paper No. 12/38
http://www.imf.org/external/pubs/cat/longres.aspx?sk=25691.0

Summary: In this paper we use a general equilibrium model with heterogeneous agents to assess the macroeconomic and welfare consequences in the United States of alternative fiscal policies over the medium term. We find that failing to address the fiscal imbalances associated with current federal fiscal policies for a prolonged period would result in a significant crowding-out of private investment and a severe drag on growth. Compared to adopting a reform that gradually reduces federal debt to its pre-crisis level, postponing debt stabilization for two decades would entail a permanent output loss of about 17 percent and a welfare loss of almost 7 percent of lifetime consumption. Moreover, the long-run welfare gains from the adjustment would more than compensate for the initial losses associated with the consolidation period.

The authors start the paper this way:

“History makes clear that failure to put our fiscal house in order will erode the vitality of our economy, reduce the standard of living in the United States, and increase the risk of economic and financial instability.”

Ben S. Bernanke, 2011 Annual Conference of the Committee for a Responsible Federal Budget


Excerpts
Introduction
One of the main legacies of the Great Recession has been the sharp deterioration of public finances in most advanced economies. In the U.S., federal debt held by the public surged from 36 percent of GDP in 2007 to around 70 percent in 2011. This rise in debt, however impressive, is dwarfed by the medium-term fiscal imbalances associated with entitlement programs and revenue-constraining measures. For example, the non-partisan Congressional Budget Office (CBO) projects that debt held by the public will exceed 150 percent of GDP by 2030 (see Figure 1). Similarly, Batini et al. (2011) estimate that closing the federal “fiscal gap” associated with current fiscal policies would require a permanent fiscal adjustment of about 15 percent of GDP.
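As a point of reference, a standard textbook formulation of the fiscal gap (offered here as a sketch, not necessarily the exact definition used by Batini et al.) is the constant improvement in the primary balance, as a share of GDP, needed for the intertemporal budget constraint to hold. With initial debt ratio b_0, interest rate r, growth rate g, and projected primary balances s_t under current policy, the gap Delta solves

\Delta = \frac{r-g}{1+g}\left[\, b_0 - \sum_{t=1}^{\infty}\left(\frac{1+g}{1+r}\right)^{t} s_t \right].

Read this way, a gap of about 15 percent of GDP means the primary balance would have to be higher by roughly that amount in every future year, relative to current policy, to keep debt from growing without bound.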

While the crisis brought the need to address the U.S. medium-term fiscal imbalances to the center of the policy debate, the costs they entail are not necessarily well understood. Most of the long-term fiscal projections regularly produced in the U.S. and used to guide policy discussions are derived from debt accounting exercises. A shortcoming of such an approach is that relative prices and economic activity are unaffected by different fiscal policies, and that it cannot be used for welfare analysis. To overcome those limitations and contribute to the debate, in this paper we use a rational expectations general equilibrium framework to assess the medium-term macroeconomic and welfare consequences of alternative fiscal policies in the U.S. We find that failing to address the federal fiscal imbalances for a prolonged period would result in a significant crowding-out of private investment and drag on growth, entailing a permanent output loss of about 17 percent and a welfare loss of almost 7 percent of lifetime consumption. Moreover, we find that the long-run welfare gains from stabilizing the federal debt at a low level more than compensate for the welfare losses associated with the consolidation period. Our results also suggest that the crowding-out effects of public debt are an order of magnitude bigger than the policy mix effects: reducing promptly the level of public debt is significantly more important for activity and welfare than differences in the size of government or the design of the tax reform.

The focus of this study is on the costs and benefits of fiscal consolidation for the U.S. over the medium-term to long-term. In this sense, we explicitly leave aside some questions on fiscal consolidation that, while very relevant for the short-run, cannot be appropriately tackled in this framework. One example is assessing the effects of back-loading the pace of consolidation in the near term—while announcing a credible medium-run adjustment—in the current context of growth below potential and nominal interest rates close to zero. A related relevant question is what mix of fiscal instruments in the near term would make fiscal consolidation less costly in such context. While interesting, these questions are beyond the scope of this paper.

The quantitative framework we use is a dynamic stochastic general equilibrium model with heterogeneous agents and endogenous occupational choice and labor supply. In the model, ex-ante identical agents face idiosyncratic entrepreneurial ability and labor productivity shocks, and choose their occupation. Agents can either become entrepreneurs and hire other workers, or become workers and decide what fraction of their time to work for entrepreneurs. In order to make a realistic analysis of the policy options, we assume that the government does not have access to lump sum taxation. Instead, the government raises distortionary taxes on labor, consumption, and income, and issues one-period non-contingent bonds to finance lump sum transfers to all agents, other noninterest spending, and debt service. Given that the core issue threatening debt sustainability in the U.S. is the explosive path of spending on entitlement programs, the heterogeneous agents assumption is crucial: our model allows for a meaningful tradeoff between distortionary taxation and government transfers, as the latter insure households against very low levels of consumption. The complexity this introduces forces us to sacrifice on some dimensions: agents in our model face individual uncertainty but have perfect foresight about future paths of fiscal instruments and prices. Allowing for uncertainty about the timing and composition of the adjustment would be interesting, but would severely increase the computational cost.

We compare model simulations from four alternative fiscal scenarios. The benchmark scenario maintains current fiscal policies for about twenty years. More precisely, in this scenario we feed the model with the spending (noninterest mandatory and discretionary) and revenue projections from CBO’s Alternative Fiscal scenario (CBO 2011), allowing all other variables to adjust endogenously, until about 2030, when we assume that the government increases all taxes to stabilize the debt at its prevailing level. Three alternative scenarios assume, instead, the immediate adoption of a fiscal reform aimed at gradually reducing the federal debt to its pre-crisis level. There are of course many possible parameterizations for such a reform reflecting, among other things, different views about the desired size of the public sector and the design of the tax system. We first consider an adjustment scenario assuming the same size of government and tax structure as the benchmark one, in order to disentangle the sole effect of delaying fiscal adjustment and stabilizing the debt ratio at a high level. We then explore the effect of alternative designs for the consolidation plan by considering two alternative adjustment scenarios that incorporate spending and revenue measures proposed by the bipartisan December 2010 Bowles-Simpson Commission.

This paper is related to different strands of the macro literature on fiscal issues. First, it is related to studies using general equilibrium models to analyze the implications of fiscal consolidations. Forni et al. (2010) use perfect-foresight simulations from a two-country dynamic model to compute the macroeconomic consequences of reducing the debt to GDP ratio in Italy. Coenen et al. (2008) analyze the effects of a permanent reduction in public debt in the Euro Area using the ECB NAWM model. Clinton et al. (2010) use the IMF GIMF model to examine the macroeconomic effects of permanently reducing government fiscal deficits in several regions of the world at the same time. Davig et al. (2010) study the effects of uncertainty about when and how policy will adjust to resolve the exponential growth in entitlement spending in the U.S.

The main difference with our paper is that these works rely on representative agent models that cannot adequately capture the redistributive and insurance effects of fiscal policy. As a result, such models have by construction a positive bias towards fiscal reforms that lower transfers, reduce the debt, and eventually lower the distortions by lowering tax rates. Another unappealing feature of the representative agent models for analyzing the merits of a fiscal consolidation is that, in steady state, the equilibrium real interest rate is independent of the debt level, whereas in our model the equilibrium real interest rate is endogenously affected by the level of government debt, which is consistent with the empirical literature.

Second, the paper is related to previous work using general equilibrium models with infinitely lived heterogeneous agents, occupational choice, and borrowing constraints to analyze fiscal reforms, such as Li (2002), Meh (2005) and Kitao (2008). Differently from these papers, which impose a balanced budget every period, we focus on the effects of debt dynamics and fiscal consolidation reforms. Also, since we focus on reforms over an extended period of time, we augment our model to include growth. Moreover, as in Kitao (2008), we explicitly compute the transitional dynamics after the reforms and analyze the welfare costs associated with the transition.

Results: The long-run effects


What is the effect of delaying fiscal consolidation on...?
Capital and Labor. The high interest rates in the delay scenario imply that, for those entrepreneurs that do not have enough internal funding, the cost of borrowing sufficient capital is too high to compensate for their income under the outside option (i.e., wage income). As a result, the share of entrepreneurs in the delay scenario is roughly one half the share under the passive adjust scenario and the aggregate capital stock is about 17 percent lower. The higher share of workers in the delay scenario implies a higher labor supply. Together with lower labor demand (due to a lower capital stock), this leads to a real wage that is more than 19 percent lower. Total hours worked are similar in the two steady states as lower individual hours offset the higher share of workers.
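The entrepreneurship margin described above can be summarized by a standard occupational-choice condition, written here as a sketch in generic notation rather than the paper's exact formulation: an agent with assets a and entrepreneurial ability \theta chooses entrepreneurship only if the maximized profit from running a firm, net of the cost of borrowed capital, exceeds the outside wage,

\max_{k,\,l}\;\Big\{\theta f(k,l) - w\,l - \delta k - r\,(k-a)\Big\} \;\ge\; w .

Under this reading, the higher interest rate r in the delay scenario shrinks the set of agents for whom the inequality holds, which is why the share of entrepreneurs roughly halves.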

Output and Consumption. The crowding-out effect of fiscal policy under the delay scenario leads to large permanent losses in output and consumption. The level of GDP is about 16 percent lower in the delay than in the passive adjust scenario and aggregate consumption is 3.5 percent lower. Moreover and as depicted in Figure 4, the wealth distribution is significantly more concentrated under the delay scenario.

Welfare. The effect of lower aggregate consumption and a more concentrated wealth distribution under the delay scenario implies that welfare is significantly lower than in the passive adjust scenario. Using a consumption-equivalent welfare metric, we find that the average difference in steady state welfare across scenarios would be equivalent to permanently increasing consumption of each agent in the delay scenario economy by 6 percent while leaving their amount of leisure unchanged. We interpret this differential as the permanent welfare gain from stabilizing public debt at its pre-crisis level. A breakdown of the welfare comparison of steady states by wealth deciles, shown in Figure 5, suggests that all agents up to the 7th decile of the wealth distribution would be better off under fiscal consolidation.
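For concreteness, the consumption-equivalent metric referred to above can be written as follows; this is a standard formulation offered as a sketch, not necessarily the authors' exact expression. The welfare differential \lambda solves

E_0 \sum_{t=0}^{\infty} \beta^{t}\, u\big((1+\lambda)\,c_t^{\text{delay}},\,\ell_t^{\text{delay}}\big) \;=\; E_0 \sum_{t=0}^{\infty} \beta^{t}\, u\big(c_t^{\text{adjust}},\,\ell_t^{\text{adjust}}\big),

so \lambda = 0.06 means every agent's consumption in the delay economy would have to rise permanently by 6 percent, with leisure unchanged, to deliver the same average welfare as the passive adjust economy.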


What are the effects of alternative fiscal consolidation plans?

Capital and Output. The smaller size of government in the two active adjust scenarios relative to the passive one translates into higher capital stocks and higher output, increasing the gap with the delay scenario. Regarding the tax reform, the comparison between the two active adjust scenarios reveals that distributing the higher tax pressure across all taxes, including consumption taxes, lowers distortions and results in a higher capital stock and in a growth-friendlier consolidation: the difference in the output level between the delay and active (1) adjust scenarios stands at 17.7 percent, while this difference is 17.1 and 15.7 percent for the active (2) adjust and passive adjust scenarios respectively.

Consumption and Welfare. While all adjust scenarios reveal a significant difference in long-run per-capita consumption and welfare with respect to postponing fiscal consolidation, the relative performance among them also favors a smaller size of government and a balanced tax reform. The difference in per-capita consumption with the delay scenario is 3.5, 5.8 and 5.4 percent respectively for the passive, active (1) and active (2) adjustment scenarios. The policy mix under the active (1) adjust scenario also ranks the best in terms of welfare, with the welfare differential with respect to the delay scenario being more than 7 percent of lifetime consumption.

Overall Welfare Cost of Delaying Fiscal Consolidation

In the long run the average welfare in the adjust scenario is higher than in the delay scenario by 6.7 percent of lifetime consumption. However, along the transition to the new steady state the adjust scenario is characterized by a costly fiscal adjustment that entails a lower path for per capita consumption, so it is not necessarily true that the adjustment is optimal.

To assess the overall welfare ranking of the alternative fiscal paths, we extend the analysis of section III.A by computing, for the delay and adjust scenarios, the average expected discounted lifetime utility starting in 2011. We find that even taking into account the costs along the transition, the adjust scenario entails an average welfare gain for the economy. The infinite horizon welfare comparison suggests that consumption under the delay scenario would have to be raised by 0.8 percent for all agents in the economy in all periods to attain the same average utility as under the adjust scenario (while leaving leisure unchanged). A breakdown of this result by wealth deciles (see Figure 9) suggests that, as in the long-run comparison, the wealthiest decile of the population is worse off under the adjust scenario. Differently from the steady state comparison, however, the first four deciles also face welfare losses in the adjust scenario.

A few elements suggest that the average welfare gain reported (0.8 percent in consumption-equivalent terms) can be considered a lower bound. First, the calibrated subjective discount factor from the model used to compute the present value of the utility paths entails a yearly discount rate of about 9.9 percent. With such a high discount rate, the long-run benefits of adjusting relative to the delay scenario are heavily discounted. Using a discount rate of 3 percent, the one used by the CBO for calculating the present value of future streams of revenues and outlays of the government’s trust funds, would imply a consumption-equivalent welfare gain of 5.9 percent (instead of 0.8 percent). Second, the model we are using has infinitely lived agents, so we are not explicitly accounting for the distribution of costs and benefits across generations.
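To see why the discount rate matters so much for this comparison, the short Python sketch below (illustrative only; the two rates are the ones mentioned in the text, not model output) prints the weight that utility received t years from now carries in a present-value calculation under each rate.

# Illustrative only: weight on period-t utility falls much faster at ~9.9%
# per year than at the 3% rate used by CBO, so long-run gains count for less.
for t in (10, 30, 50):
    w_high = 1 / (1.099 ** t)   # weight at roughly 9.9% per year
    w_low = 1 / (1.03 ** t)     # weight at 3% per year
    print(f"t={t:>2}: weight at 9.9% = {w_high:.3f}, at 3% = {w_low:.3f}")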

Conclusions
We compare the macroeconomic and welfare effects of failing to address the fiscal imbalances in the U.S. for an extended period with those of reducing federal debt to its pre-crisis level and find that the stakes are quite high. Our model simulations suggest that the continuous rise in federal debt implied by current policies would have sizeable effects on the economy, even under certainty that the federal debt will be fully repaid. The model predicts that the mounting debt ratio would increase the cost of borrowing and crowd out private capital from productive activities, acting as a significant drag on growth. Compared to stabilizing federal debt at its pre-crisis level, continuation of current policies for two decades would entail a permanent output loss of around 17 percent. The associated drop in per-capita consumption, combined with the worsening of wealth concentration that the model suggests, would cause a large average welfare loss in the long run, equivalent to about 7 percent of lifetime consumption. Our results also suggest that reducing promptly the level of public debt is significantly more important for activity and welfare than differences in the size of government or the design of the tax reform. Accordingly, even under consensus on the desirability of increasing primary spending in the medium run, it would be preferable to start from a fiscal house in order.

The model adequately captures that the fiscal consolidation needed to reduce federal debt to its pre-crisis level would be very costly. Still, extending the welfare comparison to include the transition period as well suggests that a fiscal consolidation would be beneficial on average. After taking into account the short-term costs, the average welfare gain from fiscal consolidation stands at 0.8 percent of lifetime consumption.

We argue that our welfare results can be interpreted as a lower bound. This is because, first, we abstract from default so our simulations ignore the potential effect of higher public debt on the risk premium. However, as the debt crisis in Europe has revealed, interest rates can soar quickly if investors lose confidence in the ability of a government to manage its fiscal policy. Considering this effect would have magnified the long-run welfare costs of stabilizing the debt ratio at a higher level. Second, the high discount rate we use in the computation of the present value of utility exacerbates the short-term costs. If we recomputed the overall welfare effects in our scenarios using a discount rate of 3 percent, the welfare gain from a consolidation would be 5.9 percent of lifetime utility, instead of 0.8 percent. An argument for considering a lower rate to compute the present value of welfare is that by assuming infinitely lived agents we are not attaching any weight to unborn agents that would be affected by the permanent costs of delaying the resolution of fiscal imbalances and do not enjoy the expansionary effects of the unsustainable policy along the transitional dynamics.

The results in this paper are not exempt from the perils inherent to any model-dependent analysis. In order to address features that we believe are crucial for the issue at hand, we needed to simplify the model on other dimensions. For example, given the current reliance of the U.S. on foreign financing, the closed economy assumption used in this paper may be questionable. However, we believe that it would also be problematic to assume that the world interest rate will remain unaffected if the U.S. continues to considerably increase its financing needs. Moreover and as mentioned before, the model ignores the effect of higher debt on the perceived probability of default, which would likely counteract the effect in our results from failing to incorporate the government’s access to foreign borrowing. The model also abstracts from nominal issues and real and nominal rigidities typically introduced in the new Keynesian models commonly used for policy analysis. However, we believe that while these features are particularly relevant for short-term cyclical considerations, they matter much less for the longer-term issues addressed in this paper.

How Risky Are Banks’ Risk Weighted Assets? Evidence from the Financial Crisis

How Risky Are Banks’ Risk Weighted Assets? Evidence from the Financial Crisis. By Sonali Das & Amadou N. R. Sy
IMF Working Paper No. 12/36
http://www.imf.org/external/pubs/cat/longres.aspx?sk=25687.0

Summary: We study how investors account for the riskiness of banks’ risk-weighted assets (RWA) by examining the determinants of stock returns and market measures of risk. We find that banks with higher RWA had lower stock returns over the US and European crises. This relationship is weaker in Europe where banks can use Basel II internal risk models. For large banks, investors paid less attention to RWA and rewarded instead lower wholesale funding and better asset quality. RWA do not, in general, predict market measures of risk although there is evidence of a positive relationship before the US crisis which becomes negative afterwards.

Introduction:
“The leverage ratio - a simple ratio of capital to balance sheet assets - and the more complex risk-based requirements work well together. The leverage requirement provides a baseline level of capital to protect the safety net, while the risk-based requirement can capture additional risks that are not covered by the leverage framework. The more advanced and complex the models become, the greater the need for such a baseline. The leverage ratio ensures that a capital backstop remains even if model errors or other miscalculations impair the reliability of risk-based capital. This is a crucial consideration - particularly as we work through the implementation of Basel II standard. By restraining balance sheet growth, the leverage ratio promotes stability and resilience during difficult economic periods.” – Remarks by Sheila Bair, Chairman, Federal Deposit Insurance Corporation before the Basel Committee on Banking Supervision, Merida, Mexico, October 4, 2006.

The financial crisis that began in 2007 has exposed a number of important weaknesses in banking regulation. A key challenge is how to appropriately determine the riskiness of banks’ assets. The principle that regulatory capital requirements should be tied to the risks taken by banks was accepted internationally and formalized with the Basel I accord in 1988, and the definition of capital and measurement of risks have undergone several revisions since that time. The second Basel accord, published in 2004, recommended banks hold total regulatory capital equal to at least 8 percent of their risk-weighted assets (RWA). The recently updated Basel III guidelines emphasize higher quality forms of capital, but make limited strides in the measurement of risks. Instead, Basel III proposes, as a complementary measure, a non-risk-weighted leverage ratio.

Risk weighted assets are an important element of risk-based capital ratios. Indeed, banks can increase their capital adequacy ratios in two ways: (i) by increasing the amount of regulatory capital held, which boosts the numerator of the ratio, or (ii) by decreasing risk-weighted assets, which is the denominator of the regulatory ratio. A key concern about current methods of determining risk-weighted assets is that they leave room for individual banks to “optimize” capital requirements by underestimating their risks and thus being permitted to hold lower capital. Jones (2000) discusses techniques banks can use to engage in regulatory capital arbitrage and provides evidence on the magnitude of these activities in the United States. Even under the Basel I system, in which particular classes of assets are assigned fixed risk-weights, the capital ratio denominator can be circumvented. Merton (1995) provides an example in which, in place of a portfolio of mortgages, a bank can hold the economic equivalent of that portfolio at a risk-weight one-eighth as large. Innovations in financial products since the first Basel accord have also likely made it easier for financial institutions to manipulate their regulatory risk measure. Acharya, Schnabl, and Suarez (2010) analyze asset-backed commercial paper and find results suggesting that banks used this form of securitization to concentrate, rather than disperse, financial risks in the banking sector while reducing bank capital requirements.
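The arithmetic of the two levers, and the contrast with the simple leverage ratio from the opening quote, can be made concrete with a small sketch; the numbers below are hypothetical.

# Illustrative arithmetic: the risk-based capital ratio is capital / RWA, so it
# can be raised either by adding capital (numerator) or by shrinking reported
# RWA (denominator). Figures are hypothetical, in billions.
def capital_ratio(capital, rwa):
    return capital / rwa

capital, rwa = 8.0, 100.0
print(capital_ratio(capital, rwa))   # 0.08, the Basel II minimum

# Lever (i): raise capital to 10 -> ratio rises to 10%.
print(capital_ratio(10.0, rwa))      # 0.10
# Lever (ii): re-weight the same assets so reported RWA falls to 80 -> also 10%,
# with no change in the underlying exposures.
print(capital_ratio(capital, 80.0))  # 0.10

# The simple leverage ratio (capital / total, unweighted assets) is unaffected
# by lever (ii): with total assets of 200, it stays at 4% either way.
print(capital / 200.0)               # 0.04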

In addition to concerns about underestimating the riskiness of assets, there are differences in the calculation of risk-weighted assets across countries that may have unintended effects on financial stability. Lord Adair Turner, chairman of the UK Financial Services Authority, warned in June that international differences in the calculation of risk-weighted assets could undermine Basel III, and Sheila Bair, former chairman of the US Federal Deposit Insurance Corporation, added her concern that Europe and the US may be diverging in their calculation of RWA: “The risk weightings are highly variable in Europe and have led to continuing declines in capital levels, even in the recession. There's pretty strong evidence that the RWA calculation isn't working as it's supposed to.”

In this paper, we study whether equity investors find banks’ reported risk-weighted assets to be a credible measure of risk. First, did banks with lower risk-weighted assets have higher stock returns during the recent financial crisis? And second, do measures of risk based on equity market information correspond to risk-weighted assets? Demirgüç-Kunt, Detragiache, and Merrouche (2010) and Beltratti and Stulz (2010) also study banks’ stock return performance during the financial crisis, focusing primarily on the effect of different measures of capital and bank governance, respectively. Our paper studies whether markets price bank risk as measured by RWA, to inform the debate on how best to measure the risks embedded in banks’ portfolios.

Addressing the first question, we find that banks with higher RWA performed worse during the severe phase of the crisis, from July 2007 to September 2008, suggesting that equity investors did look at RWA as a determinant of banks’ stock returns in this period. This relationship is weaker in Europe, where banks can use Basel II internal risk models. For large banks, investors paid less attention to RWA and rewarded instead lower wholesale funding and better asset quality.
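A minimal sketch of the kind of regression exercise just described is shown below: it regresses synthetic crisis-period bank stock returns on an RWA-to-assets ratio and a few balance-sheet controls. Everything here (data, variable names, coefficients) is invented for illustration and is not the paper's dataset or specification.

# Hypothetical sketch: cross-sectional regression of crisis returns on
# RWA/assets and balance-sheet controls, using synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 150  # banks (synthetic)
banks = pd.DataFrame({
    "rwa_to_assets": rng.uniform(0.2, 0.8, n),
    "tier1_ratio": rng.uniform(0.06, 0.14, n),
    "wholesale_funding": rng.uniform(0.1, 0.6, n),
    "npl_ratio": rng.uniform(0.0, 0.08, n),
})
# Synthetic crisis return: banks with higher RWA and more wholesale funding do worse.
banks["crisis_return"] = (-0.3 * banks["rwa_to_assets"]
                          - 0.4 * banks["wholesale_funding"]
                          + 0.8 * banks["tier1_ratio"]
                          - 1.0 * banks["npl_ratio"]
                          + rng.normal(0, 0.05, n))

fit = smf.ols("crisis_return ~ rwa_to_assets + tier1_ratio + wholesale_funding + npl_ratio",
              data=banks).fit()
print(fit.summary())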

We find as in Demirguc-Kunt, Detragiache, and Merrouche (2010) that markets do not respond to all measures of capital, but respond positively to higher quality measures – that is, capital with greater loss-absorbing potential. We also investigate the possibility of a capital-liquidity trade-off in the market assessment of banks. Our results indicate that there is indeed a capital-liquidity trade-off: (i) banks with more stable sources of short-term funding are not rewarded as highly for having higher capital, and (ii) banks with liquid assets are not rewarded as highly for having higher capital.

Regarding the relationship between RWA and stock market measures of bank risk, we find that RWA do not, in general, predict market measures of banks’ riskiness. There is evidence, however, of a positive relationship between RWA and market risk in the three years prior to the crisis, from 2004 to 2006, and this relationship becomes negative after the crisis. This could result from the large increase since the crisis in market measures of risk, which reflect the volatility of a bank’s stock price, while banks have not adjusted their RWA to account for the increased risk.

Conclusions
There has been a steady decline in the measure of asset risk that banks report to regulators—risk-weighted assets (RWA)—over the last decade. In light of this trend and other indications that banks can “optimize” their capital by under-reporting RWA in an attempt to minimize regulatory burdens, we study how equity market investors account for the riskiness of RWA by examining the determinants of stock returns and stock-market measures of risk of an international panel of banks.

Regarding banking stock returns, we find a negative relationship between RWA and stock returns over periods of financial crisis, suggesting that investors use RWA as an indicator of bank portfolio risk. Indeed, banks with higher risk-weighted assets performed worse during the severe phase of the crisis, from July 2007 to September 2008. We find a similar result when we focus on the ongoing crisis in Europe.

Comparing regions with different regulatory structures, we find, however, that the relationship between stock returns and RWA is weaker in countries where banks have more discretion in the calculation of RWA. Specifically, in countries that had implemented Basel II before the onset of the recent financial crisis, allowing banks to use their own internal models to assess credit risks, investors look to other balance-sheet measures of risk exposure but not RWA. Our results also suggest that for large banks, investors paid less attention to the quality of capital and RWA during the crisis and rewarded instead lower reliance on wholesale funding and better asset quality, as measured by the relative size of customer deposits and non-performing loans, respectively.

We confirm results from previous studies that only capital with the greatest loss-absorbing potential matters for stock returns. In addition, we find a trade-off between capital and liquidity in terms of their positive effects on bank stock returns. The more stable a bank’s funding, the less positive the effect of higher capital on its stock return; the more liquid a bank’s assets, the less an increase in capital will increase its stock return.

When it comes to stock-market measures of risk, we find that RWA do not, in general, predict market measures of bank risk. There is evidence, however, of a break in the relationship between stock market measures of risk and RWA since the start of the crisis. Indeed, we find a positive relationship between RWA and market risk in the three years prior to the crisis, from 2004 to 2006, and this relationship becomes negative after the crisis. This could result from the large increase since the crisis in market measures of risk, which reflect the volatility of a bank’s stock price, while banks have not adjusted their RWA to reflect the increased risk.

In light of increasing risk-aversion in markets during times of crisis, the question of how market assessments of risk should be incorporated into banking regulation and supervision remains open. Indeed, the asymmetry of information between banks, supervisors, and market participants regarding how risky RWA are can lead to increased uncertainty about the adequacy of bank capital, which, during a financial crisis, can have damaging effects on financial stability.