Tuesday, April 24, 2012

Central Bank Independence and Macro-prudential Regulation. By Kenichi Ueda & Fabian Valencia

IMF Working Paper No. 12/101
Apr 2012
http://www.imf.org/external/pubs/cat/longres.aspx?sk=25872.0

Summary: We consider the optimality of various institutional arrangements for agencies that conduct macro-prudential regulation and monetary policy. When a central bank is in charge of price and financial stability, a new time inconsistency problem may arise. Ex-ante, the central bank chooses the socially optimal level of inflation. Ex-post, however, the central bank chooses inflation above the social optimum to reduce the real value of private debt. This inefficient outcome arises when macro-prudential policies cannot be adjusted as frequently as monetary policy. Importantly, this result arises even when the central bank is politically independent. We then consider the role of political pressures in the spirit of Barro and Gordon (1983). We show that if either the macro-prudential regulator or the central bank (or both) are not politically independent, separation of price and financial stability objectives does not deliver the social optimum.

Excerpts

Introduction

A growing literature based on models where pecuniary externalities reinforce shocks in the aggregate advocates the use of macro-prudential regulation (e.g. Bianchi (2010), Bianchi and Mendoza (2010), Jeanne and Korinek (2010), and Jeanne and Korinek (2011)). Most research in this area has focused on understanding the distortions that lead to financial amplification and on assessing their quantitative importance. The natural next question is how to implement macro-prudential regulation.

Implementing macro-prudential policy requires, among other things, figuring out the optimal institutional design. In this context, there is an intense policy debate about the desirability of formally assigning the central bank responsibility for financial stability. This debate has spurred interest in studying the interactions between monetary and macro-prudential policies with the objective of understanding the conflicts and synergies that may arise from different institutional arrangements.

This paper contributes to this debate by exploring the circumstances under which it may be suboptimal to have the central bank in charge of macro-prudential regulation. We differ from a rapidly expanding literature on macro-prudential and monetary interactions, including De Paoli and Paustian (2011) and Quint and Rabanal (2011), mainly in that our focus is on the potential time-inconsistency problems that can arise, which are not addressed in existing work. Our departure point is the work pioneered by Kydland and Prescott (1977) and Barro and Gordon (1983), who studied how time-inconsistency problems and political pressures distort the monetary authority's incentives under various institutional arrangements. In our model, there are two stages. In the first stage, the policymaker (possibly a single institution or several) makes simultaneous monetary policy and macro-prudential regulation decisions. In the second stage, monetary policy decisions can be revised or "fine-tuned" after the realization of a credit shock. This setup captures the fact that macro-prudential regulation is intended to be used preemptively; once a credit shock (boom or bust) has taken place, it can do little to change the stock of debt. Monetary policy, on the other hand, can be used both ex-ante and ex-post.

The key finding of the paper is that a dual-mandate central bank is not socially optimal. In this setting, a time inconsistency problem arises. While it is ex-ante optimal for the dual-mandate central bank to deliver the socially optimal level of inflation, it is not so ex-post. This central bank has the ex-post incentive to reduce the real burden of private debt through inflation, similar to the incentives to monetize public sector debt studied in Calvo (1978) and Lucas and Stokey (1983).  This outcome arises because ex-post the dual-mandate central bank has only one tool, monetary policy, to achieve financial and price stability.
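The ex-post incentive can be made concrete with a toy calculation. The sketch below is not the paper's model; the quadratic loss function, the debt-erosion term, and every parameter value are invented for illustration only. It shows a dual-mandate policymaker choosing zero inflation when there is no private debt to erode, but strictly positive inflation once a credit boom has left nominal debt outstanding.

```python
# Hypothetical loss function for a dual-mandate central bank: a price-stability
# term (pi squared) plus a financial-stability term that grows with the real
# value of nominal private debt, D / (1 + pi). Inflation erodes that real
# value, which is exactly the ex-post temptation described above.

def loss(pi, debt, theta=0.5):
    real_debt = debt / (1.0 + pi)   # higher inflation -> lower real debt burden
    return pi ** 2 + theta * real_debt ** 2

def argmin_pi(debt):
    # Coarse grid search over pi in [0, 0.5]; precision is irrelevant here.
    grid = [i / 1000.0 for i in range(501)]
    return min(grid, key=lambda pi: loss(pi, debt))

print(argmin_pi(debt=0.0))  # 0.0 -- ex ante, with no debt, zero inflation is chosen
print(argmin_pi(debt=1.0))  # about 0.25 -- ex post, inflation above the social optimum
```

With debt outstanding, the marginal gain from eroding real debt outweighs the marginal cost of inflation near pi = 0, so the chosen inflation rate is strictly positive: the time-inconsistency problem in miniature.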

We then examine the role of political factors with a simple variation of our model in the spirit of Barro and Gordon (1983). We find that the above result prevails if policy is conducted by politically independent institutions. However, when institutions are not politically independent (the central bank, the macro-prudential regulator, or both), neither separate institutions nor a combination of objectives in a single institution delivers the social optimum. As in Barro and Gordon (1983), the non-independent institution will use the policy tool at hand to try to generate economic expansions. The non-independent central bank will use monetary policy for this purpose, and the non-independent macro-prudential regulator will use regulation. Which arrangement generates lower welfare losses in the case of non-independence depends on parameter values. A calibration of the model using parameter values from the literature suggests, however, that a regime with a non-independent dual-mandate central bank almost always delivers a worse outcome than a regime with a non-independent but separate macro-prudential regulator.

Finally, if the only distortion of concern is political interference (i.e. ignoring the time-inconsistency problem highlighted earlier), all that is needed to achieve the social optimum is political independence, with separation or combination of objectives yielding the same outcome. From a policy perspective, our analysis suggests that a conflict between price and financial stability objectives may arise if pursued by a single institution. Our results also extend the earlier findings by Barro and Gordon (1983) and many others on political independence of the central bank by showing that they apply to a macro-prudential regulator as well. We should note that we have abstracted from considering the potential synergies that may arise from having dual-mandate institutions. For instance, benefits from information sharing and use of central bank expertise may mitigate the welfare losses we have shown may arise (see Nier, Osinski, Jácome and Madrid (2011)), although information sharing would also benefit fiscal and monetary interactions. However, we have also abstracted from other aspects that could exacerbate the welfare loss, such as loss of reputation.


Conclusions

We consider macro-prudential regulation and monetary policy interactions to investigate the welfare implications of different institutional arrangements. In our framework, monetary policy can re-optimize following the realization of a credit shock, but macro-prudential regulation cannot be adjusted immediately after the shock. This feature of the model captures the fact that monetary policy can be adjusted more frequently than macro-prudential regulation, because macro-prudential regulation is an ex-ante tool, whereas monetary policy can be used both ex-ante and ex-post. In this setting, a central bank with a price and financial stability mandate does not deliver the social optimum because of a time-inconsistency problem. This central bank finds it optimal ex-ante to deliver the socially optimal level of inflation, but it does not do so ex-post. This is because the central bank finds it optimal ex-post to let inflation rise to repair private balance sheets, since ex-post it has only monetary policy at its disposal. Achieving the social optimum in this case requires separating the price and financial stability objectives.

We also consider the role of political independence of institutions, as in Barro and Gordon (1983). Under this extension, separation of price and financial stability objectives delivers the social optimum only if both institutions are politically independent. If the central bank or the macro-prudential regulator (or both) are not politically independent, they will not achieve the social optimum. Numerical analysis in our model suggests, however, that in most cases a non-independent macro-prudential regulator (with an independent monetary authority) delivers a better outcome than a non-independent central bank in charge of both price and financial stability.

Wednesday, April 18, 2012

Principles for financial market infrastructures, assessment methodology and disclosure framework

CPSS Publications No 101
April 2012

Final version of the Principles for financial market infrastructures

The report Principles for financial market infrastructures contains new and more demanding international standards for payment, clearing and settlement systems, including central counterparties. Issued by the CPSS and the International Organization of Securities Commissions (IOSCO), the new standards (called "principles") are designed to ensure that the infrastructure supporting global financial markets is more robust and thus well placed to withstand financial shocks.

The principles apply to all systemically important payment systems, central securities depositories, securities settlement systems, central counterparties and trade repositories (collectively "financial market infrastructures"). They replace the three existing sets of international standards set out in the Core principles for systemically important payment systems (CPSS, 2001); the Recommendations for securities settlement systems (CPSS-IOSCO, 2001); and the Recommendations for central counterparties (CPSS-IOSCO, 2004). CPSS and IOSCO have strengthened and harmonised these three sets of standards by raising minimum requirements, providing more detailed guidance and broadening the scope of the standards to cover new risk-management areas and new types of FMIs.

The principles were issued for public consultation in March 2011. The finalised principles being issued now have been revised in light of the comments received during that consultation.

CPSS and IOSCO members will strive to adopt the new standards by the end of 2012. Financial market infrastructures (FMIs) are expected to observe the standards as soon as possible.

Consultation versions of an assessment methodology and disclosure framework

At the same time as publishing the final version of the principles, CPSS and IOSCO have issued two related documents for public consultation, namely an assessment methodology and a disclosure framework for these new principles.

Comments on these two documents are invited from all interested parties and should be sent by 15 June 2012 to both the CPSS secretariat (cpss@bis.org) and the IOSCO secretariat (fmi@iosco.org). The comments will be published on the websites of the Bank for International Settlements (BIS) and IOSCO unless commentators request otherwise. After the consultation period, the CPSS and IOSCO will review the comments received and publish final versions of the two documents later in 2012.

Other documents

A cover note that explains the background to the three documents above and sets out some specific points on the two consultation documents on which the committees are seeking comments during the public consultation period is also available.

A summary note that provides background on the report and an overview of its contents is also available.

Saturday, April 14, 2012

America's Voluntary Standards System--A "Best Practice" Model for Innovation Policy?

America's Voluntary Standards System--A "Best Practice" Model for Innovation Policy? By Dieter Ernst
East-West Center, Apr 2012
http://www.eastwestcenter.org/publications/americas-voluntary-standards-system-best-practice-model-innovation-policy

For its proponents, America's voluntary standards system is a "best practice" model for innovation policy. Foreign observers, however, are concerned about possible drawbacks of a standards system that is largely driven by the private sector. There are doubts, especially in Europe and China, whether the American system can balance public and private interests in times of extraordinary national and global challenges to innovation. To assess the merits of these conflicting perceptions, the paper reviews the historical roots of the American voluntary standards system, examines its current defining characteristics, and highlights its strengths and weaknesses. On the positive side, a tradition of decentralized local self-government has given voice to diverse stakeholders in innovation, avoiding the pitfalls of top-down government-centered standards systems. However, a lack of effective coordination of multiple stakeholder strategies tends to constrain effective and open standardization processes, especially in the management of essential patents and in the timely provision of interoperability standards. To correct these drawbacks of the American standards system, the government has an important role to play as an enabler, coordinator, and, if necessary, an enforcer of the rules of the game in order to prevent abuse of market power by companies with large accumulated patent portfolios. The paper documents the ups and downs of the Federal Government's role in standardization, and examines current efforts to establish robust public-private standards development partnerships, focusing on the Smart Grid Interoperability project coordinated by the National Institute of Standards and Technology (NIST). In short, countries that seek to improve their standards systems should study the strengths and weaknesses of the American system. However, persistent differences in economic institutions, levels of development, and growth models are bound to limit convergence to a US-style market-led voluntary standards system.

BCBS: Implementation of stress testing practices by supervisors

Implementation of stress testing practices by supervisors: Basel Committee publishes peer review
BCBS
April 13, 2012
http://www.bis.org/press/p120413.htm

The Basel Committee on Banking Supervision has today published a peer review of the implementation by national supervisory authorities of the Basel Committee's principles for sound stress testing practices and supervision.

Stress testing is an important tool used by banks to identify the potential for unexpected adverse outcomes across a range of risks and scenarios. In 2009, the Committee reviewed the performance of stress testing practices during the financial crisis and published recommendations for banks and supervisors entitled Principles for sound stress testing practices and supervision. The guidance set out a comprehensive set of principles for the sound governance, design and implementation of stress testing programmes at banks, as well as high-level expectations for the role and responsibilities of supervisors.

As part of its mandate to assess the implementation of standards across countries and to foster the promotion of good supervisory practice, the Committee's Standards Implementation Group (SIG) conducted a peer review during 2011 of supervisory authorities' implementation of the principles. The review found that stress testing has become a key component of the supervisory assessment process as well as a tool for contingency planning and communication. Countries are, however, at varying stages of maturity in the implementation of the principles; as a result, more work remains to be done to fully implement the principles in many countries.

Overall, the review found the 2009 stress testing principles to be generally effective. The Committee, however, will continue to monitor implementation of the principles and determine whether, in the future, additional guidance might be necessary.

Friday, April 13, 2012

Conference on macrofinancial linkages and their policy implications

Bank of Korea - Bank for International Settlements - International Monetary Fund: joint conference concludes on macrofinancial linkages and their policy implications
April 12, 2012
http://www.bis.org/press/p120412.pdf
 
The Bank of Korea, the Bank for International Settlements and the International Monetary Fund have today brought to a successful conclusion their joint conference on "Macrofinancial linkages: Implications for monetary and financial stability policies". Held on April 10-11 in Seoul, Korea, the event brought together central bankers, regulators and researchers to discuss a variety of topics related to interactions between the financial system and the real economy. The goal of the conference was to promote a continuing dialogue on the policy implications of recent research findings.

The conference programme included the presentation and discussion of research on the following issues:
  • Banks, shadow banks and the macroeconomy;
  • Bank liquidity regulation;
  • The macroeconomic impact of regulatory measures;
  • Macroprudential policies in theory and in practice;
  • Monetary policy and financial stability.
Efforts to recast monetary and financial stability policies to reduce the frequency and severity of financial crises have focused attention on the interactions between the financial system and the macroeconomy. The crisis demonstrated that financial system weaknesses can have sudden and long-lasting macroeconomic effects.

The conference concluded with a panel discussion chaired by Stephen Cecchetti (BIS), and including Jun Il Kim (Bank of Korea), Jan Brockmeijer (IMF), Hiroshi Nakaso (Bank of Japan), and David Fernandez (JP Morgan). The panel discussion focused on the lessons or guideposts for the formulation and implementation of macroprudential and monetary policies that can be drawn from the intensive research efforts on macrofinancial issues in recent years, as well as on the empirical evidence on the effectiveness of policy measures. The roundtable also included a discussion of weaknesses in our understanding of macrofinancial linkages and touched on priorities for future research, analysis, and continuing cooperation between central banks, regulatory authorities, international organisations and academics.

Introducing the conference, Choongsoo Kim, Governor of the Bank of Korea, said, "Since major countries' measures to reform financial regulations, including Basel III of the BCBS, focus mostly on the prevention of crisis recurrence, we need to continuously monitor and track how these measures will affect the sustainability of world economic growth in the medium- and long-term. In doing so, we should be careful so that the strengthening of financial regulation does not weaken the benign function of finance, which is to drive the growth of the real economy through seamless financial intermediation. Moreover, in today's more closely interconnected world economy, the strengthening of financial regulation with a primary focus on advanced countries does not equally affect the financial system in emerging market countries with their significantly different financial structure. Hence, in examining the implementation of regulations, an in-depth analysis should be conducted of how these regulations will affect the financial industries of emerging market countries and all other countries other than the advanced economies and their careful monitoring is called for."

Stephen Cecchetti, Economic Adviser and Head of the BIS Monetary and Economic Department, remarked that "It is important that we continue to learn about the mechanisms through which financial regulation helps to stabilize the economic and financial system. We are not only exploring the effectiveness of existing tools, but also working to fashion new ones. Doing this means refining the intellectual framework, including both the theoretical models and empirical analysis, that forms the basis for macroprudential policy and microprudential policy, as well as conventional and unconventional monetary policy. The papers presented and discussed in this conference are part of the foundation of this new and essential stability-oriented policy framework."

Jan Brockmeijer, Deputy Director of the IMF Monetary and Capital Markets Department, added that "All the institutions involved in developing macroprudential policy frameworks are on a learning curve both with regard to monitoring systemic risks and in using tools to limit such risks. In such circumstances, sharing of views and experiences is crucial to identifying best practices and moving up the learning curve quickly. The Fund is eager to help its members in this regard, and the conference co-organised by the Fund is one way to serve this purpose."

Wednesday, April 11, 2012

IMF Global Financial Stability Report: Risks of stricter prudential regulations

IMF Global Financial Stability Report
Apr 2012
http://www.imf.org/External/Pubs/FT/GFSR/2012/01/index.htm

Chapter 3 of the April 2012 Global Financial Stability Report probes the implications of recent reforms in the financial system for market perception of safe assets. Chapter 4 investigates the growing public and private costs of increased longevity risk from aging populations.

Excerpts from Ch. 3, Safe Assets: Financial System Cornerstone?:

In the future, there will be rising demand for safe assets, but fewer of them will be available, increasing the price for safety in global markets.  In principle, investors evaluate all assets based on their intrinsic characteristics. In the absence of market distortions, asset prices tend to reflect their underlying features, including safety. However, factors external to asset markets—including the required use of specific assets in prudential regulations, collateral practices, and central bank operations—may preclude markets from pricing assets efficiently, distorting the price of safety. Before the onset of the global financial crisis, regulations, macroeconomic policies, and market practices had encouraged the underpricing of safety. Some safety features are more accurately reflected now, but upcoming regulatory and market reforms and central bank crisis management strategies, combined with continued uncertainty and a shrinking supply of assets considered safe, will increase the price of safety beyond what would be the case without such distortions.

The magnitude of the rise in the price of safety is highly uncertain [...]

However, it is clear that market distortions pose increasing challenges to the ability of safe assets to fulfill all their various roles in financial markets. [...] For banks, the common application of zero percent regulatory risk weights on debt issued by their own sovereigns, irrespective of risks, created perceptions of safety detached from underlying economic risks and contributed to the buildup of demand for such securities. [...]

[...] Although regulatory reforms to make institutions safer are clearly needed, insufficient differentiation across eligible assets to satisfy some regulatory requirements could precipitate unintended cliff effects—sudden drops in prices—when some safe assets become unsafe and no longer satisfy various regulatory criteria. Moreover, the burden of mispriced safety across types of investors may be uneven. For instance, prudential requirements could lead to stronger pressures in the markets for shorter-maturity safe assets, with greater impact on investors with higher potential allocations at shorter maturities, such as banks.

Money and Collateral, by Manmohan Singh & Peter Stella

Money and Collateral, by Manmohan Singh & Peter Stella
IMF Working Paper No. 12/95
Apr 2012
http://www.imf.org/external/pubs/cat/longres.aspx?sk=25851.0

Summary: Between 1980 and the onset of the recent crisis, the ratio of financial market debt to liquid assets rose exponentially in the U.S. (and in other financial markets), reflecting in part the greater use of securitized assets to collateralize borrowing. The subsequent crisis has reduced the pool of assets considered acceptable as collateral, resulting in a liquidity shortage. When trying to address this, policy makers will need to consider concepts of liquidity besides the traditional metric of excess bank reserves and do more than merely substitute central bank money for collateral that currently remains highly liquid.

Excerpts:

Introduction

In the traditional view of a banking system, credit and money are largely counterparts to each other on different sides of the balance sheet. In the process of maturity transformation, banks are able to create liquid claims on themselves, namely money, which is the counterpart to the less liquid loans or credit. Owing to the law of large numbers, banks have, for centuries, been able to conduct this business safely with relatively few liquid reserves, as long as basic confidence in the soundness of the bank portfolio is maintained.

In recent decades, with the advent of securitization and electronic means of trading and settlement, it became possible to greatly expand the scope of assets that could be transformed directly, through their use as collateral, into highly liquid or money-like assets. The expansion in the scope of the assets that could be securitized was in part facilitated by the growth of the shadow financial system, which was largely unregulated, and by the ability to borrow from non-deposit sources. This meant deposits no longer equaled credit (Schularick and Taylor, 2008). The justification for light-touch or no regulation of this new market was that collateralization was sufficient (and of high quality) and that market forces would ensure appropriate risk-taking and dispersion among those educated investors best able to take the risks, which were often tailor-made to their demands. Where regulation fell short was in failing to recognize the growing interconnectedness of the shadow and regulated sectors, and the growing tail risk that sizable leverage entailed (Gennaioli, Shleifer and Vishny, 2011).

Post-Lehman, there has been a disintermediation process leading to a fall in the money multiplier, related to the shortage of collateral (Singh 2011). This has real effects: deleveraging is more pronounced because less collateral is available. Section II of the paper focuses on money as legal tender and the money multiplier, and then introduces the adjusted money multiplier. Section III discusses collateral, including tail-risk collateral. Section IV tries to bridge the money and collateral aspects from a "safe assets" angle. Section V introduces collateral chains and describes the economics behind the private pledged-collateral market. Section VI brings the monetary and collateral issues together under an overall financial lubrication framework. In our conclusion (Section VII) we offer a useful basis for understanding monetary policy in the current environment.
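For readers unfamiliar with the multipliers being contrasted, a back-of-the-envelope sketch may help. The formulas and every number below are illustrative assumptions, not the paper's adjusted multiplier or its data: the first function is the textbook money multiplier, and the second mimics the idea, developed in the paper's collateral-chain sections, that re-use (the "velocity" of pledged collateral) scales the financial lubrication a given stock of collateral provides.

```python
# Textbook money multiplier: m = (1 + c) / (r + c), with r the reserve ratio
# and c the currency-to-deposit ratio.

def money_multiplier(reserve_ratio, currency_ratio):
    return (1 + currency_ratio) / (reserve_ratio + currency_ratio)

# Stylized collateral counterpart: effective collateral equals the stock of
# source collateral scaled by how many times it is re-pledged along chains.

def effective_collateral(source_collateral, reuse_rate):
    return source_collateral * reuse_rate

print(money_multiplier(0.10, 0.10))    # about 5.5
print(effective_collateral(3.0, 2.5))  # 7.5 -- long chains, high re-use
print(effective_collateral(3.0, 1.5))  # 4.5 -- shorter chains after the crisis
```

A fall in the re-use rate shrinks effective collateral even when the stock of source collateral is unchanged, which is one way to read the post-Lehman fall in financial lubrication described above.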



Conclusion

“Monetary” policy is currently being undertaken in uncharted territory and may change some fundamental assumptions that link monetary and macro-financial policies. Central banks are considering whether and how to augment the apparently ‘failed’ transmission mechanism and in so doing will need to consider the role that collateral plays as financial lubrication (see also Debelle, 2012). Swaps of “good” for “bad” collateral may become part of the standard toolkit. If so, the fiscal aspects and risks associated with such policies—which are virtually nil in conventional QE swaps of central bank money for treasuries—are important and cannot be ignored. Furthermore, the issue of institutional accountability and authority to engage in such operations touches at the heart of central bank independence in a democratic society.

These fundamental questions concerning new policy tools and institutional design have arisen at the same time as developed countries have issued massive amounts of new debt.  Although the traditional bogeyman of pure seigniorage financing, that is, massive monetary purchases of government debt may have disappeared from the dark corners of central banks, this does not imply that inflation has been forever arrested. Thus a central bank may “stand firm” yet witness rises in the price level that occur to “align the market value of government debt to the value of its expected real backing.” Hence current concerns as to the potential limitations fiscal policy places on monetary policy are well founded and indeed are novel only to those unfamiliar with similar concerns raised for decades in emerging and developing countries as well as in the “mature” markets before World War II.

Thursday, April 5, 2012

IMF Background Material for its Assessment of China under the Financial Sector Assessment Program

IMF Releases Background Material for its Assessment of China under the Financial Sector Assessment Program
Press Release No. 12/123
April 5, 2012

A joint International Monetary Fund (IMF) and The World Bank assessment of China's financial system was undertaken during 2010 under the Financial Sector Assessment Program (FSAP). The Financial System Stability Assessment (FSSA) report, which is the main IMF output of the FSAP process, was discussed by the Executive Board of the IMF at the time of the annual Article IV discussion in July 2011.

The FSSA report was published on Monday, November 14, 2011. As background for the FSSA, the FSAP mission undertook comprehensive assessments of the financial regulatory infrastructure, and Detailed Assessment Reports of China's observance of international financial standards were prepared during the FSAP exercise. At the request of the Chinese authorities, these five reports are being released today.

The documents published are as follows:

Detailed Assessment of Observance Reports
  1. Observance of Basel Core Principles for Effective Banking Supervision
  2. Observance of IAIS Insurance Core Principles
  3. Observance of IOSCO Objectives and Principles of Securities Regulation
  4. Observance of CPSS Core Principles for Systemically Important Payment Systems
  5. Observance of CPSS-IOSCO Recommendations for Securities Settlement Systems and Central Counterparties

The FSAP is a comprehensive and in-depth analysis of a country’s financial sector. The FSAP findings provide inputs to the IMF’s broader surveillance of its member countries’ economies, known as Article IV consultations. The focus of the FSAP assessments is to gauge the stability of the financial sector and to assess its potential contribution to growth. To assess financial stability, an FSAP examines the soundness of the banks and other financial institutions, conducts stress tests, rates the quality of financial regulation and supervision against accepted international standards, and evaluates the ability of country authorities to intervene effectively in case of a financial crisis. Assessments in developing and emerging market countries are done by the IMF jointly with the World Bank; those in advanced economies are done by the IMF alone.

This is the first time the Chinese financial system has undergone an FSAP assessment.

Since the FSAP was launched in 1999, more than 130 countries have volunteered to undergo these examinations (many countries more than once), with another 35 or so currently underway or in the pipeline. Following the recent global financial crisis, demand for FSAP assessments has been rising, and all G-20 countries have made a commitment to undergo regular assessments.

For additional information on the program, see the Factsheet and FAQs.

Original link: http://www.imf.org/external/np/sec/pr/2012/pr12123.htm

Management Tips from the Wall Street Journal



Developing a Leadership Style

        Leadership Styles
        What do Managers do?
        Leadership in a Crisis – How To Be a Leader
        What are the Common Mistakes of New Managers?
        What is the Difference Between Management and Leadership?
        How Can Young Women Develop a Leadership Style?

Managing Your People

        How to Motivate Workers in Tough Times
        Motivating Employees
        How to Manage Different Generations
        How to Develop Future Leaders
        How to Reduce Employee Turnover
        Should I Rank My Employees?
        How to Keep Your Most Talented People
        Should I Use Email?
        How to Write Memos

Recruiting, Hiring and Firing

        Conducting Employment Interviews – Hiring How To
        How to Hire New People
        How to Make Layoffs
        What are Alternatives to Layoffs?
        How to Reduce Employee Turnover
        Should I Rank My Employees?
        How to Keep Your Most Talented People

Building a Workplace Culture

        How to Increase Workplace Diversity
        How to Create a Culture of Candor
        How to Change Your Organization’s Culture
        How to Create a Culture of Action in the Workplace

Strategy

        What is Strategy?
        How to Set Goals for Employees
        What Management Strategy Should I Use in an Economic Downturn?
        What is Blue Ocean Strategy?

Execution

        What are the Keys to Good Execution?
        How to Create a Culture of Action in the Workplace

Innovation

        How to Innovate in a Downturn
        How to Change Your Organization’s Culture
        What is Blue Ocean Strategy?

Managing Change

        How to Motivate Workers in Tough Times
        Leadership in a Crisis – How To Be a Leader
        What Management Strategy Should I Use in an Economic Downturn?
        How to Change Your Organization’s Culture

guides.wsj.com/management/

Sunday, April 1, 2012

Encouraging workers to keep track of what they're doing can make them healthier and more productive

Employees, Measure Yourselves. By H. James Wilson
Encouraging workers to keep track of what they're doing can make them healthier and more productive
The Wall Street Journal, Apr 2012
http://online.wsj.com/article/SB10001424052970204520204577249691204802060.html

Imagine how much better workers could do their jobs if they knew exactly how they spend their day.

Suppose they could get a breakdown of how much time they spend actually working on their computer, as opposed to surfing the Web. Suppose they could tell how much an afternoon workout boosts their productivity, or how much a stressful meeting raises their heart rate.

Thanks to a new wave of technologies called auto-analytics, they can do just that. These devices—from computer software and smartphone apps to gadgets that you wear—let users gather data about what they do at work, analyze that information and use it to do their job better. They give workers a fascinating window into the unseen, unconscious little things that can make such a big difference in their daily work lives. And by encouraging workers to start tracking their own activities—something many already are doing on their own—companies can end up with big improvements in job performance, satisfaction and possibly even well-being.

The key word here is encouragement. It is not the same as insistence. Bosses should be careful to stay out of workers' way, letting employees experiment at their own pace and find their own solutions. They should offer them plenty of privacy safeguards along the way. Too much managerial interference could make the programs seem like Big Brother and dissuade workers from signing on. There's a big difference between employees wanting to measure themselves, and bosses demanding it.

Here's a look at three areas of auto-analytics that are gaining followers in the workplace—and that merit encouragement from managers.



Tracking Screen Time
Many companies monitor what their employees are doing on the computer all day, by watching network traffic or even taking screenshots at random times. But all that oversight is designed to make sure people aren't slacking off; it doesn't help them figure out how to do their jobs better. And besides, a lot of workers probably think it's kind of creepy to have someone watching over their shoulder.

On the other hand, workers are a lot more comfortable with close scrutiny when they're the ones doing the watching.

People are signing on in droves to a new technology called knowledge workload tracking—recording how you use your computer. Software like RescueTime measures things like how long you spend on an open window, how long you're idle and how often you switch from one window to another. The software turns all those measurements into charts so you can see where you're spending your time. From there, you can set up automatic alerts to keep yourself away from distractions; you might send yourself a message if you, say, spend too much time on Twitter.
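The mechanics of knowledge workload tracking are simple to sketch. The following is a minimal, hypothetical illustration, not RescueTime's actual implementation: it tallies seconds per window from a log of focus events and flags any distracting site that exceeds a threshold. The event log, threshold, and flagged-site list are all made up for the example.

```python
from collections import defaultdict

# Hypothetical focus log: (timestamp in seconds, active window title).
# A final (t, None) event marks the end of the observed session.
focus_events = [
    (0, "editor"), (600, "twitter.com"), (780, "editor"),
    (2400, "email"), (2700, "twitter.com"), (3000, "editor"), (3600, None),
]

FLAGGED = {"twitter.com"}          # sites the user chose to watch
DISTRACTION_ALERT_SECONDS = 400    # alert once a flagged site exceeds this

def summarize(events):
    """Tally seconds spent in each window between consecutive focus events."""
    totals = defaultdict(int)
    for (start, title), (end, _) in zip(events, events[1:]):
        if title is not None:
            totals[title] += end - start
    return dict(totals)

totals = summarize(focus_events)
alerts = [site for site in FLAGGED
          if totals.get(site, 0) > DISTRACTION_ALERT_SECONDS]
```

From there, a real tool would chart the totals and fire the alert as a desktop notification.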

Programs like these also let you look a lot deeper into your behavior. One employee I observed saw that he got a lot more done when he switched tasks at set intervals. So he had the software remind him to change things up every 20 minutes. (He also set up an algorithm that suggested the best activity to do next.)

Another employee, a programmer, thought his online chats were eating into his work time. So he tested the theory: He looked at how long he spent chatting during certain periods, then looked at how much code he wrote during those times. But in fact, the more he talked, the more code he wrote. Gabbing online with colleagues and customers helped his work.

Managers should encourage experiments and help workers get the ball rolling. They might, for instance, find workers who got good results from the software and have them give presentations to other employees.

Again, though, companies need to use a light touch in encouraging employees: Many workers might be reluctant to track what they do if they think the company might get access to the information, or use it against them. Companies should emphasize that this type of software usually comes with lots of privacy controls. Workers can often store their data in the cloud, for instance, or locally on their machines. In some cases, they can pause tracking and delete pieces of personal data they choose. Likewise, they can also create a list of sites that they want to track by name and label all the other sites they visit as generic.



Collecting Thoughts

Tracking clicks and keystrokes is one thing. But another set of tools goes one step deeper and lets employees track their mental performance—and maybe even improve it.

These tools come in a variety of styles. For example, there's Lumosity, from Lumos Labs Inc., an online system that serves up games employees can play during downtime at work. The games promise to develop memory, thinking speed, attention and problem-solving abilities.

You might have to sort a batch of words into two piles depending on whether or not they follow a certain rule. Or you might be presented with two equations and have to figure out whether the one on the left is greater than, less than, or equal to the one on the right. The software will feed you tougher challenges once you've mastered one level of difficulty.

So far, that might not sound much different than other games you might play at the office. (Minesweeper, anyone?) The difference is tracking. The games offer a scorecard of your performance and let you follow changes in performance over time, so you can see if you're getting better or backsliding. You can also choose what skills you want to improve. If you're having trouble remembering things, for instance, you might ask for memory-boosting games. So, while it may seem like just another game, it can home in on skills you're trying to sharpen for work—and improve them.

Another set of tools promises to help with a couple of age-old problems: forgetting ideas or the context in which you thought of them (or having so many of them you can't decide which will work best for the task at hand).

The method, called cognitive mapping, powers software like TheBrain, from TheBrain Technologies LP. When you get an idea related to work, you type it into the software on your desktop or mobile device. You place it near related ideas by clicking on a visual map that shows clusters of concepts grouped together by category like constellations on your screen.

Let's say your job is designing products for a household-goods company, and you get an idea about a new kind of sponge. You might click on the cluster of ideas for kitchen-cleaning products, which covers mops and paper towels as well as sponges. Then you'd click on the smaller cluster of ideas about sponges and type in your new notion. You'd also be able to attach things like links to websites, photos and meeting notes.

Later on, if you need to come up with some ideas in a particular area, you might type in a few search terms to see the thoughts you've had on the topic and the clusters of ideas and information you originally associated with those terms. Thus, you not only have a historical record of your thoughts, but also detailed insight into the context in which they were created.
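The cluster-and-search workflow described above can be sketched with a plain data structure. This is an illustrative stand-in, not TheBrain's actual model: each idea is stored with its cluster label and optional attachments, and a lookup returns every idea whose text or cluster mentions the search term.

```python
# Minimal idea map: a list of nodes, each with text, a cluster, attachments.
idea_map = []

def add_idea(text, cluster, attachments=None):
    """Record an idea under a cluster, with optional links/notes attached."""
    idea_map.append({
        "text": text,
        "cluster": cluster,
        "attachments": attachments or [],
    })

def search(term):
    """Return ideas whose text or cluster mentions the term (case-insensitive)."""
    term = term.lower()
    return [n for n in idea_map
            if term in n["text"].lower() or term in n["cluster"].lower()]

# The household-goods example from the text, with invented details:
add_idea("dual-texture sponge", "kitchen-cleaning/sponges",
         attachments=["meeting-notes.txt"])
add_idea("quick-dry mop head", "kitchen-cleaning/mops")

hits = search("sponge")  # recovers the idea plus its cluster and attachments
```

A real cognitive-mapping tool adds the visual constellation view on top, but the record-and-retrieve core is no more than this.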

As with knowledge workload tracking, employers should encourage workers to use these systems and give them freedom to experiment. But companies can probably be more active in pushing these products, since they don't have the same Big Brother associations as tracking work. So managers might buy subscriptions for influential employees who can help seed interest across the company. If they think it's warranted, managers might even buy companywide subscriptions, as they do for other types of software.




The Physical Side

There's one area where employers are already doing a lot to encourage workers to track themselves: company-sponsored wellness programs. More than two-thirds of companies around the world run wellness programs, and self-tracking tools are fast becoming a common feature.

Usually, the third-party companies that manage the programs give workers tracking devices that can synch up with an external database through a smartphone or work computer. That way, employees can crunch their own data and come up with options for improving health and job performance.

For instance, you might wear a device like Jawbone's UP wristband, which tracks sleep quantity and quality. You could then analyze your data to see how different amounts of sleep affect your work. Do you close more sales on days when you get more quality sleep? Or do you post better numbers when you sacrifice some shut-eye to entertain clients until all hours?
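Answering that question takes little more than lining up the two columns and checking their correlation. A hedged sketch: the sleep-versus-sales question is the article's, but the numbers below are invented for illustration.

```python
import statistics

# Hypothetical export: hours of quality sleep vs. sales closed the next day.
sleep_hours = [5.0, 6.5, 7.0, 8.0, 6.0, 7.5]
sales_closed = [1, 2, 3, 4, 2, 3]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(sleep_hours, sales_closed)  # near +1: more sleep, more sales
```

A coefficient near +1 would favor the quality-sleep theory; one near -1 would favor entertaining clients until all hours.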

Another approach is tracking how your body works over the course of a workday with a tool such as the emWave2, from HeartMath LLC, which monitors your pulse. You can then look at your stats on a desktop dashboard to see, for instance, what sorts of situations cause you the most stress. The program can then recommend ways to reduce anxiety, such as breathing techniques that can help you reduce your heart rate during a big presentation.

Tracking things at this intimate level might set off all sorts of alarm bells for workers. Many might wonder if an employer could get hold of the information and use it against them. So bosses should ensure that workers have the chance to encrypt or otherwise protect their data.


Mr. Wilson is senior researcher at Babson Executive Education

Thursday, March 29, 2012

Revisiting Risk-Weighted Assets

Revisiting Risk-Weighted Assets. By Vanessa Le Leslé & Sofiya Avramova
IMF Working Paper No. 12/90
Mar 2012
http://www.imf.org/external/pubs/cat/longres.aspx?sk=25807.0

Summary: In this paper, the authors provide an overview of the concerns surrounding the variations in the calculation of risk-weighted assets (RWAs) across banks and jurisdictions and how this might undermine the Basel III capital adequacy framework. They discuss the key drivers behind the differences in these calculations, drawing upon a sample of systemically important banks from Europe, North America, and Asia Pacific. Then, the authors discuss a range of policy options that could be explored to fix the actual and perceived problems with RWAs, and improve the use of risk-sensitive capital ratios.


Introduction

Strengthening capital ratios is a key priority in the aftermath of the global financial crisis. Increasing the quantity, quality, and transparency of capital is of paramount importance to restore the banking sector to health. Recent regulatory reforms have primarily focused on improving the numerator of capital ratios, while changes to the denominator, i.e., risk-weighted assets (RWAs), have been more limited.

Why look at RWAs now? Confidence in reported RWAs is ebbing. Market participants question the reliability and comparability of capital ratios, and contend that banks may not be as strong as they are portrayed by risk-based capital ratios. The Basel Committee recently announced it will review the measurement of RWAs and formulate policy responses to foster greater consistency across banks and jurisdictions.

The academic literature on capital is vast, but the focus on RWAs is more limited. Current studies mostly emanate from market participants, who highlight the wide variations existing in RWAs across banks. There is no convergence in views about the materiality and relative importance of these differences, and thus no consensus on policy implications.

This paper aims to shed light on the scale of the RWA variation issue and identify possible policy responses. The paper (i) discusses the importance of RWAs in the regulatory capital framework; (ii) highlights the main concerns and the controversy surrounding RWA calculations; (iii) identifies key drivers behind the differences in RWA calculations across jurisdictions and business models; and (iv) concludes with a discussion on the range of options that could be considered to restore confidence in banks’ RWA numbers.

A comprehensive analysis of broader questions, such as what is the best way to measure risk or predict losses, and what is the optimal amount of capital that banks should hold per unit of risk, is beyond the scope of this study. A comparison of the respective merits of the leverage and risk-based capital ratios is also outside our discussion.


Conclusion

Perceived differences in RWAs within and across countries have diminished trust in the reliability of RWAs and capital ratios and, if not addressed, could affect the credibility of the regulatory framework in general. This paper is a first step towards shedding light on the extent and causes of RWA variability and fostering policy debate.

The paper seeks to disentangle key factors behind observed differences in RWAs, but does not quantify how much of the RWA variance can be explained by each factor. It concludes that a host of factors drive differences in RWA outputs between firms within a region and indeed across regions; many of these factors can be justified, but some less so. Differences in RWAs are not only the result of banks’ business model, risk profile, and RWA methodology (good or bad), but also the result of different supervisory practices. Aiming for full harmonization and convergence of RWA practices may not be achievable, and we would expect some differences to remain. It may be more constructive to focus on improving the transparency and understanding of outputs, and on providing common guidance on methodologies, for banks and supervisors alike.

The paper identifies a range of policy options to address the RWA issue, and contends that a multipronged approach seems the most effective path of reform. A combination of regulatory changes to the RWA regime, enhanced supervision, increased market disclosure, and more robust internal risk management may help restore confidence in RWAs and safeguard the integrity of the capital framework. Finally, the paper contends that even if RWAs are not perfect, retaining risk-sensitive capital ratios is still very important, and the latter can be backstopped by using them in tandem with unweighted capital measures.

This paper aims to encourage discussion and policy suggestions, while the Basel Committee undertakes a more extensive review of the RWA framework.

Accounting Devices and Fiscal Illusions

Accounting Devices and Fiscal Illusions. By Timothy C. Irwin
IMF Staff Discussion Note SDN/12/02
March 28, 2012
http://www.imf.org/external/pubs/cat/longres.aspx?sk=25795.0
ISBN/ISSN: 978-1-61635-386-5 / 2221-030X

A government seeking to reduce its deficit can be tempted to replace genuine spending cuts or tax increases with accounting devices that give the illusion of change without its substance, or that make the change appear larger than it actually is. Under ideal accounting standards, this would not be possible, but in real accounting it sometimes is. For example, governments can sometimes sell assets or borrow money and count the proceeds as revenue, or defer unavoidable spending without recognizing a liability. In each case, this year’s reported deficit is reduced, but only at the expense of future deficits. The result is that the reported deficit loses some of its accuracy as a fiscal indicator.

The use of accounting stratagems cannot be eliminated, but several things can be done to reduce their use or at least bring them quickly to light. Governments can be encouraged to prepare audited financial statements—income statement, cash-flow statement, and balance sheet—according to international accounting standards, and statisticians, who in many countries use accounting data to compile the most important (“headline”) fiscal indicators, can be given the resources and independence to be both expert and impartial, as well as the authority to revise standards in the light of emerging problems. To help reveal remaining problems in headline fiscal indicators, a variety of alternative fiscal indicators can be monitored, since a problem suppressed in one fiscal indicator is likely to show up in another.  Many of the devices documented in this note would be revealed if governments also reported change in net worth and high-quality long-term forecasts of the headline indicator of the deficit under current policy. 

Wednesday, March 14, 2012

Could leveraging Public Credit Registries’ information improve supervision and regulation of financial systems?

Could leveraging Public Credit Registries’ information improve supervision and regulation of financial systems? By Jane Hwang
World Bank blogs, Mar 13, 2012

http://blogs.worldbank.org/allaboutfinance/could-leveraging-public-credit-registries-information-improve-supervision-and-regulation-of-financia

Monday, March 12, 2012

Capital Regulation, Liquidity Requirements and Taxation in a Dynamic Model of Banking

Capital Regulation, Liquidity Requirements and Taxation in a Dynamic Model of Banking. By Gianni De Nicolo, Andrea Gamba and Marcella Lucchetta
IMF Working Paper No. 12/72
March 01, 2012
http://www.imf.org/external/pubs/cat/longres.aspx?sk=25767.0

This paper studies the impact of bank regulation and taxation in a dynamic model with banks exposed to credit and liquidity risk. We find an inverted U-shaped relationship between capital requirements and bank lending, efficiency, and welfare, with their benefits turning into costs beyond a certain requirement threshold. By contrast, liquidity requirements reduce lending, efficiency and welfare significantly. The costs of high capital and liquidity requirements represent a lower bound on the benefits of these regulations in abating systemic risks. On taxation, corporate income taxes generate higher government revenues and entail lower efficiency and welfare costs than taxes on non-deposit liabilities.

Excerpts:
Introduction

The 2007-2008 financial crisis has been a catalyst for significant bank regulation reforms, as the pre-crisis regulatory framework has been judged inadequate to cope with large financial shocks. The new Basel III framework envisions an increase in bank capital requirements and the introduction of new liquidity requirements, while several proposals have recently been advanced to use forms of taxation with the twin objectives of raising funding to pay for resolution costs in stressed times and controlling bank risk-taking behavior. To date, however, the relatively large literature on bank regulation offers no formal analysis in which a joint assessment of these policies can be made in a dynamic model of banking where banks play a role and are exposed to multiple sources of risk. The formulation of such a dynamic banking model is the main contribution of this paper.

Our model is novel in three important dimensions. First, we analyze a bank that dynamically transforms short term liabilities into longer-term partially illiquid assets whose returns are uncertain. This feature is consistent with banks' special role in liquidity transformation emphasized in the literature (see e.g. Diamond and Dybvig (1983) and Allen and Gale (2007)).

Second, we model the bank's financial distress explicitly. This allows us to examine the bank's optimal choices on whether, when, and how to continue operations in the face of financial distress. The bank in our model invests in risky loans and risk-less bonds financed by (random) government-insured deposits and short-term debt. Financial distress occurs when the bank is unable to honor part or all of its debt and tax obligations for given realizations of credit and liquidity shocks. The bank has the option to resolve distress in three costly forms: by liquidating assets at a cost, by issuing fully collateralized bonds, or by issuing equity. The liquidation costs of assets are interpreted as fire-sale costs and are modeled by introducing asymmetric costs of adjustment of the bank's risky asset portfolio. The importance of fire-sale costs in amplifying banks' financial distress has been brought to the fore in the recent crisis (see e.g. Acharya, Shin, and Yorulmazer (2010) and Hanson, Kashyap, and Stein (2011)).

Third, we evaluate the impact of bank regulations and taxation not only on the bank's optimal policies, but also in terms of metrics of bank efficiency and welfare. The first metric is the enterprise value of the bank, which can be interpreted as the efficiency with which the bank carries out its maturity transformation function. The second, called "social value", proxies welfare in our risk-neutral world, as it summarizes the total expected value of bank activities to all bank stakeholders and the government. To our knowledge, this is the first study that evaluates the joint welfare implications of bank regulation and taxation.

Our benchmark bank is unregulated, but its deposits are fully insured. We consider this bank the appropriate benchmark, since one of the asserted roles of bank regulation is the abatement of the excessive bank risk-taking arising from moral hazard under partial or total insurance of its liabilities. We use a standard calibration of the parameters of the model (with regulatory and tax parameters mimicking current capital regulation, liquidity requirements, and tax proposals) to solve for the optimal policies and the metrics of efficiency and welfare.

We obtain three sets of results. First, if capital requirements are mild, a bank subject only to capital regulation invests more in lending and its probability of default is lower than its unregulated counterpart. This additional lending is financed by higher levels of retained earnings or equity issuance. Importantly, under mild capital regulation bank efficiency and social values are higher than under no regulation, and their benefits are larger the higher are fire sale costs. However, if capital requirements become too stringent, then the efficiency and welfare benefits of capital regulation disappear and turn into costs, even though default risk remains subdued: lending declines, and the metrics of bank efficiency and social value drop below those of the unregulated bank. Thus, there exists an inverted-U-shaped relationship between bank lending, efficiency, welfare and the stringency of capital requirements. These novel findings suggest the existence of an optimal level of bank-specific regulatory capital under deposit insurance.

Second, the introduction of liquidity requirements reduces bank lending, efficiency, and social value significantly, since these requirements hamper bank maturity transformation. In addition, the reduction in lending, efficiency, and social value increases monotonically with their stringency. When liquidity requirements are added to capital requirements, they also eliminate the benefits of mild capital requirements, since bank lending, efficiency, and social value are reduced relative to the bank subject to capital regulation only. We should stress that these results should not necessarily be interpreted as an indictment of liquidity requirements. If liquidity requirements were found to be optimal regulations to correct negative externalities arising from banks' excessive reliance on short-term debt (which we do not model), then our results indicate how large the costs associated with these externalities would have to be to rationalize the need for liquidity requirements.

On taxation, an increase in corporate income taxes reduces lending, bank efficiency, and social value due to standard negative income effects. However, tax receipts increase, generating higher government revenues. With the introduction of a tax on non-deposit liabilities, which in our model is short-term debt, the decline in bank lending, efficiency, and social value is larger than under an increase in corporate taxation, while the increase in government tax receipts is lower. Therefore, in our model corporate taxation is preferable to a tax on non-deposit liabilities, although both forms of taxation reduce lending, efficiency, and social value.


Conclusions

This paper has formulated a dynamic model of a bank exposed to credit and liquidity risk that can face financial distress by reducing loans, issuing secured debt, or issuing equity at a cost. We evaluated the joint impact of capital regulation, liquidity requirements, and taxation on the bank's optimal policies and on metrics of bank efficiency and welfare.

We have uncovered an important inverted U-shaped relationship between bank lending, bank efficiency, social value, and regulatory capital ratios. This result suggests the existence of optimal levels of regulatory capital, which are likely to be highly bank-specific, depending crucially on the configuration of risks a bank is exposed to as a function of its chosen business strategies. Similarly, our results on the high costs of liquidity requirements point to the adverse consequences of repressing the key maturity transformation role of bank intermediation. Given our finding of the adverse effects of liquidity requirements, the argument by Admati, DeMarzo, Hellwig, and Pfleiderer (2011) that capital requirements can be designed to substitute for liquidity requirements is reinforced. Finally, for the purpose of raising tax revenues, corporate income taxation seems preferable to taxation of non-deposit liabilities, since the former generates higher revenues at lower efficiency and welfare costs.

Overall, our results suggest that implementing non-trivial increases in capital requirements, liquidity requirements, and taxation may be associated with costs significantly larger than proponents of these policies may have thought. This implies that the benefits of these requirements in terms of their ability to abate systemic risk should at least offset the costs we have identified.

Sunday, March 11, 2012

How To Be Creative

How To Be Creative. By Jonah Lehrer
The image of the 'creative type' is a myth. Jonah Lehrer on why anyone can innovate—and why a hot shower, a cold beer or a trip to your colleague's desk might be the key to your next big idea.
The Wall Street Journal, Mar 10, 2012, on page C1

http://online.wsj.com/article/SB10001424052970203370604577265632205015846.html
 
Creativity can seem like magic. We look at people like Steve Jobs and Bob Dylan, and we conclude that they must possess supernatural powers denied to mere mortals like us, gifts that allow them to imagine what has never existed before. They're "creative types." We're not.

But creativity is not magic, and there's no such thing as a creative type. Creativity is not a trait that we inherit in our genes or a blessing bestowed by the angels. It's a skill. Anyone can learn to be creative and to get better at it. New research is shedding light on what allows people to develop world-changing products and to solve the toughest problems. A surprisingly concrete set of lessons has emerged about what creativity is and how to spark it in ourselves and our work.

The science of creativity is relatively new. Until the Enlightenment, acts of imagination were always equated with higher powers. Being creative meant channeling the muses, giving voice to the gods. ("Inspiration" literally means "breathed upon.") Even in modern times, scientists have paid little attention to the sources of creativity.

But over the past decade, that has begun to change. Imagination was once thought to be a single thing, separate from other kinds of cognition. The latest research suggests that this assumption is false. It turns out that we use "creativity" as a catchall term for a variety of cognitive tools, each of which applies to particular sorts of problems and is coaxed to action in a particular way.

Does the challenge that we're facing require a moment of insight, a sudden leap in consciousness? Or can it be solved gradually, one piece at a time? The answer often determines whether we should drink a beer to relax or hop ourselves up on Red Bull, whether we take a long shower or stay late at the office.

The new research also suggests how best to approach the thorniest problems. We tend to assume that experts are the creative geniuses in their own fields. But big breakthroughs often depend on the naive daring of outsiders. For prompting creativity, few things are as important as time devoted to cross-pollination with fields outside our areas of expertise.

Let's start with the hardest problems, those challenges that at first blush seem impossible. Such problems are typically solved (if they are solved at all) in a moment of insight.

Consider the case of Arthur Fry, an engineer in 3M's paper products division. In the winter of 1974, Mr. Fry attended a presentation by Spencer Silver, an engineer working on adhesives. Mr. Silver had developed an extremely weak glue, a paste so feeble it could barely hold two pieces of paper together. Like everyone else in the room, Mr. Fry patiently listened to the presentation and then failed to come up with any practical applications for the compound. What good, after all, is a glue that doesn't stick?

On a frigid Sunday morning, however, the paste would re-enter Mr. Fry's thoughts, albeit in a rather unlikely context. He sang in the church choir and liked to put little pieces of paper in the hymnal to mark the songs he was supposed to sing. Unfortunately, the little pieces of paper often fell out, forcing Mr. Fry to spend the service frantically thumbing through the book, looking for the right page. It seemed like an unfixable problem, one of those ordinary hassles that we're forced to live with.

But then, during a particularly tedious sermon, Mr. Fry had an epiphany. He suddenly realized how he might make use of that weak glue: It could be applied to paper to create a reusable bookmark! Because the adhesive was barely sticky, it would adhere to the page but wouldn't tear it when removed. That revelation in the church would eventually result in one of the most widely used office products in the world: the Post-it Note.

Mr. Fry's invention was a classic moment of insight. Though such events seem to spring from nowhere, as if the cortex is surprising us with a breakthrough, scientists have begun studying how they occur. They do this by giving people "insight" puzzles, like the one that follows, and watching what happens in the brain:

   A man has married 20 women in a small town. All of the women are still alive, and none of them is divorced. The man has broken no laws. Who is the man?

If you solved the question, the solution probably came to you in an incandescent flash: The man is a priest. Research led by Mark Beeman and John Kounios has identified where that flash probably came from. In the seconds before the insight appears, a brain area called the anterior superior temporal gyrus (aSTG) exhibits a sharp spike in activity. This region, located on the surface of the right hemisphere, excels at drawing together distantly related information, which is precisely what's needed when working on a hard creative problem.

Interestingly, Mr. Beeman and his colleagues have found that certain factors make people much more likely to have an insight, better able to detect the answers generated by the aSTG. For instance, exposing subjects to a short, humorous video—the scientists use a clip of Robin Williams doing stand-up—boosts the average success rate by about 20%.

Alcohol also works. Earlier this year, researchers at the University of Illinois at Chicago compared performance on insight puzzles between sober and intoxicated students. The scientists gave the subjects a battery of word problems known as remote associates, in which people have to find one additional word that goes with a triad of words. Here's a sample problem:

   Pine Crab Sauce

In this case, the answer is "apple." (The compound words are pineapple, crab apple and apple sauce.) Drunk students solved nearly 30% more of these word problems than their sober peers.

What explains the creative benefits of relaxation and booze? The answer involves the surprising advantage of not paying attention. Although we live in an age that worships focus—we are always forcing ourselves to concentrate, chugging caffeine—this approach can inhibit the imagination. We might be focused, but we're probably focused on the wrong answer.

And this is why relaxation helps: It isn't until we're soothed in the shower or distracted by the stand-up comic that we're able to turn the spotlight of attention inward, eavesdropping on all those random associations unfolding in the far reaches of the brain's right hemisphere. When we need an insight, those associations are often the source of the answer.

This research also explains why so many major breakthroughs happen in the unlikeliest of places, whether it's Archimedes in the bathtub or the physicist Richard Feynman scribbling equations in a strip club, as he was known to do. It reveals the wisdom of Google putting ping-pong tables in the lobby and confirms the practical benefits of daydreaming. As a remark often attributed to Einstein puts it, "Creativity is the residue of time wasted."

Of course, not every creative challenge requires an epiphany; a relaxing shower won't solve every problem. Sometimes, we just need to keep on working, resisting the temptation of a beer-fueled nap.

There is nothing fun about this kind of creativity, which consists mostly of sweat and failure. It's the red pen on the page and the discarded sketch, the trashed prototype and the failed first draft. Nietzsche referred to this as the "rejecting process," noting that while creators like to brag about their big epiphanies, their everyday reality was much less romantic. "All great artists and thinkers are great workers," he wrote.

This relentless form of creativity is nicely exemplified by the legendary graphic designer Milton Glaser, who engraved the slogan "Art is Work" above his office door. Mr. Glaser's most famous design is a tribute to this work ethic. In 1977, he accepted an intimidating assignment: to create a new ad campaign that would rehabilitate the image of New York City, which at the time was falling apart.

Mr. Glaser began by experimenting with fonts, laying out the tourist slogan in a variety of friendly typefaces. After a few weeks of work, he settled on a charming design, with "I Love New York" in cursive, set against a plain white background. His proposal was quickly approved. "Everybody liked it," Mr. Glaser says. "And if I were a normal person, I'd stop thinking about the project. But I can't. Something about it just doesn't feel right."

So Mr. Glaser continued to ruminate on the design, devoting hours to a project that was supposedly finished. And then, after another few days of work, he was sitting in a taxi, stuck in midtown traffic. "I often carry spare pieces of paper in my pocket, and so I get the paper out and I start to draw," he remembers. "And I'm thinking and drawing and then I get it. I see the whole design in my head. I see the typeface and the big round red heart smack dab in the middle. I know that this is how it should go."

The logo that Mr. Glaser imagined in traffic has since become one of the most widely imitated works of graphic art in the world. And he only discovered the design because he refused to stop thinking about it.

But this raises an obvious question: If different kinds of creative problems benefit from different kinds of creative thinking, how can we ensure that we're thinking in the right way at the right time? When should we daydream and go for a relaxing stroll, and when should we keep on sketching and toying with possibilities?

The good news is that the human mind has a surprising natural ability to assess the kind of creativity we need. Researchers call these intuitions "feelings of knowing," and they occur when we suspect that we can find the answer, if only we keep on thinking. Numerous studies have demonstrated that, when it comes to problems that don't require insights, the mind is remarkably adept at assessing the likelihood that a problem can be solved—knowing whether we're getting "warmer" or not, without knowing the solution.

This ability to calculate progress is an important part of the creative process. When we don't feel that we're getting closer to the answer—we've hit the wall, so to speak—we probably need an insight. If there is no feeling of knowing, the most productive thing we can do is forget about work for a while. But when those feelings of knowing are telling us that we're getting close, we need to keep on struggling.

Of course, both moment-of-insight problems and nose-to-the-grindstone problems assume that we have the answers to the creative problems we're trying to solve somewhere in our heads. They're both just a matter of getting those answers out. Another kind of creative problem, though, is when you don't have the right kind of raw material kicking around in your head. If you're trying to be more creative, one of the most important things you can do is increase the volume and diversity of the information to which you are exposed.

Steve Jobs famously declared that "creativity is just connecting things." Although we think of inventors as dreaming up breakthroughs out of thin air, Mr. Jobs was pointing out that even the most far-fetched concepts are usually just new combinations of stuff that already exists. Under Mr. Jobs's leadership, for instance, Apple didn't invent MP3 players or tablet computers—the company just made them better, adding design features that were new to the product category.

And it isn't just Apple. The history of innovation bears out Mr. Jobs's theory. The Wright Brothers transferred their background as bicycle manufacturers to the invention of the airplane; their first flying craft was, in many respects, just a bicycle with wings. Johannes Gutenberg transformed his knowledge of wine presses into a printing machine capable of mass-producing words. Or look at Google: Larry Page and Sergey Brin came up with their famous search algorithm by applying the ranking method used for academic articles (more citations equals more influence) to the sprawl of the Internet.

How can people get better at making these kinds of connections? Mr. Jobs argued that the best inventors seek out "diverse experiences," collecting lots of dots that they later link together. Instead of developing a narrow specialization, they study, say, calligraphy (as Mr. Jobs famously did) or hang out with friends in different fields. Because they don't know where the answer will come from, they are willing to look for the answer everywhere.

Recent research confirms Mr. Jobs's wisdom. The sociologist Martin Ruef, for instance, analyzed the social and business relationships of 766 graduates of the Stanford Business School, all of whom had gone on to start their own companies. He found that those entrepreneurs with the most diverse friendships scored three times higher on a metric of innovation. Instead of getting stuck in the rut of conformity, they were able to translate their expansive social circle into profitable new concepts.

Many of the most innovative companies encourage their employees to develop these sorts of diverse networks, interacting with colleagues in totally unrelated fields. Google hosts an internal conference called Crazy Search Ideas—a sort of grown-up science fair with hundreds of posters from every conceivable field. At 3M, engineers are typically rotated to a new division every few years. Sometimes, these rotations bring big payoffs, such as when 3M realized that the problem of laptop battery life was really a problem of energy used up too quickly for illuminating the screen. 3M researchers applied their knowledge of see-through adhesives to create an optical film that focuses light outward, producing a screen that was 40% more efficient.

Such solutions are known as "mental restructurings," since the problem is only solved after someone asks a completely new kind of question. What's interesting is that expertise can inhibit such restructurings, making it harder to find the breakthrough. That's why it's important not just to bring new ideas back to your own field, but to actually try to solve problems in other fields—where your status as an outsider, and ability to ask naive questions, can be a tremendous advantage.

This principle is at work daily on InnoCentive, a crowdsourcing website for difficult scientific questions. The structure of the site is simple: Companies post their hardest R&D problems, attaching a monetary reward to each "challenge." The site features problems from hundreds of organizations in eight different scientific categories, from agricultural science to mathematics. The challenges on the site are incredibly varied and include everything from a multinational food company looking for a "Reduced Fat Chocolate-Flavored Compound Coating" to an electronics firm trying to design a solar-powered computer.

The most impressive thing about InnoCentive, however, is its effectiveness. In 2007, Karim Lakhani, a professor at the Harvard Business School, began analyzing hundreds of challenges posted on the site. According to Mr. Lakhani's data, nearly 30% of the difficult problems posted on InnoCentive were solved within six months. Sometimes, the problems were solved within days of being posted online. The secret was outsider thinking: The problem solvers on InnoCentive were most effective at the margins of their own fields. Chemists didn't solve chemistry problems; they solved molecular biology problems. And vice versa. While these people were close enough to understand the challenge, they weren't so close that their knowledge held them back, causing them to run into the same stumbling blocks that held back their more expert peers.

It's this ability to attack problems as a beginner, to let go of all preconceptions and fear of failure, that's the key to creativity.

The composer Bruce Adolphe first met Yo-Yo Ma at the Juilliard School in New York City in 1970. Mr. Ma was just 15 years old at the time (though he'd already played for J.F.K. at the White House). Mr. Adolphe had just written his first cello piece. "Unfortunately, I had no idea what I was doing," Mr. Adolphe remembers. "I'd never written for the instrument before."

Mr. Adolphe had shown a draft of his composition to a Juilliard instructor, who informed him that the piece featured a chord that was impossible to play. Before Mr. Adolphe could correct the music, however, Mr. Ma decided to rehearse the composition in his dorm room. "Yo-Yo played through my piece, sight-reading the whole thing," Mr. Adolphe says. "And when that impossible chord came, he somehow found a way to play it."

Mr. Adolphe told Mr. Ma what the professor had said and asked how he had managed to play the impossible chord. They went through the piece again, and when Mr. Ma came to the impossible chord, Mr. Adolphe yelled "Stop!" They looked at Mr. Ma's left hand—it was contorted on the fingerboard, in a position that was nearly impossible to hold. "You're right," said Mr. Ma, "you really can't play that!" Yet, somehow, he did.

When Mr. Ma plays today, he still strives for that state of the beginner. "One needs to constantly remind oneself to play with the abandon of the child who is just learning the cello," Mr. Ma says. "Because why is that kid playing? He is playing for pleasure."

Creativity is a spark. It can be excruciating when we're rubbing two rocks together and getting nothing. And it can be intensely satisfying when the flame catches and a new idea sweeps around the world.

For the first time in human history, it's becoming possible to see how to throw off more sparks and how to make sure that more of them catch fire. And yet, we must also be honest: The creative process will never be easy, no matter how much we learn about it. Our inventions will always be shadowed by uncertainty, by the serendipity of brain cells making a new connection.

Every creative story is different. And yet every creative story is the same: There was nothing, now there is something. It's almost like magic.

—Adapted from "Imagine: How Creativity Works" by Jonah Lehrer, to be published by Houghton Mifflin Harcourt on March 19. Copyright © 2012 by Jonah Lehrer.

---
10 Quick Creativity Hacks

1. Color Me Blue

A 2009 study found that subjects solved twice as many insight puzzles when surrounded by the color blue, since it leads to more relaxed and associative thinking. Red, on the other hand, makes people more alert and aware, so it is a better backdrop for solving analytic problems.

2. Get Groggy

According to a study published last month, people tested at their least alert time of day (think of a night person early in the morning) performed far better on various creative puzzles, sometimes improving their success rate by 50%. Grogginess has creative perks.

3. Daydream Away

Research led by Jonathan Schooler at the University of California, Santa Barbara, has found that people who daydream more score higher on various tests of creativity.

4. Think Like A Child

When subjects are told to imagine themselves as 7-year-olds, they score significantly higher on tests of divergent thinking, such as trying to invent alternative uses for an old car tire.

5. Laugh It Up

When people are exposed to a short video of stand-up comedy, they solve about 20% more insight puzzles.

6. Imagine That You Are Far Away

Research conducted at Indiana University found that people were much better at solving insight puzzles when they were told that the puzzles came from Greece or California, and not from a local lab.

7. Keep It Generic

One way to increase problem-solving ability is to change the verbs used to describe the problem. When the verbs are extremely specific, people think in narrow terms. In contrast, the use of more generic verbs—say, "moving" instead of "driving"—can lead to dramatic increases in the number of problems solved.

8. Work Outside the Box

According to a new study, volunteers performed significantly better on a standard test of creativity when they were seated outside a 5-foot-square workspace, perhaps because they internalized the metaphor of thinking outside the box. The lesson? Your cubicle is holding you back.
 
9. See the World

According to research led by Adam Galinsky, students who have lived abroad were much more likely to solve a classic insight puzzle. Their experience of another culture endowed them with a valuable open-mindedness. This effect also applies to professionals: Fashion-house directors who have lived in many countries produce clothing that their peers rate as far more creative.
 
10. Move to a Metropolis

Physicists at the Santa Fe Institute have found that moving from a small city to one that is twice as large leads inventors to produce, on average, about 15% more patents.

—Jonah Lehrer

Tuesday, February 28, 2012

Systemic Real and Financial Risks: Measurement, Forecasting, and Stress Testing

Systemic Real and Financial Risks: Measurement, Forecasting, and Stress Testing. By Gianni de Nicolo & Marcella Lucchetta
IMF Working Paper No. 12/58
Feb 2012
http://www.imf.org/external/pubs/cat/longres.aspx?sk=25745.0

Summary: This paper formulates a novel modeling framework that delivers: (a) forecasts of indicators of systemic real risk and systemic financial risk based on density forecasts of indicators of real activity and financial health; (b) stress-tests as measures of the dynamics of responses of systemic risk indicators to structural shocks identified by standard macroeconomic and banking theory. Using a large number of quarterly time series of the G-7 economies in 1980Q1-2010Q2, we show that the model exhibits significant out-of-sample forecasting power for tail real and financial risk realizations, and that stress testing provides useful early warnings on the build-up of real and financial vulnerabilities.

Excerpts

Introduction

The 2007-2009 financial crisis has spurred renewed efforts in systemic risk modeling. Bisias et al. (2012) provide an extensive survey of the models currently available to measure and track indicators of systemic financial risk. However, three limitations of current modeling emerge from this survey. First, almost all proposed measures focus on (segments of) the financial sector, with developments in the real economy either absent, or just part of the conditioning variables embedded in financial risk measures. Second, there is yet no systematic assessment of the out-of-sample forecasting power of the measures proposed, which makes it difficult to gauge their usefulness as early warning tools. Third, stress testing procedures are in most cases sensitivity analyses, with no structural identification of the assumed shocks.

Building on our previous effort (De Nicolò and Lucchetta, 2011), this paper contributes to overcoming these limitations by developing a novel, tractable model that can be used as a real-time systemic risk monitoring system. Our model combines dynamic factor VARs and quantile regression techniques to construct forecasts of systemic risk indicators based on density forecasts, and employs stress testing to measure the sensitivity of the responses of systemic risk indicators to configurations of structural shocks.

This model can be viewed as a complementary tool to applications of DSGE models for risk monitoring analysis. As detailed in Schorfheide (2010), work on DSGE modeling is advancing significantly, but several challenges to the use of these models for risk monitoring purposes remain. In this regard, the development of DSGE models is still in its infancy in at least two dimensions: the incorporation of financial intermediation and forecasting. In their insightful review of recent progress in the development of DSGE models with financial intermediation, Gertler and Kiyotaki (2010) outline important research directions still unexplored, such as the linkages between disruptions of financial intermediation and real activity. Moreover, as noted in Herbst and Schorfheide (2010), there is still a lack of conclusive evidence on the superiority of the forecasting performance of DSGE models relative to sophisticated data-driven models. In addition, these models do not typically focus on tail risks. Thus, available modeling technologies providing systemic risk monitoring tools based on explicit linkages between the financial and real sectors are still underdeveloped. Helping to fill this void is a key objective of this paper.

Three features characterize our model. First, we make a distinction between systemic real risk and systemic financial risk, based on the notion that real effects with potential adverse welfare consequences are what ultimately concern policymakers, consistent with the definition of systemic risk introduced in Group of Ten (2001). Distinguishing systemic financial risk from systemic real risk also allows us to assess the extent to which a realization of a financial (real) shock merely amplifies a shock in the real (financial) sector or originates in the financial (real) sector. Second, the model produces real-time density forecasts of indicators of real activity and financial health, and uses them to construct forecasts of indicators of systemic real and financial risks. To obtain these forecasts, we use a dynamic factor model (DFM) with many predictors combined with quantile regression techniques. The choice of the DFM with many predictors is motivated by its superior forecasting performance over both univariate time series specifications and standard VAR-type models (see Watson, 2006). Third, our design of stress tests can be flexibly linked to selected implications of DSGE models and other theoretical constructs. Structural identification provides the economic content of these tests and imposes discipline in designing stress test scenarios. In essence, our model is designed to exploit, and make operational, the forecasting power of DFM models and structural identification based on explicit theoretical constructs, such as DSGE models.

Our model delivers density forecasts of any set of time series. Thus, it is extremely flexible, as it can incorporate multiple measures of real or financial risk, at both aggregate and disaggregated levels, including many indicators reviewed in Bisias et al. (2012). In this paper we focus on two simple indicators of real and financial activity: real GDP growth, and an indicator of the health of the financial system, called FS. Following Campbell, Lo and MacKinlay (1997), the FS indicator is given by the return of a portfolio of a set of systemically important financial firms less the return on the market. This indicator is germane to other indicators of systemic financial risk used in recent studies (see e.g. Acharya et al., 2010 or Brownlees and Engle, 2010).

The joint dynamics of GDP growth and the FS indicator are modeled through a dynamic factor model, following the methodology detailed in Stock and Watson (2005). Density forecasts of GDP growth and the FS indicator are obtained by estimating sets of quantile autoregressions, using forecasts of factors derived from the companion factor VAR as predictors. The use of quantile autoregressions is advantageous, since it allows us to avoid making specific assumptions about the shape of the underlying distribution of GDP growth and the FS indicator. The blending of a dynamic factor model with quantile autoregressions is a novel feature of our modeling framework.

Our measurement of systemic risks follows a risk management approach. We measure systemic real risk with GDP-Expected Shortfall (GDPES), given by the expected loss in GDP growth conditional on a given level of GDP-at-Risk (GDPaR), with GDPaR being defined as the worst predicted realization of quarterly growth in real GDP at a given (low) probability level. Systemic financial risk is measured by FS-Expected Shortfall (FSES), given by the expected loss in FS conditional on a given level of FS-at-Risk (FSaR), with FSaR being defined as the worst predicted realization of the FS indicator at a given (low) probability level.
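The at-risk and expected-shortfall definitions can be made concrete with a small sketch. This is not the paper's estimation pipeline (there, the forecast density comes from factor-augmented quantile autoregressions); the forecast density below is just a hypothetical normal sample, so only the final risk-measure step is illustrated:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical density forecast of quarterly GDP growth (percent).
# In the paper these draws would come from the DFM-based quantile
# autoregressions, not from a parametric simulation.
draws = rng.normal(loc=0.6, scale=0.8, size=100_000)

alpha = 0.05                             # tail probability level
gdp_ar = np.quantile(draws, alpha)       # GDPaR: worst predicted growth at 5%
gdp_es = draws[draws <= gdp_ar].mean()   # GDPES: expected growth conditional
                                         # on breaching GDPaR

print(f"GDPaR(5%) = {gdp_ar:.2f}, GDPES(5%) = {gdp_es:.2f}")
```

By construction the expected shortfall is always at least as severe as the at-risk level, since it averages only the realizations beyond it.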

Stress-tests of systemic risk indicators are implemented by gauging how impulse responses of systemic risk indicators vary through time in response to structural shocks. The identification of structural shocks is accomplished with an augmented version of the sign restriction methodology introduced by Canova and De Nicolò (2002), where aggregate shocks are extracted based on standard macroeconomic and banking theory. Our approach to stress testing differs markedly from, and we believe significantly improves on, most implementations of stress testing currently used in central banks and international organizations. In these implementations, shock scenarios are imposed on sets of observable variables, and their effects are traced through "behavioral" equations of certain variables of interest. Yet the "shocked" observable variables are typically endogenous: thus, it is unclear whether we are shocking the symptoms rather than the causes. As a result, it is difficult to assess both the qualitative and quantitative implications of the stress test results.
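The core mechanics of sign-restriction identification can be sketched in a few lines. This is a minimal illustration, not the paper's augmented Canova-De Nicolò procedure: the 2x2 covariance matrix is made up rather than estimated from factor-VAR residuals, and only impact (not dynamic) responses are checked. Candidate impact matrices are generated by rotating a Cholesky factor, and only rotations whose responses match a theory-implied sign pattern are kept:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative reduced-form covariance of one-step forecast errors for
# (GDP growth, FS indicator); the numbers are hypothetical.
sigma = np.array([[1.0, 0.5],
                  [0.5, 1.2]])
chol = np.linalg.cholesky(sigma)

def random_rotation(rng):
    """Draw a rotation uniformly (Haar measure) via QR of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.normal(size=(2, 2)))
    return q * np.sign(np.diag(r))       # fix column signs for uniqueness

# Keep candidate impact matrices B = chol @ Q whose first column moves GDP
# growth and FS in the same direction, the sign pattern a positive aggregate
# shock would imply in standard macro-banking theory.
accepted = []
for _ in range(2000):
    b = chol @ random_rotation(rng)
    col = b[:, 0] if b[0, 0] > 0 else -b[:, 0]   # normalize the shock's sign
    if col[0] > 0 and col[1] > 0:
        accepted.append(col)

impact = np.mean(accepted, axis=0)       # average accepted impact response
print("accepted draws:", len(accepted), "mean impact response:", impact)
```

In a full implementation the accepted rotations would be applied to the factor VAR's impulse responses at every horizon, and the restrictions would hold over several quarters rather than on impact alone.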

We implement our model using a large set of quarterly time series of the G-7 economies during the 1980Q1-2010Q1 period, and obtain two main results. First, our model provides significant evidence of out-of-sample forecasting power for tail real and financial risk realizations for all countries. Second, stress tests based on our structural identification provide early warnings of vulnerabilities in the real and financial sectors.

Monday, February 27, 2012

Economic crisis: Views from Greece

I asked some Greek professionals about the crisis in their country on behalf of Hanna Intelligence's CEO, Mr. Jose Navio:
dear sir, I got some questions for you, if you have the time:

1  could you please make mention of effects in the citizenry like more children abandoned in hospices because the family cannot maintain them?
2  do you know of lack of food/medicines or lower quality of them?
3  is it better in your opinion to get out of the Euro and use again the old drachma (or any other new currency)?
4  is it better in your opinion to default and to reject the troika bail-outs?

thank you very much in advance,

xxx

The answer of one of those professionals:

Date: 2/27/2012
Subject: RE: Greece and the economic crisis
Dear Mr xxx,

thank you for asking about my country's present; my comment should focus on two issues:

The first one refers to the huge "brain drain" now in progress in Greece, even greater than in the period after WWII, which was the largest emigration period in Greek history. People of all ages and professions are migrating to foreign countries around the world, seeking jobs and better living conditions in all financial, communal and governance/infrastructural terms.

The second one refers to the sharp rise in the number of people who are homeless or unable to sustain their families' everyday living, dignity and income, due to the unprecedented rates of unemployment, wage cuts and increases in the prices of almost all commodities. In cooperation with the church and under the coordination of various entities and NGOs, citizens are gathering food and clothing to assist all those who suffer the "human insecurity" that prevails nowadays in Greece.

I can't say what could have been better for Greece in economic terms, since it's out of my area of expertise, and I don't want to follow the paradigm of all those who suddenly became experts in economic strategies, options, terms and conspiracy theories. I can confirm, though, that this situation is the result of bad Greek governance over the last thirty years, and that although Greece didn't lose sovereignty through wars in its modern history, it did through economic procedures and EU norms. In any case, Greeks are experiencing a very harsh austerity policy and humiliation from various (mostly European) governments and states; most important, instead of facing a hopeful future and prospects, they see things getting worse every day, even after all these inhuman behaviors.

I don't know what the plan or EU's "Grand Strategy" might be for Greece, but the proud and cultured Greeks definitely don't deserve what they are experiencing during these years, nor what is yet to come. Civil society is a "boiling pot" due to the decline in everyday living standards, the unpunished and "untouchable" politicians responsible for this situation, explicit inequalities, and non-existent options for future generations. Let's hope at least that we will not also experience bloodshed or Egypt-like uprisings.

I hope I have given you a brief and indicative picture of contemporary Greece, and have been of some help with your questions.

Best regards,

xxx

Thursday, February 23, 2012

Can Institutional Reform Reduce Job Destruction and Unemployment Duration?

Can Institutional Reform Reduce Job Destruction and Unemployment Duration? Yes It Can. By Esther Perez & Yao Yao
IMF Working Paper No. 12/54
February 2012
http://www.imf.org/external/pubs/cat/longres.aspx?sk=25738.0

Summary: We read search theory's unemployment equilibrium condition as an Iso-Unemployment Curve (IUC). The IUC is the locus of job destruction rates and expected unemployment durations rendering the same unemployment level. A country's position along the curve reveals its preferences over the destruction-duration mix, while its distance from the origin indicates the unemployment level at which such preferences are satisfied. Using a panel of 20 OECD countries over 1985-2008, we find employment protection legislation to have opposing effects on destructions and durations, while the effects of the remaining key institutional factors on both variables tend to reinforce each other. Implementing the right reforms could reduce job destruction rates by about 0.05 to 0.25 percentage points and shorten unemployment spells by around 10 to 60 days. Consistent with this, unemployment rates would decline by between 0.75 and 5.5 percentage points, depending on a country's starting position.


Introduction

This paper investigates how labor market policies affect the unemployment rate through its two defining factors, the duration of unemployment spells and job destruction rates.  To this aim, we look at search theory’s unemployment equilibrium condition as an Iso-Unemployment Curve (IUC). The IUC represents the locus of job destruction rates and expected unemployment durations rendering the same unemployment level. A country’s position along the curve reveals its preferences over the destruction-duration mix, while its distance from the origin indicates the unemployment level at which such preferences are satisfied. We next provide micro-foundations for the link between destructions, durations and policy variables. This allows us to explore the relevance of institutional features using a sample of 20 OECD countries over the period 1985-2008.
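The IUC algebra is easy to sketch. In the search-theory steady state with job destruction rate s and job-finding rate f, the unemployment rate is u = s/(s + f); writing the expected duration as D = 1/f gives u = sD/(1 + sD), so an IUC is the set of (s, D) pairs yielding the same u. A small illustration with made-up numbers (rates per quarter, durations in quarters; these are not the paper's estimates):

```python
# Steady-state unemployment from search theory: with job destruction
# rate s and expected unemployment duration D = 1/f (f = job-finding
# rate), u = s/(s + f) = s*D / (1 + s*D).  An iso-unemployment curve
# collects the (s, D) pairs that yield the same u.

def unemployment(s, duration):
    return s * duration / (1 + s * duration)

def iuc_duration(s, u):
    """Duration needed to stay on the IUC of level u at destruction rate s."""
    return u / (s * (1 - u))

# Two stylized economies on the same 8% IUC: a "dynamic" one with high
# destruction but short spells, and a "stagnant" one with the reverse.
u_target = 0.08
dynamic = (0.06, iuc_duration(0.06, u_target))     # high s, short D
stagnant = (0.015, iuc_duration(0.015, u_target))  # low s, long D

for label, (s, d) in [("dynamic", dynamic), ("stagnant", stagnant)]:
    print(f"{label}: s={s:.3f}, D={d:.2f} quarters, u={unemployment(s, d):.3f}")
```

Both economies sit at the same unemployment rate, which is exactly the paper's point: the destruction-duration mix, not the unemployment level alone, distinguishes them.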

The empirical literature investigating the influence of labor market institutions on the overall unemployment rate is sizable (see, for instance, Blanchard and Wolfers, 1999, and Nickell and others, 2002). Equally numerous are the studies splitting unemployment into job creation and job destruction flows (see, for example, Blanchard, 1998, Shimer, 2007, and Elsby and others, 2008). This work connects these two strands of the literature by investigating how labor market policies shape both job separations and unemployment spells, which together determine the overall unemployment rate in the economy. The IUC schedule used in our analysis is novel and is motivated by the need to understand the nature of unemployment as coming essentially from destructions, durations, or a combination of both. This can help clarify whether policy makers should focus primarily on speeding up workers' reallocation across jobs rather than protecting them in the workplace.

One fundamental question raised in this context is whether countries with dynamic labor markets significantly outperform countries with more stagnant markets. By dynamic (stagnant) we mean labor markets displaying high (low) levels of workers’ turnover in and out of unemployment. Is it the case that countries featuring high job destruction rates but brief unemployment spells tend to display lower unemployment rates than labor markets characterized by limited job destruction but longer unemployment durations?  And how do institutional features shape destructions and durations?


Conclusions

This paper reads the basic unemployment equilibrium condition postulated by search theory as an Iso-Unemployment Curve (IUC). The IUC is the locus of job destruction rates and expected unemployment durations that render the same unemployment level. We use this schedule to classify countries according to their preferences over the job destruction-unemployment duration trade-off. The upshot of this analysis is that labor markets characterized by high levels of job destruction but brief unemployment spells do not necessarily outperform countries characterized by the opposite behavior. Rather, the IUC construct makes it clear that high unemployment rates result from extreme values in either durations or destructions, or from intermediate-to-high levels in both.

Looking at unemployment through the lens of the IUC schedule focuses attention on each economy's revealed social preferences over the destruction-duration mix. Policy packages fighting unemployment should take such preferences into consideration. Some countries seem to tolerate relatively high destruction rates as long as unemployment duration is short. Others are biased towards job security and do not mind financing longer job search spells. A few unfortunate countries are trapped in a high inflow-high duration combination, seemingly condemned to long periods of high unemployment.

An optimistic message arising from this study, especially for countries located on higher IUCs, is that an ambitious structural reform program tackling high labor tax wedges, activating unemployment benefits and removing barriers to competition in key services can effectively contain job losses, limit the duration of unemployment spells and yield a substantial reduction in unemployment.