Showing posts with label gov't intervention. Show all posts

Monday, July 24, 2017

US cities must unlock the value of the land they sit on

US cities must unlock the value of the land they sit on, by Matthew Klein
There is an answer to local governments’ pension obligations and under-investment
Financial Times, July 21, 2017
https://www.ft.com/content/e20bd8d4-6de5-11e7-bfeb-33fe0c5b7eaa

Boston’s Logan International Airport was built in the wrong place. Instead of occupying undesirable plots on the outskirts of the city, it sits on almost 1,000 hectares of easily accessible waterfront property close to the urban core. The land should be home to condos and office towers, not take-offs and landings.

The question is whether it’s worth paying the high cost to move the airport for benefits that will not be realised for decades. Nobody knows. Today’s politicians will be long gone by then and have no incentive to explore whether the move would make the city better off in the long run.

The financial system provides a way round this problem: wise cities can use the market as a time machine to reap rewards today for good decisions about future investments. This would require cities to adopt the accounting and governance standards sought by activist investors in hoteliers, retailers and chain restaurants. In particular, cities should separate their real estate assets from the services they provide to their residents.

The potential rewards would be enormous. Excluding public parks, local governments own about a fifth of all the land within many US cities’ limits. It is worth at least $25tn, according to Dag Detter and Stefan Fölster in The Public Wealth of Cities. That figure dwarfs the $3.8tn in municipal bond debt and $7.5tn in accumulated pension obligations collectively owed by the US’s states and localities. Capturing this value and boosting yields by even a tiny amount could generate more than enough income to pay benefits to retired workers, invest in maintenance and develop additional infrastructure to accommodate growing populations.
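A back-of-envelope calculation shows why the article calls the rewards enormous. The asset and liability figures below are the article's; the one-percentage-point yield improvement is an illustrative assumption, not a number from the piece.

```python
# Back-of-envelope check of the article's claim. Asset and liability figures
# are from the article; the incremental yield is an illustrative assumption.
land_value = 25e12   # at least $25tn of land owned by US local governments
muni_debt = 3.8e12   # municipal bond debt of states and localities
pensions = 7.5e12    # accumulated pension obligations

extra_yield = 0.01   # assumed: professional management adds 1 point of yield
annual_income = land_value * extra_yield

print(f"Extra annual income: ${annual_income / 1e9:,.0f}bn")
print(f"Land value vs total obligations: {land_value / (muni_debt + pensions):.1f}x")
```

Even under this modest assumption, the incremental income ($250bn a year) is large relative to annual pension outlays, and the asset base is more than twice the combined debt and pension obligations.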

Governments could start by figuring out the real value of what they own. Weirdly, the Governmental Accounting Standards Board thinks doing this for physical assets is too hard and “may negatively affect timeliness of financial reporting”. The result is that municipalities publish balance sheets with implausibly low estimates of their net worth. The Massachusetts Port Authority, which owns Logan airport, claims its landholdings are worth just $226.5m and that its total capital assets net of depreciation are worth about $3.1bn. A rough estimate suggests the land under the airport alone could easily be worth tens of billions of dollars.

The next step would be transferring ownership of these assets to what Detter and Fölster call an “urban wealth fund”. Ideally, all publicly owned assets in a given city would be placed in the fund, regardless of whether they technically belong to the county, the city, the school system, the state or some other entity. The local governments would each have shares in the fund proportionate to the value of the assets they contributed. These shares would be reported as assets on the municipal balance sheets.

Independent managers with experience in real estate and finance would be charged with maximising the value of the portfolio. Cities would receive dividends from their stakes in these commercial properties and have the option to borrow against or sell their shares if desperate for cash.

Public officials would then have to decide whether it makes sense to pay fair market rents to stay in their properties. Moving offices might be inconvenient for government workers but the potential gains for taxpayers and citizens who depend on government services would be far greater. Leasing space in subway stations to shops might detract from the “historic” character of the US’s barbarous public transit systems, but the revenues could fund needed improvements, such as ventilation, without the need for debt or higher passenger fares.

The urban wealth fund wouldn’t have to be run purely for profit. Segments within the portfolio could have separate goals as long as they are simple and quantifiable. Public housing, for example, could be boosted by increasing density on existing plots and funding improvements by developing some of the freed-up land to sell at higher prices, as Andrew Adonis, head of the UK’s National Infrastructure Commission, has suggested.

Boston can afford to leave money on the table because the local economy has been booming and the city’s general obligation bonds have the country’s highest credit ratings. Other cities, such as Chicago, are being forced to cut services and raise taxes because of financial stress. Yet they, too, have enormous stocks of untapped wealth. With better governance, professional asset management and a little financial engineering, they could raise the money they need and invest.

matt.klein@ft.com
@M_C_Klein

Monday, July 10, 2017

Is China building too many airports?

Is China building too many airports? By Fran Wang. Caixin, Jun 23, 2017. http://www.caixinglobal.com/2017-06-23/101105028.html. Extract.

Over the next three years, local authorities in China are planning to build more than 900 airports for general aviation—the segment of the industry that includes crop dusting and tourism. The figure is nearly double the central government’s goal of “more than 500” over the period.

A news report has warned that this is simply too many airports.

In May 2016, the State Council, China’s cabinet, announced that the country wanted to construct more than 500 general aviation airports to boost the size of the industry to over 1 trillion yuan (US$146 billion).

General aviation covers flights on helicopters and light aircraft used in sectors such as tourism, agriculture, medical care, and disaster relief.

All provincial-level governments except Shanghai, Tibet, and the northeastern province of Jilin have since published their own plans for these airports, and their goal is far more ambitious than the central government’s. Together, they plan to build 934 general aviation airports, according to the 21st Century Business Herald.

The number put forward by each region ranges from seven to 200. The three places that intend to build the most general aviation airports are Guangxi in southern China, Heilongjiang in the northeast, and Xinjiang in the northwest—all remote and less developed areas, the newspaper said.

Despite the government’s excesses in managing the public treasure*, corruption in civil engineering works†, etc.‡, the citizenry is quite comfortable¶ with these expenditures (as long as the costs are not visibly recouped from them). It seems that when we see more, and taller, tower buildings, we assume we are progressing, that there is material advance, and that most people are better off for it. My hypothesis is that what feeds the population’s approval is patriotism§ (very powerful in Han China) and redistributionism֍.




From Chris Buckley's China’s New Bridges: Rising High, but Buried in Debt, The New York Times, Jun 10, 2017. Available at https://www.nytimes.com/2017/06/10/world/asia/china-bridges-infrastructure.html (impressive imagery).

*    “Infrastructure is a double-edged sword,” said Atif Ansar, a management professor at the University of Oxford who has studied China’s infrastructure spending. “It’s good for the economy, but too much of this is pernicious. ‘Build it and they will come’ is a dictum that doesn’t work, especially in China, where there’s so much built already.”

A study that Mr. Ansar helped write said fewer than a third of the 65 Chinese highway and rail projects he examined were “genuinely economically productive,” while the rest contributed more to debt than to transportation needs.

†  In the past six years, anticorruption inquiries have toppled more than 27 Hunan transportation officials.

‡, §   “The amount of high bridge construction in China is just insane,” said Eric Sakowski, an American bridge enthusiast who runs a website on the world’s highest bridges. “China’s opening, say, 50 high bridges a year, and the whole of the rest of the world combined might be opening 10.”

    Of the world’s 100 highest bridges, 81 are in China, including some unfinished ones, according to Mr. Sakowski’s data. (The Chishi Bridge ranks 162nd.)

    China also has the world’s longest bridge, the 102-mile Danyang-Kunshan Grand Bridge, a high-speed rail viaduct running parallel to the Yangtze River, and is nearing completion of the world’s longest sea bridge, a 14-mile cable-stay bridge skimming across the Pearl River Delta, part of a 22-mile bridge and tunnel crossing that connects Hong Kong and Macau with mainland China.

    The country’s expressway growth has been compared to that of the United States in the 1950s, when the Interstate System of highways got underway, but China is building at a remarkable clip. In 2016 alone, China added 26,100 bridges on roads, including 363 “extra large” ones with an average length of about a mile, government figures show.

֍    “It’s very important to improve transport and other infrastructure so that impoverished regions can escape poverty and prosper,” President Xi Jinping said while visiting the spectacular, recently opened Aizhai Bridge in Hunan in 2013. “We must do more of this and keep supporting it.”

¶    Indeed, the new roads and railways have proved popular.


§  Who Will Fight? The All-Volunteer Army after 9/11. By Susan Payne Carter, Alexander Smith & Carl Wojtaszek. American Economic Review, May 2017, Pages 415-419, https://www.aeaweb.org/articles?id=10.1257/aer.p20171082.

Evaluation of a proposal for reliable low-cost grid power with 100% wind, water, and solar

Evaluation of a proposal for reliable low-cost grid power with 100% wind, water, and solar. By Christopher T. M. Clack, Staffan A. Qvist, Jay Apt, Morgan Bazilian, Adam R. Brandt, Ken Caldeira, Steven J. Davis, Victor Diakov, Mark A. Handschy, Paul D. H. Hines, Paulina Jaramillo, Daniel M. Kammen, Jane C. S. Long, M. Granger Morgan, Adam Reed, Varun Sivaram, James Sweeney, George R. Tynan, David G. Victor, John P. Weyant, and Jay F. Whitacre. Proceedings of the National Academy of Sciences. http://www.pnas.org/content/early/2017/06/16/1610381114.full

Significance: Previous analyses have found that the most feasible route to a low-carbon energy future is one that adopts a diverse portfolio of technologies. In contrast, Jacobson et al. (2015) consider whether the future primary energy sources for the United States could be narrowed to almost exclusively wind, solar, and hydroelectric power and suggest that this can be done at “low-cost” in a way that supplies all power with a probability of loss of load “that exceeds electric-utility-industry standards for reliability”. We find that their analysis involves errors, inappropriate methods, and implausible assumptions. Their study does not provide credible evidence for rejecting the conclusions of previous analyses that point to the benefits of considering a broad portfolio of energy system options. A policy prescription that overpromises on the benefits of relying on a narrower portfolio of technology options could be counterproductive, seriously impeding the move to a cost-effective decarbonized energy system.

Abstract: A number of analyses, meta-analyses, and assessments, including those performed by the Intergovernmental Panel on Climate Change, the National Oceanic and Atmospheric Administration, the National Renewable Energy Laboratory, and the International Energy Agency, have concluded that deployment of a diverse portfolio of clean energy technologies makes a transition to a low-carbon-emission energy system both more feasible and less costly than other pathways. In contrast, Jacobson et al. [Jacobson MZ, Delucchi MA, Cameron MA, Frew BA (2015) Proc Natl Acad Sci USA 112(49):15060–15065] argue that it is feasible to provide “low-cost solutions to the grid reliability problem with 100% penetration of WWS [wind, water and solar power] across all energy sectors in the continental United States between 2050 and 2055”, with only electricity and hydrogen as energy carriers. In this paper, we evaluate that study and find significant shortcomings in the analysis. In particular, we point out that this work used invalid modeling tools, contained modeling errors, and made implausible and inadequately supported assumptions. Policy makers should treat with caution any visions of a rapid, reliable, and low-cost transition to entire energy systems that relies almost exclusively on wind, solar, and hydroelectric power.

Thursday, June 22, 2017

When the appeal of a dominant leader is greater than a prestige leader

When the appeal of a dominant leader is greater than a prestige leader. By Hemant Kakkar & Niro Sivanathan
Proceedings of the National Academy of Sciences, http://www.pnas.org/content/early/2017/06/06/1617711114.full

Abstract: Across the globe we witness the rise of populist authoritarian leaders who are overbearing in their narrative, aggressive in behavior, and often exhibit questionable moral character. Drawing on evolutionary theory of leadership emergence, in which dominance and prestige are seen as dual routes to leadership, we provide a situational and psychological account for when and why dominant leaders are preferred over other respected and admired candidates. We test our hypothesis using three studies, encompassing more than 140,000 participants, across 69 countries and spanning the past two decades. We find robust support for our hypothesis that under a situational threat of economic uncertainty (as exemplified by the poverty rate, the housing vacancy rate, and the unemployment rate) people escalate their support for dominant leaders. Further, we find that this phenomenon is mediated by participants’ psychological sense of a lack of personal control. Together, these results provide large-scale, globally representative evidence for the structural and psychological antecedents that increase the preference for dominant leaders over their prestigious counterparts.

Implications of maternity leave choice for perceptions of working mothers

Should I stay or should I go? Implications of maternity leave choice for perceptions of working mothers. By Thekla Morgenroth & Madeline Heilman
Journal of Experimental Social Psychology, September 2017, Pages 53–56. http://www.sciencedirect.com/science/article/pii/S0022103116307788

Highlights

•    We investigate how women's decisions regarding maternity leave affect how they are evaluated.
•    Women who choose to take maternity leave are seen as less competent at work and less worthy of organizational rewards.
•    Women who choose not to take maternity leave are seen as worse parents and less desirable partners.
•    Perceptions of whether women prioritize family or work play an important role in these processes.

Abstract: Working mothers often find themselves in a difficult situation when trying to balance work and family responsibilities and to manage expectations about their work and parental effectiveness. Family-friendly policies such as maternity leave have been introduced to address this issue. But how are women who then make the decision to go or not go on maternity leave evaluated? We presented 296 employed participants with information about a woman who made the decision to take maternity leave or not, or about a control target for whom this decision was not relevant, and asked them to evaluate her both in the work and the family domain. We found that both decisions had negative consequences, albeit in different domains. While the woman taking maternity leave was evaluated more negatively in the work domain, the woman deciding against maternity leave was evaluated more negatively in the family domain. These evaluations were mediated by perceptions of work/family commitment priorities. We conclude that while it is important to introduce policies that enable parents to reconcile family and work demands, decisions about whether to take advantage of these policies can have unintended consequences – consequences that can complicate women's efforts to balance work and childcare responsibilities.


Monday, June 12, 2017

Less than a third of the 65 Chinese highway & rail projects examined were “genuinely economically productive.”

China’s New Bridges: Rising High, but Buried in Debt. By Chris Buckley
The New York Times, Jun 10, 2017
China has built hundreds of dazzling new bridges, including the longest and highest, but many have fostered debt and corruption. https://www.nytimes.com/2017/06/10/world/asia/china-bridges-infrastructure.html

“The amount of high bridge construction in China is just insane,” said Eric Sakowski, an American bridge enthusiast who runs a website on the world’s highest bridges. “China’s opening, say, 50 high bridges a year, and the whole of the rest of the world combined might be opening 10.”

Of the world’s 100 highest bridges, 81 are in China, including some unfinished ones, according to Mr. Sakowski’s data. (The Chishi Bridge ranks 162nd.)

China also has the world’s longest bridge, the 102-mile Danyang-Kunshan Grand Bridge, a high-speed rail viaduct running parallel to the Yangtze River, and is nearing completion of the world’s longest sea bridge, a 14-mile cable-stay bridge skimming across the Pearl River Delta, part of a 22-mile bridge and tunnel crossing that connects Hong Kong and Macau with mainland China.

The country’s expressway growth has been compared to that of the United States in the 1950s, when the Interstate System of highways got underway, but China is building at a remarkable clip. In 2016 alone, China added 26,100 bridges on roads, including 363 “extra large” ones with an average length of about a mile, government figures show.

[...]

A study that Mr. Ansar helped write [https://academic.oup.com/oxrep/article/32/3/360/1745622/Does-infrastructure-investment-lead-to-economic] said fewer than a third of the 65 Chinese highway and rail projects he examined were “genuinely economically productive,” while the rest contributed more to debt than to transportation needs. [...]

In the country that built the Great Wall, major feats of infrastructure have long been a point of pride. China has produced engineering coups like the world’s highest railway, from Qinghai Province to Lhasa, Tibet; the world’s largest hydropower project, the Three Gorges Dam; and an 800-mile canal from the Yangtze River system to Beijing that is part of the world’s biggest water transfer project.
Leaders defend the infrastructure spree as crucial to China’s development.

“It’s very important to improve transport and other infrastructure so that impoverished regions can escape poverty and prosper,” President Xi Jinping said while visiting the spectacular, recently opened Aizhai Bridge in Hunan in 2013. “We must do more of this and keep supporting it.”

Indeed, the new roads and railways have proved popular, especially in wealthier areas with many businesses and heavy commuter traffic. And even empty infrastructure often has a way of eventually filling up, as early critics of the country’s high-speed rail and the Pudong skyscrapers in Shanghai have discovered.

Why do some societies fail to adopt more efficient institutions in response to changing economic conditions?

The Ideological Roots of Institutional Change, by Murat Iyigun & Jared Rubin
University of Colorado Working Paper, April 2017
Abstract: Why do some societies fail to adopt more efficient institutions in response to changing economic conditions? And why do such conditions sometimes generate ideological backlashes and at other times lead to transformative sociopolitical movements? We propose an explanation that highlights the interplay - or lack thereof - between new technologies, ideologies, and institutions. When new technologies emerge, uncertainty results from a lack of understanding how the technology will fit with prevailing ideologies and institutions. This uncertainty discourages investment in institutions and the cultural capital necessary to take advantage of new technologies. Accordingly, increased uncertainty during times of rapid technological change may generate an ideological backlash that puts a higher premium on traditional values. We apply the theory to numerous historical episodes, including Ottoman reform initiatives, the Japanese Tokugawa reforms and Meiji Restoration, and the Tongzhi Restoration in Qing China.

Wednesday, June 7, 2017

Increase in own state's minimum wage increases frequency with which low-wage workers commute out of the state

Cross-state differences in the minimum wage and out-of-state commuting by low-wage workers. By Terra McKinnish
Regional Science and Urban Economics, Volume 64, May 2017, Pages 137–147
https://doi.org/10.1016/j.regsciurbeco.2017.02.006
Highlights

•  The federal minimum wage hike compressed cross-border minimum wage differentials.
•  Low wage workers responded by commuting out of states that increased their minimum wage.
•  Results are consistent with a disemployment effect of minimum wage increases.

Abstract: The 2009 federal minimum wage increase, which compressed cross-state differences in the minimum wage, is used to investigate the claim that low-wage workers are attracted to commute out of state to neighboring states that have higher minimum wages. The analysis focuses on Public Use Microdata Areas (PUMAs) that experience commuting flows with one or more neighboring states. A difference-in-differences-in-differences model compares PUMAs that experienced a sizeable increase or decrease in their cross-border minimum wage differential to those that experienced a smaller change in the cross-border differential. Out-of-state commuting of low-wage workers (less than 10 dollars an hour) is then compared to that of moderate-wage workers (10–13 dollars an hour). The results suggest that an increase in one's own state's minimum wage, relative to a neighbor's, increases the frequency with which low-wage workers commute out of the state. The analysis is replicated on the subset of PUMAs that experience commuting flows with more than one neighboring state, so that the estimates are identified entirely within PUMA. As a whole, the results suggest that low-wage workers tend to commute away from minimum wage increases rather than towards them.
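The triple-difference logic described in the abstract can be illustrated with a toy calculation. All commuting rates below are invented for illustration; the paper's actual estimates are not reproduced here.

```python
# Toy illustration of the triple-difference (DDD) comparison in the abstract.
# All commuting rates below are invented for illustration only.
rates = {
    # (PUMA group, wage group): (share commuting out of state before, after)
    ("large_change", "low_wage"):      (0.10, 0.14),
    ("large_change", "moderate_wage"): (0.10, 0.11),
    ("small_change", "low_wage"):      (0.10, 0.11),
    ("small_change", "moderate_wage"): (0.10, 0.11),
}

def change(group, wage):
    """Difference 1: change in out-of-state commuting over time."""
    before, after = rates[(group, wage)]
    return after - before

# Difference 2: low-wage vs moderate-wage workers within each PUMA group.
# Difference 3: PUMAs with a large vs small shift in the cross-border differential.
ddd = ((change("large_change", "low_wage") - change("large_change", "moderate_wage"))
       - (change("small_change", "low_wage") - change("small_change", "moderate_wage")))
print(f"Triple-difference estimate: {ddd:+.2f}")
```

With these made-up numbers the estimate is +0.03: out-of-state commuting by low-wage workers rose by three percentage points more than any common trend can explain, which is the pattern the paper reports in sign, not magnitude.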

Sunday, June 4, 2017

Unintended Consequences: The Regressive Effects of Increased Access to Courts

Unintended Consequences: The Regressive Effects of Increased Access to Courts. By Anthony Niblett & Albert Yoon, University of Toronto - Faculty of Law
Small claims courts enable parties to resolve their disputes relatively quickly and cheaply. The court’s limiting feature, by design, is that alleged damages must be small, in accordance with the jurisdictional limit at that time. Accordingly, one might expect that a large increase in the upper limit of claim size would increase the court’s accessibility to a larger and potentially more diverse pool of litigants.

We examine this proposition by studying the effect of an increase in the jurisdictional limit of the Ontario Small Claims Court. Prior to January 2010, claims up to $10,000 could be litigated in the small claims court. After January 2010, this jurisdictional limit increased to include all claims up to $25,000. We study patterns in nearly 625,000 disputes over the period 2006-2013.

In this paper, we investigate plaintiff behavior. Interestingly, the total number of claims filed by plaintiffs does not increase significantly with the increased jurisdictional limit. We do find, however, changes to the composition of plaintiffs. Following the jurisdictional change, we find that plaintiffs using the small claims court are, on average, from richer neighborhoods. We also find that the proportion of plaintiffs from poorer neighborhoods drops. The drop-off is most pronounced in plaintiffs from the poorest 10% of neighborhoods.

We explore potential explanations for this regressive effect, including crowding out, congestion, increased legal representation, and behavioral influences. Our findings suggest that legislative attempts to make the courts more accessible may have unintended regressive consequences.
                                                                   
Keywords: Courts, Regressive effects, Small claims court, Access to justice, Litigant behavior, Public goods, Empirical law and economics, Jurisdiction

Propaganda can be effective at changing the behavior of all citizens even if most do not believe it

Propaganda and credulity, by Andrew T. Little
Games and Economic Behavior, Volume 102, March 2017, Pages 224–232
http://www.sciencedirect.com/science/article/pii/S0899825616301476

Highlights
•   Propaganda can be effective at changing the behavior of all citizens even if most do not believe it.
•   This effect is particularly strong when citizens care a lot about behaving in a similar manner as others.
•    However, the government picks less propaganda when it is more effective.

Abstract: I develop a theory of propaganda which affects mass behavior without necessarily affecting mass beliefs. A group of citizens observe a signal of their government's performance, which is upwardly inflated by propaganda. Citizens want to support the government if it performs well and if others are supportive (i.e., to coordinate). Some citizens are unaware of the propaganda (“credulous”). Because of the coordination motive, the non-credulous still respond to propaganda, and when the coordination motive dominates they perfectly mimic the actions of the credulous. So, all can act as if they believe the government's lies even though most do not. The government benefits from this responsiveness to manipulation since it leads to a more compliant citizenry, but uses more propaganda precisely when citizens are less responsive.

JEL classification: D83

Keywords: Political economy; Propaganda; Authoritarian politics

Saturday, June 3, 2017

Review of Vijay Joshi's India’s Long Road: The Search for Prosperity

India’s long road to prosperity, by Martin Wolf
Martin Wolf is impressed by an analysis of what the world’s largest democracy must do in order to thrive
Financial Times, May 24, 2017
https://www.ft.com/content/d5cf8bb0-3fc3-11e7-9d56-25f963e998b2

India could do far better. That, in a sentence, is the conclusion of Vijay Joshi’s superb book. Joshi is an Indian economist who has spent most of his professional life at Oxford university. In this penetrating account of the past and present of Indian economic development, he casts a bright light on the prospects ahead. If India’s aim is to become a high-income country in the next generation, its economic, social and political performance needs to improve dramatically.

The good news is that there is room for improvement on many fronts. The bad news is that the obstacles to the needed improvement are huge. Worse, many emanate from the failures of the state and the political processes that guide it. Yet, as Joshi also notes, “The two fixed points in the socio-political setting of the Indian state’s development policies are that the country is a democracy, and an extremely diverse society.” The challenge is to improve performance within the constraints of these realities.

The success of Indian development matters, for at least three reasons: India will soon be the most populous country in the world; it is already far and away the largest democracy; and, above all, despite progress in the last three decades, between 270m and 360m Indians still lived in dire poverty (on slightly different definitions) in 2011 (that is, between 22 and 30 per cent of the population). If extreme poverty is to be eliminated from the world, it must be eliminated in India.

While the focus of India’s Long Road is on the economy, its analysis is appropriately comprehensive. It considers the post-independence growth record, the failure to create remunerative employment, the excessive role of publicly owned enterprises, the poor quality of Indian infrastructure and the inadequacy of environmental regulation. The book also analyses the successes and failures of macroeconomic management, the appalling quality of government-provided education and healthcare, the need for a better safety net for the poor, the long-term decay of the state, the prevalence of corruption and the role of India in the world economy.

In covering all these issues, Joshi combines enthusiastic engagement with the detachment of a scholar who has passed much of his life abroad. No better guide to India’s contemporary economy exists.

Over the past 70 years, India’s growth has shown two marked accelerations. The first followed independence in 1947. The second followed the economic liberalisation that began in the 1980s and accelerated dramatically after the balance of payments crisis of 1991. In the first period, growth averaged 3.5 per cent a year. In the second, it rose to 6 per cent (4 per cent per head). Unfortunately, after a further acceleration in the first decade of the 2000s, growth has slowed once again. The principal explanation for this recent slowdown is a marked weakening of investment by an over-indebted private sector.

"Joshi argues that India could provide a basic income to all by diverting resources wasted on subsidies"

So what should be the goal for the decades ahead? Joshi describes it simply as “rapid, inclusive, stable, and sustainable growth . . . within a political framework of liberal democracy”. More precisely, if incomes per head could grow at 7 per cent a year, India would achieve high-income status, at the level of Portugal, within a quarter of a century.

Only three economies have achieved something close to this in the past: Taiwan, South Korea and China. It represents an enormous challenge that cannot be met with the current “partial reform model”. The basic flaw of that model, argues Joshi, “is a failure to put the role of the state, and the relation between the state, the market, and the private sector, on the right footing”. The state, in brief, does what it does not need to do and fails to do what it does need to do.

It is no longer enough for the state merely to get out of the way, important though that still is in crucial areas. Among these is the labour market, whose huge distortions and inefficiencies have turned the demographic dividend into a demographic disaster.

Thus, in the 10 years from 1999 to 2009, India’s workforce increased by 63m. “Of these, 44 million joined the unorganized sector, 22 million became informal workers in the organized sector, and the number of formal workers in the organized sector fell by 3 million.” This is a social catastrophe. It is due not only to labour-market distortions, but to a host of constraints on the creation, operation and, not least, closure of organised and large-scale businesses.

Yet India also needs an effective state able to supply the public goods, public services and competent regulation on which an efficient economy depends. Unfortunately, that is not what now exists. All international surveys give India a very low rank for the efficiency and honesty of the state and the ease of doing business. Joshi argues that while the economy is more dynamic and the quality of policy has indeed improved since the 1980s, the quality of the state has deteriorated in many respects.

Among the many failures is the waste of state resources on inefficient subsidies that, though often given in the name of the poor, actually go to the better off. Indeed, one of the most original and persuasive aspects of the book is the argument that it would in principle be possible to provide a basic income to all Indians sufficient to lift everybody out of extreme poverty merely by diverting resources wasted on grotesquely costly subsidies. Yet, to take just one example, state governments continue to bribe farmers with free power, at the expense of a reliable electricity supply.

Will prime minister Narendra Modi be the new broom that sweeps all these cobwebs away? Alas no. His government’s performance is “mixed at best”. It has some achievements. But it has shown insufficient energy in tackling both the immediate problems of inadequate private investment, excessive debt and feeble banks, and the longer-term problems of dreadful education, lousy healthcare, weak infrastructure, corruption, regulatory incompetence, excessive interference and government waste.

A great opportunity for radically improved performance is being missed. This is not bad just for the Indian economy. There is a real danger that if the economy fails to perform as needed and desired, the governing Bharatiya Janata party will find itself increasingly attracted to its “dark side” of communal and caste division. That way lies not just economic failure, but possibly the destabilisation of Indian democracy, one of the great political achievements of the post-second world war era.

Those who care about the future of this remarkable country and indeed the future of democracy itself must hope that Modi gets this right. If they want to understand what he needs to do and why, they should first read this book.

India’s Long Road: The Search for Prosperity, by Vijay Joshi, Oxford University Press, RRP£22.99, 360 pages
Martin Wolf is the FT’s chief economics commentator

Small association between socioeconomic status and adult fast-food consumption in US

The association between socioeconomic status and adult fast-food consumption in the U.S. By Jay L. Zagorsky and Patricia K. Smith. Economics & Human Biology
http://www.sciencedirect.com/science/article/pii/S1570677X16300363

Highlights
•   Fast-food consumption among adults varies little across SES, measured as income and wealth.
•   Descriptive analyses indicate a weak, inverted U-shaped association between fast-food consumption and SES.
•   Checking nutrition labels frequently and drinking less soda predict less adult fast-food intake.
•   More work hours predict greater fast-food intake.

Abstract: Health follows a socioeconomic status (SES) gradient in developed countries, with disease prevalence falling as SES rises. This pattern is partially attributed to differences in nutritional intake, with the poor eating the least healthy diets. This paper examines whether there is an SES gradient in one specific aspect of nutrition: fast-food consumption. Fast food is generally high in calories and low in nutrients. We use data from the 2008, 2010, and 2012 waves of the National Longitudinal Survey of Youth (NLSY79) to test whether adult fast-food consumption in the United States falls as monetary resources rise (n = 8136). This research uses more recent data than previous fast-food studies and includes a comprehensive measure of wealth in addition to income to measure SES.

Friday, June 2, 2017

Germany Reinvents the Energy Crisis
A love affair with renewables brings high prices, potential  blackouts and worries about 'deindustrialization.'
By Holman W. Jenkins, Jr.
http://online.wsj.com/news/articles/SB10001424052702304448204579185720802195590
Wall Street Journal, Nov. 8, 2013 6:28 p.m. ET

ObamaCare isn't the only policy train wreck in progress. Like Mao urging peasants to melt down their pots, pans and farm tools to turn China into a steel-producing superpower overnight, Germany dished out subsidies to encourage homeowners and farmers to install solar panels and windmills and sell energy back to the power company at inflated prices. Success—Germany now gets 25% of its power from renewables—has turned out to be a disaster.

As Germans rush to grab this easy money, carbon dioxide output has risen, not fallen, because money-strapped utilities have switched to burning cheap American coal to provide the necessary standby power when wind and sun fail.

Because the sun and wind are intermittent and the power grid is poorly arranged to accommodate them, brownouts and blackouts threaten this winter.

Because the bills are paid by households and businesses, electricity rates are triple those in the United States. An immediate panic is jobs, as prized industries head to the U.S. for cheaper energy unleashed by the shale revolution. Europe's top energy official now speaks frankly of the "deindustrialization in Germany."

In Britain, where policy has been nearly as generous to renewables, "It's fine being very, very green, but not if you're interested in manufacturing," complains a prominent CEO.

Wind turbines stand behind a solar power park near Werder, Germany. Getty Images

Democracy's great virtue is that it doesn't follow schemes off a cliff, but the normal adjustment mechanisms are hampered by the fact that Europe's energy disaster implicates the entire political spectrum.

Ed Miliband, leader of Britain's Labour Party, set the theme for next year's British election when he recently promised to freeze energy prices if elected. But Labour isn't about to disown the solar and wind subsidies it created. It wants to soldier on, shifting the cost to business. In Germany, conservative Angela Merkel embraced the opposition's energy economics wholesale after Fukushima, leaving voters who are alarmed about energy prices no place to turn in September's election except Angela Merkel, who vaguely indicated some moderation of the energiewende (energy revolution) she launched and continues to champion.

An unwonted glimmer of reason has actually come from Mrs. Merkel's likely Social Democrat coalition partner, author of Germany's original green energy law, whose spokesman now says: "We need to ensure that renewable energy is affordable. And we need to put an end to the idea that we can pull out of nuclear and coal simultaneously. This won't work."

It's tempting to assume Europe's politicians were praying in the church of global warming. But more important is their subscription to resource-depletion ideology, which convinced them they'd picked a political winner because rising fossil fuel prices were guaranteed to make green energy look cheap in comparison.

"When more people consume oil and coal, the price will go up, but when more people consume renewable energy, the price of it will go down," explained Ms. Merkel's top energy adviser.

We have here an idea seemingly impervious to experience and part of the mental baggage of every politician likely to get elected in our world. "It is absolutely certain that [fossil energy] demand will go up a lot faster than supply. It's just a fact," President Obama explained in 2011. The U.S. "cannot afford to bet our long-term prosperity on a resource that will eventually run out."

Mr. Obama mentioned shale exactly once in his speech—and only to say shale would run out too.

If all this were true, Europe wouldn't be in its present fix. Here's the real truth: The shale revolution is less revolutionary than it seems. It has shocked settled misconceptions only because it happened under the noses of Americans, in populated areas where the casual assumption was that "resources" would long ago have been dug out and carted away.

In fact, the world's store of fossil hydrocarbons is truly vast, including almost unimaginable quantities of methane hydrates. The challenge is the technological and economic one of getting access to a given resource at an affordable price—a challenge ever since men used rags to soak up oil from natural seeps. For 150 years, the price of a barrel of oil has fluctuated between $10 and $100 (in 2011 dollars), a range that has been sufficient to call forth new reserves and feedstocks whenever needed to maintain hydrocarbons as a source of competitively priced energy.

Europe's energy crisis is a lot like ours of 40 years ago—self-inflicted. Europe's dream was untenable the minute energy prices began falling in a major trade competitor like the United States. The big question now is how far the political upheaval will go when an entire elite is implicated in an unsatisfactory energy experiment, which inevitably has become wrapped up in public disappointment with another failed elite project, the European Union itself.

Fascinating too will be the fate of Europe's shale. In Europe, government, not landowners, controls and benefits from mineral resources, creating the zero-sum resource politics that have made the Mideast a paragon of stability and civil progress. What about global warming? At least that answer is easier. European voters are coming out where Americans have, realizing that forswearing cheap energy will do nothing for CO2 levels (and even less for climate) as long as others aren't forswearing cheap energy too.

Sunday, May 7, 2017

Macroprudential Liquidity Stress Testing in FSAPs for Systemically Important Financial Systems

Author/Editor: Andreas A. Jobst; Christian Schmieder; Li Lian Ong

http://www.imf.org/en/Publications/WP/Issues/2017/05/01/Macroprudential-Liquidity-Stress-Testing-in-FSAPs-for-Systemically-Important-Financial-44873?cid=em-COM-123-35149

Summary: Bank liquidity stress testing, which has become de rigueur following the costly lessons of the global financial crisis, remains underdeveloped compared to solvency stress testing. The ability to adequately identify, model and assess the impact of liquidity shocks, which are infrequent but can have a severe impact on affected banks and financial systems, is complicated not only by data limitations but also by interactions among multiple factors. This paper provides a conceptual overview of liquidity stress testing approaches for banks and discusses their implementation by IMF staff in the Financial Sector Assessment Program (FSAP) for countries with systemically important financial sectors over the last six years.

Series: Working Paper No. 17/102
Publication Date: May 1, 2017
ISBN/ISSN: 9781475597240/1018-5941
Stock No: WPIEA2017102
Pages: 56

Monday, January 9, 2017

A way to market to conservatives the science behind climate change more effectively

Past-focused environmental comparisons promote pro-environmental outcomes for conservatives. By Matthew Baldwin and Joris Lammers
http://www.pnas.org/content/113/52/14953.abstract

Significance

Political polarization on important issues can have dire consequences for society, and divisions regarding the issue of climate change could be particularly catastrophic. Building on research in social cognition and psychology, we show that temporal comparison processes largely explain the political gap in respondents’ attitudes towards and behaviors regarding climate change. We found that conservatives’ proenvironmental attitudes and behaviors improved consistently and drastically when we presented messages that compared the environment today with that of the past. This research shows how ideological differences can arise from basic psychological processes, demonstrates how such differences can be overcome by framing a message consistent with these basic processes, and provides a way to market the science behind climate change more effectively.


Abstract

Conservatives appear more skeptical about climate change and global warming and less willing to act against it than liberals. We propose that this unwillingness could result from fundamental differences in conservatives’ and liberals’ temporal focus. Conservatives tend to focus more on the past than do liberals. Across six studies, we rely on this notion to demonstrate that conservatives are positively affected by past- but not by future-focused environmental comparisons. Past comparisons largely eliminated the political divide that separated liberal and conservative respondents’ attitudes toward and behavior regarding climate change, so that across these studies conservatives and liberals were nearly equally likely to fight climate change. This research demonstrates how psychological processes, such as temporal comparison, underlie the prevalent ideological gap in addressing climate change. It opens up a promising avenue to convince conservatives effectively of the need to address climate change and global warming.

Tuesday, December 6, 2016

My Unhappy Life as a Climate Heretic. By Roger Pielke Jr.

My Unhappy Life as a Climate Heretic. By Roger Pielke Jr.
My research was attacked by thought police in journalism, activist groups funded by billionaires and even the White House.
http://www.wsj.com/articles/my-unhappy-life-as-a-climate-heretic-1480723518
Updated Dec. 2, 2016 7:04 p.m. ET

Much to my surprise, I showed up in the WikiLeaks releases before the election. In a 2014 email, a staffer at the Center for American Progress, founded by John Podesta in 2003, took credit for a campaign to have me eliminated as a writer for Nate Silver’s FiveThirtyEight website. In the email, the editor of the think tank’s climate blog bragged to one of its billionaire donors, Tom Steyer: “I think it’s fair [to] say that, without Climate Progress, Pielke would still be writing on climate change for 538.”

WikiLeaks provides a window into a world I’ve seen up close for decades: the debate over what to do about climate change, and the role of science in that argument. Although it is too soon to tell how the Trump administration will engage the scientific community, my long experience shows what can happen when politicians and media turn against inconvenient research—which we’ve seen under Republican and Democratic presidents.

I understand why Mr. Podesta—most recently Hillary Clinton’s campaign chairman—wanted to drive me out of the climate-change discussion. When substantively countering an academic’s research proves difficult, other techniques are needed to banish it. That is how politics sometimes works, and professors need to understand this if we want to participate in that arena.

More troubling is the degree to which journalists and other academics joined the campaign against me. What sort of responsibility do scientists and the media have to defend the ability to share research, on any subject, that might be inconvenient to political interests—even our own?

I believe climate change is real and that human emissions of greenhouse gases risk justifying action, including a carbon tax. But my research led me to a conclusion that many climate campaigners find unacceptable: There is scant evidence to indicate that hurricanes, floods, tornadoes or drought have become more frequent or intense in the U.S. or globally. In fact we are in an era of good fortune when it comes to extreme weather. This is a topic I’ve studied and published on as much as anyone over two decades. My conclusion might be wrong, but I think I’ve earned the right to share this research without risk to my career.

Instead, my research was under constant attack for years by activists, journalists and politicians. In 2011 writers in the journal Foreign Policy signaled that some accused me of being a “climate-change denier.” I earned the title, the authors explained, by “questioning certain graphs presented in IPCC reports.” That an academic who raised questions about the Intergovernmental Panel on Climate Change in an area of his expertise was tarred as a denier reveals the groupthink at work.

Yet I was right to question the IPCC’s 2007 report, which included a graph purporting to show that disaster costs were rising due to global temperature increases. The graph was later revealed to have been based on invented and inaccurate information, as I documented in my book “The Climate Fix.” The insurance industry scientist Robert Muir-Wood of Risk Management Solutions had smuggled the graph into the IPCC report. He explained in a public debate with me in London in 2010 that he had included the graph and misreferenced it because he expected future research to show a relationship between increasing disaster costs and rising temperatures.

When his research was eventually published in 2008, well after the IPCC report, it concluded the opposite: “We find insufficient evidence to claim a statistical relationship between global temperature increase and normalized catastrophe losses.” Whoops.

The IPCC never acknowledged the snafu, but subsequent reports got the science right: There is not a strong basis for connecting weather disasters with human-caused climate change.

Yes, storms and other extremes still occur, with devastating human consequences, but history shows they could be far worse. No Category 3, 4 or 5 hurricane has made landfall in the U.S. since Hurricane Wilma in 2005, by far the longest such period on record. This means that cumulative economic damage from hurricanes over the past decade is some $70 billion less than the long-term average would lead us to expect, based on my research with colleagues. This is good news, and it should be OK to say so. Yet in today’s hyper-partisan climate debate, every instance of extreme weather becomes a political talking point.

For a time I called out politicians and reporters who went beyond what science can support, but some journalists won’t hear of this. In 2011 and 2012, I pointed out on my blog and social media that the lead climate reporter at the New York Times, Justin Gillis, had mischaracterized the relationship of climate change and food shortages, and the relationship of climate change and disasters. His reporting wasn’t consistent with most expert views, or the evidence. In response he promptly blocked me from his Twitter feed. Other reporters did the same.

In August this year on Twitter, I criticized poor reporting on the website Mashable about a supposed coming hurricane apocalypse—including a bad misquote of me in the cartoon role of climate skeptic. (The misquote was later removed.) The publication’s lead science editor, Andrew Freedman, helpfully explained via Twitter that this sort of behavior “is why you’re on many reporters’ ‘do not call’ lists despite your expertise.”

I didn’t know reporters had such lists. But I get it. No one likes being told that he misreported scientific research, especially on climate change. Some believe that connecting extreme weather with greenhouse gases helps to advance the cause of climate policy. Plus, bad news gets clicks.

Yet more is going on here than thin-skinned reporters responding petulantly to a vocal professor. In 2015 I was quoted in the Los Angeles Times, by Pulitzer Prize-winning reporter Paige St. John, making the rather obvious point that politicians use the weather-of-the-moment to make the case for action on climate change, even if the scientific basis is thin or contested.

Ms. St. John was pilloried by her peers in the media. Shortly thereafter, she emailed me what she had learned: “You should come with a warning label: Quoting Roger Pielke will bring a hailstorm down on your work from the London Guardian, Mother Jones, and Media Matters.”

Or look at the journalists who helped push me out of FiveThirtyEight. My first article there, in 2014, was based on the consensus of the IPCC and peer-reviewed research. I pointed out that the global cost of disasters was increasing at a rate slower than GDP growth, which is very good news. Disasters still occur, but their economic and human effect is smaller than in the past. It’s not terribly complicated.

That article prompted an intense media campaign to have me fired. Writers at Slate, Salon, the New Republic, the New York Times, the Guardian and others piled on.

In March of 2014, FiveThirtyEight editor Mike Wilson demoted me from staff writer to freelancer. A few months later I chose to leave the site after it became clear it wouldn’t publish me. The mob celebrated. ClimateTruth.org, founded by former Center for American Progress staffer Brad Johnson, and advised by Penn State’s Michael Mann, called my departure a “victory for climate truth.” The Center for American Progress promised its donor Mr. Steyer more of the same.

Yet the climate thought police still weren’t done. In 2013 committees in the House and Senate invited me to several hearings to summarize the science on disasters and climate change. As a professor at a public university, I was happy to do so. My testimony was strong, and it was well aligned with the conclusions of the IPCC and the U.S. government’s climate-science program. Those conclusions indicate no overall increasing trend in hurricanes, floods, tornadoes or droughts—in the U.S. or globally.

In early 2014, not long after I appeared before Congress, President Obama’s science adviser John Holdren testified before the same Senate Environment and Public Works Committee. He was asked about his public statements that appeared to contradict the scientific consensus on extreme weather events that I had earlier presented. Mr. Holdren responded with the all-too-common approach of attacking the messenger, telling the senators incorrectly that my views were “not representative of the mainstream scientific opinion.” Mr. Holdren followed up by posting a strange essay, of nearly 3,000 words, on the White House website under the heading, “An Analysis of Statements by Roger Pielke Jr.,” where it remains today.

I suppose it is a distinction of a sort to be singled out in this manner by the president’s science adviser. Yet Mr. Holdren’s screed reads more like a dashed-off blog post from the nutty wings of the online climate debate, chock-full of errors and misstatements.

But when the White House puts a target on your back on its website, people notice. Almost a year later Mr. Holdren’s missive was the basis for an investigation of me by Arizona Rep. Raul Grijalva, the ranking Democrat on the House Natural Resources Committee. Rep. Grijalva explained in a letter to my university’s president that I was being investigated because Mr. Holdren had “highlighted what he believes were serious misstatements by Prof. Pielke of the scientific consensus on climate change.” He made the letter public.

The “investigation” turned out to be a farce. In the letter, Rep. Grijalva suggested that I—and six other academics with apparently heretical views—might be on the payroll of Exxon Mobil (or perhaps the Illuminati, I forget). He asked for records detailing my research funding, emails and so on. After some well-deserved criticism from the American Meteorological Society and the American Geophysical Union, Rep. Grijalva deleted the letter from his website. The University of Colorado complied with Rep. Grijalva’s request and responded that I have never received funding from fossil-fuel companies. My heretical views can be traced to research support from the U.S. government.

But the damage to my reputation had been done, and perhaps that was the point. Studying and engaging on climate change had become decidedly less fun. So I started researching and teaching other topics and have found the change in direction refreshing. Don’t worry about me: I have tenure and supportive campus leaders and regents. No one is trying to get me fired for my new scholarly pursuits.

But the lesson is that a lone academic is no match for billionaires, well-funded advocacy groups, the media, Congress and the White House. If academics—in any subject—are to play a meaningful role in public debate, the country will have to do a better job supporting good-faith researchers, even when their results are unwelcome. This goes for Republicans and Democrats alike, and for the incoming administration of President-elect Trump.

Academics and the media in particular should support viewpoint diversity instead of serving as the handmaidens of political expediency by trying to exclude voices or damage reputations and careers. If academics and the media won’t support open debate, who will?

---
Mr. Pielke is a professor and director of the Sports Governance Center at the University of Colorado, Boulder. His most recent book is “The Edge: The Wars Against Cheating and Corruption in the Cutthroat World of Elite Sports” (Roaring Forties Press, 2016).

Sunday, April 17, 2016

The Great Recession Blame Game - Banks took the heat, but it was Washington that propped up subprime debt and then stymied recovery

By Phil Gramm and Michael Solon
WSJ, April 15, 2016 6:09 p.m. ET

When the subprime crisis broke in the 2008 presidential election year, there was little chance for a serious discussion of its root causes. Candidate Barack Obama weaponized the crisis by blaming greedy bankers, unleashed when financial regulations were “simply dismantled.” He would go on to blame them for taking “huge, reckless risks in pursuit of quick profits and massive bonuses.”

That mistaken diagnosis was the justification for the Dodd-Frank Act and the stifling regulations that shackled the financial system, stunted the recovery and diminished the American dream.

In fact, when the crisis struck, banks were better capitalized and less leveraged than they had been in the previous 30 years. The FDIC’s reported capital-to-asset ratio for insured commercial banks in 2007 was 10.2%—76% higher than it was in 1978. Federal Reserve data on all insured financial institutions show the capital-to-asset ratio was 10.3% in 2007, almost double its 1984 level, and the biggest banks doubled their capitalization ratios. On Sept. 30, 2008, the month Lehman failed, the FDIC found that 98% of all FDIC institutions with 99% of all bank assets were “well capitalized,” and only 43 smaller institutions were undercapitalized.
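The relationship between the two capital ratios quoted above can be checked with a quick sketch; note that the 1978 ratio is implied by the article's figures rather than stated directly:

```python
# FDIC capital-to-asset ratio for insured commercial banks, per the article
ratio_2007 = 10.2        # percent, reported for 2007
stated_increase = 0.76   # "76% higher than it was in 1978"

# Implied 1978 ratio (derived from the article's numbers, not stated in it)
implied_1978 = ratio_2007 / (1 + stated_increase)
print(round(implied_1978, 1))  # ≈ 5.8 percent
```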

In addition, U.S. banks were by far the best-capitalized banks in the world. While the collapse of 31 million subprime mortgages fractured financial capital, the banking system as it stood at any point in the 30 years before 2007 would have fared even worse under such massive stress.

Virtually all of the undercapitalization, overleveraging and “reckless risks” flowed from government policies and institutions. Federal regulators followed international banking standards that treated most subprime-mortgage-backed securities as low-risk, with lower capital requirements that gave banks the incentive to hold them. Government quotas forced Fannie Mae and Freddie Mac to hold ever larger volumes of subprime mortgages, and politicians rolled the dice by letting them operate with a leverage ratio of 75 to one—compared with Lehman’s leverage ratio of 29 to one.

Regulators also eroded the safety of the financial system by pressuring banks to make subprime loans in order to increase homeownership. After eight years of vilification and government extortion of bank assets, often for carrying out government mandates, it is increasingly clear that banks were more scapegoats than villains in the subprime crisis.

Similarly, the charge that banks had been deregulated before the crisis is a myth. From 1980 to 2007 four major banking laws—the Competitive Equality Banking Act (1987), the Financial Institutions, Reform, Recovery and Enforcement Act (1989), the Federal Deposit Insurance Corporation Improvement Act (1991), and Sarbanes-Oxley (2002)—undeniably increased bank regulations and reporting requirements. The charge that financial regulation had been dismantled rests almost solely on the disputed effects of the 1999 Gramm-Leach-Bliley Act (GLBA).

Prior to GLBA, the decades-old Glass-Steagall Act prohibited deposit-taking, commercial banks from engaging in securities trading. GLBA, which was signed into law by President Bill Clinton, allowed highly regulated financial-services holding companies to compete in banking, insurance and the securities business. But each activity was still required to operate separately and remained subject to the regulations and capital requirements that existed before GLBA. A bank operating within a holding company was still subject to Glass-Steagall (which was not repealed by GLBA)—but Glass-Steagall never banned banks from holding mortgages or mortgage-backed securities in the first place.

GLBA loosened federal regulations only in the narrow sense that it promoted more competition across financial services and lowered prices. When he signed the law, President Clinton said that “removal of barriers to competition will enhance the stability of our financial system, diversify their product offerings and thus their sources of revenue.” The financial crisis proved his point. Financial institutions that had used GLBA provisions to diversify fared better than those that didn’t.

Mr. Clinton has always insisted that “there is not a single solitary example that [GLBA] had anything to do with the financial crisis,” a conclusion that has never been refuted. When asked by the New York Times in 2012, Sen. Elizabeth Warren agreed that the financial crisis would not have been avoided had GLBA never been adopted. And President Obama effectively exonerated GLBA from any culpability in the financial crisis when, with massive majorities in both Houses of Congress, he chose not to repeal GLBA. In fact, Dodd-Frank expanded GLBA by using its holding-company structure to impose new regulations on systemically important financial institutions.

Another myth of the financial crisis is that the bailout was required because some banks were too big to fail. Had the government’s massive injection of capital—the Troubled Asset Relief Program, or TARP—been only about bailing out too-big-to-fail financial institutions, at most a dozen institutions might have received aid. Instead, 954 financial institutions received assistance, with more than half the money going to small banks.

Many of the largest banks did not want or need aid—and Lehman’s collapse was not a case of a too-big-to-fail institution spreading the crisis. The entire financial sector was already poisoned by the same subprime assets that felled Lehman. The subprime bailout occurred because the U.S. financial sector was, and always should be, too important to be allowed to fail.

Consider that, according to the Congressional Budget Office, bailing out the depositors of insolvent S&Ls in the 1980s on net cost taxpayers $258 billion in real 2009 dollars. By contrast, of the $245 billion disbursed by TARP to banks, 67% was repaid within 14 months, 81% within two years and the final totals show that taxpayers earned $24 billion on the banking component of TARP. The rapid and complete payback of TARP funds by banks strongly suggests that the financial crisis was more a liquidity crisis than a solvency crisis.
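The TARP repayment figures above imply the following dollar amounts; the total-receipts line rests on the assumption (not stated in the article) that the taxpayers' gain equals receipts minus disbursements:

```python
disbursed = 245.0                      # $bn disbursed by TARP to banks
repaid_14_months = 0.67 * disbursed    # repaid within 14 months
repaid_two_years = 0.81 * disbursed    # repaid within two years
net_gain = 24.0                        # taxpayers' earnings on the banking component

# Implied total receipts (assumption: gain = receipts minus disbursements)
total_receipts = disbursed + net_gain
print(round(repaid_14_months), round(repaid_two_years), round(total_receipts))
# ≈ $164bn, ≈ $198bn, ≈ $269bn
```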

What turned the subprime crisis and ensuing recession into the “Great Recession” was not a failure of policies that addressed the financial crisis. Instead, it was the failure of subsequent economic policies that impeded the recovery.

The subprime crisis was largely the product of government policy to promote housing ownership and regulators who chose to promote that social policy over their traditional mission of guaranteeing safety and soundness. But blaming the financial crisis on reckless bankers and deregulation made it possible for the Obama administration to seize effective control of the financial system and put government bureaucrats in the corporate boardrooms of many of the most significant U.S. banks and insurance companies.

Suffocating under Dodd-Frank’s “enhanced supervision,” banks now focus on passing stress tests, writing living wills, parking capital at the Federal Reserve, and knowing their regulators better than they know their customers. But their ability to help the U.S. economy turn dreams into businesses and jobs has suffered.

In postwar America, it took on average just 2 1/4 years to regain in each succeeding recovery all of the real per capita income that had been lost in the previous recession. At the current rate of the Obama recovery, it will take six more years, 14 years in all, for the average American just to earn back what he lost in the last recession. Mr. Obama’s policies in banking, health care, power generation, the Internet and so much else have Europeanized America, and American exceptionalism has waned—sadly proving that collectivism does not work any better in America than it has ever worked anywhere else.

Mr. Gramm, a former chairman of the Senate Banking Committee, is a visiting scholar at the American Enterprise Institute. Mr. Solon is a partner of US Policy Metrics.

 

Saturday, March 12, 2016

A New Tool for Avoiding Big-Bank Failures: ‘Chapter 14.’ By Emily C. Kapur and John B. Taylor

Bernie Sanders is right, Dodd-Frank doesn’t work, but his solution is wrong. Here’s what would work.

WSJ, Mar 11, 2016



For months Democratic presidential hopeful Bernie Sanders has been telling Americans that the government must “break up the banks” because they are “too big to fail.” This is the wrong role for government, but Sen. Sanders and others on both sides of the aisle have a point. The 2010 Dodd-Frank financial law, which was supposed to end too big to fail, has not.

Dodd-Frank gave the Federal Deposit Insurance Corp. authority to take over and oversee the reorganization of so-called systemically important financial institutions whose failure could pose a risk to the economy. But no one can be sure the FDIC will follow its resolution strategy, which leads many to believe Dodd-Frank will be bypassed in a crisis.

Reflecting on his own experience as overseer of the U.S. Treasury’s bailout program in 2008-09, Neel Kashkari, now president of the Federal Reserve Bank of Minneapolis, says government officials are once again likely to bail out big banks and their creditors rather than “trigger many trillions of additional costs to society.”

The solution is not to break up the banks or turn them into public utilities. Instead, we should do what Dodd-Frank failed to do: Make big-bank failures feasible without tanking the economy by writing a process to do so into the bankruptcy code through a new amendment—a “chapter 14.”

Chapter 14 would impose losses on shareholders and creditors while preventing the collapse of one firm from spreading to others. It could be initiated by the lead regulatory agency and would begin with an over-the-weekend bankruptcy hearing before a pre-selected U.S. district judge. After the hearing, the court would convert the bank’s eligible long-term debt into equity, reorganizing the bankrupt bank’s balance sheet without restructuring its operations.

A new non-bankrupt company, owned by the bankruptcy estate (the temporary legal owner of a failed company’s assets and property), would assume the recapitalized balance sheet of the failed bank, including all obligations to its short-term creditors. But the failed bank’s shareholders and long-term bondholders would have claims only against the estate, not the new company.

The new firm would take over the bank’s business and be led by the bankruptcy estate’s chosen private-sector managers. With regulations requiring minimum long-term debt levels, the new firm would be solvent. The bankruptcy would be entirely contained, both because the new bank would keep operating and paying its debts, and because losses would be allocated entirely to the old bank’s shareholders and long-term bondholders.

An examination by one of us (Emily Kapur) of previously unexplored discovery and court documents from Lehman Brothers’ September 2008 bankruptcy shows that chapter 14 would have worked especially well for that firm, without adverse effects on the financial system.

Here is how Lehman under chapter 14 would have played out. The process would start with a single, brief hearing for the parent company to facilitate the creation of a new recapitalized company—a hearing in which the judge would have minimal discretion. By contrast, Lehman’s actual bankruptcy involved dozens of complex proceedings in the U.S. and abroad, creating huge uncertainty and making it impossible for even part of the firm to remain in business.

When Lehman went under it had $20 billion of book equity and $96 billion of long-term debt, while its perceived losses were around $54 billion. If the costs of a chapter 14 proceeding amounted to an additional (and conservative) $10 billion, then the new company would be well capitalized with around $52 billion of equity.

The new parent company would take over Lehman’s subsidiaries, all of which would continue in business, outside of bankruptcy. And the new company would honor all obligations to short-term creditors, such as repurchase agreement and commercial paper lenders.

The result: Short-term creditors would have no reason to run on the bank before the bankruptcy proceeding, knowing they would be protected. And they would have no reason to run afterward, because the new firm would be solvent.

Without a run, Lehman would have $30 billion more liquidity after resolution than it had in 2008, easing subsequent operational challenges. In the broader marketplace, money-market funds would have no reason to curtail lending to corporations, hedge funds would not flee so readily from prime brokers, and investment banks would be less likely to turn to the government for financing.

Eventually, the new company would make a public stock offering to value the bankruptcy estate’s ownership interest, and the estate would distribute its assets according to statutory priority rules. If the valuation came in at $52 billion, Lehman shareholders would be wiped out, as they were in 2008. Long-term debtholders, with $96 billion in claims, would recover 54 cents on the dollar, more than the 37 cents they did receive. All other creditors—the large majority—would be paid in full at maturity.
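The recapitalization arithmetic above can be verified with a short sketch, using only the figures quoted in the article:

```python
# Lehman figures at failure, $bn, as quoted in the article
book_equity = 20.0
long_term_debt = 96.0
perceived_losses = 54.0
chapter14_costs = 10.0   # the article's "conservative" cost estimate

# Convert long-term debt to equity, then absorb losses and proceeding costs
new_equity = book_equity + long_term_debt - perceived_losses - chapter14_costs
print(new_equity)  # 52.0 — the "well capitalized" figure

# Statutory priority: long-term debtholders are paid before old shareholders
debt_recovery = min(new_equity, long_term_debt) / long_term_debt
shareholder_recovery = max(new_equity - long_term_debt, 0.0)
print(round(debt_recovery, 2), shareholder_recovery)  # 0.54 (54 cents on the dollar), 0.0
```

With an estate valuation of $52 billion against $96 billion of long-term claims, debtholders recover about 54 cents on the dollar and the old shareholders are wiped out, matching the article's figures.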

Other reforms, such as higher capital requirements, may yet be needed to reduce risk and lessen the chance of financial failure. But that is no reason to wait on bankruptcy reform. A bill along the lines of the chapter 14 that we advocate passed the House Judiciary Committee on Feb. 11. Two versions await action in the Senate. Let’s end too big to fail, once and for all.
 
Ms. Kapur is an attorney and economics Ph.D. candidate at Stanford University. Mr. Taylor, a professor of economics at Stanford, co-edited “Making Failure Feasible” (Hoover, 2015) with Kenneth Scott and Thomas Jackson, which includes Ms. Kapur’s study.

Sunday, November 29, 2015

The Fed: shortcomings in policies & procedures, insufficient model testing & incomplete structures & information flows for proper oversight

The Fed Is Stressed Out. A WSJ Editorial
www.wsj.com/articles/the-fed-is-stressed-out-1448574493
What if a bank had the same problem the regulators have?
Wall Street Journal, Nov 28, 2015

Almost nobody in Washington cares, and most of the financial media haven’t noticed. But the inspector general’s office at the Federal Reserve recently reported the disturbing results of an internal investigation. Last December the central bank internally identified “fundamental weaknesses in key areas” related to the Fed’s own governance of the stress testing it conducts of financial firms.

The Fed’s stress tests theoretically judge whether the country’s largest banks can withstand economic downturns. So the Fed identifying a problem with its own management of the stress tests is akin to an energy company noticing that something is not right at one of its nuclear reactors.

According to the inspector general, “The governance review findings include, among other items, a shortcoming in policies and procedures, insufficient model testing” and “incomplete structures and information flows to ensure proper oversight of model risk management.” These Fed models are essentially a black box to the public, so there’s no way to tell from the outside how large a problem this is.

The Fed’s ability to construct and maintain financial and economic models is much more than a subject of intellectual curiosity. Given that Fed-approved models at the heart of the so-called Basel capital standards proved to be spectacularly wrong in the run-up to the last financial crisis, the new report is more reason to wonder why anyone should expect them to be more accurate the next time.

The Fed’s IG adds that last year’s internal review “notes that similar findings identified at institutions supervised by the Federal Reserve have typically been characterized as matters requiring immediate attention or as matters requiring attention.”

That’s for sure. Receiving a “matters requiring immediate attention” letter from the Fed is a big deal at a bank. The Journal reported last year that after the Fed used this language in a letter to Credit Suisse castigating the bank’s work in the market for leveraged loans, the bank chose not to participate in the financing of several buy-out deals.

But it’s hard to tell if anything will come from this report that seems to have fallen deep in a Beltway forest. The IG office’s report says that the Fed is taking a number of steps to correct its shortcomings, and that the Fed’s reform plans “appear to be responsive to our recommendations.”

The Fed wields enormous power with little democratic accountability and transparency. This was tolerable when the Fed’s main job was monetary, but its vast new regulatory authority requires more scrutiny. Congress should add the Fed’s stressed-out standards for stress tests to its oversight list.