Tuesday, December 6, 2016

My Unhappy Life as a Climate Heretic. By Roger Pielke Jr.

My research was attacked by thought police in journalism, activist groups funded by billionaires and even the White House.
http://www.wsj.com/articles/my-unhappy-life-as-a-climate-heretic-1480723518
Updated Dec. 2, 2016 7:04 p.m. ET

Much to my surprise, I showed up in the WikiLeaks releases before the election. In a 2014 email, a staffer at the Center for American Progress, founded by John Podesta in 2003, took credit for a campaign to have me eliminated as a writer for Nate Silver’s FiveThirtyEight website. In the email, the editor of the think tank’s climate blog bragged to one of its billionaire donors, Tom Steyer: “I think it’s fair [to] say that, without Climate Progress, Pielke would still be writing on climate change for 538.”

WikiLeaks provides a window into a world I’ve seen up close for decades: the debate over what to do about climate change, and the role of science in that argument. Although it is too soon to tell how the Trump administration will engage the scientific community, my long experience shows what can happen when politicians and media turn against inconvenient research—which we’ve seen under Republican and Democratic presidents.

I understand why Mr. Podesta—most recently Hillary Clinton’s campaign chairman—wanted to drive me out of the climate-change discussion. When substantively countering an academic’s research proves difficult, other techniques are needed to banish it. That is how politics sometimes works, and professors need to understand this if we want to participate in that arena.

More troubling is the degree to which journalists and other academics joined the campaign against me. What sort of responsibility do scientists and the media have to defend the ability to share research, on any subject, that might be inconvenient to political interests—even our own?

I believe climate change is real and that human emissions of greenhouse gases risk justifying action, including a carbon tax. But my research led me to a conclusion that many climate campaigners find unacceptable: There is scant evidence to indicate that hurricanes, floods, tornadoes or drought have become more frequent or intense in the U.S. or globally. In fact we are in an era of good fortune when it comes to extreme weather. This is a topic I’ve studied and published on as much as anyone over two decades. My conclusion might be wrong, but I think I’ve earned the right to share this research without risk to my career.

Instead, my research was under constant attack for years by activists, journalists and politicians. In 2011 writers in the journal Foreign Policy signaled that some accused me of being a “climate-change denier.” I earned the title, the authors explained, by “questioning certain graphs presented in IPCC reports.” That an academic who raised questions about the Intergovernmental Panel on Climate Change in an area of his expertise was tarred as a denier reveals the groupthink at work.

Yet I was right to question the IPCC’s 2007 report, which included a graph purporting to show that disaster costs were rising due to global temperature increases. The graph was later revealed to have been based on invented and inaccurate information, as I documented in my book “The Climate Fix.” The insurance industry scientist Robert Muir-Wood of Risk Management Solutions had smuggled the graph into the IPCC report. He explained in a public debate with me in London in 2010 that he had included the graph and misreferenced it because he expected future research to show a relationship between increasing disaster costs and rising temperatures.

When his research was eventually published in 2008, well after the IPCC report, it concluded the opposite: “We find insufficient evidence to claim a statistical relationship between global temperature increase and normalized catastrophe losses.” Whoops.

The IPCC never acknowledged the snafu, but subsequent reports got the science right: There is not a strong basis for connecting weather disasters with human-caused climate change.

Yes, storms and other extremes still occur, with devastating human consequences, but history shows they could be far worse. No Category 3, 4 or 5 hurricane has made landfall in the U.S. since Hurricane Wilma in 2005, by far the longest such period on record. This means that cumulative economic damage from hurricanes over the past decade is some $70 billion less than the long-term average would lead us to expect, based on my research with colleagues. This is good news, and it should be OK to say so. Yet in today’s hyper-partisan climate debate, every instance of extreme weather becomes a political talking point.

For a time I called out politicians and reporters who went beyond what science can support, but some journalists won’t hear of this. In 2011 and 2012, I pointed out on my blog and social media that the lead climate reporter at the New York Times, Justin Gillis, had mischaracterized the relationship of climate change and food shortages, and the relationship of climate change and disasters. His reporting wasn’t consistent with most expert views, or the evidence. In response he promptly blocked me from his Twitter feed. Other reporters did the same.

In August this year on Twitter, I criticized poor reporting on the website Mashable about a supposed coming hurricane apocalypse—including a bad misquote of me in the cartoon role of climate skeptic. (The misquote was later removed.) The publication’s lead science editor, Andrew Freedman, helpfully explained via Twitter that this sort of behavior “is why you’re on many reporters’ ‘do not call’ lists despite your expertise.”

I didn’t know reporters had such lists. But I get it. No one likes being told that he misreported scientific research, especially on climate change. Some believe that connecting extreme weather with greenhouse gases helps to advance the cause of climate policy. Plus, bad news gets clicks.

Yet more is going on here than thin-skinned reporters responding petulantly to a vocal professor. In 2015 I was quoted in the Los Angeles Times, by Pulitzer Prize-winning reporter Paige St. John, making the rather obvious point that politicians use the weather-of-the-moment to make the case for action on climate change, even if the scientific basis is thin or contested.

Ms. St. John was pilloried by her peers in the media. Shortly thereafter, she emailed me what she had learned: “You should come with a warning label: Quoting Roger Pielke will bring a hailstorm down on your work from the London Guardian, Mother Jones, and Media Matters.”

Or look at the journalists who helped push me out of FiveThirtyEight. My first article there, in 2014, was based on the consensus of the IPCC and peer-reviewed research. I pointed out that the global cost of disasters was increasing at a rate slower than GDP growth, which is very good news. Disasters still occur, but their economic and human effect is smaller than in the past. It’s not terribly complicated.

That article prompted an intense media campaign to have me fired. Writers at Slate, Salon, the New Republic, the New York Times, the Guardian and others piled on.

In March of 2014, FiveThirtyEight editor Mike Wilson demoted me from staff writer to freelancer. A few months later I chose to leave the site after it became clear it wouldn’t publish me. The mob celebrated. ClimateTruth.org, founded by former Center for American Progress staffer Brad Johnson, and advised by Penn State’s Michael Mann, called my departure a “victory for climate truth.” The Center for American Progress promised its donor Mr. Steyer more of the same.

Yet the climate thought police still weren’t done. In 2013 committees in the House and Senate invited me to several hearings to summarize the science on disasters and climate change. As a professor at a public university, I was happy to do so. My testimony was strong, and it was well aligned with the conclusions of the IPCC and the U.S. government’s climate-science program. Those conclusions indicate no overall increasing trend in hurricanes, floods, tornadoes or droughts—in the U.S. or globally.

In early 2014, not long after I appeared before Congress, President Obama’s science adviser John Holdren testified before the same Senate Environment and Public Works Committee. He was asked about his public statements that appeared to contradict the scientific consensus on extreme weather events that I had earlier presented. Mr. Holdren responded with the all-too-common approach of attacking the messenger, telling the senators incorrectly that my views were “not representative of the mainstream scientific opinion.” Mr. Holdren followed up by posting a strange essay, of nearly 3,000 words, on the White House website under the heading, “An Analysis of Statements by Roger Pielke Jr.,” where it remains today.

I suppose it is a distinction of a sort to be singled out in this manner by the president’s science adviser. Yet Mr. Holdren’s screed reads more like a dashed-off blog post from the nutty wings of the online climate debate, chock-full of errors and misstatements.

But when the White House puts a target on your back on its website, people notice. Almost a year later Mr. Holdren’s missive was the basis for an investigation of me by Arizona Rep. Raul Grijalva, the ranking Democrat on the House Natural Resources Committee. Rep. Grijalva explained in a letter to my university’s president that I was being investigated because Mr. Holdren had “highlighted what he believes were serious misstatements by Prof. Pielke of the scientific consensus on climate change.” He made the letter public.

The “investigation” turned out to be a farce. In the letter, Rep. Grijalva suggested that I—and six other academics with apparently heretical views—might be on the payroll of Exxon Mobil (or perhaps the Illuminati, I forget). He asked for records detailing my research funding, emails and so on. After some well-deserved criticism from the American Meteorological Society and the American Geophysical Union, Rep. Grijalva deleted the letter from his website. The University of Colorado complied with Rep. Grijalva’s request and responded that I have never received funding from fossil-fuel companies. My heretical views can be traced to research support from the U.S. government.

But the damage to my reputation had been done, and perhaps that was the point. Studying and engaging on climate change had become decidedly less fun. So I started researching and teaching other topics and have found the change in direction refreshing. Don’t worry about me: I have tenure and supportive campus leaders and regents. No one is trying to get me fired for my new scholarly pursuits.

But the lesson is that a lone academic is no match for billionaires, well-funded advocacy groups, the media, Congress and the White House. If academics—in any subject—are to play a meaningful role in public debate, the country will have to do a better job supporting good-faith researchers, even when their results are unwelcome. This goes for Republicans and Democrats alike, and for the administration of President-elect Trump.

Academics and the media in particular should support viewpoint diversity instead of serving as the handmaidens of political expediency by trying to exclude voices or damage reputations and careers. If academics and the media won’t support open debate, who will?

---
Mr. Pielke is a professor and director of the Sports Governance Center at the University of Colorado, Boulder. His most recent book is “The Edge: The Wars Against Cheating and Corruption in the Cutthroat World of Elite Sports” (Roaring Forties Press, 2016).

Sunday, September 25, 2016

The Value of Prospective Reasoning for Close Relationships

This is not only useful for close relationships:


The Value of Prospective Reasoning for Close Relationships

Alex C. Huynh (1), Daniel Y.-J. Yang (2), Igor Grossmann (1)

1. Department of Psychology, University of Waterloo, Waterloo, Ontario, Canada
2. Child Study Center, Yale University, New Haven, CT, USA

Corresponding authors: Alex C. Huynh or Igor Grossmann, Department of Psychology, University of Waterloo, 200 University Avenue West, Waterloo, Ontario, Canada N2L 3G1. Emails: alex.huynh@uwaterloo.ca; igrossma@uwaterloo.ca

Abstract

http://spp.sagepub.com/content/early/2016/07/27/1948550616660591.abstract

We examined how adopting a future (vs. present)-oriented perspective when reflecting on a relationship conflict impacts the process of reasoning and relationship well-being. Across two studies, participants instructed to think about how they would feel in the future (vs. present) expressed more adaptive reasoning over a relationship conflict—low partner blame, greater insight, and greater forgiveness, which was then associated with greater relationship well-being—for example, more positive versus negative emotions about the relationship and expectations that the relationship will grow. These findings were driven by a decrease in person-centered language when reflecting on the conflict. Implications for understanding how tempo…

Sunday, April 17, 2016

The Great Recession Blame Game - Banks took the heat, but it was Washington that propped up subprime debt and then stymied recovery

The Great Recession Blame Game

Banks took the heat, but it was Washington that propped up subprime debt and then stymied recovery.

By Phil Gramm and Michael Solon
WSJ, April 15, 2016 6:09 p.m. ET

When the subprime crisis broke in the 2008 presidential election year, there was little chance for a serious discussion of its root causes. Candidate Barack Obama weaponized the crisis by blaming greedy bankers, unleashed when financial regulations were “simply dismantled.” He would go on to blame them for taking “huge, reckless risks in pursuit of quick profits and massive bonuses.”

That mistaken diagnosis was the justification for the Dodd-Frank Act and the stifling regulations that shackled the financial system, stunted the recovery and diminished the American dream.

In fact, when the crisis struck, banks were better capitalized and less leveraged than they had been in the previous 30 years. The FDIC’s reported capital-to-asset ratio for insured commercial banks in 2007 was 10.2%—76% higher than it was in 1978. Federal Reserve data on all insured financial institutions show the capital-to-asset ratio was 10.3% in 2007, almost double its 1984 level, and the biggest banks doubled their capitalization ratios. On Sept. 30, 2008, the month Lehman failed, the FDIC found that 98% of all FDIC institutions with 99% of all bank assets were “well capitalized,” and only 43 smaller institutions were undercapitalized.
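
A small sketch that backs out what those comparisons imply about the earlier ratios. The 2007 numbers are the editorial's; the 1978 and 1984 values are my inference from "76% higher" and "almost double," so treat them as rough:

```python
# Back out the implied historical capital-to-asset ratios from the
# percentages quoted above. The 2007 figures come from the editorial;
# the 1978 and 1984 values are inferred, not reported.

fdic_2007 = 10.2  # FDIC ratio for insured commercial banks, 2007 (%)
fed_2007 = 10.3   # Fed figure for all insured institutions, 2007 (%)

implied_1978 = fdic_2007 / 1.76  # implied by "76% higher than it was in 1978"
implied_1984 = fed_2007 / 2.0    # implied by "almost double its 1984 level"

print(f"Implied 1978 ratio: ~{implied_1978:.1f}%")  # roughly 5.8%
print(f"Implied 1984 ratio: ~{implied_1984:.1f}%")  # roughly 5.2%
```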

In addition, U.S. banks were by far the best-capitalized banks in the world. While the collapse of 31 million subprime mortgages fractured financial capital, the banking system as it was capitalized in the 30 years before 2007 would have fared even worse under such massive stress.

Virtually all of the undercapitalization, overleveraging and “reckless risks” flowed from government policies and institutions. Federal regulators followed international banking standards that treated most subprime-mortgage-backed securities as low-risk, with lower capital requirements that gave banks the incentive to hold them. Government quotas forced Fannie Mae and Freddie Mac to hold ever larger volumes of subprime mortgages, and politicians rolled the dice by letting them operate with a leverage ratio of 75 to one—compared with Lehman’s leverage ratio of 29 to one.

Regulators also eroded the safety of the financial system by pressuring banks to make subprime loans in order to increase homeownership. After eight years of vilification and government extortion of bank assets, often for carrying out government mandates, it is increasingly clear that banks were more scapegoats than villains in the subprime crisis.

Similarly, the charge that banks had been deregulated before the crisis is a myth. From 1980 to 2007 four major banking laws—the Competitive Equality Banking Act (1987), the Financial Institutions, Reform, Recovery and Enforcement Act (1989), the Federal Deposit Insurance Corporation Improvement Act (1991), and Sarbanes-Oxley (2002)—undeniably increased bank regulations and reporting requirements. The charge that financial regulation had been dismantled rests almost solely on the disputed effects of the 1999 Gramm-Leach-Bliley Act (GLBA).

Prior to GLBA, the decades-old Glass-Steagall Act prohibited deposit-taking, commercial banks from engaging in securities trading. GLBA, which was signed into law by President Bill Clinton, allowed highly regulated financial-services holding companies to compete in banking, insurance and the securities business. But each activity was still required to operate separately and remained subject to the regulations and capital requirements that existed before GLBA. A bank operating within a holding company was still subject to Glass-Steagall (which was not repealed by GLBA)—but Glass-Steagall never banned banks from holding mortgages or mortgage-backed securities in the first place.

GLBA loosened federal regulations only in the narrow sense that it promoted more competition across financial services and lowered prices. When he signed the law, President Clinton said that “removal of barriers to competition will enhance the stability of our financial system, diversify their product offerings and thus their sources of revenue.” The financial crisis proved his point. Financial institutions that had used GLBA provisions to diversify fared better than those that didn’t.

Mr. Clinton has always insisted that “there is not a single solitary example that [GLBA] had anything to do with the financial crisis,” a conclusion that has never been refuted. When asked by the New York Times in 2012, Sen. Elizabeth Warren agreed that the financial crisis would not have been avoided had GLBA never been adopted. And President Obama effectively exonerated GLBA from any culpability in the financial crisis when, with massive majorities in both Houses of Congress, he chose not to repeal GLBA. In fact, Dodd-Frank expanded GLBA by using its holding-company structure to impose new regulations on systemically important financial institutions.

Another myth of the financial crisis is that the bailout was required because some banks were too big to fail. Had the government’s massive injection of capital—the Troubled Asset Relief Program, or TARP—been only about bailing out too-big-to-fail financial institutions, at most a dozen institutions might have received aid. Instead, 954 financial institutions received assistance, with more than half the money going to small banks.

Many of the largest banks did not want or need aid—and Lehman’s collapse was not a case of a too-big-to-fail institution spreading the crisis. The entire financial sector was already poisoned by the same subprime assets that felled Lehman. The subprime bailout occurred because the U.S. financial sector was, and always should be, too important to be allowed to fail.

Consider that, according to the Congressional Budget Office, bailing out the depositors of insolvent S&Ls in the 1980s on net cost taxpayers $258 billion in real 2009 dollars. By contrast, of the $245 billion disbursed by TARP to banks, 67% was repaid within 14 months, 81% within two years and the final totals show that taxpayers earned $24 billion on the banking component of TARP. The rapid and complete payback of TARP funds by banks strongly suggests that the financial crisis was more a liquidity crisis than a solvency crisis.
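
A quick back-of-the-envelope check of those TARP figures in Python; the dollar amounts and percentages are quoted in the paragraph above, and the rounding is mine:

```python
# Translate the TARP repayment percentages quoted above into dollar amounts.
# All inputs are from the editorial; the arithmetic and rounding are mine.

tarp_to_banks = 245  # $ billions disbursed to banks under TARP

repaid_14_months = 0.67 * tarp_to_banks  # repaid within 14 months
repaid_two_years = 0.81 * tarp_to_banks  # repaid within two years
taxpayer_gain = 24                       # final net gain on the banking component

print(f"Repaid within 14 months: ~${repaid_14_months:.0f}B")  # ~$164B
print(f"Repaid within two years: ~${repaid_two_years:.0f}B")  # ~$198B
print(f"Eventually returned: ~${tarp_to_banks + taxpayer_gain}B "
      f"on ${tarp_to_banks}B disbursed")
```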

What turned the subprime crisis and ensuing recession into the “Great Recession” was not a failure of policies that addressed the financial crisis. Instead, it was the failure of subsequent economic policies that impeded the recovery.

The subprime crisis was largely the product of government policy to promote housing ownership and regulators who chose to promote that social policy over their traditional mission of guaranteeing safety and soundness. But blaming the financial crisis on reckless bankers and deregulation made it possible for the Obama administration to seize effective control of the financial system and put government bureaucrats in the corporate boardrooms of many of the most significant U.S. banks and insurance companies.

Suffocating under Dodd-Frank’s “enhanced supervision,” banks now focus on passing stress tests, writing living wills, parking capital at the Federal Reserve, and knowing their regulators better than they know their customers. But their ability to help the U.S. economy turn dreams into businesses and jobs has suffered.

In postwar America, it took on average just 2 1/4 years to regain in each succeeding recovery all of the real per capita income that had been lost in the previous recession. At the current rate of the Obama recovery, it will take six more years, 14 years in all, for the average American just to earn back what he lost in the last recession. Mr. Obama’s policies in banking, health care, power generation, the Internet and so much else have Europeanized America and American exceptionalism has waned—sadly proving that collectivism does not work any better in America than it has ever worked anywhere else.

Mr. Gramm, a former chairman of the Senate Banking Committee, is a visiting scholar at the American Enterprise Institute. Mr. Solon is a partner of US Policy Metrics.


Saturday, March 12, 2016

A New Tool for Avoiding Big-Bank Failures: ‘Chapter 14.’ By Emily C. Kapur and John B. Taylor

A New Tool for Avoiding Big-Bank Failures: ‘Chapter 14.’ By Emily C. Kapur and John B. Taylor

Bernie Sanders is right, Dodd-Frank doesn’t work, but his solution is wrong. Here’s what would work.

WSJ, Mar 11, 2016



For months Democratic presidential hopeful Bernie Sanders has been telling Americans that the government must “break up the banks” because they are “too big to fail.” This is the wrong role for government, but Sen. Sanders and others on both sides of the aisle have a point. The 2010 Dodd-Frank financial law, which was supposed to end too big to fail, has not.

Dodd-Frank gave the Federal Deposit Insurance Corp. authority to take over and oversee the reorganization of so-called systemically important financial institutions whose failure could pose a risk to the economy. But no one can be sure the FDIC will follow its resolution strategy, which leads many to believe Dodd-Frank will be bypassed in a crisis.

Reflecting on his own experience as overseer of the U.S. Treasury’s bailout program in 2008-09, Neel Kashkari, now president of the Federal Reserve Bank of Minneapolis, says government officials are once again likely to bail out big banks and their creditors rather than “trigger many trillions of additional costs to society.”

The solution is not to break up the banks or turn them into public utilities. Instead, we should do what Dodd-Frank failed to do: Make big-bank failures feasible without tanking the economy by writing a process to do so into the bankruptcy code through a new amendment—a “chapter 14.”

Chapter 14 would impose losses on shareholders and creditors while preventing the collapse of one firm from spreading to others. It could be initiated by the lead regulatory agency and would begin with an over-the-weekend bankruptcy hearing before a pre-selected U.S. district judge. After the hearing, the court would convert the bank’s eligible long-term debt into equity, reorganizing the bankrupt bank’s balance sheet without restructuring its operations.

A new non-bankrupt company, owned by the bankruptcy estate (the temporary legal owner of a failed company’s assets and property), would assume the recapitalized balance sheet of the failed bank, including all obligations to its short-term creditors. But the failed bank’s shareholders and long-term bondholders would have claims only against the estate, not the new company.

The new firm would take over the bank’s business and be led by the bankruptcy estate’s chosen private-sector managers. With regulations requiring minimum long-term debt levels, the new firm would be solvent. The bankruptcy would be entirely contained, both because the new bank would keep operating and paying its debts, and because losses would be allocated entirely to the old bank’s shareholders and long-term bondholders.

An examination by one of us (Emily Kapur) of previously unexplored discovery and court documents from Lehman Brothers’ September 2008 bankruptcy shows that chapter 14 would have worked especially well for that firm, without adverse effects on the financial system.

Here is how Lehman under chapter 14 would have played out. The process would start with a single, brief hearing for the parent company to facilitate the creation of a new recapitalized company—a hearing in which the judge would have minimal discretion. By contrast, Lehman’s actual bankruptcy involved dozens of complex proceedings in the U.S. and abroad, creating huge uncertainty and making it impossible for even part of the firm to remain in business.

When Lehman went under it had $20 billion of book equity and $96 billion of long-term debt, while its perceived losses were around $54 billion. If the costs of a chapter 14 proceeding amounted to an additional (and conservative) $10 billion, then the new company would be well capitalized with around $52 billion of equity.

The new parent company would take over Lehman’s subsidiaries, all of which would continue in business, outside of bankruptcy. And the new company would honor all obligations to short-term creditors, such as repurchase agreement and commercial paper lenders.

The result: Short-term creditors would have no reason to run on the bank before the bankruptcy proceeding, knowing they would be protected. And they would have no reason to run afterward, because the new firm would be solvent.

Without a run, Lehman would have $30 billion more liquidity after resolution than it had in 2008, easing subsequent operational challenges. In the broader marketplace, money-market funds would have no reason to curtail lending to corporations, hedge funds would not flee so readily from prime brokers, and investment banks would be less likely to turn to the government for financing.

Eventually, the new company would make a public stock offering to value the bankruptcy estate’s ownership interest, and the estate would distribute its assets according to statutory priority rules. If the valuation came in at $52 billion, Lehman shareholders would be wiped out, as they were in 2008. Long-term debtholders, with $96 billion in claims, would recover 54 cents on the dollar, more than the 37 cents they did receive. All other creditors—the large majority—would be paid in full at maturity.
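
For readers who want to trace the chapter 14 arithmetic from the last few paragraphs, here is a minimal sketch in Python. The figures are the article's; the simple netting of losses and costs against the debt-for-equity conversion is an illustrative assumption, not the authors' actual model:

```python
# A back-of-the-envelope sketch of the Lehman chapter 14 figures quoted above.
# All dollar amounts are in billions and come from the article; the netting
# below is an illustrative reading of the passage, not the authors' model.

book_equity = 20       # Lehman's book equity when it failed
long_term_debt = 96    # long-term debt converted into equity under chapter 14
perceived_losses = 54  # the market's perceived losses at the time
proceeding_costs = 10  # assumed (conservative) cost of the chapter 14 process

# The debt-for-equity conversion recapitalizes the new company; losses and
# proceeding costs are absorbed by the enlarged equity cushion.
new_equity = book_equity + long_term_debt - perceived_losses - proceeding_costs
print(f"New company's equity: ${new_equity}B")  # -> $52B

# With a $52B estate valuation, old shareholders (junior) are wiped out and
# the $96B of long-term claims split the estate's value.
recovery = new_equity / long_term_debt
print(f"Long-term debt recovery: {recovery:.0%}")  # -> 54%, vs. 37% actual
```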

Other reforms, such as higher capital requirements, may yet be needed to reduce risk and lessen the chance of financial failure. But that is no reason to wait on bankruptcy reform. A bill along the lines of the chapter 14 that we advocate passed the House Judiciary Committee on Feb. 11. Two versions await action in the Senate. Let’s end too big to fail, once and for all.
Ms. Kapur is an attorney and economics Ph.D. candidate at Stanford University. Mr. Taylor, a professor of economics at Stanford, co-edited “Making Failure Feasible” (Hoover, 2015) with Kenneth Scott and Thomas Jackson, which includes Ms. Kapur’s study.

Sunday, January 17, 2016

The Secret of Immigrant Genius - Having your world turned upside down sparks creative thinking

The Secret of Immigrant Genius. By Eric Weiner 

Having your world turned upside down sparks creative thinking


Wall Street Journal, Jan 16, 2016

http://www.wsj.com/articles/the-secret-of-immigrant-genius-1452875951

Scan the roster of history’s intellectual and artistic giants, and you quickly notice something remarkable: Many were immigrants or refugees, from Victor Hugo, W.H. Auden and Vladimir Nabokov to Nikola Tesla, Marie Curie and Sigmund Freud. At the top of this pantheon sits the genius’s genius: Einstein. His “miracle year” of 1905, when he published no fewer than four groundbreaking scientific papers, occurred after he had emigrated from Germany to Switzerland.

Lost in today’s immigration debate is this unavoidable fact: An awful lot of brilliant minds blossomed in alien soil. That is especially true of the U.S., a nation defined by the creative zeal of the newcomer. Today, foreign-born residents account for only 13% of the U.S. population but hold nearly a third of all patents and a quarter of all Nobel Prizes awarded to Americans.

But why? What is it about the act of relocating to distant shores—voluntarily or not—that sparks creative genius?

When pressed to explain, we usually turn to a tidy narrative: Scruffy but determined immigrant, hungry for success, arrives on distant shores. Immigrant works hard. Immigrant is bolstered by a supportive family, as well as a wider network from the old country. Immigrant succeeds, buys flashy new threads.

It is an inspiring narrative—but it is also misleading. That fierce drive might explain why immigrants and refugees succeed in their chosen fields, but it fails to explain their exceptional creativity. It fails to explain their genius.

Recent research points to an intriguing explanation. Several studies have shed light on the role of “schema violations” in intellectual development. A schema violation occurs when our world is turned upside-down, when temporal and spatial cues are off-kilter.

In a 2011 study led by the Dutch psychologist Simone Ritter and published in the Journal of Experimental Social Psychology, researchers asked some subjects to make breakfast in the “wrong” order and others to perform the task in the conventional manner. Those in the first group—the ones engaged in a schema violation—consistently demonstrated more “cognitive flexibility,” a prerequisite for creative thinking.

This suggests that it isn’t the immigrant’s ambition that explains her creativity but her marginality. Many immigrants possess what the psychologist Nigel Barber calls “oblique perspective.” Uprooted from the familiar, they see the world at an angle, and this fresh perspective enables them to surpass the merely talented. To paraphrase the philosopher Schopenhauer: Talent hits a target no one else can hit. Genius hits a target no one else can see.

Freud is a classic case. As a little boy, he and his family joined a flood of immigrants from the fringes of the Austro-Hungarian empire to Vienna, a city where, by 1913, less than half the population was native-born. Freud tried to fit in. He wore lederhosen and played a local card game called tarock, but as a Jew and an immigrant, he was never fully accepted. He was an insider-outsider, residing far enough beyond the mainstream to see the world through fresh eyes yet close enough to propagate his ideas.

Marie Curie, born and raised in Poland, was frustrated by the lack of academic opportunities in her homeland. In 1891, at age 24, she immigrated to Paris. Life was difficult at first; she studied during the day and tutored in the evenings. Two years later, though, she earned a degree in physics, launching a stellar career that culminated with two Nobel prizes.

Exceptionally creative people such as Curie and Freud possess many traits, of course, but their “openness to experience” is the most important, says the cognitive psychologist Scott Barry Kaufman of the University of Pennsylvania. That seems to hold for entire societies as well.

Consider a country like Japan, which has historically been among the world’s most closed societies. Examining the long stretch of time from 580 to 1939, Dean Simonton of the University of California, Davis, writing in the Journal of Personality and Social Psychology, compared Japan’s “extra cultural influx” (from immigration, travel abroad, etc.) in different eras with its output in such fields as medicine, philosophy, painting and literature. Dr. Simonton found a consistent correlation: the greater Japan’s openness, the greater its achievements.

It isn’t necessarily new ideas from the outside that directly drive innovation, Dr. Simonton argues. It’s simply their presence as a goad. Some people start to see the arbitrary nature of many of their own cultural habits and open their minds to new possibilities. Once you recognize that there is another way of doing X or thinking about Y, all sorts of new channels open to you, he says. “The awareness of cultural variety helps set the mind free,” he concludes.

History bears this out. In ancient Athens, foreigners known as metics (today we’d call them resident aliens) contributed mightily to the city-state’s brilliance. Renaissance Florence recruited the best and brightest from the crumbling Byzantine Empire. Even when the “extra cultural influx” arrives uninvited, as it did in India during the British Raj, creativity sometimes results. The intermingling of cultures sparked the “Bengal Renaissance” of the late 19th century.

In a 2014 study published in the Creativity Research Journal, Dr. Ritter and her colleagues found that people did not need to participate directly in a schema violation in order to boost their own creative thinking. Merely watching an actor perform an “upside-down” task did the trick, provided that the participants identified with the actor. This suggests that even non-immigrants benefit from the otherness of the newcomer.

Not all cultural collisions end happily, of course, and not all immigrants become geniuses. The adversity that spurs some to greatness sends others into despair. But as we wrestle with our own immigration and refugee policies, we would be wise to view the welcome mat not as charity but, rather, as enlightened self-interest. Once creativity is in the air, we all breathe a more stimulating air.

—Mr. Weiner is the author of “The Geography of Genius: A Search for the World’s Most Creative Places, From Ancient Athens to Silicon Valley,” just published by Simon & Schuster.

Sunday, November 29, 2015

The Fed: shortcomings in policies & procedures, insufficient model testing & incomplete structures & information flows for proper oversight

The Fed Is Stressed Out. A WSJ Editorial
www.wsj.com/articles/the-fed-is-stressed-out-1448574493
What if a bank had the same problem the regulators have?
Wall Street Journal, Nov 28, 2015

Almost nobody in Washington cares, and most of the financial media haven’t noticed. But the inspector general’s office at the Federal Reserve recently reported the disturbing results of an internal investigation. Last December the central bank internally identified “fundamental weaknesses in key areas” related to the Fed’s own governance of the stress testing it conducts of financial firms.

The Fed’s stress tests theoretically judge whether the country’s largest banks can withstand economic downturns. So the Fed identifying a problem with its own management of the stress tests is akin to an energy company noticing that something is not right at one of its nuclear reactors.

According to the inspector general, “The governance review findings include, among other items, a shortcoming in policies and procedures, insufficient model testing” and “incomplete structures and information flows to ensure proper oversight of model risk management.” These Fed models are essentially a black box to the public, so there’s no way to tell from the outside how large a problem this is.

The Fed’s ability to construct and maintain financial and economic models is much more than a subject of intellectual curiosity. Given that Fed-approved models at the heart of the so-called Basel capital standards proved to be spectacularly wrong in the run-up to the last financial crisis, the new report is more reason to wonder why anyone should expect them to be more accurate the next time.

The Fed’s IG adds that last year’s internal review “notes that similar findings identified at institutions supervised by the Federal Reserve have typically been characterized as matters requiring immediate attention or as matters requiring attention.”

That’s for sure. Receiving a “matters requiring immediate attention” letter from the Fed is a big deal at a bank. The Journal reported last year that after the Fed used this language in a letter to Credit Suisse castigating the bank’s work in the market for leveraged loans, the bank chose not to participate in the financing of several buy-out deals.

But it’s hard to tell if anything will come from this report that seems to have fallen deep in a Beltway forest. The IG office’s report says that the Fed is taking a number of steps to correct its shortcomings, and that the Fed’s reform plans “appear to be responsive to our recommendations.”

The Fed wields enormous power with little democratic accountability and transparency. This was tolerable when the Fed’s main job was monetary, but its vast new regulatory authority requires more scrutiny. Congress should add the Fed’s stressed-out standards for stress tests to its oversight list.

Tuesday, November 10, 2015

Yale's Little Robespierres - Students berate faculty who try to defend free speech

Yale's Little Robespierres. WSJ Editorial
http://www.wsj.com/articles/yales-little-robespierres-1447115476
Students berate faculty who try to defend free speech.
WSJ, Nov. 9, 2015 7:31 p.m. ET

Someone at Yale University should have dressed up as Robespierre for Halloween, as its students seem to have lost their minds over what constitutes a culturally appropriate costume. Identity and grievance politics keeps hitting new lows on campus, and now even liberal professors are being consumed by the revolution.

On Oct. 28 Yale Dean Burgwell Howard and Yale’s Intercultural Affairs Committee blasted out an email advising students against “culturally unaware” Halloween costumes, with self-help questions such as: “If this costume is meant to be historical, does it further misinformation or historical and cultural inaccuracies?” Watch out for insensitivity toward “religious beliefs, Native American/Indigenous people, Socio-economic strata, Asians, Hispanic/Latino, Women, Muslims, etc.” In short, everyone.

Who knew Yale still employed anyone willing to doubt the costume wardens? But in response to the dean’s email, lecturer in early childhood education Erika Christakis mused to the student residential community she oversees with her husband, Nicholas, a Yale sociologist and physician: “I don’t wish to trivialize genuine concerns,” but she wondered if colleges had morphed into “places of censure and prohibition.”

And: “Nicholas says, if you don’t like a costume someone is wearing, look away, or tell them you are offended. Talk to each other. Free speech and the ability to tolerate offence are the hallmarks of a free and open society.”

Some 750 Yale students, faculty, alumni and others signed a letter saying Ms. Christakis’s “jarring” email served to “further degrade marginalized people,” as though someone with a Yale degree could be marginalized in America. Students culturally appropriated a Puritan shaming trial and encircled Mr. Christakis on a lawn, cursing and heckling him to quit. “I stand behind free speech,” he told the mob.

Hundreds of protesters also turned on Jonathan Holloway, Yale’s black dean, demanding to know why the school hadn’t addressed allegations that a black woman had been kept out of a fraternity party. Fragile scholars also melted down over a visiting speaker who made a joke about Yale’s fracas while talking at a conference sponsored by the school’s William F. Buckley, Jr. program focused on . . . the future of free speech.

The episode reminds us of when Yale alumnus Lee Bass in 1995 asked the university to return his $20 million donation. Mr. Bass had hoped to seed a curriculum in Western civilization, but Yale’s faculty ripped the idea as white imperialism, and he requested a refund. Two decades later the alternative to Western civilization is on display, and it seems to be censorship.

According to a student reporting for the Washington Post, Yale president Peter Salovey told minority students in response to the episode that “we failed you.” That’s true, though not how he means it. The failure is that elite colleges are turning out ostensible leaders who seem to have no idea why America’s Founders risked extreme discomfort—that is, death—for the right to speak freely.

Monday, October 19, 2015

Kissinger: A Path Out of the Middle East Collapse

A Path Out of the Middle East Collapse. By Henry Kissinger

With Russia in Syria, a geopolitical structure that lasted four decades is in shambles. The U.S. needs a new strategy and priorities.

Wall Street Journal, Oct 16, 2015

http://www.wsj.com/articles/a-path-out-of-the-middle-east-collapse-1445037513


The debate about whether the Joint Comprehensive Plan of Action with Iran regarding its nuclear program stabilized the Middle East’s strategic framework had barely begun when the region’s geopolitical framework collapsed. Russia’s unilateral military action in Syria is the latest symptom of the disintegration of the American role in stabilizing the Middle East order that emerged from the Arab-Israeli war of 1973.

In the aftermath of that conflict, Egypt abandoned its military ties with the Soviet Union and joined an American-backed negotiating process that produced peace treaties between Israel and Egypt, and Israel and Jordan, a United Nations-supervised disengagement agreement between Israel and Syria, which has been observed for over four decades (even by the parties of the Syrian civil war), and international support of Lebanon’s sovereign territorial integrity. Later, Saddam Hussein’s war to incorporate Kuwait into Iraq was defeated by an international coalition under U.S. leadership. American forces led the war against terror in Iraq and Afghanistan. Egypt, Jordan, Saudi Arabia and the other Gulf States were our allies in all these efforts. The Russian military presence disappeared from the region.

That geopolitical pattern is now in shambles. Four states in the region have ceased to function as sovereign. Libya, Yemen, Syria and Iraq have become targets for nonstate movements seeking to impose their rule. Over large swaths in Iraq and Syria, an ideologically radical religious army has declared itself the Islamic State (also called ISIS or ISIL) as an unrelenting foe of established world order. It seeks to replace the international system’s multiplicity of states with a caliphate, a single Islamic empire governed by Shariah law.

ISIS’ claim has given the millennium-old split between the Shiite and Sunni sects of Islam an apocalyptic dimension. The remaining Sunni states feel threatened both by the religious fervor of ISIS and by Shiite Iran, potentially the most powerful state in the region. Iran compounds its menace by presenting itself in a dual capacity. On one level, Iran acts as a legitimate Westphalian state conducting traditional diplomacy, even invoking the safeguards of the international system. At the same time, it organizes and guides nonstate actors seeking regional hegemony based on jihadist principles: Hezbollah in Lebanon and Syria; Hamas in Gaza; the Houthis in Yemen.

Thus the Sunni Middle East risks engulfment by four concurrent sources: Shiite-governed Iran and its legacy of Persian imperialism; ideologically and religiously radical movements striving to overthrow prevalent political structures; conflicts within each state between ethnic and religious groups arbitrarily assembled after World War I into (now collapsing) states; and domestic pressures stemming from detrimental political, social and economic domestic policies.

The fate of Syria provides a vivid illustration: What started as a Sunni revolt against the Alawite (a Shiite offshoot) autocrat Bashar Assad fractured the state into its component religious and ethnic groups, with nonstate militias supporting each warring party, and outside powers pursuing their own strategic interests. Iran supports the Assad regime as the linchpin of an Iranian historic dominance stretching from Tehran to the Mediterranean. The Gulf States insist on the overthrow of Mr. Assad to thwart Shiite Iranian designs, which they fear more than Islamic State. They seek the defeat of ISIS while avoiding an Iranian victory. This ambivalence has been deepened by the nuclear deal, which in the Sunni Middle East is widely interpreted as tacit American acquiescence in Iranian hegemony.

These conflicting trends, compounded by America’s retreat from the region, have enabled Russia to engage in military operations deep in the Middle East, a deployment unprecedented in Russian history. Russia’s principal concern is that the Assad regime’s collapse could reproduce the chaos of Libya, bring ISIS into power in Damascus, and turn all of Syria into a haven for terrorist operations, reaching into Muslim regions inside Russia’s southern border in the Caucasus and elsewhere.

On the surface, Russia’s intervention serves Iran’s policy of sustaining the Shiite element in Syria. In a deeper sense, Russia’s purposes do not require the indefinite continuation of Mr. Assad’s rule. It is a classic balance-of-power maneuver to divert the Sunni Muslim terrorist threat from Russia’s southern border region. It is a geopolitical, not an ideological, challenge and should be dealt with on that level. Whatever the motivation, Russian forces in the region—and their participation in combat operations—produce a challenge that American Middle East policy has not encountered in at least four decades.

American policy has sought to straddle the motivations of all parties and is therefore on the verge of losing the ability to shape events. The U.S. is now opposed to, or at odds in some way or another with, all parties in the region: with Egypt on human rights; with Saudi Arabia over Yemen; with each of the Syrian parties over different objectives. The U.S. proclaims the determination to remove Mr. Assad but has been unwilling to generate effective leverage—political or military—to achieve that aim. Nor has the U.S. put forward an alternative political structure to replace Mr. Assad should his departure somehow be realized.

Russia, Iran, ISIS and various terrorist organizations have moved into this vacuum: Russia and Iran to sustain Mr. Assad; Tehran to foster imperial and jihadist designs. The Sunni states of the Persian Gulf, Jordan and Egypt, faced with the absence of an alternative political structure, favor the American objective but fear the consequence of turning Syria into another Libya.

American policy on Iran has moved to the center of its Middle East policy. The administration has insisted that it will take a stand against jihadist and imperialist designs by Iran and that it will deal sternly with violations of the nuclear agreement. But it seems also passionately committed to the quest for bringing about a reversal of the hostile, aggressive dimension of Iranian policy through historic evolution bolstered by negotiation.

The prevailing U.S. policy toward Iran is often compared by its advocates to the Nixon administration’s opening to China, which contributed, despite some domestic opposition, to the ultimate transformation of the Soviet Union and the end of the Cold War. The comparison is not apt. The opening to China in 1971 was based on the mutual recognition by both parties that the prevention of Russian hegemony in Eurasia was in their common interest. And 42 Soviet divisions lining the Sino-Soviet border reinforced that conviction. No comparable strategic agreement exists between Washington and Tehran. On the contrary, in the immediate aftermath of the nuclear accord, Iran’s Supreme Leader Ayatollah Ali Khamenei described the U.S. as the “Great Satan” and rejected negotiations with America about nonnuclear matters. Completing his geopolitical diagnosis, Mr. Khamenei also predicted that Israel would no longer exist in 25 years.

Forty-five years ago, the expectations of China and the U.S. were symmetrical. The expectations underlying the nuclear agreement with Iran are not. Tehran will gain its principal objectives at the beginning of the implementation of the accord. America’s benefits reside in a promise of Iranian conduct over a period of time. The opening to China was based on an immediate and observable adjustment in Chinese policy, not on an expectation of a fundamental change in China’s domestic system. The optimistic hypothesis on Iran postulates that Tehran’s revolutionary fervor will dissipate as its economic and cultural interactions with the outside world increase.

American policy runs the risk of feeding suspicion rather than abating it. Its challenge is that two rigid and apocalyptic blocs are confronting each other: a Sunni bloc consisting of Egypt, Jordan, Saudi Arabia and the Gulf States; and the Shiite bloc comprising Iran, the Shiite sector of Iraq with Baghdad as its capital, the Shiite south of Lebanon under Hezbollah control facing Israel, and the Houthi portion of Yemen, completing the encirclement of the Sunni world. In these circumstances, the traditional adage that the enemy of your enemy can be treated as your friend no longer applies. For in the contemporary Middle East, it is likely that the enemy of your enemy remains your enemy.

A great deal depends on how the parties interpret recent events. Can the disillusionment of some of our Sunni allies be mitigated? How will Iran’s leaders interpret the nuclear accord once implemented—as a near-escape from potential disaster counseling a more moderate course, returning Iran to an international order? Or as a victory in which they have achieved their essential aims against the opposition of the U.N. Security Council, having ignored American threats and, hence, as an incentive to continue Tehran’s dual approach as both a legitimate state and a nonstate movement challenging the international order?

Two-power systems are prone to confrontation, as was demonstrated in Europe in the run-up to World War I. Even with traditional weapons technology, to sustain a balance of power between two rigid blocs requires an extraordinary ability to assess the real and potential balance of forces, to understand the accumulation of nuances that might affect this balance, and to act decisively to restore it whenever it deviates from equilibrium—qualities not heretofore demanded of an America sheltered behind two great oceans.

But the current crisis is taking place in a world of nontraditional nuclear and cyber technology. As competing regional powers strive for comparable threshold capacity, the nonproliferation regime in the Middle East may crumble. If nuclear weapons become established, a catastrophic outcome is nearly inevitable. A strategy of pre-emption is inherent in the nuclear technology. The U.S. must be determined to prevent such an outcome and apply the principle of nonproliferation to all nuclear aspirants in the region.

Too much of our public debate deals with tactical expedients. What we need is a strategic concept and to establish priorities on the following principles:

• So long as ISIS survives and remains in control of a geographically defined territory, it will compound all Middle East tensions. Threatening all sides and projecting its goals beyond the region, it freezes existing positions or tempts outside efforts to achieve imperial jihadist designs. The destruction of ISIS is more urgent than the overthrow of Bashar Assad, who has already lost over half of the area he once controlled. Making sure that this territory does not become a permanent terrorist haven must have precedence. The current inconclusive U.S. military effort risks serving as a recruitment vehicle for ISIS as having stood up to American might.

• The U.S. has already acquiesced in a Russian military role. Painful as this is to the architects of the 1973 system, attention in the Middle East must remain focused on essentials. And there exist compatible objectives. In a choice among strategies, it is preferable for ISIS-held territory to be reconquered either by moderate Sunni forces or outside powers than by Iranian jihadist or imperial forces. For Russia, limiting its military role to the anti-ISIS campaign may avoid a return to Cold War conditions with the U.S.

• The reconquered territories should be restored to the local Sunni rule that existed there before the disintegration of both Iraqi and Syrian sovereignty. The sovereign states of the Arabian Peninsula, as well as Egypt and Jordan, should play a principal role in that evolution. After the resolution of its constitutional crisis, Turkey could contribute creatively to such a process.

• As the terrorist region is being dismantled and brought under nonradical political control, the future of the Syrian state should be dealt with concurrently. A federal structure could then be built between the Alawite and Sunni portions. If the Alawite regions become part of a Syrian federal system, a context will exist for the role of Mr. Assad, which reduces the risks of genocide or chaos leading to terrorist triumph.

• The U.S. role in such a Middle East would be to implement the military assurances in the traditional Sunni states that the administration promised during the debate on the Iranian nuclear agreement, and which its critics have demanded.

• In this context, Iran’s role can be critical. The U.S. should be prepared for a dialogue with an Iran returning to its role as a Westphalian state within its established borders.

The U.S. must decide for itself the role it will play in the 21st century; the Middle East will be our most immediate—and perhaps most severe—test. At question is not the strength of American arms but rather American resolve in understanding and mastering a new world.

Mr. Kissinger served as national-security adviser and secretary of state under Presidents Nixon and Ford.

Friday, October 9, 2015

Daniel Shuchman's review of Harry G. Frankfurt's On Inequality

Beggar Thy Neighbor. By Daniel Shuchman
Daniel Shuchman's review of Harry G. Frankfurt's On Inequality (Princeton, 102 pages, $14.95)
http://www.wsj.com/articles/beggar-thy-neighbor-1444345359
Wall Street Journal, Oct 09, 2015

In a 2005 best seller, Harry Frankfurt, a Princeton philosophy professor, explored the often complex nature of popular false ideas. “On Bulls—” examined outright lies, ambiguous forms of obfuscation and the not-always-transparent intentions of those who promote them. Now, in “On Inequality,” Mr. Frankfurt eviscerates one of the shibboleths of our time: that economic inequality—in his definition, “the possession by some of more money than others”—is the most urgent issue confronting society. This idea, he believes, suffers from logical and moral errors of the highest order.

The fixation on equality, as a moral ideal in and of itself, is critically flawed, according to the professor. It holds that justice is determined by one person’s position relative to another, not his absolute well-being. Therefore the logic of egalitarianism can lead to perverse outcomes, he argues. Most egregiously, income inequality could be eliminated very effectively “by making everyone equally poor.” And while the lowest economic stratum of society is always associated with abject poverty, this need not be the case. Mr. Frankfurt imagines instances where those “who are doing considerably worse than others may nonetheless be doing rather well.” This possibility—as with contemporary America’s wide inequalities among relatively prosperous people—undermines the coherence of a philosophy mandating equality.

Mr. Frankfurt acknowledges that “among morally conscientious individuals, appeals in behalf of equality often have very considerable emotional or rhetorical power.” The motivations for pursuing equality may be well-meaning but they are profoundly misguided and contribute to “the moral disorientation and shallowness of our time.”

The idea that equality in itself is a paramount goal, Mr. Frankfurt argues, alienates people from their own characters and life aspirations. The amount of wealth possessed by others does not bear on “what is needed for the kind of life a person would most sensibly and appropriately seek for himself.” The incessant egalitarian comparison of one against another subordinates each individual’s goals to “those that are imposed on them by the conditions in which others happen to live.” Thus, individuals are led to apply an arbitrary relative standard that does not “respect” their authentic selves.

If his literalist critique of egalitarianism is often compelling, Mr. Frankfurt’s own philosophy has more in common with such thinking than is first apparent. For Mr. Frankfurt, the imperative of justice is to alleviate poverty and improve lives, not to make people equal. He does not, however, think that it is morally adequate merely to provide people with a safety net. Instead, he argues for an ideal of “sufficiency.”

By sufficiency Mr. Frankfurt means enough economic resources for every individual to be reasonably satisfied with his circumstances, assuming that the individual’s satisfaction need not be disturbed by others having more. While more money might be welcome, it would not “alter his attitude toward his life, or the degree of his contentment with it.” The achievement of economic and personal contentment by everyone is Mr. Frankfurt’s priority. In fact, his principle of sufficiency is so ambitious it demands that lack of money should never be the cause of anything “distressing or unsatisfying” in anyone’s life.

What’s the harm of such a desirable, if unrealistic, goal? The author declares that inequality is “morally disturbing” only when his standard of sufficiency is not achieved. His just society would, in effect, mandate a universal entitlement to a lifestyle that has been attained only by a minuscule fraction of humans in all history. Mr. Frankfurt recognizes such reasoning may bring us full circle: “The most feasible approach” to universal sufficiency may well be policies that, in practice, differ little from those advocated in the “pursuit of equality.”

In passing, the author notes another argument against egalitarianism, the “dangerous conflict between equality and liberty.” He is referring to the notion that leaving people free to choose their work and what goods and services they consume will always lead to an unequal distribution of income. To impose any preconceived economic distribution, as the philosopher Robert Nozick argued, involves “continuous interference in people’s lives.” Like egalitarianism, Mr. Frankfurt’s ideal of “sufficiency” would hold property rights and economic liberty hostage to his utopian vision.

Such schemes, Nozick argued, see economic assets as having arrived on earth fully formed, like “manna from heaven,” with no consideration of their human origin. Mr. Frankfurt also presumes that one person’s wealth must be the reason others don’t have a “sufficient” amount to be blissfully carefree; he condemns the “excessively affluent” who have “extracted” too much from the nation. This leaves a would-be philosopher-king the task of divvying up loot as he chooses.

On the surface, “On Inequality” is a provocative challenge to a prevailing orthodoxy. But as the author’s earlier book showed, appearances can deceive. When Thomas Piketty, in “Capital in the Twenty-First Century,” says that most wealth is rooted in theft or is arbitrary, or when Mr. Frankfurt’s former Princeton colleague Paul Krugman says the “rich” are “undeserving,” they are not (just) making the case for equality. By arguing that wealth accumulation is inherently unjust, they lay a moral groundwork for confiscation of property. Similarly, Mr. Frankfurt accuses the affluent of “gluttony”—a sentiment about which there appears to be unanimity in that temple of tenured sufficiency, the Princeton faculty club. The author claims to be motivated by respect for personal autonomy and fulfillment. By ignoring economic liberty, he reveals he is not.

Mr. Shuchman is a fund manager in New York.

Sunday, July 26, 2015

International Courts and the New Paternalism - African leaders are the targets because ambitious jurists consider them to be 'low-hanging fruit'

International Courts and the New Paternalism. By Jendayi Frazer
African leaders are the targets because ambitious jurists consider them to be ‘low-hanging fruit.’
http://www.wsj.com/articles/international-courts-and-the-new-paternalism-1437778048
WSJ, July 24, 2015 6:47 p.m. ET
Nairobi, Kenya

President Obama arrived in Kenya on Friday and will travel from here to Ethiopia, two crucial U.S. allies in East Africa. The region is not only emerging as an economic powerhouse, it is also an important front in the battle with al Qaeda, al-Shabaab, Islamic State and other Islamist radicals.

Yet grievances related to how the International Criminal Court’s universal jurisdiction is applied in Africa are interfering with U.S. and European relations on the continent. In Africa there are accusations of neocolonialism and even racism in ICC proceedings, and a growing consensus that Africans are being unjustly indicted by the court.

It wasn’t supposed to be this way. After the failure to prevent mass atrocities in Europe and Africa in the 1990s, a strong consensus emerged that combating impunity had to be an international priority. Ad hoc United Nations tribunals were convened to judge the masterminds of genocide and crimes against humanity in Yugoslavia, Rwanda and Sierra Leone. These courts were painfully slow and expensive. But their mandates were clear and limited, and they helped countries to turn the page and focus on rebuilding.

Soon universal jurisdiction was seen not only as a means to justice but also as a tool for preventing atrocities in the first place. Several countries in Western Europe, including Spain, the United Kingdom, Belgium and France, empowered their national courts with universal jurisdiction. In 2002 the International Criminal Court came into force.

Africa and Europe were early adherents and today constitute the bulk of ICC membership. But India, China, Russia and most of the Middle East—representing well over half the world’s population—stayed out. So did the United States. Leaders in both parties worried that an unaccountable supranational court would become a venue for politicized show trials. The track record of the ICC and European courts acting under universal jurisdiction has amply borne out these concerns.

Only when U.S. Defense Secretary Donald Rumsfeld threatened to move NATO headquarters out of Brussels in 2003 did Belgium rein in efforts to indict former President George H.W. Bush, and Gens. Colin Powell and Tommy Franks, for alleged “war crimes” during the 1990-91 Gulf War. Spanish courts have indicted American military personnel in Iraq and investigated the U.S. detention facility in Guantanamo Bay.

But with powerful states able to shield themselves and their clients, Africa has borne the brunt of indictments. Far from pursuing justice for victims, these courts have become a venue for public-relations exercises by activist groups. Within African countries, they have been manipulated by one political faction to sideline another, often featuring in electoral politics.

The ICC’s recent indictments of top Kenyan officials are a prime example. In October 2014, Kenyan President Uhuru Kenyatta became the first sitting head of state to appear before the ICC, though he took the extraordinary step of temporarily transferring power to his deputy to avoid the precedent. ICC prosecutors indicted Mr. Kenyatta in connection with Kenya’s post-election ethnic violence of 2007-08, in which some 1,200 people were killed.

Last December the ICC withdrew all charges against Mr. Kenyatta, saying the evidence had “not improved to such an extent that Mr Kenyatta’s alleged criminal responsibility can be proven beyond reasonable doubt.” As U.S. assistant secretary of state for African affairs from 2005-09, and the point person during Kenya’s 2007-08 post-election violence, I knew the ICC indictments were purely political. The court’s decision to continue its case against Kenya’s deputy president, William Ruto, reflects a degree of indifference and even hostility to Kenya’s efforts to heal its political divisions.

The ICC’s indictments in Kenya began with former chief prosecutor Luis Moreno-Ocampo’s determination to prove the court’s relevance in Africa by going after what he reportedly called “low-hanging fruit.” In other words, he targeted African political and military leaders unable to resist ICC jurisdiction.

More recently, the arrest of Rwandan chief of intelligence Lt. Gen. Emmanuel Karenzi Karake in London last month drew a unanimous reproach from the African Union’s Peace and Security Council. The warrant dates to a 2008 Spanish indictment for alleged reprisal killings following the 1994 Rwandan genocide. At the time of the indictment, Mr. Karenzi Karake was deputy commander of the joint U.N.-African Union peacekeeping operation in Darfur. The Rwandan troops under his command were the backbone of the Unamid force, and his performance in Darfur was by all accounts exemplary.

Moreover, a U.S. government interagency review conducted in 2007-08, when I led the State Department’s Bureau of African Affairs, found that the Spanish allegations against Mr. Karenzi Karake were false and unsubstantiated. The U.S. fully backed his reappointment in 2008 as deputy commander of Unamid forces. It would be a travesty of justice if the U.K. were to extradite Mr. Karenzi Karake to Spain to stand trial.

Sadly, the early hope of “universal jurisdiction” ending impunity for perpetrators of genocide and crimes against humanity has given way to cynicism, both in Africa and the West. In Africa it is believed that, in the rush to demonstrate their power, these courts and their defenders have been too willing to brush aside considerations of due process that they defend at home.

In the West, the cynicism is perhaps even more damaging because it calls into question the moral capabilities of Africans and their leaders, and revives the language of paternalism and barbarism of earlier generations.

Ms. Frazer, a former U.S. ambassador to South Africa (2004-05) and assistant secretary of state for African affairs (2005-09), is an adjunct senior fellow for Africa studies at the Council on Foreign Relations.

Sunday, June 7, 2015

Five Bedrock Principles for Investors. By Morgan Housel

Brilliance isn’t the only key to Warren Buffett’s investing success. See rule No. 5.



The U.S. economy shrank last quarter. The Federal Reserve is widely expected to begin raising interest rates later this year. U.S. stocks are expensive by many measures. Greece’s national finances remain fragile. Oh, and election season already is under way in the U.S.

Investors who are tempted to sell risky assets and flee to safety don’t have to look far for justification.

If you are one of them, ponder this: Most of what matters in investing involves bedrock principles, not current events.

Here are five principles every investor should keep in mind:

1. Diversification is how you limit the risk of losses in an uncertain world.
If, 30 years ago, a visitor from the future had said that the Soviet Union had collapsed, Japan’s stock market had stagnated for a quarter century, China had become a superpower and North Dakota had helped turn the U.S. into a fast-growing source of crude oil, few would have believed it.

The next 30 years will be just as surprising.

Diversification among different assets can be frustrating. It requires, at every point in time, owning some unpopular assets.

Why would I want to own European stocks if Europe’s economy is such a mess? Why should I buy bonds if interest rates are so low?

The appropriate answer is, “Because the future will play out in ways you or your adviser can’t possibly comprehend.”

Owning a little bit of everything is a bet on humility, which the history of investing shows is a valuable trait.

2. You are your own worst enemy.

The biggest risk investors face isn’t a recession, a bear market, the Federal Reserve or their least favorite political party.

It is their own emotions and biases, and the destructive behaviors they cause.

You can be the best stock picker in the world, capable of finding tomorrow’s winning businesses before anyone else. But if you panic and sell during the next bear market, none of it will matter.

You can be armed with an M.B.A. and have 40 years before retirement to let your savings compound into a fortune. But if you have a gambling mentality and you day-trade penny stocks, your outlook is dismal.

You can be a mathematical genius, building the most sophisticated stock-market forecasting models. But if you don’t understand the limits of your intelligence, you are on your way to disaster.

There aren’t many iron rules of investing, but one of them is that no amount of brain power can compensate for behavioral errors. Figure out what mistakes you are prone to make and embrace strategies that limit the risk.

3. There is a price to pay.

The stock market has historically offered stellar long-term returns, far better than cash or bonds.

But there is a cost. The price of admission to earn high long-term returns in stocks is a ceaseless torrent of unpredictable outcomes, senseless volatility and unexpected downturns.

If you can stick with your investments through the rough spots, you don’t actually pay this bill; it is a mental surcharge. But it is very real. Not everyone is willing to pay it, which is why there is opportunity for those who are.

There is an understandable desire to forecast what the market will do in the short run. But the reason stocks offer superior long-term returns is precisely because we can’t forecast what they will do in the short run.

4. When in doubt, choose the investment with the lowest fee.

As a group, investors’ profits always will equal the overall market’s returns minus all fees and expenses.

Below-average fees, therefore, offer one of your best shots at earning above-average results.

A talented fund manager can be worth a higher fee, mind you. But enduring outperformance is one of the most elusive investing skills.

According to Vanguard Group, which has championed low-cost investing products, more than 80% of actively managed U.S. stock funds underperformed a low-cost index fund in the 10 years through December. It is far more common for a fund manager to charge excess fees than to deliver excess performance.

There are no promises in investing. The best you can do is put the odds in your favor. And the evidence is overwhelming: The lower the costs, the more the odds tip in your favor.
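To make that fee arithmetic concrete, here is a minimal sketch in Python. The numbers (a $10,000 stake, a 7% gross annual return, a one-percentage-point annual fee, 40 years) are illustrative assumptions, not figures from the column:

    # Illustrative: the same gross return, with and without a 1% annual fee.
    def ending_balance(start, annual_return, years):
        balance = start
        for _ in range(years):
            balance *= 1 + annual_return
        return balance

    gross = ending_balance(10_000, 0.07, 40)  # no fee: about $149,700
    net = ending_balance(10_000, 0.06, 40)    # 1% annual fee: about $102,900
    print(f"No fee: ${gross:,.0f}")
    print(f"1% fee: ${net:,.0f}")

Under these assumptions, roughly a third of the final sum is lost to the fee, which is the sense in which below-average costs tip the odds in your favor.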

5. Time is the most powerful force in investing.

Eighty-four-year-old Warren Buffett’s current net worth is around $73 billion, nearly all of which is in Berkshire Hathaway stock. Berkshire’s stock has risen 24-fold since 1990.

Do the math, and some $70 billion of Mr. Buffett’s $73 billion fortune was accumulated around or after his 60th birthday.
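The math is simple enough to check. Here is a minimal sketch in Python, using only the two figures quoted above (the roughly $73 billion net worth and the 24-fold rise since 1990, around when Mr. Buffett turned 60):

    # If today's ~$73 billion is 24 times the 1990 stake,
    # the 1990 stake was about $3 billion.
    net_worth_now = 73e9
    growth_since_1990 = 24.0

    stake_in_1990 = net_worth_now / growth_since_1990
    gained_since = net_worth_now - stake_in_1990
    print(f"Stake in 1990: ${stake_in_1990 / 1e9:.1f} billion")  # ~$3.0 billion
    print(f"Gained since:  ${gained_since / 1e9:.0f} billion")   # ~$70 billion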

Mr. Buffett is, of course, a phenomenal investor whose talents few will replicate. But the real key to his wealth is that he has been a phenomenal investor for two-thirds of a century.

Wealth grows exponentially—a little at first, then slightly more, and then in a hurry for those who stick around the longest.

That lesson—that time, patience and endurance pay off—is something we mortals can learn from, particularly younger workers just starting to save for retirement.


Saturday, May 30, 2015

Magna Carta: Eight Centuries of Liberty

June marks the 800th anniversary of Magna Carta, the ‘Great Charter’ that established the rule of law for the English-speaking world. Its revolutionary impact still resounds today, writes Daniel Hannan

http://www.wsj.com/articles/magna-carta-eight-centuries-of-liberty-1432912022 

King John, pressured by English barons, reluctantly signs Magna Carta, the ‘Great Charter,’ on the Thames riverbank, Runnymede, June 15, 1215, as rendered in James Doyle’s ‘A Chronicle of England.’ Photo: Mary Evans Picture Library/Everett Collection

Eight hundred years ago next month, on a reedy stretch of riverbank in southern England, the most important bargain in the history of the human race was struck. I realize that’s a big claim, but in this case, only superlatives will do. As Lord Denning, the most celebrated modern British jurist, put it, Magna Carta was “the greatest constitutional document of all time, the foundation of the freedom of the individual against the arbitrary authority of the despot.”

It was at Runnymede, on June 15, 1215, that the idea of the law standing above the government first took contractual form. King John accepted that he would no longer get to make the rules up as he went along. From that acceptance flowed, ultimately, all the rights and freedoms that we now take for granted: uncensored newspapers, security of property, equality before the law, habeas corpus, regular elections, sanctity of contract, jury trials.

Magna Carta is Latin for “Great Charter.” It was so named not because the men who drafted it foresaw its epochal power but because it was long. Yet, almost immediately, the document began to take on a political significance that justified the adjective in every sense.

The bishops and barons who had brought King John to the negotiating table understood that rights required an enforcement mechanism. The potency of a charter is not in its parchment but in the authority of its interpretation. The constitution of the U.S.S.R., to pluck an example more or less at random, promised all sorts of entitlements: free speech, free worship, free association. But as Soviet citizens learned, paper rights are worthless in the absence of mechanisms to hold rulers to account.

Magna Carta instituted a form of conciliar rule that was to develop directly into the Parliament that meets at Westminster today. As the great Victorian historian William Stubbs put it, “the whole constitutional history of England is little more than a commentary on Magna Carta.”

And not just England. Indeed, not even England in particular. Magna Carta has always been a bigger deal in the U.S. The meadow where the abominable King John put his royal seal to the parchment lies in my electoral district in the county of Surrey. It went unmarked until 1957, when a memorial stone was finally raised there—by the American Bar Association.

Only now, for the anniversary, is a British monument being erected at the place where freedom was born. After some frantic fundraising by me and a handful of local councilors, a large bronze statue of Queen Elizabeth II will gaze out across the slow, green waters of the Thames, marking 800 years of the Crown’s acceptance of the rule of law.

Eight hundred years is a long wait. We British have, by any measure, been slow to recognize what we have. Americans, by contrast, have always been keenly aware of the document, referring to it respectfully as the Magna Carta.

Why? Largely because of who the first Americans were. Magna Carta was reissued several times throughout the 14th and 15th centuries, as successive Parliaments asserted their prerogatives, but it receded from public consciousness under the Tudors, whose dynasty ended with the death of Elizabeth I in 1603.

In the early 17th century, members of Parliament revived Magna Carta as a weapon in their quarrels with the autocratic Stuart monarchs. Opposition to the Crown was led by the brilliant lawyer Edward Coke (pronounced Cook), who drafted the first Virginia Charter in 1606. Coke’s argument was that the king was sidelining Parliament, and so unbalancing the “ancient constitution” of which Magna Carta was the supreme expression.

United for the first time, the four surviving original Magna Carta manuscripts are prepared for display at the British Library, London, Feb. 1, 2015. Photo: UPPA/ZUMA PRESS

The early settlers arrived while these rows were at their height and carried the mania for Magna Carta to their new homes. As early as 1637, Maryland sought permission to incorporate Magna Carta into its basic law, and the first edition of the Great Charter was published on American soil in 1687 by William Penn, who explained that it was what made Englishmen unique: “In France, and other nations, the mere will of the Prince is Law, his word takes off any man’s head, imposeth taxes, or seizes any man’s estate, when, how and as often as he lists; But in England, each man hath a fixed Fundamental Right born with him, as to freedom of his person and property in his estate, which he cannot be deprived of, but either by his consent, or some crime, for which the law has imposed such a penalty or forfeiture.”

There was a divergence between English and American conceptions of Magna Carta. In the Old World, it was thought of, above all, as a guarantor of parliamentary supremacy; in the New World, it was already coming to be seen as something that stood above both Crown and Parliament. This difference was to have vast consequences in the 1770s.

The American Revolution is now remembered on both sides of the Atlantic as a national conflict—as, indeed, a “War of Independence.” But no one at the time thought of it that way—not, at any rate, until the French became involved in 1778. Loyalists and patriots alike saw it as a civil war within a single polity, a war that divided opinion every bit as much in Great Britain as in the colonies.

The American Revolutionaries weren’t rejecting their identity as Englishmen; they were asserting it. As they saw it, George III was violating the “ancient constitution” just as King John and the Stuarts had done. It was therefore not just their right but their duty to resist, in the words of the delegates to the first Continental Congress in 1774, “as Englishmen our ancestors in like cases have usually done.”

Nowhere, at this stage, do we find the slightest hint that the patriots were fighting for universal rights. On the contrary, they were very clear that they were fighting for the privileges bestowed on them by Magna Carta. The concept of “no taxation without representation” was not an abstract principle. It could be found, rather, in Article 12 of the Great Charter: “No scutage or aid is to be levied in our realm except by the common counsel of our realm.” In 1775, Massachusetts duly adopted as its state seal a patriot with a sword in one hand and a copy of Magna Carta in the other.

I recount these facts to make an important, if unfashionable, point. The rights we now take for granted—freedom of speech, religion, assembly and so on—are not the natural condition of an advanced society. They were developed overwhelmingly in the language in which you are reading these words.

When we call them universal rights, we are being polite. Suppose World War II or the Cold War had ended differently: There would have been nothing universal about them then. If they are universal rights today, it is because of a series of military victories by the English-speaking peoples.

Various early copies of Magna Carta survive, many of them in England’s cathedrals, tended like the relics that were removed during the Reformation. One hangs in the National Archives in Washington, D.C., next to the two documents it directly inspired: the Declaration of Independence and the Constitution. Another enriches the Australian Parliament in Canberra.

But there are only four 1215 originals. One of them, normally housed at Lincoln Cathedral, has recently been on an American tour, resting for some weeks at the Library of Congress. It wasn’t that copy’s first visit to the U.S. The same parchment was exhibited in New York at the 1939 World’s Fair, attracting an incredible 13 million visitors. World War II broke out while it was still on display, and it was transferred to Fort Knox for safekeeping until the end of the conflict.

Could there have been a more apt symbol of what the English-speaking peoples were fighting for in that conflagration? Think of the world as it stood in 1939. Constitutional liberty was more or less confined to the Anglosphere. Everywhere else, authoritarianism was on the rise. Our system, uniquely, elevated the individual over the state, the rules over the rulers.

When the 18th-century statesman Pitt the Elder described Magna Carta as England’s Bible, he was making a profound point. It is, so to speak, the Torah of the English-speaking peoples: the text that sets us apart while at the same time speaking truths to the rest of mankind.

The very success of Magna Carta makes it hard for us, 800 years on, to see how utterly revolutionary it must have appeared at the time. Magna Carta did not create democracy: Ancient Greeks had been casting differently colored pebbles into voting urns while the remote fathers of the English were grubbing about alongside pigs in the cold soil of northern Germany. Nor was it the first expression of the law: There were Sumerian and Egyptian law codes even before Moses descended from Sinai.

What Magna Carta initiated, rather, was constitutional government—or, as the terse inscription on the American Bar Association’s stone puts it, “freedom under law.”

It takes a real act of imagination to see how transformative this concept must have been. The law was no longer just an expression of the will of the biggest guy in the tribe. Above the king brooded something more powerful yet—something you couldn’t see or hear or touch or taste but that bound the sovereign as surely as it bound the poorest wretch in the kingdom. That something was what Magna Carta called “the law of the land.”

This phrase is commonplace in our language. But think of what it represents. The law is not determined by the people in government, nor yet by clergymen presuming to interpret a holy book. Rather, it is immanent in the land itself, the common inheritance of the people living there.

The idea of the law coming up from the people, rather than down from the government, is a peculiar feature of the Anglosphere. Common law is an anomaly, a beautiful, miraculous anomaly. In the rest of the world, laws are written down from first principles and then applied to specific disputes, but the common law grows like a coral, case by case, each judgment serving as the starting point for the next dispute. In consequence, it is an ally of freedom rather than an instrument of state control. It implicitly assumes residual rights.

And indeed, Magna Carta conceives rights in negative terms, as guarantees against state coercion. No one can put you in prison or seize your property or mistreat you other than by due process. This essentially negative conception of freedom is worth clinging to in an age that likes to redefine rights as entitlements—the right to affordable health care, the right to be forgotten and so on.

It is worth stressing, too, that Magna Carta conceived freedom and property as two expressions of the same principle. The whole document can be read as a lengthy promise that the goods of a free citizen will not be arbitrarily confiscated by someone higher up the social scale. Even the clauses that seem most remote from modern experience generally turn out, in reality, to be about security of ownership.

There are, for example, detailed passages about wardship. King John had been in the habit of marrying heiresses to royal favorites as a way to get his hands on their estates. The abstruse-sounding articles about inheritance rights are, in reality, simply one more expression of the general principle that the state may not expropriate without due process.

Those who stand awe-struck before the Great Charter expecting to find high-flown phrases about liberty are often surprised to see that a chunk of it is taken up with the placing of fish-traps on the Thames. Yet these passages, too, are about property, specifically the freedom of merchants to navigate inland waterways without having arbitrary tolls imposed on them by fish farmers.

Liberty and property: how naturally those words tripped, as a unitary concept, from the tongues of America’s Founders. These were men who had been shaped in the English tradition, and they saw parliamentary government not as an expression of majority rule but as a guarantor of individual freedom. How different was the Continental tradition, born 13 years later with the French Revolution, which saw elected assemblies as the embodiment of what Rousseau called the “general will” of the people.

In that difference, we may perhaps discern an explanation of why the Anglosphere resisted the chronic bouts of authoritarianism to which most other Western countries were prone. We who speak this language have always seen the defense of freedom as the duty of our representatives and so, by implication, of those who elect them. Liberty and democracy, in our tradition, are not balanced against each other; they are yoked together.

In February, the four surviving original copies of Magna Carta were united, for just a few hours, at the British Library—something that had not happened in 800 years. As I stood reverentially before them, someone recognized me and posted a photograph on Twitter with the caption: “If Dan Hannan gets his hands on all four copies of Magna Carta, will he be like Sauron with the Rings?”

Yet the majesty of the document resides in the fact that it is, so to speak, a shield against Saurons. Most other countries have fallen for, or at least fallen to, dictators. Many, during the 20th century, had popular communist parties or fascist parties or both. The Anglosphere, unusually, retained a consensus behind liberal capitalism.

This is not because of any special property in our geography or our genes but because of our constitutional arrangements. Those constitutional arrangements can take root anywhere. They explain why Bermuda is not Haiti, why Hong Kong is not China, why Israel is not Syria.

They work because, starting with Magna Carta, they have made the defense of freedom everyone’s responsibility. Americans, like Britons, have inherited their freedoms from past generations and should not look to any external agent for their perpetuation. The defense of liberty is your job and mine. It is up to us to keep intact the freedoms we inherited from our parents and to pass them on securely to our children.

Mr. Hannan is a British member of the European Parliament for the Conservative Party, a columnist for the Washington Examiner and the author of “Inventing Freedom: How the English-speaking Peoples Made the Modern World.”

Friday, April 3, 2015

The President would not stay in power if he did not talk human rights. So look at it as a political imperative.

Joe Biden on Human Rights
The Vice President tells China’s leaders to ignore the U.S.
WSJ, Apr 01, 2015

White House officials can be oddly candid in talking to their liberal friends at the New Yorker magazine. That’s where an unnamed official in 2011 boasted of “leading from behind,” and where last year President Obama dismissed Islamic State as a terrorist “jayvee team.” Now the U.S. Vice President has revealed the Administration line on human rights in China.

In the April 6 issue, Joe Biden recounts meeting Xi Jinping months before his 2012 ascent to be China’s supreme leader. Mr. Xi asked him why the U.S. put “so much emphasis on human rights.” The right answer is simple: No government has the right to deny its citizens basic freedoms, and those that do tend also to threaten peace overseas, so U.S. support for human rights is a matter of values and interests.

Instead, Mr. Biden downplayed U.S. human-rights rhetoric as little more than political posturing. “No president of the United States could represent the United States were he not committed to human rights,” he told Mr. Xi. “President Barack Obama would not be able to stay in power if he did not speak of it. So look at it as a political imperative.” Then Mr. Biden assured China’s leader: “It doesn’t make us better or worse. It’s who we are. You make your decisions. We’ll make ours.”

Mr. Xi took the advice. Since taking office he has detained more than 1,000 political prisoners, from anticorruption activist Xu Zhiyong to lawyer Pu Zhiqiang and journalist Gao Yu. He has cracked down on Uighurs in Xinjiang, banning more Muslim practices and jailing scholar-activist Ilham Tohti for life. Anti-Christian repression and Internet controls are tightening. Nobel Peace laureate Liu Xiaobo remains in prison, his wife Liu Xia under illegal house arrest for the fifth year. Lawyer Gao Zhisheng left prison in August but is blocked from receiving medical care overseas. Hong Kong, China’s most liberal city, is losing its press freedom and political autonomy.

Amid all of this Mr. Xi and his government have faced little challenge from Washington. That is consistent with Hillary Clinton’s 2009 statement that human rights can’t be allowed to “interfere” with diplomacy on issues such as the economy and the environment. Mr. Obama tried walking that back months later, telling the United Nations that democracy and human rights aren’t “afterthoughts.” But his Administration’s record—and now Mr. Biden’s testimony—prove otherwise.