Monday, October 19, 2015

Kissinger: A Path Out of the Middle East Collapse

A Path Out of the Middle East Collapse. By Henry Kissinger

With Russia in Syria, a geopolitical structure that lasted four decades is in shambles. The U.S. needs a new strategy and priorities.

Wall Street Journal, Oct 16, 2015

http://www.wsj.com/articles/a-path-out-of-the-middle-east-collapse-1445037513


The debate about whether the Joint Comprehensive Plan of Action with Iran regarding its nuclear program stabilized the Middle East’s strategic framework had barely begun when the region’s geopolitical framework collapsed. Russia’s unilateral military action in Syria is the latest symptom of the disintegration of the American role in stabilizing the Middle East order that emerged from the Arab-Israeli war of 1973.

In the aftermath of that conflict, Egypt abandoned its military ties with the Soviet Union and joined an American-backed negotiating process that produced peace treaties between Israel and Egypt, and Israel and Jordan, a United Nations-supervised disengagement agreement between Israel and Syria, which has been observed for over four decades (even by the parties of the Syrian civil war), and international support of Lebanon’s sovereign territorial integrity. Later, Saddam Hussein’s war to incorporate Kuwait into Iraq was defeated by an international coalition under U.S. leadership. American forces led the war against terror in Iraq and Afghanistan. Egypt, Jordan, Saudi Arabia and the other Gulf States were our allies in all these efforts. The Russian military presence disappeared from the region.

That geopolitical pattern is now in shambles. Four states in the region have ceased to function as sovereign. Libya, Yemen, Syria and Iraq have become targets for nonstate movements seeking to impose their rule. Over large swaths in Iraq and Syria, an ideologically radical religious army has declared itself the Islamic State (also called ISIS or ISIL) as an unrelenting foe of established world order. It seeks to replace the international system’s multiplicity of states with a caliphate, a single Islamic empire governed by Shariah law.

ISIS’ claim has given the millennium-old split between the Shiite and Sunni sects of Islam an apocalyptic dimension. The remaining Sunni states feel threatened both by the religious fervor of ISIS and by Shiite Iran, potentially the most powerful state in the region. Iran compounds its menace by presenting itself in a dual capacity. On one level, Iran acts as a legitimate Westphalian state conducting traditional diplomacy, even invoking the safeguards of the international system. At the same time, it organizes and guides nonstate actors seeking regional hegemony based on jihadist principles: Hezbollah in Lebanon and Syria; Hamas in Gaza; the Houthis in Yemen.

Thus the Sunni Middle East risks engulfment by four concurrent sources: Shiite-governed Iran and its legacy of Persian imperialism; ideologically and religiously radical movements striving to overthrow prevalent political structures; conflicts within each state between ethnic and religious groups arbitrarily assembled after World War I into (now collapsing) states; and domestic pressures stemming from detrimental political, social and economic policies.

The fate of Syria provides a vivid illustration: What started as a Sunni revolt against the Alawite (a Shiite offshoot) autocrat Bashar Assad fractured the state into its component religious and ethnic groups, with nonstate militias supporting each warring party, and outside powers pursuing their own strategic interests. Iran supports the Assad regime as the linchpin of an Iranian historic dominance stretching from Tehran to the Mediterranean. The Gulf States insist on the overthrow of Mr. Assad to thwart Shiite Iranian designs, which they fear more than Islamic State. They seek the defeat of ISIS while avoiding an Iranian victory. This ambivalence has been deepened by the nuclear deal, which in the Sunni Middle East is widely interpreted as tacit American acquiescence in Iranian hegemony.

These conflicting trends, compounded by America’s retreat from the region, have enabled Russia to engage in military operations deep in the Middle East, a deployment unprecedented in Russian history. Russia’s principal concern is that the Assad regime’s collapse could reproduce the chaos of Libya, bring ISIS into power in Damascus, and turn all of Syria into a haven for terrorist operations, reaching into Muslim regions inside Russia’s southern border in the Caucasus and elsewhere.

On the surface, Russia’s intervention serves Iran’s policy of sustaining the Shiite element in Syria. In a deeper sense, Russia’s purposes do not require the indefinite continuation of Mr. Assad’s rule. It is a classic balance-of-power maneuver to divert the Sunni Muslim terrorist threat from Russia’s southern border region. It is a geopolitical, not an ideological, challenge and should be dealt with on that level. Whatever the motivation, Russian forces in the region—and their participation in combat operations—produce a challenge that American Middle East policy has not encountered in at least four decades.

American policy has sought to straddle the motivations of all parties and is therefore on the verge of losing the ability to shape events. The U.S. is now opposed to, or at odds in some way or another with, all parties in the region: with Egypt on human rights; with Saudi Arabia over Yemen; with each of the Syrian parties over different objectives. The U.S. proclaims the determination to remove Mr. Assad but has been unwilling to generate effective leverage—political or military—to achieve that aim. Nor has the U.S. put forward an alternative political structure to replace Mr. Assad should his departure somehow be realized.

Russia, Iran, ISIS and various terrorist organizations have moved into this vacuum: Russia and Iran to sustain Mr. Assad; Tehran to foster imperial and jihadist designs. The Sunni states of the Persian Gulf, Jordan and Egypt, faced with the absence of an alternative political structure, favor the American objective but fear the consequence of turning Syria into another Libya.

American policy on Iran has moved to the center of its Middle East policy. The administration has insisted that it will take a stand against jihadist and imperialist designs by Iran and that it will deal sternly with violations of the nuclear agreement. But it seems also passionately committed to the quest for bringing about a reversal of the hostile, aggressive dimension of Iranian policy through historic evolution bolstered by negotiation.

The prevailing U.S. policy toward Iran is often compared by its advocates to the Nixon administration’s opening to China, which contributed, despite some domestic opposition, to the ultimate transformation of the Soviet Union and the end of the Cold War. The comparison is not apt. The opening to China in 1971 was based on the mutual recognition by both parties that the prevention of Russian hegemony in Eurasia was in their common interest. And 42 Soviet divisions lining the Sino-Soviet border reinforced that conviction. No comparable strategic agreement exists between Washington and Tehran. On the contrary, in the immediate aftermath of the nuclear accord, Iran’s Supreme Leader Ayatollah Ali Khamenei described the U.S. as the “Great Satan” and rejected negotiations with America about nonnuclear matters. Completing his geopolitical diagnosis, Mr. Khamenei also predicted that Israel would no longer exist in 25 years.

Forty-five years ago, the expectations of China and the U.S. were symmetrical. The expectations underlying the nuclear agreement with Iran are not. Tehran will gain its principal objectives at the beginning of the implementation of the accord. America’s benefits reside in a promise of Iranian conduct over a period of time. The opening to China was based on an immediate and observable adjustment in Chinese policy, not on an expectation of a fundamental change in China’s domestic system. The optimistic hypothesis on Iran postulates that Tehran’s revolutionary fervor will dissipate as its economic and cultural interactions with the outside world increase.

American policy runs the risk of feeding suspicion rather than abating it. Its challenge is that two rigid and apocalyptic blocs are confronting each other: a Sunni bloc consisting of Egypt, Jordan, Saudi Arabia and the Gulf States; and the Shiite bloc comprising Iran, the Shiite sector of Iraq with Baghdad as its capital, the Shiite south of Lebanon under Hezbollah control facing Israel, and the Houthi portion of Yemen, completing the encirclement of the Sunni world. In these circumstances, the traditional adage that the enemy of your enemy can be treated as your friend no longer applies. For in the contemporary Middle East, it is likely that the enemy of your enemy remains your enemy.

A great deal depends on how the parties interpret recent events. Can the disillusionment of some of our Sunni allies be mitigated? How will Iran’s leaders interpret the nuclear accord once implemented—as a near-escape from potential disaster counseling a more moderate course, returning Iran to an international order? Or as a victory in which they have achieved their essential aims against the opposition of the U.N. Security Council, having ignored American threats and, hence, as an incentive to continue Tehran’s dual approach as both a legitimate state and a nonstate movement challenging the international order?

Two-power systems are prone to confrontation, as was demonstrated in Europe in the run-up to World War I. Even with traditional weapons technology, to sustain a balance of power between two rigid blocs requires an extraordinary ability to assess the real and potential balance of forces, to understand the accumulation of nuances that might affect this balance, and to act decisively to restore it whenever it deviates from equilibrium—qualities not heretofore demanded of an America sheltered behind two great oceans.

But the current crisis is taking place in a world of nontraditional nuclear and cyber technology. As competing regional powers strive for comparable threshold capacity, the nonproliferation regime in the Middle East may crumble. If nuclear weapons become established, a catastrophic outcome is nearly inevitable. A strategy of pre-emption is inherent in the nuclear technology. The U.S. must be determined to prevent such an outcome and apply the principle of nonproliferation to all nuclear aspirants in the region.

Too much of our public debate deals with tactical expedients. What we need is a strategic concept and to establish priorities on the following principles:

• So long as ISIS survives and remains in control of a geographically defined territory, it will compound all Middle East tensions. Threatening all sides and projecting its goals beyond the region, it freezes existing positions or tempts outside efforts to achieve imperial jihadist designs. The destruction of ISIS is more urgent than the overthrow of Bashar Assad, who has already lost over half of the area he once controlled. Making sure that this territory does not become a permanent terrorist haven must have precedence. The current inconclusive U.S. military effort risks serving as a recruitment vehicle for ISIS as having stood up to American might.

• The U.S. has already acquiesced in a Russian military role. Painful as this is to the architects of the 1973 system, attention in the Middle East must remain focused on essentials. And there exist compatible objectives. In a choice among strategies, it is preferable for ISIS-held territory to be reconquered either by moderate Sunni forces or outside powers than by Iranian jihadist or imperial forces. For Russia, limiting its military role to the anti-ISIS campaign may avoid a return to Cold War conditions with the U.S.

• The reconquered territories should be restored to the local Sunni rule that existed there before the disintegration of both Iraqi and Syrian sovereignty. The sovereign states of the Arabian Peninsula, as well as Egypt and Jordan, should play a principal role in that evolution. After the resolution of its constitutional crisis, Turkey could contribute creatively to such a process.

• As the terrorist region is being dismantled and brought under nonradical political control, the future of the Syrian state should be dealt with concurrently. A federal structure could then be built between the Alawite and Sunni portions. If the Alawite regions become part of a Syrian federal system, a context will exist for the role of Mr. Assad, which reduces the risks of genocide or chaos leading to terrorist triumph.

• The U.S. role in such a Middle East would be to implement the military assurances in the traditional Sunni states that the administration promised during the debate on the Iranian nuclear agreement, and which its critics have demanded.

• In this context, Iran’s role can be critical. The U.S. should be prepared for a dialogue with an Iran returning to its role as a Westphalian state within its established borders.

The U.S. must decide for itself the role it will play in the 21st century; the Middle East will be our most immediate—and perhaps most severe—test. At question is not the strength of American arms but rather American resolve in understanding and mastering a new world.

Mr. Kissinger served as national-security adviser and secretary of state under Presidents Nixon and Ford.

Friday, October 9, 2015

Daniel Shuchman's review of Harry G. Frankfurt's On Inequality

Beggar Thy Neighbor. By Daniel Shuchman
Daniel Shuchman's review of Harry G. Frankfurt's On Inequality (Princeton, 102 pages, $14.95)
http://www.wsj.com/articles/beggar-thy-neighbor-1444345359
Wall Street Journal, Oct 09, 2015

In a 2005 best seller, Harry Frankfurt, a Princeton philosophy professor, explored the often complex nature of popular false ideas. “On Bulls—” examined outright lies, ambiguous forms of obfuscation and the not-always-transparent intentions of those who promote them. Now, in “On Inequality,” Mr. Frankfurt eviscerates one of the shibboleths of our time: that economic inequality—in his definition, “the possession by some of more money than others”—is the most urgent issue confronting society. This idea, he believes, suffers from logical and moral errors of the highest order.

The fixation on equality, as a moral ideal in and of itself, is critically flawed, according to the professor. It holds that justice is determined by one person’s position relative to another, not his absolute well-being. Therefore the logic of egalitarianism can lead to perverse outcomes, he argues. Most egregiously, income inequality could be eliminated very effectively “by making everyone equally poor.” And while the lowest economic stratum of society is always associated with abject poverty, this need not be the case. Mr. Frankfurt imagines instances where those “who are doing considerably worse than others may nonetheless be doing rather well.” This possibility—as with contemporary America’s wide inequalities among relatively prosperous people—undermines the coherence of a philosophy mandating equality.

Mr. Frankfurt acknowledges that “among morally conscientious individuals, appeals in behalf of equality often have very considerable emotional or rhetorical power.” The motivations for pursuing equality may be well-meaning but they are profoundly misguided and contribute to “the moral disorientation and shallowness of our time.”

The idea that equality in itself is a paramount goal, Mr. Frankfurt argues, alienates people from their own characters and life aspirations. The amount of wealth possessed by others does not bear on “what is needed for the kind of life a person would most sensibly and appropriately seek for himself.” The incessant egalitarian comparison of one against another subordinates each individual’s goals to “those that are imposed on them by the conditions in which others happen to live.” Thus, individuals are led to apply an arbitrary relative standard that does not “respect” their authentic selves.

If his literalist critique of egalitarianism is often compelling, Mr. Frankfurt’s own philosophy has more in common with such thinking than is first apparent. For Mr. Frankfurt, the imperative of justice is to alleviate poverty and improve lives, not to make people equal. He does not, however, think that it is morally adequate merely to provide people with a safety net. Instead, he argues for an ideal of “sufficiency.”

By sufficiency Mr. Frankfurt means enough economic resources for every individual to be reasonably satisfied with his circumstances, assuming that the individual’s satisfaction need not be disturbed by others having more. While more money might be welcome, it would not “alter his attitude toward his life, or the degree of his contentment with it.” The achievement of economic and personal contentment by everyone is Mr. Frankfurt’s priority. In fact, his principle of sufficiency is so ambitious it demands that lack of money should never be the cause of anything “distressing or unsatisfying” in anyone’s life.

What’s the harm of such a desirable, if unrealistic goal? The author declares that inequality is “morally disturbing” only when his standard of sufficiency is not achieved. His just society would, in effect, mandate a universal entitlement to a lifestyle that has been attained only by a minuscule fraction of humans in all history. Mr. Frankfurt recognizes such reasoning may bring us full circle: “The most feasible approach” to universal sufficiency may well be policies that, in practice, differ little from those advocated in the “pursuit of equality.”

In passing, the author notes another argument against egalitarianism, the “dangerous conflict between equality and liberty.” He is referring to the notion that leaving people free to choose their work and what goods and services they consume will always lead to an unequal distribution of income. To impose any preconceived economic distribution, as the philosopher Robert Nozick argued, involves “continuous interference in people’s lives.” Like egalitarianism, Mr. Frankfurt’s ideal of “sufficiency” would hold property rights and economic liberty hostage to his utopian vision.

Such schemes, Nozick argued, see economic assets as having arrived on earth fully formed, like “manna from heaven,” with no consideration of their human origin. Mr. Frankfurt also presumes that one person’s wealth must be the reason others don’t have a “sufficient” amount to be blissfully carefree; he condemns the “excessively affluent” who have “extracted” too much from the nation. This leaves a would-be philosopher-king the task of divvying up loot as he chooses.

On the surface, “On Inequality” is a provocative challenge to a prevailing orthodoxy. But as the author’s earlier book showed, appearances can deceive. When Thomas Piketty, in “Capital in the Twenty-First Century,” says that most wealth is rooted in theft or is arbitrary, or when Mr. Frankfurt’s former Princeton colleague Paul Krugman says the “rich” are “undeserving,” they are not (just) making the case for equality. By arguing that wealth accumulation is inherently unjust, they lay a moral groundwork for confiscation of property. Similarly, Mr. Frankfurt accuses the affluent of “gluttony”—a sentiment about which there appears to be unanimity in that temple of tenured sufficiency, the Princeton faculty club. The author claims to be motivated by respect for personal autonomy and fulfillment. By ignoring economic liberty, he reveals he is not.

Mr. Shuchman is a fund manager in New York.

Sunday, July 26, 2015

International Courts and the New Paternalism - African leaders are the targets because ambitious jurists consider them to be 'low-hanging fruit'

International Courts and the New Paternalism. By Jendayi Frazer
African leaders are the targets because ambitious jurists consider them to be ‘low-hanging fruit.’
http://www.wsj.com/articles/international-courts-and-the-new-paternalism-1437778048
WSJ, July 24, 2015 6:47 p.m. ET
Nairobi, Kenya

President Obama arrived in Kenya on Friday and will travel from here to Ethiopia, two crucial U.S. allies in East Africa. The region is not only emerging as an economic powerhouse, it is also an important front in the battle with al Qaeda, al-Shabaab, Islamic State and other Islamist radicals.

Yet grievances related to how the International Criminal Court’s universal jurisdiction is applied in Africa are interfering with U.S. and European relations on the continent. In Africa there are accusations of neocolonialism and even racism in ICC proceedings, and a growing consensus that Africans are being unjustly indicted by the court.

It wasn’t supposed to be this way. After the failure to prevent mass atrocities in Europe and Africa in the 1990s, a strong consensus emerged that combating impunity had to be an international priority. Ad hoc United Nations tribunals were convened to judge the masterminds of genocide and crimes against humanity in Yugoslavia, Rwanda and Sierra Leone. These courts were painfully slow and expensive. But their mandates were clear and limited, and they helped countries to turn the page and focus on rebuilding.

Soon universal jurisdiction was seen not only as a means to justice, but also a tool for preventing atrocities in the first place. Several countries in Western Europe including Spain, the United Kingdom, Belgium and France empowered their national courts with universal jurisdiction. In 2002 the treaty establishing the International Criminal Court entered into force.

Africa and Europe were early adherents and today constitute the bulk of ICC membership. But India, China, Russia and most of the Middle East—representing well over half the world’s population—stayed out. So did the United States. Leaders in both parties worried that an unaccountable supranational court would become a venue for politicized show trials. The track record of the ICC and European courts acting under universal jurisdiction has amply borne out these concerns.

Only when U.S. Defense Secretary Donald Rumsfeld threatened to move NATO headquarters out of Brussels in 2003 did Belgium rein in efforts to indict former President George H.W. Bush, and Gens. Colin Powell and Tommy Franks, for alleged “war crimes” during the 1990-91 Gulf War. Spanish courts have indicted American military personnel in Iraq and investigated the U.S. detention facility in Guantanamo Bay.

But with powerful states able to shield themselves and their clients, Africa has borne the brunt of indictments. Far from pursuing justice for victims, these courts have become a venue for public-relations exercises by activist groups. Within African countries, they have been manipulated by one political faction to sideline another, often featuring in electoral politics.

The ICC’s recent indictments of top Kenyan officials are a prime example. In October 2014, Kenyan President Uhuru Kenyatta became the first sitting head of state to appear before the ICC, though he took the extraordinary step of temporarily transferring power to his deputy to avoid the precedent. ICC prosecutors indicted Mr. Kenyatta in connection with Kenya’s post-election ethnic violence of 2007-08, in which some 1,200 people were killed.

Last December the ICC withdrew all charges against Mr. Kenyatta, saying the evidence had “not improved to such an extent that Mr Kenyatta’s alleged criminal responsibility can be proven beyond reasonable doubt.” As U.S. assistant secretary of state for African affairs from 2005-09, and the point person during Kenya’s 2007-08 post-election violence, I knew the ICC indictments were purely political. The court’s decision to continue its case against Kenya’s deputy president, William Ruto, reflects a degree of indifference and even hostility to Kenya’s efforts to heal its political divisions.

The ICC’s indictments in Kenya began with former chief prosecutor Luis Moreno-Ocampo’s determination to prove the court’s relevance in Africa by going after what he reportedly called “low-hanging fruit”: African political and military leaders unable to resist ICC jurisdiction.

More recently, the arrest of Rwandan chief of intelligence Lt. Gen. Emmanuel Karenzi Karake in London last month drew a unanimous reproach from the African Union’s Peace and Security Council. The warrant dates to a 2008 Spanish indictment for alleged reprisal killings following the 1994 Rwandan genocide. At the time of the indictment, Mr. Karenzi Karake was deputy commander of the joint U.N.-African Union peacekeeping operation in Darfur. The Rwandan troops under his command were the backbone of the Unamid force, and his performance in Darfur was by all accounts exemplary.

Moreover, a U.S. government interagency review conducted in 2007-08, when I led the State Department’s Bureau of African Affairs, found that the Spanish allegations against Mr. Karenzi Karake were false and unsubstantiated. The U.S. fully backed his reappointment in 2008 as deputy commander of Unamid forces. It would be a travesty of justice if the U.K. were to extradite Mr. Karenzi Karake to Spain to stand trial.

Sadly, the early hope of “universal jurisdiction” ending impunity for perpetrators of genocide and crimes against humanity has given way to cynicism, both in Africa and the West. In Africa it is believed that, in the rush to demonstrate their power, these courts and their defenders have been too willing to brush aside considerations of due process that they defend at home.

In the West, the cynicism is perhaps even more damaging because it calls into question the moral capabilities of Africans and their leaders, and revives the language of paternalism and barbarism of earlier generations.

Ms. Frazer, a former U.S. ambassador to South Africa (2004-05) and assistant secretary of state for African affairs (2005-09), is an adjunct senior fellow for Africa studies at the Council on Foreign Relations.

Sunday, June 7, 2015

Five Bedrock Principles for Investors. By Morgan Housel

Brilliance isn’t the only key to Warren Buffett’s investing success. See rule No. 5.



The U.S. economy shrank last quarter. The Federal Reserve is widely expected to begin raising interest rates later this year. U.S. stocks are expensive by many measures. Greece’s national finances remain fragile. Oh, and election season already is under way in the U.S.

Investors who are tempted to sell risky assets and flee to safety don’t have to look far for justification.

If you are one of them, ponder this: Most of what matters in investing involves bedrock principles, not current events.

Here are five principles every investor should keep in mind:

1. Diversification is how you limit the risk of losses in an uncertain world.

If, 30 years ago, a visitor from the future had said that the Soviet Union had collapsed, Japan’s stock market had stagnated for a quarter century, China had become a superpower and North Dakota had helped turn the U.S. into a fast-growing source of crude oil, few would have believed it.

The next 30 years will be just as surprising.

Diversification among different assets can be frustrating. It requires, at every point in time, owning some unpopular assets.

Why would I want to own European stocks if Europe’s economy is such a mess? Why should I buy bonds if interest rates are so low?

The appropriate answer is, “Because the future will play out in ways you or your adviser can’t possibly comprehend.”

Owning a little bit of everything is a bet on humility, which the history of investing shows is a valuable trait.

2. You are your own worst enemy.

The biggest risk investors face isn’t a recession, a bear market, the Federal Reserve or their least favorite political party.

It is their own emotions and biases, and the destructive behaviors they cause.

You can be the best stock picker in the world, capable of finding tomorrow’s winning businesses before anyone else. But if you panic and sell during the next bear market, none of it will matter.

You can be armed with an M.B.A. and have 40 years before retirement to let your savings compound into a fortune. But if you have a gambling mentality and you day-trade penny stocks, your outlook seems dismal.

You can be a mathematical genius, building the most sophisticated stock-market forecasting models. But if you don’t understand the limits of your intelligence, you are on your way to disaster.

There aren’t many iron rules of investing, but one of them is that no amount of brain power can compensate for behavioral errors. Figure out what mistakes you are prone to make and embrace strategies that limit the risk.

3. There is a price to pay.

The stock market has historically offered stellar long-term returns, far better than cash or bonds.

But there is a cost. The price of admission to earn high long-term returns in stocks is a ceaseless torrent of unpredictable outcomes, senseless volatility and unexpected downturns.

If you can stick with your investments through the rough spots, you don’t actually pay this bill; it is a mental surcharge. But it is very real. Not everyone is willing to pay it, which is why there is opportunity for those who are.

There is an understandable desire to forecast what the market will do in the short run. But the reason stocks offer superior long-term returns is precisely because we can’t forecast what they will do in the short run.

4. When in doubt, choose the investment with the lowest fee.

As a group, investors’ profits always will equal the overall market’s returns minus all fees and expenses.

Below-average fees, therefore, offer one of your best shots at earning above-average results.

A talented fund manager can be worth a higher fee, mind you. But enduring outperformance is one of the most elusive investing skills.

According to Vanguard Group, which has championed low-cost investing products, more than 80% of actively managed U.S. stock funds underperformed a low-cost index fund in the 10 years through December. It is far more common for a fund manager to charge excess fees than to deliver excess performance.

There are no promises in investing. The best you can do is put the odds in your favor. And the evidence is overwhelming: The lower the costs, the more the odds tip in your favor.
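
The arithmetic behind fee drag can be sketched in a few lines. The figures below (a 7% gross annual return, a 1.0% versus 0.1% expense ratio, a 30-year horizon) are illustrative assumptions, not numbers from the column or from Vanguard:

```python
# Hypothetical illustration of fee drag. The return and fee figures
# are assumptions chosen for illustration, not data from the article.
def final_balance(principal, gross_return, annual_fee, years):
    """Grow `principal` at (gross_return - annual_fee), compounded yearly."""
    return principal * (1 + gross_return - annual_fee) ** years

# $10,000 invested for 30 years at an assumed 7% gross annual return.
high_fee = final_balance(10_000, 0.07, 0.010, 30)  # 1.0% expense ratio
low_fee = final_balance(10_000, 0.07, 0.001, 30)   # 0.1% expense ratio

print(f"1.0% fee: ${high_fee:,.0f}")
print(f"0.1% fee: ${low_fee:,.0f}")
print(f"Cost of the extra 0.9%: ${low_fee - high_fee:,.0f}")
```

Under these assumptions the seemingly small 0.9-point fee difference compounds into a gap worth more than the original investment, which is the sense in which low costs "tip the odds in your favor."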

5. Time is the most powerful force in investing.

Eighty-four-year-old Warren Buffett’s current net worth is around $73 billion, nearly all of which is in Berkshire Hathaway stock. Berkshire’s stock has risen 24-fold since 1990.

Do the math, and some $70 billion of Mr. Buffett’s $73 billion fortune was accumulated around or after his 60th birthday.

Mr. Buffett is, of course, a phenomenal investor whose talents few will replicate. But the real key to his wealth is that he has been a phenomenal investor for two-thirds of a century.

Wealth grows exponentially—a little at first, then slightly more, and then in a hurry for those who stick around the longest.
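
The back-loaded shape of compounding can be sketched numerically. The 10% annual return and 60-year horizon below are assumptions for illustration, not Mr. Buffett's actual record:

```python
# Hypothetical sketch of why compound growth is back-loaded.
# The 10% rate and 60-year horizon are illustrative assumptions.
def balances(principal, rate, years):
    """Year-by-year balances of `principal` compounding at `rate`."""
    return [principal * (1 + rate) ** y for y in range(years + 1)]

path = balances(1.0, 0.10, 60)  # $1 compounding at 10% for 60 years
final = path[-1]
at_year_40 = path[40]

# How much of the terminal wealth arrives in the final third of the horizon?
share_after_40 = (final - at_year_40) / final
print(f"Final multiple: {final:,.0f}x")
print(f"Share earned after year 40: {share_after_40:.0%}")
```

Under these assumed numbers, the large majority of the terminal wealth appears in the last third of the horizon, echoing the observation that most of Mr. Buffett's fortune accumulated around or after his 60th birthday.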

That lesson—that time, patience and endurance pay off—is something we mortals can learn from, particularly younger workers just starting to save for retirement.


Saturday, May 30, 2015

Magna Carta: Eight Centuries of Liberty

June marks the 800th anniversary of Magna Carta, the ‘Great Charter’ that established the rule of law for the English-speaking world. Its revolutionary impact still resounds today, writes Daniel Hannan

http://www.wsj.com/articles/magna-carta-eight-centuries-of-liberty-1432912022 

King John, pressured by English barons, reluctantly signs Magna Carta, the ‘Great Charter,’ on the Thames riverbank, Runnymede, June 15, 1215, as rendered in James Doyle’s ‘A Chronicle of England.’ Photo: Mary Evans Picture Library/Everett Collection

Eight hundred years ago next month, on a reedy stretch of riverbank in southern England, the most important bargain in the history of the human race was struck. I realize that’s a big claim, but in this case, only superlatives will do. As Lord Denning, the most celebrated modern British jurist, put it, Magna Carta was “the greatest constitutional document of all time, the foundation of the freedom of the individual against the arbitrary authority of the despot.”

It was at Runnymede, on June 15, 1215, that the idea of the law standing above the government first took contractual form. King John accepted that he would no longer get to make the rules up as he went along. From that acceptance flowed, ultimately, all the rights and freedoms that we now take for granted: uncensored newspapers, security of property, equality before the law, habeas corpus, regular elections, sanctity of contract, jury trials.

Magna Carta is Latin for “Great Charter.” It was so named not because the men who drafted it foresaw its epochal power but because it was long. Yet, almost immediately, the document began to take on a political significance that justified the adjective in every sense.

The bishops and barons who had brought King John to the negotiating table understood that rights required an enforcement mechanism. The potency of a charter is not in its parchment but in the authority of its interpretation. The constitution of the U.S.S.R., to pluck an example more or less at random, promised all sorts of entitlements: free speech, free worship, free association. But as Soviet citizens learned, paper rights are worthless in the absence of mechanisms to hold rulers to account.

Magna Carta instituted a form of conciliar rule that was to develop directly into the Parliament that meets at Westminster today. As the great Victorian historian William Stubbs put it, “the whole constitutional history of England is little more than a commentary on Magna Carta.”

And not just England. Indeed, not even England in particular. Magna Carta has always been a bigger deal in the U.S. The meadow where the abominable King John put his royal seal to the parchment lies in my electoral district in the county of Surrey. It went unmarked until 1957, when a memorial stone was finally raised there—by the American Bar Association.

Only now, for the anniversary, is a British monument being erected at the place where freedom was born. After some frantic fundraising by me and a handful of local councilors, a large bronze statue of Queen Elizabeth II will gaze out across the slow, green waters of the Thames, marking 800 years of the Crown’s acceptance of the rule of law.

Eight hundred years is a long wait. We British have, by any measure, been slow to recognize what we have. Americans, by contrast, have always been keenly aware of the document, referring to it respectfully as the Magna Carta.

Why? Largely because of who the first Americans were. Magna Carta was reissued several times throughout the 14th and 15th centuries, as successive Parliaments asserted their prerogatives, but it receded from public consciousness under the Tudors, whose dynasty ended with the death of Elizabeth I in 1603.

In the early 17th century, members of Parliament revived Magna Carta as a weapon in their quarrels with the autocratic Stuart monarchs. Opposition to the Crown was led by the brilliant lawyer Edward Coke (pronounced Cook), who drafted the first Virginia Charter in 1606. Coke’s argument was that the king was sidelining Parliament, and so unbalancing the “ancient constitution” of which Magna Carta was the supreme expression.
United for the first time, the four surviving original Magna Carta manuscripts are prepared for display at the British Library, London, Feb. 1, 2015. Photo: UPPA/ZUMA PRESS

The early settlers arrived while these rows were at their height and carried the mania for Magna Carta to their new homes. As early as 1637, Maryland sought permission to incorporate Magna Carta into its basic law, and the first edition of the Great Charter was published on American soil in 1687 by William Penn, who explained that it was what made Englishmen unique: “In France, and other nations, the mere will of the Prince is Law, his word takes off any man’s head, imposeth taxes, or seizes any man’s estate, when, how and as often as he lists; But in England, each man hath a fixed Fundamental Right born with him, as to freedom of his person and property in his estate, which he cannot be deprived of, but either by his consent, or some crime, for which the law has imposed such a penalty or forfeiture.”

There was a divergence between English and American conceptions of Magna Carta. In the Old World, it was thought of, above all, as a guarantor of parliamentary supremacy; in the New World, it was already coming to be seen as something that stood above both Crown and Parliament. This difference was to have vast consequences in the 1770s.

The American Revolution is now remembered on both sides of the Atlantic as a national conflict—as, indeed, a “War of Independence.” But no one at the time thought of it that way—not, at any rate, until the French became involved in 1778. Loyalists and patriots alike saw it as a civil war within a single polity, a war that divided opinion every bit as much in Great Britain as in the colonies.

The American Revolutionaries weren’t rejecting their identity as Englishmen; they were asserting it. As they saw it, George III was violating the “ancient constitution” just as King John and the Stuarts had done. It was therefore not just their right but their duty to resist, in the words of the delegates to the first Continental Congress in 1774, “as Englishmen our ancestors in like cases have usually done.”

Nowhere, at this stage, do we find the slightest hint that the patriots were fighting for universal rights. On the contrary, they were very clear that they were fighting for the privileges bestowed on them by Magna Carta. The concept of “no taxation without representation” was not an abstract principle. It could be found, rather, in Article 12 of the Great Charter: “No scutage or aid is to be levied in our realm except by the common counsel of our realm.” In 1775, Massachusetts duly adopted as its state seal a patriot with a sword in one hand and a copy of Magna Carta in the other.

I recount these facts to make an important, if unfashionable, point. The rights we now take for granted—freedom of speech, religion, assembly and so on—are not the natural condition of an advanced society. They were developed overwhelmingly in the language in which you are reading these words.

When we call them universal rights, we are being polite. Suppose World War II or the Cold War had ended differently: There would have been nothing universal about them then. If they are universal rights today, it is because of a series of military victories by the English-speaking peoples.

Various early copies of Magna Carta survive, many of them in England’s cathedrals, tended like the relics that were removed during the Reformation. One hangs in the National Archives in Washington, D.C., next to the two documents it directly inspired: the Declaration of Independence and the Constitution. Another enriches the Australian Parliament in Canberra.

But there are only four 1215 originals. One of them, normally housed at Lincoln Cathedral, has recently been on an American tour, resting for some weeks at the Library of Congress. It wasn’t that copy’s first visit to the U.S. The same parchment was exhibited in New York at the 1939 World’s Fair, attracting an incredible 13 million visitors. World War II broke out while it was still on display, and it was transferred to Fort Knox for safekeeping until the end of the conflict.

Could there have been a more apt symbol of what the English-speaking peoples were fighting for in that conflagration? Think of the world as it stood in 1939. Constitutional liberty was more or less confined to the Anglosphere. Everywhere else, authoritarianism was on the rise. Our system, uniquely, elevated the individual over the state, the rules over the rulers.

When the 18th-century statesman Pitt the Elder described Magna Carta as England’s Bible, he was making a profound point. It is, so to speak, the Torah of the English-speaking peoples: the text that sets us apart while at the same time speaking truths to the rest of mankind.

The very success of Magna Carta makes it hard for us, 800 years on, to see how utterly revolutionary it must have appeared at the time. Magna Carta did not create democracy: Ancient Greeks had been casting differently colored pebbles into voting urns while the remote fathers of the English were grubbing about alongside pigs in the cold soil of northern Germany. Nor was it the first expression of the law: There were Sumerian and Egyptian law codes even before Moses descended from Sinai.

What Magna Carta initiated, rather, was constitutional government—or, as the terse inscription on the American Bar Association’s stone puts it, “freedom under law.”

It takes a real act of imagination to see how transformative this concept must have been. The law was no longer just an expression of the will of the biggest guy in the tribe. Above the king brooded something more powerful yet—something you couldn’t see or hear or touch or taste but that bound the sovereign as surely as it bound the poorest wretch in the kingdom. That something was what Magna Carta called “the law of the land.”

This phrase is commonplace in our language. But think of what it represents. The law is not determined by the people in government, nor yet by clergymen presuming to interpret a holy book. Rather, it is immanent in the land itself, the common inheritance of the people living there.

The idea of the law coming up from the people, rather than down from the government, is a peculiar feature of the Anglosphere. Common law is an anomaly, a beautiful, miraculous anomaly. In the rest of the world, laws are written down from first principles and then applied to specific disputes, but the common law grows like a coral, case by case, each judgment serving as the starting point for the next dispute. In consequence, it is an ally of freedom rather than an instrument of state control. It implicitly assumes residual rights.

And indeed, Magna Carta conceives rights in negative terms, as guarantees against state coercion. No one can put you in prison or seize your property or mistreat you other than by due process. This essentially negative conception of freedom is worth clinging to in an age that likes to redefine rights as entitlements—the right to affordable health care, the right to be forgotten and so on.

It is worth stressing, too, that Magna Carta conceived freedom and property as two expressions of the same principle. The whole document can be read as a lengthy promise that the goods of a free citizen will not be arbitrarily confiscated by someone higher up the social scale. Even the clauses that seem most remote from modern experience generally turn out, in reality, to be about security of ownership.

There are, for example, detailed passages about wardship. King John had been in the habit of marrying heiresses to royal favorites as a way to get his hands on their estates. The abstruse-sounding articles about inheritance rights are, in reality, simply one more expression of the general principle that the state may not expropriate without due process.

Those who stand awe-struck before the Great Charter expecting to find high-flown phrases about liberty are often surprised to see that a chunk of it is taken up with the placing of fish-traps on the Thames. Yet these passages, too, are about property, specifically the freedom of merchants to navigate inland waterways without having arbitrary tolls imposed on them by fish farmers.

Liberty and property: how naturally those words tripped, as a unitary concept, from the tongues of America’s Founders. These were men who had been shaped in the English tradition, and they saw parliamentary government not as an expression of majority rule but as a guarantor of individual freedom. How different was the Continental tradition, born 13 years later with the French Revolution, which saw elected assemblies as the embodiment of what Rousseau called the “general will” of the people.

In that difference, we may perhaps discern an explanation of why the Anglosphere resisted the chronic bouts of authoritarianism to which most other Western countries were prone. We who speak this language have always seen the defense of freedom as the duty of our representatives and so, by implication, of those who elect them. Liberty and democracy, in our tradition, are not balanced against each other; they are yoked together.

In February, the four surviving original copies of Magna Carta were united, for just a few hours, at the British Library—something that had not happened in 800 years. As I stood reverentially before them, someone recognized me and posted a photograph on Twitter with the caption: “If Dan Hannan gets his hands on all four copies of Magna Carta, will he be like Sauron with the Rings?”

Yet the majesty of the document resides in the fact that it is, so to speak, a shield against Saurons. Most other countries have fallen for, or at least fallen to, dictators. Many, during the 20th century, had popular communist parties or fascist parties or both. The Anglosphere, unusually, retained a consensus behind liberal capitalism.

This is not because of any special property in our geography or our genes but because of our constitutional arrangements. Those constitutional arrangements can take root anywhere. They explain why Bermuda is not Haiti, why Hong Kong is not China, why Israel is not Syria.

They work because, starting with Magna Carta, they have made the defense of freedom everyone’s responsibility. Americans, like Britons, have inherited their freedoms from past generations and should not look to any external agent for their perpetuation. The defense of liberty is your job and mine. It is up to us to keep intact the freedoms we inherited from our parents and to pass them on securely to our children.

Mr. Hannan is a British member of the European Parliament for the Conservative Party, a columnist for the Washington Examiner and the author of “Inventing Freedom: How the English-speaking Peoples Made the Modern World.”

Friday, April 3, 2015

The Federal President would not stay in power if he did not talk human rights. So look at it as a political imperative.

Joe Biden on Human Rights
The Vice President tells China’s leaders to ignore the U.S.
WSJ, Apr 01, 2015

White House officials can be oddly candid in talking to their liberal friends at the New Yorker magazine. That’s where an unnamed official in 2011 boasted of “leading from behind,” and where last year President Obama dismissed Islamic State as a terrorist “jayvee team.” Now the U.S. Vice President has revealed the Administration line on human rights in China.

In the April 6 issue, Joe Biden recounts meeting Xi Jinping months before his 2012 ascent to be China’s supreme leader. Mr. Xi asked him why the U.S. put “so much emphasis on human rights.” The right answer is simple: No government has the right to deny its citizens basic freedoms, and those that do tend also to threaten peace overseas, so U.S. support for human rights is a matter of values and interests.

Instead, Mr. Biden downplayed U.S. human-rights rhetoric as little more than political posturing. “No president of the United States could represent the United States were he not committed to human rights,” he told Mr. Xi. “President Barack Obama would not be able to stay in power if he did not speak of it. So look at it as a political imperative.” Then Mr. Biden assured China’s leader: “It doesn’t make us better or worse. It’s who we are. You make your decisions. We’ll make ours.” [not the WSJ's emphasis.]

Mr. Xi took the advice. Since taking office he has detained more than 1,000 political prisoners, from anticorruption activist Xu Zhiyong to lawyer Pu Zhiqiang and journalist Gao Yu. He has cracked down on Uighurs in Xinjiang, banning more Muslim practices and jailing scholar-activist Ilham Tohti for life. Anti-Christian repression and Internet controls are tightening. Nobel Peace laureate Liu Xiaobo remains in prison, his wife Liu Xia under illegal house arrest for the fifth year. Lawyer Gao Zhisheng left prison in August but is blocked from receiving medical care overseas. Hong Kong, China’s most liberal city, is losing its press freedom and political autonomy.

Amid all of this Mr. Xi and his government have faced little challenge from Washington. That is consistent with Hillary Clinton’s 2009 statement that human rights can’t be allowed to “interfere” with diplomacy on issues such as the economy and the environment. Mr. Obama tried walking that back months later, telling the United Nations that democracy and human rights aren’t “afterthoughts.” But his Administration’s record—and now Mr. Biden’s testimony—prove otherwise.

Saturday, March 14, 2015

Disrupting Disruptive Physicians. By Bruce Gewertz


Viewpoint

Disrupting Disruptive Physicians

Bruce L Gewertz, MD
 
JAMA Surg. Published online March 11, 2015. doi:10.1001/jamasurg.2014.2911.


On Thursday mornings, our operating room management committee meets to handle items large and small. Most of our discussions focus on block-time allocation, purchasing decisions, and the like. However, too often we talk about behavioral issues, particularly the now well-characterized disruptive physician.

We have all seen it or been there before. A physician acts out in the operating room with shouting or biting sarcasm, intimidating colleagues and staff and impeding them from functioning at a high level. The most debilitating perpetrators of this behavior are repeat customers who engender such fear and uncertainty in all who contact them that the morale of the nursing staff and anesthesiologists is undermined, work becomes an unbearable chore, and performance suffers.

When one engages a difficult physician on his or her behavior, the physician responds in characteristic fashion. He or she defends his or her actions as patient advocacy, pointing out the shortcomings of the scrub nurse or instruments and showing limited, if any, remorse. He or she argues that such civil disobedience is the only way to enact change. In truth, disruptive physicians’ actions are often admired by a sizable minority of their colleagues as the only way to articulate real frustrations of working in today’s highly complex hospital. In extreme situations, these physicians become folk heroes to younger physicians who envy their fortitude in confronting the power of the bureaucracy.

A few days after a recent outburst by a particularly unpleasant and repeat offender, I was enjoying my daily interval on the stationary bicycle at my gym. My thoughts were wandering to a broad range of topics. I spent some time considering what really drives this nonproductive behavior and how otherwise valuable physicians could be channeled successfully into a more collegial state. As in the past, I was long on theory but short on conviction that it would make a difference.

After my workout as I prepared to shower, I received an urgent email. A patient I was consulting for upper extremity embolization had developed confusion and possible cerebral emboli despite full anticoagulation. I responded that I was on my way to see her and suggested a few diagnostic tests and consultations.

As I typed my message, a custodial employee of the gym reminded me that no cellular telephones were allowed in the locker room. I pointed out that I was not making a call but using the email function, and that I was not disturbing anyone by talking. He repeated that cellular telephones were not allowed under any circumstances. I argued back: “I am a physician and this is an emergency.” My voice got louder and I became confrontational. I told him to call the manager. Another member next to me said quietly that the reason for the cellular telephone ban was the photographic potential of the devices and that I could have simply moved to the reception area and used the telephone any way I wished.

I felt like the fool I was. I trudged off to the showers feeling, as in the Texas homily, lower than a snake’s belly. After toweling off, I approached the employee and apologized for my behavior and for making his job more difficult. I told him that he had handled the situation far better than I had, and that I admired his restraint.

The lessons were stark and undeniable. Like my disruptive colleagues, I had justified my boorish behavior with patient care. I had assumed my need to break the rules far outweighed the reasonable and rational policy of the establishment; after all, I was important and people depended on me. Worse yet, I felt empowered to take out my frustration, enhanced by my worry about the patient, on someone unlikely to retaliate against me for fear of job loss.

I have come to realize that irrespective of disposition, when the setting is right, we are all potentially disruptive. The only questions are how frequent and how severe. Even more importantly, from a prognostic perspective, can we share the common drivers of these behaviors and develop insights that will lead to avoidance?

The most common approaches used today are only moderately effective. As in many other institutions, when physicians are deemed by their peers to have violated a carefully defined code of conduct, they are advised to apologize to any offended personnel. In many instances, these apologies are sincere and are, in fact, appreciated by all. Unfortunately, on occasion, the interaction is viewed as a forced function and the behavior is soon repeated albeit in a different nursing unit or operating room.

When such failures occur, persistently disruptive physicians are referred to our physician well-being committee. Through a highly confidential process, efforts are made to explore the potential causes for the behavior and acquaint the referred physician with the consequences of their actions on hospital function. Often, behavioral contracts are drawn up to precisely outline the individual’s issues and subsequent medical staff penalties if further violations occur.

That said, as well intentioned and psychologically sound as these programs are, there remains a hard core of repeat offenders. Despite the heightened stress and ill will engendered by disruptive physicians’ behavior, they simply cannot interact in other than confrontational fashion when frustrated by real or imagined shortcomings in the environment.

Based on nearly 20 years of physician management experience, it is my belief that in these few physicians, such behaviors are hard wired and fairly resistant to traditional counseling. An unfortunate end game is termination from a medical staff if the hostile working environment created by their outbursts is viewed as a liability threat by the institution. Such actions are always painful and bring no satisfaction to anyone involved. These high-stakes dramas, often involving critical institutional players on both sides, are played out behind closed doors. Few people are privy to the details of either the infraction or the attempts at remediation. Misunderstandings in the staff are common.

I suggest that an underused remedy is more intense peer pressure through continued education of those colleagues who might silently support these outbursts without fully realizing the consequences. This would begin by treating these incidents in the same way that we do other significant adverse events that occur in our hospitals. In confidential but interdisciplinary sessions, the genesis, nature, and consequences of the interaction could be explored openly. If indeed the inciting event was judged to be an important patient care issue, the problem could be identified and addressed yet clearly separated from the counterproductive interaction that followed. In addition to the deterrence provided by the more public airing of the incidents, the tenuous linkage between abusive behavior and patient protection could be severed. It is this linkage that provides any superficial legitimacy to the outbursts.

Through this process, peer pressure would be increased and provide a greater impetus for self-control and more productive interactions. Importantly, with such a direct and full examination of both the character and costs of poor conduct, whatever support exists for such behaviors within the medical staff would be diminished.
 
Bruce Gewertz, MD, Cedars-Sinai Health System Published Online: March 11, 2015. doi:10.1001/jamasurg.2014.2911.
Conflict of Interest Disclosures: None reported.

Thursday, January 29, 2015

In the name of ‘affordable’ loans, we are creating the conditions for a replay of the housing disaster


Building Toward Another Mortgage Meltdown. By Edward Pinto


In the name of ‘affordable’ loans, the White House is creating the conditions for a replay of the housing disaster

http://www.wsj.com/articles/edward-pinto-building-toward-another-mortgage-meltdown-1422489618
The Obama administration’s troubling flirtation with another mortgage meltdown took an unsettling turn on Tuesday with Federal Housing Finance Agency Director Mel Watt’s testimony before the House Financial Services Committee.

Mr. Watt told the committee that, having received “feedback from stakeholders,” he expects to release by the end of March new guidance on the “guarantee fee” charged by Fannie Mae and Freddie Mac to cover the credit risk on loans the federal mortgage agencies guarantee.

Here we go again. In the Obama administration, new guidance on housing policy invariably means lowering standards to get mortgages into the hands of people who may not be able to afford them.

Earlier this month, President Obama announced that the Federal Housing Administration (FHA) will begin lowering annual mortgage-insurance premiums “to make mortgages more affordable and accessible.” While that sounds good in the abstract, the decision is a bad one with serious consequences for the housing market.

Government programs to make mortgages more widely available to low- and moderate-income families have consistently offered overleveraged, high-risk loans that set up too many homeowners to fail. In the long run-up to the 2008 financial crisis, for example, federal mortgage agencies and their regulators cajoled and wheedled private lenders to loosen credit standards. They have been doing so again. When the next housing crash arrives, private lenders will be blamed—and homeowners and taxpayers will once again pay dearly.

Lowering annual mortgage-insurance premiums is part of a new affordable-lending effort by the Obama administration. More specifically, it is the latest salvo in a price war between two government mortgage giants to meet government mandates.

Fannie Mae fired the first shot in December when it relaunched the 30-year, 97% loan-to-value, or LTV, mortgage (a type of loan that was suspended in 2013). Fannie revived these 3% down-payment mortgages at the behest of its federal regulator, the Federal Housing Finance Agency (FHFA)—which has run Fannie Mae and Freddie Mac since 2008, when both government-sponsored enterprises (GSEs) went belly up and were put into conservatorship. The FHA’s mortgage-premium price rollback was a counteroffensive.

Déjà vu: Fannie launched its first price war against the FHA in 1994 by introducing the 30-year, 3% down-payment mortgage. It did so at the behest of its then-regulator, the Department of Housing and Urban Development. This and other actions led HUD in 2004 to credit Fannie Mae’s “substantial part in the ‘revolution’ ” in “affordable lending” to “historically underserved households.”

Fannie’s goal in 1994 and today is to take market share from the FHA, the main competitor for loans it and Freddie Mac need to meet mandates set by Congress since 1992 to increase loans to low- and moderate-income homeowners. The weapons in this war are familiar—lower pricing and progressively looser credit as competing federal agencies fight over existing high-risk lending and seek to expand such lending.

Mortgage price wars between government agencies are particularly dangerous, since access to low-cost capital and minimal capital requirements gives them the ability to continue for many years—all at great risk to the taxpayers. Government agencies also charge low-risk consumers more than necessary to cover the risk of default, using the overage to lower fees on loans to high-risk consumers.
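That cross-subsidy mechanism can be illustrated with a toy calculation. The loss rates and pool shares below are invented for the sketch, not actual agency figures: a single flat guarantee fee set to cover the blended expected loss of two borrower pools necessarily overcharges the safer pool and undercharges the riskier one.

```python
# Hypothetical numbers, not actual GSE pricing: a flat guarantee fee
# priced to the blended expected credit loss of two borrower pools.
low_risk_loss  = 0.002   # 0.2% expected annual credit loss (safer pool)
high_risk_loss = 0.010   # 1.0% expected annual credit loss (riskier pool)
share_low, share_high = 0.7, 0.3   # assumed pool shares

# A flat fee that just covers the portfolio's average expected loss.
flat_fee = share_low * low_risk_loss + share_high * high_risk_loss

print(f"flat fee:            {flat_fee:.4f}")
print(f"low-risk overcharge: {flat_fee - low_risk_loss:.4f}")
print(f"high-risk subsidy:   {high_risk_loss - flat_fee:.4f}")
```

With these assumed inputs the flat fee lands between the two pools' true loss rates, so every low-risk borrower pays more than their risk warrants and the overage funds the discount to the high-risk pool, which is the pattern the FHFA's annual studies documented.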

Starting in 2009 the FHFA released annual studies documenting the widespread nature of these cross-subsidies. The reports showed that low-down-payment, 30-year loans to individuals with low FICO scores were consistently subsidized by less-risky loans.

Unfortunately, special interests such as the National Association of Realtors—always eager to sell more houses and reap the commissions—and the left-leaning Urban Institute were cheerleaders for loose credit. In 1997, for example, HUD commissioned the Urban Institute to study Fannie and Freddie’s single-family underwriting standards. The Urban Institute’s 1999 report found that “the GSEs’ guidelines, designed to identify creditworthy applicants, are more likely to disqualify borrowers with low incomes, limited wealth, and poor credit histories; applicants with these characteristics are disproportionately minorities.” By 2000 Fannie and Freddie did away with down payments and raised debt-to-income ratios. HUD encouraged them to more aggressively enter the subprime market, and the GSEs decided to re-enter the “liar loan” (low doc or no doc) market, partly in a desire to meet higher HUD low- and moderate-income lending mandates.

On Jan. 6, the Urban Institute announced in a blog post: “FHA: Time to stop overcharging today’s borrowers for yesterday’s mistakes.” The institute endorsed an immediate cut of 0.40% in mortgage-insurance premiums charged by the FHA. But once the agency cuts premiums, Fannie and Freddie will inevitably reduce the guarantee fees charged to cover the credit risk on the loans they guarantee.

Now the other shoe appears poised to drop, given Mr. Watt’s promise on Tuesday to issue new guidance on guarantee fees.

This is happening despite Congress’s 2011 mandate that Fannie’s regulator adjust the prices of mortgages and guarantee fees to make sure they reflect the actual risk of loss—that is, to eliminate dangerous and distortive pricing by the two GSEs. Ed DeMarco, acting director of the FHFA since March 2009, worked hard to do so but left office in January 2014. Mr. Watt, his successor, suspended Mr. DeMarco’s efforts to comply with Congress’s mandate. Now that Fannie will once again offer heavily subsidized 3%-down mortgages, massive new cross-subsidies will return, and the congressional mandate will be ignored.

The law stipulates that the FHA maintain a loss-absorbing capital buffer equal to 2% of the value of its outstanding mortgages. The agency obtains this capital from profits earned on mortgages and future premiums. It hasn’t met its capital obligation since 2009 and will not reach compliance until the fall of 2016, according to the FHA’s latest actuarial report. But if the economy runs into another rough patch, this projection will go out the window.

Congress should put an end to this price war before it does real damage to the economy. It should terminate the ill-conceived GSE affordable-housing mandates and impose strong capital standards on the FHA that can’t be ignored as they have been for five years and counting.

Mr. Pinto, former chief credit officer of Fannie Mae, is co-director and chief risk officer of the International Center on Housing Risk at the American Enterprise Institute.

Wednesday, December 10, 2014

Though Luke Somers died, jihadists know they are targets if they kidnap Americans

A Noble Rescue Attempt. WSJ Editorial

http://www.wsj.com/articles/a-noble-rescue-attempt-1417991769

WSJ, Dec. 7, 2014 5:36 p.m. ET

Condolences to the family of Luke Somers, the kidnapped American journalist who was murdered Saturday during a rescue attempt by U.S. special forces in Yemen. His death is a moment for sadness and anger, but also for pride in the rescue team and praise for the Obama Administration for ordering the attempt.

According to the Journal’s account based on military and Administration sources, some 40 special forces flew to a remote part of Yemen and marched five miles to escape detection, but lost the element of surprise about 100 yards from the jihadist hideout. One of the terrorists was observed by drone surveillance entering a building, where it is believed he shot Somers and a South African hostage, Pierre Korkie. The special forces carried the wounded men out by helicopter, but one died en route and the other aboard a Navy ship.

There is no blame for failing to save Somers, whose al Qaeda captors had released a video on Thursday vowing to kill him in 72 hours if the U.S. did not meet unspecified demands. The jihadists were no doubt on high alert after special forces conducted a rescue attempt in late November at a hillside cave. The commandos rescued eight people, mostly Yemenis, but Somers had been moved.

It’s a tribute to the skill of U.S. special forces that these high-risk missions against a dangerous enemy don’t fail more often. But given good intelligence and a reasonable chance to save Somers, the fault would have been not to try for fear of failure or political blame.

The reality is that most American and British citizens captured by jihadists are now likely to be murdered as a terrorist statement. This isn’t always true for citizens of other countries that pay ransom. But the U.S. and U.K. rightly refuse on grounds that the payments give incentive for more kidnappings while enriching the terrorists.

Jihadists don’t distinguish between civilians and soldiers, or among journalists, clergy, doctors or aid workers. They are waging what they think is a struggle to the death against other religious faiths and the West. Their goal is to kill for political control and their brand of Islam.

The murders are likely to increase as the U.S. fight against Islamic State intensifies. The jihadists know from experience that they can’t win a direct military confrontation, so their goal is to weaken the resolve of democracies at home. Imposing casualties on innocent Americans abroad and attacking the homeland are part of their military strategy.

They don’t seem to realize that such brutality often backfires, reinforcing U.S. public resolve, as even Osama bin Laden understood judging by his intercepted communications. But Americans need to realize that there are no safe havens in this long war. Everyone is a potential target.

So we are entering an era when the U.S. will have to undertake more such rescues of Americans kidnapped overseas. The results will be mixed, but even failed attempts will send a message to jihadists that capturing Americans will make them targets—and that there is no place in the world they can’t be found and killed.

It’s a tragedy that fanatical Islamists have made the world so dangerous, but Americans should be proud of a country that has men and women willing to risk their own lives to leave no American behind.

Saturday, November 15, 2014

Jonathan Gruber’s ‘Stupid’ Budget Tricks

Jonathan Gruber’s ‘Stupid’ Budget Tricks. WSJ Editorial
His ObamaCare candor shows how Congress routinely cons taxpayers.
Wall Street Journal, Nov. 14, 2014 6:51 p.m. ET

As a rule, Americans don’t like to be called “stupid,” as Jonathan Gruber is discovering. Whatever his academic contempt for voters, the ObamaCare architect and Massachusetts Institute of Technology economist deserves the Presidential Medal of Freedom for his candor about the corruption of the federal budget process.

In his now-infamous talk at the University of Pennsylvania last year, Professor Gruber argued that the Affordable Care Act “would not have passed” had Democrats been honest about the income-redistribution policies embedded in its insurance regulations. But the more instructive moment is his admission that “this bill was written in a tortured way to make sure CBO did not score the mandate as taxes. If CBO scored the mandate as taxes, the bill dies.”

Mr. Gruber means the Congressional Budget Office, the institution responsible for putting “scores” or official price tags on legislation. He’s right that to pass ObamaCare Democrats perpetrated the rawest, most cynical abuse of the CBO since its creation in 1974.

In another clip from Mr. Gruber’s seemingly infinite video library, he discusses how he and Democrats wrote the law to game the CBO’s fiscal conventions and achieve goals that would otherwise be “politically impossible.” In still another, he explains that these ruses are “a sad statement about budget politics in the U.S., but there you have it.”

Yes you do. Such admissions aren’t revelations, since the truth has long been obvious to anyone curious enough to look. We and other critics wrote about ObamaCare’s budget gimmicks during the debate, and Rep. Paul Ryan exposed them at the 2010 “health summit.” President Obama changed the subject.

But rarely are liberal intellectuals as full frontal as Mr. Gruber about the accounting fraud ingrained in ObamaCare. Also notable are his do-what-you-gotta-do apologetics: “I’d rather have this law than not,” he says.

Recall five years ago. The White House wanted to pretend that the open-ended new entitlement would spend less than $1 trillion over 10 years and reduce the deficit too. Congress requires the budget gnomes to score bills as written, no matter how unrealistic the assumption or fake the promise. Democrats with the help of Mr. Gruber carefully designed the bill to exploit this built-in gullibility.

So they used a decade of taxes to fund merely six years of insurance subsidies. They made believe that Medicare payments to hospitals will some day fall below Medicaid rates. A since-repealed program for long-term care front-loaded taxes but back-loaded spending, designed to go broke over time. Remember the spectacle of Democrats waiting for the white smoke to come up from CBO and deliver the holy scripture verdict?

On the tape, Mr. Gruber also identifies a special liberal manipulation: CBO’s policy reversal to not count the individual mandate to buy insurance as an explicit component of the federal budget. In 1994, then-CBO chief Robert Reischauer reasonably determined that if the government forces people to buy a product by law, then those transactions no longer belong to the private economy but to the U.S. balance sheet. The CBO’s face-melting cost estimate helped to kill HillaryCare.

The CBO director responsible for this switcheroo that moved much of ObamaCare’s real spending off the books was Peter Orszag, who went on to become Mr. Obama’s budget director. Mr. Orszag nonetheless assailed CBO during the debate for not giving him enough credit for the law’s phantom “savings.”

Then again, Mr. Gruber told a Holy Cross audience in 2010 that although ObamaCare “is 90% health insurance coverage and 10% about cost control, all you ever hear people talk about is cost control. How it’s going to lower the cost of health care, that’s all they talk about. Why? Because that’s what people want to hear about because a majority of Americans care about health-care costs.”

***

Both political parties for some reason treat the CBO with the same reverence the ancient Greeks reserved for the Delphic oracle, but Mr. Gruber’s honesty is another warning that the budget rules are rigged to expand government and hide the true cost of entitlements. CBO scores aren’t unambiguous facts but are guesses about the future, biased by the Keynesian assumptions and models its political masters in Congress instruct it to use.

Republicans who now run Congress can help taxpayers by appointing a new CBO director, as is their right as the majority. Current head Doug Elmendorf is a respected economist, and he often has a dry wit as he reminds Congressfolk that if they feed him garbage, he must give them garbage back. But if the GOP won’t abolish the institution, then they can find a replacement who is as candid as Mr. Gruber about the flaws and limitations of the CBO status quo. The Tax Foundation’s Steve Entin would be an inspired pick.

Democrats are now pretending they’ve never heard of Mr. Gruber, though they used to appeal to his authority when he still had some. His commentaries are no less valuable because he is now a political liability for Democrats.

Wednesday, August 13, 2014

More War for Oil? President Obama dispatches troops to Iraq—and has to listen to the old canards all over again

More War for Oil? By Holman W. Jenkins, Jr.
President Obama dispatches troops to Iraq—and has to listen to the old canards all over again.
Wall Street Journal, Aug 12, 2014 7:00 p.m. ET
http://online.wsj.com/articles/holman-jenkins-more-war-for-oil-1407884431

The "no blood for oil" crowd has piped up with surprising speed and noisiness in the short hours since President Obama recommitted U.S. forces to the fight in Iraq.

Steve Coll, a writer for the New Yorker, suggests in a piece posted on the magazine's website that "Kurdish oil greed," whose partner Mr. Obama now becomes, has been a primary factor in making Iraq a failed state. That's apparently because of the Kurds' unwillingness to reach a revenue-sharing deal with Baghdad. For good measure, he refers readers to a Rachel Maddow video, featuring Steve Coll, that argues that the U.S. invaded Iraq to get its oil in the first place.

John B. Judis, a veteran editor of the New Republic, in contrast is relatively sane under the headline "The U.S. Airstrikes in Northern Iraq Are All About Oil." While nodding toward Mr. Obama's stated humanitarian justifications, he insists oil "lies near the center of American motives for intervention."

There are a few problems with this argument. Oil exists in the hinterland of Erbil, all right, the capital of a stable, prosperous and relatively free Kurdistan that President Obama now is trying to protect from the Islamic murderers of ISIS.

But oil also exists in northwestern Iraq—in fact, vast amounts of oil around Mosul, whose fall did not trigger Obama intervention. Oil is in Libya, where the U.S. quickly took a hike after the fall of Gadhafi. Oil is in Canada, where Mr. Obama, who just fatally risked his legacy with his core admirers by dispatching forces to the Mideast, can't bring himself to choose between his labor and greenie constituents by deciding to approve or veto the Keystone pipeline.

Oil apparently explains nothing except when it explains everything.

Another problem is that Americans are both consumers and producers of oil. So does the U.S. want high or low prices? A bigger producer in recent years, America presumably has seen its interest shifting steadily in the direction of higher prices. Yet acting to protect Kurdish production would have the opposite effect.

But then Mr. Coll especially is ritualizing, not thinking—and what he's ritualizing is a certain leftist hymn about the origins of the 2003 Iraq war. Never mind that if the U.S. had wanted Iraq's oil, it would have been vastly cheaper to buy it—Saddam was certainly eager to sell. Never mind that the Bush administration, after overthrowing Saddam, stood idly by while Baghdad awarded the biggest contracts to India, China and Angola.

It was not a Bushie but Madeleine Albright, in her maiden speech as Bill Clinton's secretary of state, who first laid out the case for regime change in Iraq.

In the same 1997 speech, she explained, "Last August, Iraqi forces took advantage of intra-Kurdish tensions and attacked the city of Irbil, in northern Iraq. President Clinton responded by expanding the no-fly zone to the southern suburbs of Baghdad. . . . Contrary to some expectations, the attack on Irbil has not restored Saddam Hussein's authority in the north. We are firmly engaged alongside Turkey and the United Kingdom in helping the inhabitants of the region find stability and work towards a unified and pluralistic Iraq."

Madame Secretary did not mention oil any more than President Obama did last week. Of course, the catechism holds that, when politicians aren't freely voicing their obsession with oil as Bush and Cheney supposedly did while cooking up the Iraq War, politicians are concealing their obsession with oil. In fact, oil was not yet produced in significant quantities in Erbil at the time. It was the peace and stability that Presidents Bush, Clinton and Bush provided, and that President Obama is trying to restore, that allowed the flowering of Iraqi Kurdistan, including its oil industry.

By now, America has invested 23 years in shielding northern Iraq from the suppurating chaos that seems to flow endlessly from Baghdad and its Sunni-dominated Western suburbs. It's one of our few conspicuous successes in Iraq. Politics, in the best and worst senses of the word, drives every political decision. Despite his palpable lack of enthusiasm, President Obama knows surrender in northern Iraq would be an intolerable disgrace for his administration and U.S. policy. So he sends in the troops.

We come to an irony. The liberal habit of assuming everyone else's motives are corrupt is, of course, an oldie-moldie, if a tad free-floating in this case. But the critics in question don't actually oppose Mr. Obama's intervention, the latest in our costly and thankless efforts in Iraq. They don't exactly endorse it either. The New Yorker's Mr. Coll especially seems out to avoid committing himself while striking a knowing, superior tone about the alleged centrality of oil, which is perhaps the most ignoble reason to pick up a pen on this subject right now.

Tuesday, July 15, 2014

The Citigroup ATM - Jack Lew and Tim Geithner escape mention in the bank settlement.

The Citigroup ATM, WSJ Editorial
Jack Lew and Tim Geithner escape mention in the bank settlement.
The Wall Street Journal, July 14, 2014 7:37 p.m. ET
http://online.wsj.com/articles/the-citigroup-atm-1405379378

The Department of Justice isn't known for a sense of humor. But on Monday it announced a civil settlement with Citigroup over failed mortgage investments that covers almost exactly the period when current Treasury Secretary Jack Lew oversaw divisions at Citi that presided over failed mortgage investments. Now, that's funny.

Though Justice, five states and the FDIC are prying $7 billion from the bank for allegedly misleading investors, there's no mention in the settlement of clawing back even a nickel of Mr. Lew's compensation. We also see no sanction for former Treasury Secretary Timothy Geithner, who allowed Citi to build colossal mortgage risks outside its balance sheet while overseeing the bank as president of the New York Federal Reserve.

The settlement says Citi's alleged misdeeds began in 2006, the year Mr. Lew joined the bank, and the agreement covers conduct "prior to January 1, 2009." That was shortly before Mr. Lew left to work for President Obama and two weeks before Mr. Lew received $944,518 from Citi in "salary, payout for vested restricted stock," and "discretionary cash compensation for work performed in 2008," according to a 2010 federal disclosure report. That was also the year Citi began receiving taxpayer bailouts of $45 billion in cash, plus hundreds of billions more in taxpayer guarantees.

While Attorney General Eric Holder is forgiving toward his Obama cabinet colleagues, he seems to believe that some housing transactions can never be forgiven. The $7 billion settlement includes the same collateralized debt obligation for which the bank already agreed to pay $285 million in a settlement with the Securities and Exchange Commission. The Justice settlement also includes a long list of potential charges not covered by the agreement, so prosecutors can continue to raid the Citi ATM.

Citi offers in return what looks like a blanket agreement not to sue the government over any aspect of the case, and waives its right to defend itself "based in whole or in part on a contention that, under the Double Jeopardy Clause in the Fifth Amendment of the Constitution, or under the Excessive Fines Clause in the Eighth Amendment of the Constitution, this Agreement bars a remedy sought in such criminal prosecution or administrative action." We hold no brief for Citi, which has been rescued three times by the feds. But what kind of government demands the right to exact repeated punishments for the same offense?

The bank's real punishment should have been failure, as former FDIC Chairman Sheila Bair and we argued at the time. Instead, the regulators kept Citi alive with taxpayer money far beyond what it provided most other banks as part of the Troubled Asset Relief Program. Keeping it alive means they can now use Citi as a political target when it's convenient to claim they're tough on banks.

And speaking of that $7 billion, good luck finding a justification for it in the settlement agreement. The number seems to have been pulled out of thin air: it's unrelated to Citi's mortgage-securities market share or any other metric we can see, beyond its media impact.

If this sounds cynical, readers should consult the Justice Department's own leaks to the press about how the Citi deal went down. Last month the feds were prepared to bring charges against the bank, but the necessities of public relations intervened.

According to the Journal, "News had leaked that afternoon, June 17, that the U.S. had captured Ahmed Abu Khatallah, a key suspect in the attacks on the American consulate in Benghazi in 2012. Justice Department officials didn't want the announcement of the suit against Citigroup—and its accompanying litany of alleged misdeeds related to mortgage-backed securities—to be overshadowed by questions about the Benghazi suspect and U.S. policy on detainees. Citigroup, which didn't want to raise its offer again and had been preparing to be sued, never again heard the threat of a suit."

This week's settlement includes $4 billion for the Treasury, roughly $500 million for the states and FDIC, and $2.5 billion for mortgage borrowers. That last category has become a fixture of recent government mortgage settlements, even though the premise of this case involves harm done to bond investors, not mortgage borrowers.

But the Obama Administration's references to the needs of Benghazi PR remind us that it could be worse. At least Mr. Holder isn't blaming the Geithner and Lew failures on a video.

Thursday, July 10, 2014

Our Financial Crisis Amnesia - Remember the S&L crisis? Nobody else does either. And we'll soon forget about 2008 too

Our Financial Crisis Amnesia. By Alex J. Pollock
Remember the S&L crisis? Nobody else does either. And we'll soon forget about 2008 too.
WSJ, July 9, 2014 6:50 p.m. ET
http://online.wsj.com/articles/alex-pollock-our-financial-crisis-amnesia-1404946250

It is now five years since the end of the most recent U.S. financial crisis of 2007-09. Stocks have made record highs, junk bonds and leveraged loans have boomed, house prices have risen, and already there are cries for lower credit standards on mortgages to "increase access."

Meanwhile, in vivid contrast to the Swiss central bank, which marks its investments to market, the Federal Reserve has designed its own regulatory accounting so that it will never have to recognize any losses on its $4 trillion portfolio of long-term bonds and mortgage securities.

Who remembers that such "special" accounting is exactly what the Federal Home Loan Bank Board designed in the 1980s to hide losses in savings and loans? Who remembers that there even was a Federal Home Loan Bank Board, which for its manifold financial sins was abolished in 1989?

It is 25 years since 1989. Who remembers how severe the multiple financial crises of the 1980s were?

The government of Mexico defaulted on its loans in 1982 and set off a global debt crisis. The Federal Reserve's double-digit interest rates had rendered insolvent the aggregate savings and loan industry, until then the principal supplier of mortgage credit. The oil bubble collapsed with enormous losses.

Between 1982 and 1992, a disastrous 2,270 U.S. depository institutions failed. That is an average of more than 200 failures a year or four a week over a decade. From speaking to a great many audiences about financial crises, I can testify that virtually no one knows this.
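The failure-rate averages above check out; a minimal sketch (assuming the 1982 through 1992 span is counted as eleven calendar years, inclusive):

```python
# Sanity check on the claim: 2,270 depository-institution failures
# between 1982 and 1992 averages "more than 200 a year or four a week."
failures = 2270
years = 11  # 1982 through 1992, inclusive (our assumption)

per_year = failures / years          # about 206 failures per year
per_week = failures / (years * 52)   # about 4 failures per week

print(f"{per_year:.0f} failures per year")
print(f"{per_week:.1f} failures per week")
```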

In the wake of the housing bust, I was occasionally asked, "Will we learn the lessons of this crisis?" "We will indeed," I would reply, "and we will remember them for at least four or five years." In 2007 as the first wave of panic was under way, I heard a senior international economist opine in deep, solemn tones, "What we have learned from this crisis is the importance of liquidity risk." "Yes," I said, "that's what we learn from every crisis."

The political reactions to the 1980s included the Financial Institutions Reform, Recovery and Enforcement Act of 1989, the FDIC Improvement Act of 1991, and the very ironically titled GSE Financial Safety and Soundness Act of 1992. Anybody remember the theories behind those acts?

After depositors in savings and loan associations were bailed out to the tune of $150 billion (the Federal Savings and Loan Insurance Corporation having gone belly up), then-Treasury Secretary Nicholas Brady pronounced that the great legislative point was "never again." Never, that is, until the Mexican debt crisis of 1994, the Asian debt crisis of 1997, and the Long-Term Capital Management crisis of 1998, all very exciting at the time.

And who remembers the Great Recession (so called by a prominent economist of the time) in 1973-75, the huge real-estate bust and New York City's insolvency crisis? That was the decade before the 1980s.

Viewing financial crises over several centuries, the great economic historian Charles Kindleberger concluded that they occur on average about once a decade. Similarly, former Fed Chairman Paul Volcker wittily observed that "about every 10 years, we have the biggest crisis in 50 years."

What is it about a decade or so? It seems that is long enough for memories to fade in the human group mind, as they are overlaid with happier recent experiences and replaced with optimistic new theories.

Speaking in 2013, Paul Tucker, the former deputy governor for financial stability of the Bank of England—a man who has thought long and hard about the macro risks of financial systems—stated, "It will be a while before confidence in the system is restored." But how long is "a while"? I'd say less than a decade.

Mr. Tucker went on to proclaim, "Never again should confidence be so blind." Ah yes, "never again." If Mr. Tucker's statement is meant as moral suasion, it's all right. But if meant as a prediction, don't bet on it.

Former Treasury Secretary Tim Geithner, for all his daydream of the government as financial Platonic guardian, knows this. As he writes in "Stress Test," his recent memoir: "Experts always have clever reasons why the boom they are enjoying will avoid the disastrous patterns of the past—until it doesn't." He predicts: "There will be a next crisis, despite all we did."

Right. But when? On the historical average, 2009 + 10 = 2019. Five more years is plenty of time for forgetting.

Mr. Pollock is a resident fellow at the American Enterprise Institute and was president and CEO of the Federal Home Loan Bank of Chicago 1991-2004.