Sunday, June 7, 2015

Five Bedrock Principles for Investors. By Morgan Housel

Brilliance isn’t the only key to Warren Buffett’s investing success. See rule No. 5.



The U.S. economy shrank last quarter. The Federal Reserve is widely expected to begin raising interest rates later this year. U.S. stocks are expensive by many measures. Greece’s national finances remain fragile. Oh, and election season already is under way in the U.S.

Investors who are tempted to sell risky assets and flee to safety don’t have to look far for justification.

If you are one of them, ponder this: Most of what matters in investing involves bedrock principles, not current events.

Here are five principles every investor should keep in mind:

1. Diversification is how you limit the risk of losses in an uncertain world.
If, 30 years ago, a visitor from the future had said that the Soviet Union would collapse, Japan’s stock market would stagnate for a quarter century, China would become a superpower and North Dakota would help turn the U.S. into a fast-growing source of crude oil, few would have believed it.

The next 30 years will be just as surprising.

Diversification among different assets can be frustrating. It requires, at every point in time, owning some unpopular assets.

Why would I want to own European stocks if Europe’s economy is such a mess? Why should I buy bonds if interest rates are so low?

The appropriate answer is, “Because the future will play out in ways you or your adviser can’t possibly comprehend.”

Owning a little bit of everything is a bet on humility, which the history of investing shows is a valuable trait.

2. You are your own worst enemy.

The biggest risk investors face isn’t a recession, a bear market, the Federal Reserve or their least favorite political party.

It is their own emotions and biases, and the destructive behaviors they cause.

You can be the best stock picker in the world, capable of finding tomorrow’s winning businesses before anyone else. But if you panic and sell during the next bear market, none of it will matter.

You can be armed with an M.B.A. and have 40 years before retirement to let your savings compound into a fortune. But if you have a gambling mentality and day-trade penny stocks, your outlook is dismal.

You can be a mathematical genius, building the most sophisticated stock-market forecasting models. But if you don’t understand the limits of your intelligence, you are on your way to disaster.

There aren’t many iron rules of investing, but one of them is that no amount of brain power can compensate for behavioral errors. Figure out what mistakes you are prone to make and embrace strategies that limit the risk.

3. There is a price to pay.

The stock market has historically offered stellar long-term returns, far better than cash or bonds.

But there is a cost. The price of admission to earn high long-term returns in stocks is a ceaseless torrent of unpredictable outcomes, senseless volatility and unexpected downturns.

If you can stick with your investments through the rough spots, you don’t actually pay this bill; it is a mental surcharge. But it is very real. Not everyone is willing to pay it, which is why there is opportunity for those who are.

There is an understandable desire to forecast what the market will do in the short run. But the reason stocks offer superior long-term returns is precisely because we can’t forecast what they will do in the short run.

4. When in doubt, choose the investment with the lowest fee.

As a group, investors’ profits always will equal the overall market’s returns minus all fees and expenses.

Below-average fees, therefore, offer one of your best shots at earning above-average results.

A talented fund manager can be worth a higher fee, mind you. But enduring outperformance is one of the most elusive investing skills.

According to Vanguard Group, which has championed low-cost investing products, more than 80% of actively managed U.S. stock funds underperformed a low-cost index fund in the 10 years through December. It is far more common for a fund manager to charge excess fees than to deliver excess performance.

There are no promises in investing. The best you can do is put the odds in your favor. And the evidence is overwhelming: The lower the costs, the more the odds tip in your favor.
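To put the fee argument in concrete numbers, here is a back-of-the-envelope sketch (my own illustrative figures, not the article’s): a hypothetical $10,000 stake compounding for 30 years at a 7% gross annual return, held in a fund charging 1.00% a year versus one charging 0.10%.

# Back-of-the-envelope fee-drag sketch. All figures are illustrative
# assumptions, not data from the article: $10,000 for 30 years at a 7%
# gross annual return, with the fee deducted from each year's return.

def ending_balance(principal, gross_return, annual_fee, years):
    """Compound principal for `years`, netting the annual fee out of each year's return."""
    balance = principal
    for _ in range(years):
        balance *= 1 + (gross_return - annual_fee)
    return balance

START, GROSS, YEARS = 10_000, 0.07, 30

high_fee = ending_balance(START, GROSS, 0.0100, YEARS)  # 1.00% annual fee
low_fee = ending_balance(START, GROSS, 0.0010, YEARS)   # 0.10% annual fee

print(f"1.00% fee: ${high_fee:,.0f}")            # roughly $57,400
print(f"0.10% fee: ${low_fee:,.0f}")             # roughly $74,000
print(f"Fee drag:  ${low_fee - high_fee:,.0f}")  # roughly $16,600

Under those assumptions the cheaper fund finishes roughly $16,600 ahead, and the entire gap comes from the fee, compounded year after year.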

5. Time is the most powerful force in investing.

Eighty-four-year-old Warren Buffett’s current net worth is around $73 billion, nearly all of which is in Berkshire Hathaway stock. Berkshire’s stock has risen 24-fold since 1990.

Do the math, and some $70 billion of Mr. Buffett’s $73 billion fortune was accumulated around or after his 60th birthday.

Mr. Buffett is, of course, a phenomenal investor whose talents few will replicate. But the real key to his wealth is that he has been a phenomenal investor for two-thirds of a century.

Wealth grows exponentially—a little at first, then slightly more, and then in a hurry for those who stick around the longest.
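To see the shape of that curve, here is a rough sketch with made-up numbers (mine, not Mr. Buffett’s actual record): a single stake compounding at a steady 20% a year from age 30 to age 84.

# Rough compounding sketch. The 20% annual return and the ages are
# illustrative assumptions, not Mr. Buffett's actual record.

def multiple_at(age, start_age=30, annual_return=0.20):
    """How many times over the original stake has grown by a given age."""
    return (1 + annual_return) ** (age - start_age)

at_60, at_84 = multiple_at(60), multiple_at(84)

print(f"Multiple of the stake at age 60: {at_60:,.0f}x")   # roughly 237x
print(f"Multiple of the stake at age 84: {at_84:,.0f}x")   # roughly 18,900x
print(f"Share of the final sum earned after 60: {1 - at_60 / at_84:.0%}")  # roughly 99%

Under those made-up assumptions, roughly 99% of the final sum shows up after age 60—the same shape as the $70 billion of $73 billion figure above.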

That lesson—that time, patience and endurance pay off—is something we mortals can learn from, particularly younger workers just starting to save for retirement.


Saturday, May 30, 2015

Magna Carta: Eight Centuries of Liberty

June marks the 800th anniversary of Magna Carta, the ‘Great Charter’ that established the rule of law for the English-speaking world. Its revolutionary impact still resounds today, writes Daniel Hannan

http://www.wsj.com/articles/magna-carta-eight-centuries-of-liberty-1432912022 

King John, pressured by English barons, reluctantly signs Magna Carta, the ‘Great Charter,’ on the Thames riverbank, Runnymede, June 15, 1215, as rendered in James Doyle’s ‘A Chronicle of England.’ Photo: Mary Evans Picture Library/Everett Collection http://si.wsj.net/public/resources/images/BN-IQ808_MAGNA_J_20150529103352.jpg

Eight hundred years ago next month, on a reedy stretch of riverbank in southern England, the most important bargain in the history of the human race was struck. I realize that’s a big claim, but in this case, only superlatives will do. As Lord Denning, the most celebrated modern British jurist, put it, Magna Carta was “the greatest constitutional document of all time, the foundation of the freedom of the individual against the arbitrary authority of the despot.”

It was at Runnymede, on June 15, 1215, that the idea of the law standing above the government first took contractual form. King John accepted that he would no longer get to make the rules up as he went along. From that acceptance flowed, ultimately, all the rights and freedoms that we now take for granted: uncensored newspapers, security of property, equality before the law, habeas corpus, regular elections, sanctity of contract, jury trials.

Magna Carta is Latin for “Great Charter.” It was so named not because the men who drafted it foresaw its epochal power but because it was long. Yet, almost immediately, the document began to take on a political significance that justified the adjective in every sense.

The bishops and barons who had brought King John to the negotiating table understood that rights required an enforcement mechanism. The potency of a charter is not in its parchment but in the authority of its interpretation. The constitution of the U.S.S.R., to pluck an example more or less at random, promised all sorts of entitlements: free speech, free worship, free association. But as Soviet citizens learned, paper rights are worthless in the absence of mechanisms to hold rulers to account.

Magna Carta instituted a form of conciliar rule that was to develop directly into the Parliament that meets at Westminster today. As the great Victorian historian William Stubbs put it, “the whole constitutional history of England is little more than a commentary on Magna Carta.”

And not just England. Indeed, not even England in particular. Magna Carta has always been a bigger deal in the U.S. The meadow where the abominable King John put his royal seal to the parchment lies in my electoral district in the county of Surrey. It went unmarked until 1957, when a memorial stone was finally raised there—by the American Bar Association.

Only now, for the anniversary, is a British monument being erected at the place where freedom was born. After some frantic fundraising by me and a handful of local councilors, a large bronze statue of Queen Elizabeth II will gaze out across the slow, green waters of the Thames, marking 800 years of the Crown’s acceptance of the rule of law.

Eight hundred years is a long wait. We British have, by any measure, been slow to recognize what we have. Americans, by contrast, have always been keenly aware of the document, referring to it respectfully as the Magna Carta.

Why? Largely because of who the first Americans were. Magna Carta was reissued several times throughout the 14th and 15th centuries, as successive Parliaments asserted their prerogatives, but it receded from public consciousness under the Tudors, whose dynasty ended with the death of Elizabeth I in 1603.

In the early 17th century, members of Parliament revived Magna Carta as a weapon in their quarrels with the autocratic Stuart monarchs. Opposition to the Crown was led by the brilliant lawyer Edward Coke (pronounced Cook), who drafted the first Virginia Charter in 1606. Coke’s argument was that the king was sidelining Parliament, and so unbalancing the “ancient constitution” of which Magna Carta was the supreme expression.
United for the first time, the four surviving original Magna Carta manuscripts are prepared for display at the British Library, London, Feb. 1, 2015. Photo: UPPA/ZUMA Press

The early settlers arrived while these rows were at their height and carried the mania for Magna Carta to their new homes. As early as 1637, Maryland sought permission to incorporate Magna Carta into its basic law, and the first edition of the Great Charter was published on American soil in 1687 by William Penn, who explained that it was what made Englishmen unique: “In France, and other nations, the mere will of the Prince is Law, his word takes off any man’s head, imposeth taxes, or seizes any man’s estate, when, how and as often as he lists; But in England, each man hath a fixed Fundamental Right born with him, as to freedom of his person and property in his estate, which he cannot be deprived of, but either by his consent, or some crime, for which the law has imposed such a penalty or forfeiture.”

There was a divergence between English and American conceptions of Magna Carta. In the Old World, it was thought of, above all, as a guarantor of parliamentary supremacy; in the New World, it was already coming to be seen as something that stood above both Crown and Parliament. This difference was to have vast consequences in the 1770s.

The American Revolution is now remembered on both sides of the Atlantic as a national conflict—as, indeed, a “War of Independence.” But no one at the time thought of it that way—not, at any rate, until the French became involved in 1778. Loyalists and patriots alike saw it as a civil war within a single polity, a war that divided opinion every bit as much in Great Britain as in the colonies.

The American Revolutionaries weren’t rejecting their identity as Englishmen; they were asserting it. As they saw it, George III was violating the “ancient constitution” just as King John and the Stuarts had done. It was therefore not just their right but their duty to resist, in the words of the delegates to the first Continental Congress in 1774, “as Englishmen our ancestors in like cases have usually done.”

Nowhere, at this stage, do we find the slightest hint that the patriots were fighting for universal rights. On the contrary, they were very clear that they were fighting for the privileges bestowed on them by Magna Carta. The concept of “no taxation without representation” was not an abstract principle. It could be found, rather, in Article 12 of the Great Charter: “No scutage or aid is to be levied in our realm except by the common counsel of our realm.” In 1775, Massachusetts duly adopted as its state seal a patriot with a sword in one hand and a copy of Magna Carta in the other.

I recount these facts to make an important, if unfashionable, point. The rights we now take for granted—freedom of speech, religion, assembly and so on—are not the natural condition of an advanced society. They were developed overwhelmingly in the language in which you are reading these words.

When we call them universal rights, we are being polite. Suppose World War II or the Cold War had ended differently: There would have been nothing universal about them then. If they are universal rights today, it is because of a series of military victories by the English-speaking peoples.

Various early copies of Magna Carta survive, many of them in England’s cathedrals, tended like the relics that were removed during the Reformation. One hangs in the National Archives in Washington, D.C., next to the two documents it directly inspired: the Declaration of Independence and the Constitution. Another enriches the Australian Parliament in Canberra.

But there are only four 1215 originals. One of them, normally housed at Lincoln Cathedral, has recently been on an American tour, resting for some weeks at the Library of Congress. It wasn’t that copy’s first visit to the U.S. The same parchment was exhibited in New York at the 1939 World’s Fair, attracting an incredible 13 million visitors. World War II broke out while it was still on display, and it was transferred to Fort Knox for safekeeping until the end of the conflict.

Could there have been a more apt symbol of what the English-speaking peoples were fighting for in that conflagration? Think of the world as it stood in 1939. Constitutional liberty was more or less confined to the Anglosphere. Everywhere else, authoritarianism was on the rise. Our system, uniquely, elevated the individual over the state, the rules over the rulers.

When the 18th-century statesman Pitt the Elder described Magna Carta as England’s Bible, he was making a profound point. It is, so to speak, the Torah of the English-speaking peoples: the text that sets us apart while at the same time speaking truths to the rest of mankind.

The very success of Magna Carta makes it hard for us, 800 years on, to see how utterly revolutionary it must have appeared at the time. Magna Carta did not create democracy: Ancient Greeks had been casting differently colored pebbles into voting urns while the remote fathers of the English were grubbing about alongside pigs in the cold soil of northern Germany. Nor was it the first expression of the law: There were Sumerian and Egyptian law codes even before Moses descended from Sinai.

What Magna Carta initiated, rather, was constitutional government—or, as the terse inscription on the American Bar Association’s stone puts it, “freedom under law.”

It takes a real act of imagination to see how transformative this concept must have been. The law was no longer just an expression of the will of the biggest guy in the tribe. Above the king brooded something more powerful yet—something you couldn’t see or hear or touch or taste but that bound the sovereign as surely as it bound the poorest wretch in the kingdom. That something was what Magna Carta called “the law of the land.”

This phrase is commonplace in our language. But think of what it represents. The law is not determined by the people in government, nor yet by clergymen presuming to interpret a holy book. Rather, it is immanent in the land itself, the common inheritance of the people living there.

The idea of the law coming up from the people, rather than down from the government, is a peculiar feature of the Anglosphere. Common law is an anomaly, a beautiful, miraculous anomaly. In the rest of the world, laws are written down from first principles and then applied to specific disputes, but the common law grows like a coral, case by case, each judgment serving as the starting point for the next dispute. In consequence, it is an ally of freedom rather than an instrument of state control. It implicitly assumes residual rights.

And indeed, Magna Carta conceives rights in negative terms, as guarantees against state coercion. No one can put you in prison or seize your property or mistreat you other than by due process. This essentially negative conception of freedom is worth clinging to in an age that likes to redefine rights as entitlements—the right to affordable health care, the right to be forgotten and so on.

It is worth stressing, too, that Magna Carta conceived freedom and property as two expressions of the same principle. The whole document can be read as a lengthy promise that the goods of a free citizen will not be arbitrarily confiscated by someone higher up the social scale. Even the clauses that seem most remote from modern experience generally turn out, in reality, to be about security of ownership.

There are, for example, detailed passages about wardship. King John had been in the habit of marrying heiresses to royal favorites as a way to get his hands on their estates. The abstruse-sounding articles about inheritance rights are, in reality, simply one more expression of the general principle that the state may not expropriate without due process.

Those who stand awe-struck before the Great Charter expecting to find high-flown phrases about liberty are often surprised to see that a chunk of it is taken up with the placing of fish-traps on the Thames. Yet these passages, too, are about property, specifically the freedom of merchants to navigate inland waterways without having arbitrary tolls imposed on them by fish farmers.

Liberty and property: how naturally those words tripped, as a unitary concept, from the tongues of America’s Founders. These were men who had been shaped in the English tradition, and they saw parliamentary government not as an expression of majority rule but as a guarantor of individual freedom. How different was the Continental tradition, born 13 years later with the French Revolution, which saw elected assemblies as the embodiment of what Rousseau called the “general will” of the people.

In that difference, we may perhaps discern an explanation of why the Anglosphere resisted the chronic bouts of authoritarianism to which most other Western countries were prone. We who speak this language have always seen the defense of freedom as the duty of our representatives and so, by implication, of those who elect them. Liberty and democracy, in our tradition, are not balanced against each other; they are yoked together.

In February, the four surviving original copies of Magna Carta were united, for just a few hours, at the British Library—something that had not happened in 800 years. As I stood reverentially before them, someone recognized me and posted a photograph on Twitter with the caption: “If Dan Hannan gets his hands on all four copies of Magna Carta, will he be like Sauron with the Rings?”

Yet the majesty of the document resides in the fact that it is, so to speak, a shield against Saurons. Most other countries have fallen for, or at least fallen to, dictators. Many, during the 20th century, had popular communist parties or fascist parties or both. The Anglosphere, unusually, retained a consensus behind liberal capitalism.

This is not because of any special property in our geography or our genes but because of our constitutional arrangements. Those constitutional arrangements can take root anywhere. They explain why Bermuda is not Haiti, why Hong Kong is not China, why Israel is not Syria.

They work because, starting with Magna Carta, they have made the defense of freedom everyone’s responsibility. Americans, like Britons, have inherited their freedoms from past generations and should not look to any external agent for their perpetuation. The defense of liberty is your job and mine. It is up to us to keep intact the freedoms we inherited from our parents and to pass them on securely to our children.

Mr. Hannan is a British member of the European Parliament for the Conservative Party, a columnist for the Washington Examiner and the author of “Inventing Freedom: How the English-speaking Peoples Made the Modern World.”

Friday, April 3, 2015

The President would not stay in power if he did not talk about human rights. So look at it as a political imperative.

Joe Biden on Human Rights
The Vice President tells China’s leaders to ignore the U.S.
WSJ, Apr 01, 2015

White House officials can be oddly candid in talking to their liberal friends at the New Yorker magazine. That’s where an unnamed official in 2011 boasted of “leading from behind,” and where last year President Obama dismissed Islamic State as a terrorist “jayvee team.” Now the U.S. Vice President has revealed the Administration line on human rights in China.

In the April 6 issue, Joe Biden recounts meeting Xi Jinping months before his 2012 ascent to be China’s supreme leader. Mr. Xi asked him why the U.S. put “so much emphasis on human rights.” The right answer is simple: No government has the right to deny its citizens basic freedoms, and those that do tend also to threaten peace overseas, so U.S. support for human rights is a matter of values and interests.

Instead, Mr. Biden downplayed U.S. human-rights rhetoric as little more than political posturing. “No president of the United States could represent the United States were he not committed to human rights,” he told Mr. Xi. “President Barack Obama would not be able to stay in power if he did not speak of it. So look at it as a political imperative.” Then Mr. Biden assured China’s leader: “It doesn’t make us better or worse. It’s who we are. You make your decisions. We’ll make ours.” [not the WSJ's emphasis.]

Mr. Xi took the advice. Since taking office he has detained more than 1,000 political prisoners, from anticorruption activist Xu Zhiyong to lawyer Pu Zhiqiang and journalist Gao Yu. He has cracked down on Uighurs in Xinjiang, banning more Muslim practices and jailing scholar-activist Ilham Tohti for life. Anti-Christian repression and Internet controls are tightening. Nobel Peace laureate Liu Xiaobo remains in prison, his wife Liu Xia under illegal house arrest for the fifth year. Lawyer Gao Zhisheng left prison in August but is blocked from receiving medical care overseas. Hong Kong, China’s most liberal city, is losing its press freedom and political autonomy.

Amid all of this Mr. Xi and his government have faced little challenge from Washington. That is consistent with Hillary Clinton’s 2009 statement that human rights can’t be allowed to “interfere” with diplomacy on issues such as the economy and the environment. Mr. Obama tried walking that back months later, telling the United Nations that democracy and human rights aren’t “afterthoughts.” But his Administration’s record—and now Mr. Biden’s testimony—prove otherwise.

Saturday, March 14, 2015

Disrupting Disruptive Physicians. By Bruce Gewertz


Viewpoint

Disrupting Disruptive Physicians

Bruce L Gewertz, MD
 
JAMA Surg. Published online March 11, 2015. doi:10.1001/jamasurg.2014.2911.


On Thursday mornings, our operating room management committee meets to handle items large and small. Most of our discussions focus on block-time allocation, purchasing decisions, and the like. However, too often we talk about behavioral issues, particularly the now well-characterized disruptive physician.

We have all seen it or been there before. A physician acts out in the operating room with shouting or biting sarcasm, intimidating colleagues and staff and impeding them from functioning at a high level. The most debilitating perpetrators of this behavior are repeat customers who engender such fear and uncertainty in all who contact them that the morale of the nursing staff and anesthesiologists is undermined, work becomes an unbearable chore, and performance suffers.

When one engages a difficult physician on his or her behavior, the physician responds in characteristic fashion. He or she defends his or her actions as patient advocacy, pointing out the shortcomings of the scrub nurse or instruments and showing limited, if any, remorse. He or she argues that such civil disobedience is the only way to enact change. In truth, disruptive physicians’ actions are often admired by a sizable minority of their colleagues as the only way to articulate real frustrations of working in today’s highly complex hospital. In extreme situations, these physicians become folk heroes to younger physicians who envy their fortitude in confronting the power of the bureaucracy.

A few days after a recent outburst by a particularly unpleasant and repeat offender, I was enjoying my daily interval on the stationary bicycle at my gym. My thoughts were wandering to a broad range of topics. I spent some time considering what really drives this nonproductive behavior and how otherwise valuable physicians could be channeled successfully into a more collegial state. As in the past, I was long on theory but short on conviction that it would make a difference.

After my workout as I prepared to shower, I received an urgent email. A patient I was consulting for upper extremity embolization had developed confusion and possible cerebral emboli despite full anticoagulation. I responded that I was on my way to see her and suggested a few diagnostic tests and consultations.

As I typed my message, a custodial employee of the gym reminded me that no cellular telephones were allowed in the locker room. I pointed out that I was not using my cellular telephone to talk but to send an email, so I was not bothering anyone. He again pointed out that cellular telephones were not allowed under any circumstances. I argued back, “I am a physician and this is an emergency.” My voice got louder and I became confrontational. I told him to call the manager. Another member next to me said quietly that the reason for the cellular telephone ban was the photographic potential of the devices and that I could have simply moved to the reception area and used the telephone any way I wished.

I felt like the fool I was. I trudged off to the showers feeling, as in the Texas homily, lower than a snake’s belly. After toweling off, I approached the employee and apologized for my behavior and for making his job more difficult. I told him he had handled the situation far better than me and I admired his restraint.

The lessons were stark and undeniable. Like my disruptive colleagues, I had justified my boorish behavior with patient care. I had assumed my need to break the rules far outweighed the reasonable and rational policy of the establishment; after all, I was important and people depended on me. Worse yet, I felt empowered to take out my frustration, enhanced by my worry about the patient, on someone unlikely to retaliate against me for fear of job loss.

I have come to realize that irrespective of disposition, when the setting is right, we are all potentially disruptive. The only questions are how frequent and how severe. Even more importantly, from a prognostic perspective, can we share the common drivers of these behaviors and develop insights that will lead to avoidance?

The most common approaches used today are only moderately effective. As in many other institutions, when physicians are deemed by their peers to have violated a carefully defined code of conduct, they are advised to apologize to any offended personnel. In many instances, these apologies are sincere and are, in fact, appreciated by all. Unfortunately, on occasion, the interaction is viewed as a forced function and the behavior is soon repeated albeit in a different nursing unit or operating room.

When such failures occur, persistently disruptive physicians are referred to our physician well-being committee. Through a highly confidential process, efforts are made to explore the potential causes for the behavior and acquaint the referred physician with the consequences of their actions on hospital function. Often, behavioral contracts are drawn up to precisely outline the individual’s issues and subsequent medical staff penalties if further violations occur.

That said, as well intentioned and psychologically sound as these programs are, there remains a hard core of repeat offenders. Despite the heightened stress and ill will engendered by disruptive physicians’ behavior, they simply cannot interact in anything other than a confrontational fashion when frustrated by real or imagined shortcomings in the environment.

Based on nearly 20 years of physician management experience, it is my belief that in these few physicians, such behaviors are hard wired and fairly resistant to traditional counseling. An unfortunate end game is termination from a medical staff if the hostile working environment created by their outbursts is viewed as a liability threat by the institution. Such actions are always painful and bring no satisfaction to anyone involved. These high-stakes dramas, often involving critical institutional players on both sides, are played out behind closed doors. Few people are privy to the details of either the infraction or the attempts at remediation. Misunderstandings in the staff are common.

I suggest that an underused remedy is more intense peer pressure through continued education of those colleagues who might silently support these outbursts without fully realizing the consequences. This would begin by treating these incidents in the same way that we do other significant adverse events that occur in our hospitals. In confidential but interdisciplinary sessions, the genesis, nature, and consequences of the interaction could be explored openly. If indeed the inciting event was judged to be an important patient care issue, the problem could be identified and addressed yet clearly separated from the counterproductive interaction that followed. In addition to the deterrence provided by the more public airing of the incidents, the tenuous linkage between abusive behavior and patient protection could be severed. It is this linkage that provides any superficial legitimacy to the outbursts.

Through this process, peer pressure would be increased and provide a greater impetus for self-control and more productive interactions. Importantly, with such a direct and full examination of both the character and costs of poor conduct, whatever support exists for such behaviors within the medical staff would be diminished.
 
Bruce Gewertz, MD, Cedars-Sinai Health System. Published online: March 11, 2015. doi:10.1001/jamasurg.2014.2911.
Conflict of Interest Disclosures: None reported.

Thursday, January 29, 2015

In the name of ‘affordable’ loans, we are creating the conditions for a replay of the housing disaster


Building Toward Another Mortgage Meltdown. By Edward Pinto


In the name of ‘affordable’ loans, the White House is creating the conditions for a replay of the housing disaster

http://www.wsj.com/articles/edward-pinto-building-toward-another-mortgage-meltdown-1422489618
The Obama administration’s troubling flirtation with another mortgage meltdown took an unsettling turn on Tuesday with Federal Housing Finance Agency Director Mel Watt’s testimony before the House Financial Services Committee.

Mr. Watt told the committee that, having received “feedback from stakeholders,” he expects to release by the end of March new guidance on the “guarantee fee” charged by Fannie Mae and Freddie Mac to cover the credit risk on loans the federal mortgage agencies guarantee.

Here we go again. In the Obama administration, new guidance on housing policy invariably means lowering standards to get mortgages into the hands of people who may not be able to afford them.

Earlier this month, President Obama announced that the Federal Housing Administration (FHA) will begin lowering annual mortgage-insurance premiums “to make mortgages more affordable and accessible.” While that sounds good in the abstract, the decision is a bad one with serious consequences for the housing market.

Government programs to make mortgages more widely available to low- and moderate-income families have consistently offered overleveraged, high-risk loans that set up too many homeowners to fail. In the long run-up to the 2008 financial crisis, for example, federal mortgage agencies and their regulators cajoled and wheedled private lenders to loosen credit standards. They have been doing so again. When the next housing crash arrives, private lenders will be blamed—and homeowners and taxpayers will once again pay dearly.

Lowering annual mortgage-insurance premiums is part of a new affordable-lending effort by the Obama administration. More specifically, it is the latest salvo in a price war between two government mortgage giants to meet government mandates.

Fannie Mae fired the first shot in December when it relaunched the 30-year, 97% loan-to-value, or LTV, mortgage (a type of loan that was suspended in 2013). Fannie revived these 3% down-payment mortgages at the behest of its federal regulator, the Federal Housing Finance Agency (FHFA)—which has run Fannie Mae and Freddie Mac since 2008, when both government-sponsored enterprises (GSEs) went belly up and were put into conservatorship. The FHA’s mortgage-premium price rollback was a counteroffensive.

Déjà vu: Fannie launched its first price war against the FHA in 1994 by introducing the 30-year, 3% down-payment mortgage. It did so at the behest of its then-regulator, the Department of Housing and Urban Development. This and other actions led HUD in 2004 to credit Fannie Mae’s “substantial part in the ‘revolution’ ” in “affordable lending” to “historically underserved households.”

Fannie’s goal in 1994 and today is to take market share from the FHA, the main competitor for loans it and Freddie Mac need to meet mandates set by Congress since 1992 to increase loans to low- and moderate-income homeowners. The weapons in this war are familiar—lower pricing and progressively looser credit as competing federal agencies fight over existing high-risk lending and seek to expand such lending.

Mortgage price wars between government agencies are particularly dangerous, since access to low-cost capital and minimal capital requirements gives them the ability to continue for many years—all at great risk to the taxpayers. Government agencies also charge low-risk consumers more than necessary to cover the risk of default, using the overage to lower fees on loans to high-risk consumers.

Starting in 2009 the FHFA released annual studies documenting the widespread nature of these cross-subsidies. The reports showed that low down payment, 30-year loans to individuals with low FICO scores were consistently subsidized by less-risky loans.

Unfortunately, special interests such as the National Association of Realtors—always eager to sell more houses and reap the commissions—and the left-leaning Urban Institute were cheerleaders for loose credit. In 1997, for example, HUD commissioned the Urban Institute to study Fannie and Freddie’s single-family underwriting standards. The Urban Institute’s 1999 report found that “the GSEs’ guidelines, designed to identify creditworthy applicants, are more likely to disqualify borrowers with low incomes, limited wealth, and poor credit histories; applicants with these characteristics are disproportionately minorities.” By 2000 Fannie and Freddie did away with down payments and raised debt-to-income ratios. HUD encouraged them to more aggressively enter the subprime market, and the GSEs decided to re-enter the “liar loan” (low doc or no doc) market, partly in a desire to meet higher HUD low- and moderate-income lending mandates.

On Jan. 6, the Urban Institute announced in a blog post: “FHA: Time to stop overcharging today’s borrowers for yesterday’s mistakes.” The institute endorsed an immediate cut of 0.40% in mortgage-insurance premiums charged by the FHA. But once the agency cuts premiums, Fannie and Freddie will inevitably reduce the guarantee fees charged to cover the credit risk on the loans they guarantee.

Now the other shoe appears poised to drop, given Mr. Watt’s promise on Tuesday to issue new guidance on guarantee fees.

This is happening despite Congress’s 2011 mandate that Fannie’s regulator adjust the prices of mortgages and guarantee fees to make sure they reflect the actual risk of loss—that is, to eliminate dangerous and distortive pricing by the two GSEs. Ed DeMarco, acting director of the FHFA since March 2009, worked hard to do so but left office in January 2014. Mr. Watt, his successor, suspended Mr. DeMarco’s efforts to comply with Congress’s mandate. Now that Fannie will once again offer heavily subsidized 3%-down mortgages, massive new cross-subsidies will return, and the congressional mandate will be ignored.

The law stipulates that the FHA maintain a loss-absorbing capital buffer equal to 2% of the value of its outstanding mortgages. The agency obtains this capital from profits earned on mortgages and future premiums. It hasn’t met its capital obligation since 2009 and will not reach compliance until the fall of 2016, according to the FHA’s latest actuarial report. But if the economy runs into another rough patch, this projection will go out the window.

Congress should put an end to this price war before it does real damage to the economy. It should terminate the ill-conceived GSE affordable-housing mandates and impose strong capital standards on the FHA that can’t be ignored as they have been for five years and counting.

Mr. Pinto, former chief credit officer of Fannie Mae, is co-director and chief risk officer of the International Center on Housing Risk at the American Enterprise Institute.