For months Democratic presidential hopeful Bernie Sanders has been
telling Americans that the government must “break up the banks” because
they are “too big to fail.” This is the wrong role for government, but
Sen. Sanders and others on both sides of the aisle have a point. The
2010 Dodd-Frank financial law, which was supposed to end too big to
fail, has not.
Dodd-Frank gave the Federal Deposit Insurance
Corp. authority to take over and oversee the reorganization of so-called
systemically important financial institutions whose failure could pose a
risk to the economy. But no one can be sure the FDIC will follow its
resolution strategy, which leads many to believe Dodd-Frank will be
bypassed in a crisis.
Reflecting on his own experience as overseer of the U.S. Treasury’s bailout program in 2008-09, Neel Kashkari, now president of the Federal Reserve Bank of Minneapolis, says government
officials are once again likely to bail out big banks and their
creditors rather than “trigger many trillions of additional costs to
society.”
The solution is not to break up the banks or turn them
into public utilities. Instead, we should do what Dodd-Frank failed to
do: Make big-bank failures feasible without tanking the economy by
writing a process to do so into the bankruptcy code through a new
amendment—a “chapter 14.”
Chapter 14 would impose losses on
shareholders and creditors while preventing the collapse of one firm
from spreading to others. It could be initiated by the lead regulatory
agency and would begin with an over-the-weekend bankruptcy hearing
before a pre-selected U.S. district judge. After the hearing, the court
would convert the bank’s eligible long-term debt into equity,
reorganizing the bankrupt bank’s balance sheet without restructuring its
operations.
A new non-bankrupt company, owned by the bankruptcy
estate (the temporary legal owner of a failed company’s assets and
property), would assume the recapitalized balance sheet of the failed
bank, including all obligations to its short-term creditors. But the
failed bank’s shareholders and long-term bondholders would have claims
only against the estate, not the new company.
The new firm would
take over the bank’s business and be led by the bankruptcy estate’s
chosen private-sector managers. With regulations requiring minimum
long-term debt levels, the new firm would be solvent. The bankruptcy
would be entirely contained, both because the new bank would keep
operating and paying its debts, and because losses would be allocated
entirely to the old bank’s shareholders and long-term bondholders.
An
examination by one of us (Emily Kapur) of previously unexplored
discovery and court documents from Lehman Brothers’ September 2008
bankruptcy shows that chapter 14 would have worked especially well for
that firm, without adverse effects on the financial system.
Here
is how Lehman under chapter 14 would have played out. The process would
start with a single, brief hearing for the parent company to facilitate
the creation of a new recapitalized company—a hearing in which the judge
would have minimal discretion. By contrast, Lehman’s actual bankruptcy
involved dozens of complex proceedings in the U.S. and abroad, creating
huge uncertainty and making it impossible for even part of the firm to
remain in business.
When Lehman went under it had $20 billion of
book equity and $96 billion of long-term debt, while its perceived
losses were around $54 billion. If the costs of a chapter 14 proceeding
amounted to an additional (and conservative) $10 billion, then the new
company would be well capitalized with around $52 billion of equity.
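The paragraph above is simple balance-sheet arithmetic; a minimal Python sketch, using only the dollar figures quoted in the article, makes the recapitalization explicit:

```python
# Back-of-the-envelope check of the chapter 14 recapitalization
# described above (all figures in billions of dollars, from the article).
book_equity = 20        # Lehman's book equity at failure
long_term_debt = 96     # long-term debt that would convert to equity
perceived_losses = 54   # losses as perceived by the market
proceeding_costs = 10   # assumed (conservative) cost of the proceeding

# Converting the long-term debt to equity absorbs the losses and costs,
# leaving the new company's equity cushion:
new_equity = book_equity + long_term_debt - perceived_losses - proceeding_costs
print(new_equity)  # 52
```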
The
new parent company would take over Lehman’s subsidiaries, all of which
would continue in business, outside of bankruptcy. And the new company
would honor all obligations to short-term creditors, such as repurchase
agreement and commercial paper lenders.
The result: Short-term
creditors would have no reason to run on the bank before the bankruptcy
proceeding, knowing they would be protected. And they would have no
reason to run afterward, because the new firm would be solvent.
Without
a run, Lehman would have $30 billion more liquidity after resolution
than it had in 2008, easing subsequent operational challenges. In the
broader marketplace, money-market funds would have no reason to curtail
lending to corporations, hedge funds would not flee so readily from
prime brokers, and investment banks would be less likely to turn to the
government for financing.
Eventually, the new company would make
a public stock offering to value the bankruptcy estate’s ownership
interest, and the estate would distribute its assets according to
statutory priority rules. If the valuation came in at $52 billion,
Lehman shareholders would be wiped out, as they were in 2008. Long-term
debtholders, with $96 billion in claims, would recover 54 cents on the
dollar, more than the 37 cents they actually received. All other creditors—the
large majority—would be paid in full at maturity.
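The recovery figure here follows from absolute priority: shareholders recover nothing until debt is paid in full, so the long-term debtholders divide the estate's value pro rata. A minimal sketch with the article's numbers (the $52 billion valuation is the article's hypothetical):

```python
# Distribution of a hypothetical $52 billion valuation under
# statutory priority rules (figures in billions, from the article).
valuation = 52          # hypothetical public-offering valuation of the estate's stake
long_term_claims = 96   # long-term debtholders' claims against the estate

# With shareholders wiped out, the entire valuation goes to the
# long-term debtholders, pro rata across their claims:
recovery = valuation / long_term_claims
print(f"{recovery:.2f}")  # 0.54 -- i.e., 54 cents on the dollar
```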
Other reforms,
such as higher capital requirements, may yet be needed to reduce risk
and lessen the chance of financial failure. But that is no reason to
wait on bankruptcy reform. A bill along the lines of the chapter 14 that
we advocate passed the House Judiciary Committee on Feb. 11. Two
versions await action in the Senate. Let’s end too big to fail, once and
for all.

Ms. Kapur is an attorney and economics Ph.D.
candidate at Stanford University. Mr. Taylor, a professor of economics
at Stanford, co-edited “Making Failure Feasible” (Hoover, 2015) with
Kenneth Scott and Thomas Jackson, which includes Ms. Kapur’s study.
Almost nobody in Washington cares, and most of the financial media haven’t noticed. But the inspector general’s office at the Federal Reserve recently reported the disturbing results of an internal investigation. Last December the central bank internally identified “fundamental weaknesses in key areas” related to the Fed’s own governance of the stress testing it conducts of financial firms.
The Fed’s stress tests theoretically judge whether the country’s largest banks can withstand economic downturns. So the Fed identifying a problem with its own management of the stress tests is akin to an energy company noticing that something is not right at one of its nuclear reactors.
According to the inspector general, “The governance review findings include, among other items, a shortcoming in policies and procedures, insufficient model testing” and “incomplete structures and information flows to ensure proper oversight of model risk management.” These Fed models are essentially a black box to the public, so there’s no way to tell from the outside how large a problem this is.
The Fed’s ability to construct and maintain financial and economic models is much more than a subject of intellectual curiosity. Given that Fed-approved models at the heart of the so-called Basel capital standards proved to be spectacularly wrong in the run-up to the last financial crisis, the new report is more reason to wonder why anyone should expect them to be more accurate the next time.
The Fed’s IG adds that last year’s internal review “notes that similar findings identified at institutions supervised by the Federal Reserve have typically been characterized as matters requiring immediate attention or as matters requiring attention.”
That’s for sure. Receiving a “matters requiring immediate attention” letter from the Fed is a big deal at a bank. The Journal reported last year that after the Fed used this language in a letter to Credit Suisse castigating the bank’s work in the market for leveraged loans, the bank chose not to participate in the financing of several buy-out deals.
But it’s hard to tell if anything will come from this report that seems to have fallen deep in a Beltway forest. The IG office’s report says that the Fed is taking a number of steps to correct its shortcomings, and that the Fed’s reform plans “appear to be responsive to our recommendations.”
The Fed wields enormous power with little democratic accountability and transparency. This was tolerable when the Fed’s main job was monetary, but its vast new regulatory authority requires more scrutiny. Congress should add the Fed’s stressed-out standards for stress tests to its oversight list.
In a 2005 best seller, Harry Frankfurt, a Princeton philosophy professor, explored the often complex nature of popular false ideas. “On Bulls—” examined outright lies, ambiguous forms of obfuscation and the not-always-transparent intentions of those who promote them. Now, in “On Inequality,” Mr. Frankfurt eviscerates one of the shibboleths of our time: that economic inequality—in his definition, “the possession by some of more money than others”—is the most urgent issue confronting society. This idea, he believes, suffers from logical and moral errors of the highest order.
The fixation on equality, as a moral ideal in and of itself, is critically flawed, according to the professor. It holds that justice is determined by one person’s position relative to another, not his absolute well-being. Therefore the logic of egalitarianism can lead to perverse outcomes, he argues. Most egregiously, income inequality could be eliminated very effectively “by making everyone equally poor.” And while the lowest economic stratum of society is always associated with abject poverty, this need not be the case. Mr. Frankfurt imagines instances where those “who are doing considerably worse than others may nonetheless be doing rather well.” This possibility—as with contemporary America’s wide inequalities among relatively prosperous people—undermines the coherence of a philosophy mandating equality.
Mr. Frankfurt acknowledges that “among morally conscientious individuals, appeals in behalf of equality often have very considerable emotional or rhetorical power.” The motivations for pursuing equality may be well-meaning but they are profoundly misguided and contribute to “the moral disorientation and shallowness of our time.”
The idea that equality in itself is a paramount goal, Mr. Frankfurt argues, alienates people from their own characters and life aspirations. The amount of wealth possessed by others does not bear on “what is needed for the kind of life a person would most sensibly and appropriately seek for himself.” The incessant egalitarian comparison of one against another subordinates each individual’s goals to “those that are imposed on them by the conditions in which others happen to live.” Thus, individuals are led to apply an arbitrary relative standard that does not “respect” their authentic selves.
If his literalist critique of egalitarianism is often compelling, Mr. Frankfurt’s own philosophy has more in common with such thinking than is first apparent. For Mr. Frankfurt, the imperative of justice is to alleviate poverty and improve lives, not to make people equal. He does not, however, think that it is morally adequate merely to provide people with a safety net. Instead, he argues for an ideal of “sufficiency.”
By sufficiency Mr. Frankfurt means enough economic resources for every individual to be reasonably satisfied with his circumstances, assuming that the individual’s satisfaction need not be disturbed by others having more. While more money might be welcome, it would not “alter his attitude toward his life, or the degree of his contentment with it.” The achievement of economic and personal contentment by everyone is Mr. Frankfurt’s priority. In fact, his principle of sufficiency is so ambitious it demands that lack of money should never be the cause of anything “distressing or unsatisfying” in anyone’s life.
What’s the harm of such a desirable, if unrealistic goal? The author declares that inequality is “morally disturbing” only when his standard of sufficiency is not achieved. His just society would, in effect, mandate a universal entitlement to a lifestyle that has been attained only by a minuscule fraction of humans in all history. Mr. Frankfurt recognizes such reasoning may bring us full circle: “The most feasible approach” to universal sufficiency may well be policies that, in practice, differ little from those advocated in the “pursuit of equality.”
In passing, the author notes another argument against egalitarianism, the “dangerous conflict between equality and liberty.” He is referring to the notion that leaving people free to choose their work and what goods and services they consume will always lead to an unequal distribution of income. To impose any preconceived economic distribution will, as the philosopher Robert Nozick argued, involve “continuous interference in people’s lives.” Like egalitarianism, Mr. Frankfurt’s ideal of “sufficiency” would hold property rights and economic liberty hostage to his utopian vision.
Such schemes, Nozick argued, see economic assets as having arrived on earth fully formed, like “manna from heaven,” with no consideration of their human origin. Mr. Frankfurt also presumes that one person’s wealth must be the reason others don’t have a “sufficient” amount to be blissfully carefree; he condemns the “excessively affluent” who have “extracted” too much from the nation. This leaves a would-be philosopher-king the task of divvying up loot as he chooses.
On the surface, “On Inequality” is a provocative challenge to a prevailing orthodoxy. But as the author’s earlier book showed, appearances can deceive. When Thomas Piketty, in “Capital in the Twenty-First Century,” says that most wealth is rooted in theft or is arbitrary, or when Mr. Frankfurt’s former Princeton colleague Paul Krugman says the “rich” are “undeserving,” they are not (just) making the case for equality. By arguing that wealth accumulation is inherently unjust, they lay a moral groundwork for confiscation of property. Similarly, Mr. Frankfurt accuses the affluent of “gluttony”—a sentiment about which there appears to be unanimity in that temple of tenured sufficiency, the Princeton faculty club. The author claims to be motivated by respect for personal autonomy and fulfillment. By ignoring economic liberty, he reveals he is not.
President Obama arrived in Kenya
on Friday and will travel from here to Ethiopia, two crucial U.S. allies
in East Africa. The region is not only emerging as an economic
powerhouse, it is also an important front in the battle with al Qaeda,
al-Shabaab, Islamic State and other Islamist radicals.
Yet
grievances related to how the International Criminal Court’s universal
jurisdiction is applied in Africa are interfering with U.S. and European
relations on the continent. In Africa there are accusations of
neocolonialism and even racism in ICC proceedings, and a growing
consensus that Africans are being unjustly indicted by the court.
It
wasn’t supposed to be this way. After the failure to prevent mass
atrocities in Europe and Africa in the 1990s, a strong consensus emerged
that combating impunity had to be an international priority. Ad hoc
United Nations tribunals were convened to judge the masterminds of
genocide and crimes against humanity in Yugoslavia, Rwanda and Sierra
Leone. These courts were painfully slow and expensive. But their
mandates were clear and limited, and they helped countries to turn the
page and focus on rebuilding.
Soon universal jurisdiction was
seen not only as a means to justice, but also a tool for preventing
atrocities in the first place. Several countries in Western Europe
including Spain, the United Kingdom, Belgium and France empowered their
national courts with universal jurisdiction. In 2002 the International
Criminal Court came into force.
Africa and Europe were early
adherents and today constitute the bulk of ICC membership. But India,
China, Russia and most of the Middle East—representing well over half
the world’s population—stayed out. So did the United States. Leaders in
both parties worried that an unaccountable supranational court would
become a venue for politicized show trials. The track record of the ICC
and European courts acting under universal jurisdiction has amply borne
out these concerns.
Only when U.S. Defense Secretary Donald Rumsfeld threatened to move NATO headquarters out of Brussels in 2003 did Belgium rein in efforts to indict former President George H.W. Bush, and Gens. Colin Powell and Tommy Franks,
for alleged “war crimes” during the 1990-91 Gulf War. Spanish courts
have indicted American military personnel in Iraq and investigated the
U.S. detention facility in Guantanamo Bay.
But with powerful
states able to shield themselves and their clients, Africa has borne the
brunt of indictments. Far from pursuing justice for victims, these
courts have become a venue for public-relations exercises by activist
groups. Within African countries, they have been manipulated by one
political faction to sideline another, often featuring in electoral
politics.
The ICC’s recent indictments of top Kenyan officials are a prime example. In October 2014, Kenyan President Uhuru Kenyatta
became the first sitting head of state to appear before the ICC, though
he took the extraordinary step of temporarily transferring power to his
deputy to avoid the precedent. ICC prosecutors indicted Mr. Kenyatta in
connection with Kenya’s post-election ethnic violence of 2007-08, in
which some 1,200 people were killed.
Last December the ICC
withdrew all charges against Mr. Kenyatta, saying the evidence had “not
improved to such an extent that Mr Kenyatta’s alleged criminal
responsibility can be proven beyond reasonable doubt.” As U.S. assistant
secretary of state for African affairs from 2005-09, and the point
person during Kenya’s 2007-08 post-election violence, I knew the ICC
indictments were purely political. The court’s decision to continue its
case against Kenya’s deputy president, William Ruto, reflects a degree of indifference and even hostility to Kenya’s efforts to heal its political divisions.
The ICC’s indictments in Kenya began with former chief prosecutor Luis Moreno-Ocampo’s
determination to prove the court’s relevance in Africa by going after
what he reportedly called “low-hanging fruit”: African political and
military leaders unable to resist ICC jurisdiction.
More
recently, the arrest of Rwandan chief of intelligence Lt. Gen. Emmanuel
Karenzi Karake in London last month drew a unanimous reproach from the
African Union’s Peace and Security Council. The warrant dates to a 2008
Spanish indictment for alleged reprisal killings following the 1994
Rwandan genocide. At the time of the indictment, Mr. Karenzi Karake was
deputy commander of the joint U.N.-African Union peacekeeping operation
in Darfur. The Rwandan troops under his command were the backbone of
the Unamid force, and his performance in Darfur was by all accounts
exemplary.
Moreover, a U.S. government interagency review
conducted in 2007-08, when I led the State Department’s Bureau of
African Affairs, found that the Spanish allegations against Mr. Karenzi
Karake were false and unsubstantiated. The U.S. fully backed his
reappointment in 2008 as deputy commander of Unamid forces. It would be a
travesty of justice if the U.K. were to extradite Mr. Karake to Spain
to stand trial.
Sadly, the early hope of “universal jurisdiction”
ending impunity for perpetrators of genocide and crimes against
humanity has given way to cynicism, both in Africa and the West. In
Africa it is believed that, in the rush to demonstrate their power,
these courts and their defenders have been too willing to brush aside
considerations of due process that they defend at home.
In the
West, the cynicism is perhaps even more damaging because it calls into
question the moral capabilities of Africans and their leaders, and
revives the language of paternalism and barbarism of earlier
generations.
Ms. Frazer, a former U.S. ambassador to South
Africa (2004-05) and assistant secretary of state for African affairs
(2005-09), is an adjunct senior fellow for Africa studies at the Council
on Foreign Relations.
June marks the 800th
anniversary of Magna Carta, the ‘Great Charter’ that established the
rule of law for the English-speaking world. Its revolutionary impact
still resounds today, writes Daniel Hannan
King John, pressured
by English barons, reluctantly signs Magna Carta, the ‘Great Charter,’
on the Thames riverbank, Runnymede, June 15, 1215, as rendered in James
Doyle’s ‘A Chronicle of England.’
By Daniel Hannan
Eight hundred years ago next month, on a reedy stretch of
riverbank in southern England, the most important bargain in the history
of the human race was struck. I realize that’s a big claim, but in this
case, only superlatives will do. As Lord Denning, the most celebrated
modern British jurist, put it, Magna Carta was “the greatest
constitutional document of all time, the foundation of the freedom of
the individual against the arbitrary authority of the despot.”
It
was at Runnymede, on June 15, 1215, that the idea of the law standing
above the government first took contractual form. King John accepted
that he would no longer get to make the rules up as he went along. From
that acceptance flowed, ultimately, all the rights and freedoms that we
now take for granted: uncensored newspapers, security of property,
equality before the law, habeas corpus, regular elections, sanctity of
contract, jury trials.
Magna Carta is Latin for “Great Charter.”
It was so named not because the men who drafted it foresaw its epochal
power but because it was long. Yet, almost immediately, the document
began to take on a political significance that justified the adjective
in every sense.
The bishops and barons who had brought King John
to the negotiating table understood that rights required an enforcement
mechanism. The potency of a charter is not in its parchment but in the
authority of its interpretation. The constitution of the U.S.S.R., to
pluck an example more or less at random, promised all sorts of
entitlements: free speech, free worship, free association. But as Soviet
citizens learned, paper rights are worthless in the absence of
mechanisms to hold rulers to account.
Magna Carta instituted a form of conciliar rule that was to develop
directly into the Parliament that meets at Westminster today. As the
great Victorian historian William Stubbs put it, “the whole
constitutional history of England is little more than a commentary on
Magna Carta.”
And
not just England. Indeed, not even England in particular. Magna Carta
has always been a bigger deal in the U.S. The meadow where the
abominable King John put his royal seal to the parchment lies in my
electoral district in the county of Surrey. It went unmarked until 1957,
when a memorial stone was finally raised there—by the American Bar
Association.
Only now, for the anniversary, is a British
monument being erected at the place where freedom was born. After some
frantic fundraising by me and a handful of local councilors, a large
bronze statue of Queen Elizabeth II will gaze out across the slow, green
waters of the Thames, marking 800 years of the Crown’s acceptance of
the rule of law.
Eight hundred years is a long wait. We British
have, by any measure, been slow to recognize what we have. Americans, by
contrast, have always been keenly aware of the document, referring to
it respectfully as the Magna Carta.
Why? Largely because
of who the first Americans were. Magna Carta was reissued several times
throughout the 14th and 15th centuries, as successive Parliaments
asserted their prerogatives, but it receded from public consciousness
under the Tudors, whose dynasty ended with the death of Elizabeth I in
1603.
In the early 17th century, members of Parliament revived
Magna Carta as a weapon in their quarrels with the autocratic Stuart
monarchs. Opposition to the Crown was led by the brilliant lawyer Edward
Coke (pronounced Cook), who drafted the first Virginia Charter in 1606.
Coke’s argument was that the king was sidelining Parliament, and so
unbalancing the “ancient constitution” of which Magna Carta was the
supreme expression.
United for the first
time, the four surviving original Magna Carta manuscripts are prepared
for display at the British Library, London, Feb. 1, 2015.
The early settlers arrived while these rows were at their height and
carried the mania for Magna Carta to their new homes. As early as 1637,
Maryland sought permission to incorporate Magna Carta into its basic
law, and the first edition of the Great Charter was published on
American soil in 1687 by William Penn, who explained that it was what
made Englishmen unique: “In France, and other nations, the mere will of
the Prince is Law, his word takes off any man’s head, imposeth taxes, or
seizes any man’s estate, when, how and as often as he lists; But in
England, each man hath a fixed Fundamental Right born with him, as to
freedom of his person and property in his estate, which he cannot be
deprived of, but either by his consent, or some crime, for which the law
has imposed such a penalty or forfeiture.”
There was a
divergence between English and American conceptions of Magna Carta. In
the Old World, it was thought of, above all, as a guarantor of
parliamentary supremacy; in the New World, it was already coming to be
seen as something that stood above both Crown and Parliament. This
difference was to have vast consequences in the 1770s.
The
American Revolution is now remembered on both sides of the Atlantic as a
national conflict—as, indeed, a “War of Independence.” But no one at
the time thought of it that way—not, at any rate, until the French
became involved in 1778. Loyalists and patriots alike saw it as a civil
war within a single polity, a war that divided opinion every bit as much
in Great Britain as in the colonies.
The American
Revolutionaries weren’t rejecting their identity as Englishmen; they
were asserting it. As they saw it, George III was violating the “ancient
constitution” just as King John and the Stuarts had done. It was
therefore not just their right but their duty to resist, in the words of
the delegates to the first Continental Congress in 1774, “as Englishmen
our ancestors in like cases have usually done.”
Nowhere, at this
stage, do we find the slightest hint that the patriots were fighting
for universal rights. On the contrary, they were very clear that they
were fighting for the privileges bestowed on them by Magna Carta. The
concept of “no taxation without representation” was not an abstract
principle. It could be found, rather, in Article 12 of the Great
Charter: “No scutage or aid is to be levied in our realm except by the
common counsel of our realm.” In 1775, Massachusetts duly adopted as its
state seal a patriot with a sword in one hand and a copy of Magna Carta
in the other.
I recount these facts to make an important, if
unfashionable, point. The rights we now take for granted—freedom of
speech, religion, assembly and so on—are not the natural condition of an
advanced society. They were developed overwhelmingly in the language in
which you are reading these words.
When we call them universal
rights, we are being polite. Suppose World War II or the Cold War had
ended differently: There would have been nothing universal about them
then. If they are universal rights today, it is because of a series of
military victories by the English-speaking peoples.
Various early
copies of Magna Carta survive, many of them in England’s cathedrals,
tended like the relics that were removed during the Reformation. One
hangs in the National Archives in Washington, D.C., next to the two
documents it directly inspired: the Declaration of Independence and the
Constitution. Another enriches the Australian Parliament in Canberra.
But
there are only four 1215 originals. One of them, normally housed at
Lincoln Cathedral, has recently been on an American tour, resting for
some weeks at the Library of Congress. It wasn’t that copy’s first visit
to the U.S. The same parchment was exhibited in New York at the 1939
World’s Fair, attracting an incredible 13 million visitors. World War II
broke out while it was still on display, and it was transferred to Fort
Knox for safekeeping until the end of the conflict.
Could there
have been a more apt symbol of what the English-speaking peoples were
fighting for in that conflagration? Think of the world as it stood in
1939. Constitutional liberty was more or less confined to the
Anglosphere. Everywhere else, authoritarianism was on the rise. Our
system, uniquely, elevated the individual over the state, the rules over
the rulers.
When the 18th-century statesman Pitt the Elder
described Magna Carta as England’s Bible, he was making a profound
point. It is, so to speak, the Torah of the English-speaking peoples:
the text that sets us apart while at the same time speaking truths to
the rest of mankind.
The very success of Magna Carta makes it
hard for us, 800 years on, to see how utterly revolutionary it must have
appeared at the time. Magna Carta did not create democracy: Ancient
Greeks had been casting differently colored pebbles into voting urns
while the remote fathers of the English were grubbing about alongside
pigs in the cold soil of northern Germany. Nor was it the first
expression of the law: There were Sumerian and Egyptian law codes even
before Moses descended from Sinai.
What Magna Carta initiated,
rather, was constitutional government—or, as the terse inscription on
the American Bar Association’s stone puts it, “freedom under law.”
It
takes a real act of imagination to see how transformative this concept
must have been. The law was no longer just an expression of the will of
the biggest guy in the tribe. Above the king brooded something more
powerful yet—something you couldn’t see or hear or touch or taste but
that bound the sovereign as surely as it bound the poorest wretch in the
kingdom. That something was what Magna Carta called “the law of the
land.”
This phrase is commonplace in our language. But think of
what it represents. The law is not determined by the people in
government, nor yet by clergymen presuming to interpret a holy book.
Rather, it is immanent in the land itself, the common inheritance of the
people living there.
The idea of the law coming up from the
people, rather than down from the government, is a peculiar feature of
the Anglosphere. Common law is an anomaly, a beautiful, miraculous
anomaly. In the rest of the world, laws are written down from first
principles and then applied to specific disputes, but the common law
grows like a coral, case by case, each judgment serving as the starting
point for the next dispute. In consequence, it is an ally of freedom
rather than an instrument of state control. It implicitly assumes
residual rights.
And indeed, Magna Carta conceives rights in
negative terms, as guarantees against state coercion. No one can put you
in prison or seize your property or mistreat you other than by due
process. This essentially negative conception of freedom is worth
clinging to in an age that likes to redefine rights as entitlements—the
right to affordable health care, the right to be forgotten and so on.
It
is worth stressing, too, that Magna Carta conceived freedom and
property as two expressions of the same principle. The whole document
can be read as a lengthy promise that the goods of a free citizen will
not be arbitrarily confiscated by someone higher up the social scale.
Even the clauses that seem most remote from modern experience generally
turn out, in reality, to be about security of ownership.
There
are, for example, detailed passages about wardship. King John had been
in the habit of marrying heiresses to royal favorites as a way to get
his hands on their estates. The abstruse-sounding articles about
inheritance rights are, in reality, simply one more expression of the
general principle that the state may not expropriate without due
process.
Those who stand awe-struck before the Great Charter
expecting to find high-flown phrases about liberty are often surprised
to see that a chunk of it is taken up with the placing of fish-traps on
the Thames. Yet these passages, too, are about property, specifically
the freedom of merchants to navigate inland waterways without having
arbitrary tolls imposed on them by fish farmers.
Liberty and
property: how naturally those words tripped, as a unitary concept, from
the tongues of America’s Founders. These were men who had been shaped in
the English tradition, and they saw parliamentary government not as an
expression of majority rule but as a guarantor of individual freedom.
How different was the Continental tradition, born 13 years later with
the French Revolution, which saw elected assemblies as the embodiment of
what Rousseau called the “general will” of the people.
In that
difference, we may perhaps discern an explanation of why the Anglosphere
resisted the chronic bouts of authoritarianism to which most other
Western countries were prone. We who speak this language have always
seen the defense of freedom as the duty of our representatives and so,
by implication, of those who elect them. Liberty and democracy, in our
tradition, are not balanced against each other; they are yoked together.
In February, the four surviving original copies of Magna Carta were
united, for just a few hours, at the British Library—something that had
not happened in 800 years. As I stood reverentially before them, someone
recognized me and posted a photograph on Twitter with the caption: “If
Dan Hannan gets his hands on all four copies of Magna Carta, will he be
like Sauron with the Rings?”
Yet the majesty of the document
resides in the fact that it is, so to speak, a shield against Saurons.
Most other countries have fallen for, or at least fallen to, dictators.
Many, during the 20th century, had popular communist parties or fascist
parties or both. The Anglosphere, unusually, retained a consensus behind
liberal capitalism.
This is not because of any special property
in our geography or our genes but because of our constitutional
arrangements. Those constitutional arrangements can take root anywhere.
They explain why Bermuda is not Haiti, why Hong Kong is not China, why
Israel is not Syria.
They work because, starting with Magna
Carta, they have made the defense of freedom everyone’s responsibility.
Americans, like Britons, have inherited their freedoms from past
generations and should not look to any external agent for their
perpetuation. The defense of liberty is your job and mine. It is up to
us to keep intact the freedoms we inherited from our parents and to pass
them on securely to our children.
Mr. Hannan is a British
member of the European Parliament for the Conservative Party, a
columnist for the Washington Examiner and the author of “Inventing
Freedom: How the English-speaking Peoples Made the Modern World.”
White House officials can be oddly candid in talking to their liberal
friends at the New Yorker magazine. That’s where an unnamed official in
2011 boasted of “leading from behind,” and where last year President Obama
dismissed Islamic State as a terrorist “jayvee team.” Now the U.S. Vice
President has revealed the Administration line on human rights in
China.
In the April 6 issue, Joe Biden recounts meeting Xi Jinping
months before his 2012 ascent to be China’s supreme leader. Mr. Xi
asked him why the U.S. put “so much emphasis on human rights.” The right
answer is simple: No government has the right to deny its citizens
basic freedoms, and those that do tend also to threaten peace overseas,
so U.S. support for human rights is a matter of values and interests.
Instead,
Mr. Biden downplayed U.S. human-rights rhetoric as little more than
political posturing. “No president of the United States could represent
the United States were he not committed to human rights,” he told Mr.
Xi. “President Barack Obama would not be able to stay in power if he did
not speak of it. So look at it as a political imperative.” Then Mr.
Biden assured China’s leader: “It doesn’t make us better or worse. It’s
who we are. You make your decisions. We’ll make ours.”
Mr. Xi took the advice. Since taking office he has detained more
than 1,000 political prisoners, from anticorruption activist Xu Zhiyong
to lawyer Pu Zhiqiang and journalist Gao Yu. He has cracked down on Uighurs in Xinjiang, banning more Muslim practices and jailing scholar-activist Ilham Tohti for life. Anti-Christian repression and Internet controls are tightening. Nobel Peace laureate Liu Xiaobo remains in prison, his wife Liu Xia
under illegal house arrest for the fifth year. Lawyer Gao Zhisheng left
prison in August but is blocked from receiving medical care overseas.
Hong Kong, China’s most liberal city, is losing its press freedom and
political autonomy.
Amid all of this, Mr. Xi and his government have faced little challenge from Washington. That is consistent with Hillary Clinton’s
2009 statement that human rights can’t be allowed to “interfere” with
diplomacy on issues such as the economy and the environment. Mr. Obama
tried walking that back months later, telling the United Nations that
democracy and human rights aren’t “afterthoughts.” But his
Administration’s record—and now Mr. Biden’s testimony—prove otherwise.
The Obama administration’s troubling flirtation with another mortgage meltdown took an unsettling turn on Tuesday with Federal Housing Finance Agency Director Mel Watt’s testimony before the House Financial Services Committee.
Mr. Watt told the committee that, having received “feedback from stakeholders,” he expects to release by the end of March new guidance on the “guarantee fee” charged by Fannie Mae and Freddie Mac to cover the credit risk on loans the federal mortgage agencies guarantee.
Here
we go again. In the Obama administration, new guidance on housing
policy invariably means lowering standards to get mortgages into the
hands of people who may not be able to afford them.
Earlier this
month, President Obama announced that the Federal Housing
Administration (FHA) will begin lowering annual mortgage-insurance
premiums “to make mortgages more affordable and accessible.” While that
sounds good in the abstract, the decision is a bad one with serious
consequences for the housing market.
Government programs to make
mortgages more widely available to low- and moderate-income families
have consistently offered overleveraged, high-risk loans that set up too
many homeowners to fail. In the long run-up to the 2008 financial
crisis, for example, federal mortgage agencies and their regulators
cajoled and wheedled private lenders to loosen credit standards. They
have been doing so again. When the next housing crash arrives, private
lenders will be blamed—and homeowners and taxpayers will once again pay
dearly.
Lowering annual mortgage-insurance premiums is part of a
new affordable-lending effort by the Obama administration. More
specifically, it is the latest salvo in a price war between two
government mortgage giants to meet government mandates.
Fannie
Mae fired the first shot in December when it relaunched the 30-year, 97%
loan-to-value, or LTV, mortgage (a type of loan that was suspended in
2013). Fannie revived these 3% down-payment mortgages at the behest of
its federal regulator, the Federal Housing Finance Agency (FHFA)—which
has run Fannie Mae and Freddie Mac since 2008, when both
government-sponsored enterprises (GSEs) went belly up and were put into
conservatorship. The FHA’s mortgage-premium price rollback was a
counteroffensive.
Fannie’s goal, in 1994 as today, is to take market share from the FHA, its main competitor for the loans that it and Freddie Mac need to meet mandates set by
Congress since 1992 to increase loans to low- and moderate-income
homeowners. The weapons in this war are familiar—lower pricing and
progressively looser credit as competing federal agencies fight over
existing high-risk lending and seek to expand such lending.
Mortgage
price wars between government agencies are particularly dangerous,
since access to low-cost capital and minimal capital requirements gives
them the ability to continue for many years—all at great risk to the
taxpayers. Government agencies also charge low-risk consumers more than
necessary to cover the risk of default, using the overage to lower fees
on loans to high-risk consumers.
Starting in 2009 the FHFA
released annual studies documenting the widespread nature of these
cross-subsidies. The reports showed that low down payment, 30-year loans
to individuals with low FICO scores were consistently subsidized by
less-risky loans.
Unfortunately, special interests such as the
National Association of Realtors—always eager to sell more houses and
reap the commissions—and the left-leaning Urban Institute were
cheerleaders for loose credit. In 1997, for example, HUD commissioned
the Urban Institute to study Fannie and Freddie’s single-family
underwriting standards. The Urban Institute’s 1999 report found that
“the GSEs’ guidelines, designed to identify creditworthy applicants, are
more likely to disqualify borrowers with low incomes, limited wealth,
and poor credit histories; applicants with these characteristics are
disproportionately minorities.” By 2000 Fannie and Freddie did away with
down payments and raised debt-to-income ratios. HUD encouraged them to
more aggressively enter the subprime market, and the GSEs decided to
re-enter the “liar loan” (low doc or no doc) market, partly in a desire
to meet higher HUD low- and moderate-income lending mandates.
On
Jan. 6, the Urban Institute announced in a blog post: “FHA: Time to stop
overcharging today’s borrowers for yesterday’s mistakes.” The institute
endorsed an immediate cut of 0.40% in mortgage-insurance premiums
charged by the FHA. But once the agency cuts premiums, Fannie and
Freddie will inevitably reduce the guarantee fees charged to cover the
credit risk on the loans they guarantee.
Now the other shoe appears poised to drop, given Mr. Watt’s promise on Tuesday to issue new guidance on guarantee fees.
This
is happening despite Congress’s 2011 mandate that Fannie’s regulator
adjust the prices of mortgages and guarantee fees to make sure they
reflect the actual risk of loss—that is, to eliminate dangerous and
distortive pricing by the two GSEs. Ed DeMarco, acting director of the
FHFA since March 2009, worked hard to do so but left office in January
2014. Mr. Watt, his successor, suspended Mr. DeMarco’s efforts to comply with Congress’s mandate. Now that Fannie
will once again offer heavily subsidized 3%-down mortgages, massive new
cross-subsidies will return, and the congressional mandate will be
ignored.
The law stipulates that the FHA maintain a
loss-absorbing capital buffer equal to 2% of the value of its
outstanding mortgages. The agency obtains this capital from profits
earned on mortgages and future premiums. It hasn’t met its capital
obligation since 2009 and will not reach compliance until the fall of
2016, according to the FHA’s latest actuarial report. But if the economy
runs into another rough patch, this projection will go out the window.
Congress
should put an end to this price war before it does real damage to the
economy. It should terminate the ill-conceived GSE affordable-housing
mandates and impose strong capital standards on the FHA that can’t be
ignored as they have been for five years and counting.
Mr. Pinto,
former chief credit officer of Fannie Mae, is co-director and
chief risk officer of the International Center on Housing Risk at the
American Enterprise Institute.
Jonathan Gruber’s ‘Stupid’ Budget Tricks. WSJ Editorial. His ObamaCare candor shows how Congress routinely cons taxpayers. Wall Street Journal, Nov. 14, 2014 6:51 p.m. ET
As a rule, Americans don’t like to be called “stupid,” as Jonathan Gruber is discovering. Whatever his academic contempt for voters, the ObamaCare architect and Massachusetts Institute of Technology economist deserves the Presidential Medal of Freedom for his candor about the corruption of the federal budget process.
In his now-infamous talk at the University of Pennsylvania last year, Professor Gruber argued that the Affordable Care Act “would not have passed” had Democrats been honest about the income-redistribution policies embedded in its insurance regulations. But the more instructive moment is his admission that “this bill was written in a tortured way to make sure CBO did not score the mandate as taxes. If CBO scored the mandate as taxes, the bill dies.”
Mr. Gruber means the Congressional Budget Office, the institution responsible for putting “scores” or official price tags on legislation. He’s right that to pass ObamaCare Democrats perpetrated the rawest, most cynical abuse of the CBO since its creation in 1974.
In another clip from Mr. Gruber’s seemingly infinite video library, he discusses how he and Democrats wrote the law to game the CBO’s fiscal conventions and achieve goals that would otherwise be “politically impossible.” In still another, he explains that these ruses are “a sad statement about budget politics in the U.S., but there you have it.”
Yes you do. Such admissions aren’t revelations, since the truth has long been obvious to anyone curious enough to look. We and other critics wrote about ObamaCare’s budget gimmicks during the debate, and Rep. Paul Ryan exposed them at the 2010 “health summit.” President Obama changed the subject.
But rarely are liberal intellectuals as full frontal as Mr. Gruber about the accounting fraud ingrained in ObamaCare. Also notable are his do-what-you-gotta-do apologetics: “I’d rather have this law than not,” he says.
Recall five years ago. The White House wanted to pretend that the open-ended new entitlement would spend less than $1 trillion over 10 years and reduce the deficit too. Congress requires the budget gnomes to score bills as written, no matter how unrealistic the assumption or fake the promise. Democrats with the help of Mr. Gruber carefully designed the bill to exploit this built-in gullibility.
So they used a decade of taxes to fund merely six years of insurance subsidies. They pretended that Medicare payments to hospitals will some day fall below Medicaid rates. A since-repealed program for long-term care front-loaded taxes and back-loaded spending, designed to go broke over time. Remember the spectacle of Democrats waiting for the white smoke to come up from CBO and deliver the holy scripture verdict?
On the tape, Mr. Gruber also identifies a special liberal manipulation: CBO’s policy reversal to not count the individual mandate to buy insurance as an explicit component of the federal budget. In 1994, then CBO chief Robert Reischauer reasonably determined that if the government forces people to buy a product by law, then those transactions no longer belong to the private economy but to the U.S. balance sheet. The CBO’s face-melting cost estimate helped to kill HillaryCare.
The CBO director responsible for this switcheroo that moved much of ObamaCare’s real spending off the books was Peter Orszag, who went on to become Mr. Obama’s budget director. Mr. Orszag nonetheless assailed CBO during the debate for not giving him enough credit for the law’s phantom “savings.”
Then again, Mr. Gruber told a Holy Cross audience in 2010 that although ObamaCare “is 90% health insurance coverage and 10% about cost control, all you ever hear people talk about is cost control. How it’s going to lower the cost of health care, that’s all they talk about. Why? Because that’s what people want to hear about because a majority of Americans care about health-care costs.”
***
Both political parties for some reason treat the CBO with the same reverence the ancient Greeks reserved for the Delphic oracle, but Mr. Gruber’s honesty is another warning that the budget rules are rigged to expand government and hide the true cost of entitlements. CBO scores aren’t unambiguous facts but are guesses about the future, biased by the Keynesian assumptions and models its political masters in Congress instruct it to use.
Republicans who now run Congress can help taxpayers by appointing a new CBO director, as is their right as the majority. Current head Doug Elmendorf is a respected economist, and he often has a dry wit as he reminds Congressfolk that if they feed him garbage, he must give them garbage back. But if the GOP won’t abolish the institution, then they can find a replacement who is as candid as Mr. Gruber about the flaws and limitations of the CBO status quo. The Tax Foundation’s Steve Entin would be an inspired pick.
Democrats are now pretending they’ve never heard of Mr. Gruber, though they used to appeal to his authority when he still had some. His commentaries are no less valuable because he is now a political liability for Democrats.
The Department of Justice isn't known for a sense of humor. But on Monday it announced a civil settlement with Citigroup over failed mortgage investments that covers almost exactly the period when current Treasury Secretary Jack Lew oversaw divisions at Citi that presided over failed mortgage investments. Now, that's funny.
Though Justice, five states and the FDIC are prying $7 billion from the bank for allegedly misleading investors, there's no mention in the settlement of clawing back even a nickel of Mr. Lew's compensation. We also see no sanction for former Treasury Secretary Timothy Geithner, who allowed Citi to build colossal mortgage risks outside its balance sheet while overseeing the bank as president of the New York Federal Reserve.
The settlement says Citi's alleged misdeeds began in 2006, the year Mr. Lew joined the bank, and the agreement covers conduct "prior to January 1, 2009." That was shortly before Mr. Lew left to work for President Obama and two weeks before Mr. Lew received $944,518 from Citi in "salary, payout for vested restricted stock," and "discretionary cash compensation for work performed in 2008," according to a 2010 federal disclosure report. That was also the year Citi began receiving taxpayer bailouts of $45 billion in cash, plus hundreds of billions more in taxpayer guarantees.
While Attorney General Eric Holder is forgiving toward his Obama cabinet colleagues, he seems to believe that some housing transactions can never be forgiven. The $7 billion settlement includes the same collateralized debt obligation for which the bank already agreed to pay $285 million in a settlement with the Securities and Exchange Commission. The Justice settlement also includes a long list of potential charges not covered by the agreement, so prosecutors can continue to raid the Citi ATM.
Citi offers in return what looks like a blanket agreement not to sue the government over any aspect of the case, and waives its right to defend itself "based in whole or in part on a contention that, under the Double Jeopardy Clause in the Fifth Amendment of the Constitution, or under the Excessive Fines Clause in the Eighth Amendment of the Constitution, this Agreement bars a remedy sought in such criminal prosecution or administrative action." We hold no brief for Citi, which has been rescued three times by the feds. But what kind of government demands the right to exact repeated punishments for the same offense?
The bank's real punishment should have been failure, as former FDIC Chairman Sheila Bair and we argued at the time. Instead, the regulators kept Citi alive with taxpayer money far beyond what it provided most other banks as part of the Troubled Asset Relief Program. Keeping it alive means they can now use Citi as a political target when it's convenient to claim they're tough on banks.
And speaking of that $7 billion, good luck finding a justification for it in the settlement agreement. The number seems to have been pulled out of thin air since it's unrelated to Citi's mortgage-securities market share or any other metric we can see beyond having media impact.
If this sounds cynical, readers should consult the Justice Department's own leaks to the press about how the Citi deal went down. Last month the feds were prepared to bring charges against the bank, but the necessities of public relations intervened.
According to the Journal, "News had leaked that afternoon, June 17, that the U.S. had captured Ahmed Abu Khatallah, a key suspect in the attacks on the American consulate in Benghazi in 2012. Justice Department officials didn't want the announcement of the suit against Citigroup—and its accompanying litany of alleged misdeeds related to mortgage-backed securities—to be overshadowed by questions about the Benghazi suspect and U.S. policy on detainees. Citigroup, which didn't want to raise its offer again and had been preparing to be sued, never again heard the threat of a suit."
This week's settlement includes $4 billion for the Treasury, roughly $500 million for the states and FDIC, and $2.5 billion for mortgage borrowers. That last category has become a fixture of recent government mortgage settlements, even though the premise of this case involves harm done to bond investors, not mortgage borrowers.
But the Obama Administration's references to the needs of Benghazi PR remind us that it could be worse. At least Mr. Holder isn't blaming the Geithner and Lew failures on a video.
It is now five years since the end of the most recent U.S. financial crisis of 2007-09. Stocks have made record highs, junk bonds and leveraged loans have boomed, house prices have risen, and already there are cries for lower credit standards on mortgages to "increase access."
Meanwhile, in vivid contrast to the Swiss central bank, which marks its investments to market, the Federal Reserve has designed its own regulatory accounting so that it will never have to recognize any losses on its $4 trillion portfolio of long-term bonds and mortgage securities.
Who remembers that such "special" accounting is exactly what the Federal Home Loan Bank Board designed in the 1980s to hide losses in savings and loans? Who remembers that there even was a Federal Home Loan Bank Board, which for its manifold financial sins was abolished in 1989?
It is 25 years since 1989. Who remembers how severe the multiple financial crises of the 1980s were?
The government of Mexico defaulted on its loans in 1982 and set off a global debt crisis. The Federal Reserve's double-digit interest rates had rendered insolvent the aggregate savings and loan industry, until then the principal supplier of mortgage credit. The oil bubble collapsed with enormous losses.
Between 1982 and 1992, a disastrous 2,270 U.S. depository institutions failed. That is an average of more than 200 failures a year or four a week over a decade. From speaking to a great many audiences about financial crises, I can testify that virtually no one knows this.
In the wake of the housing bust, I was occasionally asked, "Will we learn the lessons of this crisis?" "We will indeed," I would reply, "and we will remember them for at least four or five years." In 2007 as the first wave of panic was under way, I heard a senior international economist opine in deep, solemn tones, "What we have learned from this crisis is the importance of liquidity risk." "Yes," I said, "that's what we learn from every crisis."
The political reactions to the 1980s included the Financial Institutions Reform, Recovery and Enforcement Act of 1989, the FDIC Improvement Act of 1991, and the very ironically titled GSE Financial Safety and Soundness Act of 1992. Anybody remember the theories behind those acts?
After depositors in savings and loan associations were bailed out to the tune of $150 billion (the Federal Savings and Loan Insurance Corporation having gone belly up), then-Treasury Secretary Nicholas Brady pronounced that the great legislative point was "never again." Never, that is, until the Mexican debt crisis of 1994, the Asian debt crisis of 1997, and the Long-Term Capital Management crisis of 1998, all very exciting at the time.
And who remembers the Great Recession (so called by a prominent economist of the time) in 1973-75, the huge real-estate bust and New York City's insolvency crisis? That was the decade before the 1980s.
Viewing financial crises over several centuries, the great economic historian Charles Kindleberger concluded that they occur on average about once a decade. Similarly, former Fed Chairman Paul Volcker wittily observed that "about every 10 years, we have the biggest crisis in 50 years."
What is it about a decade or so? It seems that is long enough for memories to fade in the human group mind, as they are overlaid with happier recent experiences and replaced with optimistic new theories.
Speaking in 2013, Paul Tucker, the former deputy governor for financial stability of the Bank of England—a man who has thought long and hard about the macro risks of financial systems—stated, "It will be a while before confidence in the system is restored." But how long is "a while"? I'd say less than a decade.
Mr. Tucker went on to proclaim, "Never again should confidence be so blind." Ah yes, "never again." If Mr. Tucker's statement is meant as moral suasion, it's all right. But if meant as a prediction, don't bet on it.
Former Treasury Secretary Tim Geithner, for all his daydream of the government as financial Platonic guardian, knows this. As he writes in "Stress Test," his recent memoir: "Experts always have clever reasons why the boom they are enjoying will avoid the disastrous patterns of the past—until it doesn't." He predicts: "There will be a next crisis, despite all we did."
Right. But when? On the historical average, 2009 + 10 = 2019. Five more years is plenty of time for forgetting.
Mr. Pollock is a resident fellow at the American Enterprise Institute and was president and CEO of the Federal Home Loan Bank of Chicago, 1991-2004.
---
Thomas Piketty Revives Marx for the 21st Century. By Daniel Shuchman. An 80% tax rate on incomes above $500,000 is not meant to bring in money for education or benefits, but ‘to put an end to such incomes.’
Wall Street Journal, April 21, 2014 7:18 p.m. ET http://online.wsj.com/news/articles/SB10001424052702303825604579515452952131592
Thomas Piketty likes capitalism because it efficiently allocates resources. But he does not like how it allocates income. There is, he thinks, a moral illegitimacy to virtually any accumulation of wealth, and it is a matter of justice that such inequality be eradicated in our economy. The way to do this is to eliminate high incomes and to reduce existing wealth through taxation.
"Capital in the Twenty-First Century" is Mr. Piketty's dense exploration of the history of wages and wealth over the past three centuries. He presents a blizzard of data about income distribution in many countries, claiming to show that inequality has widened dramatically in recent decades and will soon get dangerously worse. Whether or not one is convinced by Mr. Piketty's data—and there are reasons for skepticism, given the author's own caveats and the fact that many early statistics are based on extremely limited samples of estate tax records and dubious extrapolation—is ultimately of little consequence. For this book is less a work of economic analysis than a bizarre ideological screed.
A professor at the Paris School of Economics, Mr. Piketty believes that only the productivity of low-wage workers can be measured objectively. He posits that when a job is replicable, like an "assembly line worker or fast-food server," it is relatively easy to measure the value contributed by each worker. These workers are therefore entitled to what they earn. He finds the productivity of high-income earners harder to measure and believes their wages are in the end "largely arbitrary." They reflect an "ideological construct" more than merit.
Soaring pay for corporate "supermanagers" has been the largest source of increased inequality, according to Mr. Piketty, and these executives can only have attained their rewards through luck or flaws in corporate governance. It requires only an occasional glance at this newspaper to confirm that this can be the case. But the author believes that no CEO could ever justify his or her pay based on performance. He doesn't say whether any occupation—athletes? physicians? economics professors who sell zero-marginal-cost e-books for $21.99 a copy?—is entitled to higher earnings because he does not wish to "indulge in constructing a moral hierarchy of wealth."
He does admit that entrepreneurs are "absolutely indispensable" for economic development, but their success, too, is usually tainted. While some succeed thanks to "true entrepreneurial labor," some are simply lucky or succeed through "outright theft." Even the fortunes made from entrepreneurial labor, moreover, quickly evolve into an "excessive and lasting concentration of capital." This is a self-reinforcing injustice because "property sometimes begins with theft, and the arbitrary return on capital can easily perpetuate the initial crime." Indeed laced throughout the book is an almost medieval hostility to the notion that financial capital earns a return.
While America's corporate executives are his special bête noire, Mr. Piketty is also deeply troubled by the tens of millions of working people—a group he disparagingly calls "petits rentiers"—whose income puts them nowhere near the "one percent" but who still have savings, retirement accounts and other assets. That this very large demographic group will get larger, grow wealthier and pass on assets via inheritance is "a fairly disturbing form of inequality." He laments that it is difficult to "correct" because it involves a broad segment of the population, not a small elite that is easily demonized.
So what is to be done? Mr. Piketty urges an 80% tax rate on incomes starting at "$500,000 or $1 million." This is not to raise money for education or to increase unemployment benefits. Quite the contrary, he does not expect such a tax to bring in much revenue, because its purpose is simply "to put an end to such incomes." It will also be necessary to impose a 50%-60% tax rate on incomes as low as $200,000 to develop "the meager US social state." There must be an annual wealth tax as high as 10% on the largest fortunes and a one-time assessment as high as 20% on much lower levels of existing wealth. He breezily assures us that none of this would reduce economic growth, productivity, entrepreneurship or innovation.
Not that enhancing growth is much on Mr. Piketty's mind, either as an economic matter or as a means to greater distributive justice. He assumes that the economy is static and zero-sum; if the income of one population group increases, another one must necessarily have been impoverished. He views equality of outcome as the ultimate end and solely for its own sake. Alternative objectives—such as maximizing the overall wealth of society or increasing economic liberty or seeking the greatest possible equality of opportunity or even, as in the philosophy of John Rawls, ensuring that the welfare of the least well-off is maximized—are scarcely mentioned.
There is no doubt that poverty, unemployment and unequal opportunity are major challenges for capitalist societies, and varying degrees of luck, hard work, sloth and merit are inherent in human affairs. Mr. Piketty is not the first utopian visionary. He cites, for instance, the "Soviet experiment" that allowed man to throw "off his chains along with the yoke of accumulated wealth." In his telling, it only led to human disaster because societies need markets and private property to have a functioning economy. He says that his solutions provide a "less violent and more efficient response to the eternal problem of private capital and its return." Instead of Austen and Balzac, the professor ought to read "Animal Farm" and "Darkness at Noon."
Mr. Shuchman is a New York fund manager who often writes on law and economics.
Equity-market structure in the U.S. has made important advances over the past 20 years, promoting greater transparency and liquidity. Three powerful forces have been at work: technology, regulation and competition. The result has been narrower spreads, faster execution and lower overall explicit costs to trading stocks.
With the overwhelming majority of transactions now executed across multiple electronic markets, each with its own rule book, the equity-market structure is increasingly fragmented and complex. The risks associated with this fragmentation and complexity are amplified by the dramatic increase in the speed of execution and trading communications.
In the U.S., there are 13 public exchanges and nearly 50 alternative trading systems. Regulation NMS (National Market System), adopted in 2007, requires that market participants route their orders to the exchange that displays the best public price at any given time. This has increased both the number of linkages in the market and the speed at which transactions are done. The Securities and Exchange Commission has correctly called for an "assessment of whether market structure rules have kept pace with, among other things, changes in trading technology and practices."
In the past year alone, multiple technology failures have occurred in the equities markets, with a severe impact on the markets' ability to operate. Even though industry groups have met after the market disruptions to discuss responses, there has not been enough progress. Execution venues are decentralized and unable to agree on common rules. While an industry-based solution is preferable, some issues cannot be addressed by market forces alone and require a regulatory response. Innovation is critical to a healthy and competitive market structure, but not at the cost of introducing substantial risk.
Regulators and industry participants, including asset managers, broker-dealers, exchanges and trading firms, have all put forth ideas and reforms. We agree with a number of their concerns and propose the following four principles:
• First, the equity market needs a stronger safety net of controls to reduce the magnitude and frequency of disruptions. A fragmented trading landscape, increasingly sophisticated routing algorithms, constant software updates and an explosion in electronic-order instructions have made markets more susceptible to technology failures and their consequences.
We propose that all exchanges adopt a stringent set of uniform, SEC-mandated execution controls to reduce errors. In addition to limit-up, limit-down rules that prevent trades from occurring outside a specified price band, pre-trade price and volume limits should be implemented to block problematic orders from entering the market. Mechanisms should also be introduced to halt a firm's, market maker's or other entity's trading when an established threshold is breached, thus minimizing the uncontrolled accumulation of trades.
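The controls described above amount to simple gatekeeping logic applied before an order reaches the book. A minimal sketch, using hypothetical thresholds and names (none are from an actual exchange rulebook), might look like:

```python
# Hypothetical pre-trade execution controls: a price band around a reference
# price, a single-order size limit, and a cumulative-volume kill switch that
# halts a firm's trading once an established threshold is breached.
REFERENCE_PRICE = 100.00     # assumed reference price for the symbol
PRICE_BAND_PCT = 0.05        # limit-up/limit-down band: +/-5% of reference
MAX_ORDER_QTY = 10_000       # block any single order above this size
FIRM_VOLUME_CAP = 1_000_000  # halt a firm once accumulated volume exceeds this

firm_volume = {}             # running share volume per firm

def accept_order(firm, price, qty):
    """Return True if the order passes all pre-trade checks."""
    lo = REFERENCE_PRICE * (1 - PRICE_BAND_PCT)
    hi = REFERENCE_PRICE * (1 + PRICE_BAND_PCT)
    if not (lo <= price <= hi):
        return False         # price outside the permitted band
    if qty > MAX_ORDER_QTY:
        return False         # single-order volume limit breached
    if firm_volume.get(firm, 0) + qty > FIRM_VOLUME_CAP:
        return False         # firm-level threshold breached: trading halted
    firm_volume[firm] = firm_volume.get(firm, 0) + qty
    return True
```

Real exchange risk gates are far more elaborate, but the point of uniform, SEC-mandated controls is that every venue would apply the same class of checks before an errant order can interact with the market.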
• Second: Create incentives to reduce excessive market instability. The economic model of the exchanges, as shaped by regulation, is oriented around market volume. Volume generates price discovery and liquidity, which are clearly beneficial. But the industry must recognize how certain activities related to volume can place stress on a market infrastructure ill-equipped to deal with it.
Electronic-order instructions connect the objectives of buyers and sellers to actions on exchanges. These transaction messages direct the placement, cancellation and correction of orders, and in recent years they have skyrocketed. In the 2010 "flash crash," a spike in the volume of these messages exacerbated volatility, overwhelming the market's infrastructure.
According to industry analysis, since 2005 the flow of these order instructions sent through U.S. stock exchanges has increased more than 1000%, yet trade volume has increased by only 50%. One consequence of the enormous growth in order-message traffic is that increasingly the quote that an investor sees isn't the price he or she can transact, as orders often get canceled at lightning-quick speeds.
Currently there is no cost to market participants who generate excessive order-message traffic. One idea worth considering is whether regulatory fees based on extreme message traffic, rather than on executions alone, would enhance the underlying strength and resiliency of the system. Regulators in Canada and Australia have adopted this approach.
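The growth figures cited above imply a sharp rise in messages per executed trade. A quick back-of-the-envelope check, indexing both series to 2005:

```python
# Messages per trade, indexed to 2005 = 1.0, from the growth figures above.
msg_growth = 11.0    # order-message flow up more than 1000%, i.e., roughly 11x
trade_growth = 1.5   # trade volume up about 50%, i.e., 1.5x

messages_per_trade = msg_growth / trade_growth
print(round(messages_per_trade, 1))  # roughly 7.3x more messages per trade
```

In other words, each executed trade is now accompanied by several times as many placements, corrections and cancellations as it was in 2005, which is why quoted prices so often vanish before an investor can transact.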
• Third: Public market data should be disseminated to all market participants simultaneously. Exchanges currently disseminate prices and transaction data to the SEC-sanctioned distributor for all investors, but exchanges may also send this information directly to private subscribers. While the data leave the exchange simultaneously, the public data are delayed because they go through the intermediary's processing infrastructure. The public aggregator should release information to all market participants at the same time.
Removing the possibility of differentiated channels for market data also reduces incentives that favor investment in the speed of one channel over the stability and resiliency of another. Instability creates and compounds market disruptions. Stable and accurate market data is one of the most important elements of market safety; it is the backbone of the market that must weather the most extreme periods.
• Fourth: Give clearing members more tools to limit risk. A central clearing house with strong operational and financial integrity can reduce credit risk, increase liquidity and enhance transparency through enforced margin requirements and verified and recorded trades. But because clearing members extend credit, the associated risks must be recognized. Tools like pre-trade credit checks and intraday monitoring of positions and credit are essential. Clearing firms use tools like margin and capital-adequacy requirements to manage their risk, but exchanges should also provide uniform mechanisms for clearers to set credit limits and, when necessary, to revoke a client's ability to trade immediately upon request.
U.S. markets today are the deepest, most liquid in the world and serve an indispensable role in allocating capital. That means the companies that have the greatest potential to innovate and grow will get the capital they need to create jobs, build new industries and ensure a vibrant economy. Investors have benefited significantly from technology and innovation, but the speed and complexity at which our markets operate aren't being matched with the operational and control environment to support them.
Mr. Cohn is president and chief operating officer of Goldman Sachs
The term "shadow banking" is one of those Orwellian terms that can undermine critical thought. It has a negative, vaguely sinister connotation about a source of financing that is an essential and desirable part of the financial system. As discussion about the regulation of nonbank entities begins in earnest, it's time to clear the air about what these institutions are and how they operate.
Shadow banking—or, more accurately, market-based financing—is simply the provision of capital, by loans or investments, to some companies by other companies that are not banks. Examples include insurance companies, credit investment funds, hedge funds, private-equity funds and broker-dealers. These institutions do not operate in the dark. Market-based finance in the U.S. amounts to trillions of dollars and is significantly larger than the country's entire banking system.
Mark Carney, Governor of the Bank of England, has correctly noted the role of shadow banking in "diversifying the sources of financing of our economies in a sustainable way." For example, traditional bank financing is not always available for many small- and medium-size companies. Market-based financing has fueled the creation of companies (and thousands of jobs) in many industries. It has rescued companies on the edge of bankruptcy and saved the jobs associated with them. And market-based financing has built warehouses, manufacturing plants and hotels, such as the Four Seasons Hotel and Residences in downtown New York City, when traditional banks could not, or would not, provide capital.
Large banks concentrate risk in relatively few hands, which can pose a risk to the economic system. That is not the case for market-based financing. Risks are safely dispersed across many sophisticated investors who can readily absorb any potential losses. Unlike traditional banks, market-based funds do not borrow from the Federal Reserve, nor do they rely on government-guaranteed deposits. Substantially all their capital comes from well-advised institutional investors who know what they are getting into, and understand the associated risks. Bank depositors (and taxpayers) on the other hand, do not typically know what a bank's investments are or how risky they may be.
Typically, market-based funds also lack the elements that are sources of systemic instability, including high leverage and interdependence. Each investment within a fund is independent and not cross-collateralized or supporting a common debt structure. Losses in any one fund are without recourse to any other fund or to the manager of the capital.
In addition, investors in many market-based funds, including credit investment funds, hedge funds and private-equity funds, often cannot instantly withdraw their capital, unlike depositors in banks. Large, sudden withdrawals can lead to bank runs or force "fire sales" of assets. With stable, in-place capital, these funds can provide a critical source of liquidity to trading markets in times of turmoil.
Of course, some regulation may be appropriate for nonbank entities that present bank-like risks to financial stability or that lend to consumers. But let's not forget that it was the regulated entities that were the source of almost all the systemic risk in the financial crisis.
Regulations are far from a panacea and would need to be carefully constructed to ensure that the enormous economic benefits of market-based financing are not lost through inappropriate and stifling regulatory policies established for large, deposit-taking banks.
While banks in the U.S. are better capitalized and much safer today than before the financial crisis, market-based financing—shadow banking, if you prefer—still brings enormous economic advantages to a wide range of businesses and employees, and fills a real gap in the market.
In Europe, where banks are less well capitalized, the need for market-based financing is even more critical. As the G-20's Financial Stability Board noted in its policy framework last August, market-based financing creates "competition in financial markets that may lead to innovation, efficient credit allocation and cost reduction."
It is critical that any misunderstanding of the shadow banking system does not result in regulations that undermine the many thousands of companies and jobs that need market-based financing to survive and grow.

Mr. James is president and chief operating officer of Blackstone, a global investment and advisory firm.
European banks remain vulnerable to another financial crisis. Capital buffers are too small, creditor bail-ins too low and emergency resolution mechanisms still inadequate to insulate governments from losses that could arise from a system-wide failure. In short, we need a new formula for financial stability.
A banking crisis today would still cost European sovereigns between 2% and 10% of GDP. The final bill would depend on many factors, the size of the initial loss being only one. Others include how much is mitigated by the banks' own capital reserves, by pre-existing government backstops and by any money that can be recouped by bailing-in bondholders.
The size of a country's banking system is also critical. By that measure, Europe's banks remain too big to fail.
Euro-zone lenders have shrunk their balance sheets by a total of €4.4 trillion since the second quarter of 2012, but still hold assets worth more than €30 trillion, according to the European Central Bank. That's three times the output of the euro area and far outstrips the U.S., where bank assets are less than GDP.
And if European banks are too big, they're also still undercapitalized. Capital is adequate relative to risk-weighted assets, against which "capital ratios" are measured, but falls short as a proportion of the banks' total asset base and of potential losses.
To make their capital ratios look better, many banks have reduced their risk-weighted assets over the past few years, often by "optimizing" their internal risk models. More than one-third of European banks now have risk-weighted assets (RWA) of less than 30% of total assets, some as low as 20%. At that density, a bank's "10% capital ratio" is effectively equal to €2 of capital for every €100 of assets (20% RWA times 10% equals €2). That's too low, especially for systemic banks whose balance sheets are as large as a country's GDP.
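The arithmetic behind that €2 figure is straightforward: effective capital per €100 of assets is the risk-weight density times the headline capital ratio. A quick check using the figures in the text:

```python
# Effective capital per EUR 100 of total assets, given a risk-weight density
# (RWA / total assets) and a headline capital ratio (capital / RWA).
def capital_per_100_assets(rwa_density, capital_ratio):
    return 100 * rwa_density * capital_ratio

# A "10% capital ratio" at 20% RWA density:
print(round(capital_per_100_assets(0.20, 0.10), 2))  # EUR 2 per EUR 100 of assets
```

The same bank at a 50% risk-weight density would hold €5 per €100, which is why headline capital ratios alone say little without knowing the density beneath them.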
Regulators recognize the issue and are trying to introduce an absolute floor on capital: the Basel committee's 3% leverage ratio prescribes a minimum of €3 of capital for every €100 of assets. But even this would not have helped troubled banks such as Dexia or Anglo Irish, which lost the equivalent of 4% and 20% of their assets, respectively, during the 2008-09 crisis.
We estimate that to stand on their own feet, banks need a leverage ratio of about 5.8% of capital over assets. This is consistent with the approach of Swiss and U.S. regulators, who recommend a 6% leverage ratio. It means euro-zone banks would need to raise an additional €492 billion of capital—more than six times the €80 billion that the European Banking Authority says they raised in 2013.
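The shortfall estimate follows a simple formula: the capital required at the target leverage ratio, minus the capital a bank already holds. A sketch with hypothetical figures (not the authors' actual bank-level data):

```python
# Capital shortfall to reach a target leverage ratio (capital / total assets).
def capital_shortfall(total_assets, current_capital, target_ratio=0.058):
    """Extra capital needed; zero if the bank already meets the target."""
    return max(0.0, target_ratio * total_assets - current_capital)

# Hypothetical bank: EUR 1,000bn of assets, EUR 35bn of capital (3.5% leverage).
print(round(capital_shortfall(1000, 35), 1))  # EUR bn needed to reach 5.8%
```

Summing such gaps across every euro-zone bank below the 5.8% target is, presumably, how an aggregate figure like €492 billion is reached.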
Another solution would be to increase bail-in requirements or state backstops. German Finance Minister Wolfgang Schäuble recently proposed speeding up the formation of Europe's Single Resolution Fund, a bank-financed pool of money planned to help wrap up or restructure failing banks. But the proposed €55 billion that would be available in the fund pales in comparison to the potential losses generated by bank failures, even if a bail-in were to be implemented first. We estimate the fund could help one large or two mid-sized institutions withstand failure, at best.
These backstops are also very difficult to put into action: the decision to restructure or resolve a bank has to pass through national committees, the European Commission, the Single Resolution Mechanism (whose fine print is still being drafted) and various other boards. Not a weekend job.
Regulators need a more comprehensive approach to making banks safe. It must encompass the total size of capital reserves, the size and structure of banking systems, and rules that can efficiently bail-in bondholders. Regulators are moving in the right direction, but they have not gone far enough.
Ultimately, I believe that Europe will be free from the threat of failing banks only once it has a smaller banking system and a more diversified supply of credit.
Think of credit like an energy grid: In Europe, 80% to 90% of the energy comes from banks—the coal or nuclear plants of the system. If something goes wrong with them, the costs will be high, the collateral effects toxic, and the damage could take years to clean up.
We need more "renewable energy" in the form of non-bank sources of credit. That means bonds, securitizations, and lending from insurance companies and asset managers. Only then will Europe be free from its banks.

Mr. Gallo is the head of macro-credit research at the Royal Bank of Scotland. The views expressed are his own.