Friday, January 15, 2010

Monsanto Response: de Vendomois (Seralini) et al. 2009

(A Comparison of the Effects of Three GM Corn Varieties on Mammalian Health)
Regarding: MON 863, MON 810 and NK603

Assessment of Quality and Response to Technical Issues


Synopsis:

  • The laboratory findings primarily related to kidney and liver function reflect the large proportion of tests applicable to these organ systems. This is not a defect in the design of the study, but simply the reality of biochemical testing - there are good clinical tests of these systems which are reflected in blood chemistry. The function of other organ systems is assessed primarily via functional assessment, organ weight, and organ pathology rather than through blood or urine biochemical assays.

  • The authors apply a variety of non-standard statistical approaches. Each unique statistical approach and each comparison performed increases the number of statistically significant findings which will occur by chance alone. Thus, the fact that de Vendomois et al. find more statistically significant findings than reported in the Monsanto analysis is entirely expected. The question, which de Vendomois et al. fail to address, is whether these non-routine statistical tests contribute anything of value to a safety assessment. Do they help to ascertain whether there are biologically and toxicologically significant events? In our opinion (consistent with prior reviews of other publications from Seralini and colleagues) they do not.

  • The authors undertake a complex “principal component analysis” to demonstrate that kidney and liver function tests vary between male and female rodents. This phenomenon is well recognized in rodents (and, for that matter, humans) as a matter of gender difference. (It does not indicate any toxic effect, and is not claimed to do so by the authors, but may be confusing to those not familiar with the method and background.)

  • De Vendomois et al. appear to draw from this the conclusion that there is a gender difference in susceptibility to toxic effects. While such differences are possible, no difference in susceptibility can be demonstrated by gender differences in normal baseline values. Relying on this alleged difference in gender susceptibility, the authors proceed to identify statistically significant but biologically meaningless differences (see next bullet) and to evaluate the extent to which these changes occur in males versus females.

  • De Vendomois et al. fail to consider whether a result is biologically meaningful, based on the magnitude of the difference observed, whether the observation falls outside of the normal range for the species, whether the observation falls outside the range observed in various reference materials, whether there is evidence of a dose-response, and whether there is consistency between sexes and consistency among tested GM materials. These failures are similar to those observed in previous publications by the same group of authors.

  • While the number of tests that are statistically significant in males versus females would ON AVERAGE be equal in a random distribution, this ratio will fluctuate statistically. The authors have not, in fact, demonstrated any consistent difference in susceptibility between genders, nor have they demonstrated that the deviations from equality in the numbers of positive tests fall outside of expectation. For example, if you flip a coin 10 times, on average you will get 50% heads and 50% tails, but it is not unusual to get 7 heads and 3 tails in a particular run of 10 tosses. If you repeat this over and over and consistently average 7 heads and 3 tails, then there may be something different about the coin that is causing this unexpected result. However, de Vendomois et al. have not shown any such consistent difference.

  • While de Vendomois et al. criticize the lack of testing for cytochrome P450, such testing is not routinely a part of any toxicity testing protocol. These enzymes are responsible for (among other things) the metabolism of chemicals from the environment, and respond to a wide variety of external stimuli as a part of their normal function. There is no rational reason to test for levels of cytochromes in this type of testing, as they do not predict pathology. De Vendomois et al. could have identified thousands of different elements, enzymes and proteins that were not measured but this does not indicate a deficiency in the study design since there is no logical basis for testing them.

  • While de Vendomois et al. criticize the occurrence of missing laboratory values, the vast majority of missing values are accounted for by missing urine specimens (which may or may not be obtainable at necropsy) or by a small number of animals found dead (whose samples are not analyzed due to post-mortem changes). Overall, despite the challenges of carrying out such analyses on large numbers of animals, almost 99% of values were reported.

  • The statistical power analysis done by de Vendomois et al. is invalid, as it is based upon degrees of difference that are not biologically relevant and upon separate statistical tests rather than the ANOVA technique used by Monsanto (and generally preferred). The number of animals used is consistent with generally accepted designs for toxicology studies.

  • Prior publications by Seralini and colleagues in both the pesticide and GM crops arenas have been found wanting in both scientific methodology and credibility by numerous regulatory agencies and independent scientific panels (as detailed below).

  • In the press release associated with this publication, the authors denounce the various regulatory and scientific bodies which have criticized prior work, and claim, in advance, that these agencies and individuals suffer from incompetency and/or conflict of interest. In effect, the authors claim that their current publication cannot be legitimately criticized by anyone who disagrees with their overall opinions, past or present.
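The coin-flip analogy above can be made concrete with a short simulation (a sketch for illustration only; the finding counts are hypothetical, not taken from the studies):

```python
import random

random.seed(1)

def lopsided_split_rate(n_findings=10, trials=10_000):
    """Assign each chance finding to males or females with equal
    probability and count how often the split is at least as
    lopsided as 7-3."""
    lopsided = 0
    for _ in range(trials):
        males = sum(random.random() < 0.5 for _ in range(n_findings))
        if males >= 7 or males <= 3:
            lopsided += 1
    return lopsided / trials

# A 7-3 (or more extreme) split of 10 chance findings between the
# sexes arises in roughly a third of experiments by chance alone.
print(round(lopsided_split_rate(), 2))
```

The exact binomial probability is about 0.34, so a lopsided male/female split of statistically significant tests is, on its own, unremarkable.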

To summarize, as with the prior publication of Seralini et al. (2007), de Vendomois et al. (2009) use non-traditional and inappropriate statistical methods to reach unsubstantiated conclusions in a reassessment of toxicology data from studies conducted with MON 863, MON 810 and NK603. Not surprisingly, they assert that they have found evidence of safety concerns with these crops, but these claims are based on faulty analytical methods and reasoning, and they do not call into question the safety findings for these products.



Response to de Vendomois et al. 2009:

In the recent publication “A comparison of the effects of three GM corn varieties on mammalian health” (de Vendomois et al., 2009), the authors claim to have found evidence of hepatorenal toxicity through reanalysis of the data from toxicology studies with three biotechnology-derived corn products (MON 863, MON 810 and NK603).

This theme of hepatorenal toxicity was raised in a previous publication on MON 863 by the same authors (Seralini et al., 2007). Scientists who reviewed the 2007 publication did not support that paper’s conclusions on MON 863, and their reviews identified many deficiencies in the statistical reanalysis (Doull et al., 2007; EFSA, 2007a; EFSA, 2007b; BfR, 2007; AFSSA, 2007; Monod, 2007; FSANZ, 2007). These reviews of the 2007 paper confirmed that the original analysis of the data by various regulatory agencies was correct and that MON 863 grain is safe for consumption based on the weight of evidence, which includes a 90-day rat feeding study.

De Vendomois et al., (2009) elected to ignore the aforementioned expert scientific reviews by global authorities and regulatory agencies and again have used non-standard and inappropriate methods to reanalyze toxicology studies with MON 863, MON 810 and NK603. This is despite more than 10 years of safe cultivation and consumption of crops developed through modern biotechnology that have also completed extensive safety assessment and review by worldwide regulatory agencies, in each case reaching a conclusion that these products are safe.



General Comments:

De Vendomois et al. (2009) raise a number of general criticisms of the Monsanto studies that are worth addressing before commenting on the analytical approach used by de Vendomois et al. and pointing out a number of examples where the application of their approach leads to misinterpretation of the data.

  1. Testing for cytochrome P450 levels is not part of any standard or recognized toxicology testing protocol, nor do changes in P450 levels per se indicate organ pathology, as the normal function of these enzymes is to respond to the environment.

  2. De Vendomois et al. note that the “effects” assessed by laboratory analysis were “mostly associated with the kidney and liver”. However, a review of the laboratory tests (Annex 1 of the paper), ignoring weight parameters, shows that measures of liver and kidney function are disproportionately represented among the laboratory tests. Urinary electrolytes are also particularly variable (see below). The apparent predominance of statistical differences in liver and kidney parameters is readily explained by the testing performed.

  3. As noted by the authors, the findings, even when statistically significant, are largely within the normal range for these parameters, are inconsistent among the GM crops, and are inconsistent between the sexes. Despite this, and the lack of associated illness or organ pathology, the authors choose to interpret the small random variations typically seen in studies of this type as evidence of potential toxicity.

  4. The authors criticize the amount of missing laboratory data and indicate that the absence of values is not adequately explained. We would note that the bulk of missing values relate to urinalysis. The ability to analyze urine depends upon the availability of a sufficient quantity of urine in the bladder at the time of necropsy, and thus urine specimens are often missing in any rodent study. Organ weights and other measurements are generally not taken on animals found dead (due to post-mortem changes the values are not considered valid). Each study consisted of 200 animals, or 600 possible data determinations (counting urine, hematology, or organ weights + blood chemistry as one “type”, as in the paper).

    1. NK603 – Of 600 possible determinations, 28 values were missing: 20 were due to missing urines, and 2 (weights and biochemical analysis) were due to animals found dead (1 GM, 1 reference). Of the remaining 6 values (hematology), only 1 is from the GM-fed group.

    2. MON 810 – Of 600 possible determinations, 24 values were missing: 18 were due to missing urines, and 1 (weight and biochemical analysis) was due to an animal found dead (reference group). Of the remaining 5 values (hematology), 2 are from the GM-fed group and 3 from various reference groups.

    3. MON 863 – Of 600 possible determinations, 25 values were missing: 13 were due to missing urines, 9 were hematology analyses (3 GM-fed), and 3 were organ weight/biochemical analyses due to deaths (1 GM).

    4. These are large and complex studies. Ignoring urines and the small number of animals found dead (which occur in any large study), 20 data sets are missing from a possible 1800 sets, i.e., almost 99% of data were present, despite the technical difficulties inherent in handling large numbers of animals.

  5. The “findings” in this study are stated to be due to “either the recognized mutagenic effects of the GM transformation process or to the presence of… novel pesticides.” We would note that there is no evidence for “mutagenic effect” other than stable gene insertion in the tested products. We would also note that while the glyphosate tolerant crop (NK603) may indeed have glyphosate residues present, this is not a “novel” pesticide residue. The toxicity of glyphosate has been extensively evaluated, and the “effects” with NK603 cannot be explained on this basis. Similarly, other available data regarding the Bt insecticidal proteins in MON 810 and MON 863 do not support the occurrence of toxic effects due to these agents.
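The tallies in item 4 above can be checked with a few lines of arithmetic, using only the counts quoted from the study reports:

```python
# Missing-value counts quoted above, per study
missing_total = {"NK603": 28, "MON 810": 24, "MON 863": 25}
missing_urine = {"NK603": 20, "MON 810": 18, "MON 863": 13}
deaths_lost   = {"NK603": 2,  "MON 810": 1,  "MON 863": 3}

# Values missing for reasons other than urine collection
non_urine = {k: missing_total[k] - missing_urine[k] for k in missing_total}
print(non_urine)  # {'NK603': 8, 'MON 810': 6, 'MON 863': 12}

# Setting aside urines and death-related losses, as in the text
other = sum(non_urine[k] - deaths_lost[k] for k in non_urine)
print(other)  # 20

# Three studies x 600 possible determinations each
present_pct = 100 * (1800 - other) / 1800
print(round(present_pct, 1))  # 98.9
```

The headline figure of almost 99% of data present follows directly from the per-study counts.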



Statistical Analysis Approach:

De Vendomois et al. (2009) used a flawed basis for risk assessment, focusing only on statistical manipulation of the data (sometimes using questionable methods) and ignoring other relevant biological information. By focusing only on statistical manipulations, the authors found more statistically significant differences in the data than were previously reported and claimed that this is new evidence of adverse effects. As is well documented in toxicology textbooks (e.g., Casarett and Doull’s Toxicology: The Basic Science of Poisons, Klaassen Ed., The McGraw-Hill Companies, 2008, Chapter 2) and other resources mentioned below, interpretation of study findings involves more than statistical manipulations; one has to consider the data in the context of the biology of the animal. This subject was addressed by a peer review panel of internationally recognized toxicologists and statisticians who reviewed the Seralini et al. (2007) publication. They state in Doull et al. (2007):

“The Panel concludes that the Seralini et al. (2007) reanalysis provided no evidence to indicate that MON 863 was associated with any adverse effects in the 90-day rat study (Covance, 2002; Hammond et al., 2006). In each case the statistical findings reported by Monsanto (Covance, 2002; Hammond et al., 2006) or Seralini et al. (2007) were considered to be unrelated to treatment or of no biological or clinical importance because they failed to demonstrate a dose–response relationship, reproducibility over time, association with other relevant changes (e.g., histopathology), occurrence in both sexes, difference outside the normal range of variation, or biological plausibility with respect to cause-and-effect.”

There are numerous ways to analyze biological data and a multitude of statistical tools. To provide consistency in the way that toxicology data are analyzed, regulatory agencies have provided guidance regarding the statistical methods to be used. The aforementioned peer review panel stated:

“The selection of the types of statistical methods to be performed is totally dependent upon the design of the toxicology study, and on the questions expected to be answered, as discussed in the US FDA Redbook (FDA, 2000). Hypothesis testing statistical analyses as described by WHO (1987), Gad (2001), and OECD (2002b) include those tests that have been traditionally conducted on data generated from rodent 90-day and chronic toxicity studies. These are also the procedures that have been widely accepted by regulatory agencies that review the results of subchronic and/or chronic toxicity tests as part of the product approval process. There are many other statistical tests available such as 2k factorial analysis when k factors are evaluated, each at two levels, specific dose–response contrasts, and generalized linear modeling methods, but these methods typically have not been used to evaluate data from toxicology studies intended for regulatory submissions”

Commenting on the statistical analysis used originally to analyze the toxicology data for MON 863 conducted at Covance labs, the expert panel also stated:

“All of these statistical procedures are in accordance with the principles for the assessment of food additives set forth by the WHO (1987). Moreover, these tests represent those that are used commonly by contract research organisations throughout the world and have generally been accepted by FDA, EFSA, Health Canada, Food Standards Australia New Zealand (FSANZ), and the Japanese Ministry of Health and Welfare. In fact, EFSA (2004) in their evaluation of the Covance (2002) study noted that it ‘‘was statistically well designed’’.”

De Vendomois et al. (2009) selected non-traditional statistical tests to assess the data and failed to consider the entire data set needed to draw biologically meaningful conclusions. Their limited approach generated differences that, while statistically significant, are insufficient for drawing conclusions without consideration of the broader dataset to determine whether the findings are biologically meaningful. In Doull et al. (2007) the expert panel clearly stated:

“In the conduct of toxicity studies, the general question to be answered is whether or not administration of the test substance causes biologically important effects (i.e., those effects relevant to human health risk assessment). While statistics provide a tool by which to compare treated groups to controls; the assessment of the biological importance of any ‘‘statistically significant’’ effect requires a broader evaluation of the data, and, as described by Wilson et al. (2001), includes:

  • Dose-related trends
  • Reproducibility
  • Relationship to other findings
  • Magnitude of the differences
  • Occurrence in both sexes.”

Doull et al., (2007) raised questions regarding the appropriateness of some of the statistical analyses described in Seralini et al., (2007):

“The statistical analyses of the serum biochemistry, haematological, and clinical chemistry data conducted by Seralini et al. (2007) and by Monsanto were similar in concept as both used testing for homogeneity of variance and various pair-wise contrasts. The principal difference was that Seralini et al. (2007) did not use an ANOVA approach. The use of t-tests in the absence of multiple comparison methods may have had the effect of increasing the number of statistically significant results (emphasis added). The principal difference between the Monsanto and Seralini et al. (2007) analyses was in the evaluation of the body weight data. Monsanto used ‘traditional’ ANOVA and parametric analyses while Seralini et al. (2007) used the Gompertz model to estimate body weight as a function of time. The Gompertz model assumes equal variance between weeks, an assumption unlikely to hold with increasing body weights. While not inappropriate, as previously stated the Gompertz model does have limitations with respect to the interpretation of the results since it was not clear from the published paper whether Seralini et al. (2007) accounted for the changing variance and the correlated nature of the body weight data over time (emphasis added).”

Based on the expert panel conclusions in Doull et al. (2007), the statistical analyses used by, and the conclusions reached in, the de Vendomois et al. (2009) publication need to be carefully assessed. The authors’ use of inappropriate statistical methods in the examples below illustrates how inadequate analyses underpin the false and misleading claims found in de Vendomois et al. (2009).

Inappropriate use of the False Discovery Rate method. De Vendomois et al. (2009) conducted t-test comparisons between the test and control groups and then applied the False Discovery Rate (FDR) method to adjust the p-values and hence the number of false positives. The FDR method is similar to many of the multiple comparison procedures that are available for controlling the family-wise error rate. Monsanto did not use any procedures for controlling the percentage of false positives for two reasons: (1) preplanned comparisons were defined that were pertinent to the experimental design and purpose of the analysis, i.e., it was not necessary to do all pairwise comparisons among the test, control, and reference substances; and (2) to maintain transparency and to further investigate all statistically significant differences using the additional considerations (Wilson et al., 2001) detailed above.
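For readers unfamiliar with the procedure, the Benjamini-Hochberg FDR adjustment referred to above can be sketched as follows (the p-values are invented purely for illustration and are not taken from any of the studies):

```python
import numpy as np

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg step-up FDR-adjusted p-values."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    scaled = p[order] * m / np.arange(1, m + 1)
    # enforce monotonicity, working down from the largest p-value
    adjusted = np.minimum.accumulate(scaled[::-1])[::-1]
    out = np.empty(m)
    out[order] = np.clip(adjusted, 0, 1)
    return out

# Ten hypothetical p-values from a battery of comparisons
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.3, 0.5, 0.7, 0.9]
adj = benjamini_hochberg(pvals)

print(sum(p < 0.05 for p in pvals))   # 5 raw "significant" results
print(sum(a < 0.05 for a in adj))     # 2 survive FDR adjustment
```

Adjustment reduces, but does not eliminate, chance findings; as the text emphasizes, whatever survives must still be assessed for biological relevance.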

Inappropriate power assessment method. De Vendomois et al. (2009) claim that the Monsanto studies had low power and support this claim with an inappropriate power assessment based on a simple t-test comparison of test and control groups using an arbitrary numerical difference. This type of power assessment is incorrect because Monsanto used a one-way ANOVA, not a simple t-test; the power assessment should be made relative to the ANOVA. In addition, an appropriate power assessment should be made relative to the numerical difference that constitutes a biologically meaningful difference.
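One way to make this point concrete is a simulation-based power estimate for the one-way ANOVA design itself (a sketch only; the group sizes, effect size, and standard deviation below are hypothetical placeholders for whatever difference is judged biologically meaningful):

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)

def anova_power(n_per_group, group_means, sd, alpha=0.05, sims=2000):
    """Estimate one-way ANOVA power by simulation: the fraction of
    simulated experiments with an ANOVA p-value below alpha."""
    hits = 0
    for _ in range(sims):
        groups = [rng.normal(m, sd, n_per_group) for m in group_means]
        if f_oneway(*groups).pvalue < alpha:
            hits += 1
    return hits / sims

# Hypothetical design: four groups of 10, with one group shifted by
# one standard deviation (the assumed "meaningful" difference).
power = anova_power(10, group_means=[0.0, 0.0, 0.0, 1.0], sd=1.0)
print(round(power, 2))
```

Changing the assumed meaningful difference changes the answer, which is why a power claim based on an arbitrary difference and the wrong test is uninformative.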

Other non-traditional statistical methods. De Vendomois et al. (2009) also claim that Monsanto did not apply the described statistical methods and simply used a one-way ANOVA and contrasts. This is a false statement, since Monsanto used Levene’s test to check for homogeneity of variances; if the variances were different, the one-way ANOVA was conducted on the ranks rather than the original observations, i.e., the Kruskal-Wallis test.
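The decision rule just described (Levene's test first, then either a parametric one-way ANOVA or the rank-based Kruskal-Wallis test) can be sketched with standard tools; the data below are invented solely to exercise both branches:

```python
from scipy.stats import levene, f_oneway, kruskal

def analyze(groups, alpha=0.05):
    """Levene's test for equal variances; if variances look equal,
    run a one-way ANOVA, otherwise fall back to the Kruskal-Wallis
    test (a one-way ANOVA on ranks)."""
    if levene(*groups).pvalue >= alpha:
        return "ANOVA", f_oneway(*groups).pvalue
    return "Kruskal-Wallis", kruskal(*groups).pvalue

# Three groups with similar spread -> parametric branch
a = [4.1, 3.9, 4.3, 4.0, 4.2]
b = [4.4, 4.1, 4.5, 4.2, 4.3]
c = [4.0, 4.2, 3.8, 4.1, 4.1]
print(analyze([a, b, c])[0])      # ANOVA

# Replace one group with a much more variable one -> rank branch
wide = [0.1, 12.0, 3.5, 9.8, 0.4]
print(analyze([a, wide, c])[0])   # Kruskal-Wallis
```

The point is simply that the analysis pipeline adapts to the data, rather than being a bare one-way ANOVA as de Vendomois et al. allege.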



Specific Examples of Flawed Analysis and Conclusions:

De Vendomois et al. (2009) have compared the results across toxicology feeding studies with three different biotech crops using some of the same statistical tests that were used in the previous publication (Seralini et al., 2007). Each of these biotech crops (MON 863, MON 810, NK603) is the result of a unique molecular transformation and expresses different proteins. De Vendomois et al. (2009) claim that all three studies provide evidence of hepatorenal toxicity based on their analysis of clinical pathology data alone. If these claims were true, one might anticipate that similar changes in clinical parameters would be observed across the three studies, that the changes would be diagnostic for kidney and liver toxicity, and that they would be accompanied by cytopathological indications of kidney or liver disease. However, as shown in Tables 1 and 2 of de Vendomois et al. (2009), the statistically significant “findings” in clinical parameters differ across studies, suggesting that they are more likely due to random variation (Type I errors) than to biologically meaningful effects. Moreover, as indicated below, there is no evidence of any liver or kidney toxicity in these studies, particularly in relation to other data included in the original study reports that are not mentioned in de Vendomois et al. (2009).

NK603 - Kidney

For the NK603 study (Table 1), de Vendomois et al. (2009) listed data from some of the measured urinary electrolytes, urinary creatinine, blood urea nitrogen and creatinine, and phosphorus and potassium as evidence of renal toxicity. It has been pointed out that urinalysis may be important if one is testing nephrotoxins (Hayes, 2008), particularly those that produce injury to the kidney. However, it has also been noted that “Urinalysis is frequently of limited value because the collection of satisfactory urine samples is fraught with technical difficulties” (Hayes, 2008). There was considerable variability in some of the urinary electrolytes, as indicated by the high standard deviations, which may be attributed to the technical difficulties in collecting satisfactory urine samples.

Examining the original kidney data for NK603, the urine phosphorus values are generally comparable for 11% and 33% NK603 males and the 33% reference groups, while the 33% controls are generally lower than all groups. The 33% control females also had slightly lower phosphorus values, but they were not statistically different from those of 33% NK603 females, unlike males, where the 33% NK603 value was statistically significantly higher than that of 33% controls. When the blood phosphorus values were compared, there was a slight but statistically significant reduction in 33% NK603 males compared to controls (but not references) at week 5, and there were no statistically significant differences in NK603 male and female blood phosphorus levels when compared to controls at the end of the 14-week study.

There were no statistically significant differences in urine sodium in males at weeks 5 and 14 in the original analysis (in contrast to the reanalysis reported by de Vendomois et al., 2009). As with phosphorus, there was considerable variability in urine sodium across all groups. The same results were observed for females. In addition, blood sodium levels for 11% and 33% NK603 males and females were not different from controls. It is apparent when reviewing the data in the table below that the measured urinary electrolytes for the NK603 groups were similar to the values for the reference, conventional (i.e., non-GM) corn groups.

Looking at the other parameters listed in Table 1 (de Vendomois et al., 2009): while there was a slight increase in creatinine clearance in 33% NK603 males at the interim bleed at week 5 compared to the controls and reference population, this was not apparent at the end of the study, when the rats had been exposed to the test diets longer. There was no difference in urine creatinine levels in males. Blood creatinine levels were slightly, but statistically significantly, lower in high-dose males compared to controls at week 5; increases in creatinine, not reductions, are associated with renal toxicity. The same response was observed for serum urea nitrogen: a slight reduction at week 5 and no differences in male blood creatinine or urea nitrogen at the end of the study. “BUN, like creatinine, is not a very sensitive indicator of renal injury” (Hayes, 2008). Thus, the small differences in BUN and serum and urine creatinine are not suggestive of kidney injury.

There was no evidence of changes in other urinary parameters such as pH, specific gravity, protein, sodium, calcium, chloride and volume, or in kidney weights. The most important factor relating to the kidney that de Vendomois et al. (2009) did not consider was the normal microscopic appearance of the kidneys of rats fed NK603 grain. There was no evidence of treatment-related renal pathologic change; the authors ignored this critical biological factor in their risk assessment, one that an objective, scientific assessment would have considered.

MON 810 - Kidney

If Table 2 in de Vendomois et al. (2009) is examined, none of the aforementioned “findings” listed in Table 1 for NK603 is repeated except blood urea nitrogen. Kidney weight data were listed, but these were not included in Table 1 for NK603. If the hypothesis of renal toxicity were correct, it would be scientifically reasonable to expect at least some of the same “findings” in both studies. The absence of common findings supports the original conclusions reached by the investigative laboratory (and supported by regulatory agency review of these studies) that there is no evidence of kidney toxicity in rats fed either MON 810 or NK603 grain. Indeed, the data alleged by de Vendomois et al. (2009) to be indicative of kidney findings are more attributable to the random variation commonly observed in rodent toxicology studies, as discussed in publications such as Doull et al. (2007).

In Table 2, de Vendomois et al. (2009) highlight absolute kidney weights for males as being suggestive of kidney toxicity. The scientific basis for this assertion is unclear because there are no differences in male or female kidney weights (absolute, or relative to body weight or brain weight), as shown in the table below:

De Vendomois et al. (2009) also list blood urea nitrogen as indicative of kidney toxicity, yet there were no statistically significant differences in either MON 810 males or females when compared to controls (Hammond et al., 2006). In the absence of any other changes in urine or blood chemistry parameters that could be suggestive of kidney toxicity, and in consideration of the normal histologic appearance of the kidneys of rats fed MON 810 grain, there are no scientific data to support the assertion of kidney toxicity in MON 810-fed rats.

NK603/MON 810 - Liver

Although de Vendomois et al. (2009) list “findings” in Tables 1 and 2 as being indicative of liver toxicity, analysis of these “findings” does not support this conclusion. There are no common “findings” in the liver between the two studies: for NK603, de Vendomois et al. (2009) listed liver weights and serum alkaline phosphatase; for MON 810, serum albumin and the albumin/globulin ratio.

For NK603, the original analysis did not demonstrate statistical differences in absolute or relative (to body or brain weight) liver weights for NK603 males and females compared to controls. The statistical differences cited by de Vendomois et al. (2009) must therefore be an artifact of the non-traditional statistical methods used in their reanalysis of the liver weight data. In regard to serum alkaline phosphatase, there were no differences for NK603 males or females when compared to controls; again, de Vendomois et al. (2009) report statistical differences, but examination of the original data shows that the values for NK603 males and females are similar to controls and well within the range of values for the reference controls. There were no associated changes in other liver enzymes, bilirubin, or protein of the kind that would accompany liver toxicity. Lastly, and most importantly, the microscopic appearance of NK603 male and female livers was within normal limits for rats of that age and strain; therefore there was no evidence of liver toxicity.

Similarly, for rats fed MON 810, the only findings de Vendomois et al. (2009) list to support a conclusion of liver toxicity were albumin and albumin/globulin ratios. Contrary to the analysis in Table 2 of de Vendomois et al. (2009), there were no statistically significant differences in male or female serum albumin levels based on the original analysis. There were similarly no statistically significant differences in albumin/globulin ratios, with the exception of a slight decrease for 11% MON 810 females when compared to controls at week 5. There were no differences observed at week 14, when the rats had been on the test diets longer, nor were the differences dose related, as they were not apparent in 33% MON 810 females relative to controls. The numerical values for serum albumin and albumin/globulin for MON 810 males and females were also similar to values for the reference groups. Consistent with NK603 rats, there were no other changes in serum liver enzymes, protein, bilirubin, etc., that might be associated with liver toxicity. The liver weights also appeared within normal limits for rats of the strain and age used, again consistent with a conclusion of no evidence of liver toxicity.

In summary, no experimental evidence supports the conclusion of liver toxicity in rats fed NK603 and MON 810 grain as claimed by de Vendomois et al. (2009).

Kinetic plots

De Vendomois et al. (2009) also present kinetic plots showing time-related variation for selected clinical parameters chosen for discussion. For 11% (low-dose) control-fed females, the publication reports a trend toward decreasing triglyceride levels over time (week 5 compared to week 14), whereas for 11% MON 863-fed rats, levels increase slightly during the same period. It is unclear why the publication used these complicated figures to assess these data sets, since the same time-course information can be obtained by simply comparing the mean data for each group at the two time points. Using this simpler method, low-dose control triglycerides dropped from a mean of 56.7 at week 5 to 40.9 at week 14, while low-dose MON 863 female triglycerides increased slightly from 50.2 to 50.9. What de Vendomois et al. (2009) fail to mention is that high-dose control female triglyceride levels increased from 39.3 at week 5 to 43.9 at week 14, and high-dose MON 863 triglyceride levels decreased from 54.9 to 46.7. These trends are opposite to what occurred at the low dose, and the low-dose trends are therefore not dose related. For the female reference groups, triglycerides moved slightly up or down between weeks 5 and 14, illustrating that these minor fluctuations occur naturally. Since most of the other figures reported were for the low-dose groups, the trend for the high dose was sometimes opposite to that observed at the low dose. In summary, none of this analysis changes the conclusion of the study that there were no treatment-related adverse effects in rats fed MON 863 grain.
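The week-5 to week-14 comparisons above can be tabulated directly from the group means quoted in the text, making the opposite low- and high-dose trends explicit:

```python
# Mean female triglyceride values quoted above: (week 5, week 14)
triglycerides = {
    "control 11%": (56.7, 40.9),
    "MON 863 11%": (50.2, 50.9),
    "control 33%": (39.3, 43.9),
    "MON 863 33%": (54.9, 46.7),
}

# Change from week 5 to week 14 for each group
changes = {g: round(w14 - w5, 1) for g, (w5, w14) in triglycerides.items()}
print(changes)
# {'control 11%': -15.8, 'MON 863 11%': 0.7,
#  'control 33%': 4.6, 'MON 863 33%': -8.2}
```

MON 863 triglycerides drift up at the low dose but down at the high dose (and the controls do the reverse), so no dose-related trend is present.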

Summary

To summarize, as with the prior publication of Seralini et al. (2007), de Vendomois et al. (2009) use non-traditional statistical methods to reassess toxicology data from studies conducted with MON 863, MON 810 and NK603, reaching the unsubstantiated conclusion that they have found evidence of safety concerns with these crops. As stated by the expert panel that reviewed the Seralini et al. (2007) paper (Doull et al., 2007): “In the conduct of toxicity studies, the general question to be answered is whether or not administration of the test substance causes biologically important effects (i.e., those effects relevant to human health risk assessment). While statistics provide a tool by which to compare treated groups to controls; the assessment of the biological importance of any ‘statistically significant’ effect requires a broader evaluation of the data, and, as described by Wilson et al. (2001), includes:

  • Dose-related trends
  • Reproducibility
  • Relationship to other findings
  • Magnitude of the differences
  • Occurrence in both sexes.”
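The panel's caution about isolated "statistically significant" findings rests on a simple fact: the more comparisons performed, the more chance findings appear. A minimal sketch of this family-wise error rate, assuming a generic alpha of 0.05 and independent tests (an illustration, not a model of the actual study data):

```python
def familywise_error(k, alpha=0.05):
    """Chance of at least one 'significant' result among k independent
    tests performed on data with no real effect."""
    return 1 - (1 - alpha) ** k

for k in (1, 10, 50, 100):
    print(f"{k:3d} comparisons -> {familywise_error(k):.1%} chance of a false positive")
```

With 100 comparisons, a spurious "significant" result is all but guaranteed, which is why each finding must also be weighed against the biological criteria listed above.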

A review of the original data for clinical parameters, organ weights and organ histology likewise found no evidence of any changes suggestive of hepatic or renal toxicity as alleged in the de Vendomois et al. (2009) publication. The same publication also made false allegations, addressed above, regarding how Monsanto carried out its statistical analysis.

Although many other points could be made with regard to de Vendomois et al. (2009), given that these authors continue to use the same flawed techniques despite input from other experts, it is not worthwhile to exhaustively document all of the problems with their safety assessment. Most importantly, the regulatory agencies that have reviewed the safety data for MON 863, MON 810 and NK603 (including data from the 90-day rat toxicology studies reassessed by de Vendomois et al., 2009) have in every instance concluded that these three products are safe for human and animal consumption and safe for the environment. Peer-reviewed publications on 90-day rat feeding studies with NK603, MON 810 and MON 863 grain have likewise concluded that no safety concerns were identified for these three biotechnology-derived crops.

Additional Background:

Over the last five years, Seralini and associated investigators have published a series of papers, first regarding glyphosate and later regarding Genetically Modified Organisms (GMOs, specifically MON 863). Reviews by government agencies and independent scientists have raised questions regarding the methodology and credibility of this work. The paper by de Vendomois et al. (December 2009) is the most recent publication by this group and raises the same questions about quality and credibility as the prior publications.

Seralini and his associates have suggested that glyphosate (the herbicide commonly referred to as "Roundup"™, widely used on GM crops such as Roundup Ready™ varieties) is responsible for a variety of human health effects. These allegations were not considered valid human health concerns in several regulatory and technical reviews. Claims of mammalian endocrine disruption by glyphosate in Richard et al. (2005) were evaluated by the Commission d'Etude de la Toxicité (French Toxicology Commission), which identified major methodological gaps and multiple instances of bias in arguments and data interpretation. The French Toxicology Commission concluded that this 2005 publication from Seralini's laboratory was of no value for the human health risk assessment of glyphosate. A subsequent paper from Seralini's laboratory, Benachour et al. (2009), which was released via the internet in 2008, was reviewed by the Agence Française de Sécurité Sanitaire des Aliments (AFSSA, the French Agency for Food Safety). This review also pooled Richard et al. (2005) and Benachour et al. (2007) from Seralini's laboratory under the same umbrella of in vitro study designs on glyphosate and glyphosate-based formulations. Again, the regulatory review detailed methodological flaws and questionable data interpretation by the Seralini group. AFSSA's final remarks were: "the French Agency for Food Safety judges that the cytotoxic effects of glyphosate, its metabolite AMPA, the tensioactive POEA and other glyphosate-based preparations put forward in this publication do not bring out any pertinent new facts of a nature to call into question the conclusions of the European assessment of glyphosate or those of the national assessment of the preparations". In August 2009, Health Canada's Pest Management Regulatory Agency (PMRA) published a response to a "Request for a Special Review of Glyphosate Herbicides Containing Polyethoxylated Tallowamine".
The requester submitted 12 documents, which included the same claims made in the Benachour et al. (2009) publication. The PMRA response to this request concluded “PMRA has determined that the information submitted does not meet the requirements to invoke a special review,” clearly indicating no human health concerns were raised in the review of those 12 documents in support of the request.

Regarding GMOs, Seralini et al. (2007) previously published a re-analysis of Monsanto's 90-day rat safety studies of MON 863 corn. Scientists and regulatory agencies who reviewed the 2007 publication did not support that paper's conclusions on MON 863, and their reviews addressed many deficiencies in the statistical reanalysis (Doull et al., 2007; EFSA, 2007a; EFSA, 2007b; BfR, 2007; AFSSA, 2007; Monod, 2007; FSANZ, 2007). These reviews of the 2007 paper confirmed that the original analysis of the data by various regulatory agencies was correct and that MON 863 grain is safe for consumption.

Using the MON 863 analysis as an example, Seralini et al. (2009) recently published a "review" article in the International Journal of Biological Sciences, claiming that improper interpretation of scientific data allowed sub-chronic and chronic health effects to be ignored in scientific studies of GMOs, pesticides, and other chemicals. This paper applies a complex analysis (principal component analysis) to demonstrate a difference in liver and kidney function between male and female rats. Despite the fact that these gender differences are well known and appear in both control and GMO-fed animals, Seralini and his colleagues conclude that these normal findings demonstrate some type of sex-specific susceptibility to toxic effects. On this reasoning, they proceed to over-interpret a variety of minor statistical findings in the MON 863 study. These same conclusions were roundly criticized in 2007. In fact, the authors themselves admit that their observations "do not allow a clear statement of toxicological effects."
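The point that principal component analysis will separate male and female animals whenever baseline sex differences dominate the variance can be illustrated on synthetic data. This is a generic sketch on invented numbers (two hypothetical clinical parameters, no study data, no treatment effect anywhere):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustration: two clinical parameters that differ
# systematically by sex, as liver/kidney markers do in rats.
n = 50
males   = rng.normal(loc=[60.0, 1.2], scale=[5.0, 0.1], size=(n, 2))
females = rng.normal(loc=[45.0, 0.9], scale=[5.0, 0.1], size=(n, 2))
X = np.vstack([males, females])

# Principal component analysis via SVD on centered, standardized data.
Xc = X - X.mean(axis=0)
Xc /= Xc.std(axis=0)              # give both parameters comparable weight
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Xc @ Vt[0]                  # scores on the first principal component

# The first component separates the sexes, even though no treatment
# was modeled at all: PCA is simply picking up the sex difference.
print("males on one side:", (np.sign(pc1[:n]) == np.sign(pc1[:n].mean())).mean())
print("females on other: ", (np.sign(pc1[n:]) == np.sign(pc1[n:].mean())).mean())
```

A clean sex split along the first component is therefore the expected outcome of the method, not evidence of sex-specific toxicity.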

De Vendomois et al. (2009) elected to ignore the aforementioned expert scientific reviews by global authorities and regulatory agencies and have again used non-standard and inappropriate methods to reanalyze toxicology studies with MON 863, MON 810 and NK603. This is despite more than 10 years of safe cultivation and consumption of crops developed through modern biotechnology, which have completed extensive safety assessments and reviews by worldwide regulatory agencies, in each case concluding that these products are safe.

Although some publications from the Seralini group acknowledge funding sources, they do not acknowledge potential funding bias or conflicts of interest. Financial support for Seralini's research includes the Committee for Research and Independent Information on Genetic Engineering (CRIIGEN) and the Human Earth Foundation. Seralini has been Chairman of the Scientific Council of CRIIGEN since 1999. Seralini and this organization are known for their anti-biotechnology positions (http://www.crii-gen.org/). Both CRIIGEN and the Human Earth Foundation promote organic agriculture and alternatives to pesticides. It is notable that over the last five years Seralini's group has published at least seven papers, four of which specifically target Monsanto's glyphosate-based formulations as detrimental to human health, while the remaining papers allege that Monsanto's biotechnology or GMO crops have human health implications. In addition, Seralini has a history of anti-Monsanto media releases and statements, including those on YouTube, reflecting not only anti-Monsanto sentiment but a lack of scientific objectivity.
See: (http://www.youtube.com/watch?v=HkRFGtyabSA; and http://www.youtube.com/watch?v=k_gF6gpSVdY).

Finally, it is worth noting the press release from CRIIGEN, issued at the time of release of the de Vendomois et al. publication:

“CRIIGEN denounces in particular the past opinions of EFSA, AFSSA and CGB, committees of European and French Food Safety Authorities, and others who spoke on the lack of risks on the tests which were conducted just for 90 days on rats to assess the safety of these three GM varieties of maize. While criticizing their failure to examine the detailed statistics, CRIIGEN also emphasizes the conflict of interest and incompetence of these committees to counter expertise this publication as they have already voted positively on the same tests ignoring the side effects.”

This rather remarkable approach clearly indicates how far the authors of this publication have drifted from appropriate scientific discourse regarding GMO safety data. While they would reject criticisms of their methods and arguments by regulatory authorities and other eminent toxicology experts, most persons seeking an objective analysis will welcome broad expert input and a full assessment of the weight of evidence on the subject.

Thursday, January 14, 2010

Don't Shoot the Pollster - Attacks on Scott Rasmussen and Fox News show a disturbing attitude toward dissent

Don't Shoot the Pollster. By PATRICK CADDELL AND DOUGLAS E. SCHOEN
Attacks on Scott Rasmussen and Fox News show a disturbing attitude toward dissent.
WSJ, Jan 15, 2010

Polling is both an art and a science, but recently it's also become a subject of political intimidation.

One shot was fired by White House Press Secretary Robert Gibbs on Dec. 8, when he dismissed Gallup's daily tracking of President Obama's job approval. It had hit a record low of 47%, and Mr. Gibbs called the results meaningless:

"If I was a heart patient and Gallup was my EKG I'd visit my doctor. If you look back I think five days ago. . . there was an 11 point spread, now there's a one point spread. . . I'm sure a six-year-old with a crayon could do something not unlike that. I don't put a lot of stake in, never have, in the EKG that is the daily Gallup trend. I don't pay a lot of attention to meaninglessness."

Polling is a science because it requires a range of sampling techniques to be used to select a sample. It is an art because constructing a sample and asking questions is something that requires skill, experience and intellectual integrity. The possibility of manipulation—or, indeed, intimidation—is great.

A recent case in point is what has happened to Scott Rasmussen, an independent pollster we both work with, who has an unchallenged record for both integrity and accuracy. Mr. Rasmussen correctly predicted the 2004 and 2008 presidential races within a percent, and accurately called the vast majority of contested Senate races in 2004 and 2006. His work has sometimes been of concern for Republicans, particularly when they were losing congressional seats in 2004 and 2006.

Most recently, Mr. Rasmussen has been the leader in chronicling the decline in the public's support for President Obama. And so he has been the target of increasingly virulent attacks from left-wing bloggers seeking to undermine his credibility, and thus muffle his findings. A Politico piece, "Low Favorables: Democrats Rip Rasmussen," reported on the attacks from blogs like the Daily Kos, Swing State Project, and Media Matters.

"Rasmussen Caught With Their Thumb on the Scale," cried the Daily Kos last summer. "Rasmussen Reports, You Decide," the blog Swing State Project headlined not long ago in a play on the Fox News motto.

"I don't think there are Republican polling firms that get as good a result as Rasmussen does," Eric Boehlert, a senior fellow with the progressive research outfit Media Matters, said in a Jan. 2 Politico article. "His data looks like it all comes out of the RNC."

Liberals have also noted that Rasmussen's daily presidential tracking polls have consistently placed Mr. Obama's approval numbers around five percentage points lower than other polling outfits throughout the year. This is because Rasmussen surveys likely voters, who are now more Republican in orientation than the overall electorate. (Gallup and other pollsters survey the entire adult population.) On other key issues like health care, Rasmussen's numbers have been echoed by everyone else.

Mr. Rasmussen, who is avowedly not part of the Beltway crowd in Washington, has been willing to take on issues like ethics and corruption in ways no other pollsters have been able to do. He was also one of the first pollsters to stress people's real fear of the growing size of government, the size of the deficit, and the concern about spending at a time when these issues were not really on Washington's radar screen.

The reaction against him has been strident and harsh. He's been called an adjunct of the Republican Party when in fact he has never worked for any political party. Nor has he consulted with any candidates seeking elective office.

The attacks on Rasmussen and Gallup follow an effort by the White House to wage war on Fox News and to brand it, as former White House Director of Communications Anita Dunn did, as "not a real news organization." The move backfired; in time, other news organizations rallied around Fox News. But the message was clear: criticize the White House at your peril.

As pollsters for two Democratic presidents who served before Barack Obama, we view this unprecedented attempt to silence the media and to attack the credibility of unpopular polling as chilling to the free exercise of democracy.

This is more than just inside baseball. As practicing political consultants, both of us have seen that the established parties try to stifle dissent among their political advisers and consultants. The parties go out of their way to try to determine in advance what questions will be asked and what answers will be obtained to reinforce existing party messages. The thing most feared is independence, which is what Mr. Rasmussen brings.

Mr. Gibbs's comments and the recent attempts by the Democratic left to muzzle Scott Rasmussen reflect a disturbing trend in our politics: a tendency to try to stifle legitimate feedback about political concerns—particularly if the feedback is negative to the incumbent administration.

Mr. Caddell served as a pollster for President Jimmy Carter. Mr. Schoen, who served as a pollster for President Bill Clinton, is the author of "The Political Fix" just out from Henry Holt.

Don't Like the Numbers? Change 'Em

Don't Like the Numbers? Change 'Em. By MICHAEL J. BOSKIN
If a CEO issued the kind of distorted figures put out by politicians and scientists, he'd wind up in prison.
WSJ, Jan 14, 2010

Politicians and scientists who don't like what their data show lately have simply taken to changing the numbers. They believe that their end—socialism, global climate regulation, health-care legislation, repudiating debt commitments, la gloire française—justifies throwing out even minimum standards of accuracy. It appears that no numbers are immune: not GDP, not inflation, not budget, not job or cost estimates, and certainly not temperature. A CEO or CFO issuing such massaged numbers would land in jail.

The late economist Paul Samuelson called the national income accounts that measure real GDP and inflation "one of the greatest achievements of the twentieth century." Yet politicians from Europe to South America are now clamoring for alternatives that make them look better.

A commission appointed by French President Nicolas Sarkozy suggests heavily weighting "stability" indicators such as "security" and "equality" when calculating GDP. And voilà!—France outperforms the U.S., despite the fact that its per capita income is 30% lower. Nobel laureate Ed Prescott called this disparity the difference between "prosperity and depression" in a 2002 paper—and attributed it entirely to France's higher taxes.

With Venezuela in recession by conventional GDP measures, President Hugo Chávez declared the GDP to be a capitalist plot. He wants a new, socialist-friendly way to measure the economy. Maybe East Germans were better off than their cousins in the West when the Berlin Wall fell; starving North Koreans are really better off than their relatives in South Korea; the 300 million Chinese lifted out of abject poverty in the last three decades were better off under Mao; and all those Cubans risking their lives fleeing to Florida on dinky boats are loco.

There is historical precedent for a "socialist GDP." When President George H.W. Bush sent me to help Mikhail Gorbachev with economic reform, I found out that the Soviet statistics office kept two sets of books: those they published, and those they actually believed (plus another for Stalin when he was alive).

In Argentina, President Néstor Kirchner didn't like the political and budget hits from high inflation. After a politicized personnel purge in 2002, he changed the inflation measures. Conveniently, the new numbers showed lower inflation and therefore lower interest payments on the government's inflation-linked bonds. Investor and public confidence in the objectivity of the inflation statistics evaporated. His wife and successor Cristina Kirchner is now trying to grab the central bank's reserves to pay for the country's debt.

America has not been immune from this dangerous numbers game. Every president is guilty of spinning unpleasant statistics. President Richard Nixon even thought there was a conspiracy against him at the Bureau of Labor Statistics. But President Barack Obama has taken it to a new level. His laudable attempt at transparency in counting the number of jobs "created or saved" by the stimulus bill has degenerated into farce and was just junked this week.

The administration has introduced the new notion of "jobs saved" to take credit where none was ever taken before. It seems continually to confuse gross and net numbers. For example, it misses the jobs lost or diverted by the fiscal stimulus. And along with the congressional leadership it hypes the number of "green jobs" likely to be created from the explosion of spending, subsidies, loans and mandates, while ignoring the job losses caused by its taxes, debt, regulations and diktats.

The president and his advisers—their credibility already reeling from exaggeration (the stimulus bill will limit unemployment to 8%) and reneged campaign promises (we'll go through the budget "line-by-line")—consistently imply that their new proposed regulation is a free lunch. When the radical attempt to regulate energy and the environment with the deeply flawed cap-and-trade bill is confronted with economic reality, instead of honestly debating the trade-offs they confidently pronounce that it boosts the economy. They refuse to admit that it simply boosts favored sectors and firms at the expense of everyone else.

Rabid environmentalists have descended into a separate reality where only green counts. It's gotten so bad that the head of the California Air Resources Board, Mary Nichols, announced this past fall that costly new carbon regulations would boost the economy shortly after she was told by eight of the state's most respected economists that they were certain these new rules would damage the economy. The next day, her own economic consultant, Harvard's Robert Stavins, denounced her statement as a blatant distortion.

Scientists are expected to make sure their findings are replicable, to make the data available, and to encourage the search for new theories and data that may overturn the current consensus. This is what Galileo, Darwin and Einstein—among the most celebrated scientists of all time—did. But some climate researchers, most notably at the University of East Anglia, attempted to hide or delete temperature data when that data didn't show recent rapid warming. They quietly suppressed and replaced the numbers, and then attempted to squelch publication of studies coming to different conclusions.

The Obama administration claims a dubious "Keynesian" multiplier of 1.5 to feed the Democrats' thirst for big spending. The administration's idea is that virtually all their spending creates jobs for unemployed people and that additional rounds of spending create still more—raising income by $1.50 for each dollar of government spending. Economists differ on such multipliers, with many leading figures pegging them at well under 1.0 as the government spending in part replaces private spending and jobs. But all agree that every dollar of spending requires a present value of a dollar of future taxes, which distorts decisions to work, save, and invest and raises the cost of the dollar of spending to well over a dollar. Thus, only spending with large societal benefits is justified, a criterion unlikely to be met by much current spending (perusing the projects on recovery.gov doesn't inspire confidence).

Even more blatant is the numbers game being used to justify health-insurance reform legislation, which claims to greatly expand coverage, decrease health-insurance costs, and reduce the deficit. That magic flows easily from counting 10 years of dubious Medicare "savings" and tax hikes, but only six years of spending; assuming large cuts in doctor reimbursements that later will be cancelled; and making the states (other than Sen. Ben Nelson's Nebraska) pay a big share of the cost by expanding Medicaid eligibility. The Medicare "savings" and payroll tax hikes are counted twice—first to help pay for expanded coverage, and then to claim to extend the life of Medicare.

One piece of good news: The public isn't believing much of this out-of-control spin. Large majorities believe the health-care legislation will raise their insurance costs and increase the budget deficit. Most Americans are highly skeptical of the claims of climate extremists. And they have a more realistic reaction to the extraordinary deterioration in our public finances than do the president and Congress.

As a society and as individuals, we need to make difficult, even wrenching choices, often with grave consequences. To base those decisions on highly misleading, biased, and even manufactured numbers is not just wrong, but dangerous.

Squandering their credibility with these numbers games will only make it more difficult for our elected leaders to enlist support for difficult decisions from a public increasingly inclined to disbelieve them.

Mr. Boskin is a professor of economics at Stanford University and a senior fellow at the Hoover Institution. He chaired the Council of Economic Advisers under President George H.W. Bush.

Health Experts and Double Standards - Jonathan Gruber, Peter Orszag and the press corps

Health Experts and Double Standards. WSJ Editorial
Jonathan Gruber, Peter Orszag and the press corps.
The Wall Street Journal, page A18, Jan 14, 2010

The press corps is agonizing, or claims to be agonizing, over the news of Jonathan Gruber's conflict of interest: The MIT economist has been among the foremost promoters of ObamaCare—even as he had nearly $400,000 in consulting contracts with the Administration that weren't disclosed in the many stories in which he was cited as an independent authority.

Mr. Gruber is a health economist and former Clinton Treasury hand, as well as an architect of Mitt Romney's 2006 health plan in Massachusetts that so closely resembles ObamaCare. His econometric health-care modelling is well-regarded. So his $297,600 plum from the Department of Health and Human Services in March for "technical assistance" estimating changes in insurance costs and coverage under ObamaCare, plus another $95,000 job, is at least defensible.

However, this financial relationship only came to wide notice when Mr. Gruber wrote a commentary for the New England Journal of Medicine, which has a more stringent disclosure policy than most media outlets. Last week the New York Times said it would have disclosed Mr. Gruber's financial ties had it known when it published one of his op-eds last year. Mr. Gruber told Politico's Ben Smith that "at no time have I publicly advocated a position that I did not firmly believe—indeed, I have been completely consistent with my academic track record."

We don't doubt Mr. Gruber's sincerity about his research, though the same benefit of the political doubt wasn't extended to, say, Armstrong Williams when it was revealed that the conservative pundit had a contract with the Department of Education during the No Child Left Behind debate. Any number of former Generals-turned-TV-analysts were skewered in the New York Times in 2008 merely because of continuing contact—and no financial ties—with the Pentagon.

The political exploitation of Mr. Gruber's commentary is another matter. His work figured heavily into a recent piece by Ron Brownstein in the Atlantic Monthly that the Administration promoted as an antidote to skepticism about ObamaCare's cost control (or lack thereof). White House budget director Peter Orszag has also relied on a letter from Mr. Gruber and other economists endorsing the Senate bill.

In a December conference call with reporters, Mr. Orszag said that "I agree with Jon Gruber that basically everything that has been put forward in health policy discussions for a decade is in this bill." He also praised "the folks who have actually done the reporting and read the bill and gone through and done the hard work to actually examine, rather than just going on buzz and sort of loose talk, but actually gone through and looked at the specific details in the bill," citing Mr. Brownstein in particular. Which is to say, the journalists who had "done the reporting" were those who agreed with the Gruber-White House spin.

Mr. Orszag never mentioned Mr. Gruber's contract. Nor did HHS disclose the contract when Mike Enzi, the ranking Republican on the Senate health committee, asked specifically for a list of all consultants as part of routine oversight in July. His request noted that "Transparency regarding these positions will help ensure that the public has confidence in the qualifications, character and abilities of individuals serving in these positions."

We're not Marxists who think everyone's opinion depends entirely on financial circumstances. But if Mr. Gruber qualifies as a health expert despite his self-interest, then the studies of self-interested businesses deserve at least as much media attention. The insurer WellPoint has built a very detailed and rigorous model on the likely impact of ObamaCare, using its own actuarial data in regional markets, and found that insurance costs will spike across the board. The White House trashed it, and the press corps ignored it.

This is a double standard that has corroded much of the coverage of ObamaCare, with journalists treating government claims as oracular but business arguments as self-serving. We'll bet Messrs. Orszag and Brownstein that WellPoint's analysis will more closely reflect the coming insurance reality than the fruits of Mr. Gruber's government paycheck.

Tuesday, January 12, 2010

Bashing Bankers Is a Political Duty

Bashing Bankers Is a Political Duty. By HOLMAN W. JENKINS, JR
But don't overlook the fact that taxpayers are making out on the bailout too.
WSJ, Jan 13, 2010

If you would know why bankers are enjoying a large and controversial deluge of annual bonuses, look no further than the monthly report of the New York State Comptroller's Office. The economy may be in the dumps, but Wall Street enjoyed record profits of $50 billion in the first nine months of last year—"nearly two and a half times the previous annual peak in 2000."

"Profitability," adds the state of New York, "has soared because revenues rose while the costs of doing business—particularly interest costs—declined" (in other words, thank you Federal Reserve).

That $50 billion may seem odd in relation to Wall Street's reported bonus pool of $90 billion, but compensation isn't paid out of profits, it's paid out of revenues. Goldman last year paid out about 44% of revenues as compensation, Citigroup about 30%. In contrast, an auto company pays out about 11% of revenues, but an auto company consumes a lot of other inputs—glass, steel, energy, advertising, aluminum—whereas Wall Street has only two inputs: smarts and money.

Bonuses are a dominant part of compensation because Wall Street firms pay a large chunk of compensation as variable comp, for which the word bonus has been used. Now some firms are paying larger fixed salaries just so the public won't hear the word bonus.

But look at it this way: The $90 billion that will be distributed to employees is but a sliver of the massive capital Wall Street is sitting on. One firm, Goldman, cares for $880 billion, Citi another $1.9 trillion, JP Morgan another $2 trillion. Much of the nation's paper wealth rebounded sharply last year from depressed values after (choose your reason) Americans overbet on housing or the federal government briefly fumbled public trust in its ability to protect the financial system.

Do bankers deserve it? Of course not. Do you deserve your good looks, good health or good luck in choice of parents and/or country you were born in?

Compensation in our society is not set by Henry Waxman and a committee of Congress, but as a matter of legal and instrumental obligation under circumstances of market competition. A firm's management, with its own interests strongly in mind, ultimately decides how much of a firm's revenue to spend pleasing the highly mobile employees who do the work of pleasing the firm's highly mobile clients and investors.

But didn't taxpayers bail out the financial system, so don't taxpayers deserve the bonuses? No. Taxpayers (aka voters) were acting in their own interests in bailing out the system. They weren't doing anybody a favor. Furthermore, government already stands to collect about 50% of any Wall Street cash bonuses in the form of income tax (which explains why the subject is of interest to the New York state comptroller).

What's more, despite a casual imputation that taxpayers were the suckers at the table, taxpayers did not, as commonly alleged, "spend" money to bail out the banks. They traded one claim for another. Mostly, they traded claims they printed (dollars) for claims on real assets, such as housing, commercial property and industrial equipment.

Taxpayers effectively acquired these assets on a bet that taxpayers' own intervention would raise their value, which had previously been depressed at least partly by fears that taxpayers wouldn't intervene. That bet has proved a good one so far (as bets often do when you control the outcome). Even the most notorious of the exchanges that taxpayers engaged in—dollars for securities held by Goldman Sachs that had been guaranteed by AIG—are accruing profits on the balance sheet of the Federal Reserve.

In fact, yesterday the Fed, whose balance sheet is about the size of Citigroup's, reported whopping profits for 2009 of $52 billion—just a few billion shy of what Wall Street as a whole is likely to report for the year. (All this throws a mocking light on the Obama administration's claim yesterday that a new tax must be imposed on banks to "recoup" bailout costs.)

None of this means Americans don't have an ancient and abiding interest in subjecting bankers to scorn. A rough socialism is fundamental to civilization: The most beautiful virgin must be sacrificed to make the other virgins feel better—a service politicians are especially keen to provide when the alternative might be looking at their own role in the reckless risk-taking of banks and homebuyers.

Still, looking at Washington's own role would be a good idea, since taxpayers' success (so far) in catching the falling knife is certainly no reason to repeat the experiment.

Monday, January 11, 2010

Group of Central Bank Governors and Heads of Supervision reinforces Basel Committee reform package

Group of Central Bank Governors and Heads of Supervision reinforces Basel Committee reform package

BIS, January 11, 2010

The Group of Central Bank Governors and Heads of Supervision, the oversight body of the Basel Committee on Banking Supervision, met on 10 January at the Bank for International Settlements. It welcomed the substantial progress of the Basel Committee to translate the Group's September 2009 agreements into a concrete package of measures, as elaborated in the Committee's 17 December 2009 Consultative proposals for Strengthening the resilience of the banking sector and the International framework for liquidity risk measurement, standards and monitoring. Governors and Heads of Supervision requested the Committee to deliver a fully calibrated and finalised package of reforms by the end of this year.

President Jean-Claude Trichet, who chairs the Group, emphasised that "timely completion of the Basel Committee reform programme is critical to achieving a more resilient banking system that can support sound economic growth over the long term."

Central Bank Governors and Heads of Supervision welcomed the Basel Committee's focus on both microprudential reforms to strengthen the level and quality of international capital and liquidity standards, as well as the introduction of a macroprudential overlay to address procyclicality and systemic risk. They also provided guidance and noted the importance of making progress in the following key areas:

Provisioning: It is essential that accounting standards setters and supervisors develop a truly robust provisioning approach based on expected losses (EL). Building on the Basel Committee's August 2009 Guiding Principles for the replacement of IAS 39, a sound EL provisioning approach should achieve the following key objectives:

  • address the deficiencies of the incurred loss approach without introducing an expansion of fair value accounting;

  • promote adequate and more forward looking provisioning through early identification and recognition of credit losses in a consistent and robust manner;

  • address concerns about procyclicality under the current incurred loss provisioning model;

  • incorporate a broader range of credit information, both quantitative and qualitative;

  • draw from banks' risk management and capital adequacy systems; and

  • be transparent and subject to appropriate internal and external validation by auditors, supervisors and other constituents.

So-called "through-the-cycle" approaches that are consistent with these principles, and which promote the build-up of provisions in good times, when credit exposures are taken on, that can be used in a downturn, would be recognised. The Basel Committee should translate these principles into a practical proposal by its March 2010 meeting for subsequent consideration by both supervisors and accounting standards setters.
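The expected-loss idea can be illustrated with the standard decomposition EL = PD × LGD × EAD used in Basel's internal-ratings frameworks. A minimal sketch, with all loan figures invented for illustration:

```python
# Illustrative expected-loss (EL) provisioning using the standard
# decomposition EL = PD * LGD * EAD. All figures are invented.

def expected_loss(pd: float, lgd: float, ead: float) -> float:
    """Probability of default x loss given default x exposure at default."""
    return pd * lgd * ead

# A small hypothetical loan book: (PD, LGD, exposure at default)
book = [
    (0.01, 0.45, 1_000_000),  # investment-grade corporate loan
    (0.05, 0.60, 250_000),    # SME loan
    (0.20, 0.80, 50_000),     # distressed exposure
]

# Unlike the incurred-loss model, provisions are built before a loss
# event is observed, from the whole book's expected losses.
provision = sum(expected_loss(pd, lgd, ead) for pd, lgd, ead in book)
print(f"EL-based provision: {provision:,.0f}")  # 4,500 + 7,500 + 8,000
```

The contrast with the incurred-loss approach is that provisions here accrue as exposures are taken on, not only once a loss event is identified.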

Introducing a framework of countercyclical capital buffers: Such a framework could contain two key elements that are complementary. First, it is intended to promote the build-up of appropriate buffers at individual banks and the banking sector that can be used in periods of stress. This would be achieved through a combination of capital conservation measures, including actions to limit excessive dividend payments, share buybacks and compensation. Second, it would achieve the broader macroprudential goal of protecting the banking sector from periods of excess credit growth through a countercyclical capital buffer linked to one or more credit variables.
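The second element can be sketched as a rule mapping a credit variable to a buffer add-on. The linear mapping below, from the credit-to-GDP gap to a 0–2.5% buffer with thresholds at 2 and 10 percentage points, follows the calibration Basel III later adopted; treat those numbers as an illustrative assumption, since they were not fixed at the time of this press release:

```python
# Sketch of a countercyclical capital buffer linked to one credit
# variable (the credit-to-GDP gap). Thresholds and the 2.5% ceiling
# are assumptions for illustration.

def ccyb_addon(credit_to_gdp_gap: float, low: float = 2.0,
               high: float = 10.0, max_buffer: float = 2.5) -> float:
    """Buffer add-on (% of risk-weighted assets) for a given gap (pp)."""
    if credit_to_gdp_gap <= low:      # normal credit conditions: no buffer
        return 0.0
    if credit_to_gdp_gap >= high:     # excess credit growth: full buffer
        return max_buffer
    # linear interpolation between the two thresholds
    return max_buffer * (credit_to_gdp_gap - low) / (high - low)

for gap in (0.0, 4.0, 12.0):
    print(f"gap {gap:4.1f} pp -> buffer add-on {ccyb_addon(gap):.3f}%")
```

The buffer builds up in periods of excess credit growth and can be released in a downturn, which is the macroprudential goal the text describes.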

Addressing the risk of systemic banking institutions: Supervisors are working to develop proposals to address the risk of systemically important banks (SIBs). To this end, the Basel Committee has established a Macroprudential Group. The Committee should develop a menu of approaches using continuous measures of systemic importance to address the risk for the financial system and the broader economy. This includes evaluating the pros and cons of a capital and liquidity surcharge and other supervisory tools as additional possible policy options such as resolution mechanisms and structural adjustments. This forms a key input to the Financial Stability Board's initiatives to address the "too-big-to-fail" problem.

Contingent capital: The Basel Committee is reviewing the role that contingent capital and convertible capital instruments could play in the regulatory capital framework. This includes possible entry criteria for such instruments in Tier 1 and/or Tier 2 to ensure loss absorbency and the role of contingent and convertible capital more generally both within the regulatory capital minimum and as buffers.

Liquidity: Based on information collected through the quantitative impact assessment, the Committee should flesh out the details of the global minimum liquidity standard, which includes both the 30-day liquidity coverage ratio and the longer term structural liquidity ratio.
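The 30-day liquidity coverage ratio can be sketched as a simple quotient: high-quality liquid assets over net cash outflows in a 30-day stress scenario, with a 100% minimum. The 75% cap on recognised inflows matches the calibration Basel later published and is used here as an assumption; the balance-sheet figures are invented:

```python
# Sketch of the 30-day liquidity coverage ratio (LCR):
#   LCR = high-quality liquid assets / net cash outflows over 30 days
# Minimum: 100%. The 75% inflow cap is an assumed calibration.

def lcr(hqla: float, outflows: float, inflows: float) -> float:
    # Recognised inflows are capped at 75% of gross outflows, so a bank
    # cannot rely solely on expected inflows to cover its outflows.
    net_outflows = outflows - min(inflows, 0.75 * outflows)
    return hqla / net_outflows

ratio = lcr(hqla=120.0, outflows=200.0, inflows=80.0)
print(f"LCR = {ratio:.0%} -> {'pass' if ratio >= 1.0 else 'fail'}")
```

The longer-term structural (net stable funding) ratio follows the same quotient logic over a one-year horizon, comparing available to required stable funding.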

Central Bank Governors and Heads of Supervision will review concrete proposals on each of these topics later this year.

They endorsed the Committee's approach to extensive consultation on and comprehensive assessment of the proposed reforms, covering both the impact on the banking sector and the broader economy, before arriving at a final calibration of the minimum level of capital and the buffers above the minimum at the end of this year. They stressed that the aim of the new global standards should be to achieve a better balance between banking sector stability and sustainable credit growth. President Trichet noted that "the Group of Central Bank Governors and Heads of Supervision will provide strong oversight of the work of the Basel Committee during this phase, including both the completion and calibration of the reforms."

The fully calibrated set of standards will be developed by the end of 2010 to be phased in as financial conditions improve and the economic recovery is assured with the aim of implementation by the end of 2012. This includes appropriate phase-in measures and grandfathering arrangements for a sufficiently long period to ensure a smooth transition to the new standards.

About the Group of Central Bank Governors and Heads of Supervision

The Group of Central Bank Governors and Heads of Supervision is the oversight body of the Basel Committee on Banking Supervision and is comprised of the same member jurisdictions as the Committee.

Friday, January 8, 2010

On Lender of Last Resort facilities

On Lender of Last Resort facilities. By A J R, contributing blogger.

Nov 28, 2009


---

As is elegantly summarized in Alexander, Dhumale and Eatwell [ADhE 2006], to manage risks, especially of systemic nature ("the awkward tendency for financial crises to cluster over time" [S 2000]), regulators choose from a "crisis management menu" [T 2009] of ex ante and ex post measures. Among the former we can find:

  • capital adequacy requirements - e.g., the Basel I and II frameworks;

  • large exposure limits;

  • limitations on lending.


The ex post measures they mention are deposit insurance and the lender of last resort (LOLR) function. Basically, "[t]hese regulatory measures compose the main framework of bank prudential regulation."


A detailed list of steps to be taken once ex ante measures have detected weaknesses, but before direct support is warranted by the bank being in distress, and of course before closure or liquidation, appears in Figure 1 of the BCBS "Supervisory Guidance on Dealing with Weak Banks," p. 5 [BCBS 2002]. We list several that we'll mention later:


  • Cash (equity) injection by shareholders (*)

  • Suspension of shareholders' rights, including voting rights

  • Prohibition on distribution of profits or other withdrawals by shareholders

  • Removal of directors and managers

  • Immediate and/or enhanced provisioning of doubtful assets (*)

  • Stop principal or interest repayments on subordinated debt

  • Appointment of an administrator



Most of the measures are designed to reduce the hemorrhage of money in case of weakness and to help the bank strengthen itself. But the two measures marked with an asterisk are direct ways (if successful) to avoid invoking the LOLR facilities.


How does LOLR support differ from those weak-bank measures and from the other ex post ones? What is the nature of lender of last resort facilities?



Definition, purpose of LOLR support


Freixas et al. [FGHS 1999] define LOLR as


the discretionary provision of liquidity to a financial institution (or the market [...]) by the central bank in reaction to an adverse shock which causes an abnormal increase in demand for liquidity which cannot be met from an alternative source.


This discretionary element that [FGHS 1999] mentions is important, and it is used in the guidance on dealing with weak banks [BCBS 2002]. The need to keep those who may need funds awake at night is obvious. If the LOLR function were automatic, moral hazard would be even greater than it already is, with the repeated bail-outs of unsecured creditors, the Treasuries' support for any amount in individuals' accounts, and the ever easier criteria for admission to the too-big-to-fail category.


Humphrey [H 1989], focusing on the system, sees the LOLR role as defined by Thornton and Bagehot [B 1873] as still valid in 1989:


(1) to protect the money stock, (2) to support the whole financial system rather than individual financial institutions, (3) to behave consistently with the longer-run objective of stable money growth, and (4) to preannounce its policy in advance of crises so as to remove uncertainty. They also advised the LLR to let insolvent institutions fail, to lend to creditworthy institutions only, to charge penalty rates, and to require good collateral. Such rules they thought would minimize problems of moral hazard and remove bankers’ incentives to take undue risks.


Humphrey adds:


These precepts, though honored in the breach as well as in the observance, continue to serve as a benchmark and model for central bank policy today.


So we can say that, until at least 1989, LOLR followed the Thornton-Bagehot doctrine closely. We'll see below how today's picture is changing.


Aside from these valuable definitions, I'd like to add the following as the main characteristics differentiating this kind of support from the measures for weak banks:


  • LOLR facilities are not designed for weak banks (or financial systems), but for institutions or systems with liquidity/solvency problems – as such, most of the [BCBS 2002] measures must be tried first.

  • The many different pathways that can be followed by the regulator are even less clearly determined.

  • There is a need for high publicity in case of panics, both for a single institution (to fight "internal discredit", Bagehot [B 1873]) and in a general failure ("to remove instability") – but the [BCBS 2002] measures are recommended to be applied as silently as possible, to avoid harming the bank under such intervention.




Transparency of last resort support


Obviously, this last point is not applicable to many of the measures for weak banks. Those printed above in italics will be highly publicized, or at least will be known to rival banks fairly soon. But the intention is to delay public knowledge of them as much as possible. In many cases it is possible to keep the suspension of rights or the prohibition on distribution of profits from becoming known until a general meeting of shareholders, and that can be almost a year away, if the next one can be allowed to pass without intervention.


In the case of other soft ex post measures, it is conceivable to keep them under wraps for some time, sometimes more than a year. In his 'Emergency Liquidity Facilities' [DH 2002, p. 124], Dong He details support for a Finnish bank. Not only did the central bank help; the help itself was not made public until it appeared, more than a year later, in the central bank's bulletin.


This is just a schematic way to explain the differences. In practice, some of the [BCBS 2002] measures that we've treated as clearly separated in time overlap to some degree. Nothing prevents the regulator from injecting emergency funds, into the interbank market or into the institutions, while simultaneously adopting many of the [BCBS 2002] measures.


There is little guidance on LOLR facilities if we exclude national laws. Dong He summarizes the relevant information for twenty nations in his paper's appendix but, as he says, the "study of emergency lending operations is hampered by a general lack of information on country practices in this area".



LOLR support types – by need


We can try to classify LOLR support according to the need for it. By this criterion, it is invoked mainly for two reasons:


  • because of undercapitalization arising from day-to-day operations – this is not a contentious point;

  • because of a panic – when many players "run for the exit at the same time" [W-P 2009] due to the sequential service constraint, which was noted by both Thornton and Bagehot.


[W-P 2009] summarizes many cases of panics in the nineteenth century and explains how central banks appeared in several countries, and with them LOLR support both as "emergency lending in normal times" and "in systemic crises", as [DH 2002] calls them. This last kind of support is the subject of much discussion today.



Liquidity problems caused by normal, day-to-day operations


If the bank is not suffering "internal discredit", but the regulator, through routine work, discovers that the financial institution is in distress – a more common occurrence than panics – it may be decided to provide "advances [...] to an immense amount", as Bagehot recorded in his book, aside from possibly using many of the measures for weak banks we mentioned at the beginning.


Although there is agreement on this, the path is full of perils. Bagehot notes:


But though the rule is clear, the greatest delicacy, the finest and best skilled judgment, are needed to deal [with such affairs].


I've counted the root "delic-" twelve times in his book, applied to how fragile and difficult to manage the system is. See References.



Liquidity problems caused by panics – both individual and systemic


Why do runs happen even when the bank is not insolvent and can find ways to gather capital at reasonable cost? The work of Diamond and Dybvig [DD 1983] shows that the uncoordinated character of dispersed, uninsured, small depositors can produce what appear to be several equilibria, among them a bank run, even in such favorable circumstances. Later, Morris & Shin confirmed this [MS 1999]:


Creditors of a distressed borrower face a coordination problem. Even if the fundamentals are sound, fear of premature foreclosure by others may lead to pre-emptive action, undermining the project.
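The multiple-equilibria result can be illustrated with a toy two-depositor coordination game in the spirit of Diamond and Dybvig; the payoffs below are invented for illustration, not taken from [DD 1983]:

```python
# Toy two-depositor coordination game in the spirit of Diamond-Dybvig.
# If both wait, the bank's assets mature and each gets 1.5; if both run,
# early liquidation leaves each 0.7; a lone runner recovers 1.0 while
# the depositor who waited gets only 0.4. All payoffs are invented.

payoff = {  # (my action, other's action) -> my payoff
    ("wait", "wait"): 1.5, ("wait", "run"): 0.4,
    ("run", "wait"): 1.0, ("run", "run"): 0.7,
}

def best_reply(other_action: str) -> str:
    """My payoff-maximizing action given the other depositor's action."""
    return max(("wait", "run"), key=lambda a: payoff[(a, other_action)])

# Both (wait, wait) and (run, run) are equilibria: each action is the
# best reply to itself, so a run can occur even at a solvent bank.
print(best_reply("wait"), best_reply("run"))
```

The point is exactly the coordination problem in the quote: with these payoffs, running is rational whenever you expect others to run, regardless of fundamentals.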



Bagehot offers what seemed a counterintuitive method to stop a run:


In opposition to what might be at first sight supposed, the best way for the bank or banks who have the custody of the bank reserve to deal with a drain arising from internal discredit, is to lend freely.


He also explains why this solution seems to work:


This discredit means, 'an opinion that you have not got any money,' and to dissipate that opinion, you must, if possible, show that you have money: you must employ it for the public benefit in order that the public may know that you have it. The time for economy and for accumulation is before. A good banker will have accumulated in ordinary times the reserve he is to make use of in extraordinary times.


And later he adds, speaking of panics through an example of one successfully resolved (my emphasis):


The holders of the cash reserve must be ready not only to keep it for their own liabilities, but to advance it most freely for the liabilities of others. They must lend to merchants, to minor bankers, to 'this man and that man,' whenever the security is good. In wild periods of alarm, one failure makes many [...]. 'We lent it [...] by every possible means and in modes we had never adopted before; [...] we not only discounted outright, but we made advances on the deposit of bills of exchange to an immense amount, in short, by every possible means consistent with the safety of the Bank [...].' After a day or two of this treatment, the entire panic subsided, and the 'City' was quite calm.


In one way or another, these considerations live on in current legislation, at least up to the time Dong He wrote his working paper for the IMF. We can also say that no single country preserves all those prescriptions in writing; there are always subtle changes, which we'll check in the last section, "LOLR support types – by differences in application".




LOLR support types – by differences in application


We can also classify LOLR support according to differences in its application.



Size limit


In some countries there are strict limits on the amount of liquidity that can be provided – that is, they do not follow the principle of lending freely. As Dong He says (p. 129), Argentina, Hong Kong, Turkey and the Philippines have clear limits on the size of last-resort support. He adds that such limits are controversial, as they can encourage runs.



Independence of regulator


In others, the bank holding the money reserves needs to coordinate with politicians in the executive departments, because of law or because of subtle pressure. In the end, in many cases the much-trumpeted independence of the regulator is not fully respected. In the IMF report on Turkey [IMF 2007] we find (details in References):


81. In practice the BRSA lacks full operational independence in regulatory and budgetary matters. Before issuing a regulation [...] must conduct consultations with the related Ministry and the Undersecretariat of the State Planning Organization. While nonbinding consultation is appropriate, the process is perceived as binding, thus raising the possibility of undue government interference. [...] All visits abroad by the BRSA management or staff, such as for on-site inspections or for training, must be approved by the government. [...]


[The situation is not as bad as it sounds in this text selection we made. Turkey made great progress in the years before this report was written, as the report says.]



Collateral


In some countries, the quality of the collateral doesn't follow Bagehot's criterion (to relax the standard). The Hong Kong Monetary Authority's policy is to accept only high-quality paper. Other countries do follow the criterion: e.g., the Bank of Korea may in some cases support banks with "the collateral of any assets which are defined temporarily as acceptable security". See examples in [DH 2002, p. 126].


Fischer [F 2000] argues that requiring adequate collateral diminishes moral hazard: the prospective borrower would reduce excessive risk-taking because risky assets won't be accepted as collateral.



Solvent and insolvent institutions


If the firm is merely temporarily illiquid but solvent, Bagehot recommends lending freely (although at "very high" interest rates). If the company is insolvent, he recommends letting it fail.


Several issues are raised here:


  • In the hurry in which these crises are dealt with, it is very difficult to know whether the institution is solvent or not.

  • The pressure to help the firm if it is systemically important is enormous, and Bagehot's view is increasingly not followed today. See next section.
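Bagehot's prescription can be stylized as a decision rule (our sketch, not a statute; the labels are assumptions), which also makes the two caveats above concrete: the rule presumes the supervisor can actually classify solvency and collateral quality in the heat of a crisis.

```python
# A stylized sketch of Bagehot's classical rule: lend freely at a high
# rate, against good collateral, to solvent but illiquid institutions;
# let insolvent ones fail. The categories are our simplification.

def bagehot_rule(solvent: bool, illiquid: bool,
                 good_collateral: bool) -> str:
    if not solvent:
        return "let it fail"
    if illiquid and good_collateral:
        return "lend freely at a penalty rate"
    if illiquid:
        return "no good collateral: no last-resort loan"
    return "no support needed"

print(bagehot_rule(solvent=True, illiquid=True, good_collateral=True))
```

In practice, as the next section shows, regulators increasingly depart from the first branch when the institution is systemically important.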


Impact of recent developments on LOLR principles


Several reasons force contemporary regulators to provide assistance to institutions that previously would not have qualified. Paramount among them are Too Big To Fail (TBTF) issues:

  • bank failures have large direct costs of liquidation (see the abstract of James below [J 1991]), which affect management and shareholders – much more so with big institutions

  • the payments system is affected – stopping payments by a TBTF bank touches too many economic agents

  • the interbank market (both as borrower and lender) is greatly affected if such a bank is liquidated

  • all this helps reduce economic activity – but even worse, the bank's knowledge about its customers (how well or badly someone repays his debts, his bills, etc.) is lost, further slowing the economy

  • the Treasury also suffers, since it must support the interbank market and try to soften the situation

  • compounding this, the size of such a bank can affect the banking system's credibility much more than a small bank's failure can – systemic disorders can emerge


Abandoning punitive rates


Rustomjee [R 2009, p. 19] offers two reasons why many "regulators have opted not to provide support at penalty rates":

  • "charging a higher than market rate could aggravate [...] the bank’s liquidity or solvency challenge"

  • "public knowledge that a bank is being offered emergency lending at a penalty rate could deepen the loss of confidence in bank management and could itself exacerbate any concerns either by banks that have lent to the weak bank, or depositors, so prompting a bank run."



Market makers of last resort (MMLR)


BoE's Tucker [T 2009], who defines LOLR as putting "a finger in the dyke", comments in a speech this month on the increasing "amount of credit that gets intermediated via markets" rather than via institutions, and discusses the need to stand ready to act as MMLRs, which is really complex: when you buy from an individual bank you have some leverage to change things, but once you enter the market as a buyer (which means a potentially very large number of transactions and counterparties) you cannot later renegotiate the price of the assets if their quality deteriorates.



References


[ADhE 2006] Alexander K, R Dhumale and J Eatwell (2006): 'Global Governance of the Financial System - The international regulation of systemic risk'. Oxford UK: Oxford University Press.


[B 1873] Bagehot W (1873) Lombard Street: A Description of the Money Market, London: HS King. http://www.econlib.org/library/Bagehot/bagLom.html


But in exact proportion to the power of this system is its delicacy. I should hardly say too much if I said its danger.


And this foreign deposit is evidently of a delicate and peculiar nature.


That such an arrangement is strange must be plain; but its strangeness can only be comprehended when we know what the custody of a national banking reserve means, and how delicate and difficult it is.


But though the rule is clear, the greatest delicacy, the finest and best skilled judgment, are needed to deal at once with such great and contrary evils.


And great as is the delicacy of such a problem in all countries, it is far greater in England now than it was or is elsewhere. The strain thrown by a panic on the final bank reserve is proportional to the magnitude of a country's commerce [...].


2ndly. Because, being a one-reserve system, it reduces the spare cash of the Money Market to a smaller amount than any other system, and so makes that market more delicate.


But it is of great importance to point out that our industrial organisation is liable not only to irregular external accidents, but likewise to regular internal changes; that these changes make our credit system much more delicate at some times than at others; and that it is the recurrence of these periodical seasons of delicacy which has given rise to the notion that panics come according to a fixed rule, that every ten years or so we must have one of them.


This is the surer to happen that Lombard Street is, as has been shown before, a very delicate market.


In a time when the trading classes were much ruder than they now are, many private bankers possessed variety of knowledge and a delicacy of attainment which would even now be very rare.


So that our one-reserve system of banking combines two evils: first, it makes the demand of the brokers upon the final reserve greater, because under it so many bankers remove so much money from the brokers; and under it also the final reserve is reduced to its minimum point, and the entire system of credit is made more delicate, and more sensitive.


And this is the reason why the Bank of England ought, I think, to deal most cautiously and delicately with their banking deposits.


We must therefore, I think, have recourse to feeble and humble palliatives such as I have suggested. With good sense, good judgment, and good care, I have no doubt that they may be enough. But I have written in vain if I require to say now that the problem is delicate, that the solution is varying and difficult, and that the result is inestimable to us all.


[BCBS 2002] Basel Committee on Banking Supervision (2002): ‘Supervisory Guidance on Dealing with Weak Banks’, Basel: Bank for International Settlements.


[DD 1983] Diamond D and P Dybvig (1983) ‘Bank Runs, Deposit Insurance and Liquidity’, Journal of Political Economy, Vol. 91, pp. 401–19.


[DH 2002] Dong He (2002) ‘Emergency Liquidity Facilities’, Chapter 5 of C Enoch, D Marston and M Taylor, Building Strong Banks through Surveillance and Resolution, Washington DC: International Monetary Fund.


Also issued, with the same title, as a working paper (2000), Washington, DC: IMF.


[F 2000] Fischer S (2000) 'On the Need for an International Lender of Last Resort', Essays in international economics, no. 220. Princeton University. http://www.princeton.edu/~ies/IES_Essays/E220.pdf


[FGHS 1999] Freixas X, C Giannini, G Hoggarth and F Soussa (1999) ‘Lender of Last Resort: A Review of the Literature’, Bank of England, Financial Stability Review, November. http://www.bankofengland.co.uk/publications/fsr/1999/fsr07art6.pdf


[H 1989] Humphrey T (1989) ‘The Lender of Last Resort: The Concept in History’, Federal Reserve Bank of Richmond Economic Review, March/April, pp. 8-16. http://www.richmondfed.org/publications/research/economic_review/1989/er750202.cfm


[IMF 2007] International Monetary Fund (2007) ‘Turkey: Financial System Stability Assessment’, IMF Country Report Number 07/361, November, Washington DC: IMF.


62. The 2001 amendment of the CBRT Law gave the CBRT [the central bank] substantially greater independence in the implementation of monetary policy and freed it of any obligation to finance the public sector. [...] However, a lacuna in the CBRT Law is lack of specificity concerning the removal from office of members of the governing bodies [...].


81. In practice the BRSA [very much like the FSA] lacks full operational independence in regulatory and budgetary matters. Before issuing a regulation, the BRSA must conduct consultations with the related Ministry and the Undersecretariat of the State Planning Organization. While nonbinding consultation is appropriate, the process is perceived as binding, thus raising the possibility of undue government interference. Moreover, the related Ministry is legally empowered to file a lawsuit for the cancellation of the Board’s regulatory decision. [...] All visits abroad by the BRSA management or staff, such as for on-site inspections or for training, must be approved by the government. [...]


125. The CMB [the Capital Markets Board] has adequate funding [...]. When markets are inactive, the potential shortfall must be covered by a budgetary transfer from the Ministry of Finance, which could have a bearing on its independence. [...]


[J 1991] James C (1991) ‘The Losses Realised in Bank Failures’, Journal of Finance, September, pp. 1223–42.


This paper examines the losses realized in bank failures. Losses are measured as the difference between the book value of assets and the recovery value net of the direct expenses associated with the failure. I find the loss on assets is substantial, averaging 30 percent of the failed bank's assets. Direct expenses associated with bank closures average 10 percent of assets. An empirical analysis of the determinants of these losses reveals a significant difference in the value of assets retained by the FDIC and similar assets assumed by acquiring banks.


[MS 1999] Morris S and HS Shin (1999) ‘Coordination Risk and the Price of Debt’, Cowles Foundation Discussion Paper no. 1241, December.


[R 2009] Rustomjee C (2009) 'Bank Regulation and the Resolution of Banking Crises, Unit 2'. London: Centre for Financial and Management Studies (CeFiMS), School of Oriental and African Studies.


[S 2000] Sinclair, Peter JN (2000) ‘Central Banks and Financial Stability’, Bank of England Quarterly Bulletin, November: 377–89. http://www.bankofengland.co.uk/publications/quarterlybulletin/qb000403.pdf


[T 2009] Paul Tucker (2009) 'The crisis management menu'. Speech by Mr Paul Tucker, Deputy Governor for Financial Stability at the Bank of England, at the SUERF, CEPS and Belgian Financial Forum Conference: "Crisis Management at the Cross-Roads", Brussels, November 16, 2009.


[W-P 2009] Wickman-Parak B (2009) ‘Financial stability in focus’. Speech by Ms Barbro Wickman-Parak, Deputy Governor of the Sveriges Riksbank, at the Swedish Chambers, Gothenburg, October 29, 2009.

Where U.S. Health Care Ranks Number One - Isn't 'responsiveness' what medicine is all about?

Where U.S. Health Care Ranks Number One. By MARK B. CONSTANTIAN
Isn't 'responsiveness' what medicine is all about?
WSJ, Jan 08, 2010

Last August the cover of Time pictured President Obama in white coat and stethoscope. The story opened: "The U.S. spends more to get less [health care] than just about every other industrialized country." This trope has dominated media coverage of health-care reform. Yet a majority of Americans opposes Congress's health-care bills. Why?

The comparative ranking system that most critics cite comes from the U.N.'s World Health Organization (WHO). The ranking most often quoted is Overall Performance, where the U.S. is rated No. 37. The Overall Performance Index, however, is adjusted to reflect how well WHO officials believe that a country could have done in relation to its resources.

The scale is heavily subjective: The WHO believes that we could have done better because we do not have universal coverage. What apparently does not matter is that our population has universal access because most physicians treat indigent patients without charge and accept Medicare and Medicaid payments, which do not even cover overhead expenses. The WHO does rank the U.S. No. 1 of 191 countries for "responsiveness to the needs and choices of the individual patient." Isn't responsiveness what health care is all about?

Data assembled by Dr. Ronald Wenger and published recently in the Bulletin of the American College of Surgeons indicates that cardiac deaths in the U.S. have fallen by two-thirds over the past 50 years. Polio has been virtually eradicated. Childhood leukemia has a high cure rate. Eight of the top 10 medical advances in the past 20 years were developed or had roots in the U.S.

The Nobel Prizes in medicine and physiology have been awarded to more Americans than to researchers in all other countries combined. Eight of the 10 top-selling drugs in the world were developed by U.S. companies. The U.S. has some of the highest breast, colon and prostate cancer survival rates in the world. And our country ranks first or second in the world in kidney transplants, liver transplants, heart transplants, total knee replacements, coronary artery bypass, and percutaneous coronary interventions.

We have the shortest waiting time for nonemergency surgery in the world; England has one of the longest. In Canada, a country of 35 million citizens, 1 million patients now wait for surgery and another million wait to see specialists.

When my friend, cardiac surgeon Peter Alivizatos, returned to Greece after 10 years heading the heart transplantation program at Baylor University in Dallas, the one-year heart transplant survival rate there was 50%—five-year survival was only 35%. He soon increased those numbers to 94% one-year and 90% five-year survival, which is what we achieve in the U.S. So the next time you hear that the U.S. is No. 37, remember that Greece is No. 14. Cuba, by the way, is No. 39.

But the issue is only partly about quality. As we have all heard, the U.S. spends a higher percentage of its gross domestic product for health care than any other country.

Actually, health-care spending now increases more moderately than it has in previous decades. Food, energy, housing and health care consume the same share of American spending today (55%) that they did in 1960 (53%).

So what does this money buy? Certainly some goes to inefficiencies, corporate profits, and costs that should be lowered by professional liability reform and national, free-market insurance access by allowing for competition across state lines. But the majority goes to a long list of advantages that American citizens now expect: the easiest access, the shortest waiting times, the widest choice of physicians and hospitals, and constant availability of health care to elderly Americans. What we need now is insurance and liability reform—not health-care reform.

Who determines how much a nation should pay for its health? Is 17% too much, or too little? What better way could there be to dedicate our national resources than toward the health and productivity of our citizens?

Perhaps it's not that America spends too much on health care, but that other nations don't spend enough.

Dr. Constantian is a plastic and reconstructive surgeon in New Hampshire.