Showing posts with label science. Show all posts

Friday, November 10, 2017

Richard Feynman on Why Questions

[Transcript] Richard Feynman on Why Questions
61 Post author: Grognor 08 January 2012 07:01PM
I thought this video was a really good example of question-dissolving by Richard Feynman. But it's in 240p! Nobody likes watching 240p videos. So I transcribed it. (Edit: That was in jest. The real reasons are that I thought I could get more exposure this way, and that a lot of people appreciate transcripts. Also, Paul Graham speculates that the written word is universally superior to the spoken word for conveying ideas.) I was going to post it as a rationality quote, but the transcript was sufficiently long that I think it warrants a discussion post instead.

Here you go:
Interviewer: If you get hold of two magnets, and you push them, you can feel this pushing between them. Turn them around the other way, and they slam together. Now, what is it, the feeling between those two magnets?
Feynman: What do you mean, "What's the feeling between the two magnets?"
Interviewer: There's something there, isn't there? The sensation is that there's something there when you push these two magnets together.
Feynman: Listen to my question. What is the meaning when you say that there's a feeling? Of course you feel it. Now what do you want to know?
Interviewer: What I want to know is what's going on between these two bits of metal?
Feynman: They repel each other.
Interviewer: What does that mean, or why are they doing that, or how are they doing that? I think that's a perfectly reasonable question.
Feynman: Of course, it's an excellent question. But the problem, you see, when you ask why something happens, how does a person answer why something happens? For example, Aunt Minnie is in the hospital. Why? Because she went out, slipped on the ice, and broke her hip. That satisfies people. It satisfies, but it wouldn't satisfy someone who came from another planet and who knew nothing about why when you break your hip do you go to the hospital. How do you get to the hospital when the hip is broken? Well, because her husband, seeing that her hip was broken, called the hospital up and sent somebody to get her. All that is understood by people. And when you explain a why, you have to be in some framework that you allow something to be true. Otherwise, you're perpetually asking why. Why did the husband call up the hospital? Because the husband is interested in his wife's welfare. Not always, some husbands aren't interested in their wives' welfare when they're drunk, and they're angry.
And you begin to get a very interesting understanding of the world and all its complications. If you try to follow anything up, you go deeper and deeper in various directions. For example, if you go, "Why did she slip on the ice?" Well, ice is slippery. Everybody knows that, no problem. But you ask why is ice slippery? That's kinda curious. Ice is extremely slippery. It's very interesting. You say, how does it work? You could either say, "I'm satisfied that you've answered me. Ice is slippery; that explains it," or you could go on and say, "Why is ice slippery?" and then you're involved with something, because there aren't many things as slippery as ice. It's very hard to get greasy stuff, but that's sort of wet and slimy. But a solid that's so slippery? Because it is, in the case of ice, when you stand on it (they say) momentarily the pressure melts the ice a little bit so you get a sort of instantaneous water surface on which you're slipping. Why on ice and not on other things? Because water expands when it freezes, so the pressure tries to undo the expansion and melts it. It's capable of melting, but other substances get cracked when they're freezing, and when you push them they're satisfied to be solid.
Why does water expand when it freezes and other substances don't? I'm not answering your question, but I'm telling you how difficult the why question is. You have to know what it is that you're permitted to understand and allow to be understood and known, and what it is you're not. You'll notice, in this example, that the more I ask why, the deeper a thing is, the more interesting it gets. We could even go further and say, "Why did she fall down when she slipped?" It has to do with gravity, involves all the planets and everything else. Nevermind! It goes on and on. And when you're asked, for example, why two magnets repel, there are many different levels. It depends on whether you're a student of physics, or an ordinary person who doesn't know anything. If you're somebody who doesn't know anything at all about it, all I can say is the magnetic force makes them repel, and that you're feeling that force.
You say, "That's very strange, because I don't feel kind of force like that in other circumstances." When you turn them the other way, they attract. There's a very analogous force, electrical force, which is the same kind of a question, that's also very weird. But you're not at all disturbed by the fact that when you put your hand on a chair, it pushes you back. But we found out by looking at it that that's the same force, as a matter of fact (an electrical force, not magnetic exactly, in that case). But it's the same electric repulsions that are involved in keeping your finger away from the chair because it's electrical forces in minor and microscopic details. There's other forces involved, connected to electrical forces. It turns out that the magnetic and electrical force with which I wish to explain this repulsion in the first place is what ultimately is the deeper thing that we have to start with to explain many other things that everybody would just accept. You know you can't put your hand through the chair; that's taken for granted. But that you can't put your hand through the chair, when looked at more closely, why, involves the same repulsive forces that appear in magnets. The situation you then have to explain is why, in magnets, it goes over a bigger distance than ordinarily. There it has to do with the fact that in iron all the electrons are spinning in the same direction, they all get lined up, and they magnify the effect of the force 'til it's large enough, at a distance, that you can feel it. But it's a force which is present all the time and very common and is a basic force of almost - I mean, I could go a little further back if I went more technical - but on an early level I've just got to tell you that's going to be one of the things you'll just have to take as an element of the world: the existence of magnetic repulsion, or electrical attraction, magnetic attraction.
I can't explain that attraction in terms of anything else that's familiar to you. For example, if we said the magnets attract as if they were connected by rubber bands, I would be cheating you, because they're not connected by rubber bands. I'd soon be in trouble. And secondly, if you were curious enough, you'd ask me why rubber bands tend to pull back together again, and I would end up explaining that in terms of electrical forces, which are the very things that I'm trying to use the rubber bands to explain. So I have cheated very badly, you see. So I am not going to be able to give you an answer to why magnets attract each other except to tell you that they do. And to tell you that that's one of the elements in the world - there are electrical forces, magnetic forces, gravitational forces, and others, and those are some of the parts. If you were a student, I could go further. I could tell you that the magnetic forces are related to the electrical forces very intimately, that the relationship between the gravity forces and electrical forces remains unknown, and so on. But I really can't do a good job, any job, of explaining magnetic force in terms of something else you're more familiar with, because I don't understand it in terms of anything else that you're more familiar with.

Monday, July 10, 2017

Evaluation of a proposal for reliable low-cost grid power with 100% wind, water, and solar

Evaluation of a proposal for reliable low-cost grid power with 100% wind, water, and solar. By Christopher T. M. Clack, Staffan A. Qvist, Jay Apt, Morgan Bazilian, Adam R. Brandt, Ken Caldeira, Steven J. Davis, Victor Diakov, Mark A. Handschy, Paul D. H. Hines, Paulina Jaramillo, Daniel M. Kammen, Jane C. S. Long, M. Granger Morgan, Adam Reed, Varun Sivaram, James Sweeney, George R. Tynan, David G. Victor, John P. Weyant, and Jay F. Whitacre. Proceedings of the National Academy of Sciences.

Significance: Previous analyses have found that the most feasible route to a low-carbon energy future is one that adopts a diverse portfolio of technologies. In contrast, Jacobson et al. (2015) consider whether the future primary energy sources for the United States could be narrowed to almost exclusively wind, solar, and hydroelectric power and suggest that this can be done at “low-cost” in a way that supplies all power with a probability of loss of load “that exceeds electric-utility-industry standards for reliability”. We find that their analysis involves errors, inappropriate methods, and implausible assumptions. Their study does not provide credible evidence for rejecting the conclusions of previous analyses that point to the benefits of considering a broad portfolio of energy system options. A policy prescription that overpromises on the benefits of relying on a narrower portfolio of technology options could be counterproductive, seriously impeding the move to a cost-effective decarbonized energy system.

Abstract: A number of analyses, meta-analyses, and assessments, including those performed by the Intergovernmental Panel on Climate Change, the National Oceanic and Atmospheric Administration, the National Renewable Energy Laboratory, and the International Energy Agency, have concluded that deployment of a diverse portfolio of clean energy technologies makes a transition to a low-carbon-emission energy system both more feasible and less costly than other pathways. In contrast, Jacobson et al. [Jacobson MZ, Delucchi MA, Cameron MA, Frew BA (2015) Proc Natl Acad Sci USA 112(49):15060–15065] argue that it is feasible to provide “low-cost solutions to the grid reliability problem with 100% penetration of WWS [wind, water and solar power] across all energy sectors in the continental United States between 2050 and 2055”, with only electricity and hydrogen as energy carriers. In this paper, we evaluate that study and find significant shortcomings in the analysis. In particular, we point out that this work used invalid modeling tools, contained modeling errors, and made implausible and inadequately supported assumptions. Policy makers should treat with caution any visions of a rapid, reliable, and low-cost transition to entire energy systems that rely almost exclusively on wind, solar, and hydroelectric power.

Monday, January 9, 2017

A way to market the science behind climate change more effectively to conservatives

Past-focused environmental comparisons promote pro-environmental outcomes for conservatives. By Matthew Baldwin and Joris Lammers


Political polarization on important issues can have dire consequences for society, and divisions regarding the issue of climate change could be particularly catastrophic. Building on research in social cognition and psychology, we show that temporal comparison processes largely explain the political gap in respondents’ attitudes towards and behaviors regarding climate change. We found that conservatives’ proenvironmental attitudes and behaviors improved consistently and drastically when we presented messages that compared the environment today with that of the past. This research shows how ideological differences can arise from basic psychological processes, demonstrates how such differences can be overcome by framing a message consistent with these basic processes, and provides a way to market the science behind climate change more effectively.


Conservatives appear more skeptical about climate change and global warming and less willing to act against it than liberals. We propose that this unwillingness could result from fundamental differences in conservatives’ and liberals’ temporal focus. Conservatives tend to focus more on the past than do liberals. Across six studies, we rely on this notion to demonstrate that conservatives are positively affected by past- but not by future-focused environmental comparisons. Past comparisons largely eliminated the political divide that separated liberal and conservative respondents’ attitudes toward and behavior regarding climate change, so that across these studies conservatives and liberals were nearly equally likely to fight climate change. This research demonstrates how psychological processes, such as temporal comparison, underlie the prevalent ideological gap in addressing climate change. It opens up a promising avenue to convince conservatives effectively of the need to address climate change and global warming.

Monday, December 26, 2016

What scientists think of themselves, other scientists and the population at large

Who Believes in the Storybook Image of the Scientist? 
Dec 2016

Abstract: Do lay people and scientists themselves recognize that scientists are human and therefore prone to human fallibilities such as error, bias, and even dishonesty? In a series of three experimental studies and one correlational study (total N = 3,278) we found that the ‘storybook image of the scientist’ is pervasive: American lay people and scientists from over 60 countries attributed considerably more objectivity, rationality, open-mindedness, intelligence, integrity, and communality to scientists than other highly-educated people. Moreover, scientists perceived even larger differences than lay people did. Some groups of scientists also differentiated between different categories of scientists: established scientists attributed higher levels of the scientific traits to established scientists than to early-career scientists and PhD students, and higher levels to PhD students than to early-career scientists. Female scientists attributed considerably higher levels of the scientific traits to female scientists than to male scientists. A strong belief in the storybook image and the (human) tendency to attribute higher levels of desirable traits to people in one’s own group than to people in other groups may decrease scientists’ willingness to adopt recently proposed practices to reduce error, bias and dishonesty in science.

Wednesday, November 27, 2013

Alzheimer's Disease - The Puzzles, The Partners, The Path Forward

Alzheimer's Disease - The Puzzles, The Partners, The Path Forward
PhRMA, November 26, 2013

Alzheimer's is a debilitating neurodegenerative disease that currently afflicts more than 5 million people in the U.S. If no new medicines are found to prevent, delay or stop the progression of Alzheimer's disease, the number of affected people in America will jump to 15 million by 2050 and related healthcare costs could increase five-fold to $1.2 trillion, according to the Alzheimer's Association. In contrast, a medicine that delays the onset of Alzheimer's disease by five years would lower the number of Americans suffering from the disease by nearly half and save $447 billion in related costs in 2050.

America's biopharmaceutical companies are currently developing 73 potential new treatments and diagnostics for Alzheimer's, according to a recent report released by PhRMA. At a recent all-day forum, "Alzheimer's: The Puzzle, The Partners, The Path Forward," the Alzheimer's Association, Alzheimer's Drug Discovery Foundation, and PhRMA convened key stakeholders from the Alzheimer's community to discuss these therapies presently in development to treat the disease, as well as the current state of innovation and R&D for Alzheimer's disease treatments and diagnostics.

Among the key areas of discussion were pre-competitive partnerships, including potential areas for collaboration and public-private partnerships, as well as pre-symptomatic clinical trials, which may help researchers understand the clinical heterogeneity of the disease and subsequent challenges in the use and adoption of clinical and functional endpoints in new clinical trial design.

Panelists included top industry and academic scientists, policymakers, patients, payers, and many others. Executives from the Alzheimer's Association and Alzheimer's Drug Discovery Foundation also discussed the path forward for Alzheimer's disease in relation to science and policy.

Continue the conversation online using the event hashtag #ALZpov

Saturday, August 3, 2013

Nearly 450 Innovative Medicines in Development for Neurological Disorders

Neurological Disorders
July 30, 2013

Nearly 450 Innovative Medicines in Development for Neurological Disorders

Neurological disorders—such as epilepsy, multiple sclerosis, Alzheimer’s disease, and Parkinson’s disease—inflict great pain and suffering on patients and their families, and every year cost the U.S. economy billions of dollars. However, a growing understanding of how neurological disorders work at a genetic and molecular level has spurred improvements in treatment for many of these diseases.

America’s biopharmaceutical research companies are developing 444 medicines to prevent and treat neurological disorders, according to a new report released by the Pharmaceutical Research and Manufacturers of America (PhRMA). 

The report demonstrates the wide range of medicines in development for the more than 600 neurological disorders that affect millions of Americans each year. These medicines are all currently in clinical trials or awaiting Food & Drug Administration (FDA) review. They include 82 for Alzheimer’s disease, 82 for pain, 62 for brain tumors, 38 for multiple sclerosis, 28 for epilepsy and seizures, 27 for Parkinson’s disease, and 25 for headache.

Many of the potential medicines use cutting-edge technologies and new scientific approaches. For example:

  • A medicine that prompts the immune system to protect neurons affected by amyotrophic lateral sclerosis (ALS), also known as Lou Gehrig's disease
  • A gene therapy for the treatment of Alzheimer’s disease
  • A gene therapy to reverse the effects of Parkinson’s disease
These new medicines promise to continue the already remarkable progress against neurological disorders and to raise the quality of life for patients suffering from these diseases and their families. Read more about selected medicines in development for neurological disorders.

Alzheimer's Disease

Every 68 seconds someone in America develops Alzheimer’s disease, according to the Alzheimer’s Association, and by 2050 it could be every 33 seconds, or nearly a million new cases per year. Disease-modifying treatments currently in development could delay the onset of the disease by five years, and result in 50 percent fewer patients by 2050.

There are also potential cost savings offered by innovative disease-modifying treatments. As the 6th leading cause of death in the United States and one of the most common neurological disorders, Alzheimer’s disease currently costs society approximately $203 billion. This number could increase to $1.2 trillion by 2050; however, delaying the onset of the disease by five years could reduce the cost of care of Alzheimer’s patients in 2050 by nearly $450 billion.

Additional Resources

Friday, May 31, 2013

241 Medicines in Development for Leukemia, Lymphoma and Other Blood Cancers

241 Medicines in Development for Leukemia, Lymphoma and Other Blood Cancers
PhRMA, May 2013

Biopharmaceutical research companies are developing 241 medicines for blood cancers—leukemia, lymphoma and myeloma. This report lists medicines in human clinical trials or under review by the U.S. Food and Drug Administration (FDA).

The medicines in development include:

• 98 for lymphoma, including Hodgkin and non-Hodgkin lymphoma, which affect nearly 80,000 Americans each year.
• 97 for leukemia, including the four major types, which affect nearly 50,000 people in the United States each year.
• 52 for myeloma, a cancer of the plasma cells, which impacts more than 22,000 people each year in the United States.
• 24 for hematological malignancies, which affect bone marrow, blood and lymph nodes.
• 15 each for myeloproliferative neoplasms, such as myelofibrosis, polycythemia vera and essential thrombocythemia; and for myelodysplastic syndromes, which are diseases affecting the blood and bone marrow.

These medicines in development offer hope for greater survival for the thousands of Americans who are affected by these cancers of the blood.

Definitions for the cancers listed in this report and other terms can be found on page 27. Links to sponsor company web sites provide more information on the potential products. See full report:

Sunday, April 21, 2013

Generalized linear modeling with highly dimensional data

Question from a student, University of Missouri-Kansas City:

Hi guys,
I have a project in my regression class, and we have to use R for it, but so far I haven't found appropriate code for this project, and I don't know which method I should use.

I have to analyze a high-dimensional dataset. The data has a total of 500 features.

We have no knowledge as to which of the features are useful and which are not. Thus we want to apply model selection techniques to obtain a subset of useful features. What we have to do is the following:

a) There are totally 2000 observations in the data. Use the first 1000 to train or fit your model, and the other 1000 for prediction.

b) You will report the number of features you select and the percentage of response you correctly predict. Your project is considered valid only if the obtained percentage exceeds 54%.

Please help me as much as you can.
Your help would be appreciated.
Thank you!


Well, doing batches of 30 variables, I came across 88 of the 500 that minimize AIC within each batch:

library(bestglm)
t1 = read.csv("qw.csv", header=FALSE)
# not a good solution -- better to get 1000 records randomly, but this is enough for now:
train_data = t1[1:1000, ]
xy = data.frame(train_data[, 2:31], y = train_data[, 1])  # bestglm wants the response last
(bestAIC = bestglm(xy, IC="AIC"))

and so on, going from x = train_data[, 2:31] to x = train_data[, 32:61], etc. Each run gives you a list of the best variables to minimize AIC (I chose AIC, but it can be any other criterion).

If I try to process more than 30 (or 31) columns with bestglm, it takes too much time, because it switches to other search routines whose optimization is different... and clearly inefficient.

Now the problem seems reduced to using fewer than 90 variables instead of the original 500. Not the real solution, since I am doing this on a piecemeal basis, but maybe close to what we are looking for, which is to predict 54pct of the observed values correctly.
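As a rough illustration of scoring candidate variables by AIC, here is a stdlib-only Python sketch. The post itself uses R's bestglm for best-subset search within each batch; this sketch substitutes a simpler one-variable-at-a-time AIC screen on hypothetical synthetic data, so it illustrates the criterion, not the exact method:

```python
import math
import random

random.seed(0)
n = 200
# hypothetical synthetic data: y depends only on features 0 and 1 out of 10
X = [[random.gauss(0, 1) for _ in range(10)] for _ in range(n)]
y = [2.0 * row[0] - 1.5 * row[1] + random.gauss(0, 1) for row in X]

def aic_simple_regression(x, y):
    # fit y = a + b*x by least squares; Gaussian AIC = m*log(RSS/m) + 2k
    m = len(y)
    mx, my = sum(x) / m, sum(y) / m
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sxy / sxx
    a = my - b * mx
    rss = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    return m * math.log(rss / m) + 2 * 3  # k = 3: intercept, slope, sigma

# baseline: intercept-only model (k = 2: intercept, sigma)
rss0 = sum((yi - sum(y) / n) ** 2 for yi in y)
aic0 = n * math.log(rss0 / n) + 2 * 2

# keep the variables whose one-variable model beats the baseline AIC
selected = [j for j in range(10)
            if aic_simple_regression([row[j] for row in X], y) < aic0]
```

With this setup the two truly informative features always pass the screen; a noise feature occasionally sneaks in, which is exactly why a piecemeal AIC search leaves you with more candidates than you really need.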

Using other methods I got even fewer candidates to use as variables, but let's keep the ones we found before.

Then I tried this: after finding the best candidates, I created this object, a data frame:

dat = train_data[, c("V1", "V50", "V66", "V325", "V426", "V28", "V44", "V75", "V111", "V128", "V149", "V152", "V154", "V179", "V181", "V189", "V203", "V210", "V213", "V216", "V218", "V234", "V243", "V309", "V311", "V323", "V338", "V382", "V384", "V405", "V412", "V415", "V417", "V424", "V425", "V434", "V483")]

Then I invoked this:

model = train(train_data$V1 ~ train_data$V50 + train_data$V66 + train_data$V325 + train_data$V426 + train_data$V28 + train_data$V44 + train_data$V75 + train_data$V111 + train_data$V128 + train_data$V149 + train_data$V152 + train_data$V154 + train_data$V179 + train_data$V181 + train_data$V189 + train_data$V203 + train_data$V210 + train_data$V213 + train_data$V216 + train_data$V218 + train_data$V234 + train_data$V243 + train_data$V309 + train_data$V311 + train_data$V323 + train_data$V338 + train_data$V382 + train_data$V384 + train_data$V405 + train_data$V412 + train_data$V415 + train_data$V417 + train_data$V424 + train_data$V425 + train_data$V434 + train_data$V483,
               trace = FALSE)
ps = predict(model, dat)

If you check the result, ps, you find that most values are the same:

606 are -0.2158001115381
346 are 0.364988437287819

The rest of the 1000 values are very close to these two; the full breakdown is:

   1 is  -0.10
   1 is  -0.14
   1 is  -0.17
   1 is  -0.18
   3 are -0.20
 617 are -0.21
   1 is   0.195
   1 is   0.359
   1 is   0.360
   1 is   0.362
   2 are  0.363
 370 are  0.364

So I just converted all negative values to -1 and all positive values to 1 (let's assume it is the propensity not to buy or to buy). Then I found that 380 rows were negative when the original value to be predicted was -1 (499 rows), that is, a success rate of 76pct.

Only 257 values were positive when the original values were positive (success rate of 257/501 = 51.3pct).

The combined success rate in predicting the response variable values is a bit above 63pct, which is above the value we aimed at, 54pct.
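The sign-thresholding and per-class scoring described above can be written out in a few lines of Python (the prediction values below are toy numbers chosen for illustration, not the actual model output):

```python
# toy predicted values and true labels, for illustration only
preds  = [-0.21, -0.21, 0.36, -0.21, 0.36, 0.36, -0.14, 0.20]
actual = [-1, -1, 1, 1, -1, 1, -1, 1]

# collapse the continuous predictions to a sign: buy (1) / not buy (-1)
signed = [-1 if p < 0 else 1 for p in preds]

def success_rate(cls):
    # fraction of class `cls` cases that the signed prediction got right
    hits = sum(1 for s, a in zip(signed, actual) if a == cls and s == cls)
    total = sum(1 for a in actual if a == cls)
    return hits / total

neg_rate = success_rate(-1)   # accuracy on the -1 (not buy) rows
pos_rate = success_rate(1)    # accuracy on the +1 (buy) rows
combined = sum(1 for s, a in zip(signed, actual) if s == a) / len(actual)
```

Scoring each class separately, as done here, is what exposes the imbalance reported below: a high overall rate can hide a very weak rate on one of the two classes.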

Now I tried the second data set, test_data (the second 1000 rows):

negative values when original response value was negative too:
          success rate is 453/501 = .90419

Impressive? Now see how disappointing this is:

positive values when original response value was positive too:
          success rate is 123/499 = .24649

The combined success rate is about 57pct, which is barely above the mark.

Do I trust my own method?

Of course not. I would get all the previous consumer surveys (buy/not buy) my company had on file, and then I would check whether I can get a success rate at or above 57pct (which to me is too low, to say nothing of 54pct).

For the time and effort I spent, maybe I should have tossed an electronic coin; with a bit of luck you can get a bit above 50pct success     : - )

Maybe to prevent this they chose 54pct, since in 1000 runs you could very well end up near 50pct.
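That intuition is easy to quantify: for a fair coin over 1000 predictions, the standard error of the success fraction is about 1.6 percentage points, so a 54pct cutoff sits roughly 2.5 standard errors above chance. In Python:

```python
import math

n, p = 1000, 0.5                  # 1000 predictions, coin-toss accuracy
se = math.sqrt(p * (1 - p) / n)   # standard error of the success fraction
z = (0.54 - p) / se               # how many standard errors above chance 54pct is
```

So pure luck lands near 54pct only about once in a couple of hundred tries, which is plausibly why the instructor picked that threshold rather than 50pct.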

Refinement, or "If we had all the time in the world..."

Since I had enough free time, I tried this (same dat data frame):

model = train(train_data$V1 ~ log(train_data$V50) + log(train_data$V66) + log(train_data$V325) + log(train_data$V426) + log(train_data$V28) + log(train_data$V44) + log(train_data$V75) + log(train_data$V111) + log(train_data$V128) + log(train_data$V149) + log(train_data$V152) + log(train_data$V154) + log(train_data$V179) + log(train_data$V181) + log(train_data$V189) + log(train_data$V203) + log(train_data$V210) + log(train_data$V213) + log(train_data$V216) + log(train_data$V218) + log(train_data$V234) + log(train_data$V243) + log(train_data$V309) + log(train_data$V311) + log(train_data$V323) + log(train_data$V338) + log(train_data$V382) + log(train_data$V384) + log(train_data$V405) + log(train_data$V412) + log(train_data$V415) + log(train_data$V417) + log(train_data$V424) + log(train_data$V425) + log(train_data$V434) + log(train_data$V483),
               trace = FALSE)
ps = predict(model, dat)

negative values when original response value was negative too: .7

positive values when original response value was positive too: .69

combined success rate: 69.4pct

# now we try with the other 1000 values:
[same dat data frame, but using test_data instead of train_data]

model = train(test_data$V1 ~ log(test_data$V50) + log(test_data$V66) + log(test_data$V325) + log(test_data$V426) + log(test_data$V28) + log(test_data$V44) + log(test_data$V75) + log(test_data$V111) + log(test_data$V128) + log(test_data$V149) + log(test_data$V152) + log(test_data$V154) + log(test_data$V179) + log(test_data$V181) + log(test_data$V189) + log(test_data$V203) + log(test_data$V210) + log(test_data$V213) + log(test_data$V216) + log(test_data$V218) + log(test_data$V234) + log(test_data$V243) + log(test_data$V309) + log(test_data$V311) + log(test_data$V323) + log(test_data$V338) + log(test_data$V382) + log(test_data$V384) + log(test_data$V405) + log(test_data$V412) + log(test_data$V415) + log(test_data$V417) + log(test_data$V424) + log(test_data$V425) + log(test_data$V434) + log(test_data$V483),
               trace = FALSE)
ps = predict(model, dat)

negative values when original response value was negative too:
          success rate is 322/499 = .645

positive values when original response value was positive too:
          success rate is 307/501 = .612

combined success rate: 62.9pct

Other things I tried failed -- if we had all the time in the world, we could try other possibilities and get better results... or not.

You'll tell me if you can reproduce the results, which are clearly above the 54pct mark.

Tuesday, March 26, 2013

Issues with the Bayes estimator of a conjugate normal hierarchy model

Someone asks about an instability issue in R's integrate function:
Hello everyone,

I am supposed to calculate the Bayes estimator of a conjugate normal hierarchy model. However, the Bayes estimator does not have a closed form.

The book "Theory of Point Estimation" claims that the numerical evaluation of  this estimator is simple. But my two attempts below both failed.

1. I tried directly using the integration routine in R on the numerator and denominator separately. Maybe because of the infinite domain, the results are occasionally far from reasonable.

2. I tried two ways of change of variables so that the resulting domain can be finite. I let

But the estimator results are very similar to the direct integration on the original integrand. More often than it should happen, we obtain quite large evaluations of the Bayes estimator, up to 10^6 in magnitude.

I wonder if there is any other numerical integration trick which can lead to a more accurate evaluation.

I appreciate any suggestion.

Some State University

Well, what happens here? Her program has a part that says:

[Bayes(nu,p,sigma,xbar) is the ratio of both integrals, "f", and "g" are the integrals, f is the numerator, g the denominator, so Bayes = f/g]

Now, executing Bayes(2,10,1,9.3) fails:

> Bayes(2,10,1,9.3)
[1] 1477.394

, which is much greater than the expected approx. 8.

I tried this with the same program, integrate, to do this simple case (dnorm is the normal distribution density):

> integrate(dnorm,0,1)
0.3413447 with absolute error < 3.8e-15
> integrate(dnorm,0,10)
0.5 with absolute error < 3.7e-05
> integrate(dnorm,0,100)
0.5 with absolute error < 1.6e-07
> integrate(dnorm,0,1000)
0.5 with absolute error < 4.4e-06
> integrate(dnorm,0,10000000000)
0 with absolute error < 0

As we can see, the last try, with a very large upper limit, fails miserably: the value is 0 (instead of 0.5) and the reported error bound ("< 0") is meaningless. The adaptive rule samples points spread across the enormous interval and never resolves the narrow region near 0 where all of the density's mass lies.

The integrate function uses QUADPACK code, advertised as "a Subroutine Package for Automatic Integration", but it cannot anticipate everything -- and here we hit an instability it cannot resolve.

My suggestion was to use integrate(f, 0, 1) and integrate(g, 0, 1) by default, and, whenever the results fall outside what is reasonable, to try integrate(f, 0, .999) and integrate(g, 0, .999) with as many nines as we can (I ran into problems with just .9999; that's why I wrote .999 there).

Of course, you can always try a different method. Since this function is well-behaved, any simple method could be good enough.
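For what it's worth, the failure and the change-of-variables remedy can be reproduced outside R. The Python sketch below uses a plain composite Simpson's rule in place of QUADPACK (so it is an analogy, not R's actual algorithm): naive quadrature over a huge finite interval badly misjudges the mass near zero, while substituting x = u/(1-u) to map [0, infinity) onto [0, 1) behaves well:

```python
import math

def normal_pdf(x):
    # standard normal density, the same dnorm as above
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def simpson(f, a, b, n=10_000):
    # composite Simpson's rule with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# naive quadrature over a huge finite interval: the sample points never
# resolve the narrow region near 0 where all the mass is, so the answer
# is wildly wrong (the true value is 0.5)
naive = simpson(normal_pdf, 0, 1e10)

# substitute x = u/(1-u), dx = du/(1-u)^2, mapping [0, inf) onto [0, 1)
def transformed(u):
    x = u / (1 - u)
    return normal_pdf(x) / (1 - u) ** 2

# stop just short of u = 1, in the spirit of the .999 trick above
stable = simpson(transformed, 0, 1 - 1e-9)
```

The transformed integrand is smooth on [0, 1), so even this simple fixed-grid rule recovers 0.5 to high accuracy, which is the point of integrating on a finite domain in the first place.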

Friday, March 1, 2013

Robust Biopharmaceutical Pipeline Offers New Hope for Patients

Robust Biopharmaceutical Pipeline Offers New Hope for Patients 
January 31, 2013

According to a new report by the Analysis Group, the biopharmaceutical pipeline is innovative and robust, with a high proportion of potential first-in-class medicines and therapies targeting diseases with limited treatment options. The report, “Innovation in the Biopharmaceutical Pipeline: A Multidimensional View,” uses several different measures to look at innovation in the pipeline.

The report reveals that more than 5,000 new medicines are in the pipeline globally. Of these medicines in various phases of clinical development, 70 percent are potential first-in-class medicines, meaning they have a different mechanism of action from any existing medicine. Subsequent medicines in a class offer different profiles and benefits for patients, but first-in-class medicines provide exciting new approaches to treating disease. Potential first-in-class medicines make up as much as 80 percent of the pipeline in disease areas such as cancer and neurology.

Many of the new medicines in the pipeline are also for diseases for which no new therapies have been approved in the last decade and significant treatment gaps exist. For example, there are 158 potential medicines for ovarian cancer, 19 for sickle cell disease, 61 for amyotrophic lateral sclerosis, and 41 for small cell lung cancer.

The authors also found that personalized medicines account for an increasing proportion of the pipeline, and that the number of potential new medicines for rare diseases designated by the FDA averaged 140 per year over the last 10 years, compared to 64 per year in the previous decade.

The record 39 new drugs approved by the FDA in 2012 – a 16-year high – and the robust pipeline of drugs in development reflect the continuing commitment of the biomedical research community, including industry, academia, government researchers, patient groups, and others, to develop novel treatments that will advance our understanding of disease and improve patient outcomes.

New medicines have brought tremendous value to the U.S. health care system and the economy more broadly. But more progress is needed to address the most costly and challenging diseases facing patients in America and across the globe. As our population ages, the need will only grow. Researchers are working to deliver on the promise of unprecedented scientific advances. 

Saturday, January 5, 2013

We, Too, Are Violent Animals. By Jane Goodall, Richard Wrangham, and Dale Peterson

Those who doubt that human aggression is an evolved trait should spend more time with chimpanzees and wolves.
The Wall Street Journal, January 5, 2013, page C3

Where does human savagery come from? The animal behaviorist Marc Bekoff, writing in Psychology Today after last month's awful events in Newtown, Conn., echoed a common view: It can't possibly come from nature or evolution. Harsh aggression, he wrote, is "extremely rare" in nonhuman animals, while violence is merely an odd feature of our own species, produced by a few wicked people. If only we could "rewild our hearts," he concluded, we might harness our "inborn goodness and optimism" and thereby return to our "nice, kind, compassionate, empathic" original selves.

If only it were that simple. Calm and cooperative behavior indeed predominates in most species, but the idea that human aggression is qualitatively different from that of every other species is wrong.

The latest report from the research site that one of us (Jane Goodall) directs in Tanzania gives a quick sense of what a scientist who studies chimpanzees actually sees: "Ferdinand [the alpha male] is rather a brutal ruler, in that he tends to use his teeth rather a lot…a number of the males now have scars on their backs from being nicked or gashed by his canines…The politics in Mitumba [a second chimpanzee community] have also been bad. If we recall that: they all killed alpha-male Vincent when he reappeared injured; then Rudi as his successor probably killed up-and-coming young Ebony to stop him helping his older brother Edgar in challenging him…but to no avail, as Edgar eventually toppled him anyway."

A 2006 paper reviewed evidence from five separate chimpanzee populations in Africa, groups that have all been scientifically monitored for many years. The average "conservatively estimated risk of violent death" was 271 per 100,000 individuals per year. If that seems like a low rate, consider that a chimpanzee's social circle is limited to about 50 friends and close acquaintances. This means that chimpanzees can expect a member of their circle to be murdered once every seven years. Such a rate of violence would be intolerable in human society.
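The seven-year figure follows directly from the quoted numbers; a quick back-of-envelope check (the 271-per-100,000 rate and the 50-member circle are the figures from the paragraph above):

```python
# Expected years between violent deaths within a 50-member social circle,
# given a violent-death rate of 271 per 100,000 individuals per year.
rate_per_individual = 271 / 100_000          # annual risk per chimpanzee
circle_size = 50
deaths_per_year = rate_per_individual * circle_size
years_between_deaths = 1 / deaths_per_year
print(round(years_between_deaths, 1))        # roughly 7.4 years
```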

The violence among chimpanzees is impressively humanlike in several ways. Consider primitive human warfare, which has been well documented around the world. Groups of hunter-gatherers who come into contact with militarily superior groups of farmers rapidly abandon war, but where power is more equal, the hostility between societies that speak different languages is almost endless. Under those conditions, hunter-gatherers are remarkably similar to chimpanzees: Killings are mostly carried out by males, the killers tend to act in small gangs attacking vulnerable individuals, and every adult male in the society readily participates. Moreover, with hunter-gatherers as with chimpanzees, the ordinary response to encountering strangers who are vulnerable is to attack them.

Most animals do not exhibit this striking constellation of behaviors, but chimpanzees and humans are not the only species that form coalitions for killing. Other animals that use this strategy to kill their own species include group-living carnivores such as lions, spotted hyenas and wolves. The resulting mortality rate can be high: Among wolves, up to 40% of adults die from attacks by other packs.

Killing among these carnivores shows that ape-sized brains and grasping hands do not account for this unusual violent behavior. Two other features appear to be critical: variable group size and group-held territory. Variable group size means that lone individuals sometimes encounter small, vulnerable parties of neighbors. Having group territory means that by killing neighbors, the group can expand its territory to find extra resources that promote better breeding. In these circumstances, killing makes evolutionary sense—in humans as in chimpanzees and some carnivores.

What makes humans special is not our occasional propensity to kill strangers when we think we can do so safely. Our unique capacity is our skill at engineering peace. Within societies of hunter-gatherers (though only rarely between them), neighboring groups use peacemaking ceremonies to ensure that most of their interactions are friendly. In state-level societies, the state works to maintain a monopoly on violence. Though easily misused in the service of those who govern, the effect is benign when used to quell violence among the governed.

Under everyday conditions, humans are a delightfully peaceful and friendly species. But when tensions mount between groups of ordinary people or in the mind of an unstable individual, emotion can lead to deadly events. There but for the grace of fortune, circumstance and effective social institutions go you and I. Instead of constructing a feel-good fantasy about the innate goodness of most people and all animals, we should strive to better understand ourselves, the good parts along with the bad.

—Ms. Goodall has directed the scientific study of chimpanzee behavior at Gombe Stream National Park in Tanzania since 1960. Mr. Wrangham is the Ruth Moore Professor of Biological Anthropology at Harvard University. Mr. Peterson is the author of "Jane Goodall: The Woman Who Redefined Man."

Saturday, December 22, 2012

Novel Drug Approvals Strong in 2012

Dec 21, 2012

Over the past year, biopharmaceutical researchers' work has continued to yield innovative treatments that improve the lives of patients. In fiscal year (FY) 2012 (October 1, 2011 – September 30, 2012), the U.S. Food and Drug Administration (FDA) approved 35 new medicines, keeping pace with the previous fiscal year's approvals and representing one of the highest levels of FDA approvals in recent years.[i] For the calendar year, the FDA is on track to approve more new medicines than in any year since 2004.[ii]

A recent report from the FDA highlights groundbreaking medicines that treat diseases ranging from the very common to the most rare. Some are the first treatment option available for a condition; others improve care for treatable diseases.

Notable approvals in FY 2012 include:
  • A breakthrough personalized medicine for a rare form of cystic fibrosis;
  • The first approved human cord blood product;
  • A total of ten drugs to treat cancer, including the first treatments for advanced basal cell carcinoma and myelofibrosis and a targeted therapy for HER2-positive metastatic breast cancer;
  • Nine treatments for rare diseases; and
  • Important new therapies for HIV, macular degeneration, and meningitis.
The number of new drugs approved this year reflects the continuing commitment of the biomedical research community – from biopharmaceutical companies to academia to government researchers to patient groups – to advance basic science and translate that knowledge into novel treatments that will advance our understanding of disease and improve patient outcomes.

Building on these noteworthy approvals, we look to the new year where continued innovation is needed to leverage our growing understanding of the underpinnings of human disease and to harness the power of scientific research tools to discover and develop new medicines.

To learn more about the more than 3,200 new medicines in development visit

Wednesday, September 19, 2012

New Report Aims to Improve the Science Behind Regulatory Decision-Making


WASHINGTON, D.C. (September 18, 2012) – Scientists and policy experts from industry, government, and nonprofit sectors reached consensus on ways to improve the rigor and transparency of regulatory decision-making in a report being released today. The Research Integrity Roundtable, a cross-sector working group convened and facilitated by The Keystone Center, an independent public policy organization, is releasing the new report to improve the scientific analysis and independent expert reviews which underpin many important regulatory decisions. The report, Model Practices and Procedures for Improving the Use of Science in Regulatory Decision-Making, builds on the work of the Bipartisan Policy Center (BPC) in its 2009 report Science for Policy Project: Improving the Use of Science in Regulatory Policy.

"Americans need to have confidence in a U.S. regulatory system that encourages rational, science-based decision-making," said Mike Walls, Vice President of Regulatory and Technical Affairs for the American Chemistry Council (ACC), one of the sponsors of the Keystone Roundtable. "For this report, a broad spectrum of stakeholders came together to identify and help resolve some of the more troubling inconsistencies and roadblocks at the intersection of science and regulatory policy."

Controversies surrounding a regulatory decision often arise over the composition and transparency of scientific advisory panels and the scientific analysis used to support such decisions. The Roundtable's report is the product of 18 months of deliberations among experts from advocacy groups, professional associations and industry, as well as liaisons from several key Federal agencies. The report centers on two main public policy challenges that lead to controversy in the regulatory process: appointments of scientific experts, and the conduct of systematic scientific reviews.

The Roundtable's recommendations aim to improve the selection process for scientists on federal advisory panels and the scientific analysis used to draw conclusions that inform policy. The report seeks to maximize transparency and objectivity at every step in the regulatory decision-making process by informing the formation of scientific advisory committees and use of systematic reviews. The Roundtable's report offers specific recommendations for improving expert panel selection by better addressing potential conflicts of interest and bias. In addition, the report recommends ways to improve systematic reviews of scientific studies by outlining a step-by-step process, and by calling for clearer criteria to determine the relevance and credibility of studies.

"Conflicted experts and poor scientific assessments threaten the scientific integrity of agency decision making as well as the public's faith in agencies to protect their health and safety," said Francesca Grifo, Senior Scientist and Science Policy Fellow for the Union of Concerned Scientists. "Given the abundance of inflamed partisan dialogue around regulatory issues, it was refreshing to be a part of a rational and respectful roundtable. If adopted by agencies, the changes recommended in the report have the potential to reduce the ability of narrow interests to weaken regulations' power to protect the public good."

The Keystone Center and members of the Research Integrity Roundtable welcome additional conversation and dialogue on the matters explored and the recommendations presented in this report.

For more information, access the Roundtable's website at:

Thursday, September 13, 2012

The Rough Road to Progress Against Alzheimer's Disease

Sep 13, 2012

Two high-profile Alzheimer's drug development failures were announced in recent weeks, shining a spotlight on the challenges and frustrations inherent in Alzheimer's research. Alzheimer's disease is among the most devastating and costly illnesses we face, and the need for new treatments will only become more acute as our population ages.

Understanding a disease and developing medicines to treat it is always a herculean task, but Alzheimer's brings particular challenges and long odds. A new report from the Pharmaceutical Research and Manufacturers of America (PhRMA), "Researching Alzheimer's Medicines: Setbacks and Stepping Stones," examines the complexities of researching and treating Alzheimer's and drug development success rates in recent years.

Since 1998, there have been 101 unsuccessful attempts (or, as some call them, "failures") to develop drugs to treat Alzheimer's, according to the new analysis. In that time, three new medicines have been approved to treat the symptoms of Alzheimer's disease; for every research project that succeeded, 34 failed to yield a new medicine.

These “failures” may appear to be dead ends – a waste of time and resources – but to researchers they are both an inevitable and necessary part of making progress. These setbacks often contribute to eventual success by helping guide and redirect research on potential new drugs. In fact, the recent unsuccessful trials have provided a wealth of new information which researchers are now sifting through to inform their ongoing research.

Alzheimer’s disease is the sixth leading cause of death in the United States today, with 5.4 million people currently affected.[i]  By 2050, the number of Americans with the disease is projected to reach 13.5 million at a cost of over $1.1 trillion unless new treatments to prevent, arrest or cure the disease are found.[ii]  According to the Alzheimer’s Association a new medicine that delays the onset of the disease could change that trajectory and save $447 billion a year by 2050.

According to another new report, researchers are currently working on nearly 100 medicines in development for Alzheimer’s and other dementias. Although research is not a straight, predictable path, with continued dedication, we will make a difference for every person at risk of suffering from this terrible, debilitating disease.

[i] Alzheimer's Association, "Factsheet," March 2012.
[ii] Alzheimer's Association, 2012 Alzheimer's Disease Facts and Figures, Alzheimer's and Dementia, Volume 8, Issue 2.

Tuesday, May 1, 2012

Pharma: New Tufts Report Shows Academic-Industry Partnerships Are Mutually Beneficial

April 30, 2012 -

According to a new study by the Tufts Center for the Study of Drug Development, collaboration among organizations is becoming increasingly important to advancing basic research and developing new medicines. This study specifically explores the breadth and nature of partnerships between biopharmaceutical companies and academic medical centers (AMCs),[1] which are likely to play an increasingly important role in making progress against unmet medical needs.

In the study, researchers examine a subset of public-private partnerships, including more than 3,000 grants to AMCs from approximately 450 biopharmaceutical company sponsors, provided through 22 medical schools. Findings show that while it is generally accepted that these partnerships have become an increasingly common approach both to promote public health objectives and to produce healthcare innovations, their nature is expected to continue to evolve, and their full potential is yet to be realized.

Tufts researchers also found that the nature of these relationships is varied, ever-changing, and expanding. They often involve company and AMC scientists and other researchers working side-by-side on cutting-edge science, applying advanced tools and resources. This type of innovative research has enabled the United States to advance biomedical research in a number of areas, such as the development of personalized medicines and the understanding of rare diseases.

The report outlines the 12 primary models of academic-industry collaborations and highlights other emerging models, which reflect a shift in the nature of academic-industry relationships toward more risk- and resource-sharing partnerships. While unrestricted research support has generally represented the most common form of academic-industry collaboration, Tufts research found that this model is becoming less frequently used. A range of innovative partnership models are emerging, from corporate venture capital funds to pre-competitive research centers to increasingly used academic drug discovery centers.

These collaborations occur across all aspects of drug discovery and the partnerships benefit both industry and academia since they provide the opportunity for the leading biomedical researchers in both sectors to work together to explore new technologies and scientific discoveries. Such innovation in both the science and technology has the potential to treat the most challenging diseases and conditions facing patients today.

According to Tufts, “[t]he industry is funding and working collaboratively with the academic component of the public sector on basic research that contributes broadly across the entire spectrum of biomedical R&D, not just for products in its portfolio.” In conclusion, the report notes that in the face of an increasingly challenging R&D environment and overall global competition, we are likely to witness the continued proliferation of AMC-industry partnerships.

[1] C.P. Milne, et al., “Academic-Industry Partnerships for Biopharmaceutical Research & Development: Advancing Medical Science in the U.S.,” Tufts Center for the Study of Drug Development, April 2012.

Wednesday, November 30, 2011

Over 900 Biotechnology Medicines in Development, Targeting More than 100 Diseases

September 14, 2011

Biotechnology has opened the door to the discovery and development of new types of human therapeutics. Advancements in both cellular and molecular biology have allowed scientists to identify and develop a host of new products. These cutting-edge medicines provide significant clinical benefits, and in many cases, address therapeutic categories where no effective treatment previously existed.

Innovative, targeted therapies offer enormous potential to address unmet medical needs of patients with cancer, HIV/AIDS, and many other serious diseases. These medicines also hold the potential to help us meet the challenge of rising healthcare costs by avoiding treatment complications and making sure each patient gets the most effective care possible.

Approved biotechnology medicines already treat or help prevent heart attacks, stroke, multiple sclerosis, leukemia, hepatitis, congestive heart failure, lymphoma, kidney cancer, cystic fibrosis, and other diseases. These medicines use many different approaches to treat disease as do medicines currently in the pipeline.

America's biopharmaceutical research companies have 901 biotechnology medicines and vaccines in development to target more than 100 debilitating and life-threatening diseases, such as cancer, arthritis and diabetes, according to a new report by the Pharmaceutical Research and Manufacturers of America (PhRMA). The medicines in development—all in either clinical trials or under Food and Drug Administration review—include 353 for cancer and related conditions, 187 for infectious diseases, 69 for autoimmune diseases and 59 for cardiovascular diseases.

The biotechnology medicines now in development make use of these and other state-of-the-art approaches. For example:

•A genetically-modified virus-based vaccine to treat melanoma.
•A monoclonal antibody for the treatment of cancer and asthma.
•An antisense medicine for the treatment of cancer.
•A recombinant fusion protein to treat age-related macular degeneration.


Autoimmune Diseases: Autoimmunity is the underlying cause of more than 100 serious, chronic illnesses, targeting women 75 percent of the time. Autoimmune diseases have been cited in the top 10 leading causes of all deaths among U.S. women age 65 and younger, representing the fourth largest cause of disability among women in the United States.

Blood Disorders: Hemophilia affects 1 in 5,000 male births. About 400 babies are born with hemophilia each year. Currently, the number of people with hemophilia in the United States is estimated to be about 20,000, based on expected births and deaths since 1994.

Sickle cell disease is an inherited disease that affects more than 80,000 people in the United States, 98 percent of whom are of African descent.

Von Willebrand disease, the most common inherited bleeding condition, affects males and females about equally and is present in up to 1 percent of the U.S. population.

Cancer: Cancer is the second leading cause of death by disease in the United States—1 of every 4 deaths—exceeded only by heart disease. This year nearly 1.6 million new cancer cases will be diagnosed, 78 percent of which will be for individuals ages 55 and older.

Cardiovascular Diseases (CVD): CVD claims more lives each year than cancer, chronic lower respiratory diseases, and accidents combined. More than 82 million American adults—greater than one in three—had one or more types of CVD. Of that total, 40.4 million were estimated to be age 60 and older.

Diabetes: In the United States, 25.8 million people, or 8.3 percent of the population, have diabetes. An estimated 18.8 million have been diagnosed, but 7 million people are not aware that they have the disease. Another 79 million have pre-diabetes. Diabetes is the seventh leading cause of death in the United States.

Genetic Disorders: There are more than 6,000 known genetic disorders. Approximately 4 million babies are born each year, and about 3 percent-4 percent will be born with a genetic disease or major birth defect. More than 20 percent of infant deaths are caused by birth defects or genetic conditions (e.g., congenital heart defects, abnormalities of the nervous system, or chromosomal abnormalities).

Alzheimer’s Disease: In 2010 there were an estimated 454,000 new cases of Alzheimer’s disease. In 2008, Alzheimer’s was reported as the underlying cause of death for 82,476 people. Almost two-thirds of all Americans living with Alzheimer’s are women.

Parkinson's Disease: This disease has been reported to affect approximately 1 percent of Americans over age 50, but unrecognized early symptoms of the disease may be present in as many as 10 percent of those over age 60. Parkinson's disease is more prevalent in men than in women by a ratio of three to two.

Asthma: An estimated 39.9 million Americans have been diagnosed with asthma by a health professional within their lifetime. Females have consistently higher rates of asthma than males. African Americans are also more likely to be diagnosed with asthma over their lifetime.

Skin Diseases: More than 100 million Americans—one-third of the U.S. population—are afflicted with skin diseases.

Friday, October 21, 2011

The Case Against Global-Warming Skepticism

The Case Against Global-Warming Skepticism. By Richard A Muller
There were good reasons for doubt, until now.
WSJ, Oct 21, 2011

Are you a global warming skeptic? There are plenty of good reasons why you might be.

As many as 757 stations in the United States recorded net surface-temperature cooling over the past century. Many are concentrated in the southeast, where some people attribute tornadoes and hurricanes to warming.

The temperature-station quality is largely awful. The most important stations in the U.S. are included in the Department of Energy's Historical Climatology Network. A careful survey of these stations by a team led by meteorologist Anthony Watts showed that 70% of these stations have such poor siting that, by the U.S. government's own measure, they result in temperature uncertainties of between two and five degrees Celsius or more. We do not know how much worse the stations in the developing world are.

Using data from all these poor stations, the U.N.'s Intergovernmental Panel on Climate Change estimates an average global 0.64ºC temperature rise in the past 50 years, "most" of which the IPCC says is due to humans. Yet the margin of error for the stations is at least three times larger than the estimated warming.

We know that cities show anomalous warming, caused by energy use and building materials; asphalt, for instance, absorbs more sunlight than do trees. Tokyo's temperature rose about 2ºC in the last 50 years. Could that rise, and increases in other urban areas, have been unreasonably included in the global estimates? That warming may be real, but it has nothing to do with the greenhouse effect and can't be addressed by carbon dioxide reduction.

Moreover, the three major temperature analysis groups (the U.S.'s NASA and National Oceanic and Atmospheric Administration, and the U.K.'s Met Office and Climatic Research Unit) analyze only a small fraction of the available data, primarily from stations that have long records. There's a logic to that practice, but it could lead to selection bias. For instance, older stations were often built outside of cities but today are surrounded by buildings. These groups today use data from about 2,000 stations, down from roughly 6,000 in 1970, raising even more questions about their selections.

On top of that, stations have moved, instruments have changed and local environments have evolved. Analysis groups try to compensate for all this by homogenizing the data, though there are plenty of arguments to be had over how best to homogenize long-running data taken from around the world in varying conditions. These adjustments often result in corrections of several tenths of one degree Celsius, significant fractions of the warming attributed to humans.

And that's just the surface-temperature record. What about the rest? The number of named hurricanes has been on the rise for years, but that's in part a result of better detection technologies (satellites and buoys) that find storms in remote regions. The number of hurricanes hitting the U.S., even more intense Category 4 and 5 storms, has been gradually decreasing since 1850. The number of detected tornadoes has been increasing, possibly because radar technology has improved, but the number that touch down and cause damage has been decreasing. Meanwhile, the short-term variability in U.S. surface temperatures has been decreasing since 1800, suggesting a more stable climate.

Without good answers to all these complaints, global-warming skepticism seems sensible. But now let me explain why you should not be a skeptic, at least not any longer.

Over the last two years, the Berkeley Earth Surface Temperature Project has looked deeply at all the issues raised above. I chaired our group, which just submitted four detailed papers on our results to peer-reviewed journals. We have now posted these papers online to solicit even more scrutiny.

Our work covers only land temperature—not the oceans—but that's where warming appears to be the greatest. Robert Rohde, our chief scientist, obtained more than 1.6 billion measurements from more than 39,000 temperature stations around the world. Many of the records were short in duration, and to use them Mr. Rohde and a team of esteemed scientists and statisticians developed a new analytical approach that let us incorporate fragments of records. By using data from virtually all the available stations, we avoided data-selection bias. Rather than try to correct for the discontinuities in the records, we simply sliced the records where the data cut off, thereby creating two records from one.

We discovered that about one-third of the world's temperature stations have recorded cooling temperatures, and about two-thirds have recorded warming. The two-to-one ratio reflects global warming. The changes at the locations that showed warming were typically between 1-2ºC, much greater than the IPCC's average of 0.64ºC.

To study urban-heating bias in temperature records, we used satellite determinations that subdivided the world into urban and rural areas. We then conducted a temperature analysis based solely on "very rural" locations, distant from urban ones. The result showed a temperature increase similar to that found by other groups. Only 0.5% of the globe is urbanized, so it makes sense that even a 2ºC rise in urban regions would contribute negligibly to the global average.

What about poor station quality? Again, our statistical methods allowed us to analyze the U.S. temperature record separately for stations with good or acceptable rankings, and those with poor rankings (the U.S. is the only place in the world that ranks its temperature stations). Remarkably, the poorly ranked stations showed no greater temperature increases than the better ones. The most likely explanation is that while low-quality stations may give incorrect absolute temperatures, they still accurately track temperature changes.

When we began our study, we felt that skeptics had raised legitimate issues, and we didn't know what we'd find. Our results turned out to be close to those published by prior groups. We think that means that those groups had truly been very careful in their work, despite their inability to convince some skeptics of that. They managed to avoid bias in their data selection, homogenization and other corrections.

Global warming is real. Perhaps our results will help cool this portion of the climate debate. How much of the warming is due to humans and what will be the likely effects? We made no independent assessment of that.

Mr. Muller is a professor of physics at the University of California, Berkeley, and the author of "Physics for Future Presidents" (W.W. Norton & Co., 2008).

Wednesday, October 12, 2011

Personalized Therapies Mark Significant Leap Forward in Fight Against Cancer

October 12, 2011 

This year marks the 40th anniversary of the signing of the National Cancer Act of 1971. Indeed, the 12 million cancer survivors living in the U.S. today attest to the significant progress in cancer prevention and treatment we have made over the past decades. Despite the remarkable advances we have made there are still more than 550,000 men and woman who lose their battle to cancer each year.
Recently released scientific data demonstrate that the collective commitment to cancer research is unwavering and our knowledge of the biology of cancer and ability to treat it continues to expand. One promising trend in cancer research: drug developers are harnessing an improved understanding of the molecular basis of many types of cancer to develop therapies uniquely targeted to these pathways.
For example, a newly approved drug for lung cancer called crizotinib is targeted to a mutation in a gene called anaplastic lymphoma kinase, or ALK. Mutations in the ALK gene are found in approximately 5% of patients with non-small-cell lung cancer. In data presented at this year’s meeting of the American Society of Clinical Oncology (ASCO), 54% of patients who received crizotinib were still alive after two years compared to just 12% in a control group. Crizotinib received fast-track review by the U.S. Food and Drug Administration (FDA) and was approved in August ahead of the six-month priority review schedule.
Dramatic advances are being made in the treatment of the skin cancer melanoma as well. More than 60 drugs are currently in development for the disease and this year two new medicines have been approved – the first approvals for the disease in 13 years. The first, ipilimumab, was approved in March and was the first treatment ever approved by FDA to show a survival benefit for patients with metastatic melanoma. In August the second, a new personalized medicine called vemurafenib, was approved to treat this deadliest form of skin cancer. This drug, which is taken orally, selectively inhibits a mutated form of the BRAF kinase gene. The mutated gene is associated with increased tumor aggressiveness and decreased survival, and is found in approximately half of all malignant melanomas. Recently reported clinical trial results demonstrate that the medicine reduces the risk of death by 63%.
Personalized medicine holds great potential beyond these two select examples in lung cancer and melanoma. MD Anderson Cancer Center recently reported on the results of a large-scale clinical trial examining the effect of matching targeted therapies with specific gene mutations across many cancer types. According to the results of the study, patients who received a targeted therapy demonstrated a 27% response rate compared to 5% for those whose therapy was not matched. This clinical trial marks the largest examination of a personalized approach to cancer care to date, and as principal investigator Apostolia-Maria Tsimberidou, M.D., Ph.D. concludes, "This study suggests that a personalized approach is needed to improve clinical outcomes for patients with cancer."
As these and many other studies illustrate, a dramatic transformation in cancer diagnosis and treatment is underway. Therapies targeted to the genetic and molecular underpinnings of disease are being developed, and patient outcomes are improving as a result. The studies highlighted above only begin to scratch the surface of the remarkable potential of personalized, targeted therapies, but are an indication of the great reward of years of research and investment, as well as great promise for continued innovation in the years to come.

Thursday, September 29, 2011

Publication Bubble Threatens China's Scientific Advance

Chinese Academy of Sciences
Sep 26, 2011

As China's economy has soared to second place in the world, the country's scientific strength has also surged -- if only measured by the numbers.

Chinese researchers published more than 1.2 million papers from 2006 to 2010 -- second only to the United States but well ahead of Britain, Germany and Japan, according to data recently published by Elsevier, a leading international scientific publisher and data provider. This figure represents a 14 percent increase over the period from 2005 to 2009.

The number of published academic papers in science and technology is often seen as a gauge of national scientific prowess.

But these impressive numbers mask an uncomfortable fact: most of these papers are of low quality or have little impact. Citations per article (CPA) is a standard measure of the quality and impact of papers. China's CPA is 1.47, the lowest figure among the top 20 publishing countries, according to Elsevier's Scopus citation database.
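For reference, CPA is a simple ratio. The sketch below uses made-up counts chosen only to reproduce the 1.47 figure quoted here, not Elsevier's actual totals:

```python
def cpa(total_citations: int, total_articles: int) -> float:
    """Citations per article: total citations divided by total papers."""
    return total_citations / total_articles

# Hypothetical counts scaled to give the CPA reported in the text.
print(f"CPA = {cpa(1_470, 1_000):.2f}")  # 1.47
```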

China's CPA dropped from 1.72 for the period from 2005 to 2009, and is now below emerging countries such as India and Brazil. Among papers lead-authored by Chinese researchers, most citations were by domestic peers and, in many cases, were self-citations.

"While quantity is an important indicator because it gives a sense of scientific capacity and the overall level of scientific activity in any particular field, citations are the primary indicator of overall scientific impact," said Daniel Calto, Director of SciVal Solutions at Elsevier North America.

Calto attributed China's low CPA to a "dilution effect."

"When the rise in the number of publications is so rapid, as it has been in China -- increasing quantity does not necessarily imply an overall increase in quality," said Calto.

He noted the same pattern in a variety of rapidly emerging research countries such as India, Brazil, and earlier in places like the Republic of Korea.

"Chinese researchers are too obsessed with SCI (Science Citation Index), churning out too many articles of low quality," said Mu Rongping, Director-General of the Institute of Policy and Management at the Chinese Academy of Sciences, China's major think tank.

SCI is one of the databases used by Chinese researchers to look up their citation performance. The alternative, Scopus, provides wider coverage worldwide.

"Chinese researchers from a wide range of areas and institutions are vying for publication, as it is a key criterion for academic appraisal in China, if not the only one. As a result, the growth of quality pales in comparison to that of quantity," said Mu, an expert on China's national science policy and competitiveness.

On the other hand, China also falls behind the United States in multidisciplinary research, which is a core engine for scientific advance and research excellence.

From 2006 to 2010, China published 1,229,706 papers while the United States churned out 2,082,733. According to a new metric introduced by Elsevier's Spotlight research assessment solution, China generated 885 competencies while the United States had 1,817.

In other words, China's total research output is more than half that of the United States, while the number of competencies showing China's strength in multidisciplinary research is less than half that of the United States.

Cong Cao, an expert on China's science and technology, put it more bluntly in an article he wrote: "When the paper bubble bursts, which will happen sooner or later, one may find that the real situation of scientific research in China probably is not that rosy."

China has been investing heavily in scientific research and technological development in recent years to strengthen its innovative capacity. The proportion of GDP spent on R&D grew from 0.9 percent in 2000 to 1.4 percent in 2007, according to the World Bank.

An IMF forecast in 2010 says China now ranks second globally in R&D spending. The IMF calculates China's R&D expenditure at 150 billion U.S. dollars when based on Purchasing Power Parity, a widely used economic concept that attempts to equalize differences in standard of living among countries.

By this measure, China surpassed Japan in R&D spending in 2010.

Many see China's huge investment in R&D as the momentum behind the country's explosive increase in research papers.

"Getting published is, in some ways, an improvement over being unable to get published," Mu said. "But the problem is, if the papers continue to be of low quality for a long time, it will be a waste of resources."

In China, academic papers play a central role in the academic appraisal system, which is closely related to degrees and job promotions.

While acknowledging the importance of academic papers in research, Mu believes a more balanced appraisal system should be adopted. "This is a problem with science management. If we put too much focus on the quantity of research papers, we leave the job of appraisal to journal editors."

In China, the avid pursuit of publishing sometimes gives rise to scientific fraud. In the most high-profile case in recent years, two lecturers from central China's Jinggangshan University were sacked in 2010 after a journal that published their work admitted 70 papers they wrote over two years had been falsified.

"This is one of the worst cases. These unethical people not only deceived people to further their academic reputations, they also led academic research on the wrong path, which is a waste of resources," Mu said.

A study done by researchers at Wuhan University in 2010 says more than 100 million U.S. dollars changes hands in China every year for ghost-written academic papers. The market in buying and selling scientific papers has grown five-fold in the past three years.

The study says Chinese academics and students often buy and sell scientific papers to swell publication lists and many of the purported authors never write the papers they sign. Some master's or doctoral students are making a living by churning out papers for others. Others mass-produce scientific papers in order to get monetary rewards from their institutions.

A 2009 survey by the China Association for Science and Technology (CAST) of 30,078 people doing science-related work shows that nearly one-third of respondents attributed fraud to the current system that evaluates researchers' academic performance largely on the basis of how many papers they write and publish.

Despite rampant fraud, China will continue to inject huge money into science. According to the latest national science guideline, which was issued in 2006 by the State Council, the investment in R&D will account for 2.5 percent of GDP in 2020.

"If China achieves its stated goal of investing 2.5 percent of its GDP in R&D in 2020, and sustains its very fast economic growth over the next decade, it would quite likely pass the U.S. in terms of total R&D investment sometime in the late 2010s," said Calto, adding that it is also quite likely that at some point China will churn out more papers than the United States.

According to Calto, China does mostly applied research, which helps drive manufacturing and economic growth, while basic research only accounts for 6 percent, compared with about 35 percent in Germany, Britain, and the United States, and 16 percent in Japan.

"In the long term, in order to really achieve dominance in any scientific area, I think it will be necessary to put significant financial resources into fundamental basic research -- these are the theoretical areas that can drive the highest level of innovation," Calto said. (Xinhua)

Friday, January 15, 2010

Monsanto Response: de Vendomois (Seralini) et al. 2009


(A Comparison of the Effects of Three GM Corn Varieties on Mammalian Health)
Regarding: MON 863, MON 810 and NK603

Assessment of Quality and Response to Technical Issues


  • The laboratory findings primarily related to kidney and liver function reflect the large proportion of tests applicable to these organ systems. This is not a defect in the design of the study, but simply the reality of biochemical testing - there are good clinical tests of these systems which are reflected in blood chemistry. The function of other organ systems is assessed primarily via functional assessment, organ weight, and organ pathology rather than through blood or urine biochemical assays.

  • The authors apply a variety of non-standard statistical approaches. Each unique statistical approach and each comparison performed increases the number of statistically significant findings which will occur by chance alone. Thus, the fact that de Vendomois et al. find more statistically significant findings than reported in the Monsanto analysis is entirely expected. The question, which de Vendomois et al. fail to address, is whether these non-routine statistical tests contribute anything of value to a safety assessment. Do they help to ascertain whether there are biologically and toxicologically significant events? In our opinion (consistent with prior reviews of other publications from Seralini and colleagues) they do not.

  • The authors undertake a complex “principal component analysis” to demonstrate that kidney and liver function tests vary between male and female rodents. This phenomenon is well-recognized in rodents (and, for that matter, humans) as a matter of gender difference. (This does not indicate any toxic effect, and is not claimed to do so by the authors, but may be confusing to those not familiar with the method and background.)

  • De Vendomois et al. appear to draw from this a conclusion that there is a gender difference in susceptibility to toxic effects. While such differences are possible, no difference in susceptibility can be demonstrated by gender differences in normal baseline values. Utilizing this alleged difference in gender susceptibility, the authors proceed to identify statistically significant, but biologically meaningless differences (see next bullet) and to evaluate the extent to which these changes occur in males versus females.

  • De Vendomois et al. fail to consider whether a result is biologically meaningful, based on the magnitude of the difference observed, whether the observation falls outside of the normal range for the species, whether the observation falls outside the range observed in various reference materials, whether there is evidence of a dose-response, and whether there is consistency between sexes and consistency among tested GM materials. These failures are similar to those observed in previous publications by the same group of authors.

  • While the number of tests that are statistically significant in males versus females would ON AVERAGE be equal in a random distribution, this ratio will fluctuate statistically. The authors have not, in fact, demonstrated any consistent susceptibility between genders, nor have they demonstrated that the deviations from equality in regards to numbers of positive tests fall outside of expectation. For example, if you flip a coin 10 times, on average you will get 50% heads and 50% tails, but it is not unusual to get 7 heads and 3 tails in a particular run of 10 tosses. If you do this over and over and consistently get on average 7 heads and 3 tails, then there may be something different about the coin that is causing this unexpected result. However, de Vendomois et al. have not shown any such consistent difference.

  • While de Vendomois et al. criticize the lack of testing for cytochrome P450, such testing is not routinely a part of any toxicity testing protocol. These enzymes are responsible for (among other things) the metabolism of chemicals from the environment, and respond to a wide variety of external stimuli as a part of their normal function. There is no rational reason to test for levels of cytochromes in this type of testing, as they do not predict pathology. De Vendomois et al. could have identified thousands of different elements, enzymes and proteins that were not measured but this does not indicate a deficiency in the study design since there is no logical basis for testing them.

  • While de Vendomois et al. criticize the occurrence of missing laboratory values, the vast majority of missing values are accounted for by missing urine specimens (which may or may not be obtainable at necropsy) or by a small number of animals found in a deceased condition (which are not analyzed due to post-mortem changes). Overall, despite the challenges in carrying out such analyses on large numbers of animals, almost 99% of values were reported.

  • The statistical power analysis done by de Vendomois et al. is invalid, as it is based upon non-relevant degrees of difference and upon separate statistical tests rather than the ANOVA technique used by Monsanto (and generally preferred). The number of animals used is consistent with generally applicable designs for toxicology studies.

  • Prior publications by Seralini and colleagues in both the pesticide and GM crops arenas have been found wanting in both scientific methodology and credibility by numerous regulatory agencies and independent scientific panels (as detailed below).

  • In the press release associated with this publication, the authors denounce the various regulatory and scientific bodies which have criticized prior work, and claim, in advance, that these agencies and individuals suffer from incompetency and/or conflict of interest. In effect, the authors claim that their current publication cannot be legitimately criticized by anyone who disagrees with their overall opinions, past or present.
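The coin-toss point made in the bullets above is exact binomial arithmetic: a 7-3 split in a single run of 10 tosses is quite common, which is why one lopsided male/female ratio of significant tests carries little evidential weight. A quick check:

```python
from math import comb

def tail_prob(n: int, k: int) -> float:
    """P(X >= k) for X ~ Binomial(n, 0.5): exact, no simulation."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

p_seven_heads = tail_prob(10, 7)   # at least 7 heads
p_lopsided = 2 * p_seven_heads     # a 7-3 split or worse, in either direction
print(f"P(>=7 heads in 10 tosses) = {p_seven_heads:.3f}")  # 0.172
print(f"P(a 7-3 split or worse)   = {p_lopsided:.3f}")     # 0.344
```

Roughly one run in three of 10 fair tosses lands 7-3 or worse, so a single lopsided split is unremarkable.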

To summarize, as with the prior publication of Seralini et al. (2007), de Vendomois et al. (2009) uses non-traditional and inappropriate statistical methods to reach unsubstantiated conclusions in a reassessment of toxicology data from studies conducted with MON 863, MON 810 and NK603. Not surprisingly, they assert that they have found evidence for safety concerns with these crops but these claims are based on faulty analytical methods and reasoning and do not call into question the safety findings for these products.

Response to de Vendomois et al. 2009:

In the recent publication “A comparison of the effects of three GM corn varieties on mammalian health”, (de Vendomois et al., 2009), the authors claim to have found evidence of hepatorenal toxicity through reanalysis of the data from toxicology studies with three biotechnology-derived corn products (MON 863, MON 810 and NK603).

This theme of hepatorenal toxicity was raised in a previous publication on MON 863 by the same authors (Seralini et al., 2007). Scientists who reviewed the 2007 publication did not support that paper’s conclusions on MON 863 and the review addressed many deficiencies in the statistical reanalysis (Doull et al., 2007; EFSA, 2007a; EFSA, 2007b; Bfr, 2007; AFFSA, 2007, Monod, 2007, FSANZ, 2007). These reviews of the 2007 paper confirmed that the original analysis of the data by various regulatory agencies was correct and that MON 863 grain is safe for consumption based on the weight of evidence that includes a 90-day rat feeding study.

De Vendomois et al., (2009) elected to ignore the aforementioned expert scientific reviews by global authorities and regulatory agencies and again have used non-standard and inappropriate methods to reanalyze toxicology studies with MON 863, MON 810 and NK603. This is despite more than 10 years of safe cultivation and consumption of crops developed through modern biotechnology that have also completed extensive safety assessment and review by worldwide regulatory agencies, in each case reaching a conclusion that these products are safe.

General Comments:

De Vendomois et al. (2009) raise a number of general criticisms of the Monsanto studies that are worthy of mention before commenting on the analytical approach used by de Vendomois et al. and pointing out a number of examples where the application of their approach leads to misinterpretation of the data.

  1. Testing for cytochrome P450 levels is not a part of any standard toxicology study, nor do changes in P450 levels per se indicate organ pathology, as the normal function of these enzymes is to respond to the environment. Testing of cytochrome P450 levels is not part of any recognized standard for laboratory testing.

  2. De Vendomois et al. note that the “effects” assessed by laboratory analysis were “mostly associated with the kidney and liver”. However, a review of the laboratory tests (annex 1 of paper), ignoring weight parameters, will indicate that measures of liver and kidney function are disproportionately represented among the laboratory tests. Urinary electrolytes are also particularly variable (see below). The apparent predominance of statistical differences in liver and kidney parameters is readily explained by the testing performed.

  3. As noted by the authors, findings are largely within the normal range for parameters even if statistically significant, are inconsistent among GM crops, and are inconsistent between sexes. Despite this, and the lack of associated illness or organ pathology, the authors choose to interpret small random variations typically seen in studies of this type as evidence of potential toxicity.

  4. The authors criticize the number of missing laboratory data, and indicate that the absence of values is not adequately explained. We would note that the bulk of missing values relate to urinalysis. The ability to analyze urine depends upon the availability of sufficient quantities of urine in the bladder at the time of necropsy, and thus urine specimens are often missing in any rodent study. Organ weights and other studies are generally not measured on animals found deceased (due to post-mortem changes the values are not considered valid). Each study consisted of 200 animals, or 600 possible data determinations (counting urine, hematology, or organ weights + blood chemistry as one “type” as in the paper).

    1. NK 603- of 600 possible data determinations, 28 values were missing. 20 were due to missing urines and 2 were missing weights and biochemical analysis due to animals found dead (1 GM, 1 reference). Of the remaining 6 values (hematology), only 1 value is from the GM-fed group.

    2. MON 810- Of 600 possible determinations, 24 values were missing. 18 were due to missing urines and 1 value was missing (weight and biochemical analysis) due to an animal found dead (reference group). Of the remaining 5 values (hematology), 2 are from the GM-fed group and 3 from various reference groups.

    3. MON 863- Of 600 possible determinations, 25 values were missing. 13 were due to missing urines. 9 hematology analyses (3 GMO-fed) and 3 organ weight/biochemical analyses due to deaths (1 GMO) were reported as missing (not deceased).

    4. These are large and complex studies. Ignoring urines and the small number of animals found deceased (which occurs in any large study), 20 data sets (17 hematology, 3 organ weights/chemistry) are missing from a possible 1800 sets, i.e.- almost 99% of data were present, despite the technical difficulties inherent in handling large numbers of animals.

  5. The “findings” in this study are stated to be due to “either the recognized mutagenic effects of the GM transformation process or to the presence of… novel pesticides.” We would note that there is no evidence for “mutagenic effect” other than stable gene insertion in the tested products. We would also note that while the glyphosate tolerant crop (NK603) may indeed have glyphosate residues present, this is not a “novel” pesticide residue. The toxicity of glyphosate has been extensively evaluated, and the “effects” with NK603 cannot be explained on this basis. Similarly, other available data regarding the Bt insecticidal proteins in MON 810 and MON 863 do not support the occurrence of toxic effects due to these agents.

Statistical Analysis Approach:

De Vendomois et al., (2009) used a flawed basis for risk assessment, focusing only on statistical manipulation of data (sometimes using questionable methods) and ignoring consideration of other relevant biological information. By focusing only on statistical manipulations, the authors found more statistically significant differences for the data than was previously reported and claimed that this is new evidence for adverse effects. As is well documented in toxicology textbooks (e.g., Casarett and Doull, Toxicology, The Basic Science of Poisons, Klaassen Ed., The McGraw-Hill Companies, 2008, Chapter 2) and other resources mentioned below, interpretation of study findings involves more than statistical manipulations; one has to consider data in the context of the biology of the animal. This subject was addressed by a peer review panel of internationally recognized toxicologists and statisticians who reviewed the Seralini et al., (2007) publication. They state in Doull et al. (2007):

“The Panel concludes that the Seralini et al. (2007) reanalysis provided no evidence to indicate that MON 863 was associated with any adverse effects in the 90-day rat study (Covance, 2002; Hammond et al., 2006). In each case the statistical findings reported by both Monsanto (Covance, 2002; Hammond et al., 2006) or Seralini et al. (2007) were considered to be unrelated to treatment or of no biological or clinical importance because they failed to demonstrate a dose–response relationship, reproducibility over time, association with other relevant changes (e.g., histopathology), occurrence in both sexes, difference outside the normal range of variation, or biological plausibility with respect to cause-and-effect.”

There are numerous ways to analyze biological data and a multitude of statistical tools. To provide consistency in the way that toxicology data are analyzed, regulatory agencies have provided guidance regarding the statistical methods to be used. The aforementioned peer review panel stated:

“The selection of the types of statistical methods to be performed is totally dependent upon the design of the toxicology study, and on the questions expected to be answered, as discussed in the US FDA Redbook (FDA, 2000). Hypothesis testing statistical analyses as described by WHO (1987), Gad (2001), and OECD (2002b) include those tests that have been traditionally conducted on data generated from rodent 90-day and chronic toxicity studies. These are also the procedures that have been widely accepted by regulatory agencies that review the results of subchronic and/or chronic toxicity tests as part of the product approval process. There are many other statistical tests available such as 2k factorial analysis when k factors are evaluated, each at two levels, specific dose–response contrasts, and generalized linear modeling methods, but these methods typically have not been used to evaluate data from toxicology studies intended for regulatory submissions”

Commenting on the statistical analysis used originally to analyze the toxicology data for MON 863 conducted at Covance labs, the expert panel also stated:

“All of these statistical procedures are in accordance with the principles for the assessment of food additives set forth by the WHO (1987). Moreover, these tests represent those that are used commonly by contract research organisations throughout the world and have generally been accepted by FDA, EFSA, Health Canada, Food Standards Australia New Zealand (FSANZ), and the Japanese Ministry of Health and Welfare. In fact, EFSA (2004) in their evaluation of the Covance (2002) study noted that it ‘‘was statistically well designed’’.”

De Vendomois et al., (2009) selected non-traditional statistical tests to assess the data and failed to consider the entire data set in order to draw biologically meaningful conclusions. Their limited approach generated differences that, while being statistically significant, are insufficient to draw conclusions without considering the broader dataset to determine whether the findings are biologically meaningful. In Doull et al., (2007) the expert panel clearly stated:

“In the conduct of toxicity studies, the general question to be answered is whether or not administration of the test substance causes biologically important effects (i.e., those effects relevant to human health risk assessment). While statistics provide a tool by which to compare treated groups to controls; the assessment of the biological importance of any ‘‘statistically significant’’ effect requires a broader evaluation of the data, and, as described by Wilson et al. (2001), includes:

  • Dose-related trends
  • Reproducibility
  • Relationship to other findings
  • Magnitude of the differences
  • Occurrence in both sexes.”

Doull et al., (2007) raised questions regarding the appropriateness of some of the statistical analyses described in Seralini et al., (2007):

“The statistical analyses of the serum biochemistry, haematological, and clinical chemistry data conducted by Seralini et al. (2007) and by Monsanto were similar in concept as both used testing for homogeneity of variance and various pair-wise contrasts. The principal difference was that Seralini et al. (2007) did not use an ANOVA approach. The use of t-tests in the absence of multiple comparison methods may have had the effect of increasing the number of statistically significant results (emphasis added). The principal difference between the Monsanto and Seralini et al. (2007) analyses was in the evaluation of the body weight data. Monsanto used ‘traditional’ ANOVA and parametric analyses while Seralini et al. (2007) used the Gompertz model to estimate body weight as a function of time. The Gompertz model assumes equal variance between weeks, an assumption unlikely to hold with increasing body weights. While not inappropriate, as previously stated the Gompertz model does have limitations with respect to the interpretation of the results since it was not clear from the published paper whether Seralini et al. (2007) accounted for the changing variance and the correlated nature of the body weight data over time (emphasis added).”

Based on the expert panel conclusions in Doull et al., (2007), the statistical analysis used by, and the conclusions reached in, the de Vendomois et al. (2009) publication need to be carefully assessed. The authors' use of inappropriate statistical methods in the examples below illustrates how inadequate analyses underpin the false and misleading claims found in de Vendomois et al., (2009).

Inappropriate use of False Discovery Rate method. De Vendomois et al., (2009) conducted t-test comparisons among the test and control and then applied the False Discovery Rate (FDR) method to adjust the p-values and hence the number of false positives. The FDR method is similar to many of the multiple comparison procedures that are available for controlling the family-wise error rate. Monsanto did not use any procedures for controlling the percentage of false positives for two reasons: (1) preplanned comparisons were defined that were pertinent to the experimental design and purpose of the analysis, i.e., it was not necessary to do all pairwise comparisons among the test, control, and reference substances and; (2) to maintain transparency and to further investigate all statistically significant differences using the additional considerations (Wilson et al, 2001) detailed above.
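For context, the False Discovery Rate adjustment referred to here is usually the Benjamini-Hochberg step-up procedure. Assuming that variant (the paper does not specify) and purely illustrative p-values, a self-contained sketch:

```python
def benjamini_hochberg(p_values, q=0.05):
    """Benjamini-Hochberg step-up procedure.

    Sort the m p-values ascending, find the largest rank k such that
    p_(k) <= (k / m) * q, and reject the hypotheses with the k smallest
    p-values. Returns a reject/keep flag per input p-value.
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k_max = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= rank / m * q:
            k_max = rank
    reject = [False] * m
    for rank, idx in enumerate(order, start=1):
        if rank <= k_max:
            reject[idx] = True
    return reject

# Ten illustrative p-values: only the two smallest survive the adjustment.
ps = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205, 0.212, 0.216]
print(benjamini_hochberg(ps))
```

Note how several p-values below the naive 0.05 threshold are rejected once the number of comparisons is taken into account, which is the entire purpose of the correction.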

Inappropriate power assessment method. De Vendomois et al., (2009) claim that the Monsanto study had low power and support their claim with an inappropriate power assessment that is based on a simple t-test comparison of the test and control using an arbitrary numerical difference. This type of power assessment is incorrect because Monsanto used a one-way ANOVA, not a simple t-test. The appropriate power assessment should be relative to the ANOVA and not a simple t-test. In addition, an appropriate power assessment should be done relative to the numerical difference that constitutes a biologically meaningful difference.
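The distinction drawn here matters in practice. The sketch below uses a textbook normal-approximation power formula for a two-sample comparison (not the actual method used by either party; alpha fixed at 0.05) to show how strongly power depends on the size of the difference deemed biologically meaningful:

```python
from math import sqrt, erf

def phi(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def two_sample_power(effect_size: float, n_per_group: int) -> float:
    """Approximate power of a two-sided, alpha = 0.05 two-sample comparison.

    effect_size is the group difference in units of the common standard
    deviation (Cohen's d). Normal approximation; no small-sample t correction.
    """
    z_crit = 1.959964  # two-sided 5% normal critical value
    ncp = effect_size * sqrt(n_per_group / 2.0)  # noncentrality parameter
    return phi(ncp - z_crit) + phi(-ncp - z_crit)

# Power climbs steeply as the difference considered meaningful grows:
for d in (0.5, 1.0, 1.5):
    print(f"d = {d}: approximate power with 10/group = {two_sample_power(d, 10):.2f}")
```

Choosing an arbitrarily small target difference, as the critique is said to have done, will always make a study look underpowered; the assessment only means something relative to a biologically meaningful difference.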

Other non-traditional statistical methods. De Vendomois et al., (2009) also claim that Monsanto did not apply the described statistical methods and simply used a one-way ANOVA and contrasts. This is a false statement since Monsanto used Levene's test to check for homogeneity of variances, and if the variances were different the one-way ANOVA was conducted on the ranks rather than the original observations, i.e., the Kruskal-Wallis test.
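The sequence described, a homogeneity-of-variance check followed by either a one-way ANOVA or a rank-based fallback, can be sketched with standard-library Python. Levene's statistic is itself an ANOVA F computed on absolute deviations from group means; critical-value lookup and the Kruskal-Wallis branch are omitted, and the data are invented for illustration:

```python
from statistics import mean

def one_way_anova_f(groups):
    """One-way ANOVA F statistic: between-group MS over within-group MS."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = mean(x for g in groups for x in g)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

def levene_w(groups):
    """Levene's statistic: an ANOVA F on absolute deviations from group means."""
    deviations = [[abs(x - mean(g)) for x in g] for g in groups]
    return one_way_anova_f(deviations)

control = [5.1, 5.4, 5.0, 5.3, 5.2]  # invented values for illustration
treated = [5.2, 5.5, 5.1, 5.4, 5.3]

w = levene_w([control, treated])
f = one_way_anova_f([control, treated])
# If W exceeds the chosen critical value, variances are heterogeneous and the
# comparison should fall back to ranks (Kruskal-Wallis); otherwise the plain
# ANOVA F applies.
print(f"Levene W = {w:.3f}, ANOVA F = {f:.3f}")
```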

Specific examples of flawed analysis and conclusions.

De Vendomois et al., (2009) have compared the results across toxicology feeding studies with three different biotech crops using some of the same statistical tests that were used in the previous publication (Seralini et al, 2007). Each of these biotech crops (MON 863, MON 810, NK603) is the result of a unique molecular transformation and expresses different proteins. De Vendomois et al., (2009) claim that all three studies provide evidence of hepatorenal toxicity by their analysis of clinical pathology data only. One might anticipate, if these claims were true, that similar changes in clinical parameters could be observed across the three studies and that the changes observed would be diagnostic for kidney and liver toxicity and would be accompanied by cytopathological indications of kidney or liver disease. However, as shown in Tables 1 and 2 in de Vendomois et al., (2009), the statistically significant “findings” in clinical parameters are different across studies, suggesting that these are more likely due to random variation (Type I errors) than to biologically meaningful effects. Moreover, as indicated below, there is no evidence of any liver and kidney toxicity in these studies, particularly in relation to other data included in the original study reports that are not mentioned in de Vendomois et al., (2009).

NK603 - Kidney

For the NK603 study (Table 1), de Vendomois et al., (2009) listed data from some of the measured urinary electrolytes, urinary creatinine, blood urea nitrogen and creatinine, phosphorous and potassium as evidence of renal toxicity. It has been pointed out that urinalysis may be important if one is testing nephrotoxins (Hayes, 2008), particularly those that produce injury to the kidney. However, it has also been noted that “Urinalysis is frequently of limited value because the collection of satisfactory urine samples is fraught with technical difficulties” (Hayes, 2008). There was considerable variability in some of the urinary electrolytes, as indicated by high standard deviations, which may be attributed to the technical difficulties in collecting satisfactory urine samples.
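The practical consequence of high standard deviations can be illustrated with the coefficient of variation (CV = standard deviation / mean). The values below are hypothetical, chosen only to contrast a tightly regulated blood analyte with a highly variable urine analyte:

```python
# Coefficient of variation for a tightly controlled analyte vs a highly
# variable one. All measurement values are illustrative, not study data.
import statistics

blood_sodium = [138, 140, 139, 141, 137, 140]   # tightly regulated
urine_sodium = [60, 180, 95, 220, 40, 150]      # highly variable

def cv(values):
    """Coefficient of variation as a percentage (sample SD / mean)."""
    return 100 * statistics.stdev(values) / statistics.mean(values)

print(round(cv(blood_sodium), 1))
print(round(cv(urine_sodium), 1))
```

When the CV is large, small mean differences between groups are easily produced by sampling noise alone, so statistically flagged differences in such parameters warrant extra scrutiny.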

Examining the original kidney data for NK603, the urine phosphorous values are generally comparable for 11% and 33% NK603 males and the 33% reference groups, while the 33% controls are generally lower than all groups. The 33% control females also had slightly lower phosphorous values, but they were not statistically different from those of 33% NK603 females, unlike in males, where the 33% NK603 value was statistically significantly higher than that of the 33% controls. When the blood phosphorous values were compared, there was a slight but statistically significant reduction in 33% NK603 males compared to controls (but not references) at week 5, and there were no statistically significant differences in NK603 male and female blood phosphorous levels compared to controls at the end of the 14-week study.

There were no statistically significant differences in urine sodium in males at weeks 5 and 14 in the original analysis (in contrast to the reanalysis reported by de Vendomois et al., 2009). As with phosphorous, there was considerable variability in urine sodium across all groups. The same results were observed for females. In addition, blood sodium levels for 11% and 33% NK603 males and females were not different from controls. It is apparent when reviewing the data in the table below that the measured urinary electrolytes for the NK603 groups were similar to the values for the reference, conventional (i.e., non-GM) corn groups.

Looking at the other parameters listed in Table 1 (de Vendomois et al., 2009), while there was a slight increase in urine creatinine clearance in 33% NK603 males at the interim bleed at week 5 compared to the controls and reference population, this was not apparent at the end of the study, when the rats had been exposed to the test diets longer. There was no difference in urine creatinine levels in males. Blood creatinine levels were slightly but statistically significantly lower in high dose males compared to controls at week 5; increases in creatinine, not reductions, are associated with renal toxicity. The same response was observed for serum urea nitrogen: a slight reduction at week 5 and no differences in male blood creatinine or urea nitrogen at the end of the study. “BUN, like creatinine, is not a very sensitive indicator of renal injury” (Hayes, 2008). Thus the small differences in BUN and serum and urine creatinine are not suggestive of kidney injury.

There was no evidence of changes in other urinary parameters such as pH, specific gravity, protein, sodium, calcium, chloride and volume, or in kidney weights. The most important factor relating to the kidney that de Vendomois et al., (2009) did not consider was the normal microscopic appearance of the kidneys of rats fed NK603 grain. There was no evidence of treatment-related renal pathologic changes, a critical biological finding that the authors ignored in their risk assessment but that an objective, scientific assessment would have considered.

MON 810 - Kidney

If Table 2 in de Vendomois et al., (2009) is examined, none of the aforementioned “findings” listed in Table 1 for NK603 recur except blood urea nitrogen. Kidney weight data were listed, but these were not included in Table 1 for NK603. If the hypothesis of renal toxicity were correct, it would be scientifically reasonable to expect at least some of the same “findings” in both studies. The absence of common findings supports the original conclusions reached by the investigative laboratory (and supported by regulatory agency review of these studies) that there is no evidence of kidney toxicity in rats fed either MON 810 or NK603 grain. Indeed, the data alleged by de Vendomois et al., (2009) to be indicative of kidney findings are more attributable to the random variation that is commonly observed in rodent toxicology studies, as discussed in publications such as Doull et al., (2007).

In Table 2, de Vendomois et al., (2009) highlight absolute kidney weights for males as being suggestive of kidney toxicity. The scientific basis for this assertion is unclear because there are no differences in male or female kidney weights (absolute, or relative to body weight or brain weight), as shown in the table below:

De Vendomois et al., (2009) also lists blood urea nitrogen as indicative of kidney toxicity, yet there were no statistically significant differences in either MON 810 males or females when compared to controls (Hammond et al., 2006). In the absence of any other changes in urine or blood chemistry parameters that could be suggestive of kidney toxicity, and in consideration of the normal histologic appearance of kidneys of rats fed MON 810 grain, there is no scientific data to support the assertion of kidney toxicity in MON 810 fed rats.

NK603/MON 810 liver

Although de Vendomois et al., (2009) list “findings” in Tables 1 and 2 as being indicative of liver toxicity, analysis of these “findings” does not support this conclusion. There are no common liver “findings” between the two studies: for NK603, de Vendomois et al., (2009) listed liver weights and serum alkaline phosphatase; for MON 810, serum albumin and the albumin/globulin ratio. For NK603, the original analysis did not demonstrate statistical differences in absolute or relative (to body or brain weight) liver weights for NK603 males and females compared to controls; the statistical differences cited by de Vendomois et al., (2009) must therefore be owing to the non-traditional statistical methods used in their reanalysis of the liver weight data. In regard to serum alkaline phosphatase, there were no differences for NK603 males or females when compared to controls; again, de Vendomois et al., (2009) report statistical differences, but examination of the original data shows that the values for NK603 males and females are similar to controls and well within the range of values for the reference controls. There were no associated changes in other liver enzymes, bilirubin, or protein of the kind associated with liver toxicity. Lastly, and most importantly, the microscopic appearance of NK603 male and female livers was within normal limits for rats of that age and strain; therefore there was no evidence of liver toxicity.

Similarly, for rats fed MON 810, the only findings de Vendomois et al., (2009) list to support a conclusion of liver toxicity were albumin and albumin/globulin ratios. Contrary to the analysis in Table 2 of de Vendomois et al., (2009), there were no statistically significant differences in male or female serum albumin levels based on the original analysis. There were similarly no statistically significant differences in albumin/globulin ratios, with the exception of a slight decrease for 11% MON 810 females compared to controls at week 5. There were no differences observed at week 14, when the rats had been on the test diets longer, nor were the differences dose related, as they were not apparent in 33% MON 810 females relative to controls. The numerical values for serum albumin and albumin/globulin for MON 810 males and females were also similar to the values for the reference groups. Consistent with the NK603 rats, there were no other changes in serum liver enzymes, protein, bilirubin, etc., that might be associated with liver toxicity. The liver weights also appeared within normal limits for rats of the strain and age used, again consistent with a conclusion of no evidence of liver toxicity. In summary, no experimental evidence supports the conclusion of liver toxicity in rats fed NK603 and MON 810 grain as claimed by de Vendomois et al., (2009).

Kinetic plots

De Vendomois et al., (2009) have also presented kinetic plots showing time-related variations for selected clinical parameters chosen for discussion. For 11% (low dose) control fed females, the publication reports a trend for decreasing triglyceride levels over time (week 5 compared to week 14), whereas for 11% MON 863 fed rats, levels increased slightly during the same period. It is unclear why such complicated figures were used to assess these data sets, since the same time course information can be obtained by simply comparing the mean data for the group at the two time points. Using this simpler method, low dose control female triglycerides dropped from a mean of 56.7 at week 5 to 40.9 at week 14, while low dose MON 863 female triglycerides increased slightly from 50.2 to 50.9. What de Vendomois et al., (2009) fail to mention is that high dose control female triglyceride levels increased from 39.3 at week 5 to 43.9 at week 14, and high dose MON 863 triglyceride levels decreased from 54.9 to 46.7. These trends are opposite to what occurred at the low dose, and the low dose trends are therefore not dose related. For the female reference groups, triglycerides went either up or down slightly between weeks 5 and 14, illustrating that these minor fluctuations occur naturally. Since most of the other figures reported were for the low dose groups, the trend for the high dose was sometimes opposite to that observed at the low dose. In summary, none of this analysis changes the conclusion of the study that there were no treatment-related adverse effects in rats fed MON 863 grain.
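The simpler comparison described above can be reproduced directly from the group means quoted in this section:

```python
# Group mean triglycerides at week 5 and week 14 for the MON 863 study,
# as quoted in the discussion above. The direction of change is opposite
# at the low and high doses, so the low-dose trend is not dose related.
triglycerides = {
    "low-dose control":  (56.7, 40.9),
    "low-dose MON 863":  (50.2, 50.9),
    "high-dose control": (39.3, 43.9),
    "high-dose MON 863": (54.9, 46.7),
}

for group, (week5, week14) in triglycerides.items():
    direction = "up" if week14 > week5 else "down"
    print(f"{group}: {week5} -> {week14} ({direction})")
```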


To summarize, as with the prior publication of Seralini et al, (2007), de Vendomois et al., (2009) use non-traditional statistical methods to reassess toxicology data from studies conducted with MON 863, MON 810 and NK603 and reach an unsubstantiated conclusion that they have found evidence of safety concerns with these crops. As stated by the expert panel that reviewed the Seralini et al 2007 paper (Doull et al., 2007): “In the conduct of toxicity studies, the general question to be answered is whether or not administration of the test substance causes biologically important effects (i.e., those effects relevant to human health risk assessment). While statistics provide a tool by which to compare treated groups to controls, the assessment of the biological importance of any ‘statistically significant’ effect requires a broader evaluation of the data, and, as described by Wilson et al. (2001), includes:

  • Dose-related trends
  • Reproducibility
  • Relationship to other findings
  • Magnitude of the differences
  • Occurrence in both sexes.”

A review of the original data for clinical parameters, organ weights and organ histology also found no evidence of any changes suggestive of hepato/renal toxicity as alleged in the de Vendomois et al., (2009) publication. The same publication also made false allegations regarding how Monsanto carried out its statistical analysis, which have been addressed above.

Although many other points could be made in regard to de Vendomois et al., (2009), given that these authors continue to use the same flawed techniques despite input from other experts, it is not worthwhile to exhaustively document all of the problems with their safety assessment. Most importantly, regulatory agencies that have reviewed the safety data for MON 863, MON 810 and NK603 (including data from the 90 day rat toxicology studies reassessed by de Vendomois et al., 2009) have, in all instances, concluded that these three products are safe for human and animal consumption and safe for the environment. Peer reviewed publications on 90 day rat feeding studies with NK603, MON 810 and MON 863 grain have also concluded that there are no safety concerns for these three biotechnology-derived crops.

Additional Background:

Over the last five years, Seralini and associated investigators have published a series of papers first regarding glyphosate and later regarding Genetically Modified Organisms (GMOs, specifically MON 863). Reviews by government agencies and independent scientists have raised questions regarding the methodology and credibility of this work. The paper by de Vendomois et al. (December 2009) is the most recent publication by this group, and continues to raise the same questions regarding quality and credibility associated with the prior publications.

Seralini and his associates have suggested that glyphosate (the herbicide commonly referred to as “Roundup”™, widely used on GM crops such as Roundup Ready™ and others) is responsible for a variety of human health effects. These allegations were not considered valid human health concerns in several regulatory and technical reviews. Claims of mammalian endocrine disruption by glyphosate in Richard et al. (2005) were evaluated by the Commission d'Etude de la Toxicité (French Toxicology Commission), which identified major methodological gaps and multiple instances of bias in arguments and data interpretation. The conclusion of the French Toxicology Commission was that this 2005 publication from Seralini's laboratory was of no value for the human health risk assessment of glyphosate. A subsequent paper from Seralini's laboratory, Benachour et al. (2009), which was released via the internet in 2008, was reviewed by the Agence Française de Sécurité Sanitaire des Aliments (AFSSA, the French Agency for Food Safety). This review also pooled Richard et al. (2005) and Benachour et al. (2007) from Seralini's laboratory under the same umbrella of in vitro study designs on glyphosate and glyphosate-based formulations. Again, the regulatory review detailed methodological flaws and questionable data interpretation by the Seralini group. The final remarks of the AFSSA review were that “the French Agency for Food Safety judges that the cytotoxic effects of glyphosate, its metabolite AMPA, the tensioactive POEA and other glyphosate-based preparations put forward in this publication do not bring out any pertinent new facts of a nature to call into question the conclusions of the European assessment of glyphosate or those of the national assessment of the preparations”. In August 2009, Health Canada's Pest Management Regulatory Authority (PMRA) published a response to a “Request for a Special Review of Glyphosate Herbicides Containing Polyethoxylated Tallowamine”.
The requester submitted 12 documents, which included the same claims made in the Benachour et al. (2009) publication. The PMRA response concluded that “PMRA has determined that the information submitted does not meet the requirements to invoke a special review,” clearly indicating that no human health concerns were raised in the review of the 12 documents submitted in support of the request.

Regarding GMOs, Seralini et al. (2007) previously published a re-analysis of Monsanto's 90-day rat safety studies of MON 863 corn. Scientists and regulatory agencies who reviewed the 2007 publication did not support that paper's conclusions on MON 863, and the reviews addressed many deficiencies in the statistical reanalysis (Doull et al., 2007; EFSA, 2007a; EFSA, 2007b; BfR, 2007; AFSSA, 2007; Monod, 2007; FSANZ, 2007). These reviews of the 2007 paper confirmed that the original analysis of the data by various regulatory agencies was correct and that MON 863 grain is safe for consumption.

Using the MON 863 analysis as an example, Seralini et al. (2009) recently published a “review” article in the International Journal of Biological Sciences, claiming that improper interpretation of scientific data has allowed sub-chronic and chronic health effects to be ignored in scientific studies of GMOs, pesticides, and other chemicals. This paper applies a complex method (principal component analysis) to demonstrate a difference in liver and kidney function between male and female rats. Despite the fact that these sex differences are well known and are demonstrated in both control and GMO-fed animals, Seralini and his colleagues conclude that these normal findings demonstrate some type of sex-specific susceptibility to toxic effects. Based upon this reasoning, they proceed to over-interpret a variety of minor statistical findings in the MON 863 study. These very same conclusions were roundly criticized in 2007. In fact, the authors of this study admit that their observations “do not allow a clear statement of toxicological effects.”
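The statistical point can be illustrated with a small simulation (hypothetical data, not study values): when two subgroups differ strongly on baseline physiology, a principal component analysis will assign its leading component to that baseline separation even when no treatment effect exists at all.

```python
# PCA on simulated clinical data with a built-in sex difference and no
# treatment effect: the first principal component captures the sex
# separation, not toxicity. All values are simulated illustrations.
import numpy as np

rng = np.random.default_rng(1)
n = 50
# Two "clinical parameters" per animal, with a large sex offset in the
# group means and identical within-group variability.
males = rng.normal([10.0, 5.0], 1.0, size=(n, 2))
females = rng.normal([6.0, 9.0], 1.0, size=(n, 2))
data = np.vstack([males, females])

# PCA via eigendecomposition of the covariance matrix.
centered = data - data.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)       # ascending eigenvalues
explained = eigvals[::-1] / eigvals.sum()    # descending variance ratios

# The dominant component reflects the male/female offset alone.
print(round(explained[0], 2))
```

Because the sex offset dominates total variance, the leading component's "signal" is just normal sexual dimorphism, which is why separating a dataset along a known baseline difference is not evidence of a treatment-related effect.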

De Vendomois et al., (2009) elected to ignore the aforementioned expert scientific reviews by global authorities and regulatory agencies and again have used non-standard and inappropriate methods to reanalyze toxicology studies with MON 863, MON 810 and NK603. This is despite more than 10 years of safe cultivation and consumption of crops developed through modern biotechnology that have also completed extensive safety assessment and review by worldwide regulatory agencies, in each case reaching a conclusion that these products are safe.

Although some Seralini group publications acknowledge some funding sources, there are no acknowledgements of potential funding bias or conflicts of interest. Financial support for Seralini's research includes the Committee for Research and Independent Information on Genetic Engineering (CRIIGEN) and the Human Earth Foundation. Seralini has been Chairman of the Scientific Council of CRIIGEN since 1999, and both he and the organization are known for their anti-biotechnology positions. Both CRIIGEN and the Human Earth Foundation promote organic agriculture and alternatives to pesticides. It is notable that over the last five years Seralini's group has published at least seven papers, four of which specifically target Monsanto's glyphosate-based formulations as detrimental to human health, while the remaining papers allege that Monsanto's biotechnology or GMO crops have human health implications. In addition, Seralini has a history of anti-Monsanto media releases and statements, including those on YouTube, reflecting not only Seralini's anti-Monsanto sentiment but a lack of scientific objectivity.

Finally, it is worth noting the press release from CRIIGEN, issued at the time of release of the de Vendomois et al. publication:

“CRIIGEN denounces in particular the past opinions of EFSA, AFSSA and CGB, committees of European and French Food Safety Authorities, and others who spoke on the lack of risks on the tests which were conducted just for 90 days on rats to assess the safety of these three GM varieties of maize. While criticizing their failure to examine the detailed statistics, CRIIGEN also emphasizes the conflict of interest and incompetence of these committees to counter-expertise this publication as they have already voted positively on the same tests ignoring the side effects.”

This rather remarkable approach clearly indicates how far the authors of this publication have drifted from appropriate scientific discourse regarding GMO safety data. While they would reject criticisms of their methods and arguments by regulatory authorities and other eminent toxicology experts, most persons seeking an objective analysis will welcome broad expert input and a full assessment of the weight of evidence on the subject.