
Tuesday, December 6, 2016

My Unhappy Life as a Climate Heretic. By Roger Pielke Jr.

My research was attacked by thought police in journalism, activist groups funded by billionaires and even the White House.
http://www.wsj.com/articles/my-unhappy-life-as-a-climate-heretic-1480723518
Updated Dec. 2, 2016 7:04 p.m. ET

Much to my surprise, I showed up in the WikiLeaks releases before the election. In a 2014 email, a staffer at the Center for American Progress, founded by John Podesta in 2003, took credit for a campaign to have me eliminated as a writer for Nate Silver’s FiveThirtyEight website. In the email, the editor of the think tank’s climate blog bragged to one of its billionaire donors, Tom Steyer: “I think it’s fair [to] say that, without Climate Progress, Pielke would still be writing on climate change for 538.”

WikiLeaks provides a window into a world I’ve seen up close for decades: the debate over what to do about climate change, and the role of science in that argument. Although it is too soon to tell how the Trump administration will engage the scientific community, my long experience shows what can happen when politicians and media turn against inconvenient research—which we’ve seen under Republican and Democratic presidents.

I understand why Mr. Podesta—most recently Hillary Clinton’s campaign chairman—wanted to drive me out of the climate-change discussion. When substantively countering an academic’s research proves difficult, other techniques are needed to banish it. That is how politics sometimes works, and professors need to understand this if we want to participate in that arena.

More troubling is the degree to which journalists and other academics joined the campaign against me. What sort of responsibility do scientists and the media have to defend the ability to share research, on any subject, that might be inconvenient to political interests—even our own?

I believe climate change is real and that human emissions of greenhouse gases risk justifying action, including a carbon tax. But my research led me to a conclusion that many climate campaigners find unacceptable: There is scant evidence to indicate that hurricanes, floods, tornadoes or drought have become more frequent or intense in the U.S. or globally. In fact we are in an era of good fortune when it comes to extreme weather. This is a topic I’ve studied and published on as much as anyone over two decades. My conclusion might be wrong, but I think I’ve earned the right to share this research without risk to my career.

Instead, my research was under constant attack for years by activists, journalists and politicians. In 2011 writers in the journal Foreign Policy signaled that some accused me of being a “climate-change denier.” I earned the title, the authors explained, by “questioning certain graphs presented in IPCC reports.” That an academic who raised questions about the Intergovernmental Panel on Climate Change in an area of his expertise was tarred as a denier reveals the groupthink at work.

Yet I was right to question the IPCC’s 2007 report, which included a graph purporting to show that disaster costs were rising due to global temperature increases. The graph was later revealed to have been based on invented and inaccurate information, as I documented in my book “The Climate Fix.” The insurance industry scientist Robert Muir-Wood of Risk Management Solutions had smuggled the graph into the IPCC report. He explained in a public debate with me in London in 2010 that he had included the graph and misreferenced it because he expected future research to show a relationship between increasing disaster costs and rising temperatures.

When his research was eventually published in 2008, well after the IPCC report, it concluded the opposite: “We find insufficient evidence to claim a statistical relationship between global temperature increase and normalized catastrophe losses.” Whoops.

The IPCC never acknowledged the snafu, but subsequent reports got the science right: There is not a strong basis for connecting weather disasters with human-caused climate change.

Yes, storms and other extremes still occur, with devastating human consequences, but history shows they could be far worse. No Category 3, 4 or 5 hurricane has made landfall in the U.S. since Hurricane Wilma in 2005, by far the longest such period on record. This means that cumulative economic damage from hurricanes over the past decade is some $70 billion less than the long-term average would lead us to expect, based on my research with colleagues. This is good news, and it should be OK to say so. Yet in today’s hyper-partisan climate debate, every instance of extreme weather becomes a political talking point.

For a time I called out politicians and reporters who went beyond what science can support, but some journalists won’t hear of this. In 2011 and 2012, I pointed out on my blog and social media that the lead climate reporter at the New York Times, Justin Gillis, had mischaracterized the relationship of climate change and food shortages, and the relationship of climate change and disasters. His reporting wasn’t consistent with most expert views, or the evidence. In response he promptly blocked me from his Twitter feed. Other reporters did the same.

In August this year on Twitter, I criticized poor reporting on the website Mashable about a supposed coming hurricane apocalypse—including a bad misquote of me in the cartoon role of climate skeptic. (The misquote was later removed.) The publication’s lead science editor, Andrew Freedman, helpfully explained via Twitter that this sort of behavior “is why you’re on many reporters’ ‘do not call’ lists despite your expertise.”

I didn’t know reporters had such lists. But I get it. No one likes being told that he misreported scientific research, especially on climate change. Some believe that connecting extreme weather with greenhouse gases helps to advance the cause of climate policy. Plus, bad news gets clicks.

Yet more is going on here than thin-skinned reporters responding petulantly to a vocal professor. In 2015 I was quoted in the Los Angeles Times, by Pulitzer Prize-winning reporter Paige St. John, making the rather obvious point that politicians use the weather-of-the-moment to make the case for action on climate change, even if the scientific basis is thin or contested.

Ms. St. John was pilloried by her peers in the media. Shortly thereafter, she emailed me what she had learned: “You should come with a warning label: Quoting Roger Pielke will bring a hailstorm down on your work from the London Guardian, Mother Jones, and Media Matters.”

Or look at the journalists who helped push me out of FiveThirtyEight. My first article there, in 2014, was based on the consensus of the IPCC and peer-reviewed research. I pointed out that the global cost of disasters was increasing at a rate slower than GDP growth, which is very good news. Disasters still occur, but their economic and human effect is smaller than in the past. It’s not terribly complicated.

That article prompted an intense media campaign to have me fired. Writers at Slate, Salon, the New Republic, the New York Times, the Guardian and others piled on.

In March of 2014, FiveThirtyEight editor Mike Wilson demoted me from staff writer to freelancer. A few months later I chose to leave the site after it became clear it wouldn’t publish me. The mob celebrated. ClimateTruth.org, founded by former Center for American Progress staffer Brad Johnson, and advised by Penn State’s Michael Mann, called my departure a “victory for climate truth.” The Center for American Progress promised its donor Mr. Steyer more of the same.

Yet the climate thought police still weren’t done. In 2013 committees in the House and Senate invited me to several hearings to summarize the science on disasters and climate change. As a professor at a public university, I was happy to do so. My testimony was strong, and it was well aligned with the conclusions of the IPCC and the U.S. government’s climate-science program. Those conclusions indicate no overall increasing trend in hurricanes, floods, tornadoes or droughts—in the U.S. or globally.

In early 2014, not long after I appeared before Congress, President Obama’s science adviser John Holdren testified before the same Senate Environment and Public Works Committee. He was asked about his public statements that appeared to contradict the scientific consensus on extreme weather events that I had earlier presented. Mr. Holdren responded with the all-too-common approach of attacking the messenger, telling the senators incorrectly that my views were “not representative of the mainstream scientific opinion.” Mr. Holdren followed up by posting a strange essay, of nearly 3,000 words, on the White House website under the heading, “An Analysis of Statements by Roger Pielke Jr.,” where it remains today.

I suppose it is a distinction of a sort to be singled out in this manner by the president’s science adviser. Yet Mr. Holdren’s screed reads more like a dashed-off blog post from the nutty wings of the online climate debate, chock-full of errors and misstatements.

But when the White House puts a target on your back on its website, people notice. Almost a year later Mr. Holdren’s missive was the basis for an investigation of me by Arizona Rep. Raul Grijalva, the ranking Democrat on the House Natural Resources Committee. Rep. Grijalva explained in a letter to my university’s president that I was being investigated because Mr. Holdren had “highlighted what he believes were serious misstatements by Prof. Pielke of the scientific consensus on climate change.” He made the letter public.

The “investigation” turned out to be a farce. In the letter, Rep. Grijalva suggested that I—and six other academics with apparently heretical views—might be on the payroll of Exxon Mobil (or perhaps the Illuminati, I forget). He asked for records detailing my research funding, emails and so on. After some well-deserved criticism from the American Meteorological Society and the American Geophysical Union, Rep. Grijalva deleted the letter from his website. The University of Colorado complied with Rep. Grijalva’s request and responded that I have never received funding from fossil-fuel companies. My heretical views can be traced to research support from the U.S. government.

But the damage to my reputation had been done, and perhaps that was the point. Studying and engaging on climate change had become decidedly less fun. So I started researching and teaching other topics and have found the change in direction refreshing. Don’t worry about me: I have tenure and supportive campus leaders and regents. No one is trying to get me fired for my new scholarly pursuits.

But the lesson is that a lone academic is no match for billionaires, well-funded advocacy groups, the media, Congress and the White House. If academics—in any subject—are to play a meaningful role in public debate, the country will have to do a better job supporting good-faith researchers, even when their results are unwelcome. This goes for Republicans and Democrats alike, and for the administration of President-elect Trump.

Academics and the media in particular should support viewpoint diversity instead of serving as the handmaidens of political expediency by trying to exclude voices or damage reputations and careers. If academics and the media won’t support open debate, who will?

---
Mr. Pielke is a professor and director of the Sports Governance Center at the University of Colorado, Boulder. His most recent book is “The Edge: The Wars Against Cheating and Corruption in the Cutthroat World of Elite Sports” (Roaring Forties Press, 2016).

Tuesday, May 5, 2009

What You Can(‘t) Do About Global Warming

World Climate Report, April 30, 2009

We are always hearing about ways that you can “save the planet” from the perils of global warming—from riding your bicycle to work, to supporting the latest national greenhouse gas restriction limitations, and everything in between.

In virtually each and every case, advocates of these measures provide you with the amount of greenhouse gas emissions (primarily carbon dioxide) that will be saved by the particular action.
And if you want to figure this out for yourself, the web is full of CO2 calculators (just google “CO2 calculator”) which allow you to calculate your carbon footprint and how much it can be reduced by taking various conservation steps—all with an eye towards reducing global warming.

However, in absolutely zero of these cases are you told, or can you calculate, how much impact you are going to have on the actual climate itself. After all, CO2 emissions are not climate—they are gases. Climate is temperature and precipitation and storms and winds, etc. If the goal of the actions is to prevent global warming, then you shouldn’t really care a hoot about the amount of CO2 emissions that you are reducing, but instead, you want to know how much of the planet you are saving. How much anthropogenic climate change is being prevented by unplugging your cell phone charger, from biking to the park, or from slashing national carbon dioxide emissions?
Why do none of the CO2 calculators give you that most valuable piece of information? Why don’t the politicians, the EPA, and/or greenhouse gas reduction advocates tell you the bottom line?

How much global warming are we avoiding?

Embarrassingly for them, this information is readily available.

After all, what do you think climate models do? Simply, they take greenhouse gas emissions scenarios and project the future climate—thus providing precisely the answer we are looking for. You tweak the scenarios to account for your emission savings, run the models, and you get your answer.

Since climate model projections of the future climate are what are being used to attempt to scare us into action, climate models should very well be used to tell us how much of the scary future we are going to avoid by taking the suggested/legislated/regulated actions.

So where are the answers?

OK, so full-fledged climate models are very expensive tools—they are extremely complex computer programs that take weeks to run on the world’s fastest supercomputers. So, consequently, they don’t lend themselves to web calculators.

But you would think that, in considering our national energy plan or the EPA’s plan to regulate CO2, this would be of enough import to deserve a couple of climate model runs to determine the final result. Otherwise, how can the members of Congress fairly assess what it is they are considering doing? Again, if the goal is to change the future course of climate to avoid the potential negative consequences of global warming, then to what degree is the plan that they are proposing going to be successful? Can it deliver the desired results? The American public deserves to know.

In lieu of full-out climate models, there are some “pocket” climate models that run on your desktop computer in a matter of seconds and which are designed to emulate the large-scale output from the complex general circulation models. One of the best of these “pocket” models is the Model for the Assessment of Greenhouse-gas Induced Climate Change, or MAGICC. Various versions of MAGICC have been used for years to simulate climate model output for a fraction of the cost. In fact, the latest version of MAGICC was developed under a grant from the EPA. Just like a full climate model, MAGICC takes in greenhouse gas emissions scenarios and outputs such quantities as the projected global average temperature. Just the thing we are looking for. It would only take a bit of technical savvy to configure the web-based CO2 calculators so that they interfaced with MAGICC and produced a global temperature savings based upon the emissions savings. Yet no one has seen fit to do so. If you are interested in attempting to do so yourself, MAGICC is available here.

As a last resort, for those of us who don’t have general circulation models, supercomputers, or even much technical savvy of our own, it is still possible, in a rough, back-of-the-envelope sort of way, to come up with a simple conversion from CO2 emissions to global temperatures. This way, what our politicians and favorite global warming alarmists won’t tell us, we can figure out for ourselves.

Here’s how.

We need to go from emissions of greenhouse gases, to atmospheric concentrations of greenhouse gases, to global temperatures.

We’ll determine how much CO2 emissions are required to change the atmospheric concentration of CO2 by 1 part per million (ppm), then we’ll figure out how many ppms of CO2 it takes to raise the global temperature 1ºC. Then, we’ll have our answer.

So first things first. Figure 1 shows the total global emissions of CO2 (in units of million metric tons, mmt) each year from 1958-2006 as well as the annual change in atmospheric CO2 content (in ppm) during the same period. Notice that CO2 emissions are rising, as is the annual change in atmospheric CO2 concentration.

[figure 1]

Figure 1. (top) Annual global total carbon dioxide emissions (mmt), 1958-2006; (bottom) Year-to-year change in atmospheric CO2 concentrations (ppm), 1959-2006. (Data source: Carbon Dioxide Information Analysis Center)

If we divide the annual emissions by the annual concentration change, we get Figure 2—the amount of emissions required to raise the atmospheric concentration by 1 ppm. Notice that there is no trend at all through the data in Figure 2. This means that the average amount of CO2 emissions required to change the atmospheric concentration by a unit amount has stayed constant over time. This average value in Figure 2 is 15,678 mmt CO2/ppm.

[figure 2]

Figure 2. Annual CO2 emissions responsible for a 1 ppm change in atmospheric CO2 concentrations (Figure 1a divided by Figure 1b), 1959-2006. The blue horizontal line is the 1959-2006 average, the red horizontal line is the average excluding the volcano-influenced years of 1964, 1982, and 1992.

You may wonder about the two large spikes in Figure 2—indicating that in those years, the emissions did not result in much of a change in the atmospheric CO2 concentrations. It turns out that the spikes, in 1964 and 1992 (and a smaller one in 1982), are the result of large volcanic eruptions. The eruptions cooled the earth by blocking solar radiation and making it more diffuse, which has the dual effect of increasing the CO2 uptake by oceans and increasing the CO2 uptake by photosynthesis—both effects serving to offset the effect of the added emissions and resulting in little change in the atmospheric concentrations. As the volcanic effects attenuated in the following year, the CO2 concentrations then responded to emissions as expected.

Since volcanic eruptions are more the exception than the norm, we should remove them from our analysis. In doing so, the average amount of CO2 emissions that leads to an atmospheric increase of 1 ppm drops from 15,678 mmt (the blue line in Figure 2) to 14,138 mmt CO2 (red line in Figure 2).
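For readers who want to reproduce the Figure 2 calculation themselves, here is a minimal sketch in Python. The input series would come from the CDIAC data cited in Figure 1; they are passed in as arguments rather than hard-coded, so this is a template, not a data source.

```python
# Sketch of the Figure 2 calculation: how many mmt of CO2 emissions
# correspond to a 1 ppm rise in atmospheric concentration.
# The year, emissions and delta-ppm series come from the CDIAC data
# cited in Figure 1; they are supplied by the caller.

def mmt_per_ppm(years, emissions_mmt, delta_ppm, exclude_years=()):
    """Average emissions (mmt) required to raise atmospheric CO2 by 1 ppm.

    years         : year labels matching the two series
    emissions_mmt : annual global CO2 emissions (mmt)
    delta_ppm     : year-to-year change in atmospheric CO2 (ppm)
    exclude_years : years to drop, e.g. the volcano-influenced
                    1964, 1982 and 1992
    """
    ratios = [e / d for y, e, d in zip(years, emissions_mmt, delta_ppm)
              if y not in exclude_years and d != 0]
    return sum(ratios) / len(ratios)

# With the full 1959-2006 series this averages ~15,678 mmt/ppm;
# dropping 1964, 1982 and 1992 brings it down to ~14,138 mmt/ppm.
```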

Now, we need to know how many ppms of CO2 it takes to raise the global temperature a degree Celsius. This is a bit trickier, because this value is generally not thought to be constant, but instead to decrease with increasing concentrations. But, for our purposes, we can consider it to be constant and still be in the ballpark. But what is that value?

We can try to determine it from observations.

Over the past 150 years or so, the atmospheric concentration of CO2 has increased about 100 ppm, from ~280ppm to ~380ppm, and global temperatures have risen about 0.8ºC over the same time. Dividing the concentration change by the temperature change (100ppm/0.8ºC) produces the answer that it takes 125ppm to raise the global temperature 1ºC. Now, it is possible that some of the observed temperature rise has occurred as a result of changes other than CO2 (say, solar, for instance). But it is also possible that the full effect of the temperature change resulting from the CO2 changes has not yet been manifest. So, rather than nit-pick here, we’ll call those two things a wash and go with 125ppm/ºC as a reasonable value as determined from observations.

We can also try to determine it from models.

Climate models run with only CO2 increases produce about 1.8ºC of warming at the time of a doubling of the atmospheric carbon dioxide concentration. A doubling is usually taken to be a change of about 280ppm. So, we have 280ppm divided by 1.8ºC, which equals 156ppm/ºC. But the warming is not fully realized by the time of doubling, and the models go on to produce a total warming of about 3ºC for the same 280ppm rise. This gives us 280ppm divided by 3ºC, which equals 93ppm/ºC. The degree to which the models have things exactly right is highly debatable, but close to the middle of all of this is that 125ppm/ºC number again—the same that we get from observations.

So both observations and models give us a similar number, within a range of loose assumptions.
Now we have what we need. It takes ~14,138 mmt of CO2 emissions to raise the atmospheric CO2 concentration by ~1 ppm, and it takes ~125 ppm to raise the global temperature ~1ºC. So multiplying ~14,138 mmt/ppm by ~125 ppm/ºC gives us ~1,767,250 mmt/ºC.

That’s our magic number—1,767,250.

Write that number down on a piece of paper and put it in your wallet or post it on your computer.

This is a handy-dandy and powerful piece of information to have, because now, whenever you are presented with an emissions savings that some action to save the planet from global warming is supposed to produce, you can actually see how much of a difference it will really make. Just take the emissions savings (in units of mmt of CO2) and divide it by 1,767,250.
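For the calculator-averse, the whole conversion fits in a few lines of Python. The two constants are just the values derived above, so the output carries the same back-of-the-envelope caveats as the article itself:

```python
MMT_PER_PPM = 14138    # mmt of CO2 emissions per 1 ppm rise (volcano years excluded)
PPM_PER_DEGC = 125     # ppm of CO2 per 1 degC of global warming
MMT_PER_DEGC = MMT_PER_PPM * PPM_PER_DEGC   # the magic number: ~1,767,250

def warming_avoided_degC(emissions_saved_mmt):
    """Rough global warming (degC) avoided by a CO2 emissions saving in mmt."""
    return emissions_saved_mmt / MMT_PER_DEGC
```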
Just for fun, let’s see what we get when we apply this to a few save-the-world suggestions.

According to NativeEnergy.com (in association with Al Gore’s ClimateCrisis.net), if you stopped driving your average mid-sized car for a year, you’d save about 5.5 metric tons (or 0.0000055 million metric tons, mmt) of CO2 emissions per year. Divide 0.0000055 mmt CO2 by 1,767,250 mmt/ºC and you get a number too small to display on my 8-digit calculator (OK, Excel tells me the answer is 0.00000000000311ºC). And, if you send in $84, NativeEnergy will invest in wind and methane power to offset that amount in case you actually don’t want to give up your car for the year. We’ll let you decide if you think that is worth it.

How about something bigger, like not only giving up your mid-sized car, but also your SUV and everything else your typical household does that results in carbon dioxide emissions from fossil fuels. Again, according to NativeEnergy.com, that would save about 24 metric tons of CO2 (or 0.000024 mmt) per year. Dividing this emissions savings by our handy-dandy converter yields 0.0000000000136ºC/yr. If you lack the fortitude to actually make these sacrifices to prevent one hundred-billionth of a degree of warming, for $364 each year, NativeEnergy.com will offset your guilt.

And finally, looking at the Waxman-Markey Climate Bill that is now being considered by Congress, CO2 emissions from the U.S. in the year 2050 are proposed to be 83% less than they were in 2005. In 2005, U.S. emissions were about 6,000 mmt, so 83% below that would be 1,020mmt or a reduction of 4,980mmtCO2. 4,980 divided by 1,767,250 = 0.0028ºC per year. In other words, even if the entire United States reduced its carbon dioxide emissions by 83% below current levels, it would only amount to a reduction of global warming of less than three-thousandths of a ºC per year. A number that is scientifically meaningless.
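Plugging the article’s three examples into the little function above reproduces the numbers just quoted:

```python
# Emissions savings from the examples above (all in mmt of CO2 per year).
print(warming_avoided_degC(0.0000055))  # parked mid-sized car: ~3.1e-12 degC
print(warming_avoided_degC(0.000024))   # whole household:      ~1.4e-11 degC
print(warming_avoided_degC(4980))       # Waxman-Markey cut:    ~0.0028 degC/yr
```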

This is the type of information that we should be provided with. And, as we have seen here, it is not that difficult to come by.

The fact that we aren’t routinely presented with this data leads to the inescapable conclusion that it is purposefully being withheld. None of the climate do-gooders want you to know that what they are suggesting/demanding will do no good at all (at least as far as global warming is concerned).

So, if you really want to, dust off your bicycle, change out an incandescent bulb with a compact fluorescent, or support legislation that will raise your energy bill. Just realize that you will be doing so for reasons other than saving the planet. It is a shame that you have to hear that from us, rather than directly from the folks urging you on (under false pretenses).

Friday, April 24, 2009

Shanahan et alii's article on severe droughts in Africa

Comment On “Debate Over Climate Risks - Natural or Not” On Dot Earth. By Roger Pielke Sr
Climate Science, Apr 20, 2009

There is an interesting discussion ongoing at Andy Revkin’s weblog Dot Earth on the topic Debate Over Climate Risks - Natural or Not, which invites responses to the statement,

“One clear-cut lesson [of this study] seems to be that human-driven warming, for this part of Africa, could be seen as a sideshow given the normal extremes. Tell me why that thought is misplaced if you feel it is.”

This subject was initiated by a Science article by Shanahan et al. and a subsequent news item on April 16, 2009 by Andy Revkin, which includes the text

“For at least 3,000 years, a regular drumbeat of potent droughts, far longer and more severe than any experienced recently, have seared a belt of sub-Saharan Africa that is now home to tens of millions of the world’s poorest people, climate researchers reported in a new study.

That sobering finding, published in the April 17th issue of Science magazine, emerged from the first study of year-by-year climate conditions in the region over the millenniums, based on layered mud and dead trees in a crater lake in Ghana.”

The abstract of the Science article by Shanahan et al reads

“Although persistent drought in West Africa is well documented from the instrumental record and has been primarily attributed to changing Atlantic sea surface temperatures, little is known about the length, severity, and origin of drought before the 20th century. We combined geomorphic, isotopic, and geochemical evidence from the sediments of Lake Bosumtwi, Ghana, to reconstruct natural variability in the African monsoon over the past three millennia. We find that intervals of severe drought lasting for periods ranging from decades to centuries are characteristic of the monsoon and are linked to natural variations in Atlantic temperatures. Thus the severe drought of recent decades is not anomalous in the context of the past three millennia, indicating that the monsoon is capable of longer and more severe future droughts.”

Climate Science and our research papers have emphasized the large natural variations of climate that have occurred in the paleo-climate record and that these variations dwarf anything we have experienced in the instrumental record.

For example, in

Rial, J., R.A. Pielke Sr., M. Beniston, M. Claussen, J. Canadell, P. Cox, H. Held, N. de Noblet-Ducoudre, R. Prinn, J. Reynolds, and J.D. Salas, 2004: Nonlinearities, feedbacks and critical thresholds within the Earth’s climate system. Climatic Change, 65, 11-38,

our abstract reads

“The Earth’s climate system is highly nonlinear: inputs and outputs are not proportional, change is often episodic and abrupt, rather than slow and gradual, and multiple equilibria are the norm. While this is widely accepted, there is a relatively poor understanding of the different types of nonlinearities, how they manifest under various conditions, and whether they reflect a climate system driven by astronomical forcings, by internal feedbacks, or by a combination of both. In this paper, after a brief tutorial on the basics of climate nonlinearity, we provide a number of illustrative examples and highlight key mechanisms that give rise to nonlinear behavior, address scale and methodological issues, suggest a robust alternative to prediction that is based on using integrated assessments within the framework of vulnerability studies and, lastly, recommend a number of research priorities and the establishment of education programs in Earth Systems Science. It is imperative that the Earth’s climate system research community embraces this nonlinear paradigm if we are to move forward in the assessment of the human influence on climate.”

In an article specifically with respect to drought,

Pielke Sr., R.A., 2008: Global climate models - Many contributing influences. Citizen’s Guide to Colorado Climate Change, Colorado Climate Foundation for Water Education, pp. 28-29,
I wrote

“A vulnerability perspective, focused on regional and local societal and environmental resources, is a more inclusive, useful and scientifically robust framework to use with policymakers. In contrast to the limited range of possible future risks by current climate models, the vulnerability framework permits the evaluation of the entire spectrum of risks to the water resources associated with all social and environmental threats, including climate variability and change.”

Thus, regardless of the role humans play within the climate system (and it is much more than carbon dioxide increases), adaptation plans to deal with climate variations, beyond what occurred in the historical record, should be a priority.

Saturday, April 11, 2009

Amazon Experts Cautious on Climate Threat

Amazon Experts Cautious on Climate Threat, by Andrew Revkin
Dot Earth/TNYT, April 7, 2009, 2:54 pm

The lure of the “front-page thought” — for both scientists and the press — was very much on display at the recent Copenhagen summit on climate change. Presentations and speeches were followed by a wave of coverage, primarily in Europe, focused on what many papers said was strong new evidence of pending climate calamity.

Some scientists who attended the meeting pushed back. Mike Hulme of the University of East Anglia criticized efforts to cast the six-point manifesto released at the meeting’s end as the product of a broad consensus (simultaneously published on the Prometheus blog). Other scientists, who study facets of how global warming could affect things that matter — in particular the Amazon rain forest — criticized what they saw as overstatements coming out of the meeting and have now followed up afresh.

Yadvinder Malhi, a professor of ecosystem science at the University of Oxford, and Oliver Phillips, a professor of tropical ecology at the University of Leeds, have written a response to a story in the Guardian on a modeling study that projected that the Amazon forest was poised to die off. The scientists contend, in a response published today in the paper, that the single study, not yet peer reviewed, was laced with uncertainties downplayed both by the scientists describing it and by the resulting news story.

(Dr. Malhi also contributed to my recent article assessing what is, and isn’t known, about possible tipping points related to global warming.)

Here’s the take-home point from Dr. Malhi and Dr. Phillips:

Forest dieback is a possibility that should not be ignored, and the probability increases with increasing air temperatures; but it is not inevitable. What is clear is that climate change magnifies the threat from advancing agricultural development, as a drier Amazon will burn more easily….

Climate change is undeniably a serious threat, and our comments should not be seized upon as an excuse for delay or inaction. Rather, conserving Amazonian forests both reduces the carbon dioxide flux from deforestation, which contributes up to a fifth of global emissions, and also increases the resilience of the forest to climate change. The potential impacts of climate change on the Amazon forest must be a call to action to conserve the Amazon, not a reason to retreat in despair.

Tuesday, February 17, 2009

Greenhouse Gases Up, Global Temperatures Down

Greenhouse Gases Up, Global Temperatures Down. By Chip Knappenberger
Master Resource, February 17, 2009

Over the weekend, a widely-distributed story by AP science writer Randolph Schmid voiced the concerns of several scientists that humans were emitting greenhouse gases into the atmosphere at a rate much faster than anyone expected. Funny thing is, Schmid failed to mention that during the same time, global warming proceeded at a rate much slower than anyone expected.
Schmid described the situation like this:
Carbon emissions have been growing at 3.5 percent per year since 2000, up sharply from the 0.9 percent per year in the 1990s, Christopher Field of the Carnegie Institution for Science told the annual meeting of the American Association for the Advancement of Science [AAAS].

“It is now outside the entire envelope of possibilities” considered in the 2007 report of the Intergovernmental Panel on Climate Change, he said. The IPCC and former vice president Al Gore received the Nobel Prize for drawing attention to the dangers of climate change.

The largest factor in this increase is the widespread adoption of coal as an energy source, Field said, “and without aggressive attention societies will continue to focus on the energy sources that are cheapest, and that means coal.”

When it comes right down to it, carbon dioxide emissions are not bad in and of themselves; in fact, they are a direct fertilizer for the earth’s plant species. The potential problem surrounds how, and how much, they may impact the climate. So to complete his coal-is-bad tale, Schmid should have included some comments about how badly the earth’s climate was behaving.

Problem is, such data is getting hard to come by. In fact, while Schmid was busy covering the AAAS meeting in Chicago, Dr. Patrick J. Michaels testified before the U.S. House Subcommittee on Energy and the Environment that global warming was proceeding at a rate that was at the lowest values projected by a large suite of climate models. Dr. Michaels further told the Subcommittee members in the nation’s capital that another year or so of little warming would put global temperature trends outside the accepted range of model prognostications.

So, clearly, the picture is a lot more complicated than CO2 in/catastrophic climate change out. It is just that most environmental alarmists (reporters included) don’t like to think of it as such.
I wasn’t the only one who noticed the slanted reporting coming from the coverage of the AAAS meeting. University of Colorado researcher and renowned climatologist Roger Pielke Sr. had this to say over at his ClimateScience blog:

Since papers and weblogs have documented that the warming is being over-estimated in recent years, and, thus, these sources of information are readily available to the reporters, there is, therefore, no other alternative than these reporters are deliberately selecting a biased perspective to promote a particular viewpoint on climate. The reporting of this news without presenting counter viewpoints is clearly an example of yellow journalism:

“Journalism that exploits, distorts, or exaggerates the news to create sensations and attract readers.”

When will the news media and others realize that by presenting such biased reports, which are easily refuted by real world data, they are losing their credibility among many in the scientific community as well as with the public.

Good question.

Friday, February 13, 2009

Libertarian on Antarctica Cooling and Warming

Climate Scientists Blow Hot and Cold, by Patrick J. Michaels
Cato, February 12, 2009

Just about every major outlet has jumped on the news: Antarctica is warming up.
Most previous science had indicated that, despite a warming of global temperatures, readings from Antarctica were either staying the same or even going down.

The problem with Antarctic temperature measurement is that all but three longstanding weather stations are on or very near the coast. Antarctica is a big place, about one-and-a-half times the size of the US. Imagine trying to infer our national temperature only with stations along the Atlantic and Pacific coasts, plus three others in the interior.

Eric Steig, from the University of Washington, filled in the huge blanks by correlating satellite-measured temperatures with the largely coastal Antarctic network and then creating inland temperatures based upon the relationship between the satellite and the sparse observations. The result was a slight warming trend, but mainly at the beginning of the record in the 1950s and 1960s. One would expect greenhouse-effect warming from carbon dioxide to be more pronounced in recent years, which it is not.

There's actually very little that is new here. Antarctic temperatures do show a warming trend if you begin your study between 1957, when the International Geophysical Year deployed the first network of thermometers there, and the mid-1960s. Studies that start after then find either cooling or no change.

Steig and his colleagues didn't graph the data for the continent as a whole. Instead they broke it into two pieces: the east and west Antarctic ice sheet regions. A naïve reader would give equal weight to both. In fact, in the east, which is much larger, there is clearly no significant warming in the last several decades. When the results are combined, the same old result reappears, namely that the "warming" is driven by years very early in the record, and that the net change since the early 1970s is insignificant.

The reaction to this study by Steig and his co-authors is more enlightening than its results. When Antarctica was cooling, some climate scientists said that was consistent with computer models for global warming. When a new study, such as Steig's, says it's warming, well that's just fine with the models, too. That's right: people glibly relate both warming and cooling of the frigid continent to human-induced climate change.

Perhaps the most prominent place to see how climatologists mix their science with their opinions is a blog called RealClimate.org, primarily run by Gavin Schmidt, one of the computer jockeys for Nasa's James Hansen, the world's loudest climate alarmist.

When studies were published showing a net cooling in recent decades, RealClimate had no problem. A 12 February 2008 post noted: "We often hear people remarking that parts of Antarctica are getting colder, and indeed the ice pack in the southern ocean around Antarctica has actually been getting bigger. Doesn't this contradict the calculations that greenhouse gases are warming the globe? Not at all, because a cold Antarctica is just what calculations predict ... and have predicted for the past quarter century."

A co-author of Steig's paper (and frequent blogger on RealClimate), Penn State's Michael Mann, turned a 180 on Antarctic cooling. He told Associated Press: "Now we can say: No, it's not true. ... [Antarctica] is not bucking the trend."

So, Antarctic cooling and warming are both now consistent with computer models of dreaded global warming caused by humans.

In reality, the warming is largely at the beginning of the record — before there should have been much human-induced climate change. New claims that both warming and cooling of the same place are consistent with forecasts aren't going to help the credibility of climate science, or reduce the fatigue of Americans regarding global warming.

Have climate alarmists beaten global warming to death? The Pew Research Centre recently asked over 1,500 people to rank 20 issues in order of priority. Global warming came in dead last.

We can never run the experiment to see if indeed it is the constant hyping of this issue that has sent it to the bottom of the priority ladder. But, as long as scientists blog on that both warming and cooling of the coldest place on earth is consistent with their computer models, why should anyone believe them?

Friday, January 30, 2009

Tennekes on Real Climate

Real Climate Suffers from Foggy Perception, by Henk Tennekes
Climate Science, January 29, 2009 @ 7:00 am

Excerpts:

Roger Pielke Sr. has graciously invited me to add my perspective to his discussion with Gavin Schmidt at RealClimate. [...]

A weather model deals with the atmosphere. Slow processes in the oceans, the biosphere, and human activities can be ignored or crudely parameterized. This strategy has been very successful. The dominant fraternity in the meteorological modeling community has appropriated this advantage, and made itself the lead community for climate modeling. Backed by an observational system much more advanced than those in oceanography or other parts of the climate system, they have exploited their lead position for all they can. For them, it is a fortunate coincidence that the dominant synoptic systems in the atmosphere have scales on the order of many hundreds of kilometers, so that the shortcomings of the parameterizations and the observation network, including weather satellite coverage, do not prevent skillful predictions several days ahead.

A climate model, however, has to deal with the entire climate system, which does include the world’s oceans. The oceans constitute a crucial slow component of the climate system. Crucial, because this is where most of the accessible heat in the system is stored. Meteorologists tend to forget that just a few meters of water contain as much heat as the entire atmosphere. Also, the oceans are the main source of the water vapor that makes atmospheric dynamics on our planet both interesting and exceedingly complicated. For these and other reasons, an explicit representation of the oceans should be the core of any self-respecting climate model.
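Tennekes’s few-meters-of-water claim is easy to verify on the back of an envelope. Here is a quick check using standard textbook values (the constants below are generic physical values, not numbers taken from his post):

```python
# Compare the heat capacity (per unit area) of the whole atmosphere
# with that of a layer of ocean water. Standard textbook values.
c_p_air = 1004.0      # J/(kg K), specific heat of air at constant pressure
column_mass = 1.0e4   # kg/m^2, atmospheric column mass (~10^5 Pa / 9.8 m/s^2)
c_water = 4186.0      # J/(kg K), specific heat of liquid water
rho_water = 1000.0    # kg/m^3

atm_heat_capacity = c_p_air * column_mass                     # ~1e7 J/(m^2 K)
equivalent_depth = atm_heat_capacity / (c_water * rho_water)  # metres of water
print(round(equivalent_depth, 1))  # ~2.4 m of water matches the whole atmosphere
```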

However, the observational systems for the oceans are primitive in comparison with their atmospheric counterparts. Satellites that can keep track of what happens below the surface of the ocean have limited spatial and temporal resolution. Also, the scale of synoptic motions in the ocean is much smaller than that of cyclones in the atmosphere, requiring a spatial resolution in numerical models and in the observation network beyond the capabilities of present observational systems and supercomputers. We cannot observe, for example, the vertical and horizontal structure of temperature, salinity and motion of eddies in the Gulf Stream in real time with sufficient detail, and cannot model them at the detail that is needed because of computer limitations. How, for goodness’ sake, can we then reliably compute their contribution to multi-decadal changes in the meridional transport of heat? Are the crude parameterizations used in practice up to the task of skillfully predicting the physical processes in the ocean several tens of years ahead? I submit they are not.

Since heat storage and heat transport in the oceans are crucial to the dynamics of the climate system, yet cannot be properly observed or modeled, one has to admit that claims about the predictive performance of climate models are built on quicksand. Climate modelers claiming predictive skill decades into the future operate in a fantasy world, where they have to fiddle with the numerous knobs of the parameterizations to produce results that have some semblance of veracity. Firm footing? Forget it!

Gavin Schmidt is not the only meteorologist with an inadequate grasp of the role of the oceans in the climate system. In my weblog of June 24, 2008, I addressed the limited perception that at least one other climate modeler appears to have. A few lines from that essay deserve repeating here. In response to a paper by Tim Palmer of ECMWF, I wrote: “Palmer et al. seem to forget that, though weather forecasting is focused on the rapid succession of atmospheric events, climate forecasting has to focus on the slow evolution of the circulation in the world ocean and slow changes in land use and natural vegetation. In the evolution of the Slow Manifold (to borrow a term coined by Ed Lorenz) the atmosphere acts primarily as stochastic high-frequency noise. If I were still young, I would attempt to build a conceptual climate model based on a deterministic representation of the world ocean and a stochastic representation of synoptic activity in the atmosphere.”
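The conceptual model Tennekes sketches (a slow, deterministic ocean integrating fast atmospheric noise) is essentially the stochastic climate model proposed by Klaus Hasselmann in 1976. A toy version, with parameter values chosen here purely for illustration, is just a damped random walk:

```python
import random

LAMBDA = 0.1   # slow oceanic damping per time step (illustrative value)
SIGMA = 1.0    # amplitude of fast atmospheric "weather" noise (illustrative)

def slow_manifold(n_steps, seed=0):
    """Ocean temperature anomaly driven by stochastic synoptic forcing."""
    rng = random.Random(seed)
    T, series = 0.0, []
    for _ in range(n_steps):
        T += -LAMBDA * T + SIGMA * rng.gauss(0.0, 1.0)  # slow decay + noise
        series.append(T)
    return series

# The output is red noise: slow, multi-decadal excursions emerge from
# integrated weather noise alone, with no external forcing at all.
```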

From my perspective it is not a little bit alarming that the current generation of climate models cannot simulate such fundamental phenomena as the Pacific Decadal Oscillation. I will not trust any climate model until and unless it can accurately represent the PDO and other slow features of the world ocean circulation. Even then, I would remain skeptical about the potential predictive skill of such a model many tens of years into the future.

Thursday, January 22, 2009

Consistent With Chronicles, Antarctic Edition

Consistent With Chronicles, Antarctic Edition. By Roger Pielke, Jr
Prometheus, January 21st, 2009

Excerpts:

A new paper is out in Nature that argues that the Antarctic continent has been warming. In an AP news story, two of its authors (one is Michael Mann from the Real Climate blog) argue that this refutes the skeptics and is “consistent with” greenhouse warming:

“Contrarians have sometime grabbed on to this idea that the entire continent of Antarctica is cooling, so how could we be talking about global warming,” said study co-author Michael Mann, director of the Earth System Science Center at Penn State University. “Now we can say: no, it’s not true … It is not bucking the trend.”

The study does not point to man-made climate change as the cause of the Antarctic warming — doing so is a highly intricate scientific process — but a different and smaller study out late last year did make that connection.

“We can’t pin it down, but it certainly is consistent with the influence of greenhouse gases,” said NASA scientist Drew Shindell, another study co-author. Some of the effects also could be natural variability, he said.

Of course, not long ago we learned from Real Climate that a cooling Antarctica was “consistent with” greenhouse warming and thus the skeptics were wrong:

. . . we often hear people remarking that parts of Antarctica are getting colder, and indeed the ice pack in the Southern Ocean around Antarctica has actually been getting bigger. Doesn’t this contradict the calculations that greenhouse gases are warming the globe? Not at all, because a cold Antarctica is just what calculations predict… and have predicted for the past quarter century. . .

. . . computer models have improved by orders of magnitude, but they continue to show that Antarctica cannot be expected to warm up very significantly until long after the rest of the world’s climate is radically changed.

Bottom line: A cold Antarctica and Southern Ocean do not contradict our models of global warming. For a long time the models have predicted just that.

So a warming Antarctica and a cooling Antarctica are both “consistent with” model projections of global warming. [...]

AP Article By Seth Borenstein Entitled “Study: Antarctica Joins Rest Of Globe In Warming”

Follow Up On Today’s AP Article By Seth Borenstein Entitled “Study: Antarctica Joins Rest Of Globe In Warming”, by Roger Pielke Sr
Climate Science, Jan 21, 2009

An AP article was released today which reports on a Nature paper on a finding of warming over much of Antarctica. I was asked by Seth Borenstein to comment on the paper (which he sent to me). I have been critical of his reporting in the past, but except for the title of the article (which, as I understand it, is created by others), he presented a balanced summary of the study.

My reply to Seth is given below.

I have read the paper and have the following comments/questions

1. The use of the passive infrared brightness temperatures from the AVHRR (a polar orbiting satellite) means that only time samples of the surface temperature are obtained. The surface observations, in contrast, provide maximum and minimum temperatures which are used to construct the surface mean temperature trend. The correlation between the two data sets, therefore, requires assumptions on the temporal variation of the brightness temperature at locations removed from the surface in-situ observations. What uncertainty (quantitatively) resulted from their interpolation procedure?

2. Since the authors use data from 42 occupied stations and 65 AWSs sites, they should provide photographs of the locations (e.g. as provided in
http://gallery.surfacestations.org/main.php?g2_itemId=20) in order to ascertain how well they are sited. These photographs presumably exist. Do any of the surface observing sites produce a possible bias because they are poorly sited at locations with significant local human microclimate modifications?

3. How do the authors reconcile the conclusions in their paper with the cooler than average long term sea surface temperature anomalies off of the coast of Antarctica? [see
http://www.osdpd.noaa.gov/PSB/EPS/SST/data/anomnight.1.15.2009.gif]. These cool anomalies have been there for at least several years. This cool region is also undoubtedly related to the above average Antarctic sea ice areal coverage that has been monitored over recent years; see http://arctic.atmos.uiuc.edu/cryosphere/IMAGES/current.anom.south.jpg].

4. In Figure 2 of their paper, much of their analyzed warming took place prior to 1980. For East Antarctica, the trend is essentially flat since 1980. The use of a linear fit for the entire period of the record produces a larger trend than has been seen in more recent years.

In terms of the significance of their paper, it overstates what they have obtained from their analysis. In the abstract they write, for example,

“West Antarctic warming exceeds 0.1C per decade over the past 50 years”.

However, even a cursory view of Figure 2 shows that since the late 1990s, the region has been cooling in their analysis. The paper would be more balanced if they presented this result, even if they cannot explain why.

Please let me know if you would like more feedback. Thank you for reaching out to include a broader perspective on these papers in your articles.

Regards

Roger

Monday, January 12, 2009

RealClimate: Communicating the Science of Climate Change

Communicating the Science of Climate Change, by Michael Mann
Real Climate, January 12, 2009 @ 9:14 AM


It is perhaps self-evident that those of us here at RealClimate have a keen interest in the topic of science communication. A number of us have written books aimed at communicating the science to the lay public, and have participated in forums devoted to the topic of science communication (see e.g. here, here, and here). We have often written here about the challenges of communicating science to the public in the modern media environment (see e.g. here, here, and here).

It is naturally our pleasure, in this vein, to bring to the attention of our readers a masterful new book on this topic by veteran environmental journalist and journalism educator Bud Ward. The book, entitled Communicating on Climate Change: An Essential Resource for Journalists, Scientists, and Educators, details the lessons learned in a series of Metcalf Institute workshops held over the past few years, funded by the National Science Foundation, and co-organized by Ward and AMS senior science and communications fellow Tony Socci. These workshops have collectively brought together numerous leading members of the environmental journalism and climate science communities in an effort to develop recommendations that might help bridge the cultural divide between these two communities that sometimes impedes accurate and effective science communication.

I had the privilege of participating in a couple of the workshops, including the inaugural workshop in Rhode Island in November 2003. The discussions emerging from these workshops were, at least in part, the inspiration behind "RealClimate". The workshops formed the foundation for this new book, which is an appropriate resource for scientists, journalists, editors, and others interested in science communication and popularization. In addition to instructive chapters such as "Science for Journalism", "Journalism for Scientists" and "What Institutions Can Do", the book is interspersed with a number of insightful essays by leading scientists (e.g. "Mediarology–The Role of Climate Scientists in Debunking Climate Change Myths" by Stephen Schneider) and environmental journalists (e.g. "Hot Words" by Andy Revkin). We hope this book will serve as a standard reference for how to effectively communicate the science of climate change.

Tuesday, January 6, 2009

RealClimate FAQ on climate models: Part II

FAQ on climate models: Part II. By Gavin Schmidt
Real Climate, Jan 06, 2009 @ 8:09 AM

[This is a continuation of a previous post including interesting questions from the comments.]

What are parameterisations?

Some physics in the real world that is necessary for a climate model to work is only known empirically. Or perhaps the theory only really applies at scales much smaller than the model grid size. This physics needs to be 'parameterised' i.e. a formulation is used that captures the phenomenology of the process and its sensitivity to change but without going into all of the very small scale details. These parameterisations are approximations to the phenomena that we wish to model, but which work at the scales the models actually resolve. A simple example is the radiation code - instead of using a line-by-line code which would resolve the absorption at over 10,000 individual wavelengths, a GCM generally uses a broad-band approximation (with 30 to 50 bands) which gives very close to the same results as a full calculation. Another example is the formula for the evaporation from the ocean as a function of the large-scale humidity, temperature and wind-speed. This is really a highly turbulent phenomenon, but there are good approximations that give the net evaporation as a function of the large scale ('bulk') conditions. In some parameterisations, the functional form is reasonably well known, but the values of specific coefficients might not be. In these cases, the parameterisations are 'tuned' to reproduce the observed processes as much as possible.
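To make the evaporation example concrete, a bulk-formula parameterisation looks roughly like the sketch below. The exchange coefficient and the Tetens saturation formula are common textbook choices, not the code of any particular GCM:

```python
import math

def q_saturation(T_celsius, pressure_hPa=1013.25):
    """Saturation specific humidity (kg/kg), Tetens formula."""
    e_sat = 6.112 * math.exp(17.67 * T_celsius / (T_celsius + 243.5))  # hPa
    return 0.622 * e_sat / (pressure_hPa - 0.378 * e_sat)

def bulk_evaporation(sst_C, q_air, wind_speed, rho_air=1.2, C_E=1.3e-3):
    """Evaporative moisture flux (kg m-2 s-1) from large-scale ('bulk') variables.

    All the turbulence is collapsed into the exchange coefficient C_E,
    exactly the kind of empirically constrained parameter described above.
    """
    deficit = q_saturation(sst_C) - q_air
    return rho_air * C_E * wind_speed * max(deficit, 0.0)  # dew deposition ignored
```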


How are the parameterisations evaluated?

In at least two ways: at the process scale, and at the emergent phenomena scale. For instance, taking one of the two examples mentioned above, the radiation code can be tested against field measurements at specific times and places where the composition of the atmosphere is known, alongside a line-by-line code. It would need to capture the variations seen over time (the daily cycle, weather, cloudiness etc.). This is a test at the level of the actual process being parameterised and is a necessary component in all parameterisations. The more important tests occur when we examine how the parameterisation impacts larger-scale or emergent phenomena. Does changing the evaporation improve the patterns of precipitation? The match of the specific humidity field to observations? etc. This can be an exhaustive set of tests, but it is mostly necessary. Note that most 'tunings' are done at the process level. Only those that can't be constrained using direct observations of the phenomena are available for tuning to get better large scale climate features. As mentioned in the previous post, there are only a handful of such parameters that get used in practice.


Are clouds included in models? How are they parameterised?

Models do indeed include clouds, and do allow changes in clouds as a response to forcings. There are certainly questions about how realistic those clouds are and whether they have the right sensitivity - but all models do have them! In general, models suggest that they are a positive feedback - i.e. there is a relative increase in high clouds (which warm more than they cool) compared to low clouds (which cool more than they warm) - but this is quite variable among models and not very well constrained from data.

Cloud parameterisations are amongst the most complex in the models. The large differences in mechanisms for cloud formation (tropical convection, mid-latitude storms, marine stratus decks) require multiple cases to be looked at and many sensitivities to be explored (to vertical motion, humidity, stratification etc.). Clouds also have important micro-physics that determine their properties (such as cloud particle size and phase) and interact strongly with aerosols. Standard GCMs have most of this physics included, and some are even going so far as to embed cloud resolving models in each grid box. These models are supposed to do away with much of the parameterisation (though they too need some, smaller-scale, ones), but at the cost of greatly increased complexity and computation time. Something like this is probably the way of the future.


What is being done to address the considerable uncertainty associated with cloud and aerosol forcings?

As alluded to above, cloud parameterisations are becoming much more detailed and are being matched to an ever larger amount of observations. However, there are still problems in getting sufficient data to constrain the models. For instance, it's only recently that separate diagnostics for cloud liquid water and cloud ice have become available. We still aren't able to distinguish different kinds of aerosols from satellites (though maybe by this time next year).

However, none of this is to say that clouds are a done deal - they certainly aren't. In both cloud and aerosol modelling, the current approach is to get as wide a spectrum of approaches as possible and to discern what is and what is not robust among those results. Hopefully we will soon start converging on the approaches that are the most realistic, but we are not there yet.

Forcings over time are a slightly different issue, and there it is likely that substantial uncertainties will remain because of the difficulty in reconstructing the true emission data for periods more than a few decades back. That involves making pretty unconstrained estimates of the efficiency of 1930s technology (for instance) and 19th Century deforestation rates. Educated guesses are possible, but independent constraints (such as particulates in ice cores) are partial at best.


Do models assume a constant relative humidity?

No. Relative humidity is a diagnostic of the models' temperature and water distribution and will vary according to the dynamics, convection etc. However, many processes that remove water from the atmosphere (i.e. cloud formation and rainfall) have a clear functional dependence on the relative humidity rather than the total amount of water (i.e. clouds form when air parcels are saturated at their local temperature, not when humidity reaches X g/m3). This leads to the phenomenon, observed in the models and the real world, that long-term mean relative humidity is pretty stable. In models it varies by a couple of percent over temperature changes that lead to specific humidity (the total amount of water) changing by much larger amounts. Thus a good estimate of the model relative humidity response is that it is roughly constant, similar to the situation seen in observations. But this is a derived result, not an assumption. You can see for yourself here (select Relative Humidity (%) from the diagnostics).
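
The underlying arithmetic is easy to check for yourself: saturation vapour pressure rises by roughly 7% per degree of warming (Clausius-Clapeyron), so holding relative humidity fixed implies the actual water content rises by about the same fraction. A quick illustration, using Bolton's (1980) approximation rather than any model code:

```python
import math

def saturation_vapour_pressure(temp_c):
    """Approximate saturation vapour pressure over water (hPa)."""
    return 6.112 * math.exp(17.67 * temp_c / (temp_c + 243.5))

rh = 0.75  # suppose relative humidity stays fixed
for t in (14.0, 15.0, 16.0):
    e = rh * saturation_vapour_pressure(t)
    print(f"T = {t:.0f} C -> vapour pressure = {e:.2f} hPa")
# each degree of warming raises the water content by ~6-7%
```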


What are boundary conditions?

These are the basic data input into the models that define the land/ocean mask, the height of the mountains, river routing and the orbit of the Earth. For standard models additional inputs are the distribution of vegetation types and their properties, soil properties, and mountain glacier, lake, and wetland distributions. In more sophisticated models some of what were boundary conditions in simpler models have now become prognostic variables. For instance, dynamic vegetation models predict the vegetation types as a function of climate. Other examples in a simple atmospheric model might be the distribution of ozone or the level of carbon dioxide. In more complex models that calculate atmospheric chemistry or the carbon cycle, the boundary conditions would instead be the emissions of ozone precursors or anthropogenic CO2. Variations in these boundary conditions (for whatever reason) will change the climate simulation and can be considered forcings in the most general sense (see the next few questions).
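
Schematically (with invented names, purely for illustration), the split between what is prescribed and what becomes prognostic might look like this:

```python
# In a simple atmospheric model, these are all fixed inputs:
simple_model_boundary_conditions = {
    "land_ocean_mask": "input file",
    "orography": "input file",
    "vegetation_types": "prescribed map",
    "ozone_distribution": "prescribed climatology",
    "co2_concentration_ppm": 368.0,
}

# In a model with interactive chemistry and a carbon cycle, ozone and CO2
# become prognostic variables; the boundary conditions are the emissions:
complex_model_boundary_conditions = {
    "land_ocean_mask": "input file",
    "orography": "input file",
    "ozone_precursor_emissions": "emissions inventory",
    "anthropogenic_co2_emissions": "emissions inventory",
}
```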


Does the climate change if the boundary conditions are stable?

The answer to this question depends very much on perspective. On the longest timescales a climate model with constant boundary conditions is stable - that is, the mean properties and their statistical distribution don't vary. However, the spectrum of variability can be wide, and so there are variations from one decade to the next, and from one century to the next, that are the result of internal variations in (for instance) the ocean circulation. While the long-term stability is easy to demonstrate in climate models, it can't be unambiguously determined whether this is true in the real world, since boundary conditions are always changing (albeit slowly most of the time).
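
A toy 'red noise' (AR(1)) model makes the point: nothing in the setup below ever changes, yet individual decades still differ noticeably while the long-run mean stays stable. This is only a caricature of unforced variability, not a climate model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, memory, noise_amp = 1000, 0.7, 0.1

# temperature anomaly with year-to-year persistence but fixed 'physics'
temps = np.zeros(n_years)
for yr in range(1, n_years):
    temps[yr] = memory * temps[yr - 1] + noise_amp * rng.standard_normal()

decadal_means = temps.reshape(-1, 10).mean(axis=1)
print(f"long-run mean = {temps.mean():+.3f}")
print(f"spread (std) of decadal means = {decadal_means.std():.3f}")
```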


Does the climate change if boundary conditions change?

Yes. If any of the factors that influence the simulation change, there will be a response in the climate. It might be large or small, but it will always be detectable if you run the model for long enough. For example, making the Rockies smaller (as they were a few million years ago) changes the planetary wave patterns and the temperature patterns downstream. Changing the ozone distribution changes temperatures, the height of the tropopause and stratospheric winds. Changing the land-ocean mask (because of sea level rise or tectonic changes for instance) changes ocean circulation, patterns of atmospheric convection and heat transports.


What is a forcing then?

The most straightforward definition is simply that a forcing is a change in any of the boundary conditions. Note however that this definition is not absolute with respect to any particular bit of physics. Take ozone for instance. In a standard atmospheric model, the ozone distribution is fixed and any change in that fixed distribution (because of stratospheric ozone depletion, tropospheric pollution, or changes over a solar cycle) would be a forcing causing the climate to change. In a model that calculates atmospheric chemistry, the ozone distribution is a function of the emissions of chemical precursors, the solar UV input and the climate itself. In such a model, ozone changes are a response (possibly leading to a feedback) to other imposed changes. Thus it doesn't make sense to ask whether ozone changes are or aren't a forcing without discussing what kind of model you are talking about.

There is however a default model setup in which many forcings are considered. This is not always stated explicitly and leads to (somewhat semantic) confusion even among specialists. This setup consists of an atmospheric model with a simple mixed-layer ocean model, but without chemistry, aerosol, vegetation or dynamic ice sheet modules. Not coincidentally, this corresponds to the state of the art of climate models around 1980, when the first comparisons of different forcings started to be done. It persists in the literature all the way through to the latest IPCC report (figure xx). However, there is a good reason for this, and that is the observation that different forcings with equal 'radiative' impacts have very similar responses. This allows many different forcings to be compared in magnitude and added up.

The 'radiative forcing' is calculated (roughly) as the net change in radiative fluxes (both short wave and long wave) at the top of the atmosphere when a component of the default model setup is changed. Increased solar irradiance is an easy radiative forcing to calculate, as is the value for well-mixed greenhouse gases. The direct effect of aerosols (the change in reflectance and absorption) is also easy (though uncertain due to the distributional uncertainty), while the indirect effect of aerosols on clouds is a little trickier. However, some forcings in the general sense defined above don't have an easy-to-calculate 'radiative forcing' at all. What is the radiative impact of opening the Isthmus of Panama? Or of the collapse of Lake Agassiz? Yet both of these examples have large impacts on the models' climate. Some other forcings have a very small global radiative forcing and yet lead to large impacts (orbital changes for instance) through components of the climate that aren't included in the default setup. This isn't a problem for actually modelling the effects, but it does make comparing them to other forcings, without doing the calculations, a little more tricky.
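
For the easy cases, the numbers can be sketched with standard back-of-envelope approximations - the widely used Myhre et al. (1998) fit for CO2 and simple planetary energy balance for solar. These are diagnostic formulas, not what a GCM actually does internally:

```python
import math

def co2_forcing(c_new_ppm, c_ref_ppm=280.0):
    """Radiative forcing (W/m^2) from a CO2 change: ~5.35 * ln(C/C0)."""
    return 5.35 * math.log(c_new_ppm / c_ref_ppm)

def solar_forcing(delta_tsi, albedo=0.3):
    """Forcing from a change in total solar irradiance, spread over the
    sphere (factor 4) and reduced by the planetary albedo."""
    return delta_tsi * (1.0 - albedo) / 4.0

print(f"doubled CO2: {co2_forcing(560.0):.2f} W/m^2")   # ~3.7 W/m^2
print(f"+1 W/m^2 TSI: {solar_forcing(1.0):.2f} W/m^2")  # ~0.18 W/m^2
```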


What are the differences between climate models and weather models?

Conceptually they are very similar, but in practice they are used very differently. Weather models use as much data as is available to start off close to the current weather situation and then use their knowledge of physics to step forward in time. This has good skill for a few days and some skill for a little longer. Because they are run for short periods of time only, they tend to have much higher resolution and more detailed physics than climate models (note, though, that the Hadley Centre, for instance, uses the same model for climate and weather purposes). Weather models develop in ways that improve the short-term predictions, though the impact on long-term statistics or the climatology needs to be assessed independently. Curiously, the best weather models often have a much worse climatology than the best climate models. There are many current attempts to improve the short-term predictability in climate models in line with the best weather models, though it is unclear what impact that will have on projections.


How are solar variations represented in the models?

This varies a lot because of uncertainties in the past record and complexities in the responses. But given a particular estimate of solar activity there are a number of modelled responses. First, the total amount of solar radiation (TSI) can be varied - this changes the total amount of energy coming into the system and is very easy to implement. Second, the variations over the solar cycle at different frequencies (from the UV to the near infra-red) don't all have the same amplitude - UV changes are about 10 times as large as those in the total irradiance. Since UV is mostly absorbed by ozone in the stratosphere, including these changes increases the magnitude of the solar cycle variability in the stratosphere. Furthermore, the change in UV has an impact on the production of ozone itself (even down into the troposphere). This can be calculated with chemistry-climate models, and is increasingly being used in climate model scenarios (see here for instance).
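
In its simplest form, the TSI option amounts to feeding the radiation code a slowly varying irradiance. A deliberately idealised sketch (a clean sinusoidal cycle, whereas real reconstructions are irregular; spectral variations would be applied band by band in the radiation code):

```python
import math

TSI_MEAN = 1361.0    # W/m^2, approximate modern mean value
CYCLE_AMP = 0.5      # W/m^2, roughly the observed solar-cycle amplitude
CYCLE_YEARS = 11.0

def tsi(year):
    """Idealised total solar irradiance (W/m^2) for a given model year."""
    return TSI_MEAN + CYCLE_AMP * math.sin(2.0 * math.pi * year / CYCLE_YEARS)

for year in range(0, 12, 3):
    print(f"year {year:2d}: TSI = {tsi(year):.2f} W/m^2")
```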

There are also other hypothesised impacts of solar activity on climate, most notably the impact of galactic cosmic rays (which are modulated by the solar magnetic activity on solar cycle timescales) on atmospheric ionisation, which in turn has been linked to aerosol formation, and in turn linked to cloud amounts. Most of these links are based on untested theories and somewhat dubious correlations; however, as was recognised many years ago (Dickinson, 1975), it is a plausible idea. Implementing it in climate models is however a challenge. It requires models to have a full model of aerosol creation, growth, accretion and cloud nucleation. There are many other processes that affect aerosols, and GCR-related ionisation is only a small part of that. Additionally there is a huge amount of uncertainty in aerosol-cloud effects (the 'aerosol indirect effect'). Preliminary work seems to indicate that the GCR-aerosol-cloud link is very small (i.e. the other effects dominate), but this is still in the early stages of research. Should this prove to be significant, climate models will likely incorporate it directly (using embedded aerosol codes), or will parameterise the effects based on calculated cloud variations from more detailed models. What models can't do (except perhaps as a sensitivity study) is take purported global-scale correlations and just 'stick them in' - cloud processes and effects are so tightly wound up in the model dynamics and radiation, and have so much spatial and temporal structure, that this couldn't be done in a way that made physical sense. For instance, part of the observed correlation could be due to the other solar effects, and so how could they be separated out? (And that's even assuming that the correlations actually hold up over time, which doesn't seem to be the case.)


What do you mean when you say a model has “skill”?

'Skill' is a relative concept. A model is said to have skill if it gives more information than a naive heuristic. Thus for weather forecasts, a prediction is described as skilful if it works better than just assuming that each day is the same as the last ('persistence'). It should be noted that 'persistence' is itself much more skilful than climatology (the historical average for that day) for about a week. For climate models, there is a much larger range of tests available and there isn't necessarily an analogue for 'persistence' in all cases. For a simulation of a previous time period (say the mid-Holocene), skill is determined relative to 'no change from the present'. Thus if a model predicts a shift northwards of the tropical rain bands (as was observed), that would be skilful. This can be quantified, and different models can exhibit more or less skill with respect to that metric. For the 20th Century, models show skill for the long-term changes in global and continental-scale temperatures - but only if natural and anthropogenic forcings are used - compared to an expectation of no change. Standard climate models don't show skill at interannual timescales, which depend heavily on El Niños and other relatively unpredictable internal variations (note that initialised climate model projections that use historical ocean conditions may show some skill, but this is still a very experimental endeavour).
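
A standard way to quantify this is a skill score of the form S = 1 - MSE(model)/MSE(reference): 1 is perfect, 0 is no better than the naive reference, and negative values are worse. A sketch with placeholder numbers, using 'no change from the present' as the reference, as in the mid-Holocene example above:

```python
import numpy as np

observed_change = np.array([0.8, 1.1, -0.3, 0.5])   # e.g. proxy-derived shifts
model_change = np.array([0.6, 0.9, -0.1, 0.7])      # model-simulated shifts
reference_change = np.zeros_like(observed_change)   # naive: no change

mse_model = np.mean((model_change - observed_change) ** 2)
mse_ref = np.mean((reference_change - observed_change) ** 2)
print(f"skill = {1.0 - mse_model / mse_ref:.2f}")
```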


How much can we learn from paleoclimate?

Lots! The main issue is that for the modern instrumental period the changes in many aspects of climate have not been very large - either compared with what is projected for the 21st Century, or with what we see in the past climate record. Thus we can't rely on the modern observations to properly assess the sensitivity of the climate to future changes. For instance, we don't have any good observations of changes in the ocean's thermohaline circulation over recent decades, because a) the measurements are difficult, and b) there is a lot of noise. However, in periods in the past, say around 8,200 years ago, or during the last ice age, there is lots of evidence that this circulation was greatly reduced, possibly as a function of surface freshwater forcing from large lake collapses or from the ice sheets. If those forcings and the response can be quantified, they provide good targets against which the models' sensitivity can be tested. The periods possibly of the most interest for testing sensitivities associated with uncertainties in future projections are the mid-Holocene (for tropical rainfall, sea ice), the 8.2kyr event (for the ocean thermohaline circulation), the last two millennia (for decadal/multi-decadal variability), the last interglacial (for ice sheets/sea level) etc. There are plenty of other examples, and of course there is a lot of intrinsic interest in paleoclimate that is not related to climate models at all!

As before, if there are additional questions you'd like answered, put them in the comments and we'll collate the interesting ones for the next FAQ.

Monday, December 29, 2008

“Forecasting the Future of Hurricanes” by Anna Barnett In Nature

“Forecasting the Future of Hurricanes” by Anna Barnett In Nature. By Roger Pielke Sr.
Climate Science, December 29, 2008 7:00 am

There was a recent Nature news article:
Barnett, A., 2008: Forecasting the future of hurricanes. Nature News. Published online December 11, 2008. doi:10.1038/news.2008.1298.
The article is subtitled:

A meteorologist’s new model zooms in on how climate change affects Atlantic storms. By Anna Barnett

“The world’s most advanced simulation of extreme weather on a warming Earth completed its first run on 5 December. Greg Holland at the US National Center for Atmospheric Research (NCAR) in Boulder, Colorado, is leading the project, which nests detailed regional forecasts into a model of global climate change up to the mid-21st century. Under the model’s microscope are future hurricane seasons in the Gulf of Mexico and the Caribbean, along with rainfall over the Rocky Mountains and wind patterns in the Great Plains.”

This type of article perpetuates the myth that the climate science community currently has the capability to make skilled regional multi-decadal predictions [in this case, of hurricane activity]. Such claims do not conform even to the statements of IPCC authors.

For example, see An Essay “The IPCC Report: What The Lead Authors Really Think” By Ann Henderson-Sellers where she reports that

“The rush to emphasize regional climate does not have a scientifically sound basis.”

Even Kevin Trenberth, one of the Lead IPCC authors, has written (see)

“the science is not done because we do not have reliable or regional predictions of climate.” [see the Climate Science posting on the Trenberth essay - Comment on the Nature Weblog By Kevin Trenberth Entitled “Predictions of climate”.]
The Nature article “Forecasting the future of hurricanes” is yet another example of not critically and objectively assessing claims made by climate scientists. Whatever happened to objective journalism in Nature?