Discounting over long periods, an operative view

April 15th, 2014 No comments

I have written earlier on issues where choices related to discounting enter strongly. I have also argued that many of the central questions cannot be answered by approaches that are strongly affected by small changes in the discount rate. In particular I have found that the comparison of alternatives is likely to be dominated by too poorly known inputs whenever the use of very low discount rates is otherwise justified. In such cases results obtained with high discount rates are likely to be erroneous, but figuring out the more correct conclusions requires some other approach.

Here I do, however, start anew from basics, trying to make the concepts and settings clearer for myself, and hoping that this will also be helpful for some others. The approach is based on an operational perspective. By operational I mean everything that influences practical decision making, with no limitation to some type of application, but having in mind the most difficult questions related to the very long time scales of intergenerational justice.

I start from the concepts of consumption and capital. Capital includes everything that can be used in the future; some examples are natural resources, buildings, machinery, and knowledge. Neither consumption nor capital can be described well by a single summary number, and neither is fully measurable in monetary units. The consumptions of two individuals are separate dimensions of consumption, and combining them into the total consumption of those two is not straightforward. In the following I at first skip references to capital; its role will come up at the end of this post.

Consumption is linked strongly to well-being, but again the issue is complex. A further concept that helps in making the link is the utility of consumption. The idea is that each consumption pattern of an individual at some moment can be given a utility value.

To proceed, various assumptions must be made. When we compare two possible alternatives for a decision, we need some measure that tells which alternative is the better choice. We may use just our intuition, but what I’m discussing here is a more systematic and more quantitative approach that can also be presented to others, argued about knowing what the argument is based on, and discussed constructively in search of better mutual agreement and acceptable compromises.

The first step is to figure out a measure for the utility of each individual. The standard solution is to use a simple function of the total consumption measured in monetary units. The utility function is generally chosen in such a way that the utility of one monetary unit is much higher for the poor than for the rich. Several different functional forms have been proposed for linking utility to monetary consumption; most follow either a power law or are logarithmic. Some of them are discussed in my earlier post “Climate policies, sustainable development and real options”. In the following we use mainly the case η = 2:

U(C) = \frac{1}{C_0} - \frac{1}{C}   (1)

\delta U(C) = U(C+\delta C) - U(C) \approx \frac{1}{C^2} \delta C   (2)

where \delta C is a small change in consumption and \delta U(C) the corresponding change in utility. In formula (1) the constant first term is added to make the utility positive above C_0. Some minimum level of C must be applied, because a single individual might otherwise dominate the total global utility.
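
As a minimal numerical sketch of what formula (2) implies (the consumption levels are hypothetical, chosen only for illustration), the marginal utility 1/C² makes one monetary unit worth vastly more to the poor than to the rich:

    # Marginal utility of consumption for eta = 2, as in formula (2).
    # The consumption levels are hypothetical, for illustration only.
    def marginal_utility(c):
        return 1.0 / c**2

    c_poor, c_rich = 1_000.0, 100_000.0   # annual consumption in monetary units
    print(marginal_utility(c_poor) / marginal_utility(c_rich))   # 10000.0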

The second step is calculating the overall difference in the total utility between our alternative decisions, summed over all individuals and integrated over all future times:

\delta U_{tot} = \int_{t_0}^\infty(\sum_i{a_i(t) \delta U_i(t)})dt   (3)

According to this formula the total difference between the utilities of the two alternatives is formed by summing the individual momentary differentials over all individuals and integrating over all future times. The weight function a_i(t) tells both the relative weight given to each individual and the time dependence of these weights, i.e. the discounting of the utilities. The formula allows for giving different weights to each individual and to each point of time, but does not require that, as long as the result is finite based on the properties of the values \delta U_i(t).

All discounting implied by the time dependence of the weights a_i(t) has the nature of pure time preference, where the distant future is given less weight than the immediate one. Similarly all dependence on the individual has the nature of kinship preference, where more weight is given to those closer to us by some measure. Whether pure time preference and kinship preference can be accepted is a subject of dispute that can be decided only based on the value judgments of individuals.

The above is true as long as U_i represents a true unbiased estimate of the utilities. In practice, however, formulas of the structure of (3) are mostly used in ways where that’s not true. I turn now to the implications of these deviations. The best known effect of this nature is obtained when the changes in utilities are replaced by changes in consumption in formula (3). To see how that influences the outcome, we use (2) in (3) and get

\delta U_{tot} = \int_{t_0}^\infty(\sum_i{\frac{a_i(t)}{C_i(t)^2} \delta C_i(t)})dt   (4)

Thus the time dependence of the coefficients now includes both the possible pure time preference and the time dependence of the level of consumption. If consumption is increasing exponentially, that results in exponential discounting of the changes in consumption. This is the wealth effect that many have considered the strongest argument for discounting, but which fails in the absence of growth, and which leads to a negative contribution to the discount rate in cases where consumption is declining.
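
To make the wealth effect explicit (a standard manipulation, nothing beyond formula (4)): with exponential growth of consumption at rate g, the factor 1/C_i(t)^2 in (4) decays exponentially,

C_i(t) = C_i(t_0)\, e^{g(t-t_0)} \quad\Rightarrow\quad \frac{1}{C_i(t)^2} = \frac{1}{C_i(t_0)^2}\, e^{-2g(t-t_0)}

so equal changes in consumption are discounted at the rate 2g (more generally ηg), on top of any pure time preference contained in a_i(t).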

There are, however, two more issues that enter in the use of the formula (3) in virtually all long term applications that I know about. In my view these two issues have an essential influence on the outcome.

The first of these concerns the way the changes in consumption are estimated and used as input to the formula. The influence of later decisions by others, and the related automatic adaptation to the prevailing conditions of the time, is not evaluated properly. Later decisions are based on more information than is presently available, including knowledge of the consequences of the original choice. If a decision that turns out to be justified based on later posterior knowledge is not made now, it’s likely that the error is corrected later, leading to a reduction in the loss. Adaptation to new conditions means that even permanent differences in living conditions that result from alternative decisions of today typically lead to a difference in future well-being that gets smaller and smaller in time. The details of these adaptive processes are often impossible to foresee, but the main effect is generally unidirectional (note: this is an assertion that I believe to be true, but I have no proof of it). If the correctly estimated difference in the utilities of the alternatives goes down in time, we should apply an extra discounting factor to all calculations that do not include this effect explicitly.

The second additional issue results from the limits of our ability to make decisions that are appropriate for distant conditions, where distant may refer to temporal, geographic or cultural distance. The larger the distance, the more likely it is that the decision is not correct. When the likelihood that the chosen option is really better than the other alternative decreases, the expectation value of the difference in utilities goes down. Everybody can visualize this point by considering how well people of distant cultures or of the distant past could have made decisions valuable for his or her own present well-being. This argument gives support both for temporal discounting and for kinship-type corrections to the outcome.

There might be cases where the above does not apply. Some effects might be truly global and such that further decisions by others cannot lead to adaptation in a way that would diminish the difference between the total utilities of the two alternatives in time. If it’s considered that total well-being is higher with a larger population, then a permanent reduction in the carrying capacity of the Earth has an effect that does not diminish, but for an overall measure of well-being based on averages over the actual population the situation is not the same.

It’s plausible that the discount rate implied by the capital markets is also a manifestation of these same effects. People who decide on concrete productive investments commonly foresee that the investments lose value by losing competitiveness against new alternatives rather than by wearing out technically. They also know that investments made in unfamiliar societies are much more likely to fail than those made on home turf. These are concrete smaller scale equivalents of the issues I discuss above. There’s little reason to expect that the same values of the discount rate, or the same decline of the efficiency of decisions with distance, would apply in all applications. Even so the observations I make above may contribute to the resolution of the apparent paradoxes that remain between various approaches to discount rates.

While the issues that I describe above support the use of higher discount rates than those based on pure time preference and the wealth effect alone, there are also contrary arguments. Weitzman and Gollier have argued (see my earlier posts for references) that when longer and longer periods are being considered, the applicable discount rate declines towards the lowest discount rate that appears in the integrand of equation (4) above. Fleurbaey and Zuber (Climate policies deserve a negative discount rate, Chi. J. Int’l L., 2012) have extended the argument to take into account also the influence of the varying consumption levels of individuals, arguing that a very low, even negative long term discount rate follows already when a fraction of the individuals of the future have a consumption level lower than that of high consumption individuals of today. Both of these arguments depend on the absence of other mechanisms that lead to a cut-off in the period to be considered. Thus the effects I discuss above may largely negate the arguments.
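
The core of the Weitzman–Gollier mechanism can be stated compactly (a textbook-style sketch of their argument, not a full derivation): with an uncertain but persistent rate r, the certainty-equivalent rate at horizon t comes from averaging discount factors rather than rates,

R(t) = -\frac{1}{t}\,\ln E\left[e^{-rt}\right] \longrightarrow r_{\min} \quad (t \to \infty)

because the scenario with the lowest rate dominates the expectation of e^{-rt} at long horizons.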

The argument of Sterner and Persson, discussed briefly in my earlier post (link below), presents a point that’s directly relevant and opposite in effect to my above points. My present view is that their argument may change the outcome in some special cases, while the normal situation remains as discussed above. Further analysis is needed to give a more definitive answer.

I end this post with a short comment on the role of capital. When we consider investments we must also consider their influence on further capital formation. Science or R&D is a good example. Investing in them does not influence well-being directly, but it allows for further investments that do have such an effect. It may be easier to consider investments in science and R&D as short term investments that give their pay-back rapidly as increased knowledge capital than to try to calculate the outcome of the whole chain that ends up as a change in well-being possibly far in the future. The approach of Partha Dasgupta that I have discussed at the end of the post “Climate policies, sustainable development and real options” can be considered as extending this way of thinking to most benefits.

Categories: Climate change, General Tags:

What could be done?

April 23rd, 2011 3 comments

My previous posts have discussed the difficulty of identifying proper goals. The discussion applies to concrete acts like investment decisions rather than to policy decisions like a carbon tax, cap & trade or feed-in tariffs for renewable electricity. These policy decisions have the apparent advantage that they are supposed to lead private actors to make the needed concrete decisions. It’s certainly true that public authorities are not capable of making well all the numerous decisions that are needed for significant results, but accepting this fact does not solve the real problems.

The world is facing two related developments: running out of high quality resources of oil and gas, and adding CO2 to the atmosphere. The most straightforward answer to the first problem, switching gradually to coal and perhaps shale oil, is at the same time the worst path concerning the second problem. The oil crisis of the 1970s led to a rapid increase in the funding of alternative energy technologies. It’s certainly possible to find some signs of success from that research, but in my general judgment it has produced pitifully little. We have learned that it’s very difficult to develop solutions that have a major effect on the scale of the overall energy use. Some of the most promising alternatives, most notably new technological alternatives for nuclear energy production, have been out of favor, but mostly the lack of success reflects the difficulty of the task rather than a lack of trying.

The economics of wind power has improved slowly, and this has led to competitiveness under the most favorable conditions. In a worldwide assessment wind power cannot, however, produce enough to solve more than a small fraction of the energy problems. Solar electricity is significantly behind. Again the operating environment has a major effect, and acceptable economics could perhaps be reached rather soon in areas of best sunshine, when that is combined with a load peaking during sunny hours and a lack of more economical alternatives. Expanding from that to acceptable costs on a wider basis still appears to be a very distant goal. Overly generous feed-in tariffs have resulted in a large solar capacity in Germany, but at a cost that is far too high – by a factor exceeding five. This is in my judgment the most serious existing example of excessive support for renewable energy. Some research done in Germany agrees (see, e.g., Frondel, M., N. Ritter, C. M. Schmidt and C. Vance (2010), Economic Impacts from the Promotion of Renewable Energy Technologies: The German Experience, Energy Policy 38: 4048-4056, and Ruhr Economic Papers #156), but the opinion is certainly not shared by everybody. While the problem of wind and solar power is mainly one of economic efficiency, for biofuels the net environmental value is seriously contested. Here it’s questionable whether the support has given any positive results at all, or whether the money spent has also deteriorated the state of the environment.

The above examples are related to attempts to use economic incentives to produce more than the state of knowledge allows. A Pigovian tax that corresponds to actual externalities is economically justified, but an economic incentive scaled to produce strong effects disregarding economic efficiency may be much more distortive than doing nothing at all. This seems to be the case for numerous European policy choices. Support decisions should always have well defined goals worth the cost of the support. Such goals may include a direct effect on the energy balance or a speeding up of the development of technologies that have a good potential of reaching a positive contribution in the future, but many of the policies are not justified by either argument – or by their combination.

Pointing out the failures of present policies is not enough, but neither is it useful to continue these policies in spite of their failures. Presently I do not see better alternatives than emphasizing research on a wide range of alternatives, including research on policies for the efficient advancement of technological change and on the economic incentives of environmental and climate policies. We know far too little about how different incentives actually influence decision making. The lack of proper knowledge leads, at least in Europe, continuously to choosing policies that seem nice and make us feel that we have done something, rather than policies that efficiently influence the future development, leading to the results envisioned and avoiding unnecessary and excessive costs or collateral damage to the environment and social structures.

How to decide on climate policies

April 23rd, 2011 No comments

The justification of climate policies is in the mitigation that they provide against the perceived threats of global warming. Making wise decisions is extremely difficult, because both the severity of the threat and the value of the proposed policies are highly uncertain. An accurate and uncontroversial quantitative cost-benefit analysis is certainly beyond our capabilities. Still, every rational policy choice implies that we have judged one alternative to be superior to the others. We have expressed a view on the ordering of the alternatives, if not on the quantitative difference between them. If we wish to do that as well as possible, using as much of the available information as practical, we need some way of comparing the costs and benefits of a policy to its alternatives. The issues are important and we cannot afford making seriously suboptimal decisions.

Let’s look at the ways our decisions may affect human well-being in the near and distant future, here and in other parts of the world. If we accept the views of the Stern Review and of those with similar thoughts, the future must be considered over several hundreds of years, as their conclusions are dominated by those periods, with little weight given to the next century. But is that possible? (It’s not.) Is that necessary? (Perhaps.)

The Stern Review presents a quantitative cost-benefit analysis (CBA) comparing the discounted damages from climate change to the costs of mitigation. The costs of damages are very high, because they are summed over a very long period. A social discount rate of 0.1% corresponds to a typical time span of 1000 years. Changing that to 0.2% would halve the time span and changing it to 0.05% would double it. Choosing the value as 0.1% is totally arbitrary; there is certainly no good argument to support that value rather than any other between 0% and 0.5%. How sensitive the final cost estimates are to this value depends on the other choices made in the calculation. The Stern Review happened (sic.) to make those choices in a way that made the methods used to handle uncertainties important as well. Emphasizing the worst outcomes in the spirit of risk aversion (see my previous post) and making the uncertainties large, with a possibility of very serious consequences, turned out to be important in their analysis. Again this involved several parameters which cannot be determined with any accuracy from our present knowledge. Other choices made in the calculation have received less emphasis, but they are equally important in determining the final results.
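
The correspondence between the rate and the time span is just the e-folding time of exponential discounting; a minimal sketch with the rates mentioned above:

    import math

    # Exponential discounting: weight(t) = exp(-r*t); the e-folding time 1/r
    # gives the characteristic time span of the analysis.
    for r in (0.0005, 0.001, 0.002):          # 0.05 %, 0.1 %, 0.2 % per year
        print(f"r = {100*r:.2f} %/a: horizon 1/r = {1/r:.0f} a, "
              f"weight at 1000 a = {math.exp(-r*1000):.2f}")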

The conclusion on the approach of the Stern Review is that it’s not valid even if its premises are accepted. With those premises the result is really undefined, as it’s determined by arbitrary choices of the analysis. The analysis assumes that we can calculate the consequences of near term decisions into the very distant future. How impossible that is can be envisioned by thinking how decisions of the 19th or 18th century influence the present. Some decisions of those periods may have a great influence, but how can we decide what the alternative counterfactual history would be, and how much worse or better its results would have been? Great inventions and scientific discoveries may have the strongest influence, but were they ultimately dependent on individual decisions, or did these decisions have only a minor influence on their timing? The decisions that led to the World Wars might have been more influential, but even for them the counterfactual history is impossible to define.

Thinking about the past reveals that most individual acts have a temporary influence. Even innovations are mostly based on the actual state of knowledge. The brightest individuals speed up the process, and so do policies that support science and innovative development. Almost every concrete act has a finite time span of influence, and most discoveries would have occurred irrespective of them. Over the years people make new decisions, new innovations, and new investments, which reduce the significance of earlier acts and make the old investments obsolete. For the constructs of human societies we must realize that their influence wanes. Many decisions have an influence over short periods from days to a few years; some extend their influence to decades. For many present issues a history of hundreds or thousands of years can be identified, but even in these cases hardly any single original decision had a significant influence for long, when the comparison is made to potential counterfactual histories. It’s mostly a fallacy to say that we are here with the present state of the world because of some specific decision of distant history, and even in those very few cases we don’t know how the alternative would be different.

It may be argued that influencing the global resources and the global environment is different. There is some truth in that, as fossil fuels can be burned only once, and using them soon means less fossil fuels for the later future. Similarly carbon dioxide added to the atmosphere today disappears from there slowly, but the case is not quite the same as with fossil fuels, because the carbon cycle will continue forever, and can thus also be influenced significantly forever. The dynamics of the carbon cycle and the climate are complicated. They lead to long delays both in the warming phase and in the cooling phase that must follow, if we first use up the fossil fuels, running out of them rapidly. After all we have already used most of the easiest oil resources and a large fraction of the best gas resources. The emissions from oil and gas are bound to turn to decline rather soon, with a maximum not far above the present level. The future of coal use is more difficult to forecast, but the practical limits are strong even for coal. What will ultimately be decided about the extent of the use of coal will be important for the climate, but again the important issue is the ultimate future, not immediate acts.

We see that the human influence on climate will in the long run be determined mainly by how coal will be used (oil shale may potentially add significantly; some other fuels like peat are comparable, but their resources are much smaller). The use of coal is driven by the social value of energy and by the availability of alternative socially acceptable paths. The immediately available means of reducing the social value of the utilization of coal are limited, not by the number of alternatives, but by their quantitative potential.

The main alternatives to continued and even growing use of coal can be divided into

  • alternative energy production technologies (renewable and nuclear; carbon sequestration and storage is also a comparable solution),
  • more efficient technologies for energy use, and
  • changes in the consumption patterns.

The most commonly presented goals, requiring a limitation of the CO2 concentration in the atmosphere to 450 ppm or less, are very difficult to reach; even 550 ppm may fall into the same category. We really don’t know what kind of combination of the main approaches could be achievable. The solution must offer space for the social and economic development of emerging economies like China and India. It cannot present politically unrealistic requirements for the industrial economies either. The present optimism on possibilities expressed by many European states and organizations is not based on solid arguments.

There is an alternative way of looking at the future, which is not as directly contradicted by the impossibility of knowing much about the distant future, and which recognizes better the importance of future decisions made by future decision makers based on the better knowledge of their time. I have emphasized that in my previous posts. It is expressed by the Brundtland Commission as the principle of sustainable development, it is discussed by Partha Dasgupta in his research and books, and it has been formulated as the mathematical theory of Dynamic Programming by Richard Bellman and others. One principal observation behind all these formulations is that our present decisions have two types of consequences:

  • they influence the immediate well-being and
  • they influence the state of the world at a nearby future point of time.

This future state of the world is the basis for the decisions made at that future time, and these decisions influence in turn the following near term well-being and the state of the world at a moment a bit later in the future. This approach does not mean that the distant future would be irrelevant, but it tells that the influence comes through intermediate steps and that the decision makers active at each step modify the influence.
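
In the notation of dynamic programming this two-part structure is exactly the Bellman equation (written here in a generic textbook form, not as any specific climate model):

V(s_t) = \max_a \left[ u(s_t, a) + \beta\, V(s_{t+1}) \right], \qquad s_{t+1} = f(s_t, a)

where u is the immediate well-being produced by decision a, f tells how the decision changes the state of the world, and the value function V carries all the influence of the more distant future through the intermediate steps.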

Is this alternative approach useful in practice? There is plenty of evidence on the value of Dynamic Programming as a tool for analyzing more limited problems that are influenced by successive decision making. The most familiar problem for me concerns electricity production in a system with a lot of hydropower and reservoirs, as we have in the Nordic power system. The power system is mathematically easy to describe and the results are both excellent and valuable in practice. The worldwide human well-being over generations is certainly not easy to describe mathematically, but the approach seems to me to be far superior to the alternatives – at least if we are willing to give much weight to future generations in the analysis. Using a high social discount rate might allow for the more straightforward approach of William Nordhaus, but choices approaching those of the Stern Review make these methods practically worthless.
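
As an illustration of the hydropower case, a deliberately tiny backward-induction sketch (the reservoir size, inflow and prices are invented for the example; real models use stochastic inflows and much finer grids):

    # Backward induction for a toy reservoir: choose how much water to release
    # each week to maximize the total value of production. Numbers are invented.
    LEVELS = range(0, 5)            # discretized reservoir content, 0..4 units
    WEEKS = 10
    INFLOW = 1                      # units of water flowing in per week
    PRICE = [3, 3, 2, 5, 5, 2, 2, 4, 4, 3]   # value of one released unit, per week

    value = {lvl: 0.0 for lvl in LEVELS}     # value of stored water at the end
    for week in reversed(range(WEEKS)):
        new_value = {}
        for lvl in LEVELS:
            best = float("-inf")
            for release in range(0, lvl + INFLOW + 1):
                nxt = min(lvl + INFLOW - release, max(LEVELS))  # spill if full
                best = max(best, PRICE[week] * release + value[nxt])
            new_value[lvl] = best
        value = new_value
    print(value)   # value of each starting reservoir level over the horizon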

I have argued that future decisions reduce the weight of later times similarly to discounting, but the mechanism is different from the combination expressed in the Ramsey rule discussed in my previous post. I believe that the role of later decisions and adaptation as a factor influencing the discount rate is actually taken into account, in a simplistic way, by many economists, although I cannot present direct evidence of that. Later decisions do not affect all consequences equally, and taking all likely forms of adaptation properly into account remains a major problem in the analysis. This may have some connection with the idea of a declining discount rate, as the faster adaptation affects shorter periods and the slow adaptation longer ones. The willingness to express the discount rate by the Ramsey rule appears to be limited largely to environmental and development economists, while other economists commonly leave the determination of the discount rate to the markets, which are likely to foresee the importance of all processes that make an investment obsolete. Thus Nordhaus might actually include these effects in his DICE model calculations, although he has tried to explain his chosen rate by the Ramsey rule (and had difficulties in that attempt).

The approach that I support here will at the minimum offer a framework for discussing the multitude of issues involved, and through that also some basis for deciding which issues should at the minimum be taken explicitly into account and which might be left out without distorting the conclusions severely. Dasgupta’s book Human Well-Being and the Natural Environment (Oxford University Press, 2001) is perhaps the best starting point, although I have many reservations on the details of his conclusions.

Categories: Climate change Tags:

Climate policies, sustainable development and real options

March 14th, 2011 4 comments

The Stern Review presented the conclusion that the cost of climate change is very much higher than the cost of mitigation. The analysis was based on calculating the present value of the expected damages using a very low pure time preference of 0.1 %/a. The total discount rate was 1.4 %, as the economic growth rate was assumed to be 1.3 % and the elasticity of marginal utility was taken as one, corresponding to a logarithmic utility function. As the value of future damages grows proportionally to the economy, the pure time preference is the most important component of the discounting.

The low discount rate alone would not have led to the very high damage estimate without another factor. The possible damages included a non-negligible possibility of extreme scenarios. Using the logarithmic utility function these damage scenarios dominate the expectation value, which was found to be 10 % of the total discounted world wealth (see e.g. Dietz, Anderson, Stern, Taylor and Zenghelis: Right for the Right Reasons, World Economics, Vol 8, No 2, 2007). The result is very sensitive to the selected parameters. Thus it should be interpreted as evidence for a very large expectation value of damages rather than as a correct estimate of the damages.

Many other economists, including William Nordhaus, Martin Weitzman, Richard Tol, and Gary Yohe, have criticized the methodological choices of the Stern Review, claiming both that the properly discounted damages are much smaller and possibly also that the mitigation is likely to be much more costly than the 1-2 % of world GDP estimated in the Stern Review. On the other hand Thomas Sterner and Martin Persson have argued in An Even Sterner Review: Introducing Relative Prices into the Discounting Debate, Review of Environmental Economics and Policy, Vol 2, 1 (2008), that the damages may be even larger when the expected future development of relative prices is taken into account. They argue also that disaggregating the damages to the poor from those to the rich would add further to the utility value of the damages. Christian Gollier has likewise argued that the lowest of all plausible discount rates should be used. Gollier and Weitzman published a joint paper, How Should the Distant Future be Discounted When Discount Rates are Uncertain?, which explains how taking the risk premium correctly into account resolves the difference between Gollier’s result and the contrary conclusion presented by Weitzman following a more traditional way of looking at the risk premium. Their conclusion is that the correct discount rate declines over time toward its lowest possible value.

Partha Dasgupta discusses at some length the issue of discounting in his paper Discounting Climate Change, Journal of Risk and Uncertainty, 2008. He describes many open questions that concern the discounting procedure. One of the difficult problems is related to the basic idea of determining the discount rate from the pure time preference δ, the economic growth rate g, and the elasticity of marginal felicity η using the Ramsey rule r = δ + ηg (an approximate form valid for small rates). The elasticity of marginal felicity is a constant that describes the strength of risk aversion, or how much less the same additional income means for a rich person compared to a poor one. This preference is defined mathematically by the utility function U(C) of the consumption C.
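
With the Stern Review values quoted above, the rule reads (simple arithmetic, no new assumptions):

r = \delta + \eta g = 0.1\,\% + 1 \times 1.3\,\% = 1.4\,\% \ \text{per annum}

while keeping the growth rate but taking η = 2 would already give 0.1 % + 2 × 1.3 % = 2.7 %, which shows how strongly the conclusions depend on the choice of the elasticity.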

Risk aversion leads to a concave utility function that is monotonically increasing, but at a monotonically decreasing rate. The elasticity is defined as η = – CU”(C)/U’(C), and it is constant for the so-called CRRA (Constant Relative Risk Aversion) utility functions:

U(C) = \frac{C^{1-\eta}}{1-\eta} \quad (\eta \neq 1), \qquad U(C) = \ln(C) \quad (\eta = 1)

[Figure: CRRA utility functions plotted for a few values of η]

CRRA utility functions are popular more for their simple mathematical properties than for being in some sense correct. The logarithmic utility function and all CRRA functions with η > 1 share the intuitively plausible property that they go to minus infinity at zero consumption, signifying that the state of zero consumption is to be avoided by all possible means. This property is, however, also behind many of the problems that result from the use of utility functions, because the state of zero consumption cannot be defined uniquely in any practical analysis, and having a minus infinity at a state that cannot be defined uniquely may lead to contradictory results. For high consumption levels η > 1 leads to a utility function that approaches zero asymptotically from below. This is also a problematic property that may lead to counterintuitive results in practice. The difficulty of determining proper forms for the utility function remains unsolved, and is perhaps unsolvable, at least in theoretical settings where the same function is supposed to describe the risk aversion of investors, risk aversion related to major risks, and inequality aversion between the poor and the rich, and also strongly influences the analysis of intergenerational well-being and justice. The book The Economics of Risk and Time (MIT Press, 1999, 2004) by Christian Gollier discusses much of the research done, but cannot provide simple answers.
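
A few lines of code make these properties concrete (a throwaway sketch; the consumption grid is arbitrary):

    import math

    # CRRA utility: U(C) = C**(1-eta)/(1-eta) for eta != 1, ln(C) for eta = 1.
    def crra(c, eta):
        return math.log(c) if eta == 1 else c**(1 - eta) / (1 - eta)

    # Near zero the eta >= 1 functions dive toward minus infinity; for eta > 1
    # the utility stays negative and approaches zero from below at high C.
    for eta in (0.5, 1, 2):
        print(eta, [round(crra(c, eta), 3) for c in (0.01, 1.0, 10.0, 1000.0)])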

But is this the whole picture? Does this approach succeed in determining an answer that we should use to guide policy decisions? Not necessarily, as all these analyses may be beside the point. They may all answer a wrong question, or at least we may have an alternative question of more immediate importance that has a little better hope of having well defined answers. The problem with the above analysis is that it tries to know too much about the future. The correct posterior answers known in the future may differ to a very substantial extent from the best estimates that we can produce right now. We should not forget that the future will be different from all scenarios that we can create now, and that we can now generate extremely different plausible scenarios for the distant future – say 30 years from the present and beyond. We are really incapable of guessing what the 22nd century will be like. Trying to calculate discounted sums over 200 or 300 years just doesn’t make sense. This doesn’t mean that we should forget intergenerational justice, but it means that we should come back to the Brundtland Commission definition of sustainable development:

Sustainable development is development that meets the needs of the present without compromising the ability of future generations to meet their own needs.

This definition is very close to the spirit of optimization using the method of dynamic programming, which is also close to the concept of real options. Real options mean here anything that opens possibilities of choice for the future decision makers. In terms of real options the principle can be formulated:

Social and intergenerational justice are maximized, when we maximize the total wealth taking into account the value of real options that we transfer to the future.
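
A minimal sketch of why transferred options carry value (all numbers invented): compare committing to an irreversible project now against waiting one period until the pay-off uncertainty resolves:

    # Value of waiting: an irreversible project pays HIGH or LOW with equal
    # probability; waiting one period reveals which, at discount factor BETA.
    HIGH, LOW, COST, BETA = 300.0, 50.0, 120.0, 0.95

    invest_now = 0.5 * (HIGH + LOW) - COST            # commit before learning
    wait = BETA * 0.5 * (max(HIGH - COST, 0.0)        # invest only if worthwhile
                         + max(LOW - COST, 0.0))
    print(invest_now, wait, wait - invest_now)        # 55.0, 85.5, 30.5

The difference, 30.5 in this toy case, is the value of the real option of deciding later; destroying such options has a cost even when it never shows up in a conventional cash-flow calculation.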

This approach also has much in common with the approach of Partha Dasgupta, which he discusses most extensively in his book Human Well-Being and the Natural Environment (Oxford University Press, 2001).

We must realize that the people of each future moment build their well-being and influence the further future by combining what they have obtained from the past with what they can do themselves. Our main responsibility is to provide the next actors (which include also those of us who have not yet died) the means for maintaining their well-being and for transmitting similar means to the further future. If we do not take into account the fact that new choices are being made continuously, always using some knowledge that we do not have presently, then we are not making the right choices. The conclusion is also that improving the capabilities of the future actors is often more important than providing something ready-made for them. The latter may even be counterproductive, if the thing that we provide is of little positive value in the future.

Concerning climate policies and energy policies the conclusion is that we must put more effort into developing solutions with large potential than into building infrastructure using inefficient technologies. Money put into R&D and also into basic science is often better spent than money put into the wide scale deployment of maturing technologies. The advantages of early deployment must be taken into account, but they should not be overstated. At the same time we must avoid choices that severely reduce the resources the future generations need.

Categories: Climate change, General, Science policy Tags:

Climate science and evidence

February 28th, 2011 10 comments

0. Background: Discussion at “Climate Etc.”

During the last weeks I have been following and participating in the discussion on Dr. Judith Curry’s blog Climate Etc. Most of the comments on that site present a skeptical view of climate science, but there are also commenters close to the mainstream science and beyond. Dr. Curry is a climate scientist whose description of her views can be interpreted as agreeing with the mainstream science on all issues considered generally well known, but having less trust than most mainstream scientists in results that others also consider imprecisely known. As an example, she has given a significantly wider error range for the climate sensitivity.

Another characteristic of her site is that some guest postings have brought into the discussion ideas that are clearly contrary to the mainstream views. Judith Curry has introduced some of them in a neutral fashion even in cases which I would judge to be ill-conceived criticism of climate science using arguments that approach fallacies of logic.

What is most striking to me in the discussions is perhaps the general difficulty of understanding what the science is about, what it can provide and what it cannot, or what can be required from science for it to be valid and proper. Here I present briefly my views on some of these issues. The point of view is personal, but I expect it to agree largely with that of many active scientists, while some differences of opinion certainly exist.

1. Climate science and anthropogenic global warming (AGW)

The goal of climate science is to collect knowledge and understanding about the processes influencing climate. The goal is not to prove or disprove AGW. Actually the question of proving or disproving AGW does not make logical sense. A more correct way of presenting the relationship between climate science and AGW is that climate science tries to find out how human actions affect the climate. There is no doubt that there is some influence, but determining its strength and details is one central problem of climate science. It is not the only one, and from the point of view of the science itself the preoccupation with proving AGW is just a distraction.

Climate science does not study AGW in isolation, forgetting the other variability of the climate; it studies AGW as one component of comprehensive knowledge.

The requirement that AGW has to be proven is moot, as some influence certainly exists. Inverting the null hypothesis, i.e. requiring proof that a strong AGW does not exist, is slightly more logical, but not well defined. Therefore the whole concept of disproving some null hypothesis does not fit the problems related to AGW. The correct question is:

What do we know about the strength and other properties of AGW?

2. Empirical evidence and models

Climate is not the same as the weather at one point at one moment. It concerns the statistical properties of weather as the weather varies in time. In most cases wider areas are considered rather than a specific location. Typical climate parameters include average temperatures, the extent of variation around the mean, the severity and frequency of extreme temperatures, as well as similar quantities related to precipitation, winds and other weather patterns. None of these quantities is directly observable; all require the use of some model. At its simplest the model may specify how the measured temperatures are combined to get the average temperature at one measuring station; in other cases the empirical data are used to calibrate a rather complicated model, and the quantity searched for is an outcome of the model. The second description applies, e.g., to the satellite determination of the temperature of the lower atmosphere, and it applies to the estimation of climate sensitivity. The view that there would be a fundamental difference between the simpler and more complex cases is unjustified. The uncertainties do vary, and part of the variation is linked to the use of models, but there is no fundamental difference.
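
Even the simplest case involves modeling choices; a sketch (station data and coordinates invented) of the two steps just mentioned:

    import math

    # Step 1: a station's monthly mean from daily (min, max) pairs -- already a
    # model choice, since (min+max)/2 is only one possible daily mean.
    daily = [(-3.0, 2.0), (-1.5, 3.5), (0.0, 5.0)]    # invented readings, degC
    station_mean = sum((lo + hi) / 2 for lo, hi in daily) / len(daily)

    # Step 2: combining stations with latitude (area) weights -- another choice.
    stations = [(60.0, station_mean), (45.0, 4.2)]    # (latitude, mean temp)
    weights = [math.cos(math.radians(lat)) for lat, _ in stations]
    area_avg = sum(w * t for w, (_, t) in zip(weights, stations)) / sum(weights)
    print(station_mean, area_avg)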

For every piece of knowledge the best evidence combines empirical data with model based methods of analysis. The models used are furthermore based on a combination of theoretical understanding and further empirical data that has been used to verify their validity and to specify some of their parameters. How much the models used in the analysis of observations affect the reliability of the results varies greatly from case to case; thus no general conclusions can be presented on that. Particular problems arise from uncertain model features that affect the analysis of several sets of empirical data in a similar way. The mutual agreement of the results of separate experiments may in such cases create false confidence, as one single uncertain parameter may cause an error in all of them. These problems have been discussed in the comments on my previous posting.

All assessment reports of the IPCC have indicated that the most representative single parameter of AGW, the climate sensitivity, remains highly uncertain (between 1.5 °C and 6 °C with 90% certainty is one way of describing the present estimate; the climate sensitivity is the increase in the average surface temperature of the Earth that results from doubling the atmospheric CO2 concentration and keeping it at that level long enough for the temperature to reach its new value). Whether such quantitative limits can be considered objectively justifiable from scientific knowledge, or whether the limits should rather be taken as a typical subjective assessment of individual scientists, remains unclear. Either way the numbers appear to give a reasonable picture of the present thinking in the climate science community.

Even when all the above caveats are taken into account, it remains true that results whose analysis involves complex models are not necessarily any less accurate or reliable than more direct observations. Estimating their accuracy is often more difficult, but when that is done carefully such results should not be given any less value.

3. Physics and climate science

Atmospheric sciences, including climate science, are physical sciences. Their basis is in the well-known laws of physics and in material properties measured accurately and reliably in laboratory experiments. The atmosphere and the larger Earth system that includes the oceans and the continents with their biosphere and glaciers are so complex that a full analysis starting from first principles is impossible. Much can, however, be learned from the basic physical knowledge without the need to build complex models.

The two most important fields of physics for basic understanding of the atmosphere are thermodynamics and the theories of the interaction of electromagnetic radiation (visible and infrared) with matter. Fluid dynamics is also essential, but its role in the analysis comes at a more detailed level of work. The basic structure of atmospheric models is based on fluid dynamics.

Thermodynamics tells how the temperature profile of the atmosphere is determined for most of the globe (in the polar regions the profile is different). The result is that the temperature falls with altitude following the adiabatic lapse rate through the troposphere.
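
For concreteness, the dry adiabatic lapse rate follows directly from the first law of thermodynamics combined with hydrostatic balance (a standard result; condensation of moisture lowers the value, and the observed tropospheric average is nearer 6.5 K/km):

\Gamma_d = -\frac{dT}{dz} = \frac{g}{c_p} \approx 9.8\ \mathrm{K/km}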

The theory of the interaction of radiation with matter is based on the theories of electromagnetism and quantum mechanics, or their combination, Quantum Electrodynamics. On the molecular level the interactions are described by quantum mechanics. The results of the relevant quantum mechanical calculations have been combined with experimental data; on this basis the extensive HITRAN database is known to contain most of what needs to be known about the interactions. Beyond that point the simple picture of photon emission, absorption and scattering is sufficient for analyzing the radiative energy transfer in the atmosphere. There may be alternative approaches, but they do not give any additional value or change the results; they would probably only make the analysis more complicated.

The two physical theories described above form a sufficient basis for understanding the greenhouse effect and the radiative forcing of additional CO2 for an atmosphere where the only change in the troposphere is a sudden addition of CO2. (The radiative forcing is defined as the reduction of the outgoing radiative energy at the top of the troposphere after such a sudden change in the CO2 concentration.) The calculation requires a description of the state of the troposphere, but no other knowledge about its behavior. The lack of need for any other knowledge makes the calculation of the radiative forcing very reliable and rather accurate. The small inaccuracies are mainly due to inaccuracies in the description of the atmosphere.
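
The outcome of such calculations is commonly summarized by the logarithmic fit of Myhre et al. (1998):

\Delta F = 5.35\,\ln\left(\frac{C}{C_0}\right)\ \mathrm{W/m^2}

which gives about 3.7 W/m² for a doubling of the CO2 concentration.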

4. Scientific method and climate science

In discussion forums a common claim is that climate science is not done in accordance with proper scientific methods. These comments are based on an erroneous interpretation of science, the scientific process, and scientific methods. The critics set requirements for science that are seldom applied in any field of science. The requirements often originate from Popper’s work. Popper presented an idealized and narrow description of science. His ideas have since been criticized, because it has been observed that most good science does not follow his idealization. The idealization still appears to have great appeal among those who want to criticize some particular field of science, such as climate science. As the idealization is not valid, this argumentation presents a fallacy of logic close to the well-known straw man fallacy.

A typical erroneous way of using Popper’s ideas is to require that scientific work should proceed by formally following the steps of presenting and testing hypotheses, and in particular by presenting a null hypothesis and proving it wrong. Furthermore it is stated that results do not represent science unless they can be falsified, applying this requirement again formally. All these requirements are too narrow and fit the goals of climate science poorly. In climate science there are no new well defined basic hypotheses to prove or disprove. The theories are theories of physics and other basic sciences, such as chemistry. All these theories have already been confirmed extremely well.

Instead of proving or disproving basic theories, climate science searches for the best possible description of the Earth system. In this work there are no fully correct results; there are only better and worse partial results. The results are not false or correct; they are good or bad, close to the truth or far from it, valuable or without merit. The quality of the results of climate science is not two-valued; their value must be measured on a continuous scale of merit.

A real problem of present climate science is that results are expected from it faster than the normal scientific process can produce them. The scientific process does not rely on the faultlessness of individual scientific activities; it relies on the self-correcting nature of science. Doing research carefully, following all good practices, does not guarantee correct results. Science is a creative process searching for genuinely new knowledge, and that cannot be done without occasional and even frequent errors. The dilemma is created when decision makers want to use all possible information, and when this unavoidably includes also information whose reliability has not been verified sufficiently by the full scientific process, which may take years to correct some of the errors, including sometimes essential ones.

Categories: Climate change, Science policy Tags:

Uncertainties, climate policy choices and sustainable development

February 14th, 2011 8 comments

Climate policy decisions form a prime case of decision making under uncertainty, but how extensive are the uncertainties, and do we have clear ideas on how policies can and should be formulated?

Much of the discussion has concentrated on estimating the strength of the warming that we may expect and on its direct consequences like the rising sea level. The best single parameter to describe warming is arguably the climate sensitivity, defined as the increase of the globally averaged surface temperature that results from a doubling of the CO2 concentration in the atmosphere. According to the Fourth Assessment Report (AR4) of the IPCC, the climate sensitivity is likely to be between 2 °C and 4.5 °C. This range is wide, and the difference between the lower and upper limit represents an essential difference in the risk that the climate change creates. Furthermore, ‘likely’ refers in the IPCC vocabulary to a probability of 67%. Thus the probability of either a larger or a smaller climate sensitivity is one in three. Still the climate sensitivity is among the best known of all important factors affecting decision making.

The global average temperature change by itself does not tell much about the risks involved. The potential damage would come indirectly through the rising sea level and through local changes in the weather patterns. The sea level rise resulting from a specified increase in temperature is relatively easy to estimate, as it is a result of the thermal expansion of water and of the melting of continental glaciers. The analysis of expected changes in local and regional weather patterns is on a much less certain basis, but here the analysis is still within the physical sciences, where we have some basis for estimating uncertainties.

Rising sea levels make some densely populated areas uninhabitable and force a large number of people to move. The damage and costs related to this depend strongly on the rate of the sea level rise and on the adaptability of the societies. The adaptability is an equally important factor in determining the consequences of changes in weather patterns. In the case of weather patterns the changes are not all negative, but in many cases also positive. Good adaptability helps both in mitigating the costs of the negative changes and in taking advantage of the positive ones. Estimating the size of the net damage is extremely difficult. Adaptability means also that the negative consequences of specific changes do not continue forever but disappear gradually. How fast this occurs, and how much continuing changes in climate influence the outcome, are again issues we know very little about.

Climate change is only one of the problems of sustainability that confront human societies. Population growth combined with economic development is the main driving force for all anthropogenic risks to sustainability, including climate change. The actions taken to mitigate climate change will influence also other potential consequences of the development paths. In some cases the influences of policy decisions may solve several problems simultaneously, but there is also a great risk that solving one problem aggravates others, perhaps more severely than the target problem is eased. The different development paths are also likely to lead to different population growth, and making a judgment on the desirability of different levels of future population is certainly beyond the widely accepted practices of valuing futures.

Many expect help from new technologies, perhaps most commonly from renewable energy solutions. Their potential and future costs are, however, impossible to estimate accurately, and difficult to estimate even to a really useful level. Very many solutions have been proposed, but only a few are presently economically feasible, free of major potential risks, and capable of a really significant volume of production. Wind energy is perhaps the least problematic, but its potential is limited and its costs increase when the local wind conditions are less optimal. Solar electricity has theoretically a huge potential, but may remain too inefficient and costly in most parts of the world. The potential of both wind and solar energy is reduced also by their uneven availability. Particularly in colder climates the availability of solar energy may be negligible at the time of largest demand. Very large sums of money have been used to support solar energy development in several countries, including in particular Germany, where the resulting energy production is almost negligible in comparison, due to the high costs and limited solar radiation.

Bioenergy has been proposed as one of the most important sources of energy and in particular of liquid fuels. The results are, to put it mildly, controversial. It has turned out that some of the proposed solutions have a very low efficiency, up to the point that the energy balance may in some cases be negative. More generally the production of biofuels competes with other forms of land use, including food production and the protection of ecologically valuable habitats. Practically every form of production of bioenergy has led to problems that had not been fully anticipated and which are often serious.

Valuing all influences on human well-being, at present in different parts of the world and in the future, is not possible without methodological choices on which no general agreement exists. When policy choices may influence oppositely the presently living poor countries and the future generations of the next century, it is difficult to believe that any clear answers can ever be found. Concerning future damages, the rules of calculation must make a choice on how to discount costs and benefits occurring at widely separated times, but even solving that problem does not help, if the strength of adaptation is not known and other factors influencing the estimates of the future situations are also widely open. How far into the future do the consequences of a major decision made today really influence well-being, and how strongly?

Even if we could miraculously succeed in determining what kind of future state we wish to reach, we are still faced with the problem of how to implement our wishes through realizable policy decisions. We should be able to formulate a policy that allows the economy to develop without major disruptions – and we now know all too well how easily the world economy may end up in disruptions. We should also get this policy accepted in our political systems.

Who believes that he can present a solution that identifies correctly the proper long term goals and the right way of reaching them?

In the absence of such unlikely political leaders, we should perhaps accept a more modest approach that looks for the immediate actions most likely to lead to the better rather than to the worse, and accept that we must modify the starting steps gradually as understanding improves, rather than declare now “binding goals” for the distant future that will turn out to be impossible to reach, irrelevant, or in the worst case to lead practical realization in the wrong general direction. This means that we should choose the most robust alternatives for policy and think more about the near term consequences than about dreams of the future. The long term future should also be analyzed, and sustainability should be a guiding principle, but we should be realistic about our capabilities of forecasting what is going to happen.

Note: The precautionary principle is not mentioned in the above. It is an essential principle when decisions involving great risks are considered. I will return to it in later posts. I will come back to other details as well, and try then to include also the relevant references. These first postings are meant to set the general scene as I see it.

Categories: Climate change Tags:

The impossible task of IPCC

February 6th, 2011 3 comments

On the Finnish side of my blog I wrote under this same title promptly after the InterAcademy Council assessment of the processes and procedures of the IPCC had been released. Now I expand on some of my earlier comments, this time in English.

The stated role of the IPCC is, according to its approved governing principles,

… to assess on a comprehensive, objective, open and transparent basis the scientific, technical and socio-economic information relevant to understanding the scientific basis of risk of human-induced climate change, its potential impacts and options for adaptation and mitigation.

It furthermore states

IPCC reports should be neutral with respect to policy, although they may need to deal objectively with scientific, technical and socio-economic factors relevant to the application of particular policies.

which is often rephrased: “IPCC shall be policy neutral, but policy relevant”.

The range of knowledge and understanding that governments need for wise decision making is very wide and diverse, and IPCC has interpreted this to mean that it should cover a similarly wide range of research, from the physical sciences of the atmosphere to biology, ecology, technologies and the social sciences. The nature of scientific research and the actual state of knowledge and understanding vary as widely among these fields as do their subject matters. IPCC has, however, chosen to approach the whole range using the same structure of activities. Here I see one potential source of persistent problems.

It is presently widely agreed by both the supporters of the IPCC approach and by its critics that uncertainties in knowledge are large and that this is a central issue in using this knowledge as a basis for decision making. The observation is so central that different ways of reacting to the uncertainties may lead to totally different policy recommendations even when there is agreement on the factual basis. At one extreme no action to mitigate climate change is deemed too costly, and at the other no near-term action at all is considered justified. Science cannot give logically binding answers to policy choices, but scientific knowledge is still the only reliable source for much of the information needed in decision making. Concerning IPCC, the following questions arise:

  • Is the role of IPCC defined in a way that allows for the most useful contribution to wise policies?
  • Are the tasks that IPCC has chosen to perform appropriate, given the role approved for it?
  • Is the basic structure of IPCC appropriate for the task?
  • Are the approaches well chosen, and should they be similar for all its activities or vary, even essentially, depending on the area of research covered?
  • Can IPCC give an optimally correct and unbiased representation of the content of present scientific knowledge, and in particular of the uncertainties inherent in this knowledge?

The three working groups

Presently the activities of IPCC are performed mostly by three working groups. The first working group, WG1, covers the physical sciences of the atmosphere and of the other major subsystems of the earth. On these issues a very large amount of scientific work has been done and published in the peer-reviewed literature. The extent of the research was, however, not very large before the possibility of dangerous climate change was brought up and became a major policy issue. Therefore, and also because of the rather recent major changes in observational capabilities, the science is still developing strongly relative to the well-established knowledge base. In general terms the science basis of WG1 is wide, many results have been confirmed by independent research, and the status of the research allows for summary reviews and for an objective description of many of the remaining scientific disagreements.

The task of the second working group is to assess impacts, adaptation and vulnerability. The field to cover is extremely wide and diverse. Many of the issues that come up have not been studied extensively. Where any serious research has been done at all, it is often done by one group only and not necessarily published in peer-reviewed publications, or where some peer review has been applied it may be more superficial than is typical for the physical sciences. The task of WG2 has therefore been very different. The review cannot be based on assessing independent studies of the same issues. It is also likely that the existing research has often been motivated by considerations not conducive to the best objectivity. This leads to a definite risk of significant bias, and the importance of this bias may be impossible to estimate.

The third working group, on the mitigation of climate change, is even more distant from the natural sciences. It covers areas that could best be described as futurology, a field of research always based extensively on subjective judgments. It also covers technology development, where researchers often have direct conflicts of interest, as the results may strongly affect the levels of public funding for both research on and implementation of the technologies studied. Similar conflicts of interest have strongly influenced research on land use and various related economic activities.

All three working groups combined produce a large information basis that is difficult to interpret and use without major biases. The process of writing the Synthesis Report and the summaries for policymakers is not sufficient and objective enough to give the right picture of all this material, properly weighted, and to provide understanding of the uncertainties and their significance. There is ample evidence of this in the difficulties of agreeing on climate policies. In particular the relevance of the precautionary principle, what it does imply and what it does not, is a very major issue requiring much more support through some process not included in the present activities of IPCC. Presently the whole range of interpretations is left open to politically motivated claims without any real support for a more analytic approach.

While all the assessment reports have provided valuable results by listing existing research and summarizing its findings, it is questionable whether the approach is optimal. It appears possible that a new structure could fit the diversity of issues much better. In what follows I present some ideas for further discussion. Some of them have certainly already been proposed by others, to whom I give no references; some may be more novel.

Should WG1 reports be replaced by a continuous process?

The task of WG1 appears rather clear, and WG1 has been capable of producing reviews that cover existing research rather well and objectively, much better than many of its critics claim when they have not even looked at what is in the report. It is, however, not obvious to me that writing similar reports every five years or so is the most productive way of collecting and presenting this information. While the reports are good, something even better might be produced with less effort. A continuing process based on the use of the internet might now serve better. That would involve the maintenance of various databases both for publications and for the data. Concerning the data, processes for creating such publicly available databases are already underway, but the same might be extended to publications.

A public publications database should operate at more than one level. At the lowest level the criteria for accepting a publication should be minimal, imposing mainly technical requirements on the presentation and excluding publications shown to be explicitly erroneous or nonsensical in their essential content. This leads to a very large number of publications of variable quality, but that is not a problem in a multilayer database. At least one upper layer would apply criteria similar to those of the present WG1 in including publications in a list of papers judged significant. This level could possibly be combined with texts characterizing the research in a way similar to the present assessment report.

Accepting publications to the second layer should not take very long, but should still be slow enough to give time for open peer assessment of the papers after publication. To speed up this process a candidate status might be given to papers soon after their publication, when supported by some other scientists. Peer comments could be collected in a separate discussion database, both during the period of candidacy and also continuously on the papers selected as significant and on the way they are characterized in the text.
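
To make the layered structure more concrete, here is a minimal Python sketch of the data model. The layer names, the support threshold (MIN_SUPPORTERS) and the length of the review window (REVIEW_WINDOW) are my own illustrative assumptions; the text above fixes no concrete values.

from dataclasses import dataclass, field
from datetime import date, timedelta
from enum import Enum, auto

class Layer(Enum):
    EXCLUDED = auto()     # shown explicitly erroneous in essential content
    BASE = auto()         # meets only the minimal technical criteria
    CANDIDATE = auto()    # nominated soon after publication, under open review
    SIGNIFICANT = auto()  # judged significant by WG1-like criteria

@dataclass
class Publication:
    title: str
    published: date
    layer: Layer = Layer.BASE
    supporters: set = field(default_factory=set)   # scientists backing candidacy
    comments: list = field(default_factory=list)   # open peer assessment

MIN_SUPPORTERS = 2                   # "supported by some other scientists"
REVIEW_WINDOW = timedelta(days=365)  # time reserved for open peer assessment

def nominate(pub: Publication, scientist: str) -> None:
    # Record support; promote to candidate once enough peers back the paper.
    pub.supporters.add(scientist)
    if pub.layer is Layer.BASE and len(pub.supporters) >= MIN_SUPPORTERS:
        pub.layer = Layer.CANDIDATE

def promote_if_mature(pub: Publication, today: date) -> None:
    # Move a candidate to the significant layer after the review window.
    if pub.layer is Layer.CANDIDATE and today - pub.published >= REVIEW_WINDOW:
        pub.layer = Layer.SIGNIFICANT

# A paper enters the base layer, gathers support and comments, and matures.
paper = Publication("Observed trends in X", date(2010, 3, 1))
nominate(paper, "reviewer_a")
nominate(paper, "reviewer_b")
paper.comments.append("Methods should report uncertainty ranges.")
promote_if_mature(paper, date(2011, 6, 1))
print(paper.layer)  # Layer.SIGNIFICANT

The essential point of the sketch is that nothing is ever deleted from the lower layers; promotion only adds visibility, so the open discussion and the selection of significant papers can proceed continuously.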

The whole process would be rather similar to the present work of creating the assessment report, but it would continue without interruptions, maintaining a continuously up-to-date database and report. The continuity of the process should not require more work. After a while it might actually require less effort, as there would be no need of starting over and over again. This would not prevent essential improvements at any moment, as the system would be open to well-justified proposals.

One essential problem has, however, to be considered before creating an organization as described above, or before deciding on further rounds of assessment reports. It is questionable whether having a body assess the work permanently, or for a long period, is good practice for science that has reached the level of maturity of the sciences included in WG1. Either process leads to an assessment by a selected body of authority. This whole concept is to an extent contrary to the ideal scientific process, where peer control should work through less formalized channels in which nobody is nominated to make judgments, but any peer can influence the outcome based only on the quality of the arguments he or she presents. While reality is never equal to the ideal, deviating from the ideal on purpose is a step in the wrong direction. Therefore the pros and cons of the assessments must be discussed, and the conclusion may change as the science matures.

Difficulties of WG2 and WG3 and limits of IPCC’s future mandate

Above I have discussed at some length my ideas on WG1, but what about the others? For them the situation differs with respect to the nature and quality of research and scientific knowledge. Collecting a database is certainly possible, but applying well-defined criteria to select significant publications from the lot is very problematic. Conflicts of interest, and direct links between political opinions and the selection of research subjects or the way results are presented, are also a very serious obstacle to an unbiased presentation of much of the research in the fields presently covered by WG2 and WG3. Furthermore, it has turned out that the content of these reports has not received the same public appreciation as that of WG1, in spite of the direct relevance of the issues studied. The likely reason is a lack of trust in the results and, to some extent, their complexity.

The present weaknesses of the WG2 and WG3 reports, and the fact that they are not valued as highly as WG1, seem to indicate that their tasks must be reformulated to an essential extent. A process similar to that of WG1, in its present form or in a new form, might be applied to a fraction of the areas included in the WG2 and WG3 reports. Even when combined, the research to be included is likely to have a significantly smaller volume than what is included for WG1. Much of what is presently covered by WG2 and WG3 cannot be covered successfully by the same approach. The concepts of “policy descriptive” and “policy prescriptive” are too closely linked in these areas for producing unbiased reports, and trying to do so in spite of this lowers the valuation of all IPCC work.

The issues that I propose to be separated from IPCC’s tasks are definitely very important, but they require a different approach, one in which the policy and value conflicts are recognized. The work should be done by fully separate bodies, whose realization still requires serious thinking and experimentation. These bodies should have a particular understanding of the significance of uncertainties in the knowledge base used to support decision making. They should also be capable of relating the risks of climate change to the other pressing issues of environment and development. I plan to return to these issues in a later essay. The only additional comment that I have now is that the controversies unavoidable in the work of the proposed bodies may require several independent bodies working in parallel, producing more than one concrete view on each set of issues, because no single proposal can be expected to be acceptable to all independent countries at different states of development and with different dominating political philosophies.


The Honest Broker, Climate Change and IPCC

November 7th, 2010 2 comments

In his book “The Honest Broker: Making Sense of Science in Policy and Politics” Roger A. Pielke, Jr. considers four possible roles for scientists: The Pure Scientist, The Science Arbiter, The Issue Advocate and The Honest Broker.

The first two roles apply to issues where scientists can agree on a range of answers that leaves no space for significant value-based disagreement and controversy. Large and even extreme uncertainties may be present in pure science, but where these roles are applicable the uncertainties do not affect policies or politics in the foreseeable future. The Science Arbiter gives advice on policy-relevant issues, and his advice is taken into account without major disagreement.

When large uncertainties are present and allow for opposing value-based choices in important policy decisions, the role of The Science Arbiter becomes impossible. The Pure Scientist may then also face difficulties in pursuing his or her research, e.g. in securing funding for research that appears to lead in directions deviating from prevailing preferences. Under these conditions many scientists may turn into issue advocates, trying to influence policies toward better agreement with their own views of the scientific knowledge and with their own value judgments based on those views. Many scientists do this for purely personal reasons, some are drawn in by their scientific colleagues, but scientists are also used in issue advocacy by outside political actors who seek support for their goals by picking from a wide spectrum of scientific statements only those favorable to their cause. In this process the statements often become overly simplistic and may leave the scientist in an awkward position.

Pielke’s book accepts that issue advocacy is a natural position and that politics is about issue advocacy, but it emphasizes the need for the fourth role for scientists: The Honest Broker, whose goal is not to support certain policies and help narrow the choices to the one advocated, but to widen the view of possible paths by describing the pros and cons of the alternatives, and possibly by adding new paths that might provide a basis for commonly acceptable compromises.

The idea of honest brokers is compelling, but it is far from straightforward in practice. All individuals have their subjective preferences, and all scientists have their own differing views on issues where uncertainties are large. Thus Pielke appears to favor the idea that The Honest Broker would more often be a group of people, a panel, than an individual scientist. But the problem remains. How can we know that The Honest Broker is not a stealth issue advocate? How can even the panel itself know that it is not significantly biased? The climate change issue is a strong example of this problem, because the reliability estimates of many central results are based largely on scientists’ subjective assessments rather than on faultless formal tests.

Many advocates of strong climate policies try to give the impression that the whole issue is settled, that the science is strong and supported by The Science Arbiter, and that those who do not agree are simply wrong, funded by questionable sources or otherwise misled. The opposite side is equally vehement. They accuse the scientific community of outright wrongdoing, or at least of incapability in maintaining scientific objectivity and in accepting the full uncertainty of even the most basic results.

What is the role of the IPCC, and what should it be? Its task is defined in its governing principles to be close to The Pure Scientist, and many of its procedures are built on this basis. The critics of IPCC typically claim that it has turned into an issue advocate. Others may insist that it should be more strongly an issue advocate, or that it should act as a science arbiter or an honest broker. All these views conclude either that IPCC is not really acting as a pure scientist or that it should drop that role. On the other side are those who insist that IPCC should remain a pure scientist, or return to that role if it has lost it.

The role of a pure scientist has the advantage of being objectively better defined than the other possible roles of IPCC. It would require an even stricter adherence to objectively justifiable procedures than e.g. the report of the InterAcademy Council has recommended. It might make it impossible to write the kind of summaries for policymakers and synthesis reports that IPCC has so far produced, meaning that the reports of IPCC would no longer be as accessible to policymakers as they have been. But are there any alternatives for separating issue advocacy from the work of the IPCC? It is certainly not possible to write concise and at the same time policy-relevant reports without losing the role of The Pure Scientist; presenting science to the non-science society requires one of the other roles. Some may claim that the summary reports present an objective science arbiter, but this is definitely questionable, and even that goes outside the role of The Pure Scientist.

What could be done? Perhaps one should indeed make the IPCC represent The Pure Scientist as well as that can be done, and create separate bodies outside the IPCC to act as Honest Brokers. My idea is to have several competing and mutually complementary Honest Brokers, as full objectivity is impossible to achieve and its excessive pursuit might make the bodies too passive. Still, all these bodies should aim to be honest brokers instead of issue advocates, which will flourish anyway. The role of these additional bodies would be to interpret the scientific knowledge and connect it to the other goals and values of global and national societies. Each of the bodies might emphasize different points of view and different concerns of society. All of them should have a good science background to make them capable of interpreting scientific knowledge. Separating these interpretative tasks from the tasks of IPCC would allow IPCC to perform better in the tasks that its governing principles define.


Pekka Pirilä on Energy opened

July 29th, 2009 No comments

This is my English language site, where I expect to write and post articles irregularly on a variety of energy and climate related issues. A similar site in Finnish is available at http://www.pirila.fi/energia. The Finnish site is likely to be more active, but to concentrate more on issues of mainly local interest.
