The missing dimension



Jaro Mayda discusses the need for developing a better social technology for regional and local decision making on global change.

The author has been Professor of Law and Public Policy at the University of Puerto Rico and a frequent United Nations consultant. He has resided on Madeira since 1989.

The primary task of the Intergovernmental Panel on Climate Change (IPCC) has been the collection and evaluation of relevant data.

One watershed was crossed when the 1995 Second Assessment Report confirmed the assumed “anthropogenic interference with the climate system” (Article 2 of the UN Framework Convention on Climate Change, Rio de Janeiro, 1992) by concluding, on the “balance of evidence,” that there is a “discernible human influence on global climate.” Another watershed, some elements of which are discussed here, should come, it is hoped, with the Third Assessment Report, due to be completed late in the year 2000.

The IPCC has followed a formula neatly summarized by a United Nations Environment Programme writer based on experience with the stratospheric ozone problem: “First assess, then act.” This sequence has been right for the climatic data, as well as for the transition between the assessing and the acting — the inquiry into the possible technologies and costs of mitigation of, and adaptation to, the climate warming. Unfortunately, the quantification indispensable to the scientific and technical analysis was carried over lock, stock and barrel into the socio-economic realm.

As a result, the early work of the IPCC Working Group III on the economic and social dimensions of climate change was dominated by economists who revelled in equations, game theory and other “scientific” paraphernalia, at the expense of social analysis focused on how problems will be solved sur place, and why particular solutions may differ regionally and locally while still moving in the same direction.

The reason for the differences is obvious: human reality is only partly quantifiable. The ingredients that mostly determine any concrete solution are local social data, political expectations and preferences, institutional and individual capacity, rules of law, their interpretation and execution, elements of conflict resolution, changing cultural values, and perhaps still other data, most of them qualitative, different from place to place, not reducible to any a priori formulae, and grasped and manipulated at best by educated intuition.

So, the local, national and regional decision makers, faced with an imposing stack of documents — which The Economist (speaking of the Second Assessment Report) not inappropriately called “2000 pages of bumf” — are likely to take only the outer framework, the obligatory goals and other basic information, and try to squeeze them into their local political and economic context.

The question then is: how do you arrive at decisions that are tolerable locally and right by global standards? The question remains open; neither the IPCC nor the social sciences in general have so far tackled it.

The response of science to the flood of data has been integrated assessment, which has in turn generated integrated assessment models. The inevitable critique came quickly: integrated assessment models were shaped on the economic and cultural pattern of the OECD countries, which is not applicable wholesale to developing countries.

The IPCC recognized this point prominently in the Second Assessment Report and responded by organizing a workshop in Tokyo, in March 1997, which was to bridge this gap by “enhancing communications between integrated assessment researchers and policy [probably meaning decision] makers, especially in Asian developing countries.” Unfortunately, a complementary aim was to explore whether these models “can become a standard methodology to integrate science into policy.” As I will explain, they cannot, because of the logic of an orderly progression from scientific-technological data to political and managerial decision making.

At any rate, only a few isolated voices rose above the keynote of the Tokyo integrated assessment model-fest, mostly to distinguish integrated assessment from integrated assessment models and to suggest that decision making was more complicated than the models could handle, because they were research rather than policy tools. Which pretty much draws the essential distinction.

That the process of policy analysis and development is still such a black box is not primarily the responsibility of the modellers. Some of them may make exaggerated claims, such as that one integrated assessment model or another represents a “communications platform between scientists and decision makers” (a recently-founded institute for research and consulting is based on that premise). But the basic belief of scientists that they can “format” scientific knowledge so as to make it directly useful to decision makers (to quote from a comment on a major project that inevitably failed in the early 1990s) goes way back. So does the easy use of the mantra word “policy.” To offer a topical example, the first announcement of the 1990 Second World Climate Conference was full of language about “substantial policy thrust,” “(intent to address) these policy concerns,” and the like — the reality of the proceedings was another matter.

The principal responsibility falls on the less than adequate work of social scientists, camouflaged by quantification and empty formulae about “dynamic processes forging linkage between knowledge, understanding and action.”

For instance, the proceedings of a major preliminary to the Earth Summit, the 1990 Bergen Conference on Sustainable Development, Science and Policy, abound in references to “interfaces between science and policy” and “policy maker’s input,” but never explain how to go about combining scientific data and qualitative socio-political information into policy and decisional knowledge.

Just how wide open the field still is was neatly highlighted when a biologist (!) proposed that “scientists with ‘policy antennae’ generally have to pick up the [policy] message by osmosis.” (Never mind the mixed metaphor.) The various follow-ups, prominent among them the International Human Dimensions Programme of UNESCO’s International Social Science Association, strive to show the way, but have yet to do so.

It is just so easy to talk about “policy support research,” “programme responses,” “science-policy domain,” “policy mandates,” etc., but apparently more difficult to put some firm ground under the terminological surface. This, despite the fact that policy analysis and development have been rather well-developed in fields such as macroeconomics or foreign affairs.

The basic policy algorithm that bridges scientific data and decisional knowledge is simple and thoroughly logical. It follows the old cybernetic sequence: Input (data, information) → Conversion (policy process) → Output (decision making).

A really integrated integrated assessment must comprise all the relevant inputs. To achieve this, the science and technology data need to be supplemented by social information in the broadest sense, including the political and institutional culture as well as the accumulated national (public and non-governmental organizations) and transborder experience, where available. This is the vaunted “overlapping” — better described as interplay — between the scientific community and the broad policy community.

But the analysis of the data (quantitative) and information (qualitative) in the decisional perspective is only part of the policy process proper. The second half of the job is to develop and evaluate the decisional options. This process includes the outline of the response plan, the evaluation of the various alternatives by an “old-fashioned” full impact and cost-benefit assessment (the present IPCC practice is, largely, limited to the economic costs of alternative technologies), conflict resolution, and the recommendations to the decision maker(s).

In sum, the policy process is a selection and transformation of data in the perspective and language of decision making. This process must be developed much beyond the traditional policy sciences — an outgrowth of the operations research of the 1940s and the systems analysis of the 1950s and 1960s.
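A minimal sketch, in Python, may make that sequence concrete. Every name, value and selection rule in it is a hypothetical placeholder, not an existing methodology or tool: quantitative data and qualitative social information enter together, the conversion stage develops and evaluates the options, and the output is a recommendation framed in the language of decision making.

```python
from __future__ import annotations
from dataclasses import dataclass, field

# Hypothetical sketch of the cybernetic sequence described above:
# Input (data, information) -> Conversion (policy process) -> Output (decision making).
# All names, figures and rules are placeholders for illustration only.

@dataclass
class Inputs:
    """Input stage: quantitative data plus qualitative social information."""
    emissions_data: dict      # scientific and technical data (quantifiable)
    social_context: dict      # expectations, institutions, law, culture (qualitative)

@dataclass
class Option:
    """A candidate response plan together with its evaluation."""
    name: str
    economic_cost: float                                 # the part present practice already covers
    social_impacts: list = field(default_factory=list)   # the usually missing qualitative side
    conflicts: list = field(default_factory=list)

def develop_options(inputs: Inputs) -> list[Option]:
    """Conversion stage, first half: outline alternative response plans."""
    # Placeholder alternatives; real option development would draw on local knowledge.
    return [Option("emission tax", economic_cost=1.0),
            Option("efficiency standards", economic_cost=1.4)]

def evaluate(option: Option, inputs: Inputs) -> Option:
    """Conversion stage, second half: full impact assessment and conflict identification."""
    for group, stance in inputs.social_context.items():
        if stance == "opposed":
            option.conflicts.append(f"conflict with {group}")
        else:
            option.social_impacts.append(f"acceptable to {group}")
    return option

def recommend(options: list[Option]) -> Option:
    """Output stage: a recommendation framed in the language of decision making."""
    # Placeholder rule: fewest unresolved conflicts first, then lowest economic cost.
    return min(options, key=lambda o: (len(o.conflicts), o.economic_cost))

if __name__ == "__main__":
    inputs = Inputs(emissions_data={"CO2_Mt": 42.0},
                    social_context={"industry": "opposed", "municipalities": "supportive"})
    options = [evaluate(o, inputs) for o in develop_options(inputs)]
    print("Recommended option:", recommend(options).name)
```

The point of the sketch is the division of labour: the quantified and the qualitative inputs meet only in the conversion stage, and what comes out is not data but decisional options.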

There are two major preconditions for the success of this “new” social technology which, it must be emphasized, is at least as important as any of the other scientific, technical or economic dimensions of global environmental change.

One precondition is the acceptance not only of the intuitive element in decision making, but also of the fact that the power of careful, if crude, observations or calculations has been confirmed numerous times.

Possible examples range from the 15th-century astronomical tables, to the 1815 Tambora eruption, which caused a “year without a summer,” to the original back-of-the-envelope calculations of the human impact on the stratospheric ozone layer, all later confirmed by supercomputers as essentially correct. Another example is the very costly mid-1970s study of possible models for regional environmental management in the United States, which resulted in an admonition that we have to “learn how to make decisions with fuzzy data.”

So, while detailed quantifications are necessary to convince the politicians that something must be done — the Kyoto Protocol, intended to strengthen the Climate Convention, is beginning to have a perceptible impact — it is not the quantified side of the equation that will determine the eventual concrete decisions at the regional and local level.

The second precondition is to develop the necessary personal and institutional capacity to make and carry out the decisions. This is another long-haul task, so far not very successfully tackled by the United Nations under Chapter 37 of Agenda 21. And capacity building does not yet even envisage the need to educate the new “specialists in generalization” who will carry the burden of policy development by acting as the digesters and translators of scientific and technological data, economic projections, and socio-political information into feasible and tolerable local options for decision makers.

To sum up: research and debate on global environmental change have been so preoccupied with data that they have tended to neglect processing the data for application. This dimension (the policy analysis, development and evaluation of decisional alternatives) has been treated rather as an obligatory frosting on the cake, a kind of well-intentioned abracadabra, not as an integral part of the process toward decision making.

Even the IPCC lexicon reveals the terminological insecurity — for example, the Second Assessment Report speaks correctly about “decision-making” (hyphenated) or “policy implementation” (no hyphen) but repeatedly refers to the political decisors (including in the titles of the several summaries) as “policymakers” (one word).

The process of deliberations toward the Third Assessment Report is the opportunity to change all that. The charge to Working Group III of the IPCC (economic and social dimensions of climate change) is, among other things, to explore “the methodological aspects of cross cutting issues such as... decision making [no hyphen!] frameworks.” And while one or even two swallows do not a spring make, there are stirrings in the integrated assessment community which indicate a sense that there is something more out there, though the underdeveloped conceptual architecture and methodological equipment do not quite support a more specific elaboration.

In addition to the isolated voices already mentioned at the Tokyo workshop, the best recent example has been an IPCC workshop on mitigation and adaptation cost assessment, held in Risø, Denmark, in June 1997. But the lack of an operational model was palpable, and it was perhaps further aggravated by the narrow topic of the workshop — the economic costs of alternative emission control strategies — which was not hospitable to broader consideration of social policy application.

At any rate, as these strands begin to interweave, the IPCC should deliberately foster the policy aspect, not as a “cross cutting issue” (what is this lingo anyway?), but as the missing, or at least unfinished, bridge between the worlds of scientific and technical intelligence and intelligent decision making on all levels.

Further information

Jaro Mayda, Rua Pedro Ornelas 12-B, 9050 Funchal, Portugal. Fax: 351-91-226254.

