UNEP has published three reports to date in the comprehensive Global Environment Outlook (GEO) series—GEO-1 in 1997, GEO-2000 in 1999, and GEO-3 in 2002—and GEO-4 will be published in 2007. The objectives of the main GEO report series are to produce a comprehensive, policy-relevant overview of the state of the global environment that incorporates global and regional perspectives and to provide an outlook for the future. Indicators are used to encapsulate and convey information on the state and trends of high-priority environmental issues and related pressures, impacts, and policy responses, to illustrate global and regional dimensions, and to support outlook analyses. UNEP uses a participatory, consultative process for GEO integrated environmental assessment and reporting.
Senior decision makers and their advisors, especially those in ministries of environment, are the primary audiences targeted by GEO reports. Secondary target groups include international environmental organizations and NGOs, the academic community, other UN agencies, the media, and concerned members of the general public (Table 4.1).
Monitoring and evaluation of the use of GEO-2000 have provided insight into the composition of the GEO user community and how it has used the report (Figure 4.1).
Table 4.1. User categories and needs.

• Voters and the general public: Indicators relevant to voters would help them identify actions that they can take and actions that government should take. The indicators must be applicable and relevant at an individual or local level and conceptually clear. They should also be few in number, simple, and unambiguous, without technical and methodological detail.
• Journalists: Journalists need clear bundling of information and "sound bites" that they can use to flesh out other stories. The data should be unambiguous and simple, with clear messages and assessments (notes to editors, interpretation guidelines, limitations) that enable journalists to state whether a trend is stabilizing, worsening, or improving.
• Decision makers: Decision makers need simple information that provides an overview, with some assessment and possibly some analysis that highlights areas where action should be taken. Targets are important.
• Local governments: Local governments need to be able to disaggregate the information in order to target policy appropriately. They need the indicators and methods to be applicable and relevant in different settings for towns, cities, and municipalities.
• Policy implementers and checkers: These users need a wide range of indicators that are clearly defined, stable in terms of methods and data requirements, and usable to monitor progress over time. They need good guidelines and clearly formulated targets, objectives, and policy effectiveness indicators.
• NGOs: NGOs need information for use in campaigns to raise public awareness and lobby politicians. They might need a wide range of indicators, with assessment and some analysis; this should include access to technical documentation, guidelines, and possibly data (these might be made available on the Web).
• Industry: Industry probably needs indicators that provide engagement incentives and use appropriate language (e.g., eco-efficiency, cost effectiveness, sector-specific and pressure indicators). It needs indicators that can anticipate future trends (for investment needs) and costs.
• Policymakers, developers, and designers: Policymakers need a comprehensive set of many indicators to inform specific areas of policy. Indicators are likely to already be in use outside the sustainable development (SD) area, so the specific need may be for an SD set or a focus on the interlinkages between the separate pillars. There might be a need for links to outlooks and scenarios, and to costs, when designing policies. The indicators should link to existing indicators and data if possible.
• Academics: Academics need very specific data for research, as inputs to studies and models and for use in evaluating and developing methods. They are also likely to need the detailed assessments, analyses, and reasoning behind the analyses.
• Funding bodies: These users may need a set of indicators as a basis on which to evaluate whether to select project proposals for funding. They may also need information on data availability, the conceptual basis of indicators, methodology, feasibility, and reliability.
The largest categories of GEO-2000 users were members of the research community and information compilers, the policy development and decision-making community, and other environmental information depositories and distributors.
The results of the reader survey indicate that GEO-2000 was read not only by the target audiences but also by a variety of people in other types of organizations and positions. The largest share of respondents came from the education sector, accounting for almost 38 percent of the total; 60 percent of respondents in this group held professional or faculty positions. Fourteen percent of respondents belonged to ministries of environment and related government bodies. Across all sectors, the largest share of respondents (33 percent) worked as professional staff or faculty members, closely followed by senior management and other decision makers (28 percent). This example shows that it is possible, with the same report, to reach different types of audiences using different layers of communication.
An indicator set must be flexible to provide maximum policy relevance, but this flexibility must be weighed against the risk of losing familiarity and continuity. Composite indicators and indices are made up of a combination of separate variables, often with different weighting. They can be used to summarize complex or multidimensional issues and provide the big picture for policymakers (Saisana and Tarantola 2002; Nardo et al. 2005). The separate underlying variables in composite indicators constitute a pool of background information and provide flexibility, and the summary index or composite ensures continuity and familiarity.
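The weighted combination described above can be sketched in a few lines. This is a minimal illustration only: the variable names, weights, and 0–100 normalization are hypothetical and do not reflect the methodology of any published composite index.

```python
# Minimal sketch of a composite indicator: a weighted mean of separate
# normalized variables. All names, values, and weights are invented.

def composite_index(variables, weights):
    """Weighted mean of normalized (0-100) sub-indicator values."""
    if set(variables) != set(weights):
        raise ValueError("each variable needs a matching weight")
    total = sum(weights.values())
    return sum(variables[k] * weights[k] for k in variables) / total

# Hypothetical country profile: three sub-indicators on a 0-100 scale.
profile = {"air_quality": 80.0, "water_quality": 60.0, "biodiversity": 40.0}
weights = {"air_quality": 0.5, "water_quality": 0.3, "biodiversity": 0.2}

print(round(composite_index(profile, weights), 1))  # weighted mean: 66.0
```

The separate variables remain available as background information, while the single summary figure is what reaches the policymaker; changing a weight (the source of potential bias noted below) changes the headline number without touching the underlying data.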
Headline indicators fulfill a similar function because the small and familiar headline set can be extracted from a larger pool of underlying variables. However, the use of headline sets might increase management needs because the headline indicators must be reviewed and changed in response to changing policy needs. The larger pool of underlying variables would also need to be maintained and updated rather than allowed to decline in quality. The main successes in the use of headline sets occur when the larger pool is made up of indicators that have other uses and therefore are automatically maintained (e.g., as part of a core set of national statistics) and when the headline set itself is reviewed and updated regularly. A key example of this process occurs in the United Kingdom, but this is also the pattern used by some organizations, such as the European Environment Agency (EEA), where the larger set is used for reports and assessments that are not specifically targeted at policymakers (e.g., general public, students, and NGOs). This dual purpose ensures the maintenance of a larger base than the small, flexible, and policy-relevant core set (EEA 2005).
Distortion is an additional concern for indicator sets: A set of indicators cannot adequately represent reality if the composition is skewed or biased in some way. A skewed headline set might contain more indicators that show improving trends than those that show a lack of progress or deteriorating trends. Similarly, an index or composite indicator might be skewed if the choice of variables or the weighting given to each is biased in some way. A wide participatory process would help in preventing and responding to such criticisms.
Data that are clearly needed and used to populate a stable indicator system are a helpful driver for creating stable data flows and structured, balanced monitoring systems. There is a debate about whether to publish indicators that contain data known to be of poor quality (e.g., with substantial data gaps). Although there must be a lower quality limit beyond which the indicator becomes distorted and misleading, publishing a poor-quality indicator often acts as a driver for an improvement in data quality. The 2001–2002 Environmental Sustainability Index (ESI), for example, was published using the best available but often low-quality data, resulting in some rapid improvements in data flows and data quality.
The evolution of methods also presents difficult choices: Policy relevance and wider acceptability mean that an indicator must be believable, so as methods improve, the indicator may need to change to reflect that improvement. Although changing the methods of an existing indicator may increase its relevance and acceptability, it may create difficulties in comparability over time, especially where definitional changes are made in underlying data (e.g., municipal waste).
Meadows (1998) states that an environmental indicator becomes a sustainability indicator with the addition of time, limit, or target. Indicators become especially powerful tools for policy when they relate to political targets, thus adding the element of performance; for example, the Kyoto Protocol, with its precise reduction targets, could not exist without the indicator "CO2 emissions."
Three distinct types of targets can be identified:
• Political or hard targets
• Soft targets, such as sustainability reference values, minimum viable populations, and thresholds
• Benchmarks, such as group averages and best practice examples
Hard targets are set through political processes and usually are beyond the scope of the expert indicator producers. However, the nature of these targets can play a key role in their use by the indicator producers. A conceptual target such as "halting the loss of biodiversity" may be an essential policy driver, providing a focus for research and indicator development, but it must be broken down to more accessible and specific subsidiary targets for inclusion in indicator exercises. Even apparently concrete targets such as "decoupling transport demand from gross domestic product" will raise many questions about the definition and the measurement of progress toward this target. In general, vague or qualitative hard targets in need of clarification or definition can be identified and highlighted by the indicator community, although it is the responsibility of the political process to set and refine such targets. Drawing attention to the lack of targets may be a key role of indicator producers, and identifying vague targets can provide an opportunity to define them in a more precise fashion.
Soft targets, such as a sustainable reference value or a minimum viable population size, could be used more fully by the indicator community. Although the identification and use of soft targets are often associated with scientific debate and differing opinions, the use of such targets can both highlight the inadequacy of the political targets and raise awareness of the complexities and uncertainties inherent in environmental systems. In addition, soft targets may offer an opportunity for analysis and interpretation of the fuzzy areas that hard quantitative targets do not provide.
Benchmarking is a widely used way to add context to indicators. The mean value of neighboring countries is often used as a benchmark, as are the Organisation for Economic Co-operation and Development and the EU-15 and EU-25 averages. Benchmarking has a key role to play at local and national levels, helping to create a race for the top, whereas at the international level the use of benchmarking and best practice examples as illustrative targets could be further developed. Benchmarking is distinct from ranking in its focus on comparisons with a few selected countries rather than a list of ranks. A benchmark is more powerful if the comparison made is acceptable and relevant. Benchmarking against the average for a group of countries may not have the desired effect if the mean does not provide an example of best (or better) practice.
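The group-mean benchmarking described above amounts to a simple comparison of each country's value against the average. The sketch below uses invented per-country values, not real emissions data or an official group average.

```python
# Sketch of benchmarking against a group mean. Country names and values
# are hypothetical illustrations only.

emissions = {"CountryX": 8.2, "CountryY": 6.1, "CountryZ": 7.4}
benchmark = sum(emissions.values()) / len(emissions)  # group mean

for country, value in sorted(emissions.items()):
    status = "above" if value > benchmark else "at or below"
    print(f"{country}: {value:.1f} ({status} the group mean of {benchmark:.2f})")
```

As the text notes, such a comparison only has the desired effect if the mean itself represents better practice; a country can sit comfortably "below the mean" of a poorly performing group.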
Ranking on the basis of an indicator set or composite indicator is appealing: For the national media, the international ranking of countries often makes headlines. However, such assessments often are irrelevant to policymakers: Ranking based on relative performance means that a country may be successful and be ranked highly one year but then may be ranked low the next year not as a result of a decline but because the rate of improvement slows and even stops once a country has reached the top. This occurred in the United Kingdom, which was ranked sixteenth in the ESI in 2001 and ranked only ninety-first in 2002. Because the public was not aware of any environmental catastrophe between those two assessments that in their minds would explain such a drop, the index lost credibility in the United Kingdom and no longer has impact on policy or the media.
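The mechanism behind such rank collapses can be sketched with invented scores: a country's absolute score improves, yet its rank falls sharply because other countries improve faster. The figures below are illustrative only and are not ESI data.

```python
# Hypothetical illustration of relative ranking: country A improves in
# absolute terms but drops to last place because others improve faster.

def rank_of(country, scores):
    """1-based rank of a country, ordered by descending score."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return ordered.index(country) + 1

year1 = {"A": 90, "B": 70, "C": 60, "D": 50}
year2 = {"A": 91, "B": 95, "C": 93, "D": 92}  # A still improved (90 -> 91)

print(rank_of("A", year1))  # 1
print(rank_of("A", year2))  # 4: last place despite no absolute decline
```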