After more than 10 years of experience in developing and using sustainable development indicators, in 2005 the United Kingdom established its third generation of indicators to support a new sustainable development strategy.
When sustainable development indicators were first established, the proactive use of indicators and targets in government was in its infancy. The first set of indicators therefore was breaking new ground. By the second generation of indicators, there had been a proliferation of indicators and performance targets across the machinery of government. It was therefore possible to strengthen sustainable development indicators with a commitment to make progress, and a set of headline indicators was established to be drivers for action and to highlight where policies needed to be adjusted.
However, with performance measures across every area of government and every new policy initiative generating more targets, the approach to developing a set of sustainable development indicators is more challenging, and their perceived role has changed. In one respect, with a multitude of indicators and targets already in place, establishing a set of sustainable development indicators ought to be easier because it should be possible to cherry-pick the best indicators from a wider variety of existing ones. However, the challenge now is to identify and develop indicators that are adding value and bringing a sustainable development perspective rather than simply reusing existing performance measures and giving them a sustainable development badge.
There is now greater sensitivity about what messages a set of sustainable development indicators might convey and, in many cases, a desire by policymakers and politicians that these should be consistent with the indicators and targets already adopted in specific policy areas. Consequently, there is a danger that a set of sustainable development indicators may be only a repackaging exercise. However, where a set of sustainable development indicators is closely allied with a sustainable development strategy, as is the case for the UK, the commitments in the strategy can perhaps be used to apply greater pressure and extract agreement for more challenging indicators, reducing some of the repackaging.
It has long been desired that sustainable development indicators be fully integrated into policymaking and directly influence policy decisions. However, there are very few examples of this happening. The problem is that the principal role of indicators is communication, particularly to the public and to ministers, who do not necessarily need great detail.
Most indicators therefore provide only a broad overview of an issue and are of little use for detailed policy considerations. They are often too broad for policymakers to identify other policy areas where their decisions may have effects.
Some stakeholders call for a set of indicators that are better integrated internally (i.e., with all the linkages identified and quantified), but we are a long way from being able to construct models that allow us to know what impact a change in one indicator will have on another. Other stakeholders believe that holistic sustainable development measures are needed, and a growing number of aggregate indices have been promoted internationally that profess to be measures of sustainability. However, there is also a high degree of skepticism about their methods and meaningfulness. Although aggregate indices may have their place in a package of communication tools, there is a concern that they are more likely to mislead than to lead to discernible progress. However, the idea of condensing the messages is valid, and indicator sets might be reduced to be more manageable for those trying to understand the messages and those trying to maintain them.
The immediate focus should be on raising the profile of indicators and making them more effective as communication tools in order to raise awareness and understanding of sustainable development.
In 1994 the UK became one of the first countries to produce a sustainable development strategy (HM Government 1994) in response to the 1992 Earth Summit in Rio de Janeiro. The strategy led the government to pursue, via an interdepartmental working group, a set of indicators with which to monitor progress. In 1996 a preliminary set of 120 indicators, Indicators of Sustainable Development for the United Kingdom (Department of the Environment 1996), was published for discussion and consultation.
In reflecting the structure and hence also the inadequacies of the strategy, the indicators unfortunately focused too heavily on economic and environmental issues and also preempted, by a few months, the UN Commission on Sustainable Development draft menu of indicators. However, the UK was subsequently one of twenty-two countries to volunteer to pilot test the applicability of the commission's indicators.
Following a change of government in 1997, a new strategy, "A better quality of life" (DETR 1999a), was published in 1999. The establishment of indicators was an integral part of the development of the new strategy, with work on indicators going alongside and sometimes ahead of discussions on the content of the strategy.
One of the strengths of this approach was that the indicators helped to focus people's minds on the issues that should be covered by the strategy. In some cases indicators led to the inclusion of issues in the strategy that might not otherwise have been included, or at least not in the same way, such as indicators on wild bird populations and air quality. However, some of the indicator work (e.g., on social indicators) was not used in the final set, or the experts engaged in the exercise felt unable to contribute constructively without knowing the direction of the strategy.
Working to some extent blind, without a strong policy lead, may have resulted in a much larger volume of candidate indicators than would have been the case if indicator development had awaited finalization of the policy framework.
Furthermore, and perhaps inevitably, when the debate on indicators was opened to stakeholders, they tended to be strongly motivated to see their own areas of concern covered by an indicator. This was often on the erroneous assumption that if it was not an "indicator of sustainable development" then it was not monitored at all. Another motivation may have been that in their view a particular issue had to be seen as contributing to sustainable development through the indicators, possibly in anticipation of potential funding or for political or presentational elevation.
Though undoubtedly eliciting wider support and ensuring a more robust set of indicators, stakeholder involvement, with a still-evolving policy framework, had the potential to hamper the establishment of a coherent set. For example, in one particular workshop event, the aim was to reduce an already large list of indicators, some 200 or so, down to perhaps as few as 50. By the end of the day's deliberations, rather than reducing the list, stakeholders had argued the need for more candidate indicators, and the list had grown to more than 400.
Second-Generation Indicators: Headline Indicators and Quality of Life Counts
With a potentially large set of indicators, it was clear that it would be very difficult to answer the question, "Are we becoming more or less sustainable?" Each indicator would give a different answer for a specific area. Ministers therefore asked that some headline indicators be established that might provide a broad overview of progress. Responses to a public consultation paper, Sustainability Counts (DETR 1998), showed wide support for a set of headline indicators.
Some 6 months after the publication of the strategy document, Quality of Life Counts (DETR 1999b) was published. This provided a baseline assessment of 15 headline indicators and 132 core sustainable development indicators. The headline indicators were described as a quality-of-life barometer "to provide a high level overview of progress, and be a powerful tool for simplifying and communicating the main messages for the public."
The headline indicators were to play a key role in the promotion of sustainable development and were at the center of four successive UK government annual reports on progress, Achieving a Better Quality of Life (DEFRA 2004a).
The wider Quality of Life Counts proved to be very influential in other indicator initiatives throughout the UK and internationally. However, with hindsight it is questionable whether such a large set of indicators, 147 including the headline indicators, was practical to maintain and effective in communicating or in influencing policy.
Third-Generation Indicators: Public Consultation
The 1999 strategy document included a commitment to review the strategy and its supporting indicators after 5 years. In 2004, the UK government launched a public consultation document, Taking It On (DEFRA 2004c), which sought views on the direction of sustainable development strategy and future monitoring of progress through indicators. The questions on how progress should be reviewed and communicated were as follows:
• What are the strengths and weaknesses of the current sustainable development indicators, and how are they used in general? And, more specifically, what about indicators used in the UK government's headline set; in the wider UK core set in Quality of Life Counts; in Scotland, Wales, and Northern Ireland; in the English regions; in local authorities; and elsewhere (e.g., sectoral indicators)?
• What needs to be monitored and measured across the UK?
• Who are the audiences for indicators, and how can we better meet their needs?
• Should any set of indicators supporting the new strategy concentrate on just the main priorities in the strategic framework or be wider and more comprehensive?
• Should important high-level sustainable development indicators focus on monitoring general progress toward final outcomes, specific delivery actions and targets, or both?
Despite best efforts to have indicator questions positioned early in the consultation document, they were relegated to the end. Many of the preceding questions required respondents to provide detailed answers, so there was an inevitable decline in the responses for later questions. However, in practice monitoring and indicators were important threads running through responses to many of the questions in the consultation document. More than 700 individuals and organizations responded, and their responses included 1,500 references to monitoring or indicators.
Ninety-five percent of respondents supported a set of headline indicators, but only 11 percent specifically favored the existing headline set with no change, and 25 percent supported the existing set with some modification.
Eleven percent of all indicator responses were specifically about gross domestic product (GDP) as a measure of sustainable development, with the majority of these advocating its exclusion from the set or changing it radically.
A wide variety of candidate indicators were proposed for a headline set, including a number of aggregate indices, with some people suggesting that there should be no more than five headline indicators and that these should be aggregate measures. Eight percent of all indicator responses strongly supported the inclusion of an ecological footprint. There was also strong support for other measures that encapsulate a concept of well-being.
In addition to the consultation, a review was undertaken of indicators used directly to monitor sustainable development and indicators in closely related national strategies. The exercise was then extended to a wider array of indicator sets used nationally and internationally. In total, more than 5,000 indicators were identified. These were then grouped into broad themes and into economic, social, and environmental impacts and drivers. The exercise was less fruitful than hoped, but it did provide insight into indicators used elsewhere and reassurance that the UK set was not missing important measures used by others.
Similar to the situation for the earlier Quality of Life Counts report, there was the challenge of trying to establish a policy-relevant set of indicators in time for inclusion in the new strategy while the policy thinking for the new strategy was still being developed. A degree of pragmatism was needed, along with constructive dialogue with policy colleagues to negotiate an acceptable set of indicators.
Third-Generation Indicators: The Final Set
In 2005 the new UK government sustainable development strategy, Securing the Future (DEFRA 2005a), was published. Twenty "UK Framework Indicators" were outlined that reflected the broad priorities set out in a framework for sustainable development agreed between the UK government and the devolved administrations in Scotland, Wales, and Northern Ireland. These broadly take on the role of headline indicators, for which the devolved administrations and the UK government have shared responsibility.
In addition to the "UK Framework Indicators," the UK government sustainable development strategy outlined another forty-eight indicators related to the priority policy areas covered by the strategy.
The new indicator set included eight that needed development, and the most challenging of these is well-being, which had been so strongly suggested in the consultation responses. A number of surveys have asked people to rate their life satisfaction, but the degree of satisfaction is surprisingly high and has changed little over many years. Research has been commissioned to investigate the concept of well-being, its relevance for policy and sustainable development, and how it might be measured in a meaningful way.
The sixty-eight indicators include the previous fifteen headline indicators, although not all of them are included in the twenty "UK Framework Indicators."
This means that GDP has been retained. Arguments for its retention included recognition that GDP provides essential context for a number of the other indicators, it is a driver for many environmental pressures, and economic growth is an essential aspect of sustainable development in terms of supporting environmental and social development.
A number of the indicators in the new set were decoupling indicators, which attempt to show whether impacts (predominantly environmental) are being decoupled from their potential drivers (predominantly economic growth and demographic changes).
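The decoupling idea can be sketched numerically: both the impact and its driver are indexed to a common baseline year, so that a widening gap between the two series indicates relative decoupling. The sketch below is purely illustrative; the function name and all figures are invented for the example and do not reflect DEFRA's actual data or methods.

```python
# Illustrative sketch of a decoupling indicator: an environmental impact
# and its economic driver are both indexed to a baseline year (= 100),
# so the gap between the two series shows relative decoupling.
# All figures are made up for illustration.

def index_series(series, base_year):
    """Rebase a {year: value} series so that base_year = 100."""
    base = series[base_year]
    return {year: 100.0 * value / base for year, value in series.items()}

gdp = {1990: 1000, 2000: 1300, 2005: 1500}       # driver (economic growth)
emissions = {1990: 600, 2000: 570, 2005: 550}    # impact (environmental)

gdp_idx = index_series(gdp, 1990)
emis_idx = index_series(emissions, 1990)

for year in sorted(gdp):
    # Relative decoupling: the impact index lags behind the driver index.
    decoupled = emis_idx[year] < gdp_idx[year]
    print(year, round(gdp_idx[year]), round(emis_idx[year]), decoupled)
```

In this invented example the impact falls in absolute terms while the driver grows, so the series show absolute as well as relative decoupling from 2000 onward.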
There was much debate about the new indicator set featuring a number of international indicators. Indeed, policy colleagues and some of the Taking It On consultation responses suggested the inclusion of indicators that capture the UK's global footprint or other international indicators.
This was fine in theory, but it was not clear what people meant by "international indicators" because they could include indicators of the UK's performance compared with other countries, indicators highlighting global trends, and indicators trying to capture the UK's international impacts. Given the aim of reducing the size of the national indicator set to improve its manageability and communication, there was a danger that the indicator set could be swamped with international indicators, which would be difficult to maintain and would duplicate reporting being done by many reputable international organizations.
A more practical approach therefore was needed, and it was agreed that the new set of indicators would not formally include international indicators. Instead, commitments were made to make international comparative information available via links to international Web sites and in due course to explore how the UK's international impacts might be measured for particular sectors.
Much work has been undertaken nationally and internationally to determine frameworks for sustainable development indicators, though perhaps too much effort is sometimes expended in theorizing about them. Frameworks can help to ensure that cause and effect can be monitored and that significant gaps in monitoring are filled, so some structure is clearly needed.
However, as seen in the experience of Quality of Life Counts, the strength of the indicator structure was that it was precisely the same as the policy framework, with direct links to both broad and specific policy objectives in the strategy. It meant that the indicators were seen not as an academic or statistical exercise but as core components of the overall policy approach. Ensuring their policy relevance in structure and coverage also meant that strong government commitments were associated with the indicators.
In the third generation of indicators, the approach was not as meticulous, and indicators were selected that related to the four broad priority areas identified in the strategy. The links to policy were not necessarily specific but came through the preexistence of policy targets that if achieved would directly or indirectly contribute to progress in the broad policy area. This approach reflected in part a stronger focus in the new strategy on tangible delivery of sustainable development through outcomes rather than laudable but vaguely defined objectives.
Compared with a detailed list of criteria used to select indicators for Quality of Life Counts, which often had to be compromised, criteria for the new set of indicators were less ambitious, and wherever possible indicators were linked to the purpose and priorities in the UK strategy, were held as high priorities by the UK government, had UK coverage, had trends available, highlighted challenges, and were statistically robust and meaningful.
There was also an overall aim of having about fifty indicators in the final set. Although this goal was not quite achieved, the sixty-eight indicators in the new set are fewer than half the number in the Quality of Life Counts set.
It is unlikely that many of the indicators have influenced policy because they are part of a sustainable development set. In most cases the indicators selected were already well-established measures. One of the exceptions to this was the indicator on populations of wild birds. The media initially made much of the novelty of the government measuring people's quality of life by counting birds, but the messages conveyed by the indicator demanded action. Although overall the population of birds had not changed significantly, the populations of farmland species had fallen dramatically since the 1970s. As a direct result of the indicator, a policy response was put in place to halt the decline and stabilize populations.
The UK has a decentralized statistical system, with statistics collected and published by all principal ministries and their agencies. The Department for Environment, Food and Rural Affairs (DEFRA) is responsible for coordinating efforts across the government for sustainable development. Statisticians in DEFRA therefore have the task of establishing and maintaining the UK sustainable development indicators.
DEFRA statisticians have been at the forefront of negotiations with other ministries to establish the indicators, agree on presentations, and in some cases persuade them to initiate new data collection. DEFRA statisticians have collated all the indicator data and have had responsibility for assessing and publishing the indicator set. Although coordinating the indicators is a logistical challenge, the work is done under the auspices of National Statistics, which is the independent framework under which statistics are produced in the UK, thus enabling the indicators to be compiled and reported without policy or ministerial interference.
Only a handful of countries and institutions have actively made summary assessments of indicators; in most cases the indicators are presented only as charts and commentary. For the UK's Quality of Life Counts, early attempts were made to have targets associated with the indicators, but it was concluded that in most cases there was no easily identified point at which a trend was sustainable. Therefore the approach of assessing progress against baselines was established and reported using "traffic lights."
With hindsight, there are some arguments for why it might have been better to avoid making summary assessments. Policymakers and ministers undoubtedly are sensitive about what color traffic light is reported for their particular policy areas, and the media can become very focused on the traffic lights and not on the wider issues behind the indicators. However, on balance, symbol assessments probably are useful to help people understand what the charts are saying and to learn at a glance whether things are improving or getting worse. Now that traffic light assessments have been in use for 5 years, it is doubtful that stakeholders and the media would accept UK indicators without assessments.
Problems with this means of assessment include the arbitrariness of the baselines, with the danger that a different baseline could produce a very different assessment of progress, and the difficulty of determining whether a change in an indicator should be regarded as significant. The National Audit Office and others have pressed for the basis of the assessments to be made much more transparent, with clear justifications for the traffic lights.
This has remained difficult, not least because for many of the data sets limited statistical information on significance was available. Assessments hitherto had been made based on the experience and knowledge (and sometimes gut feeling) of the statisticians involved, but it was very difficult to robustly justify the assessments beyond saying what the latest data were and what the baseline figures were.
To try to make the assessments more rigorous, a threshold percentage change in the indicators was determined, above which a change was considered significant. This work was undertaken as part of an update of Quality of Life Counts in 2004. The determination of the threshold was still arbitrary to some extent but was based on what percentage change would support the assessments previously made for most if not all indicators. So it was an a priori judgment rather than one based on statistical rigor. The main benefit was that although debates could be had about the threshold, there was at least greater transparency in and defense of the traffic light assessments. For most indicators a 3 percent change was regarded as sufficient for a green or red traffic light. Where the value of an indicator was already very high and could not be expected to change greatly, a smaller amount of change might be regarded as significant, so there remained some latitude for common sense to prevail.
In the new set of indicators, attempts have been made to reduce the effect of the baseline year by making the baseline figure, against which the latest data are assessed, a 3-year average centered on the baseline year.
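The assessment method described above, a percentage-change threshold applied against a 3-year averaged baseline, can be sketched in a few lines. This is a minimal illustration of the logic, not DEFRA's actual implementation; the function names, the default 3 percent threshold, and the sample data are assumptions drawn only from the description in the text.

```python
# Hypothetical sketch of the traffic-light assessment described above.
# Names, defaults, and data are illustrative, not DEFRA's actual code.

def baseline_average(series, base_year):
    """3-year average centered on the baseline year, where data allow."""
    values = [series[y] for y in (base_year - 1, base_year, base_year + 1)
              if y in series]
    return sum(values) / len(values)

def traffic_light(series, base_year, latest_year,
                  improvement_is_increase=True, threshold=0.03):
    """Return 'green', 'red', or 'amber' using a percentage-change threshold."""
    base = baseline_average(series, base_year)
    change = (series[latest_year] - base) / base
    if abs(change) < threshold:
        return "amber"  # change too small to be regarded as significant
    improved = change > 0 if improvement_is_increase else change < 0
    return "green" if improved else "red"

# Example: an indicator that has risen well above its averaged baseline.
data = {1998: 100.0, 1999: 102.0, 2000: 101.0, 2005: 110.0}
print(traffic_light(data, 1999, 2005))  # prints "green"
```

Averaging the baseline over three years, as in `baseline_average`, blunts the effect of an unusually high or low single baseline year, which is precisely the weakness the text identifies in the earlier approach.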
The more transparent method of assessing and reporting the new indicators has recently been endorsed by the National Audit Office and the independent UK Statistics Commission.
Communication Products: "Quality of Life Barometer" Leaflet
In the initial years of Quality of Life Counts there was frustration among ministers that the headline indicators were not making headlines in the media, and awareness of sustainable development was low. The main approach to highlighting the indicators was through the government's sustainable development Web site and through annual reports, but these were eliciting little interest from the media. It was clear that a more succinct way of getting the indicators across to audiences beyond the cognoscenti was needed.
A leaflet, the "Quality of Life Barometer," attempted to present the indicators in simplified form, stripping out unnecessary detail and providing very short commentary and traffic light assessments. Information on all fifteen headline indicators was condensed onto two sides of A4 paper. (See Annex 18.1 for an example of the leaflet.)
The leaflet proved to be extremely effective in promoting the headline indicators to wider audiences, not least because it could be updated regularly, produced in bulk, and easily distributed. It was applauded by the UK's independent Sustainable Development Commission and EU indicator experts and was described as "the single most important development in communicating sustainable development" (Professor Anne Power, UK Sustainable Development Commissioner, 2001).
At media briefings, it was often the "Quality of Life Barometer" leaflet that the journalists turned to rather than the weighty tome that was the main focus of the event. Many of their questions directed at ministers were then based on the headline indicators and traffic light assessments shown in the leaflet.
The leaflet was particularly successful at one media briefing. It resulted in a healthy debate in newspapers and television news programs on what quality of life means, how it should be measured, and whether the government's assessments of progress were the right ones. Examples of the newspaper headlines were as follows:
Evening Standard: "Crime up, roads worse but life is better says Labour"
The Times: "Life is better despite crime, illness and cars, says Labour"
The Express: "Quality of life is better? But what about all the thuggery and the jams"
The Guardian: "Quality of life 'getting better'"
The leaflet has inspired similar documents to be produced by, for example, the European Commission, the Environment Agency (England and Wales), and the Finnish Environment Institute and has been emulated more widely since.
Communication Products: Pocket-Sized Booklets
The Quality of Life Counts set was not intended to be updated as frequently as the fifteen headline indicators; to have done so would be impractical, and most trends would not be expected to change dramatically annually. An updated compendium of the indicators, Quality of Life Counts: Update 2004, was published on the sustainable development Web site but received little stakeholder and media recognition.
A month later a new publication, Sustainable Development Indicators in Your Pocket 2004 (DEFRA 2004b), was published and was a great success. This pocket-sized booklet (A6 in size) contained a selection of fifty indicators to help illustrate the breadth of issues covered by the sustainable development agenda but without overloading the reader with too many indicators. Orders for the booklet surpassed expectations, and a reprint had to be done to meet demand from, in particular, schools and other educational institutions. This success thus reinforced the assumption that pocket summaries of indicators would be more useful and attract wider audiences than large statistical volumes.
This in part influenced the aim for the third generation of indicators to try to reduce the number of indicators in the set and thereby make them more manageable in communication terms. A new booklet, Sustainable Development Indicators in Your Pocket 2005 (DEFRA 2005b), provides baseline assessments for the new indicator set and contains all sixty-eight indicators in one small volume. It has proved very popular and has been applauded by a wide variety of stakeholders.
Once Quality of Life Counts was released, there were demands for indicators that were more relevant to local experiences. Regional Quality of Life Counts therefore was produced and updated annually, providing regional versions of the headline indicators, where data were available, for the English Regions. These were intended to help raise awareness of sustainable development, provide a useful input into regional sustainable development frameworks, and help direct policies where there are regional disparities.
Inevitably, producing regional indicators led to comparisons between regions, and in England the media often assume that things are better in the south of the country than in the north. The Regional Quality of Life Counts (DEFRA 2002) publication generated some interesting newspaper headlines:
The Daily Telegraph: "It's grim up North, say life quality statistics"
Daily Express: "Great divide: Head south if you want a longer life, northerners told"
The Guardian: "Poverty and crime make it tough up north—but more birds are singing"
The Times: "Life sounds sweet in poorer North"
In December 2005, new regional versions of forty-four of the sixty-eight national indicators were published. In terms of interest, they generated possibly the best media coverage ever in the UK of sustainable development and indicators. Articles featured in both the national and the regional press; regional newspapers in particular produced analyses of the indicators for their regions and highlighted the successes and the challenges.
Work has been done at the local level, too. In 2000 a menu of twenty-nine indicators was developed, which local authorities were encouraged to consider using for their strategies and other local monitoring. The menu, Local Quality of Life Counts (DETR 2000), was developed jointly by central government, local government bodies, the Audit Commission, and Local Agenda 21 groups and tested by thirty local authorities. The development of local indicators was then taken forward by the Audit Commission, and in collaboration with DEFRA and other ministries a new set of local indicators has been produced, related where possible to the national indicators.