Indicator developers use frameworks to provide a common language and perspective on an issue and its solution, which facilitates indicator development, particularly when many different actors are involved. How issues are framed matters for interpreting and analyzing the results, because the framework embodies the assumptions and rationales on which an indicator is based; these should be made available to anyone wanting to interpret the indicator. Understanding these assumptions and frameworks is essential for comparing and discussing indicators from different institutions, which may rest on different frameworks. For most users, however, presenting the frameworks themselves, or categories drawn from them, would only add unnecessary complexity that might distract from the results.
Institutions that work on sustainable development need one foot in the politics of problem definition, responsive to issues of appropriate participation and representation, and the other in the world of science and technology, responsive to issues of expertise and quality control (Clark 2003). Clark writes that perhaps the strongest message to come out of the Johannesburg summit was that the research community needs to complement its historical role in identifying problems of sustainability with a greater willingness to join other organizations in finding practical solutions to those problems. Institutions that spend most of their time doing pure science or pure politics are not likely to be as successful as boundary-spanning institutions (e.g., those providing scientific assessments or regional decision support). Boundary-spanning institutions that consciously manage and balance the multiple boundaries within a system (e.g., between disciplines, between organizational levels, and between different forms of knowledge) tend to be more effective than other institutions in creating information that can influence policymaking (Cash et al. 2002).
The three criteria of credibility, legitimacy, and salience are key attributes for characterizing the effectiveness of sustainable development indicators (SDIs) (Cash et al. 2002; Parris and Kates 2003): credibility refers to the scientific and technical adequacy of the measurement system, legitimacy to fair dealing with the divergent values and beliefs of stakeholders, and salience to the relevance of the indicator to decision makers. The indicator development process itself is responsible for ensuring at least the first two of these criteria.
Resources (e.g., equipment, monitoring, data, research, and knowledge) vary substantially between developed and developing countries. The socioeconomic, environmental, and knowledge dichotomies between the two hemispheres may be exacerbated by this resource distribution (Clark 2003). Finding ways to bridge this resource gap is essential for equitable representation, both geographically and in terms of recognizing and framing important issues. Equitable representation increases the legitimacy and credibility of both the process and the final product.
A capacity for mobilizing and using science and technology is also an essential component of strategies promoting sustainable development (Cash et al. 2003). Generating adequate scientific capacity and institutional support in developing countries is particularly urgent in order to enhance resilience in regions that are vulnerable to the multiple stresses that arise from rapid, simultaneous changes in social and environmental systems (Kates et al. 2000).
However, scientific capacity alone is not sufficient for the purposes of producing credible SDIs. Instead, capacity building is needed, with emphasis placed on supporting the wider processes that ensure legitimacy and credibility of the indicator development process.
Effective capacity building places emphasis on three key components: communication, translation, and facilitation (mediation) (Cash et al. 2003). Providing for adequate communication between stakeholders is essential, as is translation that ensures mutual understanding is possible; communication is often hindered by jargon, language differences, differing experiences, and presumptions about what constitutes a persuasive argument. Facilitation or mediation further enhances the transparency of the process by bringing all perspectives to the table, defining the rules of conduct and procedure, and establishing decision-making criteria.
A serious commitment by institutions to managing the boundaries between expertise and decision making will help link knowledge to action. Establishing accountability to key actors across the boundary and using joint outputs to foster cohesion and commitment to the process are also helpful in developing capacity for sustainable development.
Indicator legitimacy and acceptability hinge on recognizing the plurality of legitimate perspectives (Funtowitz et al. 1999). For complex issues, the qualities of the decision-making process itself are critical, and processes designed to open dialogue between stakeholders, rather than to dilute the authority of science, are key to creating a broad base of consensus. It has been suggested that the role of indicators is to serve as aids to this dialogue and decision making (Funtowitz et al. 1999).
The value of a specific indicator set varies between users and situations, and users should be able to influence the choice of the indicators they will use. Sometimes this local choice results in a loss of comparability as different groups and processes elect to use different indicators. This can be acceptable when the main purpose of the indicators is to promote effective decision making; when the main purpose is comparability, however, more weight should be given to standardization. It is not always possible to have both.