Last week I didn’t quite manage to complete the JustScience week challenge to blog on a science topic, and only on science, every day of the week. I managed five days but then the weekend got in the way. On those five days I wrote about the application of scientific methods to examine landscape processes – specifically wildfire regimes and land use/cover change (LUCC). Another of my ‘scientific’ interests is the relationship between science and policy- and decision-making, so what I was planning to write on Saturday might not have fitted the JustScience bill anyway. I’ll post it now instead: a brief review of some of the ways commentators have suggested science may need to adapt in the 21st century to ensure it remains relevant to ‘real world problems’.
Ulrich Beck has suggested that we now live in the ‘risk society’. Beck’s view of the risks contemporary societies face – such as changing climates, atmospheric pollution, and exposure to radioactive substances – shares common themes with the work of others examining contemporary societies and their relationships with science, technology and the environment (Giddens, for example).
In the risk society, many threats are difficult to identify in everyday life, requiring complicated, expensive (usually scientific) equipment to measure and identify. These threats, which themselves demand the methods and tools of science and technology to investigate, have frequently been initiated by previous scientific and technological endeavours. Their consequences are no longer simply further academic and scientific problems for study, but matter socially, politically, culturally, and environmentally. Furthermore, these consequences may be imminent, potentially necessitating action before the often lengthy traditional scientific method (hypothesis testing, academic peer review and so on) has produced a consensus on the state of knowledge about them.
Beck goes on to suggest a distinction between two divergent sciences: the science of data and the science of experience. The former is the older, specialised, laboratory-based science that uses the language of mathematics to explore the world. The latter identifies consequences and threats, publicly testing its objectives and standards to examine the doubts the former ignores. Traditional science, Beck suggests, is at the root of current environmental problems and will simply propagate risk further rather than reducing it.
Taking a similar perspective, Funtowicz and Ravetz have presented ‘post-normal’ science as a new type of science to replace the reductionist, analytic worldview of ‘normal’ science with a “systemic, synthetic and humanistic” approach. The term ‘post-normal’ deliberately echoes Thomas Kuhn’s formulation of ‘normal’ science functioning between paradigm shifts, to emphasise the need for a shift in scientific thinking and practices that takes it outside of the standard objective, value-free perspective. The methodology of post-normal science, then, emphasises uncertainties in knowledge, quality of method, and complexities in ethics. Post-normal science, according to Funtowicz and Ravetz, embraces the uncertainties inherent in issues of risk and the environment, makes values explicit rather than presupposing them, and generates knowledge and understanding through an interactive dialogue rather than formalised deduction. You can read more about post-normal science itself at NUSAP.net, and the Post-Normal Times blog will keep you up-to-date on recent events and issues at the interface between science and policy-making.
Recently I’ve been thinking about the utility of environmental simulation models (particularly those that explicitly consider human activity) for examining the sorts of problems present in the ‘risk society’, and to which post-normal science has been promoted as able to contribute. I’ll write in more detail at a later date, but briefly, many facets of post-normal science seem relevant to the issues facing environmental (and landscape) simulation models. In particular, the epistemological problems of model validation recently discussed in the academic literature (e.g. by Naomi Oreskes et al. and Keith Beven, and which I have touched on briefly in the past but must post about in more detail soon) have highlighted the importance of considering the subjective aspects of the model construction process.
As a result I have come to think that model ‘validation’ might be better achieved by taking an evaluative, qualitative approach rather than a confirmatory one. This shift would essentially mean asking “is this model good enough?” rather than “is this model true?”. Ethical questions about who should be asked, and who is qualified to judge, whether a model is trustworthy and fit for purpose to examine real world problems (and not just those confined to a laboratory) also become important when these criteria are used. These model validation issues thus resonate with a post-normal science perspective on the environmental issues contemporary societies currently face.
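To make the distinction concrete, here is a toy sketch in Python. Everything in it – the burned-area figures, the tolerance, the function names – is invented for illustration only, and the choice of RMSE and a single numeric threshold grossly simplifies what would really be a qualitative, negotiated judgement. But it shows the difference in the two questions being asked of the same model output:

```python
# Toy illustration of evaluative vs. confirmatory model assessment.
# All data and the tolerance below are hypothetical, not from any real model.

def rmse(simulated, observed):
    """Root-mean-square error between paired model outputs and observations."""
    n = len(observed)
    return (sum((s - o) ** 2 for s, o in zip(simulated, observed)) / n) ** 0.5

# Hypothetical annual burned-area values: model output vs. fire records (km^2).
simulated = [120.0, 95.0, 140.0, 110.0]
observed  = [115.0, 100.0, 150.0, 105.0]

# Confirmatory question: does the model reproduce the observations exactly?
# By this standard almost any environmental model "fails".
is_true = simulated == observed

# Evaluative question: is the error acceptable for the decision at hand?
# Crucially, the tolerance is not a scientific constant; who sets it, and who
# is qualified to set it, is exactly the ethical question raised above.
tolerance = 10.0  # km^2, a stakeholder-negotiated fitness-for-purpose threshold
is_good_enough = rmse(simulated, observed) <= tolerance

print(is_true, is_good_enough)  # prints: False True
```

The point of the sketch is that `is_true` and `is_good_enough` can disagree: a model that is demonstrably not a ‘true’ reproduction of the world may still be perfectly adequate for the purpose its users have agreed on.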
I’ll write more soon on both the epistemological problems of confirmatory model validation for environmental and landscape simulation models, and on potential ways we might assess the trustworthiness and practical adequacy of these models for addressing the problems of the ‘risk society’.
Technorati Tags: post-normal, science, policy, risk, simulation, modelling, validation