Initial Michigan UP Ecological Economic Modelling Webpage


We now have a very basic webpage online, (very) briefly outlining the Michigan UP Ecological-Economic Modeling project. This is just so that we have an online presence for now – in time we will develop this into a much more comprehensive document detailing the model, its construction and use. Hopefully, at some point in the future we’ll also mount a version of the model online. I’ll keep you posted on the online development of the project.

Equifinality

One problem in determining the appropriateness of a model structure is equifinality. In order to model open, middle-numbered systems, boundaries need to be drawn around the system to delineate what will be considered in the model and what will not. This positioning of model boundaries, dictating which processes will be represented at which spatial and temporal scales, is known as model ‘closure’.

Model closure is not a problem for the metaphor models described previously, as the very formulation of those model systems ensures they are closed (i.e. they are logically self-contained). But the systems examined and modelled by geographers, ecologists and environmental scientists are inherently open and at scales on the order of the human observer – model closure of these systems has been an important point of discussion in these disciplines.

Equifinality is the characteristic of all open systems that a final system state may be reached from multiple initial conditions and via different sequences of system states. In modelling terms, equifinality implies that there are multiple (closed) model structures that may adequately reproduce the empirically observed behaviour of an open system. Choosing between two such models then becomes a matter of judgement based on an analysis of the process of model construction – How was the model constructed? What variables were included or excluded? Why? Why not? Alternatively, the two models might be used in tandem to reflect on what the assumptions of each imply for the other and to highlight deficiencies in system understanding.
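To make the modelling implication concrete, here is a minimal, hypothetical sketch (my own toy example, not taken from any particular study) in which two structurally different models reproduce the same ‘observed’ series about equally well given the observational noise:

    # Minimal, hypothetical illustration of equifinality: two different model
    # structures reproduce the same 'observed' series about equally well.
    import numpy as np

    rng = np.random.default_rng(42)
    t = np.arange(50)
    observed = 10.0 * np.exp(-0.1 * t) + rng.normal(0, 0.2, t.size)  # synthetic data

    # Model A: a single store decaying at one rate
    model_a = 10.0 * np.exp(-0.1 * t)

    # Model B: two stores decaying at different rates, summed
    model_b = 6.0 * np.exp(-0.13 * t) + 4.0 * np.exp(-0.065 * t)

    def rmse(sim):
        return np.sqrt(np.mean((observed - sim) ** 2))

    print(f"RMSE model A: {rmse(model_a):.3f}")
    print(f"RMSE model B: {rmse(model_b):.3f}")
    # Both errors are comparable to the observational noise: the data alone
    # cannot tell us which (closed) model structure is the 'right' one.

Nothing in the data distinguishes the one-store structure from the two-store structure; the choice between them rests on judgements about the process of model construction, exactly as argued above.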

Thus, equifinality generates uncertainty in the appropriateness of model structure and emphasises that evaluation of the modelling process is as important as evaluation of the model itself.

Critical Realism for Environmental Modelling?

As I’ve discussed before, Critical Realism has been suggested as a useful framework for understanding the nature of reality (ontology) for scientists working in both the environmental and social sciences. The recognition of the ‘open’ and middle-numbered nature of real-world systems has led to a growing acceptance of both realist and relativist perspectives (more on relativism in a few posts’ time) toward the modelling of these systems in the environmental and geographical sciences.

To re-cap, the critical realist ontology states that reality exists independently of our knowledge, and that it is structured into three levels: real natural generating mechanisms; actual events generated by those mechanisms; and empirical observations of actual events. Whilst mechanisms are time and space invariant (i.e. are universal), actual events are not, because they are realisations of the real generating mechanisms acting in particular conditions and contingent circumstances. This view seems to fit well with the previous discussion on the nature of ‘open’ systems – identical mechanisms will not necessarily produce identical events at different locations in space and time in the real world.

Richards initiated debate on the possibility of adopting a critical realist perspective toward research in the environmental sciences by criticising the emphasis on rationalist (hypothetico-deductive) methods. The hypothetico-deductive method states that claims to knowledge (i.e. theories or hypotheses) should be subjected to tests that are able to falsify those claims. Once a theory has been produced (based on empirical observations), a consequence of that theory is deduced (i.e. a prediction is made) and an experiment constructed to examine whether the predicted consequences are observed. By replicating experiments, credence is given to the theory, and knowledge based upon it (i.e. laws and facts) is held as provisional until evidence is found to disprove the theory.

However, critical realism does not value regularity and replication as highly as rationalism. The separation of real mechanisms from empirical observations, via actual events, means that “What causes something to happen has nothing to do with the number of times we have observed it happening”. Thus, in the search for the laws of nature, a rationalist approach leaves open the possibility of the creation of laws as artefacts of the experimental (or model) ‘closure’ of the inherently open system it seeks to represent (more on model ‘closure’ next time).

The separation of the three levels of reality means that whilst reality exists objectively and independently, we cannot observe the real generating mechanisms directly. This separation causes a problem – how can science progress toward understanding the true nature of reality if the real level is unobservable? How do critical realists assess whether they have reached the real underlying mechanisms of a system and can stop studying it?

Whilst critical realism offers reasons for why the nature of reality makes the modelling of ‘open’ systems tricky for scientists, it doesn’t seem to provide a useful method by which to overcome the remaining epistemological problem of knowing whether a given (simulation) model structure is appropriate. In the next few posts I’ll examine some of these epistemological issues (equifinality, looping effects, and affirming the consequent) before switching to examine some potential responses.

Validating Models of Open Systems

A simulation model is an internally logically consistent theory of how a system functions. Simulation models are currently recognised by environmental scientists as powerful tools, but the ways in which these tools should be used, the questions they should be used to examine, and the ways in which they can be ‘validated’ are still much debated. Whether a model aims to represent an ‘open’ or a ‘closed’ system has implications for the process of validation.

Issues of validation and model assessment are largely absent in discussions of abstract models that purport to represent the fundamental underlying processes of ‘real world’ phenomena such as wildfire, social preferences and human intelligence. These ‘metaphor models’ do not require empirical validation in the sense that environmental and earth systems modellers use it, as the very formulation of the system of study ensures it is ‘closed’. That is, the system the model examines is logically self-contained and is neither influenced by, nor interactive with, outside statements or phenomena. The modellers do not claim to know much about the real-world system their model purportedly represents, nor do they claim their model is the best representation of it. Rather, the modelled system is related to the empirical phenomena via ‘rich analogy’ and investigators aim to elucidate the essential system properties that emerge from the simplest model structure and starting conditions.

In contrast to these virtual, logically closed systems, empirically observed systems in the real world are ‘open’. That is, they are in a state of disequilibrium with flows of mass and energy both into and out of them. Examples in environmental systems are flows of water and sediment into and out of watersheds and flows of energy into (via photosynthesis) and out of (via respiration and movement) ecological systems. Real world systems containing humans and human activity are open not only in terms of conservation of energy and mass, but also in terms of information, meaning and value. Political, economic, social, cultural and scientific flows of information across the boundaries of the system cause changes in the meanings, values and states of the processes, patterns and entities of each of the above social structures and knowledge systems. Thus, system behaviour is open to modification by events and phenomena outside the system of study.

Alongside being ‘open’, these systems are also ‘middle-numbered’. Middle-numbered systems differ from small-numbered systems (controlled situations with few interacting components, e.g. two billiard balls colliding) that can be described and studied well using Cartesian methods, and from large-numbered systems (many, many interacting components, e.g. air molecules in a room) that can be described and studied using techniques from statistical physics. Rather, middle-numbered systems have many components, the nature of interactions between which is not homogeneous and is often dictated or influenced by the condition of other variables, themselves changing (and potentially distant) in time and space. Such a situation might be termed complex (though many perspectives on complexity exist). Systems at the landscape scale in the real world are complex and middle-numbered. They exist in a unique time and place. In these systems history and location are important and their study is necessarily a ‘historical science’ that recognises the difficulty of analysing unique events scientifically through formal, laboratory-type testing and the hypothetico-deductive method. Most real-world systems possess these properties, and coupled human-environment systems are a prime example.

Traditionally, laboratory science has attempted to isolate real-world systems such that they become closed and amenable to the hypothetico-deductive method. The hypothetico-deductive method is based upon logical prediction of phenomena independent of time and place and is therefore useful for generating knowledge about logically, energetically and materially ‘closed’ systems. However, the ‘open’ nature of many real-world environmental systems (which cannot be taken into the laboratory and instead must be studied in situ) means that the hypothetico-deductive method is often problematic to implement when generating knowledge about environmental systems from simulation models. Any conclusions drawn using the hypothetico-deductive method for open systems using a simulation model will implicitly be about the model rather than the open system it represents. Validation has also frequently been used, incorrectly, as synonymous with demonstrating that the model is a truly accurate representation of the real world. By contrast, validation in the discussion presented in this series of blog posts refers to the process by which a model constructed to represent a real-world system has been shown to represent that system well enough to serve that model’s intended purpose. That is, validation is taken to mean the establishment of model legitimacy – usually of arguments and methods.

In the next few posts I’ll examine the rise of (critical) realist philosophies in the environmental sciences and environmental modelling and will explore the philosophy underlying these problems of model validation in more detail.

Validating and Interpreting Socio-Ecological Simulation Models

Over the next 9 posts I’ll discuss the validation, evaluation and interpretation of environmental simulation modelling. Much of this discussion is taken from chapter seven of my PhD thesis, arising out of my efforts to model the impacts of agricultural land use change on wildfire regimes in Spain. Specifically, the discussion and argument are focused on simulation models that represent socio-ecological systems. Socio-Ecological Simulation Models (SESMs), as I will refer to them, are those that represent explicitly the feedbacks between the activities and decisions of individual actors and their social, economic and ecological environments.

To represent such real-world behaviour, models of this type are usually spatially explicit and agent-based (e.g. Evans et al., Moss et al., Evans and Kelley, An et al., Matthews and Selman) – the model I developed is an example of a SESM. One motivating question for the discussion that follows is how, given the nature of the systems and issues they are used to examine, we should approach model evaluation or ‘validation’. That is, how do we identify the level of confidence that can be placed in the knowledge produced by the use of a SESM? A second question is, given the nature of SESMs, what approaches and tools are available and should be used to ensure models of this type provide the most useful knowledge to address contemporary environmental problems?

The discussion that follows adopts a (pragmatic) realist perspective (in the tradition of Richards and Sayer) and recognises the importance of the open, historically and geographically contingent nature of socio-ecological systems. The difficulties of attempting to use general rules and theories (i.e. a model) to investigate and understand a unique place in time are addressed. As is increasingly acknowledged in environmental simulation modelling (e.g. Sarewitz et al.), socio-ecological simulation modelling is a process in itself in which human decisions come to the fore – both because human decision-making is being modelled and, importantly, because modellers’ decisions during model construction are a vital component of the process.

If these models are intended to inform policy-makers and stakeholders about potential impacts of human activity, the uncertainty inherent in them needs to be managed to ensure their effective use. Fostering trust and understanding via a model that is practically adequate for its purpose may aid traditional scientific forms of model validation and evaluation. The list below gives the titles of the posts that will follow over the next couple of weeks (and will become links when each post is online).

The Nature of Open Systems
Realist Philosophy in the Environmental Sciences
Equifinality
Interactive vs. Indifferent Kinds
Affirming the Consequent
Relativism in Modelling
Alternative Model Assessment Criteria
Stakeholder Participation and Expertise
Summary

getting my head round things

Now that I’m into my second week at MSU, things have calmed down a little. I’ve ploughed through most of the necessary admin, met many of the people I’ll be working with here at CSIS and throughout MSU (although, it being summer, campus is quiet right now – the undergrads are gone and the postgrads are away on their fieldwork), and finally got my apartment into a liveable state. The next few weeks will no doubt be spent really getting my head around what we’re aiming to achieve with this integrated ecological-economic modelling project. For example, during the next month or two I’ll take a trip up to our study area to get a feel for the landscape, see the experimental plots that have been put in place previously, and gain a better understanding of the effects of timber harvesting. I also plan on meeting and interviewing several key management stakeholders from organisations such as Michigan’s Department of Natural Resources and The Nature Conservancy to get their perspective on the landscape and what they might gain from our work. I’ve also been examining some of the tools that we hope to utilise and build upon, such as the USFS’ Forest Vegetation Simulator.

So whilst I get my head around exactly what this new project is all about, I’ll continue to blog about some of the work coming out of my PhD thesis. I’ve been threatening to do this for a while, and now I really mean it. Specifically, I’ll walk through the later stages of my thesis where I explored the potential of more reflexive forms of model validation – seeing the modelling process as an end in itself, a learning process, rather than a means to an end (i.e. the model) which is then used to ‘predict’ the future. I’ll discuss the philosophy underlying this perspective before re-examining my efforts to engage local stakeholders with the model I produced, after it had been ‘completed’ with their minimal input.

And of course, I’ll throw in the odd comment to let you know how things are going here in this new world I’ve recently landed in. Like my trip to the grey and windswept Lake Michigan at the weekend – I’m going to have to look into this kite-surfing stuff…

[image: kitesurfer]

Agent-Based Modelling for Interdisciplinary Geographical Enquiry

Bruce Rhoads argued that:

“The time has come for geography to fulfil its potential by adopting a position of intellectual leadership in the realm of interconnections between human and biophysical systems.”

Many areas of scientific endeavour are currently attempting to do the same, and interdisciplinarity has become a big buzzword. Modelling has become a common tool for this interdisciplinary study (for example, ecological-economic models), with several different approaches available. Increases in computing power and the arrival of object-oriented programming have led to the rise of agent-based modelling (also termed individual-based or discrete-element modelling).

In their latest paper in Geoforum, Bithell et al. propose this form of modelling, with its “rich diversity of approaches”, as an opportune way to explore the interactions of social and environmental processes in Geography. The authors illustrate the potential of this form of modelling by providing outlines of individual-based models from hydrology, geomorphology, ecology and land-use change (the latter of which I have tried to turn my hand to). The advantages of agent-based modelling, the authors suggest, include the ability to represent (see the sketch after the list for how these properties might look in code):

  1. agents as embedded within their environment;
  2. agents as able to perceive both their internal state and the state of their environment;
  3. agents that may interact with one another in a non-homogeneous manner;
  4. agents that can take action to change both their relationships with other agents and their environment; and
  5. agents that can retain a ‘memory’ of a history of past events.
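As a rough illustration of how those five properties might translate into code, here is a minimal, hypothetical agent sketch in Python (the class names, update rules and parameter values are my own inventions for illustration, not from Bithell et al. or any particular published SESM):

    # Minimal, hypothetical sketch of an agent exhibiting the five properties
    # listed above: embedding, perception, heterogeneous interaction, action
    # on environment and relationships, and memory.
    import random

    class Environment:
        def __init__(self, size):
            # a simple 1-D 'landscape' of resource levels between 0 and 1
            self.resource = [random.uniform(0.0, 1.0) for _ in range(size)]

    class Agent:
        def __init__(self, agent_id, location, env):
            self.id = agent_id
            self.location = location     # 1. embedded within the environment
            self.env = env
            self.energy = 1.0            # internal state
            self.neighbours = []         # relationships with other agents
            self.memory = []             # 5. history of past events

        def perceive(self):
            # 2. perceive both internal state and local environmental state
            return self.energy, self.env.resource[self.location]

        def interact(self):
            # 3. non-homogeneous interaction: behaviour depends on each neighbour
            for other in self.neighbours:
                if other.energy < self.energy:
                    other.energy += 0.05  # e.g. share with poorer neighbours
                    self.energy -= 0.05

        def act(self):
            # 4. act to change the environment (and, potentially, relationships)
            _, local_resource = self.perceive()
            harvested = min(local_resource, 0.1)
            self.env.resource[self.location] -= harvested
            self.energy += harvested
            self.memory.append(('harvest', harvested))

    # one toy time step
    env = Environment(size=10)
    agents = [Agent(i, i, env) for i in range(10)]
    for a in agents:
        a.neighbours = [b for b in agents if abs(b.location - a.location) == 1]
    for a in agents:
        a.interact()
        a.act()

Even in a toy like this, the design decisions (who counts as a neighbour, how sharing works, what gets remembered) are exactly the kind of modeller’s choices discussed throughout this series.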

However, the development of these representations can be a challenging task, as I found during my PhD modelling exploits, and requires a ‘diversity of resources’. When representing human agents these resources include past population censuses, surveys and interviews of contemporary populations, and theoretical understanding of social, cultural and economic behaviour from the literature. In my modelling of a contemporary population I used interviews and theoretical understanding from the literature and found that, whilst more resource intensive, actually going to speak with those being represented in the model was far more useful (and actually revealed the deficiencies of accepted theories).

In their discussion, Bithell et al. consider the problems of representing social structures within an individual-based model, suggesting that:

“simulation of social structure may be a case of equipping model agents with the right set of tools to allow perception of, and interaction with, dynamic structures both social and environmental at scales much larger than individual agents”.

Thus, the suggestion is that individual-based models of this type may need some form of hierarchical representation.
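One way to read that suggestion is to give agents a handle on aggregate-level structures that they can perceive and respond to but not individually control. Purely as an illustration (the names and the convergence rule here are my own, hypothetical choices):

    # Purely illustrative: agents perceive a 'social structure' (a village-level
    # norm) that aggregates over many agents and so exists at a larger scale
    # than any individual.
    class Village:
        def __init__(self, members):
            self.members = members

        def norm(self):
            # an aggregate-level property emerging from member behaviour
            return sum(m.effort for m in self.members) / len(self.members)

    class Farmer:
        def __init__(self, effort):
            self.effort = effort

        def step(self, village):
            # individual behaviour adjusts toward the perceived group norm
            self.effort += 0.1 * (village.norm() - self.effort)

    farmers = [Farmer(effort=e) for e in (0.2, 0.5, 0.9)]
    village = Village(farmers)
    for _ in range(10):
        for f in farmers:
            f.step(village)
    print([round(f.effort, 2) for f in farmers])  # efforts converge toward the norm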

Importantly, I think, the authors also briefly highlight the reflexive nature of agent-based models of human populations. This reflexivity occurs if the model is embedded within the society it represents, thus potentially modifying the structure of the very system it represents. This situation has parallels with Hacking’s ‘looping effect’ that I’ll write about more another time. Bithell et al. suggest that this reflexive nature may, in the end, limit the questions that such models can hope to address meaningfully. However, this does not prevent them from concluding:

“The complex intertwined networks of physical, ecological and social systems that govern human attachment to, and exploitation of, particular places (including, perhaps, the Earth itself) may seem an intractable problem to study, but these methods have the potential to throw some light on the obscurity; and, indeed, to permit geographers to renew their exploration of space–time geographies.”

The Importance of Land Tenure

The Economist today highlighted some recent work by Dr Thomas Elmqvist of Stockholm University. Using a combination of Landsat satellite imagery and interviews and surveys with locals in Madagascar, Elmqvist and colleagues examined whether human population densities or land tenure systems were more important for determining patterns of tropical deforestation.

“From the Landsat images they were able to distinguish areas of forest loss, forest gain and stable cover. Different parts of Androy exhibited different patterns. The west showed a continuous loss. The north showed continuous increase. The centre and the south appeared stable. Damagingly for the population-density theory, the western part of the region, the one area of serious deforestation, had a low population density.

This is not to say that a thin population is bad for forests; the north, where forest cover is increasing, is also sparsely populated. But what is clear is that lots of people do not necessarily harm the forest, since cover was stable in the most highly populated area, the south.

The difference between the two sparsely populated regions was that in the west, where forest cover has dwindled, neither formal nor customary tenure was enforced. In the north—only about 20km away—land rights were well defined and forest cover increased. As with ocean fisheries, so with tropical forests, everybody’s business is nobody’s business.”

Land tenure (spatial) structure was one of the variables I examined in my agent-based model of agricultural land-use decision-making in Spain. I found that whilst neighbourhood effects due to land tenure were evident in patterns of land use, market conditions were the primary driver of change (NB land-use/cover change in the traditional Mediterranean landscape I examined is of a markedly different type).
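For readers unfamiliar with how a ‘neighbourhood effect’ enters this kind of model, a toy version (not my thesis model – the parcel states, weights and market signal here are all invented for illustration) might weight a parcel’s chance of conversion by the states of adjacent parcels alongside an exogenous market signal:

    # Toy illustration: a parcel's probability of converting to a given land use
    # rises with the share of neighbouring parcels already in that use, but is
    # dominated here by an exogenous 'market' signal.
    import random

    def neighbour_share(grid, r, c, use):
        rows, cols = len(grid), len(grid[0])
        candidates = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
        valid = [(i, j) for i, j in candidates if 0 <= i < rows and 0 <= j < cols]
        return sum(grid[i][j] == use for i, j in valid) / len(valid)

    def conversion_probability(grid, r, c, use, market_signal,
                               w_market=0.7, w_neighbourhood=0.3):
        # market conditions dominate; neighbourhood (tenure) effects modulate
        return (w_market * market_signal
                + w_neighbourhood * neighbour_share(grid, r, c, use))

    grid = [[random.choice(['pasture', 'scrub']) for _ in range(5)] for _ in range(5)]
    p = conversion_probability(grid, 2, 2, 'scrub', market_signal=0.6)
    print(f"probability parcel (2,2) converts to scrub: {p:.2f}")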

Useless Arithmetic?

Can we predict the future? Orrin Pilkey and Linda Pilkey-Jarvis say we can’t. They blame the complexity of the real world alongside a political preference to rely on the predictive results of models. I’m largely in agreement with them on many of their points but their popular science book doesn’t do an adequate job of explaining why.

The book is introduced with an example of the failure of mathematical models to predict the collapse of the Grand Banks cod fisheries. The second chapter tries to lay the basis of their argument, providing an outline of underlying philosophy and approaches of environmental modelling. This is then followed by six case studies of the difficulties of using models and modelling in the real world: the Yucca Mountain nuclear waste depository, climate change and sea-level rise, beach erosion, open-cast pit mining, and invasive plant species. Their conclusion is entitled ‘A Promise Unfulfilled’ – those promises having been made by engineers attempting to apply methods developed in simple, closed systems to those of complex, open systems.

Unfortunately, the authors don’t describe this conclusion in such terms. The main problems here are the authors’ rather vague distinction between quantitative and qualitative models and their inadequate examination of ‘complexity’. In the authors’ own words:

“The distinction between quantitative and qualitative models is a critical one. The principle message in this volume is that quantitative models predicting the outcome of natural processes on the surface of the earth don’t work. On the other hand, qualitative models, when applied correctly, can be valuable tools for understanding these processes.” p.24

This sounds fine, but it’s hard to discern, from their descriptions, exactly what the difference between quantitative and qualitative models is. In their words again,

Quantitative Models:

  • “are predictive models that answer the questions ‘where’, ‘when’, ‘how much'” p.24
  • “if the answer [a model provides] is a single number the model is quantitative” p.25

Qualitative Models:

  • “predict directions and magnitudes” p.24
  • do not provide a single number but consider relative measures, e.g. “the temperature will continue to increase over the next century” p.24

So they both predict, just one produces absolute values and the other relative values. Essentially what the authors are saying is that both types of models predict and both produce some form of quantitative output – just one tries to be more accurate than another. That’s a pretty subtle difference.

Further on they try to clarify the definition of a qualitative model by appealing to the notion of a conceptual model:

“a conceptual model is a qualitative one in which the description or prediction can be expressed as written or spoken word or by technical drawings or even cartoons. The model provides an explanation for how something works – the rules behind some process” p.27.

But all environmental models considering process (i.e. that are not empirical/statistical) are conceptual, regardless of whether they produce absolute or relative answers! Whether the model is Arrhenius’ back-of-the-envelope model of how the greenhouse effect works, or a General Circulation Model (GCM) running on a Cray supercomputer and considering multiple variables, both are built on conceptual foundations. We could write down the structure of the GCM; it would just take a long time. So again, their distinction between quantitative and qualitative models doesn’t really make things much clearer.

With this sandy foundation the authors suggest that the problem is that the real world is just too complex for quantitative models to be able to predict anything. So what is this ‘complexity’? According to Pilkey and Pilkey-Jarvis:

“Interactions among the numerous components of a complex system occur in unpredictable and unexpected sequences.” p.32

So, models can’t predict complex systems because they’re unpredictable. Hmm… a tautology, no? The next sentence:

“In a complex natural process, the various parameters that run it may kick in at various times, intensities, and directions, or they may operate for various time spans”.

Okay, now we’re getting somewhere – a complex system is one that has many components and in which the system processes might change in time. But that’s it, that’s our lot. That’s what complexity is. That’s why environmental scientists can’t predict the future using quantitative models – because there are too many components or parameters, any of which may change at any time, to keep track of them all such that we could calculate an absolute numerical result. A relative result maybe, but not an absolute value. I don’t think this analysis quite lives up to its billing as a sub-title. Sure, the case studies are good, informative and interesting, but I think this conceptual foundation is pretty loose.

I think the authors would have been better off making more use of Naomi Oreskes’ work (which they themselves cite) by talking about the difference between logical and temporal prediction, and the associated difference between ‘open’ and ‘closed’ systems. Briefly, closed systems are those in which the intrinsic and extrinsic conditions remain constant – the structure of the system, the processes operating in it, and the context within which the system sits do not change. Thus the system – and predictions about it – are outside history and geography. Think gas particles bouncing around in a sealed box. If we know the volume of the box and the pressure of the gas, assuming nothing else changes we can predict the temperature.
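The sealed-box prediction really is a one-line, logical calculation (a stock textbook relation, nothing to do with the book itself); assuming we also know the amount of gas, PV = nRT gives the temperature directly:

    # Closed-system prediction is purely logical: for n moles of an ideal gas in
    # a sealed box, temperature follows directly from pressure and volume.
    R = 8.314        # J / (mol K), universal gas constant
    P = 101_325.0    # Pa (roughly atmospheric pressure)
    V = 0.024        # m^3 (a 24-litre box)
    n = 1.0          # mol of gas
    T = P * V / (n * R)
    print(f"predicted temperature: {T:.1f} K")  # ~292.5 K, regardless of when or where

There is no equivalent one-line calculation for the open systems considered next.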

Contrast this with an ‘open’ system in which the intrinsic and extrinsic conditions are open to change. Here, the structure of the system and the processes operating the system might change as a result of the influence of processes or events outside the system of study. In turn, where the system is situated in time and space becomes important (i.e. these are geohistorical systems), and prediction becomes temporal in nature. All environmental systems are open. Think the global atmosphere. What do we need to know in order to predict the temperature in the future in this particular atmosphere? Many processes and events influencing this particular system (the atmosphere) are clearly not constant and are open to change.

As such, I am in general agreement with Pilkey and Pilkey-Jarvis’ message, but I don’t think they do the sub-title of their book justice. They show plenty of cases where quantitative predictive models of environmental and earth systems haven’t worked, and highlight many of the political reasons why this approach has been taken, but they don’t quite get to the guts of why environmental models will never be able to accurately make predictions about specific places at specific times in the future. The book Prediction: Science, Decision Making, and the Future of Nature provides a much more comprehensive consideration of these issues and, if you can get your hands on it, is much better.

I guess that’s the point though, isn’t it – this is a popular science book that is widely available. So I shouldn’t moan too much, as I think it’s important that non-modellers be aware of the deficiencies of environmental models and modelling and how they are used to make decisions about, and manage, environmental systems. These deficiencies and issues include:

  • the inherent unpredictability of ‘open’ systems (regardless of their complexity)
  • the over-emphasis on environmental models’ predictive capabilities and the expectations placed on them (a result of positivist philosophies of science that have been successful in ‘closed’ and controlled conditions)
  • the politics of modelling and management
  • the need to publish (or at least make available) model source code and conceptual structure
  • an emphasis on using models to understand rather than to predict environmental systems
  • the fact that any conclusions based on experimentation with a model are conclusions about the structure of the model, not the structure of nature

I’ve come to these conclusions over the last couple of years during the development of a socio-ecological model, in which I’ve been confronted by differing modelling philosophies. As such, I think the adoption of something more akin to ‘Post-Normal’ Science, and greater involvement of local publics in the environments under study, is required for better management. Understanding the interactions of social, economic and ecological systems poses challenges, but it is one I am sure environmental modelling can contribute to. However, given the open nature of these systems, this modelling will be more useful in the ‘qualitative’ sense that Pilkey and Pilkey-Jarvis suggest.

Orrin H. Pilkey and Linda Pilkey-Jarvis (2007)
Useless Arithmetic: Why Environmental Scientists Can’t Predict the Future
Columbia University Press
ISBN: 978-0-231-13212-1

Buy at Amazon.com

[June 3rd 2007: I just noticed Roger Pielke reviewed Useless Arithmetic for Nature the same day as this original post. Read the review here.]

Ecological and economic models for biodiversity conservation

As a follow-up to yesterday’s post, the latest volume of Ecological Economics has a paper by Drechsler et al. entitled, ‘Differences and similarities between ecological and economic models for biodiversity conservation’. They compare 60 ecological and economic models and suggest:

“Since models are a common tool of research in economics and ecology, it is often implicitly assumed that they can easily be combined. By making the differences between economic and ecological models explicit we hope to have helped to avoid miscommunication that may arise if economists and ecologists talk about “models” and believe they mean the same but in fact think of something different. The question that arises from the analysis of this paper is, of course: What are the reasons for the differences between economic and ecological models?”

The authors suggest five possible routes into the examination of this question:

  1. Different disciplinary traditions
  2. Differences in the systems analysed
  3. Differences in the perception of the system analysed
  4. Varying personal preferences of researchers
  5. Models serve different purposes

Drechsler et al. conclude:

“The general lesson from this is that economists who start thinking about developing ecological–economic models have to be prepared that they might be involved in complex modelling not typical and possibly less respected in economics. On the other hand, ecologists starting collaborations with modellers from economics have to be aware that in economics analytical tractability is much higher valued and simple models are more dominant than in ecology.”