Agent-Based Modelling for Interdisciplinary Geographical Enquiry

Bruce Rhoads argued that:

“The time has come for geography to fulfil its potential by adopting a position of intellectual leadership in the realm of interconnections between human and biophysical systems.”

Many areas of scientific endeavour are currently attempting to do the same, and interdisciplinarity has become a big buzzword. Modelling has become a common tool for this interdisciplinary study (ecological-economic models, for example), with several different approaches available. Increases in computing power and the arrival of object-oriented programming have led to the rise of agent-based modelling (also termed individual-based or discrete-element modelling).

In their latest paper in Geoforum, Bithell et al. propose this form of modelling, with its “rich diversity of approaches”, as an opportune way to explore the interactions of social and environmental processes in Geography. The authors illustrate the potential of this form of modelling by providing outlines of individual-based models from hydrology, geomorphology, ecology and land-use change (the latter of which I have tried to turn my hand to). The advantages of agent-based modelling, the authors suggest, include the ability to represent (a minimal code sketch of such an agent follows the list):

  1. agents as embedded within their environment,
  2. agents as able to perceive both their internal state and the state of their environment,
  3. agents that may interact with one another in a non-homogeneous manner,
  4. agents that can take action to change both their relationships with other agents and their environment,
  5. agents that can retain a ‘memory’ of a history of past events.
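To make these capacities concrete, here is a minimal sketch of what such an agent might look like in code. It is illustrative only: the class names, attributes and rules are my own assumptions for the example, not drawn from Bithell et al. or from any particular toolkit (Swarm, Repast and NetLogo all provide richer scaffolding for the same ideas).

```python
# A minimal agent embodying the five capacities listed above.
# All names and rules here are illustrative assumptions.

class Environment:
    def __init__(self):
        self.cells = {}

    def conditions_at(self, location):
        return self.cells.setdefault(location, {"productivity": 0.4, "use": "arable"})

    def modify(self, location, new_use):
        self.conditions_at(location)["use"] = new_use

class Agent:
    def __init__(self, location, environment):
        self.location = location           # 1. embedded within its environment
        self.environment = environment
        self.state = {"resources": 10.0}
        self.memory = []                   # 5. retains a history of past events

    def perceive(self):
        # 2. perceives both its internal state and its environment's state
        return self.state, self.environment.conditions_at(self.location)

    def interact(self, other, strength):
        # 3. non-homogeneous interaction: the effect depends on the partner
        self.state["resources"] += 0.01 * strength * other.state["resources"]

    def act(self):
        # 4. takes action that changes its environment (and so, indirectly,
        #    its relationships with the agents sharing that environment)
        _, local = self.perceive()
        if local["productivity"] < 0.5:
            self.environment.modify(self.location, "fallow")
        self.memory.append(dict(local))    # 5. remember this event

env = Environment()
a, b = Agent((0, 0), env), Agent((0, 1), env)
a.interact(b, strength=2.0)
a.act()
print(a.state, a.memory)
```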

However, the development of these representations can be a challenging task, as I found during my PhD modelling exploits, and requires a ‘diversity of resources’. When representing human agents these resources include past population censuses, surveys and interviews of contemporary populations, and theoretical understanding of social, cultural and economic behaviour from the literature. In my modelling of a contemporary population I used interviews alongside theoretical understanding from the literature, and found that, whilst more resource-intensive, actually going to speak with those being represented in the model was by far the more useful (and it revealed the deficiencies of accepted theories).

In their discussion, Bithell et al. consider the problems of representing social structures within an individual-based model, suggesting that:

“simulation of social structure may be a case of equipping model agents with the right set of tools to allow perception of, and interaction with, dynamic structures both social and environmental at scales much larger than individual agents”.

Thus, the suggestion is that individual-based models of this type may need some form of hierarchical representation.

Importantly, I think, the authors also briefly highlight the reflexive nature of agent-based models of human populations. This reflexivity occurs if the model is embedded within the society it represents, thus potentially modifying the structure of the system it represents. This situation has parallels with Hacking’s ‘looping effect’, which I’ll write about more another time. Bithell et al. suggest that this reflexive nature may, in the end, limit the questions that such models can hope to meaningfully address. However, this does not prevent them from concluding:

“The complex intertwined networks of physical, ecological and social systems that govern human attachment to, and exploitation of, particular places (including, perhaps, the Earth itself) may seem an intractable problem to study, but these methods have the potential to throw some light on the obscurity; and, indeed, to permit geographers to renew their exploration of space–time geographies.”

The Importance of Land Tenure

The Economist today highlighted some recent work by Dr Thomas Elmqvist of Stockholm University. Using a combination of Landsat satellite imagery and interviews and surveys with locals in Madagascar, they examined whether human population densities or land tenure systems were more important in determining patterns of tropical deforestation.

“From the Landsat images they were able to distinguish areas of forest loss, forest gain and stable cover. Different parts of Androy exhibited different patterns. The west showed a continuous loss. The north showed continuous increase. The centre and the south appeared stable. Damagingly for the population-density theory, the western part of the region, the one area of serious deforestation, had a low population density.

This is not to say that a thin population is bad for forests; the north, where forest cover is increasing, is also sparsely populated. But what is clear is that lots of people do not necessarily harm the forest, since cover was stable in the most highly populated area, the south.

The difference between the two sparsely populated regions was that in the west, where forest cover has dwindled, neither formal nor customary tenure was enforced. In the north—only about 20km away—land rights were well defined and forest cover increased. As with ocean fisheries, so with tropical forests, everybody’s business is nobody’s business.”

Land tenure (spatial) structure was one of the variables I examined in my agent-based model of agricultural land-use decision-making in Spain. I found that whilst neighbourhood effects due to land tenure were evident in patterns of land-use, market conditions were the primary driver of change (NB land-use/cover change in the traditional Mediterranean landscape I examined is of a markedly different type).

Useless Arithmetic?

Can we predict the future? Orrin Pilkey and Linda Pilkey-Jarvis say we can’t. They blame the complexity of the real world, alongside a political preference for relying on the predictive results of models. I largely agree with many of their points, but their popular science book doesn’t do an adequate job of explaining why.

The book is introduced with an example of the failure of mathematical models to predict the collapse of the Grand Banks cod fisheries. The second chapter tries to lay the basis of their argument, providing an outline of the underlying philosophy and approaches of environmental modelling. This is then followed by case studies of the difficulties of using models and modelling in the real world: the Yucca Mountain nuclear waste depository, climate change and sea-level rise, beach erosion, open-cast pit mining, and invasive plant species. Their conclusion is entitled ‘A Promise Unfulfilled’ – those promises having been made by engineers attempting to apply methods developed in simple, closed systems to complex, open ones.

Unfortunately the authors don’t frame their conclusion in such terms. The main problems here are the authors’ rather vague distinction between quantitative and qualitative models and their inadequate examination of ‘complexity’. In the authors’ own words:

“The distinction between quantitative and qualitative models is a critical one. The principle message in this volume is that quantitative models predicting the outcome of natural processes on the surface of the earth don’t work. On the other hand, qualitative models, when applied correctly, can be valuable tools for understanding these processes.” p.24

This sounds fine, but it’s hard to discern, from their descriptions, exactly what the difference between quantitative and qualitative models is. In their words again,

Quantitative Models:

  • “are predictive models that answer the questions ‘where’, ‘when’, ‘how much’” p.24
  • “if the answer [a model provides] is a single number the model is quantitative” p.25

Qualitative Models:

  • “predict directions and magnitudes” p.24
  • do not provide a single number but consider relative measures, e.g. “the temperature will continue to increase over the next century” p.24

So they both predict; one just produces absolute values and the other relative values. Essentially, what the authors are saying is that both types of model predict and both produce some form of quantitative output – just one tries to be more precise than the other. That’s a pretty subtle difference.

Further on, they try to clarify the definition of a qualitative model by appealing to concepts:

“a conceptual model is a qualitative one in which the description or prediction can be expressed as written or spoken word or by technical drawings or even cartoons. The model provides an explanation for how something works – the rules behind some process” p.27.

But all environmental models considering process (i.e. that are not empirical/statistical) are conceptual, regardless of whether they produce absolute or relative answers! Whether the model is Arrhenius’ back-of-the-envelope model of how the greenhouse effect works, or a General Circulation Model (GCM) running on a Cray supercomputer and considering multiple variables, both are built on conceptual foundations. We could write down the structure of the GCM; it would just take a long time. So again, their distinction between quantitative and qualitative models doesn’t really make things much clearer.

On this sandy foundation, the authors suggest that the problem is that the real world is just too complex for quantitative models to be able to predict anything. So what is this ‘complexity’? According to Pilkey and Pilkey-Jarvis:

“Interactions among the numerous components of a complex system occur in unpredictable and unexpected sequences.” p.32

So, models can’t predict complex systems because they’re unpredictable. Hmm… a tautology, no? The next sentence:

“In a complex natural process, the various parameters that run it may kick in at various times, intensities, and directions, or they may operate for various time spans”.

Okay, now we’re getting somewhere – a complex system is one with many components, in which the system processes might change over time. But that’s it, that’s our lot. That’s what complexity is. That’s why environmental scientists can’t predict the future using quantitative models – because there are too many components or parameters, any of which may change at any time, for us to keep track of them all such that we could calculate an absolute numerical result. A relative result maybe, but not an absolute value. I don’t think this analysis quite lives up to the billing of the book’s sub-title. Sure, the case studies are good, informative and interesting, but I think this conceptual foundation is pretty loose.

I think the authors would have been better off making more use of Naomi Oreskes’ work (which they themselves cite) by talking about the difference between logical and temporal prediction, and the associated difference between ‘open’ and ‘closed’ systems. Briefly, closed systems are those in which the intrinsic and extrinsic conditions remain constant – the structure of the system, the processes operating within it, and the context within which the system sits do not change. Thus the system – and predictions about it – are outside history and geography. Think of gas particles bouncing around in a sealed box. If we know the volume of the box and the pressure of the gas, assuming nothing else changes, we can predict the temperature.
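To make the closed-system case concrete, here is a toy calculation using the ideal gas law, T = PV/nR (assuming the gas is ideal and the box truly sealed; the numbers are illustrative):

```python
# Closed-system prediction with the ideal gas law: T = PV / (nR).
# Because nothing about the system or its context changes, the
# prediction is logical, not temporal. Values are illustrative.

R = 8.314  # universal gas constant, J / (mol K)

def gas_temperature(pressure_pa: float, volume_m3: float, moles: float) -> float:
    """Temperature (K) of an ideal gas: T = PV / (nR)."""
    return (pressure_pa * volume_m3) / (moles * R)

# 4 mol of gas in a sealed 0.1 m^3 box at atmospheric pressure:
print(f"{gas_temperature(101_325, 0.1, 4.0):.0f} K")  # ~305 K
```

No such calculation exists for the open-system case that follows: there, the ‘parameters’ themselves are subject to history.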

Contrast this with an ‘open’ system in which the intrinsic and extrinsic conditions are open to change. Here, the structure of the system and the processes operating the system might change as a result of the influence of processes or events outside the system of study. In turn, where the system is situated in time and space becomes important (i.e. these are geohistorical systems), and prediction becomes temporal in nature. All environmental systems are open. Think the global atmosphere. What do we need to know in order to predict the temperature in the future in this particular atmosphere? Many processes and events influencing this particular system (the atmosphere) are clearly not constant and are open to change.

As such, I am in general agreement with Pilkey and Pilkey-Jarvis’ message, but I don’t think they do the sub-title of their book justice. They show plenty of cases where quantitative predictive models of environmental and earth systems haven’t worked, and highlight many of the political reasons why this approach has been taken, but they don’t quite get to the guts of why environmental models will never be able to accurately make predictions about specific places at specific times in the future. The book Prediction: Science, Decision Making, and the Future of Nature provides a much more comprehensive consideration of these issues and, if you can get your hands on it, is much better.

I guess that’s the point though, isn’t it – this is a popular science book that is widely available. So I shouldn’t moan too much, as I think it’s important that non-modellers be aware of the deficiencies of environmental models and modelling, and of how they are used to make decisions about, and manage, environmental systems. Key points include:

  • the inherent unpredictability of ‘open’ systems (regardless of their complexity)
  • the over-emphasis of environmental models’ predictive capabilities, and the inflated expectations that result (a legacy of positivist philosophies of science that have been successful in ‘closed’ and controlled conditions)
  • the politics of modelling and management
  • the need to publish (or at least make available) model source code and conceptual structure
  • the value of using models to understand, rather than predict, environmental systems
  • the fact that any conclusions based on experimentation with a model are conclusions about the structure of the model, not the structure of nature

I’ve come to these conclusions over the last couple of years during the development of a socio-ecological model, in which I’ve been confronted by differing modelling philosophies. As such, I think the adoption of something more akin to ‘Post-Normal’ Science, and greater involvement of local publics in the environments under study, is required for better management. Understanding the interactions of social, economic and ecological systems poses challenges, but it is one I am sure environmental modelling can contribute to. However, given the open nature of these systems, this modelling will be more useful in the ‘qualitative’ sense, as Pilkey and Pilkey-Jarvis suggest.

Orrin H. Pilkey and Linda Pilkey-Jarvis (2007)
Useless Arithmetic: Why Environmental Scientists Can’t Predict the Future
Columbia University Press
ISBN: 978-0-231-13212-1

Buy at Amazon.com

[June 3rd 2007: I just noticed Roger Pielke reviewed Useless Arithmetic for Nature the same day as this original post. Read the review here.]

Ecological and economic models for biodiversity conservation

As a follow-up to yesterday’s post, the latest volume of Ecological Economics has a paper by Drechsler et al. entitled, ‘Differences and similarities between ecological and economic models for biodiversity conservation’. They compare 60 ecological and economic models and suggest:

“Since models are a common tool of research in economics and ecology, it is often implicitly assumed that they can easily be combined. By making the differences between economic and ecological models explicit we hope to have helped to avoid miscommunication that may arise if economists and ecologists talk about “models” and believe they mean the same but in fact think of something different. The question that arises from the analysis of this paper is, of course: What are the reasons for the differences between economic and ecological models?”

The authors suggest five possible routes into the examination of this question:

  1. Different disciplinary traditions
  2. Differences in the systems analysed
  3. Differences in the perception of the system analysed
  4. Varying personal preferences of researchers
  5. Models serve different purposes

Drechsler et al. conclude:

“The general lesson from this is that economists who start thinking about developing ecological–economic models have to be prepared that they might be involved in complex modelling not typical and possibly less respected in economics. On the other hand, ecologists starting collaborations with modellers from economics have to be aware that in economics analytical tractability is much higher valued and simple models are more dominant than in ecology.”

Integrating Ecology and Economics

With my viva voce just over two weeks away I really should be concentrating all my efforts on ensuring that I’m adequately prepared for the oral defence of my PhD thesis. I’m doing OK, but I’m a little distracted by my impending move to the Center for Systems Integration and Sustainability at Michigan State University. There I’ll be working on a project that will take a systems approach to develop an integrated ecological-economic model for the management of a forest landscape in Michigan’s Upper Peninsula.

I touched on some of the difficulties of integrated ecological-economic modelling in my thesis:

The difficulties of integrating ecological and economic theory into a model or framework for study have been outlined by Svedin and Bockstael et al.. These authors highlight some common points regarding time and space scales. First, the spatial boundaries of systems’ analysis may not coincide, as economists place their boundaries according to the extent of the market, whilst ecologists typically use physical features. Second, the temporal extents of study may differ vastly, as economists do not believe they can predict too far into the future, but ecologists are often more ambitious. Potentially the biggest stumbling block for integrating economic and ecological approaches, however, is the difference in the disciplines’ fundamental approach and philosophy. Economists disregard things that they cannot value financially, but ecologists believe that a theoretical framework must take account of the most important aspects of a problem, regardless of financial value (Bockstael et al.). As ecosystem processes are very difficult (if not impossible) to value in financial terms, these two standpoints are hard to reconcile. These differences in approach, and the difference in the systems of study, result in different “units of measurement, populations of interest, handling of risk and uncertainty and paradigms of analysis” when modelling (Bockstael et al. p.146). Svedin discusses the potential of using energy or information as fundamental units that might be used in common by the two disciplines. However, Bockstael et al. point out that reducing systems to the lowest possible common denominator has often simply resulted in larger black-box models, compromising the integrity of individual model modules. Svedin possibly realised this when he concluded that integration should be context-dependent for the study at hand, and that the underlying philosophies of different disciplines must be remembered when attempting integration.

One method that has been developed to address these issues is economic valuation of ecosystem services. A recent example of this sort of exercise was undertaken for the trees of New York City. Designed for use in urban areas, the USFS Stratum model uses a tree growth model coupled with data on the regional climate, building construction and energy use patterns, fuel mix for energy production, and air pollutant concentrations to estimate environmental benefits and costs as well as effects on property values. Alongside the economic value of the trees (the annual monetary value of the benefits provided and costs accrued), Stratum estimates the resource’s structure (species composition, extent and diversity), function (the environmental and aesthetic benefits trees afford the community), and resource management needs (evaluations of diversity, canopy cover, and pruning needs). According to Stratum, the nearly 600,000 trees lining the streets of New York City are worth $122 million – and this doesn’t include the 4.5 million trees in parks and on private land.
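As a rough illustration of the accounting behind such figures, here is a sketch of a net-annual-value calculation in the spirit of Stratum’s outputs. The benefit and cost categories and all the dollar figures are hypothetical placeholders, not Stratum’s actual methodology or data:

```python
# A sketch of net-annual-value accounting in the spirit of Stratum's
# outputs: value = (sum of annual benefits - sum of annual costs) per
# tree, aggregated over the population. All categories and dollar
# figures below are hypothetical, not Stratum's methodology or data.

from dataclasses import dataclass

@dataclass
class TreeBenefits:            # all in $/tree/year
    energy_savings: float      # shading and windbreak effects
    air_quality: float         # pollutants intercepted
    stormwater: float          # runoff intercepted
    property_value: float      # uplift to adjacent property

@dataclass
class TreeCosts:               # all in $/tree/year
    pruning: float
    removal_and_admin: float

def net_annual_value(n_trees: int, b: TreeBenefits, c: TreeCosts) -> float:
    per_tree = (b.energy_savings + b.air_quality + b.stormwater
                + b.property_value) - (c.pruning + c.removal_and_admin)
    return n_trees * per_tree

# Hypothetical per-tree figures for a 600,000-tree street population:
benefits = TreeBenefits(55.0, 10.0, 40.0, 120.0)
costs = TreeCosts(18.0, 4.0)
print(f"${net_annual_value(600_000, benefits, costs):,.0f} per year")
```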

As the outputs of Stratum suggest, there are both monetary and non-monetary forms of ecosystem valuation, each with pros and cons. One notable form of monetary ecosystem valuation is non-market valuation, which attempts to estimate the value of goods and services that do not have observable market values. In the forthcoming project at CSIS we hope to use non-market valuation as a complementary approach to more traditional market valuation analysis, to better examine economic trade-offs between various ecosystem services and ensure the development of sustainable management plans. In developing the model in this way we will be exploring ways to overcome the fundamental differences between economic and ecological theory.

Reference
Svedin, U. (1985) ‘Economic and ecological theory: differences and similarities’, in Hall, D.O., Myers, N. and Margaris, N.S. (eds) Economics of Ecosystems Management, pp. 31–39. Dordrecht: Dr W. Junk Publishers.

‘What I Want’ versus ‘What Is Best’

When ‘what is best’ doesn’t align with ‘what I want’, making the right decision is hard. We need to find ways of making these options align as closely as possible.

Jared Diamond’s point in Collapse is that the fate of contemporary society is in our own hands. I read and wrote about the introductory chapter a while ago. Eventually I did read the whole book, though as Michael Kavanagh points out:

“You could read the introduction and the last few chapters and get the point. But then you’d miss out on what Jared Diamond does best: tell stories.”

Kavanagh is right; as I’ve talked about before here, storytelling is an important way of understanding the world. William Cronon has suggested that narratives of global change that offer hope are needed for us to tackle the (potential) problems that contemporary society faces. Most of Diamond’s stories about the fate of previous societies don’t offer much hope, however – most collapsed, and the only modern example of positive action on the environment is Iceland. Diamond identifies five contributing factors to societal collapse:

“… climate change, hostile neighbours, trade partners (that is, alternative sources of essential goods), environmental problems, and, finally, a society’s response to its environmental problems. The first four may or may not prove significant in each society’s demise, Diamond claims, but the fifth always does. The salient point, of course, is that a society’s response to environmental problems is completely within its control, which is not always true of the other factors. In other words, as his subtitle puts it, a society can “choose to fail.”

Diamond emphasises the need for individual action – for a bottom-up approach to make sure that we choose not to fail. Kavanagh suggests the implication is that

“in a world where public companies are legally required to maximize their profits, the burden is on citizens to make it unprofitable to ruin the environment — for an individual, a company, or a society as a whole.”

Others suggest more dramatic action is needed, however. Richard Smith suggests that this ‘market meliorist strategy’ won’t be enough. Smith contrasts the bottom-up decision-making of the New Guinea villages that Diamond uses as a potential model for contemporary decision-making with that of contemporary capitalist society. Whereas the New Guinea villages’ decision-making process takes into account everyone’s input:

“…we do not live in such a ‘bottom-up’ democratic society. We live in a capitalist society in which ownership and control of the economy is largely in the hands of private corporations who do not answer to society. In this system, democracy is limited to the political sphere. …under capitalism, economic power is effectively monopolized by corporate boards whose day-to-day requirements for reproduction compel their officers to systematically make ‘wrong’ decisions, to favour the particular interests of shareholders against the general interests of society.”

Smith’s solution? As the global issues contemporary society faces are so interconnected and international, international governance by a “global citizenry” is required. Critics of this approach are likely to be many, but whether it will be enough for individual consumers to “make it unprofitable to ruin the environment”, or whether we develop a “global citizenry”, the ultimate question here seems to be: ‘Are we prepared to change our lifestyles to ensure the survival of our contemporary (global) society?’

With the “End of Tradition” in western societies (i.e. life is no longer lived as fate in these societies), maybe the future of society really is in our hands, as Diamond suggests. On the other hand, as Beck points out, because contemporary problems are due to dispersed causes (e.g. individuals driving their cars to work every day), responsibility is rather easily evaded and some form of global decision-making would be useful. To me the latter seems unlikely – those with power are unlikely to give it up easily. The ‘global’ institutions we currently have are frequently undermined by the actions of individual states and leaders. The power to change society and lifestyles (in the west at least) now lies with individuals. But with power comes a responsibility which, on the whole, we individuals are currently shirking.

The changes my generation and the next will need to make will have to go further than simply throwing our glass, paper and plastic in different boxes. There are small ways in which we can save ourselves money whilst helping the environment, and they all add up. But sea changes in lifestyle are likely to be required. Governments will not make people do that, and have no right to in a democracy. They can cajole via taxation (if they do it right) but they can’t force people to change their lifestyles. People must make those changes themselves, because they want to make it profitable to sustain contemporary society. The problem is that it’s very difficult to do what’s best when it doesn’t align with what you want. It can hurt. Finding ways of making the two align will become increasingly important. Often the two will not align, and it will be necessary to take individual responsibility by accepting there will be a degree of pain. But once this responsibility has been accepted, the next step can be taken – working to minimise the pain whilst ensuring people get as close to what they want as possible.

Inevitably, I think modelling may have something to offer here. Just as Diamond uses evidence of historical environmental, technological and social change to discuss and tell stories about past problems we might use models to discuss and tell stories about potential problems we might face in the future. Simulation models, if appropriately constructed, offer us a tool to reconstruct and examine uncertain landscape change due to environmental, technological and social change in the future. Further, simulation models offer the opportunity to examine alternative futures, to investigate traps that might lie in wait. Just as we should learn from past histories of landscape change (as Diamond suggests), we should be able to use simulation models to construct future histories of change in our contemporary landscapes.

These alternative ‘model futures’ are unlikely to be realised exactly as the model says (that’s the nature of modelling complex open systems), and may not contain the details some people might like, but if they are useful for getting people around a table discussing the most sustainable ways of managing their consumption of natural resources then they can’t be a bad thing. Modelling offers insight into the states of potential future environmental systems given different scenarios of human activity. At the very least, models will provide a common focus for debate on, and offer a muse to inspire reflection about, how to align ‘what I want’ with ‘what is best’.

EGU 2007 Poster

I’m not attending the European Geosciences Union General Assembly this year as I have done for the past couple of years. However, I do have a poster there (today, thanks to Bruce Malamud for posting it) on some work I have been doing with Raul Romero Calcerrada at Universidad Rey Juan Carlos in Madrid, Spain. We have been using various spatial statistical modelling techniques to examine the spatial patterns and causes (both socioeconomic and biophysical) of wildfire ignition probabilities in central Spain. The poster abstract is presented below, and we’re working on a couple of related papers right now.

Spatial analysis of patterns and causes of fire ignition probabilities using Logistic Regression and Weights-of-Evidence based GIS modelling
R. Romero-Calcerrada, J.D.A. Millington
In countries where more than 95% of wildfires are caused by direct or indirect human activity, such as those of the Iberian Peninsula, ignition risk estimation must consider anthropic influences. However, the importance of human factors has been given scant regard compared to biophysical factors (topography, vegetation and meteorology) in quantitative analyses of risk. This disregard for the primary cause of wildfires in the Iberian Peninsula is owed to the difficulties of evaluating, modelling and spatially representing the human component of both fire ignition and spread. We use logistic regression and weights-of-evidence based GIS modelling to examine the relative influence of biophysical and socio-economic variables on the spatial distribution of wildfire ignition risk for a six-year time series of 508 fires in the south west of the Autonomous Community of Madrid, Spain. We find that socioeconomic variables are more important than biophysical ones for understanding spatial wildfire ignition risk, and that models using socioeconomic data have greater accuracy than those using biophysical data alone. Our findings suggest the importance of socioeconomic variables for the explanation and prediction of the spatial distribution of wildfire ignition risk in the study area. Socioeconomic variables need to be included in models of wildfire ignition risk in the Mediterranean and will likely be very important in wildfire prevention and planning in this region.
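To give a flavour of the model-comparison side of this analysis, here is a minimal sketch of fitting and comparing logistic regression models of ignition from the two predictor sets. The variables and data are hypothetical placeholders, and this is not our analysis code (which also includes the weights-of-evidence modelling):

```python
# A minimal sketch of the model comparison described above: logistic
# regressions of ignition (1) vs no ignition (0) fitted separately to
# socioeconomic and biophysical predictors, compared by AUC. Variable
# names and data are hypothetical placeholders, so both AUCs here will
# hover around 0.5; with real data they would differ.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1_000  # grid cells sampled from the study area

socio = rng.normal(size=(n, 3))    # e.g. distance to roads, population density, tenure
biophys = rng.normal(size=(n, 3))  # e.g. slope, vegetation type, summer rainfall
ignition = rng.integers(0, 2, size=n)  # 1 = ignition observed in the cell

for name, X in [("socioeconomic", socio), ("biophysical", biophys)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, ignition, random_state=0)
    model = LogisticRegression().fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name} model AUC: {auc:.2f}")
```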

PhD Thesis Completed

So, finally, it is done. As I write, three copies of my PhD Thesis are being bound ready for submission tomorrow! I’ve posted a short abstract below. If you want a more complete picture of what I’ve done you can look at the Table of Contents and read the online versions of the Introduction and Discussion and Conclusions. Email me if you want a copy of the whole thesis (all 81,000 words, 277 pages of it).

So just the small matter of defending the thesis at my viva voce in May. But before that I think it’s time for a celebratory beer on the South Bank of the Thames in the evening sunshine…

Modelling Land-Use/Cover Change and Wildfire Regimes in a Mediterranean Landscape

James D.A. Millington
March 2007

Department of Geography
King’s College, London

Abstract
This interdisciplinary thesis examines the potential impacts of human land-use/cover change upon wildfire regimes in a Mediterranean landscape using empirical and simulation models that consider both social and ecological processes and phenomena. Such an examination is pertinent given contemporary agricultural land-use decline in some areas of the northern Mediterranean Basin due to social and economic trends, and the ecological uncertainties in the consequent feedbacks between landscape-level patterns and processes of vegetation- and wildfire-dynamics.

The shortcomings of empirical modelling of these processes are highlighted, leading to the development of an integrated socio-ecological simulation model (SESM). A grid-based landscape fire succession model is integrated with an agent-based model of agricultural land-use decision-making. The agent-based component considers non-economic influences alongside economic ones in actors’ land-use decision-making. The explicit representation of human influence on wildfire frequency and ignition in the model is a novel approach and highlights biases in the areas of land-covers burned according to ignition cause. Model results suggest that if agricultural change (i.e. abandonment) continues as it has recently, the risk of large wildfires will increase and a greater total area will be burned.

The epistemological problems of representation encountered when attempting to simulate ‘open’, middle-numbered systems – as is the case for many ‘real world’ geographical and ecological systems – are discussed. Consequently, and in light of recent calls for increased engagement between science and the public, a shift in emphasis is suggested for SESMs away from establishing the truth of a model’s structure via the mimetic accuracy of its results and toward ensuring trust in a model’s results via practical adequacy. A ‘stakeholder model evaluation’ exercise is undertaken to examine this contention and to evaluate, with the intent of improving, the SESM developed in this thesis. A narrative approach is then adopted to reflect on what has been learnt.

Landscape Simulation Modelling

This is my fifth contribution to JustScience week.

Over the last couple of days I’ve discussed some techniques and case studies of statistical modelling of landscape processes. On Monday and Tuesday I looked at the power-law frequency-area characteristics of wildfire regimes in the US; on Wednesday and Thursday I looked at regression modelling for predicting and explaining land use/cover change (LUCC). The main alternative to these empirical modelling methods is simulation modelling.

When a problem is not analytically tractable (i.e. the equations representing the processes cannot be solved directly) simulation models may be used to represent a system by making certain approximations and idealisations. When attempting to mimic a real-world system (for example a forest ecosystem), simulation modelling has become the method of choice for many researchers. One reason is that simulation modelling can be used even when data are sparse. Simulation modelling also overcomes many of the problems associated with the large time and space scales involved in landscape studies. Frequently, study areas are so large (upwards of 10 square kilometres, as in my PhD study area) that empirical experimentation in the field is virtually impossible because of logistic, political and financial constraints. Experimenting with simulation models allows experiments and scenarios to be run and tested that would not be possible in real environments and landscapes.

Spatially-explicit simulation models of LUCC have been used since the 1970s and have dramatically increased in use recently with the growth in computing power available. These advances mean that simulation modelling is now one of the most powerful tools for environmental scientists investigating the interaction(s) between the environment, ecosystems and human activity. A spatially explicit model is one in which the behaviour of a single model unit of spatial representation (often a pixel or grid cell) cannot be predicted without reference to its relative location in the landscape and to neighbouring units. Current spatially-explicit simulation modelling techniques allow the spatial and temporal examination of the interaction of numerous variables, sensitivity analyses of specific variables, and projection of multiple different potential future landscapes. In turn, this allows managers and researchers to evaluate proposed alternative monitoring and management schemes, identify key drivers of change, and potentially improve understanding of the interaction(s) between variables and processes (both spatially and temporally).
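To illustrate what ‘spatially explicit’ means in practice, here is a toy cellular automaton in which a cell’s next state depends on its neighbours – exactly the property described above. The colonisation rule and parameters are invented for the illustration, not taken from any model discussed here:

```python
# A toy spatially explicit model: a cellular automaton in which an
# empty cell is colonised if at least two of its four neighbours are
# vegetated. Rule and grid are illustrative only.

import numpy as np

def step(grid: np.ndarray) -> np.ndarray:
    """One time step: 1 = vegetated, 0 = empty."""
    new = grid.copy()
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            # Count vegetated von Neumann neighbours (N, S, E, W),
            # wrapping at the grid edges for simplicity
            neighbours = sum(
                grid[(r + dr) % rows, (c + dc) % cols]
                for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]
            )
            if grid[r, c] == 0 and neighbours >= 2:
                new[r, c] = 1  # colonisation via local seed dispersal
    return new

grid = (np.random.default_rng(1).random((20, 20)) < 0.2).astype(int)
for _ in range(10):
    grid = step(grid)
print(f"Vegetated cells after 10 steps: {grid.sum()} of {grid.size}")
```

The key point is that no cell’s fate can be predicted from its own attributes alone; its location relative to its neighbours matters.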

Early spatially-explicit simulation models of LUCC typically considered only ecological factors. With the recognition that landscapes are the historical outcome of multiple complex interactions between social and natural processes, more recent spatially-explicit LUCC modelling exercises have begun to integrate both ecological and socio-economic processes to examine these interactions.

A prime example of a landscape simulation model is LANDIS. LANDIS is a spatially explicit model of forest landscape dynamics and processes, representing vegetation at the species-cohort level. The model requires life-history attributes for each vegetation species modelled (e.g. age of sexual maturity, shade tolerance and effective seed-dispersal distance), along with various other environmental data (e.g. climatic, topographical and lithographic data) to classify ‘land types’ within the landscape. Previous uses of LANDIS have examined the interactions between vegetation dynamics and disturbance regimes, the effects of climate change on landscape disturbance regimes, and the impacts of forest management practices such as timber harvesting.
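Those life-history attributes amount to a simple per-species data structure. A sketch follows; the attribute names echo the description above, while ‘longevity’ and all of the values are my own illustrative assumptions rather than LANDIS parameters:

```python
# A sketch of the per-species life-history attributes a LANDIS-style
# model requires. Names follow the description above; 'longevity' and
# all values are illustrative assumptions, not LANDIS parameters.

from dataclasses import dataclass

@dataclass
class SpeciesLifeHistory:
    name: str
    age_of_maturity: int               # years before the species sets seed
    longevity: int                     # maximum age in years (assumed attribute)
    shade_tolerance: int               # ordinal class, e.g. 1 (intolerant) to 5
    effective_seed_dispersal_m: float  # typical dispersal distance in metres

# Illustrative parameterisations for two hypothetical species:
species = [
    SpeciesLifeHistory("pine", age_of_maturity=15, longevity=250,
                       shade_tolerance=2, effective_seed_dispersal_m=100.0),
    SpeciesLifeHistory("oak", age_of_maturity=25, longevity=400,
                       shade_tolerance=4, effective_seed_dispersal_m=30.0),
]
print(species[0])
```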

Recently, LANDIS-II was released, with a new website and a paper published in Ecological Modelling:


LANDIS-II advances forest landscape simulation modeling in many respects. Most significantly, LANDIS-II, 1) preserves the functionality of all previous LANDIS versions, 2) has flexible time steps for every process, 3) uses an advanced architecture that significantly increases collaborative potential, and 4) optionally allows for the incorporation of ecosystem processes and states (e.g. live biomass accumulation) at broad spatial scales.

During my PhD I’ve been developing a spatially-explicit, socio-ecological landscape simulation model. Taking a combined agent-based/cellular automata approach (a toy sketch of this coupling follows the list), it directly considers:

  1. human land management decision-making in a low-intensity Mediterranean agricultural landscape [agent-based model]
  2. landscape vegetation dynamics, including seed dispersal and disturbance (human or wildfire) [cellular automata model]
  3. the interaction between 1 and 2
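Here is a toy sketch of how such a coupling can be scheduled. All names, rules and parameters are hypothetical – this is the shape of the agent/CA interaction, not the thesis model itself:

```python
# A toy coupling of an agent-based model (farmers deciding land use)
# with a cellular-automata-style vegetation update. All names, rules
# and parameters are hypothetical, not those of the thesis model.

import random

random.seed(1)
LANDSCAPE = {(r, c): "pasture" for r in range(10) for c in range(10)}

class Farmer:
    def __init__(self, cell):
        self.cell = cell

    def decide(self, landscape, market_price):
        # 1. land management decision-making: economic influence (price)
        #    plus a small non-economic chance of abandonment
        if landscape[self.cell] == "pasture" and (
                market_price < 0.5 or random.random() < 0.05):
            landscape[self.cell] = "abandoned"

def vegetation_step(landscape):
    # 2. vegetation dynamics: abandoned land undergoes succession
    for cell, cover in landscape.items():
        if cover == "abandoned":
            landscape[cell] = "shrubland"

farmers = [Farmer(cell) for cell in random.sample(sorted(LANDSCAPE), 30)]
for year in range(20):
    price = random.random()              # exogenous market conditions
    for farmer in farmers:
        farmer.decide(LANDSCAPE, price)  # agents act on the landscape...
    vegetation_step(LANDSCAPE)           # ...the landscape responds...
    # 3. ...and next year's decisions perceive the changed landscape
print(sum(v == "shrubland" for v in LANDSCAPE.values()), "cells now shrubland")
```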

Read more about it here. I’m nearly finished now, so I’ll be posting results from the model in the near future. Finally, some other useful spatial simulation modelling links:

Wisconsin Ecosystem Lab – at the University of Wisconsin

Center for Systems Integration and Sustainability – at Michigan State University

Landscape Ecology and Modelling Laboratory – at Arizona State University

Great Basin Landscape Ecology Lab – at the University of Nevada, Reno

Baltimore Ecosystem Study – at the Institute of Ecosystem Studies

The Macaulay Institute – Scottish land research centre

Hierarchical Partitioning for Understanding LUCC

This post is my fourth contribution to JustScience week.

Multiple regression is an empirical, data-driven approach for modelling the response of a single (dependent) variable from a suite of predictor (independent) variables. Mac Nally (2002) suggests that multiple regression is generally used for two purposes by ecologists and biologists: 1) to assess the amount of variance exhibited by the dependent variable that can be attributed to each predictor variable, and 2) to find the ‘best’ predictive model (the model that explains the most total variance). Yesterday I discussed the use of logistic regression (a form of multiple regression) models for predictive purposes in Land Use/Cover Change (LUCC) studies. Today I’ll present some work on an explanatory use of these methods.

Finding a multivariate model that uses the ‘best’ set of predictors does not imply that those predictors will remain the ‘best’ when used independently of one another. Multi-collinearity between predictor variables means that the use of the ‘best’ subset of variables (i.e. model) to infer causality between independent and dependent variables provides little valid ‘explanatory power’ (Mac Nally, 2002). The individual coefficients of a multiple regression model can only be interpreted for direct effects on the response variable when the other predictor variables are held constant (James & McCulloch, 1990). The use of a model to explain versus its use to predict must therefore be considered (Mac Nally, 2000).

Hierarchical partitioning (HP) is a statistical method that provides explanatory, rather than predictive, power. It allows the contribution of each predictor to the total explained variance of a model – both independently and in conjunction with the other predictors – to be calculated across all possible candidate models. The use of the HP method developed by Chevan and Sutherland (1991) by ecologists and biologists in their multivariate analyses was first suggested by Mac Nally (1996). More recently, the method has been extended to provide the ability to statistically choose which variables to retain once they have been ranked for their predictive use (Mac Nally, 2002). Details of how HP works can be found here.
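For the curious, the core of the method can be sketched in a few lines: for each predictor, average the improvement in model fit (here R²) gained by adding it to every possible subset of the other predictors, averaging within each model size and then across sizes. This is my own illustrative implementation on random data, not the code behind the published analysis:

```python
# A compact sketch of hierarchical partitioning: each predictor's
# independent contribution is the improvement in R^2 from adding it
# to every possible subset of the other predictors, averaged within
# each model size and then across sizes. Data are random placeholders.

from itertools import combinations

import numpy as np
from sklearn.linear_model import LinearRegression

def r_squared(X, y, subset):
    if not subset:
        return 0.0  # the null model explains no variance
    Xs = X[:, subset]
    return LinearRegression().fit(Xs, y).score(Xs, y)

def independent_contributions(X, y):
    k = X.shape[1]
    contributions = np.zeros(k)
    for i in range(k):
        others = [j for j in range(k) if j != i]
        level_means = []
        for size in range(k):  # subsets of the other predictors, by size
            gains = [r_squared(X, y, list(s) + [i]) - r_squared(X, y, list(s))
                     for s in combinations(others, size)]
            level_means.append(np.mean(gains))
        contributions[i] = np.mean(level_means)
    return contributions

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = 2 * X[:, 0] + X[:, 1] + rng.normal(size=200)  # known structure
print(independent_contributions(X, y))  # predictor 0 should dominate
```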

With colleagues, I examined the use of hierarchical partitioning for understanding LUCC in my PhD study area, leading to a recent publication in Ecosystems. We examined the difference between using two different land-cover (LC) classifications for the same landscape: one with 10 LC classes, the other with four. Using HP we found that coarser LC classifications (i.e. fewer LC classes) cause the joint effects of variables to suppress the total variance explained in LUCC. That is, the combined effect of explanatory variables increases the total explained variance (in LUCC) in regression models using the 10-class LC classification, but reduces the total explained variance in the dependent variable for the four-class models.

We suggested that (in our case at least) this was because the aggregated nature of the four-class models means broad observed changes (for example from agricultural land to forested land) mask specific changes within the classes (for example from pasture to pine forest, or from arable land to oak forest). These specific transitions may have explanatory variables (causes) that oppose one another, decreasing the explanatory power of models that use the same variables to explain a single broader shift. By considering more specific transitions, the utility of HP for elucidating important causal factors will increase.

We concluded that a systematic examination of specific LUCC transitions is important for elucidating drivers of change, and is one that has been under-used in the literature. Specifically, we suggested hierarchical partitioning should be useful for assessing the importance of causal mechanisms in LUCC studies in many regions around the world.
