Wildfire Frequency-Area Scaling Relationships

This post is the first of my contributions to JustScience week.

Wildfire is considered an integral component of ecosystem functioning, but often comes into conflict with human interests. Thus, understanding and managing the relationships between wildfire, ecology and human activity is of particular interest to both ecologists and wildfire managers. Quantifying the wildfire regime is useful in this regard. The wildfire regime is the name given to the combination of the timing, frequency and magnitude of all fires in a region. The relationship between the frequency and magnitude of fires, the frequency-area distribution, is one aspect of the wildfire regime that has attracted particular interest recently.

Malamud et al. (1998) examined ‘Forest Fire Cellular Automata’, finding a power-law relationship between the frequency and size of events. The power-law relationship takes the form:

f(A) ∝ A^(-β)

where f(A) is the frequency of fires of burned area A, and β is a constant. β is a measure of the ratio of small to medium to large fires and how frequently they occur. The smaller the value of β, the greater the contribution of large fires (compared to smaller fires) to the total burned area of a region; the greater the value, the smaller that contribution. Such a power-law relation appears on a log-log plot as a straight line, as the example from Malamud et al. (2005) shows:

[Figure: normalized frequency density of wildfires plotted against burned area on log-log axes, with best-fit power law; from Malamud et al. 2005]

Circles show the number of wildfires per “unit bin” of 1 km^2 (in this case normalized by database length in years and region area in km^2), plotted as a function of wildfire burned area. The solid line is the best least-squares fit, with coefficient of determination r^2. Dashed lines represent the lower and upper 95% confidence intervals, calculated from the standard error. Horizontal error bars on burned area reflect measurement error and the size binning of individual wildfires; vertical error bars represent two standard deviations of the normalized frequency densities and are approximately the same as the lower and upper 95% confidence intervals.
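As a rough illustration of how such a fit can be produced (a sketch only; the exact binning and normalization procedure of Malamud et al. differs), fire areas can be binned logarithmically and β estimated by least squares in log-log space:

```python
import numpy as np

def estimate_beta(areas, n_bins=20):
    """Estimate the power-law exponent beta of a wildfire frequency-area
    distribution by logarithmic binning and a least-squares fit in
    log-log space. A rough sketch, not Malamud et al.'s exact procedure."""
    areas = np.asarray(areas, dtype=float)
    edges = np.logspace(np.log10(areas.min()), np.log10(areas.max()),
                        n_bins + 1)
    counts, _ = np.histogram(areas, bins=edges)
    density = counts / np.diff(edges)          # frequency per unit area ("unit bins")
    centres = np.sqrt(edges[:-1] * edges[1:])  # geometric midpoints of the bins
    mask = density > 0                         # empty bins have no logarithm
    slope, _ = np.polyfit(np.log10(centres[mask]),
                          np.log10(density[mask]), 1)
    return -slope                              # f(A) ~ A^(-beta), so beta = -slope

# Synthetic fire sizes drawn from a Pareto distribution whose
# frequency density falls off as A^(-1.4)
rng = np.random.default_rng(42)
areas = (1 - rng.random(10_000)) ** (-1 / 0.4)
beta = estimate_beta(areas)
```

On these synthetic data the estimate should land near the true exponent of 1.4; with real fire databases, censoring of small fires and the choice of bins both affect the result.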

As a result of their work on the forest fire cellular automata, Malamud et al. (1998) wondered whether the same relation would hold for empirical wildfire data. They found the power-law relationship did indeed hold for observed wildfire data from parts of the US and Australia. As Millington et al. (2006) discuss, since this seminal publication several other studies have suggested a power-law relationship is the best descriptor of the frequency-size distribution of wildfires around the world.

During my Master’s Thesis I worked with Dr. Bruce Malamud to examine wildfire frequency-area statistics and their ecological and anthropogenic drivers. Work resulting from this thesis led to the publication of Malamud et al. 2005 which I’ll discuss in more detail tomorrow.


Generative Landscape Science

A paper from the recent special issue of The Professional Geographer (discussed briefly here) is of particular interest to me because it examines and emphasises an approach and perspective similar to my own: that by Brown et al. (2006). They suggest that a generative landscape science, one which considers the implications of microscale processes for macroscale phenomena, offers a complementary approach to explanation alongside other methods. Such an approach would combine ‘bottom-up’ models of candidate processes, believed to give rise to observed patterns, with empirical observations, predominantly through individual-based modelling approaches such as agent-based models. There are strong parallels between modelling in a generative landscape science and the pattern-oriented modelling of agent-based systems in ecology discussed by Grimm et al. (1995). As a result of the theory-ladenness of data (Oreskes et al. 1994) and issues of equifinality (Beven 2002), landscape modellers often find themselves encountering an ‘interesting’ issue (as Brown et al. put it):


“we may understand well the processes that operate on a landscape, but still be unable to make accurate predictions about the outcomes of those processes.”

Thus, whilst pattern-matching of (model and observed) system-level properties from models of microscale interactions may be useful for examining and explaining system structure, it does not imply prediction is necessarily possible. There is a distinction between pattern-matching for validation (sensu Oreskes and Beven) and pattern-matching for understanding (via strong inference), but it is a fine line. If we say, “Model 1 uses structure A and Model 2 uses structure B, Model 1 reproduces observed patterns at multiple scales more accurately than Model 2, so Model 1 is more like reality, and therefore we understand reality better”, we’re still left with the problems of equifinality.

And so (rightly IMHO) in turn, Brown et al. suggest that whilst the use of pattern-matching exercises to evaluate and interpret models will be useful, we should be wary of an over-emphasis on these techniques at the expense of intuition and deduction. This perspective partly contributed to my investigation of the use of ‘stakeholder assessment’ to evaluate the landscape change model I’ve been developing as part of my PhD.

In conclusion, Brown et al. suggest a generative component (i.e. exploiting individual- and process-based modelling approaches) in landscape science will help:

  • develop and encode explanations that combine multiple scales
  • evaluate the implications of theory
  • identify and structure needs for empirical investigation
  • deal with uncertainty
  • highlight when prediction may not be a reasonable goal

This modelling approach adopts a perspective characteristic of recent attitudes toward the uses and interpretation of models in other areas of simulation modelling (e.g. Beven in hydrology, and Moss and Edmonds in social science) and also resonates with perspectives arising from critical realism (without explicitly discussing ontology). As such, their discussion is illustrative of recent trends in environmental and social simulation, with some good modelling examples from elk-wolf population dynamics in Yellowstone National Park, and places the discussion in a context and forum in which individuals with backgrounds in Geography, GIScience and Landscape Ecology can all associate.

Reference
Daniel G. Brown, Richard Aspinall, David A. Bennett (2006)
Landscape Models and Explanation in Landscape Ecology—A Space for Generative Landscape Science?
The Professional Geographer 58 (4), 369–382.
doi:10.1111/j.1467-9272.2006.00575.x


Spring Conferences

The preliminary program and schedule of sessions for the 2007 AAG (Association of American Geographers) National Meeting in San Francisco, April 17-21, is now available online.

It looks like I should have some time during April, and several colleagues from King’s Geography Dept. are going to San Francisco, so it might be good to go. Unfortunately, I wasn’t banking on having the opportunity so I haven’t submitted anything to present.

The alternative would be to go to the EGU (European Geosciences Union) General Assembly 2007 in Vienna, Austria, 15-20 April. I’m second author on a poster due to be displayed there:

Spatial analysis of patterns and causes of fire ignition probabilities using Logistic Regression and Weights-of-Evidence based GIS modelling
Romero-Calcerrada, R. and Millington, J.D.A
Session NH8.04/BG1.04: Spatial and temporal patterns of wildfires: models, theory, and reality (co-organized by BG & NH)

I’ll have a think about it…


Volcano Modelling with Google Earth

One of my former colleagues (and a good mate) at King’s, Dr. Peter Webley, is now working at the University of Alaska Fairbanks. Pete is a volcanologist, with a particular interest in the remote monitoring and modelling of volcanic phenomena. Recently, he’s been working on the integration of Puff, a computer model of ash cloud formation, with Google Earth to improve communication between scientists and the public at large. Pretty cool stuff – check out the videos and animations here, or even run your own volcano model here.


Ecosystems Paper

In an effort not to become one of the estimated 200 million blogs that have now been abandoned, I thought it about time I let the blogosphere know that the paper I submitted to Ecosystems with Dr. George Perry and Dr. Raul Romero-Calcerrada has been accepted for publication. The paper arose out of the initial statistical modelling of the SPA I did for my PhD thesis (also used in Millington 2005) and examines the use of statistical techniques for explaining causes of land use and cover changes versus techniques for projecting change.

Here’s the abstract:

In many areas of the northern Mediterranean Basin the abundance of forest and scrubland vegetation is increasing, commensurate with decreases in agricultural land use(s). Much of the land use/cover change (LUCC) in this region is associated with the marginalisation of traditional agricultural practices due to ongoing socioeconomic shifts and subsequent ecological change. Regression-based models of LUCC have two purposes: (i) to aid explanation of the processes driving change and/or (ii) spatial projection of the changes themselves. The independent variables contained in the single ‘best’ regression model (i.e. that which minimises variation in the dependent variable) cannot be inferred as providing the strongest causal relationship with the dependent variable. Here, we examine the utility of hierarchical partitioning and multinomial regression models for, respectively, explanation and prediction of LUCC in EU Special Protection Area 56, ‘Encinares del río Alberche y Cofio’ (SPA 56) near Madrid, Spain. Hierarchical partitioning estimates the contribution of regression model variables, both independently and in conjunction with other variables in a model, to the total variance explained by that model and is a tool to isolate important causal variables. By using hierarchical partitioning we find that the combined effects of factors driving land cover transitions varies with land cover classification, with a coarser classification reducing explained variance in LUCC. We use multinomial logistic regression models solely for projecting change, finding that accuracies of maps produced vary by land cover classification and are influenced by differing spatial resolutions of socioeconomic and biophysical data. When examining LUCC in human-dominated landscapes such as those of the Mediterranean Basin, the availability and analysis of spatial data at scales that match causal processes is vital to the performance of the statistical modelling techniques used here.
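As an illustrative sketch of the hierarchical partitioning idea described in the abstract (the data and variable names here are fabricated; the paper applies the method to real LUCC covariates), the method fits a regression for every subset of predictors and averages each variable's gain in explained variance across all those hierarchies:

```python
import itertools
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit of y on the columns of X."""
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1.0 - resid.var() / y.var()

def hierarchical_partition(X, y):
    """Independent contribution of each predictor to explained variance:
    the gain in R^2 from adding it, averaged over all subsets of the other
    predictors (and over subset sizes), after Chevan & Sutherland (1991).
    Feasible only for small numbers of predictors (requires 2^p fits)."""
    p = X.shape[1]
    fits = {(): 0.0}  # R^2 of the intercept-only model
    for k in range(1, p + 1):
        for S in itertools.combinations(range(p), k):
            fits[S] = r_squared(X[:, list(S)], y)
    contrib = np.zeros(p)
    for i in range(p):
        others = [j for j in range(p) if j != i]
        gains_by_level = [
            np.mean([fits[tuple(sorted(S + (i,)))] - fits[S]
                     for S in itertools.combinations(others, k)])
            for k in range(p)
        ]
        contrib[i] = np.mean(gains_by_level)
    return contrib

# Fabricated example: the first covariate drives most of the variation
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(size=200)
contrib = hierarchical_partition(X, y)
```

A useful property of this decomposition is that the independent contributions sum exactly to the R^2 of the full model, so they really do partition the explained variance among the candidate causal variables.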

Look out for it during 2007:

MILLINGTON, J.D.A., Perry, G.L.W. and Romero-Calcerrada, R. (In Press) Regression techniques for explanation versus prediction: A case study of Mediterranean land use/cover change. Ecosystems.


Critical Mass and Metaphor Models

Bruce Edmonds has reviewed Philip Ball’s 2005 book Critical Mass: How One Thing Leads to Another for the Journal of Artificial Societies and Social Simulation (JASSS). Providing a popular science account of the history of the development of sociophysics and abstract social simulation, the book (apparently – I haven’t read it) makes the common mistake of conflating models and their results with the systems they have been built to represent. In Edmonds’ words:

In all of this the book is quite careful as to matters of fact – in detail all its statements are cautiously worded and filled with subtle caveats. However its broad message is very different, implying that abstract physics-style models have been successful at identifying some general laws and tendencies in social phenomena. It does this in two ways: firstly, by slipping between statements about the behaviour of the models and statements about the target social phenomena, so that it is able to make definite pronouncements and establish the success and relevance of its approach; and secondly, by implying that it is as well-validated as any established physics model but, in fact, only establishing that the models can be used as sophisticated analogies – ways of thinking about social phenomena. The book particularly makes play of analogies with the phase transitions observed in fluids since this was the author’s area of expertise.

This book is by no means unique in making these kinds of conflation – they are rife within the world of social simulation.

(from Edmonds 2006, JASSS)

And not only within social simulation. In a previous paper, I highlighted with some colleagues that the ‘Forest Fire Cellular Automata’ made famous by Per Bak and colleagues is better treated as a metaphor for, rather than an accurate representation of, the dynamics of a real-world forest fire (Millington et al. 2006). This may seem an obvious point to make, but simulation models can provide an unjustified sense of verisimilitude, and the apparent reproduction of complex empirical systems’ behaviour by simple models can lead to the false conclusion that those simple mechanisms are the cause of the observed complexity.
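For readers who haven't met it, the model in question (often attributed in this form to Drossel and Schwabl) takes only a few lines to sketch, which makes its metaphorical character plain: 'fire' removes an entire connected cluster of 'trees' in a single instant, with no weather, fuel moisture or spread dynamics. A minimal version:

```python
import numpy as np
from collections import deque

EMPTY, TREE = 0, 1

def step(grid, p_grow, f_lightning, rng):
    """One sweep of a simple Drossel-Schwabl-style forest fire model:
    trees sprout on empty cells with probability p_grow, each tree is
    struck by lightning with probability f_lightning, and a strike burns
    the tree's entire 4-connected cluster instantaneously -- the utterly
    unrealistic 'model fire' that makes the name a metaphor."""
    n = grid.shape[0]
    grid = grid.copy()
    grid[(grid == EMPTY) & (rng.random(grid.shape) < p_grow)] = TREE
    fire_sizes = []
    struck = np.argwhere((grid == TREE) & (rng.random(grid.shape) < f_lightning))
    for y, x in struck:
        if grid[y, x] != TREE:     # already burned in an earlier cluster
            continue
        grid[y, x] = EMPTY
        size, queue = 0, deque([(y, x)])
        while queue:               # breadth-first burn of the whole cluster
            cy, cx = queue.popleft()
            size += 1
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = cy + dy, cx + dx
                if 0 <= ny < n and 0 <= nx < n and grid[ny, nx] == TREE:
                    grid[ny, nx] = EMPTY
                    queue.append((ny, nx))
        fire_sizes.append(size)
    return grid, fire_sizes

rng = np.random.default_rng(0)
grid = np.zeros((64, 64), dtype=int)
fire_sizes = []
for _ in range(1000):
    grid, burned = step(grid, p_grow=0.05, f_lightning=0.001, rng=rng)
    fire_sizes.extend(burned)
```

Run long enough, the fire_sizes list develops heavy-tailed, power-law-like statistics, which is exactly the behaviour Malamud et al. (1998) compared against real wildfire data; the point above is that reproducing the statistic does not make the mechanism realistic.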

In a forthcoming paper with Dr. George Perry in a special issue of Perspectives in Plant Ecology, Evolution and Systematics, we discuss the lure of these ‘metaphor models’ and other issues regarding the approaches to spatial modelling of succession-disturbance dynamics in terrestrial ecological systems. I’ll keep you posted on the paper’s progress…


Stakeholder Model Assessment

This last week I have been undertaking the final piece of fieldwork for my PhD thesis in my study area, EU Special Protection Area number 56, ‘Encinares del río Alberche y Cofio’ (SPA56). This fieldwork, which I have been terming ‘Stakeholder Model Assessment’, involved interviews with several actors and stakeholders within the study area to assess the credibility and potential utility of my integrated socio-ecological simulation model of land use and cover change (LUCC).

Specifically, two questions guided these meetings:

  1. from a technical/modelling standpoint, how can we utilise local stakeholder knowledge and understandings of LUCC better in our simulation models?
  2. if we understand that often science does not move fast enough to deal with pressing environmental and political problems, how can we use socio-ecological models (incorporating local knowledge) to speed the process of decision-making and consensus building in the face of incomplete knowledge about a system?

The simulation model I have developed is a tangible manifestation of my ‘mental model’ (i.e. understanding) of processes of change in SPA56. This research aimed to develop an understanding of how well this manifestation corresponds with a (hypothetical) simulation model that would be produced using the ‘mental model’ of the stakeholder.

I embarked on this fieldtrip with a certain amount of trepidation, as I was laying myself and my model open to a degree of criticism from a source of knowledge not often tapped. That is, whilst LUCC models developed in an academic setting are routinely exposed to academic peer review, they are infrequently reviewed by the actors they attempt to represent. I was quite prepared to be told that the results and model structure I had developed were unrealistic or largely irrelevant.

I was pleasantly surprised to be proven wrong, as much of the feedback received was positive, both about model results (maps of land cover 25 years hence, i.e. for 2026) and model structure (i.e. model rules and assumptions). I’m just about to start writing this all up for my thesis, but the findings can be outlined as follows:

1. Interviewees were very accepting of the results, but focused on the individual scenarios that fitted most closely with their own projections of future change. They did not seem to have any problems with model output for the scenario matching their perception of future change, suggesting the model accurately reflects the change expected under that scenario. Spatial criticism of the results was rather weak, however, and their analysis rather broad.

2. Interviewees confirmed model rules and assumptions, with some caveats:

  1. Distances between fields and farmsteads were not deemed important for farmer decision-making
  2. Some interviewees suggested land tenure was not important, others that size of land parcels would dictate what land was changed to
  3. Agent types (i.e. ‘Traditional’ vs. ‘Commercial’ farmers) were deemed sensible. Greater variation is present in SPA56 farmer behaviour but generally this dichotomy is accurate

3. All interviewees commented that the model was lacking consideration of urban development and change (i.e. expansion)

4. Individual agricultural actors (i.e. farmers) were generally apathetic towards the model (linked, I suggest, to their generally pessimistic view of the future state of agriculture in the study area). Higher-level, institutional stakeholders (i.e. local development officials and planners) were much more interested in potential uses of the model for planning.

5. Interviews suggest the model is realistic/credible enough to act as a focus around which discussion about future change can proceed (‘model as mediator’ or ‘model as discussant’). Interview discussion followed the presentation of model assumptions and allowed the stakeholder to reflect on the processes causing change.

6. Interviewees’ ‘mental models’ were little influenced by the process of model assessment and discussion, for two main reasons:

  1. they are apathetic towards the model and sceptical about what it can do for them
  2. presentation of model structure (and the model structure itself) is not as detailed or nuanced as their understanding of processes and change.

7. Related to point six, some interviewees were positive about the model because it confirmed their understanding of future change. That is, they envisaged opportunities to use the model as a rhetorical tool to further their interests. [More thoughts on this important point to follow soon…]

All-in-all a useful and interesting trip. These are my initial thoughts, more in-depth analysis and reflection is ongoing – I’ll post something more permanent on a page on my main website in the near future.


Fire-Fighting Strategy Software

Some guys at the University of Granada, Spain, have developed software for managing wildfire-fighting efforts. SIADEX is designed to speed decision-making for resource allocation, as an article in New Scientist describes:

“Computerised maps are already used by people in charge of managing the fire-fighting effort. These maps are used to plan which areas to focus on and which resources to deploy, such as fire engines, planes and helicopters.

But working out the details of such a plan involves coordinating thousands of people, hundreds of vehicles and many other resources. SIADEX is able to help by rapidly weighing up different variables.

For example, it calculates which fire engines could reach an area first, where aircraft could be used, and even how to organise the shift patterns of individual fire fighters. It then very quickly produces several different detailed plans. … One plan might be the cheapest, another the fastest, and a third the least complicated.”

I wonder how Norman Maclean would have felt about this approach to fire-fighting. I imagine, like me, he’d be interested in how this new tool can be used to aid and protect wildland fire-fighters, but given the unpredictability of fire behaviour (in the light of current understanding) would still maintain that human experience, gained over many years dealing with unique situations, will be invaluable in managing fire-fighters and their resources. As with much computer software, this should remain a tool to aid human decision-making, not replace it.


Naveh’s Holistic Landscape Ecology

(or “One of the reasons I’ve ended up doing what I’m doing“)

I don’t know if he was the first to come up with the term, but I first read about holistic landscape ecology in a couple of papers by Prof. Zev Naveh (in 2001, during my third-year undergrad course at King’s, ‘Landscape Ecology’, run by Dr. George Perry). Whilst reading today I came across some old notes I made from one of those papers (not terribly critical, as you can see!?). Distinguished Professors of a Certain Age are allowed licence to run riot with their accumulated wisdom. I’m not being facetious – they can write bigger ‘blue skies’, ‘call to arms’ pieces than other (more lowly) academics.

These are the two papers that really got me interested anyway (as well as my Dissertation; finally, as a 3rd year undergrad!?). I think I thought something along the lines of, “there are problems here that we should be thinking about now and this guy is suggesting a paradigm of how we might start approaching them scientifically“. I think they’re one of the reasons I started an MSc (“I can’t stop now I’ve only just found this stuff“), and then later continued onto this ‘ere PhD (“this is interesting – I want to keep going“).

Later I got to these questions:

  • What sort of scientific tools and methods will we need to address problems that we have in our socio-environmental systems now?
  • How do we integrate tools and methods from different scientific disciplines? (i.e. how do we really become ‘inter-disciplinary’?)
  • What sort of science will this be? Normal? Post-Normal? Something else?

It could take a while to answer these – but it doesn’t seem like we’ve got that long. We’ll have to work them out as we go along I think…



Applications of Complex Systems to Social Sciences

I’ve recently returned from the GIACS summer school in Poland: Applications of Complex Systems to Social Sciences. Whilst not a social scientist, I am interested in the incorporation of aspects of human/social behaviour into models of the physical environment and its change. I thought this summer school might be an opportunity to get a glimpse at what the future of modelling these systems might be, and how others are approaching investigation of social phenomena.

The set of lecturers was composed of a Psychologist, three Physicists (P1, P2, and P3), a Geographer, and an Economist. I’m sure plenty of ‘real social scientists’ wouldn’t be too happy with what some of these modellers are doing with their differential equations, cellular automata, agent-based models and network theory. One of the students I spoke to (a social psychologist) complained that these guys were modelling social systems but not humans; another (a computer scientist interested in robotics) suggested the models were too ‘reactive’ rather than ‘proactive’. Pertinent comments, I think, and ones that made me realise that really understanding what was going on would require me to take a step back and look at the broader modelling panorama.

Some of the toughest comments from the school attendees were levelled at the Geographer’s model (or “virtual Geography”) that attempts to capture the patterns of population growth observed for European cities, using a mechanistic approach based on the representation of economic processes. The main criticism was that the large parameter space of this model (i.e. a large number of interacting parameters) makes it very difficult to analyse, interpret and understand. Such criticisms were certainly valid and have previously been made by other modellers of geographic systems. However, the same criticisms could not be levelled at the physicists’ (and psychologist’s) models, simply because their models have far fewer parameters.

And this, I think, is one of the problems that the social psychologist and computer scientist alluded to: the majority of the models arising from the techniques of physics (and mathematics) are generally interested in the properties of the system as a whole, not in individual interactions and components. One or two key state variables (a state variable is a variable used to describe the state of the system) are reported and analysed. But actually, there’s nothing wrong with this approach, given the nature of their models, based as they are on very simple assumptions and largely homogeneous in the agents, actors and interactions they consider.

Such an approach didn’t sit well with the social psychologist because the agents being modelled are supposed to be representative of humans, and humans are individuals that make decisions based on their individual preferences and understandings. The computer scientist didn’t want to know about broad decision-making strategies – he wants his robot to be able to make the right decision in individual, specific situations (i.e. move left and survive, not right and fall off a cliff). Understanding the broad system properties of homogeneous agents and interactions is no good to these guys.

It’s also why the Geographer’s model stood out from the rest – it actually tries to recreate European urban development (or more specifically, to “simulate the emergence of a system of cities functionally differentiated from initial configurations of settlements and resources, development parameters and interaction rules”). It’s a model that attempts to understand the system within its context. [One other model that did address a specific system within its context was the Economist’s “virtual archaeology” model of the population dynamics of the lost Kayenta Anasazi civilisation in Arizona. This model also has a large parameter space, but performed well largely (I’d suggest) because it was driven by such good data for parameterisation (though some parameter tuning was clearly needed).]

So no, there is nothing wrong with an approach that considers homogeneous agents, actors and interactions with simple rules. It’s just that these models are more divorced from ‘reality’ – they are looking at the essence of the system properties that arise from the simplest of starting conditions. What is really happening is that systems which were not previously modelled, because of the problems of quantitatively representing systems of ‘middle numbers’ (i.e. systems with too few elements and interactions for statistical mechanics to be useful, but too many for simple modelling and analysis), are now being broken down for analysis. The attitude is “we have to start somewhere, so let’s start at the bottom with the simplest cases and work our way up”. Such an approach has recently been suggested for the advancement of social science as a whole.

This means our “virtual Geographies” and “virtual Landscapes” will still be hampered by huge parameter spaces for now. But what if we try to integrate simple agent-based models of real systems into larger models of systems that we know to be more homogeneous (‘predictable’?) in their behaviour? This is the problem I have been wrestling with in my landscape model: how do I integrate a model of human decision-making with a model of vegetation dynamics and wildfire? From the brief discussion I’ve presented here (and some other thinking), I think the most appropriate approach is to treat the agent-based decision-making model like the physicists do – examine the system properties that emerge from the individual interactions of agents. In my case, I can run the model for characteristic parameter sets, examine the composition (i.e. “how much?”) and configuration (i.e. “how spatially oriented?”) of the land cover that emerges, and use this to constrain the vegetation dynamics model.
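To make the idea concrete, here is a minimal sketch (the metric choices are mine for illustration, not those used in the thesis) of summarising an agent-generated land cover grid by its composition and a simple configuration index, summaries which could then constrain a vegetation dynamics model:

```python
import numpy as np

def composition(grid):
    """Proportion of the landscape in each land cover class ('how much?')."""
    classes, counts = np.unique(grid, return_counts=True)
    return dict(zip(classes.tolist(), (counts / grid.size).tolist()))

def adjacency_index(grid):
    """Fraction of horizontally/vertically adjacent cell pairs sharing a
    land cover class: a crude configuration ('how spatially oriented?')
    measure, 1.0 for a uniform landscape, lower as fragmentation grows."""
    same = (grid[:, :-1] == grid[:, 1:]).sum() + (grid[:-1, :] == grid[1:, :]).sum()
    pairs = grid[:, :-1].size + grid[:-1, :].size
    return same / pairs

# Hypothetical 3-class land cover map, standing in for the output of
# one agent-based model run with a characteristic parameter set
rng = np.random.default_rng(3)
landcover = rng.integers(0, 3, size=(50, 50))
props = composition(landcover)
adj = adjacency_index(landcover)
```

Comparing these summaries across characteristic parameter sets is one way to pass only the emergent, system-level signal of agent decision-making on to the vegetation model, rather than coupling the two models agent by agent.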

So, the summer school was very interesting, I got to meet many people from very different academic backgrounds (physicists, mathematicians, computer scientists, cognitive scientists, psychologists, sociologists, economists…) and discuss how they approach their problems. I think this has given me a broader understanding of the types and uses of models available for studying complex systems. Hopefully I’ll be able to use some of this understanding of different techniques in the future to good effect when studying the interaction between social and environmental systems.

The complex systems approach does offer many possibilities for the investigation of social systems. However, for the study of humans and society this sort of modelling will only go so far. We’ll still need our sociologists, ‘human’ geographers, and the like to study the qualitative aspects of these systems, their components and interactions. After all, real people don’t like being labelled or pigeon-holed.